
Building Ray's Lab

Why I built this site, why I didn't write a single line of code, and what I learned from handing everything to AI agents.

By Ray
Published April 11, 2026
Reading time: 4 min read
Type: Essay

Key takeaway: A site born out of low visibility, built entirely by AI agents as a deliberate experiment in a new way of working.


In a lab or a team, people usually only see the final output: the shipped feature, the finished system. They rarely see the thinking behind it: the approaches you tested and dropped, the problems you debugged at 3 AM, the small decisions that shaped the architecture. Unless you go out of your way to show that process, it stays invisible.

That was my situation. I had been building projects (a Japanese learning app, AntiCopilot, a collection of smaller AI tools), but unless someone was directly involved, there was no easy way for them to see what I was actually doing or how I think about problems.

I needed a place to leave traces. Not polished tutorials or groundbreaking research, just honest notes from the process: what I tried, what broke, what I learned along the way. A blog felt like the most natural way to start.

The experiment: not writing a single line of code

Here is something that might seem contradictory. I spent months building AntiCopilot, a tool designed to stop AI from just handing students the answer, and then I built this entire blog without writing any code myself. Every line was generated by AI agents: Claude Code, Codex, Gemini CLI, among others.

This wasn’t laziness. It was a deliberate constraint.

I’m used to working with AI. In past projects, I would prompt for a rough draft, then step in to reshape, refactor, and fix things by hand. That workflow is comfortable, and it works. But it also means I’ve never fully experienced what it’s like to operate at the agent level — to define the architecture, describe the intent, and let the tools handle execution end to end.

As someone who calls himself an “AI-powered tool builder,” I felt like I had a blind spot. I understood AI as an assistant, but not as something I could delegate to more completely. The blog was a good opportunity to test that boundary. It’s a well-scoped project — a static site with clear structure — so the risk was low and the learning potential was high.

What actually happened

I’ll be honest: I was nervous. When I first installed Claude Code, I didn’t jump straight into prompting. I sat down and read through the main pages of the documentation first. Handing full development control to what is essentially a black box felt uncomfortable, and I wanted to at least understand the tool before trusting it.

The early results were solid. The base site (Astro, Tailwind, GitHub Pages deployment) took a little over an hour. Applying a personalized theme and refining the layout took about one more. That speed didn’t come from strong coding ability on my part. It came from knowing what I wanted and being able to describe it clearly enough for the tools to execute.
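For readers curious what that base setup involves, deploying an Astro site to GitHub Pages largely comes down to two configuration values. Here is a minimal sketch; the `site` and `base` values and the Tailwind integration shown are illustrative assumptions, not this blog's actual settings:

```javascript
// astro.config.mjs — minimal sketch of an Astro + Tailwind setup
// targeting GitHub Pages. Values below are placeholders.
import { defineConfig } from 'astro/config';
import tailwind from '@astrojs/tailwind';

export default defineConfig({
  // Full URL where the site is served (a GitHub Pages user site).
  site: 'https://username.github.io',
  // Path prefix, needed only when deploying to a project page
  // (e.g. username.github.io/repo-name) rather than a user page.
  base: '/repo-name',
  integrations: [tailwind()],
});
```

With `site` and `base` set, Astro generates correct absolute links for the Pages URL, and a standard GitHub Actions workflow can build and publish the static output.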

AI is not perfect, of course. During the process, I ran into the kind of small issues that are familiar to any frontend developer: a navbar with the wrong z-index, components disappearing because of scroll animation conflicts, minor layout inconsistencies across breakpoints. Most of these were things I knew how to fix myself.

In past projects, my instinct would have been to jump into the code and fix them by hand — it always felt faster that way. But this time, I forced myself to hold back. Instead of fixing each issue individually as it came up, I started batching similar problems and describing them together in a single prompt. The results surprised me. Not only did the version history stay much cleaner, but the fixes were often more consistent than what I would have produced by addressing them one at a time.

That shift — from “I’ll just fix this myself” to “let me describe the pattern and let the tool handle it” — turned out to be the most valuable thing I learned. When I stopped spending mental energy on implementation details, I had more room to think about presentation, structure, and what the site should actually communicate. The cognitive load moved from how to build to what to build, and the value I could create in the same amount of time went up noticeably.

Leaving traces

The biggest takeaway from this experiment is not about AI tooling. It’s about a shift in how I think about development. Knowing what to build, being able to articulate it precisely, and choosing the right tool for the job matter more than whether I typed the code myself.

I plan to keep using this space to document what I’m learning and building, especially around AI. Some posts will be project write-ups like the AntiCopilot essay. Some will be shorter notes on things I tried or problems I ran into. The format will vary, but the habit of writing things down is what I want to build.

If nothing else, having a record makes it easier to look back and see how my thinking has changed — and maybe someone passing through will find something useful along the way.