How I Ship

Someone told me my resume looks fake.

Not the content — the volume. Four active projects, shipped apps on the App Store, an open-source contribution to a framework with 5M weekly downloads, a GPU compute shader pipeline for physarum neural architecture extraction — all in the span of a year, as one person. The reasonable conclusion is that I’m either lying or letting AI write everything while I take credit.

I’m not doing either. But I understand why it looks that way, and I think the perception gap is worth talking about because it’s going to affect a lot of people in the next few years.

What the AI does

I use Claude Code as a development collaborator. It’s running in my terminal, it reads my codebase, it writes code. The Co-Authored-By: Claude tag is in my git history because I think transparency about this is more important than optics.

Here’s what Claude is good at: scaffolding. Boilerplate. Writing the 15th WebGPU bind group layout that follows the same pattern as the previous 14. Generating test fixtures. Implementing a function when I’ve already described the interface. Parallelizing independent tasks — I dispatch three agents to write three files simultaneously and review the output. The mechanical work that used to take a day takes an hour.

What I do

Architecture. Taste. The decisions that determine whether a system works or doesn’t.

When I diagnosed the Safari WebGL context loss bug in Astro’s view transition system, I spent two days in Safari’s Web Inspector understanding why replaceWith() on a body element causes a canvas to lose its rendering context. The fix was 114 lines. Claude helped write them. But Claude didn’t know the bug existed — I found it because my own portfolio site’s physarum simulation was breaking on page navigation, and I traced it through Astro’s swap implementation to the DOM detachment cascade.
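For context, this is roughly how that kind of loss surfaces: a canvas fires a "webglcontextlost" event when its backing context goes away, and listening for it is how you confirm the context itself, not the render loop, is what broke. A minimal sketch; the handler names and restore strategy are illustrative, not my site's actual code:

```javascript
// Attach standard WebGL context-loss handlers to a canvas.
// Per the WebGL spec, calling preventDefault() on "webglcontextlost"
// signals that we intend to restore the context ourselves, which allows
// "webglcontextrestored" to fire later.
function attachContextLossHandlers(canvas, onLost, onRestored) {
  canvas.addEventListener("webglcontextlost", (event) => {
    event.preventDefault();
    onLost(); // pause the render loop; all GPU resources are now invalid
  });
  canvas.addEventListener("webglcontextrestored", () => {
    onRestored(); // re-create shaders, buffers, and textures here
  });
}
```

In a browser you would pass `document.querySelector("canvas")` and wire `onLost`/`onRestored` to the simulation's pause and re-init paths.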

The Astro maintainer (martrapp) reviewed the PR, suggested using moveBefore() with feature detection, and approved it. That interaction — reading code review feedback, understanding the nuance, implementing the suggestion correctly — is engineering judgment. It’s not automatable.
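The feature-detection pattern the suggestion points at can be sketched like this: prefer moveBefore() where the browser supports it, since it relocates a node atomically and (where implemented) preserves live state such as a canvas's rendering context, and fall back to an ordinary replaceWith() otherwise. This is a simplified illustration of the pattern, not the actual PR code; all names here are hypothetical:

```javascript
// Swap oldEl for newEl inside parent, preferring the atomic moveBefore()
// API when available. Returns which path was taken.
function swapWithFeatureDetection(parent, oldEl, newEl) {
  if (typeof parent.moveBefore === "function") {
    // Atomic move: newEl is relocated without a remove/insert cycle,
    // so stateful descendants (canvas, iframe) can keep their state.
    parent.moveBefore(newEl, oldEl);
    oldEl.remove();
    return "moveBefore";
  }
  // Fallback: replaceWith() detaches oldEl, which is exactly the
  // DOM detachment that resets a canvas's rendering context.
  oldEl.replaceWith(newEl);
  return "replaceWith";
}
```

Checking `typeof parent.moveBefore === "function"` rather than a user-agent string keeps the fix robust as browser support lands incrementally.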

When I decided to try extracting neural network topologies from physarum simulations, I had a hypothesis: multi-species organisms carving out territories might discover modular network architectures that gradient-based methods miss. The first benchmark failed — physarum-grown topologies performed worse than random sparse networks of the same size. The second benchmark, with a synthetic modular graph, showed physarum winning on classification but losing on regression. The third attempt, with real agent-grown topologies, lost again because the organisms weren’t pruning aggressively enough.

That iteration — hypothesis, test, fail, understand why, adjust, retest — is the actual work. The shaders, the PyTorch module builder, the readback pipeline — those are implementation. Important, but not the hard part.

The velocity equation

I shipped my portfolio site in a weekend. I shipped ideAI with on-device ML inference in months, not years. This isn’t because AI makes engineering easy. It’s because AI removes the friction between deciding what to build and having it built. The bottleneck was never typing speed. It was decision surface — the number of micro-decisions per unit of output.

When the implementation cost of an idea drops toward zero, the quality of your ideas becomes the rate limiter. Architecture taste, debugging instinct, research direction, knowing when a benchmark result means “the approach is wrong” versus “the parameters need tuning” — those are the skills that compound.

Why I’m writing this

Because the gap between “what one person can ship with AI” and “what looks plausible for one person to ship” is going to keep widening. And the people who figure out how to work this way will need to explain themselves, repeatedly, to people who haven’t figured it out yet.

I’d rather explain it once, honestly, than let the work speak for itself and have it whisper “fake.”