What changes when coding with AI
Notes from a true believer in AI-assisted coding
I’ve been noticing a few shifts in how I work when building with AI agents, both challenges and opportunities. Viewing coding with AI merely as a way to save time is a limiting mindset. Instead, the whole mental model of what it means to code, and how to do it, changes. Planning, discovery, and refactoring are all different. These aren’t just incremental improvements but fundamental leaps. These are a few notes that might help open up a different mindset.
Building as Discovery
The traditional way has always been: plan it carefully, then build it. This changes with AI.
When you can actually build three different versions in the time it used to take to argue about one, you stop theorizing and start exploring. Testing UI flows and different architectural patterns can now be done much faster and, more importantly, in parallel. This lets you see and feel what’s right instead of guessing and theorizing. I’m increasingly using AI-assisted building as a discovery tool: producing artifacts you can touch, feel, and investigate to answer questions that are almost impossible to answer in your head or in a meeting. What does this actually feel like? Can we architect it this way? Will this scale? Stop debating, start building, and find out.
When you’re discovering, you don’t need clean code. You don’t need security best practices. You only need working prototypes that teach you something. The AI can generate messy exploration code. That’s literally its job in this phase. You’re not building to ship - you’re building to learn.
Restart Often
Here’s the thing that feels weird at first: once you’ve found something that works through rapid, parallel discovery, you surprisingly often want to throw it away and start over.
This is counterintuitive. You found the solution, so why not just polish it? AI-generated code is not (at least not yet) known for being the cleanest, and complexity typically grows fast if you don’t steer it clearly. But you’ve learned so much during the discovery phase that you can write a much clearer description for the AI agent, and because generating code is now cheap, you often benefit from just starting over instead of trying to fix the mess created during discovery.
My experience so far is that this often beats trying to refactor AI-generated exploration code into production quality. I don’t claim to be 100% successful at overcoming the sunk cost fallacy, though. There’s also a psychological benefit to having this restart planned even before discovery starts: if you know you’re restarting anyway, you stop trying to make early prototypes perfect. You can be messy. Try wild ideas. Break things. That freedom actually makes discovery faster, because the goal is to teach you something, not to build maintainable code.
The Focus Problem with Long-Running Agents
I’ve noticed something uncomfortable happening as I give agents bigger tasks. When the agent is churning on a complex feature, even for just a few minutes, it’s easy to lose focus and start context-switching. Deep coding used to mean staying in flow for hours straight. The single-threadedness of human attention forced you to focus. Now, the agent is doing the work while you sit there.
But here’s what I’m learning: that breaks everything. The agent generates thousands of lines of code while you’re not paying attention, and suddenly you’ve lost both your mental focus and your grip on the codebase. This waiting time can instead be used to counter that comprehension debt (see Arvid Kahl on the concept of comprehension debt).
I’ve found it useful to try to follow the agent working. You get at least two things out of this. First, you understand the code better and improve at evaluating the agent’s output. Second, you learn stuff by seeing how agents reason while producing the code and sometimes notice perspectives you wouldn’t have discovered alone.
Use AI to Learn
As much as I’m extremely bullish on the impact of AI on software and many other areas, I’m also not naive about the risk of brain rot from outsourcing thinking to machines and the potential negative impact this will have on learning. But for anyone with high agency, the learning opportunities are larger than before if you use AI productively.
You can now use AI agents to build things you couldn’t build before. You can also have the agent explain its output and reasoning by asking it questions. Things that would have taken months to learn and execute can now be done in a few hours. This is not merely a time-saver; it enables things that previously weren’t feasible. The right move here is: don’t just let the agent generate and move on. Engage with it. Read what it produces. Ask it to explain decisions. Push back on approaches that seem wrong. Treat it like a teacher, not just a productivity tool.
Yes, you’ll ship faster. But more importantly, you’ll develop capabilities you didn’t have before. That compounds. Six months of this and you know things you wouldn’t have time to learn any other way.
General takeaway
These aren’t rules. They’re just patterns I’m noticing as I use AI agents to build software. The specifics will change: better agents, new dev tools, new interfaces for interacting with models, etc. But a general takeaway is that something fundamental has shifted. We’re not just coding faster. The whole way you think about design, iteration, focus, and learning is different. The people who dare to take the leap, adapt, and try to figure this out, instead of just using agents to code a few percent faster, will be the ones getting real value from what’s coming. What are you noticing in your own work? What’s actually working, and what feels broken? Curious to hear what patterns other people are finding, regardless of previous experience.
A final message: don’t be afraid to get your hands dirty and make your ideas and dreams real. Even though I had written code pre-AI, I was far from able to build the stuff I’m building now. And even if you haven’t written a single line: start building and use AI as a learning partner.
