Thoughts on AI / Vibe Coding
2025-08-10
I've had a lot of conversations with friends and coworkers about AI/LLMs, specifically in the context of software development, and I thought I'd share my current thoughts here - as of summer 2025. Naturally, these could change as the technology evolves.
Background: I'd consider myself a moderately advanced LLM user. I've been using AI tools regularly for the past several years as part of my day-to-day workflow. I do my best to keep up with new models and tools as they come out. I've used them with a variety of technologies across the entire stack.
In short, they can save me a significant amount of time on certain tasks, but can also lead me astray and waste it on others. What I really want to talk about though, is a particular hidden cost that I rarely see addressed in conversations online.
There are plenty of ways to break down what makes one developer "better" than another, but one constant is that institutional technical knowledge is extremely valuable. Developers do their best to reduce that dependence through various techniques (conventions, frameworks, documentation, etc.) - but most of the time, a developer with a deep understanding of a sufficiently complex codebase will outperform one without it, regardless of general experience or skill.
When you use AI to code extensively (aka "vibe coding"), you aren't building that deep understanding. This creates a dependency loop: you need to keep using the AI to understand and fix issues in code that it wrote, and sometimes it can't. At that point, your only option is to partially eschew the advantages it was providing and dive into the code yourself, which may very well take longer than if you had written it yourself.
To me, the hidden cost is that you can very easily create a meta-level leaky abstraction. Leaky abstractions are dangerous because you eventually have to understand what's underneath. With LLMs, you're creating a giant, constantly changing abstraction layer between you and your own code. When it "leaks" - when the AI is wrong, ambiguous, or unavailable - you can't fall back on your own understanding, because you never built it in the first place.
I think there are ways to mitigate this, but they require intentional habits. This is what I try to do:
- Only use AI for small, surgical changes.
- Only use AI for changes that you fully understand ahead of time and could write yourself.
- Deeply review all code changes the AI makes. Your goal should be to mentally integrate it into your understanding of the codebase.
- When using it for research or exploration (e.g. around an architectural decision), develop an opinion first and weigh it against the other options - don't offload your thinking completely. And remember to externally validate whatever you land on.
It's hard to anticipate the future, but I doubt these problems will go away even with significant advancements - they might even get worse. I also imagine they're fully applicable to other fields where AI/LLM usage is high.