LLMs are remarkably good at certain kinds of programming tasks and genuinely bad at others. The difference usually comes down to how much hidden context the task requires. Jonathan Blow's distinction between "shallow state" and "deep state" is the clearest framework I've found for thinking about this - and it has practical implications for how you design systems that AI can actually work in effectively.

Shallow State

Shallow state is the context you can see by reading a small slice of code. A function's signature, its local variables, the immediate logic. When an LLM is working in shallow state territory - fixing a type error, writing a utility function, refactoring a self-contained method - it performs well. The necessary context fits inside the prompt, the patterns are well-represented in training data, and the task is largely mechanical.

This is where AI-assisted coding already delivers real value.

Deep State

Deep state is the accumulated weight of every decision your codebase has absorbed over its lifetime.

Early on, you make foundational choices. How you model users and tenants. What your auth token structure looks like. How services pass data to each other. Whether prices are stored as cents or dollars, where timezone conversions happen. These decisions are easy to make and easy to understand in isolation.

But the codebase keeps growing, and every new line of code has to cooperate with all of those earlier decisions simultaneously. You add a caching layer that assumes a certain shape for user objects. You add a billing service that interprets prices differently than the storefront. You add a webhook system that serializes events in a format that assumes things about your tenant model. None of this is written down as a single rule - it's a spiderweb of implicit constraints that emerged over time, distributed across thousands of files.
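To make "implicit constraint" concrete, here is a hypothetical sketch (every function and field name is invented for illustration) of two modules that silently disagree about whether a price is stored as integer cents or as dollars - exactly the kind of conflict that no single file reveals:

```python
# Hypothetical illustration: two modules that silently disagree about an
# implicit constraint -- whether "price" means integer cents or dollars.

def storefront_display_price(product: dict) -> str:
    # The storefront team decided years ago that "price" is stored in cents.
    return f"${product['price'] / 100:.2f}"

def billing_invoice_total(line_items: list[dict]) -> float:
    # The billing service, written later, assumes "price" is already dollars.
    # Nothing in either function's signature reveals the conflict.
    return sum(item["price"] * item["qty"] for item in line_items)

product = {"price": 1999, "qty": 1}       # 1999 cents = $19.99
print(storefront_display_price(product))   # "$19.99" -- correct
print(billing_invoice_total([product]))    # 1999.0  -- a $1,999 invoice
```

Both functions are locally correct; the bug only exists in the relationship between them.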

This is what makes deep state so difficult. The "state" isn't just what's in memory at runtime. It's the full web of architectural choices that constrain what correct code even looks like. A new feature isn't just "write code that does X" - it's "write code that does X while honoring hundreds of constraints that exist nowhere explicitly but everywhere implicitly."

LLMs can't hold this. They're language models, not state models. They can process your code file by file, but they can't internalize the relationships between those files, the history behind the decisions, or the constraints that only reveal themselves when you violate them. Give an LLM enough context and it might track five constraints, ten, maybe twenty - but at some point it starts silently dropping the older ones. It generates locally correct code that breaks things three modules away, and it does so with confidence.

Making Your Codebase AI-Friendly

The fix isn't just waiting for smarter models. It's designing systems where deep state has fewer places to hide.

1. Domain-Informed Module Boundaries

Break your application into modules with clear responsibilities and explicit public interfaces. Modules communicate only through these interfaces - no reaching into another module's internals.

The key insight is where you draw these lines. Align them with your business domain (what DDD calls "Bounded Contexts") rather than arbitrary technical seams. When a Policy in code means the same thing as a "policy" in your product requirements, LLMs can interpret intent, not just syntax. They reason over language - so code that speaks clearly in domain terms is inherently more legible to them.

This constrains the blast radius of any single change and gives an LLM a focused, coherent context window to work within.
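As a sketch of what a domain-aligned public interface might look like - `Policy` and `renew_policy` are invented names, not a prescription - the idea is that the module exposes a small surface whose vocabulary matches the product spec:

```python
from dataclasses import dataclass, replace

# Hypothetical sketch: the module's public surface mirrors the business
# domain -- "Policy" means the same thing here as in the product requirements.

@dataclass(frozen=True)
class Policy:
    policy_id: str
    holder: str
    active: bool

def renew_policy(policy: Policy) -> Policy:
    """Public entry point: other modules call this, never touch internals."""
    return replace(policy, active=True)

# Explicit public surface -- everything else in this module is internal.
__all__ = ["Policy", "renew_policy"]
```

An LLM (or a new teammate) reading a call site that says `renew_policy(policy)` can map it straight onto the requirement "policies can be renewed" without spelunking through internals.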

2. Explicit State Management & Immutability

Make state changes visible and controlled. Instead of mutating objects in place, create new state with each transition - patterns like Redux, Flux, or plain functional programming.

When state can only change through explicit, traceable paths, deep state loses its favorite hiding spot: the mysterious mutation that happened somewhere upstream and that no one can find. It also makes a system's behavior far more predictable for an LLM, even without the ability to simulate execution.
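A minimal sketch of the pattern in plain Python, assuming a hypothetical `CartState` - each transition returns a new value rather than mutating the old one:

```python
from dataclasses import dataclass, replace

# Minimal sketch of explicit, immutable state transitions (names hypothetical).

@dataclass(frozen=True)
class CartState:
    items: tuple[str, ...] = ()
    total_cents: int = 0

def add_item(state: CartState, sku: str, price_cents: int) -> CartState:
    # Each transition returns a *new* state; the old one is never mutated,
    # so every version of the state remains inspectable after the fact.
    return replace(state,
                   items=state.items + (sku,),
                   total_cents=state.total_cents + price_cents)

empty = CartState()
one = add_item(empty, "sku-42", 1999)
assert empty.items == ()        # the original state is untouched
assert one.total_cents == 1999
```

Because the old state still exists, "what changed and when" is answerable by looking at values, not by reconstructing a history of in-place mutations.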

3. Documentation That Explains "Why"

Inline comments aren't enough. Maintain high-level docs that capture architectural decisions, module responsibilities, key data flows, and the rationale behind non-obvious design choices.

LLMs can't infer deep state from code, but they can process well-written documentation. A sentence like "we chose this coordinate system because the rendering pipeline assumes left-handed coordinates" is exactly the kind of implicit constraint that creates deep state - and exactly the kind of context an LLM can use if you write it down.
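One way to make that kind of "why" durable is to state the constraint where the code lives. A hypothetical sketch (the module path and the left-handed-coordinates rationale are borrowed from the example above, not from any real codebase):

```python
"""geometry/transforms.py (hypothetical)

Why this module works the way it does:

- The coordinate system is LEFT-handed because the rendering pipeline
  assumes left-handed coordinates. Flipping handedness here will
  silently mirror every rendered frame.
- All angles are radians internally; degrees appear only at the API
  boundary, converted exactly once.
"""

import math

def degrees_to_radians(deg: float) -> float:
    # Degrees cross into radians here and nowhere else.
    return deg * math.pi / 180.0
```

A docstring like this costs a few minutes to write and puts the implicit constraint directly into the context window of anyone - human or model - who opens the file.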

4. Robust Automated Testing

A comprehensive test suite won't eliminate deep state directly, but it acts as a safety net when an LLM (or a human) makes a change that violates an implicit constraint. A well-written integration test surfaces the failure immediately instead of letting it metastasize into a mystery three sprints later.

Tests also serve as executable documentation - concrete examples of how the system is expected to behave that give an LLM actionable context beyond the source code itself.
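A sketch of what "pinning an implicit constraint" can look like - the cents-vs-dollars rule from earlier, with `checkout_total_cents` as an invented stand-in for a real checkout path:

```python
# Hypothetical sketch: a test that pins an implicit constraint in place --
# prices flow through the system as integer cents, never as dollar floats.

def checkout_total_cents(prices_cents: list[int]) -> int:
    # Contract: inputs and output are integer cents.
    return sum(prices_cents)

def test_prices_stay_integer_cents():
    total = checkout_total_cents([1999, 500])
    assert total == 2499
    # A float here means something upstream converted to dollars.
    assert isinstance(total, int)

test_prices_stay_integer_cents()
```

If an LLM "helpfully" rewrites the checkout path to work in dollars, this test fails in CI instead of in next quarter's invoices.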

5. Enhanced Observability

Logging, metrics, and distributed tracing make deep state visible. When something goes wrong, good observability lets you reconstruct the exact sequence of state changes that caused it - the kind of dynamic, runtime context that no amount of static code analysis can provide.
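A minimal, stdlib-only sketch of the idea (event names and fields are hypothetical): emit each state change as one structured JSON line, so the sequence of transitions can be reconstructed later by machines as well as humans:

```python
import json
import logging
import sys

# Minimal sketch of structured logging: every state change becomes one
# JSON line, so the exact sequence of transitions is reconstructable.

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
log = logging.getLogger("orders")

def record_transition(order_id: str, old: str, new: str) -> str:
    payload = json.dumps({"event": "state_change",
                          "order_id": order_id,
                          "from": old,
                          "to": new})
    log.info(payload)
    return payload

record_transition("ord-7", "pending", "paid")
# emits: {"event": "state_change", "order_id": "ord-7", "from": "pending", "to": "paid"}
```

Structured events like this are greppable, queryable, and - unlike free-text log lines - trivially parseable by tooling.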

Maybe someday the structured data from robust observability will feed into models that can reason about dynamic system state directly. We're not there yet, though.

Conclusion

The future of AI-assisted development isn't just smarter models - it's smarter system design. Every practice here (strong boundaries, explicit state, good docs, solid tests, deep observability) makes your codebase better for humans and more legible to the AI tools that are increasingly part of the workflow. Shrink the surface area where deep state can hide, and you give LLMs a fighting chance at the hard problems too.