The landscape of software development has fundamentally changed. Advanced AI tools—from GitHub Copilot’s tab completion to CLI agents like Claude Code, Cursor CLI, Google Gemini, OpenAI Codex, and OpenCode—have become indispensable parts of our workflow. Tab completion alone was revolutionary when GitHub Copilot introduced it. These tools are so useful that we absolutely should use them, but we must work around their limitations responsibly.
The core challenge is this: AI agents hallucinate, and they lack context unless it is explicitly provided. Despite these issues, the productivity gains are undeniable. The question isn’t whether to use AI—it’s how to use it responsibly.
Do not assume the AI agent knows what it is doing. You must review and understand all code that enters your production application.
The only exceptions are narrow: throwaway prototypes, internal experiments, and one-off scripts that will never ship to production.
But if code is going into your primary production application, you should be reviewing not just the code itself, but also the prompts used to create it. Understanding the AI’s instructions helps you understand its reasoning—and its potential blind spots.
To effectively review AI-generated code (and the prompts that created it), documentation must stay synchronized with your codebase. AI can facilitate this task.
When you produce a large volume of code that’s difficult for colleagues to review, update the documentation to explain what the code does. This serves two critical purposes: it gives human reviewers a concise summary to check the implementation against, and it gives AI agents the context they otherwise lack on future tasks.
A simple implementation: create a docs/ folder with Markdown files linked from an AGENTS.md file. Document the various aspects of your system with emphasis on information most useful to AI agents—essentially, all counter-intuitive aspects of your architecture.
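As one illustration, an AGENTS.md might look something like the sketch below. The file names, sections, and rules here are hypothetical, not a standard; the point is simply to give agents (and new humans) a map of where the counter-intuitive details live.

```markdown
# AGENTS.md

Start here before writing or changing code. The files below describe the parts
of the system that are NOT obvious from reading the code itself.

- docs/architecture.md  (services, data flow, and why the boundaries sit where they do)
- docs/data-model.md    (core entities and the invariants that must hold)
- docs/conventions.md   (naming, error handling, and testing conventions)
- docs/gotchas.md       (counter-intuitive decisions and the bugs that motivated them)

Rules of thumb:
- Prefer extending an existing module over creating a parallel one.
- Update the relevant doc in the same MR/PR as any behavior change.
```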
All of this implies a fundamental shift: software engineers are becoming software architects.
Instead of simply writing code, we’re instructing AI agents to write most of the actual code while remaining responsible for the architecture, the prompts, the review, and the quality of everything that ships.
We’re moving to a higher level of computer programming. Just as machine code gave way to C and C++, which gave way to JavaScript and Python, AI represents the next step up: system design using English as the primary language.
Human programmers are entering a new phase where our job is less about adding code and more about specifying technical architecture to AI agents, which then fill in the code within that architecture.
Consider a CRUD application: instead of hand-coding create, read, update, and delete methods, you specify “I need a CRUD app for menu items for a restaurant,” and the AI writes the straightforward, pattern-following implementation—so long as the architecture is known upfront.
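To make that concrete, here is the kind of pattern-following code an agent would typically produce. This is a minimal in-memory sketch; the MenuItem fields, class names, and storage choice are illustrative assumptions, not anything prescribed above.

```python
from dataclasses import dataclass
from itertools import count


@dataclass
class MenuItem:
    id: int
    name: str
    price_cents: int
    description: str = ""


class MenuItemStore:
    """In-memory CRUD store for a restaurant's menu items (illustrative only)."""

    def __init__(self) -> None:
        self._items: dict[int, MenuItem] = {}
        self._ids = count(1)

    def create(self, name: str, price_cents: int, description: str = "") -> MenuItem:
        item = MenuItem(next(self._ids), name, price_cents, description)
        self._items[item.id] = item
        return item

    def read(self, item_id: int) -> MenuItem:
        return self._items[item_id]  # raises KeyError if the item does not exist

    def update(self, item_id: int, **changes) -> MenuItem:
        item = self._items[item_id]
        for key, value in changes.items():
            setattr(item, key, value)
        return item

    def delete(self, item_id: int) -> None:
        del self._items[item_id]


if __name__ == "__main__":
    store = MenuItemStore()
    burger = store.create("Veggie Burger", 1250, "House-made patty")
    store.update(burger.id, price_cents=1350)
    print(store.read(burger.id))
    store.delete(burger.id)
```

The interesting work is everything this sketch leaves out: where the data actually lives, what the API boundary looks like, and how the service fits into the rest of the system. That architecture is still yours to own.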
A new problem emerges: junior developers who use AI tools without fully understanding their code.
We should embrace AI use, but we must not embrace AI as a replacement for the human mind. Humans still need to understand the code, or at least isolate it so security vulnerabilities or scalability issues can’t spread.
Junior developers whose entire programming experience involves AI can produce enormous quantities of code—more than humans can review, and often riddled with errors.
Solutions require diligence:
Culture of understanding: All engineers must strive to understand their own code. If you don’t understand code you’re adding to the repo, you’re making a mistake.
Documentation requirements: Attach human-readable docs to every MR/PR. This helps reviewers understand motivation and verify that implementation matches intent.
AI-assisted review: Check out the MR/PR and use your own AI tools to verify that the documentation matches the code; a minimal sketch of this workflow follows this list.
Allow non-AI development: Junior developers especially should be allowed—even encouraged—to NOT use AI when learning. They might look less productive, but they’re learning far more. Without experience writing code by hand, they’re unprepared to produce or review AI-generated code.
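To make the AI-assisted review item above concrete, here is a minimal sketch in Python: it collects the branch’s diff and the docs/ contents and assembles a review prompt you can pipe into whichever CLI agent you prefer. The base branch, docs path, and prompt wording are assumptions, and the actual agent invocation is deliberately left to you rather than guessing at any particular tool’s flags.

```python
import subprocess
from pathlib import Path

BASE_BRANCH = "origin/main"  # assumption: your repo's default branch
DOCS_DIR = Path("docs")      # assumption: the docs/ layout described earlier


def branch_diff(base: str = BASE_BRANCH) -> str:
    """Return the diff between the merge base and the checked-out MR/PR branch."""
    result = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


def build_review_prompt() -> str:
    """Assemble the documentation plus the diff into a single review prompt."""
    docs = "\n\n".join(p.read_text() for p in sorted(DOCS_DIR.glob("*.md")))
    return (
        "You are reviewing a merge request.\n"
        "1. Does the diff match what the documentation says the system does?\n"
        "2. List undocumented behavior changes, or doc claims the code contradicts.\n\n"
        f"--- DOCUMENTATION ---\n{docs}\n\n--- DIFF ---\n{branch_diff()}"
    )


if __name__ == "__main__":
    # Pipe this into the CLI agent of your choice, or paste it into a chat session.
    print(build_review_prompt())
```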
We all know LeetCode problems don’t reflect real-world software engineering. That’s even more true in the age of AI, where LeetCode happens to be exactly the sort of thing AI agents can implement almost instantly—they’re trained on volumes of LeetCode data.
It’s actually irrational to hand-code LeetCode solutions in the real world when AI can produce them faster, probably with fewer bugs, and help you test and document them with great speed.
A hybrid approach:
Traditional coding: “Code this LeetCode problem to make sure you know how to program”
Real-world with AI: “Code this real-world solution using all your favorite AI tools to see how you work on something bigger and more realistic”
For the AI portion, developers should explain as they go: what they are asking the agent to do, why they accept or reject its output, and how they verify the result actually works.
This ensures developers can actually code (so they know what they’re doing) while also demonstrating real-world productivity with AI tools—a huge force multiplier when used correctly.
Using AI tools is like moving up the programming language hierarchy to a higher level, where we’re programming more with English than with actual code.
The goal of software engineering is becoming less about banging out code and more about system design: data flow, dependencies, language choices, documentation, and architectural decisions rather than implementation details.
This presents challenges—reviewing massive volumes of code, interviewing candidates in this dynamic environment. But there are practical solutions: using AI to help document and review, updating interview practices to evaluate real-world AI-assisted development.
The future is bright. AI is evolving rapidly in a helpful direction. The new suite of CLI tools especially is truly revolutionary, lifting a massive weight of work from developers when used properly.
The trick is recognizing that fundamentally, we’re shifting from computer programmers to software architects. And that shift—if we approach it responsibly, with eyes open to both capabilities and limitations—will make us all more productive builders.