How I Use Claude Code to Do My Entire Job as a Software Engineer in 2025

December 30, 2025 · Ryan X. Charles

Claude Code writes 100% of my code. I spend my time overseeing AI agents now.

This was inconceivable one year ago. In 2024, AI coding assistants were helpful but unreliable—good for autocomplete, dangerous for anything more. In 2025, they write production code. All of it. I literally do not write the code that goes into my codebases anymore. If I write any code at all, it’s demonstration code that gets plugged into a prompt.

The Models Are Finally Good Enough

The latest coding models—Claude Opus 4.5 from Anthropic, GPT 5.2 from OpenAI—are genuinely good. They write high-quality code when given accurate context. This is the key insight: accurate context in, quality code out.

The reliability threshold has been crossed. A year ago, you couldn’t trust AI to write a function without introducing subtle bugs. Now you can trust it to implement entire features, refactor modules, add tests. Not blindly—you still review everything—but the output is good enough that reviewing is faster than writing.

2025 is the year this became real.

Why Rust and TypeScript Excel

Not all languages work equally well with AI coding agents. Rust and TypeScript stand out because their tooling forms an error-correction loop that prevents AI from writing nonsense.

Rust has the type system, Clippy, cargo fmt, and the compiler all working together. When AI writes Rust code, these tools immediately catch mistakes. The code either compiles and passes lints, or it doesn’t. There’s no middle ground where broken code slips through.

TypeScript achieves the same effect, but you have to consciously assemble it. Choose your formatter (Prettier, Biome). Choose your linter (ESLint, Biome). Choose your test runner. With all these in place, the cycle becomes: format, lint, build, test. Repeat. Each pass constrains the AI’s freedom to introduce garbage.

This is the secret: the tooling acts as guardrails. AI writes the code; the tools verify it. If you format, lint, build, and test repeatedly, the AI cannot drift far from correctness. The feedback loop catches mistakes before they compound.
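For a TypeScript project, the cycle above can be sketched as one guarded command chain. This is a minimal sketch, not a prescribed setup: it assumes Prettier, ESLint, and a `test` script are already configured in the project, as described earlier.

```shell
# Run the full verification loop; stop at the first failure.
npx prettier --check . \
  && npx eslint . \
  && npx tsc --noEmit \
  && npm test
```

Because each stage short-circuits on failure, the AI’s output only survives the loop if it formats, lints, type-checks, and passes tests.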

My Eight-Step Workflow

This is how I actually work now:

  1. Know what I’m going to do. Clarity before action. I don’t start typing prompts until I understand the change I want to make.

  2. Write down my thoughts. I write out what I’m trying to accomplish in plain English. This forces precision. Vague thoughts become vague prompts become vague code.

  3. AI agent turns thoughts into a step-by-step plan. I feed my written thoughts to the AI and ask it to produce a structured execution plan. This surfaces gaps in my thinking.

  4. Research to refine the plan. Sometimes I need to explore the codebase first. Sometimes I need to look something up on the internet. The AI helps with both. The plan gets refined based on what I learn.

  5. Review passes. I make one or more passes through the plan to verify it’s correct and coherent, catching mistakes before any code is written.

  6. AI executes, I watch. The AI agent works through the plan step by step. I watch its progress, occasionally stepping in to change course if something goes wrong.

  7. Document in docs/. I keep a record of why things were done the way they were done. Sometimes this is architectural decision records. Sometimes it’s a long-running checklist I work through across multiple sessions.

  8. Review final output. By this point, I’ve been following along the whole time. The final review confirms the plan was executed correctly.

The payoff: I almost never have to revert. The planning catches mistakes before they become code. The oversight catches mistakes as they happen. By the time the work is done, it’s actually done.

What I’m Not Doing Yet

There are obvious improvements I haven’t made yet:

100% test coverage. I don’t have it. Maybe striving for it would make the AI even more effective—more tests mean more feedback loops and more constraints on incorrect code.

Overnight autonomous agents. I don’t yet run agents that work while I sleep. There’s untapped potential here. Background agents could handle routine tasks, code reviews, test generation.

MCP and skills. I haven’t invested in Model Context Protocol integrations or custom skills. Third-party MCP servers exist; I could write my own. Maybe there are productivity gains I’m missing.

These are future experiments. The current workflow is already transformative enough that I haven’t needed them.

The Tool Landscape Is Converging

I use Claude Code for most coding tasks because it just works reliably. But I’ve also had success with OpenCode, OpenAI Codex CLI, Gemini CLI, and GitHub Copilot CLI. I continue to experiment.

What I’ve noticed: all these tools are converging on a common set of features. File editing, shell access, web search, context management—they all do roughly the same things now. The specific tool matters less and less. Pick one that works reliably and use it.

The Economics

I spend $500-$1000 per month on API-based coding agents. This is a non-trivial amount of money.

It’s also significantly less than hiring a human engineer. And in many ways more useful—AI doesn’t need onboarding, doesn’t take vacations, doesn’t have bad days.
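Annualized, the quoted range works out as follows (the monthly figures are from the text above; the rest is plain arithmetic):

```shell
# Annual agent spend at the low and high ends of the quoted range.
echo "low:  \$$((500 * 12)) per year"    # low:  $6000 per year
echo "high: \$$((1000 * 12)) per year"   # high: $12000 per year
```

Even at the high end, that is a small fraction of a fully loaded engineering salary.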

The calculus is clear: a senior engineer who uses AI effectively is the optimal configuration right now. You get human judgment for architecture and review, plus AI speed for implementation. This combination outperforms either alone.

You have to be willing to pay for it. If you’re trying to use free tiers or cheap models, you’ll get worse results and conclude AI coding doesn’t work. The good models cost money. They’re worth it.

I Still Work All Day

This needs emphasis: AI does not let me kick my feet up and do nothing.

What I no longer do:

  • Open 15 files and find which part of each to edit
  • Type out implementations character by character
  • Manually chase down every reference when renaming something
  • Write boilerplate from scratch

What I still do all day:

  • Learn new technologies and patterns
  • Architect systems and make design decisions
  • Write requirements and specifications
  • Oversee AI execution and course-correct when needed
  • Review every line of output
  • Document decisions for future reference
  • Manage complexity as systems grow

The work shifted from laborious mechanical tasks to oversight and guidance. I’m not typing code, but I’m still working. I’m thinking about architecture. I’m reviewing diffs. I’m catching mistakes. I’m planning the next change.

The nature of the work changed. The amount of work did not.

Conclusion

2025 is the year AI started writing all the code. This is revolutionary. A year ago it wasn’t possible. Now it’s my daily reality.

But the job didn’t disappear—it transformed. The mechanical labor is gone: no more hunting through files, no more typing out implementations, no more tedious refactoring. What remains is the thinking: architecture, planning, judgment, review.

That’s where the value always was. Now it’s all that’s left.

If you’re a software engineer who hasn’t adopted AI coding agents yet, you’re working harder than you need to. The tools are ready. The models are good enough. The economics make sense. The only thing missing is your willingness to change how you work.

Make 2025 the year you stop writing code and start overseeing the AI that writes it for you.




Copyright © 2026 Ryan X. Charles