The False Choice
When AI coding tools emerged, the debate was framed as a binary: "use AI and sacrifice quality" or "don't use AI and stay slow". This framing is wrong. Used well, AI tooling improves both speed and quality. Used badly, it reduces both — AI-generated code that isn't reviewed, understood, or owned is worse than no code at all.
The question isn't whether to use AI tools. It's how.
Starting With the Right Mental Model
The mental model that works: AI is a capable but unreliable junior developer. It writes code quickly, knows a lot of syntax, and produces reasonable first drafts. But it makes confident mistakes, doesn't know your codebase's specific patterns, and can't reason about business context it hasn't been given.
With a junior developer, you wouldn't merge code without reviewing it. You'd give feedback, explain context, and verify the output meets your standards. The same applies to AI-generated code.
The Practices We Implemented
Mandatory review for AI-generated code. We made this explicit in our code review guidelines: AI-generated code gets the same review scrutiny as human-generated code. No "it came from Cursor so it's probably fine". The commit history doesn't care who (or what) wrote the code — the quality standard is constant.
Prompt engineering as a team skill. We ran internal sessions on prompt engineering for coding: how to provide context, how to specify constraints, how to ask for tests alongside implementation. The quality of AI output is heavily determined by the quality of the prompt. This is a learnable skill.
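One pattern from those sessions: a good coding prompt bundles the task, the relevant code, explicit constraints, and a request for tests, rather than asking for code cold. A minimal sketch — the structure and the helper name are ours, and the example task is purely illustrative:

```python
def build_prompt(task, context_files, constraints):
    """Assemble a coding prompt with explicit context and constraints.

    Illustrative structure only; adapt the sections to your tool's
    input format.
    """
    sections = [
        "## Task",
        task,
        "## Relevant code (follow these patterns)",
        *[f"### {path}\n{source}" for path, source in context_files.items()],
        "## Constraints",
        *[f"- {c}" for c in constraints],
        "## Output",
        "- Implementation plus unit tests in the same response.",
        "- Do not invent new abstractions; reuse what the code above provides.",
    ]
    return "\n".join(sections)

# Hypothetical usage: file contents and constraints come from your repo.
prompt = build_prompt(
    task="Add pagination to the /orders endpoint.",
    context_files={"orders/views.py": "# ... existing handler ..."},
    constraints=["Reuse the existing pagination helper", "No new dependencies"],
)
```

The point is less the exact sections than the habit: every prompt states its context and constraints explicitly instead of assuming the model will infer them.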
Agentic tasks with explicit checkpoints. When using Claude Code for multi-step agentic tasks, we define checkpoints — "stop and show me the plan before you make changes", "run the tests after each file change and stop if they fail". Agentic execution without checkpoints can make many changes before you realise it's gone wrong.
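The "run the tests after each file change and stop if they fail" checkpoint can be sketched as a simple loop. This is a sketch under our own naming — `apply_change` and `run_tests` stand in for whatever hooks your agentic tooling exposes:

```python
def apply_with_checkpoints(changes, apply_change, run_tests):
    """Apply agent-proposed changes one at a time, testing after each.

    Stops at the first failing test run so one bad change can't
    cascade through the rest of the plan. Returns the changes that
    were applied and whether the whole plan completed cleanly.
    """
    applied = []
    for change in changes:
        apply_change(change)
        applied.append(change)
        if not run_tests():
            # Checkpoint hit: halt and hand control back to a human.
            return applied, False
    return applied, True
```

The design choice is that failure stops the run rather than letting the agent attempt a self-repair: the human reviews the partial state, which is exactly the "stop and show me" discipline described above.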
Know when not to use AI. Security-sensitive code (authentication, authorisation, payment handling), novel architectural decisions, complex data migrations — these benefit from full human attention. AI tools are best for boilerplate, test generation, documentation, and well-understood patterns.
What We Measured
Deployment frequency increased. Lead time for changes decreased. Bug rate in AI-assisted features? Unchanged from baseline — because we maintained the same review and testing standards.
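Lead time for changes is straightforward to compute once you have commit and deploy timestamps. A minimal sketch, assuming you can pair those timestamps from your own git and deploy logs (the function name is ours):

```python
from datetime import datetime
from statistics import median

def median_lead_time_hours(changes):
    """Median hours from commit to deployment.

    `changes` is a list of (committed_at, deployed_at) datetime pairs;
    how you extract them (git log, deploy pipeline events) is up to you.
    """
    deltas = [
        (deployed - committed).total_seconds() / 3600
        for committed, deployed in changes
    ]
    return median(deltas)

# Hypothetical data: two changes, deployed 2 and 4 hours after commit.
sample = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 11, 0)),
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 2, 13, 0)),
]
```

We track the median rather than the mean so a single slow release doesn't mask an otherwise improving trend.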
The productivity gains came from eliminating the tedious parts: writing boilerplate CRUD, generating test scaffolding, writing migration SQL. Engineers spent more time on the interesting, high-judgment problems.
The Dependency Risk
The one risk I take seriously: over-reliance. Engineers who use AI tools extensively should periodically test themselves — can you implement this feature without the AI? Can you debug this problem by reading the code and logs, without pasting it to Claude?
Skill atrophy is real. Maintain the habit of solving problems independently, using AI tools to accelerate rather than replace.