Claude Code: use it when local execution is the bottleneck
I started treating Claude Code as a specialist, not a chatbot. It is far more useful when it can run close to my code, keep state, and iterate quickly on concrete files.
What made it click for me
I stopped using it for broad advice. Instead, I use it for a narrow pattern:
- load context,
- change a small surface,
- run checks,
- repeat.
That changes quality more than any prompt trick.
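The four-step pattern above can be sketched as a tiny shell loop. Everything here is illustrative: `CHECK_CMD` stands in for whatever gate the project actually uses, and `iterate_once` is a made-up name, not a real tool.

```shell
#!/usr/bin/env sh
# Sketch of the narrow loop: load context, edit a small surface, verify.
# CHECK_CMD is a placeholder for the real gate (e.g. "npm run lint").
CHECK_CMD="${CHECK_CMD:-true}"

iterate_once() {
  scope="$1"                 # 1) load context: the files in play this pass
  # 2) change a small surface: the model edits only files inside $scope
  if $CHECK_CMD; then        # 3) run checks: gate the patch
    echo "ok: $scope"        # 4) repeat from a verified state
  else
    echo "draft: $scope"     # checks failed: feed the output back, iterate
  fi
}
```

The point of the sketch is the shape, not the commands: every pass ends in a verifiable state before the next one begins.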
Where it wins
1) Local debugging sessions
When I have a failing build, I pass in:
- full error context,
- expected invariants,
- and the exact file cluster to inspect.
It can then suggest edits without making random structural guesses.
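Those three inputs can be bundled into one file the session loads in a single read. This is a hypothetical helper (`build_context` is my name, not a real tool), and the section headers are an assumption about what a useful context file looks like.

```shell
#!/usr/bin/env sh
# Hypothetical helper: bundle error output, the expected invariant, and the
# exact file cluster into one context document.
build_context() {
  log="$1"; shift            # captured error output from the failing build
  invariant="$1"; shift      # the expected invariant, stated up front
  echo "## Error output"
  cat "$log"
  echo "## Invariant"
  echo "$invariant"
  echo "## Files in scope"
  for f in "$@"; do          # the exact file cluster to inspect
    echo "### $f"
    cat "$f"
  done
}
```

Handing over one assembled document beats pasting fragments: the model sees the error, the contract, and the code in the same pass.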
2) Multi-file but scoped refactors
I ask for an ordered plan first. If it can split work by module, it usually avoids the classic “big bang diff” mess.
3) Documentation drift
Whenever behavior changes, I make the docs update part of the same run. Claude Code is best when it can close that loop instead of leaving notes for “later”.
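That loop can even be enforced mechanically. A minimal sketch of a drift guard, assuming a `src/` and `docs/` layout (the paths are assumptions, not a convention Claude Code imposes):

```shell
#!/usr/bin/env sh
# Drift guard sketch: if the pending change touches src/ but not docs/,
# flag it before the run ends.
docs_in_sync() {
  changed="$1"   # newline-separated paths, e.g. from `git diff --name-only`
  echo "$changed" | grep -q '^src/' || return 0   # no behavior change
  echo "$changed" | grep -q '^docs/'              # behavior changed: docs too
}
```

Wired in as `docs_in_sync "$(git diff --name-only)" || echo "docs drift"`, it turns “update the docs later” into a failing check now.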
The non-negotiable discipline
I keep four checks around every session:
- a scoped command map (what files and commands are allowed),
- strict success criteria (tests, lint, build),
- diff review before apply,
- and explicit rollback notes in the commit message.
npm test
npm run lint
npm run build
If any of them fails, I treat the output as a draft, not a done patch.
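Those checks can be wrapped in one gate. A sketch with a made-up `gate` helper; the command list mirrors the stated success criteria (tests, lint, build), not a fixed API:

```shell
#!/usr/bin/env sh
# Gate sketch: run every success criterion in order; any failure means the
# patch is still a draft.
gate() {
  for cmd in "$@"; do
    $cmd || { echo "draft: '$cmd' failed"; return 1; }
  done
  echo "done: all checks passed"
}
# e.g. gate "npm test" "npm run lint" "npm run build"
```

A nonzero exit from `gate` is the signal to keep iterating rather than apply the diff.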
What surprised me
Most of the time, the bottleneck was not model quality. It was my instruction quality. Tighter goals and strict boundaries gave better results than longer prompts.
When you can verify locally, Claude Code becomes a powerful execution loop. When you can’t verify, it should become less authoritative.