My toolchain overhead budget

Jigar Patel
2 min read

Last month I looked at my workstation and found I had a dozen CLI helpers. Most were fine. Some were obsolete. A few were doing nothing but adding cognitive overhead.

I decided to stop accumulating tools and start budgeting them.

My overhead budget has three parts

  • Setup cost: what I lose learning the tool.
  • Operational cost: keystrokes, context switching, and mental load.
  • Recovery cost: how hard it is to debug when it breaks.

If the recovery cost is high, I do not keep the tool unless it removes equivalent or greater cost elsewhere.
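The trade-off can be sketched as back-of-the-envelope arithmetic. Every number below is made up for illustration; the point is that a tool only earns its place when saved time outweighs the three costs combined:

```shell
# Illustrative weekly budget for one tool, in minutes. All values are invented.
setup_cost=30        # one-time learning cost, amortized over ~10 weeks
operational_cost=2   # keystrokes and context switching per week
recovery_cost=15     # expected debugging time per week (probability x penalty)
time_saved=25        # minutes the tool saves per week

net=$(( time_saved - setup_cost / 10 - operational_cost - recovery_cost ))
echo "net minutes saved per week: $net"
```

With these numbers the tool barely breaks even; bump the recovery cost a little and it goes negative, which is exactly the case the rule above rejects.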

A practical 15-minute audit routine

I run this every two months:

  1. List every tool referenced in my shell profiles and personal docs.
  2. For each tool, answer three questions:
    • What problem does it solve?
    • What is the average time saved per use?
    • What is the breakage penalty?
  3. Delete or disable what does not pass.
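The audit loop above can be sketched as a small helper. This is not the author's actual script, just one way to walk a directory of personal binaries and print the three questions for each:

```shell
# Hypothetical audit helper: print the three audit questions for every
# file in a directory of personal tools (e.g. ~/bin).
audit_tools() {
  dir=$1
  for tool in "$dir"/*; do
    [ -e "$tool" ] || continue     # skip if the glob matched nothing
    printf '== %s ==\n' "$(basename "$tool")"
    printf '  - What problem does it solve?\n'
    printf '  - What is the average time saved per use?\n'
    printf '  - What is the breakage penalty?\n'
  done
}

# Usage: audit_tools ~/bin
```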

Commands I use when trimming

# show profile paths
printf '%s\n' ~/.zshrc ~/.bashrc ~/.config/fish/config.fish

# list binaries in my personal bin
ls -1 ~/bin

# check global Node tools that might add noise
npm ls -g --depth=0
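One extra check that fits this list (my addition, not from the original set): sorting the bin directory by last access time can surface helpers that have not run in months. Note that access times are only meaningful if your filesystem records them (relatime/noatime mount options affect this).

```shell
# sort personal binaries by last access time, least recently used last
ls -ltu ~/bin | tail -n 5
```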

I don’t treat this as an exact science. I just keep the list small and intentionally useful.

Three filters before I install something new

  • Would it take more than two existing commands to get the same result?
  • Does it save me at least one manual step each loop?
  • Does failure show clearly in logs or exit codes?

If the answer to any of these is no, I do not install it.
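The filters can be sketched as an interactive gate. The function name and prompts are hypothetical; the questions are normalized so that any "n" answer blocks the install:

```shell
# Hypothetical pre-install gate: ask each filter question; any "n" blocks.
consider_install() {
  for q in \
    'Would it take more than two existing commands?' \
    'Does it save at least one manual step each loop?' \
    'Does failure show clearly in logs or exit codes?'
  do
    printf '%s [y/n] ' "$q"
    read -r ans
    [ "$ans" = "y" ] || { echo "verdict: skip"; return 1; }
  done
  echo "verdict: install"
}

# Usage: consider_install   (answer each prompt with y or n)
```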

What the cleanup changed

After I removed dead weight, I got two gains:

  • Scripts got shorter, because I could delete wrappers for retired tools.
  • On-call confidence increased, because fewer moving parts meant fewer “works on my machine” assumptions.

I now treat each helper like production code: it must justify its own maintenance debt.

My stack got smaller. It is still practical, and that is the best signal.