István

Developing with Confidence

Listen to a deep-dive version of this article: Shipping code without the dread

Most of the uncertainty and stress in our work is self-inflicted. We have tools that would tell us when we're wrong — we just don't use them.

I'm not talking about exotic practices or expensive infrastructure. I mean linters, type checkers, automated tests, CI pipelines. Boring stuff that's been around for years. The kind of thing we all know we should be using, and yet somehow don't.

What confidence actually means

Confidence isn't arrogance. It's not "I'm so good I don't make mistakes." It's the opposite: knowing that when you do make mistakes, something will catch them before your users do.

It's deploying on a Friday without dread. It's refactoring code you didn't write without holding your breath. It's going on holiday without your laptop.

If you don't have that, ask yourself why not.

Where we put effort in the wrong places

We write tests that prove our code works, not tests designed to break it. This is confirmation bias in action. We test the happy path, see green, and feel productive. But the happy path wasn't going to break anyway. The gnarly state transitions, the edge cases, the error handling — that's where bugs live, and that's what we leave untested.

We chase coverage percentages. 80% coverage sounds good until you realise it's 80% of the trivial code. The function that formats dates is tested six ways. The function that processes payments has no tests at all because it's "too hard to mock."

We write tests that test the mocks. We mock so aggressively that the test is just verifying our mock setup. The real dependencies, the actual integrations, the parts that break in production — none of that is exercised. The tests pass, we feel good, and then it falls over the moment it hits a real database.

We do manual verification rituals instead of automating them. The elaborate click-through before every deploy. The mental checklist you run through. The "let me just check one thing" that takes 20 minutes. These don't scale, they're error-prone, and they're exhausting.

We'll spend hours debugging but won't spend 30 minutes setting up a linter. The linter would have caught the typo. The type checker would have caught the null reference. But we don't set them up because it feels like yak-shaving, and then we spend the afternoon in the debugger instead.

The research backs this up

There's solid evidence that these practices work:

  • Microsoft and IBM case studies found test-driven development reduced defect density by 40-90%, at a cost of 15-35% more time upfront. That's a trade most of us would take.
  • A study of JavaScript bugs found that 15% could have been caught by static type checking alone. That's 15% of your production bugs, caught before you even run the code.
  • DORA's research across 39,000 professionals found that elite teams deploy 208 times more frequently than low performers, with 106x faster lead times. The difference isn't talent — it's practices.

But here's the uncomfortable finding: when researchers ask developers why they don't test more, time pressure and organisational dysfunction rank highest. Not "I don't know how" or "the tools are bad." The implication is that we're blocked by our environment.

That's often true. But it's also a convenient excuse.

The agency you actually have

Yes, management pressure is real. Yes, deadlines are brutal. But when you do have agency — and you have more than you think — what do you do with it?

You can add a test to the file you're already touching. You can run the linter locally before pushing. You can write the type definition even if the codebase is mostly JavaScript. You can document the thing that just took you two hours to figure out.

None of these require permission. None of them require buy-in. They're choices you make in the margins of your work, and they compound.

The TypeScript adoption story is instructive: 78% of JavaScript developers now use TypeScript, up from nothing a decade ago. Developers do adopt tools when the benefit becomes obvious. The problem is the benefit is delayed — you invest now, you reap the reward in six months when you're not debugging a type error in production.

Borrowed time

When we skip tests, disable the linter, or push without CI checks, we tell ourselves we're saving time. We're not. We're borrowing it.

The interest rate is brutal. Every shortcut creates uncertainty. Uncertainty slows you down — you move carefully because you don't trust the code. You manually verify because there's no automated check. You avoid refactoring because you can't be sure you won't break something. The codebase becomes a minefield, and you're the only one who knows where the mines are.

This is how people become valuable for all the wrong reasons. Not because they write good code, but because they hold tribal knowledge. They're the only ones who know that you can't touch that module without breaking the billing system. They're irreplaceable — not because of skill, but because of accumulated context that exists nowhere but their heads.

That's not job security. It's a trap. For them and for the team.

The compound interest works against you. Skip quality now, move slower later. Keep skipping, keep slowing. Eventually you're spending all your time fighting fires in code nobody dares to change, and you've forgotten what it feels like to build something new.

The real problem

I said at the start this was self-inflicted. That's only half true.

The systemic pressures are real. Organisations that reward shipping over quality, that treat testing as optional, that set deadlines without slack — they create environments where rational developers skip the practices that would help them.

But we also make it worse for ourselves. We don't push back, even once. We don't demonstrate the value. We accept "we don't have time" without questioning whether that's actually true. We treat quality tooling as something we'll get to later, when things calm down. Things never calm down.

The drag on our industry isn't primarily that we don't have good tools. It's that we don't use the ones we have.

What confidence gets you

When you develop with confidence, you can give solid estimates — because there are no surprises. You're not padding your timelines to account for the unknown horrors lurking in the codebase.

You don't even need to know what all your code does. The tooling and the tests hold it together so firmly that you know whatever it does is the right thing. The tests are the specification. If it passes, it's correct. If it's not correct, the tests are wrong and you fix them. Either way, you're not relying on someone's memory of what a function was supposed to do three years ago.

You can refactor whatever you want. There are no sacred cows, no modules everyone's afraid to touch, no "we don't go in there" zones. If something needs to change, you change it. The tests tell you if you broke anything. If they don't, you write better tests.

You can let a coding agent modify your code without a worry in the world. The AI can suggest changes, make changes, even write new features — because you have a safety net. The linter catches style issues. The type checker catches structural errors. The tests catch behavioural regressions. You review the diff, run the suite, and you're done.

This isn't some utopian future. It's just what happens when you invest in the basics.

