# How good are you?

This is just basic probability theory, and it’s overly simplified, but it goes like this:

Suppose an event has, say, a 1% probability of occurring. On average, it happens once in a hundred attempts. What’s the probability it happens *at least once* in 10 attempts? Well, it’s 1 minus the probability it never happens, that is 1 – 0.99 ^ 10 ≈ **10%**. In 20 attempts, the probability goes up to 1 – 0.99 ^ 20 ≈ **18%**.
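The arithmetic above is easy to check yourself. A minimal sketch (the function name is just for illustration):

```python
def p_at_least_once(p: float, n: int) -> float:
    """Probability that an event with per-attempt probability p
    occurs at least once in n independent attempts."""
    return 1 - (1 - p) ** n

for n in (10, 20, 100):
    print(f"{n} attempts: {p_at_least_once(0.01, n):.1%}")
```

For a 1% event, this prints roughly 9.6% at 10 attempts, 18.2% at 20, and 63.4% at 100.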

Repeating an attempt drastically increases the chance of the event occurring at least once, even if the event itself is quite rare.

How does this map to software development? You are a great developer. You write correct code 99% of the time! However, you write a lot of code, so the chance you make zero errors is… barely distinguishable from zero.
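Flipping the calculation around shows how fast “99% correct” decays over many changes (the 99% per-change rate is the article’s assumption, not a measured figure):

```python
# Probability that *all* n changes are correct, if each one
# is independently correct 99% of the time.
for n in (100, 500, 1000):
    print(f"{n} changes, all correct: {0.99 ** n:.2%}")
```

At 100 changes you are already down to about a 37% chance of a flawless run; at 1000 it is effectively zero.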

As a very specific example / best practice: every time you do a refactoring that touches 10+ places in the code, assume there is probably something like 1 bug in there! Once the refactoring is done, go look for it – you will find it…