04/02/2008 @17:57:55 ^18:44:44

Because of Nethack I've found myself often wondering "how many times do I have to do this before there's a good chance of it working?" Then, when I've bothered to sit down and work it out properly, I've found there's a kind of pattern: if you have a chance of 1 in some large number, you expect to have to make about two thirds of that large number of attempts.

Earlier today I got annoyed enough to try to prove it. Suppose you have some event that happens with probability p on each attempt, independently of the others. The event happening at least once in n attempts is equivalent to it not being the case that the event fails to happen n times in a row. The probability that the event fails n times in a row is (1-p)^n, so the probability that this is not the case is 1-(1-p)^n.
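
Here's a quick Python sketch to check that formula numerically (the function name is just mine, for illustration):

    def prob_at_least_once(p, n):
        # Probability the event happens at least once in n independent attempts:
        # one minus the probability that it fails all n times.
        return 1 - (1 - p) ** n

    # A 1-in-100 event attempted 100 times succeeds at least once ~63% of the time.
    print(prob_at_least_once(0.01, 100))  # ~0.634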

You would expect the event to have happened once this probability exceeds 50%. So we want 1-(1-p)^n > 0.5, that is (1-p)^n < 0.5, but for simplicity let's treat it as an equality.

Take logs of both sides so that you have n.ln(1-p) = ln(0.5). Rearranging gives the exact answer: you need to make ln(0.5)/ln(1-p) attempts to expect the event to happen. This is obviously too awkward to do in your head.
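
If you don't mind a computer doing it for you, the exact formula is easy enough to evaluate; here's a rough Python sketch (again, the name is made up for illustration):

    import math

    def attempts_for_even_odds(p):
        # Smallest whole number of attempts giving a better-than-even chance
        # of at least one success: solve (1-p)^n = 0.5 for n and round up.
        return math.ceil(math.log(0.5) / math.log(1 - p))

    print(attempts_for_even_odds(0.01))   # 69
    print(attempts_for_even_odds(1 / 6))  # 4, e.g. waiting for a particular die roll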

However, you can approximate it if p is small: the Taylor series expansion of ln(1+x) starts with x, so for small p, ln(1-p) ~= -p.
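
A quick numerical check of that approximation, as a throwaway Python sketch:

    import math

    # For small p, ln(1-p) is very close to -p.
    for p in (0.1, 0.01, 0.001):
        print(p, math.log(1 - p), -p)
    # 0.1    -0.10536...    -0.1
    # 0.01   -0.01005...    -0.01
    # 0.001  -0.0010005...  -0.001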

So n is approximately -ln(0.5)/p. Since ln(0.5) = -ln(2), this can be written as ln(2)/p.

And that's it! ln(2) is very roughly two thirds (more like 0.7, but okay). So if we write p = 1/L for some large L, the number of attempts you need for a better-than-even chance is very approximately two thirds of L.
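
One last sanity check, comparing the exact answer with the two-thirds rule of thumb, again just a rough Python sketch:

    import math

    # Exact answer ln(0.5)/ln(1 - 1/L) versus the two-thirds rule of thumb.
    for L in (20, 100, 1000):
        exact = math.log(0.5) / math.log(1 - 1 / L)
        print(L, round(exact, 1), round(2 * L / 3, 1))
    # 20    13.5    13.3
    # 100   69.0    66.7
    # 1000  692.8   666.7

So for a 1-in-100 event you need about 69 attempts, which is close enough to two thirds of 100 for doing it in your head.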