You are currently browsing the tag archive for the ‘probability’ tag.

Here's a problem related to the one in the last post. You toss a fair coin, and keep going until the number of tails you've seen is n more than the number of heads. On average, how many times do you toss the coin before you stop?

Unlike the last problem, I don’t know the answer to this one, so I’d be interested in hearing anyone else’s approach. I believe that I have a good method for getting at the answer, but that I’m missing one crucial step that I haven’t yet been able to fill in.
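If you want to experiment numerically before committing to an approach, here's a minimal Monte Carlo sketch in Python. The function name, the cap on the number of tosses (runs can be extremely long), and the choice of n = 3 are all just for illustration, not part of the problem.

import random

def tosses_until_tails_lead(n, cap=10**5):
    """Toss a fair coin until tails exceed heads by n.

    Returns the number of tosses, or None if the cap is reached first
    (individual runs can be extremely long)."""
    lead = 0      # (number of tails) - (number of heads)
    tosses = 0
    while lead < n and tosses < cap:
        lead += 1 if random.random() < 0.5 else -1
        tosses += 1
    return tosses if lead == n else None

# Rough experiment for n = 3: how many runs finish within the cap,
# and what the average looks like for those that do.
results = [tosses_until_tails_lead(3) for _ in range(1000)]
finished = [r for r in results if r is not None]
print(len(finished), sum(finished) / len(finished))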

I have a lot of coin-tossing problems floating around in my head, of varying levels of difficulty. Here’s one which requires some thought, but shouldn’t present too much of a challenge if you’ve seen this kind of problem before.

You start flipping a fair coin, and keep going until you’ve seen n heads in a row, when you stop. On average, how many times do you flip the coin before stopping?
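If you'd like to check your answer numerically, here's a quick Monte Carlo sketch in Python; the function name and the choice of n = 3 for the demonstration are arbitrary.

import random

def flips_until_n_heads_in_a_row(n):
    """Flip a fair coin until n consecutive heads appear; return the flip count."""
    streak = 0
    flips = 0
    while streak < n:
        flips += 1
        streak = streak + 1 if random.random() < 0.5 else 0
    return flips

# Estimate the average for, say, n = 3 over many trials.
trials = 100_000
print(sum(flips_until_n_heads_in_a_row(3) for _ in range(trials)) / trials)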

[Image: Spaghetti, captioned "This is what spaghetti looks like"]

Here’s a nice problem in probability. I like it because at first sight it seems fiercely complicated, but thinking about it in the right way makes it seem much simpler.

You have a bowl filled with n strands of spaghetti. You pick two ends from the bowl at random, and tie them together. You keep doing this until there are no more loose ends. Obviously, you will have made some number of loops when you reach the end of this process.

What is the average number of loops created?
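One way to get a feel for the answer before doing any calculation is to simulate the tying process directly. Here's a short Monte Carlo sketch in Python; representing the pieces by integer labels, and the choices of 50 strands and 10,000 trials, are just illustrative.

import random

def loops_from_spaghetti(n):
    """Tie random pairs of loose ends together until none remain;
    return the number of closed loops formed."""
    # Each open piece has exactly two loose ends; label the pieces 0..n-1.
    ends = [piece for piece in range(n) for _ in range(2)]
    loops = 0
    while ends:
        i, j = random.sample(range(len(ends)), 2)
        a, b = ends[i], ends[j]
        for idx in sorted((i, j), reverse=True):
            ends.pop(idx)
        if a == b:
            loops += 1  # tied a piece to itself: a new closed loop
        else:
            # Tied two different pieces into one longer piece:
            # relabel the surviving end of piece b as belonging to piece a.
            ends = [a if e == b else e for e in ends]
    return loops

# Rough estimate of the average number of loops for n = 50 strands.
trials = 10_000
print(sum(loops_from_spaghetti(50) for _ in range(trials)) / trials)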

My last post, on the 17 x 17 challenge, ended with a solution to a problem which I thought sounded incredibly naïve* but which turned out to be correct. It got me thinking about a problem that I was posed when I was interviewing to work at investment banks. In fact I was asked the same question in interviews for two different banks (potential applicants take note).

[Image: Tunnels, captioned "Sketch of the problem as presented to me"]

It goes like this: you find yourself in a dark cave, which has three tunnels leading from it (see the crude sketch on the right). One of the tunnels leads outside, and it takes 3 hours to traverse. The other two tunnels connect to each other, returning you to the cave, and each takes 1 hour to walk down. If you choose tunnels at random each time you're faced with a choice, how long will it take you on average to get outside?

I'm about to tell you the answer, so maybe you want to stop and think about it for a bit. The point I'm making is that even though the calculation you have to do is a bit involved (although simple to anyone who's ever studied Markov chains), there's a cheeky shortcut: you can get the answer just by assuming that you take each of the tunnels exactly once and adding up the times to traverse them, giving 1 + 1 + 3 = 5 hours.
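To spell out the calculation that the shortcut bypasses: writing E[T] for the expected escape time and conditioning on the first tunnel chosen,

E[T] = \tfrac{1}{3}\cdot 3 + \tfrac{1}{3}(1 + E[T]) + \tfrac{1}{3}(1 + E[T]),

which rearranges to \tfrac{1}{3}E[T] = \tfrac{5}{3}, i.e. E[T] = 5 hours, agreeing with the shortcut.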

Is this just a coincidence? To find out, let's generalise the problem. Now we find ourselves in a cave with n tunnels, of which m lead to an exit and the remaining n-m dump you back in the cave. Tunnel i takes time \tau_i to traverse and is chosen with probability p_i, for i=1,…,n. We'll adopt the convention that the first m tunnels are the ones that lead to an exit, and the last n-m are the return tunnels (notice that our generalisation allows the possibility that a return tunnel takes a different amount of time to traverse depending on which direction you walk down it).

To work out the average escape time, notice that if we pick an escape tunnel then we're free after the time it takes to traverse that tunnel. If we pick a return tunnel, then we've increased our expected time to leave by the time it took to traverse it. If the time to leave is T, then we write the expected leaving time as E[T], and conditioning on the first tunnel chosen gives

E[T] = \sum_{i=1}^m p_i\tau_i + \sum_{j=m+1}^n p_j(\tau_j + E[T]).

After a couple of lines of algebra we see that the expected time to leave is given by

E[T] = (\sum_{i=1}^n p_i\tau_i) / (1 - \sum_{j=m+1}^n p_j)

or, written in everyday language:

Avg time to leave = (Avg time to traverse a randomly chosen tunnel) ÷ (Chance of picking an exit tunnel)

That's it! The average time to escape depends only on the average time it takes to traverse a randomly picked tunnel, and on the probability of picking a tunnel which leads to an exit. Beyond those two numbers, it's completely insensitive to the specifics of the problem: how long the individual tunnels are, and what the precise probability of picking each one is. In the special case where each of the tunnels is equally likely, all you have to do is add up the times it takes to traverse each of the tunnels and divide by the number of tunnels that lead to an exit.
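As a sanity check, here's a small Python sketch that compares the formula above with a direct simulation of the cave. The particular tunnel times and probabilities below are made up purely for illustration.

import random

def escape_time_formula(times, probs, m):
    """Closed form: (average tunnel traversal time) / (probability of picking an exit).
    By convention the first m tunnels are the exits."""
    avg_time = sum(p * t for p, t in zip(probs, times))
    p_exit = sum(probs[:m])
    return avg_time / p_exit

def escape_time_simulated(times, probs, m, trials=100_000):
    """Direct Monte Carlo: keep choosing tunnels until an exit tunnel is picked."""
    total = 0.0
    for _ in range(trials):
        while True:
            i = random.choices(range(len(times)), weights=probs)[0]
            total += times[i]
            if i < m:     # picked an exit tunnel, we're out
                break
    return total / trials

# Made-up example: 2 exit tunnels and 3 return tunnels with unequal probabilities.
times = [3.0, 7.0, 1.0, 2.0, 0.5]
probs = [0.10, 0.20, 0.30, 0.25, 0.15]
print(escape_time_formula(times, probs, m=2))
print(escape_time_simulated(times, probs, m=2))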

This appeals to me because a problem which has a lot of fine detail turns out to be insensitive to the fine detail. I’d be interested to hear about other examples of this phenomenon.

* Is it pretentious to write ‘naïve’ instead of ‘naive’? Answers, opinions and furious rants on a postcard, please.

A recent post on bit-player drew my attention to William Gasarch’s 17 x 17 rectangle challenge. In short, Gasarch is offering $289 (289 = 17 x 17) for the solution to the following puzzle.

[Image: 6 x 6 coloring, captioned "A coloring of a 6 x 6 grid which satisfies the 'no rectangles' constraint."]

The problem is to take a rectangular n x m grid, and color each of the squares with one of four colors, so that there are no rectangles in the grid whose corners all have the same color. For example, on the right you can see a 6 x 6 grid which satisfies this constraint.

It turns out that if the size of the grid is 19 x 19 or bigger, then you can prove that it’s impossible to color the grid without creating a rectangle. In all cases up to a 16 x 16 grid, Gasarch has an example of a coloring that satisfies the no-rectangles constraint. However, the 17 x 17 and 18 x 18 cases are still open. To win the $289, you need to send Gasarch an example of a rectangle-free coloring of the 17 x 17 grid.

If it turns out that it’s possible to color the grid in this way, then the proof that you can do so is simple — you just draw a picture of the coloring (it’s easy to get a computer to check that it doesn’t have any rectangles). However, if it turns out to be impossible then it can be very hard (read: long and ugly) to prove that that’s the case. This is why the problem is so appealing to people with very little experience in mathematics: to solve the problem, all you have to do is draw a picture!

Of course, it’s easier said than done. There are 4^{289} different ways to color a 17 x 17 grid, and very few of them will be rectangle free. If you were to write a computer program to check all possible colorings, it would take longer than the age of the universe to run.
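To put a rough number on "longer than the age of the universe": 4^{289} = 2^{578} \approx 10^{174} colorings, so even a machine checking a trillion colorings per second would need on the order of 10^{162} seconds, compared with roughly 4 \times 10^{17} seconds since the Big Bang.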

My first thought after reading the bit-player post was that a simulated annealing algorithm might do the trick. Simulated annealing is a computational technique which is based on the way that metals can be cooled slowly in order to minimize the number of defects, effectively finding a minimal energy solution to the problem of arranging the atoms in the metal. Simulated annealing tends to work well in situations that have a large number of possible states, and a large number of local minima — just as the rectangles problem does.
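To make that concrete, here's a rough sketch of how such an annealing scheme might look in Python. This isn't the version I'm actually running: the "energy" is simply the number of same-colored rectangles, the temperature and cooling schedule below are arbitrary, and a serious attempt would update the energy incrementally after each move rather than recounting every rectangle.

import math
import random

def count_rectangles(grid):
    """Count axis-aligned rectangles whose four corners all share a color."""
    n = len(grid)
    count = 0
    for r1 in range(n):
        for r2 in range(r1 + 1, n):
            for c1 in range(n):
                for c2 in range(c1 + 1, n):
                    if grid[r1][c1] == grid[r1][c2] == grid[r2][c1] == grid[r2][c2]:
                        count += 1
    return count

def anneal(n=17, colors=4, steps=50_000, temp=2.0, cooling=0.9999):
    """Toy simulated annealing: propose single-cell recolorings and accept
    energy-increasing moves with the usual Boltzmann probability."""
    grid = [[random.randrange(colors) for _ in range(n)] for _ in range(n)]
    energy = count_rectangles(grid)
    for _ in range(steps):
        r, c = random.randrange(n), random.randrange(n)
        old_color = grid[r][c]
        grid[r][c] = random.randrange(colors)
        new_energy = count_rectangles(grid)
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / temp):
            energy = new_energy          # accept the move
        else:
            grid[r][c] = old_color       # reject and restore
        temp *= cooling
        if energy == 0:                  # rectangle-free coloring found
            break
    return grid, energy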

I'm working on this algorithm at the moment, and hopefully I'll have a post about that in the near future. But while I was thinking about the problem, I became distracted by another question: if you color the grid randomly, how many same-color rectangles do you create on average? It's easy to write a computer program that uses Monte Carlo to approximate this number, but I wanted to do it analytically.
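For anyone who wants to reproduce the numerics, here's a small self-contained Python sketch of that Monte Carlo estimate; the grid size and trial count are arbitrary.

import random
from itertools import combinations

def average_rectangles(n, colors=4, trials=2_000):
    """Monte Carlo estimate of the expected number of same-colored rectangles
    in a uniformly random coloring of an n x n grid."""
    total = 0
    for _ in range(trials):
        grid = [[random.randrange(colors) for _ in range(n)] for _ in range(n)]
        for rows in combinations(range(n), 2):
            for cols in combinations(range(n), 2):
                r1, r2 = rows
                c1, c2 = cols
                if grid[r1][c1] == grid[r1][c2] == grid[r2][c1] == grid[r2][c2]:
                    total += 1
    return total / trials

print(average_rectangles(10))   # compare with the formula derived below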

This looks like a difficult calculation, because each cell in the grid can potentially be part of many rectangles, so there seems to be a lot of interdependence between the colors of different cells.

Undeterred, I decided to try a naive calculation, in the hope that even though it was wrong, it wouldn’t be too wrong, and it would give me some insight into how to do the full calculation. First I worked out how many rectangles it’s possible to make on an n x n grid by joining corners. After some thought, you can see that this number N is given by:

N = \sum_{i=1}^{n-1} \sum_{j=1}^{n-1} (n-i)(n-j) = \tfrac{1}{4}n^2(n-1)^2
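(Equivalently, a rectangle is determined by choosing two distinct rows and two distinct columns, so N = \binom{n}{2}^2, which gives the same thing.)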

A potential rectangle contributes to the count only if all four of its corners are the same color. Each cell has probability 1/4 of being any particular color, so the probability that all four corners take one given color is (1/4)^4; since this can happen with any of the four colors, the total probability is 4(1/4)^4 = 1/64. Therefore, according to this dodgy calculation, the average number of rectangles R in an n x n grid is

R = N/64 = \frac{n^2(n-1)^2}{256}
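(For example, on the 17 x 17 grid this gives R = \tfrac{17^2 \cdot 16^2}{256} = 289 expected same-colored rectangles in a random coloring.)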

So how does this rough calculation compare with the numerically computed results? Take a look for yourselves. The dots are the numerical approximations (which should be pretty accurate) and the dashed line is the formula I derived above:

[Image: Average number of rectangles, captioned "Average number of rectangles plotted against grid size."]

It turns out that this formula gives precisely the right value for the expected number of rectangles in a random coloring! So now my question is: is it actually the case that, for some subtle reason, the calculation I did above is valid after all, even though intuitively it seems that it should be completely wrong? Or is it just a coincidence that it comes out with exactly the right answer? I may be thinking about this one for a while.