Many recreational games combine skill and chance. Skill you can work on, chance is, well, a matter of luck. For all the ‘games’ discussed here, it is easy to persuade yourself that there is some finite list of outcomes, all equally likely. Thus in this chapter, unless explicitly stated otherwise, we will use the classical approach to finding probabilities: count the number of possible outcomes, and the probability of any event is taken as the proportion of those outcomes where the event happens.
My aim is to show how the ideas of probability can help a player make good decisions under conditions of uncertainty. An understanding of probability can also add to the enjoyment or entertainment of spectators.
Lotteries
A common lottery format is that known as 6/49, as in the UK National Lottery. Here 49 rubber spheres, painted with different numbers, are whirled around in a plastic tub, and six are chosen at random. Gamblers pay £1 to select six numbers, and win a prize if their selection contains at least three of those winning numbers. But since only 50% of the takings go into the prize money, the mean return to Lottery players is far less than in casinos, or at the racetrack.
The main attraction is the prospect, however remote, of an enormous prize – one UK ticket has won over £22 million, and prizes in the USA have exceeded $300 million. Counting tells us that the probability of winning a share of the top prize from one ticket, by matching all the winning numbers, is about one in 14 million in the UK, less than one in 116 million in the Euromillions game, and around one in 176 million in the USA Mega Millions game.
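These figures come straight from counting the equally likely combinations. For readers who like to check, here is the sum in Python (the Euromillions and Mega Millions formats assumed are those in force when these odds were quoted: 5 numbers from 50 plus 2 ‘Lucky Stars’ from 11, and 5 from 56 plus a Mega Ball from 46).

```python
from math import comb

# One winning combination out of all possible selections.
uk_649 = comb(49, 6)                      # 13,983,816: about 1 in 14 million
euromillions = comb(50, 5) * comb(11, 2)  # 116,531,800
mega_millions = comb(56, 5) * 46          # 175,711,536: about 1 in 176 million

for name, n in [("UK 6/49", uk_649), ("Euromillions", euromillions),
                ("Mega Millions", mega_millions)]:
    print(f"{name}: 1 in {n:,}")
```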
To appreciate just how tiny these chances are, fix on the UK game. Figures show that the probability of death within one year for a random 40-year-old man is around one in a thousand. So the chance of his death within a day is about one in 365,000, within an hour it is about one in 9 million, so to get down to one in 14 million we are talking about the chance he dies within the next 35 minutes! For the Mega Millions game, under the same assumptions, the chance his ticket wins a Jackpot share is comparable to his chance of death within the next three minutes.
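The arithmetic behind that comparison is crude but easy to reproduce. This sketch assumes the annual risk is spread evenly through the year; it lands within a few minutes of the figure just quoted.

```python
# Spread an annual death risk of 1 in 1,000 evenly over the year.
per_minute = 1 / (1000 * 365 * 24 * 60)

# Minutes of exposure matching the UK jackpot chance of 1 in 14 million:
print((1 / 14_000_000) / per_minute)   # about 37 minutes
```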
Despite the low return and forbidding odds, ‘utility’ gives a rational explanation for buying tickets. In exchange for £1, you will get back 50p on average anyway, and the other 50p buys you the right to dream about your future luxurious lifestyle, your philanthropy, and the possible envy of people like me who assured you it was a waste of money. These rights surely have some utility.
We shall assume that future lottery draws are independent of past results – an inanimate rubber sphere cannot ‘remember’ whether it is ‘due’ to be chosen. Short of cheating, there is no way of changing the chance of winning a prize. But you can influence the size of any prize, in those lotteries where a fixed proportion of the prize fund is shared out among the winners at each tier. There is an opportunity to exercise a little skill.
It arises because certain numbers, typically low (birth dates) and odd, are chosen more often than others, and because many lottery players spread their choices evenly over the ticket, perhaps in the mistaken belief that doing so means that they are selecting ‘at random’. In consequence, combinations with several high numbers, or with numbers clustered together, or on the edge, are chosen less often. If you can identify the sort of choices other players are making, and do something different, your winning chance is not affected, but if you do win, you will win more than average.
Beware of trying to be too crafty, like selecting {1, 2, 3, 4, 5, 6}, or the winning numbers in the last draw, on the grounds that ‘Nobody else will think of that’. They will. When the UK Lottery began, about 10,000 people were choosing the first six numbers. In September 2009, the winning numbers were identical in two successive draws in the Bulgarian Lottery: no-one chose them the first time, but 18 did so the second time.
Provided that other players continue to mark their tickets much as they have in the past, the following procedure, for UK-type 6/49 lotteries, will help you. Take an ordinary deck of 52 playing cards and discard three of them. Identify the remaining cards with the numbers 1 to 49, shuffle well, and select six cards. This is a way of choosing six numbers completely at random. Human beings cannot make such a selection unaided; they need this sort of auxiliary help.
And use these six numbers, provided that
(a) they total at least 177 (to give a bias to high numbers), and
(b) when marked on the ticket, they fall into two, three, four, or five clusters, and
(c) three, four, or five of them are on the outside border of the ticket, and
(d) they do not form any obvious pattern on the ticket.
If any of these conditions fail, return the six cards to the pack, shuffle it thoroughly, and repeat this sequence.
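If you prefer silicon to playing cards, the sketch below carries out the whole recipe, on the assumption that your playslip prints the numbers 1 to 49 in seven rows of seven, so that ‘clusters’ and the ‘border’ refer to that grid; condition (d), spotting obvious patterns, is left to your eye.

```python
import random

def grid_position(n):
    """(row, column) of number n on an assumed 7-by-7 playslip."""
    return divmod(n - 1, 7)

def clusters(numbers):
    """Count groups of picks that touch (including diagonally) on the grid."""
    cells = {grid_position(n) for n in numbers}
    groups = 0
    while cells:
        stack = [cells.pop()]
        groups += 1
        while stack:
            r, c = stack.pop()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (r + dr, c + dc) in cells:
                        cells.remove((r + dr, c + dc))
                        stack.append((r + dr, c + dc))
    return groups

def on_border(n):
    r, c = grid_position(n)
    return r in (0, 6) or c in (0, 6)

def pick_numbers():
    while True:
        choice = random.sample(range(1, 50), 6)      # the shuffled-deck draw
        if (sum(choice) >= 177                       # (a) bias to high numbers
                and 2 <= clusters(choice) <= 5       # (b) two to five clusters
                and 3 <= sum(map(on_border, choice)) <= 5):  # (c) border count
            return sorted(choice)                    # (d): inspect by eye

print(pick_numbers())
```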
If you follow this recipe, you should still expect to lose money – the overall payback of only 50% of money received is hard to overcome. But you are less likely to have to share any Jackpot with the world and his wife.
TV games
Golden Balls was first aired in 2007. The last two players are faced with eleven spheres (the Golden Balls), some of which are worth money, while the rest (the Killers) are worth nothing. The players select five of these spheres to generate a potential prize fund; any Killer chosen reduces the value of any previously selected Ball to one tenth of its current value. Thus two Killers chosen after a £50,000 Ball make it worth just £500.
All the spheres are outwardly identical, so the players are choosing completely at random. There are 462 ways to choose five objects from eleven, so the chance of picking the five most valuable Balls is 1/462. In the first 288 shows, this occurred just once.
Take a Ball nominally worth £1,000: even ignoring the Killers, the chance of selecting it is only 5/11, so its real mean value is £455. Any Killers will reduce this sum even further – with three Killers, its mean value can be calculated as £255.
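The exact calculation behind that £255 is fiddly, but a Monte Carlo sketch of the rules as described, three Killers among the eleven spheres, five picks in order, each later Killer dividing the Ball’s value by ten, confirms it.

```python
import random

def mean_value(trials=200_000):
    total = 0.0
    for _ in range(trials):
        spheres = ["ball"] + ["killer"] * 3 + ["other"] * 7
        random.shuffle(spheres)
        picks = spheres[:5]                     # five chosen, in order
        if "ball" in picks:
            value = 1000.0
            for s in picks[picks.index("ball") + 1:]:
                if s == "killer":
                    value /= 10                 # each later Killer: one tenth
            total += value
    return total / trials

print(f"~£{mean_value():.0f}")                  # close to the £255 quoted
```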
When the five Balls have been chosen, and the actual prize fund is known, each player makes a private decision, whether to Split the fund with the other player, or seek to Steal all of it. They reveal their choices simultaneously: if both Split, they share the fund; if just one of them Steals, that player gets the entire fund; if both Steal, neither gets anything.
This scenario is well known in Game Theory, under the title ‘The Prisoners’ Dilemma’. Suppose your opponent chooses Split: then you are better off if you Steal. And if the opponent chooses Steal, you won’t get anything anyway. So whatever choice the other makes, you can argue that choosing Steal will never lose. Frequently, both select Steal, and the only winner is the TV production company who pay out zero.
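A toy payoff table (the fund size is arbitrary) makes the dominance argument concrete: whichever column your opponent picks, the Steal row never pays you less than the Split row.

```python
fund = 10_000
payoff = {("Split", "Split"): fund / 2, ("Split", "Steal"): 0,
          ("Steal", "Split"): fund,     ("Steal", "Steal"): 0}
for mine in ("Split", "Steal"):
    print(mine, [payoff[mine, theirs] for theirs in ("Split", "Steal")])
```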
Versions of Deal or No Deal have been shown in over seventy countries. In the UK, there are 22 sealed boxes containing different amounts ranging from 1p to £250,000. The boxes are allocated randomly to 22 players, one of whom, Amy, will play that day. Her own box remains closed until the end. She first selects five other boxes, whose contents are revealed. A banker then offers a sum of money in exchange for the amount in her box. To accept this, she says ‘Deal’, ending the game, while the words ‘No Deal’ reject this offer. If the game continues, more boxes are opened, a new offer is made, and so on.
At the time of any offer, the exact amounts still in play are known, so their mean is easily calculated. In the early rounds, the banker’s offer is normally far less than this amount, but Amy must have her utility function firmly in sight: if she strongly desires £5,000, and the offer is £5,400, she could rationally accept, even if the mean amount in the boxes left is over £20,000 – she might end up with 1p if she hangs on.
One time in 22, Amy will own the box with the top prize, but she will win that amount far less often. Utilities give a convincing explanation. At the final decision point, two boxes remain, one with £250,000 and the other with maybe £2. If the banker offers £80,000, even though this is well below the mean value of £125,001, only the bravest or richest Amy will reject it. A bird in the hand . . .
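Utility can be made concrete with a sketch. Here the square root stands in for Amy’s utility, purely for illustration: the sure £80,000 carries more utility than the gamble, even though the gamble has the higher mean.

```python
from math import sqrt

offer, high, low = 80_000, 250_000, 2
print("mean of gamble:", (high + low) / 2)                      # 125,001
print("utility of sure offer:", sqrt(offer))                    # about 283
print("mean utility of gamble:", (sqrt(high) + sqrt(low)) / 2)  # about 251
```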
Provided the banker always offers less than the mean amount in the remaining boxes, the Law of Large Numbers ensures that, in the long run, contestants who Deal take home less than the amount in their box. So a real bank, which paid out the offer but also received the amount in the contestant’s box whenever she Dealt, would make a long-run profit.
The Colour of Money was billed as the most stressful show on TV, yet it survived only a few episodes in 2009. But it does give a splendid opportunity to illustrate uses of the Addition and Multiplication Laws in finding a probability.
The sums £1,000, £2,000, . . . , £20,000 were randomly allocated to twenty boxes of different colours, and the player, Paula, sought to reach some pre-assigned target, say £64,000. To do so, she could select up to ten boxes, one at a time. If she (unknowingly) chose the £14,000 box, the amounts £1,000, £2,000, . . . up to £14,000 would appear in that order at a stately pace: she could call Stop at any stage. If she made that call in time, she banked the amount last showing, but if she waited too long, she banked nothing. If, after ten boxes, she had not reached the target, she won zero. What tactics should she use?
Colour apart, all the boxes are identical, so Paula makes a completely random selection from those left in each round. In her last round, with eleven boxes left, her strategy will be obvious: for example, if she needs a further £9,000 to hit her target and exactly six boxes are worth £9,000 or more, she will hope to call Stop when £9,000 is shown, and her winning chance is 6/11. But what should she do in earlier rounds?
Perhaps the twelve boxes left with two rounds to go contain (in units of £1,000) the amounts 1, 4, 5, 6, 9, 10, 12, 13, 15, 17, 19, 20, and she requires another £15,000. It makes no sense to call Stop when she sees £7,000; if that figure ever appears, she knows that her box contains at least £9,000, so she could Stop at that sum, plainly better tactics. She can restrict her options to selecting from the twelve values in the boxes. The same argument also applies at the earlier rounds – her best call of Stop will always be at a value corresponding to one of the remaining boxes.
If Paula does intend to Stop at £9,000 here, she can argue: ‘Eight of the twelve boxes have at least that amount, so my chance of success is 8/12. And if I do succeed, I’ll need just £6,000 in the final round, and then eight of the last eleven boxes will work. The Multiplication Law tells me that the chance of both of these is (8/12) × (8/11) = 64/132. Also four boxes have less than £9,000, so the chance I bank nothing first go is 4/12; I then need £15,000 from the last box, with chance 4/11. By the Multiplication Law again, the chance this path will work is (4/12) × (4/11) = 16/132. These ways of winning are disjoint, so the Addition Law gives the overall chance of success as 80/132.’
She can make a similar analysis for her other choices, such as going for £6,000, or £12,000. I invite you to do the sums – the Appendix describes her best choice.
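If you would rather delegate the sums, the sketch below does them all, under my reading of the rules: calling Stop at a value t banks t whenever the chosen box holds at least t, banks nothing otherwise, and in the final round she makes the obvious best call.

```python
from fractions import Fraction

boxes = [1, 4, 5, 6, 9, 10, 12, 13, 15, 17, 19, 20]   # units of £1,000
target = 15                                            # still needed

def last_round(remaining, needed):
    """Success chance with one box left to choose."""
    if needed <= 0:
        return Fraction(1)
    return Fraction(sum(v >= needed for v in remaining), len(remaining))

for stop in boxes:
    p = Fraction(0)
    for first in boxes:                    # the box she happens to pick
        rest = [b for b in boxes if b != first]
        if first >= stop:                  # the count reaches `stop`: bank it
            p += Fraction(1, len(boxes)) * last_round(rest, target - stop)
        else:                              # the count stops short: bank nothing
            p += Fraction(1, len(boxes)) * last_round(rest, target)
    print(f"Stop at {stop:2d}: {float(p):.3f}")
```

Stopping at £9,000 duly comes out at 80/132, and the table lets you compare her other choices at a glance.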
In the planning stages of this show, the idea of using an expert mathematician to offer running advice was mooted. Paula could suggest she will try to call Stop at £8,000; the expert might say ‘Not a bad choice. If you do that, you’ve got a 75% chance of winning the money. But if you plan to Stop at £11,000, your chance goes up to 80%.’
You can well imagine what could happen! Everything the expert said was correct, Paula changed her call – and failed to win the money, while her original instinct would have worked. Some tabloid newspaper would surely scream ‘Maths Boffin Robs Army Hero’s Widow of £64,000’.
All of us who investigate the maths of TV game shows are relieved that no such mathematical advice was ever given on this show!
Card games
The Law of Large Numbers means that you will receive your fair share of good or bad cards in the long run, so differences in skill levels will tell eventually. We look at three popular games.

In Blackjack, the house must follow fixed rules about when to draw cards; the player can do what he likes. Until Edward Thorp started winning significant sums, casinos believed that no system could beat their built-in advantage. Their logic had a fatal flaw: although they could expect to win 1–2% of stakes with a full stack of six or eight decks of cards, after a few deals the odds might shift in favour of the player. The casinos had relied on the probabilities calculated for a full stack, rather than the conditional probabilities based on what cards remained unused.
Thorp developed a way of keeping track of which cards were left in the stack. When there is a high proportion of high value cards remaining, it becomes more likely that the rules compel the house to draw a card that leads to a losing total above twenty-one: in the same circumstances, the player can opt not to draw. Thorp would bet the minimum amount so long as the stack had a low or average proportion of high value cards, then larger amounts should the stack composition shift in his favour. Simple, but effective.
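A toy version of that bookkeeping is sketched below: a simplified ‘high-low’ count, not Thorp’s actual system. Low cards leaving the stack favour the player, so each one seen raises the running count, and the bet is raised only when the count per remaining deck passes a threshold (the +2 here is an arbitrary illustrative choice).

```python
import random

HI_LO = {2: 1, 3: 1, 4: 1, 5: 1, 6: 1,      # low cards gone: good for us
         7: 0, 8: 0, 9: 0,                  # neutral
         10: -1, 11: -1}                    # tens, faces, aces gone: bad
ONE_DECK = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11] * 4   # 11 = ace

def raised_bet_fraction(decks=6, trials=2_000):
    """How often does the count signal a bet above the minimum?"""
    raised = seen = 0
    for _ in range(trials):
        stack = ONE_DECK * decks
        random.shuffle(stack)
        running = 0
        for i, card in enumerate(stack[:-26]):    # stop half a deck early
            running += HI_LO[card]
            decks_left = (len(stack) - i - 1) / 52
            seen += 1
            raised += running / decks_left >= 2   # the 'count per deck' test
    return raised / seen

print(f"{raised_bet_fraction():.1%} of positions call for a raised bet")
```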
When the stack composition does give an advantage to the player, how much should he bet? John Kelly had answered precisely that question a few years before Thorp’s analysis: he should bet that proportion of his capital that is equal to the size of his advantage. This choice maximizes the rate at which his capital will grow.
For example, suppose he has £1,000 and the game is slightly in his favour – his winning chance is 51%, his losing chance is 49%. His advantage is 2%, so he bets 2% of his current capital, i.e. £20. Next time, he will have either £980 or £1,020, so if his advantage remains at 2%, his bet will be either £19.60 or £20.40, according to which outcome pertained. If he were too greedy – betting 10% of his capital when Kelly indicates just 2% – then he would eventually be ruined, despite his advantage. His capital is finite, and the stake would be too high to stand the inevitable losing streak.
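Kelly’s rule maximizes the mean growth rate of log-capital, and that rate tells you each staking rule’s long-run fate. A sketch, assuming repeated independent even-money bets at a fixed 51% winning chance:

```python
from math import exp, log

def growth_rate(fraction, p_win=0.51):
    """Mean growth rate of log-capital per even-money bet."""
    return p_win * log(1 + fraction) + (1 - p_win) * log(1 - fraction)

for f in (0.01, 0.02, 0.03, 0.10):
    factor = exp(10_000 * growth_rate(f))
    print(f"stake {f:.0%}: multiplies capital by {factor:.3g} over 10,000 bets")
```

The 2% stake multiplies capital about sevenfold over 10,000 bets; staking 1% or 3% does worse, and the greedy 10% stake all but wipes the gambler out.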
Casinos take steps to identify and ban proficient card counters. No finer tribute to the power of understanding probability has ever been paid.
We noted that Bayes’ Rule is the proper way to see how pieces of evidence should change our beliefs about Guilt or Innocence in a court case. In card games like Whist or Bridge, using this Rule can improve your chances of making the best decisions during play. For convenience, I retain the legal vocabulary, and use the word Guilty to mean that a particular opponent holds certain cards, say both King and Queen of Hearts, while Innocent means that she holds at most one of those cards.
By counting, we can find the proportion of all possible deals where she holds both cards, to give an initial assessment of the probability of ‘Guilt’. It turns out best to convert this probability into its equivalent odds, in the standard manner. As this calculation is made at the outset, we say that we have found the Prior odds (of Guilt).
As cards are played, relevant Evidence emerges – perhaps she plays the King of Hearts on a trick. To see how such Evidence affects the odds of Guilt, a quantity termed the Likelihood Ratio is found: first, assess the probability of the Evidence assuming Guilt (she holds both King and Queen), then find its probability assuming Innocence (she has at most one of them). The Likelihood Ratio is just the ratio of the first to the second.
We can now deduce the Posterior odds, i.e. the odds of Guilt taking account of this Evidence, using Bayes’ Rule, which says

Posterior odds = Prior odds × Likelihood Ratio.
Its format is plainly sensible: the bigger the Likelihood Ratio (i.e. the more likely is the Evidence when the opponent is Guilty), the more the odds of Guilt increase – but this Rule tells you precisely how much the Evidence affects the chances of Guilt.
To see it in operation, consider a realistic situation: our opponent either holds just the King and Queen (Guilt), or she has the King only (Innocence); the Prior odds are that those alternatives are just about equally likely. If she is Guilty you do best to play the Ace, if she is Innocent you should play some other card. Evidence now appears – she plays the King.
Without the Evidence, you must guess, and you will make the winning play half the time. With Innocence (she has King alone), the probability of the Evidence (she played the King) is 100%; but with Guilt (she had both King and Queen), she might equally well have played the Queen rather than the King that you saw, so the probability of the Evidence is only 50%. Their ratio is one half, so the Rule tells you that the Posterior odds are one half, i.e. she is twice as likely to be Innocent as Guilty – she is twice as likely to have the King alone. Not playing the Ace is the right decision two thirds of the time.
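The whole calculation fits in a few lines; exact fractions keep the odds tidy.

```python
from fractions import Fraction

prior_odds = Fraction(1, 1)           # Guilt and Innocence equally likely
p_evidence_guilty = Fraction(1, 2)    # with K and Q, the K is played half the time
p_evidence_innocent = Fraction(1, 1)  # with K alone, she had to play it
likelihood_ratio = p_evidence_guilty / p_evidence_innocent

posterior_odds = prior_odds * likelihood_ratio   # Bayes' Rule
print(posterior_odds)                            # 1/2: Innocence twice as likely
print(posterior_odds / (1 + posterior_odds))     # P(Guilt) = 1/3
```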
If, by this proper use of probability, you will make the winning play two thirds of the time, rather than just half the time, you should expect to do much better. You cannot guarantee to make the winning play, but you can improve your chances of doing so.
Bridge players refer to this scenario as the Principle of Restricted Choice – if the opponent had King alone she had to play it, with both King and Queen she had a choice. The fact that she did play the King shifts the odds towards her having to do so.
Today, the most popular form of Poker is Texas Hold’Em. Each player is dealt two cards and seeks to make the best poker hand possible from her own cards, and five communal cards that are dealt face up later. Which of the following hands is best, in the sense of being more likely to beat either of the other two when those communal cards are dealt?
Hand A: Two of Clubs, Two of Spades
Hand B: Ace of Spades, King of Diamonds
Hand C: Jack and Ten of Hearts.
Trick question, of course: after careful counting, it turns out that Hand A will beat B about 52% of the time, B beats C 59% of the time, while the chance C beats A is around 53%. So you would rather hold A than B, and rather hold B than C, but also you prefer C to A! Your chance of winning exceeds 50% if you let your opponent pick any of the three hands, provided you may then choose either of the others for yourself.
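You can check this non-transitive cycle by simulation. The sketch below assumes the third-party treys package, whose Evaluator scores the best five-card hand from two hole cards and a five-card board (lower scores are stronger); ties are discarded before quoting each percentage.

```python
import random
from treys import Card, Evaluator    # third-party: pip install treys

hands = {"A": [Card.new("2c"), Card.new("2s")],
         "B": [Card.new("As"), Card.new("Kd")],
         "C": [Card.new("Jh"), Card.new("Th")]}
deck = [Card.new(r + s) for r in "23456789TJQKA" for s in "cdhs"]
evaluator = Evaluator()

def beats(x, y, trials=20_000):
    """Estimate the chance hand x beats hand y (ties discarded)."""
    used = set(hands[x] + hands[y])
    wins = ties = 0
    for _ in range(trials):
        board = random.sample([c for c in deck if c not in used], 5)
        sx = evaluator.evaluate(board, hands[x])
        sy = evaluator.evaluate(board, hands[y])
        wins += sx < sy
        ties += sx == sy
    return wins / (trials - ties)

for x, y in (("A", "B"), ("B", "C"), ("C", "A")):
    print(f"{x} beats {y} about {beats(x, y):.0%} of the time")
```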
There is far more to poker than facility with probabilities. You must make judgements about what cards your opponents are likely to hold, and when you might bluff. But sometimes probability is very useful. Suppose the pot has 50 chips, and one communal card remains to be dealt. You can see that, if this final card is a Spade, you will make a Flush, which must win; if it is not a Spade, an opponent will win. Should you bet more chips to remain in the game?
Ignore how much you have already put in the pot. All that matters is the future. You can see six cards – two in your hand, four communal cards on the table. Of the 46 unknown cards, nine are Spades that give you victory, the rest lead to defeat. With 50 chips already in the pot, is it worth paying 10 more to see the final card dealt? 20 more?
By working out the mean profit (or loss) if you must pay x chips to see the final card, find the cut-off value of x that will, on average, give a profit. The Appendix gives the answer.
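As a start, here is the mean-profit calculation under the simple model just described: 9 of the 46 unseen cards win you the 50 chips already in the pot, and the other 37 lose whatever you paid.

```python
from fractions import Fraction

def mean_profit(x, pot=50, outs=9, unseen=46):
    """Mean profit from paying x chips to see the final card."""
    return Fraction(outs, unseen) * pot - Fraction(unseen - outs, unseen) * x

for x in (10, 20):
    print(f"pay {x}: mean profit {float(mean_profit(x)):+.1f} chips")
```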