A friend recently posted the following problem:

There are 10 green balls, 20 red balls, and 25 blue balls in a jar. I choose a ball at random. If I choose a green ball then I take out all the green balls, if I choose a red ball then I take out all the red balls, and if I choose a blue ball I take out all the blue balls. What is the probability that I will choose a red ball on my second try?

The math works out fairly easily. It’s the probability of first drawing a green ball AND then drawing a red ball, OR the probability of drawing a blue ball AND then drawing a red ball.

\[
\frac{10}{10+20+25} \times \frac{20}{20+25} + \frac{25}{10+20+25} \times \frac{20}{10+20} \approx 0.3838
\]

But I always prefer simulations to pure probability calculations, so let’s break out the R code like we did for the Monty Hall Problem and for calculating lottery odds.  The results are after the break.

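A minimal sketch of such a simulation (the function name is mine) goes like this: draw a ball, remove every ball of that color, draw again and check whether the second ball is red.

# simulate one trial: draw a ball, remove all balls of that color, draw again
drawSecondRed <- function()
{
    balls <- c(rep("Green", 10), rep("Red", 20), rep("Blue", 25))

    # first draw
    first <- sample(balls, size=1)

    # take out every ball matching the color of the first draw
    remaining <- balls[balls != first]

    # second draw; TRUE if it is red
    second <- sample(remaining, size=1)
    return(second == "Red")
}

mean(replicate(n=10000, expr=drawSecondRed()))

The proportion of TRUEs should land very close to the 0.3838 computed above.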

Michael Malecki recently shared a link to a Business Insider article that discussed the Monty Hall Problem.

The problem starts with three doors: behind one is a car and behind the other two are goats. You choose one door at random, then the host reveals one door (not the one you chose) that holds a goat. You can then choose to stick with your door or switch to the remaining third door.

Probability theory states that people who switch win the car two-thirds of the time and those who don’t switch only win one-third of the time.

But people often still do not believe they should switch based on the probability argument alone. So let’s run some simulations.

This function randomly assigns goats and cars behind three doors, chooses a door at random, reveals a goat door, then either switches doors or does not.

monty <- function(switch=TRUE)
{
    # randomly assign goats and cars
    doors <- sample(x=c("Car", "Goat", "Goat"), size=3, replace=FALSE)

    # randomly choose a door
    doorChoice <- sample(1:3, size=1)

    # get goat doors
    goatDoors <- which(doors == "Goat")
    # show a door with a goat
    goatDoor <- goatDoors[which(goatDoors != doorChoice)][1]

    if(switch)
    {
        # if we are switching, choose the other remaining door
        return(doors[-c(doorChoice, goatDoor)])
    } else
    {
        # otherwise keep the current door
        return(doors[doorChoice])
    }
}

Now we simulate switching 10,000 times and not switching 10,000 times.

withSwitching <- replicate(n = 10000, expr = monty(switch = TRUE), simplify = TRUE)
withoutSwitching <- replicate(n = 10000, expr = monty(switch = FALSE), simplify = TRUE)

head(withSwitching)
## [1] "Goat" "Car"  "Car"  "Goat" "Car"  "Goat"
head(withoutSwitching)
## [1] "Goat" "Car"  "Car"  "Car"  "Car"  "Car"

mean(withSwitching == "Car")
## [1] 0.6678
mean(withoutSwitching == "Car")
## [1] 0.3408

Plotting the results really shows the difference.

require(ggplot2)
## Loading required package: ggplot2
require(scales)
## Loading required package: scales
qplot(withSwitching, geom = "bar", fill = withSwitching) + scale_fill_manual("Prize", 
    values = c(Car = muted("blue"), Goat = "orange")) + xlab("Switch") + ggtitle("Monty Hall with Switching")

qplot(withoutSwitching, geom = "bar", fill = withoutSwitching) + scale_fill_manual("Prize", 
    values = c(Car = muted("blue"), Goat = "orange")) + xlab("Don't Switch") + 
    ggtitle("Monty Hall without Switching")

(How are these colors? I’m trying out some new combinations.)

This clearly shows that switching is the best strategy.

The New York Times has a nice simulator that lets you play with actual doors.

Thanks to Rachel Schutt, with whom I’m teaching at Columbia, and Cathy O’Neil from MathBabe, I had the opportunity to go on TV and talk about the statistics of tonight’s Powerball lottery.

There’s an article with a brief quote from me and a video where I make a very quick appearance at the 1:14 mark.  My interview during the live broadcast actually went on for about three minutes, but I can’t find that online.  If I can transfer the video from my DVR, I’ll post that too.

In the longer interview I discussed the probability of winning, the expected value of a given ticket and other such statistical nuggets.  In particular I broke down how choosing numbers based on birthdays eliminates any number higher than 31, meaning you are missing out on 28 of the 59 possible numbers, all of which are uniformly distributed.  Hopefully I’ll find that longer cut.
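As a rough illustration of just how limiting that is, one line of R shows what fraction of the possible five-number combinations use only birthday numbers, assuming five white balls drawn from the 59 as above:

# fraction of five-number combinations drawn entirely from 1 through 31:
# about 0.034, so birthday players are fishing in barely 3% of the combinations
choose(31, 5) / choose(59, 5)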


[Figure: Distribution of Lottery Winners based on 1,000 Simulations]

With tonight’s Mega Millions jackpot estimated to be over $640 million, there are long lines of people waiting to buy tickets.  Of course you always hear about the probability of winning, which is easy enough to calculate: five numbers ranging from 1 through 56 are drawn (without replacement), then a sixth ball is pulled from a set of 1 through 46.  That means there are choose(56, 5) * 46 = 175,711,536 different possible combinations.  That is why people are constantly reminded of how unlikely they are to win.

But I want to see how likely it is that SOMEONE will win tonight.  So let’s break out R and ggplot!

As of this afternoon it was reported (sorry no source) that two tickets were sold for every American.  So let’s assume that each of these tickets is an independent Bernoulli trial with probability of success of 1/175,711,536.

Running 1,000 simulations we see the distribution of the number of winners in the histogram above.

So we shouldn’t be surprised if there are multiple winners tonight.

The R code:

# two tickets per American is roughly 600 million independent Bernoulli trials,
# each with roughly a 1 in 175 million chance of winning
winners <- rbinom(n=1000, size=600000000, prob=1/175000000)
qplot(winners, geom="histogram", binwidth=1, xlab="Number of Winners")
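As a quick sanity check on those simulations, the expected number of winners and the chance that nobody wins at all can be computed directly from the same rounded figures:

# expected number of winning tickets: about 3.4
600000000 / 175000000

# probability that not a single ticket wins: about 0.03
dbinom(0, size=600000000, prob=1/175000000)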

Shortly after the Giants’ fantastic defeat of the Patriots in Super Bowl XLVI (I was a little disappointed that Eli, Coughlin and the Vince Lombardi Trophy all got off the parade route early and the views of City Hall were obstructed by construction trailers, but Steve Weatherford was awesome as always), a friend asked me to settle a debate amongst some people in a Super Bowl pool.

He writes:

We have 10 participants in a Super Bowl pool.  The pool is a “pick the player who scores first” type pool.  In a hat, there are 10 Giants players.  Each participant picks 1 player out of the hat (in no particular order) until the hat is emptied.  Then 10 Patriots players go in the hat and each participant picks again.

In the end, each of the 10 participants has 1 Giants player and 1 Patriots player.  No one has any duplicate players as 10 different players from each team were selected.  Pool looks as follows:

Participant 1 Giant A Patriot Q
Participant 2 Giant B Patriot R
Participant 3 Giant C Patriot S
Participant 4 Giant D Patriot T
Participant 5 Giant E Patriot U
Participant 6 Giant F Patriot V
Participant 7 Giant G Patriot W
Participant 8 Giant H Patriot X
Participant 9 Giant I Patriot Y
Participant 10 Giant J Patriot Z

Winners = First Player to score wins half the pot.  First player to score in 2nd half wins the remaining half of the pot.

The question is, what are the odds that someone wins both the 1st and 2nd half?  Remember, the picks were random.

Before anyone asks about the safety, one of the slots was for Special Teams/Defense.

There are two probabilistic ways of thinking about this.  Both hinge on the fact that who scores first in the first half and who scores first in the second half are independent, and that the same participant can win both halves.

First, let’s look at the two halves individually.  In a given half any of 20 players can score first (10 from the Giants and 10 from the Patriots) and an individual participant can win with two of those.  So a participant has a 2/20 = 1/10 chance of winning a half.  Thus that participant has a (1/10) * (1/10) = 1/100 chance of winning both halves.  Since there are 10 participants, there is an overall probability of 10 * (1/100) = 1/10 that one of the participants wins both halves.

The other way is to think a little more combinatorially.  There are 20 * 20 = 400 different combinations of players scoring first in each half.  A participant has two players, each valid for each half, giving them four of the possible combinations and a 4 / 400 = 1/100 probability that a single participant will win both halves.  Again, there are 10 participants, giving an overall 10% chance that one of the participants wins both halves.

Since both methods agreed I am pretty confident in the results, but just in case I ran some simulations in R, which you can find after the break.

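Here is a minimal sketch of that simulation (the function name is mine, and it assumes every one of the 20 players is equally likely to score first in each half, just as the calculations above do):

# one simulated game: assign players to participants, then draw the first
# scorer of each half uniformly from all 20 players
poolSim <- function()
{
    # owners[k] is the participant holding player k;
    # players 1-10 are Giants, players 11-20 are Patriots
    owners <- c(sample(1:10), sample(1:10))

    # the first scorer in each half
    firstHalf <- sample(1:20, size=1)
    secondHalf <- sample(1:20, size=1)

    # TRUE if the same participant holds both first scorers
    return(owners[firstHalf] == owners[secondHalf])
}

mean(replicate(n=10000, expr=poolSim()))

The proportion of TRUEs should come out near the 10% computed above.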

With the Super Bowl only hours away, now is your last chance to buy your boxes.  Assuming the last digits are not assigned randomly, you can maximize your chances with a little analysis.  While I’ve seen plenty of sites giving the raw numbers, I thought a little visualization was in order.

In the graph above (made using ggplot2 in R, of course) the bigger squares represent greater frequency.  The axes are labelled “Home” and “Away” for orientation, but in the Super Bowl that probably doesn’t matter too much, especially considering that Indianapolis is (Peyton) Manning territory so the locals will most likely be rooting for the Giants.  Further, I believe Super Bowl XLII, featuring the same two teams, had a disproportionate number of Giants fans.  Bias disclaimer:  GO BIG BLUE!!!

Below is the same graph broken down by year to see how the distribution has changed over the past 20 years.

All the data was scraped from Pro Football Reference.  All of my code and other graphs that didn’t make the cut are at my github site.
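For anyone who doesn’t want to dig through the repository, here is a rough sketch of the plotting idea using made-up digits (the real plot uses the scraped scores):

require(ggplot2)

# made-up last digits purely for illustration
fakeScores <- data.frame(Home=sample(0:9, size=500, replace=TRUE),
                         Away=sample(0:9, size=500, replace=TRUE))

# bigger squares where a digit combination occurs more often
ggplot(fakeScores, aes(x=factor(Home), y=factor(Away))) +
    geom_count(shape=15) +
    labs(x="Home", y="Away")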

As always, send any questions my way.

A post on Slashdot caught my attention.  It was about a microchip from Lyric Semiconductor that does calculations using analog probabilities instead of digital bits of 1’s and 0’s.

The article says that this will both make flash storage more efficient and make statistical calculations quicker.  I doubt it will help with fitting simple regressions, which have a fixed formula, but the first thing that came to mind was Bayesian problems, especially Markov chain Monte Carlo (MCMC).  Using BUGS to run these simulations can be VERY time consuming, so a faster approach would make the lives of many statisticians much easier.  The article did mention that the chip uses Bayesian NAND gates as opposed to digital NAND gates, but I don’t know how that relates to MCMC.

I reached out to my favorite Bayesian, Andy Gelman, to see what he thinks.  I’ll report back on what he says.