I was discussing probability with a colleague, and the conversation reminded me of an incident in college.
For the purposes of this narrative, I will present many of the verbal exchanges in quotation marks. That is just a storytelling convenience: except for the final punchline, I don't remember what was said verbatim.
In a probability class, the professor was introducing us to the concept of hypothesis testing. She asked us: "If I flip a coin 100 times and it lands on heads each time, how likely is it that it's a fair coin?"
What she meant to ask -- and it was a long time before I realized this -- was, "If I have a fair coin, and flip it 100 times, how likely is it to come up heads each time?"
The difference may seem subtle, but it's crucial. The answer to the question she meant to ask is (1/2)^100, which is tiny. But the question she actually asked cannot be answered without more information.
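For scale, the answer to the question she meant to ask can be computed exactly (a quick sketch using Python's standard library):

```python
from fractions import Fraction

# Probability that a fair coin lands heads on all 100 independent flips.
p_all_heads = Fraction(1, 2) ** 100

print(p_all_heads)         # 1 / 2**100
print(float(p_all_heads))  # roughly 8e-31
```

That's about 8 in 10^31: you would not expect to see it once in the lifetime of the universe.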
She expected a straightforward answer, but I said that it depends.
"On what?"
"On how certain you were that it was a fair coin before you started flipping it."
She insisted that that was irrelevant. It was really unlikely that I had a fair coin if I flipped it and got heads 100 times.
"If I pulled it from a drawer of coins, and I know that half -- or even 1% -- of the coins in the drawer are double-headed, sure. But what if I have absolutely perfect knowledge going in that it's a fair coin? Then, even after 100 heads in a row -- or 1000, or 10,000 -- I still know it's a fair coin."
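The drawer scenario is just Bayes' rule. Here's a sketch with illustrative numbers (the 1% figure comes from the story; everything else is hypothetical):

```python
# Hypothetical drawer: 1% of the coins are double-headed, the rest fair.
prior_fair = 0.99
prior_two_headed = 0.01

# Likelihood of observing 100 heads in a row under each hypothesis.
lik_fair = 0.5 ** 100  # astronomically small
lik_two_headed = 1.0   # a double-headed coin always shows heads

# Bayes' rule: P(fair | 100 heads)
evidence = prior_fair * lik_fair + prior_two_headed * lik_two_headed
posterior_fair = prior_fair * lik_fair / evidence
print(posterior_fair)  # vanishingly small: almost certainly double-headed

# But with a prior of exactly 1 on "fair", no evidence can move it.
posterior_certain = 1.0 * lik_fair / (1.0 * lik_fair + 0.0 * lik_two_headed)
print(posterior_certain)  # 1.0
```

The posterior depends entirely on the prior, which was the whole point of my objection: with certain prior knowledge that the coin is fair, the posterior stays at 1 no matter how many heads come up.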
We went back and forth for a while, each restating the question and the reasoning behind our answers. I didn't realize what she had meant to ask, and she had gotten so caught up in the exchange that she didn't notice her mistake. Eventually it became clear that the discussion wasn't productive, and she had to move on with the lesson.
"Oh, you're just a Bayesian," she told me...