Archiver > GENEALOGY-DNA > 2010-11 > 1290249149

From: James Heald <>
Subject: Re: [DNA] Odds Are, It's Wrong - 5% of the time
Date: Sat, 20 Nov 2010 10:32:32 +0000
References: <F9C440A2-FC59-4A9E-AAAC-85DEE9D2FAB0@GMAIL.COM><COL115-W50D879F102DC3996D9D454A03A0@phx.gbl>
In-Reply-To: <COL115-W50D879F102DC3996D9D454A03A0@phx.gbl>

On 19/11/2010 23:36, Steven Bird wrote:
> The headline should have read:
> "Tom Siegfried fails to display any real understanding of statistics."
> 95% confidence interval means that you have a 1 in 20 chance of being wrong. If that isn't good enough for your purposes, then use a broader confidence interval, say 99.7% (3 sigmas). There's no mystery here.
> A steroids test that misidentifies 5% of the time is a bad test. Don't use it. However, if I have a 19 out of 20 chance of winning a bet, I'll take it. If I lose, I'll do it again double or nothing. Then the odds are now .05 * .05 that I will lose, or .0025. Third time's a charm! Double or nothing at .05 * .05 * .05 = .000125, or a .0125% chance I'm wrong; about one chance in eight thousand that I will lose three times in a row. I'll take those odds any time.
> Now, if I had only a 95% chance of making it home in my car tonight (a 5% chance of a fatal accident), I would probably consider postponing the trip to a more favorable time. :-) It all depends on your needs.

No. A 95% confidence interval does NOT mean that you have a 1 in 20
chance of being wrong.

See Box 2 and Box 4 in the article for examples.

To calculate, for example, the probability that the dog is hungry (Box
2), as the article says, you actually need to know how often the dog is
fed, and so how much of the day the dog is hungry for -- i.e. the prior
probability.
If (hypothetically) one imagines the dog was fed so often that it was
/never/ hungry, and you knew that, then assuming the dog was hungry
because you heard it barking would be wrong with a 20 in 20 chance --
even if the dog only barks 5% of the time when it is not hungry.
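The arithmetic behind this can be sketched with Bayes' theorem (my own
illustration, not from the article: the 5% false-bark rate matches the
example, the other numbers are hypothetical):

```python
def p_hungry_given_bark(prior_hungry, p_bark_if_hungry=1.0, p_bark_if_not=0.05):
    """P(hungry | bark) via Bayes' theorem, given a prior P(hungry)."""
    # Total probability of hearing a bark at all:
    p_bark = (p_bark_if_hungry * prior_hungry
              + p_bark_if_not * (1.0 - prior_hungry))
    if p_bark == 0.0:
        return 0.0  # under these assumptions the dog never barks
    return p_bark_if_hungry * prior_hungry / p_bark

# A dog that is never hungry: the bark tells you nothing, you are
# wrong 20 times in 20 despite the 5% false-bark rate.
print(p_hungry_given_bark(prior_hungry=0.0))   # 0.0

# Only with a substantial prior does the 5% rate buy you confidence:
print(p_hungry_given_bark(prior_hungry=0.5))   # ~0.952
```

The posterior swings from 0 to about 0.95 purely on the prior, while
the 5% false-bark rate stays fixed -- which is exactly why the prior
cannot be left out of the calculation.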

Box 4 presents a further similar numerical example.

In essence the article simply presents the standard Bayesian critique of
confidence intervals -- which has been around pretty much ever since
confidence intervals were invented (see for example Harold Jeffreys).

Unfortunately, the fact that confidence intervals are still so
misunderstood (as witnessed by your post, and also by people still using
confidence methods rather than Bayesian methods to report e.g. ranges of
plausible TMRCAs) shows that, sadly, there still remains a need for
articles like this even 80 years on.
