Unit 5: Bayesian Hypothesis Tests
Evidence which, for a Bayesian statistician, strikingly supports the null often leads to rejection of the null by standard classical procedures.
Bayesian inference computes the probability of the null being true given the data, \(p ( H_0 \mid D )\). That is not the p-value, which conditions the other way round: the probability of data at least this extreme given that \(H_0\) is true. Why do they differ?
Surely they agree asymptotically?
How do we model the prior and compute likelihood ratios \(L ( H_0 \mid D )\) in the Bayesian framework?
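One step worth making explicit (standard Bayes' rule, added for reference; \(\pi_0\) denotes the prior probability \(p ( H_0 )\)): the likelihood ratio \(L = p ( D \mid H_0 ) / p ( D \mid H_1 )\) converts prior odds into posterior odds,
\[ \frac{p ( H_0 \mid D )}{p ( H_1 \mid D )} = L \, \frac{\pi_0}{1 - \pi_0}, \qquad p ( H_0 \mid D ) = \frac{L \pi_0}{L \pi_0 + ( 1 - \pi_0 )}, \]
so a large \(L\) pushes \(p ( H_0 \mid D )\) towards \(1\) under any non-degenerate prior.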
Edwards, Lindman and Savage (1963)
Simple approximation for the likelihood ratio in favour of the null: \[ L ( p_0 ) \approx \sqrt{\frac{2 n}{\pi}} \exp \left ( - \frac{1}{2} t^2 \right ), \] where \(t\) is the usual standardized test statistic for \(H_0 : p = p_0\).
For a fixed \(t\)-ratio this grows like \(\sqrt{n}\), so it will asymptotically favour the null: the same nominal level of classical significance constitutes ever stronger evidence for \(H_0\) as \(n\) increases.
Intuition: Imagine a coin-tossing experiment with \(n\) tosses and \(r\) heads, where you want to determine whether the coin is “fair”, \(H_0 : p = \frac{1}{2}\).
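Filling in the step behind the approximation (this assumes a uniform prior on \(p\) under \(H_1\), the prior that reproduces the table below): the marginal likelihood under the alternative is
\[ p ( D \mid H_1 ) = \int_0^1 \binom{n}{r} p^r ( 1 - p )^{n - r} \, dp = \frac{1}{n + 1}, \]
so
\[ L \left ( \tfrac{1}{2} \right ) = ( n + 1 ) \binom{n}{r} 2^{-n} \approx \sqrt{\frac{2 n}{\pi}} \exp \left ( - \frac{1}{2} t^2 \right ), \]
using the normal approximation to the binomial, with \(t = ( r - n/2 ) / ( \sqrt{n} / 2 )\).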
There are four experiments.
| | Expt 1 | Expt 2 | Expt 3 | Expt 4 |
|---|---|---|---|---|
| \(n\) | 50 | 100 | 400 | 10,000 |
| \(r\) | 32 | 60 | 220 | 5,098 |
| \(L(p_0)\) | 0.81 | 1.09 | 2.17 | 11.68 |
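Not from the original notes: a minimal Python sketch, assuming the uniform-prior binomial model sketched above, that recomputes the Bayes factors and adds the classical two-sided p-value for comparison.

```python
import math

def bayes_factor_exact(n, r):
    """Exact L(1/2) = (n + 1) * C(n, r) * 2**(-n) under a uniform prior
    on p under H1, computed in logs so large n does not overflow."""
    log_L = (math.log(n + 1)
             + math.lgamma(n + 1) - math.lgamma(r + 1) - math.lgamma(n - r + 1)
             - n * math.log(2))
    return math.exp(log_L)

def bayes_factor_approx(n, r):
    """Normal approximation from above: L(1/2) ~ sqrt(2n/pi) * exp(-t^2/2)."""
    t = (r - n / 2) / (math.sqrt(n) / 2)  # standardized test statistic
    return math.sqrt(2 * n / math.pi) * math.exp(-t * t / 2)

def p_value_two_sided(n, r):
    """Classical two-sided p-value via the normal approximation."""
    t = abs(r - n / 2) / (math.sqrt(n) / 2)
    return math.erfc(t / math.sqrt(2))  # equals 2 * (1 - Phi(t))

for n, r in [(50, 32), (100, 60), (400, 220), (10_000, 5_098)]:
    t = (r - n / 2) / (math.sqrt(n) / 2)
    print(f"n={n:6d}  r={r:5d}  t={t:4.2f}  p={p_value_two_sided(n, r):.3f}  "
          f"L_exact={bayes_factor_exact(n, r):5.2f}  "
          f"L_approx={bayes_factor_approx(n, r):5.2f}")
```

The p-value hovers around \(0.05\) in every experiment while the Bayes factor climbs; the computed values agree with the tabulated ones to within rounding (about \(\pm 0.02\)).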
Implications:
Classical: In each case the \(t\)-ratio is approximately \(2\), so we reject \(H_0\) (a fair coin) at the 5% level in each experiment.
Bayes: Holding the \(t\)-ratio fixed, \(L ( p_0 )\) grows without bound as \(n\) grows, so there is overwhelming evidence for \(H_0\). Connolly (1989) shows that the Monday effect disappears when you compute the Bayes version.
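To put experiment 4 in posterior terms (a worked step using the posterior-odds formula above, assuming equal prior odds \(\pi_0 = \frac{1}{2}\)):
\[ p ( H_0 \mid D ) = \frac{L}{1 + L} = \frac{11.68}{12.68} \approx 0.92 , \]
so data that a classical test rejects at the 5% level leave the Bayesian assigning roughly 92% probability to a fair coin.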