Bayes and the probability of hypotheses - summary of Chapter 4 of Understanding Psychology as a Science by Dienes
Objective probability: a long-run relative frequency.
Classic (Neyman-Pearson) statistics can tell you the long-run relative frequency of different types of errors.
- Classic statistics do not tell you the probability of any hypothesis being true.
An alternative approach to statistics is to start with what Bayesians say are people’s natural intuitions.
People want statistics to tell them the probability of their hypothesis being right.
Subjective probability: the personal degree of conviction in a hypothesis.
Subjective probability
Subjective or personal probability: the degree of conviction we have in a hypothesis.
Probabilities are in the mind, not in the world.
The first problem in using subjective probabilities is how to assign a precise number to how probable you think a proposition is.
The initial personal probability that you assign to any theory is up to you.
Sometimes it is useful to express your personal convictions in terms of odds rather than probabilities.
Odds(theory is true) = probability(theory is true) / probability(theory is false)
Probability = odds / (odds + 1)
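As a concrete illustration (not from the chapter), here is a minimal Python sketch of the two conversions; the 0.8 starting value is an arbitrary example:

```python
def probability_to_odds(p):
    """Odds in favour of a theory: P(true) / P(false)."""
    return p / (1 - p)

def odds_to_probability(odds):
    """Recover the probability: odds / (odds + 1)."""
    return odds / (odds + 1)

# If you are 80% sure a theory is true, the odds in favour are 4 to 1.
print(probability_to_odds(0.8))   # 4.0
print(odds_to_probability(4.0))   # 0.8
```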
These numbers we get from deep inside us must obey the axioms of probability.
This is the stipulation that ensures the way we change our personal probability in a theory is coherent and rational.
- People’s intuitions about how to change probabilities in the light of new information are notoriously bad.
This is where the statistician comes in and forces us to be disciplined.
There are only a few axioms, each more-or-less self-evidently reasonable.
- Two axioms effectively set limits on what values probabilities can take: all probabilities lie between 0 and 1.
- P(A or B) = P(A) + P(B), if A and B are mutually exclusive.
- P(A and B) = P(A) x P(B|A), where P(B|A) is the probability of B given A.
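A minimal sketch of the last two axioms in use, with standard textbook illustrations (a fair die and a card deck; these examples are mine, not the chapter's):

```python
# Addition axiom: P(A or B) = P(A) + P(B) for mutually exclusive A and B.
# Rolling a 1 or a 2 on a fair six-sided die:
p_1_or_2 = 1 / 6 + 1 / 6  # = 1/3

# Conjunction axiom: P(A and B) = P(A) * P(B|A).
# Drawing two aces in a row from a 52-card deck without replacement:
p_first_ace = 4 / 52
p_second_ace_given_first = 3 / 51  # P(B|A): one ace is already gone
p_two_aces = p_first_ace * p_second_ace_given_first  # ~0.0045

print(p_1_or_2, p_two_aces)
```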
Bayes’ theorem
H is the hypothesis
D is the data
P(H and D) = P(D) x P(H|D)
P(H and D) = P(H) x P(D|H)
so
P(D) x P(H|D) = P(H) x P(D|H)
Dividing both sides by P(D) gives
P(H|D) = P(D|H) x P(H) / P(D)
This last equation is Bayes' theorem.
It tells you how to go from one conditional probability to its inverse.
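A worked numerical example may help here (the numbers are hypothetical, chosen only to show the update):

```python
# Hypothetical prior and likelihoods for a theory H and observed data D.
p_h = 0.5              # prior: P(H)
p_d_given_h = 0.8      # P(D|H): probability of the data if H is true
p_d_given_not_h = 0.3  # P(D|not-H): probability of the data if H is false

# P(D) via the law of total probability over H and not-H.
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D).
p_h_given_d = p_d_given_h * p_h / p_d
print(round(p_h_given_d, 3))  # 0.727: the data raise P(H) from 0.50 to about 0.73
```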
We can simplify this equation if we are interested in comparing the probability of different hypotheses given the same data D.
Then P(D) is just a constant for all these comparisons.
P(H|D) is proportional to P(D|H) x P(H)
P(H) is called the prior.
It is how probable you consider the hypothesis to be before the data have been collected.
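To see why dropping P(D) is harmless when comparing hypotheses on the same data, note that the products likelihood x prior can always be rescaled to sum to 1. A sketch with made-up numbers for three competing hypotheses:

```python
# Hypothetical priors and likelihoods for three hypotheses H1, H2, H3,
# all evaluated against the same data D, so P(D) is a shared constant.
priors      = [0.5, 0.3, 0.2]  # P(H_i)
likelihoods = [0.1, 0.4, 0.4]  # P(D|H_i)

# The posterior is proportional to likelihood * prior ...
unnormalised = [l * p for l, p in zip(likelihoods, priors)]

# ... and dividing by the sum (which plays the role of P(D))
# turns these back into probabilities that sum to 1.
total = sum(unnormalised)
posteriors = [u / total for u in unnormalised]
print(posteriors)  # [0.2, 0.48, 0.32]
```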