WSRt, critical thinking - a summary of all articles needed in the fourth block of second-year psychology at the UvA
Critical thinking
Chapter 4 of Understanding Psychology as a Science by Dienes
Bayes and the probability of hypotheses
Objective probability: a long-run relative frequency.
Classic (Neyman-Pearson) statistics can tell you the long-run relative frequency of different types of errors.
An alternative approach to statistics is to start with what Bayesians say are people’s natural intuitions.
People want statistics to tell them the probability of their hypothesis being right.
Subjective (or personal) probability: the subjective degree of conviction we have in a hypothesis.
Probabilities are in the mind, not in the world.
The initial problem to address in making use of subjective probabilities is how to assign a precise number to how probable you think a proposition is.
The initial personal probability that you assign to any theory is up to you.
Sometimes it is useful to express your personal convictions in terms of odds rather than probabilities.
Odds(theory is true) = probability(theory is true)/probability(theory is false)
Probability = odds/(odds +1)
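A minimal sketch in Python (not from the book; the conviction of 0.75 is a made-up value) showing how the two conversion formulae above invert each other:

```python
# Converting between personal probability and odds, for a hypothetical
# conviction of 0.75 in a theory.

def prob_to_odds(p):
    """Odds in favour = P(theory is true) / P(theory is false)."""
    return p / (1 - p)

def odds_to_prob(odds):
    """Probability = odds / (odds + 1)."""
    return odds / (odds + 1)

print(prob_to_odds(0.75))   # 3.0  -> odds of 3:1 in favour of the theory
print(odds_to_prob(3.0))    # 0.75 -> back to the original probability
```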
These numbers we get from deep inside us must obey the axioms of probability.
This is the stipulation that ensures the way we change our personal probability in a theory is coherent and rational.
This is where the statistician comes in and forces us to be disciplined.
There are only a few axioms, each more-or-less self-evidently reasonable.
H is the hypothesis
D is the data
P(H and D) = P(D) x P(H|D)
P(H and D) = P(H) x P(D|H)
so
P(D) x P(H|D) = P(H) x P(D|H)
Dividing both sides by P(D) gives
P(H|D) = P(D|H) x P(H) / P(D)
This last equation is Bayes' theorem.
It tells you how to go from one conditional probability to its inverse.
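A small illustration with made-up numbers (not from the chapter) of going from P(D|H) to P(H|D); here P(D) is expanded over the hypothesis being true or false:

```python
# Illustrative numbers only: applying P(H|D) = P(D|H) x P(H) / P(D).

p_H            = 0.5       # prior probability of the hypothesis
p_D_given_H    = 0.8       # probability of the data if H is true
p_D_given_notH = 0.2       # probability of the data if H is false

# P(D) expanded over the two possibilities (H true or H false)
p_D = p_D_given_H * p_H + p_D_given_notH * (1 - p_H)

p_H_given_D = p_D_given_H * p_H / p_D
print(p_H_given_D)          # 0.8
```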
We can simplify this equation if we are interested in comparing the probability of different hypotheses given the same data D.
Then P(D) is just a constant for all these comparisons.
P(H|D) is proportional to P(D|H) x P(H)
P(H) is called the prior.
It is how probable you thought the hypothesis was prior to collecting data.
It is your personal subjective probability and its value is completely up to you.
P(H|D) is called the posterior.
It is how probable your hypothesis is to you, after you have collected data.
P(D|H) is called the likelihood of the hypothesis.
It is the probability of obtaining the data, given your hypothesis.
This tells you how you can update your prior probability in a hypothesis given some data.
Your prior can be up to you, but having settled on it, the posterior is determined by the axioms of probability.
From the Bayesian perspective, scientific inference consists precisely in updating one’s personal conviction in a hypothesis in the light of data.
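The following sketch shows this updating in the proportional form, using a made-up situation (three candidate population proportions and data of 7 successes in 10 trials); dividing by the sum of the unnormalised values plays the role of the constant P(D):

```python
# Posterior is proportional to likelihood x prior; normalising at the end
# recovers proper probabilities over the candidate hypotheses.

from math import comb

candidates = [0.3, 0.5, 0.7]      # hypothetical population proportions
prior      = [1/3, 1/3, 1/3]      # a flat prior over the three candidates

k, n = 7, 10                      # made-up data: 7 successes in 10 trials
likelihood = [comb(n, k) * p**k * (1 - p)**(n - k) for p in candidates]

unnormalised = [l * pr for l, pr in zip(likelihood, prior)]
posterior = [u / sum(unnormalised) for u in unnormalised]

for p, post in zip(candidates, posterior):
    print(f"P(proportion = {p} | data) = {post:.3f}")
```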
The likelihood
According to Bayes’ theorem, if you want to update your personal probability in a hypothesis, the likelihood tells you everything you need to know about the data.
The likelihood principle: the notion that all the information relevant to inference contained in data is provided by the likelihood.
The data could be obtained given many different population proportions, but the data are more probable for some population proportions than others.
The highest likelihood is not the same as the highest probability.
We can use the likelihood to obtain our posterior, but they are not the same.
Just because a hypothesis has the highest likelihood, it does not mean you will assign the highest posterior probability.
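A continuation of the same made-up example (7 successes in 10 trials): the candidate value with the higher likelihood can still end up with the lower posterior if the prior strongly favours the other candidate:

```python
from math import comb

k, n = 7, 10                       # the same hypothetical data

def likelihood(p):
    """Probability of 7 successes in 10 trials if the population proportion is p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_A, p_B = 0.7, 0.5                # two candidate proportions (hypothetical)
prior_A, prior_B = 0.1, 0.9        # a prior that strongly favours p_B

post_A = likelihood(p_A) * prior_A   # both are proportional to the posterior;
post_B = likelihood(p_B) * prior_B   # the shared constant P(D) cancels out

print(likelihood(p_A) > likelihood(p_B))   # True:  p_A has the higher likelihood
print(post_A > post_B)                     # False: p_B has the higher posterior
```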
Probability density distribution: the distribution used when the dependent variable can be assumed to vary continuously.
A likelihood could be (or be proportional to) a probability density as well as a probability.
In significance testing, we calculate a form of P(D|H).
But, the P(D|H) used in significance testing is conceptually very different from the likelihood, the P(D|H) we are dealing with here.
In significance testing, tail areas are calculated in order to determine long-run error rates.
The aim of classic statistics is to come up with a procedure for making decisions that is reliable, which is to say that the procedure has known, controlled long-run error rates.
To determine the long-run error rates, we need to define a collective.
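To make the contrast concrete, here is an illustration with made-up data (7 successes in 10 trials, null proportion 0.5): the significance test sums a tail area, whereas the likelihood uses only the probability of the data actually obtained:

```python
from math import comb

k_obs, n, p0 = 7, 10, 0.5          # hypothetical data and null value

def p_exactly(k):
    """Probability of exactly k successes in n trials under the null."""
    return comb(n, k) * p0**k * (1 - p0)**(n - k)

tail_area  = sum(p_exactly(k) for k in range(k_obs, n + 1))  # used for the p-value
likelihood = p_exactly(k_obs)                                # used by the Bayesian

print(f"one-tailed area P(>= 7 | H0) = {tail_area:.3f}")     # ~0.172
print(f"likelihood     P(= 7  | H0) = {likelihood:.3f}")     # ~0.117
```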
Bayes’ theorem says that posterior is proportional to likelihood times prior.
We can use this in two ways when dealing with real data: credibility intervals and the Bayes factor.
Credibility intervals
Flat (or uniform) prior: used when you have no idea what the population value is likely to be.
In choosing a (normal) prior, decide on its mean (the value you find most plausible) and its standard deviation (how spread out your uncertainty is).
Formulae for normal posterior: the posterior precision (1/variance) is the sum of the prior precision and the sample precision, and the posterior mean is the precision-weighted average of the prior mean and the sample mean (see the sketch after this section).
For a reasonably diffuse prior (one representing fairly vague prior opinions), the posterior is dominated by the likelihood.
If you started with a flat or uniform prior (you have no opinion concerning which values are most likely), the posterior would be identical to the likelihood.
Even if people started with very different priors, then as long as the priors were smooth and allowed some non-negligible probability in the region of the true population value, collecting enough data would make the posteriors, being dominated by the likelihood, come to be very similar.
If the prior and likelihood are normal, the posterior is also normal.
Having found the posterior distribution, you have really found out all you need to know.
The credibility interval is affected by any prior information you had.
But it is not affected by all of the things that affect the confidence interval.
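The sketch below (with made-up prior and data values) applies the standard normal-normal updating referred to above and reads off a 95% credibility interval from the posterior; because the prior is diffuse, the posterior mean ends up close to the sample mean:

```python
# Normal prior, normal likelihood -> normal posterior.
# Precisions (1/variance) add; the posterior mean is a precision-weighted average.

prior_mean, prior_sd = 0.0, 10.0     # hypothetical vague prior
sample_mean, se      = 5.0, 2.0      # hypothetical sample mean and standard error

prior_precision = 1 / prior_sd**2
data_precision  = 1 / se**2

post_precision = prior_precision + data_precision
post_mean = (prior_mean * prior_precision + sample_mean * data_precision) / post_precision
post_sd   = post_precision ** -0.5

lower, upper = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
print(f"posterior: mean {post_mean:.2f}, SD {post_sd:.2f}")
print(f"95% credibility interval: ({lower:.2f}, {upper:.2f})")
```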
The Bayes factor
There is no such thing as significance testing in Bayesian statistics.
Often, all one has to do as a Bayesian statistician is determine posterior distributions.
With the Bayes factor you can compare the probability of an experimental theory to the probability of the null hypothesis.
H1 is your experimental hypothesis
H0 is the null hypothesis
P(H1|D) is proportional to P(D|H1) x P(H1)
P(H0|D) is proportional to P(D|H0) x P(H0)
P(H1|D) / P(H0|D) = [P(D|H1) / P(D|H0)] x [P(H1) / P(H0)]
Posterior odds = likelihood ratio x prior odds
The likelihood ratio is (in this case) called the Bayes factor B in favour of the experimental hypothesis.
Whatever your prior odds were in favour of the experimental hypothesis over the null, after data collection multiply those odds by B to get your posterior odds.
The Bayes factor gives the means of adjusting your odds in a continuous way.
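A minimal sketch with hypothetical numbers: if the data are three times as probable under H1 as under H0 (B = 3) and the prior odds were even, the posterior odds become 3:1 in favour of H1:

```python
# Posterior odds = Bayes factor x prior odds, with made-up values.

bayes_factor = 3.0      # B = P(D|H1) / P(D|H0), a hypothetical value
prior_odds   = 1.0      # P(H1) / P(H0) before seeing the data

posterior_odds = bayes_factor * prior_odds
posterior_prob_H1 = posterior_odds / (posterior_odds + 1)

print(posterior_odds)       # 3.0 -> odds of 3:1 in favour of H1
print(posterior_prob_H1)    # 0.75 (only if H1 and H0 are the only possibilities)
```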