Probability, in the classical sense, refers to the proportion of occurrences when a particular experiment is repeated infinitely often under identical circumstances. It is a long-term relative frequency: it does not apply to unique events and it depends on the chosen reference class.
Subjective probability refers to the subjective degree of conviction in a hypothesis. Objective probability refers to the long-term relative frequency and is the same probability used in classical statistics.
The p-value is the probability of obtaining a test statistic at least as extreme as the one observed, given that the null hypothesis is true. An X% confidence interval for a parameter is an interval that, in repeated use, captures the true value of the parameter X% of the time. P-values concern only the null hypothesis; in classical statistics it is not possible to make statements about the probability of a hypothesis itself.
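The "repeated use" reading of a confidence interval can be illustrated with a small simulation. The sketch below is my own illustration (the sample size, number of replications and known sigma are assumptions, not from the source): it checks how often a 95% interval for a normal mean captures the true value.

```python
# Illustrative simulation (my assumptions: n = 30 observations, known
# sigma = 1, 2000 replications) of the long-run coverage of a 95% CI.
import math
import random

def coverage(n_sims=2000, n=30, mu=0.0, sigma=1.0, z=1.96):
    hits = 0
    for _ in range(n_sims):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        mean = sum(sample) / n
        half_width = z * sigma / math.sqrt(n)
        # Count the interval as a "hit" if it captures the true mean.
        if mean - half_width <= mu <= mean + half_width:
            hits += 1
    return hits / n_sims

random.seed(1)
print(coverage())  # close to 0.95, as the definition promises in repeated use
```

Any single computed interval either contains the parameter or not; the 95% is a property of the procedure over many uses, which is exactly what the simulation estimates.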
If the null hypothesis is true, the p-value drifts randomly as data accumulate. Therefore, it is possible for the p-value to become significant purely by chance. This is why stopping rules are imperative in classical statistics. In Bayesian statistics, the Bayes factor does not drift randomly but moves towards the correct decision as data accumulate.
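The drifting p-value, and why peeking without a stopping rule inflates false positives, can be made concrete with a simulation. Everything below (a one-sample two-sided z-test, the peeking schedule, the number of replications) is an illustrative assumption of mine, not the source's method:

```python
# Illustrative simulation of optional stopping under a true null.
# Assumptions (mine): one-sample two-sided z-test with known sigma = 1,
# peeking at every observation from n = 10 up to n = 500.
import math
import random

def dips_below(alpha=0.05, max_n=500, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for n in range(1, max_n + 1):
        total += rng.gauss(0.0, 1.0)  # data generated under H0: mu = 0
        if n >= 10:
            z = total / math.sqrt(n)
            p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
            if p < alpha:
                return True  # the drifting p-value crossed .05 by chance
    return False

# Fraction of simulated studies that reach "significance" at some peek:
rate = sum(dips_below(seed=s) for s in range(200)) / 200
print(rate)  # far above the nominal 5% false-positive rate
```

A single fixed-n test would reject about 5% of the time; checking after every observation lets the random drift of the p-value cross the threshold far more often, which is why the stopping rule must be fixed in advance.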
In classical statistics, the conclusion is influenced by (1) the stopping rule, (2) the timing of explanations (post-hoc test or not) and (3) multiple testing. This is not the case in Bayesian statistics.
Classical statistics does not allow for probabilities to be assigned to hypotheses or parameters, whereas Bayesian statistics does allow this.
Bayesian statistics is a method of learning from prediction errors. It assumes that probability does not exist but only uncertainty, which has to be quantified in a principled manner. Therefore, in Bayesian statistics, probability can be assigned to a single hypothesis.
The data drive an update from prior knowledge to posterior knowledge. This method investigates P(hypothesis | data), whereas classical statistics investigates P(data | hypothesis).
The Bayes factor can also be seen as the predictive updating factor for the posterior belief. It is a ratio of likelihoods, where the likelihood refers to the probability of obtaining the data given a hypothesis. Bayesian statistics uses Bayes' rule: P(H | data) = P(data | H) × P(H) / P(data).
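A minimal numeric sketch of Bayes' rule and the Bayes factor for two point hypotheses; all numbers below are invented for illustration, not taken from the source:

```python
# Bayes' rule for two competing point hypotheses H1 and H0.
# Every number here is an invented illustration.
prior_h1 = 0.5   # P(H1), prior belief in the alternative
prior_h0 = 0.5   # P(H0), prior belief in the null
lik_h1 = 0.20    # P(data | H1), likelihood under the alternative
lik_h0 = 0.05    # P(data | H0), likelihood under the null

bayes_factor = lik_h1 / lik_h0                      # ratio of likelihoods
evidence = lik_h1 * prior_h1 + lik_h0 * prior_h0    # P(data)
post_h1 = lik_h1 * prior_h1 / evidence              # Bayes' rule: P(H1 | data)

print(bayes_factor, post_h1)  # 4.0 0.8
```

Here the data are 4 times more likely under H1 than under H0 (a Bayes factor of 4), which updates a 50/50 prior to a posterior probability of 0.8 for H1.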
The prior distribution codetermines the posterior distribution; therefore, a high predictive updating factor in favour of the alternative hypothesis does not necessarily mean that the alternative hypothesis is better overall. It only means that, in this case, the alternative hypothesis predicts the dataset X times better than the null hypothesis.
The posterior belief and the Bayes factor are the same only if the prior belief is 50/50, i.e. prior odds of 1:1. Otherwise, the posterior belief and the Bayes factor differ.
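This follows from the odds form of Bayes' rule, posterior odds = prior odds × Bayes factor. A small sketch with invented numbers shows why the two coincide only under a 50/50 prior:

```python
# Posterior odds = prior odds * Bayes factor; the numbers are invented
# to illustrate when posterior odds and the Bayes factor coincide.
def posterior_odds(prior_h1, bayes_factor):
    prior_odds = prior_h1 / (1.0 - prior_h1)  # P(H1) / P(H0)
    return prior_odds * bayes_factor

print(posterior_odds(0.5, 4.0))  # 4.0 -> equals the Bayes factor (50/50 prior)
print(posterior_odds(0.2, 4.0))  # 1.0 -> differs (prior favoured H0)
```

With a 50/50 prior the prior odds are 1, so the Bayes factor passes through unchanged; any other prior rescales it.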
The Bayes factor can be used to quantify evidence, although the categories used to label its size are arbitrary. Statistical evidence refers to a change in conviction concerning a hypothesis.