What are weaknesses of the Bayesian approach?

Weaknesses of the Bayesian approach are:

  • The prior is subjective
  • Bayesian analyses force people to consider what a theory actually predicts, but specifying those predictions in detail may be contentious
  • Bayesian analyses escape the paradoxes of violating the likelihood principle, but in doing so they no longer control Type I and Type II error rates

For more information, look at the (free) summary of 'Bayesian versus orthodox statistics: which side are you on?'


Tip category: Studies & Exams
Bayesian Versus orthodox statistics: which side are you on? - summary of an article by Dienes, 2011

Critical thinking
Article: Dienes, Z. (2011)
Bayesian Versus orthodox statistics: which side are you on?
doi: 10.1177/1745691611406920

The contrast: orthodox versus Bayesian statistics

The orthodox logic of statistics starts from the assumption that probabilities are long-run relative frequencies.
A long-run relative frequency requires an indefinitely large series of events that constitutes a collective; the probability of some property q occurring is then the proportion of events in the collective with property q.

  • The probability applies to the whole collective, not to any one person.
    • One person may belong to two different collectives that have different probabilities
  • Long run relative frequencies do not apply to the truth of individual theories because theories are not collectives. They are just true or false.
    • Thus, when using this approach to probability, the null hypothesis of no population difference between two particular conditions cannot be assigned a probability.
  • Given both a theory and a decision procedure, one can determine a long-run relative frequency with which certain data might be obtained. We can symbolize this as P(data| theory and decision procedure).

The logic of Neyman Pearson (orthodox) statistics is to adopt decision procedures with known long-term error rates and then control those errors at acceptable levels.

  • Alpha: the error rate for false positives, the significance level
  • Beta: the error rate for false negatives

Thus, setting significance and power controls long-run error rates.

  • An error rate can be calculated from the tail area of test statistics.
  • An error rate can be adjusted for factors that affect long-run error rates
  • These error rates apply to decision procedures, not to individual experiments.
    • An individual experiment is a one-time event, so does not constitute a long-run set of events
    • A decision procedure can in principle be considered to apply over an indefinitely long run of experiments.
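The long-run error-rate logic above can be illustrated by simulation. The sketch below (an illustration, not part of the article) repeatedly runs a two-sided one-sample z-test on data generated under a true null hypothesis; the proportion of rejections approximates the Type I error rate alpha = .05 that the decision procedure controls.

```python
import random

def one_sample_z_test(n, mu=0.0):
    """Draw n observations from N(mu, 1) and test H0: mu = 0
    with a two-sided z-test (sigma assumed known to be 1)."""
    sample = [random.gauss(mu, 1.0) for _ in range(n)]
    z = (sum(sample) / n) * n ** 0.5  # mean / (sigma / sqrt(n)), sigma = 1
    return abs(z) > 1.96              # reject H0 at alpha = .05?

random.seed(1)
reps = 20_000
false_positives = sum(one_sample_z_test(30) for _ in range(reps))
print(false_positives / reps)  # close to 0.05: the long-run Type I error rate
```

Note that the ~5% figure describes the procedure over many repetitions, not any single experiment — exactly the point made in the bullets above.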

The probabilities of data given theory and theory given data

The probability of a theory being true given data can be symbolized as P(theory|data).
This is not what orthodox statistics tells us; it only gives P(data|theory and decision procedure).
One cannot infer one conditional probability just by knowing its inverse, so P(theory|data) remains unknown.

Bayesian statistics starts from the premise that we can assign degrees of plausibility to theories, and what we want our

.......read more
Access: 
Public
WSRt, critical thinking - a summary of all articles needed in the fourth block of second year psychology at the uva

This is a summary of the articles and reading materials that are needed for the fourth block in the course WSR-t. This course is given to second year psychology

........Read more
Kinds versus continua: a review of psychometric approaches to uncover the structure of psychiatric constructs - summary of an article by Borsboom, Rhemtulla, Cramer, van der Maas, Scheffer and Dolan


Critical thinking
Article: Borsboom, Rhemtulla, Cramer, van der Maas, Scheffer and Dolan (2016)
Kinds versus continua: a review of psychometric approaches to uncover the structure of psychiatric constructs

The present paper reviews psychometric modelling approaches that can be used to investigate whether psychopathology constructs are discrete categories or continuous dimensions through the application of statistical models.

Introduction

The question of whether mental disorders should be thought of as discrete categories or as continua represents an important issue in clinical psychology and psychiatry.

  • The DSM-V typically adheres to a categorical model, in which discrete diagnoses are based on patterns of symptoms.

But, such categorizations often involve apparently arbitrary conventions.

Measurement theoretical definitions of kinds and continua

All measurement starts with categorization, the formation of equivalence classes.
Equivalence classes: sets of individuals who are exchangeable with respect to the attribute of interest.
We may not succeed in finding an observational procedure that in fact yields the desired equivalence classes.

  • We may find that individuals who have been assigned the same label are not indistinguishable with respect to the attribute of interest.
    Because there are now three classes rather than two, next to the relation between individuals within classes (equivalence), we may also represent systematic relations between members of different classes.
  • One may do so by invoking the concept of order.
    But we may find that within these classes there are still non-trivial differences between individuals that we wish to represent.

If we break down the classes further, we may represent them with a scale that starts to approach continuity.

The continuity hypothesis formally implies that:

  • in between any two positions lies a third that can be empirically instantiated
  • there are no gaps in the continuum.

In psychological terms, categorical representations line up naturally with an interpretation of disorders as discrete disease entities, while continuum hypotheses are most naturally consistent with the idea that a construct varies continuously in a population.

  • in a continuous interpretation, the distinction between individuals depends on the imposition of a cut-off score that does not reflect a gap that is inherent in the attribute itself.

Kinds and continua as psychometric entities

In psychology, we have no way to decide conclusively whether two individuals are ‘equally depressed’.
This means we cannot form the equivalence classes necessary for measurement theory to operate.
The standard approach to dealing with this situation in psychology is to presume that, even though equivalence classes for theoretical entities like depression

.....read more
Access: 
Public
Toward a Model-Based Approach to the Clinical Assessment of Personality Psychopathology - summary of an article by Eaton, Krueger, Docherty, and Sponheim


Critical thinking
Article: Eaton, Krueger, Docherty, and Sponheim (2013)
Toward a Model-Based Approach to the Clinical Assessment of Personality Psychopathology

This paper illustrates how new statistical methods can inform conceptualization of personality psychopathology and therefore its assessment.

The relationship between structure and assessment

Structural assumptions about personality variables are inextricably linked to personality assessment.

  • reliable assessment of normal-range personality traits, and personality disorder categories, frequently takes different forms, given that the constructs of interest are presumed to have different structures.
  • when assessing personality traits, the assessor needs to measure the full range of the trait dimension to determine where an individual falls in it.
  • when assessing the presence or absence of a DSM-V personality disorder, the assessor needs to evaluate the presence or absence of the binary categorical diagnosis.
  • given the polythetic nature of criterion sets, the purpose of the assessment is to determine which criteria are present, calculate the number of present criteria, and note whether this sum meets or exceeds a diagnostic threshold.
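The polythetic counting rule described in the last bullet can be sketched in a few lines. The criterion names below are purely illustrative placeholders, not taken from any DSM criterion set:

```python
# Hypothetical polythetic diagnosis rule: at least `threshold` of the
# listed criteria must be present (criterion names are illustrative).
def meets_diagnosis(present_criteria, criterion_set, threshold):
    count = sum(1 for c in criterion_set if c in present_criteria)
    return count >= threshold

criteria = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9"]
patient = {"c1", "c3", "c4", "c7", "c9"}
print(meets_diagnosis(patient, criteria, threshold=5))  # True: 5 of 9 present
```

The polythetic structure means two patients can receive the same diagnosis while sharing few or even no criteria, which is one reason the underlying distributional assumptions matter.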

The nature of a personality assessment instrument reflects assumptions about the distributional characteristics of the construct of interest.

  • items on DSM-oriented inventories are usually intended to gather converging pieces of information about each criterion to determine whether or not it is present.

Distributional assumptions of personality constructs

Historically, many assumptions about the distributions of data reflecting personality constructs resulted from expert opinion or theory.
Both ‘type’ theories and dimensional theories have been proposed.
Assessment instruments have reflected this bifurcation in conceptualization.

  • The resulting implications for assessment are far from trivial.
    The structure of a personality test designed to determine whether an individual is one of two personality types needs only to assess those two characteristics, as opposed to assessing characteristics that are more indicative of the mid-range.
    • There is no middle ground in type theory, so items covering the middle ground are not relevant.

Because the structure of personality assessment is reflective of the underlying distributional assumptions of the personality constructs of interest, reliance solely on expert opinion about these distributions is potentially problematic.

Model-based tests of distributional assumptions

It is critical for personality theory and assessment that underlying distributional assumptions of symptomatology be correct and justifiable.

  • different distributions impact the way clinical and research constructs are conceptualized, measured, and applied to individuals.
  • characterizing these latent constructs properly is a prerequisite for efforts to assess them.
    • it is of limited value to assess an improperly conceived construct with high reliability.
.....read more
Access: 
Public
Bayes and the probability of hypotheses - summary of Chapter 4 of Understanding Psychology as a science by Dienes


Critical thinking
Chapter 4 of Understanding Psychology as a science by Dienes
Bayes and the probability of hypotheses

Objective probability: a long-run relative frequency.
Classic (Neyman-Pearson) statistics can tell you the long-run relative frequency of different types of errors.

  • Classic statistics do not tell you the probability of any hypothesis being true.

An alternative approach to statistics is to start with what Bayesians say are people’s natural intuitions.
People want statistics to tell them the probability of their hypothesis being right.
Subjective probability: the subjective degree of conviction in a hypothesis.

Subjective probability

Subjective or personal probability: the degree of conviction we have in a hypothesis.
Probabilities are in the mind, not in the world.

The initial problem to address in making use of subjective probabilities is how to assign a precise number to how probable you think a proposition is.
The initial personal probability that you assign to any theory is up to you.
Sometimes it is useful to express your personal convictions in terms of odds rather than probabilities.

Odds(theory is true) = probability(theory is true)/probability(theory is false)
Probability = odds/(odds +1)
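The two conversion formulas above can be sketched directly; the numbers here are illustrative:

```python
def odds_from_probability(p):
    """Odds(theory is true) = P(theory is true) / P(theory is false)."""
    return p / (1 - p)

def probability_from_odds(odds):
    """Probability = odds / (odds + 1)."""
    return odds / (odds + 1)

print(odds_from_probability(0.8))   # about 4: a personal probability of .8 is 4-to-1 odds
print(probability_from_odds(4.0))   # about 0.8: converting back recovers the probability
```

Expressing convictions as odds is what makes the Bayes factor (a multiplier on odds) convenient later on.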

These numbers we get from deep inside us must obey the axioms of probability.
This is the stipulation that ensures the way we change our personal probability in a theory is coherent and rational.

  • People’s intuitions about how to change probabilities in the light of new information are notoriously bad.

This is where the statistician comes in and forces us to be disciplined.

There are only a few axioms, each more-or-less self-evidently reasonable.

  • Two axioms effectively set limits on what values probabilities can take:
    all probabilities lie between 0 and 1.
  • P(A or B) = P(A) + P(B), if A and B are mutually exclusive.
  • P(A and B) = P(A) x P(B|A)
    • P(B|A) is the probability of B given A.

Bayes’ theorem

H is the hypothesis
D is the data

P(H and D) = P(D) x P(H|D)
P(H and D) = P(H) x P(D|H)

so

P(D) x P(H|D) = P(H) x P(D|H)

Dividing both sides by P(D):

P(H|D) = P(D|H) x P(H) / P(D)

This last equation is Bayes' theorem.
It tells you how to go from one conditional probability to its inverse.
We can simplify this equation if we are interested in comparing the probability of different hypotheses given the same data D.
Then P(D) is just a constant for all these comparisons.

P(H|D) is proportional to P(D|H) x P(H)
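This update rule can be sketched for a small discrete set of hypotheses. The example below (an illustration, not from the chapter) considers three hypothetical values for a coin's bias and updates a uniform prior after observing 7 heads in 10 flips:

```python
from math import comb

hypotheses = {0.3: 1/3, 0.5: 1/3, 0.7: 1/3}   # prior P(H), uniform over three biases
heads, flips = 7, 10

def likelihood(p):
    """P(D|H): binomial probability of the observed data given bias p."""
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

unnormalized = {p: likelihood(p) * prior for p, prior in hypotheses.items()}
total = sum(unnormalized.values())             # this sum plays the role of P(D)
posterior = {p: u / total for p, u in unnormalized.items()}
print(posterior)  # P(H|D): the probability mass shifts toward the 0.7 hypothesis
```

Because P(D) is the same for every hypothesis, only the product P(D|H) x P(H) determines how the posterior ranks them — which is exactly what the proportionality statement says.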

P(H) is called the prior.
It is how probable you

.....read more
Access: 
Public
Network Analysis: An Integrative Approach to the Structure of Psychopathology - summary of an article by Borsboom and Cramer (2013)


Critical thinking
Article: Borsboom, D. and Cramer, A. O. J. (2013)
Network Analysis: An Integrative Approach to the Structure of Psychopathology
doi: 10.1146/annurev-clinpsy-050212-185608

Introduction

The current dominant paradigm of the disease model of psychopathology is problematic.
Current handling of psychopathology data is predicated on traditional psychometric approaches that are the technical mirror of this paradigm.
In these approaches, observables (clinical symptoms) are explained by means of a small set of latent variables, just like symptoms are explained by disorders.

  • From this psychometric perspective, symptoms are regarded as measurements of a disorder, and in accordance, symptoms are aggregated in a total score that reflects a person’s stance on that latent variable.
  • The dominant paradigm is not merely a matter of theoretical choice, but also of methodological and pragmatic necessity.

In this review, we argue that complex network approaches, which are currently being developed at the crossroads of various scientific fields, have the potential to provide a way of thinking about disorders that does justice to their complex organisation.

  • In such approaches, disorders are conceptualized as systems of causally connected symptoms rather than as effects of a latent disorder.
  • Using network analysis techniques, such systems can be represented, analysed, and studied in their full complexity.
  • In addition, network modeling has the philosophical advantage of dropping the unrealistic idea that symptoms of a single disorder share a single causal background, while it simultaneously avoids the relativistic consequence that disorders are merely labels for an arbitrary set of symptoms.
    • It provides a middle ground in which disorders exist as systems, rather than as entities

Symptoms and disorders in psychopathology

We know for certain that people suffer from symptoms and that these symptoms cluster in a non-arbitrary way.
For most psychopathological conditions, only the symptoms are empirically identifiable causes of distress.

  • Mental disorders are themselves not empirically identifiable in that they cannot be diagnosed independently of their symptoms.
    • It is impossible to identify any of the common mental disorders as conditions that exist independently of their symptoms.

In order for a disease model to hold, it should be possible to conceptually separate conditions from symptoms.

  • It must be possible (or at least imaginable) that a person should have a condition/disease without the associated symptoms.

This isn’t possible for mental disorders.
As an important corollary, this means that disorders cannot be causes of these symptoms.
This strongly suggests that the treatment of disorders as

.....read more
Access: 
Public
Introduction to qualitative psychological research - an article by Coyle (2015)


Critical thinking
Article: Coyle, A. (2015)
Introduction to qualitative psychological research

Introduction

This chapter examines the development of psychological interest in qualitative methods in historical context and points to the benefits that psychology gains from qualitative research.
It also looks at some important issues and developments in qualitative psychology.

Epistemology and the ‘scientific method’

At its most basic, qualitative psychological research may be regarded as involving the collection and analysis of non-numerical data through a psychological lens in order to provide rich descriptions and possibly explanations of people's meaning-making: how they make sense of the world and how they experience particular events.

Qualitative research is bound up with particular sets of assumptions about the bases or possibilities of knowledge.
Epistemology: particular sets of assumptions about the bases or possibilities of knowledge.
Epistemology refers to a branch of philosophy that is concerned with the theory of knowledge and that tries to answer questions about how we can know what we know.
Ontology: the assumptions we make about the nature of being, existence or reality.

Different research approaches and methods are associated with different epistemologies.
The term ‘qualitative research’ covers a variety of methods with a range of epistemologies, resulting in a domain that is characterized by difference and tension.

The epistemology adopted by a particular study can be determined by a number of factors.

  • A researcher may have a favoured epistemological outlook or position and may locate their research within this, choosing methods that accord with that position.
  • Alternatively, the researcher may be keen to use a particular qualitative method in their research and so they frame their study according to the epistemology that is usually associated with that method.

Whatever epistemological position is adopted in a study, it is usually desirable to maintain this position consistently throughout the write-up to help produce a coherent research report.

Positivism: holds that the relationship between the world and our sense perception of the world is straightforward. There is a direct correspondence between things in the world and our perception of them provided that our perception is not skewed by factors that might damage that correspondence.
So, it is possible to obtain accurate knowledge of things in the world, provided we can adopt an impartial, unbiased, objective viewpoint.

Empiricism: holds that our knowledge of the world must arise from the collection and categorization of our sense perceptions/observations of the world.
This categorization allows us to develop more complex knowledge of the world and to develop theories to explain the world.

.....read more
Access: 
Public
Surrogate Science: The Idol of a Universal Method for Scientific Inference - summary of an article by Gigerenzer & Marewski


Critical thinking
Article: Gigerenzer, G. & Marewski, J. N. (2015)
Surrogate Science: The Idol of a Universal Method for Scientific Inference
doi: 10.1177/0149206314547522

Introduction

Scientific inference should not be made mechanically.
Good science requires both statistical tools and informed judgment about what model to construct, what hypotheses to test, and what tools to use.

This article is about the idol of a universal method of statistical inference.

In this article, we make three points:

  • There is no universal method of scientific inference, but, rather a toolbox of useful statistical methods. In the absence of a universal method, its followers worship surrogate idols, such as significant p values.
    The inevitable gap between the ideal and its surrogate is bridged with delusions.
    These mistaken beliefs do much harm. Among others, by promoting irreproducible results.
  • If the proclaimed ‘Bayesian revolution’ were to take place, the danger is that the idol of a universal method might survive in a new guise, proclaiming that all uncertainty can be reduced to subjective probabilities.
  • Statistical methods are not simply applied to a discipline. They change the discipline itself, and vice versa.

Dreaming up a universal method of inference

The null ritual

The most prominent creation of a seemingly universal inference method is the null ritual:

  • Set up a null hypothesis of 'no mean difference' or 'zero correlation'. Do not specify the predictions of your own research hypothesis.
  • Use 5% as a convention for rejecting the null. If significant, accept your research hypothesis. Report the result as p<.05, p<.01, or p<.001, whichever comes next to the obtained p value.
  • Always perform this procedure.

Level of significance has three different meanings:

  • A mere convention
  • The alpha level
  • The exact level of significance

Three meanings of significance

The alpha level: the long-term relative frequency of mistakenly rejecting hypothesis H0 if it is true, also known as Type I error rate.
The beta level: the long-term frequency of mistakenly rejecting H1 if it is true.

Two statistical hypotheses need to be specified in order to be able to determine both alpha and beta.
Neyman and Pearson rejected a mere convention in favour of an alpha level that required a rational scheme.

  • Set up two statistical hypotheses, H1, H2, and decide on alpha, beta and the sample size before the experiment, based on subjective cost-benefit considerations.
  • If the data fall into the rejection region of H1, accept H2, otherwise accept H1
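The planning step in this scheme — fixing alpha, beta, and the sample size before the experiment — can be sketched for a simple one-sample z-test. This is an illustrative textbook formula under the assumption of a known standard deviation, not a procedure given in the article:

```python
from math import ceil

Z_975 = 1.959964  # standard normal quantile for alpha = .05, two-sided
Z_80 = 0.841621   # standard normal quantile for power = .80 (beta = .20)

def required_n(d, z_alpha=Z_975, z_beta=Z_80):
    """Sample size for a one-sample z-test to detect a
    standardized effect d at the chosen alpha and beta."""
    return ceil(((z_alpha + z_beta) / d) ** 2)

print(required_n(0.5))  # 32 observations for a medium standardized effect
```

The point of the Neyman-Pearson scheme is precisely that this calculation happens before the data are collected, based on cost-benefit considerations about the two error rates.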
.....read more
Access: 
Public
WSRt, critical thinking, a list of terms used in the articles of block 4


This is a list of the important terms used in the articles of the fourth block of WSRt, with the subject alternative approaches to psychological research.

Article: Kinds versus continua: a review of psychometric approaches to uncover the structure of psychiatric constructs

Equivalence classes: sets of individuals who are exchangeable with respect to the attribute of interest.

Taxometrics: approaches that test structural hypotheses by inspecting particular consequences of the model for specific statistical properties of (subsets of) items, such as the patterns of bivariate correlations expected to hold in the data

Toward a Model-Based Approach to the Clinical Assessment of Personality Psychopathology

Latent trait models: posit the presence of one or more underlying continuous distributions.

Zones of rarity: locations along the dimension that are not occupied by individuals.

Discrimination: the measure of how strongly the item taps into the latent trait.

Quasi-continuous: the construct would be bounded at the low end by zero, a complete absence of the quality corresponding with the construct.

Latent class models: based on the supposition of a latent group (class) structure for a construct’s distribution.

Conditional independence: the assumption that inter-item correlations solely reflect class membership.

Hybrid models (or factor mixture models): combine the continuous aspects of latent trait models with the discrete aspects of latent class models.

EFMA: exploratory factor mixture analysis.

Bayes and the probability of hypotheses

Objective probability: a long-run relative frequency.

Subjective probability: the subjective degree of conviction in a hypothesis.

The likelihood principle: the notion that all the information relevant to inference contained in data is provided by the likelihood.

Probability density distribution: the distribution used if the dependent variable can be assumed to vary continuously

Credibility interval: the Bayesian equivalent of a confidence interval

The Bayes factor: the Bayesian equivalent of null hypothesis testing

Flat prior or uniform prior: used when you have no idea what the population value is likely to be

Bayesian Versus orthodox statistics: which side are you on?

Alpha: the error rate for

.....read more
Access: 
Public
Everything you need for the course WSRt of the second year of Psychology at the Uva


This magazine contains all the summaries you need for the course WSRt in the second year of psychology at the UvA.

Access: 
Public

What is the Bayes factor?


The Bayes factor (B) compares the probability of an experimental theory to the probability of the null hypothesis.
It gives the means of adjusting your odds in a continuous way.

  • If B is greater than 1, your data support the experimental hypothesis over the null
  • If B is less than 1, your data support the null over the experimental hypothesis
  • If B is about 1, then your experiment was not sensitive
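For the simplest case of two point hypotheses, the Bayes factor is just a likelihood ratio. The sketch below is illustrative (the coin-bias values are not from the summaries): it compares H1: p = 0.7 against H0: p = 0.5 after observing 8 heads in 10 flips.

```python
from math import comb

def binomial_likelihood(p, heads, flips):
    """P(data | bias p) for a binomial experiment."""
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

heads, flips = 8, 10
B = binomial_likelihood(0.7, heads, flips) / binomial_likelihood(0.5, heads, flips)
print(round(B, 2))  # about 5.31: the data favour H1 over the null
```

Multiplying your prior odds by B gives your posterior odds, which is the continuous adjustment described above.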

For more information, look at the (free) summary of 'Bayes and the probability of hypotheses' or 'Bayesian versus orthodox statistics: which side are you on?'


What is qualitative psychological research?


At its most basic, qualitative psychological research can be seen as involving the collection and analysis of non-numerical data through a psychological lens in order to provide rich descriptions and possibly explanations of people's meaning-making: how they make sense of the world and how they experience particular events.

For more information, look at the (free) summary of 'Introduction to qualitative psychological research'

Tip type: Advice & Instructions
Date of posting: 27-01-2019

