WSRt, Critical Thinking - a summary of all articles needed in the fourth block of second-year psychology at the UvA
Critical thinking
Article: Borsboom, Rhemtulla, Cramer, van der Maas, Scheffer and Dolan (2016)
Kinds versus continua: a review of psychometric approaches to uncover the structure of psychiatric constructs
The present paper reviews psychometric modelling approaches that can be used to investigate, through the application of statistical models, whether psychopathology constructs are discrete categories or continuous dimensions.
The question of whether mental disorders should be thought of as discrete categories or as continua represents an important issue in clinical psychology and psychiatry.
But such categorizations often involve apparently arbitrary conventions.
All measurement starts with categorization, the formation of equivalence classes.
Equivalence classes: sets of individuals who are exchangeable with respect to the attribute of interest.
We may not succeed in finding an observational procedure that in fact yields the desired equivalence classes.
If we break down the classes further, we may represent them with a scale that starts to approach continuity.
The continuity hypothesis formally implies that:
In psychological terms, categorical representations line up naturally with an interpretation of disorders as discrete disease entities, while continuum hypotheses are most naturally consistent with the idea that a construct varies continuously in a population.
In psychology, we have no way to decide conclusively whether two individuals are ‘equally depressed’.
This means we cannot form the equivalence classes necessary for measurement theory to operate.
The standard approach to dealing with this situation in psychology is to presume that, even though equivalence classes for theoretical entities like depression and anxiety are not subject to direct empirical determination, we may still entertain them as hypothetical entities purported to underlie the thoughts, feelings and behaviours we do observe.
Models assume that, given a specific level of a latent variable, the indicators are uncorrelated.
This feature, local independence, is consistent with a causal interpretation of the effects of the latent on the observed variables.
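Local independence can be illustrated with a small simulation (a hypothetical sketch, not taken from the article; all data and parameter values are invented): two indicators that share a latent common cause are correlated overall, but the correlation (nearly) disappears once the latent level is held fixed.

```python
import random

random.seed(2)
n = 20_000
# Two indicators that share a single latent common cause
latent = [random.gauss(0, 1) for _ in range(n)]
x = [z + random.gauss(0, 1) for z in latent]
y = [z + random.gauss(0, 1) for z in latent]

def corr(a, b):
    """Pearson correlation, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    va = sum((ai - ma) ** 2 for ai in a) / len(a)
    vb = sum((bi - mb) ** 2 for bi in b) / len(b)
    return cov / (va * vb) ** 0.5

r_marginal = corr(x, y)   # clearly positive: induced by the latent variable
# Hold the latent level (approximately) fixed: the correlation vanishes
band = [i for i, z in enumerate(latent) if abs(z) < 0.1]
r_local = corr([x[i] for i in band], [y[i] for i in band])
```

This is exactly the causal reading of local independence: once the common cause is conditioned on, nothing else links the indicators.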
The distribution of observed variables is typically taken as a given in psychometric modelling, as it is dictated by the response format used in questionnaires or interviews.
This is often the case in psychiatric nosology, because we do not have strong independent evidence to resolve the question of whether psychiatric disorders vary continuously or categorically in the population.
One may apply models in an attempt to determine the form of the latent structure.
This can be done in two ways:
The logic underlying taxometric analysis is as follows:
If the underlying construct is continuous, then the covariance between any two indicators conditional on a given range of a proxy of the construct should be the same regardless of the exact range.
If the underlying variable is a binary variable, then the covariance between any two indicators is expected to vary with the value of the proxy.
Taxometric analysis capitalizes on such implications of latent structure hypothesis.
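This covariance implication can be demonstrated in a small simulation (an illustrative sketch in the spirit of MAXCOV-type procedures, not code from the article; the data and parameter values are invented): within slabs of a proxy indicator, the covariance of two other indicators stays roughly level under a continuous latent variable, but peaks where the two classes mix under a taxonic one.

```python
import random

random.seed(0)
n = 30_000

def indicators(latent, noise_sd=1.0):
    """Three noisy indicators of the same latent variable (third used as proxy)."""
    return [[z + random.gauss(0, noise_sd) for z in latent] for _ in range(3)]

def slab_covariances(latent, n_slabs=10):
    """Covariance of two indicators within equal-count slabs of the proxy."""
    x1, x2, proxy = indicators(latent)
    order = sorted(range(len(proxy)), key=lambda i: proxy[i])
    size = len(order) // n_slabs
    covs = []
    for s in range(n_slabs):
        idx = order[s * size:(s + 1) * size]
        a, b = [x1[i] for i in idx], [x2[i] for i in idx]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        covs.append(sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a))
    return covs

# Continuous latent variable vs a two-class (taxonic) latent variable
cov_continuous = slab_covariances([random.gauss(0, 1) for _ in range(n)])
cov_taxonic = slab_covariances([2.0 * (random.random() < 0.5) for _ in range(n)])
# Under a taxon, the within-slab covariance peaks in the slabs where classes mix
```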
To carry out a taxometric analysis:
The taxometric approach is not uncontroversial in psychometrics.
Complementary to taxometric analysis, one may use latent variable modelling as a framework in which to query the structure of psychiatric constructs.
Latent variable approaches are not without problems
McGrath and Walters (2012) have systematically evaluated the performance of latent variable models and taxometric procedures, and propose a combination of modelling approaches, in which taxometric strategies are used to detect categorical structures, whereas latent class or profile models are used to select the optimal number of classes if the structure is determined to be categorical.
The hypotheses of kinds and continua do not exhaust the space of possibilities, so evidence against one hypothesis is not necessarily evidence for the other.
Factor mixture models
Finite mixture models partition the population into distinct latent classes, but allow for continuous variation within these classes.
If that variation is itself measured through a number of indicator variables, then we obtain a factor mixture model.
Factor mixture models provide a useful framework for formalizing the distinction between categorical and continuous latent variables in terms of distributional assumptions and model constraints.
Mixture modelling allows us to connect factor models and latent class models by means of intermediate models and associated constraints.
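To make the mixture idea concrete, here is a minimal sketch (not from the article; a one-indicator simplification of a latent class model, with the class structure and all parameter values invented for illustration) that fits a two-class Gaussian mixture by expectation-maximization:

```python
import math
import random

random.seed(1)
# Invented data: a "taxonic" population with two latent classes on one indicator
data = [random.gauss(0, 1) for _ in range(500)] + [random.gauss(3, 1) for _ in range(500)]

def em_two_classes(xs, iters=50):
    """Fit a two-component Gaussian mixture (shared sd) by expectation-maximization."""
    mu1, mu2, sd, p = min(xs), max(xs), 1.0, 0.5   # crude starting values
    for _ in range(iters):
        # E-step: responsibility of class 2 for each observation
        resp = []
        for x in xs:
            d1 = (1 - p) * math.exp(-((x - mu1) ** 2) / (2 * sd * sd))
            d2 = p * math.exp(-((x - mu2) ** 2) / (2 * sd * sd))
            resp.append(d2 / (d1 + d2))
        # M-step: re-estimate means, shared sd, and mixing proportion
        n2 = sum(resp)
        n1 = len(xs) - n2
        mu1 = sum((1 - r) * x for r, x in zip(resp, xs)) / n1
        mu2 = sum(r * x for r, x in zip(resp, xs)) / n2
        var = sum((1 - r) * (x - mu1) ** 2 + r * (x - mu2) ** 2
                  for r, x in zip(resp, xs)) / len(xs)
        sd, p = math.sqrt(var), n2 / len(xs)
    return mu1, mu2, sd, p

mu1, mu2, sd, p = em_two_classes(data)   # recovers roughly 0, 3, 1, and .5
```

A factor mixture model extends this by allowing continuous (factor) variation within each recovered class.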
Grade of membership (GoM) models
In GoM models, one can also depart from a simple latent class model to integrate continuous features.
But the GoM model is not widely used in psychometric applications, probably due to a lack of readily accessible statistical software for applying it.
Network models and dynamical systems
It is possible that the transition to and from a psychiatric disorder proceeds as a categorical sudden transition for some individuals, whereas it is a smooth process of change for others.
Psychometric latent variable models represent differences in the structure of psychiatric constructs as differences in the distributional form of a latent variable, which acts as a common cause of the indicators.
Correlations between variables commonly seen as ‘indicators’ then arise from a network of causal effects among these variables themselves (they form mechanistic property clusters).
Individual differences in network structure may lead to different patterns of symptom dynamics.
Differences in dynamics across different network structures are important to the kinds vs continua discussion.
If present, discontinuous transitions have direct measurable consequences that may be exploited in further research.
Transitions from a healthy state to a disordered state are typically preceded by early warning signals that indicate that the system is close to a tipping point for a transition.
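The idea of a tipping point can be illustrated with a toy dynamical system (an illustrative sketch, not a model from the article): as a control parameter r creeps up, the system's state changes smoothly until a critical value, where the lower stable state vanishes and the state jumps abruptly to the other stable branch.

```python
# Toy saddle-node system: dx/dt = r + x - x**3
# Two stable states coexist for small r; the lower one vanishes near r = 0.385.
def simulate(r_values, x0=-1.0, dt=0.01, steps=2000):
    """Euler-integrate the system while the control parameter r slowly increases."""
    xs, x = [], x0
    for r in r_values:
        for _ in range(steps):
            x += dt * (r + x - x ** 3)
        xs.append(x)
    return xs

rs = [i / 100 for i in range(81)]   # r from 0.00 to 0.80
xs = simulate(rs)
# The state tracks the lower branch, then jumps discontinuously to the upper branch
```

Near the jump the system also recovers from perturbations more and more slowly (critical slowing down), which is the basis of the early warning signals mentioned above.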
Critical thinking
Article: Eaton, Krueger, Docherty, and Sponheim (2013)
Toward a Model-Based Approach to the Clinical Assessment of Personality Psychopathology
This paper illustrates how new statistical methods can inform conceptualization of personality psychopathology and therefore its assessment.
Structural assumptions about personality variables are inextricably linked to personality assessment.
The nature of the personality assessment instrument reflects assumptions about the distributional characteristics of the construct of interest.
Historically, many assumptions about the distributions of data reflecting personality constructs resulted from expert opinion or theory.
Both ‘type’ theories and dimensional theories have been proposed.
Assessment instruments have reflected this bifurcation in conceptualization.
Because the structure of personality assessment is reflective of the underlying distributional assumptions of the personality constructs of interest, reliance solely on expert opinion about these distributions is potentially problematic.
It is critical for personality theory and assessment that underlying distributional assumptions of symptomatology be correct and justifiable.
Critical thinking
Chapter 4 of Understanding Psychology as a science by Dienes
Bayes and the probability of hypotheses
Objective probability: a long-run relative frequency.
Classic (Neyman-Pearson) statistics can tell you the long-run relative frequency of different types of errors.
An alternative approach to statistics is to start with what Bayesians say are people’s natural intuitions.
People want statistics to tell them the probability of their hypothesis being right.
Subjective (personal) probability: the degree of conviction we have in a hypothesis.
Probabilities are in the mind, not in the world.
The initial problem to address in making use of subjective probabilities is how to assign a precise number to how probable you think a proposition is.
The initial personal probability that you assign to any theory is up to you.
Sometimes it is useful to express your personal convictions in terms of odds rather than probabilities.
Odds(theory is true) = probability(theory is true)/probability(theory is false)
Probability = odds/(odds +1)
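The two conversion formulas round-trip, which is easy to check in a few lines (a minimal sketch; the value p = .8 is just an example):

```python
def prob_to_odds(p):
    """Odds(theory is true) = p / (1 - p)."""
    return p / (1 - p)

def odds_to_prob(odds):
    """Probability = odds / (odds + 1)."""
    return odds / (odds + 1)

# A conviction of p = .8 corresponds to odds of about 4 to 1 in favour,
# and converting back recovers the probability
odds = prob_to_odds(0.8)
p = odds_to_prob(odds)
```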
These numbers we get from deep inside us must obey the axioms of probability.
This is the stipulation that ensures the way we change our personal probability in a theory is coherent and rational.
This is where the statistician comes in and forces us to be disciplined.
There are only a few axioms, each more-or-less self-evidently reasonable.
H is the hypothesis
D is the data
P(H and D) = P(D) x P(H|D)
P(H and D) = P(H) x P(D|H)
so
P(D) x P(H|D) = P(H) x P(D|H)
Dividing both sides by P(D):
P(H|D) = P(D|H) x P(H) / P(D)
This last one is Bayes theorem.
It tells you how to go from one conditional probability to its inverse.
We can simplify this equation if we are interested in comparing the probability of different hypotheses given the same data D.
Then P(D) is just a constant for all these comparisons.
P(H|D) is proportional to P(D|H) x P(H): the posterior is proportional to the likelihood times the prior.
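A worked example makes the theorem concrete (a hypothetical example, not from the chapter; the hypotheses, priors, and data are invented): two hypotheses about a coin, updated after observing 8 heads in 10 tosses.

```python
# H "fair": P(heads) = .5; H "biased": P(heads) = .8; equal priors
prior = {"fair": 0.5, "biased": 0.5}

def likelihood(p_heads, heads=8, tosses=10):
    """P(D|H) up to a binomial constant, for D = 8 heads in 10 tosses."""
    return p_heads ** heads * (1 - p_heads) ** (tosses - heads)

# Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D)
unnorm = {h: prior[h] * likelihood(p) for h, p in [("fair", 0.5), ("biased", 0.8)]}
p_d = sum(unnorm.values())          # P(D) just normalizes the posterior
posterior = {h: v / p_d for h, v in unnorm.items()}
# The biased hypothesis gains probability: the data favour it
```

Note that P(D) is the same constant for both hypotheses, which is why it can be dropped when only comparing them.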
Critical thinking
Article: Dienes, Z. (2011)
Bayesian versus orthodox statistics: Which side are you on?
doi: 10.1177/1745691611406920
The orthodox logic of statistics starts from the assumption that probabilities are long-run relative frequencies.
A long-run relative frequency requires an indefinitely large series of events that constitutes a collective; the probability of some property q occurring is then the proportion of events in the collective with property q.
The logic of Neyman-Pearson (orthodox) statistics is to adopt decision procedures with known long-term error rates and then control those errors at acceptable levels.
Thus, setting significance and power controls long-run error rates.
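The long-run reading of alpha can be checked by simulation (an illustrative sketch with invented settings, not from the article): repeat a z-test many times with the null hypothesis true, and the proportion of rejections approaches the significance level.

```python
import random
from statistics import NormalDist, mean

random.seed(0)
n, reps, alpha = 20, 4000, 0.05
se = 1 / n ** 0.5                               # standard error of the mean
z_crit = NormalDist().inv_cdf(1 - alpha / 2)    # two-sided critical value, about 1.96

# Repeat a z-test many times with H0 (mu = 0) true and count false rejections
false_alarms = sum(
    abs(mean(random.gauss(0, 1) for _ in range(n))) > z_crit * se
    for _ in range(reps)
)
rate = false_alarms / reps   # approaches alpha in the long run
```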
The probability of a theory being true given data can be symbolized as P(theory|data).
This is not what orthodox statistics tells us; orthodox statistics provides P(data|theory).
One cannot infer one conditional probability just by knowing its inverse. (So P(theory|data) remains unknown.)
Bayesian statistics starts from the premise that we can assign degrees of plausibility to theories, and that data tell us how to adjust these plausibilities.
Critical thinking
Article: Borsboom, D. and Cramer, A. O. J. (2013)
Network Analysis: An Integrative Approach to the Structure of Psychopathology
doi: 10.1146/annurev-clinpsy-050212-185608
The current dominant paradigm of the disease model of psychopathology is problematic.
Current handling of psychopathology data is predicated on traditional psychometric approaches that are the technical mirror of this paradigm.
In these approaches, observables (clinical symptoms) are explained by means of a small set of latent variables, just like symptoms are explained by disorders.
In this review, we argue that complex network approaches, which are currently being developed at the crossroads of various scientific fields, have the potential to provide a way of thinking about disorders that does justice to their complex organisation.
We know for certain that people suffer from symptoms and that these symptoms cluster in a non-arbitrary way.
For most psychopathological conditions, the symptoms are the only empirically identifiable causes of distress.
In order for a disease model to hold, it should be possible to conceptually separate conditions from symptoms.
This isn’t possible for mental disorders.
As an important corollary, this means that disorders cannot be causes of these symptoms.
Critical thinking
Article: Coyle, A. (2015)
Introduction to qualitative psychological research
Introduction
This chapter examines the development of psychological interest in qualitative methods in historical context and points to the benefits that psychology gains from qualitative research.
It also looks at some important issues and developments in qualitative psychology.
At its most basic, qualitative psychological research may be regarded as involving the collection and analysis of non-numerical data through a psychological lens in order to provide rich descriptions, and possibly explanations, of people's meaning-making: how they make sense of the world and how they experience particular events.
Qualitative research is bound up with particular sets of assumptions about the bases or possibilities of knowledge.
Epistemology: particular sets of assumptions about the bases or possibilities of knowledge.
Epistemology refers to a branch of philosophy that is concerned with the theory of knowledge and that tries to answer questions about how we can know what we know.
Ontology: the assumptions we make about the nature of being, existence or reality.
Different research approaches and methods are associated with different epistemologies.
The term ‘qualitative research’ covers a variety of methods with a range of epistemologies, resulting in a domain that is characterized by difference and tension.
The epistemology adopted by a particular study can be determined by a number of factors.
Whatever epistemological position is adopted in a study, it is usually desirable to ensure that you maintain this position consistently throughout the write-up to help produce a coherent research report.
Positivism: holds that the relationship between the world and our sense perception of the world is straightforward. There is a direct correspondence between things in the world and our perception of them provided that our perception is not skewed by factors that might damage that correspondence.
So, it is possible to obtain accurate knowledge of things in the world, provided we can adopt an impartial, unbiased, objective viewpoint.
Empiricism: holds that our knowledge of the world must arise from the collection and categorization of our sense perceptions/observations of the world.
This categorization allows us to develop more complex knowledge of the world.
Critical thinking
Article: Gigerenzer, G. & Marewski, J. N. (2015)
Surrogate Science: The Idol of a Universal Method for Scientific Inference
doi: 10.1177/0149206314547522
Introduction
Scientific inference should not be made mechanically.
Good science requires both statistical tools and informed judgment about what model to construct, what hypotheses to test, and what tools to use.
This article is about the idol of a universal method of statistical inference.
In this article, we make three points:
The null ritual
The most prominent creation of a seemingly universal inference method is the null ritual:
Level of significance has three different meanings:
Three meanings of significance
The alpha level: the long-term relative frequency of mistakenly rejecting H0 if it is true, also known as the Type I error rate.
The beta level: the long-term relative frequency of mistakenly retaining H0 if H1 is true, also known as the Type II error rate.
Two statistical hypotheses need to be specified in order to determine both alpha and beta.
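With both hypotheses fully specified, alpha and beta follow directly. A minimal sketch using Python's standard library (the test settings here are invented for illustration, not taken from the article):

```python
from math import sqrt
from statistics import NormalDist

# One-sided z-test with known sd: H0: mu = 0 versus H1: mu = 0.5, n = 25, sd = 1
n, sd, mu1, alpha = 25, 1.0, 0.5, 0.05
se = sd / sqrt(n)                              # standard error of the mean
crit = NormalDist(0, se).inv_cdf(1 - alpha)    # reject H0 if the sample mean exceeds this
beta = NormalDist(mu1, se).cdf(crit)           # Type II error rate under H1
power = 1 - beta                               # about .80 for these settings
```

Without a specified H1 (as in the null ritual), beta and power simply cannot be computed.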
Neyman and Pearson rejected setting the alpha level by mere convention in favour of a rational scheme for choosing it.
This is a list of the important terms used in the articles of the fourth block of WSRt, with the subject alternative approaches to psychological research.
Equivalence classes: sets of individuals who are exchangeable with respect to the attribute of interest.
Taxometrics: investigating latent structure by inspecting particular consequences of the model for specific statistical properties of (subsets of) items, such as the patterns of bivariate correlations expected to hold in the data.
Latent trait models: posit the presence of one or more underlying continuous distributions.
Zones of rarity: locations along the dimension that are (nearly) unoccupied by individuals.
Discrimination: the measure of how strongly the item taps into the latent trait.
Quasi-continuous: the construct would be bounded at the low end by zero, a complete absence of the quality corresponding with the construct.
Latent class models: based on the supposition of a latent group (class) structure for a construct’s distribution.
Conditional independence: that inter-item correlations solely reflect class membership.
Hybrid models (of factor mixture models): combine the continuous aspects of latent trait models with the discrete aspects of latent class models.
EFMA: exploratory factor mixture analysis.
Objective probability: a long-run relative frequency.
Subjective probability: the subjective degree of conviction in a hypothesis.
The likelihood principle: the notion that all the information relevant to inference contained in data is provided by the likelihood.
Probability density distribution: the distribution used if the dependent variable can be assumed to vary continuously.
Credibility interval: the Bayesian equivalent of a confidence interval
The Bayes factor: the Bayesian equivalent of null hypothesis testing
Flat prior or uniform prior: you have no idea what the population value is likely to be
The three most important elements of Bayesian statistics are the prior, the likelihood, and the posterior.
The Bayes factor (B) compares the probability of an experimental theory to the probability of the null hypothesis.
It gives the means of adjusting your odds in a continuous way.
Weaknesses of the Bayesian approach are: