
      Scientific & Statistical Reasoning – Summary interim exam 3 (UNIVERSITY OF AMSTERDAM)

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 6

Bias can be detrimental to the parameter estimates (1), the standard errors and confidence intervals (2) and the test statistics and p-values (3). Outliers and violations of assumptions are sources of bias.

      An outlier is a score very different from the rest of the data. They bias parameter estimates and have an impact on the error associated with that estimate. Outliers have a strong effect on the sum of squared errors and this biases the standard deviation.

      There are several assumptions of the linear model:

      1. Additivity and linearity
        The scores on the outcome variable are linearly related to any predictors. If there are multiple predictors, their combined effect is best described by adding them together.
      2. Normality
  The parameter estimates are influenced by a violation of normality, and the residuals of the model should be normally distributed. It is normality for each level of the predictor variable that is relevant. Normality is also important for confidence intervals and for null hypothesis significance testing.
3. Homoscedasticity / homogeneity of variance
        This impacts the parameters and the null hypothesis significance testing. It means that the variance of the outcome variable should not change between levels of the predictor variable. Violation of this assumption leads to bias in the standard error.
      4. Independence
        This assumption means that the errors in the model are not related to each other. The data has to be independent.

      The assumption of normality is mainly relevant in small samples. Outliers can be spotted using graphs (e.g. histograms or boxplots). Z-scores can also be used to find outliers.

      The P-P plot can be used to look for normality of a distribution. It is the expected z-score of a score against the actual z-score. If the expected z-scores overlap with the actual z-scores, the data will be normally distributed. The Q-Q plot is like the P-P plot but it plots the quantiles of the data instead of every individual score.

Kurtosis and skewness are two measures of the shape of the distribution. Positive values of skewness indicate a lot of scores on the left side of the distribution. Negative values of skewness indicate a lot of scores on the right side of the distribution. The further the value is from zero, the more likely it is that the data is not normally distributed.

      Normality can be checked by looking at the z-scores of the skewness and kurtosis. It uses the following formula:
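In standard form, the skewness (or kurtosis) estimate is divided by its standard error:

\[ z_{\text{skewness}} = \frac{S - 0}{SE_{\text{skewness}}} \qquad z_{\text{kurtosis}} = \frac{K - 0}{SE_{\text{kurtosis}}} \]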

Levene’s test is a one-way ANOVA on the absolute deviation scores. The homogeneity of variance can be tested using Levene’s test or by evaluating a plot of the standardized predicted values against the standardized residuals.

      REDUCING BIAS
      There are four ways of correcting problems with the data:

      1. Trim the data
        Delete a
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 8

      Variance of a single variable represents the average amount that the data vary from the mean. The cross-product deviation multiplies the deviation for one variable by the corresponding deviation for the second variable. The average value of the cross-product deviation is the covariance. This is an averaged sum of combined deviation. It uses the following formula:
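In standard form:

\[ \operatorname{cov}(x, y) = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{N - 1} \]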

      A positive covariance indicates that if one variable deviates from the mean, the other variable deviates in the same direction. A negative covariance indicates that if one variable deviates from the mean, the other variable deviates in the opposite direction.

      Covariance is not standardized and depends on the scale of measurement. The standardized covariance is the correlation coefficient and is calculated using the following formula:
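In standard form, the covariance is divided by the product of the two standard deviations:

\[ r = \frac{\operatorname{cov}(x, y)}{s_x s_y} = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{(N - 1)\, s_x s_y} \]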

A correlation coefficient of ±0.1 represents a small effect. Values of ±0.3 represent a medium effect and values of ±0.5 represent a large effect.

       In order to test the null hypothesis of the correlation, namely that the correlation is zero, z-scores can be used. In order to use the z-scores, the distribution must be normal, but the r-sampling distribution is not normal. The following formula adjusts r in order to make the sampling distribution normal:
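The adjustment is Fisher’s z-transformation:

\[ z_r = \frac{1}{2}\ln\left(\frac{1 + r}{1 - r}\right) \]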

      The standard error uses the following formula:
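In standard form:

\[ SE_{z_r} = \frac{1}{\sqrt{N - 3}} \]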

      This leads to the following formula for z:
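That is:

\[ z = \frac{z_r}{SE_{z_r}} \]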

      The null hypothesis of correlations can also be tested using the t-score with degrees of freedom N-2:
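In standard form:

\[ t_r = \frac{r\sqrt{N - 2}}{\sqrt{1 - r^2}} \]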

The confidence intervals for the correlation use the same formula as all the other confidence intervals. These values have to be converted back to a correlation coefficient using the following formula:
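The standard back-transformation (the inverse of Fisher’s z) is:

\[ r = \frac{e^{2 z_r} - 1}{e^{2 z_r} + 1} \]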

      CORRELATION
      Normality in correlation is only important if the sample size is small (1), there is significance testing (2) or there is a confidence interval (3). The assumptions of correlation are normality (1) and linearity (2).

The correlation coefficient squared (R2) is a measure of the amount of variability in one variable that is shared by the other. Spearman’s correlation coefficient (rs) is a non-parametric statistic that is used to minimize the effects of extreme scores or the effects of violations of the assumptions. Spearman’s correlation coefficient works best if the data is ranked. Kendall’s tau, denoted by τ, is a non-parametric statistic that is used when the data set is small with a large set of tied ranks.

A biserial or point-biserial correlation is used when a relationship between two variables is investigated when one of the two variables is dichotomous (e.g. yes or no).

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 9

      Any straight line can be defined by the slope (1) and the point at which the line crosses the vertical axis of the graph (intercept) (2). The general formula for the linear model is the following:
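With one predictor, the standard form is:

\[ Y_i = b_0 + b_1 X_i + \varepsilon_i \]

where b0 is the intercept, b1 the slope and εi the error for case i.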

      Regression analysis refers to fitting a linear model to data and using it to predict values of an outcome variable (dependent variable) from one or more predictor variables (independent variables). The residuals are the differences between what the model predicts and the actual outcome. The residual sum of squares is used to assess the ‘goodness-of-fit’ of the model on the data. The smaller the residual sum of squares, the better the fit.

      Ordinary least squares regression refers to defining the regression models for which the sum of squared errors is the minimum it can be given the data. The sum of squared differences is the total sum of squares and represents how good the mean is as a model of the observed outcome scores. The model sum of squares represents how well the model can predict the data. The larger the model sum of squares, the better the model can predict the data. The residual sum of squares uses the differences between the observed data and the model and shows how much of the data the model cannot predict.

      The proportion of improvement due to the model compared to using the mean as a predictor can be calculated using the following formula:
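In standard notation (model sum of squares relative to the total sum of squares):

\[ R^2 = \frac{SS_M}{SS_T} \]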

      This value represents the amount of variance in the outcome explained by the model relative to how much variation there was to explain. The F-statistic can be calculated using the following formulas:
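In standard form (with N cases and k predictors):

\[ F = \frac{MS_M}{MS_R}, \qquad MS_M = \frac{SS_M}{k}, \qquad MS_R = \frac{SS_R}{N - k - 1} \]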

‘k’ denotes the number of predictors, which is also the model’s degrees of freedom.

The F-statistic can also be used to test the significance of R2, with the null hypothesis being that R2 is zero. It uses the following formula:
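In standard form:

\[ F = \frac{(N - k - 1)\, R^2}{k\,(1 - R^2)} \]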

      Individual predictors can be tested using the t-statistic.

      BIAS IN LINEAR MODELS
      An outlier is a case that differs substantially from the main trend in the data. Standardized residuals can be used to check which residuals are unusually large and can be viewed as an outlier. Standardized residuals are residuals converted to z-scores. Standardized residuals greater than 3.29 are considered an outlier (1), if more than 1% of the sample cases have a standardized residual of greater than 2.58, the level of error in the model may be unacceptable (2) and if more than 5% of the cases have standardized residuals with an absolute value greater than 1.96, the model may be a poor representation of the data (3).

      The studentized residual is the unstandardized residual divided

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 11

      Moderation refers to the combined effect of two or more predictor variables on an outcome. This is also known as an interaction effect. A moderator variable is one that affects the relationship between two others. It affects the strength or direction of the relationship between the variables.

      The interaction effect indicates whether moderation has occurred. The predictor and the moderator must be included for the interaction term to be valid. If, in the linear model, the interaction effect is included, then the individual predictors represent the regression of the outcome on that predictor when the other predictor is zero.

      The predictors are often transformed using grand mean centring. Centring refers to transforming a variable into deviations around a fixed point. This fixed point is typically the grand mean. Centring is important when the model contains an interaction effect, as it makes the bs for lower-order effects interpretable. It makes interpreting the main effects easier (lower-order effects) if the interaction effect is not significant.

      The bs of individual predictors can be interpreted as the effect of that predictor at the mean value of the sample (1) and the average effect of the predictor across the range of scores for the other predictors (2) when the variables are centred.

In order to interpret a (significant) moderation effect, a simple slopes analysis needs to be conducted. This compares the relationship between the predictor and the outcome at low and high levels of the moderator. SPSS gives a zone of significance: between two values of the moderator the predictor does not significantly predict the outcome, and below and above those values it does.

      The steps for moderation are the following if there is a significant interaction effect: centre the predictor and moderator (1), create the interaction term (2), run a forced entry regression with the centred variables and the interaction of the two centred variables (3).
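As an illustration of these three steps, a minimal sketch in Python (pandas/statsmodels, simulated data, hypothetical variable names predictor, moderator and outcome) could look like this; the book itself works in SPSS, so this only mirrors the steps, it is not the author's procedure:

# Minimal sketch of the moderation steps on simulated data (hypothetical variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({"predictor": rng.normal(5, 2, n), "moderator": rng.normal(3, 1, n)})
df["outcome"] = (2 + 0.5 * df["predictor"] + 0.3 * df["moderator"]
                 + 0.4 * df["predictor"] * df["moderator"] + rng.normal(0, 1, n))

# 1. Grand mean centre the predictor and the moderator
df["pred_c"] = df["predictor"] - df["predictor"].mean()
df["mod_c"] = df["moderator"] - df["moderator"].mean()

# 2. Create the interaction term from the centred variables
df["interaction"] = df["pred_c"] * df["mod_c"]

# 3. Forced entry regression with the centred predictors and their interaction
model = smf.ols("outcome ~ pred_c + mod_c + interaction", data=df).fit()
print(model.summary())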

The simple slopes analysis gives three models. One model for a predictor when the moderator value is low (1), one model for a predictor when the moderator value is at the mean (2) and one model for a predictor when the moderator value is high (3).

      If the interaction effect is significant, then the moderation effect is also significant.

      MEDIATION
      Mediation refers to a situation when the relationship between the predictor variable and an outcome variable can be explained by their relationship to a third variable, the mediator. Mediation can be tested through three linear models:

      1. A linear model predicting the outcome from the predictor variable (c).
      2. A linear model predicting the mediator from the predictor variable (a).
      3. A linear model predicting the outcome from both the predictor variable and the mediator (predictor = c’ and mediator = b).
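A minimal sketch of these three models in Python (statsmodels, simulated data, hypothetical variable names) could look as follows; it only illustrates the structure of the three regressions, not the author's own procedure:

# Sketch of the three mediation models on simulated data (hypothetical variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
predictor = rng.normal(0, 1, n)
mediator = 0.5 * predictor + rng.normal(0, 1, n)
outcome = 0.4 * mediator + 0.2 * predictor + rng.normal(0, 1, n)
df = pd.DataFrame({"predictor": predictor, "mediator": mediator, "outcome": outcome})

m1 = smf.ols("outcome ~ predictor", data=df).fit()             # model 1: total effect (c)
m2 = smf.ols("mediator ~ predictor", data=df).fit()            # model 2: path a
m3 = smf.ols("outcome ~ predictor + mediator", data=df).fit()  # model 3: c' (predictor) and b (mediator)
print(m1.params, m2.params, m3.params, sep="\n")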

      There are four conditions for mediation: the predictor variable must significantly predict the outcome variable (in model 1)(1), the predictor variable must significantly predict the mediator

“Foster (2010). Causal inference and developmental psychology.” – Article summary

      The problem of causality is difficult in developmental psychology, as many questions of that field regard factors that a person cannot be randomly assigned to (e.g. single parent family). Causal inference refers to the study and measurement of cause-and-effect relationships outside of random assignment.

      In the current situation in developmental psychology, it is unclear among researchers whether causality can be implied and why. Causal inferences are necessary for the goals of developmental psychology because causal inferences can improve the lives of people (1), can help distinguish between associations and causal claims for laypeople (2) and causal thinking is unavoidable (3).

      The directed acyclic graph (DAG) is a tool which is useful in moving from associations to causal relationships. It is particularly useful in identifying covariates and understanding the anticipated consequences of incorporating these variables.

      The DAG is a symbolic representation of dependencies among variables. The causal Markov assumption states that the absence of a path (in the DAG) implies the absence of a relationship. In the DAG, models that represent data with fewer links are preferred to the more complex (parsimony). If two variables are simultaneously determined, the DAG could incorporate this possibility by treating the two as reflecting a common cause.

      Variables (in the DAG) can be related in three ways:

      1. Z is a common cause of X and Y
        In this case, Z needs to be controlled for.
      2. Z is a common effect of X and Y
        This is a collider. Conditioning on a collider creates a spurious relationship between X and Y. This relationship can suppress or inflate a true causal effect.
      3. Z mediates the effect of X on Y

         

“Pearl (2018). Confounding and deconfounding: Or, slaying the lurking variable.” - Article summary

      Confounding bias occurs when a variable influences both who is selected for the treatment and the outcome of the experiment. If a possible confounding variable is known, it is possible to control for the possible confounding variable. Researchers tend to control for all possible variables, which leaves the possibility of controlling for the thing you are trying to measure (e.g. controlling for mediators).

Confounding needs a causal solution, not a statistical one, and causal diagrams provide a complete and systematic way of finding that solution. If all the confounders are controlled for, a causal claim can be made. However, it is not always certain whether all confounders have been controlled for.

      Randomization has two clear benefits. It eliminates confounder bias and it enables the researcher to quantify his uncertainty. Randomization eliminates confounders without introducing new confounders. In a non-randomized study, confounders must be eliminated by controlling for them, although it is not always possible to know all the possible confounders.

It is not always possible to conduct a randomized controlled experiment because of ethical, practical or other constraints. Causal estimates from observational studies can provide provisional causality. This is causality contingent upon the set of assumptions that the causal diagram advertises.

      Confounding stands for the discrepancy between what we want to assess (the causal effect) and what we actually do assess using statistical methods. A mediator is the variable that explains the causal effect of X on Y (X>Z>Y). If you control for a mediator, you will conclude that there is no causal link, when there is.

      There are several rules for controlling for possible confounders:

      1. In a chain junction (A -> B -> C), controlling for B prevents information from A getting to C and vice versa.
      2. In a fork or confounding junction (A <- B -> C), controlling for B prevents information from A getting to C and vice versa.
      3. In a collider (A -> B <- C), controlling for B will allow information from A getting to C and vice versa.
      4. Controlling for a mediator partially closes the stream of information. Controlling for a descendant of a collider partially opens the stream of information.

      A variable that is associated with both X and Y is not necessarily a confounder.

“Shadish (2008). Critical thinking in quasi-experimentation.” - Article summary

A common element in all experiments is the deliberate manipulation of an assumed cause followed by an observation of the effects that follow. A quasi-experiment is an experiment that does not use random assignment of participants to conditions.

      An inus condition is an insufficient but non-redundant part of an unnecessary but sufficient condition. It is insufficient, because in itself it cannot be the cause, but it is also non-redundant as it adds something that is unique to the cause. It is an insufficient cause.

Most causal relationships are non-deterministic. They do not guarantee that an effect will occur, as most causes are inus conditions, but they increase the probability that an effect will occur. To different degrees, all causal relationships are contextually dependent.

      A counterfactual is something that is contrary to fact. An effect is the difference between what did happen and what would have happened. The counterfactual cannot be observed. Researchers try to approximate the counterfactual, but it is impossible to truly observe it.

      Two central tasks of experimental design are creating a high-quality but imperfect source of counterfactual and understanding how this source differs from the experimental condition.

      Creating a good source of counterfactual is problematic in quasi-experiments. There are two tools to attempt this:

      1. Observe the same unit over time
      2. Make the non-random control groups as similar as possible to the treatment group

      A causal relationship exists if the cause preceded the effect (1), the cause was related to the effect (2) and there is no plausible alternative explanation for the effect other than the cause (3). Although quasi-experiments are flawed compared to experimental studies, they improve on correlational studies in two ways:

      1. Quasi-experiments make sure the cause precedes the effect by first manipulating the presumed cause and then observing an outcome afterwards.
2. Quasi-experiments allow the researcher to control for some third-variable explanations.

Campbell’s list of threats to valid causal inference describes common group differences within a general system of threats to valid causal inference:

      1. History
  Events occurring concurrently with the treatment could cause the observed effect.
      2. Maturation
  Naturally occurring changes over time, not to be confused with treatment effects.
      3. Selection
        Systematic differences over conditions in respondent characteristics.
      4. Attrition
        A loss of participants can produce artificial effects if that loss is systematically correlated with conditions.
      5. Instrumentation
        The instruments of measurement might differ or change over time.
      6. Testing
        Exposure to a test can affect subsequent scores on a test.
      7. Regression to the mean
  An extreme observation will tend to be less extreme on a second observation.

      Two flaws of falsification are that it requires a causal claim to be clear, complete and agreed upon in all its details and it requires observational procedures to perfectly reflect the theory that is being tested.

“Kievit et al. (2013). Simpson’s paradox in psychological science: A practical guide.” - Article summary

Simpson’s paradox states that the direction of an association at the population level may be reversed within subgroups of that population. Inadequate attention to Simpson’s paradox may lead to faulty inferences. Simpson’s paradox can arise because of differences in proportions on subgroup levels compared to population levels. It also states that a pattern (association) does not need to hold within a subgroup.

      The paradox is related to a lot of things, including causal inference. A generalized conclusion (e.g. extraversion causes party-going) might hold for the general population, but does not mean that this inference can be drawn at the individual level. A correlation across the population does not need to hold in an individual over time.

      In order to deal with Simpson’s paradox, the situations in which the paradox occurs frequently have to be assessed. There are several steps in preventing Simpson’s paradox:

      1. Consider when it occurs.
      2. Explicitly propose a mechanism, determining at which level it is presumed to operate.
      3. Assess whether the explanatory level of data collection aligns with the explanatory level of the proposed mechanism.
      4. Conduct an experiment to assess the association between variables.

      In the absence of strong top-down knowledge, people are more likely to make false inferences based on Simpson’s paradox.

“Dienes (2008). Understanding psychology as a science.” – Article summary

A falsifier of a theory is any potential observation statement that would contradict the theory. There are different degrees of falsifiability, as some theories require fewer data points to be falsified than others. In other words, simple theories should be preferred as these theories require fewer data points to be falsified. The greater the universality of a theory, the more falsifiable it is.

      A computational model is a computer simulation of a subject. It has free parameters, numbers that have to be set (e.g. number of neurons used in a computational model of neurons). When using computational models, more than one model will be able to fit the actual data. However, the most falsifiable model that has not been falsified by the data (fits the data) should be used.

      A theory should only be revised or changed to make it more falsifiable. Making it less falsifiable is ad hoc. Any revision or amendment to the theory should also be falsifiable. Falsifia

Standard statistics are based on objective probability: the long-run relative frequency. This does not, however, give the probability of a hypothesis being correct.

      Subjective probability refers to the subjective degree of conviction in a hypothesis. The subjective probability is based on a person’s state of mind. Subjective probabilities need to follow the axioms of probability.

      Bayes’ theorem is a method of getting from one conditional probability (e.g. P(A|B)) to the inverse. The subjective probability of a hypothesis is called the prior. The posterior is how probable the hypothesis is to you after data collection. The probability of obtaining the data given the hypothesis is called the likelihood (e.g. P(D|H). The posterior is proportional to the likelihood times the prior. Bayesian statistics is updating the personal conviction in light of new data.
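In symbols, with H the hypothesis and D the data:

\[ P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)} \]

so the posterior P(H|D) is proportional to the likelihood P(D|H) times the prior P(H).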

The likelihood principle states that all the information relevant to inference contained in data is provided by the likelihood. A hypothesis having the highest likelihood does not mean that it has the highest probability. A hypothesis having the highest likelihood means that the data support the hypothesis the most. The posterior probability does not depend on the likelihood alone; it also depends on the prior.

The probability distribution of a continuous variable is called the probability density distribution. It has this name because a continuous variable has infinitely many possible values, so the distribution gives the probability of any interval rather than of a single value.

      A likelihood could be a probability or a probability density and it can also be proportional to a probability or a probability density. Likelihoods provide a continuous graded measure of support for different hypotheses.

      In Bayesian statistics (likelihood analysis), the data is fixed but the hypothesis can vary. In significance testing, the hypothesis is fixed (null hypothesis) but the data can vary. The height of the curve of the distribution for each hypothesis is relevant in calculating the likelihood. In significance testing, the tail area of

“Marewski & Olsson (2009). Formal modelling of psychological processes.” - Article summary

      One way of avoiding the null hypothesis testing ritual in science is to increase the precision of theories by casting them as formal models. Rituals can be characterized by a repetition of the same action (1), fixations on special features (2), anxieties about punishment for rule violation (3) and wishful thinking (4). The null hypothesis testing ritual is mainly maintained because many psychological theories are too weak to make precise predictions besides the direction of the effect.

      A model is a simplified representation of the world that aims to explain observed data. It specifies a theory’s predictions. Modelling is especially suited for basic and applied research about the cognitive system. There are four advantages of formally specifying the theories as models:

      1. Designing strong tests of theories
  Modelling theories makes it possible to derive quantitative predictions from them, which yields comparable, competing predictions across theories and thus allows theories to be compared and tested against each other.
      2. Sharpening research questions
        Null hypothesis testing allows for vague descriptions of theories and specifying the theories as models requires more precise research questions. These vague descriptions make theories difficult to test and sharpening the research questions makes it easier to test the theories.
      3. Going beyond linear theories
  Null hypothesis testing is especially applicable to simple hypotheses. The available statistical tools tend to shape theories into mostly linear ones; specifying the theory as a model removes this constraint.
      4. Using more externally valid designs to study real-world questions
        Modelling can lead to more externally valid designs, as confounds are not eliminated in the analysis, but built into the model.

Goodness-of-fit measures cannot make the distinction between variation in the data as a result of noise or as a result of the psychological process of interest. A model can end up overfitting the data, capturing the variance of the psychological process of interest and variance as a result of random error. The ability of a model to predict new data is the generalizability. The complexity of a model refers to a model’s inherent flexibility that enables it to fit diverse patterns of data. The complexity of a model is related to the degree to which a model is susceptible to overfitting. The number of free parameters (1) and how parameters are combined in the model (2) contribute to the model’s complexity.

      Increased complexity makes a model more likely to overfit while the generalizability to new data decreases. Increased complexity can also lead to better generalizability of the data, but only if the model is complex enough and not too complex. A good fit to current data does not predict a good fit to other data.

The irrelevant specification problem refers to the difficulty of bridging the gap between descriptions of theories and formal implementations. This can lead to unintended discrepancies between theories and their formal counterparts. Bonini’s paradox refers to when models become more complex and

“Dennis & Kintsch (2008). Evaluating theories.” - Article summary

      A theory is a concise statement about how we believe the world to be. There are several things to look at when evaluating theories:

1. Descriptive adequacy
  Does the theory accord with the available data?
2. Precision and interpretability
  Is the theory described in a sufficiently precise fashion that it is easy to interpret?
3. Coherence and consistency
  Are there logical flaws in the theory? Is it consistent with theories of other domains?
4. Prediction and falsifiability
  Can the theory be falsified?
5. Postdiction and explanation
  Does the theory provide a genuine explanation of existing results?
6. Parsimony
  Is the theory as simple as possible?
7. Originality
  Is the theory new or a restatement of an old theory?
8. Breadth
  Does the theory apply to a broad range of phenomena?
9. Usability
  Does the theory have applied implications?
10. Rationality
  Are the claims of the theory reasonable?

Postdiction refers to accounting for results that already exist, as opposed to prediction under controlled conditions.

“Furr & Bacharach (2014). Estimating and evaluating convergent and discriminant validity evidence.” - Article summary

There are four procedures to present the implications of a correlation in terms of our ability to use the correlations to make successful predictions:

1. Binomial effect size display (dichotomous)
  This illustrates the practical consequences of using correlations to make decisions. It can show how many successful and unsuccessful predictions can be made on the basis of a correlation, using the formula shown after this list. It translates a validity correlation into an intuitive framework, but it frames the situation in terms of an ‘equal proportions’ situation.
2. Taylor-Russell tables (dichotomous)
  These tables inform selection decisions and provide a probability that a prediction will result in a successful performance on a criterion. The size of the validity coefficient (1), selection proportion (2) and the base rate (3) are required for the tables.
3. Utility analysis
  This frames validity in terms of a cost-benefit analysis of test use.
4. Analysis of test sensitivity and test specificity
  A test is evaluated in terms of its ability to produce correct identifications of a categorical difference. This is useful for tests that are designed to detect a categorical difference.
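In its standard form (Rosenthal and Rubin’s binomial effect size display), a correlation r is converted into two ‘success rates’:

\[ \text{success rate} = 0.50 \pm \frac{r}{2} \]

so a validity correlation of r = .30, for example, is displayed as a 65% success rate in one group versus 35% in the other.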

      Validity correlations can be evaluated in the context of a particular area of research or application.

A nomological network refers to the interconnections between a construct and other related constructs. There are several methods to evaluate the degree to which measures show convergent and discriminant associations:

1. Focussed associations
        This method focusses on a few highly relevant criterion variables. This can make use of validity generalization.
      2. Sets of correlations
        This method focusses on a broad range of criterion variables and computes the correlations between the test and many criterion variables. The degree to which the pattern of correlations ‘makes sense’ given the conceptual meaning of the construct is evaluated.
      3. Multitrait-multimethod matrices
        This method obtains measures of several traits, each measured through several methods. The purpose is to set clear guidelines for evaluating convergent and discriminant validity evidence. This is done by evaluating trait variance and method variance. Evidence of convergent validity is represented by monotrait-heteromethod correlations.

The correlations between measures are called validity coefficients. Validity generalization is a process of evaluating a test’s validity coefficients across a large set of studies. Validity generalization studies are intended to evaluate the predictive utility of a test’s scores across a range of settings, times and situations. These studies can reveal the general level of predictive validity (1), reveal the degree of variability among the smaller individual studies (2) and reveal the source of the variability among studies (3).

       

“Furr & Bacharach (2014). Estimating practical effects: Binomial effect size display, Taylor-Russell tables, utility analysis and sensitivity / specificity.” – Article summary

      Validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by the proposed uses (e.g. to what degree does it measure what it is supposed to measure). Items of a test itself cannot be valid or invalid, only the interpretations can be valid or invalid.

      Validity is a property of the interpretation (1), it is a matter of degree (2) and the validity of a test’s interpretation is based on evidence and theory (3). Validity influences the accuracy of our understanding of the world, as research conclusions are based on the validity of a measure.

      Construct validity refers to the degree to which test scores can be interpreted as reflecting a particular psychological construct. Face validity refers to the degree to which a measure appears to be related to a specific construct, in the judgement of nonexperts, test takers and representatives of the legal system. Convergent validity refers to the degree to which test scores are correlated with tests of related constructs.

Validity is important for the accuracy of our understanding of the world (1), decisions on a societal level (2) (e.g. laws based on ‘invalid’ research) and decisions on an individual level (3) (e.g. college admissions).

      The validity of test score interpretation depends on five types of evidence: test content (1), consequences of use (2), association with other variables (3), response processes (4) and internal structure (5).

      Test content can be seen as content validity. There are two threats to content validity:

      1. A test including construct-irrelevant content
        The inclusion of content that is not relevant to the construct of interest reduces validity.
      2. Construct underrepresentation
        A test should include the full range of content that is relevant to the construct.

      Construct underrepresentation can be constrained by practical issues (e.g. time of a test). The internal structure of a test refers to the way the parts of a test are related to each other. There should be a proper match between the actual internal structure of a test and the internal structure a test should have. The internal structure can be examined through the correlations among items in the test and among the subscales in the test. This can be done using factor analysis.

      Factor analysis helps to clarify the number of factors within a set of items (1), reveals the associations among the factors within a multidimensional test (2) and identifies which items are linked to which factors (3). Factors are dimensions of the test.

      Response processes refers to the match between the psychological processes that respondents actually use when completing a measure and the processes that they should use.

      In order to assess validity, the association with other variables (e.g. happiness and self-esteem) should be assessed. If a positive relationship is to be expected between two variables, then, in order for the interpretation of a measure to be valid, this relationship needs to exist. The association with other variables involves the match between a measure’s actual associations with other measures

“Furr & Bacharach (2014). Scaling.” - Article summary

Scaling refers to assigning numerical values to psychological attributes. Individuals in a group should be similar to each other in the sense that they share a psychological feature. There are rules to follow in order to put people in categories:

      1. People in a category must be identical with respect to the feature that categorizes the group (e.g. hair colour).
      2. The groups must be mutually exclusive
      3. The groups must be exhaustive (e.g. everyone in the population can fall into a category).

      Each person should fall into one category and not more than one. If numerals are used to indicate order, then the numerals serve as labels indicating rank. If numerals have the property of quantity, then they convey information about the exact amounts of an attribute. Units of measurement are standardized quantities. The three levels of groups are identity (1), order (2) and quantity (3).

      There are two possible meanings of the number zero. It can be the absolute zero (1) (e.g. a reaction time of 0ms) or it can be an arbitrary quantity of an attribute (2). This is called the arbitrary zero. The arbitrary zero does not represent the absence of anything, rather, it is a point on a scale to measure that feature. A lot of psychological attributes use the arbitrary zero (e.g. social skill, self-esteem, intelligence).

A unit of measurement might be arbitrary because unit size may be arbitrary (1), some units of measurement are not tied to any one type of object (2) (e.g. centimetres can measure anything with a spatial property) and some units of measurement can be used to measure different features of the same object (3) (e.g. weight and length).

      One assumption of counting is additivity. This requires that unit size does not change. This would mean that an increase of one point is equal at every point. This is not always the case, as an IQ test asks increasingly difficult questions to increase one point of IQ. Therefore, the unit size changes.

      Counting only qualifies as measurement if it reflects the amount of some feature or attribute of an object. There are four scales of measurement:

      1. Nominal scale
        This is used to identify groups of people who share a common attribute that is not shared by people in other groups (e.g. ‘0’ for male and ‘1’ for female). It assesses the principle of identity.
      2. Ordinal scale
        This is used to rank people according to some attribute. It is used to make rankings within groups and cannot be used to make comparisons between groups, as this would require quantity. It assesses the principle of identity and order.
      3. Interval scale
        This is a scale that is used to represent quantitative difference between people. It assesses the principle of identity, order and quantity.
      4. Ratio scales
        This is a scale that has an absolute zero point. It satisfies the principle of identity, order, quantity and has an absolute zero.

      Psychological attributes might not be able to be put

“Mitchell & Tetlock (2017). Popularity as a poor proxy for utility.” - Article summary

Before the existence of the IAT, indirect measures of prejudice were developed in order to overcome response bias, and psychologists began to examine automatic processes that may contribute to contemporary forms of prejudice. After the introduction of the IAT, implicit prejudice became synonymous with widespread unconscious prejudices that are more difficult to spot and regularly infect intergroup interactions.

The IAT has been used throughout different areas of society and is a very popular means of describing implicit prejudice. Prejudice extends beyond negative or positive associations with an attitude object to include motivational and affective reactions to in-group and out-group members. The IAT does not have strong predictive validity. The IAT score is a poor predictor of discriminatory behaviour.

      There are no guidelines for how to interpret the scores on the IAT. This is referred to as the score interpretation problem. The test scores are dependent on arbitrary thresholds and it is not possible to link them to behaviour outcomes.

The focus of the IAT on implicit gender stereotypes (not implicit sexism) is problematic because implicit measures of gender stereotypes are not a good predictor of discriminatory behaviour (1), only a very limited set of implicit gender stereotypes has been examined (2) and no explanation is provided about how conflicts between automatic evaluative associations and automatic semantic associations are resolved (3).

      Individuating information, getting personal information about a certain group, exerts effects to counter explicit biases. It does the same with regard to implicit biases.

      Subjective evaluation criteria are not associated with discrimination. Therefore, the solution that only objective measures must be used in decision making to counter (implicit) bias is unnecessary. This is referred to as the subjective judgement problem.

“LeBel & Peters (2011). Fearing the future of empirical psychology: Bem’s (2011) evidence of psi as a case study of deficiencies in modal research practice.” - Article summary

Psi refers to the anomalous retroactive influence of future events on an individual’s current behaviour. There are three important deficiencies in modal research practice: an overemphasis on conceptual replication (1), insufficient attention to verifying the integrity of measurement instruments and experimental procedures (2) and problems with the implementation of null hypothesis testing (3).

The interpretation bias refers to a bias towards interpretations of data that favour a researcher’s theory. A potential consequence of this is an increased risk of reported false positives and a disregard of true negatives. The knowledge system of psychology consists of theory-relevant beliefs (1), which concern the mechanisms that produce behaviour, and method-relevant beliefs (2), which concern the procedures through which data are obtained.

Deficiencies in modal research practice systematically bias the interpretation of confirmatory data as theory relevant (1) and the interpretation of disconfirmatory data as method relevant (2).

      Central beliefs are beliefs on which many other beliefs depend. Conservatism refers to choosing the theoretical explanation consistent with the data that requires the least amount of restructuring of the existing knowledge system.

      If method-relevant beliefs are central in a knowledge system, it becomes more difficult to blame methodology related errors for disconfirmatory results. If theory-relevant beliefs become central, it poses the threat of becoming a logical assumption. A hypothesis under test should be described in a way that is falsifiable and not logically necessary.

An overemphasis on conceptual replication at the expense of direct replication weakens method-relevant beliefs in the knowledge system. A statistically significant result is often followed by a conceptual replication. A failure of the conceptual replication leads to the question of whether the negative result was due to the falsity of the underlying theory or to methodological flaws introduced by changes in the conceptual replication.

The failure to verify the integrity of measurement instruments and experimental procedures weakens method-relevant beliefs and leads to ambiguity in the interpretation of results. The null hypothesis can be viewed as a straw man, as two populations are almost never exactly identical. Basing theory choices on null hypothesis significance tests detaches theories from the broader knowledge system.

      In order to overcome the flaws of the modal research practice, method-relevant beliefs must be strengthened. There are three ways in order to do this:

      1. Stronger emphasis on direct replication
        A direct replication leads to greater confidence in the results. They are necessary to ensure that an effect is real.
      2. Verify integrity of methodological procedures
  Method-relevant beliefs are more difficult to reject if the integrity of methodological procedures is verified, and this leads to a less ambiguous interpretation of results. This includes routinely checking the internal consistency of the scores of any measurement instrument that is used and the use of objective markers of instruction comprehension.
      3. Use stronger forms of NHST
        The null hypothesis should be a theoretically derived point value of the focal variable, instead
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Book summary

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 1

      The research process generally starts with an observation. After the observation, relevant theories are consulted and hypotheses are generated, from which predictions are made. After that, data is collected to test the predictions and finally the data is analysed. The data analysis either supports or does not support the hypothesis. A theory is an explanation or set of principles that is well substantiated by repeated testing and explains a broad phenomenon. A theory should be able to explain all of the data. A hypothesis is a proposed explanation for a fairly narrow phenomenon or set of observations. Hypotheses are theory-driven. Predictions are often used to move from the conceptual domain to the observable domain to be able to collect evidence. Falsification is the act of disproving a hypothesis or theory. A scientific theory should be falsifiable and explain as much of the data as possible.

      DATA
Variables are things that can vary. An independent variable is a variable thought to be the cause of some effect and is usually manipulated in research. A dependent variable is a variable thought to be affected by changes in an independent variable. The predictor variable is a variable thought to predict an outcome variable (independent variable). The outcome variable is a variable thought to change as a function of changes in a predictor variable (dependent variable). The difference between dependent variables and outcome variables is that one is about experimental research and the other is applicable to both experimental and correlational research.

      The level of measurement is the relationship between what is being measured and the numbers that represent what is being measured. A categorical variable is made up of categories. There are three types of categorical variables:

      1. Binary variable
        A categorical variable with two options (e.g. ‘yes’ or ‘no’).
      2. Nominal variable
        A categorical variable with more than two options (e.g. hair colour).
      3. Ordinal variables
        A categorical variable that has been ordered (e.g. winner and runner-up)

      Nominal data can be used when considering frequencies. Ordinal data does not tell us anything about the difference between points on a scale. A continuous variable is a variable that gives us a score for each person and can take on any value. An interval variable is a continuous variable with equal differences between the intervals (e.g. the difference between a ‘9’ and a ‘10’ on a grade). Ratio variables are continuous variables in which the ratio has meaning (e.g. a rating of ‘4’ is twice as good as a rating of ‘2’). Ratio variables require a meaningful zero point. A discrete variable is a variable that can take on only certain values.

      Measurement error is the discrepancy between the numbers we use to represent the thing we’re measuring and the actual value of this thing. Self-report will produce larger measurement error. Validity is whether an instrument measures what it sets out to measure. Reliability is whether an instrument

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 2

      Many statistical models try to predict an outcome from one or more predictor variables. Statistics includes five things: Standard error (S), parameters (P), interval estimates (I), null hypothesis significance testing (N) and estimation (E), together making SPINE. Statistics often uses linear models, as this simplifies reality in an understandable way.

      All statistical models are based on the same thing:

      The data we observe can be predicted from the model we choose to fit plus some amount of error. The model will vary depending on the study design. The bigger a sample is, the more likely it is to reflect the whole population.
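In equation form, a sketch of this general statement:

\[ \text{outcome}_i = (\text{model}) + \text{error}_i \]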

      PARAMETER
      A parameter is a value of the population. A statistic is a value of the sample. A parameter can be denoted by ‘b’. The outcome of a model uses the following formula:
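A sketch of the referenced formula, assuming the standard linear model with two predictors:

\[ Y_i = b_0 + b_1 X_{1i} + b_2 X_{2i} + \varepsilon_i \]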

In this formula, ‘b’ denotes a parameter and ‘X’ denotes a predictor variable; here the outcome is calculated from two predictors. Degrees of freedom relate to the number of observations that are free to vary. When one parameter is held constant, the degrees of freedom are one fewer than the number of scores used to calculate that parameter, because the last score is not free to vary.

      ESTIMATION
      The method of least squares is minimizing the sum of squared errors. The smaller the error, the better your estimate is. When estimating a parameter, we try to minimize the error in order to have a better estimate.

      STANDARD ERROR
The standard deviation tells us how well the mean represents the sample data. The difference in means across samples is called sampling variation. Samples vary because they include different members of the population. A sampling distribution is the frequency distribution of sample means from the same population. The mean of the sampling distribution is equal to the population mean. The standard deviation of the sampling distribution tells us how widely sample means are spread around the population mean; this standard deviation is the standard error of the mean (SE). In principle the SE can be calculated by taking the difference between each sample mean and the overall mean, squaring these differences, adding them up, dividing by the number of samples and taking the square root; in practice it is estimated with the following formula:
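A sketch of the usual large-sample approximation (s is the sample standard deviation and N the sample size):

\[ SE = \frac{s}{\sqrt{N}} \]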

The central limit theorem states that when samples get large (greater than about 30), the sampling distribution approaches a normal distribution with a mean equal to the population mean and a standard deviation equal to the standard error above.

      If the sample is small (<30), the sampling distribution has a t-distribution shape.

      INTERVAL ESTIMATES
Population parameters cannot be known exactly, so confidence intervals are used to express the uncertainty around parameter estimates. Confidence intervals

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 3

      There are three main misconceptions of statistical significance:

      1. A significant result means that the effect is important
        Statistical significance is not the same as practical significance.
      2. A non-significant result means that the null hypothesis is true
        Rejecting the alternative hypothesis does not mean we accept the null hypothesis.
      3. A significant result means that the null hypothesis is false
  Rejecting the null hypothesis in favour of the alternative hypothesis does not prove that the null hypothesis is false; the decision is based on probability, so there remains some probability that the null hypothesis is in fact true.

      The use of NHST encourages ‘all-or-nothing’ thinking. A result is either significant or not. If a confidence interval contains zero, it could be that the population effect might be zero.

An empirical probability is the proportion of events that have the outcome you are interested in, within an indefinitely large collective of events. The p-value is the probability of getting a test statistic at least as large as the one observed, relative to all possible values of the null hypothesis, from an infinite number of identical replications of the experiment. It is the frequency of the observed test statistic relative to all possible values that could be observed in the collective of identical experiments. Because p-values are defined relative to this collective of identical experiments, they are affected by the researcher's intentions: decisions such as the intended sample size and when to stop collecting data change the collective and therefore change the p-value.

      In journals, based on NHST, there is a publication bias. Significant results are more likely to get published. Researcher degrees of freedom are ways in which the researcher could influence the p-value. This could be used to make it more likely to find a significant result (e.g. by excluding some cases to make the result significant). Researcher degrees of freedom could include not using some observations and not publishing key findings.

P-hacking refers to selective reporting of significant p-values by trying multiple analyses and reporting only the significant ones. HARKing refers to making a hypothesis after data collection and presenting it as if it had been made before data collection. P-hacking and HARKing make results difficult to replicate. Tests of excess success (e.g. looking at multiple studies of the same effect and calculating the probability of all of them being successful) are used to see whether it is likely that p-hacking or something similar has occurred.

      EMBERS
      There is an abbreviation for how to tackle the problems of NHST: Effect sizes (E), Meta-analysis (M), Bayesian Estimation (BE), Registration (R) and Sense (S), together making EMBERS.

      SENSE
      There are six principles for when using NHST in order to use your sense:

      1. The exact p-value can indicate how incompatible the data are with the null hypothesis.
      2. P-values are not interpreted as the probability that the hypothesis is true.
      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 5

      A good graph has the following properties:

1. Show the data
      2. Induce the reader to think about the presented data
      3. Avoid distorting data
      4. Present many numbers with minimum ink
      5. Make large data sets coherent
      6. Encourage the reader to compare different pieces of data
      7. Reveal the underlying message of data

      There are also some graph building guidelines:

      1. If plotting two variables, never use 3-D plots
      2. Do not use unnecessary patterns in the bars
      3. Do not use cylinder shaped bars if that is not functional
      4. Properly label the x- and y-axis.

      HISTOGRAMS
      There are different types of histograms:

      1. Simple histogram
        Visualize frequencies of scores for a single variable
      2. Stacked histogram
        Compare relative frequencies of scores across groups
      3. Frequency polygon
        The same as a simple histogram, but uses a line, instead of a bar
      4. Population pyramid
        Comparing distributions across groups and the relative frequencies of scores in two populations.

      BOXPLOTS
A box-plot or box-whisker diagram uses the median as the centre of the plot. The median is surrounded by a box whose edges are the lower and upper quartiles (the values below which 25% and 75% of the data fall). There are several types of boxplots:

      1. 1-D boxplot
        A single boxplot for all scores of the chosen outcome
      2. Simple boxplot
        Multiple boxplots for the chosen outcome by splitting the data by a categorical variable
      3. Clustered boxplot
        A simple boxplot, but it splits the data by a second categorical variable.

      BAR CHARTS
      Bar charts are often used to display means. There are different types of bars:

      1. Simple bar
        The means of scores across different groups or categories.
      2. Clustered bar
  Different coloured bars to represent levels of a second grouping variable (e.g. film rating and excitement and enjoyment)
      3. Stacked bar
        Clustered bar, but the bars are stacked.
      4. Simple 3-D bar
        Second grouping variable is represented by an additional axis
      5. Clustered 3-D bar
        A clustered bar, but an extra categorical variable can be added on an extra axis
      6. Stacked 3-D bar
        A 3-D clustered bar, but the bars are stacked
7. Simple error bar
  Like a simple bar chart, but each mean is shown as a dot with an error bar rather than as a bar
8. Clustered error bar
  Like a clustered bar chart, but each mean is shown as a dot with an error bar around it.

      LINE CHARTS
      Line charts are bar charts but with lines instead of bars. There are two types of line charts:

      1. Simple line
        The means of scores across different groups of cases
      2. Multiple line
        This is equivalent to the clustered bar chart.

      SCATTERPLOTS
      A scatterplot is a graph that plots each person’s score on one variable against their score on another. There are several types of scatterplots:

      1. Simple scatter
        A scatterplot of
      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 6

      Bias can be detrimental for the parameter estimates (1), standard errors and confidence intervals (2) and the test statistics and p-values (3). Outliers and violations of assumptions are forms of bias.

      An outlier is a score very different from the rest of the data. They bias parameter estimates and have an impact on the error associated with that estimate. Outliers have a strong effect on the sum of squared errors and this biases the standard deviation.

      There are several assumptions of the linear model:

      1. Additivity and linearity
        The scores on the outcome variable are linearly related to any predictors. If there are multiple predictors, their combined effect is best described by adding them together.
      2. Normality
        The parameter estimates are influenced by a violation of normality and the residuals of the parameters should be normally distributed. It is normality for each level of the predictor variable that is relevant. Normality is also important for confidence intervals and for null hypothesis significance testing.
3. Homoscedasticity / homogeneity of variance
        This impacts the parameters and the null hypothesis significance testing. It means that the variance of the outcome variable should not change between levels of the predictor variable. Violation of this assumption leads to bias in the standard error.
      4. Independence
        This assumption means that the errors in the model are not related to each other. The data has to be independent.

      The assumption of normality is mainly relevant in small samples. Outliers can be spotted using graphs (e.g. histograms or boxplots). Z-scores can also be used to find outliers.

      The P-P plot can be used to look for normality of a distribution. It is the expected z-score of a score against the actual z-score. If the expected z-scores overlap with the actual z-scores, the data will be normally distributed. The Q-Q plot is like the P-P plot but it plots the quantiles of the data instead of every individual score.

Kurtosis and skewness are two measures of the shape of the distribution. Positive skewness indicates that scores pile up on the left (lower) side of the distribution, with a tail towards higher scores. Negative skewness indicates that scores pile up on the right (higher) side of the distribution. The further either value is from zero, the more likely it is that the data are not normally distributed.

      Normality can be checked by looking at the z-scores of the skewness and kurtosis. It uses the following formula:
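A sketch of the referenced calculation, assuming the standard approach of dividing each statistic by its standard error:

\[ z_{\text{skewness}} = \frac{S - 0}{SE_{\text{skewness}}}, \qquad z_{\text{kurtosis}} = \frac{K - 0}{SE_{\text{kurtosis}}} \]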

Levene’s test is a one-way ANOVA on the absolute deviations of each score from its group mean. Homogeneity of variance can be tested using Levene’s test or by evaluating a plot of the standardized predicted values against the standardized residuals.

      REDUCING BIAS
      There are four ways of correcting problems with the data:

      1. Trim the data
        Delete a
      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 7

      Non-parametric tests can be used when the assumptions of the regular statistical tests have been violated. Non-parametric tests use fewer assumptions and are robust. A non-parametric test has less power than parametric tests if the sampling distribution is normally distributed.

Ranking the data refers to giving the lowest score a rank of 1, the next lowest score a rank of 2 and so on. This eliminates the effect of outliers, but it neglects the difference in magnitude between scores. If two scores are the same, there are tied ranks. Tied scores are given the average of the ranks they would otherwise occupy (e.g. a tie across ranks 3 and 4 becomes rank 3.5).
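As a small illustration of average ranks for ties, a sketch in Python (the scores are invented; SciPy is assumed to be available):

from scipy.stats import rankdata

scores = [12, 15, 15, 20]
# Tied scores (the two 15s) share the average of the ranks they would occupy
print(rankdata(scores, method="average"))  # [1.  2.5 2.5 4. ]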

      There are several alternatives to the four most used non-parametric tests:

      1. Kolmogorov-Smirnov Z
        It tests whether two groups have been drawn from the same population. It has more power than the Mann-Whitney test when the sample sizes are less than 25 per group.
      2. Moses Extreme Reaction
        It tests the variability of scores across the two groups and is a non-parametric form of the Levene’s test.
      3. Wald-Wolfowitz runs
        It looks at clusters of scores in order to determine whether the groups differ. If there is no difference, the ranks should be randomly interspersed.
4. Sign test
  It does the same as the Wilcoxon signed-rank test but is based only on the direction of the difference; the magnitude of change is neglected. It lacks power unless sample sizes are very small.
      5. McNemar’s test
        It uses nominal, rather than ordinal data. It is useful when looking for changes in people’s scores. It compares the number of people who changed their response in one direction to those who changed in the opposite direction.
      6. Marginal homogeneity
        It is an extension of McNemar’s test and is similar to the Wilcoxon test.
7. Friedman’s 2-way ANOVA by ranks (k samples)
  A non-parametric alternative to repeated-measures ANOVA for comparing k related groups; with only two conditions it has low power compared to the Wilcoxon signed-rank test.
      8. Median test
        It assesses whether samples are drawn from a population with the same median.
9. Jonckheere-Terpstra
  It tests for trends in the data: an ordered pattern in the medians of the groups. It does the same as the Kruskal-Wallis test but incorporates the order of the groups. This test should be used when a meaningful order of medians is expected.
      10. Kendall’s W
        It tests the agreement between raters and ranges between 0 and 1.
      11. Cochran’s Q
        It is a Friedman test on dichotomous data.

      The effect size for both the Wilcoxon rank-sum test and the Mann-Whitney test can be calculated using the following formula:
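A sketch of the referenced formula (z is the z-score of the test statistic):

\[ r = \frac{z}{\sqrt{N}} \]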

N denotes the total sample size.

      WILCOXON RANK-SUM TEST
      This test can be used to

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 8

      Variance of a single variable represents the average amount that the data vary from the mean. The cross-product deviation multiplies the deviation for one variable by the corresponding deviation for the second variable. The average value of the cross-product deviation is the covariance. This is an averaged sum of combined deviation. It uses the following formula:
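A sketch of the referenced formula (the standard definition of the sample covariance):

\[ \text{cov}(x, y) = \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}{N - 1} \]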

      A positive covariance indicates that if one variable deviates from the mean, the other variable deviates in the same direction. A negative covariance indicates that if one variable deviates from the mean, the other variable deviates in the opposite direction.

      Covariance is not standardized and depends on the scale of measurement. The standardized covariance is the correlation coefficient and is calculated using the following formula:
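A sketch of the referenced formula, dividing the covariance by the product of the two standard deviations:

\[ r = \frac{\text{cov}(x, y)}{s_x s_y} \]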

A correlation coefficient of ±0.1 represents a small effect, ±0.3 represents a medium effect and ±0.5 represents a large effect.

       In order to test the null hypothesis of the correlation, namely that the correlation is zero, z-scores can be used. In order to use the z-scores, the distribution must be normal, but the r-sampling distribution is not normal. The following formula adjusts r in order to make the sampling distribution normal:

      The standard error uses the following formula:

      This leads to the following formula for z:

      The null hypothesis of correlations can also be tested using the t-score with degrees of freedom N-2:
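As a sketch of the standard large-sample approach described above (Fisher's z-transformation of r, its standard error, the z-statistic and the t-statistic):

\[ z_r = \frac{1}{2}\ln\!\left(\frac{1 + r}{1 - r}\right), \qquad SE_{z_r} = \frac{1}{\sqrt{N - 3}}, \qquad z = \frac{z_r}{SE_{z_r}}, \qquad t_r = \frac{r\sqrt{N - 2}}{\sqrt{1 - r^2}} \]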

The confidence interval for the correlation is constructed in z-units using the same formula as other confidence intervals; its limits then have to be converted back to correlation coefficients using the following formula:
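A sketch of the back-transformation from the z-scale to a correlation:

\[ r = \frac{e^{2 z_r} - 1}{e^{2 z_r} + 1} \]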

      CORRELATION
      Normality in correlation is only important if the sample size is small (1), there is significance testing (2) or there is a confidence interval (3). The assumptions of correlation are normality (1) and linearity (2).

The correlation coefficient squared (R²) is a measure of the amount of variability in one variable that is shared by the other. Spearman’s correlation coefficient (rs) is a non-parametric statistic that is used to minimize the effects of extreme scores or violations of assumptions; it works by first ranking the data. Kendall’s tau, denoted by τ, is a non-parametric statistic that is used when the data set is small with a large number of tied ranks.

      A biserial or point-biserial correlation is used when a relationship between two variables is investigated when one of the two variables is dichotomous (e.g. yes

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 9

      Any straight line can be defined by the slope (1) and the point at which the line crosses the vertical axis of the graph (intercept) (2). The general formula for the linear model is the following:
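A sketch of the standard form (b0 is the intercept, b1 the slope and ε the error):

\[ Y_i = b_0 + b_1 X_i + \varepsilon_i \]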

      Regression analysis refers to fitting a linear model to data and using it to predict values of an outcome variable (dependent variable) from one or more predictor variables (independent variables). The residuals are the differences between what the model predicts and the actual outcome. The residual sum of squares is used to assess the ‘goodness-of-fit’ of the model on the data. The smaller the residual sum of squares, the better the fit.

      Ordinary least squares regression refers to defining the regression models for which the sum of squared errors is the minimum it can be given the data. The sum of squared differences is the total sum of squares and represents how good the mean is as a model of the observed outcome scores. The model sum of squares represents how well the model can predict the data. The larger the model sum of squares, the better the model can predict the data. The residual sum of squares uses the differences between the observed data and the model and shows how much of the data the model cannot predict.

      The proportion of improvement due to the model compared to using the mean as a predictor can be calculated using the following formula:
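A sketch of the referenced formula (SS_M is the model sum of squares and SS_T the total sum of squares):

\[ R^2 = \frac{SS_M}{SS_T} \]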

      This value represents the amount of variance in the outcome explained by the model relative to how much variation there was to explain. The F-statistic can be calculated using the following formulas:
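A sketch of the referenced formulas (the mean squares are the sums of squares divided by their degrees of freedom):

\[ MS_M = \frac{SS_M}{k}, \qquad MS_R = \frac{SS_R}{N - k - 1}, \qquad F = \frac{MS_M}{MS_R} \]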

‘k’ denotes the number of predictors; the model degrees of freedom equal k and the residual degrees of freedom equal N − k − 1.

The F-statistic can also be used to test the significance of R², with the null hypothesis being that R² is zero. It uses the following formula:
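A sketch of the referenced formula:

\[ F = \frac{(N - k - 1)\, R^2}{k\,(1 - R^2)} \]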

      Individual predictors can be tested using the t-statistic.

      BIAS IN LINEAR MODELS
      An outlier is a case that differs substantially from the main trend in the data. Standardized residuals can be used to check which residuals are unusually large and can be viewed as an outlier. Standardized residuals are residuals converted to z-scores. Standardized residuals greater than 3.29 are considered an outlier (1), if more than 1% of the sample cases have a standardized residual of greater than 2.58, the level of error in the model may be unacceptable (2) and if more than 5% of the cases have standardized residuals with an absolute value greater than 1.96, the model may be a poor representation of the data (3).

      The studentized residual is the unstandardized residual divided

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 10

      Researchers should not compare artificially created groups in an experiment (e.g. based on the median). There are several problems with median-splits:

      1. Median splits change the original information drastically
      2. Effect sizes get smaller
      3. There is an increased chance of finding spurious effects

      CATEGORICAL PREDICTORS IN THE LINEAR MODEL
      Comparing the difference between the means of two groups is predicting an outcome based on membership of two groups. A t-statistic is used to ascertain whether a model parameter is equal to zero. In other words, the t-statistic tests whether the difference between group means is equal to zero.

      THE T-TEST
      There are two types of t-tests:

      1. Independent t-test (independent measures t-test)
        This is comparing two means in which each group has its own set of participants.
      2. Paired-samples t-test (dependent t-test)
        This is comparing two means in which each group uses the same participants.

The t-test is used to see whether there is an actual difference between two groups (e.g. experimental and control). If there is no difference between the two groups, we expect to see the same mean. There is natural variation in each sample, so the means are (almost) never exactly the same. Therefore, just by looking at the means, it is impossible to state whether there is a significant difference between two groups. In the t-test, a set level of confidence (normally 95%), corresponding to a significance level alpha (normally 0.05), is used as the threshold for deciding when a difference is significant. The t-statistic is used to compute a p-value and this p-value is compared to alpha. If the p-value is equal to or smaller than alpha, the difference between the two means is considered significant and we conclude that there is an actual difference. The larger the difference between two means relative to the standard error, the more likely it is that there is an actual difference between the two means.

      The t-test is always computed under the assumption that the null hypothesis is true. It uses the following general formula:
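A sketch of the general form (the expected difference is the difference between population means under the null hypothesis, usually 0):

\[ t = \frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\text{estimated standard error of the difference}} \]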

      The null hypothesis usually states that there is no difference between the two means, meaning that the null hypothesis mean would equal ‘0’. The standard error of the sampling distribution is the standard error of differences. The standard error helps the t-test because it gives a scale of likely variability between samples.

      The variance sum law states that the variance of a difference between two independent variables is equal to the sum of their variances (e.g. the variance of x1-x2 = variance of x1 + variance x2). The variance of the sampling distribution of difference between two sample means is equal to the sum of variances of the two populations from which the samples were taken. This leads to the following formula for the standard error:
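A sketch of the referenced formula (using the variances and sample sizes of the two samples):

\[ SE_{\bar{X}_1 - \bar{X}_2} = \sqrt{\frac{s_1^2}{N_1} + \frac{s_2^2}{N_2}} \]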

      This equation holds if the

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 11

      Moderation refers to the combined effect of two or more predictor variables on an outcome. This is also known as an interaction effect. A moderator variable is one that affects the relationship between two others. It affects the strength or direction of the relationship between the variables.

      The interaction effect indicates whether moderation has occurred. The predictor and the moderator must be included for the interaction term to be valid. If, in the linear model, the interaction effect is included, then the individual predictors represent the regression of the outcome on that predictor when the other predictor is zero.

      The predictors are often transformed using grand mean centring. Centring refers to transforming a variable into deviations around a fixed point. This fixed point is typically the grand mean. Centring is important when the model contains an interaction effect, as it makes the bs for lower-order effects interpretable. It makes interpreting the main effects easier (lower-order effects) if the interaction effect is not significant.

      The bs of individual predictors can be interpreted as the effect of that predictor at the mean value of the sample (1) and the average effect of the predictor across the range of scores for the other predictors (2) when the variables are centred.

In order to interpret a (significant) moderation effect, a simple slopes analysis needs to be conducted. It compares the relationship between the predictor and the outcome at low and high levels of the moderator. SPSS can also report a zone of significance: between two values of the moderator the predictor does not significantly predict the outcome, whereas below the lower value and above the upper value it does.

      The steps for moderation are the following if there is a significant interaction effect: centre the predictor and moderator (1), create the interaction term (2), run a forced entry regression with the centred variables and the interaction of the two centred variables (3).
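These steps are carried out in SPSS in the book; purely as an illustration, a minimal sketch of the same steps in Python, assuming a pandas DataFrame df with hypothetical columns named outcome, predictor and moderator, and assuming statsmodels is available:

import statsmodels.formula.api as smf

# 1. Grand-mean centre the predictor and the moderator
df["pred_c"] = df["predictor"] - df["predictor"].mean()
df["mod_c"] = df["moderator"] - df["moderator"].mean()

# 2. Create the interaction term from the centred variables
df["interaction"] = df["pred_c"] * df["mod_c"]

# 3. Forced-entry regression with the centred variables and their interaction
model = smf.ols("outcome ~ pred_c + mod_c + interaction", data=df).fit()
print(model.summary())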

The simple slopes analysis gives three models: one model for the predictor when the moderator value is low (1), one when the moderator value is at the mean (2) and one when the moderator value is high (3).

      If the interaction effect is significant, then the moderation effect is also significant.

      MEDIATION
      Mediation refers to a situation when the relationship between the predictor variable and an outcome variable can be explained by their relationship to a third variable, the mediator. Mediation can be tested through three linear models:

      1. A linear model predicting the outcome from the predictor variable (c).
      2. A linear model predicting the mediator from the predictor variable (a).
      3. A linear model predicting the outcome from both the predictor variable and the mediator (predictor = c’ and mediator = b).
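In equation form, a sketch of these three models (using the labels a, b, c and c′ given above, with X the predictor, M the mediator and Y the outcome):

\[ \text{Model 1: } Y_i = b_0 + c\,X_i + \varepsilon_i \qquad \text{Model 2: } M_i = b_0 + a\,X_i + \varepsilon_i \qquad \text{Model 3: } Y_i = b_0 + c'\,X_i + b\,M_i + \varepsilon_i \]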

      There are four conditions for mediation: the predictor variable must significantly predict the outcome variable (in model 1)(1), the predictor variable must significantly predict the mediator

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 12

      The overall fit of a linear model is tested using the F-statistic. The F-statistic is used to test whether groups are significantly different and then specific model parameters (the bs) are used to show which groups are different.

      The F-statistic gives an associated p-value as well. A p-value which is smaller than 0.05 (or any set alpha) stands for a significant difference between the group means. The downside of the F-test is that it does not tell us which groups are different. Associated t-tests can show which groups are significantly different.

The null hypothesis of the F-statistic is that the group means are equal and the alternative hypothesis is that the group means are not equal. If the null hypothesis is true, then the b-coefficients should be zero. The F-statistic can also be described as the ratio of explained to unexplained variation.

      The total sum of squares is the total amount of variation within the data. This can be calculated by using the following formula:
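A sketch of the referenced formula (x̄ with subscript grand is the grand mean):

\[ SS_T = \sum_{i=1}^{N} (x_i - \bar{x}_{\text{grand}})^2 \]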

It is the squared difference between each observed data point and the grand mean, summed over all observations. The grand variance is the variance of all observations: the variation between all scores, regardless of the group from which the scores come.

      The model sum of squares is calculated by taking the difference between the values predicted by the model and the grand mean. It tells us how much of the variation can be explained using the model. It uses the following formula:
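A sketch of the referenced formula (n_g is the number of participants in group g and x̄_g that group's mean):

\[ SS_M = \sum_{g} n_g (\bar{x}_g - \bar{x}_{\text{grand}})^2 \]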

      It is the difference of the group mean and the grand mean squared. This value is multiplied with the number of participants in this group and these values for each group are added together.

      The residual sum of squares tells us how much of the variation cannot be explained by the model. It is calculated by looking at the difference between the score obtained by a person and the mean of the group to which the person belongs. It uses the following formula:

      It is the squared difference between the participant’s score (xig) and the group mean and this is done for all the participants in all the groups. The residual sum of squares can also be denoted in the following way:

      One other way of denoting the residual sum of squares is the following formula:

It is the variance of a group multiplied by one less than the number of people in that group, summed over all the groups. The average sum of squares (mean squares) is calculated by dividing each sum of squares by its degrees of freedom: k − 1 for the model sum of squares and N − k for the residual sum of squares.
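As a sketch, the equivalent forms described above, together with the mean squares and the F-statistic (k is the number of groups and N the total sample size):

\[ SS_R = \sum_{g}\sum_{i} (x_{ig} - \bar{x}_g)^2 = \sum_{g} s_g^2 (n_g - 1), \qquad MS_M = \frac{SS_M}{k - 1}, \qquad MS_R = \frac{SS_R}{N - k}, \qquad F = \frac{MS_M}{MS_R} \]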

      ASSUMPTIONS WHEN COMPARING MEANS
      There are several assumptions when

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 13

Covariates are characteristics of the participants in an experiment that fall outside the actual treatment. If a researcher wants to compare the means of multiple groups while including such additional predictors (the covariates), an ANCOVA is used. Examples of covariates could be love for puppies or the softness of puppy fur.

      Covariates can be included in an ANOVA for two reasons:

      1. Reduce within-group error variance
        The unexplained variance is attributed to other variables, the covariates, which reduces the total error variance. This allows for a more sensitive test for the difference of group means.
      2. Elimination of confounds
        By adding other variables, covariates, in the analysis, confounds are eliminated.

      If there are covariates, the b-values represent the differences between the means of each group and the control adjusted for the covariate.

      ASSUMPTIONS AND ISSUES WITH ANCOVA
      There are two new assumptions for ANCOVA that are not present with ANOVA. These assumptions are independence of the covariate and treatment effect and homogeneity of regression slopes.

The ideal case is that the covariate is independent of the treatment effect. If it is not, the covariate will reduce the apparent experimental effect because it explains some of the variance that would otherwise be attributed to the experiment; ANCOVA does not control for or balance out such differences caused by the covariate. This problem can be addressed at the design stage by randomizing participants to experimental groups or by matching the experimental groups on the covariate.

      Another assumption of the ANCOVA is that the relationship between covariate and outcome variable holds true for all groups of participants and not only for a few groups of participants (e.g. for both males and females and not only males). This assumption can be checked by checking the regression line for all the covariates and all the conditions. The lines should be similar.

      In order to test the assumption of homogeneity of regression slopes, the ANCOVA model should be customized on SPSS to look at the independent variable x the covariate interaction.

      CALCULATING THE EFFECT SIZE
      The partial eta squared is the effect size which takes the covariates into account. It uses the proportion of variance that a variable explains that is not explained by other variables in the analysis. It uses the following formula:
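A sketch of the referenced formula:

\[ \eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{residual}}} \]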

       

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 14

Factorial designs are used when there is more than one independent variable. There are several factorial designs:

      1. Independent factorial design (between groups)
        There are several independent variables measured using different entities.
2. Repeated-measures (related) factorial design
        There are several independent variables using the same entities in all conditions.
      3. Mixed design
        There are several independent variables. Some conditions use the same entities and some conditions use different entities.

      INDEPENDENT FACTORIAL DESIGNS AND THE LINEAR MODEL
      The calculation of factorial designs is similar to that of ANOVA, but the explained variance (between-groups variance) consists of more than one independent variable. The model sum of squares (between-groups variance) consists of the variance due to the first variable, the variance due to the second variable and the variance due to the interaction between the first and the second variable.

      It uses the following formula:
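A sketch of the referenced decomposition, for two independent variables A and B:

\[ SS_M = SS_A + SS_B + SS_{A \times B} \]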

This is the model sum of squares and shows how much variance the independent variables explain together. It can be useful to see how much of the total variance each independent variable explains on its own. This can be done by using the same formula for only one independent variable at a time: the data are collapsed across the levels of the other variable (this normally increases the n per group, as several groups are combined into one larger group).

      The residual sum of squares, the error variance (SSR) shows how much variance cannot be explained by the independent variables. It uses the following formula:
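A sketch of the referenced formula (s_g² is the variance within group g and n_g the number of participants in that group):

\[ SS_R = \sum_{g} s_g^2 (n_g - 1) \]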

      It is the variance of a group times the number of participants in the group minus one for each group added together. The degrees of freedom are added up together too. In a two-way design, the F-statistic is computed for the two main effects and the interaction.

      OUTPUT FROM FACTORIAL DESIGNS
      A main effect should not be interpreted in the presence of a significant interaction involving that main effect. In other words, main effects don’t need to be interpreted if an interaction effect involving that variable is significant.

      Simple effects analysis looks at the effect of one independent variable at individual levels of the other independent variable. When judging interaction graphs, there are two general rules:

      1. Non-parallel lines on an interaction graph indicate some degree of interaction, but how strong and whether the interaction is significant depends on how non-parallel the lines are.
      2. Lines on an interaction graph that cross are very non-parallel, which hints at a possible significant interaction, but does not necessarily mean that it is a significant interaction.
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 15

      Repeated-measures refers to when the same entities (e.g. people) participate in all conditions of an experiment or provide data at multiple points of time.

      One of the assumptions of the standard linear model is that residuals are independent, which is not true for repeated-measures designs. The residuals are affected by both between-participant factors and within-participant factors. There are two solutions to this:

      1. Model within-participant variability
      2. Apply additional assumptions to make a simpler, less flexible model fit

One of these assumptions is sphericity (circularity). This assumption states that the relationship between scores in pairs of treatment conditions is similar (e.g. the level of dependence between means is roughly equal). It states that variation within conditions is similar and that no two conditions are any more dependent than any other two. Local sphericity refers to the situation in which some conditions meet this requirement and some do not. Sphericity is not relevant if there are only two conditions; it becomes relevant when there are at least three conditions.

The assumption of sphericity can be tested using Mauchly’s test. The degree of sphericity can be estimated using the Greenhouse-Geisser estimate or the Huynh-Feldt estimate. If the assumption of sphericity is not met, there is a loss of power and the F-statistic does not have the distribution it is supposed to have. For post hoc tests, the Bonferroni method can be used when there is concern that sphericity is violated; Tukey’s test can be used when it is not violated.

If the assumption of sphericity is violated, the degrees of freedom have to be adjusted. The degrees of freedom are multiplied by the estimate of sphericity to calculate the adjusted degrees of freedom.

      F-STATISTIC OF REPEATED MEASURES DESIGN
      In repeated measured designs, the within-groups variance consists of within-participant variance, as there is only one group of participants. This consists of the effect of the experiment and the error (variance not explained by the experiment). The between-groups variance now consists of the between-participant variance.

      The formula for the within-entity (groups) variance is the following:

      The n represents the number of scores within the person (e.g. number of experimental conditions). The total amount of variance that is explained by the experimental manipulation can be calculated by comparing the condition mean to the grand mean for all the conditions. It uses the following formula:

      The total error variance (residual sum of squares), the amount of variance that cannot be explained by the experimental manipulation can be calculated in the following way:

      In order to calculate the F-statistic, the mean squares have to be calculated and this can be done by dividing both the SSR and the SSM by the degrees of freedom:
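As a sketch of the quantities described above (assuming the standard repeated-measures breakdown, with n scores per person, condition means x̄_k based on n_k entities, and x̄ with subscript grand the grand mean):

\[ SS_{\text{within}} = \sum_{\text{entities}} s_{\text{entity}}^2 (n - 1), \qquad SS_M = \sum_{k} n_k (\bar{x}_k - \bar{x}_{\text{grand}})^2, \qquad SS_R = SS_{\text{within}} - SS_M \]

\[ MS_M = \frac{SS_M}{df_M}, \qquad MS_R = \frac{SS_R}{df_R}, \qquad F = \frac{MS_M}{MS_R} \]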

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 16

Mixed designs are a combination of repeated-measures and independent designs. They include some independent variables that are measured using different entities and some independent variables that use repeated measures.

      The most important assumptions of the mixed designs ANOVA are sphericity and homogeneity of variance.

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 17

      A multivariate analysis is used when there is more than one dependent (outcome) variable. It is possible to use several F-tests when there are several dependent variables, but this inflates the type-I error rate. A MANOVA can detect whether groups differ along a combination of dimensions. MANOVA has a greater potential power to detect an effect.

A matrix is a grid of numbers arranged in columns and rows. The values within a matrix are called components or elements, and the rows and columns are vectors. A square matrix has an equal number of columns and rows. An identity matrix is a matrix in which the diagonal elements are ‘1’ and the off-diagonal elements are ‘0’. The sum of squares and cross-products (SSCP) matrices are a way of operationalizing multivariate versions of the sums of squares. The matrix that represents the systematic variance (model sum of squares) is denoted by the letter ‘H’ and is called the hypothesis sum of squares and cross-products matrix (hypothesis SSCP). The matrix that represents the unsystematic variance (residual sum of squares) is denoted by the letter ‘E’ and is called the error sum of squares and cross-products matrix (error SSCP). The matrix that represents the total sum of squares for each outcome (total SSCP) is denoted by the letter ‘T’.

      The cross-product is the total combined error between two variables.

      THEORY BEHIND MANOVA
      The total sum of squares is calculated by calculating the difference between each of the scores and the mean of those scores, then squaring those differences and adding them together.

      The degrees of freedom is N-1. The model sum of squares is calculated by taking the difference between each group mean and the grand mean, squaring it, multiplying by the number of scores in the group and then adding it all together.

The degrees of freedom are the sample size of each group minus one, multiplied by the number of groups. The model and residual sums of squares then have to be divided by their own degrees of freedom before being divided by each other to obtain the F-statistic.

      The cross-product is the difference between the scores and the mean for one variable multiplied by the difference between the scores and the mean for another variable. It is similar to covariance. It uses the following formula:

For each outcome (dependent) variable, the participant’s score is subtracted from the grand mean for that variable. This gives one deviation per outcome variable for each participant, so the number of deviations per participant equals the number of outcome variables.
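A sketch of the referenced formula for two outcome variables x and y:

\[ CP_T = \sum_{i=1}^{N} (x_i - \bar{x}_{\text{grand}})(y_i - \bar{y}_{\text{grand}}) \]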

      The model cross-product, how the relationship between the outcome variables is influenced by the experimental manipulation, uses the following formula:
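A sketch of the referenced formula (group means compared with the grand means, weighted by group size):

\[ CP_M = \sum_{g} n_g (\bar{x}_g - \bar{x}_{\text{grand}})(\bar{y}_g - \bar{y}_{\text{grand}}) \]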

      The residual cross-product, how the relationship between the outcome variables is influenced by individual differences and unmeasured variables, can be calculated

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 18

      Factor analysis and principal component analysis (PCA) are techniques for identifying clusters of variables. These techniques have three uses: understanding the structure of a set of variables (1), construct a questionnaire to measure an underlying variable (2) and reduce a dataset to a more manageable size while retaining as much of the original information as possible (3).

      Factor analysis attempts to achieve parsimony by explaining the maximum amount of common variance in a correlation matrix using the smallest number of explanatory constructs (latent variables). PCA attempts to explain the maximum amount of total variance in a correlation matrix by transforming the original variables into linear components.

      A factor loading refers to the coordinate of a variable along a classification axis (e.g. Pearson correlation between factor and variable). It tells us something about the relative contribution that a variable makes to a factor.

      In factor analysis, scores on the measured variables are predicted from the means of those variables plus a person’s scores on the common factors (e.g. factors that explain the correlations between variables) multiplied by their factor loadings, plus scores on any unique factors within the data (e.g. factors that cannot explain the correlations between variables).
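In equation form, a sketch of this idea for one measured variable (the λs are factor loadings, the Fs are common factor scores and u is a unique factor):

\[ Y_i = \bar{Y} + \lambda_1 F_{1i} + \lambda_2 F_{2i} + \dots + u_i \]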

      In PCA, the components are predicted from the measured variables.

One major assumption of factor analysis is that the algebraic factors represent real-world dimensions. A regression technique can be used to predict a person’s score on a factor. Using this technique, the resulting factor scores have a mean of 0 and a variance equal to the squared multiple correlation between the estimated factor scores and the true factor values. A downside is that the scores can correlate with factor scores from a different, orthogonal factor. The Bartlett method and the Anderson-Rubin method can be used to overcome this problem; the Anderson-Rubin method produces factor scores that are uncorrelated and standardized.

      DISCOVERING FACTORS
      The method used for discovering factors depends on whether the results should be generalized from the sample to the population (1) and whether you are exploring your data or testing a specific hypothesis (2).

      Random variance refers to variance that is specific to one measure but not reliably so. Communality refers to the proportion of common variance present in a variable. Extraction refers to the process of deciding how many factors to keep.

      Eigenvalues associated with a variate indicate the substantive importance of that factor. Therefore, factors with large eigenvalues are retained. Eigenvalues represent the amount of variation explained by a factor.

      A scree plot is a plot where each eigenvalue is plotted against the factor with which it is associated. The point of inflexion is where the slope of the line changes dramatically. This point can be used as a cut-off point to retain factors. It is also possible to use eigenvalues as a criterion. Kaiser’s criterion is to retain factors with eigenvalues greater than 1. Joliffe’s criterion

      .....read more
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 19

      It is possible to predict categorical outcome variables, meaning, in which category an entity falls. When looking at categorical variables, frequencies are used. The chi-squared test can be used to see whether there is a relationship between two categorical variables. It is comparing the observed frequencies with the expected frequencies. The chi-squared test standardizes the deviation for each observation and these are added together.

      The chi-squared test uses the following formula:

      The expected score has the following formula:
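A sketch of the referenced formulas (observed and expected, or model, frequencies per cell, with row and column totals taken for the cell's row and column):

\[ \chi^2 = \sum \frac{(\text{observed}_{ij} - \text{model}_{ij})^2}{\text{model}_{ij}}, \qquad \text{model}_{ij} = \frac{\text{row total}_i \times \text{column total}_j}{n} \]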

      The degrees of freedom of the chi-squared distribution are (r-1)(c-1). In order to use the chi-squared distribution with the chi-squared statistic, there is a need for the expected value in each cell to be greater than 5. If this is not the case, then Fisher’s exact test can be used.

      The likelihood ratio statistic is an alternative to the chi-square statistic. It is comparing the probability of obtaining the observed data with the probability of obtaining the same data under the null hypothesis. The likelihood ratio statistic uses the following formula:

It uses the chi-squared distribution and is preferred when the sample size is small. The chi-square statistic tends to produce an inflated Type I error rate when the table is 2 x 2. This can be corrected using Yates’ continuity correction, which uses the following formula:
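A sketch of the referenced formulas (the likelihood ratio statistic and the Yates-corrected chi-square):

\[ L\chi^2 = 2 \sum \text{observed}_{ij} \ln\!\left(\frac{\text{observed}_{ij}}{\text{model}_{ij}}\right), \qquad \chi^2 = \sum \frac{(|\text{observed}_{ij} - \text{model}_{ij}| - 0.5)^2}{\text{model}_{ij}} \]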

      In short, the chi-square test tests whether there is a significant association between two categorical variables.

      ASSUMPTIONS WHEN ANALYSING CATEGORICAL DATA
      One assumption the chi-square test uses is the assumption of independence of cases. Each person, item or entity must contribute to only one cell of the contingency table. Another assumption is that in 2x2 tables, no expected value should be below 5. In larger tables, not more than 20% of the expected values should be below 5 and all expected values should be greater than 1. Not meeting this assumption leads to a reduction in test power.

The residual is the error between the expected frequency and the observed frequency. The standardized residual can be calculated in the following way:
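A sketch of the referenced formula:

\[ \text{standardized residual} = \frac{\text{observed}_{ij} - \text{model}_{ij}}{\sqrt{\text{model}_{ij}}} \]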

      Individual standardized residuals have a direct relationship with the test statistic, as the chi-square statistic is composed of the sum of the standardized residuals. The standardized residuals behave like z-scores.

       

      EFFECT SIZE
      Cramer’s V can give an effect size. In 2x2 tables, the odds-ratio is often used as the effect size. The odds-ratio uses the following formula:

      The actual odds ratio is the odds of event A divided by the odds of event B.
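A sketch of the referenced calculation (the odds of an event are the number of times it occurred divided by the number of times it did not occur):

\[ \text{odds} = \frac{\text{number of events}}{\text{number of non-events}}, \qquad \text{odds ratio} = \frac{\text{odds}_A}{\text{odds}_B} \]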

       
