Lecture notes for Multivariate Data Analysis at Leiden University - 2019/2020
Lecture 1, What is Multiple Regression Analysis (MRA)?
What will we learn in this course?
All the techniques we will discuss in the upcoming weeks have one thing in common: they explore the relationships among several variables. Up until now, we have always focused on two variables; in this course, we will deal with three or more.
We will learn how to choose a method for a specific problem, how to perform the data analysis, how to understand the output and the theoretical properties, how to interpret the parameters of the technique, and how to judge whether the interpretations are valid (so we check the assumptions).
When do we do multiple regression analysis?
When we want to predict Y from several Xi variables, all measured at the interval level, we do multiple regression analysis. Binary variables are variables with only two categories; they can be included in the analysis as either nominal or interval variables. An example of a multiple regression question: can depression (Y) be predicted from life events (X1) and/or coping style (X2)?
What is the multiple regression equation?
A multiple regression equation has the following formula:
Y = b0 + b1X1 + b2X2 + ... + bkXk + e
We choose the regression line such that the sum of squared differences between Y and the predicted Y is as small as possible. With two predictors, we fit a regression plane in a three-dimensional space instead of a regression line.
What are the hypotheses in multiple regression analysis?
H0: b1 = b2 = ... = bk = 0
Ha: at least one bj ≠ 0
So, the null hypothesis is that there is no relation between Y and the X variables.
How do we test the null hypothesis in multiple regression analysis?
We test H0 with the F-test: F = MSregression / MSresidual = (SSregression / dfregression) / (SSresidual / dfresidual). Remember that SStotal = SSregression + SSresidual. If the p-value of F is significant (< .05), we reject H0: at least one regression coefficient deviates from zero, so there is a relationship between Y and the X variables.
How good is the prediction?
How good the prediction is can be expressed with R2. R is the multiple correlation coefficient: the Pearson correlation between Y and the optimal linear combination of X1 and X2. It is always a value between 0 and 1. R2 reflects how much variance of Y is explained by X1 and X2 (VAF = Variance Accounted For): R2 = VAF = SSregression / SStotal. So R2 reflects how well the linear model describes the observed data. If R2 is for example .500, then 50% of the variance in depression (Y) is explained by life events (X1) and coping (X2).
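As a minimal sketch (not the lecture's SPSS output), the same quantities can be obtained in Python with statsmodels; the column names and simulated data below are made up purely for illustration:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 100
df = pd.DataFrame({"life_events": rng.normal(size=n), "coping": rng.normal(size=n)})
df["depression"] = 0.5 * df["life_events"] - 0.3 * df["coping"] + rng.normal(size=n)

model = smf.ols("depression ~ life_events + coping", data=df).fit()
print(model.fvalue, model.f_pvalue)   # overall F-test of H0: b1 = b2 = 0
print(model.rsquared)                 # R2 = VAF = SSregression / SStotal
print(model.params)                   # b0, b1, b2 (unstandardized)
print(model.tvalues, model.pvalues)   # t-test per predictor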
What about the predictors?
For each predictor, we can perform a t-test, which shows whether the coefficient differs significantly from 0. Standardized regression coefficients (β) are useful because they can be compared to one another; unstandardized regression coefficients (b) cannot.
What are partial and semi-partial correlations?
The partial correlation of a predictor reflects how much of the variance of Y that is left unexplained by the other variables in the analysis is explained by that predictor. We don't really use it. The semi-partial correlation of Y and X1, corrected for X2, is: rY(1.2) = (rY1 - rY2·r12) / √(1 - r12²). It is always a value between -1 and 1 and reflects how much variance of Y is uniquely explained by X1. In SPSS, it is called the part correlation. When we want to estimate what percentage of variance is uniquely explained by a certain predictor, we square its semi-partial correlation. When we want to calculate what percentage of variance is explained by the predictors jointly (their shared part), we take R2 minus both squared semi-partial correlations.
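A small sketch of this formula with made-up correlations (rY1 = .50, rY2 = .30, r12 = .20):

import math

r_y1, r_y2, r_12 = 0.50, 0.30, 0.20                       # hypothetical correlations
sr1 = (r_y1 - r_y2 * r_12) / math.sqrt(1 - r_12 ** 2)     # semi-partial (part) correlation of Y and X1, corrected for X2
print(sr1)        # the part correlation
print(sr1 ** 2)   # proportion of variance of Y uniquely explained by X1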
What are the assumptions of multiple regression analysis?
When the assumptions of regression analysis are not met, a prediction will still be made. However, violated assumptions affect the standard errors of the coefficients, the F- and t-values, and the p-values, which can cause us to draw wrong conclusions about significance. The assumptions are:
1. The model is linear; the variables are on the interval level of measurement, and the dependent variable is a linear combination of the predictors.
2. For testing the coefficients: 1) homoscedasticity (the variance of the residuals is constant across the predicted values), 2) the residuals are independent of one another, and 3) the residuals are normally distributed.
3. There is no multicollinearity in the predictors (high inter-correlations among the predictors).
How do we check these assumptions in our data?
By making a scatterplot of the predicted values versus the standardized residuals, we can check for linearity and homoscedasticity: it should yield a band of residuals around the horizontal zero line without any nonlinear shapes. If the cloud of dots forms a nonlinear shape, there is nonlinearity; the band should also be equally wide everywhere, and if it is wider in a specific area of the graph, there is heteroscedasticity. By making a normal probability plot, we can check the normality of the residuals: it should yield an approximately straight line. By requesting collinearity statistics in SPSS, we check for multicollinearity. What is then not checked is the interval level of the variables and the independence of the residuals. Outliers can occur on the dependent variable Y (standardized residuals outside the range -3 to 3). Outliers can also occur on the independent variable(s) (a leverage larger than 3(k+1)/n). Influential data points are points with a Cook's distance larger than 1. These values can be found in the residual statistics table, but the leverage cut-off has to be calculated by hand.
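A sketch of these checks in Python, continuing from the hypothetical regression sketch above (the cut-offs simply follow the rules of thumb mentioned here):

from statsmodels.stats.outliers_influence import OLSInfluence
import matplotlib.pyplot as plt

infl = OLSInfluence(model)
std_resid = infl.resid_studentized_internal   # standardized residuals: outliers on Y if outside -3 to 3
leverage = infl.hat_matrix_diag               # leverage: outliers on the predictors if > 3(k+1)/n
cooks_d = infl.cooks_distance[0]              # Cook's distance: influential if > 1

plt.scatter(model.fittedvalues, std_resid)    # linearity and homoscedasticity check
plt.axhline(0)
plt.show()

k, n = 2, len(std_resid)
print((abs(std_resid) > 3).sum(), (leverage > 3 * (k + 1) / n).sum(), (cooks_d > 1).sum())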
When assumptions are violated, we can remove the predictors that cause the violations. We can also try to transform the variables, although this unfortunately does not always work. Alternatively, we can use a more robust regression technique:
--> When there are predictors with error: we should use error in variables regression.
--> When the residuals are correlated: we should use multilevel regression (linear mixed models).
--> When there is heteroscedasticity: we use the weighted least squares regression or bootstrap se’s.
--> When there are non-linear relations, we use generalized additive models (GAMs).
What is multicollinearity and when does it form a problem?
Multicollinearity (high inter-correlations among the predictors) is a problem because it 1) limits the size of R2, 2) makes it difficult to determine the importance of a predictor, and 3) makes the regression equation unstable, since it increases the standard errors of the regression coefficients. SPSS produces several collinearity diagnostics, namely the tolerance of a predictor, TOLj = 1 - Rj² (there is multicollinearity when it is below .10 for a predictor), and the variance inflation factor, VIFj = 1 / TOLj = 1 / (1 - Rj²) (there is multicollinearity when it is above 10 for a predictor).
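A sketch of these diagnostics in Python, reusing the hypothetical df from the regression sketch above; variance_inflation_factor is part of statsmodels:

import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(df[["life_events", "coping"]])   # design matrix with intercept
for j in range(1, X.shape[1]):                       # skip the constant column
    vif = variance_inflation_factor(X.values, j)     # VIFj = 1 / (1 - Rj^2)
    print(X.columns[j], vif, 1 / vif)                # VIF and tolerance: VIF > 10 or TOL < .10 signals a problem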
How useful is our regression model?
The usefulness of a regression model depends on 1) whether it is based on substantive theory, 2) whether it is parsimonious, and 3) whether the ratio N (individuals) / k (number of predictors) is large enough. If this ratio is small, R2 capitalizes on chance and is overestimated, which makes the regression equation of little use for new samples. The advice is to have a ratio larger than 20.
Five questions:
- What topics are covered in this lecture?
This lecture covers Multiple Regression Analysis. It discusses the multiple regression model, what the formula is and how we should use it. It discusses the central hypotheses in multiple regression analysis, that we can test with the F-test. Doing MRA in SPSS gives certain useful output, and this lecture explains how we can use this. The assumptions of MRA are discussed, and advice is given in cases of violation of these assumptions.
- What topics are covered that are not included in the literature?
The literature is more elaborate in explaining Multiple Regression Analysis. However, this lecture gives a clearer overview of the most important concepts.
- What recent developments are discussed?
n.a.
- What remarks are made about the exam?
No specific remarks were made about the exam.
- What questions are discussed that may be included in the exam?
n.a.
Lecture 2, What is Analysis of Variance (ANOVA)?
When do we use Analysis of Variance?
This week, we will discuss ANOVA, in which the predictor variables (Xk) are nominal and the dependent variable (Y) is interval. The predictor variables are now called factors.
What are the hypotheses of one-way ANOVA?
For ANOVA, we study the following hypotheses:
H0: μ1 = μ2 = ... = μk
and Ha: at least two means μi ≠ μj
Rejecting H0 is more likely when the differences between the group means are larger, when the differences within groups are smaller, and when the sample size N is large.
What is the one-way ANOVA model?
The one-way ANOVA model is: Yij = μ + αj + eij
in which Yij is the score of individual i in group j, μ is the overall mean, αj is the effect of group j (for example, a leadership style), and eij is a residual (error). This model allows for a partitioning into between-groups variation (the variation of the αj around the grand mean μ) and within-groups variation (the variation of the eij around each group mean).
What is two-way ANOVA and what are the advantages over one-way ANOVA?
In two-way ANOVA, or in other words factorial ANOVA, there is more than one factor. The advantages over one-way ANOVA are that it allows for a study of the combined effects of two factors: in addition to the two main effects, there is also an interaction effect, which allows for a better understanding of the individual factors. Another advantage is that including more factors makes the model more efficient (more information with the same number of participants N) and may reduce the within-groups error variation (which leads to more statistical power). By adding factors, these factors are taken into account, so the analysis is corrected for them.
What is the two-way ANOVA model?
The two-way ANOVA model is: Yijk = μ + αj + βk + ϕjk + eijk
in which Yijk is the score of individual i in group j and group k, μ is the overall mean, αj is the group effect of group j, βk is the group effect of group k, ϕjk is the interaction effect, and eijk is a residual (error). The interaction effect checks whether the effect of one of the factors αj is different across different levels of the other factor βk.
What are the hypotheses in two-way / factorial ANOVA?
The hypotheses in factorial ANOVA are similar to those in one-way ANOVA; it only includes more means.
How do we study the data of our ANOVA?
In SPSS, we are interested in the corrected model (which displays the effects of αj, βk, and the interaction ϕjk combined, without the intercept) and in the corrected total (which displays the total sums of squares without the intercept). A two-way ANOVA consists of 4 F-tests: the corrected model, the main effect of αj, the main effect of βk, and the interaction effect ϕjk. In each case, the F-statistic is given by: F = MSeffect / MSerror = (SSeffect / dfeffect) / (SSerror / dferror)
We test H0 using this F formula. When reporting the significance of, for example, the corrected model, we give F(dfCorrected Model, dfError) = 8.908 (for example) with a p-value of < .001, so we reject H0. We can also calculate the coefficient of determination: R2 = VAF.
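A minimal sketch of such a two-way ANOVA in Python with statsmodels, using hypothetical factor and outcome names and simulated data (SPSS reports Type III sums of squares by default; Type II is used here for simplicity, and the two coincide in a balanced design):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
d = pd.DataFrame({
    "leadership": rng.choice(["directive", "participative", "laissez_faire"], size=120),
    "gender": rng.choice(["male", "female"], size=120),
})
d["satisfaction"] = rng.normal(size=120)

model = smf.ols("satisfaction ~ C(leadership) * C(gender)", data=d).fit()
print(sm.stats.anova_lm(model, typ=2))   # F-tests for both main effects and the interaction
print(model.rsquared)                    # R2 of the corrected model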
What is the effect size?
The effect size is η2effect = SSeffect / SSCorrected Total. It displays how much variance of Y is uniquely explained by this specific effect. SPSS only produces partial eta squared, while we are interested in this (semi-partial) eta squared, so we have to calculate it by hand.
How do we interpret the effects we find in SPSS?
To interpret the effects, we use the estimated marginal means (the least squares estimates of the group means): these are the observed group means adjusted for unequal group sizes (in the case of an unbalanced design) and for the covariates in the model. If the design is balanced and there are no covariates, the estimated marginal means equal the observed group means. If the groups are approximately the same size, the adjustment will be only small.
When studying, for example, the effect of leadership style on job satisfaction, we make multiple comparisons, since a priori we have no clue which leadership style is better. We do this by making pairwise comparisons of all means and correcting for multiple testing with Tukey's HSD. If you have specific hypotheses, you should use planned comparisons instead.
What are main effects and interaction effects?
The main effect is about a difference between means. The interaction effect is about a difference between main effects, and a difference between differences. When the interaction effect is significant, the interaction can change the meaning of the main effect: interpreting the main effect is pointless when the interaction is significant.
What is a balanced design?
In a balanced design, all groups have the same size n; in an unbalanced design, n differs between groups. If the design is balanced, the factors are not correlated, which is an advantage. However, in research with existing groups, unbalanced designs frequently occur; the factors may then be correlated, and there is no unique partitioning of the explained variance. You should not remove participants to make a design balanced, but it is good to keep balance in mind when designing your study. If the design is balanced, SSA + SSB + SSA*B = SSCorrected Model.
What are the assumptions in factorial ANOVA?
The assumptions of the ANOVA model characterize the population, and not the sample. They are needed for sampling distributions of F-tests. If the assumptions are violated, this affects the F-statistics and p-values, leading us to possibly draw the wrong conclusions about significance. The ANOVA assumptions are:
1. Independence of the residuals eijk: Individuals respond independently of one another.
2. Group normality: The residuals eijk are normally distributed in each group (cell).
3. Homoscedasticity: There is equality of group (cell) variances.
How do we check these assumptions?
The independence of the residuals is usually not investigated directly but is accounted for by the design of the study. We can check group normality by inspecting the histograms per group or by doing the Kolmogorov-Smirnov test. We can check the equality of group variances (homoscedasticity) with Levene's test. Both of these tests are sensitive to a large sample size N.
When is the F-test robust against these violations?
Under certain conditions, F-tests are robust to violations of both group normality and homoscedasticity. A test is robust with respect to some assumption if violation of that assumption does not substantially influence the Type I error (alpha). The F-test is robust to non-normality if n > 15 in each group. The F-test is robust to unequal group variances if nmax / nmin < 1.5, where nmax is the size of the largest group and nmin that of the smallest group. Unequal group variances are a problem if Levene's test is significant, the design is unbalanced, and nmax / nmin > 1.5. If the largest variances occur in the largest groups, the F-test is too conservative and H0 is rejected not often enough. If the largest variances occur in the smallest groups, the F-test is too liberal and H0 is rejected too often.
What is the advantage of doing planned comparisons?
With planned comparisons, we can do fewer tests than when we would do multiple comparisons. Doing fewer tests, we have more statistical power.
How do we calculate F in a balanced design?
If the design is balanced, we first calculate SSA*B: since the design is balanced, SSA + SSB + SSA*B = SSCorrected Model. We always have SSCorrected Model + SSError = SSCorrected Total. Then we calculate the degrees of freedom: for a main effect, df = number of categories - 1; for the interaction, dfA*B = dfA * dfB; for the corrected total, df = N - 1. Since dfCorrected Total = dfA + dfB + dfA*B + dfError, we can calculate dfError. Next, we calculate the mean sums of squares: MSeffect = SSeffect / dfeffect. Finally, we calculate the F-values: F = MSeffect / MSerror.
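The same hand calculation can be sketched in Python with made-up sums of squares and group numbers; scipy supplies the p-value from the F-distribution:

from scipy.stats import f

SS_A, SS_B, SS_corrected_model, SS_error = 40.0, 10.0, 75.0, 300.0   # hypothetical values
a_levels, b_levels, N = 3, 2, 120

SS_AxB = SS_corrected_model - SS_A - SS_B          # balanced design: SSA + SSB + SSA*B = SSCorrected Model
df_A, df_B = a_levels - 1, b_levels - 1
df_AxB = df_A * df_B
df_error = (N - 1) - df_A - df_B - df_AxB          # dfCorrected Total = dfA + dfB + dfA*B + dfError

F_AxB = (SS_AxB / df_AxB) / (SS_error / df_error)  # MSeffect / MSerror
print(SS_AxB, F_AxB, f.sf(F_AxB, df_AxB, df_error))   # interaction SS, F-value and its p-value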
Five questions:
- What topics are covered in this lecture?
This lecture covers Analysis of Variance. It first discusses one-way ANOVA: the model, assumptions, and way of using it. The lecture then discusses factorial ANOVA along the same lines, and elaborates on how to work with the statistics, both in hand calculations and in SPSS.
- What topics are covered that are not included in the literature?
The literature is more elaborate in explaining Analysis of Variance; it especially elaborates more on one-way ANOVA. However, this lecture gives a clearer overview of the most important concepts that students need to know.
- What recent developments are discussed?
n.a.
- What remarks are made about the exam?
No specific remarks were made about the exam.
- What questions are discussed that may be included in the exam?
n.a.
Lecture 3, What is Analysis of Covariance (ANCOVA)?
Analysis of Covariance (ANCOVA) combines ANOVA and regression analysis. We use it when Y must be predicted from both interval and nominal variables; in ANOVA, all factors are nominal, and in MRA, all predictors are interval. ANCOVA can be useful in both experimental and quasi-experimental research. Quasi-experimental studies are studies with existing groups, in which the researcher does not (randomly) assign the participants to the different (treatment) groups.
What is the ANCOVA model?
The ANCOVA model is: Yij = μ + αj + bW(Cij - Cbar) + eij, in which μ is the overall mean, αj is the group effect of group j, bW is the within-groups regression weight, Cij is the covariate score of individual i in group j, and Cbar is the overall mean of the covariate. In a plot of the ANCOVA model, Y is on the Y-axis and C is on the X-axis; the slope bW is the common slope of all groups together.
Why do we do ANCOVA?
ANCOVA is useful, as it leads to the reduction of error-variance: A properly chosen covariate can explain a part of the error variance. Adding this covariate may increase the power of the F-test. Another advantage of ANCOVA is that systematic bias can be removed: Research groups may differ systematically on external variables that are related to the dependent variable. Adding these variables as covariates may remove bias. Lastly, ANCOVA takes into account alternative explanations: External variables may give alternative explanations of the found effect. We can check for this by adding these variables as covariates.
What is the pre-test and covariate C?
If the covariate C is correlated with Y, C shares a part of the individual differences and thus a part of the error variance. So, if a proper covariate C is added to the ANCOVA model, C will explain a part of the error variance, MSError becomes smaller, and F becomes larger: the study will be more powerful. An ANOVA combined with a pre-test as covariate is an ANCOVA study. If the pre-test covariate reduces enough error variance, the F-test will be significant and we conclude that the treatment is effective. To detect the effectiveness of such a treatment, ANOVA alone may not be enough; we need the more powerful technique of ANCOVA.
An example ANCOVA question could be: What is the effect of video gaming on spatial visualization ability? We first do a pre-test of spatial ability (Y), and after the ‘treatment’ with X (gaming / control group) we do a post-test of spatial ability. If we only do an ANOVA test, we could find that video gaming has no effect on spatial visual ability. However, if we include the pre-test as a covariate (C), we would find that gaming does in fact have an effect on spatial ability.
What if we don't use ANCOVA?
Intact groups that are used in quasi-experimental research may differ systematically on variables that are related to the dependent variable. One possibility is that real effects are masked. Another possibility is that false effects are found.
What are the assumptions in ANCOVA?
The assumptions of ANCOVA are similar to those of ANOVA. If these assumptions are violated, this affects the F-statistics and consequently the p-values: We can then draw the wrong conclusions about significance. The assumptions are:
1. Independence of residuals eij: The individuals respond independently of one another. We usually do not investigate this assumption.
2. Group normality: the residuals (eij) are normally distributed in each group. We can run a Kolmogorov-Smirnov test (or inspect histograms per group) to check this assumption. The F-test is robust against violation of this assumption under certain conditions, namely when the sample size is greater than 15 in each group.
3. Homogeneity of group variances. The F-test is robust to unequal group variances if nmax / nmin < 1.5
ANCOVA has additional assumptions:
1. The covariate C is measured without error
2. Linearity: The covariate displays a linear relationship with the dependent variable. We can check for linearity by visually inspecting the scatter plot of predicted values versus standardized residuals. This is similar to MRA.
3. Parallelism of regression lines: the regression line between the covariate and the dependent variable should have the same regression weight bW (the within-groups regression weight) in each group. We check this assumption by adding the treatment * covariate interaction to the ANCOVA model: if the interaction is not significant, the parallelism assumption is considered met. In the sample, the lines are almost never exactly parallel. We can only use this interaction model for checking the parallelism assumption, not to answer a research question.
How does ANCOVA work?
Doing an ANCOVA usually consists of three models. The first is an ANOVA, to check whether factor X is effective on its own. The second is an ANCOVA including the interaction between X and C, to check whether the interaction is non-significant (the parallelism assumption). The final model is the ANCOVA without the interaction, which we use to answer the research question.
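A sketch of this three-model sequence in Python formula notation, with hypothetical variable names (posttest as Y, group as factor X, pretest as covariate C) and simulated data:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
d = pd.DataFrame({"group": rng.choice(["gaming", "control"], size=80),
                  "pretest": rng.normal(size=80)})
d["posttest"] = d["pretest"] + (d["group"] == "gaming") * 0.5 + rng.normal(size=80)

anova = smf.ols("posttest ~ C(group)", data=d).fit()               # 1. ANOVA: is X effective on its own?
parallel = smf.ols("posttest ~ C(group) * pretest", data=d).fit()  # 2. ANCOVA with interaction: parallelism check
ancova = smf.ols("posttest ~ C(group) + pretest", data=d).fit()    # 3. ANCOVA: answers the research question
print(sm.stats.anova_lm(parallel, typ=2))   # the C(group):pretest row should be non-significant
print(sm.stats.anova_lm(ancova, typ=2))     # group effect corrected for the pre-test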
What is the pooled within-groups correlation?
If we inspect the scatter plot of the dependent variable with the covariate, rYC is the total correlation between the dependent variable (Y) and the covariate (C); the pooled within-groups correlation is rYC(W). A covariate may be useful when 1) the pooled within-groups correlation differs from 0, so that error variance is reduced; if rYC(W) > rYC, the statistical power of the F-test tends to increase. Adding a covariate to your model is also useful when 2) the group means differ on the covariate; there is then a possibility of removing systematic bias.
What is mean adjustment?
To remove systematic bias, the group means can be adjusted using the formula Ybar*j = Ybarj - bW(Cbarj - Cbar), where Ybar*j is the group mean after adjustment, Ybarj is the original group mean, bW is the within-groups regression weight, Cbarj is the group mean on covariate C, and Cbar is the overall mean of covariate C.
To find out which group mean is higher after adjustment, we can also inspect the output visually: we plot the group means in the scatterplot of Y with C, draw the within-groups regression lines, and draw a vertical line at the mean of the covariate C. The intersections of the regression lines and this vertical line are the adjusted group means. When the within-groups correlation is 0, or when the groups do not differ on the covariate, it is hard to make useful interpretations.
Five questions:
- What topics are covered in this lecture?
This lecture covers Analysis of Covariance. It describes what ANCOVA is and how it works. ANCOVA is useful for certain reasons, which are described in this lecture. The lecturer has provided us with the assumptions of ANCOVA, and explains the concepts of pooled within-groups correlation and mean adjustment.
- What topics are covered that are not included in the literature?
The literature is more elaborate than this lecture in explaining Analysis of Covariance. What is important is that the literature elaborates more on how to do an ANCOVA in SPSS and how to interpret the output.
- What recent developments are discussed?
n.a.
- What remarks are made about the exam?
No specific remarks were made about the exam.
- What questions are discussed that may be included in the exam?
n.a.
Lecture 4, What is Logistic Regression Analysis (LRA)?
In logistic regression analysis (LRA), we predict a binary, or in other words dichotomous, variable (Y) from one or more independent variables (Xk), which in our examples are interval. Binary outcomes are quite common in real life: you either have a disease or not, you either graduate or not, et cetera, so logistic regression is widely used. The LRA curve has a non-linear fit; the predicted scores are numbers between 0 and 1, and these are probabilities.
What does the number e have to do with LRA?
The number e is a famous mathematical constant that we use in LRA through the exponential function; for example, e to the power of 0 is 1. The logistic function gives the probability P = e^n / (1 + e^n). If n is large and negative, e^n will be small and P is small. If n is large and positive, e^n is large and P is large. If n is 0, then e^0 = 1, so P = 1 / (1 + 1) = .5.
What is the logistic regression model?
The logistic regression model is: P1 = e^(b0 + b1X1 + b2X2) / (1 + e^(b0 + b1X1 + b2X2)), in which P1 is the probability of passing, b0 is a constant, b1 and b2 are regression coefficients, and X1 and X2 are the predictors. If we know the probability of passing an exam, P1, we also know the probability of failing: P0 = 1 - P1.
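A small numerical sketch of the model with made-up coefficients (b0 = -4, b1 = 0.3, b2 = 0.2) and made-up values for study hours (X1) and lectures attended (X2):

import math

b0, b1, b2 = -4.0, 0.3, 0.2                     # hypothetical coefficients
x1, x2 = 10, 5                                  # hypothetical study hours and lectures attended
eta = b0 + b1 * x1 + b2 * x2                    # the linear combination in the exponent
p_pass = math.exp(eta) / (1 + math.exp(eta))    # P1, probability of passing
print(p_pass, 1 - p_pass)                       # P1 and P0 = 1 - P1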
How good is our model?
We want to say something about how good the model's prediction is. Each model can be evaluated with its -2 log likelihood (-2LL), a measure of unexplained variation: a lower -2LL indicates a better fit. The difference between the -2LLs of two nested models can be used to test whether the more complex model is a significant improvement. This difference is χ2 distributed, with degrees of freedom equal to the number of extra predictors in the more complex model; we look it up in the χ2 table. As noted, we need nested models for this: two models are nested if all terms of the simpler model also occur in the more complex model.
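A small sketch of such a nested-model comparison with hypothetical -2LL values; the p-value comes from the chi-square distribution in scipy:

from scipy.stats import chi2

neg2LL_constant_only = 130.0                 # hypothetical -2LL of the model without predictors
neg2LL_full = 110.0                          # hypothetical -2LL of the model with 2 extra predictors
diff = neg2LL_constant_only - neg2LL_full    # chi-square distributed difference
df = 2                                       # number of extra predictors in the more complex model
print(diff, chi2.sf(diff, df))               # test statistic and its p-value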
What are the hypotheses in LRA and how do we test it?
The null hypothesis in LRA is that there is no relationship:
H0: b1* = b2* = ... = bk* = 0
Ha: at least one bi* ≠ 0.
We test H0 by calculating the difference between the -2LL of the constant-only model (in which there are no predictors) and the -2LL of the full model (which includes the predictors). An example question could be: can the null hypothesis of no relation between passing the exam and study hours and/or number of lectures attended be rejected? If yes, then our test has shown that there is at least one non-zero regression coefficient and that there is a relationship between Y and at least one Xi; Y can then be predicted on the basis of the X predictors.
How do we interpret LRA output in SPSS?
In LRA, we only have a pseudo R2: here R2 does not mean VAF, because LRA is based on maximum likelihood and not on explained variance. SPSS produces two pseudo measures, but we will use neither. Instead we use Hosmer and Lemeshow's R2L: R2L = (-2LL0 - (-2LLmodel)) / -2LL0, in which -2LL0 belongs to the constant-only model (without predictors) and -2LLmodel to the model that includes all predictors. So R2L displays the proportional reduction in -2LL, and we loosely say that R2L is the amount of variance explained.
In SPSS, if some coefficients do not differ significantly from 0, we run the analysis again without these predictors. We don’t interpret the probabilities directly, but instead we interpret the odds ratio (Exp(B) in SPSS).
What is a classification table and how can we work with it?
A classification table compares the observed to the predicted group membership. The positive predictive value (PPV) is the probability that, given a positive prediction, an individual actually belongs to the target group. The negative predictive value (NPV) is the probability that, given a negative prediction, an individual actually belongs to the other group.
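A small sketch with a hypothetical classification table (all counts are made up):

true_pos = 40    # predicted positive, actually in the target group
false_pos = 10   # predicted positive, actually in the other group
false_neg = 15   # predicted negative, actually in the target group
true_neg = 35    # predicted negative, actually in the other group

PPV = true_pos / (true_pos + false_pos)   # P(actually in target group | positive prediction)
NPV = true_neg / (true_neg + false_neg)   # P(actually in other group | negative prediction)
print(PPV, NPV)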
What are odds?
The odds of an event is the ratio of the probability that the event will happen to the probability that it will not happen. If P is the probability that an event happens, then 1 - P is the probability that it does not happen, so the odds of the event = P / (1 - P). The probability of an event always lies between 0 and 1; the odds, however, lie between 0 and infinity.
The odds ratio (OR) of a predictor is the factor by which the odds change with a one-unit increase in that predictor: OR = odds after a 1-unit increase in the predictor / original odds. Each predictor has an associated OR. We do not interpret the regression coefficient directly, but rather the OR. The OR for an increase of k units is OR(k) = OR(1)^k.
How do we calculate confidence intervals (CI) in LRA?
Confidence intervals (CI) are calculated with z = 1.96: the lower limit is b1 - 1.96·SEb and the upper limit is b1 + 1.96·SEb. We can also calculate the CI for the odds ratio: e^(b1 ± 1.96·SEb).
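A small sketch of these limits with a hypothetical coefficient b1 = 0.40 and standard error 0.15:

import math

b1, se = 0.40, 0.15                         # hypothetical coefficient and standard error
lower, upper = b1 - 1.96 * se, b1 + 1.96 * se
print(lower, upper)                         # 95% CI for b1
print(math.exp(lower), math.exp(upper))     # 95% CI for the odds ratio Exp(b1)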
What are the assumptions of LRA and how do we check them?
The assumptions of LRA are: 1) the log odds of the dependent variable is a linear combination of the predictors, 2) the predictors are measured without error, and 3) the individuals respond independently of one another. None of these assumptions are checked in practice. We can however check for the absence of multicollinearity.
We can check for the linearity of the log odds (the log of odds) by visually inspecting the scatter plot of these log odds: You should be able to see a linear trend.
Both the assumption of normality of the residuals and that of homoscedasticity, which we remember from MRA, are not required in LRA. Because the dependent variable is binary, the outcomes follow a binomial distribution, so normality is not even possible; and since the variance of a binary variable is P(1 - P), homoscedasticity is not possible either.
The usefulness of a logistic regression equation in other samples depends on the ratio between the number of individuals and the number of predictors, N/k. If the ratio is larger than 30, it is okay to proceed; in other words, there should be at least 30 individuals for each predictor.
Five questions:
- What topics are covered in this lecture?
This lecture covers Logistic Regression Analysis. It explains how it works and how it is different from linear analyses that we already knew about. The LRA model is provided, and the lecturer also told us about how to test this model. Other central questions of this lecture were: How does LRA work in SPSS? What are the assumptions of LRA? And how can we test for those? We discussed the topics of Confidence Intervals and classification tables as well.
- What topics are covered that are not included in the literature?
The paper that is recommended reading material for this week is more elaborate than this lecture in explaining Logistic Regression Analysis, as it provides a more step-by-step explanation. The lecture, however, explains how we use -2LLs to evaluate the predictions of our model, which the paper does not.
- What recent developments are discussed?
n.a.
- What remarks are made about the exam?
No specific remarks were made about the exam.
- What questions are discussed that may be included in the exam?
n.a.
Lecture 5, What is Multivariate ANOVA (MANOVA) and Discriminant Analysis (DA)?
In MANOVA, we try to predict p interval variables (set Y) as well as possible from one or more nominal variables. Why do we do this? It is often very natural to compare groups, whether existing or experimental, on more than one variable.
When do we use MANOVA?
In the case of one dependent variable, we do a (univariate) ANOVA; with two or more dependent variables, we opt for MANOVA. In the case of one independent variable, we run a one-way (M)ANOVA; with two independent variables, a two-way (M)ANOVA; and so on. We cannot simply run multiple ANOVAs instead: many F-tests would lead to insufficient control of the Type I error, and separate ANOVAs ignore the relationships between the dependent variables.
What are the multivariate null hypotheses?
The multivariate null hypotheses are the following:
H0: μ11 = μ12 = ... = μ1k
H0: μ21 = μ22 = ... = μ2k
...
H0: μp1 = μp2 = ... = μpk
In other words, the null hypothesis is that there is no relationship between sets X and Y in the population.
How do we test the hypotheses?
When the multivariate tests (Wilks' lambda, Pillai's trace, Hotelling's trace, and Roy's largest root) are nonsignificant (p > .05), H0 is not rejected: we conclude that there is no relationship between sets X and Y, and we are finished. However, when a test is significant (p < .05), H0 is rejected: somewhere, for at least one dependent variable (or linear combination of dependent variables), there is at least one difference between group means.
What do we do next?
Now there are four different ways to continue, but in this course, two will be discussed. The first option is the protected F approach, in which we focus on the interpretation of univariate F tests and means. The second option is the descriptive discriminant analysis; in which we interpret the positions of groups on discriminant function variates.
What are the assumptions in MANOVA?
The assumptions of MANOVA are the following:
1. Multivariate normality: There is a normal distribution of errors for each dependent variable as a whole, but also for each subset of individuals with identical scores on the other dependent variables. We can’t check for this assumption so directly, but multivariate tests are generally robust against violation of this assumption (if n > 20 for each group).
2. Homogeneity of variance-covariance matrices: There should be equal variances and covariances in all groups. We can check for this assumption with the Box M test, but generally, this is too sensitive. Multivariate tests are robust for approximately equal n (nmax / nmin < 1.5).
What is the protected F procedure?
With two or more univariate ANOVAs, the probability of a Type I error (incorrectly rejecting H0) becomes too large. Therefore, univariate ANOVAs are interpreted if and only if the multivariate tests are significant; the univariate F-tests are protected against too many Type I errors by the multivariate tests, which is why this is called the protected F procedure. It follows these steps: 0. check the assumptions (a preliminary step); 1. inspect the multivariate tests; 2. if the multivariate tests are significant, inspect the univariate F-tests; 3. for variables with significant F's only, compare the group means.
What is the problem with the protected F procedure?
There is, however, a problem with the protected F procedure: the protection of the univariate F-tests by the multivariate tests is not sufficient. This problem is almost completely ignored by applied researchers; despite its flaws, protected F is the most common way of interpreting a MANOVA. If you really want multiple F-tests with protection, you can ignore the multivariate tests and run multiple F-tests with a Bonferroni correction instead.
How can MANOVA be used?
MANOVA is very flexible and can be combined with other AN(C)OVA elements. For example, factorial MANOVA investigates two or more independent variables. Each main effect and each interaction then has its own multivariate test (Wilks, Pillai's, et cetera). If such a multivariate test of an effect is significant, we look at the univariate F-tests for that effect, and if those are significant, at the relevant group means.
Multivariate analysis of covariance (MANCOVA) is another option. We then add covariates just like we do in univariate ANCOVA, but now the covariates also have multivariate tests. If these are significant, we should look at the univariate F test (and bW) for that covariate. Other possible extensions of MANOVA are repeated measures, random effects, and so on.
What is discriminant analysis (DA)?
When the multivariate tests are significant, descriptive discriminant analysis (DDA) is an alternative to the protected F procedure. The general objective of DA is to distinguish k (at least 2) groups, defined by a nominal variable, from each other as well as possible on the basis of p (at least 2) interval variables. There are two types of DA. Descriptive DA aims to formulate a multivariate description of the differences between the k groups: if and only if the MANOVA multivariate tests are significant, we try to find interpretations for the discriminant function variates (just as with components in PCA); if an interpretation is found, we compare the group means on these variates. The other type is predictive DA: it aims to predict as well as possible to which group an individual belongs, focusing on the individual prediction and its accuracy rather than on the group means.
How does descriptive discriminant analysis work?
Canonical discrimination function variates are linear combinations of the interval variables: Di = a + b1iY1 + b2iY2 + ... + bpiYp
We choose the weights of D1 such that the k groups are maximally distinguished from each other. For D2, we again choose the weights so that the groups are maximally distinguished, under the restriction that D2 is orthogonal to (completely uncorrelated with) D1. We then choose D3 and so on according to the same rules, until the maximum number of variates has been reached: imax = min(k-1, p). We interpret DA from the diagram it yields. We interpret discriminant function variates as a kind of underlying dimensions, just like components in PCA; however, these dimensions are not primarily based on the correlations between the interval variables, but on how these variables discriminate the groups from each other.
What are the steps in DDA?
Descriptive DA provides a multivariate description of the groups in discriminant function space: on which underlying dimensions do the groups differ from each other? In DDA, we first determine the number of discriminant function variates; we only keep those variates that explain enough variance, are significant at the 5% level, and lead to a clear interpretation. The second step is to interpret the variates. The third and last step is to position the groups on the variates: take the discriminant function for each variate Dj and compute each group's mean on that variate by substituting the group mean of each Y variable.
The formula for the variate is: Dj = a + b1jY1 + b2jY2 + ... + bpjYp
The formula for the position of group g on variate j is: Pjg = b0j + b1jYbar1g + b2jYbar2g + ... + bpjYbarpg
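As an illustration (not the lecturer's example), a descriptive DA can be sketched with scikit-learn; the iris data simply stand in for p interval variables measured in k groups:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)                             # p = 4 interval variables, k = 3 groups
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)    # max number of variates = min(k-1, p) = 2
print(lda.explained_variance_ratio_)                          # variance explained by each variate
D = lda.transform(X)                                          # discriminant function scores per individual
for g in np.unique(y):
    print(g, D[y == g].mean(axis=0))                          # position of each group on the variates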
What are the limitations of descriptive DA?
The limitations of descriptive DA are that 1) it is only useful if we can find a substantive interpretation for the discriminant function variates, and 2) there are no formal tests (at least not in SPSS) that show which group means on some variate are significantly different from other group means on the same variate.
Five questions:
- What topics are covered in this lecture?
This lecture covers MANOVA (Multivariate ANOVA) and discriminant analysis. The lecturer discussed how each of these procedures work, how we can run them in SPSS, and what the limitations are.
- What topics are covered that are not included in the literature?
The literature in the exercise book is more elaborate than the lecture, as it also discusses the eigenvalues of the discriminant functions and the squared canonical correlation. It is also more elaborate in explaining how to interpret discriminant analyses. The literature does, however, not discuss the hypotheses and assumptions that are important in MANOVA, which the lecture does.
- What recent developments are discussed?
n.a.
- What remarks are made about the exam?
No specific remarks were made about the exam.
- What questions are discussed that may be included in the exam?
n.a.
Lecture 6, What is Repeated measures ANOVA?
In an ordinary ANOVA, we compare two or more group means on one dependent variable. So, we make between-subjects comparisons. In repeated measures ANOVA, we compare the means of only one group on two or more dependent variables. So, we make within-subjects comparisons. In a mixed ANOVA, we make both between- and within-subjects comparisons.
When do we use repeated measures?
There are four kinds of situations in which we use repeated measures ANOVA. The first is when we want to measure how a variable changes over time (time series). The second is when we want to measure the same participant's score under different experimental conditions: a repeated measures experiment. The third is a common measuring rod: we use non-identical measures, but the variables have an identical answering scale. The fourth and last is measuring pairs or groups: individuals come in pairs or groups, and we make observations within these groups.
Often, a choice for a repeated measures design follows from the research question. In the experimental situation, as described above, there is a choice between a between-subjects (BS) and a within-subjects (WS) approach.
What are the advantages and disadvantages?
The advantage of a WS approach is that each individual is primarily compared with itself, so it is the ultimate method for removing error variation: Systematic individual differences are removed from error, so the repeated measures design has more statistical power.
The WS approach also has disadvantages. Firstly, there can be latency (or learning) effects: relatively permanent effects of earlier measures on later measures. There may also be carry-over effects: more temporary effects of earlier measures on later measures.
What is the null hypothesis?
The null hypothesis in repeated measures ANOVA looks like this: H0: μ1 = μ2 = ... = μp. We cannot simply do a standard ANOVA, because the repeated measurements come from the same individuals, so the dependent variables are correlated.
What are the possible approaches?
In the univariate approach to repeated measures ANOVA, we do a standard ANOVA but with adjusted error terms: all variation that is due to individual differences is eliminated from the error. In the multivariate approach, which is somewhat similar to MANOVA, we use contrasts of the dependent variables. Previously, in standard ANOVA, when F-tests were significant, we would use post-hoc tests (e.g. Tukey) to find out which means differ significantly. In repeated measures, however, we can also make planned comparisons: the researcher decides beforehand, on theoretical grounds, to compare only a selected combination of group means. If chosen carefully, contrasts allow fewer hypothesis tests and the possibility of orthogonal tests.
What are contrasts?
The formula of a contrast is: L = c1Y1 + c2Y2 + ... + cpYp, where Σci = 0. Contrasts compare two linear combinations of variables. The mean of a contrast is the contrast of the means of the dependent variables: ML = c1M1 + c2M2 + ... + cpMp. This statistic allows us to test whether some means are equal in the population.
How does it work?
If H0: μ1 = μ2 = ... = μp is true, then the mean of every possible contrast L should be equal to zero in the population. For p dependent variables there can only be p-1 linearly independent contrasts; linearly independent contrasts add something new to the previous contrasts. Orthogonal contrasts are completely independent of one another. MANOVA cannot only test for differences between group means, but also whether the average of all means is zero; we call this the test of the intercept.
The main steps that we take in the multivariate approach are the following:
1. Make p-1 linearly independent contrasts of the dependent variables.
2. Take these contrasts as the dependent variables in a MANOVA. Now the test of the intercept addresses H0: L1 = L2 = ... = Lp-1 = 0. The multivariate tests for the intercept play the same role as the F-test in standard ANOVA.
3. The univariate F-tests in such a MANOVA are tests for the different contrasts. They can be used to answer questions about which variable means differ significantly from each other.
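Below is a minimal sketch of these steps under stated assumptions: a hypothetical 30 x 3 score matrix, simple difference contrasts as the p-1 contrasts, and the multivariate test of the intercept computed by hand as Hotelling's T² (SPSS would report Wilks or Pillai's, which lead to the same F in this one-sample situation):

import numpy as np
from scipy.stats import f

rng = np.random.default_rng(4)
Y = rng.normal(size=(30, 3)) + [5.0, 5.5, 6.0]   # hypothetical scores on p = 3 repeated measures

# step 1: p - 1 linearly independent contrasts (here simple difference contrasts)
C = np.array([[1, -1, 0],
              [0, 1, -1]])
L = Y @ C.T                                      # contrast scores per individual

# step 2: multivariate test of the intercept: are all contrast means zero?
n, q = L.shape
mean = L.mean(axis=0)
S = np.cov(L, rowvar=False)
T2 = n * mean @ np.linalg.solve(S, mean)         # Hotelling's T2
F = (n - q) / (q * (n - 1)) * T2
print(F, f.sf(F, q, n - q))                      # F statistic and its p-value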
What is the difference with a standard MANOVA?
So, the multivariate approach is just like MANOVA, with two differences: the dependent variables are not the original variables but contrasts, and the test is now about the intercept rather than about differences between groups.
What are the assumptions?
The assumptions in repeated measures are 1) multivariate normality, and 2) independent errors. In a mixed design (so when we have both BS and WS factors), the assumption of homogeneity of variance-covariance matrices also applies.
Five questions:
- What topics are covered in this lecture?
This lecture covers repeated measures ANOVA. The lecturer described in what situations we use this method, what different approaches we can take, and how these work.
- What topics are covered that are not included in the literature?
The lecture also discusses an SPSS example, which makes the material more comprehensible. The assumptions are also not mentioned in the literature.
- What recent developments are discussed?
n.a.
- What remarks are made about the exam?
No specific remarks were made about the exam.
- What questions are discussed that may be included in the exam?
n.a.
Lecture 7, What is Mediation Analysis (MA)?
In simple mediation analysis, we have a variable X (independent) that influences variable Y (dependent). Perhaps, variable X influences some intermediate variable M; a mediator variable, which in turn influences variable Y.
What do we investigate in simple mediation analysis?
In this situation, we can ask different questions about the data: Firstly, is there a relationship between X and Y (total effect)? Then, is the relationship between X and Y mediated by M (indirect effect)? Lastly, after controlling for the mediator, is there still a relationship between X and Y (direct effect)? Our aim is to distinguish the indirect and direct effect.
What is the causal steps approach?
Baron and Kenny proposed the causal steps approach to mediation. According to them, mediation requires investigating four logical causal steps, in which four regression coefficients are estimated: a, b, c, and c'.
c stands for the total effect; the effect of X on Y, without consideration of M yet.
a stands for the influence of X on M, and b for the influence of M on Y; so together they describe the indirect effect.
c’ stands for the direct effect of X on Y, with consideration of M.
We can perform three different regression analyses: 1) X --> Y, estimating c; 2) X --> M, estimating a; and 3) X, M --> Y, estimating b and c'.
What steps do we take in causal steps?
The first step is to investigate whether there is a total effect; in this step, we test whether c = 0. Baron and Kenny argued that if no total effect is found, we can stop the mediation analysis. However, we now know that a mediator variable can also neutralize a total effect. A simple example: exercise leads to a reduction in weight, but if someone who exercises rewards himself by heavy eating, this effect is neutralized. We will then not find a total effect, because the indirect and direct effects cancel each other out as they have opposite signs.
The second step is to investigate whether there is an indirect effect: is X related to M? If X is unrelated to M, there is no mediation in the data; in this step, we test whether a = 0. The third step also concerns the indirect effect: we test whether M is related to Y, controlling for X. If M is unrelated to Y, there is no mediation; in this step, we test whether b = 0. If both step 2 and step 3 are significant, there is mediation.
Then, in the final step, we investigate the direct effect by determining whether X is related to Y after controlling for M. Baron and Kenny saw this step as crucial for distinguishing complete and partial mediation. If c' = 0, there is complete mediation: there is no direct effect but only an indirect effect, and X --> Y is completely explained by the mediator. If c' ≠ 0, there is a direct effect, and X --> Y is only partially explained by the mediator. The problem with this conceptualization in practice is that we can never be sure that the direct effect does not exist, as significance tests can only reject a null hypothesis, not confirm it.
How do we do this in SPSS?
For step 1, we run a regression analysis of Y on X in SPSS. For step 2, we run a regression analysis of M on X. For steps 3 and 4, we run one regression analysis of Y on both M and X, which gives b (the effect of M controlling for X) and c' (the effect of X controlling for M). In all these steps we look at the b or Beta coefficients (depending on whether we want a standardized solution) with the associated t and p values.
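A sketch of the corresponding regressions in Python with simulated, hypothetical X, M and Y (the variable names and effect sizes are made up; the coefficient names follow the lecture):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 200
d = pd.DataFrame({"X": rng.normal(size=n)})
d["M"] = 0.5 * d["X"] + rng.normal(size=n)
d["Y"] = 0.4 * d["M"] + 0.2 * d["X"] + rng.normal(size=n)

c = smf.ols("Y ~ X", data=d).fit().params["X"]       # step 1: total effect c
a = smf.ols("M ~ X", data=d).fit().params["X"]       # step 2: a
full = smf.ols("Y ~ X + M", data=d).fit()
b, c_prime = full.params["M"], full.params["X"]      # steps 3 and 4: b and c'
print(c, a, b, c_prime, a * b)                       # note that c = c' + ab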
How do we interpret the effects?
We describe the effects in mediation as follows. One extra unit of X yields a extra units of M; one extra unit of M yields b extra units of Y; so one extra unit of X yields ab extra units of Y via the mediator: the indirect effect = ab. For the direct effect, one extra unit of X yields c' extra units of Y: the direct effect = c'. For the total effect, one extra unit of X yields c extra units of Y: the total effect = c. The relationship is total = direct + indirect, so c = c' + ab.
What about the effect size?
Mediation analysis is based on multiple regression analyses. The standard measure of effect size in regression is VAF = explained variance. VAF does however not work for mediation analysis. There is no consensus about the best measure of effect size in mediation. Measures that are often used are the completely standardized indirect effect (CSIE), or the proportion mediated.
CSIE = ab. We interpret this in the following way: 1 extra SD of X leads to ab extra SDs of Y. We can only use standardized coefficients in calculating CSIE.
The proportion mediated (Pmed) = ab / c. We interpret this as follows: Pmed is the proportion of the total effect that is due to mediation. When we compute Pmed, it does not matter whether we use standardized or unstandardized coefficients; they yield the same outcome.
What is the Sobel test?
The indirect effect ab is composed of X -> M (a) and M -> Y (b). The Sobel test is a single test of the indirect effect ab. We don't run it in SPSS but calculate it by hand with the formula z = ab / SEab: the indirect effect divided by its standard error. There are three versions of the Sobel test, with different formulas for the standard error. We discuss the Aroian version: z = ab / √(b²·SEa² + a²·SEb² + SEa²·SEb²).
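A small sketch of this hand calculation with made-up values for a, b and their standard errors; the two-sided p-value comes from the standard normal distribution:

import math
from scipy.stats import norm

a, se_a = 0.50, 0.10     # hypothetical X -> M coefficient and its SE
b, se_b = 0.40, 0.12     # hypothetical M -> Y coefficient and its SE

se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2 + se_a**2 * se_b**2)   # Aroian version of SEab
z = (a * b) / se_ab
print(z, 2 * norm.sf(abs(z)))    # Sobel/Aroian z and its two-sided p-value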
What are the possibilities and limitations?
Mediation analysis can be generalized to situations with other measurement levels, multiple independent variables, and multiple mediators. For example, in the case of two mediators, we have a total effect (c), two indirect effects (a1b1 and a2b2), and one direct effect (c’). We run four regression analyses; X-->Y, X-->M1, X-->M2, and X, M1 & M2 --> Y.
However, we should be aware that correlation does not imply causality. Mediation models are clearly causal, but they are often applied to correlational data: mediation analysis allows us to reject a causal model, but it cannot prove that a causal model is correct. Another problem is that confounders can occur in the data. Confounder variables are variables that influence both the dependent and the independent variables; the consequence is that the causal relationship between X and Y is overestimated. We can never be sure that there are no confounders, but we should always think about possible ones.
What do we mean by suppression and spurious correlations?
The effect of a predictor in multiple regression analysis may change radically when a second predictor (Z) is included. Suppression occurs when the relationship between X1 and Y becomes much stronger when Z is added to the MRA. A spurious correlation occurs when the relationship between X1 and Y becomes zero or much weaker when Z is added to the MRA.
If the relationship between the outcome and a predictor stays hidden as long as a second predictor is not taken into account, we speak of suppression: the inclusion of the second predictor enhances the effect of the first predictor. We can observe it by comparing the zero-order correlation of the predictor to its beta coefficient or (semi-)partial correlation. When the beta or (semi-)partial correlation has the same sign as, but is larger than, the zero-order correlation, there is suppression. When the beta or (semi-)partial correlation has the opposite sign (+ or -) of the zero-order correlation, there is suppression as well.
With spurious correlations, there is an overall correlation that disappears when a third variable has been taken into account.
Five questions:
- What topics are covered in this lecture?
This lecture covers mediation analysis, suppression, and spurious correlations. It discusses the causal steps approach in mediation analysis, effect sizes, the Sobel test, and how the interpretation of all of these work. Suppression and spurious correlations were discussed and displayed with an SPSS example.
- What topics are covered that are not included in the literature?
The lecture also discusses an SPSS example of spurious correlations and suppression. The lecturer indicated, however, that his article about this topic is more elaborate than the lecture and is therefore recommended reading.
- What recent developments are discussed?
The lecturer discussed how some of the claims of Baron and Kenny have been rejected, why this happened, and what alternative approaches have been proposed.
- What remarks are made about the exam?
No specific remarks were made about the exam.
- What questions are discussed that may be included in the exam?
n.a.