MVDA - Repeated Measures ANOVA

Week 6: Repeated Measures ANOVA

When do we use it? When comparing two or more (dependent) variable means for only one group (within-subjects [WS] comparisons).

We use RMA in 4 types of situations:

  1. Time-series - Example: Does IQ get lower when we get older?
  2. Repeated measures experiment - Example: Is task performance better when working with others than when working alone?
  3. Common measuring rod - Example: Which TV program do kids like more: Pokemon, Powerpuff Girls, or Totally Spies?
  4. Pairs or groups - Example: Do people work harder in cohesive groups than in non-cohesive groups?

Discuss the robustness of the multivariate tests with respect to possible violations of multivariate normality in each group.

We look at the number of participants per group. If each group has more than 15 participants, the multivariate tests are robust to non-normality.

Is there a significant effect of condition on trait x (e.g.: anxiety)? If so, which group has the higher estimated marginal mean?

We look at the significance values in the Multivariate Tests table. If they are below 0.05, there is a significant effect of condition. Example of reporting:

Yes. All multivariate tests give F(3, 5) = 26.955, p = .002

The researcher uses the ready-made contrast ‘Simple’ in SPSS. Which group means are compared with which other group means in this set of contrasts?

We look at the Tests of Within-Subjects Contrasts table. Below condition, look at what each level is compared to. For example, if all of them are compared to level 1, that means each group is compared to the first group.

Check if this set of contrasts is orthogonal.

Orthogonal means that the contrasts do not overlap: for every pair of contrasts, the products of the corresponding contrast coefficients sum to zero. Simple contrasts are not orthogonal, because every contrast in the set involves the same reference level (level 1), so each pair of contrasts shares that level.
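As a concrete check, here is a minimal Python sketch (assuming a four-level within-subjects factor and the usual coefficient vectors of Simple contrasts) that sums the products of the contrast coefficients for every pair:

    import numpy as np

    # Simple contrasts for a 4-level within-subjects factor: every level is
    # compared against the reference level (level 1).
    c1 = np.array([-1, 1, 0, 0])   # level 2 vs level 1
    c2 = np.array([-1, 0, 1, 0])   # level 3 vs level 1
    c3 = np.array([-1, 0, 0, 1])   # level 4 vs level 1

    # Two contrasts are orthogonal when the sum of the products of their
    # coefficients equals zero.
    for name, (a, b) in {"c1*c2": (c1, c2), "c1*c3": (c1, c3), "c2*c3": (c2, c3)}.items():
        print(name, "=", np.dot(a, b))   # each sum is 1, not 0 -> not orthogonal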

Which contrasts have a significant F test? Interpret the significant effects.

We look at the Tests of Within-Subjects Contrasts table. Then, we check which contrasts under condition have a significant effect (p < 0.05) and report the corresponding test statistics and condition means.

Example: Level 4 vs Level 1 is significant. An example of how we can report this:

Suppose level 4 corresponds to giving a presentation in front of an audience of 50 people and level 1 to an audience of one person, and the dependent variable is the participants' anxiety score.

Answer: Higher anxiety in front of an audience of 50 persons (M = 8.25) than in front of an audience of one person (M = 4.25), F(1, 7) = 22.803, p = .002


Multivariate data analysis (MVDA) bundle

MVDA - multiple regression analysis

Week 1: Multiple Regression Analysis

Multivariate means exploring the relationships among three or more variables.

Multiple regression analysis (MRA) can be done when all variables are interval level (e.g. weight, height, IQ score).

The research question for MRA is: can Y be predicted from X1 and/or X2?

What are the assumptions of linearity, homoscedasticity and normality of residuals?

Linearity: the relationship between the independent and dependent variables is linear. Can be checked with scatterplots.

Homoscedasticity: the variance of the residuals is constant across the values of the predictors. Can be seen in the residual scatterplot.

Normality: the residuals follow an approximately straight line in the normal P-P plot.

What does multicollinearity mean?

That there is high intercorrelation between the predictors. Multicollinearity is not a problem when the tolerance is higher than 0.10 and the VIF is lower than 10; values beyond these cut-offs indicate problematic multicollinearity.
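As an illustration, a minimal sketch (simulated data, hypothetical variable names X1 and X2, using the statsmodels package) that computes the VIF and tolerance for each predictor:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    # Hypothetical predictors X1 and X2 (simulated so that they are correlated).
    rng = np.random.default_rng(1)
    X = pd.DataFrame({"X1": rng.normal(size=100)})
    X["X2"] = 0.6 * X["X1"] + rng.normal(size=100)

    Xc = sm.add_constant(X)                      # include the intercept column
    for i, name in enumerate(Xc.columns):
        if name == "const":
            continue
        vif = variance_inflation_factor(Xc.values, i)
        tolerance = 1 / vif
        # rule of thumb: problematic when VIF > 10, i.e. tolerance < 0.10
        print(f"{name}: VIF = {vif:.2f}, tolerance = {tolerance:.2f}")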

Are there outliers, influential points, or outliers on the predictors?

Outliers on the dependent variable Y: standardized residuals should lie between −3 and 3.

Influential points: Cook’s distance should be smaller than 1.

Outliers on the predictors (independent variables X): leverage values should be smaller than 3(k+1)/n, where k is the number of predictors and n the number of cases.
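A minimal sketch of these three checks on made-up data, using the influence measures from statsmodels (the cut-offs follow the rules of thumb above):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import OLSInfluence

    # Made-up data: predict Y from two predictors X1 and X2.
    rng = np.random.default_rng(2)
    df = pd.DataFrame({"X1": rng.normal(size=100), "X2": rng.normal(size=100)})
    df["Y"] = 2 + 0.5 * df["X1"] + 0.3 * df["X2"] + rng.normal(size=100)

    fit = sm.OLS(df["Y"], sm.add_constant(df[["X1", "X2"]])).fit()
    infl = OLSInfluence(fit)

    n, k = len(df), 2                              # n cases, k predictors
    std_resid = infl.resid_studentized_internal   # outliers on Y: |value| > 3
    cooks_d = infl.cooks_distance[0]              # influential points: value > 1
    leverage = infl.hat_matrix_diag               # outliers on X: value > 3(k+1)/n

    cutoff = 3 * (k + 1) / n                      # here 3 * 3 / 100 = 0.09
    print("suspect residuals:   ", np.where(np.abs(std_resid) > 3)[0])
    print("influential points:  ", np.where(cooks_d > 1)[0])
    print("high-leverage points:", np.where(leverage > cutoff)[0])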

What are the null and the alternative hypothesis to test the regression model?

H0: b1 = b2 = ··· = bk = 0 (no relation between Y and X1, X2)

Ha: at least one bj ≠ 0

When can the null hypothesis be rejected?

When the F test for the regression model is significant (p < .05); the hypothesis of no relation between Y and the predictors (all regression coefficients equal to 0) is then rejected.

What are the null and the alternative hypothesis to test the individual coefficients?

H0: b1 = 0 and Ha: b1 ≠ 0

H0: b2 = 0 and Ha: b2 ≠ 0

What are the unstandardized and standardized regression equations?

Unstandardized (MRA): Ŷ = b0 + b1X1 + b2X2

Standardized (MRAst): Ŷst = β1X1st + β2X2st

How much variance of Y is explained in total by X1 and X2?

R² gives the proportion of variance of Y explained by X1 and X2 together.

How much variance of Y is uniquely explained by X1? How much variance of Y is uniquely explained by X2? What is the best predictor?

We look at the part correlations: the squared part (semipartial) correlations r²Y(1.2) and r²Y(2.1) give the variance of Y uniquely explained by X1 and by X2, respectively. Whichever has the highest value is the best predictor.
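A sketch of this idea on simulated data (hypothetical variable names): the unique contribution of a predictor equals the drop in R² when that predictor is left out of the model, which is its squared part correlation:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Simulated data: can Y be predicted from X1 and/or X2?
    rng = np.random.default_rng(3)
    df = pd.DataFrame({"X1": rng.normal(size=200), "X2": rng.normal(size=200)})
    df["Y"] = 1 + 0.6 * df["X1"] + 0.2 * df["X2"] + rng.normal(size=200)

    full = sm.OLS(df["Y"], sm.add_constant(df[["X1", "X2"]])).fit()
    without_x1 = sm.OLS(df["Y"], sm.add_constant(df[["X2"]])).fit()
    without_x2 = sm.OLS(df["Y"], sm.add_constant(df[["X1"]])).fit()

    print("total R^2:", round(full.rsquared, 3))
    # The drop in R^2 when a predictor is left out is its squared part
    # correlation, i.e. the variance of Y it explains uniquely.
    print("unique R^2 of X1:", round(full.rsquared - without_x1.rsquared, 3))
    print("unique R^2 of X2:", round(full.rsquared - without_x2.rsquared, 3))
    # The predictor with the larger unique R^2 is the best predictor.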

MVDA - analysis of variance

Week 2: Analysis of variance (ANOVA)

ANOVA can be done when the independent variables are nominal level and the dependent variable is interval level.

Research question: What is the effect of X1 and X2 on Y?

Check the assumptions of homogeneity of the variances and normality of the residuals.

Homogeneity: check Levene’s test. If it is significant (p < .05), the assumption of equal variances is violated.

Normality: check whether the residuals are normally distributed in the histogram. If not, the assumption is violated.

What are the group sizes? Discuss robustness of the F tests with respect to possible violations of the assumptions.

If each group has more than 15 participants, the F test is robust to non-normality. The F test is robust to unequal group variances if the largest group size divided by the smallest group size is smaller than 1.5.

What are the null and alternative hypotheses to test the ANOVA model?

H0: μ11 = μ12 = μ21 = μ22

Ha: at least two μij are not equal

Can the null hypothesis be rejected? If so, which effects are significant?

The null hypothesis can be rejected when the F test of the model is significant (p < .05), which means that at least two group means differ. Report the F statistic with its degrees of freedom and p value, for example: F(3, 164) = 12.679, p < .001.

How much variance of test score is explained by the total model and by each individual effect?

When we want to know how much variance is explained by the total model, we look at R². This value is printed at the bottom of the Tests of Between-Subjects Effects table, or it can be calculated by dividing the Sum of Squares (SS) of the corrected model by the SS of the corrected total.

The individual contribution of an effect can be calculated by dividing the SS of that effect by the SS of the corrected total.

  • Example: the SS of Method is 146.720 and the SS of the corrected model is 1429.661. Dividing the former by the latter gives .103, so the contribution of Method is reported as η²Method = .103.
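A quick arithmetic check of the example above, using the quoted sums of squares:

    # Sums of squares as quoted in the example above.
    ss_method = 146.720
    ss_denominator = 1429.661

    eta_sq_method = ss_method / ss_denominator
    print(round(eta_sq_method, 3))   # 0.103

    # In general: R^2 (total model) = SS_corrected_model / SS_corrected_total,
    # and eta^2 of one effect = SS_effect / SS_corrected_total.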
MVDA - analysis of covariance

Week 3: Analysis of covariance (ANCOVA)

ANCOVA can be used when one of the independent variables (factor X) is nominal level, while the other one (the covariate) is interval level.

The goal of ANCOVA is the reduction of error variance and removal of systematic bias.

 

The research question is: What is the effect of X on Y after correction for C?

Is there a significant effect of x method (e.g. teaching method) on posttest? (report test statistic, df, p value and a suitable measure of effect size). If yes, interpret that effect.

Here we look at the Tests of Between-Subjects Effects table. We need the eta squared, the F statistic and the p value.

The eta squared can be calculated by dividing the Sum of Squares (SS) of Method by the SS of the corrected total.

The F is the Mean Square (MS) of Method divided by the MS of error. This value can also be read from the table. The two degrees of freedom are those of Method (in the example below, 2) and of error (in the example, 177). The p value is shown in the table under Sig.

  • An example of how the test statistic should be reported: F(2, 177) = 13.371, p < .001.
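For illustration, a small sketch that recomputes F and its p value from table entries; the mean squares below are made up (chosen so that F comes out near the 13.371 in the example), and only the degrees of freedom follow the example:

    from scipy import stats

    # Made-up mean squares; only the degrees of freedom (2 and 177) follow the example.
    ms_method, ms_error = 301.5, 22.55
    df_method, df_error = 2, 177

    F = ms_method / ms_error                     # F = MS(method) / MS(error)
    p = stats.f.sf(F, df_method, df_error)       # upper-tail p value of the F distribution
    print(f"F({df_method}, {df_error}) = {F:.3f}, p = {p:.2g}")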

If there is a significant effect of method on the posttest, we then look at the Multiple Comparisons table. We look at the mean differences between the methods and check which of them are significant. For example, if method B shows large, significant mean differences with both other methods, we can say that method B differs significantly from the others.

Is there a significant correlation between pretest and post-test in all groups?

Here we look at the within-group correlations table and see if all are significant (p < .001)

Are there significant differences between the groups at pretest? If yes, interpret these differences.

Here we look at the Tests of Between-Subjects Effects table, with the dependent variable: pretest. If the F test is significant, there are differences between the groups. Then, in the multiple comparisons table look at mean differences.

Example of reporting what we found:

Significant group differences at pretest, F(2, 177) = 10.221, p < .001.

No difference between methods A and B (Mdiff = -2.67, p = .339), but method C differs from both method A (Mdiff = -5.72, p = .008) and method B (Mdiff = -8.38, p < .001).

Do you think that adding the covariate might lead to reduction of error, reduction of bias, neither, or both?

If there are large and significant within-group correlations → possible reduction of error

If there are significant differences between group means → possible elimination of bias

 

 

MVDA - logistic regression analysis

Week 4: Logistic Regression Analysis (LRA)

LRA can be used when the dependent variable (Y) is binary and the predictors (X1, X2) interval level (or binary).

The research question is: can Y be predicted from X1 and/or X2?

  • Example: Can passing (1) or failing (0) the MVDA exam (Y) be predicted from the student’s grade on the psychometrics exam (X)?

Is there a significant association between grade and passing/failing the exam? (report test statistic, df, and p value)?

Here, we look at the Variables in the Equation table, at the Wald statistic for grade. If it is significant, then yes, there is a significant association. An example of how this can be reported:

Yes, Wald χ2(1) = 7.090, p = .006

Write down the logistic regression equation

For example, if the constant B is -4.200 and the B for grade is 0.671, the equation looks like this:

P(pass) = 1 / (1 + e^-(-4.200 + 0.671 × Grade))

or, written in terms of the odds:

odds(pass) = P(pass) / (1 - P(pass)) = e^(-4.200 + 0.671 × Grade)

For what grade is the probability of passing the MVDA exam equal to the probability of failing the MVDA exam?

Passing = 50% and failing = 50%, so P = 1/2 and the odds P / (1 - P) = 1.

The odds equal e^(-4.200 + 0.671 × Grade), so for the odds to be 1, the exponent -4.200 + 0.671(Grade) has to equal 0. This is because e to the power of 0 is 1.

So, -4.200 + 0.671(g) = 0

0.671(g) = 4.200

g = 6.259

Therefore, the grade where there is an equal chance for passing and failing is 6.259.
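The same calculation in a two-line Python sketch, with the coefficients taken from the example above:

    b0, b1 = -4.200, 0.671
    print(round(-b0 / b1, 3))   # 6.259 -> equal chance of passing and failing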

Calculate the probabilities and odds of passing for X = 0, 5, 10

Using P = 1 / (1 + e^-(-4.200 + 0.671X)) and odds = P / (1 - P):

X       P           Odds (rounded)
0       0.0148      0.015
5       0.3005      0.429
10      0.9248      12.3

How to calculate the odds ratio?

Example:

X       P           Odds        Odds ratio (odds at X+1 / odds at X)
1       .0285       .02931      1.958
2       .0543       .0574       1.958

Therefore, if X increases by 1 unit, the odds are multiplied by 1.958 (i.e., they become 1.958 times as large).
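A short Python sketch that reproduces these probabilities, odds, and the odds ratio from the coefficients of the example (B0 = -4.200 and B = 0.671):

    import math

    b0, b1 = -4.200, 0.671                 # constant and grade coefficient from above

    def p_pass(x):
        """P(pass | X = x) for a logistic regression with one predictor."""
        return 1 / (1 + math.exp(-(b0 + b1 * x)))

    for x in (0, 1, 2, 5, 10):
        p = p_pass(x)
        odds = p / (1 - p)
        print(f"X = {x:2d}: P = {p:.4f}, odds = {odds:.4f}")

    # The odds ratio for a one-unit increase in X is e^b1 (constant for every X);
    # it matches the odds-ratio column above up to rounding of the coefficient.
    print("odds ratio:", round(math.exp(b1), 3))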

 

MVDA - Multivariate analysis of variance and Descriptive Data Analysis

Week 5: MANOVA and Descriptive DA

The goal of MANOVA and of Descriptive DA is the optimal prediction of differences between group means on several interval variables.

Why? Often very natural to compare groups on more than one variable, for example:

• quality and quantity of task performance;

• different stress reactions (e.g. emotions, physiological measures).

Compared to ANOVA:

ANOVA: one dependent variable -->(univariate)

MANOVA: two or more dependent variables -->(multivariate)

Check the assumption of equality of the covariance matrices and discuss robustness of the MANOVA against violations of the assumption of multivariate normality.

To check the equality of covariance matrices, look at Box’s M in the Test of Equality of Covariance Matrices table. If its p value is higher than 0.05, the test is not significant and the assumption holds.

Robustness to non-normality: the number of participants per group has to be ≥ 20.

MANOVA is robust to unequal covariance matrices when the design is balanced (equal group sizes).

 

Consider the mean values in the descriptive statistics. Which groups differ a lot on characteristic x (e.g. physical complaints)? Which groups differ only a little?

Look at the Descriptive Statistics table and the rows for the relevant variable, in this case physical complaints. Compare the values under Mean.

Given your crude assessment in the previous question, is it plausible to expect a multivariate effect?

If the sample group means clearly differ on one or more of the dependent variables, then yes, a multivariate effect can be expected.

Example: Is there a significant effect of occupation on physical complaints, experience of hostility, and/or dissatisfaction?

We look at the Multivariate Tests table, in the Occupation box. Then, we report the F statistic, df, and p value. If the p values are below .05, there is a significant effect.

For which variables is the univariate effect significant?

We look at the Tests of Between-Subjects Effects table. Then, we report the tests where p < 0.05.

What would the answer to the previous question be if we apply a Bonferroni correction for multiple testing?

For Bonferroni, we divide the alpha by the number of dependent variables. For example, with three dependent variables (hostility, physical complaints, dissatisfaction), alpha becomes .05/3 = .0167. Because the alpha is smaller, we have to check which p values are still below it; often none or only a few effects remain significant. Example:

Before Bonferroni:

Physical complaints F(2, 177) = 3.196, p = .043 --> significant

Hostility F(2, 177) = 3.405, p = .035 --> significant

Dissatisfaction F(2, 177) = 4.511, p = .013 --> significant

After Bonferroni, with α = .05/3 = .0167:

Only Dissatisfaction F(2, 177) = 4.511, p = .013 --> significant.
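A minimal Python sketch of this Bonferroni check; the p values are the ones quoted above and the labels are only for printing:

    # p values quoted in the example above.
    p_values = {"physical complaints": .043, "hostility": .035, "dissatisfaction": .013}

    alpha = 0.05
    alpha_bonf = alpha / len(p_values)     # .05 / 3 = .0167
    for name, p in p_values.items():
        verdict = "significant" if p < alpha_bonf else "not significant"
        print(f"{name}: p = {p:.3f} -> {verdict} at alpha = {alpha_bonf:.4f}")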

Interpret the significant effect(s) (after Bonferroni correction) using the table with multiple comparisons. Why is the Tukey HSD correction applied?

The Tukey HSD correction is applied because many pairwise comparisons between group means are made at the same time; Tukey’s procedure keeps the familywise Type I error rate at α across all of those comparisons. To interpret the significant effect(s), look in the Multiple Comparisons table at which pairs of groups show a significant mean difference and in which direction.