Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 17

A multivariate analysis is used when there is more than one dependent (outcome) variable. It is possible to run a separate F-test for each dependent variable, but doing so inflates the Type I error rate. A MANOVA can detect whether groups differ along a combination of dimensions, and therefore has greater potential power to detect an effect than separate ANOVAs.

A matrix is a grid of numbers arranged in columns and rows. The values within a matrix are called components or elements, and the rows and columns are vectors. A square matrix has an equal number of columns and rows. An identity matrix is a matrix in which the diagonal elements are 1 and the off-diagonal elements are 0. Sum of squares and cross-products (SSCP) matrices are a way of operationalizing multivariate versions of the sums of squares. The matrix that represents the systematic variance (model sum of squares) is denoted by the letter ‘H’ and is called the hypothesis sum of squares and cross-products matrix (hypothesis SSCP). The matrix that represents the unsystematic variance (residual sum of squares) is denoted by the letter ‘E’ and is called the error sum of squares and cross-products matrix (error SSCP). The matrix that represents the total sums of squares for each outcome (total SSCP) is denoted by the letter ‘T’.

The cross-product is the total combined error between two variables.
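
As a concrete sketch of the layout (notation mine, for two outcome variables): an SSCP matrix contains the sums of squares of the individual outcome variables on its diagonal and their cross-products off the diagonal, and the total SSCP is the sum of the hypothesis and error SSCP matrices:

SSCP = | SS(variable 1)      CP(variable 1, 2) |
       | CP(variable 1, 2)   SS(variable 2)    |

T = H + E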

THEORY BEHIND MANOVA
The total sum of squares is calculated by taking the difference between each score and the grand mean of all scores, squaring those differences and adding them together.

The total degrees of freedom are N-1. The model sum of squares is calculated by taking the difference between each group mean and the grand mean, squaring it, multiplying by the number of scores in the group and then adding these values together; its degrees of freedom are the number of groups minus one.

The residual sum of squares is the total sum of squares minus the model sum of squares; its degrees of freedom are the sample size of each group minus one, multiplied by the number of groups. The model and residual sums of squares are each divided by their own degrees of freedom to give mean squares, and the F-statistic is the model mean square divided by the residual mean square.
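
In compact notation (a sketch, assuming k groups with n_g scores per group, N scores in total, grand mean x̄_grand and group means x̄_g):

SS_T = Σ (x_i - x̄_grand)²,        df_T = N - 1
SS_M = Σ n_g (x̄_g - x̄_grand)²,    df_M = k - 1
SS_R = SS_T - SS_M,                df_R = N - k
F = (SS_M / df_M) / (SS_R / df_R) = MS_M / MS_R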

The cross-product is the difference between the scores and the mean for one variable multiplied by the difference between the scores and the mean for another variable, summed over participants. It is similar to covariance (the covariance is simply the cross-product averaged over the degrees of freedom). For the total cross-product the deviations are taken from the grand means:

CP_T = Σ (x_i - x̄_grand)(y_i - ȳ_grand)

For each outcome (dependent) variable, the grand mean of that variable is subtracted from each score. This gives one deviation per outcome variable for each participant.

The model cross-product, which reflects how the relationship between the outcome variables is influenced by the experimental manipulation, is calculated from the group means: the difference between each group mean and the grand mean on one outcome variable is multiplied by the corresponding difference on the other outcome variable and by the number of scores in the group, and these values are summed across groups:

CP_M = Σ n_g (x̄_g - x̄_grand)(ȳ_g - ȳ_grand)

The residual cross-product, which reflects how the relationship between the outcome variables is influenced by individual differences and unmeasured variables, uses the deviations of each score from its own group mean; equivalently, it is the total cross-product minus the model cross-product:

CP_R = Σ (x_i - x̄_g)(y_i - ȳ_g) = CP_T - CP_M

To calculate the test statistic for MANOVA, matrix division is needed, which is achieved by multiplying by an inverse matrix: the hypothesis matrix H is multiplied by the inverse of the error matrix E. This results in the matrix HE⁻¹, which represents the ratio of systematic variance to unsystematic variance in the model (the multivariate analogue of the F-statistic).
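
A minimal sketch of these matrix calculations in Python with NumPy (the function name and arguments are mine; it illustrates the arithmetic described above rather than SPSS's own routines):

import numpy as np

def sscp_matrices(Y, groups):
    """Return the hypothesis (H), error (E) and total (T) SSCP matrices.

    Y      : (n, p) array of scores on p outcome variables
    groups : length-n array of group labels
    """
    Y = np.asarray(Y, dtype=float)
    groups = np.asarray(groups)
    grand_mean = Y.mean(axis=0)

    p = Y.shape[1]
    H = np.zeros((p, p))
    E = np.zeros((p, p))
    for g in np.unique(groups):
        Yg = Y[groups == g]
        # model part: deviations of the group means from the grand means
        dev_model = (Yg.mean(axis=0) - grand_mean)[:, None]
        H += Yg.shape[0] * (dev_model @ dev_model.T)
        # residual part: deviations of each score from its own group mean
        dev_resid = Yg - Yg.mean(axis=0)
        E += dev_resid.T @ dev_resid

    T = H + E  # total SSCP is the hypothesis SSCP plus the error SSCP
    return H, E, T

# HE⁻¹, the ratio of systematic to unsystematic variance:
# H, E, T = sscp_matrices(Y, groups)
# HE_inv = H @ np.linalg.inv(E)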

It is possible to calculate linear combinations (variates) of the dependent variables. These variates can be used to predict to which group a person belongs; variates used in this way to discriminate between groups are called discriminant function variates. In other words, the independent variable (group membership) is predicted from a combination of the dependent variables.
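
A sketch of this idea using scikit-learn's linear discriminant analysis (the data below are random and purely illustrative; the discriminant analysis run here is closely related to, though not necessarily identical to, the follow-up analysis SPSS offers after a MANOVA):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
Y = rng.normal(size=(30, 2))                            # 30 participants, 2 outcome variables (illustrative only)
groups = np.repeat(["group1", "group2", "group3"], 10)  # 3 groups of 10

lda = LinearDiscriminantAnalysis().fit(Y, groups)
variates = lda.transform(Y)   # scores on the discriminant function variates
predicted = lda.predict(Y)    # group membership predicted from the outcome variables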

There are several assumptions of the MANOVA:

  1. Independence
    The residuals should be statistically independent
  2. Random sampling
    Data should be randomly sampled from the population of interest.
  3. Multivariate normality
    The residuals should have multivariate normality.
  4. Homogeneity of covariance matrices
    The variances in each group should be roughly equal for each outcome (dependent) variable, and the correlation between any two outcome (dependent) variables should be the same in all groups.

The assumption of multivariate normality cannot be tested in SPSS; instead, the assumption of univariate normality is checked for each outcome (dependent) variable separately. The homogeneity of covariance matrices can be tested using Box’s test.
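
These checks can also be sketched outside SPSS. The function below (the name is mine) uses SciPy's Shapiro-Wilk and Levene tests per outcome variable as rough univariate stand-ins for the multivariate assumptions; Box's test itself is not available in SciPy:

import numpy as np
from scipy import stats

def check_univariate_assumptions(Y, groups):
    """Per-outcome normality (Shapiro-Wilk on within-group residuals)
    and homogeneity of variance (Levene's test)."""
    Y = np.asarray(Y, dtype=float)
    groups = np.asarray(groups)
    for j in range(Y.shape[1]):
        samples = [Y[groups == g, j] for g in np.unique(groups)]
        # residuals: each score minus its own group mean on outcome j
        resid = np.concatenate([s - s.mean() for s in samples])
        _, p_normal = stats.shapiro(resid)
        _, p_levene = stats.levene(*samples)
        print(f"outcome {j}: Shapiro-Wilk p = {p_normal:.3f}, Levene p = {p_levene:.3f}")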

Bartlett’s test of sphericity tests whether the variance-covariance matrix is proportional to an identity matrix.
