The linear model - summary of Chapter 9 by A. Field 5th edition

Statistics
Chapter 9
The linear model (regression)

An introduction to the linear model (regression)

The linear model with one predictor

outcomei = (b0 + b1Xi) + errori

This model uses an unstandardised measure of the relationship (b1) and consequently we include a parameter b0 that tells us the value of the outcome when the predictor is zero.

Any straight line can be defined by two things:

  • the slope of the line (usually denoted by b1)
  • the point at which the line crosses the vertical axis of the graph (the intercept of the line, b0)

These parameters are regression coefficients.
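The book works in SPSS, but the two parameters can also be computed by hand. A minimal sketch in Python, using made-up toy data: the slope is the sum of cross-product deviations divided by the sum of squared deviations of the predictor, and the intercept follows from the fact that the least-squares line passes through the point of means.

```python
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(x)
x_mean = sum(x) / n
y_mean = sum(y) / n

# slope b1: cross-product deviations divided by squared deviations of x
b1 = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
      / sum((xi - x_mean) ** 2 for xi in x))
# intercept b0: the least-squares line passes through (x_mean, y_mean)
b0 = y_mean - b1 * x_mean
```

For these toy data the estimates come out at roughly b1 ≈ 1.99 and b0 ≈ 0.05.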

The linear model with several predictors

The linear model expands to include as many predictor variables as you like.
An additional predictor can be placed in the model given a b to estimate its relationship to the outcome:

Yi = (b0 + b1X1i + b2X2i + … + bnXni) + εi

bn is the coefficient of the nth predictor (Xni)

Regression analysis is a term for fitting a linear model to data and using it to predict values of an outcome variable from one or more predictor variables.
Simple regression: with one predictor variable
Multiple regression: with several predictors

Estimating the model

No matter how many predictors there are, the model can be described entirely by a constant (b0) and by parameters associated with each predictor (bs).

To estimate these parameters we use the method of least squares.
We could assess the fit of a model by looking at the deviations between the model and the data collected.

Residuals: the differences between what the model predicts and the observed values.

To calculate the total error in a model we square the differences between the observed values of the outcome, and the predicted values that come from the model:

total error = Σ (observedi – modeli)², summed over i = 1 to n

Because we call these errors residuals, this is called the residual sum of squares (SSR).
It is a gauge of how well a linear model fits the data.

  • if the SSR is large, the model is not representative
  • if the SSR is small, the model is representative for the data

The model with the smallest SSR is the best model (hence the method of least squares).
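The residual sum of squares can be illustrated directly. A sketch with toy data, assuming a line with coefficients b0 = 0.05 and b1 = 1.99 has already been estimated:

```python
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
b0, b1 = 0.05, 1.99  # illustrative, previously estimated coefficients

predicted = [b0 + b1 * xi for xi in x]
# residuals: differences between observed and predicted values
residuals = [yi - pi for yi, pi in zip(y, predicted)]
# SSR: the total squared error of the model
ss_r = sum(e ** 2 for e in residuals)
```

For these numbers the SSR is small (about 0.107), indicating the line represents the data well.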

Assessing the goodness of fit: sums of squares, R and R2

Goodness of fit: how well the model fits the observed data

Total sum of squares (SST): how good the mean is as a model of the observed outcome scores.

We can use the values of SST and SSR to calculate how much better the linear model is than the baseline model of ‘no relationship’.
The improvement in prediction resulting from using the linear model rather than the mean is calculated as the difference between SST and SSR.
This improvement is the model sum of squares (SSM).

  • if SSM is large, the linear model is very different from using the mean to predict the outcome variable. It is a big improvement.

R2 = SSM/ SST

R2 is the improvement due to the model

  • To express this value as a percentage, multiply it by 100.
  • R2 represents the amount of variance in the outcome explained by the model relative to how much variation there was to explain in the first place.
  • we can take the square root of this value to obtain Pearson’s correlation coefficient for the relationship between values of the outcome predicted by the model and the observed values of the outcome.
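The relationship SSM = SST − SSR and the definition R² = SSM/SST can be sketched with the same kind of toy data (the predicted values below are illustrative):

```python
import math

y = [2.1, 3.9, 6.2, 7.8, 10.1]
predicted = [2.04, 4.03, 6.02, 8.01, 10.00]  # hypothetical model predictions

y_mean = sum(y) / len(y)
ss_t = sum((yi - y_mean) ** 2 for yi in y)                  # total variability (mean as model)
ss_r = sum((yi - pi) ** 2 for yi, pi in zip(y, predicted))  # residual variability
ss_m = ss_t - ss_r                                          # improvement over the mean

r_squared = ss_m / ss_t       # proportion of variance explained by the model
r = math.sqrt(r_squared)      # correlation between predicted and observed values
```

Here nearly all the variance to be explained is explained (R² ≈ 0.997), so the model is a big improvement over simply using the mean.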

Another use of the sums of squares is in computing the F-statistic.

  • F is based upon the ratio of the improvement due to the model and the error in the model.

Mean squares (MS): the sum of squares divided by the associated degrees of freedom.

MSM = SSM/k

MSR = SSR/ (N – k – 1)

F = MSM/MSR

F has an associated probability distribution from which a p-value can be derived, telling us the probability of getting an F at least as large as the one we have if the null hypothesis were true.
The F-statistic can also be used to test the significance of R2:

F = ((N – k – 1)R2) / (k(1 – R2))
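Both routes to F, via the mean squares and via R², give the same answer. A sketch with toy sums of squares for a model with k = 1 predictor and N = 5 cases:

```python
N, k = 5, 1
ss_m, ss_r = 39.601, 0.107  # illustrative sums of squares

ms_m = ss_m / k             # model mean squares, df = k
ms_r = ss_r / (N - k - 1)   # residual mean squares, df = N - k - 1
f_from_ms = ms_m / ms_r

r_squared = ss_m / (ss_m + ss_r)  # SSM / SST
f_from_r2 = ((N - k - 1) * r_squared) / (k * (1 - r_squared))

# the two computations are algebraically identical
```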

Assessing individual predictors

Any predictor in a linear model has a coefficient (bi). The value of b represents the change in the outcome resulting from a unit change in a predictor.
The t-statistic is based on the ratio of explained variance to unexplained variance (error):

t = (bobserved – bexpected) / SEb

Under the null hypothesis bexpected is zero, so t reduces to the observed b divided by its standard error.

The statistic t has a probability distribution that differs according to the degrees of freedom for the test.
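A sketch of the t-test for a single slope in simple regression, using toy data and illustrative coefficients; the standard error of the slope is estimated as √(MSR / Sxx):

```python
import math

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
b0, b1 = 0.05, 1.99  # illustrative, previously estimated coefficients

n = len(x)
x_mean = sum(x) / n
ss_r = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
ms_r = ss_r / (n - 2)                         # residual mean squares, df = n - k - 1
s_xx = sum((xi - x_mean) ** 2 for xi in x)
se_b1 = math.sqrt(ms_r / s_xx)                # standard error of the slope

t = (b1 - 0) / se_b1  # compare against a t-distribution with n - 2 df
```

For one predictor, this t-statistic squared equals the model F-statistic.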

Bias in linear models?

Outliers

An outlier: a case that differs substantially from the main trend in the data.
Outliers can affect the estimates of the regression coefficients.

Standardized residuals: the residuals converted to z-scores and so are expressed in standard deviation units.
Regardless of the variables in the model, standardized residuals are distributed around a mean of 0 with a standard deviation of 1.

  • Standardized residuals with an absolute value greater than 3.29 are cause for concern, because in an average sample a value this high is unlikely to occur
  • if more than 1% of sample cases have standardized residuals with an absolute value greater than 2.58, there is evidence that the level of error within the model may be unacceptable
  • if more than 5% of cases have standardized residuals with an absolute value greater than 1.96, the model may be a poor representation of the data
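Converting residuals to z-scores and applying these cut-offs can be sketched as follows (the residuals are illustrative numbers):

```python
residuals = [0.06, -0.13, 0.18, -0.21, 0.10]  # illustrative model residuals

n = len(residuals)
mean = sum(residuals) / n
sd = (sum((e - mean) ** 2 for e in residuals) / (n - 1)) ** 0.5

# standardized residuals: residuals expressed in standard-deviation units
z = [(e - mean) / sd for e in residuals]

# cases beyond the 1.96 cut-off are worth inspecting
extreme = [zi for zi in z if abs(zi) > 1.96]
```

For these toy residuals no case exceeds the cut-off, so there is no evidence of outliers by this criterion.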

Influential cases

There are several statistics used to assess the influence of a case.

  • adjusted predicted value
    the predicted value of the outcome for that case from a model in which the case is excluded.
    If the model was stable, then the predicted value of a case should be the same regardless of whether that case was used to estimate the model
  • Deleted residual
    the difference between the adjusted predicted value and the original observed value.
  • studentized deleted residual
    the deleted residual divided by the standard error
  • Cook’s distance
    a measure of the overall influence of a case on the model
  • the leverage
    gauges the influence of the observed value of the outcome variable over the predicted values
  • Mahalanobis distances
    measure the distance of cases from the mean(s) of the predictor variable(s)
  • statistics that look at how the b estimates in a model change as a result of excluding a case:

DFBeta: the difference between a parameter estimated using all cases and estimated when one case is excluded.
DFFit: the difference between the predicted values for a case when the model is estimated including or excluding that case.
Covariance ratio (CVR): quantifies the degree to which a case influences the variance of the regression parameters.
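The idea behind DFBeta can be sketched by refitting a simple model with one case left out and comparing the slopes. The data and the helper `fit_slope` are illustrative, not from the book:

```python
def fit_slope(x, y):
    """Minimal OLS slope estimator (illustrative helper)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

x = [1, 2, 3, 4, 5, 6]
y = [2, 4, 6, 8, 10, 30]  # the last case departs from the trend

b_all = fit_slope(x, y)                       # slope using every case
b_without_last = fit_slope(x[:-1], y[:-1])    # slope excluding the suspect case
dfbeta = b_all - b_without_last               # large value = influential case
```

Here the slope changes from 2.0 to about 4.57 when the outlying case is included, so its DFBeta is large and the case is clearly influential.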

Generalizing the model

Assumptions of the linear model

  • Additivity and linearity
    the outcome variable should be linearly related to any predictors and, with several predictors, their combined effect is best described by adding their effects together.
  • Independent errors
    for any two observations the residual terms should be uncorrelated.
    This can be tested with the Durbin-Watson test.
  • Homoscedasticity
    at each level of the predictor variable(s), the variance of the residual terms should be constant.
  • Normally distributed errors
    the differences between the predicted and observed data are most frequently zero or close to zero and differences much greater than zero happen only occasionally.
  • Predictors are uncorrelated with ‘external variables’
    External variables: variables that haven’t been included in the model and that influence the outcome variable
  • Variable types
    all predictor variables must be quantitative or categorical.
    The outcome variable must be quantitative, continuous and unbounded.
  • No perfect multicollinearity
    if your model has more than one predictor, then there should be no perfect linear relationship between two or more of the predictors.
  • Non-zero variance
    the predictors should have some variation in value (i.e., they should not have zero variance)
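The independent-errors assumption above can be checked with the Durbin-Watson statistic: the sum of squared successive differences of the residuals divided by the SSR. Values near 2 suggest uncorrelated errors; values toward 0 or 4 suggest positive or negative autocorrelation. A sketch with illustrative residuals:

```python
residuals = [0.06, -0.13, 0.18, -0.21, 0.10]  # illustrative residuals, in case order

num = sum((residuals[i] - residuals[i - 1]) ** 2
          for i in range(1, len(residuals)))   # squared successive differences
den = sum(e ** 2 for e in residuals)           # residual sum of squares

d = num / den  # Durbin-Watson statistic, bounded between 0 and 4
```

These toy residuals alternate in sign, so d comes out near 4, hinting at negative autocorrelation.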

Cross-validation of the model

Even if we can’t be confident that the model derived from our sample accurately represents the population, we can assess how well our model might predict the outcome in a different sample.
Cross-validation: assessing the accuracy of a model across different samples.
If a model can be generalized, then it must be capable of accurately predicting the same outcome variable from the same set of predictors in a different group of people.

Once we have estimated the model there are two main methods of cross-validation:

  • Adjusted R2
    Adjusted R2 tells us how much variance in Y would be accounted for if the model had been derived from the population from which the sample was taken.
    The adjusted value indicates the loss of predictive power.
  • Data splitting
    involves randomly splitting your sample data, estimating the model in both halves of the data and comparing the resulting models.
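One common adjustment (Wherry's formula, assumed here; the book also discusses Stein's formula for cross-validation) shrinks R² to account for the number of predictors relative to the sample size:

```python
def adjusted_r2(r2, n, k):
    """Wherry's adjusted R-squared: shrinks R2 by (n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# e.g. R2 = .60 from a model with 3 predictors and 50 cases
adj = adjusted_r2(0.60, n=50, k=3)
```

Here an observed R² of .60 adjusts down to roughly .57; the gap between the two values is the estimated loss of predictive power.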

Sample size and the linear model

The sample size required depends on the size of the effect we're trying to detect and how much power we want to detect these effects.
The bigger the sample size the better.

Summary

  • A linear model (regression) is a way of predicting values of one variable from another, based on a model that describes a straight line.
  • this line is the line that best summarizes the pattern of the data
  • to assess how well the model fits the data use:
    - R2, which tells us how much variance is explained by the model compared to how much variance there is to explain in the first place. It is the proportion of variance in the outcome variable that is shared by the predictor variable
    - F, which tells us how much variability the model can explain relative to how much it can’t explain.
    - the b-value, which tells us the gradient of the regression line and the strength of the relationship between a predictor and the outcome variable. If it is significant then the predictor variable significantly predicts the outcome variable.

The linear model with two or more predictors (multiple regression)

a great deal of care should be taken in selecting predictors for a model because the estimates of the regression coefficients depend upon the variables in the model.

Methods of entering predictors into the model

Having chosen predictors, you must decide the order to enter them into the model.

  • when predictors are completely uncorrelated, the order of variable entry has very little effect on the parameters estimated, but we rarely have uncorrelated predictors.
  • Other things being equal, use hierarchical regression.
    You select predictors based on past work and decide in which order to enter them in the model.
  • You should enter known predictors into the model first in order of their importance in predicting the outcome.
  • An alternative method is forced entry.
    Here you force all predictors into the model simultaneously.
  • Stepwise regression
    avoid this

Comparing models

Hierarchical methods involve adding predictors to the model in stages, and it is useful to assess the improvement to the model at each stage.
A simple way to quantify the improvement is to compare R2 for the new model to that for the old model.

Fchange = ((N – knew – 1)R2change) / (kchange(1 – R2new))

We can compare models using this F-statistic.
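A sketch of the F-change computation with toy values, where an old model with 2 predictors is extended by 1 more (the denominator uses the R² of the new, larger model):

```python
N = 100                      # sample size
k_old, k_new = 2, 3          # number of predictors in each model
r2_old, r2_new = 0.40, 0.48  # illustrative R2 values for each stage

r2_change = r2_new - r2_old  # improvement in variance explained
k_change = k_new - k_old     # number of predictors added

f_change = ((N - k_new - 1) * r2_change) / (k_change * (1 - r2_new))
```

The resulting F-change (here about 14.8, with k_change and N − k_new − 1 degrees of freedom) tells us whether adding the new predictor significantly improved the model.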

Multicollinearity

Multicollinearity exists when there is a strong correlation between two or more predictors.
Perfect collinearity: when at least one predictor is a perfect linear combination of the others.

As collinearity increases there are three problems that arise:

  • Untrustworthy bs
    As collinearity increases, so do the standard errors of the b coefficients.
    Big standard errors for b coefficients mean more variability in these bs across samples, and a greater chance of:
    - predictor equations that are unstable across samples
    - b coefficients in the sample that are unrepresentative of those in the population
  • It limits the size of R
  • Importance of predictors
    it makes it difficult to assess the individual importance of a predictor

Variance inflation factor (VIF): indicates whether a predictor has a strong linear relationship with the other predictor(s). The tolerance statistic is its reciprocal.

  • if the largest VIF is greater than 10, this indicates a serious problem
  • If the average VIF is substantially greater than 1 then the regression may be biased
  • Tolerance below 0.2 indicates a potential problem.
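In the special case of exactly two predictors, the VIF reduces to 1 / (1 − r²), where r is the correlation between the two predictors, and tolerance is its reciprocal. A sketch with illustrative predictor values:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient (illustrative helper)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

x1 = [1, 2, 3, 4, 5]
x2 = [2, 3, 5, 6, 9]  # strongly, but not perfectly, correlated with x1

r = pearson_r(x1, x2)
vif = 1 / (1 - r ** 2)  # variance inflation factor
tolerance = 1 / vif     # tolerance statistic
```

Here r ≈ .98, so the VIF exceeds 10 and the tolerance falls below 0.2: by both rules of thumb, these two predictors are too collinear to keep in the same model.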

This content is used in:

Summary of Discovering statistics using IBM SPSS statistics by Field - 5th edition

Author: SanneA