


      Scientific & Statistical Reasoning – Summary interim exam 4 (UNIVERSITY OF AMSTERDAM)

“Schmittmann et al. (2013). Deconstructing the construct: A network perspective on psychological phenomena.” - Article summary

In the reflective model, the attribute is seen as the common cause of observed scores (e.g. depression causes people to feel sad). In the formative model, observed scores define or determine the attribute (e.g. depression occurs when people feel sad a lot).

Reflective models are presented as measurement models. A latent variable is introduced to account for the covariance between the observed variables. In the reflective model, variables are regarded as exchangeable save for measurement parameters (e.g. reliability), and the correlations between the variables are spurious: they exist only because the variables share the latent variable as a common cause.

      Formative models differ from reflective models because the variables are not exchangeable. This is because variables are hypothesised to capture different aspects of the same construct. There is also no assumption about whether the variables should correlate.

      There are three problems with the conceptualization of reflective and formative models:

1. Time
  In reflective and formative models, time is not explicitly represented. The precedence criterion for causal relationships (causes precede their effects) is not taken into account.
      2. Inability to articulate processes
        The processes of causal mechanisms cannot be described and tested using these models.
3. Relations between observables
  These models do not account for causal relationships between observable variables, although it is likely that at least some observable variables are causally related.

      The network model states that observable variables of latent variables should be seen as autonomous causal entities in a network of dynamical systems.

Borsboom & Cramer (2013). Network analysis: An integrative approach to the structure of psychopathology.

      The disease model states that problems are symptoms of a small set of underlying disorders. This explains observable clinical symptoms by a small set of latent variables (e.g. depression). A network is a set of elements (nodes) connected through a set of relations. In network models, disorders are conceptualized as systems of causally connected symptoms rather than effects of a latent disorder.

Mental disorders cannot be identified independently of their symptoms. In medicine, the medical condition can be separated from the symptoms; in psychology, this is not possible. For such a separation, it must be possible for a person to have the disorder without the symptoms, and for mental disorders this fails (e.g. depression without feeling down is not possible). In mental disorders, it is likely that there is symptom-symptom causation: one symptom causes another symptom, and this leads to a mental disorder.

      With network systems, it might be unclear where one disorder starts and another stops. The boundaries between disorders become unclear. Network models might change treatment, as the treatment is then no longer aimed at the disorder but rather at the symptoms and the causal relationship between the symptoms.

Networks in psychopathology can be created by using data on symptom endorsement frequencies (e.g. looking at correlations between symptoms) (1), assessing the relationships between symptoms as rated by clinicians and patients (2) and using the information in diagnostic systems (3).

In networks, any node can reach any other node in only a few steps. This is called the small world property. The DSM attempts to be theoretically neutral, but makes claims about causal relationships between disorders.

Asking experts how nodes are related (e.g. asking clinicians about the symptoms of a disorder) is called perceived causal relations scaling.

      Extended psychopathology systems refers to network systems in which the network is not isolated in a single individual but spans across multiple individuals. This would mean that one symptom in one person could cause a symptom in another person. These networks can be used to review what the interaction is between symptoms of different people in different social situations.

Association networks show the strength of the correlations between symptoms. This gives an indication of different disorders, as the symptoms in disorder A correlate more strongly with the other symptoms in disorder A than with the symptoms of disorder B.

      A partial correlation network, also called a concentration network, shows the partial correlations between symptoms. This can be used to be a bit more certain about the causal relationship of two nodes as it rules out some third-variable explanations. Concentration graphs can be used to assess which pathways between symptoms appear common in a disorder.
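As a concrete illustration, a concentration network can be computed from the inverse of the covariance matrix (the precision matrix). The sketch below is a minimal Python example on simulated data; the function and variable names are illustrative, not from the article.

    import numpy as np

    def partial_correlation_network(data):
        """Partial correlations from the precision (inverse covariance) matrix."""
        precision = np.linalg.inv(np.cov(data, rowvar=False))
        d = np.sqrt(np.diag(precision))
        # pcor_ij = -precision_ij / sqrt(precision_ii * precision_jj)
        pcor = -precision / np.outer(d, d)
        np.fill_diagonal(pcor, 1.0)  # set the diagonal to 1 by convention
        return pcor

    # Example: 200 simulated cases measured on 5 symptoms
    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 5))
    print(partial_correlation_network(data).round(2))

Each off-diagonal entry is the correlation between two symptoms after controlling for all other symptoms in the network, which is what rules out some third-variable explanations.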

Association and concentration graphs provide information about relationships between nodes, but they do not provide information about the causal direction within the network. Directed networks give information about the direction of the relationships between nodes; this is usually represented in a DAG. In order to generate statements …

Borsboom et al. (2016). Kinds versus continua: a review of psychometric approaches to uncover the structure of psychiatric constructs.

The danger of using a dichotomous system for mental disorders is that people who require treatment are not treated, or that people who do not require treatment are treated. It is unclear where the boundary between disorder and no disorder lies, and this hampers science and research as a whole.

Equivalence classes are sets of individuals who are exchangeable with respect to the attribute of interest. Measurement starts with categorization. The continuity hypothesis states that between any two positions lies a third that can be empirically confirmed (1) and that there are no gaps in the continuum (2).

In a continuous interpretation, the distinction between people who have a disorder and people who do not depends on the imposition of a cut-off score that does not reflect a gap in the attribute itself (e.g. the difference between average height and being tall). However, there is no way of measuring how depressed someone is (i.e. there is no scale).

Local independence states that, given a specific level of a latent variable, the observed variables are uncorrelated (e.g. guilt and suicide ideation are uncorrelated in healthy individuals).

The form of the latent structure can be assessed by inspecting particular consequences of the model for specific statistical properties of items (1) and by global fit measures that allow one to compare whether a model with a categorical latent structure fits the observed data better than a model with a continuous latent structure (2).

      Taxometrics refers to inspecting particular consequences of the model for specific statistical properties of items. If an underlying construct is continuous, then the covariance between any two observed variables should be the same regardless of the exact range. This analysis can be done by choosing a variable and denoting it as the index variable. In other words, the covariance between A and B should be the same on different levels of index variable C if the index variable is continuous. If it is categorical, then the covariance between A and B should differ on different levels of the index variable and be 0 at the ‘no disorder’ level of the index variable.
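A minimal sketch of this covariance check, assuming three observed variables where C is chosen as the index variable (the data, names and binning are invented for illustration):

    import numpy as np

    def cov_by_index_level(a, b, c, n_bins=4):
        """Covariance of A and B within successive ranges of index variable C."""
        a, b, c = map(np.asarray, (a, b, c))
        edges = np.quantile(c, np.linspace(0, 1, n_bins + 1))
        covs = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (c >= lo) & (c <= hi)
            covs.append(np.cov(a[mask], b[mask])[0, 1])
        # Roughly constant covariances suggest a continuous construct;
        # covariances that vary (and are ~0 in the 'no disorder' range) suggest categories.
        return covs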

      ALTERNATIVE LATENT VARIABLE MODELS
Factor mixture models subdivide the population into different categories, with a continuous scale within each category. Each category is characterized by its own common factor model, so it is a multi-group common factor model in which group membership is unknown: a latent class variable takes the place of an observed grouping variable.

      Grade of membership (GoM) models can integrate continuous features. This continuous variation concerns group membership. This model allows individuals to be members of multiple classes at the same time but to different degrees. This model is useful if there is no clear distinction between classes.

In a network model, nodes are causally related to each other, and this network …

“Eaton et al. (2014). Toward a model-based approach to the clinical assessment of personality psychopathology.” – Article summary

Types refer to categories and traits refer to dimensions. In order to determine where an individual falls on a trait, the measure needs to cover the full range of the trait dimension. There are several models:

1. Latent trait model
  This model assumes one or more underlying continuous distributions. There are no locations across the continuum that are unoccupied. The dimensional scores of this model can be converted to percentiles in order to facilitate interpretation.
2. Latent class model
  This model assumes a latent group (class) structure for the distribution. There are a finite number of latent classes. They are mutually exclusive and nominal. It assumes conditional independence.
3. Hybrid model (factor mixture model)
  This model combines the continuous aspect of the latent trait model with the discrete aspects of the latent class model. It assumes that there are classes, but that there are individual differences within the classes: the distribution within a class is continuous.

      Discrimination is a measure of how strongly the item taps into the latent trait. Conditional independence states that interitem correlations solely reflect class membership.

“Dienes (2008). Understanding psychology as a science.” – Article summary

A falsifier of a theory is any potential observation statement that would contradict the theory. There are different degrees of falsifiability: some theories require fewer data points to be falsified than others, which is why simpler theories should be preferred. The greater the universality of a theory, the more falsifiable it is.

      A computational model is a computer simulation of a subject. It has free parameters, numbers that have to be set (e.g. number of neurons used in a computational model of neurons). When using computational models, more than one model will be able to fit the actual data. However, the most falsifiable model that has not been falsified by the data (fits the data) should be used.

A theory should only be revised or changed to make it more falsifiable; making it less falsifiable is ad hoc. Any revision or amendment to the theory should also be falsifiable. …

Standard statistics are useful in determining objective probabilities: long-run relative frequencies. These do not, however, give the probability of a hypothesis being correct.

      Subjective probability refers to the subjective degree of conviction in a hypothesis. The subjective probability is based on a person’s state of mind. Subjective probabilities need to follow the axioms of probability.

Bayes’ theorem is a method of getting from one conditional probability (e.g. P(A|B)) to its inverse. The subjective probability of a hypothesis is called the prior. The posterior is how probable the hypothesis is to you after data collection. The probability of obtaining the data given the hypothesis is called the likelihood (e.g. P(D|H)). The posterior is proportional to the likelihood times the prior. Bayesian statistics is updating one’s personal conviction in light of new data.
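A minimal numerical sketch of this updating rule in Python (the two hypotheses and all numbers are invented for illustration):

    # Posterior is proportional to likelihood times prior.
    priors = {"H1": 0.5, "H2": 0.5}        # convictions before seeing the data
    likelihoods = {"H1": 0.8, "H2": 0.2}   # P(D|H) for each hypothesis

    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    posteriors = {h: p / total for h, p in unnormalized.items()}
    print(posteriors)  # {'H1': 0.8, 'H2': 0.2} — convictions after the data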

The likelihood principle states that all the information relevant to inference contained in data is provided by the likelihood. A hypothesis having the highest likelihood does not mean that it has the highest probability; it means that the data support that hypothesis the most. The posterior probability does not rely on the likelihood alone, since the prior matters as well.

The probability distribution of a continuous variable is called a probability density distribution. It has this name because a continuous variable has infinitely many possible values, so the distribution gives the probability of intervals rather than of single points.

      A likelihood could be a probability or a probability density and it can also be proportional to a probability or a probability density. Likelihoods provide a continuous graded measure of support for different hypotheses.

In Bayesian statistics (likelihood analysis), the data are fixed but the hypothesis can vary. In significance testing, the hypothesis is fixed (the null hypothesis) but the data can vary. The height of the curve of the distribution for each hypothesis is relevant in calculating the likelihood. In significance testing, the tail area of …

“Dienes (2011). Bayesian versus orthodox statistics: Which side are you on?” – Article summary

      Probabilities are long-run relative frequencies for the collective, rather than an individual. Probabilities do not apply to theories, as individual theories are not collectives. Therefore, the null hypothesis cannot be assigned a probability. A p-value does not indicate the probability of the null hypothesis being true.

Power or a p-value is not necessary in Bayesian statistics, as a degree of plausibility can be assigned to theories and the data tell us how to adjust these plausibilities. All that is needed is a factor by which we should change the relative probabilities of different theories given the data.

The probability of a hypothesis being true is the prior probability (P(H)). The probability of a hypothesis given the data is the posterior probability (P(H|D)). The probability of obtaining the exact data given the hypothesis is the likelihood (P(D|H)). The posterior probability is proportional to the likelihood times the prior probability.

The likelihood principle states that all information relevant to inference contained in data is provided by the likelihood. In a distribution, the p-value is the area under the curve beyond a certain point; the likelihood is the height of the distribution at a certain point.

      The p-value is influenced by the stopping rule (1), whether or not the test is post-hoc (2) and how many other tests have been conducted (3). These things do not influence the likelihood.

      The Bayes factor is the ratio of the likelihoods. The Bayes factor is driven to 0 if the null hypothesis is true, whereas the p-values fluctuate randomly if the null hypothesis is true and data-collection continues. The Bayes factor is slowly driven towards the ‘truth’. Therefore, the Bayes factor gives a notion of sensitivity. It distinguishes evidence that there is no relevant effect from no evidence of a relevant effect. It can be used to determine the practical significance of an effect.
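As a minimal sketch, the Bayes factor for two point hypotheses is simply the ratio of their likelihoods; Dienes's actual calculations place a (uniform, normal or half-normal) prior over effect sizes, so the numbers below are purely illustrative.

    from statistics import NormalDist

    mean_obs, se = 4.0, 2.0                            # observed sample mean and its standard error
    likelihood_h1 = NormalDist(5.0, se).pdf(mean_obs)  # theory: effect of 5
    likelihood_h0 = NormalDist(0.0, se).pdf(mean_obs)  # null: no effect
    bayes_factor = likelihood_h1 / likelihood_h0
    print(bayes_factor)  # >1 favours the theory, <1 favours the null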

      Adjusting conclusions according to when the hypothesis was thought of would introduce irrelevancies in inference and therefore, the timing of the hypothesis is irrelevant in Bayesian statistics. In assessing evidence for or against a theory, all relevant evidence should be taken into account and the evidence should not be cherry picked.

      Rationality refers to having sufficient justification for one’s beliefs. Critical rationalism is a matter of having one’s beliefs subjected to critical scrutiny. Irrational beliefs are beliefs not subjected to sufficient criticism.

A theory's predictions can be represented as a uniform (1), normal (2) or half-normal (3) distribution. In a uniform distribution, all values in a range are equally likely. In a normal distribution, one value is most likely given the theory. A half-normal distribution is a normal distribution centred on zero with only one tail: it predicts an effect in one direction, with smaller effects more likely than larger effects.

      There are several weaknesses of the Bayesian approach:

      1. Bayesian analyses force people to specify predictions in detail
2. Bayesian analyses do not …
“Coyle (2015). Introduction to qualitative psychological research.” – Article summary

      Qualitative research refers to the collection and analysis of non-numerical data through a psychological lens in order to provide rich descriptions and possibly explanations of people’s meaning-making, how they make sense of the world and how they experience particular events.

      Epistemology refers to the theory of knowledge regarding what we can know and how we can know. Ontology refers to the assumptions made about the nature of being, existence or reality. Different research approaches are associated with different epistemologies.

Positivism holds that there is a direct correspondence between states of the world and our perceptions of it through our senses, provided that our perception is not skewed by factors that could damage that correspondence (e.g. interest in a topic). Empiricism states that our knowledge must arise from the collection and categorization of our sense perceptions of the world. Hypothetico-deductivism states that theories should be exposed to attempts at falsification, rather than attempts at verification.

The classical scientific method assumes that reality exists independently of the observer and that this reality can be observed through research. It also assumes that any existing psychological dimension can be measured with precision.

‘Small q’ qualitative research is a structured form of content analysis, which categorizes and quantifies qualitative data systematically. ‘Big Q’ qualitative research refers to the use of qualitative techniques within a qualitative paradigm, which rejects notions of objective reality or universal truth.

      Nomothetic research seeks generalizable findings that uncover laws to explain objective phenomena and idiographic research seeks to examine individual cases in detail to understand an outcome. Phenomenological methods focus on obtaining detailed descriptions of experience as understood by those who have that experience in order to discern its essence.

Critical realism states that reality exists independent of the observer, although we cannot know that reality with certainty. Social constructionism takes a critical stance towards assumptions about the world: it states that the ways we understand the world and ourselves are built up through social processes, and that these understandings are not fixed. Relativism states that reality is dependent on the ways we come to know it.

      Reflexivity refers to the acknowledgement by the researcher of the role played by their interpretative framework in creating their analytic account.

      Sensitivity to context refers to whether the context of the theory is made clear. Commitment refers to prolonged engagement with the research topic. Rigour refers to the completeness of the data collection and analysis. Coherence refers to the quality of the research narrative and the fit between the research question and the adopted philosophical perspective. Impact and importance refers to the theoretical, practical and socio-cultural impact of the study.

There are several evaluative criteria for qualitative research …

“Gigerenzer & Marewski (2015). Surrogate science: The idol of a universal method for scientific inference.” - Article summary

      Good science requires statistical tools and informed judgement about what model to construct, what hypotheses to test and what tools to use.

There is no universal method of scientific inference but, rather, a toolbox of useful scientific methods. Besides that, the danger of Bayesian statistics is that it will become the new universal method of statistics. Lastly, statistical methods are not simply applied to a discipline; they change the discipline itself.

In the natural sciences, the probabilistic revolution shaped theorizing. In the social sciences, it led to the mechanization of scientists' inferences. The inference revolution refers to inference from sample to population coming to be considered the most important part of research. This revolution led to a dismissive attitude towards replication.

      There are three meanings of significance:

      1. Mere convention
        This means that it is convenient for researchers to use 5% as a standard level of significance.
      2. Alpha level
        This means that significance refers to the long-term relative frequency of making a type-I error.
3. Exact level of significance
  This is the exact level of significance computed from the data; it is used in null hypothesis testing against a ‘nil’ hypothesis, i.e. a null hypothesis of zero difference.

      There are three interpretations of probability:

      1. A relative frequency
        This is a long-term relative frequency.
2. Propensity
  This is the physical design of an object (e.g. a die).
3. Reasonable degree of subjective belief
  This is the degree to which an individual believes in something.

      Bayesian statistics should not be used in an automatic way, like frequentism. Objections to the use of Bayes rule are that frequency-based prior probabilities do not exist (1), that the set of hypotheses needed for the prior probability distribution is not known (2) and that researchers’ introspection does not confirm the calculation of probabilities (3).

Fishing expeditions refers to treating hypothesis finding as if it were hypothesis testing, typically recognisable by a large number of p-values in a research article.

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Book summary

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 1

      The research process generally starts with an observation. After the observation, relevant theories are consulted and hypotheses are generated, from which predictions are made. After that, data is collected to test the predictions and finally the data is analysed. The data analysis either supports or does not support the hypothesis. A theory is an explanation or set of principles that is well substantiated by repeated testing and explains a broad phenomenon. A theory should be able to explain all of the data. A hypothesis is a proposed explanation for a fairly narrow phenomenon or set of observations. Hypotheses are theory-driven. Predictions are often used to move from the conceptual domain to the observable domain to be able to collect evidence. Falsification is the act of disproving a hypothesis or theory. A scientific theory should be falsifiable and explain as much of the data as possible.

      DATA
Variables are things that can vary. An independent variable is a variable thought to be the cause of some effect, and is usually manipulated in research. A dependent variable is a variable thought to be affected by changes in an independent variable. The predictor variable is a variable thought to predict an outcome variable (cf. independent variable). The outcome variable is a variable thought to change as a function of changes in a predictor variable (cf. dependent variable). The difference is that independent and dependent variables belong to experimental research, whereas predictor and outcome variables apply to both experimental and correlational research.

      The level of measurement is the relationship between what is being measured and the numbers that represent what is being measured. A categorical variable is made up of categories. There are three types of categorical variables:

      1. Binary variable
        A categorical variable with two options (e.g. ‘yes’ or ‘no’).
      2. Nominal variable
        A categorical variable with more than two options (e.g. hair colour).
3. Ordinal variable
  A categorical variable whose categories have a meaningful order (e.g. winner and runner-up).

      Nominal data can be used when considering frequencies. Ordinal data does not tell us anything about the difference between points on a scale. A continuous variable is a variable that gives us a score for each person and can take on any value. An interval variable is a continuous variable with equal differences between the intervals (e.g. the difference between a ‘9’ and a ‘10’ on a grade). Ratio variables are continuous variables in which the ratio has meaning (e.g. a rating of ‘4’ is twice as good as a rating of ‘2’). Ratio variables require a meaningful zero point. A discrete variable is a variable that can take on only certain values.

Measurement error is the discrepancy between the numbers we use to represent the thing we're measuring and the actual value of that thing. Self-report produces larger measurement error. Validity is whether an instrument measures what it sets out to measure. Reliability is whether an instrument …

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 2

      Many statistical models try to predict an outcome from one or more predictor variables. Statistics includes five things: Standard error (S), parameters (P), interval estimates (I), null hypothesis significance testing (N) and estimation (E), together making SPINE. Statistics often uses linear models, as this simplifies reality in an understandable way.

All statistical models are based on the same general equation:

    \text{outcome}_i = (\text{model}) + \text{error}_i

The data we observe can be predicted from the model we choose to fit plus some amount of error. The model will vary depending on the study design. The bigger a sample is, the more likely it is to reflect the whole population.

      PARAMETER
A parameter is a value of the population. A statistic is a value of the sample. Parameters are denoted by ‘b’. With two predictors, the outcome of a model uses the following formula:

    \text{outcome}_i = b_0 + b_1 X_{1i} + b_2 X_{2i} + \varepsilon_i

In this formula, the ‘b’s denote the parameters and the ‘X’s denote the predictor variables. Degrees of freedom relate to the number of observations that are free to vary. If one parameter is held constant, the degrees of freedom must be one fewer than the number of scores used to calculate that parameter, because the last score is not free to vary.

      ESTIMATION
      The method of least squares is minimizing the sum of squared errors. The smaller the error, the better your estimate is. When estimating a parameter, we try to minimize the error in order to have a better estimate.
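A minimal sketch of least squares estimation on simulated data (Python; all names and numbers are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=100)
    y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=100)  # true parameters: b0 = 2, b1 = 0.5

    # Design matrix with an intercept column; lstsq minimizes the sum of squared errors
    X = np.column_stack([np.ones_like(x), x])
    b, sse, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(b)    # estimated parameters, close to [2.0, 0.5]
    print(sse)  # the minimized sum of squared errors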

      STANDARD ERROR
The standard deviation tells us how well the mean represents the sample data. The difference in means across samples is called sampling variation. Samples vary because they include different members of the population. A sampling distribution is the frequency distribution of sample means from the same population. The mean of the sampling distribution is equal to the population mean. The standard deviation of the sampling distribution, the standard error of the mean (SE), tells us how widely sample means are spread around the population mean. It can be calculated by taking the difference between each sample mean and the overall mean, squaring these differences, adding them up, dividing by the number of samples and taking the square root. In practice, it is estimated from a single sample using the following formula:

    SE = \frac{s}{\sqrt{N}}

The central limit theorem states that when samples get large (>30), the sampling distribution has a normal distribution with a mean equal to the population mean and the following standard deviation:

    \sigma_{\bar{X}} = \frac{\sigma}{\sqrt{N}}

      If the sample is small (<30), the sampling distribution has a t-distribution shape.
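A small simulation illustrating both claims (a sketch; the skewed population and all numbers are invented):

    import numpy as np

    rng = np.random.default_rng(2)
    population = rng.exponential(scale=1.0, size=1_000_000)  # clearly non-normal population

    n = 50  # sample size above 30, so the CLT applies
    sample_means = [rng.choice(population, size=n).mean() for _ in range(5_000)]

    print(np.mean(sample_means))          # close to the population mean (1.0)
    print(np.std(sample_means))           # close to the standard error below
    print(population.std() / np.sqrt(n))  # sigma / sqrt(N)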

      INTERVAL ESTIMATES
It is not possible to know the parameters, thus confidence intervals are used. Confidence intervals …

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 3

      There are three main misconceptions of statistical significance:

      1. A significant result means that the effect is important
        Statistical significance is not the same as practical significance.
      2. A non-significant result means that the null hypothesis is true
        Rejecting the alternative hypothesis does not mean we accept the null hypothesis.
3. A significant result means that the null hypothesis is false
  Rejecting the null hypothesis in favour of the alternative hypothesis does not prove that the null hypothesis is false: rejection is based on probability, and there remains some probability that the null hypothesis is true.

      The use of NHST encourages ‘all-or-nothing’ thinking. A result is either significant or not. If a confidence interval contains zero, it could be that the population effect might be zero.

An empirical probability is the proportion of events that have the outcome in which you're interested, in an indefinitely large collective of events. The p-value is the probability of getting a test statistic at least as large as the one observed, under the null hypothesis, across an infinite number of identical replications of the experiment: the frequency of the observed test statistic relative to all possible values that could be observed in the collective of identical experiments. The p-value is affected by the intentions of the researcher, because p-values are defined relative to this collective of identical experiments, and intentions such as the sample size and when data collection stops determine what that collective is.

      In journals, based on NHST, there is a publication bias. Significant results are more likely to get published. Researcher degrees of freedom are ways in which the researcher could influence the p-value. This could be used to make it more likely to find a significant result (e.g. by excluding some cases to make the result significant). Researcher degrees of freedom could include not using some observations and not publishing key findings.

      P-hacking refers to selective reporting of significant p-values by trying multiple analyses and reporting only the significant ones. HARKing refers to making a hypothesis after data collection and presenting it as if it was made before data collection. P-hacking and HARKing makes results difficult to replicate. Tests of excess success (e.g. looking at multiple studies studying the same and calculating the probability of them all having success) are used to see whether it is likely that p-hacking or something else may have occurred.

      EMBERS
      There is an abbreviation for how to tackle the problems of NHST: Effect sizes (E), Meta-analysis (M), Bayesian Estimation (BE), Registration (R) and Sense (S), together making EMBERS.

      SENSE
      There are six principles for when using NHST in order to use your sense:

      1. The exact p-value can indicate how incompatible the data are with the null hypothesis.
      2. P-values are not interpreted as the probability that the hypothesis is true.
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 5

      A good graph has the following properties:

1. Shows the data
2. Induces the reader to think about the presented data
3. Avoids distorting the data
4. Presents many numbers with minimum ink
5. Makes large data sets coherent
6. Encourages the reader to compare different pieces of data
7. Reveals the underlying message of the data

      There are also some graph building guidelines:

      1. If plotting two variables, never use 3-D plots
      2. Do not use unnecessary patterns in the bars
3. Do not use cylinder-shaped bars if that is not functional
      4. Properly label the x- and y-axis.

      HISTOGRAMS
      There are different types of histograms:

1. Simple histogram
  Visualizes frequencies of scores for a single variable.
2. Stacked histogram
  Compares relative frequencies of scores across groups.
3. Frequency polygon
  The same as a simple histogram, but uses a line instead of bars.
4. Population pyramid
  Compares distributions across groups by showing the relative frequencies of scores in two populations.

      BOXPLOTS
A box-plot or box-whisker diagram uses the median as the centre of the plot. It is surrounded by the lower and upper quartiles, which mark the 25th and 75th percentiles of the data. There are several types of boxplots:

      1. 1-D boxplot
        A single boxplot for all scores of the chosen outcome
      2. Simple boxplot
        Multiple boxplots for the chosen outcome by splitting the data by a categorical variable
      3. Clustered boxplot
        A simple boxplot, but it splits the data by a second categorical variable.

      BAR CHARTS
      Bar charts are often used to display means. There are different types of bars:

      1. Simple bar
        The means of scores across different groups or categories.
      2. Clustered bar
  Different coloured bars represent levels of a second grouping variable (e.g. bars for excitement and enjoyment within each film rating)
      3. Stacked bar
        Clustered bar, but the bars are stacked.
      4. Simple 3-D bar
        Second grouping variable is represented by an additional axis
      5. Clustered 3-D bar
        A clustered bar, but an extra categorical variable can be added on an extra axis
      6. Stacked 3-D bar
        A 3-D clustered bar, but the bars are stacked
7. Simple error bar
  A simple bar chart, but with a dot and an error bar instead of a bar
8. Clustered error bar
  A clustered bar chart, but with a dot and an error bar for each group

      LINE CHARTS
      Line charts are bar charts but with lines instead of bars. There are two types of line charts:

      1. Simple line
        The means of scores across different groups of cases
      2. Multiple line
        This is equivalent to the clustered bar chart.

      SCATTERPLOTS
      A scatterplot is a graph that plots each person’s score on one variable against their score on another. There are several types of scatterplots:

      1. Simple scatter
  A scatterplot of …
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 6

      Bias can be detrimental for the parameter estimates (1), standard errors and confidence intervals (2) and the test statistics and p-values (3). Outliers and violations of assumptions are forms of bias.

      An outlier is a score very different from the rest of the data. They bias parameter estimates and have an impact on the error associated with that estimate. Outliers have a strong effect on the sum of squared errors and this biases the standard deviation.

      There are several assumptions of the linear model:

      1. Additivity and linearity
        The scores on the outcome variable are linearly related to any predictors. If there are multiple predictors, their combined effect is best described by adding them together.
2. Normality
  Parameter estimates are influenced by violations of normality; the residuals of the model should be normally distributed. What matters is normality at each level of the predictor variable. Normality is also important for confidence intervals and for null hypothesis significance testing.
3. Homoscedasticity / homogeneity of variance
  This impacts the parameters and null hypothesis significance testing. It means that the variance of the outcome variable should not change across levels of the predictor variable. Violation of this assumption leads to bias in the standard error.
      4. Independence
        This assumption means that the errors in the model are not related to each other. The data has to be independent.

      The assumption of normality is mainly relevant in small samples. Outliers can be spotted using graphs (e.g. histograms or boxplots). Z-scores can also be used to find outliers.

      The P-P plot can be used to look for normality of a distribution. It is the expected z-score of a score against the actual z-score. If the expected z-scores overlap with the actual z-scores, the data will be normally distributed. The Q-Q plot is like the P-P plot but it plots the quantiles of the data instead of every individual score.

Kurtosis and skewness are two measures of the shape of the distribution. Positive values of skewness indicate a pile-up of scores on the left side of the distribution. Negative values of skewness indicate a pile-up of scores on the right side of the distribution. The further the value is from zero, the more likely it is that the data are not normally distributed.

Normality can be checked by looking at the z-scores of the skewness and kurtosis, calculated with the following formulas:

    z_{\text{skewness}} = \frac{S - 0}{SE_{\text{skewness}}} \qquad z_{\text{kurtosis}} = \frac{K - 0}{SE_{\text{kurtosis}}}

      Levene’s test is a one-way ANOVA on the deviation scores. The homogeneity of variance can be tested using Levene’s test or by evaluating a plot of the standardized predicted values against the standardized residuals.

      REDUCING BIAS
      There are four ways of correcting problems with the data:

      1. Trim the data
  Delete a …
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 7

Non-parametric tests can be used when the assumptions of the regular statistical tests have been violated. Non-parametric tests use fewer assumptions and are robust. A non-parametric test has less power than its parametric equivalent if the sampling distribution is normally distributed.

Ranking the data refers to giving the lowest score the rank of 1, the next highest score a rank of 2, and so on. This eliminates the effect of outliers, but it neglects differences in magnitude between the scores. If two scores are the same, there are tied ranks: tied scores each receive the average of the ranks they would have occupied (e.g. ranks 3 and 4 both become 3.5).
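This is exactly what scipy's rankdata does by default, so a quick check of the tied-ranks rule (the scores are invented):

    from scipy.stats import rankdata

    scores = [3, 5, 5, 8, 9]
    print(rankdata(scores))  # [1.  2.5 2.5 4.  5. ] — the tied 5s share ranks 2 and 3, so both get 2.5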

      There are several alternatives to the four most used non-parametric tests:

      1. Kolmogorov-Smirnov Z
        It tests whether two groups have been drawn from the same population. It has more power than the Mann-Whitney test when the sample sizes are less than 25 per group.
2. Moses Extreme Reaction
  It tests the variability of scores across the two groups and is a non-parametric form of Levene's test.
      3. Wald-Wolfowitz runs
        It looks at clusters of scores in order to determine whether the groups differ. If there is no difference, the ranks should be randomly interspersed.
4. Sign test
  It does the same as the Wilcoxon signed-rank test but is based only on the direction of the difference; the magnitude of change is neglected. It lacks power unless the sample size is really small.
      5. McNemar’s test
        It uses nominal, rather than ordinal data. It is useful when looking for changes in people’s scores. It compares the number of people who changed their response in one direction to those who changed in the opposite direction.
      6. Marginal homogeneity
        It is an extension of McNemar’s test and is similar to the Wilcoxon test.
7. Friedman’s 2-way ANOVA by ranks (k samples)
  It is a non-parametric ANOVA for k related samples; when used to compare two groups it has low power compared to the Wilcoxon signed-rank test.
      8. Median test
        It assesses whether samples are drawn from a population with the same median.
9. Jonckheere-Terpstra
  It tests for trends in the data: an ordered pattern of the medians of the groups. It does the same as the Kruskal-Wallis test but incorporates the order of the groups. This test should be used when a meaningful order of medians is expected.
      10. Kendall’s W
        It tests the agreement between raters and ranges between 0 and 1.
      11. Cochran’s Q
        It is a Friedman test on dichotomous data.

The effect size for both the Wilcoxon rank-sum test and the Mann-Whitney test can be calculated using the following formula:

    r = \frac{z}{\sqrt{N}}

in which z is the z-score of the test statistic and N denotes the total sample size.
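A sketch of this conversion in Python, using the normal approximation for the Mann-Whitney U statistic (no tie correction; the group scores are invented):

    import numpy as np
    from scipy.stats import mannwhitneyu

    group1 = [12, 15, 14, 10, 18, 17]
    group2 = [9, 11, 8, 13, 10, 7]

    u, p = mannwhitneyu(group1, group2)
    n1, n2 = len(group1), len(group2)

    # z from the normal approximation of U, then r = z / sqrt(N)
    z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    r = z / np.sqrt(n1 + n2)
    print(round(r, 2))  # |r| of roughly .1 is small, .3 medium, .5 large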

      WILCOXON RANK-SUM TEST
This test can be used to …

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 8

Variance of a single variable represents the average amount that the data vary from the mean. The cross-product deviation multiplies the deviation for one variable by the corresponding deviation for the second variable. The average value of the cross-product deviations is the covariance, an averaged sum of combined deviations. It uses the following formula:

    \text{cov}(x, y) = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{N - 1}

      A positive covariance indicates that if one variable deviates from the mean, the other variable deviates in the same direction. A negative covariance indicates that if one variable deviates from the mean, the other variable deviates in the opposite direction.

Covariance is not standardized and depends on the scale of measurement. The standardized covariance is the correlation coefficient, calculated using the following formula:

    r = \frac{\text{cov}(x, y)}{s_x s_y}

A correlation coefficient of ±0.1 represents a small effect, ±0.3 a medium effect and ±0.5 a large effect.

In order to test the null hypothesis that the correlation is zero, z-scores can be used. To use z-scores the distribution must be normal, but the sampling distribution of r is not normal. The following formula adjusts r to make its sampling distribution normal:

    z_r = \frac{1}{2}\ln\left(\frac{1 + r}{1 - r}\right)

The standard error uses the following formula:

    SE_{z_r} = \frac{1}{\sqrt{N - 3}}

This leads to the following formula for z:

    z = \frac{z_r}{SE_{z_r}}

The null hypothesis of correlations can also be tested using the t-score with degrees of freedom N − 2:

    t_r = \frac{r\sqrt{N - 2}}{\sqrt{1 - r^2}}

The confidence interval for the correlation uses the same formula as the other confidence intervals, applied to z_r. The resulting values have to be converted back to a correlation coefficient using the following formula:

    r = \frac{e^{2 z_r} - 1}{e^{2 z_r} + 1}
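A sketch tying these steps together in Python (simulated data; two-sided test; 1.96 is the critical z for a 95% confidence interval):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    x = rng.normal(size=40)
    y = 0.4 * x + rng.normal(size=40)
    n = len(x)

    r = np.corrcoef(x, y)[0, 1]
    z_r = 0.5 * np.log((1 + r) / (1 - r))  # Fisher's z transform
    se = 1 / np.sqrt(n - 3)
    p = 2 * norm.sf(abs(z_r / se))         # two-sided p-value

    lo, hi = z_r - 1.96 * se, z_r + 1.96 * se  # 95% CI in z-space
    ci = (np.tanh(lo), np.tanh(hi))            # tanh(z) = (e^{2z} - 1)/(e^{2z} + 1)
    print(round(r, 2), round(p, 4), ci)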

      CORRELATION
      Normality in correlation is only important if the sample size is small (1), there is significance testing (2) or there is a confidence interval (3). The assumptions of correlation are normality (1) and linearity (2).

The correlation coefficient squared (R²) is a measure of the amount of variability in one variable that is shared by the other. Spearman's correlation coefficient (rs) is a non-parametric statistic that is used to minimize the effects of extreme scores or of violations of the assumptions; it works by first ranking the data. Kendall's tau, denoted by τ, is a non-parametric statistic that is used when the data set is small with a large number of tied ranks.

A biserial or point-biserial correlation is used to investigate a relationship between two variables when one of the two variables is dichotomous (e.g. yes …

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 9

Any straight line can be defined by the slope (1) and the point at which the line crosses the vertical axis of the graph (the intercept) (2). The general formula for the linear model is the following:

    Y_i = b_0 + b_1 X_i + \varepsilon_i

      Regression analysis refers to fitting a linear model to data and using it to predict values of an outcome variable (dependent variable) from one or more predictor variables (independent variables). The residuals are the differences between what the model predicts and the actual outcome. The residual sum of squares is used to assess the ‘goodness-of-fit’ of the model on the data. The smaller the residual sum of squares, the better the fit.

Ordinary least squares regression refers to finding the regression model for which the sum of squared errors is the minimum it can be given the data. The total sum of squares is the sum of squared differences between the observed values and the mean, and represents how good the mean is as a model of the observed outcome scores. The model sum of squares represents how well the model can predict the data: the larger the model sum of squares, the better the model predicts. The residual sum of squares uses the differences between the observed data and the model's predictions and shows how much of the data the model cannot explain.

The proportion of improvement due to the model compared to using the mean as a predictor can be calculated using the following formula:

R² = SS_M / SS_T

This value represents the amount of variance in the outcome explained by the model relative to how much variation there was to explain. The F-statistic can be calculated using the following formulas:

MS_M = SS_M / k,  MS_R = SS_R / (N − k − 1),  F = MS_M / MS_R

Here ‘k’ denotes the number of predictors, which is also the model degrees of freedom; the residual degrees of freedom are N − k − 1.

The F-statistic can also be used to test the significance of R², with the null hypothesis being that R² is zero. It uses the following formula:

F = ((N − k − 1) · R²) / (k · (1 − R²))

      Individual predictors can be tested using the t-statistic.
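A minimal sketch of ordinary least squares with a single predictor, computing the sums of squares, R² and F by hand (all values below are made up):

import numpy as np

x = np.array([1., 2., 3., 4., 5., 6., 7., 8.])
y = np.array([2.1, 3.9, 6.2, 7.8, 9.1, 12.3, 13.8, 16.1])
N, k = len(y), 1                      # k = number of predictors

# Least squares estimates of the intercept b0 and slope b1
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

ss_t = np.sum((y - y.mean())**2)      # total variation
ss_m = np.sum((y_hat - y.mean())**2)  # variation explained by the model
ss_r = np.sum((y - y_hat)**2)         # variation the model cannot explain

r2 = ss_m / ss_t
f = (ss_m / k) / (ss_r / (N - k - 1))  # F = MS_M / MS_R
print(r2, f)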

      BIAS IN LINEAR MODELS
      An outlier is a case that differs substantially from the main trend in the data. Standardized residuals can be used to check which residuals are unusually large and can be viewed as an outlier. Standardized residuals are residuals converted to z-scores. Standardized residuals greater than 3.29 are considered an outlier (1), if more than 1% of the sample cases have a standardized residual of greater than 2.58, the level of error in the model may be unacceptable (2) and if more than 5% of the cases have standardized residuals with an absolute value greater than 1.96, the model may be a poor representation of the data (3).
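A small sketch of these three checks, assuming the residuals from a fitted model are already available (the values below are made up, and the residuals are simply converted to z-scores):

import numpy as np

residuals = np.array([0.5, -1.2, 2.7, -0.3, 0.9, -2.1, 0.2, 3.5, -0.8, 1.1])
z_resid = (residuals - residuals.mean()) / residuals.std(ddof=1)

print(np.abs(z_resid) > 3.29)            # cases that count as outliers
print(np.mean(np.abs(z_resid) > 2.58))   # should stay below 1%
print(np.mean(np.abs(z_resid) > 1.96))   # should stay below 5%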

The studentized residual is the unstandardized residual divided by an estimate of its standard deviation that varies from case to case.

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 10

      Researchers should not compare artificially created groups in an experiment (e.g. based on the median). There are several problems with median-splits:

      1. Median splits change the original information drastically
      2. Effect sizes get smaller
      3. There is an increased chance of finding spurious effects

      CATEGORICAL PREDICTORS IN THE LINEAR MODEL
      Comparing the difference between the means of two groups is predicting an outcome based on membership of two groups. A t-statistic is used to ascertain whether a model parameter is equal to zero. In other words, the t-statistic tests whether the difference between group means is equal to zero.

      THE T-TEST
      There are two types of t-tests:

      1. Independent t-test (independent measures t-test)
        This is comparing two means in which each group has its own set of participants.
      2. Paired-samples t-test (dependent t-test)
        This is comparing two means in which each group uses the same participants.

The t-test is used to see whether there is an actual difference between two groups (e.g. experimental and control). If there is no difference between the two groups, we expect to see similar means. There is natural variation in each sample, so the means are (almost) never exactly the same. Therefore, just by looking at the means, it is impossible to state whether there is a significant difference between two groups. In the t-test, a set significance level (alpha, normally 0.05) is used as a threshold for when a difference counts as significant. The t-statistic is used to compute a p-value and this p-value is compared to alpha. If the p-value is equal to or smaller than alpha, the difference between the two means is significant and we state that there is an actual difference. The larger the difference between two means relative to the standard error, the more likely it is that there is an actual difference between the two means.

The t-test is always computed under the assumption that the null hypothesis is true. It uses the following general formula:

t = (observed difference between sample means − expected difference between population means if H0 is true) / estimated standard error of the difference between the two sample means

      The null hypothesis usually states that there is no difference between the two means, meaning that the null hypothesis mean would equal ‘0’. The standard error of the sampling distribution is the standard error of differences. The standard error helps the t-test because it gives a scale of likely variability between samples.

The variance sum law states that the variance of a difference between two independent variables is equal to the sum of their variances (e.g. the variance of x1 − x2 = variance of x1 + variance of x2). The variance of the sampling distribution of the difference between two sample means is equal to the sum of the variances of the two populations from which the samples were taken. This leads to the following formula for the standard error:

SE(x̄₁ − x̄₂) = √(s₁²/N₁ + s₂²/N₂)

This equation holds only when the two sample sizes are equal. When the sample sizes differ, a pooled variance estimate, which weights each group’s variance by its sample size, is used instead.
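A short sketch of the independent t-test, computing the standard error via the variance sum law and cross-checking with scipy (made-up scores; equal_var=False requests Welch’s version, which matches this standard error):

import numpy as np
from scipy import stats

g1 = np.array([24., 27., 31., 29., 25., 30., 28., 26.])  # experimental
g2 = np.array([21., 23., 26., 22., 25., 24., 20., 23.])  # control

# Standard error of the difference via the variance sum law
se_diff = np.sqrt(g1.var(ddof=1) / len(g1) + g2.var(ddof=1) / len(g2))
t_manual = (g1.mean() - g2.mean()) / se_diff   # H0 difference is 0

t, p = stats.ttest_ind(g1, g2, equal_var=False)
print(t_manual, t, p)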

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 11

      Moderation refers to the combined effect of two or more predictor variables on an outcome. This is also known as an interaction effect. A moderator variable is one that affects the relationship between two others. It affects the strength or direction of the relationship between the variables.

      The interaction effect indicates whether moderation has occurred. The predictor and the moderator must be included for the interaction term to be valid. If, in the linear model, the interaction effect is included, then the individual predictors represent the regression of the outcome on that predictor when the other predictor is zero.

The predictors are often transformed using grand mean centring. Centring refers to transforming a variable into deviations around a fixed point, typically the grand mean. Centring is important when the model contains an interaction effect, as it makes the bs for the lower-order effects interpretable; it also makes the main (lower-order) effects easier to interpret when the interaction effect is not significant.

      The bs of individual predictors can be interpreted as the effect of that predictor at the mean value of the sample (1) and the average effect of the predictor across the range of scores for the other predictors (2) when the variables are centred.

In order to interpret a (significant) moderation effect, a simple slopes analysis needs to be conducted. It compares the relationship between the predictor and the outcome at low and high levels of the moderator. SPSS gives a zone of significance: between two values of the moderator the predictor does not significantly predict the outcome, while below and above those values it does.

      The steps for moderation are the following if there is a significant interaction effect: centre the predictor and moderator (1), create the interaction term (2), run a forced entry regression with the centred variables and the interaction of the two centred variables (3).

The simple slopes analysis gives three models: one model for the predictor when the moderator value is low (1), one when the moderator value is at the mean (2) and one when the moderator value is high (3).

      If the interaction effect is significant, then the moderation effect is also significant.
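A sketch of these steps with simulated data (variable names and effect sizes are invented), using statsmodels for the forced entry regression:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
predictor = rng.normal(size=n)
moderator = rng.normal(size=n)
outcome = (0.4 * predictor + 0.2 * moderator
           + 0.5 * predictor * moderator + rng.normal(size=n))

# Step 1: grand mean centre the predictor and the moderator
pred_c = predictor - predictor.mean()
mod_c = moderator - moderator.mean()

# Step 2: create the interaction term from the centred variables
interaction = pred_c * mod_c

# Step 3: forced entry regression with both predictors and the interaction
X = sm.add_constant(np.column_stack([pred_c, mod_c, interaction]))
model = sm.OLS(outcome, X).fit()
print(model.params, model.pvalues)

# Simple slopes: effect of the predictor at low, mean and high moderator
for m in (-mod_c.std(), 0.0, mod_c.std()):
    print(model.params[1] + model.params[3] * m)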

      MEDIATION
      Mediation refers to a situation when the relationship between the predictor variable and an outcome variable can be explained by their relationship to a third variable, the mediator. Mediation can be tested through three linear models:

      1. A linear model predicting the outcome from the predictor variable (c).
      2. A linear model predicting the mediator from the predictor variable (a).
      3. A linear model predicting the outcome from both the predictor variable and the mediator (predictor = c’ and mediator = b).

There are four conditions for mediation: the predictor variable must significantly predict the outcome variable (in model 1) (1), the predictor variable must significantly predict the mediator (in model 2) (2), the mediator must significantly predict the outcome variable (in model 3) (3), and the predictor variable must predict the outcome variable less strongly in model 3 than in model 1 (4).
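A sketch of the three models with simulated data (paths a, b, c and c’ as labelled above; all names and values are hypothetical):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
predictor = rng.normal(size=n)
mediator = 0.5 * predictor + rng.normal(size=n)
outcome = 0.4 * mediator + 0.2 * predictor + rng.normal(size=n)

add = sm.add_constant

# Model 1: outcome from predictor (total effect, c)
c = sm.OLS(outcome, add(predictor)).fit().params[1]
# Model 2: mediator from predictor (a)
a = sm.OLS(mediator, add(predictor)).fit().params[1]
# Model 3: outcome from predictor and mediator (c' and b)
m3 = sm.OLS(outcome, add(np.column_stack([predictor, mediator]))).fit()
c_prime, b = m3.params[1], m3.params[2]

# Mediation is suggested when c' is noticeably smaller than c
print(c, a, b, c_prime)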

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 12

      The overall fit of a linear model is tested using the F-statistic. The F-statistic is used to test whether groups are significantly different and then specific model parameters (the bs) are used to show which groups are different.

The F-statistic has an associated p-value as well. A p-value smaller than 0.05 (or any set alpha) indicates a significant difference between the group means. The downside of the F-test is that it does not tell us which groups are different; associated t-tests can show which groups are significantly different.

The null hypothesis of the F-statistic is that the group means are equal and the alternative hypothesis is that the group means are not equal. If the null hypothesis is true, then the b-coefficients should be zero. The F-statistic can also be described as the ratio of explained to unexplained variation.

The total sum of squares is the total amount of variation within the data. This can be calculated by using the following formula:

SS_T = Σ (x_i − x̄_grand)²

It is the squared difference between each observed data point and the grand mean, summed over all observations. The grand variance is the variance of all observations; it is the variation between all scores, regardless of the group from which the scores come.

The model sum of squares is calculated by taking the difference between the values predicted by the model and the grand mean. It tells us how much of the variation can be explained using the model. It uses the following formula:

SS_M = Σ n_g · (x̄_g − x̄_grand)²

It is the squared difference between each group mean and the grand mean, multiplied by the number of participants in that group; these values are then added together across the groups.

The residual sum of squares tells us how much of the variation cannot be explained by the model. It is calculated by looking at the difference between the score obtained by a person and the mean of the group to which the person belongs. It uses the following formula:

SS_R = Σ (x_ig − x̄_g)²

It is the squared difference between the participant’s score (x_ig) and the group mean, summed over all the participants in all the groups. The residual sum of squares can also be denoted in the following way:

SS_R = SS_T − SS_M

One other way of denoting the residual sum of squares is the following formula:

SS_R = Σ s_g² · (n_g − 1)

It is the variance of a group multiplied by one less than the number of people in that group, and this value is added together across all the groups. The average sums of squares (mean squares) are calculated by dividing each sum of squares by its degrees of freedom: the model sum of squares by k − 1 (with k the number of groups) and the residual sum of squares by N − k.
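A small sketch computing these sums of squares and the F-statistic by hand for three hypothetical groups:

import numpy as np

groups = [np.array([3., 4., 5., 4.]),
          np.array([6., 7., 5., 6.]),
          np.array([9., 8., 10., 9.])]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
N, k = len(all_scores), len(groups)

ss_t = np.sum((all_scores - grand_mean)**2)                      # total
ss_m = sum(len(g) * (g.mean() - grand_mean)**2 for g in groups)  # model
ss_r = sum(g.var(ddof=1) * (len(g) - 1) for g in groups)         # residual

ms_m = ss_m / (k - 1)   # model mean squares
ms_r = ss_r / (N - k)   # residual mean squares
print(ms_m / ms_r)      # F-statistic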

      ASSUMPTIONS WHEN COMPARING MEANS
There are several assumptions when comparing means, the most important being homogeneity of variance, normality of the sampling distribution and independence of observations.

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 13

Covariates are characteristics of the participants in an experiment that lie outside the actual treatment. If a researcher wants to compare the means of multiple groups while including these additional predictors, the covariates, then ANCOVA is used. Examples of covariates could be love of puppies or softness of puppy fur.

      Covariates can be included in an ANOVA for two reasons:

      1. Reduce within-group error variance
        The unexplained variance is attributed to other variables, the covariates, which reduces the total error variance. This allows for a more sensitive test for the difference of group means.
      2. Elimination of confounds
        By adding other variables, covariates, in the analysis, confounds are eliminated.

      If there are covariates, the b-values represent the differences between the means of each group and the control adjusted for the covariate.

      ASSUMPTIONS AND ISSUES WITH ANCOVA
      There are two new assumptions for ANCOVA that are not present with ANOVA. These assumptions are independence of the covariate and treatment effect and homogeneity of regression slopes.

The ideal case is that the covariate is independent from the treatment effect. If the covariate is not independent from the treatment effect, the covariate will reduce the apparent experimental effect, because it explains some of the variance that would otherwise be attributable to the experiment. The ANCOVA does not control for or balance out the differences caused by the covariate. This problem can be prevented by randomizing participants to experimental groups or by matching experimental groups on the covariate.

      Another assumption of the ANCOVA is that the relationship between covariate and outcome variable holds true for all groups of participants and not only for a few groups of participants (e.g. for both males and females and not only males). This assumption can be checked by checking the regression line for all the covariates and all the conditions. The lines should be similar.

In order to test the assumption of homogeneity of regression slopes, the ANCOVA model should be customized in SPSS to include the independent variable × covariate interaction.

      CALCULATING THE EFFECT SIZE
The partial eta squared is an effect size that takes the covariates into account. It uses the proportion of variance that a variable explains that is not explained by other variables in the analysis. It uses the following formula:

partial η² = SS_effect / (SS_effect + SS_residual)
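A sketch of an ANCOVA and the partial eta squared using the statsmodels formula interface (the data are simulated; group labels and effect sizes are invented):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
n = 90
df = pd.DataFrame({
    "group": np.repeat(["control", "low", "high"], n // 3),
    "covariate": rng.normal(size=n),
})
df["outcome"] = (0.5 * df["covariate"]
                 + 1.0 * (df["group"] == "high") + rng.normal(size=n))

model = smf.ols("outcome ~ C(group) + covariate", data=df).fit()
table = anova_lm(model, typ=2)       # Type II sums of squares

# Partial eta squared: SS_effect / (SS_effect + SS_residual)
ss_res = table.loc["Residual", "sum_sq"]
effects = table["sum_sq"].drop("Residual")
print(effects / (effects + ss_res))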

       

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 14

Factorial designs are used when there is more than one independent variable. There are several factorial designs:

      1. Independent factorial design (between groups)
        There are several independent variables measured using different entities.
      2. Repeated-measured (related) factorial design
        There are several independent variables using the same entities in all conditions.
      3. Mixed design
        There are several independent variables. Some conditions use the same entities and some conditions use different entities.

      INDEPENDENT FACTORIAL DESIGNS AND THE LINEAR MODEL
      The calculation of factorial designs is similar to that of ANOVA, but the explained variance (between-groups variance) consists of more than one independent variable. The model sum of squares (between-groups variance) consists of the variance due to the first variable, the variance due to the second variable and the variance due to the interaction between the first and the second variable.

It uses the following formula:

SS_M = SS_A + SS_B + SS_A×B

where each component is calculated, as before, from the squared differences between the relevant group means and the grand mean.

This is the model sum of squares and shows how much variance the independent variables explain together. It can be useful to see how much of the total variance each independent variable explains on its own. This can be done using the same formula, but for only one independent variable at a time. To achieve this, the cases are grouped by that independent variable alone (this normally increases n, as multiple groups are put together in one bigger group).

The residual sum of squares, the error variance (SS_R), shows how much variance cannot be explained by the independent variables. It uses the following formula:

SS_R = Σ s_g² · (n_g − 1)

It is the variance of a group times the number of participants in that group minus one, added together across all the groups. The degrees of freedom are added up in the same way. In a two-way design, an F-statistic is computed for each of the two main effects and for the interaction.
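A sketch of a two-way independent factorial design with simulated data; in the statsmodels formula, 'y ~ C(a) * C(b)' expands into both main effects plus their interaction:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
n = 120
df = pd.DataFrame({
    "a": np.tile(["a1", "a2"], n // 2),
    "b": np.repeat(["b1", "b2", "b3"], n // 3),
})
df["y"] = (0.8 * (df["a"] == "a2") + 0.5 * (df["b"] == "b3")
           + rng.normal(size=n))

# One row per main effect and one for the interaction, each with its own F
model = smf.ols("y ~ C(a) * C(b)", data=df).fit()
print(anova_lm(model, typ=2))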

      OUTPUT FROM FACTORIAL DESIGNS
      A main effect should not be interpreted in the presence of a significant interaction involving that main effect. In other words, main effects don’t need to be interpreted if an interaction effect involving that variable is significant.

      Simple effects analysis looks at the effect of one independent variable at individual levels of the other independent variable. When judging interaction graphs, there are two general rules:

      1. Non-parallel lines on an interaction graph indicate some degree of interaction, but how strong and whether the interaction is significant depends on how non-parallel the lines are.
      2. Lines on an interaction graph that cross are very non-parallel, which hints at a possible significant interaction, but does not necessarily mean that it is a significant interaction.
Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 15

      Repeated-measures refers to when the same entities (e.g. people) participate in all conditions of an experiment or provide data at multiple points of time.

      One of the assumptions of the standard linear model is that residuals are independent, which is not true for repeated-measures designs. The residuals are affected by both between-participant factors and within-participant factors. There are two solutions to this:

      1. Model within-participant variability
      2. Apply additional assumptions to make a simpler, less flexible model fit

One of these assumptions is sphericity (circularity). This assumption states that the relationship between scores in pairs of treatment conditions is similar (e.g. the level of dependence between means is roughly equal). It states that the variation within conditions is similar and that no two conditions are any more dependent than any other two. Local sphericity refers to when some pairs of conditions meet this requirement and some do not. Sphericity is not relevant if there are only two conditions; it becomes relevant when there are at least three.

The assumption of sphericity can be tested using Mauchly’s test. The degree of sphericity can be estimated using the Greenhouse-Geisser estimate or the Huynh-Feldt estimate. If the assumption of sphericity is not met, there is a loss of power and the F-statistic does not have the distribution it is supposed to have. For post hoc tests, the Bonferroni method can be used when you worry that sphericity is violated; when it is not violated, Tukey’s test can be used.

If the assumption of sphericity is violated, the degrees of freedom have to be adjusted. They are multiplied by the estimate of sphericity to calculate the adjusted degrees of freedom.

      F-STATISTIC OF REPEATED MEASURES DESIGN
In repeated-measures designs the within-groups variance consists of within-participant variance, as there is only one group of participants. This consists of the effect of the experiment and the error (variance not explained by the experiment). The between-groups variance now consists of the between-participant variance.

The formula for the within-entity (within-groups) variance is the following:

SS_W = Σ s²_entity · (n − 1), summed over all entities

The n represents the number of scores within the person (e.g. the number of experimental conditions). The total amount of variance that is explained by the experimental manipulation can be calculated by comparing each condition mean to the grand mean across all the conditions. It uses the following formula:

SS_M = Σ n_participants · (x̄_condition − x̄_grand)²

The total error variance (residual sum of squares), the amount of variance that cannot be explained by the experimental manipulation, can be calculated in the following way:

SS_R = SS_W − SS_M

In order to calculate the F-statistic, the mean squares have to be calculated by dividing both the SS_M and the SS_R by their degrees of freedom:

MS_M = SS_M / df_M,  MS_R = SS_R / df_R,  F = MS_M / MS_R

Here df_M is the number of conditions minus one and df_R is df_M multiplied by the number of participants minus one.
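A rough sketch computing the repeated-measures F-statistic by hand (rows are hypothetical participants, columns are the conditions):

import numpy as np

scores = np.array([[8., 7., 1., 6.],
                   [9., 5., 2., 5.],
                   [6., 2., 3., 8.],
                   [5., 3., 1., 9.],
                   [8., 4., 5., 8.]])
n_participants, n_conditions = scores.shape
grand_mean = scores.mean()

# Within-entity variation: variance of each participant's own scores
ss_w = np.sum(scores.var(axis=1, ddof=1) * (n_conditions - 1))
# Model variation: condition means versus the grand mean
ss_m = np.sum(n_participants * (scores.mean(axis=0) - grand_mean)**2)
# Residual variation: the part the manipulation cannot explain
ss_r = ss_w - ss_m

df_m = n_conditions - 1
df_r = df_m * (n_participants - 1)
print((ss_m / df_m) / (ss_r / df_r))   # F-statistic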

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 16

Mixed designs are a combination of repeated-measures and independent designs: they include some independent variables that were measured using different entities and some independent variables that used repeated measures on the same entities.

      The most important assumptions of the mixed designs ANOVA are sphericity and homogeneity of variance.

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 17

      A multivariate analysis is used when there is more than one dependent (outcome) variable. It is possible to use several F-tests when there are several dependent variables, but this inflates the type-I error rate. A MANOVA can detect whether groups differ along a combination of dimensions. MANOVA has a greater potential power to detect an effect.

A matrix is a grid of numbers arranged in columns and rows. The values within a matrix are called components or elements and the rows and columns are vectors. A square matrix has an equal number of columns and rows. An identity matrix is a matrix where the diagonal numbers are ‘1’ and the non-diagonal numbers are ‘0’. The sum of squares and cross-products (SSCP) matrices are a way of operationalizing multivariate versions of the sums of squares. The matrix that represents the systematic variance (model sum of squares) is denoted by the letter ‘H’ and is called the hypothesis sum of squares and cross-products matrix (hypothesis SSCP). The matrix that represents the unsystematic variance (residual sum of squares) is denoted by the letter ‘E’ and is called the error sum of squares and cross-products matrix (error SSCP). The matrix that represents the total sum of squares for each outcome (total SSCP) is denoted by the letter ‘T’.

      The cross-product is the total combined error between two variables.

      THEORY BEHIND MANOVA
      The total sum of squares is calculated by calculating the difference between each of the scores and the mean of those scores, then squaring those differences and adding them together.

      The degrees of freedom is N-1. The model sum of squares is calculated by taking the difference between each group mean and the grand mean, squaring it, multiplying by the number of scores in the group and then adding it all together.

The degrees of freedom for the model sum of squares is the number of groups minus one; for the residual sum of squares it is the sample size of each group minus one, multiplied by the number of groups. The SS_M and the SS_R then have to be divided by their own degrees of freedom, before being divided by each other to get to the F-statistic.

The cross-product is the difference between the scores and the mean for one variable multiplied by the difference between the scores and the mean for another variable. It is similar to covariance. The total cross-product uses the following formula:

CP_T = Σ (x_i − x̄_grand) · (y_i − ȳ_grand)

For each outcome (dependent) variable, each score is subtracted from the grand mean for that variable. This gives as many deviation values per participant as there are outcome variables, and these deviations are multiplied and summed.

The model cross-product, how the relationship between the outcome variables is influenced by the experimental manipulation, uses the following formula:

CP_M = Σ n_g · (x̄_g − x̄_grand) · (ȳ_g − ȳ_grand)

The residual cross-product, how the relationship between the outcome variables is influenced by individual differences and unmeasured variables, can be calculated from the deviations of each score from its own group mean on each outcome variable:

CP_R = Σ (x_ig − x̄_g) · (y_ig − ȳ_g)
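A small numpy sketch of the T, H and E SSCP matrices for two outcome variables measured in two hypothetical groups (note that T = H + E):

import numpy as np

group1 = np.array([[3., 5.], [4., 6.], [5., 4.], [6., 7.]])
group2 = np.array([[7., 8.], [8., 9.], [6., 7.], [9., 10.]])
all_data = np.vstack([group1, group2])
grand_means = all_data.mean(axis=0)

def sscp(dev):
    # Sums of squares on the diagonal, cross-products off the diagonal
    return dev.T @ dev

T = sscp(all_data - grand_means)                     # total SSCP
E = (sscp(group1 - group1.mean(axis=0))              # error SSCP
     + sscp(group2 - group2.mean(axis=0)))
H = T - E                                            # hypothesis SSCP
print(T, H, E, sep="\n")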

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 18

Factor analysis and principal component analysis (PCA) are techniques for identifying clusters of variables. These techniques have three uses: understanding the structure of a set of variables (1), constructing a questionnaire to measure an underlying variable (2) and reducing a dataset to a more manageable size while retaining as much of the original information as possible (3).

      Factor analysis attempts to achieve parsimony by explaining the maximum amount of common variance in a correlation matrix using the smallest number of explanatory constructs (latent variables). PCA attempts to explain the maximum amount of total variance in a correlation matrix by transforming the original variables into linear components.

      A factor loading refers to the coordinate of a variable along a classification axis (e.g. Pearson correlation between factor and variable). It tells us something about the relative contribution that a variable makes to a factor.

      In factor analysis, scores on the measured variables are predicted from the means of those variables plus a person’s scores on the common factors (e.g. factors that explain the correlations between variables) multiplied by their factor loadings, plus scores on any unique factors within the data (e.g. factors that cannot explain the correlations between variables).

      In PCA, the components are predicted from the measured variables.

One major assumption of factor analysis is that the algebraic factors represent real-world dimensions. A regression technique can be used to predict a person’s score on a factor. Using this technique, the resulting factor scores have a mean of 0 and a variance equal to the squared multiple correlation between the estimated factor scores and the true factor values. A downside is that the scores can correlate with factor scores from a different orthogonal factor. The Bartlett method and the Anderson-Rubin method can be used to overcome this problem; the Anderson-Rubin method produces factor scores that are uncorrelated and standardized.

      DISCOVERING FACTORS
      The method used for discovering factors depends on whether the results should be generalized from the sample to the population (1) and whether you are exploring your data or testing a specific hypothesis (2).

      Random variance refers to variance that is specific to one measure but not reliably so. Communality refers to the proportion of common variance present in a variable. Extraction refers to the process of deciding how many factors to keep.

      Eigenvalues associated with a variate indicate the substantive importance of that factor. Therefore, factors with large eigenvalues are retained. Eigenvalues represent the amount of variation explained by a factor.

A scree plot is a plot where each eigenvalue is plotted against the factor with which it is associated. The point of inflexion is where the slope of the line changes dramatically. This point can be used as a cut-off for how many factors to retain. It is also possible to use the eigenvalues themselves as a criterion: Kaiser’s criterion is to retain factors with eigenvalues greater than 1, and Jolliffe’s criterion is to retain factors with eigenvalues greater than 0.7.
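A sketch of extracting eigenvalues from a correlation matrix and applying the retention criteria (the questionnaire data are simulated):

import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(size=(100, 6))   # 100 respondents, 6 items
data[:, 1] += data[:, 0]           # make some items correlate
data[:, 3] += data[:, 2]

corr = np.corrcoef(data, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

print(eigenvalues)                 # plot against factor number for a scree plot
print(np.sum(eigenvalues > 1))     # Kaiser's criterion
print(np.sum(eigenvalues > 0.7))   # Jolliffe's criterion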

Discovering statistics using IBM SPSS statistics by Andy Field, fifth edition – Summary chapter 19

It is possible to predict categorical outcome variables, that is, in which category an entity falls. When looking at categorical variables, frequencies are used. The chi-squared test can be used to see whether there is a relationship between two categorical variables. It compares the observed frequencies with the expected frequencies. The chi-squared test standardizes the deviation for each observation and adds these standardized deviations together.

The chi-squared test uses the following formula:

χ² = Σ (observed_ij − model_ij)² / model_ij

The expected score (model_ij) has the following formula:

model_ij = (row total_i × column total_j) / n

The degrees of freedom of the chi-squared distribution are (r − 1)(c − 1). In order to use the chi-squared distribution with the chi-squared statistic, the expected value in each cell needs to be greater than 5. If this is not the case, Fisher’s exact test can be used.

The likelihood ratio statistic is an alternative to the chi-square statistic. It compares the probability of obtaining the observed data with the probability of obtaining the same data under the null hypothesis. The likelihood ratio statistic uses the following formula:

Lχ² = 2 · Σ observed_ij · ln(observed_ij / model_ij)

It uses the chi-squared distribution and is the preferred test when the sample size is small. The chi-square statistic tends to produce type-I errors when the table is 2 x 2. This can be corrected for by using Yates’ correction, which uses the following formula:

χ² = Σ (|observed_ij − model_ij| − 0.5)² / model_ij

      In short, the chi-square test tests whether there is a significant association between two categorical variables.

      ASSUMPTIONS WHEN ANALYSING CATEGORICAL DATA
      One assumption the chi-square test uses is the assumption of independence of cases. Each person, item or entity must contribute to only one cell of the contingency table. Another assumption is that in 2x2 tables, no expected value should be below 5. In larger tables, not more than 20% of the expected values should be below 5 and all expected values should be greater than 1. Not meeting this assumption leads to a reduction in test power.

The residual is the error between the observed frequency and the expected frequency. The standardized residual can be calculated in the following way:

standardized residual = (observed_ij − model_ij) / √model_ij

Individual standardized residuals have a direct relationship with the test statistic, as the chi-square statistic is composed of the sum of the squared standardized residuals. The standardized residuals behave like z-scores.

       

      EFFECT SIZE
Cramer’s V can give an effect size. In 2x2 tables, the odds ratio is often used as the effect size. The odds of an event in a group are the number of cases with the event divided by the number of cases without it, and the odds ratio uses the following formula:

odds ratio = odds of the event in group 1 / odds of the event in group 2

The odds ratio is thus the odds of the event in one group divided by the odds of the event in the other group.
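A sketch using scipy’s chi2_contingency on a hypothetical 2 x 2 table; correction=True applies Yates’ correction, and the function also returns the expected frequencies:

import numpy as np
from scipy.stats import chi2_contingency

# Rows: group 1 and group 2; columns: event occurred yes / no
observed = np.array([[28, 24],
                     [10, 38]])

chi2, p, dof, expected = chi2_contingency(observed, correction=True)
print(chi2, p, dof)
print(expected)           # (row total x column total) / n for each cell

# Odds ratio: odds of the event in group 1 divided by odds in group 2
odds1 = observed[0, 0] / observed[0, 1]
odds2 = observed[1, 0] / observed[1, 1]
print(odds1 / odds2)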

       
