Psychology
Chapter 2
Methods of psychology
In psychology, the data are usually measures or descriptions of some form of behaviour produced by humans or other animals.
A fact (or observation) is an objective statement, usually based on direct observation, that reasonable observers agree is true. In psychology, facts are usually particular behaviours, or reliable patterns of behaviours, for persons or animals.
A theory is an idea, or conceptual model, that is designed to explain existing facts and make predictions about new facts that might be discovered.
Any prediction about new facts that is made from a theory is called a hypothesis.
Facts lead to theories, which lead to hypotheses, which are tested by experiments, which lead to new facts. This is the cycle of science.
- The value of scepticism.
Scepticism makes you notice what others missed and think of alternative explanations.
Occam’s razor: when two or more explanations can account equally well for a phenomenon, the simplest explanation is usually preferred.
- The value of careful observations under controlled conditions.
Careful observation under controlled conditions is a hallmark of the scientific method.
- The problem of observer-expectancy effects.
In studies of humans or other animals, the observers may unintentionally communicate to the subjects their expectations of how they should behave. The subjects, intentionally or not, may respond by doing what the researchers expect.
Each of these dimensions can vary independently of the others, resulting in any possible combination.
Research design
Researchers design a study to test a hypothesis, choosing the design that best fits the conditions the researcher wants to control.
There are three basic types.
- Experiments
The most direct and conclusive approach to testing a hypothesis about a cause-effect relationship between two variables.
An experiment is a procedure in which a researcher systematically manipulates one or more independent variables and looks for changes in one or more dependent variables while keeping all other variables constant. If only the independent variable is changed, then the experimenter can conclude that any change observed in the dependent variable is caused by the change in the independent variable.
A variable that causes some effect on another variable is the independent variable.
The variable that is hypothesised to be affected is called the dependent variable.
The aim of any experiment is to learn whether and how the dependent variable is affected by the independent variable.
Within-subject experiments: each subject is tested in each of the different conditions of the independent variable.
Between-groups experiments: there is a separate group of subjects for each different condition of the independent variable.
In these experiments, random assignment is used to ensure that the subjects are not assigned in a way that could bias the results.
- Correlational studies
Sometimes we cannot assign subjects to particular experimental conditions and control their experiences.
A correlational study is a study in which the researcher does not manipulate any variable, but observes or measures two or more already existing variables to find relationships between them.
It can identify the relationship between variables, which allows us to make predictions about one variable based on knowledge of another. It does not tell us about cause and effect.
Cause and effect cannot be determined from a correlational study.
A causal relationship may go (for example) in either direction, or there may be a third variable at work.
- Descriptive studies
A descriptive study is one in which the aim of the research is to describe the behaviour of an individual (or set of individuals) without assessing relationships between different variables. For example, a study might simply describe the prevalence of each disorder without correlating the disorders to other characteristics of the community members.
Setting
There are two basic types.
- Field
Any research study conducted in a setting in which the researcher does not have control over the experiences that the subjects have.
- Laboratory
Any research study in which the subjects are brought to a specially designated area that has been set up to facilitate the researchers’ collection of data or control over environmental conditions.
- Advantage: the laboratory allows the researcher to collect data under more uniform, controlled conditions than are possible in the field.
- But: the strangeness or artificiality of the laboratory may induce behaviours that obscure those the researcher wants to study.
Experiments happen more often in laboratories.
Correlational and descriptive studies happen more often in the field.
But not always!
Data-collection method
Two basic types
- Self-report
Procedures in which people are asked to rate or describe their own behaviour or mental state in some way.
For example a questionnaire or interview.
People might also be asked to describe other people.
- One form of self-report is introspection: the personal observation of one’s own thoughts, perceptions and feelings. This is highly subjective.
- Observation
Observational methods include all procedures by which researchers observe and record the behaviour of interest, rather than relying on subjects’ self-reports.
- One subcategory is ‘tests’: the researcher deliberately presents problems, tasks or situations to which the subjects respond.
- Another subcategory is ‘naturalistic observation’: the researcher avoids interfering with the subjects’ behaviour.
Caution! Sometimes subjects know that they’re being watched. Does this knowledge affect how they behave?
Hawthorne effect:
The subjects’ knowledge that they’re being watched, and their belief that they are receiving special treatment, influences their behaviour.
One technique to minimize the Hawthorne effect is habituation: a decline in response when a stimulus is repeatedly or continuously present. Over time, subjects may habituate to the presence of the researcher and go about their daily activities more naturally than they would if suddenly placed under observation.
Statistical methods are divided into two categories.
- Descriptive statistics
Used to summarize sets of data.
- Inferential statistics
Help researchers decide how confident they can be in judging that the observed results are not due to chance.
Descriptive Statistics
Describing a set of scores
The mean: The arithmetic average. Determined by adding the scores and dividing the sum by the number of scores.
The median: the centre score. Determined by ranking the scores from highest to lowest and finding the score that has the same number of scores above it as below it.
The variability: the degree to which the numbers in the set differ from one another and from the mean. Scores close to the mean indicate low variability; scores that differ widely indicate high variability. A common measure of variability is the standard deviation.
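As a concrete sketch (with made-up scores, not from the text), all three descriptive statistics can be computed with Python’s standard `statistics` module:

```python
import statistics

# Hypothetical set of test scores (illustrative data only)
scores = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(scores)      # sum of the scores divided by their number
median = statistics.median(scores)  # centre score after ranking
sd = statistics.pstdev(scores)      # standard deviation: a common measure of variability

print(mean, median, sd)
```

For this set the mean is 5, the median is 4.5 (the average of the two centre scores), and the standard deviation is 2.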
Describing a correlation
The strength and direction of a correlation can be assessed by a statistic called the correlation coefficient. This ranges from -1.00 to +1.00. The – and + indicate the direction of the correlation: positive or negative.
Positive: an increase of one variable coincides with a tendency for the other variable to increase.
Negative: An increase in one variable coincides with a tendency for the other variable to decrease.
The absolute value of the correlation coefficient indicates the strength of the correlation.
A correlation close to 0 means that the two variables are statistically unrelated.
To visualise the relationship between two variables, researchers might produce a scatter plot.
Inferential statistics
Inferential statistics are necessary because any set of data collected in a research study contains some degree of variability that can be attributed to chance. There are random effects caused by uncontrollable variables.
Given that results can vary as a result of chance, how confident can a researcher be in inferring a general conclusion for the study’s data?
Inferential statistics are ways of answering that question using the laws of probability.
Statistical Significance
P is for probability (or level of significance).
When two means are being compared, p is the probability that a difference as great as or greater than that observed would occur by chance if, in the larger population, there were no difference between the two means.
P is the probability that a difference would occur if the independent variable had no real effect on the scores.
By convention, results are usually labelled as statistically significant if the value of p is less than 0.05 (5 percent).
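To make the meaning of p concrete, here is a small permutation-test sketch (invented scores; real two-mean comparisons are more often tested with a t-test). It estimates the probability that a difference as large as the observed one would arise by chance alone:

```python
import random
import statistics

# Hypothetical scores for two groups of subjects (illustrative data only)
group_a = [12, 14, 15, 16, 18, 19]
group_b = [9, 10, 11, 12, 13, 15]

observed_diff = statistics.mean(group_a) - statistics.mean(group_b)

# If the independent variable had no real effect, the group labels would be
# arbitrary. Repeatedly shuffle all scores into two random "groups" and count
# how often a difference at least as large as the observed one occurs by chance.
random.seed(0)
pooled = group_a + group_b
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:6]) - statistics.mean(pooled[6:])
    if diff >= observed_diff:
        extreme += 1

p = extreme / trials  # estimated probability that the difference is due to chance
print(p)
```

For these scores p comes out well below 0.05, so by convention the difference would be labelled statistically significant.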
Components of a test of statistical significance
The elements that go into the calculation are:
- The size of the observed effect.
Other things being equal, a large effect is more likely to be significant than a small one. A large effect is less likely to be caused by chance.
- The number of individual subjects or observations in the study.
Other things being equal, results are more likely to be significant the more subjects or observations included in a research study. Large samples are less distorted by chance than are small samples.
The larger the sample, the more accurately an observed mean (or correlation coefficient) reflects the true mean (or correlation coefficient) of the population from which it was drawn.
If the number of subjects or observations is huge, then even very small effects that reflect a true difference in the population will be statistically significant.
- The variability of the data within each group.
This element applies to cases in which group means are compared to one another and an index of variability can be calculated for each group.
Variability is an index of the degree to which uncontrolled chance factors influence the scores in a set of data.
Other things equal, the less the variability within each group, the more likely the results are to be significant. If all of the scores within each group are close to the group mean, then even a small difference between the means of different groups may be significant.
In short:
A large observed effect, a large number of observations, and a small degree of variability in scores within groups all reduce the likelihood that the effect is due to chance (and increase the likelihood that a difference between two means, or a correlation between two variables, will be statistically significant).
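The sample-size point can be illustrated with a quick simulation (invented population values, not from the text): means of large samples scatter much less around the true population mean than means of small samples.

```python
import random
import statistics

random.seed(1)

# Simulated population with a true mean of 100 and SD of 15 (illustrative)
population = [random.gauss(100, 15) for _ in range(100_000)]

spread = {}
for n in (10, 1000):
    # Draw 200 samples of size n and see how widely their means vary
    sample_means = [statistics.mean(random.sample(population, n))
                    for _ in range(200)]
    spread[n] = statistics.pstdev(sample_means)

# Means of the large samples cluster far more tightly around 100
print(round(spread[10], 2), round(spread[1000], 2))
```

The spread of sample means for n = 1000 is roughly a tenth of that for n = 10, so a mean observed in a large sample reflects the population mean far more accurately.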
Statistical significance tells us that a result probably did not come about by chance, but does not, by itself, tell us that the result has practical value.
Good scientists strive to minimize bias in their research.
Bias is a non-random effect caused by some factor (or factors) extraneous to the research hypothesis.
Bias is a serious problem in research because statistical techniques cannot identify it or correct for it, whereas error only reduces the chance that researchers will find statistically significant results.
Bias can lead to false conclusions.
Three types of bias are:
- Sampling bias
- Measurement bias
- Expectancy bias
Avoiding biased samples
It has to do with the way individuals are selected for study or assigned to groups.
If the members of a particular group are initially different from those of another group in some systematic way (or different from the larger population the researcher is interested in) the group is a biased sample. The group is no longer representative.
Reliability and validity of measurements
Reliability
Reliability has to do with measurement error, not bias.
- Replicability.
A measure is reliable to the degree that it yields similar results each time it is used with a particular subject under a particular set of conditions.
Low reliability decreases the chance of finding statistical significance in a research study.
- Interobserver reliability
The same behaviour seen by one observer is also seen by a second observer.
This requires that the behaviour in question be carefully defined ahead of time. This is done by generating an operational definition (specifying exactly what constitutes an example of your dependent measure).
Validity
A lack of validity can be a source of bias. It can lead to false conclusions.
- Criterion validity
Correlate a test score with another, more direct index of the characteristic that we wish to measure or predict. The more direct index is the criterion.
Avoiding biases from observers’ and subjects’ expectancies
- Observer-expectancy effects.
The best way to prevent this is to keep the observer blind (uninformed) about those aspects of the study’s design that could lead him or her to form potentially biasing expectations.
- Subject-expectancy effects
If different treatments in an experiment induce different expectations in subjects, then those expectations, rather than anything else about the treatments, may account for observed differences in how the subjects respond.
To prevent bias from subject expectancy, subjects should be kept blind about the treatment they are receiving, for example by using a placebo.
Double-blind experiment: an experiment in which both the observer and the subject are kept blind.
Subjects cannot always be kept blind about their treatment.
Research with humans
In research with humans, ethical considerations revolve around three interrelated issues:
- The person’s right to privacy
Subjects must be informed that they do not have to share any information about themselves that they do not wish to share. And all records must be kept in ways that ensure anonymity.
- The possibility of discomfort or harm
If a planned research study involves some risk of discomfort or harm to subjects, researchers are obliged to determine whether the same question can be answered in a study that involves less risk. If the answer is no, a determination must be made that the risk is minimal and is outweighed by the human benefits of the study. Subjects must be advised that they are free to quit at any time.
- The use of deception