Introductory Psychology and Cognition - Part A: Introduction to Psychology: Summaries, Study Notes & Practice Exams - Psychology BA1 - UvA
Introduction
Psychology is the science of the mind and behaviour.
The mind consists of an individual’s sensations, perceptions, memories, dreams, thoughts, motives, feelings, and other subjective experiences, as well as the unconscious processes that underlie them. The mind controls the individual’s observable actions and behaviours.
The three fundamental ideas of psychology are:
Behaviour and thinking have measurable physical causes.
Thoughts, behaviour and emotions are gradually modified by environmental influences.
The body is a product of evolution by natural selection.
The Christian view of the human being consisted of two distinct but conjoined elements: the material body and the immaterial soul. This idea is called dualism.
Descartes concluded that the body was like a machine, capable of functioning on its own. The soul, therefore, must be responsible for all that differentiates the human being from the animal: specifically, human thought. He believed that the immaterial soul acts through the pineal body organ in the brain and sends information to the senses. This theory began to solidify the connection between body and mind, and suggest a physical source of behaviour.
Thomas Hobbes did away with the concept of dualism, arguing that nothing exists beyond matter and energy. This philosophy is called materialism. He concluded that conscious thought was a product of the mechanics of the brain, and subject to the laws of nature. This mode of thinking inspired empiricism.
Working upon the idea that the body and brain are like a machine, a great deal of research into human physiology was undertaken in the 19th century.
In 1822, François Magendie demonstrated that the spinal cord consists of two directional nerve systems – one bringing information to the brain and the other sending instructions out to the limbs. After this discovery, scientists began to learn more about reflexes, even suggesting that all behaviour (even voluntary) might occur through reflex. This philosophy is called reflexology. This idea inspired Ivan Pavlov to begin his studies on reflexes and behaviour.
Johannes Müller proposed in 1838 that different parts of the brain are responsible for different behaviours and sensory experiences. Pierre Flourens, beginning in the 1820s, experimented on animals, disabling different parts of the brain and observing the resulting deficits. Paul Broca discovered in 1861 that people suffering from damage to a particular area of the brain lose only their ability to speak.
Empiricism is the idea that human knowledge and thought come from sensory experience. Its main contributors were John Locke (1632-1704), David Hartley (1705-1759), James Mill (1773-1836), and John Stuart Mill (1806-1873).
British empiricists believed that all thoughts are a product of personal experience rather than free will. According to the empiricists, the most basic operating principle of the mind is association by contiguity: if a person experiences two environmental stimuli at the same time, or in sequence, the two stimuli become associated in the mind. In the future, when one stimulus is presented, the other will be remembered. To this day, the law of association by contiguity is considered fundamental to the psychology of learning and memory.
The opposite view to empiricism is nativism, which states that the most basic forms of knowledge, the foundation of human nature, are inborn and not acquired through experience. This view was embraced by German philosophers like Leibniz (1646-1716) and Kant (1724-1804).
In 1859, On the Origin of Species was published. This provided a biological grounding for psychology and insight into the reasons for native behaviours. Darwin’s fundamental idea was that, through the process of natural selection over generations, living things evolve. Environmentally adaptive characteristics – especially those that promote survival to reproductive age and the production of offspring – are thus more likely to be passed on to later generations.
In one book, Darwin illustrated the evolutionary benefit of basic human emotional expressions as a means to communicate intentions. Mechanisms like emotions, drives, perception, learning, and reasoning thus developed gradually because they were beneficial to survival and reproduction.
Psychologists strive to explain (identify causes of) mental experiences and behaviour. There are many levels of analysis upon which specific mental experiences and behaviours can be examined:
The nervous system plays a large role in the production of mental experiences and behavioural acts. The research specialty that centres on the neural level is behavioural neuroscience. Researchers in this field study how neurons, neural pathways, and regions of the brain influence behaviour and thought. They also study the effects of brain-altering drugs.
Genes are inherited instructional codes that determine how the body and brain develop. Differences in genes can cause differences in the brain. Research in this field is called behavioural genetics. Some researchers focus on nonhuman animal study, altering genes to observe the resulting effects on behaviour. Some compare DNA of people who exhibit specific traits to locate the functions of certain genes.
Evolutionary psychologists attempt to explain how behaviour came about in the course of evolution. They sometimes study non-human primates to see what behaviours our own might have come from.
Since nearly all forms of human behaviour and mental experience are modifiable by learning, some psychologists specialize in explaining behaviour in terms of learning (learning/behavioural psychology).
Cognition refers to information that is stored and activated by the brain. This includes thoughts, beliefs, memory, and innate, conscious, and unconscious information. Cognitive psychology research addresses the influence of cognition on behaviour and mental experiences.
Social psychology researchers investigate the influences of interpersonal interactions on mental experiences and behaviour. Social pressure changes our behaviour when in the actual, imagined, or implied presence of others. Social cognition consists of our beliefs about other people and the potential social consequences of certain behaviours. Social psychologists look at these and other aspects of interpersonal relations.
Cultures vary in dialect, accepted values and attitudes, and normal behaviours and emotions. The psychological specialty that addresses culture is cultural psychology. Unlike social psychology, cultural psychology strives to characterize entire cultures in terms of behaviour and attitudes.
People behave differently at different ages in their lives. Developmental psychology focuses on the differences that occur in thought, emotion, and behaviour throughout the course of a human life, and looks for the causes of these differences. In order to do this, developmental psychologists often draw from the other psychological fields.
While these specialties are defined by having different levels of analysis, others are defined by the specific topics they study, such as sensory psychology and motivational psychology. Personality psychology focuses on the differences between the way people think, feel, and behave. Abnormal psychology is concerned with disruptive variations in psychological traits and systems. Clinical psychologists tend to be practitioners, but some do research into improving clinical techniques.
Psychology can be seen as bridging natural and social sciences, and connecting strongly with the humanities. In fact, according to the “psychocentric model of the university”, psychology is strongly linked to all of the major subjects of higher education.
The main professional settings for psychology include the following.
Academic Psychology
Clinical Psychology
Elementary and Secondary School Psychology
Business and Governmental Psychology
Facts, Theories, Hypotheses
A fact is an objective statement usually based on direct observation that can reasonably be accepted as true. A theory is a conceptual model (idea) that attempts to explain known facts and make predictions about new potential facts. These predictions are called hypotheses.
Scepticism towards both miraculous claims and reasonable-sounding scientific theories maintains scientific integrity.
Careful observations must be made in controlled conditions for results to be taken as having a reasonable reliability.
Observer-expectancy effects are those in which the subtle behaviour of the observer influences how the subject acts.
There are three main categories of research strategies:
Research design, which includes experiments, correlational studies, and descriptive studies.
Setting in which the study is conducted, either in the field or in the laboratory.
Data-collection, through self-report and observation.
Experiments are used to test a hypothesis about a cause-effect relationship between two variables. The variable that is the supposed cause is the independent variable, and the supposed effect occurs to the dependent variable. The independent variable can be manipulated to observe how the effect on the dependent variable differs. When all other variables are held constant, it is easiest to observe real effects on the dependent variable and establish causation. People and animals that are studied in research are called subjects.
In within-subject experiments, each subject is studied within different conditions of the independent variable. That is, many people may participate in the experiment, each tested with the different conditions of the independent variable.
In between-groups experiments, there is a separate group of subjects for each different condition of the independent variable. This usually also includes a control group, for whom the conditions are normal – this allows for data that can be compared against. In between-group experiments, random assignment is used to determine which group subjects will belong to. This helps control against possible confounding variables like age, sex, birthplace, etc. as well as the way people are treated.
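As an illustration (not from the text), random assignment to a between-groups design can be sketched in a few lines of Python; the subject names and group labels here are hypothetical:

```python
import random

def random_assignment(subjects, groups=("experimental", "control"), seed=0):
    """Randomly assign subjects to groups by shuffling, then dealing them out.

    Shuffling spreads possible confounds (age, sex, birthplace, ...)
    across groups by chance rather than by the researcher's choice.
    """
    rng = random.Random(seed)  # seeded only to make this sketch reproducible
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, subject in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(subject)
    return assignment

groups = random_assignment([f"subject_{n}" for n in range(10)])
print(len(groups["experimental"]), len(groups["control"]))  # 5 5
```

Dealing subjects out in round-robin order after the shuffle also keeps the group sizes as equal as possible.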
In some cases, ethical and practical reasons prevent the conducting of experiments. A correlational study is one in which the researcher does not manipulate any variable, but instead measures two or more existing variables to determine relationships between them.
While it is tempting to treat correlational results as if one variable caused the other, without controlling the variables, it is not possible to determine causation. Causal relationships may go in two directions, or the reverse of what is assumed. There may also be a third, unknown variable which lies at the heart of the observed correlation.
Descriptive studies aim to describe the behaviour of an individual or set of individuals, without assessing relationships between variables. These studies may or may not involve numbers. They can be narrow in focus, looking at one aspect of behaviour, or broader, aiming to learn as much about one group or individual as possible.
Conducting research in a laboratory allows data to be collected under controlled conditions. However, the clinical and abnormal atmosphere of a laboratory may have an effect on the subject’s behaviour. The results may not reflect reality.
Any research conducted outside of the laboratory is called field research. These settings may include the subjects’ workplaces, homes, consumer areas, or other parts of the subjects’ normal environments. This has the disadvantage of being difficult/impossible to control, but the advantage of providing more reality-based results.
Self-report procedures are those in which people are asked to reflect and report on their own mental states and behaviour, often through a written questionnaire or an oral interview.
Observational procedures are those by which researchers observe and record behaviour without self-report. This type of data collection includes naturalistic observation, when the researcher avoids interacting with the subjects. It also includes tests, in which the researcher presents problems or situations to which the subject must respond.
All numerical methods for summarizing a set of data are descriptive statistics. The mean is the arithmetic average, determined by adding the scores and dividing by the number of scores. The median is the middlemost score, determined by ranking the scores from highest to lowest and noting the one in the exact centre. For some comparisons, the variability of a set of numbers must also be determined. When the scores cluster close to the mean, they have a low variability. Standard deviation is the most common measure of variability: the further the scores fall from the mean, the greater the standard deviation.
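These three measures are easy to compute directly; a minimal Python sketch, using made-up scores for illustration:

```python
import statistics

scores = [4, 8, 6, 5, 3, 7, 9, 5]   # hypothetical test scores

mean = sum(scores) / len(scores)    # add the scores, divide by how many there are
median = statistics.median(scores)  # the middlemost score after ranking
sd = statistics.pstdev(scores)      # standard deviation: spread around the mean

print(mean, median, round(sd, 2))   # 5.875 5.5 1.9
```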
When both variables of a correlational study are measured numerically, a statistic called the correlation coefficient can be calculated. The formula produces a result between +1.00 and -1.00. The direction of the correlation may be positive (+: as one variable increases, the other tends to increase) or negative (-: as one variable increases, the other tends to decrease). To visualize the relationship between the variables, a scatter plot may be used. From this, you can see how strong the correlation is and in which direction it goes.
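The correlation coefficient (Pearson's r) can be computed straight from its definition; a sketch in Python with hypothetical data:

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient: always between -1.00 and +1.00."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

hours_slept = [5, 6, 7, 8, 9]   # hypothetical values for two variables
mood_rating = [2, 4, 5, 7, 8]
r = correlation(hours_slept, mood_rating)
print(round(r, 2))              # 0.99: a strong positive correlation
```

Note that even an r this strong says nothing by itself about which variable, if either, causes the other.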
Inferential statistics are necessary to determine how confident a researcher can be in inferring a general conclusion from data.
When two means are compared, p is the probability of obtaining a difference as large as or larger than the observed one if the independent variable had no effect and the result were purely a matter of chance. When p is less than 0.05 (5%), the results can be considered statistically significant. Three factors influence the significance of a result:
Size of effect— If an effect is large, chances are it is also significant.
Number of subjects or observations in the study—the larger the sample, the more accurately the observed mean will reflect the true mean.
Variability of the data within the groups—an index of variability shows how different the scores within a group are from one another. The higher the variability, the more likely it is that an observed difference between group means arose by chance.
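The meaning of p can be made concrete with a permutation test: repeatedly reshuffle the group labels and count how often chance alone produces a difference at least as large as the observed one. This is an illustrative sketch with hypothetical data, not a procedure described in the text:

```python
import random

def permutation_p(group_a, group_b, n_iter=10_000, seed=1):
    """Estimate p: the probability of a mean difference as large as or
    larger than the observed one if only chance were at work."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)    # reshuffle the "group labels"
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_iter

treated = [12, 14, 15, 16, 13, 17]   # hypothetical scores
control = [9, 10, 11, 8, 12, 10]
p = permutation_p(treated, control)
print(p < 0.05)                      # True: statistically significant
```

The three factors listed above map directly onto this sketch: a larger observed effect, more subjects, or less within-group variability all shrink the estimated p.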
Ideally, bias and error should be minimized. Error is the random variability in results and is inevitable in most research. Error can often be measured and corrected for. Bias includes non-random effects caused by extraneous factors. Bias is hard to identify and cannot be corrected for with statistics.
If members of one group are chosen differently than those in another group, the sample might be considered biased. A sample is biased when it is not representative of the larger population it is supposed to describe. Random assignment to groups helps prevent bias between groups, while random sampling from the population helps ensure representativeness.
When a test can be repeated with a particular subject in a particular set of conditions and produce similar results, it is considered reliable. If the scores are greatly affected by the whims of the subjects, the test has low reliability. Validity is the extent to which a test measures what it claims to measure. If a test lacks validity, it is likely biased. Face validity is how valid a test seems to be, according to common sense. Criterion validity is determined by correlating scores with a more direct index of the desired characteristic of study.
Researchers have wishes and expectations that might affect their behaviour and observations – this is the observer-expectancy bias. If a desire is communicated unintentionally, the subject might pick up on this and behave according to expectation.
An example of observer-expectancy can be seen in the development of “facilitated communication”, in which a facilitator would help autistic people type by holding their hands up to a keyboard. At first, it seemed as if the autistic children were truly communicating. However, further research discovered that the facilitators were subconsciously influencing the movements of the autistic child’s fingers through the subtle motions of their own hands, and the resulting text was not communication controlled by the autistic child.
Observer expectations might not only influence the subjects’ behaviour, but also the observer’s observations. To prevent this, the observer can be kept blind (uninformed) about the aspects of the study that might lead them to form biased expectations. Ensuring that they don’t know which group in a between-group study has been exposed to an altered independent variable can keep observer-expectancy to a minimum.
When subjects have expectations, the results of an experiment can be biased. A double blind keeps both the observer AND the subject uninformed of whether they are or are not in a control group, receiving a placebo.
Three ethical issues must be considered when conducting research with humans:
Right to privacy: Informed consent should be obtained before the subjects take part, and they should be informed that they do not have to share information they don’t want to share.
Possibility of harm: If a study involves discomfort or harm, the psychologist must determine whether the same hypothesis can be equally tested in a harmless experiment. Subjects must also be reassured that they can quit at any time.
Deception: In some experiments, the independent variable involves a lie. Some believe that deception is intrinsically unethical and undermines truly informed consent. Others justify deception as necessary for the study of certain psychological processes.
Basic Genetic Mechanisms
Adaptation is the modification of a trait or set of traits to meet environmental challenges. Evolution is a long-term adaptive process, through generations of natural selection, in which each species is equipped with traits that help it survive in the conditions of its environment.
Genes never control behaviour directly, but rather provide the building blocks for the physical structures of the body. When the structures interact with the environment, behaviour is produced. Because of this, not all genes have a visible influence on behaviour, and some genetic traits are never activated.
Genes influence the production of protein molecules, which is how they influence the body. Structural proteins form the structure of the body’s cells, and enzymes control the rate of chemical reactions in these cells. Genes are components of long molecules called DNA (deoxyribonucleic acid), which are passed on to offspring through the gametes (sex cells) of each parent. When a new child develops, the DNA replicates with each new cell that is formed, so DNA molecules exist in every cell of the body, regulating the production of protein molecules. One definition of a gene is that it is a segment of the DNA molecule that contains the code for the particular sequence of amino acids of one type of protein. This definition is becoming broader with further research. There are now considered to be two types of genes – coding genes, which code unique protein molecules, and regulatory genes, which activate or suppress coding genes, influencing the body’s development.
Genes interact directly with an individual’s environment. Environmental effects might activate or deactivate certain genes, resulting in bodily changes that affect behaviour. Both the internal (chemical) environment and the external environment of stimuli have an effect on gene activation, protein development, physiological systems, and behaviour.
Genotypes are the sets of genes that the individual inherits. The term phenotype refers to those properties of the body and behaviour that are observable. One gene can have different effects depending on the environment and gene interaction, resulting in a different phenotype.
DNA strands make up chromosomes, of which each human cell normally contains 23 pairs. Of these, 22 are true pairs and the remaining pair is made up of the sex chromosomes. In the male, that pair consists of an “X” and a “Y” chromosome. The female has two “X” chromosomes.
Normal cell division is called mitosis, in which each chromosome is replicated into a new cell. When producing egg or sperm cells, the process of meiosis occurs. The resulting cells are not genetically alike – each chromosome replicates once, but the cell divides twice, leaving two cells with half the normal amount of chromosomes. Each egg or sperm cell is different from the others produced, thus encouraging genetic diversity.
When a sperm cell penetrates an egg cell, the result is a zygote with the full 23 pairs of chromosomes. Every zygote is different from any other. The zygote grows, through mitosis, into a child. This genetic diversity is essential for both evolution and the prevention of disease. Identical twins are the only people genetically identical to each other, as they result from the division of a single zygote after it has begun growing. Fraternal twins are not identical because they originate from two separately formed zygotes.
When the two genes that occupy the same locus (location) on a pair of chromosomes are identical, the locus is called homozygous; if they differ, it is heterozygous. Different versions of a gene that can occupy the same locus are called alleles. An allele can be dominant or recessive. If dominant, it produces observable effects in either the homozygous or the heterozygous condition. If recessive, it produces effects only in the homozygous condition. When neither allele is dominant over the other, a blend of traits occurs.
In the mid-nineteenth century, a monk named Gregor Mendel did experiments with the cross-breeding of peas, and discovered a pattern of heredity in which a certain trait would occur in a quarter of the cases, when two recessive alleles paired.
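Mendel’s one-quarter ratio follows directly from counting the equally likely allele pairings; a small Python sketch (the allele letters are arbitrary):

```python
from itertools import product

def offspring_genotypes(parent1, parent2):
    """All equally likely allele pairings for two parents (a Punnett square)."""
    return [a + b for a, b in product(parent1, parent2)]

# Cross two heterozygous parents: "A" is the dominant allele, "a" the recessive one.
genotypes = offspring_genotypes("Aa", "Aa")      # ['AA', 'Aa', 'aA', 'aa']
recessive = [g for g in genotypes if g == "aa"]  # trait shows only when homozygous
print(len(recessive) / len(genotypes))           # 0.25: Mendel's quarter
```

The other three pairings carry at least one dominant allele, so three-quarters of offspring show the dominant trait.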
In Scott and Fuller’s research crossbreeding basenji dogs and cocker spaniels, it was shown that fearfulness in dogs is a dominant gene. This predisposition towards fear, however, is still highly dependent on environmental factors.
The rare specific language impairment (SLI) was studied extensively in three generations of a family known as the KE family. It was found that SLI was inherited as a Mendelian dominant trait. This case illustrates that genes can influence behaviour by affecting the development of the brain, that each gene can have multiple effects, and that genes exert their effects through the activation of other genes. Evolution involves alterations in anatomy and behaviour as a result of alterations in genes.
Categorical characteristics are those that derive from a single gene locus. These result in sharp differences between people, as if an on-off switch was used. Most characteristics, however, can result in degrees rather than types, and are considered continuous. Characteristics that are continually variable are called polygenic characteristics. These can be affected by many genes and also by the environment.
Characteristics can be modified over generations through selective breeding. A certain breed of silver foxes, for example, has been gradually bred for tameness, allowing for that breed to be kept as pets.
In a long-term, systematic study of selective breeding, Robert Tryon wanted to show that maze-navigation ability in rats could be strongly influenced by variations in genes. He found that he was able to breed maze-bright and maze-dull rats. It is, however, contested whether intelligence itself was being bred or whether some other trait influenced the ability to navigate the maze.
Some psychologists are interested in the heritability of personality and behavioural characteristics. While they cannot perform selective breeding studies, they can observe closely-related individuals for the display of polygenic characteristics.
Charles Darwin referred to selective breeding as artificial selection. He recognized the occurrence of selection in the natural world, and called this natural selection. Natural selection is dictated by the obstacles to reproduction imposed by the environment, and the tendency for environmentally adaptive traits to be passed on through reproduction. Traits that discourage reproduction and individual survival up to reproduction age are not likely to have a chance at being passed down, as the carriers are less likely to reproduce successfully.
Genetic variability has two main sources- the reshuffling of genes during sexual reproduction, and genetic mutations. Mutations are errors that occur unpredictably during DNA replication, making a unique strand of DNA. In evolution, mutation is the ultimate source of genetic variation, and the rare, chance mutation of helpful traits can cause species to change dramatically over time.
Because the environment keeps changing over time, different traits become more or less useful. Evolution can occur rapidly, slowly, or not at all, depending on how the environment changes and the degree of genetic variability in a population. Some evolution occurs so rapidly that it can be observed, as in the case of the medium and large ground finches of the Galapagos Islands.
While some think evolution is a steady upward path to a perfect form, it is, in reality, a process of adaptation and change. There is no foreseen end result, and humans are not “more evolved” than chimpanzees, but rather are “differently evolved”.
Functionalism is the attempt to explain behaviour in terms of what it accomplishes for the individual.
Ultimate explanations: These are functional explanations at the evolutionary level, stating what role behaviour plays in survival and reproduction.
Proximate explanations: These are explanations that deal with mechanism rather than functions, stating the immediate conditions that result in the behaviour.
All complex mechanisms of behaviour and experience are products of evolution that came about through their tendency to promote reproduction and survival. Evidence for the ultimate evolutionary reasons behind human traits can come from analysis of the traits, cross-species comparisons, and studies on the results of the lack of such traits.
Traits that evolved to serve the needs of ancestors that no longer serve a purpose are called vestigial characteristics. One example is the extremely strong grasping ability of infants, reminiscent of the trait that allows baby chimpanzees to cling to the fur of their parents. Another is our preference for the now abundant substance sugar.
Some traits develop as by-products of natural selection. One example is the belly button, a functionless remnant of the umbilical cord. Some traits, like the tendency for musical or artistic ability, may be by-products of other traits, like planning and communication. They might also have been naturally selected because they helped attract mates. It is often impossible to determine causation when examining evolutionary traits.
If a characteristic occurs by chance and has neither a positive nor a negative effect on survival and reproduction, it may simply continue down through the generations because it is never weeded out. Chance changes in the frequency of such inconsequential variations, such as those between racial groups, are referred to as genetic drift.
Commonly called instincts, species-typical behaviours are those that are specific to a certain species.
Darwin suggested that human facial expressions developed as a way to communicate moods and intentions to others. Paul Ekman and Wallace Friesen developed a study on six basic emotional expressions: fear, disgust, anger, happiness, surprise, and sadness. They showed that these expressions were universally recognized among all cultures and are thus a species-typical behaviour. Eibl-Eibesfeldt documented the cross-cultural nature of other nonverbal signs, like the “eyebrow flash” that signifies greeting. Observations of the same facial expressions in blind children indicate that this behaviour is innate and not learned by sight.
Learning still has an important effect on innate behaviours. For example, while the fundamentals of language are inborn, teaching and practice coax this innate aptitude into a fully formed mode of communication.
Biological preparedness is the determining factor in identifying species-typical behaviours. We are born with the biology to learn to walk upright and the neural systems in the brain that motivate us to practice walking as toddlers.
Species-typical behaviour is a relative concept. Rather than seeking to identify behaviour as species-typical, it is more useful to look into the environmental conditions needed for the behaviour’s full development, the biology involved in the behaviour, the consequences of the behaviour and the possible evolutionary adaptive reason for the behaviour.
A homology is a similarity that exists between species because they share a common ancestor. The more closely related, the more homologies they show. Analogies, however, are similarities that stem from convergent evolution. This means that a certain trait has developed independently in more than one species. This can be seen in the flying mechanisms of birds, bats, and insects. Analogies are similar in function and general form, but not in the underlying structures and mechanisms.
Homologous traits can be studied in other animals as a way to gain insight on their presence in humans. For example, learning, motivation, and sensation are homologous in mammals and so can be studied in rats.
In research on smiling as a homologous trait between apes and humans, it has been shown that there are two types of smiles. The first expresses genuine happiness and is often accompanied by the pulling in of the skin around the eyes, producing crow’s-feet wrinkles. The other smile involves the mouth only and is usually used to give the impression of pleasantness, often in uncertain or tense circumstances. It has also been called the greeting smile.
In apes, the baring of the teeth is a symbol used by submissive apes towards dominant ones, as a way of avoiding the threat normally associated with direct eye contact. When used by dominant apes, it assures the other ape that they won’t be attacked. This is similar to its use by humans.
Primates have a smile-like expression called the relaxed open-mouth display/play face. It is usually seen in play fighting and chasing, and is homologous to laughter. It is seen as a way to ensure the playmate that there is no real aggression in the play behaviour. Human children laugh more when playing seemingly aggressive games and we tend to laugh more at physical comedy as well.
Analogies are useful for determining the functions of species-typical behaviours.
There are four broad classes of mating patterns:
Polygyny: The male mates with more than one female.
Polyandry: The female mates with more than one male.
Monogamy: One male mates with one female.
Polygynandry: Groups of multiple males and females mate with one another.
Robert Trivers defined parental investment as the time, energy, and risk to survival involved in producing, feeding, and caring for offspring. The amount of parental investment from each parent is usually unequal. In such cases, the more-invested sex will be more vigorously competed for and will be more discriminating in choosing mates.
Most mammals are polygynous. This is because female mammals, through pregnancy and lactation, are more parentally invested than males. A male’s reproductive potential is limited only by the number of females with whom he mates, encouraging males to mate with as many females as possible. Because competition for females occurs, the larger and stronger males have a higher chance of mating. The more polygynous the species, the greater the size difference between the sexes.
In cases where the male’s parental investment exceeds the female’s, as occurs in some bird species, polyandry is the norm.
When both male and female parents are needed in the raising of the young, monogamy can be observed. Over 90% of birds are monogamous. Social monogamy, the parental partnership, does not necessitate sexual monogamy.
Chimpanzees and bonobos tend to be polygynandrous. The fertile females mate with all the males during ovulation. Because of this, there is no knowledge of who is the father of the infant, protecting it from possible death from competing males. Bonobos are the most sexual primates, and also the most peaceful.
Consistent with the moderate size difference between men and women, humans are mostly monogamous but also practice moderate polygyny. Both mothers and fathers contribute parental investment, but the mother's contribution typically exceeds the father's.
Romantic love and sexual jealousy are twin emotions found in people of every culture. Love encourages a person to stay close to their mate, and jealousy helps them prevent that partnership from being intruded upon. Other monogamous animals like birds show some manifestations similar to love and jealousy. At the same time, lust drives us to extramarital affairs. In men, this can provide more opportunities for conception. In women, this can provide a safeguard in case the husband is infertile or incompatible.
Aggression involves fighting and the threat of fighting in members of the same species. Much animal aggression centres on mating.
In most mammals, especially primates, males are more violent than females. Female aggression is mostly directed at protecting the young and securing resources, while males are more easily provoked to aggression and more prone to killing. Male fighting is often over sexual matters, or over power and status in a group.
Men in every culture are more likely to maim or kill than women. Sexual jealousy is among the main motives for murder. Male violence is also directed towards the gaining of power, and the control of women.
Female bonobos tend to dominate the males, through strong alliances and collaboration that aid in disputes with males. In humans, it has been shown that in cultures that have strong female alliances, violence towards women is less common.
Helping is any behaviour that increases the survival chance or reproductive capacity of another individual; it includes cooperation and altruism. Cooperation is helping another while also helping oneself. It occurs often in the animal world, as social living offers many advantages and a cooperative group has a greater chance of survival. Altruism is helping another while decreasing one's own survival chance. Though less common, altruism occurs in many species.
The kin selection theory of altruism states that altruism preferentially helps close relatives, who are genetically similar, thus giving the genes, if not the individual, a greater chance of survival. In cases in which there is a great enough chance that apparent strangers are in fact related, altruism may extend beyond the immediate family.
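The logic of kin selection is conventionally summarized by Hamilton's rule (not named in these notes, but the standard formalization): a gene promoting altruism can spread when the benefit to the recipient, weighted by relatedness, outweighs the cost to the altruist:

```latex
r B > C
```

where \(r\) is the coefficient of relatedness between altruist and recipient (e.g. 1/2 for full siblings, 1/8 for first cousins), \(B\) is the reproductive benefit to the recipient, and \(C\) is the reproductive cost to the altruist.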
The reciprocity theory of altruism accounts for altruism even between non-kin. In this theory, altruism is a way of encouraging long-term cooperation and return benefits. Humans have a very strong drive to return help that is given to them: gratitude towards helpers, pride when returning a favour, guilt when failing to return help, and anger when another fails to return our help.
The naturalistic fallacy is the belief that whatever is natural is also moral. The philosopher Herbert Spencer believed that evolution was an upward path towards moral superiority, and that some people were more evolved than others. He coined the phrase "survival of the fittest", and his ideas led to social Darwinism as a defence for ruthless capitalism.
The deterministic fallacy is the assumption that genetic influences actually control our behaviour and that we cannot counteract them.
Learning can be defined as any process through which experience influences an individual's subsequent behaviour.
Classical conditioning is a learning process through which new reflexes are created. Reflexes are automatic, often simple, reactions to a stimulus, mediated by the nervous system. For example, when hearing a sudden loud noise you automatically tense and startle. In all reflexes, a defined environmental event (stimulus) elicits a specific behaviour (response). Reflex responses must be mediated by the nervous system. Experience influences reflexes through habituation – when a person experiences one stimulus enough times in a row, their response decreases. While habituation diminishes a currently-held reflex, classical conditioning forms a new stimulus-response sequence.
Ivan Pavlov (1849-1936) was a Russian scientist studying the reflexes involved in eating, working primarily with dogs. He measured the salivation reflex and noticed that, after enough trials, the dogs salivated in anticipation of food, on seeing the cues that normally preceded feeding. This led him to study conditioned reflexes.
Pavlov began pairing the feeding moment with the sound of a ringing bell, eventually causing the dog to salivate at the sound of the bell alone. This reflex is called a conditioned reflex, dependent on the conditions of the dog's previous experience. The bell sound is the conditioned stimulus, and the salivation the conditioned response. The original reflex is the unconditioned reflex, made up of an unconditioned stimulus (the food) and an unconditioned response (salivation). To condition a reflex, you pair a neutral stimulus, such as the bell sound, with an existing unconditioned stimulus, such as food, that produces an unconditioned response, such as salivation. Repeat this enough times and you create a conditioned stimulus and response. Conditioned responses occur often in daily life, as familiar sights, sounds, and tastes become associated with certain biological responses (e.g. the sight of a toilet and the need to urinate).
When the conditioned stimulus (the bell) is repeatedly presented without the unconditioned stimulus (food), the conditioned response (salivation) gradually disappears; this is called extinction. However, in a phenomenon known as spontaneous recovery, an extinguished reflex can reappear with the passage of time.
When new stimuli similar to the conditioned stimulus are presented, the same conditioned response often also occurs, a phenomenon called generalization. The more similar the new stimulus is to the original conditioned stimulus, the greater the response. Generalization can be abolished if the response to one stimulus is reinforced while the response to the other is extinguished; this is discrimination training. The combination of classical conditioning and discrimination training is a useful tool for studying an animal's sensory capacities.
In humans, generalization occurs not just when stimuli are physically similar, but also when they have similar meanings to a person. A Soviet psychologist classically conditioned a child to salivate to the Russian word “good” and discriminate against the word “bad”. Subsequently, in hearing statements that he subjectively considered good (i.e. “The Soviet army was victorious”) he would salivate. In subjectively bad statements, he would not salivate.
Those in the school of behaviourism believed that the science of psychology should avoid references to mental entities that cannot be directly observed, such as thoughts and emotions, and instead focus on the observable environmental events (stimuli) that elicit observable behavioural responses. Learning was the main explanatory concept of early behaviourists.
The important question of what, exactly, is learned in classical conditioning has been approached in different ways. Early behaviourists believed it is the simple addition of a stimulus-response connection.
According to Pavlov, the animal doesn’t learn a stimulus-response connection, but instead learns to connect two stimuli. When a bell is heard, a mental representation of food causes the salivation. The stimulus-stimulus theory (S-S theory), while unappealing to traditional behaviourists because of the assumption of a thought, has nevertheless been repeatedly favoured by experiments comparing it and the stimulus-response theory (S-R theory).
The S-S theory is more cognitive than the S-R theory, dealing with unobservable mental processes. The mental representation in the S-S theory can be seen as an expectation of the unconditioned stimulus. This expectancy theory suggests that different behaviour occurs when, for example, a dog expects food (salivation, tail-wagging, food-begging), than when a dog receives food (salivation, chewing, swallowing).
The expectancy theory is supported by research that shows that classical conditioning only occurs when the new stimulus is a helpful predictor of the arrival of the unconditioned stimulus.
The conditioned stimulus must come before the unconditioned stimulus to be most effective; only then does it carry predictive information.
The conditioned stimulus must signal an increased probability that the unconditioned stimulus will occur. If the unconditioned stimulus occurs just as often without the conditioned stimulus, then the predictor is useless and is not heeded.
Conditioning does not work when the animal already has a good predictor of the unconditioned stimulus. This is called the blocking effect.
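These conditions, including the blocking effect, are captured by the well-known Rescorla-Wagner model of classical conditioning, in which every stimulus present on a trial gains or loses associative strength in proportion to the prediction error. The sketch below is a minimal illustration of that idea, not something from these notes; the learning rate and trial counts are arbitrary choices.

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Update associative strengths V for each conditioned stimulus.

    trials: list of (stimuli_present, us_present) pairs.
    alpha: learning rate (arbitrary); lam: asymptote when the US occurs.
    """
    V = {}
    for stimuli, us in trials:
        total = sum(V.get(s, 0.0) for s in stimuli)   # combined prediction
        error = (lam if us else 0.0) - total          # prediction error
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error      # all present stimuli share the update
    return V

# Blocking: train the light alone first, then light + tone together.
pretraining = [(["light"], True)] * 20
compound    = [(["light", "tone"], True)] * 20
V = rescorla_wagner(pretraining + compound)
# Because the light already predicts the US, the tone gains almost no strength:
print(V["tone"] < V["light"])  # → True
```

Presenting the compound without pretraining would instead split the associative strength between light and tone, which is why an already-good predictor "blocks" learning about the new stimulus.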
John Watson found that infants have two unconditioned stimuli for fear – sudden loud noises and suddenly losing physical support. He conditioned “Little Albert” to fear laboratory rats by making a loud sound immediately when the rat was placed near the baby. In subsequent pairings, Albert became afraid of the rat. This fear generalized to other white furry objects, like rabbits and even fur coats.
Signals that reliably predict food become conditioned stimuli for salivation and body responses that occur to prepare the body for eating. These can make us feel hungrier than we naturally would be. Thus, merely seeing the symbol of a McDonald’s restaurant causes us to crave the type of foods we remember having eaten there.
Initially neutral stimuli that reliably precede sexual activity can become conditioned stimuli that themselves elicit sexual arousal. The ability to predict sexual opportunities is an adaptive tool that allows for more effective reproduction.
Experiments have shown that normal reactions to drugs can be conditioned in pairings with other stimuli. For example, coffee drinkers can become more alert by the mere smell and taste of coffee.
Many drugs have a direct effect that is followed by a compensatory effect as the body tries to return to its normal state. It has been found that only the compensatory effects can be conditioned – thus, when morphine is given to rats and then taken away, the compensatory effect, rather than bringing the body to a normal pain threshold, causes the rat to be hypersensitive to pain. This is because, unlike direct effects, compensatory effects are reflexes of the nervous system.
Drug tolerance is a phenomenon that depends partly on conditioning (and partly through the body's biological defences) – as a drug is taken repeatedly, its physiological and behavioural effects tend to decline, forcing recipients to raise their doses for the intended effect to occur. Due to conditioning, stimuli that act as predictors for the drug (the sight of the doctor, the needle, etc.) elicit the compensatory reaction before the drug is even administered, counteracting the effects of the drug. Because of this, most heroin overdoses occur when the victim uses drugs in a different setting than normally, removing cues that would normally trigger a compensatory effect.
When returning to normal life after rehabilitation, many conditioned stimuli in the environment remain. In order to prevent relapse after drug rehabilitation, some facilities attempt to introduce the recognized stimuli that would normally elicit a compensatory reaction, and slowly extinguish the conditioning. A permanent move to a new environment is the best way to prevent a relapse.
In the environment, we not only respond to stimuli, but we also act to obtain stimuli or environmental changes. Operant responses are those in which we act in the world to produce an effect. They can also be called instrumental responses. The process of teaching these behaviours is operant conditioning.
Thorndike did an experiment in which he deprived cats of food, put them in a cage, and placed food outside of it. The cage door could be opened by pulling a loop or pressing a lever. Through trial and error, the cats would eventually open the door accidentally. Each subsequent time they were placed in the cage, their response was faster, until they had learned to trigger the mechanism immediately.
Thorndike's Law of Effect states that responses which lead to positive results in one situation are more likely to be repeated in the same situation – those that lead to negative results are not likely to be repeated.
Behaviourist Burrhus Frederic Skinner made many improvements based on Thorndike's work. He created a more efficient device nicknamed the "Skinner box", which held a mechanism (a lever or button) that could be pressed to deliver a desired effect (a drop of water or a food pellet). The animal would remain in the box, leaving no constraints as to when it could choose to respond again. The small reward would not be fully satisfying, encouraging the response to be repeated. The reward is called the reinforcer: something which increases the likelihood of a behaviour being repeated. Some reinforcers have an effect only because of previous learning; these are called conditioned reinforcers (e.g. money).
Skinner argued that almost all behaviours are operantly conditioned, even if we might not be aware of it. In one experiment, uninformed subjects were in a room with music through which intermittent periods of static would occur. They could stop the static with a minute twitch of the thumb. While not consciously learning this behaviour, all subjects began to unconsciously twitch their thumbs when the static played.
If the rat in Skinner's box never accidentally pressed the lever, it could never receive reinforcement. In this situation, the technique of shaping is used: reinforcing behaviours that come increasingly closer to the desired response. Moving towards the lever is reinforced first, then touching the lever, then pressing it. This type of conditioning is used to train domestic and circus animals, and is often used indirectly to teach people new skills.
When an operantly conditioned response no longer produces a reinforcer, it gradually diminishes; this is called extinction. As in classical conditioning, the behaviour may spontaneously reappear with the passage of time.
When a response only occasionally produces a reinforcer, this is called partial reinforcement, instead of continuous reinforcement (always reinforced) or extinction (never reinforced). During initial training, continuous reinforcement is the most efficient. Once learned, the behaviour will continue with partial reinforcement, which can occur on four schedules:
Fixed-ratio schedule: A reinforcer is given after every nth response (e.g. every 5th response).
Variable-ratio schedule: A reinforcer is given after a varying number of responses, centred on some average (e.g. 5).
Fixed-interval schedule: A fixed amount of time must elapse between reinforcers (e.g. 30 seconds).
Variable-interval schedule: The period between reinforcers varies unpredictably around a certain average.
Behaviour reinforced on a variable-ratio or variable-interval schedule is hardest to extinguish, because subjects learn that being persistent is likely to lead to a reinforcer. This has been used to explain slot-machine gambling.
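As a rough illustration, the four partial-reinforcement schedules can be expressed as one decision function. The function name and the example parameters (a ratio of 5, a 30-second interval) are arbitrary choices for this sketch, not part of the notes.

```python
import random

def should_reinforce(schedule, count, elapsed, last_reward_time, n=5, interval=30.0):
    """Decide whether the current response earns a reinforcer.

    schedule: "fixed-ratio", "variable-ratio", "fixed-interval",
              or "variable-interval".
    count: responses since the last reinforcer (ratio schedules).
    elapsed / last_reward_time: seconds (interval schedules).
    n, interval: example parameters (arbitrary).
    """
    if schedule == "fixed-ratio":
        return count >= n                        # every nth response
    if schedule == "variable-ratio":
        return random.random() < 1.0 / n         # averages one reinforcer per n responses
    if schedule == "fixed-interval":
        return elapsed - last_reward_time >= interval
    if schedule == "variable-interval":
        # required wait varies unpredictably around the average interval
        return elapsed - last_reward_time >= random.uniform(0, 2 * interval)
    raise ValueError(schedule)

print(should_reinforce("fixed-ratio", 5, 0.0, 0.0))      # → True
print(should_reinforce("fixed-interval", 0, 10.0, 0.0))  # → False
```

The variable schedules are the ones driven by chance here, which mirrors why behaviour reinforced on them is hardest to extinguish: the subject can never tell from a run of unreinforced responses that reinforcement has stopped.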
Reinforcement is anything that increases the chance of a response. Positive reinforcement does this by introducing something rewarding (like a food pellet). Negative reinforcement does this by taking something unpleasant away (stopping static noise, taking a break from work). The terms positive and negative refer not to the direction of the change in behaviour but to the introduction or removal of a stimulus.
Punishment is any process through which the consequence of a response decreases the likelihood of its recurrence. Positive punishment involves adding an unpleasant stimulus (a slap, an electric shock). Negative punishment involves removing something pleasant (taking food away, charging money).
Animals also come to understand in which situations a certain behaviour will produce a reward.
The purpose of discrimination training in operant conditioning is to reinforce a particular response in the presence of a certain stimulus, and extinguish the response in the absence of this stimulus. This stimulus is called the discriminative stimulus. In a way, the discriminative stimulus acts as a predictor of reinforcement. Operant discrimination training can be used to learn more about infants and animals.
Generalization also occurs in operant conditioning, and this phenomenon can help us discover how well an animal can grasp a concept. For example, Richard Herrnstein (1979) operantly conditioned pigeons to peck for grain when presented with a slide picturing a tree or part of a tree, and not to peck at other slides. The success of this experiment led him to conclude that pigeons have a concept of trees: a rule that allows them to categorize stimuli into groups. Further research found that pigeons can also form concepts of cars, chairs, faces, and even abstract symbols.
Some theorists argue that even in operant conditioning, an animal learns more than just the S-R relationship; they also learn an S-S relationship between the experimental conditions (the Skinner Box) and the S-R relationship (that pressing a lever will bring food).
The means-end expectancy is the expectation that responding in a particular way, in a particular situation, will have a particular effect. For example, on hearing a tone, a rat will expect that pressing the lever will yield food, and will press the lever if hungry.
Animals are able to distinguish between different reinforcers, and evidence shows that they do cognitively choose whether they need the reinforcer. Reward contrast effects involve shifts in response rate when the value of the reward changes. Rats rewarded with tasty food will press the lever more than rats rewarded with less tasty food. If suddenly given a less-valuable reward, the rats used to valuable rewards will press the lever even less than rats given the less-valuable reward all along. This is the negative contrast effect (imagine that the rats are spoiled children). In the opposite case, the rats given low-value rewards will respond much more than other rats when suddenly presented with the high-value reward. This is the positive contrast effect. The animal has learned to expect a certain reward and is able to compare the received reward with the expected reward.
An experiment was done to examine reward systems in nursery-school children. When given a reward for a particular activity, the initial reaction of children was to spend more time doing this activity than those in a non-reward situation. However, once the rewards stopped being given, these children stopped that activity almost entirely, participating less than those who were never given a reward. This decline is the overjustification effect. The reward gives the task more justification than it needs – normally it may be done for its own sake. Once the task becomes work, they stop doing it without payoff.
Play is instinctive behaviour with no immediate useful purpose that provides a sense of fun.
Groos argued that play is a means to practice species-typical behaviours.
Young animals play more than adults.
Species that need to learn more to survive tend to be more playful. (Primates more than other mammals, carnivores more than herbivores, etc.)
Young animals tend to play at skills that will be useful in their adult life.
Repetition is a hallmark of play and is also essential in learning.
Play involves deliberately challenging oneself.
Culture is generally unique to humans – we not only learn species-typical skills but also culturally unique skills. Thus, humans play largely through imitation, enacting what they see other people in their culture do.
There are two broad categories of learning – learning to do and learning about the environment. Exploration serves the latter category, allowing people to discover where food, shelter, and other resources occur around them. All mammals explore new environments and objects.
Exploration often involves an element of fear (which drives you away) balanced with curiosity (which drives you forward). Typical exploration behaviour involves first keeping a distance, then a repeated set of cautious approaches and speedy retreats. Once more familiar, the animal will move less but patrol the area, often standing on hind legs for a better view.
Rats can learn pathways through a maze without a reward at the end. In one experiment, rats that received no reward for 10 days (and showed minimal improvement in the maze) were given a reward once, and were subsequently more effective at the maze than rats that had been rewarded every time. The conclusion is that rewards influence not what animals learn, but what they do. Latent learning is learning that occurs but is not immediately demonstrated in behaviour.
Observational learning is the knowledge we get by watching other people and how they behave. We are able to judge social compatibility and predict some of the behaviours of others, and can judge how it is proper to behave in any situation.
Many animals have been found to learn things faster after seeing the task performed by another. While imitation is a complex cognitive activity that mostly occurs in primates, observational learning in other mammals is simpler. One aspect is stimulus enhancement, the increase in attractiveness of an object that another is interacting with. Goal enhancement is an increase in the motivation to obtain the same rewards that another has been seen attaining. Humans and some primates have organized systems of neurons designed to make imitation easy – these are called mirror neurons. When we see someone else make a motion, the same neurons that would be involved in performing that action become active.
Culture consists of the beliefs and traditions that are passed on through generations. Chimpanzees are next to humans in exhibiting culture – those that live in geographically different areas develop different traditions, such as tool design and mating displays.
Humans reflexively look in the same direction that we see someone else’s eyes move, looking to the same object they may be looking at. This is called gaze following. This begins in infancy, and helps infants learn language and understand the things around them. We do this more than most animals, and our eyes may even be adapted to it with the clear contrast of white sclera and coloured irises.
If one becomes sick after eating a new food, that food will be avoided. This is sometimes described in terms of classical conditioning, though it differs from standard cases: food-aversion learning occurs even when the nausea follows eating by much more than a few minutes. Food that tastes or smells like the offending food will also be avoided, even if it looks different.
If a rat lacks a certain vitamin or nutrient, food containing that nutrient will become its favoured food. There is evidence that people prefer food that is high in calories, a preference that was once very adaptive.
Animals are strongly influenced by the food choices of others of their species. Humans also learn food preferences by observation, especially between ages 1 and 4. Between ages 4 and 8, when the child becomes more adventurous, children also tend to become pickier about what they eat, which may be an evolutionary adaptation protecting against poison.
There is a natural bias in fear-related learning: we more easily come to fear things that posed a threat to our evolutionary ancestors. Thus, learning to fear snakes and bears is natural, while learning to fear blankets or flowers or even electrical sockets is not.
Imprinting is a phenomenon wherein an animal will learn to recognize its mother – this most often occurs in species of bird. There is a restricted critical period in which this can occur, usually within the first five days after hatching. Given the choice between a female of its own species and another object, the baby birds will follow the female.
Birds that hide seeds for the winter tend to have a larger hippocampus than birds that do not, indicating enhanced spatial memory; they use landmarks to find their caches. Salmon, by contrast, use the smell of the water to navigate home.
Neurons (also known as nerve cells) communicate through synapses. These are part of the complex nervous system which analyzes sensory data, creates mental experience and controls movements.
The brain and spinal cord make up the central nervous system (CNS) and the nerves that extend from this system make up the peripheral nervous system (PNS). A nerve is a bundle of neurons that connects the sensory organs, muscles and glands to the central nervous system. There are three general categories of neurons:
Sensory neurons which carry information from the sensory organs to the CNS.
Motor neurons which carry messages from the CNS to the muscles and glands.
Interneurons which exist in the CNS and carry messages between neurons. They collect, organize, and integrate information and are the most numerous of neuron types.
The basic parts of the neuron are the cell body (which contains the nucleus), the dendrites (tubular extensions that receive information), and the axon (another tubular extension, which carries messages to other neurons or to muscle cells). Each branch of an axon ends in a bulbous swelling called an axon terminal, designed to release chemical transmitter molecules onto other neurons. Some axons have a myelin sheath that speeds the movement of neural impulses.
Neurons fire all-or-none impulses called action potentials. They are usually triggered at the junction between the cell body and the axon, and travel down the axon to the axon terminals. By varying its rate of action potentials, a neuron varies the strength of its effect on other neurons or muscle cells.
The cell membrane that encloses each neuron is a porous skin which permits or denies passage to different chemicals. Intracellular fluid fills the inside of the neuron and extracellular fluid surrounds the outside. These fluids contain soluble protein molecules (A-), potassium ions (K+), sodium ions (Na+), and chloride ions (Cl-). More negatively charged particles exist inside the cell than outside, creating an electrical charge across the membrane called the resting potential, which makes the action potential possible.
There are two phases of the action potential, initiated by a change in the structure of the cell membrane that permits sodium ions to pass through. In the depolarization phase, a concentration force and an electrical force drive sodium ions inward, causing the electrical charge across the membrane to reverse itself. Potassium ions are then driven outward by the now-positive charge within the cell, constituting the repolarization phase and returning the cell to its resting potential.
The critical level of depolarization (in millivolts) needed to trigger an action potential is referred to as the cell's threshold. An action potential is a chain reaction that begins at one end of the axon and moves down to the other, following each branch until it reaches the axon terminals. The speed of this movement is affected by both the axon's diameter and the presence of a myelin sheath.
A synapse is the junction between each axon terminal and the dendrite or cell body of the receiving neuron. An action potential causes the terminal to release a neurotransmitter that influences the production of action potentials in the receiving neuron. There are two types of synapses – fast and slow.
The narrow gap that separates the axon terminal from the receiving cell's membrane is called the synaptic cleft. The membrane of the axon terminal is the presynaptic membrane, and the membrane on the other side of the cleft is the postsynaptic membrane. Tiny vesicles in the axon terminal contain the chemical neurotransmitter. When activated by an action potential, the vesicles release neurotransmitter into the cleft, where it binds to the postsynaptic membrane. An excitatory synapse opens sodium (Na+) channels in the postsynaptic membrane, causing a slight depolarization of the receiving neuron and increasing its rate of action potentials. An inhibitory synapse opens either chloride (Cl-) or potassium (K+) channels, causing a slight hyperpolarization of the receiving neuron and decreasing its rate of action potentials.
Whenever the axon membrane is depolarized beyond the critical value, an action potential is triggered; the rate of action potentials in the postsynaptic neuron's axon thus depends on the balance of depolarizing and hyperpolarizing influences from its excitatory and inhibitory synapses.
Slow neurotransmitters trigger sequences of biochemical events that take time to develop in postsynaptic neurons and affect the neuron’s functioning for longer. Neuromodulators are transmitters that alter the cell in more enduring ways. Neuromodulators influence emotional states, sleep, arousal, learning and motivation. There are many slow-acting transmitters, including the class of neuropeptides and biogenic amines.
Neurons are organized into nuclei (clusters of cell bodies in the CNS) and tracts (bundles of axons that move between nuclei). Tracts are white matter and nuclei are gray matter.
Brain damage often leaves a person deficient in a particular area of mental functioning but still fully capable in others. By studying what areas of the brain were damaged, psychologists can come to an insight as to the normal function of that area of the brain.
In transcranial magnetic stimulation (TMS), a pulse of electricity is sent through a copper coil, producing a magnetic field that passes through the scalp and skull. This causes a temporary loss in neuron functioning in the area under the coil, and allows for a reversible simulation of brain damage.
Using an electroencephalogram (EEG), the activity in different areas of the brain can be detected and recorded in waves, using electrodes placed on the scalp. The brief change in EEG recording immediately following a stimulus is called an event-related potential (ERP). Repeated testing can produce an average ERP, which can be compared with other average ERPs to determine which areas of the brain are active when presented with the stimulus.
Increased neural activity in an area of the brain is always accompanied by increased blood flow to that area. Using positron emission tomography (PET), three-dimensional neuroimages can be created; this method involves injecting a harmless amount of radioactive substance into the blood, which can then be measured. A newer method is functional magnetic resonance imaging (fMRI), which creates a magnetic field around the head, causing haemoglobin in the blood to give off detectable radio waves. PET and fMRI can depict activity even deep in the brain.
In a laboratory, scientists can deliberately cause lesions in the brains of rats or other laboratory animals, either electrically with the help of a stereotaxic instrument or chemically with a cannula. Making lesions in the primitive areas of the brain where functioning is similar in all mammals has led to insights about basic motivation and emotion.
Neurons can be stimulated electrically with an electrode lowered into the brain then cemented in place. It can be easily activated after surgery through a wire or by radio. While being stimulated, the animal will exhibit drive states or emotional states that last as long as the stimulation remains.
Thin microelectrodes can be permanently implanted in the brain, penetrating into the cell bodies of single neurons and acting as a recording device when that neuron is firing.
The nervous system consists of two distinct and interacting hierarchies: the sensory-perceptual hierarchy (which controls data processing) and the motor-control hierarchy (which controls movement).
The two classes of nerves in the peripheral nervous system (PNS) are spinal nerves, which extend from the spinal cord, and cranial nerves, which project directly from the brain. There are 12 pairs of cranial nerves and 31 pairs of spinal nerves.
The rates and patterns of action potentials in sensory neurons constitute the data used to inform the body of the external and internal environment. Collectively, the sensations translated through these inputs are referred to as somatosensation.
All behavioural decisions of the nervous systems eventually travel through motor neurons to the muscles and glands.
Motor neurons act on skeletal muscles (those attached to bones, which produce externally visible movements) and on visceral muscles and glands (those forming the walls of the heart, arteries, stomach, and intestines). The visceral muscles and glands continue to function even in the absence of nervous system control. Most of them receive two opposing sets of neurons, from the sympathetic and parasympathetic systems. The sympathetic system reacts to stressful stimulation and activates the “fight or flight” response, increasing the heart rate, releasing energy from storage, increasing blood flow to the skeletal muscles, and inhibiting digestion. The parasympathetic system has the exact opposite effects.
The spinal cord has ascending tracts that bring information to the brain, and descending tracts that carry motor commands to the muscles and glands.
Some simple reflexes are controlled by the spinal cord alone. One of these, the flexion reflex, involves the contraction of flexor muscles in the presence of a painful stimulus (causing someone to pull their hand away at a pinprick).
Networks of neurons called pattern generators stimulate one another to produce bursts of action potentials that wax and wane in a rhythm – these are used in walking, running, flying, and swimming, etc.
At the point where the spinal cord reaches the brain, it enlarges to become the brainstem. Moving upward, the brainstem consists of the medulla, pons, and midbrain, which together make up the most primitive part of the brain. The medulla and pons organize more complex reflexes than those of the spine, including postural reflexes that control balance, and vital reflexes that control breathing, heart rate, and metabolism.
Sitting above the brainstem is the thalamus, a structure that acts as a relay between the various parts of the brain. It also plays a role in the arousal of activity in the brain as a whole.
The cerebellum and basal ganglia both help to coordinate skilled movements. The cerebellum seems to control rapid, well-timed movements. It uses sensory information in a feedforward manner, planning for movement in advance. The basal ganglia control slower, more deliberate movements. They use information in a feedback manner, in which movements are adjusted while they occur.
The limbic system divides the more primal parts of the brain from the new part (the cerebral cortex). It consists of the amygdala, the hippocampus, the pituitary gland, and the hypothalamus. The amygdala is involved in basic emotions and drives. The hippocampus keeps track of spatial location and plays a role in memory. The pituitary gland plays a big role in development and puberty. The hypothalamus helps regulate the internal environment of the body by influencing the activity of the autonomic nervous system and affecting basic drive states.
The cerebral cortex is the outer layer of the major portion of the brain and makes up its largest part. It is divided in two hemispheres, each divided into four lobes. These are the occipital, temporal, parietal, and frontal lobes.
There are three categories of functional regions in the brain:
Primary sensory areas: receive nerve signals from sensory nerves by relay with the thalamus.
Primary motor area: sends axons to motor neurons in the brainstem and spinal cord.
Association areas: receive input from sensory areas and the lower brain, involved in complex processes like perception, decision, and thought.
The principle of topographic organization refers to the fact that the primary sensory and motor areas of the brain receive signals from and send signals to adjacent portions of sensory and muscle tissue. Disproportionately large portions of these areas are devoted to the fingers and the vocal apparatus.
Premotor areas, located in front of the primary motor area, are in charge of setting up neural programs for organizing movements. They become active while a person mentally rehearses coordinated movements and during the production of these movements.
The prefrontal cortex is the area of the brain that is much bigger in humans than in most other animals. It is involved in planning, short-term and long-term memory, and complex problem-solving.
The corpus callosum is a complex bundle of axons located below the fissure dividing the hemispheres of the brain and serves as a connection between them. In primary sensory and motor functions, both sides are symmetrical, each dealing with the opposite side of the body. This is called a contralateral connection. The largest distinction between the two areas of the brain is that the left hemisphere has large areas specialized in language and the right is specialized in nonverbal, visuospatial information analysis.
In the 1960s, the corpus callosum of epilepsy sufferers was severed as a last-resort treatment for the illness. Subsequent tests on these split-brain patients revealed split-mind behaviour.
Information from the part of the visual field on the right of a person is normally sent to the left hemisphere of the brain before it goes to the right hemisphere. The opposite is also true. Without a corpus callosum to share this information, it is processed by only one half of the split-brain patient’s brain – thus, each side can be tested separately.
During split-brain tests, a patient might be shown an image on one side of their visual field. Flashed on the right, the image could be described accurately in words. Flashed on the left, however, the patient claimed to see nothing, or guessed randomly. Yet when asked to reach under a barrier with the left hand (not the right), the person could choose the correct object by touch. There are many individual differences among split-brain patients as to the level of speech comprehension in the right hemisphere – some have none, while a rare few have a larger language area on the right side.
Immediately after surgery, it is most likely for conflicts between sides of the brain to occur. However, over time this rights itself, partly due to the fact that both sides of the brain usually receive similar information, and not all of the parts of the brain rely on connection through the corpus callosum. Each hemisphere also learns to communicate indirectly by observing the behaviour produced by the other side (cross-cuing).
To explain incongruent behaviour produced by the right hemisphere of the brain, the left hemisphere often produces a quick, confident, and frequently false explanation. For example, in cases where an emotion was triggered by a right-hemisphere stimulus, the person would explain away the sudden emotion by pointing to a plausible but false trigger. This has led to the conclusion that the left hemisphere acts as an interpreter of seemingly contradictory or irrational things.
Loss of language related to brain damage is called aphasia, of which there are many types.
Damage to the Broca’s area of the brain causes speech to become telegraphic (short and simple). This is called Broca’s aphasia, or nonfluent aphasia. Thus, Broca’s area seems to be crucial both for the fluent articulation of words and sentences, and for the transformation of complex sentences into simpler ones in order to extract their meaning.
In the left temporal lobe lies the Wernicke’s area of the brain. Damage to this area leads to a different form of aphasia in which patients have difficulty both understanding the meanings of words and finding the appropriate words to express what they mean. Grammatical words (a, the, but, and, etc.) remain present, but nouns, verbs, and adjectives disappear. This type of aphasia is called Wernicke’s aphasia, or fluent aphasia. It suggests that this part of the brain is involved in translating the sounds of words into their meanings and in recalling words to express meaning.
PET and fMRI tests support these conclusions. When people generate appropriate verbs in response to visual or auditory stimuli, the Broca’s and Wernicke’s areas of the brain are the most active.
In studies where rats were given either an object-rich environment that could be explored and played with, or a barren environment with little stimulation, it was shown that the enriched environment produced denser, more fully developed brains than those of the deprived rats. New neurons can be generated, especially in the hippocampus, in response to environmental stimulation.
When a new skill is developed, neurons are recruited into the performance of that skill. Thus, when examining the brains of violinists, the areas of the somatosensory cortex that respond to the stimulation of the fingers are much larger than in the average person. Among blind people, the occipital lobe (usually related to interpreting visual information) is repurposed for tasks that make up for the deficiency, such as locating the direction of sounds. These changes can happen quickly in response to blindness.
In a study of London cab drivers, it was found that the hippocampus (the area most involved in spatial learning and memory) was significantly larger than in otherwise similar people who had no cab-driving experience. The longer the cab-driving experience, the bigger the posterior hippocampus.
Donald Hebb (1949) theorized that some synapses grow stronger when the postsynaptic neuron fires immediately after the presynaptic neuron, leading to the conclusion that neurons can “learn” to respond to new stimuli. The phenomenon called long-term potentiation (LTP) supports this theory. In LTP, neurons are given a burst of electrical pulses, strengthening the synapses and eliciting a stronger response from postsynaptic neurons. Some sets of neurons in the mammalian brain behave in exactly this way.
Experiments provide evidence that LTP is involved in learning. Through genetic engineering that enhanced LTP, researchers created the “Doogie” mouse. These mice showed more LTP in response to electrical stimulation than normal mice. As predicted, the Doogie mice had better memory capacity, performed better on learning tests, and showed increased object recognition.
Hormones are chemical messengers that are released into the bloodstream and carried to all parts of the body, acting upon target tissues. The classic hormones are secreted by the endocrine glands.
The main difference between hormones and neurotransmitters is the distance from the site of release to the site of action. Neurotransmitters must cross only the synaptic cleft, while hormones travel through the entire circulatory system of the body before reaching their target cells. Despite these differences, hormones and neurotransmitters seem to have a common origin, evidenced by the chemical similarity of many. Neurohormones provide yet more evidence, as they are chemicals produced by neurons yet released not into the synaptic cleft but rather into capillaries (tiny blood vessels).
Hormones affect the growth of bodily structures, influence the metabolism, and act on the brain to influence moods and drives. Testosterone levels before birth produce different brain and body developments between the sexes, and later increase these differences during puberty. Hormones can have short-term effects lasting between minutes and days, such as those released in response to stress.
The brain controls the pituitary gland through neurohormones, and the pituitary in turn controls and stimulates the other glands of the endocrine system. The posterior lobe of the pituitary gland consists of modified neurons, neurosecretory cells, which extend down from the hypothalamus. When activated, these release neurohormones that enter the bloodstream to affect the body. The anterior lobe is connected to the brain by capillaries. When the brain’s hypothalamus produces releasing factors, these neurohormones are carried to the anterior pituitary, which releases hormones into the bloodstream.
Drugs reach the brain first by entering the blood stream. Ingested drugs enter through the intestinal walls, inhaled drugs through capillaries in the lungs, etc. The drug must pass through the blood-brain barrier to reach the brain. Fat-soluble drugs are most easily passed through.
Drugs can influence synapse activity in three ways:
Acting upon the presynaptic neuron to promote or inhibit the release of neurotransmitters.
Acting within the cleft to promote or inhibit processes that normally terminate the transmitter’s actions.
Acting directly on postsynaptic receptors, either reproducing the effect or blocking the effect.
Depending on the level of the behaviour-control hierarchy at which a drug works, different behavioural effects will be produced. Drugs that work at the highest level of the behaviour-control hierarchy are called psychoactive drugs, because they influence psychological processes, altering mood, arousal, thoughts, and perceptions.
General Principles
Motivation is a term used to describe the factors that cause an individual to behave in a certain way at a certain time. The motivational state or drive is an internal condition that leads a person towards a specific type of goals and can increase or decrease over time. Motivated behaviour is directed toward incentives (reinforcers, rewards, goals). Drives and incentives influence one another – the stronger the drive, the more attractive the incentive; the more attractive the incentive, the stronger the drive.
Physiological processes work towards achieving homeostasis, the constant internal conditions that the body must maintain to remain healthy. These conditions include body temperature, oxygen, minerals, water, and energy-producing food molecules. Thus, when low on a certain needed material, the body will increase the drive to replenish that material. When lacking in salt, we crave salty foods.
Homeostasis plays no role in drives like the sex drive: there is no basic tissue need for sex, and it supplies no vital bodily substance. Drives that serve homeostasis are therefore called regulatory drives, and all others are called nonregulatory drives.
Regulatory drives: drives that maintain homeostasis.
Safety drives: drives that motivate to avoid, escape, or fight dangers – these include fear, anger, and possibly sleep.
Reproductive drives: drives that motivate animals to reproduce and care for their offspring, including libido and sexual jealousy.
Social drives: drives that motivate people to cooperate with others and seek acceptance and approval of social groups.
Educative drives: drives to play and explore, like curiosity.
Some drives do not seem to serve a survival or reproduction purpose. For example, our desire to produce and experience art and fiction seems to have no direct evolutionary gain. Perhaps, however, these activities provide vicarious experiences that help us learn.
According to the central-state theory of drives, different drives correspond to neural activity in different sets of neurons in the brain. These neural sets are called central drive systems. They are characterized by the ability to receive and integrate the signals that affect drive states, and they act on all processes involved in carrying out motivated actions. The hypothalamus is believed to be the central hub of these central drive systems.
There are three components of the concept of reward:
Liking: The subjective feeling of pleasure we experience when we receive a reward.
Wanting: The desire to obtain a reward.
Reinforcement: The effects of rewards in the promotion of learning.
Research on rats found the presence of a pleasure area in the brain, located especially in the tract called the medial forebrain bundle. The cell bodies of these neurons lie in the midbrain and their synaptic terminals in the nucleus accumbens. This part of the brain is now seen as crucial for reward in humans and animals.
The release of the neurotransmitter dopamine is essential for “wanting” but not for “liking”. On the other hand, endorphins are released in “liking” situations. The facial expression associated with “liking” can actually be elicited with drugs that increase endorphins.
The release of dopamine into the nucleus accumbens motivates animals to work for rewards and promotes the long-term potentiation (LTP) of neural connections. It is directly related to learning. When a reward is unexpected, dopamine is released immediately after it is received to reinforce the association between the reward and the preceding stimulus-response. Once the reward comes to be expected, dopamine is released prior to the reward, in response to the conditioned stimulus.
Addictions occur when drugs interact with the pleasure centres of our brain. Drugs like heroin, opium, and amphetamines mimic or promote the effects of dopamine and endorphins in the nucleus accumbens. Drugs are addictive because of their activation of the dopamine-receiving neurons responsible for promoting reward-based learning. While over time, “liking” decreases in drug users, “wanting” continues to grow.
Gambling, like drugs, overrides the dopamine-conserving mechanism of the brain, thus reinforcing “wanting” to gamble. Those with a genetically higher number of dopamine receptors are most at risk for compulsive gambling.
Hunger is a regulatory system. When the amount of food materials in the body is at an appropriate level for survival, we are satisfied. However, when this amount is lacking, we feel hungry – eating replenishes us to this homeostasis.
The neurons related to the hunger drive are concentrated in the arcuate nucleus in the centre of the base of the hypothalamus. It contains appetite-stimulating neurons and appetite-suppressing neurons. Neuropeptide Y is the most potent appetite-stimulating neurotransmitter discovered, and can cause voracious hunger when injected into the hypothalamus of a sated animal.
After eating a large meal, the body temperature rises slightly, blood glucose levels increase, the stomach and intestines become distended, and hormones are produced by the endocrine cells in the gastrointestinal system. These hormones include the appetite-suppressing hormone peptide YY3-36 (PYY). Lean people have higher natural levels of PYY than obese subjects.
Long-term effects of eating include the addition of body fat stored in fat cells – a certain amount of fat is adaptive when food is scarce, giving the body a store of energy that can be drawn upon. Fat cells secrete the hormone leptin, an appetite suppressor, when they contain a certain amount of fat. A lack of the gene needed to produce leptin leads to obesity that can only be stemmed by the reintroduction of leptin. When humans become obese, their brains often lose sensitivity to leptin.
Hunger is provoked by both internal and external events. The sight and smell of food can cause hunger even when you are sated. The phenomenon of sensory-specific satiety occurs when we satisfy an appetite for one taste but experience a renewed appetite in the presence of a different-tasting food. A greater variety of food choices therefore leads to a tendency to eat more.
There is a large genetic component to obesity in Western cultures – some people have a predisposition for obesity when presented with the environmental conditions of a western country. When the conditions of over-abundance are not present, however, these genes do not activate.
Most dieters have trouble keeping off the weight they lose, as hunger mechanisms in the brain activate during a decreased food intake, and metabolism also declines. Exercise is crucial for successful long-term weight loss. Any diet that cannot be permanently maintained will only lead to weight regain. A shift from high-calorie fats and sweets to low-calorie filling foods like fruits, vegetables, and complex carbohydrates is also smart. Using sensory-specific satiety to advantage, keeping a highly varied selection of healthy food in the house and few choices of unhealthy food will encourage healthy eating.
Testosterone, a hormone produced in men by the testes, is crucial in the maintenance of sex drive in men. In castrated or testosterone-deficient men, the injection of testosterone on a regular basis greatly increases sexual drive.
Self-confidence promoting conditions tend to increase a man’s production of testosterone. Winning a game or having a successful social encounter with a woman, for example, will produce a burst of testosterone. Enough of these over time could help maintain long-term capacity for sexual drive. Testosterone also seems to increase aggressiveness and competitiveness.
After puberty, the ovaries begin to produce estrogen and progesterone in a cycle. This is called the estrous cycle in animals and the menstrual cycle in humans.
In most mammals, sexual behaviour in females peaks during the time of ovulation. Removal of ovaries in female nonhuman mammals abolishes sex drive. In rats, the ventromedial area of the hypothalamus plays a similar role in sex drive as the preoptic area in the male. Many primates will copulate out of their peak ovulation period, due perhaps to the production of androgens (male hormones) in the adrenal glands.
In humans, the sex drive can be high or low at any part of the cycle, likely due to a greater reliance on adrenal androgens. Women without ovaries do not generally experience a decline in sex drive, but women without adrenals do. There is however, a cyclical tendency to initiate sex more often during the ovulation period. While arousability (the sexual receptiveness to appropriate stimuli) remains constant throughout the menstrual cycle, proceptivity (the motivation to initiate sex even without arousing stimuli present) occurs mostly in the ovulation period.
Sex hormones have two different effects on the brain: activating effects strengthen and activate the sex drive, while differentiating effects occur before birth and influence differences in sex drive and orientation.
Before birth, male infants (those with the Y chromosome) produce testosterone which acts on the brain and bodily structures, differentiating it into male. There is a critical period in which testosterone must act for this differentiation to occur, and there is a difference in timing between the change in the genitals and that in the brain. It is possible, then, for a person to have the genitals of one gender and the brain structures of the other.
Studies show that 2-5% of males are exclusively homosexual and 1-2% of women, while bisexuality is more common in women than men. Genetic differences among individuals play the largest role in determining orientation. Other environmental influences may involve levels of prenatal stress that might alter the amount of testosterone released in the brain of the fetus in the critical period. One consistent factor is the fraternal birth-order effect, in which a boy is more likely to be homosexual the more older brothers he has.
While in males, experience has little to no impact on homosexual orientation, female sexuality is much more variable. Females generally show sexual arousal towards both genders, while males tend to be firmly homosexual or heterosexual. Women are more likely to see their own homosexuality as a choice.
Alpha waves are large, regular waves that occur when a person is awake, but with their eyes closed and their thoughts relaxed. When trying to focus on a topic or problem, or when becoming excited, the EEG pattern shows quicker beta waves. During sleep, the brain follows a cycle of four stages of brain waves.
Stage 1: transition between wakefulness and sleep.
Stage 2: deeper sleep, marked by spindles, short bursts of activity.
Stage 3: 10-50% of the EEG includes delta waves.
Stage 4: Delta wave sleep
After the first 80-100 minutes of sleep, sleep lightens to stage 3 and 2, and then enters REM (rapid eye movement) sleep. REM sleep is characterized by beta waves that seem similar to those seen in an active, awake, thinking state. Breathing and heart rate become rapid and irregular, males experience erections, and twitching occurs in the fingers, face, and eyes.
When awoken from REM sleep, people remember true dreams (those which they feel were like reality). Most dreams involve people and things meaningful to the dreamer, and often strong, often negative emotions. Being awoken from non-REM sleep, many people report mental activity that is more akin to daytime thinking – this is called sleep thought. Sleep thought often involves working on some problem that had been unresolved the day before.
The preservation and protection theory suggests that the sleep drive motivates an animal to find a safe, comfortable place to spend the most dangerous (and least productive) hours of the day in a quiet, energy-conserving state. Support for this theory includes differences in the sleep habits of animals that seem based on their feeding habits and safety. Grazing animals sleep less than carnivores, likely because vegetation provides fewer calories and must be eaten in greater quantities to maintain homeostasis – for carnivores, meat is high in calories and prey is often best hunted at a specific time of day.
The body-restoration theory suggests that sleep puts the worn-out body back in shape. Some support for this theory includes the observation that growth-hormone involved in repairing the body is secreted more often in sleep, and the metabolism is much lower during sleep. Prolonged periods of sleeplessness also lead to bodily breakdown and eventual death.
The brain-maintenance theory suggests that REM sleep provides exercise to groups of neurons in the brain that might degenerate if inactive for too long. Evidence for this theory includes higher REM sleep in the foetuses and infants than adults, in almost all species.
One theory of dreams suggests that they provide a means for rehearsing and resolving threatening experiences and possibilities from real life. Other theories suggest that dreams are actually side effects of the physiological changes that occur in REM sleep. The brain might develop some degree of logic to connect the random bursts of thought. Findings have shown that the images which dreams connect can be influenced by thoughts that have been suppressed the day before.
While some people need 8 hours of sleep to function well, rare people who need much less are called nonsomniacs. These people, while sleeping much less than most others, show no negative physical effects. Nonsomniacs tend to show a higher than average score on “Attitude to Life” tests and high energy. Nonsomnia is different from insomnia, which is a common difficulty sleeping that occurs in those with a normal drive for sleep. After 72 hours without sleep, people may begin to have distorted perceptions and be very irritable.
The sleep drive waxes and wanes in a cycle over a 24-hour period. This is considered a circadian rhythm. It is controlled by a specific nucleus in the hypothalamus called the suprachiasmatic nucleus. Along with the sleep drive, this nucleus also controls a rhythm of body temperature and certain other hormones (especially melatonin).
The sleep drive is partially regulated by input from the eyes – bright light in the morning and dim light/darkness in the evening helps maintain a daily cycle of sleep. People with sleep-onset insomnia, the inability to fall asleep until very late at night or early in the morning, can be partly cured by morning treatments of bright light and by dimming the lights during the evening.
There is a sleep-activating centre in the ventromedial preoptic nucleus, which lies just in front of the suprachiasmatic nucleus. There is also a wake-activating set of neurons in the side and rear portions of the hypothalamus that produce orexins (excitatory neuropeptides), which help maintain prolonged periods of alertness.
Emotions are subjective feelings that are mentally directed towards an object. This object can be a person, thing, or event that is either real or imagined but necessarily meaningful. Self-conscious emotions are those in which the object of the emotion is one’s own behaviour or self. The feeling associated with any particular emotion is referred to as affect. There are two dimensions of affect. The first is the degree of pleasantness; the other is the degree of arousal. Feelings may manifest different emotions depending on their object, their degree of intensity, and their positivity. Pleasure is pride when the object is oneself and love when the object is someone else. When feelings have no object, they are considered a mood. Moods can last over a long period, or they could be more temporary.
Using methods based on language lexicons, Robert Plutchik identified eight primary emotions: joy/sorrow, anger/fear, acceptance/disgust, and surprise/expectancy. He believed that these emotions are often mixed to produce other emotions.
Positive emotions tend to draw us to objects that can help us, while negative emotions repel us from things that might hurt us. The stronger the emotion, the more focused we are on the object of that emotion. Because they communicate our intentions and needs very clearly, emotions promote survival within a group and reproduction possibilities. The facial expressions related to emotions may also have an adaptive purpose. Consider the flared nostrils and wide eyes of fear – these motions increase the field of view and the sense of smell.
Emotions are often accompanied by peripheral changes in the body, beyond the central nervous system.
William James (1890) theorized that the physical changes associated with each emotion actually cause the emotion to be felt. For instance, a quickened heart and trembling limbs cause the experience of fear. Without these physical indicators, emotions would not be felt. This automatic physical reaction to an emotion-provoking stimulus is an adaptive means of responding to emergencies.
According to Stanley Schachter (1960s), emotion arises both from sensory feedback from the body’s response and from cognitive processes (perceptions and thoughts) about the event that provoked the response. Cognition influences the type of emotion, and bodily feedback influences the intensity of that emotion. Tests show that the hormone epinephrine (adrenalin) influences the intensity but not the type of emotion, lending credence to his theory.
Tests by Paul Ekman (1984) suggest that forcing one’s face into a certain facial expression predisposes one to feeling the associated emotion. Thus, like in James’ theory, feedback from the muscles of the face can inspire emotional feelings.
The amygdala is a cluster of nuclei under the cerebral cortex in the temporal lobe. It is part of the limbic system. The amygdala receives input quickly from the subcortical route and more slowly from the cortical route. Removal of the amygdala leads to something called psychic blindness, in which objects lose their psychological significance. The amygdala is activated in reaction to negative emotional stimuli, and less often by positive images. Emotions can be unconsciously triggered by stimuli too quick for the mind to consciously interpret.
The prefrontal cortex is the foremost part of the frontal lobe in the cerebral cortex. It is essential for conscious emotion, and for the ability to act on those emotions. The left prefrontal cortex seems to be involved in responses that entail approach, mostly positive emotions. The right prefrontal cortex, on the other hand, controls responses of withdrawal, mostly negative emotions. Anger, a negative emotion that often triggers an approach response, is accordingly controlled by the left prefrontal cortex.
The process of sensation occurs when a physical stimulus causes a physiological response, leading to a sensory experience. Perception is the complex organizing of sensory information from which meaningful interpretations are extracted.
Aristotle began the notion that people have five senses (sight, taste, smell, touch, hearing), but in reality humans have many more. Body position, balance, temperature and pain are just some of the senses left out. Sensory receptors are structures that respond to physical stimuli, producing changes of electricity that initiate neural impulses in sensory neurons. Sensory neurons bring information from sensory receptors to the brain. The brain has many specific sensory areas, including ones devoted to touch, hearing, and vision. Conscious experience of sensation requires the cerebral cortex.
Transduction is the process by which a receptor cell converts a physical stimulus into an electrical charge. The membrane of the receptor cell becomes more permeable, allowing either positively or negatively charged chemicals in. The resulting change in charge across the membrane is called the receptor potential. Receptor potentials, in turn, trigger action potentials in the sensory neurons.
Sensory encoding is the preservation of information about physical stimuli. It can vary along the dimension of quantitative variation, in which the intensity of energy is either weak or strong. It can also vary along the dimension of qualitative variation, in which the qualities of energy are specific to that energy – blue has the qualities of blue, not red; lavender smells different than cinnamon. Quantity is encoded through the size of receptor potentials – larger receptor potentials produce faster rates of action potentials, indicating a more intense stimulus. Quality is encoded by the different sets of neurons activated by different stimuli. Some sensory receptors are more sensitive to one quality than another, allowing for differentiation to occur.
Our senses are designed to alert us to changes in the environment, causing us to be more sensitive to an environmental stimulus when it is new, and less sensitive the longer we are around it. This is called sensory adaptation. It occurs in part due to changes in neurons in the brain and in part by the receptors.
Smell (along with taste) is a chemical sense, one that reacts to chemicals in the environment. Although the human sense of smell is less sensitive than that of many animals, humans can distinguish roughly 10,000 chemicals by smell and can even differentiate individual people by their smell.
Chemical molecules in the air enter our nostrils and dissolve into the mucus that lines the olfactory epithelium, the sensory tissue of smell, in the roof of the nasal cavity. The axons of the olfactory sensory neurons pass through a porous bone into the brain, meeting in the olfactory bulb, where they form synaptic clusters called glomeruli with neurons of the brain. For each of the roughly 350 different types of olfactory sensory neurons, there is a receiving glomerulus. The pattern of olfactory neurons triggered determines the odorant we are smelling.
From the glomeruli in the olfactory bulb, sensory input travels to the limbic system and the hypothalamus. This explains the connection of smell, emotion, and memory. Input also travels to the piriform cortex of the temporal lobe, which sends the information onwards to the orbitofrontal cortex. These are crucial for the conscious differentiation of smells.
While smells from the air are perceived in the nostrils, odour sources inside the mouth reach the nasal cavity through the nasal pharynx at the back of the mouth. Taste consists of both the input of taste receptors in the mouth, and the smell that has been triggered through this route. Differences in taste tend to largely rely on smell, explaining why we have trouble tasting food when our nasal passages are clogged.
Women tend to be more sensitive to smells than men, especially during pregnancy. Odour sensitivity declines with age in both sexes. Roughly 75 chemicals can be smelled by some people and not at all by others, including androstenone, a derivative of testosterone produced mostly in male sweat. People with the common variant of a certain gene find it a strong and putrid smell, while people with the second most common variant of the gene find it weak and pleasant. A third group cannot even sense it.
We can identify individuals by our sense of smell, even if we are not aware of it. The strongest recognition occurs among family and those we are close to. Babies and mothers have an especially high ability to recognize each other’s scent, even as early as 60 minutes after birth.
Individual differences in odour result from 50 highly variable genes called the major histocompatibility complex (MHC). In animals, it has been found that they tend to be attracted to other animals with the highest difference in odour, as the chances of genetic overlap are much smaller in such a case. It seems to be similar in humans, though not enough tests have been conducted.
Pheromones are chemical substances released by animals that promote a specific behavioural or physiological response in other members of the species. Most mammals have a structure called the vomeronasal organ which has specialized receptor cells for recognizing pheromones. Humans have localized glands that secrete odorous substances, especially concentrated in areas that have retained hair – these include the armpits, genitals, the area around the nipples, the navel, the top of the head, the cheeks and the forehead. There is, however, no evidence that humans produce sex-attractive pheromones, nor that the rudimentary vomeronasal organ of the human nose is active.
Taste receptor cells exist in taste buds, each of which holds 50-100 cells. Most people have 2,000-10,000 taste buds on the tongue, the roof of the mouth, and the opening of the throat. The more taste buds a person has, the higher their sensitivity to taste.
There are believed to be five types of taste receptor cells: sweet, salty, sour, bitter, and, most recently identified, umami. The fifth can roughly be described as “savoury”, and is common to high-protein natural foods like meats, fish, and cheese, as well as to MSG (monosodium glutamate).
Taste sensory neurons connect to the limbic system and the cerebral cortex, primarily to the insula.
Taste motivates us to eat some things and avoid others. Low-medium salty foods, sweet foods, and umami foods are pleasant tastes related to beneficial foods. Salts, sugars, and proteins are necessary for survival. Generally, sour and bitter tastes are unpleasant. Decay (which can cause disease) produces acidic compounds that would taste unbearably sour. Poisonous plants tend to taste extremely bitter.
Evolution has influenced our taste preferences, causing toxicity to taste bitter. Animals with a dislike for bitter foods would be more likely to avoid poison and survive, passing the trait on.
Women are generally more sensitive to bitter tastes than men, especially in the first 3 months of pregnancy, possibly due to the sensitivity of the foetus to toxins. Young children are also sensitive to bitter tastes, protecting them from eating poisonous materials in their exploratory phase of life.
Pain is a somatosense, one that can originate from multiple places in the body rather than just one specific sensory organ. Pain is also an emotion and a drive, overwhelming the mind and coming with its own particular expression. Pain is an extremely important warning mechanism – in rare cases when one is born without the ability to experience pain, they lose the motivation to avoid danger. They may keep their hand on a hot stove, chew their tongue while they eat, or strain muscles and joints by not noticing dangerous positions.
Pain neurons have sensitive terminals called free nerve endings, which are not encased in end organs and exist all over the body, in teeth, muscles, membranes around joints and bones, and even visceral organs like the stomach.
There are two types of pain sensory neurons – thin, unmyelinated C-fibres and thicker, myelinated A-delta fibres. Sharp, localized pain is mediated by A-delta fibres while longer lasting pain is caused by slower C-fibres. Pain neurons travel to the spinal cord and brainstem and terminate on interneurons. These may provoke reflexive responses.
The sensory aspect of experiencing pain depends on the somatosensory cortex which also controls touch and temperature sense.
The emotional and motivational aspect of pain is reliant on the cingulate cortex of the limbic system and insular cortex of the frontal lobe.
The cognitive secondary emotional and motivational component involves the worry about the meaning and impact of pain. This is regulated by the prefrontal cortex, the planning area of the brain.
Melzack and Wall (1965, 1996) proposed a gate-control theory of pain. It suggests that the experience of pain depends on how much input from sensory neurons can pass through a neural “gate” to higher centres of pain in the brain. Pain-enhancing and pain-inhibiting neurons control the “gate”.
Illness often increases pain sensitivity, possibly to motivate people to rest – this is believed to be an immune system response. Recently injured areas are highly pain-sensitive as the A-delta and C-fibres change in repair, and areas that have raw skin are extremely sensitive to ward against interference.
A portion of the midbrain called the periaqueductal gray is a major control centre for pain reduction. Stimulation of this area can be so effective in pain relief that surgery can be performed on animals that have this area activated. Opiate drugs act on this area, and bodily hormones that have a similar effect fit into the group of endorphins.
Stress-induced analgesia is the decreased sensitivity to pain that occurs in moments of high stress. This occurs in situations where survival depends on temporarily ignoring injury. It depends partly on the release of endorphins. In long-distance runners, endorphins are secreted, resulting in the “runner’s high”.
Sometimes dramatic pain reduction can happen due to belief or faith. This faith can be in a religion, or in the effectiveness of a drug. In some cases, the placebo effect can reduce pain in a person who believes a sugar pill is actually a painkiller.
Sound is both a physical stimulus and the sensation produced by that stimulus. Sound vibrates the air (or any other medium), which moves outward in a wave pattern. The height of the wave is the sound’s amplitude (intensity/loudness). Amplitude is measured in decibels. Sound waves also vary in frequency (pitch). This is the rate at which the molecules of the vibrated medium move – it is measured in hertz.
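Because decibels are a logarithmic measure, equal steps in dB correspond to equal ratios of sound pressure. As a sketch (the 20 micropascal reference is the standard hearing-threshold convention, not a figure stated in the text):

```python
import math

def db_spl(pressure_pa, reference_pa=20e-6):
    """Sound pressure level in decibels, relative to the standard
    20 micropascal reference (roughly the threshold of hearing)."""
    return 20 * math.log10(pressure_pa / reference_pa)

# A tenfold increase in pressure adds 20 dB:
print(db_spl(20e-6))   # pressure at the reference -> 0.0 dB
print(db_spl(200e-6))  # ten times that pressure -> 20.0 dB
```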
Hearing is sensitivity to pressure placed on sensory tissue in the ear, so it is related to touch. Ears have developed to magnify the pressure of sound waves as they come into the inner ear. The outer ear is made up of pinna (the visible folds of skin and cartilage) and the auditory canal (the entrance to the eardrum). The middle ear is just past the eardrum, and contains the hammer, anvil, and stirrup (the ossicles bones). These vibrate against the oval window, amplifying the pressure from the eardrum. The oval window lies between the middle ear and the inner ear. The inner ear contains a coiled structure called the cochlea, which contains the outer duct and the inner duct. The receptor cells for hearing (hair cells) are located on the floor of the inner duct in the basilar membrane. The four rows of these cells run the length of the basilar membrane. Hair cells are covered in cilia, which protrude and reach to the tectorial membrane. On the other end of hair cells are synapses connected with auditory neurons.
One form of deafness is called conduction deafness. In this form, the ossicles become rigid and cannot bring sounds past the ear drum. A conventional hearing aid is helpful because it amplifies sound enough that it can bypass the middle ear and reach the cochlea through the bones of the face. Sensorineural deafness occurs when the hair cells of the cochlea or the auditory neurons are damaged. This can result from prolonged exposure to loud noises, or due to congenital deafness. A cochlear implant might help in cases like these.
Georg von Békésy (1920s) made a breakthrough in understanding pitch perception. He hypothesized that when neurons at the proximal end (close to the oval window) of the basilar membrane fire rapidly while neurons at the distal end fire little, the brain hears a high-pitched sound. As the position of the rapidly firing neurons moves more distally, the perceived pitch becomes lower.
Auditory masking is the ability of a sound to override (mask) another sound. This is asymmetrical – low frequency sounds more easily drown out high frequency sounds. The travelling-wave mechanism also affects hearing loss – we lose our ability to hear high-pitched sounds more easily (and earlier) than low-pitch sounds.
For frequencies below 4000Hz, pitch depends on both the activity of the basilar membrane and the timing of the activity.
Neurons of the primary auditory cortex are organized tonotopically, each maximally responsive to a different frequency of sound. The characteristics of neurons in this area are, like other sensory neurons, largely influenced by experience. Thus, a musician’s brain has a stronger response to the frequency of their chosen instrument. An area in the parietal lobe called the intraparietal sulcus also plays a role in pitch processing.
We are able to locate a sound to within 5 to 10 degrees of its real direction, and can attend to one voice while ignoring others in a noisy environment. Our perception of sound location depends on the gap between the times at which the sound waves reach each ear. If there is no gap, the sound comes from straight ahead or straight behind.
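The timing gap between the ears can be made concrete. The sketch below uses Woodworth's classic approximation for the interaural time difference; the head radius and speed of sound are typical assumed values, not figures from the text:

```python
import math

def itd_seconds(angle_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (Woodworth's formula)
    for a sound source at angle_deg away from straight ahead."""
    theta = math.radians(angle_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

print(itd_seconds(0))            # straight ahead: no gap at all
print(itd_seconds(90) * 1e6)     # directly to one side: roughly 650 microseconds
```

Even the largest possible gap is under a millisecond, which is why localization relies on very precise neural timing.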
We can recognize patterns in waveforms, even when the sound arrives in a different pitch. There are cortical areas designed to analyze these patterns, which include specialized neurons that react to different frequency combinations, rising and falling pitches, brief sounds, moving sounds, etc.
We sometimes perceive sounds that are not really present as physical stimuli. Phonemic restoration occurs when phonemes (the consonant and vowel sounds of words) are left out of speech – we often perceive that we hear the missing sounds. The perceived sound is reliant on the surrounding phonemes – our brain makes sense of words according to the meaning of the sentences they are in, and relies somewhat on auditory memory to fill in the blanks.
Psychophysics is the relationship between the physical qualities of a stimulus and the sensory experience it produces.
The absolute threshold is the very faintest, weakest point at which a stimulus can still be detected. Absolute thresholds vary between people, which explains why some are more sensitive to stimuli than others.
The difference threshold, on the other hand, is the minimum difference in intensity needed to distinguish between two stimuli. This is also known as the just-noticeable difference (jnd). Weber’s Law states that the jnd for stimulus magnitude is a constant proportion of the magnitude of the original stimulus. This formula illustrates the law: jnd = kM (where M is the magnitude of the original stimulus, and k is a proportional constant known as the Weber fraction).
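A minimal sketch of Weber's law; the Weber fraction of 0.02 is an assumed, illustrative value:

```python
def jnd(magnitude, weber_fraction):
    """Just-noticeable difference under Weber's law: jnd = k * M."""
    return weber_fraction * magnitude

# The detectable difference scales with the magnitude of the original stimulus:
print(jnd(100, 0.02))   # for a 100 g weight  -> 2.0 g
print(jnd(1000, 0.02))  # for a 1000 g weight -> 20.0 g
```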
Gustav Fechner (1860) came up with a logarithmic law in order to derive a mathematical relationship between stimulus magnitude and sensory magnitude. Fechner’s law states that the magnitude of the sensory experience is directly proportional to the logarithm of the physical magnitude of the stimulus. This formula illustrates the law: S = c log M (where S is the magnitude of the sensory experience, c is the proportional constant, and M is the magnitude of the physical stimulus).
S. S. Stevens (1975) believed, contrary to Fechner, that people can report the magnitudes of their sensations consistently. Using his method of magnitude estimation, he asked people to assign numbers to the magnitudes of sensations, comparing them with one another numerically. If a clap of hands were a 5, a clap of thunder might be a 20. He found that people’s reports did not follow Fechner’s law, but rather a power relationship. Stevens’ power law states that the magnitude of a sensation is directly proportional to the magnitude of the physical stimulus raised to a constant power. This formula illustrates the law: S = cM^p (where S is the reported magnitude, M is the physical magnitude, and p is the power to which M must be raised).
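The contrast between the two laws can be shown numerically: under Fechner's law, doubling the stimulus always adds the same increment of sensation, while under Stevens' law it always multiplies the sensation by the same factor. The constants below are illustrative assumptions (the exponent 0.67 is roughly the value reported for loudness):

```python
import math

def fechner(M, c=1.0):
    """Fechner's law: S = c * log(M)."""
    return c * math.log10(M)

def stevens(M, c=1.0, p=0.67):
    """Stevens' power law: S = c * M**p."""
    return c * M ** p

# Doubling the stimulus adds a constant increment under Fechner's law...
print(fechner(20) - fechner(10))    # same increment as the next line
print(fechner(200) - fechner(100))
# ...but multiplies the sensation by a constant factor under Stevens' law:
print(stevens(20) / stevens(10))    # same ratio as the next line
print(stevens(200) / stevens(100))
```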
The power law allows us to preserve constant sensory ratios as overall intensity increases or decreases, so we can recognize thunder when it is close as reliably as when it is far away.
The Eye
The photoreceptors (light sensitive cells) of the eye lie in the retina. The retina is a membrane that lines the rear interior of the eyeball, which itself is filled with a gelatinous substance that allows for the easy passage of light. The eyeball is covered in front by a transparent tissue called the cornea, which is convex and focuses the light that passes through it. The iris lies behind the cornea, and determines how much light can pass through the pupil (the hole in the centre of the iris). The lens lies behind the iris, and further focuses the light based on distance. The image appears upside down, but is interpreted properly by the brain.
Transduction is the process by which environmental stimuli generate electrical changes in neurons. The photoreceptor cells of the eye are of two types – cones (bright light, focused colour) and rods (dim light). Cones are concentrated in the fovea, which is specialized for acuity. Rods are found everywhere but the fovea and are most concentrated in a ring about 20 degrees away from it. On the outside of each photoreceptor is a photochemical; in rods, this is rhodopsin. Electrical changes in rods and cones cause responses in the retinal cells, which lead to action potentials in the optic nerve neurons. Where the optic nerve leaves the eye there are no photoreceptors, creating a blind spot on the retina.
Cone vision (photopic vision) is adapted for bright light, high detail (acuity), and colour perception. Rod vision (scotopic vision) is specialized for sensitivity, and is used to see in low light. Because of this, we see very little colour in low light situations.
The transition the eye makes from light to dark is called dark adaptation, and the reverse is light adaptation. The iris dilates the pupil in low light conditions to allow as much light as possible. In bright light, the iris contracts the pupil to stop the eye from being overwhelmed with light. Because the sensitive photochemical rhodopsin breaks down in bright light, we see mostly with our cone receptors in the light.
Rods and cones form synapses on bipolar cells which form synapses on ganglion cells. There is a convergence in the connections of rods and bipolar cells to ganglion cells, increasing dim light sensitivity. This sensitivity comes at the cost of acuity.
Humans can see light between the wavelengths of 400 and 700 nm (nanometres). White light contains all of these wavelengths; when they are separated by a prism, we see a rainbow. We see objects as having different colours because their surfaces contain pigments, chemicals that allow only some wavelengths to be reflected. If all light is absorbed, the object appears black; if no light is absorbed, the object appears white.
Subtractive colour mixing occurs when pigments are blended together. This creates the perception of colour by subtracting reflected light waves. If too many pigments are mixed, most wavelengths are absorbed and the result is a brownish-black.
When coloured lights are mixed, the result is additive colour mixing. This occurs because the amount of reflected light is increased rather than decreased. The discovery of additive colour mixing led to the three-primaries law, in which three different wavelengths of light can be used to mix any colour. The law of complementarity states that certain pairs of wavelengths, when added together, produce the sensation of white. The processes of colour mixing are actually psychological, as no changes in the physical stimuli actually occur, only in our perception of them.
The trichromatic theory suggests that colour vision is a result of three types of cone receptors, each responsible for one of the three primary wavelengths. These colours are “blue”, “green”, and “red”.
Some people have dichromatic vision, due to a lack of the photochemical for either the red or the green cones. This is what is typically considered red-green colour-blindness, and it causes difficulty in perceiving differences between colours at the green-red end of the spectrum. Non-primate mammals tend to have only blue and green cones, while most birds have four cone types, including one sensitive to ultraviolet light.
The opponent-process theory was developed to explain the law of complementarity. It proposes that complementary colours cause us to see white because complementary wavelengths have opposite effects on neurons, effectively cancelling each other out while exciting brightness receptors – causing white.
The phenomenon of complementary afterimages occurs when the neurons sensitive to one colour become fatigued – when looking at a white surface afterwards (from which all wavelengths are reflected), the fatigued neurons respond weakly, so the complementary colour is perceived as an illusion.
While once believed to be contradictory, the trichromatic and opponent-process theories are both correct.
Contours are the visible edges and borders of objects which give them a distinct shape, which are often distinguished by colour differences and contrast.
Our visual system exaggerates the change that occurs on contours, making them even easier to distinguish.
Contrast is enhanced through lateral inhibition – a process by which neurons become less active when surrounded by other neurons receiving similar input. At the edges of a contour, neurons are not completely surrounded by active neurons of the same type, and are thus more responsive.
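Lateral inhibition can be sketched with a toy one-dimensional "retina": each cell's response is its own input minus a fraction of its neighbours' inputs. The inhibition strength k is an arbitrary illustrative value:

```python
def lateral_inhibition(intensities, k=0.2):
    """Each cell's output is its input minus k times the sum of its
    neighbours' inputs (cells at the ends have only one neighbour)."""
    out = []
    for i, x in enumerate(intensities):
        neighbours = intensities[max(0, i - 1):i] + intensities[i + 1:i + 2]
        out.append(x - k * sum(neighbours))
    return out

# A step edge between a dim region (1) and a bright region (5):
step = [1, 1, 1, 5, 5, 5]
print(lateral_inhibition(step))
```

In the output, the dim cell at the border is pushed below its dim neighbours and the bright cell at the border is pushed above its bright neighbours – the exaggerated contour described above.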
Research has found that neurons (feature detectors) of the primary visual cortex are often sensitive to different features of a scene. Thus, there are neurons that act as edge detectors, and others that act as bar detectors, etc.
According to Treisman’s feature-integration theory, every object contains primitive sensory features (like colour and shape). There is a two-stage process of perceiving these objects. In the first stage, all features are detected instantly and processed simultaneously (parallel processing). The second stage is the integration of features in which the primitive features are processed into a whole. This involves serial processing, which takes time and occurs sequentially.
In Gestalt psychology, it is believed that we automatically perceive whole, organized patterns and objects. Gestaltists believe that the mind should be understood as an organized whole.
There are six main rules that predispose us to respond to patterns as a whole:
Proximity: elements near to one another are probably part of the same object. We link clusters of objects together.
Similarity: elements that resemble one another are likely part of the same object (e.g., textural continuity).
Closure: objects are perceived as having enclosed borders, even when part of the border is obscured.
Good continuation: lines are grouped in a way that continuous lines are formed more easily, with minimal directional changes.
Common movement: when elements move together in the same direction and at the same rate, we perceive them as one object.
Good form: we tend to see simple, uncluttered, symmetrical, regular, and predictable elements as if they are one object.
We tend to separate any scene into a figure (the object) and ground (the background). This division is determined by specific stimulus characteristics. We tend to see the surrounding form as the background, but can reverse this perception if we imagine it functioning the other way around. More ambiguous forms create a reversible figure, in which we can switch between figure and ground.
In the process of unconscious inference, the brain fills in perceptual “blanks”.
Once the mind has inferred the presence of a particular form, it essentially creates the form in perception, causing an illusion. An illusory contour becomes clearer when the objects that create it are unexpected.
When a grey square is shown in a dark colour field, and an identical grey square is shown in a light colour field, we perceive the identical squares as being different shades of grey. This is partly due to the lateral inhibition effect described earlier, which causes contour edges to be exaggerated in contrast. It also may be a method for vision to compensate for seeing things in shadows – the relative relationship of an object to its ground causes us to recognize it as the same object.
Unconscious inference is a result of neural activity in higher brain areas that assess sensory information according to the most likely reality. This is considered top-down control. When the sensory input has a direct impact on perception, this is bottom-up control. The two work together in all perceptual experience.
Recognizing objects means categorizing and labelling them.
Biederman's recognition-by-components theory attempts to explain how we can recognize an object regardless of its orientation. It suggests that our brain first organizes the object into basic, 3-D components called geons. By how these geons are arranged, we can recognize the object, similar to the geometric instructions in a “how to draw” book. Biederman formulated 36 geons. Tests have shown that recognition relies on the ability to determine at least some of an object's geons and how they connect. The process of recognition is as follows:
Perceiving sensory features → Detecting geons → Recognizing the object.
In the condition visual agnosia, people can see but no longer make sense of what they see. There are some general types of agnosia, including visual form agnosia, in which the presence of something and its elements can be seen and identified, but not its shape. Those with visual object agnosia can describe and even draw an object that they are shown, but cannot identify it or its possible purpose. Thus, a bicycle would be “two circles and a pole”.
Visual Processing in the Brain (Two Streams)
There are two streams of visual processing in the brain. The first is called the “what” pathway, and is responsible for object recognition, or what an object is. The second is the “where and how” pathway, which helps map the object in space and determine how to approach or pick up the object.
Damage to the “What” Pathway
Visual agnosias like the ones previously described result from damage to the “what” pathway. However, if the “where and how” pathway is intact, the person will remain able to unconsciously sense the size and shape of objects in space.
Damage to the “Where and How” Pathway
In contrast, those with damage to the “where and how” pathway are completely able to recognize objects and even describe their spatial location, but are unable to follow moving objects with their eyes, to avoid obstacles, and to efficiently pick up objects.
Complementary Functions
While the “what” pathway allows us to think and talk about objects consciously, the “where and how” pathway allows us to interact with them and configure our movements accordingly, without conscious thought.
3-D Vision
According to Hermann von Helmholtz (1867), using unconscious inference, our brain calculates the distances of objects in our field of vision based on environmental cues like light, size, position and shape.
We have better depth perception with two eyes than with one. Eye convergence is the inward turning of the eyes when we look at things close-up; the more the eyes converge, the closer an object must be. More important, however, is binocular disparity. Each eye has a slightly different (disparate) viewpoint on an object or scene – if you close one eye while looking at an object, then switch eyes, the object seems to perceptually “jump”. By calculating the degree of disparity between the input from each eye, the brain can infer depth relationships.
Binocular disparity is also called stereopsis. Evidence of its influence on depth perception can be seen in a 19th century device called the stereoscope, in which two slightly disparate images could be seen simultaneously, each with a different eye. The result was the illusion of depth, which made the device extremely popular.
An extremely valuable monocular depth cue is motion parallax, in which the view of an object or scene changes as one's head moves. The closer something is, the more it seems to shift relative to the background when our head moves from side to side. This can be demonstrated when looking out the window of a car – the road seems to speed past us while a tree in the distance moves much more slowly through our field of vision.
Pictorial cues for depth are those that can be mimicked in two dimensional images. These include:
Occlusion: If one object blocks out (occludes) another, then it must be closer.
Relative image size of familiar objects: If we recognize objects in a picture, we are able to infer their distance based on their size relative to other objects in the image.
Linear perspective: As parallel lines move towards the horizon, they begin to converge towards a “vanishing point”.
Texture gradient: As the size and spacing of texture elements decrease, distance increases.
Position relative to the horizon: The closer an object is to the horizon, the further away it must be.
Differential surface lighting: Because we assume light is coming from above, we can perceive a lit object as 3-D, with the top-most surfaces receiving the most light.
The size of the retinal image of an object is inversely proportional to the object's distance from the eye. The further away an object in our field of view, the smaller its retinal image – yet our brain interprets this difference as distance rather than as a change in size. This is called size constancy.
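The inverse relationship between retinal image size and distance can be sketched with the standard visual-angle formula (a geometric result, not given in the text):

```python
import math

def visual_angle_deg(object_size_m, distance_m):
    """Visual angle subtended at the eye: 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# Doubling the distance roughly halves the retinal image,
# yet size constancy makes us perceive the same-sized person:
print(visual_angle_deg(1.8, 10))  # a 1.8 m tall person at 10 m
print(visual_angle_deg(1.8, 20))  # the same person at 20 m
```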
There are many illusions in which two lines or objects of equal size can be made to appear different in size. One of these is the Ponzo illusion; another is the Müller-Lyer illusion. Richard Gregory's depth-processing theory explains these illusions by suggesting that depth and distance cues cause our brain to interpret relative sizes differently.
The moon illusion occurs when the moon is seen as bigger when close to the horizon. This is because when it is high in the sky, we cannot compare it to the horizon. Since we naturally perceive things close to the horizon as being far away, the setting or rising moon seems further away to our brain. Since it is also the same size in our field of vision as it is when high in the sky, it must be larger. However, since we consciously know the moon can't change in size, we then judge the larger-seeming moon to be closer, not larger. This is the farther-larger-nearer theory.
Information-Processing Model of Mind
Memory is broadly defined as all the information in a person's mind and the mind's capacity to store and retrieve that information. The modal model of the mind portrays the mind as containing three types of memory stores including sensory, short-term, and long-term memory. Each memory store has a function, a capacity, and a duration. The model includes control processes like attention, rehearsal, encoding, and retrieval.
Sensory memory is the ability to very temporarily hold information in your information-processing system; less than 1 second for sights and several seconds for sounds. Sensory memory serves the function of storing information long enough for the brain to decide whether or not it needs to be brought into working memory through the process of attention.
Working memory (short-term memory) is considered the workplace of the mind and the seat of conscious thought. It is where perceiving, feeling, comparing, reasoning, and computing occur. When something in working memory is no longer needed or thought about, it is quickly discarded. Information can come into working memory either from the senses or from long-term memory.
Information from the working memory may or may not be further encoded into the long-term memory. We are only conscious of items in our long-term memory when they are retrieved by the working-memory. Long-term memory is a passive store of information which is maintained over a long period, with an essentially unlimited capacity.
Attention controls which information moves beyond the sensory store into the working memory, acting as a gatekeeper to ensure our working memory is not overloaded. Encoding is the process that takes information from the working memory and implants it into the long-term memory, often due to an interest in certain information, or repetition. Retrieval is the process of drawing information out of the long-term memory into the working memory. This is also called remembering, or recollection.
Attention can meet two needs: the first is focusing mental resources on the immediate task and ignoring irrelevant information. The second is the need to monitor irrelevant stimuli for possible significance. This analysis is called preattentive processing.
We have the ability to listen to and understand one voice even in a crowded room with equally loud or louder voices surrounding us. This is the cocktail-party phenomenon.
Inattentional blindness occurs when we are focusing our visual attention on one thing, and unintentionally ignoring another, less relevant seeming stimulus. This is what occurs in magic tricks and is used also by pickpockets – while distracting us with something showy in their right hand, they may steal our wallet or pocket a card with their left.
Humans are also good at shifting their attention to possibly relevant or dangerous stimuli. While we may be deeply involved in a task (like reading a book), we might still be able to turn our attention to important stimuli (like a new person entering the room). This occurs on a minuscule delay, as the sensory memory picks up information and, through preattentive processing, comes to the conclusion that it is relevant.
Auditory sensory memory is also called echoic memory. The brief memory trace of a sound is an echo, which fades within 8-10 seconds.
Visual sensory memory is also called iconic memory and the memory trace is called an icon. People can retain icons for up to 1/3 of a second after seeing the stimulus.
When people are flashed an array of stimuli followed by a bright patterned stimulus (a masking stimulus), most can recall only up to three items of the original array. However, the capacity of attention can be trained to grow, as happens with frequent playing of video games.
Priming is the unconscious activation of information already stored in long-term memory, causing it to become more readily available and to influence cognition. This process provides a context for the things we are actually paying attention to, and makes useful memories available.
The mind is able to program certain processing tasks into automatic, subconscious tasks. For example, when reading, you are rarely conscious of interpreting individual letters and words to formulate meaningful language, and an experienced driver will not notice specific wheel and pedal movements when looking for a road sign. One demonstration of this is the Stroop interference effect.
There are three general conclusions that have been made with regard to the location of activity in the brain during preattentive processing:
Unattended stimuli still activate the sensory and perceptual areas of the brain.
Attention increases the amount of activity related to task-relevant stimuli, and decreases the activity related to irrelevant stimuli.
Neural mechanisms in the front of the cortex control attention.
There are three main components of the working memory. The phonological loop holds verbal information, the visuospatial sketchpad holds visual and spatial information, and the central executive brings information to the working memory from sensory and long-term memory stores.
The verbal working memory often involves a phonological loop, the mental repetition and subvocalization of verbal information. This keeps the information readily available, and depends on how quickly a person can pronounce the information.
We think with both words and images. Images are kept in what is often called the visuospatial sketchpad, and are organized spatially in a similar way to real pictures.
Doing two mental tasks at once is easiest when one uses the phonological loop and the other uses the visuospatial sketchpad, and hardest when both tasks use the same component. Many tasks involve both, such as driving or conversing. Because of this, doing both of these at once can be dangerous, increasing the chance of inattentional blindness.
Research has found that the same areas of the brain that are active during talking and listening are also active when a person uses the phonological loop to keep a list in their working memory. The areas that deal with looking and seeing also become active when a person uses the visuospatial sketchpad. Deficits in either of these brain areas translate to poorer memory of that type. Working memory tasks also involve the prefrontal cortex. It is the neural hub of the central executive part of working memory, organizing the other parts of the brain to maintain focus.
There are two types of rehearsal, each with a different effect: maintenance rehearsal serves to hold information in the working memory, while encoding rehearsal brings information into the long-term memory store.
We remember things that catch our interest or stimulate thought. When we elaborate on something, thinking deeply about it and connecting it to other information, we are engaging in elaborative rehearsal. Understanding is the best way to create long-term memories. Even nonsensical connections help store things better. Many experiments provide evidence that thinking about meaning improves long-term memory.
Studies have found that focusing on comprehending ideas, and on the relation of ideas to personal experience and opinion is the best way to encode information into the long-term memory. Jotting down relevant notes in the margins, including connections you can make between what you read and other ideas, serves as an excellent tool for learning.
When memorizing items on a long list, it is helpful to artificially group information into meaningful chunks, thus decreasing the perceived length of the list. Arranging these chunks into a meaningful sentence will further increase its memorability. Thus, the planets can be understood using the mnemonic device “My Very Educated Mother Just Served Us Nine Pizzas”, the first letter of each word corresponding with the first letter of each planet.
In relation to the fact that experts are better at forming long-term memories within their area of expertise, K. Anders Ericsson suggested a special type of long-term memory called long-term working memory. When encoded, expert information is organized in a way that makes it more readily accessible to the expert, at least until the relevant problem is finished.
Hierarchical organization involves clustering related items to form a category, and related categories to form higher order categories, etc.
Encoding verbal information can occur more readily when one attempts to visualize what one is being told. One strategy when having to remember a long list (the method of loci) is to imagine taking a walk on a familiar route, and placing each object on the list by a familiar landmark. When recalling, you can take this mental walk again and imagine seeing the objects in their rightful place.
After a portion of his temporal lobe was removed in epilepsy surgery, H.M. was unable to encode new long-term memories. While he could remember events that occurred before his operation, and function perfectly reasonably at problem solving, he was unable to keep information once his attention was distracted. While he could not form conscious memories, he could form certain unconscious memories.
H.M.'s disorder is called temporal-lobe amnesia, and is correlated with destruction of the hippocampus and the cortical and subcortical structures connected to it. Activity in the hippocampus is vital for the encoding of certain types of long-term memories.
Anterograde amnesia entails the loss of ability to form long-term memories (as in the case of H.M.). Retrograde amnesia involves the loss of memories that were formed before the injury. This is usually limited to the brief time before the injury, and not years beforehand. This time-graded aspect suggests that there are two main forms of long-term memory – labile (easily disrupted) and stable (hard to disrupt). Conversion of the labile form to the stable form is called consolidation.
While it is unclear how and when consolidation occurs, some evidence shows that memories that are used more often over a long period of time are more likely to be consolidated.
Sleep helps consolidate new long-term memories, especially slow-wave, non-REM sleep. It may improve both the durability and quality of memories.
Retrieval of information from the long-term memory depends on how it is organized. Memories are often stored in networks, connected by associations. When one memory is called forth, connected memories are primed, becoming more easily accessible. A priming stimulus is called a retrieval cue.
Some things are associated by contiguity, meaning that they occurred together in experience. Others are associated by similarity, in that they share one or more common properties.
In the spreading-activation model, the strength of an association is represented by the length of the path between any two memories. Activity declines with distance, so the more closely linked the memories, the more they are primed.
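The decay-with-distance idea can be sketched in code. This is a minimal illustrative model only, not an implementation of any published network: the node names, decay factor, and threshold are all assumptions made for the example.

```python
# A toy spreading-activation sketch: memories are nodes, associations are
# links, and activation weakens as it spreads outward from a retrieval cue.

def spread_activation(links, start, decay=0.5, threshold=0.1):
    """Return each node's activation level after spreading from `start`."""
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbour in links.get(node, []):
            new_level = activation[node] * decay  # activity declines with distance
            if new_level > activation.get(neighbour, 0) and new_level >= threshold:
                activation[neighbour] = new_level
                frontier.append(neighbour)
    return activation

# Hypothetical association network for illustration.
links = {
    "red": ["fire engine", "roses", "apples"],
    "roses": ["flowers"],
    "apples": ["pears"],
}
result = spread_activation(links, "red")
print(result["roses"])    # 0.5  (one link away: strongly primed)
print(result["flowers"])  # 0.25 (two links away: more weakly primed)
```

Directly linked memories ("roses") end up more strongly primed than memories two associations away ("flowers"), which is the model's core claim.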
Elaborative rehearsal serves to set up more retrieval cues for any given idea. The more mental associations created during encoding, the more available the memory during retrieval.
Contextual stimuli prime us for relevant memories in any given situation – sitting at the wheel of the car primes our memories of driving skills and road signs, etc. People remember things better within the same context in which they encoded the information.
Encoding is not the same as recording, and as such, memory is fallible. We may not encode all the information, and we might fill memory gaps in with logic and knowledge, until the invented becomes indistinguishable from reality.
The term schema can be used to refer to the generalized mental representations (concepts) of any class of objects, scenes, or events. In every culture, there are shared schemas. Schemas related to time, rather than places or objects, are called scripts. At a birthday party, balloons and cake make up part of the schema, while singing Happy Birthday might be part of a script.
The construction of memory is affected both by the encoding and the retrieval processes. False memories can be implanted in criminal eyewitnesses through suggestions, leading questions, and encouragement. Even the subtle use of words (such as “smash” instead of “hit”) can cause a vastly different interpretation and remembrance of events. This can also happen through hypnosis.
Imagination, suggestion, and encouragement can create false memories of childhood experiences. In some extreme cases, clients pursued legal action against family members for falsely remembered abuse that had been implanted accidentally by their experiences in therapy. Imagery and imagination, even without misleading suggestions, can create false memories.
The basic cause of false-memory construction is source confusion. Our minds might take information from reality and blend it with information from stories and imagination, without distinguishing the actual event. Social pressure also plays a role, as people yield to the influence of others who encourage them to remember something in a specific way.
The modal memory model cannot account for memories that affect behaviour without reaching a level of consciousness. Other models have been suggested.
Explicit memory is that which can be brought into consciousness and provides the basis and content of conscious thought. It is also called declarative memory because that which is remembered can be declared outright. Implicit memories are those that do not enter consciousness, and can be assessed using implicit tests. Behaviour might reflect memory, such as the ability to stay balanced on a bicycle, without being declared. They are more related to context than explicit memories, and are called up automatically.
Explicit memory is made up of episodic and semantic memory. Episodic memory contains one’s own experiences, especially those with a personal quality. Semantic memory is not tied with personal past experience, but with objective knowledge. This includes such things as word meanings, facts, etc.
Implicit memory can be divided into subclasses, one of which is classically conditioned memory. Another is procedural memory, which holds all unconsciously learned rules about how to do things, often acquired through practice. A third category is priming, which occurs as a subconscious activation of information in the long-term memory that influences perception and thought.
In temporal-lobe amnesia like that experienced by H.M., implicit memory remains intact. People with this sort of amnesia can still learn new skills and retain artificial grammars, and demonstrate similar priming effects as healthy people.
In a rare disorder called developmental amnesia, patients have severe deficits in episodic memory, while retaining the ability to create and retain semantic memories.
Analogies and Induction
Because we use personal experience to understand present circumstances, we need to be able to draw on similarities between past and present situations. Identifying similarities allows us to use analogical and inductive reasoning.
An analogy is any similarity between different objects/events/actions/situations. In psychology, an analogy is behavioural or functional similarity, or a relationship between two things that are in most respects different from one another.
Scientific thinking often relies on analogy. For example, thinking about selective breeding allowed Darwin to come up with natural selection. Some analogies are better for different ideas, and many lead to inventive ways of thinking.
Analogies are used in law to persuade juries towards certain arguments, and everywhere in political discourse. They can be used to clarify and guide understanding, but also to mislead. Good reasoners are able to see structural relationships between two kinds of events.
The Miller Analogies Test is made up of analogy problems, and is intended to measure a person's reasoning and problem-solving skills. In an analogy problem, two words are paired by a relationship, and a third word is given; the goal is to supply a fourth word that completes the same relationship. For example: “Plane is to air as boat is to _____” (the answer is “water”). Another test is the Progressive Matrices test, which is used to measure fluid intelligence.
When you attempt to infer a proposition from observations or facts (clues), this is called inductive reasoning or induction. Reasoning using analogy is a mode of inductive reasoning. We tend to be very good at inductive reasoning, but our failures in it tend to stem from certain biases.
The availability bias involves drawing on only the most readily available information, and discarding that which is more difficult to find. Because of this, when asked about the causes of major political events, a person who watches the news will likely cite the reasons they hear most often from the reporters, rather than those that external sources or even knowledge of history might lend.
Studies should be designed to disconfirm a hypothesis, rather than confirm it. The reason for this is the confirmation bias, which is the tendency to try to confirm rather than disconfirm one’s ideas or hypotheses. Thus, a doctor might misdiagnose due to the availability bias, and seek to prove himself right by looking for evidence that confirms his hypothesis, falling prey to the confirmation bias.
Because we are predisposed to seek order in the world, we tend to assume order when it isn’t there. Chance often seems like fate, positive events seem like luck. This is called the predictable-world bias. In games of chance, the best strategy is maximizing one’s odds, by choosing the most statistically likely result. Most people, however, use the matching strategy, matching their choice frequencies to the odds – if one choice has a 60% chance of winning, you might pick that choice on 60% of trials.
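The gap between the two strategies can be worked out directly from the percentages in the example above; the arithmetic below follows from those numbers alone.

```python
# Compare maximizing vs matching in a game where option A wins 60% of the time.

p_a = 0.60  # probability that option A is the winning choice on any trial

# Maximizing: always choose A, so you win exactly when A wins.
p_win_maximizing = p_a

# Matching: choose A on 60% of trials and B on 40%, independently of the
# outcome, so a win happens either when you pick A and A wins, or pick B and B wins.
p_win_matching = p_a * p_a + (1 - p_a) * (1 - p_a)

print(round(p_win_maximizing, 2))  # 0.6
print(round(p_win_matching, 2))   # 0.52
```

Matching wins only about 52% of the time versus 60% for maximizing, which is why matching is the inferior strategy despite feeling intuitive.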
In deductive reasoning (deduction), you assume that if certain premises are true, then certain consequences must be the logical result. There are two main types of problems that are meant to test deductive reasoning skills. In series problems, items are to be organized in a series based on comparison statements, after which you must come to a conclusion. In a syllogism, a major and a minor premise are presented, from which you must see if a certain conclusion is true or false.
While Piaget suggested that people have a natural inclination towards “abstract logic”, most people, in fact, draw upon their own experiences to solve problems in more concrete terms rather than the laws of logic.
In solving syllogisms, content matters greatly to our solutions. Even in situations where a syllogism might be logically incorrect, if its content reflects our own understanding of the world, we will tend to think it is correct. Thus, there may be a bias towards inductive rather than deductive thinking.
Physical diagrams and mental models are valuable tools for deductive reasoning, especially in the solving of syllogisms. One such model is the Euler circle, which is a visual way to depict the basic syllogistic logic problem, regardless of content. When people solve deductive reasoning problems without formal training or drawing, they do so through the use of mental models. While these are not necessarily visual images, those that are visual are the easiest to grasp. Ability to solve syllogisms correlates more with visuospatial ability than with verbal ability.
Insight problems are those that are designed to be unsolvable except through a reorganization of the way we look at them. These might also be considered “trick” questions, and usually involve perceiving an analogy or understanding the problem in a different way.
The reason insight problems are difficult is that they require us to abandon a mental set, one of our well-established habits of thought and perception. One of these mental sets is the tendency to see certain problems as having trial-and-error solutions, rather than logical solutions. Another is functional fixedness, in which one's conception of the provided objects' usual functions is too narrow, and one fails to see new uses for them.
Paying specific attention to aspects of the problem and materials not previously noticed tends to lead more easily to a solution. Those who are best at solving insight problems tend to be good at noticing details.
Research has found that the mind uses different tools to solve insight problems than those used to solve deductive reasoning problems. Ability to solve insight problems correlates with creativity, but not working memory capacity. An incubation period, in which one takes time away from the problem, is helpful – it allows the mind to unconsciously reorganize the material to facilitate insight. This does not work with deductive reasoning. One suggestion is that this is because the memories related to the task are primed, and remain so, while a person does something unrelated. This allows new associations to be created that can cause a solution to arise seemingly out of nowhere.
Research has shown that being in a happy, playful mood greatly increases performance in creativity and insight problems. The broaden-and-build theory of positive emotions suggests that negative emotions actually narrow focus on perception and thought, responding only to specific objects and routine responses. Positive emotions like joy and interest broaden perception and increase creativity. Playfulness is an important part of this, as in periods of play people are more prone to testing things, and viewing information in new ways.
Research has found that the way non-Western cultures tend to approach tests is different from the way Western cultures do. For instance, non-Western people often find it absurd to respond to questions that don’t relate to their own concrete experience, and tend towards practical, functional responses rather than abstract properties. This is often a matter of preference rather than ability.
It has been found that East Asians tend to perceive and reason more holistically and less analytically than people from North America, seeing relationships between elements of a scene more readily than Americans. This may be due, in part, to Western philosophy's origin in the more individualistic culture of Ancient Greece, while East Asian philosophy is rooted in traditions like Confucianism, which emphasize balance and harmony.
Language is important for both communication and problem solving. We not only learn from our own experiences, but also from the reports of others and from history. Language is also a vehicle of thought, verbal thought. People who speak different languages also think differently, in a phenomenon called linguistic relativity.
In terms of representing space, English and European languages represent space using an egocentric frame of reference. We use “left, right, front, back,” to describe locations according to our own location. Some languages use an absolute frame of reference, describing locations instead by “north, south, east, west.” Because of this difference in language, people with an absolute frame of reference learn to be attuned to cues of cardinal directions. It also changes the way people think about space, seeing everything as relative to its cardinal direction rather than to themselves.
The way we describe numbers in different languages has an impact on our basic arithmetic ability. In some hunter-gatherer cultures, there are no number words beyond two, which limits their arithmetic abilities. In Asian cultures, the way numbers are referred to makes the base-10 system visible. Instead of saying something like “twenty”, they would say “two-tens”. This linguistic clarity may allow children to grow up with a more implicit grasp of the arithmetic system.
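The "two-tens" naming scheme described above can be illustrated with a short sketch; the function name and the English rendering of the names are made up for this example.

```python
def transparent_name(n):
    """Render a number from 0-99 in the base-10-transparent style described
    in the text, e.g. 24 -> "two-tens four" (illustrative rendering)."""
    ones = ["zero", "one", "two", "three", "four",
            "five", "six", "seven", "eight", "nine"]
    tens, units = divmod(n, 10)  # split into a tens count and a remainder
    if tens == 0:
        return ones[units]
    name = f"{ones[tens]}-tens"
    return name if units == 0 else f"{name} {ones[units]}"

print(transparent_name(20))  # two-tens
print(transparent_name(24))  # two-tens four
```

Unlike "twenty-four", the transparent name makes the tens-and-units structure of the number explicit, which is the point the text makes about implicit grasp of base 10.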
During the 1970s, the English language began to be reformed to stop using the word “man” as the generic term meaning humankind. Studies have shown that when “man” is used in this context, the brain unconsciously sees it as having the same meaning as “man” the gender. Using more accurate terms like “person” or “human” is one way of ensuring the decline of sexism.
The capacity that underlies differences between individuals in their ability to reason, solve problems, and acquire new knowledge is called intelligence. Most modern ideas about intelligence began with Francis Galton (1822-1911) and Alfred Binet (1857-1911).
Francis Galton, a relative of Darwin, defined intelligence as the biological capacity for intellectual achievement, and considered it a heritable trait. He decided that mental quickness and sensory acuity were the best way of measuring intelligence.
Binet and his assistant put together the Binet-Simon Intelligence Scale, upon which many of today’s intelligence tests have their origins. He saw intelligence more as a collection of higher-order mental abilities loosely related to one another, and believed that natural intelligence is nurtured through schooling and environmental interaction.
In North America, the first commonly used intelligence test was the Stanford-Binet Scale, a modified version of Binet’s test. It is still used today, in a greatly revised form. The most common intelligence tests today are the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III) and the Wechsler Intelligence Scale for Children, Fourth Edition (WISC-IV). The WAIS-III involves verbal subtests and performance subtests. When the score of this test is related to normative data from the population, an IQ (intelligence quotient) score is reached. An exactly average score is 100.
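The relation between a raw test score, the population norms, and the resulting IQ can be shown as a small worked example. The deviation-IQ scaling used here (mean 100, standard deviation 15) is the Wechsler convention; the raw-score numbers are illustrative.

```python
def deviation_iq(raw_score, population_mean, population_sd):
    """Convert a raw test score to a deviation IQ (mean 100, SD 15)."""
    z = (raw_score - population_mean) / population_sd  # standard score
    return 100 + 15 * z

# A raw score exactly at the population average maps to IQ 100;
# one standard deviation above average maps to IQ 115.
print(deviation_iq(50, 50, 10))  # 100.0
print(deviation_iq(60, 50, 10))  # 115.0
```

This makes explicit what "relating the score to normative data" means: the IQ is defined by where the raw score sits in the population distribution, not by the raw score itself.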
Tests are valid when they measure what they are intended to measure. IQ scores correlate with success in school and in some lines of work. Of course, in jobs where little reasoning and mental complexity is needed, the correlation is highly diminished. IQ scores are also correlated with longevity, possibly due to better personal care.
Charles Spearman (1927) observed that people who score high on one type of mental test also score high on other mental tests, leading him to the conclusion that there is a common factor being measured by every mental test. He called this g for general intelligence. He believed that each of the mental tests measures both its intended subject, and g. Using this logic, the WAIS-III uses different subtests to determine the IQ score.
A student of Spearman, Raymond Cattell believed that general intelligence is actually two factors – fluid intelligence (gf) and crystallized intelligence (gc). Fluid intelligence allows a person to perceive relationships among stimuli without previous practice or instruction. Crystallized intelligence is a mental ability derived from personal experience, allowing people to hold a large knowledge of word meanings, cultural practices, and knowledge about how things work. The television show Jeopardy would require crystallized intelligence. Crystallized intelligence depends on fluid intelligence and verbal learning ability.
There are positive correlations between certain reaction-time measures and IQ scores. One measure is inspection time, the time it takes to look at and listen to a pair of stimuli and differentiate between them. Mental speed might contribute to the capacity of working memory, thus increasing IQ.
Since working memory holds all of the information you need to solve a problem at a given time, and has a limited capacity, it makes sense that the faster one can process information and add it to the working memory, the more information can be maintained in the working memory. A high-capacity working memory helps create fluid intelligence. Working memory capacity can be measured using either a digit span or a word span, the number of digits or unrelated words that a person can hold in mind and repeat back.
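A digit-span measurement can be sketched as follows. The procedure and the toy "participant" are illustrative assumptions, with the simulated capacity of seven chosen only to echo the classic short-term-memory estimate of about seven items.

```python
import random

def digit_span(recall, max_len=12, seed=0):
    """Estimate digit span: the longest list of random digits that `recall`
    (a function standing in for the participant) repeats back exactly."""
    rng = random.Random(seed)
    span = 0
    for length in range(1, max_len + 1):
        digits = [rng.randint(0, 9) for _ in range(length)]
        if recall(digits) == digits:  # perfect repetition at this length
            span = length
        else:
            break  # first failure ends the test
    return span

# A toy participant whose working memory holds at most seven items.
participant = lambda items: items if len(items) <= 7 else []
print(digit_span(participant))  # 7
```

In a real administration the lists are read aloud and the participant responds verbally; the callback here simply stands in for that exchange.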
Some researchers have equated intelligence with mental self-government, or a powerful central executive that coordinates the mind’s activities and deals with goals and strategies. This is considered to be the supervisory component of working memory. Deciding how much time to spend on each part of a problem is an example of a mental self-government process.
Intelligence is sometimes related to the overall ability to cope with the challenges of one’s environment. Intelligence may be, more specifically, an evolutionary adaptation to dealing with novel challenges.
Galton’s research may have led to the nature-nurture debate, in which some believe certain traits are entirely genetic, while others believe that certain traits are entirely learned through environmental stimuli. Of course, the reality is that intelligence and IQ differ among people for a variety of reasons, including genetics and environment.
The question is posed whether differences in IQ among individuals are due to differences in genes or differences in environment. The answer varies among different groups of people.
Heritability is the degree to which variation in a particular trait, within a particular population, stems from genetic rather than environmental differences. It is quantified by the heritability coefficient (h²), which can vary from 0.00 to 1.00. When it is 0.00, none of the observed variance is due to genetic differences; when it is 1.00, all of it is.
Twin studies involve looking at identical twins raised in different situations. Another type involves comparing the expression of a trait in identical twins (100% genetically similar), with its expression in fraternal twins (50% genetically similar).
Correlation coefficients can be used to estimate heritability coefficients. This is most commonly done by subtracting the correlation for fraternal twins from that for identical twins and multiplying the difference by 2. Overall, studies suggest that genetic differences account for 30-50% of IQ differences in children and more than 50% for adults.
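The doubling rule just described can be written out as a one-line calculation; the twin correlations used in the example are illustrative values, not data from the studies cited.

```python
def heritability(r_identical, r_fraternal):
    """Estimate h² by doubling the gap between the identical-twin and
    fraternal-twin correlations, as described in the text."""
    return 2 * (r_identical - r_fraternal)

# Hypothetical example: identical twins correlate 0.85 on IQ and
# fraternal twins 0.60, giving an estimated heritability of 0.50.
print(round(heritability(0.85, 0.60), 2))  # 0.5
```

The logic is that identical twins share twice the genetic similarity of fraternal twins, so doubling the correlation gap isolates the genetic contribution under the model's assumptions.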
The average IQ correlation of unrelated children living in the same family is 0.25, while the average for genetically unrelated adults in the same family is -0.01. These correlations decline with age, suggesting that the family's influence is short-lived. This may be because, as we grow, we increasingly choose our own environments.
Other traits may influence IQ, as it is strengthened and maintained by active intellectual engagement with the world. Openness to experience (which includes curiosity, independence, and broadness of interests) positively correlates with IQ. IQ is like muscle strength: you need to exercise your brain to keep it sharp. People who spend long periods in intellectually challenging jobs or hobbies show an increase in IQ over time, while those in intellectually undemanding jobs and hobbies show a decrease.
It has been found that within-group heritability coefficients cannot be applied to between-group differences. Since environment plays a role in IQ scores, people from different environments cannot be compared in the same way.
There is an IQ difference favouring white Americans over black Americans. This is not a genetic issue, but a cultural one. There is no evidence that the lower IQ scores are due to genes.
Through cross-cultural research, John Ogbu distinguished between voluntary minorities and involuntary/caste-like minorities. The first are people who have emigrated from another country in hopes of upward movement, and consider themselves better off than those in their country of origin. Involuntary minorities are those groups who became minorities through being conquered, colonized, or enslaved. Involuntary minorities everywhere perform more poorly in school and lower on IQ tests than the majority. According to studies, people who feel outcast and receive more limited access to achievement tend to have lower IQs.
There has been a gradual increase in overall global IQ in the last century, with rising scores from generation to generation. Fluid intelligence has increased through better and more available schooling, and active mental stimuli like television, computers, and other technology that require more problem solving. Fast-paced videogames exercise attention and working memory capacity.
Learning about the Physical World
Infancy is the first 18-24 months after birth and is filled with physical and mental developments. Infants are born with all sensory systems functioning (though vision functions poorly at first). Quickly after birth, they respond selectively to stimuli in ways that enhance learning.
Babies tend to look longer at new stimuli, gradually becoming used to them in a phenomenon called habituation. This allows for their attention to drift to new things that they understand less, increasing the efficiency of learning.
Infants, within a few weeks of birth, become especially interested in manipulating aspects of their environments. They lose interest once they become skilled at a certain task. The removal of a baby’s control can cause anger and sadness, as it indicates the loss of an opportunity to see and hear something new.
In the first 3-4 months, babies reach for things and often try to put them in their mouths. Eventually, they begin to examine things with their hands and eyes in a more sophisticated way, squeezing and turning the objects – any action that may test their properties. Their examination also varies based on the object's properties – they will look at patterns, feel textures, shake rattles, etc. This examining and exploring behaviour is universal.
Babies often mimic the actions of others. They also exhibit eye following, a tendency to look in the same direction where they see other people looking, so as to attend to objects and situations of relevance. This is also important for the development of language. When able to crawl or walk, infants engage in social referencing, looking at the expressions of their caregivers for safety cues.
There are certain assumptions (An object will continue to exist even if we cover it with a cloth, etc.) which make up our core principles of knowledge about the physical world. Knowledge in core principles is manifested in infant exploration.
Babies look longer at unexpected events than expected ones. This allows for the use of violation-of-expectancy experiments to assess what infants expect will happen. Some expectations occur later than others – for example, a 4-month-old expects an object to fall when they see it dropped, but not when they see it teetering on the edge of a shelf – that expectation comes at around 6 or 7 months.
Infants under 5 months lack object permanence, the understanding that objects continue to exist when they are out of view. This is tested by hiding an attractive toy under a napkin – if the child reaches for it once it “disappears”, then they understand it must still exist. Between 6 and 9 months, they can solve the simple hiding problem, but not a changed-hiding-place problem, in which the object is repeatedly hidden under one napkin, but is then hidden under a different napkin. Even having seen the transition, the 6-9 month old will reach for the first napkin. These changes might be due to an inability to mentally picture objects that are hidden.
Piaget proposed that mental development comes about through the child’s interaction with the physical environment. By acting on objects in the environment, children develop mental representations called schemes that act as blueprints for future actions. Schemes differ from schemas in that they refer to things a person can DO with an object or category of objects.
Assimilation is the process by which a new scheme is integrated into existing schemes, which Piaget saw as similar to the way we assimilate food into nutrients for our body. When existing schemes broaden or change in response to a new event, this is called accommodation.
Children have a natural tendency to show interest in activities that can be assimilated but still require the accommodation of existing schemes. This maximizes mental growth.
Reversible actions, or operations, are the most conducive to development when a child grows out of infancy. This includes manipulating clay into different shapes, flicking light switches, etc. The understanding that something can be reversed is fundamental to understanding physical principles, especially the principle of conservation of substance.
Piaget came up with four types of schemes that develop in stages.
Sensorimotor stage (birth-2 years): In this stage, children develop classes of schemes specific to different categories of objects. The schemes eventually develop to a level at which they can be used as mental symbols for objects that are not present.
Preoperational stage (2-7 years): In this stage, children exercise their imaginative abilities, representing absent objects but still not thinking about the reversible consequences of actions. They will not yet understand the principle of conservation of substance.
Concrete-operational stage (7-12): In this stage, children have developed schemes which allow them to understand the conservation of substance and cause-effect relationships.
Formal-operational stage (12+): In this final stage, children begin to recognize the similarities of schemes, and understand them as basic principles that can be applied to new situations. This is the beginning of abstract reasoning and theoretical thinking.
The most criticized aspect of Piaget’s work is the concept of development as occurring in divisible stages. More recent research suggests that Piaget may have underestimated infants and overestimated adolescents and adults. Because abstract concepts are best understood by making personal analogies rooted in real experience, it may be that we all operate with concrete-operational schemes.
Lev Vygotsky is credited with originating the sociocultural perspective on development. While Piaget emphasized the child’s interaction with the physical environment, Vygotsky emphasized interaction with the social environment.
According to Vygotsky, language is the foundation of learning and for the development of higher thought. Words may begin as a way to communicate, but he argues that they become ways of forming symbolic thought. For example, use of the word “because” before cause and effect relationships are understood helps children begin to think about these relationships.
Children from 4 to 6 years old tend to talk aloud to themselves, which might be the beginning of verbal thought. Children do this most often when presented with a difficult task. Talking to oneself makes it easier to stay focussed on a task and to solve problems.
According to Vygotsky, development begins on a social level and moves to the individual, private level. People learn to solve problems collaboratively before they do so by themselves. Vygotsky terms this the zone of proximal development, the realm of activities too difficult to do alone, but possible to do with help. According to this perspective, critical thinking in both adults and children derives largely from the collaborative activity of dialogue.
Rather than Piaget’s little scientist, Vygotsky sees children as apprentices. Since the goal is to function effectively as an adult in one’s society, children learn the skills and norms of their parents.
In the information-processing perspective, development is explained in terms of operational changes in the components of mental machinery.
Very young infants show the ability to form implicit memories. Evidence of semantic memory becomes clear at about 10 to 12 months, when children begin to speak. Episodic memory develops slowest of all, usually beginning between 20 to 24 months, with reliable memories occurring between age 3 and 4. Research shows that children must learn to encode experiences in words before they can form episodic memories about them. The ability to encode episodic memories increases throughout childhood and plateaus in adolescence.
The working memory capacity also increases steadily over childhood. Speed of processing, the speed at which elementary information-processing tasks can be carried out, increases along with working memory capacity.
Young children automatically divide the world between things that move on their own, and things that don’t. This indicates the foundation of an understanding about “mind”.
Once children are able to speak (around 2 and 3 years), they explain behaviour in terms of mental constructs like perception, emotion, and desire. Even infants as young as 12 months old are able to display an understanding of what’s on another person’s mind.
Aside from emotions, perceptions, and desires, there are also beliefs. Tests indicate that three-year-olds don’t understand that beliefs can differ from reality. This may be because false belief is inherently contradictory, and thus more difficult to understand.
While they cannot understand false beliefs, three-year-olds can understand “pretend”, and actively engage in pretend play. Understanding of pretence may be a precursor to the understanding of false belief. Children with siblings, especially older ones, have been shown to develop understanding of false belief earlier, as they experience more social role-play.
Pretend play might also form the basis for hypothetical reasoning. When thinking in a “fictional mode”, children can solve simple syllogisms better than when they might be in a “reality mode”.
Autism is a congenital disorder that is sometimes genetic and sometimes due to prenatal brain damage. It is characterized by deficits in language acquisition, a narrow focus of interest, and repetitive movements. Autistic children have trouble communicating and relating to other people. They perform badly on false-belief tests and other tests to detect deception, though they are still able to understand physical representations (pictures).
Every language consists of symbols that represent other things, the smallest meaningful units of which are morphemes (words and meaningful word parts). There are two classes of morphemes: content morphemes include nouns, verbs, adjectives, and adverbs; grammatical morphemes include (in English) articles, conjunctions, prefixes, and suffixes. Morphemes are arbitrary (with no real connection to what they represent) and discrete (unable to change in a graded way to reflect a graded meaning). Nonverbal signals are neither arbitrary nor discrete.
The largest unit of language is a sentence (e.g. The boy hit the ball), which can be broken down into phrases (the boy + hit the ball), then into words (the + boy), then into phonemes (th + e). The rules of any language are called grammar, which includes phonology (arrangement of phonemes), morphology (arrangement of words), and syntax (arrangement of phrases and sentences).
People are able to implicitly learn grammar, even before explicit instruction in school. People also have a sense for when a certain sentence is or is not grammatically structured.
The mother’s voice is audible in the womb and has a calming effect on the fetus in the few weeks before birth. Infants are able to distinguish between phonemes and actively seek out human voices. At around 6 months, infants become better at distinguishing the phonemes of their native language and worse at distinguishing those of other languages. The more rapid this change, the more rapidly the child learns the language.
At about 2 months, infants begin to make speech-like vowel sounds called cooing (aah, ooh). At around 6 months, cooing becomes babbling as children repeat consonant-and-vowel sounds together (paa-paa). Deaf infants begin to “babble” with their hands, forming nonsensical motions that mimic the basics of sign language.
During the babbling phase, children begin to connect recognizable words with their meanings, for example distinguishing between “mommy” and “daddy”.
Around 10 to 12 months, babies begin to playfully name things, using words to point things out. At around 15 to 20 months, the rate of vocabulary development accelerates. Young children demonstrate a number of cognitive biases that help them narrow down the referent to a new word they hear – they link new words to unnamed objects, and eventually infer referents from basic grammatical structures (ex. “–ing” refers to an action).
One bias causes children to assume that labels are common nouns rather than proper nouns. Because of this, they might make the mistake of calling every man “Daddy”. By around 2 years old, children can differentiate between common and proper nouns using contextual and grammatical cues. They often also underextend words, applying them to narrower categories than adults do.
When children learn a new grammar rule (like “–ed” for past tense) they almost always overgeneralize at first (thinked, swimmed, etc.). This is evidence that the grammar rule has been learned, not just imitated.
We are born with brain structures for understanding and forming language, anatomical structures for making a broad range of sounds, a preference for listening to speech, and mechanisms that cause us to explore our own sound-making abilities.
Noam Chomsky was responsible for drawing attention to grammatical rules as fundamental to the human mind, rather than learned. He proposed a universal grammar, a set of innate grammatical principles common to all languages, acquired through a set of mental mechanisms designed for the rapid acquisition of language: the language-acquisition device (LAD).
Evidence comes from the development of creole languages: in places where people must learn to communicate without a shared language, a simplified pidgin arises, and the first generation of children develops that pidgin into a full language, complete with grammatical rules. In Nicaragua, where a deaf school was first introduced in 1977, children began to make up their own sign language, beginning as a sort of manual pidgin and becoming an increasingly grammatical language over the years and with the introduction of new, younger students.
LAD functions best in the first 10 years of life. After this, grammar becomes much more difficult to learn. This is true of the first language, but not as important for learning a second language.
Language development requires both LAD and LASS, the language-acquisition support system, which is made up of the social world in which a child is born. People often simplify their speech to infants in ways that help them learn words and grammar more easily, enunciating more clearly and repeating important words. This exaggerated form of language is commonly called motherese.
Children more frequently involved in back-and-forth verbal play with their parents between 3 and 15 months develop language much sooner than children without this training. A parent’s verbal responsiveness to cooing and babbling plays a large role in this.
There are differences in LASS across cultures. Large variations can occur without greatly affecting language acquisition.
Bonobos and chimpanzees have complex nonverbal language systems. In an attempt to see how much language an ape can acquire, studies have been done in which apes are raised with people who communicate with the apes using sign language.
A bonobo called Kanzi was raised in a language-rich culture and taught to communicate using lexigrams (visual icons arranged on a keyboard). Kanzi eventually learned to understand around 500 spoken English words, and used roughly 200 lexigrams to communicate, even when the communication wasn’t functional.
Apes have been shown to be adept at acquiring a vocabulary, and are able to use signs and symbols. However, the innate grammar mechanisms of human children must have developed sometime after humans split from the evolutionary line that produced chimps and bonobos.
Infancy and Caregivers
Attachment is the term used to refer to the emotional bonds between infants and caregivers.
Harry Harlow experimented with attachment in rhesus monkeys. He provided infant monkeys with two “surrogate mothers”, one made of wire, the other of cloth. Regardless of which surrogate had a milk bottle, the infants all treated the cloth surrogate as a mother, becoming more confident in its presence and more attached to it. This showed that infants need not only food, but also close contact with comforting caregivers.
Children show distress when their mothers leave them, especially in an unfamiliar environment, and show pleasure when their mothers return. They show distress when approached by strangers unless reassured by mothers, and are more likely to explore in the presence of their mother than alone. This is a universal phenomenon with a foundation in natural selection. Attachment strengthens at the age when children learn to crawl and walk, providing evidence for this evolutionary origin.
Mary Ainsworth developed the strange-situation test to explore infant-mother attachment. Infants are found to be securely attached if they explore while their mother is present, become upset and stop exploring when she is absent, and show pleasure when she returns. Avoidant attachment happens when an infant avoids and acts coldly towards the mother. Anxious attachment happens when an infant continues to cry and fret upon the mother’s return, even when she tries to comfort them. Seventy percent of middle-class children are securely attached.
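Purely as an illustrative sketch (the function and argument names are invented here, and this is nothing like the actual clinical coding procedure), the three attachment categories above can be summarized as a small set of decision rules:

```python
def classify_attachment(explores_when_present, upset_when_absent,
                        pleased_at_return, avoids_mother,
                        keeps_fretting_at_return):
    """Toy summary of the strange-situation categories in these notes.

    Each argument is a yes/no observation from the test session;
    the names are invented for this sketch, not Ainsworth's terms.
    """
    # Avoidant: the infant avoids and acts coldly towards the mother.
    if avoids_mother:
        return "avoidant"
    # Anxious: the infant keeps crying and fretting even after reunion.
    if keeps_fretting_at_return:
        return "anxious"
    # Secure: explores with mother present, upset when absent,
    # pleased at her return.
    if explores_when_present and upset_when_absent and pleased_at_return:
        return "secure"
    return "unclassified"
```

Real coding of the strange situation is far more nuanced; the sketch only captures the contrast between the three categories as described above.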
Sensitive care involves regular contact comfort, prompt and helpful response to communicative signals, and emotionally synchronous interactions. Sensitive care is positively correlated with secure attachment style and positive adjustment later in life. Ainsworth suggested that attachment styles developed in infancy provide a basis for subsequent relationships in life. Children with a secure attachment style are found to be more confident, emotionally healthy, sociable, and good at problem solving later in life. The correlation, of course, does not imply causation. Experiments have been done to determine the causal relationship.
In situations where an infant has a highly irritable temperament, it often occurs that mothers begin to distance themselves, almost out of self-protection. Mothers trained to respond more appropriately and provide more sensitive care ended up with children who were more securely attached than those who were not trained, indicating sensitive care is the cause.
Some children are genetically more prone to the effects of parental care than others. This depends partly on a gene that affects the uptake of serotonin in the brain.
Outside Western cultures, children tend to sleep in the same room or even the same bed as their caregiver. In fact, sleeping with the mother correlates with more self-reliance, better relationships with other children, and somewhat more mature personalities. Requiring infants to sleep alone often results in a strong attachment to soft inanimate objects.
In hunter-gatherer societies, infants tend to be kept in direct contact with their mother’s body for most of their first year, often held in a sling at breast-feeding level and allowed to feed at will. In some hunter-gatherer societies, this attachment is split between the mother and other women of the group.
While many parenting manuals in western society suggest that being too indulgent with an infant can lead to overdependence as the child grows, more recent research shows that the opposite is true. High indulgence of an infant’s desires and the integration of infants into social life might actually foster lasting emotional bonds and result in interdependence, loyalty and obligation to community and family, rather than dependence.
Erikson divided the years of age 1-12 into stages concerned with the development of autonomy (self-control), initiative (willingness to initiate actions), and industry (competence in completing tasks). Since many new behaviours might bring disagreement from caregivers, children might develop feelings of shame, doubt, and inferiority that could interfere with autonomy, initiative, and industry.
The enjoyment of sharing and giving seems to be an inborn human instinct. At around 12-18 months, children begin to spontaneously offer their toys and other objects to people, even without being asked to do so. Young children also enjoy helping adults with their tasks, especially between 18 and 30 months. Giving and helping are linked to the innate capacity of empathy in a child.
Newborn babies will cry when they hear other babies crying. Later on, they become more controlled, looking at the crying infant and looking sad or whimpering. Until 15 months, this is considered egocentric empathy, as the child seeks comfort for his own distress. Between 15 months and 2 years, the child begins to try to provide comfort for the distressed child.
Around the age of 2, children become increasingly wilful, challenging authority and seeing what the limits are in their own behaviour. Appropriate parenting corrects the bad behaviours while maintaining the child’s autonomy and initiative. Around the end of this period, children begin to develop guilt, which allows them to control harmful behaviour. Empathy-based guilt is constructive, but anxiety-based guilt can be harmful, as these children will feel guilty about things they cannot control.
Discipline includes the methods parents use to correct misbehaviour.
Hoffman categorized three classes of discipline:
Induction: a form of verbal reasoning in which the parent tries to make the child think about the harmful consequences their behaviour has on others. This helps foster empathic thinking and reinforces moral standards.
Power assertion: the use of physical force, punishment, or sometimes bribery to control behaviour.
Love withdrawal: when parents, often inadvertently, express disapproval of the child rather than the specific actions, using either words or cold behaviour.
Power assertion and love withdrawal can be harmful, though at times a certain amount of power assertion is necessary, as long as the child understands they are still loved.
Diana Baumrind categorized parenting styles into three groups:
Authoritarian: parents who value obedience for its own sake and use power assertion.
Authoritative: parents who are concerned their children learn and abide by basic principles of right and wrong, preferring inductive discipline.
Permissive: parents who are highly tolerant of disruptive behaviour and show their frustration rather than attempt more systematic correction.
Children of authoritative parents tend to be more cooperative than other children, while those with authoritarian parents are actually most likely to do disruptive things when unsupervised.
Again, one must be aware of making causal inferences from correlational data – it could be that children of a certain temperament elicit different parenting styles.
Many forms of human play serve functions similar to the play seen in other animals – play that develops physical stamina and agility, play at nurturing infants using dolls as substitutes. Specifically human play includes constructive play (making things), word play, and imaginative play. Children also play at the activities they see adults in their culture perform, as a sort of practice for when they too will require those skills. Play also serves the purpose of extending the skills of each generation – children who grew up with the first home computers used them for play, and many of those children are now making innovations in the field of computer science.
Aside from just skill learning, social roles, rules, and self-control are practiced. Piaget argued that unsupervised play with peers is essential for a development of moral understanding. While parents enforce order on children through power, children have to turn to reason and a concept of fairness when solving interpersonal problems amongst themselves. They also learn that games can be changed when all of the children agree, forming the foundation for an understanding of law and society. Most forms of play have rules that the children make up, and roles that must be inhabited. Because of this, play may also be important for the development of self-discipline and social competence.
Play in which children of different ages play with one another is qualitatively different from same-age play. It benefits younger children by providing role models closer to their own age and ways to learn more advanced interests and skills. It benefits older children by developing their nurturing skills and consolidating knowledge through helping younger children. Unfortunately, with the age-graded school systems of Western cultures and the decline of neighbourhood play, age-mixed play is becoming less common.
In psychological use, sex refers to the clear biological basis for categorizing people as male and female, while gender refers to the entire set of culturally specific differences between men and women.
Infant girls and boys behave slightly differently, with boys being more irritable and less responsive than girls, and later on more prone to squirming and fussing. Parents tend to be gentler with girls, talking with them rather than jostling them about as they do with boys. There is speculation that the warm treatment of girls and expectation of self-reliance for boys can lead them to exhibit these traits themselves later in life. This might also play a role in the types of careers typically chosen by each gender.
Both the differing treatment of children by their parents and the general cultural expectations of gender work to shape a child’s gender identity as they grow. Gender stereotypes are learned early on, and young children often begin to model themselves (exaggeratedly) to the role of their sex. Young children also tend to overgeneralize gender differences, applying stricter role requirements than actually exist.
Children of all cultures go through a period in which they prefer same-sex playmates to opposite-sex playmates. The peak of gender segregation is between age 8 and 11. In their separate groups, girls and boys play at gender-specific activities. Maybe as a side effect of an overall cultural view that male roles are superior, boys that play at female activities or show feminine traits are met with disapproval, while girls who play at male activities are met with approval and affection.
Boys tend to play in rather large, hierarchical groups which include power struggles played out in competition, teasing and boasting. Girls tend to play in smaller groups in which cooperative play is predominant and competition is more subtle. Age-mixed play, however, tends to involve much less competition and more gender mixing.
The transition from childhood to adulthood begins with puberty, and ends when the person sees themselves as a full member of the adult community. A girl’s first menstruation happens typically between 12 and 13 in Western cultures, much earlier than it used to be. The end of adolescence is gradual in our culture, and typical adult responsibilities are gradually doled out over the course of the teens and twenties. Adolescence is a period of identity change in which people reform themselves into functioning adults.
Adolescence involves a breaking away from parental control (which does not always take the form of outright rebellion or rejection). Rebellion more often takes the form of a wish to be treated like an adult, while parents, fearing the new dangers that accompany adolescence, tend to tighten controls instead.
Adolescents increasingly see their friends as the providers of their emotional support instead of their parents.
In the teenage years, people begin to pay more attention to how they “fit in” with their peers. The tendency to conform peaks between 10 and 14, then gradually declines as people experiment with their identities. People tend to choose friends that have similar interests to their own. Peer pressure, much feared by parents and teachers, often has a positive rather than negative influence, as adolescents often encourage each other towards healthier behaviours.
In most cultures, risky behaviours are most common in adolescence. Adolescents have been found to have a myth of invulnerability, in which they believe themselves immune to the mishaps that occur to others. They tend to have heightened aggressiveness, and immature inhibitory control centres in their brain. Coupled with being sensation seekers, these traits all could be considered maladaptive.
Two theories exist to explain this behaviour. Terrie Moffitt suggests that high delinquency is a result of early puberty and delayed acceptance into adult society, causing youth to become disenfranchised and to engage in adult-like behaviours. Judith Harris suggests that the sort of risky behaviour adolescents engage in is intended to increase acceptance by peers, the next generation of adults.
Margo Wilson and Martin Daly came up with an evolutionary explanation – young women are more sexually attracted to men who succeed in risky, adventurous actions that allow them to achieve higher status. This may have been a valuable trait in hunter-gatherer societies, where bravery could be useful for the safety of the group.
Lawrence Kohlberg assessed moral reasoning by posing moral dilemmas to people, with answers judged on the basis of their reasoning. He proposed that moral reasoning develops in a set of stages.
Thought of one’s self.
Thought of others directly involved in the action.
Thought of others who will hear about and evaluate the action.
Thought about how the action will affect society at large.
Thought about how the action will impact all of humankind.
He suggested that people reach different stages at different points in their lives, and some don’t develop past stage 2. Adolescence is a time of rapid advancement in moral reasoning.
A study of exceptionally moral youth living in a rough neighbourhood of New Jersey found that rather than having some sort of abstract sense of duty, these youth were invested in doing the “right thing”. Morality, in this case, is an aspect of identity and self-image.
In early adolescence, youth leave their gender-segregated groups and learn to interact better with the opposite sex. Adolescence is a time of sexual blooming and experimentation.
Paradoxically, in a culture that glorifies sex and presents sexual images in television shows and movies, we typically disapprove of teenage sex. The rate of teen pregnancy in the United States was once around 12 percent, but has since fallen sharply due to an increase in sexual education and parental openness. There is also a cultural and biological difference in each gender’s eagerness for sex. Boys are more often encouraged in their sexual adventures and celebrated when they have frequent sex; the opposite is true for girls. One consequence of this difference is the high frequency of date rape in adolescence.
The evolutionary explanation for these sex differences lies in the fact that, in any species, the sex that pays a greater cost in bearing and rearing young will be the more discriminant sex in choosing when to copulate and with whom. In humans, women fit into this category. Men, on the other hand, are more aggressive in seeking copulation with multiple partners.
In cultures where men tend to stay and raise their children, rates of sexual promiscuity are lower than in cultures where men move on. Growing up with a father at home greatly increases the chance that young boys and girls will grow into sexually discriminating adults.
Freud described maturity as the capacity to love and to work. In almost every theory of adult development, caring and working are the two threads of adulthood.
Romantic love is similar to the love between infants and parents. Physical contact, caressing, gentle talking and gazing into each other’s eyes are all crucial to the early formation of these relationships. A feeling of security and exclusivity, the need to be together, are also important. Adult love attachments, like infant attachments, can be classified as secure, anxious, or avoidant.
Happy marriages are both romantic and loyal, with couples who see each other as friends and lovers. While arguments happen as often as in unhappy marriages, happily married couples argue constructively, with respect for each other’s feelings and views, keeping past grievances out of the issue. In happy marriages, both partners are attuned to the unspoken feelings and needs of the other; this attunement is more lop-sided in unhappy marriages, where the husband tends not to notice the unspoken feelings of the wife.
People tend to enjoy their work if it is complex, varied, and not closely supervised. This desired collection of characteristics is called occupational self-direction. Jobs high in occupational self-direction promote positive personality changes and tend to be less stressful, even when they involve more work. Among these personality changes, people working in these jobs become more open to experiences, more democratic parents, and more critical of established authority.
Women, more often than men, work both in and out of the home. It has been found that men tend to enjoy working in the house more than working in the office, while women enjoy working outside of the home more. This may be due to the sense of obligation and lack of choice each sex has towards its stereotyped role; doing the opposite activity is a matter of choice and personal power.
The “paradox of aging” is the fact that, as one grows older and physically declines, life satisfaction actually goes up. Priorities and expectations change to match reality, and people become wiser and more relaxed.
Laura Carstensen’s socioemotional selectivity theory explains that as people age, they become more concerned with living the rest of their life more fully and become less obsessed with planning for the future. Married couples stop trying to improve, impress, and dominate each other and become more satisfied with just spending time with each other. Older people also tend to be happier when working than they once were, probably because they become less concerned about competition.
It has been found that older people attend more to emotionally positive stimuli than emotionally negative stimuli. They also remember positive things more easily than negative things, nurturing a more positive outlook.
Fear of death peaks in people's fifties, when they see peers die of heart attacks or cancer. Older people tend to see death as more inevitable and less unfair. Elisabeth Kübler-Ross proposed that people go through five stages when they believe they are about to die:
Denial
Anger
Bargaining
Depression
Acceptance
Impressions of Other People
People are naturally naïve psychologists, looking at others to assess and predict their personalities, behaviours, and feelings. Unfortunately, when not thinking with all our resources, we tend to fall prey to a number of biases that colour our judgments.
Judgments are made on the basis of observed behaviour. Depending on the context and the precise form of the behaviour, many different conclusions could be drawn from a single action or expression. Judgments that trace a person's behaviour to a cause, such as a stable personality characteristic, are called attributions.
When behaviour is clearly an appropriate response to a specific situation, the behaviour is not judged as being caused by the person’s personality. Harold Kelley came up with a logical model for judging how behaviour should be attributed:
If a person regularly behaves this way in the same situation, the behaviour could result from a stable personality trait.
If many others also regularly behave this way in this situation, the behaviour is more likely a result of the situation.
If the person behaves this way in many different situations, it is most likely a result of a personality trait.
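Kelley's three covariation cues above can be sketched as a tiny decision function. This is an illustrative simplification, not a formal model; the function name, parameter names, and return strings are invented for the example:

```python
def attribute(consistent: bool, consensus: bool, distinctive: bool) -> str:
    """Rough sketch of Kelley's covariation logic.

    consistent  -- the person regularly behaves this way in this situation
    consensus   -- many other people also behave this way in this situation
    distinctive -- the behaviour is specific to this situation (the person
                   does NOT behave this way in many other situations)
    """
    if not consistent:
        # A one-off action tells us little about stable traits.
        return "situation (one-off; likely a transient circumstance)"
    if consensus:
        # If nearly everyone acts this way here, the situation is the cause.
        return "situation (most people act this way here)"
    if not distinctive:
        # The person acts this way almost everywhere: a personality trait.
        return "personality (the person acts this way almost everywhere)"
    return "person-situation interaction"

# Low consensus, high consistency, low distinctiveness yields
# a personality attribution.
print(attribute(consistent=True, consensus=False, distinctive=False))
```

Real attribution is of course messier than three booleans, but the sketch captures the logical order in which the cues are weighed.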
When people have sufficient information, time, and care, they tend to follow these logical steps in judging behaviour. When they don't, they take shortcuts that lead to errors and biases.
In the person bias, people attribute behaviour too heavily to personality. This extends to the roles people play and to the social pressures that make them act in certain ways: when a person reads aloud an opinion essay they don't agree with, listeners nevertheless attribute that opinion to them. We attribute the characteristic roles of people's jobs and titles to their personalities, though at home they might be very different people. CEOs and others in leadership positions tend to be overrated and overpaid because of this bias. The person bias is so pervasive that it has been labelled the fundamental attribution error.
There is an East-West difference in the person bias. In Eastern countries, people more often explain behaviour in terms of the situation and environmental influences rather than internal personality characteristics.
People who are seen as physically attractive are also judged more intelligent, kind, competent, and moral. This occurs in classroom situations, in courtrooms – everywhere that a person is judged on personality. On the other hand, personality judgments can actually influence how we judge physical attractiveness. The attractiveness bias exists more in Western cultures than Eastern ones.
People with rounder, more baby-like faces, with larger eyes and smaller jawbones, tend to be seen as more honest, naïve, kind, helpless, and warm than other people of their age and sex. A drastic manifestation of this bias appears in political elections, where competence is judged subtly: candidates without baby-faces are seen as more reliable and powerful than those with baby-faces.
There is speculation that humans are becoming more baby-faced from generation to generation. This may be because baby-like characteristics elicit instinctual compassionate and caring responses in others. Since women tend to have more baby-like facial characteristics than men, they may be treated as more helpless and considered more kind and compassionate than men.
When impressions of members of the opposite sex are first formed on the internet, subsequent meetings in real life are judged more highly. This is likely due to the fact that the relative anonymity of the internet allows people to show a more characteristic aspect of themselves, without the social anxiety of a face-to-face interaction.
The most common test of whether a child has a concept of self is the rouge test. A spot of rouge is sneakily applied to the child's nose or cheek. When placed in front of a mirror so that they can see the spot, self-aware children touch the rouge on their own face; without self-awareness, they are more likely to touch the mirror. Self-concept, in both humans and chimps, is a product of social interactions.
Charles Cooley coined the term looking-glass self to describe how we use other people's reactions to our behaviour as a mirror in which to view ourselves.
The beliefs and expectations of other people can affect a person's self-concept and behaviour. The self-fulfilling prophecy, or Pygmalion effect (named after the play Pygmalion and the film My Fair Lady), occurs when expectations of a person's abilities actually elicit those abilities. A teacher who believes an ordinary child has great potential will spend more time and effort on that child and assign more challenging work, thereby creating a better learning environment and causing the child to view themselves as successful and smart, and actually become smarter. Altering a person's self-concept by acting differently towards them can actually change their behaviour. If a child is told that they are neat and tidy, their self-concept may change so that they see themselves that way and behave accordingly. If the attribution is completely counter to the person's existing self-concept, however, they may actually behave in the opposite manner.
Self-esteem is one's feeling of approval, acceptance, and liking of oneself. Self-judgment relies largely on one's perception of how others see one, a kind of internal sociometer. Behaviours that raise a person's status and the positive opinions of others also raise their self-esteem. High self-esteem encourages us to continue on a positive path, while low self-esteem might encourage us to seek a more accepting social group.
People see and judge themselves in relation to other people. This is called social comparison. Self-concept varies with the reference group against whom the comparison is made. Judging one's weight against magazine models leads to a more negative self-image than judging it against one's peers. People identify themselves by the ways in which they differ from the people around them. There is often an emotional aspect to social comparison: if we feel we don't measure up to our reference group, we are likely to feel distressed. The big-fish-little-pond effect occurs when academically successful students move from an average high school to a university filled with other academically successful students; this new reference group may cause them to lower their self-concept of academic ability.
There are four ways in which we typically skew our self-evaluations in a positive direction:
We attribute our successes to our own personalities and traits, and attribute our failures to the situation or sometimes other people. This is called the self-serving attribution bias.
We accept praise at face value in a culture where politeness is a norm. When the same flattering praise is directed towards others, we discount it as insincere.
We have a better long-term memory for success than for failure.
When criteria for success are vague, we invent criteria that are most favourable to ourselves.
Harry Triandis made a rough distinction between Western and Eastern cultures: individualist vs. collectivist. In individualist countries (Western Europe, North America, Australia), traditions emphasize personal freedom, self-determination, and individual competition. In collectivist countries (Africa, Latin America, East Asia), interdependence, community responsibility, and social roles are emphasized. In collectivist cultures, people more often describe themselves in terms of their group affiliations and roles; in individualist cultures, people describe themselves according to internal, individual traits.
The ideal person in many Eastern cultures is one who is modest and works hard to overcome their flaws. Self-enhancing biases are small and sometimes nonexistent in these cultures. In fact, people in collectivist cultures are more likely to show a self-effacing bias.
Self-descriptions that refer to the individual as a unique person are called personal identity, and self-descriptions that refer to the person's role in a social category or group are called social identity.
People everywhere can consider themselves in terms of their personal identity or their social identity depending on what their situation demands. This flexibility allows us to benefit from and cooperate with social groups while still asserting ourselves as individuals.
Self-esteem is not only related to personal accomplishments, but to group ones as well. Depending on whether our social identity or our personal identity predominates at any given time, successes of other members of our group can lower or raise our self-esteem.
The self-serving attribution bias also extends to our social personality. In this way, we exaggerate the virtues of our social groups over other groups, even when there is no realistic difference between each group. These biases are enhanced when people are primed to think in terms of their social identity.
A stereotype is a schema (an organized set of knowledge or beliefs) that we carry about any group of people. We develop stereotypes based on our own experiences with certain groups, and also based on the way these groups are depicted in our culture. While some may be accurate, others greatly exaggerate or even invent traits. They are useful because they might allow us initial, valid information about people. On the other hand, they may cause us to prejudge those people.
Public and private stereotypes make up explicit stereotypes, because they are consciously used in the judgment of others. Implicit stereotypes are more insidious. They are sets of mental associations that cause us to unconsciously judge others in a certain way and act differently towards them. Priming tests and implicit association tests are used to assess implicit stereotypes. In priming tests, an image of a black or a white face is flashed too quickly to be consciously perceived; an adjective then appears on the screen, and the person is asked whether it could describe the primed face. People respond faster to adjectives that match their stereotype of the flashed face. In implicit association tests, people are asked to categorize groups of words in different ways. The speed with which they can do this is a good indication of the implicit stereotypes they hold: it takes longer for white students to link the word "black" with positive traits than to link the word "white" with positive traits, indicating an implicit stereotype.
Implicit stereotypes lead to prejudiced behaviour. White students' scores of implicit prejudice correlate with their reactions towards the black students with whom they are paired: although they act friendly, subtle nonverbal cues of apprehension and negativity make them come across as less friendly than their less-prejudiced peers.
Implicit stereotypes can lead to dangerous situations when, for example, white police officers implicitly perceive black male youths as more suspicious. There are training programs to help counter this implicit judgment.
Explicit stereotypes can be defeated by simple logic and reasoned thinking. Implicit stereotypes tend to be diminished by exposure to positive or counter-stereotypical examples of the stereotyped group in literature, television, and social interactions.
An attitude is a belief or opinion that has an evaluative component (a judgment about whether something is good or bad, moral or immoral, etc.). Our most central attitudes are called values.
Explicit attitudes are those that are conscious, verbally stated evaluations. Implicit attitudes are manifested in mental associations and the resulting behaviours.
Implicit attitudes have an automatic, bodily effect on our behaviour, which diminishes the moment we think more about what we are doing. Implicit and explicit attitudes often coincide, but not always. If you believe in not eating meat, explicitly, but maintain implicitly positive attitudes about eating meat, it might be difficult to say no when offered a hamburger unless you think about your explicit attitude and express restraint.
Explicit attitudes, when not coupled with implicit attitudes, tend to play little or no role in guiding behaviour, at least when that behaviour occurs privately.
With more time and more inducement to think about their behaviour, people are able to retrieve their explicit attitudes from their long-term memory and exert mental effort to behave in accordance to them. Eventually, explicit attitudes may become implicit.
Attitudes are largely products of learning (both conscious and unconscious). As such, they can be greatly influenced by culture, media, and experiences.
Classical conditioning causes us to feel positively about things that have been linked to pleasant, life-promoting experiences, and negatively about things that have been linked to unpleasant, life-threatening experiences. Advertisements often link images of happy people with strong social ties to the enjoyment of their products, as a way to subtly pair the two in our minds. We consciously like people or objects that elicit positive associations and dislike those that elicit negative associations. When we have developed an implicit negative association, we continue to feel negatively unless the association is countered by explicit positive information.
Heuristics are automatic decision rules that allow us to quickly evaluate information and develop attitudes. Some common ones include:
We tend to assume that big words and numbers mean something is well-documented and a trustworthy source.
We assume that successful and famous people are more likely to be correct than someone unknown.
If the values in the message are similar to one’s own, it must be correct.
The more people believe this message, the truer it must be.
These heuristics are mental habits that allow for the quick processing of new information.
In the elaboration likelihood model, the personal relevance of a message will cause it to either be processed superficially or systematically. When things seem less relevant, we tend to spend less mental effort on them. Lacking in personal relevance, people tend to assume that if an opinion comes from an expert source, it must be correct. When an opinion is high in personal relevance, people will attend to the specific arguments in a careful, systematic way.
The cognitive dissonance theory suggests that we have a mental mechanism which causes us to feel a lack of harmony when we do something counter to our own explicit and implicit attitudes and beliefs. The dissonance drive causes us to attempt to regain harmony by rationalizing our behaviour so it fits more comfortably in with our beliefs.
People have a tendency to avoid information they disagree with, that causes dissonance. In this way, people seek out information that “proves them right” and ignore that which might “prove them wrong”.
After making an irreversible decision without being entirely sure of our choice, the lingering doubts become discordant with our action. Because of this, our brain encourages us to justify this choice to ourselves and set those doubts aside.
When people have done things that run counter to their attitudes, sometimes the only way to reduce dissonance is to change their attitudes entirely. This is called the insufficient-justification effect. For this to occur there needs to be no clear incentive for performing the counter-attitudinal action. The action must also be perceived as occurring by one’s free choice rather than the influence of another.
Effects of Observation and Evaluation
Social facilitation occurs when having an audience has a positive effect on a person’s behaviour. The opposite of this is social interference, when having an audience has a detrimental effect on behaviour.
Social facilitation usually occurs when a person is performing simple or well-rehearsed tasks, while social interference most often occurs when the task is difficult, complex, or requires new learning. Zajonc called the simpler tasks dominant and the more complicated ones non-dominant. He suggested that because having an audience increases a person's drive and arousal, tasks that benefit from this increase (dominant tasks) are facilitated, and vice versa. The effect of an audience thus depends on how comfortable with a task a person is: an expert pool player will perform better with an audience, while an amateur will perform worse.
The primary cause of interference seems to be evaluation anxiety, the worry that those watching will negatively judge the performance.
Social interference is part of the phenomenon called choking-under-pressure. “Choking” is especially likely to occur on tasks that require the working memory, since distracting thoughts take valuable mental space.
The more pressure there is to perform well on an academic test, the harder it is for people to keep their thoughts on the task at hand. This causes them to do exceptionally poorly on areas that require working memory. On the other hand, on sections of the test that do not require much working memory, they do as well or better than others.
Stereotype threat is a type of interference that occurs when people are reminded that they belong to a group that stereotypically performs badly on the task at hand. Girls who are reminded before a math test that girls are thought to be less mathematical than boys perform much worse than girls who are not reminded of this stereotype. This is related to the self-fulfilling prophecy. Being made aware of how stereotype threat works is often enough to reverse its effects.
Impression management is the set of methods people employ to influence the impressions others have of them.
People can be considered as actors, playing a different role in every different situation. We might also be considered intuitive politicians, performing and compromising in certain ways to achieve our goals. We often don’t even notice that we’re campaigning for our interests. Most of the time, we want to look good- attractive, friendly, competent, modest, etc.
People tend to be more concerned with managing the impressions that acquaintances have of them, and be more relaxed around friends. Dating partners are more concerned with making a good impression on each other than married partners.
When we learn about the objective nature of a thing or event by observing clues given by other people, this is called informational influence. Conforming allows for a higher likelihood of safety and promotes group adherence and acceptance. Social influence that works on people’s desires to be part of a group is called normative influence.
Solomon Asch (1950s) conducted experiments designed to determine whether people would conform to group opinion even in the face of clear-cut evidence. He gave a group an absurdly easy perception problem in which they had to say which of several vertical lines were identical in length. One member of the group was a real subject; the rest were confederates of the experimenter, instructed to all give the same wrong answer. In this situation, subjects conformed to the wrong answer on roughly 37% of the trials.
The social influence in Asch’s experiment has been found to be partly informational but mostly normative. When not required to state their answer aloud but instead to write it down, rates of conformity dropped significantly, though not entirely.
In further tests, one confederate was told to answer differently from the rest. The presence of this nonconformist significantly reduced the subject's conformity, perhaps because it helps deter complacency and the assumption that the majority must be right.
Criminologist George Kelling came up with the broken windows theory of crime and applied it in New York City law enforcement, greatly lowering crime. He suggested that physical evidence of crime or chaos (graffiti, broken windows, etc.) causes people to believe that disrespect for the law and the city is normal. Cleaning the city and enforcing laws against petty crimes created an atmosphere of order that made criminal activity seem abnormal once again.
Public service messages that mention the number of people engaged in an activity ("Half of all teenagers start smoking!") accidentally imply that the activity is a norm. It is more effective to imply that the activity is both abnormal and wrong.
The more witnesses there are to a crime or an accident, the less likely each witness is to help. This is partly due to diffusion of responsibility: each person feels that someone else might help, and so feels less responsible to do so. It also has to do with conformity: if someone trips and twists their ankle, you will look at the people around you to see whether they move to help. If they don't, you might question whether it is really the emergency you thought it was, and remain inactive to avoid looking foolish. Helping is more likely if the bystanders know each other well or if the victim shows clear signs of distress.
Group polarization occurs when a group is unevenly split on an issue – discussion of the issue tends to push the majority towards an extreme version of their original point of view. An even split in a discussion causes the winning side to form a more moderate version of their original point of view. Entering a group that shares your viewpoint is most likely to make your view much more extreme.
On the informational side, discussion in a majority group tends to pool the "for" arguments while the "against" arguments go unvoiced. Discussion also serves to validate opinions that might previously have been tentative. On the normative side, people may shift their opinions to be both more similar to and more extreme than those of the group. The one-upmanship hypothesis suggests that people try to stand out as strong supporters of popular opinions; the group differentiation hypothesis suggests that people try to differentiate themselves within the group while still conforming to it.
Groupthink is a term used to describe the mode of thinking that people engage in when involved in a cohesive in-group. Their striving for unanimity overrides the motivation to consider alternative courses of action and realistically appraise the situation. This phenomenon is partly responsible for a number of disastrous political decisions. The effect is diminished when leaders let their advisors discuss and debate a point without revealing their own views, and when group members focus on solving the problem rather than on unifying the group.
People have a tendency to comply with the requests of others, especially when the requests are made politely, when they are reasonable, and when their attention is otherwise distracted.
The mechanism of cognitive dissonance can be used to manipulate people into complying with suggestions. Some common techniques used in sales that work on cognitive dissonance are as follows:
The Low-Ball Technique: in this technique, the salesman first offers a low price. Once the customer agrees on this price, they wait and then claim that something has caused the low price to be impossible, offering the high price instead. During the delay, customers have usually reduced cognitive dissonance by convincing themselves they want the product.
The Foot-in-the-door Technique: People are much more likely to agree to a large request if they have already agreed to a small one. This is because, to relieve cognitive dissonance, people convince themselves that they fulfilled the first request due to the qualities and trustworthiness of the asker.
Globally, people tend to comply with a reciprocity norm: the obligation we feel to return favours, even those we didn't want in the first place. Exploiting this norm is called pregiving, and often involves giving an item away for free to encourage people to buy more of it or give a donation. Used together, pregiving and the foot-in-the-door technique tend to cancel each other out.
We automatically like and trust people with whom we share something in common, or with whom we’ve had a friendly conversation. Because of this, salespeople who can make connections with people quickly are more successful.
Obedience is compliance in the case where the requester is seen as an authority figure and the request is perceived as an order. People learn to obey parents and teachers, and later new authority figures. When a leader is unethical or immoral, people might obey criminal orders in crimes of obedience. Stanley Milgram experimented on obedience in the early 1960s.
In the Milgram experiment, the subject is brought in and given what they think is the randomly assigned role of "teacher", while a second person (a confederate pretending to be a subject) is given the role of "learner". The learner is strapped into a chair and electrodes are taped to his wrists; he mentions that he has a heart condition. The subject is told that the learner will receive a shock for every wrong answer. The subject is then moved to an adjacent room, from which they can communicate with the learner through an intercom, and instructed to administer quick tests of verbal memory, giving a shock after every wrong answer and gradually increasing the voltage. As the subject administers the shocks, the learner protests into the intercom, more and more frantically; at the highest levels he screams in pain and exclaims that he cannot take any more, but the experimenter prompts the subject to continue. Although subjects tend to believe the learner is suffering, most go on when prompted, up to the maximum voltage, even while pleading with the experimenter to let them stop.
Social psychologists have come up with a number of explanations to why people obey authority:
Society trains people to obey legitimate authorities and play by the rules. This is the norm of obedience.
Obedience is often based on an assumption that the person giving the orders is in control and responsible for the outcome. The more self-assured and confident the authority seems to be, the more obedient the subject would be.
The physical proximity of the experimenter and the distance of the learner are important factors in the obedience of the subject in Milgram’s experiment.
Without an alternative model as to how to behave, people follow the only one provided when a situation is stressful and unfamiliar.
Because the shocks were given incrementally, to stop later on would be to admit that all previous shocks were also wrong.
Milgram's experiment has been criticized by many. Some cite ethical reasons: the experiments caused the subjects a great deal of distress. Although Milgram took care to protect the subjects from psychological harm, and most subjects later interviewed expressed no regret at having taken part, a similar study would be rejected by ethics committees today. Other critics suggest that, because of the strict laboratory environment, the results are not really indicative of real life.
Social dilemmas are the tension between acting in one’s own self-interest (defection) and acting for the needs of the group (cooperation).
The tragedy of the commons is a social-dilemma allegory in which a farmer must decide whether to add another cow to his herd. The common pasture is already full, and if every farmer adds a cow, the pasture will fail. However, if this one farmer adds a cow, his profits will be large. Every farmer thinks this way, all buy an extra cow, and the pasture fails, making the whole town suffer. Group projects pose the same social dilemma: the cooperative solution is social working; the non-cooperative solution is social loafing, or free riding.
The one-trial prisoner's dilemma game plays as follows: two prisoners must decide whether to confess or remain silent. If both stay silent, they both get a short prison sentence. If both confess, they both get a moderately long sentence. If only one confesses, he is granted immunity while the other gets a very long sentence. Neither can communicate with the other until the choice has been made. In the one-trial version, it is most "logical" to choose the self-serving option and defect; yet both players would be better off if both behaved illogically and cooperated.
If the prisoner's dilemma game is played with monetary values rather than prison sentences, the logic remains the same. When the game is played repeatedly rather than once, the logic changes: cooperating becomes the most reasonable choice, if you can induce the other player to cooperate too. In computer tournaments of strategies for this game, Tit-for-Tat (TFT) won. TFT cooperates the first time it meets a new opponent, then on each subsequent trial does whatever the opponent did on the previous trial. This works because cooperation encourages cooperation from others, while copying defections discourages the other player from defecting; when the other player resumes cooperating, TFT cooperates as well. It is also simple, making it easy for the opponent to realize that the best option is cooperation.
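The Tit-for-Tat strategy can be sketched as a short simulation. The payoff values below are the conventional illustrative ones (temptation 5, mutual cooperation 3, mutual defection 1, sucker 0), not taken from the text:

```python
# Iterated prisoner's dilemma: "C" = cooperate, "D" = defect.
# PAYOFF maps (my move, opponent's move) to (my points, opponent's points).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker vs temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move; afterwards copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=6):
    """Play two strategies against each other and return their total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the OTHER's history
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Against a pure defector, TFT loses only the first round and then
# punishes every defection; two TFT players cooperate throughout.
print(play(tit_for_tat, always_defect))  # -> (5, 10)
print(play(tit_for_tat, tit_for_tat))    # -> (18, 18)
```

Note that TFT never beats an individual opponent; it wins tournaments because mutual cooperation accumulates far more points than mutual defection.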
When the prisoner’s dilemma is increased to more players, it becomes a public goods game. In this case, each player can contribute. The more who do so, the higher the payout to every player. However, if one individual doesn’t pay, he still receives the payout. The more players are involved, the less likely one player is to contribute, resulting in fewer rewards for all. People in smaller groups earn more.
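The public goods payoff structure can be sketched numerically. The endowment and multiplier below are assumed values chosen for illustration (the multiplier must lie between 1 and the number of players for the dilemma to arise):

```python
def public_goods_payoffs(contributes, endowment=10, multiplier=2):
    """Each contributor puts their endowment into a common pot; the pot is
    multiplied and split equally among ALL players, contributors or not.

    contributes -- list of booleans, one per player (True = contributes)
    Returns each player's final payoff, in the same order.
    """
    n = len(contributes)
    pot = sum(endowment for c in contributes if c) * multiplier
    share = pot / n
    # Non-contributors keep their endowment AND still collect the shared payout.
    return [share + (0 if c else endowment) for c in contributes]

# Four players, one free rider: the free rider ends up ahead of everyone,
# which is exactly the temptation the text describes. If all four had
# contributed, every player would have received 20.
print(public_goods_payoffs([True, True, True, False]))  # -> [15.0, 15.0, 15.0, 25.0]
```

The numbers show the dilemma directly: universal contribution beats universal defection for everyone, yet each individual always does better by not paying.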
Most real-life situations are not simple cost-benefit issues. We care not just about our short-term gains, but also about long-term positive social relationships, and other factors.
The TFT strategy is good for the long-term because each player is accountable for their actions. Accountability establishes a reputation as being one who helps others and reciprocates when treated well, but isn’t exploited.
A sense of fairness goes beyond self-interest. In an ultimatum game, two players are told that they will be given a certain amount of money to divide between them. One player, the proposer, must suggest how the money will be divided. The other, the responder, can accept the proposal or reject it, in which case neither player receives anything. Although the responder should logically accept any proposal that earns them more than $0, most people reject offers significantly below half. It has been found that people are more willing to punish a cheater at their own cost than to let someone get away with an act of unfairness.
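The ultimatum game's payoff logic can be sketched as follows. The 30% rejection threshold is an assumption chosen for illustration; real thresholds vary by person and culture:

```python
def ultimatum(total, offer_to_responder, rejection_threshold=0.3):
    """Return (proposer_payoff, responder_payoff).

    A purely 'rational' responder would accept any offer above zero, but
    people typically reject offers below some fairness threshold (assumed
    here to be 30% of the total), leaving BOTH players with nothing.
    """
    if offer_to_responder >= rejection_threshold * total:
        return (total - offer_to_responder, offer_to_responder)
    return (0, 0)  # the responder pays a personal cost to punish unfairness

print(ultimatum(100, 50))  # fair split accepted -> (50, 50)
print(ultimatum(100, 10))  # lowball rejected    -> (0, 0)
```

Rejection is "irrational" in one-shot terms, but it enforces a fairness norm that benefits cooperators in the long run.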
Personal identity and social identity are ways of looking at oneself that relate to defection and cooperation. We tend to cooperate more when we feel we belong to a group than when we feel others belong to a different group than our own. Identification makes us feel connected and increases our willingness to help those in our group, as this will likely benefit us in the long run. Hostility against groups to which we don’t belong also results from this, and can be incredibly intense.
People can be incredibly vicious when we see ourselves as part of a group united against another group. Because of this, between-group hostility can be hard to ethically study.
Muzafer Sherif conducted a study in the 1950s in which he divided a group of young boys at a summer camp into two groups and assigned each group separate tasks. Within a few days, each group established a coherent group identity, with rules, behavioural norms, and a name (the Eagles and the Rattlers). Competitions were then arranged in which prizes would be awarded to the winning group. Three changes occurred:
Within-group solidarity: Working against a common enemy strengthened group cohesion and loyalty.
Negative stereotypes of the opposing group: Each group began to develop stereotypes about the other (objectively identical) group, differentiating themselves from that stereotype.
Hostile intergroup interactions: Good sportsmanship collapsed and the groups became violent and aggressive towards one another.
Reducing hostility between warring groups is more difficult than creating it. The only way Sherif found that worked was to introduce superordinate goals, goals that each group desired and could only be reached through cooperation. This causes group boundaries to fade once more, and social identity to extend to the other group.
People describe social loss and exclusion as similar to physical pain. Because of this, it is called social pain by psychologists. The two are similar to the point that drugs which numb physical pain also numb social pain. The same areas of the brain that activate in physical pain also activate from social pain. Social pain motivates us to reach out to people with whom we can cooperate and to avoid social rejection.
We tend to automatically mimic the postures, attitudes, and even styles of speech of our group members. This extends to emotions. This makes emotions within a group somewhat contagious, and causes the group to feel and often act as one. Laughter is the most contagious of all emotional signals, encouraging general group harmony and cohesiveness.
Guilt, shame, embarrassment, and pride are self-conscious emotions, since they are linked to a person’s thoughts about themselves.
Guilt most often occurs when we feel we have neglected, offended, or been disloyal to a friend or partner. Guilt is a motivator to repair relationships, either through apology or behavioural change.
Shame focuses our attention on a real or imagined flaw in ourselves, such as incompetence, appearance, or moral character. Shame motivates people to withdraw and hide. Expressions of shame may induce compassion and sometimes guilt in possible judgers or punishers.
Embarrassment most often occurs with inadvertent violation of a social norm, or the unexpected or undesired attention of others. It more often occurs around strangers (who are unpredictable). The human expression of embarrassment is similar to the appeasement displays of other primates, an apologetic way of saying “Whoops, I didn’t mean it.”
Pride, the opposite of shame, occurs when we do something well and experience a boost in self-esteem. It might serve as a sign to others to get close and receive the possible benefits of getting to know this person. It also acts as an index of social acceptability. It is a reward for doing good.
Personality as Dispositions or Traits of Behaviour
Personality is a person’s general style of interacting with the world and other people. It is what makes one person unique from another. A personality trait is a stable predisposition to behave in a certain way. Traits are personal and consistent, based on the person and not on the situation. States of motivation and emotion, on the other hand, are temporary. Traits exist as dimensions on which people differ by degree.
The goal of trait theories is to establish clear personality dimensions which can be used to summarize the fundamental psychological differences between people.
Factor analysis is a statistical technique that is used to analyze patterns of correlations to extract defined factors. Looking at data from a large sample of personality tests in which people need to use adjectives to describe themselves, factor analysis can tell us which dimensions of personality exist and clear up redundancies.
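Factor analysis starts from the matrix of correlations between all pairs of self-ratings. As a rough pure-Python illustration (the adjective ratings below are hypothetical), a high correlation between two adjectives such as "talkative" and "sociable" marks them as redundant, i.e. likely reflecting the same underlying factor:

```python
# Minimal Pearson correlation over hypothetical 1-5 self-ratings.
# Real factor analysis then extracts factors from the full matrix of
# such correlations; this sketch only shows the starting point.
from math import sqrt

def pearson(xs, ys):
    """Pearson's r between two equal-length lists of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical self-ratings from six people on three adjectives.
talkative = [5, 4, 2, 1, 4, 3]
sociable  = [4, 5, 2, 1, 5, 3]
calm      = [2, 1, 4, 5, 2, 3]

print(round(pearson(talkative, sociable), 2))  # strongly positive: redundant adjectives
print(round(pearson(talkative, calm), 2))      # strongly negative: opposite poles
```

The strongly positive correlation suggests "talkative" and "sociable" measure the same dimension (extraversion), which is exactly the redundancy factor analysis is used to clear up.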
Raymond Cattell (1950) condensed 17,953 English personality adjectives down to 170 that seemed to be logically different from one another. He had a large sample of people rate themselves on these characteristics and performed factor analysis on the results. From this, he identified 16 trait dimensions. He created the 16 PF Questionnaire to measure them.
The most well-known and supported taxonomy of personality is the five-factor model. It was originally based on a combination of the lexical and statistical approaches. Allport and Odbert organized trait terms into four lists – stable traits, temporary states/moods, social evaluations, and metaphorical/physical terms. The deeper study of the stable traits using factor analysis allowed psychologists to come up with a five-factor solution, and finally identify the Big Five.
The Big Five
Surgency/extraversion:
Adventurous vs. cautious
Talkative vs. silent
Open vs. secretive
Sociable vs. reclusive
Agreeableness:
Cooperative vs. negativistic
Mild vs. headstrong
Good-natured vs. irritable
Not-jealous vs. jealous
Conscientiousness:
Scrupulous vs. unscrupulous
Persevering vs. quitting
Responsible vs. undependable
Fussy vs. careless
Emotional stability:
Composed vs. excitable
Poised vs. nervous
Calm vs. anxious
Openness-intellect:
Intellectual vs. unreflective
Imaginative vs. direct
Polished vs. crude
Artistic vs. non-artistic
The NEO Personality Inventory was developed by Costa and McCrae. It includes 240 statements to which the subject must react on a five-point scale ranging from “strongly disagree” to “strongly agree”. Most personality tests are very transparent, so the person taking them must be honest and insightful for the results to be accurate.
Studies have shown that the NEO-PI is valid. Scores on the test correlate with real-life behaviour. People high on neuroticism have been shown to pay more attention to threats and negative information, to divorce more frequently, and to be more susceptible to mood and anxiety disorders. People high on extraversion have been shown to attend more parties and be rated more popular. They tend to be seen as leaders, live and work with people, and be less disturbed by loud sounds and intense stimuli. People high on openness to experience tend to be more likely to enrol in liberal arts degrees, to change careers often in adulthood, to play an instrument, and to be less racially prejudiced. People high on agreeableness are more willing to lend money, less likely to have childhood behavioural problems, and have more satisfying marriages. People high on conscientiousness tend to be more sexually faithful, to receive higher grades and job ratings, and to maintain healthier lifestyles.
Personality has been found to be relatively stable over time. It becomes increasingly stable up until the age of 50, and remains relatively constant thereafter. However, some degree of change can happen at any age.
Relatively consistent personality changes that occur in people as they grow older are thought of as maturity. In the adult years, neuroticism and openness to experience decline while conscientiousness and agreeableness increase. Agreeableness increases well into the 70s and 80s. A major life change (marital status, disease, career change) can result in a personality change at any age.
Average heritability is roughly .50 for most traits, including the Big Five. Thus, environmental influences play roughly as large a part in personality as genes do.
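The arithmetic behind a heritability of .50 can be sketched as a simple variance decomposition; the variance figures below are made up for illustration:

```python
# Heritability is the proportion of total trait variance attributable
# to genetic variance. The numbers here are assumed for the example.

genetic_variance = 4.0        # variance attributable to genes (assumed)
environmental_variance = 4.0  # variance attributable to environment (assumed)
total_variance = genetic_variance + environmental_variance

heritability = genetic_variance / total_variance
print(heritability)  # 0.5 -> genes and environment contribute equally
```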
Being raised in the same family actually has very little effect on measures of personality. Twins raised apart differ from each other in personality about as much as twins raised together. This may be partly because everyone experiences their environment differently and is thus influenced by it in a different way.
Genes influence personality because they affect the physiological characteristics of the nervous system and influence neurotransmitters in the brain. The effect of any gene on personality depends on both the mix of other genes that the person carries and environmental influences that might trigger the gene’s activation.
Personality is part of genetic diversity, and seems to be a basic biological fact of life. In every animal species tested, individual differences in behavioural styles can be found.
Personality diversity reduces the potential for dramatic loss while maintaining the potential for substantial long term gains. Because of this, personality diversity would be favoured in evolution.
The most common personality trait studied in animals is the boldness-cautiousness dimension in many varieties of fish. Bold fish adapt more readily to new environments, more frequently swim off alone, and are more likely to approach a human observer. When grouped with other cautious fish, cautious fish become bolder, likely filling a group need for exploration.
The Big Five traits can be thought of as problem-solving strategies, as each comes with a wealth of both benefits and downfalls. Because environments tend to vary in unpredictable ways, it makes evolutionary sense for offspring to be born with different personality-based problem solving strategies. This increases the odds that some of them will survive. People can adapt their personality traits, to a certain degree, according to long-term environmental demands.
Home environments might offer an equal chance of divergence in personality traits between siblings as they do convergence. Existing differences might elicit parental reactions and treatment that further enhances those differences.
Siblings tend to define themselves as different from one another and exaggerate those differences through behaviour. Parents also focus on the differences rather than the similarities of their children. This emphasis is called sibling contrast. This may be a strategy to diminish sibling rivalry, allowing each child their own area of characteristics which cannot be judged comparatively. Sibling contrast might also serve the evolutionary need to diversify parental investment.
Overall, first-born children are higher in conscientiousness, lower in openness, higher in dominance and lower in sociability than later-born children. It is suggested that this is because first-born children have a longer time living as the only child in a family, creating stronger bonds with parents and a strong respect for conventions and authority. Later-born children enter a family which already has a child, and have to learn to be second. They attempt more new things, since they cannot compete with the activities and accomplishments of the first-born. This makes them generally higher in openness to experience.
Small to moderate differences in personality test scores between the genders do exist. The largest and most consistent difference is the tendency for women to be higher in agreeableness than men. There is also a tendency for neuroticism to be higher in women, as well as conscientiousness. Women score higher on warmth and gregariousness but not on the excitement-seeking facet of extraversion. Personality differences that run counter to stereotypes can have social and emotional costs, as society is structured in a way that favours the maintenance of gender differences.
Evolutionary explanations for gender differences in personality suggest that women have adapted more to childcare and social cohesiveness while men have adapted for competitiveness and risk-taking.
Cultural theorists explain gender differences in personality according to the fact that societal forces encourage girls to develop the nurturing, conscientious, and agreeable parts of their personalities while boys are encouraged to develop competitiveness, aggression, and risk-taking. Gendered personality differences have been found to change over time as culture changes. It has also been found that in more prosperous countries, the gender differences are greater than in poorer countries. This is likely because, when presented with the choice, people tend to choose situations in which their own inherent traits are encouraged. Occupational roles have also become more inclusive, functioning with both female and male styles of personality.
Freud coined the term psychoanalysis to refer to his method of talking therapy and to his theory of personality. His was the first of what are now known as psychodynamic theories, which suggest that people are often unaware of their motives and that defense mechanisms work to keep unacceptable thoughts out of the consciousness.
In order to understand people’s actions, problems and personalities, Freud argued that you must first understand their unconscious. This is done through careful analysis of hidden cues in behaviour and speech. The least logical responses would provide the best clues.
Freud believed that sex (libido, life-seeking) and aggression (thanatos, destructive) are the two basic drives that influence human behaviour.
Karen Horney developed a psychodynamic theory in which she considered security to be the basic human need. In her theory, basic anxiety is the feeling of being isolated and helpless in a hostile world. In succeeding or failing at relieving this anxiety, parents can influence the child’s developing personality. The object-relations theory of personality suggests that the way people interact with others stems from their attachment tendencies to people and objects in childhood. Alfred Adler suggested a different theory, in which personal achievements or a lack thereof will cause a person to develop either a superiority complex or an inferiority complex.
Anna Freud developed the concept of defence mechanisms more thoroughly than her father. They are as follows:
Repression: Anxiety producing thoughts are pushed out of the conscious mind, “bottled up” until they might spill out and reveal themselves through distortions.
Displacement: An unconscious drive that cannot be fulfilled is redirected towards a more acceptable alternative. A desire for the intimacy of breastfeeding might manifest in the tendency to take up smoking or overeating.
Sublimation: A type of displacement in which the drive is redirected to creative or socially useful purposes. An aggressive person might become a lawyer or a competitive runner.
Reaction formation: The conversion of a frightening desire into its safer opposite. Someone with frightening sexual desires towards the same sex might overcompensate by becoming intensely homophobic.
Projection: A person experiences his or her own unacceptable desire or drive as if it were someone else’s.
Rationalization: The use of reasoning to explain away undesired thoughts and feelings.
People differ in defensive styles, the particular blend of defense mechanisms they employ to cope. The most fully researched style is repressive coping.
Freud claimed that people repress unwanted memories. Unfortunately, it is hard to determine whether recovered memories ever actually happened, or if they are a product of our constructive memory systems. There is, however, a great deal of evidence that people repress emotional feelings related to disturbing events. People who can describe these events but not their accompanying emotions are called repressors, and answer related questions defensively while reporting little anxiety. Repressors show less psychological stress and more physiological distress when recalling the unhappy event. Repressors tend to be more emotionally positive, preserving their working memory for rational planning and problem solving rather than emotions. Unfortunately, there are often long term health problems and physical pain associated with repression.
George Vaillant did a study in which he divided defense mechanisms into categories according to their effectiveness. Immature defenses distort reality and lead to ineffective actions. Intermediate defenses involve less distortion and lead to somewhat more effective actions. Mature defenses are the most adaptive and least distorted. The most common of these is suppression; another is humour. Mature defenders tend to be happier and more successful in love and work.
Phenomenological reality is a person’s conscious understanding of his or her own world. Self-concept is an essential part of this reality. Carl Rogers based his humanistic theory on this, calling it self theory. He suggested that a common goal for most people is the desire to discover and embrace the self. This desire is usually diverted by the pressures of other people and the demands of authority. People succeed more at tasks that they feel are the result of their own decisions and not those of others.
Self-actualization is the process of becoming one’s full self, of reaching one’s potential in the face of environmental challenges. To grow best, one must be able to make one’s own decisions and trust oneself to do so. Abraham Maslow suggested a hierarchy of five sets of needs, in which the more basic needs must be satisfied before self-actualization can take place. This is often depicted in a pyramid diagram:
Self-actualization
Esteem
Belonging and love
Safety
Physiological Needs
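The hierarchy above can be sketched as an ordered structure in which a person's focus is the lowest unmet need; the function and the set of satisfied needs below are illustrative assumptions, not part of Maslow's own formulation:

```python
# Maslow's hierarchy as an ordered list, most basic needs first.
# "Focus on the lowest unmet need" is the textbook reading of the model.

HIERARCHY = [
    "physiological needs",
    "safety",
    "belonging and love",
    "esteem",
    "self-actualization",
]

def current_focus(satisfied_needs):
    """Return the lowest need in the hierarchy not yet satisfied."""
    for need in HIERARCHY:
        if need not in satisfied_needs:
            return need
    return HIERARCHY[-1]  # all lower needs met: self-actualization continues

print(current_focus({"physiological needs", "safety"}))  # belonging and love
```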
People strive to make the events in their lives fit into a whole story, a personal myth. This gives a sense of direction and meaning to life. People who are happy tend to have life stories filled with situations in which they overcame or learned from hardships, and a striving to give back to society. A life story can change as life goes on, and a person’s perspective reflects their current situation. Changes like these can be dramatic and fast, in the form of “epiphanies”.
Social-cognitive theories emphasize the role of general beliefs in the shaping of a person’s personality.
Rotter identified a disposition called locus (location) of control. He found that some people generally attributed events to external factors, like fate or other people. These people have an external locus of control. On the other hand, people with an internal locus of control feel like they have the power to control their own rewards and experiences. People with an internal locus of control are more likely to be proactive about their health and their life decisions, and tend to be happier and more confident.
Albert Bandura came up with the concept of self-efficacy, the degree to which a person feels they can perform tasks effectively. Improved self-efficacy actually leads to better results on a task – believe you can do it, and by golly, you can!
Another dimension of belief is the belief that one has the capacity to change. People who see themselves as fixed sit at one end, while people who believe themselves malleable sit at the opposite end. People with the malleable view strive for self-improvement, embracing education and rebounding quickly from setbacks. This belief can be taught and often leads to real change… in effect, proving those with a fixed point of view wrong.
Hope is considered one’s belief in their own ability to solve solvable problems and a belief that most problems can be solved. Dispositional optimism is the tendency to believe in the inevitability of positive outcomes. People with an optimistic style are often happier and more able to cope than people with a pessimistic style. Optimistic people are more likely to put in the effort required for success, and often achieve more success because of it.
Self-delusional forms of optimism can be damaging. Adolescents often feel indestructible, and may do dangerous things carelessly. Defensive optimism occurs when people push aside anxiety about negative outcomes and might not do what they need to improve. Pessimism can be both maladaptive and adaptive. The belief that you might not do well on a test acts as a motivator to work hard and beat the odds.
Social-cognitive theorists suggest that to know a person, you need to know not only the big five personality traits, but also in what sort of situations they manifest themselves. People may be aggressive in some situations and not others, extraverted at a party but introverted at an office meeting, etc.
The collectivism and individualism dimension that seems to apply to Western and Eastern cultures can also be considered a personality trait. Roughly 40 percent of people in collectivist cultures score as individualist on personality tests that measure this dimension. Having the opposite trait than the tendency of one’s culture often leads to maladjustment and psychological issues.
People from different cultures talk about personality in different ways. In China, emphasis is placed on harmony (peace of mind), face (concern for dignity), and ren qing (the mutual exchange of favours in a relationship). These concerns have caused Chinese psychologists to include a dimensional factor of interpersonal relatedness in their own personality model.
Diagnosing Disorders
The American Psychiatric Association developed a manual for diagnosing and distinguishing mental disorders, called the Diagnostic and Statistical Manual of Mental Disorders (the DSM). The manual is currently in its fourth edition, the DSM-IV.
Mental disorders are syndromes made up of interrelated symptoms. They involve a clinically significant detriment or impairment of function, they come from an internal source, and they are not subject to voluntary control.
Because a great deal of judgment is required to diagnose a mental disorder, there need to be standardized diagnostic criteria that allow separate diagnosticians to reach the same conclusion when diagnosing the same person. The DSM was a step towards such a system.
Validity is a measure of the extent to which a diagnostic system identifies clinically meaningful categories. It needs to be determined whether people with the same diagnosis can be treated in the same way, whether their illness comes from a similar cause, and whether the diagnosis helps predict future symptoms. Validity of any DSM entry can only be confirmed through research.
Adding a label, while helpful for treatment, can also be detrimental. It can reduce social esteem and self-esteem, interfering with a person’s ability to cope. It might also cause clinicians to ignore a person’s other qualities outside of the label. Referring to people as separate from their disease can reduce these effects (e.g. “Sara has schizophrenia”).
Culture-bound syndromes are those that are largely limited to a specific cultural group. One of these is taijin kyofusho, an incapacitating fear of offending or harming others. It seems to be an exaggeration of the politeness norm in Japanese culture. Anorexia nervosa and bulimia nervosa are likewise largely Western disorders.
Cultural values also affect the decisions that clinicians make in determining what to label as disorders. For a long time, homosexuality was considered a disorder. When it was discovered that the impairment homosexuals suffered was due to external prejudice rather than the condition itself, it was removed from the DSM.
Attention-deficit/hyperactivity disorder (ADHD) is one of the most frequently diagnosed disorders in the United States, more often in boys than girls. It is characterized by inattentiveness to teachers and schoolwork, carelessness in completing assignments, fidgeting, interrupting others, etc. Critics of the high rate of this diagnosis suggest that these behaviours are a symptom of an inappropriate school system and increasingly high academic standards at a young age.
In chronic mental disorders, the role of the brain is clearer than in other mental disorders. For example, autism corresponds to a brain abnormality that is caused by genes and sometimes by prenatal toxins. Down syndrome is congenital and is caused by an error in meiosis which results in an extra chromosome 21 in the egg or sperm cell. This creates damage to the developing brain. Alzheimer’s disease, characterized by a degeneration of cognitive abilities and memory, includes the presence of amyloid plaques in the brain which disrupt neural communication.
Episodic mental disorders are often temporary and reversible, brought on by a stressful environment. However, some people may have a predisposition for the disorder. Most mental disorders are to some degree heritable.
There are three categories of causes for mental disorders:
Predisposing causes: In place before the onset of the disorder, these make the person susceptible to the disorder. Causes include genes, prenatal exposure to toxins, birth difficulties, and some viruses. Damaging social environments (like abusive homes), learned beliefs, and maladaptive thinking patterns are also predisposing causes.
Precipitating causes: Any event in a person’s life that triggers the disorder, such as a death, a new responsibility… any large change in daily life that can bring on a disorder. These causes tend to fit under the umbrella of stress.
Perpetuating causes: Anything that causes a disorder to continue, especially circumstances caused by the disorder itself. For example, depressed people withdraw from friends, causing them to be lonely and perpetuating the depression.
Women are more often diagnosed with anxiety and depression, while men are more often diagnosed with intermittent explosive disorder, antisocial personality disorder, and substance-use disorders. Reasons for these differences include:
Men, encouraged to be the stronger sex, may not report or admit to feeling anxiety or depression.
Diagnosticians may expect to find a certain disorder more in one sex than the other, skewing their diagnoses through the expectancy bias.
Some sex differences in disorders may be caused by the different social experiences and responsibilities of each gender.
The sexes respond differently to stress, even in objectively similar situations. Women tend to internalize discomfort (becoming depressed and anxious), while men externalize it (becoming aggressive or violent).
Anxiety disorders are those in which fear and anxiety are the most prominent symptoms. These include generalized anxiety disorder, phobias, obsessive-compulsive disorder, panic disorder, and posttraumatic stress disorder.
People suffering from generalized anxiety disorder worry continuously about multiple issues, experiencing muscle tension, irritability, sleep issues, and gastrointestinal upset. People with this disorder are hypervigilant towards negative and threatening stimuli, which may be a predisposing cause. Rates of generalized anxiety disorder have risen from the late 20th century to the present day.
A phobia is an intense, irrational fear related to a particular category of object or event. A social phobia is a fear of being evaluated or scrutinized by others. A specific phobia is a fear of a non-social category of object or event (like the fear of spiders, or a fear of heights). While aware of the irrational nature of the fear, a phobia sufferer cannot control their reaction.
Most people have some irrational fears; it is the degree and intensity of the fear that makes it a phobia.
Classical conditioning often plays a role in creating a fear (the pairing of a stimulus, like a dog, with a natural fear inducer, like being bitten). However, people who develop phobias to things like snakes (even in areas without them), may be experiencing a severe form of an evolutionarily inherited fear. The best way to overcome a phobia is to be exposed to the object of one’s fear for increasingly long periods of time.
Obsessions are intrusive, disturbing thoughts that repeatedly and irrationally enter consciousness. Compulsions are repetitive actions often performed in response to an obsession.
When obsessive thoughts and compulsive acts are severe, prolonged, and disruptive to normal life, the sufferer can be diagnosed with OCD. OCD often involves irrational fear of something that is only a thought, which can be alleviated by the performance of the compulsive act. The most common compulsions include checking and cleaning, based in a fear of disease or of accident.
Brain damage from an accident or a difficult birth may be a predisposing cause for OCD. Involved areas include the frontal cortex, the limbic system and the basal ganglia. These areas may be in charge of creating a sense of closure or safety following a protective action.
With panic disorder, a person suffers from panic attacks that involve a physiological response coupled with a frantic, desperate fear of losing control. People with panic disorder tend to develop agoraphobia, a fear of public places, because of the embarrassment and humiliation of public panic attacks.
PTSD is brought on by extremely stressful experiences. It is characterised by the frequent, frightening re-experience of the traumatic event, in nightmares and flashbacks. Turning to drugs often makes PTSD worse. Prolonged, repeated exposure to stress and traumatic events, as often occurs in war-torn countries, can wear down resilience and raise the incidence of the disorder.
Mood is a prolonged emotional state that influences thought and behaviour. It can be thought of as a dimension running from depression to elation. Extreme depression can be harmful, as can extreme elation, or mania. There are two main types of mood disorders – depressive disorders and bipolar disorder.
Depression involves a long period of sadness, self-blame, a feeling of worthlessness, and lack of pleasure. It can include changes in sleep, appetite, and motor symptoms. There are two classes of depression: major depression has severe symptoms that last at least 2 weeks uninterrupted. Dysthymia has less severe symptoms that last for at least 2 years. Having both is called a double depression.
The same genes predispose people to both anxiety disorders and depression. Anxiety is characterized by worry and frantic attempts to cope with real and imagined threats, while depression is more of a giving up.
Negative thoughts are common in depressed people, and may be a sign that a person is vulnerable to depression. Hopelessness theory suggests that depression comes out of this pattern of thought: The person assumes a bad event will lead to disaster, that the negative event reflects a personal flaw, and that the cause of the bad event is stable, global, and unchangeable. Cognitive therapy that changes this thinking pattern can help people overcome depression.
Stressful events, especially losses, can trigger depression. Some people are resilient to these stressful events, possibly as a result of a genetic difference.
Drugs used to treat depression increase the activity of norepinephrine and/or serotonin. One theory suggests that depressed people have a deficiency of these neurotransmitters. This has been largely disproven. Another theory is that depression results from a stress-induced loss of neurons or neural connections in parts of the brain.
Depression, in its less severe forms, may serve as a mechanism to cope with loss, to think realistically and stop pursuing hopeless goals. It induces a helplessness that may elicit aid from others. Depression in the winter, seasonal affective disorder, includes increased appetite, increased sleepiness, and lethargy. These may have been useful urges to conserve energy in the harsh winter months.
Bipolar disorders are characterized by manic episodes and depressive episodes which may last for a few days or a few months. Bipolar I disorder is the classic type, while in bipolar II disorder, the high phase is less extreme, called hypomania. Bipolar disorder can be controlled with lithium.
Manic episodes include a feeling of euphoria, high self-esteem, energy, enthusiasm, talkativeness, and grandiose ideas. Judgment is often poor and maladaptive, and inflated belief in one’s own abilities can lead to many problems. Some people experience mania as anger and suspiciousness instead.
Hypomanic episodes have been linked to the output of many famous creative people. It may be that creative people merely seem manic about their ideas, or that bipolar people tend to be more creative. It might also be that they are more drawn to creative activities as arenas for expression.
Somatoform disorders are those in which a person experiences physical symptoms or pain without a physical cause.
Somatization disorder is characterized by a history of dramatic complaints about one’s own medical problems. It seems to be closely related to depression. The more depression is recognized and treated by a culture, the less often somatoform disorders seem to occur. Clinicians must understand that the physical pain is real and must be treated through psychological counselling and anti-depressants.
Conversion disorder is characterized by a loss of body function (sight, hearing, use of limbs) that cannot be physically explained. While rare in modern Western societies, it was somewhat common 100 years ago. It occurs most often in response to traumatic events.
Studies of the brains of conversion patients and of people “pretending” to have the disorder showed large differences. These differences were smaller in people hypnotised to believe they had a certain disorder. Areas of the brain that inhibit sight or certain motor functions become more active in conversion patients.
Cases where negative thoughts and emotions precipitate clearly physical medical conditions fit into the category called psychological factors affecting medical condition, in the DSM-IV. This includes traumatic grief, experienced by widows and widowers, which can bring on disease.
People with a Type A personality are competitive, aggressive, and often workaholic, and are said to be at greater risk for heart problems. There may be no real personality type like this, however; the evidence actually indicates that people who experience more negative emotions are at heightened risk.
Emotional distress has a disabling effect on the immune system, increasing our vulnerability to disease. What evolutionary purpose could this serve? It may simply result from a redirection of energy away from disease resistance towards the more immediately pressing problem of distress.
Schizophrenia is a disorder that is slightly more prevalent, and more severe, in men than in women. It manifests most often in late adolescence and may or may not involve a full recovery.
To be considered schizophrenic, a person must experience a decline in ability to work, care for themselves, and connect socially with people. The person must manifest two or more of the symptoms explained below.
In the active stage of their disorder, people with schizophrenia have difficulty encoding information, making meaningful connections, and speaking in a coherent manner.
Delusions are false beliefs held even in the face of contradicting evidence. The most common include delusions of persecution, delusions of being controlled, and delusions of grandeur. They may result from difficulty in identifying the source of ideas and actions.
Hallucinations are false sensory perceptions, like hearing voices or seeing ghosts. They tend to coincide with delusions. Many experience verbal hallucinations, in which they “hear” their thoughts being broadcast.
People with schizophrenia often behave in disorganized, inappropriate ways. They often have difficulty forming a coherent plan of action in simple activities. Sometimes, people with schizophrenia have catatonic episodes, in which they behave in a way that is unresponsive to the environment. This may manifest in uncontrolled behaviour or in the near-stillness of a catatonic stupor.
Negative symptoms are a lack of expected behaviours, thoughts, feelings, or drives. They include flatness of affect, laboured speech, or loss of hunger and pleasure.
Subcategories of schizophrenia have been proposed, because schizophrenia manifests very differently in different people. The paranoid type includes delusions of persecution and grandeur accompanied by hallucinations. The catatonic type includes non-reaction to the environment, and the disorganized type includes disorganized speech, behaviour, and inappropriate or flattened affect.
Deficits in attention and memory processes occur in schizophrenia patients. Attention is lacking, as is memory retrieval. Remembering the source of new information is especially difficult, which may give rise to delusions that explain possible sources.
Overactive dopamine (especially in the basal ganglia) plays a role in schizophrenia, while underactive dopamine in the prefrontal cortex may induce negative symptoms. Glutamate also plays a role, with inhibition of glutamate producing some schizophrenic symptoms.
The cerebral ventricles of schizophrenia patients are enlarged while the surrounding neural tissue is reduced. Many areas of the brain are slightly decreased in mass in people who suffer from schizophrenia. A possible cause of schizophrenia might lie in the structural changes the brain undergoes during adolescence, when some neural connections are eliminated and others strengthened (this is called pruning).
Genetic differences play a substantial role in the predisposition for schizophrenia. It is highly heritable. The involved genes may include some that influence dopamine and glutamate neurotransmission.
Early brain traumas and prenatal stressors/toxins also play a role in predisposing people to schizophrenia. Prenatal malnutrition is one influence for which evidence has been found. Diseases during pregnancy also play a role.
Manifestation of schizophrenia might be precipitated by stressful life events. People raised by disorganized or highly emotional parents are more likely to develop schizophrenia than those with calmer parents. People in families high in expressed emotion (negative criticism directed at the sufferer of schizophrenia) have worsening symptoms and are more likely to be hospitalized.
Most aspects of schizophrenia are similar among the 13 different nations studied. However, a surprisingly high percentage of people were found to recover from this disorder, and this was more common in developing countries. This might be due in part to a stronger social community sense and less labelling that might discourage interaction. Antipsychotic drugs might also impede full recovery of the disorder.
Social Issue
In the Middle Ages, severe mental illnesses were called “lunacy” and sufferers were seen as devil-worshipers, tortured and killed. Around the 18th century, people with mental disorders were kept in dungeons, often chained to the wall and barely kept alive.
Philippe Pinel (1745-1826), while director of a mental hospital, unchained the mental patients and gave them sunny, airy rooms and exercise opportunities. Many who had been considered hopeless recovered enough to be released. Dorothea Dix (1802-1887) publicized the deplorable living conditions of institutions, eliciting public sympathy and kinder care techniques.
In the mid-1950s came a movement to deinstitutionalize people with mental disorders and establish out-patient care centres. Unfortunately, many released patients became homeless or turned to crime due to a lack of integration and social support.
Assertive community treatment (ACT) includes outreach programs designed to provide local mentally ill persons with treatment and care. These programs are highly effective in reducing the need for hospitalization.
Psychiatrists: a medical degree and the ability to prescribe drugs.
Clinical psychologists: doctoral degrees and training in clinical research and practice.
Counselling psychologists: Doctoral degrees in counselling, and less focus on research.
Counsellors: Master’s degrees in counselling, less training.
Psychiatric social workers: Master’s degrees in social work and advanced training in psychological issues.
Psychiatric nurses: Degrees in nursing with extra training in the care of mental patients.
Antipsychotic drugs are used to treat schizophrenia and other disorders with psychotic symptoms. Chlorpromazine was the first. All antipsychotic drugs decrease the activity of dopamine in certain areas of the brain. There are two classes of these drugs: typical drugs (like haloperidol), which were developed first, and atypical drugs (like olanzapine and risperidone), which are newer. All antipsychotic drugs have unpleasant and damaging physical side effects, from dizziness to sexual impotence to blurred vision. Prolonged use can cause symptoms of Parkinson’s disease and a motor disturbance called tardive dyskinesia.
Antianxiety drugs are often called tranquilizers. In the 1960s, the addictive barbiturates once used as tranquilizers were replaced with benzodiazepines, like chlordiazepoxide and diazepam. These produce their effect by augmenting the inhibitory neurotransmitter GABA. Benzodiazepines are dangerous when taken with alcohol, and they are moderately addictive. Some antidepressant drugs are more effective at treating anxiety.
Between the 1960s and the 1980s, tricyclics were the most common treatment for depression. They block the reuptake of serotonin and norepinephrine. In the mid-1980s, selective serotonin reuptake inhibitors (SSRIs) were developed; these are equally effective yet have milder side effects.
To assess drug effectiveness, comparisons must be made between the drug, a placebo version, and no treatment whatsoever. This allows researchers to distinguish between three types of recovery: spontaneous remission, the placebo effect, and the effect of the drug itself.
Primarily used in severe depression cases, electroconvulsive therapy involves electric shocks administered to the brain in a painless and safe way (though its origins were much less humane). The electric shocks produce a seizure of approximately a minute, and this can be repeated in a series over days or weeks. Some side effects can include memory loss.
The last resort in severe cases is called psychosurgery, the cutting or producing of lesions in the brain to relieve disorders. Between the late 1930’s and 1950’s, a prefrontal lobotomy was considered a good treatment for many severe disorders. However, it left patients unable to make and execute plans, resulting in a need for constant care. Refined types of psychosurgery involve disabling small parts of the brain with electrodes, which can be used to treat severe OCD. Deep brain stimulation is the implantation of a thin wire electrode permanently in the brain, which can be stimulated to produce a similar effect that can be reversed.
Psychotherapy aims at improving moods, thinking, and behaviour through talk, reflection, learning and practice. In many cases, psychotherapy works best in conjunction with biological treatments. Psychotherapy is any theory-based systematic procedure of this variety. There are many different types of therapies, and practitioners of different types acknowledge the strengths and weaknesses of the others. Some use methods from many schools of thought.
Psychodynamic therapy includes psychoanalysis and other more loose derivatives of Freud’s theories. What follows are the main principles shared by most psychodynamic therapies.
The uniting idea is that unresolved mental conflicts cause mental problems to arise, and that unconscious motives influence thoughts and actions. Most psychodynamic therapists believe conflicts can arise from problems at any time in life, not just in childhood as Freud suggested. They do, however, see sexual and aggressive drives as important and childhood as a vulnerable period.
Common to psychodynamic therapies is the idea that the therapist must look past symptoms to see the underlying unconscious conflicts that cause the disorder. This is done by analyzing clues that can be found in the patient’s speech and observable behaviour. The least logical elements provide the best clues. Some clues include:
Free associations: a technique in which a patient reports every image and idea that enters their awareness in response to a stimulus, without letting logic impede.
Dream interpretation: Because conventional logic is generally absent in dreams, Freud believed that one could extract a dream’s real meaning (latent content) from its remembered storyline (manifest content). Freud looked for sexual themes disguised in dreams, now known as Freudian symbols.
Mistakes and Slips of the Tongue: Clues can be found in the mistakes that people make, and accidental slips of the tongue. They are thought to indicate secret desires and conflicts.
Resistance occurs when the therapist comes close to uncovering unconscious memories or wishes. It includes “forgetting” to come to therapy, arguing, or refusing to talk about certain topics. Transference is the tendency of patients to transfer desires or feelings connected to unconscious memories onto the therapist, acting as though in love with or angry at the therapist.
Most psychodynamic therapists aim to infer the unconscious conflicts underlying a disorder and to make patients aware of their existence. Once conscious, these desires and beliefs can be acted upon or modified, and the defenses against them relaxed.
Humanistic therapy is based on the idea that people have the capacity to make adaptive choices in their own behaviour, to feel good about themselves, and to be motivated. The inner potential for positive growth is called the actualizing potential. People must be conscious of their thoughts and feelings, avoiding distortion and denial. Person-centred therapy focuses on the abilities and insights of the client, rather than of the therapist alone.
While “patient” implies a passive, weak role, “client” is an empowering term used by humanistic therapists. The therapist allows the client to take the lead while he or she focuses on listening and understanding.
The context that humanistic therapy tries to create is one in which the client becomes more self-aware and trusting of his or her own decisions. The therapist uses empathy to comprehend the client’s point of view, rather than making judgments.
Important in humanistic therapies is unconditional positive regard: the therapist must believe the client is worthy and capable regardless of his or her behaviour. This regard must be genuine to be effective.
Cognitive and behavioural therapies narrow in on symptoms and specific problems, and are concerned with data and objective measures. An integrated method popular today is cognitive-behavioural therapy.
Cognitive therapy assumes that people disturb themselves through illogical beliefs and maladaptive thoughts. Cognitive therapy identifies these ways of thinking and replaces them with adaptive ways.
There are different methods of pointing out irrational ways of thinking. One is the use of a Socratic questioning approach. Another is to give humorous names to styles of irrational thinking (musturbating = the belief that one goal or behaviour is the key to happiness; awfulizing = the mental exaggeration of inconveniences). In the ABC theory of emotions, A, the activating event, is followed by B, the belief triggered in the client’s mind, which causes C, the emotional consequence.
Recognizing irrational thinking is the first step; replacing it with rational, adaptive thinking is the second. This is a goal that requires work to achieve.
Cognitive therapists are directive, more like a consultant than a friend or teacher. When a client reaches the point where they no longer need guidance, the therapy is over.
Behaviour therapy is rooted in the research of Pavlov, Watson, and Skinner. Behaviour therapy clients are exposed to new environmental conditions designed to retrain them, extinguishing maladaptive habits and reflexes to replace them with adaptive ones.
Therapy programs that change the contingency between actions and rewards are called contingency management. This includes discovering in what way bad behaviours are rewarded, and reversing this response. Contingency management is used in mental hospitals in the form of token economies, in which tokens are awarded when positive acts are performed, and those tokens can pay for privileges. This has proven very effective in hastening treatment.
Behaviour therapy is useful in treating specific phobias through exposure treatment. Fears are classically conditioned associations that can be extinguished through repeated, safe exposure. Exposure can take the form of imaginative visualization of the feared object while remaining physically calm. In vivo exposure involves real-life exposure, often with the help of a therapist; it is gradual, beginning at a great distance and working up to the real object of fear. There is also virtual reality exposure, in which this is done on screens in virtual worlds.
To determine whether a type of psychotherapy is helpful, therapy-outcome experiments can be performed. This involves comparing people given no treatment with those given different types of treatment, for any given disorder. It has been found that most types of therapy are helpful, but none is necessarily better than the others.
Support, made up of acceptance, empathy, and encouragement, comes from social networks, therapists, and family. It enhances self-esteem and leads to general improvement. This is the most important role of therapy.
The expectation that things can and will improve comes from a faith in the therapeutic process, along with support. Hope plays a large role in the positive outcome of a therapy.
The main predictor of a therapy’s success is the client’s willingness and motivation to change. How many therapists does it take to change a light bulb? ONE, but it really has to want to change.