Social cognition concerns the processes by which people care about what others think of them and by which we try to understand the thoughts and actions of other people. In other words, social cognition is the study of how people make sense of other people and themselves. This can be studied in a number of ways, one of which is phenomenology – a systematic description of how people say they experience the world. Two main viewpoints in social cognition are naive psychology and cognition. Naive psychology refers to the common-sense beliefs people hold about the thoughts and behavior of themselves and others. The cognition viewpoint involves a detailed and systematic analysis of how people think about themselves and others, relying heavily on the tools of cognitive psychology.
Solomon Asch (1946) noticed that people describe others using a set number of descriptors (traits) which together form a unifying concept of that person (an impression). His research team designed a study demonstrating that descriptive words could cause participants to form particular impressions of people they had never seen before. This led to two models accounting for these results: the configural model and the algebraic model.
The configural model hypothesizes that people form a unified view of other people which denies variation. This means that if a particular behavior does not fit one's overall impression of a person, one may reinterpret the behavior so that it aligns. Context changes the meaning of different traits (e.g. a whiny child is tired, a whiny adult is immature). The brain attempts to organize and unify perceptions of people. All mental activity results in an impression made up of relationships, which together make up a schema.
The algebraic model does not begin with a unified whole, but with a number of isolated evaluations that are collected into a summary evaluation. It is called algebraic because trait evaluations are added together to form a total picture.
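To make the contrast with the configural model concrete, the algebraic idea can be sketched as a simple combination of numeric trait evaluations. The snippet below is only an illustration, not a model from the literature; the trait labels, ratings, and weights are hypothetical.

```python
# Minimal sketch of an algebraic (summing/averaging) impression model.
# Trait labels, ratings, and weights are hypothetical illustration values.

def algebraic_impression(evaluations, weights=None):
    """Combine isolated trait evaluations into one summary evaluation."""
    if weights is None:
        weights = [1.0] * len(evaluations)        # equal weighting by default
    total = sum(w * e for w, e in zip(weights, evaluations))
    return total / sum(weights)                   # weighted average

# e.g. "warm" and "intelligent" rated positively, "impatient" negatively
traits = {"warm": 3.0, "intelligent": 2.0, "impatient": -1.0}
print(algebraic_impression(list(traits.values())))   # summary evaluation of about 1.33
```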
Next a historical context will be presented, as the two models mentioned above form the core of much research that will be discussed in this book.
The elemental approach to social cognition research aligns with the algebraic model in that it breaks scientific problems down into pieces that are analyzed separately before being recombined. Information comes in through our senses and perceptions, forming ideas. These ideas become associated through contiguity in space and time. Thus, if two ideas occur together (e.g. dancing and shame), they become a unit. The more often they are paired, the easier and stronger the association becomes. Early psychologists studying memory (Wundt, Ebbinghaus) formed the basis of the elemental view of social cognition.
The German philosopher Kant observed that we tend to view things in a holistic way, linking the parts of a thing into a whole (a bunch of grapes rather than 20 individual grapes). Similarly, movement is not a sequence of isolated moments, but a cause-effect connection. The mind organizes the world according to an order of grouping. German-American Gestalt psychology took up this insight and applied it to the phenomena of interest. It used phenomenology, systematically asking people about their experience of the world. Gestalt psychology dealt with the perception of dynamic wholes, while the elemental researchers focused on breaking the whole into measurable parts. In terms of a song: the melody (holistic) vs. the notes (elemental).
Kurt Lewin brought Gestalt psychology to social psychology. He focused on a subjective rather than objective analysis of people's realities, calling the influence of the perceived social environment the psychological field. Most important is not what actually happens, but how one interprets what happens. Lewin also stressed describing total situations, seeing the person as an element within a field of forces. The total psychological field (and therefore behavior) is determined by two factors: (1) the person in the situation (all that makes up their personality), and (2) cognition and motivation. Cognitions determine what you might do, but motivation determines whether you will do it. Within a psychological field, an individual encounters forces which influence both cognition and motivation.
Wundt, one of the earliest psychologists, conducted research heavily reliant on introspection: a psychologist's own description of psychological phenomena based on their own experiences and observations. Wundt's goal was to reveal the internal experiences of an individual. Introspection was quickly dismissed as unscientific, because it did not produce measurable and comparable data and was too subjective to be reproduced. Psychologists turned to avoiding cognition (deemed immeasurable) and focused instead on physical manifestations of mental processes (reflexes, memory, training). Behaviorist psychology appealed to scientists because of its philosophy of cause and effect, which left cognition entirely out of the equation. A stimulus (S) and response (R) could be outlined and measured. For about 50 years, behaviorist theory dominated the field of psychology.
In the 1960s, people became critical of behaviorism. It could not account for the development of language, for instance. The information processing approach was developed. This is the idea that mental operations can be split into stages (occurring between stimulus and response). Information-processing research arose out of work on learning. The sequential processing of information is an important feature of information processing theories. These approaches try to specify cognitive mechanisms to get a grip on the mind’s black box.
New scientific tools allowed for psychologists to trace previously non-observable processes. The computer acted as both a tool and as a metaphor for cognitive processing. It provided a framework for a new way to think about psychology.
The advent of cognitive neuroscience in the 1990s caused the metaphors and models to change. Nowadays cognitive psychologists focus more on plausible models of processes, informed by the increased understanding of neural networks, brain systems and single-cell responses. It is exactly this approach that could save psychology from being torn apart, since it doesn't divide the brain into different clusters. Cognitive neuroscience instead acknowledges that we are all social, affective and cognitive actors in the world we live in.
Social psychology has always dealt with cognitive concepts. Social behavior has always been cognitive in at least three ways. First, it has been considered a function of people's perceptions rather than of objective descriptions of a stimulus environment. Second, social psychologists describe both the causes and the end results of social perception and interaction in cognitive terms. Third, the individual in between the presumed cause and the result is regarded as a thinking organism rather than an emotional organism. Cognitive structures like motivation, memory, and attribution were clearly vital in understanding social interaction. In social psychology there are five general models of the social thinker that can be identified:
The consistency theories, which emerged in the late 1950s, regarded people as consistency seekers. According to these theorists, people are driven to reduce the discomfort they experience from perceived discrepancies among their cognitions. The best-known example is dissonance theory: if someone who has publicly announced that he will stop smoking has just smoked a cigarette, he must evoke some thoughts to bring those two cognitions into line.
The consistency theories thus relied on perceived inconsistency and a central role is assigned to cognitive activity. Subjective, not objective, inconsistency is central to these theories. When perceiving inconsistency, the individual is presumed to feel uncomfortable and thus motivated to reduce this inconsistency. This is also called the drive reduction model.
This model, the naive scientist, emerged in the 1970s and focused on the way people uncover the causes of behavior. Attribution theories – which concern the way individuals explain both their own behavior and the behavior of others around them (external vs. internal attribution) – came to the forefront. The first hypothesis of these theories was that individuals are analytical data collectors and will therefore arrive at the most reasonable conclusion. So the role of cognition in this model is the outcome of rational analysis. In contrast to the consistency theories, in which motivation was emphasized, attribution theorists do not regard unresolved attributions as causing motivation to resolve them. According to the attribution theories, motivation only helps to catalyze the attribution process.
Of course, people aren't always as rational as the naive scientist model assumes them to be. The cognitive miser model (1980s) regards people as limited in their capacity to process information. According to this model, individuals take shortcuts (strategies that simplify complex information or problems) whenever they can. At first, hardly any role was given to motivation, but as the cognitive miser model developed, theorists began to pay attention to the influences of motivation on cognition. Social interaction became more important.
This view, the motivated tactician, emerged during the 1990s. It saw people as fully engaged thinkers who use multiple cognitive strategies, depending on their goals, motives and needs. In some situations people are motivated to choose wisely for the sake of accuracy and adaptability, and in other situations they are motivated to choose defensively for the sake of self-esteem or speed.
Nowadays individuals are considered as being activated actors. Without being aware of it, people’s social concepts are quickly cued by their social environments. As a result they also almost inevitably cue the cognitions, affect, evaluations, motivation and behavior that are associated with these social concepts.
Most research on social cognition shares some basic features, which are discussed below.
Mentalism is the first basic assumption in research on social cognition. This is the belief that cognitive representations are important. The cognitive elements that we use when we try to understand other people and social interaction are what make up the study of social cognition. General knowledge about ourselves and others allows us to predict what others will do and enables us to properly function in the world.
Research on social cognition also concerns cognitive processes: the ways in which cognitive elements form, operate, interact, and change over time. Behaviorists avoided the discussion of internal processes in the belief that they were not scientifically approachable. Social cognition researchers are now able to measure and describe previously unexamined aspects of cognitive processes.
Another feature of social cognition research is the cross-fertilization between cognitive and social psychology. Research into social cognition adopts elements from cognitive and social psychological research, and extends to neuroscience and other areas of psychology.
Social cognition is focused on applying research to real-world social interactions. Contemporary issues as far-reaching as crowd behavior, propaganda, and organizational team building are all dealt with in social cognition research.
The behaviorists argued that a person's cognitions were not observable, so we might as well be robots or farm beasts. Yet we are people, and we differ from objects in a number of ways. We influence our environments; we perceive and are perceived. Social cognition implies the existence of a self, an independent identity that can be judged. Personality traits are not always observable, but they are fundamental to how we are observed by others. We are unavoidably complex and have intents and traits that are hidden from view, whose influence on our behavior is ambiguous and unclear.
There are a number of neuroscientific techniques used in studying social cognition, including neuropsychology, fMRI (functional magnetic resonance imaging), EMG (electromyography), EEG (electroencephalography), TMS (transcranial magnetic stimulation), electrodermal responses, cardiovascular activity, hormone levels (such as cortisol), immune functioning and genetic analyses.
An important thing to note when looking at research in the field of social cognition is that researchers have focused on Western, educated, industrialized, rich, democratic undergraduates. This means that research tends not to be cross-culturally valid. One major distinction is between cultures in which people have an independent self and are more autonomous, and cultures in which people are more interdependent and harmonious. This will be dealt with in the coming chapters.
The motivated tactician relies on relatively automatic processes or relatively controlled processes, depending on situational demands. These automatic processes come in different forms. People may be doing things in a mindless way, simply performing an activity. Full automaticity, in its purest form, occurs when unintentional, uncontrollable, efficient, autonomous responses occur outside awareness. Some of these processes will be discussed here.
Subliminal priming is when a concept is activated in our mind by some environmental cue that doesn't penetrate the surface of our consciousness. Priming can be used to put the brain in a state of mind more or less conducive to a certain goal. Seeing a smiling face very briefly, at a pre-conscious level, can impact the positivity of one's judgment of a situation. Immediate emotional priming elicits responses in the amygdala. Your amygdala (your animal brain) is hyper-focused on recognizing emotionally charged cues, especially negative or threatening ones. Other brain regions involved in automatic processes (together called the "X-System") are the ventromedial prefrontal cortex (orbitofrontal cortex), the basal ganglia, the lateral temporal cortex, the posterior superior temporal sulcus, the temporal pole, and the dorsal anterior cingulate.
It is not just emotional content that can be activated using subliminal priming. Emotionally neutral concepts can evoke brain processes involved with pattern-matching, categorizing, and identifying processes. All of these processes are connected with the inferior temporal cortex. Subliminal priming studies look at the impact of different cues on the reactions of participants. One particularly shocking study primed white participants with the faces of black people and noted a reliably higher level of hostility in a subsequent activity, indicating the subtle effects of racism on behavior. A subliminal prime has to be clear enough to be registered on the senses, but fast enough not to be registered on the conscious mind.
Postconscious automaticity occurs when we are consciously aware of a prime (we notice something), and yet we have no awareness of how that thing impacts our subsequent behavior. For instance, after being asked to imagine a day in the life of a professor, participants showed better results in the subsequent game of Trivial Pursuit. When thinking of words and concepts connected with slowness and the elderly, participants moved more slowly. Conscious and unconscious priming have similar effects, thanks to the fact that most people don’t realize how the things they notice impact them.
Chronically accessible concepts are those attributes we learn to associate with others through experience. They act as a mental shortcut, which we can call up easily, and which helps us save mental processing time. For instance, we might categorize people into types, giving them labels like “friendly” or “intelligent”. The practice of developing automaticity is called proceduralization. Something can become a procedure after a few dozen trials. People can thus be trained to judge traits, for instance, honesty. Automatic judgments speed up people’s responses. That means that well-practiced judgments will be made more quickly and with greater priority than unpracticed judgments, leading to issues like susceptibility to prejudice.
Some types of judgments are especially likely to be automated – for instance, trait inferences. Those inferences are personally and culturally specific. Furthermore, self-relevant knowledge is often encoded automatically, with people being highly tuned to information that pertains to them. Threat-related stimuli are quickly parsed, especially angry or fearful expressions and threatening postures. Automaticity is useful and prevalent because people are cognitive misers, inclined to take mental shortcuts and conserve mental energy. We can make decisions faster and more accurately by following well-worn mental paths that have proven to work in the past. Both subliminal (preconscious) and supraliminal (conscious) automatic activation of mental representations can influence evaluations and emotions, associated strategies and cognitions, and behavior. What distinguishes the two kinds of automatic activation is that supraliminal activation can invoke controlled strategies if the person is aware that the prime might affect his or her responses.
A controlled process is one in which the perceiver’s conscious intent determines how a process operates.
Goal-dependent automaticity is mostly automatic, yet requires some intentional processing and depends on the task being undertaken. A goal is defined as a mental representation of desired outcomes. Habits are behaviors frequently repeated, and activating a particular goal may set a habit into motion. Goal-dependent automaticity may determine who we choose to socialize with at a party, as we habitually use our spontaneous trait inferences more often in a forced social situation. The goal of meeting new people actively triggers a passive, automatic process.
Goal-inconsistent automaticity occurs when our automatic processes steer us away from achieving our actual goals. For instance, trying not to think about a hamburger causes a rebound effect, making us think only about hamburgers. Goal-inconsistent automaticity is the bane of unrequited lovers, dieters, and procrastinators. Finding a substitute thought is the only way to counter the rebound effect. Actively suppressing a thought creates an automatic monitoring system, which, ironically, keeps the unwanted thought active and easily accessible. A symptom of depression is that negative thoughts become more accessible and difficult to suppress.
One may ruminate (think repetitively and counterproductively) on an unwanted thought. Dwelling on unrequited attraction, for instance, is a process of calculating, fantasizing, agonizing, persisting, and ruminating.
When do we intend a train of thought? Judging intent is how we judge responsibility and control. We can say that someone intends to interpret something in a certain way when they feel like they have options to think in other ways. If someone has options, making a difficult choice is seen as more intentional. It means rejecting the default, overcoming instinct. Intentional thought is enacted by paying attention and implementing intent.
Some situations may automatically trigger certain motives (auto-motives). The question is, to what extent does “free will” control our behavior? Wegner argues that conscious will is an illusion we create when we have thought about an action before performing it. People just infer that the thought caused the action, without any proof that this is the case. People are more likely to experience agency (the feeling that you’re responsible for the outcome of something), when subliminally primed before a situation to think about the outcome. We do not control our own behaviors (or those of others) as much as we think we do.
William James called consciousness the stream of thought, a flowing river of ideas and connections, within which things may surface and sink back down out of sight. Consciousness was popular among introspectionist psychologists, but its immeasurability caused the behaviorists to treat it as if it didn't exist. Cognitive psychologists gave it more attention, defining it as awareness of something, or inferring awareness through behavior. Another idea is that consciousness is a sort of executive controller that directs mental structures – once an idea is sufficiently activated, it comes into the conscious mind (short-term / working memory) and can be utilized. The executive can control automatic associations to make them responsive to current intents. Alternately, one view sees consciousness as necessary for human understanding and related to intent. Another idea is that consciousness is constructed, made up of accessible concepts and limited to the furthering of momentary goals. Consciousness helps people learn by forming new associations, and is necessary for choice, as two options must be held in awareness at the same time.
First-order consciousness is the awake and mindful state of experiencing cognitions and intentionally using them. Second-order meta-cognition is made up of people’s beliefs about their own thinking processes.
Ongoing consciousness is described by social-personality psychologists as a stimulus field composed of body sensations, emotional experiences, and thoughts. According to these psychologists, such internal factors can successfully compete with the external world.
Kinds of thought
Stimulus-dependent thoughts are those focused on an individual's current environment, whereas stimulus-independent thoughts occur when we daydream or let our mind wander. When our mind wanders, it activates parts of the brain's network that deal with social cognition, friendships and the self.
One might also distinguish between operant thoughts (thoughts that solve problems) and respondent thoughts (distractions and unbidden images). Looking at it this way, we tend to engage in a great deal of operant thought, with respondent thoughts coming unbidden in between. The task we are involved in can affect how many respondent thoughts we have.
Sampling people’s thoughts
Since we can’t read minds, we need other ways to keep track of people’s thoughts. Experience-sampling methods allow researchers to ask participants about their current states and thoughts at random moments throughout the day, and often involve a trigger device like a beeping timer that randomly asks someone to write down their thoughts. This can also be done in the laboratory – researchers might randomly probe people to ask for reports on mind-wandering or zoning out. Experimenters may also ask participants to think aloud in order to get a relatively unfiltered response.
Naturalistic social cognition is a technique that involves filming participants as they wait, unaware that they are being studied at that moment. Reviewing the tape afterwards, participants are asked to point out moments when they remember having a particular thought or feeling. This can help us study the interaction of people’s feelings about themselves, others, and their environment, whether these feelings are positive or negative. This technique can also help us explore empathic accuracy, how accurate people are in guessing what another is feeling.
Finally, role-play participation allows researchers to sample people's thoughts in a relatively realistic but controlled social setting, often involving a recorded conversation in which participants must imagine themselves taking part. One finding from this technique is that people tend to report more irrational thoughts in stressful, evaluative social situations, especially when they are anxious.
How do we move between unconscious, automatic thoughts and controlled, conscious thoughts? Our tactics depend on our motives. The main motives will be dealt with here – what do we want?
Social cognition looks at the need for a sense of belonging, being accepted by other people, especially one’s own group. People feel bad and out of control when they are ostracized and rejected. Social pain even looks (neurologically) like physical pain. People tend to create social groups as a way to feel belonging, categorizing stimuli into “us” and “them”. People tend to conform to the majority in automatic ways, presumably out of an urge to belong.
Understanding is the most obvious motive that drives social cognition. People feel the need for socially shared cognition, the belief that one’s own views are shared and understood by those of their own group. When people are motivated to affiliate, they will often change their viewpoints to be more like those of the people in their group. We are motivated to reach an understanding with others – when placed in an unfamiliar situation we seek to establish understanding from which to make informed judgments.
We depend on other people in our social relationships. Feeling like we are dependent on an outcome we do not influence motivates us to seek out more controlled and deliberate processes. When our sense of control is threatened, we feel like being wrong is more dangerous and feel more stress. Pressures for urgency (quick decisions) and permanence (lasting decisions) can cause us to feel out of control. People want to predict and understand their situation because even if they cannot impact the outcome, they feel more in control of their responses.
People are motivated towards self-enhancement, seeing themselves in a positive light. Automatic and immediate reactions tend to favor positive self-esteem, while controlled reflection tends to favor feedback that fits one’s self-view, no matter how negative that may be. We want to be optimistic for the future, feel more control than we have, and feel that we are better than we are. These are adaptive illusions and encourage active participation in social life.
We are motivated to trust others within our social group. This causes us to expect good from other people, and to judge people better than they may actually be. People react to negative events in a quick way and act to minimize the damage. People’s intention to trust others correlates with the neuroactive hormone oxytocin, particularly active in women, and implicated in befriending behavior.
Models of both Automatic and Controlled Processes
Depending on the circumstances, people try to make sense of themselves and the people around them in more or less thoughtful ways. Several models focus on these ways.
There are two main models on how we perceive others. One is the dual-process model of impression formation, in which we make an initial categorization. This is good enough if the person is not relevant to our goals – we see a construction worker in passing, and they don’t get more definition than “male construction worker”. However, the more relevant they are to our goal, the more distinctions we make to our mental representation, especially if the individual exhibits traits that do not fit within our category. We type, and then we subtype.
The other main model is the continuum model of impression formation, which suggests that we move people from one end of a continuum to the other. We place people initially in an automatic category, then re-categorize and specialize on closer inspection when we find data that contradicts our assumption. While we begin with category-based responses, we can advance to attribute-based responses.
Attribution is the process of causal reasoning – how we decide who or what is responsible for an observed outcome. According to the dual-process model of overconfident attribution, we first automatically identify behavior (identification stage), then we deliberately try to explain the behavior, seeing either the environment or the other person as the cause. The cognitive busyness model suggests that we first categorize, then characterize (determining the disposition of the other person), and then correct for situational factors if we are not too cognitively busy. A third model contrasts spontaneous trait inferences with goal-driven processes.
Attitudes are people's evaluations of objects in their world. The elaboration likelihood model suggests that we may construct an attitude using peripheral information (usually automatic, unexamined information that we passively receive), or using central information (deliberate and controlled research). Peripheral cues require little to no effort, meaning that we are unlikely to elaborate unless motivated to do so.
The heuristic-systematic model contrasts systematic processing with heuristic processing. Systematic processing is analytical and comprehensive and allows for a more accurate basis for an attitude than heuristic processing, and yet is much slower. We tend to stop evaluating something when we feel we have sufficient information for our current goal.
When we think about ourselves, we tend to react on the basis of our automatic self-concepts. Sometimes we may look at relevant information more carefully. When we look at others, we also move from an automatic prejudice to a controlled, educated judgment. Pretty much every inference we make about our social environment can fall under System 1 (the automatic, intuitive, holistic system) or System 2 (the rational, analytic, effortful system).
There is a counter-movement suggesting that the dichotomy of dual-mode processing may not be the only way of approaching cognition. The unimode model builds on lay epistemic theory and proposes that people's subjective understanding results from testing their everyday hypotheses. The parallel processes model suggests that all attributes are activated simultaneously and that information is weighted equally. Impressions develop not in stages, but all at once, as a combined, coherent impression of another person.
Attention and encoding are the first steps in mental representation – what we see and how we manage that information. Encoding transforms a perceived stimulus into an internal representation. That process takes effort, and being cognitive misers means that we lose some details, and alter others. Our inferences, then, are often flawed. Attention is where we place our focus during the encoding process. When we think about something, we make a temporary mental representation. Whatever occupies the conscious mind at any given time is the focus of attention. We may attend external stimuli, or we may turn inwards and focus on a mental image from memory or imagination. Attention tends to have two components – direction (selectivity) and intensity (effort). The direction of your attention when reading this is the content of the book, if you selectively focus on it. But you might also be thinking about having another cup of coffee, or scratching your leg, or checking Facebook. Aspects of attention that contribute to voluntary control are working memory, competitive selection, and top-down sensitivity control. An automatic aspect of attention is bottom-up filtering for salient stimuli.
Cognitive psychologists also distinguish between views on how much fundamental perceptual processing occurs outside of focused attention (early vs. late selective attention).
Someone’s face is a very important social driver of attention.
From the moment we can first see, we are attuned to follow the direction of other people's gaze. When someone looks directly at you (directed gaze) rather than elsewhere (averted gaze), you are compelled to attend to them. You can rapidly categorize them by gender and stereotype, and you tend to better remember people who direct their gaze at you. People tend to find a direct gaze attractive. We tend to follow the gaze of those around us – if we see someone looking in a particular direction, we look to see the focus of their attention.
Face perception is a highly developed visual skill involving a web of neural systems. We must be able to identify not only the fixed features of a face, but also how that face changes when expressing. The fusiform face area (FFA) is a face-responsive region of the brain that lets us recognize features that do not change. The superior temporal sulcus (STS) responds to changeable aspects of faces like expression and gaze. A third set of processes deals with our knowledge about the person.
Facial recognition is a global, configural, holistic process – we see a face. When we focus separately on a facial feature like the nose, we have more trouble recognizing someone (feature-oriented processing). When having to verbally describe someone, the description can overshadow our memory of their actual face, because it tends to be feature focused. Faces are globally perceived when we are trying to identify people, but when we are just categorizing we tend to use simple salient cues like hair or gender. Since we are attuned to faces, we can identify that a stimulus is a face in 1/10 of a second, and recognize familiar faces almost as quickly. We can also usually distinguish race and gender very rapidly. Categories are easier to determine than identity.
We are especially primed to find baby faces attractive and innocent. Adults with baby-faced features tend to be seen as having child-like qualities, to be less dominant and intelligent, but more honest and warm. The typical baby-face traits are large eyes, big foreheads, and short features. People with baby-face features are less likely to be convicted of malicious crimes but more likely to be judged guilty of negligence. Childlike voices make someone appear weaker, less competent, and warmer. The instant categorization of baby-face features as weak and innocent is an adaptive instinct, inspiring care-taking behavior.
Angry faces are particularly salient – our danger warning systems prime us against any danger or threat. This is especially the case when we see an angry expression on someone not in our own group. Even subliminally recognized angry faces get our amygdala going. Certain faces structurally resemble emotional facial expressions even when non-expressive, and we sometimes overgeneralize and infer those non-expressions as traits. We emotionally overgeneralize, seeing happiness as trustworthy, anger as untrustworthy, and we link masculinity and maturity with power, immaturity and femininity with submission.
Spontaneous trait inferences become linked directly to a person’s face in our memory. Even after a few mere seconds of seeing someone, we make judgments about them. This can have consequences – we will judge the competence of a political candidate instinctually, and will judge the criminality of someone just as quickly.
Salience, one’s noticeability, can have a big impact on how we feel and interact. If you are the only one of your “kind” in a situation, you may be anxious, feel that all eyes are on you, and worry about how you behave. Self-regulation becomes more difficult and you may talk too much, disclose too much, act arrogant, etc.
Social salience is context-dependent. We are socially salient whenever we present some sort of novelty, even as simplistic as having red hair in a room of brunettes. Gestalt psychology says that a stimulus is salient when it is bright, complex, changing, moving, or otherwise in contrast to its drab background. People tend to pay attention to information that is against their expectations. Someone might be salient if acting in contradiction to what people think about them as individuals or category members. Social stimuli that are extreme or unusual attract more attention – we look when a movie star enters the room, or when a fight breaks out. We tend to expect positive life outcomes and are thus attentive to negative ones, which aside from being against expectations also tend to demand coping.
The goals of the perceiver also impact salience. People pay most attention to those that might impact the achievement of their goals. If you are in competition with someone, they hold your attention. You also attend those who are closer to you physically, as well as those who you see more often. Salience is, therefore, relative.
The consequences of social salience are wide ranging. Prominence shows up most in perceptions of causality. For instance, salient people are considered influential – we assume that they are the cause of things. Causal attributions follow the focus of attention. Salience also exaggerates the directions of our evaluations – if someone would be judged as unlikable when not salient, they will be judged much worse if they are salient. The same is true in the reverse. Salience organizes impressions. Our impressions of salient people are more coherent than of non-salient people. Salience effects are semi-automatic because, while instinctual, people can sometimes control them. Salience effects may be mediated by causally relevant recall or its accessibility.
A stimulus is vivid when it is emotionally interesting, concrete and imagery provoking, and close by in a sensory, temporal, or spatial way.
Vividness is commonplace. Psychologists have made a number of guesses about the impact of vivid information:
Vivid information is theoretically more persuasive than pallid information of equal validity, partly because it’s more easily drawn to mind.
Vivid information provokes internal visual representations, making it especially memorable.
Vivid information is also more emotionally impactful, enhancing its influence on judgments.
There is a scarcity of empirical evidence, however, to ground these claims. While it intuitively makes sense, most evidence does not show any difference between pallid and vivid information. The only major exception is the effectiveness of case histories in persuasion.
While little evidence supports the vividness effect, we instinctually feel that it should exist. We see vivid messages as persuasive to others, yet do not feel persuaded by them ourselves. Concrete language makes statements seem more reliable, yet vivid information makes us more confident in opinions we already have. Vivid information is entertaining, but the state of emotional arousal and entertainment it brings can be mistakenly considered an attitude changing experience.
Several principles define the boundaries of the vividness effect. Vivid messages and vivid presentations are different things – if a presentation is too vivid, the message may be drowned out. Some empirical evidence shows that pallid written material is actually more informative, but a vivid video will get the attention of the uninvolved. Vivid ads are attention-grabbing but shallow. People also differ in their reliance on vivid imagery.
Our brains naturally categorize and organize information. These categories may be more or less accessible, depending on priming. A frequent or recent idea is more easily accessed than a dormant idea.
Priming can change our interpretation of things and people. For instance, being exposed to positive or negative trait terms causes us to interpret ambiguous information accordingly. This priming is especially powerful when specifically relevant to the ambiguous situation being interpreted. Priming has allowed psychologists to look at subtle racism and stereotyping in a different way. When primed with race-related words (like a racial slur) people will be more likely to enact racist judgments, for instance. Gender-role stereotypes are also subject to priming. For instance, watching pornography primes men to be more stereotypically masculine in the way they treat women immediately afterwards. There are many avenues of research into priming that have been fruitful. Unconscious affect primes person categorization, unconscious threat and violence prime anxiety, self-discrepancies from standards prime arousal and mood, unconscious polarized evaluation primes good-bad judgments of loaded words, relevant questions prime reports of life satisfaction.
Priming has long-term and short-term consequences. Even an arbitrary link between a prime and a stimulus can affect the encoding of that stimulus. Accessibility affects social behavior such as race-based hostility and test performance. It can influence not only how we see ourselves but also how we treat other people. For instance, if we are angry, we are more likely to act violently when aggressive cues are present, such as the presence of a gun.
Priming can often result in the assimilation of stimuli into accessible categories. However, sometimes we see contrast effects. If someone is blatantly presented with a prime they might actively contrast their judgment on an ambiguous target with the prime. Consciousness of the prime can be important, because conscious priming is more flexible. If we are aware of a prime and its potential link to what we are evaluating, we may see it as too extreme or too obvious, and think more reasonably about our judgment.
Assimilation and contrast not only depend on the consciousness of the prime, but also on features of the stimuli involved.
Similar primes (where the prime and the stimulus overlap) are more likely to show assimilation. Lack of overlap tends to lead to contrast effects, especially when extreme primes are used. The ambiguity of a stimulus allows for its easy assimilation to a prime; the clearer the stimulus, the less likely it is to be subject to priming effects.
Finally, the goal of the perceiver is also an influencing factor. For instance, self-protecting motivations can interact with a prime. People may assimilate shared goals and coordinate with ingroup members that are pursuing the same goals, for instance.
These factors, together with others, come together in a selective accessibility model of assimilation and contrast, which addresses conscious rather than automatic comparisons. Even though the model assumes that accessibility is more flexible and specific to the judgment at hand, rather than general semantic priming, it still hinges on accessibility. Depending on the accessibility of the strategy, people will search for similarities (so assimilation results) or differences (so contrast results).
Priming is most active at the moment of encoding. We know this because presenting primes after a stimulus has no effect, whereas presenting the prime before the stimulus allows it to affect coding. Prime-relevant information elicits differential attention – we do not report primed dimensions as important, but we still recall them better.
Accessibility may occur because of recent priming or because of frequent priming of a particular idea. Well-practiced judgments become automatic through proceduralization. People for whom a particular personality trait is especially accessible tend to judge others on that personality dimension, and describe others in those terms. Frequent reliance on a particular personality dimension will cause one to develop chronicity on that dimension. Individual differences in chronicity impact interpersonal interactions. For instance, chronicity can explain positive stereotyping side-effects, in which chronically stereotyped people show more tolerance for other chronically stereotyped people because victimization is primed.
Sometimes priming and heuristics are helpful – they have adapted because when an association is frequently made, that often means it is highly likely to occur in the same way again. Several theorists have suggested that at the moment of perception itself, we organize what we see because of what we see. That is, organization is actually inherent in the stimulus. A particular stimulus affords or offers particular behaviors to the observer, and the observer is attuned to certain properties in a stimulus. This approach is inspired by Gibson’s work in object perception and is called ecological perception because it emphasizes our dependence on our environments.
Ecological causal perception (aka the Gibsonian perspective) suggests that social interpretations result from the perceptual field rather than inferences and memories. One example is how one might deal with overhearing a neighbor’s quarrel. Hearing a woman shouting and the man replying softly, one immediately perceives that the woman started the fight and is threatening the man. However, seeing the quarrel and noticing that the man is making threatening advances to the woman as the woman makes small retreats, one would perceive that the man is causing the fight by threatening the woman. Gibson suggests that inferred blame occurs automatically in the perception of the event, rather than afterwards in a process of inference.
Direct perception is interesting because inherent perceptual units have been shown to impact social judgments. Researchers use the unitizing method to define perceptual units. It involves having participants press a button to indicate the end of a perceived segment of a stimulus (the end of an overheard sentence, for instance). This method is reliable and valid, as people tend to agree on the definition of perceptual units in any given scene. We break down behavior into meaningful units based on the actor’s intentions and goals, like scenes in a story. When we notice a distinctive change in motion, for example, that becomes a breakpoint. Basic perceptual-motor configurations could function independently of cognitive processes and relay vital judgment information at the moment of perception.
We use finer perceptual units when observing nonverbal behavior, remembering task behavior, encountering an unexpected action, and observing individuals, especially strangers. We gain more information when we use finer perceptual units, and this has the side effect of increasing how much we like the observed other.
The direct perception view argues that cognitive constructs enter into the inference process only to the degree that they can influence the initial direct perception of an event. This perspective is useful to counteract biases that challenge mainstream encoding research. The ecological approach also emphasizes that perception has an adaptive function, with certain perceptions inspiring certain behaviors, without one needing to think. The environment is seen as full of action possibilities (affordances). Another benefit is that cross-cultural, animal, and developmental research may be conducted for comparative psychology research.
In the following sections several models of memory will be addressed, starting with the classic memory model (the associative network approach).
The associative network approach underlies most social cognitive studies, especially the earliest ones. According to this model, the more links or associations from other concepts exist for any given memory, the easier it is to recall. Things are represented in a memory code. A variety of possible codes exist, but early cognitive psychology focused on the propositional code. A proposition is something like "the mug is on the counter". Each proposition has nodes and links that relate to other ideas. The connection of one node to another node is an association. In associative memory models, activation spreads from a single point of recall to all nodes that connect to that point of recall. Joint activation (rehearsal) makes an idea more memorable. The more separate links to any given idea, the more likely it is to be recalled. These links create retrieval routes and enhance memory.
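The spreading-activation idea behind associative networks can be sketched in a few lines of code. This is a toy illustration only: the nodes, link strengths, and decay value are hypothetical, chosen to show how activation from one recalled concept flows along its links so that strongly connected ideas become easier to recall.

```python
# Toy sketch of spreading activation in an associative network.
# Nodes, link strengths, and the decay factor are hypothetical.

network = {
    "dancing":  {"shame": 0.8, "music": 0.6},
    "shame":    {"dancing": 0.8, "blushing": 0.5},
    "music":    {"dancing": 0.6},
    "blushing": {"shame": 0.5},
}

def spread_activation(start, steps=2, decay=0.5):
    """Activation spreads from the recalled node along every link it has."""
    activation = {node: 0.0 for node in network}
    activation[start] = 1.0
    for _ in range(steps):
        updated = dict(activation)
        for node, links in network.items():
            for neighbour, strength in links.items():
                updated[neighbour] += activation[node] * strength * decay
        activation = updated
    return activation

# Concepts with more (and stronger) links to "dancing" end up more active,
# and are therefore easier to recall.
print(spread_activation("dancing"))
```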
Network memory models tend to differentiate between short-term and long-term memory. Long-term memory is the store of information that it may be possible to bring to mind. Short-term memory refers to information that one might be considering at any given moment (also known as working memory). The conscious part of long-term memory is considered short-term, because it exists in the accessible conscious mind. We have a limited capacity for activation, making short-term memory very small. This limit can cause someone to contradict their own testimony. One technique for holding more information in the working memory is to chunk items together into a greater, meaningful whole.
This distinction between long- and short-term memory may possibly be breaking down. Within neuroscience three types of memory are now being observed – active memory being attended in the conscious mind, long-term memory that deals with the deep past and long-established knowledge, and intermediate memory, which holds more recent events.
The PM-1 model is a network model of social memory, which works as a computer simulation. It predicts extra attention to impression-inconsistent material, resulting in extra associative links for those items and increasing their alternate retrieval paths. Essentially, when something is surprising, we pay attention to it and in doing so, we link it to more ideas. This is the inconsistency advantage. The encoding process activates a limited-capacity working memory that works to link perceived items. The longer an idea is in the working memory, the more links are formed. The model proposes that impression formation occurs at the same time as memory encoding. This is called the anchoring and adjustment process, and provides an impression that is updated with each new piece of information.
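The anchoring-and-adjustment idea can be written as a one-line update rule: the running impression (the anchor) moves part of the way toward each new piece of information. The sketch below illustrates only that idea, not the PM-1 model itself; the adjustment weight and the item evaluations are hypothetical numbers.

```python
# Sketch of anchoring and adjustment: the running impression (the anchor)
# is adjusted by a fraction of each new item's evaluation.
# The weight and evaluations are hypothetical illustration values.

def update_impression(current, new_evaluation, weight=0.3):
    """Shift the current impression part of the way toward the new item."""
    return current + weight * (new_evaluation - current)

impression = 0.0                      # neutral starting anchor
for evaluation in [2.0, 1.5, -3.0]:   # e.g. two positive behaviors, then a negative one
    impression = update_impression(impression, evaluation)
print(impression)                     # the impression is updated with each new item
```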
When people form online impressions (as they receive information), these impressions are based on the gradual process of perception. Without the stimulus in front of us, however, we rely on retrieved information to create an impression about something. This is called a memory-based impression.
The person memory model suggests that we form an impression from a target's behavior, which we interpret in trait terms, evaluate, and review for inconsistent behaviors. Due to the primacy effect, earlier information influences an evaluation the most. First impressions therefore count in how others perceive us. The inconsistency advantage also plays a role in person memory, as inconsistent behaviors cause us to puzzle over them, trying to relate them to consistent behaviors and create links. In the long term, however, consistent information has the advantage. One disadvantage of the person memory model is that it assumes multiple representations pertaining to a single person. Person memory models predict the inconsistency advantage, but this advantage is not obtained when the research paradigm makes the perceiver's task more complicated. Having to make a complex judgment or experiencing an overloaded memory can reduce the impact of the inconsistency advantage.
The twofold retrieval by associative pathways (TRAP) model is a dual process model that favors both inconsistent and consistent memory, depending on the strategy enacted.
The associated systems theory (AST) creates representations of others through four systems:
The visual system
The verbal/semantic system
The affective system
The action system
These four systems are relatively independent at concrete levels, but become more interdependent when representations are more abstract.
The basic model of procedural and declarative memory systems relates to the process of learning. The suggestion is that there are two types of memory. Procedural knowledge deals with how to do things, the process of things. Declarative memory is the what of things, the factual information we store and can retrieve for later use. Declarative long-term memory includes episodic memory (memories of specific events) and semantic memory (memories of facts, word meanings, and encyclopedic knowledge). Procedural knowledge is represented in condition-action pairs called productions.
For instance: if someone reaches their hand out to me after a business meeting, then I shake their hand. Goals determine which procedures fire at any given time, and a good deal of decision-making is based on procedural knowledge.
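A condition-action pair ("production") can be mimicked directly in code: a condition tested against the current situation and an action that fires when the condition matches. The rules and situation fields below are hypothetical, meant only to illustrate the handshake example above.

```python
# Sketch of procedural knowledge as condition-action pairs ("productions").
# The goals, conditions, and actions are hypothetical illustrations.

productions = [
    # (condition, action)
    (lambda s: s["hand_extended"] and s["goal"] == "be polite", "shake hand"),
    (lambda s: s["goal"] == "leave quickly",                    "wave and exit"),
]

def fire(situation):
    """Return the action of the first production whose condition matches."""
    for condition, action in productions:
        if condition(situation):
            return action
    return "fall back on slower declarative reasoning"

print(fire({"hand_extended": True, "goal": "be polite"}))   # -> "shake hand"
```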
The advantages of declarative associative networks are that they allow for easy learning, are widely applicable, and are flexible. The disadvantage of declarative knowledge is that it is slower and more taxing to the working memory. This is why it is beneficial to repeat certain processes until they become proceduralized.
At least some social processes have to be proceduralized. In chapter 2, practice effects were given as an explanation for automaticity. Procedural memory provides an alternative explanation for priming effects. As mentioned previously, priming can influence the processing of new category-relevant information. With the help of declarative memory, priming activates relevant concepts along the associative network's pathways. This process can be general, but it can also be specific. Priming a personality trait by reading a word is different from priming a personality trait by inferring it from behavior. Each process produces faster responses the second time on the exact same task (procedural priming), and each process primes the trait itself (category accessibility). So procedural priming shows that both processes and contents can be primed.
The more a particular procedure is practiced, the more likely it will be used again. Such effects are considered to be a type of implicit memory. Past judgmental processes influence current judgments and reactions.
A parallel process activates many related pathways at once. A serial process occurs instead as a sequence of steps. The activation of social categories can occur via either serial or parallel processing.
Parallel distributed processing (PDP) is an approach to the structure of cognition developed as an alternative to serial models. In this model, each unit helps represent many different concepts, which are retrieved when the appropriate pattern of activation occurs. PDP models assume that memory consists of elementary units connected with links to one another. The connections represent constraints about which units are associated, and connection strengths represent the type and magnitude of association. Since connectionist models only store the strengths of connections, they recreate a pattern by activating parts of it and letting activation spread until the entire pattern is activated. PDP was initially applied to the study of motor control and the perception of letters. Despite these simple origins, PDP may be useful for analysis at a social cognitive level. PDP models do not merely represent static knowledge, but instead represent dynamic forms of knowledge which might be stronger or weaker depending on the power of the connections between ideas. Recent memory models combine serial and parallel processes: declarative memory retrieval might simultaneously search parallel memories.
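Pattern completion, the core PDP idea that a partial cue can reinstate a whole stored pattern through the connection strengths, can be sketched in a few lines. This is a deliberately simplified illustration under stated assumptions: a single stored pattern over four units, with hypothetical values and a fixed number of settling steps.

```python
# Sketch of PDP-style pattern completion: connection strengths store a
# pattern, and a partial cue lets the network fill in the missing units.
# The stored pattern and the cue are hypothetical illustration values.
import numpy as np

pattern = np.array([1, -1, 1, -1])      # one stored pattern over four units
weights = np.outer(pattern, pattern)    # connection strengths encode the pattern
np.fill_diagonal(weights, 0)            # no unit connects to itself

cue = np.array([1, 0, 0, -1], dtype=float)   # partial input: two units unknown (0)
state = cue
for _ in range(5):                       # let activation settle
    state = np.sign(weights @ state)     # each unit takes the sign of its net input
print(state)                             # the full stored pattern is recreated
```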
PDP can be applied in studying stereotypes, and how they simultaneously interact with one another. For instance, if we think of a female Mormon lawyer, we instantly overlay multiple stereotypes and come up with a blended impression.
The parallel constraint satisfaction model is a single-mode alternative to the dual-mode processing models in that it views the formation of an impression in the same way that we comprehend text: the “reader” needs to interpret and integrate a variety of information while accessing their relevant knowledge bank. It all happens at once. Expectancies and new information constrain interpretation, especially when information is ambiguous.
The connectionist model of impression formation applies PDP principles to social cognition. According to this model, there is an initial activation phase in which stimuli are balanced against expectations. Consolidation then occurs, as the external inputs are adjusted to fit with the long-term connections already present. The more often that people’s expectations are disconfirmed, the weaker those expectations become. The set size effect posits that people are more certain when they have more support for their perceptions. Competition among links means that successful, accurate links gain more power than inaccurate ones. Computer programs have been designed to simulate standard impression formation effects, including patterns of primacy and recency.
The tensor-product model uses a Hebbian learning approach instead of the previously described competition approach. This type of learning approach describes associative learning by changes in the strength of links between nerve cells. It is not viewed as a literal representation of neural networks.
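A minimal sketch of the Hebbian idea (illustrative only; the learning rate and activation values are assumptions): a link between two units grows stronger each time the units are active together.

```python
# Hebbian learning sketch: link strength grows in proportion to co-activation.

learning_rate = 0.1
link_strength = 0.0

def hebbian_update(strength, activation_a, activation_b, rate=learning_rate):
    """Strengthen the link in proportion to the co-activation of both units."""
    return strength + rate * activation_a * activation_b

# Repeated pairing of two ideas (both units active) steadily strengthens the link.
for _ in range(10):
    link_strength = hebbian_update(link_strength, activation_a=1.0, activation_b=1.0)

print(round(link_strength, 2))  # 1.0 after ten co-activations
```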
According to the perceptual theory of knowledge, our internal and external experience is encoded using perceptual symbols. Perceptual experience encompasses all the senses, including introspection and proprioception. Attention isolates specific perceptual experiences or their components. Perception captures information about the gestalt of something – its edges, color, movement, temperature, etc. Information is embodied in that it includes external stimuli and internal sensations. This form of memory representation captures top-down expectations.
Perceptual symbol systems (PSS) record the neural activation that occurs when we receive input from a stimulus, and this activation can be reactivated later. We can use imagery, a conscious visualization of a remembered stimulus, or conception, the knowledge of a stimulus that is recovered without retrieving sensory-motor (or visual) details. A PSS has a simulator that captures the sensation of something so that it can later be recreated. This can be thought of as a sort of burned-in pattern of sensation that can be reactivated later. The simulator has an underlying frame that integrates across experiences, and simulations which are created from the frame.
So according to this view, cognitive processes involve bottom-up sensory motor perceptions at one extreme, and top-down sensory motor representations including conception and imagery at the other extreme. In between are processes such as priming, filling in missing information, anticipating future events, and interpreting unclear information.
PSS is still a topic being researched and examined by psychologists, and offers new potential avenues for understanding memory.
The PSS view is particularly applicable to social cognition because it does not merely focus on archiving memories, but on preparing for situated actions, embedded in context. Social psychology suggests that one’s social environment plays a major role in thoughts, feelings, and behaviors. The embodied cognition viewpoint places the actor in their interpersonal context. Embodied cognition could be behind online cognition that occurs during social interaction, as well as offline cognition that occurs in the absence of a social object. One example of the application of embodied cognition in social psychology is the research that has been done in the induction of positive and negative affect using mimicked facial expressions. For instance, participants who held a pencil between their teeth (forcing a smile) found cartoons funnier than those holding a pencil between their lips (denying a smile). This hints at bottom-up sensory motor representation – that our behavior is interpreted by our brains as fitting a certain emotion, and so our emotions follow suit.
Social categories are mental constructs that organize our expectations about people, entities, and social groups. We make assumptions about others and ourselves, and these can often be misleading. We still make those assumptions, because they perform the vital function of allowing us to predict and control our interactions to no small degree. Categories orient us where we might otherwise feel lost and anxious.
Categories involve schemas, the knowledge we have about a concept and its relation to other concepts and experiences. Categorical person perception is considered a top-down process, as we impose our previously assumed ideas onto reality. Other processes are bottom-up, based on actual data observed from real stimuli. Categorical perceptions allow us to generalize across instances while data driven perceptions are specific to particular instances and help us analyze individual experiences with more accuracy. As in all things, our perceptions about the world are an interplay between reality and our own experiences. Categorical expectations emphasize the internal, preconscious part of our experience of the world. Gestalt psychology, however, sees perceptions as mediated through an interpretive lens, putting great emphasis on the importance of context. Context provides a specific configuration that alters the meaning its individual elements hold. All things are, then, relational. Two theoretical developments that paved the way for current categorical theories were based on Gestalt stimulus configurations: Asch’s configural model of forming impressions and Heider’s theory of social configurations that produce psychological balance.
How does one define the boundaries of everyday categories? Natural categories do not have necessary and sufficient attributes that allow for easy definition. Category members fall within fuzzy sets: they do not always clearly fit their categories. Some instances may be more prototypical of a category than others. For instance, soccer is clearly a game, but playing house is less so. Fuzzy category members like these are related by family resemblance – any pair of category members will share some features with each other. The more features something shares with other category members, the more quickly and consistently it is recognized as a category member.
Categories are organized hierarchically, with broad categories containing many sub-categories. The complexity of our categorical systems depends on our culture and environment. Indigenous people have more sophisticated categories of biology and nature, while urban-dwelling students will have more complicated categories related to their studies or their urban environment.
We categorize people according to many traits, both external and internal. For instance, we might categorize someone as extroverted and then notice confirming behaviors. Social categories are also fuzzy sets centering around certain prototypes. In order to confirm categories, however, people may remember category-consistent information that was never actually observed. Such false memories can easily occur when people are distracted. False alarms are distracter items on a memory test that are mistakenly identified as part of the original set. Leading questions can unintentionally plant false memories in witnesses’ minds.
One change made to the social category view is that social categories are most often represented by ideals and extremes, so that the “ideal” prototype of a student, for instance, is considered more representative of the category ‘student’ than is the average student one might encounter. Social categories have also been viewed as inevitable and automatic, but new research has revealed that they may be conditionally automatic, depending on one’s goals. Social categories might not form clear hierarchies in which broader categories contain specific sub-categories. Social categories in general are not very neatly delineated, suggesting that social categories exist as a web of associations rather than a hierarchy.
How do we use social categories? We can activate, apply, and even inhibit them based on social conditions. Category activation depends on our attentional resources – under some circumstances, we might not even notice clear categories, or we might activate a category without activating associated stereotypes. Furthermore, because people always fit in many social categories at once, the activation of one actually inhibits others. Category salience, chronic accessibility, and processing goals determine which are activated at any given time.
The exemplar approach suggests that people remember separate instances that they have encountered rather than an abstract prototype. They then compare perceived stimuli against their own memories of exemplars of the same category. One advantage is that the exemplar approach takes into account the variety that exists within a category. It also makes it easier to modify existing categories to accommodate new instances; in contrast, it is unclear how prototypes would be altered.
There is a good amount of evidence for exemplars in social cognition. One set of studies found that people’s judgments are affected by irrelevant similarities to specific past examples that they have experienced. Rather than comparing the new example to a stereotype or ideal, the new example is compared to one’s own experienced exemplars. For instance, when faced with two strangers, a person will feel more comfortable with the stranger who most superficially resembles a stranger that was recently kind to them, without being conscious of this influencing factor. Familiarity may be a factor that changes our judgment strategy from exemplar-based to stereotype-based.
Despite the existing evidence for exemplars, the evidence is not entirely clear-cut. Even though people understand that some groups are more variable than others, this knowledge does not seem to be based on memory for exemplars. A possible explanation is that people may especially use exemplars when they are trying to account for something unfamiliar. Norm theory addresses post hoc interpretation based on a past encounter with a certain stimulus in a certain context. It aims to judge whether a stimulus was normal or surprising. Where schema and category theories describe reasoning forward, norm theory describes reasoning backward. The theory states that people consider a stimulus in light of the exemplars it brings to mind.
People rely on multiple representations, and which representations develop depends on task demands. This means that abstracting a prototype isn’t an automatic process. Exemplars, on the other hand, are more likely to be automatic, since they are used (1) when people’s cognitive capacity is strained, (2) for more complex concepts, and (3) especially by younger children. So exemplars seem to form the basic foundation for abstract generalizations such as categories. But once a category is established, exceptions require unfolding the category so one can return to the more concrete, individual exemplar level.
It is clear that people can choose between the use of abstract category-level information (such as prototypes), or instances and memory for exemplars to make categorical judgments.
The associative network approach underlies most social cognitive studies, especially the earliest ones. According to this model, the more links or associations to other concepts exist for any given memory, the easier it is to recall. Things are represented in a memory code. A variety of possible codes exist, but early cognitive psychology worked mainly with the propositional code. A proposition is something like “the mug is on the counter”. Each proposition has nodes and links that relate to other ideas. The connection of one node to another node is an association. In associative memory models, activation spreads from a single point of recall to all nodes that connect to that point. Joint activation (rehearsal) makes an idea more memorable. The more separate links to any given idea, the more likely it is to be recalled; these links create retrieval routes and enhance memory.
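As an illustration of spreading activation, here is a toy sketch (the network, link strengths, and decay value are all assumptions, not taken from the text): recalling one node passes activation along its links, so concepts reachable through more routes end up more active and easier to recall.

```python
# Toy associative network: nodes linked by weighted associations.
network = {
    "mug":     {"counter": 0.8, "coffee": 0.9},
    "counter": {"mug": 0.8, "kitchen": 0.7},
    "coffee":  {"mug": 0.9, "morning": 0.6},
    "kitchen": {"counter": 0.7},
    "morning": {"coffee": 0.6},
}

def spread_activation(start, steps=2, decay=0.5):
    """Spread activation outward from a recalled node for a fixed number of steps."""
    activation = {start: 1.0}
    frontier = {start: 1.0}
    for _ in range(steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbor, strength in network[node].items():
                passed = act * strength * decay
                activation[neighbor] = activation.get(neighbor, 0.0) + passed
                next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + passed
        frontier = next_frontier
    return activation

print(spread_activation("mug"))  # nodes linked to "mug" become partially active
```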
A person’s self-concept is made up of their complex beliefs about who they are. We develop a sense of our personal characteristics, our roles in relation to others, as well as our beliefs, thoughts, and goals. Most of our self-encoding is done in person-situation interactions, since our self-concept is highly flexible. Each situational norm brings a different element of self to the forefront. Whichever aspect of the self is being accessed at a particular time forms the working self-concept. Situations influence self-concept, as do relationships. When we become emotionally involved with significant others, we access our relational self with that significant other. This is called transference. For instance, someone living far from their family might revert to a childish version of themselves when reunited with their parents, because the relational role of ‘child’ will be activated. We tend to reciprocate certain behaviors (like agreeableness) with others, while we may contrast other traits (becoming submissive in the presence of someone dominant). In tight-knit groups, our personal identities may become highly linked to the group identity.
Self-schemas are cognitive-affective structures that represent the self’s qualities in a given domain. Our self-schemas tend to concern dimensions that we find important or on which we sit at a particular extreme. For instance, I might consider myself to be hardworking but be unsure whether I can call myself shy; I would, then, have no self-schema for shyness. Possible and feared selves are selves we would either like to become or are afraid of becoming. We will actively avoid behaving in ways associated with our feared self and strive towards becoming our possible self. Changes in either the possible or the feared self can influence our motivations and behavior in measurable ways.
The subjective sense of self seems to be a function of the left-hemisphere interpreter that integrates self-relevant processing in different areas of the brain. Our brain activates in different ways when we think about ourselves versus when we think about others. On person-appraisal tasks, the medial prefrontal cortex activates more, just as it does in other forms of social judgment. When we self-evaluate, the lateral prefrontal cortex as well as Brodmann’s area 10 activate. Neuroimaging studies distinguish the processing of self-schematic information from the processing of non-self-schematic information, which requires more episodic memory retrieval; self-schemas are more automatic and affective.
Self-esteem is the result of self-evaluations. We are concerned with who we are, but also with what we are worth. Self-esteem contributes to a sense of well-being, acts as a motivation for goals, and allows us to cope with difficult situations. The desire for self-esteem is related to a need to gain the approval of others, and can be considered a sociometer, or a general indicator of how we are doing in the eyes of others. Implicit self-esteem is a sort of instinctual valuing of the self, while explicit self-esteem is what we express about ourselves to others. These can contradict one another. Besides overall self-esteem, we also hold contingencies of self-worth, certain standards we hold for ourselves. If we do not meet these standards, our self-esteem may plummet.
Cultural background has an influence on self-concept. European and North American cultures often emphasize individuality and tend to foster an independent self-concept that is bounded, unique, and integrated. The self is viewed as a center of emotion, judgment, and awareness that is distinctly separate from its social and cultural context. Many Asian, Latin American, and southern European cultures have more of an interdependent self-concept, in which one sees oneself as a part of a social whole and adjusts one’s behavior to the perceived thoughts, feelings, and actions of others. Context and culture are much more integrated in an interdependent self-concept. While both types of self-concept take note of internal qualities and abilities, the interdependent view sees the self as more variable and situation-dependent.
There are some fundamental cultural differences in cognition. The main difference lies in the holistic nature of “eastern” thinking versus the distinctiveness-oriented thinking of “western” cultures. This also holds true for self-perception. With an independent sense of self, someone will strive to see themselves as distinctive and to achieve personal goals. With an interdependent sense of self, one is more connected with one’s social context, driven to belong and fit in more than an independently focused individual, and will base self-esteem on the ability to restrain oneself.
These differences also have an influence on memory. People with an independent sense of self often reconstruct events in terms of specific actors and their personal qualities. In doing so, they tend to ignore context when drawing conclusions about the social environment. People with an interdependent sense of self more often take into account the social environment by relying on the social group the actor belongs to.
Emotions are also affected by this cultural distinction: people with an independent sense of self will more often experience ego-focused emotions like pride and frustration. Interdependent people will feel more other-focused emotions.
Self-regulation describes how people control and direct their own actions, emotions, and thoughts. Much of this occurs automatically, though goal-relevant cues in the environment can also guide our behavior. Other times, we just get ourselves into action.
The working self-concept is influenced by situational cues, social roles, self-conceptions and values. Sometimes the working self-concept is actually in conflict with the stable self-concept. You may think yourself smart, and when you say something stupid, you will temporarily feel like you are stupid. If this happens often enough, self-concept is likely to change.
We have two semi-independent motivational systems regulating our behavior. The first is the appetitive (approach) system, the behavioral activation system (BAS). The second is the aversive (avoidance) system, the behavioral inhibition system (BIS). BAS leads us to approach other people and activities, while BIS leads us to avoid them. Environmental and internal factors both influence whether we are in a BIS or a BAS state, and people differ in which system tends to dominate.
Higgins examined how self-discrepancies guide emotions and coping behavior. A self-discrepancy is a gap between one’s actual self and a self-guide. Higgins distinguished two types of self-guides: the ideal self is the person one desires to be, and the ought self is the person one thinks they should be. The ought self is often based on one’s beliefs about appropriate societal behavior and the expectations of others. Discrepancies from the ideal self act as a motivator – people strive to improve, to promote themselves. Efforts to become the ought self lead to an inhibitory/prevention focus. Some people are more likely to have promotion goals (promotion focus) while others tend to have prevention goals, a difference related to the personality dimensions of extraversion and neuroticism.
Well-being is high when one experiences regulatory fit between one’s pursued goals and one’s regulatory focus. In other words, we are happiest when we are not there yet but feel that we are making steady progress. Socialization and culture also influence whether we look to the opinions of others for motivation or look for internal sources of motivation. This difference can also be seen in patterns of brain activation: a promotion focus is associated with left frontal lobe activity, while a prevention focus is more heavily represented in the right frontal region.
Self-efficacy refers to one’s beliefs about their own abilities to accomplish specific tasks. For instance, a star hockey player will have a high sense of self-efficacy when it comes to the ability to shoot a puck into the net. I, however, know that I have terrible aim and awful coordination, and have very low self-efficacy in that task. People have a general sense of personal control (mastery) that allows them to plan ahead, cope with setbacks, and regulate their own actions. That means that low self-efficacy could stop us from even trying to do certain things. In fact, even when it is unrelated to actual skills and abilities, high self-efficacy often translates into achievement. We often underestimate ourselves.
Self-regulation is influenced by attention. For instance, if our attention is directed inwards, we are self-aware. We can then evaluate our behavior against a standard. We then strive to adjust in order to reduce the discrepancy between our actual self and the standard. Or we give up. This is the feedback process that forms the basis of the cybernetic theory of self-regulation.
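A minimal sketch of that feedback loop (all values and names are illustrative assumptions, not part of the theory’s formal statement): compare the current state against the standard, adjust to reduce the discrepancy, and stop when close enough or when resources run out.

```python
# Sketch of a discrepancy-reducing feedback loop.
def self_regulate(actual, standard, adjustment_rate=0.5, max_cycles=10, tolerance=0.1):
    """Repeatedly compare the actual state against the standard and adjust."""
    for cycle in range(max_cycles):
        discrepancy = standard - actual
        if abs(discrepancy) < tolerance:
            return actual, cycle                 # close enough to the standard: exit the loop
        actual += adjustment_rate * discrepancy  # adjust behavior toward the standard
    return actual, max_cycles                    # resources exhausted: give up short of the standard

print(self_regulate(actual=2.0, standard=8.0))
```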
There are certain things that threaten our ability to self-regulate. For instance, when we feel socially excluded, we tend to have more difficulty self-regulating, ending up paying less attention, showing less control, and being more easily frustrated. This is characteristic of a sort of childish ego-defensive state. Other circumstances can create self-control dilemmas where we must choose between seeing small short-term desires met, or holding off in order to achieve long-term goals. Emotions are often involved in these dilemmas. Hedonic emotions characterize a short-term perspective while self-conscious emotions fit a long-term perspective. Self-regulation is most difficult when we are otherwise mentally tasked. This is why dieters might binge eat late at night, when their resources are depleted and their complex thinking is impaired.
Different sorts of self-regulation involve different brain regions. Conscious self-regulation puts demands on the prefrontal cortex, in particular the dorsolateral prefrontal cortex (dlPFC). This brain area is also implicated in behavioral self-regulation.
Where the dlPFC is especially implicated in planning, decision making, controlling (working) memory, processing novel information, and language functioning, the ventromedial prefrontal cortex (vmPFC) has been tied to controlling our behavior, emotional output, and interaction with other people.
The anterior cingulate cortex (ACC) interacts with the PFC in guiding and monitoring our behavior and has been tied to both cognitive and affective-evaluative processing.
We often feel the need to have an accurate view of ourselves. We compare ourselves with those around us (making social comparisons) in order to get feedback about our own abilities. Self-assessments can help us anticipate future problems and prepare for them. We seek accurate self-relevant information even when we know that news may be bad.
People feel the need to believe that they have intrinsic qualities and goals that remain relatively stable over time. We seek out situations and feedback that confirm our pre-existing self-concepts, and resist situations that are at odds with those concepts. This is self-verification. We seek to confirm both our positive and our negative attributes. We selectively interact with people who see us in the same way that we see ourselves. We mostly maintain our own self-views without consciously trying to; a consistent self-concept comes about naturally as we interact with others. In interdependent cultures, this need is slightly less strong and self-concept is more situationally variable.
People are motivated by a desire to improve. We set goals that will bring us closer to our possible selves. Self-improvement is also served by upward social comparison: if we compare ourselves to someone more talented or experienced than ourselves, we become motivated to learn from them and improve. Criticism, both implicit and explicit, also motivates self-improvement. Making and maintaining changes is a difficult process; we may think we have improved when we have not, instinctively distorting our memory of our previous skill level and imagining it as poorer than it really was.
Self-improvement is not the same as self-enhancement. Self-enhancement is the effort to maintain or create a positive sense of self. Self-enhancement needs are especially important when we are faced with threats to our self-esteem, like failure. Part of this need is related to our desire to feel socially accepted. People can satisfy self-enhancement needs by maintaining positive illusions: ideas about the self that are exaggerated towards the positive. People tend to see themselves more positively than is accurate, to believe they have more control than they do, and to be unrealistically optimistic about the future. When we feel bad about ourselves, one strategy is to make downward social comparisons to people less fortunate than ourselves: “I may have failed that test, but at least I’m not as dumb as Carl!” Ironically, we see ourselves as less biased than we believe others to be.
Why are self-perceptions so prone to self-enhancement? It turns out that positive illusions are actually adaptive for mental health. They allow us to feel competent enough to pursue our goals and give us the energy to engage in creative, productive behaviors. Self-enhancement also helps us cope in times of stress.
Social validation, being accepted for who we are, reduces defensiveness. When we feel liked for intrinsic elements of our personalities, we are more receptive to potentially ego-threatening information. People with low self-esteem tend to have unclear self-conceptions, set unrealistically difficult goals, and are more pessimistic about the future. They have more adverse emotional reactions to negative feedback, tend to evaluate themselves more negatively, and are vulnerable to depression.
Self-affirmation is a way to meet self-enhancement needs and helps us cope. Telling yourself “I might be terrible at hockey but I’m a fantastic cook!” softens the emotional blow of failure and makes us more receptive to negative feedback.
Tesser’s self-evaluation maintenance theory deals with the facilitation and maintenance of self-regard. We tend to place importance on the behavior of those close to us. When someone near to us does well, are we envious or proud? That depends on how close the area of their success is to our own self-concept. If I’m a painter and my friend wins an award for her awesome oil painting, I may be jealous and feel personally threatened. However, if my friend won an award for her outstanding work in biochemistry, there is no threat and I can take pride in her accomplishment.
The terror management theory suggests that threat can stimulate self-enhancement. People are biologically driven to self-preservation. When we are under threat, we strive to restore a sense of order to our world, and become motivated to follow culturally-approved norms.
There are cultural differences in self-enhancement and related needs. In Asian cultures, self-effacing biases are more common than self-enhancing biases. When an “eastern” student outperforms others, they credit external factors. It has been noted that self-enhancement needs may be met in different ways. Enhancing one’s social group or one’s standing within the social group may meet those same needs.
The motives just described govern behavior under different circumstances. If our social standing is ambiguous, we search for accurate feedback. We want consistency when circumstances challenge our self-perceptions, or when we have a prevention focus. We like accurate feedback, but are naturally happiest when it is positive. People with a promotion focus are more concerned with self-esteem than those with a prevention focus. Self-regulatory behavior serving self-enhancement and self-improvement is often automatic.
Self-relevant information is naturally very important to us, and this shows in the rich, interconnected, and enduring memory trace it leaves. Simulation theory describes how we self-reference: we infer the mental state of others by imagining what our own thoughts would be in a similar situation. We are more likely to self-reference when dealing with similar others than with people we see as very different from ourselves.
Social projection occurs when we estimate our own preferences, traits, problems, activities, and attitudes to be characteristic of others. The actual differences between ourselves and others are larger than we guess: just because we are a certain way doesn’t mean others share those traits. Social projection can occur even in the face of more accurate feedback and relevant information about others. We project because it helps us feel that we have good characteristics and because it lends us a simple cognitive heuristic for social judgment.
Traits central to our own self-concept are the ones by which we judge others most strongly. We may assume others share our weaknesses but that our strengths are unique. We tend to project our attitudes on people who are attractive, and our undesirable qualities on people who are unattractive. These types of projection increase when we feel our self-esteem is under threat, and diminish when we feel secure. We use stereotypes to invalidate the expertise of those who don’t like us or criticize us.
Attribution concerns how people infer causal explanations for other people’s actions and mental states.
When we attempt to interpret any given social situation, we look for internal causes (the dispositions of those involved) and external causes (environmental pressures). In the 1970s, the naïve scientist view suggested that causal reasoning was a time-consuming and complex process. However, this is unlikely to be true: because we have a store of examples to draw from in long-term memory, when trying to figure out the cause of something we will choose a cause we recognize. Controlled information processing may take over if an outcome is difficult to explain, like an unexpected failure.
Looking at how children attribute cause and effect reveals our own basic principles of causation. The main principles:
Cause precedes effects.
Temporal contiguity: effects tend to follow quickly after causes.
Spatial contiguity: effects tend to occur in close proximity to causes.
Noticeable/salient causes seem most plausible.
Causes resemble effects in their magnitude.
Effects are attributed to representative causes.
These basic rules are often followed by children, and resorted to by adults when dealing with an unfamiliar domain. The less we know about something, the quicker we are to resort to these rules.
Theory of mind is a term that refers to our ability to understand that other people have a mind, just like we do. Children develop systems of reasoning about other people’s minds. Mind perception is a term that describes everyday inferences about the mental states, intentions, desires, and feelings of others. People tend to see minds as active not only in humans, but in objects, animals, deities, etc. Ever heard the phrase “that dog has a mind of his own”? We want to explain the behavior of animals, to personify the objects around us. It is because we know other people have minds that we can explain their actions by their emotions, intentions, and desires, as well as their beliefs and personality characteristics.
The brain regions involved in the theory of mind are the anterior cingulate cortex (ACC), the posterior superior temporal sulcus (pSTS) at the temporoparietal junction (TPJ), and the temporal pole. When people actually interact socially, the anterior paracingulate cortex (mPFC) also seems to be activated.
Early attribution theories looked at the type of attribution processes that demand effort, in which we gather information to explain events. Six main theoretical traditions form the basis of early attribution theory. These will be dealt with below.
Common sense psychology (aka naïve epistemology) is a term used to describe how people think about and infer meaning from what occurs around them. It can be inferred by listening to how people describe their experiences. Heider examined how people are able to come up with dispositional properties from the behaviors they observe in others. He looked at the locus of causality, which might be internal or external, or both. People might have the ability to accomplish something, but they must also be motivated in order for that thing to get done. Ability is made up of a person’s talents and strengths, as well as the environment’s facilitating and inhibiting influences. Motivation is a combination of intention and effort. Heider’s major contribution was to note that people look for stable invariances in their environment that can account for stability and change.
Jones and Davis came up with the correspondent inference theory, which is a theory of person perception. Social perceivers aim to identify intentions behind behavior so that they can figure out a person’s disposition and more accurately predict that person’s future behavior. The social desirability of an outcome provides a clue for the social perceiver. If someone does something that is outside of socially accepted norms, that behavior is probably much more representative of the individual’s disposition than mere conforming behavior might be. This is also the case when behavior is constrained by the social role one is playing. We cannot judge a person very well based on their expected role behavior.
The social perceiver analyzes the unique, noncommon effects that result from an action to infer the intention of the actor. Analyzing noncommon effects can, however, be time-consuming and complex. We tend to see the behavior of others as based on their disposition when their actions have hedonic relevance (are self-serving) and when they relate to the observer personally (personalism). If the behavior of the actor is situationally constrained, it will be hard to infer the actor’s intentions and disposition. Research on correspondent inference theory found that people are biased towards inferring the dispositions of others and often ignore relevant qualifying information. The theory and the research that followed it popularized research into attribution psychology.
Kelley’s attribution theory looked at when and how people seek to validate their causal attributions. Our information might seem inadequate if it receives little support from others, if problems are too difficult, or if our self-esteem is undermined. Uncertainty, in other words, makes us want an explanation.
The covariation model
Three elements of attribution testing are:
Distinctiveness: Would the effect occur regardless of the presence of the actor?
Consistency over time/modality: Does the effect occur each time the actor is present, or during each different interaction?
Consensus: Do other people experience the same effect when the actor is present?
With high distinctiveness, high consistency, and high consensus information, a confident attribution can be made. With low consensus, high consistency, and low distinctiveness, the fault may be placed on the receiver of the effect. With high distinctiveness, low consensus, and high consistency, the fault may lie in both parties. The social perceiver collects information on one dimension at a time, holding the other dimensions constant until it is their turn to be analyzed.
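The three patterns just described can be expressed as a simple decision rule. The sketch below is purely illustrative (the function name and labels are hypothetical, and Kelley’s model is richer than three if-statements):

```python
# Illustrative rule-of-thumb version of the covariation patterns described above.
def attribute(distinctiveness, consistency, consensus):
    """Return a rough attribution given high/low values on Kelley's three dimensions."""
    if (distinctiveness, consistency, consensus) == ("high", "high", "high"):
        return "confident attribution can be made"
    if (distinctiveness, consistency, consensus) == ("low", "high", "low"):
        return "fault placed on the receiver of the effect"
    if (distinctiveness, consistency, consensus) == ("high", "high", "low"):
        return "fault may lie in both parties"
    return "no confident attribution; gather more information"

print(attribute("high", "high", "high"))
```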
Kelley’s model is normative: a formal, idealized set of rules for validating attributions. In reality, people deviate from this format in many ways. The evidence available to the social perceiver is rarely so clear. When forming a causal judgment, we tend to put less emphasis on consensus because we naturally assume that other people share our opinions (the false consensus effect). Causal attribution also varies depending on the type of event being explained. Involuntary occurrences are treated differently than voluntary actions. Voluntary actions are divided into endogenous acts (ends in themselves) and exogenous acts (goal-directed behaviors). Exogenous voluntary acts seem less freely chosen and yield less pleasure.
Causal schemas
Kelley’s models of causal schemas were also influential: success at a difficult task requires both effort and ability (a multiple necessary cause schema), whereas success at a simple task can be explained by effort or ability alone (a multiple sufficient cause schema). Kelley also explained the discounting principle, which states that people reduce the importance of one cause when they learn of another significant cause. The augmenting principle maintains that people increase the importance of a cause when the effect occurs despite an inhibiting factor.
Schachter’s theory of emotional lability deals with the labeling of arousal states. Schachter observed that people, when stressed, seek to compare their emotional states with others. He reasoned that emotional arousal states may actually be subject to interpretation. The misattribution effect occurs when people reattribute emotional reactions caused by threatening events to a neutral or less threatening source, in order to reduce anxiety. Some experimental support has been found for the misattribution effect, though it is not as significant as originally hoped, and such effects tend to be short-lived.
Daryl Bem’s self-perception theory was grounded in behaviorist psychology. He hypothesized that people infer their own attitudes just as they infer the attitudes of others. Thus, we infer that we are angry because our heart rate is elevated and our nostrils are flaring. Self-perception theory holds when our pre-existing attitudes and internal cues are weak, but is not relevant when our attitudes are strong. One of the major benefits of Bem’s theory is its application in the analysis of behavioral motivation: we look for the reasons behind what we do, attributing our actions to either extrinsic or intrinsic motivations. Bem’s theory suggests that this motivation is flexible and depends on the salience of the information available to us.
Weiner’s attributional model of motivated behavior was derived from Heider’s theory. Weiner’s theory, however, relies not on generic rules but on the context in which behaviors take place. This theory suggests that unexpected results prompt us to attribute causes – we don’t need to explain something that goes according to plan. The dimensions of locus, stability, and controllability help us understand the causes of behavior. The stability dimension indicates whether the cause of something is likely to change, and therefore predicts expectations about subsequent success or failure. The locus dimension concerns whether internal or external factors are given the blame, and is connected with self-esteem, pride, and shame. The controllability dimension concerns whether a person can influence the outcome.
In contrast to early attribution theories, which focused primarily on the logical principles of attribution processes, later research has focused primarily on the mental operations that cause attributions.
One of the first researchers to investigate the mental operations that underlie attributions was Trope. According to his two-stage model of attribution processes, judgments about other people’s dispositions derive from both a spontaneous identification process and a more controlled inferential process. The identification stage labels the immediate behavior, the situation, and any prior information about the actor in terms of disposition-relevant categories. Expectancies can bias this identification process. However, biased or not, the identifications provide the data for subsequent inferences about the actor’s dispositions. At stage two, situational expectancies are subtracted from the disposition implied by the identified behavior. This happens according to the subtractive rule: an inhibiting situation increases the diagnostic value of the identified behavior, whereas a facilitating situation reduces it. So, according to the two-stage model, the effects of situational, behavioral, and prior information at the identification stage may or may not mirror their role at the inferential stage.
One factor that affects the impact of situational information on behavior identification is the ambiguity of the perceived behavior. Ambiguous behavior results in a behavior identification bias, which in turn masks the subtraction of the situation at the inferential stage.
Elements of these insights suit various stage models of attribution processes, for example Uleman’s theory of spontaneous trait inferences. This theory states that traits can serve as retrieval cues.
According to the integrative stage theory, attribution consists of three stages: (1) a stage in which the stimulus configuration is perceived (categorization stage), (2) a stage that attributes dispositional qualities to the action (characterization stage), and (3) a stage that uses situational and other information to reduce or increase the initial dispositional attribution (correction stage). This integrative model assumes that a person’s attentional capacities are restricted and easily overwhelmed. The resulting cognitive busyness reduces the attentional resources available for the later inference processes.
Lieberman has provided a neural model for the integrative stage theory. According to him there are two networks that characterize automatic and controlled processing, namely the X system (reflexive), and the C system (reflective). The reflexive network involves regions implicated in automatic processing, like the amygdala and basal ganglia. The reflective network, on the other hand, involves regions that are involved in controlled processing, such as the lateral and medial prefrontal cortex. When everything goes without any problems or difficulties, thought and behavior are guided by X system processes. However, when a conflict arises, or when more controlled processing is required to interpret behavior, the C system is involved.
In this section the most important errors and biases are discussed.
The fundamental attribution error (or correspondence bias) over-attributes the behavior of someone else to dispositional causes. Instead of taking external forces into account, we often assume that another person’s behavior is a reflection of that person’s stable qualities. The fundamental attribution error depends on the behavior and the setting; behaviors in different settings map onto dispositions in different ways. It also varies with factors such as cognitive busyness.
When people are in a good mood, dispositional attributions for someone else’s behavior increase, whereas they decrease when people are accountable to others for their judgment, or when they are convinced that the other person may have ulterior motives. We also tend to qualify dispositional attributions for the behavior of people we know well, because we take more information into account.
The fundamental attribution error is stronger in Western cultures, compared to non-Western cultures. And thus the Western social perceiver may be more justified in making dispositional attributions.
The actor-observer effect maintains that people explain others’ behavior as due to dispositional factors, whereas they explain their own behavior as due to situational factors. However, research suggests that this pattern primarily holds (a) when the actor seems peculiar or unusual, (b) when measured by free responses (rather than ratings), and (c) for hypothetical events or situations. Actors seem to wonder more about their unintentional and unobservable behaviors, whereas observers wonder more about intentional and observable behaviors. The actor-observer effect is also more likely when an event has a negative, rather than a positive, valence.
According to the self-serving attributional bias, people have a tendency to take credit for success and deny responsibility for failure. People don’t only explain their own behavior in this way, but also that of their friends or groups. The group-serving bias refers to the tendency of group members to attribute positive actions committed by members of their own group to positive ingroup qualities, and negative actions to external causes; for the outgroup this works the other way around. A benefit of the self-serving attribution bias is that it may be motivating, since it preserves one’s ego.
The self-centered attribution bias consists of taking more than one’s share of credit for a jointly produced outcome. This bias can be explained by the fact that people notice and recall their own contributions more easily than those of others. In addition, self-esteem benefits from believing that one’s own contributions are greater.
Naive realism is the belief that we see the world around us as it truly is; if others see or experience it differently, they must be biased. In other words, we think other people are more susceptible to bias than we are, especially the people who disagree with us.
Attributions of Responsibility or Blame
Attributions of responsibility concern who or what is responsible for a certain event or action, especially negative ones. Negative events readily elicit such attributions: a belief that someone could and should have foreseen the situation, that the actions of the person involved were not justified, and that he/she acted of his/her own free will.
A related phenomenon – defensive attribution – refers to the fact that people attribute more responsibility for actions that generate severe, rather than mild consequences.
To reduce our cognitive load, we take shortcuts. We could, in any given situation, optimize our decisions by using the expected utility model and weighing all the alternatives before making our decision. Yet we are not optimizers; we are satisficers who make adequate, rather than optimal, inferences and decisions. We do what suffices.
Heuristics are standard shortcuts that allow us to reduce complex problem-solving to simple operations. Heuristics are useful, though not infallible: they make time-consuming and complex activities faster and easier. Below, four of the main heuristics are discussed.
Representativeness is used to generate inferences about probability. We match the information we receive about a particular instance to the general category we have in our minds. We assume a shy person is more likely to be a librarian than a comedian because our stereotype of librarians includes the descriptor “shy”. When using the representativeness heuristic, people often ignore the prior probability of outcomes (context), and will also ignore sample size. Sampling theory suggests that estimates derived from a large sample are more reliable than those derived from a small one. Nevertheless, when using the representativeness heuristic, we are more likely to draw conclusions even if our sample size is too small. People also have misconceptions about chance and probability due to this heuristic.
The availability heuristic is our assumption that something is more likely to occur if we can easily remember other times that something similar has occurred. For instance, one might say that everybody is having kids if they notice that many of their friends are having kids. This tends to skew our view of statistics – the more examples of things we see on television or other media, the more likely we think they are to occur. Memory accessibility facilitates the availability heuristic. In social situations, which are full of information and often involve high-memory-load conditions, we tend to use the availability heuristic more easily.
The simulation heuristic is an inferential technique in which we construct a hypothetical scenario to estimate how something will occur. This simulation acts as a guide in our present behavior. We want to avoid regret, so we will avoid decisions we think might lead to that state. Imagining certain events makes them seem more likely. Furthermore, the easier something is to imagine, the more likely we think it is to occur.
Counterfactual reasoning
Counterfactual reasoning, or the mental simulation of how events might otherwise have occurred, affects our judgments. For example, it is used to assess causality by attempting to identify the unique or unusual factor that produced a dramatic outcome. The contrast between the normal situation and the exceptional circumstance can intensify emotional reactions to unusual situations. Imagining an alternative through counterfactual reasoning has a great effect on expectations, causal attributions, impressions, and emotions, especially when someone’s (in)actions are inconsistent with his/her personal beliefs or orientation. Counterfactual thoughts have several advantages. First, in some scenarios they can help people feel better, for example by thinking that things could have gone worse in stressful events. They also provide meaning for pivotal life events by finding benefits and crediting fate. In addition, counterfactual thoughts may serve a preparatory function for the future by letting you learn from your mistakes. For example, if you realize that you would have passed the test if you had started studying earlier, you will probably start studying earlier for the next test.
Using mental simulation
Simulating how events might happen allows us to develop plans and prepare for problems. A fantasy might allow us to believe our goals are more approachable. It has been found, however, that wishful thinking does not improve the likelihood of achieving our goals (mental subtraction), but imagining ourselves engaging in the process of working towards those goals does (mental addition).
Anchoring is the use of a reference point in order to reduce ambiguity. While we know that others would realistically not always behave as we do, we often use ourselves as an anchor point when predicting the behavior of others. Our choice of anchor significantly impacts our decisions and predictions. For instance, if we use ourselves as an anchor, we will find any behavior different from our own to be wrong. Juries asked to consider the harshest verdict first will be much harsher than juries asked to consider lenient verdicts.
Judgments are affected by how decisions are framed. A decision frame is the underlying structure and background context of a particular choice. Changing the frame even slightly can impact decisions in a major way. One particularly strong principle in decision making is risk aversion: people look for ways to avoid risks when dealing with possible gains, but seek risks when dealing with possible losses. That means that if a problem is phrased so that the focus is on gains, people will be willing to take fewer risks than if it is phrased in terms of losses. Personality comes into play as well: individuals with a promotion focus will be more persuaded by gain arguments than by loss arguments.
Prospect theory explains how people weigh options against one another. This involves a frame of reference and a subjective value function. We select a reference point when evaluating options, usually an internal standard, and can judge from that whether the option is positive or negative. The subjective value function of prospect theory is expressed as positive or negative deviations from a neutral reference point. This whole theory explains, to some degree, why fear-based media coverage tends to encourage people to take risks, since we base our decisions on the possible losses we observe.
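The subjective value function can be sketched roughly as follows. The curvature and loss-aversion parameters are commonly cited illustrative estimates, not values given in the text, and the function name is hypothetical:

```python
# Sketch of a subjective value function in the spirit of prospect theory.
def subjective_value(outcome, reference=0.0, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Value gains and losses as deviations from a neutral reference point."""
    deviation = outcome - reference
    if deviation >= 0:
        return deviation ** alpha                     # gains: diminishing sensitivity
    return -loss_aversion * (abs(deviation) ** beta)  # losses: steeper, also diminishing

print(subjective_value(100))   # a gain of 100 feels good...
print(subjective_value(-100))  # ...but a loss of 100 feels more than twice as bad
```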
Bayes’ theorem is a normative model that dictates how we should predict the likelihood of future events. Yet, even when the relevant population characteristics are available, we are often unable to use them effectively in decision making. One of our major problems is that we don’t use consensus information in making causal attributions. We tend to ignore base rate information in favor of more concrete, anecdotal, but less reliable information that we come across, and we don’t always see how base-rate information is relevant to our particular judgments.
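As an illustration of how base rates should enter such judgments, here is a small worked example; all numbers (base rates and likelihoods) are hypothetical assumptions chosen only to show the arithmetic:

```python
# Worked Bayes example: even if "shy" is far more typical of librarians than of
# salespeople, the much larger base rate of salespeople can make "salesperson"
# the more probable category for a shy person.

p_librarian = 0.01            # assumed base rate of librarians in the population
p_salesperson = 0.99          # everyone else, for simplicity
p_shy_given_librarian = 0.70  # assumed likelihood of shyness among librarians
p_shy_given_salesperson = 0.10

p_shy = (p_shy_given_librarian * p_librarian
         + p_shy_given_salesperson * p_salesperson)

p_librarian_given_shy = p_shy_given_librarian * p_librarian / p_shy
print(round(p_librarian_given_shy, 3))  # ~0.066: still unlikely, despite the stereotype
```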
The conjunction fallacy is an error made when we make more extreme predictions for the joint occurrence of events than for a single event. We prefer easily imagined conjunctive explanations. This can be explained by looking at how we understand the actions of others. We try to look at people’s behavior in terms of their goals. Since conjunctive explanations give detailed information on goals, we tend to expect them to be informative. Because they are easy and seem more likely, we don’t often look for alternative explanations of behavior.
Covariation is the strength of the relationship between two things. We use correlation and covariation to make social attributions. The social perceiver makes the mistake of looking for supporting evidence rather than considering all variables in a situation. Not only do we not compare all possible elements of a judgment, we also use a poor sample, often a biased and small selection of people we have encountered. The covariation process requires classifying instances in terms of the types of evidence they provide. Negative instances (ones that contradict the relationship) may be miscategorized or dismissed as an exception. Instances that fit expectations are easily identified and incorporated into inferences. The perceiver tends to also misremember the evidence of certain behavior, more easily forgetting information that disconfirms the hypothesis. Mood can change how we analyze information, and we are all subject to confirmation bias.
An illusory correlation is the perception of a relationship between two variables when only a minor relationship, or no relationship at all, actually exists. Two factors can produce an illusory correlation: associative meaning, in which two items are seen as belonging together because they fit together in our expectations, and paired distinctiveness, in which two things are thought to go together because they share an unusual feature.
Sometimes people use heuristics and shortcuts more readily than at other times. They especially turn to them in domains where they have a sufficient amount of practice and have developed strategies that have worked before. People are also more likely to use heuristics when they experience approach emotions and when they have to solve unimportant tasks, so they can save online capacity for more significant judgments.
As mentioned previously, the discounted utility (DU) model posits that the utility of any choice diminishes as consequences are spread over time. The farther ahead an event may be, the greater the weight of cognitive outcomes and the less the weight of affective outcomes. When people take a cool mindset, they experience an empathy gap whereby they cannot easily imagine people acting in a hot mindset.
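A minimal sketch of the discounting idea follows; the exponential form and the discount rate are assumptions for illustration, not values given in the text:

```python
# Illustrative discounted utility: a future outcome's utility shrinks with delay.
def discounted_utility(utility, delay_in_years, discount_rate=0.10):
    """Exponentially discount a future outcome's utility back to the present."""
    return utility / ((1 + discount_rate) ** delay_in_years)

print(round(discounted_utility(100, 0), 1))   # 100.0 now
print(round(discounted_utility(100, 5), 1))   # ~62.1 if delayed five years
```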
Temporal construal theory suggests that the greater the temporal distance to an event, the more one thinks about that event in abstract terms. There are low-level construals (the nitty-gritty, concrete details of a thing), which we tend to focus on when we’re close to a deadline. However, when that deadline is far away, we focus on the abstract, bigger picture (the high-level construals). We tend to imagine distant future events as more positive than imminent ones. Spatial distance also affects judgment, causing us to rely on high-level construals.
Remember that whole idea that if we forget the past, we are doomed to repeat it? One problem with that idea is that we have twenty-twenty hindsight. We exaggerate the extent to which we could have anticipated or predicted events. The hindsight bias is motivated by cognitive factors related to the ability to construct causal explanations for events, rather than by the desire to appear correct in retrospect.
To reduce our cognitive load, we take shortcuts. We could, in any given situation, optimize our decisions by using the expected utility model and weighing all the alternatives before we decide. Yet we are not optimizers but satisficers: we make inferences and decisions that are merely adequate. We do what suffices.
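For comparison, the optimizing benchmark we skip can be written as follows (a generic sketch, not notation from the text):

EU(\text{option}) = \sum_{i} p_i\,u(x_i)

where p_i is the probability of outcome i and u(x_i) its utility; the optimizer computes this sum for every alternative and picks the highest, which is exactly the exhaustive comparison the satisficer avoids.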
We tend to assume that people behave in order to achieve goals. It’s pragmatic to assume that people are rational – we are able to use reason as a reference point. Even though social inference doesn’t correspond to normative models, the comparison of naïve inference against normative models can be revealing. One model used to evaluate human inference is expected utility theory. One reason that social inference doesn’t conform to normative models is that the social perceiver is not motivated by accuracy, but rather motivated to feel good. Affective considerations guide the process. Furthermore, there are efficiency pressures on social inferences, as the environment changes rapidly and the mind is overwhelmed with mixed data. Our short-term memory has a limited capacity, meaning that not all information can be used properly and that many shortcuts are made.
When we make an inference, our first step is to gather information. Even at this early stage, errors begin to occur. We are unable to take all relevant information into account in a short time and while distracted by other things. Instead, we select data according to pre-existing expectations. The theories upon which we base our observations are often flawed, and we often mistake biased data for raw data. Theories should not guide sampling, but the social perceiver is prone to let them. If the perceiver discovers this early, they may be careful not to continue in the same way.
Once we decide what information to collect, we have to sample that data. However, social perceivers are notoriously bad at sampling. When we notice extremes, we tend either to write them off as exceptions or to treat them as examples of the norm. This occurs because of the anchoring-and-adjustment heuristic. The law of large numbers states that a larger sample size leads to more accurate results. Yet when the social perceiver draws a sample, it can be as small as a single instance of another person’s behavior. When we do increase our sample size, we look at our friends and acquaintances, all of whom are connected to us in some way and thus unlikely to be adequately representative of the population as a whole.
Regression to the mean refers to the fact that extreme events will be less extreme when reassessed later. If we have limited or unreliable information, it’s best to make an inference that is less extreme. However, when we encounter extreme information, we draw extreme inferences about subsequent behavior. Regression is rarely appreciated, and this can lead to many errors in judgment.
When we are judging someone, the more information we have about them, the less likely we are to categorize them into a simple stereotype. This is called the dilution effect. Interestingly, we can cause something like the dilution effect when we ask people to be accountable for their judgments, to justify their inferences. This causes them to make use of a wider range of information.
If we want to improve the inference process, we have to look at errors and biases for what they are. There are three perspectives on errors and biases:
Errors are consequential and real: we need to intervene!
Errors are a byproduct of laboratory experimentation: we need to trust the checks and balances that exist in real social contexts.
Heuristics might be superior to considered reasoning: we should not over-analyze the issue.
According to a controversial paper by Nisbett and Wilson, people have very little introspective access to their cognitive processes.
Our inferential failings become more obvious when we are pitted against computer programs. Some businesses may be better off using a computer to analyze information and reach more reasoned decisions. One normatively appropriate way to do this is to examine each piece of relevant data, multiply it by its weight, add the products up to a total case score, and compare that score against others. That is the linear model of decision-making. Human decision makers often feel that they are very good at inferences and that intuition beats a judgment made by an algorithm. However, people are more prone to stereotype, play favorites, and otherwise let their personalities cloud their judgment.
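A minimal sketch of this linear model in Python; the cue names, weights, and candidate values are hypothetical placeholders chosen only to illustrate the weighting-and-summing step, not data from the text:

def case_score(cues, weights):
    # Multiply each piece of evidence by its weight and sum the products.
    return sum(weights[name] * value for name, value in cues.items())

# Hypothetical weights an admissions office might assign (illustrative only).
weights = {"grades": 0.5, "test_score": 0.3, "interview": 0.2}

candidates = {
    "A": {"grades": 8.0, "test_score": 7.0, "interview": 9.0},
    "B": {"grades": 9.0, "test_score": 6.5, "interview": 5.0},
}

# Score every candidate in exactly the same way and rank them by total case score.
ranked = sorted(candidates, key=lambda c: case_score(candidates[c], weights), reverse=True)
print(ranked)  # ['A', 'B'] with these illustrative numbers

Because the same weights are applied to every case, the procedure cannot play favorites, which is precisely its advantage over intuition.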
People can be trained to improve their reasoning skills. For instance, the guided induction approach (learning through example) allowed participants to improve not only in areas in which they were specifically trained, but also in related areas that were not explicitly trained. There are different types of reasoning – we are all capable of using abstract statistical methods, but we do not always apply them to the right situation.
The idea that biases don’t actually matter in the real world is one that raises a couple of interesting points. For instance, research documenting errors and biases cannot fully recreate the complex conditions under which people make their judgments. Furthermore, judgment tasks used in experimental tests are often stacked against intuitive strategies of inference that we might normally apply. Normative models ignore the content and context of a decision in favor of its formal structure. Intuition does take these things into account. We make judgments in a dynamic environment, so experiments conducted in a static one are not necessarily valid.
Some errors actually do not matter – when we make a snap judgment about a stranger we will never meet again, this will not impact our future behavior. If biases are consistent over time, this also reduces their impact. The conversations and interactions we have constantly test our inferences against reality, and we correct them. Sometimes this occurs because we encounter opposing evidence often enough, sometimes because it is made clear to us that we are wrong.
Are decisions made in the blink of an eye sometimes better than those we consider carefully? Our minds are remarkably good at making judgments from only a thin slice of behavior. While not all instantaneous judgments are correct, people with expertise in a certain domain can make faster and more accurate judgments than one might expect. When we analyze the reasons behind our inferences, we can actually change those inferences. Sometimes conscious deliberation can be detrimental – a complex decision can sometimes be more safely trusted to intuition. For instance, when faced with the choice of which house to buy, it may help to remove yourself for a while and focus on other things. Your mind will unconsciously sift through the alternatives and the complex features involved in the decision, and will make the decision for you.
Social inferences are not based solely on utilitarian needs for accuracy and efficiency. They meet our motivational needs. It can be psychologically advantageous to hold false beliefs – we believe our marriage will last even though it is statistically unlikely. False beliefs can be motivating!
Neuroeconomics is an interdisciplinary field that merges economics, neuroscience, and psychology to clarify the accuracy-efficiency tradeoff and the role of motivation in inference. It uses the expected utility model to generate predictions about how inference should proceed, which brain regions are active, and whether or not the actual process of inference reflects the ideal. This is useful in that it allows us to analyze the process at a neurological level, in a way that can lead to surprising results. For instance, the dopamine system may be critical to value assessment. The norepinephrine system is involved in regulatory activity and also activates during the judgment process.
Neuroeconomics integrates predictions from social cognition research, like the slowness of controlled processing versus the speed of automatic processing.
Attitude is considered a hypothetical mediating variable. Attitudes categorize a stimulus along evaluative dimensions, typically good versus bad. Attitude research now incorporates many elements of social cognition research.
Cognition has always been an element of research into attitudes. The most influential approaches to the study of attitudes have been cognitive consistency theories, which all assume that inconsistencies among cognitions, affects, etc. cause attitude change.
Newer cognitive approaches build on older attitude theories. Current researchers use old experimental designs, and tend to build upon old theoretical constructs. However, one of the major differences between new and old approaches is the metatheoretical framework. Most cognitive consistency theories posit a strong motivational need for consistency and a drive to reduce internal discrepancies. Newer approaches draw on cognitive theories that were previously unavailable, and borrow methods from cognitive psychology that became available after the 1970s.
The first of the two main consistency theories is dissonance theory. This view suggests that the experience of inconsistency between our beliefs and our behavior causes us to feel dissonance. Dissonance is a motivational state that we seek to relieve ourselves of. We feel uncomfortable when we experience cognitive dissonance and try to relieve it in a number of ways. For instance, we may change our behavior to fit our belief. Or we might change our belief. Otherwise, we might find an acceptable justification for our behavior that allows us to feel as if we have not broken our own code.
Selective perception theory suggests that people seek out and interpret data that reinforce their attitudes and beliefs, ignoring countering information. Selective perception is made up of three acts: selective exposure means we seek out consistent information (we see what we want to see), selective attention means we only heed information that is consistent, and selective interpretation means that we interpret ambiguous information as if it bolsters our belief. De facto selective exposure occurs when we develop an environment that is biased in favor of our belief systems (we want to read things that preach to the choir, and make friends who are like us).
Different cultures show different manifestations of cognitive dissonance. Europeans tend to justify a choice by considering the chosen option superior and downplaying its flaws (spreading the alternatives). East Asians are less likely to engage in this sort of justification.
Dissonance theory predicts selective learning. We will be more motivated to learn and retain information favorable to our preconceived attitudes and beliefs. This is most common when we are engaged in incidental learning. However, when we are actively studying and know we will be examined, we will be more open in what we learn.
According to balance theory, there are structures in the social perceiver’s mind that represent the perceiver (P), another person (O), and the mutual object (X). Liking and belonging are positive (+), disliking is negative (-). If you like someone and agree with them, the relationship is balanced. But if you like someone and disagree with them, the relationship is imbalanced and can cause trouble. When we are romantically involved with someone, our attitudes tend to align with theirs. When we disagree with people we like, we can feel intensely uncomfortable and ambivalent. Social schema research suggests that balanced relationships are stored in memory as a single unit. People mentally group together others who agree with one another, and have more difficulty remembering unbalanced relationships between people.
Our attitudes impact what we remember about our situation. If we judge someone to be intelligent, we will be more likely to recall their witty comment than their silly joke. Dual attitudes occur when an older, automatic attitude and a newer, explicitly accessible attitude interact. These two competing attitudes can create subtle forms of ambivalence and undermine confidence.
Discrete, declarative representations that operate in serial processes of categorization and reasoning can be contrasted with distributed, procedural representations that operate in parallel processes of attitude generation and response. According to the associative-propositional evaluation model of attitudes, both types of representations play a role in our formation of attitudes.
In this section people’s everyday social theories about why some attitudes do or do not change are discussed.
One thing that many advertisers have taken advantage of is the impact of the communicator in changing our attitudes. When a message is communicated by someone we respect – a hero or model – we are more likely to see their message as trustworthy. Similarly, if the communicator is relatable (similar to ourselves), we will also take heed. In determining the validity of the message, we take reporting biases and knowledge biases into account, and are more likely to trust someone who seems to be risking a lot to express a dissident opinion than someone we suspect is simply saying what they are expected to say. We analyze multiple sufficient causes (Kelley’s theory): given several plausible causes, the weaker ones are discounted. When disposition and circumstances both clearly point toward someone giving a certain speech, the content of the message carries little weight. Attributions that make us feel a communicator is less credible can stop the message from having an impact on our attitude.
We feel people who agree with us have more objective and fact-based arguments, and discount others who disagree with us on the basis of bias and exaggeration. This explanation draws on attribution theory and specifically, Kelley’s covariation model of attribution. When two people disagree, each person feels that the other is not being objective. We tend to believe that any source that agrees with us is credible. Familiar others are persuasive.
Group polarization occurs when a group is unevenly split on an issue – discussion of the issue tends to push the majority towards an extreme version of their original point of view. An even split in a discussion causes the winning side to form a more moderate version of their original point of view. Entering a group that shares your viewpoint is most likely to make your view much more extreme. There are several explanations offered for group polarization. Some of these relied on traditional variables (normative influences, such as norms and values). One other explanation relied on a cognitive interpretation of the group interaction (informational influence). On the informational side, group discussions in a majority group tend to involve a pooling of “for” arguments while “against” arguments are ignored. Discussion also serves to validate opinions that might have been previously tentative. On the normative side, people might change their opinions to be both more similar and more extreme to those of the group. This might be explained by the hypothesis that people value risk more than caution, which suggests that people try to become strong supporters of popular but risky extremes.
An informational explanation is offered by the persuasive arguments theory, which argues that attitudes within groups polarize toward relatively extreme (cautious or risky) alternatives when people are faced with new information. Another informational explanation relies on social comparison theory, which posits that people evaluate their position relative to similar others that are doing better or worse. Social identity and self-categorization theories combine informational and normative influences to explain attitude polarization in groups. The first states that people interact along a continuum from interpersonal to intergroup identities. The latter builds on this theory, stating that people categorize themselves and others into distinct social groups, ingroup and outgroup members. According to this theory intergroup behavior is determined by social identities because people act as group members, categorized by normative and comparative fit in the meta-contrast ratio.
Another approach, namely agent-based modeling, uses computer simulations to represent the distributions of individuals with various attitudes, goals, knowledge, and other characteristics. According to this approach all these particular characteristics interact with each other as autonomous actors to produce emergent outcome patterns.
Self-perception theory originally suggested that people tend to infer their attitudes from their own behavior when they are uncertain. This can cause people to misremember their own prior attitudes. We don’t always recognize that our beliefs and attitudes have changed. People have implicit theories with which they construct their personal histories. It’s a two-step process whereby we use our current attitudes as a starting point and try to decide whether we were different in the past.
Temporal self-appraisal theory suggests that people distance themselves from negative past selves and decrease the distance to positive past selves as a way of guarding self-esteem.
Classic theories differentiate the motivational functions behind attitudes and attitude change: compliance (to gain rewards and avoid punishments), identification (to enhance belonging), and internalization (to store attitude-relevant knowledge).
Conviction is the emotional commitment we feel towards an attitude. When we feel our attitude is absolutely correct, morally and otherwise, we don’t want to change it. Conviction usually means that we have elaborated on a view, connecting it to many issues and applying it to many situations. Attitude strength is the degree to which we try to persuade others to share our opinions. Attitudes become stronger as we age. Attitude importance is our interest or concern about an attitude, and predicts whether we will seek out relevant information. It is most similar to value-relevant involvement; this involvement in an attitude indicates its importance to a person’s social or moral standards.
When an attitude shows centrality (rich connection) to our personal values, it can be difficult to change. Attitudes in which one has certainty help a person to understand their experiences and formulate a sense of self.
Attitude functions can be divided into several categories: knowledge, values, and sociality.
The most important attitude function is object appraisal, which comprises two parts. Part one, the knowledge function, is cognitive and adaptive, allowing us to make sense and order of the world. Part two, the instrumental function, helps us achieve adaptive goals, to avoid pain and receive rewards. Attitudes can have a bipolar structure, in which extreme material on either side is easily recalled, or a unipolar structure, with well-elaborated material supporting one side and nothing on the other side.
The value-expressive function of attitudes describes the importance of people demonstrating and maintaining long-term standards and orientations. This is similar to value-relevant involvement. People who are low in self-monitoring do not regulate themselves in their social situation but rely on their own feelings instead. This means that their own attitudes become more important to them than the norms of the situation. These people tend to be more motivated towards value-expression.
Attitudes allow for us to get along with others, serving a social-adjustive function. This function resembles impression-relevant involvement; people’s need for attitudes that promote a positive public image, affiliation, and social approval. People with a high need for affiliation, sensitivity to approval, and awareness of others tend to base their attitudes on social-adjustive grounds.
Attitude is considered a hypothetical mediating variable. Attitudes categorize a stimulus along evaluative dimensions. This is good/bad. Attitude research now incorporates many elements of social cognition research.
How do we process attitudes? The chain of cognitive processes suggests that there are many necessary conditions a persuasive communication must meet before it can influence behavior; these conditions form a sequence of steps.
The heuristic-systematic model proposes that people engage in considerate, thoughtful processes when they are sufficiently motivated to do so, and when they are not overwhelmed with confounding information. Motivated people will engage in systematic processing whereby they weigh the pros and cons of an argument. People also engage in rapid, heuristic processing whereby they base attitudes and judgments on easy rules of thumb. These rules are often accurate enough to work with.
According to the elaboration likelihood model (ELM), the central route involves elaborative thinking (similar to the systematic processing of the previous model). The peripheral route to persuasion includes attitude change that occurs outside of elaboration. Elaboration involves making relevant associations, scrutinizing arguments, inferring value, and evaluating the overall message.
Cognitive response analysis involves examining cognitions as a message is received. If the cognitive response is favorable, persuasion may occur. Cognitive mediation means that some stimulus causes a cognitive effect that in turn causes an overt response. The stimulus might be the communicator’s credibility, the message, other listeners, the context in which the message is received, etc. Elaboration is the act of cognitive mediation that leads to attitude change.
As mentioned before, the expertise and attractiveness of the communicator can have a strong impact on the outcome of persuasion. Communicator credibility effects are strongest for people who are not affected by the outcomes of the attitude or decision. Lack of involvement means that we deal with things on a more superficial level and are easily swayed by superficial qualifications. When outcome involvement is low, the expertise of the source serves as a peripheral cue to persuasion, bypassing the need to process message arguments. These effects also emerge when the message sources are attractive or famous (think of commercials with attractive models or famous actors).
The message also influences persuasion. The more often a message is repeated (nonlinguistically in the form of a logo, linguistically in the form of a persuasive message), the more familiar it becomes, and repeated exposure to a stimulus is known to increase liking. However, if the stimulus is initially judged as negative, the mere exposure effect can reverse. Mere exposure is more effective when we are not thinking in an elaborative way, but merely being cognitively lazy.
Aside from repetition, a message may vary in terms of difficulty. The number of supporting arguments and counterarguments that people can generate regarding a message determines its difficulty. When encountering a difficult message, people tend to put very little effort into it unless they actively depend on the outcome. Comprehension encourages persuasion if the arguments are good – we need to be able to repeat a message to ourselves in order to elaborate upon it.
Another message effect is the number of arguments, which can act as a superficial cue for persuasion. Under low personal relevance (when we are lazy), a large number of arguments makes it seem to us as though the communicator must know what they’re talking about. Under high relevance, we look more at the quality of the arguments than at their quantity.
Audience involvement is important to look at when discussing the ELM. Different types of involvement have different effects. The personal importance of a message greatly impacts its persuasive ability. There are a number of types of involvement, including:
Ego involvement: our ego is at stake, the message impacts our self-concept.
Issue involvement: the issue being dealt with in the message is one that has meaning to us.
Personal involvement: the message is personally relevant due to elements of our identity or our social connections.
Vested interest: the message has intrinsic importance to our belief systems.
Task involvement: the message has consequences for our goal achievement.
Response involvement: the message brings us closer to maximizing rewards.
Value involvement: the message implicates our enduring values and principles.
Impression involvement: the message implicates our concern for other people’s opinions of us.
The effects of these different types of outcome involvement are that they cause us to break away from automatic processing and begin to consciously consider the arguments being put towards us. We are more likely to counter-argue weak and counter-attitudinal communications when we are outcome-involved.
The need for cognition refers to the urge people have to think about external stimuli like persuasive messages. If we are low in this, we use more heuristic processes. High need for cognition means that we are more likely to elaborate and engage. Another individual variable is the uncertainty orientation. People who are certainty-oriented will look for ways to keep their world-view predictable and non-threatening. East-Asians are more certainty-oriented, while Western cultures tend to be more uncertainty-oriented. We also vary in our need to evaluate. Some people spend more time weighing the pros and cons of a topic than others, forming opinions rather than staying neutral.
The limitations of the ELM are that it does not explain the reasons why people will support or counter-argue. It leans towards seeing people as wanting to validate their attitudes but underestimates the motivation to feel secure. Persuasion variables are very complex, each with multiple roles, meaning that predicting their impact on attitude change is difficult from an experimental standpoint. Some have even questioned whether cognitive responses actually cause attitude change, or if they are instead merely correlated with it. The ELM and cognitive response analysis are powerful tools but mustn’t be taken for granted as the only right way to study attitudes.
The MODE model of attitude processing (motivation and opportunity as determinants) is a dual-process model that doesn’t explain persuasion but focuses instead on how attitudes operate and how they are activated. It is an attitude accessibility model: it views an attitude as an association in memory between an object and one’s evaluation of that object. That association may vary in strength, and if weak it will take significantly more time to call to mind. Factors that contribute to the accessibility of an attitude are similar to those that contribute to the accessibility of any cognitive construct. Repetition and recency strengthen the association. People low on self-monitoring may have chronically more accessible attitudes than others.
Attitudes with high accessibility tend to have more influence on our perceptions of the attitude object. They resist contradiction and are enduring. They often ignore small variations in the attitude object, painting many unique samples with one big brush. People more consistently act on accessible attitudes. When we see an object to which we have an accessible attitude, we automatically make a strong evaluative association.
In implicit association tests (IAT), people are asked to categorize groups of words in different ways. The speed with which they can do this is a good indication of the implicit stereotypes they hold. It takes longer for white students to link the word “black” with positive traits than to link the word “white” with positive traits, indicating an implicit stereotype. Implicit attitudes measured by the IAT are predictively valid and correlate with judgments, behavior, and physiological responses. Self-reports of prejudice are correlated with explicit attitudes but are not always related to implicit attitudes. One criticism of the IAT is that it merely assesses cultural beliefs rather than personal prejudice. The IAT is also malleable, meaning that it can change with context. Diversity training and counter-stereotypic images can actually impact IAT scores.
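As a rough sketch of how such latency differences are commonly quantified (the conventional D score; the scoring details are not given in the text above):

D = \frac{M_{\text{incompatible}} - M_{\text{compatible}}}{SD_{\text{pooled}}}

where M_compatible is the mean response time in blocks pairing, say, “white” with positive words, M_incompatible is the mean response time in blocks pairing “black” with positive words, and SD_pooled is the standard deviation of latencies across both blocks; larger positive values indicate a stronger implicit association favoring the first pairing.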
Embodied expressions of attitudes are physical expressions that occur even when we are not conscious of the fact that we are evaluating stimuli. Positive embodied responses elicit approach (pull), and negative embodied responses have an avoidance (push) effect. Some kinds of body movements reinforce and reflect the valence of our attitudes. For instance, if someone is giving a strong argument, people nodding their heads will end up agreeing more. However, if the message is weak, they will react in the opposite way. The facial expressions of the perceiver can impact how they interpret received information, and how they process attitudes.
The event-related potential (ERP) shows that negative stimuli have greater immediate impact than extreme positive stimuli. Electroencephalography (EEG) data shows that people respond extremely rapidly to valenced inputs.
Implicit responses involve amygdala activation to both negative and extremely positive attitude objects. Another brain region that seems to be involved in implicit attitudes is the insula.
Every day and everywhere, casual forms of bias are present. Together with chapter 12, this chapter will make a distinction between the cognitive side of intergroup bias (stereotypes) and the affective side of intergroup bias (prejudice). Just like the basic concepts of social cognition, intergroup bias can be both automatic and controlled. First the cognitive aspects of blatant bias (intentional bias) will be addressed, followed by subtle bias (automatic bias).
Although only about 10% of the population in Western cultures holds extreme, blatant stereotypes, even this is a dangerous percentage. Realistic group conflicts, caused by subjective perceptions, seem to underlie these blatant stereotypes.
People often consider themselves and other people as members of distinct groups. This causes intergroup conflict. In this section four group identity theories and their consequences for intergroup misunderstanding will be addressed.
Whether consisting of two or twenty individuals, groups always compete more than individuals do. Tajfel’s social identity theory (SIT) states that interactions range from interpersonal to intergroup. This theory argues that people try to create a positive social identity in order to maintain their self-esteem. Social identity arises from someone’s membership of a distinctive group that he or she regards positively on subjectively important dimensions (the ingroup), versus a comparison group (the outgroup). Apart from the assumption of maintaining self-esteem, SIT also argues that social identity is defined by the individuals themselves, the society they live in, and the current context.
SIT thus emphasizes the cognitive process of categorization into groups. As research on SIT developed, the self-esteem hypothesis became less popular. Even though discriminating does elevate a temporary evaluative self-assessment (state self-esteem), it does not alter someone’s long-term view of oneself (trait self-esteem). In addition, low self-esteem doesn’t serve as a direct motivation for discrimination.
SIT was extended by Turner’s self-categorization theory (SCT). Turner rejected the self-esteem predictions. Instead, he argued that people who identify with a particular group will behave more like other ingroup members. This means that the self is not fixed, but depends on the salient intergroup context. For example, when I’m at work I think of myself as an employee, while at the doctor’s I consider myself a patient. So the operative group categorization depends on fit to the context. Comparative fit creates a meta-contrast ratio by comparing between-group and within-group differences. The best observed comparison determines the relevant categorization. Apart from comparative fit, self-categorization also depends on a socially shared meaning that defines the two categories (normative fit). In this case the relevant categorization isn’t determined by the best observed comparison, but by consensus about characteristic group differences. Self-categorization thus underscores psychological group membership.
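The meta-contrast ratio mentioned here can be sketched as a simple fraction (the notation is illustrative):

MCR = \frac{\text{mean perceived difference between members of different categories}}{\text{mean perceived difference between members of the same category}}

The larger this ratio, the better the comparative fit of the categorization in that context: the grouping becomes salient when between-group differences clearly outweigh within-group differences.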
Brewer agrees with the view of SCT. His optimal distinctiveness theory (ODT) states that people balance individual autonomy and distinctiveness against a sense of belonging to the ‘right’ group, so they can create a self-affirming and satisfying identity.
With his subjective uncertainty reduction theory, Hogg argues that ingroup norms reduce anxiety. By adopting the values and norms of the ingroup, people equate to the group prototype and depersonalize themselves. This creates security and certainty.
When group membership is the only information people have, they favor the ingroup. Ingroup favoritism exploits the relative advantage of the ingroup over the outgroup, even at the expense of one’s own and one’s group’s absolute outcomes. It occurs mostly on dimensions on which the ingroup is favored. It also increases during conflict, social breakdown, and heightened ingroup importance. The security that conforming to the ingroup brings possibly contributes to perceived group homogeneity.
Group homogeneity is the reduction in perceived variability within groups, caused by categorization. Besides stereotyping, it includes perceived dispersion and similarity. When groups are real but abstract and unfamiliar, there is a greater chance of outgroup homogeneity. Because people believe that other people have more biases against outgroups than they do themselves, they also expect that intergroup perceptions are biased. Each side believes the other side to be homogeneous. When intergroup distinctions are most noticeable, both ingroups and outgroups are seen as more homogeneous.
Categorization theories show us that categorization is sufficient to produce overt bias. Ingroup favoritism is largely overt and conscious. Nevertheless, intergroup bias has important underlying cognitive factors. Threat and conflict increase category-based bias, and threat and conflict are in turn determined by cognitions. Next, several theories will be addressed that describe perceived economic threat and perceived value threat; two primary bases of threat.
The next few theories all explain society’s intergroup relations as reasonable, inevitable, and legitimate.
Group hierarchies develop as a result of group competition for resources. According to social dominance theory (SDT), group hierarchies are universal; the theory even argues that they are adaptive in all kinds of societies and eras. Societies, groups, and individuals all differ in their endorsement of social dominance. Legitimizing myths (complex cognitions, such as stereotypes) support the status quo. Individual differences in social dominance orientation (SDO) correlate with many blatant biases. Dominance and intergroup contexts interact, reinforcing the whole system in several ways. For example, high SDO is a predictor of ingroup favoritism.
Controversy over SDT is centered on the idea of inevitability. For example, men tend to score higher on SDO than women, a difference SDT hypothesizes to be universal. An SIT explanation argues that these SDO differences are caused by individual gender identification. A group socialization model states that dominant positions, such as in politics, motivate SDO attitudes; in turn, these increase prejudice. SDT agrees that chronic differences in power and status cause intergroup orientations between socially constructed groups. SDO highlights the influence of a person’s belief in a hierarchy that often advantages their own group’s position. We live in a world where we believe we have to take care of ourselves, and wherein stronger groups inevitably must dominate weaker groups. Social dominance beliefs are motivated by competitive threat.
Authoritarian beliefs, especially right-wing authoritarianism (RWA), reflect another form of perceived threat. People who score high on RWA conform to traditional values, listen to and obey powerful leaders, exhibit authoritarian aggression, and often hold prejudices toward outgroups. RWA is characterized by the combination of intense ingroup identification and perceived value threat. It is correlated with punitive parenting, social conformity, and regarding the world as a dangerous place. People with high levels of both RWA and SDO are among the most prejudiced people in society.
Mortality salience causes people to cherish worldviews that will still be there when they die. The way people deal with the fear of dying is addressed by terror management theory (TMT). According to this theory, people identify with their ingroups, which will outlive them, because they want to transcend death. When the self is under threat, people want to preserve what is familiar to them. This negatively influences their reactions to outgroups.
System justification theory (SJT) also posits ideology as soothing negative reactions, such as anxiety. It differs from the other theories, in that it argues that not only the advantaged, but also the disadvantaged seek to legitimate the status quo.
The most reassuring cognitive group representation is the one that makes the category seem objectively real. Once groups are categorized, they seem to possess the property of being a real entity (entitativity). This often polarizes intergroup relations, thereby strengthening belief systems.
When groups are considered an entity, they are often given a second property, namely an essence. This includes a foundational core, such as shared genes. Perceived essence builds upon interpretation of biology, a natural science that is viewed by people as fixed. So people often think of group categories as real biological phenomena instead of changing social constructions. Essentialism reinforces stereotypes.
Multiculturalism supports the view that groups differ from each other in significant ways, and that organizations should value these essential differences. In its extreme form, multiculturalism implies biological essence. Moderate multiculturalism, on the other hand, implies only chronic societal differences. A contrasting colorblind view states that everyone is the same and should be treated that way.
Even though being human is the ultimate biological essence, people assign human essence more to their own group than to other groups. When it comes to emotions, we ascribe primary emotions to both our own groups and to others. Secondary emotions, on the other hand, are reserved for the ingroup. Not surprisingly, this infrahuman perception allows people to sympathize less with infrahumanized groups.
Dehumanization takes on two forms:
Animalistic dehumanization: denying others uniquely human qualities such as culture, morality, logic, maturity, and refinement.
Mechanistic dehumanization: denying others typical human nature, such as warmth, agency, and depth.
As a result of dramatic changes in norms about acceptable beliefs in the late 20th century, subtle, more automatic, ambiguous, and ambivalent modern forms of stereotypes emerged.
There are relatively automatic cognitive processes at work that make stereotypes readily accessible. These processes will be addressed in this section.
People quickly identify others based on race, age, and gender by placing them in different categories. This often causes category confusion, where they tend to confuse other people who fall into the same category. This is demonstrated by the who-said-what experimental paradigm.
Racial priming experiments were among the first priming experiments to identify truly automatic bias. These experiments revealed that whites identify positive traits faster when they are primed with ‘whites’ than with ‘blacks’.
Despite the automatic bias, most people have good intentions and reject their own racist beliefs (aversive racism). They only express their ingroup-favoring associations when they have seemingly nonracist reasons, for example in case of ambiguous information. Unfortunately life itself is ambiguous. A possible solution for aversive racism is to use the us-them effect and to expand the ‘us’ to the previous ‘them’. This common ingroup identity model increases perspective-taking, awareness of injustice, shared interaction, and common fate.
The indirect priming technique and aversive racism measures both assess the speed-up in people’s responses when a prime and a stimulus match in evaluation. However, the tasks differ in the stimulus that follows the prime, as well as in the required response. In aversive racism tasks people have to make a lexical decision, as the strings that follow the prime are either race-related words or non-existing words, so participants have to choose between word and nonword. In indirect priming tasks the initial racial prime precedes a word that is unrelated to race, and participants have to choose between good and bad. In both tasks, participants often show speeded responses to negative or stereotypic terms following an outgroup prime. This provides an indirect index of racial attitudes. However, these relatively automatic reactions can certainly be diminished: people just have to be motivated to avoid prejudice.
The basic implicit association test (IAT) underscores associations between the ingroup and positive attitude objects, as well as between the outgroup and negative attitude objects. The IAT reveals all kinds of race and gender stereotypes, involving activities, objects, professions, and roles.
Category activation, category application, and later judgment all depend on cognitive load, and at each stage cognitive load has a different effect. Category activation is conditionally automatic (it varies with load, task, and context) and emerges mostly after seeing a face. People attend to cues, such as gender and age, that are relevant to multiple alternative categories. They also tend to activate the more accessible categories.
Once the category is activated, an interpretation is given. At this stage, stereotype-consistent meanings absorb very easily, which means that they require less cognitive capacity. So when resources are scarce, people selectively assign their attention to stereotype-inconsistent information, trying to assimilate or explain it away. When people apply stereotypes, they first prioritize coherent impressions. So they first work on the inconsistencies. Later on they recall the inconsistent information that needed cognitive work to assimilate. Still later, when having to make judgments, people rely on their stored stereotypes. The use of stereotypes leaves more mental capacity for other tasks.
As said before, when people are motivated and have sufficient capacity and information, the impact of stereotyping can be averted. Practice can help reduce automatic stereotypic associations. However, if the goal is merely to suppress stereotypes, without adding alternative information, this can have the opposite effect. People merely trying to avoid stereotypes may experience a rebound and will later show redoubled stereotypic associations. This rebound effect can possibly be explained by the load that controlling stereotypes places on executive control. When people are concerned about appearing prejudiced, for example in interracial interactions, they inhibit their behavior. However, this inhibition brings up negative feelings about the interaction, which in turn depletes executive control capacity.
So the first ramification of subtle stereotyping, namely its automaticity, rests on basic categorization and rapid association processes; much depends on interpretation. These stereotypic interpretations aren’t always preconscious or limited to race. Nor do people interpret only the content of ambiguous information, but also its causal meaning. This means that when the ingroup does something positive, it is attributed to an inherent, dispositional goodness; when the outgroup does something positive, it is attributed to coincidence or the situation. The opposite holds for negative actions. This subtle and ambiguous bias is called the ultimate attribution error (UAE). The UAE depends on interpretation of the underlying causes of group behavior. Trait attributions stabilize the negative outgroup and positive ingroup stereotypes. Situational attributions, on the other hand, reduce this stability. Whereas entity theorists favor such fixed, entity attributions for negative stereotypes, incremental theorists favor a more malleable view.
Because of shifting standards, the dominant group isn’t always favored by subjective judgments of ambiguous information. Stereotyping thrives on the ambiguity of the information that is given, meaning that the influence of the stereotype itself is implicit and unexamined. People try to hide their stereotypes from themselves and from those around them.
Stereotyping can also be ambiguous if the situation creates ambiguity.
Apart from being automatic and ambiguous, stereotypes can also be ambivalent. The stereotype content model (SCM) argues that when people encounter a group they are unfamiliar with, they immediately have to answer two questions: (1) Friend or foe? (are the intentions of the group good or bad?), and (2) Able or unable? (can the group enact its intentions or not?). The SCM describes two ambivalent and two unambivalent combinations:
warm and competent;
neither warm nor competent;
warm and incompetent;
not warm and competent.
This warmth-by-competence space can be applied to both individuals and groups. Two similar stereotypic dimensions are described by related frameworks: competence or agency as self-profitable, and morality/sociability or communality as other-profitable. People care most about morality.
A third approach is the enemy images theory, which describes a framework that includes the competence and warmth dimension, along with power.
Stereotyping can best be studied in interactions between people. In this section stereotypic expectancies and their effects are addressed.
Attributional ambiguity reflects the uncertainty about whether negative reactions are aimed at you personally or at the group to which you belong. When intergroup interactions get out of hand, targets can cope by blaming the other person’s prejudice instead of their own behavior. In this way they protect their self-esteem. However, targets do not readily blame prejudice for negative outcomes, because of the social costs (people who attribute negative reactions to discrimination risk being labeled as complainers and troublemakers, or even worse – think of the Zwarte Piet discussion) and the personal costs (attributions to discrimination undermine the target’s sense of control).
Exaggerated, heightened alertness in interacting with outgroup members (stigma consciousness) can set up a negativity feedback loop: when one expects prejudice, this leads to negativity, which in turn provokes the expected negative experiences.
Stereotype threat is driven by other people’s expectations about one’s success and failure. When performance seems diagnostic of one’s ability in a relevant domain, the stereotype is more threatening than the normal threat associated with high-pressure performance. Consider, for example, the stereotype ‘girls are bad at maths’. If a girl fails a math test, not only will she experience the personal humiliation of failure, she will also experience the shame of confirming a stereotype about the ingroup’s intrinsic abilities, in this case that girls are bad at maths. Stereotype threat only takes place if a person’s relevant category is salient, the domain is relevant, the test is allegedly diagnostic, and the person cares about it.
A way to cope with stereotype threat is by disengaging from the domain (disidentification). This explains why some group members give up on domains in which other people around them don’t expect them to do well. Of course, this isn’t the best coping strategy. Luckily there are other ways to avoid the negative consequences of stereotype threat, such as realizing that intelligence is context-dependent and improvable rather than fixed. Other antidotes are individuating oneself, affirming a distinct valued attribute, retraining, or bifurcating one’s identity. Another remedy that improves performance is learning about stereotype threat and attributing one’s anxiety to stereotypes. These are all personal remedies. Examples of structural remedies are not asking for demographic information before a test and creating identity safety.
Well-being and private aspects of collective self-esteem – someone’s personal, private regard for their group and someone’s own feelings of worth as a group member – are closely linked. Trying to validate one’s self-esteem by getting approval from others can create a number of problems. Private regard for your own identity appears to be more adaptive than relying on public regard. This is the case for various identity domains.
Identity shapes perceived discrimination; that is, some minority individuals perceive discrimination faster than others. Perceiving discrimination is a process of several steps, known as ask-answer-announce. First, the target has to think of the possibility of discrimination (ask: “is this a case of discrimination?”). Next, the target has to judge whether a certain behavior is indeed discriminatory (answer). This depends on situational factors, the role of affect, need for control, and protecting self-esteem. Finally, the target has to decide whether or not to announce that discrimination has occurred. Group members with a lower status (often minority groups) are more likely to be concerned with evaluations of the self when the outcomes depend on a high-status person.
Intergroup interactions have two sides: higher-status (usually majority) and lower-status (usually minority) groups. Groups with a higher status also often worry about the impression they make. Sometimes neither side understands the other (pluralistic ignorance), which stands in the way of communication. Dominant group members can crumble under pressure if they are low in prejudice but worried about appearing prejudiced. Conversely, when dominant group members are high in prejudice, an increased evaluative concern enables them to shine, making them appear warm and friendly.
Feelings drive behavior, but both feelings and behavior interact with cognition, as intergroup relations illustrate perfectly. This chapter discusses the emotional prejudices that result from cognitive biases and how they interact with them. Emotional prejudice covers qualitatively distinct emotions, such as fear and envy. These differentiated emotions matter both in a practical way, since they are aimed at a specific group and motivate specific behavior, and in a theoretical way, since they not only describe but also predict and explain the world.
In the first section several theories of intergroup cognition and emotion will be addressed. The other sections address the idea that distinct emotional prejudices underpin reactions to different outgroups.
Some emotions, such as anger and happiness, make people stereotype more, while other emotions, such as sadness, make people stereotype less. Groups also usually evoke affect as an integral function of who the individuals in that group seem to be and the situation in which they appear. According to several theories, different groups trigger different affective configurations as an integral feature of the encounter.
Sociality and capability are two fundamental dimensions that describe social perception. The twin dimensions that differentiate social groups are warmth and competence. The stereotype content model (SCM) argues that stereotypes on these two dimensions result from structural relations between groups. Perceived competition predicts warmth stereotypes, and perceived status predicts competence stereotypes. If the group’s position in society changes, so do the stereotypes associated with that group.
The SCM stereotypes correlate with intergroup emotions and behavior. The BIAS map, which describes stereotype-based clusters of emotions that directly predict intergroup behavior, extends the SCM into discriminating actions. Both the SCM and the BIAS map turn crucially on intergroup emotions. These emotional prejudices result from cognitions about structural characteristics of society (for example, competition and hierarchy). The perceived social structure and the expected emotions and behaviors are linked by stereotypes. These models, which underscore the role of social structures, fit well with the next theory, intergroup emotions theory.
Intergroup emotions theory (IET) argues not only that people’s sense of self extends to their group membership, but also that people include the group in their self-representation. This means that people respond more quickly and accurately to traits that fit both their self-concept and their ingroup concept, compared to outgroup matches and mismatches. According to IET, the emotional reactions of ingroup members are also similar.
Appraisal theories of emotions state that people evaluate stimuli in the first place as things that are either good or bad for themselves. This results in primitive positive-negative reactions. Next, when people start to analyze the situation for causes and certainty, more complex emotions come into play.
Prejudice is conceptualized by IET as a specific intergroup emotion that is the result of a specific appraisal (stereotype), and that creates certain emotional action tendencies (discrimination). It focuses more on specific intergroup experiences rather than general societal dimensions. In doing so, it corresponds with an exemplar-based approach to intergroup representations. According to this approach, people see each other as representing multiple group memberships, encountered in specific situations.
Enemy images theory argues that the perceived international context and perceived behavioral intentions can result in five national images. These images account for and justify behavioral orientations toward the other national group. The evaluation of the other nation combines perceived goal compatibility (compatible versus incompatible), perceived status, and perceived power. In the first two images, each side views the other symmetrically. In the other three images, each side views the other asymmetrically:
Ally (goal-compatible, equal);
Enemy (goal-incompatible, equal);
Dependent (perceiver goal-independent, lower status, and lower power);
Imperialist (other goal-independent, higher status, and higher power);
Barbarian (goal-incompatible, lower status, higher power).
According to this theory, emotions have two roles. First, under the right provoking conditions, high arousal cues arousal-related images (e.g., barbarian). Second, certain relationship patterns encourage particular intergroup emotions. For example, enemies may arouse anger, which facilitates containment.
According to the biocultural approach, discrete intergroup emotions result from discrete intergroup relations. In turn, these intergroup emotions predict discrete intergroup behaviors. This approach underscores how human interdependence, effective group functioning, and adaptation shape the benefits and threats of group life. Threats to the integrity of the group predict specific emotions and motivations; for example, danger elicits fear and a motivation to protect. Consistent with these predictions, different outgroups evoke qualitatively distinct profiles of threat and emotion, and responses thus differentiate among various ethnic and social groups.
Even though the integrated threat theory (ITT) incorporates many intergroup variables, it focuses on one major emotion to predict attitudes: anxiety. Threats mediate between antecedents – such as intergroup relations, individual differences, cultural dimensions, and the immediate situation – and attitudes. Threats can take two forms: realistic (perceived tangible harms) and symbolic (abstract harms). The threats serve as a cognitive appraisal, the stereotypes as a cognitive response, and anxiety as an emotional response.
There are four main claims of ITT. First, it states that anxiety (a negative emotion in response to uncertain threats) facilitates stereotyping. Second, ITT applies to an astonishing range of intergroup settings and attitudes toward various groups (for example, AIDS patients). Third, the theory poses a simple causal chain: antecedents → threats → prejudiced attitudes. And finally, ITT suggests that both cognitive and emotional empathy can help to overcome anxiety and perceived intergroup threats.
Feelings of guilt and remorse depend on the level of prejudice. People who aren’t very prejudiced have high, internalized standards regarding their own interracial behavior. When they violate these standards, they feel conflicted and guilty. Highly prejudiced people have lower, more externalized standards; when they violate these standards, they experience anger. Awareness of the discrepancy between standards and behavior causes people to inhibit the behavior and also triggers guilt. In time, discrepancy-related stimuli and responses become associated with guilt, and they start to serve as cues for control. So guilt is useful in both high- and low-prejudiced people.
Besides the internal standards, there are also external standards, such as activated social norms against prejudice that can lower prejudiced responses by inducing guilt.
Most studies of racial prejudice focus on White racial prejudice against Blacks. Racial prejudices are unusual in several ways: (1) they are emotionally loaded; (2) they are aversive to most people who hold them; (3) they aren’t plausibly evolved; in fact, social construction appears to play a big role; (4) racial groups remain hypersegregated, which fosters the current divide in ordinary intergroup interactions.
Guilt in general is a moral emotion oriented toward others. It shows that someone is concerned that his or her behavior has harmed someone else. With respect to racism, the guilt Whites experience reflects the belief that their group has harmed another group. Most of the emotionally loaded responses Whites experience regarding racism could possibly be better described as shame rather than guilt. This shame, together with the fact that Whites are very well acquainted with negative cultural stereotypes about Blacks, puts Whites on their guard, which contributes to the emotionally loaded interaction between Whites and Blacks.
Worse still, racial stereotypes and the emotional prejudices that come along with them create life-and-death consequences for Blacks. Think of the unarmed 19-year-old Black teenager who was killed by a police officer in Wisconsin in 2015; this was not the first incident in which an unarmed Black person was shot by the police. As we have learned from previous chapters, these cultural associations are automatic. Automatic interracial responses often reflect emotionally loaded cultural associations, even if people don’t endorse these associations. Devine’s dissociation model and the implicit association test have taught us that we all have automatic stereotypic thoughts, and that although the motivation to avoid prejudice can help to influence these thoughts, they are still very difficult to control. Most evaluations of other people take place in less than 100 msec.
Neuropsychological evidence shows us that amygdala responses correlate not only with negative implicit associations (IAT), but also with indicators of alertness and arousal, particularly in Whites responding to Blacks. Neural indicators thus endow racial cues with immediate emotional significance.
Physiological studies provide evidence of the immediate affective loading of interracial encounters, especially for Whites. Facial muscle activity indicates subtle racial bias, as do the mental costs of exerting executive control over prejudiced responses.
Whites are often very cautious, because being called a racist is a potent interpersonal threat to most. Most racism concerns internal control, external control, and avoidance. Aversive racism refers to the good intentions most people have with regard to race, as well as their rejection of their own potentially racist beliefs. The denial of the presence of racism causes interracial interactions to be aversive, so people avoid them. When they do interact with Blacks, Whites often display uncomfortable nonverbal behavior. Blacks, in turn, may approach interracial encounters concerned about being treated in a racist manner.
Beliefs about biology play a large role in racism: people often greatly exaggerate the biology of racial differences, while in fact genetic markers for race do not support the common-sense biological view of race.
People divide others into race categories, based on a configuration of socially defined cues. However, these categories are created by people and are in no way natural kinds (like species).
The second aspect of race and biology is evolutionary. People often assume that racial prejudice is a hard-wired response to alien social groups. However, placing people in racial categories doesn’t fit a plausible evolutionary explanation. So maybe race detection is a by-product of essentialist encoding of natural kinds, or of sensitivity to distinct social groups and their coalitions. Altogether, biological and evolutionary explanations for race don’t seem to hold. Social construction, on the other hand, does seem to play a huge role in racial categorization. Much of the evidence supporting this claim comes from facial recognition research, which shows that social contexts, and not fixed features of faces, shape racial categorization. Evidence for social construction is also reflected in changing standards for judging race, as well as in biases that correlate with the typicality of racial features. Racial judgments operate by various socially constructed routes. Racial categorization can operate through:
Label; for example, “Black” or “White”.
A single feature; for example, skin tone.
Individual race-related features associated with stereotypic traits or evaluations; for example, Black defendants who have more Afrocentric facial features are more likely to be sentenced to death than those without such features.
These routes interact with conceptual information that helps set the social context. In short, there is a clear lack of biological evidence for racial differences and of evolutionary explanations for racial perception. In fact, studies show that social-cognitive construction plays a great role in racial categorization. Still, people keep clinging to the idea of races as natural kinds.
Besides the emotional load, the aversive quality, and the (denied) social construction, racial prejudice is also segregated in modern society. Segregation has many implications for social cognition and racial prejudice, with limited interracial contact as one of the primary effects. Equal-status intergroup contact seems to reliably reduce prejudice, and the more the settings in which the contact takes place fit the optimal conditions, the more effectively it reduces prejudice. Also, these effects of contact on prejudice mainly apply to White Americans’ prejudice toward Black Americans, rather than vice versa.
Men and women have plenty of contact, and men and women are interdependent. But at the same time, male status is evident in every culture; in society at large, men dominate women. When a woman does take a leadership role in a traditionally male domain, this causes gender-role tension, which is expressed both cognitively and affectively. This incongruity between the roles expected of leaders and the roles expected of women suggests two things: (1) it is more difficult for women to become leaders, because they will be evaluated more negatively as potential leaders, and (2) when women are leaders, their leader-like behavior will be evaluated more negatively than the same behavior performed by a man. This role congruity theory is supported by a series of meta-analyses. People’s expectations about gender partially reflect the statistical means (more men in charge), but these expectations don’t take into account the variability surrounding those means. The incongruence between gender expectations and job roles occurs beyond leadership: our gender stereotypes are guided by the differential distribution of women and men into homemaker and employee roles.
Men and women are expected to conform to gender stereotypes; women must moderate their agentic (‘masculine’) side with communal (‘feminine’) warmth. The stereotype of the typical woman is described as superstitious, sentimental, and emotional. The warmth aspect is a reflection of the prescriptive ideal. The sentimental aspect, on the other hand, is nothing but a descriptive stereotype. According to research, female gender-deviants risk their heterosexual interdependence.
The descriptive stereotype of men – adventurous, independent, strong, and active – also fits the prescriptive ideal. Together with male societal dominance, wherein women are considered as depending on men, these stereotypes reinforce heterosexual interdependence.
Along with male status, intimate interdependence creates ambivalent sexism. This cognitive belief system states that anti-female prejudices include not only hostile sexism (HS) but also subjectively benevolent sexism (BS). HS resents nontraditional women, who are seen as competing with and trying to control men, and who resist conventional roles. BS describes a subjectively positive, but rather controlling and paternalistic attitude toward women; it cherishes them only if they stay in traditional roles.
In this section, both biological-evolutionary and cultural explanations are discussed for the interplay of gender beliefs and feelings. Just like other people, researchers are often busy contrasting the sexes. Some of them favor explaining gender differences in evolutionary terms, while others favor explaining such differences in sociocultural terms. An example from an evolutionary perspective is the parental investment model, which argues that women have always had to invest more in reproduction than men. According to this model, that is why men tend to be promiscuous and women choosy.
Social role theory is an example from a sociocultural perspective. It starts from the division of labor between men and women. This division guides both gender-role expectations and sex-typed skills and beliefs, and these two factors in turn guide sex differences in behavior.
A biosocial approach acknowledges both biological differences (in average size and in parental investment) and divisions of labor. According to this approach, most of the variance can be explained by social forces. It also highlights the large joint contributions of men and women to child rearing and earning.
In this section the prejudice against older adults is discussed. Pity and sympathy are the main prejudices addressed to older adults.
Unlike their gender and race, which people accept, people resist an identity as older. Since older adults are often considered socially, cognitively, and physically incompetent, people don’t want to see or think of themselves as ‘old’. People treat ‘old age’ as a moving target, and they keep adjusting the definition of ‘old’ as they approach its boundary.
Since aging stereotypes are uniquely related to death, older people need buffers. Terror management theory (TMT) addresses the way people cope with the dread of death when they think of it. TMT argues that people are biologically driven toward self-preservation. People manage the threat of death at both a cultural level – by developing worldviews that provide meaning and purpose – and an individual level – through self-esteem.
Prejudice against gay men, lesbians, and bisexuals differs from other prejudices in at least three primary respects:
Sexual orientation is not as visible as race, gender, and age; indeed, it is often hidden from the outside world.
Prejudice based on sexual orientation is among the most widespread prejudices.
Belief that homosexuality is biologically determined tends to correlate with tolerance (whereas with race, gender, and age, the belief that biology is destiny tends to correlate with prejudice).
Heterosexism creates controversy, since not everyone thinks it is a problem. Still, antigay attitudes are among the most negative prejudices. Gay men suffer more from prejudice than lesbians do, but both are often targets of hate crimes, which result in more depression, anger, anxiety, and stress than comparable crimes cause other victims. And just like other outgroups, gay men and lesbians experience daily hassles with prejudice, which can undermine their mental and physical health.
There are many words thrown around in psychology related to emotions; the sections below work through several of the important views and distinctions.
There are a number of ways to characterize affect, one of which is in terms of a bipolar (positive-negative) dichotomy, crossed with degree of arousal. We tend not to be able to feel two emotions at the same time that sit on opposite sides of the same dimension. It’s hard to feel happy and grouchy at the same time. When people are asked to rate their emotion over time, the results are somewhat different. Positive and negative affect are rated independently, making the model look more bivalent in structure. Positive emotions tend to be limited but prevalent, while negative information may be especially attention-grabbing. People tend to expect and experience positive outcomes, and tend to have a slightly positive baseline. This is called the positivity offset (aka Pollyanna effect / positivity bias). We pay more attention to negative outcomes and put in more resources to cope with those threats. Negative emotions are more complex and diverse.
The prototype view of categories determines category membership as a matter of degree. The basic emotions form the “prototype”, yet more complex emotions can be manifestations of these prototypical emotions. A typical emotional script begins with the appraisal of an event, followed by the eliciting of the emotion and its expression in physical and emotional states. People have schemas for experiencing the basic emotions, and these are similar across cultures. We also have standard ways of assessing the emotions of other people. The social constructivist view of emotions suggests that emotions are transitory social roles, variations on a social theme. People construct emotions from a meta-experience, combining their sensations, cognitions, behavior, and appraisals into an “emotion”.
The James-Lange view suggested that autonomic feedback constitutes emotion. This downplayed the role of cognition, and was later discarded by Walter Cannon’s argument that visceral sensations are too variable to be considered the sole basis for emotion.
According to the facial feedback hypothesis, emotional events trigger automatic configurations of muscle movements and sensations from which we extrapolate our emotion. Evidence for this theory is limited to pleasant versus unpleasant experiences. Facial expressions have been found to directly affect reported mood and evaluations. Effects are, however, controversial and relatively minor.
Arousal is the activation of the sympathetic nervous system. It has automatic as well as learned origins. It is non-specific and slow to decay. People have a hard time pinning down the source of their arousal and spend some effort to cognitively interpret their arousal. Arousal in one situation can bleed over into the following situation, in a phenomenon known as excitation transfer. For instance, fear can be mistaken for sexual arousal, disgust can enhance humor, etc. Physical arousal can intensify anger and sexual attraction.
Early affective neuroscience used EEG to measure the timing and location of affect responses. Recent neuroscience has relied more on neuroimaging, which is better at measuring the specific brain locations activated during emotional responses but worse at capturing the time course. The amygdala is heavily implicated in emotion, especially of the fearful or intense variety. The insula is also involved, particularly in disgust. Other types of emotion have been more difficult to localize, suggesting that neuroimaging may not be the best approach to studying emotion in the brain.
A combination of physiological and neural response systems form our emotions, yet beyond the physical is the social element of emotional arousal. There are many theories that suggest that emotion is a cognitive structure.
The arousal-plus-mind theory sees emotion as a combination of physical arousal and the cognitive processes involved in evaluating that arousal. The physical provides the feel of the experience, while the cognition provides the quality of the emotion. This theory locates the origin of arousal in interruption – we are aroused when a perceptual or cognitive discrepancy occurs, or when ongoing action is interrupted. This might be a new musical pattern, a joke, an unexpected twist in a story, etc. The bigger the goal being interrupted and the bigger the interference, the stronger the emotional response we feel. Arousal initiates cognitive interpretation, which determines whether what we feel is positive or negative.
The effects of interruptions of perceptual-cognitive schemas are often more subtle than those of interruptions of goal-directed action sequences, with the degree of arousal determining the valence of the response. Disruptions range from the familiar to the discordant. Familiarity is pleasant but not intense, while some novelty can be good and add some intensity to the familiar. If novelty can be interpreted in familiar terms, that is even better, but complete incongruity leads to negative affect if it cannot be successfully assimilated.
Mandler’s theory of emotion in relationships suggests that the more intimate the relationship, the more one’s goals depend on the other person. Longer and stronger interdependence means a higher degree of closeness. When goals are meshed, people can more seriously interrupt each other and therefore cause stronger emotions. These can be interruptions of unexpected helping (leading to joy) or unexpected hindrance (causing frustrations). This means that when an intimate relationship is running smoothly, emotion isn’t all that high.
Power asymmetries can also shed some light on affect. People with high power have more control over resources and thus expect both rewards and freedom. People with little power experience a world in which they have very little control over resources; they expect fewer positive outcomes and fear more negative ones. These experiences and expectancies influence emotions. When one has a lot of power, one experiences a rewarding release of dopamine, which translates into pride and desire. As in mania, behavior is extraverted and more impulsive, and focus is more flexible. Low-power people experience more inhibited emotions and behaviors, with releases of norepinephrine and cortisol that drive stress responses. Specific emotions include fear, shame, and guilt, but also awe. This can become anxiety and is characterized by vigilance.
As mentioned before, a schema is our complete knowledge about a certain type of situation. Some people and situations inspire emotion simply because they are part of an affect-laden schema. This schema-triggered affect can be categorized into category-based responses and attribute-based responses. We encode affective tags into our schemas, and these are recalled when we encounter the particular situation or type of person again. This can go so far as causing us to link a certain emotion with a certain stereotype. Affective transference also occurs when we meet someone who is similar to a significant other. This transference can be subliminal.
The complexity-extremity hypothesis looks at the affective consequences of complexity. For instance, a member of an outgroup is represented with low complexity because we tend to have less information about outgroup members. We evaluate things with low complexity more extremely. People with simple self-concepts are more prone to mood swings, for instance.
The thought-polarization hypothesis suggests that our feelings are polarized when we have a schema for thinking about someone. The more attributions we make, the more we add to this schema. Lower knowledge then means a less extreme reaction. These two theories seem opposite, yet an important point is that in organizing a concept into a schema, thought essentially simplifies the object. According to the complexity-extremity hypothesis, that means thinking about something should polarize our emotions about it.
How do we make sense of our successes and failures? Weiner’s attributional theory describes some basic dimensions upon which we make our evaluation. These dimensions inspire particular emotions:
Internal vs. External locus of control: if positive, internal gives us pride while external inspires gratitude. If negative, internal gives us guilt while external makes us angry.
Stability over time
Controllability: If controllable and negative, behavior is met with anger from others. If uncontrollable and negative, it is met with pity and helping behavior.
Imagining hypothetical outcomes can also inspire affect. For instance, the simulation heuristic is how we come up with post hoc inferences of probability. A situation seems inevitable when it is hard to imagine things going another way. A near miss is therefore far more frustrating than missing by a wide margin; this is one of the reasons why second place often feels worse than third. Norm theory extends these ideas to our judgments of normality. Abnormal events, being more interruptive, inspire stronger emotions. Inconsistent events are seen as unlikely in retrospect.
One perspective on emotions is that they help us manage our goals and priorities. This is based on the observation that the interruption/facilitation of planned behavior causes emotion. Emotions can also act as interruptions themselves, alerting us to other important goals. We can pursue only one goal at a time, yet our survival depends on being able to interrupt goals easily when more urgent matters are at hand. Environmental stimuli might warn us of danger, internal stimuli might warn us of physiological needs, and cognitive stimuli might trigger unmet psychological needs. Emotions shuffle our priorities. Intense emotions can interrupt well-planned ongoing activities, and emotionally charged events will continue returning in our memory and interrupting our thoughts.
According to the cybernetic theory of self-attention, self-focused people notice discrepancies between their current state and some goal or standard. To reduce this discrepancy, people try to adjust their behavior. People keep adjusting and comparing until they meet the standard or give up.
Appraisal theories suggest that when we encounter something, we quickly appraise it for personal significance, which allows us to decide whether to act.
Lazarus viewed appraisal as evaluating any given stimulus according to its significance for our well-being. This process begins with a primary appraisal, in which personal relevance, motivational relevance, and motivational congruence are determined (does it relate to my goals, is there something at stake, will it help or hurt me?). Secondary appraisal is when we consider how to cope, which can be either problem-focused coping, in which we try to solve the situation, or emotion-focused coping, in which we try to adjust our emotional reaction. The appropriate type of coping depends on our level of control over the situation.
Cognitive appraisals lead to emotion by allowing us to act upon our judgment of a situation. Our own responsibility for a given situation will impact how we feel about it – this is in line with the attribution model. We can easily differentiate negative emotions connected to a situation (a car accident is scary and sad), but positive emotions are less clear and detailed. Another appraisal framework is the stimulus evaluation check, in which we examine a situation for novelty, intrinsic pleasantness, goal significance, coping potential, and degree of control.
People often have a hard time making affective forecasts of their own upcoming emotional experiences. We might think that a new job will make us happy, but we overestimate the joy it will bring. We likewise overestimate the impact of negative events. This is partly because we fail to take into account our psychological immune system, a collection of defensive mechanisms that allows us to cope with life’s blows. We also fall prey to the durability bias and feel that negative events will affect us for longer than they really do. When it comes to emotion, we don’t learn well from experience.
How does affect influence our cognition? To introduce this section, recognize that most of this research deals with mood rather than intense emotions. Note also that the Pollyanna effect means that people have a generally positive bias, and are usually mildly optimistic. What this means is that positive and negative moods are not opposite to one another and our emotional spectrum is therefore asymmetrical.
Good moods lead us to help others, a fact that generalizes across age, class, and ethnicity. Why? Four main mechanisms are implicated:
Attention: good moods caused by a focus on one’s own good fortune can inspire one to pay it forward.
Separate process: people in a good mood will help if the request emphasizes the rewards of helping rather than a guilt-inducing obligation.
Social outlook: a good mood can be caused by an improvement in one’s social outlook.
Mood Maintenance: Cheerful people are less likely to help if it would ruin their mood, avoiding negative affect.
Cheerful people tend to be more sociable, initiating interactions and self-disclosing more, aggressing less, and so on. Positive moods cause people to be easier to get along with, less combative, and more rewarding of both others and the self. Unhappy people who see themselves as the cause of a negative event (objective self-awareness) are likely to help. Grumpy people who imagine their own reactions to a negative situation, on the other hand, will be less helpful. The negative state-relief hypothesis suggests that unhappy people help when it will make them happier.
A mood can be induced for the purpose of study, using methods ranging from hypnosis to mood-related movements to facial expression adjustment. Such methods revealed mood-congruent memory: we are more likely to remember happy things when happy and sad things when sad. We also pay more attention to stimuli that reflect our mood.
The affect infusion model (AIM) follows the affect-as-information perspective, as it suggests that affective influences can be automatic, controlled, or absent. Under heuristic processing, affect informs quick judgments. Under controlled processing, affect primes judgment and follows more traditional memory models. Our mood increases the salience and recall of stimuli that reflect our mood (positive or negative). Research has been less successful in finding effects for negative moods, probably because we are resistant to them and try to repair negative moods automatically. Negative mood congruence effects are weaker, probably because of the greater differentiation and complexity of negative moods, which makes matching the mood to the stimulus more complicated.
People high on neuroticism tend to have exaggerated negative mood effects, while extraverts show enhanced mood-memory effects. Mood acts as a moderator of personality, yet personality traits predispose someone to certain moods, which then affect cognition. Depression is one of our clearest examples of mood-congruence effects, and real-life situations tend to show more evidence of mood congruence than studies have.
There are some problems surrounding research on mood-memory. First, participants may be encouraged to respond to experimental demand, because of the within-participant designs. Second, real-life events show stronger mood-congruence effects than the items that are presented by an experimenter do.
The idea of mood state-dependent memory is that our mood facilitates recall. If we study when happy, we will remember material better if we take a test while happy. Unfortunately, evidence for this theory is weak. So why are there drug state-dependent memory effects? Because arousal-dependent memory exists.
The network model of mood and memory posits that emotion is a retrieval cue, and memories that come to mind at the same time as an emotion are linked to that emotion. This explains mood congruent memory, because certain emotions would become more accessible, with more links to connect them. Unfortunately, this theory has also produced unsatisfactory results.
When in a good mood, we rate just about everything more positively and feel more benevolent. The only exception is that cheerful people do not value criminal behavior or unattractive people/things. Negative mood effects are less extreme compared to the norm. When depressed, people do judge others on their flaws more than their strengths, and perceive themselves as being more alone than they are. Similarly, people who are in an aroused state judge ambiguous information as also more highly charged – joy in place of serenity, rage in place of annoyance. Furthermore, mood impacts our intuition, the source of “feeling” we draw upon to make instant moral judgments.
Two negative moods with strong effects are fear and anger. Fear increases paranoia, risk-focus, and pessimism; when afraid, we seek to avoid and prevent trouble. Anger, on the other hand, involves an approach orientation: angry people are risk-seeking, not avoiding. Anger facilitates prejudice and competition. If the mood of the audience matches that of the message they are hearing, they are more easily persuaded.
Specific emotions matter in specific types of judgment. Many moral judgments, for example, respond to disgust. Moral judgments rely on intuition.
Mood-incongruent stimuli may interfere with information processing, causing people to make illusory correlations. It may also stop people from elaborating. Models that see emotions as an interruption suggest that it can be disruptive regardless of congruence.
Mood influences how we make decisions. Happy people tend to make decisions more impulsively, tend to work faster, to generalize, to make more unusual connections between things, and to have less organized mental associations. They also take more risks. These are all tendencies conducive to creativity. While happy people tend to make snap decisions, they can also easily make detailed, controlled decisions when motivated to do so. They are slightly more prone to fundamental attribution errors. Sad people are more careful and considerate of their decisions.
Our memory and how positively we evaluate our world aren’t the only things influenced by our mood. Mood also influences the manner in which we make judgments. Overall, happy people are flexible decision makers. An explanation is that happy people interpret their mood as information that all is well, which is consistent with the affect-as-information approach. However, their tendency toward quick decisions may lead them astray and make them rely more readily on misinformation. This makes them easy prey for the fundamental attribution error.
Sad people seem to be more likely to be careful and to mull over their decisions.
Research done on mood and persuasion has often involved the classical conditioning paradigm, but more recently, there have been studies involving cognition. Positive moods may enhance persuasion only under conditions of low involvement and low cognitive activity. When moderately involved, positive affect may actually improve processing and the retrieval of relevant information, causing one to be persuaded only by good arguments.
Cheerful people experience more long-term well-being. Cheerfulness gives one a feeling of control and motivates one toward positive and active behavioral goals.
The separate-systems view of cognition and affect suggests that they exist as parallel pathways and do not actually influence one another as much as we have just discussed. This view sees affective reactions as primary, instinctive, and inescapable, while cognition is a secondary response, considered and easier to ignore. Affective judgments are hard to verbalize, and do not necessarily depend on thinking. What we think about something or someone does not predict how we feel about them.
Research dealing with our ability to know how we feel about something even if we don’t recognize it often concerns the mere exposure effect: liking something simply because we have encountered it frequently, and for no other logical reason. Research into the mere exposure effect has shown that affective processes have more to do with it than cognitive ones. (One counterargument is that brief exposure activates a cognitive schema, explaining the liking.) A stimulus is more easily processed after an initial exposure, causing us to feel as if we have seen it before and giving us a sense of familiarity. This mechanism, perceptual fluency, increases mere exposure effects.
Many researchers disagree with the separate systems view, including Lazarus, whose theory of emotion links affect to the appraisal of personal meaning. In this view, cognition is necessary for emotion. While appraisal is not deliberate or conscious, it is still a cognitive element of information processing. Epstein argued that preconscious cognitions precede emotions.
One can look at the issue as one of semantics – there are two different meanings for the word cognition: 1) Intelligent knowledge acquisition; 2) All mental activity, rather than physical activity. Emotions are only non-cognitive in the first sense, not the second.
Self-regulation is how people control and direct their behavior. They usually do this in pursuit of attaining goals.
When in a social situation, we assess the people around us and the circumstances for how they will impact our goals, our feelings, etc. We try to predict what will happen. The knowledge, plans, and strategies we bring to dealing with social interactions make up our social intelligence. Not only do we assess our situations, we also choose what situations we get ourselves into. When constructing goal-oriented situations, we go through motivational and volitional stages. We begin with a deliberative mindset in which we choose among alternative goals. This leads to the implementational mindset, in which we decide when and how to act to implement the course of action that will lead to our goal. We employ goal-shielding, an attentional focus that allows us to ignore deactivated alternative goals and pursue our activated goal single-mindedly. These two mindsets are marked by different cognitions, with the deliberative mindset being more pessimistic.
What causes us to persist in our pursuit of goals? If we are invested in a goal and see a certain method to attain it, the feeling of satisfaction we might get from the goal is transferred to the means of attaining the goal – we feel happy when we’re making progress towards something that will make us happy.
We are not always great at planning, being overly optimistic about the speed at which we will accomplish things. This is the planning fallacy. Initial optimistic predictions of goal accomplishment may be a motivator in themselves, making us accomplish more in a short period of time than we might otherwise do. People evaluate outcomes based on whether they match their expected value.
Some goal pursuit is conscious and deliberate, but much of our self-regulatory activity goes on subconsciously. Some goals are chronic, like the desire to be liked, and can automatically elicit behavior, like smiling. A habit is an association between a goal and an action, so that if the goal is primed, the action is automatically cued (like in the Sims). Practice in dealing with our environments automates our emotional, cognitive, and behavioral responses in many situations. Familiar situations make us mindless. In new situations, we seek out the “rules” that we should follow to “do it right”. Environmental cues can prime behavior. When we have clear goals, goal-relevant cues are seen as positive, which can help us maintain goal pursuit.
Automatic behaviors often cause us to assimilate to environmental cues, causing us to do what is expected. However, at times they cause us to act in contrast to cues. This is usually when we encounter exemplars (good examples), because it triggers social comparison.
Not surprisingly, goals with a promotion focus promise a sense of accomplishment or reward. Prevention focused goals involve a sense of responsibility, with goal attainment bringing one some sense of security. Goals can be primed specifically by thinking about a significant other – if I picture my husband, I am primed with his views on goal achievement. He thinks that one should go all out to win, and if I think of him, I am likely to take that tactic. If a significant other has high expectations of you, you might feel more able to persist and perform better.
The active maintenance of activity patterns in the prefrontal cortex represents the cognitive control of behavior that allows us to represent goals and work to achieve them. The prefrontal cortex seems to have the main job of goal regulation.
“We are what we repeatedly do”. We base our self-concept on certain prototypic behaviors that we consistently enact.
One of the problems with assessing the relationship of behaviors to cognitions is the degree of specificity you are measuring. One solution is using the multiple-act criterion: instead of examining the relationship between a general attitude and a single act, one measures a number of specific acts to get a general behavioral measure. This helps because multiple actions allow us to better deduce “typical behavior”, because at least one of the situations will have been predicted by attitudes, and because multiple actions that include at least two situations seen as similar to each other warrant similar behavior.
Another solution is to measure cognitions more specifically, asking for all the details of someone’s attitude, including exceptions and hypothetical situations. The theory of reasoned action suggests that such specific attitudes, together with norms, shape behavioral intentions, and that these intentions directly predict behavior.
In order for a cognition to predict behavior, it must be strong and clear. Accessible attitudes influence behavior, and embedded attitudes are especially strong influences on behavior. Attitudes formed from direct experience are better predictors of behavior than attitudes based on indirect experience. Vested interest (in which outcomes relate to the self) makes attitudes that are personally important more likely to predict behavior. Stable cognitions are more influential than unstable ones, and stability is predicted by the strength of attitudes. Self-schemas also shape the attitude-behavior relationship. Most of all, important attitudes that reflect fundamental values, self-interest, and identification with a valued group will both stand up against persuasion and predict our behavior.
When we analyze the reasons behind our attitudes, we may temporarily experience a change in those attitudes. We don’t always consider our attitudes, and often make them unthinkingly. When forced to justify them, their flaws are often revealed. When we form new attitudes, our immediate behavior will conform to those attitudes. However, over time, our behavior may snap back to the attitude we once held.
Consummatory behavior (behavior engaged in for its own sake) is affect-driven. Instrumental behavior (goal-directed behavior) seems cognition-driven.
Action identification is when the way that we label our actions changes our subsequent behavior. We are able to place our actions in a hierarchy. When actions are placed at a high level, they are identified in a more abstract and big-picture kind of way. When an action cannot stay at a high level, it drops to a lower one. Unsuccessful actions tend to be identified at lower levels, indicating that we might need to “cover the basics” before we can see something for its more abstract purpose.
There are five maintenance indicators that determine the potential disruption of an action identity:
Difficulty
Familiarity
Complexity
Time to enact
Time to master
When an action is practiced enough to be automatic, it can be maintained at a higher level. Actions at a high level can be performed in many different ways, but if an action is at the wrong level, performance will be impaired. Action identities can produce emergent action, actions people find themselves doing when they did not intend to. This usually occurs on low-level actions.
Attitude-behavior consistency is affected by the meaning we place on the attitude and the behavior made salient by situational cues. We may behave differently based on the role we take on in any given situation. Situational factors that force our attention inwards cause us to base behavior on our own personal attitudes. Constructs may also be primed, like extraversion.
Some people blend into social situations easily (high self-monitors) while others are always themselves regardless of their social setting (low self-monitors). A high self-monitor is very sensitive to social norms and interpersonal cues about appropriate behavior, becoming the person the situation requires. Low self-monitors do the opposite, putting their true self forward. High self-monitors tend to be more successful in social situations and have less of a problem behaving across a range of situations, while low self-monitors tailor situations to fit their self-concepts and focus on how they themselves behave. When making attributions, high self-monitors point to the situation while low self-monitors look to themselves. Such differences in self-perception can be seen in how people behave, and have a large impact on how much one’s attitudes will affect one’s behavior.
One way to make a good impression is to match your behavior to the person you are trying to impress (behavioral matching). Alternatively, you might try ingratiating yourself by saying positive things about the target other. Flattery is successful when centered on attributes that the target person values but is unsure about. Self-promotion can create a good impression, but might also be seen as arrogant. When you slip up, you might attribute your behavior to external causes to make yourself look better.
Another measure for managing failure is self-handicapping, in which people do something that will hurt their chances of success, but also act as an excuse when things go wrong. This is ego-protective in that we can point the blame at something else if we fail, but take extra pride in the victory if we succeed. However, these strategies make a poor impression.
People sometimes choose to obscure the impressions others have of them. Under certain conditions, people will muddle the waters by making their attributions ambiguous, for example by engaging in an inconsistent behavior.
Unfortunately, we are not always the best scientists in our daily lives. We engage in confirmatory hypothesis testing when we have a hypothesis about another person’s personality – we ask them leading questions that only prove our hypothesis, rather than disprove it. This is particularly troubling in situations like the courtroom, which is why leading questions are objectionable.
A self-fulfilling prophecy is any positive or negative expectation about circumstances, events, or people that may affect a person's behavior toward them in a manner that causes those expectations to be fulfilled. So an initially false definition evokes behavior that makes it true (behavioral confirmation).
Perceivers moderate their expectations depending on their own and the targets’ respective certainty. Behavioral confirmation only occurs when perceivers are certain of their expectations but targets are uncertain of their self-conceptions. Target self-verification, on the other hand, occurs when targets are certain of their self-conceptions; it also tends to occur when both perceivers and targets are uncertain of their beliefs.