Bullet point summary of Thinking, Fast and Slow by Kahneman - 1st edition
- What is the distinction between System 1 and System 2? – Chapter 1
- How do System 1 and System 2 deal with effortful tasks? - Chapter 2
- Why is System 2 deemed the ‘lazy controller’? – Chapter 3
- How does the ‘associative machine’ of System 1 work? - Chapter 4
- What is cognitive ease? - Chapter 5
- How does our mind deal with surprises? - Chapter 6
- Why do people often jump to conclusions? - Chapter 7
- How are judgments formed? – Chapter 8
- What is the role of substitution? - Chapter 9
- What is meant by ‘the law of small numbers’? – Chapter 10
- What is the ‘anchoring effect’? – Chapter 11
- What is the availability heuristic? - Chapter 12
- How do availability, risk and emotion relate to each other? - Chapter 13
- What is the representativeness heuristic? – Chapter 14
- What is meant by the ‘less-is-more’ pattern? - Chapter 15
- Why do causes trump statistics? – Chapter 16
- What is regression to the mean? - Chapter 17
- How can intuitive predictions be tamed? - Chapter 18
- What is the illusion of understanding? – Chapter 19
- What is the illusion of validity? - Chapter 20
- How do intuitions and formulas relate to each other? – Chapter 21
- When can we trust an expert intuition? – Chapter 22
- What is the importance of ‘the outside view’? - Chapter 23
- What is the optimistic bias? - Chapter 24
- What are ‘Bernoulli’s errors’? – Chapter 25
- What is prospect theory? – Chapter 26
- What is the endowment effect? – Chapter 27
- How do people react to bad events? – Chapter 28
- What is meant by the 'fourfold pattern'? - Chapter 29
- How do we respond to rare events? – Chapter 30
- What are risk policies? – Chapter 31
- What is mental accounting? – Chapter 32
- What are preference reversals? - Chapter 33
- What is emotional framing? - Chapter 34
- How does our memory affect our judgments of experiences? - Chapter 35
- How do we evaluate stories? - Chapter 36
- What does research about experienced well-being teach us? - Chapter 37
- What is the focusing illusion? - Chapter 38
What is the distinction between System 1 and System 2? – Chapter 1
- Fast thinking happens automatically and effortlessly: an answer or impression immediately comes to mind. Solving a difficult mathematical problem requires some time, which is a feature of slow thinking.
- Kahneman refers to the two modes of thinking as System 1 and System 2. Features of System 1: fast, automatic, involuntary and effortless thinking. Features of System 2: slow, deliberate, effortful reasoning.
- Examples of events that occur automatically and effortlessly (System 1) are: giving the answer to ‘1 + 1 = ?’ and looking in a certain direction when you hear an unexpected sound.
- The abilities of System 1 include skills which we also see in animals, like recognizing things and orienting attention. Other quick and automatic mental activities are the result of prolonged practice. System 1 involves learned skills (how to behave socially, reading) and learned associations (capitals of countries). Certain skills are acquired solely by specialized professionals.
- Learned skills require knowledge, which is stored in memory and can be accessed effortlessly and unintentionally. Some responses are entirely involuntary. You cannot stop yourself from knowing that 1 + 1 = 2 or looking in the direction of a sudden sound. Others can be controlled but are usually done automatically. Controlling attention is an activity that fits both systems.
- The various operations of System 2 share one feature: they all require attention and are disrupted when the attention is moved away. Examples are: bracing yourself for fireworks going off or focusing on the voice of a specific person in a noisy and crowded setting.
- Doing something that does not come naturally requires effort: you need to ‘pay attention’. Conducting several effortful activities at once is hard or impossible, because they interfere with each other. Solving a complex mathematical problem while crossing a busy road is very difficult. Talking to your partner while walking in a quiet park is not, because these activities are undemanding and easy. We all have some awareness of the limited capacity of attention.
- The book ‘The Invisible Gorilla’ demonstrates how focusing intensely on a task can result in being blind to distractions. The study demonstrates two important findings about the mind: people can be blind to the obvious and people can be blind to their own blindness. The task of counting the passes of one team while ignoring the other team made viewers effectively blind to a gorilla walking across the court. The viewers who did not notice the gorilla were sure it did not happen; they could not imagine failing to spot a gorilla on a basketball court.
- When we are awake, both Systems are active. System 1 runs automatically, System 2 is usually in a low-effort mode. System 1 continuously generates feelings, intuitions, intentions and impressions for System 2. System 2 turns intuitions and impressions into beliefs, and impulses into conscious actions. System 2 normally adopts the suggestions of System 1 without modification. In general, you believe your impression and follow your feeling, which is fine in most cases.
- When System 1 encounters difficulties and does not provide an answer, it relies on System 2 to tackle the problem and provide the right answer. Surprises can also activate System 2. System 2 also monitors a person’s own behavior: the control that keeps people respectful when they are furious and alert when they are driving in the dark.
- The interaction between System 1 and System 2 is mostly successful, because System 1 usually provides accurate short-term predictions and models of familiar situations, and its initial responses to challenges are quick and normally appropriate. It has biases though: systematic errors that are likely to be made in certain circumstances. System 1 occasionally answers an easier question than the real question and has a limited understanding of statistics and logic. Another limitation is that you cannot turn System 1 off.
- Conflicts between an intention of carrying out a task and an automatic (opposite) response occur regularly. You may remember a time when you tried not to stare at someone with an alternative hairstyle or when you forced your attention on boring homework. One task of System 2 is self-control: overcoming the impulses of System 1.
- The Müller-Lyer illusion demonstrates the difference between an impression and a belief, as well as the autonomy of System 1. When looking at the image, you believe what you see: lines of different lengths. After measuring them, you (or your System 2) believe something else: you know that the lines have the same length, even though you still see a difference in length. You cannot turn System 1 off: you cannot decide to see two equal lines, despite knowing they are equal. Resisting the illusion requires learning to mistrust your impressions by recognizing the illusory pattern and remembering what the ‘catch’ is.
- Some illusions are visual, others are cognitive. You cannot control feeling sympathy for someone who turns out to be a psychopath (System 1). This can be the automatic response to psychopathic charm. You can learn how to recognize the illusion and how to respond to it (System 2).
- Errors of intuitive thoughts are generally hard to prevent. Some biases cannot be avoided, for instance when System 2 does not have a clue to the error. Even when a clue is available, preventing mistakes requires a lot of effort and we cannot constantly question our thoughts. The best option is learning to recognize circumstances in which mistakes are likely to occur and trying harder to prevent making significant mistakes when there is a lot at stake.
- Kahneman uses the terms ‘System 1’ and ‘System 2’ as nicknames, because they are easier to say and take less space in our memory than ‘automatic system’ and ‘effortful system’. This is important, because any occupation of the working memory reduces our ability to think. He emphasizes that the systems are not real parts of the brain.
How do System 1 and System 2 deal with effortful tasks? - Chapter 2
- System 2 is defined by its effortful operations, although it is also lazy: it puts in no more effort than needed. Some crucial tasks can only be performed by System 2, because they require self-control and effort to overcome the impulses and intuitions of System 1.
- The ‘Add-1’ task puts our System 2 to work and demonstrates the limits of our cognitive abilities within seconds: given a string of digits, you must recite a new string in which each digit is incremented by 1 (so 5294 becomes 6305), keeping a steady rhythm. Not many people can handle more than four digits. If you truly want to challenge yourself, try Add-3.
- Your body also reacts to mental work. The psychologist Hess described the pupils as windows to the soul. Pupils indicate the level of mental effort: they dilate more as problems get more difficult.
- Kahneman set up an experiment to study the reaction of pupils while participants performed paced tasks. The pupils got wider as the tasks got more demanding. The Add-1 task demonstrated how longer strings of digits caused bigger pupils. When performing the Add-3 task, the pupils got 50% bigger and the heart rate increased by seven beats per minute. Add-3 marks the maximum of mental effort: when a task became even more demanding, people gave up and their pupils stopped dilating.
- When solving a mathematical problem, the pupils of the participant dilated within seconds and constricted as soon as the problem was solved or given up on. The pupils had a constant normal size when the participant was chatting to someone else during a break. Engaging in small talk and easy tasks is deemed effortless, while tasks like Add-1 and Add-3 are extremely effortful.
- System 2 has limited capacity. It responds to threatened (mental) overload by protecting the most important activity. That activity gets the attention it needs; remaining capacity is allocated to other tasks. An experiment in which the detection of the letter K was a ‘side task’ showed that the observers failed when the main task was highly demanding.
- The allocation of attention has always played an important role in our evolution. The ability to orient and respond rapidly to sudden threats or great opportunities was needed to survive, which we also recognize in the animal world. Even now, System 1 is activated when an emergency occurs and fully focuses on self-protection. We respond to a sudden threat before we are fully aware of it.
- Brain studies have demonstrated that the degree of activity needed for an action changes as we become more skilled. An increase in skill results in the involvement of fewer brain regions. The same goes for talent: the brain activity and pupil size of highly intelligent people show they need less effort to successfully complete the same task.
- Law of least effort: if we have several options for achieving a goal, we choose the least demanding one. It is human nature to be lazy.
- Only System 2 is capable of following rules, comparing objects on different attributes and making conscious choices between multiple options. The automatic System 1 lacks these capabilities: it cannot deal with more than one task at once and is not able to use purely statistical information.
- A key capability of System 2 is the adoption and termination of ‘task sets’: it can program our memory to follow instructions that overrule habitual responses. Psychologists call this ‘executive control’.
- One of the most significant findings of cognitive psychologists in the last decades is that moving the focus to another task is effortful, particularly when there’s a time limit involved. People who do well on tests that demand them to switch constantly between two effortful tasks are also likely to do well on tests of general intelligence.
Why is System 2 deemed the ‘lazy controller’? – Chapter 3
- System 2 has a natural pace. Having random thoughts and monitoring what happens around you is not effortful. We make small decisions when we ride our bicycle, take in some information as we watch the news and have low-key conversations with our colleagues or partner. These actions take little effort and can be compared to a stroll.
- It is usually easy to be walking and thinking at the same time, but in some cases they cause a mental overload. When you go on a walk with someone and you ask that person to instantly solve the problem 32 x 64, he or she will stop walking.
- Walking faster than your natural pace worsens your thinking ability, as your attention shifts to maintaining the faster pace. If you walk as fast as you can, it will be impossible to focus on anything else. Next to the physical effort, it takes mental effort to fight the urge to slow down: self-control.
- Conscious thoughts and self-control fight over the same restricted budget of effort.
- Sometimes people are in a state of effortless concentration in which maintaining a coherent train of thought requires no willpower. Psychologist Csikszentmihalyi called this state ‘flow’. Being in a flow state can make you lose your sense of yourself and of time. Activities that induce this flow are called ‘optimal experiences’. These activities take considerable effort, but in a state of flow, maintaining focused attention on them requires no discipline.
- ‘Flow’ separates the two forms of effort: the deliberate control of attention (self-control) and concentration on the task (cognitive effort).
- Psychological research has demonstrated that someone who is simultaneously challenged by a temptation and by a demanding mental task is more likely to give in to the temptation. If you are asked to remember a list of numbers for several minutes and at the same time have to choose what you want to eat (broccoli or pizza), you are more likely to go for the pizza. System 1 has more influence on our behavior when System 2 is occupied.
- Someone who is cognitively busy is also more likely to use sexist language, be superficially judgmental in social settings and make selfish decisions. A busy System 2 loses its hold on behavior, although mental load is not the only cause of depleted self-control. Other possible causes are a bad night of sleep, drinking alcohol or anxiety about the task. Conclusion: self-control requires effort and attention.
- Experiments conducted by psychologist Baumeister showed that voluntary physical, emotional and cognitive effort all draw, at least in part, on the same tank of mental energy. His experiments involved successive tasks. Efforts of self-control or will are tiring: if we have had to force ourselves to do a task, we are less likely to exert self-control in the next task. This is called ‘ego depletion’.
- Participants who had to suppress their emotional response did not do well in a later physical test. Emotional effort has a bad influence on the ability to endure muscle pain, so an ego-depleted person is likely to give up faster. In another experiment, participants who started with the task of eating healthy food while resisting sweet treats later gave up quicker than usual when faced with a demanding mental task.
- Many tasks and situations lead to depletion of self-control. They all involve conflict and the need to suppress a natural urge. Examples are avoiding the thought of red cats, trying to impress someone and responding kindly to your husband’s bad behavior. There are also many and various indications of depletion, for example reacting aggressively to a provocation or performing poorly in cognitive tasks.
- Highly demanding tasks require self-control, while the exertion of self-control is unpleasant and depleting. Unlike mental load, ego depletion is a loss of motivation. It does not equal being cognitively busy.
- Baumeister also found that mental energy is not merely a metaphor. The nervous system is one of the most glucose-consuming parts of the body, especially when you are carrying out demanding mental tasks. Carrying out a cognitive activity that requires self-control results in a lower blood glucose level. This effect of ego depletion can be reversed by ingesting glucose: only the participants who got a glucose drink before starting the second task were not depleted. Intuitive mistakes are usually more frequent among ego-depleted individuals.
- A recent study showed the effects of depletion on judgment. Judges had to review parole applications. The researchers found an increase in approved requests after every food break. In the period until the next break, the approval rate steadily declined, dropping to nearly zero just before the next eating moment. The best explanation is that hungry and fatigued judges had the urge to go for the easier default decision: denial of parole.
- Monitoring and controlling actions and thoughts suggested by System 1 is one of the most important functions of System 2. System 2 either allows, suppresses or modifies them.
- “An ice cream and chocolate dip cost € 1.10. The ice cream costs one euro more than the dip. What is the price of the dip?” You automatically answer € 0.10, which is wrong. If the price of the dip is € 0.10, then the total price will be € 1.20 (0.10 for the dip and 1.10 for the ice cream). The right answer is € 0.05. Answering with € 0.10 means that you did not actively review your intuitive answer and your System 2 endorsed a wrong answer that it could have prevented with little effort. Here we see the ‘law of least effort’ at work (a short check of the arithmetic follows at the end of this chapter’s list).
- The ice cream – dip puzzle demonstrates that most people are overconfident: they are prone to put too much trust in their intuitions and avoid cognitive effort.
- “All apples are fruits. Some fruits are pink. Therefore some apples are pink.” Most college students agreed with the conclusion, but it is actually an invalid syllogism: it is possible that there are no pink apples. Because a plausible answer comes to mind straight away, not many people are willing to put effort into thinking it through. This indicates that when people believe a conclusion is valid, they also tend to believe the arguments are valid. System 1 focuses on the conclusion first; the arguments follow later.
- “How many homicides occur in the state of Tennessee in 12 months?” This question challenges System 2. The trick is whether people will remember that Memphis, a large city with a very high crime rate, is in Tennessee. Respondents who remembered this gave higher estimates; most did not think of the city when asked about the state and reported lower guesses than respondents who were asked directly about the number of homicides in Memphis. Failing to think of Memphis can be a flaw of both System 1 and System 2. Whether the city pops up in your mind depends partly on the automatic function of memory, which people differ in. Some people have extensive knowledge about the state and are more likely to remember facts about it. It also depends on people’s interests and intelligence.
- Intelligence is not solely about reasoning, but also about retrieving relevant facts from memory and deploying attention. While memory function is associated with System 1, taking your time for a conscious search of memory is a feature of System 2. The extent of this search varies among people.
- The ice cream – dip puzzle, the apples syllogism and the Memphis – Tennessee question have one thing in common: giving the wrong answer seems to be caused by insufficient motivation, by not making enough effort. Students of high-ranked universities have the capability to provide the right answer. Without the temptation to accept a plausible answer that automatically comes to mind, they can solve much harder problems. It is troubling that they are so easily satisfied and stop thinking. Their System 2 proved to be lazy. People should be less willing to accept tempting answers, be more alert and intellectually active, and have less confidence in their intuitions.
- Shane Frederick used his Cognitive Reflection Test to examine the characteristics of students who had performed poorly and found that they are prone to answer with the first thought that comes to mind and are reluctant to make the effort of checking this intuition. They are also prone to believe other ideas from System 1. The students were particularly impatient, impulsive and eager for instant gratification. The findings of Frederick indicate that System 1 and System 2 have different ‘personalities’. System 1 is intuitive and impulsive; System 2 is cautious and capable of reasoning, but can also be lazy. The same goes for people: some are like System 1, others like System 2.
- The link between self-control and thinking was also examined by Walter Mischel. He subjected four-year-olds to a dilemma: receiving a small reward (one cookie) whenever they wanted it or a bigger reward (two Oreos) after waiting for 15 minutes in a non-distracting room. Half of them succeeded in waiting for 15 minutes, mostly by trying not to pay attention to the cookie. Over a decade later, the children who had managed to resist the temptation showed greater executive control in cognitive tasks, in particular the ability to reallocate attention. They were also less likely to do drugs and had better scores on intelligence tests.
- Researchers examined the connection between intelligence and cognitive control by exposing four- to six-year-olds to computer games specifically designed to engage their attention and control abilities. The researchers discovered that training attention improved both executive control and scores on intelligence tests. They also found that parenting techniques affected the kids’ ability to control attention, and a close link between this ability and the ability to control emotions.
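As a quick check of the ice cream - dip puzzle from this chapter, here is a minimal sketch in Python (my own illustration, not from the book) that makes the arithmetic explicit:

```python
# Ice cream - dip puzzle: dip + ice cream = 1.10, and the ice cream costs
# exactly one euro more than the dip. Work in cents to avoid floating-point
# rounding issues.
for dip in range(0, 111):        # candidate dip prices, in cents
    ice_cream = dip + 100        # the ice cream is one euro more
    if dip + ice_cream == 110:   # the total must be 1.10
        print(f"dip = {dip / 100:.2f}, ice cream = {ice_cream / 100:.2f}")
# Prints: dip = 0.05, ice cream = 1.05. The intuitive answer (0.10 + 1.10)
# would add up to 1.20, not 1.10.
```

The same result follows from the algebra: if the dip costs x, then x + (x + 1) = 1.10, so 2x = 0.10 and x = 0.05.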
How does the ‘associative machine’ of System 1 work? - Chapter 4
- When you read the words ‘mango’ and ‘puke’, you will pull a disgusted face. You automatically responded to the word ‘puke’ like you would respond to the actual event. Our minds automatically assume causality between the words mango and puke, forming a scenario in which the mango caused nausea. This results in a short-term aversion to mangos. You are extra ready to recognize and respond to concepts and objects associated with ‘puke’ (vomit, sick, nausea) and words associated with ‘mango’ (exotic, fruit, red). Words associated with other causes of puking are also easier to recognize (food poisoning, hangover). This wide range of responses occurred effortlessly, automatically and quickly, and could not be stopped. This is an example of your System 1 at work.
- The visions and thoughts you experienced are the result of the process called ‘associative activation’: ideas that have been formed generate numerous other ideas. A word evokes memories, which triggers emotions, which evoke reactions like facial expressions and an avoidance tendency. These reactions intensify the feelings to which they are connected, and the feelings intensify compatible thoughts. This rapid process of physical, emotional and cognitive response is called ‘associatively coherent’.
- System 1 tries to make sense of an unusual situation by linking events in a coherent story. It starts with evaluating the current level of threat and then creates a context for the current situation and future events. System 1 treats the connection between two words as a representation of reality. Your body reacts as it would react to the real event and your emotional reaction is part of the interpretation of that event. As cognitive researchers have recently emphasized: you do not merely think with your brain, but also with your body.
- The process that causes mental events to occur in sequences is called ‘the association of ideas’. Philosopher Hume identified three principles of association: causality, contiguity in place and time, and resemblance. According to the current view of the functioning of the associative memory, the mind does not go through ideas one at a time: one idea evokes many other ideas at once, of which only a few become conscious.
- Psychologists see an idea as a node in a network, the associative memory, in which it is linked to numerous others. Types of links: cause-effect (drinking - hangover), things-properties (carrot - orange), things-categories (tulip - flower).
- Psychologists discovered in the 1980s that seeing or hearing words causes instant and measurable changes in the ease with which numerous related words can be evoked. If you have read the word ‘beverage’ and then have to finish the word ‘TE_’, you are more likely to go for ‘tea’ than for ‘ten’. You will also be quicker than normal to recognize the word ‘tea’ when it is whispered or blurred. In addition, you are primed for other drinking-related ideas (thirsty, water, coffee). These primed ideas can prime other ideas.
- Priming is not restricted to words and concepts. Our emotions and actions can be primed by events that we are not aware of. The experiment of Bargh showed how young students walked significantly slower after finishing the task of constructing sentences with a set of words associated with old people (bald, wrinkle, gray, Florida). Two stages of priming: 1) the words prime thoughts of old people and 2) the thoughts prime an action which is associated with the elderly (walking slowly). The students had not noticed that the words had an elderly theme and insisted that none of their actions were influenced by the words. Although they were not aware of the idea of old age, their behavior had changed nonetheless. This phenomenon is called the ‘ideomotor effect’.
- Reciprocal links are common in the associative machinery. Being happy makes you smile and smiling tends to make you feel happy. Gestures can unconsciously influence feelings and thoughts. Nodding makes you more accepting of something you hear and shaking your head creates a tendency to reject it.
- Our choices and judgments are not as autonomous and conscious as we think they are. We may see voting as a conscious act reflecting our values and judgments of policies, one that is not affected by irrelevant factors. However, a study demonstrated that the location of a polling station can influence the voting pattern.
- Money primes evoke problematic effects. People who were shown words with a money theme or images of money became more independent, self-reliant, selfish and preferred being alone. The idea of money thus primes individualism. These findings indicate that living in a money-driven society unconsciously and negatively shapes our attitudes and behavior.
- Priming is a phenomenon arising in System 1, which we have no conscious access to. System 1 produces impressions that frequently turn into beliefs, which become choices, judgments and actions, without us being aware of it. It is therefore no surprise that System 1 causes systematic errors in our intuition. We are not completely at the mercy of random primes though; the effects are often small. Only voters in doubt will be influenced by the location of the polling station. They could, however, make the difference.
What is cognitive ease? - Chapter 5
- When we are conscious, several assessments take place in our brain, providing answers to important questions: Is something new happening? Are things going alright? Is there a threat? Should I redirect my attention? System 1 carries out these assessments automatically. It determines whether System 2 needs to put in more effort.
- ‘Cognitive ease’ is one of the variables being measured. On a scale of easy to strained, ‘easy’ means that things are going alright (no news, no threats, no redirecting of attention or extra effort needed) and ‘strained’ means that a problem occurred and System 2 has some work to do. ‘Cognitive ease’ is affected by the presence of unmet demands and the current level of effort.
- The causes of strain or ease have interchangeable effects. When you are in a state of cognitive strain, you are probably suspicious, putting in more effort and feeling less comfortable, but also less creative and intuitive. When you feel at ease, you are probably in a positive mood, satisfied, feeling comfortable and rather casual in your thinking.
- Thinking and memory are susceptible to illusions.
- Familiarity has a quality of ‘pastness’ that suggests it is a direct reflection of a past experience. This quality is an illusion. Words you have seen earlier become easier to see again and quicker to read. Thus, seeing a word you have seen before induces cognitive ease, which results in the illusion of familiarity.
- When the correct answer does not come to mind, we tend to go by cognitive ease: we pick the answer that feels familiar and assume it is true. Extreme and new answers are likely to get rejected.
- System 1 produces the impression of familiarity and System 2 provides a judgment (true or not) based on that impression. If a judgment is based on an impression of cognitive strain or ease, a predictable illusion will occur. If you want people to believe a false statement, you have to frequently repeat it, because it is hard to distinguish the truth from familiarity. This is a well-known fact among marketers and authorities.
- If you want to write a persuasive text, you should enlist cognitive ease and truth illusions. In order to avoid cognitive strain, start with maximizing legibility. Avoid complex and pretentious language. Make your statement memorable by putting it in verse: rhyming aphorisms are considered more true. If quoting a source, avoid names that are difficult to pronounce. System 2 is lazy; minimal mental effort is preferred.
- Whether we believe a statement is true depends on a feeling of cognitive ease: is there a link with logic or an association with other preferences or beliefs you hold, and does it come from a source you like and trust? The problem is that there could be other causes of cognitive ease, like an attractive presentation of a text. It is not easy to overcome superficial factors that evoke illusions of truth, as System 2 is lazy and usually backs the suggestions of System 1.
- When System 2 is engaged in effortful operations, you experience cognitive strain. Experiencing cognitive strain mobilizes System 2 to a more active mode, in which it is more likely to reject the suggestions of System 1.
- The article ‘Mind at ease puts a smile on the face’ suggests that System 1 associates cognitive ease with positive feelings. The same goes for words that are easy to pronounce. Businesses with pronounceable names initially do better on the stock market.
- Repetition also induces cognitive ease. Words displayed more frequently are considered to mean something good, as opposed to words that are shown just once or twice. Psychologist Zajonc called this link between the repetition of arbitrary stimuli and the mild affection people develop for them the ‘mere exposure effect’.
- The effect of repetition on affection is a biological phenomenon common to all living creatures. To survive in a world full of dangers, we need to respond cautiously to novel stimuli: with fear and withdrawal. Caution fades if the stimulus proves to be safe. The mere exposure effect occurs because repeated exposure to the stimulus did not cause harm.
- Psychologist Mednick came up with the Remote Associates Test (RAT). He argued that creativity is associative memory that works exceptionally well.
- Studies prove that a sense of cognitive ease can be induced by a very weak signal from the associative memory. It ‘knows’ that certain words share an association, long before retrieving that association. Manipulations that increase cognitive ease (clear font, priming, pre-exposing images) increase the tendency to see the association between the words. Our mood also affects our intuition: being sad makes us lose touch with our intuition.
- Intuition, gullibility, creativity, good mood and increased reliance on System 1 are part of a cluster. A good mood loosens the control of System 2 over performance: we become more creative and intuitive, but also more prone to logical errors and less vigilant. Cognitive ease is both a consequence and a cause of feeling happy.
- You are likely to smile when you read a coherent triad of words. Smiling and cognitive ease occur together and in turn, feeling happy leads to intuitions of coherence. Studies show that a brief emotional reaction following the display of words forms the basis of judgments of coherence.
How does our mind deal with surprises? - Chapter 6
- The main function of System 1 is maintaining and updating a model of your personal world, which represents normality. This model is constructed by associations that connect ideas of events, circumstances, outcomes and actions that regularly occur. The formed connections become a pattern of associated ideas, which represents the structure of events in your life. It determines how you interpret the present and your future expectations.
- Surprises are crucial elements of our mental life, they are the most sensitive indication of our understanding of the world and our expectations from it. Surprises can be divided into two varieties: conscious and active surprises, and passive surprises.
- Active surprise: you wait for an event to happen, but something else happens. In the case of a passive surprise, you were not waiting for the event, but you are also not surprised when it happens: although not actively expected, it is normal in that situation.
- How incidents come to be perceived as abnormal or normal can be explained by the ‘norm theory’. If you witness two abnormal events, the second event will retrieve the first one from memory and together they will make sense.
- ‘Moses illusion’: “How many animals of each kind did Moses take into the ark?” Very few people realize that it was Noah who took them into the ark. The thought of animals in an ark creates a biblical context, in which Moses is normal. Reading his name did not come as a surprise. The (unconscious) associative coherence makes you accept the question. Replace Moses with Bill Gates and there would have been no illusion, because his name is abnormal in the context.
- The brain quickly detects deviations from normality. Our knowledge of the world instantly recognizes an abnormality, and sharing such norms is why we can communicate with each other: we use words in the same way. We have ‘norms’ for lots of categories, which provide the background for the instant detection of abnormalities. System 1 has access to norms, specifying the range of possible values and the most typical cases.
- Searching for causal connections is a component of understanding a story. When there is little information about what happened, System 1 starts searching for a coherent causal story that brings the fragments of information together.
- People have impressions of causality from birth, produced by System 1. From an early age, our minds are ready to identify agents and assign them personality traits and intentions. Before the age of one, we are prepared to identify victims and bullies.
- People have the tendency to apply causal thinking when the situation actually requires statistical thinking. Statistical reasoning derives conclusions about individual cases from ensembles and categories, which System 1 is not capable of. System 2 is able to reason statistically, but this requires training (which most people do not receive).
Why do people often jump to conclusions? - Chapter 7
- One of the characteristics of System 1 is jumping to conclusions.
- Jumping to a conclusion is efficient if the conclusion is likely to be true, the costs of a potential mistake are acceptable and it saves a fair amount of effort and time. It is risky when the stakes are high, the situation is unfamiliar and there is a lack of time for collecting further information. In those circumstances intuitive errors are likely, unless System 2 intervenes.
- When there is no explicit context, System 1 produces a plausible context. When the situation is uncertain, System 1 takes a bet, which is guided by experience. System 1 does not consider alternatives: it does not know conscious doubt. Doubt and uncertainty are typical for System 2.
- The current context and recent events strongly influence the interpretation. When you do not remember recent events, you rely on older memories.
- Psychologist Gilbert came up with the theory of believing and unbelieving. He argued that understanding an idea starts with attempting to believe it. What would it mean if it were true? The first attempt to believe is an automatic process of System 1, which constructs the most plausible interpretation of the situation.
- Unbelieving is a process of System 2. When System 2 is otherwise engaged, we tend to believe almost anything. That is why we are more likely to be persuaded by commercials when we are depleted and fatigued.
- The operations of associative memory are linked to ‘confirmation bias’. System 2 tests a hypothesis by consciously searching for confirming facts. It is a rule of science to test a hypothesis by trying to refute it, but people (even scientists) tend to search for evidence that supports their beliefs. The confirmation bias of System 1 uncritically accepts suggestions and exaggerates the probability of unlikely events.
- If you like someone’s views and opinions, you are likely to also like his/her appearance and voice. The tendency to like or dislike everything about someone, including the unobserved things, is called the ‘halo effect’. This common bias plays a significant role in the way we shape our view of situations and people. It represents the world as more coherent than it is in reality.
- The sequence in which we observe things is important, because the halo effect assigns more weight to first impressions.
- The halo effect can be tamed by the principle of decorrelating errors: obtain independent judgments from multiple sources before comparing them, so their errors do not influence each other.
- Our mind treats currently available information completely differently from information that is not retrieved from memory. System 1 constructs the most plausible story from currently activated ideas, without considering missing information. The coherence of the created story is its measure of success, not the quality and amount of the information it is based on. When there is very little information, which occurs regularly, System 1 jumps to conclusions.
- In order to understand intuitive thinking, you must realize that jumping to conclusions on the basis of very little information is an important part of it. Keep the abbreviation ‘WYSIATI’ in mind (What You See Is All There Is). System 1 is extremely insensitive to the quantity and quality of the information that leads to intuitions and impressions.
- The less you know, the easier it is to create a coherent story. WYSIATI induces coherence and cognitive ease, which make us accept a statement as true. It is why we are fast thinkers and can make sense of incomplete stories. In most cases, the created stories come close to reality and result in appropriate actions. WYSIATI can, however, lead to biases of choice and judgment. Examples are overconfidence, base-rate neglect and framing effects.
How are judgments formed? – Chapter 8
- System 2 deals with both questions from someone else (“Did you like the food?”) and from your own mind (“Do I really need to buy this?”). Both answers come from directing your attention and searching your memory.
- System 1 constantly monitors what is happening inside and outside our mind. It unintentionally and effortlessly assesses the elements of the situation. These ‘basic assessments’ affect our intuitive judgment, because they are easily substituted for the answers to harder questions.
- An example of a ‘basic assessment’ is the ability to distinguish between an enemy and a friend in the blink of an eye. System 1 rapidly provides the judgment whether it is safe or not to interact with a stranger.
- In one glance at someone’s face, we can evaluate how trustworthy and dominant (thus threatening) that person is and whether we expect his/her intentions to be hostile or friendly. Dominance is assessed by looking at the shape of the face (square chin) and intentions are predicted through facial expressions. In today’s society, this evolutionary ability is used to influence the voting behavior of people.
- Participants were shown campaign portraits of politicians and asked to rate their competence and likability based on their faces. The winner of the election turned out to be the person with the highest competence rating. Ratings of likability were less predictive of the voting result. Competence was judged by combining trustworthiness and strength.
- Brain studies show that pictures of losing politicians evoked a greater negative emotional response in viewers. Judging competence from facial appearance is an example of a ‘judgment heuristic’.
- The influence of System 1 on voting varies among people. Research shows that politically uninformed and television-prone voters are more likely to fall back on the automatic and quickly formed preferences of System 1. The effect of facial competence on their voting behavior is three times greater in comparison to informed voters who watch less television.
- Questions about one’s popularity, happiness or suitable punishments have one thing in common: they refer to an underlying dimension of amount or intensity. It is linked to using the word ‘more’ (more popular, more happy, more severe). This regards another ability of System 1: matching across various dimensions. Predicting by matching is a natural operation of System 1 and System 2 usually accepts it, but it is statistically wrong.
- System 1 is constantly monitoring what is going on around you and unintentionally carries out multiple routine assessments at the same time. Our control over intended judgments is not precise; we usually assess much more than needed or wanted. This excess computation is called the ‘mental shotgun’. Just as a shotgun scatters pellets around a single target, it is impossible for System 1 to do only what System 2 demands of it.
- The combination of intensity matching and a mental shotgun explains having intuitive judgments.
What is the role of substitution? - Chapter 9
- If System 1 can’t find an adequate answer to a difficult question fast enough, it will look for an easier, related question and answer that one instead. This operation is called ‘substitution’, the intended question the ‘target question’ and the easier question the ‘heuristic question’.
- ‘Heuristic’: the simple procedure that helps find an adequate but not perfect answer to a difficult question.
- Instead of providing an optimally reasoned answer, you can go for the heuristic alternative. Sometimes this works well and sometimes it results in a major error.
- Substitution can be useful when you have to solve difficult problems. This strategy is consciously implemented by System 2. Other heuristics are the result of the mental shotgun, which are not chosen.
- The automatic processes of intensity matching and the mental shotgun normally produce answers to simple questions that are related to the main question. The lazy System 2 tends to endorse a heuristic answer, although it could reject or modify it after retrieving more information. You probably won’t even notice how difficult the target question was, because an intuitive answer easily came to mind.
- A good example of substitution is the following experiment, in which participants were asked the questions: “How happy have you been lately?” “On how many dates did you go last month?” There turned out to be no correlation between the answers. Dating did not immediately come to mind when participants were asked to rate their happiness. Other participants got both questions in reverse order. The outcome was totally different: the correlation was very high.
- The participants with the most dates were reminded of happy moments, while those who did not date experienced sadness. The happiness induced by the dating question was still lingering when the happiness question was asked. The happiness question requires hard thinking, but it got substituted by the easy-to-answer dating question instead.
- Any emotion-inducing question that influences someone’s mood will have this effect. WYSIATI. Our current mood has a big influence on the evaluation of our happiness. This is known as the mood heuristic for happiness.
- We tend to let our dislikes and likes shape our beliefs about the world. How convincing we find arguments is determined by our political preferences. If you favor a certain policy, you believe the benefits are greater than those of the alternatives. If you dislike things (bungee jumping or tattoos for instance), you are prone to believe they are very risky and have no benefits.
- Conclusions are dominant over arguments, but the mind is not completely immune to sensible reasoning and information. Emotional attitude and beliefs may alter when you learn that something is not as risky as you thought, but information about a lower risk also makes the benefits appear greater.
- This shows a different side of System 2: when it comes to attitudes, it acts as a defender of the emotions of System 1 rather than as a critic. Its search for arguments and information is usually restricted to information that matches existing beliefs, with no intention of truly examining them.
What is meant by ‘the law of small numbers’? – Chapter 10
- System 1 excels in one form of thinking: it effortlessly and automatically detects causal links between events. System 1 fails to deal with merely statistical information, which affects the probability of the outcome but not the cause of the event.
- Fact: the number of cancer diagnoses varies across areas. Extreme outcomes (both very high and very low rates) are more likely to be found in smaller samples. This is a statistical explanation, not a causal one (the simulation after this chapter’s list illustrates the effect).
- ‘Artifacts’: observations that are produced exclusively by an aspect of the research method.
- Outcomes of large samples are more trustworthy; this is known as the ‘law of large numbers’.
- If the hypothesis to be tested is “The vocabulary of seven-year-old girls is greater than the vocabulary of seven-year-old boys”, you must use a large enough sample. In the whole population, the hypothesis is true. Individual boys and girls vary greatly though, so you could draw a sample in which the boys score higher or in which no difference is detected.
- Even researchers have a poor understanding of sampling effects. Most research psychologists see sampling variation as an unpleasant obstacle in their research project. Picking too small a sample size puts you at the mercy of sampling luck. It is possible to estimate the risk of error for any sample size, but psychologists tend to skip this procedure and use their often flawed judgment.
- Psychologists often make the mistake of choosing samples so small that they run a 50% risk of failing to confirm hypotheses that are actually true. A likely explanation is that these researchers have intuitive misconceptions about the extent of sampling variation. Instead of picking a sample size by computation, researchers tend to trust their intuition and tradition.
- A study among researchers, including statisticians, showed that the majority made sample size mistakes. Kahneman and Tversky advocate that researchers should be more suspicious of their statistical intuitions and recommend replacing impressions with computations.
- We often are not ‘adequately sensitive to sample size’. We automatically focus on the story, not on the reliability of the data.
- The law of small numbers is an example of the bias that makes us favor certainty over doubt. The belief that a small sample represents the population from which it is drawn is part of the tendency to exaggerate the coherence and consistency of what we witness. This exaggerated faith in the value of a few observations is related to the halo effect.
- The associative machine searches for causes: how did something come to be? The statistical approach focuses on what could have happened instead. Our preference for causal thinking makes us susceptible to significant mistakes in evaluating the randomness of truly random incidents.
- We seek patterns and believe in a coherent world, in which regularities are the result of intention or mechanical causality: they do not occur accidentally. We refuse to believe regularities are the result of randomness. This misconception, and the ease with which we see patterns when there are none, can have serious consequences.
- Multiple successful shots in a row result in the causal judgment that the basketball player is ‘hot’ and likely to make more shots, and teammates pass more often to this player. However, researchers found that the sequence of missed and successful shots is random. The hot hand is just a cognitive illusion. The public response to this finding was disbelief, due to the strong tendency to see patterns in randomness (the illusion of pattern). This illusion affects our lives in various ways.
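Both claims from this chapter, that small samples produce extreme outcomes more often and that purely random sequences contain convincing streaks, can be checked with a short simulation. Below is a minimal sketch in Python; the 50% success rate, the 30-70% band for a ‘normal’ outcome and the sample sizes are illustrative choices of mine, not values from the book:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def extreme_share(sample_size, trials=10_000, p=0.5):
    """Fraction of samples whose observed success rate falls outside 30-70%."""
    extreme = 0
    for _ in range(trials):
        hits = sum(random.random() < p for _ in range(sample_size))
        if not 0.3 <= hits / sample_size <= 0.7:
            extreme += 1
    return extreme / trials

# Law of small numbers: the same random process looks 'extreme' far more
# often when observed through small samples.
for n in (10, 100, 1000):
    print(f"sample size {n:4d}: {extreme_share(n):6.1%} of samples look extreme")

# Hot hand: even a purely random 50% shooter produces streaks of hits.
shots = [random.random() < 0.5 for _ in range(100)]
longest = current = 0
for hit in shots:
    current = current + 1 if hit else 0
    longest = max(longest, current)
print(f"longest 'hot' streak in 100 random shots: {longest}")
```

On a typical run, roughly one in ten samples of size 10 looks extreme, while samples of size 100 or 1,000 virtually never do, and a ‘hot’ streak of five or more hits in 100 random shots is unremarkable.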
What is the ‘anchoring effect’? – Chapter 11
- The ‘anchoring effect’ is the phenomenon that occurs when you consider a particular value for an unknown quantity, prior to estimating that quantity.
- If you get the question “Was Mother Teresa 112 years old when she died?”, your estimate of her age at death would be significantly higher than it would be if the anchoring question referred to the age of 40 years.
- The anchoring effect is very important and common in our everyday lives. Our judgments are influenced by uninformative numbers.
- Anchoring effects are produced by two mechanisms. One form of anchoring is an operation of System 2: deliberate adjusting. The other form is an automatic operation of System 1: priming.
- The anchoring-and-adjustment heuristic is a good strategy for estimating uncertain quantities: start from the anchoring number, assess whether it is too low or too high and gradually adjust your estimate. The adjustment ends when people are no longer sure they should move further, which is usually too soon.
- Adjusting means deliberately trying to find reasons to mentally move away from the anchor, which requires effort. A mentally depleted person adjusts less (staying nearer to the anchor). Not adjusting enough is a failure of a lazy or weak System 2.
- The priming effect of anchoring is explained by the same automatic operation of System 1 as suggestion. Suggestion is a priming effect, which selectively evokes compatible evidence. Low and high numbers activate different ideas in memory. An anchoring question with a high temperature makes you retrieve summery memories, which leads to a biased estimation of the annual temperature.
- Bringing something to mind is sometimes enough to make you feel, see or hear it. The question “Was Mandela younger or older than 134 when he died?” results in your associative machine generating the impression of a very old man, even though you immediately knew that Mandela did not live to 134.
- System 1 makes sense of statements by attempting to make them true, it tries to create a world in which the anchor is the truth.
- Anchoring is one of the few psychological phenomena that can be measured.
- One group of participants got asked questions with a high anchor (135 years) and another group questions with a low anchor (30 years). The difference between the anchors is 105 years. The difference between the mean estimates produced by the two groups can also be measured; imagine it being 55 years. The ratio of the two differences (55/105) is called the anchoring index: 52%. This is a common value, seen in various cases. The closer to 100%, the closer to the anchor people stay (a small computation follows at the end of this chapter’s list).
- Anchoring effects are particularly strong in decisions regarding money (how much we are willing to pay for something).
- Anchoring seems reasonable in some situations, for instance when the questions asked are difficult. If you know nothing about the topic, you could assume that the anchor number is close to the truth.
- Remarkably, anchoring research shows that obviously random anchors can be just as effective as potentially informative anchors. The anchoring effect does not occur because people believe the anchors are informative.
- Experienced judges were informed about a thief and then had to roll a pair of rigged dice that would always result in a 9 or a 3. The judges were then asked whether they would give the thief a prison sentence shorter or longer than the dice outcome (in months) and, lastly, what exact sentence they would impose. The judges who rolled 9 sentenced the thief to 8 months on average; the judges who rolled 3 gave 5 months: an anchoring effect of 50%.
- The anchoring strategy is also used by house sellers. Making the first move by setting a listing price, the anchor, gives an advantage in the negotiation phase. In order to resist the powerful anchoring effect, potential buyers should activate System 2: focus attention and search memory for counterarguments. You can focus your attention on the minimal offer or on the costs of not reaching an agreement.
- System 2 has no knowledge of and control over the effects of random anchors. People who deny that being exposed to random or nonsensical anchors could have influenced their estimate are wrong.
- Priming and anchoring effects are similarly threatening, because we are unaware of the way they constrain and guide our thinking, even if we are aware of the anchor itself. Advice: assume that any number you see has an anchoring influence on you. Resist that influence by mobilizing your System 2, especially if the stakes are high.
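The anchoring index described above is a simple ratio, which a few lines of Python make concrete. The two group means in the age-at-death example are hypothetical values of mine, chosen to differ by 55 years as in the text:

```python
def anchoring_index(high_anchor, low_anchor, mean_high, mean_low):
    """Ratio of the difference in mean estimates to the difference in anchors.

    0% means the anchors were ignored; 100% means the estimates stuck to them.
    """
    return (mean_high - mean_low) / (high_anchor - low_anchor)

# Age-at-death example: anchors of 135 vs 30 years (difference 105) and
# hypothetical group means of 85 vs 30 years (difference 55).
print(f"age example:    {anchoring_index(135, 30, 85, 30):.0%}")  # 55/105 = 52%

# Judges' dice example: anchors of 9 vs 3 months, mean sentences of 8 vs 5.
print(f"judges example: {anchoring_index(9, 3, 8, 5):.0%}")       # 3/6 = 50%
```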
What is the availability heuristic? - Chapter 12
- What do people do when they want to estimate the frequency of certain categories? The reliance on the ease of memory search (instances coming to mind) is called the ‘availability heuristic’. This heuristic is both an automatic operation (System 1) and a deliberate problem-solving strategy (System 2).
- The availability heuristic substitutes questions, which results into biases (systematic errors). Examples of factors that are potential sources of bias are: conspicuous events attract attention and are easy to retrieve from memory, dramatic events temporarily increase the availability of the concerning category and a personal experience is more available than an incident that happened to someone else.
- It requires a fair amount of effort to resist the many potential availability biases. It takes reconsidering our intuitions and impressions by asking ourselves questions.
- A well-known study of availability indicates that being aware of our own biases contributes to a peaceful marriage, and probably to other joint projects. Surveys among spouses about their own contributions to housekeeping and to causing arguments demonstrated that they remember their own contributions more clearly: the self-assessed contributions added up to more than 100%.
- Psychologist Schwarz assessed how our impressions of the frequency of a category are influenced by the task of listing a certain number of instances. His experiment showed that the judgment can be shaped both by the number of instances retrieved and by the ease with which they come to mind. The first instances come to mind easily, but the fluency of the last instances is low.
- People who list eight instances of indecisive behavior will rate themselves as less indecisive than people who list only three. People who are asked to list eight instances of decisive behavior will think of themselves as rather indecisive. Self-rating is dominated by the ease with which instances come to mind. The fluency of the retrieval counts more than the amount of retrieved instances.
- Numerous experiments have yielded paradoxical results. Examples: people who are asked to report more arguments to support a choice are less confident in that choice; people who had to list many advantages of a gadget were afterwards less impressed by it; and students who listed more ways to improve a course rated it better.
- The ease with which instances are retrieved is a System 1 heuristic, which gets replaced by a focus on content as soon as System 2 becomes more engaged.
- Someone who lets System 1 guide him/her is more susceptible to availability biases than someone who is more vigilant. Conditions in which someone is more affected by the ease of retrieval than by the retrieved content: scoring low on a depression scale, being in a good mood, being simultaneously engaged in another demanding task, and being or feeling powerful.
How do availability, risk and emotion relate to each other? - Chapter 13
- Economist Kunreuther found that availability effects are helpful in explaining the pattern of insurance purchase and prevention following disasters.
- Victims are worried after a disaster, making them more eager to purchase insurance and adopt measures of protection. This is temporary: once the memories fade, so does the worry. The recurrent cycle of disaster, worry and growing complacency can be explained by the dynamics of memory.
- A classic example of an availability bias is a survey carried out to analyse public perceptions of risks. Participants were asked to consider pairs of causes of death, such as accidents and strokes, or asthma and diabetes. They had to indicate the most frequent cause of each pair and estimate the ratio of the two frequencies. Their judgments were then compared to the statistics, and they were systematically wrong: media coverage had warped their estimates of the causes of death.
- Media coverage is biased towards sensationalism and novelty. The media shape the public interest and are shaped by it. Unusual causes of death receive disproportionate attention and are therefore seen as less unusual than they actually are. The world in our mind does not equal the real world. Expectations about the frequency of events are warped by the emotional intensity and prevalence of the information we are exposed to.
- The estimates of causes of death represent the activated ideas in associative memory and are an example of substitution. Research also shows that the ease with which ideas of several risks come to mind and the emotional responses to these risks are connected. Terrifying images and thoughts easily come to mind, and vivid thoughts of danger induce fear.
- Psychologist Slovic introduced the affect heuristic: people rely on their emotions when making decisions and judgments. Do I hate or love it? In many aspects of life, our choices and opinions express our feelings. The affect heuristic is an example of substitution: the difficult question (What do I think about this?) is replaced by the easier question (How do I feel about this?).
- Slovic relates his findings to the work of neuroscientist Damasio: when making decisions, our emotional evaluations of outcomes, the bodily states and the approach and avoidance tendencies connected to them all play a key role. Someone who does not show the appropriate emotions before making a decision also has an impaired ability to make reasonable decisions.
- According to Slovic, people are guided by emotion instead of reason. Experts show a lot of the same biases as ‘normal people’, but their preferences and judgments about risks differ from those of others. Differences between the public and experts reflect a conflict of values. Experts usually measure risks by the number of years or lives lost. The public differentiates between ‘bad and good deaths’.
- Slovic argues that the assessment of a risk depends on the chosen measure. Measurement and risk are both subjective.
- Legal scholar Sunstein disagrees with Slovic. He argues that objectivity can be achieved by expertise, careful deliberation and science. He believes that biased responses to risks are a source of misplaced priorities in United States policy. The system of regulation should reflect objective analysis, not irrational concerns from the public. Citizens are prone to cognitive biases, which in turn influence regulators. Jurist Kuran calls this process of biases turning into policy the ‘availability cascade’.
- An availability cascade can start with media coverage of a relatively minor incident and lead up to public panic and ultimately government action. The ‘Alar scare’ demonstrates this: a huge public overreaction to a chemical sprayed on apples, which turned out to pose a minimal health risk, led to the chemical being pulled from the market.
- Dealing with small risks is a limitation of the mind: we either ignore them completely or give them far too much weight, with nothing in between. The amount of concern does not adequately track the probability of harm: you imagine the dramatic story in the paper (the numerator) and do not think about all the safe cases (the denominator). Sunstein calls this ‘probability neglect’. The combination of availability cascades and probability neglect leads to major exaggeration of minor threats, as sketched below.
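The numerator/denominator reasoning can be written out as a fraction; a minimal formalization (the framing comes from the chapter, the notation is added here for illustration):

```latex
\text{felt risk} \;\propto\; \text{dramatic cases (the numerator)},
\qquad
\text{actual risk} \;=\; \frac{\text{dramatic cases}}{\text{dramatic cases} + \text{safe cases}}
```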
- Nowadays, terrorism is a significant source of availability cascades. Terror attacks cause relatively few deaths, for instance compared to the number of traffic deaths. The difference lies in the availability of the risk: the frequency and ease with which instances are retrieved from memory. Extensive media coverage and horrifying images cause public concern. Terrorism speaks directly to System 1.
- Kahneman shares Sunstein’s discomfort with the influence of availability cascades and irrational concerns on public risk policy. But he also agrees with Slovic’s position that policy makers should not ignore public concerns, whether they are reasonable or not: the public must be protected from fear, not merely from real dangers. Risk policies should combine the emotions of the public with the knowledge of experts.
What is the representativeness heuristic? – Chapter 14
- Imagine drawing one ball from a jar. To determine whether the ball is more likely to be black or yellow, you need to know how many balls of each color there are in the jar. The proportion of balls of a specific color is called a ‘base rate’. We use base-rate information when there is no further information.
- Focusing exclusively on the similarity of someone’s description to stereotypes is called ‘representativeness’.
- When base rates and representativeness clash, substitution is likely to occur: the easier question about similarity (a judgment of representativeness) replaces the harder question about probability. Ignoring base rates and not paying attention to the quality of the evidence in probability tasks is bound to lead to serious mistakes.
- Questions about likelihood or probability activate a mental shotgun: evoking answers to less difficult questions. An example of an easy answer is the automatic assessment of representativeness. System 1 unintentionally produces an impression of similarity.
- Examples of the representativeness heuristic are “She won’t become a good doctor with all those piercings” and “He will receive the most votes, you can see he is a great leader.”
- Although it occurs often, predictions by representativeness are not statistically optimal.
- Intuitive impressions produced by representativeness are usually more accurate than a random guess would be. A person who acts friendly usually is friendly. In most cases, there is some truth to a stereotype. In other cases, the stereotypes are wrong and the representativeness heuristic is misleading, particularly if it causes the neglect of contradictory base-rate information.
- Relying merely on the heuristic, even if it is somewhat valid, goes against statistical logic. The excessive willingness to predict the occurrence of low base-rate (unlikely) events is one sin of representativeness.
- Base-rate information will not always be neglected when more information about the topic is available. Research shows that many people are influenced by explicitly provided base-rate information, although the information about the specific case normally trumps mere statistics.
- System 1 and System 2 are both to blame when a false intuitive judgment is made: System 1 generates the intuition, and System 2 endorses it and expresses it in the form of a judgment. System 2 fails due to either ignorance or laziness: some people ignore base rates because they believe them to be irrelevant when individual information is available (ignorance), while others are simply not focused on the task (laziness).
- Insensitivity to the quality of evidence is another sin of representativeness. This is related to the WYSIATI-rule. System 1 automatically processes the available information as if it were the truth, unless it immediately rejects it (for instance because it came from someone you don’t trust).
- When you are doubting the quality of the evidence, you should let your probability judgment stay near the base rate, which is an effortful exercise of discipline.
- You should stop yourself from believing anything that comes to mind: discipline your intuition. The logic of probability should constrain your beliefs. If you believe there is a 70% chance of snow, you must also believe that there is a 30% chance it will not snow and not believe that there is a 20% chance of snow.
- The rule of Thomas Bayes specifies how prior beliefs (for example, base rates) should be combined with the diagnosticity of the evidence: the degree to which the evidence favors one hypothesis over the alternative. Two ideas are important to remember: base rates matter, and intuitive judgments of the diagnosticity of evidence are frequently exaggerated.
- The combination of associative coherence and WYSIATI has the tendency to make us believe our own fabricated stories. You can discipline your intuition the Bayesian way by anchoring your judgment of the probability of an outcome on a plausible base rate and questioning the diagnosticity of the evidence, as in the sketch below.
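A minimal sketch of this Bayesian discipline in Python. The update rule itself is standard Bayes in odds form; the example numbers and the scenario are hypothetical illustrations, not from the book:

```python
def bayes_update(base_rate: float, likelihood_ratio: float) -> float:
    """Combine a base rate (the prior) with the diagnosticity of new evidence.

    likelihood_ratio = P(evidence | hypothesis) / P(evidence | alternative).
    Returns the posterior probability of the hypothesis.
    """
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical numbers: a 3% base rate combined with moderately diagnostic
# evidence (likelihood ratio 4) still yields a low probability (~11%),
# because a plausible base rate dominates weak evidence.
print(bayes_update(0.03, 4.0))  # ~0.11
```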
What is meant by the ‘less-is-more’ pattern? - Chapter 15
- A famous and controversial experiment is known as the ‘Linda problem’. It was made up by Amos and Kahneman to demonstrate the role of heuristics in judgment and their incompatibility with logic.
- Participants were asked to read a list of possible scenarios regarding Linda and rank them by representativeness and by probability. They agreed that one scenario (“She is a feminist bank teller”) seems more likely than another one (“She is a bank teller”).
- The twist is found in the judgments of probability, because there is a logical connection between both scenarios. Since every feminist bank teller is a bank teller, the probability of Linda being a feminist bank teller must be lower than the probability of Linda being merely a bank teller. Specifying possible events in greater detail always lowers the probability. This issue causes a conflict between the logic of probability and the intuition of representativeness. The participants ranked ‘feminist bank teller’ higher than ‘bank teller’ in their ranking by probability and by resemblance.
- The scenarios ‘bank teller’ and ‘feminist bank teller’ were placed on the list as number 6 and 8, close to each other. Kahneman and Amos expected that participants would notice the connection between them and that their rankings would follow logic. But surprisingly, they had ranked ‘feminist bank teller’ as more likely. Representativeness had won the battle, which is considered a failure of System 2.
- Failing to apply an obviously relevant logical rule is called a ‘fallacy’. It is a ‘conjunction fallacy’ when people judge a conjunction of two events (here: feminist and bank teller) to be more probable than one of the events alone (bank teller) in a direct comparison.
- In the short version of the Linda problem, participants simply had to say which of two alternatives is more likely: “She is a bank teller” or “She is a bank teller and a feminist”. Surprisingly, even in this direct comparison a large majority (85–90% of undergraduates in the original study) still judged the conjunction more probable. Kahneman and Amos had expected that removing the intervening scenarios of the long list and forcing a direct comparison would mobilize System 2 and prevent the fallacy, but it did not.
- Hsee’s dinnerware study demonstrates how absurd the less-is-more pattern is. He asked participants to price dinnerware sets. One group was shown a display that allowed a comparison between two sets (set X: 40 pieces, of which 9 are broken; set Z: 24 intact pieces). This is called a ‘joint evaluation’. Two other groups were each shown only one of the sets, making it a ‘single evaluation’. Set X contains everything in set Z plus seven extra intact pieces, so it must be valued higher, and the joint evaluation group indeed priced set X higher. In single evaluation, however, set Z was valued much higher, because the average value of the pieces is much lower for set X due to the broken pieces, and single evaluation is dominated by the average. Hsee calls this pattern ‘less is more’: removing the broken pieces from set X would raise its price, just as adding a high-valued item raises the value of a set. See the sketch below.
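The arithmetic behind the pattern is easy to check; a small sketch in which the per-piece values are invented for illustration (only the piece counts come from the study as summarized above):

```python
# Hypothetical per-piece values; what matters is the pattern, not the numbers.
INTACT_VALUE = 2.0   # value of an intact piece
BROKEN_VALUE = 0.0   # a broken piece is worthless

def total_and_average(intact: int, broken: int) -> tuple[float, float]:
    total = intact * INTACT_VALUE + broken * BROKEN_VALUE
    return total, total / (intact + broken)

total_x, avg_x = total_and_average(intact=31, broken=9)   # set X: 40 pieces, 9 broken
total_z, avg_z = total_and_average(intact=24, broken=0)   # set Z: 24 intact pieces

print(total_x > total_z)  # True: set X objectively contains more value,
print(avg_x < avg_z)      # True: but its average piece is worth less, and
                          # single evaluation is dominated by that average.
```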
- The incidence of the conjunction fallacy can be reduced by asking an easier question. The question “What percentage of the participants…?” is much harder than “How many of the 100 participants…?”: 100 individual people are easy to imagine, while the percentage question does not make you think of individuals.
Why do causes trump statistics? – Chapter 16
- There are two types of base rates: statistical base rates (facts about a population that are not seen as relevant to the individual case) and causal base rates (which change your view of how the individual case came about). Statistical base rates are often underweighted or even neglected when specific information about the individual case is available.
- Causal base rates are used as information about a concrete case and are easily combined with other relevant facts.
- Stereotypes: statements about a group that are accepted as facts about individual members.
- System 1 represents categories by prototypical exemplars and norms: our memory holds a representation of one or more typical members of each category (cats, blenders). When the category is social, such a representation is called a stereotype.
- In cases such as profiling or hiring, stereotyping is seen as morally (and legally) wrong, and causal base rates get rejected. However, rejecting valid stereotypes results in suboptimal judgments. It might be politically correct, but it is not costless.
- Two kinds of inference are drawn from causal base rates: that an individual has the stereotypical traits of the group, and that a significant feature of the situation influences individual outcomes.
- The well-known ‘helping experiment’ illustrates that people won’t draw inferences from base-rate information if they conflict with other beliefs. It also suggests that teaching psychology is hard.
- The ‘helping experiment’ shows that people feel relieved of responsibility when they know that other people heard the same appeal for help. This is surprising, because we tend to see ourselves as decent people who would immediately help others in need. That expectation proves to be wrong, which is something psychology teachers try to make their students aware of. It is, however, not easy to (negatively) change their minds about human nature and our behavior in certain situations.
What is regression to the mean? - Chapter 17
- A key principle of skill training is that rewarding improvement works better than punishing mistakes. An experienced flight instructor doubted this: he stated that his students performed worse after receiving a compliment and better after being shouted at. He was both right and wrong. A praised (excellent) performance is indeed likely to be followed by a poorer one, and a punished (poor) performance is normally followed by an improvement. But the conclusion he drew about the efficacy of punishment and reward was wrong. What he observed is known as ‘regression to the mean’, which here was due to random fluctuations in the quality of performance. The instructor’s mistake was attaching a causal interpretation to those random fluctuations.
- The difference between a first and a second performance does not need a causal explanation: it is a mathematical consequence of luck.
- The ‘correlation coefficient’ between two measures is a measure of the relative weight of the shared factors and varies between 0 and 1. Regression and correlation are different perspectives on the same concept. An imperfect correlation between two scores means that there will be regression to the mean.
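The link between correlation and regression can be made concrete; a minimal sketch, assuming both measurements share the same mean and spread and using an arbitrary correlation of 0.5:

```python
def predict_second_score(first_score: float, mean: float, r: float) -> float:
    """Best linear prediction of a repeated measurement, assuming both
    measurements share the same mean and standard deviation.

    With correlation r < 1 the prediction always lies closer to the mean
    than the first score did: regression to the mean, no cause required.
    """
    return mean + r * (first_score - mean)

# A student flies exceptionally well (90 against a mean of 70). With an
# assumed correlation of 0.5 between two flights, the expected next score
# is 80: worse than 90 whether or not the instructor praises the landing.
print(predict_second_score(90.0, mean=70.0, r=0.5))  # 80.0
```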
- The concept of regression is difficult, because our mind does not handle mere statistics well: it is strongly biased towards causal explanations.
- Associative memory starts looking for a cause whenever an event catches our attention. This is problematic when regression to the mean is detected, because regression has an explanation but no cause.
- Both System 1 and System 2 struggle with regression. While System 1 searches for causal interpretations, System 2 finds the relation between regression and correlation hard to understand.
- Not only newspaper readers are prone to wrong causal interpretations of regression effects; even researchers make this mistake.
- To prove whether a treatment is effective, a group of patients receiving the treatment must be compared to a control group (receiving no treatment or a placebo). The control group will improve by regression alone, so the question is whether the treatment group improves more than regression can explain.
How can intuitive predictions be tamed? - Chapter 18
- Forecasting is a major part of our professional and private lives. A number of predictive judgments are based on analyses or computations, but most involve System 1 and intuition. Some intuitions draw on expertise and skill, gained through experience.
- The automatic and quick judgments and decisions of physicians, chess masters and fire chiefs are examples of skilled intuitions: they quickly come up with solutions because they recognize familiar cues. Other intuitions are the result of (substitution) heuristics. Numerous judgments arise from a combination of intuition and analysis.
- A question regarding a current situation and a prediction activates System 1. “Mark is currently a bachelor student. He could count to 30 when he was two years old. What is his GPA?” People who have knowledge about the educational system provide quick answers thanks to the operations of System 1: seeking a causal connection, evaluating the evidence in relation to the relevant norm, substitution and intensity matching.
- When asked for a prediction, people substitute an evaluation of the evidence, without being aware that the question they answer is not the question they were asked. This produces systematically biased predictions, because regression to the mean is fully ignored.
- The right way to predict Mark’s GPA starts from the factors that determine college grades and counting age: GPA = factors specific to GPA + shared factors (= 100%), and counting age = factors specific to counting age + shared factors (= 100%). The shared factors are the degree to which the family supports academic interests, genetically determined aptitude, and anything else that would cause the same people to be precocious counters as toddlers and academically successful as adults.
- The correlation between the two measures (GPA and counting age) equals the proportion of shared factors among their determinants. Assume this proportion is 30%. You are now ready to generate an unbiased prediction in four steps: estimate the average GPA (the baseline), determine the GPA that matches your impression of the evidence (the intuitive prediction), estimate the correlation between GPA and counting precocity, and then move that proportion (here 30%) of the distance from the baseline towards the intuitive prediction. This makes the prediction suitably moderate, as in the sketch below.
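A sketch of the four-step procedure (the baseline and intuitive GPA values are invented for illustration; the 30% correlation is the chapter's assumption):

```python
def corrected_prediction(baseline: float, intuitive: float, correlation: float) -> float:
    """Steps 1-4: start from the baseline and move toward the intuitive
    prediction by a proportion equal to the correlation."""
    return baseline + correlation * (intuitive - baseline)

# Step 1: the average GPA as the baseline (say 3.0, hypothetical).
# Step 2: the GPA matching the impressive evidence (say 3.9, hypothetical).
# Step 3: the correlation between counting precocity and GPA (0.30).
# Step 4: move 30% of the distance from 3.0 toward 3.9.
print(corrected_prediction(baseline=3.0, intuitive=3.9, correlation=0.30))  # 3.27
```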
- This is a general roadmap for predicting quantitative variables, such as GPA, company growth or investment profit. It builds on intuition but moderates it by regressing it towards the mean. An intuitive prediction is not regressive and is therefore biased; it needs to be corrected.
- Common biases of predicting the probability of an outcome are insensitivity to the accuracy of evidence and neglect of base rates. The biases of predictions that are expressed on a scale and the corrective procedures are similar to the biases of discrete predictions.
- The corrective procedures are similar: both contain a baseline prediction and an intuitive prediction, and both aim for a final prediction that lies between the two. In the absence of relevant evidence you stay with the baseline; at the other extreme, with flawless evidence, you stay with the intuitive prediction.
- System 2 is responsible for correcting intuitive predictions. Finding the relevant reference category, estimating the baseline prediction and evaluating the quality of the evidence require effort, which is justified only when the stakes are high and you cannot afford to make mistakes.
- A willingness to predict rare events from weak evidence, and to make extreme predictions, is typical of System 1. The associative machinery naturally matches the extremeness of a prediction to the perceived extremeness of the supporting evidence (substitution). System 1 also produces overconfident judgments. On the other hand, System 1 cannot grasp the idea of regression; students often struggle with this topic, and System 2 needs special training to comprehend it.
What is the illusion of understanding? – Chapter 19
- The concept of a ‘narrative fallacy’ was introduced by Nassim Taleb and describes how flawed stories of the past shape our current views and future expectations. An explanation is considered more appealing if it is concrete, assigns a significant role to talent, intentions or ignorance (rather than luck) and focuses on a few conspicuous events that happened rather than on the countless events that did not happen.
- People are prone to interpret someone’s behavior as a reflection of personality traits and general propensities, which are easy to match to effects. The halo effect contributes to coherence: our judgment of one significant attribute influences how we view all the others. If you consider a soccer player to be strong and attractive, you are likely to think of him as an excellent player as well; if you find him unattractive, you will probably underrate his soccer skills.
- The halo effect exaggerates the consistency of judgments: bad people are all bad and nice people do only nice things. Reading “Hitler liked cats and toddlers” causes a shock, because such a bad person having a good side violates our expectations.
- An explanation can be tested by determining whether it would have made the event predictable in advance. The story about a very successful company won’t meet that test, because no story can include all the events that would have caused a divergent outcome.
- Our minds can’t handle events that did not happen. The fact that most significant events involved choices makes you exaggerate the role of skill and underestimate the influence of luck. Although the founders of the successful company were skilled, luck had a big influence on the great outcome. This demonstrates the power of the WYSIATI-rule. You deal with the restricted information you received as if it were all there is to know. You construct the best possible story from the available information and if it’s a nice one, you believe it.
- People saying “I knew well before the economic crisis happened that it was inevitable” are wrong: they thought it would happen, they did not ‘know’ it. They say ‘knew’ afterwards only because it did happen.
- It is an illusion to believe that we understand the past, because we understand it less than we believe we do. The words ‘know’, ‘premonition’ and ‘intuition’ refer to past thoughts that turned out to be true. They need to be avoided in order to think clearly about future events.
- Our mind is limited by its flawed ability to reconstruct beliefs that have changed or past states of knowledge. As soon as you adjust your view of the world, you are not able to recall your past belief. Instead of reconstructing what they used to believe, people retrieve their current belief (substitution) and most people cannot believe they ever had another belief.
- Not being able to reconstruct former beliefs causes us to underestimate the extent to which we were surprised by past events. This is called the ‘hindsight bias’ or the ‘I-knew-it-all-along’ effect.
- Studies demonstrate how we tend to revise our past beliefs in light of what actually occurred, which generates a cognitive illusion.
- Hindsight bias negatively affects the evaluations of decision makers. The quality of decisions should be assessed by whether the process was right, not by whether the outcome was right.
- Imagine a low-risk surgery going wrong due to an unpredictable accident. People are afterwards likely to believe that the surgery was actually risky and that the doctor’s decision to order it was wrong. This is an example of the outcome bias, which makes it very hard to evaluate a decision properly.
- Hindsight is particularly troubling for people who make decisions for others, like financial advisers, politicians or physicians. When the outcome is bad, clients usually blame them for failing to see it coming, although the signs only became clear afterwards.
- Decision makers who fear having their decisions scrutinized in hindsight tend to change their procedures, which leads to bureaucracy and increased social costs. Physicians order more tests, refer more people to specialists and apply treatments that probably won’t work.
- Hindsight and the outcome bias can also result in rewarding irresponsible decision makers who took big risks and got lucky.
- System 1’s habit of trying to make sense of things makes us view the world as more simple, coherent, tidy and predictable than it actually is. The illusion that we understand the past feeds the illusion that we are capable of predicting and controlling the future. These illusions make us feel comfortable, as acknowledging the uncertainty of our existence would make us anxious.
- Managers and leaders influence the outcomes of their businesses, but the impact of management practices and leadership style on success are often exaggerated in success stories.
- If you ask business experts what they think of a CEO’s reputation, their knowledge of whether the business is doing well or poorly produces a halo. The CEO of a profitable company will be praised; if things go south a year later, the same CEO will be reviewed negatively. While both reviews seem correct at the time, it is odd to say contradictory things about the same person (first decisive, then confused). This illustrates the power of the halo effect.
- Backward causal relationship: we tend to believe that the business fails because the leader is confused, but the opposite is true: the leader appears confused because the business is doing poorly.
- The combination of the outcome bias and the halo effect explains the popularity of books with titles like ‘How to Build a Successful Business’. The key message of these books is that good management practices will be rewarded with profit, but the difference between a successful company and a less successful one is often not great leadership but luck.
- Even if you were convinced that a leader is extremely competent and visionary, you would not be able to predict the performance of the company. The average gap between compared successful and less successful companies shrank over time, most likely because the original gap was largely due to luck (regression to the mean).
What is the illusion of validity? - Chapter 20
- System 1 is known for jumping to conclusions from limited evidence (WYSIATI). The coherence of the story created by System 1 and System 2 makes us confident about our opinions; the quality and amount of the evidence matter less, because even poor evidence can make a good story. It is remarkable how confident we can be in our beliefs when we know so little.
- A visual counterpart of the illusion of validity is the Müller-Lyer illusion.
- Confidence reflects the coherence of the information and the cognitive ease of processing the information. Remember that a very confident person has formed a coherent story in his mind, which does not necessarily mean it’s the truth.
- Illusion of stock-picking skill: stock traders believe they know more about future prices than others, but for many of them that belief is an illusion. A study showed that, on average, the shares traders sold did much better than the ones they bought, that the most active investors earned the lowest returns, and that the investors who traded least had the best results.
- Other research indicates that men acted on their bad predictions more often than women, which is why women had better investment outcomes.
- Individual investors are more influenced than professional investors by companies being in the news. Only a few stock pickers have the skill to beat the market repeatedly, and even professional investors do not achieve this persistently. Persistence of individual differences in achievement is the proof of skill, but research shows that most fund managers select stocks as if they were rolling dice: they play a game of chance, not of skill. A fund having a good year is mostly a matter of luck.
- Visual illusions tend to be less stubborn than cognitive illusions. Learning about the Müller-Lyer illusion changes your behavior, but not how you see the lines: you know you cannot trust your perception of their length. Investors who are told that their good outcomes were the result of luck rather than skill still believe they are beating the market, despite the statistical facts proving otherwise. They accept the information intellectually, but it has no effect on their feelings.
- The illusion of skill is persistent in the financial world, but why? Stock pickers are highly skilled when it comes to consulting data, examining balance sheets and assessing the competition. Their work requires a lot of training and experience in using these skills. However, they lack the skill of knowing whether the information about a company is already incorporated in the price of their shares and seem unaware of their ignorance.
- Subjective confidence is a feature of System 1. The illusions of skill and validity are supported by the powerful professional culture of the financial community. A lot of members believe they can do something that someone else cannot.
- It is hard for people to accept that the future cannot be predicted, because of the ease with which they can explain the past. In hindsight, many things make sense, which creates the intuition that what makes sense in hindsight today could have been predicted yesterday.
- The illusion that we understand the past makes us overconfident in our ability to predict the future. We think the past can be explained by focusing on the abilities and intentions of a few great leaders, social movements, or technological and cultural developments. We cannot believe that big historical events are determined by luck.
- The illusion of valid prediction gets exploited by pundits in politics, business, media or the financial world. Newspapers and television stations hire experts to evaluate the past and predict the future. Readers/viewers think they receive insightful information, which the experts think they are offering.
- Psychologist Tetlock collected over 80,000 expert predictions and the outcome was shocking: the experts performed worse than they would have by simply assigning equal probabilities to the possible outcomes. Even in their own fields of expertise, they did no better than non-experts. The study also showed that these experts were less willing to admit they had been wrong and came up with a range of excuses.
- People with the most knowledge are frequently less reliable, because they develop an illusion of skill and become overconfident. The more famous the expert, the more overconfident he is and the more outrageous his predictions are.
- The lessons of this chapter are, first, that prediction errors are inevitable because the world is unpredictable, and second, that high subjective confidence cannot be trusted as an indicator of accuracy. Short-term trends can be predicted, and achievements and behavior can be forecast from previous achievements and behavior, but you should not rely on long-term predictions made by pundits.
How do intuitions and formulas relate to each other? – Chapter 21
- Psychologist Meehl reviewed the results of studies that had assessed whether ‘clinical predictions’ based on the subjective impressions of trained professionals were more accurate than ‘statistical predictions’ made by combining ratings or scores according to a rule.
- In one study, trained counselors were asked to predict the grades of students at the end of their first school year. The counselors interviewed the students and had access to personal statements, aptitude tests and high school grades. The statistical formula used only one aptitude test and the high school grades, yet it was more accurate than 11 of the 14 counselors. Reviews of other studies showed similar results for a variety of predictions: criminal recidivism, parole violations, success in pilot training.
- The outcome shocked clinical psychologists and led to many more studies. But fifty years later, algorithms still score better than humans: about 60% of the studies show that algorithms are more accurate, and the rest end in a tie.
- Domains that involve a fair amount of unpredictability and uncertainty are called ‘low-validity environments’. Examples are medical variables (longevity of patients, diagnoses of diseases, length of hospital stay), economic measures (prospects of success, assessments of credit risks) and governmental interests (odds of recidivism, likelihood of criminal behavior). In all these cases, the accuracy of the algorithms was better than or equal to that of humans.
- Simple statistics beat the predictions of world-renowned professionals. Meehl’s explanation is that experts try to be smart, consider complex combinations of features and think outside the box. Complexity usually reduces validity.
- Research has shown that human experts are inferior to formulas even when they are handed the score predicted by the formula. They believe they can do better than the formula because they have more information about the case.
- Another explanation is that people are inconsistent when making summary judgments of complex information: two evaluations of the same information often result in two different answers. This inconsistency is probably caused by System 1’s dependence on context; unnoticed stimuli in our environment influence our thoughts and actions.
- Meehl’s research indicates that final decisions should be made by formulas, particularly in low-validity environments. The final selection of students for medical schools is often determined by interviewing the candidates, which reduces the accuracy of the selection procedure. Interviewers have too much confidence in their intuitions and favor their impressions over other information sources, which reduces validity.
- The dominant statistical practice in the social sciences is to assign weights to several predictors by the formula known as ‘multiple regression’. Robyn Dawes argued that this complex statistical algorithm adds little value. Studies show that formulas assigning equal weights to all predictors are often superior, because they are not affected by accidents of sampling. Equal weighting has a major advantage: useful algorithms can be developed without any prior statistical research. Simple equally weighted formulas based on common sense or on existing statistics are excellent predictors of significant outcomes; a sketch of such a formula follows below.
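A sketch of what such an equal-weight formula can look like; the predictors echo the counselor study above, but the names and numbers are hypothetical:

```python
def equal_weight_score(case: dict[str, float],
                       stats: dict[str, tuple[float, float]]) -> float:
    """Standardize each predictor and sum with equal weights, instead of
    fitting regression weights to a (noisy) sample."""
    return sum((case[name] - m) / sd for name, (m, sd) in stats.items())

# Hypothetical applicant-pool statistics: predictor -> (mean, std dev).
stats = {"aptitude_test": (500.0, 100.0), "high_school_gpa": (3.0, 0.5)}
applicant = {"aptitude_test": 620.0, "high_school_gpa": 3.4}

print(equal_weight_score(applicant, stats))  # 1.2 + 0.8 = 2.0
```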
- Clinical psychologists received Meehl’s findings with disbelief and hostility, driven by an illusion of skill regarding their ability to make long-term predictions, while their accurate judgments are mostly short-term predictions. The hostility towards formulas will probably diminish as their value in our daily lives becomes more and more visible: recommendations by software, decisions about credit limits, health guidelines and the payment of athletes.
When can we trust an expert intuition? – Chapter 22
- Gary Klein is the intellectual leader of the students of Naturalistic Decision Making (NDM), who study real people in natural situations. He rejects the heuristics-and-biases focus on flaws, dislikes artificial experiments, and is highly skeptical about preferring algorithms to human judgment.
- Klein is known for studies of expertise in firefighters and the development of intuitive skills in experienced experts.
- Despite their differences, Kahneman worked together with Gary Klein on a joint project in order to answer the question “When can you trust an experienced professional who claims to have an intuition?”
- They agreed in their assessment of Gladwell’s bestselling book ‘Blink’, which describes art experts who had a gut feeling that an object was a fake but could not tell exactly what made them think so. They knew it was a fake without knowing how they knew: a perfect example of intuition.
- While Kahneman’s views of intuition were shaped by observing the illusion of validity in himself and by reading Meehl’s review of clinical predictions, Klein’s thinking was shaped by his studies of fireground commanders. He introduced the ‘recognition-primed decision’ (RPD) model, which applies to many kinds of experts (from fire commanders to chess masters). System 1 and System 2 are both involved: a tentative plan automatically comes to mind (System 1) and then gets mentally tested (System 2). The model of intuitive decision making turns on recognition: the situation provides a cue, the cue retrieves information from memory, and that information provides the answer. Intuition is merely recognition.
- Emotional learning stores information in memory quickly: a scary experience stays with you for a long time, and fear can be learned through experience as well as through words. Soldiers are trained to identify dangerous situations, and firefighters discuss all types of fires with each other. But while emotional learning is quick, developing expertise takes a long time. Chess masters need more than 10,000 hours of practice to reach the top; during those hours, players become familiar with the possible moves and learn to read a situation quickly.
- How do we know when judgments reflect true expertise? The answer lies in the two conditions for acquiring a skill: the environment must be sufficiently regular to be predictable, and there must be an opportunity to learn these regularities through prolonged practice. When both conditions are met, an intuition is normally skilled. Chess players, nurses, physicians, firefighters and athletes operate in regular, orderly environments. Political scientists and stock pickers do not: they operate in irregular (zero-validity) environments.
What is the importance of ‘the outside view’? - Chapter 23
- Kahneman was asked to write a textbook about decision making and judgments. After one year, a number of chapters and the syllabus had been written, which was considered good progress. Kahneman asked his team to separately estimate how long it would take to finish the textbook. The average estimate was two years.
- He then asked an expert in developing curricula, who was part of the team, how long it took for similar teams to finish a textbook. He answered that about 40% of the teams never managed to complete one. Kahneman never considered the possibility of failing. The teams that completed the task had finished the book in seven to ten years. He also rated the resources and skills of Kahneman’s team slightly below average.
- Even the expert himself was surprised by this, as his own earlier estimate had been two years. Until the questions were asked, his mind had made no connection between his knowledge of other teams’ progress and his forecast for the team he was in.
- While everybody ‘knew’ that a 40% chance of failure and a minimum of seven years were more likely than the prediction of two years, they did not act on this information. It seemed unreal, because it was impossible to imagine the project taking so long. The reasonable plan to finish the book in two years conflicted with the statistics. The base-rate information should have led to the conclusion that writing a textbook is much harder than they had thought, but that clashed with their direct experience of making good progress. The textbook was finished eight years later, after numerous unpredictable delays.
- Three lessons were learned from this story: there is a distinction between two very different approaches to forecasting (the inside view and the outside view); the initial predictions exhibited a planning fallacy; and there was irrational perseverance (the project was not cancelled when it should have been).
- The inside view was adopted to assess the future of the project: the team focused on its specific circumstances and searched for evidence in its own experience. They knew how many chapters they were going to write and how long it had taken to write the chapters already finished. Only a few less optimistic members added some months to their estimates as a margin of error.
- The predictions were based on the available information (WYSIATI), but the chapters already written were probably the easiest, and the team’s motivation was at its peak.
- The biggest problem was failing to take the ‘unknown unknowns’ into account. On that day, the events that would make it a prolonged project were not foreseen (sickness, divorces, bureaucracy). A failed plan can have many reasons, and although most of them are very unlikely to happen, the probability that something will go wrong in a major project is high.
- The baseline prediction (seven to ten years and a 40% chance of failing) should have been the anchor for further adjustments. The comparison of the team with other teams even suggested that the predicted outcome should be worse than this baseline. This is the outside view, and it indicated that the inside-view predictions were not even close. The gap between the expert’s two judgments is remarkable: he had all the relevant knowledge in his head, yet he did not use it. The other team members had no access to the outside view, but they also never felt they needed information about similar teams.
- This happens often: people who have information about one case almost never feel the need to know the statistics of similar cases. When they were told about the outside view, they ignored it. Statistical information tends to get ignored when it challenges someone’s own impressions of a case. The inside view beats the outside view.
- Forecasts and plans suffer from the ‘planning fallacy’ when they are unrealistically close to best-case scenarios and could be improved by consulting the statistics of similar cases. Taking an outside view can prevent the planning fallacy.
- The remedy for the planning fallacy, introduced by planning expert Flyvbjerg, is called ‘reference class forecasting’: anchoring predictions on the documented outcomes of numerous comparable projects worldwide.
- People frequently take on risky projects because they are too optimistic about the odds: they underestimate the costs and overestimate the benefits. Many executives fall victim to the planning fallacy. They base their decisions on unjustified optimism instead of on a rational weighting of probabilities, losses and gains.
What is the optimistic bias? - Chapter 24
- The planning fallacy is one of many manifestations of the optimistic bias. Many people view their attributes as more favorable than they probably are and consider their goals as more achievable than they probably are. Optimistic bias can be a risk as well as a blessing, which is why you should be cautious when you feel optimistic.
- Some people are more optimistic than others. They are usually happy, popular and resilient, and they play a disproportionate role in shaping society. Their decisions have an impact on others: they are leaders, inventors and entrepreneurs. They seek challenges, take risks, and are talented and lucky. Their successes and the admiration of others make them even more confident.
- Hypothesis: the most influential people are likely to be overconfident and optimistic, and take more risks than they are aware of. The evidence indicates that an optimistic bias causes institutions or people to take on risks.
- The chance that a small company will survive for five years in the US is slightly over 33%. But someone who starts a company believes that these statistics do not apply to him or her: research shows that American entrepreneurs tend to believe their own company is different, estimating their chance of success at almost twice the base rate: 60%. Would they still have invested time and money if they had known the true odds? They never considered the outside view.
- One of the supposed benefits of optimism is persistence in the face of obstacles. But persistence can be costly: studies show that almost half of the people continued their project even after being told it would not succeed, thereby doubling their initial losses.
- According to psychologists, the majority of people genuinely believe that they are better than others, and they would even bet money on this belief. It has significant consequences in the market: misguided acquisitions by large businesses are explained by the ‘hubris hypothesis’, which holds that the leaders of acquiring firms are less competent than they think they are.
- The optimistic risk taking of entrepreneurs contributes to the economic dynamism of a capitalist society, but it also raises policy issues. Should founders of small companies be financially supported by the government when they are so likely to fail? There is no satisfying answer to this question.
- Entrepreneurial optimism is not explained by wishful thinking alone; emotions and cognitive biases also play a significant role, especially the WYSIATI rule of System 1. Focusing on the goal while neglecting relevant base rates can result in the planning fallacy. Focusing on the causal role of skill while neglecting the role of luck can result in the illusion of control. Focusing on what is known while neglecting what is not known leads to overconfidence.
- Many founders believe that the success of their company depends largely on their own effort: they think their fate is almost completely in their own hands. This is not true: changes in the market and the achievements of competitors are just as important. Entrepreneurs focus on what they know, their plans, actions, opportunities and most immediate threats (WYSIATI), and usually know very little about their competitors. This is called ‘competition neglect’.
- Another manifestation of WYSIATI is overconfidence. When people estimate a quantity, they rely on the information that comes to mind and construct a coherent story in which it makes sense. The consequences can be costly: overconfident experts are also overconfident about the prospects of their own company and willing to take risks they should avoid. Ironically, companies and individuals reward misleading optimists more than they reward truth tellers.
- Overconfident optimism is very hard to overcome by training. Overconfidence is an immediate consequence of features of System 1 that can be tamed but not eliminated.
- The biggest obstacle is that subjective confidence is determined by the coherence of the constructed story, not by the amount and quality of the supporting information. Organizations are better at taming optimism than individual people.
- The best remedy comes from Gary Klein and is called the ‘premortem’. When an organization is about to make an important decision, a group of individuals with relevant knowledge of the decision gathers for a brief session. They have to imagine being one year into the future, with the outcome of the decision having turned out extremely badly, and write a brief history of how that disaster came about. The premortem counters the groupthink that grips many teams once a decision seems close to being made, and it unleashes the imagination of knowledgeable individuals in a much-needed direction.
What are ‘Bernoulli’s errors’? – Chapter 25
- Economists and psychologists have very different views of people. The former think of them as rational and selfish beings; the latter argue that people are neither completely rational nor completely selfish.
- Kahneman and Amos studied people’s attitudes to risky options in order to answer the question “What rules govern choices between different simple gambles, and between sure things and gambles?”
- A simple gamble is, for instance, “a 45% chance to win € 500”. In a gamble, the consequences of the choice are always uncertain. Choices between simple gambles provide a model that shares its main features with more complex decisions. The ‘expected utility theory’ was the basis of the rational-agent model and still is the most important theory in the social sciences.
- The study of Kahneman and Amos resulted in ‘prospect theory’, a descriptive model constructed to explain systematic violations of the axioms of rationality in choices between gambles. Their article about the theory is one of the most cited in their field.
- A few years later they published an essay about framing effects: the significant changes of preference that are sometimes caused by inconsequential variations in the wording of a choice problem.
- Daniel Bernoulli introduced a theory about the relationship between the psychological desirability or value of money (now: utility) and the actual amount of money. According to Bernoulli, a gift of 10 euros has the same value to someone who already has 100 euros as a gift of 20 euros to someone who already has 200 euros.
- Psychological responses to a change in wealth are proportional to the initial amount of wealth: utility is a logarithmic function of wealth.
- Bernoulli argued that most people dislike risk and want to avoid the poorest outcome: people will choose the sure thing even when it is less than the expected value of the gamble. His theory is that the psychological value of a gamble is the average of the utilities of its outcomes, each weighted by its probability, and not the weighted average of the possible monetary outcomes. The theory explains why poor people buy insurance and why wealthy people sell it to them (see the sketch below).
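Bernoulli's proposal is easy to compute; a minimal sketch with logarithmic utility and arbitrary amounts:

```python
import math

def expected_utility(prospect: list[tuple[float, float]], wealth: float) -> float:
    """Bernoulli-style value of a prospect: sum of p * log(resulting wealth)."""
    return sum(p * math.log(wealth + gain) for p, gain in prospect)

WEALTH = 1_000.0
sure_thing = [(1.0, 500.0)]              # receive 500 for certain
gamble = [(0.5, 1_000.0), (0.5, 0.0)]    # 50% chance of 1000, else nothing

# Both options have an expected value of 500, but the sure thing has the
# higher expected utility: Bernoulli's explanation of risk aversion.
print(expected_utility(sure_thing, WEALTH) > expected_utility(gamble, WEALTH))  # True
```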
- 300 years later, his theory of risk attitudes and the preference for wealth is still being used in economic analysis. This is quite surprising, as it is seriously flawed.
- Today Molly and Mike each have a wealth of 6 million. Yesterday, Molly had 2 million and Mike had 10 million. Do they have the same utility? (Or: are they equally content?) According to Bernoulli’s theory they are, but this is obviously not the case: Mike is far less content. Bernoulli’s model does not take their reference points (2 million for Molly and 10 million for Mike) into account.
- Why is Bernoulli’s theory still so popular? The explanation is ‘theory-induced blindness’. Once people have accepted a theory and used it in their thinking, it is extremely hard to notice the flaws. The theory gets the benefit of the doubt if your observation does not fit the model, because all the other experts use it. Disbelieving requires effort and System 2 is lazy.
What is the prospect theory? – Chapter 26
- In utility theory, the utility of a gain is determined by comparing the utilities of two states of wealth. The utility of receiving an extra € 400 when your wealth is € 2 million is the difference between the utility of € 2,000,400 and the utility of € 2 million. If you lose € 400, the disutility is again the difference between the utilities of the two states of wealth. It was simply assumed that the distinction between losses and gains did not matter, so it was never examined: theory-induced blindness again.
- Kahneman and Amos had focused on differences between gambles with low or high probabilities of winning, until Amos casually suggested looking at losses. For losses, risk aversion turned out to be replaced by risk seeking.
- Reference point: the previous state relative to which losses and gains are evaluated. Reference points usually get ignored by people and Bernoulli’s theory lacks them. The prospect theory takes reference points into account.
- Prospect theory involves three cognitive features (associated with System 1) that play a crucial role in the evaluation of financial outcomes and are common to many automatic processes of perception, judgment and emotion: evaluation is relative to a neutral reference point (the ‘adaptation level’); the principle of diminishing sensitivity applies both to sensory dimensions and to the evaluation of changes of wealth; and losses loom larger than gains (loss aversion).
What is the endowment effect? – Chapter 27
- Imagine a graph displaying someone’s ‘indifference map’ for two goods: income and vacation days. Each point on the map represents a particular combination of the two goods, and each curve connects the combinations that are equally desirable: they have the same utility. The convex shape reflects diminishing marginal utility: the more vacation days you have, the less you care for one more, and each added day is worth less than the previous one. All points on one indifference curve are equally appealing.
- All economics textbooks for students contain images of indifference curves, but only a few students have noticed that something is missing: an indication of the person’s current income and vacation days, also known as the reference point. This is an example of Bernoulli’s error.
- Utility is not completely determined by your current situation; the past is also relevant. The missing reference point is another example of theory-induced blindness.
- Richard Thaler introduced the ‘endowment effect’: owning a good increases its value to the owner, especially for goods that are not regularly traded. Imagine you bought a ticket for a major soccer match at the normal price of € 300: once you own it, selling it feels like a loss, so you will typically demand far more than you paid.
- The endowment effect can be explained by the prospect theory.
- The willingness to sell or buy depends on the reference point: whether or not the person currently owns the good. If he is the owner, he considers the pain of giving up the good. If he is not the owner, he considers the pleasure of getting the good. The values are not equal because of loss aversion: giving up the good is more painful than getting a similar good is enjoyable. The reaction to a loss is stronger than the reaction to a corresponding gain.
How do people react to bad events? – Chapter 28
- In an experiment, participants were shown several images, among them pictures of the eyes of a happy person and of a terrified person. The pictures were shown for a fraction of a second, so the participants never consciously knew they had seen them. But one part of the brain did know: the amygdala, the brain’s ‘threat center’, which showed an intense response to the threatening picture. The same mechanism makes us process angry faces (a potential threat) faster and more efficiently than happy faces.
- An angry person in a happy crowd gets noticed faster than the opposite situation. Our brains are equipped with a mechanism that gives priority to bad news.
- Our brains also respond quickly to merely symbolic threats. Bad words (war, murder), emotionally loaded words and opinions with which you strongly disagree attract attention more quickly than their opposites. Loss aversion is another manifestation of this negativity dominance: bad feedback and bad parenting have more impact than their good counterparts, and bad impressions and bad stereotypes are formed faster.
- Gottman argues that long-term success of marriages depends more on the avoidance of negatives than on looking for positives. One bad action can ruin a long-term relationship.
- The boundary between good and bad is a reference point that changes over time and depends on the current situation.
- People are driven more strongly to avoid a loss than to achieve a gain. A reference point can be a future goal or the status quo, and the two motives differ in strength: the aversion to failing to reach a goal is a lot stronger than the desire to exceed it. This explains why many people set short-term goals.
- The different intensities of the motives to achieve gains and avoid losses show up in many situations. It is often detected in negotiations, in particular the renegotiations of existing contracts. Reference point: existing terms. Any proposed change is considered a concession (loss) by one of the parties. Loss aversion makes reaching an agreement difficult.
- A study of what the public considers unfair behavior by employers, landlords and merchants showed that the opprobrium attached to unfairness imposes constraints on profit seeking. The reference point is the existing rent, wage or price. Participants deemed it unfair for stores to impose losses on customers, even when the stores behaved according to the standard economic model (increased demand leads to a raised price), because customers experience the raised price as a loss. Exploiting market power to impose losses on others is considered unfair. On the other hand, a company that itself faces a loss is considered entitled to protect its current profit by passing the loss on to its customers or workers.
- Research shows that merchants who set unfair prices are likely to lose sales and that employers who are considered unfair have to deal with reduced productivity.
What is meant by the 'fourfold pattern'? - Chapter 29
- When we evaluate complex objects (mother-in-law, gadgets), we assign weights to their characteristics: some have a bigger influence than others, which we might not be aware of.
- When we evaluate an uncertain situation, we assign weights to the possible outcomes. These weights are correlated with the probabilities of the outcomes: a 40% chance of winning the jackpot is more appealing than a 2% chance.
- Assigning weights sometimes happens deliberately, but often it is an automatic process of System 1.
- The decision making in gambling provides a natural rule for the assignment of weights to outcomes: the more probable an outcome, the more weight it gets. The expected value of a gamble is the average of the outcomes, all weighted by their probability. This is called the ‘expectation principle’.
- Bernoulli applied the expectation principle to the psychological value of the outcomes: the utility of a gamble is the average of the utilities of the outcomes, all weighted by their probability.
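As an illustration of the two principles, here is a minimal sketch in Python. The gamble and its amounts are invented; the logarithmic utility function is the one Bernoulli himself proposed, and any concave function would make the same point:

```python
import math

def expected_value(gamble):
    """Expectation principle: the probability-weighted average of the outcomes."""
    return sum(p * x for p, x in gamble)

def expected_utility(gamble, utility=math.log):
    """Bernoulli's refinement: weight the *utilities* of the outcomes instead.
    Bernoulli used logarithmic utility; any concave function makes the point."""
    return sum(p * utility(x) for p, x in gamble)

# An invented gamble: 50% chance of 1,000, 50% chance of 100.
gamble = [(0.5, 1_000), (0.5, 100)]

print(expected_value(gamble))              # 550.0
print(math.exp(expected_utility(gamble)))  # ~316.2: the sure amount with the
                                           # same utility (certainty equivalent)
# The certainty equivalent (~316) is far below the expected value (550):
# with diminishing marginal utility, a modest sure amount is preferred
# to the gamble, which is risk aversion.
```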
- The expectation principle is flawed, because it does not describe how we think about the probabilities associated with risky prospects.
- Suppose your chance of winning a jackpot improves by five percentage points. Is every improvement equally good? Compare: from 0% to 5%, from 5% to 10%, from 50% to 55%, and from 95% to 100%. According to the expectation principle, utility increases by the same amount in each case, yet the changes from 0% to 5% and from 95% to 100% feel far more impressive than the other two. The first creates a previously non-existent possibility, which gives hope and is therefore a qualitative change; its impact is known as the 'possibility effect': highly unlikely outcomes are weighted disproportionally more than their probability justifies. The change from 95% to 100% is also a qualitative change and induces the 'certainty effect': almost certain outcomes are assigned less weight than their probability justifies.
- Certainty and possibility both have powerful effects when it comes to losses. The possibility effect makes us overweight small risks and willing to pay far more than the expected value to avoid them. The certainty effect looms even larger for losses: the psychological difference between a 95% risk of a bad event and the certainty of it seems enormous, because a tiny bit of hope counts for a lot.
- The overweighting of small probabilities increases the appeal of insurance policies and gambling.
- Maurice Allais demonstrated that people are susceptible to a certainty effect and thereby violate expected utility theory and the axioms of rational choice. There have been several attempts to provide a plausible justification for the certainty effect, but so far all have failed.
- Prospect theory describes how people actually make choices, whether rational or not. In this theory, decision weights do not equal probabilities. At the extremes of 0% and 100%, the decision weights match the corresponding probabilities; in between, unlikely events are generally overweighted (the possibility effect).
- The decision weight corresponding to a 5% chance of a gain is 13.2; if the axioms of rational choice held, it would be 5. The other end of the probability scale demonstrates the certainty effect: a 5% risk of not winning (a 95% chance of winning) reduces the utility of the gamble by 21% (from 100 to 79).
- People are insufficiently sensitive to intermediate probabilities: the whole range of probabilities between 5% and 95% corresponds to a much smaller range of decision weights (from about 13.2 to 79.3).
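The weights quoted above are consistent with the probability weighting function that Tversky and Kahneman estimated in their 1992 cumulative prospect theory paper. A minimal sketch using their median parameter for gains (gamma = 0.61) reproduces those numbers; the formula is from the paper, while the script around it is only illustration:

```python
def decision_weight(p, gamma=0.61):
    """Probability weighting function from Tversky & Kahneman (1992):
    w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma),
    with their median parameter estimate for gains, gamma = 0.61."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.0, 0.01, 0.05, 0.50, 0.95, 0.99, 1.0):
    print(f"p = {p:6.1%}  ->  decision weight = {100 * decision_weight(p):5.1f}")
# p =   0.0%  ->  decision weight =   0.0   (the impossible stays impossible)
# p =   1.0%  ->  decision weight =   5.5   (possibility effect: overweighted)
# p =   5.0%  ->  decision weight =  13.2
# p =  50.0%  ->  decision weight =  42.1   (intermediate range is compressed)
# p =  95.0%  ->  decision weight =  79.3   (certainty effect: underweighted)
# p =  99.0%  ->  decision weight =  91.2
# p = 100.0%  ->  decision weight = 100.0   (the certain stays certain)
```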
- Tversky and Kahneman found that the decision weights people assign to outcomes differ from the probabilities of those outcomes, and that people attach value to gains and losses rather than to wealth. Together these conclusions explain the 'fourfold pattern' of preferences, the main achievement of prospect theory.
- People are risk averse when they consider prospects with a substantial chance of a large gain: they are willing to accept less than the expected value of the gamble in exchange for a certain win.
- The possibility effect explains the popularity of lotteries: when the jackpot is huge, people appear indifferent to the minuscule chance of winning. Buying a ticket buys a chance to win and the right to dream about a nicer life. Insurance is bought in the fourth cell of the pattern: people are willing to pay far more for insurance than its expected value.
- When purchasing insurance, people do not only buy protection against an unlikely disaster; they also purchase peace of mind, the elimination of worry.
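To see how the fourfold pattern falls out of these pieces, the sketch below combines the weighting function from the earlier snippet with a prospect-theory value function. The parameters (alpha = 0.88, lambda = 2.25) are Tversky and Kahneman's 1992 median estimates; the 10,000 prize is a made-up figure, and for simplicity the same weighting curve is used for gains and losses, although the 1992 paper estimated a slightly different curve for losses:

```python
# Reuses decision_weight() from the previous sketch.
def value(x, alpha=0.88, lam=2.25):
    """Prospect theory value function (TK 1992 median estimates): concave for
    gains, convex for losses, and steeper for losses (lam = loss aversion)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def certainty_equivalent(p, x, alpha=0.88):
    """Sure amount felt to be equivalent to 'p chance of x, else nothing'.
    For a single-outcome gamble the loss-aversion factor lam cancels out:
    solving value(ce) == decision_weight(p) * value(x) gives the line below."""
    return decision_weight(p) ** (1 / alpha) * x

for p, x in [(0.95, 10_000), (0.05, 10_000), (0.95, -10_000), (0.05, -10_000)]:
    print(f"p = {p:.0%}, outcome = {x:7d}: EV = {p * x:8.1f}, "
          f"certainty equivalent = {certainty_equivalent(p, x):8.1f}")
# A 95% chance of 10,000 feels like a sure gain of about 7,700, below the
#   EV of 9,500: people accept unfavourable settlements (risk averse).
# A 5% chance of 10,000 feels like a sure gain of about 1,000, double the
#   EV of 500: people overpay for lottery tickets (risk seeking).
# A 95% chance of losing 10,000 feels like a sure loss of only about 7,700:
#   people reject reasonable settlements and gamble on (risk seeking).
# A 5% chance of losing 10,000 feels like a sure loss of about 1,000:
#   people pay premiums far above the expected loss of 500 (risk averse).
```

In each row the sure amount that feels equivalent sits on the 'wrong' side of the expected value, which is exactly the fourfold pattern.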
How do we respond to rare events? – Chapter 30
- Try to remember a time when terrorist attacks on public transport were relatively common. The attacks were rare in absolute numbers and the risk to any individual traveller was very small, but that is not how travellers felt about it: people tried to avoid public transport or were very cautious.
- People assigned an absurdly high decision weight to a very small probability due to the experience of the moment: being near a bus made them have unpleasant thoughts, so they avoided buses.
- Terrorism is effective because it evokes an availability cascade. Vivid images of victims, constantly repeated by the media and the topic of many conversations, become highly accessible, especially when they are tied to a specific situation (seeing a bus). The emotional response is automatic, uncontrolled and associative, and it generates an impulse for protective behavior. System 2 may know that the probability is low, but System 1 cannot be switched off.
- The exciting possibility of winning the jackpot works the same way: it is shared by the community and reinforced by interactions with others. Buying a ticket immediately produces appealing fantasies. The mere possibility matters, not the actual probability.
- According to the prospect theory, highly unlikely events get overweighted or ignored.
- Kahneman's current view of decision weights has been shaped by research on the role of vividness and emotion in decision making. Vividness and emotion influence availability, fluency and judgments of probability, and therefore help explain our disproportionate responses to rare events.
- People tend to overestimate the probability of an unlikely event and to overweight it in their decisions. Overestimation and overweighting are distinct notions, but the psychological mechanisms behind them are the same: cognitive ease, confirmation bias and focused attention.
- The associative machinery of System 1 is triggered by specific descriptions: it selectively retrieves evidence, images and instances that would make the statement true. The judgment of probability is then determined by the cognitive ease with which a credible scenario comes to mind.
- The probability of a rare event will be overestimated when the alternative to it is not fully specified.
- Research demonstrates that the valuation of a gamble is much less sensitive to probability when the outcomes are emotional (kisses, electric shocks) than when the outcomes are gains or losses of money. The fear of receiving a shock barely correlates with the probability of receiving it: the mere possibility triggers the fear, and the emotion overrules the response to probability.
- Other researchers concluded that low sensitivity to probability is normal for most outcomes, not only emotional ones (gambles on money being the exception), so the insensitivity is not caused by the intensity of emotion.
What are risk policies? – Chapter 31
- The emotional evaluation of ‘sure loss’ and ‘sure gain’ is an automatic response of System 1, which takes place before the computation of the expected values of the gambles.
- People who must make choices involving high or moderate probabilities tend to be risk seeking when it comes to losses and risk averse when it comes to gains, which can be costly. These tendencies make you willing to pay a premium to lock in a sure gain rather than face a gamble, and willing to pay a premium (in expected value) to avoid a sure loss.
- Decisions can be construed in two ways: broad framing (a single comprehensive decision, with four options) and narrow framing (a sequence of two simple decisions, considered apart from each other).
- Our minds cannot achieve full logical consistency. We tend to avoid mental effort and are susceptible to WYSIATI, so we tend to make decisions one at a time as problems arise, even when they ought to be considered jointly.
- Broad framing blunts the emotional response to losses and increases the willingness to take risks. Financial traders shield themselves from the pain of losses by this type of framing.
- The combination of narrow framing and loss aversion must be avoided. Individual investors avoid it by checking less often how their investments are doing. Constantly checking is unwise, because the pain of frequent small losses trumps the joy of small gains. Deliberately avoiding being exposed to short-term outcomes improves the quality of decisions and outcomes. The short-term reaction to bad news is usually increased loss aversion.
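A minimal Monte Carlo sketch of this point, under assumed numbers (a portfolio with roughly an 8% expected annual return and 15% annual volatility, modeled as independent normal daily returns; every parameter here is hypothetical):

```python
import random

# Hypothetical return process: about 8% expected annual return and 15%
# annual volatility, spread over 252 trading days of i.i.d. normal returns.
MU_DAY, SIGMA_DAY, DAYS = 0.08 / 252, 0.15 / 252 ** 0.5, 252

def fraction_of_painful_looks(looks_per_year, years=5_000):
    """Fraction of portfolio check-ups that reveal a loss since the last look."""
    days_between_looks = DAYS // looks_per_year
    losses = 0
    total = years * looks_per_year
    for _ in range(total):
        period_return = sum(random.gauss(MU_DAY, SIGMA_DAY)
                            for _ in range(days_between_looks))
        losses += period_return < 0
    return losses / total

random.seed(42)
print(f"daily checker : {fraction_of_painful_looks(252):.0%} of looks hurt")
print(f"yearly checker: {fraction_of_painful_looks(1):.0%} of looks hurt")
# Roughly 49% of daily looks show a loss, versus roughly 30% of yearly looks.
# If, per loss aversion, each loss hurts about twice as much as an equal
# gain pleases, the frequent checker's emotional ledger runs deep in the
# red even while the portfolio grows.
```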
- A decision maker who is prone to narrow framing should adopt a 'risk policy' that he applies whenever a relevant problem arises, for example: "never buy extended warranties". A risk policy is a broad frame that embeds a particular risky choice in a set of similar choices.
- The risk policy and the outside view are remedies against two opposite biases that influence a lot of decisions: the exaggerated caution evoked by loss aversion and the exaggerated optimism of the planning fallacy.
What is mental accounting? – Chapter 32
- For most people, gaining money reflects achievement and self-regard. We keep score in our minds when we gain or lose money, and we experience those gains and losses as rewards and punishments, promises and threats. The scores motivate our actions and influence our preferences. Cutting our losses feels like admitting failure, so we refuse to do it.
- We hold money in both physical and mental accounts. Mental accounts are a form of narrow framing: they keep things manageable and under control. Mental accounts are used for keeping score. For instance, successful golfers have a separate account for each hole, not just one for their overall score.
- System 1 performs the calculations of emotional balance. For System 2 to respond rationally, it would have to be aware of the counterfactual possibility. This requires a disciplined and active mind.
- Financial research shows that individual investors have a strong preference for selling winners rather than losers. This bias is called the 'disposition effect' and is an example of narrow framing: the state of the mental account (closing it with a gain rather than a loss) is treated as a valid reason for selling. An investor who cared only about his wealth would sell the loser.
- Companies often refuse to accept the humiliation of closing the account of a failing project and invest more money in it. In light of the fourfold pattern: this represents the choice between an unfavorable gamble and a sure loss.
- The sunk-cost fallacy keeps people too long in unhappy relationships, bad jobs and unpromising projects.
- Regret is something we consider a punishment. The fear of regret is a factor in a lot of decisions we make.
- Regret is triggered by the availability of alternatives to reality. Regret differs from blame, but both are induced by a comparison to a norm.
- We tend to feel greater regret after acting than after failing to act, and people expect to have stronger emotional responses, including regret, to an outcome produced by action than to the same outcome produced by inaction. This is also found in the context of gambling.
- For the endowment effect, reactions to price changes and choices between gambles, losses are weighted approximately twice as much as gains. In certain situations the loss-aversion coefficient is much higher. Health is an example: we are not supposed to sell our health, and, more importantly, whoever trades it away is held responsible for a potential bad outcome.
- Another example is the reluctance of parents to expose their child to a danger for a few seconds in return for money.
- The intense aversion to trading increased risk for a benefit is also found in European laws (precautionary principle: actions that might cause harm are prohibited).
What are preference reversals? - Chapter 33
- Poignancy, a feeling related to regret, is counterfactual: "if only she had.". The System 1 mechanisms of intensity matching and substitution translate the strength of the emotional response to a case into a monetary value.
- People who see scenarios together (within-subject) endorse the principle that poignancy is not a legitimate consideration. The principle is relevant only when both scenarios are shown together, and this usually is not the case in daily life.
- Life is usually experienced in the between-subjects mode. Because the contrasting alternatives that could change your mind are absent, and because of WYSIATI, your (moral) beliefs do not necessarily govern your emotional responses.
- The discrepancy between joint and single evaluation of a scenario is part of a broad category of reversals of choice and judgment (preference reversals).
- Preference reversals occur because joint evaluation focuses attention on an aspect of the case that was less salient in single evaluation. Single evaluation is mostly determined by the emotional responses of System 1; joint evaluation involves an effortful and more careful assessment (System 2).
What is emotional framing? - Chapter 34
- France and Argentina competed in the 2022 World Cup final. The following sentences both describe the outcome: “Argentina won.” “France lost.” Whether these statements have the same meaning or not depends on your idea of ‘meaning’.
- The truth conditions of the two interchangeable descriptions are identical: one is true, so the other is true as well.
- Economists treat people's preferences and beliefs as reality-bound: they should not be influenced by the wording of their descriptions.
- There is another sense of 'meaning' in which the two sentences differ: they induce different associations (System 1). "Argentina won" evokes thoughts of what the Argentinian team did, while "France lost" evokes thoughts of what the French team did that made them lose. In terms of induced associations, the sentences mean different things.
- Most people do not have reality-bound preferences, because System 1 is not reality-bound. Many people are influenced by the formulation of a problem.
- A negative outcome is more acceptable when it is framed as the cost of a lottery ticket than as a lost gamble: losses evoke stronger negative feelings than costs. The same goes for discounts and surcharges: economically they are the same thing, emotionally they are not.
- Neuroscientists studied framing effects by recording the activity of several brain areas while participants made choices. The study yielded three significant findings.
- The amygdala (region related to emotional arousal) was most likely to be active when participants’ choices conformed to the frame. This region is accessed very quickly by emotional stimuli (System 1).
- The anterior cingulate (region related to self-control and conflict) was more active when participants did not act naturally (choosing the sure thing despite the ‘lose’-label). Resisting the suggestion by System 1 appears to cause conflict.
- The most rational participants showed enhanced activity in the frontal area that is known for combining reasoning and emotion.
- Conclusion: words that induce emotion influence our decision making.
How does our memory affect our judgments of experiences? - Chapter 35
- The notion ‘utility’ has two different meanings. Jeremy Bentham argued that people are under the governance of two masters: pleasure and pain. They determine what we shall do and what we ought to do. Kahneman refers to this idea as ‘experienced utility’.
- When economists use the term ‘utility’, they mean ‘wantability’, which Kahneman refers to as ‘decision utility’. Expected utility theory concerns the rationality rules that should govern decision utilities.
- Both concepts of utility can coincide: when people want what they will like and like what they chose.
- There are several possible discrepancies between the forms of utility.
- Experienced utility can be measured, and it is the criterion by which decisions should be assessed.
- The economist Edgeworth argued that experienced utility could be measured with a 'hedonimeter': an imaginary instrument that records the level of pleasure or pain someone experiences at each moment. Time is an important factor in his theory: the total experienced utility of an episode is the duration-weighted sum of its moments.
- In a study of two patients undergoing a painful medical procedure, the patients reported every 60 seconds how much pain they were experiencing (0 = no pain, 10 = intolerable pain). The procedure lasted 8 minutes for patient Y and 24 minutes for patient Z. Which patient suffered more? Intuitively you would say patient Z, as his procedure lasted three times as long. Afterwards, the patients rated the total amount of pain they had experienced, and there were two main findings: duration neglect (the duration of the procedure had no influence at all on the ratings of total pain) and the peak-end rule (the global retrospective rating was well predicted by the average of the pain reported at the worst moment and at the end of the experience).
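A minimal sketch of the two measures, with invented per-minute pain scores shaped like the two patients' profiles (the numbers are made up; only the pattern matters: short but ending at the peak versus long but tapering off):

```python
def hedonimeter_total(pain):
    """Edgeworth's measure: duration-weighted, every minute counts equally."""
    return sum(pain)

def remembered_pain(pain):
    """Peak-end rule: retrospective rating approximated by the average of
    the worst moment and the final moment; duration plays no role."""
    return (max(pain) + pain[-1]) / 2

# Invented per-minute pain reports (0-10). Patient Y's procedure is short
# but ends at its peak; patient Z's lasts three times as long but tapers off.
patient_y = [1, 3, 5, 6, 7, 8, 8, 8]                          # 8 minutes
patient_z = [1, 3, 5, 7, 8, 8, 7, 6,
             5, 5, 4, 4, 4, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2]  # 24 minutes

for name, pain in [("Y", patient_y), ("Z", patient_z)]:
    print(f"patient {name}: total pain = {hedonimeter_total(pain):3d}, "
          f"remembered pain = {remembered_pain(pain):.1f}")
# patient Y: total pain =  46, remembered pain = 8.0
# patient Z: total pain =  93, remembered pain = 5.0
# Z endured twice as much pain in total, yet remembers the procedure as
# less painful, because his ordeal ended gently.
```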
- The two measures of experienced utility, the hedonimeter total and the retrospective assessment, are therefore different. The hedonimeter total is duration-weighted: it assigns equal weight to all moments. The retrospective assessment is insensitive to duration and weights two singular moments: the peak and the end.
- Which is the better measure? For medical practice this is an important question, and it depends on the goal. If the physician wants to reduce the memory of pain, lowering the peak intensity matters more than shortening the procedure, and gradual relief is better than abrupt relief. If the physician wants to reduce the amount of pain actually experienced, shortening the procedure is the right choice.
- Most people will probably prefer reducing the memory of pain. The dilemma demonstrates a conflict of interest between two selves: the remembering self (How was it, overall?) and the experiencing self (Does it hurt right now?).
- Imagine being at a concert that gets stopped near the end due to a fight and you hear someone say: “My whole experience is now ruined”. That is wrong: the memory of the experience was ruined, not the experience. The experiencing self had an almost entirely nice experience and the bad end could not undo that.
- Confusing experience with the memory of it is a common cognitive illusion. The remembering self is the one keeping score and making decisions.
- Decisions and tastes can be shaped by memories that are wrong. The memory (System 1) represents the most intense moment of pleasure or pain (the peak) and the feelings when the experience was at its end. A memory that neglects the duration of experiences will not serve our preference for lasting pleasure and brief pain.
How do we evaluate stories? - Chapter 36
- Stories are about memorable moments and significant events, not about the passing of time. In a story, duration neglect is normal, and the ending often defines whether it is a good or a bad story.
- Caring for someone usually means being concerned for the quality of his/her story, not for his/her feelings. We also deeply care for the narrative of our own life story.
- Psychologist Ed Diener examined whether the peak-end rule and duration neglect also govern the evaluation of an entire life. They do: doubling the duration of the life of a fictitious woman had no effect on judgments of her total happiness or of the desirability of her life. In addition, a less-is-more effect was found: adding 'slightly happy' years to a very happy life lowered evaluations of total happiness; the extra years made the whole life seem worse.
- Imagine you are making vacation plans. Would you return to the beautiful place you enjoyed with your family last summer, or visit a whole new location to enrich your store of memories? The tourism industry helps people collect memories and construct stories, and the goal of storing memories shapes both the plans and the experience of the vacation itself. The word 'memorable' is frequently used to describe its highlights.
- The remembering self is the one that chooses vacations. One study showed that the final evaluation of a vacation entirely determined intentions for future trips, even though that evaluation did not accurately reflect the quality of the whole experience as recorded in diaries.
- We choose by memory when we decide whether we repeat an experience or not. Eliminating memories is likely to significantly reduce the value of the experience.
What does research about experienced well-being teach us? - Chapter 37
- Research about well-being long revolved around one survey question addressed to the remembering self, which was treated as a measure of happiness: "All things considered, how satisfied are you with your life as a whole these days?" In his experiments Kahneman had found that the remembering self is not a reliable witness, so he decided to focus on the well-being of the experiencing self.
- There are numerous experiences we would rather continue than stop, including physical and mental pleasures. Examples are being in 'flow' (fully absorbed in a task) and playing with toys. Resistance to interruption is an indicator of having a good time.
- Together with other specialists Kahneman developed a measure of the well-being of the experiencing self. Experience sampling seemed a good option, but it is burdensome and expensive. This led to the development of the ‘Day Reconstruction Method’ (DRM).
- Most moments in life can be classified as predominantly positive or predominantly negative. The American participants experienced negative feelings approximately 19% of the time. This percentage is called the U-index: the proportion of time spent in an unpleasant state. Its advantage is that it rests not on a rating scale but on an objective measurement of time, and it can also be computed for individual activities.
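A minimal sketch of the computation, with an invented day of episodes; the yes/no classification of each episode as predominantly unpleasant stands in for the affect ratings collected by the DRM:

```python
def u_index(episodes):
    """U-index: fraction of time spent in a predominantly unpleasant state.
    Each episode is (minutes, unpleasant?), in the spirit of the Day
    Reconstruction Method; the day below is entirely made up."""
    total = sum(minutes for minutes, _ in episodes)
    unpleasant = sum(minutes for minutes, bad in episodes if bad)
    return unpleasant / total

day = [(60, True),    # stressful commute
       (240, False),  # absorbing work
       (45, True),    # tense meeting
       (90, False),   # lunch with friends
       (180, False),  # relaxed evening
       (30, True)]    # arguing about chores

print(f"U-index for the day: {u_index(day):.0%}")  # 21%
```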
- A remarkable finding was the extent of inequality in the distribution of emotional pain. Half of the participants went through a whole day without experiencing unpleasant episodes. A significant number of participants experienced negative feelings for a big part of the day. This suggests that a minority of the population does most of the emotional suffering.
- The mood of people at any moment depends on their temperament and overall happiness, but emotional well-being also fluctuates over the day and the week. The mood of the moment depends mostly on the current situation: situational factors are the most important.
- We are usually focused on our current activities and environment, but sometimes the quality of subjective experience is dominated by recurrent thoughts (being in love, grieving). However, in most cases we draw pain and pleasure from what currently is happening.
- The findings have implications for society and individuals. People have some control over their use of time. A number of people could arrange their lives to spend more time doing things they like and less time doing things that make them unhappy.
- Some aspects of life, such as educational attainment, have more effect on the evaluation of someone's life than on the experience of living it. Bad health and living with children have a stronger adverse effect on experienced well-being than on life evaluation, while religious participation has a relatively stronger favorable effect on experienced well-being.
- Does money make us happy? Being poor is depressing and being rich is satisfying, but having a lot of money does not improve experienced well-being.
What is the focusing illusion? - Chapter 38
- The decision to marry someone reflects a huge error of 'affective forecasting'. On their big day, the groom and bride know that the divorce rate is high, but they believe that those numbers do not apply to them.
- A study that tracked life satisfaction from the day people got married shows a gradual drop. The usual explanation is that the honeymoon phase fades and married life becomes routine, but another explanation is also plausible: heuristics of judgment.
- A mood heuristic is one way of answering questions about life satisfaction. In addition to their current mood, people are likely to think of significant events in the recent past; only a few relevant ideas come to mind, while most do not.
- The rating of life satisfaction is therefore heavily influenced by a small number of highly available ideas, not by a careful weighing of all the domains of one's life.
- People who recently got married will retrieve that happy event when asked a general question about life. As time passes, the salience of the thought will diminish. This explains the remarkably high level of life satisfaction in the first years after marriage.
- On average, experienced well-being is not affected by marriage: not because marriage brings no joy, but because it changes some aspects of life for the better and others for the worse.
- One reason for the low correlations between individuals' life satisfaction and their circumstances is that both life satisfaction and experienced happiness are largely determined by the genetics of temperament: a disposition for well-being is heritable. In other cases, such as marriage, the correlation with well-being is low because of balancing effects. The goals people set, financial goals for example, also prove to have lifelong effects.
- People tend to answer life-satisfaction questions very quickly. This speed, and the effect of current mood on the answers, show that people skip a careful assessment and instead rely on heuristics, which are examples of substitution and WYSIATI. When attention is directed to a specific aspect of life, that aspect comes to dominate the overall evaluation. This is the 'focusing illusion': nothing in life is as important as you think it is while you are thinking about it. Its essence is WYSIATI.
- The focusing illusion results in a bias in favor of experiences and goods that are initially appealing but eventually lose their charm.