Summary of Cognitive Psychology by Gilhooly, Lyddy & Pollick - 1st edition
Cognitive Psychology by Gilhooly, Lyddy & Pollick - Chapter 0
What is Cognitive Psychology by Gilhooly, Lyddy & Pollick about?
The book Cognitive Psychology is an international textbook, widely used in the Netherlands, that provides an extensive introduction to cognitive psychology; it is written especially for students. The book deals with the basic principles of cognition - how the human brain processes information - but also with how errors in this processing can arise and what can be learned from them. The book has a practical slant: it applies the theories and results of academic research to practical situations and gives many examples.
How is the book organized?
The order of the chapters follows as much as possible the order of information processing by the human brain. In arranging the chapters, the requirements that the British Psychological Society sets for psychology training in the United Kingdom have been taken into account.
Each chapter starts with introductory questions that highlight the main topics of the chapter. Each chapter also provides one or more practical applications or examples of the theory discussed. In addition, each chapter has a box called "when things go wrong". This shows how defects in information processing by the brain, in individuals or groups of people, can inform research in cognitive psychology.
Who are the authors?
Ken Gilhooly specializes in cognitive psychology, in particular thought processes and problem solving, and the consequences that old age can have for these processes. He is vice chairman of the Cognitive Psychology Section of the British Psychological Society. Fiona Lyddy specializes in communication and language processing within cognitive psychology and is professor and chair of the Psychology department at Maynooth University (Ireland). Frank Pollick teaches at the University of Glasgow, previously studied at various universities in the US and was formerly a researcher at the ATR Human Information Processing Research Laboratories in Kyoto.
What is cognitive psychology? - Chapter 1
Cognitive psychology is the study of how people and animals collect information, store information in memory, retrieve information and work with information to achieve goals.
Preface
Cognitive psychology is concerned with how the brain represents and uses information about the outside world. It also tries to explain how errors in perception or judgment can arise. In short, cognitive psychology is the study of how people and animals collect information, store information in memory, retrieve information and work with information to achieve goals. Mental representations, inner representations of an external reality such as images or verbal concepts, play a major role here.
History and approaches
In ancient times people were mainly concerned with the art of rhetoric, for which useful memory aids (mnemonics) were used, such as the method of loci. Here you form a collection of images that connect the objects to be remembered to a sequence of places known to you. The keyword method is used when learning a foreign language: the student connects a new word with a similar-sounding word in his own language and forms a mental image of the two together. These and other memory techniques are often based on imagery.
Associationism
From the seventeenth to the nineteenth century, the dominant approach to cognition was associationism. Empiricist philosophers such as Locke and Hume believed that all knowledge comes from experience and that ideas and memories are connected through associations. For example, associations can be formed when two events follow each other closely in time, or when two objects often appear close together.
Introspectionism
In the second half of the nineteenth century Wundt tried to break down normal perceptions (for example, of a table) into simpler sensations (for example brown, straight lines, textures). The method used for this was introspection, or self-observation, in which participants gave verbal reports of their sensations. Introspection required extensive training, could not be mastered by everyone, and applied only to certain mental processes. Moreover, the introspection itself could influence the cognitive process being studied.
Behaviorism
Partly in response to the shortcomings of introspectionism, Watson (1913) and Thorndike (1898) developed behaviorism. This approach took only observable behavior and stimuli as data, without appealing to internal cognitive processes (or to introspection). The main goals of behaviorism were the prediction and control of behavior. Watson suggested that all apparently mental phenomena could be reduced to behavioral activity. Other behaviorists, such as Tolman, were less extreme about the status of mental activity; Tolman argued that experimental animals could indeed have goals, mental representations and mental maps (mental representations of a spatial layout). He did extensive research into mental or cognitive maps based on the behavior of rats in mazes. From his research comes, for example, the concept of latent learning: a situation in which learning does take place but is not immediately expressed in behavior.
Although behaviorism had many successes with simple learning in animals, it was less applicable to complex mental phenomena such as reasoning, problem solving and language. Through the research of Tolman and that of Macfarlane (1930), support grew for the existence of abstract mental representations.
Information processing: the cognitive revolution
The information processing approach brought mental representations back into the picture and was inspired by the programming of computers. Computer programs that solve certain problems can be seen as analogous to the strategies people use to solve problems, consisting of fixed steps, decisions, storing information and retrieving old information. A program that implements a model of human thinking is called a simulation program. The information processing approach has been the dominant approach in cognitive psychology since about 1960. Researchers try to explain performance through internal representations, which are transformed through inner actions called mental operations. Information processing theories are often represented by diagrams that show the flow of information and operations.
Some information processing models used computer programs to simulate human thinking. Examples are Newell's General Problem Solver and Anderson's (2004) ACT-R model. An alternative way of modeling information processing is connectionism. Connectionist models simulate simple learning and perceptual phenomena through a large network of simple units, organized into input, output and internal units. The units are connected to each other by links of varying strength. This strength is adjusted by means of learning rules, such as backpropagation, where the strengths are adjusted on the basis of detected errors.
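The backpropagation idea can be sketched in a few lines of Python. This is only an illustrative toy, not one of the models named above: a tiny network of input, internal ("hidden") and output units learns the XOR mapping by repeatedly adjusting its link strengths in proportion to the detected error.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Four input patterns and their XOR targets (a classic connectionist demo).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))  # input -> internal link strengths
W2 = rng.normal(0.0, 1.0, (4, 1))  # internal -> output link strengths

mse_before = None
for step in range(20000):
    hidden = sigmoid(X @ W1)           # forward pass: activation spreads
    out = sigmoid(hidden @ W2)
    error = out - y                    # detected error at the output units
    if mse_before is None:
        mse_before = float(np.mean(error ** 2))
    # backward pass: propagate the error and adjust link strengths
    delta_out = error * out * (1 - out)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 1.0 * hidden.T @ delta_out
    W1 -= 1.0 * X.T @ delta_hid

mse_after = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))
```

After training, the error on the four patterns is much smaller than before; the network's "knowledge" of XOR resides entirely in the adjusted link strengths, which is exactly the point connectionism makes.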
Examining mental strategies, information processing and storage concerns the functional characteristics of the brain. Such questions can be answered without considering the underlying hardware of the brain. According to functionalism, the nature of the brain and the details of neural processes are not relevant to cognitive psychology. Nowadays, however, more and more researchers are working on the neuroscientific side of cognitive psychology.
Cognitive neuroscience
The brain
The brain is the central part of the nervous system and is strongly structured and subdivided. First, it can be divided into the left and right hemisphere, which are connected by the corpus callosum, a thick band of nerve fibers. Both hemispheres are subdivided into frontal, parietal, occipital and temporal lobes. Deeper in the brain lie structures such as the thalamus, the hippocampus and the amygdala. To indicate locations in the brain, the following terms are often used: dorsal (at the top), ventral (at the bottom), anterior (at the front), posterior (at the back), lateral (on the side) and medial (in the middle). All structures of the brain consist of neurons: specialized cells that exchange information by transmitting electrical impulses.
Cognitive neuropsychology
Cognitive neuropsychology investigates the effects of brain damage on behavior, with the aim of finding out how psychological functions are organized. This research area goes back to Broca's study (1861) of a patient who had serious impairments in his speech after damage to a small brain area. This brain area, which is necessary for speech production, is now called Broca's area. This is a striking example of neuropsychology, in which functions are associated with the functioning of specific brain areas. A precursor to neuropsychology was the now-discredited phrenology: the idea that brain functions can be read from the bumps on the skull.
The idea of modularity suggests that cognition consists of a large number of independent processing units that work separately and apply to relatively specific domains. The opposite idea is that mental functions are not localized but distributed over the brain. Nowadays the idea of localization is seen as very useful and is the subject of much neuropsychological research. Especially interesting cases for neuropsychologists are cases of double dissociation, in which people with different types of brain damage show opposite patterns of impairment: one patient is impaired on task A but not task B, while another is impaired on task B but not task A.
Brain scans
There are two types of brain scans: structural imaging, which shows the static anatomy of the brain, and functional imaging, which shows brain activity over time. Nowadays the dominant method in the field of structural imaging is magnetic resonance imaging (MRI), which uses radio waves and a strong magnetic field around the participant. In the field of functional imaging there is electroencephalography (EEG), which records a summary of electrical activity over a wide cortical area, measured by sensors on the scalp. A functional method that produces a more localized image is positron emission tomography (PET). In this method, a radioactive substance is injected into the blood, after which the blood supply to different parts of the brain is measured. Increased blood flow is then interpreted as increased activity in that brain area.
Nowadays the most used functional method is functional magnetic resonance imaging (fMRI), in which the oxygen supply in the blood is measured. This method has good spatial resolution, although its temporal resolution is limited by the sluggishness of the blood-flow response. A disadvantage of fMRI is the complexity of interpreting the data. It has also been suggested that the reliability of repeated scans is low. Moreover, the statistical procedures that are often used can make findings appear more significant than they are. Finally, the circumstances in which an fMRI is made (lying completely still in a scanner) are very specific and unusual.
Brain scans and cognitive processes
Despite these disadvantages, fMRI is widely used. A frequently used method to connect cognitive processes with the outcomes of brain scans is reverse inference. An example of this reasoning: 'if cognitive function F1 is involved in a task, then brain area Y is active'; 'in task B, brain area Y is active'; therefore 'task B involves function F1'. Although such arguments are not logically conclusive, they are used to generate plausible hypotheses for later research.
Networking
It may be useful to view brain activity as networked rather than highly localized. Research has shown that a large number of brain areas are active at rest, some of which are deactivated when a task is performed. From this it was inferred that there is a Default Mode Network that reflects internal tasks such as daydreaming, imagining the future and recalling memories.
What are the principles of perception? - Chapter 2
Perception is the set of processes that organizes sensory experiences into an understanding of the world around us. Perception stands on a continuum between sensation, the processes by which physical properties are converted into neural signals, and cognition, the use of mental representations to reason and to plan behavior. Perceptual information can come in various forms, such as vision, sound and somatic perception (through touch and the sense of where your body parts are in space).
The physical world
Because human sensory organs are limited, they can never process enough information to describe the physical world completely. In addition, there is the inverse problem: information is fundamentally lost in the sensory encoding of the physical world. This happens, for example, when the three-dimensional physical world is projected as a two-dimensional image on our eyes.
Principles and theories of perception
Bottom-up and top-down processing
An important distinction in perceptual processing is that between bottom-up processing, which starts from the original sensory input and gradually transforms it into the final representation, and top-down processing, in which connections and feedback between higher and lower levels are crucial. Although it remains an open question which of the two we rely on more, it seems clear that there is usually an interplay between them.
The probability principle
The probability principle states that the probability that an object or event occurs is important for the perceptual processing of that object or event. This idea is linked to Bayesian decision theory, in which three components play a role in answering the question: which event is most likely responsible for my perception? The first component is the likelihood, which captures all the uncertainty in the image (in the case of vision). The second component is the prior, or all information about the scene from before you saw it. The third component is the decision rule, for example choosing the most likely interpretation or selecting an interpretation at random.
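These three components can be made concrete in a small Python sketch. The scene labels and all probabilities below are made-up illustrations: the posterior is computed as likelihood times prior, and the decision rule picks the most probable interpretation.

```python
# Toy Bayesian inference over three hypothetical scene interpretations.
scenes = ["table", "chair", "lamp"]

# Prior: information about the scene from before the image is seen (made up).
prior = {"table": 0.5, "chair": 0.3, "lamp": 0.2}

# Likelihood: how well each interpretation explains the noisy image (made up).
likelihood = {"table": 0.2, "chair": 0.6, "lamp": 0.1}

# The posterior is proportional to likelihood x prior, then normalized.
unnormalized = {s: likelihood[s] * prior[s] for s in scenes}
total = sum(unnormalized.values())
posterior = {s: p / total for s, p in unnormalized.items()}

# Decision rule: select the most likely interpretation.
best = max(posterior, key=posterior.get)
```

Here the prior favors 'table', but the image evidence favors 'chair' strongly enough that 'chair' wins the posterior; this is how the two sources of information trade off.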
Information processing approach
According to ecological psychology, perception works largely in a bottom-up manner by exploiting regularities in the visual input called invariants: properties of the three-dimensional object being viewed that can be derived from any two-dimensional image of that object. By discovering these invariants we could understand how direct perception works, the bottom-up process in which objects and their functions are recognized. Marr (1982), however, doubted how direct this process is, and suggested that information processing should be understood at three levels. The first level is the computational theory: examining the purpose of a computation. For sight, hearing and touch this is keeping us aware of the external world and helping us adapt to a changing world. The second level is the choice of representation for the input and output and of the algorithm that achieves the transformation between them (for example, the transformation from air pressure to pitch and volume). The third level is the implementation of the computations, with the emphasis on how the computations are actually realized (by machine, human, animal, etc.) and on the limitations of that organism or machine.
The body and perception
The embodied approach to cognition states that in order to understand a cognitive system, we have to take the system as it is embedded in its environment as the unit of analysis. This approach makes the following claims, all of which are under discussion: cognition is situated in the physical world; cognition operates under time pressure; we use the environment to reduce our cognitive workload; the environment is part of the cognitive system; and cognition must be understood in terms of how it contributes to action.
Human perception systems
Visual system
The encoding of visual information begins in the retinas and is transferred from there to the primary visual cortex. Cones are specialized neurons in the retina that are sensitive to colored light and pick up fine image details. Rods are specialized neurons, concentrated toward the periphery of the retina, that are effective in low light and at detecting movement. The right half of the visual field eventually ends up in the left half of the primary visual cortex, and the left half of the visual field in the right half of the primary visual cortex.
There are two primary pathways for visual processing that lead onward from the primary visual cortex in the occipital lobe. The ventral stream leads to the temporal lobe and specializes in determining which objects are in the visual world. The dorsal stream leads to the parietal cortex and specializes in determining where objects are in the visual world. However, there is much debate about the extent to which these two streams are independent of each other. Research on this subject is contradictory: on the one hand, brain damage in specific areas yields specific visual impairments, but on the other hand there are also perceptual features that are not so precisely localized in the brain.
Auditory system
The encoding of auditory information starts in a special structure in the ear called the cochlea, and is transmitted from there to the primary auditory cortex in the brain. In the cochlea lies the basilar membrane, a strip of nerve fibers with hair cells that move in response to sound pressure. This vibration is then converted into a nerve signal. The basilar membrane encodes pitch by place: different locations along the membrane respond to different frequencies. In the cortex, the auditory processing of different tones is likewise arranged in an orderly fashion, called a tonotopic map. The firing rates in the auditory nerve are, in addition to place on the basilar membrane, an indication of pitch. The secondary auditory cortex is important for speech perception and timing patterns.
Damage to the auditory cortex and the area around it can cause various disorders, such as aphasia (the inability to use language) and amusia (tone deafness).
Somatoperception system
The somatoperception system is a combination of several systems: proprioception and vestibular sensation, which give us a sense of the position of our limbs in relation to our body and to space, and the sense of touch. The processing of touch begins with receptors in the skin, from which pathways lead to neurons in the brain. These pathways end in the primary somatosensory cortex, which lies next to the central sulcus (the border between the parietal and frontal cortex). The organization of this area is somatotopic, with local regions of the cortex dedicated to specific body parts. Furthermore, the area can be divided on the basis of specialization into Brodmann areas. Damage to the somatosensory cortex can lead to loss of proprioception and of fine touch.
Multisensory integration
Several theoretical explanations have been proposed for how the perceptual system combines information from the different senses. The modality appropriateness hypothesis states that the sensory modality with the higher accuracy for a given physical property of the environment will always dominate the bimodal estimate of that property. For example, vision is dominant in spatial tasks. The maximum likelihood estimation theory, however, states that more reliable perceptual information is weighted more heavily than less reliable perceptual information. This last theory has not yet been studied much.
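The maximum likelihood idea reduces to a simple weighted average: each sense's estimate is weighted by its reliability, the inverse of its variance. A minimal Python sketch, with made-up numbers:

```python
def mle_combine(est_a, var_a, est_b, var_b):
    """Combine two noisy estimates by weighting each with its reliability."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    combined_est = w_a * est_a + w_b * est_b
    combined_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return combined_est, combined_var

# Vision estimates an object at 10 cm (variance 1); touch says 14 cm (variance 3).
est, var = mle_combine(10.0, 1.0, 14.0, 3.0)
```

The combined estimate (11 cm) lies closer to the more reliable visual estimate, and its variance (0.75) is lower than that of either sense alone; this is the sense in which more reliable information is "weighted more heavily".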
Recognition
The simplest approach to how recognition works in humans is that we compare a representation of an object with a stored inner representation. However, this is not a complete explanation: we can evaluate quite different perceptual inputs and still recognize them as the same thing. This is an essential feature of our recognition system: it represents information in such a way that the essence of an object is retained under different transformations. Several theories have been proposed to explain this. Feature analysis proposes that we decompose an object into components, where each object has a unique feature list. The pandemonium model is a hierarchical model of object recognition. The prototype theory states that the goal is to find out which member of a category is the best example of that category. For example, a robin is a more typical example of the category 'bird' than a penguin. The categorization process aims to set up maximally informative and distinctive categories.
Visual object recognition
As described earlier, the main problem for visual object recognition is that three-dimensional objects are processed as two-dimensional projections on the retina. A viewpoint-invariant property is any aspect of an object that remains the same regardless of the direction from which we look at the object. This concept was elaborated in the recognition by components (RBC) approach in the form of geons: elements of a set of volumetric primitives or forms that can be recognized from any point of view.
The multiple views theory emerged as a counter-movement to the RBC approach and states that recognition is fundamentally image-based. For each object, a number of views would be stored, and intermediate representations could be linked to the stored views by certain mechanisms.
Somatoperceptual object recognition
The somatoperception system is also used to recognize objects. Haptic perception is the combination of skills that enables us to represent the material characteristics of objects and surfaces for recognition. Haptic recognition uses exploratory procedures, in which different types of contact between the hand and the object serve different functions in recognizing structure, hardness, temperature, weight, and so on.
Visual agnosia and prosopagnosia
Visual agnosia occurs with lesions in the inferior region of the temporal cortex and is a condition in which people are not blind but cannot give meaning to what they perceive. Prosopagnosia is a special form of this, in which only the recognition of faces is severely impaired.
Recognition of scenes and events
Recognizing a scene requires not only the perception of an environment and of individual objects, but also the perception of all the objects together. Research shows that people are very good at rapidly processing visual scenes. To observe a scene well, the eyes have to make many movements. An important question is therefore what process guides these eye movements. According to the bottom-up explanation, eye movements are driven by image properties such as brightness, color or shape. According to the top-down explanation, eye movements are guided by our goals and expectations. In addition to sight, auditory perception also provides information about a scene.
For the recognition of events, which involves factors such as movement and sequence, schemas are important: frameworks that represent a plan or theory and support the organization of knowledge. The schemas create certain expectations about how an event will unfold, and are adjusted if these expectations are not in line with the event itself.
Social perception
Although faces are essential in social recognition, their appearance changes constantly (e.g. through lighting, position, make-up, health and expression). Yet people can recognize the faces of others very accurately. This is especially the case for familiar faces. Unfamiliar faces, as in eyewitness testimony, are sometimes recognized very poorly. The Bruce and Young model states that the following processes are essential in face recognition: recognition (I know this person), identification and the analysis of emotion. The neural model of Haxby et al. suggests that face recognition is based on different areas distributed through the brain, where a distinction can be made between the invariant and the changeable aspects of faces. An important part of this model is the Fusiform Face Area (FFA), which would be selectively used for recognizing faces.
In addition to faces, voices are also important for social recognition. Voices transfer information regardless of their linguistic content. The emotional content of an utterance is, for example, recognized from its prosody: the rhythm, intonation and stress patterns of speech. Because each individual's voice is unique, owing to the size and shape of the vocal tract and articulators, the voice is an important source of identity information.
Finally, it appears that people can derive a lot of information from movement, such as identity, emotion, gender and the action taken.
What are the processes of attention and awareness? - Chapter 3
There are different processes of attention and awareness, and these are interrelated.
Attention
Attention is a limited resource that is used to facilitate the processing of important information. Attention is necessary because there is a lot more information around us than we could handle. Attention helps us carry out a task and select relevant information.
Taxonomy of attention research
External attention refers to selecting and controlling incoming sensory information. Internal attention refers to the selection of control strategies and the retention of internally generated information such as thoughts, goals and motivations.
The attention system of the human brain
The attention system is a model of the human brain that proposes three separate systems: for alerting, orienting and executive functions. The alerting system consists of brain areas that are responsible for achieving a state of arousal. The frontal eye fields are areas in the frontal and parietal cortex that are involved in rapid strategic attention control. The alerting system is the 'on' button for our behavior when an event takes place, while the orienting and executive systems are important for organizing our behavior in response to what happens.
Early theories of attention
The cocktail party problem describes how we can successfully focus on one speaker against a background of noise and other conversations. Two important explanations have been suggested for this capacity, studied through dichotic listening: filter theory and resource theory.
Filter theory derives from experiments with dichotic listening, which showed that people could ignore a message in one ear while concentrating on a message in the other ear. One version of this theory assumes early selection: the idea that only one signal is let through and other information is rejected. Late selection refers to the idea that all stimuli are identified, but only those that attention is focused on gain access to further processing. Within filter theory, the debate between early and late selection has never been definitively settled.
Resource theory also states that attention is limited, but instead of attributing this limitation to the information capacity of a single central channel, it sees attention as a limited resource that must be appropriately distributed. Attention can then be seen on the one hand as a 'spotlight' that illuminates interesting locations for us, or as a 'zoom lens' that determines how much of a scene is covered at a given moment. However, evidence has also been found that attention does not necessarily focus on a location, like a spotlight, but rather on a certain object (or parts of it).
The dual-task paradigm, in which participants' performance on two tasks is measured both separately and simultaneously, shows that attention is indeed a limited resource. Once the limit is reached, attention must be divided among the tasks and interference occurs. Since certain combinations of tasks (e.g. two visual tasks) yield more interference than other combinations (e.g. a visual and an auditory task), this also shows that there is not just one central pool of attentional resources.
An important criticism of resource theory is the question of how our attention system 'knows' which events in the environment are important enough to attend to. There is also evidence suggesting that people sometimes do not divide their attention, but switch between tasks very quickly.
Attention mechanisms in perception and memory
Neural mechanism of attention in the primary visual cortex
Several neural models have been proposed to explain how attention can selectively increase the visual response of neurons. The normalization model of attention is a recent theory that unites previous theories. According to this model, attention has two functions: the ability to increase sensitivity to weak stimuli when they are presented alone, and the ability to reduce the impact of task-irrelevant distractors when multiple stimuli are presented. In dividing attention between these two functions, normalization plays a role: the original input is rescaled on the basis of the surrounding context.
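The normalization idea can be sketched as follows; the exact form and all numbers here are illustrative assumptions, not the model as the book presents it. Each neuron's stimulus drive is multiplied by an attention field and then divided by the pooled drive of all neurons plus a constant.

```python
import numpy as np

def normalized_response(stimulus_drive, attention_field, sigma=1.0):
    """Attention-weighted drive, divided by the pooled drive (normalization)."""
    excitatory = stimulus_drive * attention_field
    pool = excitatory.sum()          # suppressive normalization pool
    return excitatory / (pool + sigma)

stim = np.array([1.0, 1.0, 1.0, 1.0])  # four equally strong stimuli
neutral = normalized_response(stim, np.ones(4))                       # attention spread evenly
focused = normalized_response(stim, np.array([2.0, 1.0, 1.0, 1.0]))  # attend to item 0
```

With focused attention, the attended item's response rises while the unattended items' responses fall: a single divisive mechanism yields both the boosting and the distractor-suppression functions described above.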
Attention and working memory
Working memory is a central cognitive mechanism linked to separate stores for visuospatial and phonological information. It serves as an interface between perceptual input and internal representations. Research shows that there is a close relationship between attention and working memory, although the exact nature of this interaction is still under discussion.
Paradigms for studying attention
Within attention research two broad trends can be observed: the emphasis on vision as the primary modality for exploring models of attention, and the development of experimental paradigms such as visual search, dual-task interference, inhibition of return and the attentional blink.
Visual search
This research focuses on the problem of how we use attention to find a target in a visual environment. An important theory here is feature integration theory (FIT), in which recognition of a target is determined by two processes. The first process is pre-attentive and can analyze an entire scene simultaneously and detect unique features. The second process combines the individual features. The latter relates to the binding problem: the fact that related features are processed separately, yet experienced as one whole.
Another theory in this area is guided search, in which a non-selective pathway analyzes global aspects of the visual input in order to guide attention. This is done by means of divided attention, a process similar to the pre-attentive process described earlier.
Inhibition of return
This refers to the phenomenon that after visual attention has been given to a location in the visual field and has subsequently shifted away, responses to events at that location are delayed. This mechanism ensures that we investigate new locations rather than previously examined ones. Inhibition of return also enables us to ignore salient but irrelevant parts of an image and to focus attention on less noticeable parts.
Attentional blink refers to the phenomenon that when we look at a series of rapidly presented visual images, the second of two target stimuli cannot be identified if it is presented very shortly after the first.
Failure of attention
Change blindness is the phenomenon that substantial differences between two almost identical scenes are not noticed when the scenes are presented sequentially.
Inattentional blindness is the phenomenon that we can look straight at a stimulus but not really perceive it if our attention is not focused on it. This can be seen, for example, when we watch a film in which there are big differences between two successive shots that we nevertheless fail to notice.
Awareness
Subliminal perception refers to cases in which a stimulus is presented below the perceptual threshold but still influences behavior. This is one of the components of consciousness research, although there is still no unambiguous definition of consciousness. An important reason for this is that consciousness is primarily a subjective, first-person experience of one's 'own existence'.
Functions of consciousness
There are two general positions on the function of consciousness. Conscious inessentialism states that consciousness is not necessary for the actions we perform. Epiphenomenalism does not deny that consciousness exists, but states that it has no function.
At first glance there seems to be a logical link between consciousness and free will. However, research shows that unconscious preparation for an action precedes conscious awareness of the intention to act. This apparent contradiction leads some to conclude that our sense of free will is merely an illusion.
A proposed function of consciousness is that it provides us with a summary of our current situation that integrates all incoming information. The global workspace theory suggests something similar, namely that consciousness facilitates flexible, context-dependent behavior. It has also been suggested that consciousness is an important mechanism for understanding the mental state of others, since it gives us an insight into our own reasoning and decision making.
Attention and awareness
Attention and awareness have much in common. How can we distinguish between the two, if such a distinction exists at all? Lamme (2003) suggested that attention does not determine whether an input reaches consciousness, but rather whether a conscious report about the input is possible. Important here is the distinction between phenomenal awareness (the experience itself) and access awareness (what we intuitively think of as consciousness, and what is available for report). According to the model, we are phenomenally aware of many things, but in the absence of attention these experiences quickly evaporate.
The link between consciousness and brain activity
Nowadays there is growing attention within research to relating aspects of consciousness to brain functioning. Research on split-brain patients, in whom the two hemispheres were surgically separated, suggests that consciousness is divided in a certain way across the hemispheres. Research into patients with blindsight (whose brain activity shows that they register an object, but who cannot report it and therefore do not 'consciously' perceive it) has raised questions about the nature of consciousness.
The term neural correlates of consciousness (NCC) refers to an approach that investigates how brain activity changes depending on whether a stimulus is consciously perceived. The goal of the approach is to find the minimal neuronal mechanisms that are sufficient for conscious perception. For example, research into binocular rivalry, in which a different image is presented to each eye and only one of the images can be perceived at a time, shows that activity in the primary visual cortex is necessary but not sufficient for consciousness.
What different parts and functions does the memory have? - Chapter 4
Memory has various functions: encoding (converting information into a storable form), storage and retrieval. Traditionally, a distinction is made between long-term memory, which stores long-lasting memories and knowledge of skills, and short-term memory, which can hold a small amount of information briefly. The term working memory overlaps strongly with short-term memory; this is the part of memory that enables us, for example, to manipulate active information or perform a mental calculation.
Sensory memory
According to the traditional approach, sensory memory is a register that prolongs the input of the senses for a very short time so that relevant parts of it can be processed further. Sensory memory consists of several subsystems.
Iconic memory
Iconic memory is the sensory store for visual stimuli: what we see is briefly prolonged and remains accessible to us. Research shows that images are preserved for about half a second.
Echoic memory
Echoic memory is the auditory equivalent of iconic memory. Here too, auditory information can be stored for a very short time. Research using, among other things, the shadowing technique, in which participants have to repeat a message presented to one ear, shows that performance on this task deteriorates after about 4 seconds. Various factors can also impair performance. In backward masking, a masking stimulus is presented close to or immediately after the target stimulus, which sharply reduces performance.
Haptic memory has not yet been studied extensively, but there is some evidence for a sensory memory for touch.
Short-term memory
Short-term memory (STM) keeps a small amount of information active in consciousness for a short time. The information is very fragile and quickly lost. According to the Atkinson-Shiffrin model, information is first held in sensory storage, after which attended information is transferred to STM. Whether the information is ultimately stored in long-term memory (LTM) depends on a number of factors. Rehearsal, and especially elaborative rehearsal (actively processing the meaning of the information), often causes the information to be transferred to LTM. Decay (loss of information from STM over time) and displacement (loss of information from STM because new information comes in) impede transfer to LTM.
The model therefore states that STM has a limited capacity. The digit span is a method to measure this capacity: the longest sequence of digits a person can repeat back in order. Most people have a capacity of around 7 items (plus or minus 2). Chunking is a strategy to increase capacity by grouping small units of information together, for example by remembering 147 as one number instead of as 1, 4 and 7 separately.
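Chunking can be sketched in a few lines of code. This is a minimal illustration; the grouping size of three and the example digit string are arbitrary choices, not from the book:

```python
def chunk(digits, size=3):
    """Group a digit string into chunks so fewer STM 'slots' are needed."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

number = "149217761066"
print(chunk(number))  # four 3-digit chunks instead of twelve single digits
print(len(number), "digits ->", len(chunk(number)), "chunks")
```

Twelve digits exceed the 7 (plus or minus 2) limit, but four chunks fit comfortably within it, which is why grouping helps.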
Free recall experiments (recalling information in any order) show that people remember items at the beginning (primacy effect) and at the end (recency effect) of a list better than items in the middle. Early items are rehearsed most and are therefore more likely to be stored in LTM, while there is less time for this in the middle of the list; the last items are still held in STM at test. Tasks with multiple lists also show a negative recency effect: items at the end of earlier lists are remembered worse, because they were not stored in LTM. These experiments provide evidence for the existence of separate short-term and long-term stores. Moreover, this idea is supported by cases of double dissociation of function, in which patients with different types of brain damage have deficits in either LTM or STM. However, LTM does appear to depend on processes in STM.
Working memory
Research shows that other subsystems may underlie performance on tasks such as digit span and the recency effect. This is where working memory (WM) comes in; it can be seen as the 'workplace' of the human brain. Precise definitions of WM, and of its relationship to STM and LTM, differ between researchers.
In Cowan's embedded-processes model, WM is seen as a limited-capacity focus of attention together with a temporarily activated subset of LTM. This approach emphasizes the interaction between attention and memory and considers WM in the light of LTM. Multiple-component models, on the other hand, suggest that WM can be subdivided into components whose primary function is coordinating resources, and they aim to identify and examine the structures that perform this function.
Baddeley's working memory model
According to Baddeley's working memory model, WM is not just a store for keeping information in consciousness but plays an important role in processing. The model comprises four components of WM, described below.
Phonological loop
The phonological loop is the component of WM that provides temporary storage and manipulation of phonological information. It has two subcomponents: the phonological store, where speech-based information is held for 2-3 seconds, and the articulatory control process, which rehearses information subvocally. The number of words that can be stored depends on the articulation time of the words (not on their number of syllables), so speakers of some languages can store more words than others. If someone is asked to repeat something other than the relevant information (articulatory suppression), the ability to rehearse subvocally is disrupted. Retaining the relevant information is also more difficult if irrelevant speech is present during learning, or if the words to be remembered sound very similar.
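The word-length effect described above can be sketched as a simple capacity calculation: how many words fit into a loop of roughly two seconds? The loop duration and per-word articulation times below are invented round numbers for illustration:

```python
def span_estimate(loop_seconds, seconds_per_word):
    """How many words fit in a phonological loop of the given duration,
    assuming each word takes a fixed time to articulate subvocally."""
    return int(loop_seconds // seconds_per_word)

# hypothetical articulation times: short words 0.3 s, long words 0.5 s
print("short words:", span_estimate(2.0, 0.3))
print("long words: ", span_estimate(2.0, 0.5))
```

Shorter words yield a larger estimated span, mirroring the finding that span depends on articulation time rather than on syllable count per se.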
Visuo-spatial sketchpad
In Baddeley's model, the visuo-spatial sketchpad is responsible for maintaining and manipulating visual and spatial information. This component also has two subcomponents: the visual cache (the store for visual information) and the inner scribe (which handles spatial processing). Research indeed supports the claim that the components for spatial and visual information, although strongly connected, are separate.
Central executive
The central executive in Baddeley's model is the most important part of WM; it is a coordinating system that regulates the functions and components described above. According to the supervisory attentional system (SAS) model, there are two types of cognitive control: automatic processes (for routine and well-trained tasks) and a supervisory process that can interrupt automatic processing and select an alternative schema. Research indeed supports the existence of two such separate control systems. For example, people often make capture errors, failing to deviate from routine actions. In addition, people with dysexecutive syndrome can perform well-trained routine tasks, but cannot learn new things or deviate from an established routine. Cases of utilization behavior show patients who exhibit spontaneous and uncontrollable actions or compulsive interaction with objects.
Episodic buffer
The episodic buffer is a later addition to Baddeley's model. The earlier model had no store of its own beyond the subsystems, so an explanation was needed for the apparent interaction of WM with LTM and for the fact that WM sometimes exhibits a much larger storage capacity. The episodic buffer is accessible to the central executive and the subsystems and has a connection to LTM. It is a temporary storage structure with limited capacity that allows integration of modality-specific information.
What are the functions and structure of the long-term memory? - Chapter 5
Long-term memory acts as a repository for all the memories we possess. It consists of two components: non-declarative and declarative memory.
Memory and amnesia
The amnesic syndrome is a permanent and pervasive memory disorder that affects many memory functions. It involves both anterograde amnesia, i.e. impaired memory for events after the onset of the disorder, and retrograde amnesia, i.e. loss of memories from before the onset of the disorder. Possible causes of the amnesic syndrome are brain surgery, infections, head injury or disorders such as Korsakoff's syndrome. In many patients with amnesia, linguistic ability and knowledge of concepts remain intact. Because this knowledge is often retained while other types of memories are lost, a number of models have been developed to explain this.
The structure of the long-term memory
Long-term memory (LTM) is a repository for all the memories we have. The multiple memory systems model states that LTM consists of several components responsible for different types of memories. Non-declarative or implicit memory refers to memories that we do not consciously retrieve, such as how to drive a car. Declarative or explicit memory refers to conscious memories of events, facts, people and places. Memory tests that use methods such as free recall (e.g. what is the capital of France?), cued recall (which city starting with a P is the capital of France?) and recognition (is Paris the capital of France?) mainly tap declarative memory. A disturbance of declarative memory often occurs in patients with amnesia.
Tulving (1972) proposed a three-part model of LTM. Within declarative memory he made a further distinction between episodic memory, memory for events and experiences, and semantic memory, memory for facts and knowledge about the world. Not everyone agrees with this distinction; it is not always clear whether a memory falls under episodic or semantic memory, for example in the case of autobiographical memories.
Non-declarative memory
Learning skills
Non-declarative memory plays a role in many different tasks, such as classical conditioning, motor skills and priming. Procedural memory is an example of non-declarative memory: skills such as driving a car, tying your shoelaces or signing your name. This kind of knowledge is acquired through practice and after a while often becomes automatic and unconscious.
Learning habits
The learning of habits takes place over time, through repeated associations between stimuli and responses. Because it is often difficult to examine learned habits without the influence of declarative memories, this type of research often uses probabilistic classification learning. Participants learn associations that are not self-evident and cannot simply be 'remembered'; learning is based on experience accumulated across many trials.
Repetition priming
Priming refers to an implicit memory effect in which exposure to a stimulus influences a later response. In a typical priming experiment, participants first see a list of uncommon words. Later they see words with missing letters, which they have to complete to form existing words. This kind of research shows that participants are primed by the previously presented word list: they complete those words more easily than words that were not on the list. Similar experiments have been used to show that repetition priming is intact in amnesia patients despite their impaired declarative memory. These findings support the assumption of a distinction between declarative and non-declarative memory.
Declarative memory
Episodic memory
Episodic memory within LTM is the system that enables us to remember previous experiences and consciously re-experience them. Three important features of episodic memory are that it is a form of mental time travel, that it has a connection with the self, and that mental time travel is associated with autonoetic consciousness. This type of consciousness allows us to imagine ourselves in the future, plan ahead and set goals. Episodic memory is severely limited in people with amnesia.
It is important to remember that memories in episodic memory are not an exact replica of the actual event; memories are constructive and are often supplemented with other information. Bartlett (1932) emphasized the role in remembering events of schemas: organized knowledge structures that allow us to apply experience to new situations. Schemas create expectations and can (unconsciously) be used to fill in missing parts of a memory. For example, research shows that participants who have to retell stories they have heard supplement them with their own knowledge of familiar events and story forms. In this way memories can become distorted.
Prospective memory
An important function of episodic memory is the ability to use memory to influence future behavior. Memory that allows us to keep track of plans and perform intended actions, 'remembering to remember', is called prospective memory. When it fails, this often involves the intrusion of routines, such as driving straight home (as always) instead of making a detour to post a letter (breaking the routine). Ellis (1988) distinguished between pulses, intentions tied to a specific time, and steps, intentions that can be carried out within a larger time frame.
Autobiographical memory
Autobiographical memories are episodic memories of events that a person experienced personally during his or her life. They are strongly associated with the self and can be seen as our conscious life history. These kinds of memories are susceptible to distortion by later events and by a person's self-image. Much research has been done into such false memories: inaccurate memories of events that happened differently or did not happen at all. False memories turn out to persist, sometimes even after someone has been told that the memory is not correct. They can be strengthened by imagination inflation: repeatedly imagining or recalling an event reinforces the false memory.
A déjà vu is an illusion of autobiographical memory and can be described as the knowledge that a situation has not been experienced before, combined with the feeling that it has. It is a very common experience, for which no established explanation yet exists.
Semantic memory
Semantic memory is the store of general knowledge about the world, the people in it and facts about ourselves. People who live in the same culture often share a large part of their semantic memory. Meta-memory refers to the ability to inspect and control the contents of our memory, to 'know whether we know something'. In amnesia, much of semantic memory, such as language and concepts, is retained. Whether other knowledge, such as things you learn at school, is also lost in amnesia is still under discussion. Research shows some evidence for the existence of a permastore: the long-term preservation of content that was acquired and rehearsed over and over again, even if it is rarely used afterwards. This may also hold for personal semantic memory.
It is clear that a general distinction can be made between declarative and non-declarative memory, and within the declarative memory between semantic and episodic memory. However, the categories show a lot of overlap in certain cases, and the degree of separation or overlap is the subject of further research.
How does one learn and forget? - Chapter 6
There are different ways and methods of learning, and different causes of forgetting.
Learning
The first step in learning new information is to encode that information into an internal representation in working memory. That representation must then develop into a memory trace: a mental representation of stored information. This happens through processing.
Levels-of-processing theory
According to the levels-of-processing theory, shallow encoding leads to weak retention, and deep encoding to improved retention and memory. Learning does not have to be intentional; incidental learning (learning without the intention to learn) can also work well. A difficulty for this theory, however, is that there is no objective measure of depth of processing.
Memory strategies
There are different memory strategies that improve memory performance. Categorization is a strategy in which items are classified into known categories, leading to better memory. The method of loci is a strategy in which a well-known route is imagined and images of the items to be remembered are linked to familiar places along the route. Interacting images is a strategy in which vivid and bizarre images of the items to be remembered are formed, interacting with one another in some way.
The effectiveness of these methods can be explained by the dual coding hypothesis. This theory states that the meaning of concrete words can be represented both verbally and visually. Because abstract words can only be represented verbally, they are more difficult to remember.
Encoding specificity
The principle of encoding specificity states that memory works better if the context of retrieval resembles the context of encoding. For example, if words are presented in capitals during learning, they are recognized better at retrieval when presented in capitals than in lower case.
Context-dependent retrieval
Research into context effects shows that memory works better if the external environment during testing matches the environment during learning. For example, words learned under water are remembered better when tested under water than when tested on land. Similar results have been found for matching internal physiological states or moods at learning and retrieval.
The spacing effect
The spacing effect refers to the phenomenon that material learned in spaced sessions is remembered better than material learned in one continuous session. Cramming the night before an exam is therefore less effective than studying the material in smaller portions over a longer period. There are several possible explanations for this effect. For example, there may be more variability in the presentation of the material across spaced sessions, so that more retrieval cues become linked to the learned material.
Forgetting
We speak of forgetting when someone cannot retrieve information that was previously available to memory. Forgetting has been studied for well over a century, starting with Ebbinghaus, who devised a method using nonsense syllables (e.g. FEC, DUV). He presented his results in terms of savings: the reduction in the number of trials needed to relearn material compared with the original learning.
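Ebbinghaus's savings measure can be written as a simple formula: savings = (original trials − relearning trials) / original trials × 100%. A minimal sketch with invented numbers:

```python
def savings(original_trials, relearning_trials):
    """Ebbinghaus-style savings score: the percentage of learning effort
    spared when material is learned a second time."""
    return 100 * (original_trials - relearning_trials) / original_trials

# hypothetical example: a syllable list first learned in 20 trials,
# relearned a day later in only 13 trials
print(savings(20, 13), "percent saved")
```

A positive score indicates that something of the original learning was retained, even when no syllables can be recalled outright; zero savings would mean relearning took just as long as the first time.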
Interference
Interference is a major cause of forgetting. We speak of proactive interference when previously learned material disrupts later learning, and of retroactive interference when later learning disrupts memory for previously learned material. Research shows that the more the interfering material resembles the originally learned material, the greater the degree of forgetting. A common method in forgetting research is the paired-associate paradigm: a memory task in which participants see word pairs during learning and then see one word of each pair at test, after which they must produce the other word of the pair.
Decay and consolidation
If interference played no role, would forgetting still occur? Unfortunately, it is impossible to investigate the effects of decay over time while excluding all possible interference.
One approach states that memories decay unless they are consolidated (strengthened). Research shows, for example, that memory is better if learning is followed by a period of sleep than by a period of normal daily activity. This positive effect of sleep or inactivity is also called retrograde facilitation.
In neuroscientific research on sleep, the emphasis is on long-term potentiation (LTP): the long-lasting improvement in signal transmission between two neurons resulting from their simultaneous stimulation. LTP is considered an important mechanism in learning and remembering. LTP cannot be generated during non-REM sleep; recent memories that are beginning to consolidate are therefore protected during non-REM sleep against interference from new memories.
Research into retrograde amnesia and the effects of alcohol and benzodiazepines also provides strong evidence for the idea that memories consolidate over time. The hippocampus plays an important role in this process. Alcohol and benzodiazepines affect memory in roughly the same way as sleep: when mental activity, and thus the formation of new memories, is reduced, previously formed memories are protected against retroactive interference. This is consistent with the idea that forgetting is a retroactive effect of the memory formation associated with normal mental activity.
Functional approaches to forgetting
Although forgetting is often seen as something negative, it would not be practical to remember everything you have ever learned. In dramatic cases, people may suffer from intrusive memories of traumatic events that they would rather not remember.
Retrieval-induced forgetting and directed forgetting
The retrieval-induced forgetting (RIF) paradigm refers to the impaired ability to retrieve items as a result of the earlier retrieval of related items. For example, if you repeatedly recall pleasant memories of a holiday, the less pleasant memories become harder to retrieve. In the directed forgetting paradigm, memory impairment is brought about by the instruction to forget certain items; when people later have to retrieve these items, this proves much more difficult.
The think/no-think paradigm
This paradigm is a memory manipulation in which participants are instructed not to recall a memory, even when a strong retrieval cue is present. Research with this paradigm shows that people can consciously regulate activation of the hippocampus, which is involved in recalling memories.
Everyday memory
An important problem with the memory studies discussed above is their ecological validity: the extent to which results of laboratory experiments can be applied to everyday situations. Memory research is not always representative of, or generalizable to, the 'real' world. Below, a number of findings are presented that are closer to everyday life.
Flashbulb memories
Flashbulb memories are vivid memories of a dramatic event and of the circumstances in which that event was experienced or first heard about. Although these memories were initially thought to be exceptionally vivid and accurate, research shows that large inaccuracies in flashbulb memories are common. Strikingly, people have great confidence in these memories, even when they turn out to be wrong.
Eyewitness testimony
Eyewitness accounts must also be treated with great caution. The stress and anxiety associated with the witnessed events often reduce memory performance. The formulation of the questions, and the gestures and body language of the interviewer, also have a great influence on the testimony that is given. These are all examples of retroactive interference.
Studying effectively
Research among students shows that three learning styles can be distinguished. In surface learning, students try to memorize texts without understanding them. In deep learning, students try to understand the material and give it meaning. In strategic learning, students try to anticipate which questions will appear in the exam and devise strategies to learn the minimum required. Deep learning appears to be the most effective method, especially when combined with strategic learning. Testing yourself on what you have learned also works well.
Which representations of knowledge are there? - Chapter 7
We use concepts, mental representations of classes of items, to represent all objects belonging to that category. Our long-term knowledge of the world is therefore based on concepts and relationships between concepts.
Theories of conceptual representation
There are different approaches to concepts, which are discussed below.
Definition approach
Some concepts, such as 'bachelor', can easily be defined (an unmarried man). Many other concepts, however, are harder to capture in a definition. Much research has been done into alternative ways of representing and using poorly defined concepts.
Prototype approach
Although people often think in categories, not all concepts are easy to place in a particular category. Typicality is the extent to which an object is representative of its category. People appear to be very good at making judgments on the basis of typicality. Rosch and Mervis suggested that members of a category share a family resemblance and that members can be scored for the extent to which they resemble one another. Judgments about typicality are then made on the basis of how similar an item is to other category members. The item with the greatest family resemblance, which therefore best represents the category, is called the prototype.
Categories and concepts usually form hierarchies, such as animal - dog - Pekingese. The middle level of the hierarchy is the most fundamental and is called the basic level of categorization. At this level, members of a category are very similar to one another, while the categories themselves are clearly distinct. For example, hammers and saws are clearly distinct from each other, while different types of saw are very similar; 'hammer' and 'saw' are therefore basic-level categories, with 'tool' as the superordinate level.
Although the prototype approach has many advantages, it also has a number of limitations. Abstract and ad hoc concepts are difficult to fit into the approach. It is also difficult to accommodate knowledge about the variability of members' characteristics, and the usefulness of that variability as a cue, within the idea of prototypes.
Exemplar-based approaches
A popular alternative to the prototype approach is exemplar theory. This states that categories are represented by stored examples (exemplars), each of which is linked to the name of the category; there is thus no single prototype. An advantage of this theory is that it represents variability within a category. For example, when people have to say whether an item is a pizza or a ruler knowing only that the item is 30 cm long, many opt for 'pizza': they can think of many kinds of pizza that are 30 cm across, while far less variable examples of rulers come to mind.
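The summed-similarity idea behind exemplar theory can be sketched in a few lines, in the spirit of exemplar models such as Nosofsky's generalized context model. All the stored lengths and the similarity parameter below are invented for illustration:

```python
import math

# invented exemplars: remembered lengths in cm
exemplars = {
    "pizza": [20, 25, 30, 33, 38, 45],  # pizzas vary widely in size
    "ruler": [30, 30, 30],              # rulers cluster tightly at 30 cm
}

def similarity(x, y, c=0.2):
    """Exponentially decaying similarity, a common choice in exemplar models."""
    return math.exp(-c * abs(x - y))

def categorize(length_cm):
    """Sum similarity to every stored exemplar, per category; pick the winner."""
    scores = {cat: sum(similarity(length_cm, e) for e in items)
              for cat, items in exemplars.items()}
    return max(scores, key=scores.get)

print(categorize(30))  # matches the tight ruler cluster -> 'ruler'
print(categorize(36))  # outside the ruler range, inside pizza's -> 'pizza'
```

Because every stored example contributes, the model is sensitive to the variability of a category: an item just outside the narrow ruler range still falls comfortably within the spread of pizza exemplars, something a single prototype per category could not capture as directly.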
Theory-knowledge-based approaches
Not all categories are based on superficial similarities or shared characteristics; some are based on goals (e.g. things you would save from a burning house). Others are very diverse, such as 'drunken actions'. Such categorizations are driven by knowledge rather than similarity. The theory-based approach therefore assumes that concepts contain information about their relationships with other concepts and about the relationships between their characteristics.
Essentialism
Essentialist approaches assume that all members of a particular category share an essential property. Barton and Komatsu (1989) distinguish three types of concepts. Nominal concepts have clear definitions, such as a triangle. Natural concepts are seen as naturally occurring kinds, such as cats, dogs and rain showers; their essential characteristic lies in their natural make-up. Artifact concepts relate to human-made objects defined in terms of their function, such as televisions and cars; the essential feature of this type is their function.
Grounded representations versus amodal representations
In many information-processing approaches, conceptual knowledge, including sensory and motor characteristics (touchable, rough, etc.), is represented amodally, i.e. by abstract representations. Barsalou, however, states that representation is grounded: the brain represents an object (for example a chair) in terms of what it looks like, how it feels to sit on it, and so on. Simulation, the re-enactment of a previous experience, plays an important role here. Research shows considerable support for the idea of grounded, modality-specific aspects of conceptual representation. Whether abstract concepts can be explained through re-enactment, however, remains controversial.
Imagery and concepts
Imagery and visuo-spatial processing: overlap?
Visual imagery is often studied in terms of its overlap with visuo-spatial processing: the mental manipulation of visual or spatial information. Because visuo-spatial tasks and imagery tasks often interfere with each other, there is reason to assume that both processes use the same mental and neural resources.
Scanning and comparing images
We often scan and compare mental images for practical purposes: would that cabinet fit through the door, is this cabinet larger than the one in the shop, and so on. Research supports the idea that scanning, comparing and rotating mental images is equivalent to manipulating 'pictures' in the head. However, Pylyshyn argues that imagery is only a by-product of underlying cognitive processes and has no functional role of its own; he is therefore convinced that amodal representations underlie the experience of imagery.
Ambiguity of images
The famous duck-rabbit figure and the Necker cube are good examples of ambiguous figures that generate alternative, alternating interpretations. Research into these figures shows that people tend to have a fixed interpretation of mental images, whereas this is not the case for real images. It is therefore plausible that mental images do not work exactly like real pictures.
Neuropsychology / neuroscience of imagery
If imagery is a form of re-experienced perception, the same brain areas should be involved in both processes. Indeed, research shows that activation of the occipital lobe and early visual cortex plays a role in both. However, some people with brain damage have intact visual perception but impaired imagery, and vice versa. So although the brain areas for perception and imagery overlap, they are not identical.
What is the motor system? - Chapter 8
The motor system includes the components of the central and peripheral nervous systems, together with the muscles, joints and bones that make movement possible.
Motor control
Woodworth (1899) was the first to propose separate phases for planning and controlling movement. In the twentieth century, motor control was approached mainly from a physiological perspective. Bernstein (1967) was the first to formulate the degrees-of-freedom problem: because the muscles and joints can move in countless possible ways, the question is how a particular movement is selected to achieve a particular goal. This resembles the inverse problem in the study of vision: there are also countless ways to interpret a 2-D image falling on the retina as a 3-D scene. Different approaches offer explanations for how movements are planned and executed.
Equilibrium hypothesis
This theory emphasizes the special relationship between the brain and the muscles. The muscles are represented as springs that, depending on the posture, exert more or less force because they are stretched. Every movement is thus a transition from one stable posture to another. However, this theory does not apply well to more complex movements.
Dynamic system theory
This theory describes motor control as a process of self-organization between an animal and its environment. Special mathematical techniques are then used to describe how the behavior of a system (in this case, the human body) changes over time.
Optimal control theory
This theory sees motor control as the evolutionary or developmental outcome of a nervous system that seeks to optimize organizational principles. It is in effect an advanced form of simple feedback mechanisms: movements are optimized on the basis of feedback about the extent to which a goal is achieved. To compensate for delays in feedback, a forward model is used, in which predictions are made about the relationships between actions and their consequences.
The above three theories all contribute significantly to explaining motor control. The equilibrium hypothesis shows that the complexity of a motor plan can be simplified on the basis of muscle characteristics. Dynamic systems theory shows that transitions between action states can be explained by the development of a system over time. Optimal control theory shows how optimal organizational principles can be integrated into the planning, production and observation of our actions. However, each theory explains only part of motor behavior.
Production of complex actions
Explaining how we achieve goals through a sequence of movements requires more interaction with other cognitive processes. The following theories focus on explaining more complex actions.
Action sequences
The associative chain theory states that the end of a particular action is associated with triggering the start of the next action in the sequence. Lashley discussed this in relation to language production: words in a sentence would, for example, be linked to each other by associative links. Evidence comes from the 'slip-of-the-tongue' phenomenon, in which you accidentally say another word that is associated with the word you actually wanted to say. However, this theory does not explain which mechanisms and overarching constraints guide the associative process.
Hierarchical models of action production
Since the different mechanisms in Lashley's model worked simultaneously and in parallel to create sequences, it was important to see how these mechanisms were organized. Miller and Estes, among others, proposed a hierarchical arrangement of schemas. The temporal aspect of an action sequence (e.g. making coffee), i.e. the order and how individual steps are triggered, can be explained by recurrent networks: artificial neural networks with connections between the units that create a loop of activation. Patterns of activation and inhibition within the hierarchy work through interactive activation, in which the activation of one unit inhibits the other units.
Brain damage and action production
Damage to the frontal cortex is often spread over different areas and can lead to various syndromes in which the patient makes errors in producing action sequences, such as action disorganization syndrome. This syndrome belongs to a broader family of movement disorders called apraxia, in which the patient loses the ability to perform certain motor actions even though the sensory and motor systems are intact.
Action representation and perception
Theories of action representation
The idea of the cognitive sandwich is that cognition is sandwiched between perception on one side and action on the other. The theories above fit this idea. Below, however, theories are discussed in which cognitive representations mix with representations of both perception and action.
Ideomotor theory
The ideomotor theory is a long-standing theory that sees action and perception as closely connected; a particular action, for example, is associated with the sensory outcomes of that action. In the nineties, this idea was elaborated in the common coding framework, a theory stating that production and perception share certain representations of actions. Instead of a translation from sensory codes to motor codes and vice versa, as in the cognitive sandwich, this theory posits a layer of representation where event codes and action codes overlap. Support for this approach comes from, among other things, research showing that interference occurs when the observation and the production of an action are called on at the same time.
Mirror mechanisms and action observation
Mirror neurons represent both the sensory aspects of observing an action and the motor aspects of producing it: neurons normally involved in performing an action are also sensitive to observing that action. Research shows that this is a general information-processing strategy rather than a narrowly limited mechanism. There is much controversy about the role of mirror neurons: they may be a way to discover the goal of an observed action, or a way to learn through imitation.
Embodied cognition
The embodied approach to cognition states that perception and action are very closely linked. This idea that perceptual representations of the world are connected with representations of actions is illustrated by common coding and mirror neurons. Embodied cognition emphasizes the importance of both our body and our environment in cognition. Metaphorical gestures are an example: we often use gestures to express or clarify abstract as well as concrete concepts. This possibly provides evidence for the view that our ideas are embodied in physical actions.
In what ways are problems solved? - Chapter 9
Problems can be solved through restructuring, through other representation or insight, or through creative solutions.
Problems and problem types
A problem can be defined as a situation in which you have a goal but do not know how to achieve it. A problem is poorly defined or well defined depending on how much information you have at the outset about the initial situation, the possible actions and the goal. Knowledge-rich problems require specialist knowledge, while knowledge-lean problems require little or none. Adversary problems involve a thinking opponent who tries to thwart your goals (e.g. chess), while non-adversary problems (e.g. a puzzle) do not.
History and background
The Gestalt approach
The Gestalt approach sees problem solving as seeing new patterns, i.e. restructuring, in which insight and understanding play a major role. A restructuring that leads to a quick solution is called an insight. There are two main barriers to insight: set, the tendency to persist in a particular approach to a problem, and functional fixedness, the difficulty of coming up with a new function for a familiar object. A set can be caused by intensive experience or training on certain problems, while functional fixedness occurs more often in adults or when the object is presented in such a way that its usual function is easily associated with it.
The information processing approach
Within the information processing approach, human problem solving is compared with the strategies used by computer programs. A number of concepts have emerged from this. The problem space is an abstract representation of the possible states a problem can take. It includes the subtypes state-action space, a representation of how a problem can be transformed from the starting state via intermediate states to the goal, and goal-subgoal space, a representation of how a problem can be broken up into subgoals and sub-subgoals.
State-action spaces
With a larger problem space, it is more difficult to find the goal state. There are three possible main strategies. With depth-first search, only one state is generated from each intermediate state (for example, always choosing the right-hand branch in a decision tree). This strategy does not guarantee that the goal is found or that the best solution is reached. With breadth-first search, every possible move is considered at each level. This is a very intensive strategy, but it does ensure that the goal is reached. With progressive deepening, the depth-first strategy is used down to a limited depth, after which the search returns to the start and searches again to a greater depth. The goal is guaranteed to be reached, and this may be faster than a complete depth-first search if the solution happens to be found early.
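The three strategies can be sketched in code. This is a minimal illustration, not from the book: the state-action space is a hypothetical tree with invented state names, and each function returns the path from the start state to the goal.

```python
# Hypothetical state-action space: each state maps to its successor states.
from collections import deque

tree = {
    "start": ["A", "B"],
    "A": ["A1", "A2"],
    "B": ["goal", "B2"],
    "A1": [], "A2": [], "B2": [], "goal": [],
}

def depth_first(state, goal):
    """Follow one branch at a time; no guarantee on infinite trees."""
    if state == goal:
        return [state]
    for nxt in tree.get(state, []):
        path = depth_first(nxt, goal)
        if path:
            return [state] + path
    return None

def breadth_first(start, goal):
    """Consider every move at each level; guaranteed to find the goal."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in tree.get(path[-1], []):
            queue.append(path + [nxt])
    return None

def progressive_deepening(start, goal, max_depth=10):
    """Repeated depth-first search to an increasing depth limit."""
    def limited(state, depth):
        if state == goal:
            return [state]
        if depth == 0:
            return None
        for nxt in tree.get(state, []):
            path = limited(nxt, depth - 1)
            if path:
                return [state] + path
        return None
    for limit in range(max_depth + 1):
        path = limited(start, limit)
        if path:
            return path
    return None

print(breadth_first("start", "goal"))          # ['start', 'B', 'goal']
print(progressive_deepening("start", "goal"))  # ['start', 'B', 'goal']
```

On this tiny tree all three strategies succeed; the differences in guarantees and cost only show up on larger or unbounded problem spaces.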
Goal-subgoal spaces
In these types of problem space, the problem is subdivided into subgoals and sub-subgoals. This is useful when there are many possible alternative actions, because it reduces the problem.
Insight
The above strategies apply to problems that can be solved by searching within a particular representation. For problems that require a change of representation, however, these approaches are less applicable. Although the Gestalt approach states that solving insight problems requires the special process of restructuring, some scientists believe it requires only normal search and problem-analysis processes. Neurological research shows that different neural processes are active when solving insight and non-insight problems. It therefore seems that there is indeed a fundamental distinction between the two types of problem.
Recent theories of insight
There are two main approaches to problem-solving through insight.
Representational change theory explains insight on the basis of different phases: problem perception, problem solving (heuristic search processes), impasse (the initial representation leads to a dead end), restructuring (re-encoding of the representation), partial insight and full insight. Restructuring requires constraint relaxation: loosening the restrictions on what should or should not be done to achieve the goal. This theory seems correct for certain algebra problems, but whether it also applies to other problem domains remains an open question.
Progress monitoring theory states that the major source of difficulty in insight tasks is the use of inappropriate heuristics. It assumes that people use both a maximization heuristic, trying to approach the goal as quickly as possible, and progress monitoring, keeping an eye on whether progress is fast and efficient enough. If it is not, criterion failure occurs. According to this theory, insight is most often achieved when criterion failure is followed by constraint relaxation. There is a lot of evidence for the theory; however, it does not clearly explain how new strategies are arrived at.
Knowledgeable (expert) problem solving
Achieving expertise requires roughly ten years of intensive training: deliberate practice, focused training and coaching.
Research shows that experts often have an extensive memory for familiar patterns that trigger the right actions. However, this benefit is specific to the domain of expertise; chess experts, for example, have no advantage over laymen in non-chess-related memory tasks. Experts also represent or 'see' problems differently than laymen, because they can draw on a more extensive set of schemas.
Creative problem solving
A creative solution is one that is new and in some way valuable or useful. Approaches to research into creative thinking and problem solving fall into personal accounts on the one hand and theories or laboratory studies on the other. Personal accounts were mainly used as a basis for models of creative problem solving.
Wallas's four-phase analysis
This analysis consists of four phases: preparation (becoming familiar with the problem; does not yet lead to a solution), incubation (the problem is set aside for a while), illumination (inspiration or insight; does not immediately lead to a solution) and verification (the solution is reached by consciously testing ideas from the illumination). According to Wallas, the incubation phase is crucial for solving the problem, which is supported by research. There are several possible explanations for this effect. One might think that conscious work is done during incubation, but research shows that this is not the case. Research results indicate that unconscious work plays an important role during incubation. Another explanation is that the break is simply an opportunity to rest and return to the problem with more energy. A final possibility is that misleading strategies, wrong assumptions and related mind sets are forgotten during the incubation phase.
Information processing theory of creative processes
According to the geneplore model, there are two important phases in creative work: in the generative phase, preinventive structures are produced, and in the exploratory phase these structures are interpreted.
Increasing idea production
Is it possible to take conscious steps to increase the flow of creative ideas? Research shows that small cues can have large unconscious effects on our thinking. People turn out to become more creative when they first spend a few minutes thinking about creative subjects than when they think about non-creative subjects. A creative environment can also unconsciously generate more creativity. The brainstorming method, which encourages producing as many unusual ideas as possible, likewise has a positive influence on creative thinking.
How does one make decisions? - Chapter 10
Making a decision is a cognitive process in which a choice is made between alternative possible actions. Decisions can be risky, where there is a chance that one of the options could lead to negative consequences, or risk-free, with the outcomes of the options being certain. Decision problems with one attribute have alternatives that differ on a single dimension. More often, however, it concerns decision problems with multiple attributes, with alternatives that differ on several dimensions.
Theory of the expected value
A number of seventeenth-century mathematicians argued that choices should maximize expected value. The expected value is the average value in the long run, determined by the probability and size of an outcome. In reality, however, people are usually not guided by maximizing expected value: they gamble, buy lottery tickets, take out insurance and make other unprofitable choices. One possible explanation is risk aversion: the tendency to avoid risky choices, even if they offer a higher expected value. Another is risk seeking: the tendency to take risky choices, even if risk-free alternatives offer a higher expected value. It is plausible that people base their choices not on objective monetary values or probabilities, but on subjective probabilities: how likely someone thinks a particular outcome is, independent of the objective probability.
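The expected value calculation can be sketched in a few lines; the lottery numbers below are invented for illustration and are not from the book.

```python
# Expected value: probability times payoff, summed over all outcomes.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# A hypothetical lottery ticket: costs 2 euros, pays 100 euros with p = 0.01.
ticket = [(0.01, 100 - 2), (0.99, -2)]
print(expected_value(ticket))  # about -1.0: negative, yet people buy tickets
```

A strict expected-value maximizer would never buy this ticket, which is exactly the gap between the normative theory and observed behavior.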
Utility and expectation theory
The idea of utility, the subjective value of a choice, is emphasized in utility theory. In the case of money, for example, this theory states that the utility of an amount decreases the more money you already have. Prospect theory explains decisions in terms of relative gains and losses. Loss aversion plays an important role here: the loss of 10 euros, for example, has a more negative utility than the gain of 10 euros has a positive one. Related to this is the endowment effect: the tendency to overvalue an item you own and demand more money to sell it than you would pay to buy it. The status quo bias is the strong preference for maintaining the current state of affairs and avoiding change.
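Loss aversion can be made concrete with a prospect-theory-style value function. This is a hedged sketch: the exponent and the loss-aversion coefficient (lambda = 2.25) follow commonly cited Tversky and Kahneman estimates, not figures from this chapter.

```python
# A prospect-theory-style value function relative to a reference point.
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# Loss aversion: losing 10 euros hurts more than winning 10 euros pleases.
print(value(10))   # about 7.59
print(value(-10))  # about -17.07
```

The asymmetry in the two printed values is the loss-aversion pattern the text describes: the same amount looms larger as a loss than as a gain.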
Subjective opportunities and expectation theory
Prospect theory thus states that perceived probabilities and values differ systematically from objective ones. Because loss aversion plays a major role, the way alternatives are presented in a choice problem has a big influence (framing). People who are not affected by framing show invariance.
Making probability assessments
Tversky and Kahneman state that heuristics, such as the availability heuristic and the representativeness heuristic, play a major role in making probability judgments.
Availability
With the availability heuristic, the probability or frequency of an event is estimated by how easy it is to come up with examples of that event. Because availability is based not only on frequency but also on how recently the event occurred and on its emotional impact, this heuristic can lead to false probability judgments.
Representativeness
With the representativeness heuristic, the frequency or probability of an event or object is estimated on the basis of how representative or typical it is of its category. The conjunction fallacy plays a role here: the erroneous assumption that the conjunction of two events (A and B) is more likely than either A or B alone.
The base rate of an event is its overall probability in a population. For example, the base rate of engineers in the Netherlands is the probability that a randomly selected person in the Netherlands is an engineer. Research shows that people often ignore the base rate. This happens especially when information is presented in percentages; when the information is formulated in frequencies, base-rate neglect is often reduced or even eliminated.
The affect heuristic
In the case of the affect heuristic, goal attributes are replaced by readily available feelings or affective judgments. For example, when people hear about the risks of nuclear energy, their assessment of its potential benefits drops sharply. The risks and benefits are therefore not assessed independently; they strongly influence each other.
Decision processes for alternatives with multiple attributes
Multi-attribute utility theory
Even when there are no risks, it is demanding to choose between options that differ on many attributes. Multi-attribute utility theory states that the decision maker must 1) identify the relevant dimensions, 2) assign relative weights to the attributes, 3) calculate a total utility for each option by summing the weighted attribute values, and 4) choose the option with the highest weighted total. Difficulties with this approach include that the relevant dimensions are not always known, and that it is usually impossible to assign an objective value to an attribute.
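The four steps can be sketched as a weighted sum. The attribute names, scores and weights below are invented for illustration (choosing between two hypothetical apartments).

```python
# Multi-attribute utility as a weighted sum over attribute scores.
def maut_choice(options, weights):
    """Steps 3 and 4: total weighted utility per option, pick the highest."""
    def utility(scores):
        return sum(weights[a] * s for a, s in scores.items())
    return max(options, key=lambda name: utility(options[name]))

# Steps 1 and 2: identify dimensions and assign relative weights (hypothetical).
weights = {"price": 0.5, "location": 0.3, "size": 0.2}
options = {
    "apartment_A": {"price": 7, "location": 4, "size": 8},  # utility 6.3
    "apartment_B": {"price": 5, "location": 9, "size": 6},  # utility 6.4
}
print(maut_choice(options, weights))  # apartment_B
```

Note that steps 1 and 2 (choosing dimensions and weights) happen outside the code entirely, which is exactly where the text locates the difficulty of this approach.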
Elimination by aspects
A somewhat less demanding strategy is elimination by aspects. Here you select an attribute and eliminate all options that do not meet the criterion level for that attribute. You then repeat this with the remaining attributes until one option is left. With this strategy, however, the importance assigned to the attributes can affect the order of elimination, and that order can determine which option survives.
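A minimal sketch of elimination by aspects, with hypothetical cars and criterion levels; the attribute order in the criteria list stands in for the importance ordering the text mentions.

```python
# Elimination by aspects: filter options attribute by attribute.
def eliminate_by_aspects(options, criteria):
    """criteria: ordered list of (attribute, minimum_level) pairs."""
    remaining = dict(options)
    for attribute, minimum in criteria:
        survivors = {name: attrs for name, attrs in remaining.items()
                     if attrs[attribute] >= minimum}
        if survivors:          # never eliminate every remaining option
            remaining = survivors
        if len(remaining) == 1:
            break
    return list(remaining)

cars = {
    "car_A": {"safety": 9, "economy": 4},
    "car_B": {"safety": 7, "economy": 8},
    "car_C": {"safety": 5, "economy": 9},
}
# Starting with safety keeps A and B; economy then eliminates A.
print(eliminate_by_aspects(cars, [("safety", 7), ("economy", 6)]))  # ['car_B']
```

Reversing the order of the two criteria would eliminate car_A at the first step, which illustrates why the order of attributes can change the outcome.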
Satisficing
The fundamental idea of satisficing is that people often do not choose to spend time and effort in maximizing utility, but are satisfied with a choice that meets a minimum acceptable level.
Testing multiple-attribute choice models
Research shows that people often do not use just one decision strategy when choosing between alternatives with multiple attributes. Instead, they use strategies that compromise between minimizing cognitive workload and maximizing the utility of the outcome.
Two-systems approach to making decisions
The two-systems approach to making decisions states that there are two distinct cognitive systems. System 1 ensures fast intuitive thinking and System 2 ensures slow, conscious thinking. When making decisions one of the two systems is used, depending on the importance of the decision.
Fast and economical heuristics: the adaptive toolbox
According to Gigerenzer et al., many simple heuristics have considerable validity in daily life and are sometimes as effective as, or more effective than, complex methods. Together these heuristics form an 'adaptive toolbox', since each heuristic is valid for the real-life situations in which it developed. Even in important situations, such as when a doctor has to make a diagnosis, heuristics such as a fast-and-frugal decision tree turn out to be used.
Heuristics and consequentialism
The above approaches are based on consequentialism : the vision that decisions are made on the basis of the consequences that are expected to follow from the different choices. However, people often turn out to make non-consequentialist decisions. This is often the result of simple heuristics that often work well, but sometimes fail.
The omission bias is the tendency to judge negative consequences resulting from an omission (for example, not vaccinating your child) as less bad than the same consequences resulting from an action (vaccinating your child).
Although from the consequentialist approach punishment is only valuable if it deters and changes behavior, in practice people appear to see retribution as the main function of punishment. They thus attach little weight to the consequences of punishment.
Finally, with new laws (e.g. higher taxes on CO2 emissions), people often indicate that they agree with the consequences of the law (better for the environment), but would still not vote for it.
Making naturalistic decisions
Making naturalistic decisions means making real-life decisions in the field. In the critical incident method, people are asked to describe a recent situation in which they had to make an important decision. A common strategy among professionals turned out to be recognition-primed decision making, in which decisions are based on expertise and the recognition of cues in the environment.
The question is whether theories such as multi-attribute utility theory can be applied to naturalistic decisions. Research shows that under time pressure, people do not decide by deliberately weighing all options, but often choose the first option that comes to mind. For important decisions without time pressure, the decision process comes closer to the theory.
Neuroeconomics: neuroscientific approaches to decision making
Neuroeconomics is the study of the neural processes underlying economic decisions. This research shows that the utility or pleasure of a range of options can be represented by reward systems in the brain: an option with higher utility stimulates the reward systems more than an option with lower utility. Research also shows that System 1 activity, as discussed earlier, is driven by the limbic system, while System 2 activity is reflected in the lateral prefrontal cortex.
The aging brain and financial decisions
Research shows that older people more often make mistakes in financial decisions, because they overweight potential benefits and underweight disadvantages.
The psychology of making financial decisions and economic crises
Taking risks is an important part of financial decision making. Research shows that in uncertain situations (such as a financial crisis) people base their decisions on perceived risk rather than objective risk. It also appears that in other financial decisions, such as buying or selling shares and buying on credit, people are prone to cognitive biases such as overconfidence.
What is inductive and deductive reasoning? - Chapter 11
Reasoning refers to the cognitive process of deriving new information from old information. Inductive and deductive reasoning do this in different ways.
Deductive reasoning
Deductive reasoning is drawing logically necessary conclusions from given information. The premises are the statements that are assumed to be true and from which the conclusion is drawn. Valid arguments are arguments in which the conclusion must be true if the premises are true. There are two types of deductive reasoning. In propositional reasoning, the statements are linked by logical relations such as 'and', 'or', 'not' and 'if' (for example: if it is Tuesday, we have a statistics exam; we do not have a statistics exam, so it is not Tuesday). In syllogistic reasoning, the statements are linked by quantifiers such as 'some', 'none' and 'all' (for example: all apples are red, some apples are sweet, so some red things are sweet).
Propositional reasoning
Logicians have developed a number of inference rules that can be used to draw correct conclusions from patterns of propositions. Some examples are:
1. Modus ponens: if P then Q, and P is true, then Q is true. For example: when it is Saturday, I go to the cinema. It is Saturday, so I am going to the cinema.
2. Modus tollens: if P then Q, and not-Q, then not-P. For example: when it is Saturday, I go to the cinema. I am not going to the cinema, so it is not Saturday.
3. Double negation: not not-P, therefore P. For example: it is not the case that it is not Saturday, so it is Saturday.
Two common fallacies are the following:
1. Affirming the consequent: from 'if P then Q' deriving that if Q is true, P is also true. For example: when it is Saturday, Tom goes to the cinema. Tom goes to the cinema, so it is Saturday.
2. Denying the antecedent: from 'if P then Q' deriving that if not-P, then not-Q. For example: when it is Saturday, Tom goes to the cinema. It is not Saturday, so Tom is not going to the cinema today.
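A small truth-table check (illustrative, not from the book) makes clear why the first two rules are valid and the two fallacies are not: an inference is valid only if the conclusion holds in every case where all the premises hold.

```python
# Truth-table test of propositional inference patterns.
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    """premises, conclusion: functions of the truth values (p, q)."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

modus_ponens  = valid([implies, lambda p, q: p], lambda p, q: q)
modus_tollens = valid([implies, lambda p, q: not q], lambda p, q: not p)
affirm_conseq = valid([implies, lambda p, q: q], lambda p, q: p)
deny_anteced  = valid([implies, lambda p, q: not p], lambda p, q: not q)

print(modus_ponens, modus_tollens)   # True True
print(affirm_conseq, deny_anteced)   # False False
```

The counterexample rows are exactly the Tom cases from the text: Tom can go to the cinema on a non-Saturday, so Q without P is possible.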
Research shows that people are better at recognizing modus ponens inferences as valid than modus tollens inferences. It also appears that incorrect reasoning can result from misinterpreting the premises. Take the premise 'If there is a dog in the box, then there is an orange in the box': you might think this means 'If there is no dog in the box, then there is no orange in the box'. Whether that is true depends on whether the 'if ... then' relationship is read as an equivalence, meaning 'if, and only if', in which case the assumption holds. If the relationship is read as a material implication ('if ... then' means only 'if ... then'), the assumption does not hold.
The mental logic approach states that people have a limited number of mental inference rules (schemas) that allow direct inferences when the conditions of a schema are met. According to this model, there are 16 basic schemas with which people make few mistakes. Another approach is the mental models approach, which assumes that people solve logical reasoning problems by forming mental representations of possible states of the world and drawing inferences from those representations. By explicitly representing only what is true in these models, the load on working memory is minimized. This latter approach also applies to syllogistic reasoning.
Syllogistic reasoning
As explained earlier, the task in a syllogism is to see which conclusion follows from a number of assumptions about categories of things. If the conclusion does not necessarily follow from true premises, the argument is invalid.
Research shows that people have far more difficulty with syllogisms when the terms used are abstract than when they are concrete. Another source of error is the atmosphere effect: the tendency to draw conclusions that are influenced more by the form of the premises than by the logic of the argument. If, for example, both premises contain 'all', people are inclined to accept a conclusion with 'all'. Other explanations for false conclusions in syllogisms are conversion effects (for example, assuming from 'All X are Y' that 'All Y are X') and probabilistic inference (for example, arguing that 'Some cloudy days are wet' and 'some wet days are uncomfortable', so 'some cloudy days are uncomfortable').
Henle (1962) argued, against accounts such as the atmosphere hypothesis and conversion effects, that people do in fact reason rationally. She stated that when people reach invalid conclusions, it is often because they interpret the material differently than intended, or undertake a different task than the one asked of them.
Research shows that people from collectivist cultures, where practical and contextual knowledge is more important than formal, abstract knowledge (as in individualistic cultures), often interpret logical questions as a real demand for information from the real world. Individuals from individualistic cultures, on the other hand, often see these kinds of questions as decontextualized logical issues.
The figural bias refers to the effect of the layout of a syllogistic figure (e.g. AB, BA or BC, AB) on the preferred conclusion. For example, given 'Some parents are scientists. Some scientists are drivers. So ...?', many people conclude 'Some parents are drivers' rather than the equally valid form 'Some drivers are parents'. Although this effect is not predicted by the atmosphere hypothesis, conversion effects or probabilistic inference, it is explained by the mental model theory discussed earlier.
The belief bias is the tendency to accept invalid but believable conclusions and to reject valid but unbelievable conclusions.
Inductive reasoning: testing and generating hypotheses
Inductive reasoning is the process of deriving probable conclusions from given information. There are two types of inductive task. In hypothesis testing, hypotheses are assessed for truth in the light of given data. In hypothesis generation, possible hypotheses are derived from data for later testing. In both processes, a hypothesis can never be definitively proven, only refuted.
In hypothetico-deductive reasoning, a hypothesis is tested by deriving its necessary consequences and determining whether these consequences are true or false.
Hypothesis testing
A well-known task for examining hypothetico-deductive testing is the four-card selection task. People are asked to test a rule (for example, 'if P then Q') by being presented with four cards showing P, Q, not-P and not-Q; the other side of each card also shows P, Q, not-P or not-Q. People must test the rule by choosing which cards to turn over.
If a rule of the form 'if P then Q' is tested, there are four possible combinations we can find: P and Q, P and not-Q, not-P and Q, and not-P and not-Q. Only the second combination (P and not-Q) is inconsistent with the rule. Research shows that people have a tendency towards verification and confirmation when testing such a hypothesis: they turn over the cards showing P or Q, and not the card that could falsify the hypothesis (not-Q). However, if the information on the cards is concrete rather than abstract, people perform better.
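The logic of the task can be laid out as a small truth table. The Python sketch below (an illustration, not the original experimental materials) enumerates the four combinations and confirms that only P together with not-Q violates the rule:

```python
from itertools import product

def rule_holds(p, q):
    """The conditional 'if P then Q' is false only when P holds and Q does not."""
    return (not p) or q

# The four possible front/back combinations for a card.
scenarios = list(product([True, False], repeat=2))
violations = [(p, q) for p, q in scenarios if not rule_holds(p, q)]
print(violations)  # [(True, False)] -- only P together with not-Q

# Hence only the P card (which might hide not-Q on its back) and the not-Q card
# (which might hide P) can reveal a violation; turning the Q card is uninformative.
```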
A possible explanation for the poor performance on the four-card selection task is that people misinterpret the task but then reason correctly with their incorrect interpretation. For example, there may be ambiguity about whether 'if P then Q' also implies 'if Q then P'. Another explanation is the matching bias: the tendency simply to select the cards showing the symbols mentioned in the rule. It is also possible that people do better at the task when the situations reflect real rules that they know and for which they can easily come up with examples.
Social contract theory suggests that rules expressing a social exchange (a benefit paid for by a cost) are easily solved, because the correct card choices would expose cheating. This would result from an evolved mechanism for detecting cheats (for example, someone who shared in the spoils of the hunt but did not take part in it). Research into this theory shows that deontic rules, i.e. rules that concern duties and use terms such as 'must' and 'should', facilitate performance in the four-card selection task.
Generating and testing hypotheses
Unlike in the four-card selection task, in daily life you more often have to generate as well as test a hypothesis, rather than being given a rule. This is investigated, for example, in Wason's 2-4-6 task (a kind of reversed twenty-questions game), in which people are given three numbers and must discover the rule by which the numbers were generated. They then produce further number triples, and the experimenter indicates whether each fits the rule or not. The results show that people often generate far too restrictive hypotheses instead of the simple rule that actually holds. Very little use is made of a falsification strategy; participants are mainly busy generating triples that fit the hypothesis they have come up with. Other types of task also provide evidence for the confirmation bias: the tendency to seek evidence that confirms rather than refutes one's hypothesis.
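A toy simulation makes the point about strategy. In the sketch below, the hidden rule and the over-restrictive hypothesis are the standard textbook ones, but the specific test triples are invented for illustration:

```python
def hidden_rule(triple):
    """The experimenter's actual rule: any ascending triple."""
    a, b, c = triple
    return a < b < c

def hypothesis(triple):
    """The participant's over-restrictive guess: 'increases in steps of 2'."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

confirming_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]   # all fit the hypothesis
falsifying_test = (1, 2, 10)                              # violates the hypothesis

# Confirming tests all get 'yes', so they can never distinguish the
# restrictive hypothesis from the true, broader rule.
print([hidden_rule(t) for t in confirming_tests])   # [True, True, True]

# The falsifying test also gets 'yes' -- informative, because the
# participant's hypothesis predicted 'no'.
print(hypothesis(falsifying_test), hidden_rule(falsifying_test))  # False True
```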
What is language production? - Chapter 12
Language production refers to a number of processes by which we convert thoughts into language output in the form of speech, gesture or writing. Language production is important for many skills, such as social cognition (the ways in which people make sense of themselves and others in order to function effectively in a social world), mental representation and thinking. Speech production is conceptually driven; it is a top-down process that is influenced by cognitive processes such as thoughts, beliefs and expectations.
Language and communication
Language is our primary means of communication and forms the basis of the majority of social interactions.
Language universalia
There are about 6,000 spoken languages in the world, varying in aspects such as the number and type of sounds, word order and vocabulary size. According to Aitchison, there are a number of absolute universals that apply to all languages: for example, they have vowels and consonants, can express nouns, verbs, negation and questions, and are structure-dependent. However, these universals quickly become problematic: sign languages, for example, do not use vowels and consonants, and tone languages also use pitch to change the meaning of a word.
Hockett proposed 16 characteristics of human language that distinguish it from animal communication systems:
1. Vocal-auditory communication channel; there is a speaker and a listener.
2. Broadcast transmission and directional reception: speech is emitted from the source (the speaker's mouth) and localized by the listener.
3. Rapid fading: the spoken message decays after production.
4. Interchangeability: the speaker can also be a listener and vice versa.
5. Feedback: the speaker has access to the message and can check its contents.
6. Specialization: whether we whisper or shout, the message remains the same.
7. Semanticity: Sounds within speech refer to objects and entities in the world: they have meaning.
8. Arbitrariness: the relationship between the spoken word and what it refers to is arbitrary.
9. Discreteness: the speech signal is composed of discrete units.
10. Displacement: We can use language to refer to things that are not in the current time or location.
11. Productivity: Language allows us to create new expressions.
12. Cultural transmission: language is learned through interaction with more experienced language users within a community.
13. Duality: Meaningful elements are created by combining a small set of meaningless units.
14. Prevarication: Language can be used to lie or deceive.
15. Reflexivity: We can use language to communicate about language.
16. Learnability: a speaker of one language can learn another language.
Components of language
A phoneme is the smallest unit of sound that can distinguish meaning within a language. Phonetics is the study of the raw sounds that can be used to make words (phones). There are about 100 of these, but no language uses all of them. Allophones are different phones (such as the 't' in 'trumpet' versus 'tender') that are perceived as the same phoneme. A phoneme is thus a relatively abstract category. The ability to perceive differences between allophones decreases with age. Phonotactic rules describe which combinations of sounds are allowed in a language.
Morphemes are the smallest meaning-bearing units of a language. They are the building blocks of words, and a single word can contain several morphemes. For example, the word 'fathers' has two morphemes: the free morpheme 'father' (which can occur independently) and the bound morpheme 's' (which has no meaning unless attached to a free morpheme). Function words, such as prepositions, provide a grammatical structure that shows how content words relate to each other within a sentence.
A word is the smallest unit of grammar that can be produced meaningfully and independently; it consists of one or more morphemes. Semantics is the study of meaning.
The productivity of language refers to the possibility of generating new utterances. Two aspects of the language system enable us to use language productively: syntax and morphology. Syntax describes the rules that determine the construction of phrases and sentences, including word order and the embedding of phrases within sentences. Recursion refers to the possibility of extending sentences indefinitely by embedding phrases within phrases.
Discourse refers to speech involving multiple sentences: dialogue, conversation and narrative. Pragmatics refers to the understanding of the communicative functions of language and the conventions that govern language use. Effective discourse is based on shared understanding between conversational partners, such as knowing the rules of turn-taking and cooperation. Grice identified four conversational rules or maxims of effective conversation: the maxim of quantity (the speaker must provide enough information to be understood, but not too much), the maxim of quality (the speaker must provide accurate information), the maxim of relevance (the speaker must provide relevant information) and the maxim of manner (ambiguity and vagueness must be avoided). If one of these rules is broken, more cognitive processing is required to understand the conversation or to respond to the other person.
Speech errors
Many theories of speech production arise from analyses of speech errors: errors in normal daily language use, errors elicited in the laboratory, or errors arising from brain damage.
Hesitations and pauses
Disfluencies are hesitations or interruptions of normal fluent speech, such as silences or saying 'um'. Occasionally, disfluency facilitates understanding: saying 'um' seems to increase listeners' attention to the words that follow.
Slips of the tongue
Fromkin was the first to produce a systematic description of error types. She discovered that errors are not arbitrary but systematic, and therefore informative about the nature of the underlying processing. The majority of speech errors are sound-based, and errors usually occur at a single linguistic level (for example, phonemes or morphemes).
The lexical bias refers to the tendency of phonological speech errors to result in real words. This may be because non-words are detected and repaired earlier, while errors that form real words tend to slip past the 'monitor'. In addition, content words are exchanged with other content words, while function words are exchanged with other function words. Errors are also consistent with the stress pattern of the utterance.
Tip-of-the-tongue phenomenon
If something is on the tip of your tongue, it means a temporary inability to gain access to a certain word that you do know. Research shows that this condition is universal, occurs about once a week, occurs more often in old age, often involves proper names, and is often accompanied by the availability of a first letter.
Theories of speech production
There is general agreement that speech production involves a number of phases: conceptualization (the thought is formed and prepared to be conveyed in language), formulation of a linguistic plan, articulation of the plan, and monitoring of the output.
Modular theories of speech production
1. Garrett's model
Modular theories state that speech production proceeds through a series of phases or levels, each with a different type of processing. According to Garrett's model, speech is produced through a number of phases in a top-down manner: the conceptual level (the meaning is selected), the functional level (content words are selected), the positional level (content words are put in order and function words are selected), the phonological level (speech sounds are selected) and the articulation level (sounds are prepared for speech). The idea that content and function words are treated differently is supported by research. On the other hand, the model does not explain non-plan-internal errors, which occur when the intrusion comes from outside the planned content of an utterance. For example: you are standing in front of the library and want to say 'let's get coffee', but instead say 'let's get a book'.
2. Levelt's model
Levelt et al. developed a serial model called WEAVER++, which focuses on the production of single words. The first two phases in the model concern lexical selection, followed by three phases of form encoding, ending in articulation. The model attributes an important role to self-monitoring at various levels throughout the processing phases; this allows errors to be detected and repaired, a process partly driven by speech comprehension. However, the model does not explain errors that result from interference from lower to higher levels.
Interactive theories of speech production
1. The Dell model
Dell's spreading-activation approach is based on connectionist principles (see Chapter 1) and uses the concept of spreading activation in a lexical network. Processing here is interactive: activation at one level can affect processing at other levels. The model has four levels: semantic, syntactic, morphological and phonological. A word unit can influence phonological units (top-down spread), but also semantic units (bottom-up spread). The model explains many patterns in speech errors and some of the errors made by people with aphasia. However, it pays little attention to the semantic level. An optimal model of speech production may arise from a combination of modular and interactive approaches.
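The core mechanism can be sketched with a toy network. Everything below (node names, link structure, spreading rate) is invented for illustration; the real model has richer levels and dynamics:

```python
# Undirected links allow both top-down and bottom-up spread, as in an
# interactive model.
links = {
    "CAT(concept)": ["cat(word)"],
    "cat(word)": ["CAT(concept)", "/k/", "/ae/", "/t/"],
    "mat(word)": ["/m/", "/ae/", "/t/"],
    "/k/": ["cat(word)"],
    "/ae/": ["cat(word)", "mat(word)"],
    "/t/": ["cat(word)", "mat(word)"],
    "/m/": ["mat(word)"],
}

def spread(activation, steps=1, rate=0.5):
    """Each step, every node passes a fraction of its activation to its neighbours."""
    for _ in range(steps):
        new = dict(activation)
        for node, act in activation.items():
            for neighbour in links.get(node, []):
                new[neighbour] = new.get(neighbour, 0.0) + rate * act
        activation = new
    return activation

# Activating the concept CAT spreads down to the word and on to its phonemes;
# shared phonemes then feed activation back up to the similar-sounding word
# 'mat', which is how the model accounts for sound-based word substitutions.
result = spread({"CAT(concept)": 1.0}, steps=3)
```

Note that 'mat' receives some activation purely through shared phonemes, while the intended word 'cat' remains the most active.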
Neuroscientific approach to speech production
Neurolinguistics is the study of the relationship between brain areas and language processing.
Lateralization of function
Sensory information arriving at one side of the body is processed by the contralateral (opposite) side of the brain. There are also various functions associated with left and right cortical hemispheres. If a cognitive function is lateralized it means that one cortical hemisphere is dominant for that function.
The left hemisphere
In most people, speech is lateralized in the left hemisphere of the brain, and the left hemisphere is dominant for most language functions. However, the degree of lateralization differs within the population.
Evidence from the typical population
In the dichotic listening task, the participant is presented with different stimuli to each ear at the same time. The results of this task show an advantage for verbal stimuli presented to the right ear. Research with event-related potentials shows that different areas within the left hemisphere process information concerning meaning and syntax. Research with transcranial stimulation, a non-invasive method in which cortical areas are temporarily activated or inhibited, shows that Broca's area plays a crucial role in the processing of grammar. The right hemisphere plays a role in the emotional aspects of speech and in aspects of non-literal speech.
Evidence from aphasia patients
The Wernicke-Geschwind model is a simplified model of language function that is used as a basis for classifying aphasic disorders (for a schematic overview, see p. 390 of the book). In aphasia there is a language deficit due to brain damage. In crossed aphasia, there is a language dysfunction due to damage to the right hemisphere in a right-handed individual. In Broca's aphasia, speech is non-fluent and effortful and there are problems with grammatical output processing. In global aphasia there is an extreme restriction of language function. Wernicke's aphasia is a fluent aphasia, characterized by fluent but meaningless output and repetition errors.
Writing
The Hayes and Flower model of writing proposes a cognitive approach that focuses on three domains: the task environment (the subject of the writing, the intended audience, etc.), long-term memory (availability and accessibility of knowledge) and the immediate cognitive aspects of the writing process. The model also proposes three phases of writing: planning, translating and revising.
Which processes of language comprehension are there? - Chapter 13
Speech perception refers to the process of converting a stream of speech into individual words and sentences.
Understanding speech
Prosody refers to all aspects of an utterance that are not specific to the words themselves, such as rhythm, intonation and stress patterns. The speech signal is not produced as discrete units: there are few clear boundaries between words, and successive sounds blend into each other. In addition, factors such as age, gender and speaking rate influence the sounds a speaker produces. The continuous, blended nature of the speech signal thus makes speech perception a complex process.
The invariance problem
The invariance problem refers to the variation in the production of speech sounds across speech contexts. Phonemes are pronounced differently in different situations. Co-articulation, the fact that a speech sound is influenced by the sounds before or after it, contributes to this problem.
The segmentation problem
The segmentation problem refers to the detection of distinct words in a continuous stream of speech sounds. An important source of information for segmenting the speech signal is the sound patterns of a language, such as stress and prosody. In English, for example, people appear to use a stress-based strategy for distinguishing words.
Cues for word boundaries
The stress-based strategy is already present at a very young age (7.5 months); children of this age can already distinguish words that conform to the dominant stress patterns of English words. Phonotactic constraints describe the language-specific sound combinations that occur in a language; these provide cues to word boundaries.
Slips of the ear
A slip of the ear occurs when we misperceive a word or phrase. Such errors are almost always caused by mistakes in identifying word boundaries. They are more common, for example, when listening to song lyrics, because the prosodic information that guides segmentation is reduced and the context sometimes gives fewer cues for word selection. Moreover, people tend to segment on the basis of cues from their native language; errors in identifying word boundaries are therefore more common when listening to speech in another language.
Categorical perception
Categorical perception refers to the perception of stimuli on a sensory continuum as falling into distinct categories. Because of this, we are often unaware of variation in the way sounds are pronounced, and we can perceive a sound produced in different situations as the same sound. Categorical perception is observed in infants from four months of age.
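The idea can be sketched as a mapping from a continuous acoustic parameter onto discrete categories. The boundary value and the voice-onset-time framing below are illustrative assumptions, not figures from the book:

```python
BOUNDARY_MS = 25  # illustrative category boundary on the continuum

def perceive(vot_ms):
    """Map a point on a continuous acoustic dimension (here, a made-up
    voice-onset-time value in ms) onto a discrete phoneme category."""
    return "/b/" if vot_ms < BOUNDARY_MS else "/p/"

continuum = [0, 10, 20, 30, 40, 50]
print([perceive(v) for v in continuum])
# ['/b/', '/b/', '/b/', '/p/', '/p/', '/p/'] -- within-category differences
# (e.g. 0 vs 20) are heard as 'the same sound', while an equally large
# difference across the boundary (20 vs 40) is heard as a category change.
```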
The advantage of the right ear for speech sounds
The right ear advantage refers to the finding that language sounds are processed more efficiently when presented to the right ear than to the left ear. Most likely this is the result of superior processing of language stimuli by the left hemisphere.
Top-down influences: more about context
The phoneme restoration effect refers to the tendency to hear a complete word even though a phoneme has been removed from the input. On the basis of the context, people still perceive a whole word: in the sentence 'The *eel of my shoe is broken', for example, the missing phoneme is restored and listeners report hearing 'heel'. Whether this is due to top-down effects on perception itself, or whether the restoration takes place after perception, remains an open question.
Visual cues
Vision also plays an important role in accurate speech comprehension. This is demonstrated, for example, by the McGurk effect: a perceptual illusion in which participants receive conflicting auditory and visual cues. For example, they hear the sound 'ba' but see someone mouthing the sound 'ga'. Many people perceive a blend of the two, in this case 'da'.
Models of speech perception
Models of speech perception try to explain how information from the continuous speech stream we hear makes contact with our stored knowledge of words. The models fall into two categories: the first assumes that the processes of speech perception are modular (i.e. that knowledge of words has no influence on low-level speech processing), the second that they are interactive.
The cohort model
This model assumes that speech perception is sequential in nature and that incoming speech sounds have direct and parallel access to the store of words in the mental lexicon. As soon as we hear the first phoneme of a word, we can already form expectations about the intended word. The set of words consistent with the initial sounds is called the word's initial cohort. At the uniqueness point, enough phonemes have been heard to identify the intended word uniquely. However, the model does not explain how the beginning of a word is identified, and it says nothing about the role of cohort size.
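The cohort idea translates directly into a small prefix-matching sketch. The mini-lexicon and the use of letters in place of phonemes are illustrative simplifications:

```python
LEXICON = ["elephant", "elegant", "element", "elevator", "eleven"]

def cohort(prefix, lexicon=LEXICON):
    """All words consistent with the input heard so far."""
    return [w for w in lexicon if w.startswith(prefix)]

def uniqueness_point(word, lexicon=LEXICON):
    """Number of initial segments after which only the target word remains."""
    for i in range(1, len(word) + 1):
        if cohort(word[:i], lexicon) == [word]:
            return i
    return len(word)

print(cohort("ele"))                 # the whole initial cohort (all five words)
print(uniqueness_point("elephant"))  # 'elep' leaves only 'elephant' -> 4
```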
TRACE
The TRACE model of speech perception presents an alternative to the modular view that low-level phonemic processes are not influenced by higher-level processes. According to this model, top-down effects play an important role in speech perception. It is a connectionist model in which activation from spoken input spreads across different processing levels. Multiple sources of information, such as acoustic information, cues from other phonemes and the semantic context, influence speech perception. There are three levels of processing units, dealing with features, phonemes and words respectively. However, the model overestimates the role of top-down effects.
Understanding words and sentences
Lexical access
Lexical access is the process through which we gain access to stored knowledge about words, and it has been studied extensively. Naming tasks ask the participant to name a word aloud; the response time is taken as a measure of access speed. Sentence verification tasks present a sentence frame with a target word, and the participant must decide whether the word fits the frame.
There are a number of factors that affect lexical access. The frequency effect refers to the finding that the more frequently a word occurs, the easier it is to process. However, this applies only to open-class words, such as nouns, verbs and adjectives, and not to closed-class words such as articles and prepositions. Priming effects show that lexical access is faster and easier for words that have been primed in advance. The syntactic context also influences lexical decision time: people recognize words faster when the word appears in an appropriate grammatical context than when it does not. Finally, lexical access is affected by lexical ambiguity: because multiple meanings of an ambiguous word are activated, decision times for the following phoneme are longer after ambiguous words than after unambiguous ones.
Syntax and semantics
Parsing is the process by which we mentally represent the syntactic structure of a sentence. It is studied, for example, in psycholinguistics: the field concerned with the mental processes underlying language comprehension and production. Cognitive psychology has been strongly influenced by the work of Noam Chomsky, whose research showed that the grammatical structure of a sentence influences its processing time. Frazier described two main strategies for parsing, i.e. assigning the correct roles to words within a sentence. Minimal attachment leads us to create the simplest structure that is consistent with the grammar of the language. Late closure attaches incoming material to the phrase currently being processed, as long as the grammar permits. This type of model assumes that parsing is incremental: we assign a syntactic role to a word as soon as that word is perceived.
Reading
Writing systems
Different languages have different scripts, which differ in the degree and manner in which they represent spoken words. Logographic scripts, such as Chinese, represent morphemes, the units of word meaning. Syllabic scripts use a symbol to represent each syllable. Consonantal scripts represent the consonants of the language. Alphabetic scripts use letters to represent phonemes or sounds; this last type is the most common among the world's languages. A grapheme is the written representation of a phoneme. In transparent languages there is a one-to-one correspondence between letters and sounds. In opaque languages there is no such correspondence: the same sound can be written in different ways and a letter can be pronounced in different ways (e.g. homophones like the English 'reign' and 'rain').
Context effects on visual word recognition
The word superiority effect refers to the finding that a target letter within a letter string is recognized more easily when the string forms a word. This shows that context has a major influence on visual word recognition. In addition, words that follow a related word are recognized more quickly.
Eye movements
Saccades are fast movements of the eye made when scanning or reading an image. Between saccades come fixations, when the eye lingers briefly on an area of interest in a visual scene. Research shows that fixation time on a word is reduced if the word has been seen before and if it can easily be recognized. Some words are fixated longer than others.
Dual route model of reading
This model proposes three routes for reading. Route 1, the grapheme-to-phoneme conversion route, allows conversion from writing to sounds. Route 2, the lexical route, allows reading through word recognition. Route 3 bypasses the semantic system and accounts for cases in which a word is read aloud correctly even though its meaning is not recognized.
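The two main routes can be sketched minimally in code. The lexicon entries, the naive one-letter rules and the pseudo-phonemic notation are all invented for illustration:

```python
LEXICAL_ROUTE = {            # whole-word lookup; handles irregular words
    "yacht": "/jot/",
    "cat": "/kat/",
}

GPC_RULES = {                # naive one-letter grapheme-to-phoneme rules
    "c": "k", "a": "a", "t": "t", "y": "j", "o": "o", "h": "h",
    "n": "n", "i": "i", "b": "b",
}

def read_aloud(word):
    """Familiar words go through the lexical route; anything else is
    assembled letter by letter via grapheme-to-phoneme conversion."""
    if word in LEXICAL_ROUTE:                     # lexical route
        return LEXICAL_ROUTE[word]
    sounds = "".join(GPC_RULES.get(ch, "?") for ch in word)
    return "/" + sounds + "/"                     # assembled route

print(read_aloud("yacht"))   # irregular word: lexical route -> /jot/
print(read_aloud("tib"))     # non-word: assembled route -> /tib/
```

If the lexical route were damaged (as in surface dyslexia), 'yacht' would be regularized through the rules instead of looked up, which is the kind of error the dual route model predicts.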
The brain and language understanding
Neuropsychology of speech comprehension
The brain area most associated with language comprehension is Wernicke's area. Broca's area also plays an important role, especially for sentences with more complex structure. Pure word deafness refers to a disorder in which there is an impairment in recognizing speech sounds but not non-speech sounds. In pure word meaning deafness, the patient can repeat a word but not understand it. The existence of these disorders suggests that there are three routes for processing spoken words: one giving direct access to the phoneme level, and two routes, for familiar words and for auditory analysis, giving access to lexical information.
Neuropsychology of reading
Evidence for the dual route model comes from research into acquired dyslexia. A distinction is made between surface dyslexia, in which there is an impairment in reading irregular words but not regular words, and phonological dyslexia, in which the impairment is limited to reading non-words.
Electrophysiological data
Electrophysiological research with event-related potentials shows the importance of various brain areas for reading, such as the inferior frontal and premotor cortex. Activation of brain areas differs between deep (opaque) and transparent scripts.
What is the connection between emotion and cognition? - Chapter 14
Emotion plays an important role in cognition; facial recognition, for example, is severely limited if the emotional connection is lost.
What is emotion?
Emotion refers to a number of mental states, including anger, joy and disgust. These are short-lived states related to a particular mental or real event. Emotions provide us with important information, for example about the progress of our plans relative to our goals (whether they are being achieved), and help to reduce discrepancies between actual and expected outcomes. Because emotion was long seen as irrational and difficult to investigate, there was for a long time little research on emotion in cognitive psychology. Brain areas that play an important role in emotion are the amygdala (fear, anger, disgust, joy and sadness) and the insula (disgust, among other things).
Core emotions
Emotions are associated with distinct facial expressions and gestures. Although each culture has display rules, social conventions that determine how, when and with whom emotions may be expressed, there is evidence for a basic set of emotional expressions across cultures. However, the degree of universality of facial expressions is still debated. Facial expressions in babies and in blind people suggest that emotions are partly innate.
Ekman identified six basic emotions: anger, disgust, fear, happiness, sadness and surprise. Later, a number of emotions were added, such as pride, contentment and contempt. Languages differ in the way they name emotions; in English, for example, there is no single word for taking pleasure in another's bad luck. The set of identified basic emotions might have been different if research in this area had been dominated by a language other than English.
The core of emotions
There is more to an emotion than facial expression alone. Physiological changes, behaviours, beliefs and thoughts all accompany emotions. According to Clore and Ortony, emotions are characterized by a cognitive component (the appraisal of the situation), a motivational-behavioural component (our actions in response to an emotion), a somatic component (the bodily reaction) and a subjective-experience component.
Theories of emotion and cognition
An important issue in theories about the relationship between cognition and emotion revolves around the question of what comes first: cognition or emotion.
Early theories and their influence
The James-Lange theory of emotion
This theory states that the experience of an emotion follows the physiological changes associated with that state. Although this seems counterintuitive, there is evidence for the facial feedback hypothesis: the assumption that feedback from the facial muscles influences the emotional state. For example, if people are asked to adopt a smiling facial expression, they feel happier afterwards. However, people with damage to the spinal cord report hardly any reduction in emotion. Moreover, the conscious experience of an emotion sometimes precedes the physical change: if you realize you have said something embarrassing, you blush only afterwards.
The Cannon-Bard theory
Cannon's criticism of the above theory was that the same physiological state can be associated with different emotions; an accelerated heart rate, for example, can accompany either anger or fear. The same physiological state can also occur without emotion (for example, during physical exertion).
Finally, the conscious experience of an emotion happens quickly, whereas visceral changes, for example, are slower. Cannon therefore proposed that the experience of emotion and the physical response to an event arise independently of each other. However, this theory omits the role of cognition.
The two-factor theory
This theory states that two factors create emotion: physiological arousal and our interpretation of it. If you notice that your heart is beating faster and you are about to take an exam, you interpret this as fear; if you are having an argument, you interpret the palpitations as anger. This theory has had a lasting influence on later appraisal theories of emotion.
Affective-primacy: the theory of Zajonc
This theory states that cognition is not necessary for emotion and that the two systems can function independently. Although cognition can influence emotion at a later processing stage, the initial emotional response is unaffected. Evidence for this approach comes from research into the mere exposure effect: the tendency of people to develop a preference for a stimulus to which they are repeatedly exposed. Emotion may therefore occur without cognition. However, the debate about whether cognition or emotion comes first is far from settled.
Cognitive primacy: the theory of Lazarus
This was the first appraisal theory, meaning that it assumes that emotions result from our interpretation of events. Cognitive appraisal is thus fundamental to emotional experience and cannot be separated from it. Appraisal depends on whether the event is seen as positive or negative, which resources we have available to deal with it, and how we monitor the situation. Indeed, research shows that how we think about a stimulus influences the emotional experience. However, multiple-level theories state that both pre-attentive and conscious processes are involved in emotion, not just one of the two as in the two theories above.
Effects of emotion on cognition
Emotion and attention
The attentional bias refers to the tendency of emotional stimuli to attract or hold our attention. In the emotional Stroop task, participants must name the colour in which a word is written; if a word has emotional value, it holds attention longer and performance on the task suffers. The visual search task is also used to examine the effects of emotion on attention.
Emotion and perception
Emotion also appears to have an effect on early stages of vision. For example, the presence of an emotional stimulus increases sensitivity to contrast. Emotions also play a role in other senses; for example, the perception of loudness of a sound is influenced by the emotional value of the sound.
Emotion and memory
Extreme emotion can have a negative effect on memory, as we saw earlier with flashbulb memories. Memories of emotional events are less detailed, more often incorrect and sensitive to bias. False memories can also be more easily recalled for emotionally charged events. Research shows that the timing of retrieval is crucial; the longer the interval between the event and its recall, the greater the chance of error. Memory for facts, however, seems to be better when learning is associated with emotion. Tunnel memory refers to the positive effect of negative emotions on memory for central details of an event and the negative effect on memory for peripheral details.
There is much evidence for the mood-congruence effect: the tendency to remember events that are consistent with the current mood. This effect is often explained on the basis of network models, where memories are treated as items in a network that influence each other through activation. State-dependent memory refers to the facilitation of memory when the mental or physiological state at encoding matches that at retrieval. Some findings, however, are not consistent with an associative network model; people, for example, seem to retrieve more positive memories when they are in a negative mood.
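The network-model explanation of mood congruence can be illustrated with a toy program. The sketch below is purely illustrative (the mood nodes, memories and link strengths are invented, not taken from the book): activating a mood node spreads activation to the memories linked to it, making them more retrievable.

```python
# Toy associative network: activating a mood node spreads activation
# to mood-congruent memories, making them easier to retrieve.
links = {
    "sad": ["funeral", "rainy day"],
    "happy": ["birthday", "holiday"],
}
activation = {memory: 0.0 for memories in links.values() for memory in memories}

def spread(mood, strength=1.0):
    # each memory linked to the active mood receives extra activation
    for memory in links.get(mood, []):
        activation[memory] += strength

spread("happy")
most_accessible = max(activation, key=activation.get)  # a happy memory now wins
```

Real network models are of course far richer (graded link weights, decay, fan effects), but the core mechanism is this spreading of activation.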
Cognitive Psychology by Gilhooly, Lyddy & Pollick - Practice Bundle
Practice questions for chapter 1 of Cognitive Psychology by Gilhooly et al.
Questions
1. What is the relationship between empiricism and associationism?
2. a. Name three disadvantages of the introspective method.
b. What was the name of the counter-movement of introspectionism and what did it involve?
3. Name an important problem that the behaviorists encountered. How is this related to the concept of mental maps?
4. How are simulation programs within information processing theory used to better understand mental processes?
5. a. What does connectionism involve?
b. Give an example of a learning rule within a connectionist network and describe how it works.
6. What is the opposite idea of modularity? How do both approaches differ?
7. a. What is the most widely used functional brain-imaging method and how does it work?
b. Name an advantage and a disadvantage of this method.
Answers
1. Empiricists, such as Locke and Hume, believed that all knowledge comes through experience. Knowledge is built up through associations formed between ideas and memories, whereby activation of one concept also activates associated concepts.
2. a. Introspection required a lot of training; not everyone could apply it (for example, people with intellectual disabilities or children), it applied only to certain mental processes, and the introspective method itself possibly influenced the mental processes being studied.
b. The counter-movement of introspectionism was behaviorism, in which only observable behaviors were considered without taking into account internal processes.
3. Behaviorism was not very applicable to complex mental phenomena such as reasoning, problem solving and language, because mental representations were not taken into account. That is why Tolman proposed the concept of a mental map, an abstract mental representation of the environment that was used by rats, for example, to find their way through a maze.
4. Simulation programs, programs that mimic a particular model of human thought, were based on a certain theory about mental processes. The success of the model was then a measure of how close this theory came to describing mental processes.
5. a. Connectionism is an approach to cognition in terms of networks of simple neuron-like units that transmit activation and inhibition via input, hidden and output units.
b. An example of a learning rule within a connectionist network is backpropagation, in which the weights on the connections between units are adjusted in response to errors, in order to obtain the desired output.
6. The opposite idea of modularity is that mental functions are not localized, but distributed over the brain. Modularity states that cognition consists of a large number of independent processing units that work separately and can be applied to relatively specific domains.
7. a. The most common functional method is functional magnetic resonance imaging (fMRI), in which oxygen supply in the blood is measured.
b. Advantage: This method has good spatial resolution.
Possible disadvantages: the complexity of interpreting the data, low reliability of repeated scans, and the unusual and specific circumstances in which an fMRI scan is taken.
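The backpropagation rule from answer 5b can be made concrete with a small sketch. The network below (entirely illustrative; the 2-2-1 architecture, learning rate and XOR task are invented for the example, not taken from the book) nudges each connection weight against its contribution to the output error.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# input -> hidden weights, hidden biases, hidden -> output weights, output bias
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_ho = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # the XOR task

def forward(x):
    h = [sigmoid(w_ih[j][0] * x[0] + w_ih[j][1] * x[1] + b_h[j]) for j in range(2)]
    o = sigmoid(w_ho[0] * h[0] + w_ho[1] * h[1] + b_o)
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train(epochs=5000, lr=0.5):
    global b_o
    for _ in range(epochs):
        for x, t in data:
            h, o = forward(x)
            d_o = (o - t) * o * (1 - o)  # error signal at the output unit
            # propagate the error signal back to the hidden units
            d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
            for j in range(2):  # adjust each weight against the error
                w_ho[j] -= lr * d_o * h[j]
                w_ih[j][0] -= lr * d_h[j] * x[0]
                w_ih[j][1] -= lr * d_h[j] * x[1]
                b_h[j] -= lr * d_h[j]
            b_o -= lr * d_o

error_before = total_error()
train()
error_after = total_error()  # training should reduce the summed squared error
```

The point is not the specific task but the mechanism: the output error is propagated backwards through the connections, and each weight is adjusted in proportion to its share of that error.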
Practice questions for chapter 2 of Cognitive Psychology by Gilhooly et al.
Questions
1. It is said that perception is on a continuum between sensation and cognition. What's the meaning of this?
2. Name an example of an inversion problem.
3. What is the difference between bottom-up and top-down processing?
4. Which three components play a role in Bayesian decision theory?
5. Explain how discovering invariants can help to understand direct perception.
6. Complete the following text with the concepts cones, rods, dorsal stream, ventral stream:
_____ are special neurons on the outer edge of the retina that are effective in low light and in detecting movement. The _____ leads to the temporal lobe and specializes in determining which objects are in the visual world. The _____ leads to the parietal cortex and specializes in determining where objects are in the visual world. _____ are special neurons in the retina that are sensitive to colored light and distinguish fine image details.
7. Identify two mechanisms that are responsible for detecting pitch in the ear.
8. The modality-appropriate hypothesis and the maximum likelihood estimation theory both have a different view on the integration of information from different senses. Explain where this difference is.
Answers
1. This means that perception is a middle way between sensation , the processes through which physical properties are converted into neural signals, and cognition , the use of mental representations to reason and to plan behavior.
2. An example of an inversion problem can be found in the field of vision. This involves the problem that the three-dimensional physical world is projected as two-dimensional images on our eyes, so that the original three dimensions must be reconstructed from ambiguous input.
3. Bottom-up processing starts with the original sensory input, which is gradually transformed into the final representation, while in top-down processing there is continuous connection and feedback between higher and lower levels of processing.
4. Bayesian decision theory is about the question: which event is most likely responsible for my perception? This question is answered on the basis of three components. The first component is the likelihood, or all uncertainty in the image (in the case of vision). The second component is the prior, or all information about the scene before you have seen it. The third component is the decision rule, for example choosing the most likely interpretation or selecting a random interpretation.
5. Invariants are properties of a three-dimensional object that can be derived from any two-dimensional image of that object. If we know the invariants of objects, we can understand how the bottom-up processing of objects and their functions (direct perception) works.
6. Rods, ventral stream, dorsal stream, cones.
7. The basilar membrane encodes pitch by the place of maximal vibration (place coding). Firing rates in the auditory nerve also serve as an indication of pitch.
8. The modality-appropriate hypothesis states that the sense with the higher accuracy for a given physical property of the environment will always dominate the bimodal estimate of that property. For example, vision is dominant in spatial tasks. Maximum likelihood estimation theory, however, states that more reliable perceptual information is weighted more heavily than less reliable perceptual information.
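The maximum-likelihood idea in answer 8 has a simple arithmetic core: each cue is weighted by its reliability, i.e. the inverse of its variance. A sketch with invented numbers (the 10 cm / 14 cm estimates and their variances are made up for illustration):

```python
# Reliability-weighted cue integration: combine a visual and a haptic
# estimate of an object's position, weighting each by 1/variance.
def integrate(est_a, var_a, est_b, var_b):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # weight = relative reliability
    return w_a * est_a + (1 - w_a) * est_b

# vision says 10 cm (variance 1); touch says 14 cm (variance 4)
combined = integrate(10, 1.0, 14, 4.0)  # ~10.8: the reliable cue dominates
```

Because vision here is four times as reliable as touch, the combined estimate lands much closer to the visual one, which is exactly the behavior the theory predicts.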
Practice questions for chapter 3 of Cognitive Psychology by Gilhooly et al.
Questions
1. Indicate whether these are examples of internal or external attention: 1) your intention to go to the supermarket and remembering this until you get there, 2) noticing that the traffic light is red, 3) being startled by someone calling you while you are studying, 4) keeping your attention on studying because you want to finish before dinner.
2. a. Describe the approach to attention according to filter theory.
b. What important distinction can be made under filter theories?
3. Describe how the dual task paradigm works. Which theory of attention is based on the findings from this paradigm?
4. Which two functions does attention have according to the normalization model of attention?
5. What is the binding problem? Name a theory that offers a solution for this.
6. Connect the following concepts to the correct description:
1) Attentional blink
2) Inattentional blindness
3) Inhibition of return
4) Change blindness
A) The phenomenon that after visual attention has been given to a location in the visual field and the attention is moved thereafter, this location suffers from a delayed response to events.
B) The phenomenon that if we look at a succession of quickly presented visual images, the second of two stimuli can not be identified if it is presented very shortly after the first stimulus.
C) The phenomenon that substantial differences between two almost identical scenes are not observed when presented sequentially.
D) The phenomenon that we can look straight at a stimulus but that we do not really perceive it if we do not focus on it.
7. Explain the difference between the approach of consciousness according to conscious inessentialism and epiphenomenalism.
Answers
1. 1) internal attention, 2) external attention, 3) external attention, 4) internal attention.
2. a. Filter theories assume that a filter in the processing is used to block irrelevant information so that only the relevant information is further processed.
b. An important distinction between filter theories is whether they assume that the filter is used early in the processing, or late in the processing.
3. In the dual-task paradigm, participants' performance on two tasks is measured both when the tasks are performed separately and when they are performed simultaneously. The amount of interference is measured for different combinations of tasks. Resource theory is based on the findings from this paradigm.
4. The ability to increase sensitivity to weak stimuli when they are presented alone, and the ability to reduce the impact of task-irrelevant distractors when multiple stimuli are presented.
5. The problem that, although we know related features are processed separately, they are experienced as one whole. Feature integration theory offers an explanation for how the corresponding features are integrated.
6.
1 - B
2 - D
3 - A
4 - C
7. Conscious inessentialism states that consciousness is not necessary for the actions we perform and therefore does not 'exist'. Epiphenomenalism does not deny consciousness, but states that it has no function.
Practice questions for chapter 4 of Cognitive Psychology by Gilhooly et al.
Questions
1. Of which two modality-specific stores does sensory memory consist? Explain what these are.
2. Name and describe a factor that negatively affects performance in the shadow technique.
3. Name two factors that positively influence the transfer of information from short-term memory (STM) to long-term memory (LTM), and two factors that negatively affect this transfer.
4. Explain how the negative recency effect works. What is the difference with the recency effect?
5. Which of the following three models sees working memory (WM) as a subset of the LTM?
a. Cowan's embedded process model
b. Multiple component model
c. Baddeley's working memory model
6. To which model does the term 'phonological loop' belong? Of which two sub-components does this component consist and what is its function?
7. Explain how the existence of capture errors provides evidence for the supervisory attentional system.
8. What is the added value of the episodic buffer to the Baddeley model?
Answers
1. The iconic store is the sensory store for visual stimuli. The echoic memory is the auditory equivalent of the iconic memory.
2. Backward masking: a masking stimulus is presented near or immediately after the target stimulus. It is then more difficult to repeat the target stimulus.
3. Positive: rehearsal and elaboration. Negative: decay and displacement.
4. The negative recency effect occurs in tasks with multiple item lists: items at the end of each list are remembered worse in a final recall test, because they were not stored in the LTM. The recency effect occurs in tasks with a single item list: items at the end of the list are then remembered better.
5. a.
6. Baddeley's working memory model; the phonological loop is the component of WM that provides temporary storage and manipulation of phonological information. The two sub-components are the phonological store, where speech-based information is held for 2-3 seconds, and the articulatory control process, which rehearses information at a sub-vocal level.
7. According to the supervisory attentional system model, there are two types of cognitive control: automatic processes (for routine and well-trained tasks) and a process that can interrupt automatic processing and select an alternative schema. Capture errors, or failures to deviate from routine actions, show that there is indeed a separate system for automatic processes.
8. Baddeley's model initially had no general storage component of its own. The episodic buffer adds this to the model. Moreover, it explains how modality-specific information is integrated.
Practice questions for chapter 5 of Cognitive Psychology by Gilhooly et al.
Questions
1. Patient X goes on holiday with friends in May 2007. In 2009 he is involved in a car accident in which he sustains brain damage. After this accident, he cannot remember anything about the holiday. What type of amnesia is this? From which syndrome is patient X suffering?
2. Of which two components does the LTM consist according to the multiple memory systems model?
3. Imagine a memory test in which the following question is asked: which city in North Holland is the capital of the Netherlands? Is this an example of:
a. free recall
b. recognition
c. cued recall
4. What is the main objection to the distinction made by Tulving (1972) between episodic and semantic memory?
5. What is the purpose of the research method 'learning by probabilistic classification'?
6. Explain how schemas can contribute to the emergence of false memories.
7. Imagine that you leave work every day at 5 o'clock to go home. However, one day you have to leave half an hour earlier to pick up your daughter from school. You forget this, and you go home at 5 o'clock without going to your daughter's school. What is the part of your memory that fails here called?
8. Name a possible cause for maintaining and strengthening false memories.
Answers
1. Retrograde amnesia (the forgotten holiday took place before the accident); amnestic syndrome.
2. The non-declarative or implicit memory refers to memories that we do not consciously retrieve, such as how to drive a car. The declarative or explicit memory refers to conscious memories of events, facts, people and places.
3. c.
4. It is not always clear when a memory falls under the episodic or semantic memory. For example, autobiographical memories could be placed under both categories.
5. This method is used to examine habit learning. It is often difficult to examine learned habits without the influence of declarative memories. That is why in this method associations are trained that are not self-evident and cannot be consciously remembered; learning is therefore based purely on the experience gained during the trials.
6. Schemas are organized memory structures that allow us to apply experiences to new situations. Schemas create expectations and can (unconsciously) be used to fill in missing information in memories. This filled-in information is not based on the actual events, and false memories can thus be created.
7. The prospective memory, which allows us to keep track of plans and carry out the intended actions.
8. False memories are preserved through imagination inflation; the false memory is repeatedly retrieved and strengthened.
Practice questions for chapter 6 of Cognitive Psychology by Gilhooly et al.
Questions
1. Why are abstract words more difficult to remember than concrete words, according to the dual-coding hypothesis? Explain this using the method of loci.
2. Imagine an experiment in which half of the participants must learn a word list in upper case, and the other half in lower case. At test, the words are presented in lower case. Which principle is being investigated here? Which group of participants will perform better at test?
3. What is meant by context effects? Give an example.
4. What is the difference between proactive interference and retroactive interference?
5. Explain the role of long-term potentiation in retrograde facilitation.
6. Name an important limitation of existing memory research. What do we call this problem?
7. Which combination of learning styles appears to work most effectively when studying? Which learning style is not effective?
Answers
1. In the method of loci, a familiar route is imagined, and images of the items to be remembered are linked to known places along the route. According to the dual-coding hypothesis, the meaning of concrete words can be represented both verbally and visually, while abstract words can only be represented verbally. Abstract words are therefore more difficult to remember using memory strategies such as the method of loci.
2. The principle of encoding specificity. The group of participants who learned the word list in lower case will perform better; the principle states that if the context at retrieval is similar to the context at encoding, memory will be better.
3. Context effects indicate that memory works better if the external environment during testing matches the environment during learning. An example is that retrieval is easier when the words are both learned and tested outdoors than when they are learned indoors but tested outdoors.
4. We speak of proactive interference when previously learned material disturbs learning later, and of retroactive interference as later learning disrupts memory for previously learned material.
5. Long-term potentiation (LTP), the long-term improvement in signal transmission between two neurons resulting from the simultaneous stimulation of these neurons, is an important mechanism in learning and remembering. LTP cannot be generated during inactivity/sleep, so existing memories are protected from interference by new memories and thus better remembered. That effect is called retrograde facilitation.
6. Many memory studies can only be applied to everyday situations to a limited extent, and are therefore not representative of or generalizable to the real world. We say that such studies are low in ecological validity.
7. A combination of strategic learning (devising a strategy to learn the minimum needed for an exam, based on which questions are likely to occur) and deep learning (making an effort to understand the material and give it meaning) works best. Surface learning (learning without understanding the material) appears to be ineffective.
Practice questions for chapter 7 of Cognitive Psychology by Gilhooly et al.
Questions
1. Complete the following text with the correct terms:
According to the prototype approach, people divide concepts into ____. ____ is the extent to which an object is representative of its category. Members of the category can also receive scores for the degree to which they resemble each other; we call this the degree of ____. The item that most closely resembles the other category members is called the ____.
2. Name two disadvantages of the prototype approach.
3. The exemplar theory is an alternative to the prototype approach. What does this theory entail? What is an advantage of this theory compared to the prototype approach?
4. According to the essentialist approach, there are three types of concepts: nominal concepts, natural concepts and artefact concepts. Indicate for each of the following concepts which type it belongs to: rain, book, centimeter, square, horse, computer.
5. What is the difference between amodal and grounded representations?
6. Many researchers assume that mental images work the same as literal 'pictures' in the brain. Pylyshyn has a different opinion about this; explain it. Also mention a finding from research that supports his claims.
7. Name an argument and counter argument for the idea that brain areas are identical for perception and imagination.
Answers
1. Categories; typicality; family resemblance; prototype.
2. Abstract and ad hoc concepts are difficult to classify on the basis of the prototype approach; it is also difficult to fit knowledge of the variability in members' characteristics into the idea of prototypes.
3. The exemplar theory states that categories are represented by stored examples (exemplars), each of which is linked to the name of the category. There is thus no prototype. The advantage of this theory is that it represents variability within a category.
4. Natural, artifact, nominal, nominal, natural, artifact.
5. Amodal representations are abstract and do not require sensory codes; they could, so to speak, be reproduced in a computer program using abstract symbols. Grounded representations, however, do need sensory-motor codes; they depend on simulation or re-enactment of perceptual, motor and introspective states acquired during experience in the world.
6. Pylyshyn is of the opinion that imagery is only a by-product of underlying cognitive processes and has no functional role of its own. He is therefore convinced that amodal representations underlie the experience of imagination. Evidence for this claim stems from research into ambiguous figures: this shows that people have a fixed interpretation of stored mental images, whereas with real pictures they can adjust and change their interpretation.
7. For: activation of certain brain areas, such as the occipital lobe and the early visual cortex, plays a role in both perception and imagination.
Against: Some people with brain damage have intact visual perception but distorted imagination, and vice versa. If the brain areas involved in both processes were identical, this would be impossible.
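The exemplar idea in answer 3 can be sketched as a toy classifier that stores individual examples and categorizes a new item by its most similar stored exemplar. The categories, features and values below are invented for illustration only:

```python
# Toy exemplar model: each category is simply a list of stored examples,
# and a new item gets the category of the nearest stored exemplar.
exemplars = {
    "bird": [(1.0, 0.9), (0.9, 1.0)],   # invented features: (has wings, flies)
    "fish": [(0.0, 0.0), (0.1, 0.0)],
}

def classify(item):
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # pick the category of the single closest stored exemplar
    return min((distance(item, ex), cat)
               for cat, exs in exemplars.items() for ex in exs)[1]
```

Because every stored example contributes, variability within a category is preserved automatically, which is exactly the advantage over a single prototype that the answer mentions.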
Practice questions for chapter 8 of Cognitive Psychology by Gilhooly et al.
Questions
1. What is meant by the degrees of freedom problem?
2. What is the main drawback of the equilibrium hypothesis?
3. Describe the role of the forward model within the optimal control theory.
4. Explain how the slip of the tongue phenomenon provides evidence for associative chain theory.
5. Within hierarchical models of action production, recurrent networks provide the temporal aspect and the order of an action sequence. Explain what recurrent networks are.
6. The associative chain theory and hierarchical models of action production all follow the idea of the cognitive sandwich. Explain what this means and describe the opposite vision.
7. What is meant by common coding? Which theory does this concept belong to?
8. Name two possible functions of the existence of mirror neurons
Answers
1. Muscles and joints can move in countless different ways. So the question is how to select a certain movement to achieve a certain motor purpose.
2. The equilibrium hypothesis represents each movement as a movement from one stable posture to another. However, this does not explain how we plan and execute more complex movements.
3. The optimal control theory assumes that movements are optimized on the basis of feedback on the extent to which a certain goal is achieved. The forward model ensures that there is no delay in this feedback by making predictions about the relationships between actions and their consequences.
4. The associative chain theory states that the end of a particular action is associated with the start of the next action in the sequence. In language production this is represented by associations between words in a sentence. The slip-of-the-tongue phenomenon shows that we sometimes accidentally say a word associated with the word we intended to say.
5. Recurrent networks are neural networks with connections between the units (within an action sequence, i.e. the steps that have to be taken), creating a loop of activation. Patterns of activation and inhibition within the hierarchy work through interactive activation, where the activation of a certain unit causes inhibition of the other units. The first step thus automatically activates the start of the next step, which activates the step after that, and so on.
6. The idea of the cognitive sandwich is that cognition is surrounded on the one hand by perception, and on the other by action. The opposite idea is that cognitive representations of action mix with representations of both perception and action.
7. Common coding refers to the theory that production and perception share certain representations of actions. The ideomotor theory, which sees action and perception as closely related, was elaborated within the framework of common coding.
8. On the one hand, mirror neurons and the mechanisms associated with them can be a way of finding out the purpose of the action of the one we perceive. Another possibility is that it is a way to learn through imitation.
Practice questions for chapter 9 of Cognitive Psychology by Gilhooly et al.
Questions
1. Describe solving a Sudoku puzzle in the newspaper using the following terms: ill-defined/well-defined, knowledge-rich/knowledge-poor and adversary/non-adversary problem.
2. Explain how the concepts of set and functional fixedness can prevent the attainment of insight.
3. Which two subtypes of problem spaces exist?
4. Match each search strategy for a large problem space with the accompanying advantages and disadvantages of that strategy:
1) depth-first search, 2) breadth-first search, 3) progressive deepening
A) Very intensive, but a solution is always achieved
B) Not intensive, but a solution is not always achieved
C) Not too intensive under the right circumstances, and reaching the goal is guaranteed
5. Explain why it is plausible to think that there is a fundamental distinction between problems that can be solved by searching within a particular representation and insight problems.
6. In which cases is insight most often achieved according to progress monitoring theory?
7. What four phases are there in Wallas's four-phase analysis? Which is the most important?
8. Name three factors that promote creative thinking.
Answers
1. Solving a Sudoku puzzle in the newspaper is well-defined because you know the initial situation, the possible actions and the ultimate goal. In addition, it is a knowledge-poor problem because it requires little specialized knowledge. Finally, it is a non-adversary problem, because there is no thinking opponent trying to thwart your goals.
2. Set is the tendency to persist in a certain approach to a problem, and functional fixedness is the difficulty of devising a new function for a familiar object. Both prevent restructuring and thus the attainment of insight.
3. The state-action space, a representation of how a problem can be transformed from the starting state through intermediate states to the goal state, and the goal-subgoal space, a representation of how a problem can be broken up into sub-goals and sub-sub-goals.
4. 1-B, 2-A, 3-C.
5. Research provides evidence for the idea that solving insight problems requires the special process of restructuring; for example, neurological research shows that different neural processes are active when solving insight problems and non-insight problems.
6. According to progress monitoring theory, insight is most often achieved when there is criterion failure (when progress in the search for a solution is not fast or efficient enough), followed by constraint relaxation (loosening the restrictions on what must or may be used to reach a solution).
7. The four phases are preparation (becoming familiar with the problem; does not yet lead to a solution), incubation (the problem is set aside for a while), illumination (inspiration/insight arises, but is not yet a verified solution) and verification (the solution is reached by consciously testing the ideas from the illumination phase). Incubation is crucial here; this is supported by research.
8. 1. Being primed with creative subjects.
2. A creative environment (such as a room with a lot of art on the wall).
3. The brainstorming method, in which the production of as many unusual ideas as possible is stimulated.
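The search strategies from answer 4 differ only in how the frontier of unexplored states is handled: breadth-first expands the oldest state first, depth-first the newest. A sketch over an invented miniature problem space (the states and links are made up for illustration):

```python
from collections import deque

# Invented problem space: each state lists the states reachable from it.
space = {
    "start": ["A", "B"],
    "A": ["C"],
    "B": ["goal"],
    "C": [],
    "goal": [],
}

def search(start, goal, breadth_first):
    frontier = deque([start])
    visited = set()
    while frontier:
        # breadth-first takes the oldest frontier state; depth-first the newest
        state = frontier.popleft() if breadth_first else frontier.pop()
        if state == goal:
            return True
        if state in visited:
            continue
        visited.add(state)
        frontier.extend(space[state])
    return False
```

In a space this small both strategies find the goal; the trade-offs in answer 4 (exhaustiveness versus effort) only bite when the problem space is large or infinitely deep.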
Practice questions for chapter 10 of Cognitive Psychology by Gilhooly et al.
Questions
1. People often make decisions that are not based on maximizing the expected value, such as buying lottery tickets or taking out insurance. Name two possible explanations for this type of decision.
2. Explain what the following terms mean within prospect theory: loss aversion, the endowment effect and the status quo bias.
3. Why is the availability heuristic often not an effective way to estimate the frequency of an event?
4. What is the role of the conjunction fallacy in failures to correctly estimate frequency on the basis of the representativeness heuristic?
5. Name two disadvantages of multi-attribute utility theory.
6. What is the difference between System 1 and System 2 within the two-systems approach to making decisions?
7. Explain why the existence of the omission bias contradicts the consequentialist view.
8. Describe in which situations multi-attribute utility theory is or is not applicable.
Answers
1. A possible explanation for this type of decision is risk aversion (in the case of taking out insurance); the tendency of people to avoid risky choices, even if they offer a higher expected value. Another explanation is risk-seeking (in the case of lottery tickets): the tendency to make risky choices, even if the risk-free alternatives offer a higher expected value.
2. Loss aversion is about avoiding loss; for example, a loss of 10 euros has a more negative utility than a gain of 10 euros has a positive utility. Related to this is the endowment effect: the tendency to overvalue an item that you own, and to demand more money to sell it than you would pay to buy it in the first place. The status quo bias is the strong preference for maintaining the current state of affairs and avoiding change.
3. Availability of an event is not only based on frequency, but also on how recently you experienced such an event or what its emotional impact was. This can lead to incorrect probability assessments.
4. The conjunction fallacy is the false assumption that the conjunction of two events (A and B) is more likely than A or B alone. This fallacy is often the reason that the representativeness heuristic, in which the frequency of an event is estimated on the basis of how representative or typical it is of its category, fails.
5. The theory assumes that a decision is made by assigning an objective value to each attribute of a choice option. These values are summed for each possible option, after which the highest-scoring option is selected. Two problems are that the relevant dimensions of all choice options are not always known and that often no objective value can be assigned to the attributes.
6. System 1 ensures fast intuitive thinking and System 2 ensures slow, conscious thinking.
7. The omission bias is the tendency to judge negative consequences resulting from failing to act (for example, not vaccinating your child) as less negative than the same consequences resulting from acting (vaccinating your child). This contradicts consequentialism, which states that decisions are made purely on the basis of the consequences that are expected to follow from the different choices.
8. In situations with a lot of time pressure the theory often does not apply; people do not weigh all possible options, but choose the option that comes to mind. When it comes to important decisions without time pressure, the theory is more applicable.
Practice questions for chapter 11 of Cognitive Psychology by Gilhooly et al.
Questions
1. To which form of reasoning do propositional reasoning and syllogistic reasoning belong? What is the difference between the two?
2. Match the examples below with one of the inference rules (modus ponens, modus tollens or double negation).
1) When I get up early, I eat cornflakes for breakfast. I do not eat cornflakes for breakfast, so I did not get up early.
2) It is not the case that it is not nice weather, so it is nice weather.
3) When I study hard, I get my exam. I study hard, so I get my exam.
3. Is the following incorrect reasoning an example of affirming the consequent or denying the antecedent? 'If an apple is red, it tastes sweet. This apple tastes sweet, so it is red.'
4. What is meant by the atmosphere effect?
5. Why did Henle, contrary to the arguments behind the atmosphere hypothesis and conversion effects, think that people do reason rationally?
6. Give an example of a situation in which the figural bias is involved.
7. Explain how the four-card selection task is used to examine hypothetico-deductive reasoning. Name a factor that affects performance on this task.
8. Explain the relationship between deontic rules and social contract theory.
Answers
1. These are forms of deductive reasoning. In propositional reasoning the statements are connected by logical relations such as 'and', 'or', 'not' and 'if'; in syllogistic reasoning the statements are connected by quantifiers such as 'some', 'none' and 'all'.
2. 1) Modus tollens
2) Double negation
3) Modus ponens
3. Affirming the consequent.
4. The atmosphere effect is the tendency to draw conclusions in syllogisms that are influenced more by the form of the premises than by the logic of the argument. If, for example, both premises contain 'all', people are inclined to accept a conclusion with 'all'.
5. Henle believed that when people reach invalid conclusions, this often comes from interpreting the material differently than intended, or from taking on a different task than the one that was asked. The reasoning that follows, however, is according to her rational.
6. Possible answer: Some women are students. Some students are Dutch. Most people give the conclusion 'some women are Dutch' here, while the conclusion 'some Dutch are women' is equally valid.
7. Here, people are asked to test a rule (for example, 'if P then Q') by presenting them with four cards showing P, Q, not-P and not-Q; the other side of each card also shows P, Q, not-P or not-Q. People must indicate which cards need to be turned over to test the rule. This is an example of hypothetico-deductive reasoning: testing a hypothesis by testing the necessary consequences of that hypothesis. With concrete information on the cards, people perform better than when the information is abstract.
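The logic of the task can be summarized in a small sketch: for a rule 'if P then Q', only cards that could reveal a P paired with not-Q can falsify the rule.

```python
# Four-card selection task: for the rule "if P then Q", the cards worth
# turning are those that could falsify it -- the P card (it might hide
# not-Q on its reverse) and the not-Q card (it might hide P).
def cards_to_turn(visible_faces):
    return [card for card in visible_faces if card in ("P", "not-Q")]

print(cards_to_turn(["P", "Q", "not-P", "not-Q"]))  # ['P', 'not-Q']
```

A classic finding is that many participants instead pick P and Q; turning Q can only confirm, never refute, the rule, which is why the task is used to study hypothetico-deductive reasoning.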
8. Social contract theory suggests that rules expressing that a benefit must be paid for are more easily solved by people, because the correct card choices would reveal cheating (this is an evolutionary mechanism). Research into this theory shows that deontic rules, i.e. rules that relate to duties and use terms such as 'must' and 'should', facilitate performance on the four-card selection task.
Practice questions for chapter 12 of Cognitive Psychology by Gilhooly et al.
Questions
1. What is meant when we say that speech production is conceptually driven?
2. Hockett proposed 16 characteristics of human language that distinguish it from animal communication systems. Which two of the following characteristics do not belong to these 16? Specialization, productivity, interdependence, arbitrariness and long-term preservation.
3. Explain how the existence of allophones demonstrates that a phoneme is a subjective category.
4. Match the correct term with the indicated part: 1) the '-s' in 'The lids are on the pans'; 2) the word 'the' in 'The lids are on the pans'; 3) the word 'lid' in 'The lids are on the pans'.
Terms: free morpheme, function word, bound morpheme.
5. Name the four maxims of effective conversations according to Grice.
6. What is meant by the lexical bias? Name a possible explanation for this phenomenon.
7. What is the main difference between modular and interactive theories of speech production?
8. Explain how results from the dichotic listening task provide evidence for the idea of lateralization of speech.
Answers
1. This means that speech production is a top-down process and is influenced by cognitive processes such as thoughts, beliefs and expectations.
2. Interdependence and long-term preservation. (Characteristics that do belong to the 16 include interchangeability and rapid fading.)
3. Allophones are different phones (such as the 't' in 'trumpet' or 'tender') that are perceived as the same phoneme. People do not consciously perceive the difference between allophones and automatically place them under the same phoneme, although there is indeed a difference in sound. The distinction between phonemes is thus subjective.
4. 1) bound morpheme 2) function word 3) free morpheme.
5. The maxim of quantity (the speaker must provide enough information to be understood, but not too much), the maxim of quality (the speaker must provide accurate information), the maxim of relevance (the speaker must provide relevant information) and the maxim of manner (ambiguity and vagueness must be avoided).
6. The lexical bias refers to the tendency of phonological speech errors to result in real words. This may be because non-words are detected earlier and restored, while mistakes with real words tend to slip past the 'control'.
7. Modular theories state that speech production goes through a series of phases or levels, with little interaction between the different levels. Interactive theories, however, view speech production as spreading activation in a lexical network. Processing is interactive here, which means that activation of one level can influence processing at other levels.
8. In the dichotic listening task, the participant is presented with different stimuli in each ear at the same time. The results of this task show an advantage for verbal stimuli presented to the right ear. This provides evidence for the lateralization (specialization) of speech in the left hemisphere; input to the right ear is mainly processed in the left hemisphere.
Practice questions for chapter 13 of Cognitive Psychology by Gilhooly et al.
Questions
1. Name two factors that make speech perception more difficult.
2. What is meant by the segmentation problem? Name two information sources for segmenting the speech signal.
3. Segmentation of the speech signal is made more difficult when listening to lyrics. Explain why this is the case.
4. Suppose you watch a TV programme about fashion. The presenter pronounces the following sentence: 'This shoe designer is known for his models with high *eels', where the * marks a phoneme obscured by noise. Although a phoneme is missing from the signal, you still perceive 'high heels'. What do we call this effect?
5. What do the terms 'initial cohort' and 'uniqueness point' mean within the cohort model?
6. According to the TRACE model, which different sources influence speech perception? What is a disadvantage of this model?
7. Imagine a study of lexical access. Condition one tests how easily a common word is processed. In condition two, associations with a certain word are generated before lexical access to that word is tested. In condition three, words that follow a word with multiple meanings are tested. Which effects or factors of lexical access are investigated in each condition?
8. What type of script is Dutch? And Chinese?
9. Which three routes for reading represent the dual route model of reading?
10. What is the difference between pure word deafness and pure word meaning deafness?
Answers
1. The speech signal is continuous: there are few clear boundaries between words and consecutive sounds mix with each other. In addition, factors such as age, gender and speech speed influence the sounds produced by the speaker.
2. The segmentation problem refers to the detection of distinct words in a continuous sequence of speech sounds. An important source of information for segmenting a speech signal is the sound patterns in a language, such as stress and prosody.
3. For lyrics, prosodic information, a tool for segmentation, has been reduced. The context also often provides fewer indications for word selection.
4. The phoneme restoration effect.
5. The cohort model assumes direct and parallel access from speech sounds to the stored words in the mental lexicon. When we hear the first phoneme of a word, it is therefore already possible to form expectations about the likely intended word. The set of words consistent with the initial sounds is called the initial cohort of the word. At the uniqueness point, enough phonemes have been heard to single out the intended word.
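The shrinking candidate set described here can be illustrated with a toy lexicon (the word list is invented for illustration):

```python
# Cohort model sketch: each incoming phoneme narrows the set of word
# candidates; the uniqueness point is reached when only one remains.
lexicon = ["tremble", "trespass", "trumpet", "trust"]  # toy lexicon

def cohort(heard_so_far, words):
    return [w for w in words if w.startswith(heard_so_far)]

print(cohort("tr", lexicon))    # all four words: the initial cohort
print(cohort("tru", lexicon))   # ['trumpet', 'trust']
print(cohort("trum", lexicon))  # ['trumpet'] -- uniqueness point
```

A real cohort computation operates on phonemes rather than letters; spelling is used here only as a stand-in.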
6. Acoustic information, cues from other phonemes and the semantic context. A disadvantage of this model is that it overestimates the role of top-down effects.
7. Condition 1: Frequency effect. Condition 2: Priming effects. Condition 3: Lexical ambiguity.
8. Dutch is an alphabetical script; Chinese is a logographic script.
9. Route 1, the grapheme-to-phoneme conversion route, allows conversion from writing to sounds. Route 2, the lexical route, allows reading through word recognition. Route 3 bypasses the semantic system and accounts for cases where a word is read aloud correctly even though its meaning is not recognized.
10. Pure word deafness is a disorder in which recognition of speech sounds is impaired, but recognition of non-speech sounds is not. In pure word meaning deafness, the patient can repeat a word but not understand it.
Practice questions for chapter 14 of Cognitive Psychology by Gilhooly et al.
Questions
1. Research in the field of emotions is dominated by the English language. Give a possible consequence of this for results from this research.
2. Which components characterize emotions according to Clore and Ortony?
3. Explain how evidence for the facial feedback hypothesis supports the James-Lange theory of emotion.
4. Name Cannon's three main criticisms on the James-Lange theory.
5. How did Lazarus argue that cognition and emotion could not be seen separately?
6. Explain how emotion can influence attention processes.
7. Match the terms 'tunnel memory', 'flashbulb memory' and 'state-dependent memory' with the correct descriptions:
1) Vivid, emotional memory that is often incorrect and more sensitive to bias.
2) The facilitation of memory when the mental or physiological state corresponds to encoding and retrieval.
3) The positive effect of negative emotions on memory for central details of an event and the negative effect on the memory for edge details.
Answers
1. Because different languages name emotions differently (for example, some languages distinguish more or fewer emotions), it is quite possible that research into emotions is influenced by English. It is plausible that the six identified basic emotions (anger, disgust, fear, sadness, joy and surprise) would have turned out differently if the research had not been conducted in English.
2. Emotions are characterized by a cognitive component (the appraisal of the emotion), a motivational-behavioral component (our actions in response to an emotion), a somatic component (bodily reactions) and a subjective-experience component.
3. The James-Lange theory of emotion states that the experience of an emotion follows the physiological changes associated with that state. This is supported by evidence for the facial feedback hypothesis: the assumption that feedback from the facial muscles influences the emotional state.
4. Cannon's criticism was that the same physiological state can be associated with different emotions; an accelerated heartbeat can, for example, be associated with both anger and fear. The same physiological state can also occur without emotion (for example during physical exertion). Finally, the conscious experience of an emotion happens quickly, while visceral changes, for example, are slower.
5. Lazarus proposed appraisal theory, which states that emotions result from our interpretation of events. Cognitive appraisal is thus fundamental to emotional experience and cannot be separated from it. This is supported by research.
6. Emotional stimuli capture our attention sooner and hold it longer. This is evident from results on the emotional Stroop task, in which participants must name the color in which a word is written. If a word has emotional value, it holds attention longer and performance on the task suffers.
7. Tunnel memory - 3; Flashbulb memory - 1; State-dependent memory - 2
Cognitive psychology from Gilhooly - BulletPoint Bundle
Bulletpoint What is cognitive psychology? - Chapter 1
Cognitive psychology is the study of how people gather information, store it in memory and retrieve it, and how people work with information to achieve goals. Mental representations play a major role in this.
Approaches in cognitive psychology: associationism, introspectionism, behaviorism, information processing approach, connectionism, functionalism.
Cognitive neuropsychology: investigates the effects of brain damage on behavior. Goal: find out how psychological functions are organized. Modularity states that cognition consists of a large number of independent processing units that work separately and are applicable to relatively specific domains.
There are two types of brain scans: structural imaging (MRI) and functional imaging (EEG, PET and fMRI).
Reverse inference is a common method to connect cognitive processes with the outcomes of brain scans. Brain activity can also be regarded as networked rather than localized. There may be a Default Mode Network that reflects internal tasks.
Bulletpoint What are the principles of perception? - Chapter 2
Perception is the whole of processes through which sensory experiences are organized into an understanding of the world around us. It lies on a continuum between sensation and cognition. Because of the inverse problem and the limited functioning of human sensory organs, people can never process enough information to describe the physical world exactly.
There are two perception principles: bottom-up versus top-down processing, and the likelihood principle.
There are two perception theories: the information processing approach and the embodied approach to cognition. Perception takes place via three systems: the visual system, the auditory system and the somatosensory system (a combination of proprioception, vestibular sensation and the sense of touch).
There are two theoretical explanations of how the perceptual system combines information from the different senses: the modality-appropriate hypothesis and the less-investigated maximum likelihood estimation theory.
There are five theories that explain object recognition: feature analysis, the pandemonium model, prototype theory, the recognition-by-components approach (RBC) and the multiple views approach.
Disorders of visual perception: visual agnosia and prosopagnosia. To recognize events and situations, schemas are important.
Faces, voices and movement are important for social recognition.
Bulletpoint What are the processes of attention and awareness? - Chapter 3
According to the attention systems approach, there are three separate systems: alerting, orienting and executive functions.
There are two important explanations for the ability to listen dichotically: filter theory and resource theory. Resource theory is supported by the dual-task paradigm.
A criticism of resource theory: how does our attention system know which events in the environment are important enough to focus attention on? According to the normalization model of attention, attention has two functions in which normalization plays a role: the ability to increase sensitivity to weak stimuli when they are presented in isolation, and the ability to reduce the impact of task-irrelevant distractors when multiple stimuli are presented.
There are two general trends within research into attention: the emphasis on vision as the primary modality for exploring attention models, and the development of experimental paradigms such as visual search, dual-task interference, inhibition of return and the attentional blink.
There are two general approaches to the function of consciousness: conscious inessentialism and epiphenomenalism.
If a distinction is made between attention and consciousness, it is important to distinguish between phenomenal consciousness and access consciousness.
Consciousness appears divided over both hemispheres. The method neural correlates of consciousness (NCC) tries to investigate how brain activity changes when a stimulus is consciously perceived or not.
Bulletpoint What different parts and functions does the memory have? - Chapter 4
Memory (long-term, short-term and working memory) has various functions: encoding, storage and retrieval.
The sensory memory consists of different parts: iconic memory, echoic memory and haptic memory.
According to the Atkinson-Shiffrin model, information is first stored in sensory storage. Salient information is transferred to short-term memory (STM). Whether information is stored in long-term memory (LTM) depends on various factors: rehearsal promotes transfer to LTM, while decay and displacement impede it. Various studies support the distinction between STM and LTM.
In addition to STM and LTM, a working memory (WM) is also assumed. According to Baddeley's working memory model, the WM stores information temporarily and plays an important role in processing. According to this model, the WM consists of four components: the phonological loop, the visuospatial sketchpad, the central executive (the main component) and the episodic buffer.
Bulletpoint What are the functions and structure of the long-term memory? - Chapter 5
The amnesic syndrome is a permanent and pervasive memory disorder involving both anterograde and retrograde amnesia. Linguistic ability and knowledge of concepts often remain intact.
According to the multiple memory systems model, LTM consists of various components that are responsible for different types of memories, namely non-declarative/implicit memory and declarative/explicit memory. Non-declarative memory plays a role in many different tasks, such as classical conditioning, habit learning, motor skills and priming. Procedural memory is an example of non-declarative memory.
Declarative memory can be divided into episodic memory and semantic memory. Examples of episodic memory are prospective memory and autobiographical memory. Meta memory is an example of semantic memory.
Bulletpoint How do you learn and forget? - Chapter 6
According to the levels-of-processing theory, superficial encoding results in poor retention and deep encoding in better retention. The disadvantage of this theory: there is no objective measure for the depth of processing.
Memory strategies that improve memory performance: categorization, method of loci and interacting images.
The principle of encoding specificity states that if the context at retrieval is similar to the context at encoding, memory will work better (context-dependent retrieval).
The spacing effect refers to the phenomenon that material studied at spaced intervals is remembered better than material learned in one continuous session. Interference is a major cause of forgetting. Proactive interference: previously learned material disturbs later learning. Retroactive interference: later learning disturbs the recall of previously learned material. A common method in research into forgetting is the paired-associate paradigm.
Memories seem to consolidate (strengthen) over time. Long-term potentiation (LTP) is considered an important mechanism in learning and remembering.
A number of memory paradigms: the retrieval-induced forgetting (RIF) paradigm, the directed forgetting paradigm and the think/no-think paradigm.
Ecological validity is an important problem in memory research. Despite the fact that flashbulb memories contain many inaccuracies, people have great confidence in these memories.
There are three ways to study: superficial learning, deep learning and strategic learning.
Bulletpoint Which representations of knowledge are there? - Chapter 7
We use concepts to represent all objects that belong to a particular category.
Approaches to the term 'concept': the definitional approach, the prototype approach, the exemplar-based approach (exemplar theory), the theory- and knowledge-based approach, essentialism (three types of concepts: nominal concepts, natural concepts and artefact concepts), and information processing approaches (grounded versus amodal representations).
It is assumed that visual imagery and visual-spatial processing use the same mental and neural sources.
The Duck-Rabbit figure and the Necker Cube are examples of ambiguous figures that generate alternative and alternating interpretations. Research into these kinds of figures shows that people have a fixed interpretation of mental images, whereas this need not be the case for physical images.
The occipital lobe and the early visual cortex appear to play a role in imagination.
Bulletpoint What is the motor system? - Chapter 8
The motor system includes the components of the central and peripheral nervous systems, together with the muscles, joints and bones that make movement possible.
Approaches to motor control: the equilibrium-point hypothesis, dynamical systems theory and optimal control theory.
The above three theories all contribute significantly to explaining how motor control works. The equilibrium-point hypothesis shows that the complexity of a motor plan can be simplified on the basis of muscle characteristics. Dynamical systems theory shows that transitions between different action states can be explained in terms of how a system develops over time. Optimal control theory allows optimality principles to be integrated into the planning, production and observation of our actions. However, each theory explains only part of motor behavior.
Theories of movements in interaction with other cognitive processes: associative chain theory, hierarchical models of action production (recurring networks).
Action disorganization syndrome is a movement disorder (apraxia) in which the patient loses the ability to perform certain motor actions while the sensory and motor systems are still intact.
Theories of action representation, in which cognitive representations of action are linked with representations of both perception and action: ideomotor theory (elaborated within the common coding framework; according to this theory there is a representational layer where event codes and action codes overlap), the mirror mechanism approach (mirror neurons), and the embodied approach to cognition (metaphorical gestures).
Bulletpoint In what ways can problems be solved? - Chapter 9
There are various types of problems: knowledge-rich versus knowledge-poor problems, opponent versus non-opponent problems.
There are several approaches to problem solving:
(1) The Gestalt approach sees problem solving as restructuring, in which insight and understanding play a major role. (2) The information processing approach compares human problem solving with computer strategies and includes the concept of the problem space, subdivided into the subtypes state-action space and goal-subgoal space.
Two recent theories explain problem solving on the basis of insight:
(1) Representational change theory distinguishes the following phases: problem perception, solution attempts, impasse, restructuring (for which constraint relaxation is needed), and partial and full insight. (2) Progress monitoring theory states that the main source of difficulty in insight tasks is the use of inadequate heuristics.
Theories of creative problem solving:
Wallas's four-stage analysis. The four stages are: preparation, incubation (crucial for problem solving), illumination and verification.
The information processing theory of creative processes: the geneplore model.
The embodied approach to cognition states that perceptual representations of the world are connected with action representations.
Bulletpoint How does one make decisions? - Chapter 10
In utility theory, utility plays a central role in decision making.
Prospect theory describes decisions in terms of relative gains and losses. Important concepts here are loss aversion and the endowment effect.
The availability, representativeness and affect heuristics play a major role in probability judgments.
It is difficult to make a decision between options that differ on many attributes. Various decision processes for alternatives with multiple attributes are proposed: the theory of multiple-attributes-utility, elimination through aspects and satisficing. In practice, people often fail to use just one decision strategy.
According to the two-systems approach to decisions, there are two distinct cognitive systems: system 1 provides fast, intuitive thinking and system 2 slow, conscious thinking. When making a decision, one of the two systems is used, depending on the importance of the decision.
Neuroeconomics shows that the utility or pleasure of a range of options can be represented through reward systems in the brain.
Bulletpoint What is inductive and deductive reasoning? - Chapter 11
There are two different types of deductive reasoning: propositional reasoning (using inference rules) and syllogistic reasoning.
Two examples of common errors in propositional reasoning are: (1) affirming the consequent, and (2) denying the antecedent.
Two approaches to propositional reasoning are the mental logic approach and the mental models approach. The latter approach can explain the figural bias and also applies to syllogistic reasoning. Explanations for incorrect reasoning with syllogisms: (1) people seem to have more trouble with syllogisms when the terms are abstract, (2) the atmosphere effect causes problems, (3) conversion effects, and (4) probabilistic inference.
There are two types of inductive tasks: hypothesis testing (using the four-card selection task ) and hypothesis generation (using the reversed 20 questions task ). In both processes, the hypothesis can not be definitively proven, but it can be refuted.
Possible explanations for poor performance on the four-card selection task: (1) misinterpretation of the task, (2) matching bias, and (3) the familiarity of the presented situations. Deontic rules facilitate performance on this task.
Bulletpoint What is language production? - Chapter 12
Speech production is conceptually driven: it is a top-down process that is influenced by cognitive processes, such as thoughts, beliefs and expectations.
Grice identified four conversational rules or maxims of effective conversation: the maxim of quantity, the maxim of quality, the maxim of relevance and the maxim of manner. When one of these rules is broken, more cognitive processing is required to understand the conversation or to respond to the other person.
Modular theories state that speech production goes through a series of phases or levels, each with a different type of processing. Modular theories include Garrett's model and Levelt's model.
Interactive theories make use of the concept of spreading activation in a lexical network, where processing is interactive; activation at one level can affect processing at other levels. Dell's model is an interactive theory. In most people, speech is lateralized in the left hemisphere. The right hemisphere plays a role in emotional aspects of speech and aspects of non-literal speech.
The Wernicke-Geschwind model is a simplified model of language function used as a basis for classifying aphasic disorders (e.g. Broca's and Wernicke's aphasia).
The Hayes and Flower model of writing proposes a cognitive approach to writing that focuses on three domains: the task environment, long-term memory, and the immediate cognitive aspects of the writing process. The model also proposes three phases of writing: planning, translating and revising.
Bulletpoint Which processes of language comprehension are there? - Chapter 13
Prosody refers to all aspects of an utterance that are not specific to the words themselves.
Problems that can occur when understanding speech are the invariance problem, the segmentation problem and a slip of the ear .
The following factors play a role in accurate speech comprehension: the stress-based strategy, categorical perception, the right ear advantage, the phoneme restoration effect and visual clues.
There are various models of speech perception that try to explain how information from the continuous speech stream we hear makes contact with our stored knowledge about words. The models fall into two categories. The cohort model assumes that processes of speech perception are modular. The TRACE model assumes that processes are interactive.
There are a number of factors that influence lexical access: the frequency effect, priming effects, syntactic context and lexical ambiguity.
Through parsing we can represent the syntactic structure of a sentence in our head. Frazier described two main strategies for parsing: minimal attachment and late closure.
There are different types of scripts around the world, namely: logographic scripts, syllabic scripts, consonantal scripts and alphabetic scripts.
The dual route model of reading states that there are three routes for reading: (1) the grapheme-to-phoneme conversion route, (2) the lexical route, and (3) the route that bypasses the semantic system.
Wernicke's area seems to be most associated with language comprehension. Broca's area also plays an important role.
Bulletpoint What is the connection between emotion and cognition? - Chapter 14
Brain areas that play an important role in emotion are the amygdala and the insula.
Every culture has its own display rules with regard to emotions. Yet there is evidence for a basic set of emotional expressions across different cultures.
According to Ekman, there are six basic emotions: joy, sadness, anger, fear, surprise and disgust. A number of emotions were later added, including pride, contentment and hatred.
In addition to facial expressions, emotions are also accompanied by physiological phenomena, behaviors, beliefs and thoughts.
There are many theories of emotion and cognition, namely: (1) the James-Lange theory, (2) the Cannon-Bard theory, (3) the two-factor theory, (4) Zajonc's theory (affective primacy) and (5) Lazarus's theory (cognitive primacy).
Emotions can affect the following components: attention (attentional bias), perception and memory (flashbulb memories, mood-state congruence effects).
Do you want to share your summaries with JoHo WorldSupporter and its visitors?
- Check out: Why and how to add a WorldSupporter contributions
- JoHo members: JoHo WorldSupporter members can share content directly and have access to all content: Join JoHo and become a JoHo member
- Non-members: When you are not a member you do not have full access, but if you want to share your own content with others you can fill out the contact form
Quicklinks to fields of study for summaries and study assistance
Main summaries home pages:
- Business organization and economics - Communication and marketing -International relations and international organizations - IT, logistics and technology - Law and administration - Leisure, sports and tourism - Medicine and healthcare - Pedagogy and educational science - Psychology and behavioral sciences - Society, culture and arts - Statistics and research
- Summaries: the best textbooks summarized per field of study
- Summaries: the best scientific articles summarized per field of study
- Summaries: the best definitions, descriptions and lists of terms per field of study
- Exams: home page for exams, exam tips and study tips
Main study fields:
Business organization and economics, Communication & Marketing, Education & Pedagogic Sciences, International Relations and Politics, IT and Technology, Law & Administration, Medicine & Health Care, Nature & Environmental Sciences, Psychology and behavioral sciences, Science and academic Research, Society & Culture, Tourisme & Sports
Main study fields NL:
- Studies: Bedrijfskunde en economie, communicatie en marketing, geneeskunde en gezondheidszorg, internationale studies en betrekkingen, IT, Logistiek en technologie, maatschappij, cultuur en sociale studies, pedagogiek en onderwijskunde, rechten en bestuurskunde, statistiek, onderzoeksmethoden en SPSS
- Studie instellingen: Maatschappij: ISW in Utrecht - Pedagogiek: Groningen, Leiden , Utrecht - Psychologie: Amsterdam, Leiden, Nijmegen, Twente, Utrecht - Recht: Arresten en jurisprudentie, Groningen, Leiden
JoHo can really use your help! Check out the various student jobs here that match your studies, improve your competencies, strengthen your CV and contribute to a more tolerant world
1979 | 1 |
Add new contribution