Summary of Principles of Cognitive Neuroscience by Purves et al., 2nd edition

Summary to be used with some of the chapters of the book Principles of Cognitive Neuroscience by Purves, 2nd edition.

2. Methods

The goal of cognitive neuroscience is to use the function of the brain and the nervous system to explain cognition and behaviour. Cognitive psychology pointed out the importance of measurement during cognitive and perceptual tasks to investigate how neural activity is converted into action, thought and behaviour. Cognitive neuroscience methods fall into two categories (Figure 2.1 on page 19). The first is researching changes in cognitive behaviour when the brain has been perturbed, for example by trauma, medication or electrical stimulation. The other is measuring brain activity during task performance using electrophysiological and imaging techniques.

Brain Imaging Techniques

X-ray techniques have long been used to make non-invasive images of the human body. Contrast agents can be injected to enhance the contrast of the picture. Computerized tomography (CT) was the first technique to combine X-rays with computer reconstruction (Figure A on page 22). A CT scanner uses X-rays to gather intensity information from multiple angles. The data can be viewed as 'slices of tissue', called tomograms.

Magnetic Resonance Imaging (MRI, Figure B on page 23) produces more detailed images of soft tissue than CT. MRI uses strong magnets to 'feed' energy to protons in the water of body tissue, making them resonate at radio frequencies. When the radio-frequency pulse is turned off, the protons release this energy, which the scanner detects. Because proton density differs between tissues, the computer can calculate a clear image of the different tissue types. The spatial resolution of MRI is about one millimetre. MRI is non-invasive, has a high spatial resolution and can be made sensitive to different kinds of tissue.

Structural Imaging Techniques

Researchers use different methods to investigate the connections between brain areas. A variant of MRI, diffusion-weighted imaging, can locate the white-matter fibre tracts in the brain by showing the local diffusion of water. Diffusion tensor imaging (DTI) quantifies the diffusivity of water molecules. Most axonal fibre tracts in the brain are encased in hydrophobic myelin, so water diffuses more readily along the tracts than across them.

White matter shows a preferred direction of diffusion (anisotropy), whereas other regions show no preference (isotropy). The degree to which a voxel (a volumetric pixel in an MRI image) is anisotropic is called its fractional anisotropy, which gives information about the tissue within that voxel. Voxels whose anisotropy points in the same direction can be linked together to 'draw' a fibre tract in an MRI image (Figure B on page 24). DTI can be combined with fMRI. Researchers are trying to reconstruct the whole network of connections in the brain (the connectome). This research field, connectomics, is expensive and its outcome is still unclear. At the microscopic level, a method called Brainbow is used to label individual neurons and their connections in contrasting colours. An electron microscope can also be used to image single neurons.
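As a concrete illustration (not from the book), fractional anisotropy can be computed from the three eigenvalues of a voxel's diffusion tensor using the standard formula; the function name and example eigenvalues below are hypothetical:

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """Fractional anisotropy from the three eigenvalues of a diffusion tensor.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||
    Ranges from 0 (fully isotropic) to 1 (fully anisotropic).
    """
    lam = np.asarray(eigenvalues, dtype=float)
    num = np.sqrt(((lam - lam.mean()) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    if den == 0:
        return 0.0
    return float(np.sqrt(1.5) * num / den)

# Isotropic diffusion (e.g. grey matter or fluid): FA is 0
print(fractional_anisotropy([1.0, 1.0, 1.0]))

# Strongly directional diffusion (white-matter-like): FA approaches 1
print(fractional_anisotropy([1.7, 0.2, 0.2]))
```

Tractography then links neighbouring voxels whose principal eigenvector points the same way, which is what 'draws' the fibre tract in the image.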

Brain disturbances explaining cognition

Making clinical-pathological correlations is the oldest method to investigate the connection between the brain, thought and behaviour. When a brain area and a cognitive function are both disrupted, the two are likely to be connected. Initially, such connections could only be established by autopsy; modern techniques allow researchers to study them in living patients. A major limitation of clinical-pathological correlations is that brain damage can have many causes that are not under the control of the researcher. Also, brains differ from person to person, which makes it hard to generalise the results. Researchers can use imaging techniques to scan patients with similar symptoms and then overlay the affected areas: the regions that overlap are likely to be the cause of the symptoms.

Researchers can also deliberately make lesions in the brains of non-human subjects to obtain a more direct cause-effect picture. However, lesion studies in humans and animals share an interpretation problem. Damaged connections no longer transmit information to other areas, which can cause deficits of their own: symptoms may be mistakenly attributed to the damaged area rather than to the area deprived of its input. This interpretation problem is known as diaschisis.

Pharmacological agents can disrupt neurotransmitter systems in the nervous system. Researchers have two methods to find out how such substances affect the brain. The first is to obtain cognitive data from people with a long history of drug use, for example to study the functioning of the dopamine (reward) system in drug addicts.

Another, more controlled method is administering drugs and observing their effects in an experimental setting. Substances that activate neurotransmitter receptors are called agonists; substances that block receptors are called antagonists. A disadvantage of this method is uncertainty about where the drug acts: the whole brain is affected. A solution can be to inject the substance directly into a specific brain region.

Intracranial stimulation is direct electrical brain stimulation. This invasive method uses electrodes placed on specific brain areas. Temporarily or chronically implanted electrodes stimulate the area during cognitive tasks. Specific information about the function of the area can be obtained by varying the amount of stimulation: moderate stimulation enhances or elicits behaviour, whereas strong stimulation disrupts it (like a lesion). Brain surgeons use this method to map important areas that must be avoided during surgery.

There are several types of extracranial brain stimulation. During transcranial magnetic stimulation (TMS), an experimenter delivers a powerful magnetic pulse to the scalp of a subject (Figure 2.3). The induced electrical field can stimulate or disrupt processing, depending on the intensity of the pulse. The technique resembles direct electrical stimulation, except that its effects are temporary, reversible and less invasive. During repetitive TMS (rTMS), pulses are administered repeatedly over a longer period of time, and the cognitive and behavioural effects are measured afterwards.

Another approach is to vary the timing of the pulses at different intervals during a task trial, which gives a better temporal picture of a brain region's influence on the task. TMS has several disadvantages. Its spatial resolution is low, because each pulse disturbs a large area of the brain. The stimulation does not penetrate very deep either: only about 1.5 centimetres, so TMS mainly affects the cerebral cortex. TMS can also induce twitching, because the field affects the muscles of the head. Lastly, not everyone is suited for TMS, since it can induce epileptic seizures.

Extracranial stimulation can also be administered by transcranial direct current stimulation (tDCS): a continuous, low-amplitude electrical current applied to the scalp through an electrode placed over the area of interest. There are two types of stimulation: anodal (enhancing cortical excitability) and cathodal (decreasing cortical excitability). Subjects receive either real stimulation or a placebo. Like rTMS, the effects of tDCS persist for some time after the session. The method is simple, cheap and has several scientific and clinical applications. tDCS also has drawbacks: its spatial resolution is low and its mechanism of action is only partially understood.

Optogenetics offers high neuronal selectivity and high temporal resolution. It uses genetics and laser light to activate specific neural systems or cell types.

Ion channels in neurons open or close as the current of an action potential passes. Optogenetics alters neurons so that they are excited or inhibited by light of a specific wavelength. The genes that make neurons light-sensitive are extracted from algae and inserted into a carrier virus, which is injected into the brain area of interest.

Measuring neural activity

There are several methods to measure brain activity while cognitive tasks are performed. One is direct electrophysiological recording, most commonly single-neuron recording of action potentials. During extracellular recording, electrodes are placed in the space around the neurons, measuring the activity of one or multiple cells. For intracellular recording, a fine glass electrode is inserted into a single neuron.

Often, the recording device can be moved in different directions within the animal's brain. Researchers use two main methods to summarise single-unit data. A peristimulus time histogram (PSTH, Figure 2.5B on page 30) shows a neuron's activity across trials, time-locked to the stimulus. Another method is the neuronal tuning curve: stimuli are varied along a specific dimension (such as colour or orientation), and the strength of the neural response is measured and plotted.
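A minimal sketch (not from the book) of how a PSTH is built from per-trial spike times; the function name, toy spike times, and bin width are illustrative:

```python
import numpy as np

def psth(spike_times_per_trial, t_start, t_stop, bin_width):
    """Peristimulus time histogram: mean spike count per time bin per trial,
    with every trial time-locked to stimulus onset at t = 0 ms."""
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for spikes in spike_times_per_trial:
        hist, _ = np.histogram(spikes, bins=edges)
        counts += hist
    return edges[:-1], counts / len(spike_times_per_trial)

# Two toy trials (spike times in ms): a burst right after stimulus onset
trials = [[10, 20, 30, 150], [15, 25, 200]]
bins, rate = psth(trials, t_start=0, t_stop=250, bin_width=50)
# rate[0] is highest: both trials fire strongly within the first 50 ms
```

Repeating this over many trials smooths out trial-to-trial variability and reveals when, relative to the stimulus, the neuron tends to fire.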

The resulting curve shows the cell's sensitivity along that dimension. Multielectrode recording arrays use patches with many electrodes to measure a whole field of neurons. Direct electrophysiological recording is done almost exclusively in animals, takes much effort to set up, and cannot capture complex human cognition.

Electroencephalographic recording (EEG) is a non-invasive method to measure electrical brain waves at the scalp. It uses electrodes filled with a conductive substance to pick up voltage fluctuations. The computer compares the voltage difference between each electrode and a reference electrode. EEG measures the dendritic field potentials (Figure 2.7 on page 33) of clusters of neurons. The relevant dendrites are often oriented perpendicular to the surface of the cortex.

Incoming action potentials cause current to flow along these dendrites, and the electrodes of the EEG cap pick up the resulting changes in voltage over time. Dendritic field fluctuations measured inside the skull are called local field potentials (LFPs); they reflect the integrative processing of large populations of cortical neurons. EEG signals are separated into different frequency bands:

  • Delta, <4 Hz

  • Theta, 4-8 Hz

  • Alpha, 8-12 Hz

  • Beta, 12-25 Hz

  • Gamma, 25-70 Hz

  • High gamma, 70-150 Hz

The relative power in these bands reflects the general state of the brain, such as arousal. Event-related potentials (ERPs, Figure 2.8 on page 34) are calculated by averaging multiple time-locked segments of the ongoing EEG. They show the neural activity of stimulus processing with high temporal resolution. The peaks in an ERP are named according to their electrical polarity and latency, or according to their ordinal position in the waveform.
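The time-locked averaging step can be sketched as follows (synthetic data; the sampling scheme, epoch lengths, and numbers are made up for illustration):

```python
import numpy as np

def erp_average(eeg, event_samples, pre, post):
    """Average time-locked EEG epochs into an event-related potential.

    eeg: 1-D voltage trace from one electrode
    event_samples: sample indices of stimulus onsets
    pre/post: samples to keep before/after each event
    """
    epochs = [eeg[e - pre:e + post] for e in event_samples
              if e - pre >= 0 and e + post <= len(eeg)]
    return np.mean(epochs, axis=0)

# Toy signal: a small fixed evoked deflection buried in large random noise
rng = np.random.default_rng(0)
n = 10_000
signal = rng.normal(0, 5, n)             # ongoing "EEG" noise
events = np.arange(100, n - 100, 200)    # stimulus onset samples
for e in events:
    signal[e:e + 20] += 3.0              # evoked response after each onset
erp = erp_average(signal, events, pre=50, post=100)
# Averaging ~50 epochs shrinks the noise, revealing the post-stimulus bump
```

This is why ERPs need many trials: the evoked response is far smaller than the ongoing EEG, and only averaging makes it visible.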

Although the temporal resolution of ERPs is relatively high, their relatively poor spatial resolution makes it difficult to identify the exact source of the activity. It is, however, possible to map relative amounts of activity at different places on the scalp. The inverse problem makes it difficult to determine which cluster of neurons in the brain caused the activity: many different source configurations could produce the same scalp pattern. Besides time-locked averaged ERPs, researchers can also use oscillatory activity in the different frequency bands of the EEG signal to investigate cognitive functions.

Magnetoencephalography (MEG, Figure 2.10 on page 37) records the brain's magnetic fields; event-related magnetic-field responses (ERFs) are extracted from the ongoing MEG signal. It is similar to EEG, but it measures the magnetic field of the brain instead of electrical fluctuations on the scalp. Using the 'right-hand rule' from physics, the orientation of a magnetic field around a current can be determined, so MEG can infer the direction of current flow in the dendrites from the measured fields.

MEG and EEG also differ in physical properties: MEG is relatively insensitive to activity in the gyri, but because the magnetic fields generated in the sulci 'stick out' of the head, MEG is sensitive to sulcal activity. EEG picks up activity in both sulci and gyri. MEG has better spatial resolution, because EEG currents are distorted by the variable resistances of the head, a problem MEG does not have. MEG also picks up simpler signal distributions from the sulci.

Positron emission tomography (PET) measures changes in blood flow and oxygen metabolism in the brain. Unstable positron-emitting isotopes are synthesised and injected; the isotope most used in PET is oxygen-15 in water molecules, which has a short half-life. When a particular brain area needs extra oxygen, a sign of extra activity, these molecules are distributed to that area within seconds. When the unstable isotope decays, it emits a positron that collides with a nearby electron, producing gamma rays travelling in opposite directions. Detectors in the scanner only register rays that arrive simultaneously on opposite sides, which lets the computer calculate the exact position of the collision. The temporal resolution of PET is low, and it takes a long time to obtain a good signal. PET experiments therefore use a blocked design, in which trials of the same condition are grouped into successive blocks.

Haemoglobin is the oxygen-binding molecule in red blood cells. Oxyhaemoglobin carries an oxygen molecule; deoxyhaemoglobin carries none. The two have different magnetic resonance (MR) signals, so changes in the concentration of oxyhaemoglobin, an index of brain activity, cause changes in the MR signal. This is known as the blood oxygenation level-dependent (BOLD) response, which is used to make fMRI images. fMRI has better spatial resolution and much better temporal resolution than PET, and it requires no radioactive injection, making it non-invasive. fMRI is often used in an event-related design, in which trials take only a few seconds (see Figure 2.13 on page 41 for a comparison of fMRI and PET). Trial blocks can contain multiple stimulus types, which enables researchers to connect behavioural responses to neural responses.

fMRI data can be analysed in different ways. Although local differences in the raw MR signal can be informative about the function of brain areas, they are rarely used directly; researchers usually apply spatial smoothing to the data to improve sensitivity to stimulus-related activation. To find patterns in the data, researchers use pattern-classification algorithms. A common analysis is multivoxel pattern analysis (MVPA), which searches for stimulus- or event-related patterns of activation across voxels; the total increase or decrease of activation in a brain area matters less here. Repetition suppression is the tendency of the brain to suppress its response to a stimulus that resembles the previously presented stimulus. When this principle is exploited in fMRI it is called fMRI adaptation: when repetition suppression occurs for a stimulus pair, the two stimuli are assumed to engage the same process in that brain area.
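The pattern-classification idea can be sketched with a minimal nearest-centroid decoder (hypothetical data; real MVPA typically uses cross-validated classifiers, but the principle is the same: the class is read from the *pattern* across voxels, not the overall level):

```python
import numpy as np

def train_centroids(patterns, labels):
    """Mean voxel pattern ('centroid') for each stimulus class."""
    return {c: patterns[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(pattern, centroids):
    """Assign a new voxel pattern to the class whose centroid it
    correlates with most strongly -- a minimal multivoxel decoder."""
    corr = {c: np.corrcoef(pattern, m)[0, 1] for c, m in centroids.items()}
    return max(corr, key=corr.get)

# Hypothetical data: 20 trials x 50 voxels, two stimulus classes whose
# patterns differ across voxels but not in overall activation level
rng = np.random.default_rng(1)
template = {0: rng.normal(0, 1, 50), 1: rng.normal(0, 1, 50)}
labels = np.repeat([0, 1], 10)
patterns = np.array([template[c] + rng.normal(0, 0.5, 50) for c in labels])

centroids = train_centroids(patterns, labels)
test_trial = template[1] + rng.normal(0, 0.5, 50)
predicted = classify(test_trial, centroids)
```

Because both classes have the same mean activation here, a univariate analysis would see nothing, while the multivoxel pattern still separates them.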

Information travels between brain regions through fibre tracts. Some areas are known to be better connected than others, but much less is known about their functional connectivity. This question can be investigated with functional MRI. The simplest relationship between brain areas is co-activation: two or more areas respond similarly to an experimental condition. To show this, researchers investigate resting-state connectivity. Brain regions in resting subjects always show some spontaneous fluctuation in activity, and areas whose fluctuations co-vary are believed to be connected.

Researchers use algorithms to search for correlated activation in the data. A seed voxel is a reference voxel: the signal fluctuations in this particular voxel are compared with those in other regions. Brain regions that show resting-state connectivity tend to show functional connectivity during tasks as well. However, when a connection is found, the nature of the relationship between the two areas remains unclear. Combining fMRI and behavioural data (psychophysiological interaction, PPI) can help. A large disadvantage of fMRI, and of many other activity-based methods, is that they cannot confirm a causal relationship between brain areas. Methods such as structural equation modelling predict the best causal model from the obtained data. A related approach is dynamic causal modelling, which models the functional connections between brain areas and predicts how these connections change under experimental manipulations.
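Seed-based analysis boils down to correlating one voxel's time series with every other voxel's. A sketch with synthetic time series (the shared slow fluctuation and noise levels are invented for illustration):

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Pearson correlation between a seed voxel's resting-state time series
    and every other voxel's; high r suggests functional connectivity."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    return (v * seed[:, None]).mean(axis=0)   # Pearson r per voxel

# Hypothetical data: 200 time points, 3 voxels; voxel 0 shares the seed's
# slow fluctuation, voxels 1-2 are independent noise
rng = np.random.default_rng(2)
slow = np.sin(np.linspace(0, 8 * np.pi, 200))
seed = slow + rng.normal(0, 0.3, 200)
voxels = np.column_stack([
    slow + rng.normal(0, 0.3, 200),   # co-fluctuating ("connected") voxel
    rng.normal(0, 1, 200),
    rng.normal(0, 1, 200),
])
r = seed_connectivity(seed, voxels)   # r[0] high, r[1] and r[2] near zero
```

Note that a high correlation, as the text stresses, says nothing about the direction or cause of the coupling.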

Active brain tissue transmits and reflects light differently from inactive tissue. These differences can be recorded, resulting in optical brain imaging. The method can be based on haemodynamic changes in response to neural activity, just like fMRI. However, the skull has to be opened and the cortex illuminated with light of a wavelength between 500 and 700 nanometres.

Another method, event-related optical signals (EROS), is non-invasive and activity-dependent. It also uses light, but is applied outside the skull. It has high temporal resolution but low spatial resolution, and it does not suffer from the inverse problem.

Assembling and delineating

Neuroscience and cognition can be combined by connecting brain areas to cognitive functions. Such associations can be investigated using a double dissociation (Figure 2.16 on page 47). If Task A mainly depends on Neural System A, and Task B on Neural System B, then a patient with a lesion in Neural System A should be impaired on Task A but not Task B. Conversely, a patient with a lesion in Neural System B should be impaired on Task B but not Task A.

However, neural systems often turn out to be only partially independent: Tasks A and B can both be impaired to some degree at the same time. The double-dissociation logic can also be used in brain-activity studies. If Brain Area A is activated during Task A, we can speak of an association. However, as with the lesion method, the dissociation is rarely complete: both tasks may engage the same brain area to some extent.

Table 2.1 on pages 48-49 provides an overview of all techniques with their main advantages and disadvantages.

The limitations of each method, as well as their complementary scales, have stimulated the use of combinations of measurement techniques. Information obtained with different techniques can be synthesised across studies to address particular functions, or combined directly within the same or linked studies. However, combining data from different methods can be difficult to put into practice.

5. Motor function

Sensory input travels from our extremities to the brain for processing. The brain provides output to the motor systems, which translate it into behaviour. Lower motor neuron activation is coordinated by interneurons in local networks. These networks can produce reflexes on their own, but need input from the brain to perform complex voluntary movements, which are coordinated by upper motor neurons. Neurons in the motor and premotor cortex mediate cognitive functions relevant to movement and behaviour. These two cortices are monitored by the cerebellum and the basal ganglia: the cerebellum supports learning new motor skills and corrects ongoing movements where needed, while the basal ganglia help initiate motor commands.

Apraxia

Apraxia is an impairment in planning and performing movements, regardless of the motivation and physical capability to perform them. The disorder takes several forms. Ideomotor apraxia is the inability to voluntarily perform learned actions when the situation calls for them. Ideational apraxia is the inability to perform sequential tasks involving the use of objects in the correct order. Verbal apraxia is a deficit in the production of speech. The causes of apraxia remain largely unknown, but it has been linked to damage to the parietal and premotor cortex. Although adaptation and some recovery are possible, the condition persists.

Hierarchical control of movements

Studies of motor control indicate that the neural systems controlling action are organised hierarchically. At the highest level are motor programmes: sets of commands that initiate a sequence of movements. These motor programmes are ballistic: they do not necessarily need sensory input to initiate movement. Motor programmes are also independent of specific muscle groups: studies show that people can write their name with any extremity.

The motor programmes simply encode the same movement sequences. They originate in the brain itself, rather than being reactions to sensory input from the peripheral nervous system. The lowest units in the motor control hierarchy are elementary behavioural units, which directly activate the muscles. Many intermediate levels lie between the motor programmes and the elementary units.

The hierarchical model of motor control is reflected in the neuroanatomy and neurophysiology of the central nervous system. The lower motor neurons in the spinal cord and brainstem, the upper motor neurons in the cortex and brainstem, the cerebellum, and the basal ganglia are the four separate yet interacting subsystems that make up the neural circuits responsible for skeletal movement. See Figure 5.2 on page 134 for an overview of the human motor system.

The neural circuits at the lowest level are composed of lower motor neurons and local circuit neurons, located in the brainstem and spinal cord. Lower motor neurons in the grey matter of the spinal cord and brainstem send axons out of the central nervous system to skeletal muscle fibres; they are comparable to the elementary behavioural units. Lower motor neurons fire action potentials immediately preceding contraction of the muscles they control, so their activity correlates directly with movement of the relevant body parts.

Motor neurons controlling fine movements innervate fewer muscle fibres than those involved in gross motor control of larger muscles. Local circuit neurons provide synaptic input to lower motor neurons and contribute to local coordination, which is particularly important for reflexes and for rhythmic activity such as walking. At higher levels of the neural motor system, upper motor neurons in the cerebral cortex and brainstem provide top-down control of the local circuits in the spinal cord and brainstem. The cerebellum corrects ongoing movements, and the basal ganglia modulate the activity of the upper motor neurons and help to initiate goal-directed movements.

Reflexes, patterns and rhythmic behaviours

Reflexes are signals that travel from sensors in the muscles to the spinal cord and back to the muscles; they control simple behaviour. Local circuit neurons within the spinal cord connect incoming sensory information to the appropriate motor neurons, which in turn initiate movement. Walking consists of alternating bursts of activity in the extensor and flexor muscles. Experiments show that the spinal cord and brainstem can independently control the timing and coordination of multiple muscles to produce complex rhythmic movements, and can also respond to changes in the physical environment, such as obstacles and speed.

Facial expressions

Facial expressions show how upper and lower motor neurons are coordinated and how emotion plays a role in facial movement. The muscles of facial expression are under the direct control of lower motor neurons of the facial nerve in the pons. Damage to these lower motor neurons paralyses all facial movements on the affected side; a common form of such facial paralysis is Bell's palsy. Damage to the upper motor neurons controlling the facial nerve, by contrast, impairs voluntary facial movements while spontaneous emotional expressions are often spared. There is another motor pathway, closely linked to emotion, that controls involuntary facial movement. Damage to this pathway impairs spontaneous emotional expressions, while voluntary facial expressions remain possible. The French neurologist Duchenne de Boulogne discovered muscles that can only be activated through emotional arousal.

Cortical tracts for motor control

Although lower brain regions can produce some complex motor behaviour, the higher brain areas are needed to initiate and coordinate still more complex behaviour. Top-down control from the cerebral cortex to the brainstem and spinal cord originates in the primary motor cortex and the premotor cortical areas (comprising the premotor cortex and the supplementary motor area). Figure 5.4 on page 137 provides an overview of the upper motor neuron pathways. The primary motor cortex needs only a small amount of stimulation to initiate movement.

The upper motor neurons of the primary motor cortex have fairly direct access to local circuit neurons and lower motor neurons. Axons of primary motor cortex neurons that innervate neurons in the brainstem branch off at the appropriate levels. Those continuing to the spinal cord merge and descend through the medullary pyramids (yes, they look like pyramids). The majority of corticospinal fibres cross the midline (decussate) at the end of the medulla and enter the lateral corticospinal tract of the spinal cord, terminating in the spinal grey matter on the neurons that control the skeletal muscles. A small number of fibres remains uncrossed and forms the medial or ventral corticospinal tract. These axons end in the medial spinal grey matter on both sides of the cord and are involved in moving muscles close to the midline of the body, such as the torso.

Early research hinted at an orderly spatial organisation of motor neurons in the motor cortex, and modern fMRI has confirmed this. The motor maps are a distorted representation of the body: the lips, tongue and hands are overrepresented on the cortex compared to other body parts. Tellingly, these are exactly the parts most involved in fine motor control.

Animal studies suggest that stimulation of the primary motor cortex can initiate coordinated, complex movements using multiple muscles. Apparently, movements (not muscles!) are represented in the motor maps. Eye movements are generated in the frontal eye fields of the cortex and the superior colliculus in the midbrain; signals then travel to the brainstem and on to the ocular muscles via lower motor neurons. This is consistent with the idea that higher motor centres provide motor command signals for both voluntary and involuntary movement.

The activity of neuronal populations

Although we have a general idea of how motor maps coordinate movements, we do not know exactly how this happens. The relatively coarse tuning of neurons in the higher motor centres poses a problem for researchers: the direction and amplitude of a movement cannot be predicted with any precision from a single neuron, since motor neurons are active during various movements. One way to tackle this problem is to average over all neurons activated during a particular movement, such as an eye movement. The local circuits in the brainstem coordinating eye movement are under direct control of the superior colliculus, where single neurons initiate gaze shifts and saccades. Research on saccades and gaze shifts provides evidence that the activity of the whole neuronal population is averaged to produce the movement command. This holds not only for the superior colliculus but also for the primary motor cortex.
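The averaging idea above can be sketched as a population vector: each neuron 'votes' for its preferred movement direction, weighted by how strongly it fires (the tuning numbers below are hypothetical, not from the book):

```python
import numpy as np

def population_vector(preferred_dirs, firing_rates):
    """Estimate movement direction by averaging each neuron's preferred
    direction (in degrees), weighted by its firing rate."""
    d = np.deg2rad(preferred_dirs)
    x = np.sum(firing_rates * np.cos(d))
    y = np.sum(firing_rates * np.sin(d))
    return np.rad2deg(np.arctan2(y, x)) % 360

# Hypothetical population of broadly (cosine) tuned neurons: each fires
# most for movements near its preferred direction, here peaking at 90 deg
prefs = np.array([0.0, 45.0, 90.0, 135.0, 180.0])
rates = 10 + 8 * np.cos(np.deg2rad(prefs - 90.0))
estimate = population_vector(prefs, rates)   # close to 90 degrees
```

No single neuron in this population pins down the direction, yet the weighted average recovers it precisely, which is the point the text makes about coarse tuning.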

Planning movements

The brain is capable of anticipatory activation: a voluntary, planning-related response to the environment. Planning-related activity begins earlier in the premotor areas than in the primary motor cortex. Since the motor system is hierarchically organised, it makes sense that the premotor areas show earlier activation: they are 'higher in rank'. Premotor areas compute behaviour related to goals, and the primary motor cortex computes the appropriate movements.

This premotor computation is also visible in the EEG signal. An ERP component called the readiness potential begins over the premotor areas and then spreads to the contralateral primary motor cortex. As you may expect, the readiness potential occurs before the actual movement. When the premotor areas and the primary motor cortex are damaged, patients may be unaware of their inability to move, a condition called anosognosia. The readiness potential also hints at the role of awareness in motor planning: a controversial study by Libet indicated that conscious awareness of the intention to move actually follows the onset of the readiness potential.

Behavioural goals constantly compete for attention, and the cognitive context is important for generating the appropriate movement. Animal studies imply that neurons in the premotor regions select movements based on the degree of incoming sensory evidence. Motor preparation is thus a dynamic, competitive process: sensory information is converted into the intention to move through graded activation of neurons in various higher-order premotor areas.

Besides sensory input, people also select their actions based on goals, memories, and information about the environment. Neurons in the parietal cortex of monkeys are most sensitive to the reward value of an action: the probability that an action will result in a reward. Studies in humans suggest that selecting a movement goal involves scaling neuronal responses associated with the reward value of each movement. This will bias the motor system to produce a movement that best satisfies biological motivations, such as acquiring rewards or avoiding punishments.

Sequential movements

Regions in the frontal cortex are specialised in the production of movement sequences. This supplementary motor area (SMA) is crucial for producing movements without explicit sensory input. In contrast, the premotor cortex is important for the production of movements based on sensory cues. There seems to be a functional dissociation between the SMA and the premotor cortex: if one area is damaged, the other area can still carry out its function.

This idea also applies to the production of movement sequences, where each movement is guided by its position in the learned order rather than by external sensory cues. The supplementary motor area provides abstract motor intention signals controlling the internal production of action sequences, the prefrontal cortex initiates and terminates the sequences, and the primary motor cortex sends them to the lower-level regions for implementation.

Sensory-motor coordination

Sensory input is needed to compute our position relative to the environment; based on this information, we can direct our behaviour towards specific objects. The parietal cortex is important for sensory-motor coordination. Damage to the parietal cortex disrupts reaching and saccades, a disorder known as optic ataxia: the patient can no longer integrate information about the positions of the eye, the hand, and the target. The neurobiologists Goodale and Milner suggested that the dorsal visual stream may be involved in visually guided movement, while the ventral visual stream may be specialised for object identification.

Interval timing

There is a close relationship between initiation and coordination of action and the sense of time. Interval timing actually depends on neural networks involved in the coordination and initiation of action, such as the basal ganglia, the prefrontal cortex, and the cerebellum. Damage to the basal ganglia indeed results in impairment in timing. Dopaminergic drugs (which influence the basal ganglia) also influence the sense of time.

Initiation and the basal ganglia

The motor cortex contains the circuits responsible for the selection, planning, and initiation of sequences of movements to fulfil goals. The subcortical circuits in the basal ganglia seem to act like a gating mechanism: they inhibit potential movements until they are needed. This way, the basal ganglia influence the timing of movements. The basal ganglia are made up of three main nuclei: the caudate and putamen (together called the striatum), and the globus pallidus.

Two other nuclei also play important roles in the functioning of the basal ganglia. Almost all cortical areas project to the striatum. The globus pallidus sends activity from the basal ganglia to the thalamus, which serves as a relay station for actions. The caudate and putamen inhibit the globus pallidus, and can thereby activate stored commands in the thalamus. The excitatory and inhibitory effects of the basal ganglia release and coordinate the desired movements.
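The gating mechanism described above can be written as a toy disinhibition chain (a simplified sketch, not from the book; all rate values are arbitrary illustrative numbers, not physiological measurements):

```python
# Toy model of the basal ganglia disinhibition loop described above.
# The globus pallidus is tonically active and inhibits the thalamus;
# cortical input excites the striatum, which inhibits the globus pallidus,
# releasing (disinhibiting) the thalamus so a stored motor command can pass.
# All rates are illustrative numbers between 0 and 1.

def thalamic_output(cortical_drive: float) -> float:
    """Return thalamic activity for a given level of cortical input to the striatum."""
    striatum = max(0.0, cortical_drive)    # excited by cortex
    gp = max(0.0, 1.0 - striatum)          # tonic activity 1.0, inhibited by striatum
    thalamus = max(0.0, 1.0 - gp)          # tonically inhibited by the globus pallidus
    return thalamus

# No cortical selection signal: the pallidum suppresses the thalamus (gate closed).
print(thalamic_output(0.0))   # 0.0
# Strong cortical selection: the striatum silences the pallidum (gate open).
print(thalamic_output(1.0))   # 1.0
```

The point of the sketch is only the sign pattern: two inhibitory links in series make the net effect of cortical input on the thalamus excitatory.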

See Figure 5.21 on page 154 for an overview of the basal ganglia loop. Circuits projecting to the substantia nigra pars reticulata (SNr) and on to the superior colliculus have the same loop function, but for saccades. This inhibiting and permitting of desired movements by the basal ganglia has been confirmed in animal studies. In Parkinson’s disease, the dopaminergic neurons of the substantia nigra slowly die; patients show difficulty initiating movements and sustaining purposeful movements. In contrast, Huntington’s disease is associated with a slow death of cells in the caudate nucleus, and patients show an opposite pattern of symptoms: involuntary choreiform (dancelike) movements and dementia. Another basal ganglia disorder, hemiballismus, is caused by unilateral damage to the subthalamic nucleus; its symptoms also include choreiform movements like those seen in Huntington’s disease.

Basal ganglia and cognition

Some of the problems that occur when the basal ganglia are damaged seem purely cognitive. Studies show that the basal ganglia contain sensory-motor, emotional, and cognitive networks (see Figure 5.23 on page 157). These pathways are almost, but not totally, distinct. The emotional and cognitive pathways work the same way as the sensory-motor pathways described earlier: each channel has a feedback loop between the cortex and the basal ganglia. The cortex sends input for selection and the basal ganglia select the appropriate behaviour. The basal ganglia are also involved in learning new movement sequences: they connect sensory events to the appropriate motor actions, and they are influenced by the anticipation of reward.

Cerebellum

Circuits in the brainstem coordinate reflexes and the correction of ongoing automatic movements. The cerebellum (Figure 5.25 on page 159) is responsible for the correction of ongoing voluntary movements; it makes smooth, coordinated, skilled movements possible. The cerebellum receives input from the motor cortices and sends its output via the thalamus back to the frontal and parietal cortices (Figure 5.26 on page 160). Lesions to the medial cerebellum result in truncal ataxia: the inability to make coordinated trunk movements. Lesions to the lateral cerebellum result in the opposite pattern, appendicular ataxia, in which movement of the limbs is less smooth than normal. Cerebellar damage causes an intention tremor on the ipsilateral side of the body: uncoordinated movements during voluntary actions.

Cognition and the cerebellum

Studies using fMRI show that the cerebellum seems to be involved in cognition as well. However, we don’t know exactly how or why. The computational power of the cerebellum might be useful not only during the correction of movement, but also during the correction of cognitive functions. This is shown in Figure 5.29 on page 163: here, input comes from the prefrontal cortex instead of from the premotor cortex, as it does during movement. Support for this theory comes from examinations of the connections in the brain: the prefrontal cortex is heavily connected to the cerebellum.

 

6. Attention and processing

The Cocktail Party Effect

The cocktail party effect refers to the phenomenon that we can selectively focus our attention on one conversation while ignoring others. The psychologist Cherry researched this phenomenon using the shadowing technique: participants wore headphones and were asked to follow the speaker on a particular side while ignoring the other side. Afterwards, people could report only very superficial content of the unattended channel. Cherry proposed that attention works as a selection mechanism, but more recent research shows that some unattended semantic information can still leak through.

Concept

The term ‘attention’ is used in many different ways. Arousal describes the global state of the brain; being alert means that a person is completely awake and actively attending to the environment. In contrast to arousal, attention can be selectively focussed. Selective attention is the allocation of processing resources to part of the environment; the cocktail party effect is an example, and so is visual spatial attention. Shifting attention while keeping the gaze fixated is called covert attention, and it occurs independently of modality. The other type is overt attention, in which the sensory organs (for example, the eyes) are directed towards a particular place in the environment.

Behaviour and attention

Attention influences the processing of sensory input. The question is whether the filtering of information happens before (early selection) or after (late selection) the completion of sensory and perceptual processing. Both models of selection can be found in Figure 6.2 on page 171. The early selection model is based on the ideas of Broadbent. Broadbent’s model needed adjustment after experiments showed that unattended information still leaked through.

The alternative became the late selection model. Treisman proposed yet another model: an adaptable filtering system in which some unattended semantic information can reach higher processing, but only if it is highly salient. Later researchers refined this further, showing that in some situations information is filtered out based on basic physical features, while under other circumstances more complex aspects of the information are necessary to select the correct content, so that information is also processed at a higher level.

Voluntary attention is also called endogenous attention; involuntary, automatic attentional focussing is called exogenous or reflexive attention. These two types of attention can be investigated with the cueing paradigm: subjects get cues about the location of a target on a screen. Sometimes the cue is invalid and points to the wrong location. Subjects typically react more slowly to targets with an invalid cue and, compared with a no-cue trial, faster when the cue is valid.

This means that focussing attention on the correct location benefits the individual. This experiment taps endogenous (voluntarily focussed) attention; paradigms investigating exogenous (involuntary) attention show similar results. Endogenous and exogenous attentional cuing differ in the amount of influence they have on target processing. For endogenous cues, the effect can start from 300 milliseconds onward and lasts for some seconds. Exogenous cuing effects start earlier and last only a few hundred milliseconds. After this period, inhibition of return can occur: people actually show slower reactions to validly cued targets.
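The typical ordering of reaction times in the cueing paradigm can be sketched as a few lines of code (the millisecond values are invented placeholders, not data from the book):

```python
# Sketch of the cueing-paradigm logic described above. All millisecond
# values are illustrative placeholders, not experimental results.
BASE_RT = 350      # hypothetical mean reaction time on no-cue trials (ms)
CUE_BENEFIT = 30   # speed-up when the cue validly points at the target
CUE_COST = 40      # slow-down when the cue points at the wrong location

def mean_rt(cue: str) -> int:
    """Expected reaction time for a trial type: 'valid', 'invalid', or 'none'."""
    if cue == "valid":
        return BASE_RT - CUE_BENEFIT
    if cue == "invalid":
        return BASE_RT + CUE_COST
    return BASE_RT

# Typical ordering: valid < no-cue < invalid.
assert mean_rt("valid") < mean_rt("none") < mean_rt("invalid")
```

Inhibition of return would flip the sign of the valid-cue effect at long cue-target intervals in exogenous cueing; that time course is not captured in this static sketch.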

Neuroscience and attention: Auditory Spatial Attention

Higher levels are able to influence stimulus processing at lower levels. Especially the frontal and parietal cortices control attention in a top-down manner.

Viewing stimulus processing in the brain

Differences in auditory streams (the input from the left and right ear) are associated with differences in perceived space around the individual. Differences in these streams can be investigated using ERPs. The earliest waves of an ERP are known as brainstem evoked responses (BERs, see Figure 6.6 on page 176). The next phase reflects the early evoked activity in the auditory cortex. Then, a set of longer lasting waves reflect activity in higher order auditory cortices. This ERP can be obtained with the attentional stream paradigm. The participant is asked to attend different streams of beeps in both ears, with an occasional deviant beep.

The participant is asked to detect the deviant tone in only one stream, for example the left. The attentional stream paradigm shows a specific ERP component called the auditory N1 (N for negative peak, 1 for the first peak, occurring around 100 milliseconds). An increase in the N1 peak is observed for all stimuli in the attended ear; it seems associated with early selection of auditory input. Detected deviant tones elicit a larger P3 (P = positive peak, 3 = 300 milliseconds). When the attentional focussing is extraordinarily strong, the BER shows an enhanced positive wave from 20 to 50 milliseconds for the attended stimuli. This P20-50 attention effect arises in the auditory sensory cortex, indicating early attentional selection.
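ERP components such as the N1 are extracted by averaging many stimulus-locked EEG epochs, so that random background activity cancels while the evoked deflection remains. A minimal sketch with synthetic data (the waveform shape and noise level are made up for illustration):

```python
import random

# Average many noisy stimulus-locked epochs so that random background EEG
# cancels out, leaving the stimulus-evoked deflection (here, an N1-like
# negative peak near 100 ms). Signal and noise values are invented.
random.seed(0)
N_EPOCHS, N_SAMPLES = 200, 120   # 120 samples ~ 0-120 ms post-stimulus

def simulated_epoch() -> list[float]:
    """One noisy epoch with a negative deflection between samples 90 and 110."""
    return [(-1.0 if 90 <= t < 110 else 0.0) + random.gauss(0.0, 2.0)
            for t in range(N_SAMPLES)]

epochs = [simulated_epoch() for _ in range(N_EPOCHS)]
erp = [sum(e[t] for e in epochs) / N_EPOCHS for t in range(N_SAMPLES)]

# After averaging, the negative peak around 100 ms stands out from the noise.
n1_peak = min(erp[90:110])
baseline = max(abs(v) for v in erp[:80])
print(f"N1 amplitude ~ {n1_peak:.2f}, baseline noise ~ {baseline:.2f}")
```

In a single epoch the deflection is invisible (noise is twice its size); averaging 200 epochs shrinks the noise by a factor of about fourteen, which is why the attentional stream paradigm relies on many trial repetitions.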

Neuroimaging, especially fMRI, shows that attention tends to modulate activity in specific regions of the auditory cortex. This reaction seems to correspond to the large N1 peak in the ERP of the attentional stream paradigm.

If occasional deviant tones occur in a dichotic listening paradigm, the ERP will show a component called the mismatch negativity (MMN). This negativity shows its maximum peak around 150 to 200 milliseconds in the auditory cortex. The MMN becomes larger when the difference between the standard and the deviant tone becomes larger. MMN also occurs when a subject does not attend the auditory stream, but the effect is weaker than in attended streams. This means that attention can modulate the overall amplitude of the early ERP, as well as influence the analysis of later processes.

Figure 6.11 on page 182 shows a summary of the ERPs.

Attentional blink and late selection

Under some conditions, attentional selection happens later and at higher anatomical levels. An example of this is the attentional blink. In the attentional blink paradigm, visual stimuli are presented rapidly in a single stream, in contrast to the two-stream paradigms described earlier. The participant has to detect special targets in the stream. Typically, the participant is less able to detect a second target shortly after the detection of the first target.

The ability to detect the second target is impaired when it occurs between 150 and 450 milliseconds after the first. This lag might represent a processing bottleneck at the target discrimination stage. ERP studies show that the earlier components (P1, N1, and even the N400) stay the same whether or not a target is presented during the attentional blink period. Only the P300 peak (associated with consciousness and awareness) changes when a second target is presented during the attentional blink period. Apparently, attentional selection during the attentional blink happens at a later stage of processing.

Neuroscience and attention: Visual Spatial Attention

The ERPs related to visual stimuli are often based on visual spatial tasks. The first wave in such an ERP is the P1 (positive peak around 100 ms), which reflects early processing in low-level visual cortical regions such as V1, V2, V3 and V4. The P1 is followed by the N1 around 180 milliseconds after stimulus presentation. Attention enhances both the P1 and the N1 peaks of the ERP. The effects of visual spatial attention have also been studied with PET and fMRI. A visual stream paradigm is analogous to the auditory stream paradigm, but on a screen: subjects see a fixation point and can attend either the left or the right side of this point, with a cue indicating the location of the target stimulus.

PET studies show that images in the left visual field cause activity in the right occipital area of the brain, and vice versa. This corresponds with the crossed anatomical organisation of the brain. Moreover, images in the upper visual field activate the ventral inferior occipital cortex, whereas images in the lower visual field activate the dorsal superior occipital cortex. This crossed and inverted mapping, called retinotopic organisation, has been further investigated using fMRI (see Figure 6.14 on page 188).
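The crossed and inverted mapping described above can be summarised as a tiny lookup function (a didactic sketch only, ignoring the fovea and anatomical detail):

```python
# Toy summary of the retinotopic mapping described above: left visual field
# -> right occipital cortex (and vice versa); upper field -> ventral
# (inferior) occipital cortex; lower field -> dorsal (superior) cortex.

def occipital_region(x: float, y: float) -> tuple[str, str]:
    """Map a visual-field location (x: +right/-left, y: +up/-down, degrees)
    to the hemisphere and cortical surface that respond most strongly."""
    hemisphere = "right" if x < 0 else "left"    # contralateral organisation
    surface = "ventral" if y > 0 else "dorsal"   # inverted along the vertical axis
    return hemisphere, surface

print(occipital_region(-5.0, 3.0))   # ('right', 'ventral'): upper-left stimulus
print(occipital_region(4.0, -2.0))   # ('left', 'dorsal'): lower-right stimulus
```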

Neuroimaging studies found strong effects of visual attention on stimulus processing in relatively low-level visual cortex areas (V2, V3 and V4), although the effects of attention are weaker in the primary visual cortex (V1). Thus attention influences even the lower levels of the visual anatomical system. Research on the competition between stimuli for attention revealed a phenomenon called biased competition: attention counteracts the suppressive influence from nearby stimuli, which may help filter out distracting information.

Combinations of EEG and PET show that the attention effect originates in the dorsal contralateral occipital cortex, following the retinotopic organisation of the visual sensory pathways. Attention can also have an effect at longer latencies in the sensory processing of low-level visual regions. This is called a re-entrant process, in which attention-related activity returns to the same low-level sensory areas that were activated in the first place; it presumably reflects enhanced late processing of the stimulus in those areas.

Animal studies have shown that attention influences a neuron’s firing rate and pattern in response to visual stimuli. The neural response of a single cell depends on the location of attention within its receptive field; when attention is directed to a location outside the receptive field, the response does not change. The reaction of a single cell to a particular orientation increases when the subject attends to that spatial location.

The sensitivity to a certain orientation doesn’t change. Spatial attention also increases contrast sensitivity. Attention increases the likelihood of a neural response if the contrast is near the discrimination threshold level. Research using EEG also discovered attention-related interaction between different frequency bands of the local field potentials in the sensory cortices. For example, low-frequency waves tend to synchronise with streams of stimuli.

The EEG signals evoked by (un)attended stimuli can be investigated using mismatch negativity in the ERP. This works for both visual and auditory stimuli. During easy central-vision tasks, more activity will be evoked in brain area MT+ by task-irrelevant peripheral moving stimuli than during difficult tasks (see Figure 6.18 on page 195). This relates to perceptual load: if there is processing capacity left, it is used to process the irrelevant stimuli. When a task is more difficult, all resources are used for the processing of the relevant stimulus.

Attention-related re-entrant processes

Spatial attention can have an effect on early processing in low-level anatomical cortical areas of the sensory system (see the figure in Box 6B on page 191). A possible explanation is that attention modulates the activity elicited in these sensory regions during the initial ascending flow of sensory information. However, attention can also affect later stimulus-evoked activity; the explanation here is that re-entrant processes return to the same low-level sensory cortical areas, but over a longer period of time. Studies using fMRI found re-entrant processes that could not be detected with EEG. The functional role of attention-related re-entrant activity at the lower levels is not yet clear.

Neuroscience and non-spatial stimuli

Attending to the pitch of a sound is useful for discriminating auditory streams. In research, participants may be asked to listen to a stream of stimuli of a specific pitch and detect a deviant sound in the stream; this is known as the oddball paradigm. Afterwards, the participants’ ERPs are calculated. One effect of this paradigm is an extended negative wave starting around 100 milliseconds, called the processing negativity, which seems to reflect attentional enhancement of activity in the auditory cortices. Other functional imaging studies show that attending to a specific auditory feature enhances activity in the brain areas that normally process that feature.

The neural effects of attending to non-spatial visual features involve the selection negativity. A participant can be asked to focus on stimuli with a specific feature in a stream of stimuli. The ERP for the deviant stimulus then shows a sustained negative wave over posterior areas: the selection negativity. Feature-based attention enhances activity in the regions associated with that feature; for example, attending to colour increases activity in the colour-processing region V4. Animal research shows that attending to motion occurring outside a cell’s receptive field increased the neuron’s firing rate, even when the receptive field itself was task-irrelevant.

The feature similarity gain model states that the attentional modulation of the amplitude (gain) of a sensory neuron’s response depends on the similarity between the features of the currently relevant target and the feature preferences of that neuron. Neural representations of irrelevant stimuli elsewhere in the visual field can also be modulated by feature-based attention; this spread of feature-based attention across the visual field might be convenient in visual search tasks. If a feature is task-irrelevant and shown in an unattended part of the visual field, no attentional enhancement occurs.
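The core of the feature similarity gain model can be written as a one-line gain function. This is a toy version using orientation as the feature; the gain range (0.8 to 1.2) is an arbitrary illustrative choice, not a measured value:

```python
import math

# Toy version of the feature similarity gain model described above: the
# attentional gain on a neuron grows with the similarity between the
# attended feature (here, an orientation) and the neuron's preferred
# feature. The gain range 0.8-1.2 is invented for illustration.

def attentional_gain(attended_deg: float, preferred_deg: float) -> float:
    """Multiplicative gain on a neuron's response under feature-based attention."""
    # Orientation similarity: 1 when identical, -1 when orthogonal
    # (orientations repeat every 180 degrees, hence the factor of 2).
    similarity = math.cos(math.radians(2 * (attended_deg - preferred_deg)))
    return 1.0 + 0.2 * similarity   # enhance matching neurons, suppress opposite ones

print(attentional_gain(45, 45))    # 1.2  (attended feature matches the preference)
print(attentional_gain(45, 135))   # 0.8  (orthogonal preference is suppressed)
```

The sketch captures the model's key prediction: neurons preferring the attended feature are enhanced, while neurons preferring the opposite feature are suppressed below baseline, wherever in the visual field they sit.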

Attention towards objects modulates stimulus processing in the regions of the visual cortex that process objects as categories. Areas in the higher-level visual cortex, such as the fusiform gyrus in the inferior occipital and temporal lobes, are specialised in processing faces, while regions in the medial and anterior temporal lobes are specialised in processing buildings. Studies using transparently overlaid images of faces and houses show that these areas only respond when attention is focussed on the corresponding image: for example, when focusing on the house, the medial and anterior temporal lobes show activity.

Apparently, mechanisms exist for both enhancement of activity for attended stimuli and suppression of activity for ignored stimuli. Attention can also increase the selectivity of the neural network representing an object. The lateral occipital lobe is involved in the analysis of objects. Studies show increased activity in this area when attention is focussed towards objects. Cells can be especially sensitive towards particular objects.

Attention across sensory modalities

In a natural setting, sensations come continuously from different modalities. Studies found evidence for supramodal attention, attention operating across multiple modalities. For example, ERPs for irrelevant auditory stimuli are enhanced if their location is relevant for visual stimuli, and the enhanced activity can be found in low-level processing areas of the sensory cortex. The explanation is simple: if a stimulus in one modality (such as vision) is relevant at one location, a stimulus at that location is likely to be relevant in another modality (such as hearing) as well.

When stimuli from two different modalities occur at almost the same time, attention to the stimulus in one modality tends to spread to encompass the concurrent stimulation in the other modality. This spread is related to the brain’s tendency to link co-occurring stimulation together. The integration of sensory information from multiple modalities does not require attention and happens automatically; neural enhancement due to multisensory integration happens early in processing and in low-level sensory areas.

7. Control of attention

Clinical observations suggest an important role of regions of the parietal and frontal cortices in attentional control. Current models of control involve intervention by networks between brain regions.

Hemispatial neglect

Brain lesions in the right inferior parietal lobe may cause attentional problems. One example is hemispatial neglect syndrome: patients ignore one side of the visual field. This differs from a scotoma, in which a small area of the visual field is blind because of damage to the visual system itself. In neglect, the damage is contralateral to the side that is ignored: patients with damage in the right hemisphere ignore the left visual field and vice versa. Patients can see objects in the neglected field, but they completely ignore them while doing tasks. Thus, the problem seems to be purely attentional, not visual. The unaffected hemisphere has more power to direct attention, pulling it towards its own side of space.

The phenomenon in which one stimulus captures attention at the cost of other attention-seeking stimuli, so that a stimulus in the affected field is missed when a competing stimulus appears in the intact field, is called extinction. Sometimes not only currently seen visual stimuli are affected: patients drawing objects from memory also tend to ignore one side of their drawing. Attentional neglect can also be found in other sensory modalities, so the attentional deficits in hemispatial parietal neglect appear to involve supramodal attentional mechanisms.

Clinical evidence involving attentional control

Early observational research already pointed out the importance of the parietal and frontal cortices for attention. The right parietal cortex seems to be particularly important for attentional control. One explanation is that the right parietal cortex may control both the left and right sides of the visual field, while the left parietal lobe only controls the right side. The left lobe then has space for other functions, such as language.

Evidence from neuroimaging studies shows that this hypothesis is not correct: both hemispheres are activated during attention to any part of the visual field. Lesions in regions of the frontal lobe that are connected with the parietal regions can also cause attentional problems; unilateral frontal lesions tend to have a greater effect on the more overt aspects of attention, such as eye movements towards objects. Bilateral damage to the dorsal posterior parietal and lateral occipital cortex is associated with attentional problems known as Balint’s syndrome. Patients suffering from Balint’s syndrome show:

  • Simultanagnosia: inability to attend and/or see more than one visual object.

  • Optic ataxia: impaired ability to reach for or point to an object in space.

  • Oculomotor apraxia: difficulty in voluntarily directing the eye gaze towards objects in the visual field.

It doesn’t matter on which side of the visual field the objects are presented; the patient will still only attend to one object. This is in contrast with hemispatial neglect syndrome, where the relative position of the objects matter. Patients with Balint’s syndrome can attend to more than one aspect of a stimulus, but only when these aspects are part of one whole object. Lesions in the brainstem can also impair attentional control.

The superior colliculus plays an important role in the production of saccades and interacts with the frontal and parietal cortices. Damage to the superior colliculus causes slow eye movements and slowed shifts of covert visual attention. Hemispatial neglect caused by damage to the parietal lobe can be cancelled out by a lesion of the superior colliculus in the other hemisphere.

Voluntary attention

Damage to parietal and frontal areas is related to attentional control (rather than to sensory processing). PET studies show that attention shifting evokes enhanced activity in parietal and frontal brain areas. Another way of investigating voluntary control is by measuring EEG during a cuing paradigm, where a target appears after an instructional cue. Sometimes the cue is invalid and points to the wrong location; targets at cued locations are detected faster than targets at uncued locations. Neurophysiological studies can pinpoint the moment when the participant moves his or her attention to the target. It is hard to use fMRI in these kinds of studies: the measured BOLD response is too slow to make accurate measurements of attention shifts.

Researchers can use slower cuing paradigms or ‘stay’ versus ‘switch’ experiments, in which participants are asked either to switch to another stream of information or to keep attending to the current stream. Again, the parietal and frontal lobes were found to be essential for attentional control. Switch and stay paradigms have also been used to study non-spatial attention; these experiments likewise found that the parietal cortex plays an important role in switching attention, although the parietal area involved is not exactly the same as for visuospatial tasks. ERPs indicate that the first voluntary (endogenous) attention-directing activity occurs within the first 350 milliseconds.

Single-neuron recordings in animals give more specific information about attentional control processes. Recordings are made in a region of the posterior parietal cortex known as the lateral intraparietal (LIP) area and in the frontal eye fields in the dorsal prefrontal cortex. Firing patterns in the LIP are implicated in planning eye movements to a relevant target in the visual field, and stimulating the LIP can trigger eye movements to a location in the contralateral visual field. Firing patterns in the LIP also implicate covert focussing of attention: the enhancement of neural activity is not necessarily due to saccade production, but rather to the allocation of attention to a spatial location in the neuron’s receptive field. Activity in the LIP mainly involves attention to salient information or other cognitive factors.

The frontal eye fields (FEF) are also involved in saccadic eye movements; stimulating the FEF triggers saccades to the specific locations known as the neurons’ saccade movement fields. The premotor theory of attention states that shifts of attention and preparation of actions are closely linked and are controlled by the same sensory-motor systems; it also posits that the saccade system mediates covert visual spatial attention. Nevertheless, FEF neurons may control functions other than saccades, and may control attention independently of observable saccades. Single-neuron measurements suggest that activity travels from the frontal to the parietal lobe during endogenous attentional control.

When expecting a visual stimulus at a particular location, activity increases in the frontal, parietal and extrastriate cortex. This attention-related increase of activity in the absence of visual stimuli reflects a preparatory bias, caused by top-down signals from the frontal and parietal regions. The preparatory bias can be seen in Figure 7.9 on page 217. Moreover, activity can occur in the sensory areas in the absence of a visual stimulus.

This activation is preparatory activation caused by top-down control from higher areas such as the frontoparietal system. The amplitude of the EEG activity before stimulus presentation correlates with the amplitude of sensory-related responses to an attended target stimulus. This preparatory bias also occurs in other modalities, such as the auditory areas of the brain, and may reflect a general attentional control mechanism. Activity changes prior to stimulus presentation have also been observed in the alpha band of the EEG, a band associated with attention; this preparatory alpha activity can again occur in different modalities, such as the auditory cortex. Decreased contralateral alpha power reflects increased excitability of the occipital areas specific to the location of the target, whereas increased ipsilateral alpha power reflects active inhibition of irrelevant areas.

Control of exogenous attention

Attentional shifts can be triggered exogenously by salient stimuli in the environment. Exogenous attention shifts tend to be faster than endogenous shifts, and exogenously triggered shifts of attention also result in more activity in the corresponding regions of the sensory cortex. The same frontal and parietal regions that control endogenous attention are also activated by exogenous control of attention. In endogenous conditions, neurons in the FEF are activated before the neurons in LIP; when attention is triggered exogenously, the order reverses and the parietal neurons are activated prior to the frontal neurons. In sum, attentional control activity in the frontal and parietal cortices may occur in different orders in different attentional conditions, involving different neural processes.

Correctly cued targets are detected better and faster, and they evoke larger neural responses in the visual cortex, probably due to preparatory signals from the frontal and parietal lobes. The inferior regions of the right hemisphere, especially near the right temporoparietal junction (TPJ), are more strongly activated by incorrectly cued targets. An invalid cue means that attention has to be shifted to the actual location of the target, and the enlarged activity in the TPJ reflects this stimulus-triggered attention switch.

Visual search

Visual search is the process of inspecting a complex scene for a particular item. Figure 7.11 on page 221 is an example of a typical visual search task: a display contains a target, which the participant has to search for, and some distractors. Strategies of visual search change when the features of the target and distractors are manipulated. If the target has features that are clearly different from all other stimuli, it is called a pop-out stimulus. No matter how many distractors are placed around a pop-out stimulus, the reaction time for finding it stays the same.

Apparently, the detection of a pop-out stimulus does not require serial shifts of attention over every object in the display; processing happens all at once, in an automatic manner. The opposite of a pop-out target is a conjunction target: a target that shares multiple features with the other objects in the display and differs in only one aspect from the distractors. It takes much longer to find such a target, because every distractor needs separate attention to investigate it.

To find conjunction targets, attention has to be actively shifted over every single item in the display. This means that it takes more time to find the target if the display shows more distractors, and even more time when the participant has to confirm that the target is absent. The explanation is that in target-absent trials, the participant has to search every single distractor before concluding that the target is absent, whereas in target-present trials, the participant needs to search only about half the distractors on average before finding the target. These results led to the feature integration theory, which states that the perceptual system functions like a set of feature maps representing features such as colour, form, pitch, motion, and so on.
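The set-size relationships described above can be written as simple reaction-time equations (a sketch with made-up slope values, illustrating only the pattern, not real data):

```python
# Sketch of the search-time pattern described above, with invented values.
# Pop-out search is parallel: reaction time is flat across set sizes.
# Conjunction search is serial and self-terminating: reaction time grows
# linearly with the number of items, and target-absent trials have roughly
# twice the slope of target-present trials (every item must be checked,
# versus about half on average when the target is present).
BASE = 400       # hypothetical base reaction time (ms)
ITEM_TIME = 50   # hypothetical cost of inspecting one item (ms)

def search_rt(set_size: int, search: str, target_present: bool = True) -> float:
    if search == "pop-out":
        return BASE                        # parallel: independent of set size
    checked = set_size if not target_present else set_size / 2
    return BASE + ITEM_TIME * checked      # serial, self-terminating search

assert search_rt(4, "pop-out") == search_rt(16, "pop-out")
# Target-absent slope is twice the target-present slope:
present_slope = (search_rt(16, "conjunction") - search_rt(4, "conjunction")) / 12
absent_slope = (search_rt(16, "conjunction", False) - search_rt(4, "conjunction", False)) / 12
assert absent_slope == 2 * present_slope
```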

For visual stimuli, the feature maps contain information about the location in the visual field. The processing of these feature maps is done early and all at once (in parallel). During conjunction search, information from each feature map must be compared or combined using attention to investigate each item. The feature integration theory also solves the so-called binding problem. The binding problem has to do with the way perceptual features are bound together into one whole representation of an object.

This idea is supported by studies of illusory conjunctions of features in a display. People sometimes mix up features of a display, such as seeing a green X in a display of green items and black Xs. Participants report more illusory conjunctions for items they did not pay attention to; apparently, attentional focus helps to correctly bind the separate features of items. This model of visual search implies that there is a difference between conjunction search and the search for pop-out items.

However, research suggests there is no sharp dichotomy in processing between these two types of visual search. The way attention works is more flexible and complex than the feature integration model suggests. The guided search model can explain the similarities in processing. Here, two components determine the allocation of attention: a map activated by stimulus factors, and a map activated by top-down processes. Visual inputs are filtered by feature-tuned subsystems to make feature maps.

This is just like in the feature integration theory. Some items are more salient than others; these items stand out in saliency maps. Bottom-up and top-down activation maps combine into a general activation map that reflects the likelihood of target presence across the display. The combined map guides the search. Attention is focussed first on the most salient item of the activation map, then on the second most salient item, and so on until the target is found.
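As a rough sketch of the guided search idea, a bottom-up map and a top-down map can be combined into one activation map that determines the order in which attention visits the items. All numbers and weights below are invented for illustration:

```python
# Guided search sketch: a bottom-up saliency map and a top-down map
# are combined into one activation map; attention then visits items
# in descending order of activation. All values are illustrative.

def combine_maps(bottom_up, top_down, w_bu=0.5, w_td=0.5):
    return [w_bu * b + w_td * t for b, t in zip(bottom_up, top_down)]

def search_order(activation):
    # Indices of items, most activated first.
    return sorted(range(len(activation)), key=lambda i: -activation[i])

bottom_up = [0.2, 0.9, 0.1, 0.4]   # stimulus-driven salience per item
top_down  = [0.7, 0.2, 0.3, 0.6]   # goal-driven relevance per item

activation = combine_maps(bottom_up, top_down)
print(search_order(activation))     # item visited first, second, ...
```

Here the item that is both fairly salient and task-relevant wins, even though it is not the most extreme on either map alone, which is the point of combining the two sources of activation.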

Event-related fMRI studies have shown that most areas in the dorsal frontal and parietal cortex are involved in endogenous attentional control. Once the target is found, this activity is inhibited. When a display contains a pop-out item as target, a negative wave called the N2pc is elicited around 250 ms over the parietal and occipital regions. This activity may reflect control processes related to the shifting and focusing of attention, as well as the accompanying changes in processing in the visual sensory cortex.

Attention as regulator of interacting brain regions

The earliest model of attention in the brain is from the 1980s. The brain network controlling attention contained a frontal, parietal and limbic component. The brainstem modulated overall arousal.

In the 1990s, another model was proposed. This model had two main systems: the anterior attentional system for monitoring the environment and detecting targets, and the posterior system for orienting in the environment. Later, this model was altered. The brain had three systems, each using its own neurotransmitter: 1) the alerting system using the neurotransmitter norepinephrine, 2) the orienting system using acetylcholine, and 3) the executive system using dopamine.

In the early 2000s, psychologists proposed a model in which brain regions control attention using two systems. The cortical control of attention can be divided into two main systems carrying out different functions, but interacting extensively (see Figure 7.13 on page 225). One system, consisting of the intraparietal cortex and the superior frontal cortex, is involved in preparing and applying endogenous, goal-directed selection of relevant stimuli and responses.

The other system consists of the TPJ and the ventral frontal cortex and is involved in the detection of behaviourally relevant stimuli, especially unexpected or salient ones. This last system is thought to be involved in exogenous attention. The model implies that the actual attentional orienting is done by dorsal frontal and parietal areas (FEF and IPS), although their activation may originate in different ways. In this model, exogenous and endogenous attention can thus have different origins.

Interactions inside the Attentional System

The idea is that attentional control rests on interacting networks of brain areas working together to improve neural processing. The attentional orienting seems to start in the frontal component (FEF), followed by the IPS in the parietal lobe. These signals will come together in the proper sensory cortices. It seems that sensory-related activation is needed in order to activate attentional-related responses in the frontal cortex.

TMS studies confirmed the idea that the areas in a frontoparietal network are needed for coordinating and modulating the sensory cortices. This way, sensory input can be processed efficiently. The default-mode network is a network of brain areas that become active when someone is not paying attention to a task. When a task is too demanding, attentional lapses may occur.

Default-mode network

Based on neuroimaging studies, researchers proposed a default level of activity in some brain regions. In these neuroimaging studies, activity during rest is subtracted from the activity during cognitive tasks to observe the relative amount of activity. These subtractions can reveal regions that are less active during cognitive tasks than during rest. Researchers therefore proposed that the brain might have two modes: one ‘busy’ mode during cognitive tasks, and one ‘default’ mode during rest. The reason for the existence of this default mode is unknown.

Generality of attentional control systems

It is unclear to what extent the attentional systems influence our daily lives. Neuroimaging studies showed that some attentional networks influence general functioning, while other networks influence specific functions. For example, supramodal attention (attention in more than one modality) activates nearby brain regions in the dorsal frontal and parietal regions. The parietal cortex plays a major role in executive control in general.

The frontal cortex is also involved in general executive control, such as coordinating other brain processes and working memory. In daily life, eye movements towards objects are associated with attentional shifts towards those objects. In the brain, the control circuits for saccadic eye movements overlap with the control circuits for covert attention. In sum, it seems that a general control system uses the same brain areas for specific tasks. These tasks elicit specific patterns of activation in the general attentional control systems.

Attention, arousal, and consciousness

Attention is related to arousal. We are more aroused at specific times of the day, such as in the morning, and can attend to stimuli more easily at those times.

The sleep cycle shows a specific pattern of stages of activation, as seen in Figure 7.16 on page 232. The EEG shows specific patterns of activation during each of these sleep stages. Arousal and the general feeling of wakefulness are generated in the area near the pons and the midbrain. The areas involved in the sleep stages are the reticular activating system, the locus coeruleus (associated with noradrenaline) and the raphe nuclei (associated with serotonin). The areas and their specific neurotransmitters are depicted in Figure 7.17 and Table 7.1 on page 233. These areas control the whole spectrum of arousal, from being alert to deep sleep. They are in turn controlled by the hypothalamus, which generates the day-night cycle.

In general, we use three definitions of the term consciousness:

  • Wakefulness: being awake and not asleep or in coma.

  • Awareness: being aware of what’s happening in the world around you.

  • Self-awareness: being aware that you have a distinct self from other people in the world.

Wakefulness is easy to investigate. However, awareness and self-awareness are hard to define and to investigate. Animals can show wakefulness and sometimes awareness, but only a few animals show self-awareness like humans do.

Scientists try to find the brain areas responsible for these aspects of consciousness. The standard procedure is to design experiments in which targets shift in and out of attention. One way to investigate consciousness is with binocular rivalry. Inputs to the separate eyes compete in the brain for attention. They do not blend; only one image at a time is seen. The subject can shift between the images by shifting attention to the other image. Another paradigm used in consciousness experiments is the prolonged presentation of tilted lines. Afterwards, ‘neutral’ lines seem to be slightly tilted compared to their actual orientation.

Even when participants are unaware of the exposure to the tilted lines, they still report the neutral lines as tilted. This finding shows that low-level visual processing does not require awareness. Another idea is that perceptual awareness comes from longer-latency return of activity in the relevant sensory processing regions. This activity is called re-entrant or recurrent neural activation. Re-entrant activation supports the concept that perception is partly generated in the appropriate sensory cortices, in combination with higher-level processes.

Clinical evidence for the combination of perception and awareness comes from studies on blindsight. Patients with damage to the visual sensory cortex report that they can’t see objects, but they can correctly guess simple features of objects around them. Patients with blindsight show activation in the extrastriate areas when presented with stimuli. Split-brain patients also provide useful information on awareness. The corpus callosum of these patients was cut in order to reduce epileptic seizures. After this procedure, the hemispheres can no longer communicate with each other.

Patients can perform commands presented to the language-impaired right hemisphere, but confabulate (make up) the reason for their action using their left hemisphere. People with amputations can sometimes experience a phantom limb: they feel their limb and are aware of it, although they know it has been removed. People in the lowest stage of awareness are said to be in a coma.

Comatose patients remain unconscious and are unresponsive to sensory stimuli. Their brainstem and other deep brain structures are damaged and hold back sensory stimulation from reaching higher-level brain structures. Comatose patients can recover their consciousness after some time. If this does not happen after a year or so, the patient is said to be in a persistent vegetative state. A flat EEG signal indicates irreversible brain death: the brain trauma is so severe that the brain can no longer initiate activation.

8. Memory

All forms of cognition depend on memory to some degree: perception, attention, emotion, decision making and self-awareness. Memory is often divided into working memory and long-term memory, and into declarative and non-declarative memory. The case of H.M. is an example of how these last two kinds of memory are separated: H.M. couldn’t store declarative memories, but could learn skill-based tasks such as mirror drawing.

Phases, processes, systems, and tasks

Memory is the process whereby the nervous system obtains information from experiences, holds this information, and eventually uses it to guide behaviour and planning. The first phase of memory is encoding: experiences enter and alter the nervous system. The alterations are called memory traces: observable changes in the strength and/or number of synaptic connections between neurons. Storage is the preservation of memory traces over time; long-term maintenance requires consolidation of the memory trace. Retrieval is the accessing of stored memories (and memory traces), which is associated with executing particular behaviour and with remembering. Learning is sometimes used as a synonym of encoding. However, researchers also use it to describe gradual changes in behaviour as a result of training. Learning then refers to the improvement of performance due to the combined effects of encoding, storage and retrieval.

Researchers distinguish memory systems: groups of memory processes and their associated brain areas that interact to mediate performance on a group of related memory tasks. Memory tasks, however, rarely use only one memory system. The case of H.M. shows that different memory systems rely on different areas of the brain. Early research distinguished between short-term memory (seconds or minutes) and long-term memory (hours, days or longer). Short-term memory was later reconceptualised as working memory, which includes not only retention but also attention and the processing of information.

The working memory facilitates the maintenance and manipulation of information for a short period of time. The long-term memory keeps information accessible for longer periods of time. It consists of the declarative memory (or explicit memory) and the non-declarative memory (or implicit memory). The declarative memory is divided into the episodic memory for events and the semantic memory for facts.

A taxonomy of memory systems can be found in Figure 8.1 of page 248.

Memory systems

Amnesia is severe memory loss. It can be caused by brain damage, but also by psychological trauma. Most people have no memories of their early childhood: childhood amnesia. Anterograde amnesia is the loss of memories after the damage. Retrograde amnesia is the loss of information before the damage. Losing memories from before and after the brain damage happens as well.

Severe anterograde amnesia occurs when the medial temporal lobe has suffered from bilateral damage. Patients with this damage may suffer from some retrograde amnesia as well. They can’t make new memories. Working memory is spared. Patients with a damaged left temporoparietal cortex show the exact opposite clinical picture: their working memory is impaired, but their declarative memory remains normal.

Patients with bilateral damage to the medial temporal lobe show impairments in the declarative memory, but not in their non-declarative memory. Patients with damage to the right occipital brain areas show the opposite: their non-declarative memory is impaired. Although some specific brain areas are involved in the different memories, they sometimes share the same area as well.

Non-declarative memory operates without conscious awareness. It consists of three forms. The first is priming: a change in stimulus processing due to a previous encounter with a similar stimulus. The second form is skill learning: the improvement of performance due to training. The last form is conditioning: forming associations between stimuli and responses. Priming requires only one encounter with the stimulus; skill learning and conditioning require multiple encounters.

Priming is a change in the efficacy of stimulus processing due to a previous encounter with a similar stimulus. For example, people tend to fill in incomplete words with words they encountered earlier. Participants must be unaware of this effect; if not, the test can be contaminated by explicit memory strategies. Priming can be categorised based on the relationship between the preceding stimulus that causes the priming effect (the prime) and the second stimulus (the target).

In direct priming (or repetition priming), prime and target are the same. In indirect priming they are different. A well-known form of indirect priming is semantic priming, in which the prime and target are semantically related (for example cow and horse). Direct priming can be categorised into perceptual priming (such as word completion) and conceptual priming (for example, generating words in the category ‘farming’). See Figure 3 in this summary.

Perceptual priming depends on different brain systems than declarative memory; it depends mainly on the sensory regions of the cortex. Perceptual priming is reduced when the format of the test stimuli changes between encoding and retrieval (a so-called study-test format shift). For example, when words in the word completion task are substituted by symbols, the perceptual priming effect is reduced. The greater the perceptual change, the greater the reduction of perceptual priming.

This does not happen in episodic (declarative) memory. However, when the study-test manipulation is conceptual rather than perceptual, episodic memory retrieval is affected: deeper conceptual processing improves it. This effect is called ‘levels of processing’. Levels of processing does not affect perceptual priming. This double dissociation between perceptual priming and episodic memory is strongly supported by research: they depend on two different brain systems. Perceptual priming depends on the sensory cortices and is accompanied by reduced neural responses. Repetition suppression is the observation that primed stimuli produce weaker hemodynamic responses. For instance, the bilateral fusiform areas show a weaker response to objects that are shown twice.

Interestingly, the priming effect in the right fusiform gyrus disappears during study-test shifts, whereas the left fusiform gyrus shows no change in response. Apparently, the left fusiform gyrus stores abstract object representations. There are several theories that explain the relationship between reduced neural activity (repetition suppression or neural priming) and the enhanced processing of a stimulus (behavioural priming). The sharpening theory states that neurons that carry critical information continue to respond vigorously when the stimulus is repeated, while neurons that transmit non-essential information respond to a lesser extent. This leads to the reduced hemodynamic response in the brain area.

Conceptual priming reflects prior processing of conceptual features. It is sensitive to conceptual manipulations, but not to perceptual manipulations, and it doesn’t require consciousness. Perceptual and conceptual priming probably depend on different brain areas. Conceptual priming depends on the lateral temporal and prefrontal cortices, especially the anterior portion of the left inferior frontal gyrus.

In semantic priming, the prime and target have different names but are semantically related, for instance “tree” and “apple”. Semantic memory is conceptualised as a network in which each node represents a concept and each link represents an association between two concepts (Figure 8.9 on page 256). When a node is activated, the activation spreads through the network, depending on the strength of the connections between the concepts. This is known as spreading activation. In the brain, neurons represent the nodes. The main brain areas associated with semantic priming are the left anterior temporal cortical areas.
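Spreading activation can be sketched as a weighted graph in which activating one concept node passes a fraction of its activation along each link. The network, link weights, and decay factor below are invented for illustration:

```python
# Spreading activation sketch: activating one concept node spreads
# activation to connected nodes in proportion to link strength,
# decaying with distance. The network and weights are invented.

network = {
    "cow":    {"horse": 0.8, "milk": 0.9},
    "horse":  {"cow": 0.8, "saddle": 0.7},
    "milk":   {"cow": 0.9},
    "saddle": {"horse": 0.7},
}

def spread(start, decay=0.5, steps=2):
    activation = {start: 1.0}
    frontier = {start: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for neigh, weight in network[node].items():
                gain = act * weight * decay
                if gain > activation.get(neigh, 0.0):
                    activation[neigh] = gain
                    nxt[neigh] = gain
        frontier = nxt
    return activation

act = spread("cow")
print(act)  # directly linked concepts end up more active than distant ones
```

Activating “cow” primes its direct neighbours (“horse”, “milk”) more strongly than a concept two links away (“saddle”), which mirrors why closely related words show larger semantic priming effects.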

Repetition enhancement is the opposite of repetition suppression. Priming normally represents a modification of stored representations. However, for stimuli without stored representations, priming may require the creation of new representations. Modification of stored representations might cause repetition suppression, while the creation of new representations might cause repetition enhancement. The repetition suppression effect then reflects access to a sharper representation, while repetition enhancement reflects the creation of a new memory representation. The support for this idea is not very strong.

Learning skills

Skill learning needs practice before the changes in behaviour can be observed. Just like priming, skill learning doesn’t depend on the medial temporal lobes and is preserved in amnesia patients. It depends on the interaction between the neocortex and subcortical structures, such as the basal ganglia. Motor, perceptual, and cognitive skills are closely related to each other.

Motor tasks can be divided into two categories. Motor sequence learning tasks focus on the incremental acquisition of movements into well-executed behaviour. Motor adaptation tasks focus on the process of compensating for environmental changes. The most studied motor sequence task is the serial reaction time (SRT) task. This task consists of several trials, in which a button has to be pressed corresponding to a location or colour.

There is a hidden sequence in these trials. Normal participants learn this sequence, whether consciously or not. Motor sequence learning is preserved in amnesia patients, but not in patients with basal ganglia disorders, such as people with Parkinson’s or Huntington’s disease. Implicit learning shows activation in the basal ganglia of healthy people. The basal ganglia control the starting and stopping of actions by mediating between motor and premotor cortices. The cerebellum controls coordination and corrects errors in execution.

The posterior parietal cortex is involved in guiding movements towards the perceived place in the space around the person. There are two phases of motor sequence learning. In the early learning phase, performance improves quickly within a single practice session. In the advanced learning phase, performance improves slowly over multiple practice sessions. The length of the phases depends on the difficulty of the task. Different brain regions are involved in the early and advanced learning phases. The transition from the early to the advanced phase involves a shift from global associative regions to motor-related areas of the basal ganglia and cerebellum. To test motor adaptation, researchers alter the environment of the subject. This can be done with prism spectacles, which shift the visual image. People initially find it difficult to adjust, but they learn quickly during practice.

Perceptual skill learning is improving the processing of perceptual stimuli that are similar to stimuli that have been encountered earlier. Language is a form of perceptual skill learning: we read and speak without conscious effort. Perceptual skill learning can be studied using simple sensory discrimination tasks. One popular task is the mirror reading task, in which participants read geometrically altered text. Another way to examine perceptual skill learning is to teach participants meaningless objects (called ‘Greebles’). Classifying and identifying Greebles activates the fusiform face area (FFA). Activation of the FFA is also associated with other types of familiar objects. Acquiring skills in real life involves both motor and perceptual skill learning. Skill learning regularly involves a shift from perceptual to motor processes.

Cognitive skill learning is learning to solve a problem using cognitive abilities. An example is the weather prediction task, where subjects play weather forecaster. They have to predict the weather based on four cards. Every card has a certain probability of rain (or sunshine). The subject has to correctly predict the weather, based on the learned probabilities of the cards. This relationship can only be established gradually, over many trials. Amnesia patients (with damage to the medial temporal lobe) do learn to correctly predict the weather over time. However, people with Parkinson’s disease (with damaged basal ganglia) can’t learn the pattern. Neuroimaging studies likewise show activation of the medial temporal lobe during explicit learning and activation of the basal ganglia during gradual cognitive skill learning.
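A toy version of a probabilistic learning task like this can be sketched as follows. The card probabilities and the simple incremental learning rule are illustrative assumptions, not the actual parameters of the weather prediction task:

```python
import random

# Toy probabilistic learning sketch: four cards, each with its own
# probability of predicting rain. A simple incremental learner
# gradually tracks each card's rain probability from trial feedback.
# All probabilities and the learning rate are illustrative.

RAIN_PROB = {"card1": 0.8, "card2": 0.6, "card3": 0.4, "card4": 0.2}

def run(trials=5000, lr=0.05, seed=1):
    random.seed(seed)
    estimate = {c: 0.5 for c in RAIN_PROB}   # start unsure about each card
    for _ in range(trials):
        card = random.choice(list(RAIN_PROB))
        rain = random.random() < RAIN_PROB[card]   # probabilistic outcome
        # Nudge the estimate toward the observed outcome (gradual learning).
        estimate[card] += lr * ((1.0 if rain else 0.0) - estimate[card])
    return estimate

est = run()
print(est)  # after many trials, estimates approximate the true probabilities
```

The point of the sketch is that no single trial reveals the cue-outcome structure; only slow, incremental updating over many trials does, which is why this kind of learning can survive medial temporal damage but not basal ganglia damage.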

Conditioning

Conditioning is another form of non-declarative learning. It is a response elicited by a neutral stimulus that is normally executed in response to an appetitive or aversive stimulus. There are two forms of conditioning: classical and operant. In classical conditioning, an innate reflex is detached from its normal triggering stimulus and coupled to an unrelated stimulus. The unrelated stimulus becomes effective through association. The innate reflex is called the unconditioned response or UR.

The normal stimulus causing the reflex is called the unconditioned stimulus or US. The paired, initially neutral stimulus is called the conditioned stimulus or CS. The reaction to it, established through the association between CS and US, is called the conditioned response, CR, or conditioned reflex. In operant conditioning, or instrumental learning, the association between behaviour and outcome is established through punishment and reward. Early animal research was done by Thorndike and Skinner. Skinner designed the Skinner box: pressing a lever results in a reward for the animal (usually food).

It takes a few training trials to establish the connection between stimulus and behaviour. This process is called acquisition. If the behaviour no longer results in a reward, the association between stimulus and reward fades away: extinction. However, the association remains in memory. In classical conditioning it is important to distinguish between delay and trace conditioning. In delay conditioning, the conditioned stimulus is still present when the unconditioned stimulus starts. In trace conditioning, there is a time interval between the end of the CS and the start of the US, leaving a memory trace in the nervous system.

The neural difference between delay and trace conditioning can mainly be found in the hippocampus. Eye blink conditioning is a classical conditioning paradigm in which a puff of air is blown into the eye. It depends on the interpositus nucleus and the cerebellar cortex. There are two forms of operant conditioning. The first is strategic goal-directed action; the second is pure stimulus-driven habit. Goal-directed actions are sensitive to action-outcome (A-O) contingencies. Stimulus-driven habits are controlled by stimulus-response (S-R) associations. Research shows that goal-directed (A-O) learning is associated with the hippocampus, whereas habitual (S-R) learning depends on the dorsal striatum. However, later research showed that some dorsal striatum areas are involved in goal-directed (A-O) learning as well.

Connectionist models

Connectionist models assume that information processing is a result of parallel processing. These models are therefore also known as parallel distributed processing (PDP) models. Information is distributed over a whole network. The nodes in connectionist schemes are linked in a network, representing a neural network in the brain (for example Figure A on page 271). An input layer receives and distributes incoming information. The hidden layer of the network processes the information from the input layer. The output layer indicates the outcome. The network has several excitatory or inhibitory connections, which activate depending on the incoming information.

These models have the advantage that they specify how conceptually driven processing and data-driven processing may interact. Another strength is that the model contains a built-in learning mechanism: the alteration of connection weights represents stored information that alters the output. Computers can simulate the models using supervised learning algorithms such as back-propagation. Here, the actual output is compared to the desired output, and the weights of the connections are adapted to prevent subsequent errors. Learning can also be unsupervised: nodes that are simultaneously active get a stronger connection weight. This is similar to Hebbian learning (see Memory on cellular level). Another advantage of connectionist models is that when a part of the network is damaged, the knowledge is not lost at once, but slowly fades away. This is called graceful degradation.
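A minimal sketch of such a network trained with back-propagation is shown below. The task (XOR), the architecture, the learning rate and the epoch count are all illustrative choices, not taken from the book:

```python
import math
import random

# Tiny connectionist network (2 inputs, 3 hidden nodes, 1 output)
# trained with back-propagation. Architecture and parameters are
# illustrative; the XOR task is chosen because it needs a hidden layer.

random.seed(0)
N_IN, N_HID = 2, 3
# Connection weights; the extra weight per node is a bias term.
w1 = [[random.uniform(-1, 1) for _ in range(N_IN + 1)] for _ in range(N_HID)]
w2 = [random.uniform(-1, 1) for _ in range(N_HID + 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    hidden = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in w1]
    out = sigmoid(sum(w * h for w, h in zip(w2, hidden + [1.0])))
    return hidden, out

def mean_error(data):
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

def train(data, epochs=20000, lr=1.0):
    for _ in range(epochs):
        for x, target in data:
            hidden, out = forward(x)
            # Compare actual to desired output, then propagate the
            # error back and adjust the connection weights.
            d_out = (out - target) * out * (1 - out)
            d_hid = [d_out * w2[j] * hidden[j] * (1 - hidden[j])
                     for j in range(N_HID)]
            for j in range(N_HID):
                w2[j] -= lr * d_out * hidden[j]
            w2[N_HID] -= lr * d_out
            for j in range(N_HID):
                for i, v in enumerate(x + [1.0]):
                    w1[j][i] -= lr * d_hid[j] * v

XOR = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
       ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
err_before = mean_error(XOR)
train(XOR)
err_after = mean_error(XOR)
print(err_before, err_after)  # training reduces the output error
```

Because the knowledge ends up distributed over all the connection weights, zeroing a single weight after training degrades performance only slightly, which is the graceful degradation mentioned above.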

Memory on cellular level

Neural structures can be connected to memory systems, but memory can also be observed at the cellular and molecular level. All forms of memory seem to depend on modifications in neural connectivity and the strength of synaptic transmission. Another word for memory trace is engram. The American neuroscientist Lashley found that the location of brain damage doesn’t matter much for memory; it is rather the amount of damage that determines the degree of memory impairment. This finding is consistent with the idea that engrams are networks spread out over the cortex. Engrams are distributed over the most relevant parts of the brain. For instance, memory traces for visual information can mainly be found in the visual cortex.

Engrams are formed by Hebbian learning. Memories are stored in the brain in networks of neurons. The Canadian psychologist Hebb called these networks cell assemblies. When presynaptic and postsynaptic neurons are simultaneously active, the synaptic connection between them gets stronger: ‘Cells that fire together wire together.’
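Hebb's rule can be sketched as a weight update proportional to the product of pre- and postsynaptic activity. The learning rate and activity patterns below are invented for illustration:

```python
# Hebbian learning sketch: the connection weight between two units
# grows in proportion to their co-activity ("cells that fire together
# wire together"). Learning rate and activity patterns are illustrative.

def hebbian_update(weight, pre, post, lr=0.1):
    return weight + lr * pre * post

# A repeatedly co-active pair strengthens its connection ...
w_together = 0.0
for pre, post in [(1, 1), (1, 1), (0, 0), (1, 1)]:
    w_together = hebbian_update(w_together, pre, post)

# ... while a pair that never fires at the same time does not.
w_apart = 0.0
for pre, post in [(1, 0), (0, 1), (0, 0), (1, 0)]:
    w_apart = hebbian_update(w_apart, pre, post)

print(w_together, w_apart)
```

Only simultaneous activity strengthens the connection, so repeated co-activation gradually binds a set of neurons into a cell assembly.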

Habituation is a reduced reaction when the same stimulus is constantly repeated. At the cellular level, habituation is a decrease in neurotransmitter release at the synapses between sensory and motor neurons. Sensitization is an enhanced response to the habituated stimulus when it is paired with an aversive stimulus. At the cellular level, sensitization is an increase in neurotransmitter release at the synapses between sensory and motor neurons. Habituation and sensitization are simple forms of memory.

Studies of hippocampal synaptic transmission showed the possibility of long-term potentiation (LTP, Figure 8.24 on page 273). When a neural pathway is stimulated, the postsynaptic neuron later shows stronger responses to input. LTP can last for long periods of time, depending on the locus and the stimulation paradigm. LTP can explain many forms of memory. LTP can be induced even after a single stimulation, just like memories can be formed after one experience. LTP’s long-lasting effects could explain the long persistence of memories. LTP has a degree of specificity that is comparable to the specificity of memories.

Only stimulated synapses show enhancement; unstimulated synapses remain unaffected. LTP has the property of associativity, just like Hebbian learning: weak and strong neural pathways will connect if they are activated simultaneously. Synaptic weakening is the opposite of this process and involves inhibition instead of enhancement. This mechanism is called long-term depression (LTD). It can also last for a long period of time and is specific to stimulated synapses. Behavioural LTP is a change in synaptic efficacy following a normal learning experience. For example, enriched environments seem to improve learning. Mice lacking genes that facilitate LTP have difficulty with learning (for example spatial learning).

Maintenance of all these memory mechanisms requires the synthesis of new proteins to build new traces. This can cause long-lasting structural changes in synapses, sometimes called synaptic consolidation. Learning-related modifications in synapses involve dendritic spines: small bulges on dendritic branches that receive input from other neurons. The spines can change in length, diameter, and shape. Also, completely new spines can grow and existing spines can disappear, depending on nearby activation or the lack of it.

9. Declarative memory

Declarative memory is associated with consciousness. Memories are stored throughout the whole brain, but good storage depends on the medial temporal area. Prefrontal and parietal areas are important for attention and executive control. People with smaller hippocampi due to problems at birth can develop developmental amnesia. These people have a normal working memory and can keep up in education, but have poor episodic memory. Their fact knowledge (semantic memory) remains intact.

Ideas and assumptions

Figure 9.1 on page 282 shows current concepts of memory and their relations. The overview is also integrated in the overview of the memory systems in Figure 3 in this summary. Declarative memory consists of semantic memory (fact knowledge) and episodic memory (event memories). The memories can overlap and act together. The mixed semantic and episodic memories of our own lives are called autobiographical memories. Episodic memory can involve a specific recollection of an event or a vague sense of familiarity. Familiarity is also partly related to semantic memory and priming.

Medial temporal lobes store the “indicators” to locations of memory traces. However, prefrontal and posterior parietal areas also play a minor role. Figure 9.2 on page 283 shows the process of encoding, storing and retrieving memories. In the encoding phase, details of an event are stored in the same areas that are involved in the perception of the event. Sounds get stored in the auditory cortex, images in the visual cortex, and so on. The hippocampus stores a summary representation of the event with the indicators to the locations of the memory traces in the cortices. This summary is called an index. During the storage phase, memory traces can get stronger. This strengthening process is called consolidation.

Sometimes memories may be available but not easily reached. When a so-called retrieval cue is presented to an individual, he or she may enter the retrieval phase. Remembering happens voluntarily or involuntarily. The cue can be internally or externally generated. The retrieval cue accesses the index in the hippocampus and follows the paths to the memory traces. This model can explain why small brain lesions barely impair memory.

Small lesions won’t affect the distributed memory traces. The model also explains selective memory loss when the cortex is damaged, and global memory loss when the medial temporal lobes are damaged. The model can also explain how complete sets of memories come to be accessible without direct hippocampal retrieval. This process is called system consolidation: memories become associated with each other and get connected directly, without the hippocampus as intermediary.
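The index model described above can be pictured as a lookup structure. The sketch below is only an illustrative analogy (the event name, area names, and the dictionary representation are invented, not from the book): the hippocampal index maps an event summary to the cortical locations of its traces, so a small cortical lesion causes selective loss, while losing the index itself causes global retrieval failure.

```python
# Toy analogy for the hippocampal indexing model. All names and the
# dictionary representation are invented for illustration.

# Distributed cortical memory traces, stored per modality-specific area
cortical_traces = {
    "auditory_cortex": {"seagulls", "waves"},
    "visual_cortex": {"sunset", "beach"},
}

# The hippocampal "index": pointers from an event summary to trace locations
hippocampal_index = {
    "beach_trip": ["auditory_cortex", "visual_cortex"],
}

def retrieve(cue, index, traces):
    """Follow the index from a retrieval cue to the distributed traces.
    A lost index (medial temporal damage) means global retrieval failure."""
    locations = index.get(cue)
    if locations is None:
        return None
    # A cortical lesion removes a trace but leaves the others reachable
    return {loc: traces[loc] for loc in locations if loc in traces}

del cortical_traces["auditory_cortex"]  # simulate a small cortical lesion
# retrieve("beach_trip", ...) now returns only the visual details:
# selective, not global, memory loss
```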

Nature of medial temporal representations

Figure A in Box 9A on page 286 gives a systematic overview of the medial temporal system. The fornix is a connection between the hippocampus, the mammillary bodies and the thalamus. The parahippocampal and perirhinal cortices act as relay stations for the hippocampus; they receive input from other cortices.

The cognitive map theory states that the main purpose of the hippocampus is to mediate memory for spatial relations among objects in the environment. The relational memory theory states that the hippocampus acts as a mediator for new memory associations in general. The episodic memory theory states that the hippocampus is important for episodic memory, but not for semantic memory. Indeed, lesions in the hippocampus affect episodic memory more than semantic memory.

Conversely, lesions to the cortex in the anterior temporal lobe impair semantic memory more than episodic memory. Semantic dementia is a variant of frontotemporal dementia and involves a progressive loss of knowledge and language. However, the evidence regarding the function of the hippocampus can be placed in more than one theory. It seems that the hippocampus helps with the recollection of memories rather than familiarity, supporting the idea that it is important for both relational memory and episodic memory.

There are several theories on how the hippocampus and the other medial temporal structures are related. According to the two-process theory, the hippocampus processes relational and spatial information relatively slowly. The perirhinal cortex processes information faster and in an object-based manner. The two structures exchange their information. Three-process theories include the perirhinal cortex (memory for objects), the parahippocampal cortex (memory including spatial arrangements) and the hippocampus.

The perirhinal cortex receives information from the ventral “what” pathway in the brain and the parahippocampal cortex receives information from the dorsal “where” pathway. The hippocampus integrates the information from the perirhinal cortex and parahippocampal cortex, making it the site of domain-general relational memory (this is shown in Figure 9 on page 292). Three-process theories account for evidence linking the perirhinal cortex to familiarity and the hippocampus to recollection. The theoretical distinction between the functions and anatomical structures fits real-world evidence.

Brain areas storing semantic and episodic memories

Semantic memory studies focus on the organisation of memory. Episodic memory studies focus on the reactivation of existing memories. The sensory/functional theory states that conceptual knowledge organises itself depending on the sensory and functional properties of objects. So the memory representation of grass is stored in the visual cortex (green), the olfactory cortex (smell), and so on.

The sensory/functional theory is an instance of embodied cognition. According to the domain-specific theory, concepts are organised by semantic categories rather than single properties. Grass is placed in the ‘plant’-category, which has its own niche in the brain. Both theories are partially right and can be integrated into one general theory.

The memory model postulates that the retrieval of recent episodic memories involves access to hippocampal indices to activate the cortical memory traces. Memory theories predict reactivation of the encoding processes as well. Transfer-appropriate processing refers to the overlap between encoding and retrieval processes: if we want to retrieve a particular memory, the cognitive operations at retrieval must be similar to the operations performed during encoding of the memory.

Encoding, retrieval, and the prefrontal cortex

The subsequent memory paradigm compares encoding activity of remembered and forgotten items in a memory test. The left inferior frontal gyrus shows more activity during the encoding of remembered words than during the encoding of forgotten words. The same pattern can be seen in the medial temporal lobe and the dorsal parietal lobe. This difference in encoding activity is called a subsequent memory effect (SME). SMEs are typically found in the inferior frontal gyrus, but not in the middle frontal gyrus. The middle frontal gyrus organises and manipulates information in working memory, as a preparation for episodic encoding in the inferior frontal gyrus.
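The logic of the subsequent memory paradigm amounts to a simple contrast. In this hypothetical sketch (all numbers invented), the SME is the difference in mean encoding-phase activity between later-remembered and later-forgotten items:

```python
# Hypothetical sketch of a subsequent memory effect (SME): the numbers
# are invented, standing in for per-item encoding-phase signal (e.g. fMRI).
encoding_activity = [1.2, 0.8, 1.5, 0.6, 1.1]
later_remembered  = [True, False, True, False, True]

def subsequent_memory_effect(activity, remembered):
    """Mean encoding activity for remembered minus forgotten items."""
    rem  = [a for a, r in zip(activity, remembered) if r]
    forg = [a for a, r in zip(activity, remembered) if not r]
    return sum(rem) / len(rem) - sum(forg) / len(forg)

sme = subsequent_memory_effect(encoding_activity, later_remembered)
# a positive SME: higher encoding activity predicted later remembering
```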

Memory retrieval begins with a retrieval cue, triggering a memory search, which leads to the recovery of specific stored memory traces. During this process, a monitoring process rejects incorrect memories. The described process is called the retrieval mode: consciously focussing attention on finding memories. During an item recognition test, participants have to indicate whether or not they encountered certain items earlier in the test. The types of reactions are listed in Table 1 of this summary. In a recall test, participants must produce the studied words themselves, without the words being presented again. There are two sorts of recall tests: free recall (recalling all the words) and cued recall (recalling certain words or word pairs). During a source (or context) memory test, participants must retrieve the spatial, sensory, or semantic details of items.
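The response types in an item recognition test follow mechanically from crossing the item's actual status with the participant's response. This minimal sketch assumes the standard four labels (hit, miss, false alarm, correct rejection), which is presumably what Table 1 of this summary tabulates:

```python
# Classifying recognition-test responses into the four standard outcomes.

def classify(was_studied, said_old):
    if was_studied:
        return "hit" if said_old else "miss"
    return "false alarm" if said_old else "correct rejection"

# Hypothetical trials: (item actually studied?, participant says "old"?)
trials = [(True, True), (True, False), (False, True), (False, False)]
outcomes = [classify(studied, response) for studied, response in trials]
# → ["hit", "miss", "false alarm", "correct rejection"]
```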

Lesions in the frontal lobe typically result in mild declarative memory problems, often related to the strategic demands of the task. This is why recall and source memory tests seem to be impaired more than recognition memory tasks. Source memory impairments in people with frontal lobe lesions may reflect problems in encoding processes. Confabulations are stories that are made up by a patient. They are often not caused by lesions in the prefrontal areas, and can be found in many disorders.

ERP studies of episodic memories

There are three consistent differences in the ERPs elicited by old versus new items. First, the FN400 effect (FN = frontal negativity) shows stronger negativity for new items. Second, old items cause a more positive voltage over the parietal area. This effect is called the left parietal effect. Lastly, old items can elicit more positive activity in the right frontal area. This is called the right frontal effect. It seems to reflect post-retrieval monitoring processes.

Encoding, retrieval, and the posterior parietal cortex

The posterior parietal cortex plays a role in the retrieval and encoding of declarative memories.

Activations based on familiarity and low-confidence recognition are associated with the dorsal parietal cortex. Activations based on recollection and high-confidence memories are more common in the ventral parietal cortex. The attention to memory (AtoM) model is based on the distinction between the dorsal and ventral attention systems. According to this theory, the dorsal parietal cortex mediates top-down attention processes, driven by memory search. The ventral parietal cortex mediates bottom-up attention processes, driven by salient memories or retrieval cues. An alternative account of ventral parietal functioning states that this area maintains retrieved multisensory episodic information within working memory.

Dorsal and ventral parietal regions show different activation patterns during encoding. The ventral parietal regions associated with retrieval success (recollection, high-confidence memory) show negative subsequent memory effects: their activity during encoding predicts later forgetting, whereas dorsal regions show positive subsequent memory effects.

Consolidation

Consolidation is the storage of long-term memories: the strengthening of the initially encoded memory traces. Consolidation can happen at the cellular level (synaptic consolidation) and at the system level (system consolidation). Synaptic consolidation is accomplished with the mechanisms of synaptic plasticity. Genes produce proteins, which form the new synapses. Synaptic consolidation can be disrupted with protein synthesis inhibitors. System consolidation takes a longer period of time than synaptic consolidation. It involves changes in brain areas related to specific types of memory. Ribot’s law is associated with system consolidation. Ribot’s law: ‘Memory loss following brain damage affects recent memories more than remote memories.’

The standard consolidation theory states that the memory of a recent event is distributed over multiple cortices. The hippocampus has an index of where the memories are stored and acts as an intermediary. Older events have strengthened memory traces and direct connections between each other. They don’t need the hippocampus anymore. This is shown in Figure 9.21 on page 313. However, the standard consolidation theory does not explain the fact that retrograde amnesia can extend far into the history of an individual when the hippocampus is damaged. The alternative explanation is called the multiple-trace theory. Here, episodic memory always depends on the hippocampus. Every time a memory is reactivated, a new memory trace is stored in the hippocampus. This makes the memory more resistant to hippocampal damage.
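The contrast can be caricatured with a toy simulation. Under multiple-trace theory, each reactivation adds a hippocampal trace, so frequently reactivated (older) memories survive a partial lesion more often, reproducing the temporal gradient of Ribot's law. All probabilities and trace counts below are invented for illustration:

```python
import random

# Toy simulation of the multiple-trace account. A memory is lost only if
# a partial hippocampal lesion destroys every one of its traces; older,
# often-reactivated memories have more traces. Numbers are invented.

def survives_lesion(n_traces, p_destroyed, rng):
    # The memory survives if at least one trace escapes destruction
    return any(rng.random() > p_destroyed for _ in range(n_traces))

rng = random.Random(0)
damage = 0.7  # each trace independently has a 70% chance of destruction
recent = sum(survives_lesion(1, damage, rng) for _ in range(10_000)) / 10_000
remote = sum(survives_lesion(5, damage, rng) for _ in range(10_000)) / 10_000
# remote memories survive far more often, mirroring Ribot's law
```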

There seems to be another factor contributing to consolidation, besides memory reactivation: sleep. During REM sleep (identified with EEG), non-declarative memories are consolidated. During non-REM sleep, declarative memories are consolidated.

10. Emotion

Affective neuroscience refers to the field of cognitive neuroscience concerned with understanding emotions and the brain. Throughout history, scientists have tried to connect bodily reactions to brain functions. This led to the limbic system theory of emotion in the twentieth century. According to this theory, autonomic functions are linked with the medial forebrain. However, this theory does not cover all aspects of emotion.

Posttraumatic stress disorder

Posttraumatic stress disorder (PTSD) can develop after a traumatic event, such as war or rape. It is associated with feelings of fear, helplessness and stress, re-experiencing of the traumatic event, and avoidant behaviour. Sufferers can develop depression and substance abuse as well. Cognitive behavioural therapy can help. Also, some structural differences have been found in the amygdala and hippocampus.

Definition of emotion

Emotion often refers to conscious feelings. However, the subjective nature of emotion makes research on emotion rather difficult. Nowadays, emotion is defined as a combination of feelings, expressive behaviour, and physiological changes. Emotions are considered dispositions toward behaviours that help an organism deal with biologically or individually significant events.

For example, emotions guide body and behaviour during danger: blood flow increases and stress hormones are released. Learning the appropriate behaviour helps us in the future. Emotions are typically triggered by conditions and events in the environment, but also by memories. Emotions include unconscious behavioural and physiological changes. Finally, emotions facilitate social interactions that are useful for survival.

Classification of emotions

Categorical theories of emotion regard emotions as independent entities. These theories often contain a set of basic emotions which are shared with other species. Most theories define anger, sadness, happiness, fear, disgust, and surprise as basic emotions. Complex emotions, however, are learned and culturally shaped. Complex emotions are influenced by language and are developed later in life.

Dimensional theories consider each emotion a point within a complex space with multiple dimensions. Examples of these dimensions are arousal and valence. Arousal is the physiological or subjective intensity of the emotion. Valence is the relative pleasantness of the emotion. Emotions can be represented in vector models: a boomerang-shaped model shows emotions along axes of positive and negative valence, meeting at a common neutral endpoint. The axes can also represent other variables, such as arousal and activation. Circumplex models arrange emotions around the quadrants of a circle. The vector and circumplex models are both shown in Figure 10.3 on page 324.
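A dimensional model lends itself to a simple computational reading: each emotion is a point, and an affective state can be labelled by its nearest named point. The coordinates below are invented; dimensional theories commit only to the axes (valence, arousal), not to these exact positions.

```python
import math

# A toy valence x arousal space. Coordinates are invented for illustration.
emotion_space = {
    "happiness": (0.8, 0.6),    # (valence, arousal)
    "fear":      (-0.7, 0.9),
    "sadness":   (-0.6, -0.4),
    "calm":      (0.5, -0.7),
}

def nearest_emotion(valence, arousal):
    """Label a state by its closest named point (Euclidean distance)."""
    return min(emotion_space,
               key=lambda e: math.dist((valence, arousal), emotion_space[e]))

nearest_emotion(-0.8, 0.8)  # a negative, high-arousal state → "fear"
```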

Component process theories try to capture the flexible and complex nature of emotions. These theories take a cognitive approach, trying to link behaviour, cognition and physiological response. Emotions are related according to the similarity of the cognitive appraisal processes involved. This theory views emotions as organised according to their overlap in activating specific appraisal mechanisms in the brain. These specific appraisal mechanisms differ across individuals, depending on social and cultural context.

Psychophysiology

Psychophysiology relates psychological concepts to measurable changes in the body. The autonomic nervous system takes care of automatic changes in the body. It consists of the sympathetic and parasympathetic nervous systems, which are independent and complementary subsystems (Figure A on page 327). The sympathetic division is mainly associated with functions that are useful during stressful situations; it is associated with fight-or-flight responses. These responses travel to the body via a series of ganglionic relays along the vertebral column. The parasympathetic division is associated with functions that are useful during rest, such as digestion. The ganglia that regulate the parasympathetic system are often situated near the organs themselves.

The enteric nervous system also uses local regulation, and is concerned with independent activity of digestive functions. These three systems need to be in balance. Their input is provided by limbic and prefrontal structures and travels via the hypothalamus to the autonomic motor neurons. The skin conductance and startle responses are both very useful when studying emotions. These responses are part of the peripheral system. The skin conductance response (SCR) is a change in the electrical resistance of the skin, measured with electrodes on the palms of the hands. Sweaty hands indicate emotional arousal. The SCR can also change as a result of a shift in attention, sexual arousal, or other factors. The startle response is a protective reflex of the muscles around the eyes. It is induced by intense and unexpected sensory stimuli.

One of the components of the startle response is a reflexive eye blink. Startle responses are influenced by attention. Hence, the startle reflex can be used to find out how attention is allocated. The attentional influence is modulated by emotions. For example, during scary horror movies, we startle much more easily. Startle is more sensitive to emotion than skin conductance.

Early theories of emotion

The search for neural substrates of emotion is divided into three stages of information-processing.

  • Evaluation of sensory input.

  • Conscious experience of a feeling state.

  • Expression of behavioural and physiological responses.

Neuroscientists have different theories about the relationships between these stages.

The James-Lange feedback theory poses that an emotional stimulus elicits a bodily response, and the perception of that bodily response is experienced as the emotion. See Figure 10.4 on page 329. The theory has two assumptions: 1) a deterministic relation exists between bodily reactions and emotions, and 2) no emotions are felt if there are no bodily reactions, since the bodily reactions are causal. The James-Lange theory is named after its founders: William James and Carl Lange. The two scientists did not agree on the bodily origin of the emotions: Lange emphasised the heart; James argued that other bodily feedback is important as well.

The Cannon-Bard theory is a rebuttal of the James-Lange feedback theory. Cannon and Bard argued that the autonomic nervous system is too undifferentiated to produce the large variety of emotional states we can experience: similar bodily responses accompany different emotions, so a bodily response alone cannot determine which specific emotion is felt. Also, hormonal feedback is too slow to be the cause of abrupt emotions. Finally, there is no hormone that immediately results in a particular emotion. Instead, Cannon and Bard suggested that the automatic fight-or-flight response is coordinated by the autonomic nervous system.

This autonomic nervous system mobilizes the body in preparation for action in an emotionally arousing situation. Cannon and Bard proposed that when an emotional stimulus is processed by the diencephalon (hypothalamus and thalamus), the stimulus is also communicated to the neocortex for the generation of emotions, and to the rest of the body for the expression of emotional reactions. See Figure 10.6 on page 330. The Cannon-Bard theory (or diencephalic theory) of emotion is one of the first parallel processing models of brain functioning. The hypothalamus acts as a mediator between emotional information processing and the activity of the autonomic nervous system.

The neuroanatomist James Papez identified a circuit in the brain, containing areas connected to the hypothalamus, that is involved in emotional processing. The structures within Papez’s circuit lie in the diencephalon and the forebrain: the anterior thalamus, hypothalamus, hippocampus, and the cingulate gyrus. See Figure 10.7 on page 331. At first, it was thought that damage to the circuit caused the Klüver-Bucy syndrome, whose main symptom is the inability to evaluate the emotional and motivational importance of objects, particularly by sight. Only later was it found that Papez’s circuit has little to do with the syndrome.

The limbic system theory is based on the assumption that the brain structures in the medial wall of the forebrain (the rhinencephalon or visceral brain), the amygdala and the orbitofrontal cortex are important for emotion. See Figure 10.8 on page 333 for an overview of the structures in the limbic system. The core of the limbic system is the hippocampus, which is an array of pyramidal neurons. The hippocampus was thought to be the initiator of emotional feelings and the integrator of emotional reactions. However, the limbic system theory is not flawless either.

There are no independent anatomical criteria for defining which brain region is actually part of the limbic system. There isn’t much coherence between the brain structures in the limbic system. Several limbic system structures turned out to be important for primarily cognitive functioning, instead of emotions. Finally, damage to the hippocampus affects memory and spatial cognition, not emotion. Even with its flaws, the limbic system theory has been very influential.

Methods to study the neurobiology of emotion

The right-hemisphere hypothesis states that the right cerebral hemisphere is specialized for mediating emotions. Another lateralisation model, the valence hypothesis, proposes that the left and right hemispheres are specialized for positive and negative emotions, respectively. Positive emotions are more linguistic and fulfil social functions; negative emotions are reactive and survival-related. However, the experience and the expression of emotion are not the same thing. EEG studies have provided some evidence for the valence hypothesis. Relatively greater left prefrontal activity (indexed by alpha-band EEG) accompanies reactions to positive stimuli, while greater right-sided activity accompanies reactions to negative stimuli. People suffering from depression exhibit a reduction of this left prefrontal asymmetry pattern.

The term temperament refers to a habitual emotional reaction, which originates in a person’s personality. Research on individual differences can reveal brain-behaviour relationships that would be overlooked if all data in a dataset were averaged. Other studies have indicated that the prefrontal EEG asymmetry in the alpha band may reflect motivational tendencies of approach and avoidance rather than simply positive and negative valence. Positive emotions are associated with approach behaviour, while most negative emotions are associated with avoidance behaviour. This approach creates a dimensional theory of emotions. Anger is an exception to this general rule: offensive anger elicits approach behaviour and is accompanied by a leftward prefrontal EEG asymmetry, whereas rightward EEG asymmetry is found during defensive anger, when individuals display avoidant behaviour.

Vertical integration models integrate emotional processing across many levels of the nervous system. An example of this is fear conditioning. Acquisition and expression of conditioned fear require the amygdala. The hippocampus processes the context in which the conditioned stimulus appears. The ventromedial prefrontal cortex suppresses fear responses during extinction. Sensory input can be sent to the amygdala in two ways: fast or slow. The fast input route can only make rough sensory discriminations, whereas the more sophisticated perceptual analysis arrives later via cortical pathways.

The rapid pathway can prime the amygdala to process the additional information from the slow cortical path more efficiently. Synaptic plasticity in the amygdala helps form long-term sensory representations that aid the detection of similar threats in the future. This seems to be related to conditioning. People with damage to the amygdala show reduced conditioned fear responses. Moreover, the degree of activation in the amygdala correlates with the degree of physiologically measured fear (often skin conductance). The hippocampus is likely to be involved in contextual fear conditioning, since it is involved in spatial processing and declarative memory.

However, fear can persist, even in safe situations. This is called emotional perseveration. Fear extinction is the “de-learning” of a conditioned fear response. During fear extinction, the subject learns that the formerly conditioned stimulus no longer predicts an unpleasant consequence. Extinction depends on the ability to integrate information in the ventromedial prefrontal cortex (vmPFC).
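The book gives no formal model here, but acquisition and extinction of conditioned fear are often sketched with the Rescorla-Wagner learning rule, a standard model from the conditioning literature (an assumption added here, not from this chapter):

```python
# Rescorla-Wagner sketch of fear acquisition and extinction.

def rescorla_wagner(v, shock_present, alpha=0.3):
    """Nudge associative strength v toward the outcome (1 = shock, 0 = none)."""
    lam = 1.0 if shock_present else 0.0
    return v + alpha * (lam - v)

v = 0.0
for _ in range(20):               # acquisition: tone paired with shock
    v = rescorla_wagner(v, True)
acquired = v                      # near 1.0: strong conditioned fear

for _ in range(20):               # extinction: tone presented without shock
    v = rescorla_wagner(v, False)
extinguished = v                  # decays back toward 0.0 as the
                                  # "de-learning" described above takes hold
```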

The somatic marker hypothesis explains the relation between decisions, the brain, and body signals. The neurologist Damasio argues that the vmPFC guides everyday decision making by indexing links between factual knowledge and the bioregulatory states associated with particular events. When one encounters a similar situation, the relevant somatic markers in the vmPFC get activated, triggering reactivation of the somatosensory pattern, which in turn activates the appropriate emotion. Through these somatosensory patterns, reasoning and decision-making processes are constrained to options that are marked as qualitatively good or bad. This helps an individual quickly make the most optimal decision. Experimental evidence for the somatic marker hypothesis comes from the Iowa Gambling Task, in which participants have to maximise their profit by choosing the appropriate cards.

The subject can choose from four decks of cards: some decks are advantageous in the long term, but increase the profit very slowly. Other decks give large rewards, but are disadvantageous in the long term. See Figure 10.14 on page 342. Healthy subjects eventually choose only the decks with small but profitable rewards. Patients with damage to the vmPFC keep choosing the risky cards. The insula monitors the physiological state of the organism (interoception) and stores visceral and skeletomotor representations of emotional states, which are activated during decision making. The insula also helps control homeostasis in the body. Information about bodily states gets processed in a posterior-to-anterior manner. Thus, the insula is part of a loop between body and emotion. In this respect, the somatic marker theory resembles the James-Lange theory of bodily feedback.
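The qualitative payoff structure of the task can be sketched with invented numbers: risky decks pay large rewards but lose money on average, safe decks pay small rewards but gain on average.

```python
import random

# Invented payoff schedule mimicking the Iowa Gambling Task contrast;
# the amounts and probabilities are not the actual task parameters.

def draw(deck, rng):
    if deck == "risky":
        return 100 if rng.random() < 0.5 else -150   # mean -25 per draw
    return 50 if rng.random() < 0.5 else -25         # mean +12.5 per draw

rng = random.Random(1)
safe_total  = sum(draw("safe", rng)  for _ in range(1000))
risky_total = sum(draw("risky", rng) for _ in range(1000))
# healthy participants drift toward the safe decks; vmPFC patients,
# lacking somatic markers, keep sampling the risky decks
```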

Studies support the idea of neural specialisation for discrete emotions in the limbic and paralimbic brain areas. This is in line with categorical theories of emotion. Some brain regions connect bodily reactions to emotions, such as disgust in the insula and fear in the amygdala. People with damage to the amygdala may lose their ability to recognise fear; more precisely, they lose the ability to attend to the correct facial cues of fear. However, recent meta-analyses have failed to find consistent, observable markers of discrete emotions in the brain. Neuroscientists now use methods such as multivariate statistics to find complex patterns of neural and autonomic activity associated with emotions.

Interactions with other cognitive functions

Emotions influence the way we perceive and attend to sensory information. The prioritisation of emotional stimuli is done either by automatic (involuntary) detection of salient features (a gun pointed towards you) or by a voluntary shift of attention (looking for threats in the environment). Studies examining the automatic detection of fearful stimuli often use masked visual stimuli to elicit fear. Another way is to use the attentional blink paradigm, in which a rapid serial visual presentation of words contains neutral and emotionally arousing words. Emotionally arousing words can be detected at shorter lag times than neutral words.

Patients with damage to the amygdala do not show this improved detection. The amygdala apparently supports a perceptual mechanism that allows emotional stimuli to reach awareness more readily. Other connections between the amygdala and sensory cortices may also explain ways in which perception may be influenced by emotion. Even though emotional stimuli may initially elicit autonomic and neural responses fairly quickly and automatically, attentional and other cognitive functions are activated to further evaluate the stimulus and initiate appropriate behavioural reactions. First, the individual has to attend and orient to the emotional trigger. This often happens automatically, without dorsal attentional control. For example, patients with hemispatial neglect can detect threats and emotionally arousing stimuli in the neglected areas of the visual field.

In addition to these automatic influences, limbic regions influence goal-directed attentional processing. An overview of the attention-emotion interfaces can be found in Figure 10.21 on page 349. The neurologist Mayberg theorised that the anterior cingulate and related regions in the medial and orbital prefrontal cortex maintain a balance between ventral emotional and dorsal attentional functions in normal mood regulation. These activity patterns become skewed in mood disorders. In these cases, there is too much emphasis on somatic emotional processing at the expense of cognitive-attentional operations. Thus, individuals with mood disorders devote too many resources to emotions and have problems with focussing attention.

Emotions also influence the way we memorise things. The term flashbulb memory refers to the vividly detailed memory of an emotionally shocking event. Salient experiences (positive and negative) leave a stronger memory than ordinary events. The memory modulation hypothesis is based on the idea that emotionally arousing events enhance memory in part by engaging systems that regulate the storage of newly acquired information. The memory modulation hypothesis emphasises the role of the amygdala in enhancing consolidation processes in other regions of the brain after an emotional episode.

The arousal dimension of emotion is the primary activator of the amygdala. The influence of the amygdala can be direct, through activation of neurons, and indirect, through hormones. The hormones involved in this form of memory consolidation are epinephrine, norepinephrine and cortisol. These stress hormones are secreted by the adrenal glands. See Figure 10.22 on page 351. Stress hormones act slowly compared with neural signalling, but have their maximum effect during memory consolidation. Cortisol helps memorising in general. The beta-adrenergic blocker propranolol seems to do the opposite: it blocks the consolidation of emotional memories.

The hypothalamic-pituitary-adrenal axis

Stress refers to the psychological and physiological changes that occur in response to a real or perceived threat to homeostasis. The body initiates a complex pattern of endocrine, neural, and immunological activity (stress response) in an attempt to cope with the aversive stimulus (stressor). This attempt is called allostasis. A stress response activates the body’s resources to increase chances of survival. However, long term stress can cause health problems. This phenomenon is called allostatic overload.

The stress response is mediated by three organs: the paraventricular nucleus of the hypothalamus, the anterior lobe of the pituitary gland, and the cortex of the adrenal gland. These organs are referred to as the hypothalamic-pituitary-adrenal (HPA) axis. See the figure on page 352 in Box 10B. Neurons in the paraventricular nucleus activate the pituitary gland, which releases hormones that stimulate the adrenal cortex to secrete cortisol. The limbic forebrain and brainstem regulate the HPA axis. The amygdala is involved in a feedback loop that keeps the HPA axis activated. In contrast, the hippocampus and medial prefrontal cortex (PFC) have inhibitory influences over the HPA axis. Individuals differ in the extent to which they are sensitive to stressful situations. The main factor in the ability to cope with stress seems to be the perceived control of the situation.

Emotion regulation

Emotions are healthy reactions to situations that require adaptive behaviour. People regulate their emotions consciously and unconsciously; some people are better at this than others. Emotions (in humans) are not passive reactions to the environment, but active subjective processes. Cognitive-behavioural interventions train individuals to make new, adaptive responses to emotional triggers, thoughts and reactions.

The psychologist Gross proposed that people regulate their emotions with different strategies according to the timing of the emotional response and the regulatory target. Individuals may change their behaviour patterns to avoid the emotional encounter (situation selection). Individuals can also engage in cognitive reappraisal to arrive at a different interpretation of the meaning of an emotional elicitor. This way, the emotional impact is diminished. On a neural level, cognitive reappraisal of negative stimuli activates the dorsal frontoparietal network and other prefrontal regions. Conscious efforts to change emotional experience change activity in frontolimbic structures.

11. Social cognition

Social neuroscience tries to identify the underlying neural processes by which individuals understand themselves and their relationships to others. Social neuroscience builds on affective (emotional) neuroscience.

Autism

Autism spectrum disorder is a range of neurodevelopmental problems characterised by functional deficits in communication and social interactions, and by repetitive, stereotyped behaviours. The key criteria, as described in the DSM-IV, include:

  • Impairment in using nonverbal behaviours in social interactions.

  • Failure to develop peer relationships appropriate for the individual’s age level.

  • Failure to spontaneously seek to share enjoyment, interests, or achievements with other people.

  • Lack of social or emotional reciprocity.

Researchers study the brains and behaviours of people diagnosed with autism to gain insight in social cognitive functions. For example, people with autism scan non-expressive parts of the face, such as the forehead and cheeks, while most other people scan the face in a triangular manner, looking at the eyes and mouth. People with autism have problems with discriminating emotions and personal characteristics in faces, which affects the way they relate to other people.

Social impairments also include complications in extracting meaning from interactions with others. The inability to follow other people’s gaze can cause problems in understanding intentions, desires, and language. Individuals with autism often fail theory-of-mind tasks, in which participants have to distinguish their own views and beliefs from those of others. People with autism show reduced activation in the fusiform gyrus, inferior temporal gyrus, superior temporal sulcus, and amygdala. People with autism tend to have larger brains, with more white matter in the cerebrum and cerebellum and more cerebral grey matter in the frontal lobes.

The structural differences normalise in later childhood. Behavioural impairments in autism likely result from altered processing and/or connectivity among interacting brain areas that mediate social cognition, communication, and executive motor functions. This may result in an individual being very talented in a particular domain, which is called savant syndrome. In contrast, individuals with autism may have more difficulty integrating parts into wholes and across multiple domains, a weakness referred to as reduced central coherence. This may result in systemising: the tendency to analyse information in a rigid and mechanical manner. Individuals with autism can use training to learn the rules of social behaviour; such training often exploits the tendency to systemise.

The self

In order to establish good social interaction, individuals have to distinguish themselves from others. The ability to consider one’s own being as an object, and thus subject to objective consideration, is called self-reflexive thought. It is a core cognitive ability in humans. Only some animals can pass the ‘mark test’: they remove a dot of paint from their face after seeing it in a mirror, which shows they know that the animal in the mirror is actually themselves. There can also be disorders of the self. For example, fugue states are temporary states of confusion in which self-relevant knowledge is unavailable to consciousness, resulting in uncharacteristic and self-destructive behaviour. People who display multiple identities are said to have dissociative identity disorder. The disorder typically arises after trauma.

Self-reflection requires a redirection of attentional focus from the external environment to internal thoughts and feelings. Two sets of brain regions are involved in these processes: the midline cortical regions, involved in the default mode of brain processing, and the limbic/paralimbic regions, involved in interoception. The default mode provides a basis for the regions that are more active during self-reflection, independent of the sensory environment. These areas might be involved in the stream of consciousness. The areas involved are mostly medial: cingulate, parietal, and prefrontal (such as the vmPFC).

When not only self-reflexive thought but also interoception and empathy are relevant, the anterior cingulate and insula are activated as well. The same emotional images can activate different brain regions depending on whether attention is directed at internal reflexive thinking or at an analysis of the sensory features. When emotions are experienced more deeply and complexly, the response in the anterior cingulate becomes more intense. The insula is involved in bodily sensations, but in self-reflective feelings as well.

Embodiment is the sense of being localised within one’s own body. The psychologist Neisser referred to this aspect of self-knowledge as the ecological self, since the physical body constrains the way individuals interact with the environment. Embodiment supports self-localisation and the construction of an egocentric frame of reference. Humans can change this egocentric frame, for example from a first-person to a third-person view. People can lose this connection between location and body, for example in neglect syndrome, phantom limb sensations, or out-of-body experiences. Two brain regions are involved in embodiment.

The first region is an area of the extrastriate visual cortex, called the extrastriate body area, engaged in visually processing human bodies. Out-of-body experiences are associated with the temporoparietal junction, a multisensory area at the border between the temporal and parietal lobes. The right temporoparietal junction is also involved in mental rotation, whether of one’s own body or of an object in the environment.

Social cues in face and body

Cues about the social context of a situation have to be obtained from the environment of an individual. This enables us to initiate appropriate responses. The human face is very salient and can convey complex social cues. We use these cues to infer the mental state of others. This is called a receptive function. We also use our faces to influence someone else’s thoughts and behaviour.

This is the communicative function of the face. A modern neuropsychological model of face perception (as seen in Figure 11.3 on page 368) includes a core system for the visual analysis of facial information, beginning with the parsing of facial features in the occipital lobe. After this initial stage, face processing is distributed over two parallel but interacting pathways. The ventral pathway (including the fusiform gyrus and inferior temporal cortex) is involved in processing the invariant aspects of faces in order to discriminate the faces of different individuals and to distinguish faces from objects.

A specialised region in the fusiform gyrus, called the fusiform face area, is to some degree specialised for encoding information from faces. After processing in the ventral pathway, the face representations are linked with semantic and episodic knowledge in the anterior temporal lobe. The ventral pathway thus supports person recognition. Another pathway (the dorsal route) involves the superior temporal sulcus (STS). The dorsal route processes dynamic facial features, such as changes in emotional expression and gaze shifts. These features are transmitted to the amygdala and limbic forebrain structures for analysis of emotion, to multisensory areas for the integration of multiple sensory inputs, and to the parietal lobe for spatially directed attention cued by head movements or gaze shifts. The dorsal and ventral pathways interact with each other.

Evidence for partially independent processing of invariant and dynamic facial features comes from studies of patients with prosopagnosia. These patients have suffered damage to the ventral regions of the temporal lobe and can’t identify individual faces; however, they can identify emotional facial expressions. In contrast, patients with damage to the amygdala or STS have difficulty identifying emotional expressions and/or gaze direction, yet can identify individuals by their facial features. Fast decoding of facial movements facilitates quick reactions during personal interactions and helps to predict the actions of others. Interestingly, the whites of the eyes make it easier to determine the direction of gaze; they are also important for detecting a person’s emotional state.

Nonverbal communication includes faces, body movements and gestures. Visual information about the position of other body parts is sent from the visual areas to person-recognition and biological-motion pathways in the temporal lobe. The posterior superior temporal sulcus, particularly in the right hemisphere, discriminates biologically plausible motion from implausible motion. The STS prefers body actions that are meaningful and goal-directed, and shows little activity for random movements. Activity in the STS further indicates when body movements conflict with social expectations.

Information from the STS is sent to attentional and executive systems in the parietal lobe and to premotor areas of the frontal lobe to plan appropriate behaviour; these frontoparietal cortices support attentional modulation and action planning. STS output is also sent to limbic brain structures for further interpretation of the significance of the perceived motion patterns.

Verbal and nonverbal communication is used in social groups to communicate attitudes towards each other and to signal the presence of important stimuli in the environment. The use of body gestures together with facial and vocal expressions to determine how to deal with a situation is called social referencing. During development, social referencing is critical for learning how to behave properly in situations. Joint attention is the distribution of processing resources towards an object cued by another individual. Gaze direction and head and body orientation indicate where an individual is focussing attention. Others can take advantage of this information, for example to attend to objects that are outside one’s own field of view. Social referencing and joint attention require knowledge about how others cue spatial locations in the environment.

Some neurons in the STS are tuned to multiple body features to signal information conveyed by the position of another individual in the environment. A variant of the Posner attentional cuing task, in which subjects have to attend a particular location to respond to a target, uses the position of the eyes to indicate the position of the target. See Figure 11.7 on page 373. Compared to the normal Posner task, which uses arrows, subjects are more willing to follow gaze, even when its predictive value is low. This indicates that gaze following might occur automatically. Valid gaze cues improve target detection performance and lead to larger amplitudes in early ERP components, such as the P1 and N1. This implies that attentional modulation of visual processing happens in the extrastriate regions of the cortex. Invalid cues cause slower reaction times. They also elicit a later enhancement of the P300 component, suggesting a need to update information in working memory in reaction to incorrect information.
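The behavioural side of such a gaze-cueing experiment comes down to comparing mean reaction times for validly and invalidly cued trials. A minimal sketch of that comparison, with entirely made-up reaction times (the trial data and function name below are illustrative, not from the book):

```python
# Hypothetical trial records from a gaze-cueing variant of the Posner task:
# (cue validity, reaction time in ms). All numbers are invented for illustration.
trials = [
    ("valid", 312), ("valid", 298), ("valid", 305), ("valid", 321),
    ("invalid", 356), ("invalid", 349), ("invalid", 362), ("invalid", 370),
]

def mean_rt(condition):
    """Mean reaction time for one cue-validity condition."""
    rts = [rt for cond, rt in trials if cond == condition]
    return sum(rts) / len(rts)

# A positive validity effect means valid gaze cues sped up target detection.
validity_effect = mean_rt("invalid") - mean_rt("valid")
print(validity_effect)  # → 50.25
```

A real experiment would of course use many more trials per condition and test the difference statistically; the point here is only the structure of the validity effect.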

Social categorisation

The perception of individual features is also used to place people into social groups. This categorisation is useful, because it helps us to make sense of large social networks. Categorising individuals involves automatic and controlled processes and is influenced by cultural norms, knowledge of stereotypes, and personal attitudes. Although social categorisation is useful in the evolutionary sense, it also leads to social injustice.

The earliest stages of neural processing are influenced by social category information, such as gender. Most studies investigating this kind of processing use ERPs: subjects typically view different faces and categorise them during an EEG measurement. Ethnic categorisation effects appear in the frontocentral N1 and P2 components, especially when viewing unfamiliar out-group members. Later in time the racial category effects reverse, such that enhanced frontocentral N2 responses are observed when viewing in-group faces, perhaps reflecting greater processing of the individuating features of those faces. These ERP effects are found across many kinds of tasks, not only explicit categorisation tasks.

Perceptual features are not the only route to social categories: interpersonal exchanges can also be affected by stereotypes, prejudice and overt discrimination. People have to overcome their cultural stereotypes before they can guide social behaviour according to their personal attitudes. Racial bias can be measured with the Implicit Association Test (IAT) and the startle eye-blink response. At the neural level, activation of the amygdala is associated with implicit racial bias.

Regions in the fusiform gyrus show enhanced activity for familiar faces, perhaps to facilitate greater individuation of or familiarity with in-group members. Remember that these results have to be interpreted with caution. Brain imaging techniques are correlational: activation of an area does not indicate that a person has a racial bias. Finally, these measures rarely distinguish between activation of semantic knowledge related to cultural stereotypes and a personal, evaluative attitude towards members of a social category.

Researchers have studied the neural mechanisms activated when group stereotypes induce response conflicts and motivate voluntary control over biases. A two-stage model of cognitive control supposes that

  • Anterior cingulate activity reflects continual monitoring of conflict during information processing;

  • Prefrontal regions are subsequently recruited to implement regulatory responses once conflict resolution is needed.

The two-stage model of cognitive control can be combined with ERP components to predict when and how cognitive control is engaged. The error-related negativity (ERN), generated in the anterior cingulate gyrus, is the most important component here; it is sensitive to information-processing conflicts. Individuals who are internally motivated to regulate prejudiced reactions exhibit greater online monitoring of stereotype conflict when a potentially biased response is generated.

Internally motivated individuals have greater motivation to override automatic stereotypes in order to behave in accordance with their own egalitarian principles. Prefrontal responses to out-group faces may also mediate the relationship between implicit racial attitudes and cognitive consequences following an interracial encounter. Individuals with implicit racial biases use greater effort to control reactions to out-group members in part by engaging prefrontal circuitry in the brain.

Other personal characteristics are judged from physical features too; they are inferred from nonverbal behaviour to form broad impressions of individuals. For example, trustworthiness is assessed from facial appearance. Activity increases in brain areas involved in social cognition and emotional evaluation, such as the insula, medial PFC, orbitofrontal cortex, amygdala, and caudate, when people have to judge the trustworthiness of others. First impressions are particularly important in establishing the trustworthiness of others.

Measuring implicit and explicit racial attitudes

The Implicit Association Test (IAT) is often used to measure implicit racial attitudes. Participants are presented with a series of different faces intermixed with positively and negatively valenced words that represent good or bad concepts. Participants have to respond to particular types of faces and to the positive or negative valence of the words. Afterwards, the response mapping is reversed. Implicit racial biases are quantified as the difference in reaction time between the two conditions.
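The quantification just described, the difference between mean reaction times under the two response mappings, can be sketched in a few lines. The reaction times below are invented for illustration; note that the widely used IAT D-score additionally divides this difference by a pooled standard deviation:

```python
import statistics

# Hypothetical reaction times (ms) from the two IAT blocks: the "congruent"
# pairing vs. the reversed, "incongruent" pairing. All numbers are made up.
congruent_rts = [612, 587, 640, 598, 625, 603]
incongruent_rts = [734, 702, 755, 719, 741, 728]

# Simplest IAT measure: the difference between the two mean reaction times.
# A larger positive value indicates a stronger implicit association.
bias_ms = statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)
print(f"bias: {bias_ms:.1f} ms")  # → bias: 119.0 ms
```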

The IAT has good test reliability, but the reason for the difference in reaction time is unclear. The explicit Modern Racism Scale is a paper-and-pencil test to determine whether subjects are motivated to respond to out-groups in an appropriate way. The IAT performance reveals biases in processing that are not shown by explicit measures and can be related to electrophysiological measurements of automatic evaluative processes.

Comprehending others’ actions and emotions

Successful interaction with the social environment is based on linking social and affective cues with behavioural outcomes. In humans, social competence is facilitated by inferring mental states in others, and linking actions of others to their thoughts and feelings.

This capacity is called theory of mind or mentalising. It is unclear whether theory of mind operates through simulation of the behaviours and internal states of other individuals or through schemas of typical behaviours in particular contexts. The philosopher Dennett suggested that social interactions benefit when one assumes that other individuals are motivated to behave in a way consistent with their current mental state, which may differ from one’s own mental state and/or perception of the situation. This is called the intentional stance.

By coding actions performed by others in a way that is related to one’s own actions, the behaviour of others can be interpreted in a meaningful way. The discovery of mirror neurons in the premotor cortex of monkeys stimulated interest in the neural mechanisms underlying action representation. The evidence for mirror neurons in humans is suggestive but not conclusive. In the monkey, mirror neurons are situated in and around an inferior frontal premotor region called F5. They are active not only during one’s own actions, but also during observation of actions performed by others. Mirror responses are stronger when observing goal-directed actions performed by organisms than non-goal-directed actions or goal-directed mechanical actions. Behaviour performed by living organisms is called biological behaviour.

The responses track the general goal of the motor act (such as filling a glass to drink water) rather than the specific features of the movement (such as the orientation of the hand). Mirror neurons have also been found in the parietal cortex that is connected to the premotor area F5. Neurons in the superior temporal sulcus send information about biological motion to the parietal lobe, but do not exhibit activation of mirror neurons.

Mirror neurons do not distinguish between the execution of one’s own actions and the observation of others’. Differentiating one’s own actions from the actions of others and understanding the intentions behind the actions of others requires other cognitive abilities and mechanisms as well. The most important ability is to take another’s perspective (third person) and distinguish that viewpoint from one’s own (first person). Adopting the view of someone else requires disconnection of self-directed thoughts, a shift of attention to the mental and physical states of others, and a decoupling of knowledge of the actual unfolding of events from other people’s perceptions of those events.

Memory is important as well, because prior knowledge about the personality and response patterns of others improves the interpretation of the social context and of the possible responses of others. Neuroimaging studies have compared first- and third-person perspective taking. People often spontaneously take the perspective of others, and doing so activates many brain regions. The complexity of theory of mind makes it hard to formulate an explicit construct, which in turn makes it hard to assign specific functions to specific brain areas. See Figure 11.14 on page 384 for the many regions involved in mentalising tasks, such as the temporal pole, the superior temporal sulcus and the paracingulate cortex. The temporoparietal junction may be concerned with transient inferences based on the current social context, while the medial prefrontal cortex might be concerned with inferences based on longer-lasting traits or qualities.

Studies with primates show that apes have some understanding of the attention of others and of social context. The best way to test theory of mind in animals is to check whether the animal can predict others’ behaviour according to false beliefs. Understanding the false beliefs of others requires making a distinction between knowledge about the correct state of affairs and what another individual falsely believes to be true. There are two types of false-belief tasks: location-change tasks and unexpected-contents tasks. In location-change tasks, an object that was first at one particular location gets moved to another location.

The third party involved in the experiment does not know this. The subject has to indicate where the third party will look for the object, which is the last location where the third party saw the object. In unexpected-contents tasks, an object is placed in a container and, unbeknownst to a third party, replaced by another object. The subject then has to indicate which object the third party thinks is in the container. Humans develop false-belief understanding around the age of four; they then begin to use words such as like, think and know in the correct context. People with autism generally fail false-belief tasks.
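The reasoning a subject must perform in a location-change task can be captured in a toy sketch: the third party’s belief is simply the last location at which they actually observed the object. The names and data structure below are illustrative, not from the book:

```python
# Toy model of a location-change ("Sally-Anne"-style) false-belief task.
# Each event records who was watching and where the object was placed.
def predict_search_location(events, observer):
    """Where will `observer` look? At the last location they saw the object."""
    belief = None
    for watchers, location in events:
        if observer in watchers:
            belief = location  # the observer updates their belief only when present
    return belief

events = [
    ({"Sally", "Anne"}, "basket"),  # object placed while both are watching
    ({"Anne"}, "box"),              # moved while Sally is out of the room
]

# Passing the task means predicting Sally searches the basket, not the box.
print(predict_search_location(events, "Sally"))  # → basket
print(predict_search_location(events, "Anne"))   # → box
```

Answering with the object’s true location (the box) for Sally would correspond to failing the task, which is the typical error of children under about four.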

Most researchers believe that young children and great apes can unconsciously track the mental states of others. However, there is no convincing evidence that young children and apes explicitly represent the mental contents of others. This ability to abstract the mental contents of others in a social context and represent them symbolically is called metarepresentation. Young children need further cognitive development to show this ability; apes can’t explicitly show metarepresentation.

Empathy is the capacity to comprehend another’s emotional experience, which leads to sharing that person’s feelings. Individuals must distinguish their own emotions from those of the other individual and regulate them, and they have to know that the other person’s feelings are the source of the shared emotional state. Empathy is a process by which any emotion is shared according to the social context. Empathy is very important when telling a story or selling a particular product. Sympathy is different from empathy: sympathetic reactions do not involve a sharing of emotional experience.

Empathy has both automatic and controlled components and involves basic social cognitive and emotion-processing mechanisms. As the cognitive and emotional capacities for taking the perspective of others develop later in childhood, more complex forms of empathy are displayed. The psychologist Decety developed a model of empathy that encompasses social and emotional processes, as well as the concept of mental representation. See Figure 11.16 on page 387.

A complete empathic response requires the coordinated operation of four component processes:

  • Emotion sharing. This is based on automatic perception-action coupling and shared somatic-emotional representations in the somatosensory cortex, insula, and anterior cingulate cortex.

  • Self-awareness and the distinction between self and others. Processing happens in the parietal lobes, the prefrontal cortex, and the insula.

  • Mental flexibility to adopt the perspective of other individuals. This activates the medial and dorsolateral prefrontal regions.

  • Emotion regulation. Emotional and somatic states are generated by the executive control mechanisms in the anterior cingulate and lateral and ventromedial prefrontal cortex.

When participants are asked to observe others experiencing an emotion, brain areas such as the insula, anterior cingulate cortex, and secondary somatosensory cortices get activated. These areas are involved in self-awareness, mental flexibility and components of empathy. See Figure 11.17 on page 388. It is generally assumed that empathy elicits helping behaviour and altruistic actions, but additional factors, such as stress, influence such behaviours.

Prosocial behaviour is more likely when individuals are in a positive mood, when the appropriate action is not unpleasant or difficult, and when the person to be helped is not a direct competitor. Sometimes, adults engage in helpful behaviours because they understand the social rewards that result from these actions.

Social bonds

Social signals such as bonding and kinship are important for surviving as a group. Behaviours related to family bonds and social recognition are widespread in the animal world. Mating with non-related individuals of the species is preferred, and these mechanisms support that preference. Oxytocin and vasopressin are involved in these behaviours. The nucleus accumbens is involved in the dopaminergic reward system, which rewards social bonding.

Social competition

Hierarchies are formed through competition with other members of the social group. The hierarchies affect the distribution of resources and the social system. Leaders of the social group are responsible for the well-being of the group. Members who do not adhere to the rules of the group get punished. Social competition has to be balanced by social cooperation to establish the social bonds in a group.

The relative dominance ordering (rank) of individuals has important implications for physical and mental health. Think about people with low social economic status in Western society: they face increased risk of obesity and psychiatric diseases. Both physical and psychosocial stressors contribute to these effects. Humans have a complex structure of social organisation with rank at multiple social scales, such as family, employment, and peer group.

The relationship between rank and stress also depends on the culture of a social group. Both dominant and subordinate members of a group undergo stress due to their rank. Subordinates can be suppressed, while dominant members have to keep their position. Stress can shift among group members depending on how dominance is maintained. Stress is reduced if coping strategies are applied successfully. Rank-induced stress can be measured with a variety of physiological methods, such as hypertension.

In order to keep a rank, an individual has to be both physically and psychologically strong enough to maintain power and status. The enduring preference for having impact on other people is called power motivation. People who express this need for power find the impact on others rewarding and seek to extend their power and social influence. Power motivation can be measured both implicitly and explicitly.

Power motivation is associated with levels of epinephrine and norepinephrine, which promote testosterone production in men. In women, power motivation is associated with higher levels of estradiol. Individuals who score high on power motivation tend to engage in more risky behaviour. The outcome of social competition (dominance contests) influences the secretion of certain hormones. Victory is associated with elevated levels of testosterone in men and estradiol in women, while the stress hormone cortisol is elevated following defeat. These effects are even present in passive social observers.

12. Language

Cognitive neuroscience uses a different definition of language than our everyday definition. For cognitive neuroscience, language is a symbolic system used to communicate meanings, irrespective of the sensory modality employed or particular means of expression. All languages include a vocabulary, grammar, and rules of syntax.

Dyslexia

Dyslexia is a common impairment of the ability to read. Despite normal intelligence, people with dyslexia have problems with reading, with processing speech, and with translating visual to verbal information and vice versa. Thus, people with dyslexia have problems with written language in general. There is no generally accepted treatment for dyslexia, but extra training and effort help most people. The cause and prevalence of dyslexia are unclear; dyslexia probably has several causes. Functional MRI studies revealed that left-hemisphere brain areas are activated during both reading and speaking.

However, the visual word form area (VWFA) in the left occipitotemporal sulcus is only activated during reading. Its response grows as people become more practised readers. The blood oxygenation level dependent (BOLD) signal in the VWFA is weaker in people with dyslexia. Until very recently in human history, most people could not read, so the theory that the VWFA evolved specifically for reading is not plausible. The cognitive neuroscientists Dehaene and Cohen argued that the VWFA is a recycled part of the brain that serves a new, culturally specific function.

Speech

See Figure 12.1 on page 395 for an overview of the generation of speech sounds. Air from the lungs streams through the vocal cords in the larynx (the opening between them is the glottis). Subglottal air pressure forces the closed cords apart, and the Bernoulli effect draws them back together, producing a cycle of oscillation. The frequency of speech is mainly determined by the rate of vibration of the cords, which is controlled by the muscles that adjust the tension of the vocal cords. The fundamental frequency of these oscillations ranges from about 100 to 400 hertz, depending on the size and gender of the speaker. The rest of the vocal tract is the resonating body: it shapes and filters the vibrations of the vocal cords.

The natural resonances of the vocal tract that filter sound pressure oscillations produce speech formants. The shape of the vocal tract can be changed by the pharynx, tongue and lips to produce the formant frequencies. This will generate different speech sounds that humans use in their everyday speech. The source-filter model is a generally accepted model of speech production. The lungs serve as a reservoir of air, the muscles of the diaphragm and chest wall force the air out. The vocal folds then provide the vibration that characterises voiced sounds, such as vowels. The pharynx, oral and nasal cavities and their included structures, such as tongue and lips, serve as filters.

The source-filter model of speech generation can be found in Figure 12.1B on page 395 of the book.
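The source-filter model described above lends itself to a simple numerical sketch: a periodic glottal ‘source’ (here a crude impulse train at the fundamental frequency) is passed through resonant ‘filters’ tuned to formant frequencies. The formant and bandwidth values below roughly approximate an /a/-like vowel; all numbers are illustrative, not taken from the book:

```python
import numpy as np

def resonator_coeffs(freq, bandwidth, fs):
    # Coefficients of a standard two-pole digital resonator (formant filter):
    # y[n] = b0*x[n] + a1*y[n-1] + a2*y[n-2]
    r = np.exp(-np.pi * bandwidth / fs)   # pole radius sets the bandwidth
    theta = 2 * np.pi * freq / fs         # pole angle sets the centre frequency
    a1, a2 = 2 * r * np.cos(theta), -r * r
    b0 = 1 - a1 - a2                      # rough normalisation (unity gain at DC)
    return b0, a1, a2

def apply_resonator(x, b0, a1, a2):
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = b0 * x[n]
        if n >= 1:
            y[n] += a1 * y[n - 1]
        if n >= 2:
            y[n] += a2 * y[n - 2]
    return y

fs = 16000                                # sampling rate (Hz)
f0 = 120                                  # fundamental frequency of the source
t = np.arange(int(0.3 * fs))
# Source: impulse train at roughly f0, a crude stand-in for glottal pulses.
source = (t % (fs // f0) == 0).astype(float)
# Filter: the first two formants of an /a/-like vowel (approximate values).
signal = source
for formant, bw in [(700, 130), (1200, 70)]:
    signal = apply_resonator(signal, *resonator_coeffs(formant, bw, fs))
# The spectrum of `signal` now shows harmonics of f0 shaped by formant peaks.
```

The same cascade with different formant values would produce other vowel-like sounds; real synthesisers add a more realistic glottal waveform and more formants, but the division of labour between source and filter is the same as in the model above.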

The basic speech sounds in a language are called phones, which are perceived as phonemes. One or more phones make up syllables, which in turn form words, which make up complete sentences. There are around 200 different phones (and phonemes), of which about 30 to 100 are used in any given language. People rarely learn to speak a new language with native fluency after about the age of eight, because they have trouble learning the phones of other languages. Phones can be divided into vowel and consonant speech sounds. Vowels are mostly the voiced elements of speech: they are the basic speech sounds generated by oscillations of the vocal folds. Vowels have a tonal quality.

This means they are tones with a pitch. The majority of the acoustic power in speech is in the vowel sounds. Consonant sounds are phones that often begin or end syllables. Consonants are briefer than vowels and are more complex in sound and production. They can be divided according to the site in the vocal tract that determines them (place of articulation) or the physical way they are generated (manner of articulation). Consonants are the main carriers of information in speech: if they are removed, people can no longer understand the message of a sentence.

Although we think of speech as physically divided into words and phonemes, the brain does not analyse speech as such. For example, in acoustic terms, speech shows no distinction between words at all. The neural analysis of speech happens in a more holistic way: speech perceptions are actively created by the human auditory and language-processing systems, not simply neural representations of the physical stimuli that enter the ear. In short, syllables and words are not natural units of speech processing. The psychologist Liberman proposed that our perception of speech sounds actually reflects the underlying “vocal gestures”.

On this view, the neural processing of speech mirrors the vocal tract, which produces a fluent, continuous stream of speech sounds. There are, however, some problems with this view. Vocal-tract changes during speech overlap in time and influence one another; this phenomenon is called coarticulation. Another problem is that the acoustic characteristics of phones vary from person to person, making it difficult to imagine how sounds are associated with configurations of the vocal tract.

Words are combined to make up sentences, which express a complete and meaningful thought. Grammar and syntax are involved in the organisation of sentences. Grammar is the language’s system of rules by which words are correctly formed and combined. Syntax is the more general set of rules about the combination of grammatically correct words. Rules of grammar and syntax are agreements about a language that change over time. Even when sentences are correctly formed, the meaning of the sentence can still be rather ambiguous.

Homonyms are words represented by the same spelling and sound stimulus that have multiple meanings; think of words such as left and row. Homophones are words that are also represented by the same sound stimulus, but differ in meaning and spelling; think of to, two and too. Due to these additional difficulties, understanding the meaning of speech is also deeply dependent on context. These contexts have to be learned; therefore, the understanding of speech inevitably depends on the experience of the listener. This dependence on context can be demonstrated with the McGurk effect.

The speech sound we hear is strongly influenced by the mouth movements we see. Apparently, what the listener hears is not just determined by the sound signals processed by the auditory system, but a more complex construct elaborated by language-processing areas of the brain. TMS and fMRI demonstrated that this integration occurs in the region of the superior temporal sulcus. Speech perception is based on the empirical significance of speech sounds derived from the broader context of the speech signal.

Writing speech

There are three major graphical systems for representing speech in a symbolic manner. Symbols can represent words (as in Chinese), syllables (as in Japanese) or phonemes (as in English or Dutch). These visual symbols are called graphemes. Chinese graphemes that represent complete words are called logograms. The Chinese system is not very practical: a large number of logograms has to be learned to be fully literate. Egyptian hieroglyphs began as logograms but eventually came to represent syllables, which corresponds with the Japanese system of writing syllables. The most flexible system is the one representing phonemes, the basic building blocks of speech. There is, however, no one-to-one correspondence between the letters of a phonetic alphabet and the phonemes used, just as there is no one-to-one correspondence of phonemes to the phones underlying the sounds heard in a language.

Acquiring language

The vocabulary of a language changes constantly. We use only a small part of the huge number of words we learn during childhood or from a dictionary. In addition to learning the meaning of words, we also have to learn grammar and syntax, which are more complicated than the meaning of words alone.

Speech sounds shape and limit the child’s perception and production of speech. Infants can initially perceive and discriminate all of the roughly 200 phones that are used in languages around the world; they are not biased toward any specific phoneme. One of the most thoroughly studied examples of this phenomenon is the phonetic difference between Japanese and English: the Japanese language does not make a clear distinction between the “L” and “R” sounds. After six months, infants already show preferences for their native language, and by the end of their first year they no longer respond to phonetic elements of non-native languages. Interestingly, the “baby talk” that adults use with young children emphasizes the phonetic distinctions of the language, perhaps to help the baby hear its phonetic characteristics. Additionally, the inability to differentiate between phonemes arises only in speech: when the same phonemes are presented in a non-speech context, all adults are equally likely to discriminate them.

The ability to learn another language fluently persists for some years after birth. The earlier the individual comes in contact with another language, the more likely it is that the individual becomes fluent in that language. This fact reflects a generalisation about neural development: neural circuitry is especially susceptible to modification during early development. The period in which an individual is most susceptible to acquiring a specific ability is called the critical period (or sensitive period). When children and adults engage in language-based tasks, different brain areas are activated; this is an example of the neuroplasticity of the young brain. Exposure to and practice with language must occur within the critical period to achieve full fluency in a language.

To learn a language, exposure and practice are essential. They are needed to activate the relevant neural circuits and alter the relative strengths of synaptic connections. Children seemingly have the ability to rapidly map the meanings and sounds of words onto their existing schemas. Adults have a strong tendency to group speech sounds. This tendency is based on core phonemic preferences in different languages and is called categorical speech perception. This categorisation of sounds happens very early in the acquisition process.

There are some cases in which children have been deprived of language exposure. An example is the case of a girl called Genie, who was locked in a small room until the age of thirteen. Genie suffered no brain damage or major personality disorder from her abusive home, but had no language ability. She received training to improve her language skills. Her vocabulary improved, but she could not learn appropriate grammar.

Theories of language

There are some theoretical generalisations about language. The linguist Chomsky explored the idea that all languages share a universal grammar. All humans can learn language and if there is no language available, people develop it spontaneously. These facts are hints toward a general structure of all languages, but no one has found this yet.

Another general approach to the understanding of language focuses on the associational character of language. Apparently, there are links between linguistic concepts, represented by the relative strength of synaptic connections in the brain. A more specific extension of this general idea about the organisation of language in associational networks is a broadly hierarchic structure of related categories, as seen in Figure 12.6 on page 405. This idea of hierarchic structures fits in with connectionist theories and the idea of artificial neural networks that represent concepts in the brain. These networks are changed by alterations of the synaptic connections of the relevant nodes (or neurons) in the networks.
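The hierarchic, associational organisation described above can be sketched computationally. The following toy network is purely illustrative: the concept names, link strengths, decay factor and spreading-activation rule are assumptions made for the example, not the model described in the book.

```python
# A hypothetical hierarchic associational network of concepts.
# Link strengths stand in for relative synaptic strengths; "learning"
# would correspond to adjusting them.
network = {
    "animal": {"bird": 0.8, "fish": 0.7},
    "bird":   {"canary": 0.9, "ostrich": 0.5},
    "fish":   {"salmon": 0.9},
}

def activate(concept, strength=1.0, decay=0.5, result=None):
    """Spread activation from one concept down the hierarchy.

    Each link passes on activation proportional to its strength,
    weakened by a decay factor, so closely related concepts end up
    more active than distant ones."""
    if result is None:
        result = {}
    result[concept] = max(result.get(concept, 0.0), strength)
    for related, weight in network.get(concept, {}).items():
        activate(related, strength * weight * decay, decay, result)
    return result

acts = activate("animal")
# Nearby categories receive more activation than remote ones,
# e.g. acts["bird"] > acts["canary"] > acts["ostrich"].
```

In this sketch, "category-specific" deficits would correspond to deleting or weakening one subtree of links while leaving the rest of the network intact.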

Neural foundations of language

The general features of language are situated in the frontal lobes, which are specialised for the production of speech, and the temporal lobes, which are specialised in language comprehension. This division between the production and comprehension of speech was confirmed early on by clinical-pathological correlations.

The neurologists Broca and Wernicke investigated patients with aphasia, which is difficulty in producing and/or comprehending speech when the vocal apparatus and hearing mechanisms are intact. Difficulty in the production of speech due to a lesion in the area that controls the vocal muscles is called dysarthria.

The neurologist Broca concluded that the loss of the ability to produce normal speech arises from damage to the ventral posterior area of the left frontal lobe. See Figure 12.7 on page 406. Patients with Broca’s aphasia (caused by damage to this ventral posterior region of the frontal lobe, or Broca’s area) can’t express thoughts appropriately because the rules of grammar and syntax have been affected by the lesion.

The rules of language are closely related to the overall organisation of other motor behaviour in the premotor cortex. In Broca’s aphasia, the structure of the language is affected, but it seems that the patient knows what he wants to say. Because only the production of language is affected, Broca’s aphasia is also called production aphasia or motor aphasia.

Broca’s assumption about the location and laterality of lexical and syntactical aspects of language was correct. However, language in general is not restricted to only one brain region. The neurologist Wernicke made a distinction between the production and the comprehension of speech. Some of his patients could produce speech and could read with correct grammar and syntax, but could not understand what they produced or read. Their utterances had no meaning.

Patients with these symptoms had lesions of the left posterior and left superior temporal lobe. See Figure 12.8 on page 408. Damage to the posterior and superior areas of the left temporal lobe causes a deficiency that is now called sensory aphasia or receptive aphasia. Since the lesions are in an area that we now call Wernicke’s area, this disorder is also called Wernicke’s aphasia. Deficits of reading and writing are called respectively alexias and agraphias. In contrast to Broca’s aphasia, the major difficulty in Wernicke’s aphasia is naming the correct objects or ideas, and expressing them in the appropriate relationship.

Speech is fluent and well-structured, but it makes little or no sense because words and meanings are not correctly linked. Table 12.2 on page 408 sums up the differences between Broca’s and Wernicke’s aphasia. When a patient has lesions in the pathways that connect the relevant temporal and frontal regions, more complex forms of aphasia may arise. Patients with these aphasias show an inability to produce appropriate responses to received communication, even if the communication is understood.

However, language is far more complex than these two well-defined areas. The neurologist Geschwind proposed that several other regions of the parietal, temporal and frontal cortices are involved in human language. Moreover, it is not only the left hemisphere that is involved in language. Some of the most interesting findings come from split-brain patients, in whom the corpus callosum and the anterior commissure have been cut in order to relieve epileptic seizures.

These bodies normally connect the left and right hemispheres. The neuroscientist Sperry was one of the first scientists to carry out studies with split-brain patients. Sperry presented information to only one hemisphere and asked questions about it. Patients could name information presented to the right side of the body (left hemisphere), but when information was presented to the left side of the body (right hemisphere), speech ability was far more rudimentary.

Another method to evaluate the hemispheres independently is to present visual information to the right or left visual field too rapidly for eye movements to follow. This method is called tachistoscopic presentation. Studies using this method show that the right hemisphere can react to pictorial instructions and sometimes to written instructions. See Table 12.3 on page 410 for an overview of the major differences between the left and right hemispheres. It seems that the left hemisphere is involved in the more explicit aspects of verbal and symbolic processing, while the right hemisphere is involved in visuospatial and emotional information. However, neither hemisphere is dominant over the other; they are involved in different but complementary functions. Interestingly, Sperry once encountered a patient who was born without a corpus callosum. Both his hemispheres developed language function.

Additionally, some people spontaneously develop their primary language function in the right hemisphere, as do children who had lesions in the primary language areas. As might be expected, Sperry also found that split-brain patients are less able to do tasks in which the brain areas have to work together. Further insight into the ways language is organised in the brain comes from mapping language areas in patients undergoing brain surgery. Neurosurgeons electrically stimulate the cortex during surgery to prevent themselves from damaging vital functions. One of the first neurosurgeons to do this was Penfield in the 1930s.

These techniques have been refined and have yielded a massive amount of information about language function in the brain. Large regions of the perisylvian frontal, temporal and parietal cortices in the left hemisphere are involved in language production and comprehension. However, language localisation varies from patient to patient. For example, the classic speech locations (Broca’s and Wernicke’s areas) are distributed differently over different brains. This is no surprise: the exact location and size of any functional brain system varies among individuals.

While most attention in studies of language goes to the left hemisphere, the right hemisphere is involved in language function as well. The right hemisphere seems to be involved in the emotional aspect of language. Damage to the right hemisphere results in different and more subtle deficits. Such lesions can result in deficits in the emotional and tonal components of language. These components are called the prosodic elements of language.

The prosodic elements convey important additional meaning to verbal expression. Deficiencies in expressing the prosodic elements of language are called aprosodias, and they are generally associated with damage to the right cortical regions that roughly correspond to Broca’s and Wernicke’s areas in the left hemisphere. People with aprosodia speak in a monotone voice, and it is hard to understand the emotional meaning of the things they say. Although it is clear that the left and right hemispheres serve different functions, it is unknown why this is exactly the case.

Cerebral dominance

A popular idea is that the left hemisphere is dominant over the right hemisphere. This idea is reinforced by the lateralisation of language (in the left hemisphere) and the fact that most people are right-handed. In some cultures, being left-handed is seen as a negative characteristic. However, there is no such thing as a superior hemisphere. For example, in most left-handers the primary language areas are in the left hemisphere, just as in right-handers. The lateralisation of language and hand preference is presumably the result of the advantage of having a highly specialised function on one side of the brain. This helps the brain to use the available neural circuitry in the most efficient way.

Studies of language organisation

Psychologists Kutas and Hillyard showed that when people read sentences, their ERP responses depend on the semantic content of the material. The N400 component is enhanced if the presented words are unfamiliar or unexpected. Kutas and Hillyard suggested that the N400 wave reflects a reprocessing of language information that does not make sense. Uncommon words elicit a larger N400 than words that are frequently used in everyday speech, suggesting that processing familiar language information requires less (or more distributed) neural engagement than difficult language information. Homonyms also elicit smaller N400s. Thus, language processing in the relevant cortical areas seems to depend on prior experience with words and contexts. Other neuroimaging techniques such as PET and fMRI showed that language processing indeed happens in the classical language areas (Broca’s and Wernicke’s areas), but other areas of the brain are activated as well.

Other studies using non-invasive brain imaging showed that different neural networks of lexical and conceptual categories get activated in the brain. Some of those neural networks overlap, depending on the category they represent. Apparently, language is organised according to categories of meaning instead of individual words. People with sensory (Wernicke’s) aphasia sometimes only have impairments of one category of words. Some researchers suggested that conceptual or category knowledge is organised according to sensory features (such as colour or motion) as well as according to functional properties (such as location). The so-called domain-specific theory argues that ecological relevance led to the evolution of specialised brain mechanisms for processing categories of objects.

The neural substrate of deaf individuals has become a rich source of information about the language areas in the brain. Studies often investigate whether language areas are specialised in speech processing or in symbolic processing. Linguist Bellugi investigated deaf subjects who suffered strokes in the left or right hemisphere. Patients with strokes in the frontal and/or temporal lobes of left hemisphere had deficits in sign production and comprehension.

This seems comparable to Broca’s and Wernicke’s aphasia. Patients with lesions in approximately the same areas of the right hemisphere did not display these “aphasias”. Instead, typical right-hemisphere language functions such as emotional processing and expression were impaired. This study shows that the language regions of the brain represent symbols, rather than heard or spoken language. In line with this, young infants can learn sign language from an early age, just like speech. Young children practice language by babbling, repeating sounds (or, in sign language, movements) without any meaning. This shows that language is not about speech as such, but about communicating with symbols.

Representing numbers

Humans use numbers to quantify and label objects in the environment. The most familiar forms of number use (such as math in school) depend on explicit symbols and the words used to describe them. Even simple forms of number use require a substantial amount of cognitive processing: people must understand the meaning of the symbols and must retrieve or calculate the answer. The understanding of precise, symbolically represented numbers builds on a fuzzy sense of quantities in general. This is called numerosity: an approximation of the number to be counted. Not only human adults show this ability: many birds and mammals, as well as preverbal children, have this feeling for quantity. The numerosity sense is apparent even when adult humans make relatively precise judgments using Arabic numerals or number words instead of objects. Reaction-time tasks investigating numerical distance and size show that accuracy and latency are modulated by the ratio of the symbolic numerical quantities. This means that people are faster at comparing a 2 and a 12 than a 2 and a 4. Furthermore, when numerical distance is held constant, performance decreases with increasing numerical magnitude.
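The distance and size effects described above can be captured in a simple ratio-based sketch. The formula below is an illustrative assumption (a crude Weber-fraction style measure), not a model from the book: it simply assumes that how easy two numbers are to tell apart scales with their difference relative to their overall magnitude.

```python
# Hypothetical sketch: discriminability of two numerical magnitudes
# scales with the ratio of their difference to their combined size.
def discriminability(a, b):
    """Larger values mean easier (faster, more accurate) comparisons."""
    return abs(a - b) / (a + b)

# Distance effect: comparing 2 and 12 is easier than comparing 2 and 4.
assert discriminability(2, 12) > discriminability(2, 4)

# Size effect: with the distance held constant (2), larger numbers
# are harder to compare.
assert discriminability(2, 4) > discriminability(12, 14)
```

Both effects fall out of the single ratio: either shrinking the distance or growing the magnitudes lowers the ratio and, in this sketch, the ease of comparison.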

Genetic influences on language

Genes play a role in the acquisition of language. Several language and/or reading deficits (such as dyslexia) tend to run in families. A gene on human chromosome 7, called FOXP2, has attracted special interest. In some families, a mutated version of this gene results in problems with the selection of the movements of the vocal apparatus. The deficit impairs motor speech organisation as well as IQ.

The uniqueness of human language

Some species have a sophisticated system of vocal or other symbolic communication. Human language, however, is unique in its kind: it can arbitrarily link phones to a large set of words, and it uses recursive grammar to construct meaningful sentences. Recursive grammar is the ability to embed clauses meaningfully in sentences, and to iterate these additions in such a manner that they still make sense. “He saw the cat” is meaningful, but “She thought that he saw the cat” is meaningful as well, and so on. This progressive addition of, and back-referencing to, such additions is called recursion.

Humans use recursion often and without effort. Context-free grammar is the addition of extra layers of grammar, but without losing the context. The sentence “She thought that he saw the cat” has some context-free grammar. In finite-state grammar, the pattern of the sentence determines the importance of the stimulus. Communication between animals mostly has finite-state grammar, but there are small exceptions.

Learned vocal communication in non-human species

Animals also learn vocal communication during a critical period of development. Many animal vocalisations are innate: they do not need experience to be correctly produced or interpreted. Some birds, however, learn to communicate by vocal sounds. Early exposure and practice are needed in order to develop the correct perceptual and behavioural capacities. Just as in humans, the critical period is early in the bird’s life. Song learning happens in three stages:

  • Sensory acquisition. The young bird listens and memorises song of an adult male of its own species.

  • Sensory-motor feedback stage, in which the bird practices its songs and receives feedback from the adult male.

  • Crystallisation. This happens when the bird is sexually mature and the songs are acoustically stable.

Young birds are sensitive to the songs of other males of their species in the first two months after hatching. Their memory for songs is very strong: a song is retained for months and only needs to be heard a few times in order to be reproduced. Birds also learn their ‘regional dialect’ during this critical period. Birds have a strong intrinsic predisposition to learn the songs of their own species. Birds raised in isolation produce abnormal songs that sound somewhat like the normal songs, but these songs fail to attract mates.

Origins of human language

It is unknown how human language came into existence. Primates show a surprising degree of vocal communication. Monkeys use different signals for different situations, and just like humans, they integrate visual and auditory information in their vocalisations. Studies with apes show that apes can communicate with abstract symbols. However, this usage has to be trained and remains very rudimentary.

13. Executive functions

In order to keep our cognitive function flexible and goal-directed, executive functions modulate the activity of other cognitive processes. The executive function thus plays a regulatory role. The prefrontal cortex is involved in executive functioning.

Environmental dependency syndrome

The French neurologist Lhermitte identified a set of behaviours that are affected if the anterior and medial parts of the frontal lobe are damaged. The patients behave in an inflexible manner: they do not act on their own, but rather react to the surrounding environment. Lhermitte called this inflexibility environmental dependency syndrome. The patients showed two remarkable symptoms. The first symptom is imitation behaviour: patients feel the urge to mimic the people around them.

The other feature of the syndrome is utilisation behaviour. This means that stimuli in the environment immediately trigger behaviour. Utilisation behaviour occurs in more severe cases of the environmental dependency syndrome. Patients with the syndrome lack insight into the causes and consequences of their actions. Their behaviour is solely guided by behaviour that is most strongly associated to the cues in the environment.

Taxonomy of executive function

Executive function is based on rules and control. The rules guide behaviour: they apply to a broad context, but constrain the complex set of actions that are possible. This makes rules abstract and flexible. Control processes help us to follow the correct rule in the correct situation. The highest level of the taxonomy of executive functioning consists of 1) the ability to create and modify rules for behaviour, and 2) the ability to engage the appropriate rule for a particular context.

The rest of the taxonomy can be viewed in Figure 13.1 on page 431. Effective use of rules serves several functions: initiating rules, inhibiting rules, shifting between tasks, and relating rules to each other. Control functions are often considered to include a monitoring function: if the environment changes, the rules should be updated as well, so that behaviour stays adapted to the environment. All executive functions rely on working memory, a capacity-limited short-term information store. Working memory keeps the incoming information and the implemented rules up to date. This taxonomy is only one example of how executive functions can be conceptualised; there are many other theories.

Comparisons of the prefrontal cortex

Although brain size measurement is still an important tool today, investigators have accepted that overall brain size is a poor index of cognitive functioning. Most neuroscientists now investigate the relative size of specific brain areas. There are clear differences between species in the relative sizes of the prefrontal cortices. Although general relations between the relative size of the prefrontal cortex and executive functioning are apparent, the relationship is not one-to-one.

Prefrontal cortex and executive function

The German physician and anatomist Gall was one of the first to propose that specific regions of the brain are associated with specific functions. He thought that these characteristics could be “read” from the skull. Gall’s theory is called phrenology; nowadays, it is no longer accepted. Lesion studies showed that bilateral damage to the prefrontal cortex impairs a broad range of cognitive functions, whereas unilateral damage causes far less impairment. The regions currently thought to be involved in executive functioning are the prefrontal cortex, the posterior parietal cortex, the anterior cingulate cortex and the basal ganglia. See Figure 13.2 on page 434.

The prefrontal cortex comprises the parts of the frontal lobes anterior to the motor and premotor cortices. It is typically divided into several functional regions. The inferior, middle, and superior frontal gyri are usually called the lateral prefrontal cortex. The lateral cortex is divided into the dorsolateral prefrontal cortex and the ventrolateral prefrontal cortex. The ventral surface of the frontal lobes is often called the orbitofrontal cortex, named after its location above the orbits of the eyes. Regions along the ventral midline are called the ventromedial prefrontal cortex. The medial surface of the prefrontal cortex can be separated into anterior and posterior parts. The posterior, dorsal parts of the medial prefrontal cortex are called the dorsomedial prefrontal cortex, which includes the anterior cingulate gyrus.

The most anterior parts of the prefrontal cortex are often called the frontopolar cortex. All regions of the prefrontal cortex have diverse and bidirectional connections with the rest of the brain (Figure 13.3 on page 345). The prefrontal cortex has many bidirectional connections with the thalamus. Generally, the prefrontal cortex is directly connected with secondary sensory cortices. The orbitofrontal cortex is an exception: this region receives information from the taste, olfactory and somatosensory cortices.

The strong link between the posterior parietal cortex and the dorsolateral prefrontal cortex is important, just like the connection between the hippocampus and the whole prefrontal cortex, bidirectional links between the amygdala and the ventromedial regions of the prefrontal cortex, and the link between the ventral tegmental area of the midbrain and primarily ventromedial prefrontal regions. The basal ganglia do not have a bidirectional link with the prefrontal cortex. They only receive information. The role of the basal ganglia in executive functioning is similar to the role in motor functioning.

Perception, language, memory, and motor function remain mostly unaffected when the prefrontal cortex is damaged. Nevertheless, patients with damage to the prefrontal cortex have difficulty carrying out simple tasks, and their quality of life is greatly diminished. Prefrontal cortex damage can result in either of two general syndromes, depending on the affected region. Damage to the lateral prefrontal cortex leads to dysexecutive syndrome (or frontal dysexecutive syndrome). Patients have trouble managing their daily lives: they can’t set long-term goals or keep their attention focused on a task, and they have general planning deficiencies. Affected individuals also have difficulty in interacting with others, because they do not understand the thoughts and goals of other people.

Furthermore, they show a lack of insight in their own and others’ actions. They may deny problems or make up explanations for problems. The creation of implausible explanations is called confabulation. Damage to the ventral and medial frontal lobe may cause disinhibition syndrome (or frontal disinhibition syndrome). Patients can’t get themselves into productive activities, and fail to inhibit themselves during social interaction.

This means they laugh at inappropriate times or tell embarrassing information. The case of Phineas Gage is considered to be a case of disinhibition syndrome, but recent research suggests that this might not be completely true. Gage did not show some typical signs of disinhibition syndrome; for example, he could function correctly in social situations.

Making and changing rules

There is no clear division between executive functions and many other cognitive functions. Executive functions enhance some activities, while repressing others. Executive functions are control systems of thought and behaviour. There are three sorts of information processing that are common to a wide range of control systems, consciously or otherwise:

  • Initiating new rules for behaviour

  • Inhibiting rules that are no longer valid

  • Relating rules to support more complex forms of control.

Patients with damage to the lateral prefrontal cortex typically show impairments in the ability to initiate actions and social relationships. They are no longer spontaneous and are not concerned about the abnormality of their behaviour. It seems that they no longer experience emotions either. Another condition, called abulia, is a motor deficit resulting from damage to the lateral part of the frontal lobe.

The condition is characterised by lethargy and quiet withdrawal: if patients are asked to do something, they will carry out the request, but they act slowly and are easily distracted. They also have problems with sustaining attention or continuing a motor action for a longer period of time. Patients are apathetic about their surrounding environment and have problems setting long-term goals.

Neurophysiological evidence obtained from primates shows that the prefrontal cortex neurons that carry information about rules are distributed over the cortex. Groups of neurons carry information about different rules, and the relative activity of these neurons depends on which rules are relevant to the current situation. Rule-selective neurons can be found throughout the prefrontal cortex, but most of them are situated in the principal sulcus of the lateral prefrontal cortex. The basal ganglia are also involved in the creation of new behavioural rules. See Figure 13.7 on page 440. The basal ganglia help to create rules that link a specific stimulus to a specific response. The parietal cortex also contributes to rule creation: it is involved in connecting possible actions to a particular decision.

Inhibition is the suppression of unimportant or distracting information or behaviour. Inhibition and initiation are two complementary sides of the same aspect of executive function. Executive functions choose the proper behaviour by altering the strength of different rules. There are four basic forms of inhibition:

  • Stopping habituated behaviours or previously valid behaviours;

  • Preventing irrelevant information from interfering with other processes;

  • Limiting inappropriate actions in a social situation;

  • Removing irrelevant information from working memory.

The lateral prefrontal cortex is involved in the processes of inhibition. In the oddball task, participants attend to a continuous stream of stimuli. A few trials in the stream differ from the standard stimuli. Thus, it is necessary to inhibit the standard response to the stimulus. The oddball task evokes a strong P300 component in the ERP. The P300 originates in the dorsolateral prefrontal cortex and the parietal cortex. Figure 13.9 on page 442 shows the oddball task, its ERPs and the involved brain regions. Another version of the oddball task is the go/no-go task.

The subject has to respond to most stimuli, but has to inhibit the response to particular stimuli that are occasionally presented in the stream. The no-go stimuli evoke activation in the lateral prefrontal and parietal cortices, just like the oddball task. In a stop-signal task, participants keep responding to stimuli until they are asked to stop. Patients with damage to the posterior ventrolateral frontal cortex in the right hemisphere show great impairment when they are presented with this stop signal. They can’t inhibit their responses.

Additionally, it seems that motor inhibition not only depends on the lateral prefrontal cortex, but also connections between the lateral prefrontal cortex and the basal ganglia. Inhibitory processes also prevent task-irrelevant information from interfering with ongoing tasks. Patients with frontal lobe damage are also impaired in keeping important information in their working memories, because unfiltered irrelevant information can enter the working memory. This becomes apparent in match-to-sample tasks, in which subjects have to match a particular tone with another tone that is presented later. Patients with frontal lobe damage make more mistakes when distractors are added.

In order to have successful interaction with others, people have to inhibit inappropriate behaviours. People with damage to the ventral prefrontal cortex show difficulties in matching their behaviour to the social context. Patients do not understand why their behaviour is inappropriate. Neurologist Damasio calls the most severe form of frontal disinhibition syndrome acquired sociopathy. Just as in congenital (inborn) sociopathy, patients with acquired sociopathy can’t control their emotions, make poor decisions, and have difficulty interacting with others.

Nevertheless, there are some differences between congenital and acquired sociopathy. Patients with acquired sociopathy know what appropriate behaviour is and can distinguish good actions from bad ones (although they may still select the bad option). They may feel bad about their choice of action, but their remorse does not result in behavioural change. Congenital sociopaths can’t explain social rules or moral dilemmas. Additionally, the behaviour of congenital sociopaths is goal-directed instead of impulsive, and they seldom feel remorse for their actions. Interestingly, people who acquire sociopathy at a young age behave similarly to people with congenital sociopathy later in life. The differences can be explained by acquired damage to the ventromedial prefrontal cortex. The damage impairs social behaviour, but not necessarily the cognitive rules for behaviour that already existed before the brain suffered the damage.

Goal-directed behaviour requires the formation of rules and then shifting among those rules as one’s goals and demands change. Volatility is when the rules relating actions to outcomes are not stable over time. A widely used task for rule shifting is the Wisconsin Card Sorting Test. Each card in the deck has a number of shapes in a particular colour. The subject has to sort the cards according to one of the stimulus attributes, but does not know which rule is active. Using trial and error, the subject has to find the rule by which the cards have to be sorted. After a few trials, the rule changes again.

Healthy individuals are fairly good at this task and easily find out the new rule. Patients with damage to the prefrontal cortex continue to use the previously valid rule. This tendency is called perseveration. Patients learn the new rule at the same rate as healthy people; they only show difficulty in implementing the new rule in the new situation. They can’t inhibit the old rule. The ventral frontal lobes, especially the orbitofrontal cortex, facilitate shifting between rules for behaviour. The orbitofrontal cortex is connected with the brainstem, the sensory regions, and object-processing areas in the ventral visual stream.

These connections play an important role in incorporating affective information into plans for behaviour. The orbitofrontal cortex is particularly activated in situations that involve reward and punishment. When damaged, problems arise in so-called reversal learning tasks. In these tasks, learned rules are suddenly changed. Damage to the orbitofrontal cortex impairs the ability to adapt to the new situation.

The ability to create higher-order abstract representations is a fundamental feature of human cognition. Humans can manipulate and initiate ideas and thoughts. The capability to link simple rules forms the basis of classic tests for deficits in frontal lobe function. In Raven’s Progressive Matrices test, patients have to find a certain pattern in a sequence of shapes. Patients with damage to the frontopolar cortex show greatly reduced performance when they have to work with complex patterns. Patients also show deficits in tasks that involve planning, such as the Tower of Hanoi puzzle and the closely related Tower of London puzzle.

The frontopolar cortex may also be important for implementing higher-order goals. Organisms have to balance between two types of goals: reward-seeking (exploitative) goals and information-seeking (exploratory) goals. Exploitative goals involve choosing actions that will result in the greatest immediate reward. Exploratory goals involve taking actions that will provide new information about the environment. This probably will not result in immediate rewards. Exploratory behaviour activates the frontopolar cortex and intraparietal sulcus. Exploitative behaviour increases activation in subcortical regions.

In general, the posterior regions of the prefrontal cortex are related to motor behaviour, while the anterior regions are involved in reasoning and mental simulation. Anterior regions develop relatively late in life. The frontal lobes seem to be organised along a rostral-caudal (front-to-back) axis, going from simple functions in the posterior parts to executive functions in the anterior parts (Figure 13.13 on page 452). There are two main theories.

The first theory, proposed by Koechlin, argues that executive functions are organised according to their level of temporal abstraction. Functions that are needed in the present are situated in the posterior regions of the frontal lobes, functions that implement rules in changing contexts are situated in the middle regions, and functions that are involved in long-term goals are associated with the most anterior regions. The other theory, proposed by Badre and D’Esposito, also recognises that the posterior regions of the prefrontal cortex are involved in linking simple rules to behaviour. However, they think that the more complex processing of the anterior regions shapes the functioning of the posterior regions.

The neurobiology of intelligence

Since intelligence is such a hard-to-define concept, measuring and explaining differences in intelligence is hard too. Intelligence can be the ability to reason and solve problems, or the possession of a special talent. IQ tests do not cover the whole concept of intelligence, nor the great variability across different kinds of intelligence. The psychologist Spearman believed in the existence of a central intelligence factor, called general ability or simply “g”.

Although there is great dispute about the specific factors that make up intelligence, there is broad recognition that intelligence is made up of at least several separate components that tend to be partially correlated across individuals. Researchers also agree upon the fact that brain size does not predict intelligence among individuals or across species. What matters more for intelligence are the relative size of the brain compared to the rest of the body and specific differences within the parts of the brain that organise complex behaviours. Think about the relatively large frontal lobes in humans and primates. Variations in how the brain matures may also predict differences in intelligence across individuals. The way synaptic connections are formed in childhood and adolescence seems to be correlated to later intelligence.

Reasoning

Reasoning and problem solving are strongly associated with thinking and intelligence. Syllogisms are forms of deductive reasoning. A syllogism goes like this: if A, then B; A is the case, therefore B follows. The truth of a given conclusion is deduced solely from a set of premises. Deductive reasoning depends on the lateral prefrontal cortex. Deductive reasoning mainly happens in the left hemisphere, but it may involve some right-hemisphere regions as well. Inductive reasoning, on the other hand, is determining the likelihood of a conclusion from a set of imperfect premises. The left lateral prefrontal cortex is especially important for the generation of new hypotheses. Although the left prefrontal cortex is important for both inductive and deductive reasoning, the loci of activation are different.

Control over behaviour

Executive functioning mainly happens unconsciously. The brain has to be efficient in assigning the available resources in order to correctly implement executive functions. Processes related to monitoring and conflict resolution are called control systems.

Posner and Petersen proposed the existence of three general systems: one for maintaining alertness, one for orienting to sensory stimuli, and one for detecting and identifying events that require additional processing resources. The last system is called conflict monitoring. Response conflict arises when information that points to an incorrect response is available earlier than or simultaneously with information indicating the correct response. The Stroop task generates such response conflicts.

The dorsomedial prefrontal cortex is involved in monitoring and allocating the resources for cognitive control. For example, this area is activated during Stroop task trials in which the words do not match the colours. Executive processes evoke activation in the dorsal part of the anterior cingulate gyrus and other dorsomedial regions, while affective (emotional) processes evoke activation in the anterior and ventral part of the anterior cingulate gyrus. The anterior cingulate cortex is involved in monitoring feedback. The error-related negativity (ERN) is a negative-polarity electrical potential that follows mistaken actions. This can be an incorrect motor movement (response ERN) or feedback indicating that an action did not result in the desired outcome (feedback ERN). It seems that the ERN component arises from the anterior cingulate cortex.

The anterior cingulate cortex is not the only region involved in monitoring. Conflict monitoring might also occur at earlier processing stages than the one that selects the appropriate response to a conflict. It also seems that the regions traditionally thought to be involved in error monitoring (the dorsomedial prefrontal cortex) are also involved in the avoidance of mistakes, and not necessarily the detection of errors. It becomes apparent that the anterior cingulate gyrus is not necessary for conflict monitoring. For example, patients with damage to this region react and adjust to conflicts just like anyone else. The dorsomedial prefrontal cortex might instead signal a mismatch between new information and expectations.

The dorsomedial prefrontal cortex might have particular subregions that support distinct executive functions. Complex forms of control processes evoke activation in more anterior regions within the dorsomedial prefrontal cortex. As seen in Figure 13.18, incompatibility between potential responses evokes fMRI activation in the most posterior region, difficulty in choosing between options elicits activation in the middle region, and strategy-related control evokes activation in the most anterior region. Note that this organisation is similar to the organisation of the lateral prefrontal cortex. In both cases, relatively posterior regions support executive functions involved in selecting the correct motor action, while anterior regions support functions related to higher-order, abstract goals.

Working memory

The processes that keep information online and available for executive processing make up the working memory. Working memory involves the temporary maintenance and manipulation of information that is not currently available to the senses but that is necessary for achieving short-term behavioural goals. Besides maintaining information, it also inhibits irrelevant information. Figure 13.20 on page 458 shows Baddeley’s model of working memory. The model has three capacity-limited memory buffers and a control system. The memory buffers are:

  • The phonological loop for sound representations,

  • The visuospatial sketchpad for visual object and spatial representations,

  • The episodic buffer, containing integrated, multimodal representations.

The control system, or central executive, allocates processing resources to the memory buffers and performs manipulations. Each memory buffer has two components: a store that holds the information briefly and a rehearsal mechanism that reactivates the information before it fades away. In the phonological loop, the store and rehearsal mechanism are called the phonological store and articulatory rehearsal. An alternative model, proposed by psychologist Cowan, views the working memory as a process that happens on two levels. See Figure 13.21 on page 459. The first level of working memory consists of activated long-term memory representations.

There is no limit to the number of activated representations, and there is no distinction between the different types of representations (visual, vocal, etcetera). The activations decay rapidly, unless they are rehearsed. The second level of working memory consists of activated representations that fall within the focus of executive control, which can hold up to four items at once. In short, Cowan’s model proposes an executive process that transiently activates sensory representations. Baddeley’s model suggests that the regions that store working memory representations are different from those storing long-term memories, whereas Cowan’s model suggests that they are the same regions.

Because long-term memory representations of sensory information are stored in the relevant sensory and association cortices, Cowan’s model suggests that working memory maintenance should be associated with continual activity in these regions. This is supported by research. Another difference between the models is that Baddeley’s store and rehearsal mechanisms depend on different brain areas; Cowan’s model does not include this distinction. Also, the visuospatial sketchpad suggests that visual and spatial maintenance mechanisms are closely related. This is not the case: they can be dissociated at both the behavioural and neural levels. Thus, Cowan’s model is better supported by research.

Studies of the neural basis for working memory typically focus on so-called delay-period activity. The delay period is the time between the initial activation of information in working memory and the actual use of that information to execute a particular action. Prefrontal cortex neurons can maintain all kinds of information. Other brain regions than the prefrontal cortex show delay-period activity as well. Activation in lateral prefrontal cortex is associated with selection, maintenance, and rehearsing of information in working memory. This is based on several findings:

  • Activation in this region typically persists for the entire length of the delay period.

  • Activation increases when more information must be held in memory.

  • Better working memory is associated with increased activation in this specific region.

  • Increased activation has been associated with resistance to the effects of distraction from a competing working memory task.

  • Activation in the lateral prefrontal cortex tends to be greater when the task requires the manipulation of information in working memory, rather than just maintenance of that information.

On the other hand, information held in working memory reflects information processing elsewhere in the brain. For example, maintenance of simple visual information involves sensory regions as well.

14. Decisions

Rational choice models incorporate the different features of a decision into an algorithm that can evaluate and compare different options. Researchers have found that traditional choice models insufficiently explain human decision making. The integration of neuroscience and economics (neuroeconomics or decision neuroscience) tries to explain the relation between the economic and social sciences. Value-based decision making means the selection of an option that leads to a desired outcome. It involves complex choices.

Gambling addictions

Pathological gambling disorder (or problem gambling) is a condition that includes an inability to stop gambling despite obvious negative consequences. Neuroimaging studies show that pathological gamblers have abnormal activation patterns in the brain regions that are involved in the evaluation of rewards. Deficits in reward processing might lead to maladaptive learning. Young males especially are susceptible to problematic gambling; perhaps their underdeveloped prefrontal cortex is related to this finding. Impairments in executive control processes may limit the ability to inhibit the attractive effects of possible rewards and to evaluate the consequences of gambling behaviour. The second largest group of gamblers are the elderly. One of the reasons for this is that gambling provides opportunities for social interaction. However, this is not the only cause. Gambling can also be enhanced by drugs that affect the brain’s reward system. Treatment of Parkinson’s disease often involves high doses of dopamine agonists as well. However, a side effect is that users become more susceptible to gambling.

Decision making

Expected value is calculated by multiplying the probability of each possible outcome by its associated reward. For example, if you roll a die and can win €1 for every dot on the rolled side, the expected value is (1+2+3+4+5+6)/6 = €3.50. A normative theory of decision making is about how people should make decisions. Choosing the option with the highest expected value is a simple normative theory of decision making. Expected value also has its limitations. For example, people are not willing to gamble with large amounts of money even if the relative expected value is the same as for a gamble with small amounts of money. Bernoulli explained this with the concept of utility, which is the psychological value assigned to an outcome. He also introduced the term expected utility.
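The die example above can be checked with a short calculation (a sketch; the helper function and names are illustrative, not from the book):

```python
from fractions import Fraction

# Expected value = sum over outcomes of probability * reward.
def expected_value(outcomes):
    """outcomes: iterable of (probability, reward) pairs."""
    return sum(p * r for p, r in outcomes)

# A fair die paying 1 euro per dot: each face has probability 1/6.
die = [(Fraction(1, 6), v) for v in range(1, 7)]
print(float(expected_value(die)))  # → 3.5
```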

The utility of a small increase in wealth is inversely proportional to a person’s current wealth. The utility difference between €1 and €100 is much larger than the utility difference between €1000 and €1100. This phenomenon is called diminishing marginal utility. Rationality is another concept involved in decision making. There is no clear definition of rationality, but the term usually means consistency with some formal rules of decision making. Rational decisions are characterised by preferences and decision rules that do not change across contexts. Rational decision makers also assume that others will behave in a similarly rational manner. However, rational choice models often fail to describe real-world behaviour. The psychologists Kahneman and Tversky developed prospect theory; a prospect is an option whose future rewards are known or can be estimated. The theory estimates the subjective utility of a decision outcome.

Prospect theory is a descriptive theory of decision making: it attempts to predict what people will choose, not what they should choose (in contrast to normative theories). Prospect theory assumes reference dependence. People make decisions depending on their current state. They largely ignore how much wealth they actually have and use their current state to evaluate decision outcomes (see Figure 14.2). Prospect theory also involves the idea of probability weighting, which means that people evaluate probabilities in a subjective manner. People tend to overestimate the chances of low-probability events, but underestimate the chances of high-probability events. Kahneman and Tversky founded behavioural economics, which refers to the field that seeks factors that affect real-life choices and helps to find improvements of decision making.
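The two ingredients described above, reference-dependent value and probability weighting, can be sketched as follows. The functional forms and parameter values are the estimates commonly attributed to Tversky and Kahneman; they are assumptions for illustration, not figures from the book:

```python
# Prospect theory sketch (assumed parameter values: alpha = 0.88,
# lambda = 2.25, gamma = 0.61, as commonly cited in the literature).

def value(x, alpha=0.88, lam=2.25):
    """Reference-dependent value: concave for gains, steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    """Subjective probability: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(weight(0.01) > 0.01)            # True: small chances loom large
print(weight(0.99) < 0.99)            # True: near-certainty is discounted
print(abs(value(-100)) > value(100))  # True: losses outweigh equal gains
```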

Reward and utility

Rewards that have immediate benefits for fitness, such as water or sex, are called primary reinforcers. However, many real-world rewards are not primary reinforcers. Money has no intrinsic value, but can be exchanged for other rewards. Thus, money is a secondary reinforcer. An aversive outcome is called a punishment, while the removal of an aversive outcome following an action is called negative reinforcement.

The evaluation of outcomes and reinforcements involves the neurotransmitter dopamine. Figure 14.3 shows the dopaminergic pathways in the brain. Two structures in the midbrain contain dopamine neurons: the substantia nigra and the ventral tegmental area (VTA). Both structures are connected to the basal ganglia. The VTA sends signals to the nucleus accumbens (also called ventral striatum) in the basal ganglia. Studies found that addiction alters the function of dopamine neurons. Rats that could stimulate their own dopamine release kept on doing so until they died. However, dopamine signalling does not simply encode the pleasure of a reward.

Dopamine is involved in seeking a possible reward. Additional research using functional neuroimaging has shown that activation in the nucleus accumbens is evoked by many motivationally relevant stimuli. The stimuli reinforce a desirable association. Reward undermining refers to reducing one’s engagement in a task by introducing external rewards. For example, when such rewards are later removed, people lose their intrinsic motivation to perform well.

Activity of dopamine neurons does not signal rewards as such, but changes in information. For example, activation of the VTA is proportional to the probability and the magnitude of reinforcement. Dopamine neuron activity provides an index of reward prediction error (RPE). RPE can be defined by a simple equation that combines the difference between the received and the expected reward with the change in information about future rewards. See Figure 14.7 on page 474 for the function. The reward prediction error signal guides behaviour by means of temporal difference learning.

Temporal difference learning refers to the idea that successive states and predictions of the world are correlated over time. The reward prediction error signal is similar to the idea of reference dependence in prospect theory. Both calculate subjective value in terms of changes from a current state, rather than in absolute levels of reward. Information about rewards and punishments activates not only dopamine neurons projecting to the basal ganglia and ventromedial prefrontal cortex, but other parts of the brain as well.
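The prediction-error idea above can be sketched with a standard TD(0) update (an assumed textbook formulation; the states, learning rate and rewards are illustrative, not from the book):

```python
# Toy temporal difference learning: the prediction error delta combines
# the received reward with the change in predicted future reward.

def td_update(V, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step: delta = r + gamma*V(s') - V(s)."""
    delta = reward + gamma * V[next_state] - V[state]  # prediction error
    V[state] += alpha * delta                          # learn from it
    return delta

# A cue that is reliably followed by a reward gradually acquires value:
V = {"cue": 0.0, "outcome": 0.0}
for _ in range(100):
    td_update(V, "cue", "outcome", reward=1.0)
print(round(V["cue"], 2))  # → 1.0: the cue now predicts the reward
```

Note the parallel with reference dependence: the update is driven entirely by the deviation from the current prediction, never by absolute reward levels.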

An alternative theory posits that the ventral striatum responds not specifically to rewards, but to all stimuli that provide information for future behaviours. These stimuli are called salient events. They include positive and negative reinforcers and the unexpected absence of a predicted reward. However, salience-based explanations have difficulty in explaining why dopamine neuron activity decreases when rewards are unexpectedly absent. It seems that some dopamine neurons are activated by positive reinforcers and others by punishments.

Learning values and forming habits

Goal-directed choices represent only one aspect of decision making. Many learning processes that relate value information to actions happen without reflection on our goals and desires. Habits are tendencies that develop over time because they are repeatedly reinforced. Habits develop slowly, but are hard to erase once established. The development of habits depends on the ability to associate stimuli with rewards and the ability to choose motor actions that maximise outcomes.

The basal ganglia are involved in both abilities. Reward learning can be explained with actor-critic models. These models assume two brain systems: a critic that continuously evaluates whether rewards are better or worse than expected, and an actor that updates the values of potential courses of behaviour to increase the likelihood of future rewards. Two components of the basal ganglia, the ventral striatum (or nucleus accumbens) and the dorsal striatum, act as critic and actor, respectively. The ventral striatum is active in both classical and operant conditioning, whereas the dorsal striatum is only active in operant conditioning. Actor-critic models update the current state in real time in response to each stimulus, action and reward. These models also account for fictive learning: changes in behaviour based on what might have been.
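The division of labour between critic and actor can be sketched for a repeated two-option choice (the reward probabilities and learning rate are illustrative assumptions, not an example from the book):

```python
import math
import random

random.seed(0)
prefs = [0.0, 0.0]         # actor: preference for each action
value = 0.0                # critic: expected reward
alpha = 0.1                # learning rate (assumed)
reward_prob = [0.2, 0.8]   # option 1 pays off more often (assumed)

for _ in range(2000):
    # Actor: softmax choice from the current preferences.
    exps = [math.exp(p) for p in prefs]
    p0 = exps[0] / sum(exps)
    action = 0 if random.random() < p0 else 1
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    # Critic: was the reward better or worse than expected?
    delta = reward - value
    value += alpha * delta
    # Actor: make surprisingly rewarding actions more likely.
    prefs[action] += alpha * delta

print(prefs[1] > prefs[0])  # True: the better option is preferred
```

The same prediction-error signal drives both learners, which is why a deficit in the critic's evaluation can degrade action selection as well.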

Uncertainty

The term uncertainty refers to the psychological state of having limited information. There are two main forms of uncertainty: uncertainty about which outcome will occur (giving rise to risk aversion or ambiguity aversion), and uncertainty about when an outcome will occur (giving rise to delay discounting).

Risk is the estimated variance in potential outcomes, typically scaled by the magnitude of those outcomes. For example, winning €10 or €1 is a risky gamble, whereas winning €1010 or €1001 is a safer gamble. Four cortical regions are involved in the decision whether a choice is risky: the dorsolateral prefrontal cortex, dorsomedial prefrontal cortex, posterior parietal cortex and the anterior insula. The first three are important for executive functioning.
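The two gambles above have the same absolute spread but very different risk once scaled by magnitude. A sketch using the coefficient of variation as one possible scaling (the helper name is illustrative):

```python
import statistics

# Risk as outcome variability relative to outcome magnitude.
def relative_risk(outcomes):
    return statistics.pstdev(outcomes) / statistics.mean(outcomes)

risky = relative_risk([10, 1])       # small stakes, same absolute spread
safe = relative_risk([1010, 1001])   # large stakes, same absolute spread
print(risky > safe)  # True: the first gamble is relatively riskier
```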

The anterior insula, however, is involved in emotional processing and monitoring aversive signals. The dorsomedial prefrontal cortex is not always activated by decision making. The magnitude of dorsomedial prefrontal activation is proportional to the amount of uncertainty about the decision when uncertainty involves limited information about the correct decision. However, when the uncertainty of an outcome reflects a distribution of probabilistic outcomes (but not decisions), activation does not increase with increasing risk. In contrast to risk, a decision involves ambiguity if the probabilities of its outcomes are unknown. Ambiguity evokes other patterns of brain activation than risk.

In general, people have the tendency to judge rewards as less valuable if they occur farther in the future. This is called temporal discounting. See Figure 14.13 on page 482. Most people prefer smaller immediate rewards to slightly larger delayed rewards. This kind of behaviour led to the development of models of hyperbolic discounting: rewards are discounted differently over time. Perhaps temporal discounting reflects the competitive interaction of two separate neural processes (a dual-system model) or just a single neural process. Kahneman proposed the most general form of a dual-system model. It posits two types of processes. System 1 is fast, parallel, automatic, and context dependent.
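Hyperbolic discounting is commonly written as V = A / (1 + kD), where A is the reward amount, D the delay, and k a discount rate. A sketch (the value of k here is an assumed example, not a figure from the book):

```python
# Hyperbolic discounting: value falls steeply at short delays,
# then flattens out (assumed example rate k = 0.1 per day).
def discounted_value(amount, delay, k=0.1):
    return amount / (1 + k * delay)

# A smaller immediate reward can beat a larger delayed one:
print(discounted_value(10, delay=0))   # → 10.0
print(discounted_value(12, delay=30))  # → 3.0
```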

System 1 is involved in emotionally guided decisions. System 2 is slower, serial, controlled, and evidence-based. Deliberate analysis of options before choosing often involves System 2. System 1 is associated with the ventral striatum and the ventral and medial prefrontal cortex, whereas System 2 is associated with the lateral prefrontal cortex and other executive function regions. Other studies support a single-process model in which the reward system tracks the value of all potential outcomes, not just the immediate ones.

Social context

Many decisions depend on information about other people. Social stimuli reinforce a wide range of behaviours. Social stimuli evoke activation in reward-related areas in the brain. This mainly happens because of physical features, but social relationships can also evoke reward-related activation, though in a more complex manner. Two non-exclusive theories have been proposed to explain why we engage in prosocial behaviour without any rewards. The first proposes that giving without expectations is motivated by internal reward signals (a “warm glow”) that reinforce this behaviour despite the costs. The second theory posits that prosocial behaviour requires social cognition processes that recognise other individuals’ needs, as a prerequisite for actions to help them. The lateral parietal cortex and medial frontal cortex are involved in prosocial behaviour.

Game theory involves the study of “interpersonal games” in which multiple individuals are involved in decision making. See Figure 14.16 on page 486. Each individual has limited information and different preferences. Most games can be visualised in a payoff matrix that plots each choice along the axes and the outcomes for each player in its cells. Such games help to study natural cooperative behaviour. One of these games is the prisoner’s dilemma. Cooperation yields the best mutual outcome. Cooperation produces activation in the nucleus accumbens.
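A payoff matrix of the kind described above can be written out directly (the payoff values here are illustrative, not those in the book's Figure 14.16):

```python
# Prisoner's dilemma payoff matrix. Keys are the two players' choices;
# values are (player 1 payoff, player 2 payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Mutual cooperation yields the best combined outcome...
best_joint = max(payoffs, key=lambda c: sum(payoffs[c]))
print(best_joint)  # → ('cooperate', 'cooperate')
# ...yet each player gains individually by defecting against a cooperator:
print(payoffs[("defect", "cooperate")][0] > payoffs[("cooperate", "cooperate")][0])  # → True
```

This tension, between the best joint outcome and each player's individually best reply, is exactly what makes cooperative behaviour in such games interesting to study.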

The nucleus accumbens is associated with information about future rewards, rather than the experience of those rewards. Cooperation develops trust, which may predict increased rewards in the future. The term hyperscanning refers to the simultaneous recording of fMRI data from individuals who interact with each other. Hyperscanning helps to predict behavioural and brain changes in the individuals. This method has proved useful in so-called trust games: the trustee’s caudate shows more activity during cooperation. The activation of the caudate can be interpreted as the building of a mental model of the other player, which helps to decide whether to trust that person. The decision to cooperate depends on how well one can predict another person’s goals.

Altruistic punishment is the punishment that follows when people violate social norms. Altruistic punishment involves the ventral striatum. In the ultimatum game, one player (the proposer) decides how to divide a sum of money, while the other player (the responder) either accepts that division or rejects it, in which case both players receive no money. People tend to reject unfair divisions. This is not very rational: any amount of money is better than no money at all. Unfair offers in the ultimatum game have been linked to activation in the insular cortex.

Combining and comparing information

Comparisons do not only involve evaluating rewards, risk, and social components, but also comparisons across categories. Different contributors to the value of an action are integrated into a representation of general value in brain regions that are associated with action selection. The involved region is the lateral intraparietal area (LIP) in the parietal lobe. However, the specific contribution of the LIP to the comparison process is unknown. Drift-diffusion models (also called diffusion decision models) assume that the decision process is a random-walk process (drift) from a neutral state towards the threshold for a decision or an action. The models can quantitatively separate the contributors to information integration: the rate of information accumulation, the confidence needed to reach a decision, and the relative bias toward one option or another.

Models of value-based decision making suggest that the brain compares rewards with very different properties by encoding information about each reward into abstract value signals that in turn can be compared computationally. Patients with damage to the ventral prefrontal cortex show an impairment in self-control. When performing the Iowa Gambling Task, they keep on choosing decks with a negative expected value, despite their losses. Patients also fail to show a bodily response to decisions and don’t demonstrate regret following losses in gambling tasks. Apparently, ventral prefrontal damage interrupts reward integration in the brain. Brain regions coding for a common-currency signal might also include the ventromedial part of the orbitofrontal cortex and the posterior ventromedial prefrontal cortex.

Models of simple decisions

Simple two-option decisions can be explained with drift-diffusion models, as seen in Figure A on page 492. The models assume that, during a decision, information continually accumulates from a neutral starting point toward either of two response thresholds. The information accumulation process is a noisy, random-walk drift. Thresholds may move according to biases or cautious behaviour. The posterior parietal cortex (including the LIP) and the dorsomedial prefrontal cortex are particularly important in integrating information during simple decision making.
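The accumulation process described above can be simulated in a few lines (a toy sketch; the drift, noise, and threshold values are illustrative assumptions):

```python
import random

# Minimal drift-diffusion simulation: noisy evidence accumulates from a
# neutral starting point until it crosses one of two response thresholds.
def ddm_trial(drift=0.3, noise=1.0, threshold=10.0, seed=None):
    rng = random.Random(seed)
    evidence, t = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0, noise)
        t += 1
    return ("A" if evidence > 0 else "B"), t  # choice and decision time

choices = [ddm_trial(seed=i)[0] for i in range(200)]
print(choices.count("A") > choices.count("B"))  # True: drift favours A
```

Raising `threshold` models a more cautious decision maker (slower but more accurate), while shifting the starting point away from zero models a bias toward one response.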

Heuristics in decision making

Heuristics are rules that people use to get a grip on a complex situation. For example, people tend to stick with familiar concepts when they are presented with a novel situation. The economist and psychologist Simon recognised that organisms have finite computational resources (called bounded rationality) and that a complete consideration of all possibilities is impractical or impossible. For example, people pick the option that is simply “good enough”, without considering all possibilities. This heuristic is called satisficing.

The anchoring heuristic refers to the tendency of reference points to bias subsequent value judgements. The endowment effect is the phenomenon that people demand more money when they sell a good than they would pay to buy the same good. This is consistent with the idea that important reward-related brain areas respond to changes in reward from a reference point, and not to absolute values of reward.

The framing effect is the tendency to be risk-averse when thinking about possible gains, but risk-seeking when thinking about possible losses. Decisions that are inconsistent with the framing effect elicit increased activation in the dorsomedial prefrontal cortex and the amygdala. Subsequent research suggested that the dorsomedial prefrontal cortex does not contribute to any particular heuristic, but to executive processes associated with shifting from one heuristic to another. Perhaps emotional processes can get involved in risky decisions, leading to biases such as the framing effect.

Future directions

Neurorealism is the tendency (especially seen in laypeople) to regard the link between brain measurements and conclusions about behaviour as far more real and fixed than it actually is. Neuromarketing is applied research that seeks to develop algorithms that predict the choices of real-world consumers. When thinking about decision making, we have to keep in mind that decision making in real life also involves memory, attention, and emotion.

Working memory: Capacity limitations

Working memory (WM) is a cognitive system that keeps information activated for a limited amount of time, so that the information can be quickly accessed and manipulated. Working memory helps us to connect pieces of information and shape them into complex concepts. Working memory can store only a limited amount of information; this limited capacity reflects the limited processing capacity of human cognition and is due to the limits of the focus of attention. Many long-term representations (memories) can be active simultaneously, but only a few can be attended.

There is no clear evidence whether WM capacity limits are modality-specific or a single central limitation. In the article, the limitations of WM are considered separately for all modalities.

The most common paradigm to investigate WM capacity uses digit span tasks. Participants are given a random list of numbers that they have to repeat. Most people can memorise about seven items, but there is a lot of individual variation. This holds not only for digits, but also for words and other stimuli. Basically, this rule of “seven, plus or minus two” (introduced by Miller in 1956) holds for every kind of “chunk” of information. Chunking can also be done deliberately, by recoding meaningless information into meaningful chunks. This makes recalling the chunks easier, because chunking draws on pre-existing information in long-term memory.
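The recoding step can be illustrated with a trivial sketch (the grouping function and the example string are hypothetical, not from the book):

```python
def chunk(digits, size=4):
    """Recode a flat digit string into fixed-size groups.

    Twelve unrelated digits exceed the ~7 +/- 2 item limit, but three
    familiar four-digit groups (e.g. famous years) are only three
    chunks, each retrievable via pre-existing long-term memory.
    """
    return [digits[i:i + size] for i in range(0, len(digits), size)]
```

For example, `chunk("149217761066")` yields `["1492", "1776", "1066"]`: three meaningful years instead of twelve arbitrary digits.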

Baddeley’s model of working memory states that the so-called phonological loop stores language- or sound-based information for a short period of time. An individual can keep this information active by rehearsing it. Phonological properties, rather than semantic meaning, influence the way words are stored in this loop: longer words are harder to store, while words that are similar in meaning are not. Apparently, the phonological properties determine the storage limit. Another rule is that verbal WM is limited to roughly what an individual can say in about two seconds; here the limit is not a number of items, but an amount of time. If rehearsal of the items in the loop is prevented, the number of remembered items drops dramatically.

Early investigation of the capacity of visual working memory was done by Sperling. He flashed letters on a screen and asked participants how many they could remember. The results suggested that individuals have a very limited memory for items. However, Sperling’s estimate of visual WM capacity has two possible limitations. Subjects could have suffered from output interference, because they were asked to name or write the letters they remembered, which may have interfered with the retrieval process. Additionally, Sperling used alphanumeric characters, making it unclear whether verbal and/or visual WM was used in this task: subjects had to transform the visual image into verbal labels. Therefore, this report does not settle the exact capacity of visual working memory.

Another way to investigate the limits of visual WM is Phillips’ sequential comparison paradigm, or change detection paradigm (Figure 1 in the article). Subjects have to report whether items (objects) stay the same or have changed during the task. Typically, subjects start to fail the detection task when the number of items in the array exceeds about four at once. The number of features (colour, orientation, etc.) these objects have doesn’t matter. However, when the complexity of the objects increases, memory capacity decreases. There seems to be a trade-off between the required resolution of the items in memory and the number of items that can be maintained in visual working memory. Another factor influencing visual WM is the ability to discriminate between representations. Perhaps the relationship between object complexity and change detection has more to do with the resolution needed for making discriminations than with the number of representations that can be maintained in working memory.
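Performance in change detection tasks is commonly converted into a capacity estimate using Cowan's K formula, a standard measure in this literature (the formula is not spelled out in the summary itself):

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Estimate visual WM capacity from change-detection performance.

    Cowan's K = N * (H - FA): with N items on screen, K of them are
    assumed to be held in memory. Changes to stored items are detected
    (hits), while guesses about unstored items produce false alarms,
    so the hit/false-alarm difference scales with stored capacity.
    """
    return set_size * (hit_rate - false_alarm_rate)
```

For instance, with eight items on screen, an 80% hit rate and a 30% false alarm rate give K = 8 × 0.5 = 4 items, in line with the roughly four-item limit described above.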

Many experiments use blood-oxygen-level-dependent (BOLD) activation to investigate the neurobiological response during retrieval. BOLD activity is mainly found in the prefrontal, posterior parietal, and inferotemporal cortices. It is hard, however, to determine whether a rise in activation reflects WM itself or a more general response to increased task load. There is a hint that the phonological loop is situated in Broca’s area. This study also suggests that when WM capacity is exceeded, additional executive processes in prefrontal areas are recruited to assist in performing the task. Activity in the posterior parietal cortex might be sensitive to working memory capacity limits.

Activity in the intraparietal sulcus (IPS) increases as the retention load increases, but this activity reaches its maximum when the subject has to process more than four items (Figure 3 in the article). This result suggests that IPS activity reflects the number of item representations that can be maintained in visual WM; after all, four items is about the maximum that can be held in visual WM. Later research found that the lateral occipital complex is also activated during visual WM tasks. Event-related potentials (ERPs) can be used to isolate activity specifically related to working memory. Contralateral delay activity (CDA, measured about 250 ms after stimulus presentation) increases with the number of items in a visual WM task; again, it plateaus at around four items.

Of course, there’s much individual variation in WM capacity. Individual working memory capacity is associated with many cognitive and aptitude measurements. WM capacity seems to be a core mental construct, underlying overall cognitive ability. Researchers recognise that WM capacity and intelligence are positively correlated. Almost every intelligence test has some sort of memory span component or WM processing component.

A popular method is the operation span (OSPAN) task. Subjects read a mathematical equation, judge whether it is correct, and then read a random word aloud. After a number of trials, the subject has to recall as many of the words as possible. OSPAN performance correlates with language comprehension, intelligence, and verbal and visual working memory. Individual performance on the change detection paradigm is also related to intelligence and school performance. These findings are in line with the idea that there is a single central working memory capacity: measures of verbal and visual WM capacity are both predictive of cognitive ability.

There is another construct related to working memory capacity: attentional control. The WM system can be used to control the stream of information and to suppress distracting information. For example, people with low WM capacity are more likely to hear their own name in an auditory stream task. Perhaps low-capacity individuals can process the same amount of information, but also process irrelevant information. Researchers found that high-capacity individuals are more efficient at filtering out irrelevant information than low-capacity individuals. Low-capacity individuals actually hold more information in their WM, but much of it is irrelevant. Individual differences in WM capacity may thus be a consequence of attentional control processes, which determine what information is stored in working memory or long-term memory.

Figures

9 Declarative Memory

Table 1: Classification of responses to targets in detection tasks

                    Response        No response
Target present      Hit             Miss
Target absent       False alarm     Correct rejection
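The four outcomes in Table 1 feed the standard signal-detection measure of sensitivity, d′ = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch (this computation is standard signal detection theory, not worked out in the summary):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity d' from the four outcome counts in Table 1.

    d' = z(hit rate) - z(false-alarm rate). Note that rates of exactly
    0 or 1 make the inverse CDF infinite; real analyses apply a small
    correction to the counts in that case.
    """
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)
```

A subject at chance (equal hit and false-alarm rates) gets d′ = 0, while `d_prime(80, 20, 20, 80)` gives roughly 1.68, indicating good discrimination between targets and non-targets.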

 
