Cognitive Psychology by Gilhooly, K. & Lyddy, F. (first edition) - a summary
Cognitive Psychology
Chapter 13
Language comprehension
How we understand speech and written language.
Understanding requires accessing semantic information and appreciating the meaning of words, the intention of the utterance, and sometimes the non-literal meaning.
The objective is to understand what is being communicated.
At lower levels, the processes involved in speech perception (the process by which we convert a stream of speech into individual words and sentences) and in visual word recognition differ markedly.
Speech processing is a fast, accurate and automatic process. Once we have acquired language, we readily understand a spoken utterance.
The speed with which the task is achieved belies the complexity of the process.
Word recognition is the starting point for language comprehension, and understanding language is key to much of higher cognition.
Prosody: the rhythm, intonation and stress patterns in speech; aspects of an utterance's sound that are not specific to the words themselves.
While we perceive a sequence of words within the stream of speech, the speech signal itself is not produced as discrete units.
There are few clear boundaries between words in spontaneous speech and sounds blend together as they are produced so that phonemes differ as a function of the other sounds used.
The speech sounds produced by a single speaker vary with context.
There are further differences when we consider individual differences, differences in accent, and changes over time.
Factors such as speech rate, the speaker's age and sex, as well as the amount and type of background noise, affect the acoustic form of a spoken word.
The sounds we produce change with age and they change as societies change.
A speaker may produce as many as 150 words per minute, with each word spoken in, on average, 400 milliseconds.
When someone is speaking quickly, this rate can double.
Speech occurs at a rate of 10-15 phonemes per second, and can be understood at rates as fast as 50 phonemes per second for artificially speeded speech.
Syllables are produced every 200-400 milliseconds.
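These rates are mutually consistent: 150 words per minute is 150/60 = 2.5 words per second, i.e. one word roughly every 400 milliseconds, and at 10-15 phonemes per second that amounts to about 4-6 phonemes per word.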
Recognition precedes completion of the heard word.
Speech perception requires rapid segmentation of this continuous signal.
It is initially very difficult to work out where one word ends and the next begins, without knowledge of the structure of the language.
Speech perception: the process of imposing a meaningful perceptual experience on an otherwise meaningless speech input; a process whereby a continuous input is transformed into a more or less meaningful sequence of discrete events.
Speech provides a continuous signal extended in time, where each segment cannot be taken on its own but instead depends on what went before and what follows.
Blended sounds can occur at boundaries between words so that there is no ‘gap’ in the signal that would reliably indicate a word boundary.
The speech signal is continuous without clear boundaries between words.
The invariance problem
The invariance problem: the lack of invariance in speech sounds; the same speech sound varies in its production across speech contexts.
A particular phoneme is not uttered in exactly the same way on each occasion, even by the same speaker. Its form is affected by other phonemes that precede or follow it.
Co-articulation: the tendency for a speech sound to be influenced by sounds preceding or following it.
Sounds blend together so that a continuous, fluent output of speech is produced.
The same word can be produced with slight variations as a function of surrounding words.
The segmentation problem
If we extract words from a sentence in spontaneous speech, and present them in isolation, recognition is greatly reduced.
One important source of information that aids segmentation is provided by the sound patterns within a language.
Stress patterns provide an important cue to word boundaries.
Cues to word boundaries
Infants tend to show a preference for their native language or for familiar over unfamiliar voices.
Around 7.5 months, English-learning infants are able to segment words that conform to the predominant stress patterns of English words.
Initially, infants rely heavily on stress patterns, but they subsequently begin to appreciate other cues. By the age of 24 months, the perception of word boundaries is at a level similar to that of native-speaking adults.
The development of word recognition requires the extraction of the regularities in a language that can be reliably used to distinguish word boundaries.
Phonotactic constraints: the language-specific groupings of sounds that are permissible in a language.
Permissible patterns of sounds within a language also serve as effective cues to segmentation.
Onset of a word: the initial phoneme or phonemes. The rime follows the onset.
Cross-linguistic surveys of sound patterns show clear preferences for some onset patterns over others.
Onsets like bl are commonly used.
Through early exposure to our native language, we develop tacit knowledge about how sounds go together in a language. This knowledge then guides speech perception.
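This tacit knowledge can be made concrete with a minimal sketch in Python, assuming a toy inventory of permissible onsets (the real English inventory is much larger): a position in the stream is a plausible word boundary only if what follows could begin a word.

```python
# Toy sketch of phonotactic segmentation; the onset inventory and the
# test stream are illustrative assumptions, not real linguistic data.
PERMISSIBLE_ONSETS = {"b", "bl", "br", "k", "kl", "s", "st", "str", "t", "tr"}

def plausible_boundary(stream: str, i: int, max_onset: int = 3) -> bool:
    """A word boundary before position i is plausible if some permissible
    onset starts the material that follows it."""
    return any(stream[i:i + n] in PERMISSIBLE_ONSETS
               for n in range(1, max_onset + 1))

print(plausible_boundary("blacktrain", 5))  # True: "t"/"tr" can begin a word
print(plausible_boundary("blacktrain", 3))  # False: no word begins "ck..."
```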
Knowledge about sentence structure, provided by syntax, may also play a role in speech segmentation.
Slips of the ear
Slips of the ear: occur when we misperceive a word or phrase in speech.
Seventy per cent of word slips involved errors in identifying word boundaries. There are four categories of slip, including word boundary shifts, word boundary deletions and word boundary additions.
Cues are language-specific. Just as the structure of a native language will affect accent in a second language, segmentation of incoming speech is also biased towards the dominant patterns of the native language.
Categorical perception
While there is much variation in the way sounds are produced, we are rarely aware of this and we generally find speech perception to be unambiguous.
This is because the cognitive system tends to treat speech sounds as falling within discrete categories rather than falling along a continuum.
Categorical perception: the perception of stimuli on a sensory continuum as falling into distinct categories.
It helps counteract the invariance problem.
We are more sensitive to differences in speech sounds across phonetic categories than within, although we are still able to detect differences and discriminate speech sounds within a category.
Categorical perception applies in particular to consonant sounds; vowel sounds are treated as continuous.
Voicing: when speech sounds are produced while the vocal cords are vibrating.
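The category boundary idea can be sketched along a voice onset time (VOT) continuum, the delay between a stop consonant's release and the onset of voicing. The 25 ms boundary below is an illustrative value only, roughly in the range reported for an English /b/-/p/ style contrast.

```python
# Sketch of categorical perception along a voice onset time continuum.
# The 25 ms category boundary is an illustrative assumption; real
# boundaries vary by language and listener.
def perceive_stop(vot_ms: float, boundary_ms: float = 25.0) -> str:
    """Map a continuous VOT value onto a discrete phoneme category."""
    return "/b/" if vot_ms < boundary_ms else "/p/"

# Equal 10 ms steps along the continuum are heard as just two categories;
# differences within a category are far less salient than the single
# difference that crosses the boundary.
print([perceive_stop(v) for v in range(0, 61, 10)])
# ['/b/', '/b/', '/b/', '/p/', '/p/', '/p/', '/p/']
```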
Babies can distinguish between the speech sounds of many languages at a young age.
But this ability disappears as they acquire experience of the sounds of their native language.
Phonemes come to sound like a prototype as categorical perception develops, and distinctions not made in the native language are treated as belonging to the same category.
The right ear advantage for speech sounds
Connections between the ears and auditory cortex are mainly contralateral.
Consistent with this, adults show a right ear advantage for speech sounds, but not for non-speech sounds.
Right ear advantage is not restricted to humans.
Top-down influences: more on context
The effect of context can lead to the perception of absent speech sounds, so that perception is consistent with the sentence context.
Phoneme restoration effect: the tendency to hear a complete word even when a phoneme has been removed from the input.
Perception is guided by top-down processing, such that the sentence context dictates what is perceived.
Phonemes that are absent can be restored in speech perception.
There are many sources of information operating to allow accurate speech perception.
Visual cues: the McGurk effect
Cues from other modalities, such as vision, play a role in accurate speech comprehension.
Face processing involves analyses conducted specifically to facilitate speech recognition.
We can use facial cues to aid understanding of speech.
McGurk effect: a perceptual illusion that illustrates the interplay of visual and auditory processing in speech perception; hearing the syllable 'ba' while watching a face articulate 'ga' typically produces the percept 'da'.
Models of speech perception attempt to explain how information coming in from the continuous stream of speech that we hear makes contact with our stored knowledge about words.
Two categories: autonomous models, in which processing is driven bottom-up by the speech input alone, and interactive models, in which top-down knowledge also influences perception.
The Cohort model
We do not have to wait until the whole word is uttered before it is processed: some words can be recognized based on partial information.
The Cohort model of speech recognition reflects the sequential nature of speech perception.
It assumes that incoming speech sounds have direct and parallel access to the store of words in the mental lexicon.
We establish expectations regarding likely target words once we have heard the initial phonemes of a spoken word. The set of words that are consistent with the initial sound is the ‘word initial cohort’.
As more phonemes arrive as input, providing more information about the target word, the set of available candidate words is reduced: those that no longer fit the incoming pattern drop in activation and are eliminated from the set, while the remaining cohort shrinks until only the target remains. The point at which a single candidate is left is the uniqueness point.
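A minimal sketch of this candidate-reduction process, assuming a toy five-word lexicon and using letters as stand-ins for phonemes:

```python
# Toy illustration of cohort reduction; the lexicon is an invented
# five-word sample and letters stand in for phonemes.
LEXICON = ["trespass", "tress", "trestle", "trend", "tread"]

def cohort_trace(word: str, lexicon=LEXICON):
    """Shrink the candidate set as each successive segment arrives and
    report the uniqueness point, where only the target remains."""
    for i in range(1, len(word) + 1):
        cohort = [w for w in lexicon if w.startswith(word[:i])]
        print(f"after '{word[:i]}': {cohort}")
        if cohort == [word]:
            print(f"uniqueness point at segment {i}")
            break

cohort_trace("trespass")  # the cohort narrows to the target at segment 5
```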
The original cohort model: membership of the cohort is all-or-none, and context can influence the selection of candidates from an early stage.
The revised model: candidates have graded activation levels rather than all-or-none membership, and context exerts its influence at a later, integrative stage.
The gating paradigm has been used to identify a word’s uniqueness point.
A spoken word is presented as a ‘left to right’ sequence of sounds, in segments of increasing duration.
The participant must guess the word in each case and may also supply a confidence rating as to how sure they are that they have identified the correct target word.
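A sketch of how gated stimuli might be constructed from a recorded word, assuming raw audio samples and a hypothetical 50-millisecond increment:

```python
# Sketch of gating-paradigm stimulus construction: each gate presents the
# word from its onset, lengthened by a fixed increment. The 50 ms step
# and 44.1 kHz sample rate are illustrative assumptions.
def gates(samples, sample_rate=44100, step_ms=50):
    """Yield 'left to right' segments of increasing duration."""
    step = int(sample_rate * step_ms / 1000)
    for end in range(step, len(samples) + step, step):
        yield samples[:end]

# Each successive gate gives the listener more of the word to guess from.
word_audio = [0.0] * (44100 // 5)           # placeholder: 200 ms of audio
print([len(g) for g in gates(word_audio)])  # [2205, 4410, 6615, 8820]
```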
Electrophysiological evidence for the model has been provided by event-related potential (ERP) studies.
Lexical decision task: a task in which participants are presented with a letter string and must decide whether or not it is a word.
An ERP component occurred sooner for words that had early recognition points, consistent with faster response times in the lexical decision task.
It also supports the facilitatory effect of context and suggests that context plays an early role, consistent with the original cohort model.
Word recognition can occur before the point at which the acoustic input provided is sufficient to uniquely identify the word. Such a process is efficient, as access to meaning can occur before the word is complete, and multiple meanings are briefly activated within the cohort of words.
TRACE
In the TRACE model of speech perception, top-down effects play a key role.
It is a connectionist model. The 'trace' refers to the entire network of units and the particular pattern of activation associated with it: the pattern of activation left by a spoken input is a trace of the analysis of the input at each of the three processing levels.
The concepts of activation and competition are central.
TRACE accounts for top-down processes and for the processing of sub-optimal (noisy) input.
It takes a graded approach to activation levels.
Words can acquire a level of activation as a function of shared features with other candidate words.
What contributes to the perception of a phoneme?
When conditions degrade (like encountering speech against a noisy background), more top-down processing comes into play and semantic and syntactic cues may become more influential.
TRACE is a dynamic, self-updating processing system, reflecting the online and interactive nature of speech processing.
Processing units form three levels: features, phonemes and words.
TRACE does not make a word-by-word sequential assumption.
Activation can be bidirectional, with bottom-up connections from feature to phoneme to word, and top-down connections from word to phoneme to feature.
Excitatory and inhibitory links within levels create a set of possible responses such that activation of a unit represents the ‘combined evidence’ for the presence of the particular linguistic unit.
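A toy interactive-activation step in the spirit of TRACE, with two word units competing; the weights, support values and word pair are invented for illustration:

```python
# Toy interactive-activation dynamics: two word units receive bottom-up
# support from the phoneme level and inhibit each other within the word
# level. All numbers are illustrative assumptions.
import numpy as np

phoneme_support = np.array([0.8, 0.5])  # bottom-up evidence for "cat", "cap"
word_act = np.zeros(2)

for step in range(10):
    excitation = 0.3 * phoneme_support   # phoneme -> word (between levels)
    inhibition = 0.2 * word_act[::-1]    # competition within the word level
    word_act = np.clip(word_act + excitation - inhibition, 0.0, 1.0)

print(dict(zip(["cat", "cap"], word_act.round(2))))  # "cat" out-competes "cap"
```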
Words do not occur in isolation.
Any realistic theory of sentence comprehension must be able to account for more than the recognition of individual words.
Lexical access
Word recognition is a process of lexical access.
Two main types of models of lexical access: serial search models, in which the lexicon is searched entry by entry, and direct (parallel) access models, in which the input activates matching entries directly.
Lexical access has been investigated using a number of methodologies, experimental and neuroscientific.
Word naming task: requires participants to name a word aloud, while response time is measured.
Sentence verification tasks: present a sentence frame with a target word, and the participant must decide if the word fits in the frame.
Frequency effects
Although we have a large vocabulary, a large set of these words will be used rarely (low frequency words), while a smaller number of words will be used very often (high frequency words).
The frequency with which a word is used in a language affects cognitive processing. The higher the frequency, the easier the word is to process.
Open-class words: content words such as nouns, verbs and adjectives. New words can be added to this class. Open-class words show the frequency effect.
Closed-class words: function words, such as articles and prepositions; this class remains stable over time and is not added to. Closed-class words do not show the frequency effect.
Frequency is a particularly important factor in lexical decision.
The magnitude of the effect of frequency differs depending on the task used.
Priming effects
When a word in a lexical decision task is preceded by a semantically related prime, response time decreases.
The closer the words are in meaning, the greater the semantic priming effect.
Repetition priming: the finding that repeated exposure to a word leads to faster responses in a lexical decision task.
The effect of repetition on low frequency words is stronger than that on high frequency words. This is the frequency attenuation effect.
Syntactic context
The syntactic category of the word and sentence context affect lexical decision time.
Participants are significantly faster at recognizing words when they occur in sentences that provide the appropriate grammatical context than when they do not.
Lexical ambiguity
Many words have multiple meanings.
Homographs: words with the same spelling, but more than one meaning and pronunciation.
Ambiguous words will have multiple representations in memory and therefore may be treated differently than unambiguous words.
When an ambiguous word is encountered, more than one meaning is initially activated. Context subsequently influences processing, but initially multiple meanings are active.
Context does not affect initial access to multiple meanings, although the nature of the task, the context and the frequency of the word's meanings play important roles in the activation of meanings.
In bilinguals, both languages are activated; initial access is language non-selective.
Syntax and semantics
Syntax: the rules that govern how words are combined into phrases and sentences.
Despite ambiguity, on hearing a sentence, we show a preference for one structure and interpretation. It is only when we realize a mistake may have been made that we go back and look for other alternatives.
Parsing: the process by which we assign a syntactic structure to a sentence.
Psycholinguistics: the branch of study concerned with the mental processes underlying language comprehension and production.
Superficially different sentences can have the same underlying structure and meaning, and sentence components can maintain their role in a sentence even though their position in a sentence changes.
Semantic information interacts with syntactic processing and can reduce processing load in cases where meaning can inform syntactic processing.
Irreversible passives did not require more processing time than active voice sentences, whereas reversible passives did (a passive is reversible when the roles could plausibly be exchanged, as in 'The boy was kissed by the girl').
Phrase structure tree: a graphic representation of the syntactic structure of a sentence.
Garden path sentences: grammatically correct but ambiguous sentences that bias the reader's initial parsing, as in 'The horse raced past the barn fell'.
The goal of parsing is to assign incoming words to the appropriate roles in the sentence as simply and efficiently as possible.
Two key strategies: minimal attachment (prefer the syntactic structure with the fewest nodes) and late closure (attach incoming words to the phrase currently being processed).
Parsing is incremental, in that we allocate a word to a syntactic role as the word is perceived.
In such accounts, parsing is seen as autonomous and modular, in that the syntactic analysis is independent of semantic and other factors.
The interactive view proposes that semantics can influence syntax, and there is interaction between the levels of language.
Garden path sentences require the person to revise their initial interpretation of the sentence, as new, conflicting, information is presented.
However, this re-analysis does not always produce the ‘ideal’ sentence structure, and revision of the roles initially assigned to the word may not be consistent, suggesting that structures that are ‘good enough’, rather than ideal, suffice.
The process of learning to read contrasts with learning to speak.
Children acquire spoken language readily, requiring little by way of explicit instruction.
Writing systems
Scripts vary across languages in the extent and manner of their representation of spoken sounds.
All spoken languages have phonemes, or basic speech sounds, which can be combined in various ways, but written scripts differ markedly in the extent to which, as well as the ways in which, this phonetic information is represented.
Four main types: logographic, syllabic, consonantal and alphabetic.
Some writing systems combine elements of these types.
Logographic scripts developed from earlier pictographic forms, but the relation between symbol and referent became arbitrary.
In syllabic writing, each syllable is represented by a character, so that the precise pronunciation of each symbol is known.
In a language with a relatively small number of syllables, this is effective.
In consonantal scripts, letters represent consonants but not vowels, although in some scripts the vowels may be represented using diacritics.
The alphabetic writing system is the most dominant across the world's languages. Its basic unit of representation is the phoneme.
Grapheme: the written representation of a phoneme. It can consist of more than one letter.
Transparent or shallow orthography: a one-to-one correspondence between letters and sounds.
Opaque or orthographically deep languages: those where the relationship between letters and sounds is more complex.
The same sound may be written in a number of ways. And the same letter string might be associated with multiple pronunciations.
Context effects on visual word recognition
Recognition can occur before a word has been fully processed.
Top-down influences can speed written word recognition.
Word superiority effect: the finding that a target letter within a letter string is detected more readily when the string forms a word.
Context has considerable influence on visual word recognition.
Eye movements
Analysis of eye movements has provided much insight into the processes underlying reading.
As we read a line of text, our eyes do not move smoothly from one letter to the next or from one word to the next.
Saccades: fast movements of the eye made when reading or scanning an image.
Fixation: occurs when the eye settles briefly on a region of interest in a visual scene.
Two robust findings come from eye movement research: we do not simply move forward reading each word in turn, nor are all words treated equally.
Many saccades are regressions (backward movements to earlier parts of the text).
There may also be multiple fixations of the same word (re-fixations) or skipping of words.
Content words are fixated more often than are function words.
As the word length increases, the likelihood that it will be fixated increases.
Context adds to the efficiency of the process, as a predictable word is more likely to be skipped than a less predictable word.
Text difficulty affects eye movement: as difficulty increases, the saccade length decreases and the number of regressions increases.
The eyes respond predictably to semantic and syntactic anomalies, as well as to parsing errors such as those elicited by garden path sentences, although studies addressing the sentence level have produced more variable findings than those addressing word identification.
The dual route model of reading
Three routes for reading: a sublexical route that assembles a pronunciation from grapheme-phoneme rules (Route 1), and two lexical routes that retrieve stored whole-word pronunciations, one via the semantic system and one bypassing it (Routes 2 and 3).
Pure word deafness: a deficit affecting the ability to recognize speech sounds, while comprehension of non-speech sounds remains intact.
Other aspects of aphasia are absent, and perception of (most) non-speech sounds is intact.
Pure word meaning deafness: the patient can repeat back the word, but cannot understand it.
The patient may be able to recognize the same word when it is written down.
Neuropsychology of reading
Acquired dyslexia: reading difficulties following brain injury.
Surface dyslexia: characterized by a deficit in the reading of irregular words, while the reading of regular words is spared.
Such patients tend to make over-regularization errors when they try to read exception words, pronouncing 'pint' to rhyme with 'mint', for example.
Route 3 is broken.
Phonological dyslexia
Phonological dyslexia affects non-word reading, while real words can be read: patients have problems pronouncing non-words or pseudowords, but can read real words, whether regular or irregular.
Route 1 is broken.
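These patterns can be sketched as two switchable routes; the two-entry lexicon and the letter-to-phoneme rules below are toy assumptions, with whole-word lookup standing in for the lexical routes and rule-based assembly standing in for Route 1.

```python
# Toy sketch of dual-route-style reading aloud. The lexicon and the
# grapheme-phoneme rules are invented, minimal stand-ins.
LEXICON = {"pint": "/paInt/", "mint": "/mInt/"}  # whole-word lookup
GPC_RULES = {"p": "p", "i": "I", "n": "n", "t": "t", "m": "m"}

def read_aloud(word, lexical_route=True, sublexical_route=True):
    if lexical_route and word in LEXICON:  # lexical routes: irregular words OK
        return LEXICON[word]
    if sublexical_route:                   # rule-based route: non-words OK
        return "/" + "".join(GPC_RULES.get(ch, "?") for ch in word) + "/"
    return None  # no intact route can produce a pronunciation

# Surface dyslexia (lexical route impaired): "pint" is over-regularized.
print(read_aloud("pint", lexical_route=False))     # /pInt/, rhymes with "mint"
# Phonological dyslexia (rule route impaired): non-words cannot be read.
print(read_aloud("nint", sublexical_route=False))  # None
```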
Brain imaging and electrophysiological data
N400 component: a negative-going potential that occurs approximately 400 milliseconds after the presentation of a triggering stimulus.
It has been shown to be associated with the time-course of some aspects of word processing and with semantic processing in particular.
The N400 is relatively larger when a semantically anomalous word is presented to participants.
N400 reflects increased processing effort when dealing with semantic information.
The P600 wave occurs when syntactically anomalous words are presented; it has an onset around 500 milliseconds after presentation of the stimulus.