How have neurosciences evolved over the years? - Chapter 1
What is the structure and function of the nervous system? - Chapter 2
What is the role of methods in cognitive neuroscience? - Chapter 3
What is hemispheric specialization? - Chapter 4
How do sensation and perception relate to each other? - Chapter 5
Which matters are important in object recognition? - Chapter 6
What is the function of attention and how does it work? - Chapter 7
What is the importance of action and the motor system? - Chapter 8
How does memory work? - Chapter 9
How does emotion work? - Chapter 10
Of all the higher functions that humans possess, language is perhaps the most specialized and refined, and it may well be what most clearly distinguishes us from other species. Language input can be auditory or visual, so both the sensory and perceptual systems are involved in language comprehension. Split-brain patients, as well as patients with lateralized, focal brain lesions, have taught us that a great deal of language processing is lateralized to the left-hemisphere regions surrounding the Sylvian fissure. The language areas of the left hemisphere include Wernicke's area and Broca's area. These brain areas and their interconnections via white matter tracts form the left perisylvian language network.
Before neuroimaging, most of what was discerned about the neural bases of language processing came from studying patients whose brain lesions resulted in various types of aphasia. Aphasia is a broad term referring to the collective deficits in language comprehension and production that accompany neurological damage. Aphasia may also be accompanied by speech problems caused by loss of control over the articulatory muscles, known as dysarthria, and by deficits in the motor planning of articulation, known as apraxia. There is also a form of aphasia in which the patient is unable to name objects; this is called anomia.
Broca's aphasia is the oldest and perhaps the most-studied form of aphasia. Broca observed that his patient Leborgne had a brain lesion in the posterior portion of the left inferior frontal gyrus, now referred to as Broca's area. In the most severe form of Broca's aphasia, single-utterance patterns of speech are often observed. The speech of patients with Broca's aphasia is often telegraphic: it contains only content words and leaves out the function words that have only grammatical significance, such as prepositions and articles. Patients with Broca's aphasia are often aware of their errors and have a low tolerance for frustration. They also have a comprehension deficit related to syntax, the rules governing how words must be put together in a sentence. Often only the most basic and overlearned grammatical forms are produced and comprehended; this is known as agrammatic aphasia.
Wernicke's aphasia is primarily a disorder of language comprehension: patients with this syndrome have difficulty understanding spoken or written language and sometimes cannot understand language at all. Their speech is fluent, with normal prosody and grammar, but what they say is often nonsensical. Wernicke performed autopsies on his patients and traced the core of the language problem to the posterior region of the superior temporal gyrus, now known as Wernicke's area.
Wernicke proposed a model for how the known language areas of the brain were connected. He and others found that a large neural fiber tract, the arcuate fasciculus, connects Broca's and Wernicke's areas. Wernicke predicted that damage to this fiber tract would disconnect the two areas in a fashion that would result in another aphasia, known as conduction aphasia. These patients understand words that they hear or see, and they can hear their own speech errors, but they cannot repair them. They also have problems with spontaneous speech as well as with repeating speech, and they sometimes use words incorrectly. Lichtheim extended the model with a hypothetical brain region that stored conceptual information about words: once a word was retrieved from word storage, it was sent to this concept area, which supplied all the information associated with the word. These ideas led to the Wernicke-Lichtheim model, which proposes that language processing, from sound inputs to motor outputs, involves interconnections among different key brain regions, and that damage to different segments of this network results in the various observed and proposed forms of aphasia.
Human language is called natural language because it arises from the abilities of the brain. It can be spoken, gestured, and written. So how does the brain cope with spoken, gestured, and written input to derive meaning? And how does the brain produce spoken, gestured, and written output to communicate meaning to others? The brain must store representations of words and their associated concepts. A word in a spoken language has two properties: a meaning and a phonological form. A written word also has an orthographic form. One of the central ideas in word representation is the mental lexicon: a mental store of information about words that includes semantic information (meaning), syntactic information (how words combine to form sentences), and the details of word forms (spelling and sound patterns).
There are three general functions involving the mental lexicon:
Lexical access: the stage of processing in which the output of perceptual analysis activates word form representations in the mental lexicon.
Lexical selection: the stage in which the representation that best matches the input is identified.
Lexical integration: the final stage, in which words are integrated into the full sentence, discourse, or larger context to facilitate understanding of the whole message.
A normal adult speaker has passive knowledge of about 50,000 words, yet can easily recognize and produce about three words per second. The mental lexicon is proposed to have further features; linguistic evidence supports the following four organizing principles (a toy lexicon sketch follows the list):
The smallest meaningful representational unit in a language is called a morpheme
Most frequently used words are accessed more quickly than less frequently used words
A phoneme is the smallest unit of sound that makes a difference to the meaning of a word
Representations in the mental lexicon are organized according to semantic relationships between words
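As a way to make these ideas concrete, the toy Python sketch below models a few lexicon entries and the three processing stages. Everything in it (the entries, phoneme labels, frequencies, and semantic links) is invented for illustration; it is a sketch of the general idea, not a model from the source.

    from dataclasses import dataclass, field

    @dataclass
    class LexicalEntry:
        word: str                # orthographic form (spelling)
        phonemes: list[str]      # smallest sound units (phonemes)
        morphemes: list[str]     # smallest meaningful units (morphemes)
        frequency: int           # usage frequency; higher = faster access
        neighbors: set[str] = field(default_factory=set)  # semantic relationships

    # Invented toy entries; phoneme labels are rough ASCII stand-ins.
    LEXICON = {
        "cow":   LexicalEntry("cow",   ["k", "aw"],           ["cow"],       900, {"horse", "milk"}),
        "cows":  LexicalEntry("cows",  ["k", "aw", "z"],      ["cow", "-s"], 300, {"cow"}),
        "horse": LexicalEntry("horse", ["h", "ao", "r", "s"], ["horse"],     700, {"cow"}),
    }

    def lexical_access(perceived):
        """Stage 1: perceptual input activates matching word-form candidates."""
        return [e for e in LEXICON.values() if e.phonemes[:len(perceived)] == perceived]

    def lexical_selection(candidates):
        """Stage 2: identify the best match; the more frequent word wins,
        mirroring the principle that frequent words are accessed more quickly."""
        return max(candidates, key=lambda e: e.frequency) if candidates else None

    def lexical_integration(entry, context):
        """Stage 3: integrate the selected word into the larger message."""
        return f"'{entry.word}' integrated into: {context!r}"

    candidates = lexical_access(["k", "aw"])   # activates both "cow" and "cows"
    selected = lexical_selection(candidates)   # "cow" wins on frequency
    print(lexical_integration(selected, "the farmer milked the ..."))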
From the patterns of deficits in patients with language disabilities, we can infer a number of things about the functional organization of the mental lexicon. Patients with Wernicke's aphasia make errors in speech production known as semantic paraphasias: they might use the word 'horse' when they intend the word 'cow'. The semantic categories of words are represented in the left temporal lobe, with a posterior-to-anterior progression from general to more specific information.
The brain uses some of the same processes to understand both spoken and written language, but there are also striking differences in how spoken and written inputs are analyzed. When listening to spoken language, the listener has to decode the acoustic input and translate it into a phonological representation. The representations in the mental lexicon that match the auditory input are then accessed and selected, and the word's meaning is activated along with the associated conceptual information.
Infants have the perceptual ability to distinguish all possible phonemes, but during their first year of life their perceptual sensitivities become tuned to the phonemes of the language they experience on a daily basis. They therefore lose the ability to distinguish phonemes that are not part of their native language; infants raised hearing only English, for example, lose sensitivity to phoneme contrasts that English does not use.
In humans, the superior temporal cortex is important for sound perception; people with damage to this area may develop pure word deafness. When the speech signal hits the ear, it is first processed by brain pathways that are not specialized for speech but are used for hearing in general. Heschl's gyri of both hemispheres are activated by speech and nonspeech sounds alike, but activation in the superior temporal sulcus (STS) of each hemisphere is modulated by whether the incoming auditory signal is a speech sound or not. Further along the processing stream, the brain becomes less sensitive to changes in nonspeech sounds and more sensitive to those in speech sounds.
Reading is the perception and comprehension of written language. Our brain is very good at pattern recognition, but reading is a fairly recent invention, and learning to read requires linking arbitrary visual symbols to meaningful words. The identification of orthographic units may take place in occipitotemporal regions of the left hemisphere, and it has been known for over a hundred years that lesions in this area can give rise to pure alexia, a condition in which patients cannot read words even though other aspects of language are normal. In humans, written information from the left visual field arrives first via visual inputs to the contralateral right occipital cortex and is then sent to the left-hemisphere visual word form area via the corpus callosum. The visual word form area is heavily interconnected with regions of the left perisylvian language system, including frontal, temporal, and inferior parietal cortical regions.
Once a phonological or visual representation is identified as a word, its semantic and syntactic information must be retrieved for it to gain any meaning. Words are usually processed not in isolation but in the context of other words; to understand words in context, we have to integrate the syntactic and semantic properties of the recognized words into a representation of the whole utterance.
Is it possible to retrieve word meanings before words are heard or seen, when the meanings are highly predictable from the context? Consider the sentence 'the tall man planted a tree on the bank'. Here 'bank' has multiple meanings, but the context of the sentence enables us to interpret it as the side of a river rather than a financial institution. There are lower-level representations, constructed from the sensory input, and higher-level representations, constructed from the context preceding the word being processed.
Three classes of models attempt to explain word comprehension (a small sketch contrasting the first two follows the list):
Modular models: claim that normal language comprehension is executed within separate and independent modules. Higher-level representations cannot influence lower-level ones, so the flow of information is strictly data-driven, or bottom-up.
Interactive models: maintain that all types of information can participate in word recognition. Context can exert its influence even before the sensory information is available, by changing the activation status of word-form representations.
Hybrid models: fall between the modular and interactive extremes. They hold that lexical access itself is autonomous and not influenced by higher-level information, but that later stages, such as lexical selection and integration, can be influenced by context.
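The contrast between the first two model classes can be illustrated with a small sketch; the words, activation values, and context boost below are invented, and real interactive-activation models are far richer than this.

    # Bottom-up perceptual support and context-based pre-activation (invented values).
    SENSORY_EVIDENCE = {"bang": 0.6, "bank": 0.5}
    RIVER_CONTEXT_BOOST = {"bank": 0.3}

    def modular_recognition(evidence):
        """Modular: strictly data-driven; context cannot alter word-form activation."""
        return max(evidence, key=evidence.get)

    def interactive_recognition(evidence, context_boost):
        """Interactive: context changes the activation status of word forms,
        potentially even before the sensory information arrives."""
        combined = {w: a + context_boost.get(w, 0.0) for w, a in evidence.items()}
        return max(combined, key=combined.get)

    print(modular_recognition(SENSORY_EVIDENCE))                           # -> bang
    print(interactive_recognition(SENSORY_EVIDENCE, RIVER_CONTEXT_BOOST))  # -> bank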
How do we process the structure of sentences? When we hear or read sentences, we activate word forms that in turn activate the grammatical and semantic information in the mental lexicon. Representations of whole sentences, however, are not stored in the brain; instead, the brain has to assign a syntactic structure to the words in a sentence, a process called syntactic parsing. Lexical access and selection involve a network that includes the middle temporal gyrus (MTG), superior temporal gyrus (STG), and ventral inferior and bilateral dorsal inferior frontal gyri (IFG) of the left hemisphere. In ERP studies, the N400 is a negative-polarity brain wave related to semantic processing in language, and the P600/SPS is a large positive component elicited by syntactic and by some semantic violations. A sketch of how an N400 effect might be quantified follows.
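As an aside on method, the sketch below shows one common way an N400 effect is quantified: as the mean amplitude difference, in a window around 400 ms after word onset, between semantically incongruent and congruent words. The data here are synthetic, and the sampling rate and time window are assumptions, not values from the text.

    import numpy as np

    FS = 500  # sampling rate in Hz (assumed)
    rng = np.random.default_rng(0)

    def mean_amplitude(epochs, t_start=0.3, t_end=0.5):
        """Average over trials to get the ERP, then over the N400 window."""
        erp = epochs.mean(axis=0)
        return erp[int(t_start * FS):int(t_end * FS)].mean()

    # Synthetic epochs (trials x samples); incongruent words get an extra
    # negative deflection peaking near 400 ms, mimicking the N400.
    t = np.arange(0, 1.0, 1 / FS)
    n400 = -4e-6 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    congruent = rng.normal(0, 2e-6, (40, t.size))
    incongruent = rng.normal(0, 2e-6, (40, t.size)) + n400

    effect = mean_amplitude(incongruent) - mean_amplitude(congruent)
    print(f"N400 effect: {effect * 1e6:.2f} microvolts")  # negative, as expected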
One neural model of language that combines findings from brain research and language analysis has been proposed by Hagoort. His model divides language processing into three functional components:
Memory: refers to the linguistic knowledge that is encoded and consolidated in neocortical memory structures.
Unification: refers to the integration of lexically retrieved phonological, semantic, and syntactic information into an overall representation of the whole utterance.
Control: relates language to social interactions and joint action.
How are these left-hemisphere brain regions organized into a language network? White matter tracts in the left hemisphere connect the inferior frontal cortex, inferior parietal cortex, and temporal cortex to create specific circuits for linguistic operations.
Motor control involves creating internal forward models, which enable the motor circuit to make predictions about the position and trajectory of a movement and about its sensory consequences, and sensory feedback, which measures the actual sensory consequences of an action. Feedback control has been documented in the production of speech: researchers have altered sensory feedback and found that people adjust their speech to correct for the sensory feedback 'errors'.
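A minimal sketch of this feedback loop, with all numbers invented: the forward model predicts the sensory consequence of a motor command (here, voice pitch in Hz), the prediction is compared with experimentally shifted auditory feedback, and part of the resulting error is corrected on each step, as in altered-feedback speech experiments.

    TARGET_PITCH = 200.0   # intended pitch in Hz (invented)
    FEEDBACK_SHIFT = 1.05  # experimenter shifts auditory feedback up 5% (invented)
    GAIN = 0.5             # fraction of the prediction error corrected per step

    motor_command = TARGET_PITCH
    for step in range(5):
        predicted = motor_command               # forward model's prediction
        heard = motor_command * FEEDBACK_SHIFT  # altered sensory feedback
        error = heard - predicted               # prediction error
        motor_command -= GAIN * error           # oppose the perceived shift
        print(f"step {step}: produced {motor_command:.1f} Hz")
    # The produced pitch drifts below 200 Hz to compensate for the upward shift,
    # which is the signature of feedback control in speech production.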
Levelt proposed an influential cognitive model of language production. The first step in speech production is to prepare the message. There are two crucial aspects to message preparation: macroplanning, in which the speaker determines what she wants to express, and microplanning, in which she plans how to express it. Speech production also depends on the use of orofacial muscles, which are controlled by processes using internal forward models and sensory feedback. Hickok's model of speech production involves parallel processing and two levels of hierarchical control.
Models of language production must thus account for the processes of selecting the information to be contained in the message; retrieving words from the lexicon; planning sentences by encoding grammar using the semantic and syntactic properties of the words; using their morphological and phonological properties for syllabification and prosody; and preparing articulatory gestures for each syllable. A simplified pipeline sketch of these stages follows.
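The sketch below strings these stages together in order. Every stage is a simplified stand-in (the syllabification in particular is hard-coded), meant only to show the flow from message to articulation, not to implement Levelt's actual model.

    def prepare_message(intention):
        """Macroplanning (what to express) and microplanning (how to express it)."""
        return {"intent": intention, "concepts": ["farmer", "milk", "cow"]}

    def grammatical_encoding(message):
        """Retrieve words from the lexicon and build a syntactic frame."""
        return ["the", "farmer", "milks", "the", "cow"]

    def phonological_encoding(words):
        """Syllabification and prosody (hard-coded toy syllables)."""
        return [["far", "mer"] if w == "farmer" else [w] for w in words]

    def articulate(syllabified):
        """Prepare articulatory gestures per syllable (rendered as text here)."""
        return " ".join("-".join(syls) for syls in syllabified)

    plan = prepare_message("describe the scene")
    print(articulate(phonological_encoding(grammatical_encoding(plan))))
    # -> "the far-mer milks the cow"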
How do we achieve goals and meet needs? - Chapter 12
What does social cognitive neuroscience study? - Chapter 13
What is the anatomy of consciousness? - Chapter 14