Successful syllable detection in aphasia despite processing impairments as revealed by event-related potentials
© Becker and Reinvang; licensee BioMed Central Ltd. 2007
Received: 07 September 2006
Accepted: 19 January 2007
Published: 19 January 2007
The role of impaired sound and speech sound processing in the auditory language comprehension deficits of aphasia is unclear. No electrophysiological studies of attended speech sound processing in aphasia have been performed with stimuli that are discriminable even for patients with severe auditory comprehension deficits.
Event-related brain potentials (ERPs) were used to study speech sound processing in a syllable detection task in aphasia. In an oddball paradigm, the participants had to detect the infrequent target syllable /ta:/ amongst the frequent standard syllable /ba:/. Ten subjects with moderate and ten subjects with severe auditory comprehension impairment were compared to 11 healthy controls.
N1 amplitude was reduced, indicating impaired primary stimulus analysis; N1 reduction was a predictor of auditory comprehension impairment. N2 attenuation suggests reduced attended stimulus classification and discrimination. However, all aphasic patients were able to discriminate the stimuli almost without errors, and processes related to target identification (P3) were not significantly reduced. The aphasic subjects might have discriminated the stimuli by purely auditory differences, while the ERP results reveal a reduction of language-related processing which, however, did not prevent task performance. Topographic differences between aphasic subgroups and controls indicate compensatory changes in activation.
Stimulus processing in early time windows (N1, N2) is altered in aphasics, with adverse consequences for auditory comprehension of complex language material, while still allowing performance of simpler tasks (syllable detection). Compensational patterns of speech sound processing may be activated in syllable detection, but may not be functional in more complex tasks. The degree to which compensational processes can be activated probably varies depending on factors such as lesion site, time after injury, and language task.
The analysis of speech sounds is a necessary step in the process of language comprehension. Since most aphasic patients have auditory comprehension deficits, the question whether and to what degree speech sound perception is impaired in aphasia has been much investigated [1–15]. Several studies have indeed shown that aphasic subjects perform significantly worse than healthy controls in, for example, tasks where they have to decide whether two consonants (or two syllables with different consonants) are the same or not [1, 3, 4, 8, 9].
However, most authors did not find correlations between these speech perception impairments and auditory comprehension abilities as measured by classical aphasia assessments [2, 4, 7, 11]. Rather, several studies have revealed patients with severe auditory comprehension deficits but no or minor speech sound perception impairments, or patients with mild auditory comprehension deficits who performed poorly in speech sound discrimination and identification tasks [2–4, 6, 9, 15]. Thus, an at least partial dissociation between speech perception and auditory comprehension has been found, which has also been quoted as evidence for a dual pathway framework of language comprehension. However, a rather strong correlation between speech sound perception and auditory comprehension has also been reported.
Brain activity related to different stages of speech sound processing can be studied with event-related brain potentials. At about 100 ms after stimulus onset, a negativity can be recorded as the N1 wave, which is generated in both temporal and frontal brain areas. N1 reflects an intermediate stage in auditory analysis as well as sound detection and orienting functions. Concerning the processing of speech sounds, N1 is suggested to reflect integrative processing of acoustic features of the incoming stream of speech, but not a neural representation of phonemes [18–20].
The N2 waveform – recorded at about 150 to 300 ms after stimulus onset – is likewise a summation of several components. While early parts of the N2 (N2a or mismatch negativity, MMN) reflect automatic deviance detection, later stages of the N2 wave are regarded as correlates of attentional deviance detection (N2b) and of classification processing (N2c). Starting with N2b and in further stages, the processing of speech sounds seems to differ from that of non-speech sounds, while in earlier stages, as reflected by N2a, sound processing is common to speech and non-speech. With regard to the time course of attentional stimulus discrimination, it has been suggested that the N2 component reflects processes of transient arousal triggered by unattended discrimination processes (reflected by N2a/MMN), which in turn trigger a target reaction. Cognitive processes related to target detection and to the engagement of a target reaction are reflected by the P3 component, which is mainly generated in parietal regions and, in the case of auditory stimuli, in superior temporal cortex [24–26].
Electrophysiological studies of sub-lexical speech sound processing in aphasia have mainly focused on unattended phonetic/phonological processing, often using the mismatch negativity component (MMN). To our knowledge, no ERP investigations of attended processing of sub-lexical speech stimuli have been performed in aphasia. While the number of studies using simple language stimuli in attended paradigms to investigate auditory processing is small, more studies with non-speech stimuli have been conducted, often using tones presented in oddball paradigms. There is good evidence for N1 amplitude reduction to an attended and frequent tone stimulus in aphasia [28–32]. Regarding the topographic distribution of the N1 component, a right hemisphere maximum has been observed in an aphasic group, while a control group showed an even hemispheric distribution. Lesions located in either the left or the right superior temporal gyrus were found to be the cause of N1 amplitude reduction [33, 34]. When using monaural presentation in left hemisphere injured patients, right-ear stimulation led to bilateral N1 reduction.
Regarding the response to the target stimulus, reduced P3 amplitudes have been reported, especially in patients with severe comprehension deficits [29, 31, 36]. The temporo-parietal junction has been shown to be crucial for normal P3 amplitudes to tone stimuli.
Against the background of a still unclear relation between speech sound perception and auditory comprehension, and of sparse ERP research on the attended processing of speech sounds in aphasia, we aimed in this study to further explore neurophysiological correlates of automatic and cognitive processes involved in speech sound processing in aphasic subjects. A major problem in interpreting ERP results and behavioral findings is that when the participant fails to perform the task correctly, it is impossible to determine which underlying processes are active. Our strategy was therefore to study ERPs in a relevant linguistic task which can be performed adequately by aphasic subjects, and to investigate the relevance of deviations in processing for the performance of a more complex task. Having investigated automatic discrimination of syllables in an earlier study, we used the same stimuli in the present study in an attended oddball design. A central research question was at which processing stages changes may be found in aphasia. Current research is focusing on changes of brain activation during recovery from brain injury, suggesting different activation patterns in patients with successful recovery compared to those with a less favorable outcome. Therefore, we grouped the participating patients with regard to aphasia severity. Furthermore, differences in the topographic distribution of the identified components may give further information about functional or dysfunctional changes in brain activation, especially with regard to activation of ipsilesional and contralesional processes.
A total of 20 aphasic subjects were consecutively recruited from patients admitted to our hospital for rehabilitation. Eleven control subjects were recruited from hospital staff and from non-brain-damaged patients of the hospital. All participants, with the exception of two severe and one moderate aphasic patient, reported being right-handed. Informed consent was obtained from all subjects. The study was approved by the regional research ethics committee of Eastern Norway.
All participants were examined with the auditory comprehension section of the Norwegian Basic Aphasia Assessment (NGA) and the Token test. These tests measure comprehension of both single words and short sentences, with regard to naturalistic objects, body parts, and geometric tokens. In addition, the patients were investigated with the complete NGA. Furthermore, all patients were assessed by a neuropsychologist as part of their routine rehabilitation program. Etiology and lesion location were retrieved from the patients' medical charts – the latter from descriptions of CT or MRI scans.
In order to investigate whether different electrophysiological patterns depend on the severity of the auditory comprehension deficit, the aphasic subjects were divided into two groups: a group of aphasic subjects with mild or moderate auditory comprehension impairment (moderate aphasia group) and a group of subjects with severe or very severe auditory comprehension impairment (severe aphasia group). The cut-off for dichotomization was a score of 16.5 in the shortened version of the Token test, which corresponds to the border between moderate and severe aphasia as described by the authors.
Demographical and clinical data of the aphasic subjects participating in this study.
[Table: for each patient in the severe and the moderate aphasia group, the columns give years of age, type of aphasia, site of lesion (F, T, P, O, presumably denoting frontal, temporal, parietal, and occipital), months post injury, NGA auditory comprehension score, NGA total score, and areas of neuropsychological deficits. The reported deficit areas include apraxia, acalculia, memory, working memory, attention, visual attention, executive function, problem solving, abstract reasoning, perseveration, visual scanning and discrimination, and visual spatial function. The individual patient rows are not recoverable here.]
Overview of the three investigated groups.

                                  Severe (N = 10)     Moderate (N = 10)   Control (N = 11)
Years of age                      53.5 (45.1 – 66.9)  53.1 (18.0 – 66.0)  58.2 (33.0 – 74.1)
Years of education                12.4 (9 – 15)       14.2 (11 – 20)      13.8 (10 – 18)
NGA aud. comprehension (0 – 71)   42 (13 – 57)        64 (52 – 70)        –
NGA total (0 – 217)               99 (35 – 163)       190 (155 – 209)     –
Token test (0 – 36)               7.1 (1 – 11.5)      24.8 (19 – 32)      33.6 (31 – 35)
Time post onset (months)          4.5 (1.7 – 97.7)    3.0 (0.8 – 20.6)    –
The participants were presented with a syllable detection paradigm using the same natural speech sounds as in our earlier study of automatic syllable discrimination: the frequent standard syllable /ba:/ (p = 0.85) and the infrequent target syllable /ta:/ (p = 0.15) were presented with a stimulus onset asynchrony of 1.5 s in a pseudo-randomized order, with the restriction that two targets could not follow each other (see additional file 1, a 30 s sample of the auditory stimuli). The syllables were digitally recorded from a female, middle-aged native speaker and cut and re-spliced at zero crossings of the steady-state vowel to obtain syllables of the same length (/ba:/ = 245.9 ms; /ta:/ = 245.2 ms). The recordings of the syllables were low-pass filtered at 8 kHz. The syllables had rise/fall times of 20 ms. A total of 205 syllables, amongst these 30 target syllables, were presented binaurally via headphones at approximately 80 dB SPL. The participants were seated comfortably in an armchair or their wheelchair and were instructed to press a button with the index finger of their preferred hand as soon as possible when they heard the target syllable /ta:/. Since many of the subjects had severe comprehension deficits, the stimuli (up to 15 targets and 40 standards) were first presented without EEG recording, and the subjects' reactions were observed to ensure that the participants had understood the task. Additionally, prior to the recordings for the present study, all subjects had been presented with the same syllable stimuli in an unattended paradigm in the same session.
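The constraint described above (30 targets among 205 trials, with no two targets in succession) can be satisfied constructively by dropping each target into a distinct "gap" between standards. The following is an illustrative sketch, not the study's actual stimulus software; the function name and seed are hypothetical:

```python
import random

def oddball_sequence(n_total=205, n_targets=30, seed=1):
    """Pseudo-randomized oddball sequence of 'ta' targets among 'ba'
    standards, with the restriction that two targets never follow
    each other. Each target occupies a distinct gap between (or
    around) the standards, which guarantees non-adjacency."""
    rng = random.Random(seed)
    n_standards = n_total - n_targets
    # Choose 30 of the 176 gaps around the 175 standards.
    slots = set(rng.sample(range(n_standards + 1), n_targets))
    seq = []
    for gap in range(n_standards + 1):
        if gap in slots:
            seq.append("ta")       # at most one target per gap
        if gap < n_standards:
            seq.append("ba")       # a standard separates the gaps
    return seq
```

Because every gap holds at most one target and consecutive gaps are separated by a standard, the "no two targets in a row" restriction holds by construction rather than by rejection sampling.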
EEG was recorded continuously with a sampling frequency of 500 Hz and an online band-pass filter from 0.05 to 70 Hz at the following electrode sites: Fz, Cz, Pz, Fp1/2, F3/4, C3/4, P3/4, F7/8, T3/4, T5/6, O1/2, M1, and M2. A nose reference electrode was used. The continuous EEG data were analyzed post hoc using band-pass (1 – 15 Hz), zero-phase filtering and ocular artifact reduction based on the vertical oculogram. Sweeps with amplitudes exceeding +/- 100 μV in any channel except the vertical oculogram were excluded from the analysis.
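The sweep exclusion rule amounts to a simple threshold check over all channels except the oculogram. A minimal sketch (hypothetical helper, not the study's analysis code; sweeps are assumed to be lists of per-channel sample lists in μV):

```python
def accepted_sweeps(sweeps, veog_index, threshold_uv=100.0):
    """Keep only sweeps whose amplitudes stay within +/-100 uV in
    every channel except the vertical oculogram (VEOG) channel."""
    kept = []
    for sweep in sweeps:
        within = all(
            max(abs(sample) for sample in channel) <= threshold_uv
            for ch, channel in enumerate(sweep)
            if ch != veog_index
        )
        if within:
            kept.append(sweep)
    return kept
```

Note that a blink deflection confined to the VEOG channel does not reject the sweep, since ocular activity is handled by the artifact reduction step instead.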
The three left-handed participants had CT-verified right hemisphere lesions and left hemiparesis. For these participants, symmetrical and corresponding electrode labels were swapped between hemispheres. Thus, in this paper odd numbered electrode indices (F3, F7 ...) refer to the brain damaged hemisphere (normally the left) and even numbered electrode indices (F4, F8 ...) refer to the contralesional hemisphere. For the controls – all being right-handed – electrode labels of the left hemisphere are referred to as ipsilateral.
The standard syllable (/ba:/) waveforms were analyzed for the N1 component, and the responses to the target syllable (/ta:/) for N1 and P3. For each group separately, mean peak latencies for the components were defined as the mean of the individual peak latencies located at maxima in the following time windows: N1 = 60 – 180 ms and P3 = 300 – 700 ms. The Cz electrode was used to define the latencies for standard and target N1, while Pz was used for target P3. For each component, time intervals were centered at the relevant group's mean peak latency to calculate mean amplitudes. These intervals had a duration of 30 ms for the N1 and 50 ms for the P3 component. Using the intervals derived by the procedure described above, mean amplitudes for the following electrode sites were calculated and further analyzed: Fz, Cz, Pz, F3/4, C3/4, P3/4, F7/8, T3/4, T5/6. A similar analysis was performed separately for the mastoid electrodes (M1/2); these results did not give additional information and are therefore not reported.
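The mean-amplitude computation described above reduces to averaging the ERP waveform over a window centered on the group mean peak latency. An illustrative sketch under the stated assumptions (500 Hz sampling, sample 0 at stimulus onset; the function name is hypothetical):

```python
def mean_amplitude(waveform, sfreq, center_ms, window_ms):
    """Mean amplitude (uV) of an averaged ERP waveform in a window
    of `window_ms` duration (30 ms for N1, 50 ms for P3) centered
    on the group mean peak latency `center_ms`."""
    start = int(round((center_ms - window_ms / 2) * sfreq / 1000.0))
    stop = int(round((center_ms + window_ms / 2) * sfreq / 1000.0))
    segment = waveform[start:stop]
    return sum(segment) / len(segment)
```

For example, an N1 mean amplitude at Cz would be computed with `center_ms` set to that group's mean N1 peak latency and `window_ms = 30`.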
Furthermore, subtraction waveforms (target - standard) were analyzed to elucidate the process of discriminating targets from standards. Mean average amplitudes of successive time windows of 50 ms duration in the range from 75 ms to 475 ms were calculated and analyzed; this time span contains the N2 component.
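The windowed subtraction analysis can be sketched as follows (illustrative only; assumes 500 Hz sampling and sample 0 at stimulus onset, and the function name is hypothetical):

```python
def n2_difference_windows(target_erp, standard_erp, sfreq=500.0):
    """Target-minus-standard difference wave, summarized as mean
    amplitudes in successive 50 ms windows from 75 to 475 ms,
    the span containing the N2 component."""
    diff = [t - s for t, s in zip(target_erp, standard_erp)]
    means = []
    for start_ms in range(75, 475, 50):
        a = int(start_ms * sfreq / 1000.0)
        b = int((start_ms + 50) * sfreq / 1000.0)
        segment = diff[a:b]
        means.append(sum(segment) / len(segment))
    return means  # eight windows: 75-125, 125-175, ..., 425-475 ms
```

A sustained negativity of the difference wave in the middle windows would correspond to the N2 effect analyzed in the results below.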
We analyzed the mean amplitudes using a two-way ANOVA model with the between-subjects factor "group" (severe aphasia vs. moderate aphasia vs. control) and the within-subject factors anterior-posterior "line" (frontal vs. central vs. parietal) and "electrode" (5 levels; for example F7, F3, Fz, F4, and F8 for the frontal electrode line). Thus, a significant interaction involving the "electrode" factor might indicate a hemisphere difference, but would have to be analyzed further by focusing on the relevant electrode contrasts. Greenhouse-Geisser and Bonferroni corrections were applied when appropriate. Latencies were compared between groups using one-way ANOVAs.
Furthermore, Spearman's rank correlation was used to analyze ERP amplitudes and latencies for correlations with time after brain injury, reaction time (RT), and clinical aphasia assessment results (NGA auditory comprehension, NGA total, and Token test). Only aphasic subjects were included in these analyses, except for the RT analysis, where all participants were included. In order to reduce the risk of type I errors – given the large number of correlation analyses performed – the significance level for correlations was set to 0.01.
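For reference, Spearman's rho is the Pearson correlation of rank-transformed data; in the no-ties case it reduces to the closed form 1 − 6·Σd²/(n(n²−1)). A minimal sketch (not the study's statistics software):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation for samples without ties,
    using rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), where d is
    the per-observation difference in ranks."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

With tied observations (common in clinical scores), a tie-corrected implementation such as `scipy.stats.spearmanr` would be used instead.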
Almost all subjects detected all 30 targets; only three severe aphasic patients missed one target syllable each. Many participants had a few false alarms, but none more than four; no significant differences regarding false alarm rates were found. These results indicate that the task was a rather easy one.
The target response time was significantly prolonged in the patient groups (p < 0.05): While the mean reaction time was 383 ms in the control group, it was 465 ms in the moderate and 586 ms in the severe aphasic group.
Standard syllable N1
Target syllable N1
A two-way ANOVA showed a significant between-group effect (F [2, 28] = 4.44, p < 0.05), which post-hoc analysis revealed to be significant for the control vs. moderate aphasia comparison (p < 0.05) and marginally significant for the control vs. severe aphasia comparison (p = 0.066). Further analysis of the topographic anterior-posterior distributions showed the same tendencies as for the standard N1, but generally at a non-significant level. Visual inspection indicated a tendency towards the same hemisphere distribution differences as observed for the N1 elicited by the standard syllable; a significant electrode * group interaction was found (F [2, 28] = 4.77, p < 0.001). The severe aphasia group showed larger amplitudes over the contralesional hemisphere, especially at central and parietal sites.
The P3 component (figure 5, table 3) was observed in the controls as the typical large positivity with a parietal maximum peaking at 436 ms (4.32 μV). A somewhat earlier maximum was observed in the moderate aphasia group (peak: 419 ms, 4.58 μV). In the severe aphasia group, P3 was somewhat attenuated and peaked over the frontal midline (451 ms, 3.11 μV). However, no significant differences between groups in P3 mean amplitudes or latencies were found.
Subtraction curve analysis
For the 175 – 225 ms interval, the ANOVA showed a between-groups effect (F [2, 28] = 3.67, p < 0.05); post-hoc analysis resulted only in a tendency towards significance for the control vs. severe aphasia group comparison (p = 0.062). A significant line * group interaction was found (F [2, 28] = 2.69, p < 0.05). Further analysis of the frontal line resulted in a tendency towards a between-group effect (p = 0.089), while we observed a significant difference for the parietal electrodes (p < 0.05). The largest amplitudes at this stage were found parietally in the control group, but centrally in the aphasic groups. For the parietal line, we also found an electrode * group interaction (F [2, 28] = 2.48, p < 0.05), with significantly different amplitudes between groups at the P3, P4, and Pz electrode sites. In this early segment of the processing difference, the control group's negativity was lateralized to the left hemisphere, whereas the moderate aphasia group showed higher amplitudes over the contralesional hemisphere at central and parietal sites.
The processing difference between target and standard stimulus in the 225 – 275 ms time window increased – compared to the preceding interval – in the controls, but decreased in the aphasic groups. The analysis of variance showed a between-groups effect (F [2, 28] = 10.80, p < 0.001) which was present between the controls and both the moderate (p < 0.01) and the severe aphasia group (p < 0.001). A significant line * group effect was found (F [2, 28] = 4.42, p < 0.01). The processing difference of the control group was now centered between the Cz and Pz electrodes and centrally localized with regard to hemisphere distribution, while it still showed larger amplitudes over the hemisphere contralesional to the brain damage in the moderate aphasia group.
In the 275 – 325 ms interval, the vertex mean amplitudes of the control and the severe aphasia group remained rather unchanged, while a positive amplitude indicated the start of a P3 effect in the moderate aphasia group. In this time window, too, the processing difference showed a between-group effect (p < 0.01). Post-hoc analysis resulted in a significant difference between the control and the moderate aphasia group (p < 0.01). Line * group (F [2, 28] = 3.37, p < 0.05) and electrode * group (F [2, 28] = 3.44, p < 0.05) interactions were significant, and a significant line * electrode * group interaction was found (F [2, 28] = 2.24, p < 0.05), but the pattern of electrode differences did not indicate systematic hemispheric differences.
Correlations between ERP parameters and clinical aphasia measures
Overview of significant correlations
[Table: significant correlations between ERP amplitudes (including the 175 – 225, 225 – 275, and 325 – 375 ms time windows) and NGA auditory comprehension, NGA total score, and time post injury; the individual correlation coefficients are not recoverable here.]
For the target stimulus N1, tendencies (p < 0.1) towards correlations between the Token test and amplitudes at the C3 and Cz electrodes were observed, as well as between M1 and the NGA total score.
Correlations with reaction time
A positive correlation was found between P3 latency and reaction time (rs = 0.49, p < 0.01): the later the P3 component peaked, the longer was RT.
Correlations between ERP parameters and time after brain injury
Moderate correlations between ERP amplitudes and the time between brain injury and the ERP investigation were found for the N1 component elicited by the standard and the target syllable (table 4). Mean N1 amplitudes were smaller the more time had passed since the brain injury.
In the present study, we investigated the ability of severely and moderately aphasic patients to detect rare target syllables amongst frequent standard syllables, and studied the electrophysiological processes involved. The aphasic groups performed this rather easy task accurately, though more slowly than the controls. Despite the aphasic subjects' successful task performance, we found several significant differences in their electrophysiological processing indicators. No alterations in ERP latencies were observed, but changes in ERP amplitudes for components in the time range from about 100 up to about 300 milliseconds after stimulus onset indicate differences during or immediately following on-line stimulus processing. These changes comprised reduced primary stimulus processing, in the form of attenuated N1 amplitudes for both standard and target stimuli at a latency of about 110 to 120 milliseconds, and a discrimination deficit between targets and standards in the time interval from 175 to 325 ms post stimulus onset. In this time range, a clear N2 peak could be identified in the controls, whereas the aphasic subjects showed a less distinct negative processing difference. P3 latency and amplitude did not differentiate between the groups, but P3 latency was associated with reaction time. N1 amplitude reduction at ipsilesional fronto-central sites correlated with the severity of auditory comprehension impairment. In addition, N1 amplitudes at fronto-central electrode sites were smaller with increasing time after injury.
Topographic analysis indicated that moderate and severe aphasics showed different patterns of brain activation in order to solve the discrimination problem. Salient differences were that the severe aphasics showed a lateralization of activity focus to the contralesional hemisphere in an early processing window (N1), while showing no evidence of discriminatory activation in later time windows. The moderate aphasics on the other hand showed a more symmetrical activation in the corresponding early time window with evidence of discriminatory activity in later time windows. The implications will be discussed further below.
The observed attenuation of the N1 component in the aphasic groups is consistent with earlier findings for tone [29–34, 36] and word stimuli [28, 42]. A statistical correlation between N1 amplitude and measures of the severity of auditory comprehension impairment in aphasia has not been reported earlier, but two studies that also dichotomized their aphasic patient groups with regard to auditory comprehension function reported a larger N1 reduction in the severe aphasia group [36, 43].
N1 reduction and its correlations with auditory comprehension impairment can be interpreted as impaired sound detection and orienting functions and deficient integration of the acoustic properties of speech sounds. Reduced N1 amplitude was found for both the standard and the target syllable, which argues for a disturbance of primary stimulus processing independent of the role of the stimulus in the task. This is supported by the fact that the discrimination analysis (subtraction wave) did not reveal differences between groups in the N1 time window, but only from 175 ms onwards.
The deviant electrophysiological patterns in the aphasic groups between 175 and 325 ms argue for disturbances in the processes of attentional detection of the infrequent syllable /ta:/ and of its classification as the target stimulus. These differences were found in temporal stages of the N2 waveform which have been identified as differing between speech sound and purely acoustic processing.
However, the P3 component was not significantly altered in the aphasic groups, indicating no severe impairment of target detection or of the processes engaging the target reaction; this corresponds, of course, to the fact that the aphasic patients were able to detect the target syllables behaviorally. The lack of a significant P3 reduction – which contrasts with some results of earlier P3 studies of aphasic patients – might be due to the large difference between the stimuli and to the relatively low difficulty of the task. In earlier studies, the stimuli were rich tones differing in only one parameter: frequency [29, 31, 36, 37] or duration. Actually, the reported P3 attenuation in the aphasic groups in most of these studies [29, 31, 36] was not caused by a general processing defect in aphasia, but rather – as the authors noted – by the fact that several subjects were unable to perform the task; in the present study, even the very severely aphasic subjects were able to accomplish the task almost without errors.
The close relation between the P3 component and the target response was illustrated by a significant, though weak correlation between P3 latency and reaction time. Although reaction time was significantly prolonged in the aphasic groups, we did not observe P3 latency differences between groups. This might be explained by disturbances in "post-P3" executive motor functions in the aphasic subjects, many of whom had sensory-motor deficits involving the preferred hand.
How can it be explained that the aphasic subjects were able to perform the current task successfully while the electrophysiological parameters were significantly attenuated and even correlated with auditory comprehension measures? A possible explanation is that stimulus discrimination in at least some aphasic subjects was based not on linguistic analysis, but only or mainly on purely acoustic features. This strategy is adequate in a task with a very limited set of stimuli and no demands on semantic interpretation, but is not functional in a naturalistic comprehension task. Earlier studies have indeed shown that the ability to discriminate phonemes is a necessary but not sufficient condition for the correct identification of these phonemes, and report several aphasic subjects who could discriminate, but not identify, speech sounds [5, 15]. We would argue that the severe aphasia group, which showed the largest N1 amplitude reduction, has to rely primarily on acoustic analysis. Linguistic processing – which accounts for parts of the N1 and a more substantial part of the N2 waveform – might thus be reduced in these subjects, even if these linguistic analyses were not necessary to perform the task correctly. In this perspective, one could furthermore argue that speech sound discrimination based on purely acoustic features requires more resources and is more exhausting than "normal" speech sound discrimination; this could be suggested as one reason why aphasic subjects often report that listening to language is fatiguing.
There are some other possible reasons for the observed amplitude reductions. First, compensational pathways might exist in aphasic brains which are not revealed by ERPs, at least not as recorded in the present study. These might be processes asynchronous in relation to the stimuli. Alternatively, the N1 and N2 components in healthy subjects might (partially) be generated by unnecessary, redundant activity that can be reduced in brain injured individuals without having impact on brain functions. Also, one could question the usually proposed sequential nature of the processing steps reflected by the N1, N2, and P3 components: Rather, different processes might exist in parallel. In injured brains, due to a conflict of resources, early processing steps might then be reduced because task-relevant processes are ongoing and prioritized.
However, an important objection to these interpretations of the present results is that the observed electrophysiological changes might not be due to impaired language functions, but rather solely to deficits in purely acoustic processing. On the other hand, one could argue that the amplitude attenuations might be only unspecific effects of brain lesion and lesion size which are not related to aphasia in particular. These problems can be addressed in a study using both a speech sound paradigm and a paradigm with purely acoustic stimuli, and furthermore by comparing aphasic patients with brain injured individuals without aphasia. We are pursuing this approach in an ongoing study.
Some interesting changes in the hemispheric distribution of brain activity were observed: While the N1 maximum was evenly distributed over the hemispheres in the controls, the aphasic groups showed two contrasting patterns of N1 hemisphere distribution at fronto-central sites: in the moderately impaired aphasic subjects, N1 was evenly distributed or even slightly lateralized to the ipsilesional side, while it had relatively more weight over the non-brain-damaged hemisphere in patients with severely impaired auditory comprehension. Similar to the results for the severe group, relatively enlarged N1 amplitudes at contralesional fronto-central sites have been reported [30, 42]. In a study using monaural stimulation, a similar pattern was found only for right-ear, but not for left-ear, stimulation.
These findings might be explained by the effect of two different but interacting mechanisms: First, a general N1 reduction takes place which is directly caused by the brain damage and which is larger in patients with larger brain lesions and more severe impairments, i.e. the severe aphasia group. This attenuation is probably largest over brain-damaged areas. Second, different compensational mechanisms in response to the brain damage might exist: severely impaired patients activate the contralesional hemisphere relatively more than the ipsilesional hemisphere, while patients with lesser impairment show higher activation of the brain-damaged than of the contralesional hemisphere. Thiel et al. [38, 45, 46] have reported similar lateralization differences between patients with moderate aphasia and those with more impaired language function, and propose a hierarchy of language recovery in which compensational activation of perilesional areas leads to rather good results, while the contralesional hemisphere can be activated as part of a less efficient compensational mechanism. Our results regarding the N1 component support this hypothesis, and we note that the majority of significant correlations between auditory comprehension scores and single-electrode N1 amplitudes are with ipsilesional fronto-central electrodes.
The ability to make use of compensational strategies in speech sound processing probably differs between aphasic subjects due to factors such as premorbid brain organization and lesion site and size, but also depending on features of the speech sounds being processed. This variation might be one reason for the complex relation between impaired speech sound perception and auditory comprehension in aphasia.
The clinical use of event-related brain potentials to explore and possibly monitor auditory comprehension in aphasia is under discussion [47–50]. The present study supports the usefulness of event-related potentials in investigating the processes underlying auditory comprehension deficits in aphasia. As this study indicates, ERPs provide information about central auditory processing deficits even in tasks that aphasic subjects accomplish successfully. Our results regarding the N1 and N2 waveforms – particularly the significant correlations of N1 amplitudes with clinical language comprehension assessment results – suggest that these waveforms deserve further attention in the exploration of auditory comprehension impairment in aphasia.
This study investigated attended speech sound processing in aphasia by recording event-related potentials during a syllable detection task. The aphasic subjects were able to perform the task almost without errors, and processes related to target identification (P3) were not significantly attenuated. However, electrophysiological components reflecting primary stimulus analysis (N1) and attended stimulus classification and discrimination (N2) indicated reduced processing, which may constitute a crucial weakness in more complex and naturalistic comprehension tasks. The aphasic subjects might have discriminated the stimuli through increased reliance on acoustic differences, and topographic differences between aphasic subgroups and controls indicate compensatory changes in activation. The degree to which compensational patterns of speech sound processing can be activated probably varies depending on lesion site, time after injury, and language task.
This project has been financed with the aid of EXTRA funds from the Norwegian Foundation for Health and Rehabilitation. FB is financed by these funds, by the University of Oslo and by Sunnaas Rehabilitation Hospital. IR is financed by the University of Oslo.
- Baker E, Blumstein SE, Goodglass H: Interaction between phonological and semantic factors in auditory comprehension. Neuropsychologia. 1981, 19: 1-15. 10.1016/0028-3932(81)90039-7.
- Basso A, Casati G, Vignolo LA: Phonemic identification defect in aphasia. Cortex. 1977, 13: 85-95.
- Baum SR: Consonant and vowel discrimination by brain-damaged individuals: effects of phonological segmentation. J Neurolinguistics. 2002, 15: 447-461. 10.1016/S0911-6044(00)00020-8.
- Blumstein SE, Baker E, Goodglass H: Phonological factors in auditory comprehension in aphasia. Neuropsychologia. 1977, 15: 19-30. 10.1016/0028-3932(77)90111-7.
- Blumstein SE, Cooper WE, Zurif EB, Caramazza A: The perception and production of Voice-Onset Time in aphasia. Neuropsychologia. 1977, 15: 371-372. 10.1016/0028-3932(77)90089-6.
- Caplan D, Gow D, Makris N: Analysis of lesions by MRI in stroke patients with acoustic-phonetic processing deficits. Neurology. 1995, 45: 293-298.
- Gandour J, Dardarananda R: Voice onset time in aphasia: Thai. I. Perception. Brain Lang. 1982, 17: 24-33. 10.1016/0093-934X(82)90002-5.
- Jauhiainen T, Nuutila A: Auditory perception of speech and speech sounds in recent and recovered cases of aphasia. Brain Lang. 1977, 4: 572-579. 10.1016/0093-934X(77)90047-5.
- Miceli G, Gainotti G, Caltagirone C, Masullo C: Some aspects of phonological impairment in aphasia. Brain Lang. 1980, 11: 159-169. 10.1016/0093-934X(80)90117-0.
- Miceli G, Caltagirone C, Gainotti G, Payer-Rigo P: Discrimination of voice versus place contrasts in aphasia. Brain Lang. 1978, 6: 47-51. 10.1016/0093-934X(78)90042-1.
- Milberg W, Blumstein S, Dworetzky B: Phonological processing and lexical access in aphasia. Brain Lang. 1988, 34: 279-293. 10.1016/0093-934X(88)90139-3.
- Square-Storer P, Darley FL, Sommers RK: Nonspeech and speech processing skills in patients with aphasia and apraxia of speech. Brain Lang. 1988, 33: 65-85. 10.1016/0093-934X(88)90055-7.
- Tallal P, Newcombe F: Impairment of auditory perception and language comprehension in dysphasia. Brain Lang. 1978, 5: 13-24. 10.1016/0093-934X(78)90003-2.
- Varney NR: Phonemic imperception in aphasia. Brain Lang. 1984, 21: 85-94. 10.1016/0093-934X(84)90038-5.
- Yeni-Komshian GH, Lafontaine L: Discrimination and identification of voicing and place contrasts in aphasic patients. Can J Psychol. 1983, 37: 107-131.
- Hickok G, Poeppel D: Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition. 2004, 92: 67-99. 10.1016/j.cognition.2003.10.011.
- Näätänen R, Picton T: The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure. Psychophysiology. 1987, 24: 375-425.
- Näätänen R, Winkler I: The concept of auditory stimulus representation in cognitive neuroscience. Psychol Bull. 1999, 125: 826-859. 10.1037/0033-2909.125.6.826.
- Näätänen R: The perception of speech sounds by the human brain as reflected by the mismatch negativity (MMN) and its magnetic equivalent (MMNm). Psychophysiology. 2001, 38: 1-21.
- Roberts TP, Ferrari P, Stufflebeam SM, Poeppel D: Latency of the auditory evoked neuromagnetic field components: stimulus dependence and insights toward perception. J Clin Neurophysiol. 2000, 17: 114-129. 10.1097/00004691-200003000-00002.
- Pritchard WS, Shappell SA, Brandt ME: Psychophysiology of N200/N400: a review and classification scheme. Adv Psychophysiol. 1991, 4: 43-106.
- Sussman E, Kujala T, Halmetoja J, Lyytinen H, Alku P, Näätänen R: Automatic and controlled processing of acoustic and phonetic contrasts. Hear Res. 2004, 190: 128-140. 10.1016/S0378-5955(04)00016-4.
- Näätänen R: Attention and Brain Function. 1992, Hillsdale, New Jersey, Lawrence Erlbaum Associates.
- Johnson R: On the neural generators of the P300 component of the event-related potential. Psychophysiology. 1993, 30: 90-97.
- Linden DE: The P300: where in the brain is it produced and what does it tell us?. Neuroscientist. 2005, 11: 563-576. 10.1177/1073858405280524.
- Picton TW: The P300 wave of the human event-related potential. J Clin Neurophysiol. 1992, 9: 456-479.
- Becker F, Reinvang I: Mismatch negativity elicited by tones and speech sounds: Changed topographical distribution in aphasia. Brain Lang. 2007, 100: 69-78. 10.1016/j.bandl.2006.09.004.
- Brown CM, Hagoort P, Swaab TY: Neurophysiological Evidence for a Temporal Disorganization in Aphasic Patients with Comprehension Deficits. Aphasietherapie im Wandel. Edited by: Widdig W, Ohlendorff I and Malin JP. 1997, Freiburg, Hochschulverlag, 89-122.
- Hagoort P, Brown CM, Swaab TY: Lexical-semantic event-related potential effects in patients with left hemisphere lesions and aphasia, and patients with right hemisphere lesions without aphasia. Brain. 1996, 119 (Pt 2): 627-649. 10.1093/brain/119.2.627.
- Pool KD, Finitzo T, Hong CT, Rogers J, Pickett RB: Infarction of the superior temporal gyrus: a description of auditory evoked potential latency and amplitude topology. Ear Hear. 1989, 10: 144-152.
- Swaab TY, Brown C, Hagoort P: Understanding ambiguous words in sentence contexts: electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia. 1998, 36: 737-761. 10.1016/S0028-3932(97)00174-7.
- Woods DL, Knight RT, Scabini D: Anatomical substrates of auditory selective attention: behavioral and electrophysiological effects of posterior association cortex lesions. Brain Res Cogn Brain Res. 1993, 1: 227-240. 10.1016/0926-6410(93)90007-R.
- Knight RT, Scabini D, Woods DL, Clayworth C: The effects of lesions of superior temporal gyrus and inferior parietal lobe on temporal and vertex components of the human AEP. Electroencephalogr Clin Neurophysiol. 1988, 70: 499-509. 10.1016/0013-4694(88)90148-4.
- Knight RT, Hillyard SA, Woods DL, Neville HJ: The effects of frontal and temporal-parietal lesions on the auditory evoked potential in man. Electroencephalogr Clin Neurophysiol. 1980, 50: 112-124. 10.1016/0013-4694(80)90328-4.
- Ilvonen TM, Kujala T, Tervaniemi M, Salonen O, Näätänen R, Pekkonen E: The processing of sound duration after left hemisphere stroke: Event-related potential and behavioral evidence. Psychophysiology. 2001, 38: 622-628. 10.1111/1469-8986.3840622.
- Swaab TY, Brown C, Hagoort P: Spoken Sentence Comprehension in Aphasia: Event-related Potential Evidence for a Lexical Integration Deficit. J Cogn Neurosci. 1997, 9: 39-66.
- Knight RT, Scabini D, Woods DL, Clayworth CC: Contributions of temporal-parietal junction to the human auditory P3. Brain Res. 1989, 502: 109-116. 10.1016/0006-8993(89)90466-6.
- Thiel A, Herholz K, Koyuncu A, Ghaemi M, Kracht LW, Habedank B, Heiss WD: Plasticity of language networks in patients with brain tumors: a positron emission tomography activation study. Ann Neurol. 2001, 50: 620-629. 10.1002/ana.1253.
- Reinvang I: Norwegian Basic Aphasia Assessment. Aphasia and Brain Organization. 1985, New York, Plenum Press, 181-192.
- De Renzi E, Faglioni P: Normative data and screening power of a shortened version of the Token test. Cortex. 1978, 14: 41-49.
- Semlitsch HV, Anderer P, Schuster P, Presslich O: A solution for reliable and valid reduction of ocular artifacts, applied to the P300 ERP. Psychophysiology. 1986, 23: 695-703.
- Rothenberger A, Szirtes J, Jürgens R: Auditory evoked potentials to verbal stimuli in healthy, aphasic, and right hemisphere damaged subjects. Pathway effects and parallels to language processing and attention. Arch Psychiatr Nervenkr. 1982, 231: 155-170. 10.1007/BF00343837.
- Praamstra P, Stegeman DF, Kooijman S, Moleman J: Evoked potential measures of auditory cortical function and auditory comprehension in aphasia. J Neurol Sci. 1993, 115: 32-46. 10.1016/0022-510X(93)90064-6.
- Le Dorze G, Brassard C: A description of the consequences of aphasia on aphasic persons and their relatives and friends, based on the WHO model of chronic disease. Aphasiology. 1995, 9: 239-255.
- Heiss WD, Kessler J, Thiel A, Ghaemi M, Karbe H: Differential capacity of left and right hemispheric areas for compensation of poststroke aphasia. Ann Neurol. 1999, 45: 430-438. 10.1002/1531-8249(199904)45:4<430::AID-ANA3>3.0.CO;2-P.
- Winhuisen L, Thiel A, Schumacher B, Kessler J, Rudolf J, Haupt WF, Heiss WD: Role of the contralateral inferior frontal gyrus in recovery of language function in poststroke aphasia: a combined repetitive transcranial magnetic stimulation and positron emission tomography study. Stroke. 2005, 36: 1759-1763. 10.1161/01.STR.0000174487.81126.ef.
- Csepe V, Molnar M: Towards the possible clinical application of the mismatch negativity component of event-related potentials. Audiol Neurootol. 1997, 2: 354-369.
- Hyde M: The N1 response and its applications. Audiol Neurootol. 1997, 2: 281-307.
- Näätänen R: Mismatch negativity: clinical research and possible applications. Int J Psychophysiol. 2003, 48: 179-188. 10.1016/S0167-8760(03)00053-9.
- Giaquinto S: Evoked potentials in rehabilitation. A review. Funct Neurol. 2004, 19: 219-225.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.