
Successful syllable detection in aphasia despite processing impairments as revealed by event-related potentials

Abstract

Background

The role of impaired sound and speech sound processing in the auditory language comprehension deficits of aphasia is unclear. To date, no electrophysiological studies of attended speech sound processing in aphasia have used stimuli that are discriminable even for patients with severe auditory comprehension deficits.

Methods

Event-related brain potentials (ERPs) were used to study speech sound processing in a syllable detection task in aphasia. In an oddball paradigm, the participants had to detect the infrequent target syllable /ta:/ amongst the frequent standard syllable /ba:/. Ten subjects with moderate and ten subjects with severe auditory comprehension impairment were compared to 11 healthy controls.

Results

N1 amplitude was reduced, indicating impaired primary stimulus analysis; N1 reduction was a predictor of auditory comprehension impairment. N2 attenuation suggests reduced attended stimulus classification and discrimination. However, all aphasic patients were able to discriminate the stimuli almost without errors, and processes related to target identification (P3) were not significantly reduced. The aphasic subjects might have discriminated the stimuli by purely auditory differences, while the ERP results reveal a reduction of language-related processing which, however, did not prevent performing the task. Topographic differences between aphasic subgroups and controls indicate compensatory changes in activation.

Conclusion

Stimulus processing in early time windows (N1, N2) is altered in aphasics, with adverse consequences for auditory comprehension of complex language material, while still allowing performance of simpler tasks (syllable detection). Compensational patterns of speech sound processing may be activated in syllable detection, but may not be functional in more complex tasks. The degree to which compensational processes can be activated probably varies depending on factors such as lesion site, time after injury, and language task.

Background

The analysis of speech sounds is a necessary step in the process of language comprehension. Since most aphasic patients have auditory comprehension deficits, the question whether and to what degree speech sound perception is impaired in aphasia has been investigated extensively [1–15]. Several studies have indeed shown that aphasic subjects perform significantly worse than healthy controls in, for example, tasks where they have to decide whether two consonants (or two syllables with different consonants) are the same or not [1, 3, 4, 8, 9].

However, most authors did not find correlations between these speech perception impairments and auditory comprehension abilities as measured by classical aphasia assessments [2, 4, 7, 11]. Rather, several studies have revealed patients with severe auditory comprehension deficits but no or minor speech sound perception impairments, or patients with mild auditory comprehension deficits who performed poorly in speech sound discrimination and identification tasks [2–4, 6, 9, 15]. Thus, an at least partial dissociation between speech perception and auditory comprehension has been found, which has also been cited as evidence for a dual pathway framework of language comprehension [16]. However, a rather strong correlation between speech sound perception and auditory comprehension has also been reported [13].

Brain activity related to different stages of speech sound processing can be studied with event-related brain potentials. At about 100 ms after stimulus onset, a negativity can be recorded as the N1 wave, which is generated in both temporal and frontal brain areas [17]. N1 reflects an intermediate stage in auditory analysis as well as sound detection and orienting functions [18]. Concerning the processing of speech sounds, N1 is suggested to reflect integrative processing of acoustic features of the incoming speech stream, but not a neural representation of phonemes [18–20].

The N2 waveform – recorded at about 150 to 300 ms after stimulus onset – is likewise a summation of several components [21]. While early parts of the N2 (N2a or mismatch negativity, MMN) reflect automatic deviance detection, later stages of the N2 wave are regarded as correlates of attentional deviance detection (N2b) and of classification processing (N2c). From N2b onwards, the processing of speech sounds seems to differ from that of non-speech sounds, while in earlier stages, as reflected by N2a, sound processing is common to speech and non-speech [22]. With regard to the time course of attentional stimulus discrimination, the N2 component is suggested to reflect processes of transient arousal triggered by unattended discrimination processes (reflected by N2a/MMN), which in turn trigger a target reaction [23]. Cognitive processes related to target detection and to the engagement of a target reaction are reflected by the P3 component, which is mainly generated in parietal regions and, in the case of auditory stimuli, in superior temporal cortex [24–26].

Electrophysiological studies of sub-lexical speech sound processing in aphasia have mainly focused on unattended phonetic/phonological processing, often using the mismatch negativity component (MMN; for a short overview of these studies, see [27]). To our knowledge, no ERP investigations of attended processing of sub-lexical speech stimuli have been performed in aphasia. While the number of studies using simple language stimuli in attended paradigms to investigate auditory processing is small, more studies with non-speech stimuli have been conducted, often using tones presented in oddball paradigms. There is good evidence for N1 amplitude reduction to an attended and frequent tone stimulus in aphasia [28–32]. Regarding the topographic distribution of the N1 component, a right hemisphere maximum has been observed in an aphasic group, while a control group showed an even hemispheric distribution [30]. Lesions located in either the left or right superior temporal gyrus were found to cause N1 amplitude reduction [33, 34]. With monaural presentation in left hemisphere injured patients, right-ear stimulation led to bilateral N1 reduction [35].

Regarding the response to the target stimulus, reduced P3 amplitudes have been reported, especially in patients with severe comprehension deficits [29, 31, 36]. The temporo-parietal junction has been shown to be crucial for normal P3 amplitudes to tone stimuli [37].

Given the still unclear relation between speech sound perception and auditory comprehension, and the sparse ERP research on attended processing of speech sounds in aphasia, this study aimed to further explore neurophysiological correlates of automatic and cognitive processes involved in speech sound processing in aphasic subjects. A major problem in interpreting ERP results and behavioral findings is that when the study person fails to perform the task correctly, it is impossible to determine which underlying processes are active. Our strategy was therefore to study ERPs in a relevant linguistic task that aphasic subjects can perform adequately, and to investigate the relevance of processing deviations for the performance of a more complex task. Having investigated automatic discrimination of syllables in an earlier study [27], we used the same stimuli in the present study in an attended oddball design. A central research question was at which processing stages changes may be found in aphasia. Current research focuses on changes of brain activation during recovery from brain injury, suggesting different activation patterns in patients with successful recovery compared to those with a less favorable outcome (e.g. [38]). We therefore grouped the participating patients by aphasia severity. Furthermore, differences in the topographic distribution of the identified components may give further information about functional or dysfunctional changes in brain activation, especially with regard to activation of ipsilesional and contralesional processes.

Methods

Subjects

A total of 20 aphasic subjects were recruited consecutively from patients admitted to our hospital for rehabilitation. Eleven control subjects were recruited from hospital staff and non-brain-damaged patients of the hospital. All participants, with the exception of two severely and one moderately aphasic patient, reported being right-handed. Informed consent was obtained from all subjects. The study was approved by the regional research ethics committee of Eastern Norway.

All participants were examined with the auditory comprehension section of the Norwegian Basic Aphasia Assessment (NGA; [39]) and the Token test [40]. These tests measure comprehension of both single words and short sentences, in relation to naturalistic objects, body parts, and geometric tokens. In addition, the patients were investigated with the complete NGA. Furthermore, all patients were assessed by a neuropsychologist as part of their routine rehabilitation program. Etiology and lesion location were retrieved from the patients' medical charts – the latter from descriptions of CT or MRI scans.

In order to investigate whether different electrophysiological patterns depend on the severity of the auditory comprehension deficit, the aphasic subjects were divided into two groups: subjects with mild or moderate auditory comprehension impairment (moderate aphasia group) and subjects with severe or very severe auditory comprehension impairment (severe aphasia group). The cut-off for dichotomization was a score of 16.5 on the shortened version of the Token test, which corresponds to the border between moderate and severe aphasia as described by the test's authors [40].
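
As a minimal illustration of this grouping rule, consider the Python sketch below; the function name is ours, and treating a score exactly at the cut-off as severe is our assumption, not something stated in the text.

```python
def severity_group(token_score, cutoff=16.5):
    """Dichotomize aphasic subjects by the shortened Token test score.

    Lower scores indicate poorer comprehension; whether a score exactly
    at the cut-off counts as severe is our assumption.
    """
    return "severe" if token_score <= cutoff else "moderate"

print(severity_group(12.0))  # -> severe
print(severity_group(25.0))  # -> moderate
```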

Table 1 presents the patients with regard to sex, age, etiology, lesion site, aphasia type, language functions, and neuropsychological impairments. The aphasic subjects represent a wide range of auditory comprehension impairment. Global and Wernicke's aphasia dominated the severe aphasia group, while anomia and Broca's aphasia were most common in the moderate aphasia group. In both groups, most patients had lesions in the frontal and/or temporal lobes; the most common cause of aphasia was brain infarction, but less frequent etiologies such as traumatic brain injury and subarachnoid hemorrhage were also present. Besides apraxia, neuropsychological impairments were mainly in the areas of attention, memory, executive, and visual-spatial functions.

Table 1 Demographical and clinical data of the aphasic subjects participating in this study.

One-way analysis of variance (ANOVA) revealed significant differences (p < 0.001) between groups for all three clinical aphasia measures. The severe and moderate aphasia groups did not differ significantly from each other or from the control group with regard to sex, age, years of education, or time post injury (see table 2).

Table 2 Overview of the three investigated groups.

Stimuli

The participants were presented with a syllable detection paradigm using the same natural speech sounds as in our earlier study of automatic syllable discrimination [27]: The frequent standard syllable /ba:/ (p = 0.85) and the infrequent target syllable /ta:/ (p = 0.15) were presented with a stimulus onset asynchrony of 1.5 s in a pseudo-randomized order with the restriction that two targets could not follow each other (see additional file 1, a 30 s sample of the auditory stimuli). The syllables were digitally recorded from a female, middle-aged native speaker and cut and re-spliced at zero crossings of the steady-state vowel to obtain syllables of the same length (/ba:/ = 245.9 ms; /ta:/ = 245.2 ms). The recordings were low-pass filtered at 8 kHz, and the syllables had rise/fall times of 20 ms. A total of 205 syllables, among them 30 target syllables, were presented binaurally via headphones at approximately 80 dB SPL. The participants were seated comfortably in an armchair or their wheelchair and were instructed to press a button with the index finger of their preferred hand as soon as possible whenever they heard the target syllable /ta:/. Since many of the subjects had severe comprehension deficits, the stimuli (up to 15 targets and 40 standards) were first presented without EEG recording, and the subjects' reactions were observed to ensure that the participants had understood the task. Additionally, prior to the recordings for the present study, all subjects had been presented with the same syllable stimuli in an unattended paradigm [27] in the same session.
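
To make the sequencing constraint concrete, here is a minimal Python sketch of how such a pseudo-randomized sequence could be generated by rejection sampling; the function is our own illustration, not the authors' presentation software.

```python
import random

def oddball_sequence(n_total=205, n_targets=30, seed=1):
    """Pseudo-randomize standards and targets so that two target
    syllables never follow each other (rejection sampling)."""
    rng = random.Random(seed)
    while True:
        seq = ["ta:"] * n_targets + ["ba:"] * (n_total - n_targets)
        rng.shuffle(seq)
        # accept only sequences with no two adjacent targets
        if all(not (a == b == "ta:") for a, b in zip(seq, seq[1:])):
            return seq

seq = oddball_sequence()
print(seq.count("ta:"), len(seq))  # 30 targets among 205 stimuli
```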

ERP-procedure

EEG was recorded continuously with a sampling frequency of 500 Hz and an online band-pass filter from 0.05 to 70 Hz at the following electrode sites: Fz, Cz, Pz, Fp1/2, F3/4, C3/4, P3/4, F7/8, T3/4, T5/6, O1/2, M1, and M2. A nose reference electrode was used. The continuous EEG data were analyzed off-line using band-pass (1 – 15 Hz) zero-phase filtering and ocular artifact reduction based on vertical oculograms [41]. Sweeps with amplitudes exceeding +/- 100 μV in any channel except the vertical oculogram were excluded from the analysis.
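
The offline processing can be sketched as follows in Python/scipy; the 4th-order Butterworth filter, the array layout, and the omission of the ocular artifact correction [41] are our assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0  # sampling frequency in Hz

def preprocess(sweeps, low=1.0, high=15.0, reject_uv=100.0):
    """Zero-phase band-pass filter single-trial sweeps and reject
    those exceeding the amplitude criterion in any channel.

    sweeps: array of shape (n_sweeps, n_channels, n_samples) in
    microvolts, with the vertical oculogram channel already excluded.
    """
    b, a = butter(4, [low, high], btype="bandpass", fs=FS)  # order 4: our choice
    filtered = filtfilt(b, a, sweeps, axis=-1)  # forward-backward = zero phase
    keep = np.all(np.abs(filtered) < reject_uv, axis=(1, 2))
    return filtered[keep]
```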

The three left-handed participants had CT-verified right hemisphere lesions and left hemiparesis. For these participants, symmetrical and corresponding electrode labels were swapped between hemispheres. Thus, in this paper odd numbered electrode indices (F3, F7 ...) refer to the brain damaged hemisphere (normally the left) and even numbered electrode indices (F4, F8 ...) refer to the contralesional hemisphere. For the controls – all being right-handed – electrode labels of the left hemisphere are referred to as ipsilateral.
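A minimal sketch of this relabeling, assuming the data are stored as a mapping from electrode label to waveform (the function name and data layout are ours):

```python
def swap_hemispheres(erp_by_channel):
    """Swap corresponding left/right electrode labels so that odd
    indices always refer to the lesioned (here: right) hemisphere."""
    pairs = [("Fp1", "Fp2"), ("F3", "F4"), ("C3", "C4"), ("P3", "P4"),
             ("F7", "F8"), ("T3", "T4"), ("T5", "T6"), ("O1", "O2"),
             ("M1", "M2")]
    swapped = dict(erp_by_channel)
    for left, right in pairs:
        swapped[left], swapped[right] = erp_by_channel[right], erp_by_channel[left]
    return swapped
```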

The standard syllable (/ba:/) waveforms were analyzed for the N1 component, the responses to the target syllable (/ta:/) for N1 and P3. For each group separately, mean peak latencies for the components were defined as the mean of the individual peak latencies located at maxima in the following time windows: N1 = 60 – 180 ms and P3 = 300 – 700 ms. The Cz electrode was used to define the latencies for the standard and target N1, while Pz was used for the target P3. For each component, time intervals were centered on the relevant group's mean peak latency to calculate mean amplitudes; these intervals had a duration of 30 ms for the N1 and 50 ms for the P3 component. Using the intervals derived by this procedure, mean amplitudes for the following electrode sites were calculated and further analyzed: Fz, Cz, Pz, F3/4, C3/4, P3/4, F7/8, T3/4, T5/6. A similar analysis was performed separately for the mastoid electrodes (M1/2); these results did not give additional information and are therefore not reported.
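
The peak latency and mean amplitude measures can be sketched as follows (a numpy illustration under our own naming; the window parameters are from the text):

```python
import numpy as np

def peak_latency(erp, times, lo_ms, hi_ms, polarity=-1):
    """Latency of the largest deflection in a search window, e.g.
    60-180 ms for N1 (polarity=-1) or 300-700 ms for P3 (polarity=+1)."""
    win = (times >= lo_ms) & (times <= hi_ms)
    return times[win][np.argmax(polarity * erp[win])]

def mean_amplitude(erp, times, group_peak_ms, width_ms):
    """Mean amplitude in a window of width_ms (30 for N1, 50 for P3)
    centered on the group mean peak latency."""
    half = width_ms / 2.0
    win = (times >= group_peak_ms - half) & (times <= group_peak_ms + half)
    return float(erp[win].mean())
```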

Furthermore, subtraction waveforms (target - standard) were analyzed to elucidate the process of discriminating targets from standards. Mean amplitudes of successive 50 ms time windows in the range from 75 ms to 475 ms were calculated and analyzed; this time span contains the N2 component.
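
A corresponding sketch for the subtraction wave analysis (again our own illustration; inputs are numpy arrays of equal length):

```python
def difference_window_means(target_erp, standard_erp, times,
                            start_ms=75, stop_ms=475, width_ms=50):
    """Mean amplitudes of the subtraction wave (target - standard)
    in successive 50 ms windows; this range contains the N2."""
    diff = target_erp - standard_erp
    means = {}
    for lo in range(start_ms, stop_ms, width_ms):
        win = (times >= lo) & (times < lo + width_ms)
        means[(lo, lo + width_ms)] = float(diff[win].mean())
    return means
```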

Statistical analysis

We analyzed the mean amplitudes using a two-way ANOVA model with the between subjects factor "group" (severe aphasia vs. moderate aphasia vs. control) and the within group factors anterior-posterior "line" (frontal vs. central vs. parietal) and "electrode" (5 levels; for example F7, F3, Fz, F4, and F8 for the frontal electrode line). Thus a significant interaction involving the "electrode" factor might indicate a hemisphere difference, but would have to be further analyzed focusing on the relevant electrode contrasts. Greenhouse-Geisser and Bonferroni corrections were applied when appropriate. Latencies were compared between groups using one-way ANOVAs.

Furthermore, Spearman's rank correlation was used to analyze ERP amplitudes and latencies for correlations with time after brain injury, reaction time (RT), and clinical aphasia assessment results (NGA auditory comprehension, NGA total, and Token test). Only aphasic subjects were included in these analyses, except for the RT analysis, in which all participants were included. To reduce the risk of type I errors given the large number of correlation analyses performed, the significance level for correlations was set to 0.01.
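
A minimal sketch of one such correlation test (scipy; the wrapper function is our own):

```python
from scipy.stats import spearmanr

ALPHA = 0.01  # stricter level to limit type I errors over many tests

def erp_clinical_correlation(erp_values, clinical_scores):
    """Spearman rank correlation between one ERP parameter (e.g. N1
    mean amplitude per subject) and one clinical measure."""
    rs, p = spearmanr(erp_values, clinical_scores)
    return rs, p, p < ALPHA
```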

Results

Behavioral results

Almost all subjects detected all 30 targets; only three severely aphasic patients missed one target syllable each. Many participants had a few false alarms, but none more than four; no significant differences in false alarm rates were found. These results indicate that the task was rather easy.

The target response time was significantly prolonged in the patient groups (p < 0.05): while the mean reaction time was 383 ms in the control group, it was 465 ms in the moderate and 586 ms in the severe aphasia group.

Standard syllable N1

Grand average waveforms for the three groups respectively are presented in figure 1, mean amplitudes and standard deviations for selected electrodes in table 3.

Figure 1 Grand average waveforms. Vertex grand average waveforms for the standard (green) and the target stimulus (orange) and the difference curve (blue grey) for the three groups respectively.

Table 3 Mean amplitudes

The N1 component was registered as a centrally peaking component with the following mean group latencies and amplitudes: control: 115 ms, -7.01 μV; moderate aphasia: 115 ms, -4.31 μV; severe aphasia: 110 ms, -2.54 μV (figures 2 and 3). The two-way ANOVA revealed a significant between-groups effect (F [2, 28] = 10.67, p < 0.001). Post-hoc analysis showed a significant difference between the control and the severe aphasia group (p < 0.001) and a marginally significant difference between the control and the moderate aphasia group (p = 0.053).

Figure 2 Standard syllable waveforms. Grand average waveforms elicited by the standard syllable /ba:/ for the control (green), the moderate (blue) and the severe aphasia group (red) respectively.

Figure 3 Individual N1 mean amplitudes. Scatter plot showing the individual mean N1 amplitudes in μV (black dots) for each group separately, illustrating between-subject variation. Red bars indicate the respective group mean.

A significant line * group interaction was found (F [2, 28] = 3.15, p < 0.05). Analysis of each electrode line separately indicated that while the group differences were present in all lines, the anterior-posterior N1 distribution varied. The mean N1 amplitudes in the controls were evenly balanced frontally and parietally, whereas the moderate aphasia group had larger amplitudes over frontal than parietal sites (F [1,9] = 6.40, p < 0.05) and the severe aphasia group showed a non-significant tendency towards the inverse pattern. Furthermore, the two-way ANOVA revealed a highly significant electrode * group interaction (F [2, 28] = 8.20, p < 0.001), which reflected differences in the hemispheric distribution of the N1 (figure 4). The controls had an even hemispheric N1 distribution (difference between corresponding electrodes < 0.2 μV), while the moderate aphasia group showed a minor relative lateralization (~0.4 μV) to the ipsilesional hemisphere and the severe aphasia group a distinct relative lateralization (~1.2 μV) to the contralesional hemisphere. This lateralization difference was most prominent in central areas. A trend towards a line * electrode * group interaction (F [2, 28] = 2.00, p = 0.076) was observed in an analysis restricted to the frontal and central lines. When using a hemisphere model with the electrodes F3/4 and C3/4, we found a significant hemisphere * group interaction (F [1,28] = 3.38, p < 0.05).

Figure 4 Topographical distribution of the N1 component. The control (left) and the moderate aphasia group (middle) show an even hemispherical distribution while the N1 of the severe aphasia group (right) is clearly lateralized to the contralesional hemisphere.

Target syllable N1

The N1 to the target syllable could be visually distinguished from the N2 component especially at frontal and central sites (figure 5). It peaked about 10 ms later than the N1 elicited by the standard syllable (127 ms, 124 ms, and 119 ms for the control, the moderate and the severe aphasia group respectively). The vertex amplitude of the target syllable N1 was comparable to that of the standard syllable N1 (see also table 3): -7.20 μV (controls), -2.96 μV (moderate aphasia), and -2.70 μV (severe aphasia).

Figure 5 Target syllable waveforms. Grand average waveforms elicited by the target syllable /ta:/ for the control (green), the moderate (blue) and the severe aphasia group (red) respectively.

Two-way ANOVA showed a significant between-groups effect (F [2, 28] = 4.44, p < 0.05), which post-hoc analysis revealed to be significant for the control vs. moderate aphasia comparison (p < 0.05) and marginally significant for the control vs. severe aphasia comparison (p = 0.066). Further analysis of the topographic anterior-posterior distribution showed the same tendencies as for the standard N1, but generally at a non-significant level. Visual inspection indicated a tendency towards the same hemispheric distribution differences as observed for the standard syllable N1; a significant electrode * group interaction was found (F [2, 28] = 4.77, p < 0.001). The severe aphasia group showed larger amplitudes over the contralesional hemisphere, especially at central and parietal sites.

P3

The P3 component (figure 5, table 3) was observed in the controls as the typical large positivity with a parietal maximum peaking at 436 ms (4.32 μV). A somewhat earlier maximum was observed in the moderate aphasia group (peak: 419 ms, 4.58 μV). In the severe aphasia group, P3 was somewhat attenuated and peaked over the frontal midline (451 ms, 3.11 μV). However, no significant differences between groups in P3 mean amplitudes or latencies were found.

Subtraction curve analysis

The different time courses and distributions of the subtraction curves (target - standard waveform) for the three groups in successive 50 ms intervals in the time range 75 to 475 ms are illustrated in figures 6 and 7 (see also table 3). The negative processing difference of the control group started in the 125 – 175 ms window over left hemisphere temporo-parietal areas and developed into a large, central negativity that was registered over the whole scalp and lasted until about 325 ms. In the moderate aphasia group, the negative difference started in the same time window, but had a shorter duration and a more posterior and contralesionally centered maximum. The severe aphasia group showed low negative difference amplitudes at most electrode sites and no clear lasting central negativity. Analysis of variance revealed significant differences between groups solely in the three time windows between 175 and 325 ms, which are described in the following.

Figure 6 Subtraction waveforms. Grand average subtraction waveforms (target /ta:/ - standard /ba:/) for the control (green), the moderate (blue) and the severe aphasia group (red) respectively.

Figure 7 Subtraction wave, mean amplitudes from time windows. Mean subtraction amplitudes (target /ta:/ - standard /ba:/) at vertex illustrating the differences between the control (green), the moderate (blue) and the severe aphasia group (red) in time course and size of the N2 component. X-axis: 50 ms time-windows from 75 to 475 ms; y-axis: mean amplitudes in μV. Black bars indicate standard deviation (for graphical reasons only shown in one direction). Significant ANOVA between group effects are indicated: * p < 0.05, ** p < 0.01, *** p < 0.001.

For the 175 – 225 ms interval, the ANOVA showed a between-groups effect (F [2, 28] = 3.67, p < 0.05); post-hoc analysis revealed only a trend towards significance for the control vs. severe aphasia comparison (p = 0.062). A significant line * group interaction was found (F [2, 28] = 2.69, p < 0.05). Further analysis of the frontal line resulted in a trend towards a between-groups effect (p = 0.089), while we observed a significant difference for the parietal electrodes (p < 0.05). The largest amplitudes at this stage were found parietally in the control group, but centrally in the aphasic groups. For the parietal line, we also found an electrode * group interaction (F [2, 28] = 2.48, p < 0.05) with significantly different amplitudes between groups at the P3, P4, and Pz electrode sites. In this early segment of the processing difference, the control group's negativity was lateralized to the left hemisphere, whereas the moderate aphasia group showed higher amplitudes over the contralesional hemisphere at central and parietal sites.

The processing difference between target and standard stimulus in the 225 – 275 ms time window increased in the controls compared to the preceding interval, but decreased in the aphasic groups. Analysis of variance showed a between-groups effect (F [2, 28] = 10.80, p < 0.001) which was present between the controls and both the moderate (p < 0.01) and the severe aphasia group (p < 0.001). A significant line * group effect was found (F [2, 28] = 4.42, p < 0.01). The processing difference of the control group was now centered between the Cz and Pz electrodes and symmetrically distributed across hemispheres, while in the moderate aphasia group it still showed larger amplitudes over the hemisphere contralesional to the brain damage.

In the 275 – 325 ms interval, the vertex mean amplitudes of the control and the severe aphasia group remained rather unchanged, while a positive amplitude indicated the start of a P3 effect in the moderate aphasia group. In this time window, too, the processing difference showed a between-groups effect (p < 0.01); post-hoc analysis revealed a significant difference between the control and the moderate aphasia group (p < 0.01). Line * group (F [2, 28] = 3.37, p < 0.05) and electrode * group (F [2, 28] = 3.44, p < 0.05) interactions were significant, and a significant line * electrode * group interaction was found (F [2, 28] = 2.24, p < 0.05), but the pattern of electrode differences did not indicate systematic hemispheric differences.

Correlations between ERP parameters and clinical aphasia measures

Correlations between the Norwegian Basic Aphasia Assessment results (the auditory comprehension subsection and the NGA total score) and ERP components were found for mean amplitudes of the standard stimulus N1 at ipsilesional and midline fronto-central sites, and for the mean amplitude of the 325 – 375 ms subtraction curve interval at left lateral parieto-temporal sites (table 4). Additionally, we found trends towards correlations (p < 0.05) at other fronto-central sites and also between the Token test and ipsilesional fronto-central electrodes.

Table 4 Overview of significant correlations

For the target stimulus N1, trends (p < 0.1) towards correlations between the Token test and amplitudes at the C3 and Cz electrodes were observed, and furthermore between M1 and the NGA total score.

Correlations with reaction time

A positive correlation was found between P3 latency and reaction time (rs = 0.49, p < 0.01): the later the P3 component peaked, the longer the reaction time.

Correlations between ERP parameters and time after brain injury

Moderate correlations between ERP amplitudes and the time between brain injury and ERP investigation were found for the N1 component elicited by the standard and the target syllable (table 4). Mean N1 amplitudes were smaller the more time had passed since the brain injury.

Discussion

In the present study, we investigated the ability of severely and moderately aphasic patients to detect rare target syllables amongst frequent standard syllables and studied the electrophysiological processes involved. The aphasic groups performed this rather easy task accurately, though more slowly than the controls. Despite the aphasics' successful task performance, we found several significant differences in their electrophysiological processing indicators. No alterations in ERP latencies were observed, but changes in ERP amplitudes for components in the time range from about 100 to about 300 ms after stimulus onset indicate differences during or immediately following on-line stimulus processing. These changes comprised reduced primary stimulus processing, in the form of attenuated N1 amplitudes for both standard and target stimuli at a latency of about 110 to 120 ms, and a discrimination deficit between targets and standards in the interval from 175 to 325 ms post stimulus onset. In this time range, a clear N2 peak could be identified in the controls, whereas the aphasics showed a less distinct negative processing difference. P3 latency and amplitude did not differentiate between the groups, but P3 latency was associated with reaction time. N1 amplitude reduction at ipsilesional fronto-central sites correlated with the severity of auditory comprehension impairment. In addition, N1 amplitude at fronto-central electrode sites was smaller with increasing time after injury.

Topographic analysis indicated that moderate and severe aphasics showed different patterns of brain activation in order to solve the discrimination problem. Salient differences were that the severe aphasics showed a lateralization of activity focus to the contralesional hemisphere in an early processing window (N1), while showing no evidence of discriminatory activation in later time windows. The moderate aphasics on the other hand showed a more symmetrical activation in the corresponding early time window with evidence of discriminatory activity in later time windows. The implications will be discussed further below.

The observed attenuation of the N1 component in the aphasic groups is consistent with earlier findings for tone [29–34, 36] and word stimuli [28, 42]. A statistical correlation between N1 amplitude and measures of the severity of auditory comprehension impairment in aphasia has not been reported earlier, but two studies that also dichotomized their aphasic groups by auditory comprehension function reported a larger N1 reduction in the severe aphasia group [36, 43].

N1 reduction and its correlation with auditory comprehension impairment can be interpreted as impaired sound detection and orienting functions and deficient integration of the acoustic properties of speech sounds [18]. Reduced N1 amplitude was found for both the standard and the target syllable, which argues for a disturbance of primary stimulus processing independent of the stimulus' role in the task. This is supported by the fact that the discrimination analysis (subtraction wave) did not reveal differences between groups in the N1 time window, but only from 175 ms onwards.

The deviant electrophysiological patterns in the aphasic groups between 175 and 325 ms argue for disturbances in the attentional detection of the infrequent syllable /ta:/ and in its classification as the target stimulus. These differences were found in temporal stages of the N2 waveform at which speech sound processing has been shown to differ from purely acoustic processing [22].

However, the P3 component was not significantly altered in the aphasic groups, indicating no severe impairment of target detection or of the processes engaging the target reaction; this corresponds to the fact that the aphasic patients were able to detect the target syllables behaviorally. The lack of a significant P3 reduction – which contrasts with some results of earlier P3 studies of aphasic patients – might be due to the large difference between the stimuli and to the relatively low difficulty of the task. In earlier studies, the stimuli were rich tones differing in only one parameter: frequency [29, 31, 36, 37] or duration [37]. Actually, the reported P3 attenuation in the aphasic groups in most of these studies [29, 31, 36] was not caused by a general processing defect in aphasia but rather – as the authors noted – by the fact that several subjects were unable to perform the task; in the present study, even the very severely aphasic subjects were able to accomplish the task almost without errors.

The close relation between the P3 component and the target response was illustrated by a significant, though weak correlation between P3 latency and reaction time. Although reaction time was significantly prolonged in the aphasic groups, we did not observe P3 latency differences between groups. This might be explained by disturbances in "post-P3" executive motor functions in the aphasic subjects, many of whom had sensory-motor deficits involving the preferred hand.

How can it be explained that the aphasic subjects were able to perform the current task successfully while the electrophysiological parameters were significantly attenuated and even correlated with auditory comprehension measures? A possible explanation is that stimulus discrimination in at least some aphasic subjects was based not on linguistic analysis, but only or mainly on purely acoustic features. This strategy is adequate in a task with a very limited set of stimuli and no demands on semantic interpretation, but is not functional in a naturalistic comprehension task. Earlier studies have indeed shown that the ability to discriminate phonemes is a necessary but not sufficient condition for the correct identification of these phonemes, and report several aphasic subjects who could discriminate, but not identify, speech sounds [5, 15]. We would argue that the severe aphasia group, which showed the largest N1 amplitude reduction, has to rely primarily on acoustic analysis. Linguistic processing – which accounts for part of the N1 and a more substantial part of the N2 waveform – might thus be reduced in these subjects, even though such linguistic analysis was not necessary to perform the task correctly. In this perspective, one could furthermore argue that speech sound discrimination based on purely acoustic features requires more resources and is more exhausting than "normal" speech sound discrimination; this could be one reason why aphasic subjects often report that listening to language is fatiguing (cf. [44]).

There are some other possible explanations for the observed amplitude reductions. First, compensational pathways might exist in aphasic brains that are not revealed by ERPs, at least not as recorded in the present study; these might be processes asynchronous in relation to the stimuli. Alternatively, the N1 and N2 components in healthy subjects might be (partially) generated by redundant activity that can be reduced in brain injured individuals without impact on brain function. Also, one could question the usually proposed sequential nature of the processing steps reflected by the N1, N2, and P3 components: different processes might instead run in parallel. In injured brains, due to a conflict of resources, early processing steps might then be reduced because task-relevant processes are ongoing and prioritized.

However, an important objection to these interpretations of the present results is that the observed electrophysiological changes might not be due to impaired language functions, but rather solely to deficits in purely acoustic processing. Furthermore, one could argue that the amplitude attenuations might be merely unspecific effects of brain lesion and lesion size, not related to aphasia in particular. These problems can be addressed in a study using both a speech sound paradigm and a paradigm with purely acoustic stimuli, and by comparing aphasic patients with brain injured individuals without aphasia; we are pursuing this approach in an ongoing study.

Some interesting changes in the hemispherical distribution of brain activity were observed: while the N1 maximum was evenly distributed across hemispheres in the controls, the aphasic groups showed two contrasting patterns of N1 hemisphere distribution at fronto-central sites: in the moderately impaired aphasic subjects, N1 was evenly distributed or even slightly lateralized to the ipsilesional side, while it carried more relative weight over the non-damaged hemisphere in patients with severely impaired auditory comprehension. Similar to our results for the severe group, relatively enlarged N1 amplitudes at contralesional fronto-central sites have been reported [30, 42]. In a study using monaural stimulation, a similar pattern was found only for right-ear, but not for left-ear stimulation [35].

These findings might be explained by the effect of two different but interacting mechanisms. First, a general N1 reduction takes place that is directly caused by the brain damage and is larger in patients with larger brain lesions and more severe impairments, i.e. the severe aphasia group; this attenuation is probably largest over damaged areas. Second, different compensational mechanisms in response to the brain damage might exist: severely impaired patients activate the contralesional hemisphere relatively more than the ipsilesional hemisphere, while patients with lesser impairment show higher activation of the damaged than of the contralesional hemisphere. Thiel et al. [38, 45, 46] have reported similar lateralization differences between patients with moderate aphasia and those with more impaired language function and propose a hierarchy of language recovery in which compensational activation of perilesional areas leads to rather good results, while the contralesional hemisphere can be activated as part of a less efficient compensational mechanism. Our results regarding the N1 component support this hypothesis, and we note that the majority of significant correlations between auditory comprehension score and single-electrode N1 amplitudes involve ipsilesional fronto-central electrodes.

The ability to make use of compensational strategies in speech sound processing probably differs between aphasic subjects due to factors such as premorbid brain organization and lesion site and size, but also depends on features of the speech sounds being processed. This variation might be one reason for the complex relation between impaired speech sound perception and auditory comprehension in aphasia.

The clinical use of event-related brain potentials in order to explore and possibly monitor auditory comprehension in aphasia is under discussion [47–50]. The present study supports the usefulness of event-related potentials in the investigation of processes underlying auditory comprehension deficits in aphasia. As this study indicates, ERPs provide information about central auditory processing deficits even in tasks which are successfully accomplished by the aphasic subjects. Our results regarding the N1 and N2 waveforms – particularly the significant correlations of N1 amplitudes with clinical language comprehension assessment results – suggest that these waveforms deserve further attention in the exploration of auditory comprehension impairment in aphasia.

Conclusion

This study investigated attended speech sound processing in aphasia by recording event-related potentials during a syllable detection task. The aphasic subjects were able to perform the task almost without errors, and processes related to target identification (P3) were not significantly attenuated. However, electrophysiological components reflecting primary stimulus analysis (N1) and attended stimulus classification and discrimination (N2) indicated reduced processing, which constitutes a crucial weakness in more complex and naturalistic comprehension tasks. The aphasic subjects might have discriminated the stimuli by increased reliance on acoustic differences, and topographic differences between aphasic subgroups and controls indicate compensatory changes in activation. The degree to which compensational patterns of speech sound processing can be activated probably varies depending on lesion site, time after injury, and language task.

References

  1. Baker E, Blumstein SE, Goodglass H: Interaction between phonological and semantic factors in auditory comprehension. Neuropsychologia. 1981, 19: 1-15. 10.1016/0028-3932(81)90039-7.

  2. Basso A, Casati G, Vignolo LA: Phonemic identification defect in aphasia. Cortex. 1977, 13: 85-95.

  3. Baum SR: Consonant and vowel discrimination by brain-damaged individuals: effects of phonological segmentation. J Neurolinguistics. 2002, 15: 447-461. 10.1016/S0911-6044(00)00020-8.

  4. Blumstein SE, Baker E, Goodglass H: Phonological factors in auditory comprehension in aphasia. Neuropsychologia. 1977, 15: 19-30. 10.1016/0028-3932(77)90111-7.

  5. Blumstein SE, Cooper WE, Zurif EB, Caramazza A: The perception and production of Voice-Onset Time in aphasia. Neuropsychologia. 1977, 15: 371-372. 10.1016/0028-3932(77)90089-6.

  6. Caplan D, Gow D, Makris N: Analysis of lesions by MRI in stroke patients with acoustic-phonetic processing deficits. Neurology. 1995, 45: 293-298.

  7. Gandour J, Dardarananda R: Voice onset time in aphasia: Thai. I. Perception. Brain Lang. 1982, 17: 24-33. 10.1016/0093-934X(82)90002-5.

  8. Jauhiainen T, Nuutila A: Auditory perception of speech and speech sounds in recent and recovered cases of aphasia. Brain Lang. 1977, 4: 572-579. 10.1016/0093-934X(77)90047-5.

  9. Miceli G, Gainotti G, Caltagirone C, Masullo C: Some aspects of phonological impairment in aphasia. Brain Lang. 1980, 11: 159-169. 10.1016/0093-934X(80)90117-0.

  10. Miceli G, Caltagirone C, Gainotti G, Payer-Rigo P: Discrimination of voice versus place contrasts in aphasia. Brain Lang. 1978, 6: 47-51. 10.1016/0093-934X(78)90042-1.

  11. Milberg W, Blumstein S, Dworetzky B: Phonological processing and lexical access in aphasia. Brain Lang. 1988, 34: 279-293. 10.1016/0093-934X(88)90139-3.

  12. Square-Storer P, Darley FL, Sommers RK: Nonspeech and speech processing skills in patients with aphasia and apraxia of speech. Brain Lang. 1988, 33: 65-85. 10.1016/0093-934X(88)90055-7.

  13. Tallal P, Newcombe F: Impairment of auditory perception and language comprehension in dysphasia. Brain Lang. 1978, 5: 13-24. 10.1016/0093-934X(78)90003-2.

  14. Varney NR: Phonemic imperception in aphasia. Brain Lang. 1984, 21: 85-94. 10.1016/0093-934X(84)90038-5.

  15. Yeni-Komshian GH, Lafontaine L: Discrimination and identification of voicing and place contrasts in aphasic patients. Can J Psychol. 1983, 37: 107-131.

  16. Hickok G, Poeppel D: Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition. 2004, 92: 67-99. 10.1016/j.cognition.2003.10.011.

  17. Näätänen R, Picton T: The N1 wave of the human electric and magnetic response to sound: a review and an analysis of the component structure. Psychophysiology. 1987, 24: 375-425.

  18. Näätänen R, Winkler I: The concept of auditory stimulus representation in cognitive neuroscience. Psychol Bull. 1999, 125: 826-859. 10.1037/0033-2909.125.6.826.

  19. Näätänen R: The perception of speech sounds by the human brain as reflected by the mismatch negativity (MMN) and its magnetic equivalent (MMNm). Psychophysiology. 2001, 38: 1-21.

  20. Roberts TP, Ferrari P, Stufflebeam SM, Poeppel D: Latency of the auditory evoked neuromagnetic field components: stimulus dependence and insights toward perception. J Clin Neurophysiol. 2000, 17: 114-129. 10.1097/00004691-200003000-00002.

  21. Pritchard WS, Shappell SA, Brandt ME: Psychophysiology of N200/N400: a review and classification scheme. Adv Psychophysiol. 1991, 4: 43-106.

  22. Sussman E, Kujala T, Halmetoja J, Lyytinen H, Alku P, Näätänen R: Automatic and controlled processing of acoustic and phonetic contrasts. Hear Res. 2004, 190: 128-140. 10.1016/S0378-5955(04)00016-4.

  23. Näätänen R: Attention and Brain Function. 1992, Hillsdale, New Jersey, Lawrence Erlbaum Associates

  24. Johnson R: On the neural generators of the P300 component of the event-related potential. Psychophysiology. 1993, 30: 90-97.

  25. Linden DE: The P300: where in the brain is it produced and what does it tell us?. Neuroscientist. 2005, 11: 563-576. 10.1177/1073858405280524.

  26. Picton TW: The P300 wave of the human event-related potential. J Clin Neurophysiol. 1992, 9: 456-479.

  27. Becker F, Reinvang I: Mismatch negativity elicited by tones and speech sounds: Changed topographical distribution in aphasia. Brain Lang. 2007, 100: 69-78. 10.1016/j.bandl.2006.09.004.

  28. Brown CM, Hagoort P, Swaab TY: Neurophysiological Evidence for a Temporal Disorganization in Aphasic Patients with Comprehension Deficits. Aphasietherapie im Wandel. Edited by: Widdig W, Ohlendorff I and Malin JP. 1997, Freiburg, Hochschulverlag, 89-122.

  29. Hagoort P, Brown CM, Swaab TY: Lexical-semantic event-related potential effects in patients with left hemisphere lesions and aphasia, and patients with right hemisphere lesions without aphasia. Brain. 1996, 119 (Pt 2): 627-649. 10.1093/brain/119.2.627.

  30. Pool KD, Finitzo T, Hong CT, Rogers J, Pickett RB: Infarction of the superior temporal gyrus: a description of auditory evoked potential latency and amplitude topology. Ear Hear. 1989, 10: 144-152.

  31. Swaab TY, Brown C, Hagoort P: Understanding ambiguous words in sentence contexts: electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia. 1998, 36: 737-761. 10.1016/S0028-3932(97)00174-7.

  32. Woods DL, Knight RT, Scabini D: Anatomical substrates of auditory selective attention: behavioral and electrophysiological effects of posterior association cortex lesions. Brain Res Cogn Brain Res. 1993, 1: 227-240. 10.1016/0926-6410(93)90007-R.

  33. Knight RT, Scabini D, Woods DL, Clayworth C: The effects of lesions of superior temporal gyrus and inferior parietal lobe on temporal and vertex components of the human AEP. Electroencephalogr Clin Neurophysiol. 1988, 70: 499-509. 10.1016/0013-4694(88)90148-4.

  34. Knight RT, Hillyard SA, Woods DL, Neville HJ: The effects of frontal and temporal-parietal lesions on the auditory evoked potential in man. Electroencephalogr Clin Neurophysiol. 1980, 50: 112-124. 10.1016/0013-4694(80)90328-4.

  35. Ilvonen TM, Kujala T, Tervaniemi M, Salonen O, Näätänen R, Pekkonen E: The processing of sound duration after left hemisphere stroke: Event-related potential and behavioral evidence. Psychophysiology. 2001, 38: 622-628. 10.1111/1469-8986.3840622.

  36. Swaab TY, Brown C, Hagoort P: Spoken Sentence Comprehension in Aphasia: Event-related Potential Evidence for a Lexical Integration Deficit. J Cogn Neurosci. 1997, 9: 39-66.

  37. Knight RT, Scabini D, Woods DL, Clayworth CC: Contributions of temporal-parietal junction to the human auditory P3. Brain Res. 1989, 502: 109-116. 10.1016/0006-8993(89)90466-6.

  38. Thiel A, Herholz K, Koyuncu A, Ghaemi M, Kracht LW, Habedank B, Heiss WD: Plasticity of language networks in patients with brain tumors: a positron emission tomography activation study. Ann Neurol. 2001, 50: 620-629. 10.1002/ana.1253.

  39. Reinvang I: Norwegian Basic Aphasia Assessment. Aphasia and Brain Organization. 1985, New York, Plenum Press, 181-192.

  40. De Renzi E, Faglioni P: Normative data and screening power of a shortened version of the Token test. Cortex. 1978, 14: 41-49.

  41. Semlitsch HV, Anderer P, Schuster P, Presslich O: A solution for reliable and valid reduction of ocular artifacts, applied to the P300 ERP. Psychophysiology. 1986, 23: 695-703.

  42. Rothenberger A, Szirtes J, Jürgens R: Auditory evoked potentials to verbal stimuli in health, aphasic, and right hemisphere damaged subjects. Pathway effects and parallels to language processing and attention. Arch Psychiatr Nervenkr. 1982, 231: 155-170. 10.1007/BF00343837.

  43. Praamstra P, Stegeman DF, Kooijman S, Moleman J: Evoked potential measures of auditory cortical function and auditory comprehension in aphasia. J Neurol Sci. 1993, 115: 32-46. 10.1016/0022-510X(93)90064-6.

  44. Le Dorze G, Brassard C: A description of the consequences of aphasia on aphasic persons and their relatives and friends, based on the WHO model of chronic disease. Aphasiology. 1995, 9: 239-255.

  45. Heiss WD, Kessler J, Thiel A, Ghaemi M, Karbe H: Differential capacity of left and right hemispheric areas for compensation of poststroke aphasia. Ann Neurol. 1999, 45: 430-438. 10.1002/1531-8249(199904)45:4<430::AID-ANA3>3.0.CO;2-P.

  46. Winhuisen L, Thiel A, Schumacher B, Kessler J, Rudolf J, Haupt WF, Heiss WD: Role of the contralateral inferior frontal gyrus in recovery of language function in poststroke aphasia: a combined repetitive transcranial magnetic stimulation and positron emission tomography study. Stroke. 2005, 36: 1759-1763. 10.1161/01.STR.0000174487.81126.ef.

  47. Csepe V, Molnar M: Towards the possible clinical application of the mismatch negativity component of event-related potentials. Audiol Neurootol. 1997, 2: 354-369.

  48. Hyde M: The N1 response and its applications. Audiol Neurootol. 1997, 2: 281-307.

  49. Näätänen R: Mismatch negativity: clinical research and possible applications. Int J Psychophysiol. 2003, 48: 179-188. 10.1016/S0167-8760(03)00053-9.

  50. Giaquinto S: Evoked potentials in rehabilitation. A review. Funct Neurol. 2004, 19: 219-225.

Acknowledgements

This project has been financed with the aid of EXTRA funds from the Norwegian Foundation for Health and Rehabilitation. FB is financed by these funds, by the University of Oslo and by Sunnaas Rehabilitation Hospital. IR is financed by the University of Oslo.

Author information

Corresponding author

Correspondence to Frank Becker.

Additional information

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

The study was designed and planned by FB and IR. FB carried out the data acquisition. Statistical analysis was performed by FB under the supervision of IR. The manuscript was drafted by FB. Both authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Becker, F., Reinvang, I. Successful syllable detection in aphasia despite processing impairments as revealed by event-related potentials. Behav Brain Funct 3, 6 (2007). https://doi.org/10.1186/1744-9081-3-6
