BRAIN AND LANGUAGE 23, 13-25 (1984)

Analysis of Word Comprehension in a Case of Pure Word Deafness

MARIE-NOËLLE METZ-LUTZ AND EVELYNE DAHL

Clinique Neurologique, Hospices Civils, CHU, Strasbourg, France

Send requests for reprints to Marie-Noëlle Metz-Lutz, Service de Neuropsychologie, Clinique Neurologique, Hospices Civils, CHU, 67091 Strasbourg Cedex, France.

A case of pure word deafness due to a left temporal infarct is reported. The results of dichotic tests suggest that auditory verbal material may be processed in the right hemisphere. The inability to repeat nonsense words, the frequent semantic paraphasias in real-word repetition tasks, and the capacity to give a partial account of the meaning of a word that the patient cannot repeat show that, despite the impairment of phonological analysis, lexical-semantic processing is possible. An attempt is made to demonstrate that the patient resorts to this semantic processing and that this reflects the linguistic competence of the right hemisphere.

Word deafness in its pure form is very rare. In 1877, Kussmaul reported cases where patients with impaired understanding of spoken language were able to speak almost normally. He coined the term "word-deafness." In 1885, Lichtheim described a case of "pure word deafness," which he defined as the inability to understand spoken language, to repeat spoken words, and to write to dictation, without disturbance of spontaneous speech, writing, or reading. For Lichtheim, this syndrome is due to a deep lesion of the left temporal lobe which is thought to isolate the auditory association area from primary auditory input. This locus accounts for the frequent aphasic involvement typically observed at the onset (Saffran, Marin, & Yeni-Komshian, 1976; Albert & Bear, 1974). Bilateral lesions have been found in some cases (Goldstein, 1974; Auerbach, Allard, Naeser, Alexander, & Albert, 1982; Von Stockert, 1982), but in these cases the deficit concerned both linguistic and nonlinguistic auditory material (music, nonverbal noises).

In recent years, some authors have attempted to clarify the nature of the perceptual disturbance in pure word deafness. Albert and Bear (1974) emphasized the role of time in auditory comprehension. Saffran et al. (1976), using synthetic speech stimuli and dichotic listening techniques, suggest "that the perceptual deficit in word deafness is the result of an arrest of speech processing at a prephonetic level, a hypothesis that is in accord with both the classical anatomical conception of verbal agnosia and contemporary theories of speech perception."

Fluctuation of auditory comprehension in word deafness has already been reported (Von Stockert, 1982; Ulrich, 1977). It is presumed that auditory verbal material can be decoded by each hemisphere but through different mechanisms, and that the minimal recovery observed may be due to the bilaterality of the lesions.

In this case study, we were particularly interested in the way auditory comprehension improved. It seemed that the patient, despite the deficit of speech perception, compensated by using another way of understanding. This way is sufficient to grasp a partial meaning of a word, but not to perform the auditory-phonological conversion required for repetition, especially nonsense-word repetition.

CASE DESCRIPTION

G.L. is a 24-year-old, right-handed female.
Her admission to the Neurology Department on September 16, 1980, was due to the sudden onset of language disturbances. Several days earlier, she had had two brief and completely regressive episodes of expressive difficulties. On admission, except for the language disorders, consisting of an inability to understand and garbled speech (described in detail below), the neurological examination was normal. She was well oriented in time and place; her level of attention was normal. The electroencephalogram showed anterior and middle temporal slow waves, which improved over a period of 1 week. The CT scan was normal the day following onset, but 1 year later a CT scan using contrast demonstrated a left temporal hypodensity spreading from the cortex to the deep structures (Fig. 1). This overall neurological picture was compatible with an occlusive vascular lesion involving the territory of the left middle cerebral artery. Further biological findings implied an arteritis.

FIG. 1. G.L.'s CT scan (left hemisphere on the right-hand side).

APHASIA EXAMINATION

On September 17, the day following onset, the aphasic symptomatology involved comprehension and expression. Spontaneous speech sounded like jargon, with neologisms, frequent phonemic paraphasias, and dyssyntactic errors. Understanding of spoken language was severely impaired for all categories of verbal material. Lip-reading facilitated comprehension only of isolated real words: it did not help discrimination of letters and digits or comprehension of complex or long linguistic material. Repetition was impossible when lip-reading was not allowed. Reading was correct for words and simple sentences. Some paralexic errors occurred for long and more complex sentences. Writing showed paragraphic and dyssyntactic errors.

This aphasic symptomatology evolved in a few days: spontaneous speech, reading, and writing improved and became almost normal; difficulties in understanding and in repetition persisted. Figure 2 shows the evolution of the results from the BDAE in the 2 months following onset.

FIG. 2. Z-score profiles of aphasia (severity rating, fluency, auditory comprehension, repetition, oral reading, paraphasia, automatic speech, reading comprehension, writing, music, and parietal subtests); continuous line: at onset; dashed line: two months later.

Comments. The first profile (continuous line) is that of a Wernicke's aphasia, with paraphasic errors in spontaneous speech, naming, and oral reading, as well as very poor performances in auditory comprehension, repetition, and writing to dictation. There was no restriction of lip-reading in the two administrations of the BDAE: that may explain the results in auditory comprehension, especially in word discrimination and body-part identification, where the items are single words. In the first administration, resort to lip-reading did not help repetition, even of single words.

Two months later (dashed line), performances improved in most of the subtests, but auditory comprehension remained difficult in spite of resorting to a more effective lip-reading. Naming performances were better: in responsive naming the performance depended on auditory comprehension; in animal naming and body-part naming the score was lowered by the increased time required for word finding; in visual confrontation naming the performances were the best. Oral reading was normal.
In reading comprehension, only word recognition and comprehension of oral spelling, which both test understanding of spoken or orally spelled words, were very poor. In word recognition, the patient twice selected a connotatively similar word without taking into account the phonological form of the stimulus item. The lowest performances were in repetition tasks and in writing to dictation. There was some effect of lip-reading at the one-word level, which decreased for sentences and spelling. In the second BDAE administration there were no more neologisms or phonemic paraphasias; only verbal semantic paraphasias occurred in repetition tasks. The fact that lip-reading was allowed may explain the low occurrence of semantic paraphasias. Most of the parietal subtests improved in the second administration. Only stick memory remained weak, as did rhythm reproduction in the musical tests.

On the basis of the BDAE results 2 months after onset, this case may reasonably be considered to be a pure word deafness.

1. NEUROPHYSIOLOGICAL INVESTIGATIONS

Hearing was tested by pure tone audiometry, which showed a mild mixed hypoacousia in the right ear. We report in Table 1 the results for the different speech frequencies.

Brainstem auditory evoked potentials. In response to clicks of 70 and 90 dB, the potentials recorded were normal in latency and amplitude, with normal thresholds on the left and the right sides.

TABLE 1
PURE TONE AUDIOMETRY

Frequency (Hz)    500    1000    2000    4000
Left ear (dB)     +15    +15     +20     +15
Right ear (dB)    +20    +15     +30     +20

2. NEUROPSYCHOLOGICAL INVESTIGATIONS

Nonverbal Auditory Discrimination

A test for auditory agnosia was administered in the first week following onset. It was composed of four subtests:

-In discrimination of 20 tape-recorded meaningful nonverbal sounds, the patient had to match the sounds to their sources using a multiple-choice answer sheet with either pictures or written names. She responded correctly to 19 without hesitation.

-Recognition of musical instruments was fairly good. In this test the patient had to either name the instrument or point to the correct written name of the instrument on a multiple-choice answer sheet after listening to tape-recorded pieces of music played by a soloist. She was not a musician, and already before onset it would have been difficult for her to distinguish between a violin and a violoncello, or between different brass instruments, for example. In spite of this she performed correctly on 8 out of 10 trials. In the two incorrect cases she was able to distinguish between different families of instruments.

-Recognition of familiar song melodies was better for hummed melodies than for sung melodies. In the first case, she gave 9 correct responses out of 10; in the second she seemed hindered by the words and recognized only 5 melodies.

-Reproduction of rhythms was poor. The patient could imitate only four very simple and short rhythmic patterns tapped with a pencil by the examiner; she failed for longer or more complex patterns.

Comments. There is no auditory agnosia. Musical ability may be considered normal. The poor performance in reproduction of rhythms may be due to a reduced auditory span.

Verbal Auditory Discrimination

Foreign language discrimination and identification. Even severe aphasics recognize foreign languages (Boller & Green, 1972). Is this ability preserved in pure word deafness?
From a set of 20 tape-recorded short sentences (5 in French, 5 in German, 5 in English, and 5 in Spanish), the patient had to determine after each sentence whether the language spoken was French, her native tongue, or foreign, and if possible to indicate which foreign language. She performed correctly without hesitation (20/20).

Discrimination of intonation contours. Blumstein and Cooper (1974) demonstrated that the right hemisphere is involved in the processing of intonation contours. In a case of pure word deafness due to a unilateral left lesion, it may be predicted that this ability to determine the semantic function of intonation is preserved.

Twenty short French tape-recorded sentences (5 affirmative statements, 5 questions, 5 imperative statements, and 5 negative statements) were presented to the patient, who had to select one of four corresponding cards (full stop, question mark, exclamation point, and the negative French adverb group "ne . . . pas") after each sentence. She gave 19 correct responses. The error occurred when the patient took a negative statement for an affirmative one. In fact, the intonation patterns of these two types of statements are very close in French. It is interesting to note that no confusion occurred between types of sentences with very different intonation patterns.

Lexical decision tasks. Considering the results of the second administration of the BDAE, auditory comprehension remained difficult in spite of resorting to lip-reading. In the repetition tasks, there were only semantic paraphasias, which approximate the meaning of the items. Does this mean that auditory verbal material is processed in a lexical-semantic way? If so, the patient must be able to distinguish real words from nonsense words.

LEXICAL DECISION IN THE AUDITORY MODALITY

Thirty real words and thirty nonsense words were tape-recorded in random order with a 5-sec interval between them. The real words were chosen among the 500 French words of most frequent occurrence (Gougenheim, Rivenc, Michea, & Sauvageot, 1964). The nonwords were real words converted into pronounceable nonwords by changing one or two phonemes. The patient was asked to indicate after each stimulus whether or not it was a real French word. In this task she performed quickly, and only one error occurred (a nonword accepted as a real word). This nonword was the homophone of the name of a well-known French supermarket. The same test was presented in the visual modality a few days later and no error occurred. When asked to read aloud the written stimuli, she performed very well without hesitation, even for nonwords.

These findings show that in this case of word deafness some linguistic processes are still preserved. They permit intonation contour discrimination and lexical decision. The foreign language test, since it used full sentences and not isolated words, may have been performed by means of intonation contour discrimination as well as by lexical or phonological discrimination. In order to demonstrate the resort to lexical-semantic processing when phonemic discrimination is impaired, as assumed by many authors (Albert & Bear, 1974; Auerbach et al., 1982), we compared the repetition of words and nonwords.

1. Repetition Task

Methods. While real words may be processed in a lexical way, the repetition of nonwords requires an auditory-phonological conversion, which needs phonemic discrimination.
Two lists were established: one of 50 mono-, di-, and trisyllabic real words (25 nouns, 15 adjectives, and 10 verbs from the 500 most frequent French words) and one of 50 nonsense words. Each item was spoken individually and the patient was asked to repeat it and, if not able, at least to try to give its meaning. The two lists were presented separately. The patient was informed that all the items on the second list were nonsense words. Lip-reading was not allowed and repetition time was unlimited. Only the first response was taken into account.

Results and comments (Tables 2, 3). Performances in word repetition are significantly better than in nonword repetition. In real-word repetition all errors are semantic paraphasias. Semantic paraphasias and meaning approximations represent 56% of the responses. On the other hand, in repetition of nonwords, which requires phonemic discrimination, the errors and failures are more frequent (88%) and all errors are phonemic paraphasias.

TABLE 2
RESULTS IN REPETITION OF WORDS AND NONWORDS

Repetition            Correct    Errors(a)    Failures(b)    Approximations(c)
Real words, n = 50
Nonwords, n = 50

(a) Error: the patient gives a word or a nonword which is not exactly the stimulus item.
(b) Failure: the patient cannot repeat.
(c) Approximation: the patient gives the meaning of the word.

TABLE 3
TYPES OF ERRORS IN REPETITION OF WORDS AND NONWORDS

Repetition            Phonemic paraphasias    Semantic paraphasias
Real words, n = 50             0                      16
Nonwords, n = 50              28                       0

These results are consistent with the prediction that when phonemic processing is impaired the patient is able to resort to another route which goes through lexical-semantic processing. This route permits access to the meaning of a word but not to its phonological form, as demonstrated by the inability to repeat. If the left temporal lesion impaired phonemic discrimination, the lexical-semantic route may represent the right hemisphere's capacity to process auditory linguistic material.

2. Dichotic Listening Studies

The dichotic listening model has already been applied by Albert and Bear (1974) and Saffran et al. (1976). Their patients had an extinction of the right-ear signal under dichotic conditions. In the present case of word deafness, due to a left temporal lesion demonstrated by CT scan, it may be predicted that under dichotic competition the patient should have difficulty perceiving right-ear stimuli.

Methods. Two dichotic tests were performed. The first was composed of 20 pairs of CV syllables whose initial consonants differed (ka, ta, la, . . .); the vowel remained the same. Only half of these syllables were meaningless; the others may be interpreted as monosyllabic French real words. They were presented in random order to the right and the left ears. The second test used meaningful items: 10 pairs of digits and 40 pairs of nouns (10 monosyllabic CV, VC, and CVC, and 30 dissyllabic CVCV). The stimuli were presented through stereophonic headphones, reversed for a retest 1 week later. The patient was asked to repeat after each trial. She was aware that two stimuli were presented. Since audiometric testing had shown a mild hypoacousia of the right ear, the intensity of the right-ear stimuli was increased proportionately.

Results and comments (Table 4). Compared to the average results of 36 normal subjects, the performances of G.L. are very poor, especially for dissyllabic words. As expected, the performances are much better for the left-ear stimuli, but only for single syllables and digits.
This may be due to the fact that the patient had to repeat what she perceived. Perhaps the performances would have been different if the patient had had to answer from a multiple choice of written syllables, digits, or nouns. These results nevertheless show that auditory verbal stimuli may be processed in the right hemisphere and that this processing is better for short stimuli and for digits. Before the test the patient was informed that she would have to repeat syllables in the first test, and digits and then nouns in the second. The performances for digits may be explained by the fact that she had a strong semantic cue in this task.

TABLE 4
RESULTS IN DICHOTIC LISTENING TESTS

                      Percentage of correct responses
Item                  Left ear       Right ear
Syllables             35 (43.9)      10 (44.3)
Digits                50 (60)        10 (60.5)
Nouns
  Monosyllabic        10 (61.9)       0 (62.7)
  Dissyllabic          3 (59.4)       0 (60)

Note. Mean percentages of correct responses from 36 normal subjects are given in parentheses.

3. Role of Time in Auditory Comprehension

The role of time in word comprehension has been emphasized by Albert and Bear (1974). The authors demonstrated in a case of word deafness that auditory linguistic processing was rate dependent. Using tests of recognition and reproduction of digit trigrams, they observed that comprehension improved significantly at slower presentation rates and that the positional advantage of the first digit exists only if lip-reading is allowed. In their tests, only monosyllabic digits were used. It may be asked whether in such tests the rate dependency concerns phonological units such as syllables or lexical-semantic units such as digits.

In our case, the results of the dichotic tests show that the performances are better for phonological units such as syllables and for digits, which are monosyllabic. As said above, half of the syllables of the first dichotic test may be interpreted as lexical items, and all the syllables correctly repeated by our patient had the same pronunciation as French real words. For example, the syllable /ba/ is pronounced like the French word "bas" (low, or stockings); /ta/ may be the feminine possessive adjective "ta" or the substantive "tas" (heap, pile). Similarly, in the repetition tasks the patient tried to grasp part of the meaning rather than part of the phonological form, despite the phonetic cues given by lip-reading. Therefore we tried to differentiate the effect of a slower presentation rate when words were segmented into syllables or into morphemes (prefix, root, suffix, flexional ending, etc.).

Methods. We constructed a list of 30 "compound" words (radical + prefix or suffix): 15 nouns, 10 adverbs, and 5 adjectives. These words were presented in three different manners:

A: the words were pronounced at a normal rate;
B: the words were pronounced at a slower rate with a syllabic segmentation (2-sec separation);
C: the words were pronounced at a slower rate with a morphemic segmentation (also 2-sec separation).

For example, "saucière" [so:sjɛR] (sauce boat):
(A) [so:sjɛR]
(B) [so: | 2 sec | sjɛR]
(C) [so:s | 2 sec | jɛR]

In each presentation the order of the 30 words was different and the patient was not made aware of her errors. The words were presented for repetition first in manner A, 2 days later in manner B, and only 1 week later in manner C. In the three presentations, lip-reading was not allowed.

Results and comments (Table 5).
In the slower rate presentation with morphemic segmentation (manner C), the performances are significantly better than in the slower rate presentation with syllabic segmentation (manner B). This cannot be interpreted as an effect of learning: 2 days after the third presentation the patient had to repeat the same words pronounced at a normal rate, and she repeated correctly only 11 out of 30.

TABLE 5
ROLE OF TIME IN AUDITORY COMPREHENSION: RESULTS IN REPETITION

Condition              A1      B       C       A2
Items, n = 30           9     15      25      11
Percentage             30     50      83.3    36.6

Note. A1: at normal rate (first presentation); B: at slower rate with syllabic segmentation; C: at slower rate with morphemic segmentation; A2: at normal rate (second presentation).

These results show that the rate dependency of auditory comprehension is more effective when it concerns units of meaning, such as morphemes, than units of phonological form, such as syllables. When the words are segmented into morphemes, the repetition as well as the understanding of single words improves: when the patient failed to repeat, she was able to give the meaning of the word. This bears out what was suggested by the performances in word and nonword repetition: that the patient resorts to a semantic processing which seems improved by the morphemic segmentation.

DISCUSSION

The present case study suggests, as proposed by the logogen model developed by Morton (1969, 1980), that there are several paths between the auditory input and the response buffer. For Morton (1980), word deafness and conduction aphasia could be the mirror syndrome of deep dyslexia, requiring "no semantic paralexias, no problem in reading nonsense words, semantic paraphasias in repetition of words, and inability to repeat nonsense words." This case of word deafness may be interpreted according to this logogen model as an inability to perform auditory-phonological conversion while linguistic processing is still possible. For Morton, this corresponds "to a disruption of the path involving the auditory-phonological conversion together with a disconnection of the direct route between the auditory logogen system and the output logogen system." So the remaining capacity to understand uses the only available path, going from the auditory input logogen system through the cognitive system. Thus semantic access is possible without resorting to the phonetic code, which explains the frequent semantic paraphasias in the repetition of single words, the capacity to give the meaning when the word cannot be repeated, and the improvement of word repetition performances by morphemic segmentation of the verbal stimuli.

In a case of pure word deafness due to a left temporal lesion, Saffran et al. (1976) demonstrated that "the perceptual deficit . . . is a result of an arrest of speech processing at a prephonetic level." In the present case the dichotic tests suggest that the right hemisphere may process some auditory linguistic material. If so, the remaining capacity for word comprehension, with the resort to a lexical-semantic route, may be dependent on the right hemisphere's linguistic competence.

Gainotti, Caltagirone, Miceli, and Masullo (1981) studied semantic-lexical and phonemic discrimination in 50 right-brain-damaged patients. They concluded that lesions of the right hemisphere impaired semantic-lexical discrimination but not phonemic discrimination.
Leaving aside the effects of mental deterioration and unilateral spatial inattention, they consider that the lexical-semantic impairment is due to the damage to the right hemisphere. This is consistent with the data obtained from split-brain patients by Zaidel (1976).

CONCLUSION

In this case study of pure word deafness, we were more interested in the residual capacity to understand than in the perceptual deficit itself, which has been well studied in recent years. The left unilateral lesion and the data from the dichotic listening tests allow us to suggest that these residual capacities belong to the right hemisphere. The inability to repeat nonsense words, the frequent semantic paraphasias or semantic approximations, and the more effective rate dependency when it concerns units of meaning in repetition tasks show that the right hemisphere's processing is probably of a high linguistic or cognitive nature. The tests used in our observation do not claim to distinguish between these two kinds of processing.

REFERENCES

Albert, M. L., & Bear, D. 1974. Time to understand: A case study of word deafness with reference to the role of time in auditory comprehension. Brain, 97, 873.
Auerbach, S. M., Allard, T., Naeser, M. A., Alexander, M., & Albert, M. L. 1982. Pure word deafness: Analysis of a case with bilateral lesions and a deficit at the prephonemic level. Brain, 105, 271-300.
Blumstein, S. E., & Cooper, W. E. 1974. Hemispheric processing of intonation contours. Cortex, 10, 146-159.
Boller, F., & Green, E. 1972. Comprehension in severe aphasia. Cortex, 8, 815-830.
Gainotti, G., Caltagirone, C., Miceli, G., & Masullo, C. 1981. Selective semantic-lexical impairment of language comprehension in right brain damaged patients. Brain and Language, 13, 201-211.
Goldstein, M. N. 1974. Auditory agnosia for speech ("pure word deafness"): A historical review with current implications. Brain and Language, 1, 195-204.
Goldstein, M. N., Brown, M., & Hollander, J. 1975. Auditory agnosia and word deafness: An analysis of a case with three years follow-up. Brain and Language, 2, 324-332.
Goodglass, H., & Kaplan, E. 1972. The assessment of aphasia and related disorders. Philadelphia: Lea & Febiger.
Gougenheim, G., Rivenc, P., Michea, R., & Sauvageot, A. 1964. L'élaboration du français fondamental. Paris: Didier.
Kussmaul, A. 1877. Disturbances of speech. In H. von Ziemssen (Ed.), Cyclopedia of the practice of medicine. New York: Wood. Vol. 14, pp. 581-875.
Lichtheim, M. L. 1885. On aphasia. Brain, 7, 433-484.
Morton, J. 1969. Interaction of information in word recognition. Psychological Review, 76, 165-178.
Morton, J. 1980. Two auditory parallels to deep dyslexia. In M. Coltheart, K. Patterson, & J. C. Marshall (Eds.), Deep dyslexia. London: Routledge & Kegan Paul.
Saffran, E. M., Marin, O. S. M., & Yeni-Komshian, G. H. 1976. An analysis of speech perception in word deafness. Brain and Language, 3, 209-228.
Ulrich, G. 1977. Das Syndrom der akustischen Agnosie. Archiv für Psychiatrie und Nervenkrankheiten, 224, 221-233.
Von Stockert, Th. R. 1982. On the structure of word deafness and mechanisms underlying the fluctuation of disturbances of higher cortical functions. Brain and Language, 16, 133-146.
Zaidel, E. 1976. Auditory vocabulary of the right hemisphere following brain bisection or hemidecortication. Cortex, 12, 191-211.