Neurocase: The Neural Basis of Cognition, 2017
https://doi.org/10.1080/13554794.2017.1390140

A patient with aphasia using the nonsemantic lexical route for Kanji reading

Kosei Hashimoto (a,b), Akira Uno (b), Ami Sambai (c) and Go Mizumoto (d)

(a) Department of Rehabilitation, JCHO Kumamoto General Hospital, Kumamoto, Japan; (b) Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki, Japan; (c) Faculty of Education, Osaka Kyoiku University, Osaka, Japan; (d) Faculty of Health Science, Kumamoto Health Science University, Kumamoto, Japan

ABSTRACT
We report a patient with aphasia, caused by cerebral hemorrhage, who probably used the nonsemantic lexical route when reading words aloud. To investigate the mechanisms underlying her reading dysfunction, we analyzed her reading abilities using the Dual-Route Cascaded Model. Language tests yielded low percentages of correct responses for both reading comprehension and reading nonwords aloud, suggesting problems in the semantic system and the nonlexical route. Conversely, the patient showed high scores on the reading words aloud task. Although she failed to understand many inconsistent-atypical words in the reading comprehension test, she correctly read most words aloud, suggesting that she used the nonsemantic lexical route. In addition, the lexical reading route was analyzed in detail by using inconsistent-atypical Kanji words as stimuli. Finally, we compared her reading dysfunction with previously reported cases.

ARTICLE HISTORY: Received 22 December 2016; Accepted 3 October 2017; Published online 16 October 2017

KEYWORDS: Nonsemantic lexical route; DRC model; Kanji; inconsistent-atypical word; aphasia

CONTACT: Akira Uno, uno@human.tsukuba.ac.jp

1. Introduction

Acquired reading impairments caused by brain damage are classified into surface dyslexia, phonological dyslexia, and deep dyslexia based on cognitive neuropsychological analysis (Benson & Ardila, 1996). Surface dyslexia is characterized by difficulty in reading irregular words compared to regular words and nonwords (Coltheart, Masterson, Byng, Prior, & Riddoch, 1983; Marcel, 1980). In addition, patients with surface dyslexia show regularity errors when reading irregular words aloud, known as legitimate alternative reading of components (LARC) errors (Funnell, 1996; Fushimi et al., 2003; Graham, Hodges, & Patterson, 1994; Patterson & Hodges, 1992; Patterson, Suzuki, Wydell, & Sasanuma, 1995). By contrast, patients with phonological dyslexia have difficulty reading nonwords, but can read known words, including irregular words (Beauvois & Derouesne, 1979; Sasanuma, Ito, Patterson, & Ito, 1996). Lexicalization errors are also frequently observed in these patients (Lesch & Martin, 1998; Martin, Breedin, & Damian, 1999). Although deep dyslexia has similarities to phonological dyslexia, semantic errors are observed only in deep dyslexia (Jones, 1985; Marshall & Newcombe, 1966; Plaut & Shallice, 1993).

The three main classes of reading disorder can be analyzed using the Dual-Route Cascaded (DRC) Model for visual word recognition and reading aloud (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001). In this model, reading aloud is achieved by lexical whole-word processing and nonlexical serial processing. The lexical route utilizes word-specific knowledge about character strings and pronunciations stored in the mental lexicon, and is necessary for irregular word processing. By contrast, the nonlexical route sequentially transforms each character into sound using grapheme-to-phoneme conversion rules, allowing nonwords to be read aloud through this route.

The three main reading impairments can also be analyzed by cognitive neuropsychological methods. Given that surface dyslexia is caused by damage to the lexical route for reading, it is characterized by difficulty reading irregular words compared to regular words and nonwords. By contrast, given that phonological dyslexia follows damage to the nonlexical route for reading, there is characteristically preservation of the ability to read known words, including irregular words, whereas the ability to read nonwords is impaired. Deep dyslexia shares similarities with phonological dyslexia, but is characterized by impairment of the semantic system that does not occur in phonological dyslexia (Bub & Kertesz, 1982; Marshall & Newcombe, 1966, 1973).

In the DRC Model, two further paths are involved in lexical whole-word processing: the semantic lexical route, with semantic mediation, and the nonsemantic lexical route, without semantic mediation. The nonsemantic lexical route is also called the "third route" for reading (Wu, Martin, & Damian, 2002), and selective (or relative) functioning of this route has been reported in some brain-damaged patients (Coslett, 1991; Funnell, 1983; Law, Wong, & Chiu, 2005; Lytton & Brust, 1989; Wu et al., 2002). Although these patients appear to share this route as a common reading process, they differ in terms of aphasia type, dyslexia type, lesion site, and language task. Additionally, we are aware of no reports of Japanese patients using the nonsemantic lexical route for reading aloud.

Before introducing our case, we briefly describe some basic principles of Japanese orthography. There are two orthographic systems used in the Japanese language: logographic Kanji and syllabic (more precisely, moraic) Kana. Most Kanji characters originated in China and have meanings in themselves, so they are sometimes called "ideograms" (e.g., "木" means "a tree" and resembles the shape of an actual tree). By contrast, Kana was derived from Kanji characters and has no intrinsic meaning in isolation (e.g., the Kana "さ" is a simplified form of the Kanji "左," which means "left"). Character-sound correspondence also differs between Kanji and Kana in several regards. Kanji characters generally contain one to three morae, whereas most Kana characters usually map to a single mora.
In addition, although most Kanji characters have two or more different pronunciations that are determined by the intra-word context, most Kana characters have a single pronunciation that does not vary across words.

In this article, we present a Japanese patient who used the nonsemantic lexical route when reading Kanji aloud. We analyzed the reading process by asking the patient to read Kanji nonwords and inconsistent-atypical Kanji words. The results of these reading tasks provide evidence that the third route was used for reading in this case.

2. Current case

Our case was a 66-year-old, right-handed Japanese woman with aphasia caused by cerebral hemorrhage. There was no history of cerebrovascular disease prior to the index event. Clinical examination revealed transcortical motor aphasia and severe right hemiplegia as focal neurological dysfunctions, necessitating the use of a wheelchair. She was treated conservatively with infusion therapy, and a head computed tomography scan performed two days after symptom onset revealed high-density areas in and around the left putamen of the basal ganglia. Her medical history included high blood pressure diagnosed after the age of 50 years that had been treated with oral antihypertensives. In daily conversation, she showed difficulty in spontaneous speech, but could repeat words and phrases provided by a conversational partner. In addition, she could not always understand the speech of others. Her Raven's Coloured Progressive Matrices score was 31 out of 36, which was higher than the average score of 29.2 for age-matched healthy adults.

3. Language tests

To investigate the basic mechanisms underlying her reading disorder, we conducted several language tasks, including the Standard Language Test of Aphasia (SLTA; Japan Society for Higher Brain Dysfunction Brain Function Test Committee, 2003), the Standardized Comprehension Test of Abstract Words (SCTAW; Haruhara & Kaneko, 2002), the Test of Lexical Processing in Aphasia (TLPA; Fujita et al., 2000), and the Sophia Analysis of Language in Aphasia (SALA; Jochi University SALA Project Team, 2004). The SLTA is the most commonly used standardized test for Japanese patients with aphasia and includes subtests of auditory comprehension, reading comprehension, oral naming, repetition, reading aloud, and spelling. In both the SCTAW and TLPA, we conducted the auditory comprehension, reading comprehension, repetition, and reading aloud tasks. As the Japanese version of the Psycholinguistic Assessment of Language Processing in Aphasia (Kay, Lesser, & Coltheart, 1992), the SALA has many subtests; we conducted the visual lexical decision task (VC12), two tasks of reading two-Kanji compounds aloud (imageability × frequency, OR34; consistency, OR36), and the task of reading Kana nonwords aloud (OR37).

3.1. Reading and auditory comprehension

On the SLTA reading comprehension tasks, our patient got 50% (5/10) correct on the Kanji word-to-picture matching task, 70% (7/10) correct on the Kana word-to-picture matching task, and 0% (0/10) correct on both the written sentence comprehension and written command comprehension tasks. The stimuli used in the SCTAW are 45 two-Kanji compound words, whereas the TLPA uses 31 two-Kanji compound words, 4 one-character Kanji words, and 5 Kana words. Our patient got 33% (15/45) and 52% (16/31) correct for the two-Kanji compound words in the SCTAW and TLPA, respectively.
For the other stimuli in the TLPA, she got 50% (2/4) of the one-character Kanji words correct and 40% (2/5) of the Kana words correct. The auditory stimuli for the SLTA, SCTAW, and TLPA are the same as those of the reading comprehension tasks. In the SLTA, our patient got 100% (10/10) correct on the spoken word-to-picture matching task, 20% (2/10) correct on the spoken sentence comprehension task, and 0% (0/10) correct on the spoken command comprehension task. The auditory comprehension task scores for the SCTAW and TLPA were 31% (14/45) and 65% (26/40), respectively.

3.2. Reading comprehension compared with reading aloud

Next, we compared the results of reading comprehension and reading aloud using the same stimuli for both tasks. First, we consider the reading comprehension tasks in the SCTAW and TLPA. For two-Kanji compound words, our patient got 33% (15/45) and 52% (15/31) correct for the SCTAW and TLPA, respectively. However, using the same stimuli in the reading aloud task, the respective correct scores were 78% (35/45) and 87% (27/31). Concerning the total score for both tasks, the reading aloud score was significantly higher than the reading comprehension score (reading aloud 84% [64/76], reading comprehension 37% [28/76], χ2[1] = 32.40, p < .01). In addition, she could read aloud 38 of the 48 words scored as incorrect in the reading comprehension task, and her reading comprehension was incorrect for 38 of the 64 words that had been correctly read aloud. The error responses for the reading aloud tasks in the SCTAW and TLPA included 10 phonological errors, 1 perseveration error, and 2 other errors. Thus, there were no semantic errors in the reading aloud tasks (Figure 1).

[Figure 1. DRC model of reading. This model was modified from the original model (Coltheart et al., 2001) for Japanese reading. Labeled components: print, visual feature units, character units, a lexical route (orthographic input lexicon, semantic system, phonological output lexicon), a nonlexical route (character-sound conversion), the response buffer, and speech.]

We also analyzed the responses to the inconsistent-atypical compound Kanji words included in the stimulus sets of the SCTAW and TLPA. Japanese Kanji characters typically have one to three pronunciations; those that have only one pronunciation are said to have a "consistent" reading, and those with two or more pronunciations an "inconsistent" reading (Fushimi, Ijuin, Patterson, & Tatsumi, 1999; Morton & Sasanuma, 1984). For example, the Kanji "医," which translates to "medical," has only one pronunciation (/i/) and is therefore categorized as consistent. Another Kanji, "都," which translates to "capital," has three patterns of pronunciation (/to/, /tsu/, and /miyako/); thus, this Kanji is categorized as inconsistent. Furthermore, the readings of inconsistently read Kanji characters are divided into the subtypes of "typical" and "atypical," which occur with high and relatively low frequency, respectively. To determine whether a Kanji reading was typical or atypical, we calculated the consistency ratio based on the procedure outlined by Uno et al. (2009) and Fushimi et al. (1999).
The consistency ratio was calculated as "friends" divided by the sum of "friends" and "enemies." The term "friends" indicates the number of two-Kanji compound words sharing the same character at the same position with the same pronunciation, and the term "enemies" indicates the number of two-Kanji compound words sharing the same character at the same position with different pronunciations (Fushimi et al., 1999). We counted the numbers of friends and enemies using the NTT database series Nihongo-no Goitokusei (Lexical properties of Japanese; Amano & Kondo, 1999) as the language corpus. In this report, Kanji readings with a consistency ratio ≥0.50 were classified as typical and those <0.50 were classified as atypical. For example, the Kanji "都" occurs at the first position in two-Kanji compound words with the pronunciation /to/, which is classified as a typical reading because it has a consistency ratio of 0.76, whereas the pronunciations /tsu/ and /miyako/ are categorized as atypical readings because they have consistency ratios of 0.10 and 0.14, respectively. We defined an inconsistent-atypical word as a two-Kanji compound word containing one or more Kanji characters with an inconsistent-atypical reading. For example, the two-Kanji compound "都合," pronounced /tsugoR/, which translates to "circumstance," is categorized as an inconsistent-atypical word because it contains the inconsistent character "都" pronounced atypically as /tsu/. Such words cannot be read via the nonlexical, one-to-one conversion rule; if we apply the frequency-based nonlexical conversion rule to "都合," it will be misread as /togoR/. Inconsistent-atypical words must therefore be read aloud by utilizing the lexical route. The stimuli of the SCTAW and TLPA included 19 inconsistent-atypical words; our patient could read aloud 11 of the 15 inconsistent-atypical words scored as incorrect in the reading comprehension test, and she could not comprehend 11 of the 14 inconsistent-atypical words that she correctly read aloud.

Next, we show the results for the Kana words included in the TLPA and SLTA. In the TLPA, the patient correctly answered 2 of 5 words in the reading comprehension task, but correctly read aloud 5 of 5 words with the same stimuli. In the SLTA, she correctly answered 7 of 10 words in the reading comprehension task and could read aloud 9 of 10 words with the same stimuli. Overall, our patient got 60% (9/15) and 90% (14/15) correct on the reading comprehension and reading aloud tasks, respectively. Thus, the reading comprehension score was lower than the reading aloud score, a difference that was marginally significant (χ2[1] = 3.20, p = .073). In addition, she could read aloud 5 of the 6 words scored as incorrect in the reading comprehension task, and failed to comprehend 5 of the 14 words she correctly read aloud.

3.3. Reading aloud nonwords (Kanji and Kana)

The two-Kanji compound nonword stimuli were the same as those of the lexical decision task (SALA VC12). Our patient scored only 3% (2/60) when reading Kanji nonwords aloud, showing 3 lexicalization errors, 1 partially correct response, 3 partially correct responses, and 51 nonresponses. In addition, she scored 61% (34/56) when reading Kana nonwords aloud (SALA OR37), which was significantly lower than her score for Kana real words (61% [34/56] and 90% [27/30] for Kana nonwords and real words, respectively; χ2[1] = 8.13, p < .01).
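The 2×2 comparisons of correct and incorrect response counts reported in this section can be reproduced with a standard chi-square test of independence. The following is a minimal sketch, assuming Python with SciPy, and is not the authors' analysis script; applied to the Kana nonword versus Kana real-word counts just given, a Pearson test without continuity correction yields a value close to the reported χ2(1) = 8.13, although the corrections used for the other comparisons in the paper may differ.

```python
# Minimal sketch (not the authors' analysis script): a 2x2 chi-square test
# on correct/incorrect counts, as used for the comparisons in Section 3.
from scipy.stats import chi2_contingency

# Kana nonwords: 34/56 correct; Kana real words: 27/30 correct (Section 3.3).
table = [
    [34, 56 - 34],   # nonwords:   [correct, incorrect]
    [27, 30 - 27],   # real words: [correct, incorrect]
]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")  # approximately chi2(1) = 8.12, p < .01
```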
In the Kana nonword task, she showed 19 phonological errors, 1 lexicalization error, 1 partially correct response, and 1 nonresponse. In addition, the Kanji nonword score was significantly lower than the Kana nonword score (χ2[1] = 39.88, p < .01) (Figure 2).

3.4. Visual lexical decision task

For the visual lexical decision task, a speech and language therapist presented two-Kanji real words or nonwords to the patient, and the patient judged whether each stimulus was a real word. The lexical properties of the stimuli were matched across the lexical decision and reading comprehension tasks, so they did not differ significantly in frequency, familiarity, imageability, orthographic plausibility, or reading consistency. Our patient scored 77% (79/102) on the lexical decision task, which was significantly higher than on the reading comprehension task (i.e., TLPA+SCTAW, 41% [31/76]; χ2[1] = 23.27, p < .01). Reading comprehension for two-Kanji compound words, reading aloud for two-Kanji compound words, reading aloud for Kanji nonwords, and the visual lexical decision task are compared in Figure 3.

3.5. Reading aloud tasks (imageability × frequency and consistency)

The patient was asked to read aloud two-Kanji compound words controlled for imageability and frequency (SALA OR34). The score was 67% (8/12) for high-imageability/high-frequency words, 100% (12/12) for high-imageability/low-frequency words, 83% (10/12) for low-imageability/high-frequency words, and 75% (9/12) for low-imageability/low-frequency words, for a total score of 81% (39/48). Overall, there was no effect of either imageability (χ2[1] = 0.1368, p = .71) or frequency (χ2[1] = 1.2308, p = .27). Next, the patient was asked to read aloud two-Kanji compound words controlled for the consistency ratio (SALA OR36). The score for consistent words with a high consistency ratio was 65% (13/20) and the score for inconsistent-atypical words with a low consistency ratio was 70% (14/20). There was no significant difference between the two scores (χ2[1] = 0.114, p = .76).

3.6. Other tests

In reading aloud high-familiarity words, our patient scored 83% (25/30) for Kanji words and 90% (27/30) for Kana words. The repetition scores using the same stimuli as the SCTAW and TLPA were 96% (43/45) and 93% (37/40), respectively. On the SLTA, she scored 35% (7/20) on picture naming, 90% (9/10) on word repetition, and 20% (1/5) on sentence repetition. Moreover, she could not correctly answer any of the spelling tasks in the SLTA. All of the results for the SLTA are summarized in Table 1.

[Figure 2. Head computed tomography (CT) scan performed 2 days after onset.]

[Figure 3. Comparison of four tasks: reading comprehension and reading aloud of two-Kanji compound words, reading aloud of Kanji nonwords, and the visual lexical decision task.]
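The friends/enemies classification used in Section 3.2 can also be summarized computationally. The following is a minimal sketch, assuming Python; the friend and enemy counts are hypothetical toy values chosen to match the reported ratios for 都, not counts taken from the NTT database, and the helper names are illustrative.

```python
# Minimal sketch (toy counts, not the NTT corpus): classify a Kanji reading as
# "typical" or "atypical" from its consistency ratio, as described in Section 3.2.

def consistency_ratio(friends: int, enemies: int) -> float:
    """friends / (friends + enemies): words sharing the character at the same
    position with the same pronunciation vs. with a different pronunciation."""
    return friends / (friends + enemies)

def classify(ratio: float) -> str:
    # Threshold used in this report: >= 0.50 is "typical", < 0.50 is "atypical".
    return "typical" if ratio >= 0.50 else "atypical"

# Hypothetical counts for readings of 都 in the first position of two-Kanji words.
toy_counts = {
    "to":     (76, 24),   # yields the reported ratio of 0.76
    "tsu":    (10, 90),   # yields 0.10
    "miyako": (14, 86),   # yields 0.14
}

for reading, (friends, enemies) in toy_counts.items():
    r = consistency_ratio(friends, enemies)
    print(f"都 read as /{reading}/: ratio = {r:.2f} -> {classify(r)}")
```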
Table 1. Scores on subtests of the SLTA.

Task                                    Score   Max   Average   SD
Comprehension
  Spoken word-to-picture matching         10     10     10.0    0.2
  Spoken sentence comprehension            2     10      9.5    0.8
  Spoken command comprehension             0     10      9.6    0.7
  Kanji word-to-picture matching           5     10      9.9    0.8
  Kana word-to-picture matching            7     10     10.0    0.1
  Written sentence comprehension           0     10      9.6    1.0
  Written command comprehension            0     10      9.4    1.5
Production
  Oral naming: objects                     7     20     19.6    0.8
  Oral naming: actions                     9     10      9.9    0.4
  Oral scene description                   1      6      5.8    0.6
  Word fluency                             9      ―     12.6    4.5
  Written naming of Kanji                  0      5      4.2    1.1
  Written naming of Kana                   0      5      4.8    0.7
  Written scene description                0      6      5.2    1.1
Repetition
  Words                                    9     10      9.9    0.1
  Sentences                                1      5      4.5    0.8
Reading aloud
  Kanji words                              5      5      5.0    0.4
  Kana words                               5      5      5.0    0.1
  Kana characters                          8     10     10.0    0.2
  Sentences                                1      5      4.9    0.3
Writing-to-dictation
  Kanji words                              0      5      4.3    1.0
  Kana words                               0      5      4.8    0.8
  Kana characters                          0     10      9.7    1.1
  Sentences                                0      5      4.0    1.5

4. Discussion

We investigated the underlying mechanisms of our patient's reading dysfunction based on the DRC Model. Given that both her reading comprehension and auditory comprehension scores were low, the semantic system and/or access to the semantic system was thought to be impaired. Conversely, she scored high (84%, 64/76) on the reading aloud task using the same stimuli used to assess reading comprehension. These results suggest that our patient could read words aloud without understanding them, and the nonlexical route was thought to be severely impaired because she scored 3% (2/60) on the Kanji nonword reading aloud task. From these results, we hypothesized that our patient was using the nonsemantic lexical route for reading (i.e., a direct route from the orthographic lexicon to the phonological lexicon in the DRC Model). Interestingly, the score for the lexical decision task was significantly higher than that for the reading comprehension task, suggesting that the orthographic lexicon was relatively intact compared with the semantic system. Thus, our patient probably used the nonsemantic lexical route rather than the semantic lexical route when reading. When we assessed her ability to read inconsistent-atypical words aloud, she failed to understand 15 of them in the reading comprehension task but could read 11 of those 15 aloud. Inconsistent-atypical words cannot be pronounced using the nonlexical character-to-sound correspondence rule because their pronunciations are not the typical readings of their component characters; they must instead be read aloud by the lexical route. Given that our patient showed a tendency to read inconsistent-atypical words aloud without understanding them, the nonsemantic lexical route was likely activated for reading aloud.

Although we primarily analyzed the reading disorders in our patient based on Kanji language tasks, many reports have shown clear differences between Kanji and Kana reading in Japanese patients with brain damage (Hashimoto & Uno, 2016; Iwata, 1984; Sakurai, Mimura, & Mannen, 2008; Sakurai, Sakai, Sakuta, & Iwata, 1994; Soma, Sugishita, Kitamura, Maruyama, & Imanaga, 1989). Therefore, we also examined whether such differences could be shown between Kanji and Kana reading in our patient. The patient's score for reading Kana real words aloud was significantly higher than her score for reading Kana nonwords, and she could read many Kana words without comprehension. Thus, the results were comparable between Kana and Kanji word reading, indicating that the underlying nonsemantic lexical route was probably shared.
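To make this route-based interpretation concrete, the following is a minimal sketch, assuming Python; the toy lexicon, the romanized pronunciations, and the single-character conversion table are illustrative stand-ins for the DRC components, not the model implementation of Coltheart et al. (2001) or the authors' analysis.

```python
# Minimal sketch (toy data, not the DRC implementation of Coltheart et al., 2001):
# three ways of deriving a pronunciation for 都合 (/tsugoR/, "circumstance").
from typing import Optional

# Nonlexical route: character-by-character conversion using each character's
# most frequent (typical) reading -- hypothetical conversion table.
typical_reading = {"都": "to", "合": "goR"}

# Whole-word knowledge -- hypothetical lexicon entries.
phonological_lexicon = {"都合": "tsugoR"}
semantic_system = {"都合": "circumstance"}

def read_nonlexical(word: str) -> str:
    # Serial character-to-sound conversion; misreads inconsistent-atypical words.
    return "".join(typical_reading.get(ch, "?") for ch in word)

def read_nonsemantic_lexical(word: str) -> Optional[str]:
    # Orthographic input lexicon -> phonological output lexicon, bypassing semantics.
    return phonological_lexicon.get(word)

def read_semantic_lexical(word: str, semantics_available: bool) -> Optional[str]:
    # Lexical reading mediated by the semantic system; unavailable if semantics is impaired.
    if not semantics_available or word not in semantic_system:
        return None
    return phonological_lexicon.get(word)

word = "都合"
print(read_nonlexical(word))               # togoR  (the misreading predicted by the nonlexical route)
print(read_nonsemantic_lexical(word))      # tsugoR (correct pronunciation without comprehension)
print(read_semantic_lexical(word, False))  # None   (semantic lexical route unavailable)
```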
However, it should be noted that our patient scored 3% (2/60) for Kanji and 61% (34/56) for Kana when reading nonwords aloud (p < .01). This suggests that the nonlexical route may have been relatively more functional for Kana than for Kanji reading. There are a few possible explanations for this discrepancy between the Kana and Kanji reading scores. First, it is possible that our patient had difficulty in selecting the correct pronunciation for the Kanji characters from among the allowed pronunciations. Indeed, whereas most Japanese Kanji characters have two or more pronunciations, most Kana characters correspond to a single sound and do not require the patient to make a choice about pronunciation. Second, if our patient read Kanji nonwords using the semantic system, then it is possible that semantic processing affected inhibitory control when reading. Although two-Kanji compound nonwords have no meaning (e.g., "跳目" has no meaning), each Kanji character used in the evaluation had meaning by itself (e.g., "跳" means jump and "目" means eye). Given that our patient probably had either an impaired semantic system or impaired access to the semantic system, reading Kanji nonwords could have been affected by impaired semantic processing if two-Kanji nonwords were read aloud character by character. Most reading errors for two-Kanji nonwords were also nonresponses (51/58), suggesting possible impaired semantic processing. By contrast, Kana characters would not have been affected by semantic processing because they have no meaning. It is reasonable, therefore, to anticipate that the Kana nonword reading score would be higher than the Kanji nonword reading score.

We also compared the findings in this case with those of other cases with surface dyslexia, phonological dyslexia, and deep dyslexia. First, regarding surface dyslexia, many reports have suggested an association between semantic impairment and LARC errors, in which a legitimate but contextually inappropriate reading is produced for a character (Funnell, 1996; Fushimi et al., 2003; Graham et al., 1994; Patterson & Hodges, 1992; Patterson et al., 1995). In surface dyslexia, pronounceable nonwords are read correctly because the nonlexical route is preserved. Our patient did not satisfy the conditions of surface dyslexia, based on the lack of LARC errors and the low score for reading Kanji nonwords. Second, phonological dyslexia is associated with a deficit of nonlexical reading, which was present in our patient (Beauvois & Derouesne, 1979; Lesch & Martin, 1998; Martin et al., 1999). However, phonological dyslexia does not involve impairment of semantic processing, whereas such impairment was present in our patient, as indicated by her low scores for reading and auditory comprehension. Finally, consistent with the findings expected in deep dyslexia, our patient had impairments of the semantic system and the nonlexical route (Jones, 1985; Marshall & Newcombe, 1966; Plaut & Shallice, 1993). However, the error responses were different: patients with deep dyslexia frequently show semantic errors when reading aloud (Jones, 1985; Marshall & Newcombe, 1966; Plaut & Shallice, 1993), whereas our patient showed no such errors. Patients with deep dyslexia are thought to produce semantic errors because their reading aloud proceeds through the impaired semantic system.

Next, we compared the features of our case with those in previous reports of five patients who were thought to use the nonsemantic lexical route. The profiles of these cases, plus our own, are summarized in Table 2; they comprise four English speakers (Coslett, 1991; Funnell, 1983; Lytton & Brust, 1989; Wu et al., 2002), one Cantonese speaker (Law et al., 2005), and one Japanese speaker (our case). These six patients all probably used a common reading process, but showed differences in their aphasia types, dyslexia types, lesion sites, and language tasks. Funnell (1983) reported a Broca-type disorder, whereas the cases reported by Lytton and Brust (1989) and Coslett (1991) were Wernicke-type disorders. Additionally, the case reported by Funnell was diagnosed as having deep dyslexia, while that reported by Wu et al. (2002) had phonological dyslexia. Of note, Lytton and Brust (1989) proposed the original classification of direct dyslexia in their patient. The brain lesion sites also varied among these patients: in the case of Lytton and Brust (1989), the lesion was in the left frontal and temporal lobes; in the case of Wu et al. (2002), the lesion was in the left frontal and parietal operculum; and in the case of Law et al. (2005), the lesion was in the left basal ganglion.

[Table 2. Summary of the five previously reported patients and our patient: WB (Funnell, 1983; English; Broca's aphasia; deep dyslexia), GH (Lytton & Brust, 1989; English; Wernicke's aphasia; direct dyslexia; left frontal and temporal lobes), WT (Coslett, 1991; English; Wernicke's aphasia), ML (Wu et al., 2002; English; phonological dyslexia; left frontal and parietal operculum), YKM (Law et al., 2005; Cantonese; transcortical motor aphasia; left basal ganglion), and HM (the current case; Japanese; transcortical motor aphasia; cerebral hemorrhage; left basal ganglion, in and around the putamen). For each case, the table lists mother language, aphasia type, dyslexia type, disease, lesion, orthography, word classes, and performance on reading aloud (by imageability, frequency, and regularity/consistency), nonword reading, oral naming, visual lexical decision, comprehension, and error responses.]
In addition, there were no effects of frequency, regularity, or imageability in these patients' reading test results. For instance, the Cantonese patient reported by Law et al. (2005) was not influenced by any lexical properties, such as frequency, imageability, word class, position of the phonetic radical, or phonological similarity of the phonetic radical, when reading performance was examined. Our patient shared many features with the patient reported by Law et al. (2005). Indeed, both patients were classified as having the same aphasia type and had overlapping lesion sites, and neither was influenced by frequency or imageability when reading words aloud. Given that these patients both used the same orthographic script (Kanji/Chinese characters), it is perhaps plausible that they developed similar symptoms. However, this idea should be challenged on a couple of key grounds. First, most Japanese Kanji characters have two or more pronunciations, while most Chinese characters typically have just one pronunciation. Second, Cantonese is a tonal language, whereas Japanese is not. These two features mean that we cannot simply categorize Japanese Kanji and Chinese characters into the same class.

The different characteristics of Japanese Kanji and Chinese characters also required differences in the analysis methodology. For instance, the lexical route in our patient was analyzed in detail using inconsistent-atypical Kanji words, whereas Law et al. (2005) could not do so because their patient was a Cantonese speaker. In addition, our patient's nonlexical route could be assessed using Kanji nonwords, whereas this was not possible for their patient. By contrast, Law et al. (2005) could analyze tonal errors among the reading error responses, whereas we could not because Japanese is not a tonal language. Therefore, although Chinese characters are used in both the Japanese and Cantonese languages, there are some differences between the two languages that necessitate different analytical approaches.

In our patient, the impaired semantic system was unlikely to have strongly influenced the ability to read Kanji words aloud, which we think explains the absence of an imageability effect on reading. In addition, a reading consistency effect was unlikely to be observed because the patient's impaired nonlexical route would not promote the reading of high-consistency words. By contrast, a reading frequency effect should be observed in patients who use the nonsemantic lexical route, because word frequency affects the activation of the orthographic lexicon in the DRC Model (Coltheart et al., 2001). However, no frequency effect was seen in our patient's results or in those of Coslett (1991), Wu et al. (2002), or Law et al. (2005). Though unlikely, it is possible that a frequency effect might have been observed if latency had been measured. In our case report, as well as those by Coslett (1991) and Law et al. (2005), latency was not measured; instead, the percentage of correct responses was used as the evaluation measure. Wu et al. (2002) did use reading latency as an evaluation measure, but they did not investigate the frequency effect for reading. Nevertheless, studies of healthy adults have shown that a frequency effect can be observed when latency is used as the evaluation measure (Andrews, 1989; Forster & Chambers, 1973), and the same might hold for brain-damaged patients. This is an issue to be addressed in future research.
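As an illustration of this final point, a latency-based analysis could, in principle, detect a frequency effect even when accuracy is comparable across conditions. The sketch below assumes Python with SciPy and uses entirely hypothetical latency values; it is not an analysis performed in this study.

```python
# Illustrative only: hypothetical reading latencies (ms) for correctly read words,
# comparing high- vs. low-frequency items with an independent-samples t-test.
from scipy.stats import ttest_ind

high_freq_ms = [620, 655, 640, 700, 610, 675, 630, 660]   # hypothetical values
low_freq_ms  = [690, 720, 705, 760, 680, 745, 700, 730]   # hypothetical values

t, p = ttest_ind(high_freq_ms, low_freq_ms)
print(f"t = {t:.2f}, p = {p:.4f}")  # a frequency effect would appear as faster
                                    # latencies for high-frequency words
```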
Acknowledgments

This work was supported by the Japan Society for the Promotion of Science under grant numbers B-23330201 for Akira Uno and C-15K02550 for Go Mizumoto, and by the JAS Grant for Young Scientists 2016 for Kosei Hashimoto.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

This work was supported by the Japan Society for the Promotion of Science under grant numbers B-23330201 for Akira Uno and C-15K02550 for Go Mizumoto.

ORCID

Kosei Hashimoto: http://orcid.org/0000-0001-7239-2099

References

Amano, S., & Kondo, T. (1999). Nihongo no goi tokusei [Lexical properties of Japanese]. Tokyo: Sanseido.
Andrews, S. (1989). Frequency and neighborhood effects on lexical access: Lexical similarity or orthographic redundancy? Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 234–254.
Beauvois, M. F., & Derouesne, J. (1979). Phonological alexia: Three dissociations. Journal of Neurology, Neurosurgery & Psychiatry, 42, 1115–1124. doi:10.1136/jnnp.42.12.1115
Benson, D. F., & Ardila, A. (1996). Aphasia: A clinical perspective. New York, NY: Oxford University Press.
Bub, D., & Kertesz, A. (1982). Deep agraphia. Brain and Language, 17, 146–165. doi:10.1016/0093-934X(82)90011-6
Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204–256. doi:10.1037/0033-295X.108.1.204
Coltheart, M., Masterson, J., Byng, S., Prior, M., & Riddoch, J. (1983). Surface dyslexia. The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology, 35, 469–495.
Coslett, H. (1991). Read but not write "Idea": Evidence for a third reading mechanism. Brain and Language, 40, 425–443. doi:10.1016/0093-934X(91)90141-M
Forster, K. I., & Chambers, S. M. (1973). Lexical access and naming time. Journal of Verbal Learning and Verbal Behavior, 12, 627–635. doi:10.1016/S0022-5371(73)80042-8
Fujita, I., Monoi, H., Okudaira, N., Ueda, M., Ono, K., Furuya, F., . . . Sasanuma, S. (2000). Test of Lexical Processing in Aphasia (TLPA). Tokyo: Excor.
Funnell, E. (1983). Phonological processes in reading: New evidence from acquired dyslexia. British Journal of Psychology, 74, 159–180. doi:10.1111/bjop.1983.74.issue-2
Funnell, E. (1996). Response biases in oral reading: An account of the co-occurrence of surface dyslexia and semantic dementia. Quarterly Journal of Experimental Psychology, 49A, 417–446. doi:10.1080/713755626
Fushimi, T., Ijuin, M., Patterson, K., & Tatsumi, I. (1999). Consistency, frequency, and lexicality effects in naming Japanese Kanji. Journal of Experimental Psychology: Human Perception and Performance, 25, 382–407.
Fushimi, T., Komori, K., Ikeda, M., Patterson, K., Ijuin, M., & Tanabe, H. (2003). Surface dyslexia in a Japanese patient with semantic dementia: Evidence for similarity-based orthography-to-phonology translation. Neuropsychologia, 41, 1644–1658. doi:10.1016/S0028-3932(03)00108-8
Graham, K. S., Hodges, J. R., & Patterson, K. (1994). The relationship between comprehension and oral reading in progressive fluent aphasia. Neuropsychologia, 32, 299–316. doi:10.1016/0028-3932(94)90133-3
Haruhara, N., & Kaneko, M. (2002). The Standardized Comprehension Test of Abstract Words (A. Uno, Ed.). Tokyo: Intelna-Shuppan.
Hashimoto, K., & Uno, A. (2016). Cognitive neuropsychological analysis of differential reading and spelling disorder mechanisms in a patient with aphasia. Neurocase, 22, 294–299. doi:10.1080/13554794.2015.1137950
Iwata, M. (1984). Kanji versus Kana: Neuropsychological correlates of the Japanese writing system. Trends in Neurosciences, 7, 290–293. doi:10.1016/S0166-2236(84)80198-8
Japan Society for Higher Brain Dysfunction Brain Function Test Committee. (2003). Standard Language Test of Aphasia (SLTA) (2nd ed.). Tokyo: Shinko Igaku Shuppan.
Jochi University SALA Project Team. (2004). Sophia Analysis of Language in Aphasia (SALA). Tokyo: Escor.
Jones, G. V. (1985). Deep dyslexia, imageability, and ease of predication. Brain and Language, 24, 1–19. doi:10.1016/0093-934X(85)90094-X
Kay, J., Lesser, R., & Coltheart, M. (1992). Psycholinguistic Assessment of Language Processing in Aphasia (PALPA). London: Lawrence Erlbaum Associates.
Law, S. P., Wong, W., & Chiu, K. M. Y. (2005). Preserved reading aloud with semantic deficits: Evidence for a non-semantic lexical route for reading Chinese. Neurocase, 11, 167–175. doi:10.1080/13554790590944618
Lesch, M. F., & Martin, R. C. (1998). The representation of sublexical orthographic-phonologic correspondences: Evidence from phonological dyslexia. The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology, 51, 905–938. doi:10.1080/713755790
Lytton, W. W., & Brust, J. C. M. (1989). Direct dyslexia: Preserved oral reading of real words in Wernicke's aphasia. Brain, 112, 583–594. doi:10.1093/brain/112.3.583
Marcel, T. (1980). Surface dyslexia and beginning reading: A revised hypothesis of the pronunciation of print and its impairments. In M. Coltheart, K. E. Patterson, & J. C. Marshall (Eds.), Deep dyslexia (pp. 227–258). London: Routledge & Kegan Paul.
Marshall, J. C., & Newcombe, F. (1966). Syntactic and semantic errors in paralexia. Neuropsychologia, 4, 169–176. doi:10.1016/0028-3932(66)90045-5
Marshall, J. C., & Newcombe, F. (1973). Patterns of paralexia: A psycholinguistic approach. Journal of Psycholinguistic Research, 2, 175–199. doi:10.1007/BF01067101
Martin, R. C., Breedin, S. D., & Damian, M. F. (1999). The relation of phoneme discrimination, lexical access, and short-term memory: A case study and interactive activation account. Brain and Language, 70, 437–482. doi:10.1006/brln.1999.2184
Morton, J., & Sasanuma, S. (1984). Lexical access in Japanese. In L. Henderson (Ed.), Orthographies and reading: Perspectives from cognitive psychology, neuropsychology, and linguistics (pp. 25–42). London: Lawrence Erlbaum Associates.
Patterson, K., & Hodges, J. R. (1992). Deterioration of word meaning: Implications for reading. Neuropsychologia, 30, 1025–1040. doi:10.1016/0028-3932(92)90096-5
Patterson, K., Suzuki, T., Wydell, T., & Sasanuma, S. (1995). Progressive aphasia and surface alexia in Japanese. Neurocase, 1, 155–165. doi:10.1080/13554799508402358
Plaut, D. C., & Shallice, T. (1993). Deep dyslexia: A case study of connectionist neuropsychology. Cognitive Neuropsychology, 10, 377–500. doi:10.1080/02643299308253469
Sakurai, Y., Mimura, I., & Mannen, T. (2008). Agraphia for kanji resulting from a left posterior middle temporal gyrus lesion. Behavioural Neurology, 19, 93–106. doi:10.1155/2008/393912
Sakurai, Y., Sakai, K., Sakuta, M., & Iwata, M. (1994). Naming difficulties in alexia with agraphia for kanji after a left posterior inferior temporal lesion. Journal of Neurology, Neurosurgery & Psychiatry, 57, 609–613. doi:10.1136/jnnp.57.5.609
Sasanuma, S., Ito, H., Patterson, K., & Ito, T. (1996). Phonological alexia in Japanese: A case study. Cognitive Neuropsychology, 13, 823–848. doi:10.1080/026432996381827
Soma, Y., Sugishita, M., Kitamura, K., Maruyama, S., & Imanaga, H. (1989). Lexical agraphia in the Japanese language: Pure agraphia for Kanji due to left posteroinferior temporal lesions. Brain, 112, 1549–1561. doi:10.1093/brain/112.6.1549
Uno, A., Suzuki, K., Sambai, A., Haruhara, N., Kaneko, M., Awaya, N., . . . Goto, T. (2009). Issues in direct application of adult cognitive neuropsychology to developmental dyslexia: Surface and/or phonological dyslexia. Japanese Journal of Logopedics and Phoniatrics, 50, 276–284. doi:10.5112/jjlp.50.276
Wu, D. H., Martin, R. C., & Damian, M. F. (2002). A third route for reading? Implications from a case of phonological dyslexia. Neurocase, 8, 274–295. doi:10.1076/neur.8.3.274.16198