NEUROCASE, 2008, 14 (6), 508–524
DOI: 10.1080/13554790802372135

Deep dyslexia for kanji and phonological dyslexia for kana: Different manifestations from a common source

Hitomi Sato,1 Karalyn Patterson,2 Takao Fushimi,3 Jane Maxim,4 and Karen Bryan5

1 Department of Rehabilitation, Yokufukai Hospital, Tokyo, Japan
2 MRC Cognition and Brain Sciences Unit, Cambridge, UK
3 Department of Rehabilitation, Kitasato University, Kanagawa, Japan
4 Department of Language and Communication, University College London, London, UK
5 Division of Health and Social Care, University of Surrey, Guildford, UK

A Japanese-speaking stroke patient with disrupted phonology but relatively good semantics was severely impaired in nonword reading, with better preserved and imageability-modulated word reading in both kanji and kana. This basic similarity of reading in the two Japanese scripts was accompanied by the following differences: (i) distinct error patterns (prominent semantic errors for kanji vs. phonological errors for kana); (ii) a more pronounced imageability effect for kanji; and (iii) a remarkable pseudohomophone advantage for kana. The combination of deep dyslexia for kanji and phonological dyslexia for kana in a single patient suggests that these are not two distinct reading disorders.

Keywords: Deep dyslexia; Phonological dyslexia; Japanese orthography; Phonological impairment.

Our extreme gratitude goes to YT for her participation in this research. We also wish to express our appreciation to Professor Taeko N. Wydell for providing us her database of imageability (1991); to Dr M. Hatta, Dr R. Sato, and Dr R. Yoshida for referring YT, providing and explaining her MRI; and to Dr E. Otomo (Director of Yokufukai Hospital), by whose approval H.S. was able to study in London.

Address correspondence to Dr Hitomi Sato PhD, Yokufukai Hospital, Department of Rehabilitation, 1-12-1 Takaidonishi, Suginami-ku, Tokyo, 168-8535 Japan. (E-mail: hitomi.sato@hotmail.co.jp).

INTRODUCTION

This paper is motivated by two issues in the cognitive neuropsychology of language and reading. One of these questions applies to virtually any language or writing system: what is, or are, the underlying deficit(s) in the two acquired reading disorders known as deep and phonological dyslexia, and are these distinct disorders? The second question is specific to Japanese, with its two different forms of orthography, morphographic kanji and phonographic kana: do the observed 'dissociations' in impairments with respect to these two forms of orthography indicate different reading mechanisms for kanji vs. kana?

The history of the first question dates back nearly 30 years, to the time when phonological dyslexia was first reported (Beauvois & Derouesné, 1979) and researchers (e.g., Patterson, 1982) were noting its similarities to, and differences from, deep dyslexia (e.g., Marshall & Newcombe, 1973). Deep dyslexia (Coltheart, Patterson, & Marshall, 1980) consists of a constellation of symptoms, of which the most important are (a) total or nearly-total failure to read aloud any nonwords or pseudowords
(such as dake in English); (b) degrees of success in real word reading that vary with the concreteness or imageability of the target words, and possibly also with word class (though the latter symptom may in fact be a result of the former (e.g., Barry & Richardson, 1988), since the less favoured word classes – especially function words – are low in imageability); and (c) at least occasional errors1 in single-word reading where the response is semantically, but not orthographically or phonologically, related to the target word (e.g., lecturer → 'student').

1 The percentage of semantic errors (out of all errors) varies considerably from one deep dyslexic patient to another. For example, it was 54% in PW (Patterson, 1978) and 10% in PS (Shallice & Coughlan, 1980).

Phonological dyslexia has a rather similar profile, but with the following differences: (a) although nonword reading is always impaired relative to normal readers (indeed, this is the criterial feature of phonological dyslexia), it is rarely abolished; (b) where patients have been tested on reading pseudohomophones (e.g., caik, homophonic with the word cake) vs. matched nonwords which do not sound like real words when pronounced, a significant pseudohomophone advantage is sometimes observed; (c) real-word reading overall is usually better than in deep dyslexia as well, such that the imageability and/or part-of-speech effects may be less striking; and (d) frank semantic errors lacking any structural similarity to the target word are absent.

As described by Coltheart (1996), from the perspective of the syndrome approach to neuropsychology, this phonological-dyslexic profile signalled a disorder truly separate from deep dyslexia. A different interpretation treats phonological dyslexia as the milder end of a continuum with deep dyslexia (Crisp & Lambon Ralph, 2006; Friedman, 1996; Glosser & Friedman, 1990). In either case, there remains considerable debate about the nature of the deficits underlying these disorders. In particular, (a) can all of the symptoms in either or both conditions be explained by a single deficit, or are multiple impairments involved? (b) Are deep and phonological dyslexia specific disorders of reading, or are the reading impairments a predictable manifestation of a more general language deficit?

The second question addressed in this study, referred to as the 'kanji–kana problem' in the Japanese research community (e.g., Kawamura, 2007), has a longer history, starting with the first clinical and/or experimental studies of reading and writing disorders in Japanese (e.g., Imura, 1943; Imura, Nogami, & Asakawa, 1971; Kimura, 1934; Sakamoto, 1940). There are many descriptions available in western literature (e.g., Fushimi, Ijuin, Patterson, & Tatsumi, 1999; Morton & Sasanuma, 1984) of the characteristics of kanji (the morphographic set of characters essentially inherited from China) and kana (the phonographic characters created in Japan to represent morae – the basic phonological units of spoken Japanese2 – consisting of hiragana and katakana). Three important differences will be briefly summarised here. First of all, each kana character has a single, fixed pronunciation that does not vary from one context to another.
Orthography-to-phonology relationships for words written in kana are thus perfectly consistent and predictable. The pronunciation of kanji words, by contrast, is much more inconsistent and unpredictable, because most kanji characters have multiple pronunciations or 'readings', and the correct one for a given target word depends on the other component character(s) in the word. Secondly, individual kanji characters always convey some meaning, though the meaning of the character on its own may not be transparently related to the meaning of a word containing that character. Individual kana characters, by contrast, are reasonably pure symbols for sound rather than meaning. Finally, although any Japanese word can be written in kana (since it represents sound), the two scripts are in practice used to write different types of words: nouns and the root forms of verbs and adjectives are most often written in kanji; function words, the obligatory inflections on verbs and adjectives, and some content words are written in hiragana. An alternative but exactly equivalent set of katakana characters, again representing pronunciation in a transparent way, is used for loan words (such as the katakana word pronounced /me-ro-N/, meaning melon).

2 Morae, or moras, are the time-based units of spoken Japanese. There are 108 distinct morae in the corpora of Japanese speech, and more than 70% of morae are consonant-vowel (CV) combinations (Otake, 1990). Additional moras consist of vowels on their own (the five canonical vowels of Japanese, /a/, /i/, /u/, /e/, /o/), more complex combinations of consonant and vowel (CjV), and the two special moras corresponding to the nasal (N) and geminate (Q) consonants. For instance, the katakana word for caramel, pronounced /kja-ra-me-ru/, consists of one CjV mora (kja) and three CV moras (ra, me, and ru), and the kanji word for stairs, pronounced /kai-daN/, consists of two CV moras (ka and da), one V mora (i) and one nasal (N). Both are 4-mora words.

Owing to these very distinct features of the two Japanese writing systems, it is not surprising that, following lesions to language networks in the brain, there may be substantial differences in both degree of success and types of errors in oral reading of words in kanji vs. kana. Some researchers have claimed that these differences constitute a meaningful double dissociation and have linked the two sides of the dissociation to different lesion sites: kanji > kana with a left angular gyrus lesion (e.g., Yamadori, 1975; Kawamura, 1990), and kanji < kana with a left posterior inferior temporal lesion (e.g., Kawahata, Nagata, & Shishido, 1988; Sakai, Sakurai, Sakuta, & Iwata, 1992). It is essential to note, however, that these studies used single kana characters (=nonwords)3 and/or kana transcriptions of kanji words (≈pseudohomophones)4 as kana reading stimuli to compare with single kanji characters and/or kanji words. Thus, cases of a reported kanji advantage might be attributable to a lexical/semantic effect (words > nonwords), and cases of a kana advantage to a pronunciation consistency effect (transparent kana > unpredictable kanji). Even a study (Sugishita, Otomo, Kabe, & Yunoki, 1992) which criticised the methodology of previous investigations examined the patients' reading performance using single basic kana and kanji characters (N = 46 each, the kanji characters also corresponding to single-character words).
3 Although some single morae in Japanese correspond to words (e.g., /me/ eye, /ki/ tree), such words are commonly written in kanji characters, not in kana characters. Therefore, single kana characters can be considered as non-homophonic nonwords or pseudohomophones.

4 Hiragana transcriptions are not misspelled, but psycholinguistically they are similar to pseudohomophones, as Coltheart, Patterson, and Marshall (1987) pointed out.

In short, neuropsychological attempts to determine whether reading mechanisms are different for kanji vs. kana have been somewhat hindered by problems in the selection of reading materials in the two orthographies. The logic underlying many early studies was presumably that, since any word in Japanese can be written in kana, the sensible experimental contrast would be between real kanji words and the same items transcribed into kana, because then the target pronunciations of stimuli in the two scripts would be identical. Unfortunately, this strategy confounded script type with orthographic familiarity: a word normally written in kanji is familiar in kanji but much less so in kana – even if it is easily pronounceable by any normal Japanese reader. Furthermore, this experimental choice also confounded script type with orthographic word length: the pronunciation of a 2-character kanji noun usually has 3 or 4 moras, and therefore requires 3 or 4 characters in its kana transcription. In addition, kanji words have many homophones (e.g., /ka-rei/: magnificent, graceful, aging, over-refrigeration, etc.); and whereas morphographic kanji – which specifies meaning – disambiguates homophonic alternatives, phonographic kana does not. Finally, some vocabulary items can be written in either kanji or kana.5

For all of these reasons, recent studies (e.g., Sasanuma, Ito, Patterson, & Ito, 1996) have usually opted for a contrast between words normally written in kanji and words normally written in kana. Meanwhile, kanji nonwords were not used in reading investigations of neurological patients until quite recently (Fushimi et al., 2003). This may be attributable either to an intuitive notion that 'logographic' kanji is processed via semantics, or to a failure to appreciate how much can be learned from patients' ability to read nonwords, or both.

The study reported here concerns the reading and other language abilities of a Japanese stroke patient with impaired phonological abilities. By contrasting her reading performance on different types of words and nonwords written in kana and kanji, and by trying to characterise her pattern of reading in the different scripts as either phonological or deep dyslexia, we hope to offer a small but significant step in the resolution of both of the issues outlined in the Introduction.

5 This flexibility of the Japanese writing system led Kondo and Amano (1999) to propose a new psycholinguistic variable called orthographic plausibility. This is similar to orthographic wordlikeness and reflects a sort of subjective acceptability of writing a particular lexical item in kanji, katakana or hiragana. The degree of orthographic plausibility (on a 5-point scale) for a kanji word which is normally written in kanji is of course high, whereas this value for its transcriptions is low.
For example, the word meaning magnificent, pronounced /ka-rei/, is usually written in kanji, and this kanji form has a plausibility value of 4.75; its hiragana transcription has a value of only 2.75; and its katakana transcription drops still further to 1.60. For the subset of words that are frequently written in either kanji or kana, such a pattern is not observed (e.g., apple /ri-N-go/: 4.30, 4.15, and 4.05 for the three forms).

Figure 1. Horizontal and coronal sections of an MRI (T2-weighted imaging) for YT in June 2002.

Case report

YT, a right-handed 55-year-old female restaurant owner with a 12-year education, suffered a haemorrhage of the left putamen and underwent an operation to remove a haematoma in 1996. YT's MRI (Figure 1) shows a left hemisphere lesion in the sub-cortex, the temporal area of the cortex and the parietal area of the sub-cortex.6 Her spontaneous speech was non-fluent but well articulated. Word finding difficulties were evident in picture description, with semantic errors, though often accompanied by a statement that the produced word was not the target (e.g., coffee → 'cider, no it wasn't'). Phonological errors were infrequently observed, sometimes accompanied by self-correction.

On first assessment in 1998, YT's profile on the Japanese version of the Western Aphasia Battery (1986) was as follows: spontaneous speech 12; auditory comprehension 6.75; repetition 7.0; naming 5.3; Aphasia Quotient 62.1. Her reading aloud of single-character kanji words was poor but more successful for concrete words (29/60 = 48%) than abstract words (11/60 = 18%). Semantic errors also occurred more frequently for concrete words (17/31 = 55%) than abstract words (7/49 = 14%). She could write Arabic numbers, a limited number of kanji characters, and only three kana characters. Her verbal fluency within a 1-min time limit was 4 for a semantic category (animal) and zero for a letter (/ka/). Her digit span (forward) was only 2 and she could not perform backward span. YT's score for Raven's Coloured Progressive Matrices (Raven, 1962) was 30/36 and for copying the Rey complex figure (Rey, 1941) was 35/36. The main investigations were conducted when YT was 5–6 years post onset.

6 YT's lesion is consistent with reports that the usual lesion site in deep dyslexia is the left temporo-parietal region and 'typically larger, encompassing at least the perisylvian area and often extending to include much of the left hemisphere' (Lambon Ralph & Graham, 2000, p. 142).

RESULTS OF EXPERIMENTAL INVESTIGATIONS

The results will be organised into five sections: (1) assessment of semantic and phonological abilities, to establish the status in YT of the most basic language skills; (2) nonword reading, to characterise the nature of YT's reading disorder; (3) word reading, for the same purpose; (4) pseudohomophone reading, to determine whether the phonological 'familiarity' of nonwords plays an important role; and (5) word reading and picture naming with an incremental cueing technique, to examine the interaction between orthographic, semantic and phonological sources of activation in YT's spoken word production.

Assessment of semantic and phonological abilities

Semantic knowledge and word comprehension

The Tiger and Lion Test (Sato, 1996) comprises 60 pictures representing 6 exemplars from each of 10 semantic categories (5 animate and 5 inanimate:
e.g., birds, musical instruments) and provides measures of both expressive and receptive semantic knowledge. For an aphasic patient, expressive tasks are always more difficult, and unsurprisingly YT was impaired in this regard, with a picture naming score of 45/60 = 75%. In the receptive components of the test, on the other hand, she achieved essentially perfect scores: for spoken word-to-picture matching, in which the target picture was presented along with 5 same- or different-category distracters, her score was 59/60 = 98% in the same-category condition and 60/60 = 100% in the different-category condition.

In a version of the Pyramids and Palm Trees Test (Howard & Patterson, 1992) modified7 by changing several test items to make the test more appropriate for Japanese subjects, YT's performance was also good: 3 pictures: 96%; 1 spoken word and 2 pictures: 94%; 1 written word and 2 pictures: 98%; 3 written words: 98%.

7 Patterson et al. (1995) modified this test for investigating a Japanese neurological patient and created 49 test items. We added three new items to this version in order to match the number of items to the original test.

For another assessment of written word comprehension, we used 42 katakana words comprising 2 to 6 characters and 42 kanji words comprising 1 to 3 characters; in both cases, there were 6 items in each of the identical seven semantic categories (e.g., for animals, katakana word: giraffe, and kanji word: bear). In written word-to-picture matching with same-category distracters, YT's score was 41/42 = 98% for both katakana words and kanji words.

In a difficult 3-alternative forced-choice semantic similarity judgement task designed to assess comprehension of both concrete and abstract words, YT was asked to match a written single-character kanji word to a semantically similar target word presented among a semantically associated distracter and a visually similar distracter (e.g., car → vehicle, traffic, knitting; puzzle → obscurity, incident, illumination). Her performance, though not perfect, was fairly good and equal for concrete words (46/52 = 88%) and abstract words (47/52 = 90%). In the abstract word comprehension test (Uno, 2003), YT's score was 44/45 = 98% for spoken words and 43/45 = 96% for written 2-character kanji words.

Non-reading phonological abilities

Table 1 presents YT's results in 5 non-reading phonological tasks for both words and nonwords. YT's repetition ability was further examined in detail.

TABLE 1
YT's performance (% correct) in five phonological tasks not involving reading

                                      Words            Nonwords
Repetition: 2 mora (N = 48)           100              79
Repetition: 4 mora (N = 120)          98               65
Mora Discrimination (N = 60)          100              93
Mora Detection (N = 144)              65               62
Mora Segmentation (N = 72)            49               40
Mora Concatenation (N = 40)           98 (a), 100 (b)  73 (a), 50 (b)

(a) rate of presentation = 1 mora/sec; (b) rate of presentation = 2 sec between 1st and 2nd mora, 1 sec between 2nd and 3rd mora.

Repetition. YT was asked to repeat 48 two-mora words and 48 nonwords, with the latter created by reversing the first and the second mora of the 2-mora words. She was also asked to repeat a set of 120 three-, four- and five-mora words (N = 40 each) which are normally written in katakana, and 120 four-mora nonwords. The nonwords were created from the 4-mora katakana words by (i) transposing the second and the third mora, (ii) substituting one of the constituent morae with a different mora, and (iii) randomising the mora sequence.
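To make these three manipulations concrete, the following is a minimal illustrative sketch in Python of how such mora-level nonwords can be derived from a base word. The mora lists and the replacement mora are romanised examples of our own; this is not the authors' actual stimulus-construction procedure.

```python
import random

# Illustrative sketch only: a word is represented as a list of morae,
# here the romanised 4-mora katakana word /bi-ta-mi-N/ 'vitamin'.

def transpose_2nd_3rd(morae):
    """(i) Transpose the second and third morae, e.g. bi-ta-mi-N -> bi-mi-ta-N."""
    out = list(morae)
    out[1], out[2] = out[2], out[1]
    return out

def substitute_mora(morae, position, new_mora):
    """(ii) Replace one constituent mora with a different mora."""
    out = list(morae)
    out[position] = new_mora
    return out

def randomise_sequence(morae):
    """(iii) Shuffle the mora sequence until it differs from the original order."""
    out = list(morae)
    while out == list(morae):
        random.shuffle(out)
    return out

word = ["bi", "ta", "mi", "N"]
print(transpose_2nd_3rd(word))         # ['bi', 'mi', 'ta', 'N']
print(substitute_mora(word, 2, "ka"))  # ['bi', 'ta', 'ka', 'N'] (hypothetical replacement mora)
print(randomise_sequence(word))        # e.g. ['mi', 'N', 'bi', 'ta']
```

The transposed, substituted and randomised nonword sets referred to later, in the nonword reading experiments, are the same three categories of stimuli.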
While YT's performance was near perfect for word repetition, her repetition of nonwords was substantially impaired, especially for the longer strings. The majority of her errors were phonologically similar to the target (48/52 = 92%; 8/10 for 2-mora nonwords, 40/42 for 4-mora nonwords); of these 48 errors, 11 were lexicalisation errors such as /bi-ta-meN/ → /bi-ta-mi-N/ vitamin.

Phonological discrimination. In the phonetic discrimination test (Endo et al., 2000), consisting of 26 CV identical pairs and 26 CV different pairs involving 10 different phonemes (/d/, /g/, /t/, /z/, /m/, /n/, /k/, /s/, /p/, /b/), YT recognised consonant differences with no errors. In the mora discrimination task, pairs of items differing in one mora were presented to YT for same/different judgments. There were 60 word pairs (e.g., /ha-ke/ brush vs. /ha-ko/ box) and 60 nonword pairs (e.g., /no-yo/ vs. /nu-yo/). YT's judgments were flawless for the word pairs, but she made a few errors on nonword pairs.

Mora detection. Three sets of 3-mora spoken words and nonwords (N = 48 of each per set) were prepared with reference to three target morae, /ka/, /su/, and /mo/. The order of items was randomised for lexical status but blocked for target mora. In each blocked set, half of the stimuli contained the target (e.g., /ki-N-ka/, /su-mi-re/) and half did not; in those containing the target mora, it occurred in the initial, middle or final position in equal proportions. YT was simply asked to make a yes/no judgement as to whether each spoken stimulus contained the specified target mora. In this easy task with a 50% chance rate of success, YT's performance was very poor: 93/144 = 65% for words, 89/144 = 62% for nonwords. These scores are above chance, since the 95% confidence intervals do not include 50% (for words, the confidence interval on YT's score is 57–72%, and for nonwords, 54–69%), but not a great deal above chance.

Mora segmentation. The stimuli were 3 sets of 24 three-mora words or nonwords. In each blocked set, every stimulus (word or nonword) contained one of the target morae /ka/, /su/ or /mo/. YT was asked to judge the position of the target mora by pointing to one of three horizontally placed circles representing the 3 mora positions. As with mora detection, YT's scores were very poor: 35/72 = 49% for words and 29/72 = 40% for nonwords. The 95% confidence interval for words (37–60%) is just above the chance level of 33%, but the corresponding interval for nonwords (30–52%) fails to rule out chance performance. YT was least successful at locating the target when it occurred in middle position: scores for initial, middle and final position were 18/24, 7/24 and 10/24 for words, and 10/24, 6/24 and 13/24 for nonwords.

Mora concatenation. YT was given auditory presentation of 80 three-mora stimuli, 40 words and 40 nonwords; the nonwords were formed from the words either by transposing the second and third mora of the base word (e.g., /hi-yo-ko/ → /hi-ko-yo/) or by substituting a different mora in second or third position (e.g., /se-na-ka/ → /se-no-ka/). The stimuli were presented once at a rate of 1 mora/second, and a second time with a longer (~2-s) gap between the onsets of the first and the second morae.
Her task was to concatenate each separated sequence into a single utterance (in English, a real-word example from the first condition would be like hearing com – pu – ter at one syllable/second and being asked to produce the word 'computer' in response). YT's performance was near perfect for real words but much poorer for nonwords: 29/40 = 73% with a regular spacing of morae and 20/40 = 50% with a longer gap between the initial and middle morae.

Repetition of single and multiple items. YT's ability to repeat single words both immediately and after a delay (during which she counted from one to five) was tested for 100 words varying in imageability (Ogawa & Inamura, 1974; Itukushima, Ishihara, Nagata, & Koike, 1991) and familiarity (Amano & Kondo, 1999). She was near perfect for immediate repetition (98/100) and still moderately successful after a delay (84/100); in the delayed condition, her poorest performance was on low-imageability/low-familiarity words (13/20 = 65%). The stimuli for the multiple-item repetition task were 2- and 3-word strings of 3- and 4-mora words from three bands of imageability (Wydell, 1991). YT's ability to repeat word sequences was impaired even for 2-word strings (45/72 = 63%) and was severely impaired once list length reached three (7/48 = 15%). The impact of imageability on her success was significant for 3-word strings (high vs. low imageability bands, χ2(1) = 5.93, p < .02) and nearly so for 2-word sequences (high vs. low, χ2(1) = 3.37, p = .06). It is worth reporting that YT made two semantic errors in the delayed and 3-item repetition tasks (sickness /bjo-ki/ → sudden illness /kju-bjo/; poplar /po-pu-ra/ → a row of trees /na-mi-ki/).

Comment

These tests establish that YT had relatively well-preserved semantics combined with a significant phonological impairment. Her phonological deficit was exacerbated when the tasks required either (a) analysis or production of unfamiliar phonological strings (nonwords) or (b) a significant working-memory component. Phonological tasks such as repetition and mora concatenation revealed a substantial advantage for real words over nonwords; and some of the phonological tests with real words, such as immediate serial recall, indicated a substantial impact of the semantic 'richness' of the word stimuli (i.e., high vs. low imageability). The former effect might arise from the advantage conferred by the phonological familiarity of real words, or the fact that they have meaning, or both. The latter effect indicates that – at least for YT and probably for normal speakers as well (Romani, McAlpine, & Martin, 2008; Walker & Hulme, 1999) – word meaning influences the activation/maintenance of phonological representations in what some researchers consider a purely phonological task.

Reading aloud nonwords

Reading aloud kana characters

Kana characters are conventionally divided into three groups: (a) the basic set, comprising 46 hiragana/katakana characters which, apart from the nasal (/N/), correspond to V or CV morae (e.g., /a/, /ki/); (b) the diacritical set, comprising 25 hiragana/katakana characters which also correspond to CV morae but carry a diacritical mark representing a phonetic distinction such as voicing (e.g., /sa/ without the mark vs. /za/ with it); and (c) the complex set, comprising 36 hiragana/katakana 2-character compounds corresponding to CjV morae (e.g., /mjo/).
All normal Japanese adults, and even children who have just learned to read, achieve 100% correct in reading aloud these single kana characters. As shown in Table 2, YT was very poor in reading aloud both types of kana characters, with errors consisting of production of a different CV within the same set or no response. In attempting to read several kana characters in the basic set, YT used the order of the kana list8 (e.g., /o/ → /a, i, u, e, o/ then /o/) and she also used lexical relay-words for reading other sets of kana (e.g., /zu/ → /mi-zu/ water then /zu/), though these strategies were not always successful.

8 This is called 'Gojyu-On-Hyou' (a Japanese syllabary list), in which kana characters are arranged in order, and this list is used for learning kana.

TABLE 2
YT's reading performance for single kana characters and nonwords

                                          % Correct (N)
Single hiragana characters
  Basic set                               74 (46)
  With diacritical mark                   36 (25)
  Complex set                             8 (36)
  Full set                                43 (107)
Single katakana characters
  Basic set                               63 (46)
  With diacritical mark                   16 (25)
  Complex set                             6 (36)
  Full set                                33 (107)
Nonwords
  Katakana (2 characters, 2 mora)         42 (48)
  Katakana (4 characters, 4 mora)         3 (120)
  Kanji (2 characters, 4 mora)            7 (120)

Reading aloud nonwords in kana and kanji

The katakana stimuli were the 48 two-mora nonwords and the 120 four-mora nonwords used in the nonword repetition task. None of the stimuli contained any complex CjV mora like /kju/. As shown in Table 2, YT was moderately impaired for 2-character kana nonwords (42%) and failed almost completely to read 4-character kana nonwords (3% correct). The kanji nonwords were the 120 two-character (4-mora) kanji nonwords from Fushimi et al. (1999), created by combining pairs of real kanji characters that cannot go together to form a real word. YT's kanji nonword reading was severely impaired (7%). Since normal readers' accuracy on such materials is 100% for kana nonwords and 88% for kanji nonwords, YT's failure to read aloud kana/kanji nonwords was striking.

The nature of YT's nonword reading errors

The majority of YT's errors in kana nonword reading were lexicalisations (61/117 = 52%), which occurred most frequently in the transposed nonwords (N = 29) as compared to the substituted nonwords (N = 18) and the randomised nonwords (N = 14). In 44/61 cases, YT produced the base word (e.g., /a-ro-i-N/ → /a-i-ro-N/ iron). In 17/61 cases, she produced a real word whose constituent kana characters are similar to the stimulus (e.g., /so-maN-ra/ → /so-ra-ma-me/ broad bean). In many cases, YT indicated, after producing the lexicalisation error, that she knew it was wrong, but she seemed unable to inhibit these responses that she knew to be incorrect. YT also produced phonological (or visual) errors (49/117 = 42%), often with only one incorrect mora, as in /ko-N-ra-bu/ → /o-N-ra-bu/. In some trials, YT used the same strategies evident in her single kana character reading for attempting to arrive at the target pronunciation: (i) going serially through the order of the kana list, and/or (ii) using a lexical relay-word. Unrelated responses were rare (7/117 = 6%).

In kanji nonword reading, YT made two types of lexicalisation error. In one type, she pronounced a word sharing either the first or the second kanji character with the stimulus (57/112 = 51%), as in /deN-zoku/ → /deN-wa/ telephone. In a second type, YT produced a word semantically
associated with a constituent kanji character (27/112 = 24%), as in a nonword containing a character meaning teach → teacher (this bound morpheme presumably evoked the associated word teacher). Non-lexicalisation errors included (i) the concatenation of a legal pronunciation for one character and an illegal pronunciation of the other character (6/112 = 5%), as in /keQ-tai/ → /tei-obi/ (the second character has two pronunciations, /tai/ and /obi/); (ii) unrelated responses (6/112 = 5%); and (iii) omissions (16/112 = 14%).

Comment

YT's oral reading of nonwords in both kana and kanji was grossly impaired, and she made prominent lexicalisation errors. This 'lexical capture' (Funnell & Davison, 1989; Patterson, Suzuki, & Wydell, 1996) occurred somewhat more frequently in response to kanji than kana nonwords (75% vs. 52%). In contrast, YT produced far more phonological errors in response to kana than kanji nonwords (42% vs. 5%).

Reading aloud words in kana and kanji

Both katakana and kanji words were manipulated by concreteness9 and imageability (Wydell, 1991). Mean familiarity (Amano & Kondo, 1999) was equated across all stimulus classes. The psycholinguistic characteristics of the reading stimuli are shown in Table 3. Mean imageability of concrete and abstract words was parallel to that of high- and low-imageability words.

9 The assignment of concrete and abstract words was based on the author's subjective judgment, in which object names (e.g., desk) were classified as concrete words, and nouns representing non-visible things (e.g., honesty) or time and space (e.g., present, left) were classified as abstract words.

Set 1: Concrete and abstract words

Katakana words comprised the 120 words used in the word repetition task, with 20 words in each of six conditions formed by crossing two bands of concreteness (concrete vs. abstract nouns) with three bands of word length (3, 4, or 5 characters). An advantage for concrete words (concrete vs. abstract words: 53/60 = 88% > 44/60 = 73%) was marginally significant (χ2(1) = 3.56, p = .059). Within abstract words, there was a reliable effect of word length (3- vs. 5-mora words: 17/20 = 85% > 11/20 = 55%; χ2(1) = 4.29, p = .038). A simultaneous multiple logistic regression analysis with imageability, familiarity, word frequency and word length as four predictors revealed a significant imageability effect (Wald = 4.27, p = .039) and a marginal word-length effect (Wald = 3.69, p = .055).

Kanji words were 104 single-character words, comprising 52 concrete and 52 abstract words. YT showed a marked concreteness effect (concrete vs. abstract words: 45/52 = 87% > 25/52 = 48%, χ2(1) = 18.99, p = .00001). A simultaneous multiple logistic regression analysis was performed with the same four predictors as in the analysis for katakana words (note that the single kanji words in this set have pronunciations with 2, 3 or 4 morae, so the word-length factor here refers only to spoken – not written – word length) and revealed significant effects of imageability (Wald = 7.47, p = .0062) and familiarity (Wald = 6.48, p = .011).

Set 2: High- and low-imageability words

Katakana words comprised 60 high- and 60 low-imageability words. Kanji words consisted of 120 single-character and 120 two-character words, with 60 in each of two imageability bands (high and low). As shown in Table 3, there was a marked imageability effect on kanji word reading (high vs.
low imageability words: in single-character kanji words, 41/60 = 68% > 22/60 = 37%, χ2(1) = 9.66, p = .0019; in 2-character kanji words, 43/60 = 72% > 22/60 = 37%, χ2(1) = 12.12, p = .0005). High imageability produced a numerical but not statistically reliable advantage in katakana word reading (40/60 = 67% > 34/60 = 57%, χ2(1) = 1.27, p = .25).

TABLE 3
Characteristics of reading stimuli and YT's reading performance (% correct) for katakana/kanji words and hiragana transcriptions

Katakana
                               Set 1: Concrete             Set 1: Abstract             Set 2: High Imag.   Set 2: Low Imag.
                               (N = 60)                    (N = 60)                    (N = 60)            (N = 60)
Example (meaning, pron.)       apron, e-pu-ro-N            thrill, su-ri-ru            paint, pe-N-ki      alibi, a-ri-ba-i
Mean Imageability (range)a     6.7 (4.3–7.0)               4.9 (3.1–6.9)               6.6 (6.5–6.9)       4.7 (2.4–5.4)
Mean Familiarity (range)a      6.1 (3.6–6.6)               6.0 (3.3–6.7)               6.0 (5.1–6.3)       6.0 (5.5–6.4)
Mean Mora (range)              4.0 (3–5)                   4.0 (3–5)                   3.9 (3–5)           3.9 (3–5)
Words                          88 (3/4/5 mora: 95/95/75)   73 (3/4/5 mora: 85/80/55)   67                  57
Mean Orth. Plaus. (range)b     1.4 (1.1–3.4)               1.2 (1.1–2.2)               1.2 (1.1–2.2)       1.3 (1.1–2.4)
Pseudohomophones               70 (3/4/5 mora: 80/75/55)   48 (3/4/5 mora: 65/50/30)   57                  43

Single-character kanji
                               Set 1: Concrete    Set 1: Abstract    Set 2: High Imag.   Set 2: Low Imag.
                               (N = 52)           (N = 52)           (N = 60)            (N = 60)
Example (meaning, pron.)       chestnut, kuri     love, ai           temple, tera        puzzle, nazo
Mean Imageability (range)a     6.9 (6.8–7.0)      4.8 (4.1–5.4)      6.7 (6.5–6.9)       4.9 (3.8–5.5)
Mean Familiarity (range)a      6.1 (4.6–6.7)      5.8 (4.5–6.7)      6.0 (5.1–6.8)       6.0 (5.2–6.7)
Mean Mora (range)              2.3 (2–4)          2.3 (2–4)          2.3 (2–4)           2.3 (2–4)
Words                          87                 48                 68                  37
Mean Orth. Plaus. (range)b     3.6 (3.0–4.1)      3.3 (2.6–4.3)      3.5 (2.8–4.1)       3.3 (2.7–4.3)
Pseudohomophones               88                 56                 88                  72

Two-character kanji (Set 2)
                               High Imag. (N = 60)   Low Imag. (N = 60)
Example (meaning, pron.)       swan, haku-cjou       expectation, ki-tai
Mean Imageability (range)a     6.6 (6.4–6.9)         4.8 (4.6–4.9)
Mean Familiarity (range)a      6.1 (5.9–6.4)         6.1 (5.7–6.4)
Mean Mora (range)              3.2 (2–4)             3.2 (2–4)
Words                          72                    37
Mean Orth. Plaus. (range)b     2.8 (2.3–4.0)         2.6 (2.3–3.1)
Pseudohomophones               68                    53

Note: Mean Orth. Plaus. = Mean Orthographic Plausibility. a Imageability and Familiarity are on a 7-point scale. b Orthographic plausibility (see footnote 5) for hiragana transcriptions is on a 5-point scale.

The nature of YT's word reading errors

Table 4 presents the analysis of YT's oral reading errors as a function of script type and character-length of kanji words.

TABLE 4
Proportion of different error types in YT's word reading

Error type                          Katakana words     Single-character kanji words   Two-character kanji words
                                    (N = 69 errors)    (N = 92 errors)                (N = 54 errors)
Semantic (word)                     0.03               0.24                           0.13
Semantic (gesture, onomatopoeia)    0.04               0.13                           0.09
Semantic-visual                     0.01               0.13                           0.18
Phonological*                       0.54               0.05                           0.05
Visual                              0.00               0.02                           0.00
Legitimate alternative reading      …                  0.04                           0.00
One character correct               …                  …                              0.13
Unrelated                           0.16               0.12                           0.13
Don't know / No response            0.21               0.26                           0.29

Note: *In katakana word reading, phonological errors are also visually similar to the stimuli.

For katakana words combined across Sets 1 and 2, the majority of YT's errors were phonological (37/69 = 54%), fairly equally split between responses that were words (17/69, e.g., /do-ra-mu/ drum → /do-ra-ma/ drama) or nonwords (20/69, e.g., /sa-N-da-ru/ sandal → /ha-N-da-ru/). She produced very few semantically related errors: two semantic associates (spare → key, potato → chip); one response that was perhaps semantically and visually related to its target (cobra → koala); and three semantically related gestures (e.g., piano → gesture of
These errors occurred only to concrete/high imageability words. She also produced some unrelated responses and omissions. In contrast, YT made a prominent proportion of semantically related errors (68/146 = 47%) in reading aloud kanji words combined across Sets 1 and 2. These included ‘pure’ semantic errors (29/146 = 20%) as in /gai-ro/ street → /nami-kimichi/ avenue, /kai-shi/ beginning → /sjuQpatsu/ starting; visual-and-semantic errors (22/146 = 15%) as in /sou-ko/ storehouse → /sjako/garage; /ei-zoku/ permanence → /ei-eN/ eternity; /hanashi/ talk → /deN-wa/ telephone; and some semantically related gestures or onomatopoetic responses (17/146 = 12%). Only a few of YT’s kanji-word reading errors were phonologically (but not visually or semantically) related to the target (8/146 = 5%), and almost all of these responses were words (7/8, e.g., /sei-buN/ ingredient → /sei-butsu/ living thing). YT made very few visual errors (3/146 = 2%; e.g., /koi/ love → /mado/ window; /ke-mushi/ caterpillar → /mou-fu/ blanket). There were some unrelated responses (17/146 = 12%), and a substantial number of omissions (39/146 = 27%). YT also produced a few legitimate alternative pronunciations in single-character kanji word reading (4/146 = 3%) and a few responses containing the correct pronunciation for one constituent kanji character in 2-character kanji word reading (7/146 = 5%; ‘one character correct’ in Table 4). Comment YT demonstrated salient concreteness/imageability effects on kanji word reading, whereas these effects were marginal in her katakana word reading. Her error pattern was notably different depending on the script type, with many more semantic errors in kanji word reading and many more phonological errors in kana word reading. Reading aloud pseudohomophones Kana pseudohomophones The final row of Table 3, labeled ‘Pseudohomophones’, presents YT’s accuracy in reading aloud hiragana transcriptions of all of the katakana words and kanji words for which her reading performance is indicated two rows above, labelled ‘Words’. These hiragana transcriptions can be treated as pseudohomophones because, although the stimuli are not presented in their normal, familiar orthographic form (which would be either katakana or kanji), their correct pronunciations correspond to real words. YT’s reading performance for all kana pseudohomophones (2–5 mora length) averaged 375/584 = 64% correct, demonstrating a considerable advantage for pseudohomophones over nonhomophonic nonwords (see Table 2). A simultaneous multiple logistic regression analysis was performed on YT’s success in reading hiragana pseudohomophones transcribed from katakana words and kanji words (131/240 = 55% and 244/ 344 = 71%, respectively). The three factors were imageability (Wydell, 1991) of the base words, mora-length, and orthographic plausibility (Kondo & Amano, 1999; see footnote 5) of the hiragana transcriptions; this latter factor refers Downloaded by [Flinders University of South Australia] at 16:53 09 January 2015 518 SATO ET AL. to ratings of the familiarity of seeing a particular word written in hiragana (mean values are shown in the penultimate row of Table 3). The analysis for the items taken from katakana base words revealed a significant effect of imageability (Wald = 6.95, p = .0084); for the items taken from kanji base words, there were significant effects of imageability (Wald = 12.55, p = .0004) and orthographic plausibility of hiragana transcriptions (Wald = 17.56, p < .0001). 
For Sets 1 and 2 combined, YT showed an advantage for reading katakana words over their hiragana pseudohomophones (171/240 = 71% > 131/240 = 55%) but an advantage for hiragana pseudohomophones over their original kanji words (244/344 = 71% > 198/344 = 58%). The former effect (an orthographic lexicality advantage) was significant both in the concrete/high-imageability band (93/120 = 78% > 76/120 = 63%, χ2(1) = 5.78, p = .0162) and in the abstract/low-imageability band (78/120 = 65% > 55/120 = 46%, χ2(1) = 8.92, p = .0028). The latter effect (a phonological transparency advantage) was significant in the abstract/low-imageability band only (104/172 = 60% > 69/172 = 40%, χ2(1) = 14.25, p = .0002).

Kanji pseudohomophones

The test stimuli consisted of 80 two-character kanji nonwords, half homophonic with real words and half non-homophonic. Both types of nonword were created from 40 two-character consistent kanji words10 (20 each of high/low frequency) from Patterson, Suzuki, Wydell, and Sasanuma (1995). For kanji pseudohomophones, the first or second constituent character of the base word was changed to a different kanji character that has the same pronunciation and no alternative pronunciation (e.g., memory /ki-oku/ → a nonword also pronounced /ki-oku/). The first and second characters of the base words were reversed to create the kanji non-homophonic nonwords. YT's reading performance in both conditions was low: 8/40 = 20% for pseudohomophones and 3/40 = 8% for non-homophonic nonwords (cf. the mean scores of normal controls, N = 8, mean age = 58, were 98% and 95%, respectively). YT's poor performance here is compatible with her results (see Table 2) in reading aloud the 120 two-character kanji nonwords from Fushimi et al. (1999). YT showed only a numerical advantage for kanji pseudohomophones over kanji non-homophonic nonwords (χ2(1) = 2.64, p = .10).

10 Each constituent character of a 2-character consistent kanji word has only one possible pronunciation. YT's reading accuracy for these high/low frequency kanji words was 15/20 = 75% and 10/20 = 50%, respectively.

The nature of YT's pseudohomophone reading errors

YT's errors in reading hiragana pseudohomophones were (i) phonologically/visually similar responses (91/209 = 44%), which consisted of words (63/91) and nonwords (28/91); (ii) omissions (72/209 = 34%); (iii) unrelated errors (39/209 = 19%); and (iv) a few semantic errors (7/209 = 3%), as in /ne-tsu/ heat → /a-se/ sweat. Of YT's errors in reading kanji pseudohomophones, there were (i) multiple responses related to each constituent character (5/32 = 16%), as in /mitsu-yu/ → /mitsu/ and /yu-kai/; (ii) semantic and/or semantic-visual errors (6/32 = 19%), as in a pseudohomophone of /eki-iN/ station employee → /fumi-kiri/ railroad crossing; (iii) visual errors (3/32 = 9%), as in a pseudohomophone of /shiN-sa/ examination → /shin-seN/ fresh; (iv) phonological errors (2/32 = 6%), as in a pseudohomophone of /mitsu-rjo/ poaching → /mitsu-rou/; (v) incomplete responses (5/32 = 16%), in which the correct pronunciation of only one constituent character was produced, as in /sai-nou/ → /nou/; (vi) unrelated responses (5/32 = 16%); and (vii) omissions (6/32 = 19%).

Comment

YT's reading performance demonstrated a dramatic advantage for kana pseudohomophones relative to non-homophonic kana nonwords. The imageability of the base words also influenced YT's success in kana pseudohomophone reading.
By contrast, constructing kanji nonwords with a pronunciation corresponding to real words yielded only a small and non-significant boost to YT's severe impairment of kanji nonword reading. The error patterns in reading pseudohomophones written in hiragana or kanji reflected the nature of phonographic kana and morphographic kanji characters. Although YT demonstrated an orthographic lexicality effect on reading aloud kana strings (i.e., katakana words > hiragana transcriptions), the pattern was very different for kanji base words: a non-significant difference between the orthographically familiar kanji forms and their hiragana transcriptions for concrete words, and a clear advantage for the hiragana transcriptions over the kanji words for abstract/low-imageability items.

Word reading and picture naming using an incremental cueing procedure

This experiment was designed to examine the interaction between phonology, orthography and semantics in YT's language system, by determining whether word reading and picture naming would be facilitated by phonological cues, and in particular whether cueing would block semantic errors in these tasks. The stimulus words, all corresponding to picturable nouns, comprised 120 two-character kanji words, with 30 words in each of four conditions formed by crossing two bands of familiarity (Amano & Kondo, 1999) with two bands of spoken word length (3 or 4 morae). The mean familiarity (range) was 6.1 (5.8–6.6) for high-familiarity words and 5.4 (4.5–5.7) for less familiar words. YT was asked to name the pictures and read aloud the words for these 120 items in an ABBA design. She was given up to 15 s for the first response to each item. If she failed to produce the correct response, the initial mora was provided as a cue. If this single-mora cue failed to elicit the correct response, she was given the first + second morae (i.e., this was an incremental cueing technique). For 3-mora items, cueing stopped after this point; for 4-mora items, cueing was further extended to incorporate the third mora if the 2-mora cue was insufficient. In all cases, therefore, cueing stopped at the target word minus one mora.

Considering first YT's uncued performance: as shown in Figure 2, familiarity had a significant impact on her kanji word reading (χ2(1) = 11.11, p = .0009), but word length (i.e., number of morae) did not have a reliable effect (χ2(1) = 1.23, p = .26). YT's success in picture naming was modulated by both word length (χ2(1) = 9.70, p = .0018) and familiarity (χ2(1) = 4.06, p = .0439). YT's uncued production accuracy was a little higher in word reading (70/120 = 58%) than in picture naming (55/120 = 46%; McNemar test: χ2(1) = 4.56, p < .05).

Next we consider the impact of cueing on reading and picture naming, both generally – i.e., for all initially incorrect responses – and specifically for cases where YT's initial response was a semantic error. Out of YT's total incorrect first responses in word reading (N = 50), she produced 27/50 = 54% of words correctly after the first-mora cue and 46/50 = 92% correctly after incremental cueing. The corresponding figures for picture naming (N = 65 initially incorrect responses) were 30/65 = 46% correct with a single mora and 59/65 = 91% after incremental phonological cueing.
For initial responses classified as pure semantic errors, the impact of cueing was as follows: in reading, 8/10 = 80% correct after the first-mora cue and 10/10 = 100% after incremental cues; in naming, 20/37 = 54% after one cue and 35/37 = 95% correct at the end of cueing.

Figure 2. YT's performance in oral reading and picture naming.

Comment

YT's marked benefit from phonological cueing in both word reading and picture naming reflects the interaction between phonology and orthography/semantics. As previously demonstrated by Katz and Lanzoni (1997) in an English-speaking deep dyslexic patient, additional phonological information essentially eliminated YT's semantic errors in these tasks. This suggests that there was very little amiss in YT's orthographic and semantic processing. Rather, her phonological system was so impaired that neither orthographic nor semantic (picture) input on its own could produce a normal degree of phonological activation for the correct target word. This circumstance allowed semantically related responses to emerge in both tasks. When the insufficient phonological activation produced by a kanji word or a picture was given the opportunity to combine with phonological information provided by cues, YT almost invariably produced the correct response.

GENERAL DISCUSSION

Our experimental investigations of YT have provided a picture of an aphasic patient with good comprehension of both spoken and written words in kana and kanji; poor receptive and expressive phonological skills in all tasks, especially for nonwords and even for real words if the task required phonological working memory; deep dyslexia in kanji; and phonological dyslexia in kana. It is time to relate these results to the questions motivating this paper. We should note before we embark on this discussion that general questions of this kind can never be definitively answered via single-case studies: even if the results from a particular case strongly support one set of conclusions, a new patient may turn up tomorrow whose data apparently tell a different story. For this reason, secure conclusions must ultimately be based on an accumulation of results from a series of neuropsychological case studies or, better still, from studies based on case series.

The first issue concerns the nature of deep and phonological dyslexia. Are they distinct disorders (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), or do they just occupy different positions on a continuum (e.g., Crisp & Lambon Ralph, 2006)? Are these disorders specific to reading, or are their features mirrored by the patients' performance in tasks that do not involve orthography? Can all of the symptoms in either or both conditions be explained by a single deficit (Patterson & Lambon Ralph, 1999), or are multiple impairments involved (e.g., Friedman, 1996; Glosser & Friedman, 1990; Klein, Behrmann, & Doctor, 1994)? The other main issue – which may seem separate but in this case is not, because YT's language is Japanese – is whether reading mechanisms for kana and kanji are substantially the same or fundamentally different.

As summarised in the Introduction, the major features of deep dyslexia are a virtually complete failure to read aloud nonwords, a major impact of imageability on success in word reading, and a well-above-chance rate of semantic errors in single-word reading. YT's reading of kanji fits this description precisely.
Phonological dyslexia is also characterised by (a) very poor (though not necessarily abolished) nonword reading, sometimes with a significant advantage for pseudohomophones; (b) an advantage for words high in imageability (though again this may not be as striking as in deep dyslexia); but (c) no notable number of semantic errors in single-word reading. YT's reading of kana fits this description precisely. It should be noted that pseudohomophone reading has rarely been tested in deep dyslexia, but at least two such patients have shown an increase from the floor levels of reading 'ordinary' nonwords when asked to read pseudohomophones (Buchanan, Kiss, & Burgess, 2000; Buchanan, McEwen, Westbury, & Libben, 2003).

Does YT have two different acquired reading disorders? A more likely interpretation is that she has a single disorder with largely similar but also somewhat different manifestations that result from the nature of the two forms of Japanese writing. The most important difference between kana and kanji is that kana characters translate to phonology in a perfectly consistent and predictable fashion. It should also be noted that, although there are inevitably some variations in frequency of occurrence within kana characters (for example, the basic set vs. the complex set), all kana characters, which are taught as part of the curriculum in the first year of primary school, are highly familiar to any Japanese reader. Kanji characters, which are acquired over a long period of time through formal education and self-study, have a much less predictable relationship to phonology and a much greater range of familiarity. There are some kanji characters with only a single pronunciation, but these tend to be low-familiarity characters (Fushimi et al., 1999); most of the commonly used kanji characters do not specify a single pronunciation. Furthermore, kanji characters offer information about word meaning in a way that kana characters do not.

The implication of these differences is that, for a normal Japanese reader, encountering a word written in kana will evoke a rapid, strong, single phonological representation, with less strong semantic activation directly from the orthography. By contrast, a word written in kanji will produce some direct phonological activation; but this will be less consistent and thus less rapid/strong than in kana; and relative to kana, kanji will also yield more direct semantic information.

Now let us take this picture to a patient like YT with a significantly impaired phonological system. We have not done an exhaustive investigation of the nature of her phonological deficit, but it seems clear that her brain injury resulted in a substantially impaired capacity to achieve and/or maintain the phonological representations necessary to support normal speech production. This problem was exaggerated if the material to be produced (1) lacked familiarity and meaning (words >> nonwords even in tasks requiring only production of a single item, such as single-item repetition or concatenation of individual morae into a single utterance); (2) was long (repetition of single words vs. 2-word sequences vs. 3-word sequences = 98% vs. 63% vs. 15%; repetition of 2-mora nonwords vs. 4-mora nonwords = 79% vs. 65%); and/or (3) required phonological working memory (as in tasks like delayed or multi-word repetition and phonological segmentation or concatenation).
The implication of these differences is that, for a normal Japanese reader, encountering a word written in kana will evoke a rapid, strong, single phonological representation, with less strong semantic activation directly from the orthography. By contrast, a word written in kanji will produce some direct phonological activation, but this will be less consistent and thus less rapid/strong than in kana; and, relative to kana, kanji will also yield more direct semantic information.

Now let us take this picture to a patient like YT with a significantly impaired phonological system. We have not conducted an exhaustive investigation of the nature of her phonological deficit, but it seems clear that her brain injury had resulted in a substantially impaired capacity to achieve and/or maintain the phonological representations necessary to support normal speech production. This problem was exaggerated if the material to be produced (1) lacked familiarity and meaning (words >> nonwords even in tasks requiring only production of a single item, such as single-item repetition or concatenation of individual morae into a single utterance); (2) was long (repetition of single words vs. 2-word sequences vs. 3-word sequences = 98% vs. 63% vs. 15%; repetition of 2-mora nonwords vs. 4-mora nonwords = 79% vs. 65%); and/or (3) required phonological working memory (as in tasks like delayed or multi-word repetition and phonological segmentation or concatenation). This all suggests an under-activated, fragile phonological system that could only achieve something approaching normal performance under the easiest, most optimal phonological conditions.

How did YT's fragile phonological system behave in reading? Kana characters still evoked some phonological activation, presumably due to their strong pre-morbidly learned relationship to pronunciation. YT was able to pronounce 63% of the basic set of katakana characters and 42% of 2-character katakana nonwords; and other features of her reading performance suggest that, even when the phonological activation produced directly by kana orthography was not strong enough to surpass the threshold for speech production, it was present and could be boosted by other sources of information. Furthermore, in kana word reading, her errors were typically related to the target words in sound and almost never in meaning. Thus, phonological activation from kana word stimuli, even if not sufficiently strong to guarantee a correct pronunciation, was strong enough to prevent semantic errors. Richer semantic representations, as in concrete vs. abstract or high- vs. low-imageability words, had only a mildly beneficial effect on YT's reading of katakana words (a 10–15% boost), suggesting that direct activation of semantics by kana orthography was present but rather limited. By contrast, in one of the most dramatic results of this study, the richer semantic representations of high-imageability or concrete words produced at least as much if not more benefit – about a 15–30% boost (bottom row of Table 3) – on YT's reading of hiragana pseudohomophones. These stimuli are unlikely to produce direct activation of semantics by orthography, since they are not orthographically familiar. The probable source of the benefit is that the written string yields phonological activation that is subthreshold for speech production but sufficient to interact with and be boosted by semantic representations. As argued by Patterson et al. (1996), it is almost certainly the processing sequence orthography → phonology → semantics → phonology that underlies YT's advantage for hiragana pseudohomophones of concrete/high-imageability katakana or kanji base words.

Relative to kana, kanji evoked a smaller degree of direct phonological activation. There is nothing in kanji comparable to a single kana character, because single kanji characters are morphemes, that is, words or parts of words. When the meaning of these characters was not very rich, as in abstract or low-imageability single-kanji-character words (ranging from 2 to 4 morae), YT's reading was poor (37–48% correct), whereas her reading of the 3–4 mora katakana words with abstract meanings in Set 1 was 80–85% correct. A statistical comparison of these performances would not make sense because neither the stimuli nor the responses are matched, but the discrepancy in success is clear. YT's poor reading success for kanji pseudohomophones (compared to kana pseudohomophones) can also be attributed to significantly less direct phonological activation, preventing any real benefit from the interaction between phonology and semantics. On the other hand, relative to kana, kanji evoked a larger degree of direct semantic activation. The 10–15% benefit from concrete/high-imageability status in katakana word reading turned into a 31–39% boost in kanji word reading.
Even this boost, however, was not sufficient to counteract YT's impoverished phonological activation from kanji, with the result that she made a significant number of semantic errors in oral reading of high-imageability kanji words. With additional phonological information from cueing, semantic errors were essentially abolished.

It is of course possible to interpret these patterns as reflecting qualitatively different mechanisms for reading kana and kanji, but we see nothing that demands such an account and would opt instead for a quantitative difference. That is, kana is more efficient at activating phonology; kanji is more efficient at activating semantics; but both processes occur in both orthographies (Sato, 2007). When the phonological system is damaged, these different degrees of efficiency naturally and perhaps even necessarily give rise to a pattern, on a continuum, of phonological dyslexia in kana and deep dyslexia in kanji. In other words, we predict that identical patterns of performance and error types for kanji and kana will never be observed. Note that this analysis is highly compatible with the summation hypothesis of Hillis and Caramazza (1991, 1995), in which phonology arising from a semantic source and from a non-semantic source (orthography or external phonology as in cueing) can interact and sum to produce more substantial/more accurate phonological activation than either source on its own. What the study of YT adds to this account is evidence from a single task – i.e., reading – in which the patient's language incorporates two forms of orthography with different inherent connection strengths to semantics and phonology.
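To illustrate how a purely quantitative difference of this kind could generate both error patterns, consider the following toy summation sketch. It is our illustration, not a fitted model: the threshold, damage factor, and connection strengths are invented numbers standing in for the assumptions in the text, namely stronger direct orthography-to-phonology connections for kana, stronger orthography-to-semantics connections for kanji, and a phonological system whose activation is globally scaled down by damage.

```python
# Toy illustration of the summation account (hypothetical parameters, not
# fitted to YT's data).  Phonological activation of the target pronunciation
# is the damage-scaled sum of a direct orthographic contribution, a
# semantically mediated contribution, and any external cue; it must exceed a
# production threshold for the word to be read aloud correctly.

THRESHOLD = 1.0      # arbitrary production threshold
PHON_DAMAGE = 0.55   # scales all phonological activation after brain injury

def target_activation(direct_o_to_p: float, o_to_s: float,
                      imageability: float, cue: float = 0.0) -> float:
    """Sum the direct and semantically mediated phonological support."""
    semantic_route = o_to_s * imageability   # orthography -> semantics -> phonology
    return PHON_DAMAGE * (direct_o_to_p + semantic_route + cue)

def predicted_outcome(script: str, imageability: float, cue: float = 0.0) -> str:
    # Assumed relative connection strengths: kana = strong O->P, weak O->S;
    # kanji = weaker O->P, stronger O->S.  (Invented numbers.)
    direct, o_to_s = (1.6, 0.3) if script == "kana" else (0.7, 1.2)
    if target_activation(direct, o_to_s, imageability, cue) >= THRESHOLD:
        return "correct"
    # Below threshold: if the semantic route dominates the weak phonology, a
    # semantically related competitor can win; otherwise the attempt fails or
    # comes out phonologically distorted.
    return ("semantic error" if o_to_s * imageability > direct
            else "other error (often phonologically related) or omission")

if __name__ == "__main__":
    for script in ("kana", "kanji"):
        for imageability, label in ((0.2, "low-imageability"),
                                    (0.9, "high-imageability")):
            print(f"{script:5s} {label}: {predicted_outcome(script, imageability)}")
    print("kanji high-imageability + phonological cue:",
          predicted_outcome("kanji", 0.9, cue=0.5))
```

With these arbitrary settings the sketch reproduces only the qualitative pattern reported above: sound-based errors for kana, semantic errors for high-imageability kanji, and correct responses once cue phonology is added to the sum; nothing hinges on the particular numbers.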
From a cross-linguistic point of view, it is worth noting that the paucity of semantic errors in YT's kana reading echoes the rarity of such errors in patients who read other relatively transparent alphabetic writing systems such as Spanish (e.g., Ardila, 1991; Ferreres & Miravalles, 1995) and Italian (Miceli, Capasso, & Caramazza, 1994). It is even more reminiscent of the differential error patterns observed in a few bilingual patients who used two orthographies differing in print-sound consistency. An English-speaking and -reading deep dyslexic patient who could also read Nepalese (which uses a syllabic script) produced only one semantic error in reading aloud 50 Nepalese words (Byng, Coltheart, Masterson, Prior, & Riddoch, 1984). An Arabic/French bilingual patient showed deep dyslexia in both languages, but the rate of semantic errors was lower in the transparent Arabic orthography than in French orthography, with its less consistent relationship to phonology (Béland & Mimouni, 2001). Both YT and these previous case studies suggest an inherent relationship between the phonological transparency of an orthography and the rate of semantic errors in patients with reading disorders on the phonological-deep continuum.

Finally, we think that YT's results favour the view that deep and phonological dyslexia are not isolated reading disorders but rather reflect, in the reading domain, the presence and severity of a general phonological deficit. The characteristics of YT's phonological/deep dyslexia, such as the impact of familiarity (words vs. nonwords), meaningfulness (high- vs. low-imageability words) and phonological length, all find equivalents in her performance in non-reading speech-production tasks.

This is perhaps the most difficult issue to address from the perspective of a single case study, and we acknowledge (a) that there are several case reports in the literature of patients with impaired nonword reading but preserved phonological performance in non-reading tasks (Bisiacchi, Cipolotti, & Denes, 1989; Caccappolo-van Vliet, Miozzo, & Stern, 2004a, 2004b; Derouesné & Beauvois, 1985); and (b) that many neuropsychological researchers still assume that dissociations are important and associations are not. We think, and hope, that the strength of this assumption is gradually diminishing as larger case series are being assessed (see, for example, Woollams, Lambon Ralph, Plaut, & Patterson, 2007, on the prominent association between surface dyslexia and semantic impairment). We also hope that researchers are starting to pay more attention to degrees of deficit rather than black-and-white categorisations. In this context, we note that a recently reported phonological dyslexic patient with good non-reading phonological skills, JH, correctly read aloud 70% of a set of 132 nonwords, and even achieved 58% correct on a set of 43 long and complex nonwords (Tree & Kay, 2006). We do not claim that this is normal performance, but JH's phonological dyslexia was very mild relative to a case like YT's. We predict that, as cases accumulate in the literature (Crisp & Lambon Ralph, 2006), the severity of a patient's phonological dyslexia will – despite some inevitable individual differences – be largely predictable from the extent of his or her general phonological deficit.

Original manuscript received 14 May 2007
Revised manuscript accepted 6 June 2008

REFERENCES

Amano, S., & Kondo, T. (1999). NTT Database Series: Lexical properties of Japanese, Vol. 1. Word familiarity. Tokyo: Sanseido [in Japanese].
Ardila, A. (1991). Errors resembling semantic paralexias in Spanish-speaking aphasics. Brain and Language, 41, 437–445.
Barry, C., & Richardson, J. T. E. (1988). Accounts of oral reading in deep dyslexia. In H. A. Whitaker (Ed.), Phonological processes and brain mechanisms (pp. 118–171). New York: Springer-Verlag.
Beauvois, M. F., & Derouesné, J. (1979). Phonological alexia: Three dissociations. Journal of Neurology, Neurosurgery and Psychiatry, 42, 1115–1124.
Béland, R., & Mimouni, Z. (2001). Deep dyslexia in the two languages of an Arabic/French bilingual patient. Cognition, 82, 77–126.
Bisiacchi, P. S., Cipolotti, L., & Denes, G. (1989). Impairment in processing meaningless verbal material in several modalities: The relationship between short-term memory and phonological skills. The Quarterly Journal of Experimental Psychology, 41A, 293–319.
Buchanan, L., Kiss, I., & Burgess, C. (2000). Phonological and semantic information in word and nonword reading in a deep dyslexic patient. Brain and Cognition, 43, 65–68.
Buchanan, L., McEwen, S., Westbury, C., & Libben, G. (2003). Semantics and semantic errors: Implicit access to semantic information from words and nonwords in deep dyslexia. Brain and Language, 84, 65–83.
Byng, S., Coltheart, M., Masterson, J., Prior, M., & Riddoch, J. (1984). Bilingual biscriptal deep dyslexia. Quarterly Journal of Experimental Psychology, 36A, 417–433.
Caccappolo-van Vliet, E., Miozzo, M., & Stern, Y. (2004a). Phonological dyslexia without phonological impairment? Cognitive Neuropsychology, 21, 820–839.
Caccappolo-van Vliet, E., Miozzo, M., & Stern, Y. (2004b). Phonological dyslexia: A test case for reading models. Psychological Science, 15, 583–590.
Coltheart, M. (1996). Phonological dyslexia: Past and future issues. Cognitive Neuropsychology, 13, 749–762.
Coltheart, M., Patterson, K. E., & Marshall, J. C. (1980). Deep dyslexia. London: Routledge & Kegan Paul.
Coltheart, M., Patterson, K., & Marshall, J. C. (1987). Deep dyslexia since 1980. In M. Coltheart, K. Patterson, & J. C. Marshall (Eds.), Deep dyslexia (2nd ed., pp. 407–451). London: Routledge & Kegan Paul.
Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204–256.
Crisp, J., & Lambon Ralph, M. A. (2006). Unlocking the nature of the phonological-deep dyslexia continuum: The keys to reading aloud are in phonology and semantics. Journal of Cognitive Neuroscience, 18, 348–362.
Derouesné, J., & Beauvois, M. F. (1985). The 'phonemic' stage in the non-lexical reading process: Evidence from a case of phonological alexia. In K. E. Patterson, J. C. Marshall, & M. Coltheart (Eds.), Surface dyslexia (pp. 399–457). Hove, UK: Lawrence Erlbaum Associates Ltd.
Endo, K., Abe, M., Tsunoda, S., Yanagi, H., Ichikawa, H., & Isahara, H. (2000). Neural mechanism for speech sound discrimination: Findings from the study of aphasic patients. Higher Brain Function Research, 20, 165–177 [in Japanese with an English abstract].
Ferreres, A. R., & Miravalles, G. (1995). The production of semantic paralexias in a Spanish-speaking aphasic. Brain and Language, 49, 153–172.
Friedman, R. B. (1996). Recovery from deep alexia to phonological alexia: Points on a continuum. Brain and Language, 52, 114–128.
Funnell, E., & Davison, M. (1989). Lexical capture: A developmental disorder of reading and spelling. Quarterly Journal of Experimental Psychology, 41A, 159–180.
Fushimi, T., Ijuin, M., Patterson, K., & Tatsumi, I. F. (1999). Consistency, frequency, and lexicality effects in naming Japanese Kanji. Journal of Experimental Psychology: Human Perception and Performance, 25, 382–407.
Fushimi, T., Komori, K., Ikeda, M., Patterson, K., Ijuin, M., & Tanabe, H. (2003). Surface dyslexia in a Japanese patient with semantic dementia: Evidence for similarity-based orthography-to-phonology translation. Neuropsychologia, 41, 1644–1658.
Glosser, G., & Friedman, R. B. (1990). The continuum of deep/phonological alexia. Cortex, 26, 343–359.
Hillis, A. E., & Caramazza, A. (1991). Mechanisms for accessing lexical representations for output: Evidence from a category-specific semantic deficit. Brain and Language, 40, 106–144.
Hillis, A. E., & Caramazza, A. (1995). Converging evidence for the interaction of semantic and sublexical phonological information in accessing lexical representations for spoken output. Cognitive Neuropsychology, 12, 187–227.
Howard, D., & Patterson, K. E. (1992). Pyramids and palm trees: A test of semantic access from pictures and words. Bury St Edmunds, UK: Thames Valley Test Company.
Imura, T. (1943). Aphasia: Characteristic symptoms in Japanese. Seishin-shinkeigaku Zasshi, 47, 196–218 [in Japanese].
Imura, T., Nogami, Y., & Asakawa, K. (1971). Aphasia in Japanese language. Nihon University Journal of Medicine, 13, 69–90.
Itukushima, Y., Ishihara, O., Nagata, Y., & Koike, Y. (1991). Research of two-Chinese character word attributes: Imagery, concreteness, and ease of learning. Psychological Research, Nihon University, 12, 1–19 [in Japanese with an English abstract].
Katz, R., & Lanzoni, S. M. (1997). Activation of the phonological lexicon for reading and object naming in deep dyslexia. Brain and Language, 58, 46–60.
Kawahata, N., Nagata, K., & Shishido, F. (1988). Alexia with agraphia due to the left posterior inferior temporal lobe lesion – Neuropsychological analysis and its pathogenetic mechanisms. Brain and Language, 33, 296–310.
Kawamura, M. (1990). Localization and symptomatology of pure alexia, pure agraphia and alexia with agraphia. Japanese Journal of Neuropsychology, 6, 16–24 [in Japanese with an English abstract].
Kawamura, M. (2007). Reading and writing in Japanese and the kanji-kana problem. In M. Iwata & M. Kawamura (Eds.), Neurogrammatology: Neuroscience for reading and writing (pp. 37–46). Tokyo: Igaku-Shoin Ltd [in Japanese].
Kimura, K. (1934). Characteristics of aphasic symptoms in Japanese. Shinkeigaku Zasshi, 37, 437–459 [in Japanese].
Klein, D., Behrmann, M., & Doctor, E. (1994). The evolution of deep dyslexia: Evidence for the spontaneous recovery of the semantic reading route. Cognitive Neuropsychology, 11, 579–611.
Kondo, T., & Amano, S. (1999). NTT Database Series: Lexical properties of Japanese, Vol. 2. Word orthography. Tokyo: Sanseido [in Japanese].
Lambon Ralph, M. A., & Graham, N. L. (2000). Acquired phonological and deep dyslexia. Neurocase, 6, 141–178.
Marshall, J. C., & Newcombe, F. (1973). Patterns of paralexia: A psycholinguistic approach. Journal of Psycholinguistic Research, 2, 175–199.
Miceli, G., Capasso, R., & Caramazza, A. (1994). The interaction of lexical and sublexical processes in reading, writing and repetition. Neuropsychologia, 32, 317–333.
Morton, J., & Sasanuma, S. (1984). Lexical access in Japanese. In L. Henderson (Ed.), Orthographies and reading (pp. 25–42). London: Lawrence Erlbaum.
Ogawa, T., & Inamura, Y. (1974). An analysis of word attributes: Imagery, concreteness, meaningfulness and ease of learning for Japanese nouns. The Japanese Journal of Psychology, 44, 317–327 [in Japanese with an English abstract].
Otake, T. (1990). Rhythmic structure of Japanese and syllable structure. Technical Report of IEICE (the Institute of Electronics, Information and Communication Engineers), 89, 55–61 [in Japanese with an English abstract].
Patterson, K. (1978). Phonemic dyslexia: Errors of meaning and the meaning of errors. The Quarterly Journal of Experimental Psychology, 30, 587–601.
Patterson, K. (1982). The relation between reading and phonological coding: Further neuropsychological observations. In A. W. Ellis (Ed.), Normality and pathology in cognitive functions (pp. 77–111). London: Academic Press.
Patterson, K., & Lambon Ralph, M. A. (1999). Selective disorders of reading? Current Opinion in Neurobiology, 9, 235–239.
Patterson, K., Suzuki, T., Wydell, T., & Sasanuma, S. (1995). Progressive aphasia and surface alexia in Japanese. Neurocase, 1, 155–165.
Patterson, K., Suzuki, T., & Wydell, T. N. (1996). Interpreting a case of Japanese phonological alexia: The key is in phonology. Cognitive Neuropsychology, 13, 803–822.
Raven, J. C. (1962). Coloured progressive matrices: Sets A, Ab and B. London: Lewis & Co. Ltd.
Rey, A. (1941). L'examen psychologique dans les cas d'encéphalopathie traumatique. Archives de Psychologie, 28, 286–340.
Romani, C., McAlpine, S., & Martin, R. C. (2008). Concreteness effects in different tasks: Implications for models of short-term memory. The Quarterly Journal of Experimental Psychology, 61, 292–323.
Sakai, K., Sakurai, Y., Sakuta, M., & Iwata, M. (1992). Naming difficulties seen in a case of alexia with agraphia caused by a left postero-inferior temporal lesion. Clinical Neurology, 32, 1227–1231 [in Japanese with an English abstract].
Sakamoto, S. (1940). Contribution to the 'Kanji vs. Kana problem' in aphasia. Osaka Nisseki Shi, 4, 185–212 [in Japanese].
Sasanuma, S., Ito, H., Patterson, K., & Ito, T. (1996). Phonological alexia in Japanese: A case study. Cognitive Neuropsychology, 13, 823–848.
Sato, H. (1996). Semantic dementia in Japanese: Primary loss of connections for the meaning of words. Unpublished MSc thesis, University of London.
Sato, H. (2007). Acquired dyslexia in Japanese: Implications for reading theory. Unpublished PhD thesis, University of London.
Shallice, T., & Coughlan, A. K. (1980). Modality specific word comprehension deficits in deep dyslexia. Journal of Neurology, Neurosurgery and Psychiatry, 43, 866–872.
Sugishita, M., Otomo, K., Kabe, S., & Yunoki, K. (1992). A critical appraisal of neuropsychological correlates of Japanese ideogram (Kanji) and phonogram (Kana) reading. Brain, 115, 1563–1585.
The Committee for the Japanese version of WAB (1986). The Western Aphasia Battery (Japanese version). Tokyo: Igaku-Shoin Ltd.
Tree, J. J., & Kay, J. (2006). Phonological dyslexia and phonological impairment: An exception to the rule? Neuropsychologia, 44, 2861–2873.
Uno, A. (Ed.), Haruhara, N., & Kaneko, M. (2002). The standardized comprehension test of abstract words. Tokyo: Interuna Syuppan [in Japanese].
Walker, I., & Hulme, C. (1999). Concrete words are easier to recall than abstract words: Evidence for a semantic contribution to short-term serial recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 1256–1271.
Woollams, A. M., Lambon Ralph, M. A., Plaut, D. C., & Patterson, K. (2007). SD-squared: On the association between semantic dementia and surface dyslexia. Psychological Review, 114, 316–339.
Wydell, T. (1991). Processes in the reading of Japanese: Comparative studies between English and Japanese orthographies. Unpublished PhD thesis, University of London.
Yamadori, A. (1975). Ideogram reading in alexia. Brain, 98, 231–238.