Perceptual Expertise Effects Are Not All or None: Spatially Limited Perceptual Expertise for Faces in a Case of Prosopagnosia

Cindy M. Bukach,1 Daniel N. Bub,2 Isabel Gauthier,1 and Michael J. Tarr3
1Vanderbilt University, 2University of Victoria, 3Brown University

Abstract

We document a seemingly unique case of severe prosopagnosia, L. R., who suffered damage to his anterior and inferior right temporal lobe as a result of a motor vehicle accident. We systematically investigated each of three factors associated with expert face recognition: fine-level discrimination, holistic processing, and configural processing (Experiments 1-3). Surprisingly, L. R. shows preservation of all three of these processes; that is, his performance in these experiments is comparable to that of normal controls. However, L. R. is only able to apply these processes over a limited spatial extent to the fine-level detail within faces. Thus, when the location of a given change is unpredictable (Experiment 3), L. R. exhibits normal detection of features and spatial configurations only for the lower half of each face. Similarly, when required to divide his attention over multiple face features, L. R. is able to determine the identity of only a single feature (Experiment 4). We discuss these results in the context of forming a better understanding of prosopagnosia and the mechanisms used in face recognition and visual expertise. We conclude that these mechanisms are not "all-or-none," but rather can be impaired incrementally, such that they may remain functional over a restricted spatial area. This conclusion is consistent with previous research suggesting that perceptual expertise is acquired in a spatially incremental manner [Gauthier, I., & Tarr, M. J. Unraveling mechanisms for expert object recognition: Bridging brain activity and behavior. Journal of Experimental Psychology: Human Perception & Performance, 28, 431-446, 2002].

INTRODUCTION

Prosopagnosia (a face recognition deficit) can be conceptualized as a loss of, or reduced access to, previously acquired perceptual expertise with faces (Gauthier, Behrmann, & Tarr, 1999). Studies of prosopagnosia tend to isolate the deficit to a particular process such as configural processing (e.g., Barton, Press, Keenan, & O'Connor, 2002; Levine & Calvanio, 1989). However, it may be that perceptual mechanisms are not necessarily lost in an "all-or-none" fashion. Expertise-training studies with novel objects (Greebles) suggest that expertise is acquired incrementally over an expanding spatial window (Gauthier & Tarr, 2002), and thus, its loss may also follow a similar spatial gradient. We present a study of a prosopagnosic case, L. R., which suggests that expertise effects are not all-or-none, but may be lost incrementally, such that they remain functional over a spatially restricted area.

A General Framework for Studying Face Recognition Deficits

Research on impaired face processing in brain-injured individuals has been motivated by two alternative views of the relation between the mechanisms responsible for face recognition and those mediating the recognition of other object categories. One view is that such impairments result from the loss of distinct mechanisms that are domain specific to faces.
This "domain-specific" interpretation of prosopagnosia is based on evidence from tasks that contrast impaired performance for face stimuli with intact performance for nonface stimuli (e.g., Nunn, Postma, & Pearson, 2001; Henke, Schweinberger, Grigo, Klos, & Sommer, 1998; Farah, Levinson, & Klein, 1995; McNeil & Warrington, 1991). However, such comparisons may not always be equated for factors such as level of difficulty, response times, response bias, or level of expertise (Gauthier, Behrmann, & Tarr, 1999; Sergent & Signoret, 1992b). Thus far, only a single case has been documented that shows the reverse pattern (intact performance on faces but impaired recognition of objects; Moscovitch, Winocur, & Behrmann, 1997).

The alternative view is that prosopagnosia results from the loss of one or more perceptual processes that underlie expertise with objects that are identified at an individual level and that are members of homogeneous categories. This "perceptual-expertise" interpretation of prosopagnosia is based on evidence that shows a functional association between face recognition deficits and abnormal performance for objects (Gauthier, Behrmann, & Tarr, 1999). The rationale is that once the impaired process in a case of prosopagnosia is identified, a deficit should be evident for both faces and objects, providing the task requires the impaired process for both faces and objects. Although association methodology has been criticized because co-occurring deficits may simply reflect anatomically proximal but independent mechanisms (for a discussion, see Shallice, 1988), this weakness can be overcome by an a priori theoretical framework built on evidence from the study of normal face perception.

Although we realize that the issue of domain specificity in prosopagnosia continues to be debated (e.g., Duchaine, Dingle, Butterworth, & Nakayama, 2004; Gauthier, Behrmann, & Tarr, 2004), here we address a different question: Can prosopagnosic patients demonstrate evidence for partial preservation of face expertise that resembles the performance of trainees at intermediate levels of expertise? Although a spatial gradient of impairment is orthogonal to the domain-specific debate, currently, the expertise framework is the only account that can explain or predict such a pattern of loss. Thus, to answer this question, we apply an expertise framework to the study of a prosopagnosic patient, L. R., and systematically test the processes that are known to be important to the development and utilization of perceptual expertise.

This methodology has two important benefits: First, it allows us to rule out many potential hypotheses regarding the cause of L. R.'s deficit with faces and to place his deficit in the context of a well-specified theoretical framework. This is particularly useful because prosopagnosia, like many neurological syndromes, is not a unitary phenomenon; rather, many kinds of impairments can lead to selective difficulty in the perception or identification of faces. Second, the application of a general framework of generic object recognition to investigations of prosopagnosia allows for stronger generalizations from single-case studies to normal object recognition processes.

An expertise framework is especially useful for two reasons: First, the spatial restriction on L. R.'s expert face processing generalizes to nonface objects such as Greebles (Bukach, Bub, Kadlec, Gauthier, & Tarr, in preparation).
Second, the spatial restrictions on L. R.'s fine-level processing can best be explained within the expertise framework and related experiments that have found that the acquisition of expertise mechanisms also occurs in a spatially graded manner (Gauthier & Tarr, 2002).

Several factors have been proposed to distinguish expert face recognition mechanisms from the mechanisms that are used to identify other object classes. First, faces more than other objects require the ability to make fine-level discriminations (Damasio, 1990; Damasio, Damasio, & Van Hoesen, 1982). This inequality is the result of differences in both task demands and stimulus characteristics. Face recognition is more demanding of discrimination processes because it involves identification at an individual or subordinate level, whereas most other object recognition requires only basic-level identification (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). Discrimination skills are also more taxed by faces because of the homogeneity of faces as a stimulus class. For instance, Gauthier, Behrmann, and Tarr (1999) suggested that a general deficit in fine-level discrimination would interfere with subordinate-level judgments not only for faces, but also for any homogeneous object class, providing the task was sufficiently difficult (from the standpoint of high visual similarity between objects that must be discriminated from one another). To test this hypothesis, they manipulated level of categorization within several object categories and compared the performance of two prosopagnosic subjects to normal controls. When judgments required subtle discriminations at more subordinate levels, the performance of the prosopagnosic subjects was dramatically impaired, regardless of stimulus class (see also Viele, Kass, Tarr, Behrmann, & Gauthier, 2002).

A second distinction that has been made between face and object recognition is the degree to which faces are processed holistically (Levine & Calvanio, 1989; Davidoff, Matthews, & Newcombe, 1986). Evidence for holistic processing of faces is based in part on the finding that details of one part of a face influence the perception of another part of the face. For example, a change to the shape of the eyes impairs recognition of the unaltered mouth (Farah, Wilson, Drain, & Tanaka, 1998). One interpretation of this holistic effect is that it represents a failure of selective attention, whereby subjects are unable to filter out irrelevant aspects of a face because of an attentional window that is applied over a large spatial area. Gauthier and Tarr (2002) showed that this "holistic-inclusive" effect is not specific to faces, but develops with perceptual expertise. They trained subjects to identify a novel set of homogeneous objects (Greebles). Subjects were considered to be Greeble experts when their reaction times were equivalent for verifying Greeble labels at both superordinate (family) and subordinate (individual) levels. Development of the holistic-inclusive effect occurred gradually over the course of training, becoming evident first for features that were close to one another, and later for more distal features, suggesting a widening window of spatial attention. Furthermore, Gauthier and Tarr found that the development of holistic-inclusive processing for Greebles was correlated with changes taking place in the right fusiform face area (FFA), an extrastriate region that is typically more active for faces than for other object classes (Kanwisher, McDermott, & Chun, 1997; McCarthy, Puce, Gore, & Allison, 1997; Puce, Allison, Gore, & McCarthy, 1995; Haxby et al., 1994; Sergent & Signoret, 1992a).
A third factor that is associated with the expert recognition of faces is the encoding of spatial relations between features. Spatial distances between features are particularly diagnostic for faces because all faces share a common global configuration of features (e.g., eyes above nose, nose above mouth). Whereas this global configuration, independent of finer spatial relations, is important for recognizing faces at the basic level (recognizing an image as a face, as opposed to some other object), subtle variations in the distance between features are important for more specific judgments of faces. The mechanisms responsible for the encoding of spatial relations are generally referred to as "configural processes." Evidence for expert configural processing of faces is based in part on the finding that the recognition of individual features (e.g., "Emile's eyes") is superior when faces are presented in their original configuration relative to a novel configuration (the same eyes moved apart). Furthermore, this sensitivity to spatial configuration is attenuated when faces are inverted (Tanaka & Sengco, 1997). The disruption of configural processing with inversion is interpreted as a marker of expert configural processing (Diamond & Carey, 1986), and is often measured relative to the disruption of other types of local feature processing. This relative measure is known as the "face inversion effect" (FIE). Using this FIE measure, researchers have found that sensitivity to changes in the spatial relations between features (e.g., distance between the eyes) is disproportionately disrupted relative to sensitivity to changes in local feature information (e.g., size, color, texture, or shape of the eyes themselves) when faces are inverted (Leder & Bruce, 1998, 2000; Searcy & Bartlett, 1996). Although the inversion effect was first thought to be unique to faces (Yin, 1969), Diamond and Carey (1986) demonstrated that similar inversion effects could be found for objects other than faces, providing the subject was an expert (e.g., dog experts show an inversion effect for dogs). Similarly, configural effects emerge with expert Greeble training (Gauthier & Tarr, 1997, 2002; Gauthier, Williams, Tarr, & Tanaka, 1998).

Fine-level discrimination, holistic processing, and configural processing are all factors that have been identified as particularly relevant to face recognition and perceptual expertise for objects in general. Although these processes are not likely to represent an exhaustive list of face recognition mechanisms, they nonetheless embody the beginnings of an a priori theoretical framework from which to consider impairments of general expertise processes that might result in what appears to be a selective deficit for face recognition. Impairment to any of these processes may disrupt face recognition, and thus, each factor may represent a different possible functional locus for prosopagnosia. In this context, we document a case of severe prosopagnosia, L. R., who is unusual both in regard to the nature and locus of the injury leading to his impairment, and also in his impressive ability to identify visual objects other than faces.
We systematically investigated each of the three factors identified above as critical to expert face recognition (Experiments 1-3). Surprisingly, we found that L. R. shows all three of the abilities associated with expert face recognition: L. R. can make fine-level discriminations, shows holistic-inclusive effects, and shows a robust FIE. However, the spatial extent over which L. R. is able to apply these expert processes is limited. Experiment 4 confirmed that when feature and spatial changes are restricted to local regions of the face, L. R.'s expertise effects are limited to a single region, typically the lower region of the face. We discuss these findings in relation to mechanisms involved in expert face processing and perceptual expertise more generally.

Case Description

L. R. is a 49-year-old man who was involved in a motor vehicle accident in 1974, during which he was thrown from the front passenger seat of a truck onto the gearshift. The gear lever was missing the usual plastic cap covering the top, and L. R. received a penetrating head wound when the hollow metal tube of the uncapped gear shaft impaled his lower left cheek in front of the jaw, passing through the left intracranial cavity and sphenoid sinus. The shaft then entered the right cavernous sinus, clipping the right internal carotid artery and injuring the abducens nerve and the ophthalmic and maxillary divisions of the trigeminal nerve. It then pierced the right temporal lobe, leaving a bone fragment in the superficial aspect of the middle temporal gyrus. L. R. subsequently developed a right temporal intracerebral hematoma, which was relieved through surgery, and also required clipping of the right internal carotid artery. CT scans revealed ablation of the anterior and inferior sections of the right temporal lobe, affecting the amygdala, but apparently sparing posterior regions, including the fusiform gyrus (see Figure 1). As a result of the clip, MRI is not possible.

Figure 1. CT scans of L. R.'s lesion, showing damage to the anterior temporal lobe in coronal (A) and axial (B) views.

Visual acuity a year following the accident was 20/20 in both eyes with corrective lenses, and visual fields were full. Outward movement of his right eye is somewhat restricted due to right ocular motor nerve palsy. L. R. continues to have problems with depth perception, which he resolves by moving his head. His major residual complaint is that he can no longer recognize faces. He claims to rely primarily on distinctive features and context. For example, he has difficulty recognizing his own daughter when she is at a swimming pool and her hair is wet, or when she is encountered unexpectedly on the street. L. R. also claims that many people appear familiar, and thus he is susceptible to false alarms, making him cautious in social situations.

Neuropsychological Profile

L. R. is of high-average intelligence, with a Full-Scale IQ of 114 as assessed by the Wechsler Adult Intelligence Scale (Third Edition), and Verbal and Performance scores of 115 and 111, respectively. Low-level visual processing was assessed using the Visual Object and Space Perception Battery (Warrington & James, 1991). L. R. scored in the normal range on all subtests, including tests of noncanonical views and silhouettes. He also achieved a standardized score of 104 (50th percentile) on the Benton Judgment of Line Orientation test (Benton, Hamsher, Varney, & Spreen, 1983). Object recognition was tested using the Snodgrass and Vanderwart (1980) picture set.
L. R. identified all of the pictures accurately and without delay. According to the Nelson-Denny Reading Test (Brown, Fishco, & Hanna, 1993), L. R. had an extremely fast reading rate of 364 words per minute, placing him in the 99th percentile for individuals with 18.9 years of schooling. L. R.'s memory as assessed by the Wechsler Memory Scale (Third Edition) was also in the normal range, with a General Memory score of 107 and a Working Memory score of 111. We also administered the Doors & People Test (Baddeley, Emslie, & Nimmo-Smith, 1994) to assess his visual memory. He scored in the 90th percentile in delayed shape recall (12/12), and in the 95th percentile for recognition of doors from a highly homogeneous set (23/24).

Face Processing Ability

L. R. scored within the normal range (49/54) on the Benton Test of Face Recognition (Benton et al., 1983), which tests the ability to match identical face photos and faces that vary with respect to viewpoint and lighting. Despite being encouraged to give speeded responses, however, his performance was extremely slow (average 55.18 sec per trial) and he used a laborious feature-by-feature matching strategy, as is commonly reported of prosopagnosics. When the test was administered with a 17-sec cutoff for each trial, L. R.'s score fell within the severely impaired range (12/54). We also administered the Warrington Recognition Memory Test (Warrington, 1994), in which 50 faces are studied for 3 sec each, during which participants assess whether each face is pleasant or unpleasant, followed immediately by a two-alternative forced-choice recognition test. L. R. recognized only 38/50 faces (5th percentile).

We also presented L. R. with 121 photos of famous people to identify. He was able to provide correct names for only 23 famous faces, and provided additional semantic identifying information for another 3 faces. Out of the 23 photos he correctly named, he also incorrectly assigned 7 of these names to other photos. His comments while performing the task were informative. For example, L. R. often used characteristic identifying features (such as "signature hair," "recognize the teeth and the smile," "Bette Davis eyes"). Occasionally, he recognized the particular photo from a magazine or video cover that he owned. Most often, L. R. would first attempt to classify the individual's looks into particular stereotypes. Sample classifications include "pretty enough to be an actress," "bad boy," "looks like a musician," and "looks Italian." He would also try to date the individuals from their hairstyle and makeup. His comments suggest that when a distinctive feature was insufficient to identify a photo, he attempted to use a subset of the features to constrain the possibilities according to known stereotypes.

RESULTS

Experiment 1: Fine-level Discrimination

In Experiment 1, L. R.'s ability to make fine-level discriminations of faces was assessed using a simultaneous-matching paradigm (Viele et al., 2002) and compared to the performance of seven normal controls. This task included both visually similar and dissimilar trials (see Figure 2), as well as short (2 sec) and long (5 sec) exposure durations.
By limiting exposure duration, we can determine the efficiency of L. R.'s encoding. If L. R. is unable to encode enough information to detect subtle differences between stimuli, his sensitivity should be impaired relative to controls, especially on difficult trials and at the shorter exposure duration. We do not analyze response times, as the task included giving confidence ratings; thus long response times could reflect a number of sources, including interference from this secondary task.

Table 1 contains the sensitivity measures for controls and L. R. in the various conditions.

Table 1. Mean Sensitivity (d′) for Easy and Difficult Conditions in the Face Discrimination Task (Experiment 1) for Brief and Long Exposure Durations

Exposure Duration   Condition    Controls Mean d′   Controls Range d′   L. R.
Brief (2 sec)       Easy         3.31               2.98-3.97           4.07
Brief (2 sec)       Difficult    2.05               1.22-2.85           2.63
Long (5 sec)        Easy         3.30               2.74-4.21           4.78
Long (5 sec)        Difficult    2.89               2.51-3.50           2.93

Surprisingly, even given a short (2 sec) exposure duration, L. R. was able to discriminate faces in both the easy and difficult conditions as well as or better than normal controls. L. R.'s performance improved even further with longer exposure durations, as did the performance of normal controls, at least in the difficult condition. Thus, L. R.'s prosopagnosia does not appear to be due to a deficit in fine-level discrimination. Importantly, L. R. extracted sufficient detail from faces to detect even subtle differences between stimuli quickly and efficiently. We note that this ability is not sufficient to ensure efficient matching of faces across viewpoint and lighting changes, as his performance in the timed version of the Benton task was substantially impaired.

Figure 2. Sample stimuli used in Experiment 1. Difficult trials consisted of an original face paired with a spherized face (one-step pairs). Easy trials consisted of a negatively spherized face paired with a positively spherized face (two-step pairs).
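For readers who want a concrete sense of the sensitivity values reported in Table 1, the following is a minimal sketch of how d′ is typically computed for a same-different matching task. It is not the authors' analysis code; the trial counts, the convention of treating "different" trials as signal trials, and the small-sample correction are illustrative assumptions.

```python
# Minimal illustration of the d' (sensitivity) measure reported in Table 1.
# Not the authors' analysis code; the counts below are invented examples.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps rates of 0 or 1 from producing infinite z-scores."""
    n_signal = hits + misses                      # "different" trials (assumed signal)
    n_noise = false_alarms + correct_rejections   # "same" trials (assumed noise)
    hit_rate = (hits + 0.5) / (n_signal + 1.0)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example with 240 "different" and 240 "same" trials in one condition:
print(round(d_prime(hits=220, misses=20, false_alarms=12, correct_rejections=228), 2))
```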
Experiment 2: Holistic-inclusive Processing

In Experiment 1, we found that L. R. was able to detect small differences between faces. In Experiment 2, we investigated whether L. R. would also show evidence of holistic processing for upright faces, another mechanism that is associated with expert face recognition (Gauthier & Tarr, 2002). Recall that the holistic-inclusive effect is a measure of obligatory processing of multiple parts of a face. We administered a sequential face-matching task that required L. R. to selectively attend to either the upper or lower portion of the face. On each trial, a study face appeared for 700 msec, followed by a cue indicating whether the top or the bottom of the study face was to be compared with the test face. The test face then appeared for 4000 msec. The noncued half of the test face was the same as or different from the study face, and this was manipulated independently of the cued half, such that attention to the irrelevant part of the face could lead to the same (congruent) or different (incongruent) response as the cued half of the face (sample stimuli are shown in Figure 3). In this task, holistic processing is reflected by poorer performance for incongruent than congruent trials. This measure of holistic processing is sensitive to inversion and misalignment of the parts (Hole, 1994; Young, Hellawell, & Hay, 1987), and is also sensitive to interference from holistic processing for other objects of expertise (Gauthier, Curran, Curby, & Collins, 2003). If L. R. processes upright faces holistically (i.e., if he encodes sufficient information from both halves of the face to influence a response), his accuracy should be influenced by the congruency of the noncued half of the face. Alternatively, if L. R. does not process faces holistically, we would expect that he should be able to ignore the noncued half, and his accuracy should depend only on the cued part. We compared L. R.'s performance to that of four normal controls.

Figure 3. Sample stimuli (bottom "same" trials), and mean sensitivity and response time for controls and L. R. in the three conditions of Experiment 2. The range of control data is indicated by solid dashes. Error bars for L. R.'s sensitivity indicate 95% confidence intervals as computed according to Marascuilo (1970). Error bars for L. R.'s response times indicate standard deviations.

Figure 3 displays the sensitivity measures and response times for the controls (with dashes indicating the range of performance) and L. R. (with error bars representing the 95% confidence interval of the point estimate) in each condition. As indicated in the left panel of the graph, L. R.'s sensitivity measures were quite high and within control range for all conditions. To determine whether L. R. showed a significant effect of congruency, we tested whether the d′ values for the congruent and incongruent conditions were equivalent using a z-test as recommended by Marascuilo (1970).1 Importantly, L. R. showed a significant congruency effect (Z = 3.78, p < .001), as did all controls (Z range = 2.16-4.08). Examination of response times confirms that this is not due to a speed-accuracy tradeoff (M = 1673 msec and 2197 msec for congruent and incongruent trials, respectively), although L. R.'s response times are slower than those of the slowest age-matched control (M = 1254 msec and 1514 msec). We note that the size of L. R.'s congruency effect (524 msec) is not only within the control range (114-708 msec) but above the control mean (305 msec).

To determine whether the congruency effect occurred for both top and bottom trials, sensitivity data were analyzed separately for these conditions. Controls showed very little difference in sensitivity for tops versus bottoms. Moreover, L. R.'s performance was within the normal range and showed a robust effect of congruency for both top and bottom trials (Z = 2.84, p < .01 and Z = 2.12, p < .05, respectively). Such interference from a distractor half is indicative of holistic-inclusive processing. We can therefore conclude that L. R., like normal observers, shows obligatory processing of information from both upper and lower parts of the face.

Experiment 3: Spatial Relations

Thus far, we have shown that L. R. is able to make fine-level discriminations and to process faces in a holistic-inclusive manner. Experiment 3 was designed to examine L. R.'s ability to encode the spatial relations between face parts. We tested whether L. R. would show the typical FIE that is associated with expert recognition of faces, that is, a disproportionate effect of inversion for spatial compared to feature changes. We created different face pairs by substituting the eyes or the mouth with the eyes or mouth from a different face (feature change), or by changing the spatial distance between the eyes or between the mouth and nose (spatial change). Sample faces are presented in Figure 4. Faces were presented either upright or inverted in a sequential-matching paradigm. If L. R. engages expert (and normal) encoding of the spatial relations between face features, then he should show a disproportionate drop in sensitivity to spatial changes relative to feature changes when faces are inverted. We compared L. R.'s performance to that of three male controls.
As the upper panel of Figure 4 shows, L. R. was able to detect spatial changes in the upright condition above the level of chance, indicating that his description of faces includes information about the spatial distances between parts. Moreover, L. R. showed a disproportionate effect of inversion for spatial changes relative to feature changes (Z = 2.51, p = .006 and Z = 1.33, p > .05 for spatial and feature inversion effects, respectively). This interaction between sensitivity to spatial changes and face orientation suggests that L. R. utilizes processes associated with perceptual expertise to extract spatial information from faces. Thus, L. R.'s face recognition deficit cannot be attributed to the loss of expertise in encoding spatial information. L. R.'s overall performance on this task was nonetheless below the range of the controls for all conditions, indicating that his face perception is not normal.

Figure 4. Sample stimuli and mean sensitivity for controls and L. R. in the upright and inverted conditions of Experiment 3. Sensitivity to feature changes is shown in the left panels; sensitivity to spatial changes is shown on the right. The upper panels show overall performance (averaged across eye and mouth trials), and the bottom two panels show sensitivity for eye and mouth trials separately. The range of control data is indicated by solid dashes. Error bars for L. R.'s data indicate 95% confidence intervals.

An interesting remark by L. R. during the experiment was that he felt he had time to attend only to the mouth. In response to this comment, we analyzed the data from eye trials and mouth trials separately. These trials were randomly mixed within each block of the experiment. The results of this analysis are presented in the lower two panels of Figure 4. Consistent with L. R.'s self-report, the adequacy of his performance depended upon the face part manipulated. His performance was well within the normal sensitivity range for mouth judgments, and showed a strong and disproportionate FIE for spatial changes in this condition (Z = 3.77, p < .001 for spatial inversion; Z = 1.35, p > .05 for feature inversion). However, his sensitivity was well below that of the controls for eye judgments in all conditions, and in particular, he was at chance for spatial modifications of the eyes in both upright and inverted conditions. Accordingly, he showed no FIE for spatial changes in the eye condition.

Given these results, we infer that the part-based spatial description L. R. derives in this task is based primarily on information from the lower half of the face and does not include a complete representation of the eye region. The impairment is not entirely limited to encoding the spatial details of the upper region of the face, as his ability to detect feature changes in the eyes was also impaired relative to controls. Nor can the impairment be due to a deficit of the upper visual field, as performance for mouths in the inverted face condition was normal. We note that L. R.'s ability to make eye judgments improves substantially if eye trials are blocked (a version of the experiment not reported here), suggesting that performance can be mediated by directing attention to the relevant spatial area. It appears that given limited exposure duration, L. R. has time to attend to and encode detailed information from only a limited region of the face.
This finding appears to be in conflict with those of Experiments 1 and 2, which showed intact fine-level discrimination and a robust holistic-inclusive effect for both the upper and lower halves of the face. However, this apparent discrepancy can be resolved post hoc by noting differences in the methodology of each experiment. First, the changes applied to the stimuli in Experiment 1, although subtle, affected the entire face, whereas changes in Experiment 3 were local and unpredictable. Thus, attention to any one part of the face would be sufficient to detect a change in Experiment 1, but not in Experiment 3. Second, although the changes to stimuli in Experiment 2 were also unpredictable, the nature of the change was more salient in Experiment 2 than in Experiment 3 (half face vs. isolated feature), and thus, the holistic effect may have been due to the encoding of coarse-level information. We acknowledge that this explanation is post hoc because we did not manipulate the saliency of changes directly, but we point the reader to a follow-up series of experiments that confirms that L. R.'s performance is sensitive to the magnitude of changes between face features (Bukach, Le Grand, Kaiser, Bub, & Tanaka, submitted).2 Third, Experiment 2 involved a cue indicating which part of the test stimulus was relevant on each trial. Although this cueing could not contribute to the encoding of the first stimulus because the cue appeared after its offset, it could potentially impact the encoding of the second stimulus as well as the comparison stage by restricting the target area. In contrast, successful performance in Experiment 3 required the encoding and comparison of fine-level details and spatial information over a much wider extent. The benefit of a post cue could be further investigated by manipulating the timing of the cue, but this account must remain speculative at this point.

Our interpretation of L. R.'s performance across these three tasks is that he is able to encode coarse-level information from the entire face, but is able to extract precise internal details, including spatial information, from only a small portion of each face at a time. This hypothesis was investigated further in Experiment 4.

Experiment 4: Specification of Details Across the Entire Face

Experiment 4 was designed to determine the spatial extent over which L. R. can specify the internal features of a face. Given the large number of faces that we encounter over a lifetime, and their homogeneity, it is likely that a combination of features is necessary to disambiguate faces during the recognition process, and this may be part of the benefit of holistic face processing. The evidence from Experiment 3 suggested that L. R. is able to specify the local spatial and feature details of only the lower part of the face, given a brief exposure duration. To obtain further evidence, we designed an identification paradigm that required specification of all three internal features (eyes, nose, and mouth). This was accomplished by creating a conjunction set of eight faces using two different tokens (e.g., two pairs of eyes varying in interocular distance) for each of the three features. Of the eight faces, four had eyes "A," and the other four had eyes "B," and likewise for the nose and mouth (see Figure 5). The subtle nature of the changes was designed to keep normal controls off ceiling. Similar multidimensional face sets have been successfully used to investigate face recognition impairments in other patients (Barton, Zhao, & Keenan, 2003; Le Grand, Mondloch, Maurer, & Brent, 2001), and to investigate normal face mechanisms (Barton, Deepak, & Malik, 2003; Leder & Bruce, 2000).

Figure 5. Face stimuli used in Experiment 4.

In Experiment 4, a single face was presented on each trial as the target for a limited duration (ranging from 250 to 1250 msec), and L. R. was then required to select it from among the set of eight alternatives displayed in free viewing. In this task, accuracy depends on the number of features that are resolved at a given exposure duration: Accuracy would be 25% if only one feature was encoded, 50% if two features were encoded, and 100% if all three features were encoded. Based on his performance in Experiment 3, we predicted that L. R.'s accuracy would be close to 25%. Furthermore, we expected that his errors would reflect a bias for the mouth. We compared L. R.'s results to those of four normal controls.
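The accuracy levels predicted from the conjunction design, and the part-wise scoring used later in the confusion-matrix analysis, follow directly from the 2 x 2 x 2 structure of the eight-face set. The sketch below simply makes that combinatorics explicit; the face labels are generic placeholders rather than the names used in the study, and this is not the authors' scoring code.

```python
# Accuracy levels implied by the eight-face conjunction set in Experiment 4
# (2 eye tokens x 2 nose tokens x 2 mouth tokens). Placeholder labels only.
from itertools import product

faces = list(product("AB", repeat=3))          # (eyes, nose, mouth) for all 8 faces

def expected_accuracy(n_features_encoded):
    """If only n features are resolved, the observer can narrow the eight
    alternatives to those matching the encoded features and must guess among
    the rest: 1 feature -> 4 candidates (25%), 2 -> 2 (50%), 3 -> 1 (100%)."""
    return 1.0 / (2 ** (3 - n_features_encoded))

print([expected_accuracy(k) for k in (0, 1, 2, 3)])   # [0.125, 0.25, 0.5, 1.0]

# Part-wise scoring used for the confusion-matrix analysis: a response is
# "correct for the eyes" if it shares the target's eye token, so four of the
# eight alternatives count as correct and chance is 50%.
target = ("A", "B", "A")
eye_correct = [f for f in faces if f[0] == target[0]]
print(len(eye_correct) / len(faces))                   # 0.5
```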
Accuracy and confidence ratings at the various exposure durations are presented in Figure 6, with error bars representing the range of control performance. L. R.'s performance is substantially below that of controls in both accuracy and confidence ratings. Indeed, only at 1250 msec did L. R.'s accuracy substantially exceed 25%, the accuracy rate predicted from responses based upon only one feature. Controls had some difficulty at exposure durations of 250 msec, but even here their accuracy was over 60%, substantially above the single-feature rate of 25%, and at 500 msec their accuracy rates were much improved. Furthermore, although controls' reports of confidence increased with exposure duration, L. R.'s confidence remained very low for all exposure durations, consistent with his everyday experience.

Figure 6. Mean accuracy and confidence ratings across various exposure durations for controls and L. R. in Experiment 4. Error bars represent the range of control performance. The dashed line indicates expected accuracy based on one feature only.

To further explore the possibility that L. R. relied upon a limited region of the face to make decisions, a confusion matrix was constructed to examine performance across the three parts of the face. Based on this confusion matrix, accuracy was calculated separately for each face part. For example, to calculate eye accuracy for trials in which "Bill" was presented, all faces having the same eyes would be considered correct (Bill, Bram, Biff, or Buck). Using this method, each face part on a given trial could have four correct responses, resulting in a chance level of 50%. The results of this analysis are displayed in Figure 7, with error bars representing the range of control performance. Controls had high accuracy rates for all parts, with relatively small variations in accuracy between parts, reflecting either a very rapid (although not immediate) integration of face parts, and/or a small difference in the saliency of the face parts. L. R.'s performance, on the other hand, was at chance for the eyes, only slightly above chance for the nose, but very accurate for the mouth (within the normal range), providing further evidence that L. R.'s responses were based primarily on the mouth.

Figure 7. Mean accuracy across exposure durations plotted separately for eyes, nose, and mouth parts in Experiment 4. Error bars represent the range of control performance. The dashed line indicates performance expected by chance.

L. R.'s preference for the lower part of the face in Experiments 3 and 4 is consistent with his self-report that he favors the mouth region when identifying faces. It should be noted, however, that L. R. does not always show a mouth advantage: On one occasion, in a matching task using similar conjunction face stimuli, L. R. showed an eye advantage (performance for the nose and mouth was at chance).
On subsequent testing with these same stimuli, L. R. reverted to a mouth strategy (Bukach & Bub, 2002). It appears therefore that L. R. can selectively attend to other parts of the face, but at a cost to the other features. Nonetheless, his performance across a wide variety of tasks and across the vast majority of testing sessions shows a preference for the mouth area. This mouth preference is surprising in light of the fact that normal observers typically show a preference for the eye region (Tanaka & Farah, 1993; Sergent, 1984; Walker-Smith, 1978; McKelvie, 1976; Goldstein & Mackenberg, 1966). However, like L. R., individuals with autism who have face recognition deficits also show a preference for the mouth region, suggesting that this bias is not arbitrary (Joseph & Tanaka, 2003; Klin, Jones, Schultz, Volkmar, & Cohen, 2002; Langdell, 1978). The tendency for L. R. to show a mouth advantage is explored further by Bukach, Le Grand, et al. (submitted).

DISCUSSION

Our purpose was to characterize L. R.'s prosopagnosic deficit according to an a priori theoretical framework for face recognition. Surprisingly, the results indicate that L. R.'s recognition abilities share many characteristics associated with expert face recognition: First, L. R. was able to make fine-level discriminations of faces as well as normal controls (Experiment 1). Second, he showed holistic-inclusive effects in that he was unable to ignore the irrelevant part of the face when judging whether the tops or bottoms of faces matched (Experiment 2). Finally, L. R. also showed a disproportionate disruption in detecting spatial changes relative to feature changes when faces were inverted, indicating that he uses expert configural processes to encode spatial information (Experiment 3).

Despite retention of these expert abilities, L. R. is limited in the spatial extent over which he can apply these expert mechanisms to faces: When the changes were subtle and their location unpredictable (Experiment 3), L. R. showed abnormal detection of features and their spatial configuration for the top half of the face. Additionally, when required to divide his attention over multiple face features, L. R. was unable to determine the identity of more than a single feature, even though he was aware that all three features (eyes, nose, and mouth) were necessary to make a correct response (Experiment 4). This pattern of performance has important implications for our understanding of prosopagnosia and the general mechanisms of perceptual expertise.

Implications for Prosopagnosia

L. R.'s failure to extract relevant details from the entire face interferes with recognition of faces in his everyday life. The inadequacy of spatially restricted expert face processing suggests that identification on the basis of local features and local spatial information cannot disambiguate all face competitors. Perhaps stored face representations do not include sufficient spatial resolution to provide unique identification for each face part. Indeed, because faces are dynamic, many aspects of an individual face would have a range of possible values on various structural dimensions.
For example, both the eye and the mouth regions undergo substantial shape transformations across different facial expressions. In such a system, the combination of information from multiple parts of a face would significantly reduce the number of competitors. Accordingly, identification on the basis of only a subset of diagnostic information should result in frequent misidentifications and frequent occurrences of a false sense of familiarity. Identification should be much more accurate, however, when single features are highly distinctive. This description appears to fit well with L. R.'s everyday experience.

A perplexing question is why L. R. is unable to identify faces in everyday life by sequentially attending to relevant face features. One possibility is that L. R.'s prosopagnosia involves a deficit in attentional mechanisms. L. R.'s inability to fully represent all of the internal features of a face resembles, to some extent, a type of prosopagnosia described by Levine and Calvanio (1989) as a loss of configural processing. These authors used the term "configural processing" to denote the general ability to perceive and integrate all parts of an object at a single glance. They described a severely prosopagnosic case, L. H., who like L. R. was unable to derive a sufficiently detailed overview of a face for recognition. Instead, L. H. based his identification of faces on isolated features. This impairment affected L. H.'s ability to recognize pictures of common objects as well, especially animals. For example, L. H. mistook a panda for an owl. Further testing revealed that L. H. was unable to recognize incomplete or degraded pictures and words. Thus, L. H. suffered from a type of object-based simultanagnosia that prevented him from deriving a "gestalt" or overview of an object.

Although both L. R. and L. H. have difficulty representing multiple features of a face, several aspects of L. R.'s deficit argue against the form of simultanagnosia displayed by L. H. First, unlike L. H., L. R. does not show a tendency to misidentify drawings of objects. Second, L. R. was not impaired at identifying incomplete object patterns or perceptually degraded objects, and thus, could form a gestalt from fragmented pictures. Most important, L. R. showed congruency effects for both the upper and lower parts of a face (Experiment 2), indicating that he was able to encode at least coarse-level information from the entire face, and demonstrating that he does not suffer from either simultanagnosia or visual neglect.

Although the evidence argues against simultanagnosia in L. R.'s case, it is possible that L. R. is unable to direct attention to relevant areas within a face for further detailed perceptual processing (or lacks the time to do so). That is, the extraction of fine-level details may require additional processing that is mediated by selective attention to local face regions.3 Barton, Press, et al. (2002) suggested that an attentional allocation deficit may impair the ability to process spatial relations specifically. Two of their prosopagnosic patients (Patients 3 and 4) were unable to detect changes in spatial configurations when they had to attend to both eye and mouth features. Detection of spatial relations for mouths improved significantly, however, when mouth trials appeared in a single block and the patients were directed to attend to this location alone.
Although both L. R. and Patients 3 and 4 are sensitive to blocking manipulations, other important differences between these patients suggest that different aspects of attention and/or other visual processing mechanisms are implicated. First, the lesions in Barton, Press, et al.'s patients extend more posteriorly than L. R.'s lesion. Second, whereas L. R. showed normal discrimination of the mouth when trials were randomized, Patients 3 and 4 showed a deficit for both eye and mouth spatial judgments under randomized conditions. Last, the deficits of Patients 3 and 4 appear to be limited to the processing of spatial relations, whereas L. R. was impaired at detecting both spatial and feature changes in nonpreferred locations. This latter finding suggests that selective attention is required for the encoding of both spatial relations and fine-level features.

An alternative explanation that could account for L. R.'s inability to recognize faces on the basis of sequential attention to multiple features is that L. R. is unable to integrate the product of sequential attention into a unified percept for comparison. That is, he may be unable to represent the details of facial features as a combination. L. R. may be limited to a time-consuming and inefficient matching process, whereby the comparison process of a single feature must be completed before a second feature is entertained. If this were the case, given unlimited exposure duration and simultaneous matching, we would expect that L. R. could match faces on the basis of multiple features, but with abnormally long response times. According to this hypothesis, the encoding of fine-level details and spatial information requires not only directed attention to local areas, but also an integration process that combines the product of sequential attention into a coherent percept. It is difficult to separate the integration hypothesis from the attentional hypothesis with the current data; we plan further experiments with L. R. to discriminate these two alternatives. One promising technique to approach this question is multidimensional signal detection theory. Using this technique, Wenger and Ingvalson (2003) were able to show that some aspects of holistic processing have a decisional basis.

The uniqueness of L. R.'s behavioral pattern can be attributed to the unusual location of his lesion. Most commonly, lesions reported in cases of prosopagnosia extend posteriorly to the junction of the occipitotemporal gyri in the right hemisphere, posterior to, or encompassing, the area where the FFA is typically found. As a result, many prosopagnosics have difficulty with making fine-level discriminations (e.g., Gauthier, Behrmann, & Tarr, 1999). In contrast, the anterior location of L. R.'s lesion appears to have spared the visual processing areas that are nominally responsible for making fine-level discriminations. It appears that L. R.'s lesion may have also spared the right FFA. This area has been associated with expert visual processing of faces, birds, cars, and Greebles. In particular, increased activity in the right FFA is correlated with increased holistic processing during expertise training (Gauthier & Tarr, 2002). Of course, we cannot know whether or not L. R.'s right FFA is functioning normally based on CT scans alone, and unfortunately, the presence of clips prevents L. R. from undergoing fMRI. However, L. R.'s behavioral data are consistent with at least some preservation of function in this area.
We speculate therefore that L. R.'s prosopagnosia is most likely to be informative of perceptual processes subserved by regions anterior to the FFA.

Mechanisms of Perceptual Expertise

L. R.'s unique behavioral pattern with faces is informative with respect to the nature of expert perceptual mechanisms. First, L. R.'s results have implications for the nature of holistic processes. L. R.'s results indicate that holistic processing can be driven by the perception of coarse-level information from a wide spatial window, but that this wide spatial window may not contain fine-level details of features and their spatial relations. We note that fine-level details may contribute to normal holistic processing when available, but that these fine details are not necessary for holistic processing to occur. Second, L. R.'s results also have implications for configural processes that underlie the inversion effect. Experiment 3 revealed that L. R.'s sensitivity to spatial relations was confined to the lower portion of the face. This finding is consistent with Leder and Bruce's (2000) hypothesis that relational information is processed locally, rather than derived from a holistic template. Third, L. R.'s results indicate that holistic and configural processes can be dissociated, in that the former may be applied to a relatively large window of coarse-level information, whereas the latter relies on the extraction of fine-level details that may require the operation of selective attention.

Finally, L. R.'s pattern of results implies that expert mechanisms are not applied to the whole stimulus in an "all-or-none" fashion, but can be applied to a limited spatial region. This finding is consistent with conclusions from studies of the normal acquisition of perceptual expertise with Greebles (Gauthier & Tarr, 1997, 2002; Gauthier, Williams, et al., 1998). In these studies, different Greeble parts showed increasing sensitivity to configural changes at different times in the training paradigm: Sensitivity to a spatial change in the boges (upper part) emerged first for the quiff (the middle part), and only after longer training for the dunth (lower part). This systematic pattern of increasing sensitivity from close to more distal parts suggests that the order of acquisition may have been determined by spatial proximity. In this sense, L. R.'s face recognition resembles the Greeble processing of Greeble experts at intermediate levels of training. Thus, both the acquisition and the loss of perceptual expertise may be the result of gradual quantitative changes in the spatial area over which certain perceptual processes can be applied.

The expertise perspective is currently the only approach that can explain or predict such a spatially graded development or loss of face recognition mechanisms. However, this spatially graded pattern alone is not necessarily incompatible with the domain-specific view; rather, the domain-specific view in its current form is simply not specified enough to account for this pattern. In this sense, the data are orthogonal to the domain-specific question. The expertise hypothesis does, however, provide a clear prediction that L. R. should encounter similar difficulties in a task that requires integration of the fine-level information from multiple parts of nonface objects. In fact, results from a Greeble training study with L. R. (Bukach, Bub, et al., in preparation) show a strikingly similar pattern to the current face studies:
When L. R. was required to spread his attention over multiple Greeble parts (in a paradigm similar to Experiment 4), identification was primarily based on a single Greeble part. These results indicate that L. R.'s deficit is not face-specific, but is general to any homogeneous class of stimuli for which expert perceptual processes are engaged.

Conclusions

We have presented evidence for the preservation of processes associated with perceptual expertise in a prosopagnosic patient, L. R. (fine-level discrimination, holistic processing, and configural processing). Importantly, we have shown that the application of these processes to fine-level details is limited to a restricted spatial area, and that this spatially restricted expertise is insufficient for face recognition. These results support the view that the skills underlying perceptual expertise are not "all or none," but can be developed or lost in a gradually expanding or shrinking spatial window.

METHODS

Experiment 1

Participants

L. R.'s performance was compared to that of controls reported in Viele et al. (2002). These controls were Brown University undergraduate and graduate students who participated either for pay or for course credit. There were seven controls in the short exposure duration condition and five controls in the long exposure duration condition.

Materials

The stimulus set consisted of 40 gray-scale 3-D laser scans of faces provided by Heinrich Bülthoff and Niko Troje (Max Planck Institute, Tübingen, Germany; http://faces.kyb.tuebingen.mpg.de/). All faces were cropped using a 2 × 3-inch oval window to remove cues from the hairline and face contour. These 40 faces were then altered by using the Adobe Photoshop "spherize" filter on the two halves (split horizontally) of each image to change the aspect ratio of the entire object. Each image was distorted in this fashion in both positive and negative directions, yielding 40 face triplets (the original, plus the negative and positive filtered images) (see Figure 2 for an example of the face triplets). There were also two gray-scale photos of rabbits used for practice, one original photo and one with the tail removed. The experiment was conducted on a Macintosh computer using RSVP software (www.tarrlab.org/rsvp.html).

Design and Procedure

The experiment was a simultaneous matching paradigm, with level of difficulty (easy, difficult) and exposure duration (2 sec, 5 sec) manipulations. Stimuli were paired at the exemplar level such that easy discriminations differed by a two-step filter (positively filtered matched with negatively filtered exemplars) and difficult discriminations differed by only a one-step filter (the original photo matched with either the negatively or the positively filtered stimulus). Presentation conditions using short and long exposure durations were run on separate days for L. R. Within each session, 24 blocks (12 easy, 12 difficult) were randomly ordered for each subject. Each block contained 20 trials (10 same, 10 different) for a total of 480 trials. Subjects were given breaks after every 8 blocks (160 trials). Stimuli were presented side by side on the computer screen. Subjects indicated their response by a keypress (the comma and period keys for same and different judgments, respectively). They then rated their confidence using a scale from 1 (very low confidence) to 6 (very high confidence). Each participant first received a block of practice trials in which they judged rabbit photos. The first 4 trials of the practice block had unlimited exposure durations; in the remaining 10 trials, stimulus pairs were presented for 2 sec each. Sensitivity (d′) was calculated to control for possible differential response biases between L. R. and controls.
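As a concrete illustration of the pairing scheme just described, the sketch below enumerates easy (two-step) and difficult (one-step) pairs from the 40 spherize triplets. The identifiers are hypothetical placeholders and the sketch is not the original stimulus-generation script.

```python
# Sketch of the Experiment 1 pairing scheme: each of the 40 faces yields a
# triplet (negative, original, positive spherize). Difficult ("one-step") pairs
# match the original with one filtered version; easy ("two-step") pairs match
# the two filtered versions. Identifiers below are hypothetical placeholders.
def make_pairs(face_ids):
    easy, difficult = [], []
    for fid in face_ids:
        neg, orig, pos = f"face{fid}_neg", f"face{fid}_orig", f"face{fid}_pos"
        easy.append((neg, pos))            # two filter steps apart
        difficult.append((orig, neg))      # one filter step apart
        difficult.append((orig, pos))
    return easy, difficult

easy, difficult = make_pairs(range(1, 41))
print(len(easy), len(difficult))           # 40 easy pairs, 80 difficult pairs
```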
Experiment 2

Participants

In addition to L. R., two female controls (aged 29 and 24) and two male controls (aged 49 and 50) from the University of Victoria participated and received payment.

Materials

The stimuli were created from 12 digital images of male faces without hair (from the face database provided by the Max-Planck Institute for Biological Cybernetics in Tübingen, Germany; http://faces.kyb.tuebingen.mpg.de/). Each face was approximately 200 × 160 pixels in size and saved in 256 grays. The top and bottom halves of each face (cut just above the tip of the nose) were saved as separate images (used for the isolated parts condition). The top and bottom halves were reorganized to create 24 composite faces to be used in the experiment. The parts were paired systematically so that each top or bottom appeared in two of the stimuli. A black line (3 pixels thick) was positioned at the seam between the two halves of each stimulus (or in the same position for isolated parts). A 256 × 256 pixel nonsense texture mask was made using the glass "tiny lens" filter in Adobe Photoshop. The experiment was conducted on a Macintosh computer equipped with a color monitor using RSVP software (www.tarrlab.org/rsvp.html).

Design and Procedure

Experiment 2 was a sequential face-parts matching paradigm in which subjects were to make a same-different judgment on either the top or bottom of faces. Subjects were postcued as to the relevant part and were instructed to ignore the other, irrelevant part. We manipulated location (top or bottom) and congruency of the distractor (irrelevant) part. For congruent trials, the information in the irrelevant part of the study face led to the same decision as the information in the relevant part (i.e., if the relevant test part was the same as the studied face, the irrelevant part was also the same; if the test part was different from the study part, the irrelevant part was also different from the studied face). For incongruent trials, the distractor part led to the opposite decision to the target part. An isolated part condition (both study and test) was also included as a baseline.

Subjects first completed a practice block of 12 randomly selected trials. They then completed eight blocks of 36 trials each, for a total of 288 trials. There were an equal number of same and different trials. Each trial began with an instruction to "press the space bar," followed by the study stimulus (either a whole face or half face) presented centrally for 700 msec. A mask then appeared and flashed four times for intervals of 120 msec followed by 50-msec pauses. After the fourth mask flashed, a cue indicating the relevant part appeared in the appropriate location (either above or below the position of the relevant part for top and bottom trials, respectively). After another 800 msec, the test stimulus appeared centrally, and both the part cue and test face stayed on the screen for another 4000 msec. Participants were given 5000 msec from the onset of the target stimulus to make a response. Participants responded by pressing the 1 or 3 key (labeled "same" and "different," respectively) on the number pad of the keyboard.
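For readers tracking the trial structure, the following is a schematic restatement of the Experiment 2 procedure as data, together with the congruency labelling of the irrelevant half. It is a descriptive sketch only, not the original RSVP script, and the event names are our own shorthand.

```python
# Schematic of one Experiment 2 trial as (event, timing) steps. Descriptive only.
TRIAL_TIMELINE = [
    ("study face (whole or half)", "700 msec"),
    ("mask, flashed four times", "4 x 120 msec flashes with 50-msec pauses"),
    ("part cue alone", "800 msec"),
    ("part cue + test face", "4000 msec; responses accepted up to 5000 msec from test onset"),
]

def congruency(cued_same: bool, noncued_same: bool) -> str:
    """A trial is congruent when the irrelevant half points to the same
    same/different decision as the cued half, and incongruent otherwise."""
    return "congruent" if cued_same == noncued_same else "incongruent"

print(congruency(cued_same=True, noncued_same=False))   # incongruent
```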
Experiment 3

Participants

In addition to L. R., three male controls (ages 30, 37, and 50) took part in the experiment.

Materials

The stimulus set consisted of gray-scale digital facial photos of 36 individuals taken at a 10-degree angle and with neutral expressions. Hairlines were covered with a swim cap, and all distinctive markings such as collars and jewelry were removed using Adobe Photoshop 5.0. Each photo was also edited to produce four variations, either by replacing the eyes or the mouth with those from a different photo (feature condition) or by moving the eyes horizontally or the mouth vertically (spatial relation condition). Sample photos of each condition are displayed in Figure 4. Additional photo sets of two more individuals were created for practice trials. The experiment was conducted on a Macintosh computer equipped with a color monitor, using Psychlab software (Bub & Gum, 1990).

Design and Procedure

The experiment was a sequential matching paradigm in which orientation (upright vs. inverted) and modification (feature substitution vs. spatial relation) were manipulated. The experiment was conducted in two sessions on separate days, with the upright orientation presented on the first day and the inverted orientation on the second. Modification was blocked in an ABBA design (A = feature; B = spatial). There were equal numbers of same and different trials, presented in random order. Each of the four blocks in a session began with eight practice trials, followed by 72 experimental trials, for a total of 320 trials per session (including practice). Each trial began with a fixation point for 500 msec, followed by an ISI of 255 msec, the first (unmodified) face for 2000 msec, a mask for 255 msec, and then the second face (modified on different trials), which remained on the screen until the subject responded. All stimuli were presented at the center of the screen. Subjects responded by pressing the "z" key for same and the "m" key for different. A proportional face inversion effect (FIE) was calculated by dividing the difference in sensitivity between the upright and inverted conditions by the sensitivity in the upright condition.
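The proportional FIE is a one-line computation; the sketch below spells it out, using made-up d′ values purely for illustration.

```python
def proportional_fie(d_upright, d_inverted):
    """Proportional face inversion effect:
    (d'_upright - d'_inverted) / d'_upright."""
    return (d_upright - d_inverted) / d_upright

# Illustrative values only (not data from the study):
print(proportional_fie(d_upright=2.4, d_inverted=1.8))  # -> 0.25, a 25% drop with inversion
```

Expressing the inversion effect as a proportion of upright sensitivity allows its size to be compared across observers who differ in overall sensitivity.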
Experiment 4

Participants

L. R. and four controls (three female students from the University of Victoria, average age 21, and one male control, age 49) participated in the experiment.

Materials

The stimulus set consisted of eight modified gray-scale digital facial photos of the type used in Experiment 3. An original photo was modified using Adobe Photoshop 5.0 in the following manner: the eyes were either left in their original position or moved apart; the nose was either the original nose or the nose of another individual in the same spatial location; and the mouth was either left in its original position or moved downward. Eight stimuli were created using all possible combinations of these modifications, and each was assigned a unique name. The experiment was conducted on a Macintosh computer equipped with a color monitor using RSVP software (www.tarrlab.org/rsvp.html). Labels with the names of the eight faces were placed on the number pad of the keyboard (keys 1–4 and 6–9). A template reference sheet (8 × 11) was created with the eight faces and labels in the same spatial arrangement as that of the number pad (see Figure 5).

Design and Procedure

The experiment was a speeded identification task with variable exposure durations (250, 500, 750, 1000, and 1250 msec). Each subject was first given the template to study and was required to explain the differences between each of the eight faces before proceeding to the computer task. Each subject was given as much time as necessary to find the differences. The template was then hung to the left of the computer monitor for easy reference throughout the experiment. There were 4 blocks of 120 trials each, with 96 trials at each of the 5 exposure durations. Exposure duration and face identity were randomized throughout the experiment. On each trial, a fixation point was displayed in the center of the screen for 500 msec, followed immediately by one of the eight faces displayed in the center of the screen for a variable exposure duration. Subjects pressed the number-pad key whose label matched the face presented. Subjects had unlimited time to respond and were encouraged to consult the template as necessary. Following their response, a cue appeared asking them to rate their confidence in the response from 1 (very low confidence) to 6 (very high confidence). Subjects pressed the return key when they were ready to continue, and the next trial began after a 200-msec pause. After each block, subjects received feedback on their accuracy for that block.

Acknowledgments

First and foremost, we thank L. R. for his enthusiasm and patience. We would also like to acknowledge the help of Dr. Alex Moll in interpreting the CT scans, and Cheryl Klaiman and Kathy Koenig for their help in administering some of the neuropsychological tests. Finally, we gratefully acknowledge the financial support of the Perceptual Expertise Network, funded by a grant from the James S. McDonnell Foundation, the Vanderbilt Kennedy Center for Research on Human Development, and the Natural Sciences and Engineering Research Council of Canada.

Reprint requests should be sent to Cindy M. Bukach, Department of Psychology, Vanderbilt University, 204 Wilson Hall, Nashville, TN 37203, or via e-mail: cindy.bukach@vanderbilt.edu.

Notes

1. The procedure for testing the equivalence of two d′ values involves a z-test, with z = (d′1 − d′2) / [var(d′1) + var(d′2)]^(1/2), where var(d′i), the variance of the d′i estimate, is given by var(d′i) = Pr(fa)[1 − Pr(fa)] / [Nn · Ord(Zn)^2] + Pr(hit)[1 − Pr(hit)] / [Ns · Ord(Zs)^2]. Here Pr(fa) and Pr(hit) are the false-alarm and hit probabilities on which the d′i estimate is based; Nn and Ns are the numbers of trials in the "noise-alone" and "signal + noise" conditions, respectively; and Ord(Zn) and Ord(Zs) are the ordinates of the z-scores for the noise and signal distributions, respectively.
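The test in Note 1 can be implemented directly; the sketch below follows Marascuilo (1970) as we read it. The squared ordinate terms are our assumption about exponents lost in the printed formula, and the function names and example numbers are ours, not the authors'.

```python
from math import sqrt
from statistics import NormalDist

_nd = NormalDist()

def d_prime_variance(p_hit, p_fa, n_signal, n_noise):
    """Variance of a d' estimate (Note 1; cf. Marascuilo, 1970).

    Ord(z) is the ordinate (standard normal density) at the z-score of the
    corresponding rate; we assume it enters squared, as in the standard formula.
    """
    ord_hit = _nd.pdf(_nd.inv_cdf(p_hit))
    ord_fa = _nd.pdf(_nd.inv_cdf(p_fa))
    return (p_fa * (1 - p_fa)) / (n_noise * ord_fa ** 2) + \
           (p_hit * (1 - p_hit)) / (n_signal * ord_hit ** 2)

def d_prime_difference_z(d1, var1, d2, var2):
    """z statistic for testing the equivalence of two independent d' estimates."""
    return (d1 - d2) / sqrt(var1 + var2)

# Illustrative comparison of two hypothetical d' estimates:
v1 = d_prime_variance(p_hit=0.90, p_fa=0.20, n_signal=120, n_noise=120)
v2 = d_prime_variance(p_hit=0.75, p_fa=0.30, n_signal=120, n_noise=120)
print(round(d_prime_difference_z(2.12, v1, 1.20, v2), 2))
```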
2. This follow-up study confirms that L. R.'s inability to attend simultaneously to the details of multiple face regions is not simply due to an age-related decrement, as his performance for eyes in this task was also impaired relative to another group of five age-matched controls (M = 46 years). Moreover, this group of controls did not differ from a group of 32 younger adults (M = 20 years) (Bukach, Le Grand, et al., submitted).

3. We note that in a fully cued version of a similar experiment, normal observers showed no inversion effects for either spatial or feature changes (Barton, Deepak, et al., 2003). The authors claimed that the inversion effects disappeared because of focused attention. One might argue that if L. R. is selectively attending to a single location, he likewise should not show inversion effects. However, in Barton et al.'s fully cued version, trials were blocked not only by location but also by type of change. Furthermore, trials included only one degree of change (e.g., all mouths were moved the same distance down in the "mouth down" condition). Thus, subjects could become selectively tuned to a particular type and size of change, and it is therefore doubtful that this fully cued task reflects expert face processing. L. R.'s preservation of inversion effects for the mouth region in Experiment 3 (uncued) argues against the use of such a selective tuning strategy. Rather, L. R.'s strategy appears to reflect a spatially limited version of the expert face processing shown by normal controls both in our task and in Barton et al.'s uncued version.

REFERENCES

Baddeley, A., Emslie, H., & Nimmo-Smith, I. (1994). The Doors and People Test. Suffolk, England: Thames Valley Test Company.
Barton, J. J. S., Deepak, S., & Malik, N. (2003). Attending to faces: Change detection, familiarization, and inversion effects. Perception, 32, 15–28.
Barton, J. J. S., Press, D. Z., Keenan, J. P., & O'Connor, M. (2002). Lesions of the fusiform face area impair perception of facial configuration in prosopagnosia. Neurology, 58, 71–78.
Barton, J. J. S., Zhao, J., & Keenan, J. P. (2003). Perception of global facial geometry in the inversion effect and prosopagnosia. Neuropsychologia, 41, 1703–1711.
Benton, A. L., Hamsher, K. deS., Varney, N. R., & Spreen, O. (1983). Contributions to neuropsychological assessment. New York: Oxford University Press.
Brown, J. I., Fishco, V. V., & Hanna, G. S. (1993). The Nelson–Denny Reading Test. Itasca, IL: Riverside Publishing Company.
Bub, D., & Gum, T. (1990). Psychlab [Computer software]. Montreal: McGill University.
Bukach, C. M., & Bub, D. N. (2002). [Studies of conjunction face-matching ability in a prosopagnosic subject]. Unpublished raw data.
Bukach, C. M., Bub, D. N., Kadlec, H., Gauthier, I., & Tarr, M. (in preparation). The limits of perceptual expertise in a prosopagnosic subject.
Bukach, C. M., Le Grand, R., Kaiser, M., Bub, D., & Tanaka, J. (submitted). Preservation of mouth region information in two cases of prosopagnosia.
Damasio, A. R. (1990). Category-related recognition defects as a clue to the neural substrates of knowledge. Trends in Neurosciences, 13, 95–98.
Damasio, A. R., Damasio, H., & Van Hoesen, G. W. (1982). Prosopagnosia: Anatomical basis and behavioral mechanisms. Neurology, 32, 331–341.
Davidoff, J. B., Matthews, W. B., & Newcombe, F. (1986). In H. D. Ellis, M. A. Jeeves, F. Newcombe, & A. Young (Eds.), Aspects of face processing (pp. 279–290). Dordrecht: Martinus Nijhoff.
Diamond, R., & Carey, S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115, 107–117.
Duchaine, B. C., Dingle, K., Butterworth, E., & Nakayama, K. (2004). Normal Greeble learning in a severe case of developmental prosopagnosia. Neuron, 43, 469–473.
Farah, M. J., Levinson, K. L., & Klein, K. (1995). Face perception and within-category discrimination in prosopagnosia. Neuropsychologia, 33, 661–674.
Farah, M. J., Wilson, K. D., Drain, M., & Tanaka, J. N. (1998). What is "special" about face perception? Psychological Review, 105, 482–498.
Gauthier, I., Behrmann, M., & Tarr, M. J. (1999). Can face recognition really be dissociated from object recognition? Journal of Cognitive Neuroscience, 11, 349–370.
Gauthier, I., Behrmann, M., & Tarr, M. J. (2004). Are Greebles like faces? Using the neuropsychological exception to test the rule. Neuropsychologia, 42, 1961–1970.
Gauthier, I., Curran, T., Curby, K. M., & Collins, D. (2003). Perceptual interference supports a non-modular account of face processing. Nature Neuroscience, 6, 428–432.
Gauthier, I., & Tarr, M. J. (1997). Becoming a "Greeble" expert: Exploring mechanisms for face recognition. Vision Research, 37, 1673–1682.
Gauthier, I., & Tarr, M. J. (2002). Unraveling mechanisms for expert object recognition: Bridging brain activity and behavior. Journal of Experimental Psychology: Human Perception and Performance, 28, 431–446.
Gauthier, I., Williams, P., Tarr, M. J., & Tanaka, J. (1998). Training "greeble" experts: A framework for studying expert object recognition processes. Vision Research, 38, 2401–2428.
Goldstein, A. G., & Mackenberg, E. J. (1966). Recognition of human faces from isolated facial features: A developmental study. Psychonomic Science, 6, 149–150.
Haxby, J. V., Horwitz, B., Ungerleider, L. G., Maisog, J. M., Pietrini, P., & Grady, C. L. (1994). The functional organization of human extrastriate cortex: A PET-rCBF study of selective attention to faces and locations. Journal of Neuroscience, 14, 6336–6353.
Henke, K., Schweinberger, S. R., Grigo, A., Klos, T., & Sommer, W. (1998). Specificity of face recognition: Recognition of exemplars of non-face objects in prosopagnosia. Cortex, 34, 289–296.
Hole, G. J. (1994). Configurational factors in the perception of unfamiliar faces. Perception, 23, 65–74.
Joseph, R. M., & Tanaka, J. (2003). Holistic and part-based face recognition in children with autism. Journal of Child Psychology and Psychiatry and Allied Disciplines, 44, 529–542.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.
Klin, A., Jones, W., Schultz, R., Volkmar, F., & Cohen, D. (2002). Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Archives of General Psychiatry, 59, 809–816.
Langdell, T. (1978). Recognition of faces: An approach to the study of autism. Journal of Child Psychology and Psychiatry and Allied Disciplines, 19, 255–268.
Le Grand, R., Mondloch, C. J., Maurer, D., & Brent, H. P. (2001). Early visual experience and face processing. Nature, 410, 890.
Leder, H., & Bruce, V. (1998). Local and relational aspects of face distinctiveness. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 51A, 449–473.
Leder, H., & Bruce, V. (2000). When inverted faces are recognized: The role of configural information in face recognition. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 53A, 513–536.
Levine, D. N., & Calvanio, R. (1989). Prosopagnosia: A defect in visual configural processing. Brain and Cognition, 10, 149–170.
Marascuilo, L. A. (1970). Extensions of the significance test for one-parameter signal detection hypotheses. Psychometrika, 35, 237–243.
McCarthy, G., Puce, A., Gore, J. C., & Allison, T. (1997). Face-specific processing in the human fusiform gyrus. Journal of Cognitive Neuroscience, 9, 605–610.
McKelvie, S. J. (1976). The role of eyes and mouth in the memory of a face. American Journal of Psychology, 89, 311–323.
McNeil, J. E., & Warrington, E. K. (1991). Prosopagnosia: A reclassification. Quarterly Journal of Experimental Psychology, 43A, 267–287.
Moscovitch, M., Winocur, G., & Behrmann, M. (1997). What is special about face recognition? Nineteen experiments on a person with visual object agnosia and dyslexia but normal face recognition. Journal of Cognitive Neuroscience, 9, 555–604.
Nunn, J. A., Postma, P., & Pearson, R. (2001). Developmental prosopagnosia: Should it be taken at face value? Neurocase, 7, 15–27.
Puce, A., Allison, T., Gore, J. C., & McCarthy, G. (1995). Face-sensitive regions in human extrastriate cortex studied by functional MRI. Journal of Neurophysiology, 74, 1192–1199.
Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382–439.
Searcy, J. H., & Bartlett, J. C. (1996). Inversion and processing of component and spatial–relational information in faces. Journal of Experimental Psychology: Human Perception and Performance, 22, 904–915.
Sergent, J. (1984). An investigation into component and configural processes underlying face perception. British Journal of Psychology, 75, 221–242.
Sergent, J., & Signoret, J. L. (1992a). Functional and anatomical decomposition of face processing: Evidence from prosopagnosia and PET study of normal subjects. In V. Bruce, A. Cowey, & E. Rolls (Eds.), Processing the facial image (pp. 55–62). New York: Clarendon Press.
Sergent, J., & Signoret, J. L. (1992b). Varieties of functional deficits in prosopagnosia. Cerebral Cortex, 2, 375–388.
Shallice, T. (1988). From neuropsychology to mental structure. Cambridge: Cambridge University Press.
Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6, 174–215.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology, 46A, 225–245.
Tanaka, J. W., & Sengco, J. A. (1997). Features and their configuration in face recognition. Memory and Cognition, 25, 583–592.
Viele, K., Kass, R. E., Tarr, M. J., Behrmann, M., & Gauthier, I. (2002). Recognition of faces versus Greebles: A case study in model selection. In C. Gatsonis, R. E. Kass, A. Carriquiry, A. Gelman, D. Higdon, D. K. Pauler, & I. Verdinelli (Eds.), Case studies in Bayesian statistics (Vol. 6, pp. 91–111). New York: Springer.
Walker Smith, G. J. (1978). The effects of delay and exposure duration in a face recognition task. Perception & Psychophysics, 24, 63–70.
Warrington, E. K. (1994). Recognition Memory Test. Windsor, Berkshire, UK: NFER-Nelson.
Warrington, E. K., & James, M. (1991). The Visual Object and Space Perception Battery. Bury St. Edmunds, UK: Thames Valley Test Company.
Wenger, M. J., & Ingvalson, E. M. (2003). Preserving informational separability and violating decisional separability in facial perception and recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 1106–1118.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Young, A. W., Hellawell, D., & Hay, D. C. (1987). Configurational information in face perception. Perception, 16, 747–759.