Published in final edited form as: Cogn Neuropsychol. 2018 October; 35(7): 343–351. doi:10.1080/02643294.2018.1432584.

Spontaneous in-flight accommodation of hand orientation to unseen grasp targets: A case of action blindsight

Emily K. Prentiss#1, Colleen L. Schneider#1,2, Zoë R. Williams3,4,5, Bogachan Sahin4, and Bradford Z. Mahon1,4,5,6,7

1. Department of Brain & Cognitive Sciences, University of Rochester, Rochester, NY, USA
2. Medical Scientist Training Program, University of Rochester School of Medicine & Dentistry, Rochester, NY, USA
3. Department of Ophthalmology, University of Rochester Medical Center, Rochester, NY, USA
4. Department of Neurology, University of Rochester Medical Center, Rochester, NY, USA
5. Department of Neurosurgery, University of Rochester Medical Center, Rochester, NY, USA
6. Center for Visual Science, University of Rochester, Rochester, NY, USA
7. Center for Language Science, University of Rochester, Rochester, NY, USA

# These authors contributed equally to this work.

Corresponding Author: Bradford Z. Mahon, PhD, Meliora Hall, University of Rochester, Rochester, NY 14627, mahon@rcbi.rochester.edu. Other Authors: Emily K. Prentiss, BS, 430 Elmwood Avenue, University of Rochester, Rochester, NY 14620, emily.prentiss@rochester.edu; Colleen L. Schneider, BA, 430 Elmwood Avenue, University of Rochester, Rochester, NY 14620, colleen_schneider@urmc.rochester.edu; Zoë R. Williams, MD, 210 Crittenden Blvd., Flaum Eye Institute, University of Rochester, Rochester, NY 14642, zoe_williams@urmc.rochester.edu; Bogachan Sahin, MD PhD, Department of Neurology, University of Rochester, Box 673, 601 Elmwood Avenue, Rochester, NY 14642, bogachan_sahin@urmc.rochester.edu

Abstract

The division of labor between the dorsal and ventral visual pathways is well established. The ventral stream supports object identification, while the dorsal stream supports online processing of visual information in the service of visually-guided actions. Here, we report a case of an individual with a right inferior quadrantanopia who exhibited accurate spontaneous rotation of his wrist when grasping a target object in his blind visual field. His accurate wrist orientation was observed despite the fact that he exhibited no sensitivity to the orientation of the handle in a perceptual matching task. These findings indicate that non-geniculostriate visual pathways process basic volumetric information relevant to grasping, and reinforce the observation that phenomenal awareness is not necessary for an object’s volumetric properties to influence visuomotor performance.

There are multiple parallel pathways within the early visual system, with different channels optimized for different visual information, including form, color, and motion (Goodale & Milner, 1992; Jeannerod & Jacob, 2005; Livingstone & Hubel, 1988; Merigan & Maunsell, 1993; Sincich & Horton, 2005; Ungerleider & Mishkin, 1982). At the cortical level, areas within the ventral visual pathway support object identification and recognition in allocentric reference frames and represent material and surface properties that are also relevant for planning and executing functionally appropriate actions (Cant & Goodale, 2007; Gallivan et al., 2011; Goodale, Westwood, & Milner, 2003; Goodale et al., 1994; Schenk, 2006).
The dorsal stream supports the online transformation of visual information into action-relevant properties, including size, shape, location, depth, and orientation (Goodale & Milner, 1992; for discussion see Freud, Plaut, & Behrmann, 2016; Pisella et al., 2000; Schenk & McIntosh, 2010; Kravitz, Saleem, Baker, & Mishkin, 2011). A key question is the degree to which dorsal processing of stimuli can proceed independently of processing in primary visual cortex (V1), and independently of ‘awareness’ or ‘perception.’ A direct means to test this is to study the visuomotor abilities of individuals with lesions that prevent processing of stimuli in V1, and who are therefore blind across both eyes for a region of their visual field.

Blindsight refers to the phenomenon whereby individuals who are cortically blind due to a lesion to V1 or the optic radiations can still make accurate perceptual judgments and/or visuomotor actions to stimuli presented in the blind visual field (Cowey & Stoerig, 1995; Leopold, 2012; Pöppel, 1973; Stoerig & Cowey, 1997; Stoerig & Cowey, 2007; Weiskrantz, 2009; Weiskrantz, Warrington, Sanders, & Marshall, 1974). “Action-blindsight”, a term coined by Danckert and Rossetti (2005), refers to the ability of some individuals to make accurate saccades or visually-guided reaches and pointing gestures to objects in the blind field, despite being phenomenally unaware of, and unable to explicitly describe, those objects. Those residual visuomotor abilities are thought to be supported by one or both of two pathways that bypass V1: the pathway from the superior colliculus through the pulvinar to extrastriate cortex, and the pathway from the lateral geniculate nucleus directly to extrastriate cortex (Lyon, Nassi, & Callaway, 2010; Schmid et al., 2009; Schmid et al., 2010; Sincich, Park, Wohlgemuth, & Horton, 2004; Takakuwa, Kato, Redgrave, & Isa, 2017).

There have been several case reports of cortically blind patients who retain an ability to make accurate reaches to objects presented in the blind field (Danckert et al., 2003; Marcel, 1998; Perenin & Jeannerod, 1975; Perenin & Rossetti, 1996; see also de Gelder et al., 2008). However, to our knowledge, there is only one reported case of a patient who could accurately rotate the wrist to grasp an unseen object while being unable to make accurate explicit perceptual judgments about the object’s orientation (Perenin & Rossetti, 1996). The patient reported by Perenin and Rossetti, PJG, had a right hemianopia secondary to a lesion involving V1 and the optic radiations but sparing the occipital pole and not extending beyond the parieto-occipital sulcus. When asked to report the size or orientation of objects, PJG performed at chance levels. However, when asked to post a card through slots at varying orientations, he was able to do so with remarkable accuracy (for precedent with this task from visual form agnosia, see Goodale, Milner, Jakobson, & Carey, 1991). PJG was also able to scale his grip aperture appropriately and spontaneously when picking up objects presented in the blind field. The opposite pattern has been reported in individuals with optic ataxia, an impairment in object-directed reaching and/or grasping associated with lesions to posterior parietal cortex.
Individuals with optic ataxia can make accurate perceptual judgments about objects, but have difficulty orienting, shaping, and/or locating their hands appropriately to grasp objects (Binkofski, Buccino, Dohle, Seitz, & Freund, 1999; Perenin & Vighetto, 1988; Pisella et al., 2000).

Careful study of individuals exhibiting dissociations between vision-for-action and vision-for-perception continues to hold tremendous potential for constraining theories about the functional organization of early and mid-level visual systems, as well as the subcortical and cortical inputs to the ventral and dorsal visual pathways. In the current report, we describe an individual with a lesion involving left lateral occipital and posterior parietal areas; he spontaneously and accurately rotated his wrist in flight to match the orientation of an object that was the target of his reach, despite having no visual awareness in that part of his visual field and being unable to report the orientation of the target in a perceptual matching task.

Case Report

AI is a 75-year-old right-handed man who sustained an ischemic stroke involving the left precentral gyrus and parietal and lateral occipital cortex, sparing the occipital pole (Figure 1); the lesion involved the parietal white matter, including the optic radiations, deafferenting early visual cortex (Figure 1). Following the stroke, AI had right-sided hemiparesis and a dense right inferior quadrantanopia (Figure 2A). At the time of the stroke, he reported mild word-finding difficulty and impairments in mental imagery and short-term memory.

Testing Timeline.

We tested AI in two phases. In Phase I, AI was tested while an inpatient at Strong Memorial Hospital in Rochester, NY. This initial set of tests included a brief neuropsychological evaluation (3 days post-stroke) and a neuro-ophthalmologic exam (8 days post-stroke). The key experiments (grasping and perceptual judgments) that are the focus of the current report took place on days 11–12 and 14–16 post-stroke. After discharge, AI came into the lab for Phase II testing (22, 24, and 28 days post-stroke), during which he completed a larger battery of neuropsychological tests, as well as a second neuro-ophthalmologic exam. It became clear that during the week between his discharge from the hospital and his Phase II testing in the lab, AI had experienced substantial visual recovery (Supplemental Figure 1); however, all data from the experiment in this report were collected while AI’s quadrantanopia was still present. In anticipation of a potentially rapidly changing clinical profile, the perceptual matching and grasping tasks described below were both administered in every testing session.

Overview of Neuropsychological Tests.

When first screened for the study (2 days post-stroke), AI was oriented to self, time, and place. It was during this initial screening, when he was asked to reach out and grasp a pen held at different angles in his blind visual field, that his ability to spontaneously rotate his wrist accurately was first noticed. Below is a brief account of AI’s performance on neuropsychological tests at the time of the experimental investigation; see Supplemental Online Materials for experimental designs and Phase II performance.
AI showed no signs of neglect on line bisection or on copying a drawing (Supplemental Figure 2A), performed well on a test of mid-level vision involving orientation matching administered at central fixation with free viewing (Riddoch & Humphreys, 1993), and was 87% correct on object reality decision (Riddoch & Humphreys, 1993), indicating no visual form agnosia. He demonstrated mild word-finding difficulty but no particular difficulty with object recognition: he was 85% correct at naming a subset of the Snodgrass and Vanderwart pictures (Snodgrass & Vanderwart, 1980). In contrast, he demonstrated extreme difficulty constructing a mental image from memory and could not draw a giraffe from memory (Supplemental Figure 2B).

Visual Field Testing.

AI’s vision was assessed with a full neuro-ophthalmologic exam (by author ZRW, at the Flaum Eye Institute, University of Rochester Medical Center), including 24–2 Humphrey automated perimetry (each eye tested individually, with central fixation enforced). Humphrey perimetry demonstrated a dense right inferior quadrantanopia (Figure 2A); this was independently confirmed for the central 20 degrees of vision using a letter detection and identification visual field task (Supplemental Online Materials and Supplemental Figure 1). Note, however, that the perceptual matching and grasping tasks were performed more peripherally than the Humphrey perimetry test locations. For this reason, care was taken to ensure that both the perceptual matching task and the grasping task were administered during each session, which ensured that we consistently tested grasping in a visual field location where AI was not able to ‘phenomenally see’ the stimulus.

Visuomotor Study

Materials & Methods.

The visuomotor task described here was conducted while AI was an inpatient on the acute rehabilitation unit at Strong Memorial Hospital. He was first familiarized with the task over two days (days 11 and 12 post-stroke), then tested over three days (days 14–16 post-stroke).

AI completed two different tasks: a perceptual matching task and a reach-to-grasp task. Each task was completed in the intact visual field and in the blind visual field in every testing session. The sequence of the tasks was counterbalanced across sessions. For example, on the first day of testing, the order was: reach-to-grasp in the blind field (task “A”), matching in the blind field (task “B”), reach-to-grasp in the intact field (task “C”), and matching in the intact field (task “D”); on the second day, he completed the tasks in a “CDAB” order. For each trial, the handle was set to one of six orientations: 0° (horizontal), 90° (vertical), or 30° or 60° to the right or left of the vertical meridian. The study thus had a 2×2 design: task (matching vs. grasping) by target location (blind vs. intact visual field). At the beginning of each testing block in the blind field, the experimenters verified the placement of the grasping device within AI’s blind field by asking him whether he could see any part of the device while fixating on a webcam. The webcam was moved so that the grasping device was located farther in his peripheral vision until he reported that it had completely disappeared from sight; that location was then tested in that session for both perceptual matching and grasping.
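The target locations and handle size reported in the next section are expressed in degrees of visual angle. As a point of reference, the sketch below shows the standard conversion from physical offsets to visual angle; the viewing distance and object size used here are hypothetical illustrations, not measurements from the testing sessions.

    import math

    def eccentricity_deg(offset_cm, viewing_distance_cm):
        # Angle between the line of sight and a point displaced offset_cm from
        # fixation, viewed from viewing_distance_cm away.
        return math.degrees(math.atan2(offset_cm, viewing_distance_cm))

    def subtense_deg(length_cm, viewing_distance_cm):
        # Visual angle subtended by an object of the given length, centered on
        # the line of sight.
        return 2 * math.degrees(math.atan2(length_cm / 2.0, viewing_distance_cm))

    # Hypothetical example: a 10 cm handle viewed from 50 cm subtends ~11.4 degrees,
    # and a point 25 cm to the right of fixation lies ~26.6 degrees into the periphery.
    print(round(subtense_deg(10, 50), 1), round(eccentricity_deg(25, 50), 1))

Under these hypothetical values the handle’s subtense falls in the same range as that reported below (8–11°); the actual placements were re-verified at each session as described above.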
The two grasping devices were constructed so that they could be fastened to a table (Figure 2B). Each device had a handle fixed to a rotating annulus that was oriented in the fronto-parallel plane. Because of AI’s right-sided hemiparesis, all grasping and matching was performed with his left arm. Since testing took place outside the lab and across multiple days, care was taken to ensure that the grasping device was located at a similar eccentricity in the intact and blind visual fields each day. The center of the target handle (for both the matching and the grasping task) was placed in AI’s blind field (22.5–31.5° right of fixation and 37.6–56.3° below fixation; ranges correspond to variation across testing sessions, Figure 2B). The handle itself subtended between 8° and 11° of visual angle (again, the range depends on the testing session).

Wrist orientation was recorded using an iPhone 4 and the “Advanced Gyroscope” application (Mercier, 2013). The application recorded real-time position information using the iPhone’s native accelerometer. AI wore the iPhone in an armband on his left wrist. The application sampled wrist orientation in degrees relative to the horizontal plane at 10 Hz. The gyroscope was calibrated at the beginning and middle of each block so that it read 0° when AI grasped the handle in the horizontal orientation. Two video cameras also captured AI’s movements: a GoPro-style camera was positioned orthogonal to his direction of reach and an HD camera was positioned above and behind AI, while a webcam recorded eye position for offline analysis (Figure 2B).

Matching Task.

The matching task was designed to assess AI’s perceptual abilities in his blind and intact visual fields. The second (manipulated) handle was placed in the intact field, just below fixation. For each trial, the experimenters temporarily occluded AI’s view of the model while setting the handle to one of the six pre-specified orientations (see above). Once set, the occluder was removed and AI was instructed to manipulate the (visible) second handle to match the model (in his blind field) as closely as possible while maintaining fixation on the webcam. This task was repeated with the model in the intact field (lower left quadrant, 21–41° left of fixation and 33.5–58.3° below fixation; ranges correspond to variation across testing sessions) and the second handle in the intact field just below fixation. Over three sessions, he completed 84 trials with the model in the sighted field and 84 trials with the model in his blind field, yielding 14 trials for each of the six orientations in each field.

Reach-to-Grasp Task.

The reach-to-grasp task was designed to assess spared visuomotor ability in AI’s blind and intact visual fields. The grasping device was placed in either the blind field or the intact field, as above. For each trial, the experimenters rotated the occluded handle an arbitrary number of times so that auditory cues, combined with memory of the handle’s orientation from the previous trial, could not provide information about its new orientation; the handle was then set to one of the six pre-specified orientations (see above). The occluder was then removed and AI reached to grasp the handle as quickly and accurately as possible while maintaining fixation on the webcam.
A specific starting position for reaching-to-grasp was not enforced, but AI generally rested his hand on the armrest or in his lap before each trial. AI completed 84 trials in the blind field and 84 trials in the sighted field, yielding 14 trials for each orientation in the impaired and intact visual fields. On a small number of trials in his blind field, AI would reach out without rotating his wrist, touch the handle with his knuckles, and only then orient his wrist and grasp the handle; these trials were excluded from the analysis (n = 10) and he was reminded to orient his wrist appropriately ‘in flight.’ Throughout all testing, AI reported that he could not see the handle in his blind field and expressed surprise when he would reach out, in a way that he perceived to be random, and successfully grasp the handle.

Analysis.

We conducted a frame-by-frame analysis of the videos from the camcorder positioned behind AI and the camera orthogonal to his reaching trajectory (Figure 2B). For the matching task, we recorded the orientations of the model handle and of the handle that AI manipulated. For the reaching task, we extracted his wrist angle from the iPhone at the time point corresponding to the last video frame before he made contact with the handle. While AI was completing the task, one experimenter monitored his gaze in real time so that trials on which he broke fixation could be repeated, ensuring that all cells of the design had the same number of ‘clean’ trials. We also inspected the webcam videos after testing, which confirmed that AI maintained fixation throughout all trials. Results from handle orientations that mirrored each other across the vertical meridian (e.g., 30° to the left and 30° to the right) were collapsed for analysis.

Results

As performance was consistent across all three testing sessions, and matching and grasping were performed within each session, all reported results reflect performance averaged across testing dates.

Matching.

When both grasping devices were in AI’s intact visual field, he was able to orient the second handle to match the orientation of the model extremely accurately (r = 0.99, p < 0.001, Figure 3A), with an average magnitude of difference between target and actual orientation of 7.8° (SD = 6.1°). However, when the model handle was placed in his blind visual field, he was unable to match the visible handle to the model (r = 0.08, p > 0.50, Figure 3B); the average magnitude of difference between target and actual orientation was 58° (SD = 43.8°). When asked about his performance, he stated that he was guessing on all trials in which the model was presented in the blind visual field.

Reaching-to-Grasp.

When the grasping device was presented in his intact visual field, AI spontaneously oriented his wrist upon reaching for the handle with a high degree of accuracy (r = 0.88, p < 0.001, Figure 3C); his average deviation from the target orientation was 8.9° (SD = 10.3°). In contrast to his poor performance in perceiving the orientation of the handle in his blind visual field, AI also spontaneously and accurately oriented his wrist when grasping the (unseen) handle (r = 0.71, p < 0.001, Figure 3D); his average deviation from the target orientation was 19.4° (SD = 16.9°). When asked about his performance, AI asserted that he never had a percept of the handle in his blind field and was guessing the orientation of the handle every time. He never ceased to be surprised that he was accurate in grasping the handle (see Supplemental Video 1 for example trials).
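For readers who wish to see the summary statistics laid out explicitly, the sketch below follows the logic of the analysis described above: mirrored orientations are collapsed, and accuracy is summarized as the correlation between target and produced angles together with the mean absolute deviation. The trial values and the angular convention (signed deviation from vertical) are placeholders chosen only for illustration; they are not AI’s data.

    import numpy as np

    def collapse_mirrored(angle_deg):
        # Collapse mirrored orientations (e.g., 30 deg left and 30 deg right of
        # vertical) onto a common value by taking the unsigned deviation from vertical.
        return abs(angle_deg)

    def summarize(target_deg, produced_deg):
        # Pearson correlation and mean absolute deviation between target and
        # produced orientations after collapsing mirrored pairs.
        target = np.array([collapse_mirrored(a) for a in target_deg], dtype=float)
        produced = np.array([collapse_mirrored(a) for a in produced_deg], dtype=float)
        r = np.corrcoef(target, produced)[0, 1]
        mad = np.mean(np.abs(target - produced))
        return r, mad

    # Placeholder trials: degrees from vertical, sign marks left/right of the meridian.
    targets = [0, 30, -30, 60, -60, 90]
    produced = [5, 28, -35, 55, -70, 85]
    r, mad = summarize(targets, produced)
    print("r = %.2f, mean absolute deviation = %.1f deg" % (r, mad))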
General Discussion

We have reported a dissociation between a complete lack of awareness of visual information and spontaneous rotation of the wrist during grasping in a patient with a lesion that deafferented early visual cortex. AI was able to accurately perceive and grasp a handle that was presented in his intact visual field, but was unable to perceive a handle in his blind field; the key finding is that AI was able to accurately rotate his wrist to grasp the handle in his blind field. His ability to accurately orient his hand during the grasping action means that visual information was being processed, despite the fact that he had no experience of vision in that part of the field. These findings are another demonstration of the dissociation between vision-for-action and vision-for-perception, first described in the context of visual form agnosia by Goodale and colleagues (1991).

To our knowledge, our findings represent the second reported case in the literature of a dissociation between accurate wrist orientation and impaired perception of objects presented in the cortically blind field, the prior case being that described by Perenin and Rossetti (1996). One aspect of our case that is of particular interest is that AI’s lesion included posterior-lateral parietal cortex. His spontaneous and accurate accommodation of his hand’s orientation to the orientation of the handle in the blind field might appear somewhat surprising given the extent of his putatively ‘dorsal’ lesion. However, it is important to note that all grasping was performed by AI with his left (i.e., ipsilesional) hand, as it was not possible to test his contralesional hand due to his hemiparesis, which persisted throughout all of our testing sessions, even after he had recovered a substantial amount of vision.

One account of AI’s intact ability to orient his hand to an unseen target in his blind field is that the damage in his left parietal lobule in fact spared the relevant regions of the dorsal pathway. In other words, while he had a parietal lesion, it may not have involved the parietal regions that participate in dorsal visual analysis in the service of action. Another, more intriguing, possibility is that AI’s parietal lesion would have caused optic ataxia, except that it was not possible to test his ability to grasp targets with his contralesional hand. Because optic ataxia is classically a visuomotor impairment for grasping targets in the contralesional visual field with the contralesional hand, AI’s hemiparesis may have ‘masked’ a possible optic ataxia. This issue can be addressed through studies of future patients with deafferenting or frank V1 lesions who do not have parietal involvement, or who have parietal involvement without motor impairments. The expectation would be that patients with cortical blindness and no parietal lesion would demonstrate accurate spontaneous wrist orientation while grasping with either hand. In contrast, patients with concomitant parietal lesions but without hemiparesis might exhibit accurate wrist orientation when grasping targets in the blind field with the ipsilesional hand, but not with the contralesional hand (i.e., action blindsight and optic ataxia within the same individual, dissociated across the two hands).
Conclusion

Critical insight about the type of information that is processed by non-geniculostriate pathways can be gleaned by studying patients with lesions that affect post-geniculate visual processing. The findings we have reported in this case study indicate that pathways that bypass V1 are sufficient to process the principal axis of elongation of an object that is the target of an action, and provide additional evidence for the dissociation between vision-for-action and vision-for-perception.

Supplementary Material

Refer to the Web version on PubMed Central for supplementary material.

Acknowledgments

This research was supported by NIH grant R01 NS089069 to B.Z.M., a core grant to the Center for Visual Science (P30 EY001319), and a grant from the Schmitt Program on Integrative Brain Research (University of Rochester) to B.S. and B.Z.M. We are grateful to AI for his enthusiastic participation in these studies, and to Duje Tadin for his assistance with the development of the letter detection and identification test.

References

Binkofski F, Buccino G, Dohle C, Seitz RJ, & Freund HJ (1999). Mirror agnosia and mirror ataxia constitute different parietal lobe disorders. Annals of Neurology, 46(1), 51–61. [PubMed: 10401780]
Cant JS, & Goodale MA (2007). Attention to form or surface properties modulates different regions of human occipitotemporal cortex. Cerebral Cortex, 17(3), 713–731. [PubMed: 16648452]
Cowey A, & Stoerig P (1995). Blindsight in monkeys. Nature, 373(6511), 247–249. [PubMed: 7816139]
Danckert J, Revol P, Pisella L, Krolak-Salmon P, Vighetto A, Goodale MA, & Rossetti Y (2003). Measuring unconscious actions in action-blindsight: Exploring the kinematics of pointing movements to targets in the blind field of two patients with cortical hemianopia. Neuropsychologia, 41(8), 1068–1081. [PubMed: 12667542]
Danckert J, & Rossetti Y (2005). Blindsight in action: What can the different sub-types of blindsight tell us about the control of visually guided actions? Neuroscience & Biobehavioral Reviews, 29(7), 1035–1046. [PubMed: 16143169]
de Gelder B, Tamietto M, van Boxtel G, Goebel R, Sahraie A, van den Stock J, Stienen BMC, Weiskrantz L, & Pegna A (2008). Intact navigation skills after bilateral loss of striate cortex. Current Biology, 18(24), R1128–R1129. [PubMed: 19108766]
Freud E, Plaut DC, & Behrmann M (2016). ‘What’ is happening in the dorsal visual pathway. Trends in Cognitive Sciences, 20(10), 773–784.
Gallivan JP, Chapman CS, Wood DK, Milne JL, Ansari D, Culham JC, & Goodale MA (2011). One to four, and nothing more: Nonconscious parallel individuation of objects during action planning. Psychological Science, 22(6), 803–811. [PubMed: 21562312]
Goodale MA, Meenan JP, Bulthoff HH, Nicolle DA, Murphy KJ, & Racicot CI (1994). Separate neural pathways for the visual analysis of object shape in perception and prehension. Current Biology, 4(7), 604–610. [PubMed: 7953534]
Goodale MA, & Milner AD (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20–25. [PubMed: 1374953]
Goodale MA, Milner AD, Jakobson LS, & Carey DP (1991). A neurological dissociation between perceiving objects and grasping them. Nature, 349(6305), 154–156. [PubMed: 1986306]
Goodale MA, Westwood DA, & Milner AD (2003). Two distinct modes of control for object-directed action. In Heywood CA, Milner AD, & Blakemore C (Eds.), Roots of Visual Awareness (Vol. 144, pp. 131–144). Amsterdam: Elsevier Science.
Huxlin KR, Martin T, Kelly K, Riley M, Friedman DI, Burgin WS, & Hayhoe M (2009). Perceptual relearning of complex visual motion after V1 damage in humans. Journal of Neuroscience, 29(13), 3981–3991.
Jeannerod M, & Jacob P (2005). Visual cognition: A new look at the two-visual systems model. Neuropsychologia, 43(2), 301–312. [PubMed: 15707914]
Kravitz DJ, Saleem KS, Baker CI, & Mishkin M (2011). A new neural framework for visuospatial processing. Nature Reviews Neuroscience, 12(4), 217–230. [PubMed: 21415848]
Leopold DA (2012). Primary visual cortex: Awareness and blindsight. In Hyman SE (Ed.), Annual Review of Neuroscience (Vol. 35, pp. 91–109). Palo Alto: Annual Reviews.
Livingstone M, & Hubel D (1988). Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science, 240(4853), 740–749. [PubMed: 3283936]
Lyon DC, Nassi JJ, & Callaway EM (2010). A disynaptic relay from superior colliculus to dorsal stream visual cortex in macaque monkey. Neuron, 65(2), 270–279. [PubMed: 20152132]
Marcel A (1998). Blindsight and shape perception: Deficit of visual consciousness or of visual function? Brain, 121, 1565–1588. [PubMed: 9712017]
Mercier N (2013). Advanced Gyroscope [mobile application software].
Merigan WH, & Maunsell JHR (1993). How parallel are the primate visual pathways? Annual Review of Neuroscience, 16, 369–402.
Perenin MT, & Jeannerod M (1975). Residual vision in cortically blind hemifields. Neuropsychologia, 13(1), 1–7. [PubMed: 1109450]
Perenin MT, & Rossetti Y (1996). Grasping without form discrimination in a hemianopic field. Neuroreport, 7, 793–797. [PubMed: 8733747]
Perenin MT, & Vighetto A (1988). Optic ataxia: A specific disruption in visuomotor mechanisms. I. Different aspects of the deficit in reaching for objects. Brain, 111, 643–674. [PubMed: 3382915]
Pisella L, Gréa H, Tilikete C, Vighetto A, Desmurget M, Rode G, . . . Rossetti Y (2000). An ‘automatic pilot’ for the hand in human posterior parietal cortex: Toward reinterpreting optic ataxia. Nature Neuroscience, 3(7), 729–736. [PubMed: 10862707]
Pöppel E, Held R, & Frost D (1973). Residual visual function after brain wounds involving the central visual pathways in man. Nature, 243(5405), 295–296. [PubMed: 4774871]
Riddoch MJ, & Humphreys GW (1993). The Birmingham Object Recognition Battery (BORB). Hove: Lawrence Erlbaum Associates.
Schenk T (2006). An allocentric rather than perceptual deficit in patient D.F. Nature Neuroscience, 9(11), 1369–1370. [PubMed: 17028584]
Schenk T, & McIntosh RD (2010). Do we have independent visual streams for perception and action? Cognitive Neuroscience, 1(1), 52–62. [PubMed: 24168245]
Schmid MC, Mrowka SW, Turchi J, Saunders RC, Wilke M, Peters AJ, . . . Leopold DA (2010). Blindsight depends on the lateral geniculate nucleus. Nature, 466(7304), 373–377. [PubMed: 20574422]
Schmid MC, Panagiotaropoulos T, Augath MA, Logothetis NK, & Smirnakis SM (2009). Visually driven activation in macaque areas V2 and V3 without input from the primary visual cortex. PLoS One, 4(5).
Sincich LC, & Horton JC (2005). The circuitry of V1 and V2: Integration of color, form, and motion. Annual Review of Neuroscience, 28, 303–326.
Sincich LC, Park KF, Wohlgemuth MJ, & Horton JC (2004). Bypassing V1: A direct geniculate input to area MT. Nature Neuroscience, 7(10), 1123–1128.
Snodgrass JG, & Vanderwart M (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6(2), 174–215. [PubMed: 7373248]
Stoerig P, & Cowey A (1997). Blindsight in man and monkey. Brain, 120, 535–559. [PubMed: 9126063]
Stoerig P, & Cowey A (2007). Blindsight. Current Biology, 17(19), R822–R824. [PubMed: 17925204]
Takakuwa N, Kato R, Redgrave P, & Isa T (2017). Emergence of visually-evoked reward expectation signals in dopamine neurons via the superior colliculus in V1 lesioned monkeys. eLife, 6.
Ungerleider LG, & Mishkin M (1982). Two cortical visual systems. In Ingle DJ, Goodale MA, & Mansfield RJ (Eds.), Analysis of Visual Behavior (pp. 549–580). Cambridge, MA: MIT Press.
Weiskrantz L (2009). Blindsight: A case study spanning 35 years and new developments. New York: Oxford University Press.
Weiskrantz L, Warrington EK, Sanders MD, & Marshall J (1974). Visual capacity in the hemianopic field following a restricted occipital ablation. Brain, 97, 709–728. [PubMed: 4434190]

Figure 1. MRI showing extent of acute stroke lesion. The images show diffusion-weighted MRI collected 1 day post-stroke demonstrating a lesion in left parieto-occipital cortex involving Baum’s loop but sparing the occipital pole.

Figure 2. Visual Fields and Experimental Setup. A) Automated 24–2 Humphrey Visual Field collected 8 days post-stroke, with both eyes combined into an interpolated winner map (as in Huxlin et al., 2009). Darkened areas show a right inferior quadrantanopia (see also Supplemental Figure 1). B) Photograph and schematic of the experimental set-up during the visuomotor experiment.

Figure 3. Dissociation between wrist orientation and perceptual matching. Results of the perceptual matching and grasping experiments. Error bars indicate standard error of the mean. Manipulated handle angle compared to model handle angle when matching targets presented to the A) intact (left) visual field and B) blind (right) visual field; AI’s wrist orientation compared to target angle when reaching in the C) intact (left) field and D) blind (right) field.