Running head: COMMUNICATIVE CONTEXT AND ACQUIRED LANGUAGE DIFFICULTY

‘All the better for not seeing you’: An investigation of whether the speech of an individual with acquired communication difficulties is affected by communicative context

Carolyn Bruce*
Language and Communication, Division of Psychology and Language Sciences, University College London

Ursula Braidwood
Division of Psychology and Language Sciences, University College London

Caroline Newton
Developmental Science, Division of Psychology and Language Sciences, University College London

* Corresponding author: Dr C. Bruce, UCL Language and Communication, Chandler House, 2 Wakefield Street, London, United Kingdom, WC1N 1PF. Tel: +44(0)20 7679 4225. E-mail: c.bruce@ucl.ac.uk

Abstract

Evidence shows that speakers adjust their speech depending on the demands of the listener. However, it is unclear whether people with acquired communication disorders can and do make similar adaptations. This study investigated the impact of different conversational settings on the intelligibility of a speaker with acquired communication difficulties. Twenty-eight assessors listened to recordings of the speaker reading aloud 40 words and 32 sentences to a listener who was either face-to-face or unseen. The speaker’s ability to convey information was measured by the accuracy of assessors’ orthographic transcriptions of the words and sentences. Assessors’ scores were significantly higher in the unseen condition for the single word task, particularly if they had heard the face-to-face condition first. Scores for the sentence task were significantly higher in the second presentation regardless of the condition. The results from this study suggest that therapy conducted in situations where the client is not able to see their conversation partner may encourage them to perform at a higher level and increase the clarity of their speech.

1. Introduction

Do individuals with acquired communication difficulties make conversational adjustments to benefit their listeners? Many studies have demonstrated that proficient speakers adapt what they say and how they say it in numerous ways, depending on circumstances, the cognitive demands of the task and/or the demands of the listener (Bell, 1984; Cameron-Faulkner, Lieven, & Tomasello, 2003; Uchanski, Choi, Braida, Reed, & Durlach, 1996). However, few studies have investigated whether individuals with acquired communication difficulties, who had normal competence in language prior to brain injury, modify their speech in order to aid listeners’ comprehension. In the present study, we examine whether a woman with acquired aphasia and associated motor speech difficulties speaks differently when she can see the listener and when she cannot.

A speaker’s speed of delivery, articulatory precision, complexity of grammatical structure and choice of vocabulary are modified by factors such as task demands and the communicative context. For example, a speaker describing a new and complex task will take care to select the appropriate vocabulary and syntax to provide the detailed information required for the task to be completed accurately.
Whether information is new or not has also been shown to affect articulation: words that are new tend to be produced with more care, while words that are predictable, either from being heard before or from the linguistic context, are often produced less clearly, with shorter durations, reduced vowel spaces and dropped phonemes (Aylett & Turk, 2006). These articulatory changes observed in conversation have not been found when individuals read words in a list (Fowler, 1988), which suggests that tasks with higher cognitive demands have an effect on articulatory precision.

Speakers have also been shown to take the listener’s needs and knowledge into account. A range of factors, including the age and language proficiency of the listener, have been found to affect the semantic, syntactic and phonetic forms used by the speaker. For example, speakers adjust the complexity of an utterance according to the listener’s age (Cameron-Faulkner et al., 2003); select a language code appropriate to the listener’s socioeconomic status (Bell, 1984); rephrase utterances or give additional information if they have not been understood (Goodwin & Heritage, 1990); and adopt a hyper-articulated style of speech when talking to someone who has a hearing impairment (Uchanski et al., 1996).

The term clear speech has been used to refer to the way in which talkers adjust their speaking style to maximise intelligibility for a communication partner (Smiljanic & Bradlow, 2009). A number of acoustic changes have been identified as relating to clear speech production, including expanded vowel space area (Bradlow, Torretta, & Pisoni, 1996), slowed speech rate (Bradlow, Kraus, & Hayes, 2003) and increased vocal intensity (Dromey, 2000).

Aylett and Turk (2006) suggest that there are two opposing constraints affecting the care with which people speak: communicating effectively and using articulatory effort efficiently. Similarly, Lindblom (1990) observed that speakers vary their pronunciation along a continuum from hyper-articulation to hypo-articulation depending on the listening conditions. Hyper-articulation, which involves pronouncing words more clearly than normal, is used when the listening conditions are difficult and the speaker believes the listener needs more acoustic information to understand what is being said. As articulating words precisely requires effort, it is unlikely that this would be the speaker’s usual speech style in conversation. However, a variety of instructions focusing either on the speaker’s performance (e.g., ‘speak clearly’ or ‘hyperarticulate’) or on the listener’s experience (e.g., ‘speak to someone with a hearing impairment’) have been shown to elicit clear speech. Recent research suggests that the wording of the instruction affects the particular acoustic adjustments made by the speaker, possibly because it focuses the speaker’s attention on different parameters in speech processing. In their study of four different speaking conditions (habitual, clear, hearing impaired, and overenunciate), Lam, Tjaden and Wilding (2012) found that instructing healthy young adults to overenunciate was the most effective cue, eliciting the greatest changes in vowel production and speech timing. In contrast, the instruction to ‘speak to someone with a hearing impairment’ appeared to be more effective in increasing vocal intensity.
Further studies are needed to establish whether these findings translate to clinical populations, such as people with acquired communication difficulties, or indeed whether such changes would increase their intelligibility.

Many studies have demonstrated that visual cues, such as facial expression and gesture, play an important role in the communicative exchange, supplementing or occasionally overriding the speech signal. Such cues help both the listener and the speaker. When the listener can see the speaker’s face, speech intelligibility increases (Garcia & Dagenais, 1998; Keintz, Bunton, & Hoit, 2007), and when the speaker can see the listener’s face they have a better idea of whether the message has been transferred successfully. These studies suggest that communication is likely to be less efficient in situations where the conversational partners are unable to see one another. However, there is evidence that speakers are sensitive to the needs of the listener in these conditions and adapt their speech accordingly. Adaptations include producing more words (Boyle, Anderson, & Newlands, 1994) and more filled pauses (e.g., “um” and “uh”) when the speaker cannot see the listener (Rimé, 1982). Other evidence shows that speakers may also make articulatory changes. In Anderson, Bard, Sotillo, Newlands, and Doherty-Sneddon’s (1997) study, transcription accuracy was higher for recordings made when speakers could see the conversational partner than when the partner was unseen. We are not aware of any published research on whether people with acquired communication disorders make similar adaptations to their speech.

The presence of dysarthria in a speaker has been shown to have a large impact, both acoustically (Kent & Netsell, 1975; Kent, Netsell, & Abbs, 1979; Kent, Kent, Weismer, & Duffy, 2000; Weismer, Martin, Kent, & Kent, 1992) and perceptually (Mackenzie & Lowit, 2007). The intelligibility of a speaker is influenced by a range of factors, including the severity of the motor speech impairment (Yorkston & Beukelman, 1978) and listeners’ familiarity with the speaker and the speech impairment (Beukelman & Yorkston, 1980). A number of studies have shown that the intelligibility of speakers with dysarthria is affected by communicative gestures, the predictiveness of the message, and the relation of the message to specific contexts (e.g., Garcia & Cannito, 1996). Moreover, intelligibility scores have been found to be higher when listeners were presented with audio-visual recordings rather than audio-only recordings, suggesting that listeners use information available through visual speech to compensate for lost acoustic information in the degraded speech signal. These studies suggest that speakers with dysarthria are easier to understand in face-to-face conditions. However, in these studies the speakers themselves were not involved in conditions that required them to take into account the needs of the listener; the type of recording constituted the two different listening conditions. It is therefore not possible to determine from these findings whether individuals with acquired communication difficulties, such as dysarthria and aphasia, retain the ability (i) to interpret what the listener can be assumed to know and (ii) to modify their speech style, e.g., by articulating words more carefully, to increase the chance that they will be understood in difficult listening conditions.
This study aims to address these gaps in our knowledge by investigating whether a woman with apraxia of speech, dysarthria and anomic aphasia spontaneously modifies her speech production when talking under conditions she judges to be difficult for the listener. If the speaker is sensitive to the fact that the listener is likely to be disadvantaged in the unseen condition, she may alter her speech to aid the listener (e.g., by reducing her speed of articulation). If so, assessors should respond more accurately to speech produced in the unseen condition.

2. Material and methods

2.1 Participants

Three types of participants were included in this study: the speaker, the listener and 28 assessors.

Speaker: The speaker, SN, was a right-handed, 63-year-old woman and non-native speaker of English who had a left parietal infarct four years prior to the start of this study. Her first language was Serbian, but she had lived in the UK for 37 years prior to the study and spoke English fluently. She passed a hearing screening at 40 dB HL in the better ear for 1000 Hz and 2000 Hz (Ventry & Weinstein, 1983) and performed well (94%) on an auditory discrimination task (used in Dunton, Bruce, & Newton, 2011). Hearing loss in the higher frequencies did not appear to affect her performance in one-to-one speaking situations.

SN presented with chronic anomic aphasia with co-existing mild to moderate apraxia of speech (AOS) and mild unilateral upper motor neuron dysarthria (UUMND), as assessed independently by two trained speech and language therapists. In addition, SN had a paralysed right hand but no other physical difficulties associated with her stroke. On the Western Aphasia Battery (WAB; Kertesz, 2006) SN had an aphasia quotient of 93.2, indicating a mild aphasia. In conversation, she used circumlocution when unable to name a lexical item, as well as producing some semantic errors. She made occasional grammatical and/or word-order errors, although these did not obscure the meaning of the message. She reported that since her stroke it took her longer to process and formulate utterances.

In addition to her aphasia, SN exhibited motor speech difficulties. Subscores on the Apraxia Battery for Adults – Second Edition (Dabul, 2000) revealed mild deficits in diadochokinetic rate, increasing word length (part A) and utterance time for polysyllabic words, and moderate deficits in increasing word length (part B) and the repeated trials subtests, but no oral or limb apraxia. Characteristics of AOS observed in her speech included 1) an inability to increase rate while maintaining phonemic integrity, 2) phoneme distortions, 3) prolonged vowels, and 4) self-initiated attempts to repair errors, with production often improving over successive attempts. SN also demonstrated a mild UUMND as described by Duffy (1995). Her speech was characterised by low pitch and articulatory imprecision, particularly in consonant clusters and sounds such as /r/, /l/, /tʃ/ and /j/ (e.g., the word children was produced as [tʃədraɪn]). Words and syllables were usually produced at a slow rate, with extended pauses between them and with equal stress, affecting the rhythm and intonation of her connected speech. It was not clear whether these changes in speech production were features of her AOS or UUMND or her attempt to compensate for them.
SN’s speech intelligibility was, at worst, moderately reduced. She reported that her combined speech and language difficulties led to communication breakdowns, particularly with new conversation partners.

Listener: The listener for the recordings was a final-year trainee speech and language therapist who had experience interacting with clients with acquired communication difficulties. She had not met the speaker prior to the study.

Assessors: These were twenty-eight adults, aged between 18 and 48 years (mean age 23.5), with no specific training or experience in interacting with people who have acquired communication disorders. All had English as their first language and reported a normal level of hearing. They were divided into two groups to listen to the recordings.

2.2 Materials

(i) A set of 40 minimal pairs comprising an equal number of pairs that differed by word-initial consonant (e.g., fan-van), word-final consonant (e.g., bag-back), word-initial consonant cluster (e.g., stick-slick) and intervocalic consonant (e.g., coffee-copy). Opposite members of each pair were used in the two conditions of the experiment, presented in a different order.

(ii) Thirty-two Bamford-Kowal-Bench (BKB) sentences (Bench, Kowal, & Bamford, 1979). The BKB materials consist of short sentences using simple vocabulary, such as ‘a cat sits on the bed’.

2.3 Procedures

The experiment was in two parts: the first involved recording the utterances produced by the speaker communicating with a listener; the second, collecting the assessors’ transcriptions of these recordings. Each participant attended two sessions one week apart.

2.3.1 Recording speech samples

The speaker was told that the primary goal of the two recording sessions was to speak in such a way as to allow the listener to complete the two tasks: (i) identify the correct word, and (ii) transcribe the sentence. The listener had been told that her job was to perform the tasks according to what she believed the speaker had said and that throughout the experiment she was to limit her responses to minimal turns, for example requests for repetition if necessary and non-linguistic vocalisations such as ‘mhm’ and ‘uh-huh’.

Session one, the face-to-face recording, took place in a sound-proofed recording booth. The speaker and the listener sat on opposite sides of a vertical screen. This barrier ensured that although they could see one another’s face they could not see the other’s work. The speaker read aloud the set of single words whilst the listener selected the target from a choice of two words (i.e., both members of the minimal pair, e.g., stick-slick). The speaker then read aloud the BKB sentences, which the listener transcribed orthographically. The conversation was recorded using a Rode NT1-A microphone, connected to an Edirol UA-25 USB interface, which was situated near the speaker. Cool Edit 2000 was used as recording software.

Session two, the unseen recording, took place with the speaker and the listener in separate sound-proofed booths. An analogue output from the UA-25 USB interface in the speaker’s booth was fed to an amplifier and headphones in the conversation partner’s booth so the recording could be monitored.
There was a microphone in the conversation partner’s booth (RS 249-946), connected to a Tascam Porta 02 mixer, feeding a loudspeaker in the speaker’s booth which could be switched on if it was necessary to talk to the speaker. The researcher conducting the tasks was in the same room as the speaker, sitting behind her so as not to be a visual distraction. After familiarisation with the set-up, the speaker was asked to read aloud the second set of words whilst the listener again selected the target from a choice of two words. The speaker and listener were given the same instructions as in session one for the BKB sentences. The sentences and the new set of single words that the speaker read aloud were randomly ordered so as to be in a different sequence from the first session.

2.3.2 Preparing speech samples for playback to assessors

Recorded samples were transferred onto computer via a digital sound card, maintaining the sampling rate and quantization of the original recordings. Recordings of each stimulus, word or sentence, were separated into individual sound files. Stimulus files were normalized using the digital audio editing software Sound Forge 4.0 so that the peak amplitude of each stimulus was constant across all files.

2.3.3 Experimental task

The 28 assessors were divided into two groups of 14 in order to counterbalance the order of listening. The group labelled F2F listened to the face-to-face recording first and the unseen recording second, while the group labelled UN listened to the recordings in the reverse order. Testing was carried out in a quiet but not soundproofed room. The recordings were played through two loudspeakers and each recording was played once. Prior to beginning the experimental task, the assessors were told they would hear a sample of speech from a speaker with acquired communication difficulties reading aloud words and sentences. The assessors were asked to transcribe orthographically the single words and sentences they heard. They were also told that the person speaking would be difficult to understand and that if they were uncertain they should take their best guess. However, if they were unable to venture a guess they should skip the word. The assessors’ second session involved the same procedure but listening to a recording of a different set or order of stimuli (depending on which they had heard first).

At the end of their second session, all assessors were asked to complete a questionnaire on the ease of task completion. The questionnaire was created specifically for this study and asked how easy or difficult the speech was to understand, which characteristics of the individual’s output impacted understanding, and which presentation (i.e., first or second) was easier to understand. Participants indicated their responses to these questions on a 5-point Likert scale. They were also asked which of the tasks (i.e., single words or sentences) was more difficult. In addition, they were asked to provide information about their familiarity with acquired communication difficulties, in particular dysarthria. Assessors’ transcriptions were collected after each session for analysis.

2.4 Scoring

The orthographic transcriptions of single words and sentences generated by the assessors were scored using different criteria. Misspellings and homonyms were accepted as correct.

(i) In the single word task each correct word earned one point, giving a total of 40 points for each of the two conditions. The data were further analysed by counting the different types of errors in participants’ transcriptions: initial consonant errors, final consonant errors, intervocalic consonant errors, vowel errors, cluster reduction errors and epenthesis errors. The percentages of errors on monosyllabic and multisyllabic words were also compared.

(ii) In the sentence task, BKB sentences were scored using the ‘tight’ scoring system proposed by Bamford and Wilson (1979), whereby one point was awarded for each key word in the sentence correctly identified. Across each set of sentences a total of 103 points was possible. (A sketch of both scoring rules is given below.)
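To make the two scoring rules concrete, the following is a minimal Python sketch. It is an illustration under stated assumptions, not the authors’ actual scoring script: the function names and data layout are ours, the key-word matching is simplified to exact whole-word matches, and the accepted-variants handling for misspellings and homonyms described above would in practice be done by hand or via a look-up list.

```python
# Minimal sketch of the two scoring rules (illustrative only; names and
# matching rules are our assumptions, not the study's actual procedure).

def score_single_words(responses, targets):
    """One point per correctly transcribed word (max 40 per condition).
    Misspellings/homonyms would need an accepted-variants list in practice."""
    return sum(r.strip().lower() == t.lower() for r, t in zip(responses, targets))

def score_bkb_sentence(transcription, key_words):
    """'Tight' BKB scoring: one point per key word correctly transcribed.
    Simplified here to exact whole-word matching."""
    produced = transcription.lower().split()
    return sum(kw.lower() in produced for kw in key_words)

# Hypothetical example: 'clock' and 'time' taken as the key words.
print(score_bkb_sentence("the clock shows the time", ["clock", "time"]))  # -> 2
```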
3. Results

Examination of the transcripts indicated that, as instructed, the listener mainly limited her contributions during the tasks to minimal turns for encouragement, as outlined above. She asked for repetitions of four sentence stimuli in the face-to-face condition, but none in the unseen condition or for either single word task. The speaker made unprompted revisions to individual sounds and whole words throughout, and twice spontaneously repeated a whole sentence, but only in the unseen condition. A measure of speech rate was taken and no difference was found between the two conditions (face-to-face mean = .78 syllables per second; unseen mean = .82; t(31) = -1.28, p = .21).

Mean accuracy scores were calculated for each task and each condition and are presented in Table 1.

Table 1. Mean accuracy scores (with standard deviations) for single words and BKB sentences for the two groups in both conditions

                        Single words (max = 40)         BKB sentences (max = 103)
                        Face-to-face    Unseen          Face-to-face    Unseen
Face-to-face first      13.93 (2.43)    20.21 (3.09)    77.50 (10.36)   86.29 (5.48)
Unseen first            15.86 (2.51)    17.64 (3.56)    88.29 (3.47)    81.86 (3.84)

3.1 Single words

Single word transcription accuracy was analysed using a 2x2 mixed ANOVA with condition (unseen vs. face-to-face recording) as a within-subjects factor and group (F2F vs. UN) as a between-subjects factor. There was no effect of group (F(1, 26) = .124, p = .728, ηp² = .005). There was a main effect of condition (F(1, 26) = 41.0, p < .001, ηp² = .612) that was qualified by an interaction between condition and group (F(1, 26) = 12.74, p = .001, ηp² = .329). The assessors who heard the face-to-face recording first showed a much greater advantage in the unseen condition than the group who heard the unseen condition first (see Figure 1). Post-hoc repeated t-tests with a Bonferroni-corrected alpha level of .008 showed that (i) accuracy for the F2F group was significantly higher in the unseen condition than in the face-to-face condition (p < .001, d = 2.25), (ii) accuracy for the F2F group in the unseen condition was significantly higher than the scores for the UN group in the face-to-face condition (p < .001, d = 1.25), and (iii) accuracy for the F2F group in the face-to-face condition was significantly lower than the scores for the UN group in the unseen condition (p = .007, d = .85).

Figure 1: Single words: mean accuracy for each group (unseen first and face-to-face first) in the two listening conditions (face-to-face and unseen). Annotations of 1 and 2 indicate which condition was heard first (1) and second (2) by each group.
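An analysis of this kind can be reproduced with standard tools. The following is a minimal sketch using the pingouin library, assuming the single-word accuracy scores are held in long format; the file name and column names are ours, not part of the study.

```python
import pandas as pd
import pingouin as pg

# Long-format data, one row per assessor x condition (names are ours):
# 'assessor' (1..28), 'group' ('F2F' or 'UN'),
# 'condition' ('face_to_face' or 'unseen'), 'accuracy' (0..40)
df = pd.read_csv("single_word_scores.csv")  # hypothetical file

# 2x2 mixed ANOVA: condition within subjects, group between subjects.
aov = pg.mixed_anova(data=df, dv="accuracy", within="condition",
                     subject="assessor", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])  # F ratios, p values, partial eta squared

# Bonferroni-corrected pairwise comparisons, analogous to the post-hoc
# t-tests reported above.
posthoc = pg.pairwise_tests(data=df, dv="accuracy", within="condition",
                            subject="assessor", between="group",
                            padjust="bonf", effsize="cohen")
print(posthoc)
```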
The percentage of accurately transcribed multisyllabic words across both conditions and groups (56.94%) was higher than the percentage of monosyllabic words (37.46%; see Table 2). A chi-square test (using raw counts) indicated that the difference in accuracy between the two word types was significant (χ² = 65.58, p < .001). This suggests that multisyllabic words were more intelligible.

Table 2. Percentage of correctly transcribed mono- and multisyllabic words

                    Unseen recording              Face-to-face recording
                    Heard first   Heard second    Heard first   Heard second
Monosyllabic %      32.60         38.32           36.19         42.85
Multisyllabic %     52.83         57.12           56.41         61.40

Further analysis of the assessors’ transcriptions revealed a number of different types of errors. The majority were vowel errors, e.g., sick → seek; in this example the two vowels have some similarities, both being unrounded close vowels. Most of the consonant errors were voicing errors, with the majority involving the replacement of voiceless consonants by their voiced counterparts, e.g., race → raise. However, there were also place errors, e.g., sum → sun, and manner errors, e.g., saver → sabre. Moreover, some words had clusters reduced, e.g., stick → tick, and others involved epenthesis, e.g., dense → tennis. See Figure 2 for the proportions of these different types of error.

Figure 2: Proportion of errors from participants’ single word transcriptions, by type (vowel, voicing, place, manner, cluster reduction and epenthesis errors).

3.2 BKB sentences

BKB sentence transcription accuracy was analysed using a 2x2 mixed ANOVA with condition (unseen vs. face-to-face recording) as a within-subjects factor and group (F2F vs. UN) as a between-subjects factor. There was no effect of group (F(1, 26) = 2.05, p = .164, ηp² = .073) and no effect of condition (F(1, 26) = 1.475, p = .236, ηp² = .054). There was a significant interaction between group and condition (F(1, 26) = 61.431, p < .001, ηp² = .703), with higher accuracy for whichever recording condition was heard second (see Table 1). Post-hoc Bonferroni-corrected t-tests revealed significant differences for three of the pairwise comparisons: (i) accuracy for the F2F group was significantly higher in the unseen condition than in the face-to-face condition (p < .001, d = 1.44), (ii) accuracy was significantly higher for the UN group in the face-to-face condition than the scores for the F2F group (p = .002, d = 1.23), and (iii) accuracy for the UN group was significantly higher in the face-to-face condition than in the unseen condition (p < .001, d = 1.62) (Figure 3).

Figure 3. BKB sentences: mean accuracy for each group (unseen first and face-to-face first) in the two listening conditions (face-to-face and unseen). Annotations of 1 and 2 indicate which condition was heard first (1) and second (2) by each group.
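The remaining inferential statistics follow the same pattern: the sentence scores go through the identical mixed-ANOVA call shown earlier, and the word-type comparison in section 3.1 is a chi-square on raw counts. A minimal SciPy sketch of the latter is below; the counts are placeholders chosen only to match the reported percentages, under our own assumption of 30 monosyllabic and 10 multisyllabic items pooled over 28 assessors and two conditions, since the paper reports percentages rather than raw counts.

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table: rows = word type, columns = correct / incorrect,
# pooled across assessors and conditions. PLACEHOLDER counts: chosen to
# reproduce the reported 37.46% (mono) and 56.94% (multi) accuracy under
# an assumed 30/10 word split; not the study's actual data.
observed = [[630, 1050],   # monosyllabic: correct, incorrect
            [319,  241]]   # multisyllabic: correct, incorrect

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")
```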
3.3 Assessor questionnaire

Analysis of the assessors’ questionnaires showed that 25 of the assessors found SN’s speech ‘difficult’ or ‘very difficult’ to understand across all the tasks, regardless of condition. No assessor reported that her speech was ‘easy’ or ‘very easy’ to understand. Reports of which specific characteristics affected the understanding of SN’s speech were more varied: there was a trend for articulation, accent and prosody to impact on the ease of understanding her speech, mixed reports for rate, and a low impact of pitch (see Figure 4). Overall, regardless of condition, the assessors in both groups rated SN’s speech as more difficult to understand in the single word task than in the sentence task.

Figure 4. Listeners’ assessments of the characteristics of SN’s output which had ‘made it more difficult to understand her’.

4. Discussion

This project set out to investigate the functional impact on communication of any adaptations made by the speaker, SN, when she could not see her conversation partner. The results suggest that SN was sensitive to the needs of the listener and was able to modify her pronunciation of the words and sentences she read aloud. In this study, using a design similar to Anderson et al.’s (1997), the assessors were not able to use visual cues to compensate for the distorted acoustic signal. Their transcriptions were made solely from audio-recordings of the speaker, although these recordings were made when the speaker either could or could not see the listener’s face.

The results suggest that SN was modifying her speech in response to the listener’s needs. Scores on the single word task were affected by the listening condition: performance was significantly better in the unseen condition, particularly if the recording of the face-to-face condition was heard first. Assessors who heard the unseen condition first actually scored less well on their second attempt, indicating that the differences in scores are not simply the result of greater exposure to SN’s speech. This suggests that hearing the less intelligible recording first allowed assessors to appreciate the modifications made by SN in the unseen condition, but not vice versa. Thus SN’s intelligibility, as judged by single word transcription, was better in the unseen condition.

The pattern of performance was different in the BKB sentence task. In this task, order of presentation was the important factor; participants improved on the second recording regardless of the condition. It is possible that SN was modifying her speech in the unseen condition for this task as well, but that other factors outweighed the benefits gained from these modifications, such as the listeners’ ability to use the contextual cues provided by the sentence (e.g., neighbouring words and syntactic structure) to help predict words for transcription. This result mirrors previous findings that transcription intelligibility scores are higher for sentences than for word lists, at least for individuals with mild dysarthria (Hustad, 2007). An alternative account is that SN was unable to allocate processing resources to intelligibility in the sentence task as she had to produce more complex utterances.

If SN is making modifications to her speech, then it would appear that these may be more beneficial in tasks where there is less predictability and the listeners are more dependent on the speech signal itself. Despite claims that word lists, unlike sentences, are produced with clear speech (Fowler, 1988), it would appear that SN only uses maximum articulatory effort when she believes the listener will have trouble perceiving her speech. The increased speaking effort caused by her communication difficulties may mean that SN uses her clear speech only when absolutely necessary.
It is possible that she consciously uses less effortful articulation in everyday conversation, knowing that prolonged attempts at producing words more clearly would lead to fatigue. Despite SN’s ability to produce ‘clearer speech’, it is very difficult to disentangle the primary cause of her speech output difficulties, though it may be that the modifications in the unseen condition arise from an ability to maximise articulatory effort, thereby compensating for problems caused by her dysarthria. Lam et al. (2012) found that their participants’ production of clear speech resulted in changes in vowel production, speech timing and vocal intensity. For SN, no difference in speech rate was found between the two conditions, nor was there any perceptual difference in intensity. It is possible that subtle vowel changes account for the higher intelligibility in the unseen condition.

Differences were found between the scores of the assessors, although no assessor scored the lowest mark across all the tests. Although all assessors stated prior to participating in the study that they had had minimal or no contact with people with acquired communication difficulties, these differences in scores may have been due to a number of factors relating to their experience, for example their familiarity with a variety of accents and with people with acquired communication difficulties, and how easy or difficult they found the task. This relates to the idea expressed previously that familiarity with a speaker may increase intelligibility scores (Beukelman & Yorkston, 1980). Analysis of the assessor questionnaire revealed that 24 of the 28 assessors reported that SN’s speech was difficult to understand, with one reporting that it was very difficult. Assessor A, who reported that SN’s speech was ‘very difficult’, scored very low on the single words task (10/40 and 20/40) and the BKB sentence task (59/103 and 78/103); the scores in brackets are for the face-to-face and unseen recordings respectively.

The focus of this study was on investigating the assessors’ ability to understand SN’s speech when she was talking to her conversational partner in different listening conditions. As the researchers were primarily concerned with whether SN was able to accomplish the communication goals of the task, phonetic analysis of the speech recordings was not conducted. However, it would be interesting to establish whether the higher intelligibility scores in the unseen condition were caused by SN articulating words more precisely, and what the nature of these articulatory changes was. This may enable clinicians to focus therapy on changes that have the biggest impact on intelligibility. In addition, future research should investigate how language and articulatory difficulties interact in less constrained tasks where the individual has to generate their own utterances. There was only one speaker in this study, and different effects may be found for other speakers with other types of communication difficulty. Moreover, this speaker had an unfamiliar accent, which may have been a confounding factor, although this was consistent across all conditions. In the future, it may be beneficial to use a speaker whose accent is familiar to all participants.
In this study, the speaker was not explicitly instructed to adapt her speech depending on the communicative context, and although it was interesting to establish whether this ability remained, it is possible that SN might have been able to make greater and more consistent adaptations if this had been done. Nor do we know whether SN would make similar changes if she simply imagined talking in a difficult listening condition. Future research may investigate these possibilities, and do so with a case series.

4.1 Clinical Implications

The findings of this study, that speech production can remain listener-focused after brain injury and that speakers modulate their speech according to their listeners’ needs, have important clinical implications. A speaker may produce clearer speech in situations where visual cues are not available, such as barrier-type tasks and communicating by telephone. The latter activity has particular advantages because it gives practice with an important daily life skill. The findings also provide some evidence that using a traditional telephone may be beneficial in rehabilitation, and in some cases may be more beneficial than face-to-face therapy. Recent research has focused on the effectiveness of tele-rehabilitation (e.g., Tindall, Huebner, Stemple, & Kleinert, 2008). However, a telephone may be more accessible, at least for older people with limited technology experience, than video conferencing tools such as Skype (Rosen, 2001). In this study, not seeing the listener encouraged SN to perform at a higher level and produce speech that was more intelligible, without any advice or instruction from the researcher. Additionally, the comments of the assessors and the higher scores for the BKB sentences suggest that sentences are more intelligible to listeners than single words. This adds to a growing body of evidence that focusing on the sentence level in therapy is more likely to benefit comprehensibility.

5. Conclusions

The process by which a speaker formulates and produces an utterance is complex and multi-faceted. Research shows that speech perception and production constraints interact to determine speech output. This study demonstrated that a speaker with AOS, dysarthria and anomic aphasia was sensitive to the needs of her listener and was able to adapt her speech according to the listening condition. She was able to increase the clarity of her speech when the listener was unable to see her face, thereby improving her intelligibility. These findings could be used to develop more effective rehabilitation for people with acquired communication difficulties.

Acknowledgements: The authors wish to thank all the participants who contributed to the study.

Declaration of interest: The authors report no conflicts of interest. The authors are responsible for the content and writing of the paper.

References

Anderson, A. H., Bard, E. G., Sotillo, C., Newlands, A., & Doherty-Sneddon, G. (1997). Limited visual control of the intelligibility of speech in face-to-face dialogue. Perception and Psychophysics, 59(4), 580-592.

Aylett, M., & Turk, A. (2006). Language redundancy predicts syllabic duration and the spectral characteristics of vocalic syllable nuclei. Journal of the Acoustical Society of America, 119(5), 3048-3058.
Bamford, J., & Wilson, I. (1979). Methodological considerations and practical aspects of the BKB sentence lists. In J. Bench & J. Bamford (Eds.), Speech-hearing tests and the spoken language of hearing-impaired children. London: Academic Press.

Bell, A. (1984). Language style as audience design. Language in Society, 13(2), 145-204.

Bench, J., Kowal, A., & Bamford, J. (1979). The BKB (Bamford-Kowal-Bench) sentence lists for partially-hearing children. British Journal of Audiology, 13(3), 108-112.

Beukelman, D. R., & Yorkston, K. M. (1980). Influence of passage familiarity on intelligibility estimates of dysarthric speech. Journal of Communication Disorders, 13(1), 33-41.

Boyle, E. A., Anderson, A. H., & Newlands, A. (1994). The effects of visibility on dialogue performance in a co-operative problem solving task. Language and Speech, 37(1), 1-20.

Bradlow, A. R., Kraus, N., & Hayes, E. (2003). Speaking clearly for children with learning disabilities: Sentence perception in noise. Journal of Speech, Language, and Hearing Research, 46(1), 80-97.

Bradlow, A. R., Torretta, G. M., & Pisoni, D. B. (1996). Intelligibility of normal speech I: Global and fine-grained acoustic-phonetic talker characteristics. Speech Communication, 20(3-4), 255-272.

Cameron-Faulkner, T., Lieven, E., & Tomasello, M. (2003). A construction based analysis of child directed speech. Cognitive Science, 27(6), 843-873.

Dabul, B. L. (2000). Apraxia Battery for Adults – Second Edition. Tigard, OR: CC Publications.

Dromey, C. (2000). Articulatory kinematics in patients with Parkinson disease using different speech treatment approaches. Journal of Medical Speech-Language Pathology, 8, 155-161.

Duffy, J. R. (1995). Motor speech disorders: Substrates, differential diagnosis, and management. St. Louis: Mosby-Year Book.

Dunton, J., Bruce, C., & Newton, C. (2011). Investigating the impact of unfamiliar speaker accent on auditory comprehension in adults with aphasia. International Journal of Language & Communication Disorders, 46, 63-73.

Fowler, C. A. (1988). Differential shortening of repeated context words produced in various communicative contexts. Language and Speech, 31(4), 307-319.

Garcia, J. M., & Cannito, M. P. (1996). Influence of verbal and nonverbal contexts on the sentence intelligibility of a dysarthric speaker. Journal of Speech and Hearing Research, 39(4), 750-760.

Garcia, J. M., & Dagenais, P. A. (1998). Dysarthric sentence intelligibility: Contribution of iconic gestures and message predictiveness. Journal of Speech, Language and Hearing Research, 41(6), 1282-1293.

Goodwin, C., & Heritage, J. (1990). Conversation analysis. Annual Review of Anthropology, 19(1), 283-307.

Hustad, K. C. (2007). Effects of speech stimuli and dysarthria severity on intelligibility scores and listener confidence ratings for speakers with cerebral palsy. Folia Phoniatrica et Logopaedica, 59(6), 306-317.

Keintz, C. K., Bunton, K., & Hoit, J. D. (2007). Influence of visual information on the intelligibility of dysarthric speech. American Journal of Speech-Language Pathology, 16, 222-234.

Kent, R. D., Kent, J. F., Weismer, G., & Duffy, J. (2000). What dysarthrias can tell us about the neural control of speech. Journal of Phonetics, 28(3), 273-302.

Kent, R. D., Netsell, R., & Abbs, J. H. (1979). Acoustic characteristics of dysarthria associated with cerebellar disease. Journal of Speech and Hearing Research, 22(3), 627-648.
Kent, R. D., & Netsell, R. (1975). A case study of an ataxic dysarthric: Cineradiographic and spectrographic observations. Journal of Speech and Hearing Disorders, 40(1), 115-134.

Kertesz, A. (2006). The Western Aphasia Battery – Revised. London: The Psychological Corporation.

Lam, J., Tjaden, K., & Wilding, G. (2012). Acoustics of clear speech: Effect of instruction. Journal of Speech, Language, and Hearing Research, 55, 1807-1821.

Lindblom, B. (1990). Explaining phonetic variation: A sketch of the H&H theory. In W. Hardcastle & A. Marchal (Eds.), Speech production and speech modelling (pp. 403-439). Dordrecht: Kluwer.

Mackenzie, C., & Lowit, A. (2007). Behavioural intervention effects in dysarthria following stroke: Communication effectiveness, intelligibility and dysarthria impact. International Journal of Language and Communication Disorders, 42(2), 131-153.

Rimé, B. (1982). The elimination of visible behaviour from social interaction: Effects of verbal, non-verbal and interpersonal variables. European Journal of Social Psychology, 12(2), 113-129.

Rosen, E. (2001). Twenty minutes in the life of a tele-home healthcare nurse. Telemedicine Today (December), 12-13.

Smiljanic, R., & Bradlow, A. R. (2009). Speaking and hearing clearly: Talker and listener factors in speaking style changes. Language and Linguistics Compass, 3, 236-264.

Tindall, L. R., Huebner, R. A., Stemple, J. C., & Kleinert, H. L. (2008). Videophone-delivered voice therapy: A comparative analysis of outcomes to traditional delivery for adults with Parkinson’s disease. Telemedicine Journal and e-Health, 14(10), 1070-1077.

Uchanski, R. M., Choi, S. S., Braida, L. D., Reed, C. M., & Durlach, N. I. (1996). Speaking clearly for the hard of hearing IV: Further studies of the role of speaking rate. Journal of Speech and Hearing Research, 39(3), 494-509.

Ventry, I. M., & Weinstein, B. E. (1983). Identification of elderly people with hearing problems. American Speech-Language-Hearing Association, 25, 37-47.

Weismer, G., Martin, R., Kent, R. D., & Kent, J. F. (1992). Formant trajectory characteristics of males with amyotrophic lateral sclerosis. Journal of the Acoustical Society of America, 91, 1085-1098.

Yorkston, K. M., & Beukelman, D. R. (1978). A comparison of techniques for measuring intelligibility of dysarthric speech. Journal of Communication Disorders, 11(6), 499-512.