One of the most common modes of communication for instruction in schools for deaf children in the United States is a bimodal form that combines signs and speech, referred to as “simultaneous communication,” or SIMCOM. In SIMCOM, spoken English and signed English are produced simultaneously in the classroom. The goal is to provide both auditory and visual access to the rule systems of English that are needed to support the development of literacy skills (Hyde and Power 1991). Although this mode has dominated in schools and programs for deaf students for the past 50 years, empirical evidence of its effectiveness toward increased academic achievement is not strong (Lederberg et al. 2013; Mayer 2007; Nicholas and Geers 2003).

Here we compare bimodal SIMCOM to unimodal sign-only communication in story recall for deaf children. Sign-only in this case is a version of American Sign Language (ASL), which, like other languages, has a somewhat flexible sentence structure and many different forms. The participants were not regularly exposed to true ASL structure (e.g., a common structure is Topic-Comment, such as GAMES LIKE PLAY) in their classrooms. Therefore, the sign-only communication in this study refers to a form of ASL with a more English-like syntax that still emphasizes non-manual grammatical markers such as facial expressions. That is, it uses the flexibility of ASL to adopt an S-V-O (subject-verb-object) sentence structure, which the participants understood better than the Topic-Comment structure. Story recall, like language production and comprehension in general, relies on the ability to maintain and actively integrate linguistic information in working memory (Cornish 1980; Dodwell and Bavin 2008) and to form mental models of the situations described in the stories (Kintsch 2004). SIMCOM requires students to attend to two linguistic forms at the same time. This could overload working memory, leaving listeners less capacity to keep word meanings in mind, understand the linguistic structure of each form, and integrate the two into a mental model or representation of the content of the story. Because unimodal communication requires listeners to comprehend, remember, and integrate only a single linguistic form, children should remember complex stories better under unimodal communication than under bimodal communication.

Theoretical Framework

The hypothesis that unimodal communication should lead to better story recall than bimodal communication has both theoretical and empirical support. Story recall is defined here as the process of generating a narrative from memory that represents a previously experienced verbal representation of an activity or event (Adams et al. 2002). Both story comprehension and story retelling rely on at least three cognitive systems, all placing demands on limited working memory capacity: verbal short-term memory to store verbatim information for a brief period (Baddeley 1998); long-term episodic and semantic memory to access word meaning (Tulving 1972); and language skills to process and integrate phonological, morphosyntactic, semantic, and pragmatic aspects of language (Poulsen et al. 1979).

Story recall performance has been studied in various populations, including children with attention-deficit/hyperactivity disorder (e.g., Lorch et al. 1999), learning disabilities (e.g., Copmann and Griffith 1994), intellectual disabilities such as Down syndrome (e.g., Tager-Flusberg and Sullivan 1995), and language impairment (e.g., Paul and Smith 1993). To the authors’ knowledge, only two published story recall studies have evaluated the effects of communication mode in deaf children (Stewart 1987; Tevenal and Villanueva 2009).

For example, to assess the effects of language ability in different modalities for deaf participants, Stewart (1987) examined deaf students' comprehension of stories presented in ASL and signed English in three modalities: sign-only, sign plus lip movements but no voice, and SIMCOM. The experiment used a repeated-measures design. Thirty-four middle- and high-school students (mean age = 16.9 years) with severe to profound hearing loss (PTA = 83–113 dB) participated in the study. Results revealed that deaf students reproduced more information when stories were presented in ASL than when they were presented in signed English without speech signals. Interestingly, scores under SIMCOM did not differ significantly from scores for the ASL presentations. For signed English, the addition of speech improved story comprehension, an advantage for SIMCOM. Stewart (1987) concluded that little benefit in comprehension would be gained through the additional cues derived from speechreading and audition in SIMCOM when knowledge of ASL was adequate. He attributed the absence of a stronger ASL effect to the fact that students had been exposed to ASL only informally, outside of school contexts.

The only other experiment that examined the effects of SIMCOM on the amount of information correctly received was conducted by Tevenal and Villanueva (2009), whose stated objective was to determine whether deaf, hard-of-hearing, and hearing participants received complete or equivalent messages from SIMCOM presentations. Eighty-nine undergraduate and graduate students from Gallaudet University participated in the study (46 deaf, 8 hard of hearing, and 35 hearing). Ages ranged from 18 to 57 years (mean age = 25 years). All participants watched nine video clips, each lasting less than 30 s. Each clip presented a hearing person who simultaneously spoke and signed specific information expressed in complete sentences. After each of the nine clips was shown, participants were presented with printed questions on a PowerPoint slide and asked to write the answers in an individual answer booklet. The hearing participants scored the highest number of correct answers, with a mean score of 84%. The mean score was 36.25% for the hard-of-hearing group and 29.33% for the deaf group. The researchers interpreted the results to indicate that not all participants had the same access to the information presented, but many confounding variables (e.g., no control over the written English skills and/or the sign-language skills of the participants) weakened the study. For example, it is well documented that the average student with severe to profound hearing loss leaves the educational system in the United States reading at a beginning fourth-grade level (Trezek et al. 2010). Furthermore, in most cases, the challenges to deaf students' reading comprehension are not specific to print, but are paralleled by similar weaknesses in understanding sign language; that is, individuals who are deaf or hard of hearing cannot fully comprehend sign-based presentations without bona fide sign-language skills (Marschark et al. 2009). The comparison between hearing and deaf or hard-of-hearing participants in the study also complicated the results.

In summary, neither study revealed a significant difference in favor of one mode or the other. A study on deaf college students' comprehension of lectures delivered in different modalities produced similar results. Marschark et al. (2008) conducted four experiments investigating classroom learning by deaf college students via direct and mediated instruction using interpreters. Not surprisingly, compared with their hearing classmates, deaf students' average performance was lower on prior content knowledge, scores on post-lecture assessments of content learning, and gain scores, as measured by written multiple-choice questions. Interestingly, however, SIMCOM and ASL appeared to be equally effective for deaf students' learning of the material, and their self-rated sign language skills were not significantly associated with performance. Results drawn from deaf college students should be applied to young deaf students with caution, though, because young deaf students are still developing bona fide sign-language skills as well as spoken and written English skills.

Meanwhile, some research has provided evidence that an integrated speech-gesture system conveys a message more completely than either the visual or auditory modality alone. For example, in a study conducted by Beattie and Shovelton (1999), ten participants watched video clips of people describing a cartoon using speech plus gestures, speech alone, or gestures alone. When asked questions about objects and actions in the cartoon, participants were more accurate under the speech-plus-gestures condition than under the speech-only condition. Although this study was limited by its small sample, similar studies on narrative retellings (Goldin-Meadow and Sandhofer 1999; McNeill et al. 1994) confirmed that even when speech and gesture conveyed conflicting information, listeners generally were able to integrate features of the visual and auditory signals to create a unified account of the speaker's message. Collectively, research on the influence of bimodal signals on cognitive tasks has produced mixed results. Some research shows that gestures and speech comprise a cohesive system, whereas other research finds a less synergistic relationship between the two modes.

Communication Mode for Deaf Children

Deafness is a condition that interferes with the auditory processing of sound and therefore with the development of spoken language. During the past 10 years, the universal screening of hearing in newborns, along with the development of digital hearing aids and cochlear implants, has had a measurable impact on the progress of children with hearing loss (Busa et al. 2007; Harris 2015; Spencer and Marschark 2010). Current research notes considerable improvements for some deaf children in receptive and expressive spoken language following early identification, early intervention, and the use of new technologies in hearing amplification (Geers and Hayes 2011; Geers et al. 2003; Harris et al. 2013). Although some improvements have been observed in literacy (Geers and Hayes 2011; Paul et al. 2013; Trezek et al. 2010), the development of reading and writing skills nevertheless remains a major challenge for most deaf children because they still may not have full access to the spoken language upon which reading is based (Kyle and Harris 2010; Lederberg et al. 2013; Mayer 2007; Nicholas and Geers 2003; Wang et al. 2008), even with cochlear implants (Connor and Zwolan 2004) or hearing aids (Arehart et al. 2013; Banerjee 2011). As noted in the research, individual trajectories vary significantly (Arfé et al. 2014; Geers 2003; Geers and Hayes 2011; Harris and Moreno 2004; Kyle and Harris 2010) for both cochlear-implant and hearing-aid users.

For deaf children who do not benefit enough from amplification alone, an approach to communication involving sign language or sign-supported speech may be more appropriate. In this study, the term sign language is used to refer to the manual representation of language, relying on signed vocabulary and syntax to represent concepts. The sign language used by the deaf community in the United States, ASL, has not only its own vocabulary but also its own grammar, and it is completely distinct from spoken English. A separate system, sign-supported speech, involves voicing, as in spoken English, while simultaneously signing a form of manually coded English. The syntax and pragmatics of English are used, with some signs borrowed from ASL and others invented by educators of the deaf (Akamatsu and Stewart 1998). The term sign-supported speech is often used interchangeably with simultaneous communication (SIMCOM), the term used in this study. SIMCOM incorporates signs, but they are not embedded in the visuospatial grammar of a true signed language; instead, there is a one-to-one match with spoken English. In this way SIMCOM is considered to be more of a sign system than a language (Akamatsu and Stewart 1998).

SIMCOM attempts to represent the syntactic structure of spoken English in a manual form while also delivering the speech signal. The intention is that deaf children can learn the structure and phonology of English not only through amplified sound and the speechreading patterns of spoken English, but also through the manual patterns of signed English (Akamatsu and Stewart 1998; Akamatsu et al. 2002). Combining the modes was expected to have a synergistic effect (Akamatsu and Stewart 1998). In practice, though, the very nature of the two modes (spoken and manual) often causes users to alter their messages to accommodate one mode or the other, compromising the message as a whole (Luetke-Stahlman 1988; Wilcox 1989). SIMCOM users, usually hearing teachers, will either slow their speech in order to sign every word or eliminate important function words, morphemes, and inflections in an attempt to maintain signing speed and voice rate (Luetke-Stahlman 1988; Marmor and Petitto 1979; Maxwell and Doyle 1996). The quality of the signing is also altered, such that linguistic principles of both English and ASL are violated. Because it artificially combines two languages (contrary to a natural language), SIMCOM is criticized by some researchers as unnatural and stilted and is believed to lead to an impoverished or incomplete language system for many deaf children (Baker 1978; Johnson et al. 1989; Marmor and Petitto 1979). SIMCOM, they point out, is actually a hybrid of two languages, combining parts of spoken-language structure and parts of signed-language structure.

Furthermore, it has been suggested that SIMCOM is more than mere simultaneous production of a language in two modalities, and that it actually imposes unique and complex processing demands on its users (Baker 1978; Maxwell 1990). This notion invites several empirical questions. Can one actually speak and sign every aspect of a language at the same time without one mode having a dominant or negative effect on the other? What about the processing demands for the receiver? Is it possible for a receiver to successfully integrate simultaneously produced speech and sign-signal sequences without experiencing cognitive overload?

Code blending (Emmorey et al. 2008a; Petitto et al. 2001) is commonly used by deaf bimodal bilinguals. Grounded in research on children of deaf adults (Codas), code blending is the simultaneous production of sign and speech for a single proposition (Quadros et al. 2016). Bimodal bilinguals use code blending far more frequently than code switching, that is, using both modalities/languages in alternating succession. Emmorey et al. (2008a, 2008b) categorized code blending based on which language served as the Matrix language providing the syntactic frame and the majority of the morphemes. They identified English as the Matrix language for adult Codas, while recognizing that for other bimodal bilinguals ASL could serve as the Matrix language or the Matrix language could alternate between English and ASL. It should be highlighted that, in spite of their similarities, strictly speaking, code blending is not SIMCOM. Code blending is a natural bilingual phenomenon produced by highly proficient bilinguals, whereas many deaf children who use SIMCOM, such as the participants in the current study, are not proficient in either English or ASL.

Interestingly, Emmorey et al. (2008b) reported that adult bimodal bilinguals did not differ from monolinguals on an experimental task targeting executive function, but both were slower than unimodal bilinguals. The results were interpreted as a modality constraint on the bilingual advantage in cognitive control: although bilinguals often outperform monolinguals on nonverbal tasks that require resolving conflict from competing alternatives, this enhanced executive control is limited to unimodal bilinguals. The authors concluded that unimodal bilinguals constantly face more challenging production demands because their languages use the same articulation system; the resulting extensive practice with difficult selection and control processes might improve their response selection and attention control. The cross-modal nature of sign and speech, on the other hand, makes attentional selection more efficient for bimodal bilinguals, so they do not face the same processing demands and thus do not demonstrate the same enhanced performance on executive control tasks.

Although the issue may be due to limited language skills interfering with cognitive development, the working-memory capacity of profoundly deaf children is reported to be smaller than that of hearing children (Burkholder and Pisoni 2003; Pisoni and Cleary 2003; Pisoni and Geers 2000). Language comprehension requires working memory skills. Deaf children who use cochlear implants or digital hearing aids may have access to some sounds, but these sounds may still be distorted (Arfé et al. 2015; David and Hirshman 1998). When deaf children simultaneously “listen to” degraded sound signals while watching signed messages with complicated syntax, might this create a cognitive load that is too high for comprehension processing? This experiment explores that hypothesis, with a focus on participants' story recall as an index of their comprehension.

Several variables have been found to affect story recall in typically developing children, including previous knowledge of story schema, the presence or absence of causality within the story, constructive memory related to a child's prior knowledge (Greenhoot 2000), and language comprehension. Child characteristics have also been linked to story recall, including age (Gathercole and Baddeley 2014), reading ability (Gathercole and Baddeley 2014), home language (Hammer et al. 2012), and gender (Pauls et al. 2013). Most of that research has been conducted with typically developing children (Davidson and Hoe 1993; Hudson and Nelson 1983). Thus, the present study attempted to extend this research to children with hearing loss. To that end, we examined whether gender, age, pure-tone average (PTA), standardized reading scores, and home language were related to children's story recall scores. Additionally, given its nature as a language task, story recall is assumed to be strongly associated with level of hearing loss and use of hearing-assistive technology (Lederberg et al. 2013), so we also examined whether type of hearing-assistive technology was related to children's story recall scores.

Statement of the Problem

A few research studies have been conducted on the use of bimodal communication systems in the classrooms of deaf learners, particularly on teachers' attempts to deliver a coherent signed representation of English (e.g., Akamatsu et al. 2002; Marmor and Petitto 1979; Mayer and Lowenbraun 1990; Strong and Charlson 1987). Even fewer studies have empirically explored how deaf students respond to these different communication modes (e.g., Stewart 1987), particularly SIMCOM and sign-only, the focus of the current study. Research on the effects of communication mode on deaf children's learning has the potential to improve instructional practice and language policy for those students. Research on deaf students' ability to integrate simultaneously presented auditory and visual language also has the potential to increase our understanding of working memory and human information processing. Noting the limited and conflicting research in the area of SIMCOM, Hamilton (2011) has called for further research and reconsideration of the use of SIMCOM. This study responds to that call.

Purpose of the Study

This study used a story recall task and a within-subject design to compare deaf children's comprehension of stories presented in SIMCOM with their comprehension of stories presented in ASL. The purpose of the study was twofold. The first aim was to note any effects of age, gender, reading ability, home language, PTA, or use of hearing-assistive technology (hearing aids or cochlear implants) on deaf children's story recall performance. The second aim was to explore the effects of mode of communication on deaf children's story recall performance. The dependent variable was the score on the story recall task, used as a measure of working memory. The independent variable was the mode of presentation (SIMCOM versus sign-only). Covariates (age, gender, reading ability, home language, and use of hearing-assistive devices) were explored to note any effect on the dependent variable.

Research Questions

This study is guided by the following research questions:

  1. What are the factors that are related to story recall scores? Specifically, will gender, age, PTA, standardized reading scores, home language, or type of hearing-assistive technology correlate with story recall scores?

  2. Does communication mode affect story recall scores for deaf students? Specifically, is there a difference in story recall scores when the story is presented in SIMCOM versus sign-only?

Methods

Participants

Study participants were recruited using convenience sampling with prior approval from the Institutional Review Board at Teachers College, Columbia University. Participants included 36 children (19 females, 17 males) with severe to profound hearing loss who attended a state-funded school for the deaf (grades 5–8) in a large city in the northeastern United States. Children ranged in age from 11.3 to 14.8 years (M = 12.9 years, SD = 1.03). The school adheres to the total-communication philosophy wherein teachers are expected to sign and speak simultaneously (i.e., SIMCOM) at all times, but the students communicate with each other primarily in sign. Demographic information obtained from the school and from parent reports indicated that of the 36 participants, 17 (47.2%) were black (African American, Caribbean American, African), 12 (33.3%) were Hispanic, 5 (13.9%) were Asian, and 2 (5.6%) were white. Languages used at home included Spanish (8 [22.2%]), English (25 [69.4%]), Chinese (1 [2.8%]), Russian (1 [2.8%]), and ASL (1 [2.8%]). Since only one participant had ASL as the home language, the majority of the participants learned how to sign at school. When they used unimodal (i.e., sign-only) communication, their signing fell between pure ASL and the signs used in SIMCOM; this variety served as the basis for the signs used in the sign-only condition of the present study. For example, ASL signers typically use Time-Subject-Verb-Object or Time-Subject-Verb word order, in which a time frame is established before the rest of the sentence, such as WEEK-PAST I CALL MY UNCLE or MY UNCLE? WEEK-PAST I CALL. The participants in the current study, however, would most likely sign I CALL MY UNCLE WEEK-PAST, following the SIMCOM word order. Participants' unaided PTA in the better ear indicated that all students had severe to profound hearing loss (72–120 dB). Within the group, 17 (47.22%) of the participants used cochlear implants and 19 (52.78%) used hearing aids. None of the participants were reported to have additional handicapping conditions. The demographic characteristics of study participants are presented in Table 1.

Table 1 Demographic characteristics of study participants (N = 36)

Thirty-eight students were eligible for participation and were given consent letters. Thirty-six of these students returned consent letters signed by parents/guardians who agreed to provide or permit access to demographic information and to allow participation in the study. With parental/guardian consent in hand, we requested the participants' demographic information from the school (i.e., age, gender, ethnicity, degree of hearing loss, documented presence of additional disabilities, home language).

The Research Team

The research team consisted of a professor from the Deaf Education program at Teachers College, Columbia University, and a lecturer who had worked as a middle-school teacher at the recruiting site for 12 years. Five research assistants contributed to the study: four female, hearing graduate students from the same program, one of whom was a freelance ASL interpreter; and one deaf male graduate student, who was a fluent signer and a student-teacher at the school site, and therefore familiar with the students participating in this study.

Measures

The data used in this study included demographic data from school records, a story recall measure, and transcripts of participants' responses from the story recall task. For research question 1, demographic data that included participants' age, gender, PTA, home language, standardized reading scores, and type of hearing-assistive technology used (hearing aid or cochlear implant) were collected from school records. These data were used to analyze associations with the dependent measure, which was the score on the story recall task. In these records, deafness was measured by PTA; standardized reading scores were measured using the Stanford Achievement Test–Hearing Impaired–Verbal (SAT-HI) (1996); and age, gender, home language, and type of hearing-assistive technology were updated regularly through school contact with families.

For research question 2, students' performance on the Story Recall subtest of the Woodcock-Johnson III Tests of Achievement (WJ III ACH) (Woodcock et al. 2001) was used. Two sets of quantitative data (recall scores on stories presented in SIMCOM and recall scores on stories presented in sign-only) were collected as the primary data. The dependent measure was the story recall score in each mode. Points were awarded for specific elements that were recalled, and participants' scores were computed as the number of essential elements correctly recalled for each of the stories. The maximum score in each condition was 38.

Materials

Two sets of videotaped short stories were used for the memory task. The first set (stories 1–6) was taken directly from the WJ III ACH Story Recall subtest (Woodcock et al. 2001). These stories were presented in SIMCOM; that is, the signer used conceptually accurate signs along with the spoken English sentences. However, because of the linguistic differences between English and ASL, and to maintain the goal of conceptual accuracy, some words were spoken but not signed, such as “likes to catch butterflies” (the “to” is not signed) or “ride in the car” (in this case the sign for ride includes the action of getting in the car, so separate signs for in and the are not needed). Consultants who had classroom experience with the student participants assisted the team in developing a translation closest to the format in which the teachers at this school for the deaf signed in the classroom with these students. For example,

  • Sentence 1: The signer signed: J-U-L-I-E LIKE CATCH BUTTERFLIES THEN LET THEM GO. The signer said: “Julie likes to catch butterflies. Then she lets them go.”

  • Sentence 2: The signer signed: M-A-R-Y HAVE DOG HE LOVE RIDE CAR BUT HATES BATH. The signer said, “Mary has a dog. He loves to ride in the car, but he hates to take a bath.”

The second set (stories A–F) was an alternative form created by the research team to include different content but linguistic structures and syntactic elements similar to those in the first set. These stories were presented using sign-only. The two sets of stories representing the two experimental conditions began as simple propositions and became increasingly complex. In each condition, the first story consisted of two simple sentences, whereas the last consisted of four sentences, some including embedded clauses (see Table 2).

Table 2 Measurements of story recall

For consistency, one signer was videotaped presenting all 12 stories to ensure that all participants saw the material in exactly the same way. The signer was a certified interpreter and a child of deaf adults, who worked collaboratively with the research team to translate the frozen text of the twelve English stories into either SIMCOM (sign and speech in English word order) or sign-only (sign language with no voice component). For the SIMCOM component, the signer used conceptually accurate signs along with voicing in English word order to relay the stories. For the sign-only component, basic ASL syntax and features were used. For example,

  • Sentence A: The signer signed: BOY NAME S-T-E-V-E, HE ENJOY PLAY GAME. HE ALWAYS WIN (3×) (Steve likes to play games. He always wins).

  • Sentence B: The signer signed: BOY NAME B-O-B HE HAVE BOOK. THIS BOOK ABOUT WHAT? SNAKE IN JUNGLE. SNAKE EAT WHAT? LEAF ONLY, THAT’S-IT (Bob has a book. It is about a snake in the jungle, who eats only leaves).

Pilot

Two students from the school (one male and one female), who were considered by the teachers as representative of the students, participated in a pilot study to examine the overall design of the procedure, the usefulness of the scripted protocol, and the general comprehensibility of the 12 stories. Signing speed, idiosyncratic use of individual signs, and clarity of lip movements were factors that affected the message being delivered; accordingly, modifications were made in the production and delivery of the test items. For example, the initial sign used for HALLOWEEN was not familiar to the students in the pilot, so it was replaced with a more local variant.

Data-Collection Procedures

To explain the procedure before the individual assessments, all participants were gathered in one classroom where one of the researchers used a scripted protocol to describe what would take place. One practice item for each communication mode (SIMCOM and sign-only) was presented, and participants were given the opportunity to ask questions and clarify understanding of the process. Participants were then called individually into one of four classrooms where a TV monitor, a video camera, and two chairs (one for the student participant and one for the researcher) were arranged. To alleviate any anxiety the students might have about performing, especially in front of strangers, a school staff member was also in the room with each student. Four female researchers, all of whom were fluent signers, individually administered the experiment simultaneously in four separate classrooms following a scripted protocol. The protocol consisted of the following instructions: “You are going to see/hear a story. Then you are going to tell the story back to me. Watch/listen very carefully.” Immediately after the story was presented, the examiner asked the student to recall the story to her. At the end of each recall, the examiner asked, “Is that all?” “Is there anything more you can remember?”

Given the within-subjects design, counterbalancing was used to reduce the influence of order effects and practice effects. Participants were randomly assigned to watch either SIMCOM stories first or sign-only stories first. The stories were presented through video, one by one, and after each presentation, participants responded with their recalls. The researchers did not repeat any stories, but encouraged participants to offer whatever they could remember. The participants were instructed to respond in whatever communication mode they felt comfortable using. For each participant, the performance of the story recall task lasted approximately 15 to 20 min. The entire procedure was videotaped for analysis.
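As a rough sketch of this counterbalancing step, the order assignment could be generated as below. The participant IDs, the seed, and the even split are hypothetical illustrations; the text above reports only random assignment, not an exact half-and-half split.

    # Minimal sketch of counterbalanced order assignment (hypothetical
    # IDs, seed, and even split; the study reports only random assignment).
    import random

    random.seed(7)                                       # hypothetical seed
    participants = [f"P{i:02d}" for i in range(1, 37)]   # 36 participants

    orders = ["SIMCOM-first"] * 18 + ["sign-only-first"] * 18
    random.shuffle(orders)

    assignment = dict(zip(participants, orders))
    print(assignment["P01"])                             # e.g., 'sign-only-first'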

Transcription and Scoring for the Dependent Measure

After each participant's free recall was transcribed, it was scored using the scoring method provided by the Examiner's Manual (Mather and Schrank 2007). The two researchers within each pair scored the participants' responses individually, based on the transcripts. Participants' total scores were obtained by adding every correctly identified element in each modality. Segments of the stories were separated by slashes (/). Each segment contained content words (nouns, adjectives, adverbs, pronouns, prepositions with semantic load), which were scored. Some segments also contained noncontent words (conjunctions, articles, helping verbs, prepositions without semantic load), which were not scored. The participants' story recalls were compared with the semantic units from the original stories, and a score of 0/1 (not recalled/recalled) was assigned for each segment. Participants were given one point for each correctly identified element in their responses. Words in bold, as indicated by the Examiner's Manual, were considered essential elements and had to be present to receive credit. Other elements could be synonyms or paraphrased. Following the Examiner's Manual, variations of verbs (e.g., “like” for “likes,” “swim” for “swimming”) and minor omissions (e.g., “monkey” for “monkey's”) were permissible. The content words did not have to be recalled in the order in which they were presented. Although we followed these scoring criteria, we made one modification related to proper nouns, specifically names. In sign language, names are either fingerspelled or initialized. Because the original names used in the WJ III ACH may have been unfamiliar to the participants, we assigned 1 point for a complete name if it was fingerspelled or spoken correctly, or .5 if it was misspelled or incorrectly spoken. We also awarded .5 if the response referred to the subject as “boy” or “girl” but did not provide a name. Points were totaled to obtain a final story recall score. Interrater agreement was 96% for the first pair of transcribers/scorers and 94% for the second pair.
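These scoring rules can be condensed into a short sketch. The helper below is a hypothetical illustration of the 0/1 segment rule plus the proper-noun modification; it is not the actual WJ III ACH scoring code or Examiner's Manual materials.

    # Minimal sketch of the segment-scoring rules described above
    # (hypothetical helper, not the Examiner's Manual materials).
    def score_segment(essential_recalled: bool, is_name: bool = False,
                      name_exact: bool = False, name_partial: bool = False) -> float:
        """Return the points earned for one story segment.

        Non-name segments: 1 point if the essential (bold) elements were
        recalled; synonyms, paraphrases, and minor verb variations count.
        Name segments (study modification): 1 point for a correctly
        fingerspelled or spoken name, .5 for a misspelled name or a
        generic "boy"/"girl" reference, 0 otherwise.
        """
        if is_name:
            return 1.0 if name_exact else (0.5 if name_partial else 0.0)
        return 1.0 if essential_recalled else 0.0

    # Total story score = sum of segment points (maximum 38 per condition).
    segments = [score_segment(True),                                    # recalled
                score_segment(False),                                   # missed
                score_segment(False, is_name=True, name_partial=True)]  # "boy"
    print(sum(segments))  # -> 1.5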

The same four researchers who administered the tests were divided into two pairs to translate the participants' videotaped story recalls into English, capturing both the speech and the signing that the participants used (sign language without voice was translated into English first). Researchers did not translate responses from participants they had themselves tested: one researcher conducted the initial translation, a second double-checked the transcriptions against each participant's videotaped story recall, and a third joined the discussion if any ambiguity arose.

Results

IBM SPSS Statistics for Macintosh, Version 22.0, was used to analyze the data, and alpha for all tests of significance was set at the .05 level (two-tailed).

Preliminary Data Analysis

Before testing the study hypothesis, the data were examined for outliers and missing data. No outliers were found and the assumption of normality was not violated.

Primary Data Analysis

Research question 1: What factors are associated with children’s story recall?

Age

A Pearson correlation test was conducted to determine whether there was a relationship between participants' age and story recall scores. No significant correlation was found between age and SIMCOM scores, r(34) = .28, p = .10, or between age and sign-only scores, r(34) = .28, p = .10 (see Table 3).

Table 3 Correlations among demographic and study variables
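For readers reproducing these analyses outside SPSS, a correlation of this kind can be computed as in the sketch below; the data file and column names are hypothetical illustrations, not artifacts of the study.

    # Minimal sketch of the age/recall correlations using SciPy rather
    # than SPSS; "recall_scores.csv" and its columns are hypothetical.
    import pandas as pd
    from scipy.stats import pearsonr

    data = pd.read_csv("recall_scores.csv")    # one row per participant (N = 36)

    for condition in ["simcom", "sign_only"]:
        r, p = pearsonr(data["age"], data[condition])  # df = N - 2 = 34
        print(f"age vs. {condition}: r(34) = {r:.2f}, p = {p:.2f}")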

Gender

An independent samples t test was conducted to determine the effects of gender, if any, on story recall scores. The data indicated no significant difference in scores between male and female participants in either the SIMCOM condition, t(34) = −.52, p = .61 (males: M = 21.85, SD = 6.44; females: M = 20.68, SD = 7.01), or the sign-only condition, t(34) = −.68, p = .50 (males: M = 27.77, SD = 5.65; females: M = 26.42, SD = 6.15).
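This comparison can be sketched with SciPy's independent samples t test, again with hypothetical file and column names; the home-language and hearing-assistive-technology comparisons below follow the same pattern.

    # Minimal sketch of the gender comparison; "recall_scores.csv" and
    # its columns are hypothetical.
    import pandas as pd
    from scipy.stats import ttest_ind

    data = pd.read_csv("recall_scores.csv")
    males = data.loc[data["gender"] == "M", "simcom"]
    females = data.loc[data["gender"] == "F", "simcom"]

    t, p = ttest_ind(males, females)   # t(34) with 17 males and 19 females
    print(f"SIMCOM condition: t(34) = {t:.2f}, p = {p:.2f}")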

Home Language

An independent samples t test was conducted to test for differences in SIMCOM and sign-only story recall based on home language (spoken English versus other). The difference was not statistically significant for SIMCOM, t(34) = .27, p = .79, or sign-only, t(34) = .10, p = .92.

PTA

In order to note the effects of residual hearing on story recall scores, if any, a Pearson correlation test was performed. No significant correlation was found between PTA and SIMCOM scores, r(34) = .18, p = .29, or between PTA and sign-only scores, r(34) = .21, p = .23 (see Table 3).

Standardized Reading Scores

A Pearson correlation test was conducted to determine the relationship between SAT scores and story recall scores. A strong positive correlation was found between SAT and SIMCOM scores, r(34) = .59, p < .001, and between SAT and sign-only scores, r(34) = .63, p < .001. Participants with higher reading scores performed better in both conditions than did participants with lower reading scores. Based on this significant finding, SAT score was included as a covariate in the subsequent within-subject analyses to determine differences between presentation conditions (see Table 3).

Hearing-Assistive Technology

An independent samples t test was conducted to determine the effect of type of hearing-assistive technology, if any, on participants’ story recall scores. No significant difference was found between scores for cochlear-implant users versus hearing-aid users in either the SIMCOM or sign-only condition. Story recall scores for cochlear-implant users in SIMCOM were M = 23.17, SD = 6.21, and those for hearing-aid users in SIMCOM were M = 19.50, SD = 6.76, t(34) = 1.69, p = .10. Scores for cochlear-implant users in the sign-only condition were M = 28.56, SD = 5.15, and scores for hearing-aid users in the sign-only condition were M = 25.71, SD = 6.29, t(34) = 1.48, p = .15. Thus, the type of hearing-assistive technology did not significantly influence performance in either condition.

Research question 2: Are there differences in story recall as a function of presentation mode? A repeated-measures ANCOVA was used to determine whether there was a statistically significant mean difference between story recall scores under the sign-only condition and those under the SIMCOM condition, while controlling for SAT scores. Inclusion of SAT scores as a covariate also allowed us to control for the interaction between SAT and the within-subject difference. Results revealed a statistically significant within-subject effect, F(1, 34) = 8.36, p = .007 (see Table 4). Participants attained higher story recall scores in the sign-only condition, M = 27.05, SD = .77, than in the SIMCOM condition, M = 21.23, SD = .91. To test whether the magnitude of the mean difference was of practical significance, an effect size was calculated using partial eta-squared, yielding η² = .19, a “large” effect according to Cohen's (1988) guidelines.

Table 4 Results of analysis of covariance
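For a two-level within-subject factor with one covariate, this repeated-measures ANCOVA is statistically equivalent to regressing each participant's difference score on the mean-centered covariate: the intercept tests the condition effect and the slope tests the condition × SAT interaction. The sketch below illustrates that equivalent formulation; the data file and column names are hypothetical.

    # Minimal sketch of a formulation equivalent to the repeated-measures
    # ANCOVA above; "recall_scores.csv" and its columns are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    data = pd.read_csv("recall_scores.csv")

    d = data["sign_only"] - data["simcom"]    # within-subject difference
    sat_c = data["sat"] - data["sat"].mean()  # mean-centered covariate

    model = sm.OLS(d, sm.add_constant(sat_c)).fit()
    print(model.summary())                    # intercept = condition effect

    # Partial eta-squared for a 1-df effect: F / (F + df_error), where
    # F = t^2 for the intercept; with F(1, 34) = 8.36 this gives
    # 8.36 / (8.36 + 34) ≈ .20, matching the reported .19 after rounding.
    F = model.tvalues["const"] ** 2
    print(f"partial eta-squared = {F / (F + model.df_resid):.2f}")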

Discussion

Summary of the Results

Research Question 1

In this study, age, gender, home language, PTA, and type of hearing-assistive technology were not significantly related to performance in either condition. Only standardized reading scores correlated significantly with performance in both conditions. The fact that story recall scores were not significantly higher for older participants, for participants with more residual hearing, or for participants whose home language was English is perhaps related to the larger language-learning challenges faced by deaf children, reflected in the persistent plateau in linguistic development that has characterized the field of deaf education since its inception (Lederberg et al. 2013). It was also interesting that the correlation between SIMCOM and sign-only scores (.81) was higher than that between SIMCOM and standardized reading scores (.59) or between sign-only and standardized reading scores (.63), which might suggest that the participants were bilingual to a certain degree. Variables that this study could not control for include age at identification, age at first hearing-aid fitting or implantation, quantity and quality of early intervention, and early caregivers' acceptance of their child's deafness and ability to commit resources.

Research Question 2

The higher story recall scores in the sign-only condition may suggest that the working-memory system performs differently in different memory contexts and that, in the present study, the sign-only condition provided the more favorable context for a complete propositional memory for these participants. The results lend support to the idea that using two channels (sign plus speech) to deliver the same linguistic information can tax working memory, leading to decreased story recall. From a cognitive overload perspective, the addition of a mode of communication necessarily affects the distribution of attention, because the system must decide which mode to attend to or how to rapidly switch attention between modes. This finding reinforces the idea that although humans can process language and visual information at the same time, we do not process two simultaneous language stimuli as easily.

The results may also represent an application of Mayer's (2001) cognitive theory of multimedia learning. This model incorporates many principles from the earlier work of Baddeley (2000), Paivio (1986), and Sweller (1994) in its exploration of dual channels for incoming visual and auditory information, selective attention to one system using prior knowledge as a guide, and the application of cognitive resources to build schemas and make decisions from the stimuli. Under the simultaneous-communication condition, speech and sign, although attempting to relay the same information, actually specify different gestural and articulatory events, with signs manipulated to conform to the parameters of speech. Information from the two sensory channels cannot be integrated as quickly as it is when an experienced listener simultaneously sees the speaker's lips and hears speech, or when an experienced receiver of signs sees a visually coordinated message. Thus, little facilitation or enhancement is gained from the combination of visual and auditory input; if anything, substantial competition and even inhibition effects may result from the two divergent input signals.

The outcomes of the current experiment counter the results of Stewart (1987), who did not find a significant advantage for sign-only input over SIMCOM, while supporting the results of Tevenal and Villanueva (2009), who concluded that SIMCOM did not provide an equivalent message to all receivers in their study.

Implications

The current findings, if replicated, have significant educational implications. First, they support the intuition of many teachers and educational professionals who have suggested that deaf students struggle with SIMCOM. Although this study should not be taken as a call for schools and programs to adopt a sign-only policy, it does alert educators to the idea that one mode may support struggling language learners better than two modes. This study did not explore a voice-only mode, which is appropriate for many deaf children who are identified early, amplified early, and successful at developing listening and spoken language (Nicholas and Geers 2013). Although there is longstanding controversy over communication methods in deaf education, perhaps in the early stages of language development it is the separation of modes, rather than the exclusion of one over the other, that matters most in language and concept development. Considered by Mayer (2016) to be “a messy business,” communication methods in deaf education have never been without controversy: “It may be that we need to live with some ambiguity around these questions, recognizing that mandating policy for how individuals communicate and use language is almost never successful. Rather, it may be a case of doing ‘whatever works’ and what makes sense given the context in which communicators find themselves” (p. 41). We would also like to reiterate that this study was intended to measure participants' recall (i.e., working memory), not their language skills. The types of educational tasks to which story recall performance may generalize include delayed story recall, list learning, delayed list learning, list recognition, and so on. There are also tasks to which it generalizes less well; for example, there may be less generalizability to instruction in English syntax, where it may be desirable for the teacher to code switch between SIMCOM and ASL. Rather than being interpreted as showing an advantage of ASL over SIMCOM, this study demonstrated better recall with unimodal (i.e., sign-only) than with bimodal (i.e., SIMCOM) presentations. This is an area in which the field of deaf education may benefit from collaboration with the field of bilingual education.

The findings of this study, although limited to a sample from one residential deaf school, may also provide some insight into possible ways to improve educational designs for similar populations of deaf children. Keeping in mind how information processing occurs can help educators reduce cognitive load in specific learning situations, and helping learners manage load can result in more productive learning (Clark et al. 2006). The idea that in the SIMCOM condition participants may have had to split attention between two sources of information underscores the impact of instructional design on cognition, specifically on a learner's working memory. Eliminating the physical and temporal separation of incoming linguistic stimuli may result in better learning for these children.

Communication is a cornerstone of learning and a key feature of collaborative experiences. Successful classrooms depend on communication among all participants, and teachers who understand this aspect of pedagogy seek to build communicative experiences into the design of their curriculum. One of a teacher's roles is to continuously design learning environments, and classroom learning depends on students understanding the medium of teaching. The present study raises educators' awareness of working-memory issues in the classroom: when cognitive load is too high, one runs the risk of the student not being able to follow the presentation. In the future, it might be possible to refine predictions for classroom learning by combining cognitive-load theory with theories of cognitive development, which make specific predictions about how much capacity is present at a particular age in childhood (Halford et al. 2007). Hamilton (2011), in his study of memory skills in deaf learners, asks, “Is recall and comprehension of SIMCOM superior to sign-only communication in the classroom during presentation of information more complex than word lists?” (p. 417). This study may answer his question for the particular sample studied. His next question, “How can ASL (to reduce WM load) and SIMCOM (to provide an enhanced signal that is recalled better than sign-alone) be best used for communication and instruction?” remains open to future study.

Limitations

This study has limitations that could be addressed in future work. First, it was very difficult to create equivalent stories for the SIMCOM and sign-only conditions. A team of eight individuals met for over three hours to discuss and translate the frozen text of the twelve English stories into either SIMCOM or sign-only. Form B of the WJ-III Story Recall task was not used because the research team believed the researcher-designed sign-only stories were more equivalent to the Form A WJ-III Story Recall stories in SIMCOM than the Form B WJ-III Story Recall stories in sign-only would have been. That is, because of the linguistic differences between English and ASL, we used the researcher-designed sign-only stories instead of the ones from Form B of the WJ-III Story Recall. As such, the equivalency of the stories in SIMCOM versus the stories in sign-only is a major limitation of the study. In future research, we could add a second study in which a group of participants with characteristics similar to those in the current study views both sets of stories in the sign-only mode, with story recall after each story in a set. If there were no significant difference in recall between the two sets of stories, this would provide evidence that the difficulty levels of the two sets are equivalent, which would significantly strengthen the study. Furthermore, although participants were randomly assigned to watch either the SIMCOM stories first or the sign-only stories first, we did not counterbalance the stories used in the two conditions; that is, instead of creating two versions of each set of stories, we used one set for SIMCOM and the other set for sign-only. This leaves open the possibility that differences between the SIMCOM and sign-only conditions were due to differences in the stories rather than modality.

Second, the study does not resolve the issue of access to English for the purposes of literacy. Although this study found that participants were better able to understand sign language only, this should not be seen as a call for a sign-only policy in deaf education, but rather as a call to continue to explore the language-learning needs of deaf children and how best to support the development of language needed for academic success.

A third limitation is the use of PTA as a measure of hearing. Although PTA is useful for indicating the quietest levels at which a child can hear, it does not directly indicate the child’s access to speech. Another measure, the Speech Intelligibility Index (SII), is more highly correlated with the intelligibility of speech and is perhaps a better indicator of a child’s access to speech in conversation (American National Standards Institute 1997). An additional benefit of the SII is that it can take into account the effects of a child’s hearing aid or cochlear implant on conversational speech. SII is not yet widely used in school audiological evaluations and, thus, that information was not available for the participants in this study. Because scores did not differ greatly based on the amount of residual hearing, this indicator may not have been consequential.

A fourth limitation is that deaf children represent a low-incidence population; thus, conducting a strong group research design is challenging. Some aspects of the chosen research design limit interpretations. Although an experimental design with random assignment to comparison groups was used, the sample size was relatively small compared with that used in typical research. Given the diversity of deaf children, generalization from small sample sizes must be made with caution.

Lastly, this study may have benefitted from inclusion of a subjective measure of cognitive load or participant input regarding which mode was more comprehensible and why. Traditional audiological measures, such as pure-tone threshold testing and measures of speech recognition, provide valuable information on auditory function and processing abilities. For example, measures of speech recognition indicate how much an individual understands when speech is presented in noise at a conversational level. However, these measures do not indicate how much effort was exerted to achieve that level of understanding. It is logical that as listening conditions decline, understanding of speech becomes more difficult and listening effort increases (Mackersie and Cones 2011; Zekveld et al. 2011). Regardless of condition level, there will always be individual variation in both subjective and objective measures of listening effort (Picou et al. 2011). Asking for participants’ perceptions would have provided interesting data to explore related to the story recall scores.

Future Directions

In conclusion, we found that deaf children comprehend and remember stories better when instructed unimodally, using sign-only, than when instructed in the way deaf children in the US are commonly taught: bimodally, using SIMCOM. We predicted these results on the basis of many years of research on the role of working memory in remembering stories. Understanding stories entails keeping words in mind long enough to integrate them into propositions, accessing diverse linguistic features of the words and propositions, and creating mental models of the described situation (Kintsch 2004). Bimodal communication taxes students' working memory even further by requiring them to integrate two different linguistic forms. Stories are typical of the materials schoolchildren need to master; in fact, because material in social science and science is more difficult for many students than stories, it could be argued that the advantage of unimodal communication might be even stronger in those cases. These results should be replicated because they have strong implications for deaf education. A potential further study would explore the benefits of ASL-English bilingual approaches using ASL, written English, and spoken English (when appropriate) in story recall, for example, comparing the story recall performance of deaf children who 1) watch stories in ASL and then read the same stories in English, 2) watch stories in ASL only, or 3) read stories in written English only. It will be interesting to see whether there are benefits to having two types of language input, ASL and English (used separately), in story recall.