It is not uncommon for people to believe that individuals who are deaf can see better than people who can hear, a belief referred to as the sensory compensation hypothesis. This suggestion clearly is not true in the literal sense. Guy et al. (2003), for example, found that over 40 % of the 110 deaf and hard-of-hearing (DHH) children they examined had one or more vision-related abnormalities, a prevalence 2 to 3 times greater than in hearing children. But there are other ways in which deaf people might be considered “more visual” than hearing people. Lane et al. (2011), for example, argued that in the sociocultural sense, deaf people who use sign languages such as American Sign Language (ASL) or Sign Language of the Netherlands (NGT) are “visual people.” In support of their claim, Lane et al. suggested that “there is extensive research evidence showing that fluent ASL signers have heightened perception in the visual periphery, heightened abilities in spatial processing, and enhanced capacity for interpreting rapidly presented visual information” (p. 4).

In fact, recent findings across a variety of visual-spatial tasks have indicated that, as a group, DHH individuals often perform no better, and sometimes worse, than hearing peers, and their performance often is associated with different cognitive foundations and outcomes (Arnold and Mills 2001; Blatto-Vallee et al. 2007; Cattani et al. 2007; López-Crespo et al. 2012; Marschark et al. 2013; Marschark et al. 2015). ASL signers, for example, may have heightened sensitivity to visual stimuli in the periphery, but so do video gamers (Dye et al. 2009) and individuals who have implicitly learned to attend to such stimuli under experimental conditions (Chen et al. 2006). Moreover, recent research has demonstrated that deaf individuals’ visual-spatial abilities are far less consistent than previously thought and affected by a variety of factors of which sign language ability is just one (Marschark et al. 2015). In short, deaf people generally do not see better than hearing people. It thus might be safer to keep the notion of “visual people” as a socio-cultural descriptor, rather than having to list caveats to the definition depending on DHH individuals’ hearing thresholds, preferences for spoken language, sign language fluencies, and use of hearing aids and cochlear implants (CIs).

Another perspective on deaf individuals’ reliance on vision is the frequent suggestion that deaf students, and especially those who use sign language rather than spoken language, are likely to be visual learners (e.g., Dowaliby and Lang 1999; Hauser et al. 2008; Marschark and Hauser 2012). This tacitly-accepted assumption may lead teachers of DHH students to utilize specific visually-oriented methods and materials in the classroom more frequently than they would otherwise (e.g., Bauman and Murray 2010; Hauser et al. 2008, p. 299). The extent to which such strategies benefit DHH students, and in which contexts, remains unclear. Moreover, there does not appear to be any empirical evidence to indicate that deaf students—regardless of whether they use sign language or spoken language—are any more likely than hearing students to be visual learners or more likely to be visual rather than verbal learners (or vice versa in both cases).

Beyond the obvious fact that learners with less hearing are likely to depend more on vision, being a visual (or verbal) learner is a reference to a learning style.1 Theoretically, teaching and learning should be most effective when related methods and strategies match students’ learning styles. Lang, Stinson, Kavanagh, Liu, and Basile (1999, p. 17), for example, noted that “understanding the learning styles of [deaf college] students may assist educators in providing the most appropriate kinds of reinforcement and in devising strategies to teach their students more effectively.” Felder (n.d.) further pointed out that “A student’s learning style profile provides an indication of possible strengths and possible tendencies or habits that might lead to difficulty in academic settings.”

The various dimensions of a learning style typically are determined through the use of standardized assessments or self-reports. In the case of the visual-verbal dimension, individuals may report frequently using visual imagery or verbal mnemonics in learning and memory or prefer instruction via language (either through the air or via text) or via diagrams or pictures (static or animated). The general assumption underlying the notion of being a visual learner thus is that individuals who are visual learners will learn/retain more or learn faster with visual presentations of information, while those who are verbal learners will learn more or faster with verbal methods, what is referred to as an attribute-treatment interaction or ATI (Mayer and Massa 2003; Massa and Mayer 2006; Sternberg and Zhang 2001). In practice, however, attribute-treatment interactions are hard to find. Massa and Mayer (2006), for example, sought to determine whether (hearing) visual and verbal learners would learn better from multimedia materials in which Help screens used pictures or words. Students who reported being visual learners were more likely to use pictorial Help screens and those who reported themselves to be verbal learners were more likely to use verbal Help screens. However, there was no consistent relation between self-reported learning styles and actual learning. Massa and Mayer concluded that visual versus verbal cognitive abilities have to be distinguished from students’ learning preferences and cognitive styles (Litzinger et al. 2007). Pashler et al. (2008) similarly acknowledged the validity of the visualizer-verbalizer dimension and (hearing) learners’ preferences for visual versus verbal presentations of information. Their extensive review of the empirical literature on ATIs, however, yielded no evidence that the visual-verbal dimension is relevant to educational practice. Yet the belief persists in educational settings and with regard to deaf students, in particular.

Why Would We Expect Deaf Students to be Visual Learners?

In considering possible visual-spatial advantages accruing to deaf individuals by virtue of sensory compensation, Tharpe, Ashmead, and Rothpletz (2002, p. 403) argued that “superior abilities may develop in one or more sensory systems as a compensatory response to impairment in one of the others.” Consistent with that claim, Morere (2013, p. 40) suggested that “deaf signers may outperform hearing peers on some aspects of visual memory (Arnold and Mills 2001; Arnold and Murray 1998; Cattani et al. 2007; Flaherty 2003; Hamilton 2011; Wilson et al. 1997).” In fact, with the exception of the finding by Arnold and Murray (1998), which was not replicated by Arnold and Mills (2001), what those studies actually demonstrated was advantages in visual-spatial memory for deaf and hearing signers over hearing non-signers (see Marschark, Machmer, and Convertino 2016, for discussion). López-Crespo et al. (2012), however, argued that there is no evidence that deaf individuals have better visual memory than hearing individuals when hearing status and language modality (signed or spoken) are not confounded. Their results and those of Marschark, Sarchet, and Trani (2016) supported that conclusion.

Marschark and Leigh (2016) argued that findings of differences between DHH and hearing students’ performance across cognitive and academic domains rarely if ever indicate visual-spatial ability, hearing loss, or any other single variable as the causal factor. DHH and hearing individuals more often vary in their approaches to cognitive and academic tasks, influenced by factors such as their primary mode of communication (speech versus sign) and differing conceptual and strategic knowledge. Marschark and Leigh thus suggested that pedagogical, psychological, and cognitive research involving DHH learners needs to take into account or at least recognize the large variability in factors that, individually, have been shown to affect academic outcomes: hearing thresholds, secondary disabilities, language fluencies, and various aspects of cognitive ability. Searching for possible interactions of such variables with visual-spatial ability is of primary interest in the present context.

The numerous studies examining the visual-spatial abilities of DHH individuals across several domains have been reviewed by Emmorey (2002), Mayberry (2002), Hall and Bavelier (2010), and Marschark, Sarchet, and Trani (2016) and need not be reiterated here. Far less research has examined relations of such abilities with regard to academic performance, perhaps for the simple reason that deaf students’ being visual learners has been taken as obvious. The ability of deaf individuals to rapidly shift visual attention (Rettenbach et al. 1999), for example, seems likely to offer benefits in the classroom, although the limited evidence available thus far suggests that it does not (Marschark et al. 2005). Dye et al. (2008) went further, suggesting that deaf learners’ rapid shifting of visual attention together with greater sensitivity to peripheral visual stimuli can lead to greater distractibility in the visually noisy environment of the classroom.

Three studies have provided indirect evidence with regard to deaf students’ generally being visual learners as it relates to mathematics performance. In a study involving deaf and hearing students in middle school through postsecondary settings, Blatto-Vallee et al. (2007) found that hearing students were more likely than deaf students to use a visual-spatial strategy in mathematics problem solving. The hearing students were found to use visual-spatial mental representations of mathematics problems that were schematic, incorporating relations among elements that supported problem-solving. Deaf students were more likely to retain pictorial mental representations incorporating visual properties of a diagram but not reflecting conceptual relations or aspects of the problems. Use of schematic representations was associated both with higher scores on a pair of visual-spatial tasks and with greater success in solving mathematics problems. Hearing students, accordingly, performed better on the mathematics task than deaf students and also scored significantly higher than the deaf students on the visual-spatial tasks.

Consistent with the results of Blatto-Vallee et al. (2007), Marschark et al. (2013) found that hearing students scored higher than deaf peers on all seven of the visual-spatial tasks they employed.2 Beyond the deaf students’ not exhibiting better visual-spatial performance than their hearing peers, it was the interrelations among the various measures that were most interesting. First, only the Spatial Relations task was significantly correlated with mathematics performance for the hearing students, while it and three other visual-spatial tasks were significantly related to performance for the deaf students. Second, scores on the visual-spatial tasks were positively interrelated for the deaf students but unrelated or even negatively related to each other among the hearing students. Finally, while hearing students significantly outscored the deaf students on the Spatial Relations task, scores among the deaf students were positively correlated with their hearing thresholds. That is, students with greater hearing losses scored better than those with milder losses, but those with no hearing loss scored the best. Taken together, these results suggest that at some level, the visual-spatial tasks were tapping somewhat different cognitive abilities in the deaf and hearing students.

Marschark et al. (2015) conducted three experiments examining visual-spatial processing, visual-spatial working memory, and executive function (EF) among deaf students who varied in their spoken and sign language skills and use of CIs or hearing aids. The results were consistent with those of Blatto-Vallee et al. (2007) and Marschark et al. (2013) in indicating no generalized advantage for the deaf students in the visual-spatial domain, as the hearing students outperformed the deaf students on the visual-spatial tasks regardless of the deaf students’ sign language skills or use of CIs. Most interestingly perhaps, the deaf students’ visual-spatial performance was associated with the strength of their language skills in their preferred modality (signed or spoken) rather than only their sign language skills. Thus, visual-spatial performance was positively related to spoken language skills among those students who primarily used spoken language, and their performance was negatively related to their sign language skills. Further, performance of deaf and hearing individuals on the visual-spatial tasks was associated with performance on different EF and logical reasoning tasks, again suggesting that different cognitive processes may underlie visual-spatial processing in the two populations.

One conclusion to be drawn from the above findings is that visual-spatial tasks are more than just visual + spatial. Beyond simple visual-matching tasks, EF and other higher cognitive abilities are likely to be involved, abilities that also are linked to individuals’ language abilities. A variety of studies involving deaf learners, in particular, have found associations between EF and both signed and spoken language (e.g., Figueras et al. 2008; Remine et al. 2008). Hintermair (2013), for example, demonstrated links between EF and language in a study of behavioral problems among deaf 5- to 18-year-olds. Communicative competence was strongly related to EF, and higher functioning in both of those domains was associated with fewer behavior and social problems. Although more than 90 % of Hintermair’s participants used spoken language, CI use was not related to the presence or absence of social-behavioral problems. Kronenberger et al. (2014a), in contrast, did find a significant association between spoken language and EF among 7- to 27-year-old, long-term CI users.

Adding some nuance to the above findings, Remine et al. (2008) found spoken language ability significantly related to EF among deaf 12- to 16-year-olds with a verbal EF task, but not with a nonverbal EF task. Meristo and Hjelmquist (2009) explored possible associations among language, EF, and theory of mind. Their study included deaf children who were native signers educated in bilingual or “oral” settings and later sign learners educated in bilingual programs. The bilingual native signers scored higher on theory of mind tasks, but there were no differences among the groups in their EF scores.

The Present Study

Taken together, the above findings suggest that both language and EF are factors that might mediate learning styles and therefore should be considered with regard to the extent to which deaf students are more likely to be visual learners than verbal learners and/or more likely to be visual learners than their hearing peers. The present study was designed to do just that.

Given the convergence of the Marschark et al. (2015) findings with those of the Blatto-Vallee et al. (2007) and Marschark et al. (2013) studies, Marschark et al. (2015) concluded that assuming that deaf students are visual learners is not very helpful in the educational domain. They acknowledged that deaf students believe that they are visual learners (Lang et al. 1993) but suggested that the issue needed to be addressed directly through assessments of learning styles. This study therefore compared learning styles and preferences of deaf and hearing college students with particular emphasis on the visual-verbal dimension, using the Individual Differences Questionnaire (IDQ) of Paivio (1971; Paivio and Harshman 1983) and the Index of Learning Styles (ILS) of Felder and Soloman (2004; Felder and Spurlin 2005). In addition, because the effectiveness of visual and verbal strategies depends on knowing when and how to deploy them, possible relations of learning styles to executive function (EF) were examined using the Learning, Executive, and Attention Functioning (LEAF) scale (Kronenberger et al. 2014b).

EF is related to academic performance in reading and mathematics in hearing children and adolescents, and Kronenberger et al. (2014a) found that their long-term CI users scored significantly below hearing peers in aspects of EF relating to verbal working memory, inhibition, visual matching, and concentration. Marschark et al. (2015) found that associations between visual-spatial skills and EF were different for hearing students, deaf students who were CI users, and deaf students who were not CI users, suggesting that the cognitive abilities underlying visual-spatial processing might be different across the three groups. One focus of the present study therefore was to ascertain whether those groups might differ in relations between EF and learning styles more generally.

In summary, the primary goal of the present study was to examine the validity of the assumption that deaf students generally are visual learners. Are they really more likely to be visual learners than verbal learners? Are they more likely to be visual learners than hearing students? Do deaf and hearing students differ in perceptions of their visual and verbal strengths? In view of the fact that most “visual-spatial” tasks are not just visual and spatial, but involve higher-level problem solving skills, the study examined possible relations of learning styles and EF. Finally, in order to evaluate the assumption that sign language ability is associated with a visual learning style, language ability, language modality, and use of CIs also were considered.

Method

Participants

A total of 102 deaf students and 21 hearing students at Rochester Institute of Technology (RIT) volunteered for the study and were paid for their participation. Fifty of the deaf participants reported actively using CIs, having received them between the ages of 2 and 19 years (mean = 5.67, SD = 4.08). Unaided hearing thresholds available from institutional records indicated the CI users to have a mean pure tone average (averaged over both ears) of 109.04 dB (SD = 18.89) and the nonusers a mean of 92.13 dB (SD = 18.38). All but four of the deaf CI users (92 %) and all but one of those who did not use CIs (98 %) indicated that they knew sign language well enough to be able to discuss basic social and school topics (see below). Over 65 % of the CI users and over 90 % of the nonusers indicated that they were skilled enough to have a natural, signed conversation about social and school topics. Ten of the 21 hearing students indicated that they had varying levels of sign language skill.3

Materials

Learning Styles and Preferences

Participants’ learning styles, habits, and preferences, with an emphasis on visual and verbal styles, were assessed using two self-report measures. The Individual Differences Questionnaire (IDQ, Paivio 1971) consists of 86 true-false questions concerning orientations (i.e., preferences, habits, and abilities) in the visual and verbal domains (e.g., “I have a photographic memory”). Paivio and Harshman (1983) found that the two primary dimensions consistently emerging from the factor analyses in their large individual differences studies, Visual Imagery (23 items) and Verbal (29 items), corresponded well to theoretically-defined scales indicating generalized imagery and verbal habits and skills. The score for each scale is the sum of selected true and false items associated with each of those factors in Paivio and Harshman (1983).
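As a rough illustration of the scoring just described, the following sketch (in Python) counts responses that match a keyed true/false direction for each scale. The item numbers and keys shown are hypothetical placeholders for illustration only, not Paivio and Harshman’s (1983) actual item assignments.

    # Minimal sketch of IDQ-style scale scoring (hypothetical item keys).
    # A scale score is the number of responses matching the keyed
    # true/false direction for the items assigned to that factor.

    # Hypothetical keys: item number -> keyed response.
    IMAGERY_KEY = {3: True, 11: False, 27: True}   # the real Imagery scale has 23 items
    VERBAL_KEY = {5: True, 14: True, 40: False}    # the real Verbal scale has 29 items

    def score_scale(responses, key):
        """Count items answered in the keyed direction."""
        return sum(1 for item, keyed in key.items() if responses.get(item) == keyed)

    responses = {3: True, 5: False, 11: False, 14: True, 27: False, 40: False}
    print(score_scale(responses, IMAGERY_KEY))  # Imagery score for this respondent
    print(score_scale(responses, VERBAL_KEY))   # Verbal score for this respondent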

The Index of Learning Styles (ILS, Felder and Soloman 2004; Felder and Spurlin 2005) consists of 44 two-choice self-descriptions (e.g., “I find it easier, a) to learn facts, b) to learn concepts”). Eleven items tap each of four dimensions of learning style (from Felder and Soloman 2004):

  • Visual-Verbal: Visual learners remember best what they see—pictures, diagrams, flow charts, time lines, films, and demonstrations. Verbal learners get more out of words—written and spoken explanations.

  • Active-Reflective: Active learners tend to retain and understand information best by doing something active with it, discussing or applying it or explaining it to others. Reflective learners prefer to think about it quietly first.

  • Sensing-Intuitive: Sensing learners tend to like learning facts; intuitive learners often prefer discovering possibilities and relationships.

  • Sequential-Global: Sequential learners tend to gain understanding in linear steps, with each step following logically from the previous one. Global learners tend to learn in large jumps, absorbing material almost randomly without seeing connections, and then suddenly “getting it.”

In order to make comparisons across individuals, ILS scale scores for the present purposes ranged from 1 to 11 with the endpoints as indicated above. Thus, for example, a Visual-Verbal score less than 5.5 would indicate a visual learning style and a score greater than 5.5 would indicate a verbal learning style (see Felder and Soloman 2004, for individual scoring).
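For illustration only, the sketch below converts 11 two-choice responses into a score on the 1-to-11 range used here and applies the 5.5 cutpoint described above. The rescaling convention is an assumption made for this example and is not Felder and Soloman’s (2004) published scoring procedure.

    # Sketch of the 1-to-11 Visual-Verbal scoring described above (assumed
    # convention: count of verbal-keyed choices across the 11 items,
    # rescaled so that 1 = strongly visual and 11 = strongly verbal).

    def visual_verbal_score(choices):
        """choices: 11 entries, each 'a' (visual option) or 'b' (verbal option)."""
        assert len(choices) == 11
        n_verbal = sum(c == "b" for c in choices)
        return 1 + 10 * n_verbal / 11  # maps 0..11 verbal choices onto 1..11

    score = visual_verbal_score(["a"] * 8 + ["b"] * 3)
    style = "visual" if score < 5.5 else "verbal"
    print(round(score, 2), style)  # 3.73 visual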

Language

For the purposes of service provision, language and communication skills of deaf students entering RIT are evaluated through the Language and Communication Background Questionnaire (LCBQ). This self-report measure has replaced formal assessments because it is faster than conducting individual communication interviews, it can be administered online, and the results correlate highly with interview assessments. The full LCBQ has individuals rate their communication preferences with different individuals (deaf, hearing), receptive and expressive sign language and speech skills, use of different forms of signing (e.g., sign alone, speech and sign together), and so on. The research version of the LCBQ utilized here asked participants to rate their expressive and receptive skills in sign language and in spoken language (e.g., how well others understand their speech), all on 5-point Likert scales. Histories of using hearing aids and CIs and age of sign language acquisition also were queried.

Because of the emphasis in the literature described earlier on the link between sign language and visual-spatial skills, participants’ sign language skills were of particular interest. In addition to the LCBQ, therefore, the deaf participants rated their expressive sign language skills using materials from the Sign Language Proficiency Interview (SLPI). Widely used in the United States, the SLPI normally consists of a one-to-one signed conversation between an interviewer and interviewee (https://www.rit.edu/ntid/slpi/). For the present purposes, participants rated their sign language skills according to descriptions on the SLPI 6-point Likert scale from 0 (e.g., “I either do not know any sign, or I know very few basic signs and have to fingerspell most of my responses to basic questions signed to me”) to 5 (e.g., “I am able to have a very comfortable, in-depth conversation about social and school topics. I have a very large sign language vocabulary”). Receptive sign language skills (i.e., comprehension) of the deaf participants also were assessed using questions taken from Hudgins et al. (1947), usually referred to as the Harvard Auditory Test, and presented in sign language with no voice. Thirty questions were selected so as to have various sentence structures and require one-word answers (e.g., “How many colors are there in the American flag?”). The sentences were signed by a certified sign language interpreter and videorecorded in a professional television studio. After each sentence, participants were given as much time as they needed to write their answers on an answer sheet, but questions were seen one time only.

Executive Function (EF)

As indicated earlier, EF was evaluated using the LEAF (Kronenberger et al. 2014b; Kronenberger and Pisoni 2009), a self-report measure that has been used extensively in studies involving deaf children and young adults with CIs as well as their hearing peers (e.g., Kronenberger et al. 2014a; Marschark et al. 2015). The questionnaire includes 40 questions about individuals’ recent experiences and behaviors reflecting EF-related cognitive abilities. Each item is rated on a 4-point Likert scale from “never” to “very often.” The LEAF includes five items each on eight cognitive subscales: Comprehension and Conceptual Learning, Factual Memory, Attention, Processing Speed, Visual-Spatial Organization, Sustained Sequential Processing, Working Memory, and Novel Problem-Solving.

Procedure

All of the above instruments except the signed Harvard Auditory Test and the SLPI self-rating were administered to small groups on laptop computers in a laboratory setting. As a variety of previous studies with this population have indicated no performance differences with materials presented in sign language versus text (see Marschark and Leigh 2016; Marschark, Machmer, and Convertino 2016, for reviews), only the latter was used in the computer presentations here. At least one experimenter/sign language interpreter was present in each session. All of the experimenters were former sign language interpreters in RIT classrooms with extensive experience as educational researchers. In addition to instructions being provided online for each task, they were explained in sign language, spoken language, or both depending on student preferences. None of the tasks were timed. The signed sentences of the Harvard Auditory Test were viewed on a 47-in. LCD monitor, and responses were written on an answer sheet.

Results

IDQ

Overall, the three groups did not differ significantly on the IDQ in either their Imagery scores, F(2, 120) = 1.64, or their Verbal scores, F(2, 120) < 1.0 (see Table 1). Higher Imagery than Verbal scores were obtained from 54 % of the students who used CIs, 52 % of the deaf students who did not use CIs, and 52 % of the hearing students. This indicates that each of the three groups was relatively balanced in terms of individuals who were more visually or verbally oriented. Considering the balance of visual and verbal habits and styles within individuals, as can be seen in Table 1, the CI users were relatively more “visual” than they were “verbal,” t(49) = 2.40, p < .05, while the deaf nonusers and the hearing participants were balanced in the two domains, t(51) = 0.12, p = .90, and t(20) = 0.54, p = .60.
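For readers wishing to reproduce this kind of within-group comparison, a minimal sketch follows. The scores are invented, and the use of scipy’s paired t test is an assumption about the form of the analysis, not a description of the authors’ software.

    # Sketch of the within-group comparison described above: a paired t test
    # of each participant's IDQ Imagery score against his or her Verbal score.
    import numpy as np
    from scipy.stats import ttest_rel

    imagery = np.array([15, 18, 12, 20, 16])  # one Imagery score per participant (invented)
    verbal = np.array([14, 15, 13, 17, 15])   # the same participants' Verbal scores (invented)

    t_stat, p_value = ttest_rel(imagery, verbal)
    print(f"t({len(imagery) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")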

Table 1 Means and standard deviations (SD) for measures of learning style, language, and executive function

Because the various language measures were expected to be interrelated, and to determine the extent to which visual imagery and verbal abilities might be associated more with one language modality (i.e., sign) than the other, two multiple regression analyses were conducted for each of the groups. Imagery and Verbal IDQ scores alternately were used as criterion variables, and the language scores (self-rated measures and Harvard Sentences) and audiologic measures (hearing thresholds and age of cochlear implantation) were used as predictor variables.4 The cumulative R² and significance level for the variables represented in the final models are reported below. Table 2 provides the results of the bivariate correlation analyses between the language measures and learning style assessments of the deaf students.
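The sketch below illustrates one such regression for a single group, assuming simultaneous entry of the predictors; the reporting of cumulative R² for final models suggests a stepwise procedure, which is not reproduced here. All file and column names are hypothetical.

    # Sketch of one regression described above: IDQ Imagery as the criterion,
    # language and audiologic measures as predictors, within one group.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("deaf_ci_users.csv")  # hypothetical per-group data file
    model = smf.ols(
        "idq_imagery ~ slpi_self + harvard_receptive + speech_intelligibility"
        " + speech_reception + hearing_threshold + age_at_implantation",
        data=df,
    ).fit()
    print(model.rsquared)  # R-squared for the model
    print(model.params)    # regression coefficients (unstandardized here)
    print(model.pvalues)   # significance levels for each predictor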

Table 2 Correlation coefficients between learning style scores and language measures

IDQ Imagery

Among the deaf students who were CI users, self-rated speech intelligibility was the only significant predictor of visual imagery orientations as reflected in IDQ Imagery scores: better speech was associated with lower Imagery scores, R² = .11, β = −.33, p < .05. Among the deaf students who did not use CIs, both greater sign language receptive ability (Harvard scores), R² = .08, β = −.59, p < .05, and self-rated expressive sign skill, R² = .14, β = −.48, p < .01, were negatively related to Imagery scores. For hearing students, in contrast, greater receptive sign language ability, R² = .22, β = .47, p < .05, was positively related to Imagery scores. In short, based on IDQ scores, the deaf students were no stronger than the hearing students in the visual domain and no more likely to see themselves as visually-oriented as opposed to verbally-oriented; nor was there any indication of a positive relation between sign language and visual imagery habits and skills among the deaf students.

IDQ Verbal

Among the deaf students with CIs, verbal orientations as indexed by IDQ Verbal scores were associated with lower self-rated expressive sign language skill, R² = .11, β = −1.28, p < .05, lower self-rated SLPI scores, R² = .44, β = −.80, p < .001, lower receptive sign language skill, R² = .70, β = −.42, p < .001, lower speech intelligibility, R² = .52, β = −.54, p < .001, and higher speech reception, R² = .67, β = .54, p < .001. That is, greater verbal orientations among CI users were negatively related to their sign language skills. There were no significant predictors of Verbal scores for either the deaf students who were not CI users or the hearing students.

ILS

In order to reduce the possibility of Type I error, a multivariate analysis of variance (MANOVA) was used to examine possible group differences on the four ILS scales. Results indicated no significant differences in ILS learning styles among the three groups (deaf students with CIs, deaf students without CIs, and hearing students), Wilks’ λ, F(8, 118) < 1.0. More specifically, a univariate analysis indicated no differences among the groups with regard to their visual/verbal learning styles, F(2, 120) = 2.00 (see Table 1). All three groups indicated themselves to be more oriented toward visual than verbal learning styles. As can be seen in Table 1, however, this orientation was stronger among hearing students than deaf students and, among the deaf students, somewhat stronger among CI users, who are more likely to use spoken language, than among deaf students who did not use CIs.
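A minimal sketch of this kind of group comparison, using statsmodels’ MANOVA with hypothetical column names, appears below; it illustrates the analysis described rather than reproducing the authors’ code.

    # Sketch of a MANOVA on the four ILS scales with group as the factor.
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    df = pd.read_csv("ils_scores.csv")  # hypothetical file, one row per student
    # Expected columns: group ('CI', 'noCI', 'hearing') plus the four ILS scales.
    model = MANOVA.from_formula(
        "vis_verb + act_refl + sens_int + seq_glob ~ group", data=df
    )
    print(model.mv_test())  # reports Wilks' lambda (among other statistics) for group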

Multiple regressions using the four ILS learning styles alternately as criterion variables and the language and audiologic measures as predictor variables indicated only that for the deaf CI users, earlier ages of implantation were associated with more global learning styles, R² = .10, β = −.32, p < .05, and for the deaf non-users, better self-rated expressive sign language skill was associated with being more intuitive, R² = .09, β = .30, p < .05. Consistent with the IDQ, ILS scores indicated that deaf students were no more likely than hearing students to be visual learners (see Table 1), nor was sign language found to be associated with being a visual learner for any of the groups.

LEAF

Possible relations between learning styles/orientations and EF were examined via correlational analyses. Considering first the IDQ, there were no significant correlations between Imagery scores and any of the LEAF scales. As can be seen in Table 3, however, for all three groups of students, a greater verbal orientation as indicated by the IDQ was associated with better EF (note that higher LEAF scores indicate greater EF difficulties) across essentially all LEAF scales. The only exceptions were the nonsignificant relations of Verbal IDQ scores and Visual-Spatial Organization among the deaf students who did not use CIs and among the hearing students. For the CI users, a greater verbal orientation was associated with better EF in Comprehension, Processing Speed, and Working Memory domains (Remine et al. 2008).
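A sketch of this kind of correlational analysis appears below, computing Pearson correlations between IDQ Verbal scores and each LEAF subscale within one group. The variable names are hypothetical, and the choice of Pearson correlations is an assumption about the analysis.

    # Correlations between IDQ Verbal scores and the eight LEAF subscales.
    # Higher LEAF scores mean more reported EF difficulty, so negative r values
    # indicate that a stronger verbal orientation goes with better EF.
    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("group_scores.csv")  # hypothetical per-group data file
    leaf_scales = ["comprehension", "factual_memory", "attention", "processing_speed",
                   "visuospatial_org", "sequential_proc", "working_memory", "problem_solving"]

    for scale in leaf_scales:
        r, p = pearsonr(df["idq_verbal"], df[scale])
        print(f"IDQ Verbal vs {scale}: r = {r:.2f}, p = {p:.3f}")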

Table 3 Correlation coefficients between learning style scores and LEAF executive function scores

Similar analyses exploring possible relations between EF and the learning styles tapped by the ILS indicated that, among the deaf CI users, visual learning styles were associated with poorer Comprehension-related EF. As can be seen in Table 3, among the deaf students who were not CI users, relatively more concrete learning styles (active and sensing) were associated with better EF in the domains of Factual Memory, Attention, Processing Speed, Visual-Spatial Organization, and Problem Solving. None of the correlations was significant for the hearing group. In short, EF is strongly associated with verbal orientations/styles, a finding demonstrated previously by Kronenberger and colleagues for deaf children with CIs (Kronenberger et al. 2014b; Kronenberger and Pisoni 2009).

Discussion

Although it has been suggested that deaf students should be considered visual learners (e.g., Dowaliby and Lang 1999; Hauser et al. 2008; Marschark and Hauser 2012), this assumption has been called into question by findings indicating that hearing students generally perform as well or better than deaf students on a variety of visual-spatial tasks (e.g., Blatto-Vallee et al. 2007; Hauser et al. 2008; López-Crespo et al. 2012; Marschark et al. 2013, 2015; Van Dijk et al. 2013). López-Crespo et al. (2012) argued that there is no evidence for a visual memory advantage among deaf individuals, and several studies have indicated that deaf individuals’ apparent superiority in spatial processing frequently is confounded with sign language ability (see Marschark et al. 2015; Mayberry 2002). Such findings indicate that the heterogeneity of the deaf population makes broad generalizations about their cognitive (or other) abilities questionable, but they bear only indirectly on the issue of whether or not deaf students are visual learners. The fact that teachers of deaf students themselves might emphasize the importance of using visual materials in the classroom (Dowaliby and Lang 1999; Marschark and Hauser 2012) also has little bearing on the issue, at least the empirical one (Massa and Mayer 2006; Mayer and Massa 2003).

That deaf individuals depend more on vision than audition both in communication (e.g., speechreading, sign language) and in information processing in the world at large appears obvious. A greater reliance on vision than audition, however, does not make one a visual learner, at least in contrast to being a learner with a more verbal/linguistic orientation or learning style. Further, given diversity in hearing thresholds and visual acuity, the extent to which individual deaf learners, with or without CIs, will rely on visual or auditory information in different contexts also will vary widely, an issue worthy of empirical study in its own right.

In an effort to evaluate the general assumption that deaf students tend to be visual learners and more likely to adopt visual strategies in various tasks and activities, the present study employed two instruments designed to reflect individuals’ use of visual and verbal processes in learning and other domains. The IDQ (Paivio 1971; Paivio and Harshman 1983) has been validated as an assessment and research tool in the examination of individual differences in visual imagery and verbal processes in everyday activities including academically-related tasks such as reading and problem-solving. The ILS (Felder and Soloman 2004; Felder and Spurlin 2005) includes the visual-verbal dimension as one of four learning styles seen to be useful to both students and teachers in optimizing the way they approach to-be-learned material and a variety of academic tasks. The results yielded by both of these instruments in the present study clearly indicate that (1) there is no reason to conclude that deaf college students are more likely to be visual learners than their hearing peers, (2) deaf college students do not see themselves as stronger in their visual abilities or visual preferences than hearing peers, and (3) deaf college students who rely primarily on sign language are no more likely to be visual learners than deaf peers who rely primarily on spoken language. Nor do deaf students who have greater access to spoken language through the use of CIs demonstrate any disadvantage in the visual-spatial domain (Marschark et al. 2015) or in their likelihoods of being visual learners.

The inclusion of a diverse sample of deaf students in the present study has the advantage of offering a more representative picture of deaf learners compared to studies that have included only native signers, that is, the small minority of deaf individuals with deaf parents. It also offered an opportunity to examine learning styles in samples of deaf individuals with diverse abilities and preferences in sign language and spoken language. At the same time, the fact that the study included only college students might be seen as a limitation for two reasons. With regard to age, it may be that younger deaf children are in some sense more visual (and/or less verbal) than older ones relative to hearing peers (cf. Blatto-Vallee et al. 2007). With regard to academic status, deaf college students may have greater verbal skills and hence a greater likelihood of verbal learning styles than deaf individuals in the general population.

Studies that have examined visual-spatial skills among native signers have included both children and adults, but related studies examining relations among deaf children’s spoken language and sign language fluencies and visual-spatial functioning apparently have not. Research into possible age-related and language-related learning styles among younger deaf learners with and without CIs and coming from various educational backgrounds clearly would be worthwhile (Marschark and Leigh 2016). Similar studies also could involve hearing learners who vary widely in their sign language skills (e.g., Marschark et al. 2015) as well as deaf learners with no sign skills. Further, the deaf samples in the present study were considerably larger than the hearing sample because there is greater variability across cognitive, social, and academic measures among deaf than hearing learners (Marschark and Leigh 2016). Larger samples of hearing learners also would be needed if these variables were to be fully explicated.

Finally, a cautionary note: The finding that deaf college students generally are no more likely to be visual learners than their hearing classmates does not deny that some deaf students, like some hearing students, certainly are. Moreover, some deaf students will be visual learners in some domains but not others, and some domains will be more amenable to the use of visual materials than others (Paivio 1971). Marschark and Hauser (2012), for example, suggested that “Science teachers who sign also can offer [deaf] students the benefit of watching an ‘animation’ using signs, gestures, and space” (p. 46), but it is unclear to what extent such a demonstration is beneficial. In fact, Dowaliby and Lang (1999) found that adding sign language and/or animation to a science lesson did not improve learning for deaf college students. Rather, the participants benefitted most from the verbal strategy of pausing periodically to ask questions that required students to re-consider information they had just read. Together with the present findings, the Dowaliby and Lang results caution against the tacit assumption among many teachers of DHH students that visually-oriented instructional methods and materials in the classroom will offer optimal access to academic concepts and instruction. Assuming that all learners, all deaf learners, or even just those deaf learners who sign, are going to benefit from a specific instructional method or materials is likely to hurt more students than it helps. As Knoors (2016) suggested in critiquing a one-size-fits-all approach to the education of deaf students, it is more likely that “one size fits none.”

  1.

    The word verbal is often misunderstood as a synonym for spoken, but in fact it refers to the use of words in any language, signed or spoken (or written). A lack of audition does not preclude the use of verbal learning strategies via a visual language modality.

  2.

    Five of the Marschark et al. (2013) tasks were drawn from the Woodcock-Johnson III (Woodcock et al. 2001); the remaining two were the Corsi Blocks and an Embedded Figures task. Although only two tasks (Embedded Figures, Pair Cancellation) yielded significant differences in a MANOVA, hearing participants outscored deaf participants on all tasks.

  3.

    A study of this sort, ideally, would include deaf participants with no spoken language ability and others with no sign language ability as well as hearing participants with no sign language ability. Practically, the former is almost impossible to secure for research purposes, as almost all deaf people have some knowledge of sign language and most use their voices at one point or another. Theoretically, the inclusion of hearing participants who vary in their sign language ability is important if the goal is to unconfound hearing status and sign language ability.

  4.

    Self-rated receptive sign language abilities were strongly associated with assessed (Harvard) scores and self-rated expressive sign language abilities were strongly associated with SLPI self-ratings for both deaf CI users and non-users, all rs ≥ .78.