1 Introduction

This article presents a novel approach to deaf education using holographic augmented reality (AR) technology. The approach described in the paper is unique because it: (1) uses advanced technology to improve the mathematics skills of K-6 deaf students; (2) provides equal access and opportunities by overcoming known deficiencies in science, technology, engineering and math (STEM) education, as reflected in the under-representation of deaf people in fields requiring STEM skills; and (3) provides a general model for technology-supported teaching that can contribute to improving deaf education around the globe. The paper is organized as follows: Sect. 2 discusses challenges in Deaf Education, Sect. 3 presents a brief review of studies on the application of AR in educational settings, and Sect. 4 reports prior research on signing avatars. The system is described in Sect. 5, and the evaluation study and findings are reported in Sect. 6. Conclusions and future work are presented in Sect. 7.

2 Challenges of Deaf Education

Various estimates put the number of Deaf and Hard of Hearing people in the United States at 28 million [1]. An estimated half million Americans have severe to profound hearing loss; 8% of them are children (3–17 years) and 54% are adults 65 years of age or older [2]. Deaf individuals face barriers in school, the workplace, and social venues that deny them equal opportunities for success.

The Deaf are significantly underrepresented in the fields of science and engineering, and historically it has been difficult for them to gain entry into higher education that leads to STEM careers [3]. Several factors contribute to this disparity: (1) significant delays in deaf children’s literacy; (2) the difficulty (hearing) parents face in conveying basic mathematical concepts in sign language, and the lack of effective tools for learning mathematics in sign language; and (3) limited access to incidental learning, i.e., exposure to media in which mathematical concepts are practiced and reinforced. Deaf children lack access to many sources of information (e.g., radio, conversations around the dinner table), and their incidental learning suffers as a result. A strong need exists for solutions that allow deaf children to communicate and interact with other people in an environment free of prejudice, stigma, technological barriers, or other obstacles. The project described in this paper contributes to filling this need.

3 Augmented Reality in Education

Several studies have investigated the use of AR in learning environments [4]; however, the benefits of AR in education are still unclear and further research is needed. Wu et al. [5] suggest that research in this field should aim at discovering the unique characteristics and advantages of AR in education that differentiate this technology from others. One of the goals of the proposed work is to advance knowledge in this direction.

Recently, Diegmann et al. [6] conducted a review of the advantages of AR in educational settings based on 25 publications and classified the benefits into five groups. Benefits related to “State of Mind” include students’ increased motivation, attention, concentration and satisfaction; benefits related to “Teaching Concepts” include increased student-centered learning and improved collaborative learning. Benefits related to “Presentation” include increased information accessibility and increased interactivity. Benefits related to “Learning Type” are an improved learning curve and increased creativity; those related to “Content Understanding” include improved development of spatial abilities and improved memory. Bacca et al. [7] analyzed 32 studies on the use of AR in education published between 2003 and 2013 and identified applications, target groups, advantages, limitations, and features. Results from their analysis show that the majority of existing AR applications are in science education, followed by arts and humanities, and engineering, manufacturing and construction; the largest target group is college-level students. Major benefits include “learning gains” and “motivation”, followed by “increased capacity of innovation”, “positive attitudes”, “awareness”, “anticipation” and “authenticity”. Main limitations include “difficulties maintaining superimposed information” and “intrusive technology”.

The majority of the studies analyzed focused on marker-based AR, as marker-less AR has not yet been widely used in educational settings. The project described in this paper uses recent advances in marker-less AR and hence contributes knowledge to a new area that needs exploration.

4 Signing Avatars

Research findings support the value of signing avatars. The pioneering work in applying computerized animation to sign language was carried out by Vcom3D [8]. The Vcom3D SigningAvatar™ accessibility software has demonstrated the benefits of using 3D signing avatars to provide multimedia access and increase English literacy for the Deaf. In 2005, TERC collaborated with Vcom3D on the SigningAvatar® accessibility software and developed a Signing Science Dictionary [9]. The Purdue University Animated Sign Language Research Group focuses on the development and evaluation of innovative 3D animated interactive tools, e.g., Mathsigner, SMILE [10] and the ASL system [11], aimed at improving deaf children’s math skills. In the U.S., English-to-sign-language animation translation systems include those by Zhao et al. [12], continued by Huenerfauth [13], and by Grieve-Smith [14]. In China, researchers have created the Kinect Sign Language Translator, a system that understands signs and converts them to spoken and written language [15].

5 Description of the AR System

The project extends the functionality of the previously developed ASL system (a detailed description of the ASL system can be found in [11]; a demo of the system can be accessed at http://hpcg.purdue.edu/idealab/asl/about.htm). The new AR system takes as input speech, a rigged 3D character, and our current database of animated signs. The system recognizes speech and translates it into English text. The English sentences are then interpreted to identify the signs, prosodic markers and prosodic modifiers needed to animate the signing character. The output is accurate, life-like Signed English (SE) animation data for the avatar. The signing avatar is displayed as a 3D hologram (Fig. 1-right) and viewed through the AR glasses (Fig. 1-left). The original ASL system was developed in Unity 3D, and assets such as avatar meshes, rigs, and animations were reused. Translation of spoken English was implemented using the Microsoft Speech SDK. A custom Speech Recognition Grammar Specification (SRGS) file for K-6th grade level math preparation was developed to test the holographic AR system.
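
To make the translation stage concrete, the following is a minimal, illustrative sketch (in Python, not the actual Unity implementation) of how recognized English text can be mapped to an ordered list of sign animation clips drawn from a database of animated signs. All names here (SIGN_DATABASE, AnimationRequest, the clip identifiers) are hypothetical and do not reflect the real system's data structures.

```python
from dataclasses import dataclass

# Hypothetical excerpt of the animated-sign database: English token -> animation clip id.
SIGN_DATABASE = {
    "add": "clip_add_01",
    "three": "clip_num_3",
    "five": "clip_num_5",
    "equal": "clip_equal_01",
}

@dataclass
class AnimationRequest:
    gloss: str       # sign the avatar should perform
    clip_id: str     # animation clip that drives the rigged character
    emphasis: float  # toy prosodic modifier (e.g., slower, more pronounced signing)

def text_to_animation(sentence: str) -> list[AnimationRequest]:
    """Map a recognized English sentence to an ordered list of sign animations."""
    is_question = sentence.strip().endswith("?")
    requests = []
    for token in sentence.lower().replace("?", "").split():
        clip_id = SIGN_DATABASE.get(token)
        if clip_id is None:
            continue  # a real system could fall back to fingerspelling; omitted here
        # Toy prosody rule: emphasize every sign when the sentence is a question.
        requests.append(AnimationRequest(token, clip_id, 1.2 if is_question else 1.0))
    return requests

if __name__ == "__main__":
    for request in text_to_animation("Three add five equal?"):
        print(request)
```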

Fig. 1. AR glasses (left); holographic signing avatar (right)

Please note that the system does not translate from speech to animated American Sign Language (ASL). Several considerations currently prevent such an endeavor. First, while extensive research on the structure of English is available, the study of ASL structure has only recently begun, and there are significant barriers to simple syntax-to-syntax translation. Second, there is the general translation problem that the semantic domains of words do not overlap across languages. For example, the English word ‘run’ can translate into different ASL signs depending on the context: ‘run a race’, ‘run an office’, ‘run for office’. While lexical sign choice can be guided by our existing algorithms so that the correct sign appears, we cannot yet guarantee that the ASL syntax associated with that sign will come out correctly. Thus, Signed English is a viable first step in the development of the system.
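
As a purely illustrative example of context-guided lexical sign choice (a toy rule, not the authors' algorithm), the sketch below selects a sign clip for the word ‘run’ based on the words that follow it; the phrase patterns and clip names are hypothetical.

```python
# Illustrative sketch of choosing a sign clip for the English word 'run' from the
# words that follow it. Clip names are hypothetical placeholders.
def choose_run_sign(following_words: list[str]) -> str:
    """Pick a sign clip for 'run' based on the immediate right-hand context."""
    context = " ".join(following_words[:2]).lower()
    if "race" in context:
        return "clip_run_physical"   # 'run a race'
    if "for office" in context:
        return "clip_run_campaign"   # 'run for office'
    if "office" in context:
        return "clip_run_manage"     # 'run an office'
    return "clip_run_physical"       # default fallback

print(choose_run_sign(["a", "race"]))      # clip_run_physical
print(choose_run_sign(["for", "office"]))  # clip_run_campaign
print(choose_run_sign(["an", "office"]))   # clip_run_manage
```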

To date the AR system has been used in the context of math education. The rationale is twofold: (1) as mentioned in Sect. 2, there is a pressing need to improve deaf children’s competency in mathematics, and (2) the authors have extensive experience in research and development of K-6 math tools for deaf children [10, 11] and had previously created a database of animated signs for K-6 mathematics.

5.1 Technical Details

The AR system was developed using the Meta 1 developer kit and the Unity 3D game engine. The Meta 1 developer kit includes 3D see-through glasses, head and hand tracking, cameras, and audio input/output. The AR glasses allow users to see the real world with holographic images embedded in it. Deaf children sitting in a classroom see the holographic signing avatars seamlessly integrated with the physical environment (Fig. 2). Students can move around their immediate environment without losing the holograms because the digital eyeglasses offer 360-degree head tracking. So far, the holographic signing avatars have played the role of sign language interpreters (e.g., they translate into SE what the teacher or parent is saying). However, the AR system also allows students to interact with the holograms via simple gestures (for head and hand tracking, the Meta 1 uses a nine-axis inertial sensor: a magnetic compass, an accelerometer and a gyroscope). This possibility will be explored in future applications.

Fig. 2. Holographic 3D signing avatar seamlessly integrated into the classroom environment. The avatar, seen through the AR glasses by the child in the red shirt, is translating into SE what the teacher is saying

The Meta 1 SDK is powered by the Unity engine. Using Unity as the development platform offered two main advantages: (1) Unity has an optimized graphics pipeline that supports interactive rendering of complex animated 3D meshes and advanced lighting and textures, even on computers with limited graphics capabilities; (2) Unity interfaces seamlessly with major 3D animation tools and file formats, and allows for instantaneous import and update of asset files and animations.

6 Initial Evaluation

The objectives of the initial evaluation were to investigate (1) whether the 3D holographic signing avatars are usable by deaf children, and (2) the attitudes of deaf children, their parents and their teachers toward the avatars. In particular, the formative evaluation focused on usability, willingness to use and perceived usefulness, as evaluated by the children, teachers and parents, and on enjoyment measures from the children themselves, following procedures we have previously used to assess other systems [16]. We employed quantitative and qualitative methods of data collection and analysis to examine the system’s usability and functionality and to identify weaknesses. Evaluation instruments included:

  • Rating/ranking exercises and observation to measure “fun” and usability with the children. The evaluation of “fun” was based on the three dimensions of fun proposed by [17]: expectations (i.e., the component of fun that derives from the difference between the predicted and reported experience), engagement, and endurability (or ‘returnance’), i.e., the desire to repeat an activity that has been enjoyable. Ranking and rating questions were used primarily to measure endurability and expectations, while observation was used to assess engagement and usability. In general, ranking, rating, and observation have proven to be more reliable than children’s responses to questions about whether they liked something [18]. All ranking and rating questions were designed to be age appropriate: for instance, children rated elements of the system using a scale with 4 smiling/frowning faces (full smile, partial smile, partial frown, full frown), which the authors had used in a previous study with children [16].

  • Interviews with teachers and parents regarding their perception of benefits and challenges of using the AR system.

6.1 Subjects and Procedure

Subjects included 5 children (ages 7–10; all ASL users), 2 hearing parents and 2 K-6 math teachers. The teachers, children and parents were invited to one of the research labs at Purdue University, and one of the teachers was asked to deliver a 10-minute math lesson. In the first part of the experiment, all children were given a set of AR glasses so they could see the holographic virtual sign language interpreters translating the teacher’s lesson in real time. In the second part of the experiment, the parents and the teacher who was not lecturing were given the AR glasses, and the teacher was asked to repeat a segment of the math lecture (only 5 sets of AR glasses were available in the lab; therefore, children, parents and educators could not use the glasses at the same time). In the third part of the experiment, 2 of the children (wearing the AR glasses) sat in front of a computer displaying a digital math lesson, which included text and images. Their parents sat next to them and explained the lesson. The children viewed the holographic signing avatars, which translated what the parents were saying in real time. Upon completion of the three parts of the experiment, the children were asked to answer the rating questions, and the evaluator interviewed the parents and teachers.

6.2 Findings

Fun and Usability (Children).

The diagram in Fig. 3 summarizes the results for the children’s expectations by showing the differences (in mean values) between the predicted and reported experience. According to Read et al. [17], this difference is an effective indicator of how enjoyable the experience has been. As mentioned previously, children rated their responses using a scale with sad and happy faces. To calculate the mean values, the happiest face was assigned a value of 4 and the saddest face a value of 1. Results show that the children had high expectations, but the reported experience surpassed them. The system was perceived as more fun and easier to use than expected, and following/completing the lessons with the signing avatars was less challenging than expected. Table 1 summarizes the results of the ‘Again-Again Table’ used to measure ‘returnance’. The table reveals that the children enjoyed all activities and wanted to do them again.
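
The expectation measure reduces to simple arithmetic on the 1–4 face scale. The sketch below shows one way to express that computation; the sample ratings are placeholders for illustration only and are not the ratings collected in the study.

```python
from statistics import mean

# Face-scale values as described above: happiest face = 4, saddest face = 1.
FACE_TO_SCORE = {"full_frown": 1, "partial_frown": 2, "partial_smile": 3, "full_smile": 4}

def expectation_gap(predicted_faces: list[str], reported_faces: list[str]) -> float:
    """Mean reported rating minus mean predicted rating.

    A positive value indicates that the reported experience surpassed expectations.
    """
    predicted = mean(FACE_TO_SCORE[face] for face in predicted_faces)
    reported = mean(FACE_TO_SCORE[face] for face in reported_faces)
    return reported - predicted

# Placeholder ratings for one questionnaire item (illustration only, not study data).
gap = expectation_gap(
    predicted_faces=["partial_smile", "full_smile", "partial_smile"],
    reported_faces=["full_smile", "full_smile", "full_smile"],
)
print(f"expectation gap: {gap:+.2f}")  # -> expectation gap: +0.67
```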

Fig. 3. Diagram showing the differences between the predicted and reported experience

Table 1. Summary of results of the ‘Again-Again Table’

Observation showed that all children were engaged and attentive. Two children showed discomfort with the glasses and readjusted them on their noses several times. The glasses are designed for adults and are too large for children. We are currently investigating a solution to this problem, including coupling the glasses with a headband.

Perception of Benefits and Challenges of the AR System (Teachers and Parents).

All subjects agreed that the AR system is a useful tool that shows great potential for improving young deaf children’s access to learning content. Both teachers commented that the signing avatars were “accurate and fluid”. Two of the subjects expressed some concerns regarding the usability of the glasses, commenting: “two of the children seemed uncomfortable with the glasses, maybe because they are too heavy”. Both teachers said they would like to use the system in their classes, and both parents were interested in using the system at home when helping their children with homework; however, they were concerned about cost.

7 Discussion and Conclusion

The immediate benefit of this research is to improve access to K-6 math educational materials for deaf children and thereby improve deaf students’ learning of mathematical concepts. For the potential of the system to be fully realized, it must move from laboratory development and testing to field testing and use. Thus, future work will focus on ensuring the successful transition of the system from research-laboratory, expert-user conditions to an actual educational institution for the Deaf (i.e., the Indiana School for the Deaf), where the system is needed and where it is ready to make a potentially considerable impact.

Future work will also focus on extending the system to achieve two-way communication (speech to signing and signing to speech). Building on research by Chen et al. [15], we plan to use signing data captured with the Kinect motion-sensing camera and translate the gestures into words that can be spoken by the avatar. Initially, sign recognition will be limited to K-6th grade level math vocabulary.

In summary, the work reported in this paper addresses an important research question: can the application of holographic AR improve deaf learning in STEM disciplines and overall communication between hearing and non-hearing members of our society? It also aims to advance what is known about the potential of AR for disability education and for improving the lives of people with disabilities. While our initial objective is to target the educational audience, we believe that facilitating sign language translation is important beyond the education domain. For instance, the holographic 3D signers could be used in many other domains, such as entertainment and social networking, to remove current communication barriers.