1 Introduction

This paper describes our cyclical refinement of a mobile prototype that supports teaching computer literacy skills to Deaf people, using South African Sign Language (SASL) as the medium of instruction. We write Deaf with a capital ‘D’ to identify Deaf people as a cultural group who communicate in a signed language, which has no written form, much as other groups use languages such as English. Deaf people have limited literacy in spoken and written languages [1]. Acquiring computer skills necessitates pre-existing knowledge of a written language. For Deaf people, therefore, learning involves mastering the written language while simultaneously learning computer skills and technical terminology.

Our previous SignSupport projects aimed to bridge communication between Deaf and hearing people. The projects focused on constrained contexts where a limited collection of interactions was incorporated into pre-recorded SASL videos. The interactions previously investigated were between a doctor and a Deaf patient [2] and between a pharmacist and Deaf patients [3], implemented on a mobile phone [4]. We are extending this to a different communication context.

In this paper, we explore the context of adult computer literacy training. We investigate how to support Deaf people learning computer literacy skills using the e-learner training system [5], an International Computer Driving License (ICDL www.icdl.org.za) approved curriculum, developed by Computers 4 Kids (www.computers4kids.co.za). At present, teaching Deaf learners involves the teacher interpreting the lesson content from the e-learner manual into SASL. Because of the visual nature of signed language, all Deaf learners must look at the teacher while this happens. The pace of the class is therefore dictated by the weaker learners: whenever something is unclear, the whole class must be interrupted, impeding the progress of faster learners. We also investigated whether mobile devices, given their ubiquity, were a viable means to support Deaf learners.

Deaf people’s text literacy is adequate for social purposes between Deaf people, who accept grammatical problems, but often not for technical discussion [6]. This creates a communication barrier that hinders Deaf people from acquiring new skills and limits their access to higher education and employment opportunities. Many are unemployed or employed in menial jobs, which affects the socio-economic level of the community as a whole [7]. We partnered with a grassroots NGO, the Deaf Community of Cape Town (DCCT), which is staffed primarily by Deaf people and serves the needs of the larger Deaf community.

We conducted a field study and two user studies, over three research cycles, with Deaf DCCT staff members as participants, to investigate how mobile phones could be used to support Deaf learners acquiring computer literacy skills. The field study sought to investigate the obstacles that Deaf learners encounter while acquiring computer skills and to establish the existing technology capacity of the Deaf community. From the results, we designed and implemented our intervention, addressing some of the issues identified in the field study, and evaluated the developed solution with our participants.

2 Related Work

We describe work related to SignSupport here, examining Deaf literacy practices and the work of others in the area of computer literacy projects with Deaf people.

2.1 Deaf Adult Literacy

Internationally and in South Africa, the development of literacy in the Deaf adult population has had its challenges. Internationally, the average reading age of Deaf adults is said to be at fourth-grade level [8]; in South Africa, the average reading age of Deaf adults who have attended schools for the Deaf is lower than the international average [9]. Apartheid further caused racial inequalities in educational development and provision, resulting in varying literacy levels among Deaf people across different racial groups [10].

In the Bilingual-Bicultural approach, Deaf learners are taught through a signed language to read and write the written form of a spoken language [11], there being no accepted written form of the signed language. Previous approaches to Deaf literacy, such as Oralism [12] and total communication [13], neglected the need for Deaf people to learn in their own language and produced little literacy development. Research has shown that Deaf learners taught in sign language perform better than those who are not [14]. Glaser and Lorenzo [1] provide an approach that aims to redress low literacy levels among Deaf adults in South Africa by using the Deaf learners’ existing knowledge of SASL alongside written English. It highlights the differences between the two languages in order to facilitate the development of second-language skills in written English. For Deaf learners, literacy means moving from a primary to a secondary communication form as well as moving from one language to another. We adopt the bilingual-bicultural approach to teaching computer literacy skills: by teaching the lessons in SASL, we use the Deaf learners’ existing knowledge (the known) to introduce computer literacy skills (the unknown).

2.2 Computer Literacy Projects

There are numerous projects that have sought to raise the educational level of Deaf and hard-of-hearing people. In order to meet users’ needs, in addition to providing guidelines based on technology, it is necessary to understand the users and how they work with their tools [15]. One approach provides additional educational input using multimedia-supported materials on the World Wide Web [16]. Similar user interfaces can be found in other projects such as BITEMA [17] and DELFE [18]. Results from these projects have shown that multimedia systems further increased the success of learning.

Project DISNET in Slovenia [16] focused on providing an alternative way of learning computer literacy using accessible, adapted e-learning materials. It used multimedia materials in a web-based virtual learning environment. The project aimed to increase computer literacy among Deaf and hard-of-hearing unemployed people using the ICDL e-learning material [16]. The system was designed for people who have access to computers and high-speed broadband Internet but lack basic computer or web browser experience.

The above projects focus on e-learning materials and environments that depend on the World Wide Web to distribute their multimedia materials. The commonality between our work and these projects is the use of multimedia learning materials.

2.3 Discussion

SignSupport emphasizes video quality and resolution [19]. The videos are stored locally on the phone. High data costs in South Africa, compared with neighbouring African countries [20], make it uneconomical for already marginalised Deaf people to access remotely stored data. Like project DISNET, we use multimedia ICDL learning materials to improve computer literacy education among Deaf people. Our work differs in that it is neither web-based nor reliant on broadband Internet connections; SignSupport is mobile-based and uses commercially available devices.

3 Methodology

SignSupport was based on over a decade of research and collaboration by an interdisciplinary team with a diverse range of expertise. All members were involved continuously throughout the project [7].

Deaf users played the steering role in the research. They dictated how they would use the system, and most of the user requirements were gathered from them; integrating their perspectives increased the chances of an accepted solution.

A Deaf education specialist was the link between the technical team and the Deaf community members, in addition to being the facilitator for the computer literacy course. The specialist assisted in the design, explained Deaf learning practices so that SignSupport would fit Deaf users’ expectations, and helped translate the course material into SASL.

Computer scientists were tasked with implementing the design of SignSupport and verifying that the SASL videos were displayed in the correct, logical order. They examined how end users engaged with SignSupport to uncover design flaws and any other interesting outcomes.

We undertook a community-based co-design process [7, 21] following an action research methodology. This approach required participation with the target groups, engaging them throughout the design, implementation and evaluation phases and referring back to them to show how their feedback was incorporated into SignSupport. During interactions with Deaf participants, the facilitator, who is acceptably fluent in SASL, facilitated the communication process, which aided us in understanding the usage context and in building positive relationships with the Deaf community.

We undertook three research cycles. In the first cycle we observed and participated in the computer literacy classes at DCCT, where some staff members were taking the classes, and conducted unstructured interviews with the facilitator in the form of informal conversations and anecdotal comments made during the class sessions. Data gathered with handwritten notes, video and photographs were used to build a model of the cognitive system [22] of the computer literacy classes, following a distributed cognition approach. The ideas generated were then used to synthesize our intervention. In the second cycle we implemented our solution and evaluated it. The feedback we received in the second cycle was used as input to the third cycle to refine our solution.

We collaborated with two other researchers to co-design an XML specification, generated by a content authoring tool, that was used to structure lesson content. The XML specification was an abstraction of the hierarchical structure of the e-learner manual. A mobile prototype was developed that used the XML specification to map the content of the e-learner manual and serially display it as SASL videos and images. The mobile interface design was inspired by the work of Motlhabi [4], such that the video frame covered at least 70% of the display and the navigation buttons and image filled the rest of the space.

We recorded SASL videos of two lessons chosen from the e-learner curriculum, using scripts that we created; the videos were stored on the mobile phone’s internal memory. The mobile prototype was then evaluated in a live class setting, and the results were taken into account for the next design in the third cycle.

4 Computer Literacy Classes

The computer literacy classes (e-learner classes henceforth) are taught using the International Computer Driving License (ICDL) approved curriculum, e-Learner [5], which has two versions, school and adult, of which the latter is taught at DCCT. The classes aim to equip Deaf learners with computer skills so that they can take assessments to obtain the e-Learner certificate, after which they progress to the full ICDL programme. These classes are taught by a facilitator and co-author, Meryl Glaser, who has a long involvement with DCCT in addition to collaborating with researchers from the University of Cape Town (UCT) and the University of the Western Cape (UWC) on the SignSupport project.

All the Deaf learners were DCCT staff members. Three were female and two were male, with an average age of 38.4 years. Prior to the beginning of the e-learner classes, three of the learners had received the EqualSkills certificate [23]. EqualSkills, also an ICDL programme, provides a flexible learning programme that introduces basic computer skills to people with no prior exposure.

4.1 Course and Lesson Structure

E-learner is a modular, progressive curriculum spread over seven units: IT Basics, Files and folders, Drawing, Word processing, Presentations, Spreadsheets, and Web and Email essentials. The units are similar to the modules in the ICDL programme but contain simplified content. The e-learner curriculum has two parts: a manual containing lesson instructions used by the facilitator, and software loaded onto the computers that the Deaf learners use to retrieve templates and lesson resources. The Deaf learners use computer applications to complete the templates, following signed instructions from the facilitator. The facilitator first teaches literacy skills in the written language to develop the learners’ technical vocabulary.

The units are composed of lessons in three categories: Orientation, Essential and Supplementary. Lessons in different units overlap, i.e. the same lesson appears in different units; this allows a learner to revise a lesson or skip it if they have done it before. Each lesson has the same structure:

  1. Integrated activity – A class discussion on the lesson content.

  2. Task description – A brief overview of the work the learners will perform.

  3. Task steps – The list of tasks that the learners perform to complete the lesson.

  4. Final output – A diagram showing what the learners are expected to produce after performing the task steps.

4.2 Classroom Setup

In the computer lab there are six computers in a U-shaped arrangement. There is a server at the front left of the classroom, with a flip-board on a stand and two white boards. The arrangement gives the learners a clear line of sight to the front of the classroom, where the facilitator stands and signs. The seating also allows the Deaf learners to see each other, which is crucial for class discussions and for seeing other learners’ contributions and questions.

Each computer, except for the server, is running a copy of Microsoft Windows 7. All computers have a copy of Microsoft Office 2007 and e-Learner Adult version 1.3.

4.3 Results

Through observation of and participation in the e-Learner classes, we uncovered various themes, discussed below.

Although the lessons in the e-Learner manual have the same structure, the facilitator adapted the teaching method and lesson content to make it relevant for the Deaf learners. Teaching generally takes up a whole lesson, and the Deaf learners only get to perform the tasks in the next class session, the following week.

Images played an important role in teaching. The facilitator frequently pointed at a projected image of the computer application being used in the lesson, indicating buttons, icons and lists to scroll through.

Teaching the Deaf learners is demanding and tiring for the facilitator. Only one copy of the e-Learner manual is used for the lessons, because the Deaf learners are text illiterate and unable to read the English text in it. The facilitator has to read and understand the instructions before signing them to the Deaf learners in SASL. In other instances, an assistant voices the instructions to the facilitator, who then signs them to the Deaf learners.

The facilitator has to gain the undivided visual attention of the learners before explaining a concept or providing instructions, due to the visual nature of sign language. This distinguishing factor between Deaf and hearing learners is the problem of divided attention: hearing learners can listen to instructions while looking at their computer monitors, but Deaf learners cannot watch the SASL signing and look at their computer screens at the same time. Eye contact has to be established before signing can begin.

Deaf learners use SASL, which has its own structure and vocabulary, as their principal language of communication. English users bring all the necessary vocabulary to the task of learning computer literacy skills; Deaf learners lack this vocabulary to rely on, and hence learn English vocabulary and ICT skills concurrently. For example, in one observed lesson the facilitator broke down the word “duplicate” into the phrase “make a copy”, after which the Deaf learners associated copy with its respective sign in SASL. English vocabulary in computer literacy classes has to be simplified using synonyms, definitions or descriptions.

We observed different individual work rates among the Deaf learners during our class participation, as with hearing learners. The difference is that Deaf learners have the additional burden of having to stop and look at the facilitator for instruction; all learners must be interrupted to see a signed instruction, disrupting the whole class and each learner’s work rate. The faster learners usually finished their tasks early and often waited for the slower learners to catch up. As a result, the pace of learning was dictated by the slower learners, because the facilitator was forced to teach at a slower pace to accommodate them. This puts pressure on the slower learners and makes the class boring, and at times frustrating, for the faster learners. The faster learners were the same three Deaf learners, identified previously, who had acquired EqualSkills certificates.

We also observed the Deaf learners using various mobile phones, ranging from feature phones to smartphones. One learner had two smartphones: an HTC running Android OS for work and a BlackBerry for personal use. Two other participants had Nokia feature phones with QWERTY keyboards. These devices are capable of playing video and running instant messaging applications such as WhatsApp. The Deaf learners do not have computers or laptops at home, and at work they use old computers, hence their limited experience.

4.4 Analysis and Design Implications

We use a distributed cognition approach [24, p. 91] to understand the e-learner class environment. Distributed cognition studies cognitive phenomena across individuals, artefacts, and internal and external representations in a cognitive system [22], which entails:

  • Interactions among people (communication pathways).

  • The artefacts they use.

  • The environment they work in.

We define our cognitive system as the e-learner class where the top-level goal is to teach computer skills to Deaf learners. In this cognitive system we describe the interactions in terms of how information is propagated through different media. Information is represented and re-represented as it moves across individuals and through an array of artefacts used (e.g. books, written word, sign language) during activities [24, p. 92].

Propagation of representational states describes how information is transformed across different media. Media here refers to external artefacts (paper notes, maps, drawings) or internal representations (human memory). Transformations can be socially mediated (passing a message verbally or in sign language), technologically mediated (pressing a key on a computer) or mentally mediated (reading the time on a clock) [24, p. 303]. Using these terms, we represent the computer literacy class as a cognitive system, showing the propagation of representational states for the teaching methods.

Fig. 1. The propagation of representational states for the teaching method used to deliver a single instruction to the Deaf learners. The boxes show the different representational states for different media and the arrows show the transformations.

Fig. 2. The propagation of representational states for a hearing literate person. The boxes show the different representational states for different media and the arrows show the transformations.

By representing the teaching method in a diagram (see Fig. 1), we see that the task of teaching Deaf learners involves a set of complex steps. Instructions are propagated through multiple representational states: verbally when interacting with the assistant, visually when interacting with the Deaf learners, and mentally in both cases. In comparison, the situation for hearing learners (Fig. 2) involves fewer representational states. Our proposed system attempts to bring the Deaf learners closer to how hearing literate people learn.

The design implication was to reduce the number of steps involved in delivering instructions to the Deaf learners. Our solution was to deliver the lesson instructions as SASL videos and images, removing approximately four representational states. These SASL videos were pre-recorded and contained the lesson instructions from the e-learner manual, thereby eliminating the need for the assistant and the facilitator to deliver them. In addition, the limited text literacy amongst the Deaf learners meant that SASL instructions were needed to allow them to learn in their preferred language.

Mobile phones provided an ideal way to deliver the lesson content, since most Deaf people already used a mobile phone to communicate with other Deaf and hearing people [3]. This solution made use of off-the-shelf mobile phones, as in the previous SignSupport solution. SignSupport could therefore be carried home by Deaf learners on their phones, letting them teach themselves wherever a computer was available. In addition, the socio-economic situation of the Deaf learners meant they could not afford the high data costs; this ruled out hosting lesson content remotely and streaming it to the mobile phones over data networks.

Another design consideration was to organise and structure the SASL videos to represent the logical flow of the lessons in the e-learner curriculum. This involved designing a data structure that reflected the course and lesson structure of the e-learner manual. The design is discussed in the following section.

5 Design and Implementation

In this section we discuss the technical details of the design of the data structure, the design of the content authoring tool and the user interface of the SignSupport mobile prototype.

5.1 Structuring Lesson Content

To make the SASL videos and images meaningful, they need to be organized in a logical manner that reflects the e-learner lesson structure (see Sect. 4.1). Our analysis of the e-learner classes revealed the numerous steps involved in delivering lesson content to Deaf learners. To model the structure of the e-learner curriculum, we chose the Extensible Markup Language (XML) [25] as our data format. XML provided the necessary flexibility to represent the curriculum’s hierarchical structure. To manage the lesson resources (SASL videos and images), we used Uniform Resource Locators (URLs) pointing to the location where each resource was stored.

We abstracted the e-learner hierarchical structure representing the course, unit and lesson with unique identifiers. Lessons were further classified by category: Orientation, Essential and Supplementary. The resulting XML structure is shown in Fig. 3.

Fig. 3. The XML structure of the course.
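To make the structure concrete, a minimal sketch of a course file in this format is shown below. The element and attribute names here are our own illustrative assumptions; the exact schema is the one depicted in Fig. 3.

```xml
<!-- Illustrative sketch only: tag and attribute names are assumed. -->
<course id="elearner-adult" name="e-Learner Adult">
  <unit id="unit-wp" name="Word processing">
    <lesson id="s3" name="Our Organisation" category="Supplementary">
      <lessonDescription video="videos/s3/description01.mp4"/>
      <taskDescription video="videos/s3/task.mp4"/>
      <taskStep video="videos/s3/step01.mp4" image="images/s3/step01.png"/>
      <taskStep video="videos/s3/step02.mp4" image="images/s3/step02.png"/>
    </lesson>
  </unit>
</course>
```

The URLs are relative paths pointing to where each video or image is stored.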

The e-learner curriculum changed infrequently, making it beneficial to store the resources locally on the device. This effectively makes the system independent of data networks for updating lesson content. To manage the lesson assets and the XML lesson files effectively, we decided to store them all in a single folder structure.

This XML data structure is parsed by the mobile prototype using built-in XML parsers (see Sect. 5.3).

5.2 Content Authoring Tool

We needed to design a content authoring tool, running on a computer, to structure the lesson content. It would allow domain specialists such as the facilitator to create content for their usage context without the need for a programmer. Mutemwa and Tucker identified the lack of such a tool as a bottleneck in their SignSupport designs [2], limiting their design to one scenario within the communication context.

The design was modelled on the structure of the e-learner manual (see Sect. 4.1). It uses drag-and-drop to add lesson resources (videos and images) to the placeholder squares representing the lesson description, task description and task steps, as shown in Fig. 4. Lesson resources are uploaded to the authoring tool and displayed in panels on the right. Once a lesson is created and its resources added, it can be previewed in sequential order from the beginning. The lesson is then added to a unit and a course before the course is saved and exported. Exporting the course generates the XML data structure that is then consumed by the mobile prototype (see Sect. 5.3), as sketched after the figure below.

Fig. 4. The content authoring tool interface that allows the facilitator to create lessons for Deaf learners.
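To illustrate the export step, the following is a minimal Java sketch of generating such a course file with the standard DOM and transformer APIs. The class name, element names and resource paths are our own assumptions, not the authoring tool's actual code.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Illustrative sketch: export one lesson into the XML course format.
public class CourseExporter {
    public static void export(File target) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();

        Element course = doc.createElement("course");
        course.setAttribute("id", "elearner-adult");
        doc.appendChild(course);

        Element unit = doc.createElement("unit");
        unit.setAttribute("name", "Word processing");
        course.appendChild(unit);

        Element lesson = doc.createElement("lesson");
        lesson.setAttribute("name", "Our Organisation");
        lesson.setAttribute("category", "Supplementary");
        unit.appendChild(lesson);

        Element step = doc.createElement("taskStep");
        step.setAttribute("video", "videos/s3/step01.mp4"); // URL of SASL clip
        step.setAttribute("image", "images/s3/step01.png"); // supporting image
        lesson.appendChild(step);

        // Serialize the DOM tree to the exported course file.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.transform(new DOMSource(doc), new StreamResult(target));
    }
}
```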

The authoring tool was implemented with JavaFX [26] in the NetBeans 7.4 integrated development environment (IDE). It was tested on both Microsoft Windows 7 and Apple Mac OS X 10.9.5 to check for compatibility.

5.3 Mobile Prototype

The mobile phones we used had 25 gigabytes (GB) of internal storage, a 4.8-inch touch-sensitive display and a resolution of 1280 by 720 pixels, running Android OS 4.3 (Jelly Bean). The higher-resolution screen was considerably larger than the display used in the previous version of SignSupport [4]. Our version of SignSupport was similar only in its video playback interfaces, differing in content structure and context of use. The extra space allowed an image to be placed below the video frame in addition to the navigation buttons (see Fig. 5).

Fig. 5. The SignSupport interface with an image of an icon beneath the video, and a video caption in the action bar indicating the instruction the learner is currently working on.
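As an illustration of these proportions, a weighted Android LinearLayout could allot roughly 70% of the vertical space to the video, with the image and navigation buttons sharing the remainder. This is only a sketch under that assumption, not the prototype's actual layout file.

```xml
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- SASL instruction video: about 70% of the screen -->
    <VideoView
        android:id="@+id/instructionVideo"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="7" />

    <!-- Supporting image for the current instruction -->
    <ImageView
        android:id="@+id/instructionImage"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="2"
        android:contentDescription="Instruction image" />

    <!-- Back and next buttons for linear navigation -->
    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:orientation="horizontal">

        <Button
            android:id="@+id/backButton"
            android:layout_width="0dp"
            android:layout_height="match_parent"
            android:layout_weight="1"
            android:text="Back" />

        <Button
            android:id="@+id/nextButton"
            android:layout_width="0dp"
            android:layout_height="match_parent"
            android:layout_weight="1"
            android:text="Next" />
    </LinearLayout>
</LinearLayout>
```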

Fig. 6. User interface navigation on the mobile prototype of SignSupport. The boxes represent the different screens the user interacts with and the arrows indicate the direction of navigation between the screens.

Navigating the mobile prototype is done in two ways: linear and hierarchical navigation. For linear navigation, a Deaf learner uses the next and back buttons on the lesson detail screen (see Fig. 5) to move between video instructions; this traverses the XML structure (see Sect. 5.1) generated by the content authoring tool (see Sect. 5.2). Hierarchical navigation moves from the home screen down to the lesson detail screen and back, as shown in Fig. 6. To navigate down to the lesson detail screen, the Deaf learner starts on the home screen and selects a lesson from the list of lessons (see Fig. 7) by pressing the list item bearing the lesson name. Once a lesson is selected, the learner is presented with a list of lesson sections, where clicking a list item reveals the screen shown in Fig. 5 containing the SASL video instructions. For a better user experience, the hierarchical navigation is at most two levels deep from the home screen. A sketch of the linear navigation logic is given below.
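The following is a minimal Java sketch of how linear next/back navigation can be indexed over the instructions of one lesson section. The LessonStep class and all names are our own illustrative assumptions, not the prototype's actual code.

```java
import java.util.List;

// Hypothetical holder for one parsed instruction: a SASL video URL
// and an optional supporting image URL.
class LessonStep {
    final String videoUrl;
    final String imageUrl;
    LessonStep(String videoUrl, String imageUrl) {
        this.videoUrl = videoUrl;
        this.imageUrl = imageUrl;
    }
}

// Sketch of the next/back (linear) navigation over a lesson section.
class LessonNavigator {
    private final List<LessonStep> steps; // parsed from the XML lesson file
    private int current = 0;

    LessonNavigator(List<LessonStep> steps) {
        this.steps = steps;
    }

    // Next button: advance unless already on the last instruction.
    LessonStep next() {
        if (current < steps.size() - 1) current++;
        return steps.get(current);
    }

    // Back button: retreat unless already on the first instruction.
    LessonStep back() {
        if (current > 0) current--;
        return steps.get(current);
    }
}
```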

In the backend of the mobile prototype, the XML data format designed in Sect. 5.1 is parsed using the Android XmlPullParser interface. XML files stored in the SignSupport folder in the phone’s internal memory are modelled using an ArrayList data structure. Navigation is facilitated by clickable list widgets and buttons on the interface, and scrolling through the lists of lessons and lesson sections (see Fig. 7) is done with swipe gestures.
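As a companion sketch, the parsing step could be written against Android's XmlPullParser as follows, reusing the hypothetical LessonStep class and the illustrative element names from Sect. 5.1; the prototype's actual parsing code may differ.

```java
import java.io.InputStream;
import java.util.ArrayList;
import android.util.Xml;
import org.xmlpull.v1.XmlPullParser;

// Sketch: read every <taskStep> element of a lesson file into an
// ArrayList, mirroring the backend behaviour described above.
public class LessonParser {
    public static ArrayList<LessonStep> parse(InputStream in) throws Exception {
        ArrayList<LessonStep> steps = new ArrayList<>();
        XmlPullParser parser = Xml.newPullParser();
        parser.setInput(in, null); // null lets the parser detect the encoding

        for (int event = parser.getEventType();
             event != XmlPullParser.END_DOCUMENT;
             event = parser.next()) {
            if (event == XmlPullParser.START_TAG
                    && "taskStep".equals(parser.getName())) {
                steps.add(new LessonStep(
                        parser.getAttributeValue(null, "video"),
                        parser.getAttributeValue(null, "image")));
            }
        }
        return steps;
    }
}
```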

The mobile prototype is designed to be used concurrently with a computer, as a tutoring system. The Deaf learner can query the facilitator, when one is present, if further clarification is needed.

Fig. 7. The lesson sections shown in a scrollable list on the mobile prototype. The Deaf learner taps on the desired list item to reveal the screen with the SASL video instructions.

6 Content Creation

6.1 Recording Procedure 1

We recorded SASL videos with the help of a professional SASL interpreter who met the following criteria: registered as a SASL interpreter, with a background in education.

Before recording the videos, we created conversation scripts for two lessons selected from the e-learner manual on the basis of difficulty. We first wrote down the original instructions of the lesson. The facilitator then guided us in abstracting the lesson content, with the emphasis that each resulting instruction must contain one task or a single explanation per bullet point. Multiple instructions were broken down into single tasks and single explanations, and computer terminology was explained in detail. In some cases, synonyms for complex terms were used instead; for example, the word duplicate was replaced with the phrase “copy and paste”, for which signs existed in SASL. These steps were repeated until all instructions had been simplified.

The recording procedure involved the instructions on a conversation script being voiced to the interpreter, who then signed on camera until all the instructions on the script were translated into SASL. Signed instructions were separated by writing the instruction’s number, according to its position in the script, on a whiteboard or paper and displaying it in front of the camera while recording continuously. When the interpreter put her hands down, that was the visual cue that the signing for that particular instruction had ended, which helped us when editing the SASL videos.

The recorded SASL videos were edited in Adobe Premiere Pro CS6, where the audio channel was removed to reduce file size. The resulting videos were encoded with the H.264 video codec at a frame size of \(640 \times 480\) pixels and a frame rate of 25 frames per second (fps), as per the ITU requirements [27]. The final videos used MPEG-4 compression, which is compatible with Android OS and the official video format for the platform. The resulting video clips are short; the longest is 48 seconds, which does not impose a high cognitive workload on the learner.

6.2 Recording Procedure 2

In this recording procedure we hired a SASL interpreter who had previously worked with DCCT and was known to the community. This avoided the dialectal differences identified in Sect. 7.2. We chose the lesson “S3: Our Organisation” from the e-learner manual. A conversation script was generated, with the lesson content tailored to suit the Deaf learners: the lesson instructed the learners to create a chart for their own organization, DCCT.

Recording of the SASL videos took place at the DCCT premises during office hours. Present at the recording were the interpreter, the facilitator and an advanced Deaf learner. The recording setup was as follows: two cameras on tripods, positioned in front of the interpreter, to provide redundancy in case one camera failed. The Deaf learner and facilitator stood off camera with a laptop running Microsoft Windows.

Instructions were voiced from a conversation script to the interpreter. The Deaf learner watched the interpreter’s signing to check that it was correct; if the correct sign for a computer term was not used, the Deaf learner corrected it and the video was re-recorded. For instructions that needed clarification regarding the position of Microsoft Word tools, all three parties would pause the recording and refer to the laptop running Microsoft Word, and only then was the instruction re-recorded. These steps were repeated, as needed, until all instructions on the conversation script were recorded. Additional short video clips were recorded for contextual information and discourse markers (videos telling the learner to progress forward or go back to the previous instruction).

The videos were recorded at a resolution of 1920 by 1080 pixels at 25 frames per second. They were edited following the same procedure as in Sect. 6.1, except that the colour channel was changed to grayscale to further reduce file size [4]. The resulting video clips are short, the longest being 48 seconds. The lesson comprised 51 clips in total: 7 lesson description videos, 1 task description video and 43 task step videos.

7 Cycle 1

The above design of the prototype was evaluated using the lesson content recorded in Sect. 6.1. The XML files parsed by the prototype in this cycle were hand-coded, not lessons exported by the authoring tool.

7.1 Evaluation

This section analyses the results of our user evaluation of the mobile prototype. We observed the Deaf learners to uncover design flaws and any other interesting uses of the prototype.

Procedure: Five DCCT staff members participated in the evaluation; these were the same Deaf learners identified in Sect. 4. The facilitator was present to interpret on our behalf. Each Deaf participant was given a smartphone containing the prototype. After a short briefing about the project, the participants were first trained to use the system, then given a practice lesson for 20 min to get a feel for the prototype, and a second lesson for 30 min. In the first lesson, the learners were required to pair graphics of special keyboard keys (e.g. Space bar, Shift key) with images representing their functions. The second lesson required the learners to identify and name different storage media, then identify which files, represented by icons, could fit onto the storage media without exceeding their capacities. Both lessons were provided as Microsoft Excel templates. Afterwards, the Deaf participants were invited to a focus group discussion to give their opinions and feedback on the prototype. The session was video recorded and photographs were taken with the help of an assistant.

Questionnaires were not used to elicit feedback on the system. Motlhabi noted that conducting an evaluation with Deaf text semi-literate participants proved problematic when answering questionnaires [4]: interpreters were scarce and costly to hire, and the number he hired was not enough to interpret the questionnaires for each participant individually without very long delays.

7.2 Results and Analysis

The number of representational states involved in delivering a single instruction was reduced by four, eliminating the facilitator, flip chart, data projector and assistant states from the process, as shown in Fig. 8. This reduction moved the Deaf participants closer to hearing literate users, with the same number of representational states (see Fig. 2).

The participants had little difficulty navigating the user interface, although two had difficulty locating the back button that navigated to the list of lesson sections, even after training. All the participants managed to re-watch the SASL videos: it was easy for them to tap on the video frame to bring up the video controls and replay the video. They also found it easy to navigate through the lesson content using the back and next buttons, as well as between the list of lesson sections and the lesson detail screen containing the SASL videos.

Fig. 8. The representational states of a single instruction being delivered to a Deaf learner using SignSupport. The reduced states make it simpler for Deaf learners and promote individual work.

All the participants noted that some signs used in the videos differed from theirs, indicating dialectal differences in the signs used in the SASL videos. Despite this, the stronger participants were able to understand the context of the instructions and continue with the tasks; in 18 observed instances, these stronger participants helped the weaker participants understand the instructions.

We also observed during the testing that the Deaf participants worked individually at their own pace, and that the facilitator helped participants individually in 21 instances. Two of these instances were initiated by the Deaf participants and the other 19 by the facilitator. In 9 of the 21 instances, the facilitator prompted a Deaf participant to continue with the task, click a button or replay a video; in the other 12, the facilitator explained unclear instructions in SASL. The assistance did not affect the other participants working individually, and the role of the facilitator changed from delivering the lesson content to a support role. Consequently, the workload on the facilitator was reduced.

Some Deaf participants noted a mismatch between an instruction and what they expected to see on the computer. The mismatches occurred due to unforeseen steps, such as the monthly password that must be entered in the software to access the lesson content. The facilitator reported that additional SASL videos with contextual information and discourse markers [28] were needed to cue the Deaf participants to progress to the next instruction or to perform a task.

8 Cycle 2

In this cycle we took the feedback from the previous evaluation (Sect. 7.2), particularly the instruction inconsistencies, and re-recorded the sign language videos as described in Sect. 6.2. The lesson content used here was generated and exported by the authoring tool.

8.1 Evaluation

We evaluated the mobile prototype after re-recording the videos in Sect. 6.2.

Procedure. Four Deaf participants took part in the evaluation: three had participated in the previous evaluation in Sect. 7.1 and one was new to the project. Two participants were advanced learners, one intermediate and one a beginner. Also present were an advanced Deaf learner who had assisted in the filming of the SASL videos (see Sect. 6.2) and acted as an assistant, the facilitator and the research team. The facilitator and assistant were only there to clarify SASL instructions; additionally, the facilitator interpreted on our behalf. Much of our procedure was similar to Sect. 7.1, the only difference being that the lesson content focused on organization charts. Data were collected by note-taking, photographs and video recording. Observations and comments made by both the facilitator and the Deaf learners were recorded on video for later analysis.

8.2 Results and Analysis

The same reduction in representational states as in Sect. 7.2 was achieved. The fastest learner (participant 3) completed the lesson in 1 h 6 min; Table 1 shows the number of tasks completed in that same period. The tasks correspond to the task step videos that the learners had to perform in order to create an organization chart.

Table 1. Task completion rates of the lesson by the Deaf participants.

From Table 1 we see that the beginner learner (participant 4) had the lowest completion rate. We observed that this learner needed more help than the other participants and required prompting to carry on. In one observed instance, the participant was staring at a dialog box in which she had to click the OK button for it to disappear; in another, she sought assistance from the facilitator to confirm whether the SmartArt object chosen was the correct one. From these observations and task completion rates, the difference in computer literacy between the advanced learners and the beginner is evident. This allowed the assistant or facilitator to focus on assisting the beginner (participant 4) while the advanced learners (participants 2 and 3) continued with their individual work. From these results, we reconsidered the target group: SignSupport is more suited to Deaf learners with some basic exposure to computer literacy, as shown by the individual work rates of the advanced learners.

The facilitator engaged with the participants in 25 observed instances, compared with 28 instances in which the assistant engaged with them. The facilitator clarified instructions that were potentially confusing to an advanced learner (participant 2) or prompted the other learners. In addition, the facilitator instructed the assistant to help the learners with problematic spelling, for example the word organization. In the assistant’s engagements, the assistant prompted the participants and clarified some of the SASL instructions that the participants misinterpreted. When the assistant was not sure of an instruction, the facilitator was called in to assist.

The assistant’s presence proved helpful in reducing the facilitator’s workload. The assistant helped participants with terminology and with the unfamiliar signs for new terminology developed in the class. The workload reduction was shown by the 28 instances of assistant engagement compared with the 25 instances of facilitator engagement. It allowed the facilitator to step back and let the assistant run the session, a sign of the sustainability of SignSupport.

We observed the emergence of a blended learning environment: lesson content was provided electronically through SignSupport while instructors (the assistant and facilitator) were present. This environment allowed the facilitator to engage more with the learners and the assistant, rather than deliver content.

9 Conclusion and Future Work

SignSupport suited Deaf learners with prior exposure to computer literacy skills, who benefit from learning in their preferred language, SASL. We identified the obstacles that text-illiterate Deaf people encounter while acquiring computer skills, dependent on the facilitator working from a single copy of the e-learner manual. Distributed cognition revealed the number of representational states involved in delivering a single instruction to Deaf learners and the cognitive overhead on the facilitator while teaching.

We designed and implemented a prototype on commercially available devices that showed potential to support Deaf users acquiring computer literacy skills by presenting content as SASL videos. We observed the prototype allowing Deaf users to work individually at their own pace, with or without assistance from the facilitator or assistant, thereby reducing the workload on the facilitator. The decreased number of representational states reduced the cognitive overhead on the facilitator. Furthermore, SignSupport demonstrably worked in a blended learning environment, with an assistant (a Deaf learner) taking a more active role in teaching, allowing the facilitator to step back.

Our design of an XML data format to represent lesson content organised the SASL videos and images logically. The findings from this work are being generalised to other Deaf users undertaking computer literacy training.

Future work could investigate whether SignSupport effectively increases computer literacy skills among Deaf people. This would involve a pedagogy study with Deaf learners who have pre-existing basic computer knowledge, and would also examine whether the assistant can replace the facilitator.