1 Introduction

People with hearing loss (hard of hearing, deaf and deaf-blind people) are in the category of individuals who need specifically designed Information and Communication Technology (ICT), with emphasis on support in visual form or sound amplification, to enhance their communication abilities, educational achievement and sociocultural characteristics [1]. Hearing loss is a condition in which the ability to hear is reduced, and affected individuals require medical, educational and psychological attention. Studies in Europe and the USA show that around 9–16% of the population in these countries have some type of hearing loss, and the prevalence is increasing, especially as the population ages [23]. Another study shows that millions of people worldwide (14.6 million in the USA, for example) live with untreated disabling hearing loss, at a very high cost of between 8,000 USD (Europe) and 9,000 USD (USA) per person each year [14]. It is therefore important to lower this cost and to support the use of hearing accessories, since they contribute to better health, higher income, and better family and social life [16].

Deafness is unique among disabilities, since it is the only disability in which most deaf sign language users share a common language, one that is not shared by the dominant hearing society. However, only a minority of deaf and hard of hearing people use sign language [15]. Additionally, most sign language users learned sign language as their first language. Therefore, most deaf sign language users are bilingual, and their primary need is thus bilingual support [11].

Deaf-blindness is a separate disability from deafness and blindness. Deaf-blind people usually experience some degree of both hearing and vision disability, and complete deaf-blindness is very rare. However, age-related vision and hearing loss will become a serious problem as the population ages. For this group, hearing systems can be combined with haptic technology to enhance tactile perception [5].

As found in other studies, the development of accessible ICT holds great promise in supporting the communication needs, language, and social development of people with hearing loss [4, 12, 13].

According to [8], hearing systems and accessories include three broad classes of devices:

  • Hearing technology

  • Alerting devices

  • Communication Support Technology

Today’s ICT-supported hearing technology, which includes hearing devices, Assistive Listening Devices (ALDs) and Personal Sound Amplification Products (PSAPs), consists of powerful miniaturized computing systems that increasingly offer options for coupling and connectivity with modern communication devices to expand their capabilities [8]. However, even the most sophisticated ICT may be of little use if it does not fit a person’s individual hearing requirements and usage needs well [19]. Various other types of hearing technology can benefit those with a hearing loss: smart hearing instruments, adaptive and user-controlled hearing systems, machine-learning-based hearing systems for individualizing the listening experience, algorithms for improving the acoustics of sound, and other cutting-edge technology that can assist people with hearing loss with listening, speaking and reading.
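
To make the idea of acoustic processing in such devices concrete, the toy sketch below applies a frequency-dependent gain to an audio signal in Python. It is only an illustration of the principle, not the algorithm of any actual hearing device; the cutoff frequency and gain values are arbitrary assumptions.

    import numpy as np

    def amplify_high_frequencies(signal, sample_rate, cutoff_hz=1000.0, gain_db=20.0):
        """Boost spectral content above cutoff_hz by gain_db (toy example only)."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        gain = np.ones_like(freqs)
        gain[freqs >= cutoff_hz] = 10.0 ** (gain_db / 20.0)  # dB -> linear factor
        return np.fft.irfft(spectrum * gain, n=len(signal))

    # Example: boost the 4 kHz component of a synthetic two-tone test signal.
    sr = 16000
    t = np.arange(sr) / sr
    tone = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.1 * np.sin(2 * np.pi * 4000 * t)
    boosted = amplify_high_frequencies(tone, sr)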

Alerting devices support visual and tactile modalities, using light and, in some cases, vibration, or a combination of the two, to alert users to specific events (alarm clocks, fire alarms, doorbells, IoT devices, baby monitors). However, such devices need to be developed in close cooperation with the end users, according to the Universal Design principles, and adapted to the users who will be using them [9].
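
As a simple illustration of how an IoT event can be turned into a visual alert, the sketch below subscribes to a doorbell topic on an MQTT broker and triggers a light. It assumes the paho-mqtt 1.x client API; the broker address, topic name and flash_light() function are hypothetical placeholders.

    import paho.mqtt.client as mqtt

    def flash_light():
        # Placeholder: in a real installation this would drive a smart bulb or LED strip.
        print("Doorbell pressed - flashing room light")

    def on_message(client, userdata, message):
        # Translate an audible event (a doorbell press) into a visual alert.
        if message.topic == "home/doorbell":
            flash_light()

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.subscribe("home/doorbell")
    client.loop_forever()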

Communication Support Technology, also known as Augmentative and Alternative Communication (AAC), comprises devices and tools for improving communication skills, such as telecommunication services, support for person-to-person interactions, and collaborative and cooperative services. Using accessible AAC for communication and collaborative activities can encourage a group of people to improve their use of language and their understanding of concepts as they plan and carry out their work. Despite many advances in this field, challenges remain, such as the marginalization of people with severe hearing loss and the need for research-driven technical development to optimize the technology and precision of AAC devices [10].

Other emerging hearing systems and accessories include eXtended Reality (XR) glasses, real-time captioning systems based on Automatic Speech Recognition, and advanced computer vision algorithms. Furthermore, accessible and adaptive hearing systems, for example in XR environments, can support visual modalities with avatars, pictures, signs or on-screen text, allowing individuals to extend both their general knowledge and their use of language without listening [20].

The following sections of the paper examine the challenges discussed by the authors working on Communication Support Technology in higher education, in interpretation, and in human factors.

2 Support in Higher Education

Bogdanova (2020) [3] presents the challenge of integrating people with disabilities into society, which requires not only technical and informational measures but also legislative initiatives. In Russia, for example, all universities are working on the development of methodological tools that promote the inclusion of students with disabilities. For students with hearing loss, this involves developing individual educational paths based on collectivist and dialogical principles. These methods include previewing learning material on a Learning Management System platform, together with quizzes and essay questions. Another method organizes group activities together with hearing students, in which students with hearing loss read the assignment and contribute results collaboratively in a shared document. Such combined group activities with ICT support are claimed to reduce the hearing workload and promote visual support. Additionally, the study found that hearing students learn to communicate more successfully with students with hearing loss, which at the same time increases tolerance in communication.

3 Sign Language to Text Interpretation

Live sign-language-to-caption interpretation is seen as a challenge, since the usual method involves sign language users, a sign language interpreter who reads and vocalizes the signing of the speaker, and a high-speed typist who generates captions from the vocalization. Tanaka [17] noted that this method doubles labor costs and delays the provision of captions. He proposes a crowdsourcing method with non-expert typists who can nevertheless interpret sign language. In his study, in which live video was segmented into short clips, he found that non-expert users took approximately three times a segment’s length to finish captioning it via a website, and that at least 11 users would consequently be required to reduce the necessary captioning workload. Crowdsourced captioning illustrates that distributing caption writing across multiple users can be an effective way of producing captions with contributors of different skills and abilities.
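
The sketch below is only a back-of-the-envelope model of the parallelism involved, not the scheduling used in [17]: if every volunteer needs roughly three times a segment’s duration to caption it, how many volunteers must work in parallel so that segments do not queue up? The work factor and overhead values are assumptions for illustration; the larger figure reported in [17] reflects conditions of the real experiment that this toy model does not capture.

    import math

    def workers_needed(segment_seconds, work_factor=3.0, overhead_seconds=0.0):
        """Minimum number of parallel captioners needed to keep pace with a live stream."""
        time_per_segment = work_factor * segment_seconds + overhead_seconds
        # A new segment arrives every segment_seconds; each worker becomes free
        # again every time_per_segment seconds, so the ratio (rounded up) is the
        # minimum pool size that keeps the queue from growing.
        return math.ceil(time_per_segment / segment_seconds)

    print(workers_needed(segment_seconds=10))                       # -> 3 in the idealized case
    print(workers_needed(segment_seconds=10, overhead_seconds=25))  # -> 6 with per-task overhead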

4 Captioning

Captions translate auditory information into a visual representation on the screen. They give all viewers, including those who are deaf or hard of hearing, a visual medium for following video content that includes an audio track. Live captioning quality is usually affected by the delay in a human stenographer’s response while listening to and transcribing live speech and, consequently, has a higher error rate due to transcribing under pressure. To address this challenge, Automatic Speech Recognition (ASR) has come into widespread use. While ASR is fast, its performance in transcribing and punctuating live speech has been less accurate than for pre-recorded speech, as it has less time to decide what has been said and is unable to take the words that follow an utterance into account [6]. However, as ASR services have become more accurate and complex, they have begun to incorporate reliable automatic punctuation into their transcriptions through a combination of lexical and prosodic features, such as pause length in speech. Before the widespread adoption of ASR solutions, captions for television, education or courtroom reporting were generated by human-powered captioning services such as stenography or re-speaking, which usually produced punctuated captions [7]. Datta et al. [6] noted that the issue of evaluating punctuated versus unpunctuated captions was not considered until the advent of ASR. In their study, viewers reported that punctuation improves the “readability” experience for deaf, hard of hearing and hearing viewers, regardless of whether it was generated by ASR or by humans. The results of the study thus show the importance of using punctuation in ASR systems as well.
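
Modern cloud ASR services typically expose automatic punctuation as a configuration option. The sketch below shows how a punctuated transcript could be requested, assuming the Google Cloud Speech-to-Text Python client (google-cloud-speech); the bucket URI is a hypothetical placeholder and field names may vary slightly between client versions.

    from google.cloud import speech

    client = speech.SpeechClient()

    config = speech.RecognitionConfig(
        language_code="en-US",
        enable_automatic_punctuation=True,  # punctuation inferred from lexical and prosodic cues
    )
    audio = speech.RecognitionAudio(uri="gs://example-bucket/lecture-audio.wav")

    # Synchronous recognition is intended for short clips; streaming or
    # long-running requests would be used for live or lengthy material.
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        # Each result carries the most likely punctuated transcript for a speech segment.
        print(result.alternatives[0].transcript)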

In another study, Wakatsuki et al. [21] investigated the tendencies and characteristics of gaze behavior in deaf and hard of hearing people during captioned lectures. Additionally, they created hybrid captions, in which parts of the slides were inserted into the caption text, and compared classical captioning with hybrid captioning. The study showed no significant difference in the average gaze count between classical and hybrid captioning. The results of the experiment supported the findings of Behm et al. [2], who argued that trailing captions positioned close to the instructor, as the information source, are easier to read and understand.

5 Distance Communication for Deaf-Blind People

According to Onishi et al. [18], the field of Communication Support Technology for distance communication in public or outdoor settings for deaf-blind people is not well researched, even though there are commonly used methods of communication for the deaf-blind, such as tactile sign language and Braille. Researchers are working on solutions to support distance communication, such as hand-tracking technology combined with a 3D-printed bio-inspired robotic arm, or wearable technology based on telecommunication solutions. However, deaf-blind people must master Tactile Sign Language in the first case, or have a stable, high-speed Internet connection over a single mobile telephone line in the second.

Onishi et al. propose a system for casual remote communication among many users, including both hearing and non-hearing users. The system’s interface combines a Braille display with WebSocket communication so that information can be shared with hearing users who are not familiar with Tactile Sign Language and who connect using voice input or chat keyboard input. With the proposed interface, users take turns in chronological order, avoiding conflicts caused by simultaneous talking. The system also removes the need to involve an additional person acting as a relay for deaf-blind users.
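
The relay idea can be illustrated with a few lines of server code. The sketch below is an illustration of the general pattern only, not the system from [18]: it forwards every incoming message to all connected clients in the order it arrives, so a Braille-display client, a chat client and a voice-to-text client all see the same chronological stream. It assumes a recent version of the Python websockets package (older versions also pass a path argument to the handler); the host and port are placeholders.

    import asyncio
    import websockets

    connected = set()

    async def handler(websocket):
        connected.add(websocket)
        try:
            async for message in websocket:
                # Messages are relayed in arrival order, preserving the
                # chronological sequence of turns for every participant.
                websockets.broadcast(connected, message)
        finally:
            connected.discard(websocket)

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run until cancelled

    if __name__ == "__main__":
        asyncio.run(main())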

6 Visiting Museums for Hard of Hearing People

Visiting museums is an important activity for lifelong learning, especially for people with disabilities. Wakatsuki et al. [22] found that, in Japan, there are almost no museums working on accessibility. To speed up this process, the authors prepared a 27-item questionnaire for deaf and hard of hearing people with the aim of identifying the accessibility factors this target group needs when visiting museums. As a result, visitors with hearing loss noted that they could not understand spoken explanations or announcements in museums and that there was a need for sign language interpretation. Additionally, the authors found that participants would generally prefer to navigate the museum at their own pace with the help of accessible technology. The authors therefore prepared sign language videos triggered by QR codes, which they proposed as a proof-of-concept experiment.
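
Generating such QR codes is straightforward; the sketch below links one exhibit to its sign language video, assuming the Python qrcode package, with a hypothetical exhibit URL and file name.

    import qrcode

    # Each exhibit gets a QR code that links to its sign language explanation video.
    video_url = "https://example.org/museum/exhibit-12/sign-language-video"
    img = qrcode.make(video_url)
    img.save("exhibit-12-sign-language.png")

Printed next to the exhibit, such a code lets visitors open the explanation on their own phone and move through the museum at their own pace.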

7 Discussion and Conclusion

The common thread running through the papers presented in the session Hearing Systems and Accessories for People with Hearing Loss is support through Communication Support Technology. This applies to captioning, sign language video presentation and interpretation, and communication at higher education institutions.

From the studies undertaken by the various authors, it is evident that there remains a need to adapt standard forms of communication and presentation for deaf and hard of hearing persons. Ease of access, together with strategies that offer extra support in terms of captioning, sign language video and visual presentation of text-based materials, for example, is one of the most important goals of universal design for deaf and hard of hearing people.