1 Introduction

Ensuring digital accessibility for individuals with disabilities is critical to inclusive, human-centered design. In the realm of extended reality (XR), accessibility challenges for people with visual, motor, hearing, cognitive, and other impairments are significant exclusion factors, giving rise to novel dimensions of the digital divide. To address these challenges, XR environments need to be carefully designed to integrate various accessibility features seamlessly, which, however, remains a complex task.

To alleviate the difficulties entailed in creating XR environments accessible to all, a universal access approach needs to be adopted, designing systems that take diversity into account and proactively satisfy the variety of implied requirements [1]. To this end, this paper introduces an XR accessibility framework, currently under development, designed to provide developers with a cohesive approach to incorporating diverse accessibility features into their XR applications. The framework aims to simplify the process of adjusting accessibility settings without burdening developers with multiple disparate tools. The proposed framework is based on a thorough review of relevant literature, thus ensuring that state-of-the-art accessibility features for XR environments are adopted.

The framework offers customizable text settings, alternative text for images and videos, multiple controlling mechanisms for user interaction, and ongoing work on video subtitle customization. It also includes features such as edge enhancement for 3D artefacts, interactive element descriptions with dynamic widgets, scanning support for navigation in the XR environment, and foreground positioning of active objects. Additionally, it incorporates scene adaptations like brightness adjustment, magnified lenses, and recolouring tools to cater to specific visual needs.

The proposed XR accessibility framework is an ongoing work that aims to enhance XR accessibility for developers. While certain features are still under development, the framework continues to evolve and improve. This paper provides background information, an overview of the framework, implementation details, and a use case to showcase its effectiveness in creating inclusive XR applications.

2 Background and Related Work

Today, there is a vast array of online services and applications that have become essential for our daily activities. A notable advancement in this realm is the emergence of online XR applications, which go beyond traditional domains like gaming and education. These applications now span various areas, including business [2], e-commerce [3], and culture [4]. Consequently, ensuring digital accessibility has become a crucial requirement for addressing the fundamental needs of people with disabilities, thus ensuring their equal access to digital services and applications. Digital accessibility encompasses a growing commitment by policymakers, public bodies, the research community, and the industry to develop legislation, guidelines, standards, and assistive technologies that empower people with disabilities to access and utilize various applications [5,6,7].

Although many efforts have been put forward in several domains addressing users with disabilities, and especially individuals with visual impairments, the challenges they face in engaging with digital content in extended reality (XR) remain significant. People with visual impairments encounter difficulties perceiving visual information, including text, images, videos, and 3D objects, within XR environments [8]. To address these challenges, numerous solutions have been proposed, such as visual display adaptations [9, 10], overlays [11, 12], and audio- or haptic-based approaches for interaction [13,14,15,16]. Visual display adaptations in particular have gained attention as a means to enhance accessibility for this user group. Similarly, individuals with motor impairments face obstacles when interacting with virtual objects and navigating virtual environments. Many existing systems employ complex interaction techniques without customization, overlooking the specific needs of this user group [17, 18]. Commonly used approaches include alternative input devices, eye gaze control, and head movements [19]. Nevertheless, a major challenge for users with motor impairments, regardless of the device employed, is that the point-and-select paradigm is not effective; instead, there is a need for acquiring sequential access to the interactive elements of a User Interface (UI). A common technique employed in this respect is scanning, which sequentially highlights and gives focus to the interactive elements of a UI [20]. For individuals who are deaf or hard of hearing, approaches to enhance accessibility include displayed written content, which, however, may not be in their native language, as well as signed video descriptions for text, objects, or other interface items.

Realizing the pressing need for creating accessible XR environments by adopting a ‘by design’ approach, numerous tools have been proposed in the literature to aid the development of XR experiences, focusing on streamlining and automating commonly utilized functionalities. An illustrative example is the XR Interaction Toolkit [21], specifically designed to simplify the process by offering preconfigured components that ensure seamless compatibility across various Virtual Reality (VR) devices. Moreover, the toolkit incorporates scripts that facilitate fundamental interactions within VR environments. SeeingVR is a Unity plugin for developers, designed to enhance visual display settings in VR applications, offering 14 distinct tools to optimize visual accessibility for individuals with low vision [22]. Despite the progress achieved, many of these efforts remain in the prototype stage within the research field, lacking integration into mainstream applications or platforms, while developers report that they need better integration of accessibility guidelines, alongside code examples of particular accessibility features [23]. Building on these approaches, we propose an XR accessibility framework for Unity developers that supports them in developing universally accessible XR applications, addressing, among others, the interaction needs of users with visual impairments.

3 The Universal Accessibility XR Framework

The proposed framework has been implemented as an asset package built on the Unity game engine, available to be installed in projects. This is an easy-to-use, plug-and-play approach that developers can use to effortlessly embed accessibility into their AR or VR applications.

3.1 Framework Overview

The objective of the framework is to establish a cohesive approach for XR application developers to incorporate various accessibility features. Additionally, the framework aims to offer a straightforward method for adjusting these settings to the specific requirements of each application with minimal developer effort. The accessibility adjustments provided are derived from a comprehensive review of relevant literature.

Currently, the system provides support for a range of content adjustments to enhance accessibility for text, images, videos, and 3D artefacts. One of the main goals of the framework is to provide developers with accessible components ready to be used. More specifically, with respect to textual information, the framework offers a wide range of customization options, including modifying the font size and colour, the outline thickness and colour, and an adjustable text background. This feature is particularly valuable for individuals with low vision, as it allows them to enhance text contrast and improve legibility. Images and videos are enhanced with alternative text (alt text), which provides textual descriptions of the visual content. Furthermore, multiple controlling mechanisms, such as resizing, play, and pause options, are incorporated to facilitate user interaction with multimedia content. Additionally, the framework extends its accessibility features to encompass video subtitles, allowing users to customize them to their preferences. This customization includes the ability to modify font styles, background colours, and font sizes, thereby optimizing the viewing experience for individuals with diverse accessibility needs.
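
As a rough illustration of how such framework-wide text adjustments could be implemented in Unity, the following C# sketch applies a set of text settings to every TextMeshPro element in the scene. The class and field names (TextAdjustments, outlineThickness, etc.) are our own illustrative choices, not the framework’s published API.

```csharp
using TMPro;
using UnityEngine;

// Illustrative sketch: apply framework-wide text settings to every
// TextMeshPro element in the scene. Names are assumptions, not the
// framework's actual API.
public class TextAdjustments : MonoBehaviour
{
    public float fontSize = 36f;
    public Color fontColour = Color.white;
    public float outlineThickness = 0.2f;
    public Color32 outlineColour = new Color32(0, 0, 0, 255);

    public void Apply()
    {
        foreach (TMP_Text text in FindObjectsOfType<TMP_Text>())
        {
            text.fontSize = fontSize;
            text.color = fontColour;
            text.outlineWidth = outlineThickness;
            text.outlineColor = outlineColour;
            // An adjustable text background could be handled similarly,
            // e.g. via a backing Image placed behind each text element.
        }
    }
}
```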

For 3D artefacts, the framework grants developers the ability to activate the edge enhancement tool, which accentuates object edges to improve visibility. Furthermore, developers can customize line colours and thickness, affording them greater control over the visual representation of these artefacts. This flexibility allows for enhanced user experiences and accommodates diverse user preferences.
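
The sketch below shows one plausible shape for such an edge enhancement component, assuming an inverted-hull outline shader is available; the shader property names (_OutlineColor, _OutlineWidth) and the component itself are assumptions for illustration, not the framework’s actual implementation.

```csharp
using UnityEngine;

// Hypothetical edge enhancement: layer an outline material (e.g. an
// inverted-hull shader) on top of the renderer's existing materials.
public class EdgeEnhancement : MonoBehaviour
{
    public Material outlineMaterial;        // assumed outline shader
    public Color lineColour = Color.yellow; // customizable line colour
    public float lineThickness = 0.02f;     // customizable line thickness

    public void Enable()
    {
        var target = GetComponent<Renderer>();
        var materials = new Material[target.materials.Length + 1];
        target.materials.CopyTo(materials, 0);

        var outline = new Material(outlineMaterial);
        outline.SetColor("_OutlineColor", lineColour);    // assumed property name
        outline.SetFloat("_OutlineWidth", lineThickness); // assumed property name
        materials[materials.Length - 1] = outline;
        target.materials = materials;
    }
}
```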

To activate the accessibility features, the developer has to indicate the interactive elements within the scene. To support multiple modes of description, each interactive element is accompanied by a widget that offers supplementary information such as text, images, and videos. Depending on the disability of the target users, the widget is dynamically adjusted. For instance, for blind users, a screen reader is activated automatically, providing auditory descriptions of each interactive object, utilizing the text description associated with the object. For persons with vision deficiencies, appropriate tools are deployed to assist them in perceiving and interacting with the XR environment. As a result, individuals with visual impairments are empowered to effectively access and comprehend the content.
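
The following sketch illustrates how such a widget might adapt to the configured user profile. The DisabilityProfile enum, the ScreenReader stub, and the Configure method are hypothetical stand-ins for the framework’s internals.

```csharp
using UnityEngine;

// Hypothetical profile categories mirroring those described in the text.
public enum DisabilityProfile { Blind, LowVision, ColourBlind, HearingImpaired, MotorImpaired }

// Placeholder for an embedded screen reader; a real implementation
// would call a text-to-speech backend.
public static class ScreenReader
{
    public static void Speak(string text) => Debug.Log("[TTS] " + text);
}

// Sketch of a description widget that adjusts to the user profile.
public class DescriptionWidget : MonoBehaviour
{
    public string textDescription;
    public GameObject imagePanel;
    public GameObject videoPanel;

    public void Configure(DisabilityProfile profile)
    {
        // For blind users, visual aids are hidden and the text
        // description is voiced automatically.
        bool showVisuals = profile != DisabilityProfile.Blind;
        imagePanel.SetActive(showVisuals);
        videoPanel.SetActive(showVisuals);

        if (profile == DisabilityProfile.Blind)
            ScreenReader.Speak(textDescription);
    }
}
```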

The accessibility framework also incorporates a scanning feature that holds significant value in XR applications for individuals with disabilities. This feature plays a crucial role in facilitating effective navigation through the interactive elements of the XR environment for users with visual impairments. Each interactive element within the scene is activated in a hierarchical order, which is initially determined by the default arrangement of interactive elements in the Unity scene, moving from top to bottom. However, the framework also provides developers with the flexibility to customize this order using a designated field. This capability empowers developers to tailor the scanning experience and optimize accessibility based on specific user needs within XR applications.
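
A minimal sketch of such a scanning loop is shown below. It assumes the InteractiveElement component (sketched in Sect. 3.2) exposes an orderInHierarchy field and Focus/Unfocus hooks; these names are our own illustration.

```csharp
using System.Collections;
using System.Linq;
using UnityEngine;

// Sketch of the scanning feature: interactive elements receive focus
// one by one, ordered by their "Order in Hierarchy" field.
public class ScanningController : MonoBehaviour
{
    public float dwellSeconds = 2f; // time each element stays focused

    IEnumerator Start()
    {
        var elements = FindObjectsOfType<InteractiveElement>()
            .OrderBy(e => e.orderInHierarchy)
            .ToArray();

        while (true)
        {
            foreach (var element in elements)
            {
                element.Focus();   // highlight, describe, bring forward
                yield return new WaitForSeconds(dwellSeconds);
                element.Unfocus();
            }
        }
    }
}
```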

The accessibility framework also includes a notable feature designed to enhance navigation in the XR environment for individuals with visual impairments. This feature ensures that when specific interactive objects, selected by the developer, become active, they are brought forward in the scene, closer to the user. By bringing the active objects into the foreground, the framework facilitates improved visibility and easier interaction, benefiting individuals with visual impairments. Moreover, this functionality may also be beneficial for users with cognitive impairments, as it brings to the user’s focus the element they need to pay attention to, reducing any cognitive burden induced by the complexity of the remaining scene.
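
One plausible way to implement this foreground positioning is sketched below, under the assumption that an active object is simply moved towards a point in front of the camera and returned to its resting position afterwards; the component and its fields are illustrative.

```csharp
using UnityEngine;

// Sketch of foreground positioning: move the active object towards the
// user's viewpoint, and back to its resting position when deactivated.
public class BringForward : MonoBehaviour
{
    public float targetDistance = 1.5f; // metres in front of the camera (assumed default)
    public float moveSpeed = 2f;        // metres per second

    Vector3 restPosition;
    bool isActiveElement;

    void Awake() => restPosition = transform.position;

    public void SetActiveElement(bool active) => isActiveElement = active;

    void Update()
    {
        Transform cam = Camera.main.transform;
        Vector3 target = isActiveElement
            ? cam.position + cam.forward * targetDistance
            : restPosition;
        transform.position = Vector3.MoveTowards(
            transform.position, target, moveSpeed * Time.deltaTime);
    }
}
```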

Furthermore, the framework extends its accessibility provisions beyond individual content items. It includes scene adaptations, offering functionalities such as brightness adjustment, a magnified lens for enlarged viewing, and a recolouring tool to modify the colour scheme, thus catering to the needs of colour-blind individuals. In more detail, the user can select a colour profile (e.g. protanopia, deuteranopia, tritanopia), and the framework ensures that the scene is appropriately recoloured to address the needs of each user in the best possible way. For instance, in the case of protanopia, the colour red is substituted with magenta, while in the case of deuteranopia, green is substituted with light blue. Similarly, for tritanopia, blue is substituted with green [24]. These scene adaptations aim to address the specific visual needs and preferences of users, further enhancing their overall experience within the XR environment.
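
As an illustration of the substitutions described above, a naive per-material recolouring sketch follows. The channel-dominance test and the per-material approach are simplifying assumptions; a production tool would more likely recolour the rendered image in a post-processing shader.

```csharp
using UnityEngine;

public enum ColourProfile { Protanopia, Deuteranopia, Tritanopia }

// Sketch of the recolouring tool: substitute the problematic colour of
// each profile (red -> magenta, green -> light blue, blue -> green [24]).
public class RecolouringTool : MonoBehaviour
{
    public ColourProfile profile;

    public void RecolourScene()
    {
        foreach (Renderer r in FindObjectsOfType<Renderer>())
            r.material.color = Substitute(r.material.color);
    }

    Color Substitute(Color c)
    {
        switch (profile)
        {
            case ColourProfile.Protanopia:   // red is dominant -> magenta
                return IsDominant(c.r, c.g, c.b) ? Color.magenta : c;
            case ColourProfile.Deuteranopia: // green is dominant -> light blue
                return IsDominant(c.g, c.r, c.b) ? new Color(0.4f, 0.7f, 1f) : c;
            case ColourProfile.Tritanopia:   // blue is dominant -> green
                return IsDominant(c.b, c.r, c.g) ? Color.green : c;
            default:
                return c;
        }
    }

    static bool IsDominant(float channel, float other1, float other2) =>
        channel > other1 && channel > other2;
}
```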

3.2 Implementation

To enhance the description of each interactive 3D artefact (Unity GameObject), additional text, images, or video can be incorporated. To identify the interactive elements within the scene, developers can utilize a C# script called “InteractiveElement.cs” that should be attached to the corresponding GameObjects. This script extends Unity’s MonoBehaviour class and includes references to the text, image, and video GameObjects, where developers can place the appropriate prefabs. Furthermore, the script includes a field called “Order in Hierarchy” that allows developers to modify the scanning order without altering the original order established in the Unity scene.
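
Based only on the fields described above, a minimal sketch of InteractiveElement.cs could look as follows; the Focus/Unfocus hooks are illustrative additions used by the scanning sketch in Sect. 3.1, not confirmed parts of the script.

```csharp
using UnityEngine;

// Sketch of InteractiveElement.cs: references to the description
// prefabs plus the scanning-order override field.
public class InteractiveElement : MonoBehaviour
{
    [Header("Alternative descriptions (assign prefabs)")]
    public GameObject textDescription;
    public GameObject imageDescription;
    public GameObject videoDescription;

    [Tooltip("Order in Hierarchy: overrides the default top-to-bottom scanning order.")]
    public int orderInHierarchy;

    public void Focus()
    {
        // Show the description; edge enhancement, foreground positioning,
        // and screen reading would also be triggered here.
        if (textDescription != null) textDescription.SetActive(true);
    }

    public void Unfocus()
    {
        if (textDescription != null) textDescription.SetActive(false);
    }
}
```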

To facilitate scene adaptations, the accessibility framework offers a Unity Prefab called “Accessibility Framework Manager.” This Prefab includes public entries that grant control over various tools and their respective parameters. The goal of this approach is to establish a standardized way of adjusting accessibility settings within the XR application. For example, developers need only specify the text adjustments once, and they are then applied seamlessly throughout the scene. The framework dynamically identifies all text objects present in the scene and applies the desired changes accordingly. Additionally, the Prefab allows for enabling features such as brightness adjustment, the magnified lens, and the recolouring tool. By incorporating the Accessibility Framework Manager Prefab into their VR scene, developers can effortlessly control these tools and optimize accessibility settings for users (Fig. 1).
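
The sketch below approximates the manager’s public entries, reusing the TextAdjustments and RecolouringTool sketches from Sect. 3.1; the real prefab’s fields and wiring may differ.

```csharp
using UnityEngine;

// Sketch of the Accessibility Framework Manager: one place to enable
// scene-wide tools and apply text adjustments once for the whole scene.
public class AccessibilityFrameworkManager : MonoBehaviour
{
    [Header("Scene tools")]
    public bool enableBrightnessAdjustment;
    [Range(0f, 2f)] public float brightness = 1f;
    public bool enableMagnifiedLens;
    public bool enableRecolouringTool;
    public ColourProfile colourProfile;

    [Header("Text adjustments (applied to every text object)")]
    public TextAdjustments textAdjustments;

    void Start()
    {
        // Text settings are specified once and propagated to all text
        // objects that the framework finds in the scene.
        if (textAdjustments != null) textAdjustments.Apply();

        if (enableRecolouringTool)
        {
            var recolour = gameObject.AddComponent<RecolouringTool>();
            recolour.profile = colourProfile;
            recolour.RecolourScene();
        }
        // Brightness adjustment and the magnified lens would be enabled here.
    }
}
```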

Fig. 1. Adaptations provided by the Accessibility Framework Manager

The framework also allows developers to specify the set of disabilities that their application aims to address. To this end, we have expanded the Unity top bar menu by adding a MenuItem called “AccessibilityManager”. This menu item opens a window that inherits the functionality of Unity’s EditorWindow class and offers a convenient way to configure the scene. It shows the categories of disabilities supported, namely blindness, low vision, colour blindness, hearing impairment, and upper limb motor disabilities. When the colour blindness option is selected, a dropdown menu appears, presenting the different types of colour blindness, namely protanope, deuteranope, and tritanope. The disability types are shown as toggle buttons, allowing the selection of one or multiple options, as illustrated in Fig. 2. Once the values are specified, a JSON-like element is created, representing the scene’s configuration, as displayed in Fig. 3. For example, if the application is targeting blind users, any assisting videos or images will be hidden, and the screen reader will be activated to provide auditory feedback.
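
A compact sketch of such an editor window is given below; as an editor script it must live in an Editor folder. The menu path, field names, and the shape of the emitted JSON are assumptions mirroring the description above.

```csharp
using UnityEditor;
using UnityEngine;

// Sketch of the AccessibilityManager editor window: disability toggles,
// a colour-blindness dropdown, and a JSON-like scene configuration.
public class AccessibilityManagerWindow : EditorWindow
{
    bool blindness, lowVision, colourBlindness, hearing, motor;
    int colourBlindnessType;
    static readonly string[] cbTypes = { "Protanope", "Deuteranope", "Tritanope" };

    [MenuItem("AccessibilityManager/Configure Scene")] // assumed menu path
    static void Open() => GetWindow<AccessibilityManagerWindow>("AccessibilityManager");

    void OnGUI()
    {
        blindness = EditorGUILayout.Toggle("Blindness", blindness);
        lowVision = EditorGUILayout.Toggle("Low vision", lowVision);
        colourBlindness = EditorGUILayout.Toggle("Colour blindness", colourBlindness);
        if (colourBlindness)
            colourBlindnessType = EditorGUILayout.Popup("Type", colourBlindnessType, cbTypes);
        hearing = EditorGUILayout.Toggle("Hearing impairment", hearing);
        motor = EditorGUILayout.Toggle("Upper limb motor disability", motor);

        if (GUILayout.Button("Save configuration"))
        {
            var config = new SceneConfig
            {
                blindness = blindness,
                lowVision = lowVision,
                colourBlindness = colourBlindness ? cbTypes[colourBlindnessType] : "none",
                hearingImpairment = hearing,
                motorDisability = motor
            };
            // Produces a JSON-like configuration, e.g.
            // {"blindness":true,"colourBlindness":"Deuteranope",...}
            Debug.Log(JsonUtility.ToJson(config, true));
        }
    }

    [System.Serializable]
    class SceneConfig
    {
        public bool blindness, lowVision, hearingImpairment, motorDisability;
        public string colourBlindness;
    }
}
```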

Fig. 2. Scene configuration

Fig. 3. JSON object for scene configuration

4 Use Case

To facilitate testing and evaluation, a Unity sample scene is being developed. This scene presents a Virtual Reality museum, showcasing various 3D cultural heritage (CH) artefacts that support accessible interaction for all users, including persons with visual impairments (Fig. 4). Each 3D artefact in the scene that the developer wants to be accessible is associated with the “InteractiveElement.cs” script. Additionally, multiple text GameObjects, implemented using the TextMeshPro component, are positioned within the scene to provide descriptions for each artefact. The placement of these text elements can be customized as per preference. The Accessibility Framework Manager prefab scans the scene, identifying and adjusting the text elements based on user options, as depicted in Fig. 5. When the scanning option is activated, all the interactive elements in the scene are activated one by one, inheriting the accessibility properties that the developer has set via the “InteractiveElement.cs” script. For instance, if the current user is blind or partially sighted, the first interactive element on the list is brought forward in the scene, closer to the user; the embedded screen reader starts reading the corresponding text describing the artefact; and, with the use of edge enhancement, the 3D object is highlighted, as shown in Fig. 6.
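
For completeness, a hypothetical setup snippet for a museum artefact follows, mirroring what a developer would otherwise wire up in the Inspector; MuseumSceneSetup and its fields are illustrative, and InteractiveElement refers to the sketch in Sect. 3.2.

```csharp
using UnityEngine;

// Hypothetical museum scene setup: make one CH artefact accessible by
// attaching InteractiveElement and wiring its text description.
public class MuseumSceneSetup : MonoBehaviour
{
    public GameObject artefact;        // a 3D cultural heritage model
    public GameObject descriptionText; // a TextMeshPro description object

    void Start()
    {
        var element = artefact.AddComponent<InteractiveElement>();
        element.textDescription = descriptionText;
        element.orderInHierarchy = 0;  // visited first by the scanning feature
    }
}
```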

Fig. 4. Museum scene

Fig. 5. Text adjustments: font size and outline

Fig. 6. Screen reader and edge enhancement applied to the active element

5 Conclusion

In this paper, we report on an accessibility framework, currently under development, designed for developers of XR applications. The framework offers customizable features for text, images, videos, and 3D artefacts, along with interactive element descriptions and various controlling mechanisms. Its goal is to simplify the process of creating accessible XR environments for developers, ensuring the adoption of accessibility guidelines, best practices, and state-of-the-art approaches. While the presented use case focuses on museums, it is important to note that the framework can be applied to various XR applications, including games, educational environments, and business environments. By incorporating this framework into their development process, developers can contribute to the advancement of XR accessibility and ensure that individuals with disabilities can fully engage with and enjoy XR experiences across different domains.

Future work entails the extension of the described framework with additional accessibility features to provide improved support for a wide range of individuals with disabilities. In addition, the framework will be tested with developers and end-users to ensure that it addresses their needs in the best possible way.