Abstract
Digital accessibility is vital for ensuring equal access and usability for individuals with disabilities. However, addressing the unique challenges faced by individuals with disabilities in XR environments remains a complex task. This paper presents an ongoing accessibility framework designed to empower developers in creating inclusive XR applications. The framework aims to provide a comprehensive solution addressing the needs of individuals with disabilities, by incorporating various accessibility features based on XR accessibility guidelines, best practices, and state-of-the-art approaches. The current version of the framework focuses on the accessibility of XR environments for blind or partially sighted users, enhancing their interaction with text, images, videos, and 3D artefacts. The proposed work lays the foundation for Extended Reality (XR) developers to easily incorporate accessible assets. In this respect, it offers customizable text settings, alternative text for visual content, and multiple user interaction control mechanisms. Furthermore, it includes features such as edge enhancement, interactive element descriptions with dynamic widgets, scanning for navigation, and foreground positioning of active objects. The framework also supports scene adaptations upon user demand to cater to specific visual needs.
1 Introduction
Ensuring digital accessibility for individuals with disabilities is critical to inclusive Human-Centered design. In the realm of extended reality (XR), accessibility challenges for people with visual impairments, motor impairments, hearing impairments, cognitive, and other disabilities are significant exclusion factors giving rise to novel dimensions of the digital divide. To address these challenges, XR environments need to be carefully designed to integrate various accessibility features seamlessly, which, however, remains a complex task.
To alleviate the difficulties entailed in creating XR environments accessible to all, a universal access approach needs to be adopted, designing systems that take diversity into account and proactively satisfy the variety of implied requirements [1]. To this end, this paper introduces an ongoing XR accessibility framework designed to provide developers with a cohesive approach to incorporating diverse accessibility features into their XR applications. The framework aims to simplify the process of adjusting accessibility settings without burdening developers with multiple disparate tools. The proposed framework is based on a thorough review of relevant literature, thus ensuring that state-of-the-art accessibility features for XR environments are adopted.
The framework offers customizable text settings, alternative text for images and videos, multiple controlling mechanisms for user interaction, and ongoing work on video subtitle customization. It also includes features such as edge enhancement for 3D artefacts, interactive element descriptions with dynamic widgets, scanning support for navigation in the XR environment, and foreground positioning of active objects. Additionally, it incorporates scene adaptations like brightness adjustment, magnified lenses, and recolouring tools to cater to specific visual needs.
The proposed XR accessibility framework is an ongoing work that aims to enhance XR accessibility for developers. While certain features are still under development, the framework continues to evolve and improve. This paper provides background information, an overview of the framework, implementation details, and a use case to showcase its effectiveness in creating inclusive XR applications.
2 Background and Related Work
Today, there is a vast array of online services and applications that have become essential for our daily activities. A notable advancement in this realm is the emergence of online XR applications, which go beyond traditional domains like gaming and education. These applications now span various areas, including business [2], e-commerce [3], and culture [4]. Consequently, ensuring digital accessibility has become a crucial requirement for addressing the fundamental needs of people with disabilities, and thus ensuring their equal access to digital services and applications. Digital accessibility encompasses a growing commitment, by policymakers, public bodies, the research community, and the industry, to develop legislation, guidelines, standards, and assistive technologies that empower people with disabilities to access and utilize various applications [5,6,7].
Although many efforts have been put forward in several domains addressing disabled users, and especially individuals with visual impairments, the challenges they face in engaging with digital content in extended reality (XR) remain significant. People with visual impairments encounter difficulties perceiving visual information, including text, images, videos, and 3D objects, within XR environments [8]. To address these challenges, numerous solutions have been proposed, such as visual display adaptations [9, 10], overlays [11, 12], and audio- or haptic-based [13,14,15,16] approaches for interaction. Visual display adaptations in particular have gained attention as a means to enhance accessibility for this user group. Similarly, individuals with motor impairments face obstacles when interacting with virtual objects and navigating virtual environments. Many existing systems employ complex interaction techniques without customization, overlooking the specific needs of this user group [17, 18]. Commonly used approaches include alternative input devices, eye gaze control, and head movements [19]. Nevertheless, a major challenge for users with motor impairments, regardless of the device employed, is that the point-and-select paradigm is not effective; instead, there is a need for acquiring sequential access to the interactive elements of a User Interface (UI). A common technique employed in this respect is scanning, which sequentially highlights and gives focus to the interactive elements of a UI [20]. For individuals who are deaf or hard of hearing, approaches to enhance accessibility include displayed written content, which however may not be in their native language, as well as signed video descriptions for text, objects, or other interface items (see Footnote 1).
Realizing the pressing need for creating accessible XR environments through a ‘by design’ approach, numerous tools have been proposed in the literature to aid the development of XR experiences, focusing on streamlining and automating commonly utilized functionalities. An illustrative example is the XR Interaction Toolkit [21], specifically designed to simplify the process by offering preconfigured components that ensure seamless compatibility across various Virtual Reality (VR) devices. Moreover, the toolkit incorporates scripts that facilitate fundamental interactions within VR environments. SeeingVR is a Unity plugin for developers, designed to enhance visual display settings in VR applications, offering 14 distinct tools to optimize visual accessibility for individuals with low vision [22]. Despite the progress achieved, many of these efforts remain in the prototype stage within the research field, lacking integration into mainstream applications or platforms, while developers report that they need better integration of accessibility guidelines, alongside code examples of particular accessibility features [23]. Grounded in these approaches, we propose an XR accessibility framework for Unity developers that supports them in developing universally accessible XR applications, addressing the interaction needs of users with visual impairments.
3 The Universal Accessibility XR Framework
The proposed framework has been implemented as an asset package built on the Unity game engine, available for installation in projects. This is an easy-to-use, plug-and-play approach that developers can use to effortlessly embed accessibility into their AR or VR applications.
3.1 Framework Overview
The objective of the framework is to establish a cohesive approach for XR application developers to incorporate various accessibility features. Additionally, the framework aims to offer a straightforward method for adjusting these settings to the specific requirements of each application, with minimal developer effort. The accessibility adjustments provided are derived from a comprehensive review of relevant literature.
Currently, the system supports a range of content adjustments to enhance accessibility for text, images, videos, and 3D artefacts. One of the main goals of the framework is to provide developers with accessible components ready to be used. More specifically, with respect to textual information, the framework offers a wide range of customization options, including modifying the font size and colour, the outline thickness and colour, and an adjustable text background. This feature is particularly valuable for individuals with low vision, as it allows them to enhance text contrast and improve legibility. Images and videos are enhanced with alternative text (alt text), which provides textual descriptions of the visual content. Furthermore, multiple controlling mechanisms, such as resizing, play, and pause options, are incorporated to facilitate user interaction with multimedia content. Additionally, the framework extends its accessibility features to encompass video subtitles, allowing users to customize them to their preferences. This customization includes the ability to modify font styles, background colours, and font sizes, thereby optimizing the viewing experience for individuals with diverse accessibility needs.
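The text adjustments described above can be illustrated with a minimal Unity C# sketch; the settings class and its field names are assumptions for illustration, not the framework's actual API, and only standard TextMeshPro properties are used.

```csharp
using TMPro;
using UnityEngine;

// Hypothetical container for the customizable text settings described above
// (font size and colour, outline thickness and colour); names are assumed.
[System.Serializable]
public class TextAccessibilitySettings
{
    public float fontSize = 36f;
    public Color fontColour = Color.white;
    public float outlineThickness = 0.2f;   // TextMeshPro outline width, in material units
    public Color outlineColour = Color.black;

    // Apply the settings to a single TextMeshPro label.
    public void ApplyTo(TMP_Text label)
    {
        label.fontSize = fontSize;
        label.color = fontColour;
        label.outlineWidth = outlineThickness;
        label.outlineColor = outlineColour;   // implicitly converted to Color32
    }
}
```

An adjustable text background, as mentioned above, would typically be realized as a separate UI panel placed behind the label rather than a TextMeshPro property.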
For 3D artefacts, the framework grants developers the ability to activate the edge enhancement tool, enabling the enhancement of object edges to improve visibility. Furthermore, developers can customize line colours and thickness, affording them greater control over the visual representation of these artefacts. This flexibility allows for enhanced user experiences and accommodates diverse user preferences.
To activate the accessibility features, the developer has to indicate the interactive elements within the scene. To support multiple modalities of description (see Footnote 1), each interactive element is accompanied by a widget that offers supplementary information such as text, images, and videos. Depending on the disability of the target users, the widget is dynamically adjusted. For instance, for blind users, a screen reader is activated automatically, providing auditory descriptions of each interactive object, utilizing the text description associated with the object. For persons with vision deficiencies, appropriate tools are deployed to assist them in perceiving and interacting with the XR environment. As a result, individuals with visual impairments are empowered to effectively access and comprehend the content.
The accessibility framework also incorporates a scanning feature that holds significant value in XR applications for individuals with disabilities. This feature plays a crucial role in facilitating effective navigation through the interactive elements of the XR environment for users with visual impairments. Each interactive element within the scene is activated in a hierarchical order, which is initially determined by the default arrangement of interactive elements in the Unity scene, moving from top to bottom. However, the framework also provides developers with the flexibility to customize this order using a designated field. This capability empowers developers to tailor the scanning experience and optimize accessibility based on specific user needs within XR applications.
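A minimal sketch of the scanning behaviour described above might look as follows; the dwell time, the `orderInHierarchy` field, and the `SetFocused` hook are illustrative assumptions rather than the framework's actual members.

```csharp
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Sketch of scanning: interactive elements are visited in a developer-defined
// order (falling back to scene order), each receiving focus in turn.
public class ScanningController : MonoBehaviour
{
    public float dwellSeconds = 2f;   // how long each element keeps focus

    IEnumerator Start()
    {
        // Sort by the "Order in Hierarchy" field; equal values keep scene order.
        List<InteractiveElement> elements = FindObjectsOfType<InteractiveElement>()
            .OrderBy(e => e.orderInHierarchy)
            .ToList();

        while (true)   // cycle until the user makes a selection
        {
            foreach (InteractiveElement element in elements)
            {
                element.SetFocused(true);    // e.g. highlight and bring forward
                yield return new WaitForSeconds(dwellSeconds);
                element.SetFocused(false);
            }
        }
    }
}
```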
The accessibility framework also includes a notable feature designed to enhance navigation in the XR environment for individuals with visual impairments. This feature ensures that when specific interactive objects, selected by the developer, become active, they are brought forward in the scene, closer to the user. By bringing the active objects into the foreground, the framework facilitates improved visibility and easier interaction, benefiting individuals with visual impairments. Moreover, this functionality may also be beneficial for users with cognitive impairments, as it brings to the user’s focus the element they need to pay attention to, reducing any cognitive burden induced by the complexity of the remaining scene.
Furthermore, the framework extends its accessibility provisions beyond individual content items. It includes scene adaptations, offering functionalities such as brightness adjustment, a magnified lens for enlarged viewing, and a recolouring tool to modify the colour scheme, thus catering to the needs of colour-blind individuals. In more detail, the user can select a colour profile (e.g. protanopia, deuteranopia, tritanopia), and the framework ensures that the scene is appropriately recoloured to address the needs of each user in the best possible way. For instance, in the case of protanopia, the color red is substituted with magenta, while in the case of deuteranopia, green is substituted with light blue. Similarly, for tritanopia, blue is substituted with green [24]. These scene adaptations aim to address the specific visual needs and preferences of users, further enhancing their overall experience within the XR environment.
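The colour substitutions quoted above from [24] can be sketched as a simple per-pixel rule; the dominant-channel test used here is an assumption for illustration, as the paper does not specify how the framework decides which pixels to recolour.

```csharp
using UnityEngine;

// Sketch of the recolouring rule described above: red → magenta (protanopia),
// green → light blue (deuteranopia), blue → green (tritanopia), per [24].
public enum ColourProfile { Protanopia, Deuteranopia, Tritanopia }

public static class Recolouring
{
    public static Color Substitute(Color c, ColourProfile profile)
    {
        switch (profile)
        {
            case ColourProfile.Protanopia:
                // red → magenta: copy the red channel into blue
                if (c.r > c.g && c.r > c.b) return new Color(c.r, c.g, c.r, c.a);
                break;
            case ColourProfile.Deuteranopia:
                // green → light blue: raise the blue channel
                if (c.g > c.r && c.g > c.b) return new Color(c.r, c.g, 1f, c.a);
                break;
            case ColourProfile.Tritanopia:
                // blue → green: move the blue channel into green
                if (c.b > c.r && c.b > c.g) return new Color(c.r, c.b, c.g, c.a);
                break;
        }
        return c;   // leave non-affected colours unchanged
    }
}
```

In practice such a substitution would be applied scene-wide, e.g. as a post-processing shader rather than per-`Color` calls.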
3.2 Implementation
To enhance the description of each interactive 3D artefact (Unity GameObject), additional text, images, or video can be incorporated. To identify the interactive elements within the scene, developers can utilize a C# script called “InteractiveElement.cs” that should be attached to the corresponding GameObjects. This script extends the functionality of the GameObject class in Unity and includes references to the text, image, and video GameObjects, where developers can place the appropriate prefabs. Furthermore, the script includes a field called “Order in Hierarchy” that allows developers to modify the scanning order hierarchy without altering the original order established in the Unity scene.
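Based on the description above, “InteractiveElement.cs” might look roughly like the following sketch; the exact member names and focus behaviour in the framework may differ.

```csharp
using UnityEngine;

// Hypothetical reconstruction of the "InteractiveElement.cs" script described
// above; attach to each GameObject that should be accessible.
public class InteractiveElement : MonoBehaviour
{
    [TextArea]
    public string textDescription;   // read aloud by the screen reader for blind users

    // Slots where developers place the text, image, and video prefabs
    // that populate the element's dynamic widget.
    public GameObject textWidget;
    public GameObject imageWidget;
    public GameObject videoWidget;

    [Tooltip("Overrides the default top-to-bottom scene order used during scanning.")]
    public int orderInHierarchy;

    // Assumed hook used during scanning: show the widget while the
    // element holds the scanning focus.
    public virtual void SetFocused(bool focused)
    {
        if (textWidget != null) textWidget.SetActive(focused);
        if (imageWidget != null) imageWidget.SetActive(focused);
        if (videoWidget != null) videoWidget.SetActive(focused);
    }
}
```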
To facilitate scene adaptations, the accessibility framework offers a Unity Prefab called “Accessibility Framework Manager.” This Prefab includes public entries that grant control over various tools and their respective parameters. The goal of this approach is to establish a standardized way for adjusting accessibility settings within the XR application. For example, developers using the Prefab corresponding to text assets need only make text adjustments once, and these are then applied seamlessly throughout the scene. The framework dynamically identifies all text objects present in the scene and applies the desired changes accordingly. Additionally, the Prefab allows for enabling features such as brightness adjustment, magnified lens, and recolouring tool. By incorporating the Accessibility Framework Manager Prefab into their VR scene, developers can effortlessly control these tools and optimize accessibility settings for users (Fig. 1).
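The scene-wide propagation of text settings described above could be sketched as follows; the component name, fields, and feature toggles are assumptions mirroring the prose, not the Prefab's actual interface.

```csharp
using TMPro;
using UnityEngine;

// Sketch of the "Accessibility Framework Manager": one set of adjustments,
// applied once and propagated to every text object found in the scene.
public class AccessibilityFrameworkManager : MonoBehaviour
{
    [Header("Text adjustments (applied scene-wide)")]
    public float fontSize = 36f;
    public Color fontColour = Color.white;

    [Header("Scene adaptation toggles")]
    public bool enableBrightnessAdjustment;
    public bool enableMagnifiedLens;
    public bool enableRecolouringTool;

    void Start()
    {
        // Dynamically identify all TextMeshPro objects and apply the
        // desired changes, so developers configure the settings only once.
        foreach (TMP_Text label in FindObjectsOfType<TMP_Text>())
        {
            label.fontSize = fontSize;
            label.color = fontColour;
        }
    }
}
```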
The framework also allows developers to specify the set of disabilities that their application aims to address. To this end, we have expanded the Unity top bar menu by adding a MenuItem called “AccessibilityManager”. This menu item inherits the functionality of the Unity EditorWindow and offers a convenient way to configure the scene. It shows the categories of disabilities supported, such as blindness, low vision, colour blindness, hearing impairment, and upper limb motor disabilities. When the colour blindness option is selected, a dropdown menu appears, presenting the different types of colour blindness, namely protanope, deuteranope, and tritanope. The disability types are shown as toggle buttons, allowing the selection of one or multiple options, as illustrated in Fig. 2. Once the values are specified, a JSON-like element is created, representing the scene’s configuration, as displayed in Fig. 3. For example, if the application targets blind users, any assisting videos or images will be hidden, and the screen reader will be activated to provide auditory feedback.
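An editor window of the kind described above could be sketched as follows; the menu path, toggle labels, and JSON layout are illustrative assumptions based on the prose and Figs. 2–3.

```csharp
using UnityEditor;
using UnityEngine;

// Sketch of the "AccessibilityManager" menu item: an EditorWindow with one
// toggle per supported disability category, serialized to a JSON-like string.
public class AccessibilityManagerWindow : EditorWindow
{
    bool blindness, lowVision, colourBlindness, hearingImpairment, motorDisabilities;
    int colourBlindnessType;   // index into colourTypes
    static readonly string[] colourTypes = { "Protanope", "Deuteranope", "Tritanope" };

    [MenuItem("Window/AccessibilityManager")]   // assumed menu location
    static void Open() => GetWindow<AccessibilityManagerWindow>("AccessibilityManager");

    void OnGUI()
    {
        blindness         = EditorGUILayout.Toggle("Blindness", blindness);
        lowVision         = EditorGUILayout.Toggle("Low vision", lowVision);
        colourBlindness   = EditorGUILayout.Toggle("Colour blindness", colourBlindness);
        if (colourBlindness)   // dropdown appears only when colour blindness is selected
            colourBlindnessType = EditorGUILayout.Popup("Type", colourBlindnessType, colourTypes);
        hearingImpairment = EditorGUILayout.Toggle("Hearing impairment", hearingImpairment);
        motorDisabilities = EditorGUILayout.Toggle("Upper limb motor disabilities", motorDisabilities);

        if (GUILayout.Button("Save configuration"))
        {
            // Emit a JSON-like element describing the scene's configuration.
            string json = "{ \"blindness\": " + blindness.ToString().ToLower()
                        + ", \"lowVision\": " + lowVision.ToString().ToLower()
                        + ", \"colourBlindness\": "
                        + (colourBlindness ? "\"" + colourTypes[colourBlindnessType] + "\"" : "false")
                        + " }";
            Debug.Log(json);
        }
    }
}
```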
4 Use Case
To facilitate testing and evaluation, a Unity sample scene is being developed. This scene presents a Virtual Reality museum, showcasing various 3D cultural heritage (CH) artefacts that support accessible interaction for users, including persons with visual impairments (Fig. 4). Each 3D artefact in the scene that the developer wants to be accessible is associated with the “InteractiveElement.cs” script. Additionally, multiple text GameObjects, implemented using the TextMeshPro component, are positioned within the scene to provide descriptions for each artefact. The placement of these text elements can be customized as per preference. The Accessibility Framework Manager prefab scans the scene, identifying and adjusting the text elements based on user options, as depicted in Fig. 5. When the scanning option is activated, all the interactive elements in the scene are activated one by one, inheriting the accessibility properties that the developer has set via the “InteractiveElement.cs” script. For instance, if the current user is blind or partially sighted, then the first interactive element on the list is brought forward in the scene, closer to the user, the embedded screen reader starts reading the corresponding text describing the artefact, and the 3D object is highlighted through edge enhancement, as shown in Fig. 6.
5 Conclusion
In this paper, we report on an ongoing accessibility framework designed for developers of XR applications. The framework offers customizable features for text, images, videos, and 3D artefacts, along with interactive element descriptions and various controlling mechanisms. Its goal is to simplify the process of creating accessible XR environments for developers, ensuring the adoption of accessibility guidelines, best practices and state-of-the-art approaches. While the presented use case focuses on museums, it is important to note that the framework can be applied to various XR applications, including games, educational environments, business environments, etc. By incorporating this framework into their development process, developers can contribute to the advancement of XR accessibility and ensure that individuals with disabilities can fully engage and enjoy XR experiences across different domains.
Future work entails the extension of the described framework with additional accessibility features to provide improved support for a wide range of individuals with disabilities. In addition, the framework will be tested with developers and end-users to ensure that it addresses their needs in the best possible way.
References
Stephanidis, C., Antona, M., Ntoa, S.: Human factors in ambient intelligence environments. In: Salvendy, G., Karwowski, W. (eds.) Handbook of Human Factors and Ergonomics, pp. 1058–1084. Wiley (2021). https://doi.org/10.1002/9781119636113.ch41
Ntoa, S., Birliraki, C., Drossis, G., Margetis, G., Adami, I., Stephanidis, C.: UX design of a big data visualization application supporting gesture-based interaction with a large display. In: Yamamoto, S. (ed.) Human Interface and the Management of Information: Information, Knowledge and Interaction Design: 19th International Conference, HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I, pp. 248–265. Springer International Publishing, Cham (2017). https://doi.org/10.1007/978-3-319-58521-5_20
Margetis, G., Ntoa, S., Stephanidis, C.: Smart omni-channel consumer engagement in malls. In: Stephanidis, C. (ed.) HCI International 2019 - Posters: 21st International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, Part III, pp. 89–96. Springer International Publishing, Cham (2019). https://doi.org/10.1007/978-3-030-23525-3_12
Margetis, G., Papagiannakis, G., Stephanidis, C.: Realistic natural interaction with virtual statues in X-reality environments. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-2/W11, 801–808 (2019). https://doi.org/10.5194/isprs-archives-XLII-2-W11-801-2019
Wu, H.-Y., Calabrèse, A., Kornprobst, P.: Towards accessible news reading design in virtual reality for low vision. Multimed. Tools Appl. 80(18), 27259–27278 (2021). https://doi.org/10.1007/s11042-021-10899-9
Hoppe, A.H., Anken, J.K., Schwarz, T., Stiefelhagen, R., van de Camp, F.: CLEVR: a customizable interactive learning environment for users with low vision in virtual reality. In: Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, in ASSETS 2020. New York, NY, USA: Association for Computing Machinery, Oct. 2020, pp. 1–4. https://doi.org/10.1145/3373625.3418009
Weir, K., Loizides, F., Nahar, V., Aggoun, A., Buchanan, G.: Creating a bespoke virtual reality personal library space for persons with severe visual disabilities. In: Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020, in JCDL 2020. New York, NY, USA: Association for Computing Machinery, Aug. 2020, pp. 393–396. https://doi.org/10.1145/3383583.3398610
Kasowski, J., Johnson, B.A., Neydavood, R., Akkaraju, A., Beyeler, M.: Furthering visual accessibility with extended reality (XR): a systematic review. arXiv, Sep. 10, 2021. http://arxiv.org/abs/2109.04995. Accessed 13 Sep 2022
Zhao, Y., Szpiro, S., Azenkot, S.: ForeSee: a customizable head-mounted vision enhancement system for people with low vision. In: Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, in ASSETS 2015. New York, NY, USA: Association for Computing Machinery, Oct. 2015, pp. 239–249. https://doi.org/10.1145/2700648.2809865
Pamparău, C., Vatavu, R.-D.: FlexiSee: flexible configuration, customization, and control of mediated and augmented vision for users of smart eyewear devices. Multimed. Tools Appl. 80(20), 30943–30968 (2021). https://doi.org/10.1007/s11042-020-10164-5
Zhao, Y., Szpiro, S., Knighten, J., Azenkot, S.: CueSee: exploring visual cues for people with low vision to facilitate a visual search task. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, in UbiComp 2016. New York, NY, USA: Association for Computing Machinery, Sep. 2016, pp. 73–84. https://doi.org/10.1145/2971648.2971730
Langlotz, T., Sutton, J., Zollmann, S., Itoh, Y., Regenbrecht, H.: ChromaGlasses: computational glasses for compensating colour blindness. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, in CHI 2018. New York, NY, USA: Association for Computing Machinery, Apr. 2018, pp. 1–12. https://doi.org/10.1145/3173574.3173964
Racing in the dark: exploring accessible virtual reality by developing a racing game for people who are blind. https://journals.sagepub.com/doi/epdf/10.1177/1071181321651224. Accessed 13 Oct 2022
Schneider, O., et al.: DualPanto: a haptic device that enables blind users to continuously interact with virtual worlds. In: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, in UIST ‘18. New York, NY, USA: Association for Computing Machinery, Oct. 2018, pp. 877–887. https://doi.org/10.1145/3242587.3242604
Zaal, T., Akdag Salah, A.A., Hürst, W.: Toward inclusivity: virtual reality museums for the visually impaired. In: 2022 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Dec. 2022, pp. 225–233. https://doi.org/10.1109/AIVR56993.2022.00047
Ji, T.F., Cochran, B., Zhao, Y.: VRBubble: enhancing peripheral awareness of avatars for people with visual impairments in social virtual reality. In: Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, in ASSETS 2022. New York, NY, USA: Association for Computing Machinery, Oct. 2022, pp. 1–17. https://doi.org/10.1145/3517428.3544821
Gerling, K., Dickinson, P., Hicks, K., Mason, L., Simeone, A.L., Spiel, K.: Virtual reality games for people using wheelchairs. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, in CHI 2020. New York, NY, USA: Association for Computing Machinery, Apr. 2020, pp. 1–11. https://doi.org/10.1145/3313831.3376265
Mott, M., Tang, J., Kane, S., Cutrell, E., Morris, M.: ‘I just went into it assuming that I wouldn’t be able to have the full experience’: understanding the accessibility of virtual reality for people with limited mobility. In: ASSETS 2020: Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, Oct. 2020, pp. 1–13. https://doi.org/10.1145/3373625.3416998
Heilemann, F., Zimmermann, G., Münster, P.: Accessibility guidelines for VR games - a comparison and synthesis of a comprehensive set. Front. Virtual Real. 2 (2021). https://www.frontiersin.org/articles/10.3389/frvir.2021.697504. Accessed 12 Oct 2022
Ntoa, S., Margetis, G., Antona, M., Stephanidis, C.: Scanning-based interaction techniques for motor impaired users. In: Kouroupetroglou, G. (ed.) Assistive Technologies and Computer Access for Motor Disabilities, pp. 57–89. IGI Global (2014). https://doi.org/10.4018/978-1-4666-4438-0.ch003
XR Interaction Toolkit documentation, version 2.3.2. https://docs.unity3d.com/Packages/com.unity.xr.interaction.toolkit@2.3/manual/index.html. Accessed 30 May 2023
Zhao, Y., Cutrell, E., Holz, C., Morris, M.R., Ofek, E., Wilson, A.D.: SeeingVR: a set of tools to make virtual reality more accessible to people with low vision. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, in CHI 2019. New York, NY, USA: Association for Computing Machinery, May 2019, pp. 1–14. https://doi.org/10.1145/3290605.3300341
Ji, T.F., Hu, Y., Huang, Y., Du, R., Zhao, Y.: A preliminary interview: understanding XR developers’ needs towards open-source accessibility support. In: 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Mar. 2023, pp. 493–496. https://doi.org/10.1109/VRW58643.2023.00107
Wong, B.: Points of view: color blindness. Nat. Methods 8, 441 (2011). https://doi.org/10.1038/nmeth.1618
Acknowledgements
This work has received funding from the EU’s Horizon Europe research and innovation programme under Grant Agreement No 101060660 (SHIFT). This paper reflects only the authors’ views and the Commission is not responsible for any use that may be made of the information it contains.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
Cite this paper
Valakou, A., Margetis, G., Ntoa, S., Stephanidis, C. (2024). A Framework for Accessibility in XR Environments. In: Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G. (eds) HCI International 2023 – Late Breaking Posters. HCII 2023. Communications in Computer and Information Science, vol 1958. Springer, Cham. https://doi.org/10.1007/978-3-031-49215-0_31
DOI: https://doi.org/10.1007/978-3-031-49215-0_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-49214-3
Online ISBN: 978-3-031-49215-0