Abstract
We examine in this work the desirability and preferences of people with visual impairments for assistive vision, i.e., vision rehabilitation and enhancement, delivered by smart eyewear devices. We present results from a vignette experiment with N = 17 participants with visual impairments, who reported their preferences regarding 32 hypothetical scenarios that we formulated for assistive vision, e.g., long-distance vision, peripheral vision, highly sensitive perception of colors, thermal vision, night vision, and others. Our results show higher desirability (average score of 4.21 out of 5) for assistive vision scenarios addressing rehabilitation of lost vision functions compared to scenarios that propose Augmented Reality-based enhancements of human vision (3.76) or visual perception in other regions of the electromagnetic spectrum, such as thermal or infrared vision (3.36). To understand these results, we conducted a second vignette study involving N = 178 participants without visual impairments, whose desirability ratings for vision augmentation (3.44/5) were lower than those of participants with visual impairments (3.75/5). We discuss implications of our results for augmented and mediated vision delivered by smart eyewear devices.
1 Introduction
Smart eyewear devices with built-in video cameras, Wi-Fi connectivity, and see-through displays [38] provide wide opportunities for researchers and practitioners to design, engineer, and evaluate new applications for assistive vision. Common examples include magnification, contrast enhancement, and color replacement [8, 33, 36, 37, 40, 69, 70, 76, 92, 95] that address and aim to correct specific vision deficiencies. Such applications represent instances of mediated vision [47] by implementing vision rehabilitation and compensating lost vision functions. The emergence of Augmented Reality (AR) and Mixed Reality (MR) technology [9, 15] readily available on mobile and wearable computing devices has enabled augmented vision that, unlike mediated vision, superimposes computer-generated content on top of the visual reality perceived by the user. Augmented vision enables new types of applications for assisted vision, including assisted navigation [94], face recognition and person identification [98], sign and text reading [36], scene recognition [25], as well as new experiences for home entertainment [80, 83], to name just a few. Moreover, combining augmented and mediated realities toward augmediation [48] opens up new opportunities for applications in assistive technology for human vision.
However, while researchers and practitioners develop technology for smart eyewear and assistive vision, it is equally important to understand the needs, preferences, and desirability of end users for assistive vision, such as of people with visual impairments. This desideratum implies conducting user studies, interviews, and surveys to unveil such preferences, an approach that has been adopted recently to inform design of AR technology for specific application domains [60, 64, 80]. However, in what regards smart eyewear, accessible computing, and people with visual impairments, only a handful of such studies have been conducted to date [23, 64, 93, 99]. While this prior work has unveiled important findings about the perceptions of people with visual impairments regarding smart eyewear devices, little is still known about their needs and preferences for augmented and mediated vision scenarios that are possible with today’s technology, such as face recognition [98], color correction [40], night vision [54], extended peripheral vision [20], or thermal vision [1]. In this work, we present results from a vignette experiment [6, 13, 28] in which participants with visual impairments were elicited for their preferences for assisted vision. Our work equally covers people without visual impairments as well, for which we want to understand their preferences for the various ways in which human vision could be mediated, augmented, and augmediated with smart eyewear leading towards Verbeek’s [82] posthuman vision scenarios through technological mediation and, respectively, to practical application opportunities of Chambel et al.’s [19] concept of Alternate Realities, where new devices, transmission paradigms, and content formats enabled by multimedia technology make new kinds of immersive experiences possible for end users.
The contributions of our work are as follows:
1. We conduct an examination of the preferences for augmented and mediated vision of N = 17 participants with visual impairments of various types and severity. In order to collect preferences for a wide range of possible scenarios for assisted vision (including applications readily accessible today, such as color correction and contrast enhancement, but also applications not yet achievable with today's technology, such as X-ray vision), we conduct our examination in the form of a vignette experiment [6, 13, 28].
2. To instrument our user study, we introduce a taxonomy of vision augmentation and mediation with four categories: (1) human vision with no impairments, (2) extended vision in the visible spectrum, (3) augmediated vision in the visible spectrum, and (4) augmediated vision in other regions of the electromagnetic spectrum, with a total of 32 subcategories representing possible scenarios for assistive vision.
3. We replicate our vignette experiment with N = 178 participants without visual impairments, constituting a control group to contrast the findings obtained with people with visual impairments. Informed by our empirical observations, we discuss implications for assisted, mediated, and augmented vision for smart eyewear computing.
2 Related work
We discuss in this section prior work on applications of assistive vision for people with visual impairments. We also overview interaction challenges with computing technology experienced by users with visual impairments and connect to prior work that documented well-being and coping strategies adopted by people with visual impairments in everyday life. Before proceeding further, we define several key concepts employed in our work.
2.1 Definitions
Smart eyewear
In this work we focus on “smart eyewear” devices that, according to the classification of Kress et al. [38] and their discussion on the segmentation of the Head-Mounted Display (HMD) markets, feature integrated optical combiners and prescription lenses, i.e., Rx functionality. Smart eyewear extends smartglasses, which incorporate displays (either occlusive or see-through) but whose optical combiner is not part of the Rx lens. In turn, smartglasses extend the functionality of connected glasses, which pack Bluetooth and/or Wi-Fi connectivity and digital imaging through embedded video cameras, but (usually) no display, according to Kress et al. [38].
Mediated and Augmented Vision (M&A vision)
We are interested in this work in understanding desirability for smart eyewear applications that assist human vision, either by means of augmentation or mediation. The distinction between the two, clarified by Mann [47], is that augmentation superimposes digital content on top of the perception of visual reality, i.e., Augmented Reality, whereas mediation presents the user with a modified version of the visual reality, such as by employing computer vision and image processing algorithms, i.e., Mediated Reality [47, 48]. In this work, we are interested in all techniques that enhance visual perception, including augmediation, which combines augmentation and mediation, i.e., Augmediated Reality [48].
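Mann's distinction can be made concrete at the level of a single video frame. The sketch below is illustrative only: the function names and the particular contrast transform are ours, not Mann's, and a frame is simplified to a list of grayscale pixel values.

```python
def mediate(frame):
    """Mediation: present a modified version of the visual reality,
    here a simple contrast boost around mid-gray (illustrative placeholder
    for any computer vision / image processing transform)."""
    return [min(255, max(0, int((p - 128) * 1.5 + 128))) for p in frame]

def augment(frame, overlay):
    """Augmentation: superimpose computer-generated content on top of the
    otherwise unmodified frame; None marks transparent overlay pixels."""
    return [o if o is not None else p for p, o in zip(frame, overlay)]

def augmediate(frame, overlay):
    """Augmediation [48]: mediate the frame first, then augment on top."""
    return augment(mediate(frame), overlay)

# Example: a 3-pixel frame with one computer-generated overlay pixel.
frame = [0, 100, 200]
overlay = [None, 255, None]
print(augment(frame, overlay))     # overlay only: [0, 255, 200]
print(augmediate(frame, overlay))  # contrast boost + overlay: [0, 255, 236]
```

The composition order in `augmediate` reflects the intuition that mediation transforms the captured reality, while augmentation adds content that should not itself be distorted by the mediation filter.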
Visual impairments
The term “visual impairments” covers a range of visual abilities that can be classified according to distance visual acuity, from mild to moderate, severe, and blindness [85]. Low vision represents vision loss that cannot be corrected by medical or surgical treatment or prescription glasses. Unlike people who are blind, people with low vision do rely on their visual abilities to perform everyday activities, but face considerable challenges and physiological discomfort [86]. In this work, we address people with visual impairments broadly, including people who are blind, who could benefit from M&A vision by means of sensory substitution, e.g., haptic feedback for interaction in virtual worlds [67].
2.2 Augmented vision for people with visual impairments
Prior work in accessible computing has examined the benefits of AR technology to reduce accessibility gaps for people with visual impairments [21, 73, 91], but also for people without visual impairments who may experience a temporary decrease of visual acuity under specific circumstances, such as low ambient light or eye fatigue, known as “situationally induced impairments and disabilities” (SIIDs) [65, 89]. AR applications for assistive vision have been proposed for smartglasses [8, 36, 40, 59, 70, 76, 94] and HMDs [25, 34, 37, 49, 59, 75, 95–97], but also smartphones [33], finger-worn devices [69], and VR gear [92]. Researchers have implemented and evaluated various techniques for assistive vision, such as magnification, edge enhancement, brightness and contrast adjustment, text extraction, and black/white reversal; see the ForeSee [95], SeeingVR [92], and FlexiSee [57] prototypes for representative examples. Itoh and Klinker [37] proposed a system designed to filter out optical abnormalities by superimposing a restorative image on the user’s field of view rendered via an HMD; Tang et al. [75] adopted a similar approach for see-through lenses; and Melillo et al. [49] employed video see-through technology to render video with a restorative filter. Other representative prototypes are ChromaGlasses [40] and Chroma [76], designed to shift the color scheme in the video acquired by the built-in camera according to the specific type and severity of color vision deficiency. Regarding the control of such features, Aiordăchioae et al. [3] performed an inventory of voice input commands for assistive applications for smartglasses.
Some AR systems for assistive vision were designed to help with specific tasks, such as mobility [25], easier access to physical interfaces in the real world [33], obstacle avoidance [34], or sign reading [36]. For example, Everingham et al. [25] employed Computer Vision and classification techniques to identify obstacles, vehicles, and road pavement in video, which were highlighted for users with distinct colors to assist mobility in urban environments. For indoor scenarios, the CueSee system [96] was designed to highlight specific objects to assist users with low vision to be more effective at performing specific visual search tasks. Hicks et al. [34] leveraged residual vision to deliver information to users about the size and localization of obstacles: a low-resolution black and white image was used to indicate the distance, encoded using brightness levels, to nearby objects. Indoor way-finding was equally explored, such as by Huang et al. [36], who developed a prototype for sign identification on walls and doors, displayed magnified to users and read using text-to-speech; and Zhao and Azenkot [94] used AR to assist people with low vision for navigation by displaying visual highlights aligned with stairs. Aiordăchioae et al. [2] proposed wearable devices to address situations of inattentional blindness, where objects and phenomena automatically detected in the video captured by the camera embedded in a pair of glasses are presented to the user in the form of vibrotactile patterns delivered at finger, wrist, and forearm level. To support remote assistance, Pamparău and Vatavu [57] presented FlexiSee, a system for vision mediation that enabled secondary users, in the form of vision monitors and vision assistants, to view and control the mediation presented to the primary user via the HMD display, from a distance. And Pamparău et al. 
[56] described “do you control what I see” scenarios for the remote control of vision mediation, which they contrasted to the conventional “do you see what I see” feature. Other applications have targeted reading tasks. For example, Sterns et al. [69, 70] developed a prototype using the HoloLens HMD and a finger-worn camera, and Guo et al. [33] introduced VizLens, a mobile application that enabled users to capture a photograph of a real-world physical interface, e.g., of a microwave oven, and receive guidance about how to use it.
2.3 Interaction challenges with computing technology for users with visual impairments
Several approaches have been adopted in the scientific literature to understand the interaction challenges experienced by people with visual impairments with computing technology. One promising approach, suggested and applied by Schipor et al. [66] and Rusu et al. [63], relies on the use of models of human vision (neurobiological, cognitive, and neurocognitive models) to inform design of accessible computing technology solutions in accordance with the type and severity of the visual impairment; see, for example, the interpretation of gesture recognition results for people with low vision in relation to such models [79]. Other approaches have employed direct observation of people with visual impairments while using assistive technology or indirect observation to collect and document interaction challenges. For example, Szpiro et al. [74] observed eleven participants with low vision during simple tasks involving smartphones, tablets, and computers. They found that their study participants often preferred to access information with the help of visual assistive tools, e.g., magnification and contrast enhancement, rather than via aural feedback. However, they also found that this strategy led to considerable delays in performing tasks. Brady et al. [17] conducted a large-scale study involving more than 5,000 blind people who asked more than 40,000 questions via the VizWiz social application. By analyzing this large dataset, the authors derived several categories of questions that people with visual impairments wanted answers for, from object identification to description and help with reading text and signs. And other approaches have employed interviews to elicit from people with visual impairments their needs, preferences, and desires for assistive technology. For example, Sandnes et al. 
[64] reported, from interviews conducted with three individuals with visual impairments, that face and text recognition were the most important features for smartglasses-based assistive vision. Rusu et al. [63] employed semi-structured interviews with five participants with visual impairments and documented their difficulties encountered while walking, reading public signs, locating objects, recognizing faces, working, or reading news. And Zhao et al. [98] interviewed eight people with visual impairments to understand their needs for on-line social activities. In another study, Zhao et al. [93] compared the performance of twenty participants with low vision against a control group regarding the use of mainstream AR smartglasses. The tasks considered in their study involved shape and text recognition while sitting and walking. Results showed that the differences in performance found for the sitting and walking experimental conditions followed a similar pattern for both groups of participants with and without visual impairments, which led the authors to suggest the possibility of applying similar assistive strategies for people with and without visual impairments alike.
AR-based assistive vision also comes with several challenges that need to be overcome by careful design. For example, one challenge in the design of assistive technology in general, and assistive vision in particular, is represented by the stigma related to using and wearing visual aids in public [64], i.e., the “AT effect” [61]. Another challenge is to reduce frustration with AR devices, which may introduce delays, exhibit synchronization issues between the virtual content and the real world experienced via the see-through display [76], and require additional interactions [74].
2.4 Well-being and coping strategies for people with visual impairments
In this work, we collect measurements of well-being and subjectively perceived quality of life from our participants with visual impairments, and we connect these measurements to their preferences for M&A vision. In this section, we overview prior work that examined well-being and coping strategies for people with visual impairments.
Prior work has shown that vision deficiencies influence social functioning and autonomy and are related to higher levels of emotional distress, depression, anxiety, frustration, anger, stress, financial strain, loneliness, and low levels of well-being [7, 18, 24, 26, 27, 30]. Also, visual impairments in children and young adults lead to more negative emotions and lower levels of physical, psychological, and social well-being compared to the general population [7, 62]. Furthermore, children with visual impairments have lower levels of social-emotional competences compared to children without visual impairments [39] since vision represents a crucial factor during development. For adults, vision impairments may affect family life (e.g., by increasing family stress and lowering marital quality) and work life alike (e.g., by contributing to unemployment and financial strain) [26]. Since vision represents a key factor in social interaction, as it mediates processes such as facial recognition, eye contact, and so on, people with low vision are at high risk of social isolation and loneliness [18]. Also, prior work has reported that older adults with visual impairments exhibit higher levels of depression compared to people without impairments [24].
People with visual impairments experience challenges with functioning, autonomy, and social interactions that are known sources for emotional problems. Empirical research has indicated that vision loss is associated with negative consequences for emotional well-being, social participation, and career goals and motivation [30]. Furthermore, visual impairments seem to affect not only the people who have them, but also the members of their families. For instance, prior work has reported that parents of children with visual impairments experience helplessness, guilt, anxiety, stress, and insomnia [44]. Also, spouses of people with sensory deficiencies may show low levels of psychological and relational well-being [41, 42]. People with visual impairments employ various coping strategies to compensate for their vision loss. For example, problem-focused coping (e.g., taking actions, making plans, and focusing on solutions), positive refocusing (thinking of positive and joyful issues), re-engagement in alternative, meaningful goals, family acceptance, and optimism represent effective strategies that contribute to lowering depression [14, 31, 42, 71]. In contrast, avoidance coping (i.e., distracting from the problem) and rumination (i.e., repetitive thinking about negative experiences and feelings) have been related to depressive symptoms and low levels of life quality [31, 72]. Electronic aids for low vision that enable people with visual impairments to be more independent also have a positive effect on their psychological well-being [30].
2.5 Eliciting responses to hypothetical situations using vignettes
In this work, we focus on understanding desirability and preferences for new technology, including technology that is not yet widely available or affordable, such as high-definition thermal cameras or X-ray vision. Therefore, we conduct our examination in the form of a “vignette study” [6, 13, 28], in which participants are asked to react to and express their preferences for fictional situations regarding M&A vision. Since vignette studies have seldom been applied in HCI [32, 35, 43] compared to other fields, such as psychology and sociology [6, 12, 13, 16, 28, 88], we briefly present in this section their main characteristics and highlight their suitability for our scientific investigation.
Finch [28] described vignettes as “short stories about hypothetical characters in specified circumstances, to whose situation the interviewee is invited to respond” (p. 105). More generally, a vignette is “a short, carefully constructed description of a person, object, or situation, representing a systematic combination of characteristics” [6, p. 128]. Barter and Renold [13] identified many use cases for vignette studies, such as eliciting interpretations of actions, clarifications of individual judgments, and explorations of sensitive topics in ways that are less personal and threatening to the participants of a study. Regarding the actual implementation, vignettes may be presented to participants in various forms, from keywords to text (dialog and narratives) and graphical formats (cartoons and pictures) up to multimedia content [6, 13]. Vignette studies have also been applied in HCI, but to a lesser extent. For example, Vatavu and Vanderdonckt [81] reported the results of a vignette study in which participants were presented with visual mock-ups of graphical menus for smartglasses from a large design space, which they were asked to evaluate in terms of visual aesthetics, a challenge that was addressed by using a randomized A/B technique [78] for comparing user interface design alternatives via the web; Hoyle et al. [35] conducted a vignette study using Amazon Mechanical Turk to collect judgments regarding the appropriateness of posting private photographs online; and Lindgaard et al. [43] employed a vignette study to inform the design of a diagnostic decision support system.
In the case of M&A vision, a vignette represents a hypothetical description of assisted vision enabled by smart eyewear devices, such as technology for providing better contrast, higher resolution, better peripheral vision, better vision during nighttime, etc. An important characteristic of a vignette is that it enables the participants of a study to define the situation depicted by the vignette in their own terms [13]. This aspect limits the interviewer's influence on the interviewee, such as the imposing of perspectives. Our choice of the vignette instrument for our investigation enables us to collect needs, preferences, and feedback regarding a wide variety of M&A vision scenarios, including applications not yet available. By adopting such an approach, we aim to collect data to inform further research and development in assistive vision.
3 A working taxonomy for M&A scenarios for assistive vision
In this work, we collect and report preferences for M&A vision in order to derive implications for assistive vision and smart eyewear devices. To instrument our vignette study, we devised a taxonomy of M&A vision informed by prior work and our brainstorming of possible applications of Mediated and Augmented Reality for vision rehabilitation and vision enhancement. In this section, we present the categories of this taxonomy.
Prior work has described various applications of smart eyewear devices to assist visual perception [8, 29, 33, 36, 40, 49, 64, 69, 70, 75, 76, 95, 96, 99], which we used to extract scenarios for M&A vision. Also, prior work in computer-generated and computer-mediated realities has presented many theoretical and practical developments in Augmented [9, 15], Mixed [51–53], Mediated [47], Multimediated [48], Alternate [19], and Cross-Reality [58], which we used to envision possible application scenarios for what mediated and augmented vision may look like in these hybrid physical-virtual realities. Based on this prior work, we identified four categories of M&A vision scenarios, enumerated below. For each category, we devised eight possible scenarios for examination in our vignette study, addressing specific characteristics of human vision (e.g., contrast, resolution, long-distance vision) or possibilities for sensing and visualization technology to enhance visual perception (e.g., by means of 360° video cameras or AR visualizations); see Table 1. Our four categories of M&A vision are:
- Category #1: Human vision with no impairments. This category includes scenarios in which computing technology implements vision rehabilitation to compensate for vision deficiencies, such as correcting color perception [40, 76] or improving contrast and magnification [95], to the levels expected for human vision in the absence of any impairments, e.g., 20/20 visual acuity or a 190° visual field for binocular vision.
- Category #2: Extended human vision in the visible spectrum. This category includes scenarios in which video cameras are used to extend the limits and capabilities of human vision. Examples include remote vision, where users can see events taking place in a remote location by means of live video streaming; panoramic vision, enabled by 360° video cameras; and alternated perspectives, where the same scene can be viewed from multiple points of view, as in video surveillance systems. Any scenario that employs video cameras to extend the natural limits of human vision typically falls into this category.
- Category #3: Augmediated vision in the visible spectrum. In this category, we place applications that apply Artificial Intelligence techniques (e.g., Machine Learning, Computer Vision) to recognize objects and extract meaning from video in order to present users with relevant information about objects in their field of view; face and emotion recognition and AR applications fall into this category. By augmediated vision we understand live video streams that are both augmented and mediated [48].
- Category #4: Augmediated vision in other regions of the electromagnetic spectrum. This category extends the applications from Category #3 to regions of the electromagnetic (EM) spectrum beyond visible light. Examples include infrared vision and thermal vision, which can be implemented with sensors active in those frequency ranges, but also futuristic scenarios that we imagined in our brainstorming, e.g., material vision, where the material from which an object is made can be identified by mere eyesight. The category also includes AR applications that operate in other ranges of the EM spectrum, as well as applications that address other senses beyond vision, e.g., the ability to appreciate distances to objects by means of sensory substitution.
4 Study #1: Preferences of people with visual impairments for M&A vision
We conducted a vignette study to collect the preferences of people with visual impairments for possible application scenarios for augmented and mediated vision enabled by smart eyewear devices.
4.1 Study design
Participants
Seventeen people with visual impairments (10 female), with ages between 17 and 73 years (M = 25.1, SD = 16.8 years), participated in our experiment; see Tables 2 and 3 for their demographic details.
Apparatus
We demonstrated to participants several features of the Microsoft HoloLens HMD [50], the Vuzix Blade AR smartglasses [84], and the NorthVision Technologies NC-05 camera glasses [55], representing various instances of eyewear devices: from HMDs with photorealistic graphics rendering and see-through displays (both eyes), to light AR glasses with a see-through display (one eye) and limited graphics capability, to glasses with an embedded video camera and Wi-Fi connectivity, but no optical lenses. The HoloLens was used to project 3-D holograms in the room (e.g., a floating island) with the built-in Holograms app, and participants were invited to discover and explore those holograms by moving around the room and inspecting them closely. Our demonstration of the Vuzix Blade consisted of the built-in Photos app for picture visualization, where participants could browse through images and videos stored on the glasses and view them on the optical lenses. Finally, participants used the NC-05 glasses with an embedded micro video camera to stream live video to a connected smartphone, where the image could be magnified. Figure 1 illustrates a few snapshots from the experiment.
Task
Participants followed a six-step procedure consisting of questionnaires, a visual function test, an interview, and feedback elicitation regarding M&A vision scenarios, as follows:
1. Preliminary questionnaire. The goal of the study was presented to participants, and their consent to participate was obtained. We collected demographic information (age, gender, visual impairment).
2. Visual acuity and contrast test. We conducted visual acuity and contrast testing with the Freiburg Vision Test (FrACT) application (v3) [10]. To evaluate visual acuity, we used the Tumbling E 24-trial test and the decimal logarithm of the Minimum Angle of Resolution, measured in arcminutes (see [68]). To evaluate the contrast threshold, we used the Landolt C 18-trial test and the decimal logarithm of the inverted Weber contrast threshold [11]. We also asked participants to report any assistive devices and/or technology that they were using at the time of the study, such as prescription eyeglasses, magnifying lenses, or specific software settings for computer screens and mobile devices, e.g., larger fonts, screen readers, voice input, etc.
3. Visual functioning questionnaire. The Visual Functioning Questionnaire (VFQ-25) [46] measures the influence of a visual impairment on physical, social, and emotional well-being. The questionnaire has 25 items that target general health and vision (e.g., “At the present time, would you say your eyesight using both eyes is excellent, good, fair, poor, or very poor, or are you completely blind?”), the difficulty of performing various activities (e.g., “How much difficulty do you have reading street signs or the names of stores?”), and vision problems (e.g., “Do you accomplish less than you would like because of your vision?”). Items were rated using 5-point and 6-point Likert scales. For our study, we used just 23 items of the VFQ-25 and discarded the two items that referred to driving.
4. Subjective happiness scale. The Subjective Happiness Scale (SHS) [45] is a 4-item scale designed to assess global subjective happiness (i.e., well-being) relative to other people, e.g., “Some people are generally very happy. They enjoy life regardless of what is going on, getting the most out of everything. To what extent does this characterization describe you?” The items of the SHS questionnaire are rated using 7-point Likert scales.
5. Smart eyewear technology showcase. We presented participants with the Microsoft HoloLens HMD [50], the Vuzix Blade light AR glasses [84], and the NorthVision Technologies NC-05 video camera glasses [55] and let them explore those devices and specific applications; see Fig. 1 for photos captured during the study. We chose these devices for their different capabilities regarding computing resources and photorealism for rendering AR applications, representing different instances of eyewear devices according to the classification of Kress et al. [38].
6. Semi-structured interview. We employed a semi-structured interview to unveil preferences, needs, and desires for vision mediation and augmentation using eyewear technology, including mobile and wearable devices. At this stage of the study, we introduced to our participants the 32 M&A vision scenarios enumerated in Table 1 in the form of hypothetical situations, e.g., “I would like to see better under strong ambient light” or “I would like to be able to identify more easily the people I am talking to”. We elicited participants’ desirability of each scenario in the form of a preference rating on a scale from 1 (scenario very little desirable or not applicable to the participant) to 5 (scenario very desirable and important to the participant). Figure 2 shows photos captured during this part of the study. To make sure that all participants understood the scenarios, and to avoid any reading difficulties they might have had, the questionnaire was read and explained by a qualified psychologist. Each scenario from Table 1 was accompanied by detailed explanations, e.g., “this means that you could perceive more nuances of the same color, for example more tones of yellow or pink” for scenario S29 (high color sensitivity); “imagine that you could see with your eyes the data being transferred in the wireless network” for S31 (radio vision); and “this means that you could perceive that part of radiation that is responsible for tanning and sunburns” for scenario S32 (UV vision).
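The two FrACT scores used in step 2 follow standard psychophysical definitions, sketched below (the function names are ours; a logMAR of 0.0 corresponds to 20/20 acuity, i.e., a Minimum Angle of Resolution of 1 arcminute, and higher logMAR values indicate worse acuity, while higher log contrast sensitivity indicates better contrast perception):

```python
import math

def logmar(mar_arcmin):
    """Decimal logarithm of the Minimum Angle of Resolution (in arcminutes):
    logMAR 0.0 = 20/20 acuity; logMAR 1.0 = 20/200 acuity."""
    return math.log10(mar_arcmin)

def log_contrast_sensitivity(weber_threshold):
    """Decimal logarithm of the inverted Weber contrast threshold:
    e.g., a threshold of 1% Weber contrast gives a score of 2.0."""
    return math.log10(1.0 / weber_threshold)

print(logmar(1.0))                     # 0.0 (20/20 acuity)
print(logmar(10.0))                    # 1.0 (20/200 acuity)
print(log_contrast_sensitivity(0.01))  # 2.0
```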
Design
Our study was a within-subjects design with one independent variable: Scenario, a nominal variable with 32 conditions representing scenarios of assistive M&A vision for people with visual impairments; see Table 1.
Measures
We used the following measures:
-
1.
Desirability-rating, ordinal variable, expressing participants’ desirability and preferences for each M&A vision application scenario from Table 1, which we measured using a 5-point Likert scale with the following items: 1 - “Not at all or very little desirable (this scenario does not apply to my case)”, 2 - “Little desirable”, 3 - “Undecided (beneficial scenario, but I do not necessarily need or desire it)”, 4 - “Desirable”, and 5 - “Very desirable (this scenario is very important to me)”.
-
2.
VFQ25, ratio variable, computed by averaging the vision-targeted subscale scores, i.e., general vision, ocular pain, near activities, distance activities, vision-specific social functioning, vision-specific mental health, vision-specific role difficulties, vision-specific dependency, color vision, and peripheral vision [46]. VFQ25 takes values between 0 (worst possible visual functioning) and 100 (best possible visual functioning); see the VFQ-25 manual [77].
-
3.
SHS, the Subjective Happiness Score, computed by averaging participants’ answers to the items of the SHS scale. The range of the SHS values is from 1 to 7 with higher scores representing greater well-being.
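Both composite measures reduce to simple averages of their components, which can be sketched as follows. The subscale values and item answers below are made up for illustration only; actual scoring follows the VFQ-25 manual [77] and the SHS scale, and reverse-scored SHS items are assumed to have been recoded beforehand.

```python
from statistics import mean

# Hypothetical VFQ-25 subscale scores (0-100), for illustration only.
vfq25_subscales = {
    "general vision": 52.9, "ocular pain": 69.1, "near activities": 57.5,
    "distance activities": 64.7, "social functioning": 68.4,
    "mental health": 67.1, "role difficulties": 64.0,
    "dependency": 72.4, "color vision": 79.7, "peripheral vision": 64.1,
}
# VFQ25 composite: the average of the vision-targeted subscale scores.
vfq25 = mean(vfq25_subscales.values())

# SHS: the average of the scale's 1-7 Likert items (hypothetical answers,
# assumed already recoded for reverse-scored items).
shs_items = [5, 6, 5, 4]
shs = mean(shs_items)
```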
4.2 Results
We used the VFQ25 and SHS measurements to understand the impact of our participants’ visual impairments on their functioning and general life and, thus, to better characterize our sample beyond the demographic information from Tables 2 and 3. Participants reported low levels for general health (M = 42, SD = 21.22), general vision (M = 52.94, SD = 24.43), near activities (M = 57.47, SD = 22.62), role difficulties (M = 63.97, SD = 23.75), peripheral vision (M = 64.06, SD = 27.33), and distance vision (M = 64.70, SD = 24.91), on scales ranging from 0 to 100. Higher scores were reported for color vision (M = 79.68, SD = 29.18), dependency (M = 72.42, SD = 25.77), ocular pain (M = 69.11, SD = 26.92), social functioning (M = 68.38, SD = 30.97), and mental health (M = 67.05, SD = 25.25), respectively. Overall, our participants reported moderate levels of general subjective happiness (M = 5.10, SD = 1.41). We found positive inter-correlations between visual functioning and subjective happiness: SHS correlated significantly and positively with general health (r(N = 17) = .51, p < .05), ocular pain (r(N = 17) = .67, p < .01), near activities (r(N = 17) = .56, p < .05), distance activities (r(N = 17) = .57, p < .05), vision-specific mental health (r(N = 17) = .62, p < .01), role difficulties (r(N = 17) = .59, p < .05), and dependency (r(N = 17) = .48, p < .05).
Figure 3 shows participants’ individual preferences for each M&A vision scenario in the form of histograms and mean preference ratings; ratings closer to 5 denote higher desirability. Shapiro-Wilk tests indicated significant deviations from normality at α = .05, and a Levene’s test showed the presence of heteroscedasticity in our data (F(31, 512) = 1.798, p < .01). Thus, we employed the Brunner-Domhof-Langer method (Footnote 3), an improvement on Friedman’s test in terms of power, designed to be sensitive to differences among average ranks [87, p. 543], for data analysis. Results showed a significant effect of Scenario on Desirability-rating (F(7.893) = 3.021, p < .005). Overall, the mean Desirability-rating across all the M&A vision scenarios was 3.75 (SD = 1.38), close to 4, which denotes “desirable” scenarios according to the items of our 5-point Likert scale; see the experiment description in the previous section. The top-rated scenarios were, in order: S1 (participants wished for better long-distance vision, with an average rating of 4.71 out of a maximum of 5); S4 (better contrast, 4.53); S19 (audio-rendered vision, 4.41); S6, S8, and S17 (representing desires for better peripheral vision, better resolution of their current vision, and AR-enhanced vision in the form of text and sign reading, all with an average rating of 4.35); S5 and S7 (better vision in ambient light and during nighttime, 4.24 and 4.18, respectively); and three other scenarios rated close to 4, S10, S27, and S24 (preferences for remote vision, infrared vision, and AR-enhanced vision where more details about objects are displayed in real time). Overall, eleven scenarios (34.4%) received desirability ratings that averaged greater than or equal to 4.
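The heteroscedasticity check can be illustrated with a self-contained sketch of the classic Levene statistic (absolute deviations from group means). This is an illustration only, with made-up ratings; in the study, standard library implementations were used.

```python
from statistics import mean

def levene_W(groups):
    """Classic Levene statistic: a one-way ANOVA on absolute deviations
    from each group's mean. Under H0 (equal variances), W approximately
    follows F(k-1, N-k)."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    Z = []
    for g in groups:
        m = mean(g)
        Z.append([abs(y - m) for y in g])
    Zbar_i = [mean(z) for z in Z]          # per-group mean deviation
    Zbar = sum(sum(z) for z in Z) / N      # grand mean deviation
    between = sum(len(z) * (zi - Zbar) ** 2 for z, zi in zip(Z, Zbar_i))
    within = sum((zij - zi) ** 2 for z, zi in zip(Z, Zbar_i) for zij in z)
    return ((N - k) / (k - 1)) * between / within

# Made-up 1-5 Likert ratings for three scenarios, one list per scenario:
ratings = [[5, 5, 4, 5, 4], [3, 1, 5, 2, 4], [4, 4, 4, 4, 3]]
W = levene_W(ratings)  # compare against F(k-1, N-k) for a p-value
```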
At the opposite end of the scale, the least preferred scenarios were S32 (little preference for UV vision, with an average rating of 2.29 out of 5) and S18 (little preference for diminished reality, 2.94). The remaining nineteen scenarios examined in our study were rated between 3 (corresponding to the Likert item “undecided: beneficial scenario, but I do not necessarily need or desire it”) and 4 (“desirable”) by our participants with visual impairments. These results indicate a strong preference for M&A vision scenarios from the first category, “human vision with no impairments,” while the rest of the scenarios were found potentially useful, but not necessarily desirable or applicable to the needs of our participants.
We performed a correlation analysis between participants’ Desirability-rating for the various M&A vision scenarios and their visual functioning scores. Specifically, we found significant positive correlations between desirability for alternative perspectives (seeing from inaccessible viewpoints) and both vision-specific mental health (r = .49, p < .05) and dependency on others (r = .51, p < .05), a positive correlation between desirability for better vision at a distance (to better appreciate the distance to objects) and vision-specific role difficulties (r = .48, p < .05), as well as between desirability for face recognition (to identify people more easily) and general health (r = .53, p < .05). Other significant correlations were negative, such as between emotion recognition (identifying facial expressions and emotions) and social functioning (r = −.56, p < .05), between rewind vision (seeing again an event or action) and general vision (r = −.49, p < .05), and between multiple perspectives and near vision activities (r = −.61, p < .05).
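The reported coefficients are plain Pearson correlations between per-participant desirability ratings and visual functioning scores. A minimal self-contained sketch, with made-up data, is:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up data: one desirability rating and one subscale score per participant.
desirability = [5, 4, 5, 3, 4, 5, 2]
mental_health = [75, 60, 80, 50, 65, 85, 40]
r = pearson_r(desirability, mental_health)
# Significance can then be assessed via t = r * sqrt(n - 2) / sqrt(1 - r**2)
# against Student's t distribution with n - 2 degrees of freedom.
```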
5 Study #2: Preferences for M&A vision of people without visual impairments
To better understand the preferences for M&A vision scenarios, we conducted a second vignette study targeting people without visual impairments, who represented the control group. To collect data from a large sample of participants, we organized this second study online.
5.1 Study design
Participants
A total of 178 participants (100 female) without any known visual impairments, aged between 17 and 75 years (M = 32.4, SD = 12.8 years), volunteered for our study. Participants had various occupations and technical backgrounds and were recruited via mailing lists; about half were students in Computer Science, Psychology, and Educational Sciences.
Apparatus
We used a Google Forms questionnaire that presented participants with the descriptions of the M&A vision scenarios from Table 1.
Task
Participants were asked to fill in the questionnaire and to indicate their preferences for M&A vision scenarios that they believed were useful to them. For this study, we did not use the VFQ-25 and SHS questionnaires regarding visual function and subjective well-being.
Measures
The only measure of this study was the Desirability-rating dependent variable with values between 1 (“not at all or very little desirable; this scenario does not apply to my case”) and 5 (“very desirable; this scenario is very important to me”).
5.2 Results
Figure 4 shows the individual preferences of the participants without visual impairments for each M&A vision scenario. Shapiro-Wilk tests indicated significant deviations from normality at α = .05, and a Levene’s test detected heteroscedasticity (F(31, 5664) = 3.384, p < .001). The Brunner-Domhof-Langer test [87, p. 543] revealed a significant effect of Scenario on Desirability-rating (F(19.934) = 21.803, p < .001).
The mean Desirability-rating computed across all the M&A vision scenarios was 3.44 (SD = 1.28), slightly lower (−8%) than the mean rating of participants with visual impairments (3.75; see the previous section). To analyze this difference, we compiled the Desirability-rating data from the two studies into one dataset and considered participants without visual impairments as the control group by introducing the Visual-impairment independent variable, nominal with two conditions. A between-by-within ANOVA procedure based on ranks and the Brunner-Domhof-Langer method [87, p. 554] (Footnote 4) showed a significant effect of Visual-impairment on Desirability-rating for M&A vision scenarios (F(1, 22.504) = 4.379, p = .047), a significant effect of Scenario (F(19.467, ∞) = 4.379, p < .001), and a significant interaction between Visual-impairment and Scenario (F(19.467, ∞) = 2.280, p = .001). To understand these results, we looked at the individual preferences of the participants without visual impairments for the thirty-two scenarios examined in our study; see Fig. 4. We found that only one scenario received a mean rating greater than 4 (S8, better resolution), compared to eleven scenarios rated above 4 by the participants with visual impairments. We also found three scenarios with preference ratings lower than 3 (“undecided”), while the majority of the scenarios (28 of 32, i.e., 87.5%) had mean Desirability-rating scores between 3 and 4. Table 4 lists the mean scores for each M&A vision category, revealing that the desirability of enhanced vision expressed by participants with visual impairments was higher not just overall (3.75 vs. 3.44), but also for each individual category, compared to that expressed by the participants without visual impairments. The largest difference (4.21 vs. 3.61) was recorded for the first category, human vision with no impairments. In the next section, we discuss implications of these findings.
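The compilation step that precedes the between-by-within analysis can be sketched as follows. Participant IDs and ratings below are made up; the rank-based test itself was run in R with the bwrank(...) function (see Notes).

```python
from statistics import mean

# Made-up long-format records: (participant_id, scenario, rating).
study1 = [("vi-01", "S1", 5), ("vi-01", "S32", 2), ("vi-02", "S1", 4)]  # N = 17 in the study
study2 = [("c-001", "S1", 4), ("c-001", "S32", 3), ("c-002", "S1", 3)]  # N = 178 in the study

# Compile one dataset, adding the between-subjects Visual-impairment factor.
dataset = [(pid, True, s, r) for pid, s, r in study1] + \
          [(pid, False, s, r) for pid, s, r in study2]

def group_mean(data, impaired):
    """Mean Desirability-rating for one level of Visual-impairment."""
    return mean(r for _, vi, _, r in data if vi == impaired)

overall_vi = group_mean(dataset, True)     # cf. 3.75 reported in the study
overall_ctrl = group_mean(dataset, False)  # cf. 3.44 reported in the study
```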
6 Discussion
Our results show different preferences for M&A vision between people with and without visual impairments. In this section, we use these results to derive a number of implications for smart eyewear devices that implement assistive mediated and augmented vision, as follows:
-
1.
Focus on vision rehabilitation applications for which people with visual impairments express the highest desirability. We found that participants with visual impairments expressed higher desirability for M&A vision compared to participants without visual impairments, not just overall (3.75 vs. 3.44) but equally for each of the four categories from our working taxonomy; see Table 4. The largest difference emerged for human vision without impairments (4.21 vs. 3.61). These results motivate the need for more research and development toward new solutions for vision rehabilitation, e.g., eyewear devices that are easier to use [74], have improved technical capabilities, such as better synchronization between virtual content and the real world perceived via the see-through display [76], and form factors that do not attract unwanted attention [61]. Future work could examine preferences for the first category of M&A vision in more depth, for example with in-the-lab and in-situ studies, where end users could provide feedback regarding eyewear application prototypes with actual implementations of M&A vision scenarios.
-
2.
Differentiation between sighted and visually impaired individuals. For some scenarios, the preference ratings were similar for the two groups (e.g., 3.76 and 3.72 for S9, alternative perspectives; 3.18 and 3.12 for S12, seeing events in slow motion; and 3.59 and 3.54 for S16, rear-view vision). One consequence of these similarities is that M&A vision applications could be designed to address end users with and without visual impairments alike, which supports a previous conclusion from Zhao et al. [93]. However, we did find a significant effect of Visual-impairment on Desirability-rating, with the largest difference observed for the first category of M&A vision, human vision without visual impairments (4.21 vs. 3.61), which suggests that user-centered and ability-based design approaches are needed; see next.
-
3.
Specificity for various types of visual impairments and ability-based design. Our study included participants with different types and severities of visual impairments. Due to the limited number of participants (N = 17), we did not run statistical tests for sub-categories (e.g., N = 3 blind participants vs. N = 14 with low vision). However, our discussion from Section 2 revealed a vast body of literature that highlighted the specificity of visual impairments and, consequently, the need to adapt assistive applications to individuals, e.g., via user-centered design [22], but also in the form of ability-based design [90]. According to the former paradigm, “users and their experience of a product, system, or service [are placed] at the center of the design process and allows the user to contribute to every stage” [22, p. 67]; according to the latter, “by focusing on users’ abilities rather than disabilities, designers can create interactive systems better matched to those abilities” [90, p. 62]. In support of this recommendation, we highlight our findings that general health and vision, visual disability, and the difficulty of performing various activities were related to the preferences expressed explicitly by the participants with visual impairments for specific M&A vision scenarios. Participants rated the following M&A vision scenarios as the most desirable: long-distance vision, contrast, audio-rendered vision (i.e., hearing the text that is watched, such as street signs), AR vision (seeing objects of interest highlighted), and resolution (seeing more details of the objects they are looking at). In contrast, the least desirable scenarios were UV vision, Diminished Reality vision (not being distracted by unimportant objects from the background), radio vision, slow motion, and thermal vision.
In particular, we found that general health and visual functioning (vision-related health, emotional well-being, and social functioning) positively affected the preferences for some of our scenarios. These results recommend that future work look more closely at user-centered and ability-based design of assistive M&A vision.
-
4.
Specificity vs. universality in the design of assistive systems for M&A vision. Our results revealed that some M&A vision scenarios were rated higher than others, e.g., S19 (audio-rendered vision) received an average preference rating of 4.41, while S18 (diminished reality) received only 2.94 from participants with visual impairments; see Fig. 3. These findings indicate preferences for scenarios in which computing technology could help correct vision deficiencies, e.g., by highlighting objects or improving the contrast and resolution of human vision. Also, our results revealed a preference for scenarios in which AI techniques could be used to present more information about objects, e.g., audio-rendered vision. Given their difficulties in perceiving objects in the visible spectrum, participants with visual impairments were less interested in scenarios addressing other regions of the electromagnetic spectrum, such as UV vision, radio vision, and thermal vision. Based on these results, we can distinguish between univalent, single-purpose systems for assistive vision that focus on one aspect of vision rehabilitation or vision enhancement, e.g., [40, 76], and multivalent, multi-purpose systems that implement several M&A vision scenarios, such as [92, 95, 97].
-
5.
Activity-based M&A vision. Some of the scenarios considered in our work could be implemented in multivalent systems to assist people with visual impairments with specific activities, such as walking, cooking, finding specific objects, working, etc. This implication is supported by (1) existing prototypes from the scientific literature that focused on improving performance for specific activities, such as stair navigation [94], sign reading [36], visual product search [96], or interacting within VR environments [92]; and (2) our participants’ self-reported visual functioning (see Table 3), which revealed various challenges with specific activities. Based on these findings, we recommend the design of assistive systems and applications that combine multiple types of mediated and augmented vision toward improving the performance of specific tasks and activities.
-
6.
Design for the portability of M&A vision on various assistive devices. In our study, we presented participants with visual impairments with three types of eyewear with various capabilities regarding rendering photorealistic computer-generated content, embedded sensors, and computing resources. For instance, the HoloLens HMD [50] was the most advanced device used in our study, but was perceived by participants as bulky, and they feared it would draw unwanted attention if worn in public; it was also the most expensive of the three devices. At the opposite end were the camera glasses [55], which had no see-through display but were affordable and inconspicuous (unless told, there is no way to notice the micro video camera hidden inside the temples). Future work will look at ways in which M&A vision could be implemented on devices with various hardware and software capabilities and resources toward highly portable M&A vision.
7 Conclusion and future work
We reported the preferences of people with visual impairments for thirty-two scenarios regarding mediated and augmented vision, which we compared to the preferences of a large group of people without visual impairments. Based on our findings, we proposed a number of implications for assistive eyewear systems and M&A vision to guide future work. One limitation of our study lies in potential individual differences in how participants understood the M&A vision scenarios; future work could employ actual implementations of AR systems to confirm our findings and enable further discoveries. Besides the development of technical prototypes, future work could further explore the relationship between assistive vision and subjectively perceived well-being. For example, we found positive associations between vision functioning and subjective happiness, results that are consistent with prior work from psychology documenting lower levels of psychological and social well-being and higher levels of negative emotions, such as depression and anxiety, for people with visual impairments [7, 18, 27, 30]. We believe that careful design of assistive M&A vision may have a positive impact on well-being and reduce negative emotions for people with visual impairments. We hope that our results will be useful to inform such future developments in assistive vision for smart eyewear.
Notes
One arcminute equals 1/60 of 1°.
Implemented with the bprm(...) function from R. Wilcox’s Rallfun-v37 R library, available from https://dornsife.usc.edu/labs/rwilcox/software/
Implemented with the bwrank(...) function from R. Wilcox’s Rallfun-v37 R library, available from https://dornsife.usc.edu/labs/rwilcox/software/
References
Abdelrahman Y, Wozniak P, Knierim P, Henze N, Schmidt A (2018) Exploration of alternative vision modes using depth and thermal cameras. In: Proceedings of the 17th international conference on mobile and ubiquitous multimedia, MUM 2018. https://doi.org/10.1145/3282894.3282920. ACM, USA, pp 245–252
Aiordăchioae A, Gherasim D, Maciuc AI, Gheran BF, Vatavu RD (2020) Addressing inattentional blindness with smart eyewear and vibrotactile feedback on the finger, wrist, and forearm. In: Proceedings of the 19th ACM international conference on mobile and ubiquitous multimedia, MUM ‘20. https://doi.org/10.1145/3428361.3432080. ACM, USA
Aiordăchioae A, Schipor OA, Vatavu RD (2020) An inventory of voice input commands for users with visual impairments and assistive smartglasses applications. In: Proceedings of the 15th International conference on development and application systems, DAS ‘20. https://doi.org/10.1109/DAS49615.2020.9108915, pp 146–150
American Optometric Association (1997) Optometric clinical practice guideline: Care of the patient with hyperopia. http://www.aoa.org/documents/optometrists/CPG-16.pdf. Last accessed March 2020
American Optometric Association (1997) Optometric clinical practice guideline: Care of the patient with myopia. http://www.aoa.org/documents/optometrists/CPG-15.pdf. Last accessed March 2020
Atzmüller C, Steiner PM (2010) Experimental vignette studies in survey research. Methodology (6):128–138. https://doi.org/10.1027/1614-2241/a000014
Augestad LB (2017) Mental health among children and young adults with visual impairments: A systematic review. J Vis Impairm Blindness 111(5):411–425. https://doi.org/10.1177/0145482X1711100503
Azenkot S, Zhao Y (2017) Designing smartglasses applications for people with low vision. SIGACCESS Access Comput (119):19–24. https://doi.org/10.1145/3167902.3167905
Azuma RT (1997) A Survey of Augmented Reality. Presence: Teleoper. Virtual Environ 6(4):355–385. https://doi.org/10.1162/pres.1997.6.4.355
Bach M Freiburg visual acuity & contrast test. https://michaelbach.de/fract/. Last accessed March 2020
Bach M (2014) Manual of the freiburg vision test ‘fract’. https://michaelbach.de/fract/media/FrACT3_Manual.pdf. Last accessed March 2020
Barrera D, Buskens V (2007) Imitation and learning under uncertainty: A vignette experiment. Int Sociol 22(3):367–396. https://doi.org/10.1177/0268580907076576
Barter C, Renold E (1999) The use of vignettes in qualitative research. Soc Res Updat 25. http://sru.soc.surrey.ac.uk/SRU25.html
Ben-Zur H, Debi Z (2005) Optimism, social comparisons, and coping with vision loss in israel. J Vis Impair Blindness 99(3):151–164. https://doi.org/10.1177/0145482X0509900304
Billinghurst M, Clark A, Lee G (2015) A survey of augmented reality. Found Trends Hum-Comput Interact 8(2–3):73–272. https://doi.org/10.1561/1100000049
Birnbaum M (1999) How to show that 9 > 221: Collect judgments in a between-subjects design. Psychol Methods 4 (3):243–249. https://doi.org/10.1037/1082-989X.4.3.243
Brady E, Morris MR, Zhong Y, White S, Bigham JP (2013) Visual challenges in the everyday lives of blind people. In: Proceedings of the SIGCHI conference on human factors in computing systems, CHI ‘13. https://doi.org/10.1145/2470654.2481291. ACM, USA, pp 2117–2126
Brunes A, Hansen MB, Heir T (2019) Loneliness among adults with visual impairment: prevalence, associated factors, and relationship to life satisfaction. Health Qual Life Outcome 17(1):24. https://doi.org/10.1186/s12955-019-1096-y
Chambel T, Kaiser R, Niamut OA, Ooi WT, Redi JA (2016) Altmm ‘16: Proceedings of the 1st international workshop on multimedia alternate realities. ACM, USA. https://doi.org/10.1145/2983298
Chaturvedi I, Bijarbooneh FH, Braud T, Hui P (2019) Peripheral vision: A new killer App for smart glasses. In: Proceedings of the 24th international conference on intelligent user interfaces, IUI ‘19. https://doi.org/10.1145/3301275.3302263. ACM, USA, pp 625–636
Coughlan J, Miele J (2017) AR4VI: AR as an accessibility tool for people with visual impairments. In: Proceedings of the 2017 IEEE international symposium on mixed and augmented reality (ISMAR-Adjunct). https://doi.org/10.1109/ISMAR-Adjunct.2017.89, pp 288–292
Dorrington P, Wilkinson C, Tasker L, Walters A (2016) User-centered design method for the design of assistive switch devices to improve user experience, accessibility, and independence. J Usability Stud 11(2):66–82
eSight Electronic glasses for the legally blind. https://www.esighteyewear.eu/. Last accessed March 2020
Evans JR, Fletcher AE, Wormald RP (2007) Depression and anxiety in visually impaired older people. Ophthalmology 114(2):283–288. https://doi.org/10.1016/j.ophtha.2006.10.006
Everingham M, Thomas B, Troscianko T (1998) Head-mounted mobility aid for low vision using scene classification techniques. Int J Virtual Reality 3(4):1–10. https://doi.org/10.20870/IJVR.1998.3.4.2629
Fenwick E, Rees G, Pesudovs K, Dirani M, Kawasaki R, Wong TY, Lamoureux E (2012) Social and emotional impact of diabetic retinopathy: a review. Clin Exp Ophthalmol 40 (1):27–38. https://doi.org/10.1111/j.1442-9071.2011.02599.x
Fenwick EK, Ong PG, Man RE, Sabanayagam C, Cheng CY, Wong TY, Lamoureux EL (2017) Vision impairment and major eye diseases reduce vision-specific emotional well-being in a chinese population. Br J Ophthalmol 101(5):686–690. https://doi.org/10.1136/bjophthalmol-2016-308701
Finch J (1987) The vignette technique in survey research. Sociology 21:105–114. https://doi.org/10.1177/0038038587021001008
Fuller T, Sadovnik A (2017) Image level color classification for colorblind assistance. In: Proceedings of the 2017 IEEE international conference on image processing, ICIP ‘17. https://doi.org/10.1109/ICIP.2017.8296629, pp 1985–1989
Garcia GA, Khoshnevis M, Gale J, Frousiakis SE, Hwang TJ, Poincenot L, Karanjia R, Baron D, Sadun AA (2017) Profound vision loss impairs psychological well-being in young and middle-aged individuals. Clin Ophthalmol (Auckland NZ) 11:417. https://doi.org/10.2147/OPTH.S113414
Garnefski N, Kraaij V, De Graaf M, Karels L (2010) Psychological intervention targets for people with visual impairments: The importance of cognitive coping and goal adjustment. Disabil Rehabil 32(2):142–147. https://doi.org/10.3109/09638280903071859
Goodman E, Stolterman E, Wakkary R (2011) Understanding interaction design practices. In: Proceedings of the SIGCHI conference on human factors in computing systems, CHI ‘11. https://doi.org/10.1145/1978942.1979100. ACM, USA, pp 1061–1070
Guo A, Chen XA, Qi H, White S, Ghosh S, Asakawa C, Bigham JP (2016) VizLens: A robust and interactive screen reader for interfaces in the real world. In: Proceedings of the 29th annual symposium on user interface software and technology, UIST ‘16. https://doi.org/10.1145/2984511.2984518. ACM, USA, pp 651–664
Hicks S, Wilson I, Muhammed L, Worsfold J, Downes S, Kennard C (2013) A depth-based head-mounted visual display to aid navigation in partially sighted individuals. PloS ONE 8(7):e67695. https://doi.org/10.1371/journal.pone.0067695
Hoyle R, Stark L, Ismail Q, Crandall D, Kapadia A, Anthony D (2020) Privacy norms and preferences for photos posted online. ACM Trans Comput-Hum Interact. https://www.microsoft.com/en-us/research/publication/privacy-norms-and-preferences-for-photos-posted-online/
Huang J, Kinateder M, Dunn M, Jarosz W, Yang X, Cooper E, Haddad J (2019) An augmented reality sign-reading assistant for users with reduced vision. PLoS ONE 14(1):e0210630. https://doi.org/10.1371/journal.pone.0210630
Itoh Y, Klinker G (2015) Vision enhancement: Defocus correction via optical see-through head-mounted displays. In: Proceedings of the 6th augmented human international conference, AH ‘15. https://doi.org/10.1145/2735711.2735787. ACM, USA, pp 1–8
Kress B, Saeedi E, de-la Perriere VB (2014) The segmentation of the HMD market: Optics for Smart Glasses, Smart Eyewear, AR and VR Headsets. In: Kazemi AA, Kress BC, Mendoza EA (eds) Photonics applications for aviation, aerospace, commercial, and harsh environments V. https://doi.org/10.1117/12.2064351, vol 9202. International Society for Optics and Photonics, SPIE, pp 107–120
Lang M, Hintermair M, Sarimski K (2017) Social-emotional competences in very young visually impaired children. Br J Vis Impair 35(1):29–43. https://doi.org/10.1177/0264619616677171
Langlotz T, Sutton J, Zollmann S, Itoh Y, Regenbrecht H (2018) ChromaGlasses: computational glasses for compensating colour blindness. In: Proceedings of the 2018 CHI conference on human factors in computing systems, CHI ’18. https://doi.org/10.1145/3173574.3173964. ACM, USA
Lehane CM, Dammeyer J, Elsass P (2017) Sensory loss and its consequences for couples’ psychosocial and relational wellbeing: an integrative review. Aging Ment Health 21(4):337–347. https://doi.org/10.1080/13607863.2015.1132675
Lehane CM, Nielsen T, Wittich W, Langer S, Dammeyer J (2018) Couples coping with sensory loss: A dyadic study of the roles of self-and perceived partner acceptance. Br J Health Psychol 23 (3):646–664. https://doi.org/10.1111/bjhp.12309
Lindgaard G, Folkens J, Pyper C, Frize M, Walker R (2010) Contributions of psychology to the design of diagnostic decision support systems. In: Proceedings of HCIS 2010, the IFIP human-computer interaction symposium. https://doi.org/10.1007/978-3-642-15231-3_3. Springer, Berlin, Heidelberg, pp 15–25
Lupón M, Armayones M, Cardona G (2018) Quality of life among parents of children with visual impairment: A literature review. Res Dev Disabil 83:120–131. https://doi.org/10.1016/j.ridd.2018.08.013
Lyubomirsky S, Lepper H (1999) A measure of subjective happiness: Preliminary reliability and construct validation. Soc Indic Res 46:137–155. https://doi.org/10.1023/A:1006824100041
Mangione CM, Lee PP, Gutierrez PR, Spritzer K, Berry S, Hays RD, for the National Eye Institute Visual Function Questionnaire Field Test Investigators (2001) Development of the 25-list-item National Eye Institute Visual Function Questionnaire. Arch Ophthalmol 119(7):1050–1058. https://doi.org/10.1001/archopht.119.7.1050
Mann S (1999) Mediated reality. Linux J 1999(59es):5–es. https://www.linuxjournal.com/article/3265
Mann S, Furness T, Yuan Y, Iorio J, Wang Z (2018) All reality: Virtual, augmented, mixed (X), Mediated (X, Y), and Multimediated Reality. arXiv:1804.08386
Melillo P, Riccio D, Di Perna L, Sanniti Di Baja G, De Nino M, Rossi S, Testa F, Simonelli F, Frucci M (2017) Wearable improved vision system for color vision deficiency correction. IEEE J Transl Eng Health Med 5:1–7. https://doi.org/10.1109/JTEHM.2017.2679746
Microsoft HoloLens (1st gen) hardware. https://docs.microsoft.com/en-us/hololens/hololens1-hardware. Last accessed March 2020
Milgram P, Colquhoun H Jr (1999) A taxonomy of real and virtual world display integration. Springer-Verlag, Berlin, Heidelberg. https://www.researchgate.net/publication/2440732_A_Taxonomy_of_Real_and_Virtual_World_Display_Integration
Milgram P, Kishino F (1994) A taxonomy of mixed reality visual displays. IEICE Trans Inf Syst E77-D(12):1321–1329. https://search.ieice.org/bin/summary.php?id=e77-d_12_1321
Milgram P, Takemura H, Utsumi A, Kishino F (1995) Augmented reality: A class of displays on the reality-virtuality continuum. In: Proceedings of the society of photo-optical instrumentation engineers 2351, telemanipulator and telepresence technologies. https://doi.org/10.1117/12.197321, vol 2351
Niforatos E, Vidal M (2019) Effects of a monocular laser-based head-mounted display on human night vision. In: Proceedings of the 10th augmented human international conference 2019, AH2019. https://doi.org/10.1145/3311823.3311858. ACM, USA
NorthVision Technologies Glasses camera. http://northvisiontec.com/products/camera-spy/glasses-eyewear-camera/nc-c05glasses-camera19201080-avi-tf-card-videophoto-876.html. Last accessed March 2020
Pamparău C, Aiordăchioae A, Vatavu RD (2020) From do you see what I see? to do you control what I see? Mediated vision, from a distance, for eyewear users. In: Proceedings of the 19th ACM international conference on mobile and ubiquitous multimedia, MUM ‘20. https://doi.org/10.1145/3428361.3432089. ACM, USA
Pamparău C, Vatavu RD (2020) Flexisee: Flexible configuration, customization, and control of mediated and augmented vision for users of smart eyewear devices. Multimed Tools Appl. https://doi.org/10.1007/s11042-020-10164-5
Paradiso JA, Landay JA (2009) Guest editors’ introduction: Cross-reality environments. IEEE Pervasive Comput 8(3):14–15. https://doi.org/10.1109/MPRV.2009.47
Peli E (2001) Vision multiplexing: An engineering approach to vision rehabilitation device development. Optom Vis Sci 78(5):304–315. https://doi.org/10.1097/00006324-200105000-00014
Popovici I, Vatavu R (2019) Understanding users’ preferences for augmented reality television. In: 2019 IEEE international symposium on mixed and augmented reality (ISMAR). https://doi.org/10.1109/ISMAR.2019.00024, pp 269–278
Profita H, Albaghli R, Findlater L, Jaeger P, Kane SK (2016) The AT Effect: How disability affects the perceived social acceptability of head-mounted display use. In: Proceedings of the 2016 CHI conference on human factors in computing systems, CHI ‘16. https://doi.org/10.1145/2858036.2858130. ACM, USA, pp 4884–4895
Rainey L, Elsman EBM, van Nispen RMA, van Leeuwen LM, van Rens GHMB (2016) Comprehending the impact of low vision on the lives of children and adolescents: a qualitative approach. Qual Life Res 25(10):2633–2643. https://doi.org/10.1007/s11136-016-1292-8
Rusu P, Schipor M, Vatavu R (2019) A lead-in study on well-being, visual functioning, and desires for augmented reality assisted vision for people with visual impairments. In: Proceedings of the 2019 E-health and bioengineering conference (EHB). https://doi.org/10.1109/EHB47216.2019.8970074, pp 1–4
Sandnes FE (2016) What do low-vision users really want from smart glasses? Faces, text and perhaps no glasses at all. In: Proceedings of the international conference on computers helping people with special needs, ICCHP ‘16. https://doi.org/10.1007/978-3-319-41264-1_25, pp 187–194
Sarsenbayeva Z, van Berkel N, Luo C, Kostakos V, Goncalves J (2017) Challenges of situational impairments during interaction with mobile devices. In: Proceedings of the 29th australian conference on computer-human interaction, OZCHI ‘17. https://doi.org/10.1145/3152771.3156161. ACM, USA, pp 477–481
Schipor M, Vatavu RD (2017) Neurobiological and neurocognitive models of vision for touch input on mobile devices. In: 2017 E-health and bioengineering conference (EHB). https://doi.org/10.1109/EHB.2017.7995434, pp 353–356
Schneider O, Shigeyama J, Kovacs R, Roumen TJ, Marwecki S, Boeckhoff N, Gloeckner DA, Bounama J, Baudisch P (2018) DualPanto: A haptic device that enables blind users to continuously interact with virtual worlds. In: Proceedings of the 31st annual ACM symposium on user interface software and technology, UIST ‘18. https://doi.org/10.1145/3242587.3242604. ACM, USA, pp 877–887
Schulze-Bonsel K, Feltgen N, Burau H, Hansen L, Bach M (2006) Visual acuities “hand motion” and “counting fingers” can be quantified with the Freiburg Visual Acuity Test. Invest Ophthalmol Vis Sci 47(3):1236–1240. https://doi.org/10.1167/iovs.05-0981
Stearns L, DeSouza V, Yin J, Findlater L, Froehlich JE (2017) Augmented reality magnification for low vision users with the Microsoft HoloLens and a finger-worn camera. In: Proceedings of the 19th international ACM SIGACCESS conference on computers and accessibility, ASSETS ‘17. https://doi.org/10.1145/3132525.3134812. ACM, USA, pp 361–362
Stearns L, Findlater L, Froehlich JE (2018) Design of an augmented reality magnification aid for low vision users. In: Proceedings of the 20th international ACM SIGACCESS conference on computers and accessibility, ASSETS ‘18. https://doi.org/10.1145/3234695.3236361. ACM, USA, pp 28–39
Sturrock BA, Xie J, Holloway EE, Hegel M, Casten R, Mellor D, Fenwick E, Rees G (2016) Illness cognitions and coping self-efficacy in depression among persons with low vision. Invest Ophthalmol Vis Sci 57(7):3032–3038. https://doi.org/10.1167/iovs.16-19110
Sturrock BA, Xie J, Holloway EE, Lamoureux EL, Keeffe JE, Fenwick EK, Rees G (2015) The influence of coping on vision-related quality of life in patients with low vision: a prospective longitudinal study. Invest Ophthalmol Vis Sci 56(4):2416–2422. https://doi.org/10.1167/iovs.14-16223
Szpiro S, Zhao Y, Azenkot S (2016) Finding a store, searching for a product: A study of daily challenges of low vision people. In: Proceedings of the 2016 ACM international joint conference on pervasive and ubiquitous computing, UbiComp ‘16. https://doi.org/10.1145/2971648.2971723. ACM, USA, pp 61–72
Szpiro SFA, Hashash S, Zhao Y, Azenkot S (2016) How people with low vision access computing devices: Understanding challenges and opportunities. In: Proceedings of the 18th international ACM SIGACCESS conference on computers and accessibility, ASSETS ‘16. https://doi.org/10.1145/2982142.2982168. ACM, USA, pp 171–180
Tang Y, Zhu Z, Toyoura M, Go K, Kashiwagi K, Fujishiro I, Mao X (2018) Arriving light control for color vision deficiency compensation using optical see-through head-mounted display. In: Proceedings of the 16th ACM SIGGRAPH international conference on virtual-reality continuum and its applications in industry, VRCAI ‘18. https://doi.org/10.1145/3284398.3284407. ACM, USA
Tanuwidjaja E, Huynh D, Koa K, Nguyen C, Shao C, Torbett P, Emmenegger C, Weibel N (2014) Chroma: A wearable augmented-reality solution for color blindness. In: Proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing, UbiComp ‘14. https://doi.org/10.1145/2632048.2632091. ACM, USA, pp 799–810
The National Eye Institute 25-item visual function questionnaire. https://www.rand.org/content/dam/rand/www/external/health/surveys_tools/vfq/vfq25_manual.pdf. Last accessed March 2020
Vanderdonckt J, Zen M, Vatavu RD (2019) AB4Web: An On-Line A/B tester for comparing user interface design alternatives. Proc ACM Hum-Comput Interact 3(EICS). https://doi.org/10.1145/3331160
Vatavu RD, Gheran BF, Schipor MD (2018) The impact of low vision on touch-gesture articulation on mobile devices. IEEE Pervasive Comput 17 (1):27–37. https://doi.org/10.1109/MPRV.2018.011591059
Vatavu RD, Saeghe P, Chambel T, Vinayagamoorthy V, Ursu MF (2020) Conceptualizing augmented reality television for the living room. In: ACM international conference on interactive media experiences, IMX ’20. https://doi.org/10.1145/3391614.3393660. ACM, USA, pp 1–12
Vatavu RD, Vanderdonckt J (2020) Design space and users’ preferences for smartglasses graphical menus: A vignette study. In: Proceedings of the 19th international conference on mobile and ubiquitous multimedia, MUM ‘20. https://doi.org/10.1145/3428361.3428467. ACM, USA
Verbeek P (2005) Beyond the human eye. Mediated vision and posthumanity. Veenman Publishers en ARTez Press. https://research.utwente.nl/en/publications/beyond-the-human-eye-mediated-vision-and-posthumanity
Vinayagamoorthy V, Glancy M, Ziegler C, Schäffer R (2019) Personalising the TV experience using augmented reality: An exploratory study on delivering synchronised sign language interpretation. In: Proceedings of the 2019 CHI conference on human factors in computing systems, CHI ‘19. https://doi.org/10.1145/3290605.3300762. ACM, USA, pp 1–12
Vuzix. Vuzix Blade: Augmented Reality (AR) glasses for the consumer. https://www.vuzix.com/products/blade-smart-glasses. Last accessed March 2020
WHO (2019) ICD-11, vision impairment. http://id.who.int/icd/entity/30317704. Last accessed March 2020
WHO (2019) World report on vision. https://www.who.int/publications-detail/world-report-on-vision. Last accessed March 2020
Wilcox R (2012) Modern statistics for the social and behavioral sciences: A practical introduction. CRC Press, Boca Raton
Wilks T (2004) The use of vignettes in qualitative research into social work values. Qual Soc Work 3(1):78–87. https://doi.org/10.1177/1473325004041133
Wobbrock JO (2019) Situationally aware mobile devices for overcoming situational impairments. In: Proceedings of the ACM SIGCHI symposium on engineering interactive computing systems, EICS ’19. https://doi.org/10.1145/3319499.3330292. ACM, USA
Wobbrock JO, Gajos KZ, Kane SK, Vanderheiden GC (2018) Ability-based design. Commun ACM 61(6):62–71. https://doi.org/10.1145/3148051
Zhao Y (2018) Using direct visual augmentation to provide people with low vision equal access to information. SIGACCESS Access Comput (120):38–42. https://doi.org/10.1145/3178412.3178421
Zhao Y, Cutrell E, Holz C, Morris MR, Ofek E, Wilson AD (2019) SeeingVR: A set of tools to make virtual reality more accessible to people with low vision. In: Proceedings of the 2019 CHI conference on human factors in computing systems, CHI ‘19. https://doi.org/10.1145/3290605.3300341. ACM, USA
Zhao Y, Hu M, Hashash S, Azenkot S (2017) Understanding low vision people’s visual perception on commercial augmented reality glasses. In: Proceedings of the 2017 CHI conference on human factors in computing systems, CHI ’17. https://doi.org/10.1145/3025453.3025949. ACM, USA, pp 4170–4181
Zhao Y, Kupferstein E, Castro BV, Feiner S, Azenkot S (2019) Designing AR visualizations to facilitate stair navigation for people with low vision. In: Proceedings of the 32nd Annual ACM symposium on user interface software and technology, UIST ‘19. https://doi.org/10.1145/3332165.3347906. ACM, USA, pp 387–402
Zhao Y, Szpiro S, Azenkot S (2015) ForeSee: A customizable head-mounted vision enhancement system for people with low vision. In: Proceedings of the 17th international ACM SIGACCESS conference on computers & accessibility, ASSETS ‘15. https://doi.org/10.1145/2700648.2809865. ACM, USA, pp 239–249
Zhao Y, Szpiro S, Knighten J, Azenkot S (2016) CueSee: exploring visual cues for people with low vision to facilitate a visual search task. In: Proceedings of the 2016 ACM international joint conference on pervasive and ubiquitous computing, UbiComp ‘16. https://doi.org/10.1145/2971648.2971730. ACM, USA, pp 73–84
Zhao Y, Szpiro S, Shi L, Azenkot S (2019) Designing and evaluating a customizable head-mounted vision enhancement system for people with low vision. ACM Trans Access Comput 12(4). https://doi.org/10.1145/3361866
Zhao Y, Wu S, Reynolds L, Azenkot S (2018) A face recognition application for people with visual impairments: understanding use beyond the lab. In: Proceedings of the 2018 CHI conference on human factors in computing systems, CHI ‘18. https://doi.org/10.1145/3173574.3173789. ACM, USA
Zolyomi A, Shukla A, Snyder J (2017) Technology-mediated sight: A case study of early adopters of a low vision assistive technology. In: Proceedings of the 19th international ACM SIGACCESS conference on computers and accessibility, ASSETS ‘17. https://doi.org/10.1145/3132525.3132552. ACM, USA, pp 220–229
Acknowledgements
This work was supported by a grant of the Ministry of Research and Innovation, CNCS-UEFISCDI, project no. PN-III-P1-1.1-TE-2016-2173 (TE141/2018), within PNCDI III. The work was carried out in the Machine Intelligence and Information Visualization Lab (MintViz) of the MANSiD Research Center. The infrastructure was provided by the University of Suceava and was partially supported from the project “Integrated center for research, development and innovation in Advanced Materials, Nanotechnologies, and Distributed Systems for fabrication and control”, No. 671/09.04.2015, Sectoral Operational Program for Increase of the Economic Competitiveness, co-funded from the European Regional Development Fund. The HoloLens device used in this work was kindly provided by OSF Global Services, the Mobile Division, Suceava.
Cite this article
Vatavu, RD., Rusu, PP., Schipor, OA. et al. Preferences of people with visual impairments for augmented and mediated vision: A vignette experiment. Multimed Tools Appl 83, 46531–46556 (2024). https://doi.org/10.1007/s11042-021-11498-4