1 Introduction

Elder mistreatment is a persistent national issue that affects approximately 1 in 10 Americans aged 60 or older, an estimated 5 million older adults every year [1, 2]. The Centers for Disease Control and Prevention defines elder mistreatment, also known as elder abuse, as "the intentional act, or failure to act, by a caregiver or trusted person that causes or creates a risk of harm to an adult age 60 or older" [3, 4]. The six commonly reported categories of elder mistreatment are physical abuse, financial exploitation, emotional abuse, sexual abuse, neglect, and abandonment [1, 3]. It is estimated that only 1 in 24 cases of elder mistreatment becomes known to authorities [5].

Older adult victims are unlikely to self-report being mistreated because of several barriers that limit help-seeking behaviors. These barriers include fear of nursing home placement, fear of losing autonomy, and fear that if the abusive caregiver is removed, no one will take care of them, as well as reluctance to expose an abusive family member to legal trouble [6,7,8].

Existing methods to increase the identification of elder mistreatment focus on educating healthcare professionals and developing screening tools administered by providers, with limited input from older adults themselves. We developed a different approach to address the under-identification of victims of elder mistreatment among community-dwelling, cognitively intact older adults: we include the older adults in the screening process and help them become their own advocates.

1.1 Overview of VOICES Tool

VOICES is a digital health screening tool designed to place the process of elder mistreatment screening in the hands of the older adult and to motivate them to self-report mistreatment [5, 9, 10]. VOICES is self-administered, guided by a digital coach, and runs on a tablet device to deliver elder mistreatment screening content targeting attitudes, subjective norms, and perceptions of control. The tool provides older adults with educational content and information on available resources and services, along with information on the Adult Protective Services (APS) response to disclosure. The digital coach, called Vicky, guides the user through a customized pathway depending on the user's needs. Vicky uses an automated text-to-speech feature to narrate the text presented on the screen or the audio contained in the animated educational videos. If suspicion of mistreatment is identified, the tool attempts to motivate the user to identify with being mistreated and to disclose their mistreatment to a healthcare professional.

The development of the VOICES tool consisted of content development and application development. The content was based on the existing literature on elder mistreatment; the theories of planned behavior and self-determination [11, 12]; the technological needs of older adults; and interviews with subject matter experts, including clinical researchers in geriatrics, psychology, and intimate partner violence. The application development followed the User-Centered Design (UCD) approach, which involved requirements gathering, conceptual model design, focus groups and interviews, prototyping and mockups, tool development, and an initial evaluation with a representative sample of potential end users [9, 10, 13]. We conducted focus groups to test and validate the concept of electronic elder mistreatment screening, and we validated the usability of our tool through formal usability evaluation [5]. The focus group results showed a willingness to use a tablet for elder mistreatment screening, and the initial usability results suggested that older adults are capable, willing, and comfortable using a tablet-based screening tool.

1.2 VOICES Tool Screening Process

VOICES is presented to the older adult on a tablet device by the provider, who remains in the vicinity to assist with initial tool orientation and to answer any questions. Before starting the tool, the older adult is informed that their provider will be notified if any suspicion of elder mistreatment is identified. The VOICES tool starts with an educational module that draws on evidence from multiple disciplines to introduce the topic of elder mistreatment and to emphasize that elder mistreatment is rarely an isolated incident and can escalate in severity and intensity if left undisclosed. The tool then continues to the elder mistreatment screener, which assesses the user's mistreatment risk. The older adult is then shown an educational module consisting of a brief animated video that provides a general summary of mistreatment. Afterward, the user is invited to watch up to five short (1–2 min) animated videos detailing each common category of elder mistreatment, along with its respective risks, signs, and consequences.

The results from the screener determine whether the VOICES tool ends (no suspicion of mistreatment) or continues to a motivational Brief Negotiation Interview (BNI) module (suspicion of mistreatment), which encourages the user to reflect on and understand their experience as having been mistreated. A key component of the motivational BNI module is prompting the user to consider the benefits of, or motives for, self-identifying as being mistreated [14, 15]. If the user decides not to identify as mistreated, the VOICES tool ends. Otherwise, the user is motivated to self-report and seek professional help. At this point, VOICES privately notifies the provider that a suspicion of mistreatment was identified, prompting the provider to follow up with a more comprehensive mistreatment screening.

1.3 Current Study

In addition to the common barriers to elder mistreatment identification, older adults who are blind, have low vision, or are deaf or hard of hearing are at greater risk of elder mistreatment than those without these disabilities, and they face further limitations in communicating their needs to health professionals and in disclosing mistreatment [1, 16]. The goal of our study is to make the VOICES tool more inclusive of, and usable by, older adults with vision and hearing disabilities. We aim to reduce disparities, to empower this segment of older adults to be their own advocates, and to extend the coverage of the VOICES tool to persons with disabilities. Digital screening for elder mistreatment can produce significantly higher rates of reporting, and digital tools that offer the opportunity to confidentially self-report risky or stigmatizing behavior should not exclude persons with disabilities.

In this paper, we describe a preliminary evaluation of the usability of the VOICES screening tool for older individuals with visual or hearing disabilities. Specifically, we describe how we evaluated the ease of use and usefulness of VOICES as a screening tool for older adults who are blind, have low vision, are deaf, or are hard of hearing, to assess the degree to which the tool is appropriate for these user populations. We also describe how we used the findings and recommendations from the usability evaluation, together with the User-Centered Design (UCD) approach, to enhance and refine the VOICES tool to be more usable by and acceptable to older adults with visual or hearing disabilities.

The objective of this paper is to describe our approach to expanding the VOICES user population to include adults with disabilities and to present our findings from the usability evaluation conducted during the enhancement of the tool.

2 Methodology

We conducted one-on-one usability evaluation sessions with 14 cognitively intact older adults (n = 14), age 60 or older, who were blind, had low vision, were deaf, or were hard of hearing. In the sessions, participants used the VOICES tool on an iPad tablet to perform elder mistreatment screening scenarios. We then analyzed audio and video recordings and participant feedback from the sessions. The International Organization for Standardization defines usability as "the extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" [17]. Accordingly, usability was evaluated in terms of its three constituent components: effectiveness, efficiency, and satisfaction.

2.1 Usability Testing Configuration

Participants used an Apple® iPad Pro with a 12.9-inch screen while seated at an adjustable drafting table (see Fig. 1), allowing the user to change the angle and position of the table to their liking. The tablet rested on the surface of the table to avoid any strain caused by holding the device, and the table was angled to reduce glare [18]. In addition, the adjustable table provided support for older adults with dexterity limitations, assisting with the accuracy of button presses and reducing unintended inputs [19, 20].

Fig. 1. Mobile testing configuration.

Non-slip matting was placed on the table to keep the iPad from sliding, and participants could reposition the tablet within an approximately 40 × 40 cm taped-off area. If needed, the overhead fluorescent lights were turned off and a lamp with a 60 W bulb was placed next to the table to eliminate glare for the participant.

Video of the iPad screen was captured with a Logitech C525 USB webcam mounted on a tripod and TechSmith's Morae (v3.3.4) software; a separate backup audio recorder (Olympus WS-821) and the iPad's built-in screen recording feature were also used.

2.2 VOICES Tool Modes and Interface

VOICES Modes. Participants used either the Vicky coach mode (hard of hearing and low vision participants) or VoiceOver mode (blind and low vision participants) to attempt the two task scenarios.

Vicky Mode. This mode is intended for users without accessibility requirements, users with low vision, and users who are hard of hearing or deaf. By default, it uses automated text-to-speech with a female voice, the Vicky digital coach. Users can mute the audio via a volume button on the bottom menu bar or let the Vicky coach auto-play and read the text on each slide. Videos automatically display closed captioning. The speech rate of the Vicky voice was 155 words per minute (WPM).

The Vicky digital coach reads from a separate text track, similar to the text shown on the screen, which allows customized pronunciation of words that the automated text-to-speech engine may mispronounce by default. For example, while the word "caregiver" is shown as one word to the user, in the text track it is written as "care giver" so that the Vicky coach pronounces the appropriate hard ⟨g⟩ rather than the engine's default soft ⟨g⟩ (as in "gym").
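As a concrete illustration, the following is a minimal sketch of how such a narration track might be implemented on iOS, assuming the coach uses Apple's AVSpeechSynthesizer; the override table and helper names are illustrative, not taken from the VOICES source.

```swift
import AVFoundation

/// Maps display text to a separate narration track so the coach can
/// pronounce words the default text-to-speech engine gets wrong.
struct NarrationTrack {
    // Hypothetical override table; "care giver" forces the hard <g>.
    static let pronunciationOverrides = [
        "caregiver": "care giver"
    ]

    /// Returns the text the synthesizer should actually speak.
    static func spokenText(for displayText: String) -> String {
        var spoken = displayText
        for (written, pronounced) in pronunciationOverrides {
            spoken = spoken.replacingOccurrences(
                of: written, with: pronounced, options: .caseInsensitive)
        }
        return spoken
    }
}

let synthesizer = AVSpeechSynthesizer()

/// Narrates a slide's display text using the corrected narration track.
func narrate(_ displayText: String) {
    let utterance = AVSpeechUtterance(string: NarrationTrack.spokenText(for: displayText))
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}
```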

VoiceOver Mode. This mode is intended for users who are blind or have some degree of vision impairment. The iPad's built-in, gesture-based screen reader, VoiceOver, is activated on the device, and when a new user is created in the tool, accessibility requirements can be toggled in the user's profile. The interface is then slightly adjusted to accommodate VoiceOver: for example, the Vicky coach text-to-speech functionality is disabled, and the VOICES volume button is hidden because it is not needed. For this study, VoiceOver's default speech rate of 175 WPM was used.
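A minimal sketch of how these per-profile adjustments might look in UIKit follows; the UserProfile type and the vickyNarrationEnabled flag are hypothetical stand-ins for the tool's internal settings.

```swift
import UIKit

/// Hypothetical per-user profile flag set when an account is created in the tool.
struct UserProfile {
    var requiresScreenReader: Bool
}

final class ScreenViewController: UIViewController {
    let volumeButton = UIButton(type: .system)
    var vickyNarrationEnabled = true   // hypothetical app-level setting
    var profile = UserProfile(requiresScreenReader: false)

    override func viewDidLoad() {
        super.viewDidLoad()
        applyAccessibilityMode()
    }

    /// When the profile (or the system) indicates VoiceOver use, disable the
    /// built-in Vicky narration and hide the now-redundant volume button so
    /// the two speech sources never compete.
    func applyAccessibilityMode() {
        let voiceOverActive = profile.requiresScreenReader || UIAccessibility.isVoiceOverRunning
        vickyNarrationEnabled = !voiceOverActive
        volumeButton.isHidden = voiceOverActive
    }
}
```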

Interface Design. The VOICES tool was designed with a simple layout, large buttons to minimize selection errors, and large, high-contrast text (Arial, 32 pt, black on a white background) with limited text on each screen. For example, Fig. 2 shows a screen from the Vicky version with top and bottom banners and a large content area; the bottom bar includes a "Play"/"Pause" button, a volume control, and the "Continue" button (which becomes active after the narration has finished or after a selection has been made for a question). The VoiceOver version of this screen does not include the "Play"/"Pause" or volume control buttons.
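For instance, gating the "Continue" button on the end of narration could be handled with the synthesizer's delegate callback. This sketch again assumes AVSpeechSynthesizer drives the narration, rather than reflecting the actual VOICES implementation.

```swift
import UIKit
import AVFoundation

/// Enables the "Continue" button only after the coach finishes narrating
/// the current screen, mirroring the behavior described above.
final class SlideViewController: UIViewController, AVSpeechSynthesizerDelegate {
    let continueButton = UIButton(type: .system)
    let synthesizer = AVSpeechSynthesizer()

    override func viewDidLoad() {
        super.viewDidLoad()
        synthesizer.delegate = self
        continueButton.isEnabled = false   // locked until narration ends
    }

    // AVSpeechSynthesizerDelegate: fires when an utterance finishes.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        continueButton.isEnabled = true
    }
}
```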

Fig. 2. Example of VOICES tool page from Vicky mode.

2.3 Procedure

Usability evaluation sessions were conducted at the Michigan State University Usability/Accessibility Research and Consulting (UARC) laboratory. All procedures were approved by the Michigan State University Human Investigation Committee (IRB). Participants used an Apple® iPad Pro (12.9-inch) tablet with the VOICES tool loaded onto it. Each one-on-one usability session lasted approximately 90 min. Participants were given a brief overview of the study and asked to sign the informed consent form, fill out a brief demographic questionnaire, attempt the task scenarios, and fill out post-study surveys.

Task Scenarios. During the usability sessions, participants were asked to work through pre-prepared scenarios and assume a specific persona for each task. This did not affect participants' understanding or use of the tool, but it kept the sessions focused on evaluating VOICES rather than on participants' own experiences. Participants were asked to use the VOICES tool on the iPad to go through a step-by-step screening process following a provided set of instructions and to think aloud, sharing their thoughts and insights as they moved through the process. Participants were verbally given specific instructions on which selections to make during the screening tasks, but they were not instructed on how to use the tool. Participants were specifically instructed not to disclose any personal experiences with elder mistreatment and were reminded several times that they would be performing task scenarios from different perspectives, i.e., "It is important to know that this study is not about your personal experience. We will ask you to use the tool from the perspective of a specific person described in the task."

Before each task, the facilitator explained the role or persona the participant needed to assume and what that persona hoped to accomplish. The first task scenario had participants use the VOICES tool as a completely independent older adult who did not have a caregiver and did not rely on anyone to take care of them; they were instructed to indicate that no one was treating them poorly or in a way that they did not want to be treated. The second task explored a scenario in which the participant responded as someone who screened positive: a person with a caregiver who was mistreating them. They indicated that they believed they had been mistreated by this caregiver and felt that the mistreatment may have led to some problems in their life. As a result, they were somewhat ready to disclose this information to someone that day using the VOICES tool.

Metrics. The VOICES tool's usability was evaluated quantitatively and qualitatively through effectiveness, efficiency, and satisfaction metrics. Effectiveness was measured as the percentage of tasks completed successfully. Efficiency was measured as the average time to perform a task and assessed based on issues observed during task performance. Satisfaction was measured through user satisfaction ratings (i.e., from post-task and post-study questionnaires), written or verbal feedback on the questionnaires, and verbal comments from each session. Effectiveness and efficiency were thus quantitative measures, while satisfaction combined quantitative ratings with qualitative feedback.

The surveys collected demographics, usability perceptions, ease-of-use ratings, familiarity and comfort with technology, current emotional state, understanding of elder mistreatment principles, and open-ended qualitative feedback. The System Usability Scale (SUS), an industry-standard instrument, asked participants to rate their agreement with 10 user satisfaction statements; the SUS has been shown to be reliable for small samples of 8–14 [21,22,23]. The Computer Efficacy Scale (CES), a validated 10-question survey measuring comfort with technology, was administered to assess participants' level of technological competency [24]. Emotional reactions to VOICES were assessed using the 10-item International Positive and Negative Affect Schedule Short Form (I-PANAS-SF) [25]. A five-question comprehension survey tested participants' knowledge of mistreatment after using VOICES. Open-ended questions invited participant recommendations and probed potential challenges.
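For reference, SUS responses convert to the familiar 0–100 score with the standard scoring formula: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5. A minimal sketch, assuming responses are recorded as integers 1–5 in questionnaire order:

```swift
/// Computes a System Usability Scale (SUS) score from ten 1–5 Likert
/// responses in questionnaire order, using the standard SUS formula.
func susScore(responses: [Int]) -> Double? {
    guard responses.count == 10,
          responses.allSatisfy({ (1...5).contains($0) }) else { return nil }
    let sum = responses.enumerated().reduce(0) { total, item in
        let (index, response) = item
        // Indices 0, 2, 4, ... are the odd-numbered questions (Q1, Q3, ...).
        return total + (index % 2 == 0 ? response - 1 : 5 - response)
    }
    return Double(sum) * 2.5
}

// Example: a fairly positive response set yields a score above 70.
// susScore(responses: [4, 2, 4, 1, 5, 2, 4, 2, 4, 2]) == 80.0
```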

2.4 Recruiting Strategy

A significant challenge in conducting this study was reaching and recruiting potential participants. While recruiting participants for a typical usability study can take 2–3 weeks, reaching potential participants with specific disabilities can take considerably longer. The eligibility requirements for the current study, which called for participants with visual or hearing disabilities who were 60 years or older, meant that recruitment took more than 2 months.

Successful recruiting is built primarily on establishing relationships with individuals within the specific communities. Finding credible "champions" within the user communities who can spread the word about the study is the most effective and efficient way to find potential participants. For example, individuals who are deaf may also identify as "uppercase-D Deaf," meaning that they share a language, American Sign Language (ASL), and a culture; members of this group use sign language as a primary means of communication and hold a set of beliefs about themselves and their connection to the larger society [26].

Sources. Participants for the current study were recruited in a variety of ways, including flyers on physical and virtual information boards; local and professional organization newsletters and listservs; social media; disability offices within universities and colleges; and our organization's own professional and personal networks and websites. Suggested sources for finding older participants who are blind, have low vision, are deaf, or are hard of hearing include centers for independent living, disability rights coalitions, and nursing homes and assisted living facilities. For individuals who are blind or have low vision, contact local blind and low vision organizations and associations, e.g., the National Federation of the Blind, the American Council of the Blind, or the Braille Institute. For individuals who are deaf or hard of hearing, try local chapters of the National Association of the Deaf or the Hearing Loss Association of America.

Scheduling. Individuals with disabilities, including older adults using assistive technologies, may require extra time to complete a usability evaluation. Based on prior findings [27] and anecdotal evidence from other researchers and practitioners, a 1:4:4 rule of thumb (no visual disability : low vision : blind) is recommended for estimating how much time to allow for individuals with visual impairments. If a sighted individual takes 10 min to complete a task, it is reasonable to anticipate that an individual who is blind and using a screen reader may require 40 min. This is not because individuals with visual impairments are less capable than those without, but because assistive technology and different ways of interacting can take more time, particularly as most products are designed or optimized for individuals who do not have disabilities.

To help plan sessions, find out during recruiting what assistive technology each participant typically uses and/or requires to participate in the study. Knowing how frequently a potential participant uses assistive technology, and their level of confidence with it, helps establish whether they are a novice or a more expert user. When scheduling, participants should be informed of the expected duration of their session ahead of time, and it may be helpful to build in a buffer of at least 30 min between sessions (e.g., to resolve any technical issues from the previous session, to accommodate a participant who is late, to let recordings finish saving, to set up for the next participant, or to take a bathroom break).

2.5 Participants

The 14 participants (n = 14) in this study comprised three user groups: seven who were blind (in both eyes), three with low vision (20/70 to 20/200, corrected vision), and four who were hard of hearing (bilateral hearing loss). Although invited, no deaf participants completed the online recruiting eligibility screener before data collection was suspended due to COVID-19 research restrictions. All blind participants and two low vision participants used the iPad's native VoiceOver screen reader mode, while all hard of hearing participants and one low vision participant used the VOICES Vicky coach mode. Participants with low vision were asked whether they would typically use VoiceOver. Eight participants were male and six were female. Twelve participants identified as White and two as Black or African American. Ages ranged from 62 to 80 (mean 67, median 68). General Internet use was uniformly high, with 12 participants reporting daily use and two at least weekly, on a desktop computer, laptop, tablet, or smartphone. Self-rated confidence in using new technology was relatively high, averaging 7.7 (scale: 0 = "Not confident at all" to 10 = "Very confident"). Participants received a $75 gift card as a thank-you for their time.

3 Results

Our iterative enhancement and usability evaluation of the VOICES tool was conducted in two phases. First, we conducted Phase 1 of the usability evaluation with six participants. Enhancements were then made to the VOICES tool to improve the reading sequence and focus order (e.g., ensuring the screen reader/virtual coach would start at the top of a screen and that focus would stay within a pop-up region), especially for participants using VoiceOver mode. Phase 2 was then conducted with eight participants to complete the usability evaluation after the Phase 1 enhancements.

Tables 1 and 2 summarize the results by phase, including the participant group, the VOICES mode used, whether the participant was successful on their own for each task, and each participant's System Usability Scale (SUS) score and Computer Efficacy Scale (CES) response average. Overall, five participants used the Vicky coach mode and eight participants used the VoiceOver mode.

Table 1. Summary of Phase 1 of usability evaluation
Table 2. Summary of Phase 2 of usability evaluation

Across the two phases, six participants completed the tasks successfully on their own, and seven participants (mostly those using VoiceOver) completed the tasks with some intervention or help from the moderator. The help or slight prompts given related to the focus or reading order not always starting at the top of a screen (e.g., a participant may not have realized they were on a new screen); a lack of instructions, feedback, or focus for available buttons (e.g., the "Continue" button on the first screen, or the "Play" and "Close" buttons in a video dialog); or a lack of confirmation after an option had been selected or deselected (e.g., a number on the interactive ruler prompt or an answer to a question). Overall, participants using VoiceOver mode had longer task times across the phases.

The SUS scores were promising with respect to the usability of the tool: the majority of participants scored in the acceptable range (above 70) across the phases, and most participants in Phase 2 scored 90 or above.

Based on responses to the CES [11, 24], overall confidence with new technology varied across participants. Most CES responses were on the higher end, averaging 8.1 across the phases, although three participants (two blind and one with low vision, in Phase 2) had CES response averages below 7, indicating somewhat less comfort navigating digital tools independently.

Of all the participants, 12 (92%) stated that they would recommend the VOICES tool to others. However, five participants also thought a user would need some familiarity with this type of technology (or with VoiceOver, for that version of the tool). Participants also suggested a "coach" who could explain how to use the technology, provide assistance if needed, and help with an older adult's potential anxiety or fear of technology (as noted earlier, most participants in this study had high confidence with technology and had used an iPad before). One participant who used the VoiceOver mode thought this type of screening should be done with a real person, given older adults' potential aversion to or lack of experience with this type of technology.

The majority of participants had positive reactions to using the VOICES tool. Most found VOICES easy to use and navigate (although most participants using the VoiceOver mode did receive some form of help during the tasks). Regarding recommending the tool, most thought it would be an important tool for mistreatment screening (e.g., educational and helpful in conveying information, a useful option for people to express themselves, and less intimidating or judgmental than dealing with a real person on this topic). Participants who were hard of hearing or had low vision appreciated the larger text size, color choices, and high contrast (see Fig. 3). Some of the low vision participants suggested additional preference options (e.g., switching between contrast modes, enlarging text).

Fig. 3. Example of VOICES tool page with checkbox selection options.

Although the VOICES tool was intuitive to most, some participants suggested adding content or other features to help users get started and to check an answer if needed. For example, some participants (especially VoiceOver users) were initially unclear that they needed to use the "Continue" button after the first screen was read out, and brief introductory instructions for the tool would be helpful. Additionally, VoiceOver users suggested in-tool help or optional instructions at the beginning for those who may be infrequent VoiceOver users or need a refresher. A "Back" button on each slide would also provide a way to correct an earlier mistake on a question or to check an answer.

Across the modes and phases, most participants thought the content was easy to understand (e.g., clear wording for questions and choices, helpful illustrations and graphics, and a brief amount of text on each screen). Additionally, most participants liked (or were not bothered by) the use of the first person throughout the VOICES tool (e.g., "I'm sorry to hear you've been mistreated. You're a strong person to admit this…"); several thought this type of language could make the tool feel more personable, and some VoiceOver users thought the approach would already be familiar to VoiceOver users. On the five-question comprehension survey testing participants' understanding of the information after using the VOICES tool, 10 of the 13 participants scored 100% and three scored 80%. However, some participants thought the reading level of the language was still a bit high, and one suggested indicating that further explanations of terms would be given throughout the tool to reassure users.

Participants who used the Vicky mode appreciated the voice narration. For example, one participant who was hard of hearing thought it was helpful that people could listen to the voice instead of reading, as well as follow along with the text if preferred, and another stated: "Normally I'd just use the text, but I enjoyed the voice so much. I was enjoying the experience of hearing it very clearly, which I don't get too often." Participants also found the pace of the Vicky coach acceptable regardless of whether they considered themselves slow or fast readers (i.e., they could easily follow along with the text while the Vicky voice was speaking); some participants did suggest including an option to increase the speed of the voice, if desired.

For the VoiceOver version, most participants thought the default speech rate (175 WPM) was adequate for understanding the content. Several participants suggested this pace would suit those less experienced with VoiceOver, although they also thought there should be an easy option, such as at the start of the tool, to adjust the speed of the voice. Several participants using the VoiceOver version also thought the pauses within the video content were sometimes too long (e.g., they were unsure whether the video had ended and they were supposed to do something, and then the voice in the video would continue speaking).

4 Discussion

4.1 Enhancements to Improve Accessibility of VoiceOver Mode

Most of the issues that required help from the moderator occurred in Phase 1 with the VoiceOver mode of the VOICES tool and included an inconsistent and unclear reading sequence, focus order, and screen reader feedback, as well as unclear button labels. Overall, this evaluation demonstrated the importance of ensuring that a tool works with the platform's standard screen reader (in this case, VoiceOver on the iPad) and does not interfere with or override how the assistive technology works. VoiceOver may not always pronounce words as intended either (e.g., "caregiver"), so it is important to listen through all content beforehand and note any adjustments needed to ensure users' understanding. As one participant stated, "I think it's wonderful that you've brought me in and bringing other people in to test at this point. I think it's a mistake not to have people with disabilities on your development team… get people with disabilities embedded at all stages of product development."

Consistent and Correct Reading and Focus Order. Across the tasks, the reading order (i.e., the order in which a screen reader presents content) and the focus order were not consistent or clear: reading did not always start automatically when a new screen loaded, did not always start at the top of the content, and focus did not always stay within an open video dialog. This led some participants to explore screens further to figure out whether all the content had been read aloud and where the focus was on the screen. To ensure a consistent and accurate experience, the reading sequence and focus order need to advance to the correct location on each new screen (e.g., consistently start at the top of the content without forcing the user to find the beginning), and reading order and focus should move immediately to a dialog when it opens, with focus restricted to the dialog's content while it is open (i.e., focus should not reach the page behind the dialog). Decorative images (such as those within the banner area) should use null alt text so that screen readers ignore them. Structural information should also be appropriately conveyed to users (e.g., headings should be appropriately structured, and list items should be consistently coded as programmatic lists).
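On iOS, these recommendations map onto standard UIKit accessibility attributes. The following is a minimal sketch under that assumption; the view and element names are illustrative rather than taken from the VOICES code.

```swift
import UIKit

/// Applies the reading-order and focus-order fixes described above to
/// one screen of the tool.
func configureAccessibility(heading: UILabel,
                            bannerImage: UIImageView,
                            dialog: UIView) {
    // Decorative banner art: null alternative text so VoiceOver skips it.
    bannerImage.isAccessibilityElement = false

    // Convey structure: announce the title as a heading, not plain text.
    heading.accessibilityTraits.insert(.header)

    // When a dialog opens, trap VoiceOver inside it so focus cannot
    // reach the page behind it...
    dialog.accessibilityViewIsModal = true

    // ...and move reading focus to the top of the new content
    // immediately, rather than leaving it wherever it was.
    UIAccessibility.post(notification: .screenChanged, argument: heading)
}
```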

Clear and Consistent Feedback. Throughout the tasks, appropriate feedback was not always provided to the screen reader, which at times made it unclear to participants whether they had moved to a new screen (e.g., whether the page had updated to a new question), when a video dialog had opened, and when a selection had been made on the screens with an interactive ruler (see Fig. 4). Screen readers and other assistive technologies should be clearly and consistently notified after an action completes and/or when page content changes or updates (e.g., after moving to a new screen or page, when a dialog opens or closes, or after a selection is made) so that users receive audio feedback via VoiceOver.

Fig. 4. VOICES selection 'ruler' with 3 selected.

In addition, information or feedback should not be conveyed through color alone (e.g., after selecting a number on the ruler; see Fig. 4), and videos should not start playing automatically without warning or without an option to pause or stop them, especially if a video's audio interferes with the screen reader's feedback (e.g., participants were confused about what was happening when VoiceOver and a video were speaking at the same time).
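A minimal UIKit sketch of this kind of explicit feedback, with a hypothetical handler name: it exposes the ruler selection as an accessibility value rather than through color alone, and posts an announcement so VoiceOver confirms the change.

```swift
import UIKit

/// Called when the user selects a value on the interactive ruler
/// (hypothetical handler; the real control wiring will differ).
func didSelectRulerValue(_ value: Int, rulerControl: UIControl) {
    // Expose the current selection to assistive technology so it is
    // not conveyed through color alone.
    rulerControl.accessibilityValue = "\(value) selected"

    // Explicitly announce the change so VoiceOver users get immediate
    // audio confirmation that their selection registered.
    UIAccessibility.post(notification: .announcement,
                         argument: "\(value) selected")
}
```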

Descriptive and Associated Labels. Participants encountered some difficulty using the controls within the video dialogs because of unclear button labels. For example, the label for the "Play" button included additional information about the length of the video, which obscured the button's purpose and led some participants to miss the button entirely and require help. Interactive controls (e.g., buttons and form elements) need clear, programmatically associated labels so a screen reader can recognize each control and announce its label along with its input type.
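In UIKit terms, the fix is to keep the accessible name short and move secondary detail into the value or hint. A small sketch (function and parameter names hypothetical):

```swift
import UIKit

/// Labels a video dialog's play control so VoiceOver announces its
/// purpose first and the video length as secondary detail.
func configureVideoControls(playButton: UIButton, videoLength: String) {
    // Keep the accessible name short and purpose-first: "Play".
    playButton.accessibilityLabel = "Play"
    // Put the video length in the value rather than burying the
    // button's purpose inside one long label.
    playButton.accessibilityValue = videoLength   // e.g., "2 minutes"
    playButton.accessibilityHint = "Plays the video"
}
```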

4.2 Importance of Flexibility for Universal Design

During our sessions, we observed participants' use of the VOICES tool and asked post-study interview questions to learn more about their experiences with the tool and their broader experiences. Overall, the observations and comments gathered from this study demonstrated the importance of flexibility and agility [28, 29] in the universal design of the VOICES tool to support a range of functional and technical needs across disabilities. Most importantly, the default design of the tool should be equally usable by older adults with or without disabilities and flexible enough to accommodate a range of abilities (e.g., easily accessible instructions) and preferences (e.g., easily accessible setup options at the start of the tool for making adjustments based on the user's needs) to facilitate further ease of use.

As described in the results, participants recommended adding options to adjust the text (e.g., text size or contrast). In addition, although having video captions on by default usefully surfaces that option, one participant mentioned that, as a slow reader, they find video captions distracting; an option to easily turn captions off is therefore also helpful.

Participants suggested adding options to easily adjust the volume and speech rate, although most thought the default speech rate should remain close to a normal speaking rate to work well for a broad audience (i.e., a "kiosk voice"). Two participants thought there should be an option to choose between a female or male voice. As one participant noted, high-frequency hearing loss occurs first in older adults [30]. Higher-pitched sounds can be more difficult for older adults to hear, and frequency-related difficulties also vary depending on whether a person uses hearing aids or cochlear implants (e.g., lower-frequency sounds may be more difficult to hear with cochlear implants). Easily accessible options at the start of the tool to adjust the gender, pitch, and speed of the voice in either mode of the VOICES tool would therefore help meet the range of hearing needs of older adults [31].
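If the narration is driven by AVSpeechSynthesizer (an assumption; the shipping implementation may differ), these voice preferences map directly onto utterance parameters. The NarrationSettings type below is a hypothetical container for user choices gathered at start-up.

```swift
import AVFoundation

// Hypothetical user-facing narration settings gathered at tool start-up.
struct NarrationSettings {
    var rate: Float = AVSpeechUtteranceDefaultSpeechRate  // 0.0...1.0 scale
    var pitch: Float = 1.0       // below 1.0 lowers pitch for high-frequency loss
    var preferFemaleVoice = true
}

/// Builds an utterance honoring the user's voice gender, pitch, and
/// speed preferences.
func makeUtterance(_ text: String, settings: NarrationSettings) -> AVSpeechUtterance {
    let utterance = AVSpeechUtterance(string: text)
    utterance.rate = settings.rate
    utterance.pitchMultiplier = settings.pitch   // valid range 0.5–2.0

    // Pick an English voice of the preferred gender, falling back to default.
    let wantedGender: AVSpeechSynthesisVoiceGender = settings.preferFemaleVoice ? .female : .male
    utterance.voice = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.language.hasPrefix("en") && $0.gender == wantedGender }
        ?? AVSpeechSynthesisVoice(language: "en-US")
    return utterance
}
```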

Participants who used the VoiceOver mode of the VOICES tool mentioned a range of preferences regarding device size. For example, some preferred a smaller device (e.g., an iPhone or iPad mini) because they were more familiar with that screen size and felt they had to adjust to the more spread-out layout (e.g., inadvertently touching the screen at times). One participant mentioned a condition that makes their hands shake slightly, which can cause accidental touches elsewhere on a larger device compared with an iPhone-sized one. Another participant noted that although a larger screen could help some people with low vision, a smaller screen could be better for conditions such as tunnel vision (i.e., vision constricted to a central, tunnel-like field). However, some participants thought the larger size was helpful because the layout was less crowded than on an iPhone and could accommodate even larger text if needed. Another participant suggested offering an external Bluetooth keyboard for use with the iPad. Overall, a range of options could help meet this variety of needs and preferences.

When asked whether they would be comfortable using the VOICES tool without headphones in a public setting, most participants agreed that headphones would be very important for preserving privacy while using this mistreatment screening tool. However, preferences regarding the type of headphones varied across participants, indicating that a range of headphone options should be offered. For example, users who are hard of hearing may wear some form of traditional hearing aid (e.g., in-the-canal, in-the-ear, or behind-the-ear) or have cochlear implants, and their hearing aids may include features such as telecoils and/or Bluetooth compatibility for picking up audio from a phone or other device [32]. Most of the participants who were hard of hearing would prefer over-ear headphones to earbuds, to avoid having to take out their hearing aids and/or to help block out background noise. One participant said they cannot wear headphones due to feedback issues and instead prefer a neck or induction loop; another said they have found headphones difficult to use with their cochlear implant and would prefer to read the text on the screen (with an option to turn off the voice narration) for privacy in a public setting. One participant suggested using the Bluetooth capability of compatible hearing aids instead of headphones, although they thought this would depend on whether the person was familiar with how the feature works.

In terms of privacy, some participants also raised the issue of the screen being visible to others and the need for a private setting when using this type of tool. Some of the participants who used the VoiceOver version noted that a person who is blind cannot tell whether someone is looking at their screen, and they therefore recommended an option at the start of the tool to turn the display off while using VoiceOver (i.e., "Screen Curtain") and listen to the audio through headphones.

Across the versions, some participants also thought a user should use the VOICES tool in a private room or booth, with headphones and without the presence of a caregiver or family member, to ensure privacy. Additionally, although participants with dexterity impairments were not included in this particular study, previous research suggests considerations for such a private-room environment: a tablet should not be fully attached to a surface (such as a fixed stand), so the user can hold the device or move it around if needed, and an adjustable surface provides height and angle options for ease of viewing and arm support for users with dexterity limitations [19].

The VOICES tool is still in the early stages of development, and further evaluation with older adults with other types of disabilities (e.g., cognitive or physical) and other abilities or experiences (e.g., little or no familiarity with an iPad or any tablet or smartphone) is needed to ensure ease of use, as well as the flexibility to support diverse users.