1 Introduction

While accessibility has come a long way [9], it also has a long way to go [10]. The public sector seems to embrace accessibility more readily as a requirement, perhaps driven by stricter regulations and inclusion mandates. However, both government and business continue to struggle to adopt accessibility guidelines, best practices and compliance. Educational institutions are required to provide accessibility in software, especially for non-voluntary use of software, as in the case of educational performance evaluation ("testing"). Almost all schools in the United States rely on software created by private corporations, and adherence to accessibility guidelines and standards varies greatly between software providers.

As with all efforts to enable the participation of disadvantaged or disparate user groups, including the actual end-users or recipients of a design solution in the design process [15] continues to be a key challenge. Ignoring user needs during the creation of design solutions is particularly problematic when designing for accessibility, where designers treat themselves, rather than the actual users, as the end-users [22].

Exclusion from the software creation process is especially impactful for marginalized or disadvantaged populations, where access to critical software features shapes educational attainment itself. Accessibility efforts for product or service experiences suffer from the same problem: failure to include the target audience [23].

1.1 Adopting a Different Model of Accessibility

Accessibility efforts typically take a technology-centric model to improving access to content. This is understandable, because Assistive Technology, such as a screen reader, interacts with actual code, i.e. properly tagged digital content: alternative text (ALT text) for an image, for example, allows the screen reader to detect and describe that image. However, this narrow approach of improving accessibility solely by following guidelines and optimizing code reduces the task to a model that inadvertently excludes users from the process.
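
As a minimal illustration, the sketch below (TypeScript against the browser DOM; the file names and descriptions are hypothetical) shows the kind of tagging a screen reader depends on: descriptive ALT text that can be announced, and an explicitly empty ALT that marks a decorative image to be skipped:

    // Hypothetical example: an informative image with descriptive ALT text
    // that a screen reader can announce to the user.
    const chart = document.createElement("img");
    chart.src = "reading-scores.png"; // hypothetical asset
    chart.alt = "Bar chart showing reading scores rising from grade 3 to grade 5";

    // A purely decorative image: the empty ALT attribute tells the screen
    // reader to skip it rather than read out the file name.
    const divider = document.createElement("img");
    divider.src = "divider.png"; // hypothetical asset
    divider.alt = "";

    document.body.append(chart, divider);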

Most approaches to accessibility follow a guideline, checklist or algorithmic approach to compliance checking, and software tools for testing accessibility continue to be developed to meet the demand. The problem with this technology-centric approach is that users are rarely included in the testing process. As a result, assessing software defects with users with disabilities (Accessibility Testing) is rarely done.
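
For concreteness, a minimal sketch of what such an algorithmic check does (TypeScript, browser DOM; not any particular tool's implementation) follows. It can verify that an ALT attribute is present, but it cannot judge whether the text would actually help a user, which is precisely the gap Accessibility Testing with real users fills:

    // Collect every image element that has no alt attribute at all
    // (cf. WCAG 2.x Success Criterion 1.1.1, Non-text Content).
    function findImagesMissingAlt(root: Document): HTMLImageElement[] {
      return Array.from(root.querySelectorAll("img")).filter(
        (img) => !img.hasAttribute("alt")
      );
    }

    // Report candidate violations; a human (ideally a user with a
    // disability) must still judge whether existing ALT text is meaningful.
    for (const img of findImagesMissingAlt(document)) {
      console.warn("Image missing ALT text:", img.src);
    }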

Worse, trying to understand and design for user needs before beginning the design or optimization of accessibility is performed even less frequently. User needs analysis, or ethnographic study, offers an opportunity for designers and developers to develop an advocacy approach to disability requirements. Field studies provide the ability to empathize with users as well as to understand the context-of-use of a feature while, for example, a screen reader or magnifier is in use. Understanding context-of-use in the accessibility user experience is rarely seen as a worthwhile effort, and official guidelines (e.g. the W3C's WCAG) fail to suggest its value, instead promoting the technology-centric model of accessibility.

Case Study with Schoolchildren.

In this paper we describe how a disability-centric model provided marked improvement in overall outcomes not only for the users but also for the developers and designers, and we discuss key lessons learned from an ethnography of school children with disabilities aged 5–18. The study was conducted for a not-for-profit organization that develops software for 9,500 schools, districts, and education agencies in 145 countries. We describe an inclusive design approach consisting of ethnography followed by interaction design and Accessibility Testing with school children with disabilities. The goal of the project was to update non-accessible software (a web application) used in educational performance assessment. We aimed to ground the software improvement project by including target users (schoolchildren with disabilities) throughout: an early needs analysis (ethnographic field study) informed how features were designed, followed by Accessibility Testing of the new features.

Disability-Centric Model.

Because the use of technology by users with disabilities is so nuanced, a new approach is required: relying on guidelines and algorithmic accessibility checkers alone is too risky and prone to excluding users. Taking a disability-centric approach to accessibility puts disability at the center of the optimization effort, with an understanding of user needs and observed performance as core to designer decision-making and developer fact-checking.

1.2 Role of Empathy in Disability User Research

For designers and developers, accessibility offers a critical opportunity to develop empathy [3]. Social neuroscience considers empathy a defining characteristic of social intelligence [6], which implies that designers and developers can improve their decision-making toward others by increasing their empathy. The popular human-centered design methodology, Design Thinking, specifically points to empathy as a vital tool for gaining direction when designing for others [7]. Brown notes that empathy is a means to an end, a vehicle for understanding user needs, not the solution itself [8].

However, empathy as a design tool has been questioned for its overall validity in the design process [5], and studies have even shown that empathy can unintentionally distance designers from users with disabilities [4]. Norman [5] points to this fix for distorted empathy: “So we’re proposing to combine experts and community workers. Instead of recommending solutions, experts should be facilitators, guides, and mentors. We need an approach that’s top-down, the expert knowledge, and bottom-up, the community people”. Activating the empathy process in design seems to require careful intent that, according to research, can be greatly aided by a learning and reciprocity approach when studying the needs of disparate groups [16].

Empathy arises from including users in the design process: the more contact you have with users, the more likely you are to understand their needs [28]. User research (ethnography and user testing) is critical for gaining insight into disability issues and understanding the disability experience. For example, in our consulting practice at Experience Dynamics we have seen qualitative leaps in understanding from observing users with disabilities using Assistive Technology (AT) in their native environment (device, home, etc.), and we have found that observing the precise set-up of AT in context-of-use can have a marked impact on fixing accessibility issues [28]. Nind [19] points to the need for a strong ethics grounding to avoid compromising, for example, children with learning difficulties, communication difficulties or users on the autism spectrum. Other researchers emphasize the need for ethnographers to give agency or control back to users, particularly children, regarding the content and comfort of their contributions with researchers [26, 27].

Empathy plays a critical role in the motivation to do user research. This is especially true for understanding the experience of children with disabilities adapting to digital technology, particularly in a school context where its use is mandatory rather than voluntary. Researchers have found that understanding the social consequences of AT in context can greatly improve design approaches, for example in error handling by blind users [11].

1.3 Stigma in Conducting Research with Users with Disabilities

Gaining access to stigmatized populations such as users with disabilities is a challenge to conducting vital user research [13]. This is due largely to the varying disability types and the lack of understanding of which types of disabilities require priority over others, as well as the varying configurations, versions and AT device types, not to mention individual user proficiency with AT. Dawe [24] found challenges in generalizing accessibility issues when conducting accessibility testing: “The types of AT devices in use for different disabilities vary widely. A single abandonment rate for all AT does not provide a very useful picture for anyone, given the large differences among the devices and the populations that use them”.

Designing for children with disabilities is even more problematic, as noted in several studies with young adults [25]. Further, Dawe [24] found “teachers and parents played prominent roles at different stages of the adoption process, which also created unintended challenges to adoption”. Worse, research among young children with learning disabilities “has not always been considered a valid focus of inquiry” [27].

Researchers [17] point to key challenges in how designers frame disability design challenges for children: taking an “enabling the disabled” approach that “repairs” or rescues users through design (the medical model of disability), instead of imagining solutions that empower and enhance the quality of their experience.

A meta-analysis of research examining the visibility of disability shows that many disabilities go undetected and are therefore ignored in the design process [12], while “advocacy is stigmatized”, with legal action as a primary motivator [29].

Those unfamiliar with disability (user) research can quickly get discouraged, move away from embracing a disability-centric model of digital accessibility, and inadvertently remove users from the process: focusing solely on ‘testing’ with a technology-centric model; assuming they can perceive issues like a user with disabilities [22]; or solely checking WCAG guidelines, again sans users with disabilities.

2 Methodology

Before beginning the optimization of an existing web application, developed by a not-for-profit organization and used to evaluate U.S. Department of Education standards for learning proficiency, we conducted an ethnography of students with disabilities. The purpose was to first understand user needs in the context of AT and school software use, before beginning the feature-optimization process and adherence to accessibility standards. The research was conducted through user observations and interviews across a range of disabilities and ages. The aim of the ethnographic field study was to understand current experiences with AT and accommodations in the context of the ‘test’-taking and assessment experience. The key question we sought to answer was: what obstacles and barriers did children with disabilities experience, especially compared with children without disabilities?

The field data was used to inform new features and interface improvements, including new accessibility tools that would be required to interact with the proficiency evaluation software. The software code was then created, and we conducted several rounds of accessibility testing with children with disabilities (observing AT use with the newly optimized accessible interfaces) in order to assess the quality of the code improvements and updates.

Children were observed using AT in their natural environment, in classroom context, and then informally interviewed for 30–60 min. We focused on understanding the disability school experience across grade ranges. Under what conditions were students taking ‘tests’? What obstacles or barriers did they confront as part of their disabilities? How did technology or current (competitor) software respond to accessibility challenges? We wanted to know what was working for students, and what AT software features were frustrating or difficult to use. Interviews were conducted with teachers or assistants present, and parents were invited to attend.

2.1 Participant Selection

A total of 22 K-12 students (ages 5–18) at Austin Public Schools (Austin, Minnesota, USA) were included in the research, sampled from the disabilities represented in the school district as well as the major areas of disability. The school was using a different educational performance assessment software, similar to but not the same as our client’s. This competitor software had already been optimized for accessibility, which gave us an opportunity to see how well a complete ‘product’ performed in the field.

Students with the following disabilities were included in the user needs analysis (Table 1):

Table 1. Students recruited for the accessibility study.

Once the user needs analysis was complete, the insights were used in the interaction design process, and developers created proof-of-concept features that could be tested with actual users with disabilities. For the follow-on Accessibility Testing, we recruited a further 20 children (in the Portland area) from the major disability groups, across the K-12 age range: visual (blind, low vision, color blindness); Deaf/hard of hearing; motor (inability to use a mouse, slow response time, limited fine motor control); and cognitive (learning disabilities, distractibility, inability to remember or focus on large amounts of information or complex steps). These students helped accessibility-test the educational performance assessment software with its new or improved (access-optimized) features.

3 Results

The field study provided evidence of the current adoption, use and support students were receiving from AT and the tools available at school. This helped us specify inclusive design solutions such as digital highlighter pens, rulers and magnifiers. It showed us where children stood and what issues and obstacles they were facing in their physical, social and digital environments.

An illustration of the value of conducting the ethnography and spending time with users with disabilities was found in the Special Education (SPED) room, a dedicated space where children with disabilities can relax, study or take computer-based tests. We observed a blue filter over the ceiling lights there, created by a teacher to help students stay calm and manage stress caused by the typical classroom social environment. We then observed this mood-enhancing solution being leveraged by the competitor (educational performance assessment) software the children were using; see Fig. 1 below. We interviewed these SPED teacher advocates, who confirmed that students liked the ‘mood light’ and used the blue filter in the software frequently. This example illustrates the value of direct observation in a user’s environment and how such an insight translates into the prioritization and value (to users) of a proposed feature.

Fig. 1. Blue light filter over classroom lights (left); accessibility tool (blue filter feature, right) observed in use in curriculum assessment software. (Color figure online)

Further findings gave us specific insights into the AT toolbar that would be introduced into the software. In the schools, we noticed that physical calculators were more popular than digital ones, and that physical AT rulers and protractors were missing. We noticed physical magnifiers in use but an absence of AT screen magnifiers (ZoomText and MAGic, for example). Screen magnifiers help low vision users perceive text and details on a screen and are built into Apple’s iOS, for example. Several middle school students with low vision who might have benefited did not realize their iPads had a Magnifier feature available. For one high school student, formally activating the feature on her iPad (Settings > Accessibility, etc.) at school was too embarrassing, meaning AT tools should be ‘just-in-time’ to bypass schoolroom stigma, especially at older grade levels where social scrutiny from peers is high. This example illustrates how understanding the social context-of-use can raise the priority or placement of a feature (Fig. 2).

Fig. 2. Physical calculators were more prevalent (left); while physical magnifiers were very common (right), they were not always available in the software used by the students.

Later in the process, during accessibility testing of proof-of-concept features, we observed users struggling with online protractors, seemingly due to their physics behaviors on screen but also to novelty or lack of familiarity, findings we had observed in context during the field study. Students had not seen a physical protractor, and physical versions with AT features (e.g. Braille rulers or tactile protractors) were not available at school, though they can easily be purchased online. The user testing allowed us to understand this important context-of-use of the tool within the AT context specifically, and why users were uncomfortable with certain technologies. For example, students told us they did not like laptops because they required more eye-hand coordination as well as keyboard proficiency; iPads or tablets were easier to interact with, they said. See Fig. 3.

Fig. 3. Students struggled with keyboard use and found tablet interaction easier. We observed that students found games that provide on-screen keyboard mapping tips helpful (left). Customizing settings was challenging for users (right) and, at the high school level, contributed to social stigma within the peer community: if a setting was not apparent, having to go and configure it revealed a ‘special’ need. High school students do not want to feel singled out, or ‘special’.

4 Analysis

Including users throughout the design process gave us critical direction and guidance that would otherwise have been theoretical or guess-work. The ability to use the ‘empathy insights’ from the user research was invaluable in providing strategic direction as well as tactical guidance throughout the project lifecycle. Instead of basing UI features and design ideas on internal design decisions and then matching accessibility guidelines to the “new AT tools”, the way many organizations conduct accessibility efforts, we had evidence-based validation and insights that designers and developers used to better interpret and comply with government and W3C WCAG guidelines.

The key findings emerging from our disability-centric model of user participation clustered around the following themes:

  • Observing pain points and context-of-use allowed for greater empathy and understanding of what might or might not work in interaction design. For example, users seemed to need direct access to AT tools or configuration options. We saw evidence of this in students with visual impairments (low vision in particular) not using standard screen magnifier AT, in students with cognitive impairments not enjoying school-issued laptops due to the inefficiency and added effort of use, and in the preference for iPads, with their direct, rapid interaction, among students with cognitive and mobility impairments.

  • Understanding how users work around cumbersome or absent AT solutions provided clear direction for possible design solutions. For example, the absence of physical protractors appeared to mean a steeper learning curve for students discovering a digital tool modeled after a real-world one. Even calculators seem to be a newer concept for the digital generation, and the literacy of using a calculator seems to require some exposure: it was not an immediate-use experience for primary and elementary students, unlike for most adults (teachers) who grew up with the familiar mental model. In both the calculator and protractor examples, user testing showed us that we had to design the tools not only to accessibility guidelines (see Fig. 4; a sketch of the underlying WCAG contrast check appears after this list) but also with a simplified interaction model.

    Fig. 4. Standard contrast protractor (left) and high-contrast protractor (right). The high-contrast design is accessibility compliant for low vision users. In addition, the interaction with (usage of) the protractor has been simplified to allow greater access for children unfamiliar with the physical manipulation of a protractor.

  • Checking accessibility enhancements and new accessible UI concepts with children with disabilities, against WCAG guidelines, allowed us to identify accessibility ‘bugs’ and report them to the developers for investigation. By conducting Accessibility Testing, we were able to directly observe children struggling with the proposed tools and access features in the educational performance assessment software. We believe a standard machine-level accessibility tool check, or a review against the W3C WCAG guidelines, would not have identified these issues. For example, much of the UI feedback related to icon intuitiveness, leading us to redraw icons several times to make them more perceivable by the students. This reinforces the need to include users with disabilities in accessibility quality improvement efforts, instead of assuming designers or developers can interpret the guidelines sufficiently [22].

  • Being mindful of inclusion in design, early on, enables an ‘accessibility first’ approach (the disability-centric model) that will likely reduce the common ‘build, then make accessible later’ approach. Such a bolt-on accessibility approach seems to create the need for excessive iterative testing, as we learned with the placement of our AT toolbar. The new toolbar interface was created based on software requirements, e.g. test question + toolbar: “Read this problem and use the tools to solve the equation”. Our findings indicated that users did not explore the toolbar unless the precise problem-solving tools were offered for that particular test question; a persistent generic toolbar with “all tools” was all too often ignored by the students. Instead, we had to be mindful of the ordering of toolbar icons/buttons, a trade-off between maintaining consistency of the toolbar UI and providing contextually supported tools. Thinking of our users with disabilities, knowing the struggle they have to endure even with the right AT, we were able to make more carefully calculated interaction design decisions.
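
As referenced in the second theme above (Fig. 4), the high-contrast protractor had to meet WCAG contrast thresholds. The sketch below shows the standard WCAG 2.x contrast-ratio computation (TypeScript; the colors are illustrative, not the product’s actual palette):

    // Linearize one 8-bit sRGB channel per the WCAG relative-luminance formula.
    function linearize(c: number): number {
      const s = c / 255;
      return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
    }

    // Relative luminance of an sRGB color.
    function luminance([r, g, b]: [number, number, number]): number {
      return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
    }

    // WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), range 1:1 to 21:1.
    function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
      const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
      return (hi + 0.05) / (lo + 0.05);
    }

    // Illustrative check: black on white yields 21:1, above the 4.5:1 (AA,
    // normal text) and 3:1 (graphical objects, WCAG 2.1) thresholds.
    console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"

In our process, passing such arithmetic checks complemented, rather than replaced, Accessibility Testing with students.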

5 Conclusions and Recommendations

By including users with disabilities throughout the accessibility enhancement process, we gained practical insights and strategic findings that not only improved the immediate software but also helped the design and development team apply learnings to avoid future mistakes. The process of developing a disability-centric model of interaction design helped us include users with disabilities and optimize around their real needs, rather than merely relying on official accessibility guidelines or automated checking tools. We found that enabling participation of users with disabilities not only kept the project grounded but forced us to reality-check guidelines and gain observed evidence of their practicality and performance.