Abstract
Personal computers, palmtop computers, media players and cell phones provide instant access to information from around the world. A wide variety of options is available to make that information accessible to people with visual disabilities, so many that choosing one for use in any given context can feel daunting to someone new to the field of accessibility. This paper reviews tools and techniques for the presentation of textual, graphic, mathematical and web documents through audio and haptic modalities to people with visual disabilities.
Introduction
The information society offers a collection of new tools to access a wealth of information. Personal computers, mobile devices and web technologies have changed the way people read the news, shop, and communicate.
One would hope that such technological advances would benefit all people equally. Unfortunately, despite the early hopes placed in new technology, there are accessibility and usability issues that still hinder access to technology and information for both mainstream users and users with disabilities. Universal access for all people has sadly lagged behind technical advancement, leaving many technologies difficult or, in some cases, impossible to use for people with disabilities. This paper presents some of the tools and techniques that address this digital divide.
As the field of access to information is extensive, this paper presents a broad survey of research regarding the presentation of information to people with visual disabilities, addressing in particular those problems that are prominent in the research community. While an attempt is made to include research which is representative of multinational initiatives, this review inevitably has a bias towards the English-speaking world. A subsequent review of additional sources from the non-English-speaking research community, as well as a large bibliographic database, is available through the authors [94]. It is important to note that this survey is intended as a starting point for those interested in pursuing research on accessible information for people with visual disabilities; it is not a definitive volume of all research in the field. In particular, this paper does not address the input modality of interfaces for accessing diagrams, mathematics or other forms of information.
In this paper the term people with visual disabilities is used to refer to the full range of people who have visual disabilities. This includes people who are blind, who have little or no functional vision, and people who have low vision.
This paper begins with a discussion of the presentation alternatives available to people with visual disabilities. These alternatives are audio presentation, discussed in Sect. 2, and tactile presentation, covered in Sect. 3. Following the discussion of these technologies, the paper examines how they are applied to different types of content, addressing the presentation of textual information, mathematics and graphics in Sects. 4, 5 and 6, respectively.
Section 7 considers online sources of information which include web and multimedia documents. The review concludes with a discussion of haptic technology and areas for future exploration.
Audio media
This section discusses how information can be conveyed through sound to a user, starting from non-speech sounds and then proceeding to discuss synthesized speech.
The term auditory icon refers to the use of real-world sounds to communicate the interaction of a user with objects in a scene. Originally proposed by Gaver [78], these sounds are usually related to the task being performed and the object with which the user is interacting. For example, the Trash bin icon in a graphical user interface indicates visually when there are documents which have yet to be cleared from the system. When a user chooses to empty the Trash, he/she hears an auditory icon of papers being shuffled out of a rubbish bin.
As these sounds are digital representations of their real-world counterparts, the parameters of the sounds can be adjusted in order to indicate the identity of the object being manipulated, such as its relative size and the action being performed on it. While it is possible to adjust many parameters of the sound, such as the pitch and tempo of the icon being played, it is hypothesized that nomic mappings of sounds to tasks, in general, are better than metaphorical mappings [78].
In contrast, earcons are abstract musical melodies that are symbolic of tasks or objects. As an example, a scale increasing in pitch could map to the opening of a file, while a decreasing scale could represent closing a file. Further information regarding the design and use of earcons can be found in [19, 82, 113, 121, 153, 192, 193].
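As an illustrative sketch of the open/close mapping described above, an earcon vocabulary can be built from equal-temperament pitch scales; the base pitch, scale length and the event names here are arbitrary choices for illustration, not drawn from the earcon literature:

```python
# Sketch: map "open file" / "close file" earcons to ascending and
# descending pitch scales, using equal-temperament frequencies
# (each semitone multiplies the frequency by 2^(1/12)).
# Base pitch (A4 = 440 Hz) and scale length are arbitrary choices.

def scale_frequencies(base_hz=440.0, steps=8, ascending=True):
    """Return a list of frequencies, one semitone apart."""
    semitones = range(steps) if ascending else range(0, -steps, -1)
    return [round(base_hz * 2 ** (n / 12), 2) for n in semitones]

# An earcon vocabulary: opening a file rises, closing a file falls.
earcons = {
    "open_file": scale_frequencies(ascending=True),
    "close_file": scale_frequencies(ascending=False),
}
```

These frequency lists would then be rendered as short tones by whatever audio back end the application uses.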
The use of speech for communicating information, and in particular text, cannot be ignored when designing interfaces for people with visual disabilities. While the applications of text to speech technology are discussed in Sect. 4, the following are some of the usability concerns of which designers must be aware:
- Technology. Hardware synthesizers, on average, provide better sound production and more accurate speech synthesis. However, these devices have the drawbacks of being costly to purchase and of taking up additional workspace. Software synthesizers can also be expensive, but open source initiatives are producing some low cost or free alternatives. These software synthesizers also take advantage of the sound card hardware already present in most home computing workstations.
- Speed. The default speed of screen readers is approximately 2.8 times slower than the average speed preferred by users with visual disabilities [8]. To compensate for varying comprehension rates, any application using text to speech technology should provide an accessible means of adjusting the speed of the speech output.
- Voicing. The majority of speech synthesis systems provide a range of voices, from low male to high childlike. Much like speed, the voice used in vocalizing text must be a customizable option for the user.
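The speed recommendation above can be sketched as a user-adjustable rate setting; the wrapper class, its bounds and its default are hypothetical illustrations, not the API of any real screen reader or speech engine:

```python
# Sketch of a user-adjustable speech rate, as recommended above.
# The SpeechSettings class and its bounds are hypothetical assumptions.

class SpeechSettings:
    MIN_WPM, MAX_WPM = 80, 500   # plausible bounds, chosen for illustration

    def __init__(self, rate_wpm=180):
        self.rate_wpm = rate_wpm

    def set_rate(self, wpm):
        """Clamp the requested rate to the supported range."""
        self.rate_wpm = max(self.MIN_WPM, min(self.MAX_WPM, wpm))

    def estimated_seconds(self, text):
        """Rough playback-time estimate for a passage at the current rate."""
        words = len(text.split())
        return 60.0 * words / self.rate_wpm

settings = SpeechSettings()
settings.set_rate(360)  # an experienced listener may prefer roughly double the default
```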
The use of three-dimensional (3D) sound interfaces is likely to become more common as the cost of sound hardware decreases. A 3D sound system must produce a signal which matches the transformation of a sound from its point of origin to its arrival in the ear canal. This signal varies with the origin point and its position relative to the head. In the case of a sound originating on the left side of the head, the sound wave reaches the left ear first, unfiltered by the head, whereas the right ear receives an altered signal, caused by the wave being shadowed by the head [25, 76].
As a result of this complicated set of factors, sound systems use a collection of head-related transfer functions (HRTFs): numbers representing the time delay, amplitude and tonal transformation of sounds arriving from various points around the head. These functions are used to alter a sound signal being sent towards the ear in order to give the illusion that it has come from a point in 3D space. The HRTF information itself is recorded through a series of tone experiments with microphones placed in the ear canal of either a manikin or a specific person.
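A full HRTF captures frequency-dependent tonal filtering as well, but the simplest cue it encodes, the interaural time difference, can be sketched with Woodworth's classical spherical-head approximation; the head radius is an assumed average constant:

```python
import math

# Sketch: approximate interaural time difference (ITD) for a source at a
# given azimuth, using Woodworth's spherical-head model. The head radius
# is an assumed average; a real HRTF also encodes amplitude differences
# and frequency-dependent tonal filtering, which this sketch omits.

SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
HEAD_RADIUS = 0.0875     # m, an assumed average adult head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth ITD: (r/c) * (theta + sin(theta)), theta in radians."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source 90 degrees to one side arrives at the near ear first; the
# delay to the far (shadowed) ear is well under a millisecond.
itd = interaural_time_difference(90)
```

Delays of this scale, applied per ear together with the amplitude and tonal corrections stored in the HRTF tables, are what create the illusion of a localized source.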
In terms of hardware, 3D sound applications can be created using either headphones or loudspeakers. In the case of loudspeakers, these can be placed in a traditional stereo configuration, a sound wall consisting of a bank of speakers (e.g., as seen in the work by Donker et al. [51]), or with multiple surrounding speakers. For headphones, HRTF production for a single subject is relatively easy, with information being projected directly into the appropriate ear. Loudspeakers have the additional problem of crosstalk which can be described as sound waves intended for one ear arriving at the other. These extra signals disrupt the localization effects for the user. In order to counteract these signals, crosstalk filters can be added to the signal, cancelling out the unwanted sound waves and preventing them from reaching the wrong ear.
For examples of applications of 3D sound the reader is referred to the table browsing interface by Raman [163], memory enhancement techniques by Sánchez et al. [175] and cognitive map formation work by Ohuchi et al. [144]. The use of 3D sound in presenting graphical user interfaces to the blind was also investigated in the GUIB project [42, 43, 57].
Tactile media
The sense of touch can play an important role in presenting information to people with visual disabilities. However, the production of tactile documents has lagged behind print for the sighted.
The following technologies all produce what can be defined as offline documents. Each of them can be read, and in some cases authored, away from a desktop computer. Examples of such print documents are maps, calendars and textbooks.
Technology and techniques for ad hoc production
When working with an individual student, it is often beneficial to be able to generate tactile documents in an ad hoc manner, as the need arises. Such documents may consist of tactile graphics displayed in a 2D space, with various materials providing depth or texture to the graphic.
Reference [67] gives several examples of variable-height pictures: static pictures that provide depth to a graphic by reproducing contours or raised areas, made by attaching felt or other materials to a background and using fasteners such as stick pins to identify landmarks of interest. Alternatively, ink which dries to a raised surface can be used. Several prefabricated kits are designed to assist in building such pictures, such as the Chang Tactual Diagram Kit, which provides felt shapes and lines to apply to a background, and the Tactile Diagram Starter’s Kit [31, 196].
Tactile-experience pictures, as discussed in [224], are graphics primarily used by children, created from wood, sandpaper and other materials with distinct tactile sensations. Build-up displays, by comparison, consist of several very thin layers of paper placed on top of each other to produce a contoured surface; household materials like string, wire and drawing pins can then be used to draw attention to landmarks of interest in the tactile scene. Finally, for fast, immediate generation of tactile documents, a raised-line drawing board, where a plastic stylus is run over a plastic film, can be used to produce raised lines.
While these types of tools are very useful, they are not suitable for mass-producing graphics. To provide documents in large quantities, one must turn to traditional embossing techniques, thermoform materials, swell paper, or computer presentation through tactile displays.
Embossing
Embossing in the context of this paper will refer to the printing of raised dots within a small distance of each other to create 2D structures. The dots are produced by embossing printers such as those listed in [10], or through heat transfer copying as discussed in [224]. These dots are usually the same distance apart as the standard Braille character, which is approximately 2.5 mm, permitting the easy generation of Braille text intermixed with other graphical elements. However, there are examples, such as the TIGER embosser [67, 70, 71], which provide more finely spaced dots for the production of near continuous raised lines and surfaces. A large list of embossers is available through the Royal National Institute of the Blind (RNIB) [168].
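With the roughly 2.5 mm dot pitch mentioned above, the positions an embosser targets for one six-dot cell reduce to simple grid arithmetic; the cell-to-cell advance used here is a nominal figure chosen for illustration:

```python
# Sketch: coordinates (in mm) of the six dots of one Braille cell on an
# embossed page, using the nominal 2.5 mm dot pitch mentioned above.
# The 6.0 mm cell-to-cell advance is an assumed figure for illustration.

DOT_PITCH = 2.5        # mm between adjacent dots within a cell
CELL_ADVANCE = 6.0     # mm from one cell origin to the next (assumed)

def cell_dot_positions(cell_index):
    """Return {dot_number: (x, y)} for a standard 2x3 six-dot cell.

    Dots are numbered 1-3 down the left column, 4-6 down the right.
    """
    x0 = cell_index * CELL_ADVANCE
    positions = {}
    for col in (0, 1):            # left column: dots 1-3, right: dots 4-6
        for row in range(3):
            dot = col * 3 + row + 1
            positions[dot] = (x0 + col * DOT_PITCH, row * DOT_PITCH)
    return positions
```

An embosser such as those surveyed in [10] would strike a dot at each of these coordinates; finer-pitch devices like the TIGER simply shrink the grid.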
Microcapsule paper
Microcapsule paper consists of polyethylene paper with a polystyrene microcapsule layer coating one side. These capsules expand when heated, raising areas of the paper and giving the medium its colloquial name of swell paper. Documents are produced by applying graphic elements to the paper with a dark colored ink pen, or through standard printing techniques. The microcapsule paper is then placed in a tactile image enhancer, which heats the paper and expands the capsules; the darker sections absorb more heat, resulting in areas raised higher than the lighter areas of the paper.
Alternatively, a pen with a heated tip can be used to draw freehand on microcapsule paper. This can be useful in certain situations, such as classroom interaction. However, many teachers and parents shy away from such devices, as there is a chance of burns occurring through skin contact with the heated tip [224].
Thermoforming
Thermoforming (vacuum forming) is the process of generating a tactile document from a pre-tooled die. A large metal die is molded into the shape of the document, which can include Braille and printed text, line graphics and multi-tiered graphics. The mold is placed under a PVC sheet and heated, which causes the sheet to form over it. When the material cools, the sheet firms around the mold and can be removed, creating a replica of the document. This process can be repeated as many times as desired [224].
Limitations of offline tactile documents
While all of the above technologies are in use by people with visual disabilities, they have several disadvantages:
1. Size: due to the need for features large enough to be recognized through the fingertip, these documents are always substantially larger than standard print documents. As a result, they can be bulky and awkward to transport. In the case of multilevel vacuum-form documents, stacking of materials may simply be impossible, resulting in storage problems.
2. Information loss: often a lack of space or resolution in the tactile medium results in the loss of fine detail. This loss of information can lead to the misinterpretation of data, or to the reader becoming confused during unguided navigation tasks.
3. Cost: while more mundane, this is still a serious problem for the community. While many ink-based printers are now (2009) less than 100 USD, the average embossing printer or tactile developing system costs several thousand dollars. This, coupled with the cost of the production media (e.g., microcapsule sheets), which ranges between 1.50 USD and 5 USD a sheet, limits the availability of such offline media.
4. Immutability: changes to documents are inevitable, and the only way to incorporate them into a tactile document is to regenerate a portion or the whole of the document. This is not only costly; it also makes it virtually impossible to ensure that documents are up to date.
Online document production
Clearly, many of the problems with tactile offline media derive from the fact that, by their very nature, they cannot adjust to the needs of an individual user. In order to compensate for this, research has attempted to provide access to online documents. Devices for the display of these types of online data are varied in their capabilities. A selection is reviewed below.
There are several examples of tablet displays which convey audio information associated with the tactile documents. A tactile overlay is placed on top of the display area and the user explores the surface with his/her fingers. As pressure is placed on the display, the user’s finger location is transmitted to a computer where the coordinate information is decoded to speech or other audio information. These types of displays can address the information loss problem found in offline documents through the audio annotations; however, the cost, reproduction and size issues are often not addressed. Examples of these devices are the NOMAD [88] tablet, the Talking Tactile Tablet [105, 106] and the IVEO system [217] produced by ViewPlus technologies that is intended for use with the Tiger embosser and associated software.
Truly dynamic displays, which can be refreshed in a matter of seconds so that a user can page through a document, are rarer and, in general, more difficult and expensive to produce. Nowhere is this more evident than in the case of the ill-fated optical-to-tactile converter (Optacon) [27, 41, 60, 77, 80, 127–129, 155–157, 174, 176]. Originally created in 1966, the Optacon was used by a large number of people with visual disabilities despite its substantial cost. The Optacon could scan almost any surface, including computer screens, and produce a tactile image on a small surface of 144 vibrating pins. This type of display gave access to all types of printed materials, from printed books to everyday items such as coins and receipts. The Optacon was also successful as a research tool in many diverse projects, including early electronic image processing [96, 97]; spatial cognition development [9]; interactive Braille output [226]; tutoring systems [54]; tactile exploration experiments [34, 108, 112]; and virtual textures [86]. Unfortunately, despite proposals for a new Optacon device as late as 1994 [133], the device was discontinued in 1996, leaving a large void in the community which has yet to be filled by a comparable device. In her 1998 open letter to the community regarding the fate of the Optacon, Barbara Kent Stein, who was at the time the First Vice President of the National Federation of the Blind of Illinois, stated [188]:
“Surely there is another approach to the whole problem, one that does not depend on speech at all. Why not develop a device to enable blind people to read the screen tactually? Why not turn visual graphics into tactile images?”
There are further examples of dynamic displays, such as pin displays similar to the DMD 120060 [131] and the NIST pin display [169], the wave-based displays discussed in [138], portable displays [199, 229], and many more, as listed in the extensive review by Vidal-Verdú and Hafez. Most notable is VideoTIM, which provides functionality similar to the Optacon's [1]. However, all of these devices, and several more like them, are available largely only in experimental settings and have yet to be produced at a low cost for the end user.
Text transcription
Text documents are perhaps the most common types of documents, and, therefore, the most important. As a result, it is not surprising that substantial efforts have been committed to rendering text for people with visual disabilities. The first electronic text transcription proposals were available in the early 1970s [104, 182–184, 186].
Audio presentation of text
Audio presentation of text can either be speech recordings, such as those found in most audio books and some digital talking books, or synthesized speech produced by text to speech technology, which can be combined with screen readers for access to computer text.
Audio book formats have been present in mainstream media for more than 50 years, with novels, textbooks and other printed materials being read into recordings by authors or celebrity readers. Indeed, there are still major initiatives to distribute such materials to blind populations throughout the world, with many thousands of recordings being produced every year [39]. These books were long available on various versions of analog tape [101], which could be navigated by rewinding and fast forwarding. This provided simple navigation, but due to the sequential nature of such recordings, readers faced significant challenges when they wanted to review specific sections of a document.
With digital media, books moved to CDs and portable digital music players [122, 143, 172, 223]. While many of the sequential navigation problems remained, such digital media made it possible to include chapter and section markers to assist navigation through the text [101].
Digital talking books (DTBs) and the process of their standardization are overseen by the Digital Accessible Information Systems (DAISY) Consortium, a non-profit organization started by leaders of international libraries for blind and other print-disabled readers. The DAISY Standard 2.0 was developed through an iterative process (as detailed in [101]) and now includes markup standards based on World Wide Web Consortium (W3C) languages such as SMIL, permitting the synchronization of audio presentation with the visual presentation of the document (e.g., audio description of video). Examples of technology for reading DTBs can be found in [44, 45, 101, 102, 134, 135].
In place of recordings, text to speech systems can be used to read text aloud automatically. Indeed, there are several early examples of monotone speech generation being used to convey information to equipment operators or in early telephony applications [79, 114, 146]. The best-known example in the area of accessibility is perhaps the Kurzweil reading machine, which performed basic rendering of text to speech from scanned printed documents [3, 98, 104, 130]. However, it is also recognized that a simple transcription of character representations to speech is insufficient for understanding [145]. Without sufficient prosodic cues for perceiving and evaluating the context of the information being presented, long streams of speech can be difficult for the listener to understand.
Today, while there is still work to do on prosodic processing, text to speech systems are available in many Latin, Germanic and other languages, including English, German, French [200, 201], Italian [47, 48], Japanese [222] and Chinese [116, 117]. While the construction of such systems is well understood, there are still documents which remain unavailable to people with visual disabilities. Developing nations where funding for transcription projects is limited, countries with multiple official languages, and languages with small speaking populations all remain problematic for the provision of text to speech output [50, 148, 179].
In order to take advantage of text to speech technology, documents must be transformed into an electronic form. This can, of course, be done by direct entry, as in the case of word processors. Alternatively, paper documents can be scanned into electronic form and optical character recognition (OCR) can be used to retrieve the character information. While OCR is a fairly mature technology, with examples of use since the early 1970s [2, 15, 33, 46, 49, 59, 137, 160, 180, 195, 216, 227, 240], poor scan quality, defects in the paper documents, or handwritten notes can all still result in errors in text recognition.
Enhanced visual presentation of text
For low-vision users, text access can be accomplished through the use of increased font sizes, which require either large screens or screen magnification technology. One resource for information on screen magnification is the recent article by Blenkhorn et al. in [20], where several architectures and design factors for screen magnifiers are discussed.
In addition to screen magnifiers, some people with low vision or particular color-vision deficiencies require alternative color contrast between text and background. Recent work in the BenToWeb project [17] demonstrates that the color contrast calculation developed for cathode ray tube television sets is still the most accurate at describing the perception of color contrast for the general user population. However, this work does not necessarily account for the preferences of an individual, and as such the ability to adjust the color of both the text and the document background is a requirement for accessibility.
Tactile presentation of text
In place of auditory presentation, touch can be used to present documents. For reading text there are a variety of Braille codes, the original having been created by Louis Braille in 1829, used for the transcription of text documents into tactile form [21]. These codes consist of characters of either six or eight dots, arranged in two columns of three or four dots. The dots are approximately 2.5 mm apart; however, the optimal spacing of the dots is more subtle and is an issue of much discussion whenever a new device is designed. Aside from text, there are several related Braille-style codes used to translate a variety of materials, such as music [58], flow charts [38], computer symbols [37], chemistry notation [32] and mathematics [141], into sequential strings of similar Braille characters. The following discussion applies to all of these codes.
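A minimal sketch of how such a code is structured: each dot number in a cell maps to one bit, which is exactly how the Unicode Braille Patterns block (base code point U+2800) is organized. The table below covers only the uncontracted letters a–j; the remaining letters and all contractions are omitted for brevity:

```python
# Sketch: uncontracted (Grade 1) Braille for the letters a-j, rendered
# as Unicode Braille Patterns. Each dot number n in a cell sets bit
# (n - 1) above the block's base code point U+2800. Letters k-z and
# contracted (Grade 2) forms are omitted for brevity.

DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5),
    "i": (2, 4), "j": (2, 4, 5),
}

def to_braille(text):
    """Translate the letters a-j into Unicode Braille cells."""
    return "".join(chr(0x2800 + sum(1 << (d - 1) for d in DOTS[ch]))
                   for ch in text.lower())
```

Real transcription software must additionally handle contractions, context switches and the formatting rules of the particular code, which is where the complexity discussed below arises.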
Historically, the focus has been on the transcription of printed documents into Braille, a task long performed manually by human transcribers. With the introduction of electronic computers, there was an initiative to alleviate some of the problems of manual transcription by facilitating the entry of text into computers and having the computer transcribe that text into Braille automatically [52, 83, 187, 228].
Today, automatic transcription of text into Braille is fairly commonplace, with Braille output being produced through embossing machines. Many such devices are available, with several different resolutions developed over the years [11, 28, 29, 71, 75, 87, 123, 170, 221, 231]. A recent survey of Braille embossers is found in [10], and the RNIB maintains a list of embossers available on the market today [168].
There is a more immediate form of Braille transcription and presentation which is available at a relatively low cost. Braille display terminals are small portable terminals that present either 20, 40 or 80 characters to a blind reader through a set of refreshable Braille cells. Many of these displays now come with Braille note-taking interfaces consisting of seven keys that can be used for navigating documents and recording Braille characters.
Non-standard text layout
With all of the discussed technologies for rendering and transcribing text, it would seem that this problem is, for the most part, solved. However, there are still some significant challenges that need to be addressed in the research community.
The vast majority of these challenges result from the 2D layout of text, which provides context for how to read the information. For example, a document may contain layout information indicating section headers, spacing marking paragraph breaks and, in some places, lists of information indented to indicate their importance to the whole document. Sighted readers take in all of this information through the overview that the visual sense facilitates. If, on the other hand, a sighted reader were to read the document one line at a time, seeing only between 20 and 80 characters on each line, the reading experience would be very different, and analogous to reading a document through a single-line refreshable Braille display. Indeed, a sighted reader can see at one time approximately 50 times more of a document than a blind reader perceives when using a single-line Braille display. This difference gives the sighted user the advantage that the text can be placed in context in the document as it is being read, without the increased load on working memory caused by viewing a document one line at a time.
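The line-at-a-time constraint described above can be sketched by chunking a document into 40-character windows, the view a single-line display presents; the window width is one common display size, and the sample text is illustrative:

```python
import textwrap

# Sketch: what a single-line, 40-cell refreshable Braille display "sees"
# of a document at any one time. Each pan of the display reveals the
# next window of at most 40 characters, with no surrounding layout
# context such as headings, indentation or paragraph spacing.

def display_windows(document, cells=40):
    """Split a document into the sequence of windows a one-line display shows."""
    return textwrap.wrap(" ".join(document.split()), width=cells)

doc = ("Section 2. Audio media. This section discusses how information "
       "can be conveyed through sound, starting from non-speech sounds "
       "and proceeding to synthesized speech.")
windows = display_windows(doc)
```

Note that the section heading, which a sighted reader spots instantly, is indistinguishable from body text once the document is flattened into these windows.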
Moving away from ordinary text documents such as novels and newspapers, there are several other types of text information which provide significant presentation difficulties when translated into sequential form. An example is the computer pseudocode presented in Algorithm 1.
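Rendered here as a Python sketch of the kind of pseudocode shown in Algorithm 1 (the variable names are chosen for illustration), an insertion sort exhibits exactly the features at issue: terse, user-defined names and indentation that carries the loop structure:

```python
def insertion_sort(arr):
    # Indentation alone marks where each loop body begins and ends:
    # structural meaning that is lost when the code is read aloud one
    # flat line at a time.
    for i in range(1, len(arr)):
        key = arr[i]            # terse, user-defined names: arr, key, i, j
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]  # shift larger elements one slot right
            j -= 1
        arr[j + 1] = key         # insert the saved element in its place
    return arr
```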
This simple algorithm for an insertion sort poses several problems for a screen reader, due to the combination of text with mathematical symbols (discussed further in Sect. 5) arranged in semantic groupings. In particular, significant meaning is contained in the indentation of the code, and this must be communicated to the blind reader. Add to this the non-standard notation in the user-defined variable names, and the algorithm would be communicated very poorly through a screen reader or other assistive technology. While there has been some research on reading and presenting source code [181, 218, 232], tools to process non-standard notation are uncommon.
Similar problems can occur in the processing of tabular data by speech synthesis. Although the presentation of table data is a heavily researched area, in particular due to the prominence of tables in the layout of hypermedia documents (as discussed in Sect. 7), there is no universally agreed upon solution. This is partially due to the large number of variations in table use and structure; however, it also depends on the intentions of the user when perusing table data [164, 230].
Mathematics presentation
Many of the solutions used for the presentation of text tend not to work well with mathematics. The reasons for this stem from the very structure of mathematical notation or, more precisely, from the plethora of audio descriptions a fragment of mathematical notation can take. For example, consider the quadratic formula:

x = (−b ± √(b² − 4ac)) / 2a
How should a screen reader vocalize such a formula? Should the numerator be read first or the denominator? Regarding the terms under the square root sign, is 4ac a product of two or three terms? The ambiguity resulting from the perception of the mathematical notation without understanding the intention of the author, when combined with the problem of vocalizing the notation in a predictable way, makes audio presentation extremely difficult. Of course, the alternative to audio presentation is to generate a tactile representation of the mathematical notation which can be explored with the hands. This section discusses the many approaches for both audio and tactile mathematics presentation, each of which has its own benefits.
Audio presentation of mathematics
Many of the earliest approaches to the presentation of mathematics drew on the availability of audio hardware to generate speech interpretations of the notation. This type of approach suffered from the same problems as the highly structured text reviewed in Sect. 4. Experimental evidence indicates that internalizing the structure of mathematical notation can be very difficult when it is presented through audio [191]. This may be attributed to the increased cognitive load on the reader, who must perceive and understand the notation while also navigating through the document to locate terms of interest.
Raman attempted to solve these types of navigation problems based on his own experience with university level computer science research papers. The Aster project introduced a customizable profile to specify how a user would prefer to read a document. Through a collection of user defined rules, the following options are available [164, 165]:
- browse the entire document;
- skip sections entirely;
- retrieve summaries of technical areas of the document;
- mark areas for recall;
- retrieve simplified or descriptive audio output on mathematical formulae;
- recognize patterns for specialized context renderings.
Whereas Raman’s [164] work focused on providing access to advanced technical documents, the work by Edwards and Stevens on the Mathtalk system was intended for high-school level and early undergraduate work in mathematics, in particular complex algebra [191].
Edwards and Stevens recognized that the key advantage of the visual sense is that readers do not need to remember all the information presented at one time. The fundamental difference between sighted and blind mathematicians was that sighted mathematicians use paper to record progress and to recall previously encountered mathematical symbols. In this way, the sighted mathematician can focus on the comprehension of what the symbols mean, as opposed to the sequence of presentation. The Mathtalk system allows audio browsing of algebraic equations through an active reading process, with the user participating in the display and review of mathematical notation [191].
The work by Edwards and Stevens on mathematics for people with visual disabilities is extensive; the following are general design recommendations from that work:
-
1.
Lexical cues can provide a means of breaking up algebra into unambiguous representations.
-
2.
Prosody of speech can provide a better means of understanding equation structure over lexical cues when used as an equivalent of the typographic rules for formatting algebra in print.Footnote 2
-
3.
A method of navigating the text at all levels must be provided. The user must be able to step through sections of the text to gain a preview of a complete document, and skip over objects which are not pertinent to the reading task. These rules must be extensible by the user on a situational basis, allowing the rules to change while the document is read.Footnote 3
-
4.
The user must be able to navigate through a formula with the application providing the ability to identify sections of the formula through audio cues and either read the contents of the formula or skip over the section entirely.
-
5.
Blind users will use various reading strategies for mathematics. These strategies must be taken into consideration when the mathematics interface is designed [190].
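To make the first two recommendations concrete, the sketch below linearizes a small expression tree into speech text, using lexical cues so that groupings such as fractions are unambiguous. This is an illustration only, not the Mathtalk implementation; the node encoding and cue words are invented for the example.

```python
# Illustrative sketch of lexical cues for audio algebra rendering.
# Not the Mathtalk system: node encoding and cue words are assumptions.

def speak(node):
    """Linearize an expression tree into cue-delimited speech text."""
    if isinstance(node, (int, float, str)):
        return str(node)
    op, left, right = node  # a node is an (operator, left, right) triple
    if op == "/":
        # Lexical cues make the extent of the fraction unambiguous.
        return f"begin fraction {speak(left)} over {speak(right)} end fraction"
    if op == "^":
        return f"{speak(left)} superscript {speak(right)} end superscript"
    words = {"+": "plus", "-": "minus", "*": "times"}
    return f"{speak(left)} {words[op]} {speak(right)}"

# (x + 1) / 2 -- the cues distinguish this from x + 1/2
expr = ("/", ("+", "x", 1), 2)
```

Spoken aloud, the two readings "begin fraction x plus 1 over 2 end fraction" and "x plus begin fraction 1 over 2 end fraction" cannot be confused, which is precisely the ambiguity the recommendation addresses.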
Recently, Gillan and Karshmer completed a large study on how people process mathematics when it is presented through audio and through print. Their results corroborate the design principles above [100].
Furthermore, recent work in the European project Linear Access to Mathematics for Braille Device and Audio-synthesis (LAMBDA) applies this principle of active reading in the design of an interface combining tactile and audio user interaction with mathematics [177].
Tactile presentation of mathematics
Currently, two options for the tactile presentation of mathematics are in regular use, namely Braille codes and the DotsPlus system.
Tactile codes
The Nemeth code was developed in 1968 and is the standard code for the tactile presentation of mathematics in North America. This standard uses context symbols to switch between the literary context and the mathematics context. The code was designed primarily as a transcription language: while it is recommended that transcribers have technical knowledge of the mathematical material, anyone who knows the Nemeth code is intended to be able to transcribe a written mathematical document directly [141].
A second commonly used mathematic code is the Marburg code, used primarily in the European Union. In comparison to the Nemeth code, which represents the syntax of mathematics, the Marburg notation combines content with presentation information. Through the use of prefix indicators for identifying parts of a formula, and spacing of characters and delimiter marks for displaying 2D mathematics in a linear form, the Marburg code is capable of representing the majority of mathematics through 64 symbols [14].
There remains some debate regarding the effectiveness of a “number mode” presentation style similar to that of the Marburg code and the literary Braille code, owing to the perceived “clumsiness” of complex mathematics presented in this manner. The alternative is to assign a unique symbol to each number, as is done in the Nemeth code once it enters the mathematics context, and in the GS code proposed by Gardner and Salinas [74]. Several other codes for mathematics have been proposed and are or have been in use, such as the Halifax code [85], as well as Russian and French codes.
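The difference between the two styles can be illustrated with Unicode Braille patterns: literary number mode prefixes a number sign to letter-shaped digits, while Nemeth-style digits drop each dot one row lower within the cell. The sketch below is a simplification for illustration only, ignoring the many context rules of real transcription.

```python
# Sketch contrasting two Braille number representations (illustrative;
# real Nemeth/Marburg transcription involves far more context rules).

DOT_BITS = {1: 0x01, 2: 0x02, 3: 0x04, 4: 0x08, 5: 0x10, 6: 0x20}
NUMBER_SIGN = chr(0x2800 | 0x04 | 0x08 | 0x10 | 0x20)  # dots 3-4-5-6

# Upper-cell dot patterns of the letters a-j, reused as digits 1-9, 0
UPPER = {"1": (1,), "2": (1, 2), "3": (1, 4), "4": (1, 4, 5), "5": (1, 5),
         "6": (1, 2, 4), "7": (1, 2, 4, 5), "8": (1, 2, 5),
         "9": (2, 4), "0": (2, 4, 5)}

def cell(dots):
    """Compose a Unicode Braille character from a tuple of dot numbers."""
    code = 0x2800
    for d in dots:
        code |= DOT_BITS[d]
    return chr(code)

def literary(number):
    """Number-mode style: a number sign prefix, then letter-shaped digits."""
    return NUMBER_SIGN + "".join(cell(UPPER[digit]) for digit in number)

def dropped(number):
    """Nemeth-style dropped digits: each upper dot moved down one row."""
    shift = {1: 2, 2: 3, 4: 5, 5: 6}
    return "".join(cell(tuple(shift[dot] for dot in UPPER[digit]))
                   for digit in number)
```

The dropped form needs no prefix cell, which is one reason it is preferred inside dense mathematics, while the number-mode form reuses cells the literary reader already knows.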
The history of translating mathematics to Braille codes is not as robust as that for translating text; however, there are several options documented in the literature. One such attempt at providing a system to translate ad hoc mathematics documents for students is the work of Fred Lytle, a chemistry professor at Purdue University. While teaching blind students in his chemistry class, he had been told that it was impossible to generate Nemeth code automatically from a document specification, owing to the context problems associated with such a transcription. Lytle nevertheless prepared a mathematics-to-Nemeth-code translation program as a macro set for WordPerfect 7 on Windows personal computers [119].
Recent work in the transcription of mathematics to tactile codes is the work done through the Universal Mathematics Access Project spearheaded by the University of South Florida Lakeland and the University of New Mexico. The overall goal of this project is to provide a Universal Math Converter which will convert from traditional mathematics authoring languages, such as TEX, OpenMath and MathML, into either Nemeth or Marburg Braille [147].Footnote 4
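To suggest the flavour of such a conversion, the sketch below linearizes a trivially small Presentation MathML fragment. The output notation is invented for illustration and is neither Nemeth nor Marburg; a real converter handles a far larger vocabulary of elements and all the contextual rules of the target code.

```python
# Illustrative only: linearizes a tiny Presentation MathML fragment.
# The output notation is invented; real converters target Nemeth or
# Marburg code and support far more of the MathML element set.
import xml.etree.ElementTree as ET

def linearize(elem):
    tag = elem.tag.split("}")[-1]  # strip any XML namespace prefix
    if tag in ("mi", "mn", "mo"):
        return elem.text.strip()   # identifiers, numbers, operators
    children = [linearize(c) for c in elem]
    if tag == "mfrac":             # numerator/denominator, explicit grouping
        return f"({children[0]})/({children[1]})"
    if tag == "msup":
        return f"{children[0]}^({children[1]})"
    return " ".join(children)      # mrow, math: concatenate in order

mathml = ("<math><mfrac><mrow><mi>x</mi><mo>+</mo><mn>1</mn></mrow>"
          "<mn>2</mn></mfrac></math>")
```

Even in this toy form, the converter must make the two-dimensional extent of the fraction explicit in the linear output, which is exactly the problem the Braille mathematics codes solve with their own delimiters.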
Recent work at the University of Western Ontario has focused on the complete translation of technical documents containing both simple text and mathematics to their Braille equivalents from a plain TEX source. This translation is then presented to the user through a document browser on a refreshable pin display [56, 95].
DotsPlus
Standard literary Braille has low usage, with estimates ranging from 10 to 20% of all blind or low-vision readers knowing the code; mathematics Braille codes have even lower usage. Particular reasons for this lack of use are the need to remember complex symbol combinations and the contextual overloading of symbols.
DotsPlus Braille attempts to address this problem by combining graphical characters with specialized numerical Braille symbols. As described by the Science Access Project (SAP),Footnote 5 DotsPlus addresses some of the problems associated with traditional mathematical Braille, specifically:
Translation. Mathematics is translated into tactile form through DotsPlus Braille font characters, which are substituted for their print equivalents.
Numbers. DotsPlus avoids the number mode described above, instead using single-cell Braille numbers: an additional dot is added to each character representing a digit in literary Braille number mode.
Exotic symbols. Exotic symbols such as summation and integration symbols are represented as direct tactile translations of their visual equivalents.
Combined with the Tiger Braille printers, the DotsPlus system is an alternative for readers at all levels of mathematics. Those who have lost their sight later in life can use their residual visual memory to process the outlines of exotic symbols. Further information regarding DotsPlus and the Tiger embosser can be found in [67, 69, 70, 72].
Graphics presentation
Diagrams are critical for the process of collecting, organizing and interpreting data, as well as for the exchange of information within office or education environments. Inaccessible graphics are a barrier that must be addressed for these settings to be inclusive for people with visual disabilities [107].
Taxonomies of graphics
A wide variety of picture types appears throughout media. Graphics can be divided into two very broad categories based on their presentation formats [94]. The first category comprises representations of real-world phenomena. These graphics, referred to as pictures, require precision in the placement of their graphical elements in order to duplicate the features of their real-world equivalents. By comparison, diagrams map real-world ideas to abstract representations. In diagrams it is much easier to separate the meaning of the diagram from its presentation. Indeed, in many cases a diverse collection of diagrams can represent a single set of data, as, for example, in histograms and pie charts.
The two categories of graphics mentioned above can be further broken down into a classification based on the intended use of the final graphic document. Under the category of pictures, there are photographs, navigational maps and structure diagrams such as architectural plans and medical sketches.
Diagrams are a larger category, as humans tend to use many kinds of diagrams to organize data and to ease interpretation tasks. Sub-categories of diagrams include statistical charts, such as bar charts, histograms and pie charts, and graph diagrams (i.e., collections of labeled nodes and edges, including trees and modeling language diagrams).
Way, in his treatise on tactile graphics [224], further distinguishes between these types of graphics, pointing out that time can play a part in understanding graphical content. Static graphics are those which, once complete, will not change, like a photograph or a sketch, while dynamic graphics are likely to change over time, such as software modeling diagrams. While such a classification system is reasonable, it is perhaps not flexible enough to encompass all graphical documents. Consider, for example, a set of architectural blueprints for an office building: the blueprints will change a great deal during the initial design phase, but once construction has started they are unlikely to change. Later, when the building requires renovation, the blueprints will be examined and changed according to the needs of the clients. This type of punctuated change occurs in many types of graphics, including architectures for buildings and software, city maps (as new roads are built) and circuit diagrams. This implies an iterative life cycle for graphics: a graphic has its requirements specified, it is authored and, after an arbitrary amount of time, the requirements change and the graphic is updated.
While the above taxonomies provide an understanding of how to interpret graphics, they do not provide hints regarding how audio/tactile diagrams should be presented to a reader with a visual disability. Recent conferences on tactile graphics have shown that this is a question which still defies a precise answer, as there is no consensus on what makes a “good” or “meaningful” tactile graphic. Clearly, no single solution can address all of these different types of graphics. As a result, a large number of projects have approached graphics presentation through audio, tactile/haptic or multimodal presentation.
Audio presentation of graphics
In their recent survey of audio presentation of diagrams, Brown et al. [24] identified several design principles which are required for non-visual diagram access. These principles also apply to tactile and multimodal presentations of diagrams.
-
Overview. In much the same way that Stevens [191] advocates providing navigation from the general to the specific in mathematical equations, diagrams must provide an external reference for the reader so that organizational information does not need to be completely internalized. This is difficult to achieve for highly detailed tactile pictures, where resolution and workspace size are limited, as well as in audio, due to its inherently sequential nature.
-
Search. A facility to search for specific pieces of information or types of information is essential for providing an understanding of diagram contents [16].
-
Recognition. The search mechanism provides access to the explicit information in a diagram, such as the nodes or edges of a graph; however, there should also be a means of providing access to implicit features of interest within the diagram. The example used in [24] is locating and describing cycles within a simple graph.
-
Representational constraints. Many approaches emphasize the use of a diagram form similar to the printed form used by sighted people, in order to support collaboration between mainstream readers and readers with visual disabilities. Similar results can be seen in work on the presentation of tree diagrams [6], histograms [132] and technical diagrams [158]. However, Challis et al. [30] observed in their work on music presentation that simple visual-to-tactile translations do not always result in a workable document, and thus any graphic design must be tempered with user evaluation.
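The recognition principle above asks the interface to surface implicit features such as cycles in a graph. A standard depth-first search, sketched below for simple undirected graphs, is one way a diagram browser could detect such a feature before announcing it through audio; the adjacency-list encoding is an assumption for the example.

```python
# Sketch: cycle detection for the "recognition" principle above.
# Assumes a simple undirected graph encoded as a dict of adjacency lists;
# a real diagram browser would also describe the cycle it finds.

def has_cycle(adj):
    """Return True if the graph contains a cycle, False otherwise."""
    visited = set()
    for start in adj:
        if start in visited:
            continue
        stack = [(start, None)]  # (node, the node we arrived from)
        while stack:
            node, prev = stack.pop()
            if node in visited:
                return True  # reached along two distinct edges: a cycle
            visited.add(node)
            for nxt in adj[node]:
                if nxt != prev:  # do not re-walk the discovery edge
                    stack.append((nxt, node))
    return False
```

A triangle of nodes reports a cycle, a simple path does not; an audio interface could then offer to enumerate the nodes on the detected cycle in reading order.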
These guidelines provide a good start for the design of tactile diagrams; however, the resulting diagram will be further constrained by the medium in which it is presented. Further examples and guidelines on how to read various diagram types have recently been the focus of the Technical Drawings Understanding for the Blind (TeDUB) project, and an initial document has been published for study materials [40]. This work is significant in that the researchers approached experts and users regarding what type of information should be conveyed through particular types of diagrams, such as those of software architecture. This user engagement produced a deeper understanding of the intentions of authors and readers, enabling sensible interactions that not only allow the user to perceive aspects of the diagram, but also aid in the navigation and comprehension of its meaning.
This type of engagement of the target audience, from the perspectives of both the author and the reader, is essential for the future improvement of audio presentation of graphics.
Tactile presentation of graphics
When trying to represent pictures, the constraints on presentation are fixed: the tactile picture must duplicate the visual scene as closely as possible. For example, a change in the location or size of a feature in a tactile map could lead to a misunderstanding of the layout of a room, or of the distance between two cities.
For the tactile rendering of photographs, the most notable example is Way’s work on the TACTICS project [225]. This system uses image processing techniques to emphasize boundary areas and height differences in order to generate a static tactile picture which can then be explored by a blind person.
Tactile maps are one of the most heavily researched areas regarding pictures. In particular, Ungar et al. [18, 202–215] conducted several research projects regarding the exploration and encoding of information on tactile maps by blind individuals.
A conference on tactile graphics [162] shows that there is no consensus among the research and user communities regarding which guidelines should be considered as standard. It may be that such a debate is the result of existing standards being too constrained in their representations of information. This over-specification results in graphics which are accessible to a very narrowly defined user group, but inaccessible to others with only minor differences in accessibility requirements. Examples of problems which may arise from over-specific standards are as follows:
-
Low-vision users prefer to take advantage of their residual sight and, as a result, diagrams prepared without enlarged fonts or without extreme contrast will be less accessible.
-
The age of sight loss can play a role in graphic interpretation, as residual visual memory can help late-blind individuals interpret diagrams with which they are familiar, such as the math symbols discussed in the DotsPlus project [68].
-
Different cultural backgrounds lead to different expectations for diagram presentation. For example, a recent project in Japan produced a set of guidelines for tactile graphics [65]. Although it is clear from this work that there are differences between this and other user groups, it is not clear what differs about Japanese users that makes North American or European tactile graphics guidelines inapplicable.
-
Tactile sensitivity may be low with some users; as a result, more space between tactile features may be needed.
Due to these problems, guidelines must be very precisely specified for a particular user group (see the research done by Jacko et al. [89–91] in their work on elderly low-vision adults and the specification of visual profiles), or very general guidelines must be specified.
An investigation of existing tactile graphic standards has resulted in the following collection of tactile features, which seem to be accepted by the research and user communities.
-
1.
Tactile symbols should be simple [30, 36]. In this case, simple refers to the amount of time which is needed to comprehend a specific symbol. For example, a star symbol requires more time than a circle, due to the need to count points.
-
2.
Consistent mapping of tactile shapes to concepts is necessary to enhance comprehension [30, 106].
-
3.
There should be a minimal number of tactile symbols used to reduce the cognitive load of the reader (preferably fewer than 15 symbols [106]).
-
4.
Tactile symbol design should relate to the information being represented. Overly abstract symbols will require the reader to consult a legend or an expert reader frequently [106]. This relates to the problem of presentation consistency between sighted and non-sighted formats, as symbols which are shared will be more recognizable by people who became blind later in life.
-
5.
Diagrams should avoid disconnected components with excess white space between them. Large amounts of empty space lead to disorientation of the reader [30, 36]. However, this must be tempered with the knowledge that objects cannot be spaced too closely together, or they will be indistinguishable from one another.
-
6.
Consistent line type and line size are important factors when attempting to have the user follow a specific path. For those with low vision, highly contrasting colors are also important in this task [36, 106].
-
7.
Braille labels should be kept to a minimum due to the large amount of space required for them [36].
Multimodal presentation of graphics
A common trend in tactile graphics in recent years is to combine audio output with tactile pictures to aid navigation and comprehension. This is usually achieved by placing a static printout on a touch-sensitive pad which transmits finger positions to a computer, which then plays associated sounds such as those discussed in Sect. 3. These systems have the advantage of being able to communicate layers of both speech and non-speech sounds to the reader in conjunction with tactile exploration. Examples of such documents can be found in the work on the Talking Tactile Tablet by Touchgraphics Incorporated and the IVEO Tablet by ViewPlus Technologies [105, 106, 217]. One of the first such systems was proposed in detail in [158].
There remain several open problems with this type of technology. First, complications arise from the interruption of audio information through repeated interaction with the touch pad; this interruption can lead to a “stuttering” effect in the audio playback. Second, there is a problem with detecting finger positions on the display: even at the low resolution of such displays, it is difficult to determine the exact location of the user’s fingers. This can result in inaccurate triggering of the audio annotations, or in the user receiving no audio information at all.
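One common way to suppress the stuttering effect is to replay an annotation only when the finger enters a different region of the overlay. The sketch below illustrates this idea; the class and the rectangle-based overlay format are assumptions for illustration, not a description of any particular product.

```python
# Sketch of a touch-pad audio overlay (names and the overlay format are
# assumptions). Regions are axis-aligned rectangles mapped to audio
# annotations; replaying is suppressed while the finger stays inside the
# same region, avoiding the "stuttering" restart problem described above.

class AudioOverlay:
    def __init__(self, regions):
        # regions: list of ((x0, y0, x1, y1), annotation) pairs
        self.regions = regions
        self.current = None  # annotation the finger is currently inside

    def hit(self, x, y):
        """Return the annotation under (x, y), or None."""
        for (x0, y0, x1, y1), label in self.regions:
            if x0 <= x <= x1 and y0 <= y <= y1:
                return label
        return None

    def touch(self, x, y):
        """Return an annotation to speak, or None if nothing new."""
        label = self.hit(x, y)
        if label == self.current:
            return None          # same region: let the current audio finish
        self.current = label
        return label
```

Repeated contact inside the same region then yields no new utterance, while crossing into a neighbouring region triggers its annotation once.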
The World Wide Web
The Internet and the World Wide Web (WWW) have changed the way people interact with information. In the last 10 years, information on just about every subject imaginable has been made available for users to download and peruse at their leisure. The web originally held a great deal of potential for helping to eliminate the gap in access to information between those with sight and those without. Web pages that consisted of mostly text and few graphics promised to be a resource accessible through screen reading technology and Braille displays. However, as connection speeds increased, users began to demand more variety in their media. When new technologies were designed to provide graphics, games and more on the web, they were produced so quickly that they rarely took accessibility into account. Instead, companies focussed on providing more content, faster, in what seems to be a continuing downhill trend in accessibility. With the majority of new web sites consisting of extensive non-text, visual content without a descriptive audio or tactile counterpart, much of this content is unavailable to people who are blind.
There are several thematic areas in the literature on how to make web documents more accessible for people with visual disabilities, including guidelines for governing how web sites are created, specialized browser design for disabled users, navigation and presentation strategies, and semantic de-construction of web page content.
Identifying the problems
When examining web pages, it is easy to see that they suffer from many of the same problems present in other types of documents, such as text in non-standard layout and graphics embedded in documents without any kind of alternative access to the information.
These problems are compounded by the very nature of web pages and web sites. First, the documents are intended to be viewed online, with little thought given to how a hard copy of a web page should look or be generated. It is therefore difficult to generate an accessible hard copy of a document automatically for either a sighted or a blind person. With the addition of animated graphics and multimedia applications, automatic rendering is not a viable option.
The following sub-sections summarize the features in web pages that are significant barriers to access.
Navigation by hyperlinks
One significant problem with web pages is the method of navigation between individual pages. In a book, the method of moving to new content is obvious: the reader turns the page. In the case of web pages, hyperlinks attached to text within the actual document are used to move from page to page to access content. There have been several efforts to understand how to alternatively present such links to people with visual disabilities, including Raman’s work on aural stylesheets for web pages [163], and the work of Petrie et al. [136] on the use of earcons and auditory icons in presenting navigation information.
Frames
Frames are a navigational challenge for anything but a point-and-click interface, owing to the need to assign window focus to a specific frame before navigating the links it contains. Even though there are features which can make frames more accessible, such as frame titles and the noframes XHTML element, these features are seldom used or, if used, are improperly composed (e.g., a frame title such as “Top Frame”) [53]. To address these challenges, it is important to inform web developers of alternatives such as using the div XHTML element for layout.
Tables
As observed by Raman [163], tables are used in two significantly different ways within web pages. The first is to organize relational data. In this case, tables produce many of the same problems observed in [26] regarding the navigation of structured information; in particular, users are unable to orient themselves within the broader space well enough to understand the information in context. In such cases, tables must be marked up appropriately with summary data, as well as row and column headings which can be read by a screenreader. The second use for tables is the layout of text and graphics, which should be avoided in favor of the now-standard div element.
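A screenreader or repair tool must therefore decide which of the two uses a given table serves. The following heuristic sketch (an assumption for illustration, not a standard algorithm) treats a table as relational data when it carries the markup a screenreader relies on, such as header cells, a caption or a summary attribute, and as layout otherwise.

```python
# Heuristic sketch (an assumption, not a standard algorithm): classify a
# table as "data" when it carries screenreader-oriented markup (header
# cells, a caption, or a summary attribute), and as "layout" otherwise.
from html.parser import HTMLParser

class TableClassifier(HTMLParser):
    def __init__(self):
        super().__init__()
        self.is_data = False

    def handle_starttag(self, tag, attrs):
        if tag == "table" and dict(attrs).get("summary"):
            self.is_data = True      # a non-empty summary attribute
        if tag in ("th", "caption"):
            self.is_data = True      # header cells or a caption

def classify(table_html):
    parser = TableClassifier()
    parser.feed(table_html)
    return "data" if parser.is_data else "layout"
```

Real pages defeat simple heuristics in many ways, which is part of why the guidelines ask authors to mark up data tables explicitly rather than leaving tools to guess.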
Graphical content
Web pages can contain informative images [152], which convey information related to the page content, and decorative images which are unrelated to the actual content (e.g., advertisements, bullet graphics). Additionally, navigation links can be attached to graphics; these can be difficult to navigate, as the graphics seldom have descriptive text associated with them [53, 154].
A recent study based on interviews with print disabled users indicated that the guidelines for providing alternative text and long description text are still insufficient. The following were cited in the interviews as problems [152]:
-
1.
There was consensus among users that not all images require descriptions. In particular, those images which are used for spacing and filler should have empty strings provided in place of descriptions in order to facilitate skipping over these images.
-
2.
The WCAG guidelines state that a minimum of two to three words should be used to describe all images. For informative images that do require description, this is insufficient to convey what the graphic is supposed to communicate.
-
3.
Descriptions should augment the information that is already contained in the body or caption text.
An investigation of over 100 web pages, based on the results of these interviews, showed that 71% of the informative images contained descriptive comments, while only 10% of the decorative images had associated descriptions. This is clearly far below the goal of all informative images having such descriptions [152].
Application content
Most recently, high-speed Internet access has made it possible to introduce full-fledged applications into web sites. These applications, initially limited in functionality, are now dominant in web site production. Tools like Flash and Java servlets give the web developer a great deal of flexibility but, at the same time, demand even more care to ensure that content remains accessible to people with disabilities. While there has been previous work examining simple web applications, such as Java applets [53], to date there has been no extensive study focussed on the accessibility of such web applications.
Standards, guidelines and legislation
There have been several attempts to provide guidance and regulatory controls for the World Wide Web. While these standards are well regarded in the web development community and endorsed by governmental agencies, they are seldom followed by industry, whether through lack of training or, worse, apathy towards what is perceived as a small segment of the population.
The most widely known standards and guidelines come from the World Wide Web Consortium (W3C) [61] through the Web Accessibility Initiative (WAI). This organization provides a forum for researchers and other collaborators to contribute to the creation of guidelines for governing how information is to be presented on the web. There are several groups addressing a number of topics, including: Web Content Accessibility Guidelines (WCAG), Scalable Vector Graphics (SVG), Authoring Tool Accessibility, and markup languages (among others).
These guidelines are available freely, and involvement in the groups is encouraged to anyone who has expertise and interest in participating. This open format provides a significant contribution from both public and private organizations interested in accessible content on the web.
Colwell and Petrie examined WCAG 1.0 [35, 150]. Their studies looked at both the readability of the guidelines themselves and their use in developing new web pages. These experiments showed that there were several problems with navigation of the standards document, which caused developers to make mistakes while creating web pages. It was also found that some of the guidelines did not produce optimal results in creating accessible content.
In a recent study, Freitas et al. [61] observed that many of the problems with generating accessible content come from the lack of support for developers, who often do not have time to read and learn such a large set of guidelines. This, coupled with the fact that many developers use drag-and-drop web page authoring tools which often lack support for accessibility, leads to inaccessible content. Fortunately, some of these problems may be mitigated as software companies begin to include accessibility standards and validation technology in their products [142].
One factor behind developers’ limited awareness and understanding of their responsibility to the disabled community is the sheer number of regulations which govern Internet content. There are not only international guidelines, as published by the W3C, but also local government regulations. There is also the question of international jurisdiction: should a developer from Canada respect the content guidelines of Brazil? The answer is not clear. Several policies from multiple nations may overlap in content, but an extensive study, and perhaps an international treaty, is required to ensure that the global Internet remains accessible to people with disabilities around the world. For references on international legislation and policy the reader is directed to [219].
There are some tools to help developers provide accessible content without an explicit knowledge of the regulations. These tools come in the form of checklists and validation programs. The WPASET checklist evaluates the design of a web site through a subjective set of questions about how accessible the developer has made a web page. As mentioned in [150], this checklist format lends itself to a certain amount of bias, since the developer evaluates his or her own work; such a checklist might be more valuable if completed by an external referee. The validation programs, for the most part, do not perform their functions very well. Indeed, as discussed in [150], these automatic checking tools often fail to help the developer identify problems. This could be attributed to the designers of the validating programs misunderstanding the original guidelines, or to the inability of these programs to check qualities of the content as opposed to its markup. For example, an automatic checking tool can only detect whether an alternative text tag is present; it cannot check whether the content of that tag is correct.
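This limitation is easy to see in code: the minimal checker sketched below can find images with no alt attribute at all, but an image whose alt text is present yet meaningless passes unnoticed.

```python
# Minimal sketch of what an automatic checker can and cannot do: it can
# report images missing an alt attribute entirely, but "alt present"
# says nothing about whether the text is meaningful.
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []   # src values of images with no alt attribute

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if "alt" not in attributes:
                self.missing.append(attributes.get("src", "?"))

def missing_alt(html):
    checker = AltChecker()
    checker.feed(html)
    return checker.missing
```

Note that an empty alt attribute is treated as present: per the interview findings above, an explicitly empty string is the correct markup for decorative images, so only a wholly absent attribute is flagged.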
The most recent development in this area is the completion of the Benchmarking Tools and Methods for the Web (BenToWeb) project, an EU initiative working with the W3C Web Accessibility Initiative (WAI) on the development of evaluation and validation techniques and tools. This project has produced a large number of results [17] on a wide variety of accessibility issues, including:
-
An evaluation of color contrast equations regarding how well they describe users’ perceptions.
-
An evaluation of factors affecting navigational consistency between web pages.
-
An extensive survey regarding awareness and knowledge of website owners and designers.
-
A comparison of the accessibility and usability of validation tools.
-
The creation of a test suite for validating test tools and methods regarding their correctness and completeness in testing WCAG guidelines.
Browsing solutions
Several authors have examined the needs of people with visual disabilities in accessing web documents. Compiled here are recommendations, based on interviews and usability studies, for presenting web content to people with visual disabilities. For web browsers designed specifically for people with visual disabilities, the reader is directed to Raman’s work [163], the ACCESS project and its DAHNI browser [154], the Brookestalk project [239], the SMART web browsing system by Truillet et al. [198] and the Home Page Reader by Asakawa [7]. A review of early web browser systems can also be found in [198].
There must be a means of providing an overview of the document, giving users the opportunity to determine whether the document contains the information they require, or links to other documents in which they might be interested. Several projects by Zajicek et al. [237] have provided evidence that features like headings can be used to build conceptual overviews for users with visual disabilities, while hyperlinks typically do not provide an appropriate overview, as they represent other documents rather than the current one. Also tested in [238] was the use of keywords, which proved to provide some context on the contents of a page; however, tri-grams (three consecutive words) caused difficulty in understanding a document, and an abridged text format was not very successful in communicating a page’s purpose.
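A heading-based overview of the kind tested by Zajicek et al. can be sketched with the standard-library HTML parser; this is an illustration only, and real browsing aids do considerably more.

```python
# Sketch: extract a heading outline of the kind discussed above, as raw
# material for a spoken conceptual overview of a page.
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    HEADINGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

    def __init__(self):
        super().__init__()
        self.outline = []   # (level, text) pairs in document order
        self._level = None  # heading level currently being collected
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag in self.HEADINGS:
            self._level, self._text = int(tag[1]), []

    def handle_data(self, data):
        if self._level is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag in self.HEADINGS and self._level is not None:
            self.outline.append((self._level, "".join(self._text).strip()))
            self._level = None

def outline(html):
    parser = HeadingOutline()
    parser.feed(html)
    return parser.outline
```

The (level, text) pairs preserve the nesting of the document, so an audio interface can read the outline as a table of contents before the user commits to reading the body.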
For navigation, interfaces should include the ability to re-read sections of the document at various grammatical levels, allowing the user to review paragraphs, sentences or single words. This navigation must also provide a means of gathering related features for review. This includes keywords, headings (which are specified in the W3C Web Content Accessibility Guidelines (WCAG) as requirements for accessibility) and link structures. In addition, providing a means of traversing back along a known path to a previous location in a document or of returning to a known location is suggested for orientation of the reader [154].
The majority of the solutions designed to date have extremely simple interface controls. In the case of the Brookestalk browser [239], functionality is mapped away from graphical buttons to the function keys on the keyboard of a personal computer. In the case of the Home Page Reader designed by Asakawa [7], all functionality (almost 100 features) was mapped to the keypad. Each of these implementations overloads the mappings on the interface buttons, and it is surprising that the participants not only learned the interfaces but excelled at using them.
The ACCESS project designed a completely new web browsing interface that maps common web browsing functions onto a series of tactile buttons; this was shown to be effective in communicating the intent of the interface to the users included in the usability tests. Coupled with these tactile interfaces are auditory interfaces which provide both speech feedback (for text and link names) and non-speech sounds for indicating events that occur in the environment. In fact, the work in [136] shows that such non-speech sounds greatly help the user navigate the complex interfaces required for web browsing.
Web page analysis solutions
Web pages, due to their markup languages, are rich in structural information. The syntax of HTML and of more advanced markup languages, such as XML, provides information about how pages are structured. It is therefore not surprising that much research has been devoted to leveraging this information to make the presented content more accessible to readers with disabilities. The earliest such work, Raman’s aural cascading style sheets [163], later adopted by the WAI, uses the syntax of cascading style sheet files to define audio presentation details.
Alternatively, web pages can be restructured to provide different views depending on the navigation methods of the user or the task which is to be performed at a given time. For example, if the user wishes to review the contents of the page, it may be worthwhile to provide a table of contents based on the heading tags provided in the current XHTML specifications (2008).
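This heading-based restructuring can be sketched with only the Python standard library. The fragment below is an illustration of the general technique, not code from any of the cited systems; it collects (level, text) pairs for heading tags, from which a navigable table of contents could be rendered in audio or braille:

```python
from html.parser import HTMLParser

class HeadingTOC(HTMLParser):
    """Collect (level, text) pairs for h1-h6 tags to build a table of contents."""

    def __init__(self):
        super().__init__()
        self.toc = []
        self._level = None   # heading level currently being read, if any
        self._text = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self._level = int(tag[1])
            self._text = []

    def handle_endtag(self, tag):
        if self._level is not None and tag == f"h{self._level}":
            self.toc.append((self._level, "".join(self._text).strip()))
            self._level = None

    def handle_data(self, data):
        if self._level is not None:  # only keep text inside a heading
            self._text.append(data)

parser = HeadingTOC()
parser.feed("<h1>News</h1><p>Story text.</p><h2>Sports</h2><h2>Weather</h2>")
# parser.toc now holds [(1, 'News'), (2, 'Sports'), (2, 'Weather')]
```

The nesting of levels mirrors the document outline, so a screen reader could announce the list and let the user jump directly to a chosen section.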
This type of approach is purely syntactic. Looking at the structure of web pages, it is very easy to see that sections of the page serve very specific purposes. For example, it is common to have a navigation pane present on the left-hand side of the page for easy access to links within the site. This grouping of content provides a certain amount of information regarding the use of such links. This information is apparent to sighted users due to color, position and other sight-dependent attributes. The same information should be encoded overtly for a blind user, but to do this the intention of the content must be interpreted, and this can be a difficult task, given the non-conformity of web pages. An example of an attempt to harness this semantic information is presented in [64], where semantic groupings permit the addition of an information-rich table of contents.
Finally, Pontelli et al. [159] propose a semantic representation of HTML structure (specifically for tables) in a graph format. This graph is hierarchical in nature, with the different levels of the structure representing a more granular view of the data contained within the structure. The first tier in the hierarchy represents the table itself, the second tier represents the rows and finally the third tier represents the data contained within the cells. Combined with this representation is a domain specific language (DSL), which is used to specify navigation through the links of the table and the hierarchy levels themselves. The intent of this system is to provide standard viewing annotations defined by the syntax definitions, but also to include separate DSL descriptions that would help govern the navigation of the user through the data. Additionally, there may be opportunities for learning techniques to predict future viewing from former behavior of a user.
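A minimal sketch of such a three-tier hierarchy, with a trivial navigation function standing in for the DSL, might look like the following. This is illustrative only; the actual graph format and domain specific language of Pontelli et al. are considerably richer:

```python
def table_hierarchy(rows):
    """Tier 1: the table node; tier 2: row nodes; tier 3: cell nodes."""
    return {
        "kind": "table",
        "children": [
            {
                "kind": "row",
                "index": i,
                "children": [
                    {"kind": "cell", "index": j, "value": cell}
                    for j, cell in enumerate(row)
                ],
            }
            for i, row in enumerate(rows)
        ],
    }

def navigate(node, path):
    """Descend the hierarchy one level per step, by child index."""
    for step in path:
        node = node["children"][step]
    return node
```

For example, `navigate(table_hierarchy([["a", "b"], ["c", "d"]]), [1, 0])` reaches the cell holding `"c"`. A navigation language layered on such a structure can offer the user progressively finer views (table, then row, then cell) instead of a flat reading order.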
Multimedia presentation
Multimedia content is becoming more common on the web. The combination of text, graphics, video and audio presents great challenges in customization and personalization of contents for individuals with disabilities.
The MultiReader project had the goal of investigating and understanding the problems associated with navigating multimedia documents by both mainstream and disabled user groups. Since its inception, this project has provided not only guidelines on how multimedia information should be presented in order to optimize the comprehension of the reader, but also the design, implementation and testing of a prototypical application which applies the results of their user studies. The MultiReader application went through several iterations and its architecture serves not only as an example of how complex the problem of navigating multimedia is, but also as an indicator of how far commercial applications must evolve before they are truly accessible to all users.
The original set of requirements needed for the application were obtained through focus groups including mainstream users, users with visual disabilities, hearing disabilities and people with specific learning difficulties such as dyslexia. The requirements from the interview sessions were refined through iterative testing of the MultiReader prototype to produce a set of access requirements for each of these user groups [151].
Haptics
The newest technology for information access by people with visual disabilities is haptic technology. As this field is extensive, only a selection of results is presented here.
In this section the broad term haptic sensation is used to refer to the detection of external stimuli such as kinesthetic forces, vibration and temperature through the skin. Haptic interfaces or simply haptics is the general term used in the research and corporate communities to describe the interfaces which can connect the haptic sensory system with a virtual environment, be it a traditional 2D interface or a virtual reality environment. While haptics has been often used for teleoperation of equipment in dangerous environments, robotics and game system controllers, this section focuses on its applications in providing interfaces to virtual worlds for people with visual disabilities. For a more complete discussion of applications of haptics in its more common uses, the reader is referred to [22, 194].
There are several devices that have been used to provide varying levels of vibration to a user who is blind. Most commonly, low-cost solutions available to researcher and user alike are used, such as haptic joysticks, gamepads or mice. These devices cost approximately 100 USD and can be purchased through local distributors, and are thus more likely to be accepted by the community of people with visual disabilities.
In 2006, a new style of haptic mouse was introduced. The VTPlayer mouse is quite different from the haptic mice discussed above: instead of providing haptic feedback through vibration, it has two tactile cells on which the user’s fingers rest. This device holds promise for novel interaction styles, such as the work by Brewster et al., which used a two-handed interaction style with the VTPlayer mouse in one hand and a pen-and-tablet input device in the other. This apparatus allowed the user to explore a 2D scene with the pen hand while tactile information was delivered to the mouse hand [220].
These tactile devices are suitable for some applications, but the range of haptic feedback is, with few exceptions, limited to varying levels of vibration and two degrees of freedom in movement. In order to achieve a more fine-grained haptic sensation, one must turn to high-end electronic devices such as the Phantom single-point haptic touch system by Sensable Technologies, or the Cybergrasp 5-point haptic system by the Immersion Corporation. Indeed, recent studies [233] have shown that, in activities performed in the absence of sight, the richer sensory feedback from such devices can improve performance in haptic interaction tasks in comparison to off-the-shelf technology.
There have been extensive projects testing the application of advanced haptic technology. With most of these results coming from fields such as teleoperation and surgical medicine, it is difficult to generalize results from these sight-dependent tasks to haptic exploration by a user who is blind. For this reason, there has been an increasing number of research results describing the interaction techniques suitable for the blind user.
A large number of results regarding the use of haptics have been contributed by Lederman et al. [110], whose early work focused on comparisons between sensory systems in estimating properties of materials, such as texture. Studies such as this one establish touch as a first-class member of the human senses, able to distinguish many properties, in this case texture, independently of vision.
Lederman et al. provide definitions of different types of exploratory procedures (EPs) that can be employed by an individual for haptic identification of properties such as texture, hardness, temperature, weight, volume, global shape and exact shape. These exploratory procedures have been observed in laboratory settings with subjects interacting with 2D and 3D objects, such as those described in [167]. These procedures are: lateral motion, pressure, static contact, unsupported holding, enclosure and contour following [103]. A discussion of how these exploratory procedures relate to haptic interface design can be found in [111].
There are many other results which have been identified as providing starting points for a set of guidelines for haptic interaction. These include:
1. Haptic exploration with a single-point device results in slower recognition times and more misattributions than real-world exploration tasks [92].
2. There is debate regarding complex scenes perceived by blind users. Some results indicate that complex scenes are extremely difficult to identify through haptics alone [93]; however, the study by Magnusson et al. [120] indicates that blind users were able to identify complex scenes reliably.
3. For path finding, grooves are better than bumps, since users tend to “fall off” of bumps [63].
4. Roughness of textures is perceived to increase as groove width decreases [149].
5. Internal exploration of an object (i.e., within the object boundaries) results in the object size being perceived as larger than with external exploration [149].
6. Multiple points of virtual contact result in better size estimates of virtual objects [124, 125].
7. Navigation through a virtual space can be done with only auditory cues [115].
Example applications that were specifically designed for people with visual disabilities include:
- A haptic museum where virtual art pieces can be touched and explored [126].
- Multimodal exploration of graphs for comprehension of data [23, 62, 63, 171, 233, 234, 236].
- A non-visual molecule browser [23].
- Navigation in unknown environments [185].
- Software for exploration of mathematical relations [178].
- World Wide Web exploration [139].
- Exploration of virtual scenes (HOMERE) [109].
It is hoped that, as haptic technology becomes more affordable for both research facilities and home users, applications presenting realistic, natural, 3D haptic interaction will be achievable in the near future.
Conclusions
This paper has reviewed several examples of presenting media to people with visual disabilities. It first discussed the alternatives for making visually based information available to people with visual disabilities, providing in particular a survey of the types of sound and tactile based presentation options that are available.
Second, the problems which are encountered when attempting to translate and present textual information, mathematics and graphics were identified. The trend towards moving information to online sources, such as multimedia and web documents, was also examined, discussing some of the unique challenges associated with these types of documents.
All of these issues have been explored by both users and researchers, resulting in a variety of approaches, all moving toward one common goal: the equality of access to information for people with visual disabilities. While all successful approaches have their merits, very few have made their way into mainstream use by the population. The following general statements can be made regarding future access technologies:
- Interface requirements need to be abstracted away from specific applications. Specific applications provide a means of testing the effectiveness of interface theories and designs. These applications have very specific human and technological factors which make them successful in achieving their goals. These factors need to be generalized in such a way that future research and commercial systems can include them in new applications.
- Multimodal interfaces must continue to be brought to mainstream applications. While any type of feedback benefits a user, material translated for people with visual disabilities should use both audio and tactile output. While audio output is certainly easier to produce, it is too serial to communicate all information effectively. It is clear from previous examples of technology, such as the Optacon and the IVEO tablet, that devices and applications which include tactile feedback are more readily accepted by the user community.
- Further work on automatic transcoding is required. There are several examples of transcoding for each type of media discussed in this paper. Transcoding, if it can be accomplished without the aid of a human assistant, provides independence for users with visual disabilities in controlling their access to information. It is also essential that the process be examined from the view of the document as a whole, so that one tool can render all information contained in a single document.
- Further automatic and semi-automatic testing tools need to be developed. While there are many tools that provide automatic accessibility testing on the web and in other domains, these tools address only a small subset of accessibility problems.
- Awareness of universal access must be increased. Tools for transcoding and verification of material will remain ineffective if those who most need them are unaware of their existence. Media transcribers, developers and students must all be informed of the challenges faced by those with disabilities, so that they can seek out the appropriate tools.
- Involvement of the target user group must be sought at all levels of design, implementation and testing. There are several examples in the literature where tools have been designed without the input of the users, and then tested without participation of that community. Moreover, certain techniques for acquiring testing data produce accessibility concerns of their own; for example, time diaries for people with visual disabilities were shown to have their own set of unique problems [4]. It is important for researchers to be aware of such problems, so that development and testing plans can be adjusted appropriately.
In summary, research regarding accessibility of information for people with visual disabilities is extensive. However, with the rapid pace at which technology changes, it is important for researchers and developers as a community to abstract solutions away from specific technologies, so that accessibility of all information presentation can be achieved in the future.
Notes
The Mathtalk project was incorporated into the Mathematics Access for Technology and Science (MATHS) project in 1996 [55].
An example would be raising the voice when reading an exponent.
For example, a user should be able to skip some graphic elements and receive textual descriptions for others within the same document.
This work identified the two procedures of grasping and molding.
References
ABTIM: Videotim. ABTIM company website. http://www.abtim.com/ (2006). Accessed November 2006
Ali, H.A., El-Desouky, A.I., El-Gwad, A.O.A.: Realization of high-performance bilingual English/Arabic articulated document analysis and understanding system. Int. J. Comput. Appl. Technol. 16(1), 54–65 (2003)
Allan, B.: Kurzweil reading machine. Comput. Mag. 20 (1985)
Allen, A., Kleinman, J., Lawrence, J., Lazar, J.: Methodological issues in using time diaries to collect frustration data from blind computer users. In: Proceedings of HCI International 2005: Emergent Application Domains in HCI, vol. 5, LEA. CD-ROM Publication (2005)
Annamalai, N., Gopal, D., Gupta, G., Guo, H., Karshmer, A.: INSIGHT: a comprehensive system for converting braille based mathematical documents to latex. In: Stephanidis, C. (ed.) (189), pp. 226–230
Arrabito, R., Jürgensen, H.: Using to produce braille mathematical notation in accordance with the Nemeth Braille code for mathematics and science notation, 1972 revision. Undergraduate Thesis (1987)
Asakawa, C., Itoh, T.: User interface of a home page reader. In: Proceedings of the Third International ACM Conference on Assistive Technologies, pp. 149–156. ACM Press, New York (1998)
Asakawa, C., Takagi, H., Shuichi, I.: A proposal for a dial-based interface for voice output based on blind users’ cognitive listening abilities. In: Stephanidis, C. (ed.) (189), pp. 1245–1249
Bach-Y-Rita, P., Hughes, B.: A modified Optacon: towards an educational program. In: Discovery ‘84. Technology for Disabled Persons. Conference Papers, pp. 187–193 (1985)
Baillie, C., Burmeister, O.K., Hamlyn-Harris, J.H.: Web-based teaching: communicating technical diagrams with the vision impaired. In: Presentation at the Australian Web Accessibility Conference, OZeWAI 2003. http://opax.swin.edu.au/~303207/Papers/OZeWAI20031.html (2003). Retrieved September 2005
Balin, P.: A workstation for blind. Computerised Braille production. In: Proceedings of the 5th International Workshop, pp. 27–32 (1986)
Barry, W.A., Gardner, J.A., Raman, T.V.: Accessibility to scientific information by the blind: Dotsplus and ASTER could make it easy. In: Proceedings of the 1994 CSUN Conference on Technology and Persons with Disabilities (Los Angeles). California State University, Northridge (1994)
Batusic, M., Miesenberger, K., Stöger, B.: LABRADOOR, a contribution to making mathematics accessible for the blind. In: Computers and Assistive Technology—6th International Conference on Computers Helping People with Special Needs, ICCHP ’98. München (1998)
Batusic, M., Miesenberger, K., Stöger, B.: Parser for the Marburg Mathematical Braille Notation NIDRR Project: universal math converter. In: Stephanidis, C. (ed.) (189), pp. 1260–1264
Beddoes, M.P., Kanciar, E., George, R.G.: An optical character recogniser for a reading machine for the blind. In: 5th Canadian Medical and Biological Engineering Conference-Digest of papers (1974)
Bennett, D.J., Edwards, A.D.N.: Exploration of non-seen diagrams. In: ICAD’98 International Conference on Auditory Display (Glasgow), eWiC, British Computer Society. http://www.icad.org/Proceedings/1998/BennettEdwards1998.pdf. Retrieved 2009
Benchmarking tools and methods for the web. http://hcid.soi.city.ac.uk/research/Bentoweb.html (2005)
Blades, M., Ungar, S., Spencer, C.: Map using by adults with visual impairments. Prof. Geogr. 51, 539–553 (2000)
Blattner, M.M., Sumikawa, D.A., Greenberg, R.M.: Earcons and icons: their structure and common design principles. Hum. Comput. Interact. 4(1), 11–44 (1989)
Blenkhorn, P., Evans, G., King, A., Hastuti Kurniawan, S., Sutcliffe, A.: Screen magnifiers: evolution and evaluation. IEEE Comput. Graph. Appl. 23(5), 54–61 (2003)
Braille, L.: Method of writing words, music, and plain songs by means of dots, for use by the blind and arranged for them (1829)
Brewster, S.: The impact of haptic ‘touching’ technology on cultural applications. In: Proceedings of EVA2001 (Glasgow), pp. 1–14 (2001)
Brown, A., Pettifer, S., Stevens, R.: Evaluation of a nonvisual molecule browser. ASSETS 2004. In: The Sixth International ACM SIGACCESS Conference on Computers and Accessibility, pp. 40–47 (2004)
Brown, A., Stevens, R., Pettifer, S.: Issues in the nonvisual presentation of graph based diagrams. In: Proceedings of Eighth International Conference on Information Visualisation, pp. 671–676 (2004)
Brown, C.P., Duda, R.O.: An efficient hrtf model for 3-D sound. In: Proceedings of the IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics, IEEE (1997)
Brown, L.M., Brewster, S.A., Ramloll, R., Burton, M., Riedel, B.: Design guidelines for audio presentation of graphs and tables. In: ICAD 2003 Workshop on Auditory Displays in Assistive Technologies (University of Boston, MA), Boston University Publications Production Department (2003)
Brugler, J.: Technology for the Optacon, a reading aid for the blind. In: Eurocon 71 digest, p. 2 (1971)
Bucken, R.: Aids for the handicapped. Funkschau 10, p. 36 (1990)
Buczynski, L.: Determination of the combined index of quality of braille printouts and convex copies for the blind. In: Final Program and Proceedings of IS&T’s NIP19: International Conference on Digital Printing Technologies, p. 780 (2003)
Challis, B.P., Edwards, A.D.N.: Design principles for tactile interaction. In: Proceedings of the First International Workshop on Haptic Human–Computer Interaction. Lecture Notes in Computer Science, vol. 2058, pp. 17–24 (2001)
Chang diagram kit: American Printing House for the Blind (2005)
Braille code for chemical notation: Braille Authority of North America (1997)
Chen, X., Yuille, A.L.: Detecting and reading text in natural scenes. Proc. 2004 IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. 2, 366–373 (2004)
Cholewiak, R.W., Collins, A.A.: The effects of a plastic-film covering on vibrotactile pattern perception with the Optacon. Behav. Res. Methods Instrum. Comput. 22(1), 21–26 (1990)
Colwell, C., Petrie, H.: Evaluation of guidelines for designing accessible World Wide Web pages. In: Proceedings of the Conference on Telematics in the Education of the Visually Handicapped (Paris) (1998)
Committee, C. B. A. E. B. S.: Report on tactile graphics. Canadian Braille Authority (2003)
Code for computer braille notation. Braille Authority of North America (1994)
Computer braille code supplement. Flowchart design for applicable Braille codes. Compiled under the authority of the braille authority of North America adopted October 8, 1991. American Printing House for the Blind (1992)
Cookson, J., Rasmussen, L.: National library service for the blind and physically handicapped: digital plans and progress. Inf. Technol. Disabil. 7(1) (2000)
Cornelis, M., Krikhaar, K.: Guidelines for Describing Study Literature. Federatie Van Nederlandse Blindenbibliotheken, Amsterdam (2001)
Craig, J.C.: Vibrotactile pattern perception: extraordinary observers. Science 196(4288), 450–452 (1977)
Crispien, K., Ehrenberg, T.: Evaluation of the “cocktail-party effect” for multiple speech stimuli within a spatial auditory display. J. Audio Eng. Soc. 43(11), 932–941 (1995)
Crispien, K., Petrie, H.: The GUIB spatial auditory display: generation of an audio based interface for blind computer users. In: Proceedings of ICAD 94. Santa Fe (1994)
Crombie, D., Dijkstra, S., Schut, E., Lindsay, N.: Spoken music: Enhancing access to music for the print disabled. In: Proceedings of Computers Helping People with Special Needs 8th International Conference, ICCHP 2002. Lecture Notes in Computer Science, vol. 2398. pp. 667–674 (2002)
Crombie, D., Leeman, A., Oosting, M., Verboom, M.: Unlocking doors: building an accessible online information node. In: Proceedings of Computers Helping People with Special Needs 8th International Conference, ICCHP 2002. Lecture Notes in Computer Science, vol. 2398. pp. 374–381 (2002)
Cushman, R.-C.: Seeing-eye computers. Creat. Comput. 7(12), 142–145 (1981)
Delmonte, R., Mian, G.A., Tisato, G.: A text-to-speech system for Italian. ICASSP 84. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2–9 (1984)
Delmonte, R., Mian, G.A., Tisato, G.: A text-to-speech system for unrestricted Italian. In: A.I.C.A. Annual Conference Proceedings, pp. 429–438 (1984)
Derfall, A.: Artificial intelligence as applied to input and output or making computers read and speak. In: Proceedings of the 5th International Conference on Pattern Recognition, pp. 882–884 (1980)
Dobrisek, S., Gros, J., Vesnicer, B.T., Pavesic, N., Mihelic, F.: Evolution of the information-retrieval system for blind and visually-impaired people. Int. J. Speech Technol. 6(3), 301–309 (2003)
Donker, H., Klate, P., Peter, G.: The design of auditory interfaces for blind users. In: Proceedings of NordiCHI 2002 (New York), pp. 149–155. ACM Press, New York (2002)
Dubus, J.-P., Wattrelot, F.: Self governing braille translation automatic system with typewriter keyboard. Nouvel Automatisme 24(6), 31–35 (1979)
Duchateau, S., Archambault, D., Burger, D.: The accessibility of the World Wide Web for visually impaired people. In: Proceedings of AA-ATE’99 (5th European Conference for the Advancement of Assistive Technology). http://www.snv.jussieu.fr/inova/publi/aaateacces.htm (1999). Retrieved September 2005
Durre, K.P., Eisele, M.: A computerized Optacon tutor. RESNA ‘87: meeting the challenge. In: Proceedings of the 10th Annual Conference on Rehabilitation Technology, pp. 437–439 (1987)
Edwards, A.D.: The MATHS project. http://www.cs.york.ac.uk/maths (2005). Retrieved September 2005
Eramian, M., Jürgensen, H., Li, H., Power, C.: Talking tactile diagrams. In: Stephanidis, C. (ed.) (189), pp. 1377–1381
Fellbaum, K., Crispien, K.: Use of acoustic information in screen reader programs for blind computer users: results from TIDE project GUIB. In: Proceedings of the 2nd TIDE Congress, IOS Press, Amsterdam (1995)
Firman, R., Crombie, D.: Miracle: developing a worldwide virtual braille music library. In: Internet Librarian International 2002. Collected Presentations, p. 38 (2002)
Fishman, M., Livni, I.: Alignment and size-normalization in a multi-font optical character recognition system. In: 1977 Electrical and Electronics Engineers in Israel Tenth Convention, pp. 262–266 (1978)
Fjelsted, K.: A time-sharing terminal adapted for use by blind computer users. In: Proceedings of the Twelfth Hawaii International Conference on System Sciences III, pp. 34–37 (1979)
Freitas, D., Ferreira, H.: On the application of W3C Guidelines in Website Design from scratch. In: Stephanidis, C. (ed.) (189), pp. 955–959
Fritz, J.P., Barner, K.E.: Design of a haptic graphing system. In: Proceedings of the RESNA ‘96 Annual Conference Exploring New Horizons. Pioneering the 21st Century, pp. 158–160 (1996)
Fritz, J.P., Barner, K.E.: Design of a haptic data visualization system for people with visual impairments. IEEE Trans. Rehabil. Eng. 7(3), 372–384 (1999)
Fukuda, K., Takagi, H., Maeda, J., Asakawa, C.: An assist method for realizing a Web page structure for blind people. In: Stephanidis, C. (ed.) (189), pp. 960–964
Fukuda, T., Kwok, M.G.: Guidelines for tactile figures and maps. In: Stephanidis, C. (ed.) The Proceedings of HCI International 2005: Universal Access in HCI: Exploring New Interaction Environments, vol. 7. CD-ROM Publication (2005)
Gardner, C., Lundquist, R.: MathPlus ToolBox, a computer application for learning basic math skills. In: Proceedings of the 15th IFIP World Computer Congress, Vienna (1998)
Gardner, J.A.: Tactile graphics, an overview and resource guide. Inf. Technol. Disabil. 3(4) (1996)
Gardner, J.A.: The DotsPlus tactile font set. J. Vis. Impair. Blind. pp. 836–840 (1998)
Gardner, J.A.: The quest for access to science by people with print impairments. Comput. Mediat. Commun. 5(1), 502–507 (1998)
Gardner, J.A.: Access by blind students and professionals to mainstream math and science. In: Proceedings of the 2002 International Conference on Computers Helping People with Special Needs, Linz, Austria (2002)
Gardner, J.A.: Hands-on tutorial on tiger and win-triangle. In: Proceedings of the 2002 CSUN International Conference on Technology and Persons with Disabilities (Los Angeles). http://www.rit.edu/~easi/itd/itdv03n4/article2.htm (2002). Retrieved September 2005
Gardner, J.A.: DotsPlus Braille tutorial, simplifying communication between sighted and blind people. In: Proceedings of the 2003 CSUN International Conference on Technology and Persons with Disabilities (Los Angeles). http://www.csun.edu/cod/conf/2003/proceedings/284.htm (2003). Retrieved September 2005
Gardner, J.A., Lundquist, R., Sahyun, S.: Triangle: a tri-modal access program for reading, writing, and doing math. In: Proceedings of the 1998 CSUN International Conference on Technology and Persons with Disabilities (Los Angeles). http://www.csun.edu/cod/conf/1998/proceedings/csun98_104.htm (1998). Retrieved September 2005
Gardner, J.A., Salinas, N.: Gs braille code. Science Access Project. http://dots.physics.orst.edu/gs_index.html (2005). Retrieved September 2005
Gardner, J.A., Stewart, R., Francioni, J., Smith, A.: Tiger, AGC and win-triangle, removing the barrier to SEM education. In: Proceedings of the 2002 CSUN International Conference on Technology and Persons with Disabilities (Los Angeles). http://www.csun.edu/cod/conf/2002/proceedings/299.htm (2002). Retrieved September 2005
Gardner, W.G.: 3d audio and acoustic environment modeling. Wavearts Incorporated (1999)
Garland, H.T.: Reading with the Optacon: the importance of movement. In: Proceedings of the 26th Annual Conference on Engineering in Medicine and Biology, p. 143 (1973)
Gaver, W.W.: Auditory icons: using sound in computer interfaces. Hum. Comput. Interact. 2(2), 167–177 (1986)
Goldhor, R.S., Lund, R.T.: University to industry advanced technology transfer. In: 1980 IEEE Engineering Management Conference Record, pp. 204–208 (1980)
Goldstein Jr., M., Stark, R., Yeni-Komshain, G., Grant, D.: Tactile stimulation as an aid for the deaf in production and reception of speech: preliminary studies. In: 1976 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 598–601 (1976)
Guo, H., Karshmer, A., Weaver, C., Mendez, J., Geiger, S.: Computer processing of Nemeth Braille math code. In: Vollmar, R., Wagner, R. (eds.) The Proceedings of Computers Helping People with Special Needs 2000 (Austria). OCG Press, USA (2000)
Hankinson, J.C.K., Edwards, A.D.N.: Designing earcons with musical grammars. SIGCAPH Newslett. 65, 16–20 (1999)
Haynes, R.L.: An automated braille translation system. In: 1971 WESCON Technical Papers. Western Electronic Show and Convention (San Francisco), pp. 30–32 (1971)
Hughes, R.G., Forrest, A.R.: Perceptualisation using a tactile mouse. Proc. Vis. 96, 181–188 (1996)
Hussey, S.R.: Mathematical Notation. The Halifax Code. Fraser School, Halifax (1981)
Ikei, Y., Wakamatsu, K., Fukuda, S.: Vibratory tactile display of image-based textures. IEEE Comput. Graph. Appl. 17(6), 53–61 (1997)
Ina, S.: Development of 2D tactile graphics editor and printing system for document with braille and graphics. In: Transactions of the Institute of Electronics, Information and Communication Engineers D-II J77D-II, vol. 10, pp. 1973–1983 (1994)
Jacko, J.A., Barreto, A.B., Scott, I.U., Chu, J.Y.M., Vitense, H.S., Conway, F.T., Fain, W.B.: Macular degeneration and visual icon use: deriving guidelines for improved access. Univ. Access Inf. Soc. 1(3), 197–206 (2002)
Jacko, J.A., Rosa, R.H.J., Scott, I.U., Pappas, C.J., Dixon, M.A.: Visual impairment: the use of visual profiles in evaluations of icon use in computer-based tasks. Int. J. Hum. Comput. Interact 12(1), 151–164 (2000)
Jacko, J.A., Sears, A.: Designing interfaces for an overlooked user group: considering the visual profiles of partially sighted users. In: Proceedings of ASSETS’98. Third International ACM Conference on Assistive Technologies, pp. 75–77 (1998)
Jansson, G., Billberger, K., Petrie, H., Colwell, C., Kornbrot, D., Fanger, J., Konig, H., Hardwick, A., Furner, S.: Haptic virtual environments for blind people: exploratory experiments with two devices. Int. J. Virtual Real. 4(1), 10–20 (1999)
Jansson, G., Larsson, K.: Identification of haptic virtual objects with different degrees of complexity. In: Proceedings of Eurohaptics 2002, pp. 57–60 (2002)
Jürgensen, H.: Tactile computer graphics. Manuscript, 48 pp. (1996)
Jürgensen, H., Power, C.: An application framework for the presentation of tactile documents. In: Stephanidis, C. (ed.) Universal Access in HCI: Exploring New Interaction Environments, vol. 7. Lawrence Erlbaum Associates, London, CD-ROM Publication (2005)
Kaczmarek, K., Bach-y Rita, P., Tompkins, W.J., Webster, J.G.: A tactile vision-substitution system for the blind: computer-controlled partial image sequencing. IEEE Trans. Biomed. Eng. BME-32(8), 602–608 (1985)
Kaczmarek, K.A., Bach-y Rita, P., Tompkins, W.J., Webster, J.G.: A time-division multiplexed tactile vision substitution system. In: Proceedings of the Symposium on Biosensors (Cat. No. 84CH2068-5), pp. 101–106 (1984)
Kamentsky, L.: The Kurzweil reading machine: current developments. In: Proceedings of the IEEE Computer Society Workshop on Computers in the Education and Employment of the Handicapped, pp. 97–100 (1983)
Karshmer, A., Gupta, G., Geiger, S., Weaver, C.: Reading and writing mathematics: the mavis project. Behav. Inf. Technol. 18(1), 2–10 (1999)
Karshmer, A.I., Gillan, D.: How well can we read equations to blind mathematics students: some answers from psychology. In: Stephanidis, C. (ed.) (189), pp. 1290–1294 (1998)
Kerscher, G.: DAISY Consortium: information technology for the world’s blind and print-disabled population—past, present, and into the future. Libr. Hi. Tech. 19(1), 11–14 (2001)
Kimbrough, B.T.: DAISY on our desktops? A review of LpPlayer 2.4. Libr. Hi. Tech. 19(1), 32–34 (2001)
Klatzky, R.L., Lederman, S.J.: Toward a computational model of constraint driven exploration and haptic object identification. Perception 22, 591–621 (1993)
Kurzweil, R.: The Kurzweil reading machine: the complete personal reading machine for the blind. In: Proceedings of the Fifteenth Hawaii International Conference on System Sciences 1982, pp. 727–731 (1982)
Landau, S.: Tactile graphics: strategies for non-visual seeing. Thresholds (1999)
Landau, S., Gourgey, K.: Development of a talking tactile tablet. Inf. Technol. Disabil. 7(2). http://www.rit.edu/~easi/itd/itdv07.htm (2001)
Larkin, J.H., Simon, H.A.: Why a diagram is (sometimes) worth ten thousand words. Cogn. Sci. 11, 65–99 (1987)
Lasko-Harvill, A., Harvill, Y., Steele, R., Hennies, D., Verplank, W., MacConnell, B.: Audio and tactile feedback strategies for tracking. RESNA ‘87: meeting the challenge. In: Proceedings of the 10th Annual Conference on Rehabilitation Technology, pp. 459–461 (1987)
Lécuyer, A., Mobuchon, P., Mégard, C., Perret, J., Andriot, C., Colinot, J.-P.: HOMERE: a multimodal system for visually impaired people to explore virtual environments. In: Proceedings IEEE Virtual Reality 2003, pp. 251–258 (2003)
Lederman, S.J., Abbott, S.: Texture perception: studies of intersensory organization using a discrepancy paradigm and visual versus tactual psychophysics. J. Exp. Psychol. Hum. Percept. Perform. 7(4), 902–915 (1981)
Lederman, S.J., Klatzky, R.L.: Designing haptic and multi-modal interfaces: a cognitive scientist’s perspective. In: Farber, G., Hoogen, J. (eds.) Proceedings of Collaborative Research Centre, vol. 453, pp. 71–80. Technical University of Munich, Munich (2001)
Lee, S.: Effect of the field-of-view against target ratio in haptic exploration. Design of computing systems: cognitive considerations. In: Proceedings of the Seventh International Conference on Human–Computer Interaction (HCI International ‘97), vol. 1, pp. 595–598 (1997)
Leimann, E., Schulze, H.-H.: Earcons and icons: an experimental study. In: Human–Computer Interaction: INTERACT ’95, pp. 49–54 (1995)
Lerner, E.J.: Products that talk [computers]. IEEE Spectr. 19(7), 32–37 (1982)
Lokki, T., Gröhn, M.: Navigation with auditory cues in a virtual environment. In: IEEE MultiMedia, pp. 80–86 (2005)
Luk, R., Yeung, D., Lu, Q., Leung, E., Li, S.Y., Leung, F.: Digital library access for Chinese visually impaired. In: Proceedings of the Fifth ACM Conference on Digital Libraries, pp. 244–245. ACM (2000)
Luk, R.W.P., Yeung, D.S., Lu, Q., Leung, H.L., Li, S.Y., Leung, F.: ASAB: a Chinese screen reader. Softw. Pract. Exp. 33(3), 201–219 (2003)
Lundquist, R., Barry, W.A., Gardner, J.A.: Scientific reading and writing by blind people-technologies of the future. In: Proceedings of the 1995 CSUN Conference on Technology and Persons with Disabilities, Los Angeles, CA (1995)
Lytle, F.: WordPerfect 7 macros for translation of Nemeth Braille. Personal correspondence (2004)
Magnusson, C., Rassmus-Gröhn, K., Sjöström, C., Danielsson, H.: Navigation and recognition in complex haptic virtual environments—reports from an extensive study with blind users. In: Proceedings of Eurohaptics. http://www.eurohaptics.vision.ee.ethz.ch/2002.shtml (2002)
Martial, O., Dufresne, A.: Audicon: easy access to graphical user interfaces for blind persons-designing for and with people. Human–computer interaction. In: Proceedings of the Fifth International Conference on Human–Computer Interaction (HCI International ‘93), pp. 808–813 (1993)
Mates, B.: CD-rom: reference format for the visually impaired and physically handicapped. Computers in Libraries ‘90. In: Proceedings of the 5th Annual Computers in Libraries Conference, pp. 113–116 (1990)
McConnell, B.: The handicapped: a low cost braille printer. Creat. Comput. 8(10), 186–188 (1982)
McKnight, S., Melder, N., Barrow, A.L., Harwin, W.S., Wann, J.: Psychophysical size discrimination using multi-fingered haptic interfaces. In: Proceedings of Eurohaptics 2004. CD-ROM Publication (2004)
McKnight, S., Melder, N., Barrow, A.L., Harwin, W.S., Wann, J.P.: Perceptual cues for orientation in a two finger haptic grasp task. In: Proceedings of the First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pp. 549–550. IEEE (2005)
McLaughlin, M.L., Sukhatme, G., Shahabi, C.: The haptic museum. In: Proceedings of the EVA 2000 Conference on Electric Imaging and Visual Arts (2000)
Melen, R.D.: A one-hand Optacon. In: 1973 WESCON Technical Papers. West. Electron. Show Conv. 17, 7–13 (1973)
Melen, R.D., Meindl, J.D.: Electrocutaneous stimulation in a reading aid for the blind. IEEE Trans. Biomed. Eng. 18(1), 1–3 (1971)
Melen, R.D., Meindl, J.D.: A transparent electrode CCD image sensor for a reading aid for the blind. IEEE J. Solid State Circ. SC-9(1), 41–49 (1974)
Melton, L.: Mister impossible: Ray Kurzweil. Comput. Electron. 22(7), 40–45 (1984)
METEC: Braille-Großdisplay DMD 120060 [Braille large-scale display]. Ingenieur-Gesellschaft mbH, Stuttgart (1989)
Miller, C.: Multimedia statistical diagrams. Undergraduate thesis, The University of Western Ontario (1996)
Minamino, T.: Canon’s activity on computer devices for the disabled. In: Proceedings of the IISF/ACM Japan International Symposium. Computers as our Better Partners, pp. 154–155 (1994)
Morgan, G.: A word in your ear: library services for print disabled readers in the digital age. Electron. Libr. 21(3), 234–239 (2003)
Morley, S.: Digital talking books on a PC: a usability evaluation of the prototype DAISY playback software. Inf. Technol. Disabil. 7(1). http://www.rit.edu/~easi/itd/itdv07.htm (2000). Retrieved September 2005
Morley, S., Petrie, H., O’Neill, A.-M., McNally, P.: The use of non-speech sounds in a hypermedia interface for blind users. In: Edwards, A.D.N., Arató, A., Zagler W.L. (eds.) Proceedings of ICCHP 98 (Vienna), Austrian Computer Society Book Series, vol. 118, pp. 205–214. Austrian Computer Society (1998)
Muhlbacher, S., Buschbeck, F.: Reading device for visually handicapped persons. In: Computers for Handicapped Persons, pp. 163–171 (1989)
Nara, T., Takasaki, M., Maeda, T., Higuchi, T., Ando, S., Tachi, S.: Surface acoustic wave tactile display. IEEE Comput. Graph. Appl. 21(6), 56–63 (2001)
Nemec, V., Mikovec, Z., Slavik, P.: Adaptive navigation of visually impaired users in a virtual environment on the World Wide Web. Universal Access. Theoretical perspectives, practice, and experience. In: 7th ERCIM International Workshop on User Interfaces for All. Revised Papers. Lecture Notes in Computer Science, vol. 2615, pp. 68–79 (2003)
Nemec, V., Sporka, A., Slavik, P.: Haptic and spatial audio based navigation of visually impaired users in virtual environment using low cost devices. User-centered interaction paradigms for universal access in the information society. In: 8th ERCIM Workshop on User Interfaces for all. Lecture Notes in Computer Science, vol. 3196, pp. 452–459 (2004)
Nemeth, A.: The Nemeth Braille Code for Mathematics and Science Notation, 1972 Revision. American Printing House for the Blind, Louisville (1985)
Niederst, J.: Web Design in a Nutshell, 2nd edn. O’Reilly and Associates, Sebastopol (2002)
Nielson, G., Harvey, G.: Interactive talking books for the blind on CD-ROM. In: Proceedings of the Johns Hopkins National Search for Computing Applications to Assist Persons with Disabilities, pp. 181–184 (1992)
Ohuchi, M., Iwaya, Y., Suzuki, Y., Munekata, T.: Cognitive map formation of blind persons in a virtual sound environment. In: Proceedings of the 12th International Conference on Auditory Display (2006)
O’Malley, M.H., Larkin, D.K., Peters, E.W.: Beyond the reading machine: what the next generation of intelligent text-to-speech systems should do for the user. In: Official Proceedings of SPEECH TECH ‘86. Voice Input/Output Applications Show and Conference, pp. 216–219 (1986)
Omotayo, O.R.: Converting text into speech in real time with microcomputers. Microprocess. Microsyst. 8(9), 481–487 (1984)
Palmer, B., Pontelli, E.: Experiments in translating and navigating digital formats for mathematics-a progress report. In: Stephanidis, C. (ed.) (189), pp. 1320–1324
Pavesic, N., Gros, J., Dobrisek, S., Mihelic, F.: HOMER II: man–machine interface to the Internet for blind and visually impaired people. Comput. Commun. 26(5), 438–443 (2003)
Penn, P., Petrie, H., Colwell, C., Kornbrot, D., Furner, S., Hardwick, A.: The haptic perception of texture in virtual environments: an investigation with two devices. In: First International Workshop: Haptic Human–Computer Interaction, pp. 92–97 (2000)
Petrie, H., Colwell, C., Evenepoel, F.: Tools to assist authors in creating accessible World Wide Web pages. In: Proceedings of the Conference on Telematics in the Education of the Visually Handicapped, Paris. http://www.snv.jussieu.fr/inova/publi/ntevh/ntevh_ang.htm (2005)
Petrie, H., Fisher, W., O’Neill, A.-M., di Segni, Y., Pyfers, L., Gladstone, K., Rundle, C., van den Eijnde, O., Weber, G.: Navigation in multimedia documents for print disabled readers. In: Stephanidis, C. (ed.) (189), pp. 1457–1461
Petrie, H., Harrison, C., Dev, S.: Describing images on the web: a survey of current practice and prospects for the future. In: Stephanidis, C. (ed.) Universal Access in HCI: Exploring New Dimensions of Diversity, vol. 8. LEA. CD-ROM Publication (2005)
Petrie, H., Morley, S.: The use of non-speech sounds in non-visual interfaces to the MS-windows GUI for blind computer users. In: ICAD’98 International Conference on Auditory Display (Glasgow), eWiC, British Computer Society (1998). http://ewic.bcs.org/conferences/1998/auditory/papers/paper22.htm (2005)
Petrie, H., Morley, S., McNally, P., Graziani, P., Emiliani, P.L.: Access to hypermedia systems for blind students. In: Burger, D. (ed.) New technologies in the education of the visually handicapped. INSERM/John Libbey Eurotext (1996)
Plummer, J.D., Meindl, J.D.: MOS electronics for a reading aid for the blind. In: 1970 IEEE International Solid-State Circuits Conference, pp. 168–169 (1970)
Plummer, J.D., Meindl, J.D.: A reading aid for the blind using MOS electronics. In: Proceedings of the 23rd Annual Conference on Engineering in Medicine and Biology (1970)
Plummer, J.D., Meindl, J.D.: MOS electronics for a portable reading aid for the blind. IEEE J. Solid State Circ. SC, 111–119 (1972)
Poh, S.-P.: Talking diagrams. Master’s thesis. Also technical report No. 459, The University of Western Ontario (1995)
Pontelli, E., Xiong, W., Gupta, G., Karshmer, A.I.: A domain specific language framework for non-visual browsing of complex HTML structures. In: Proceedings of the Fourth International ACM Conference on Assistive Technologies, New York. ACM Press, New York, pp. 180–187 (2000)
Portele, T., Kramer, J.: Adapting a TTS system to a reading machine for the blind. In: Proceedings ICSLP 96. Fourth International Conference on Spoken Language Processing (Cat. No.96TH8206), vol. 1, pp. 184–187 (1996)
Preddy, M., Gardner, J., Sahyun, S., Skrivanek, D.: DotsPlus: how to make tactile figures and tactile formatted math. In: Proceedings of the 1997 CSUN Conference on Technology and Persons with Disabilities, Los Angeles. http://www.csun.edu/cod/conf/1997/proceedings/csun97.htm (1997)
Proceedings of the Second International Conference on Tactile Diagrams, Maps and Pictures, Hatfield. http://www.nctd.org/Conference/Conf2002/Programme.asp (2002)
Raman, T.: Speech-enabling the semantic WWW. http://emacspeak.sourceforge.net/publications/semantic-www.html (2005)
Raman, T.V.: Audio System for Technical Readings. Ph.D. thesis, Cornell University. Also appeared as Technical Report TR 94-1408 and as a spoken (audio tape) edition (1994)
Raman, T.V.: Emacspeak: a speech-enabling interface. Dr. Dobb’s J. 22(9), 18–23 (1997)
Rangin, H.B., Barry, W.A., Gardner, J.A., Lundquist, R., Preddy, M., Salinas, N.: Scientific reading and writing by blind people-technologies of the future. In: Proceedings of the 1996 CSUN, in Conference on Technology and Persons with Disabilities, Los Angeles (1996)
Reed, C., Lederman, S.J., Klatzky, R.L.: Haptic integration of planar size with hardness, texture, planar contour. Can. J. Psychol. 44(4), 522–545 (1990)
RNIB embosser list. http://www.rnib.org.uk/xpedio/groups/public/documents/PublicWebsite/public_rnib002980.hcsp (2005)
Roberts, J.: NIST Refreshable Tactile Graphic Display: a new low-cost technology. In: Proceedings of the 2004 CSUN Conference on Technology and Persons with Disabilities, Los Angeles. California State University, Northridge. http://www.csun.edu/cod/conf/2004/proceedings/csun04.htm (2004)
Rosen, L., Jaeggin, R.B., Ho, P.W.: Enabling blind and visually impaired library users: in magic and adaptive technologies. Libr. Hi. Tech. 9(3), 45–61 (1991)
Roth, P., Giess, C., Petrucci, L., Pun, T.: Adapting haptic game devices for non-visual graph rendering. In: Proceedings of 3rd International Conference on HCI: Universal Access, pp. 977–981 (2001)
Rothberg, M., Wlodkowski, T.: CD-ROMs for math and science. Inf. Technol. Disabil. 5. http://www.rit.edu/~easi/itd/itdv05.htm (1998)
Sahyun, S., Gardner, J., Gardner, C.: Audio and haptic access to math and science: audio graphs, Triangle, the MathPlus Toolbox, and the TIGER printer. In: Proceedings of the 15th IFIP World Computer Congress, Vienna (1998)
Salsbury, P.J.: A monolithic image sensor for a reading aid for the blind. In: Solid State Sensors Symposium, pp. 29–32 (1970)
Sánchez, J., Flores, H.: Memory enhancement through audio. In: Proceedings of ACM ASSETS. ACM Press, New York, pp. 24–31 (2004)
Savoie, R., Erickson, P.: Experimental simulation of an optical character recognition speech output reading machine for the blind. SIGCAPH Newslett. 24(10), 30–35 (1978)
Schweikhardt, W.: LAMBDA: a European system to access mathematics with braille and audio synthesis. In: Miesenberger, K., Klaus, J., Zagler, W., Karshmer, A. (eds.) Proceedings of the 10th International Conference on Computers Helping People with Special Needs ICCHP, no. 4061. Lecture Notes in Computer Science, Springer, Heidelberg (2006)
Scoy, F.L.V., Kawai, T., Fullmer, A., Stamper, K., Wojciechowska, I., Perez, A., Vargas, J., Martinez, S.: The sound and touch of mathematics: a prototype system. In: Proceedings of the Phantom Users Group. http://www.cs.sandia.gov/SEL/conference/pug01/papers.htm (2001). Retrieved September 2005
Sef, T., Gams, M.: Speaker (Govorec): a complete Slovenian text-to-speech system. Int. J. Speech Technol., pp. 277–287 (2003)
Shinohara, M.: Vocal character reader for persons with disabled sight. J. Acoust. Soc. Jpn. 43(5), 336–343 (1987)
Siegfried, R.: A scripting language to help the blind to program visually. SIGPLAN Notices 37, 53–56 (2002)
Slaby, W.: A universal braille translator. In: Proceedings of International Conference on Computational Linguistics, Pisa (1973)
Slaby, W.A.: Automatische Übersetzung in Blindenkurzschrift [Automatic translation into contracted braille]. EDV in Medizin und Biologie 5, 111–116 (1974)
Slaby, W.A.: Automatische Erzeugung formaler Übersetzungssysteme aus endlichen Mengen von Beispielen [Automatic generation of formal translation systems from finite sets of examples]. Tech. Rep. 24, Rechenzentrum, Universität Münster, Schriftenreihe (1977)
Sodren, P., Semwal, S.K.: Haptic help for orientation in unknown environments. In: Stephanidis, C. (ed.) (189), pp. 1330–1334
Splett, J.: Linguistische Probleme bei der automatischen Produktion der deutschen Blindenkurzschrift [Linguistic problems in the automatic production of German contracted braille]. Z. Dialektologie und Linguistik (1973)
Steele, E.L., Puckett, R.E.: Enhancement of grade 2 braille translation. In: Papers Presented at the Western Electronic Show and Convention, pp. 30–34 (1971)
Stein, B.K.: The Optacon: past, present, and future. DIGITEYES: The Computer Users’ Network News (1998)
Stephanidis, C. (ed.): Universal Access in HCI: Inclusive design in the information society, vol. 4. Lawrence Erlbaum Associates, Mahwah (2003)
Stevens, R., Wright, P., Edwards, A.D.N.: Strategy and prosody in listening to algebra. In: Adjunct Proceedings of HCI’95: people and computers, Huddersfield, British Computer Society, pp. 160–166 (1995)
Stevens, R.D.: Principles for the design of auditory interfaces to present complex information to blind computer users. Ph.D. thesis, The University of York, UK (1996)
Stevens, R.D., Edwards, A.D.N.: An approach to the evaluation of assistive technology. In: Proceedings of Assets ’96, ACM, pp. 64–71 (1996)
Stevens, R.D., Wright, P.C., Edwards, A.D.N., Brewster, S.A.: An audio glance at syntactic structure based on spoken form. In: Interdisciplinary Aspects on Computers Helping People with Special Needs. 5th International Conference, ICCHP ’96 (2), pp. 627–635 (1996)
Stone, R.J.: Haptic feedback: a potted history, from telepresence to virtual reality. http://www.dcs.gla.ac.uk/~stephen/workshops/haptic/papers/stone.pdf (2005)
Sully, P.: Alone with a book. Nat. Electron. Rev. 18, 9–12 (1983)
Tactile graphics starter kit. American Printing House for the Blind (2005)
Tornil, B., Baptiste-Jessel, N.: Use of force feedback pointing devices for blind users. User-Centered Interaction Paradigms for Universal Access in the Information Society. In: 8th ERCIM Workshop on User Interfaces for all. Lecture Notes in Computer Science, vol. 3196, pp. 479–485 (2004)
Truillet, P., Vigouroux, N.: Multimodal presentation of HTML documents for blind using extended cascading style sheets. In: Proceedings of the 9th WWW Conference, Foretec Seminars, Inc. http://www9.org/final-posters/4/poster4.html (2000)
Tyler, M., Haase, S., Kaczmarek, K., Bach-y Rita, P.: Development of an electrotactile glove for display of graphics for the blind: preliminary results. In: Conference Proceedings. Second Joint EMBS-BMES Conference 2002. 24th Annual International Conference of the Engineering in Medicine and Biology Society. Annual Fall Meeting of the Biomedical Engineering Society (Cat. No.02CH37392), vol. 3, pp. 2439–2440 (2002)
Tzoukermann, E.: Issues in French text-to-speech synthesis. J. Acoust. Soc. Am. 95, 2816 (1994)
Tzoukermann, E.: Text-to-speech for French. In: The Proceedings of the ESCA Workshop on Speech Synthesis (1994)
Ungar, S.: Cognitive mapping without visual experience (2000)
Ungar, S., Blades, M., Spencer, C.: The role of tactile maps in mobility training. Br. J. Vis. Impair. 11, 59–62 (1993)
Ungar, S., Blades, M., Spencer, C.: Mental rotation of a tactile layout by young visually impaired children. Perception 24, 891–900 (1995)
Ungar, S., Blades, M., Spencer, C.: Visually impaired children’s strategies for memorizing a map. Br. J. Vis. Impair., 27–32 (1995)
Ungar, S., Blades, M., Spencer, C.: The ability of visually impaired children to locate themselves on a tactile map. J. Vis. Impair. Blind. 90, 526–535 (1996)
Ungar, S., Blades, M., Spencer, C.: Can blind and visually impaired people read tilted braille labels? In: Proceedings of the Maps and Diagrams for Blind and Visually-impaired People: Needs, Solutions, Developments, Ljubljana, International Cartographic Association (1996)
Ungar, S., Blades, M., Spencer, C.: The construction of cognitive maps by children with visual impairments. In: Portugali, J. (ed.) The construction of cognitive maps. Kluwer, Dordrecht, pp. 247–273 (1996)
Ungar, S., Blades, M., Spencer, C.: The use of tactile maps to aid navigation by blind and visually impaired people in unfamiliar urban environments. In: Proceedings of the Royal Institute of Navigation, Orientation and Navigation Conference, Oxford, Royal Institute of Navigation (1996)
Ungar, S., Blades, M., Spencer, C.: Strategies for knowledge acquisition from cartographic maps by blind and visually impaired adults. Cartogr. J. 34, 93–110 (1997)
Ungar, S., Blades, M., Spencer, C.: Teaching visually impaired children to make distance judgements from a tactile map. J. Vis. Impair. Blind. 91, 221–233 (1997)
Ungar, S., Blades, M., Spencer, C.: Can a tactile map facilitate learning of related information by blind and visually impaired people? a test of the conjoint retention hypothesis. In: Proceedings of Thinking with Diagrams ’98, Aberystwyth, University of Aberystwyth (1998)
Ungar, S., Blades, M., Spencer, C.: The effect of orientation on braille reading by blind and visually impaired people: the role of context. J. Vis. Impair. Blind. 92, 454–463 (1998)
Ungar, S., Blades, M., Spencer, C., Morsley, K.: Can visually impaired children use tactile maps to estimate directions? J. Vis. Impair. Blind. 88, 221–233 (1994)
Ungar, S., Espinosa, A., Blades, M., Ochaíta, E., Spencer, C.: Blind and visually impaired people using tactile maps. Cartogr. Perspect. 28, 4–12 (1998)
Vayda, A.J., Whalen, M.P., Hepp, D.J., Gillies, A.M.: A contextual reasoning system for the interpretation of machine printed address block images. Proceedings. In: Second Annual Symposium on Document Analysis and Information Retrieval, pp. 429–441 (1993)
ViewPlus Technologies: online product web site. http://www.viewplustech.com/ (2002)
Vincent, A.T.: Talking BASIC and talking braille: two applications of synthetic speech. Comput. Educ. 45(11), 10–12 (1983)
W3C WAI: Policies relating to web accessibility. http://www.w3.org/WAI/ (2005)
Wall, S., Brewster, S.: Feeling What You Hear: Tactile Feedback for Navigation of Audio Graphs. In: CHI 2006 Proceedings, ACM Press, New York, pp. 1123–1132 (2006)
Walsh, P., Gardner, J.A.: TIGER: a new age of tactile text and graphics. In: Proceedings of the 2001 CSUN International Conference on Technology and Persons with Disabilities, Los Angeles (2001)
Watanabe, T.: BEP: Japanese and English text-to-speech system for the Japanese visually impaired and their use of computers with speech output. Joho Shori 43(8), 873–879 (2002)
Watanabe, T., Okada, S., Ifukube, T.: Development of a CD-ROM book vocalizing system for blind persons in a GUI environment. Trans. Inst. Electron. Inf. Commun. Eng. D-I J82D-I(4), 589–592 (1999)
Way, T.P., Barner, K.E.: Automatic visual to tactile translation—part I: human factors, access methods, and image manipulation. IEEE Trans. Rehab. Eng. 5(1), 81–94 (1997)
Way, T.P., Barner, K.E.: Automatic visual to tactile translation—part II: evaluation of the TACTile image creation system. IEEE Trans. Rehab. Eng. 5(1), 95–105 (1997)
Williams, T.T., Lambert, R.M., White, C.W.: Interactive braille output for blind computer users. Behav. Res. Methods Instrum. Comput. 17, 265–267 (1985)
Wood, S.L., Marks, P., Pearlman, J.: A segmentation algorithm for OCR application to low resolution images. In: Conference Record of the Fourteenth Asilomar Conference on Circuits, Systems and Computers, pp. 411–415 (1980)
Woolfson, L.: Braille translation by computer. Microprocess. Softw. Q. 10(2), 44–46 (1983)
Yanagisawa, S., Yonezawa, Y., Ito, K., Hashimoto, M.: A high-density and high-speed tactile display panel using passive-writing method. J. Inst. Image. Elect. Eng. Jpn. 33(1), 19–26 (2004)
Yesilada, Y., Stevens, R., Goble, C., Hussein, S.: Rendering tables in audio: the interaction of structure and reading styles. In: ASSETS 2004: The Sixth International ACM SIGACCESS Conference on Computers and Accessibility, pp. 16–23 (2004)
Yonezawa, Y., Hattori, H., Itoh, K.: A method of nonimpact braille printer by electro-thermosensitive process. In: Transactions of the Institute of Electronics, Information and Communication Engineers C J70C, pp. 1545–1552 (1987)
York, B., Karshmer, A.: Tools to support blind programmers. In: Seventeenth Annual ACM Computer Science Conference, pp. 5–11 (1989)
Yu, W., Brewster, S.: Comparing two haptic interfaces for multimodal graph rendering. In: Proceedings 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. HAPTICS, pp. 3–9 (2002)
Yu, W., Brewster, S.: Evaluation of multimodal graphs for blind people. Univ. Access Inf. Soc. (2003)
Yu, W., Kangas, K., Brewster, S.: Web-based haptic applications for blind people to create virtual graphs. In: Proceedings 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. HAPTICS, pp. 318–325 (2003)
Yu, W., Ramloll, R., Brewster, S., Ridel, B.: Exploring computer-generated line graphs through virtual touch. In: Proceedings of the Sixth International Symposium on Signal Processing and its Applications (Cat. No. 01EX467), vol. 1, pp. 72–75 (2001)
Zajicek, M., Powell, C.: Building a conceptual model of the World Wide Web for visually impaired users. In: Proceedings of the Ergonomics Society, Annual Conference, Grantham (1997)
Zajicek, M., Powell, C.: The use of information rich words and abridged language to orientate users to the World Wide Web. In: IEEE Colloquium on Prospects for Spoken Language Technology (Digest no. 1997/138), pp. 7–11 (1997)
Zajicek, M., Powell, C., Reeves, C.: Evaluation of a World Wide Web scanning interface for blind and visually impaired users. In: Proceedings of HCI International ’99, Munich. http://www.brookes.ac.uk/speech/publications/65_hciin.htm (2005)
Zandifar, A., Duraiswami, R., Chahine, A., Davis, L.: A video based interface to textual information for the visually impaired. In: Proceedings Fourth IEEE International Conference on Multimodal Interfaces, pp. 325–330 (2002)
Power, C., Jürgensen, H. Accessible presentation of information for people with visual disabilities. Univ Access Inf Soc 9, 97–119 (2010). https://doi.org/10.1007/s10209-009-0164-1