
1 Introduction

Tactile maps can support the orientation of blind people or be used for navigation. In the past, generating these maps was a manual task, which considerably limited their availability. Nowadays, similar to visual maps, tactile maps can be generated semi-automatically by tools and web services.

Today, physical representations of electronic tactile maps can be produced by a number of different techniques. To date, three different printing techniques are used by automatic map generation systems. Braille embossers print Braille text or graphics by punching a series of dots into printer paper. By varying the punch strength for individual dots, some embossers are able to print different sizes of dots. When this is combined with halftone techniques, several height variations can be simulated. Alternatively, a combination of a common printer and a thermal enhancer (fuser) can be used. Chemically treated swell (microcapsular) paper is printed with monochrome graphics. Subsequently, the fuser heats the black parts above a certain threshold temperature, which causes them to swell. This technique is well suited for printing the continuous graphics used in maps; however, in principle only one height level is possible. A more recent technology for printing tactile maps is 3D printing. The most common technique for consumer 3D printing is Fused Deposition Modeling (FDM). Molten thermoplastic polymer (e.g., polylactic acid) is deposited by a two-dimensionally moving print head. By sequentially printing multiple layers onto each other, numerous different height levels can be achieved, which can be used to topologically group cartographic elements.

This paper introduces a novel web service (http://maps.blindweb.org) for blind people to interactively select and automatically generate tactile maps. In this approach, the resulting tactile maps can be printed by multiple methods.

2 Related Work

There have been multiple approaches addressing the automatic generation of (audio-)tactile maps. The Talking Tactile Tablet (TTT) [1] used a frame with a touchpad connected to a computer. Embossed tactile sheets could be mounted in the frame. After identifying a tactile sheet, the user was able to explore the printed graphics and to request additional information at certain points. This system was used by a succeeding approach [2] which allowed end users to order tactile US maps that were printed automatically by local Braille embossers or third-party services. A prototypical system by Wang et al. [3] made it possible to generate audio-tactile maps from ordinary maps designed for sighted people. It utilizes computer vision algorithms to read in labels and map features and creates SVG files containing the extracted data. The reproduced tactile maps can be printed using thermal enhancers and subsequently be used together with a touchpad. This technique tackled the issue of limited availability of tactile maps, but was limited to existing printed maps.

The Tactile Map Automated Creation System (TMACS) [4] realized an important step towards the mobility of blind people by offering a web service which produced street-level maps after entering a departure and a destination address. Using a Japanese map database, users were able to immediately generate map extracts which could be printed on microcapsular paper. A subsequent approach by Watanabe et al. [5] extended TMACS by using the OpenStreetMap database and was the first approach that could instantly generate tactile street maps of the whole world. Another approach [6] introduced a process to generate tactile detail maps, which could be instantly printed on swell paper or automatically generated as 3D models for 3D printing. Recently, Taylor et al. [7] sketched another system supporting 3D printers and made suggestions for future improvements.

These approaches enable users to generate maps by entering a specific address or point of interest. This can in principle be done by a blind user. However, these approaches actually show an image of the map on the user's display, which cannot be read by screen readers. Consequently, the blind user does not know what is on the map before it is printed. Ideally, the map selection process should give the user more information and freedom to select the desired excerpt. Map providers for sighted users offer means to (i) enter an address, (ii) review the resulting map extract including details of the map features, (iii) adjust the map extract by panning and zooming, and (iv) define the individual informational needs of the user. Finally, multiple printing techniques for tactile graphics should be supported.

3 Our Approach

We ported our approach [6] to a web service which has been online since 2014 and has been used for the evaluation with blind users (see Fig. 1). The aim was to adapt the interaction concept for map selection to the requirements of blind users while supporting multiple printing technologies.

Fig. 1.

Screenshot of BlindWeb Maps. After entering an address (1), the included map features (2) can be read by screen readers. The extract can be shifted by directional buttons (3); changes of map features are reported immediately. Map features, zoom level and output technology can be chosen (4) and the map generated (5) as a graphics file or 3D model.

In this approach, the user is free to enter either an address or the proper name of a point of interest to immediately generate a tactile map. To allow the user to examine the result before the actual printing process, lists of the map features included in the map are shown, which can be read by standard screen readers. We transferred the prevalent mouse-based panning of visual maps to directional buttons which blind persons can operate with their keyboard. Each shifting action advances the map by one third of its extent in the desired cardinal direction. Additionally, the users immediately get feedback about changes of map features caused by panning the map: another list, accessible by screen readers, informs about map features which are new to the map extract and those which dropped out.
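To make this panning behaviour concrete, the following minimal sketch (in Python, with function and feature names that are illustrative only and not taken from the actual service) shifts a bounding box by one third of its extent and reports which features entered or left the extract:

```python
# Illustrative sketch of panning and feature feedback; names are hypothetical.

def pan_bbox(bbox, direction, fraction=1 / 3):
    """Shift a (left, bottom, right, top) bounding box in a cardinal direction."""
    left, bottom, right, top = bbox
    dx = (right - left) * fraction
    dy = (top - bottom) * fraction
    sx, sy = {"north": (0, dy), "south": (0, -dy),
              "east": (dx, 0), "west": (-dx, 0)}[direction]
    return (left + sx, bottom + sy, right + sx, top + sy)


def feature_changes(old_features, new_features):
    """Return the features that are new to the extract and those that dropped out."""
    old_set, new_set = set(old_features), set(new_features)
    return sorted(new_set - old_set), sorted(old_set - new_set)


# Example: after panning, "building entrance" appears and "pharmacy" drops out.
added, removed = feature_changes({"pharmacy", "bus stop"},
                                 {"bus stop", "building entrance"})
```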

On the one hand, it is crucial to limit the number of elements on tactile maps. On the other hand, it is important to fit the map content to the informational needs of the user. Hence, we developed functionality for customizable filtering of map features. Users can select (groups of) map features they are interested in, in order to adapt the rendering of the map to their needs. Besides usual map features (such as shops and bus stops), special blind-specific map features can be highlighted on the map. These include building entrances, audio-guided traffic lights and tactile paving.
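As an illustration of this filtering step, the sketch below keeps only features whose OpenStreetMap tags fall into a category selected by the user. The mapping from categories to tags is an assumption for illustration; the service's actual mapping is not given here.

```python
# Hypothetical mapping of user-selectable categories to OpenStreetMap tags.
CATEGORY_TAGS = {
    "Shops and Service": {("shop", "*")},
    "Traveling": {("highway", "bus_stop"), ("railway", "tram_stop")},
    "Accessibility": {("entrance", "main"), ("tactile_paving", "yes"),
                      ("crossing", "traffic_signals")},
}


def keep_feature(tags, selected_categories):
    """Keep a map feature if any of its OSM tags matches a selected category."""
    for category in selected_categories:
        for key, value in CATEGORY_TAGS.get(category, ()):
            if key in tags and (value == "*" or tags[key] == value):
                return True
    return False


# Example: a bus stop is kept when the user selected the "Traveling" category.
keep_feature({"highway": "bus_stop", "name": "Universität"}, ["Traveling"])  # -> True
```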

Zooming is realized by offering the selection of multiple detail levels for the maps. In order to realize a reasonable proportion and effective tactile presentation of map contents different rendering styles for each zooming level are used. In principle, our approach can serve the whole range of map zoom levels (i.e., highly detailed map up to world map). Tactile maps have to be generated with their printing technologies in mind – which have differing technical constraints. Similar to the aforementioned zooming, our approach supports individual rendering styles for each technology to adapt the generated maps to the technology’s inherent strengths. For example, multiple height levels are only used for 3D printing – for swell paper dotted/hatched textures are used instead (see Fig. 2-left).
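Conceptually, the choice of a rendering style is a lookup keyed by zoom level and output technology, as in the following sketch (the style names and parameter values are illustrative assumptions, not the service's actual settings):

```python
# Hypothetical rendering styles per (zoom level, printing technology).
RENDER_STYLES = {
    ("detail",   "3d_print"):    {"area_fill": "height_levels",   "min_symbol_mm": 5},
    ("detail",   "swell_paper"): {"area_fill": "hatched_texture", "min_symbol_mm": 5},
    ("overview", "3d_print"):    {"area_fill": "height_levels",   "min_symbol_mm": 8},
    ("overview", "swell_paper"): {"area_fill": "dotted_texture",  "min_symbol_mm": 8},
}


def select_style(zoom_level, technology):
    """Pick the rendering style matching the chosen zoom level and output technology."""
    return RENDER_STYLES[(zoom_level, technology)]
```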

Fig. 2.

Generated map extract of Johannes Kepler University Linz for swell paper prints (left), i.e., automatically adapted to a single height level. 3D-printed map including optical markers for optional support of audio-tactile interaction (right).

Technically, our approach works as follows: after the user enters an address or point of interest, the OpenStreetMap Nominatim web service is used to obtain the corresponding GPS coordinates. Next, depending on the zoom level, a bounding box is defined and used to request detailed map data from the OpenStreetMap API web service. Our web service processes and filters the map data according to the requirements of blind users and extracts the textual content of the map features. The map rendering as well as the generation of the 3D models follows our dedicated approach [6]. Panning the map shifts the bounding box values; selecting another zoom level narrows or widens the bounding box extents. There are individual rendering styles for (i) different zoom levels and (ii) different printing technologies. All interaction elements are accessible via shortcuts.
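A minimal sketch of these retrieval steps, using the public Nominatim and OpenStreetMap APIs (the actual service and its request parameters may differ), could look as follows:

```python
# Sketch of geocoding, bounding-box construction and OSM data retrieval.
import math
import requests

NOMINATIM_URL = "https://nominatim.openstreetmap.org/search"
OSM_API_URL = "https://api.openstreetmap.org/api/0.6/map"


def geocode(query):
    """Resolve an address or point-of-interest name to GPS coordinates via Nominatim."""
    resp = requests.get(NOMINATIM_URL,
                        params={"q": query, "format": "json", "limit": 1},
                        headers={"User-Agent": "tactile-map-sketch"})
    resp.raise_for_status()
    hit = resp.json()[0]
    return float(hit["lat"]), float(hit["lon"])


def bounding_box(lat, lon, width_m, height_m):
    """Approximate a (left, bottom, right, top) box of the given size in metres."""
    d_lat = height_m / 2 / 111_320.0                                 # metres per degree latitude
    d_lon = width_m / 2 / (111_320.0 * math.cos(math.radians(lat)))  # shrinks with latitude
    return (lon - d_lon, lat - d_lat, lon + d_lon, lat + d_lat)


def fetch_osm_data(bbox):
    """Request detailed OpenStreetMap data (XML) for the bounding box."""
    resp = requests.get(OSM_API_URL,
                        params={"bbox": ",".join(f"{v:.6f}" for v in bbox)})
    resp.raise_for_status()
    return resp.text


# Example: a medium-detail extract (422 m x 287 m) around a user-supplied query.
lat, lon = geocode("Johannes Kepler University Linz")
osm_xml = fetch_osm_data(bounding_box(lat, lon, 422, 287))
```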

Finally, the user can download the map either as an image file for paper-bound printing technologies or as a 3D model for 3D printers. A medium-detail map covers an area of 422 × 287 m (at mid-latitude). The scale depends on the printed size of the map; for a map of 22.5 × 15 cm this corresponds to a scale of 1:1875. The produced maps are intended to be used as audio-tactile maps in conjunction with a mobile phone (e.g., by [8, 9]). This helps to reduce the tactile complexity by removing Braille labels and limiting the number of symbols and markers.
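The stated scale follows directly from the ratio of the printed map width to the covered ground width:

```latex
\text{scale} = \frac{\text{printed width}}{\text{ground width}}
             = \frac{0.225\,\mathrm{m}}{422\,\mathrm{m}}
             \approx \frac{1}{1875}
```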

By using such an interactive approach, the user can obtain additional information about a map feature (such as its name and type) by pointing at it. However, if desired, the user is also able to generate purely tactile maps without elements used for interaction.

4 Evaluation

We evaluated a preliminary version of our approach by interviews with blind and visually impaired people in order to obtain suggestions about the selection process as well as the design of automatically produced tactile maps.

4.1 Participants

The questionnaire was conducted among 12 visually impaired persons (five female) with an average age of 43.08 ± 13.43 years. Seven of the participants had been blind since birth or early childhood, four were late-blind and one had low vision. Four participants were able to participate in person; the other eight were interviewed by telephone. All but one of the individuals were familiar with the concept of tactile maps, 3D models and reliefs, as well as graphics made with a Braille printer. Furthermore, most of them knew tactile maps made with microcapsule paper and deep-drawn foil.

4.2 Procedure

After an introduction to the web application, its functionality and its results, each individual was asked to answer questions about their experience with tactile maps, how they orient themselves in an unknown area and what they are most interested in finding on a tactile map. They were asked for their own suggestions to improve the web interface as well as the tactile map. Additionally, the face-to-face interviewees were asked to give their opinion on two 3D-printed maps made with BlindWeb Maps, again with suggestions to improve quality and legibility.

4.3 Results

The preliminary version included the option to mark the following points of interest as the only point features on the map: pharmacies, doctor's offices, cash dispensers, mail boxes, building entrances and public transport (such as tram or bus stops). To add more useful POIs to the list or eliminate unnecessary elements, the interviewees were asked to assess map elements, including both existing and suggested POIs. Every participant stressed the importance of pedestrian lights (especially those with an audio signal), which are among the most relevant components for the orientation of a visually impaired person, right after public transport and building entrances. Restaurants, shops, hotels as well as the already existing POIs were marked as interesting and not interesting in equal measure, which reflects the individual usage of such maps. To cope with the variety of individual map features, we followed the suggestion to sort the elements into categories. The implemented categories were Health, Eat and Drink, Shops and Service, Culture and Entertainment, Nature and Leisure, Traveling, and Accessibility.

Each of the participants mentioned that multiple zoom levels should be supported; 92 % of the interviewees requested more than two zoom levels. We integrated the functionality for the selection of three different zoom levels, each using a dedicated map rendering style.

5 Conclusion and Future Work

This paper introduced a novel web service which enables blind users to generate tactile maps that can be printed by multiple techniques. It focused on the technical approach of providing blind users with a more adequate search process, which allows them to review and adjust the map extract. Thus, we transferred interaction techniques known from map web services for sighted users, which are inaccessible to blind users, to more adequate techniques. The audio review of the map's contents provides early feedback on whether the currently selected map extract corresponds to the desired information need. Changes can be made before the map is printed, which, especially for 3D printing, saves much time.

Based on comprehensive interviews with blind and visually impaired users, we evaluated a preliminary version of the web service and initiated multiple changes to better satisfy the users' needs. Accordingly, it is possible to personalize maps by selecting the desired points of interest. This may help to satisfy the individual informational needs of the user while simultaneously limiting the tactile complexity of the map. Additionally, users can choose to highlight map features particularly important to blind people (i.e., building entrances, assisted traffic lights). Finally, to enable both a limited complexity of the tactile map and detailed information on specific map features, our approach integrates support for audio-tactile maps [8, 9], which allow users to interactively request additional information during the exploration of the printed tactile map.

Although this paper did not focus on map rendering styles, but on the interaction concept and its technical approach, they are an important part of our future work. Our approach supports multiple zoom levels as well as multiple printing technologies using individual map rendering styles. However, the optimization of automated map rendering styles for blind people at different zoom levels and printing technologies has to be discussed by the scientific community. There is already manifold expertise in the scientific community for automated map rendering at different zoom levels using miscellaneous technologies (e.g., [2, 5]). Through collaboration, a unified platform for high-quality tactile maps, available to blind people all over the world, can be formed. This has to be accompanied by a constructive dialogue in a highly international context and verified by further user studies.