Abstract
Blind players have many difficulties accessing video games, since most of them rely on impressive graphics and immersive visual experiences. To overcome this limitation, we propose a device designed for visually impaired people to interact with the virtual scenes of video games. The device has been designed with usability, economic cost, and adaptability as its main features. To ensure usability, we considered the white cane paradigm, since this is the device most used by the blind community. Our device supports left-to-right movements and collision detection, as well as actions to manipulate scene objects such as drag and drop. To enhance realism, it also integrates a library of sounds of different materials to reproduce object collisions. To reduce the economic cost, we used Arduino as the basis of our development. Finally, to ensure adaptability, we created an application programming interface that supports the connection with different game engines and different scenarios. To test the acceptance of the device, 12 blind participants were recruited (6 males and 6 females). In addition, we created three mini-games in Unity3D that require navigation and walking as the principal actions. After playing, participants completed a questionnaire related to, among other aspects, usability and suitability for interacting with games. The device scored well on all features, with no differences by player gender or by being blind from birth. The relationship between device responsiveness and user interaction was considered satisfactory. Despite our small test sample, our main goal has been accomplished: the proposed device prototype seems to be useful to visually impaired people.
1 Introduction
In recent years, there has been a growing demand for video games across a whole range of ages, genders, and application fields. According to the Entertainment Software Association, 63% of U.S. households have at least one person who plays video games regularly (3 hours or more per week), with an average player age of 35 years [70]. This interest in video games has led to different studies that evaluate their effects on players. Although early studies focused on negative aspects such as potential harm related to violence, addiction, and depression, studies from the last decade have highlighted positive aspects of playing video games from cognitive, motivational, emotional, or social perspectives [68]. The fact is that video games have great potential not only for entertainment but also as a tool to address specific problems or to improve different skills. Currently, games are used in education, health, or business, among other fields [72].
Despite the increased level of interest in games and their provided benefits, a large group of people is left out of playing video games due to a disability. To overcome this limitation, the creation of inclusive computer games, such that nobody is left out and everyone has access, has become an emerging focus of research. In this paper, we focus on visually impaired people. Video games could offer new socialization, education, employment, and health opportunities for these individuals [39]. According to the World Health Organisation, 285 million people are estimated to be visually impaired worldwide (39 million are blind and 246 million have low vision) (http://www.who.int/blindness/GLOBALDATAFINALforweb.pdf).
Blind players have many difficulties accessing video games since most of them rely on impressive graphics and immersive visual experiences. They may also find barriers in providing input. Different strategies to replace visual stimuli have been proposed in order to adapt video games to this community. On the one hand, there are audio-based techniques that use auditory icons and earcons to associate information with sound [7, 15, 43]. There are also more sophisticated audio solutions, such as spatial or 3D sound [57], which assigns a 3D effect to the sound, allowing it to come from any part of the scene. Modern engines such as Unity 3D (https://unity3d.com/) and the Unreal Game Development Kit (https://www.unrealengine.com/) provide functionalities to create these effects. The Audio Games community provides many titles of audio-based games designed for users with a visual disability [50]. On the other hand, there are haptic-based strategies, which can be used to enhance immersion, for instance, by vibrating when collisions are detected. Westin et al. presented a literature study of the advances in game accessibility research describing different proposals for blind users. Game accessibility was also surveyed by Yuan et al. [72], and recent advances in video game accessibility for users with visual impairment can be found in [39]. In general, video games use a combination of these strategies together with screen readers that turn what is displayed on the monitor into a different, non-standard output such as speech or text on a Braille output device [18, 25, 45, 60]. For a summary of all these strategies see (http://game-accessibility.com/documentation/visually-impaired-gamers-where-to-go-what-to-play/) [3].
In this paper, we propose a device designed for visually impaired people to interact with virtual scenes of video games. Our device provides three main advantages. First, it is easy to use, since it reproduces the movements of a white cane in a virtual scenario. The white cane is the most used device since it is inexpensive, lightweight, and foldable [28]. However, it has some limitations, such as the difficulty of detecting overhanging obstacles at head level or at ranges further than approximately one meter from the user [38], and the solutions proposed to tackle these problems have high prices and poor user interfaces that limit their use [1, 27, 41, 55]. Second, it has a low cost, since the basis of its design is Arduino [4]. Third, it is easy to integrate into any game, since we provide an application programming interface that supports the connection with different game engines and different scenarios. Our device is recommended for games with a strong navigation and walking component.
The paper is structured as follows. In Section 2, we present some of the techniques and games that have been proposed for blind players, as well as related work on the white cane, since this is the basis of our proposal. In Section 3, we give a detailed description of the proposed device, the application programming interface, and an example created using Unity3D. In Section 4, we present the experimental set-up designed to test the acceptance of the device. The obtained results, as well as the main limitations, are described in Section 5. Finally, in Section 6, we present our conclusions and future work.
2 Related work
In this section, we first present some of the techniques and games that have been proposed for blind players, and then describe advances related to the white cane, since this is the basis of the proposed device.
2.1 Video games for blind players
Video game playing can be characterized as a three-step process where, first, the player receives a stimulus (visual, auditory, haptic, or a combination of these); second, the player determines a response according to the possible game actions; and third, the game responds according to the selected action [72]. After these three steps have been successfully performed, the internal state of the game may change and new stimuli may be provided. The process is repeated until the player wins, loses, or quits the game. In this whole process, visual stimuli are the most common, and this makes playability difficult for blind players. To translate visual stimuli into other modalities, different computer interaction techniques that use sound, touch screens, haptic equipment, and specially designed software and hardware have been proposed.
Audio-based techniques are the most popular. These use speech strategies provided by screen readers, audio cues, and sonification [52]. Audio techniques, alone or combined with haptics, have been used not only to create video games but also virtual environments that help visually impaired people develop spatial orientation and mobility skills [59]. Some of the proposed games are: Finger Dance, an audio-based rhythm-action game [43]; AuditoryPong [21], an interactive game that transfers the game Pong into a physical and acoustic space where users move the game paddle with body interaction or haptic devices and receive immediate acoustic feedback; and Sonic Badminton, an audio-augmented badminton game that uses a virtual shuttlecock with audio feedback [26].
Focusing on the enhancement of navigation and orientation skills, Sanchez et al. proposed Audiopolis [59], an educational game that simulates a city and its environmental sounds to teach blind children how to navigate urban scenes using haptic devices as virtual canes. Maidenbaum et al. [36] designed an orientation experiment where a cane controlled by the space bar is simulated through audio feedback only. Balan et al. [6] reviewed the most notable audio-based games with regard to their usability as education tools for visually impaired people. Torres et al. [65] proposed a virtual reality simulator that makes an auditory representation of the virtual environment, rendering the virtual world entirely through hearing. It uses a 3D tracking system to locate the user's head orientation and position, providing natural user interaction, since the user only has to walk through the environment perceiving it through acoustic information. There are also navigation aids that use ultrasound to sense the surroundings and acquire spatial information (NavBelt [61], Sonic Eye [62], SonicGuide [24], iGlasses [51]). For a review of recent advances in virtual reality technology for blind and visually impaired people see [16].
Tactile-based techniques convert visual stimuli into tactile stimulation. The first such system, proposed fifty years ago [5], converted signals from a video camera into tactile stimulation applied to the back of the subject. Currently, thanks to technological advances, much smaller portable devices that allow hands-free interaction have been proposed, such as head-mounted devices, wrist-bands, vests, belts, and shoes [47, 67]. There are also small electro-tactile and vibro-tactile stimulators that can be placed on body surfaces such as the fingers, wrists, head, abdomen, or feet [23]. Yuan and Folmer [71] developed a glove that transforms visual information into haptic feedback using small pager motors attached to the tip of each finger, which allows a blind player to play Guitar Hero. VI-Tennis and VI-Bowling integrate a haptic interface based on a motion-sensing controller enhanced with vibrotactile and audio cues that allows blind players to detect key events of the Wii Sports game [45, 46]. To navigate 2D environments, the simulation of different tactile surfaces with various materials was proposed in the TiM games [2] and the Digital Clock Carpet [58]. Milne et al. [44] proposed BraillePlay, which teaches how to write and read Braille letters using vibration feedback and the touch interface of smartphones. Nikolakis et al. [49] proposed a haptic virtual reality tool that allows visually impaired users to study and interact with various virtual objects in specially designed virtual environments. The system is based on the use of the CyberGrasp [69] and PHANToM [40] haptic devices. PHANToM is the most commonly used force feedback device; it provides the sense of touch along with the feeling of force feedback at the fingertip.
Mobile technology is gaining sophistication and widespread use. Current research is centered on making mobile phones and other handheld computer devices more efficient, cost-effective, functional, and accessible, which will also benefit visually impaired people [9, 19]. Rodriguez et al. [54] proposed a smartphone-based method that uses the phone camera to capture pathway scenes as images and transforms them into messages and warnings to avoid collisions. Lu et al. [35] used smartphone accelerometers to recognize daily living activities and sporting activities. For a review of all these technologies see [17, 22]. Moreover, research based on human-computer interaction is increasingly exploring the possibility of supporting eyes-free interaction methods for smartphones and other handheld devices. Consider, for instance, the scheme to recognize human activities from sensor data proposed by Liu et al. [31], or the same authors' algorithm, which is capable of efficiently mining temporal patterns from low-level actions to represent high-level human activities [34].
2.2 The white cane
Many different techniques and devices have been proposed to enhance the interaction between humans and computers [20]. However, it is important to choose the right form of interaction to achieve high usability of the resulting system. In our case, motivated by the extensive use of the white cane, we have considered this option as the basis of our interaction device.
Improving the performance of the white cane is a continuous focus of research. The aim is to detect obstacles at wider ranges and at distances above knee level. Generally, the proposed methods are based on sensors and multisensory displays mounted on the classic white cane, which can sometimes be removed from the cane and used independently. These smart canes come in two forms [64]. In the first type, a detection device is mounted on the cane to form a stick, making it a detachable unit. Some canes in this group are Teletact [13], Tom Pouce [14], the Vistac Laser Long Cane (https://www.vistac.com) and the UltraCane (https://www.ultracane.com/soundforesigntechnologyltd). In the second type, the detection sensors are built into the cane, as in the LaserCane [38]. For a review of technological canes see [11].
We conclude this section with a final comment on the use of the described systems. A main limitation is that most of them have been tested only in experimental conditions and are not used in daily life by visually impaired people; only a few of them have been commercialized. Gori et al. [17] described possible reasons for this. A first reason could be that most of them are invasive devices. A second one could be that processing acoustic or tactile signals might overwhelm the cognitive abilities of the user. A third one could be that many of these devices require a long period of training before they can be used. A fourth one could be that the level of performance of these systems is insufficient to justify the invasiveness and effort needed to use them. Another aspect to consider is that many of the described devices do not take into account the important link between action and perception in the learning process. In addition, the cost of the device can also be a limiting factor. Taking into account all these limitations, our aim is to develop an easy-to-use device that does not require any special training and has a low cost. Although our device is focused on video games, we consider that these limitations also have to be taken into account.
3 The proposed device
Our main objective is the creation of a device to interact with video games in order to adapt them to blind players. We consider usability, economic cost, and adaptability as its main features. In this section, we present all the details of the proposed device: first, its design; second, the application programming interface required to connect the device with a game; third, an application example using Unity3D; and fourth, some final considerations.
3.1 Device description
To create the device, we first focused on usability and economic cost. Our device should be easy to use, without requiring any significant training. To satisfy this first requirement, we took the white cane as our inspiration, since this is the most popular device among the blind community. This decision fixed the design and the movements that have to be supported by the device. To start, we considered movements in one dimension, from left to right and vice versa. In addition, the device should be able to detect collisions in the virtual scenarios of the game, returning different sounds according to the collided materials. Finally, the device should be usable both when sitting and when standing, to satisfy different player preferences.
Our device should have a low cost. To satisfy this requirement, we decided to use Arduino as the basis of our development. Arduino has been specifically designed for people with little or no background in electronics or programming, its software is free to download online, and it is supported by an expanding open-source online community [4]. Its hardware is inexpensive and can be combined with any number of sensors and instruments available from a variety of retailers. In addition, it requires minimal development effort or experience, allowing development without a large financial investment. There are previous works based on this technology. For instance, Sakhardande et al. [56] proposed a detachable unit to extend the functionality of the existing white cane in order to provide above-knee obstacle detection as well as below-knee detection. The proposed stick uses ultrasound sensors to detect obstructions before direct contact, providing haptic feedback to the user according to the position of the obstacle. More recently, Sudhanthiradevi et al. [63] proposed a theoretical model of a walking stick for visually impaired people consisting of three modules that detect heat, obstacles, and water, respectively.
Taking into account all these requirements, we created the device illustrated in Figs. 1 and 2. As can be seen in Fig. 1, the device is composed of a rectangular wooden base (25 cm × 25 cm); at each vertex of the base there is a vertical piece of wood (front face 22 cm × 25 cm, back face 12 cm × 25 cm). These pieces create an inclined plane where the cane can move. Two pieces of wood, the lateral limits of the device, restrict the left-to-right movements of the cane. In Fig. 2, the main components of the device are labelled. There are two motors on the back face, one at each end of the guide. These motors control the movement of a vertical piece of wood, labelled as a lateral limit, that determines the space where the cane can move. There are also two end-of-travel push buttons that restrict the movements of these lateral limits. When the device is initialized, the motors place the limits at the extremes. There is a loudspeaker to simulate collision effects; we have integrated sounds of different materials to add realism to the collisions. These are activated each time the cane, which is covered with aluminum foil, touches the metallic pieces attached to the lateral limits placed at both sides of the device. The device has two EasyDriver motor controllers, which control the movements that have to be performed to reach a desired position. An Arduino UNO is used to control the device and to connect it to the PC via a USB port.
3.2 The device API
Our device should be adaptable to different video games. To satisfy this requirement, we created an application programming interface (API) to support the connection with different game engines and with different scenarios. This last feature requires calibration parameters to control the movements of the device according to the player movements and the objects of the video game scenario.
The API is composed of two parts: the external API, which communicates the device with the game engine, and the internal API, integrated in the game engine used to develop the video game. The main modules of the API are illustrated in Fig. 3 and described below.
3.2.1 External API
The external API has been programmed in C. Its main modules are illustrated in Fig. 4 and described below.
- Communication Library: connects the device with the computer. It configures the connection to receive and emit data through serial port COM3 and waits for game engine API information related to the position of the lateral limits of the device, and for information on collisions between the white cane and an object of the game scenario. To carry out this process, it has the following methods: OnStart(), which configures the serial port COM3 communication to send and receive messages; OnPositionReceive(), which waits for the game engine API message with the position of the lateral limits; SendCollision(), which reports whether the cane collides with the lateral limits; OnSoundReceive(), which reproduces the sound assigned to the collided object; and OnStop(), which closes the device.
- Movement Library: controls the lateral limits of the device used to simulate virtual objects. It starts by placing the limits at an initial position and then waits for information related to the position of the object in the scene in order to move the limits accordingly. It has two private methods, Move() and Calibrate(). The first has two input parameters, representing the positions of the lateral limits, and moves the limits to these positions. The second moves the limits to the initial position. The Movement Library also has three public methods: OnStart() and OnEnd(), used to turn the device on and off, respectively, and OnPositionUpdate(), called by OnPositionReceive() from the Communication Library, which calls the Move() method to move the limits.
- Switch Library: controls the device switches. It has two main methods: LimitSwitch(), which checks whether the left and right limits are in the initial position, and LeftRightSwitch(), which detects whether the white cane collides with the limits.
- Audio Library: reproduces the sound assigned to the collided object. It is composed of a class and an audio database that stores sounds in MP3 format and their relationships with objects. The class has an OnStart() method to initialize the loudspeaker, and a Play(SoundID) method, called by OnSoundReceive(), that takes as input the identifier of the sound from the audio database that has to be reproduced.
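To make the interplay between these modules concrete, the following Python sketch mimics the message flow of the external API. The actual implementation is written in C and communicates over serial port COM3; the class structure, the message dictionary, and the default sound used here are assumptions introduced only for illustration.

```python
# Illustrative sketch of the external API's message flow (the real code is C).
# All names and data formats below are simplifications, not the actual API.

class ExternalAPI:
    def __init__(self, audio_db):
        self.audio_db = audio_db          # sound_id -> MP3 file (Audio Library)
        self.limits = (0.0, 25.0)         # lateral limits, in cm (calibrated)

    def on_position_receive(self, left_cm, right_cm):
        """Game engine sent new lateral-limit positions; Movement Library
        would now drive the motors (Move())."""
        self.limits = (left_cm, right_cm)
        return self.limits

    def send_collision(self, side):
        """Switch Library detected the cane touching a limit; report it
        back to the game engine so it can choose a sound."""
        assert side in ("left", "right")
        return {"type": "collision", "side": side}

    def on_sound_receive(self, sound_id):
        """Audio Library plays the sound assigned to the collided object."""
        return self.audio_db.get(sound_id, "default.mp3")
```

A typical cycle would be: the game sends limit positions, the player's cane strikes a limit, the collision is reported, and the game answers with the sound identifier to play.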
3.2.2 Game engine API
The main modules of the internal API are illustrated in Fig. 5 and described below.
- Connection Library: connects the computer and the device. It configures the connection to send and receive information through serial port COM3. It sends the position of the lateral limits to the external API and waits for collisions; in case of collision, it sends the sound identifier. The methods of this module are: OnStart(), which configures the communication with serial port COM3; SendPosition(), which sends the positions of the lateral limits; OnCollisionReceive(), which receives the data when there has been a collision with one of the lateral limits; SendSoundID(), which sends the identifier of the sound that has to be reproduced; and OnStop(), which closes the connection on exit.
- Character controller: configures player parameters such as the distance and velocity of player movements. It also initializes the white cane, fixing the walking distance. It has the following methods: OnStart(), which configures the movement velocity and the white cane parameters; CreateWhiteCane(), which creates and initializes the white cane object; and UpdatePosition(), which moves the player from one position to the next.
- White Cane controller: controls the movements of the white cane. It fixes the amplitude of the white cane movements and, during execution, checks whether there are collisions between the white cane and scene objects. It has the following methods: OnStart(), which configures the width of the white cane movements and other parameters; LeftRightRayCast(), which casts a ray perpendicular to the player position in order to detect left and right collisions. The ray can only collide with the objects of the collision layer. The method returns the identifier and the distance of the first collided object with respect to the player position, represented as (objL, objR, DistL, DistR); in case of no collision, it returns −1 for all values. The CalculateLimitPosition() method receives the distance values and the width of the white cane, and returns the positions of the limits, represented as L-left and L-right. These values are defined according to the device-game scale.
- Collision controller module: defines the layer of scene objects that can be collided with by the white cane. It is used by the White Cane controller. It has a GetLayer() method that assigns the layer to the objects.
- Object Collision: a class assigned to each object of the scene that can be collided with by the white cane. Each instance of this class has a sound identifier, an OnStart() method that assigns the object to the layer, and a GetSound() method that returns the sound of the object.
- Behaviour module: determines the actions that have to be performed when an object is collided with. It has the following methods: OnImpactStart(), called when the object is first collided with; OnImpactStay(), called while the object remains collided with; and OnImpactEnd(), called when the collision has stopped. The module allows the creation of new behaviours by class inheritance and new implementations of these methods. There is also another method, UpdateObjectPosition(), that moves the object if necessary.
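The mapping performed by CalculateLimitPosition() can be sketched as follows. This is a Python illustration of one plausible mapping only: the device width, the device-game scale factor, and the clamping policy are assumptions, since the paper states only that the limit positions are defined according to the device-game scale.

```python
# Hypothetical sketch of CalculateLimitPosition(): map raycast distances
# (metres; -1 means no hit) to lateral-limit positions in device centimetres.
# DEVICE_WIDTH_CM comes from the prototype description; the scale is assumed.

DEVICE_WIDTH_CM = 25.0      # usable width of the guide between the extremes
SCALE_CM_PER_M = 10.0       # assumed: 1 scene metre -> 10 cm of limit travel

def calculate_limit_position(dist_l, dist_r, cane_width_cm=2.0):
    """Return (L-left, L-right) in cm, clamped to the physical guide."""
    centre = DEVICE_WIDTH_CM / 2.0
    if dist_l < 0:                         # no object detected on the left:
        l_left = 0.0                       # park the limit at the extreme
    else:
        l_left = max(0.0, centre - cane_width_cm / 2 - dist_l * SCALE_CM_PER_M)
    if dist_r < 0:
        l_right = DEVICE_WIDTH_CM
    else:
        l_right = min(DEVICE_WIDTH_CM,
                      centre + cane_width_cm / 2 + dist_r * SCALE_CM_PER_M)
    return l_left, l_right
```

For example, an object half a metre to the left and nothing on the right would place the left limit 5 cm (plus half the cane width) inside the guide while the right limit stays parked at the extreme.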
3.2.3 Unity3D API
As an example, we present the game engine API created for the Unity3D game engine. This has been programmed in C# and uses some of the Unity3D functionalities. In particular, for the Connection Library we have used the C# methods for communicating through serial port COM3. For the Character controller module, we have used the physics library of Unity3D to move the player character, and the Unity3D object hierarchy to place the white cane in the correct position with respect to the controlled character. For the White Cane controller, we have used the RayCast methods from the Unity3D physics library, which cast rays and detect objects in a layer. For the Collision controller and Object Collision modules, we have used the layer system of Unity3D; in this way, we can distinguish collidable objects from non-collidable ones. For the Behaviour module, we have implemented two different behaviours: the first drags the object in the direction opposite to the collision, and the second counts the number of knocks on the object and, depending on this number, moves the object or not.
In addition, for all possible game scenarios we have identified all the objects that can be collided with by the player. For each object, we have assigned the sound that will be reproduced when the object is collided with, the layer of the collider, and the supported behaviours. Moreover, we have defined the correspondence scale between the movement of the virtual white cane and the scenario. Part of this information is illustrated in Fig. 6, where a screen of the video game Direction to Saint Narcis Church, presented in the next section, has been considered.
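The per-object metadata described above can be sketched as a simple table. The following Python sketch is purely illustrative: the object names, sound identifiers, and behaviour labels are hypothetical examples, not the actual assignments shown in Fig. 6.

```python
# Hypothetical per-object metadata: each collidable object carries a sound
# identifier, a collision layer, and its supported behaviours (sketch only).

SCENE_OBJECTS = {
    "bench": {"sound_id": "wood_knock", "layer": "collidable",
              "behaviours": ["drag"]},
    "chair": {"sound_id": "wood_knock", "layer": "collidable",
              "behaviours": ["drag", "knock_count"]},
    "wall":  {"sound_id": "stone_thud", "layer": "collidable",
              "behaviours": []},
}

def get_sound(obj_name):
    """What Object Collision's GetSound() would return for this object."""
    return SCENE_OBJECTS[obj_name]["sound_id"]
```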
3.2.4 Considerations
We conclude this section with some final considerations. With respect to the design of game objects and player movements, it has to be taken into account that the lateral limits simulate the object positions and the device motors place them in the correct location. The movement speed of these limits depends on the motor capabilities, the thread of the guides used to move them, and the distance between the previous position and the current one. In the worst case, when a lateral limit is at one extreme of the device and has to move to the other extreme, it requires 2.0 seconds. To ensure that all objects can be detected, we have to adapt the velocity of player movements to the speed of the lateral limits. Once this is determined, we can consider two strategies to control player movements. In the first, the player moves automatically with a velocity adapted to the speed of the lateral limits. In the second, the player moves using the keyboard keys. With respect to the realism of the provided feedback, unlike other devices, in case of collision there is a real impact with the lateral limit. This feature gives more realism to the collision than devices that return vibrotactile feedback [40, 66]. Moreover, the economic and computational costs of our device are lower than those of these alternatives.
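The velocity-adaptation rule above can be made explicit with a short calculation. Assuming the 25 cm guide and the 2.0 s worst-case traversal stated in the text, a limit moves at 12.5 cm/s; under a hypothetical device-game scale (the 10 cm-per-metre value below is an assumption), the player speed must be capped so that the limits can always keep up:

```python
# Sketch of the velocity cap implied by the worst-case limit traversal.
# DEVICE_WIDTH_CM and WORST_CASE_TRAVERSAL_S come from the text; the
# device-game scale factor is a hypothetical example value.

DEVICE_WIDTH_CM = 25.0
WORST_CASE_TRAVERSAL_S = 2.0
SCALE_CM_PER_M = 10.0   # assumed device-to-scene scale

def max_player_speed_m_per_s():
    limit_speed_cm_s = DEVICE_WIDTH_CM / WORST_CASE_TRAVERSAL_S  # 12.5 cm/s
    # A player moving at v m/s changes object distances at v m/s, so the
    # limits must track v * SCALE_CM_PER_M cm/s; cap v accordingly.
    return limit_speed_cm_s / SCALE_CM_PER_M
```

Under these assumptions the player should not move faster than 1.25 m/s, which is consistent with ordinary walking speed.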
4 Experimental set-up
In this section, we describe the experiment that has been designed to test the acceptance of the proposed device.
4.1 Testing video games
Our device is particularly well suited for video games where navigation and walking are the principal actions. For this reason, we created three mini-games that require these as the main player actions. The three mini-games have been integrated in the Legends of Girona game [53]. A brief description of each one is given below.
The first game, Entering the defensive wall of Girona, reproduces the entrance through the defensive wall of the town of Girona (see Fig. 7a). It reproduces a corridor with obstacles on both sides, left and right. The player has to detect the obstacles on the left and on the right and has to remember the number of obstacles detected on each side. When the player reaches the end of the corridor, he/she finds two final obstacles, one at each side, that have to be moved. To move one of them, he/she has to knock as many times as the number of obstacles found on the corresponding side and wait five seconds. If the number is correct, he/she will gain access to the other side of the defensive wall. The challenge is to do this as fast as possible.
The second game, Direction to Saint Narcis Church, reproduces the way to the Saint Narcis Church (see Fig. 7b). The player has to avoid colliding with different objects, such as chairs, tables, or benches. The game is over when the player arrives at the church.
The third game, Saint Narcis and the Flies, recreates part of the Saint Narcis and the flies legend (see Fig. 7c). This took place in the 18th century, when the French army entered the town of Girona and occupied the area outside the city walls, including the Church of Sant Feliu, where the tomb of Sant Narcis lay. To stop the French soldiers from attacking, the player has to reach the tomb by going through a maze and set free the flies that are inside the tomb. The player knows that there are the same number of turns to the right as to the left. At the last corner, the player can go right or left; the option that equalizes the number of corners is the correct one. If the player selects it, the game is over. The maze configuration changes at each play. To perform movements, the player uses the keyboard arrows.
4.2 Participants
To evaluate the acceptance of the proposed device, we considered a group of 12 blind participants, 6 females and 6 males. The males had an average age of 35 years with a standard deviation of 10.86, and the females an average age of 29.83 years with a standard deviation of 8.05. Participants were recruited through personal contacts. The study was conducted in a laboratory at the University of Girona.
4.3 Designed study
We designed a study composed of three parts, which each participant carried out individually. First, we described the experiment to the participant and asked him/her to answer a first questionnaire (see Table 1, questions Q1 to Q4). Second, we introduced the device and the three mini-games to the participant. The participant played the games alone under the authors' observation. At the end of the playing session, participants answered questions related to the device (see Table 1, questions Q5 to Q7). For each mini-game, they also answered questions related to the played game (see Table 1, questions Q8 and Q9). To answer, players used a five-level Likert scale where 1 means Strongly disagree and 5 Strongly agree. We also asked them about the experience and their feelings, both at the beginning and at the end of the session.
4.4 Statistical analysis
Fisher's exact test and the Mann-Whitney U test were used to analyse the primary outcome measures in the experimental design with respect to participant gender (male vs. female) and whether participants were blind from birth. Fisher's exact test is used to test the independence of two categorical variables; the null hypothesis is that the relative proportions of one variable are independent of the second variable. We use this test because it is well known to be more accurate than the chi-squared test or the G-test of independence when the expected numbers are small. The Mann-Whitney U test is used to test whether two independent groups are homogeneous and have the same distribution; the null hypothesis stipulates that the two groups come from the same population. We use this test instead of the t-test because it is particularly suitable when the dependent variable is ordinal, as is the case in questions Q5 to Q9 (see Table 1).
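For readers who wish to reproduce the analysis, the U statistic for ordinal Likert data can be computed as in the following pure-Python sketch; it uses mid-ranks for ties, which is the appropriate treatment for the heavily tied scores of questions Q5 to Q9. This is a standalone illustration rather than the authors' actual analysis code, and it returns the statistics only (p-values are omitted for brevity).

```python
# Mann-Whitney U with mid-ranks for ties (suitable for Likert scores).

def mann_whitney_u(a, b):
    """Return (U_a, U_b) for two independent samples of ordinal scores."""
    combined = sorted(a + b)
    # Assign each distinct value its mid-rank (average of its tied ranks).
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2.0   # average of ranks i+1..j
        i = j
    r_a = sum(ranks[x] for x in a)               # rank sum of sample a
    u_a = r_a - len(a) * (len(a) + 1) / 2.0
    u_b = len(a) * len(b) - u_a                  # U_a + U_b = n_a * n_b
    return u_a, u_b
```

With completely separated samples such as [1, 2, 3] versus [4, 5, 6], this returns (0.0, 9.0), the extreme values of U for samples of size three.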
5 Results
In this section, we present the obtained results, a time performance evaluation, and the limitations of the experiment.
Table 2 presents the responses to the first questionnaire (see Table 1, questions Q1 to Q4). Of the 12 participants in our testing group, 66.7% are blind from birth, all of them use a white cane, and 83.3% would like to play video games, although only 16.7% had previously played one.
From the answers to the second questionnaire (see Table 1, questions Q5 to Q7), we evaluated device usability focusing on its similarity to the white cane, its robustness, and the player's impression when colliding with virtual objects. The answers were scored from 1 to 5. The obtained results are shown in Table 3. Note that the median score is 4 for all questions.
We also asked, on the same scale, about the suitability of the device in the game and about player enjoyment for each of the games. The obtained results are presented in Tables 4 and 5, respectively. The device seems to be suitable, since all quartiles are 4 or higher except for game 2, where the first quartile is 3 (see Table 4). With respect to enjoyment, results differ: the medians are 4, 3, and 5, respectively. The worst results were obtained in mini-game 2. We believe this is because the game offers no challenge, as the player only has to walk while avoiding obstacles.
We also looked for significant differences according to gender and blindness from birth. Our sample is well balanced with respect to gender, with 6 people in each group (see Table 6). We did not find significant differences in age (p-value 0.4848) or in questions Q1, Q3, and Q4 (p-values = 0.55, 1, and 1, respectively). Table 7 shows the differences in scores according to gender. The lowest median is 3, but most medians are 4 or higher. Moreover, a Mann-Whitney U test found no significant differences between the scores of males and females.
If we split our sample by blindness from birth, the groups are less balanced: 8 participants were born blind and 4 were not (see Table 8). We did not find significant differences in age (p-value 0.2141) or in questions Q1, Q3, and Q4 (p-values = 0.55, 0.09, and 1, respectively). Table 9 shows the differences in scores according to the born-blind condition. Again, the lowest median is 3, but most medians are 4 or higher. Moreover, a Mann-Whitney U test found no significant differences between the scores of participants blind from birth and the rest.
To use the device, the user can either sit or stand. In the latter case, the device must be placed at the correct height. In our experiment, all players preferred to sit.
Although the number of participants is small, this evaluation shows that the device is well accepted and fits into games. We have also observed that a game requires a final challenge to be more attractive to the player. Moreover, we did not find significant differences according to gender or blindness from birth. With respect to players' feelings, we asked them how they felt at the beginning and at the end of the test. Their feelings ranged from anxiety, fear, and scepticism at the beginning to happiness and euphoria at the end. We consider that this issue deserves a deeper study, reproducing experiments such as the ones presented in [73].
5.1 Time performance
The relationship between application responsiveness and user attention is a well-studied area of human-computer interaction, as Jakob Nielsen describes in Usability Engineering [48], and basic advice regarding response times has remained about the same for thirty years [8, 42]. The time limit for users to feel that they are directly manipulating objects in a user interface is 0.1 seconds. The limit for users to feel that they are freely navigating the command space without unduly waiting for the computer is 1 second. Finally, the limit for keeping the user's attention on the task is 10 seconds. Unfortunately, the relationship between game responsiveness and player attention cannot be measured in the same way, since player requirements vary greatly with game genre. For instance, role-playing games do not require fast interaction, while shooter games do. In our context, we focused on the experience of true immersion, and we consider that players must be able to manipulate the game world almost as intuitively as they manipulate the real world. Therefore, the response time of our device is a key factor in keeping the player's attention.
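The three limits can be captured in a trivial classifier; the following is an illustrative sketch, not part of the device software:

```python
def nielsen_band(seconds):
    """Classify a response time against Nielsen's three classic limits [48]."""
    if seconds <= 0.1:
        return "direct manipulation"  # feels instantaneous to the user
    if seconds <= 1.0:
        return "flow preserved"       # delay noticed, train of thought kept
    if seconds <= 10.0:
        return "attention kept"       # feedback needed, attention retained
    return "attention lost"           # the user turns to other tasks

print(nielsen_band(0.0001))  # a 100-microsecond event: "direct manipulation"
print(nielsen_band(2.7))     # a multi-second mechanical movement: "attention kept"
```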
To evaluate response time, we considered the first mini-game, Entering the defensive wall of Girona, since response time is decisive there: the player has to detect all obstacles in minimal time. The first time to consider is the collision detection time. This is 100 microseconds, the delay set in the communication protocol between the computer and the device; note that this is a maximum. The second time to consider is the time to detect other actions, such as the number of knocks on an object. This depends on the bounce control, which in our case has been set to 0.02 seconds. We also have to take into account the time required to place the lateral limits of the device in the correct position to simulate the collided obstacle. As mentioned, this time depends on the engine speed and on the distance between the previous position and the current one. We considered the worst situation, when a lateral limit is at the extreme; in this case 2.7 seconds are required. To measure this time we used the TimeCatch module, which is part of both the External and the Internal API. In the External API, this module has two methods: StartMovement, which saves the time when the movement has to start, and OnFinishMovement, which is triggered when the movement finishes. In the Internal API, the module has a SendFinishMovement method that calls OnFinishMovement when the movement finishes. Players have a feeling of immediate feedback. The obtained results satisfy Nielsen's limits, and we consider them good enough response times, also when compared with the Shneiderman and Seow classifications [12].
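The TimeCatch measurement described above amounts to a stopwatch wrapped around the movement command. The sketch below uses the two External API method names from the paper (StartMovement, OnFinishMovement); everything beyond those names is an assumption about the internals:

```python
import time

class TimeCatch:
    """Stopwatch sketch of the TimeCatch module. Only the method names
    come from the paper; the internals here are illustrative."""

    def __init__(self):
        self._start = None
        self.elapsed = None  # seconds, filled in when the movement ends

    def StartMovement(self):
        # save the instant the lateral-limit movement is commanded
        self._start = time.perf_counter()

    def OnFinishMovement(self):
        # triggered when the device reports the movement has finished
        self.elapsed = time.perf_counter() - self._start
        return self.elapsed

# Usage: wrap a movement and compare the result against the 0.1 s / 1 s / 10 s limits.
tc = TimeCatch()
tc.StartMovement()
time.sleep(0.02)                      # stand-in for the real limit movement
print(tc.OnFinishMovement() >= 0.02)  # sleep guarantees at least 0.02 s elapsed
```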
From these experiments, we have seen that response time can also be improved by modifying the shape of the elements in the game scenario. In particular, if rectangular shapes are replaced by circular shapes, object detection becomes faster without losing playability.
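Part of the intuition can be seen in the per-probe cost of the two tests (an illustrative sketch; the mini-games themselves rely on the engine's collision system): a circle needs one squared-distance comparison from any approach angle, while a rotated rectangle first needs a change of basis into its local frame.

```python
import math

def probe_hits_circle(px, py, cx, cy, r):
    # one subtraction pair, two multiplies, one comparison
    return (px - cx) ** 2 + (py - cy) ** 2 <= r * r

def probe_hits_rotated_rect(px, py, cx, cy, w, h, angle):
    # rotate the probe point into the rectangle's local frame, then bound-check
    c, s = math.cos(-angle), math.sin(-angle)
    lx = c * (px - cx) - s * (py - cy)
    ly = s * (px - cx) + c * (py - cy)
    return abs(lx) <= w / 2 and abs(ly) <= h / 2
```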
5.2 Limitations
Although we are satisfied with the results of our experiments, we are aware that the low number of participants is the main limitation of our work and that a more exhaustive experiment has to be carried out. For this reason, as immediate work we will run a new experiment with more participants; we are already recruiting more blind players. In addition, we want to prepare our laboratory to perform a video-controlled experiment that registers all details of the participants during the playing sessions. We also want to evaluate other factors that may influence comfort when using the device, such as human motion [10, 29, 30, 32]. Moreover, to predict users' interest in the device, we want to apply machine learning techniques as in [33]. We consider these steps necessary before commercializing the device.
6 Conclusions and future work
The visual channel is the main component of the majority of video games. This fact makes designing games for visually impaired people challenging. In this paper, we have focused not on game design but on an interaction device specifically designed for visually impaired people. Inspired by smart canes, we have presented an Arduino-based device that supports left-right movements as well as drag and drop operations. Moreover, combined with a sound library, the proposed device provides a realistic experience for any player. The device is suitable for exploration games that require identifying objects placed on the floor. It is easy to use and can be adapted to any game thanks to the provided application programming interface. Our future work will center on developing a new version of the device that supports more movements and that can be used beyond PCs. In addition, we want to define games capable of combining visual and non-visual players. Finally, we are designing a new experiment with more participants and with more advanced techniques to evaluate player performance and preferences.
References
Abdel-Wahab AG, El-Masry AAA (2011) Mobile information communication technologies adoption in developing countries: effects and implications. Hershey, IGI Global
Archambault D (2004) The TIM project: overview of results. In: Proc. Computers helping people with special needs, pp 248–256
Archambault D, Ossmann R, Gaudy T, Miesenberger K Computer games and visually impaired people (https://cedric.cnam.fr/fichiers/RC1204.pdf)
Arduino What is Arduino. Available online: https://www.arduino.cc/en/Guide/Introduction. Accessed 22 May 2017
Bach-y-Rita P, Collins CC, Saunders FA, White B, Scadden L (1969) Vision substitution by tactile image projection. Nature 221:963–964
Balan O, Moldoveanu A, Moldoveanu F (2015) Navigational audio games: an effective approach toward improving spatial contextual learning for blind people. Int J Disabil Hum Dev 14(2):109–118
Brewster SA (1998) Using non-speech sounds to provide navigation cues. ACM Trans Comput-Human Interact 5:224–259
Card SK, Robertson GG, Mackinlay JD (1991) The information visualizer: an information workspace. In: Proc. ACM CHI'91 Conf, pp 181–188
Csapó G, Nagy H, Stockman T (2015) A survey of assistive technologies and applications for blind users on mobile platforms: a review and foundation for research. J Multimodal User Interf 9(4):275–286
Cui J, Liu Y, Xu Y, Zhao H, Zha H (2013) Tracking generic human motion via fusion of low- and high-dimensional approaches. IEEE Trans Syst Man Cybern Syst 43(4):996–1002
Cuturi LF, Aggius-Vella E, Campus C, Parmiggiani A, Goria M (2016) From science to technology: orientation and mobility in blind children and adults. Neurosci Biobehav Rev 71:240–251
Dabrowski J, Munson EV (2011) 40 Years of searching for the best computer system response time. Interact Comput 23:555–564
Damaschini R, Legras R, Leroux R, Farcy R (2005) Electronic travel aid for blind people. Assistive Technol Virtual Realit 16(1):251–255
Farcy R, Leroux R, Damaschini R, Legras R, Bellik Y, Jacquet C, Pardo P (2003) Laser telemetry to improve the mobility of blind people: report of the 6 month training course. In: Proc. 1st International conference on smart homes and health telematics, pp 113–115
Friberg J, Gärdenfors D (2004) Audio games: new perspectives on game audio. In: Proc. Int. ACM conference on advances in computer entertainment technology, pp 148–154
Ghali NI, Soluiman O, El-Bendary N, Nassef TM, Ahmed SA, Elbarawy YM, Hassanien AE (2012) Virtual reality technology for blind and visual impaired people: reviews and recent advances. In: Gulrez T, Hassanien A E (eds) Advances in robotics and virtual reality, ISRL 26, pp 363–385
Gori M, Cappagli G, Tonelli A, Baud-Bovy G, Finocchietti S (2016) Devices for visually impaired people: High technological devices with low user acceptance and no adaptability for children. Neurosci Biobehav Rev 69:79–88
Gutschmidt R, Schiewe M, Zinke F, Jürgensen H (2010) Haptic emulation of games: haptic Sudoku for the blind. In: Proc. Int. Conf. on pervasive technologies related to assistive environments. Article 2. ACM
Hakobyan L, Lumsden J, O'Sullivan D, Bartlett H (2013) Mobile assistive technologies for the visually impaired. Surv Ophthalmol 58(6):513–528
El Saddik A (ed) (2012) Haptics rendering and applications. ISBN 978-953-307-897-7
Heuten W, Henze N, Boll S, Klante P (2007) Auditorypong, playing pong in the dark. In: Audio mostly- proc. conference on interaction with sound
Hossain E, Khan R, Muhida R, Ali A (2011) State of the art review on walking support system for visually impaired people. Int J Biomechatron Biomed Robot 1(4):232–251
Kajimoto H, Inami M, Kawakami N, Tachi S (2003) SmartTouch: augmentation of skin sensation with electrocutaneous display. In: Proc. Symposium on haptic interfaces for virtual environment and teleoperator systems (HAPTICS 2003). IEEE, pp 40–46
Kay L (2000) Auditory perception of objects by blind persons, using a bioacoustic high resolution air sonar. J Acoust Soc Am 107:3266–3275
Kim J, Ricaurte J (2011) TapBeats: accessible and mobile casual gaming. In: Proc. ACM Conference on computers and accessibility. ACM, pp 285–286
Kim S, Lee K, Nam T (2016) Sonic-badminton: audio-augmented badminton game for blind people. In: Proc. Conference on human factors in computing systems. ACM, pp 1922–1929
Lacey G, Dawson-Howe KM, Vernon D (1995) Personal autonomous mobility aid for the frail and elderly blind (Technical Report, No. TCD-CS-95-18). Trinity College, Dublin
Leonard R (2002) Statistics on vision impairment: a resource manual, 5th edn. Arlene Gordon Research Institute of Lighthouse International, New York
Liu Y, Zhang X, Cui J (2010) Visual analysis of child-adult interactive behaviors in video sequences. Int Conf Virt Syst Multimed
Liu Y, Cui J, Zhao H, Zha H (2012) Fusion of low-and high-dimensional approaches by trackers sampling for generic human motion tracking. Int Conf Pattern Recogn 898–901
Liu Y, Nie L, Han L, Zhang L, Rosenblum DS (2015) Action2Activity: recognizing complex activities from sensor data. In: Yang Q, Wooldridge M (eds) Proc. int. conference on artificial intelligence. AAAI Press, pp 1617–1623
Liu L, Cheng L, Liu Y, Jia Y, Rosenblum DS (2016) Recognizing complex activities by a probabilistic interval-based model. Int Conf Artific Intell, 1266–1272
Liu Y, Zhang L, Nie L, Yan Y, Rosenblum DS (2016) Fortune teller: predicting your career path. Int Conf Artif Intell, 201–207
Liu Y, Nie L, Liu L, Rosenblum DS (2016) From action to activity: sensor-based activity recognition. Neurocomputing 181:108–115
Lu Y, Wei Y, Liu L, Zhong J, Sun L, Liu Y (2017) Towards unsupervised physical activity recognition using smartphone accelerometers. Multimed Tools 76(8):10701–10719
Maidenbaum S, Levy-Tzedek S, Chebat D, Namer-Furstenberg R, Amedi A (2014) The effect of expanded sensory range via the EyeCane sensory substitution device on the characteristics of visionless virtual navigation. Multisens Res, ISSN 2213-4794
Manduchi R, Coughlan JM (2012) Computer vision without sight. Commun ACM 55(1):96–104
Manduchi R, Kurniawan S (2011) Mobility-related accidents experienced by people with visual impairment. Res Pract Vis Impairment Blindness 4(2):44–54
Manduchi R, Kurniawan S (2012) Assistive technology for blindness and low vision. CRC Press, ISBN 9781439871539
Magnusson C, Rassmus-Gröhn K, Sjöström C, Danielsson H (2002) Navigation and recognition in complex haptic virtual environments: reports from an extensive study with blind users. In: Proc. Eurohaptics 2002, Edinburgh
Mau S, Melchior NA, Makatchev M, Steinfeld A (2008) BlindAid: an electronic travel aid for the blind (Technical Report, No. CMU-RI-TR-07-39). The Robotics Institute at Carnegie Mellon University, Pittsburgh
Miller RB (1968) Response time in man-computer conversational transactions. Proc AFIPS Fall Joint Comput Conf 33:267–277
Miller D, Parecki A, Douglas S (2007) Finger dance: a sound game for blind people. In: Proc. Int. ACM conference on computers and accessibility, pp 253–254
Milne L, Bennett C, Ladner R, Azenkot S (2014) Brailleplay: educational smartphone games for blind children. In: Proc. Int. conference on computers and accessibility, pp 137–144
Morelli T, Foley J, Columna L, Lieberman L, Folmer E (2010) VI-Tennis: a vibrotactile/audio exergame for players who are visually impaired. In: Proc. Int. ACM conference on the foundations of digital games, pp 147–154
Morelli T, Foley J, Folmer E (2010) Vi-bowling: a tactile spatial exergame for individuals with visual impairments. In: Proc. International conference on computers and accessibility, pp 179–186
Nagarajan R, Yaacob S, Sainarayanan G (2003) Role of object identification in sonification system for visually impaired. In: IEEE Tencon (IEEE region 10 conference on convergent technologies for the AsiaPacific). Bangalore, pp 15–17
Nielsen J (1993) Usability engineering. Academic Press, Boston
Nikolakis G, Tzovaras D, Moustakidis S, Strintzis M (2004) Cybergrasp and phantom integration: Enhanced haptic access for visually impaired users. Conf Speech Comput 20–22
Papa Sangre (2013) http://www.papasangre.com/
Rempel J (2012) Glasses that alert travelers to objects through vibration? An evaluation of iGlasses by RNIB and AmbuTech. AFB Access World Mag 13:9
Revuelta P, Ruiz B, Sánchez JM, Bruce N (2014) Scenes and images into sounds: a taxonomy of image sonification methods for mobility applications. J Audio Eng Soc 62(3):161–171
Rodriguez A, Garcia RJ, Garcia JM, Magdics M, Sbert M (2013) Implementation of a videogame: Legends of Girona. In: González PA, Gómez MA (eds) Actas del Primer Simposio Español de Entretenimiento Digital, pp 96–107
Rodriguez-Sancheza MC, Moreno-Alvareza MA, Martin E, Borromeoa S, Hernandez-Tamamesa JA (2014) Accessible smartphones for blind users: a case study for a way finding system. Expert Syst Appl 41(16):7210–7222
Roentgen UR, Gelderblom GJ, Soede M, de Witte L (2008) Inventory of electronic mobility aids for persons with visual impairments: a literature review. J Vis Impairment Blindness 102(11):702–724
Sakhardande J, Pattanayak P, Bhowmick M (2013) Arduino based mobility cane. Int J Sci Eng Res 4(4)
Sánchez J, Sáenz M, Ripoll M (2009) Usability of a multimodal videogame to improve navigation skills for blind children. In: Proc. ACM Computers and accessibility, pp 35–42
Sánchez J, Sáenz J, Garrido JM (2010) Usability of a multimodal video game to improve navigation skills for blind children. TACCESS 3:2010
Sánchez J, Campos M, Espinoza M, Merabet LB (2014) Audio haptic video gaming for developing way finding skills in learners who are blind. In: Proc. International conference on intelligent user interfaces. ACM, pp 199–208
Savidis A, Stamou A, Stephanidis C (2007) An accessible multimodal pong game space. Universal Access Ambient Intell Environ 405–418
Shoval S, Borenstein J, Koren Y (1998) The Navbelt-A computerized travel aid for the blind based on mobile robotics technology. IEEE Trans Biomed Eng 45:1376–1386
Sohl-Dickstein J, Teng S, Gaub BM, Rodgers CC, Li C, DeWeese MR, Harper NS (2015) A device for human ultrasonic echolocation. IEEE Trans Biomed Eng 62:1526–1534
Sudhanthiradevi M, Suganya Devi M, Roshini R, Sathya T (2016) Arduino based walking stick for visually impaired. Int J Adv Res Trends Eng Technol 5:4
Kim S, Cho K (2013) Usability and design guidelines of smart canes for users with visual impairments. Int J Des 7(1)
Torres-Gil MA, Casanova-Gonzalez O, Gonzalez-Mora JL (2010) Applications of virtual reality for visually impaired people. WSEAS Trans Comput 2(9):184–193
Tzovaras D, Moustakas K, Nikolakis G, Strinzis M (2009) Interactive mixed reality white cane simulation for the training of the blind and the visually impaired. Person Ubiq Comput 13(1):51–58
Velázquez R (2010) Wearable assistive devices for the blind. In: Wearable and autonomous biomedical devices and systems for smart environment. Lect Notes Electr Eng 75:331–349
Vorderer P, Bryant J (2006) Playing video games: motives, responses and consequences. LEA, Mahwah, NJ, ISBN 978-0805853223
Wood J, Magennis M, Cano Arias EF, Gutierrez T, Graupp H, Bergamasco M (2003) The design and evaluation of a computer game for the blind in the GRAB haptic audio virtual environment. In: Proc. Eurohaptics 2003, Dublin
Entertainment Software Association. www.theesa.com/. Accessed May 2017
Yuan B, Folmer E (2008) Blind hero: enabling guitar hero for the visually impaired. In: Proc. International ACM conference on computers and accessibility, pp 169–176
Yuan B, Folmer E, Harris FC (2011) Game accessibility: a survey. Univers Access Inf Soc 10(1):81–100
Zhao S, Yao H, Gao Y, Ji R, Xie W, Jiang X, Chua TS (2016) Predicting personalized emotion perceptions of social images. Proc ACM Multimed Conf 1385–1394
Acknowledgments
This work was supported by the Catalan Government (Grant No. 2014-SGR-1232) and by the Spanish Government (Grant No. TIN2016-75866-C3-3-R).
Rodríguez, A., Boada, I. & Sbert, M. An Arduino-based device for visually impaired people to play videogames. Multimed Tools Appl 77, 19591–19613 (2018). https://doi.org/10.1007/s11042-017-5415-1