1 Introduction

In recent years, there has been growing demand for video games across a wide range of age groups, genders, and application fields. According to the Entertainment Software Association, 63% of U.S. households have at least one person who plays video games regularly (3 hours or more per week), with an average player age of 35 years [70]. This interest in video games has led to different studies that evaluate their effects on players. Although early studies focused on negative aspects such as potential harm related to violence, addiction, and depression, studies from the last decade have highlighted positive aspects of playing video games from cognitive, motivational, emotional, or social perspectives [68]. The fact is that video games have great potential not only for entertainment but also as a tool to address specific problems or to improve different skills. Currently, games are used in education, health, and business, among other fields [72].

Despite the increased level of interest in games and the benefits they provide, a large group of people is excluded from playing video games because of a disability. To overcome this limitation, the creation of inclusive computer games, such that nobody is left out and everyone has access, has become an emerging focus of research. In this paper, we focus on visually impaired people. Video games could offer new socialization, education, employment, and health opportunities for these individuals [39]. According to the World Health Organisation, 285 million people are estimated to be visually impaired worldwide (39 million are blind and 246 million have low vision) (http://www.who.int/blindness/GLOBALDATAFINALforweb.pdf).

Blind players have many difficulties accessing video games, since most of them rely on impressive graphics and immersive visual experiences. They may also find barriers in providing input. Different strategies to replace visual stimuli have been proposed in order to adapt video games to this community. On the one hand, there are audio-based techniques that use auditory icons and earcons to associate information with sound [7, 15, 43]. There are also more sophisticated audio solutions such as spatial or 3D sound [57], which gives sound a 3D effect so that it can appear to come from any part of the scene. Modern engines such as Unity 3D (https://unity3d.com/) and the Unreal Game Development Kit (https://www.unrealengine.com/) provide functionalities to create these effects. The Audio Games community provides many titles of audio-based games designed for users with a visual disability [50]. On the other hand, there are haptic-based strategies which can be used to enhance immersion, for instance, by vibrating when collisions are detected. Westin et al. presented a literature study of the advances in game accessibility research describing different proposals for blind users. Game accessibility was also surveyed by Yuan et al. [72], and recent advances on video game accessibility for users with visual impairment can be found in [39]. In general, video games use a combination of these strategies together with screen readers that turn what is displayed on the monitor into a different, non-standard output such as speech or text on a Braille output device [18, 25, 45, 60]. For a summary of all these strategies see (http://game-accessibility.com/documentation/visually-impaired-gamers-where-to-go-what-to-play/) [3].

In this paper, we propose a device designed for visually impaired people to interact with the virtual scenes of video games. Our device provides three main advantages. First, it is easy to use since it reproduces the movements of a white cane in a virtual scenario. The white cane is the most widely used device thanks to being inexpensive, lightweight, and foldable [28]. However, it has some limitations, such as the difficulty of detecting overhanging obstacles at head level or obstacles further than approximately one meter from the user [38], and the solutions proposed to tackle these problems have high prices and poor user interfaces that limit their use [1, 27, 41, 55]. Second, it has a low cost since the basis of its design is Arduino [4]. Third, it is easy to integrate into any game since we provide an application programming interface that supports the connection with different game engines and different scenarios. Our device is recommended for games with a strong navigation and walking component.

The paper is structured as follows. In Section 2, we present some of the techniques and games that have been proposed for blind players, together with related work on the white cane, since this is the basis of our proposal. In Section 3, we give a detailed description of the proposed device, the application programming interface, and an example created using Unity3D. In Section 4, we present the experimental set-up that has been designed to test the acceptance of the device. The obtained results as well as the main limitations are described in Section 5. Finally, in Section 6, we present our conclusions and future work.

2 Related work

In this section, we first present some of the techniques and games that have been proposed for blind players, and second, we describe advances related to the white cane, since this is the basis of the proposed device.

2.1 Video games for blind players

Video game playing can be characterized as a three-step process: first, the player receives a stimulus (visual, auditory, haptic, or a combination of these); second, the player determines a response according to the possible game actions; and third, the game responds according to the selected action [72]. After these three steps are successfully performed, the internal state of the game may change and new stimuli may be provided. The process is repeated until the player wins, loses, or quits the game. Throughout this process, visual stimuli are the most common, and this makes playability difficult for blind players. To transform visual stimuli into other modalities, different computer interaction techniques that use sound, touch screens, haptic equipment, and specially designed software and hardware have been proposed.

Audio-based techniques are the most popular. These use speech provided by screen readers, audio cues, and sonification [52]. Audio techniques, alone or combined with haptics, have been used not only to create video games but also virtual environments that help visually impaired people develop spatial orientation and mobility skills [59]. Some of the proposed games are: Finger Dance, an audio-based rhythm-action game [43]; AuditoryPong [21], an interactive game that transfers the game Pong into a physical and acoustic space where users move the game paddle with body interaction or haptic devices and receive immediate acoustic feedback; and Sonic Badminton, an audio-augmented badminton game that uses a virtual shuttlecock with audio feedback [26].

Focusing on the enhancement of navigation and orientation skills, Sanchez et al. proposed Audiopolis [59], an educational game that simulates a city and its environmental sounds to teach blind children how to navigate urban scenes by using haptic devices as virtual canes. Maidenbaum et al. [36] designed an orientation experiment where a cane controlled by the space bar is simulated by audio feedback only. Balan et al. [6] reviewed the most notable audio-based games with respect to their usability as an educational tool for visually impaired people. Torres et al. [65] proposed a virtual reality simulator that builds an auditory representation of the virtual environment, rendering the virtual world entirely through hearing. It uses a 3D tracking system to locate the user's head orientation and position, providing natural user interaction since the user only has to walk through the environment while perceiving it through acoustic information. There are also navigation aids which use ultrasound to sense the surroundings and acquire spatial information (NavBelt [61], Sonic Eye [62], SonicGuide [24], iGlasses [51]). For a review of recent advances in virtual reality technology for blind and visually impaired people see [16].

Tactile-based techniques replace visual stimuli with tactile stimulation. The first such system, proposed fifty years ago [5], converted signals from a video camera into tactile stimulation applied to the back of the subject. Currently, thanks to technological advances, much smaller portable devices that allow hands-free interaction have been proposed. Some of them are head-mounted devices, wrist-bands, vests, belts, shoes, etc. [47, 67]. There are also small electro-tactile and vibro-tactile stimulators that can be placed on body surfaces such as the fingers, wrists, head, abdomen, or feet [23]. Yuan and Folmer [71] developed a glove that transforms visual information into haptic feedback using small pager motors attached to the tip of each finger, which allows a blind player to play Guitar Hero. VI-Tennis and VI-Bowling integrate a haptic interface based on a motion-sensing controller enhanced with vibrotactile and audio cues that allows blind players to detect key events of the Wii Sports games [45, 46]. To navigate in 2D environments, the simulation of different tactile surfaces with various materials was proposed in the TiM games [2] and the Digital Clock Carpet [58]. Milne et al. [44] proposed BraillePlay, which teaches how to write and read Braille letters using the vibration feedback and touch interface of smartphones. Nikolakis et al. [49] proposed a haptic virtual reality tool that allows visually impaired users to study and interact with various virtual objects in specially designed virtual environments. The system is based on the CyberGrasp [69] and PHANToM [40] haptic devices. PHANToM is the most commonly used force feedback device; it provides the sense of touch along with force feedback at the fingertip.

Mobile technology is gaining sophistication and widespread use. Current research is centered on making mobile phones and other handheld devices more efficient, cost-effective, functional, and accessible, which will also benefit visually impaired users [9, 19]. Rodriguez et al. [54] proposed a smartphone-based method that uses the phone camera to capture pathway scenes as images and transforms them into messages and warnings to avoid collisions. Lu et al. [35] used smartphone accelerometers to recognize daily living and sporting activities. For a review of all these technologies see [17, 22]. Moreover, research on human-computer interaction is increasingly exploring the possibility of supporting eyes-free interaction methods for smartphones and other handheld devices. Consider, for instance, the scheme to recognize human activities from sensor data proposed by Liu et al. [31], or the same authors' algorithm that efficiently mines temporal patterns from low-level actions to represent high-level human activities [34].

2.2 The white cane

Many different techniques and devices have been proposed to enhance the interaction between humans and computers [20]. However, it is important to choose the right form of interaction to reach a high usability of the resulting system. In our case, motivated by the extensive use of the white cane, we have chosen it as the basis of our interaction device.

Improving the performance of the white cane is a continuous focus of research. The aim is to detect obstacles at wider ranges and above knee level. Generally, the proposed methods are based on sensors and multi-sensory displays mounted on the classic white cane, which can sometimes be removed from the cane and used independently. These smart canes come in two forms [64]. In the first type, a detection device is mounted on the cane as a detachable unit. Some canes in this group are Teletact [13], Tom-Pouce [14], the Vistac Laser Long Cane (https://www.vistac.com), and the UltraCane (https://www.ultracane.com/soundforesigntechnologyltd). In the second type, the detection sensors are built into the cane, as in the LaserCane [38]. For a review of technological canes see [11].

We conclude this section with a final comment about the use of the described systems. A main limitation is that most of them have been tested only under experimental conditions and are not used in daily life by visually impaired people; only a few of them have been commercialized. Gori et al. [17] described possible reasons for this. A first reason could be that most of them are invasive devices. A second one could be that processing acoustic or tactile signals might overwhelm the cognitive abilities of the user. A third one could be that many of these devices require a long period of training before they can be used. A fourth one could be that the level of performance of these systems is insufficient to justify the invasiveness and effort needed to use them. Another aspect to consider is that many of the described devices do not take into account the important link between action and perception in the learning process. In addition, the cost of the device can also be a limiting factor. Taking all these limitations into account, our aim is to develop an easy-to-use device that does not require any special training and has a low cost. Although our device is focused on video games, we consider that these limitations also have to be taken into account.

3 The proposed device

Our main objective is the creation of a device to interact with video games in order to adapt them to blind players. Its main design requirements are usability, economic cost, and adaptability. In this section, we present all the details of the proposed device. First, we present the design of the device; second, the application programming interface required to connect the device with a game; third, an application example using Unity3D; and fourth, some final considerations.

3.1 Device description

To create the device, we first focused on usability and economic cost. Our device should be easy to use, without requiring any significant training. To satisfy this first requirement, we took the white cane as our inspiration, since it is the most popular device among the blind community. This decision fixed the design and the movements that have to be supported by the device. To start, we considered movements in one dimension, from left to right and vice versa. In addition, the device should be able to detect collisions in the virtual scenarios of the game, returning different sounds according to the collided materials. Finally, the device should be usable both when sitting and when standing, to satisfy different player preferences.

Our device should have a low cost. To satisfy this requirement, we decided to use Arduino as the basis of our development. Arduino has been specifically designed for people with little or no background in electronics or programming, its software is free to download online, and it is supported by an expanding open-source community [4]. Its hardware is inexpensive and can be combined with any number of sensors and instruments that are available from a variety of retailers. In addition, it requires minimal development effort or experience, allowing development without a large financial investment. There are previous works based on this technology. For instance, Sakhardande et al. [56] proposed a detachable unit that extends the functionality of the existing white cane to provide above-knee obstacle detection as well as below-knee detection. The proposed stick uses ultrasound sensors to detect obstructions before direct contact, providing haptic feedback to the user according to the position of the obstacle. More recently, Sudhanthiradevi et al. [63] proposed a theoretical model of a walking stick for visually impaired people that consists of three modules to detect heat, obstacles, and water, respectively.

Taking all these requirements into account, we created the device illustrated in Figs. 1 and 2. As can be seen in Fig. 1, the device is composed of a rectangular wooden base (25 cm × 25 cm) with vertical wooden pieces at its edges that form a front face (22 cm × 25 cm) and a back face (12 cm × 25 cm). These pieces create an inclined plane on which the cane can move. Two pieces of wood, the lateral limits of the device, restrict the left-to-right movement of the cane. In Fig. 2, the main components of the device are labelled. There are two motors on the back face, one at each end of the guide. These motors control the movement of the vertical pieces of wood, labelled as lateral limits, that determine the space in which the cane can move. There are also two end-stop push buttons that restrict the movement of these lateral limits. When the device is initialized, the motors place the limits at the extremes. There is a loudspeaker to simulate collision effects; we have integrated different material sounds to increase the realism of the collisions. These are activated each time the cane, which is covered with aluminum foil, touches the metallic pieces attached to the lateral limits placed at both sides of the device. The device has two EasyDriver motor controllers which control the movements required to reach a desired position. An Arduino UNO is used to control the device and to connect it to the PC via a USB port.

Fig. 1 Different views of the proposed device

Fig. 2 The main components of the proposed device

3.2 The device API

Our device should be adaptable to different video games. To satisfy this requirement, we created an application programming interface (API) to support the connection with different game engines, and also to support different scenarios. This last feature requires calibration parameters to control the movements of the device according to player movements and objects of the video game scenario.

The API is composed of two parts: the external API, which handles the communication between the device and the game engine, and the internal API, which is integrated in the game engine used to develop the video game. The main modules of the API are illustrated in Fig. 3 and described below.

Fig. 3 The main modules of the proposed API to connect the device with a video game. In this example, Unity 3D is the game engine

3.2.1 External API

The external API has been programmed in C. Its main modules are illustrated in Fig. 4 and described below.

Communication Library:

connects the device with the computer. It configures the connection to send and receive data through the serial port COM3 and waits for game engine API information related to the position of the lateral limits of the device and to collisions between the white cane and an object of the game scenario. To carry out this process it has the following methods: OnStart(), which configures the serial port COM3 to send and receive messages; OnPositionReceive(), which waits for the game engine API message with the position of the lateral limits; SendCollision(), which checks whether the cane collides with the lateral limits; OnSoundReceive(), which reproduces the sound assigned to the collided object; and OnStop(), which closes the device.

Movement Library:

controls the lateral limits of the device, which are used to simulate virtual objects. It starts by placing the limits at an initial position and then waits for information about the position of the objects in the scene in order to move the limits accordingly. It has two private methods, Move() and Calibrate(). The first has two input parameters, representing the positions of the lateral limits, and moves the limits to these positions. The second moves the limits to the initial position. The Movement Library also has three public methods: OnStart() and OnEnd(), used to turn the device on and off, respectively, and OnPositionUpdate(), called by OnPositionReceive() from the Communication Library, which calls the Move() method to move the limits.

Switch Library:

controls the device switches. It has two main methods: LimitSwitch(), which checks whether the left and right limits are in the initial position, and LeftRightSwitch(), which checks whether the white cane is touching one of the limits.

Audio Library:

reproduces the sound assigned to the collided object. It is composed of a class and an audio database that stores sounds in MP3 format and their relationships with objects. The class has an OnStart() method to initialize the loudspeaker, and a Play(SoundID) method, called by OnSoundReceive(), which takes as input the identifier of the sound from the audio database that has to be reproduced.

Fig. 4 Main methods of the external API
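The external API itself is written in C; purely to illustrate the call flow among the four libraries described above, the following C# sketch mirrors their methods. The message formats ("POS", "SND", "COL"), the 9600 baud rate, and the helper bodies are assumptions and not part of the actual implementation.

```csharp
// Illustrative sketch only: the authors' external API is implemented in C.
// Message formats ("POS", "SND", "COL") and the 9600 baud rate are assumptions.
using System;
using System.IO.Ports;

static class MovementLibrary
{
    public static void Calibrate() { /* drive both motors until the end-stop switches close */ }
    public static void Move(int left, int right) { /* move the lateral limits to the given positions */ }
    public static void OnPositionUpdate(int left, int right) => Move(left, right);
}

static class SwitchLibrary
{
    // True when the aluminium-covered cane touches one of the metallic limit plates.
    public static bool LeftRightSwitch() { /* read the contact input here */ return false; }
}

static class AudioLibrary
{
    // Play(SoundID): look the sound up in the audio database and send it to the loudspeaker.
    public static void Play(int soundId) { /* playback omitted in this sketch */ }
}

static class CommunicationLibrary
{
    static void Main()
    {
        // OnStart(): open the serial link and place the limits at their initial position.
        var port = new SerialPort("COM3", 9600) { ReadTimeout = 50 };
        port.Open();
        MovementLibrary.Calibrate();

        while (true)
        {
            try
            {
                // OnPositionReceive() / OnSoundReceive(): messages coming from the game engine API.
                string[] msg = port.ReadLine().Split(' ');
                if (msg[0] == "POS")
                    MovementLibrary.OnPositionUpdate(int.Parse(msg[1]), int.Parse(msg[2]));
                else if (msg[0] == "SND")
                    AudioLibrary.Play(int.Parse(msg[1]));
            }
            catch (TimeoutException) { /* no message in this cycle */ }

            // SendCollision(): report a physical contact between the cane and a lateral limit.
            if (SwitchLibrary.LeftRightSwitch())
                port.WriteLine("COL");
        }
    }
}
```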

3.2.2 Game engine API

The main modules of the internal API are illustrated in Fig. 5 and described below.

Connection Library:

connects the computer and the device. It configures the connection to send and receive information through the serial port COM3. It sends the positions of the lateral limits to the external API and waits for collisions; in the case of a collision, it sends the corresponding sound identifier. The methods of this module are: OnStart(), which configures the communication with the serial port COM3; SendPosition(), which sends the positions of the lateral limits; OnCollisionReceive(), which receives the data when there has been a collision with one of the lateral limits; SendSoundID(), which sends the identifier of the sound that has to be reproduced; and OnStop(), which closes the connection on exit.
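A minimal C# sketch of this module is shown below, assuming a simple text-based message format and a 9600 baud link, neither of which is specified in the paper; in Unity, System.IO.Ports requires the .NET 4.x API compatibility level. The method names simply mirror those listed above and must be invoked by the game.

```csharp
// Sketch of the Connection Library; message format and baud rate are assumptions.
using System.IO.Ports;
using UnityEngine;

public class ConnectionLibrary : MonoBehaviour
{
    SerialPort port;

    public void OnStart()                    // configure the communication with serial port COM3
    {
        port = new SerialPort("COM3", 9600) { ReadTimeout = 100 };
        port.Open();
    }

    public void SendPosition(float left, float right)   // positions of the lateral limits
    {
        port.WriteLine($"POS {left:F0} {right:F0}");
    }

    public bool OnCollisionReceive()         // true when the device reports a cane-limit contact
    {
        try { return port.ReadLine().StartsWith("COL"); }
        catch (System.TimeoutException) { return false; }
    }

    public void SendSoundID(int soundId)     // sound to be played by the device loudspeaker
    {
        port.WriteLine($"SND {soundId}");
    }

    public void OnStop() { port.Close(); }   // close the connection on exit
}
```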

Character controller:

configures player parameters such as the distance and velocity of player movements. It also initializes the white cane, fixing the walking distance. It has the following methods: OnStart(), which configures the movement velocity and the white cane parameters; CreateWhiteCane(), which creates and initializes the white cane object; and UpdatePosition(), which moves the player from one position to the next.
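A minimal Unity-flavoured sketch of this module follows; the walking speed value and the cane prefab field are assumptions, and the method names mirror those listed above.

```csharp
// Sketch of the Character Controller module; speed value and prefab field are assumptions.
using UnityEngine;

public class PlayerCharacter : MonoBehaviour
{
    public float walkSpeed = 1.0f;           // units/s, adapted to the lateral-limit speed
    public GameObject whiteCanePrefab;       // hypothetical prefab for the virtual cane
    GameObject whiteCane;

    void Start()                             // OnStart(): configure velocity and cane parameters
    {
        whiteCane = CreateWhiteCane();
    }

    GameObject CreateWhiteCane()
    {
        // Attach the cane to the player so it follows the character in the object hierarchy.
        return Instantiate(whiteCanePrefab, transform);
    }

    public void UpdatePosition(Vector3 direction)
    {
        // Move the player from the current position to the next one.
        transform.Translate(direction.normalized * walkSpeed * Time.deltaTime, Space.World);
    }
}
```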

White Cane controller:

controls the movements of the white cane. It fixes the amplitude of the white cane movements and, during execution, checks whether there are collisions between the white cane and scene objects. It has the following methods: OnStart(), which configures the width of the white cane movements and other parameters; LeftRightRayCast(), which casts a ray perpendicular to the player position in order to detect left and right collisions. The ray can only collide with objects of the collision layer. The method returns the identifier and the distance of the first collided object on each side with respect to the player position, represented as (objL, objR, DistL, DistR); in the case of no collision, it returns −1 for all values. The CalculateLimitPosition() method receives the distance values and the width of the white cane, and returns the positions of the limits, represented as L-left and L-right. These values are defined according to the device-game scale.
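The following sketch illustrates how LeftRightRayCast() and CalculateLimitPosition() could be realised with Unity's Physics.Raycast; the layer name, the device-game scale value, and the use of C# tuples (which require a recent Unity scripting runtime) are assumptions.

```csharp
// Sketch of LeftRightRayCast() and CalculateLimitPosition(); layer name and scale are assumptions.
using UnityEngine;

public class WhiteCaneController : MonoBehaviour
{
    public float caneWidth = 1.0f;           // amplitude of the cane movement (game units)
    public float deviceGameScale = 100f;     // game units to device positions (assumed value)
    int collisionLayerMask;

    void Start()                             // OnStart(): configure width and the collision layer
    {
        collisionLayerMask = LayerMask.GetMask("CaneCollidable");   // assumed layer name
    }

    // Casts rays perpendicular to the player; returns (objL, objR, distL, distR),
    // with -1 values when nothing is hit on that side.
    public (int objL, int objR, float distL, float distR) LeftRightRayCast()
    {
        int objL = -1, objR = -1;
        float distL = -1f, distR = -1f;

        if (Physics.Raycast(transform.position, -transform.right, out RaycastHit hitL,
                            caneWidth, collisionLayerMask))
        { objL = hitL.collider.GetInstanceID(); distL = hitL.distance; }

        if (Physics.Raycast(transform.position, transform.right, out RaycastHit hitR,
                            caneWidth, collisionLayerMask))
        { objR = hitR.collider.GetInstanceID(); distR = hitR.distance; }

        return (objL, objR, distL, distR);
    }

    // Converts the hit distances into the positions of the device's lateral limits.
    public (float lLeft, float lRight) CalculateLimitPosition(float distL, float distR)
    {
        float lLeft  = distL < 0 ? 0 : (caneWidth - distL) * deviceGameScale;
        float lRight = distR < 0 ? 0 : (caneWidth - distR) * deviceGameScale;
        return (lLeft, lRight);
    }
}
```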

Collision controller module:

defines the layer of scene objects that can be collided with by the white cane. It is used by the White Cane controller. It has a GetLayer() method that assigns the layer to the objects.

Object Collision:

is a class assigned to each object of the scene that can be collided with by the white cane. Each instance of this class has a sound identifier, an OnStart() method that assigns the object to the layer, and a GetSound() method that returns the sound of the object.
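A minimal sketch of these two modules, assuming a layer called "CaneCollidable"; the paper does not specify the layer name or the sound identifiers.

```csharp
// Minimal sketch of the Collision Controller and Object Collision classes.
// The "CaneCollidable" layer name and the soundID values are assumptions.
using UnityEngine;

public static class CollisionController
{
    // GetLayer(): assigns the collision layer to an object so the cane rays can hit it.
    public static void GetLayer(GameObject obj)
    {
        obj.layer = LayerMask.NameToLayer("CaneCollidable");
    }
}

public class ObjectCollision : MonoBehaviour
{
    public int soundID;                      // identifier of the sound stored in the audio database

    void Start()                             // OnStart(): place this object in the collision layer
    {
        CollisionController.GetLayer(gameObject);
    }

    public int GetSound() => soundID;        // returns the sound assigned to this object
}
```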

Behaviour module:

determines the action that has to be performed when an object is collided with. It has the following methods: OnImpactStart(), called when the object is first hit; OnImpactStay(), called while the object is being hit; and OnImpactEnd(), called when the collision has stopped. The module allows the creation of new behaviours by class inheritance and new implementations of these methods. There is also another method, UpdateObjectPosition(), which moves the object if necessary.
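A minimal sketch of the Behaviour module as a base class extended by inheritance; the concrete drag distance and direction are assumptions.

```csharp
// Sketch of the Behaviour module: a base class whose methods are overridden by
// concrete behaviours through inheritance. The drag step is an assumed value.
using UnityEngine;

public abstract class ImpactBehaviour : MonoBehaviour
{
    public virtual void OnImpactStart() { }              // called when the object is first hit
    public virtual void OnImpactStay()  { }              // called while the object is being hit
    public virtual void OnImpactEnd()   { }              // called when the collision stops

    protected void UpdateObjectPosition(Vector3 offset)  // moves the object if necessary
    {
        transform.position += offset;
    }
}

// Example behaviour: drag the object away from the impact while the collision lasts.
public class DragOnImpact : ImpactBehaviour
{
    public float dragStep = 0.1f;                        // assumed displacement per impact frame

    public override void OnImpactStay()
    {
        UpdateObjectPosition(transform.right * dragStep);
    }
}
```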

Fig. 5 Modules and methods of the internal API

3.2.3 Unity3D API

As an example, we present the game engine API created for the Unity3D game engine. It has been programmed in C# and uses some of the Unity3D functionalities. In particular, for the Connection Library we have used C# methods for communicating via the serial port COM3. For the Character Controller module, we have used the Unity3D physics library to move the player character, and the Unity3D object hierarchy to place the white cane in the correct position with respect to the character it is attached to. For the White Cane controller, we have used the RayCast methods from the Unity3D physics library, which cast rays and detect objects in a layer. For the Collision Controller and Object Collision modules, we have used the Unity3D layer system; in this way, we can distinguish collidable objects from non-collidable ones. For the Behaviour module, we have implemented two different behaviours: the first drags the object in the direction opposite to the collision, and the second counts the number of knocks on the object and, depending on this number, moves the object or not.
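As an illustration of the second behaviour, the sketch below counts knocks and moves the object once the expected count has been reached and a waiting period has elapsed, following the rule of the first mini-game described in Section 4.1. It extends the ImpactBehaviour base class sketched above; the field names and displacement are assumptions.

```csharp
// Sketch of a knock-counting behaviour; field names and displacement are assumptions.
using UnityEngine;

public class KnockToMove : ImpactBehaviour   // base class sketched in Section 3.2.2
{
    public int requiredKnocks = 3;           // set per scenario (obstacles counted on that side)
    public float confirmDelay = 5f;          // seconds to wait after the last knock
    int knocks;
    float lastKnockTime;

    public override void OnImpactStart()
    {
        knocks++;
        lastKnockTime = Time.time;
    }

    void Update()
    {
        // If the count matches and the player has waited, move the obstacle out of the way.
        if (knocks == requiredKnocks && Time.time - lastKnockTime > confirmDelay)
            UpdateObjectPosition(Vector3.forward);       // assumed displacement
    }
}
```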

In addition, for all possible game scenarios we have identified all the objects that can be collided with by the player. For each object, we have assigned the sound that is played when the object is hit, the layer of its collider, and the supported behaviours. Moreover, we have defined the correspondence scale between the movement of the virtual white cane and the scenario. Part of this information is illustrated in Fig. 6, which shows a screen of the video game Direction to Saint Narcis Church, presented in the next section.

Fig. 6 An example of the scene information required to properly connect the proposed device with a video game programmed with Unity 3D

3.2.4 Considerations

We conclude this section with some final considerations. With respect to the design of game objects and player movements, it has to be taken into account that the lateral limits simulate the object positions and the device motors place them in the correct location. The movement speed of these limits depends on the motor capabilities, the thread of the guides used to move them, and the distance between the previous position and the current one. In the worst case, when a lateral limit is at one extreme of the device and has to move to the other extreme, 2.0 seconds are required. To ensure that all objects can be detected, we have to adapt the velocity of player movements to the speed of the lateral limits. Once this is determined, two strategies can be considered to control player movements. In the first, the player moves automatically with a velocity adapted to the lateral limit speed. In the second, the player moves using the keyboard keys. With respect to the realism of the provided feedback, unlike other devices, in the case of a collision there is a real impact with the lateral limit. This feature gives more realism to the collision than devices that return vibrotactile feedback [40, 66]. Moreover, the economic and computational costs of our device are lower than those of these other devices.
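The following back-of-the-envelope sketch shows how the automatic walking speed of the first strategy could be bounded by the worst-case travel time of a lateral limit; the obstacle spacing value is an assumption used only for illustration.

```csharp
// Sketch of how the automatic walking speed could be bounded by the worst-case
// lateral-limit travel time (2.0 s in the device). Obstacle spacing is an assumed value.
using System;

class VelocityAdaptation
{
    const double WorstCaseLimitTravelTime = 2.0;   // seconds, extreme-to-extreme movement
    const double MinObstacleSpacing = 3.0;         // game units between consecutive obstacles (assumed)

    static void Main()
    {
        // The limits must be in place before the player reaches the next obstacle,
        // so the automatic walking speed is capped accordingly.
        double maxPlayerSpeed = MinObstacleSpacing / WorstCaseLimitTravelTime;
        Console.WriteLine($"Maximum automatic walking speed: {maxPlayerSpeed:F2} units/s");
    }
}
```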

4 Experimental set-up

In this section, we describe the experiment that has been designed to test the acceptance of the proposed device.

4.1 Testing video games

Our device is particularly well suited for video games where navigation and walking are the principal actions. For this reason, we created three mini-games that have these player actions as the main ones. The three mini-games have been integrated into the Legends of Girona game [53]. A brief description of each one is given below.

The first game, called Entering the defensive wall of Girona, reproduces the entrance into the defensive wall of the town of Girona (see Fig. 7a). It reproduces a corridor with obstacles on both sides, left and right. The player has to detect the obstacles on the left and on the right and has to remember the number of detected obstacles on each side. When the player reaches the end of the corridor, he/she finds two final obstacles, one on each side, that have to be moved. To move one of them, he/she has to knock as many times as the number of obstacles found on the corresponding side and wait five seconds. If the number is correct, he/she gains access to the other side of the defensive wall. The challenge is to do it as fast as possible.

Fig. 7 From left to right, two views of the testing scenarios corresponding to the three designed mini-games

The second game, called Direction to Saint Narcis Church, reproduces the way to the Saint Narcis church (see Fig. 7b). The player has to avoid colliding with different objects, such as chairs, tables, or benches. The game is over when the player arrives at the church.

The third game, called Saint Narcis and the Flies, recreates part of the legend of Saint Narcis and the flies (see Fig. 7c). This took place in the 18th century, when the French army entered the town of Girona and occupied the area outside the city walls, including the Church of Sant Feliu, where the tomb of Sant Narcis lay. To prevent the French soldiers from attacking, the player has to reach the tomb by going through a maze and set free the flies that are inside the tomb. The player knows that there are the same number of turns to the right as to the left. At the last corner, the player can go right or left; the option that equalizes the number of corners is the correct one. If the player selects it, the game is over. At each play, the maze configuration changes. To perform movements the player uses the keyboard arrow keys.

4.2 Participants

To evaluate the acceptance of the proposed device, we considered a group of 12 blind participants: 6 females and 6 males. The average age was 35 years with a standard deviation of 10.86 for the males, and 29.83 years with a standard deviation of 8.05 for the females. Participants were recruited through personal contacts. The study was conducted in a laboratory at the University of Girona.

4.3 Designed study

We designed a study composed of three different parts. Each participant carried out the study individually. First, we described the experiment to the participant and asked him/her to answer a first questionnaire (see Table 1, questions Q1 to Q4). Second, we introduced the device and the three mini-games to the participant. The participant played the games alone under the authors' observation. At the end of the playing session, participants answered questions related to the device (see Table 1, questions Q5 to Q7). For each mini-game, they also answered questions related to the played game (see Table 1, questions Q8 and Q9). Answers were given on a five-level Likert scale where 1 means Strongly disagree and 5 Strongly agree. We also asked participants about the experience and their feelings: how they felt at the beginning of the session and how they felt at the end.

Table 1 Questions of the interviews. The second and third questionnaires were answered using a five-level Likert scale where 1 means Strongly disagree and 5 Strongly agree

4.4 Statistical analysis

Fisher's exact test and the Mann-Whitney U test were used to analyze the primary outcome measures with respect to participant gender (male vs. female) and whether or not participants are blind from birth. Fisher's exact test is used to test the independence of two categorical variables; the null hypothesis is that the relative proportions of one variable are independent of the second variable. We use this test because it is well known to be more accurate than the chi-squared test or the G-test of independence when the expected counts are small. The Mann-Whitney U test is used to test whether two independent groups are homogeneous and have the same distribution; the null hypothesis is that the two groups come from the same population. We use this test instead of the t-test because it is particularly suitable when the dependent variable is ordinal, as is the case for questions Q5 to Q9 (see Table 1).
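For reference, the standard definitions of the two test statistics are given below; these are textbook formulas and are not reproduced from the paper.

```latex
% Mann-Whitney U for two groups of sizes n_1 and n_2, where R_1 is the rank sum of group 1:
\[
  U_1 = R_1 - \frac{n_1(n_1+1)}{2}, \qquad U = \min(U_1,\; U_2).
\]
% Fisher's exact test for a 2x2 table with cells a, b, c, d and total n = a+b+c+d:
\[
  p = \frac{(a+b)!\,(c+d)!\,(a+c)!\,(b+d)!}{a!\,b!\,c!\,d!\,n!}.
\]
```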

5 Results

In this section, we present the obtained results. We also present a time performance evaluation and the limitations of the experiment.

In Table 2 we present the responses to the first questionnaire (see Table 1, questions Q1 to Q4). Of the 12 participants in our testing group, 66.7% are blind from birth, all of them use a white cane, and 83.3% would like to play video games, but only 16.7% of them had played before.

Table 2 Responses to first questionnaire

From the answers to the second questionnaire (see Table 1, questions Q5 to Q7), we evaluated device usability, focusing on its similarity to the white cane, its robustness, and the player's impression when colliding with virtual objects. The answers were scored from 1 to 5. The obtained results are shown in Table 3. Note that the median of the scores is 4 for all questions.

Table 3 Responses to second questionnaire

We also asked, on the same scale, about the suitability of the device for each game and about player enjoyment of each game. The obtained results are presented in Tables 4 and 5, respectively. Regarding suitability, the device seems to fit the games, since all quartiles are 4 or higher except for game 2, where the first quartile is 3 (see Table 4). With respect to enjoyment, the results differ considerably: the medians are 4, 3, and 5, respectively. The worst results were obtained in mini-game 2. We think this is because there is no challenge in this game, as the player only has to walk while avoiding obstacles.

Table 4 Responses to device suitability in the game
Table 5 Responses related to game enjoyment

We also looked for significant differences according to gender and to being blind from birth. Our sample is well balanced with respect to gender, with 6 people in each group (see Table 6). We did not find significant differences in age (p-value 0.4848) or in questions Q1, Q3, and Q4 (p-values = 0.55, 1, and 1, respectively). Table 7 shows the differences in the scores according to gender. The lowest median is 3, but most are 4 or higher. Moreover, after a Mann-Whitney U test, we did not find significant differences between the scores of males and females.

Table 6 Description of the sample according to gender of the player
Table 7 Differences in the scores according to gender (W represents women and M men)

If we separate our sample according to whether participants are blind from birth, the groups are not as well balanced: 8 born blind and 4 not (see Table 8). We did not find significant differences in age (p-value 0.2141) or in questions Q1, Q3, and Q4 (p-values = 0.55, 0.09, and 1, respectively). Table 9 shows the differences in the scores according to the born-blind condition. Again, the lowest median is 3, but most are 4 or higher. Moreover, after a Mann-Whitney U test, we did not find significant differences between the scores of participants blind from birth and the rest.

Table 8 Description of the sample according to being blind from birth
Table 9 Differences in the scores according to the born-blind condition

To use the device, the user can either sit or stand. In the latter case, it is necessary to place the device at the correct height. In our experiment, all players preferred to sit.

Although the number of participants is small, this evaluation shows that the device is well accepted and that it fits into the games. We have also observed that a game requires a final challenge to be more attractive to players. Moreover, we did not find significant differences according to gender or to being blind from birth. With respect to players' feelings, we asked them how they felt at the beginning of the test and at the end. Their feelings ranged from anxiety, fear, and scepticism at the beginning to happiness and euphoria at the end. We consider that this issue needs a deeper study, reproducing experiments such as the ones presented in [73].

5.1 Time performance

The relationship between application responsiveness and user attention is a well-studied area of human-computer interaction, as Jakob Nielsen describes in Usability Engineering [48], and the basic advice regarding response times has remained about the same for thirty years [8, 42]. The time limit for users to feel that they are directly manipulating objects in a user interface is 0.1 seconds. The time limit for users to feel that they are freely navigating the command space without having to unduly wait for the computer is 1 second. Finally, the limit for keeping the user's attention on the task is 10 seconds. Unfortunately, the relationship between game responsiveness and player attention cannot be measured in the same way, since player requirements can be very different depending on the game genre. For instance, role-playing games do not require fast interaction while shooter games do. In our context, we focus on the experience of true immersion, and we consider that players must be able to manipulate the game world almost as intuitively as they manipulate the real world. Therefore, the response time of our device is a key factor in keeping the player's attention.

To evaluate the response time, we considered the first mini-game, Entering the defensive wall of Girona, since response time is decisive there: the player has to detect all obstacles in minimal time. The first time to be considered is the time to detect collisions. This is 100 microseconds, which is the delay set in the communication protocol between the computer and the device; note that this is a maximum time. The second time to be considered is the time to detect other actions, such as the number of knocks on an object. This depends on the debounce control, which in our case has been set to 0.02 seconds. We also have to take into account the time required to place the lateral limits of the device in the correct position to simulate the collided obstacle. As mentioned above, this time depends on the motor speed and on the distance between the previous position and the current one. We have considered the worst situation, when a lateral limit is at the extreme; in this case 2.7 seconds are required. To compute this time we used the TimeCatch module, which is part of both the external API and the internal one. In the external API, this module has two methods: StartMovement(), which saves the time when the movement starts, and OnFinishMovement(), which is triggered when the movement finishes. In the internal API, the module has a SendFinishMovement() method that calls OnFinishMovement() when the movement finishes. Players have a feeling of immediate feedback. The obtained results satisfy Nielsen's limits, and we consider these response times good enough. Moreover, compared with the Shneiderman and Seow classifications [12], the obtained response times can also be considered good enough.
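A possible sketch of the TimeCatch measurement on the game engine side is shown below; it merges the two sides into a single class for brevity, and the use of a Stopwatch is an assumption.

```csharp
// Sketch of the TimeCatch measurement: the game engine marks the start of a limit
// movement and logs the elapsed time when the device reports that the movement is done.
using System.Diagnostics;
using UnityEngine;

public class TimeCatch : MonoBehaviour
{
    readonly Stopwatch watch = new Stopwatch();

    public void StartMovement()              // called when a new limit position is sent
    {
        watch.Restart();
    }

    public void SendFinishMovement()         // called when the device reports the movement is done
    {
        OnFinishMovement();
    }

    void OnFinishMovement()
    {
        watch.Stop();
        UnityEngine.Debug.Log($"Limit placement took {watch.Elapsed.TotalSeconds:F2} s");
    }
}
```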

From these experiments, we have seen that the response time can also be improved by modifying the shape of the elements in the game scenario. In particular, if rectangular shapes are transformed into circular shapes, object detection becomes faster without losing playability.

5.2 Limitations

Although we are satisfied with the results of our experiments, we are conscious that the low number of participants is a main limitation of our work and that a more exhaustive experiment has to be carried out. For this reason, as immediate future work we will carry out a new experiment with more participants; we are currently recruiting more blind players. In addition, we want to prepare our laboratory to perform a video-controlled experiment, registering all the details of the participants during the playing sessions. We also want to evaluate other factors that may influence comfort when using the device, such as human motion [10, 29, 30, 32]. Moreover, to predict users' interest in the device, we want to apply machine learning techniques such as those in [33]. We consider these steps necessary before commercializing the device.

6 Conclusions and future work

The visual channel is the main component of the majority of video games. This fact makes designing games for visually impaired people challenging. In this paper, we have focused not on game design but on an interaction device specifically designed for visually impaired people. Inspired by smart canes, we have presented an Arduino-based device that supports left-right movements as well as drag-and-drop operations. Moreover, combined with a sound library, the proposed device provides a realistic experience for any player. The device is suitable for exploration games that require the identification of objects placed on the floor. It is easy to use and can be adapted to any game thanks to the provided application programming interface. Our future work will be centered on the development of a new version of the device that supports more movements and that can be used not only with PCs. In addition, we want to define games capable of combining visual and non-visual players. Finally, we are working on the design of a new experiment with more participants, considering more advanced techniques to evaluate player performance and preferences.