Introduction

In the last decade, Information Technology has provided solutions for people's tasks in social, educational, cultural, and economic spheres (Skinner 2013). Mobile technologies have changed the world in many respects (Traxler 2009). The education field is no exception and has been extensively affected by information technology. Clearly, there is rising interest in using information technology and other new educational approaches that can encourage learning in formal or informal settings. Educational technologies, consisting of different sets of teaching tools, are being used to improve learning activities in the educational field. While these changes have improved society in many respects, they present an obstacle for visually challenged people, who may have significant difficulty processing the visual cues presented by modern graphical user interfaces (Chiang et al. 2005). Moreover, people with visual challenges face special barriers in using the Internet, aside from those related to material access and computer-related training (Puffelen 2009). The number of people with a visual impairment using computers is increasing (Douglas and Long 2003); however, they face serious problems while using ICT tools because of a lack of basic ICT skills, training, and training materials (Gerber 2003; Sales et al. 2006; UNESCO 2009). Indeed, according to the World Health Organisation (WHO 2016), 285 million people are estimated to be visually impaired. In recent years, there has been a growing international urgency to include physically challenged persons on university campuses (Lourens and Swartz 2016). In essence, real inclusion means feeling like a welcomed member of the tertiary environment; a member who truly belongs and whose contributions to the diversity of the university are valued and celebrated (Bantjes et al. 2015; Beauchamp-Pryor 2013; Swart and Greyling 2011). It is not merely about increasing the number of visually impaired learners admitted to tertiary institutions, but also about the quality of the social and learning experiences of this category of learners once they gain access to higher education (Fuller et al. 2004; Jacklin et al. 2006).

The purpose of this study was to find an alternative, using technological advances in computing, that would enable visually impaired learners to learn in an improved way. As mentioned by Gerber (2003), Sales et al. (2006), and UNESCO (2009), this category of learners encounters numerous problems when using computers and the Internet. The solution to be devised therefore had to be user-friendly and easy to use, and could not require a high level of proficiency in computing or Internet use. One aspect toward which this research was geared was the use of mobile devices in the learning process of visually impaired learners.

Given the nature of mobile devices and recent technological advances, there is much to be gained by harnessing new technologies for education, especially for the visually impaired. Advances in mobile technologies can bridge the gap between fully sighted and visually impaired students. In particular, incorporating technologies such as motion sensors, now standard in many modern smartphones, offers the prospect of developing new inclusive mobile learning applications for learners with sight difficulties. Compared to computers, mobile devices are easily portable and can encourage anytime, anywhere learning, especially for learners with visual impairments.

SensorApp is one such application that incorporates these technologies to help visually impaired learners and ensure that they are not left out of the system. The objective of this research has been to build an application that uses motion sensors to enhance learning and eventually to test it with visually impaired learners to see how their learning experience can be improved. The application that has been developed contains a navigation menu that makes use of the accelerometer sensor available in many mobile phones, particularly on the Android platform. A movement recognition algorithm has been created to collect all the data necessary for the effective functioning of the system. The integration of text-to-speech and speech-to-text allows visually impaired students to hear each option as they navigate through the system. SensorApp is also interactive: visually impaired learners can choose the options they want from the menu using voice commands. The strength of SensorApp lies in the fact that the interfaces developed are simple, so no advanced ICT skills are required of the learners. This is complemented by a powerful voice feature that helps visually impaired learners search for particular information and have the retrieved information presented in an auditory way.

Literature review

The literature discussing the literacy of the visually impaired is extensive, and many authors have been raising the alarm about the decline in the use of Braille for at least 30 years (Wiazowski 2013).

Certain sources mention that only about 10% of blind people, the younger ones in particular, enjoy the power of Braille (Canadian Federation of the Blind [CFB] 2013; Engelhart 2010). Technology is frequently blamed for this situation (Hatlen and Spungin 2008), especially given the growing popularity and quality of synthetic voices and mainstream devices that incorporate synthesized output (Danielsen 2013).

Today, with means of communication and a variety of information sources available everywhere, at home, at school, at work, and in our environment at large, we are witnessing the rise of a connected mobile society. This has even been described as the start of the next social revolution (Rheingold 2003). The spread of various mobile learning systems shows how important it is to develop wireless and mobile learning applications (Liu et al. 2002a, b). In recent years, distance learning has grown in two significant directions: 'the individual flexible teaching model' and 'the extended classroom model.' The individual flexible model places no restriction on the starting time of the class and allows students to study alone and to interact with teachers and other students. The extended classroom model divides the students into groups and requires them to meet at local study centers; it also allows the students to interact through video conferencing (Rekkedal and Dye 2007). The mobile industry is among the fastest growing, and the number of mobile phone owners surpasses the number of computers in the world. There are about 2.7 billion mobile phones around the world, and in some regions the mobile phone is the only means of long-distance communication.

M-Learning (Mobile Learning) can be described as learning that occurs across locations, or learning that takes advantage of opportunities offered by technologies such as laptops, smartphones, computers, cameras, media players, and games consoles (e.g., Nintendo DS, Sony PSP). 'Mobile,' commonly understood as portable and movable, can also imply something personal; mobile technologies can thus be categorized using the two orthogonal dimensions of personal versus shared and portable versus static (Naismith et al. 2004).

Classification of mobile technologies

The range of mobile technologies can be categorized using the two orthogonal dimensions of personal versus shared and portable versus static, as shown in Fig. 1. Naismith et al. (2004) emphasize that mobile technologies comprise those in quadrants one to three, together with those in quadrant four that are not at the extreme end of the static dimension.

Fig. 1 Classification of mobile technologies (Naismith et al. 2004)

The many forms of mobility involved in mobile learning introduce a high degree of dynamism, which brings new challenges. The challenge is to make the most of a constantly changing environment with a new category of learning applications that are flexible and can adjust to dynamic learning conditions. The available mobile devices, the cost of network access, capacity, usage patterns, and so on may all change over place and time; in brief, the learning setting keeps changing.

Mobile learning

M-Learning research is still in its infancy: the amount of available primary research remains small relative to other fields of study such as e-learning. Most literature reviews and conceptual papers seek to establish a foundation for m-learning, develop theory, or focus on design. Specifically, prior reviews have focused on the type of m-learning projects being done (Fetaji 2008), the nature of research questions (Ali and Irvine 2009), and the type of activities that can be supported with mobile technologies (Naismith et al. 2004).

The breadth of research on mobile learning has made it challenging to produce a single definition or to determine generally agreed added benefits (Frohberg et al. 2009). While it is typical for an emerging field to have varied definitions, the lack of conceptual frameworks and robust theories has frequently been raised as a concern in the literature (Peng et al. 2009). The greatest added value of mobile learning vis-à-vis PC learning lies in the aspects that extend classroom interaction to other locations via communication networks. Recent advances such as embedded sensors, cameras, motion detection, location awareness, social networks, web searching, and augmented reality present the potential to foster learning and engagement across multiple physical, conceptual, and social spaces, both indoors and out (Newhouse et al. 2006).

However, some of the major limitations of mobile learning (Shudong and Higgins 2006) include:

  • Small screens and low resolution

  • Connectivity and Internet Access problems

  • Lack of standardization and compatibility

  • Battery, memory, and storage capacity

Placing learning in a specific context

One of the main affordances of a smartphone is that users can take it with them wherever they go. The importance of context in learning has long been recognized (Seely Brown et al. 1989). For example, students can apply mathematical or scientific inquiry to real-world problem-solving situations using M-Learning tools such as MobiMaths (Tangney et al. 2010). Mobile technologies and smartphones can offer solutions to some of the issues in mathematics education. MobiMaths aims to provide an integrated toolkit covering all aspects from hardware through to lesson plans. From the hardware perspective, students are provided with smartphones that can communicate with each other and with the teacher's console machine.

Augmenting reality with virtual information

With a smartphone, we can attach something virtual to something real. Augmented reality tools such as Google Goggles, Layar, and Wikitude show the potential of using a smartphone to provide data about locations and artifacts.

Having an adaptive learning toolkit in the palm of your hand

Various combinations of functions and sensors allow applications to turn the smartphone into all kinds of tools. A smartphone can act as a distance-measuring device, a compass, a speedometer, a spirit level, and a whole range of other things. In particular, the role of the device as a tool is well suited to supporting inquiry-based learning (Powell et al. 2011).

Context-aware learning

Context-aware mobile learning (CAML) represents a comparatively new domain of research. Context awareness is the collection of data from the environment to characterize the current situation around the user and the device. Since they are carried into diverse contexts, mobile devices are particularly suitable for context-aware applications and can make use of those contexts to enhance the learning activity. Context-aware mobile devices can support learners by offering them the chance to keep their attention on the world and by proposing help when it is needed (Naismith et al. 2004). CAML places great importance on learners having portable devices, such as PDAs, enhanced with hardware sensors (wireless LAN, camera, GPS receivers) and software sensors (network congestion manager, student behavior analyzer, web log analyzer, and so on). Classical research approaches to context awareness include portfolios (Chen et al. 2003) and student modeling (Liu et al. 2002a, b). Context-aware ubiquitous/mobile learning can also be described as an approach that uses mobile, wireless communication, and sensing technologies to support real-world learning activities (Hwang et al. 2008). Recently, a study carried out by Chen et al. (2014) found that a progressive prompt-based context-aware learning approach yielded better results than a conventional context-aware learning system with single-stage prompts. Indeed, this approach provided more challenging tasks that encouraged the students to put more effort into examining contextual information. In a context-aware ubiquitous learning environment, learning systems are also aware of students' locations and learning status in the real world via sensing technologies, which enables personalized guidance or support (Yin et al. 2016). In such a learning environment, which guides students to observe and learn from real-world targets, various physical-world constraints need to be taken into account when planning learning paths for individuals. Determining personalized pathways can help maximize students' learning efficacy. Hsu et al. (2016) recognize that it is essential to guide students along an efficient learning path to maximize their learning performance according to the current situation. An active learning support system (ALESS) for context-aware ubiquitous learning environments was eventually designed and developed, and results showed that the learning process was more efficient using ALESS.

As mobile learning continues to emerge, CAML will become more important. At present, however, there is little support for building CAML systems in a durable and reliable manner. As a result, developers must deal with a wide range of system issues: stating context needs, discovering available sensors that can address these needs, obtaining data from these sensors, applying fusion algorithms to improve the reliability of sensor data, utilizing recognition algorithms to transform low-level sensor data into higher-level context data, and routing the context data to the learning application.
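These stages can be pictured as a small pipeline. The following Java interfaces are a minimal, hypothetical sketch of such a pipeline, one interface per system issue listed above; the names ContextSource, FusionAlgorithm, Recognizer, and ContextRouter are ours and do not come from an existing framework:

```java
import java.util.List;

interface ContextSource {                 // wraps one hardware or software sensor
    String contextNeed();                 // the need it addresses, e.g., "location"
    float[] sample();                     // obtain raw data from the sensor
}

interface FusionAlgorithm {               // improves the reliability of raw sensor data
    float[] fuse(List<float[]> samples);  // e.g., combine several redundant sources
}

interface Recognizer<C> {                 // turns low-level data into high-level context
    C recognize(float[] fused);           // e.g., "walking", "inside lecture hall"
}

interface ContextRouter<C> {              // delivers context data to the learning app
    void route(C context);
}
```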

Motion sensors

The accelerometer sensor

The accelerometer sensor measures the forces applied to the device and determines the acceleration applied to the smartphone. It uses the standard sensor coordinate system; the following conditions apply when the device is lying flat on a surface in its natural orientation (Mobile Science 2016) (Fig. 2):

Fig. 2 Accelerometer sensor (MathWorks 2016)

  1. When the smartphone is pushed on its left side, the value of the X-acceleration is positive.
  2. When the smartphone is pushed on its bottom, the value of the Y-acceleration is positive.
  3. When the smartphone is pushed toward the sky with an acceleration of A m/s², the value of the Z-acceleration equals A + 9.81, which corresponds to the acceleration of the device (+A m/s²) minus the force of gravity (−9.81 m/s²).
  4. A stationary device will have a Z-acceleration value of +9.81, which corresponds to the device's acceleration (0 m/s²) minus the force of gravity (−9.81 m/s²).

The accelerometer is one important sensor found in mobile devices that is generally absent in desktop environments. It allows the orientation of sensor-equipped devices to be calculated and any motion of the device to be measured. Actions like activating learning content or moving to the next chapter can be triggered by shaking the device, while acceleration readings can be used to display physical processes within learning content. The GPS sensor determines the current location of the mobile device using satellite position information and can provide hints about the user's speed of movement and altitude. Location is an important piece of context information, which is especially true for learning purposes, since it is one main aspect that decides whether an environment is appropriate for learning (Wang 2011). Other sensors are available, including the barometer and the magnetic field sensor; the latter supports the compass, which can also be helpful for navigation.
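As an illustration of the shake-triggered actions mentioned above, the following is a minimal sketch of an Android accelerometer listener; it is not SensorApp's actual movement recognition algorithm, and the class name ShakeDetector and the threshold value are illustrative assumptions:

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class ShakeDetector implements SensorEventListener {
    private static final float SHAKE_THRESHOLD = 12f; // m/s², tuning value (assumption)
    private final SensorManager sensorManager;
    private final Sensor accelerometer;

    public ShakeDetector(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    }

    public void start() {
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_UI);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0], y = event.values[1], z = event.values[2];
        // A device lying flat reports z ≈ +9.81 (gravity); remove it to estimate linear motion.
        float zLinear = z - SensorManager.GRAVITY_EARTH;
        double magnitude = Math.sqrt(x * x + y * y + zLinear * zLinear);
        if (magnitude > SHAKE_THRESHOLD) {
            // e.g., advance to the next chapter of the learning content
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed here */ }
}
```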

Gravity sensor

This sensor provides a 3D vector indicating the magnitude and direction of the Earth's gravity. It is derived from the accelerometer, with linear acceleration removed from the data with the help of other sensors such as the magnetometer and the gyroscope. The following shows how to get an instance of the default gravity sensor (Fig. 3):

Fig. 3 Gravity sensor
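Since Fig. 3 is reproduced as a screenshot, the equivalent standard Android calls are shown here for reference; this mirrors the platform API rather than code specific to SensorApp, and a Context is assumed to be in scope:

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorManager;

// Inside an Activity (or anywhere a Context is available):
SensorManager sensorManager =
        (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
// May return null on devices without a gravity sensor, so check before use.
Sensor gravitySensor = sensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY);
```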

The gyroscope

This sensor measures the rate of rotation around the x-, y-, and z-axes of the device. The gyroscope uses the same coordinate system as the accelerometer sensor, and rotation is positive in the counterclockwise direction. The standard gyroscope provides the raw rotational data without any modification or filtering for drift and noise (Motion Sensors 2016) (Fig. 4).

Fig. 4 Gyroscope (GET gyroscope on IPhone 2016)
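Because the gyroscope reports angular speed rather than an angle, applications typically integrate its readings over time. The fragment below sketches the core of such an integration inside a SensorEventListener registered for Sensor.TYPE_GYROSCOPE, in the spirit of the standard Android pattern; note that drift accumulates without filtering:

```java
// Event timestamps are in nanoseconds; convert to seconds for integration.
private static final float NS2S = 1.0f / 1_000_000_000.0f;
private final float[] angleRad = new float[3]; // accumulated rotation about x, y, z
private long lastTimestamp = 0;

@Override
public void onSensorChanged(SensorEvent event) {
    if (lastTimestamp != 0) {
        float dt = (event.timestamp - lastTimestamp) * NS2S;
        for (int axis = 0; axis < 3; axis++) {
            // values[] holds rad/s; counterclockwise rotation is positive.
            angleRad[axis] += event.values[axis] * dt;
        }
    }
    lastTimestamp = event.timestamp;
}
```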

The uncalibrated gyroscope

It is similar to the gyroscope, except that no gyro-drift compensation is applied to the rate of rotation. It is intended for post-processing and melding orientation data.

Significant motion sensor

A significant motion is a motion that may lead to a change in the user's location, for example walking, sitting in a moving vehicle, or riding a bicycle. The sensor triggers an event each time a significant motion is detected and then disables itself.
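On Android this is a one-shot trigger sensor: after each event it must be re-requested. A minimal sketch, assuming the code runs inside a method of an Activity, might look as follows:

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorManager;
import android.hardware.TriggerEvent;
import android.hardware.TriggerEventListener;

final SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
final Sensor motion = sm.getDefaultSensor(Sensor.TYPE_SIGNIFICANT_MOTION);

TriggerEventListener listener = new TriggerEventListener() {
    @Override
    public void onTrigger(TriggerEvent event) {
        // Fired once: the user started walking, cycling, riding in a vehicle, ...
        sm.requestTriggerSensor(this, motion); // re-arm, since the sensor disabled itself
    }
};

if (motion != null) {
    sm.requestTriggerSensor(listener, motion);
}
```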

Rotation Vector sensor

The rotation vector sensor represents the orientation of the device as a combination of an angle and an axis, in which the device has rotated through an angle around one of the three axes x, y, or z (Motion Sensors 2016) (Fig. 5).

Fig. 5 Coordinate system for the rotation vector sensor (Motion Sensors 2016)
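In practice, the rotation vector is usually converted into a rotation matrix and then into orientation angles. The following fragment, based on the standard Android API rather than SensorApp-specific code, sketches this inside an onSensorChanged callback:

```java
private final float[] rotationMatrix = new float[9];
private final float[] orientationRad = new float[3]; // azimuth, pitch, roll

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientationRad);
        // orientationRad[0] is the azimuth (rotation about the Z axis), in radians;
        // a compass feature can display it as Math.toDegrees(orientationRad[0]).
        float azimuthDeg = (float) Math.toDegrees(orientationRad[0]);
    }
}
```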

Related works

MobiMaths

MobiMaths aims to provide a unified toolkit encompassing all aspects from hardware through to lesson strategies. On the hardware side, students are provided with smartphones that can communicate with each other and with the teacher's console machine. The toolkit comprises a range of generic tools that can be applied broadly across the course (e.g., an in-class voting response system) and a variety of "Mind tool" applications that are purpose-built for the program and serve to deepen conceptual understanding, extend thinking, and improve problem solving (Jonassen 2006) (Fig. 6).

Fig. 6 MobiMaths (Jonassen 2006)

Serious Physics

Serious Physics enables a mobile device to be used to conduct several experiments on kinematics. It offers many new scenarios in which users can learn about kinematics in an experimental way, and the modular architecture of the software allows other topics and scenarios to be covered on top of it (Martinez and Garaizar 2014) (Fig. 7).

Fig. 7 Serious Physics (Martinez and Garaizar 2014)

Sensor Kinetics

The app demonstrates the use of the accelerometer, the gyroscope, and the rotation sensor to control tilt-based view navigation like the RotoView technology by INNOVENTIONS (Sensor Kinetics 2015). It also demonstrates the operation of the magnetic sensor, the linear acceleration sensor, and the gravity sensor within special graphical displays. Each sensor is attached to a sophisticated chart viewer, and the Multi-Sensor Recorder records multiple sensors simultaneously at a controlled data rate.

Educational objective

Sensor Kinetics demonstrates the physics of gravity, acceleration, rotation, magnetism, and more, as these forces are measured by the phone or tablet. The app includes comprehensive help files with easy-to-understand information and experiments that can be performed with the sensors (Kinetic Sensor Google Play 2015) (Fig. 8).

Fig. 8 Sensor Kinetics (Sensor Kinetics 2015)

Learning for the visually impaired: Braille system

Today, it is important that all learners, irrespective of their limitations, be given access to an appropriate environment, framework, and facilities for learning. Visually impaired learners have for years been learning using the Braille system, which has undeniably made a significant contribution. The Braille system has gradually evolved, and it now plays a significant role in the literacy of blind or visually impaired learners (Fig. 9).

Fig. 9 Braille (Font Meme 2016)

Methodology and proposed solution

The solution chosen was to build the mobile learning application from scratch to meet all the functional and non-functional requirements. Using different motion sensors, namely the accelerometer, the GPS, the gyroscope, and so on, the application allows users to measure the distance between two points, the angle of elevation and rotation, and the speed at which the device is moving, and it also contains a compass showing the direction of north (Shala and Rodriguez 2011). With the integration of text-to-speech and speech-to-text, the application is easy for the visually impaired to use, thereby narrowing the divide between learners who are visually impaired and their fully sighted peers. For user acceptance, a group of 20 visually impaired students was asked to use the application and their feedback was collected. The system was then refined following their comments and suggestions.

In terms of mobile infrastructure, one smartphone running version 5.0 of the Android OS is used for the implementation of this mobile learning application. The Android platform supports three broad categories of sensors: motion sensors, position sensors, and environmental sensors. The Android sensor framework allows applications to access the various types of sensors. Accessing and managing multimedia data, sensor values, and location information is also possible on the Android platform, as Android uses the file system directly.
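For instance, the framework can enumerate every sensor on the device along with its vendor, which is how a feature like the 'List of Sensors' option described below can be implemented. A minimal sketch, assuming an Activity context and Android's standard logging:

```java
import java.util.List;
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorManager;
import android.util.Log;

// Enumerate all sensors available on this device, with their names and vendors.
SensorManager sensorManager =
        (SensorManager) getSystemService(Context.SENSOR_SERVICE);
List<Sensor> sensors = sensorManager.getSensorList(Sensor.TYPE_ALL);
for (Sensor s : sensors) {
    Log.d("SensorApp", s.getName() + " by " + s.getVendor());
}
```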

Detailed description of the system

The SensorApp application enables users such as university students to use their smartphones for mobile learning based on motion sensors. The application consists of the client side (the mobile phone).

The client side consists of the mobile interface through which users navigate the application and its main menu. The application is dedicated to students, who can make use of different motion sensors such as:

  1. Accelerometer sensor
  2. Gyroscope sensor
  3. GPS sensor
  4. Gravity sensor
  5. Magnetic-field sensor

Upon launching the application, the system shall display the Welcome page, and the user shall be requested to accept the terms and policies of the software by clicking on the “Continue” button. A second page of the application will then be displayed, where the user can choose to enter the “Main Menu,” “Search Apps,” or “Introduction Page.” The user can access the menu navigation, which consists of several options:

  1. List of sensors
  2. Accelerometer test
  3. Outside distance
  4. Bubble level
  5. Speedometer test
  6. Compass
  7. About us

The ‘List of Sensors’ option shall display a list of the names and vendors of all the sensors with which the smartphone is equipped. The ‘Accelerometer Test’ option shall display information about the device's movement and positioning precisely and accurately using the accelerometer sensor. The ‘Outside Distance’ option shall calculate the distance from the starting point to the ending point using the longitude and latitude of the current location and display the result. The ‘Bubble Level’ option shall indicate whether a surface is horizontal or vertical using the accelerometer sensor. The ‘Speedometer Test’ option shall allow the user to record the speed in real time as the device moves. The ‘Help & Feedback’ option shall display the user manual of the application and shall allow the user to give feedback about the application by sending a mail or posting on blogs and social networks. The ‘Setting’ option shall display the general settings of the smartphone to allow users to adjust brightness and activate or deactivate the GPS satellite service. The ‘About Us’ option shall display details about the application. On selecting an option, the user will be able to hear it through a text-to-speech service. A “Speech to Text” option will be implemented to support visually impaired users, who will be able to open the options above using their voice.
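On Android 5.0 (API 21), the platform's TextToSpeech and RecognizerIntent APIs cover both directions. The sketch below is illustrative rather than SensorApp's actual code; the class name MenuSpeaker, the chosen locale, and the request-code constant are assumptions:

```java
import java.util.Locale;
import android.content.Context;
import android.speech.tts.TextToSpeech;

// Speaks menu options aloud as the user navigates (illustrative helper class).
public class MenuSpeaker implements TextToSpeech.OnInitListener {
    private final TextToSpeech tts;
    private boolean ready = false;

    public MenuSpeaker(Context context) {
        tts = new TextToSpeech(context, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.UK); // assumed locale
            ready = true;
        }
    }

    public void speak(String menuOption) {
        if (ready) {
            // QUEUE_FLUSH interrupts any current utterance so navigation stays responsive.
            tts.speak(menuOption, TextToSpeech.QUEUE_FLUSH, null, "menu-option");
        }
    }
}
```

For the speech-to-text direction, an Activity can launch the platform's built-in recognizer and receive the spoken option name in onActivityResult:

```java
import android.content.Intent;
import android.speech.RecognizerIntent;

// REQUEST_VOICE_SEARCH is a hypothetical request code defined by the caller.
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
activity.startActivityForResult(intent, REQUEST_VOICE_SEARCH);
```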

Overall system

The diagrams below show the overall system and some designs of the system (Figs. 10, 11).

Fig. 10 Overall system diagram

Fig. 11 Modeling of the proposed system

Development tools and environment used

The development tools and environment that have been used for the purpose of this research are described below.

Language used

  • Android: Android is the customizable, easy-to-use operating system that powers more than a billion devices across the globe.

  • Java: Java is a general-purpose computer programming language that is concurrent, class-based, object-oriented, and specifically designed to have as few implementation dependencies as possible.

  • XML: Extensible Markup Language is a markup language that defines a set of rules for encoding documents in a format which is both human readable and machine readable.

Software tools

  • Android Studio: Android Studio is the official IDE for Android application development, based on IntelliJ IDEA.

  • Java SE: Java Platform, Standard Edition (Java SE), lets you develop and deploy Java applications on desktops and servers, as well as in today’s demanding embedded environments. Java offers the rich user interface, performance, versatility, portability, and security that today’s applications require.

  • Eclipse IDE: Eclipse is an integrated development environment (IDE). It contains a base workspace and an extensible plug-in system for customizing the environment.

  • Robotium Recorder: Robotium is an Android test automation framework for testing native and hybrid Android mobile applications on devices or emulators. It makes it easy to write powerful and robust automated tests for Android applications.

Hardware requirements

  • A smartphone with Internet connectivity and the ability to turn on GPS.

  • A computer with 8 GB of RAM for a smooth development environment and at least 2 GB of hard-disk space.

  • A computer with a Core i7 processor for Android development.

  • A mobile phone equipped with the appropriate sensors to operate successfully.

  • A mobile phone with Android version 5.0 or above (the application makes use of API level 21).

Results and interpretation

Standards and conventions

The rules and conventions to be followed cover indentation, declarations, comments, statements, organization, directory structure, naming conventions, and so on. Naming conventions make programs more understandable by making them easier to read. They can also give information about the function of an identifier, for example whether it is a constant, a package, or a class, which helps in understanding the code. This provides substantial benefits to engineers, since any new programmer can readily understand the code.

Features of system

The project is organized into three layers: the presentation layer, the business layer, and the service layer. It consists of the following packages:

  1. SplashScreen page (Presentation layer)
  2. Welcome page (Presentation layer)
  3. Introduction Page (Presentation layer)
  4. VoiceSearch Page (Presentation layer)
  5. AccelerometerTest package (Business layer)
  6. OutsideDistance package (Business layer)
  7. Compass package (Business layer)
  8. Speedometer Test package (Business layer)
  9. Bubble Level package (Business layer; see the sketch after this list)
  10. Terms&Policies, AboutUs, Error messages, and Dialog fragments (Service layer) (Fig. 12)

Fig. 12 Features of system
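As an example of the business-layer logic, a bubble level can be derived from the accelerometer's gravity components. The following fragment is a hypothetical sketch of the core computation rather than the actual Bubble Level package code; the 1° tolerance is an assumption:

```java
// Inside a SensorEventListener registered for Sensor.TYPE_ACCELEROMETER.
@Override
public void onSensorChanged(SensorEvent event) {
    float x = event.values[0], y = event.values[1], z = event.values[2];
    // With the device lying flat, z carries most of gravity; tilting shifts it into x and y.
    double rollDeg  = Math.toDegrees(Math.atan2(x, z)); // left-right tilt
    double pitchDeg = Math.toDegrees(Math.atan2(y, z)); // top-bottom tilt
    boolean level = Math.abs(rollDeg) < 1.0 && Math.abs(pitchDeg) < 1.0; // 1° tolerance
    // A level surface could then be announced via text-to-speech for visually impaired users.
}
```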

Discussion and experimentation

In order to ensure that the application that has been developed provides good usability and a fruitful learning experience, a number of tests were performed. People with visual challenges face special barriers in using the Internet, aside from those related to material access and computer-related training (Puffelen 2009). Accordingly, some of these tests also verified that the limitations inherent in mobile learning, highlighted in the Literature review section, were considered during the development of SensorApp.

Real-time issues

The ‘Speedometer Test’ and ‘Outside Distance’ features take several minutes to load and display the GPS data, which can be a disadvantage of the system. This delay depends on the availability of the network provider; however, the GPS signal plays the main role in the real-time issue. Tracking the smartphone's current position becomes inaccurate because of the margin of error in the received signal. Therefore, the application was implemented to receive an update from the GPS satellites every three milliseconds, subject to a minimum distance threshold.
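On Android, such an update policy maps directly onto LocationManager.requestLocationUpdates, and the travelled distance can be accumulated between successive fixes. The sketch below is illustrative rather than SensorApp's actual code; the constant values are assumptions, and in practice GPS fixes rarely arrive more often than about once per second:

```java
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;

// Minimum time (ms) and distance (m) between updates, as configured by the app.
private static final long MIN_TIME_MS = 3;      // per the policy described above
private static final float MIN_DISTANCE_M = 1f; // assumed threshold

private Location lastFix;
private float totalDistanceMeters = 0f;

void startTracking(LocationManager lm, LocationListener listener) {
    // Requires the ACCESS_FINE_LOCATION permission and an enabled GPS provider.
    lm.requestLocationUpdates(LocationManager.GPS_PROVIDER,
            MIN_TIME_MS, MIN_DISTANCE_M, listener);
}

// Called from the listener's onLocationChanged(): accumulate the distance
// between the previous fix and the new one ('Outside Distance' style).
void onNewFix(Location fix) {
    if (lastFix != null) {
        totalDistanceMeters += lastFix.distanceTo(fix);
    }
    lastFix = fix;
}
```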

Performance testing

Performance testing is performed to determine how fast the system performs under a particular workload; it involves using tools to create a series of virtual users who access the user interface simultaneously so that performance can be reported.

To test the performance of the application, we used tools to collect data about the execution behavior of the system. Android Studio and the profiling tools provided by the smartphone were used to record and visualize the rendering, memory, compute, and battery performance of the application.

Debug GPU overdraw walkthrough

This view helps the developer see where rendering overhead can be reduced (Fig. 13).

Fig. 13 Color key for debug GPU overdraw output

Profiling GPU rendering walkthrough

It helps to see how a UI window performs against the 16-ms-per-frame target (Fig. 14).

Fig. 14 Enlarged annotated profile GPU rendering graph

The figure below shows the Profile GPU Rendering graph of the application (Fig. 15).

Fig. 15 Profile GPU rendering graph for SensorApp

The green line represents 16 ms. The blue section of the bar represents the time used to create and update the View’s display lists. The purple section of the bar represents the time spent transferring resources to the render thread. The red section represents the time spent by Android’s 2D renderer issuing commands to OpenGL to draw and redraw display lists. The orange section of the bar represents the time the CPU is waiting for the GPU to finish its work (Android Developer 2016).

Battery Historian charts

The Battery Historian chart graphs power-relevant events over time (Fig. 16).

Fig. 16 Battery Historian output

Memory Monitor walkthrough

The Memory Monitor reports in real time how the SensorApp allocates memory, and this walkthrough shows the basic usage for the Memory Monitor tool in Android Studio (Fig. 17).

Fig. 17 Memory Monitor report

Acceptance testing

Acceptance testing is formal testing conducted to determine whether a software system satisfies user needs. The following are the test cases for the acceptance testing.

For user acceptance, a group of 20 visually impaired students was asked to use the application. Their feedback was collected and is summarized as follows:

  • The visually impaired students all found the text-to-speech and Voice Search features very interesting. These features were very useful, guiding the students through the system with voice messages at each stage, while Voice Search led them directly to the desired screen.

  • There were differences of opinion about the application's structure. Some found it very good, while others suggested that the graphical interface could be better.

  • Others found the application very innovative, especially the Outside Distance and Bubble Level features.

  • As for the use of sensors, some found that the sensors were put to good use but suggested that the application could be further expanded with additional features (Tables 1, 2, 3, 4, 5, 6).

    Table 1 Acceptance testing table
    Table 2 Comment user 1
    Table 3 Comment user 2
    Table 4 Comment user 3
    Table 5 Comment user 4
    Table 6 Comment user 5

Recommendations and conclusion

The project has successfully achieved its aims and objectives in trying to bridge the gap between visually impaired learners and other learners. Using the application, measurement of distance, speed, and the level of a surface is possible; in addition, the direction of north can be determined using the compass. To receive feedback from users, a SensorApp Facebook page was created, and an option to log in from the application was also successfully implemented. The main difficulty encountered was how to make good use of the sensors to enhance learning. This problem was solved by studying existing systems to see how motion sensors could help students, and we finally came up with features that would be helpful for visually impaired students.

Our future work is to make SensorApp a cross-platform application so that most mobile users can access it; currently, it is operational only on Android. As the feedback from the visually impaired students indicated, additional features can be added to further increase interaction between users and the application. The application could also be refined so that it can be used not only by students but also at work, for example in the construction field. The text-to-speech and speech-to-text features proved to be very helpful.