Abstract
The learning community is a very heterogeneous one, and ‘inclusion’ is a concept that should stand high on the agenda of every learning community. Globally, there are 150 million children who are physically challenged, and they are often deprived of education because they are among the most vulnerable and excluded people in their respective communities. Inclusion implies that no one should be left out of the education system, as education is the key to success. All students who are otherwise capable should have access to proper education and training. Proper education and training for the twenty-first century should be geared much more toward encouraging learners in critical thinking, problem solving, analysis, interpretation of information, and creativity. Learners are expected to play a predominant role, whereas the teacher will be doing less instruction but much more orchestration of information. Learners who are visually impaired nowadays have access to a number of technologies that can facilitate the learning process. The use of smartphones by learners in their everyday life has grown significantly during the last decade, and learners are regularly encouraged to use their personal devices for learning. However, the majority of mobile learning applications do not make the most of smartphones, which offer interesting features such as the accelerometer and gyroscope. SensorApp, a free mobile learning application that makes use of different motion sensors to enhance the learning experience of visually impaired learners, has been developed for this research. The integration of text-to-speech in this application has bridged the divide between visually impaired students and those with no vision problems. Voice search is an interesting feature that will be helpful to visually impaired students.
Finally, thorough testing has been carried out with 20 visually impaired learners, who found the application to be very interesting and innovative; their feedback and comments have been taken into account. Systems like Braille have undoubtedly made a significant contribution, but nowadays the facilities provided by the digital era we are living in can be brought to those who are physically challenged. This can only change their lives in a positive way.
Introduction
In the last decade, solutions for people’s tasks in social aspects, education, culture, and economics have been provided by Information Technology (Skinner 2013). Mobile technologies have changed the world in many respects (Traxler 2009). The education field is no exception and has been extensively affected by information technology. Clearly, there is rising interest in using information technology and other new educational approaches that can encourage learning in a formal or informal manner. Educational technologies, consisting of different sets of teaching tools, are being used to improve learning activities in the educational field. While these changes have improved society in many respects, they present an obstacle for visually challenged people, who may have significant difficulty processing the visual cues presented by modern graphical user interfaces (Chiang et al. 2005). Besides, people with visual challenges face special barriers in using the Internet, aside from those related to material access and computer-related training (Puffelen 2009). The number of people with a visual impairment using computers is increasing (Douglas and Long 2003); however, they experience serious problems while using ICT tools because of the lack of basic ICT skills, lack of training, and lack of training materials (Gerber 2003; Sales et al. 2006; UNESCO 2009). Indeed, according to the World Health Organisation (WHO 2016), 285 million people are estimated to be visually impaired. In recent years, there has been a growing international urgency to include physically challenged persons on university campuses (Lourens and Swartz 2016). In essence, real inclusion means feeling like a welcomed member of the tertiary environment; a member who truly belongs and whose contributions to the diversity of the university are valued and celebrated (Bantjes et al. 2015; Beauchamp-Pryor 2013; Swart and Greyling 2011).
It is not merely about increasing the number of visually impaired learners in tertiary institutions; it also involves the quality of the social and learning experiences of this category of learners once they gain access to higher education (Fuller et al. 2004; Jacklin et al. 2006).
The purpose of this study was to find an alternative, using technological advances in computing, that would enable visually impaired learners to find an improved way of learning. It was borne in mind that, as Gerber (2003), Sales et al. (2006), and UNESCO (2009) mention, this category of learners encounters numerous problems in using computers and the Internet. The solution to be devised therefore had to be user-friendly and easy to use, and not require a high level of proficiency in computing or Internet use. One aspect toward which this research was geared was the use of mobile devices in the learning process of visually impaired learners.
Given the nature of mobile devices and recent technological advances, there is a lot to be gained by harnessing new technologies for education, especially for the visually impaired. Some advances in mobile technologies can bridge the gap between fully sighted students and the visually impaired. In particular, by incorporating technologies such as motion sensors, now standard in many modern smartphones, new inclusive mobile learning applications can be developed for learners with sight difficulties. Mobile devices, compared to computers, are easily portable and can encourage anytime, anywhere learning, especially for learners with visual impairments.
SensorApp is one such application that incorporates these technologies to help visually impaired learners and ensure that they are not left out of the system. The objective of this research has been to build an application using motion sensors to enhance learning and eventually to test it with visually impaired learners to see how their learning experience can be improved. The application that has been developed contains a navigation menu that makes use of the accelerometer sensor available in several mobile phones, particularly on the Android platform. A movement recognition algorithm has been created for the collection of all the data necessary for the effective functioning of the system. The integration of text-to-speech and speech-to-text allows visually impaired students to hear each option as they navigate through the system. SensorApp is also interactive: visually impaired learners can choose the different options they want from the menu using voice commands. The strength of SensorApp lies in the fact that the interfaces developed are simple, so no advanced ICT skills are required of the learners. This is complemented by a powerful voice feature that helps visually impaired learners search for particular information and have the retrieved information presented in an auditory way.
Literature review
Literature discussing the literacy of the visually impaired is abundant, and many authors have been raising the alarm about the decline in the use of Braille for at least 30 years (Wiazowski 2013).
Certain sources mention that only about 10% of blind people, the younger ones in particular, enjoy the power of Braille (Canadian Federation of the Blind [CFB] 2013; Engelhart 2010). Technology is frequently blamed for this situation (Hatlen and Spungin 2008), especially with the growing popularity and quality of synthetic voices and mainstream devices that incorporate synthesized output (Danielsen 2013).
Today, with means of communication and a variety of information sources available everywhere, at home, at school, at work, and in our environment at large, we are also witnessing the rise of a connected mobile society. This has even been described as the start of the next social revolution (Rheingold 2003). The spread of various mobile learning systems shows how important it is to develop wireless and mobile learning applications (Liu et al. 2002a, b). In recent years, distance learning has grown in two significant directions: ‘the individual flexible teaching model’ and ‘the extended classroom model.’ The extended classroom model places no restriction on the starting time of classes and allows students to study alone and to interact with teachers and other students. This model divides the students into groups and requires them to meet at local study centers. It also allows the students to interact through video conferencing (Rekkedal and Dye 2007). The mobile industry is among the fastest growing, and the number of mobile phone owners surpasses the number of computers in the world. Around the world there are about 2.7 billion mobile phones, and in some cases the mobile phone is the only means of long-distance communication.
M-Learning (Mobile Learning) can be described as learning that occurs across locations or that takes advantage of learning opportunities offered by technologies such as laptops, smartphones, computers, cameras, media players, and games consoles (e.g., Nintendo DS, Sony PSP). ‘Mobile,’ commonly understood as portable and movable, can also imply ‘personal’; mobile technologies can thus be categorized using the two orthogonal features of personal versus shared and portable versus static (Naismith et al. 2004).
Classification of mobile technologies
The range of mobile technologies can be categorized using the two orthogonal features of personal versus shared and portable versus static, as shown in Fig. 1. Naismith et al. (2004) emphasized that quadrants one to three consist of mobile technologies, along with those devices from quadrant four that are not at the extreme end of the static dimension.
The various forms of mobility in mobile learning introduce a high degree of dynamism, which brings new challenges. The challenge is to make the most of a constantly changing environment with a new category of learning applications that are flexible and can adjust to dynamic learning conditions. The mobile devices available, the cost and capacity of networks, usage patterns, and so on may all change over place and time. In brief, the learning setting keeps changing over time.
Mobile learning
M-Learning research is still in its infancy because the amount of available primary research is still small relative to other fields of study such as e-learning. Most literature reviews and conceptual papers seek to establish a foundation for m-learning, develop theory, or focus on design. Specifically, prior reviews have focused on the type of m-learning projects being done (Fetaji 2008), the nature of research questions (Ali and Irvine 2009), and the type of activities that can be supported with mobile technologies (Naismith et al. 2004).
The range of the research on mobile learning has made it challenging to produce a single definition or to determine generally added benefits (Frohberg et al. 2009). While it is typical for an emerging field to have varied definitions, the lack of conceptual frameworks and robust theories has frequently been raised as a concern in the literature (Peng et al. 2009). The greatest added value of mobile learning vis-à-vis PC learning lies in the aspects that extend classroom interaction to other locations via communication networks. Recent advances such as embedded sensors, cameras, motion detection, location awareness, social networks, web searching, and augmented reality present the potential to foster learning and engagement across multiple physical, conceptual, and social spaces, both indoors and out (Newhouse et al. 2006).
However, some of the major limitations of mobile learning (Shudong and Higgins 2006) include:
-
Small screen and low resolutions
-
Connectivity and Internet Access problems
-
Lack of standardization and compatibility
-
Battery, memory, and storage capacity
Placing learning in a specific context
One of the main affordances of a smartphone is that users can take it with them wherever they go. The importance of context in learning has long been recognized (Seely Brown et al. 1989). For example, students can apply mathematical or scientific inquiry in real-world problem-solving situations, using M-Learning tools such as MobiMaths (Tangney et al. 2010). Mobile technologies and smartphones can offer solutions to some of the issues in mathematics education. MobiMaths aims to provide an integrated toolkit covering all aspects from hardware through to lesson plans. From a hardware perspective, students are provided with smartphones that can communicate with each other and with the teacher’s console machine.
Augmenting reality with virtual information
We can connect something virtual to something real with a smartphone. Augmented reality tools such as Google Goggles, Layar, and Wikitude show the potential of using a smartphone to provide data about locations and artifacts.
Having an adaptive learning toolkit in the palm of your hand
Various combinations of functions and sensors allow applications to turn the smartphone into all kinds of tools. A smartphone can be a distance-measuring device, a compass, a speedometer, a spirit level, and a whole range of other things. In particular, the role of the device as a tool is well suited to supporting inquiry-based learning (Powell et al. 2011).
Context-aware learning
Context-aware mobile learning (CAML) represents a comparatively new domain of research. Context awareness is the collection of data from the environment to give an account of the current situation around the user and the device. Since they are available in diverse contexts, mobile devices are particularly suited to context-aware applications and can make use of those contexts to enhance the learning activity. Context-aware mobile devices can support learners by offering them the chance to keep their attention on the world and by proposing the necessary help when needed (Naismith et al. 2004). CAML places great importance on learners having portable devices, such as PDAs, enhanced with sensors, wireless LAN, cameras, GPS receivers, and software sensors such as a network congestion manager, student behavior analyzer, web log analyzer, and so on. Classical context-awareness approaches include portfolio (Chen et al. 2003) and student modeling (Liu et al. 2002a, b). Context-aware ubiquitous/mobile learning can also be described as the approach that uses mobile, wireless communication, and sensing technologies to support real-world learning activities (Hwang et al. 2008). Recently, a study carried out by Chen et al. (2014) identified that a progressive prompt-based context-aware learning approach yielded better results than a conventional context-aware learning system with single-stage prompts. Indeed, this approach provided more challenging tasks that encouraged the students to invest more effort in examining contextual information. In a context-aware ubiquitous learning environment, learning systems are also aware of students’ locations and learning status in the real world through sensing technologies, which enables personalized guidance or support (Yin et al. 2016).
In such a learning environment, which guides students to observe and learn from real-world targets, various physical world constraints need to be taken into account when planning learning paths for individuals. Determining personalized pathways can help maximize students’ learning efficacy. Hsu et al. (2016) recognize that it is essential to guide students along an efficient learning path to maximize their learning performance according to the current situation. An active learning support system (ALESS) for context-aware ubiquitous learning environments was eventually designed and developed, and results showed that the learning process was more efficient using ALESS.
As mobile learning continues to emerge, CAML will also become more important. However, at present there is little support for building CAML systems in a durable and reliable manner. As a result, developers must deal with a wide range of system issues. These include stating context needs, discovering available sensors that can address those needs, obtaining data from these sensors, applying fusion algorithms to improve the reliability of sensor data, utilizing recognition algorithms to transform low-level sensor data into higher-level context data, and routing the context data to the learning application.
Motion sensors
The accelerometer sensor
The accelerometer sensor measures the forces applied to the device and determines the acceleration applied to a smartphone. It uses the standard sensor coordinate system; the following conditions apply when the device is lying flat on a surface in its ordinary orientation (Mobile Science 2016) (Fig. 2):
-
1.
When the smartphone is pushed on the left side, the value of the X-acceleration is positive.
-
2.
When the smartphone is pushed on the bottom, the value of the Y-acceleration is positive.
-
3.
When the smartphone is pushed toward the sky with an acceleration of A m/s², the value of the Z-acceleration equals A + 9.81, which corresponds to the acceleration of the device (+A m/s²) minus the gravitational force (−9.81 m/s²).
-
4.
When the device is immobile, the accelerometer reads an acceleration value of +9.81 m/s², corresponding to the force of gravity acting on the smartphone.
The accelerometer is one important sensor found in mobile devices that is generally absent in desktop environments. It allows the orientation of sensor-equipped devices to be calculated and any motion of the devices to be measured. Actions like activating learning content or moving to the next chapter can be triggered by shaking the device, while acceleration can be used to display physical processes within learning content. The GPS sensor determines the current location of the mobile device using satellite position information and is capable of providing hints about the user’s speed of movement and altitude. Location is an important piece of context information, particularly for learning purposes, since it is one main aspect that decides whether an environment is appropriate for learning (Wang 2011). Other sensors are available, including the barometer and the magnetic field sensor, which supports the compass and can also be helpful for navigation.
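As an illustration of how accelerometer readings can trigger actions such as moving to the next chapter, a shake can be detected by comparing the magnitude of the acceleration vector against a threshold. The sketch below is a minimal, hypothetical example; the class name and the 15 m/s² threshold are assumptions for illustration, not values taken from SensorApp:

```java
/**
 * Minimal sketch of shake detection from raw accelerometer readings.
 * The threshold is an assumed illustrative value, not SensorApp's actual setting.
 */
public class ShakeDetector {

    // Acceleration magnitude (m/s^2) above which the motion counts as a shake.
    // At rest the magnitude is ~9.81 m/s^2 (gravity), so the threshold must exceed that.
    private static final double SHAKE_THRESHOLD = 15.0;

    /** Returns true if the combined acceleration on the three axes exceeds the threshold. */
    public static boolean isShake(double x, double y, double z) {
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        return magnitude > SHAKE_THRESHOLD;
    }
}
```

On Android the x, y, and z values would arrive through a `SensorEvent` from the accelerometer; here they are plain parameters so that the logic can be shown in isolation.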
Gravity sensor
This sensor provides a 3D vector that indicates the magnitude and direction of Earth’s gravity. It is derived from the accelerometer where linear acceleration is removed from data with the help of sensors such as the magnetometer and the gyroscope. The following shows how to get an instance of the default gravity sensor (Fig. 3):
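When no dedicated gravity sensor is available, a gravity vector is commonly approximated from raw accelerometer samples with a simple low-pass filter that smooths out short-lived linear acceleration. The following is a hedged sketch of that general technique, not SensorApp's implementation; the class name and the smoothing factor `ALPHA` are assumed values:

```java
/**
 * Sketch of isolating gravity from accelerometer samples with a low-pass filter:
 * gravity = ALPHA * gravity + (1 - ALPHA) * sample, applied per axis.
 */
public class GravityFilter {

    // Smoothing factor: closer to 1 means slower response but smoother output (assumed value).
    private static final double ALPHA = 0.8;

    private final double[] gravity = new double[3];

    /** Feed one accelerometer sample (m/s^2); returns the current gravity estimate. */
    public double[] update(double ax, double ay, double az) {
        gravity[0] = ALPHA * gravity[0] + (1 - ALPHA) * ax;
        gravity[1] = ALPHA * gravity[1] + (1 - ALPHA) * ay;
        gravity[2] = ALPHA * gravity[2] + (1 - ALPHA) * az;
        return gravity;
    }
}
```

After enough samples of a device at rest, the estimate converges to the constant gravity component while transient motion is attenuated.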
The gyroscope
It is a sensor that measures the rate of rotation around the x-, y-, and z-axes of a device. The gyroscope uses the same coordinate system as the accelerometer sensor. Rotation is positive in the counterclockwise direction. The standard gyroscope provides raw rotational information without any modification or filtering for drift and noise (Motion Sensors 2016) (Fig. 4).
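Because the gyroscope reports rates of rotation rather than angles, an orientation angle is typically obtained by integrating the rate over time. The sketch below illustrates that idea in its simplest, unfiltered form, which is also why drift accumulates when raw gyroscope data is used directly; it is an illustrative example rather than SensorApp code:

```java
/**
 * Sketch of obtaining a rotation angle by integrating gyroscope rate samples.
 * No drift compensation is applied here, which is why raw gyroscope output
 * is usually filtered or fused with other sensors in practice.
 */
public class GyroIntegrator {

    /**
     * @param rates angular rates around one axis, in rad/s, one per sample
     * @param dt    sampling interval in seconds
     * @return accumulated rotation angle in radians
     */
    public static double integrateAngle(double[] rates, double dt) {
        double angle = 0.0;
        for (double rate : rates) {
            angle += rate * dt; // simple rectangular integration
        }
        return angle;
    }
}
```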
The uncalibrated gyroscope
It is analogous to the gyroscope, except that no gyro-drift compensation is applied to the rate of rotation. It is used for post-processing and fusing orientation data.
Significant motion sensor
A significant motion is a motion that can change the location of the user, for example walking, sitting in a moving vehicle, or riding a bicycle. The sensor triggers an event every time a significant motion is detected and then disables itself.
Rotation Vector sensor
The Rotation Vector sensor shows the orientation of the device as a combination of an angle and an axis, in which the device has rotated an angle around any of the three axes X, Y, or Z (Motion Sensors 2016) (Fig. 5).
Related works
MobiMath
MobiMath aims to provide a unified toolkit encompassing all aspects from hardware through to lesson strategies. On the hardware side, students are provided with smartphones that can communicate with each other and with the teacher’s console machine. The toolkit comprises a range of general-purpose tools that can be applied broadly across the course (e.g., an in-class voting response system) and a variety of “Mind tool” applications that are purpose-built for the program and serve to deepen conceptual understanding, extend thinking, and improve problem solving (Jonassen 2006) (Fig. 6).
Serious Physics
Serious Physics enables the use of a mobile device to conduct several experiments on Kinematics. There are many new scenarios where Serious Physics users can learn about Kinematics in an experimental way. The modular architecture of the software allows covering other topics and scenarios on the top of it (Martinez and Garaizar 2014) (Fig. 7).
Sensor Kinetics
The app demonstrates the use of the accelerometer, gyroscope, and the rotation sensor to control a tilt-based view navigation like the RotoView technology by INNOVENTIONS (Sensor Kinetics 2015). It also demonstrates the operation of the magnetic sensor, the linear acceleration sensor, and the gravity sensor within special graphical displays. Each sensor is attached to a sophisticated chart viewer. The Multi-Sensor Recorder records multiple sensors simultaneously at a controlled data rate.
Educational objective
Sensor Kinetics demonstrates the physics of gravity, acceleration, rotation, magnetism, and more, as these forces are measured by your phone or tablet. The app includes comprehensive help files with easy-to-understand information and experiments you can perform with the sensors (Kinetic Sensor Google Play 2015) (Fig. 8).
Learning for the visually impaired: Braille system
Today, it is important that all learners, irrespective of their limitations, be given access to the appropriate environment, framework, and facility for learning. The visually impaired learners have for years been learning using the Braille system which undeniably has had a significant contribution. Braille System has gradually evolved, and it now plays a significant role in the literacy of the blind or visually impaired learner (Fig. 9).
Methodology and proposed solution
The solution chosen for implementing the mobile learning application was to build the application from scratch to meet all the functional and non-functional requirements. Using different motion sensors, namely the accelerometer, the GPS, the gyroscope, and so on, the application allows users to measure the distance between two points, the angle of elevation and rotation, and the speed at which the device is moving, and it also contains a compass to show the direction of the North Pole (Shala and Rodriguez 2011). With the integration of text-to-speech and speech-to-text, the application is easy to use for the visually impaired, hence bridging the divide between learners who are visually impaired and their fully sighted peers. For user acceptance, a group of 20 visually impaired students were asked to use the application, and their feedback was collected. The system was then refined following their comments and suggestions.
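For the angle-of-elevation measurement mentioned above, a common approach when the device is otherwise at rest is to compute the tilt from the gravity components reported by the accelerometer. The helper below is a minimal sketch of that standard technique; the class and method names are hypothetical and it is not the application's actual code:

```java
/**
 * Sketch of computing a device's tilt (elevation) angle from accelerometer
 * gravity components, assuming the device is otherwise at rest.
 */
public class TiltMeter {

    /** Pitch angle in degrees: 0 when the device lies flat, 90 when upright. */
    public static double pitchDegrees(double ay, double az) {
        return Math.toDegrees(Math.atan2(ay, az));
    }
}
```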
In terms of mobile infrastructure, one smartphone was used for the implementation of this mobile learning application. The smartphone runs version 5.0 of the Android OS. The Android platform supports three comprehensive categories of sensors: motion sensors, position sensors, and environmental sensors. The Android sensor framework allows the user to access various types of sensors. Accessing and managing multimedia data, sensor values, and location information is also possible on the Android platform, as Android provides direct access to the file system.
Detailed description of the system
The SensorApp application enables users such as university students to use their smartphones for mobile learning supported by motion sensors. The application consists of the client side (mobile phone).
The client side consists of the mobile interface, through which users can navigate the application and its main menu. The application is dedicated to students, who can make use of different motion sensors such as:
-
1.
Accelerometer sensor
-
2.
Gyroscope sensor
-
3.
GPS sensor
-
4.
Gravity sensor
-
5.
Magnetic-field sensor
Upon launching the application, the system shall display the Welcome page and the user shall be requested to accept the terms and policy of the software by clicking on the “Continue” button. A second page of the application will then be displayed where the user can choose to enter the “Main Menu” or “Search Apps” or “Introduction Page.” The user can access the menu navigation which consists of many options such as:
-
1.
List of sensors
-
2.
Accelerometer test
-
3.
Outside distance
-
4.
Bubble level
-
5.
Speedometer test
-
6.
Compass
-
7.
About us
The ‘List of Sensors’ option shall display the name and vendor of all the sensors with which the smartphone is equipped. The ‘Accelerometer Test’ option shall display information on the device’s movement or positioning precisely and accurately using the accelerometer sensor. The ‘Outside Distance’ option shall calculate the distance from the starting point to the ending point using the longitude and latitude of the current location and display the result. The ‘Bubble Level’ option shall indicate whether a surface is horizontally or vertically level using the accelerometer sensor. The ‘Speedometer Test’ option shall allow the user to record the speed in real time as the device moves. The ‘Help & Feedback’ option shall display the user manual of the application and shall allow the user to give feedback about the application by sending a mail or posting on blogs and social networks. The ‘Setting’ option shall display the general settings of the smartphone to allow users to adjust brightness and activate/deactivate the GPS satellite service. The ‘About Us’ option shall display details about the application. On selecting an option, the user will be able to hear the menu options through a text-to-speech service. The “Speech to Text” option will be implemented to facilitate partially impaired users, as they will be able to open the options above using their voice.
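A distance between two latitude/longitude fixes, as used by the ‘Outside Distance’ option, is commonly computed with the haversine formula for great-circle distance. The sketch below shows that general formula; it is illustrative and not necessarily the exact method used in SensorApp:

```java
/**
 * Sketch of the great-circle distance between two GPS fixes
 * using the haversine formula.
 */
public class GeoDistance {

    private static final double EARTH_RADIUS_M = 6371000.0; // mean Earth radius

    /** Distance in metres between (lat1, lon1) and (lat2, lon2), given in degrees. */
    public static double haversineMeters(double lat1, double lon1,
                                         double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
        return EARTH_RADIUS_M * c;
    }
}
```

One degree of latitude corresponds to roughly 111 km, which gives a quick sanity check on the result.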
Overall system
The diagrams below show the overall system and some designs of the system (Figs. 10, 11).
Development tools and environment used
The development tools and environment that have been used for the purpose of this research are described below.
Language used
-
Android: Android is the customizable, easy-to-use operating system that powers more than a billion devices across the globe.
-
Java: Java is a general-purpose computer programming language that is concurrent, class-based, and object-oriented, and is specifically designed to have as few implementation dependencies as possible.
-
XML: Extensible Markup Language is a markup language that defines a set of rules for encoding documents in a format which is both human readable and machine readable.
Software tools
-
Android Studio: Android Studio is the official IDE for Android application development, based on IntelliJ IDEA.
-
Java SE: Java Platform, Standard Edition (Java SE), lets you develop and deploy Java applications on desktops and servers, as well as in today’s demanding embedded environments. Java offers the rich user interface, performance, versatility, portability, and security that today’s applications require.
-
Eclipse IDE: Eclipse is an integrated development environment (IDE). It contains a base workspace and an extensible plug-in system for customizing the environment.
-
Robotium Recorder: Robotium is an Android test automation framework for testing native and hybrid Android mobile applications on devices or emulators. It makes it easy to write powerful and robust automated tests for Android applications.
Hardware requirements
-
A smartphone with internet connectivity to turn on GPS.
-
A computer with 8 GB RAM for a smooth environment and at least 2 GB of hard-disk space.
-
A computer with core i7 processor for android development.
-
A mobile equipped with appropriate sensors to operate successfully.
-
A mobile with Android version 5.0 and above (makes use of API level 21).
Results and interpretation
Standards and conventions
The rules and agreements to be followed cover indentation, declarations, comments, statements, organization, directory structure, naming conventions, and so on. Naming conventions make programs more understandable by making them easier to read. They can also give information about the function of an identifier, for example whether it is a constant, package, or class, which can be helpful in understanding the code. This provides huge benefits to engineers, as any new programmer is able to understand the code clearly.
Features of system
The project is organized into three layers: the presentation layer, the business layer, and the service layer. It consists of several packages:
-
1.
SplashScreen page (Presentation layer)
-
2.
Welcome page (Presentation layer)
-
3.
Introduction Page (Presentation layer)
-
4.
VoiceSearch Page (Presentation layer)
-
5.
AccelerometerTest package (Business layer)
-
6.
OutsideDistance package (Business layer)
-
7.
Compass package (Business layer)
-
8.
Speedometer Test package (Business layer)
-
9.
Bubble Level package (Business layer)
-
10.
Terms&Policies, AboutUs, Error messages, and Dialog fragments (Service layer) (Fig. 12)
Discussion and experimentation
In order to ensure that the application that has been developed provides good usability features and a fruitful learning experience, a number of tests have been performed. People with visual challenges face special barriers in using the Internet, aside from those related to material access and computer-related training (Puffelen 2009). Accordingly, some of these tests were also meant to verify that some of the limitations inherent in mobile learning, highlighted in the Literature Review section, were considered during the development of SensorApp.
Real-time issues
The ‘Speedometer Test’ and ‘Outside Distance’ features take several minutes to load and display the GPS data, which can be a disadvantage of the system. This issue depends on the availability of the network provider. The GPS signal also plays a vital role in this real-time issue: tracking the smartphone’s current position becomes inaccurate due to the margin of error of the received signal. Therefore, the application was implemented to receive an update from the GPS satellite every three milliseconds with a minimum distance.
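The ‘Speedometer Test’ can be understood as deriving speed from successive GPS fixes: the distance covered between two fixes divided by the elapsed time. The sketch below shows that computation in isolation; the class and method names are hypothetical and this is illustrative rather than the exact SensorApp code:

```java
/**
 * Sketch of estimating speed from two successive GPS fixes.
 */
public class SpeedEstimator {

    /**
     * @param distanceMeters distance between the two fixes, in metres
     * @param elapsedMillis  time between the two fixes, in milliseconds
     * @return speed in metres per second (0 if no time has elapsed)
     */
    public static double speedMps(double distanceMeters, long elapsedMillis) {
        if (elapsedMillis <= 0) {
            return 0.0; // guard against division by zero on duplicate fixes
        }
        return distanceMeters / (elapsedMillis / 1000.0);
    }
}
```

The guard clause matters in practice because GPS providers can deliver two fixes with the same timestamp.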
Performance testing
Performance testing is performed to determine how fast the system performs under a particular workload; it involves using tools to create a series of virtual users who access the user interface simultaneously.
To test the performance of the application, we have used tools to collect data about the execution behavior of the system. Android Studio and the profiling tools provided by the smartphone are used to record and visualize the rendering, memory, compute, and battery performance of the application.
Debug GPU overdraw walkthrough
It helps the developer see where rendering overhead can be reduced (Fig. 13).
Profiling GPU rendering walkthrough
It helps to see how a UI window performs against the 16-ms-per-frame target (Fig. 14).
The following figure shows the Profile GPU Rendering graph of the application (Fig. 15).
The green line represents 16 ms. The blue section of the bar represents the time used to create and update the View’s display lists. The purple section of the bar represents the time spent transferring resources to the render thread. The red section represents the time spent by Android’s 2D renderer issuing commands to OpenGL to draw and redraw display lists. The orange section of the bar represents the time the CPU is waiting for the GPU to finish its work (Android Developer 2016).
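The 16-ms line corresponds to the 60-frames-per-second target (1000 ms / 60 ≈ 16.7 ms per frame): any bar that crosses it marks a frame that missed its rendering budget. A minimal plain-Java sketch of this budget check (illustrative only; in practice the frame times come from the profiler, and the names here are not Android API names):

```java
import java.util.List;

public class FrameBudget {
    // 60-fps target: each frame must render within 1000/60 ≈ 16.67 ms.
    static final double BUDGET_MS = 1000.0 / 60.0;

    // Count frames whose render time crosses the 16-ms-per-frame line.
    public static long jankyFrames(List<Double> frameTimesMs) {
        return frameTimesMs.stream().filter(t -> t > BUDGET_MS).count();
    }

    public static void main(String[] args) {
        List<Double> sample = List.of(8.2, 15.9, 17.4, 31.0, 12.1);
        System.out.println(jankyFrames(sample)); // prints 2
    }
}
```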
Battery Historian charts
The Battery Historian chart graphs power-relevant events over time (Fig. 16).
Memory Monitor walkthrough
The Memory Monitor reports in real time how SensorApp allocates memory; this walkthrough shows the basic usage of the Memory Monitor tool in Android Studio (Fig. 17).
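Besides the Android Studio tool, an application can also sample its own heap usage at runtime through the standard Runtime API. A minimal, self-contained sketch (not part of SensorApp; the class name is illustrative):

```java
public class HeapSampler {
    // Approximate heap currently in use by this process, in bytes.
    public static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.out.printf("used heap: %.1f MiB%n",
                usedHeapBytes() / (1024.0 * 1024.0));
    }
}
```

Periodic samples of this value, logged alongside user actions, give a coarse picture of where the application allocates most of its memory.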
Acceptance testing
Acceptance testing is formal testing conducted to determine whether a software system satisfies user needs. The following are the test cases for acceptance testing.
For user acceptance, a group of 20 visually impaired students was asked to use the application. Their feedback was collected and is summarized below:
- The visually impaired students all found the text-to-speech and Voice Search features very interesting. These features were very useful to them: voice messages guide students through the system at each stage, and Voice Search leads them directly to the desired screen.
- Opinions differed about the application’s structure: some found it very good, while others suggested that the graphical interface could be improved.
- Others found the application very innovative, especially the Outside Distance and Bubble Level features.
- As for the sensors, some found that they were put to good use but suggested that the application could be expanded with additional features (Tables 1, 2, 3, 4, 5, 6).
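The Bubble Level feature singled out in this feedback typically derives tilt from the accelerometer’s gravity vector: a device at rest and lying flat reads roughly (0, 0, 9.81) m/s². A minimal plain-Java sketch of that trigonometry (the names are illustrative, not SensorApp’s actual code):

```java
public class BubbleLevel {
    // Pitch: forward/backward tilt, in degrees, from a 3-axis
    // accelerometer reading (ax, ay, az) of the gravity vector.
    public static double pitchDegrees(double ax, double ay, double az) {
        return Math.toDegrees(Math.atan2(-ax, Math.sqrt(ay * ay + az * az)));
    }

    // Roll: sideways tilt, in degrees, from the y and z components.
    public static double rollDegrees(double ay, double az) {
        return Math.toDegrees(Math.atan2(ay, az));
    }

    public static void main(String[] args) {
        // A device lying flat reports no tilt on either axis.
        System.out.println(pitchDegrees(0, 0, 9.81)); // 0.0
        System.out.println(rollDegrees(0, 9.81));     // 0.0
    }
}
```

Announcing these angles through text-to-speech is what lets a visually impaired student use the bubble level without seeing the screen.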
Recommendations and conclusion
The project has successfully achieved its aims and objectives of bridging the gap between visually impaired learners and other learners. Using the application, the distance, speed, and level of a surface can be measured, and the exact position of North can be determined with the compass. To receive feedback from users, a SensorApp Facebook page was created, and an option to log in from the application was also successfully implemented. The main difficulty encountered was how to make good use of the sensors to enhance learning; this was solved by studying existing systems to see how motion sensors could help students, which led to a set of features that are helpful for visually impaired students.
Future work will make SensorApp a cross-platform application so that most mobile users can access it; currently, it runs only on Android. As the feedback from the visually impaired students indicated, additional features can be added to further increase interaction between users and the application. The application could also be refined so that it serves not only students but also professionals in the construction field. The text-to-speech and speech-to-text features proved to be very helpful.
References
Ali, R., & Irvine, V. (2009). Current m-learning research: A review of key literature. In Proceedings of the world conference on e-learning in corporate, government, healthcare and higher education (pp 2353–2359).
Bantjes, J., Swartz, L., Conchar, L., & Derman, W. (2015). ‘There is soccer but we have to watch’: The Embodied consequences of rhetorics of inclusion for South African children with cerebral palsy. Journal of Community & Applied Social Psychology, 25, 474–486. doi:10.1002/casp.2225.
Beauchamp-Pryor, K. (2013). Visual impairment and disability: A dual approach towards equality and inclusion in UK policy and provision. In N. Watson, A. Roulstone & C. Thomas (Eds.), Routledge handbook of disability studies (pp. 177–192). London: Routledge.
Canadian Federation of the Blind [CFB]. (2013). An absence of intensive training & rehabilitation for blind people in Canada. Retrieved January 7, 2017 from http://www.cfb.ca/the-blind-canadian-volume-7-October-2013.
Chen, C. H., Hwang, G. J., & Tsai, C. H. (2014). A progressive prompting approach to conducting context-aware learning activities for natural science courses. Interacting with Computers, 26(4), 348–359.
Chen, G. D., Ou, K. L., & Wang, C. Y. (2003). Use of group discussion and learning portfolio to build knowledge for managing web group learning. Journal of Educational Computing Research, 28(3), 291–315.
Chiang, M., Cole, R., Gupta, S., Kaiser, G., & Starren, J. (2005). Computer and World Wide Web accessibility by visually disabled patients: problems and solutions. Survey of Ophthalmology, 50(4), 394–405.
Danielsen, C. (2013). National federation of the blind commends department of education for new guidelines on Braille instruction. Retrieved January 18, 2017 from https://nfb.org/national-federation-blindcommends-department-education-new-guidelines-braille-instruction.
Douglas, G., & Long, R. (2003). An observation of adults with visual impairments carrying out copy-typing tasks. Behaviour & IT, 22(3), 141–153.
Engelhart, K. (2010). The Braille crisis. Maclean’s, 123(17), 44.
Fetaji, M. (2008). Literature review of M-Learning issues, M-Learning projects and technologies. In C. Bonk, M. Lee & T. Reynolds (Eds.), Proceedings of E-Learn: World conference on E-Learning in corporate, government, healthcare, and higher education (pp. 348–353). Chesapeake, VA: Association for the Advancement of Computing in Education (AACE).
Font Meme. (2016). Braille Font. Accessed December 28, 2016, from http://fontmeme.com/braille/.
Frohberg, D. D., Göth, C. C., & Schwabe, G. G. (2009). Mobile learning projects—A critical analysis of the state of the art. Journal of Computer Assisted learning, 25, 307–331.
Fuller, M., Healey, M., Bradley, A., & Hall, T. (2004). Barriers to learning: A systematic study of the experience of disabled students in one university. Studies in Higher Education, 29(3), 304–318. doi:10.1080/03075070410001682592.
Gerber, E. (2003). The benefits of and barriers to computer use for individuals who are visually impaired. Journal of Visual Impairment and Blindness, 97(9), 536–550.
GET Gyroscope on IPhone. (2016). Accessed December 28, 2016, from https://www.youtube.com/watch?v=oD4wUERDMOw.
Hatlen, P., & Spungin, S. J. (2008). The nature and future of literacy: Point and counterpoint. Journal of Visual Impairment & Blindness, 102(7), 389.
Hsu, T. Y., Chiou, C. K., Tseng, J. C. R., & Hwang, G. J. (2016). Development and evaluation of an active learning support system for context-aware ubiquitous learning. IEEE Transactions on Learning Technologies, 9(1), 37–45.
Hwang, G. J., Tsai, C. C., & Yang, Stephen J. H. (2008). Criteria, strategies and research issues of context-aware ubiquitous learning. Educational Technology & Society, 11(2), 81–91.
Jacklin, A., Robinson, C., O’Meara, L., & Harris, A. (2006). Improving the experiences of disabled students in higher education. Brighton: University of Sussex. Accessed January 18, 2017, from http://www.sussex.ac.uk/wphegt/resources/bibliographies/disability.
Jonassen, D. H. (2006). Modeling with technology: Mindtools for conceptual change. Columbus, OH: Merill/Prentice Hall.
Liu, C. C., Chen, G. D., Wang, C. Y., & Lu, C. F. (2002a). Student performance assessment using Bayesian network and web portfolios. Journal of Educational Computing Research, 27(4), 437–469.
Liu, T., Wang, H., Liang, J., Chan, T., Yang, J. (2002). Applying wireless technologies to build a highly interactive learning environment. Paper presented at the IEEE international workshop on wireless and mobile technology in education 2002, Växjö, Sweden.
Lourens, H., & Swartz, L. (2016). Experiences of visually impaired students in higher education: bodily perspectives on inclusive education. Disability & Society, 31(2), 240–251. doi:10.1080/09687599.2016.1158092.
Martinez, L., & Garaizar, P. (2014). Learning physics down a slide: A set of experiments to measure reality through smartphone sensors. In IEEE global engineering education conference (EDUCON) (pp. 1153–1156).
MathWorks. (2016). Hardware support. Accessed December 28, 2016, from https://www.mathworks.com/hardware-support/android-sensor.html.
Mobile Science. (2016). The accelerometer. Accessed December 28, 2016, from https://mobilescience.wikispaces.com/file/view/Accelerometer.pdf/514534524/Accelerometer.pdf.
Motion Sensors|Android Developers. (2016). Accessed April 06, 2016, from http://developer.android.com/guide/topics/sensors/sensors_motion.html.
Naismith, L., Lonsdale, P., Vavoula, G., & Sharples, M. (2004). Report 11: Literature review in mobile technologies and learning. Accessed April 06, 2016, from http://www.futurelab.org.uk/resources/documents/lit_reviews/Mobile_Review.pdf.
Newhouse, C. P., Williams, P. J., & Pearson, J. (2006). Supporting mobile education for pre-service teachers. Australasian Journal of Educational Technology, 22(3), 289–311.
Peng, H., Su, Y.-J., Chou, C., & Tsai, C.-C. (2009). Ubiquitous knowledge construction: Mobile learning re-defined and a conceptual framework. Innovations in Education & Teaching International, 46(2), 171–183.
Powell, C., Perkins, S., Hamm, S., Hatherill, R., Nicholson, L., & Harapnuik, D. (2011). Mobile-enhanced inquiry-based learning: A collaborative study. Educause Review. Accessed April 06, 2016, from www.educause.edu/ero/article/mobile-enhanced-inquiry-based-learning-collaborative-study.
Profiling GPU Rendering Walkthrough|Android Developers. (2016). Accessed April 06, 2016, from http://developer.android.com/tools/performance/profile-gpu-rendering/index.html.
Puffelen, C. V. (2009). ICT-related skills and needs of blind and visually impaired people. SIGACCESS Accessibility and Computing, 93, 44–48.
Rekkedal, T., & Dye, A. (2007). Mobile learning and SMS services—Student views on the use of mobile phones in distance education. Accessed April 06, 2016, from http://www.ericsson.com/ericsson/corpinfo/programs/incorporating_mobile_learning_into_mainstream_education/products/nki/nki_wp6_wp7_evaluation_report.pdf.
Rheingold, H. (2003). Smart Mobs: The next social revolution; transforming cultures and communities in the age of instant access. Cambridge, MA: Perseus Publishing.
Sales, A. S., Evans, S., Musgrove, N., & Homfray, R. (2006). Full-screen magnification on a budget: Using a hardware-based multi-display graphics card as a screen-magnifier. The British Journal of Visual Impairment, 24(2), 135–140.
Seely Brown, J., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42.
Sensor Kinetics. (2015). Accessed August 02, 2016, from http://www.rotoview.com/sensor_kinetics_android.htm.
Shala, U., & Rodriguez, A. (2011). Indoor positioning using sensor-fusion in Android Devices. Msc thesis, School of Health and Society, Department of Computer Science, Kristianstad University, Sweden.
Shudong, W., & Higgins, M. (2006). Limitations of mobile phone learning. The JALT CALL Journal, 2, 1.
Skinner, G. (2013). Integration of motion sensing into mobile learning applications. GSTF Journal on Computing (JoC), 3(1), 114.
Swart, E., & Greyling, E. (2011). Participation in higher education: Experiences of students with disabilities. Acta Academica, 43(4), 81–110. Retrieved January 15, 2017, from http://www.sabinet.co.za/abstracts/academ/academ_v43_n4_a4.html.
Tangney, B., Weber, S., O’Hanlon, P., Knowles, D., Munnelly, J., Salkham, A., Watson, R., & Jennings, K. (2010). MobiMaths: An approach to utilizing smartphones in teaching mathematics. In M. Montebello, V. Camilleri, & A. Dingli (Eds.), MLearn 2010 Mobile Learning, proceedings of 9th world conference on mobile and contextual learning (pp. 9–15). Msida: University of Malta.
Traxler, J. (2009). Learning in a mobile age. International Journal of Mobile and Blended Learning, 1(1), 1–12.
UNESCO. (2009). People with visual impairment reading the world/the importance of ICT for visually impaired. Innovative Programmes and Projects. Unesco, Jakarta. Retrieved January 15, 2017, from http://www.unescobkk.org/fileadmin/user_upload/ict/Announcement_e-Newsletter/18Sep09.pdf.
Wang, S. L. (2011). Application of context-aware and personalized recommendation to implement an adaptive ubiquitous learning system. Expert Systems with Applications, 38(9), 10831–10838.
Wiazowski, J. (2013). Audio invasion—Are tactile media in peril? In Proceedings 1st international conference on technology for helping people with special needs, Riyadh, Saudi Arabia.
World Health Organization. (2016). Accessed December 28, 2016, from http://www.who.int/mediacentre/factsheets/fs282/en/.
Yin, P. Y., Chuang, K. H., & Hwang, G. J. (2016). Developing a context-aware ubiquitous learning system based on a hyper-heuristic approach by taking real-world constraints into account. Universal Access in the Information Society, 15, 315–328.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Sungkur, R.K., Bissessur, H. & Camdoo, K. SensorApp: the light at the end of the tunnel for visually impaired learners. J. Comput. Educ. 4, 197–224 (2017). https://doi.org/10.1007/s40692-017-0078-5