Introduction

The rapid proliferation of mobile devices means that nearly all school students now have a parent with a mobile phone, or a phone of their own. The capabilities of small mobile devices such as phones and tablets have also advanced, and the number and variety of devices that can access the World Wide Web has exploded. The affordability and widespread use of mobile phones in developing countries make them worth investing in as a vehicle of learning, especially if we want an educational system that is genuinely egalitarian. This heterogeneity of web-capable devices necessitates device-specific software that makes the most of each device’s capacity, with content tailored to the requirements of different users. With this technology, students may now supplement their e-learning in schools with m-learning at home. Schools that cater to students in lower socio-economic brackets often cannot afford traditional computer labs, but their students can benefit from the lower-cost m-learning alternative.

There are many adaptive learning systems designed for either personal computers or mobile devices, but very few designed for both. In this paper we present the flexible, extensible architecture of ALAS, a system that supports personalized assessment-for-learning on both personal computers (e-learning) and mobile devices (m-learning). “Extensibility” here means (a) applicability in different learning settings without compromising pedagogical soundness, and (b) readiness to accommodate different adaptive techniques for enhancing the overall user experience.

ALAS supports formative adaptive assessments with scaffolds, which provide individualized intervention: real-time, individualized feedback to teachers and learners. ALAS also automatically detects the end-user’s device type and adapts the content to it, allowing students to continue their adaptive assessment and learning in almost any environment. This creates the opportunity for “anytime, anywhere” learning. Our pedagogical approach of individualized, ongoing assessment, scaffolding, and feedback tailored to student needs is maintained, with the additional flexibility provided by mobile devices.

This paper is structured as follows. We first present the architecture of the current ALAS design for personal computers. Next we discuss the architectural extensions for the mobile environment, and the challenges we faced in maintaining the same standard of personalized learning functionality. We then describe a controlled pilot study in which students moved seamlessly between e-learning in personal computer labs and m-learning on a mobile device, comparing the performance, scores, and perceptions of students who used both learning environments and gathering feedback on the integrated approach. We also compare a pure e-learning environment with the integrated learning environment. Finally, we conclude with recommendations and plans for future enhancements and deployment studies.

Literature survey

Implementing innovative approaches such as personalized learning on mobile devices requires a clear pedagogical justification for the new way of doing things. Rogers’s (1995) five conditions of innovation adoption (relative advantage, compatibility, complexity, trialability, and observability) should be demonstrably met. The rationale for m-learning is often to increase access and to enable new pedagogical methods (Kukulska-Hulme and Traxler 2005). m-Learning involves using handheld devices such as mobile phones, iPods, and Personal Digital Assistants (PDAs) to facilitate and enhance the learning process (Traxler 2005). It provides flexible access to learning and, if carefully designed, can overcome some of the interaction limits identified by the Human–Computer Interaction community (Gay et al. 2001).

The need to adapt and profile content for mobile use has been widely recognized among researchers (see Yang et al. 2004). Personalization in mobile learning has been described in terms of two adaptive approaches (Kinshuk et al. 2009): one adapts to the learner, the other to the learner’s surroundings. At present, many learning technologies use mobile devices such as mobile phones (Markett et al. 2006) or PDAs (Waycott and Kukulska-Hulme 2003), aiming to support learning “anywhere, anytime.”

The transition from e-learning to m-learning has resulted in a change of terminology. For instance, “distance learning” has been replaced by “situated learning,” as shown in Table 1 (Laouris and Eteokleous 2005).

Table 1 Terminology comparisons between e-learning and m-learning (Sharma and Kitchens 2004, as adapted by Laouris and Eteokleous 2005)

Mobile learning has met with success in different settings and modes: work-based, distance-based, and independent or field work (Goh and Hooper 2007; Kramer 2005). Most of the published architectures are either m-learning-only systems, or frameworks that adapt existing e-learning content to an m-learning environment. An integrated system in which students may switch between m-learning and e-learning and continue their assessment and learning where they left off has not previously been developed. A further innovation of our system is the tracking of student usage across both the m-learning and e-learning software, which teachers, parents, and school administrators can monitor and act on.

We hope to use mobile devices in schools to supplement students’ traditional e-learning labs, and to provide a cheaper alternative for schools in rural areas that lack e-learning resources such as computers or reliable internet access. ALAS, our Adaptive Learning and Assessment System, is presently used by students in the computer labs of a chain of schools managed by a non-profit organization and catering to both rural and urban areas. Over half of these schools use ALAS in traditional e-learning labs.

In a previously published paper (Nedungadi and Raman 2010), we reported a 21-week study of primary school students who used the e-learning ALAS system for Mathematics for 15 min twice a week. In the majority of cases, students who used the system consistently showed improvements in learning levels and school performance compared to students who attended only the traditional classroom.

However, due to the high cost of computer labs, many schools have an unfavourable student-to-computer ratio, and most students can access the lab only once or twice a week. Some schools do not presently offer ALAS at all, since they have too few computers and ALAS requires one-on-one interaction with the computer. Our lower-cost m-learning solution was developed to serve schools that cannot afford e-learning, and to give students who use the e-learning system additional practice at home. m-Learning systems can support personalized learning and assessment that adapt to the user’s knowledge level, preferences, and device. Unlike the majority of learning systems, ALAS is an integrated system that seamlessly supports both personalized e-learning and m-learning.

Adaptive learning and assessment system (ALAS)

ALAS was designed as a research-based solution to provide individualized education and feedback to K-10 school students. Its computer-based Adaptive Assessment can identify the skills that individual students have mastered, diagnose instructional needs, monitor academic growth over time, support data-driven decisions at the classroom, school, and district levels, and place students into appropriate instructional programs. It contains the following major modules (Fig. 1):

Fig. 1 An architectural overview of the system

  1. The student module primarily includes the learner’s knowledge levels: items mastered, misconceptions, time to master, and so on. These are maintained separately for every independent track. Additional factors such as preferences and pace of learning are gradually added to the model.

  2. The Pedagogical module is the intelligent decision-maker in the system. It consists of the Continuous Evaluation for Learning and the Initial Adaptive Assessment functions. Based on the student model, it determines the skill area from which to present, and the pace at which learning concepts and questions are presented; it thus establishes the form and sequence of instruction.

  3. The Expert module provides adaptive feedback on the student’s response to an item. This feedback takes the form of scaffolds, answers to questions, hints, and so on.

  4. The Context Adaptation Module further modifies the content selected by the pedagogical module based on user preferences and the end devices. This module has been extensively enhanced to support context adaptation to mobile devices.

  5. The Authoring Module defines a methodology to create educational content organized in a structured way, and supports the authoring process with an editing tool.

  6. Feedback Loops: these reports provide insight into the student’s “attendance,” performance, improvement over time, and weak areas. Such support can be useful if a student encounters a difficult concept, seeks clarification on a particular question, or needs outside intervention. The module also provides classroom and group intervention reports for instructors and administrators.

Initial adaptive assessment

The platform uses an adaptive assessment algorithm to determine a student’s initial knowledge level. Parallel adaptive assessments across skill areas present the student with a wide range of questions. Within each skill area, the level is continuously adjusted based on the accuracy and speed of the student’s responses, and assessment ends when the right learning level has been found for each skill area (Fig. 2). The time required for assessment is proportional to the deviation of the learning level from the starting point of the test, which is set to the student’s grade level in school: the assessment is longer for highly advanced and weaker students, and shorter for students performing close to the starting level. The initial assessment often requires multiple sessions and can be taken on a personal computer, a mobile device, or both.

Fig. 2 Varying levels in each track after the initial assessment
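
The paper does not publish the assessment algorithm itself. As a minimal sketch of a search whose cost grows with the student’s distance from the starting level, the JavaScript below (JavaScript being the client-side language the system already uses) gallops outward from the grade level and then binary-searches the resulting bracket. The function names, the 1–12 level range, and the reduction of each response to a single mastered/not-mastered bit (ignoring response speed) are all illustrative assumptions.

```javascript
// Illustrative sketch only: not the published ALAS algorithm.
// askItem(level) poses one question at `level` and returns { correct, seconds }.
function findInitialLevel(askItem, gradeLevel, minLevel = 1, maxLevel = 12) {
  const mastered = (lvl) => askItem(lvl).correct; // response speed ignored here
  let low, high, step = 1;
  if (mastered(gradeLevel)) {
    // Gallop upward, doubling the step, until the student misses.
    low = high = gradeLevel;
    while (high < maxLevel && mastered(Math.min(maxLevel, high + step))) {
      low = high = Math.min(maxLevel, high + step);
      step *= 2;
    }
    high = Math.min(maxLevel, high + step);
  } else {
    // Gallop downward, doubling the step, until the student succeeds.
    low = gradeLevel;
    high = gradeLevel - 1;
    while (low > minLevel && !mastered(Math.max(minLevel, low - step))) {
      high = Math.max(minLevel, low - step) - 1;
      low = Math.max(minLevel, low - step);
      step *= 2;
    }
    low = Math.max(minLevel, low - step);
  }
  // Binary search inside the bracket for the highest mastered level.
  while (low < high) {
    const mid = Math.ceil((low + high) / 2);
    if (mastered(mid)) low = mid;
    else high = mid - 1;
  }
  return Math.max(low, minLevel);
}

// Example: a simulated student whose true level in this track is 7.
const probe = (level) => ({ correct: level <= 7, seconds: 10 });
console.log(findInitialLevel(probe, 5)); // -> 7
```

A student starting near their true level answers only a handful of items per track, while a student far above or below it triggers the longer galloping phase, matching the behaviour described above.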

Knowledge organization

The subject curriculum, based on National Standards, is divided into multiple learning tracks that comprise fairly independent skill areas (Fig. 3). A track is a set of specific skills in one area, arranged in order of increasing difficulty. For example, the learning tracks of Mathematics include Probability, Geometry, Computation, and so on. The Learning Objects (LOs) consist of the question bank and related audio, text, image, and media tutorials, which include animation, simulation, and video. The question bank includes a wide range of curriculum-mapped questions of various difficulty levels and types: higher-order thinking skill (HOTS), inference, computation, word problems, comprehension, and questions that map onto related tutorials. Within each track, the LOs are organized in ascending order of difficulty, such that the prerequisites for any given LO are at a lower level than the LO itself. Though each learning track is fairly independent, an LO may have prerequisite skills that belong to another track.

Fig. 3 Knowledge organization of mathematical concepts
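
As a concrete illustration of this organisation, the sketch below encodes two abbreviated tracks and checks the ordering invariant just described. The field names and sample content are our own inventions; the paper does not specify ALAS’s internal schema.

```javascript
// Hypothetical encoding of tracks and Learning Objects (LOs); not ALAS's schema.
const curriculum = {
  computation: [
    { id: "comp-01", level: 1, skill: "single-digit addition", prereqs: [] },
    { id: "comp-02", level: 2, skill: "multi-digit addition", prereqs: ["comp-01"] },
    { id: "comp-03", level: 3, skill: "multiplication", prereqs: ["comp-02"] },
  ],
  geometry: [
    { id: "geom-01", level: 1, skill: "naming shapes", prereqs: [] },
    // A prerequisite may also come from another track:
    { id: "geom-02", level: 2, skill: "perimeter", prereqs: ["geom-01", "comp-02"] },
  ],
};

// Invariant from the text: within a track, every same-track prerequisite of an
// LO must sit at a strictly lower level than the LO itself.
function checkTrackOrdering(tracks) {
  return Object.values(tracks).every((track) => {
    const levelOf = new Map(track.map((lo) => [lo.id, lo.level]));
    return track.every((lo) =>
      lo.prereqs.every((p) => !levelOf.has(p) || levelOf.get(p) < lo.level)
    );
  });
}

console.log(checkTrackOrdering(curriculum)); // -> true
```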

Student model

The student model may include domain-specific information (Self 1974) such as how much and what the student has mastered, the pace of learning, and the concepts not mastered. The result of the initial assessment is a “knowledge state,” the set of concepts or skills that the student has already mastered; this forms the initial student model. The student model is maintained separately for each skill area or track, and is continuously updated based on student performance, speed, and preferences.
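
A minimal sketch of such a per-track model is given below. The fields mirror the quantities named in the text (mastery, pace, items under review), but the names, the exponentially weighted pace estimate, and the single-response mastery rule are simplifying assumptions; a real system would require repeated success before marking an item mastered.

```javascript
// Hypothetical per-track student model; not the ALAS schema.
function emptyTrackModel(startLevel) {
  return {
    level: startLevel,    // set by the initial adaptive assessment
    mastered: new Set(),  // LO ids the student has demonstrated
    learning: new Set(),  // LO ids being taught, practised, or assessed
    reviewQueue: [],      // missed items, re-presented periodically
    avgSeconds: null,     // running pace estimate
  };
}

// Continuous update after every response.
function recordResponse(model, loId, correct, seconds) {
  model.avgSeconds =
    model.avgSeconds === null ? seconds : 0.8 * model.avgSeconds + 0.2 * seconds;
  if (correct) {
    model.mastered.add(loId);
    model.learning.delete(loId);
  } else {
    model.learning.add(loId);
    model.reviewQueue.push(loId);
  }
}

const geometry = emptyTrackModel(4);
recordResponse(geometry, "geom-01", true, 12);
console.log(geometry.mastered.has("geom-01")); // -> true
```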

Continuous evaluation for learning

Effective human tutoring, as Self (1974) argues, integrates knowledge of the subject, knowledge of teaching methods, and knowledge of the student. Similarly, Ohlsson (1987) suggests that an Intelligent Tutoring System must have a model of instructional content that determines what to teach, a teaching model that determines how to teach, and a student model that determines who to teach. Accordingly, we designed a Knowledge Base that organizes the subject content as independent tracks or skill areas, and a student model that represents the knowledge of the student. Our pedagogical methods include tutorials, and our assessment-for-learning techniques include motivational messages, scaffolds, hints, and prerequisite reviews. The student model is initially set to the student’s knowledge level as determined by the initial adaptive assessment, and the pedagogical module then generates a unique learning path from it to provide individual instruction.

The system is built on multiple tracks, each covering a different skill area. These are concurrently active in order to maintain student interest and to reinforce learning (Fig. 4). A student might therefore see a question in one track, followed by a tutorial for a new concept in a second track, followed by prerequisite skills in a third track. LOs from tracks at lower levels have an increased probability of being shown, in order to raise the learning level of those tracks.

Fig. 4 Mixed presentation of skills
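
One simple way to realise this bias toward lagging tracks is weighted random selection, sketched below. The specific weighting rule (one plus the track’s lag behind grade level) is an assumption for illustration, not a published ALAS formula.

```javascript
// Hypothetical weighted mixing of concurrently active tracks.
function pickNextTrack(tracks, gradeLevel) {
  const names = Object.keys(tracks);
  // A track's weight grows with how far it lags behind grade level.
  const weights = names.map((n) => 1 + Math.max(0, gradeLevel - tracks[n].level));
  const total = weights.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (let i = 0; i < names.length; i += 1) {
    r -= weights[i];
    if (r <= 0) return names[i];
  }
  return names[names.length - 1]; // guard against floating-point leftovers
}

// Example mirroring the next section: probability lags behind grade level 8,
// so it is drawn roughly three times as often as either other track.
const state = {
  geometry: { level: 9 },
  multiplication: { level: 8 },
  probability: { level: 6 },
};
console.log(pickNextTrack(state, 8));
```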

With ALAS, students progress in each subarea at their own pace: they can advance to topics ahead of their grade level, learn at grade level, or work on the remedial materials needed to grasp their current level. Movement within each track is based on mastery of concepts (Fig. 5).

Fig. 5 Movement within one track

The model auto-adjusts to the student’s growing knowledge. It continuously evaluates student performance, interactions and errors and adjusts both the content and the pace of learning accordingly. Various intervention techniques, like scaffolds, hints, and tutorials, are applied according to the student’s needs.

The system automatically gives additional time and emphasis to skill areas in which the student has difficulty. For example, if a student is above grade level in geometry, at grade level in multiplication, and below grade level in probability, the system will pose more questions in the lowest-level skill area: probability. The system maintains a list of items that were not mastered, for further review, and periodically presents them to the student. If an LO is still not mastered after various interventions and reviews, the system stops attempting to teach it and adds it to a list of items not mastered despite repeated interventions. Hence, at any point in time, every track includes a list of items mastered and items being learnt, practiced, or assessed; items that need additional review, and items that the student failed to master despite all interventions, can thus be determined.
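
This bookkeeping might be realised as below, reusing the track-model fields from the earlier sketch; the limit of three interventions before giving up is an assumed constant, not a documented ALAS parameter.

```javascript
// Hypothetical review bookkeeping; MAX_INTERVENTIONS is an assumption.
const MAX_INTERVENTIONS = 3;

function reviewOutcome(model, attemptsByLo, loId, correct) {
  const attempts = (attemptsByLo.get(loId) || 0) + 1;
  attemptsByLo.set(loId, attempts);
  model.reviewQueue = model.reviewQueue.filter((id) => id !== loId);
  if (correct) {
    model.mastered.add(loId);            // review succeeded
  } else if (attempts >= MAX_INTERVENTIONS) {
    model.notMastered.add(loId);         // stop teaching; flag for the teacher
  } else {
    model.reviewQueue.push(loId);        // present again in a later session
  }
}

const model = { mastered: new Set(), reviewQueue: [], notMastered: new Set() };
const attempts = new Map();
reviewOutcome(model, attempts, "geom-02", false);
reviewOutcome(model, attempts, "geom-02", false);
reviewOutcome(model, attempts, "geom-02", false);
console.log(model.notMastered.has("geom-02")); // -> true after three failures
```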

Architectural environment for mobility

The personalized nature of m-learning gives it the potential to be an extension of the adaptive learning systems used in Kindergarten to Grade 10 classrooms, provided that it meets the logistical challenges posed by this goal. As Kukulska-Hulme (2007) notes, the physical attributes of these devices (small screen size, weight, inadequate memory, and short battery life) can make them unwieldy. Further, content, software applications, and usability can be limited by mobile devices’ lack of built-in functions, the difficulty of adding applications, the challenges of learning to work with them, and incompatibilities between applications. The speed and reliability of local networks can be another limiting factor.

User interface

The Adaptive Learning System comprises an adaptation engine and content, both of which needed to be extended for mobile devices. As most users would use personal computers and mobile devices interchangeably, it was important to design a mobile user interface with a look and feel similar to that provided by the computer (Fig. 6).

Fig. 6 Sample question with hint

After looking at the various heterogeneous devices and evaluating a range of mobile application-building software, we decided to use mobile browser-based software. This appeared to be the most scalable way to develop for and support multiple platforms: since the extensions are browser-based, users need not download applications that may be too large for their devices.

We further categorised mobile devices into two groups (Fig. 7): “new generation” mobile devices such as smartphones and tablets, and lower-end mobile devices. The new generation devices allow a rich user interface using HTML5 and CSS3; for the lower-end devices, lightweight content is provided using HTML and CSS. For both categories, JavaScript and CSS drive the front-end client scripting. The majority of applications used HTML-based pages, while XML-based web pages were used for applications that needed to re-use functionality.

Fig. 7 Content categorization for m-learning
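
A client-side capability check along these lines is sketched below. The paper does not say how ALAS distinguishes the two groups; feature tests such as these, rather than matching device model names, are one common approach, and the class name applied to the page is our invention.

```javascript
// Hypothetical two-tier capability check (runs in the browser).
function deviceTier() {
  // HTML5 test: can the device draw to a <canvas>?
  const canvas = document.createElement("canvas");
  const hasHtml5 =
    typeof canvas.getContext === "function" && !!canvas.getContext("2d");
  // CSS3 test: does the browser report support for a CSS3 property?
  const hasCss3 =
    typeof CSS !== "undefined" && !!CSS.supports &&
    CSS.supports("border-radius", "4px");
  return hasHtml5 && hasCss3 ? "rich" : "lightweight";
}

// The tier then selects which stylesheet and content bundle to request.
document.documentElement.className += " tier-" + deviceTier();
```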

Processing power

Another major challenge in adapting a PC platform to mobile devices was the difference in processing power. We needed to optimise the application for mobile processors in order to cater to students using a variety of mobile devices. We thus required a system that would support heterogeneous platforms such as iOS, Android, Symbian, and other JavaScript-enabled micro-browsers. Most mobile platforms do not support memory-intensive technologies such as Flash, which we had used for rich content presentation on the web, so we reworked processor-intensive logic without reducing the functionality of the Adaptive Learning System.

Screen size, resolution, and orientation

It was important to design a mobile user interface consistent with its web-based counterpart, so that students noticed little difference when accessing the site from different platforms. Our initial survey showed that a small screen limits readability and interactivity, especially when the device is a low-end mobile. Accordingly, we maximised readability by using suitable CSS styles, wrapping the text, and automatically re-sizing image content to the screen. We reduced file sizes (Table 2) to suit the smaller screens and lower processing power, while retaining the same level of functionality.

Table 2 File size comparisons of the same tutorial on various platforms

The context is adapted by a platform-detection algorithm that identifies the device and recommends an appropriate interface. The new generation mobiles get a rich interface through HTML5 and CSS3, with interaction similar to the web-based system; the lower-end mobiles get a simple, lightweight interface that works within the limitations of the mobile keypad. For the higher-end devices we were able to maintain the same look and feel; for the lower-end, entry-level phones, the presentation of the learning content had to be significantly modified.
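
On the server side, the detection step might map the User-Agent header to a presentation profile, as in the sketch below; the patterns and profile names are illustrative, not ALAS’s actual rules.

```javascript
// Hypothetical User-Agent to presentation-profile mapping.
const PROFILES = [
  { pattern: /iPad|Android(?!.*Mobile)/i, profile: "tablet-html5" },
  { pattern: /iPhone|Android.*Mobile/i, profile: "phone-html5" },
  { pattern: /Symbian|Series60|MIDP/i, profile: "phone-basic" },
];

function detectProfile(userAgent) {
  const hit = PROFILES.find((p) => p.pattern.test(userAgent));
  return hit ? hit.profile : "desktop"; // default to the full web interface
}

console.log(detectProfile("Mozilla/5.0 (Linux; U; Android 2.3; Mobile)")); // phone-html5
console.log(detectProfile("Nokia6300/2.0 Profile/MIDP-2.1"));              // phone-basic
```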

Input capabilities

Since ALAS is an interactive learning system, it was critical to adapt it so that a student could interact with it easily. Such adaptations allowed students to use, for instance, the touch screen on iPads, or the up and down keys on mobile phone keypads, in place of mouse input (Fig. 8).

Fig. 8 System automatically adapts to various devices

The navigation convention used for the web interface is adapted to the mobile device, and additional navigation options may be offered to make interaction easier for users.
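
For example, a keypad fallback for multiple-choice items might look like the sketch below, where the up and down keys move a highlight through the answer options and the Enter key stands in for a mouse click. The selectors, class names, and use of modern key names are illustrative.

```javascript
// Hypothetical keypad navigation for devices without touch or mouse input.
const options = Array.from(document.querySelectorAll(".answer-option"));
let selected = 0;

function highlight(i) {
  options.forEach((el, j) => el.classList.toggle("selected", j === i));
}

document.addEventListener("keydown", (e) => {
  if (options.length === 0) return;
  if (e.key === "ArrowDown") selected = Math.min(selected + 1, options.length - 1);
  else if (e.key === "ArrowUp") selected = Math.max(selected - 1, 0);
  else if (e.key === "Enter") options[selected].click(); // submit the choice
  else return;
  e.preventDefault();
  highlight(selected);
});
highlight(selected);
```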

Service adaptation and resource discovery

ALAS determines the learning object to be presented based on the student’s knowledge level. For mobile platforms, additional parameters such as device type, screen size, layout, and connectivity are detected, logged, and used to modify the presentation of the selected item on the server side. Client-side modules control further adaptation to the size, colour, and orientation of the screen.

The resource discovery engine detects the availability of various browser features and network conditions, such as bandwidth, plug-ins, and JavaScript support. This module defers the parsing of JavaScript to the end of the page and combines multiple scripts, reducing the blocking of page rendering and improving page-load speed. It also ensures that the browser caches the content, and applies lossy compression to images according to the end device it detects.
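
Two of these optimisations are sketched below: loading one combined script bundle only after the page has rendered, and requesting an image variant matched to the device tier. The URLs and the quality and width parameters are hypothetical, not ALAS endpoints.

```javascript
// Hypothetical deferred loading of a single combined script bundle.
window.addEventListener("load", () => {
  // Injected after the load event, so it cannot block the initial render;
  // one bundle replaces many small scripts.
  const s = document.createElement("script");
  s.src = "/static/alas-bundle.min.js";
  document.body.appendChild(s);
});

// Hypothetical image URL builder: the server re-encodes with lossy
// compression and a smaller width for low-end devices.
function imageUrlFor(src, tier) {
  const quality = tier === "lightweight" ? 40 : 80;
  const width = tier === "lightweight" ? 240 : 640;
  return src + "?q=" + quality + "&w=" + width;
}

console.log(imageUrlFor("/media/fractions-tutorial.png", "lightweight"));
// -> /media/fractions-tutorial.png?q=40&w=240
```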

Learning interventions

ALAS tracks all input by the student, including answers, hints requested, time spent on an item, tutorials viewed, and so on. It compiles a complete history of all student activity and provides scaffolding, that is, the precise help that enables a learner to achieve a specific goal that would not be possible without some kind of support (Sharpe 2006), according to each student’s needs. Based on its analysis of student data, intervention in the form of thinking clues, tutorials (Fig. 9), or reviews is provided (Nedungadi and Raman 2011).

Fig. 9 System automatically presents tutorials, hints, scaffolds or prerequisites based on student response
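
The escalation implied by Fig. 9 can be expressed as a simple decision rule, as in the sketch below; the ordering (hint, then tutorial, then prerequisite review) and the attempt thresholds are our reading of the text rather than a published ALAS rule.

```javascript
// Hypothetical escalation of interventions after incorrect responses.
function nextIntervention(attempts, hintUsed, tutorialSeen) {
  if (attempts === 1 && !hintUsed) return { kind: "hint" };     // thinking clue
  if (!tutorialSeen) return { kind: "tutorial" };               // reteach the concept
  return { kind: "prerequisite-review" };                       // step back a level
}

console.log(nextIntervention(1, false, false)); // { kind: 'hint' }
console.log(nextIntervention(2, true, false));  // { kind: 'tutorial' }
console.log(nextIntervention(3, true, true));   // { kind: 'prerequisite-review' }
```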

Powerful feedback loop and reports

According to Quillen (2011), “Most apps—which basically are software programs designed to run on smartphones, cell phones, and other hand-held devices—don’t allow teachers to monitor student progress or garner student data in the same way that’s typically possible with educational programs operated through a laptop or desktop computer.” Our system addresses this very limitation: it adapts to mobile devices yet provides the same progress reports (Fig. 10), irrespective of the end device used.

Fig. 10 Real-time feedback to teachers about student and class performance

User session data from the mobile device are continuously logged and synchronized to the server using asynchronous logging techniques. The user can thus continue from exactly where he or she left off, even if the connection is lost or the device changes. If the student logs into a new device, all session data are transferred to the new device and the old session auto-expires. Further, all data logs from both m-learning and e-learning are available in one integrated system that provides students with an integrated learning report. Teachers can get up-to-the-minute reports on students that include the amount of time they were logged in, the number of tutorials they viewed, their scores for each session, the time they spent on each item, and the time they spent on each device.
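
A sketch of this queue-then-sync pattern is given below; the endpoint paths, the payload shape, and the use of fetch and localStorage are assumptions for illustration, not the ALAS API.

```javascript
// Hypothetical asynchronous session logging with resume (browser-side).
const PENDING_KEY = "alas-pending-events";

function logEvent(event) {
  // Queue locally first, so nothing is lost if the connection drops...
  const pending = JSON.parse(localStorage.getItem(PENDING_KEY) || "[]");
  pending.push({ ...event, at: Date.now() });
  localStorage.setItem(PENDING_KEY, JSON.stringify(pending));
  flush(); // ...then push to the server without blocking the learner.
}

async function flush() {
  const pending = JSON.parse(localStorage.getItem(PENDING_KEY) || "[]");
  if (pending.length === 0) return;
  try {
    await fetch("/api/session-log", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(pending),
    });
    // Remove only what was sent; events logged during the request survive.
    const now = JSON.parse(localStorage.getItem(PENDING_KEY) || "[]");
    localStorage.setItem(PENDING_KEY, JSON.stringify(now.slice(pending.length)));
  } catch (err) {
    // Offline: events stay queued and are retried on the next logEvent call.
  }
}

// On login from any device, the server returns the last position so the
// student resumes exactly where they left off; the old session auto-expires.
async function resumeSession(studentId) {
  const res = await fetch("/api/session/" + studentId + "/latest");
  return res.json(); // e.g. { track: "geometry", loId: "geom-02" }
}
```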

Pilot study: e-learning and the integrated e-learning and m-learning experimentation

A pilot study was undertaken to explore students’ use of ALAS on mobile devices, and to ask whether m-learning can supplement or replace the current e-learning.

We examined the use of ALAS content in an integrated e-learning and m-learning model: a method of instruction that adds a mobile device to an already existing e-learning system. The aim of this study was only to compare the performance, scores, and perceptions of students who used both learning environments, and to gather feedback on the integrated approach.

Most learning systems for mobile devices are built exclusively for them, and their case studies compare m-learning with traditional classroom learning. Our system differs in that it allows a student to use either learning environment exclusively, or to switch between e-learning and m-learning at any time and so use both.

The ALAS interface is functionally equivalent across personal computers and mobile devices, so that students can engage with it whenever and wherever they like. Our goal with ALAS was thus to ensure that the functionality offered by the e-learning environment was not significantly reduced in the m-learning one.

We assume that students will use a mobile device for at least as much time as they would spend with a PC in the e-learning lab, and probably more, since mobile devices can easily be carried between home, classrooms, and science labs.

If performance in the m-learning and e-learning environments is comparable, then m-learning with ALAS is a cost-effective alternative to e-learning even if students use it for only the same amount of time. If students spend additional time with it, or use mobility characteristics specific to the mobile device, it could even be a better alternative than e-learning alone.

There are many obvious pedagogical benefits of mobile devices that are outside the scope of this pilot study, such as portability, ubiquity, and location awareness.

The sample

There were two experimental groups, both from the eighth grade and both using the same science topic for our study.

Experimental group 1 (EG1) (n = 22) worked on the pilot study using the integrated m-learning and e-learning environment. These students had previous exposure to ALAS in the e-learning environment.

Experimental group 2 (EG2) (n = 39) was a different set of comparable students who used only the e-learning environment. Data for EG2 were taken from the existing e-learning logs.

Measures

There were two measures used to analyze the results obtained from the integrated learning experimentation.

  1. Survey measure: a 27-item questionnaire (Appendix A) was constructed to assess student perceptions of, and attitudes towards, the user interface and learning in general, as well as the convenience of e-learning vs. m-learning. Twenty-five items were 5-point Likert-scaled, and two items were multiple choice. The survey was constructed by the authors, based on their experience working with school administrators, teachers, and students while designing and implementing ALAS.

  2. Quantitative measure: assessments for the science learning topics taken by the students. Students typically work on different areas within the same topic, depending on their ability and performance. The data relevant here (response accuracy, number of questions attempted, response time, and use of the help or hint buttons) are automatically logged by ALAS.

With ALAS, the accuracy and rate of student performance that the system records are not proportional to the number of items attempted; they depend instead on each item’s difficulty and grade level. This means that we can only compare a student’s accuracy and pace with that same student’s, not with other students’. This is why the study compares the same student attempting the assessment on the mobile device and on the personal computer. This would not be a real-life classroom scenario, but it shows us whether results on the mobile device and the personal computer are comparable. If they are, then any further mobility-specific studies represent improvements over the present e-learning system.

Procedure

The students were already using the ALAS system in their e-learning labs and thus were familiar with it. The pilot project initially met with resistance when presented to school principals, since it was against the rules for students to use mobile phones in school. Hence, we provided the school with 25 smartphones that were connected by wireless network to the local server at the school. The server was configured to allow access only to ALAS and a few other educational websites pre-approved by the school. Though the smartphones could have been taken out of the classrooms, we did not allow this during the study, so that the amount of time spent on personal computers and on smartphones remained the same.

One week before the study, teachers were trained in how to integrate the mobile devices into their weekly classroom activities. Students were given time to familiarize themselves with the smartphones by practicing assignments on unrelated topics. Each student had an individual account, and could thus log in and continue their personalized learning or assessment in either learning environment, at their convenience. Researchers were present to assist the students in exploring sample lessons and taking sample tests on the mobile devices. Following this, students participated in ALAS’s formative assessment of a topic previously taught in the classroom.

In most schools that currently use ALAS, students are taken to the computer lab once a week. If a student is absent or misses the lab period for any reason, or if the electricity or network is unavailable, the student cannot work with the computer system that week. Moreover, the number of computers is approximately half the number of students, so one group of students has to complete its formative assessment before the other group can start; after about 15 min, the groups switch.

Our integrated experimentation study maintained this same time-frame, with students from EG1 alternating between e-learning and m-learning. After 15 min, students working on the smartphones logged into the personal computers, with ALAS allowing them to pick up at the same learning point where they had left off, and students working on personal computers switched to smartphones.

A survey (Appendix A) was conducted after 1 week of the study to understand student perceptions of e-learning vs. m-learning.

Results analysis

The integrated learning experiment can be divided into the following phases:

  1. Meeting with the students and teachers about the pilot study

  2. Students using ALAS in both the e-learning and m-learning environments

  3. Completion of the survey questionnaire by the students after the pilot study

Quantitative measures

ALAS automatically logs the time taken per question, the score, and whether the student used help or a hint; only correct answers given without help are scored as correct. A quantitative analysis of the usage logs for EG1 (Table 3) compares the average time per session, average time per question, and percentage scores on the mobile device and the personal computer.

Table 3 EG1: mean, standard deviation, and p (t test, 2-tailed, 2-sample, equal variances) after removing outliers

The differences in average overall time per session and in average time (in s) per question between the mobile device (M = 564, SD = 367.4; M = 24.3, SD = 12.5, respectively) and the personal computer (M = 352.2, SD = 227.0; M = 14.1, SD = 12.5) are significant (p < .05). However, the difference in Score % falls just short of significance (p = .058). When controlling for outlying low scores, users take more time on the mobile device but perform equally across devices.

Comparison between EG1 and EG2

Table 4 compares the e-learning average time per question and average score for group EG1 (the integrated group) with those for group EG2 (e-learning only). The similarity in average scores and time per question across the e-learning usage suggests that our EG1 group was interacting with the software in normal ways.

Table 4 EG1 and EG2 mean and standard deviation for e-learning

Survey measures

The survey results suggest a great deal of student interest in m-learning as an addition to e-learning, and a positive response from school administrators as well. Administrators and teachers, though initially reluctant to use the system because of concerns over the safekeeping of the smartphones and the maintenance of their accessibility to students, liked the increased learning time and flexibility afforded by the mobile devices.

All the users were familiar with the e-learning environment, as they used it regularly, and they understood that the pilot study was for feedback on the m-learning experience. However, researchers observed that some users did not participate in good faith: some participants answered blindly, simply pressing buttons at random to get through the task of using the mobile. Users who scored zero on the mobile device but over 33 % on the personal computer were removed from the remaining analysis.

Student reactions were mixed when comparing e-learning to m-learning (Fig. 11).

Fig. 11 Survey results

  • 71 % of students said that m-learning gives them greater control over their learning, but only 55 % felt that learning on the mobile device was easier than learning on the personal computer.

  • 76 % preferred the larger screen size offered by the computer to the small screen, and 53 % said that it was harder to find the hint button on the mobile device. All students were used to working in the computer lab, but many were new to m-learning.

  • 88 % felt that they had the knowledge to use the mobile device for learning.

  • 76 % said they found it easy to use the mobile device for learning and that learning using it was fun.

  • 76 % said that students need support for m-learning while 53 % said that based on this experience they would spend more time learning to use the mobile device.

  • 41 % said that learning using the mobile device was easier than learning using the computer.

  • 47 % said that it took them longer to complete the session using the mobile device, whereas 41 % said it took about the same time. Only 12 % said that it took longer at the computer.

One user who preferred the personal computer said that on the mobile, the squares to touch for the answer were so small that they often chose the right answer but pressed the wrong square by accident; making the squares bigger made the question too big for the screen. Another user said, “I prefer computers to mobile devices. I feel that the response time of mobile devices is quite slow compared to personal computers.”

We speculate that this may be largely because they were already used to the program on personal computers, but were new to the interface of the mobile device. Many of the students had not worked with smartphones before, and responded with enthusiasm to the touch features and the mobility that they offered.

A user who preferred the mobile device said, “I would prefer to learn using a mobile device with a larger screen so that I can study whenever I like: in any room in my house or school or under a tree. Provided this is affordable for my parents.”

Discussion and conclusion

In this paper, we discussed the architecture of an adaptive learning system that personalizes assessment and learning on both personal computers (e-learning) and mobile devices (m-learning), and allows students to switch seamlessly between these two forms of learning. Our strategy of providing individualized assessment tailored to individual student needs was maintained across both forms of learning, with the additional flexibility provided by mobile devices. The majority of m-learning case studies compare m-learning with traditional classroom learning using the characteristics specific to mobile devices. In this study, however, the goal was to ensure that the functionality offered by the integrated system in the e-learning environment was available in the m-learning one, and to understand the performance differences and the user experience with the m-learning system. Hence this study was deliberately limited and restricted so as to maintain the same parameters in both environments.

The lower cost, transportability, and flexibility of mobile learning, as Zurita and Nussbaum (2004) have discussed, give it the edge over the traditional, more expensive computer lab setup for schools with more socio-economically challenged student populations. The m-learning approach allows similar supplementary learning to be provided to schools that cannot afford expensive computer labs.

Our pilot study showed that students could indeed move seamlessly between the e-learning and m-learning systems without significantly affecting learning outcomes, and it provides empirical analysis of students’ perceptions and their achievements. Furthermore, teachers could monitor both individual and group performance irrespective of the end learning environment used.

In the pilot study, student interest and engagement in learning with mobile devices was comparable to their response to computer labs; the practical benefits of m-learning therefore need not come at the cost of student motivation. Students spent more time overall and more time per question on the mobile devices than on the personal computer, while performance scores were comparable. The additional time on m-learning can be reduced to some extent with an improved user interface, but may never match the personal computer, since additional steps such as scrolling or clicking through to a sub-menu will remain necessary to work around the smaller screen size.

Though performance in the m-learning environment was slightly lower than in the e-learning environment, students were comfortable with it, and scores were comparable in both environments. If students have access and are able to spend additional time with m-learning, it could be a good alternative, or supplement, to the e-learning-only environment.

Future enhancements and planned studies include pedagogical items that take advantage of mobility-specific features such as location awareness and collaboration.