Abstract
Online learning has grown in popularity, leading to more widespread use of online exams. Online exams have become a preferred method of assessment in both online and traditional learning environments, and they provide various benefits for learners and the learning process when used appropriately within online learning programs. The current study investigates the academic achievement of online learners in online exams as compared to traditional exams and analyzes their perceptions of online exams. The study was conducted at a state university in Turkey during the 2018 spring semester. The participants were 163 vocational college level online learners. The research was designed as a mixed-methods study: learners’ academic achievement and perceptions were treated as quantitative data, and learners’ opinions as qualitative data. Quantitative analysis shows that learners report positive attitudes towards online exams and that their academic achievement differed significantly between online and traditional exams. The majority of the learners pointed out that online exams are efficient, usable, and reliable, while others indicated that the exam duration was insufficient and raised concerns about potential technical problems during the implementation of online exams. Understanding the benefits and challenges of online exams will help institutions plan their institutional road map.
1 Introduction
New assessment strategies have emerged and gained popularity along with the increase in utilization of online learning environments. This growth prompts questions about assessment strategies: “Is it the same? Is it different? How best to do it?” (Conrad and Openo 2018, p. 6). With the aim of sustaining online learning’s advantages, universities are increasingly utilizing online exams as a method of assessment and evaluation. For instance, 12% of exams at an Australian university in 2012 were conducted via online means, covering courses of several departments such as Arts, Commerce, Engineering, Humanities, and Science (Hillier 2014). Tampere University in Finland has been applying online exams for proficiency and retake exams for several years, and began to employ the university-wide system “EXAM” (Laine et al. 2016) in 2014. Another example of online exam application comes from the Saudi Electronic University, which recently initiated online exams in the context of its learning management system (Alsadoon 2017).
In Turkey, online exams have recently become popular, with certain nationally implemented exams such as foreign language and driving license theory tests conducted within an online environment. Moreover, the majority of Turkish universities employ online exams for various courses, especially blended courses structured to cover both face-to-face and online sessions and those conducted entirely online.
While online exams require an internet connection, e-exams can be implemented on computers without one. Both types of exam have advantages and disadvantages for learners, instructors, and institutions. Advantages include time reduction, test security, secure data storage, quick results, cost effectiveness, paper saving, and automated record keeping for item analysis and learning analytics. In contrast, disadvantages include technical problems, help required from external resources, and examinees’ feelings of tiredness due to technological tool usage (Bayazit and Askar 2012; Gvozdenko and Chambers 2007; Nguyen et al. 2017; Parshall et al. 2002; Smith and Caputi 2007).
The current study analyzed learners’ academic achievement and perceptions in online exams at a public state university. One of the leading centers of online learning in Turkey, the distance education institution initiated the use of online exams in 2015, with the goal of performing online exams in place of makeup exams. Since the 2016 spring semester, online exams have been used as midterm exams for vocational college level learners enrolled in an online program. Although the institution has considerable experience in the organization and implementation of online exams, online learners’ experiences with these exams have not been evaluated. The current literature lacks investigations into learners’ perceptions and performance with regard to online exams. Exams are always a source of stress, which can be exacerbated in new and different environments. For this reason, conducting online exams can be an important decision for institutions. In this regard, this study’s main aim is to identify learners’ perceptions of online and traditional exams.
2 Literature review
While assessment is an essential component of the learning process, there is considerable debate in educational institutions on measuring learners’ performance and outcomes (Conrad and Openo 2018). Assessment can be described as the ongoing process of collecting, analyzing, and categorizing data about learner performance; when it takes place through grading, it is summative assessment. The primary difference between summative and formative assessment is grading: formative assessment focuses on the process of improving teaching and learning, while summative assessment focuses on grading during specific periods. Traditional exams are the most common assessment methods for determining learning and performance. Alternative methods include observing student performances, commenting on student products, analyzing progress, and conducting interviews. From the constructivist perspective in particular, projects, portfolios, presentations, or essays can support fair assessment since they consider individual differences (Rovai 2000).
Even though formative assessment strategies serve better to address individual differences, it is not easy to conduct formative assessment in online programs that serve many learners at the same time. However, with the help of information and communication technologies, online exams have significant potential to provide equality of opportunities.
In the context of electronic exams, Dermo (2009) proposed investigating learners’ opinions through six main dimensions: (1) affective factors, (2) validity, (3) practical issues, (4) reliability, (5) security, and (6) learning and teaching. Affective factors focus on how learners feel during electronic exams, validity factors on the appropriateness of electronic exams for learners’ studies, practical issues on challenges and benefits, and reliability factors on the fairness of electronic exams. Security factors examine whether electronic exams are a secure alternative to paper-based exams, while learning and teaching factors focus on the role of electronic exams in education.
2.1 Learners’ performance in online exams
A review of the previously published literature reveals that only a few studies have analyzed learners’ academic achievement in online exams. Ćukušić et al. (2014), Johnson and Kiviniemi (2009), Johnson (2006), and Orr and Foster (2013) analyzed the effect of online self-assessment tests on undergraduate learners’ exam performance and found a significant relation: learners’ involvement in the tests led to higher achievement in exams. Kemp and Grieve (2014) compared undergraduates’ preferences and academic performance for instructional materials and assessment provided online versus in the traditional classroom. Although learners generally indicated a preference for learning activities (i.e. activities other than exams) conducted face-to-face, no significant difference was found in test performance between the two modalities. Werhner (2010) compared the achievement of online versus on-campus learners through the same online exam administered in an earth sciences class that he taught. The results showed that both groups of learners demonstrated similar performance in their exams. In another study, Karay et al. (2015) divided medical students into two groups with respect to their prior test results and provided identical room and seating arrangements, as well as the same order of questions and answers, for both groups. The study found no difference in performance between paper-based and computer-based versions of the same test. Furthermore, the results revealed that the computer-based tests took less time to complete. The results of these studies suggest that the medium of an exam does not affect performance.
2.2 Learners’ experiences in online exams
Learners’ opinions are very important in the decision-making process before initiating online exams. In the current literature, a number of studies have investigated learners’ opinions of electronic examinations. Generally, students are satisfied with the implementation of electronic exams, although some reported challenges. Ozden et al. (2004) investigated students’ perceptions of online assessment through a survey and interviews considering the factors of user interface, system use, and impacts on the learning process. The results revealed that students found online assessment favorable based on features such as the provision of immediate feedback, random question selection, item analysis of exam questions, and announcement of grades immediately after the exam. Al-Mashaqbeh and Al Hamad (2010) applied a questionnaire to university students in order to identify the strengths and weaknesses of an online exam tool. Questionnaire items were categorized under five factors: student opinions, capabilities and features of the online exam, students’ computer skills, university resources, and the need for an online exam. The results demonstrated that students had positive attitudes towards adopting the online exam. Sorensen (2013) examined students’ perceptions of e-assessment and reported that the students felt e-assessment added value to their learning and provided immediate feedback, and that they expected to see more e-assessment in the modules of the learning management system. Additionally, participants mentioned no concerns about unfairness related to random question selection from a question bank.
In a study investigating preconceptions of e-exams, Hillier (2014) applied a survey with items under eight indicators (affective factors, teaching and learning, validity, reliability, practicality, security, production, adoption) and found that students had positive attitudes in general, but at the same time indicated concerns that electronic exams favored students from technology majors over those from other departments. That is to say, studying in technology-based departments gives students greater opportunities to readily adopt electronic exams thanks to their experience typing in computer-based environments. On the other hand, students were concerned about the risk of technical failures and the insecurity of the platform with regard to e-exams. Jawaid et al. (2014) investigated postgraduate residents’ perceptions of computer-based assessment and found that the majority of students preferred this method to traditional paper-based assessment, that they felt confident after having that experience, and that completing computer-based exams took less time than paper-based exams. Bandele et al. (2015) reported positive opinions among undergraduate students about the use of electronic exams, including appreciation of the quality of test administration and educational standards. Cabı (2016) analyzed master’s students’ perceptions of different e-assessment methods. Her study revealed that students preferred e-exams because they provide immediate feedback when set to do so, and because students gain the chance to perform self-assessment through practice tests. Their concerns about e-exams related to the possibility of cheating, technical failures, and a lack of e-exam sessions during the semester. Laine et al. (2016) investigated students’ experiences with an electronic exam system and found that students were satisfied with the arrangements of the exams and the appropriateness of the questions.
The only challenge that students indicated was working on mathematical problems and calculations in electronic exams, due to the system’s inadequacy in recording their mathematical calculations and the limited ease of use of its calculator. Adnan (2016) aimed to share the experiences of online learners related to the use of online assessments. The results revealed that learners faced challenges such as unfamiliarity with the program, poor network speed, and unreliable devices.
In more recent studies, new technologies have also been proposed to address the challenges of online exams, especially those of security. D'Souza and Siegfeldt (2017) offered a conceptual framework for detecting cheating in online and take-home exams. In the same area, Atoum et al. (2017) presented a multimedia analytics system that conducts automatic online exam proctoring and aims at identifying the following cheating behaviors: “(a) cheat from text books/notes/papers, (b) using a phone to call a friend, (c) using the Internet from the computer or smartphone, (d) asking a friend in the test room, and (e) having another person take the exam other than the test taker.” In addition to security, some studies (e.g. Farrús and Costa-Jussà 2013; Al-Shalabi 2016) proposed methods and systems for the automated grading of essay questions. Böhmer et al. (2018) proposed an e-exam system for a part-time blended-learning engineering study program and analyzed learners’ opinions after implementation. They found that the majority of learners indicated satisfaction with the e-exams and in fact preferred them to paper-based exams due to their ease of use and fast grading. For instance, the TeSLA project offers technologies such as face recognition, voice recognition, authorship validation, and keystroke pattern analysis for the secure implementation of online exams (Mellar et al. 2018).
Although online exams are used frequently, there is not much information about the agreement of their measurements with those of traditional paper-based exams. New technologies in online exams are currently being considered by distance education institutions and will be applied in the upcoming years. A significant part of this study focuses on learner perceptions of online and traditional exams in different majors and course types. The benefits and challenges of online exams from the learners’ perspective are also examined. Based on the existing literature, the current study is one of only a handful to consider both the academic achievement and the perceptions of learners from different majors and courses with regard to online exams.
3 Methodology
3.1 Research questions
This study aims to investigate online learners’ academic achievement in online exams as compared to traditional exams and to analyze their perceptions towards online exams. In order to achieve these objectives, the study identified the following research questions:
1. Is there any difference in terms of online learners’ academic achievement in online exams versus traditional exams?
2. What are the perceptions of online learners towards online exams?
3. Is there any relation between online learners’ perceptions of online exams and their academic achievement in online exams?
4. Is there any difference in online learners’ perceptions according to their gender, age, and major department?
5. What are the opinions of online learners related to their experience in online exams?
3.2 Research context
The study was conducted at a state university in Turkey during the 2018 spring semester, which covers April 2018, when midterm exams were held, and June 2018, when final exams were held. Online learners used a learning management system (LMS) for studying course notes and activities, accessed an online exam system in order to complete their midterm exams, and participated in final exams organized in a face-to-face setting. Midterm exams contribute 20% and final exams 80% to students’ overall grades. The online exam system was provided through a distinct LMS for the exclusive use of online vocational college level students. Through this system, each instructor developed question banks and created electronic exams for the courses that they taught. Each exam consisted of 30 randomly selected questions in multiple choice, true-false, short answer, and/or matching formats. The schedule of the exams was announced to students beforehand through different means such as a website, social media applications (e.g., Facebook, Twitter), or an SMS service. On the exam day, students logged into the system and completed their exams from their preferred location, such as a computer laboratory, their home, or a public internet access point. If they experienced any problems, they could post questions to a live chat or forum integrated within the system, so that the problem could be evaluated and resolved immediately in order to maintain the continuity of the exam. Termination of an exam session triggered the checking of the exam grades; if there was an error or missing value in the grades, the exam’s settings were revised and grades were recalculated. Finally, students were able to review their attempts and the correct answers in the system.
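As an illustration of the random selection step described above, the sketch below draws 30 distinct questions from a question bank. The bank structure, size, and format labels are assumptions for this example, not details of the institution’s actual system.

```python
import random

# Hypothetical question bank: (question_id, format) pairs.
# The bank size and format mix are illustrative assumptions.
FORMATS = ["multiple_choice", "true_false", "short_answer", "matching"]
question_bank = [(i, FORMATS[i % 4]) for i in range(200)]

def build_exam(bank, n_questions=30, seed=None):
    """Draw n_questions distinct questions at random from the bank,
    mirroring the 30-question random selection described above."""
    rng = random.Random(seed)
    return rng.sample(bank, n_questions)

exam = build_exam(question_bank, seed=42)  # 30 unique questions
```

Because `random.sample` draws without replacement, no learner can receive the same question twice within one exam; seeding per learner would additionally make each learner’s draw reproducible for grade auditing.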
Exams of approximately 140 different courses are conducted through this system each semester. This current study considered online learners’ performance in Literacy, History, and Foreign Language course midterm exams, which were all conducted through this system. Final exams were all conducted on-paper in a traditional face-to-face environment. Both midterm exams and final exams consist of questions in multiple choice format. The aforementioned three courses were selected since they are common and compulsory to all majors.
3.3 Research design & participants
This research was designed as a mixed-methods study. Student perceptions were gathered via an online survey, integrated into the course LMS, one week after the midterm. Participation in the survey was voluntary. Students’ midterm and final exam results were obtained from the university’s student affairs system. Midterm exams were organized online in un-proctored form, while the final exams were organized in proctored, traditional face-to-face exam room settings. Both the midterm and final exams consisted of an identical number of multiple-choice questions (i.e. 30) and lasted the same duration (i.e. 30 min).
Participants were all vocational college level online learners in their first or second year of their programs. As online exams are organized every term, this was their second or fourth time taking an online exam. Before their first online exam, a tutorial covering step-by-step instructions for logging in to, taking, and completing online exams was provided. Although 447 online learners took the online exams, 163 learners voluntarily participated in the online survey. Therefore, the sample of the study comprises 163 participants, whose demographic characteristics are provided in Tables 1 and 2. The participants were vocational college learners registered in the programs of Tourism and Hotel Management (THM), Banking and Insurance (BAI), Medical Documentation Support Personnel (MDSP), Computer Programming (CP) Distance Learning Program, or Judicial Services Support Personnel (JDSP).
3.4 Data collection tool & analysis
The Perceptions of e-Assessment Scale was selected as the data collection tool. The scale was developed by Yılmaz (2016) and was based on the e-Assessment scale originally developed by Dermo (2009). The scale is a five-point, Likert-type instrument with response options of “strongly disagree (1)”, “disagree (2)”, “neutral (3)”, “agree (4)”, and “strongly agree (5).” In total, the scale consists of 17 items within three factors, identified as “practicality-suitability,” “affective factors,” and “reliability.” The scale was completed on a voluntary basis; hence, reliability analysis was conducted using the data of the learners who participated in the survey. The Cronbach Alpha value was calculated as .92. In addition, the Cronbach Alpha coefficients for the three factors of the scale are presented in Table 3.
Additionally, the participants’ opinions were gathered through an open-ended question about online exams, to which 28 learners provided detailed responses. IBM SPSS 17.0 was employed for the quantitative analysis, and the responses of the 28 participants were analyzed using XSight software. Open coding was preferred for this type of research, as suggested by Strauss and Corbin (1990). After the codes and themes were prepared, an intercoder agreement strategy was employed to assure the reliability of the results. Two independent coders (not the researchers) worked on the emerged codes and themes, and Cohen’s Kappa coefficient was calculated as .72, which is acceptable according to the ranges of Krippendorff (2004).
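The Cohen’s Kappa computation used for intercoder agreement can be sketched as follows; the theme labels are hypothetical stand-ins for the study’s actual codes.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa between two coders' label sequences:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each coder's label frequencies."""
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    ca, cb = Counter(coder_a), Counter(coder_b)
    p_e = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels assigned by two independent coders to 8 responses.
coder_1 = ["time", "time", "usability", "anxiety",
           "time", "usability", "anxiety", "time"]
coder_2 = ["time", "usability", "usability", "anxiety",
           "time", "usability", "time", "time"]
kappa = cohens_kappa(coder_1, coder_2)  # 0.6 for this toy data
```

Unlike raw percent agreement (6/8 here), kappa discounts the agreement the two coders would reach by chance given how often each uses every label, which is why it is preferred for reporting coding reliability.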
4 Results
4.1 Research Question 1: Is there any difference in terms of online learners’ academic achievement in online versus traditional exams?
A paired-samples t-test was used to analyze the difference in participants’ performance in the Literacy, History, and Foreign Language courses. The results revealed a significant difference in the success of participants according to exam format: the participants demonstrated better performance in their midterm exams (online) compared to their final exams (traditional face-to-face). The results of the analysis are provided in Table 4.
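A paired-samples t-test of this kind can be run as in the sketch below; the score vectors are hypothetical and stand in for the matched midterm (online) and final (paper) scores of the same learners.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores for eight learners (not the study's data).
midterm_online = np.array([72, 85, 64, 90, 78, 81, 69, 88])
final_paper    = np.array([65, 80, 60, 84, 75, 77, 70, 82])

# Paired (dependent) test: each learner contributes one score per exam
# format, so the test operates on the within-learner differences.
t_stat, p_value = stats.ttest_rel(midterm_online, final_paper)
```

The paired form is the right choice here because the same learners sat both exams; an independent-samples test would ignore that pairing and waste statistical power.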
4.2 Research Question 2: What are the perceptions of online learners towards online exams?
According to the results of the e-assessment scale, participants generally had positive perceptions of the online exams. The analysis covers the total score of the scale and average scores based on scale items and factors. The average score of the scale (X = 3.63), calculated from the total score, shows that the participants had positive attitudes in general. The participants found the online exams to be effective and usable (X = 3.78), while the affective factors dimension yielded a lower mean (X = 2.58), reflecting more neutral feelings during online exams. The results additionally demonstrated that the participants considered the online exams reliable (X = 3.64).
4.3 Research Question 3: Is there any relation between online learners’ perceptions of online exams and their academic achievement in online exams?
Correlation analysis was used to analyze the relation between the participants’ responses to the e-assessment scale and their grades in the Literacy, History, and Foreign Language courses. The results of the correlation analysis are presented in Table 5.
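A correlation analysis of this form can be sketched as below. The vectors are synthetic and generated independently of each other, so they only illustrate how such a relation is tested; they do not reproduce the study’s values.

```python
import numpy as np
from scipy import stats

# Synthetic data for 163 learners (not the study's data): scale totals
# and course grades drawn independently of each other.
rng = np.random.default_rng(1)
scale_scores = rng.uniform(1, 5, size=163)
course_grades = rng.uniform(40, 100, size=163)

# Pearson correlation between perceptions and achievement;
# r is expected to be near zero for independent data like this.
r, p = stats.pearsonr(scale_scores, course_grades)
```

A non-significant `p` (above .05) with a near-zero `r` is the pattern that supports the conclusion that attitudes and course performance are unrelated.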
There was no statistically significant relation between the participants’ responses to the e-assessment scale and their course grades. It can therefore be stated that the positive or negative attitudes of the participants towards the online exams did not lead to any difference in their course performance.
4.4 Research Question 4: Is there any difference in online learners’ perceptions according to their gender, age, and major department?
In order to analyze differences in the participants’ perceptions based on demographic variables, t-test and ANOVA analyses were employed. The results revealed no significant differences in the learners’ perceptions according to their gender, age, or major department. Tables 6, 7, and 8 present the results of the corresponding analyses.
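The group comparisons reported here can be sketched as follows: a one-way ANOVA for the multi-level department variable and an independent-samples t-test for a two-level variable such as gender. All group scores below are invented for illustration.

```python
from scipy import stats

# Hypothetical mean scale scores grouped by major department
# (labels follow the programs named earlier; values are invented).
thm = [3.6, 3.8, 3.5, 3.7, 3.9]
bai = [3.5, 3.7, 3.6, 3.8, 3.4]
cp  = [3.7, 3.6, 3.8, 3.5, 3.6]

# One-way ANOVA across the three department groups.
f_stat, p_anova = stats.f_oneway(thm, bai, cp)

# Independent-samples t-test for a two-level grouping such as gender.
group_a = [3.6, 3.4, 3.8, 3.7, 3.5]
group_b = [3.7, 3.5, 3.6, 3.8, 3.6]
t_stat, p_ttest = stats.ttest_ind(group_a, group_b)
```

When the group means are identical, the ANOVA F statistic is exactly 0 with p = 1; non-significant p values (above .05) across these tests correspond to the "no significant difference" findings reported above.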
As with the gender variable, the age variable also provided a non-significant result for the learners’ perceptions of online exams. Details are presented in Table 7.
The study also found no significant difference in the learners’ perceptions based on their major department (Table 8). Content-related differences exist between courses and result in differences in exam content, such as verbal versus numerical questions. In this regard, the findings of the current study revealed similar perceptions towards online exams among learners from different departments, which can be considered a significant and important finding.
4.5 Research Question 5: What are the opinions of online learners related to their experience in online exams?
The results of the qualitative analysis revealed three themes as “Time Management,” “Usability,” and “Anxiety of Potential Problems.” The generated themes and codes are presented in Table 9.
Some of the participants pointed out that they experienced difficulty due to the short duration of the online exams for some courses. Although the time provided for the online and face-to-face exams is identical (i.e. 30 min), learners had difficulty completing the online exams of some courses within this duration. Especially in online math exams that required calculations, learners used paper and pencil alongside the computer while trying to solve the problems, and using both at once led to wasted time. It was also revealed that learners who experienced problems with the exam duration found the questions to be difficult and long. Two participants responded as follows:
“Online exams may be appropriate for courses covering verbal content since we don’t need pencil and paper, and the duration is therefore sufficient for them. I was very stressed in exams that required mathematical calculations since I needed more time.” [P-26].
“In online math exams, I needed to use paper and pencil. I can manage the time appropriately in traditional exams, but in online exams, I experienced some stress due to the exam duration. At the same time, I also experienced the same stress in literacy and interpretation questions.” [P-18].
Even though the allotted time for online and traditional exams is the same (30 min), learners can solve problems directly in the booklet in traditional exams, whereas in online exams they must first write equations on paper and then solve them, which is time consuming.
The majority of participants stated that online exams provided several benefits since they are conducted online. Learners with physical disabilities were very satisfied since they were able to sit their exams from their own homes instead of being required to attend an examining location. One of the participants indicated his opinions related to online exams and location and time independency:
“As a deaf student, online exams are much more preferable for me compared to a traditional on-campus exam.” [P-17].
“I wish you could conduct final and makeup exams as online exams. I live in another city and it is very hard for me to come to the main campus. People who have economic troubles or dependents such as parents or children have to travel between cities for exams; otherwise they cannot pass the class.” [P-14].
Participants experienced anxiety related to potential technical system problems, in addition to their exam-related stress, in all online exams. Although none of the participants experienced technical problems, the possibility of Internet connectivity problems, power cuts, or non-responsive computers was seen as a source of additional anxiety.
5 Discussions and conclusion
As the use of online learning programs becomes widespread, online exams have become correspondingly more common. The current study aimed to reveal online learners’ perceptions towards online exams and to investigate the relation between their perceptions and academic achievement. In this regard, the learners’ course grades in exams and their responses to an e-assessment scale were analyzed. The performance of the learners changed significantly based on the format of the exam (paper-pencil vs online), which can be considered an important finding for the implementation of online exams. While the current results show a statistically significant difference between exam types, studies in the literature comparing the performance of learners in online and paper-based exams (Ardid et al. 2015; Boevé et al. 2015; Karay et al. 2015; Kemp and Grieve 2014; Werhner 2010) revealed results different from those of the current study, finding no significant difference in learner achievement. The significant difference seen in the current study may originate from the lighter topic load of the midterm exams or from the learners’ use of additional sources during the un-proctored online exams. However, the limited duration allocated for online exams is usually an effective means of controlling such an issue; designing the online exam to occupy only a limited time was also offered as one of the essential Online Exam Control Procedures (Clusky Jr et al. 2011). Orr and Foster (2013), on the other hand, reported that learners participating in online quizzes were more successful than learners who took identical quizzes in paper format. In addition, the current study found no significant relation between learners’ perceptions and their academic achievement in three different courses. Hence, it can be concluded that the positive or negative attitudes of participants towards online exams did not lead to any difference in their course performance.
Additionally, there was no differentiation based on major or course type.
The results also revealed no significant difference in learners’ perceptions according to their gender, age, or major department. Studies in the existing literature considering gender, exam anxiety, and perception variables reported that both genders had significantly positive perceptions towards computer-based assessment, since both groups indicated that the method facilitates the assessment process (Terzis and Economides 2011). From a broader perspective, the absence of a significant gender difference in learners’ perceptions in the current study is similar to the results of other studies that focused on acceptance, perceptions, and attitudes towards computers and information technologies (Chu 2010; Comber et al. 1997; Dong and Zhang 2011). Although it is generally expected that the age variable would show a difference due to exam-type preferences based on habits, both the current study and that of Dermo (2009) revealed no significant difference according to learners’ ages. This can be considered a positive finding, since online learning adopts the philosophy of lifelong learning. According to the results, learners’ perceptions remain similar as their age advances, an important finding that negates the idea that negative perceptions towards such systems emerge as learners become older. As with the age variable, there are mixed results in terms of gender: some studies show that male participants tend to have more positive perceptions towards the use of technology than female participants (Kesici et al. 2009; Wang et al. 2009).
Considering the opinions of learners regarding online exams, one problem area was reported with regard to exam duration. With the online exam tool they developed, Bayazit and Askar (2012) quantitatively compared online and paper-and-pencil exams according to performance and time factors. While the results of their study found no difference in terms of performance, they did reveal that learners required more time during online exams. Additionally, learners indicated that they experienced increased stress and were affected by external noises, tiredness, and problems in focusing (Bayazit and Askar 2012). These results were similar to those of the current study. Attitudes towards technology can be considered an important factor in the increase of such problems. In exam subjects like math and geometry in particular, students must use paper and pencil for calculations, which often requires additional time. However, Karay et al. (2015) reported the completely opposite finding that learners completed exam questions faster in online exams, since they were not required to use paper or pencil. In this regard, it is important to consider differences between participants and to analyze the effects of online and paper-based exams for different course subjects in order to increase the effectiveness of online exams.
It is important to consider the time required to become familiar with a new exam format, especially for learners who have taken all their previous exams in paper-based form. Hence, learners should be provided with adequate time, especially for exams that require paper-and-pencil calculations, and exam duration should be planned to reflect learners’ needs. In this context, the institution in this study plans to provide the necessary time for each exam to enhance learner experiences in upcoming online exams.
Assessment is an essential component of the teaching and learning process. A well-organized assessment system can increase student performance and engender positive attitudes towards the system and process, whereas a poorly organized system can create negative attitudes. For this reason, planning and implementation are critical in the utilization of online exams. The current study was limited in its sample of learners and in the features of the LMS, covering learners’ performance in only three courses. Future studies should analyze learners’ opinions across a greater variety of courses to improve online exam systems and the effectiveness of their implementation. In addition, technology acceptance models such as the Technology Acceptance Model (TAM) or the Unified Theory of Acceptance and Use of Technology (UTAUT) can be used to examine learner behavior in terms of adaptation to such a system.
References
Adnan, I. (2016). Online assessment at Universitas Terbuka (Indonesia Open University). ICERI2016 Proceedings, 4928–4936.
Al-Mashaqbeh, I. F., & Al Hamad, A. (2010). Student’s perception of an online exam within the Decision Support System Course at Al al-Bayt University. In Proceedings Second International Conference on Computer Research and Development, ICCRD 2010 (pp. 131–135). IEEE. https://doi.org/10.1109/ICCRD.2010.15.
Al-Shalabi, E. (2016). An automated system for essay scoring of online exams in Arabic based on stemming techniques and Levenshtein edit operations. International Journal of Computer Science Issues, 13, 45–50.
Alsadoon, H. (2017). Students’ perceptions of e-Assessment at Saudi Electronic University. Turkish Online Journal of Educational Technology-TOJET, 16(1), 147–153.
Ardid, M., Gómez-Tejedor, J. A., Meseguer-Dueñas, J. M., Riera, J., & Vidaurre, A. (2015). Online exams for blended assessment. Study of different application methodologies. Computers & Education, 81, 296–303. https://doi.org/10.1016/j.compedu.2014.10.010.
Atoum, Y., Chen, L., Liu, A. X., Hsu, S. D., & Liu, X. (2017). Automated online exam proctoring. IEEE Transactions on Multimedia, 19(7), 1609–1624.
Bandele, S. O., Oluwatayo, J. A., & Omodara, M. F. (2015). Opinions of undergraduates on the use of electronic examination in a Nigerian University. Mediterranean Journal of Social Sciences, 6(2 S1), 75. https://doi.org/10.5901/mjss.2015.v6n2s1p75.
Bayazit, A., & Askar, P. (2012). Performance and duration differences between online and paper-pencil tests. Asia Pacific Educational Review, 13(2), 219–226. https://doi.org/10.1007/s12564-011-9190-9.
Boevé, A. J., Meijer, R. R., Albers, C. J., Beetsma, Y., & Bosker, R. J. (2015). Introducing computer-based testing in high-stakes exams in Higher Education: Results of a field experiment. PLoS One, 10(12), e0143616. https://doi.org/10.1371/journal.pone.0143616.
Böhmer, C., Feldmann, N., & Ibsen, M. (2018). E-exams in engineering education — online testing of engineering competencies: Experiences and lessons learned. Paper presented at the 2018 IEEE Global Engineering Education Conference (EDUCON). https://doi.org/10.1109/EDUCON.2018.8363281.
Cabı, E. (2016). The perception of students on E-Assessment in Distance Education. Journal of Higher Education & Science [Yükseköğretim ve Bilim Dergisi], 6(1), 94–101. https://doi.org/10.5961/jhes.2016.146.
Chu, R. J. (2010). How family support and Internet self-efficacy influence the effects of e-learning among higher aged adults – Analyses of gender and age differences. Computers & Education, 55(1), 255–264. https://doi.org/10.1016/j.compedu.2010.01.011.
Comber, C., Colley, A., Hargreaves, D. J., & Dorn, L. (1997). The effects of age, gender and computer experience upon computer attitudes. Educational Research, 39(2), 123–133. https://doi.org/10.1080/0013188970390201.
Conrad, D., & Openo, J. (2018). Assessment strategies for online learning: Engagement and authenticity. Athabasca University: AU Press. https://doi.org/10.15215/aupress/9781771992329.01.
Ćukušić, M., Garača, Ž., & Jadrić, M. (2014). Online self-assessment and students’ success in higher education institutions. Computers & Education, 72, 100–109. https://doi.org/10.1016/j.compedu.2013.10.018.
Dermo, J. (2009). e-Assessment and the student learning experience: A survey of student perceptions of e-assessment. British Journal of Educational Technology, 40(2), 203–214. https://doi.org/10.1111/j.1467-8535.2008.00915.x.
Dong, J. Q., & Zhang, X. (2011). Gender differences in adoption of information systems: New findings from China. Computers in Human Behavior, 27(1), 384–390. https://doi.org/10.1016/j.chb.2010.08.017.
D'Souza, K. A., & Siegfeldt, D. V. (2017). A Conceptual Framework for Detecting Cheating in Online and Take-Home Exams. Decision Sciences Journal of Innovative Education, 15(4), 370–391. https://doi.org/10.1111/dsji.12140.
Farrús, M., & Costa-Jussà, M. R. (2013). Automatic evaluation for e-learning using latent semantic analysis: A use case. The International Review of Research in Open and Distributed Learning, 14(1), 239–254. https://doi.org/10.19173/irrodl.v14i1.1389.
Gvozdenko, E., & Chambers, D. (2007). Beyond test accuracy: Benefits of measuring response time in computerised testing. Australasian Journal of Educational Technology, 23(4), 542–558.
Hillier, M. (2014). The very idea of e-Exams: student (pre) conceptions. In B. Hegarty, J. McDonald, & S.-K. Lok (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings Ascilite, Dunedin 2014 (pp. 77–88). Ascilite.
Jawaid, M., Moosa, F. A., Jaleel, F., & Ashraf, J. (2014). Computer based assessment (CBA): Perception of residents at Dow University of Health Sciences. Pakistan Journal of Medical Sciences, 30(4), 688–691. https://doi.org/10.12669/pjms.304.5444.
Johnson, B. C., & Kiviniemi, M. T. (2009). The effect of online chapter quizzes on exam performance in an undergraduate social psychology course. Teaching of Psychology, 36(1), 33–37. https://doi.org/10.1080/00986280802528972.
Johnson, G. (2006). Optional online quizzes: College student use and relationship to achievement. Canadian Journal of Learning and Technology [La revue canadienne de l’apprentissage et de la technologie], 32(1). https://doi.org/10.21432/T2J300.
Cluskey, G. R., Jr., Ehlen, C. R., & Raiborn, M. H. (2011). Thwarting online exam cheating without proctor supervision. Journal of Academic and Business Ethics, 4, 1–7.
Karay, Y., Schauber, S. K., Stosch, C., & Schüttpelz-Brauns, K. (2015). Computer versus paper—Does it make any difference in test performance? Teaching and Learning in Medicine, 27(1), 57–62. https://doi.org/10.1080/10401334.2014.979175.
Kemp, N., & Grieve, R. (2014). Face-to-face or face-to-screen? Undergraduates’ opinions and test performance in classroom vs. online learning. Frontiers in Psychology, 5, 1278. https://doi.org/10.3389/fpsyg.2014.01278.
Kesici, S., Sahin, I., & Akturk, A. O. (2009). Analysis of cognitive learning strategies and computer attitudes, according to college students’ gender and locus of control. Computers in Human Behavior, 25(2), 529–534. https://doi.org/10.1016/j.chb.2008.11.004.
Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd ed.). Thousand Oaks, CA: Sage Publications.
Laine, K., Sipilä, E., Anderson, M., & Sydänheimo, L. (2016). Electronic exam in electronics studies. Paper presented at the SEFI Annual Conference 2016: Engineering Education on Top of the World: Industry University Cooperation. Tampere, Finland.
Mellar, H., Peytcheva-Forsyth, R., Kocdar, S., Karadeniz, A., & Yovkova, B. (2018). Addressing cheating in e-assessment using student authentication and authorship checking systems: teachers’ perspectives. International Journal for Educational Integrity, 14(1), 2. https://doi.org/10.1007/s40979-018-0025-x.
Nguyen, Q., Rienties, B., Toetenel, L., Ferguson, R., & Whitelock, D. (2017). Examining the designs of computer-based assessment and its impact on student engagement, satisfaction, and pass rates. Computers in Human Behavior, 76, 703–714. https://doi.org/10.1016/j.chb.2017.03.028.
Orr, R., & Foster, S. (2013). Increasing student success using online quizzing in introductory (majors) biology. CBE—Life Sciences Education, 12(3), 509–514. https://doi.org/10.1187/cbe.12-10-0183.
Ozden, M. Y., Erturk, I., & Sanlı, R. (2004). Students’ perceptions about online assessment: A case study. Journal of Distance Education, 19(2), 77–92.
Parshall, C. G., Spray, J. A., Kalohn, J. C., & Davey, T. (2002). Practical considerations in computer-based testing. New York: Springer.
Rovai, A. P. (2000). Online and traditional assessments: what is the difference? The Internet and Higher Education, 3(3), 141–151. https://doi.org/10.1016/S1096-7516(01)00028-8.
Smith, B., & Caputi, P. (2007). Cognitive interference model of computer anxiety: Implications for computer-based assessment. Computers in Human Behavior, 23(3), 1481–1498. https://doi.org/10.1016/j.chb.2005.07.001.
Sorensen, E. (2013). Implementation and student perceptions of e-assessment in a Chemical Engineering module. European Journal of Engineering Education, 38(2), 172–185. https://doi.org/10.1080/03043797.2012.760533.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park: Sage Publications.
Terzis, V., & Economides, A. A. (2011). Computer based assessment: Gender differences in perceptions and acceptance. Computers in Human Behavior, 27(6), 2108–2122. https://doi.org/10.1016/j.chb.2011.06.005.
Wang, Y., Wu, M., & Wang, H. (2009). Investigating the determinants and age and gender differences in the acceptance of mobile learning. British Journal of Educational Technology, 40, 92–118. https://doi.org/10.1111/j.1467-8535.2007.00809.x.
Werhner, M. J. (2010). A comparison of the performance of online versus traditional on-campus earth science students on identical exams. Journal of Geoscience Education, 58(5), 310–312.
Yılmaz, Ö. (2016). Çevrimiçi Sınav Görüş Anketi [Online exam opinion survey]. e-Kafkas Eğitim Araştırmaları Dergisi, 3(3), 26–33.
Acknowledgements
We would like to thank Instructor Mesut Sevindik, who helped us during the data collection process.
Ilgaz, H., Afacan Adanır, G. Providing online exams for online learners: Does it really matter for them?. Educ Inf Technol 25, 1255–1269 (2020). https://doi.org/10.1007/s10639-019-10020-6