21.1 Introduction

To track the progress of cognitive and interactive processes in online learning, a sensible and valid performance evaluation strategy is necessary. It must be combined with a series of tools that detect complex changes in students' knowledge constructions; such tools can then be used to regulate knowledge construction [33]. There are three stages in the assessment of online learning (as in traditional learning, for that matter). The first is the initial assessment, which aims to gain a reliable picture of the students' level of knowledge and skills. The second is the formative assessment, which is carried out during the course at intervals established in the course design and planning. The third is the summative assessment, which is carried out at the end of the course [1].

Given the characteristics of online education, there are communication limitations inherent in the nature of the internet as a medium, which in turn distinguish online assessment from traditional face-to-face assessment. The underlying purpose of assessment is to evaluate performance and provide feedback to students [2]. Studies focused on student assessment state that the attributes of an online assessment include reliability [3], validity [3], objectivity [4] and authenticity [4]. Reliability refers to the confidence that an instrument generates in reflecting the student's level of achievement [3]. Validity refers to the instrument measuring what is really intended, and not something else [3]. Objectivity refers to the neutrality with which students are graded [4]. Authenticity relates the contents of the assessment to practical reality, reflecting the skills and competences that students gain to be ready for the practical world [4]. A review of the literature indicates that, although much has been written about online assessment, few studies analyze it in terms of these attributes.

In the research on online learning, Batu et al. (2018) analyzed many studies of the effectiveness of online courses and reported that, fundamentally, the following assessment strategies are used. The first category is knowledge questionnaires administered at the end of the courses. These are critiqued because they do not guarantee reliability, validity or authenticity, although they offer a certain guarantee of objectivity inherent to the online medium. The second category comprises interviews or indices/inventories that probe student satisfaction with the course, implying that satisfaction and learning are equivalent. That work cites [5], who differentiates the two concepts and argues that the constructs are not comparable, since the satisfaction variable confounds motivational and instructional effects. It also agrees with [6] in positing that the variety of online learning assessment resources should include objective tests, essays, projects, rubrics, concept mapping, and/or evidence-based assessment. A possible classification of the types of online assessment is: (a) score-based assessment, such as learning needs analysis; (b) elaborative assessment, such as essays, concept maps and projects; and (c) collaborative assessment, in which students are assessed on group work, such as peer review [6].

According to [7], although assessment is considered a fundamental part of the online learning process, and although the great potential of technology to create effective assessment systems is recognized, there has been no relevant development of technological applications to carry it out. Research on the evaluation of online learning is scarce, which is one reason why current knowledge in this regard is unsatisfactory (Ho et al., 2018). Therefore, this study is based on the rationale that online assessments must have the aforementioned attributes to fulfill their purpose, i.e., to regulate or influence students' learning. In other words, this research aims to explore the relationship between online assessment and students' learning outcomes. The study adopts a conceptual framework based on the cognitive task analysis model to design a questionnaire, which is used to understand students' perceptions of online assessments and their impact on learning outcomes.

21.2 Literature Review

21.2.1 Online Assessment

A critical step in the development of student-centered systems is having quality information about the student's competencies in the educational domain. Technology includes a set of tools with the potential to optimize the evaluation of learning in online environments. In fact, in the online educational modality, assessment should be a continuous activity and the backbone of learning [8]. Willcox et al. [9] argued that, in online education, much attention has been paid to the relevance of the contents, the instructional design and the optimal use of technological resources, whereas meticulous consideration of the assessment of learning has been largely ignored. Ultimately, assessment is considered a weak point or limitation of online learning [7]. Hence, there is a need to improve online assessment with support from theoretical models and empirical evidence, which is the basic purpose of this research.

The technological infrastructure could allow the flexible and comprehensive recording of performance data and the monitoring of students' progress in their process of cognitive development, i.e., knowledge construction and skills acquisition. Despite this potential, the evaluation of online learning still needs to be improved: it is something on which we must reflect in greater depth, develop more creatively, and revisit by investing more pedagogical and technological resources [10].

21.2.2 Cognitive Task Analysis (CTA)

Fives and Barnes [12] argued that CTA supports the development of complex learning assessment models based on the analysis of the cognitive operations and skills underlying the tasks performed in a given educational domain. An educational domain is a delimited thematic area composed of the body of knowledge that is the object of assessment.

To carry out these analyses, it is first necessary to identify the competences that make up the educational domain. Once this macrostructure has been identified, a Cognitive Task Analysis is performed to identify the micro-components that constitute each competence. The procedure analyzes each task through steps in which the knowledge, skills and dispositions associated with each step are identified, in a progressively more detailed and precise sequence [13]. In this process, it is important to identify the cognitive complexity (e.g., understanding, application, problem-solving) and the type of knowledge involved (e.g., conceptual, structural, causal) [14].

Based on both analyses, a structure is built that identifies fine-grained components. In this way, a universe of measurement is organized that makes it possible to design assessment situations representative of the critical competences and their components, within an articulated assessment scheme [12]. The dimensions used to perform these classifications are described below.

The first is a continuum of complexity in terms of the cognitive skills required in the domain, which includes three categories [15]: (a) understanding of the topics, including the recognition, classification and organization of information; (b) application of knowledge and skills, including operations such as analysis, extrapolation, inference and comparison; and (c) problem-solving, which includes operations such as error correction, action planning, evaluation and decision-making.

The second is the complexity dimension in terms of the mental models built by the students. Cognitive skills represent categories of operations that the student can apply in the domain, whereas mental models reflect the integration of knowledge that allows students to explain the reality of phenomena. The inclusion of mental models as a diagnostic axis responds to evidence that students, depending on their level of expertise, may only be able to describe the domain conceptually, or may have highly structured knowledge about it that they can use to explain it [13]. Three mental models are included: (1) conceptual models, which answer the question “what is this?” and describe the meaning of the phenomenon or theme and the interrelation of the elements that compose it; (2) structural models, which answer the question “how is this structured?” and describe how the conceptual field in question is organized; and (3) causal models, which answer the question “how does this work?” and describe how principles affect each other, helping to interpret processes, explain events and make predictions (ibid.).

The third is the dimension of the complexity of the learning topics, determined by the content units as the universe to be evaluated by applying different mental models at different levels of cognitive complexity. This dimension follows a logical order from the simple to the complex, depending on the objectives of the units of the chosen learning program [16].

Figure 21.1 shows the structure that results from integrating the three dimensions of the analysis. Each cell of the cube contains a category of content, and together the cells make up the universe of the educational domain analyzed. This means that assessment items, resources and learning activities can be generated from each cell.

Fig. 21.1
Model of the cognitive task analysis of a domain. The diagram integrates three dimensions: complexity of the mental model, cognitive skills (organization, comprehension, knowledge application, problem-solving), and thematic complexity.
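As a minimal illustration of how such a domain cube could be represented computationally, the following Python sketch enumerates its cells. The dimension values follow the taxonomy above, while the topic units are hypothetical placeholders, not taken from the chapter.

```python
# Illustrative sketch: representing the three CTA dimensions as a cube
# whose cells can each anchor assessment items and learning activities.
from itertools import product

# Dimension 1: cognitive skills, ordered from simple to complex [15]
COGNITIVE_SKILLS = ["understanding", "application", "problem-solving"]

# Dimension 2: mental models built by the students [13]
MENTAL_MODELS = ["conceptual", "structural", "causal"]

# Dimension 3: topic units of a course (hypothetical placeholders) [16]
TOPICS = ["unit_1", "unit_2", "unit_3"]

def domain_cells(topics=TOPICS):
    """Enumerate every cell of the domain cube; each cell is a slot
    from which assessment items and activities can be generated."""
    return [
        {"topic": t, "mental_model": m, "cognitive_skill": s}
        for t, m, s in product(topics, MENTAL_MODELS, COGNITIVE_SKILLS)
    ]

if __name__ == "__main__":
    cells = domain_cells()
    print(len(cells), "cells in the domain universe")  # 3 x 3 x 3 = 27
    print(cells[0])
```

Each enumerated cell can then be used as a tag on assessment items, which is what makes the articulated assessment scheme of [12] operational.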

21.2.3 Conceptual Model of Cognitive Task Analysis in Online Assessment

Based on the logic of Cognitive Task Analysis, the following model is proposed to describe the application of these techniques in online assessment. Figure 21.2 illustrates the proposed model. On the left, it includes the modeling of the educational domain, which leads to the instructional design. This makes it possible to propose the route to follow to meet the learning objectives and outcomes. The instructional design is a set of decisions about the activities, content and assessments that the course will include, based on its objectives.

Fig. 21.2
Application model of cognitive task analysis to online assessment. The framework links the features of online assessment (reliability, validity, authenticity, objectivity) to the learning features (comprehension, knowledge, skills, competency).

It was previously pointed out that assessment is a weak point of online education, even though it is the backbone of learning in these contexts, and that there is not a significant body of knowledge derived from research in this field. Online assessment can be systematized by applying Cognitive Task Analysis as suggested in the model, so that initial/diagnostic assessment yields valid, reliable, objective and authentic results. The online environment is especially suitable for this type of assessment, which can include objective items, exercises, cases, problems, etc., derived from an assessment design that starts from the Cognitive Task Analysis.

Formative assessments represent the most extensive work in online education. They include a wide variety of activities related to the topics of the instructional design and to the fulfillment of course objectives. Some formative assessments can be automatic, since interactive exercises can be included based on the levels required for the topics reviewed in the course [17]. Other formative assessments can be elaborative, such as essays and project reports, whose complexity would also be derived from the levels detected in the Cognitive Task Analysis.
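The sketch below illustrates, under assumed item and response formats that are not part of the chapter, how automatic formative items tagged with CTA cells might be selected and auto-graded; the item bank, field names and answer keys are all hypothetical.

```python
# Illustrative sketch (hypothetical item format): selecting and
# auto-grading formative exercises keyed to CTA cells.
ITEM_BANK = [
    # Each item is tagged with the CTA cell it probes and a scoring key.
    {"id": "q1", "topic": "unit_1", "skill": "understanding",
     "model": "conceptual", "answer": "b"},
    {"id": "q2", "topic": "unit_1", "skill": "application",
     "model": "structural", "answer": "d"},
]

def select_items(bank, topic, skill):
    """Pick the formative items matching the topic and required skill level."""
    return [i for i in bank if i["topic"] == topic and i["skill"] == skill]

def auto_grade(items, responses):
    """Score closed-form responses automatically, as suggested in [17]."""
    return sum(responses.get(i["id"]) == i["answer"] for i in items) / len(items)

print(auto_grade(select_items(ITEM_BANK, "unit_1", "understanding"), {"q1": "b"}))
```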

Finally, summative assessments can also be constructed based on Cognitive Task Analysis. It is suggested that the questionnaire-based assessment used as the initial diagnosis be applied again at the end as a post-test; alternatively, a dedicated summative evaluation could be constructed, depending on the needs of the course in question [18].
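One simple way to exploit such a pre-test/post-test pairing quantitatively is a paired-samples comparison. The sketch below uses entirely synthetic scores for illustration; the chapter itself does not prescribe a specific statistical test.

```python
# Illustrative sketch (synthetic scores): comparing the diagnostic
# pre-test with the same questionnaire re-applied as a post-test [18].
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(60, 10, size=350)        # hypothetical pre-test scores
post = pre + rng.normal(8, 5, size=350)   # hypothetical learning gain

t, p = stats.ttest_rel(post, pre)         # paired-samples t-test
print(f"mean gain = {np.mean(post - pre):.2f}, t = {t:.2f}, p = {p:.4f}")
```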

21.3 Methodology

In order to apply the conceptual model presented in the previous section, a deductive research approach was selected. The researcher started by developing a conceptual model and tentative hypotheses, followed by the collection of data from a specific setting [19], i.e., higher education students in the UAE. This study has therefore attempted to test CTA theory within the specific context of the UAE higher education system using deductive reasoning.

Since the study aims to assess the impact of online assessment on learning outcomes, quantitative data analysis is essential; hence, the research method applied is the quantitative method, which is distinguished from others in that it uses quantitative data only [20]. Quantitative data are also considered suitable for this study because they provide a higher level of reliability and validity compared with qualitative data. In addition, quantitative methods were preferred over mixed methods because the former provide greater time and cost efficiency [21]. Furthermore, the study uses a cross-sectional time horizon, in which data were collected from a sample of higher education students in the UAE at a specific point in time. The cross-sectional design suits this study better than a longitudinal design because it facilitates small-scale research with better time and cost efficiency, whereas a longitudinal design requires data collection at multiple time points [22].

For data collection, this study developed a self-administered questionnaire designed to apply the conceptual framework described above. The research used closed-ended statements to operationalize the variables, with a five-point Likert scale used to obtain students' ratings of the statements. To establish the reliability of the questionnaire, the study assessed internal consistency using Cronbach's alpha [23]; the results are reported in the following section.
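As an illustration of this internal-consistency check, the sketch below computes Cronbach's alpha from a respondents-by-items matrix, assuming a 1-5 Likert coding as in the instrument; the data generated here are synthetic, so only the computation, not the output value, is meaningful.

```python
# Minimal sketch of the Cronbach's alpha computation; `data` is assumed
# to be a respondents x items matrix on a 1-5 Likert scale.
import numpy as np

def cronbach_alpha(data: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic example: 350 respondents, 18 items (as in this chapter's instrument)
rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=(350, 1))
data = np.clip(np.rint(latent + rng.normal(0, 0.5, size=(350, 18))), 1, 5)
print(round(cronbach_alpha(data), 3))
```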

With respect to sampling, the researcher used a random sampling method. A sample of 350 students was drawn from five universities in the UAE: City University College of Ajman, Al Ghurair University Dubai, Capital College Dubai, Manipal University Dubai and Skyline University College Sharjah. Initially, the researcher sent requests for participation to 500 students, of whom 396 responded. During data collection, 46 questionnaires were found to be incomplete and were excluded, leaving the final sample of 350.

21.4 Results

Based on the data collected through the questionnaire, this study explored the relationship between online assessments and learning outcomes using correlation and regression techniques. Online assessments were assessed in terms of their reliability, validity, authenticity and objectivity, while learning outcomes were operationalized in terms of understanding, knowledge, skills and problem-solving. The table below provides the correlations for these variables.

The correlation between reliability and learning outcomes (r = 0.900, p = 0.000) indicates a strong positive relationship: the online assessments have a sufficient level of reliability to assess learning outcomes. In other words, in the students' perception, the online assessments are able to check students' achievement and performance. Furthermore, validity as a feature of online assessment also shows a very strong positive correlation with learning outcomes (r = 0.899, p = 0.000). These results show that online assessments check learning outcomes with a high level of validity; that is, they serve their purpose and measure what they are intended to measure. Thirdly, the correlation between authenticity and learning outcomes (r = 0.898, p = 0.000) supports the assertion that online assessments are able to check the skills and competences of students, further reinforcing the positive relationship between online assessment and learning outcomes. Finally, the correlation between objectivity and learning outcomes (r = 0.914, p = 0.000) indicates that online assessments show a high level of objectivity, meaning that they test students fairly, equally and objectively.

Correlations

| | | Reliability | Validity | Authenticity | Objectivity | Learning outcomes |
|---|---|---|---|---|---|---|
| Reliability | Pearson correlation | 1 | 0.906** | 0.907** | 0.892** | 0.900** |
| | Sig. (2-tailed) | | 0.000 | 0.000 | 0.000 | 0.000 |
| | N | 350 | 350 | 350 | 350 | 350 |
| Validity | Pearson correlation | 0.906** | 1 | 0.883** | 0.903** | 0.899** |
| | Sig. (2-tailed) | 0.000 | | 0.000 | 0.000 | 0.000 |
| | N | 350 | 350 | 350 | 350 | 350 |
| Authenticity | Pearson correlation | 0.907** | 0.883** | 1 | 0.898** | 0.898** |
| | Sig. (2-tailed) | 0.000 | 0.000 | | 0.000 | 0.000 |
| | N | 350 | 350 | 350 | 350 | 350 |
| Objectivity | Pearson correlation | 0.892** | 0.903** | 0.898** | 1 | 0.914** |
| | Sig. (2-tailed) | 0.000 | 0.000 | 0.000 | | 0.000 |
| | N | 350 | 350 | 350 | 350 | 350 |
| Learning outcomes | Pearson correlation | 0.900** | 0.899** | 0.898** | 0.914** | 1 |
| | Sig. (2-tailed) | 0.000 | 0.000 | 0.000 | 0.000 | |
| | N | 350 | 350 | 350 | 350 | 350 |

** Correlation is significant at the 0.01 level (2-tailed).
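For readers wishing to reproduce this kind of analysis, the following sketch shows how such a Pearson correlation matrix and the associated two-tailed significance tests could be computed with pandas and SciPy. The data are hypothetical stand-ins for the per-student construct scores, since the study's raw responses are not published here.

```python
# Sketch of how such a correlation matrix could be produced;
# `df` is assumed to hold composite scores for each construct.
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical stand-in data (not the study's raw responses)
rng = np.random.default_rng(2)
base = rng.normal(size=350)
df = pd.DataFrame({
    name: base + rng.normal(scale=0.3, size=350)
    for name in ["reliability", "validity", "authenticity",
                 "objectivity", "learning_outcomes"]
})

print(df.corr(method="pearson").round(3))   # Pearson correlation matrix

# Two-tailed significance for one pair, as reported in the table
r, p = stats.pearsonr(df["objectivity"], df["learning_outcomes"])
print(f"r = {r:.3f}, p = {p:.3f}")
```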

Although correlation coefficients provide valuable insights into the relationship between online assessment and learning outcomes, their inherent limitations require further analysis. The main limitation is that correlation only reflects the direction and strength of the association between online assessment and learning outcomes and does not gauge the impact of the former on the latter [24]. Hence, a regression model was constructed to overcome this limitation.

The regression model below uses learning outcomes as the dependent variable and the online assessment indicators (reliability, validity, authenticity and objectivity) as independent variables. The coefficient of determination of the model is 0.884, which shows that the model is sound and that online assessments explain 88.4% of the variability in learning outcomes. It can therefore be concluded that online assessments have a strong impact on learning outcomes. Furthermore, the significance level of the model is 0.000 < 0.05, implying that the impact of online assessment is significant. Finally, the coefficients indicate the predicted change in learning outcomes for a unit change in each online assessment attribute. The model predicts a 0.205 unit increase in learning outcomes if the reliability of the online assessment increases by one unit (B = 0.205), a 0.208 unit increase for a one-unit increase in validity (B = 0.208), a 0.206 unit increase for a one-unit increase in authenticity (B = 0.206), and a 0.361 unit increase for a one-unit increase in objectivity (B = 0.361).

Model Summary

| Model | R | R square | Adjusted R square | Std. error of the estimate |
|---|---|---|---|---|
| 1 | 0.940^a | 0.884 | 0.883 | 0.40761 |

^a Predictors: (Constant), Objectivity, Reliability, Authenticity, Validity

ANOVA^a

| Model | Sum of squares | df | Mean square | F | Sig. |
|---|---|---|---|---|---|
| 1 Regression | 437.119 | 4 | 109.280 | 657.734 | 0.000^b |
| Residual | 57.320 | 345 | 0.166 | | |
| Total | 494.439 | 349 | | | |

^a Dependent variable: Learning outcomes
^b Predictors: (Constant), Objectivity, Reliability, Authenticity, Validity

Coefficients^a

| Model | B (unstandardized) | Std. error | Beta (standardized) | t | Sig. |
|---|---|---|---|---|---|
| 1 (Constant) | 0.062 | 0.074 | | 0.826 | 0.409 |
| Reliability | 0.205 | 0.053 | 0.204 | 3.895 | 0.000 |
| Validity | 0.208 | 0.050 | 0.210 | 4.167 | 0.000 |
| Authenticity | 0.206 | 0.049 | 0.209 | 4.198 | 0.000 |
| Objectivity | 0.361 | 0.051 | 0.355 | 7.080 | 0.000 |

^a Dependent variable: Learning outcomes
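A minimal sketch of the regression specification behind these tables, using the statsmodels formula API, is given below. As with the correlation sketch, the construct scores are hypothetical stand-ins rather than the study's data, so only the specification, not the output, mirrors the chapter.

```python
# Sketch of the OLS specification: learning outcomes regressed on the
# four online assessment attributes (hypothetical stand-in data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
base = rng.normal(size=350)
df = pd.DataFrame({
    name: base + rng.normal(scale=0.3, size=350)
    for name in ["reliability", "validity", "authenticity",
                 "objectivity", "learning_outcomes"]
})

model = smf.ols(
    "learning_outcomes ~ reliability + validity + authenticity + objectivity",
    data=df,
).fit()
print(model.summary())   # R^2, F test, and per-predictor B, t and p values
```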

Lastly, to further enhance the methodological rigor of the results above, this study calculated the Cronbach's alpha coefficient to check the internal consistency of the questionnaire, i.e., the degree to which the items in the questionnaire consistently measure the construct they are intended to measure. The reported coefficient is 0.955 > 0.7, indicating a very high level of internal consistency and hence a high level of reliability in the questionnaire.

Case Processing Summary

| | | N | % |
|---|---|---|---|
| Cases | Valid | 350 | 100.0 |
| | Excluded^a | 0 | 0.0 |
| | Total | 350 | 100.0 |

^a Listwise deletion based on all variables in the procedure

Reliability Statistics

| Cronbach's alpha | N of items |
|---|---|
| 0.955 | 18 |

21.4.1 Discussion

The model presented adapts a methodology proven in the evaluation of complex learning in face-to-face higher education, incorporating it into virtual environments as a mechanism that can give structure and order to content, materials, interactions and assessments in an online learning environment. In this way, the competencies and their components, which represent the learning goals/outcomes of educational programs, become the central axis of the work, and students interact with objects that represent the levels of complexity expected of their performance (Batu et al. 2018).

Once the contents have been analyzed and the competences of the educational domain identified, the Cognitive Task Analysis leads to a powerful assessment scheme that allows the structuring of students' knowledge to be diagnosed throughout online courses, taking into account the formation of mental models as instances of knowledge complexity and skill levels as instances of cognitive complexity, along the thematic dimension of the course [25]. Modeling the knowledge domain of a course in this way leads to two results that are fundamental for the development of research: on the one hand, the construction of valid and reliable assessment instruments, and on the other, the prescription of activities and instructional materials that can promote learning [26].

This study has performed the modeling of these structures, which can be useful in e-learning research, as it helps determine the influence of variables (see the regression results in the previous section). In sum, this conceptual model could be applied to research on various methods of assessing and promoting online learning, owing to the high level of reliability it offers in delimiting the educational domain by means of a valid and sensitive taxonomy [27]. Online education is considered a flexible, effective and viable option to cover the high educational demand of the modern world, but if we do not generate research that explains the effects of online assessments and learning-promotion interventions, we could be heading towards a scenario characterized by poor results, which is precisely what we no longer want in the education system of a country that aims to become a developed economy [28].

21.5 Conclusions

The aim of this study was to evaluate the impact of online assessment on students' learning outcomes. The study collected data from higher education students in the UAE using a survey questionnaire with a high level of internal consistency (Cronbach's α = 0.955), and it reports that online assessment has a significant positive impact on learning outcomes. In the light of the data procured, the findings indicate that students consider the online assessments they have taken to be designed to check how much they have learned in the course, that these assessments have enabled them to demonstrate the new knowledge they gained during the course, and that the assessments are relevant to the course and the topics it covers. The evidence further shows that assessment exercises help students to understand and analyze a topic after the lecture, that assessment tasks encourage students to demonstrate comparative skills, and that assessment methods are relevant to the course material and learning outcomes. Regarding authenticity, many of the assessment tasks are focused on real-world settings, require students to apply lecture and textbook knowledge in practical settings, and require them to solve problems and apply theoretical knowledge. Regarding objectivity, students think that online assessments are fair and transparent, believe that all students are tested equally through them, and find that the marking schemes, learning outcomes and assessment tasks are suitable for the education level rather than an excessive burden. Finally, the study finds that online assessments can properly check the knowledge and skills of students, test their analytical and problem-solving skills, and be used to evaluate their performance during the course.

21.6 Future Research Opportunities

One of the main implications of this study is that future researchers can use the conceptual framework defined above to assess the impact of online assessment methods on learning outcomes, for example by using the framework to check the reliability, validity, authenticity and objectivity of essays, reports, case studies, etc. Teachers and school administrations can use the framework to assess the impact of their assessment methods and identify opportunities for improvement. It is also important to emphasize that the evidence reported above involves students' perceptions and experiences only, which limits generalizability. Future researchers are encouraged to conduct the same research with other actors, particularly teachers, and to assess its applicability from the teachers' perspective.