Abstract
The assessment phase plays a key role in students’ teaching-learning processes, since it is the phase in which acquired knowledge is validated and students’ shortcomings and/or strengths are detected. However, the question selection performed by the teacher or the learning platform does not always respond to the needs, limitations, and/or cognitive characteristics of the students. In this context, it becomes necessary to incorporate mechanisms that better capture the main features of each student so that they can be used during question selection. This brings several benefits, such as a more accurate measurement of acquired knowledge, an increase in students’ interest, and better detection of failures for recommending new educational resources. In order to select questions that better fulfil students’ needs, this paper proposes a characterization of the most relevant techniques and models for question selection. Likewise, an ontological model of personalized adaptive assessment is proposed, supported by Artificial Intelligence techniques that incorporate relevant cognitive and contextual information about the student to better select and classify questions during the e-assessment process.
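To illustrate the kind of adaptive question selection the abstract motivates, the sketch below matches question difficulty against a running estimate of student ability, in the spirit of item-response-theory-based adaptation. This is a minimal illustration, not the model proposed in the paper; all names (`Question`, `estimate_ability`, `select_question`) and the smoothed-proportion ability estimate are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    difficulty: float  # 0.0 (easy) .. 1.0 (hard)

def estimate_ability(correct: int, attempted: int) -> float:
    """Naive ability estimate: Laplace-smoothed proportion of correct answers."""
    return (correct + 1) / (attempted + 2)

def select_question(bank: list[Question], correct: int, attempted: int) -> Question:
    """Pick the question whose difficulty is closest to the estimated ability."""
    ability = estimate_ability(correct, attempted)
    return min(bank, key=lambda q: abs(q.difficulty - ability))

bank = [
    Question("Define an ontology", 0.2),
    Question("Compare item response theory models", 0.5),
    Question("Design an adaptive e-assessment workflow", 0.8),
]

# A student with 4/5 correct is routed to a harder item than one with 1/5.
print(select_question(bank, correct=4, attempted=5).text)
print(select_question(bank, correct=1, attempted=5).text)
```

A full system would replace the smoothed proportion with a proper IRT ability estimate and draw the student features from the ontological and contextual model described in the paper.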
© 2019 Springer Nature Switzerland AG
Cite this paper
Salazar, O.M., Ovalle, D.A., de la Prieta, F. (2019). Towards an Adaptive and Personalized Assessment Model Based on Ontologies, Context and Collaborative Filtering. In: Rodríguez, S., et al. Distributed Computing and Artificial Intelligence, Special Sessions, 15th International Conference. DCAI 2018. Advances in Intelligent Systems and Computing, vol 801. Springer, Cham. https://doi.org/10.1007/978-3-319-99608-0_35
DOI: https://doi.org/10.1007/978-3-319-99608-0_35
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-99607-3
Online ISBN: 978-3-319-99608-0
eBook Packages: Intelligent Technologies and Robotics (R0)