Abstract
This paper proposes a conceptual professional assessment model for RDF data crowdsourcing workers. Based on the concept hierarchy tree of RDF data crowdsourcing tasks, the model extracts test task instances similar to the crowdsourcing tasks from a standard knowledge base and uses knowledge representation to automatically build option sets for those instances, thereby generating conceptual professionalism test tasks. The quality of RDF data crowdsourcing workers is ensured by computing the conceptual expertise reflected in their answers to the test tasks. We carried out simulation experiments, and the results verify the validity of the proposed conceptual professional assessment model.
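The core idea of the assessment, scoring a worker's conceptual expertise from multiple-choice test tasks tied to concepts in a hierarchy, can be sketched roughly as follows. This is a minimal illustrative assumption, not the paper's actual formula: it computes per-concept accuracy from a worker's answers and aggregates it into a single score, with optional concept weights; all names (`conceptual_expertise`, `answers`, `weights`) are hypothetical.

```python
# Hypothetical sketch: aggregate a worker's per-concept test-task accuracy
# into one expertise score. Concepts, weights, and the aggregation rule are
# illustrative assumptions, not the model's published definitions.
from collections import defaultdict

def conceptual_expertise(answers, weights=None):
    """answers: list of (concept, is_correct) pairs from test tasks.
    weights: optional dict mapping concept -> importance weight."""
    per_concept = defaultdict(lambda: [0, 0])  # concept -> [correct, total]
    for concept, correct in answers:
        per_concept[concept][1] += 1
        if correct:
            per_concept[concept][0] += 1
    score, total_w = 0.0, 0.0
    for concept, (c, n) in per_concept.items():
        w = (weights or {}).get(concept, 1.0)
        score += w * (c / n)   # weighted per-concept accuracy
        total_w += w
    return score / total_w if total_w else 0.0

# Example: perfect on "Movie" tasks, half right on "Actor" tasks.
answers = [("Movie", True), ("Movie", True), ("Actor", False), ("Actor", True)]
print(conceptual_expertise(answers))  # 0.75 with equal weights
```

Workers whose score falls below a chosen threshold could then be filtered out before their crowdsourcing answers are accepted, which is the quality-control role the abstract describes.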
L. Huang contributed equally to this work and should be considered a co-first author.
Notes
1. The Movie dataset comes from https://github.com/SimmerChan/KG-demo-for-movie.
2. The ConsumableFood dataset comes from https://download.csdn.net/download/taoxiuxia/9454140.
Acknowledgement
This work was supported by the Scientific Research Project of the Education Department of Hubei Province under grant B2019008.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Liu, Z., Huang, L., Yang, L., Li, Q. (2021). A Conceptual Professional Assessment Model Based RDF Data Crowdsourcing. In: Meng, H., Lei, T., Li, M., Li, K., Xiong, N., Wang, L. (eds) Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery. ICNC-FSKD 2020. Lecture Notes on Data Engineering and Communications Technologies, vol 88. Springer, Cham. https://doi.org/10.1007/978-3-030-70665-4_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-70664-7
Online ISBN: 978-3-030-70665-4
eBook Packages: Intelligent Technologies and Robotics