Abstract
Since the initial recognition that human knowledge is relational in structure, technologies have been developed to assess and analyze knowledge structure for a variety of purposes. ALA-Reader, an offline computer-based text-analytic software system developed to assess the knowledge structure (KS) reflected in a text, has been modified and improved since its initial announcement (Clariana 2004) through a number of investigations in various kinds of learning environments and across several languages. Building on the empirical evidence from ALA-Reader, we have recently developed its online version, the Graphical Interface of Knowledge Structure (GIKS), which can immediately convert students’ writing into visually represented KS network graphs that indicate specific areas of knowledge strength and weakness relative to a referent KS (e.g., a teacher’s), regardless of the language used. This paper presents an overview of the ALA-Reader system and its applications, as well as the implications of the GIKS system for online contexts.
1 Introduction and Description of the Emerging Technology
There are several existing automatic writing evaluation (AWE) software tools for supporting free-text responses in online courses. However, the existing AWE tools are typically used for summative purposes (e.g., scoring), are based on linguistic assessment (e.g., grammar), and/or are implemented in only a single language. We have developed a formative, structural AWE tool for multiple-language text analysis, the Graphical Interface of Knowledge Structure (GIKS), which immediately converts students’ writing into visually represented knowledge structure (KS) network graphs: a visual and concise snapshot of the degree to which learners have organized and comprehended the content.
The GIKS system was developed by integrating two standalone offline software tools from different research traditions: ALA-Reader (Clariana and Wallace 2007) from education, and Pathfinder KNOT (Schvaneveldt 2004) from a graph-theoretic cognitive science approach. Simply put, ALA-Reader captures the sequence of preselected important key terms in a text (e.g., a student’s essay) as pair-wise links in a proximity file (see Kim 2012 for the validity of ALA-Reader). However, ALA-Reader tends to over-specify term links, which adds some error to the data (Clariana et al. 2009). Pathfinder KNOT is a computer-based “data reduction” network-scaling technique that searches for the strongest link, or most direct path, between nodes, eliminating much of the complexity, or visual noise, of the original network (see Sarwar 2011 for the validity of KNOT). For this reason, ALA-Reader was designed to output file formats directly readable by Pathfinder KNOT; the Pathfinder algorithm converts the raw proximity data from ALA-Reader into a Pathfinder Network (PFnet), so that the resulting PFnet represents the most salient linkages between co-occurring key terms in the text, i.e., the KS (see Fig. 1).
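The capture step just described, recording consecutively occurring key terms as pair-wise links, can be sketched as follows (a minimal illustration of the idea; the function name and details are ours, not ALA-Reader’s implementation):

```python
import re

def extract_links(text, key_terms):
    """Scan a text for preselected key terms and record each pair of
    consecutively occurring terms as an undirected co-occurrence link.
    Illustrative sketch only, not the ALA-Reader implementation."""
    # Find key-term occurrences in reading order (case-insensitive, whole words).
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, key_terms)) + r")\b", re.IGNORECASE
    )
    sequence = [m.group(1).lower() for m in pattern.finditer(text)]
    # Each adjacent pair in the sequence becomes a weighted link.
    links = {}
    for a, b in zip(sequence, sequence[1:]):
        if a != b:
            pair = tuple(sorted((a, b)))
            links[pair] = links.get(pair, 0) + 1
    return links

essay = "Bats use echolocation. Echolocation helps bats find prey in the dark."
print(extract_links(essay, ["bats", "echolocation", "prey", "dark"]))
# → {('bats', 'echolocation'): 2, ('bats', 'prey'): 1, ('dark', 'prey'): 1}
```

The resulting dictionary plays the role of the raw proximity file that Pathfinder KNOT then scales down.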
Notably, the ALA-Reader + Pathfinder (now GIKS) approach is not language-dependent: because it uses pattern matching to capture the relations among preselected keywords in a text, it can be applied in language contexts beyond English (see Fig. 2 for an example).
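The Pathfinder “data reduction” step described above can be sketched as follows, assuming the commonly used parameters q = n − 1 and r = ∞ (a minimal illustration of the pruning principle, not the KNOT implementation):

```python
import math

def pfnet(dist):
    """Minimal sketch of Pathfinder network scaling with q = n - 1 and
    r = infinity: a link survives only if no indirect path offers a
    'shorter' (minimax) route between its endpoints. `dist` is a
    symmetric matrix of link distances, with math.inf meaning no link."""
    n = len(dist)
    # Floyd-Warshall under the r = infinity metric: a path's length is
    # its single largest link, and we seek the path minimizing that.
    minimax = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                via_k = max(minimax[i][k], minimax[k][j])
                if via_k < minimax[i][j]:
                    minimax[i][j] = via_k
    # Keep only links whose direct distance equals the minimax distance.
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] < math.inf and dist[i][j] <= minimax[i][j]]

# Three nodes: the weak 0-2 link (distance 3) is bypassed by 0-1-2.
d = [[0, 1, 3],
     [1, 0, 1],
     [3, 1, 0]]
print(pfnet(d))  # → [(0, 1), (1, 2)]
```

Because pruning depends only on the numeric link distances, the step is indifferent to the language the source text was written in.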
2 Relevance for Learning, Instruction, and Assessment
It is becoming widely understood that meaningful learning leads to well-organized knowledge (e.g., the novice-to-expert shift; Herr 2009). For example, the National Research Council recommends assessing KS as a key indication of meaningful learning (NRC 2000). Learning and attaining new knowledge depend mainly on how we organize knowledge in memory (Clariana 2010). Similarly, Spector and Koszalka (2004) argued that learning progress can be viewed as cognitive change in the direction of expert-like KS. In this sense, learning progress is likely to be well illustrated by the notion of KS when it is characterized as a set of directional changes in a learner’s KS (Ifenthaler and Pirnay-Dummer 2014; Schlomske and Pirnay-Dummer 2009). Thus, we propose that KS is worth measuring.
How, then, can we effectively capture and visually represent KS? Recent cognitive studies characterize KS as associative networks of concepts with weighted connections (much like a mental lexicon consisting of weighted associations between concepts; see Ifenthaler 2014 for detail), and learning can strengthen link weights as well as enrich the concepts in the mental network (Meyer et al. 2012). Clariana et al. (2013, 2014) argued that learning outcomes should be examined within the framework of a nomological network in which the interrelationships among constructs are identified and empirically tested, and thus that methods capturing network properties can be the most effective for representing and describing KS. Based on this connectionist theoretical basis, the ALA-Reader + Pathfinder approach was developed and has been used to capture, visually represent, and compare the KS inherent in texts as network graphs, because writing closely resembles thinking and learning and is thus a concrete artifact or manifestation of cognition (Emig 1977).
Since its initial release by Clariana (2004), the ALA-Reader approach has been modified and improved through a number of investigations in diverse domains within and across languages, measuring KS from the texts of monolingual speakers of English (Clariana et al. 2014), German (Clariana et al. 2013), and Dutch (Fesel et al. 2015), and of bilingual speakers of Dutch/English (Mun 2015), Chinese/English (Tang and Clariana 2016), and Korean/English (Kim and Clariana 2015, 2016).
The ALA-Reader approach has also compared favorably with other computational text-analysis models, including global-scale (implicit) models such as Latent Semantic Analysis and Hyperspace Analogue to Language (LSA and HAL; e.g., Su and Hung 2010) and local-scale (explicit) models such as T-MITOCAR and CMM (e.g., Kim 2012). Overall, these investigations reported similar or better performance of ALA-Reader for scoring essays. For example, in Su and Hung’s (2010) study, students’ essays were scored with LSA (a commercial essay-scoring system) and with ALA-Reader, and each method’s scores were compared with human raters’. The correlations obtained were r = .56 for LSA with raters and r = .71 for ALA-Reader with raters; ALA-Reader outperformed LSA. Distinctively, LSA is a “black box”: users (e.g., students) do not know what terms are involved or how it derived its structures, and it provides only a numeric score, without pointing to any particular misconception a student holds. The ALA-Reader approach, by contrast, is a “glass box” in that it provides a visual display of KS, allowing quick visual examination and comparison among KS representations; reflection on KS has repeatedly been shown to be an effective means of formative assessment (e.g., Sarwar and Trumpower 2015).
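One common way to derive a numeric score from such networks is a link-overlap index between a student’s network and a referent’s (a hedged sketch of the general technique; not necessarily the exact scoring formula used in these studies):

```python
def link_similarity(student_links, referent_links):
    """Set-overlap similarity between two link sets: shared links
    divided by links appearing in either network (Jaccard-style index).
    Illustrative sketch, not ALA-Reader's published formula."""
    a, b = set(student_links), set(referent_links)
    return len(a & b) / len(a | b) if a | b else 0.0

referent = {("bats", "echolocation"), ("echolocation", "prey"), ("prey", "dark")}
student = {("bats", "echolocation"), ("bats", "prey")}
print(link_similarity(student, referent))  # → 0.25 (1 shared link of 4 total)
```

Unlike an LSA score, the index is fully inspectable: every shared or missing link can be shown to the student directly.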
3 GIKS in Practice
Despite the large number of ALA-Reader applications for measuring KS in a text, there have been very few studies examining the effectiveness of KS network graphs in online learning contexts, mainly because the ALA-Reader + Pathfinder procedure is not currently automated. Thus, we have recently developed a browser-based online version of ALA-Reader + Pathfinder, GIKS, with support from the Pennsylvania State University’s Center for Online Innovation in Learning (COIL). When individual students read a lesson text and submit their written responses (e.g., a summary essay) on the GIKS website, the software immediately generates and displays KS network graphs of their writing and of the lesson text (or a referent benchmark KS) to indicate specific areas of students’ current knowledge strengths and weaknesses (see Fig. 3). For example, based on the student’s KS shown in Fig. 3, she may note that she did not use the key concepts talk, benefit, low-pitched, and scientists, and where these key concepts may fit, and that she associated distance with things and animals rather than with access. This feedback could support self-regulated learning by providing a visual knowledge artifact for reflection, namely, reflection on one’s own KS in comparison with a referent KS. Reflection on feedback regarding the structure of one’s own knowledge can empower students by making learning engaging and personal (Clariana 2010); reflection can be done well or badly, successfully or unsuccessfully, but it is always a productive learning experience (Spector and Koszalka 2004). Thus, the GIKS system can promote online students’ active engagement in the development of their KS during learning by providing individualized KS feedback, and can also benefit instructors’ understanding of their students’ KS and thinking, which may lead to improved pedagogy and individualized instructional strategies.
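The kind of formative feedback just described, missing key concepts and links that differ from the referent, can be sketched as a simple network diff (illustrative function and example data loosely modeled on the Fig. 3 narrative; not the GIKS implementation):

```python
def ks_feedback(student_links, referent_links, key_terms):
    """Report key terms absent from the student's network and links
    that differ from the referent network. Sketch only."""
    used = {term for link in student_links for term in link}
    return {
        "missing_terms": sorted(set(key_terms) - used),
        "missing_links": sorted(set(referent_links) - set(student_links)),
        "extra_links": sorted(set(student_links) - set(referent_links)),
    }

referent = {("distance", "access"), ("low-pitched", "talk")}
student = {("distance", "animals"), ("distance", "things")}
key_terms = ["distance", "access", "animals", "things", "talk", "low-pitched"]
fb = ks_feedback(student, referent, key_terms)
print(fb["missing_terms"])  # → ['access', 'low-pitched', 'talk']
print(fb["extra_links"])    # → [('distance', 'animals'), ('distance', 'things')]
```

Rendering these three lists directly on the paired network graphs is what turns the score into actionable, reflection-oriented feedback.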
4 Significant Challenges and Conclusions
One significant challenge that arose in our experiments with ALA-Reader is that the approach may extend only to the analysis of texts containing fairly technical, and thus specific, vocabulary. Less technical content will likely include synonyms and metonyms of the selected terms across texts that are not recognized by the ALA-Reader approach, and this would add error to the analysis in proportion to how many of these alternate forms are missed by the software (Clariana 2010). This error can be mitigated to some degree by careful identification of synonyms and metonyms, which must then be included in the analysis. Nevertheless, additional research is needed to determine the validity of the ALA-Reader approach for less technical text content (e.g., narrative text).
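The mitigation just described, folding alternate surface forms into their canonical key terms before link extraction, can be sketched as a simple normalization pass (an illustrative sketch; the synonym mapping itself still has to be built by hand for each lesson):

```python
def normalize_terms(tokens, synonym_map):
    """Map alternate surface forms (synonyms/metonyms) to a canonical
    key term before link extraction. Sketch of the mitigation step;
    the mapping must be curated manually per content domain."""
    return [synonym_map.get(token.lower(), token.lower()) for token in tokens]

# Hypothetical mapping for an echolocation lesson.
synonyms = {"sonar": "echolocation", "echo-location": "echolocation"}
print(normalize_terms(["Bats", "use", "sonar"], synonyms))
# → ['bats', 'use', 'echolocation']
```

The residual error then depends only on how complete the hand-curated mapping is, which is exactly why less technical, more paraphrase-rich content remains an open validity question.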
References
Clariana, R. B. (2004). ALA-Reader (beta version). Retrieved from http://www.personal.psu.edu/rbc4.
Clariana, R. B. (2010). Multi-decision approaches for eliciting knowledge structure. In Computer-based diagnostics and systematic analysis of knowledge (pp. 41–59). Boston, MA: Springer US. doi:10.1007/978-1-4419-5662-0_4.
Clariana, R. B., Engelmann, T., & Yu, W. (2013). Using centrality of concept maps as a measure of problem space states in computer-supported collaborative problem solving. Educational Technology Research and Development, 61(3), 423–442. doi:10.1007/s11423-013-9293-6.
Clariana, R., & Wallace, P. (2007). A computer-based approach for deriving and measuring individual and team knowledge structure from essay questions. Journal of Educational Computing Research, 37(3), 211–227. doi:10.2190/EC.37.3.a.
Clariana, R. B., Wallace, P. E., & Godshalk, V. M. (2009). Deriving and measuring group knowledge structure from essays: The effects of anaphoric reference. Educational Technology Research and Development, 57(6), 725–737. doi:10.1007/s11423-009-9115-z.
Clariana, R. B., Wolfe, M. B., & Kim, K. (2014). The influence of narrative and expository lesson text structures on knowledge structures: Alternate measures of knowledge structure. Educational Technology Research and Development, 62(5), 601–616. doi:10.1007/s11423-014-9348-3.
National Research Council. (2000). How people learn. Washington, DC: National Academies Press. doi:10.17226/9853.
Emig, J. (1977). Writing as a mode of learning. College Composition and Communication, 28(2), 122. doi:10.2307/356095.
Fesel, S. S., Segers, E., Clariana, R. B., & Verhoeven, L. (2015). Quality of children’s knowledge representations in digital text comprehension: Evidence from pathfinder networks. Computers in Human Behavior, 48, 135–146. doi:10.1016/j.chb.2015.01.014.
Herr, N. (2009). The sourcebook for teaching science, grades 6–12: Strategies, activities, and instructional resources. Education Review//Reseñas Educativas. Retrieved from http://edrev.asu.edu/index.php/ER/article/view/1111.
Ifenthaler, D. (2014). AKOVIA: Automated knowledge visualization and assessment. Technology, Knowledge and Learning, 19(1–2), 241–248.
Ifenthaler, D., & Pirnay-Dummer, P. (2014). Model-based tools for knowledge assessment. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (4th ed., pp. 289–301). New York: Springer.
Kim, M. K. (2012). Cross-validation study of methods and technologies to assess mental models in a complex problem solving situation. Computers in Human Behavior, 28(2), 703–717. doi:10.1016/j.chb.2011.11.018.
Kim, K., & Clariana, R. B. (2015). Knowledge structure measures of reader’s situation models across languages: Translation engenders richer structure. Technology, Knowledge and Learning, 20(2), 249–268.
Kim, K., & Clariana, R. B. (2016). Text signals influence second language expository text comprehension: Knowledge structure analysis. Educational Technology Research and Development, 1-22. doi:10.1007/s11423-016-9494-x.
Meyer, B. J. F., Ray, M. N., & Middlemiss, W. (2012). Children’s use of comparative text signals: The relationship between age and comprehension ability. Discours. doi:10.4000/discours.8637.
Mun, Y. (2015). The effect of sorting and writing tasks on knowledge structure measure in bilinguals’ reading comprehension. Retrieved from https://scholarsphere.psu.edu/files/x059c7329.
Sarwar, G. (2011). Structural assessment of knowledge for misconceptions: Effectiveness of structural feedback provided by pathfinder networks in the domain of physics. Kolln: LAP Lambert Academic Publishing.
Sarwar, G. S., & Trumpower, D. L. (2015). Effects of conceptual, procedural, and declarative reflection on students’ structural knowledge in physics. Educational Technology Research and Development, 63(2), 185–201.
Schlomske, N., & Pirnay-Dummer, P. (2009). Model based assessment of learning dependent change within a two semester class. Educational Technology Research and Development, 57(6), 753–765. doi:10.1007/s11423-009-9125-x.
Schvaneveldt, R. W. (2004). Finding meaning in psychology. In A. F. Healy (Ed.), Experimental cognitive psychology and its applications: Festschrift in honor of Lyle Bourne, Walter Kintsch, and Thomas Landauer. Washington, DC: American Psychological Association.
Spector, J., & Koszalka, T. (2004). The DEEP methodology for assessing learning in complex domains. Final report to the National Science Foundation Evaluative Research and Evaluation. Syracuse, NY: Syracuse University.
Su, I-H., & Hung, Pi-H. (2010). Validity study on automatic scoring methods for the summarization of scientific articles. A paper presented at the 7th Conference of the International Test Commission 19–21 July, 2010, Hong Kong. https://bib.irb.hr/datoteka/575883.itc_programme_book_-final_2.pdf.
Tang, H., & Clariana, R. (2016). Leveraging a sorting task as a measure of knowledge structure in bilingual settings. Technology, Knowledge and Learning. doi:10.1007/s10758-016-9290-z.
Acknowledgements
This material is based upon work supported by the Penn State University's Center for Online Innovation in Learning (COIL). I would like to express my heartfelt appreciation to Dr. Roy B. Clariana for his guidance and encouragement throughout my academic study. I am fortunate to have you as my academic advisor.
Kim, K. Graphical Interface of Knowledge Structure: A Web-Based Research Tool for Representing Knowledge Structure in Text. Tech Know Learn 24, 89–95 (2019). https://doi.org/10.1007/s10758-017-9321-4