
11.1 Introduction

Each one of us holds different beliefs and theories about the world. Learners’ theories can be conceived, articulated, and assessed more efficiently in the form of causal maps—networks of events (nodes) and causal relationships (links) between events—than in the form of linearly written text. Some causal maps may be more accurate than others—depending on the presence and/or absence of supporting evidence; and some maps and the causal links within the maps may be more or less firmly held—depending on both the strength of the supporting evidence and the strength of specific causal relationships. Furthermore, causal maps are not fixed and unchanging. Instead, they are incomplete and constantly evolving; may contain errors, misconceptions, and contradictions; may provide simplified explanations of complex phenomena; and may often contain implicit measures of uncertainty about their validity (Seel, 2003). As a result, causal maps can change, but usually not randomly. That is, we presume that events trigger and provide the impetus for change. Causal maps and other similar forms of visual representations are increasingly being used to help assess learners’ understanding of complex domains and/or learners’ progress towards increased understanding (Nesbit & Adesope, 2006; Spector & Koszalka, 2004). However, the methods and software tools to measure how learners’ maps change over time (Doyle & Radzicki, 2007; Ifenthaler & Seel, 2005) and how specific events (e.g., pedagogical discourse) trigger changes in learners’ causal maps (Shute, Jeong, & Zapata-Rivera, in press) have not yet been adequately addressed.

To address some of these methodological challenges, Ifenthaler and Seel (2005) used transitional probabilities to determine how likely learners’ maps (examined as a whole) were to change in structural similarity across eight different time periods. Raters were given a specially designed questionnaire to determine whether a learner’s map at one point in time differed in structure from the map produced at the immediately preceding point in time. The study found that maps were most likely to change in structure at the early stages of the map construction process, with the likelihood of changes dropping from one version to the next. However, Ifenthaler, Madsuki, and Seel (2008) found that changes in scores on seven of nine measures of structural quality (e.g., total number of links, level of connectedness, average number of incoming and outgoing vertices per node) were not correlated with the degree to which the learners’ maps matched the expert map. Not surprisingly, the one aspect of the learners’ maps that did correlate with learning was the number of links shared between the learner’s map and the expert map. Altogether, these findings suggest that measures used to gauge changes at the global level (where the unit of analysis is the map as a whole) and measures that are not scored in relation to a target map (e.g., expert or collective group map) may have little or no value when used to assess learners.

One alternative approach is to measure changes at a more micro-level by using the node-link-node as the unit of analysis and unit of comparison between learners’ and target maps. At this level, we can examine how likely links between specific nodes are to change from one state to another (e.g., strong vs. moderate vs. weak vs. no causal impact; or high vs. moderate vs. low probability/likelihood) as maps change over time. We can also see to what extent the observed changes in the values of each causal link converge towards the causal link values present in the target map. For example, we expect that the causal link values for links representing learners’ misconceptions (e.g., erroneous links not observed in the target map) or learners’ shallow understandings (e.g., links between two nodes not directly related and/or better explained by inserting a mediating node) will converge towards a value of 0 (no causal link) over time, following close examination and critical discussion of the causal relationships. At the same time, the expectation is that the causal link values of the links not observed in a learner’s map (but present in the target map) will progress from a value of 0 to the value observed in the target map. Using the node-link-node as the unit of analysis enables us to examine precisely how and to what extent observed changes in targeted links help learners achieve, or inhibit them from achieving, the target learning outcomes (e.g., more accurate, deeper, and more precise understanding). Furthermore, this approach enables us to examine how specific interventions and instructional events (e.g., depth of argumentation, the production of supporting evidence) affect the direction and magnitude of changes across links that are missing or present, valid or invalid.
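To make the node-link-node unit of analysis concrete, the following sketch (illustrative Python, not jMAP code; the node names, the 0–3 strength scale, and the convergence measure are assumptions introduced here) represents each map as a set of link values and tracks how far each successive version of a learner’s map sits from a target map.

```python
# A minimal sketch of using the node-link-node as the unit of analysis.
# Maps are dictionaries keyed by (cause, effect) pairs with strength values
# 0-3 (0 = no link, 1 = weak, 2 = moderate, 3 = strong). Illustrative only.

def link_states(learner_map, target_map):
    """Pair each link's learner value with its target value, treating links
    absent from one of the maps as having a value of 0 in that map."""
    links = set(learner_map) | set(target_map)
    return {link: (learner_map.get(link, 0), target_map.get(link, 0))
            for link in links}

def convergence(map_versions, target_map):
    """Mean absolute distance from the target values for each map version;
    decreasing values indicate links converging toward the target map."""
    distances = []
    for version in map_versions:
        states = link_states(version, target_map)
        distances.append(sum(abs(l - t) for l, t in states.values()) / len(states))
    return distances

# Hypothetical example: an erroneous link fades to 0 while a missing target
# link emerges and strengthens across three successive maps.
target = {("media", "learning"): 0, ("method", "learning"): 3}
versions = [
    {("media", "learning"): 3},                              # initial map
    {("media", "learning"): 1, ("method", "learning"): 2},   # after discussion
    {("method", "learning"): 3},                             # final map
]
print(convergence(versions, target))   # [3.0, 1.0, 0.0]
```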

To explore the strengths and limitations of using the node-link-node as the unit of analysis, this chapter presents a software tool called jMAP that can be used to identify differences between learners’ causal maps, initiate collaborative argumentation to produce justifications for proposed causal links, and produce changes in learners’ causal maps that better reflect/represent complex phenomena (see Fig. 11.1). Similar to the Cognizer program produced by Nakayama and Liao (2005), jMAP enables learners to individually produce causal maps (with numerically weighted links), thus reducing unwanted biases and the influence of other learners (Doyle et al., 2007). Once learners submit their maps, they can download and aggregate the maps of all or selected learners to capture the group’s collective understanding. Unique to jMAP, the learner can generate matrices that compute and report the percentage of learners’ maps sharing each causal link (along with the average strength of each link observed across all learners’ maps) and can superimpose his/her own causal diagram over the aggregate map to visually identify similarities and differences among the causal maps of all learners (Jeong, 2008).
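The aggregation step can be sketched as follows (an illustrative computation, not jMAP’s internal code; the map format and node names are assumptions): for every link, count the share of learners whose map contains it and average its strength across those maps.

```python
# Sketch of aggregating individual causal maps into a group summary:
# the percentage of maps containing each link and the mean strength of the
# link across the maps that contain it. Data structures are assumptions.

from collections import defaultdict

def aggregate(maps):
    counts = defaultdict(int)
    strengths = defaultdict(list)
    for causal_map in maps:
        for link, strength in causal_map.items():
            if strength > 0:                    # 0 means no causal link
                counts[link] += 1
                strengths[link].append(strength)
    n = len(maps)
    return {link: {"share": counts[link] / n,
                   "mean_strength": sum(values) / len(values)}
            for link, values in strengths.items()}

learner_maps = [
    {("accountability", "motivation"): 2, ("feedback", "learning"): 3},
    {("accountability", "motivation"): 3},
    {("feedback", "learning"): 1},
]
for link, stats in aggregate(learner_maps).items():
    print(link, f"share = {stats['share']:.0%}, mean strength = {stats['mean_strength']:.1f}")
```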

Fig. 11.1 Causal map produced in jMAP using weighted links to specify the strength of each causal relationship and dotted links to specify the level of confidence or evidentiary support

Some of the other unique functions of jMAP enable researchers and teachers to: (a) graphically superimpose an individual learner’s map over the expert/target map to visually identify and highlight changes occurring over time in the causal maps of an individual or group of learners; (b) determine the extent to which the observed changes progress toward a target or collective model; (c) determine precisely where, when, and to what extent changes occur in the causal links within the causal maps; and most importantly (d) identify and measure how and to what extent specific events (e.g., viewing consensus data, discussing evidence, engaging in specific and critical discourse patterns) trigger changes in the causal links between various states (e.g., strong, moderate, weak, and no causal link) as demonstrated in Fig. 11.2.

Fig. 11.2 A learner’s map depicting a view of media’s relation to learning with positive (+) and opposing (–) evidence and differential link strengths

The following sections in this chapter present the findings from two case studies. The first study illustrates how sequential analysis can be used to build stochastic models that assess how specific learning events affect the way learners change causal links in their causal maps. The second study evaluates some of the potential advantages and issues of using software tools like jMAP to support learning and assessment. The chapter concludes with a brief discussion of possible directions for future research and development.

11.2 Assessing Change in Causal Maps with Sequential Analysis

An initial case study was conducted to develop and test the jMAP software and its ability to help us visually and quantitatively analyze how causal maps change over time. Specifically, this study assessed how the causal links between nodes changed in strength values (i.e., no link, weak, moderate, and strong) in learners’ causal maps after learners reviewed readings and discussed related issues in an online threaded discussion. Most importantly, this study examined how particular events (i.e., the presence of evidentiary support derived from group discussions and readings) affected how learners changed the strength values of the causal links presented in their causal maps.

11.2.1 Method

Twelve graduate students in the Instructional Systems program at Florida State University participated in a weeklong online discussion on the topic Technologies and Media in Distance Education. Students were assigned a set of readings and were required to post at least six contributions to the discussion forum across the one-week period. Each student produced three causal maps representing their current beliefs about the functional/causal relationships among ten variables related to the topic; the ten variables were selected by the course instructor. Four learners did not submit one or more of the maps (for reasons unknown), and as a result, the maps of eight learners were used in this study to illustrate the tools and methodology.

The students’ objectives were to describe the conceptual differences between media, technology, and instructional methods, and to state criteria for making decisions about the selection and use of delivery systems. To achieve these objectives, students were presented with readings from which to extract arguments, counter-arguments, explanations, and supporting/opposing evidence to bring into an online team debate over the claim that “One’s choice of media (text, graphics, audio, and video) significantly increases student learning”. Before, during, and after the team debates, each student was required to draw causal maps to convey their evolving understanding of how media affects learning. The maps were completed at three specific times during the week: (a) before the readings and discussions, (b) in the middle of the week following the initial discussions, and (c) at the end of the week following the conclusion of the discussions. Students were individually assigned to debate one side of the issue during the first three days and were then asked to argue the opposite side of the issue during the last three days. The readings presented learners with two opposing views: (a) media makes no difference in learning, and (b) media does make a difference.

In each causal map, learners could vary the density (line width) of each link to convey the level of impact one variable has on another (weak = thinnest, moderate = medium, strong = thickest). Students judged the strength of each causal link based on empirical evidence presented in the readings (e.g., the reported effect sizes or the percent difference or increase in learning). In addition, learners specified the direction (+ or –) and the amount of evidence (if available) to support and justify the causal links presented in their maps. The experimenter coded all maps by hand and recorded each observed causal link into adjacency matrices—one matrix for each student map. For example, the cell in row 2, column 6 in Fig. 11.3 shows that the student believes a causal relationship exists between “novelty” and “media quality” (e.g., when an instructor uses new media for the first time, its novelty tends to motivate the instructor to produce higher quality media). The first digit in the cell signifies that the causal relationship is weak (1 = weak, 2 = moderate, 3 = strong). If a second digit appears, it signifies that the learner possessed some knowledge of evidence to support this causal relationship.
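The two-digit cell coding can be illustrated with a short sketch (the encoding follows the description above; the example row and node names are hypothetical):

```python
# Sketch of decoding adjacency-matrix cells: the first digit is causal strength
# (1 = weak, 2 = moderate, 3 = strong; blank = no link) and an optional second
# digit of 1 flags evidentiary support. Cell contents here are hypothetical.

def decode_cell(cell):
    """Return (strength, has_evidence) for one adjacency-matrix cell."""
    cell = cell.strip()
    if not cell:
        return 0, False
    strength = int(cell[0])
    has_evidence = len(cell) > 1 and cell[1] == "1"
    return strength, has_evidence

# A hypothetical "novelty" row: "11" would denote a weak, evidence-backed link.
node_names = ["learning", "motivation", "interaction", "media quality"]
novelty_row = ["", "2", "", "11"]
for name, cell in zip(node_names, novelty_row):
    strength, evidence = decode_cell(cell)
    if strength:
        print(f"novelty -> {name}: strength = {strength}, evidence = {evidence}")
```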

Fig. 11.3 Adjacency matrix of links and evidentiary support derived from the learner’s causal map, with “new nodes” inserted in the last two rows. Note: The first digit in each cell signifies the strength of causal impact (blank, 1, 2, or 3) that one node (listed in the left column) has on another node (listed in the top row). The second digit (1 or blank) signifies whether the learner possesses evidence to support the proposed causal relationship

In the online debates, learners were required to post specific messages and responses (see Table 11.1) to a threaded discussion (Fig. 11.4) hosted in Blackboard, a course management system. In each posting, learners inserted a corresponding tag into the subject heading to explicitly identify the function of the posting (Jeong & Juong, 2007). As a result, each posting served one and only one function at a time. Included with each tag was a + or – symbol to identify team position. Students were required to follow this protocol to receive points for participating in the weeklong debate. At any time, learners could return to their postings to insert the appropriate tags into the message headings.

Fig. 11.4 Team debate with message tags in an online threaded discussion board. Note: Digits signify causal link strength/impact presented with and without supporting evidence

Table 11.1 Message tags and their definitions

11.2.2 Data for Sequential Analysis

To analyze the data recorded in the adjacency matrices for each learner’s causal map, jMAP was developed and used to sequentially tabulate data from the adjacency matrices to capture observed changes in causal strength values between learners’ maps produced on Monday versus Thursday and Thursday versus Sunday. The sequential data was imported into the Discussion Analysis Tool or DAT (Jeong, 2005a, 2005b) to produce a frequency matrix (Fig. 11.5) to reveal patterns in the changes observed in links that possessed vs. did not possess evidentiary support. The frequencies reported in the upper left quadrant of the matrix were used to compute the transitional probabilities (or relative frequencies) for changes in strength values observed when causal links were not presented with supporting evidence. The probabilities of a change between each of the possible strength values in causal links with supporting evidence were computed by combining the cell frequencies from the other three quadrants of the frequency matrix (when evidence was presented in the previous and/or current map). The DAT software was then used to create the transitional state diagrams in Fig. 11.6 to visually convey and compare the observed transitional probabilities between causal links with versus without supporting evidence.
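The tabulation of transitions and their conversion into transitional probabilities can be sketched as follows (an illustrative computation, not the DAT software itself; the sample transition data are hypothetical):

```python
# Sketch of building a transition frequency matrix over causal-link strength
# states (0 = none, 1 = weak, 2 = moderate, 3 = strong) and row-normalizing it
# into transitional probabilities. The sample transitions are hypothetical.

from collections import Counter

STATES = [0, 1, 2, 3]

def transition_probs(transitions):
    """transitions: iterable of (previous_strength, current_strength) pairs."""
    counts = Counter(transitions)
    probs = {}
    for prev in STATES:
        row_total = sum(counts[(prev, cur)] for cur in STATES)
        for cur in STATES:
            probs[(prev, cur)] = counts[(prev, cur)] / row_total if row_total else 0.0
    return probs

# Transitions observed for links presented without supporting evidence (toy data).
no_evidence = [(1, 0), (1, 0), (1, 1), (2, 1), (2, 2), (3, 3), (3, 2)]
probs = transition_probs(no_evidence)
print(f"P(weak -> none) = {probs[(1, 0)]:.2f}")   # 0.67 for this toy data
```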

Fig. 11.5 Frequency matrix with reported number of observed changes in strength values between revised and previous causal maps

Fig. 11.6 Transitional state diagrams revealing the direction and likelihood of changes in causal strengths when links are presented without vs. with supporting evidence

11.2.3 Findings

The sequential analysis of causal link values revealed that evidentiary support strongly influenced how likely a student was to retain or eliminate a causal link between specific variables in each successive revision of their causal maps. Overall, links presented without evidence were more likely to change to lower strength values in subsequent revisions of the map, whereas links presented with supporting evidence were more likely to remain the same or increase in strength value.

For example, the left diagram in Fig. 11.6 shows that when no evidence was present to justify a causal link, the causal links that were assigned a strength value of one (1 = weak impact) were changed to a strength value of zero (None = no impact) 50% of the time (based on the examination of all changes observed between the first and second and between the second and third causal maps). In contrast, the right diagram shows that when causal links were presented with evidence, the links with strength values of one were much more likely to remain the same (78% instead of 50%), with 11% of the values increasing from weak to moderate impact and 11% of links increasing from weak to strong impact. A similar pattern can be seen in the causal links that were assigned strength values of two and three. A Chi-Square test can be used to test for significant differences between specific links that were presented with versus without supporting evidence.
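As an illustration of the Chi-Square test mentioned above, the snippet below (hypothetical counts, not the study’s data) compares how often weak links were retained versus dropped when presented with and without supporting evidence:

```python
# Illustrative Chi-Square test on a 2 x 2 table of transition outcomes for weak
# links, split by whether supporting evidence was present. Counts are hypothetical.

from scipy.stats import chi2_contingency

#                     retained or strengthened   dropped to "no link"
table = [[14, 4],   # weak links presented WITH evidence
         [9, 9]]    # weak links presented WITHOUT evidence

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```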

11.2.4 Implications

These findings illustrate how sequential analysis and state diagrams (Fig. 11.6) can be used to assess changes in learners’ causal understanding and learning trajectories by analyzing how causal links (examined across all learners) change in strength values (i.e., no link, weak, moderate, and strong). Furthermore, these findings illustrate how sequential analysis can be used to assess how particular learning or learner events (e.g., providing students with access to empirical data, or learners’ knowledge of evidentiary support) affect the direction and likelihood of the changes learners make to the causal strength values of the links presented in their causal maps.

The methods and software tools presented here are intended to make the assessment of causal understanding and the process of argumentation more feasible and less labor intensive. The same tools and methods can be used to assess the learner’s ability to engage in high-level argumentation, measured in terms of the observed number of message-response exchanges performed when cross-examining the proposed causal relationships between nodes and the accuracy of the presented evidence (as illustrated in the next case study). The tools can then be used to assess how learners are able to apply the insights gained from argumentation to justify and validate changes/revisions to causal link values, and to assess how the changes converge towards target values observed in the expert map or the map of the collective group.

11.3 Assessing Argumentation and Effects on Causal Maps

The second case study illustrates how jMAP and the described methods can be used to assess learners’ ability to engage in specific forms of argumentation and to apply these forms of argumentation to construct better causal maps. Furthermore, this study illustrates how jMAP can be used to compare causal maps between learners, to identify differences between learners’ maps and the initial/current consensus on map links, and to initiate and structure learners’ discussions in ways that might help to improve their causal maps. This study addressed the following research questions:

1. What are the effects of consensus observed in initial maps on the level of consensus in subsequent maps? When learners use jMAP to determine which causal links are shared most among everyone’s initial maps, are the most commonly shared links more likely to remain in learners’ subsequent maps than the less commonly shared links?

2. What is the relationship between initial levels of consensus and level of argumentation? Do learners engage in more argumentation when a causal link is more or less commonly shared between learners? In other words, do higher or lower levels of initial consensus trigger higher levels of argumentation?

3. What are the effects of argumentation levels on consensus in subsequent maps? Do high levels of argumentation lead to higher or lower levels of consensus in maps produced subsequent to group discussions/debates?

11.3.1 Method

Participants. Nineteen graduate students (8 male, 11 female) enrolled in an online course on computer-supported collaborative learning at a large southeastern university participated in this study. The participants ranged from 22 to 55 years in age, and the majority of the participants were enrolled in a Master’s level program in instructional systems/design.

Procedures. The course examined factors that influence success in collaborative learning and instructional strategies associated with each factor. In week 2, learners used a Wiki webpage to share and construct a running list of factors believed to influence the level of learning or performance achieved in group assignments. Students classified and merged the proposed factors, discussed the merits of each factor, and voted on the factors believed to exert the largest influence on the outcomes of a group assignment. The votes were used to select a final list of 14 factors that learners individually organized into causal maps.

In week 3, students were presented six example maps to illustrate the desired characteristics and functions of causal maps (e.g., temporal alignment, parsimony). Students were provided the jMAP program (pre-loaded by the instructor with nodes for each of the 14 selected factors) to construct their first causal diagram (map 1). Map 1 allowed students to graphically explain their understanding of how the selected factors influence learning in collaborative settings. Using the tools in jMAP, learners connected the factors with causal links by: (a) creating each link with varying densities to reflect the perceived strength of the link (1 = weak, 2 = moderate, 3 = strong); and (b) selecting different types of links to reveal the level of evidentiary support (from past personal experiences) for the link. Personal maps were completed and electronically uploaded within a 1-week period to receive class participation points (class participation accounted for 25% of the course grade). The maps were also used to complete a written assignment describing one’s personal theory of collaborative learning (due week 4, and accounting for 10% of the course grade).

Using jMAP, the instructor aggregated all the initial maps (n = 17) that were submitted by students. Two students did not submit their maps for reasons unknown. The matrix in Fig. 11.7 was shared with students to convey the percentage of maps that possessed each causal link. The links enclosed in boxes on the right side of the figure are common links observed in 20% or more of the learners’ maps. For example, the causal link between ‘Individual Accountability’ and ‘Learner Motivation’ was observed in 47% of learners’ maps. To select this 20% cut-off criterion, the instructor ran multiple aggregations of the learner maps at different cut-off values until a sufficient number of links were identified on the right side of Fig. 11.7 to help discriminate between links that were more versus less commonly shared between learners. Presented on the left side of the figure are the mean strength values of links observed in 20% or more of the maps. The highlighted values reveal links that are present or absent in the expert’s map (i.e., dark shaded cells with values = shared links whose strength values match, lightly shaded cells with values = shared links with non-matching values, lightly shaded boxes with no values = missing target links).
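The cut-off and expert-map comparison can be sketched as follows (illustrative code and data, not jMAP’s implementation; the classification labels mirror the three shadings described above, and the aggregate and expert values are hypothetical):

```python
# Sketch of applying a share cut-off to the aggregated links and classifying
# each retained link against the expert map. Aggregate and expert values here
# are hypothetical.

def classify_links(aggregate, expert_map, cutoff=0.20):
    """aggregate: {link: {"share": fraction_of_maps, "mean_strength": value}}."""
    retained = {link: stats for link, stats in aggregate.items()
                if stats["share"] >= cutoff}
    report = {}
    for link, stats in retained.items():
        if link not in expert_map:
            report[link] = "link not in expert map"
        elif round(stats["mean_strength"]) == expert_map[link]:
            report[link] = "shared link, matching strength"
        else:
            report[link] = "shared link, non-matching strength"
    for link in expert_map:
        if link not in retained:
            report[link] = "missing target link"
    return report

aggregate = {("accountability", "motivation"): {"share": 0.47, "mean_strength": 2.4},
             ("novelty", "learning"): {"share": 0.12, "mean_strength": 1.0}}
expert = {("accountability", "motivation"): 2, ("feedback", "learning"): 3}
print(classify_links(aggregate, expert))
```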

Fig. 11.7 Mean causal link strengths across all maps and percent of maps with given links

In week 9, learners were shown the matrix in Fig. 11.7 with the percentage of maps (map 1) that possessed each link. Students posted messages in online threaded discussions to explain the rationale and justification for each proposed causal link. Each posted explanation was labeled by learners with the tag ‘EXPL’ in the message subject heading. Postings that questioned or challenged explanations were tagged with ‘BUT.’ Postings that provided additional support were tagged with ‘SUPPORT.’ In weeks 9 and 10, learners searched for quantitative findings from empirical research and reported them in a group Wiki so that they could be referenced and used later to determine the instructional impact of each factor.

Students received instructions on how to use jMAP to superimpose their own map over the aggregated group map (Fig. 11.8) to visually identify similarities and differences between their own maps and the collective conception of the causal relationships between factors and outcomes. For example, Fig. 11.8 reveals the similarities and differences between an individual student’s first map (student #4) and the group map (g1) generated by the aggregation of all the maps produced by all students at the first time period. The course instructor used jMAP to superimpose his expert map over the group map produced at time period one (g1) and in time period two (g2) by using the control keys (ctrl-h, ctrl-j, ctrl-k) to toggle between maps g1 and g2. By using the navigational tools to toggle between the two group maps, the instructor was able to visually and quantitatively observe the progression of changes averaged across all the students’ maps in order to assess the extent to which the observed changes converged towards the expert map. Jeong (2008) presents more detailed information on how to use jMAP to visualize and animate progressive changes in maps created by a select learner (or group of learners) across multiple time periods relative to a target map.

Fig. 11.8 Visual comparison of student 4’s first map with the aggregated group map (g1) with darker links revealing matching causal strength values, lighter links revealing shared links (differing in values), and light gray links revealing missing links

In week 10, students reviewed the discussions from week 9. Within a discussion thread for each examined link, learners posted messages to report whether they rejected or accepted the link (along with explanations). At the end of week 10, each student posted a revised causal diagram based on their analysis of the arguments presented in class discussions (see Fig. 11.8).

Data Analysis. To measure the level of change in learners’ maps, link frequencies from each learner’s second map (n = 15) were aggregated to determine the percentage of maps that shared each link. Differences in the reported percentages between maps 1 and 2 were computed and appear in Fig. 11.9. Overall, the percentages for 19 of the 24 commonly shared links (in boxes) increased by an average of 26%, while four of the shared links (in gray-shaded boxes) changed by an average of –10.75%.
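The change scores can be sketched as the simple difference in link shares between the two rounds of maps (illustrative values; the link names and percentages below are hypothetical):

```python
# Sketch of computing the change in the percentage of maps sharing each link
# between map 1 and map 2. Shares are hypothetical.

def share_change(shares_map1, shares_map2):
    links = set(shares_map1) | set(shares_map2)
    return {link: shares_map2.get(link, 0.0) - shares_map1.get(link, 0.0)
            for link in links}

map1_shares = {("accountability", "motivation"): 0.47, ("feedback", "learning"): 0.35}
map2_shares = {("accountability", "motivation"): 0.73, ("feedback", "learning"): 0.27}
for link, delta in share_change(map1_shares, map2_shares).items():
    print(link, f"{delta:+.0%}")
```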

Fig. 11.9 Change in percent of maps sharing selected links

The level of critical discourse produced within the discussion of each link was determined by the number of observed EXPL-BUT, BUT-BUT, BUT-EXPL or SUPPORT, and BUT-SUPPORT exchanges. Challenges to explanations and explanatory responses to challenges were used as a measure of critical discourse because explanations, when generated in direct response to conflicting viewpoints, have been shown to improve learning (Pressley et al., 1992). Pearson correlations between the variables are presented below.
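The exchange counts and correlations reported next can be sketched as follows (an illustrative computation; the message format, the exact set of exchange pairs, and the sample scores are assumptions, and scipy’s pearsonr stands in for whatever statistics package was actually used):

```python
# Sketch of scoring critical discourse per link by counting tagged
# message-response exchanges, then correlating those scores with changes in
# link agreement. The pair set and all sample data are assumptions.

from scipy.stats import pearsonr

CRITICAL_PAIRS = {("EXPL", "BUT"), ("BUT", "BUT"),
                  ("BUT", "EXPL"), ("BUT", "SUPPORT")}

def critical_exchanges(messages):
    """messages: list of (message_id, parent_id, tag) tuples from one link's thread."""
    tags = {mid: tag for mid, _, tag in messages}
    return sum((tags[pid], tag) in CRITICAL_PAIRS
               for _, pid, tag in messages if pid in tags)

thread = [(1, None, "EXPL"), (2, 1, "BUT"), (3, 2, "EXPL"), (4, 2, "SUPPORT")]
print(critical_exchanges(thread))          # 3 critical exchanges in this toy thread

# Hypothetical per-link discourse scores and changes in the % of maps sharing each link.
discourse_scores = [4, 0, 2, 7, 1]
share_changes = [0.10, 0.35, 0.20, -0.05, 0.30]
r, p = pearsonr(discourse_scores, share_changes)
print(f"r = {r:.2f}, p = {p:.3f}")
```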

11.3.2 Findings

Effects of consensus observed in initial maps on level of consensus in subsequent maps. Based on links (n = 24) that were observed in 20% or more of students’ maps and discussed by students on the discussion board, the correlation (Table 11.2) between the percentage of students that shared a causal link in the first map and the average change in the percentage of students that shared the causal links was not significant (r = –0.09, p = 0.68). The opinions of the majority did not appear to influence learners’ decisions to include or exclude causal links in their revised maps. This suggests that using jMAP to reveal the similarities and differences between students’ maps did not promote groupthink.

Table 11.2 Correlations (n = 24) between level of initial agreement, critical discourse, and change in percent of learners sharing each causal link

Relationship between initial agreement and level of critical discourse. The correlation (n = 24) between the percentage of students that shared a causal link in the first map and the level of critical discourse generated by students to examine the strength of each causal link approached statistical significance (r = 0.39, p = 0.06). Students engaged in more critical discussion over causal links that were shared by more rather than fewer students. This finding suggests that students did not simply accept or give in to the status quo. Conversely, it also suggests that students tended to engage in less critical discussion over causal links that were shared by fewer students. One possible explanation for this finding is that the causal links shared by the fewest students were those that exhibited the most obvious flaws in logic and, as a result, did not warrant much debate before being omitted from the causal maps.

Effects of argumentation on changes in agreement in subsequent maps. No significant correlation was found between the level of critical discourse over each causal link and the change in the percentage of maps sharing each causal link (r = –0.15, p = 0.48). This finding suggests that the level of critical discourse over each causal link neither increased nor decreased the percentage of students that rejected a causal link.

A post-hoc analysis of the individual effects of each of the four types of exchanges (all of which were aggregated and used to measure the level of critical discourse) revealed that the frequency of EXPL-SUPP exchanges observed in discussions over each link was moderately and positively correlated (r = 0.39, p = 0.06) with changes in the percentage of students that shared each causal link. Supporting statements posted in direct response to other learners’ causal explanations (e.g., presenting supporting evidence, simple expressions of agreement) were the types of exchanges most likely to persuade learners to adopt new links into subsequent causal maps. This finding is consistent with the findings from the first case study, in which causal link strength values were more likely to remain the same or increase in value when links were supported with evidence. Also worth noting is that the frequency of supporting statements alone in discussions over each causal link (without regard to the messages they were posted in response to) showed a similar correlation of lesser statistical significance (r = 0.31, p = 0.14). This suggests that message-response exchanges, as opposed to simple message frequencies, can provide more explanatory power when analyzing the effects of critical discourse on causal understanding.

11.3.3 Implications

The findings in this second case study illustrate how jMAP can be used to assess the impact of critical discussions or other types of learning events on learners’ causal understanding. When used as a research tool, jMAP provides insights into the processes of learning (e.g., causal understanding) and into how specific processes (e.g., EXPL-SUPP exchanges) lead to specific learning outcomes/behaviors. At the same time, this case study illustrates how jMAP can help learners work collaboratively to build and refine causal understanding. Learners can identify similarities and differences in their causal understanding relative to others. Then they can use the differences as the starting point to discuss and explore the causal relationships.

11.4 Directions for Future Research

The findings in the two case studies reported above are not conclusive given the limited sample size. Nevertheless, these studies illustrate how the demonstrated tools and methods can be used to assess how causal understanding evolves over time and how specific processes of discourse (including processes of scientific inquiry) influence causal understanding. More research is needed to identify the specific discourse processes (and interventions designed to foster critical discussions) that can trigger changes in causal links—particularly changes that converge towards the expert and/or the group model.

To further facilitate research on processes that support causal understanding, online discussion boards can be integrated into jMAP to automatically create discussion threads for each causal link observed in learners’ causal maps, to seed discussions with learners’ initial explanations, to support message tagging, and to compile and report scores that measure certain qualities observed in the group discussions for any given set of causal links. Such a system could be used by instructors to assess not only the quality of learners’ causal maps and understanding, but also the quality of learners’ discourse and its impact on their causal understanding. Additional functions can be added to jMAP to recognize nodes that are indirectly linked via mediating nodes to fully account for observed differences between learner and expert maps. Another useful function would be one that can identify/measure to what extent and in what temporal direction changes in causal links propagate subsequent changes in adjacent links—a measure that could be used to determine to what extent learners are able to systematically break down and reflect on causal relationships. To examine this issue in more detail, a function can be added to jMAP that captures and logs every action performed in jMAP as learners construct their maps.

In addition, refinements to the jMAP user interface will be necessary to make map construction easier, more intuitive, and less time consuming if systems like jMAP are to be used in school-based applications—particularly for younger learners. Instructions and guidance on how to conceptualize a coherent causal map/model (e.g., temporal flow, parsimony) should be embedded directly into the jMAP interface to assist learners who lack the skills needed to construct a causal map.