Introduction

Complex systems are systems without central control that follow relatively simple rules which give rise to complex behavior. These systems may or may not adapt through learning or evolution (Mitchell, 2009). Many of today’s most pressing problems involve recognizing and understanding complex systems. Whether trying to understand how to address the declining health of ecosystems, how to improve communication or delivery networks, or how to understand and stop the spread of diseases, a complex systems lens is needed if actions are to be effective (Meadows, 2008). Therefore, complex systems are important for K-20 students to understand as evidenced by the inclusion of systems and complex systems in U.S. national frameworks and standards (e.g., National Research Council, 2017; NGSS Lead States, 2013).

Despite the importance of learning about complex systems, students at all levels tend to have difficulty understanding complex systems concepts (e.g., Feltovich et al., 1994; Grotzer, 2012; Yoon et al., 2019). Building upon prior research (e.g., Jacobson, 2019), this study focuses on specific complex system concepts that prove difficult for learners, including agent actions, action effects, order, as well as non-obvious, distant, and process-based causation. Agent actions involve understanding that agents such as people, ants, or other entities have the ability to make unpredictable and random local or micro-scale decisions. Action effects include how these decisions can result in effects that can be nonlinear, with small actions by agents creating large, potentially cascading effects (e.g., Goh et al., 2012). Learners can also struggle with the order of systems, or have difficulty understanding that systems can be decentralized, that multiple groups can create order, and that order can emerge from distributed actions (e.g., Yoon et al., 2019). Students also have difficulty understanding different types of causation, such as non-obvious causes that cannot be directly observed, distant causes that originate in locations removed from the effect, and process-based causes such as equilibration or emergent processes (e.g., Grotzer et al., 2013).

Previous research indicates that computer simulations can help students understand complex systems (Danish et al., 2016; Grotzer et al., 2017). Agent-based simulations, or simulations that model the actions and interactions of autonomous agents, can also help learners understand complex systems (e.g., Wilensky & Rand, 2015). In particular, agent-based, computer-aided participatory simulations place the learner in the role of an agent, enabling the learner to interact and make decisions at a micro scale and also see the effects of their actions on the entire system (e.g., Kumar & Tissenbaum, 2019; Wilensky & Stroup, 2000). Research demonstrates the potential of agent-based participatory simulations to help students understand complex systems (Dickes et al., 2016; Rates et al., 2016; Stroup & Wilensky, 2014). However, students may need instructional support, or scaffolding, to learn with these agent-based participatory simulations (e.g., Basu et al., 2015; Sengupta et al., 2013). Scaffolding involves a more knowledgeable peer or teacher providing instructional assistance, thus enabling a learner to go beyond what they would be able to do unassisted (e.g., Sherin et al., 2004). Scaffolding can also be provided by technologies (e.g., Guzdial, 1994) or supporting materials (e.g., Quintana et al., 2004) and may persist in learning environments or fade to provide less support over time (e.g., McNeil et al., 2006). Despite existing research on scaffolding in science settings, few studies have directly compared the effects of different kinds of scaffolding on student learning with complex system simulations (e.g., Li, 2013). The present study compares the effect of persistent ontological vs. self-monitoring scaffolding on complex system understanding among undergraduate and graduate students studying architecture. Ontological scaffolding involved explicitly teaching students about complex system concepts to help learners create “conceptual rigging” (Goldstone, 2006, p. 41) and to help learners look for and recognize specific system concepts. In contrast, self-monitoring scaffolding emphasized management of large amounts of information and aimed to support learner sense making, process management, and reflection. This study addressed the following research questions:

RQ1: To what extent did participants’ understanding of complex systems concepts change after engaging with a participatory, agent-based simulation?

RQ2: How did participants’ understanding of complex systems concepts compare for ontological scaffolding versus self-monitoring scaffolding conditions?

We anticipated that both scaffolding conditions would promote some improvement in participants’ understanding of complex systems, based on the existing research on ontological and self-monitoring scaffolding. We identified no studies directly comparing these scaffolding types and therefore did not anticipate that one would outperform the other.

Theoretical background

Defining complex systems

To define complex systems, we refer to literature that differentiates between types of systems (Meadows, 2008) and broadly groups systems as either non-complex or complex. Non-complex systems have multiple elements with set roles whose interactions do not change or adapt with new information; examples include clocks, airplanes, and computers. In contrast, complex systems contain many interacting entities that may have agency to make their own local decisions and whose roles and interactions may change (Bar-Yam, 2019). Complex systems may be structured in a variety of ways, including various connections among interacting entities or nesting of entities and/or relationships at various scales. Complex systems may also show emergent and complex effects, not exhibited by the individual elements, that arise through nonlinear, spontaneous, or adaptive interactions.

This study investigates student understanding of specific system concepts: agent actions, action effects, order, and non-obvious, distant, and process-based causation, informed by the work of Jacobson and colleagues (Jacobson, 2001, 2019; Jacobson et al., 2011). Agent actions are not predictable in complex systems. In an ecosystem, we may know that bears will eat fish, but we do not know which fish. Because agents can adapt as their environment changes, we also cannot predict which parts will interact as the system changes. In contrast, within non-complex systems, agent actions are predictable and non-adaptive. Because parts do not adapt, these systems are vulnerable to failures (e.g., if a clock gear breaks, the whole clock stops working); backup measures, such as emergency brakes or extra engines on airplanes, are therefore often built into non-complex systems. Action effects in complex systems mean that small actions in one part of a system can lead to unpredictable and potentially large (nonlinear) effects in other parts of the system, due to the variety and shifting nature of connections within a system (Grotzer, 2012). In contrast, effects in non-complex systems are proportional; small actions have small, predictable effects with no potential for nonlinear outcomes. While both system types can have linear effects, a defining feature of complex systems is the potential for nonlinear effects. Order in non-complex systems is largely imposed by design in a top-down fashion (e.g., a clock is designed for a purpose) but emerges in a bottom-up way in complex systems (e.g., an ecosystem derives order from each animal acting on its own needs). Causation in complex systems involves non-obvious causes that cannot be directly observed or attributed to a single agent (e.g., Grotzer, 2012). Distant causation involves causes that are located spatially or temporally apart from where the effects occur (Grotzer & Solis, 2015); for example, emissions from cars in populated areas have consequences for Arctic wildlife. Process-based causation involves emergent or equilibrating processes that unfold over longer periods of time or that lack distinct beginnings or ends (Grotzer, 2012; Meadows, 2008), in contrast with the clear order and cause-effect connections of non-complex systems (Jacobson et al., 2011). In non-complex systems, although parts may function simultaneously, interactions can be broken down into static events: a person winds a clock, which turns a gear, which turns the minute hand, which causes an alarm to sound. Process-based causation instead emphasizes the temporal unfolding of events; for example, resilience models of ecosystem dynamics situate discrete ecological events within longer-term patterns over time (Grotzer et al., 2013).

Learning about complex systems

Overall, research indicates that K-20 students can learn about complex systems with explicit support (e.g., Yoon et al., 2018). For example, studies demonstrate student success in understanding the levels or scales of complex systems and their interconnected nature (e.g., Yoon et al., 2019). Barth-Cohen (2018) demonstrated that students exhibit a continuity of reasoning patterns from centralized to decentralized causality and can access knowledge about complex systems that is applicable at both macro and micro levels of sand dune movement. Other studies demonstrate that students can learn complex systems concepts from studying water systems (Lally & Forbes, 2019) and climate change (Jacobson et al., 2017).

In their recent review, Yoon et al. (2018) identified multiple computer-based simulations of complex systems intended to help learners visualize structures and their interactions, which in turn promotes consideration of less obvious system characteristics and of how systems change over time (e.g., Klopfer et al., 2005a, 2005b; Repenning et al., 2015; Vattam et al., 2011; Wilensky & Reisman, 2006). Agent-based simulations, built with modeling tools like NetLogo (Wilensky & Reisman, 2006) and StarLogo (Yoon et al., 2017), were specifically identified as beneficial to complex systems understanding. With agent-based simulations, students can interact with a system, investigate surprising outcomes of the rules or interactions among agents, understand random variation, and explore system characteristics that can be difficult to investigate in the real world, such as the decentralized nature of interactions among system components and emergent system behaviors (Yoon et al., 2018).
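The core of such agent-based models can be sketched in a few lines. The toy model below is our own illustration, not one of the cited NetLogo or StarLogo models: each agent follows a single local rule, stepping toward the average position of a few randomly sampled peers, and group-level order emerges with no leader and no global view.

```python
import random

random.seed(42)

def tick(positions, sample_size=3):
    """One simulation step: every agent moves one unit toward the mean
    position of a few randomly sampled peers (a purely local rule)."""
    new_positions = []
    for i, x in enumerate(positions):
        peers = random.sample(
            [p for j, p in enumerate(positions) if j != i], sample_size
        )
        target = sum(peers) / len(peers)
        step = 1 if target > x else -1 if target < x else 0
        new_positions.append(x + step)
    return new_positions

# 50 agents scattered on a line; no agent knows the whole configuration.
agents = [random.randint(0, 100) for _ in range(50)]
spread_before = max(agents) - min(agents)

for _ in range(80):
    agents = tick(agents)

# Decentralized order: the group clusters without any central controller.
spread_after = max(agents) - min(agents)
```

Running the sketch shows the initially scattered agents drawing together, the kind of bottom-up, decentralized order that simulations make visible to learners who might otherwise expect a central controller.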

When students act as agents in the system, the simulations are considered participatory (Klopfer et al., 2005a, 2005b; Stroup & Wilensky, 2014). Students make choices that interact with the actions of other student agents as well as with additional living and nonliving components of the system, and the participatory experience is often enhanced by visualizations that promote reflection on less obvious system mechanisms and behaviors. For example, to learn about sustainability, students can act as city managers, making industrial and environmental decisions for their own cities while having to share resources and pollution with other students managing nearby cities (Kumar & Tissenbaum, 2019). In a non-participatory simulation, students may see these city managers, their actions, and their effects, but they do not themselves act as the city managers. Participatory simulations can help learners visualize abstract concepts associated with complex systems (Wilensky & Stroup, 2000). Through direct experience or modeling of the system (Colella et al., 1998), students can develop an understanding of the system’s underlying assumptions and build connections between micro and macro behaviors (Wilensky & Resnick, 1999).

Participatory simulations can also help learners understand more process-based causes for effects in the systems (Grotzer et al., 2013). In higher education settings, limited simulation-based complex systems research has been conducted (e.g., Lizier et al., 2018). Much of the existing research has examined the evaluation and development of computer models. For example, Lally and Forbes (2019) found that post-secondary students improved their understanding of key hydrology concepts and improved their evaluations of a computer-based water model in a course revision that included participatory simulation. Changes in complex system understanding were not directly addressed, however. In another study, undergraduate and graduate students used the LA Water Game to investigate infrastructure water system management. Across a series of workshops, learning outcomes included more abstract ideas about complex systems (McBurnett et al., 2018).

Challenges to learning about complex systems

Despite the importance of complex systems understanding, students tend to experience multiple challenges when learning about complex systems. Learners’ intuitive knowledge of natural phenomena can clash with scientific explanations, and these intuitive ideas tend to be robust and resistant to change (Vosniadou & Brewer, 1992). For secondary science students, decentralized organization and the unpredictable nature of effects may be among the most difficult complex systems ideas to grasp (Yoon et al., 2019). Difficulties can emerge due to intangible processes and causality that are often invisible, time-delayed, or occurring on different time scales (Feltovich et al., 1994; Grotzer, 2012). Also, even when students understand micro and macro relationships, they can misidentify causes within the system as occurring at the macro level (Penner, 2000, 2001). Confusion between the levels at which phenomena emerge leads to deep misunderstandings (Wilensky & Resnick, 1999).

Robust misunderstandings about complex systems can occur when learners have the wrong ontologies, or conceptions about the entities or nature of a complex system. We refer to these ontologies using Jacobson’s (2001) terms, clockwork mental model, which is characterized by non-complex system properties and behaviors; and complex mental model, which is characterized by complex system properties and behaviors. According to Chi (2005), students might have only one ontological category, for non-complex systems, and conceptualize properties of complex systems through a clockwork systems lens, causing them to interpret and attribute system behavior incorrectly. For example, a person who has only an ontological framework for clockwork systems will correctly perceive order in the system but will attribute it inappropriately to a leader within the system (Resnick, 1996). Although learners may eventually come to correct understandings of complex systems, clockwork ontologies may serve as a barrier to more expert understanding (e.g., Slotta & Chi, 2006).

Deep understanding of complex systems requires learners to understand and apply complex systems concepts across multiple scenarios or systems (e.g., Detterman, 1993; Gick & Holyoak, 1980; National Research Council, 2000). If students are explicitly taught the ontologies of complex systems and phenomena through participatory simulations, they might learn to interpret complex systems according to underlying principles and use those principles to navigate subsequent encounters with other systems (e.g., Hoehn & Finkelstein, 2018; Slotta & Chi, 2006).

Alternately, participating as an agent in a complex system simulation may also constrain learning about complex systems, because certain forms of complexity can be difficult to perceive (Cuzzolino et al., 2019). In addition, non-agentive dynamics, or those without specific causes, may require additional scaffolding, such as prompts to problematize one’s assumptions and to reflect explicitly on causal understanding. Taken together, these studies point to a need to support individuals when they are learning about complex systems with agent-based participatory simulations.

Ways to support complex system understandings

Numerous studies demonstrate that scaffolding can help students learn from simulations (e.g., Honey & Hilton, 2011; McElhaney et al., 2015) and support student learning of complex systems (Basu et al., 2015; Danish et al., 2016). For example, Grotzer et al. (2017) provided targeted scaffolds to help students understand non-deterministic complex systems across domains and contexts. Grotzer et al. (2017) found that prompts to help make connections to similar phenomena and to everyday examples helped students in Grades 2–6 develop understanding of probabilistic causality. Similarly, Yoon (2011) found that using social network visualizations within participatory simulations helped students evolve from a clockwork to a complex systems understanding in the context of genetic engineering.

In particular, ontological scaffolding may support complex systems understanding by helping students form ontologies of complex systems as distinct from those of non-complex systems. Ontological scaffolding can help students create separate and distinct ontological categories for complex systems (e.g., Chi, 2005). By comparing and discussing the differences among several key concepts of complex systems (Jacobson, 2001; Jacobson et al., 2011) and making explicit the organizing framework of complex systems (Goldstone, 2006; Jacobson, 2001), students may be able to use the lens of a complex systems ontology to more accurately perceive and understand these concepts during instruction. For example, Jacobson et al. (2011) explicitly visualized and explained complex systems while students worked with an agent-based NetLogo computational model, and found that students improved and transferred their declarative knowledge of complex systems. Likewise, Slotta and Chi (2006) found that explicit ontological training before working with a simulation of electrical current resulted in greater understanding of emergent processes compared to students who were not exposed to ontological scaffolding. Given that student difficulties with complex systems often stem from misattributing features of non-complex systems to complex systems, helping students develop correct ontologies for complex systems may facilitate their learning from agent-based participatory simulations.

Self-monitoring scaffolds may also benefit learning about complex systems with agent-based participatory simulations. Research demonstrates that self-monitoring scaffolds, or supports for sense-making, process management, and articulation and reflection, can be beneficial in inquiry settings (Quintana et al., 2004) and with computer visualizations (McElhaney et al., 2015). Sense-making takes place when students generate hypotheses, analyze data, collect observations, and carry out other scientific tasks. Process management refers to how students manage sense-making tasks and make decisions about how to proceed in their investigations. Finally, articulation and reflection are how students review and evaluate and then communicate their findings (Quintana et al., 2004). Research demonstrates that students who plan and reflect on their understanding can learn more from simulations (Belland et al., 2017; White & Frederiksen, 1998). Thus, helping students engage in self-monitoring while participating in an agent-based participatory simulation may also facilitate their learning.

This study investigated to what extent undergraduate and graduate students can learn about complex systems concepts through an agent-based, participatory simulation. In particular, because ontological and self-monitoring scaffolds have both been found to benefit learning with simulations, we specifically explored differences between ontological and self-monitoring scaffolds in their effects on student learning of complex systems concepts with an agent-based, participatory simulation.

Methods

This study used a pre-post design to explore what students may have learned from engaging in a participatory, agent-based simulation and an experimental design to investigate any differences in learning between students who were exposed to ontological versus self-monitoring scaffolds.

Context

The context of this study was a mid-level architecture course offered at a large research university in the Mid-Atlantic region of the U.S. The course emphasized how the built environment functions within complex systems. For example, the Chesapeake Bay Watershed, a complex system, has numerous socio-ecological problems like excess nitrogen and other pollution from upstream farms and developments. Design decisions from architects directly influence the amount of pollution flowing into the Chesapeake Bay. This study integrated a participatory simulation of the Chesapeake Bay watershed as part of the course, and student learning outcomes were assessed through their understanding of complex systems and complex systems interactions.

Participants

Participants were 96 undergraduate and graduate students. Most were undergraduate (88%), female (67%), 19–20 years old (73%), and architecture majors (92%) with the other eight participants majoring in seven different programs. The study was approved by the university institutional review board and all participants completed consent forms before the study began.

Simulation

An agent-based participatory simulation was used in this study. The simulation modeled the complex system of the Chesapeake Bay watershed and the relationships between different stakeholders, such as farmers and policy makers. Participants played the roles of individual stakeholders in different regions of the watershed, with eight to ten students on each regional team. For example, a student could be a crop farmer, livestock farmer, developer, agricultural policy maker, land use policy maker, bay policy maker, or fisherman. Each geographical region had its own set of roles (e.g., a crop farmer and livestock farmer in the Potomac region as well as in the Susquehanna region). Each group of agents in a region worked together toward two goals: increasing economic growth and reducing levels of pollution. Both goals determined overall success within the game, and regional teams were ranked each round, over ten simulated rounds, by how well they had accounted for the effects of each goal.

While agents can discuss choices within their region, or with agents in similar roles in other regions (e.g., one livestock farmer from one group can ask a livestock farmer in another group about their choices), ultimately, individual agents make their own choices. The simulation models micro-level components (e.g., farming, regulation, etc.) and interactions, allowing students to have local information and agency (Learmonth & Plank, 2015).

During the simulation, participants are immersed at the agent level in (a) carrying out their individual tasks (e.g., determining how many crabs to fish, which types of animals to farm, or the amount of taxes to levy) and (b) interacting with other agents to learn more about how the system is functioning and how to alter it. After all participants make their choices, the round ends and all participants observe the system-level outcomes of their collective choices on a large screen (see Figs. 1 and 2). An instructor then explains changes in the system and can draw participants’ attention to system shifts using a variety of maps and graphs. Participants discuss and question outcomes with the moderator, within their regional teams, and between teams. After group discussion ends, a new round begins, and the simulation proceeds by alternating between micro-level choices during rounds and macro-level outcomes between rounds. Each round, including discussion, may last from 10 to 20 min depending on the number of questions and the discussion generated by students.
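The micro-to-macro round structure described above can be sketched as a loop. Everything in the sketch below is illustrative: the role names echo the simulation, but the payoff numbers, role assignments, and the equal income-versus-runoff weighting are our own assumptions, since the real simulation’s parameters are not given here.

```python
import random

random.seed(1)

# Hypothetical payoff tables: each role's choices map to a pair of
# (income, nutrient runoff). These numbers are purely illustrative.
CHOICES = {
    "crop_farmer":      {"conventional": (8, 6), "cover_crops": (5, 2)},
    "livestock_farmer": {"expand_herd": (9, 7), "fence_stream": (6, 3)},
    "developer":        {"build_sprawl": (10, 5), "build_dense": (7, 2)},
}

def play_round(roles):
    """Micro level: each agent makes its own local choice. Team-level
    income and runoff emerge only from the combined choices."""
    income = runoff = 0
    for role in roles:
        choice = random.choice(list(CHOICES[role]))
        gain, pollution = CHOICES[role][choice]
        income += gain
        runoff += pollution
    return income, runoff

teams = {
    "Potomac": ["crop_farmer", "livestock_farmer", "developer"],
    "Susquehanna": ["crop_farmer", "livestock_farmer", "developer"],
}

# Macro level: after a round of choices, teams are ranked by a combined
# standing; here we assume income minus runoff, best team first.
results = {name: play_round(roles) for name, roles in teams.items()}
standings = sorted(results, key=lambda t: results[t][1] - results[t][0])
```

In the actual activity, the discussion phase between rounds corresponds to inspecting `results` before the next call to `play_round`: participants see only macro-level outcomes, not other agents’ individual choices.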

Fig. 1 The agent-based participatory simulation that models the Chesapeake Bay used in this study

Fig. 2 Participant standings between rounds are determined using a combination of Net income and Nutrient runoff

Procedures

Participants were randomly assigned to attend a 75-min workshop in either the Ontological scaffold condition (N = 48) or the Self-monitoring scaffold condition (N = 48). Workshops were taught concurrently: two workshops (24 students each) were assigned to Treatment 1, the Self-monitoring scaffold, and two workshops (24 students each) were assigned to Treatment 2, the Ontological scaffold.

The Ontological scaffold groups were taught by the class professor and the first author; the Self-monitoring groups were taught by two TAs. Fidelity of implementation was supported with trainings and by using presentations with scripts that guided workshop instruction. TAs taught the Self-monitoring condition because it required minimal content knowledge, while the Ontological condition required instructors to answer questions about complex systems. In both conditions, instructors explained how the simulation worked, randomly assigned roles in the simulation, and gave worksheets to complete during gameplay. The nature of each worksheet differed and is described next.

Ontological condition

In the Ontological condition, participants were introduced to the six complex system concepts one by one, each with an explanation and a non-ecological example. Participants were asked where they might have previously encountered each concept; then, in small groups, they discussed where in the simulation they might expect to encounter each concept and shared their ideas with the class. During gameplay they received a prompting worksheet asking them to list examples of the complex systems concepts discussed during the workshop. Worksheet questions encouraged participants to identify examples and explanations of complex systems concepts such as feedback loops, decentralized order, and emergence (see Appendix A).

Self-monitoring condition

In the Self-monitoring condition, participants brainstormed strategies to succeed at gameplay, working in groups with multiple, distinct stakeholders. Participants were first asked how they would make money and how they would reduce pollution, considered separately. They were then asked how they would try to optimize both parameters and to plan in small groups what information they would need to know about the ecosystem and their own roles for the upcoming gameplay round. Participants planned strategies for the simulation (process management) and discussed their ideas and understandings as a group (articulation). During gameplay they received a prompting worksheet with questions about strategies they felt worked well, mistakes they made, and how to improve during the simulation (see Appendix B).

Gameplay

The week following the scaffolding workshop sessions, all 96 participants engaged in the simulation as a whole group for a 75-min class period. Participants were randomly assigned to one of ten teams and interacted in a large ballroom while sitting at tables by geographic group. They were initially given instructions about the goals of the game and then worked within their groups to make choices. Rounds were separated by 5-min periods during which the moderator, a developer of the simulation, explained changes in metrics and rankings; participants could see the results of their choices and those of others playing the game. Initially, moderators helped groups generate discussion and encouraged between-group questioning and sharing. After the first few rounds, students better understood their roles and discussed within and between geographic teams with ease.

As noted, based on their assigned scaffolding condition treatment group, each participant completed a worksheet during gameplay with either Ontological question prompts [e.g., What are examples of emergence you’ve noticed during gameplay? (Appendix A)] or Self-monitoring question prompts [e.g., What are successful strategies you have noticed? Mistakes you have made? (Appendix B)]. Participants were reminded to fill out their accompanying worksheets multiple times and worksheets were collected at the end of gameplay. We note that during gameplay, mixing between scaffolding groups occurred.

Data collection

Data collection, treatment workshop training, and the simulation occurred over nine days. First, all participants responded to the same pretest questions distributed in class (see Appendix C). Two days later, they participated in the treatment workshops. The following week, the simulation occurred, and two days after that, participants responded to the same short answer questions as a posttest. To test students’ understanding of complex systems concepts beyond the simulation context, the assessment items addressed system types different from the simulated system. To assess action effects, agent actions, and order, three questions were adapted from Jacobson’s (2001) initial study of novice and expert differences. To assess causation, one question was adopted directly from Grotzer et al. (2013); it has been used in multiple studies of causation. Participants were given 30 min at both pre- and post-test to complete the short answer questions.

Data analysis

Coding and analysis were informed by the previous work of many researchers, including Chi’s (2005) ontological categories framework and Jacobson et al. (2011); see Table 1.

Table 1 Levels of understanding and sources for each complex system concept

Complex system concept rating

For the first three concepts of action effects, agent actions, and order, each concept mapped directly to a single question on the pre- and post-test assessment and was coded at one of three levels of understanding, following guidelines from Yoon and colleagues (Goh et al., 2012; Yoon, 2008). For example, a novice rating for order might indicate top-down organization of a system, with order imposed by a central authority, such as when the queen of an ant colony commands ants to find food and the ants follow her directions. An intermediate rating reflected a mix of top-down and bottom-up order, such as when ants randomly look for food and, once they find it, command other ants to bring the food back. An expert rating indicated bottom-up order, such as when ants randomly search for food and, once it is found, leave a pheromone trail for other ants to follow to the food.

For the causation question, students listed as many causes as they could to explain a fish die-off. Each response to this single item was then coded at one of two levels of understanding for each of the three causation concepts, using a validated rubric and scoring scheme from Grotzer et al. (2011, 2013). For example, a participant received a novice rating for obvious vs. non-obvious causation when they identified visible potential sources of pollution, such as people dumping trash in the river, which caused fish to die. An expert rating involved a focus on low oxygen levels, high toxin levels, and/or other causes invisible to the naked eye. To ensure that participants who wrote more examples were not scored higher, we calculated an average across all responses for each participant.

To ensure inter-rater reliability, two researchers independently coded and rated 20% of the data. First, both researchers coded five complete sets of participant data and compared answers, discussing coding interpretation until consensus was reached for any discrepancies and clarifying the rubrics as needed. The researchers then coded and rated another 15% of the data, again resolving any discrepancies through discussion. Satisfactory interrater reliability above .80 was achieved for all rubrics, with alpha values ranging from .85 to .94. One researcher then coded and rated the remainder of the data.

Statistical analysis

The first three concepts of action effects, agent actions, and order were scored from 1 to 3, with two levels as the largest shift a participant could make. The three causation concepts were coded as 0 or 1. Because students could list as many causes as they chose, causation scores were averaged for each of the three concepts, resulting in a score ranging from 0 to 1. For example, a student who listed two non-obvious and two obvious causes received a score of 0.5, indicating an equal focus on both types.
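The causation averaging above is simple arithmetic; an illustrative sketch of the computation:

```python
def causation_score(codes):
    """Average a participant's 0/1 codes for one causation concept.
    Each listed cause is coded 1 (expert level, e.g., a non-obvious
    cause) or 0 (novice level, e.g., an obvious cause); averaging means
    listing more causes cannot inflate the score."""
    return sum(codes) / len(codes)

# The worked example above: two non-obvious (1) and two obvious (0) causes.
score = causation_score([1, 1, 0, 0])  # -> 0.5
```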

For RQ1, comparative analyses were used to explore differences within participants, by treatment, from pre- to post-test. Because the data for all but one concept violated normality, Wilcoxon signed-rank tests were used to determine whether there were significant (p < .05) differences between students' own pretest and posttest scores for all concepts. For RQ2, a Mann–Whitney test was used to determine whether there were significant (p < .05) differences in gains in concept understanding between students in different treatment conditions. Effect sizes (r) were calculated for all variables by dividing the z-score by the square root of the total number of observations compared (Rosenthal, 1991).
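The z-to-r effect-size conversion (Rosenthal, 1991) can be sketched as follows; the z-score and observation count here are hypothetical values for illustration, not the study's actual data:

```python
import math

def effect_size_r(z, n_observations):
    """Convert a test's z-score to the effect size r = z / sqrt(N)."""
    return z / math.sqrt(n_observations)

# Hypothetical: z = 1.96 across 64 paired observations
print(effect_size_r(1.96, 64))  # → 0.245
```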

Results

RQ1: To what extent did participants’ understanding of complex systems concepts change after engaging with a participatory, agent-based simulation?

Participant responses for the first three outcomes on the pre- and post-test, by treatment condition, are presented in Fig. 3. The last three causation categories do not have discrete levels because their scores are averages over a varying number of responses given by each student.

Fig. 3

Student level understanding by condition and time point for three complex system learning categories

A Mann–Whitney test indicated no significant difference on the pretest between the two groups (Ontological vs. Self-monitoring). A Wilcoxon signed-rank test was used to examine pretest and posttest scores by treatment condition. No concepts for the Self-monitoring group changed at a statistically significant level from pretest to posttest (Table 2). The Ontological group showed significant improvement for agent actions (z = 1.79, p = .04) and process-based causation (z = 1.81, p = .04), while action effects scores significantly decreased by 0.18 points (z = −1.97, p = .03). The effect size for the increase in agent actions was r = 0.20 and for process-based causation was r = 0.19, while the decrease in action effects was r = −0.22. All other concept changes were not significant.

Table 2 Comparisons of pretest and posttest scores by treatment condition

Agent actions

The significant improvement in students' scores on the agent actions concept in the Ontological condition (r = .20, p = .04) was driven by students who initially believed that the actions of agents in a complex system are predictable but later understood that actions are not wholly predictable. For example, one student's initial, novice pretest response about whether one could predict the movement of an individual fish was that "With enough knowledge of a fish's general path, food sourcing, and 'comfort zones' you could potentially predict the path of a fish" (Participant 96). After the intervention, this student said, "you cannot predict the movement of the individual because it is less predictable than an entire school. Emergence of the pack does not exist here." This response, coded as intermediate, shows the student now understood that an agent's actions are less predictable.

Process-based causes

Students significantly improved their understanding of process-based causes in the Ontological condition (r = .19, p = .04). On the posttest, students wrote about relatively more process-based causes such as “Overpopulation led to a shortage of food, and these fish starved” (Participant 78) and “nutrients allow for excessive algae growth, and such growth depletes the water from oxygen, leaving fish with conditions not supportive of life” (Participant 4).

Action effects

Decreases in students' scores in the Ontological condition (r = −.22, p = .03) and a non-significant negative trend in the Self-monitoring condition for action-related items arose from one of two patterns. In the first, students' posttest answers were relatively shorter (e.g., "yes, this happens because of chain reactions and feedback loops"), lacking the how or why explanations they had included in their initial pretest responses. In the second, participants responded that non-linear effects could occur, but only through the accumulation of several small effects. For example, one student responded, "Yes. Many slight effects can create a large effect. Even if there is no notable change in the system after a small change, as time goes on, these changes build up to become very apparent" (Participant 73). Although these responses did not contain incorrect ideas, they also did not represent the non-linear effects created through cascading effects, which the rubric coded for. While students may have understood how these occurred, they did not provide evidence of their understanding on the posttest.

RQ2: How did participants’ understanding of complex systems concepts compare for Ontological scaffolding versus Self-monitoring scaffolding conditions?

Comparing gain scores by type of scaffolding treatment, only one concept, order, showed a significant difference between treatment groups (z = 2.17, p = .02) with students in the Ontological group (M = 0.18, SD = 0.71) outperforming those in the Self-monitoring group (M = − 0.18, SD = 0.76) from pretest to posttest, with a small effect size of r = .24 (Table 3).

Table 3 Mean gain score differences by treatment type

More students fell one level of understanding of order in the Self-monitoring group (23.1%) than in the Ontological group (10%). For example, many Self-monitoring students exhibited an intermediate understanding of order on the pretest by explaining a combination of bottom-up and top-down causes of order for ants finding food (e.g., "Ants travel in colonies to hunt for food and the ants mimic each other's behaviors and movement patterns") but then provided more novice, top-down responses on the posttest (e.g., "The ants follow a leader and mimic each other's behaviors"). The opposite trend appeared in the Ontological scaffolding group, in which more students increased their understanding by one level (32.5%) than in the Self-monitoring group (15.4%). For example, one participant in the Ontological group stated on the pretest, "There is a leading ant and the rest of the group follows it." On the posttest, the participant stated, "The order comes from the system in the ant group. Ants have a system in the group when they try to find food."

Discussion

This study explored differences between ontological and self-monitoring scaffolding for post-secondary architecture students' learning about complex system concepts with an agent-based, participatory simulation. We discuss three major findings.

First, significant changes occurred mainly in the Ontological scaffolding group, with no significant differences between pretest and posttest scores for any concept in the Self-monitoring group. Analyses revealed significant improvement in ontologically scaffolded students' understanding of two complex systems concepts from pretest to posttest: agent actions and process-based causation. For agent actions, students engaged as virtual agents in a concrete example of a complex system. Because they were tasked with trying to achieve a system-level goal (to reduce pollution while remaining economically viable), students were forced to make choices within the simulation while experiencing randomness that affected these decisions. These experiences may have helped students realize the complexity of the system as well as the role of an agent within the system. Similarly, for process-based causation, students may have benefitted from witnessing simulated events unfold over a period of 20 years as well as taking part in the processes underlying these events. Further, the cyclical nature of the rounds of play may have helped underscore emergent patterns. Students could see not only an effect and its instigating event, but also the choices that built up toward that event and the shift in balance that eventually led to a change.

Second, counter to our hypotheses, student scores in the Ontological condition for the action effects concept significantly decreased from pretest to posttest. These findings provide nuanced evidence of the potential limitations of agent-based participatory simulations. The relevant assessment item required students to demonstrate not just a type of understanding (i.e., non-linear vs. linear) but also to explain how they believed action effects occur in complex systems. One possible reason for the decrease is that the simulation did not help students understand how small causes can create large effects in a non-linear, rather than cumulative, fashion. During the simulation, students worked together as a large group by region, made their own choices during rounds, and then saw the cumulative effects displayed for both territorial groups and the Bay as a whole. Students may have interpreted the overall effect on the system as an aggregated effect of everyone's more tangible and equally weighted choices. This interpretation aligns with Barth-Cohen's (2018) finding that students can hold a range of understandings, and that priming during an activity may cause students to focus on one over another. Further, these results align with recent research by Cuzzolino et al. (2019), which provided evidence that participating in a complex system as an agent may actually constrain complex systems learning. Their results suggest that certain forms of complexity are harder to perceive, and that dynamics that are non-agentive and lack specific causes require additional scaffolding, such as problematizing one's assumptions and explicitly reflecting on one's causal understanding. Our results corroborate the idea that students may need additional instructional support to reflect on and discuss action effects when participating in agent-based simulations. For example, learners may need more explicit reflection on the inner workings of the simulation or the underlying complex system model to more clearly grasp action effects or to understand the role of non-agentive dynamics such as weather.

Third, only one concept, order, differed at p < .05 by scaffolding condition, with students receiving ontological support showing larger gains on order than those receiving self-monitoring support. These results align with previous research demonstrating that ontological scaffolding can help students make sense of simulations of complex phenomena (Jacobson et al., 2011; Slotta & Chi, 2006). The ontological scaffolding may have helped students develop a framework about agent effects and process-based causes that they could use to interpret complex systems. For example, because students experience complex systems every day with a default framework of top-down order, it is possible that students in the Self-monitoring group only bolstered this framework by participating in the simulation. The findings underscore the importance not just of experience with complex systems, but of ontological scaffolds that help students think beyond clockwork views, which are often more intuitive than complex views (Grotzer, 2012; Wilensky & Resnick, 1999), and look past clockwork behaviors, which are often more tangible. Much remains unknown about how to support learners' development of complex systems understandings. Following prior coding schemes, students' understandings of the causation concepts were rated only as novice or expert, while the other concepts included a mid-range rating; articulating a middle level between novice and expert for the causation concepts may be warranted (e.g., Barth-Cohen, 2018; Samon & Levy, 2020). Future research can also examine the need for further support, including scaffolding within the simulation, curricular support (Kamarainen et al., 2015), and situational learning to help bypass transfer difficulties (Grotzer et al., 2015).

Limitations

Limitations include that participants were drawn from a single class within an architecture school at one large institution, which may limit the generalizability of the findings. Additionally, given that the comparison was between two scaffolding treatments that were both designed to benefit learning, limited differences between conditions are unsurprising. Assessment items were also not situated in the same context as the simulation. Although we chose assessment contexts that required students to generalize their understanding of complex systems concepts beyond the simulation, items targeting the simulation's own context might have revealed more differences among groups. Future research can investigate the nature of knowledge gained from agent-based participatory simulations and the near- and far-transfer of student understanding to new contexts. Attention to the assessment of complex systems understanding after participating in a simulation is also needed, given that using the same short-answer questions at posttest may have been fatiguing for students who had seen and written responses to the same questions only 11 days before. Finally, during gameplay students were encouraged to interact with and learn from each other, with groups formed without regard to treatment condition. This interactive learning element may have reduced differences between conditions.

Conclusion

Among the growing number of studies investigating how to help students understand complex systems with simulations (Grotzer et al., 2013; Jacobson et al., 2011; Vattam et al., 2011), few place students in the role of agents who create and participate in the systems under investigation. This study extends other research on how to support understanding of complex systems through simulations in undergraduate settings (e.g., Lizier et al., 2018), and contributes to the literature that demonstrates the benefit of ontological scaffolds to help learners understand difficult aspects of complex systems (e.g., Jacobson et al., 2011; Slotta & Chi, 2006).

Architects, designers, and engineers use simulations to understand complex systems. More research is needed to identify effective ways to support learners in understanding complex systems using appropriate scaffolding. Our findings point to critical areas where integrating scaffolding may improve our approaches to real-world health, social, and environmental problems—with human understanding most likely preceding workable long-term solutions.