Introduction

Computer simulations used in science teaching can be defined as computer programs that mimic the behavior of a real system (de Jong and Lazonder 2014). They can be used to investigate scientific phenomena as a part of inquiry-based science teaching (de Jong 2006b). Simulations offer a chance for learners to perform experiments by changing variables and observing the effects. Research on inquiry-based learning indicates that learners need support to overcome difficulties with certain tasks, such as drawing conclusions from data (Alfieri et al. 2011). Support or guidance for inquiry-based learning can come from different sources, including the simulation or accompanying software, the teacher, or other learning material. Thus far, research on supporting inquiry-based learning with simulations has concentrated on the guidance provided by the simulations or by the accompanying software, and the teachers’ role in guiding learners has not received much attention (Smetana and Bell 2012). The aim of this descriptive study is to describe the forms of guidance provided by teachers and a particular simulation for learning about balance at the primary education level. Using the same categorization for the guidance provided by teachers and the simulation enables the forms of guidance from these two sources to be compared. Different patterns for the distribution of guidance highlight the complexity of providing guidance for inquiry-based science learning with simulations.

Literature Review

Inquiry-Based Learning and Simulations

Computer simulations can enhance traditional (i.e., lecture-based, textbook-based, and/or practical work) science instruction (Rutten et al. 2012; Smetana and Bell 2012). The simulations should be integrated with other classroom activities and used in a way that allows the learners to have an active role in the investigation (Rutten et al. 2015; Smetana and Bell 2012). This conforms with the consensus in science education that learners should be engaged in inquiry, experimentation, and discovery as active agents and simultaneously develop their practices related to science (de Jong and Lazonder 2014; National Research Council 1996; National Research Council 2000; NGSS Lead States 2013). Inquiry-based learning is usually defined through phases or features of inquiry starting with asking questions or generating hypotheses, moving on to conducting investigations and drawing conclusions from the collected data, and finally communicating these conclusions to others (Bell et al. 2010; National Research Council 1996; National Research Council 2000). In order to identify and summarize the core features of inquiry-based learning, Pedaste et al. (2015) conducted a systematic literature review of the existing literature on inquiry-based learning. The result is an inquiry cycle consisting of five phases: stimulating interest (orientation), stating theory-based questions and/or hypotheses (conceptualization), planning and carrying out investigations (investigation), drawing conclusions based on the data (conclusion), and communicating the information to others and reflecting on one’s own actions (discussion). This paper uses Pedaste et al.’s definition of inquiry since it is based on earlier definitions of inquiry and has already been used to study guidance provided by simulations for inquiry-based learning (Zacharia et al. 2015).

Guidance for Inquiry-Based Learning

Inquiry learning can be unguided or guided. In unguided inquiry learning, the learners are fully in control of the whole inquiry learning process; in guided inquiry, the teacher or some other source (e.g., a simulation) provides support for the process (Furtak et al. 2012). Unguided inquiry learning has been criticized as ineffective and cognitively too challenging for learners (Alfieri et al. 2011; Kirschner et al. 2006; Mayer 2004). In inquiry-based learning, learners may have issues with generating suitable hypotheses, with designing experiments, and with drawing conclusions and/or regulating their own learning process (de Jong and van Joolingen 1998; de Jong and Lazonder 2014). These issues may be exacerbated by the use of simulations instead of physical, hands-on experiments because the high information content of simulations and the difficulty of extracting information from them (Zacharia et al. 2015) increase the need for meta-cognitive skills (Hegarty 2004). Empirical research on inquiry learning has shown that providing assistance (e.g., feedback, worked examples, or elicited explanations during the inquiry learning process) benefits learners and improves learning outcomes (Alfieri et al. 2011). In general, guidance for inquiry learning should be personalized (i.e., adapted to the learners’ knowledge and skills), fade away (i.e., the amount of guidance should decrease during the learning process), and support self-regulated learning (de Jong and Lazonder 2014).

Guidance for inquiry learning can be classified in different ways, such as by the phase of the inquiry cycle it addresses (de Jong 2006a) or by the learning process it supports (Quintana et al. 2004). De Jong and Lazonder (2014) developed a typology that organizes different forms of guidance according to their levels of specificity. Table 1 lists these forms of guidance and an example of each form. The issues with the terms guidance, scaffolding, and scaffolds become apparent here. The first two terms are often used to describe the same thing, a type of support designed to promote learning, and the term scaffolding focuses on responsiveness to learners’ actions (van de Pol et al. 2010). Scaffolds, on the other hand, are one form of guidance in the classification by de Jong and Lazonder (2014). In this paper, we use the term guidance to describe all support designed to promote learning.

Table 1 Forms of guidance for inquiry learning with simulations (de Jong and Lazonder 2014)

Zacharia et al. (2015) reviewed the existing research on guidance for inquiry learning using virtual laboratories (i.e., simulations) and online laboratories. This review only addressed guidance provided by the computer software—the simulation or accompanying software. This is a general trend in research on support for learning with simulations; most previous research has focused on the instructional support provided by the simulation itself (Rutten et al. 2012). However, the role of the teacher when using simulations in science education is a critical element in their successful implementation (Hennessy et al. 2006; Rutten et al. 2015; Smetana and Bell 2012). It is still unknown what sort of guidance teachers can offer for learning science with simulations (Chang 2013; Rutten et al. 2012; Smetana and Bell 2012).

Although this paper thus far has contrasted guidance provided by the software and the teacher, these two types of support can co-exist and interact with each other. Key factors of successful guidance are the same no matter who or what provides it; van de Pol, Volman, and Beishuizen (2010, 2012) list the same three characteristics (i.e., adaptation to the learner, fading out, and support for self-regulated learning) for scaffolding in teacher-learner interaction as de Jong and Lazonder (2014) list for guidance provided by software in inquiry learning.

Teachers and software have different capabilities to provide guidance. For example, teachers can obtain information about learner performance from different sources than software (Ruiz-Primo 2011). This affects their ability to adapt guidance to learner needs. Puntambekar and Kolodner (2005) use the term distributed scaffolding to describe instructional designs that include guidance from multiple providers (e.g., the software and the teacher). Distributed scaffolding can follow one of three different patterns (Tabak 2004). The first pattern is a differentiated scaffold. In this pattern, each of the learners’ different needs is addressed by a specific form of guidance. The goal in implementing this pattern is to identify the form of guidance that is best suited to a specific learning need. The second pattern is that of redundant scaffolds; in this pattern, multiple forms of guidance target the same need, but they are enacted at different points in time. The redundancy of guidance ensures that all learners benefit from at least some of the different forms of guidance. The third and final pattern is that of synergistic scaffolds. In this pattern, multiple forms of guidance co-occur and interact with each other. The rationale behind this pattern is that some skills and practices embody such a wide array of knowledge and values that multiple forms of support must be used in unison to support the development of such skills and practices.

Guidance for inquiry-based learning is a complex process that encompasses multiple forms and providers of guidance and multiple patterns distributing guidance between these providers. The objective of the present descriptive study is to investigate guidance provided by both the software and teachers in the context of one particular simulation and topic. The decision to concentrate on just one simulation was based on the fact that simulations and their surrounding frameworks differ from one simulation to another (Clark et al. 2009). The results add to the literature on how teachers provide guidance for learning science with simulations in this particular case and how teachers’ guidance can be contrasted with the guidance provided by the simulation. The same categorization was used for guidance provided by the simulation and by the teachers so these two sources could be contrasted. Through examples of different patterns for distributed guidance, this study aims to highlight the complexity of providing guidance for inquiry-based learning with simulations.

Our research questions are as follows:

  1. What forms of guidance does the Balancing Act PhET simulation provide?

  2. What forms of guidance do pre-service teachers provide when guiding learners working with the Balancing Act PhET simulation?

  3. How do different patterns for distributed guidance manifest when teaching with the Balancing Act PhET simulation?

We acknowledge that there could be differences in the ability of pre-service and in-service teachers to guide learners. This study describes the guidance provided for inquiry-based learning by pre-service teachers and by one particular simulation, adding to the literature on the role of teachers in general in guiding inquiry-based learning with simulations. Pre-service teachers might guide learners differently than in-service teachers, but both play the same role in lessons as human facilitators of learning. By contrasting the guidance provided by pre-service teachers with guidance provided by the simulation, this paper also contrasts guidance provided by humans with that provided by the simulation.

Method

The data for the study comes from a larger project in which pre-service primary teachers (PSTs) participated in an intervention (Lehtinen et al. 2016) aimed at improving their skills and confidence in teaching inquiry-based physics with simulations. At the end of the intervention, the pre-service teachers planned and taught an inquiry-based physics lesson for primary-aged learners using a predetermined simulation. The data for this study comes from two such lessons (45 min each); the topic was learning about balance using a seesaw. The lessons were taught to a third-grade class with 15 learners and a fifth-grade class with 13 learners. Each lesson was planned and taught by a different group of five PSTs. The two groups were told to implement the given simulation in an inquiry-based lesson, and they planned the lessons independently. The simulation used in these lessons was the Balancing Act simulation from the PhET website (University of Colorado Boulder 2016). This particular simulation was chosen for this case study because of its high amount of embedded guidance compared to most PhET simulations. Even though the two participating classes were from different grades, there was no significant difference in the content level of the two lessons. The main difference between the two lessons was in the approach to drawing conclusions from the investigations. Fifth-grade learners were given a handout and asked to fill in the variables that affect the balance of the seesaw and the rule that allows it to balance; the learners had to deduce these answers from their investigations. Since third-grade learners are less able to write conclusions than fifth graders, they expressed their conclusions verbally to the PST guiding their group. Both classes followed the standard Finnish science curriculum, and neither class had received instruction on balance before the study.

Both of the lessons followed the basic phases of inquiry-based instruction (Pedaste et al. 2015): the PSTs started the lessons by stimulating the learners’ interest and connecting the forthcoming experiment with their everyday experience by asking the learners about their playground experiences with seesaws. One or two of the five PSTs conducted the orientation, and the others observed. The learners knew from experience that if two people want to balance a seesaw, the lighter one has to sit further from the fulcrum. In order to quantitatively examine the connection between the ratios of weights and distances from the fulcrum, the learners used the simulation to investigate the phenomenon in groups of three to five. The data analyzed in this paper comes from this investigation phase of inquiry. Each of the five PSTs guided one group of learners in their investigations. The PSTs were focused on letting the learners work on their own but guided them in collecting data and drawing conclusions from it. As they went through the simulation, the learners came up with initial ideas about the ratio of weights and distances from the fulcrum, which they could then apply to the assignments embedded into the simulation. After the investigation, one of the PSTs led a discussion as the learners shared their findings with other groups; the PST also asked the learners to reflect on the lesson and the inquiry.
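
For reference, the quantitative rule the learners were working toward is the standard torque-balance condition for a seesaw; the notation below is ours and was not part of the lesson materials.

```latex
% Seesaw balance: the torques on the two sides of the fulcrum must be equal.
% With masses m_1 and m_2 placed at distances d_1 and d_2 from the fulcrum,
% the gravitational acceleration g cancels from both sides.
\[
  m_1 g \, d_1 = m_2 g \, d_2
  \quad\Longrightarrow\quad
  \frac{m_1}{m_2} = \frac{d_2}{d_1}
\]
```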

Participants

The participants of the study were Finnish pre-service primary teachers (n = 8) (PSTs A to H). They ranged in age from 20 to 31 (M = 24.9, SD = 3.8), and they had 6 months to 2 years of previous general teaching experience. None of the pre-service teachers had any experience in teaching science with simulations, and all of them were taking a science teaching methods course in the same semester the lessons were taught. The PSTs majored in special education, but they had chosen primary teacher studies as their minor. Of the ten pre-service teachers who planned and taught the lessons, one PST was left out of the study due to missing research permits from the learners, and one PST was left out because of outside interference during his/her teaching. Informed consent was obtained from all individual participants in the study.

Data

Each small group consisting of three to five learners was given a laptop on which the simulation ran. Their experiments with the simulation and the talk between the learners and the teacher were recorded using screen capture software running on these laptops. The laptops’ inbuilt microphones recorded the talk. The lessons were also recorded using two stationary cameras at the front and back of the classrooms. The screen capture video data consisted of around 200 min of experimentation with the simulation from eight groups of three to five learners, each guided by a pre-service teacher. The analysis of the teachers’ guidance was based on the screen capture videos.

Analysis of the Guidance Provided by the Simulation

The Balancing Act simulation (University of Colorado Boulder 2016) is aimed at learners in primary and lower secondary schools and deals with balance and torque. The interface of the simulation is pictured in Fig. 1. The simulation is divided into three tabs: Intro, Balance Lab, and Game. In the Intro tab, the users can experiment with the seesaw using three objects. Two of the objects weigh 5 kg and one weighs 10 kg. The seesaw’s supports can be removed and replaced. The masses of the objects, the forces they exert, the positions of the objects on the seesaw, and the level meter can be hidden or shown. The Balance Lab tab allows the users to put more than three objects on the seesaw. In this tab, the relative weight of the objects does not have to be 1:2. The Balance Lab also allows so-called mystery objects of unknown weight to be put on the seesaw. Finally, in the Game tab, the users can choose assignments from four different levels of difficulty. These assignments challenge the users to apply the knowledge gained from the two previous tabs. The assignments include, for example, placing an object so that the seesaw is balanced or finding the weight of an object by using it to balance the seesaw. Users receive points for successfully completing an assignment (2 points = correct answer on the first try, 1 point = correct answer on the second try; after two attempts, the user gets 0 points and the correct answer is shown).
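
As an illustration of the scoring rule just described, the following minimal sketch (our own, not the actual PhET implementation) models how points could be assigned for one Game-tab assignment:

```python
def score_assignment(attempts_to_correct: int | None) -> int:
    """Points for one Game-tab assignment, following the rule described above:
    2 points for a correct answer on the first try, 1 point on the second try,
    and 0 points if both attempts fail (the simulation then shows the answer)."""
    if attempts_to_correct == 1:
        return 2
    if attempts_to_correct == 2:
        return 1
    return 0  # two failed attempts: no points, correct answer is revealed


# Example: correct on the first try, correct on the second try, never correct.
print([score_assignment(a) for a in (1, 2, None)])  # [2, 1, 0]
```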

Fig. 1 The interface for the Intro tab of the Balancing Act simulation

To classify the guidance provided by the Balancing Act simulation, it was compared to a hypothetical, unguided version of the same simulation. In such a sandbox-like simulation, learners would be presented with just a seesaw and objects of varying weights to place on it; there would be no structure or feedback from the simulation. This would mirror the same experiment conducted with traditional, physical, hands-on methods, in which the structure and guidance must come from other sources. All features found in the actual Balancing Act simulation but not in the hypothetical, unguided version (e.g., the option of showing and hiding the objects’ weights and the assignments in the Game tab) were each scrutinized for their possible role in guiding the learners in their experimentation. The term guiding element is used to describe the features of the simulation that guide learners in their investigations. The guiding elements of the simulation were categorized into the typology developed by de Jong and Lazonder (2014). For example, the assignments embedded into the Game tab of the simulation were seen as guiding elements and categorized as prompts.

Analysis of the Guidance Provided by the Teachers

The conversation between the learners and the pre-service teachers was analyzed in order to categorize the guidance provided by the teachers into the forms defined by de Jong and Lazonder (2014) as well. The analysis had two main phases. The first phase involved coding the transcribed discussions between the teachers and learners into the six forms of guidance. De Jong and Lazonder’s (2014) descriptions of the forms of guidance and Zacharia et al.’s (2015) examples of different forms of software guidance from previous research were used. The term guiding action is used to describe each action of guidance provided by the teachers. The length of these guiding actions ranged from single utterances to discussions lasting around 3 min. A total of 421 guiding actions were identified in the data. De Jong and Lazonder’s descriptions and Zacharia et al.’s examples of each form of guidance were scrutinized to identify the factors in each form of guidance that are not unique to guidance provided by simulations. These factors were then used to code the transcripts of the guiding actions. One example is the performance dashboards provided by the teachers. Even though the teachers could not give the learners a visual dashboard the way that the software can, they could still give the learners real-time verbal progress reports about their learning process and knowledge status. This type of guiding action fits de Jong and Lazonder’s description of performance dashboards.

The second phase used thematic analysis (Braun and Clarke 2006) to more accurately describe the guidance provided by the teachers. The timing, content, and possible connection to previous events of each guiding action were scrutinized when defining and naming the themes. For guiding actions in the form of performance dashboards, prompts, and heuristics, two different themes were defined for each. These themes act as subcategories for those forms of guidance. The subcategories differ not in the specificity of guidance provided but in their timing, content, and connection to previous events.

An inter-rater reliability analysis using Cohen’s kappa was performed to establish consistency between two raters. The first author coded all of the data, and the second author coded a subset of the data using a coding manual. For discerning teachers’ guiding actions from non-guiding actions, the second author coded 10% of the data; the percentage of agreement between the authors was 96%, and κ = 0.897 (95% CI from .721 to 1), p < .0005. For the different forms of guidance, the second author coded 15% of the data; the percentage of agreement was 88%, and κ = 0.798 (95% CI from .675 to .904), p < .0005. When the subcategories for performance dashboards, prompts, and heuristics were taken into account, the percentage of agreement was 83%, and κ = 0.784 (95% CI from .664 to .883), p < .0005. These values indicate good reliability for high-inference coding of the video data in this study (Fischer and Neumann 2012).
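
For readers unfamiliar with the statistic, the sketch below shows how chance-corrected agreement between two raters’ codes can be computed; the labels and ratings are hypothetical and do not come from the study’s data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two raters to the same ten guiding actions,
# using labels drawn from de Jong and Lazonder's (2014) typology.
rater_1 = ["prompt", "heuristic", "scaffold", "prompt", "dashboard",
           "direct", "constraint", "prompt", "heuristic", "prompt"]
rater_2 = ["prompt", "heuristic", "scaffold", "prompt", "dashboard",
           "direct", "constraint", "prompt", "prompt", "prompt"]

agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
kappa = cohen_kappa_score(rater_1, rater_2)  # agreement corrected for chance
print(f"Percentage of agreement: {agreement:.0%}, Cohen's kappa: {kappa:.2f}")
```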

Analysis of the Distribution of the Guidance Provided by Different Sources

The analysis of the distribution of guidance revolved around the interaction of guidance provided by different sources and the temporal properties of the guidance (Puntambekar and Kolodner 2005; Tabak 2004). We examined the interaction of the simulation’s guiding elements with the teachers’ guiding actions and vice versa, also considering the learning need that each guiding element or action supports. Examples for each pattern of guidance as defined by Tabak showcase the complex nature of guiding inquiry-based learning with simulations via multiple sources of guidance.

Results

Guidance Provided by the Simulation

Table 2 provides an overview of the forms of guidance provided by the Balancing Act simulation. No heuristics or scaffolds were present in the simulation.

Table 2 Forms of guidance and guiding elements from the Balancing Act simulation, with their descriptions

Three different guiding elements were categorized as process constraints: progression within the simulation, progression within the Game tab, and the visualization settings. The simulation is divided into three tabs (see Figs. 1 and 2), and the learner can be expected to progress through the tabs in the order they are presented. This progression ensures that the learners first try to balance the seesaw in a simple situation with fewer variables before moving on to a more challenging situation and finally applying their skills to the assignments in the Game tab. Within the Game tab, the simulation offers four different levels of assignments. The levels differ, for example, in the number of objects on the seesaw as well as in the objects’ weight ratios and distances from the fulcrum. This progression also allows learners to first apply their skills to simpler situations and then move on to more challenging ones. As the final process constraint, the distances of the objects from the center of the fulcrum and the forces they exert on the seesaw are hidden by default but can be shown. The default settings simplify the simulation and hide information that could distract the learners at the beginning of their experimentation. As the learners’ knowledge increases, these settings can be enabled.

Fig. 2 An example assignment from the Game tab of the Balancing Act simulation

The score given to the learner based on the number of attempts they need to complete the assignments in the Game tab is a kind of performance dashboard. The score gives learners real-time information about their level of knowledge and their progression. As the learners gain knowledge, they are able to answer more of the assignments correctly on the first try, increasing their score. The assignments themselves are also a form of guidance: they serve as prompts that enable the learners to apply their knowledge. Because the assignments are preceded by the two other tabs, the learners have had a chance to develop the knowledge needed to complete them. Finally, giving the learners the correct answer to an assignment after two incorrect attempts serves as a form of direct presentation of information. Revealing the answer ensures that learners who are unable to answer an assignment can still benefit from the content information implicitly embedded in the correct answer and use this knowledge to progress through the rest of the assignments.

Guidance Provided by the Teachers

Table 3 provides an overview of the forms of guidance provided by the pre-service teachers.

Table 3 Forms of guidance and guiding actions provided by the pre-service teachers, with their descriptions

The different forms of guidance provided by the pre-service teachers are presented in the following sections through illustrative examples and excerpts from classroom dialog.

Process Constraints by the Teachers

The pre-service teachers used process constraints when the learners were overwhelmed by the number of options (number of objects, places for the objects, etc.). This excerpt illustrates one of these situations. In the excerpt, the learners are trying to balance the seesaw in the Intro tab with two fire extinguishers (weighing 5 kg each) on one side and one trash bin (weighing 10 kg) on the other side. The pre-service teacher (PST A) sees that the learners have already tried to move the objects into different places and are having difficulty balancing the seesaw.

PST A: Well, now we notice that the side with the fire extinguishers still weighs a bit more—let’s do it like this: let’s keep the trash bin where it is now; let’s agree we’ll not move it anymore [Process constraints]. Then, how could we make the side with the fire extinguishers a bit lighter?

Learner 1: If we would put them a bit forward.

PST A: Yeah, you can put them forward.

PST A suggests that the learners leave the trash bin in place and only adjust the place of the fire extinguishers on the other side of the seesaw. By doing so, PST A removes a degree of freedom from the assignment, reducing the complexity of the situation. Thus, this guiding action restricts the number of options the learners have to consider, which is characteristic of a process constraint (de Jong and Lazonder 2014).

Performance Dashboards by the Teachers

Even though the pre-service teachers were not able to present real-time information to the learners via a visual dashboard, they still gave the learners feedback about their learning process and the quality of their outcomes. This feedback was given in two different types of guiding actions. First, the pre-service teachers gave the learners feedback while they experimented with the simulation. This feedback occurred when the learners were close to balancing the seesaw or when they utilized a good strategy in their experiment. Second, the teachers gave feedback on, e.g., the good quality of the learners’ explanations of the phenomena after the learners had completed an assignment. In the following excerpt, the pre-service teacher (PST B) gives feedback during experimentation. The learners are working on an assignment from the Game tab of the simulation. The assignment asks the learners to determine the weight of an unmovable vase by using another object weighing 5 kg to balance the seesaw.

Learner 3: Should we put it there?

Learner 4: Put it there.

L 3: Try it first all the way in the end.

L 2: Hey, put it there, because then it’s in the same spot as the other one.

[The learners put the object in the same spot as the vase but on the other side of the seesaw. The seesaw balances.]

PST B: That was a very good idea to try it first in the same spot [Performance dashboard—feedback on experimentation]. Well, what can you now deduce from this situation?

L 2: That would be five kilos.

In this excerpt, PST B praises Learner 2 for his/her strategy for the assignment. The learner suggests placing the object with a known weight at the same distance from the fulcrum as the vase, the weight of which is unknown. Seeing what happens then tells the learners whether the unknown weight is less than, more than, or the same as the known weight. The pre-service teacher explicitly states that this particular strategy is a good idea, which gives the learners information about their learning process and knowledge. The learners can act on the feedback, which is an essential characteristic of a performance dashboard that provides guidance (de Jong and Lazonder 2014).

Prompts by the Teachers

The pre-service teachers prompted the learners in two different ways. First, they prompted the learners to interact with the simulation—for example, to balance the seesaw in a given situation in the Intro and Balance Lab tabs or to complete an assignment in the Game tab. These actions caused the learners to apply their knowledge to balance the seesaw or complete the assignment. Second, the teachers prompted the learners for verbal responses. They instructed the learners to form hypotheses before trying to complete an assignment and prompted them to reflect on their actions and answers. The excerpt below shows pre-service teacher C (PST C) prompting the learners to reflect on their actions after completing an assignment in the Game tab. In this assignment, the learners had to balance the seesaw using a weight of 20 kg on one side with a fixed weight of 10 kg on the far end of the other side.

Learner 5 [talking to Learner 6, who is using the simulation]: So, put it there—no, wait, one step forward—that’s it. Let us see if it’s correct.

[The 20-kg weight is placed half as far from the fulcrum as the 10-kg weight, but on the other side. The seesaw balances, and the simulation informs the learners of their correct answer.]

PST C: Yeah.

L 5: It was.

[Learner 6 moves the mouse cursor to the “Next” button.]

PST C: Do not go on to the next assignment yet—what was the reason that this was the correct answer? [Prompts—prompt for answer]

L 5: Well, wait a minute...

Learner 6: The 20-kg weight a bit heavier but….

Learner 7: Which means it’s more to the center.

In the excerpt, the learners succeeded in balancing the seesaw on their first attempt. They are ready to move on to the next assignment, indicated by moving the cursor to the “Next” button. At this point, PST C prompts the learners to explain why their answer was correct. The discussion that follows was initiated by this prompt, and it probably would not have happened without it. The prompt served as a stimulus for the learners to reflect on their answer when they did not show initiative to do so on their own, which fits the description of prompts (de Jong and Lazonder 2014).

Heuristics by the Teachers

The pre-service teachers guided the learners using two different types of heuristics. The first type of heuristics involved reminding the learners of something they had previously done. This could include a reminder of a similar assignment in the Game tab or a reminder of a rule they had previously formulated for balancing the seesaw. The second type of heuristics involved giving the learners a hint. These hints pointed out possible actions or ways to perform the action. In the excerpt below, a pre-service teacher (PST A) uses both types of heuristics. The learners are working on an assignment in the Game tab in which they must determine the weight of a trash can, which is fixed in one place on the seesaw, by balancing the seesaw using a brick that weighs 15 kg.

Learner 1: Now this trash can.

PST A: This is a similar assignment to the one where you had to guess the weight [Heuristics—reminder].

L 1: Maybe it’s ten kilos in this one as well… probably not.

PST A: I think you should first put it so that the seesaw balances itself; try it [Heuristics—hint].

Learner 8: Put it all the way to the end.

L 1: Oh, yeah. OK.

[The brick is placed to the far end of the seesaw. The seesaw balances.]

In the excerpt, PST A pointed out that the assignment at hand is similar to an earlier assignment, which the learners had already completed. This guiding action served as a reminder. It directed the learners’ thoughts toward the previous assignment and how they completed it. This is a characteristic of heuristics (de Jong and Lazonder 2014). Learner 1 begins to think aloud about the possibility that the answer to this assignment is the same as that of the previous assignment which PST A referred to. It was not clear to the learners that in a similar assignment, same-looking objects could have different weights. This may have spurred PST A to give a hint on how to proceed with the experiment; the teacher hinted that they should first try to find a position for the brick which balanced the seesaw. This guiding action serves as a heuristic because it points out a way to complete a task (de Jong and Lazonder 2014).

Scaffolds by the Teachers

De Jong and Lazonder (2014) define scaffolds as tools that structure the activity. Instead of tools, the pre-service teachers provided scaffolds by dividing the process of drawing conclusions from the experiments into smaller steps, providing a structure for drawing conclusions. The pre-service teachers asked multiple closed questions about the ratio of weights and their distances, which provided the learners with the components of the process. The questions structured the learning process and thus simplified a complex process, which fits the description of scaffolds by de Jong and Lazonder. In the following excerpt, a pre-service teacher (PST D) provides this kind of guidance. In the excerpt, the learners are working on an assignment from the Game tab in which they must predict what will happen when the supports are removed from a seesaw that has two bricks weighing 5 and 15 kg on opposite sides of the seesaw at the same distance from the fulcrum. The simulation gives them three options: the seesaw tilts to the left, tilts to the right, or remains horizontal.

[Learner 11 points to the option indicating that the seesaw tilts to the right, which is the correct answer.]

L 11: I think it is that one.

PST D: Which of these is more—which one is heavier?

L 10 [points to the 15-kg weight]: This one.

PST D: This one—are these on the same line?

L9 and L 10: Yes.

PST D: Yes, so if this one is heavier, then what will happen? [Scaffolds—dividing the problem into smaller parts].

L 10: It goes down.

L 9: It goes there, so that picture.

PST D: Ok, try it and press “Check answer.”

L 11: Yes, it was.

At the beginning of the excerpt, Learner 11 picks the right answer from the three options. To structure the process of choosing the correct option, PST D divides the process into three parts through three questions: (1) Which of the objects is heavier? (2) Are the objects at the same distance from the fulcrum? and (3) What happens when one of the objects is heavier and they are at the same distance from the fulcrum? Simplifying and structuring a complicated process (such as determining which way the seesaw will tilt) by dividing it into smaller components is characteristic of scaffolds (de Jong and Lazonder 2014).

Direct Presentation of Information by the Teachers

The pre-service teachers also directly presented information to the learners during their experimentations. This form of guidance was provided to inform learners of the rule by which the seesaw can be balanced or to inform them of the variables (weight and distance from the fulcrum) that affect the balance. In the following excerpt, a pre-service teacher (PST A) directly presents information to the learners at the conclusion of an assignment in the Game tab. In this assignment, the learners must find the weight of a flower pot which is 1.5 m from the fulcrum using a brick that weighs 10 kg. They have balanced the seesaw by placing the brick 0.75 m from the fulcrum, and they have come to the conclusion that the flower pot weighs 20 kg. The simulation informs them that they have answered incorrectly.

Learner 1: It wasn’t twenty.

PST A: Yeah, so now you guessed twenty, but because this one (the flower pot) is further away, it has to be in fact lighter than ten kilos. [Direct presentation of information]

Learner 12: Five.

PST A: Why do you answer five? [Prompts—prompt for answer]

L 12: Because I suddenly felt like it.

Learner 8: Yes, I agree.

[The learners enter five kilos. The simulation informs them that their answer is correct.]

PST A: It is correct, so it weighs half as much—girls, would you listen to me for just a second?

Learners: Yes.

PST A: It weighs half as much as ten kilos because it is twice as far from the fulcrum as the ten kilos is—this is why they are in balance. [Direct presentation of information]

The learners chose the correct ratio for two weights (1:2) in their first answer. By explicitly stating that the object further from the fulcrum must be lighter, PST A informs the learners that the answer must be less than 10 kg. Learner 12 has the right answer but is unable to give a reason for the answer when PST A asks for one. After the correct answer is entered into the simulation, PST A explains that objects weighing half as much must be placed twice as far from the fulcrum. The first direct presentation of information gave the learners qualitative information and the latter quantitative information about how to balance the seesaw. The learners were unable to discover this information on their own, as was apparent from their first incorrect answer and their inability to provide reasons for the correct answer. According to de Jong and Lazonder (2014), it is appropriate to directly provide the learners with information in this situation.
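
As a worked check of the quantitative relationship PST A states at the end of the excerpt, applying the torque-balance condition noted earlier to the data in the assignment yields the flower pot’s weight directly:

```latex
% Brick: 10 kg placed 0.75 m from the fulcrum; flower pot: unknown mass at 1.5 m.
\[
  m_{\text{pot}} = \frac{m_{\text{brick}} \, d_{\text{brick}}}{d_{\text{pot}}}
                 = \frac{10\,\text{kg} \times 0.75\,\text{m}}{1.5\,\text{m}}
                 = 5\,\text{kg}
\]
```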

Distribution of Guidance Between the Simulation and the Teachers

Tabak’s (2004) three different patterns of distributed guidance are illustrated in this data by examples showing the distribution of the guidance among different guiding elements and actions.

Differentiated Guidance

Identifying learning needs and supporting each of them using the best source and form of guidance available is the principle behind the pattern of differentiated guidance (Tabak 2004). In this study, for example, only the pre-service teachers (and not the simulation) prompted the learners to reflect on their answers to the assignments or on their learning in general. The assignments in the Game tab assign scores based on the number of correct and incorrect answers, but these scores do not take into account whether the learners used a method or a strategy to solve the assignment or whether they simply guessed the answer. The teachers, on the other hand, prompted the learners to verbally express their strategies for solving the assignments and to give explicit reasoning for their answers. When the learners explicitly state their reasoning for their answers, they devote effort and resources to the scientific content of the answer. The teachers also prompted them to share their ideas with other members of their group, which helped them discover or address disagreements among themselves, prompting them to engage in exploratory discussions (Mercer 1996).

Redundant Guidance

The idea that different learners have different needs for support is the principle behind the pattern of redundant guidance (Tabak 2004). An example of this pattern in this study was when the teachers verbalized and paraphrased the assignments given by the simulation in the Game tab. The excerpt below provides an example of this. In this assignment, the seesaw was shown in a predetermined configuration with supports holding it in place. The learners had to determine what would happen to the seesaw (tilt to the left, tilt to the right, or remain horizontal) when the supports were removed.

Learner 9: So now…

Learner 10: This one has to be moved that way.

L 9: Yeah, this has to be moved this way in order to—

PST D: That is true, but now—here, you don’t have to move these, but if the situation is this: that fifteen kilos is there and the other one is here, which of these options will happen? [Prompts – prompt for action]

Even though the instructions are written on the screen, younger learners in particular may have difficulties understanding what they are expected to do in the assignment. When the learners in this excerpt encountered this type of assignment for the first time, at least two of them did not immediately understand that they could not move the weights on the seesaw and started to discuss where to move them; other learners in the group might have understood the assignment. So, the teacher verbalized the assignment, transforming the written assignment into a spoken one and thus moving from one mode of expression to another. This redundant guidance, provided through different sources and modalities, ensured that all members of the group received guidance in the form of a prompt and an assignment (Tabak 2004).

Synergistic Guidance

The idea that different forms of guidance augment one another and work in tandem to guide learner performance is the principle behind the pattern of synergistic guidance (Tabak 2004). An example of synergistic guidance is the interplay between the learners’ progression through the different levels of assignments in the Game tab and the teacher’s instruction for the learners to either move on to a more difficult level or to stay at the same level. Let us take PST C and his/her group of learners as an example. After completing the first level of assignments in the Game tab, the learners want to move on to level 2 (“Can we go on to level 2 now?”). PST C agrees to that but adds that they must pay attention to the difficulty of the assignments (“We can try, but we’ll have to see if they [the assignments] are really difficult.”). The learners go on solving the assignments, but they struggle with some of them because the assignments in level 2 are more complex. PST C acknowledges this (“This is a really hard one [level].”). With the teacher’s guidance, the learners are able to complete all of the assignments and want to move on to level 3 (“OK next up is level 3.”). PST C, however, prevents the learners from moving on (“Let’s just play levels 1 or 2; those previous ones were already really difficult.”). In this example, the guidance embedded in the simulation (i.e., progression through the different levels) was augmented by the teacher’s observations of the learners’ skills and knowledge. If guidance were provided only by the simulation, learners could over- or underestimate their skills and try to complete levels that are too hard or too easy for them. This would hinder their learning or at least decrease their motivation. The teacher can make use of the progressive difficulty of the levels in the simulation and provide additional guidance that is adapted to the learners’ needs. When the guidance provided by the simulation is augmented by dynamic support from the teacher, the guidance is more likely to be effective (Tabak 2004).

Discussion

The results of this study give a glimpse into one case dealing with guidance provided by different sources. The teachers provided more varied guidance than the Balancing Act simulation by providing different forms of guidance through different guiding actions. The guidance provided by the simulation was concentrated around the assignments in the Game tab and the learners’ progression through the simulation. These results illustrate how inquiry-based learning is guided by both the teachers and the simulation. Using the same categorization for both providers of guidance made it possible to contrast the forms of guidance provided. It also allowed the patterns of guidance distribution between the different providers to be examined.

The examples from the data for the patterns of distributed guidance by Tabak (2004) all have one characteristic in common: the teacher is the guidance provider that enacts the patterns. This showcases the crucial role that teachers play in guiding inquiry-based learning with simulations. In theory, teachers can adapt their guidance both to the needs of the learners and to the guidance provided by the simulation better than the simulation can adapt to the learners and to the teacher. We will discuss these two forms of adaptation separately.

First, guidance for inquiry learning with simulations should always be adapted to the needs of the learners, no matter the source of the guidance (de Jong and Lazonder 2014; Smetana and Bell 2012; van de Pol et al. 2010). De Jong and Lazonder emphasize the role of constant monitoring of learners’ performance and knowledge in adapting guidance for every learner. Teachers probe the learners’ performance and knowledge and adapt their instruction through formative assessment (Black 2009; Buck et al. 2010; Haug and Ødegaard 2015; Ruiz-Primo and Furtak 2007; Ruiz-Primo 2011). This has been argued to be one of the fundamental mechanisms for learning and also for improving learning gains (Black and Wiliam 1998; Jordan and Putz 2004). Especially in informal formative assessment, teachers consciously discover information about learners’ understanding and use this information to alter their immediate instruction (Ruiz-Primo 2011; van de Pol et al. 2010). The teacher can obtain evidence about learner needs and knowledge from multiple sources, including oral evidence (e.g., learners’ conversations, questions, and responses), written evidence, graphic evidence, practical evidence (e.g., observing learners’ work with a simulation), and non-verbal evidence (e.g., body language) (Ruiz-Primo 2011). Simulations, on the other hand, cannot adapt to learners’ needs and knowledge in the same way since they cannot receive as much information about the learners as teachers can. For example, the only information received by the assignments embedded into the Balancing Act simulation is whether the learners complete an assignment on the first or second attempt or not at all. Based on this information, the simulation then gives the learner a score or displays the correct answer. More complex learning analytics software for science learning exists; such programs can obtain evidence of learner outcomes from different learning products within the software (de Jong et al. 2010). The development of learning analytics tools that guide learners based on their needs is still under way (de Jong and Lazonder 2014; Ferguson 2012; Olympiou and Zacharias 2013). At this time, the ability of teachers to adapt to learners’ needs is far beyond the capacity of any software.

Second, because the guidance provided by teachers can be more adaptive than the guidance provided by a simulation, it is the teachers’ guidance that should adapt to the guidance provided by the software. Teachers’ ability to provide guidance that complements that provided by the simulation ensures that the overall guidance the learners receive is as adaptive as possible. In order for science learning with simulations to be supported as efficiently as possible, all sources of guidance are needed: teachers, simulations, and other learning materials. The guidance should be distributed between different sources in a pattern that amplifies the best features of each source of guidance.

Primarily, the responsibility for creating this beneficial distribution rests with the teachers. They need to be aware of the forms of guidance that the software and other sources can provide and compare those forms of guidance to the learners’ needs. In some cases, the software could be the best source of guidance for a particular learning need; in other cases, teachers might need to augment the guidance provided by the software. The patterns of distributed guidance described by Tabak (2004) help illustrate this process. Through pre- and in-service training, teachers could be made more aware of the guidance provided by different sources and helped to recognize their own strengths and weaknesses. This could make it possible to provide better overall guidance to learners.

Secondarily, producers of educational software and simulations for science learning are responsible for ensuring productive and optimal interaction between different providers of guidance. In some ways, software can provide better guidance than teachers can: individual learners can interact with the software throughout a lesson and during all inquiry phases, while a teacher can only guide one group of learners at a time or the entire class together. Leaving some guidance to the software enables teachers to focus more on probing learners’ needs and providing additional guidance that complements the software guidance. Research is needed to identify which aspects of guidance can be delegated to software and which cannot (van Joolingen et al. 2007). An example of software for inquiry-based science learning with simulations that can be adapted by the teacher is the Go-Lab project (de Jong et al. 2014). This software implements a rich set of tools to provide different forms of guidance throughout the inquiry learning cycle. Teachers can use their own diagnosis of learners’ needs at the class level to design inquiry learning spaces using different tools and different forms of guidance to target different phases of the inquiry learning cycle. This enables teachers to combine their ability to gain evidence about learner needs with the software’s ability to provide guidance for multiple learners at the same time and can possibly improve the overall quality of guidance in the classroom. Still, the guidance provided by the Go-Lab software has to be pre-programmed by the teacher and cannot be modified on-the-fly. Educational software needs to develop further to enable this sort of flexibility and adaptation.

One source of guidance that could be promoted through software is collaboration with peers. This could involve different types of collaboration scripts that support learning by shaping the way learners interact with each other (Kobbe et al. 2007). For example, guidance provided by software could include prompts for learners to engage in discussions with their peers. Thus, some guidance could be provided by peers in addition to that provided by software and teachers.

Limitations

One limitation of this study is that guidance for inquiry-based learning was only studied in the context of one PhET simulation. The guiding frameworks in which simulations are embedded differ from one simulation to another (Clark et al. 2009); different simulations provide different forms of guidance. This also leads teachers to provide different forms of guidance and to use different patterns for distributing guidance. In order to make generalizations about the guidance provided by simulations and teachers, one would need to collect data from lessons utilizing multiple different simulations. Also, the fact that the data was collected from pre-service teachers’ lessons rather than in-service teachers’ lessons has an effect on the guidance provided. In-service teachers with more experience could provide different forms of guidance through different guiding actions. Pre-service teachers’ limited content knowledge could also have affected their teaching, including the guidance they provided (Childs and McNicholl 2007).