Introduction

Computer-supported learning is becoming increasingly important, and a growing number of studies show the positive effects of this approach (de Witte et al. 2015; Wongwatkit et al. 2016). Virtual laboratories (simulations) are software solutions that do not require real laboratory equipment but instead imitate real processes, allowing parameters to be controlled and measurements to be made in a simulated environment.

A large number of studies emphasise the importance of virtual laboratories for the conceptual understanding of learning content (Balamuralithara and Woods 2009; de Jong et al. 2013; Kollöffel and de Jong 2013; Sarabando et al. 2014; Zacharia 2007). Electronic labs are particularly suitable for developing learning skills through inquiry learning. Students should know how to access content knowledge efficiently and effectively and how to develop problem-solving skills (de Jong et al. 2014; Govaerts et al. 2013; Zacharia et al. 2015).

Formative assessment is an important aspect of learning (Hopster-den Otter et al. 2017; Shah et al. 2014); it includes continuous assessment and the provision of detailed information about achievement, as well as support for future learning and progress (Webb et al. 2013). Feedback is a key component of computer-based instruction (Clariana and Koul 2005) and one of the key elements of formative assessment, and consequently it may have a great impact on learning processes (Bokhove and Drijvers 2012; Havnes et al. 2012; Van der Kleij et al. 2015; Voerman et al. 2012). Feedback allows learners to review each set task, enabling further development of learning skills (Higgins et al. 2002). Feedback thus allows students to bridge the gap between what is understood and what is aimed to be understood (Hattie and Timperley 2007), prompting reflection on the feedback and the creation of new learning strategies (Baas et al. 2015).

There are inconsistencies in the results obtained by various researchers about the effectiveness of feedback (Attali 2015). The quality of traditional feedback from teachers can depend on various factors: it depends mainly on human factors such as teachers’ workload, consistency and punctuality, as well as students’ comprehension of the feedback given by teachers, their interests and their expectations (Lee 2008). Computer-based feedback has certain advantages in overcoming barriers such as students’ avoidance of asking for an explanation and the inability of teachers to attend to each student at the same time or to provide feedback of equal quality within their available time and current working capacity. Moreover, computer testing enables automatic data collection on the achievements of a large number of students, fast feedback on test results, the possibility of testing at any time, etc. (Vasilyeva et al. 2007). However, Mulliner and Tucker (2015) stated that misunderstanding of feedback can arise from a lack of interaction between a teacher and a student; therefore, direct contact and communication can be seen as one of the advantages of traditional feedback.

Previous research shows that there are difficulties in understanding electrical circuits and electricity in primary and secondary education, which result in relatively low academic achievement (Peşman and Eryılmaz 2010; Kada and Ravanis 2016; Küçüközer and Kocakülah 2007; Lee and Law 2001). It is therefore important to find the most efficient way of overcoming these difficulties. Computer simulations are a valuable means of addressing this problem, but there are no research studies comparing the effects of simulations with an already created electrical circuit and simulations in which students create an electrical circuit. Also, the number of studies comparing the effects of traditional and computer feedback is very limited; for example, Jagodziński and Wolski (2015) measured the influence of using a virtual laboratory with instructions from a teacher and with other multimedia elements (video, text). Therefore, the objective of this paper is to compare simulations based upon individual modelling of an electrical circuit with simulations with already created circuits that enable parameter measurement. Moreover, the objective is to compare the efficiency of computer-elaborated feedback (EF) for correct and incorrect answers (which includes hints for similar conceptual tasks) in combination with try-again-type feedback, on the one hand, and traditional feedback provided by the teacher, on the other. In accordance with the research results in favour of computer EF for conceptual understanding and the positive impact of the application of simulations (virtual laboratories), these two approaches have been integrated into this research.

Literature Review

Electricity in Primary Education

The greatest challenge for science teachers is to bring about changes in conceptual understanding (Lee and Law 2001), where one can distinguish between strategies of cognitive conflict and reinterpretation of existing knowledge (Duit and von Rhöneck 1997). The strategy of cognitive conflict is realised by challenging students to express their ideas, where it is important for them to see the discrepancy between the knowledge structures they hold and real facts, whereas reinterpretation strategies refer to finding more accurate descriptions of concepts and theories in relation to existing knowledge. Difficulties in understanding electrical circuits and electricity in primary and secondary education have been confirmed by different authors (Çepni and Keleş 2006; Kada and Ravanis 2016; Küçüközer and Kocakülah 2007; Lee and Law 2001).

For example, ninth-grade students’ misconceptions about electrical circuits have been analysed, and relatively low test results have been found (Küçüközer and Kocakülah 2007; Peşman and Eryılmaz 2010; Sencar and Eryilmaz 2004). Duit and von Rhöneck (1997) also analysed students’ misconceptions about current, voltage and resistance; they claim that primary school students may hold misconceptions because of the everyday meaning of “current”, which they have already accepted. Lee and Law (2001) dealt with older students’ (17 years old) alternative conceptions of electrical circuits and with teaching strategies that improve conceptual understanding.

The findings show that students at different levels of education have the same problems in understanding electrical circuits (Shipstone 1988). The following issues are seen as particularly problematic (Duit and von Rhöneck 1997): understanding the concept of electricity consumption; local reasoning (where the focus is on only one point of an electrical circuit with no insight into the whole circuit); sequential reasoning (the inability to see that changes in subsequent parts of an electrical circuit also affect its initial parts); and understanding the resistance concept (e.g. difficulties in understanding how the current changes when one resistance value in an electrical circuit is changed).
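To make the last difficulty concrete, consider a minimal worked example (our illustration, not drawn from the cited studies): in a series circuit, changing a single resistance changes the current through every element, which conflicts with the local and sequential reasoning described above.

\[
I = \frac{V}{R_1 + R_2} = \frac{12\,\mathrm{V}}{2\,\Omega + 2\,\Omega} = 3\,\mathrm{A};
\qquad
\text{after } R_2 \rightarrow 4\,\Omega:\;
I = \frac{12\,\mathrm{V}}{2\,\Omega + 4\,\Omega} = 2\,\mathrm{A}.
\]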

Different Types of Simulation and Feedback

There are many articles on the importance of using computer simulations in the natural sciences (Lye et al. 2014). In the context of inquiry learning, students are required to identify the problem, develop hypotheses, make a report, analyse the data obtained, draw conclusions, keep track of their progress and evaluate the learning process (Zacharia et al. 2015). Thus, when students set research hypotheses, they check their understanding of the problem through the application of the experiment.

The teacher has a very important role in computer-based science education. Fang and Hsu (2017) consider two dimensions of teaching: guidance and cognitive dimensions. According to these authors, guidance includes giving instructions, support and organising students’ work. The cognitive dimension, on the other hand, can be divided into four categories: conceptual, epistemic, social and technological. With regard to the conceptual domain, teachers have the following tasks: giving explanations of scientific concepts; encouraging students to think; encouraging deep understanding of content; summarising content; evaluating students’ understanding; and linking with previous knowledge and everyday experience. From the epistemic perspective, teachers need to give students opportunities to observe information, give explanations, and summarise and test their hypotheses. The social aspect includes discussing topics, enhancing ideas, etc. The technological aspect includes explanations regarding the use of the software. From the guidance perspective, lesson content should be presented step by step with detailed explanations.

The importance of giving feedback during learning with virtual labs has been examined, as has the interaction between teachers and students in overcoming gaps in conceptual understanding (Furberg 2016; Sarabando et al. 2014). There are, however, no studies that compare feedback across different types of simulation. Narciss et al. (2014) used elaborated procedural and conceptual feedback in combination with software that includes a lesson about fractions; the software required pupils to find the error in the solution to a task within three attempts, and after the third attempt (or a correct answer) students were given the worked-out solution. Other authors used knowledge of result (KR) and knowledge of correct response (KCR) feedback in combination with a simulation for learning Newton’s law (Huang et al. 2015). Chang et al. (2008) compared the learning effects of three similar simulation-based environments, in which software combined with experiment prompts and a hypothesis menu proved to be more effective than software with step-by-step guidance for conducting the experiment. Other studies have found that simulation with question prompts in combination with KCR is more effective than simulation with KR and without prompts (Huang et al. 2015). Another study, in the context of higher education, indicates that students who used EF in combination with simulation and supporting information achieved better results than students who used a simple form of feedback (Bernstein et al. 2016). Yaman et al. (2008) used multiple-choice questions with brief (correct/incorrect) feedback, as well as example solutions for the tasks, in combination with a simulation for learning chemistry topics. Jagodziński and Wolski (2015) compared different types of instruction in combination with a chemistry virtual laboratory; the results indicated that the group that used the simulation with instructions from the teacher was more effective than the groups that used the simulation with instructional text and films. In our study, we use EF for all answered questions, together with instructions for solving future similar tasks (conceptual hints).

Feedback and Learning Performance

Information and communication technologies provide various forms of feedback, such as information about overall results, information on accurate and inaccurate answers, and EF, and the depth of elaboration can vary (Golke et al. 2015; Shute 2008). EF can appear in several basic forms (Maier et al. 2016b): task-specific EF, in which students see the exact solution to the specific task; instruction-based feedback, in which the explanations provided are based on the existing material; and extra-instructional feedback, which illustrates a solution using previously unseen material. EF usually provides the correct answer or an explanation of why an answer is inaccurate (Shute 2008).

Focusing on the type and timing of feedback, Shute (2008) distinguishes between immediate and delayed feedback. Immediate feedback usually refers to feedback given directly after each student answer, while delayed feedback is given right after the student has completed a quiz, test or specific block of tasks, or even a few days later. Different authors have discussed the efficiency of immediate and delayed feedback: some emphasise the advantage of immediate feedback because it prevents wrong information from being stored in long-term memory, while others believe that delayed feedback reduces proactive inhibition, so that wrong information is forgotten and correct information is memorised without interference. Some believe that delayed feedback can also be useful for difficult tasks, because challenging tasks require more time for information processing (Yuan and Kim 2015).

In addition, single-try feedback (STF) and multiple-try feedback (MTF) can be distinguished (Van der Kleij et al. 2011). MTF allows incorrectly answered tasks to be answered again.

Numerous papers have examined the effects of feedback on the learning process and final outcomes, producing numerous classifications and various types of feedback, such as performance-oriented feedback, feedback oriented to outcomes and objectives, and adaptive feedback (Baadte and Schnotz 2014; Narciss et al. 2014). While some types focus on developing or deepening knowledge, others focus on the development of metacognitive skills and learning processes (Price et al. 2010).

The effects of applying formative evaluation depend on a number of factors, including the content, timing and placement of feedback (after right or wrong answers), followed by situational variables such as the learning content and task difficulty, and the characteristics of pupils (motivation, self-determination, learning objectives, etc.) (Maier et al. 2016b). The authors of a meta-analysis aimed at establishing the effects of feedback found the least positive effects in the science field (Kingston and Nash 2011). These authors claim that the efficiency of feedback depends to a great extent on the very nature of the tasks characteristic of specific fields of study, and also on the type of feedback; moreover, they claim that feedback has a more positive influence when it is used with simpler tasks. Maier et al. (2016b) found that students who used EF that they regarded as useful, together with verification feedback, achieved better results than students who regarded EF as useless or who did not receive any feedback. The same authors point out that the volume of feedback, which is often greater in scientific fields, can reduce its efficiency; accordingly, the influence of intrinsic motivation on the tendency to read and use feedback can be an important factor. Similarly, a study examining the efficiency of EF and KCR feedback in combination with knowledge prompts and application prompts shows that efficiency depends on the context of use (Law and Chen 2016): the authors found that EF is more efficient than KCR when knowledge prompts are used, whereas KCR has better effects on students’ achievement than EF when application prompts are used. Another factor that can influence feedback efficiency is students’ ability (Narciss and Huth 2004); for example, EF can be more appropriate for students with weaker abilities, whereas KCR feedback can be more useful for students with stronger abilities. Also, in order to have positive effects, feedback should be understandable to students. There are indications that students are sometimes unable to understand feedback comments, interpret them correctly and use them (Mulliner and Tucker 2015); the reason may be a lack of interaction between teachers and students. The results of numerous studies indicate more positive effects of EF (Heckler and Mikula 2016; Meyer et al. 2010; Moreno 2004) and of feedback that provides an explanation of the correct answers (Erhel and Jamet 2013) than of feedback that only provides information on the accuracy of the task (Van der Kleij et al. 2015) or instruction that does not contain any form of feedback (Maier et al. 2016b).

Nevertheless, the authors of other meta-analyses claim that the results of studies in the field of feedback should not be relied upon, since a large number of studies are methodologically limited and provide insufficient explanation of the types of feedback used (McMillan et al. 2013). EF is not suitable for all learning situations (Golke et al. 2015; Maier et al. 2016a); some authors have suggested that more complex tasks require detailed feedback (Maier et al. 2016b; Van der Kleij et al. 2011). Oral and written feedback can also be compared: studies have concluded that most students prefer oral feedback (Mulliner and Tucker 2015), but this type of feedback does not affect their achievement (Morris and Chikwa 2016). Methods of formative assessment should not be generalised; the application of different forms of evaluation depends largely on the specificities of the teaching field (Bennett 2011). Therefore, one of the aims of this research is to determine the effects of EF for correct and incorrect answers (including hints for solving conceptual tasks) using MTF in physics education with different software simulations.

Methodology

Design and Participants

This study employed a pre-test–post-test research design. Parallel versions of tests were used in the initial and final testing to measure students’ achievement in terms of conceptual understanding. The controlled factor was the lesson approach, with different types of simulation in the field of physics for primary school students. Two groups used web-based software with an integrated simulation and EF for correct and incorrect answers (including hints for conceptual tasks) in combination with MTF and KCR feedback. The third group of pupils used simulation in combination with oral feedback and instruction from the teacher in the classroom.

One hundred and sixty-eight eighth-grade students from three schools in Čačak, Serbia, were included in the study (81 males and 87 females; 14 years old). The students were randomly assigned to one of three treatment groups: (a) simulation with an already created electrical circuit, in which students could change the values of the voltage and resistance, with EF for correct and incorrect answers in combination with MTF and KCR (n = 59: 29 boys and 30 girls); (b) simulation in which students created an electrical circuit from the offered elements, with EF for correct and incorrect answers in combination with MTF and KCR (n = 54: 25 boys and 29 girls); (c) simulation in which students created an electrical circuit from the offered elements, in combination with the teacher’s feedback in the classroom (n = 55: 27 boys and 28 girls). No statistically significant differences in the achievements of the groups were found in the initial test (F(2,165) = 2.053, p = .132).

The selected sample size of 168 is sufficient for detecting effect sizes above 0.33, as sketched below.
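As a rough check of this claim, a power analysis along the following lines can be run. The sketch assumes a one-way design with three equal groups, α = .05 and Cohen’s f as the effect-size metric; the paper does not state these settings.

```python
# Hedged power check for the stated sample size (assumed settings).
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
achieved_power = analysis.solve_power(
    effect_size=0.33,  # Cohen's f the authors cite as detectable
    nobs=168,          # total sample size
    alpha=0.05,
    k_groups=3,
)
print(f"Achieved power for f = 0.33, N = 168: {achieved_power:.2f}")
```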

Objectives of Research and Hypotheses

When students have a certain degree of interaction with the subject matter, they are able to retain a higher percentage of information than in situations in which they can only hear and see it (Wolfgram 1994). Previous studies have shown that simulations for assembling electrical circuits have a very positive effect on students’ conceptual understanding (Faour and Ayoubi 2018; Wieman et al. 2008; Zacharia 2007). On the other hand, simulations with already created electrical circuits allow a lower level of interaction and reduce students’ opportunity to engage more actively in the learning task. For these reasons, it can be expected that the two types of simulation have different effects on students’ achievement.

Feedback is an important element of the learning process: on the basis of feedback, students direct their learning, change their problem-solving strategies and deepen their understanding (Bokhove and Drijvers 2012). Studies have shown positive effects of computer feedback on students’ conceptual understanding (Law and Chen 2016; Clariana and Koul 2005). Given that traditional feedback may have a different effect from computer feedback, owing to the differences between face-to-face and virtual interaction (Mulliner and Tucker 2015) and to other factors such as students’ reluctance to ask questions and teachers’ inability to respond to different questions at the same time, differences in the effects of the two approaches can be expected.

The aim of this study was to determine the effects of two different types of simulation in combination with computer EF for correct and incorrect student answers in combination with MTF and KCR and simulation in a classic classroom environment. The following hypotheses were identified:

  • H1. There are achievement differences between the groups of students who used the simulation with an already created electrical circuit and the simulation in which students create an electrical circuit from the offered elements (group 1 and group 2).

  • H2. There are achievement differences between the groups of students who used simulation in combination with computer EF for correct and incorrect answers in combination with MTF and KCR feedback (group 2) and simulation in combination with traditional feedback (group 3).

Materials

Simulation with an Already Created Electrical Circuit with Computer EF for Correct and Incorrect Student Answers in Combination with MTF and KCR—Type 1

In accordance with findings suggesting the positive effects of EF on conceptual understanding (Law and Chen 2016) and of MTF (Clariana and Koul 2005), and given the frequency of multiple-choice questions in online learning environments (Butler et al. 2007; Zlatović et al. 2015), these approaches were integrated in this study.

The software contains circuit simulation (Fig. 1), instruction in the form of text and image, and a knowledge check through a multiple-choice test (Fig. 2).

Fig. 1
figure 1

Simulation with an already created electrical circuit (condition I)

Fig. 2
figure 2

Multiple-choice task in conditions I and II

The simulations were created in the LabVIEW v12 software package (http://www.ni.com/download/labview-development-system-2012-sp1/3692/en/) (LabVIEW n.d.), and an interactive website was built in the Java programming language and HTML. The simulations are designed to measure current on the basis of given values of electrical resistance and voltage (in a simple circuit and in series and parallel connections).
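The quantities that the simulations let students measure follow directly from Ohm’s law and the standard rules for combining resistors:

\[
I = \frac{V}{R}, \qquad
R_{\mathrm{series}} = R_1 + R_2, \qquad
\frac{1}{R_{\mathrm{parallel}}} = \frac{1}{R_1} + \frac{1}{R_2}.
\]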

The simulation is implemented on a website that students access through a login page. After creating an account on the site, students access a lesson. The lesson includes a page on Ohm’s law and on series and parallel connections of resistors, for which students have access to simulations that present electrical circuits and enable measurements of electrical current and voltage (in a series connection). Each simulation is supported by a written theoretical basis for each part of the lesson. After going through the simulations, the test for the lesson is opened for the students. If the students do not pass the test, the software directs them to re-access the simulations, with additional explanations and instructions for solving each assignment in the test and similar conceptual tasks, but without showing the correct answers (Fig. 3).

Fig. 3
figure 3

Example of EF feedback (failed test) in conditions I and II

After that, students can access the test again (which does not contain the same tasks as the first attempt). When they pass the test, they will receive information on accurate and inaccurate answers with explanations (Fig. 4).

Fig. 4
figure 4

Example of EF feedback with KCR feedback (passed test) in conditions I and II

The results of one study showed that EF has a more positive impact on solving new tasks than on repeated tasks (Butler et al. 2013).

There are findings indicating that delayed feedback is more suitable for conceptual tasks (Corbett and Anderson 2001). However, some authors found that immediate feedback is more effective (Huang et al. 2015).

Reports are generated for each test and include student data, the percentage of correct answers, the answer given for each task together with the correct answer, and the number of students taking the test. The tests have a dynamic form that enables a random selection of tasks for each retesting attempt.
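The test cycle described above (EF with hints but no answers after a failed attempt, a forced return to the lesson and simulation, retesting with newly drawn tasks, and EF plus KCR after a pass; mirrored in the Type 2 software below) can be summarised in a short sketch. The code is our reconstruction under assumed names (Task, show_ef, run_test_cycle) and an assumed passing threshold; the actual software was a Java/HTML website.

```python
# Minimal sketch of the test/EF/MTF cycle; all names and the passing threshold
# are assumptions, since the actual software was a Java/HTML website.
import random
from dataclasses import dataclass

PASS_THRESHOLD = 0.5  # assumed; the paper does not state the passing score

@dataclass
class Task:
    question: str
    correct: str
    conceptual_hint: str  # instruction for solving similar conceptual tasks

def show_ef(task: Task, kcr: bool, was_correct: bool = False) -> None:
    """Elaborated feedback: always a conceptual hint; the correct answer
    (KCR) is shown only once the test has been passed."""
    print(f"{task.question}: hint -> {task.conceptual_hint}")
    if kcr:
        status = "correct" if was_correct else "incorrect"
        print(f"  your answer was {status}; correct answer: {task.correct}")

def run_test_cycle(task_pool: list[Task], answer_fn) -> float:
    """Repeats the test until passed (MTF), drawing random tasks each time."""
    while True:
        tasks = random.sample(task_pool, k=min(8, len(task_pool)))
        answers = [answer_fn(t) for t in tasks]
        marks = [a == t.correct for a, t in zip(answers, tasks)]
        score = sum(marks) / len(tasks)
        if score >= PASS_THRESHOLD:
            for t, ok in zip(tasks, marks):   # passed: EF combined with KCR
                show_ef(t, kcr=True, was_correct=ok)
            return score
        for t in tasks:                        # failed: EF without answers
            show_ef(t, kcr=False)
        print("Re-access the lesson and simulation, then retake the test.\n")
```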

Simulation in Which Students Create an Electrical Circuit from the Offered Elements in Combination with Computer EF for Correct and Incorrect Student Answers in Combination with MTF and KCR—Type 2

The use of a virtual lab is based on the constructivist-contextual approach to learning, in which individuals need to participate actively in the construction of their knowledge (Dega et al. 2013), and it has a positive impact on students’ attitudes towards this form of learning (Lye et al. 2014). The second type of educational software combines the above-mentioned solution with an existing simulation created by the University of Colorado: the Circuit Construction Kit (DC only) (PhET Interactive Simulations, University of Colorado Boulder, https://phet.colorado.edu/en/simulation/legacy/circuit-construction-kit-dc) (PhET Interactive Simulations n.d.). The simulation is designed so that students independently use elements to create electric circuits, connect resistors and carry out measurements. Application software allowing students to create an electrical circuit has been shown to have positive effects on students’ understanding (Zacharia 2007). PhET simulations are widely used online in teaching and are often the subject of research (Adams et al. 2008; Moore et al. 2014; Perkins et al. 2006; Wieman et al. 2008). The Circuit Construction Kit simulation for assembling electrical circuits has proved to be very effective for students’ conceptual understanding (Wieman et al. 2008). Faour and Ayoubi (2018) compared the effects of learning with a PhET simulation and learning in a traditional laboratory and found that learning with the PhET simulation was significantly more efficient. Similarly, Farrokhnia and Esmailpour (2010) found that PhET simulations are particularly efficient when used in combination with real equipment. Other research has shown that teachers highly value the usefulness of this simulation (Kriek and Stols 2010).

The simulation is integrated with a dynamic web page that also contains a theoretical basis, a test and feedback. As in group 1, students had the opportunity to access the page with lessons and simulation, but in this type of simulation, students first need to create a simple circuit before they can carry out measurements of electrical current and voltage. In this type of simulation, students build the same circuits that have already been created in the simulation used by the first group. After going through the simulation, the test for the lesson is made accessible to the students.

EF is provided after unsuccessful attempts. Before the software allows another attempt at the test, it gives guidance for solving the tasks through EF and returns the student to the lesson page to read the theoretical part again and access the simulation. After the test is passed, a detailed record of the numbers of accurate and inaccurate answers, with explanations for specific tasks, is provided. Tasks in the second attempt are randomly selected and differ from the previous ones. Both versions of the software contained the same tasks.

Simulation in Which Students Create an Electrical Circuit from the Offered Elements in Combination with the Teacher’s Oral Feedback in the Classroom—Type 3

The third type of software includes only the aforementioned PhET simulation (PhET Interactive Simulations, University of Colorado Boulder, https://phet.colorado.edu/en/simulation/legacy/circuit-construction-kit-dc) (Fig. 5) for creating electric circuits without the possibility of providing EF and without elements of formative evaluation. In combination with this simulation, the students received oral feedback and instruction from the teacher after solving their assignments during the lesson.

Fig. 5
figure 5

PhET simulation (https://phet.colorado.edu) used in conditions II and III

As in group 2, students in the third group had the opportunity to access the page with lessons and use the simulation, which required them to create a simple circuit before they could carry out measurements. Afterwards, students had to solve a test for the lesson, as in groups 1 and 2. Because EF was not available in this group, after finishing the test the students had the opportunity to check their solutions by creating circuits in the simulation and measuring parameters, or alternatively they could ask the teacher for feedback.

In this group, the teacher followed the recommendations of Fang and Hsu (2017) listed in the “Different Types of Simulation and Feedback” section of this paper. As suggested by other authors, the first activities included presenting the simulation on screen, with the teacher sharing ideas about the content with the students through the simulation and at the same time explaining its components (Beauchamp and Kennewell 2010).

During the lesson, the teacher answered students’ questions and provided detailed feedback, which was of a higher quality than EF owing to the teacher’s ability to recognise in which part of the task, and why, a problem occurred for particular students. The teacher was, however, subject to the usual limitations, such as the inability to respond to different students’ questions at the same time and students’ reluctance to ask questions. After the group had finished the set of tasks, the teacher provided feedback, gave explanations for every single task and discussed the solutions with the students. The students then had the opportunity to access the page again with an improved theoretical understanding and afterwards solve a new set of tasks.

Tests of Conceptual Understanding

Conceptual understanding, defined as the “understanding of principles governing a domain and the interrelations between units of knowledge in a domain” (Streveler et al. 2008), can have a positive influence on finding new ways of solving problems, identifying precise procedures and finding mistakes in procedures. There are findings showing that the development of conceptual understanding and the development of procedural skills are interdependent and should not be considered separately (Rittle-Johnson et al. 2001). Kollöffel and de Jong (2013) also used procedural tasks in combination with conceptual tasks to measure students’ achievement in this field. Therefore, the test developed for this research also contains open-form tasks that require the application of the adopted concepts to calculate parameters and find relations between the elements of an electrical circuit.

Parallel versions of tests were used in the initial and final testing of the conceptual understanding of Ohm’s law and the connection of resistors. Both versions of the test had eight tasks, and the total number of test points was 70. The first six tasks were multiple-choice questions checking students’ understanding of Ohm’s law and the functioning of series and parallel connections. The seventh and eighth tasks were based upon the recognition of series and parallel circuits and the calculation of current, resistance and total resistance. The first and second tasks were aimed at checking students’ understanding of the relationship between voltage, current and resistance in a simple circuit. The third question required an understanding of the rules about resistance in series and parallel connections. The remaining tasks were based on tasks from conceptual tests by other authors: tasks 4 and 5 (Bayrak et al. 2007), task 6 (Duit and von Rhöneck 1997), and tasks 7 and 8 (Kollöffel and de Jong 2013; Peşman and Eryılmaz 2010).

First, pilot testing was conducted on a sample of 30 students and pairs of tasks were created; a few tasks were then revised. Cronbach’s alpha was 0.69 for the initial test and 0.71 for the final test. The reasons for the relatively low coefficients could be the small number of tasks in the tests (Cronbach 1951), the different forms of tasks and the task difficulty. The most plausible reason for the relatively low internal consistency of the pre-test is the low level of prior knowledge (Maier et al. 2016b). Relatively low values of Cronbach’s alpha are not rare for achievement tests in the natural sciences, and they are considered acceptable if the deviation from the usual threshold is not too large (Jaakkola and Veermans 2015; Jaakkola et al. 2011; Kollöffel and de Jong 2013; Peşman and Eryılmaz 2010; Taber 2017). The value obtained for the final test exceeds the acceptable level of 0.70 (Nunnally 1978), while the value for the initial test is very close to it. For these reasons, we consider the reliability of both tests to be reasonably acceptable.
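For reference, Cronbach’s alpha for a test with k tasks is

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_t^2}\right),
\]

where \(\sigma_i^2\) is the variance of the scores on task i and \(\sigma_t^2\) is the variance of the total scores. For a fixed average inter-task correlation, alpha decreases as k decreases, which is consistent with the small number of tasks (k = 8) cited above as a reason for the modest coefficients.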

Procedures

All participants were initially tested for 30 min prior to the experiment. After that, the students were divided into the experimental groups, each of which used a different type of instruction, as described in the “Materials” section. After receiving instructions on the use of the software, students from all three groups were instructed to access the page with the lesson and followed the teacher’s introductory lecture. The aim of this lecture was to enable students to understand the concepts of current, voltage and resistance, as well as the relationships between them in a simple electrical circuit. The instruction included an explanation of Ohm’s law and of the role of different elements in an electrical circuit (resistors, bulbs, battery, ammeter and voltmeter). Examples of electrical circuits with series and parallel connections (with two resistors and bulbs) were given, with an explanation of the behaviour of the current, voltage and resistance in them. The teacher also provided a detailed explanation of how values in electrical circuits change depending on changes in the resistance or voltage parameters.

The first group of students used simulations with previously created electrical circuits. After the lesson, the students accessed the test. If a student did not pass the test, the software showed EF for each task and directed the student to re-access the lesson and simulation, followed by retesting with a random selection of new tasks. In the simulation, the students were able to adjust the values of voltage and resistance. After passing the test, the students received EF (with hints for solving conceptual tasks) in combination with KCR feedback.

The second group used a different type of simulation, based on the construction of electrical circuits, in contrast to the simulation with already created circuits used by the first group. After using the simulation, students accessed the test covering the lesson. After failed attempts, the software showed EF for each task and directed the student to re-access the simulation, followed by a test with a random selection of new tasks. In both type 1 and type 2, after passing the test, the students were given EF (with hints for solving conceptual tasks), again in combination with KCR.

The third group used the simulation described in the “Materials” section, with traditional feedback on the tasks provided in class. The tasks required the creation of electric circuits, the adjustment of voltage and resistance, the setting of hypotheses, computational verification of the hypotheses and then checking of the hypotheses with the simulation.

In all three learning situations, the researcher was constantly present in order to provide assistance in using the software and to give an oral lesson in combination with the different types of software for learning Ohm’s law and series and parallel connections of resistors. After the experimental part, the students took the final test.

The only difference between group 1 and group 2 was the simulation used. The first group used a simulation with a ready-made electrical circuit (a simple electrical circuit, resistors in series and in parallel), whereas the second group used a simulation that required the creation of the same electrical circuits used by the first group (the PhET simulation). Both groups received computer-based feedback. After an unsuccessful attempt to solve the test tasks, the software showed students elaborated feedback in the form of guidelines for solving each task, but not the correct answers. Once the students passed the test, the software offered the elaborated feedback again, together with information indicating whether they had answered each task correctly (KCR feedback). Both versions of the software shared the following characteristic: if the students failed the test, they had to access the simulation again in order to retake it. In addition, both versions had a page with the theoretical basis. The tests consisted of tasks of various levels of difficulty: some required recognition of the theoretical basis and formulas, some required the application of formulas and rules applying to parallel circuits, and some required an understanding of the connections and relationships between different elements of an electrical circuit in relation to different connections of resistors and changes of specific values.

The only difference between group 2 and group 3 was that the students from the second group used computer feedback, whilst the students from the third group received feedback from the teacher. Because group 3 did not have EF, after solving the tasks the students from this group were encouraged to check their solutions by creating a suitable electrical circuit in the simulation and carrying out measurements. They could also ask the teacher for feedback. For example, if a student finished the tasks before the others, the teacher provided feedback to that student of a quality similar to the EF presented in Figs. 3 and 4, depending on the correctness of the student’s answer. If the student’s answer was not correct, the teacher provided additional explanations and instructions without showing the correct answer, and instructed the student to access the simulation again in order to try to obtain the correct answer. Thus, students were encouraged to think deeply, while prompts from the teacher supported students’ developing understanding of the concepts (Fang and Hsu 2017).

When all the students had finished the tasks, the teacher provided detailed feedback on every task on the whiteboard followed by oral explanations.

Results

A one-way analysis of covariance (ANCOVA) was adopted for the analyses, in which the learning scenario (the three groups described above) was the independent variable, post-test scores were the dependent variable and pre-test scores were the covariate. Table 1 shows the statistics for the pre-test and post-test scores for all three groups.

Table 1 Means and standard error values of pre-test and post-test scores in the three groups

A test of the homogeneity of regression slopes revealed that the interaction between the independent variable and the covariate was not significant (F(2,158) = 1.2, p = .304). This confirms the assumption of homogeneity of the regression slopes.
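The two-step analysis reported here (slope-homogeneity check, then ANCOVA with the pre-test as covariate) can be reproduced along the following lines; the data file and column names (scores.csv, pretest, posttest, group) are our assumptions for illustration, not the authors’.

```python
# Sketch of the reported two-step analysis; the file and column names
# (scores.csv, pretest, posttest, group) are assumptions for illustration.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("scores.csv")  # one row per student

# Step 1: homogeneity of regression slopes -- the group x pre-test
# interaction should be non-significant (reported: F(2,158) = 1.2, p = .304).
slopes = smf.ols("posttest ~ pretest * C(group)", data=df).fit()
print(sm.stats.anova_lm(slopes, typ=2))

# Step 2: ANCOVA proper -- group effect on the post-test controlling for
# the pre-test (reported: F(2,160) = 12.176, p < .05).
ancova = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))
```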

After controlling for the influence of the covariate (pre-test) on the dependent variable (post-test), the ANCOVA showed significant differences between the three groups of pupils (F(2,160) = 12.176, p < 0.05) (Table 2). This shows that the post-test scores varied depending on the experimental treatment. A strong relationship was established between the results of testing the conceptual understanding of Ohm’s law and connections of resistors before and after the intervention, with a partial eta squared value of 0.407. The effect size obtained for the group factor was Cohen’s f = 0.39, which represents a moderate effect and is very near the boundary of a large effect (Cohen 1988).
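Assuming the reported f value refers to the group factor, it is consistent with the ANCOVA F statistic via the standard conversion from partial eta squared:

\[
\eta_p^2 = \frac{\mathrm{df}_{\mathrm{effect}} \cdot F}{\mathrm{df}_{\mathrm{effect}} \cdot F + \mathrm{df}_{\mathrm{error}}}
= \frac{2 \times 12.176}{2 \times 12.176 + 160} \approx 0.13,
\qquad
f = \sqrt{\frac{\eta_p^2}{1 - \eta_p^2}} \approx 0.39 .
\]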

Table 2 Summary data from ANCOVA

A post hoc analysis was conducted to obtain detailed results, which are shown in Table 3.

Table 3 Summary data from post hoc test of free learning environment

The adjusted mean difference between group 2 and group 3 was 10.432 (p < 0.05), and between group 2 and group 1 it was 12.392 (p < 0.05). The mean difference between group 1 and group 3 was 1.960 (p > 0.05), indicating no significant difference between these two groups.

Discussion

The results of this study show that the second group achieved significantly better results in the post-test on the conceptual understanding of Ohm’s law and connections of resistors than the other groups. This group used the software containing an electrical circuit simulation requiring the construction of electric circuits from existing elements and computer EF in combination with KCR (after passing the test) and MTF. The first group used the simulation with an already created electrical circuit and achieved a significantly lower score in the post-test than group 2, which indicates that hypothesis H1 is confirmed. This finding is in accordance with previous studies, which showed that simulations for assembling electrical circuits have a very positive effect on students’ conceptual understanding (Faour and Ayoubi 2018; Wieman et al. 2008; Zacharia 2007). Students are able to retain a much larger percentage of information if they interact with it than if they only hear and see it (Wolfgram 1994). Students from group 1 could interact with the simulation only by changing parameters for measurements; as far as the construction of an electrical circuit is concerned, they could only see an already created circuit. In contrast, students from group 2 had the opportunity to build an electrical circuit, which gave them a much higher level of interaction, a chance to engage more actively in the learning task and an opportunity to learn first-hand about the relationships between the elements of electrical circuits.

A higher level of interactivity in learning software enhances the learning process and results in higher achievement in problem-solving tests (Evans and Gibbons 2007). On the other hand, according to cognitive load theory, the specificity of the content and the high level of interaction between elements and concepts can make cognitive load high (Sweller 2010); the way in which software is used can also increase cognitive load. The students in the second group (like students in the other groups) were given an explanation of each individual element of an electrical circuit and the relationships between them. After that, the teacher asked the students to create an electrical circuit step by step and element by element, as recommended by cognitive load theory (Van Merrienboer and Sweller 2005). In this way, the complexity of the problems was reduced and the choice of elements was more clearly specified. Studies have shown that an initial presentation of isolated concepts can improve the process of integrating different elements into a whole in later phases of learning (Lee et al. 2006). Although the same instruction was applied to both groups, one of the reasons for the lower achievement of the first group may be the fact that the students in this group did not have the opportunity to use individual elements to create electrical circuits; they therefore had a lower level of interaction and fewer opportunities to engage actively in the task.

In all groups of students, the teacher gave instructions on using the software and the students always had the opportunity to ask their teacher for help in using the software. The use of software therefore did not cause a significant increase in cognitive load. We can conclude that the potential increase in cognitive load in the second group, which could have been caused by a higher degree of freedom and the use of software with more options, did not significantly reduce the benefits of the higher level of interaction, greater engagement in learning tasks and the opportunity to learn first-hand by creating an electrical circuit from the offered elements.

The third group used a simulation requiring the construction of electrical circuits but without the possibility of providing EF and without elements of formative evaluation, and achieved a significantly lower score in the post-test than group 2, which indicates that hypothesis H2 is confirmed. This finding is in line with previous studies, which proved the positive effect of EF (Heckler and Mikula 2016; Law and Chen 2016; Meyer et al. 2010), MTF (Attali 2015) and software that contained the elements of formative assessment in the field of physics (Lai and Chen 2010).

The difference between groups 2 and 3 could be explained by the fact that computer feedback is more useful for conceptual understanding than traditional feedback. Of course, the medium used (computer or teacher) does not in itself seem to make the main difference in the effects of feedback. If the teacher could devote all their attention to only one student, then the teacher’s feedback would probably have an advantage over EF. However, as the number of students in the classroom increases, the teacher’s ability to provide feedback to all students is reduced, owing to the inability to respond to different questions at the same time. This supports the notion that traditional feedback depends to a great extent on teachers’ skills, available time and workload, but also on students’ interest in discussing the subject with the teacher (Lee 2008) or their reluctance to ask questions. Moreover, in different educational contexts, traditional feedback could have different influences. Other researchers suggest that students prefer face-to-face feedback (Mulliner and Tucker 2015). Authors have also found that no form of EF affected students’ text-comprehension skills in an electronic environment, while feedback provided by the researcher during instruction had the most positive effect (Golke et al. 2015).

The results of another study suggest that for inquiry-based teaching, detailed explanations that were given by a teacher, and indirectly suggested correct answers, had a more positive effect on conceptual understanding than lessons with fewer instructions and feedback given directly about tasks (Fang and Hsu 2017).

Although the teacher’s feedback is regarded as an important aspect of teaching physics, findings regarding its efficiency are inconsistent (Nicaise et al. 2006). This points to the limitations of teachers’ feedback and to other factors that can influence its efficiency, including individual differences between students, willingness or reluctance to ask questions and seek additional information, and the ability to understand feedback and use it appropriately. Certain authors claim that students very often lack such skills (Mulliner and Tucker 2015).

The results of this study show that software with a simulation in which students create an electrical circuit from the offered elements, combined with computer EF for correct and incorrect student answers together with MTF and KCR, is more effective than the other two types of software used for comparison.

Conclusion and Limitations

The results of this study indicate that software with a simulation in which students create an electrical circuit from the offered elements, combined with computer EF for correct and incorrect student answers together with MTF and KCR, is more effective than software with an already created electrical circuit that enables measurement. The results also indicate that computer feedback is more effective than traditional feedback (from the teacher) when their usefulness is compared in combination with a simulation that requires the construction of an electrical circuit. It would therefore be useful to examine the effect of different forms of feedback in class with students of different ages and in different teaching fields. While the findings of the study are in line with expectations, it would be useful to determine how feedback affects students’ achievement, how much attention they pay to feedback, and what their skills are in processing and using it. Based on the results of this study, we can recommend using simulation that requires the creation of an electrical circuit in combination with EF, MTF and KCR feedback.

This study has some limitations. Owing to the lack of a fourth group in the experimental design, in which the simulation with already created circuits would have been used in combination with teacher feedback, we were unable to analyse interactions between the type of feedback and the type of electrical circuit simulation used. Future research should therefore explore these interactions. Although it was presumed that the random assignment of students would provide sufficient equivalence among the experimental groups, some confounding or control variables, such as interest in the subject and intelligence, were not measured, which may have influenced the results of this study to some extent.