The present chapter will analyse the scientific literature reporting experiences in the field of educational robotics (ER). This analysis aims to provide a broad classification of experiences reporting the use of a robot for education, a classification of the available robots used in the ER context and a classification of existing evaluation methods for the assessment of ER activities. Starting from the distinction between robotics in education (RiE) and ER, this chapter will contribute to the discussion of what ER means and consists of. In addition, the proposed classifications aim to provide people working in the field of ER with a reference, by stating clearly what robotics can do for education and by providing a benchmark against which one can compare the activities carried out in the educational context. This comparison could improve teachers’ and educators’ understanding of how to bring robotics into the classroom. Moreover, all stakeholders could rethink existing experiences and work together to improve and replicate them.

Previous literature in the field of ER searched through databases to answer specific questions like “What topics are taught through robotics in schools?” (Alimisis 2013; Benitti 2012; Jung and Won 2018; Mubin et al. 2013), “What kind of skills does an ER activity develop?” (Jung and Won 2018; Miller and Nourbakhsh 2016), “How is student learning evaluated?” (Alimisis 2013; Benitti 2012; Jung and Won 2018; Miller and Nourbakhsh 2016; Toh et al. 2016), “What kind of robotic tools are employed in an ER activity?” (Alimisis 2013; Miller and Nourbakhsh 2016; Mubin et al. 2013), “Which pedagogical theories support the implementation of ER activities?” (Jung and Won 2018; Mubin et al. 2013) and “Is robotics an effective tool for teaching and developing skills?” (Benitti 2012; Jung and Won 2018; Toh et al. 2016). Unfortunately, there is little agreement on what the essential features of ER are. This means that even if researchers are trying to answer the same questions, they are working on different sets of examples taken from the literature. For example, Benitti (2012) and Toh et al. (2016) excluded from their analyses papers reporting activities that use robotics as a subject in primary and secondary education. By contrast, Jung and Won (2018) reviewed existing literature describing trends in two areas: robotics to teach robotics itself and robotics to teach other subjects. Moreover, Jung and Won (2018) analysed literature on robotics education using robotics kits for young children, excluding social robots, whereas Mubin et al. (2013) included them. Differences in how activities are carried out and researched in this field affect the results and their comparability.

In each of the following sections, the authors provide a classification for one aspect that characterises an ER activity. First of all, Section 1 states the difference between RiE and ER and provides a general classification of RiE and ER activities. Section 2 presents an overview of the robotic tools used to carry out activities and a classification of these tools based on four main features. Section 3 discusses a classification for the evaluation of ER activities and presents the authors’ first steps and considerations towards a novel real-time technique for the assessment of ER activities. Results from the proposed classifications can be found in the Appendix section.

Section 1: A Classification of Experiences Carried Out in Education Using Robots

Even though some literature uses “robotics in education” and “educational robotics” as synonyms (Benitti 2012; Eguchi 2017), the authors believe that a distinction should be made between the two labels. Robotics in education (RiE) is a broader term referring to what robotics can do for people in education. For example, it can help impaired students to overcome limitations, or it can help teachers to gain attention or to deliver content to pupils. Educational robotics (ER) refers to a specific field at the intersection of different kinds of expertise, like robotics, pedagogy and psychology. ER builds on the work of Seymour Papert, Lev Vygotsky and Jean Piaget (Ackermann 2001; Mevarech and Kramarski 1993; Papert 1980; Vygotsky 1968) not just to bring robotics into education, but to create meaningful experiences with robotics from an early age (Scaradozzi et al. in press, 2015). ER is made of robots that allow construction/deconstruction and programming activities, teachers/experts who facilitate the activity and methodologies that enable students to explore the subject, the environment, the content of the activity and their personal skills and knowledge. These key elements of ER make it an integrated approach to STEM (Brophy et al. 2008) and an interdisciplinary and transdisciplinary subject (Eguchi 2014). The authors identified four different features to describe a RiE experience or project: the learning environment, the impact on students’ school curriculum, the integration of the robotic tool in the activity and the way evaluation is carried out. Regarding how the robotic tool is integrated into the activity, we can distinguish ER as a subset of RiE (Fig. 3.1).

Fig. 3.1 The proposed classification for robotics in education (RiE): a hierarchy with two learning environments, two types of curriculum impact, four ways of integrating the robotic tool and five methods of evaluating activities

In the next subsections, these four categories are described. Table 3.1, reported in the Appendix section, shows some examples of experiences using robotics in education and analyses them through the four main categories proposed by the authors.

Learning Environment: Formal or Non-formal Projects

Students can learn in a variety of settings (e.g. at school, at home, in an outdoor environment). Each setting is characterised by its physical location, learning context and culture. Usually, each setting holds specific rules and an ethos that define relationships, behaviours and learning activities. It is the authors’ opinion that it is important to specify whether the learning environment of a RiE activity is formal or non-formal. Formal education is usually delivered by trained teachers in publicly recognised organisations providing structured activities and evaluation. Non-formal education can complement formal education, but it may sit apart from the pathway of the national education system, often consisting of shorter activities. Usually, non-formal activities lead to no qualification, but they can gain recognition when they develop competences otherwise neglected. A formal environment is where formal education usually takes place (e.g. schools), and a non-formal environment is where non-formal education usually happens (e.g. private houses, company headquarters, museums).

Teaching methodologies, spaces, furniture and many other variables influence the outcome of a RiE or an ER activity, but they are out of scope in this part of the classification, which intends to make a distinction at a broader level.

School Curriculum Impact: Curricular or Non-curricular Projects

The way activities are integrated in education strongly impacts their design and their expected outcomes. Activities carefully designed to fit curriculum needs, carried out regularly in the classroom to support students’ learning of a concept and whose evaluation is recognised in the school’s final evaluation of students, are curricular activities. Occasional activities organised to support the teaching of particular concepts, both inside and outside the classroom, and that lead to no final formal recognition are non-curricular activities. There may be activities performed at school (formal learning environment) that do not count towards the final evaluation of the student (non-curricular activity). On the other hand, there may be activities performed outside the classroom environment (non-formal learning environment) that are recognised in the final evaluation of the student provided by the school (curricular activity).

Integration of Robotic Tools

Robotic tools used in these activities should be distinguished according to the purpose they serve in the educational context. First, they can reduce the impairments of students with physical disabilities. These tools are usually medical devices that help people in their activities of daily living and compensate for a lost function. These robots are assistive robots, and they are not intentionally produced to meet the needs of education, but the needs of impaired people.

Second, some robots can help people with a social impairment (e.g. autistic spectrum disorder). This kind of robot can be defined as a socially assistive robot, because it is capable of assisting users through social rather than physical interaction (Matarić and Scassellati 2016). Socially assistive robots “attempt to provide the appropriate emotional, cognitive, and social cues to encourage development, learning, or therapy for an individual” (Matarić and Scassellati 2016, p. 1974).

Third, some robots can be companions to students’ learning or to teachers while teaching (Belpaeme et al. 2018). These robots are called social robots, because they are designed to interact with people in a natural, interpersonal manner to accomplish a variety of tasks, including learning (Breazeal et al. 2016).

Fourth, robots can be a tool to study robotics and STEAM subjects and to develop transversal skills. ER projects use this kind of robot. Generally, these robots are presented to students as disassembled kits so as to create meaningful interdisciplinary pathways, letting students be free to build original artefacts. To build an artefact with fully functioning actuators and sensors, students need to master the fundamental concepts of robotics. Only when these concepts are reworked and absorbed can students feel confident in reusing that kind of knowledge in another context. Thus, one of the main features of ER is a basic understanding of robotics fundamentals.

Evaluation: Qualitative, Quantitative or Mixed Methods

Evaluation in RiE activities can be carried out using a qualitative method, a quantitative method or a mixed-methods approach. Qualitative methods in education pertain both to research and to everyday practice. Teachers and researchers can analyse essays, focus groups, scenarios, projects, case studies, artefacts, personal experiences, portfolios, role play or simulation and many other outputs of the activities. These are a deep and rich source of information on students’ learning, but sometimes impractical in a crowded classroom and always vulnerable to personal biases or external influences. By contrast, quantitative methods are easier to replicate and administer. They try to summarise the outcome of an activity with numbers. Common tools in quantitative methods are questionnaires, tests and rubrics. However, experiments and empirical methods should be applied to prove these methods are valid, reliable and generalisable. Moreover, a quantitative evaluation in education is often deemed poor and reductive. Lately, researchers in education have been overcoming the historical distinction between qualitative and quantitative methods to exploit the beneficial aspects that both provide. They have been proposing the mixed-methods approach as an appropriate research method to address problems in complex environments, like education. The choice of a mixed-methods design needs to be well motivated, because it implies considerable work: both quantitative and qualitative data must be collected. In recent years, some novel real-time techniques have been introduced to monitor students during their activities. Technology and artificial intelligence seem promising in providing feedback on students’ learning and in integrating both qualitative and quantitative methods of assessment. Moreover, they could be deployed into the classroom seamlessly and give responses on the activity to support the assessment.

Section 2: A Classification of ER Tools

The way the robotic tool is integrated into the experience can make the difference between a general RiE experience and an ER activity, but even among ER tools we can make a distinction. In fact, there are many robots and robotic kits available on the market, but not all of these products are meant to be “educational”. Reviewing the ER tools available on the market, the authors included those robots or robotic kits that met at least one of two criteria: tools designed purposefully for education, or tools that have been used in educational contexts, with activities reported in a scientific paper. Table 3.2 reports the analysis of those tools according to four sub-categories:

  1. Age (kindergarten/primary school/secondary school): The age group for which the kit is recommended. The range can be large; indeed, by varying the educational activity, it is sometimes possible to use the kit with different age groups.

  2. Programming language (text-based/block-based/unplugged): There are three kinds of programming languages associated with the kits. The most commonly used are block-based environments (Scratch or similar), where students can create software sequences using blocks, without writing code or the possibility of making syntax errors (the so-called visual programming technique). Among tools more suitable for secondary school students, the trend is to propose text-based programming languages as an alternative to block-based environments. The third option, the unplugged way to program a robot, is very common for kindergarten tools: there is no need for a screen (tablet or computer) to create the sequence, and students can design different behaviours for their robot using physical blocks (or physical buttons).

  3. Assembly feature (“ready-to-use” robot/“to-build” kit): With some of the commercial kits, students have the possibility of building the robot, interacting with mechanical and electronic parts (wheels, gears, sensors, motors, etc.). Other solutions are “ready to use”: opening the box, pupils find an already assembled robot, so they can only program the behaviour of the system, without the chance of modifying the robot’s appearance.

  4. Robot’s environment (earth/water/air): Educational robots usually move on the floor or on the school desk (earth robots), but in recent years, some companies and research institutes have also developed educational drones (air) and marine vehicles (water).

In addition to the commercial kits presented in Table 3.2, there are other tools purposefully designed by researchers to implement ER activities (Bellas et al. 2018; Ferrarelli et al. 2018; Junior et al. 2013; Naya et al. 2017).

Section 3: A Classification of the Assessment of ER Activities

Section 1 introduced a distinction in how evaluation is carried out, based on the way observation is designed, conducted and presented, resulting in three categories: qualitative methods, quantitative methods and mixed methods. This is not the only way to characterise evaluation and research methods. Considering its target, evaluation can focus on performance, attitude or behaviour. Performance measurement can be a test whose aim is to evaluate the knowledge acquired on the subject and/or the ability to use it to perform a task (Blikstein et al. 2017; Di Lieto et al. 2017; Screpanti et al. 2018b), or it can be based on neuropsychological measures (Di Lieto et al. 2017). Complex task evaluation can also be related to the development of skills, not only knowledge. Moreover, written tests more often reflect theoretical knowledge, while practical exercises or tests demonstrate applied skills. Attitudes and skills are more often measured through surveys and questionnaires (Atmatzidou and Demetriadis 2016; Cesaretti et al. 2017; Cross et al. 2017; Di Lieto et al. 2017; Goldman et al. 2004; Lindh and Holgersson 2007; Screpanti et al. 2018a; Weinberg et al. 2007), which are easy to administer and useful for triangulation. Measures of students’ behaviours in ER activities can help the design of the learning environment as well as deepen understanding of how students learn (Kucuk and Sisman 2017).

Another distinctive feature of evaluation regards when to measure. Measurements (or evaluations of a student’s state) can be performed before the activity, iteratively during the activity or after the activity. In addition, stating the purpose of evaluation can help researchers and teachers clarify how and when to perform the assessment. Summative assessment (or assessment of learning) is often related to the outcome of the activity and is often regarded as the post-activity evaluation that relates to benchmarks. Formative assessment (or assessment for learning) often takes place before the activity, but it can also be iterative, occurring periodically throughout the ER activity. The purpose of formative assessment is to adjust teaching and learning activities to improve students’ attainment. More recently, the field of assessment as learning brought the idea that formative assessment, feedback and metacognition should go together (Dann 2014; Hattie and Timperley 2007).

At the end of an ER activity, it would be interesting to investigate the process that led to the resolution of a specific problem or to the design of a software sequence. During an ER activity, students experiment and modify their sequence of instructions or the robot’s hardware structure to obtain a specific behaviour. They usually work in teams in a continuous process of software and/or hardware improvement, as specified by the TMI model (Martinez and Stager 2013). It would be very interesting for an educator to have the chance to observe and analyse this process, but it is not realistic to have one teacher per group keeping track of the students’ development inside the classroom. New experimentations in constructionist research have opened the way to new insights into students’ learning processes. Evaluation can be performed using “offline” or “online” methods. Offline methods are those assessments that gather information one or more times during the activity; the information is then usually processed later by a human evaluator. Online methods are those assessments that “continuously” gather information on students’ activity (e.g. cameras recording students’ behaviour, sensors collecting physiological parameters, log systems recording students’ interactions), aiming to provide an analysis of students’ learning while they are still exploring the activity. Online methods are usually automated and rely on educational data mining (EDM) and learning analytics (LA). The first applications of these technologies tried to extrapolate information from data gathered in structured online learning environments (Baker et al. 2004; Beck and Woolf 2000; Berland et al. 2014; Merceron and Yacef 2004): under such conditions, it was easier to deduce relations and recognise patterns in the data. Recent studies (Asif et al. 2017; Ornelas and Ordonez 2017) tried to predict students’ success using machine learning algorithms on data gathered from closed environments. Blikstein et al. (2014) collected code snapshots of computer programs to investigate and identify possible states that model students’ learning processes and trajectories in open-ended constructionist activities. Berland et al. (2013), extending the previous work by Turkle and Papert (1992), registered students’ programming actions and used clustering to study the different pathways of novice programmers. This led to the identification of three general patterns: tinkering, exploring and refining. To evaluate different aspects of constructionist activities, other works relied on external sensors (cameras, microphones, physiological sensors) and automated techniques, like text analysis, speech analysis and handwriting analysis (Blikstein and Worsley 2016). A key for future developments and experimentations will probably be the availability and cost of implementing such technological solutions for classroom assessment. External sensors may be more expensive, whereas embedded software solutions and machine learning algorithms could be effective and reliable in extracting evidence of students’ learning processes and helping teachers provide personalised feedback to students. In any case, as stated by Berland et al. (2014), EDM and LA in constructionist environments aim at generating complementary data to assist teachers’ deep qualitative analysis with quantitative methods.
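As a concrete illustration of the kind of clustering step used in these studies, the following minimal sketch groups teams by simple features extracted from their programming logs. The chosen features and the use of k-means are illustrative assumptions, not the actual method of Berland et al. (2013):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-team features extracted from programming logs:
# [number of tests, std of the tuned parameter, mean absolute change
# of the parameter between consecutive tests]
features = np.array([
    [9.0, 0.05, 0.03],   # near-constant parameter across tests
    [8.0, 1.50, 0.64],   # one large jump, then stable
    [15.0, 2.20, 1.70],  # wide back-and-forth search
])

X = StandardScaler().fit_transform(features)  # put features on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # teams sharing a label show a similar exploration pattern
```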

A first experimentation using data mining in the field of ER was conducted by Jormanainen and Sutinen (2012). They adopted the Lego Mindstorms RCX and collected data from students’ activities with the main functions of a new graphical programming environment that they designed. They created an open monitoring environment (OME) for the teachers involved, obtaining promising results with a decision tree algorithm (the J48 implementation) for classifying students’ progress in the ER setting. However, this experimentation probably had some weaknesses: the kit chosen for the study was anachronistic, since by 2012 the newer Lego Mindstorms model (the NXT version) had already been on the market since 2006; only 12 students and 4 teachers from primary school were involved, too few participants to validate the method; and the newly developed graphical programming environment lacked a block-based approach, which may not be friendly for primary school pupils.
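To give a flavour of this kind of classification, the minimal sketch below trains a decision tree on hypothetical per-group log features. The features, the labels and the use of scikit-learn (in place of the Weka J48 implementation used in the study) are illustrative assumptions:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-group snapshots: [tests run so far, blocks in the
# current program, minutes since the last successful run]
X = [[2, 3, 1], [1, 2, 12], [6, 9, 2], [0, 1, 15], [4, 7, 3]]
y = ["on_track", "stuck", "on_track", "stuck", "on_track"]

# A shallow tree keeps the learned rules inspectable by the teacher
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[1, 2, 10]]))  # e.g. ['stuck'] -> alert the teacher
```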

First Steps Towards Educational Data Mining with Lego Mindstorms EV3

The first steps in the application of educational data mining to the Lego Mindstorms EV3 were made by the authors in an Italian upper secondary school, Liceo Volta Fellini in Riccione (a formal learning environment), during an alternating school-work course (a non-curricular activity). Thanks to a software modification, it was possible to track all the sequences of blocks made by the students in the Lego Mindstorms EV3 software environment. Three classes were involved in the project. Participants were divided into teams of 3–4 students who worked together to design software or hardware solutions to a set of tasks. The first challenge faced by the learners, after the robot’s construction, was programming the robot to cover a given distance (1 m) as precisely as possible. In solving the task, students had to respect a few constraints:

  • Fifteen minutes were available to prepare the software solution, followed by the “final” competition between the teams.

  • During the available time, the teams could test the solution as many times as they wanted.

  • They could not use measuring instruments (set squares, rulers, etc.) to measure the distance covered by the robot on the floor during the test time; they could use the instruments only to determine some of the robot’s parameters (e.g. the radius of the wheel).

Some students realised that there were some cables with a known length inside the Lego Mindstorms box, and they were allowed to use them as a reference object for the trials.

This task was tricky because the Lego software has no block in which the designer can set a specific distance to cover. The trainer presented only one block for the challenge: the “move steering” function, where students can choose among three modes to control the motors (“on for seconds”, “on for degrees” or “on for rotations”) and can set the steering of the robot and the motors’ power.

Student teams mainly focused on tuning the value of the chosen mode (rotations, seconds or degrees): some groups calculated the wheel’s circumference; other groups tried to measure the robot’s speed (in order to calculate the number of seconds to set); other groups adopted a more practical, “trial and error” approach, for example, using the cable inside the box as a reference measurement. These different approaches to the solution of a given task seem to fit the two different styles of problem-solving proposed by Turkle and Papert (1992). They suggested that students could achieve learning objectives while taking different pathways and strategies: the “bricoleur scientist” prefers “a negotiational approach and concrete forms of reasoning”, while the “planner scientist” prefers “an abstract thinking and systematic planning”.
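As a worked example of the “planner” calculation, the sketch below derives the rotation value for the 1 m challenge; the 5.6 cm wheel diameter of the standard Lego EV3 tyre is an assumption about the robots the teams actually built:

```python
import math

# Planner-style calculation for the 1 m challenge. The 5.6 cm wheel
# diameter is that of the standard Lego EV3 tyre and is an assumption
# about the robots the teams actually built.
distance_cm = 100.0
wheel_diameter_cm = 5.6
circumference_cm = math.pi * wheel_diameter_cm  # ~17.59 cm per rotation
rotations = distance_cm / circumference_cm      # ~5.68 rotations
print(f"set 'on for rotations' to about {rotations:.2f}")
```

The values the teams converged on (about 5.78 and 5.5 rotations, as discussed below) are close to this estimate; different wheels, gearing or slippage could account for the small deviations.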

Figure 3.2 shows an example of the log recorded by the modified Lego Mindstorms EV3 used during the challenge. It is interesting to consider the rotations/seconds/degrees parameter set by the students during the trial time and to analyse the behaviour of three groups involved in the robotics course, whose strategies seem to have very distinct features.

Fig. 3.2 A log example generated by the modified Lego Mindstorms EV3, listing the programmed blocks and their parameters (move steering: on for rotations, speed, steering, motor ports; sound: play tone, frequency, volume, play type, duration)
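Since the exact line format of the modified environment is not reported here, the following sketch only illustrates how such a log could be reduced to the per-test parameter sequences analysed below; the log lines are hypothetical:

```python
import re

# Two hypothetical log lines from the modified EV3 environment; the
# actual line format of the tool is an assumption for illustration.
log = """\
move_steering mode=on_for_rotations rotations=5.78 speed=50 steering=0
move_steering mode=on_for_rotations rotations=5.80 speed=50 steering=0
"""

# Reduce the log to the sequence of rotation values, one per tested
# program version, as plotted in Figs. 3.3, 3.4 and 3.5
rotations = [float(m.group(1))
             for m in re.finditer(r"rotations=([\d.]+)", log)]
print(rotations)  # [5.78, 5.8]
```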

Figure 3.3 shows the sequence of the rotation parameter (the number of rotations set for the motors) chosen by group 1: 9 tests were conducted by the team (the last one being the final competition), all of them with a rotation parameter very close to 5.78 rotations. In this case, planning seems to be the prevalent approach adopted by the group, probably with an initial mathematical calculation followed by verification tests to check the robot’s behaviour. This team obtained a 0.5 cm error from the target distance.

Fig. 3.3 Rotations graph for team 1: the rotation parameter set by the students plotted against the test number, forming a nearly horizontal line

Group 2 performed 8 tests (the last one being the final competition): the first with a value equal to 1 rotation and the following with values very close to 5.5 rotations (Fig. 3.4). Planning seems to be the prevalent approach adopted by this group as well. They probably made a first check of the robot’s behaviour by setting 1 rotation for the motors and then inserted the value 5.5 as the rotation parameter. It is likely that they made a calculation (or a proportion) to reach the solution of the given task. This team obtained a 2 cm error from the target distance.

Fig. 3.4 Rotations graph for team 2: the rotation parameter set by the students plotted against the test number, forming an inverted-L-shaped curve

Figure 3.5 shows the sequence of values chosen by group 3 for the rotation parameter. They performed 15 tests (the last one being the final competition), and their strategy is represented by a broken line ranging from a minimum value of 1 rotation to a maximum value of 8. In this case, tinkering seems to be the prevalent approach adopted by the group, with a “trial and error” pathway more pronounced than in the other teams. This team obtained a 1.5 cm error from the target distance.

Fig. 3.5 Rotations graph for team 3: the rotation parameter set by the students plotted against the test number, rising, holding and falling, with peaks of 8 rotations
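A simple way to turn such sequences into an automatic planning/tinkering indicator is sketched below; the median-change feature and the 0.5-rotation threshold are illustrative assumptions, not validated indexes:

```python
# Heuristic sketch: label a team's test sequence as planning- or
# tinkering-like from the typical change of the rotation parameter
# between consecutive tests. The feature and the 0.5 threshold are
# illustrative assumptions, not validated indexes.
def exploration_style(rotations: list[float], threshold: float = 0.5) -> str:
    diffs = sorted(abs(b - a) for a, b in zip(rotations, rotations[1:]))
    median_change = diffs[len(diffs) // 2]
    return "tinkering" if median_change > threshold else "planning"

team1 = [5.78] * 9                                  # near-constant values
team2 = [1, 5.5, 5.5, 5.5, 5.5, 5.5, 5.5, 5.5]      # one jump, then stable
team3 = [1, 3, 6, 8, 8, 5, 3, 2, 4, 6, 7, 5, 4, 3, 5.6]  # broad search
for team in (team1, team2, team3):
    print(exploration_style(team))  # planning, planning, tinkering
```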

This preliminary analysis shows how such a tool can provide teachers with complementary information on students’ learning. Furthermore, such an automated tool, assessing the progress of the activity of each group (as an online method of evaluation), can provide feedback to the teacher, thus allowing real-time evaluation. Experts are cooperating to identify meaningful indexes of students’ performance and style of learning. More complex tasks, and therefore logs, are under analysis to unravel the knot of different skills and knowledge applied in an open-ended environment. Moreover, different machine learning algorithms are being compared to extract knowledge from the raw data.

Discussions and Conclusion

Results from the authors’ classification of RiE experiences are shown in Table 3.1. Information on the learning environment or on school curriculum impact is often missing (the word “Unknown” in the table means that the authors did not find these specifications). This can be related to the scope of some activities within the RiE field, namely social robotics, socially assistive robotics and assistive robotics, where studies are mainly focused on interaction or on physical or cognitive rehabilitation, not on education. But even in the ER subfield, it is hard to retrieve information on school curriculum impact. Information about the impact of an ER research project on the school curriculum is fundamental to the process of integrating ER at school and to the design of activities, because it influences the learning outcomes and their evaluation. Moreover, clear consideration of the curriculum impact could make it easier for teachers and educators to replicate the project in other schools or institutions, spreading academic results into daily educational practice.

It is also important for ER designers to consider the appropriate tools, analysing the four features proposed in Section 2: age group, programming language, assembly feature and robot’s environment. Table 3.2 shows that the market offers a variety of robotic tools to choose from. For a deep understanding of the core concepts of robotics, the authors suggest choosing kits defined as “to build”, especially in primary school. This kind of kit lets students manipulate the basic elements of a robot, design experimental mechanisms, design creative robots and create a personal and meaningful “public entity”, as proposed by Papert (1991). Furthermore, the simultaneous analysis of hardware and software during the design process is more challenging for students: if the robot doesn’t work, students have to consider how they assembled the various parts of the robot as well as how they programmed it. This can be even more challenging when integrating an open control board (e.g. based on Arduino or Raspberry Pi) in the activity. On the one hand, it offers teachers the chance to explain the relevance of the open source culture and community. On the other hand, it provides students with a “white-box” tool, whose construction and reconstruction are enabled to a deeper level. The authors agree with Alimisis (2013) on the need for a transition to a “white-box” or “black-and-white” approach for constructionist environments. Teachers and educators can choose, according to their learning objectives, how to introduce robotics in their class to support teaching and to produce a positive impact on students’ learning. Literature supports observations like “ER helps in developing twenty-first-century skills” (Eguchi 2014, 2015, 2016), “ER prevents early school leaving (ESL)” (Daniela and Strods 2018; Daniela et al. 2017; Moro et al. 2018) and “ER is effective in conveying knowledge about subjects” (West et al. 2018), but those studies are often too limited to generalise. Several studies focus on qualitative methods that do not provide indexes or a numeric indication of how to evaluate students’ performance. Moreover, there is no homogeneity in conducting such studies, because they do not all align on the purpose of the study, and even when they do, they do not use the same protocol to bring ER to students or the same measurement instruments (Castro et al. 2018). ER needs longitudinal studies to validate ER curricula, valid and reliable assessment instruments, trained and motivated educators and teachers, and stakeholders’ engagement to help ER methodologies and tools enter the education system and impact future citizens.

Table 3.3 shows some studies from the literature and describes them through the four features of evaluation. It shows that several constructs belonging to performance, behaviour and attitude are explored in relation to the ER experience. This evaluation almost always has the purpose of assessing the intended constructs and hardly ever that of providing feedback to students. Moreover, qualitative and quantitative assessments are widely used, often in a mixed approach. It can be noted that the categories “what”, “when” and “how” can belong to all RiE subfields, but “for what purpose” pertains specifically to those fields directly targeting learning. In fact, in socially assistive robotics and assistive robotics, the assessment is often focused on evaluating the improvements in the lost function following the intervention with robots (Bharatharaj et al. 2018; Cook et al. 2005; Holt et al. 2013; Mengoni et al. 2017; Tapus et al. 2012). In the RiE subfield of social robotics, studies are mainly focused on the interaction between the robot and the student or the teacher (Fridin 2014; Fridin and Belokopytov 2014).

In the ER context, online measurement is not used, except by Jormanainen and Sutinen (2012). This may be because the data mining approach is relatively new and has become robust only recently. Though mainly unexplored, this research direction is an interesting challenge which may eventually lead to a system informing teachers or students about the ER activity. To reach this goal, data should be gathered through transparent, replicable and open experiments that could thus produce comparable results. Moreover, by integrating teachers’ qualitative evaluation with new technologies and techniques, like educational data mining and learning analytics, it will be possible to validate and examine in depth the real potential of ER. In a future scenario, teachers will be able to analyse minute by minute the progression of their students, and they will have meaningful information about students’ learning at their disposal. In this scenario, students will also benefit from personalised feedback, with a real chance to develop their personal learning style.

The proposed classifications are in line with some aspects proposed by the relevant literature, but they lead to some considerations in relation to other aspects. Moro et al. (2018) stated that ER does not mean teaching a specific discipline like robotics, but is rather a didactical approach to learning, based on constructivist and constructionist theories. The authors agree that ER is a didactical approach to learning but argue that this is not enough to describe ER. In fact, constructivism alone does not build the ER field. The didactical approach is a key element in ER, but another essential element is robotics itself: students should develop technical knowledge of the object they are using to grasp the meaning of the activity. This aspect is also highlighted by Angel-Fernandez and Vincze (2018). They proposed a definition of ER as a field of study at the intersection of three broad areas: education (all those disciplines aiming at studying and improving people’s learning), robotics (all those disciplines aiming at studying and improving robots) and human-computer interaction (aiming at improving user experience). This definition covers categories like robotics as a learning object (robots used to teach robotics), robotics as a learning tool (robots used to teach other subjects) and robots as learning aids (social robots). As previously stated, the authors disagree with the inclusion of social robotics in the field of ER. Social robots focus on the interaction between robots and people in a natural, interpersonal manner, often to achieve positive outcomes (Breazeal et al. 2016). Thus, social robotics is a RiE subfield dealing with robots as companions to teachers or peers to students, with the aim of engaging them in a learning activity. Robots for ER, described in Section 2, by contrast, do not focus on just the interaction between humans and robots to achieve an outcome; they are designed, built and programmed by students in the context of a constructionist environment.

This chapter presented a novel description of some basic features of an ER activity to provide common ground for researchers and common knowledge for teachers and educators. Moreover, by specifying the impact on the school curriculum and the learning environment, the authors intended to remark that ER can actually enter the school curriculum. Robotics should be a subject within school hours, with its own lesson and evaluation plan, or, at least, an afternoon activity strictly connected to the school programme. Whether as a whole curriculum-based education or as a regular activity inside another, broader subject, ER should be part of the school’s curricular offer from an early stage.