1 Introduction

Robotics has been proposed as an inspiring framework for getting students involved with STEM disciplines as well as with programming [4]. In most studies on the use of educational robotics in schools, children are asked to bring the robots to life by creating the appropriate computer programs [5]. The programmer has to think mainly about the goal of the robot and how the robot will interact with the environment. However, another crucial aspect should also be considered: whether and how the user will interact with the robot. In particular, we are interested in the effects of programming human-robot interactions on learning performance and attitudes. Moreover, we are motivated by embodied learning findings that concern a broad spectrum of human motor-perceptual skills, reaching beyond the traditional desktop metaphor and the keyboard and mouse as input devices.

Embodied cognition researchers argue that bodily experiences and physical interactions with the environment through sensorimotor modalities (touch, movement, speech, smell and vision) are essential factors in the learning process and the construction of knowledge [3, 22]. From a theoretical perspective, embodied learning is closely related to the principles of constructivist [20] and constructionist [18] learning theories. The core idea in Piaget’s theory is that young learners construct knowledge and form the meaning of the world by interacting directly with physical objects [20]. Papert [18] believed that children learn better when they construct knowledge voluntarily while playing with real-world metaphors or tangible objects, such as programming the turtle in the Logo environment or interactive robots.

The embodied approach has been widely applied to the learning of abstract material in a wide range of topics, extending from science, technology, engineering and mathematics (STEM) [10, 13, 14, 15] to computational thinking [6, 7, 19]. Concerning computational thinking in particular, one practical learning approach is to have students physically enact programming scripts with their bodies before creating the program [7]. Other scholars [6, 19] examined how embodied interaction in a virtual environment that processed students’ dance movements can facilitate computational learning. Some educators and researchers believe that robotics education is a promising field for employing the embodied cognition view. Alimisis [1] points out that embodiment is an innovative approach for making robotic activities more attractive and meaningful to children. Lu et al. [16] examined how direct and surrogate bodily experiences in a robotic workshop can influence students’ understanding of programming concepts. Similarly, Sung and colleagues [21] investigated how embodied experiences with different degrees of embodiment [13] (full body and hand) can affect students’ problem-solving skills. Having children enact [16, 21] or reenact the robots’ moves through physical interaction appears to be a useful approach for learning abstract computational concepts.

This small sample of embodied research highlights the need to explore the positive learning effects of embodiment within robotics [1] to a greater extent. The current study therefore set out to investigate how programming activities that control a robot through diverse interaction modalities, such as touch, speech, hand gestures and full-body gestures, can affect students’ exploration of computational concepts. Allen-Conn and Rose’s work [2] on introducing powerful ideas (math and science) through programming with Squeak was the main inspiration for creating the intervention. Expanding their views “beyond the screen” by targeting a real robot is one aspect of our study. The main contribution of our research is the study of alternative types of human-robot interaction in the context of embodied learning. Our research questions centered on two major topics:

  • Intention: Did the robotic workshop have any influence on students’ attitudes towards computing?

  • Interaction: Which interaction modalities did students select for controlling the robot, and what were their criteria for making these selections?

2 Methodology

2.1 Subjects

Thirty-six middle school students (17 girls, 19 boys), aged between fourteen and fifteen years, with little to no prior programming experience were recruited to participate in a seven-session robotic workshop. The participants were randomly selected from the third-level class of a middle school. This age group was chosen because none of the students had received any instruction in computer programming as part of their formal education. Students worked in pairs in all of the activities; thus, fifteen same-gender and three mixed-gender pairs were formed.

2.2 Activities

The workshop was divided into seven sessions. In the first session, which served as the introductory activity, students assembled a three-wheel robot and created a simple mobile application for controlling the robot’s arm with their mobile phone. In the second session, students developed a remote-control mobile app and steered the robot by touching the appropriate buttons on their phone’s touchscreen. In the third session, students created a mobile app that used hand gestures to navigate the robot via the phone’s orientation sensor. In the fourth session, they controlled the robot through speech commands by utilizing speech recognition technology. In the fifth session, students made use of computer vision technology by creating a program to control the robot through full-body gestures. In the sixth session, students created a mobile app that integrated artificial intelligence into the robot so that it could move autonomously along a black line on the track. Each of these sessions followed the same basic format: (1) building the user interface, starting from a basic template application and adding the necessary UI elements according to the instructions; (2) programming the application’s behavior; and (3) going further by enhancing the basic application with additional features such as variable speed.

In the final session, students were given a semi-open [23] problem-solving task: to create a program that navigates the robot along a fixed track and hits an object placed at a predefined spot with the robotic arm. No instructions were given for this final project, and students were free to choose whichever of the above interaction modalities they preferred; they were also allowed to reuse code from the previous sessions. Students attempted to solve the task by creating the following programming mechanisms: (1) robot navigation, (2) robotic arm control, and (3) power-speed control. Each of the first six sessions lasted about 45 minutes, while the final project activity lasted between 45 and 90 minutes.
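As an illustration of the kind of logic involved in the sixth session, the following sketch shows a minimal line-following loop. It is written in Python for readability, although the students expressed the same idea with blocks; the robot object, its methods, and the threshold and power values are hypothetical placeholders rather than the code used in the workshop.

    # Minimal sketch of the line-following behaviour from session six (assumed values).
    # "robot", read_light_sensor() and set_motor_power() are hypothetical placeholders.

    LIGHT_THRESHOLD = 45   # assumed reflected-light value separating the black line from the floor
    BASE_POWER = 40        # assumed motor power (percent)

    def follow_line(robot):
        while True:
            reading = robot.read_light_sensor()
            if reading < LIGHT_THRESHOLD:
                # The sensor sees the dark line: curve towards one side.
                robot.set_motor_power(left=BASE_POWER, right=BASE_POWER // 2)
            else:
                # The sensor sees the bright floor: curve back the other way.
                robot.set_motor_power(left=BASE_POWER // 2, right=BASE_POWER)

The same edge-following idea, alternating between two curving motions depending on the light reading, is what the students implemented with blocks.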

2.3 Materials

App Inventor [9] was employed as the development platform in the sessions that involved mobile technology, and students used their own mobile phones in an attempt to reinforce a sense of ownership. For the session that involved full-body interaction, ScratchX was employed as the development platform, supported by the Kinect sensor for tracking the body [11]. With mobile technologies, such as tablets or smartphones, the interaction space is expanded “to more physical and embodied modalities” [15], as touch screens, gyroscope-based hand gestures, and speech interfaces can be used to interact with digital information [12]. Similarly, computer vision technologies allow full-body interfaces to be employed for interacting with information. The interaction modalities and the development platform employed in each of the activities can be found in Table 1.

Table 1. Overview of the interaction modalities and the development platforms for each session of the workshop.

The robots chosen to support the workshop were Lego Mindstorms (NXT and EV3). Both the App Inventor and ScratchX programming environments can be used to program the Lego robots, and this was the main reason for their selection.
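To make the hand-gesture modality (Table 1) more concrete, the sketch below shows one way the phone’s orientation readings could be mapped onto the robot’s two drive motors. It is a Python analogue of the orientation-sensor logic used in the third session, not the students’ actual code; the gains, power range, and interface names are assumptions.

    # Conceptual sketch of tilt-based driving: the phone's pitch sets the forward
    # speed and its roll steers the robot. The gains and the robot API are
    # hypothetical placeholders.

    MAX_POWER = 100  # assumed maximum motor power (percent)

    def clamp(value, low=-MAX_POWER, high=MAX_POWER):
        return max(low, min(high, value))

    def tilt_to_drive(pitch_deg, roll_deg):
        forward = clamp(pitch_deg * 2.0)  # assumed gain: 2 power units per degree of tilt
        turn = clamp(roll_deg * 1.5)      # assumed gain for steering
        left_power = clamp(forward + turn)
        right_power = clamp(forward - turn)
        return left_power, right_power

    # Usage (hypothetical sensor and robot interfaces):
    # left, right = tilt_to_drive(phone.pitch, phone.roll)
    # robot.set_motor_power(left=left, right=right)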

2.4 Measuring Instruments and Data Analysis

For the study, both qualitative and quantitative data were collected and analyzed. Concerning the quantitative data, students filled out brief pre-test and post-test questionnaires. The pre-test, administered before the programming activities, consisted of a five-level Likert questionnaire that recorded students’ prior experience with programming and their views and intentions towards computing, robotics, and mobile development. The post-test, administered after the programming activities, consisted of a five-level Likert questionnaire that recorded changes in students’ views and intentions towards computing, robotics, and mobile development.

Regarding the qualitative data, students’ projects from the final session were manually analyzed to investigate their selections of interaction modalities. We additionally conducted semi-structured interviews of 30 minutes or more that gave participants a chance to describe not only their projects but also their experiences. Finally, the students’ workstation screens were recorded with Camtasia screen capture during the sessions. The qualitative data from the interviews and the screen recordings are still being analyzed, and we intend to publish those results in a separate paper.

3 Findings

3.1 Students’ Attitudes

Table 2 summarizes students’ views and intentions towards computing before and after the workshop. Six paired-samples t-tests were conducted to determine whether there was a significant change in students’ views and intentions. The results indicated that participants reported having more programming skills after the workshop \( (M = 2.86, SD = 0.899) \) than before \( (M = 2.25, SD = 0.77) \). This difference, \( -0.61 \), BCa 95% CI \( [-1.03, -0.19] \), was significant, \( t(35) = -2.94, p = .006 \), and represented a medium-sized effect, \( d = 0.45 \). The differences in the other cases were not significant.
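For transparency, the following sketch shows the form of this analysis, assuming the standard paired-samples t-test from SciPy; the rating arrays are illustrative placeholders rather than the study data, and the effect-size formula shown is one common variant for paired designs.

    # Sketch of the pre/post comparison with a paired-samples t-test.
    # The arrays below are illustrative placeholders, NOT the 36 students' real ratings.
    import numpy as np
    from scipy import stats

    pre = np.array([2, 3, 2, 1, 3, 2])    # placeholder pre-test Likert ratings (1-5)
    post = np.array([3, 3, 2, 3, 4, 3])   # placeholder post-test Likert ratings (1-5)

    result = stats.ttest_rel(pre, post)         # paired-samples t-test, df = n - 1
    diff = pre - post
    cohens_d = diff.mean() / diff.std(ddof=1)   # effect size of the paired difference

    print(f"t({len(pre) - 1}) = {result.statistic:.2f}, "
          f"p = {result.pvalue:.3f}, d = {abs(cohens_d):.2f}")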

Table 2. Means of students’ views and intentions before and after the intervention.

3.2 Interaction Modalities

To complete the problem-solving task given in the project session, students had to create the appropriate programming mechanisms. First, they had to program the robot navigation mechanism so that the users of the application could move the robot along the track. Second, they had to program the robotic arm control mechanism for hitting the object with the robot’s arm. Optionally, students could extend their application with a power control mechanism so that the robot could move at variable speed along the track. Figure 1 summarizes the interaction modalities that students selected while developing these programming mechanisms.

Fig. 1. Selected interaction modalities for each of the programming mechanisms.

In total, eighteen projects were created in the final session, one for each group of students who participated in the workshop. All of the groups were able to complete the main programming tasks by creating the robot navigation and robotic arm control mechanisms, while ten groups extended their projects with the optional power control mechanism. Concerning the robot navigation mechanism, full-body gestures and touch were the interaction modalities selected in most cases. To navigate the robot accurately along the track, the program must respond immediately to the user’s actions. For this reason, students avoided speech commands for controlling the robot’s movement, as there was a substantial delay in the speech recognition mechanism and, in some cases, it failed to recognize the correct word. As for the robotic arm mechanism, participants showed a preference for the full-body and speech interfaces. In this case, students used speech commands to trigger the movement of the robotic arm, since a delay in speech recognition did not prevent them from hitting the object successfully. Finally, for the power control mechanism, most participants preferred to create a program that allowed users to change the speed of the robot by touch, manipulating a power slider. None of the students created a body interface for controlling the speed of the robot, even though the body interaction modality was the most popular for each of the main programming tasks.
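As an illustration of the touch-based power control that most of these groups implemented, the sketch below mirrors the logic of a slider handler. It is a Python analogue of a slider and button event handler, not the students’ actual App Inventor code; the slider range, power range, and robot interface are assumptions.

    # Sketch of the optional power control mechanism: a slider sets the driving
    # power used by the navigation buttons. The robot interface is hypothetical.

    SLIDER_MIN, SLIDER_MAX = 0, 100   # assumed slider range
    POWER_MIN, POWER_MAX = 20, 100    # assumed motor power range (percent)

    def on_slider_position_changed(thumb_position, robot):
        # Map the slider thumb linearly onto motor power and store the choice,
        # so that the navigation handlers drive the robot at the selected speed.
        fraction = (thumb_position - SLIDER_MIN) / (SLIDER_MAX - SLIDER_MIN)
        robot.drive_power = POWER_MIN + fraction * (POWER_MAX - POWER_MIN)

    def on_forward_button_pressed(robot):
        robot.set_motor_power(left=robot.drive_power, right=robot.drive_power)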

4 Conclusion

Our results suggest that students felt more confident about their programming skills after the intervention. Moreover, students adopted various interaction modalities while developing the programming mechanisms in the problem-solving task. Body gestures were among the most popular modalities in the final session, as many groups selected them for navigating the robot and controlling the robotic arm. Surprisingly, none of the groups that used the body interfaces implemented the power control mechanism. Students struggled to program a concurrent body gesture for controlling the speed of the robot, despite the fact that during the Body Control activity they had been shown how to adjust the robot’s speed according to the distance between the user’s knees. For robot navigation, the touch modality was also used extensively, as it allowed users to guide the robot more accurately. Although students did not use the speech interface for navigating the robot because of its response delay, they did use it for triggering the robotic arm. In sum, it seems that participants chose not only interfaces that were attractive to them, but also interfaces whose affordances matched the specific programming tasks [17].
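For reference, the knee-distance mechanism demonstrated in the Body Control activity can be sketched as follows; the joint coordinates stand in for the values exposed by the Kinect body-tracking extension in ScratchX, and the distance range and power mapping are assumptions.

    # Sketch of the knee-distance speed control demonstrated in the Body Control
    # activity: the wider the user's stance, the faster the robot drives.
    # The knee coordinates are hypothetical stand-ins for Kinect joint positions.

    MIN_DIST, MAX_DIST = 0.2, 0.8   # assumed knee separation range (metres)
    MAX_POWER = 100                 # assumed maximum motor power (percent)

    def knee_distance_to_power(left_knee_x, right_knee_x):
        distance = abs(right_knee_x - left_knee_x)
        # Clamp the separation into the assumed range and map it linearly onto 0..MAX_POWER.
        distance = max(MIN_DIST, min(MAX_DIST, distance))
        fraction = (distance - MIN_DIST) / (MAX_DIST - MIN_DIST)
        return round(fraction * MAX_POWER)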

One limitation that might have influenced students’ modality selections is the novelty of using a new technology to control the robot, especially in the body control case. Moreover, their choices might also have been biased by other students’ choices. Additionally, further analysis is needed to evaluate the learning outcomes of the current study; we intend to analyze students’ final projects to assess computational thinking. Finally, it would be interesting to investigate in future work whether students’ choices are related to a particular learning style model [8].

The contribution of this paper is to provide additional insight into the synergy between embodied learning and educational robotics. In contrast to previous studies that explored learning outcomes by comparing a tangible interface to a digital one [24], we exposed students to a wide range of interaction possibilities and examined the problem-solving strategies that arose. We believe that the findings of our study might benefit teachers, assisting them in creating effective robotics interventions informed by an embodied learning perspective.