Research concerning young children’s perception and learning of technological systems is sparse (Zuga 2004). Different studies show that young children categorize artifacts by function rather than by appearance (Kemler Nelson et al. 1995), perceive causal relations in technological mechanisms earlier than in natural physical phenomena (Piaget and Inhelder 1972; Niazzi and Gopnik 2003), and assign importance to the “insides” of artifacts for their functioning (Simons and Keil 1995). In explaining this early understanding, researchers point to the large amount of knowledge young children have already constructed: (a) from the very fact that they are immersed in a technology-saturated environment; (b) from their many interactive encounters with such systems; and (c) from the dynamic nature of artifacts’ functioning, which attracts young children’s attention. While several studies have explored young children’s perception of technology (Jarvis and Rennie 1998) and investigated the processes by which they plan, create and relate to their designed objects (Fleer 1999, 2000; Carr 2000), little research has been conducted into their conceptual understanding and learning of a narrower set of technological constructs. Today, controlled systems (e.g., automatic doors, domestic devices, programmable toys) are central to everyday life. Given their ubiquitous presence, it is important to study how they are understood and learned.

For over two decades we have seen the development of learning environments that embrace the topic of control. However, research on the process of learning within these environments, particularly through their construction, and more specifically among young children, is sparse (see Granott 1991a, b; Mioduser et al. 1996; Betzer 2002; Talis et al. 1998). It is this apparent gap we address in the current study, seeking learning pathways: the ways by which children construct explanations of controlled systems.

This study explores young children’s evolving understanding of an adapting robot’s dynamic emergent behaviors, as they learn to program such behaviors with simple rules. We supported their reasoning by helping them attend to relevant features, and compared their spontaneous and supported descriptions. We investigate the children’s understanding of the robot’s behavior through a sequence of tasks, analyzing their ideas as knowledge representations.

Background

In his book “Vehicles: Experiments in Synthetic Psychology”, Braitenberg (1984) describes a series of thought experiments in which “vehicles” with simple circuits and rules display complex life-like behaviors, such as “aggression” and “seeking cold”. In his law of “uphill analysis and downhill invention”, Braitenberg claims that it is much more difficult to guess a robot’s internal structure from observing its behavior than it is to create an internal structure that produces such a behavior. In our study, we engage in both “uphill” and “downhill” courses: “uphill”, the children decipher a given set of Braitenberg-like behaviors; “downhill”, they program a robot’s internal structure with rules, producing complex behaviors.
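
To make the “downhill” direction concrete, the sketch below (our illustration, not Braitenberg’s circuitry or the study’s software) wires a hypothetical two-sensor vehicle so that each light sensor drives the opposite wheel; this trivial crossed connection steers the vehicle toward light, a behavior an observer might label “aggression”.

```python
def wheel_speeds(left_light: float, right_light: float) -> tuple[float, float]:
    """Crossed excitatory wiring: each sensor drives the opposite wheel."""
    left_wheel = right_light    # brighter light on the right speeds the left wheel...
    right_wheel = left_light    # ...so the vehicle veers right, toward the source
    return left_wheel, right_wheel

# Light source off to the right: the left wheel spins faster and the vehicle
# turns toward the light. The "uphill" problem is the inverse: recovering this
# wiring from the observed motion alone.
print(wheel_speeds(left_light=0.2, right_light=0.9))  # -> (0.9, 0.2)
```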

The following sections present three themes that are relevant to our research goals and questions: the presence of robots in early education settings, knowledge representations in reasoning about a robot, and the role of additional agents (i.e., adult support and artifacts) in advancing children’s understanding of such emergent systems.

The presence of robots in early education settings

Robots and other adapting controlled artifacts have had a long history in early childhood education: from the mechanical and programmable “floor turtle” which drew pictures on paper (Papert 1980/1993), through a variety of robots that children interact with in different ways (AIBO, Fujita et al. 2000; Furby, Maddocks 2000; PETS, a story-teller robot, Montemayor et al. 2000; Bers and Portsmore 2005), to programmable bricks and an array of computational toys in the Lifelong Kindergarten project (Resnick et al. 1996; Resnick 1998).

Most of the programming interfaces geared for younger children do not employ sensors (however, see Electronic Bricks, Wyeth and Purchase 2000, which include logical control using sensors; and ToonTalk™ in preschool classrooms, Kahn 1996; Morgado et al. 2001). Without sensors, the robot possesses a repertoire of “rote” motor schemes, which can be repeated and nested. With sensors, this repertoire widens to include sensory-motor schemes, which are represented as general behavior procedures (e.g., rules). With such possibilities for adaptation, the robot veers closer to the animate world, as it gains awareness of its environment, autonomy, and a form of “intelligence”. van Duuren et al. (1998) found that 5-year-old children did not use ideas of autonomy or programmability to distinguish between robots operated by rote and adaptive robots. However, in their study the children were not engaged in programming the robot. We expect that children’s construction of and interaction with such systems would enhance this understanding.

In our programming interface (Talis et al. 1998), control knowledge is represented as time-independent, non-sequential rules. In the present study, we challenge young children to go beyond the rote scripts of a robot’s behavior: to extract the robot’s internal rules and re-construct its behaviors, which emerge through interaction with the environment. By observing young children’s interactions and learning with adapting robots, we may draw conclusions about their learning progressions and adapt such environments to this natural progression.

Knowledge representations in reasoning about a robot

In the difficult course of “uphill analysis” (Braitenberg 1984), can young children abstract the rules that underlie and control a given robot’s behavior? What knowledge representations or constructs do they use when reasoning about such systems?

Several knowledge representations can be used to describe a robot’s behavior. In the literature on event knowledge, episodes and scripts are commonly highlighted. The least general of these representations is a specific episode (Flavell et al. 1993), a mental representation of the flow of events, a one-time occurrence. It is made up of actors, actions, and props, organized along a spatial and temporal structure. A script is a generalized, temporally and spatially organized sequence of events about some common routine with a goal (Schank and Abelson 1977). Prior work has shown that young children’s representations of events are mainly script-like, and become so after a very small number of repetitions (Flavell et al. 1993). The literature on event knowledge does not refer to rules. However, in the case of robots (and other systems), events can be constructed from rules. Rules, which are independent of the order of events, are the most abstract knowledge representation of the three. They commonly take the form of “if... then...” statements, connecting conditions with their related actions. When the robot is placed in a specific setting, the rules are activated according to its local environmental conditions, resulting in an emergent behavior. Such emergent behaviors are coherent in terms of function, yet cannot be reduced to the underlying rules.
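
As a rough illustration of these differences in generality (our own rendering, not a formalism from the event-knowledge literature), the three constructs can be contrasted as simple data structures; the dark-spot example echoes one of the study’s tasks:

```python
# An episode: a unique, temporally ordered record of a one-time occurrence.
episode = [
    ("robot", "moves forward"),
    ("robot", "turns"),
    ("robot", "flashes light"),
]

# A script: a generalized routine, still temporally ordered, set off by a trigger.
script = {
    "trigger": "robot reaches a dark spot",
    "sequence": ["stop", "flash light", "move on"],  # order matters
}

# Rules: atemporal condition -> action associations; event order is irrelevant.
rules = {
    "surface below is dark": "flash light",
    "surface below is light": "drive forward",
}
```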

Young children can apply rules to solve problems. For example, studies of preschool children’s causal reasoning show that by the age of four, children can apply not only single “if... then...” rules, but also more complicated embedded rules with two inputs and two outputs (Frye et al. 1996). On the other hand, children’s conditional reasoning is, at best, limited; it is usually associated with the formal operational level of reasoning and develops through adolescence (Muller et al. 2001; Markovits 2002; Overton et al. 1985). However, between these two poles lies a middle ground: inferring rules from a flow of events. One can view the task of discovering a robot’s rules as generalizing from a set of instances, noticing the co-variation of environmental features with the robot’s actions, such as “on black surfaces the robot flashes its light”.

Two lines of research inform us of younger children’s ability to infer rules from experience: causal inference (e.g. Sobel et al. 2004; Cheng 1997) and scientific reasoning (e.g. Klahr et al. 1993; Schauble 1990; Zimmerman 2000).

With regard to causal inference, different models describe how rules are inferred from data (Shanks 1995; Cheng 1997; Sobel et al. 2004). Gentner and Medina (1998) suggest that the process of comparing several instances of evidence and finding their common features affords a mapping and alignment between structural similarity and a symbolic rule-based account. In our study, we assume that the co-occurrence of quickly changing environmental conditions and robot actions provides a database of correlated evidence. This evidence should play a part in constructing associations and aligning them with a causal rule-based account of the robot’s behavior. Siegler and Chen (1998) highlight the importance of noticing relevant explanatory features of the situation and generalizing local relationships between cause and effect for the successful construction of rules.

The second line of study, scientific reasoning, is greatly influenced by the early work of Piaget and Inhelder (1948/1956). Their distinction between concrete and formal operational thought led them to conclude that the logic of scientific experimentation and inference is not acquired until adolescence. Several studies exploring young children’s scientific reasoning (Klahr et al. 1993; Schauble 1990; Kuhn 1989) observed that they could not entertain more than one hypothesis at a time, conducted experiments that were difficult to interpret, had trouble inferring implausible conclusions, persevered with prior beliefs in the face of conflicting evidence, and lacked valid heuristics for coping with this discord. These studies suggest that inferring rules from data may be too difficult a task for young kindergarten children.

Both causal inference and scientific reasoning studies share the goal of discovering how people make inferences of causality from data on co-variation (Kuhn and Dean 2004). Similar to our work, many of the problems used in these domains, especially with younger children, involve dynamic physical devices. However, differing from our work, these studies focus mainly on the relationship between prior causes (e.g., placing an object on a device) and final outcomes (e.g., the scale tilting in Siegler and Chen 1998; or the device lighting up and playing music in Sobel et al. 2004). In our case, we focus on the process of change itself, the emergent behaviors, such as the robot “searching for black squares” and “circling the island”. No research has been found to date regarding young children’s abstraction of the rules underlying the dynamics of change in such emergent systems.

Hoyles et al. (2001) explored 7–8 year-old children’s articulations of simple rules (one condition-action couple), which the children had programmed while constructing a video game. They found that the children produced explanations ranging across formal rules, narratives and psychological-intentional explanations. However, while involved in programming, all the children described these events in terms of a full condition-action rule. In this study, we go beyond a single rule, challenging the children with greater complexity: from a single condition-action couple all the way to four concurrently active condition-action couples.

In a parallel study conducted with the same data-set (Levy and Mioduser 2007), we explored the children’s explanatory frameworks with regard to emergent robot behaviors. We found that the children employed two modes of explanation: an “engineering” mode, focused on the technological building blocks that make up the robot’s operation, and a “bridging” mode, which tended to combine and align two explanatory frameworks, the technological and the psychological. Thus, one may find children’s descriptions ranging across intentional anthropomorphisms, technological mechanisms and combinations of the two.

In summary, different lines of research point to conflicting conclusions regarding young children’s ability to form rules from data, and in our case, to form rules regarding the self-organizing behavior of a robot. Research on scientific reasoning suggests that young children will have difficulty forming abstractions and coordinating them with the evidence; developmental studies claim that temporally structured events would be represented as scripts, rather than abstract rules. However, studies on causal inference paint a different picture: young children can detect patterns in the observed data and use them to predict and plan.

In this study, we employ a framework that highlights the differences in generality between three constructs for describing the dynamics of change: episodes, descriptions of a unique sequence of events; scripts, which include temporally-structured repeating patterns; and rules, atemporal descriptions associating environmental conditions with the robot’s actions. By this, we hope to contribute insight into children’s pathways in abstracting the rules underlying emergent physical phenomena.

The interaction-space

Vygotsky (1986) emphasizes the role of social interaction and cultural tools in learning, turning our attention to the “Zone of Proximal Development”, in which the child can participate in cultural practices above his or her spontaneous individual capability. Similarly, more recent approaches of situated learning (Brown et al. 1989; Lave 1988) view learning as enculturation, the social and experiential construction of knowledge, in terms of relations between people, physical materials and cultural communities.

In this study, we explore children’s reasoning set within a multiple-agents interaction space: the child, the adult/interviewer, and the robotic system. The child’s increased understanding and capabilities are framed within the structure of this interaction.

The adult demonstrated several curious robot behaviors and conversed with the child. The interactions between the adult and the child were not of the usual instructional genre: the adult asked questions that supported the children in communicating their ideas, and later probed for their possible extension. In line with Siegler and Chen’s (1998) research into the impact of an adult’s assistance in encoding relevant task features upon children’s use of rules, we intervened mainly by helping the children notice pertinent components of the situation.

The other agent in the interaction space, the robot system, served the child as a concrete environment for the exploration and construction of abstract concepts and schemas, following a constructionist approach (Papert 1980/1993). The robot is in fact a concrete system embodying abstract ideas and concepts. Interplay is generated between this “abstractions-embedded-concrete-agent” and the cognitive abstractions generated by the child. This is the realm of thinking processes we will refer to later in this paper as the realm of “concrete-abstractions”, in which recurring cycles intertwining the symbolic and the concrete are exercised by the child while abstracting schemas for understanding the robot’s behavior.

Research questions

The purpose of the present study is to investigate children’s abstraction of rules from a sequence of events, characterized by a robot’s emergent behaviors while moving through a changing terrain. To this end, we have narrowed the richness of normal educational settings to help us elicit and focus upon each individual child’s reasoning processes. Given the seemingly conflicting evidence regarding young children’s ability to abstract rules, their limitations as to the number of rules they can reason with, and the potential benefits of the interaction with an adult and the concrete system, we asked the following research questions:

  1. What type of constructs do young children use to explain an adaptive robot’s emergent behavior? (i.e., episodes, scripts, rules)

  2. When the children make use of rules: what rule-base configuration do they assemble? (i.e., partial, complete, or combined rules)

  3. How do the children’s constructs and rule-base configurations compare when their descriptions and explanations are spontaneous versus when an adult supports their encoding of relevant task features?

Method

Sample

The sample included six children, three boys and three girls, selected randomly from 60 children in an urban middle-class public school in central Israel. Their ages ranged from 5 years 6 months to 6 years 3 months, with a mean age of 5 years 9 months and a standard deviation of 3 months. All the children’s parents signed consent forms approving their child’s participation in the study, and the attrition rate was zero.

Instruments

Two instruments were developed: a computerized control environment and a sequence of tasks.

The computerized control environment was designed to scaffold the children’s learning process. This environment includes a computer interface (Fig. 1), a physical robot (made with the Lego system) and modifiable “landscapes” for the robot’s navigation (Fig. 2).

Fig. 1 Sample screen of the computer control environment. This sample is at the more advanced level of two interrelated rules

Fig. 2 Setting for the “guarding the island” task: the robot has a light sensor on its front, facing down. When it sees light colors below it, it moves forward. When it sees dark colors, it turns to the left. When placed at the center of the island, the robot quickly moves to the edge and starts following the island’s rim from the inside

A key component of the environment is an iconic interface for defining the control rules in a simple and intuitive fashion (Talis et al. 1998). The left panel shows the inputs to the system, the information the sensors can collect and transmit. The right panel presents the possible actions the robot can perform. The central section is devoted to the “programming board”, in matrix form. This part changes with advancing tasks, starting with one condition-action couple and ending with the configuration seen in Fig. 1: two complete rules, or four condition-action couples. Each square shows an action to be performed when the two conditions (row and column) are met.
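
The board’s logic can be sketched as a simple lookup (our illustration; the sensor names and the actions in the cells are hypothetical examples, not the study’s actual tasks): each cell of the matrix pairs one condition from each sensor with the action performed when both conditions hold.

```python
# Hypothetical 2 x 2 "programming board": rows hold one sensor's complementary
# conditions, columns hold the other's; each cell is the action fired when the
# row and column conditions are both met (four condition-action couples).
BOARD = {
    ("dark", "hat on"):   "turn",
    ("dark", "hat off"):  "flash light",
    ("light", "hat on"):  "go backwards",
    ("light", "hat off"): "go forward",
}

def choose_action(light_sensor: str, touch_sensor: str) -> str:
    """Look up the action for the currently sensed pair of conditions."""
    return BOARD[(light_sensor, touch_sensor)]

print(choose_action("light", "hat off"))  # -> "go forward"
```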

The subjects in our study participated in a sequence braided from two strands of tasks: description and construction. In this paper, we focus on the description tasks, in which the child portrays, narrates and explains a demonstrated robot behavior. The full set of tasks is presented in our previous paper (Levy and Mioduser 2007).

An example of a description task is shown in Fig. 2: the robot is placed upon an island. The robot moves across the island until it reaches its edge. It then travels around the perimeter of the island, sniffing out and following the island’s rim. The tasks make use of the same robot in a variety of physical landscapes, and were designed as a progression of rule-base configurations. The operational definition of a rule-base configuration is the number of pairs of condition-action couples. A robot control rule consists of a pair of two related condition-action couples; the conditions are complementary, i.e., if one condition is “dark”, then the other is “light”. The tasks progress through a range of increasing difficulty: from half a rule (one condition-action couple), through a complete rule and two independent rules, to two interrelated rules, which are made up of two pairs of condition-action couples.
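
To illustrate how a single complete rule yields an emergent behavior, the toy simulation below (our own sketch, with assumed geometry and sensor placement, consistent with the children’s accounts of a light island on a dark rug) runs the rule “light below: go forward; dark below: turn left”. Rim-following emerges without any explicit “follow the edge” instruction:

```python
import math

# Toy world: a light square "island" of half-width 7 centered on a dark rug.
def is_light(x: float, y: float) -> bool:
    return abs(x) <= 7 and abs(y) <= 7

x, y, heading = 0.0, 0.0, 0.0        # robot starts at the island's center
for _ in range(400):
    # the downward-facing sensor sits one unit ahead of the robot's center
    sensor_x = x + math.cos(heading)
    sensor_y = y + math.sin(heading)
    if is_light(sensor_x, sensor_y):
        x, y = sensor_x, sensor_y    # light below: move forward
    else:
        heading += math.radians(30)  # dark below: turn left in place

# The trace first crosses the island in a straight line, then endlessly skirts
# the rim from the inside. Drop either couple and the behavior collapses:
# without the turn the robot drives off the island; without the forward
# motion it spins in place.
print(round(x, 1), round(y, 1))
```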

A construction task followed the description tasks at each stage in the progression. A construction task began with explicating the programming interface with respect to the description task. The child was then presented with a goal, such as “teach the robot to move freely about an obstacle field”, and proceeded to program and test this behavior.

Procedure

The design used in this study is shown in Fig. 3. The study comprised five 30–45 min sessions, spaced about 1 week apart. The children worked and were interviewed individually in a small room off the teachers’ lounge. All sessions were videotaped and the videotapes transcribed. The transcriptions were segmented into 341 utterances. A content analysis was performed on these utterances in terms of the interviewer’s support, the child’s construct, and, when rules were employed, their rule-base configuration. We are interested in the structure of the children’s explanations, not in their correctness.

Fig. 3 Study design. Each session is marked along a timeline; tasks and their rule-base configurations (number of rules) are included

In coding for the interviewer’s support, the children’s responses were classified as “spontaneous” or “supported”. The constructs in the children’s descriptions were coded as episodes, scripts or rules. Table 1 presents the definitions of these codes and provides examples; several examples appear in the next section.

Table 1 Coding scheme for children’s responses

The children’s rule-base configuration was coded as the greatest number of condition-action couples, ranging from half a rule (one condition-action couple) to two interrelated rules.

Three independent coders (the first two authors and a graduate student) coded 20% of the transcripts. Inter-judge reliability was 90%. The remaining data were coded by the student and checked by the other judges to uncover obvious errors. An example of one such analysis is provided in Appendix I, relating to the same transcription as that analyzed with regard to the children’s explanatory frameworks (Levy and Mioduser 2007).

Results

The results refer to the children’s descriptions and explanations of the robot behaviors they had observed. We present the results for the first two research questions: the construct employed to describe the robot’s behavior and the rule-base configuration of the explanations. Results for the third research question, comparing children’s spontaneous and supported descriptions, are interleaved within each of the first two sets of results.

Research question 1: What type of constructs do young children use to explain an adaptive robot’s emergent behavior?

We coded the children’s utterances by increasing generality, as episodes, scripts and rules. The children used all three types of constructs, though at different frequencies. For each type of task and intervention, Table 2 and Figs. 4 and 5 describe the children’s constructs.

Table 2 Type of construct (episode, script, rule) used in describing a robot’s behavior for the different tasks and interventions, for each subject (S’s)
Fig. 4 Children’s spontaneous (unsupported) descriptions of the robot’s behaviors, classified by construct: episodes, scripts and rules

Fig. 5 Children’s supported descriptions of the robot’s behaviors, classified by construct: episodes, scripts and rules

We address the first research question by focusing on the children’s spontaneous descriptions, those elicited with minimal adult support. Overall, these were mostly scripts. The easier tasks elicited mainly rules, which were gradually displaced by scripts in the more advanced tasks, with episodes increasing somewhat in frequency in the most advanced tasks. Increasing task difficulty is thus associated with less general constructs.

With respect to the third research question, we can see that with an adult’s support, all the children described the robot’s behavior using rules in most of the tasks.

Within the spontaneous responses, episodes were few, focusing only upon the robot’s actions without referring to environmental conditions, as in Mali’s description: “It’s going backwards, forwards, turning...”. Below, we show how this resolves into a rule, and examine the role of such episodes within the progression of succeeding descriptions. While the episodes are infrequent, we are particularly interested in them, as they capture moments when no generalization is made. In Table 2, we can see that this type of construct is an intermediate form, usually appearing between other constructs. We found that the episodes showed up in one of two situations: early in the interview, when a new behavior had just been demonstrated (three instances), or when the interviewer introduced a change that violated the child’s previous generalization (three instances).

Vignettes I–II portray children’s use of scripts and rules. Vignettes III–V illustrate transitions between the constructs, including episodes.

Vignette I: A script

The children’s spontaneous descriptions were mainly in the form of scripts. Generally, the scripts take the form of repeating action sequences triggered by an environmental prop or feature. In the following exchange, Ron describes a robot traveling through a landscape spattered with dark spots (“Brightening dark spots, oops! Trapped by a hat”). When it reaches a dark spot, it flashes a light. When a hat is placed on its “head”, it turns like a top. The interviewer supports Ron by drawing his attention to one of the conditions, asking what the robot does when hatless:

  • Interviewer: And when I take off the hat?

  • Ron: No, it turns and then goes backwards.

  • Interviewer: When does it turn and when does it go backwards?

  • Ron: Turns one turn, and then backwards.

Ron attends to the robot’s actions, a repeating sequence: “turn one turn, then backwards”. This sequence of actions is set off by an environmental condition: every time the robot’s hat is removed. The interviewer’s attempt to help Ron separate the two actions, and possibly connect them to different conditions, fails.

Vignette II: A rule

In the less advanced tasks, the children used mainly rules to describe the robot’s behaviors. Alex (male) uses a rule to describe the same scenario that Ron (above) described with a script:

  • Interviewer: What’s happening with this robot? What things does it know how to do?

  • Alex: When someone puts a hat on it, it turns.

  • Interviewer: Does it know to do something else?

  • Alex: Yes.

  • Interviewer: What?

  • Alex: When you take off his hat, he doesn’t.

Alex describes the conditions, putting on and taking off a hat, and the robot’s associated actions, turning or not turning. He has delineated a whole rule: two complementary conditions and their related robot actions.

Vignette III: Episode to rule transition

We demonstrate a shift from an episode to a rule. In this case, it is a psychological rule, in which the actions are intentions rather than physical actions (Levy and Mioduser 2007). Mali (female) describes “The cat in the hat likes black”, a robot with a hat navigating a checkerboard, searching for black squares; without a hat, it moves in a straight line. From an episode, she shifts to a rule:

  • Interviewer: Can you tell me what the car is doing? What’s happening?

  • Mali: It’s going backwards, forwards and turning [Mali is describing the robot’s actions while in operation].

  • Interviewer: Backwards, forwards?

  • Mali: It wants to be only on black squares.

Mali describes the robot’s physical directional motions with respect to the robot’s frame of reference: “backwards, forwards and turning”, a transient sequence of events. When the interviewer asks more pointedly about these actions, she constructs a rule, connecting the robot’s intentions with respect to the landscape: “it wants to be only on black squares”. This is a rule, an atemporal generalization, which includes one condition (black) and one action (wants to be on). Mali’s episode description takes place when the robot’s behavior is first introduced: she starts by describing the robot’s actions, subsequently abstracting a rule.

Vignette IV: Rule to episode transition

An episode description, which shows up after expectations are violated, is seen in Naomi’s description of “Guarding the island”, a robot following along the rim of an island (see Appendix I). Naomi first generates a rule, referencing only the island: “He’s all the time looking at him [the white island]”. When the interviewer moves the robot to the center of the island, she is surprised: the robot moves in a straight beeline until it reaches the edge, then resumes its rim-following behavior.

  • Interviewer: Let’s see what happens when I put it like this. [places robot in the center of the island; robot moves straight to edge]

  • Naomi: So now he’s going in a straight line, and then again to the left. And then again to the left, and again straight. And then he goes straight, all the time he’s going ... now he’s going backwards.

  • Interviewer: (...) You said that when he’s on the white, what does he do?

  • Naomi: He’s all the time looking at the white and he doesn’t want to see the rug.

Notice Naomi’s struggle in forming a general rule. From the transient “so now he’s going in a straight line, and then...” she moves into a description which is still episodic, but peppered with attempts at generalizing, “and then again”... “and again”, culminating in “all the time he’s going...”. At this point, she breaks down and reverts to the episodic “now he’s going backwards”. This segment includes only robot actions and no conditions. However, additional observation and conversation help Naomi notice the rug, incorporate it into her description, and resolve into a complete rule construct, with two condition-action couples.

Vignette V: Script to rule transition

We present a vignette in which a script shifts into a rule. Ofer is describing the robot navigating the checkerboard field, as in Vignette III.

  • Interviewer: Ofer, what does the robot know how to do?

  • Ofer: When? That it turns, you put a hat on him, so where you... here – he’s turning to here. You put a hat on him, so he...

  • ...

  • Interviewer: What is he doing here?

  • Ofer: I don’t know what he’s doing.

  • Interviewer: So now I’ll put a hat on him; what will he do?

  • Ofer: He’ll go forward.

Ofer has detected the following repeating sequence: the robot turns, and then the interviewer places a hat upon it. Although he has identified a temporal pattern, it does not help him understand the robot’s behavior. When the interviewer steps in to help him disentangle the condition from the action, he generates a rule: with a hat, the robot goes forward.

To conclude, with support in decomposing the task, all the children were able to describe the robot’s behavior using rules. Without support, the children all used rules in the easier tasks and shifted to scripts in the more advanced tasks, with the most advanced tasks eliciting a few episodes. Episodes were expressed infrequently, and seem to portray the child’s confusion, brought about either by the novelty of the situation or by the violation of a predictable robot behavior.

Research question 2: When the children make use of rules to explain an adaptive robot’s behavior, what rule-base configuration do they assemble?

The following vignette portrays Mali’s description of the robot’s emergent edge-following behavior with rules.

Vignette VI: Two conditions, two actions

When Mali first observes “Guarding the island”, the robot circling a paper island along its rim, the following exchange takes place:

  • Interviewer: What is it doing? Do you want to tell me what it’s doing?

  • Mali: He’s walking all the time [points at the island]. And when he sees the rug [around the island] he runs away from it. As if this [the rug] is dark and the paper [the island] is the light.

Mali distinguishes between two conditions, the rug and the paper, and maps them onto the robot’s sensation of dark and light. She sees the robot as walking all the time. She does not explicitly say so, but she gestures that the robot is walking on the island. When the robot reaches the rug surrounding the island, it turns away from the rug. This completes the rule, with a pair of condition-action couples.

Table 3 and Fig. 6 illustrate the children’s rule-base configurations when explaining the robot’s behavior, as they change across the different tasks and the two levels of adult support. Figure 6 includes the actual task rule-base configuration. When a child did not describe a rule, the entry in the table is zero. These zero values are included in the means displayed in the graph.

Table 3 Rule-base configuration (number of rules in a single explanation) in describing a robot’s behavior for the different tasks and interventions, for each subject (S’s)
Fig. 6 Children’s spontaneous and supported rule-base configurations (number of rules in an explanation) versus the tasks’ rule-base configurations

In the first task, all the children spontaneously described the robot’s behaviors using rules that matched the task configuration, one condition-action couple. Across the full task sequence, however, most of the children’s spontaneous descriptions were not in the form of rules. The number of children who used rules decreased with the tasks’ difficulty, so that by the last task only one child could describe the robot’s behavior in terms of rules. When the children did use rules spontaneously, these were usually half a rule, or one condition-action couple.

With an adult’s support in noticing relevant task features, most of the children shifted into the use of rules and verbalized more advanced rule-base configurations, commonly between half a rule and one and a half rules beyond what they could provide on their own. Eventually, most children reached a rule-base configuration in their descriptions that was close to the actual configuration of each task.

Within each task, we can see a shift from focusing on simple behaviors (one condition-action couple or even none) to considering a compound of several behaviors, as well as the relevant contextual information. This process took place through interaction with an adult: most children generated more complex descriptions when an adult supported them in decomposing the task.

Discussion

This paper concerns young children’s changing knowledge representations as they describe and explain the emergent behavior of an adapting mobile robot. First, we explored the children’s abstraction of the simple rules underlying the observed robot behaviors. When the children employed rules, we examined the rule-base configuration, the number of condition-action couples they could infer. Finally, we investigated the role of an adult in supporting the children’s reasoning about the dynamics of the robot’s behavior.

We now elaborate on the process by which the children decipher the robot’s behavior, a process marked by three features: increased generality, a shift from temporal to atemporal constructs, and decentering from the robot to include its environment.

In the “uphill analysis” (Braitenberg 1984) of the rules governing a robot’s behavior, we have found that the children spontaneously used a variety of constructs: the majority were scripts, some were rules and a minority were in the form of unique episodes. We have also seen a trend in the way these constructs interact with the difficulty of the tasks. For the easiest task, one condition-action couple sufficed to define the robot’s behavior. In this case, the children easily extracted the underlying rule. As the tasks increased in difficulty, we see the rise and then predominance of scripts in the children’s portrayal of the robot’s behavior. Finally, in the two most difficult tasks, some episode-like descriptions emerge.

We claim that the more difficult tasks challenge the children’s thinking beyond their current ability to abstract rules. They then fall back on earlier forms of reasoning, exposing phases that may pass rapidly in the easier tasks and are not captured in verbal descriptions. Similar observations of regression to earlier phases with increased task difficulty have been made in the domains of gear mechanisms (from abstract rules to depictive models, Schwartz and Black 1996), the balance scale and volume of liquid (Siegler 1986, pp. 88–89) and controlled robots (Granott 1991b).

We argue that the children abstract rules from the robot’s behavior in the following way: (a) by observing the robot’s sequence of moves and actions in the landscape (episodes), with a primary focus on the robot’s actions rather than on the environmental conditions, in a “robo-centric” approach; (b) by seeking repeating routines in the robot’s actions set off by particular features or props of the terrain (scripts), with the spatial conditions gaining some importance, partially decentering from the robot’s actions; and (c) by distilling atemporal relationships between the environmental conditions and the robot’s actions (rules), when comparable importance is attributed to both conditions and actions in explaining the robot’s behavior, completing the decentering from the robot. Let us elaborate on the characteristics of the rule-abstraction process (see Fig. 7).

Fig. 7 Progression in forming atemporal robot control rules based on observation of the robot’s behavior. We employ constructs based on event knowledge (episode, script, rule) and relate these to their relative focus on conditions and actions in the succession from the least general episode, through a script, to the most general form of a rule

At first, the robot’s behavior can be described as a succession of unique events. The child focuses on the robot, noticing its actions, at the expense of ignoring the environment within which it is navigating. Such focus on the behaving agent is the foundation of body syntonic learning described by Papert (1980/1993), in which the learner’s identification with the behaving “turtle” enables her to decipher and construct its behavior. We have used the term “episode” to illustrate the construct framing such an account, e.g. “it’s going backwards, forwards, turning...”. In our study, we have seen evidence of such constructs in the more demanding tasks, when atemporal patterns are more difficult to discern. Such episodes show up at special times, when the child is confused in some way: either when first presented with a new robot behavior, or when some assumed regularity breaks down. In these situations, repeating progressions have not yet been detected, and no pattern or routine seems to subsume the localized sequence of particular actions.

While the temporal succession of the robot’s actions may at first seem unpredictable, one may notice an intermittent temporal pattern. Focusing on the robot’s actions and assuming its viewpoint eventually affords noticing repeating sequences and key environmental features, critical triggers to such repetitions. For example, a robot is moving across a field in which a number of obstacles are strewn about. When it hits an obstacle, the following sequence is set in motion: “go straight – turn a bit – go straight”. This script includes some environmental feature or prop such as “obstacles”, which serves to initiate the robot’s routine. Mali describes such a sequence: “He’s trying to move between the barriers, so he succeeds in getting past them. He’s going, going, going, he has an obstacle so he turns and goes to the other side; he has an obstacle, so he turns and goes to the other side. Then he doesn’t have an obstacle.” We have seen the prevalence of scripts in the children’s spontaneous descriptions of the more complex robot behaviors. Scripts serve as the primary frame for making sense of the robot’s actions, congruent with developmental studies of event knowledge (Flavell et al. 1993).

However, the children in our study did not stop at scripts. When the robot was governed by few enough rules, the children offered atemporal rule-based descriptions. For example, Ron describes the robot as it moves from one black square to another in terms of a rule: “When it’s on the white, it immediately turns to the black.” We propose that the invariance within the embedded scripts promotes a search for greater generality in the robot’s interactions with its environment, which can be captured in rules.

Simple rules underlie the robot’s emergent behaviors. The robot’s temporally ordered actions may not provide a lever into understanding such invariant rules. However, structuring the succession of actions in space, rather than only through time, affords such a connection. The scripts provide the first step towards coupling environmental conditions and robot actions, as specific spatial conditions trigger repeating action sequences. This shifts the observer from noticing only actions to an emerging focus on conditions. The awareness resolves into rules when the conditions are fully incorporated into the representation. When similar importance is attributed to environmental conditions and robot actions, one may form an array of co-varying data. The co-occurrence of quickly changing environmental conditions and robot actions provides a database of correlated evidence, a substrate from which the induction of atemporal rules is made possible (Sobel et al. 2004; Shanks 1995; Cheng 1997; Gentner and Medina 1998).
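
This kind of induction can be sketched as simple co-variation counting (our illustration, not a model from the cited literature): paired condition-action observations accumulate into a contingency table, and the dominant pairing for each condition is read off as a candidate rule.

```python
from collections import Counter

# Paired observations of (environmental condition, robot action) over time,
# e.g., as a child watches the robot cross a checkered landscape.
observations = [
    ("black", "flash"), ("white", "drive"), ("black", "flash"),
    ("white", "drive"), ("black", "flash"), ("white", "drive"),
]

counts = Counter(observations)           # contingency table of co-occurrences
for condition in sorted({c for c, _ in observations}):
    # the most frequent action under each condition becomes a candidate rule
    action = max((a for c, a in counts if c == condition),
                 key=lambda a: counts[(condition, a)])
    print(f"if {condition} then {action}")  # if black then flash / if white then drive
```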

At an additional (and complementary) level, we have addressed in this study the issue of an adult’s support in children’s grappling with the robot’s emergent behaviors. We compared the children’s spontaneous explanations of the robot’s behavior with those provided with additional probing, which helped them notice and differentiate among significant features in the robot’s actions and environment. We have seen that the children’s spontaneous articulations were framed mainly by scripts, and that their rule-base configuration included no more than one pair of condition-action couples. However, when supported by an adult, the children turned their attention to additional pertinent features in the environment or in the robot’s behaviors (encoding, in terms of Siegler and Chen 1998). They then formed associations between them in the form of rules, close to the actual complexity of the tasks. Thus, adult intervention was associated with a shift to higher levels of generality in the children’s knowledge representations and to a more complex rule-base configuration. These findings are consonant with Vygotsky’s (1986) “Zone of Proximal Development”, where the child can perform at higher capabilities with appropriate guidance and tools.

We have thus delineated the children’s process of abstracting rules from an adapting robot’s behaviors, a process marked by increased generalization, a shift from temporal to atemporal constructs and decentering from the robot to include its environment. Through this process, their representations shift from episodes to scripts to rules. This shift is accelerated with adult support in noticing the relevant features.

On emergent behavior and “concrete-abstractions”

Emergent behavior in complex systems presents a challenge to learners’ reasoning (Chi 2005; Wilensky and Resnick 1999; Jacobson 2001; Hmelo-Silver and Pfeffer 2004). While a single robot may not be regarded as a complex system per se (Bar-Yam 1997), its behavior is emergent. For a mobile robot, the underlying rules, the changing landscape, and the particular perceptions and actions of the robot itself all co-determine an emergent behavior, which cannot be reduced to its component rules. In the process of discovering these rules, the children need to disentangle the interactions between multiple components. This task places heavy demands upon a reasoner, especially a young and inexperienced one. What supports the children’s grappling with this challenge? We underscore three components that play a central role in the children’s learning: function, mechanism, and concrete-abstractions.

While emergent behaviors challenge our reasoning, they are coherent. More specifically for an adaptive robot, they are coherent in their display of a consistent overall behavior, e.g., avoiding obstacles or guarding the perimeter of an island. Alternatively, this consistent behavior can be viewed as the function of the robot. In adults’ categorization of artifacts, function is the predominant feature, overriding features such as structure or appearance (Barton and Komatsu 1989; Keil 1989). This bias towards function has been explored in several developmental studies (Gentner 1978; Keil 1989; Kemler-Nelson et al. 1995), showing it is early to develop. In this study, we have seen the children search for a meaningful subsuming function in their descriptions of the robot’s actions. For example, when Naomi is confused by a particular sequence of robot actions, she peppers her detailed descriptions of what the robot is doing with expressions such as “and then again”... “and again”, culminating in “all the time he’s going...”. While the emergent behavior is not clear, Naomi assumes some general function is there to be found. This assumption pushes her beyond the robot’s moment-to-moment actions in search of a common encompassing function: “He’s all the time looking at the white and he doesn’t want to see the rug.”

Furthermore, an invariant mechanism is understood to underlie the robot’s behavior. In this study, the children’s engagement in programming such artifacts, creating such decision-making mechanisms, highlights this invariance. Piaget and Inhelder’s (1972; see also Niazzi and Gopnik 2003) developmental study of children’s explanations of artifacts, such as bicycles, found that causal mechanistic explanations are early to emerge, including unambiguous causal sequences and mechanistically-relevant parts. Assuming such constancy within the robot’s workings imparts structure to an analysis of the robot’s emergent behavior.

Thus, two invariants structure the children’s reasoning about the robot: functions, which lend coherence to the inspected system (top-down), and mechanisms that partially explain the robot’s behavior (bottom-up).

In the children’s exploration of the robot’s behaviors, an additional important factor supports their reasoning: the concreteness of the abstract rules. The abstract rules that make up the robot’s decision-making faculties are embedded in concrete actions that take place in a physical environment. This material robot can be observed, touched, manipulated and re-programmed. The abstract rule structure is embedded in a concrete object in a way that supports the children’s playful inquiry and creative design. Specifically, for the young children in this study, this factor scaffolds constructing the mappings between the abstract rule structure and the observed emergent behaviors (Gentner and Medina 1998). We have seen the children spontaneously abstract rules for the simpler tasks, and do so with support for the more complex tasks. This runs contrary to studies that have shown young children’s difficulties in forming abstractions (Klahr et al. 1993; Schauble 1990; Kuhn 1989). It is congruent with studies that have demonstrated children’s capabilities in inferring rules from the outcomes of change in physical devices (Frye et al. 1996; Siegler and Chen 1998; Sobel et al. 2004) or in articulating a rule while programming a computer game (Hoyles et al. 2001). Going beyond the research to date, which has explored how children relate prior causes and final outcomes, we have seen the children abstract multiple, concurrent, atemporal rules relating to processes of change in emergent phenomena. This challenge requires the coordination of several transient components: local environmental conditions and the robot’s actions. Thus, the robot system serves the child as a concrete environment for the exploration and construction of abstract concepts and schemas. Interplay is generated between this “abstractions-embedded-concrete-agent” and the cognitive abstractions generated by the child. This is the realm of thinking processes we refer to as the realm of “concrete-abstractions”, in which recurring cycles intertwining the symbolic and the concrete are exercised by the child while abstracting schemas for understanding the robot’s behavior.

To summarize the discussion, we have described the children’s path in making sense of emergent robot behaviors. The robot’s overall function serves as an organizing framework. The children’s involvement in designing and programming such artifacts underscores an invariant mechanism. The embeddedness of the abstract rules in concrete objects and events together with adult scaffolding support the children’s reasoning. Along this path, the children’s constructs evolve from episodes to scripts to rules, increasing in generality and complexity, shifting from temporal to atemporal descriptions and decentering from the robot to include its environment.

Educational implications

This study is an initial probe into young children’s understanding of controlled adaptive systems that display emergent behaviors. While it deepens our understanding of young children’s evolving knowledge of autonomous artificial behaviors, it is limited by its small sample and its disconnect from classroom situations. Further research is necessary to broaden and extend this small sample. We are currently expanding the scope of this research from individual children to application in a regular classroom. Based on the results of this study, a new operational version of the programming environment has been completed, and the progression of tasks has been reformulated for classroom implementation. A team of early education teachers underwent training in the conceptual approach, the tools and the pedagogical materials, and is currently implementing the model in a regular early education setting. We expect to report on the classroom implementation stage in the near future.