
1 Introduction

Researchers have been developing robots that interact with children, e.g., for educational and health purposes. Programmable and non-programmable educational robots have been used to support child learning. These robots typically encourage children to interact with screen-based computer devices involving mostly visual and auditory information-gathering scenarios (e.g., desktop/portable computers) and user interfaces based on hand-eye coordination skills (e.g., keyboard, mouse, multi-touch interfaces). These interfaces are used, e.g., to program the physical actions/behaviors performed by the robots - programming autonomous control through source code [16]. Programmable and non-programmable educational robots may also communicate with children through speech and gestures [12].

For the aforementioned reasons, programmable and non-programmable educational robots are linked to sedentary behavior in children - in most cases encouraging sedentary interaction formats based on manipulative actions.

Sedentary behavior is characterized by diminished or absent physical activity and is one of the main factors negatively influencing children’s health. It correlates with obesity, type 2 diabetes, ADHD, anxiety, depression, mobility and postural problems, breathing difficulties, and sleep and digestive disorders, and it has also been linked to increased cancer risk and premature death [18]. Children’s current levels of energy expenditure remain insufficient to benefit their health [4]. Conversely, physical activity has a variety of benefits for children’s physical and mental health [18], yet health robots that promote children’s physical and mental health - increases in physical activity - are still scarce [7, 10].

In addition, children interact with educational and health robots predominantly in indoor environments. It has also been referenced that natural environments may optimize children’s physical and mental health [11].

We propose a new format of robotic devices to increase children’s physical activity levels, encourage learning and contact with natural environments: Biosymtic (Biosymbiotic Robotic) devices (BSDs).

In the present work we describe the main computational models included in BSDs that aim to potentiate children’s physical and mental health. The next section reviews the literature associated with this work. We then describe a computational model demonstrating how to increase children’s physical activity levels through automatic feedback control mechanisms - reporting an experimental study. Finally, we present a theoretical cognitive model describing how to program robotic devices through whole-body interaction in natural environments, and why this model may benefit both children’s development and Artificial Intelligence.

2 Related Work

2.1 Health Robots and Children

The ALIZ-E project employs a robotic humanoid system - NAO, a semi-autonomous robot - to help children with diabetes in hospital environments. NAO helps children learn about and manage their metabolic condition. It is a teleoperated robot: the operator communicates verbally with the child as if they were the robot. NAO incites children to be physically active by suggesting they imitate its dance moves. One study demonstrated that five diabetic children (aged 8 to 12) increased their knowledge about diabetes by playing diabetes-quiz-based games with the robot [7].

A non-mobile robot - “Keepon robot” - was developed to promote healthy physical activity and dietary behaviors in adolescents. The user wears a wristband device measuring physical activity levels throughout the day - number of steps. The obtained data is processed by the robot system, which persuades the adolescent to increase his/her daily physical activity levels - by modeling the user’s motivational state. The user communicates with the robot directly or via a phone application (speech or text input). In turn, the robot produces speech as output [10].

2.2 Physical Activity and Natural Environments Benefit Children’s Physical and Mental Health

Physical activity benefits human development in multiple ways. It benefits the development of the sensorimotor system; promotes physical fitness; reduces anxiety and depression; boosts self-esteem; and it is associated with the reduction of ADHD symptoms [18].

Moderate-to-vigorous physical activity (MVPA) benefits children’s physical and mental health, optimizing the development of the cardiovascular system and contributing to a healthy body weight. The World Health Organization recommends at least 60 min of daily MVPA for children aged 5 to 17 years [18].

In addition, MVPA seems to be the gold standard to optimize cognitive function (including academic achievement) and boost cognitive structure in children [6].

Although still scarce, recent studies have demonstrated that developing gross motor skills (which demand the use of large muscle groups) improves cognitive function and optimizes academic achievement [17]. Such experiences go beyond the manipulative actions (demanding small muscle groups) used to interact with computing devices in most educational robotic systems.

Other recent studies have also demonstrated that whole-body motion computing devices tend to benefit children’s cognitive function (e.g., attention and memory) compared to sedentary computing devices [9].

It has been reported that natural environments optimize children’s physical and mental health, e.g., by increasing alertness levels (optimizing cognitive function) and by promoting the synthesis of vitamin D (prevention of autoimmune diseases) [11].

For the aforementioned reasons, it is urgent to include physical activity, in natural environments, in children’s daily lives so as to promote healthy lifestyles.

The central goal of a BSD is to potentiate children’s physical and mental health (including cognitive development/performance), while connecting them with challenging natural environments offering multiple possibilities for sensory stimulation and increasing physical and mental stress to the organism.

3 Computational Models Associated with Biosymtic Devices

A BSD is characterized as an artificial system displaying automatic control functions and autonomous behaviors.

A BSD displays automatic control functions while (two modes):

  1. Directly connected to a human organism (human-integrated automatic control);

  2. Disconnected from a human organism (working as an autonomous robot).

A BSD is able to sense and act in the environment, demonstrating adaptive functions in both modes 1 and 2.

3.1 Mode 1 - Automatic Control Functions in a Biosymtic Device While Directly Connected to a Human Organism

Computational models allow robots to control a variety of sensors and actuators in real-time in order to act in the environment.

In mode 1, a BSD encourages a child to be physically active in natural environments through computational models - robot software systems - based on automatic feedback control mechanisms. These mechanisms encourage the child to achieve a specific physiological state associated with physical and mental health - e.g., MVPA. The child’s physiological data is always used to elicit an overt response: physical action.

We developed a wheeled mobile BSD, named “Cratus”, to test our proposals. “Cratus” mimics a Roman inventor. The physical structure of this device consists of a head connected to a torso, with a three-wheel mechanism on its base. The system includes a touch-based computer, on the back of the torso, displaying visual (virtual) information. Auditory output (e.g., music and verbal speech) is produced by a sound speaker integrated in the apparatus’ head (resembling an eye).

System inputs to control visual and audio information (e.g., put a game avatar into motion; robot’s speech) are made via whole-body physical action, e.g., the child may push, pull, rotate the apparatus while running on the physical terrain. The child may also skate while using this system - feet on top of the three-wheel mechanism. “Cratus” includes wireless motion sensors to capture motion data (e.g., accelerometer, gyroscope and a tilt sensor on the device’s torso and a rotational speed sensor in one of its wheels). Moving “Cratus” on the terrain is translated as virtual locomotion, e.g., of a game avatar. The system also captures the child’s physiological data - e.g., communicating wirelessly with a heart rate (HR) biosensor on the child’s chest. The device also includes environmental sensors (light, humidity and temperature) and servomotors connected to encoders (actuators integrated in its wheels) (see Fig. 1).

Fig. 1. Biosymtic device “Cratus”.

“Cratus” may include a variety of robot software systems, e.g., aiming to improve children’s cardiorespiratory performance and/or cognitive performance. These software systems include automatic feedback control mechanisms.

For example, we developed a software application - the game “Cratus. The Space Traveller” - whose goal is to optimize children’s cardiorespiratory performance. In this game narrative, “Cratus” is an old Roman inventor who reinvented his body in order to become a time traveller. The “wheeled teleportation system” is one of his great inventions. This system allows “Cratus” to travel the Universe at the speed of light. “Cratus” begins a journey throughout the Universe to discover the architectural epochs of planet Earth (e.g., Roman Colosseum - first century, Eiffel Tower - nineteenth century). The game’s main goal is to help “Cratus” control his “wheeled teleportation system” while exploring a variety of 3D game scenarios - displayed on the touch-based display. The child is encouraged to move as fast as possible in each game scenario in order to score - completing racetracks in the shortest time possible.

In this program, “Cratus” encourages a child to perform MVPA - the automatic feedback control mechanism incites the child to stay between 50% and 85% of maximum HR values (recommended by the American Heart Association) [1].

The inertial wheel mechanism actuator of the apparatus - on two of the wheels - maintains the desired interval of HR values. The inertial mechanism consists of an electromechanical brake system - encoders placed on servomotors - that controls the rotational speed of the apparatus. For instance, if the child presents an average HR value <134.5 beats per minute (<50%, sedentary physical activity levels), while interacting with the device (e.g., while pushing the apparatus during running), the system increases the inertial forces applied to the inertial mechanism (brake system locks the wheels up to a certain degree, e.g., 45%) - the child then needs to move faster to reach the desired interval. If the child exhibits an average HR value >175 beats per minute (>85%, very vigorous physical activity levels), the system decreases the inertial forces applied to the inertial mechanism (brake system unlocks the wheels up to a certain degree, e.g., 36%).

The system also includes a verbal actuator that produces audio output. This actuator follows the principles established for the “inertial wheel mechanism actuator”. If, for instance, the child presents an average HR value <134.5 beats per minute (<50%), the verbal actuator emits specific verbal feedback, e.g., “Run faster!” or “Give me more power!”. If the child presents an average HR value >175 beats per minute (>85%), the verbal actuator emits specific verbal feedback, e.g., “Slow down!” or “Move slower!”. The reference interval of HR values is calculated and adjusted every minute - according to average HR values per minute.
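The control logic just described can be sketched as a simple update rule. In this sketch the HR thresholds (134.5 and 175 beats per minute) and the example brake percentages come from the text above, while the function name, the 5% adjustment step, and the 0–45% brake range are assumptions - the paper does not specify the controller’s internals.

```python
# Sketch of the automatic feedback control mechanism (assumed step size).
LOW_HR = 134.5   # 50% of maximum HR - below this: sedentary levels
HIGH_HR = 175.0  # 85% of maximum HR - above this: very vigorous levels

def adjust_actuators(avg_hr, brake_level):
    """Given the average HR of the last minute and the current brake
    level (0-45%), return the new brake level and a verbal cue."""
    if avg_hr < LOW_HR:
        # Below the target zone: lock the wheels further so the child
        # must push harder, and encourage her verbally.
        brake_level = min(brake_level + 5, 45)
        cue = "Run faster!"
    elif avg_hr > HIGH_HR:
        # Above the target zone: release the brake and ask the child
        # to slow down.
        brake_level = max(brake_level - 5, 0)
        cue = "Slow down!"
    else:
        cue = None  # within the 50-85% target zone: no change
    return brake_level, cue
```

As the text describes, the reference interval would be re-evaluated every minute, with each call receiving the latest per-minute HR average.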

“Cratus” demonstrates adaptive behavior by adapting to the performance of each child. For instance, for a similar task, child “A” may need to be exposed to increased inertial forces to achieve the desired HR values when compared to child “B”.

Additionally, real-time HR/motion data can be visualized on the software: a bar graph that changes color according to the child’s HR values; a slider (disk) displaying the angles and the rotation velocity in three-dimensional space. The system displays angles and rotation velocity to facilitate the child’s performance during the game. This information is also accompanied by verbal instructions - e.g., “Straight ahead!”; “Turn right!”; “Rotate 45 degrees to the right!”.

We conducted a study [8] to evaluate the effects of the BSD “Cratus” (cardiorespiratory performance program) on physical activity levels of a group of 20 healthy children aged 6 to 8 years - evaluation via the SenseWear® Arm Band. In the experimental session, children were evaluated once - two children interacting with the “Cratus” device (one device each) in a natural forested landscape - performing a single 12-min session. Children were told to interact freely with “Cratus” in the natural forested landscape - playing the game “Cratus. The Space Traveller”. Children’s expectations regarding the device - motivation to play - were also collected using the Smileyometer questionnaire - individual evaluation in a classroom context.

Results showed that interacting with “Cratus”, in a natural environment, instilled vigorous physical activity levels in children - 8.1 ± 1.4 METs (metabolic intensity). In addition, children achieved the reference (desired) interval of HR values (moderate-intense physical activity) - between 50% and 85% of maximum HR values (recommended by the American Heart Association), instilled by the automatic feedback control mechanism. Furthermore, children were highly motivated to interact with the device in the natural environment.

The latter results demonstrate that the automatic feedback control mechanism included in “Cratus” served the purpose of maintaining children’s HR values at adequate exercise intensities. Hence, robotic systems endowed with automatic feedback control mechanisms - establishing a connection with the child’s physiological state in real-time (HR) - may be advantageous in promoting adequate exercise intensities and thus favoring health in children.

3.2 Mode 2 - Autonomous Functions in a Biosymtic Device - Bio-Kinesthetic Programming

In mode 2, a BSD builds autonomous functions by interacting with a human. We termed this process Bio-Kinesthetic Programming (BKP). BKP is an approach to Robot Programming by Demonstration, aiming to help a robot build autonomous functions via human guidance techniques, such as whole-body direct physical control and physiological states transfer, while physically and mentally benefiting the human organism. Hence, a BSD and a human form a symbiotic connection.

The child controls the BSD, in the environment, through whole-body physical action, programming autonomous functions - e.g., the child’s locomotion works as an example to be replicated by the device during autonomous navigation. The child’s physiological states, while controlling the robot, are also used to program autonomous functions - e.g., the robot learns to manage its energy sources according to the child’s energy metabolism.

Kinesthetic teaching has been used to help robots learn motor skills associated with a certain goal, e.g., guiding the motion of a robot’s arms through direct manual control in order to teach it how to manipulate objects [5]. However, to our knowledge, there are no references to guidance techniques making use of physiological states transfer between a human - while playing in the natural world - and a robotic system.

Let us take our wheeled mobile BSD “Cratus” by way of example.

After activating the “learning program” (robot software system to build autonomous functions in the BSD) the child is encouraged to control “Cratus”, in the physical environment, through whole-body physical action. For example, the child may push, pull, rotate and throw the apparatus while she walks or runs on the physical terrain. In the “learning program” the child interacts freely with the BSD, e.g., may select a variety of avatars/scenarios included in the software (or even previously build her own avatars/scenarios) to freely explore the physical world.

The child may also define the inertial forces applied to her avatar by controlling software actuators - e.g., the “inertial wheel mechanism” changing effort intensity. While establishing physical interaction with the BSD, the child may also teach it physical actions/behaviors to be autonomously executed later.

The device includes motion sensors to capture motion data, e.g., accelerometer, gyroscope and a tilt sensor on the apparatus’ torso; a rotational speed sensor in one of its wheels. The system captures the child’s physiological data - e.g., communicating wirelessly with a HR biosensor on the child’s chest. “Cratus” also includes environmental sensors - light, humidity and temperature; four infrared sensors (IR) on its base to detect objects in close proximity; a microphone in its head to record sound; and servomotors in its wheels, which allow for autonomous motion (motion in a straight line and rotation).

The child may visualize motion, physiological and environmental data on the touch-based display: motion intensity scale; angles/velocity of rotations; a virtual heart that changes color according to the child’s HR values; light, humidity and temperature scales. The child is encouraged to explore her own biological processes on the software, together with environmental information during the teaching experience.

The “learning program” on BSDs involves a memory-based learning approach grounded in perceptual, motor and physiological states and verbal commands. This approach is inspired by the multimodal simulation system developed by Barsalou (Embodied Cognition) to explain the origins of mental representations in different animal species [3]. Accordingly, mental representations are generated through simulation processes in the brain (brain computations) - reactivation of neural circuits that were active during previous perceptual, motor and introspective experiences. Multimodal representations of perceptual, motor and introspective states - perceptual symbols - are stored in the brain’s memory system and later recalled, supporting memory and higher-order cognitive abilities such as higher-order perception and conceptual knowledge.

A BSD may learn a variety of tasks via multimodal simulation processes: collecting (recording) motor, physiological and environmental data, while interacting with the child, to be later recalled/used during autonomous functions - gathering information about events in the environment through human guidance. Motor and physiological data - captured via rotational speed sensors, gyroscopes, tilt, IR and HR sensors - represent the sensory state of the BSD. Environmental data - light, humidity and temperature - represents information external to the system.

The child also ascribes verbal labels to the learning experiences - e.g., “stop”, “move fast”, “move slow”, “avoid obstacles”, “touch-object”, “search-light”, “rotate”, “dance” - to activate multimodal simulation processes on the device.

For example, after activating the “learning program”, on the software, the BSD immediately starts recording sensory data. The child interacts with the device freely in the physical environment - deciding what physical action/behavior to teach the device. When the child determines that the learning activity has ended, she activates a “verbal learning” function on the software - ascribing a verbal label to the previous experience via verbal input to the system - microphone recorded. The software associates this verbal label to the metrics obtained during the interaction (motion, physiological, environmental data) - creating a memory representation of the experience. Later, after a few learning experiences, the child may reactivate this memory via a speech recognition system, included in the software - activating the function “autonomous behavior” in the software and voicing the verbal label to the system. At this moment, the system should be able to reactivate the memory associated with the verbal label (accessing motion, physiological, environmental data associated with the learning experience - multimodal simulation) and autonomously replicate the behavior - also demonstrating adaptive behavior in different environmental settings.

Let’s imagine that the child decides to teach the concept of “move fast” to “Cratus”.

The child activates the “learning program” and starts pushing the device as fast as possible in the physical world. To end the learning process the child activates the “verbal learning” function in the software. At this moment the software stops recording data and verbal inputs may be given (a verbal label) - “move fast”. This behavior may be later reactivated, in the “autonomous behavior” mode, by voicing the same verbal command - “move fast”.
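The record-label-replay cycle just illustrated might be sketched as below. The class and method names and the dictionary-based memory store are hypothetical illustrations - the paper does not specify the software’s internal representation.

```python
# Hypothetical sketch of the "learning program" memory cycle.
class LearningProgram:
    def __init__(self):
        self.memory = {}       # verbal label -> recorded multimodal data
        self.recording = None  # current demonstration, or None

    def start_learning(self):
        # "Learning program" activated: begin recording sensory data.
        self.recording = []

    def sense(self, wheel_speed, heart_rate, tilt, light):
        # Called continuously while the child moves the device.
        if self.recording is not None:
            self.recording.append({"wheel_speed": wheel_speed,
                                   "heart_rate": heart_rate,
                                   "tilt": tilt, "light": light})

    def verbal_learning(self, label):
        # "Verbal learning" function: stop recording and bind the
        # verbal label to the multimodal memory of the experience.
        self.memory[label] = self.recording
        self.recording = None

    def autonomous_behavior(self, label):
        # Reactivate the memory associated with the verbal label
        # (multimodal simulation); empty if the label is unknown.
        return self.memory.get(label, [])
```

For the “move fast” example, the child would trigger `start_learning()`, push the device while `sense()` logs data, and then trigger `verbal_learning("move fast")`; voicing the label later retrieves the recorded experience via `autonomous_behavior("move fast")`.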

During the learning experience, the sensory states of the BSD and information external to it are continuously recorded. The child may enable or disable inputs made to the system by activating or deactivating communication between the sensors and the software.

Data acquired during the learning experience - demonstration phase - represents perceptual symbols to be stored by the software. These perceptual symbols are reactivated during the “autonomous behavior” function, according to specific verbal labels determined by the child. This learning approach allows the BSD not only to create a multimodal memory of a verbal concept - associated with motor and physiological states and environmental information - but also to act, in the environment, according to the earlier experiences. For instance, in the previous example, the verbal label “move fast” is associated with an increase in the rotational speed of the wheels of the apparatus. The “learning program” records the rotational speed of the wheels, from the start of the activity until the “verbal learning” function is activated. In this case, the device records motion data to later manage its locomotion functions, in the environment, autonomously.

The BSD also makes use of the child’s physiological data – e.g., HR data – to perform autonomous behavior in the environment. In the previous example, the verbal label “move fast” is associated with an increase in the child’s HR. Again, the “learning program” records the child’s HR values from the start of the activity until the “verbal learning” function is activated. In addition, the device captures the physical terrain’s gradient through the tilt sensor.

The child’s physiological data is recorded by the device to later manage its energy sources autonomously while in the environment. The computational model makes an analogy between the human energy metabolism and the energy functions in the BSD. For instance, the verbal label “move fast” is associated with an increase in HR. As the child moves across different terrain gradients, she will show variations in HR - e.g., a slope will increase HR while she tries to push the device as fast as possible. The BSD records and associates data from the child’s HR and terrain gradient, obtained during the learning experiences, to manage its energy functions - e.g., providing more power to the servo motors when on a slope in order to maintain speed.
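As a rough illustration of this energy-metabolism analogy, the sketch below maps recorded (terrain gradient, heart rate) pairs to a motor-power setting. The gradient window, the linear HR-to-power mapping, and the default value are all assumptions introduced for illustration.

```python
# Assumed mapping from learned (gradient, HR) pairs to motor power.
def motor_power(gradient, samples):
    """Estimate a motor power setting (0-100%) for a terrain gradient,
    from recorded (gradient_degrees, heart_rate_bpm) samples."""
    # Keep samples recorded at similar gradients (within 2 degrees).
    near = [hr for g, hr in samples if abs(g - gradient) <= 2.0]
    if not near:
        return 50.0  # default power when no similar experience exists
    avg_hr = sum(near) / len(near)
    # Map the HR band 60-200 bpm linearly onto 0-100% power, so a
    # steeper slope (higher recorded HR) yields more servo power.
    return max(0.0, min(100.0, (avg_hr - 60.0) / 140.0 * 100.0))
```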

The BSD learns new skills (or builds skill models) by computing the average and variance of continuous motion, physiological and environmental data from multiple learning experiences - in order to apply them to new contexts. To that end, the software makes use of a Gaussian Mixture Model, extracting the statistical regularities and the variability across multiple interactions. For instance, if after a few demonstrations, in different environmental conditions, the average rotational speed of the wheels of the apparatus is “X revolutions/time”, then - when the child activates the “move fast” behavior in the “autonomous behavior” function - the apparatus tries to replicate this same speed (“X revolutions/time”) in different environmental conditions. Variations in environmental conditions - e.g., terrain gradient - and in the child’s HR (for different terrain gradients) are taken into account when replicating the “X revolutions/time” behavior in new contexts. If the Biosymtic device is not able to perform the learned behavior in different environmental conditions, then the child needs to provide the system with more learning experiences.
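A minimal sketch of this skill-model step follows. The paper names a Gaussian Mixture Model; the single-Gaussian reduction below (pooled mean and variance across demonstrations, with a k-sigma acceptance band) is a simplification, and the tolerance parameter k is an assumption.

```python
# Simplified single-Gaussian skill model over wheel-speed samples.
def fit_skill_model(demonstrations):
    """demonstrations: list of demonstrations, each a list of wheel-speed
    samples. Returns the pooled (mean, variance) - the target speed and
    its tolerated variability across learning experiences."""
    samples = [s for demo in demonstrations for s in demo]
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

def within_skill(speed, mean, var, k=2.0):
    # Accept a replication attempt within k standard deviations of the
    # learned mean; otherwise more demonstrations are needed.
    return abs(speed - mean) <= k * var ** 0.5
```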

While the child guides the BSD through the learning activity, environmental data may also be recorded - light, humidity and temperature. Since the learning process on a BSD results from multiple learning experiences, we may expect the BSD to create associations between verbal concepts and not only motor and physiological data but also environmental data. In some cases, environmental data may be more important than other types of data for the BSD to learn how to act in an environment. For instance, the “move fast” behavior may depend more on motor and physiological data than on environmental data, because the learning process results from finding statistical regularities and variability across multiple interactions. On the other hand, if the child tries to teach behaviors such as “search light” or “search shadow”, light data becomes essential for the device. For example, the child may start the “learning program” with the light sensor and the motion sensors activated and drive the device multiple times into a shaded area and back (Fig. 2).

Fig. 2. Bio-Kinesthetic Programming Cognitive Model. The Biosymtic device captures continuous motion, physiological and environmental data during the learning experience (demonstration phase). Data is used to create a multimodal representation of the learning experience (recorded in the robot software system) - associated with a verbal label.

Bio-Kinesthetic Programming: Benefits for Children’s Development and Artificial Intelligence

While programming autonomous behaviors in a BSD, the child is encouraged to be physically active in natural environments that offer a variety of benefits for physical and mental health (including cognitive development and performance). Moreover, while giving commands to a BSD she may engage in social skills practice, e.g., through collaborative play. For example, she may decide to challenge another child to a race/football contest between BSDs - verbally controlling the devices.

One of the goals of the Bio-Kinesthetic teaching approach to robot learning is to have robots acquire autonomous functional features that closely approximate those of biological organisms; however, autonomy is always subject to previously given commands - according to human will.

Our learning model allows BSDs to develop adaptive functions based on memory processes - multimodal simulation. The device is able to identify human verbal commands - concepts - and perform the corresponding actions in the physical world. Concepts in a BSD become associated with motor and physiological states - transferred from the human organism to the device - in combination with sensory information from the physical environment (e.g., light, temperature). In fact, BSDs learn how to manage their energy resources by mirroring human energy functions in order to optimize their actions in the physical world. Hence, we may say that BSDs are endowed with cognitive skills, albeit under human control.

The central goal of the Artificial Intelligence (AI) field has been to assign biological functions to artificial machines [2, 13,14,15]. Roboticists have been working on artificial machines aiming to emulate the physical and cognitive features of a variety of biological organisms (e.g., humans, insects, reptiles) [2, 13]. However, electromechanical systems differ from biological organisms in their physical properties. A biological organism is characterized as a contiguous living system consisting of a single or multiple cells: made of organic matter. Life involves chemical reactions to sustain it, e.g., energy production. An electromechanical system integrates mechanical moving parts to carry out electrical operations: made of silicon and metals [15].

According to von Neumann [15:70], “the natural materials have some sort of mechanical stability and are well balanced with respect to mechanical properties, electrical properties, and reliability requirements. Our artificial systems are patchworks in which we achieve desirable electrical traits at the price of mechanically unsound things”.

Meyer and Guillot [13:1415] emphasize that billions of years of evolution have made biological functional mechanisms more efficient (“sensing, actuating and control mechanisms”) and that it has been difficult for artificial devices to cope with that evolutionary background.

Embodied Cognition researchers claim that cognition, in biological organisms, emerges according to their structural features and associated action possibilities - e.g., a human will perceive a chair differently from a crocodile because these species have different body structures (a chair suggests “seating” for most humans, a crocodile will probably not notice it) [3]. It has also been emphasized that different kinds of biological organisms present particular physiological features that determine, e.g., how those organisms sense and create knowledge about the physical world.

The previous arguments lead us to the idea that structural as well as physiological features determine how cognition emerges in biological organisms. Given that electromechanical systems differ from biological organisms in their physical and functional properties, and that biological organisms present more efficient “sensing, actuating and control mechanisms” than current artificial devices, artificial machines may benefit from a learning approach involving not only motor but also physiological learning models. Such an approach may allow artificial machines to demonstrate functional features in close proximity to those of biological organisms.

The BSD “Cratus” has a “body structure” and energy mechanism different from those of a human. Hence, its perception of the physical world is expected to emerge differently. While operating the device in the physical environment through physical action, the child teaches the device to generate percepts from the physical world according to the latter’s “body structure” - i.e., a torso with no arms connected to a wheeled mechanism. The child also teaches the device to manage its energy functions more efficiently in the environment - emulating the human energy metabolism.

The BSD may also use other types of human physiological response - e.g., emotional response - to learn how to manage its own functional mechanisms. Hence, the BSD learns to interpret the functional processes of the human body in order to manage its own functional processes autonomously. Because the learning process results from several learning experiences, it may be characterized as a developmental process.

Researchers in Developmental Robotics and AI have suggested that in order for artificial machines to demonstrate cognitive abilities they must be subject to several learning experiences [14].

We are currently evaluating/testing a computational model allowing children to program Biosymtic devices through Bio-Kinesthetic teaching techniques. This research study will be presented in future work.

4 Conclusions

Whole-body motion robotic systems endowed with automatic feedback control mechanisms - establishing a connection with the child’s heart rate in real-time - seem to be advantageous in promoting adequate exercise intensities - moderate-intense physical activity. Researchers showed that moderate-intense physical activity benefits children’s physical and mental health - including cognitive development/performance.

Programmable robots are linked to sedentary behavior in children - encouraging sedentary interaction formats to program autonomous control through source code. Biosymtic Robotic devices are an alternative to programmable robots - encouraging whole-body motion interaction formats, in natural environments, to program autonomous control. Research showed that whole-body motion and natural environments benefit children’s physical and mental health.

The present work aims to draw the attention of the Child-Robot Interaction community to the fact that children’s development may benefit from whole-body interaction in natural environments, and that robot learning - artificial intelligence - may profit from a symbiotic approach with the human organism.