1 Introduction

The use of robotic technology in healthcare ranges from robots that physically intervene, e.g. in surgical procedures [13, 15, 25], to systems that engage the patient on a cognitive level. Computer vision can be used to support diagnosis [20, 34], artificial intelligence is used to support decisions [10, 29], and robots are now considered in earnest for patient and elderly care [5, 38]. Robots are also considered for children with cognitive impairments (CWCI) [9, 14, 18, 21, 35], but results are mixed and large-scale studies are rare. Children diagnosed with a cognitive impairment tend to have difficulties with social communication and impaired social skills; for example, they may struggle to express appropriate emotions or fail to make appropriate eye contact [31]. One of the core deficits in cognitive function is a lack of attention skills [28].

Currently, there is still a need for more empirical evidence on the promise of social robotics for helping CWCI and improving their quality of life. Moreover, there are no standardized protocols or assessment tools for measuring the improvement of child behaviour in child–robot interaction, especially for CWCI. In this study, our research focuses on improving attention skills in child–robot interaction among CWCI. This study aims to address existing gaps in child–robot interaction research, as discussed by Ismail et al. [19]. The identified gaps addressed in this study relate to (1) the content of the interaction modules, which was designed based on the abilities of the robot, (2) the observation of multiple exposures to child–robot interaction instead of a single exposure, and (3) the potential of child–robot interaction to improve the social communication skills of children diagnosed with cognitive impairments.

In recent years, a number of commercial robot platforms, such as the SoftBank Robotics NAO [3, 33] and Pepper [32] robots, have been used for CWCI. There are also a number of bespoke robot platforms, such as FACE [26], PROBO [36], KASPAR [39] and CHARLIE [4], which were specifically designed to support children with difficulties in social communication and interaction skills. This study used LUCA (Fig. 1), a social robot inspired by the OPSORO platform [37]. The robot can display various interactive behaviours and is capable of expressing typical facial emotions, such as happy, sad, angry and surprised. The expressive facial features in the interaction modules are expected to encourage more attention during child–robot interactions.

Fig. 1 The LUCA robot in the experimental setting. The child and an adult carer are seated in front of the robot

In addition to addressing the previously mentioned research gaps, our analysis of attention in child–robot interaction rests on two quantitative measures: (1) task completion time (TCT), the time in seconds it takes the child to complete a series of tasks together with the robot, and (2) interaction duration (ID), the duration of the interaction between child and robot. In this article, we present findings from a study in which children with cognitive impairments interact with LUCA. Section 2 focuses on the design of the interaction modules between the child and the robot. Section 3 describes the demographics of the participants. The experimental framework, including the experimental setup, experimental flow and child–robot interaction duration, is described in Sect. 4. Finally, the results and the attention analysis of the child–robot interaction are discussed in Sects. 5 and 6, before we conclude and suggest possible future directions.

2 Design of Child–Robot Interaction Module

Child–robot interaction can be considered a sub-discipline of human–robot interaction. In this study, we focus specifically on child–robot interaction for therapeutic use. To this end, we designed five different modules for child–robot interaction based on the literature, as summarized in Table 1. The design used a ‘Wizard of Oz’ approach: the participants believed the robot was operating autonomously while in fact it was being controlled by a member of the research team [30]. In this study, 20 children diagnosed with cognitive impairments interacted with the LUCA robot, and the focus of this study is to report the analysis of their social interaction improvement, especially their attention skills as measured by their TCT and ID. Based on our observations, children respond differently and their answers are often unique. To help the reader, we include illustrative examples of dialogues between the robot and a child. As an example of child–robot conversation, we provide the dialogue between child number 8 (male, 9 years old, MILD group in the CCTT-2 cluster) and the robot in session 1. It is also important to note that the robot’s dialogue was kept consistent across all children to avoid bias in the child–robot interaction. Details of the interaction are discussed in the next section.

Table 1 Brief summary of child–robot interaction modules
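To make the ‘Wizard of Oz’ control concrete, the sketch below shows one way such a tele-operation console could be structured: the hidden operator maps short typed commands to robot behaviours while the child experiences the robot as autonomous. All names here (`wizard_loop`, `robot.speak`, `robot.set_expression`) are illustrative assumptions; the actual LUCA/OPSORO control software is not documented in this article.

```python
# Minimal Wizard-of-Oz console (a sketch; the real control stack differs).
# The operator, hidden behind a divider, types commands that are forwarded
# to the robot; the child perceives the robot as autonomous.

EXPRESSIONS = {"h": "happy", "s": "sad", "a": "angry",
               "f": "fear", "u": "surprise", "d": "disgust"}

def wizard_loop(robot):
    """Read operator commands until 'quit' and forward them to the robot."""
    while True:
        cmd = input("expression key, 'say <text>', or 'quit': ").strip()
        if cmd == "quit":
            break
        if cmd.startswith("say "):
            robot.speak(cmd[4:])              # play a pre-generated speech clip
        elif cmd in EXPRESSIONS:
            robot.set_expression(EXPRESSIONS[cmd])
        else:
            print("unknown command, ignored")
```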

2.1 Module 1: Introduction to Robot

In this module, the therapist/teacher introduced the LUCA robot to the child. The child was escorted from their classroom to the experiment room and asked to sit in front of the robot. This first module aimed to introduce the robot to the participant. This is necessary to break the ice between the child and robot [2, 18] and to ensure that the following interactions are not influenced by the child being unfamiliar with the robot or the study setting. The child was welcomed by LUCA using simple English and some low-valence non-verbal behaviour. The text-to-speech voice was generated using an online synthesizer.Footnote 1
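As a rough, hedged illustration of the approach (not the tooling actually used, which was a commercial online synthesizer), speech clips for such a scripted dialogue could be pre-generated with the open-source gTTS package and played back by the robot; the file names below are stand-ins.

```python
# Sketch: pre-generating speech clips with the open-source gTTS package.
# This is a stand-in for the commercial online synthesizer used in the study.
from gtts import gTTS

lines = {
    "greeting": "Hi, my name is LUCA. What is your name?",
    "thanks": "Thank you for being here today.",
}

for name, text in lines.items():
    # gTTS defaults to an American-English voice; save one MP3 per line.
    gTTS(text=text, lang="en").save(f"{name}.mp3")
```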

The voice used American English, which was deemed appropriate for the target population. Moreover, we opted for an expressive and transformable female voice (Alison). An example transcript of a typical dialogue between the robot and a child is given below:

  • ROBOT: Welcome to human–robot interaction program, Module 1 start, introduction to robot. Hi, my name is LUCA. What is your name?

    CHILD: “My name is ..” (We cannot disclose his name to protect his privacy and confidentiality.)

  • ROBOT: Hi there. Anyway, I am from Belgium. Nice to see you in Malaysia. How are you doing today?

    CHILD: “Good... good” (the child repeated the answer with a louder voice)

  • ROBOT: I’m good. I like your outfit today. How do I look?

    CHILD: “Good.”

  • ROBOT: Do you like my costume?

    CHILD: “Uh hu, yes”. (the child answered while nodding his head).

  • ROBOT: Thank you for being here today. And thank you for spending some time to meet me. Really appreciate your presence.

2.2 Module 2: Facial Expression Game

Module 2 of the child–robot interaction was designed as a facial expression game. In this session, the child was invited to guess the different facial expressions of the robot and, by doing so, engage in eye contact with the robot during the interaction. This module was designed to help CWCI improve their attention skills [1, 27]. The dependent variable in this module is the time taken by the child to complete the task. In this module, the researcher controlled the robot and selected a range of different facial expressions, such as happy, sad and angry. The children were invited by the robot to guess the expression and were allowed to try again if their initial answer was wrong; after three unsuccessful attempts, the correct answer was given by the robot. The children were also expected to mimic the expression of the robot while maintaining eye contact with it.
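This guess-and-retry protocol can be summarized in code. The sketch below follows the rules just described (guess, retry on a wrong answer, reveal after three attempts), reusing the hypothetical robot interface from the earlier Wizard-of-Oz sketch; `get_child_answer` stands in for the wizard transcribing the child’s spoken guess.

```python
import time

MAX_ATTEMPTS = 3  # the robot reveals the answer after three wrong guesses

def expression_trial(robot, target, get_child_answer):
    """Run one guessing trial and return its contribution to the TCT (seconds)."""
    robot.set_expression(target)
    robot.speak("So, what is my facial expression?")
    start = time.monotonic()
    for _ in range(MAX_ATTEMPTS):
        if get_child_answer() == target:      # wizard types the child's guess
            robot.speak("Correct, well done.")
            break
        robot.speak("Sorry, you are wrong. Try again.")
    else:                                     # no correct guess within 3 attempts
        robot.speak(f"The right answer is {target}.")
    return time.monotonic() - start
```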

Some examples of dialogues between the child and the robot in Module 2 are given below:

  • ROBOT: Now, we are in Module 2, facial expression game. So, today we will play together. Do you want to play?

    CHILD: “Yes”

  • ROBOT: Great. Let’s play, please. I am going to express some emotion. Your task is to guess my emotion. Do you want to guess my facial expression?

    CHILD: “Yes.. yes.”

  • ROBOT: Ok, I will make several facial expressions. Your task is to guess my emotion, based on my facial expression. Examples of the emotions are (1) happy, (2) fear, (3) angry, (4) sad, (5) surprise and (6) disgusting. These are the emotional expressions.

    CHILD: “Ok!”

  • ROBOT: So, what is my facial expression?

    CHILD: “Happy” (the child answered “happy” after he looked at the robot’s face)

  • ROBOT: Correct, well done. I am happy to see you here.

  • ROBOT: Sorry, you are wrong. Try again. What is my facial expression now? The hint is: joyful or sad? (This dialogue is spoken by the robot if the child answers wrongly. In this case, the child guessed correctly, so this dialogue was not applicable for this child.)

  • ROBOT: The right answer is happy. (The right answer is given by the robot after the child has made three attempts, in order to move on to the next facial expression. In this case, the child guessed correctly, so this dialogue was not applicable.)

Following this, similar dialogues continued for the other facial expressions: fear, sad, surprise, angry and disgust.

2.3 Module 3: Song with Facial Expression Game

In Module 3, a song was added to the facial expression game in order to encourage the children to play the game and to make the interaction more engaging. Earlier pilots and studies found that music is an effective way to draw children into the interaction [18, 24]. The music was chosen to match the emotions expressed by the robot and helped the children guess the facial expression, in addition to enhancing their attention span. The music accompanying specific emotions was chosen with the help of a therapist. Some children have difficulty distinguishing certain facial expressions; with the aid of a theme song, they would be able to guess the facial expression successfully, providing positive encouragement.

Below are some examples of dialogues in Module 3:

  • ROBOT: Now, we are in Module 3; song with facial expressions. In this module, we are going to play some edited songs with the facial expression game. Among the song themes are (1) happy, (2) sad, (3) angry, and (4) fear. Moreover, my facial expression will be associated with the song theme. You will listen to the song and, at the same time, please look at me and guess my facial expression. (The sad song is played together with the sad face.)

    CHILD: “Sad”

  • ROBOT: Correct, well done. That was a sad song and you can see my sad face.

  • ROBOT: Wrong guess, can you try again? What is my facial expression? The hint is: happy or sad? (This dialogue is spoken by the robot if the child answers wrongly. In this case, the child guessed correctly, so this dialogue was not applicable for this child.)

  • ROBOT: The answer is sad. (The right answer is given by the robot after the child has made three attempts, in order to move on to the next facial expression. In this case, the child guessed correctly, so this dialogue was not applicable.)

Then, similar dialogues would continue for different facial expressions like happy, angry, and fear.

2.4 Module 4: Attention Task

Module 4 was developed to measure the attention skills of the child. These are very important skills, central to social interaction, learning and collaboration, and robots are believed to be able to improve them during therapeutic child–robot interaction [16, 40]. In this session, the child was expected to look at shapes pasted on boards placed to the right (for example, an image of a rectangle) and to the left (for example, an image of a circle) of the robot.

The child would need to perform a “matching task” in which the robot gave an instruction to look at a shape (mounted to the left or right of the robot) and fixate their gaze on it for 3 s. For example, the child would be required to look at the rectangle for 3 s.

In the second phase of Module 4, the robot would join the child by looking at the shape. When giving the instruction, the robot looked to the right or to the left at the intended shape. As this is an easier task, with both a verbal and a visual component, the child should not struggle to complete it. This serves as motivation and helps improve their imitation skills, their understanding of verbal instructions, and their attention skills.
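Module 4 can thus be pictured as two passes over the same trials, first verbal-only and then with the robot’s gaze joining in. The sketch below encodes this structure; the shape-to-side mapping and the robot interface are again illustrative assumptions, not the study’s actual software.

```python
import time

SHAPE_SIDE = {"circle": "left", "rectangle": "right"}  # boards beside the robot

def attention_trial(robot, shape, robot_joins):
    """One matching-task trial: instruct, optionally co-gaze, count to three."""
    robot.speak(f"Please look at the {shape} image.")
    if robot_joins:                      # second phase: the robot gazes along
        robot.look(SHAPE_SIDE[shape])
    time.sleep(3)                        # the 3-second fixation window
    robot.speak("1, 2, 3. Well done! You did it correctly.")
    robot.look("center")                 # return gaze to the child

def run_module4(robot):
    for robot_joins in (False, True):    # phase 1: child alone; phase 2: together
        for shape in ("circle", "rectangle"):
            attention_trial(robot, shape, robot_joins)
```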

Examples of dialogue between the children and the robot are given below:

  • ROBOT: Now, we are in Module 4; Attention task. In this module, you are going to listen to my instruction. When I say, look at the rectangle image, please look at your right for 3 s, and count 1, 2 and 3. Then look at me back in the center, and, when I say, look at the circle image, please look to your left, for 3 s, and count 1, 2 and 3, then look at me back. After that I will also look at the image together with you.

    CHILD: The child listened to the instructions carefully.

  • ROBOT: Now, you will try it first. Please look at the circle image.

    CHILD: The child looked at the circle image and counted 1, 2 and 3 correctly

  • ROBOT: 1, 2, 3 (count together with the child). Well done! You did it correctly.

  • ROBOT: Try again. Please look at the circle image. (This dialogue is spoken by the robot if the child responds wrongly. In this case, the child performed the task correctly, so this dialogue was not applicable.)

  • ROBOT: Now, Please look at the rectangle image.

    CHILD: The child looked at the rectangle image and counted 1, 2 and 3 correctly

  • ROBOT: 1, 2, 3 (count together with the child). Well done! You did it correctly.

  • ROBOT: Try again. Please look at the rectangle image. (This dialogue is spoken by the robot if the child responds wrongly. In this case, the child performed the task correctly, so this dialogue was not applicable.)

  • ROBOT: Now, I will join you. Please look at me straight and listen to my order.

  • ROBOT: Please look at the circle image. (The robot was controlled to look at the circle.)

    CHILD: The child looked at the circle image and counted 1, 2 and 3 correctly

  • ROBOT: 1, 2, 3 (count together with the child). Well done! You did it correctly.

  • ROBOT: Try again. Please look at the circle image. (This dialogue is spoken by the robot if the child responds wrongly. In this case, the child performed the task correctly, so this dialogue was not applicable.)

  • ROBOT: Now, Please look at the rectangle image.

    CHILD: The child looked at the rectangle image and counted 1, 2 and 3 correctly

  • ROBOT: 1, 2, 3 (count together with the child). Well done! You did it correctly.

  • ROBOT: Try again. Please look at the rectangle image. (This dialogue is spoken by the robot if the child responds wrongly. In this case, the child performed the task correctly, so this dialogue was not applicable.)

  • ROBOT: Thank you for having fun with me.

2.5 Module 5: Free Style Interaction

Finally, Module 5 was a free-style interaction between the child and the robot. This module can be very useful to gauge their interest in and attention towards the robot, which serves as a measure of their focus in social interactions [7, 8]. The child was given the chance to ask the robot questions. The robot answered, with the answers being typed on a keyboard by a member of the research team and spoken by the robot. If children requested the robot to move, these actions were performed when the robot had the capability to do so. Below are some examples of dialogues in this module:

  • ROBOT: Now, we are in Module 5; Free interaction. You can ask me any question. Or you can ask me to do anything that you want such as facial expression and head turn. So, what do you want me to do? I will wait for your command.

  • CHILD: “Can you look to the right?” (the child pointed to the right of the robot)

    ROBOT: (The robot was controlled to turn to the right as instructed.)

  • CHILD: “That’s correct.”

  • CHILD: The child said “something” in a very low voice.

    ROBOT: Sorry, can you speak louder?

  • CHILD: “Did you hate to eat metal broccoli? It is ... disgusted?”

    ROBOT: No, I don’t eat.

  • CHILD: “You don’t eat broccoli right?”

    ROBOT: Yes.

  • CHILD: “What’s your favorite metal foods?”

    ROBOT: Sorry, can you speak louder?

  • CHILD: “What’s your favorite metal foods?”

    ROBOT: I don’t eat?

  • CHILD: “You don’t?”

    ROBOT: Yes.

  • CHILD: “Do you speak Malay or something else? You can speak malay right? I’m not expert. ”

    ROBOT: I speak English.

  • CHILD: “You don’t speak Spanish?”

  • ROBOT: I speak English like you, not Spanish.

  • CHILD: “I know you speak English. Who made you?”

    ROBOT: My creator.

  • CHILD: “What makes you happy?”

    ROBOT: I’m happy to see you.

  • CHILD: “What’s your favorite songs?”

    ROBOT: Angry birds

  • CHILD: “Oh, my favorite is.... my favorites too. But, what kind of game you play?”

    ROBOT: I made game for people

  • CHILD: “Oh, yeah...? Are you made of metal?”

    ROBOT: No, I was made from plastics.

  • CHILD: “Can you move your head to the right side?”

    ROBOT: (The robot was controlled to look to the right as instructed.)

  • CHILD: “Good, and, I got a quiz, I got a quiz”

    ROBOT: You don’t want me to look at you?

  • CHILD: “I wanna look at you. You can look at me.”

    ROBOT: (The robot was controlled to look at the child.) Alright, thank you dear. If you don’t have further questions or requests, we shall end our interaction here. Shall we meet again next time? Bye Bye!

These are some examples of the dialogues between the child and the LUCA robot during child–robot interaction.

3 Demographics of the Participants

In this study, we considered a number of aspects to select suitable participants. It should be noted that CWCI are very diverse in their cognitive development. Thus, in an effort to obtain standardized and unbiased results, we established inclusion and exclusion criteria for CWCI to participate in our study. The inclusion and exclusion criteria are as below:

  • Inclusion criteria

    1. Diagnosed as having a cognitive impairment (level validated via an attention-skills assessment, the Children’s Colouring Trail Test, CCTT [22]).

    2. Age between 6 and 12 years.

    3. No evidence of self-injury or aggressive behaviour.

    4. Able to speak English or Malay.

    5. Able to follow simple instructions in English or Malay.

  • Exclusion criteria

    1. Unwillingness to participate.

    2. Mutism.

    3. Uncorrected hearing deficit.

    4. Uncorrected vision deficit.

Table 2 Demographics of the participants and their level of cognitive impairment, based on the suggested clinical interpretation of the CCTT-2 assessment results, prior to participating in child–robot interaction

While such experiments are often conducted in medical centers [23], we opted for a school that provides special education for children with disabilities and special needs. The inclusion and exclusion criteria were more conducive for this population sample, as the cognitive impairments seen in children attending school are less severe than those seen in a hospital or medical setting. Thus, we approached the Ministry of Education in Malaysia through the Putrajaya Education Department and established a collaboration with a school for students with special needs in Putrajaya, Malaysia. This school had a sufficient number of students diagnosed with cognitive impairments.

Running child–robot interaction studies in schools is preferable to running them in a lab setting: the natural setting provides a more relaxed environment for the students, and they feel more comfortable in places they are familiar with. It is also ecologically more valid, in the sense that the robot is used in an environment frequented by the children and the obtained results are likely to better reflect the robot’s success outside the context of an evaluation study. We also avoided transportation and logistics problems, while having the advantage that the experimental room was familiar to the participating children. In total, there were 92 students diagnosed with cognitive impairments; however, only 36 children fulfilled all inclusion criteria. Before we started the study, we requested consent from the parents or legal guardians prior to allowing the children to participate. Consent to participate was granted by 25 parents or guardians.

After the screening process by the teacher, all 25 children whose parents or guardians had consented to their participation underwent the CCTT-2 assessment by a certified occupational therapist. Twenty children managed to complete the assessment; the remaining five were unable to complete all the tasks in the CCTT-2. Based on their performance in the assessment, the children were assigned a level of cognitive impairment following the CCTT-2 clinical interpretation guidelines, as shown in Table 2. The AVG (average) cluster grouped the children with the least cognitive impairment, followed by the MILD and SEVERE clusters. It is also important to note that all 20 children fulfilled the inclusion criteria despite their varying levels of cognitive impairment.

4 Experimental Framework

The design and framework of the experiment were in accordance with the principles of research ethics in social robotics [17]. The safety and well-being of the participants was our main priority. Moreover, the privacy of the participants was maintained, and the data collected in this study were made available only for academic and research purposes.

4.1 Experimental Duration

The experiment only began after research ethics approval was obtained on 30 July 2018 from the Chairman of the Universiti Teknologi MARA (UiTM) Research Ethics Committee, Institute of Research Management and Innovation, UiTM, Malaysia [REC reference number: 600-IRMI (5/1/6)]. The CWCI interacted with the robot in three separate sessions in August 2018; every child had the opportunity to interact with the robot once a week over the course of one month. The average interaction time for each session of child–robot interaction was approximately 10–16 min.

4.2 Experimental Setup

The layout of the experiment was very important, since the robot was tele-operated by a member of the research team. The experimental layout and setup were designed so that the CWCI would not notice that the robot was being controlled by a researcher. As such, the teacher or therapist who accompanied the CWCI ensured that the child sat on their chair as soon as they entered the experimental room.

Figure 2 shows the experimental setup of our child–robot interaction in the school. The setup differs from studies that were conducted in a medical center [23], day care center [6] or university laboratory [11], as we decided to perform our experiment in the Snoezelen Multi-Sensory room of the participating school. The room is comfortable and equipped with air-conditioning, and is typically used to offer multi-sensory experiences to the children. The multi-sensory devices were hidden from view and not used during the sessions with the robot. We used an empty space in one corner of the room and prepared our experimental setup as shown in Figs. 2 and 3.

Fig. 2 This illustration shows the experimental setup during child–robot interaction between the LUCA robot and children diagnosed with cognitive impairment

Fig. 3 This figure illustrates the experimental setup of the child–robot interaction from a side view. The CWCI and teacher sit on the two chairs on the left, while the researcher sits behind the divider

4.3 Experimental Flow

The protocol of the experiment was straightforward. A teacher or therapist would come to the experimental room with one child at a time. They would knock on the door, walk into the room and sit down in front of the robot. All interactions were recorded using five video cameras for later analysis. Once the child was seated and ready, the teacher would flash a card at the robot and the interaction was initiated. The interaction began with a welcoming note in Module 1, as explained in the previous section. After finishing each module, the next module would start without a break, as illustrated in Fig. 4. To conclude the session, the robot would say “bye bye” to the child and teacher. This indicated the end of the interaction, and the video recording was stopped.

Fig. 4 This flow chart shows the experimental protocol during child–robot interaction between the LUCA robot and children diagnosed with cognitive impairment

In the case of an emergency—such as aggressive behavior by the child, a disturbance from outside or a malfunction of the robot or other technical problems—the interaction would have been aborted and the child’s data would be discarded. However, no emergencies occurred during the experiment and data collection.

5 Results of Child–Robot Interaction

In this section, we report the attention results for the children (\(N=20\)) over three consecutive sessions. In each session the robot ran through the five modules described above. Table 3 shows the overall results for the three interaction sessions: Session 1 (S1), Session 2 (S2), and Session 3 (S3). Timing data (in seconds) were recorded and analyzed throughout the interaction in terms of task completion time (TCT) and interaction duration (ID). In order to clearly observe attention in the child–robot interaction, the duration of the robot’s spoken instructions (which was the same for all children), such as the welcoming notes in every module and the standard questions from the robot, was removed from the analyses. The total duration of interaction between a child and the robot was on average 720 s for all five modules. From this we removed the duration of the robot’s standard spoken script, i.e. 100 s across all five modules, since we are only interested in the TCT of each child.
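As a concrete illustration of this bookkeeping (a sketch, not the authors’ actual analysis code), the child-dependent time for a session is simply the raw logged duration minus the robot’s fixed scripted-speech time:

```python
# Illustrative bookkeeping only: subtract the robot's fixed scripted-speech
# time (identical for every child) from the raw logged duration, leaving
# the child-dependent time used in the analyses.
def child_dependent_time(raw_seconds, scripted_seconds):
    return raw_seconds - scripted_seconds

# e.g. an average 720 s session with 100 s of standard robot script
print(child_dependent_time(720.0, 100.0))  # -> 620.0 seconds
```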

Table 3 Task completion time (s) and interaction duration (s) for sessions 1, 2, 3

As discussed in the earlier section, Module 1 was designed to break the ice between the child and robot; it was therefore not assessed for TCT, since it simply introduced the robot to the child. Modules 2, 3 and 4 were developed to assess and analyze the attention skills of the CWCI by measuring TCT. In addition, the ability to maintain the interaction (ID) was measured in the open interaction of Module 5.

6 Analysis and Discussion of Child–Robot Interaction

6.1 Analysis of Task Completion Time

Earlier work shows that robots can act as a catalyst to encourage and improve attention skills in children [12]. In our study, the time to complete Modules 2, 3 and 4 was recorded and reported as the TCT. We expected the CWCI to improve with each session, i.e. their TCT to decrease. On the one hand this is caused by practice: children are expected to improve their performance when doing a task a second or third time. On the other hand, they also improve because the tasks require them to use their attention skills to perform well. As such, any improvement in TCT indicates an improvement in their attention skills. To anticipate the novelty effect, we designed Module 1 (ice-breaking), and the children’s responses in this module were excluded from the analysis. Moreover, even if we consider only the results of sessions 2 and 3 and neglect session 1 (again to account for the novelty effect), we can still see improvement in attention skills, using TCT as a proxy for the level of attention.

Table 4 shows the descriptive statistics of the total TCT results. Overall, the mean TCT for completing Modules 2, 3 and 4 shows a decreasing pattern (session 1 \(M=489.8\), \(SD=110.3\); session 2 \(M=359.4\), \(SD=62.94\); and session 3 \(M=332.0\), \(SD=56.89\)). The children completed the tasks significantly faster across sessions 1, 2 and 3 (\(p < 0.0001\)), as shown in Fig. 5. In session 1, while getting familiarized with the robot, the children were not yet familiar with the tasks and games played with the robot and were therefore slower than in the next two sessions. The second and third exposures to the robot resulted in increased task performance and faster task completion. As directing attention to the robot is integral to completing the task, this indicates that the robot acted as a positive agent in improving attention skills for CWCI (Fig. 6).

Table 4 Descriptive statistics of task completion time (TCT) for 20 CWCI in 3 sessions (S1, S2 and S3)
Fig. 5 This figure shows the task completion time of the 20 children diagnosed with cognitive impairments for Modules 2–4 (sessions 1 to 3)

Fig. 6 This figure shows the task completion time of 20 children diagnosed with cognitive impairments

As discussed earlier, the children were expected to improve their attention skills from session 1 to session 3. Most of them showed good progress and improved their task performance, and thereby their attention skills, during the child–robot interaction. An analysis of variance (ANOVA) was performed to test the significance of multiple sessions of child–robot interaction for improving the attention skills of the CWCI. The qualitative results were noteworthy in the way the children “engaged” with the robot, as reflected in the duration of the interaction. We argue that the appearance and features of the robot are central to this engagement and improved the children’s attention skills.

A one-way repeated measures ANOVA was conducted to compare the effect of session on task completion time (TCT). There was a significant effect of session on TCT, using the Greenhouse–Geisser correction for sphericity (\(F(1.081, 19) = 40.46\), \(p < 0.001\)). Post hoc comparisons using the Tukey HSD indicated that the mean TCT differed significantly between all sessions (\(p<0.001\)). Thus, the hypothesis that CWCI significantly improved their task performance (as measured by TCT) and their attention skills over repeated sessions holds. The children took less time (lower TCT) to complete all tasks in Modules 2, 3 and 4, and continued to improve with every session. The biggest gain was made between sessions 1 and 2.
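For readers who want to reproduce this kind of analysis, a one-way repeated measures ANOVA with a Greenhouse–Geisser correction can be run with the pingouin package, as sketched below. The data are simulated around the reported session means and SDs (they are not the study’s raw measurements), and pingouin’s Bonferroni-corrected pairwise tests are used as a stand-in for the Tukey HSD reported above.

```python
# Sketch of the analysis with simulated data (not the study's raw values).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n_children = 20

# Long-format data: one TCT row per child per session, simulated around
# the reported session means and standard deviations.
df = pd.DataFrame({
    "child": np.tile(np.arange(n_children), 3),
    "session": np.repeat(["S1", "S2", "S3"], n_children),
    "tct": np.concatenate([
        rng.normal(489.8, 110.3, n_children),   # session 1
        rng.normal(359.4, 62.94, n_children),   # session 2
        rng.normal(332.0, 56.89, n_children),   # session 3
    ]),
})

# One-way repeated measures ANOVA; correction=True reports the
# Greenhouse-Geisser-corrected p-value alongside the uncorrected one.
aov = pg.rm_anova(data=df, dv="tct", within="session", subject="child",
                  correction=True)
print(aov)

# Bonferroni-corrected pairwise comparisons (a stand-in for Tukey HSD).
post = pg.pairwise_tests(data=df, dv="tct", within="session",
                         subject="child", padjust="bonf")
print(post)
```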

Fig. 7 This figure shows the individual interaction durations between child and robot for sessions 1, 2 and 3 (Module 5)

Fig. 8 This figure shows the pattern of interaction duration between child and robot for sessions 1, 2 and 3 (Module 5)

6.2 Analysis of Interaction Duration (ID) of Free-Style Interaction Between Child and Robot

We also analyzed the duration of the interaction with the robot in Module 5 (the free-style interaction). The duration was used as a proxy for the child’s attention and engagement with the robot. There were no tasks to be completed in this module; the children could ask the robot questions or make requests of the robot. At the start of the module, the robot reminded them of its capabilities, which are limited to facial expressions and head turning. If they asked the robot to perform something beyond its capabilities, the robot apologized for not being able to complete the request. During the post hoc video analysis, the duration of the robot’s introductory speech (which was constant across all participants) was removed from each child’s interaction time.

Figure 7 shows the individual ID for all participants in this study, while Fig. 8 shows the overall ID results. The responses vary from one child to another and are difficult to generalize. Nevertheless, most of the children were able to maintain their interaction with the robot for approximately 120 s. The time they spent in Module 5 was used only as an indicator of their interest in the robot. The fact that they did not walk away from the robot and still played with it after four structured interaction modules suggests that the robot is a meaningful tool for attracting their interest and attention. The mean ID for all children in sessions 1, 2 and 3 was 133.7 s (SD \(=\) 91.85), 110.9 s (SD \(=\) 75.06) and 123.6 s (SD \(=\) 95.86) respectively, as shown in Table 5. We had expected the children to spend more time in later sessions. A previous study [40] suggested that robots can be used to engage and improve attention skills in children with social interaction difficulties.

Table 5 Descriptive statistics of interaction duration (ID) for 20 CWCI in 3 sessions (S1, S2 and S3)

The qualitative analysis shows that the children spent their time engaging and interacting with the robot, which according to their carers or therapists was atypical. However, the duration of the interaction did not differ significantly between sessions based on a repeated measures ANOVA (\(p = 0.455\)). Nevertheless, the overall results showed that the CWCI were attracted to the robot and maintained their attention for approximately 2 min, even after completing Modules 1–4, which on average took 10 min. The qualitative results are nonetheless encouraging and suggest that robots could engage children and improve their attention skills through child–robot interaction.

7 Conclusion and Future Work

In this study, we successfully designed a child–robot interaction that engaged children with cognitive impairments (CWCI) with the specific aim of improving and analyzing their attention skills. We used TCT as a proxy for attention, as measuring attention directly is fraught with technical difficulty. The interaction consisted of several modules during which the children played short games with the robot, taking into account the technical abilities of the robot. The modules fulfilled our research objective of measuring attention. Since the robot does not have an actuated lower torso, all interactions used only speech, facial expressions, head movements and arm gestures.

Each child was exposed to a series of three sessions of child–robot interaction. Over a period of one month we were able to run three sessions with all 20 children. This gives us a window into the potential of social robots for supporting children with cognitive impairments, and specifically for practising social skills such as attention. We expect more exposures to the robot to have beneficial effects on the outcomes, and we identify the need for long-term observations of child–robot interaction. Overall, we saw an improvement for most of the children from session 1 to session 3, and while improvement tails off after session 2, interest in the robot remains high, suggesting that future sessions (possibly focusing on other aspects of social development) might be beneficial.

We note that there is no baseline for the measure we used (task completion time); as such, we cannot compare our robot-based intervention to alternative interventions. Further research is warranted into longer-term interactions: would more sessions have diminishing returns? And, perhaps most importantly, are we just seeing a practice effect, whereby increased exposure to the robot results in lower completion times, or is it the social aspect of the robot that improves task completion time through increased attention on the task?

We believe that we have made three contributions to child–robot interaction research. (1) We showed how the content of the interaction modules can be designed as a function of the technical capabilities of the robot. This is important since the robot’s limitations can dampen the children’s interest when it fails to meet their expectations; interactive and structured interaction modules can maintain the children’s engagement with and interest in the robot. (2) We showed how multiple observations provide insights that could not have been gleaned from a single exposure of the child to the socially embodied robot. Our findings suggest that with multiple exposures, children with cognitive impairments can improve their level of attention. (3) We demonstrated the potential of child–robot interaction to improve social communication skills. Social robots could help children diagnosed with cognitive impairments to improve their interaction skills and to complete tasks during child–robot interaction.

Finally, we showed how the attention skills of CWCI in child–robot interaction can be measured indirectly by assessing (1) task completion time and (2) interaction duration. This offers opportunities for future studies, where, on low-cost robots, poorly performing visual attention tracking can be substituted with interaction timing measures. Although we expect that the improvement in attention skills observed in this study will transfer to interactions with people, such as peers, teachers and family members, this was not assessed here. The transfer of attention skills improved during child–robot interaction to real-life interaction should be of great interest in future studies.