Abstract
This research focuses on robotic anthropomorphism and how it impacts the learning environment of students with autism spectrum disorder (ASD). Students with ASD show a greater interest in anthropomorphic characteristics in robots. Social interaction between robots and students, achieved by varying the degree of anthropomorphism in a robot's physical design and behavior, has been shown to boost productivity in students with ASD. As robots enter our social space, we will inherently impose our interpretation on their actions, similar to the techniques we employ in rationalizing, for example, a pet's behavior. This propensity to anthropomorphize is not seen as a hindrance to social robot development but rather a helpful mechanism that requires careful examination and employment in social robotics research. Specifically, this chapter examines social-cognitive intelligence in relation to artificial intelligence, emphasizing privacy protections and the ethical implications of HRI, while considering how to design robots that are cognitively and artificially intelligent, ethical, and human-like in their social interactions.
Keywords
- Robotic Anthropomorphism
- Robotic Intentionality
- Social Cognition
- Autism Spectrum Disorder
- Human–Robot Interaction
- Humanoid Robots
- Social Robotics
Introduction
Almost all famous childhood stories use anthropomorphism in some way: in most cases, they feature human characters interacting with non-human characters. Social robotics has a special relationship with anthropomorphism, which it considers neither a cognitive error nor a sign of immaturity (Damiano & Dumouchel, 2018). Instead, it holds that this common human tendency, which is thought to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of collaborative and interactive agents—social robots (Damiano & Dumouchel, 2018). This approach leads social robotics to focus research on engineering robots that activate users' anthropomorphic projections. The goal is to give robots a "social presence" and "social behaviors" that are credible enough for human users to engage in comfortable, long-term relationships with these machines (Damiano & Dumouchel, 2018). This choice of "applied anthropomorphism" as a research method exposes the artifacts produced by social robotics to moral condemnation: social robots are judged a "cheating" technology because they generate in users the illusion of mutual social and emotional relationships (Damiano & Dumouchel, 2018). This chapter takes a position in this debate, developing a series of arguments relevant to the philosophy of mind, cognitive science, and robotic artificial intelligence, and asking what social robots can teach us about anthropomorphism.
The Social Cognitive Theory (SCT) examines the influence of individual experiences, the actions of others, and environmental factors on health behaviors. By instilling expectations, promoting self-efficacy, using observational learning, and employing other reinforcements to change behavior, SCT offers opportunities for social support. Research shows that individuals with ASD are especially vulnerable to loneliness, and hence anthropomorphizing non-human agents may function as a social outlet of sorts. For example, adults with a high level of ASD-related traits were found to be equivalent to controls in their desire for companionship, yet reported significantly higher ratings of loneliness, which they attributed to their lack of social understanding (Jobe & Williams White, 2007). Evidence of smaller social networks (Mazurek, 2014), alongside an increased perception of the self as a poor social actor (Vickerstaff et al., 2007), may contribute to the elevated levels of social anxiety present within this population (for a review, see MacNeil et al., 2009). As social differences may isolate those with ASD from peers and lead to adverse outcomes, anthropomorphizing non-human entities may allow social engagement with less emotional risk. In this way, interactions with human-like characters may become more socially motivating.
There is a dearth of research on robotic anthropomorphism and intentionality in social robotics and on the way these concepts can impact the social-cognitive behavior of individuals with ASD. Human–robot interaction (HRI) studies have shown exciting yet preliminary benefits for individuals with autism spectrum disorder, including increased engagement in tasks, increased levels of attention, and novel social behavior, such as joint attention. Despite the excitement generated by these studies within the robotics community and media attention, the results have received relatively little attention from the clinical community; clinicians tend to view HRI for autism as a trial or an experiment. Presently, research in social robotics and HRI is investigating the effect of attributing intentionality to robots and the behavioral parameters of the robot that most efficiently induce this attribution. In examining the effect that attributing intentionality has on social interaction, participants in an experiment are typically led to believe that they are interacting either with a pre-programmed machine (e.g., Wykowska et al., 2014; Özdem et al., 2017) or with another human (who naturally has desires and beliefs). In other studies, participants are first exposed to different agent types (e.g., human, humanoid robot, non-humanoid robot) and subsequently are led to believe that they are interacting with one of them (e.g., Krach et al., 2008). Inducing a particular (e.g., intentional) stance through instruction manipulation stands in contrast to methods used in research that aims to define the parameters under which participants spontaneously assume the intentional stance; there, the intentional stance is induced through, for instance, the robot's gaze, speech, or general behavior (e.g., Wykowska et al., 2015). This chapter addresses three research questions:
1. Can robots be perceived as "intentional" agents?

2. How can social robots facilitate student learning for individuals with autism spectrum disorder and other learning disorders/disabilities through robotic anthropomorphism and intentionality?

3. How can we change human behavior through or with human–robot interaction (HRI) situations?
This research makes the following contributions. First, it develops a different perspective on social robots, explaining how they can serve as scientific tools for examining human social cognition, particularly its flexibility. Second, it focuses on the importance of anthropomorphism and intentionality in social robotics and how they may improve social cognition for individuals with ASD. Third, it considers the issue of adopting an intentional stance toward robots, discusses its relationship to other, lower-level mechanisms of social cognition, and evaluates methods to assess adoption of the intentional stance. The main purpose of this research is to focus on the social-robotics concepts of anthropomorphism and intentionality and how they can result in improved social cognition for individuals with ASD. This chapter consists of four sections. First, we define and describe robotic anthropomorphism and intentionality in social robotics. Second, we examine how the social-cognition traits of individuals with ASD can be improved through robotic anthropomorphism and intentionality. Thereafter, we investigate the effects of robotic anthropomorphism and intentionality on robot likeability, and the subsequent effect of these interrelationships on the success of HRI implementation. Finally, we propose our Anthropomorphism–Intentionality–Social-Cognition framework and discuss its managerial implications for consumers and businesses in the context of social robotics.
Theoretical Background
Several studies examine the direct effects of adopting the intentional stance, with the stance acting as an independent variable induced by manipulating participants' beliefs while the robot's behavior is kept identical across experimental conditions. In line with this, further research is looking at ways in which the intentional stance may be elicited spontaneously by a robot's behavior (Terada et al., 2008; Yamaji et al., 2010). Where methods of inducing the intentional stance are themselves the subject of research, measures of the effectiveness of this process are needed; although the intentional stance as a concept has been defined at length (Dennett, 1971, 1987), measuring its adoption presents a challenge. In addition, much of the literature on intention attribution comes from the field of engineering and takes a different approach from the previously discussed research grounded in experimental psychology. Although the similarities in aims and terminology are evident, the methods and research questions in these fields often differ. A prominent type of paradigm in HRI research on attributed mental states involves naturalistic experiments and open-ended measures (e.g., Terada et al., 2007; Yang et al., 2015). Participants in these studies are usually not given strict instructions on how to perform a particular task with the robot in question, leaving the interaction to unfold naturally. While this type of setup resembles the naturalistic settings in which robotic platforms are intended to operate, experimental control is compromised, and questions about which aspects of the robot's behavior lead to adoption of the intentional stance remain unanswered.
Beyond discussing separately the circumstances under which the intentional stance is adopted and the role the stance plays in modulating social cognition, it should be noted that these conditions do not operate in isolation. This is illustrated by Pfeiffer et al. (2011), who showed that the agency ratings attributed to an on-screen avatar in a gaze-based interaction task varied as a function of both the avatar's gaze behavior and the participant's expectations regarding the avatar. Depending on the task instructions, agency ratings increased when the avatar's gaze behavior appeared congruent with the strategy it was said to follow. It is therefore ultimately a combination of behavioral parameters and human expectations toward the robot that informs human attributions; at present, this is rarely considered in HRI studies.
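The instruction-manipulation logic described above can be sketched as a toy simulation. This is a hypothetical illustration, not data from any cited study: the engagement scale, effect size, group size, and the assumed direction of the effect (higher engagement under the "human" instruction) are all invented for the sketch, and `run_participant` is our own illustrative name.

```python
import random
import statistics

random.seed(0)  # reproducible simulated data

def run_participant(instruction: str) -> float:
    """Return a simulated engagement rating on a 1-7 Likert scale.

    The robot's behavior is identical in both conditions; only the
    instruction ("machine" vs. "human") differs, mirroring the
    belief-manipulation designs discussed in the text.
    """
    base = 4.0
    # Assumed (hypothetical) effect of believing the partner is human.
    effect = 0.8 if instruction == "human" else 0.0
    noise = random.gauss(0, 0.5)
    return min(7.0, max(1.0, base + effect + noise))

conditions = ["machine", "human"]
ratings = {c: [run_participant(c) for _ in range(30)] for c in conditions}

# Compare mean engagement across instruction conditions.
for c in conditions:
    print(c, round(statistics.mean(ratings[c]), 2))
```

Because the robot's behavior is held constant, any difference between the two group means can be attributed to the manipulated belief, which is the key design feature of these studies.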
ASD is a developmental disorder characterized by impaired social and interpersonal communication, together with restricted and repetitive behaviors and interests (Lord et al., 1994). For example, children with ASD may avoid physical contact, not orient toward people, show little attachment, show little excitement or interest, and spend many hours lining up toys or investigating objects (Rutter et al., 2003). Since ASD cannot be cured, some people with the condition need increasingly expensive, intensive, lifelong care and treatment, which encourages the development of social robots to help them and their caregivers. The emergence of social robots dedicated to ASD can be traced back to seminal research by Emanuel and Weir (1976) (see also Howe, 1983), in which a computer-controlled, turtle-like LOGO robot that moved on the ground was used as a remedial tool for a boy with ASD. It was not until the late 1990s that many laboratories adopted this research topic (for reviews, see Werry & Dautenhahn, 1999; Diehl et al., 2012; Begum et al., 2016; Ismail et al., 2019).
Until now, around 30 robots have been tested as intervention tools for ASD [e.g., Robota (Billard et al., 2007), FACE (Pioggia et al., 2007), Aibo (Francois et al., 2009), Charlie (Boccanfuso & O'Kane, 2011), NAO (Shamsuddin et al., 2012; Arora & Arora, 2020), GIPY-1 (Giannopulu, 2013), Pleo (Kim et al., 2013), KASPAR (Wainer et al., 2014), Jibo (Guizzo, 2015), Maria (Valadao et al., 2016), Sphero (Golestan et al., 2017), MINA (Ghorbandaei Pour et al., 2018), Leo (She et al., 2018), SAM (Lebersfeld et al., 2019), SPRITE (Clabaugh et al., 2019), Actroid-F (Yoshikawa et al., 2019), etc.].
Anthropomorphism is defined as "seeing the human in non-human forms" (Aggarwal & McGill, 2007). In the field of social robotics, creators and designers give robots human-like (or animal-like) characteristics to incite robotic anthropomorphism (e.g., Nao, the bipedal humanoid robot developed by SoftBank Robotics, is popular in education and research). The uncanny valley phenomenon implies that a robot's anthropomorphic appearance fosters trust and familiarity only up to the point at which near-perfect but imperfect human likeness becomes unsettling; within that comfortable range, humanoid robots are preferred by individuals with ASD over non-humanoids because of their human-like appearance and interactions (Arora et al., 2021; Sung et al., 2007; Turkle, 2017). In the HRI context, robotic anthropomorphism means the attribution of human-like characteristics to humanoid robots (e.g., facial features such as big eyes, a smiling face, an interactive voice, speech, and hand and body gestures integrated into robots like ASIMO, NAO, Kirobo Mini, and Pepper). The ability to anthropomorphize robots is strongly linked to attributing human personality traits related to the user's own personality, leading to robot likeability. Robotic intentionality (a.k.a. the intentional stance or intentional mindset) is governed by the assumption that humans' state-of-mind associations, mental states, anthropomorphic beliefs, and desires result in (positive or negative) behaviors toward robots (Dennett, 1971, 1987).
A key hypothesis behind this effort states that social robots may overcome some of the motivational and emotional challenges that people with ASD face when they interact with human partners (Dautenhahn, 1999). In contrast to their typically developing peers, whose interactions reward them naturally, children with ASD show only weak function of the brain's reward system in response to social reinforcement (Chevallier et al., 2012; Delmonte et al., 2012; Watson et al., 2015). According to the social motivation theory of ASD, Chevallier et al. (2012) stated that children with ASD are less motivated to maintain relationships with human partners, instead showing a preference for non-human and often mechanical objects (Watson et al., 2015).
In addition to these motivational problems, the perceptual processing of people with ASD is atypical: they often cannot tolerate the complexity of many stimuli (Bogdashina, 2010, 2012), show detail-focused processing (Happé & Frith, 2006), and exhibit sensory hyper- or hypo-sensitivity (Bogdashina, 2010), along with significant social anxiety (Spain et al., 2018). According to the theory of Weak Central Coherence (Happé et al., 2001) and the Enhanced Perceptual Functioning model (Mottron et al., 2006), the cognitive processing of individuals with ASD focuses on local structures; these children struggle to integrate individual pieces into global patterns of the world. The Intense World Theory of autism (Markram, 2007) suggests that these individuals suffer from excessive neuronal data processing, resulting in information overload and abnormal levels of anxiety, which they seek to reduce with repetitive behaviors (Rodgers et al., 2012).
Over the past 50 years, there has been growing interest in social cognition, which has prompted researchers to investigate new issues related to attributing mental states to others. Social cognition refers to the psychological operations that underlie social interaction, including the perception, interpretation, and response to the intentions, dispositions, and behavior of others (Green & Horan, 2010). The ability to infer and predict the preferences, thoughts, desires, behavioral reactions, plans, and beliefs of others is an important aspect of social cognition (Frith & Frith, 2012) and is often referred to as "mindreading" or "mentalizing."
Some of the most critical challenges people with ASD face concern social interactions and their social and emotional development. This difficulty in communicating and interacting socially results from impaired language and communication skills, often combined with a lack of cognitive skills. Individuals with neurotypical development acquire communication skills through their capacity for social interaction; for individuals with ASD, however, communication skills often must be developed separately and deliberately. The term sociability refers to a person's ability to adapt to social situations and engage in friendly and professional relationships. Estimating the proper degree of anthropomorphism for robots used in the treatment of ASD is important. If the humanoid looks too much like a human, the child may begin to feel fear and apathy; on the other hand, it should not look purely like a machine, because the child will then be more interested in testing it than in interacting with it. Humanoids therefore adopt an appealing, attractive design (i.e., large eyes, posture, body language, and facial expressions) to support rich interaction and help prevent fear in children with ASD.
Given the characteristics of ASD, it seems reasonable to expect that a social robot with dynamic motivation, behavioral repetition, simplified appearance, and a lack of social judgment may be more appealing to people with ASD than real people. Under the Intense World Theory of autism (Markram, 2007), a reduction in unpleasant anxiety-related behaviors (e.g., stereotyped behaviors, shouting, sudden outbursts) can be expected during a human–robot interaction (HRI) situation. Therefore, in line with the social motivation theory of ASD (Chevallier et al., 2012) and SCT, we propose the following propositions.
Proposition 1
Human–robot interaction (HRI) between (anthropomorphic and intentional) social robots and ASD individuals results in better social motivation and cognition for ASD individuals and individuals with other learning disorders/disabilities.
Proposition 2
HRI between (anthropomorphic and intentional) social robots and ASD individuals results in a reduction in unpleasant anxiety-related behaviors.
Robotic Anthropomorphism, Intentionality, Social Cognition, and Autism: Discussions and Conclusions
Anthropomorphism refers to attributing personality traits or human-like characteristics to non-human objects, such as robots, computers, and animals. Anthropomorphizing objects is a way to build relationships with them, to treat them as partners in a communicative interaction. This process leads to the automatic attribution of intentionality and social behavior. The intentional stance allows us to deal with unknown entities or artifacts whose behavior is ambiguous or opaque. Research in social robotics and HRI explores the impact of attributing intentionality to robots and the behavioral parameters of the robot that most efficiently induce this attribution. In examining the effects that attributing intentionality has on social interaction, participants in an experiment are typically led to believe that they are interacting either with a pre-programmed machine (e.g., Wykowska et al., 2014; Özdem et al., 2017) or with another human (who naturally has wishes and beliefs). In other studies, participants are initially exposed to different agent types (e.g., human, humanoid robot, non-humanoid robot) and afterward are led to believe that they are interacting with one of them (e.g., Krach et al., 2008). Inducing a particular (e.g., intentional) stance through instruction manipulation stands in contrast to research methods that aim to define the parameters under which participants spontaneously assume the intentional stance; there, the intentional stance is induced through, for example, the robot's gaze, speech, or general behavior (e.g., Wykowska et al., 2015). The Godspeed questionnaires (Bartneck et al., 2009) are used to measure robotic anthropomorphism and intentionality. The full survey instruments are available as Appendix 5.1 and Appendix 5.2.
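The Godspeed-style scoring mentioned here can be sketched in a few lines. This is a minimal, hypothetical sketch, not the official instrument: the item labels are abbreviated paraphrases of the anthropomorphism subscale in Bartneck et al. (2009), `score_subscale` is our own illustrative helper, and the participant ratings are invented. Each item is a 5-point semantic-differential rating, and a subscale score is simply the mean of its item ratings.

```python
from statistics import mean

# Abbreviated labels for the Godspeed anthropomorphism items
# (see Bartneck et al., 2009, for the official wording).
# Each rating runs 1-5, e.g., 1 = Fake ... 5 = Natural.
ANTHROPOMORPHISM_ITEMS = [
    "fake_natural",
    "machinelike_humanlike",
    "unconscious_conscious",
    "artificial_lifelike",
    "rigid_elegant_movement",
]

def score_subscale(responses: dict[str, int]) -> float:
    """Mean of 5-point item ratings; rejects out-of-range ratings."""
    for item, rating in responses.items():
        if not 1 <= rating <= 5:
            raise ValueError(f"{item}: rating {rating} outside the 1-5 scale")
    return mean(responses.values())

# One participant's ratings of a humanoid robot (hypothetical data).
participant = {
    "fake_natural": 4,
    "machinelike_humanlike": 3,
    "unconscious_conscious": 2,
    "artificial_lifelike": 4,
    "rigid_elegant_movement": 3,
}

anthropomorphism_score = score_subscale(participant)
print(round(anthropomorphism_score, 2))  # mean of the five ratings: 3.2
```

Averaging semantic-differential items in this way yields one score per subscale, which can then be compared across robots or experimental conditions.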
Social cognition is defined as understanding, perceiving, and interpreting information about other people and ourselves in a social context. Its domains include emotion recognition, theory of mind (ToM), attributional style, social perception, and social knowledge. Social cognition consists of various processes that allow people to understand and interpret rapidly changing social data and to respond to social stimuli quickly, effortlessly, and appropriately. Recent work has shown that, among autistic individuals, explicit measures of social cognition show only a modest association with functional outcomes over and above other factors (Sasson et al., 2020). Some people with autism may exhibit good general social skills despite poor measured theory-of-mind performance, through psychological compensation (Livingston et al., 2019). Among adults with autism without cognitive impairment, general cognition predicts social functioning better than social cognition does (Sasson et al., 2020), and performance on explicit social-cognitive measures may be less predictive of social interaction behavior in autism than implicit social cognition (Keifer et al., 2020). Using naturalistic methods is a challenge in terms of experimental control. Humanoid robots can prove particularly useful in this context, as they allow social cognition and joint attention to be studied with a high degree of experimental control and relatively high ecological validity. This approach provides new insights into the mechanisms of joint attention (such as the role of human-likeness and eye contact in gaze-following outcomes, and the difficulty of decoding facial expressions), and it can be applied to therapy, training, and the assessment of joint attention for children diagnosed with ASD. Appendix 5.3 provides the measures for social cognition targeted at ASD individuals during human–robot interaction (HRI) situations.
In conclusion, individuals with autism face behavioral challenges, and social robots can help mitigate them. Individuals with ASD have difficulty communicating with other people, often perceiving people as simply things in their environment rather than as human beings. They cannot easily communicate ideas and feelings, have difficulty attending to what others think or feel, and sometimes spend much of their lives in silence. They often find it challenging to make friends or even to bond with family members. Studying the conditions and consequences of implementing human-like behaviors in artificial agents that can potentially induce the adoption of an intentional stance is fascinating from a theoretical perspective and extremely important for the future of our societies.
References
Aggarwal, P., & McGill, A. L. (2007). Is that car smiling at me? Schema congruity as a basis for evaluating anthropomorphized products. Journal of Consumer Research, 34(4), 468–479.
Arora, A. S., Fleming, M., Arora, A., Taras, V., & Xu, J. (2021). Finding “H” in HRI: Examining human personality traits, robotic anthropomorphism, and robot likeability in human-robot interaction. International Journal of Intelligent Information Technologies (IJIIT), 17(1), 19–38.
Arora, A. S., & Arora, A. (2020). The Race between Cognitive and Artificial Intelligence: Examining Socio-Ethical Collaborative Robots through Anthropomorphism and Xenocentrism in Human-Robot Interaction. International Journal of Intelligent Information Technologies (IJIIT), 16(1), 2020.
Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71–81.
Begum, M., Serna, R. W., & Yanco, H. A. (2016). Are robots ready to deliver autism interventions? A comprehensive review. International Journal of Social Robotics, 8(2), 157–181.
Billard, A., Robins, B., Nadel, J., & Dautenhahn, K. (2007). Building robota, a mini-humanoid robot for the rehabilitation of children with autism. Assistive Technology, 19, 37–49.
Boccanfuso, L., & O’Kane, J. M. (2011). Charlie: An adaptive robot design with hand and face tracking for use in autism therapy. International Journal of Social Robotics, 3, 337–347.
Bogdashina, O. (2010). Autism and the edges of the known world: Sensitivities, language, and constructed reality. Jessica Kingsley.
Chevallier, C., Kohls, G., Troiani, V., Brodkin, E. S., & Schultz, R. T. (2012). The social motivation theory of autism. Trends in Cognitive Sciences, 16, 231–239. https://doi.org/10.1016/j.tics.2012.02.007
Clabaugh, C., Mahajan, K., Jain, S., Pakkar, R., Becerra, D., Shi, Z., et al. (2019). Long-term personalization of an in-home socially assistive robot for children with autism spectrum disorders. Frontiers in Robotics and AI, 6, 110.
Damiano, L., & Dumouchel, P. (2018). Anthropomorphism in human-robot co-evolution. Frontiers in Psychology, 9, 468. https://doi.org/10.3389/fpsyg.2018.00468
Delmonte, S., Balsters, J. H., McGrath, J., Fitzgerald, J., Brennan, S., Fagan, A. J., & Gallagher, L. (2012). Social and monetary reward processing in autism spectrum disorders. Molecular Autism, 3(1), 1–13.
Dennett, D. C. (1971). Intentional systems. The Journal of Philosophy, 68, 87–106. https://doi.org/10.2307/2025382
Dennett, D. C. (1987). The Intentional Stance. MIT Press.
Diehl, J. J., Schmitt, L. M., Villano, M., & Crowell, C. R. (2012). The clinical use of robots for individuals with autism spectrum disorders: A critical review. Research in Autism Spectrum Disorders, 6(1), 249–262.
Emanuel, R., & Weir, S. (1976, July). Catalysing communication in an autistic child in a LOGO-like learning environment. In Proceedings of the 2nd Summer Conference on Artificial Intelligence and Simulation of Behaviour (pp. 118–129).
Francois, D., Stuart, P., & Dautenhahn, K. (2009). A long-term study of children with autism playing with a robotic pet: Taking inspirations from non-directive play therapy to encourage children’s proactivity and initiative-taking. Interaction Studies, 10, 324–373.
Frith, C. D., & Frith, U. (2012). Mechanisms of social cognition. Annual review of psychology, 63(1), 287–313.
Ghorbandaei Pour, A., Taheri, A., Alemi, M., & Meghdari, A. (2018). Human-robot facial expression reciprocal interaction platform: Case studies on children with autism. International Journal of Social Robotics, 10, 179–198.
Giannopulu, I. (2013). Multimodal cognitive nonverbal and verbal interactions: The neurorehabilitation of autistic children via mobile toy robots. International Journal of Advances in Life Sciences, 5, 214–222.
Golestan, S., Soleiman, P., & Moradi, H. (2017). Feasibility of using sphero in rehabilitation of children with autism in social and communication skills. In 2017 International Conference on Rehabilitation Robotics (ICORR) (pp. 989–994). IEEE.
Green, M. F., & Horan, W. P. (2010). Social cognition in schizophrenia. Current Directions in Psychological Science, 19(4), 243–248.
Guizzo, E. (2015). Jibo is as good as social robots get. But is that good enough? Science Robotics, 3, 21.
Happé, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5–25.
Happé, F., Frith, U., & Briskman, J. (2001). Exploring the cognitive phenotype of autism: weak “central coherence” in parents and siblings of children with autism: I. Experimental tests. The Journal of Child Psychology and Psychiatry and Allied Disciplines, 42(3), 299–307.
Howe, J. (1983). Autism-using a ‘turtle’ to establish communication. In W. J. Perkins (Ed.), High Technology Aids for the Disabled, pp. 179–183. Elsevier. https://doi.org/10.1016/B978-0-407-00256-2.50033-2
Ismail, L. I., Verhoeven, T., Dambre, J., & Wyffels, F. (2019). Leveraging robotics research for children with autism: A review. International Journal of Social Robotics, 11(3), 389–410.
Jobe, L. E., & White, S. W. (2007). Loneliness, social relationships, and a broader autism phenotype in college students. Personality and Individual Differences, 42(8), 1479–1489.
Keifer, C. M., Mikami, A. Y., Morris, J. P., Libsack, E. J., & Lerner, M. D. (2020). Prediction of social behavior in autism spectrum disorders: Explicit versus implicit social cognition. Autism, 24, 1758–1772. https://doi.org/10.1177/1362361320922058
Kim, E. S., Berkovits, L. D., Bernier, E. P., Leyzberg, D., Shic, F., Paul, R., et al. (2013). Social robots as embedded reinforcers of social behavior in children with autism. Journal of Autism and Developmental Disorders, 43, 1038–1049.
Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., & Kircher, T. (2008). Can machines think? Interaction and perspective taking with robots investigated via fMRI. PloS One, 3(7): e2597.
Lang, F. R., & Carstensen, L. L. (2002). Time counts: Future time perspective, goals, and social relationships. Psychology and Aging, 17(1), 125.
Lebersfeld, J. B., Brasher, C., Biasini, F., & Hopkins, M. (2019). Characteristics associated with improvement following the SAM robot intervention for children with autism spectrum disorder. International Journal Pediatric Neonatal Care, 5, 9.
Livingston, L. A., Colvert, E., Bolton, P., & Happé, F. (2019). Good social skills despite poor theory of mind: Exploring compensation in autism spectrum disorder. Journal of Child Psychology and Psychiatry, 60, 102–110. https://doi.org/10.1111/jcpp.12886
Lord, C., Rutter, M., & Le Couteur, A. (1994). Autism Diagnostic Interview-Revised: A revised version of a diagnostic interview for caregivers of individuals with possible pervasive developmental disorders. Journal of Autism and Developmental Disorders, 24(5), 659–685.
MacNeil, B. M., Lopes, V. A., & Minnes, P. M. (2009). Anxiety in children and adolescents with autism spectrum disorders. Research in Autism Spectrum Disorders, 3(1), 1–21.
Marchesi, S., Ghiglino, D., Ciardo, F., Perez-Osorio, J., Baykara, E., & Wykowska, A. (2019). Do we adopt the intentional stance toward humanoid robots? Frontiers in Psychology, 10, 450.
Markram, H. (2007). The intense world syndrome-an alternative hypothesis for autism. Frontiers in Neuroscience, 1, 77–96. https://doi.org/10.3389/neuro.01.1.1.006.2007
Mazurek, M. L. (2014). A Tag-Based, Logical Access-Control Framework for Personal File Sharing. Doctoral dissertation, Carnegie Mellon University.
Miyamoto, E., Lee, M., Fujii, H., & Okada, M. (2005). How can robots facilitate social interaction of children with autism? Possible implications for educational environments. In Proceedings of the 5th International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, 145–146.
Mottron, L., Dawson, M., Soulières, I., Hubert, B., & Burack, J. (2006). Enhanced perceptual functioning in autism: An update, and eight principles of autistic perception. Journal of Autism and Developmental Disorders, 36(1), 27–43.
Nomura, T., Suzuki, T., Kanda, T., & Kato, K. (2006, July). Altered attitudes of people toward robots: Investigation through the Negative Attitudes toward Robots Scale. In Proc. AAAI-06 workshop on human implications of human-robot interaction (Vol. 2006, pp. 29–35).
Özdem Yilmaz, Y., Cakiroglu, J., Ertepinar, H., & Erduran, S. (2017). The pedagogy of argumentation in science education: science teachers’ instructional practices. International Journal of Science Education, 39(11), 1443–1464.
Pfeiffer, U. J., Timmermans, B., Bente, G., Vogeley, K., & Schilbach, L. (2011). A non-verbal turing test: Differentiating mind from machine in gaze-based social interaction. PLoS ONE 6:e27591. https://doi.org/10.1371/journal.pone.0027591
Pioggia, G., Sica, M. L., Ferro, M., Igliozzi, R., Muratori, F., Ahluwalia, A., et al. (2007). Human-robot interaction in autism: FACE, an android-based social therapy. In RO-MAN 2007 - The 16th IEEE international symposium on robot and human interactive communication (pp. 605–612). IEEE.
Rodgers, J., Glod, M., Connolly, B., & McConachie, H. (2012). The relationship between anxiety and repetitive behaviours in autism spectrum disorder. Journal of Autism and Developmental Disorders, 42, 2404–2409. https://doi.org/10.1007/s10803-012-1531-y
Rutter, M., Anthony, B., & Lord, C. (2003). The Social Communication Questionnaire Manual. Western Psychological Services.
Sasson, N. J., Morrison, K. E., Kelsven, S., & Pinkham, A. E. (2020). Social cognition as a predictor of functional and social skills in autistic adults without intellectual disability. Autism Research, 13, 259–270. https://doi.org/10.1002/aur.2195
Shamsuddin, S., Yussof, H., Ismail, L. I., Mohamed, S., Hanapiah, F. A., & Zahari, N. I. (2012). Initial response in HRI: A case study on evaluation of a child with autism spectrum disorders interacting with a humanoid robot NAO. Procedia Engineering, 41, 1448–1455.
She, T., Kang, X., Nishide, S., & Ren, F. (2018). Improving LEO robot conversational ability via deep learning algorithms for children with autism. In 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS) (pp. 416–420). IEEE.
Spain, D., Sin, J., Linder, K. B., McMahon, J., & Happé, F. (2018). Social anxiety in autism spectrum disorder: A systematic review. Research in Autism Spectrum Disorders, 52, 51–68.
Sung, J.-Y., Guo, L., Grinter, R. E., & Christensen, H. I. (2007). “My Roomba is Rambo”: Intimate home appliances. In UbiComp 2007: Ubiquitous Computing. Springer.
Turkle, S. (2017). Alone together: Why we expect more from technology and less from each other. Basic Books.
Terada, K., Shamoto, T., Ito, A., & Mei, H. (2007, October). Reactive movements of non-humanoid robots cause intention attribution in humans. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 3715–3720). IEEE.
Terada, K., Shamoto, T., & Ito, A. (2008, August). Human goal attribution toward behavior of artifacts. In RO-MAN 2008-The 17th IEEE International Symposium on Robot and Human Interactive Communication (pp. 160–165). IEEE.
Valadao, C. T., Goulart, C., Rivera, H., Caldeira, E., Bastos Filho, T. F., Frizera-Neto, A., et al. (2016). Analysis of the use of a robot to improve social skills in children with autism spectrum disorder. Research on Biomedical Engineering, 32, 161–175.
Vickerstaff, S., Heriot, S., Wong, M., Lopes, A., & Dossetor, D. (2007). Intellectual ability, self-perceived social competence, and depressive symptomatology in children with high-functioning autistic spectrum disorders. Journal of Autism and Developmental Disorders, 37(9), 1647–1664.
Wainer, J., Dautenhahn, K., Robins, B., & Amirabdollahian, F. (2014). A pilot study with a novel setup for collaborative play of the humanoid robot KASPAR with children with autism. International Journal of Social Robotics, 6, 45–65.
Watson, K. K., Miller, S., Hannah, E., Kovac, M., Damiano, C. R., Sabatino-DiCrisco, A., et al. (2015). Increased reward value of non-social stimuli in children and adolescents with autism. Frontiers in Psychology, 6, 1026. https://doi.org/10.3389/fpsyg.2015.01026
Werry, I. P., & Dautenhahn, K. (1999). Applying mobile robot technology to the rehabilitation of autistic children. In Proceedings of SIRS99, 7th Symposium on Intelligent Robotic Systems (pp. 265–272).
Werry, I., Dautenhahn, K., Ogden, B., & Harwin, W. (2001). Can social interaction skills be taught by a social agent? The role of a robotic mediator in autism therapy. In CT’01 Proceedings of the 4th International Conference on Cognitive Technology: Instruments of Mind. Springer-Verlag. https://doi.org/10.1007/3-540-44617-6_6
Wykowska, A., Kajopoulos, J., Obando-Leiton, M., Chauhan, S. S., Cabibihan, J. J., & Cheng, G. (2015). Humans are well tuned to detecting agents among non-agents: examining the sensitivity of human perception to behavioral characteristics of intentional systems. International Journal of Social Robotics, 7(5), 767–781.
Wykowska, A., Wiese, E., Prosser, A., & Müller, H. J. (2014). Beliefs about the minds of others influence how we process sensory information. PloS One, 9(4), e94339.
Yamaji, Y., Miyake, T., Yoshiike, Y., De Silva, P. R., & Okada, M. (2010, March). STB: Human-dependent sociable trash box. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 197–198). IEEE.
Yang, Y., Li, Y., Fermuller, C., & Aloimonos, Y. (2015, March). Robot learning manipulation action plans by “watching” unconstrained videos from the World Wide Web. In Proceedings of the AAAI conference on artificial intelligence (Vol. 29, No. 1).
Yoshikawa, Y., Kumazaki, H., Matsumoto, Y., Miyao, M., Kikuchi, M., & Ishiguro, H. (2019). Relaxing gaze aversion of adolescents with autism spectrum disorder in consecutive conversations with human and android robot: A preliminary study. Frontiers in Psychiatry, 10, 370.
Appendices
Appendix 5.1 Godspeed Questionnaires—Measures of Anthropomorphism (Adapted from Bartneck et al., 2009)
Appendix 5.2 Measures of Intentionality and Negative Attitude Toward Robots (Adapted from Nomura et al., 2006)
Intentional Stance / Intentionality Questionnaire (ISQ, Marchesi et al., 2019) can be found at: https://instanceproject.eu/publications/rep
The complete Instance Questionnaire can be found at: https://drive.google.com/file/d/1DFY8lXB9uyR8LqPvxoQ-2hAVLXriWY5z/view
Negative Attitude Toward Robots Scale
Appendix 5.3 Measures of Social Motivation Targeted at ASD individuals during Human-Robot Interaction (HRI) Situations (Adapted from Lang & Carstensen, 2002)
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Sammonds, A., Arora, A.S., Arora, A. (2022). Robotic Anthropomorphism and Intentionality Through Human–Robot Interaction (HRI): Autism and the Human Experience. In: Arora, A.S., Jentjens, S., Arora, A., McIntyre, J.R., Sepehri, M. (eds) Managing Social Robotics and Socio-cultural Business Norms. International Marketing and Management Research. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-04867-8_5
Publisher Name: Palgrave Macmillan, Cham
Print ISBN: 978-3-031-04866-1
Online ISBN: 978-3-031-04867-8
eBook Packages: Business and Management (R0)