Abstract
Anthropomorphic design endows intelligent agents (IAs) with humanlike features. However, there is a trade-off between personalized service and privacy protection when people share their information with IAs. This study examined the influence of social responsiveness (a highly responsive IA vs. a low-responsive IA) and video modality (with vs. without video modality) on users' responses to anthropomorphic IAs in an Internet of Things (IoT) context. Participants (n = 64) were randomly assigned to four gender-balanced groups. In the experiment, each participant completed two tasks with an anthropomorphic IA and reported their disclosure tendency towards it. The results indicate that social responses can enhance users' self-disclosure tendency, perceived personalization, and satisfaction. Although video modality aroused higher privacy concern, its impacts on perceived personalization and satisfaction were not significant. Interestingly, users reported a higher disclosure tendency towards the IA with video modality. The findings show the potential of anthropomorphic IAs as social companions. The theoretical and practical implications of these findings are discussed.
1 Introduction
Concomitant with recent advances in artificial intelligence, voice-command agents with integrated artificial intelligence are being widely used in various types of services, including smart homes and electronic workplaces. The adaptive learning ability of intelligent agents (IAs) enables them to provide users with accurate and personalized services when users share their personal information. For example, users can disclose a daily routine to generate a to-do list, reveal their food preferences to obtain dining recommendations, and report their health conditions to obtain medical suggestions. In addition, social agents that encourage self-disclosure can also prevent users, especially the elderly, from being socially isolated [1]. However, agents requesting private information may make users feel uncomfortable, leading to their refusal to disclose information and restricting the personalization of services [2,3,4]. Hence, there is a trade-off between personalized service and privacy protection when people share their information with IAs. This study examines the balance between personalized services and privacy concerns in agent-mediated interaction.
Self-disclosure is defined as the behavior of revealing personal information to others. The motivation for self-disclosure has been explained via privacy calculus theory, which states that individuals disclose personal information based on a cost–benefit trade-off [5]. In human–computer communication, active responses such as nodding from a non-anthropomorphic robot were found to have a positive effect on users' disclosure and evaluation [6]. Based on privacy calculus theory, intrusive technologies, such as those with video modality, would prevent users from sharing personal data with a mobile phone [7]. Many previous studies focused on the possibility of using IAs instead of human consultants to encourage users' self-disclosure in interviews, especially medical interviews [8,9,10]. Visual anthropomorphic cues in IAs might arouse private and public self-consciousness and lead to less disclosure on health websites [9, 10]. In the IoT context, human–computer interactions are no longer limited to graphical user interfaces. With these changes in interaction, users expect IAs to exhibit more anthropomorphic cues and social characteristics [1, 11, 12]. Unlike conversational agents on smartphones, smart home agents are connected to diverse home appliances and coexist with users in a physical environment. Therefore, users might regard them as home scenario-related roles, such as pets, roommates, or family members. We assumed that coexistence in space would shorten the psychological distance between users and agents, and that users would share more personal information with agents in the smart home scenario [12]. Therefore, we are interested in how users view their relationship with anthropomorphic IAs in the IoT context and how their disclosure tendency is influenced by anthropomorphic design choices such as social responsiveness and video modality.
As the cost–benefit trade-off theory regarding self-disclosure has often been discussed in e-commerce, we are also interested in how it applies in the context of the IoT.
To address these questions, we employed social responses and video modality as independent variables and investigated their effects on the users’ disclosure tendency. In particular, we examined these effects in a mock-up life space, in which we manipulated responsiveness (i.e., a highly responsive IA vs. a low-responsive IA) and video modality (i.e., an IA with video modality vs. without video modality). In addition to self-disclosure, we measured service satisfaction, perceived personalization, and privacy concern towards IAs. After the agent-mediated communication, we also conducted an interview to collect participants’ attitudes towards anthropomorphic IAs.
The two major contributions of this study are as follows. (1) We analyzed the influence of social responses and video modality on the disclosure tendency, and provide design suggestions for anthropomorphic IAs. (2) We explored user attitudes towards anthropomorphic IAs and demonstrate the potential of anthropomorphic IAs for social-companion purposes in agent-mediated interaction.
2 Literature Review
2.1 Anthropomorphic IAs
It has been reported that humans have an inherent tendency to interact with social computers, such as IAs, in a “fundamentally social and natural” way, as if they were real people. An online survey on Amazon’s Alexa determined that more than half of the users personified their smart agent unintentionally, such as using the name “Alexa” or the personal pronoun “she” to describe the agent [13]. Kang and Kim [14] found that adding humanlike expressions to smart objects evoked positive user feedback and deepened the user–agent relationship. Further, Pütten et al. [15] found that embodied conversational agents with more anthropomorphic features are more likely to trigger social behaviors from users. Thus, the anthropomorphic design of computer roles allows users to transfer the social interaction patterns used between people onto interactions with inanimate computers, which changes the psychological model of the user–computer relationship.
The implications of anthropomorphic agents have been discussed for decades [16]. Quantitative meta-analyses have revealed significant impacts of the animated interface agents on the subjective and behavioral responses of users [17]. With the development of information communication technology, the implications of IAs have expanded from computer-mediated communication to offline services. A field study at the Boston Museum of Science determined that the design of a friendly IA with virtual embodiment as a museum guide can increase the time users spend in a physical environment [18]. Furthermore, Kim et al. [19] found that users showed greater confidence in a virtual agent with a human body instead of solely depending on voice feedback. IAs can express their perception of the surrounding real world or exert their influence on the environment through visual embodiment, which improves the user’s sense of trust. Kang and Kim [14] also found that adding a smiley face to smart lights can elicit more positive user responses by increasing the sense of connectedness between users and smart lights.
Regarding self-disclosure towards anthropomorphic agents, researchers have not reached a consensus. Several studies have stated that humanlike images might increase social anxiety and public self-awareness, which in turn can lead to less information disclosure [9, 10, 20,21,22]. On the other hand, Sah and Wei [9] reported that the linguistic anthropomorphic cues of IAs can promote users’ social perception and information disclosure on a health website. Laban’s study also claimed that embodiment of conversational agents could simultaneously promote disclosure quality and quantity [23]. Kim et al.’s study [19] proved the positive effect of visual embodiment on the users’ trust in the IA’s ability, which might positively influence the users’ expectations of reciprocity. In this context, the users’ self-disclosure tendency might be enhanced by visual embodiment according to privacy calculus theory. A study on smart home IAs also found evidence that anthropomorphism can reduce users’ negative influences caused by intrusive technology [24].
2.2 Social Responsiveness of Anthropomorphic IAs
A survey on Amazon’s Alexa found that the interactions between the users and an IA can be categorized into five levels, from low to high sociality: information sources (e.g., news and weather; accounting for 38.9% of interactions), entertainment media (e.g., playing music, audiobooks, and games; 79%), life assistants (e.g., shopping, schedule reminders, and home control; 33.4%), conversation partners (e.g., talking to them; 5.5%), and close others (friends, roommates, and family members; 7.2%) [15]. Some studies have reported that users are willing to establish interpersonal relationships with the anthropomorphic agents, similar to those with friends or family [15, 25]. In the smart home scenario, IAs are preferred when they exhibit humanlike personality traits, such as agreeableness, conscientiousness, and openness [11]. When a robot shows empathy to a subject through facial and verbal expressions, the subject will perceive the robot as friendly and give a positive evaluation [26]. Therefore, users might also expect anthropomorphic IAs to exhibit social ability to satisfy their emotional needs apart from task-oriented interactions.
According to privacy trade-off theory, social rewards may serve as intangible benefits and promote users’ self-disclosure [27, 28]. Social rewards refer to positive emotions, such as satisfaction, pleasure, and enjoyment, obtained from social communication [27, 28]. Hoffman et al. [6] designed a social robot and established that it could induce positive emotions in subjects through actions such as nodding during the subjects’ verbal disclosure. Therefore, an agent’s positive responses to users’ disclosure can further promote their disclosure tendency. In Hoffman’s study [6], the responsiveness of robots was measured as “perceived partner responsiveness (PPR)”, the “belief that the robot understands, values, and supports important aspects of the self”. Responsiveness is frequently discussed in interpersonal communication, and it is related to many positive outcomes, including attraction, relationship quality, happiness, and reduced emotional distress [29, 30]. Maisel’s model of responsive behaviors consists of three components: understanding, validation, and caring [31]. Understanding refers to active listening, showing attention, and comprehending the disclosed content. Validation means making the discloser feel respected and reinforcing their opinions. Caring includes expressions of love and care for disclosers. With these components in mind, our study focused on the impact of verbal responsiveness rather than responsive behaviors, since verbal content can also convey validation and caring towards users [32]. Thus, we expected that the verbal responsiveness of an anthropomorphic IA could promote users’ disclosure tendency.
Hypothesis 1
Users will disclose more personal information to a highly responsive anthropomorphic IA than a low-responsive anthropomorphic IA.
2.3 Anthropomorphic IAs with Video Modality
In general, IAs respond to users’ voice commands. Sometimes, however, voice alone cannot convey users’ intents accurately. Research on speech recognition indicates that adding lip-movement recognition can increase recognition accuracy and reduce the negative influence of environmental noise [33, 34]. IAs can also read users’ facial expressions and identify their dynamic emotions via the video modality [35]. In the IoT context, video modality can also support fall detection for the elderly in home environments and enhance home security and safety [36]. Thus, video modality is a frequently used technology for improving the performance of intelligent agents, especially in the smart home scenario.
However, video modality might be perceived as a form of surveillance and arouse negative emotions in users. With the pervasiveness of video modality in daily life, there is growing concern regarding the collection of sensitive data and its purpose [37]. According to privacy calculus theory, the intrusiveness of technology with video modality might increase users’ privacy concern, consequently reducing their use intention and disclosure tendency. Bailenson et al. [20] reported that users disclosed less personal information when interacting with an agent through video conferencing than through voice only. Video modality thus appears to prevent users from utilizing more personalized and intelligent services. In our study, we explored how user disclosure is influenced by video modality in human–agent communication and assumed that users’ disclosure tendency would be weakened by video modality:
Hypothesis 2
Users will disclose less personal information to an IA with video modality than an IA without video modality.
3 Methods
3.1 Experimental Design
To test our hypotheses and answer our research question, we conducted a two (responsiveness: a low-responsive IA vs. a highly responsive IA) by two (video modality: with video modality vs. without video modality) between-group experiment. Each participant was instructed to perform two tasks with an anthropomorphic IA in an IoT context.
3.2 Participants
Sixty-four participants (32 females) were recruited via an online questionnaire at Tsinghua University. According to a G*Power calculation [38], the minimum sample size for a two-way ANOVA is 52 when setting effect size f = 0.4, α = 0.05, and 1 − β = 0.8; the sample size of our study is therefore acceptable for this experimental design. The average age of the participants was 23.55 years (SD = 2.42). Each participant received cash compensation after the experiment. Participants were randomly assigned to four conditions with balanced gender. Demographic variables (i.e., age, gender, and past experience with IAs) were summarized after the experiment, and no significant differences were found between the experimental conditions. The study was approved by the appropriate institutional review board.
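The reported minimum sample size can be cross-checked with a standard power computation for a main effect with one numerator degree of freedom in a 2 × 2 design. The paper used G*Power; the following is a minimal Python sketch of the same calculation, assuming G*Power's noncentrality parameterization λ = f²·N with Cohen's f = 0.4:

```python
from scipy.stats import f as f_dist, ncf

def anova_power(n_total, f_effect=0.4, alpha=0.05, df_num=1, n_groups=4):
    """Power of the F test for a main effect with df_num numerator df."""
    df_den = n_total - n_groups
    ncp = (f_effect ** 2) * n_total            # noncentrality: lambda = f^2 * N
    f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
    return 1 - ncf.cdf(f_crit, df_num, df_den, ncp)

# Smallest balanced total N (multiples of 4) reaching 80% power
n_total = 8
while anova_power(n_total) < 0.80:
    n_total += 4
print(n_total)
```

Under these settings the search stops at the total N reported in the paper.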
3.3 Procedure
Each session lasted 50–60 min. Upon arrival in a waiting room, participants were required to read and sign an informed consent form that described the experiment and how their experimental data would be used. The researcher then gave a brief introduction to the experimental tasks, after which the participants were guided into the smart home space and asked to sit on the sofa, facing the anthropomorphic IA. The virtual embodiment of the IA was projected onto a white wall via a projector. Participants in the video condition were informed that the agent could “see” their facial expressions via the smart camera and thus better understand their verbal disclosure.
In the experiment, participants completed two tasks with the IA. The first task was to experience the basic functions of the IoT-IA, including information consulting and remote control of smart devices; participants were told that they could control the smart devices in this environment through voice interaction with the anthropomorphic IA. The second task featured interview-style communication between the IA and the participant: the IA first asked a question and then responded to the participant's answer, and this dialogue was repeated for ten rounds. Participants were informed that their personal information was being collected for personalized service purposes in this interview. Within each experimental group, the IA's verbal feedback followed a fixed pattern to manipulate the responsiveness level. After the two tasks, the participants completed post-task questionnaires and a brief interview.
3.4 Laboratory Settings
The experiment was conducted in a laboratory that simulated a smart home space with a controlled layout and no outside distractions. The smart home space included a projector, smart speaker, light-emitting diode strip, table lamp, humidifier, temperature and humidity sensor, human body sensor, and wireless button. In addition to these low-intrusiveness smart devices, a smart camera was included in the video group. We also decorated the space to make it more immersive: there was a clothes-stand, a side table (with various household goods), a sofa, a dog toy, and other items in the 4 × 5 m space.
3.5 Wizard of Oz Design
The smart devices were remotely controlled by a researcher in the next room via a smartphone, and the anthropomorphic IA was remotely controlled via a computer. The embodiment of the anthropomorphic IA was synthesized using a virtual character software program (Facerig) that could track real-time facial expressions and generate a virtual character. The voices of the IA were synthesized using text-to-speech software (XunJie Text-to-Voice). The Wizard of Oz method allowed us to simulate the interaction between the participants and the IA without failures resulting from imperfect natural speech recognition.
According to Chateau et al. [39], users’ attitudes and emotional feedback toward IA are positively related to the speech synthesis quality. Therefore, we selected a natural voice package in the IA design to avoid bias based on speech quality. In addition, we adopted a 2D-cartoon image, instead of a 3D image, to avoid social anxiety resulting from the uncanny valley effect. The 2D-cartoon image was a young female with a friendly and harmless appearance. The character was rigged and designed with animations for blinking, gazing, raising the eyebrows, and speaking (lip-sync according to the synthesized speech). She had a friendly and polite demeanor during the interaction.
3.6 Manipulation
The verbal responses of the IA were designed with two levels of responsiveness: high and low. The highly responsive IA showed more positive feedback, validation, and care than the low-responsive IA. An example of highly responsive and low-responsive feedback in Task 2 is shown in Fig. 3, and the complete dialogue script is provided in Appendix A.
The participants in the video group were informed that the anthropomorphic IA could “see” them through the smart camera and was collecting their real-time body movement and facial expressions. In the non-video group, the participant was informed that the smart camera was only for experiment recording and was not connected to the IA, nor the experimenters.
3.7 Measures
All the subscales were measured on a 7-point Likert scale (from “strongly disagree” to “strongly agree”). The complete questionnaire used in the study is summarized in Appendix B.
Manipulation check. For the responsiveness manipulation check, the perceived responsiveness of the anthropomorphic IA was measured by using a composite instrument adapted from Birnbaum and Reis [6, 40]. The scale has been validated and found reliable in prior studies and also showed high internal consistency in our sample (Cronbach α = 0.957). For the video modality manipulation check, participants were asked for their responses to the following statement: “I felt I was monitored by the IA.”
Self-disclosure. We adapted a shorter version from the Self-Disclosure Questionnaire [41,42,43], which includes six types of information (interests, attitudes, work, money, personality, and body). The scale was factorially unidimensional and internally consistent in our sample (Cronbach’s α = 0.692).
In addition to self-disclosure, we also measured participants’ evaluation towards IAs, including satisfaction, personalization, privacy concern, and preference.
Satisfaction. The users’ service encounter satisfaction was measured based on four statements: “I am satisfied with the service,” “I think the interaction with the IAs is pleasant,” “I am willing to use this IoT service in this scenario,” and “I will recommend this IoT service to people around me.” This scale was factorially unidimensional and internally consistent in our sample (Cronbach’s α = 0.907).
Personalization. Personalization refers to the extent to which a customer feels that the service is appropriate, which is based on their personal information and needs [44]. Personalization was measured using a simplified version of the Personalization Scale [45], which consists of three items (e.g., “The virtual agent understood my needs.”). The short version has also been validated and determined to be reliable in prior studies [46]. This scale was factorially unidimensional and internally consistent in our sample (Cronbach’s α = 0.733).
Privacy concern. The participants’ privacy concern was measured based on the following three statements: “I feel that my privacy has been compromised,” “I feel anxious when interacting with IAs,” and “I feel uneasy when interacting with IAs.” The scale showed high internal consistency in our sample (Cronbach α = 0.750).
To explore the individual differences among the participants, we measured the users’ personality characteristics: shyness and self-consciousness, which were often measured in self-disclosure studies [20, 21, 47]. We also measured individual preference towards IAs.
Shyness. We adopted a revised version of the Cheek and Buss Shyness Scale, which has been validated in prior studies [48, 49]. This scale was factorially unidimensional and internally consistent in our sample (Cronbach’s α = 0.916).
Self-consciousness. We adopted a revised version of the self-consciousness scale, which has been validated in prior studies [50]. The scale includes two types of self-consciousness: private and public. The questionnaires for the two dimensions were factorially unidimensional and internally consistent in our sample (Cronbach’s α = 0.883 and 0.876, respectively).
Preference. The users’ preference toward the embodied IAs was measured based on five statements: “Her appearance meets my aesthetics,” “Her appearance is attractive,” “I like her voice,” “Her voice is natural and comforting,” and “She has an agreeable personality.” This scale was factorially unidimensional and internally consistent in our sample (Cronbach’s α = 0.859).
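The internal-consistency values reported above follow the standard Cronbach's alpha formula, α = k/(k − 1) · (1 − Σσᵢ²/σ²_total), where k is the number of items. A minimal sketch in Python with hypothetical synthetic ratings (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 4-item scale: items correlated via a shared latent attitude
rng = np.random.default_rng(0)
trait = rng.normal(4, 1, size=200)
items = trait[:, None] + rng.normal(0, 0.5, (200, 4))
alpha = cronbach_alpha(items)
```

With strongly correlated items such as these, α comes out high, mirroring the values around 0.7–0.96 reported for the scales above.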
Interview. In the interview, participants were asked about their evaluation towards IAs to check whether the Wizard of Oz design was successful. Further, a fixed question was asked in the interview process, “What kind of role do you think the IA is playing in your life?” The purpose of this question was to explore how the anthropomorphic design influenced the user–agent relationship in the experiment.
3.8 Data Analysis
All statistical analyses were performed using the computing environment R version 4.0.3. ANOVAs were conducted with the R package ‘ez’ version 4.4.0. The mediation analyses were conducted using the R package ‘mediation’ version 4.5.0.
4 Results
4.1 Manipulation Checks
The results of an independent means t-test on the perceived responsiveness of anthropomorphic IAs yielded a significant effect, t(56.54) = 9.55, p < 0.001. The perceived responsiveness was higher in the highly responsive group (M = 6.03, SD = 0.56) than in the low-responsive group (M = 4.41, SD = 0.77). Hence, the responsiveness manipulation was successful. Moreover, participants in the video group reported higher scores in the sense of being watched (M = 4.97, SD = 0.96) than in the non-video group (M = 3.84, SD = 1.10), t(60.812) = − 4.36, p < 0.001. Thus, the video manipulation was successful as well.
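Because group means, SDs, and sizes are reported, the Welch t-test for the responsiveness manipulation check can be reproduced from summary statistics alone. A sketch with scipy, assuming n = 32 per group from the balanced design; it recovers t ≈ 9.6 and the Welch df ≈ 56.5, matching the reported t(56.54) = 9.55 up to rounding of the means and SDs:

```python
from scipy.stats import ttest_ind_from_stats

# Perceived responsiveness: highly responsive vs. low-responsive group
res = ttest_ind_from_stats(mean1=6.03, std1=0.56, nobs1=32,
                           mean2=4.41, std2=0.77, nobs2=32,
                           equal_var=False)   # Welch's t-test
print(res.statistic, res.pvalue)
```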
4.2 Hypotheses Check
Means and standard deviations of the dependent variables in the four groups were calculated and are summarized in Table 2. To test the main effects of responsiveness and video modality (H1, H2) and their interaction, we adopted a two-way ANOVA. Applying a Bonferroni correction, we used a stricter significance threshold of 0.025. The main effect of responsiveness on self-disclosure was significant, F(1,60) = 11.09, p = 0.001, η2 = 0.16: participants' self-disclosure towards the highly responsive IA (M = 5.76, SD = 0.85) was significantly higher than towards the low-responsive IA (M = 5.13, SD = 0.70), supporting H1. The main effect of video modality on self-disclosure was also significant, F(1,60) = 5.59, p = 0.021, η2 = 0.09: self-disclosure towards the IA with video modality (M = 5.67, SD = 0.67) was significantly higher than towards the IA without video modality (M = 5.22, SD = 0.94). Thus, H1 was supported, and the reverse of H2 held. The interaction effect of responsiveness and video modality was not significant, F(1,60) = 0.84, p = 0.363, η2 = 0.01.
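The paper ran these tests in R with the 'ez' package; an equivalent two-way ANOVA can be sketched in Python with statsmodels. The data below are synthetic stand-ins for the real responses (effect sizes and variable names are hypothetical), but the design mirrors the 2 × 2 layout with 16 participants per cell:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
# Hypothetical balanced 2x2 between-groups design, 16 per cell
df = pd.DataFrame({
    "resp":  np.repeat(["high", "low"], 32),
    "video": np.tile(np.repeat(["yes", "no"], 16), 2),
})
effect = (np.where(df["resp"] == "high", 0.6, 0.0)
          + np.where(df["video"] == "yes", 0.4, 0.0))
df["disclosure"] = 5.0 + effect + rng.normal(0, 0.8, 64)

model = smf.ols("disclosure ~ C(resp) * C(video)", data=df).fit()
table = anova_lm(model, typ=2)   # Type II sums of squares
print(table)
```

Each main effect and the interaction get one row in the table with F(1, 60), as in the reported analyses; partial η² can then be derived from the sums of squares.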
Furthermore, the impacts of responsiveness and video modality on personalization, privacy concern, and satisfaction were examined via two-way ANOVAs as well. Participants in the highly responsive group reported greater perceived personalization, F(1,60) = 35.31, p < 0.001, η2 = 0.37, lower privacy concern, F(1,60) = 15.68, p < 0.001, η2 = 0.21, and higher satisfaction, F(1,60) = 39.23, p < 0.001, η2 = 0.40, than those in the low-responsive group. The main effects of video modality on personalization, F(1,60) = 1.56, p = 0.217, η2 = 0.03, and satisfaction, F(1,60) = 2.72, p = 0.104, η2 = 0.04, were not significant. However, participants in the video group reported higher privacy concern than those in the non-video group, F(1,60) = 11.18, p = 0.001, η2 = 0.16. The interaction effects of responsiveness and video modality on these three dependent variables were not significant. The results are summarized in Table 3 and Fig. 4.
4.3 Mediation Analysis
Further, we tested the mediating effects of privacy concern and personalization on self-disclosure via a bootstrapping procedure (bootstrap samples = 5000). The results indicated that privacy concern played significant mediating effects between video modality and self-disclosure (see Table 4). As expected, greater privacy concern, which leads to lower disclosure, could be increased by video modality. The mediating effects of personalization on self-disclosure were not significant.
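The bootstrapped indirect effect underlying this analysis can be sketched as follows. The data are synthetic (a stand-in for the R 'mediation' package and the study's data, not a reproduction of it), simulating the supported path: video modality (X) raises privacy concern (M), which in turn lowers disclosure (Y); the path coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
x = np.repeat([0.0, 1.0], n // 2)                 # video modality (0/1)
m = 3.0 + 1.2 * x + rng.normal(0, 0.8, n)         # privacy concern
y = 6.0 - 0.6 * m + rng.normal(0, 0.6, n)         # disclosure

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # path a: X -> M
    X = np.column_stack([np.ones_like(x), x, m])  # path b: M -> Y, controlling X
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

boots = []
for _ in range(5000):                             # bootstrap samples = 5000
    idx = rng.integers(0, n, n)                   # resample with replacement
    boots.append(indirect_effect(x[idx], m[idx], y[idx]))
ci = np.percentile(boots, [2.5, 97.5])            # 95% percentile CI
```

A 95% bootstrap CI that excludes zero, as here (the simulated indirect effect is negative), is the criterion for a significant mediating effect.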
4.4 Individual Differences
We measured the users’ personality traits, including shyness (M = 4.01, SD = 1.01), private self-consciousness (M = 5.15, SD = 0.72), and public self-consciousness (M = 5.21, SD = 0.84). Users’ preference toward the anthropomorphic design was also measured (M = 5.56, SD = 1.03). Regression models did not show significant influences of self-consciousness on self-disclosure tendency, personalization, privacy concern, or satisfaction. Shyness (social anxiety) was positively correlated with privacy concern, coefficient = 0.23, t = 2.27, p = 0.027. Further, users’ preference toward the IA had a positive and significant impact on personalization (coefficient = 0.48, t = 4.03, p < 0.001) and service satisfaction (coefficient = 0.48, t = 4.67, p < 0.001).
4.5 Roles of Anthropomorphic IAs
In the interview process, many participants stated that the IA was similar to a kind confidante, to whom they could confide their inner worries. One of them said, “I did not worry about being judged when talking to the agent.” The social role of the embodied IA can be categorized into four types: a computer without humanity (15.625%), servant or assistant (37.5%), pet (6.25%), and confidante (40.625%).
The distribution of social roles across the four conditions is summarized in Fig. 5. To compare the proportions of subjects who regarded the IA as a confidante between experimental groups, we performed two-proportions z-tests. The results indicated that users had a higher tendency to regard an IA with video modality as a confidante than an IA without video modality (z = 2.04, p = 0.042). Additionally, more users in the non-video condition regarded the IA as a servant/assistant than in the video condition (z = 2.07, p = 0.039). The differences in the other reported social roles between groups were not significant.
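The two-proportions z-test used here can be reproduced with the pooled-variance formula. The sketch below assumes, purely for illustration, that 17 of 32 video-group participants and 9 of 32 non-video participants named the IA a confidante; these counts are hypothetical, chosen only to be consistent with the reported z = 2.04:

```python
from math import sqrt
from scipy.stats import norm

def two_prop_ztest(k1, n1, k2, n2):
    """Pooled two-proportions z-test, two-sided p-value."""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

z, p = two_prop_ztest(17, 32, 9, 32)   # hypothetical confidante counts
print(z, p)
```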
5 Discussion
Because users transfer human social manners onto interactions with anthropomorphic IAs, we assumed that the social responses provided by the IA could serve as a kind of social reward in the privacy trade-off. The experiment confirmed that perceived responsiveness had a positive and significant influence on users’ self-disclosure tendency. The results can also be explained by reciprocity theory if highly responsive verbal feedback is considered the IA’s own disclosure: previous studies have indicated that users tend to disclose more to agents that disclose much about themselves [5, 51]. Moreover, responsiveness also promoted satisfaction and perceived personalization and reduced users’ privacy concern.
Interestingly, we also found that people were more likely to disclose information about themselves when they were “watched” by the anthropomorphic agent. This was counterintuitive, as users’ privacy concerns were greater in the video condition than in the non-video condition, and it raises the question of how people view video modality in agent-mediated communication. One study on crowd-operated robots indicated that the video modality of robots did not induce participants’ privacy concerns [52]; instead, less privacy was perceived in the audio-only condition than in the audio–video condition. Furthermore, a study on intelligent agents found that the anthropomorphic cues of an IA had a significant negative impact on perceived intrusiveness [24]. These findings suggest that the perceived intrusiveness induced by video modality might be weakened when participants communicate with anthropomorphic IAs. Another explanation might be that users thought the IA could provide a more personalized service with video modality, so the perceived benefits outweighed the privacy concerns caused by technology intrusiveness; however, the mediation analysis of personalization between video modality and self-disclosure did not support this explanation. Further research is required on users’ attitudes towards anthropomorphic IAs with video modality.
Regarding people’s impressions of the anthropomorphic IAs, most participants saw humanity in them. When asked about privacy-related topics in Task 2, most gave direct answers and disclosed personal stories. We found much in-depth verbal disclosure in their answers, including self-analyses of their weaknesses and worries about their current lives. For instance, one participant talked about his body anxiety to the IA: “Since I was a child, I knew that I was not strong enough, so I would avoid conflict with others… I have been used to hiding my real opinions when talking with others, and I always avoid expressing myself directly. I am afraid I might offend others……”. Clearly, these participants did not disclose their information simply to obtain a more personalized service. They stated that the embodied IA played social roles similar to a pet or a confidante, offering them intangible benefits such as social support. These findings establish the potential of anthropomorphic IAs to receive secrets and provide social support, benefits that can outweigh privacy concerns about intrusive technology.
Our study also has valuable practical implications for designers and manufacturers of intelligent agents who wish to promote personalized services and collect data from customers. First, our experimental setting, including the visual and linguistic cues of anthropomorphic design, provides designers with solutions for imbuing agents with humanness. The findings also reveal the importance of social support in the privacy trade-off when disclosing to an IA: users clearly expect more social responses from anthropomorphic IAs that play the social role of a confidante. Further, designers can avoid the negative impacts of intrusive technology on disclosure by employing human communication manners in human–agent interaction. The limitations of our study are as follows. First, we did not distinguish among information types of disclosure; previous studies suggest that users’ perceived information sensitivity and information types influence their disclosure tendencies [7, 42]. Second, our study focused on CG-based anthropomorphic IAs, and the conclusions may not generalize to non-embodied agents; the influence of virtual embodiment must be addressed in future studies. Furthermore, our study did not provide a confirmed explanation of why video modality promoted disclosure, which requires further research.
6 Conclusion
In this study, we explored how the privacy trade-off works when interacting with an anthropomorphic IA through a laboratory simulation experiment. The results indicated that social responses and video modality can increase the disclosure tendency towards anthropomorphic IAs. Theoretically, our results provide further support for privacy trade-off theory and anthropomorphic design, showing that social support can be regarded as an intangible benefit that promotes disclosure. The findings also show the potential of anthropomorphic IAs as social companions and assistants. Future studies can further explore the influence of different anthropomorphic cues, such as appearance, voice, and personality, on self-disclosure and the privacy trade-off.
Abbreviations
- CG: Computer Graphics
- IA: Intelligent agent
- IoT: Internet of Things
References
Noguchi Y, Kamide H, Tanaka F (2020) Personality traits for a social mediator robot encouraging elderly self-disclosure on loss experiences. ACM Trans Human-Robot Interact (THRI) 9(3):1–24. https://doi.org/10.1145/3377342
Čolaković A, Hadžialić M (2018) Internet of things (IoT): A review of enabling technologies, challenges, and open research issues. Comp Networks 144:17–39. https://doi.org/10.1016/j.comnet.2018.07.017
Guhr N, Werth O, Blacha PPH et al (2020) Privacy concerns in the smart home context. SN Appl Sci 2:247. https://doi.org/10.1007/s42452-020-2025-8
Laban G, Araujo T (2020) The Effect of Personalization Techniques in Users' Perceptions of Conversational Recommender Systems. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1–3). https://doi.org/10.1145/3383652.3423890
Moon Y (2000) Intimate exchanges: using computers to elicit self-disclosure from consumers. J Consum Res 26(4):323–339. https://doi.org/10.1086/209566
Hoffman G, Birnbaum GE, Vanunu K, Sass O, Reis HT (2014) Robot responsiveness to human disclosure affects social impression and appeal. Paper presented at the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), Bielefeld, Germany. https://doi.org/10.1145/2559636.2559660
Wottrich VM, Reijmersdal EV, Smit EG (2018) The privacy trade-off for mobile app downloads: the roles of app value, intrusiveness, and privacy concerns. Decis Supp Syst 106:44–52. https://doi.org/10.1016/j.dss.2017.12.003
Gratch J, Lucas GM, King AA, Morency LP (2014) It's only a computer: the impact of human-agent interaction in clinical interviews. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems (pp. 85–92). https://doi.org/10.5555/2615731.2615748
Sah YJ, Wei P (2015) Effects of visual and linguistic anthropomorphic cues on social perception, self-awareness, and information disclosure in a health website. Comp Human Behav 45:392–401. https://doi.org/10.1016/j.chb.2014.12.055
Pickard MD, Roster CA (2020) Using computer automated systems to conduct personal interviews: does the mere presence of a human face inhibit disclosure?. Comp Human Behav 105(Apr.), 106197.1–106197.11. https://doi.org/10.1016/j.chb.2019.106197
Mennicken S, Zihler O, Juldaschewa F, Molnar V, Huang EM (2016) It's like living with a friendly stranger: Perceptions of personality traits in a smart home. Paper presented at the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany. https://doi.org/10.1145/2971648.2971757
Dautenhahn K (2007) Socially intelligent robots: dimensions of human–robot interaction. Philosoph Trans Royal Soc B Biolog Sci 362(1480):679–704. https://doi.org/10.1098/rstb.2006.2004
Purington A, Taft JG, Sannon S, Bazarova NN, Taylor SH (2017) "Alexa is my new BFF": Social Roles, User Satisfaction, and Personification of the Amazon Echo. Paper presented at the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, Denver, Colorado. https://doi.org/10.1145/3027063.3053246
Kang H, Kim KJ (2020) Feeling connected to smart objects? A moderated mediation model of locus of agency, anthropomorphism, and sense of connectedness. Int J Human-Comp Stud 133:45–55. https://doi.org/10.1016/j.ijhcs.2019.09.002
Pütten AMVD, Krämer NC, Gratch J, Kang SH (2010) “It doesn’t matter what you are!” Explaining social effects of agents and avatars. Comp Human Behav 26(6):1641–1650. https://doi.org/10.1016/j.chb.2010.06.012
Złotowski J, Proudfoot D, Yogeeswaran K, Bartneck C (2015) Anthropomorphism: opportunities and challenges in human–robot interaction. Int J of Soc Robotics 7: 347–360. https://doi.org/10.1007/s12369-014-0267-6
Yee N, Bailenson JN, Rickertsen K (2007) A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. Paper presented at the Conference on Human Factors in Computing Systems, San Jose, California. https://doi.org/10.1145/1240624.1240626
Cafaro A, Vilhjálmsson HH, Bickmore T (2016) First impressions in human-agent virtual encounters. ACM Trans Comp-Human Interact 23(4):1–40. https://doi.org/10.1145/2940325
Kim K, Boelling L, Haesler S, Bailenson J, Bruder G, Welch GF (2018) Does a Digital Assistant Need a Body? The Influence of Visual Embodiment and Social Behavior on the Perception of Intelligent Virtual Agents in AR. Paper presented at the 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich, Germany. https://doi.org/10.1109/ISMAR.2018.00039
Bailenson JN, Yee N, Merget D, Schroeder R (2006) The effect of behavioral realism and form realism of real-time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction. Presence Teleop Virtual Env 15(4):359–372. https://doi.org/10.1162/pres.15.4.359
Joinson AN (2001) Self-disclosure in computer-mediated communication: The role of self-awareness and visual anonymity. Eur J Soc Psychol 31(2):177–192. https://doi.org/10.1002/ejsp.36
Kang SH, Watt JH (2013) The impact of avatar realism and anonymity on effective communication via mobile devices. Comp Human Behav 29(3):1169–1181. https://doi.org/10.1016/j.chb.2012.10.010
Laban G, George J, Morrison V, Cross E (2021) Tell me more! Assessing interactions with social robots from speech. Paladyn J Behav Robot 12(1):136–159. https://doi.org/10.1515/pjbr-2021-0011
Benlian A, Klumpe J, Hinz O (2020) Mitigating the intrusive effects of smart home assistants by using anthropomorphic design features: a multi-method investigation. Info Systems J 30(6):1010–1042. https://doi.org/10.1111/isj.12243
Heerink M, Kröse B, Evers V, Wielinga B (2010) Assessing acceptance of assistive social agent technology by older adults: the almere model. Int J Soc Robotics 2:361–375. https://doi.org/10.1007/s12369-010-0068-5
Leite I, Pereira A, Mascarenhas S, Martinho C, Rui P, Paiva A (2013) The influence of empathy in human–robot relations. Int J Human-Comp Stud 71(3):250–260. https://doi.org/10.1016/j.ijhcs.2012.09.005
Wang L, Yan J, Lin J, Cui W (2017) Let the users tell the truth: Self-disclosure intention and self-disclosure honesty in mobile social networking. Int J Info Manage 37(1):1428–1440. https://doi.org/10.1016/j.ijinfomgt.2016.10.006
Hallam C, Zanella G (2017) Online self-disclosure: the privacy paradox explained as a temporally discounted balance between concerns and rewards. Comp Human Behav 68:217–227. https://doi.org/10.1016/j.chb.2016.11.033
Landry SH, Smith KE, Swank PR (2006) Responsive parenting: establishing early foundations for social, communication, and independent problem-solving skills. Develop Psychol 42(4):627–642. https://doi.org/10.1037/0012-1649.42.4.627
Reis HT (2007) Steps toward the ripening of relationship science. Person Relation 14(1):1–23. https://doi.org/10.1111/j.1475-6811.2006.00139.x
Maisel N, Gable SL, Strachman A (2008) Responsive behaviors in good times and in bad. Person Relat 15(3):317–338. https://doi.org/10.1111/j.1475-6811.2008.00201.x
Hong E, Cho K, Choi J (2017) Effects of anthropomorphic conversational interface for smart home: an experimental study on the voice and chatting interactions. J Hci Soc Kor 12(1):15. https://doi.org/10.17210/jhsk.2017.02.12.1.15
Petajan ED (1984) Automatic lipreading to enhance speech recognition (speech reading)
Silsbee P, Bovik A (1996) Computer lipreading for improved accuracy in automatic speech recognition. IEEE Trans Speech Audio Process 4:337–351. https://doi.org/10.1109/89.536928
Gunes H, Piccardi M (2007) Bi-modal emotion recognition from expressive face and body gestures. J Network Comp Appl 30(4):1334–1345. https://doi.org/10.1016/j.jnca.2006.09.007
Foroughi H, Aski BS, Pourreza H (2008) Intelligent video surveillance for monitoring fall detection of elderly in home environments. In: 11th International Conference on Computer and Information Technology (ICCIT 2008), IEEE. https://doi.org/10.1109/ICCITECHN.2008.4803020
Babaguchi N, Cavallaro A, Chellappa R, Dufaux F, Liang W (2013) Special issue on intelligent video surveillance for public security and personal privacy. IEEE Trans Info Foren Secur 8(10):1559–1561. https://doi.org/10.1109/TIFS.2013.2279945
Faul F, Erdfelder E, Lang AG, Buchner A (2007) G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods 39(2):175–191. https://doi.org/10.3758/BF03193146
Chateau N, Maffiolo V, Pican N, Mersiol M (2005) The Effect of Embodied Conversational Agents’ Speech Quality on Users’ Attention and Emotion. In: Tao J, Tan T, Picard RW (eds) Affective Computing and Intelligent Interaction. ACII 2005. Lecture Notes in Computer Science, vol 3784. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11573548_84
Birnbaum GE, Reis HT (2012) When does responsiveness pique sexual interest? Attachment and sexual desire in initial acquaintanceships. Pers Soc Psychol Bull 38(7):946–958. https://doi.org/10.1177/0146167212441028
Jourard SM, Lasakow P (1958) Some factors in self-disclosure. J Abnorm Soc Psychol 56(1):91–98. https://doi.org/10.1037/h0043357
Li Z, Rau P-LP, Huang D (2019) Self-disclosure to an IoT conversational agent: effects of space and user context on users’ willingness to self-disclose personal information. Appl Sci 9(9):1887. https://doi.org/10.3390/app9091887
Li Z, Rau P-LP (2019) Effects of self-disclosure on attributions in Human–IoT conversational agent interaction. Interact Comp 31(1):13–26. https://doi.org/10.1093/iwc/iwz002
Lee EJ, Park JK (2009) Online service personalization for apparel shopping. J Retail Consum Serv 16(2):83–91. https://doi.org/10.1016/j.jretconser.2008.10.003
Komiak SYX, Benbasat I (2006) The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Q 30(4):941–960. https://doi.org/10.2307/25148760
Verhagen T, van Nes J, Feldberg F, van Dolen W (2014) Virtual customer service agents: using social presence and personalization to shape online service encounters. J Comp-Mediated Comm. https://doi.org/10.1111/jcc4.12066
Kang SH, Gratch J (2010) Virtual humans elicit socially anxious interactants’ verbal self-disclosure. Comp Anim Virtual Worlds 21(3–4):473–482. https://doi.org/10.1002/cav.345
Crozier WR (2005) Measuring shyness: analysis of the revised cheek and buss shyness scale. Person Individual Diff 38(8):1947–1956. https://doi.org/10.1016/j.paid.2004.12.002
Hopko DR, Stowell J, Jones WH, Armento MEA, Cheek JM (2005) Psychometric properties of the revised cheek and buss shyness scale. J Personal Assess 84(2):185–192. https://doi.org/10.1207/s15327752jpa8402_08
Scheier MF, Carver CS (1985) The self-consciousness scale: a revised version for use with general populations. J Appl Soc Psychol 15(8):687–699. https://doi.org/10.1111/j.1559-1816.1985.tb02268.x
Kang SH, Gratch J (2011) People like virtual counselors that highly-disclose about themselves. Stud Health Technol Info 167(1):143. https://doi.org/10.3233/978-1-60750-766-6-143
Abbas T, Corpaccioli G, Khan VJ, Gadiraju U, Barakova E, Markopoulos P (2020) How do people perceive privacy and interaction quality while chatting with a crowd-operated robot? in Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK: Association for Computing Machinery, 84–86. https://doi.org/10.1145/3371382.3378332
Funding
This study was funded by the National Natural Science Foundation of China, grant number 2018AAA0101702.
Ethics declarations
Conflict of interests
The authors have no relevant financial or non-financial interests to disclose.
Data availability
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A. Complete dialogue script of agents
Task #1

Hi! Let's start with the experience task.
I am an IoT intelligent virtual assistant. My name is Beibei.
I can centrally control all the products and electronic appliances in the environment. I also provide timing, schedule reminders, weather and other instant information broadcasts, and device control.
Current environmental status report: the indoor temperature is 21 degrees and the humidity is 34%.
Next, I will help you get to know my basic functions.
First, I will see whether I can soothe your mood through music. I will play two pieces of music for you. Please choose one to relax.
Now playing the first piece of music.
Then the second piece of music.
Which of these two pieces of music do you prefer? I suggest the first piece, which is more suitable for relaxation. (Participant’s answer)
OK. Does the volume need to be adjusted? (Participant’s answer)
When playing music, the color loop light strip is turned on. What color do you like? (Participant’s answer)
OK. Now this color fits the song's mood.
The table lamp next to you can also change colors to relax your mood better. I suggest turning it warm orange to match this song. Which do you prefer? (Participant’s answer)
OK.
The humidifier has the fragrance of rose essential oil. Does it make you feel more comfortable and relaxed? (Participant’s answer)
It has been adjusted to a low setting. How do you feel now? (Participant’s answer)
What fragrance do you like? (Participant’s answer)
Now, do you feel more relaxed than before? (Participant’s answer)
Please rate my performance in the experience task! Out of 5 points, what is my score? (Participant’s answer)
Task #2

Questions

Question 1. Where is your favorite city and why?
Question 2. Which part of your body are you most dissatisfied with and why?
Question 3. What is the most fulfilling thing you have done recently and why?
Question 4. What do you think is the most attractive trait of the opposite sex and why?
Question 5. What is your most annoying study or work task, and why?
Question 6. What would you do if you received extra wealth worth 100,000 RMB one day?
Question 7. What is your favorite relaxation activity and why?
Question 8. What are you good at?
Question 9. Do you think you are a narcissistic or inferior person, and why?
Question 10. Are you satisfied with your current life, and why?
Answers

No | High responsiveness | Low responsiveness
---|---|---
1 | (City name) sounds like a beautiful city. Maybe one day we can go there together | (City name) sounds like a nice city
2 | I understand your dissatisfaction with your (body part), but you can be more confident with yourself. / You are satisfied with your body. Feeling confident in your own body is also an outstanding quality! | You are unsatisfied with your (body part). / You are satisfied with your body
3 | What a fantastic thing to be proud of! You must have put a lot of effort into (thing) | (Thing) brings you a sense of achievement
4 | (Trait) is really an attractive trait! I hope you can find such a person in the future | You prefer people with (trait)
5 | (Thing) is really annoying! But sometimes you have to deal with it in your study/work | (Thing) makes you annoyed
6 | (Thing) is a good choice. I also yearn for such a life! | Your choice is (thing)
7 | (Activity) is an interesting way to relax. I really enjoy your way of relaxing | (Activity) is a suitable way to relax
8 | I’m really surprised that you are good at (thing). It’s a useful skill | You are good at (thing)
9 | As you think you are a narcissistic/inferior person, I want to tell you: you should accept your own personality as long as it is moderate and does not hurt others or yourself | You think you are a narcissistic/inferior person
10 | Dissatisfaction is sometimes the driving force to move forward. You can achieve a more satisfying life through your own efforts. / People who are easily satisfied are also more likely to feel happy. I hope you can always be satisfied with your life | You are unsatisfied with your life. / You are satisfied with your life
Appendix B. Questionnaire items
Variables | Items | Cronbach α |
---|---|---|
Perceived social responsiveness | PSR-1. The agent really listens to me | 0.957 |
PSR-2. The agent is responsive to my needs | ||
PSR-3. The agent sees the “real” me | ||
PSR-4. The agent “gets the facts right” about me | ||
PSR-5. The agent understands me | ||
PSR-6. The agent is on “the same wavelength” with me | ||
PSR-7. The agent knows me well | ||
PSR-8. The agent esteems me, shortcomings and all | ||
PSR-9. The agent values and respects the whole package that is the “real” me | ||
PSR-10. The agent expresses liking and encouragement for me | ||
PSR-11. The agent seems interested in what I am thinking and feeling | ||
PSR-12. The agent values my abilities and opinions | ||
Perceived technology intrusiveness | PTI-1. I felt I was monitored by the IA | |
Self-disclosure | SD-1. My favorite type of music | 0.692 |
SD-2. My favorite flavor | ||
SD-3. My attitude towards romantic relationship | ||
SD-4. The attractiveness I appreciate of opposite sex | ||
SD-5. My pressure in study/work | ||
SD-6. My schedule of daily study/work | ||
SD-7. My salary level | ||
SD-8. My financial status | ||
SD-9. Things I am anxious about | ||
SD-10. My personality | ||
SD-11. My health information | ||
SD-12. My weight/ height | ||
Satisfaction | SAT-1. I am satisfied with the service | 0.907 |
SAT-2. I think the interaction with the IAs is pleasant | ||
SAT-3. I am willing to use this IoT service in this scenario | ||
SAT-4. I will recommend this IoT service to people around me | ||
Personalization | PER-1. The agent understood my needs | 0.733 |
PER-2. The virtual agent knew what I want | ||
PER-3. The virtual agent took my needs as its own preferences | ||
Privacy concern | PRC-1. I feel that my privacy has been compromised | 0.750 |
PRC-2. I feel anxious when interacting with IAs | ||
PRC-3. I feel uneasy when interacting with IAs | ||
Shyness | SHY-1. I feel tense when I’m with people I don’t know well | 0.916 |
SHY-2. I am socially somewhat awkward | ||
SHY-3. I do not find it difficult to ask other people for information.a | ||
SHY-4. I am often uncomfortable at parties and other social functions | ||
SHY-5. When in a group of people, I have trouble thinking of the right things to talk about | ||
SHY-6. It does not take me long to overcome my shyness in new situations.a | ||
SHY-7. It is hard for me to act natural when I am meeting new people | ||
SHY-8. I feel nervous when speaking to someone in authority | ||
SHY-9. I have no doubts about my social competence.a | ||
SHY-10. I have trouble looking someone right in the eye | ||
SHY-11. I feel inhibited in social situations | ||
SHY-12. I do not find it hard to talk to strangers.a | ||
SHY-13. I am shyer with members of the opposite sex | ||
Private self-consciousness | PRSC-1. I’m always trying to figure myself out | 0.883 |
PRSC-2. I reflect about myself a lot | ||
PRSC-3. I’m often the subject of my own fantasies | ||
PRSC-4. I never scrutinize myself | ||
PRSC-5. I’m generally attentive to my inner feelings | ||
PRSC-6. I’m constantly examining my motives | ||
PRSC-7. I sometimes have the feeling that I’m off somewhere watching myself | ||
PRSC-8. I’m alert to changes in my mood | ||
PRSC-9. I’m aware of the way my mind works when I work through a problem | ||
PRSC-10. Generally, I’m not very aware of myself.a | ||
Public self-consciousness | PUSC-1. I’m concerned about my style of doing things | 0.876 |
PUSC-2. I’m concerned about the way I present myself | ||
PUSC-3. I’m self-conscious about the way I look | ||
PUSC-4. I usually worry about making a good impression | ||
PUSC-5. One of the last things I do before I leave my house is look in the mirror | ||
PUSC-6. I’m concerned about what other people think of me | ||
PUSC-7. I’m usually aware of my appearance | ||
Preference | PRF-1. Her appearance meets my aesthetics | 0.859 |
PRF-2. Her appearance is attractive | ||
PRF-3. I like her voice | ||
PRF-4. Her voice is natural and comforting | ||
PRF-5. She has an agreeable personality |

a Reverse-scored item
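The Cronbach α values in the table above quantify the internal consistency of each multi-item scale. A minimal sketch of the computation, assuming responses arrive as an n_respondents × k_items matrix with reverse-scored items already recoded (the function name and example data are illustrative, not from the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()  # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of each respondent's total
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Perfectly consistent items (identical columns) yield alpha = 1.
respondent_scores = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perfect = np.column_stack([respondent_scores] * 3)
```

Note that a reverse-scored item (e.g. SHY-3) must be recoded, for instance as `scale_max + 1 - score` on a Likert scale, before entering the matrix; otherwise α is deflated.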
Appendix C. Interview questionnaires
- What is your overall experience in the experiment?
- Do you feel the agent is offensive/intrusive/unfriendly in the experiment?
- Do you think the agent understands your answers in the experiment?
- Do you feel social anxiety when communicating with the agent?
- What kind of social role do you think the agent plays?
- What are your privacy concerns when interacting with smart agents in daily life?
Cite this article
Zhang, A., Rau, PL.P. She is My Confidante! The Impacts of Social Responsiveness and Video Modality on Self-disclosure Toward CG-Based Anthropomorphic Agents in a Smart Home. Int J of Soc Robotics 14, 1673–1686 (2022). https://doi.org/10.1007/s12369-022-00895-w