Abstract
Conversational agents are increasingly popular in various domains of application. Given their ability to interact with users in human language, anthropomorphizing these agents to positively influence users’ trust perceptions seems justified. Indeed, conceptual and empirical arguments support the trust-inducing effect of anthropomorphic design. However, an opposing research stream that has been widely overlooked provides evidence that human-likeness reduces agents’ trustworthiness. Based on a thorough analysis of the psychological mechanisms underlying the contradictory theoretical positions, we propose that the agent substitution type acts as a situational moderator of the positive relationship between anthropomorphic design and agents’ trustworthiness. We argue that different agent types are associated with distinct user expectations that influence the cognitive evaluation of anthropomorphic design. We further discuss how these differences translate into neurophysiological responses and propose an experimental setup that combines behavioral, self-reported and eye-tracking data to empirically validate the proposed model.
1 Introduction
Substantial advances in artificial intelligence make conversational technology increasingly relevant. Conversational agents are software systems that are able to process, understand and produce natural language interactions [1, 2]. These systems are also referred to as chatbots or intelligent virtual assistants [3]. Business analysts expect conversational agents to revolutionize the way humans interact with information systems in various fields of application [4, 5]. Conversational agents promise convenient, instant and accurate responses to a wide range of user inquiries on the basis of natural language. Users are expected to benefit from greater accessibility and more intuitive interactions. Providers are expected to benefit from reduced costs and improved quality of standardized and recurring tasks [6]. An agent that is able to satisfy users’ expectations thus creates a win-win situation. Real-world use cases include shopping bots that assist consumers in finding and purchasing desired products and services, health assistant bots that provide patients with personalized health information and guidance, and enterprise software bots that enable professionals to interact with enterprise systems such as customer relationship management (CRM) software. The successful design of conversational agents, however, is contingent upon an understanding of users’ expectations and perceptions in order to ensure that users are willing to rely on these agents. Users’ trust is therefore a prerequisite for successful adoption. Indeed, numerous information systems (IS) studies have addressed the role of trust in technology acceptance and use [7,8,9].
Two opposing theoretical positions explain human–agent trust by adopting either a human–human or a human–machine trust perspective [17]. Table 1 provides an overview of the two perspectives. The Computers are Social Actors (CASA) paradigm [10, 19, 20] is a prominent conceptual basis for research interested in understanding how to make computer agents more trustworthy. Studies in this tradition adopt the human–human trust perspective. CASA research builds upon the media-equation hypothesis, which proposes that humans place social expectations, norms and beliefs on computers [10, 11]. This stream of research has produced experimental evidence indicating that anthropomorphism—the extent to which computational systems are perceived to have human characteristics—increases users’ trust in computer agents [21,22,23,24]. Inspired by these findings, designers could conclude that making conversational agents more human-like is essential to create a sustainable trust relationship between agents and users. But is human-likeness of conversational agents really unconditionally beneficial for agents’ trustworthiness? Another stream of literature adopts the human–machine trust perspective and argues that humans place more trust in computerized systems than in other humans [25, 26]. Researchers explain this phenomenon with the automation bias—humans’ propensity to trust computerized decision support in order to reduce their own cognitive effort [27]. Cues of automation are used as a heuristic in decision making because humans perceive computer systems as more objective and rational than other humans [18]. Experimental evidence supports the trust-inducing effect of highly computerized systems [16, 25, 28, 29].
These two opposing positions create an interesting research puzzle. While the human–human trust perspective suggests that anthropomorphic design is beneficial for agents’ trustworthiness, the human–machine trust perspective suggests minimizing anthropomorphic design to make agents more trustworthy. Some researchers have investigated the difference between human–human and human–machine trust [17, 28, 29]. However, the question of whether anthropomorphism induces trust remains unresolved. Understanding which situational factors influence the validity of the two positions is important to increase our conceptual understanding of human–agent trust and to inform designers of conversational technologies. We address this gap with the following research question:
What factors determine whether anthropomorphic design increases users’ initial trust in a conversational agent?
As conversational agents use human language to interact with users, anthropomorphism appears to be a natural characteristic of this technology. Nevertheless, we posit that even for software agents that are able to use human language, it is not unconditionally beneficial to assign them human characteristics and behavior. Thereby, this research enhances the literature on trust in technology by considering the distinct nature of conversational technologies. To address the formulated research question, we investigate the psychological mechanisms that relate anthropomorphic design to users’ trusting behavior towards a conversational agent. In doing so, we seek to identify the situational factors that explain whether or not anthropomorphic design has a positive effect on agents’ trustworthiness. We examine this relationship by considering extant research on trust in technology [7, 30, 31], the CASA paradigm [10] and the automation bias literature [25]. To test our proposed research model, we plan to conduct a NeuroIS experiment that allows us to broaden our understanding of the cognitive processes related to the evaluation of anthropomorphic design and the trustworthiness of conversational agents.
2 Theoretical Development: Trust and Anthropomorphism
Trust on an individual level is defined as “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another” [32]. The initial evaluation of the characteristics of another actor determines perceived trustworthiness and, ultimately, influences the decision to trust [33]. Extant research on technology use acknowledges three conceptually distinct trust dimensions [30, 34]. First, competence reflects perceptions of a trustee’s ability to produce the desired outcome. Second, benevolence reflects the extent to which the trustee is perceived to be motivated to put the interest of the trustor first. Integrity, finally, reflects the extent to which the trustee is perceived to adhere to generally accepted principles and to be honest. Given the objective of the present research, we propose to further distinguish between qualification-based and goodwill-based trustworthiness in order to account for potential variations related to anthropomorphic design. The dimensions of benevolence and integrity are classified as goodwill-based trustworthiness because they consider a trustee’s intentions and motives to fulfill the raised expectations. Intentions and thoughts are a central differentiating aspect between humans and computers. Therefore, we believe it is important to contrast these dimensions with the competence dimension, which is purely qualification-based. The distinction between the volitional and non-volitional dimensions of initial trust is in accordance with the reconceptualization of trust proposed by Barki et al. [35].
Anthropomorphism refers to the human tendency to attribute humanlike characteristics such as intentions, emotions or motivations to non-human agents [36]. According to psychological theory, the tendency to anthropomorphize is not universal but is triggered when humans feel the urge to increase their perceived control of an otherwise unpredictable agent [37]. When the behavior of an agent is unpredictable, anthropomorphizing this agent helps to increase the perceived level of familiarity and control with regard to that agent [21]. Correspondingly, anthropomorphism is positively related to perceived predictability. Predictability is a construct closely related to trust as it reflects the extent to which one is certain about the motives and intentions of a trustee [34]. Yet, predictability in contrast to trust is a neutral construct and can have positive or negative implications. Perceived predictability can foster the positive effect of perceived trustworthiness on trusting behavior (H4, H5) [34]. The study of anthropomorphism in the context of human–computer interaction is closely related to the CASA paradigm [10]. Studies adopting the CASA perspective find evidence that anthropomorphic design, for example via the use of social cues (names, appearance) or human behavior (politeness, gestures), increases perceptions of computer agents’ trustworthiness [21,22,23, 38]. IS studies interested in users’ trust towards recommendation agents [39,40,41,42] and trust in e-commerce [9, 43] confirm the positive effect of anthropomorphic design. In this context, it is important to also consider research on the role of perceived agency on social expectations and behavior (i.e. trusting behavior) in human–computer interactions [44, 45]. Appel et al. [45], for example, found support that knowledge about the agency (human vs. computer) of the interaction partner is related to feelings of social presence (human agency: high social presence). 
However, they also found evidence indicating that the displayed human characteristics (i.e. anthropomorphic design) are more important for social behavior in human–computer interactions than the knowledge about the agency of the interaction partner. In examining the role of anthropomorphism on trust towards conversational agents, it is thus important to make the agency condition explicit to minimize potential confounding effects. In sum, research on agency and anthropomorphic design supports the perspective of the CASA paradigm on the positive relationship between human-likeness and trust.
While, thus, one stream of literature argues that anthropomorphism—by increasing perceptions of control and familiarity—is positively related to trustworthiness (H1, H2) and predictability (H3), automation bias literature proposes the opposite [25, 27]. In accordance with that perspective, humans tend to trust computational systems more than other humans because humans are expected to be imperfect while the opposite is true for automation [28]. Therefore, humans use cues of automation as a heuristic to assess the perceived competence of an agent [27]. As humans naturally seek to minimize cognitive effort, heuristics provide a convenient way to perform such assessments [46]. Accordingly, anthropomorphic design is negatively related to trustworthiness and predictability as cues of humanness indicate lower qualification and also cause more cognitive evaluation efforts.
We, however, propose that both perspectives are valid in the context of conversational agents and that the agent substitution type moderates the effect of anthropomorphic design on perceived trustworthiness and perceived predictability. We propose to differentiate between the agent as human-substitute and as system-substitute. The former refers to instantiations where a conversational agent is implemented to substitute a human expert (e.g. sales person, teacher). The latter refers to conversational agents that are implemented to provide a more user-friendly interface to computer systems (e.g. enterprise software, databases). We expect that, in accordance with the CASA paradigm, agents as human-substitutes, in contrast to system-substitutes, benefit from increased anthropomorphism in terms of trust. We theorize that the substitution type triggers different expectations that translate into the cognitive processes involved in assessing an agent’s trustworthiness.
More precisely, we expect that due to humans’ desire to decrease uncertainty, anthropomorphic design will be positively related to trustworthiness and predictability for human-substitute agents through increased feelings of control and familiarity (H6a, H7a, H8a). Anthropomorphizing unknown and novel interaction partners increases the perceived level of control and similarity because humans can use existing social knowledge in assessing the non-human other [36]. Exclusively human characteristics, including intention, emotion and consciousness, are assigned to an anthropomorphized non-human agent [21, 22]. This is beneficial for agents of the human-substitute type because in their role they must meet not only qualification-related but also goodwill-related expectations. On the other hand, we expect that due to humans’ qualification-focused expectations and their desire to decrease mental effort, the positive effect of anthropomorphic design will be negatively moderated by the system-substitute agent type (H6b, H7b, H8b). The rationale is that conveying human characteristics causes cognitive evaluation effort that does not add value to the predictability and qualification assessment of a system-substitute agent, which is primarily expected to perform non-human tasks efficiently.
We further expect that the differences in cognitive processing related to the trust assessment can be revealed through neurophysiological measures. Riedl et al. [47] conducted a brain imaging study and found evidence of mentalizing effort in interactions with computer agents, albeit less than in interactions with humans. Because increased mentalizing implies increased cognitive effort, in accordance with the Riedl et al. [47] study, we expect anthropomorphic design to cause more cognitive effort than non-anthropomorphic design. This theorizing is also supported by evidence showing that whenever people perceive human attributes in other agents, even in objects, they tend to activate mentalizing (e.g. [48]). Because eye movement measures allow us to infer cognitive states of attention and mental processing [49], we plan to use eye-tracking to investigate the trust evaluation effort. Based on our theoretical discussion, we propose the following research model (Fig. 1).
3 Proposed Experimental Design
To empirically validate our research model, we plan to conduct a controlled experiment that allows us to capture behavioral, self-reported and neurophysiological measures. We propose a 2 × 2 within-subjects factorial design to examine the effects of anthropomorphism and agent type on users’ perceptions of agents’ trustworthiness and trusting behavior. The factors are two levels of anthropomorphic design (high vs. low) and two types of agents (human-substitute vs. system-substitute). To ensure that the agency of the conversational agent does not confound our findings, we will inform participants that all conversational agents are representations of computer algorithms (non-human agency; see [45]). Participants will be provided with a task scenario. They are assigned the role of an associate in the marketing team of a company. They are told that their manager wants to capitalize on the latest progress in chatbot technology and has asked them to evaluate and decide which enterprise chatbot should be implemented to efficiently perform transactions in the CRM system (system-substitute) and which customer service chatbot should be implemented as a first touchpoint for customers (human-substitute). In order to make the manipulation of the agent type more explicit, participants will initially be informed how the respective task is currently performed. For each chatbot type, the participants are provided with a highly and a slightly anthropomorphized agent (high vs. low). To provide a cover story, the participants will be asked to evaluate and decide on the implementation of two other tools for the team. The order of the decision tasks will be randomized.
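As a minimal illustration of this randomization, the four experimental conditions of the 2 × 2 design plus the two cover-story filler tasks could be shuffled per participant as sketched below. The condition labels, the filler-task names and the per-participant seeding scheme are our own illustrative assumptions, not part of the design description above.

```python
import itertools
import random

# The two within-subjects factors of the proposed 2 x 2 design
ANTHROPOMORPHISM = ["high", "low"]
AGENT_TYPE = ["human-substitute", "system-substitute"]

# Cover-story filler decision tasks (names are hypothetical)
FILLERS = ["filler_tool_A", "filler_tool_B"]

def task_sequence(participant_seed: int) -> list:
    """Return a randomized order of the four chatbot evaluation tasks
    plus the two cover-story filler tasks for one participant."""
    rng = random.Random(participant_seed)  # seeded for reproducibility
    conditions = [
        {"anthropomorphism": a, "agent_type": t}
        for a, t in itertools.product(ANTHROPOMORPHISM, AGENT_TYPE)
    ]
    tasks = conditions + [{"filler": f} for f in FILLERS]
    rng.shuffle(tasks)
    return tasks
```

Each participant thus encounters every combination of anthropomorphism level and agent type exactly once, with the filler tasks interleaved at random positions.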
Measurements. Established self-rating scales for trustworthiness and predictability will be adapted from prior literature [9, 30]. During the study, we will use eye-tracking to capture participants’ eye movements, fixations and pupil dilation. According to the eye-mind hypothesis [50], eye movement data is closely related to the cognitive processing of cues in a person’s view. The use of eye-tracking to understand cognitive processes in human–computer interactions is in accordance with prior IS studies (e.g. [43, 51]). In relation to our hypothesis that anthropomorphism requires more cognitive effort to evaluate agents’ trustworthiness, we expect this to translate into longer fixations. Combining this with self-reports on perceived trustworthiness, we expect more intensive processing (fixation data) to align with higher perceived trustworthiness in the human-substitute condition and with lower perceived trustworthiness in the system-substitute condition. In addition, we intend to include data on pupil dilation to measure the uncertainty related to the assessment of anthropomorphic design in the two agent type conditions. According to research in neuroscience, fluctuations in pupil diameter are triggered by states of arousal in cognitively demanding situations such as decision-making under uncertainty [52, 53]. A series of experiments has successfully related perceptions of uncertainty and unexpected outcomes to increased pupil diameter [54,55,56]. Based on these findings, we are confident that the diagnosticity—the precision of a physiological measure in capturing the target construct [57]—of pupil dilation as a measure of uncertainty is established. In line with this body of research, Xu and Riedl (2011), for example, propose including pupil dilation to measure perceptions of uncertainty in e-commerce decision-making tasks [58]. Similarly, we expect that the uncertainty triggered by an inadequate use of anthropomorphism (human- vs. system-substitute type) is reflected in fluctuations of pupil diameter.
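To make the planned aggregation of the two eye-tracking measures concrete, the following sketch shows how mean fixation duration per area of interest (AOI) and a baseline-corrected pupil dilation score could be computed from raw samples. The record format, the AOI labels and the baseline-window approach are illustrative assumptions, not the API of any particular eye tracker.

```python
from statistics import mean

def mean_fixation_duration(fixations, aoi):
    """Mean fixation duration (ms) on one area of interest,
    e.g. the screen region displaying the chatbot agent."""
    durations = [f["duration_ms"] for f in fixations if f["aoi"] == aoi]
    return mean(durations) if durations else 0.0

def pupil_dilation_change(samples, baseline_ms=500):
    """Baseline-corrected pupil diameter: mean diameter after
    stimulus onset minus mean diameter in the pre-stimulus window."""
    baseline = [s["diameter_mm"] for s in samples if s["t_ms"] < baseline_ms]
    trial = [s["diameter_mm"] for s in samples if s["t_ms"] >= baseline_ms]
    return mean(trial) - mean(baseline)
```

Under the hypotheses above, longer mean fixation durations on the agent AOI would indicate more intensive trustworthiness evaluation, and a larger positive dilation change would indicate greater uncertainty.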
In addition to the self-rating scales and neurophysiological measures, the choice made between the offered chatbots represents the behavioral trust measure. Finally, the following control variables will be included due to their established importance in the context of trust and anthropomorphism in human–computer interactions: gender [59], trust propensity [30], computer self-efficacy [60], dispositional anthropomorphism [61] and need for cognition [15].
4 Discussion and Expected Contributions
We identified a contradiction in the literature regarding the use of anthropomorphic design to stimulate users’ trust in computer agents. Because conversational agents are characterized by their ability to interact in human language, it appears intuitive to conclude that such systems benefit from anthropomorphic design. By building upon theoretical knowledge on anthropomorphism, cognitive heuristics and trust, we challenge this intuition. More precisely, we propose that the agent substitution type changes user expectations and perceptions regarding anthropomorphism. Our experimental approach seeks to assess these differences through a combination of self-rating, eye-tracking and behavioral data. By adopting a NeuroIS perspective, we seek to confirm that a misuse of anthropomorphism results in cognitive responses that damage an agent’s trustworthiness. We expect this research to enhance the existing understanding of cognitive processes triggered by anthropomorphic system design and their effect on trust perceptions. In this context, future research will also need to consider the role of the uncanny valley effect—the phenomenon that as non-human objects appear more human-like (anthropomorphic design) they increase perceptions of familiarity and trust until a certain threshold is reached that triggers sudden perceptions of disturbance and rejection due to the objects’ non-human imperfections [62]. Finally, this project also opens new areas for IS research on user trust. Future studies can investigate how trust-violation and trust-repair dynamics differ between the human- and system-substitute types and how this relates to cognitive and emotional responses.
References
Schuetzler, R.M., Grimes, M., Giboney, J.S., Buckman, J.: Facilitating natural conversational agent interactions: lessons from a deception experiment. In: Proceedings of the 35th International Conference on Information Systems, Auckland, pp. 1–16 (2014)
Nunamaker Jr., J.F., Derrick, D.C., Elkins, A.C., Burgoon, J.K., Patton, M.W.: Embodied conversational agent-based Kiosk for automated interviewing. J. Manag. Inf. Syst. 28, 17–48 (2011)
Kassner, L., Hirmer, P., Wieland, M., Steimle, F., Königsberger, J., Mitschang, B.: The social factory: connecting people, machines and data in manufacturing for context-aware exception escalation. In: Proceedings of the 50th Hawaii International Conference on System Sciences (HICSS), pp. 1673–1682 (2017)
Gartner: (2016) https://www.gartner.com/doc/3471559?ref=SiteSearch&sthkw=chatbot&fnl=search&srcId=1–3478922254
Forbes: (2016) http://www.forbes.com/sites/parmyolson/2016/02/23/chat-bots-facebook-telegram-wechat/#4c95b5762633
Braun, A.: Chatbots in der Kundenkommunikation. Springer, Berlin (2013)
Wang, W., Benbasat, I.: Attributions of trust in decision support technologies: a study of recommendation agents for e-commerce. J. Manag. Inf. Syst. 24, 249–273 (2008)
Pavlou, P.A., Gefen, D.: Building effective online marketplaces with institution-based trust. Inf. Syst. Res. 15, 37–59 (2004)
Gefen, D., Straub, D.W.: Consumer trust in B2C e-commerce and the importance of social presence: experiments in e-products and e-services. Omega 32, 407–424 (2004)
Nass, C., Steuer, J., Tauber, E.R.: Computers are social actors. In: Proceedings of the SIGCHI conference on human factors in computing systems, pp. 72–78, New York (1994)
Reeves, B., Nass, C.: The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press, New York (1996)
Cassell, J., Bickmore, T.: External manifestations of trustworthiness in the interface. Commun. ACM 43, 50–56 (2000)
Bickmore, T., Cassell, J.: Relational agents. In: Proceedings of the SIGCHI conference on Human Factors in Computing Systems—CHI ’01, New York (2001)
Muir, B.M.: Trust between humans and machines, and the design of decision aids. Int. J. Man Mach. Stud. 27, 527–539 (1987)
Dijkstra, J.J., Liebrand, W.B.G., Timminga, E.: Persuasiveness of expert systems. Behav. Inf. Technol. 17, 155–163 (1998)
Dijkstra, J.J.: User agreement with incorrect expert system advice. Behav. Inf. Technol. 18, 399–411 (1999)
Madhavan, P., Wiegmann, D.A.: Similarities and differences between human–human and human–automation trust: an integrative review. Theor Issues Ergon. Sci. 8, 277–301 (2007)
Mosier, K.L., Skitka, L.J.: Human decision makers and automated decision aids: made for each other. Autom. Hum. Perform. Theory Appl. pp. 201–220 (1996)
Nass, C., Moon, Y.: Machines and mindlessness: social responses to computers. J. Soc. Issues 56, 81–103 (2000)
Riedl, R., Mohr, P., Kenning, P., Davis, F., Heekeren, H.: Trusting humans and avatars: behavioral and neural evidence. In: Proceedings of the Thirty Second International Conference on Information Systems, Shanghai (2011)
Epley, N., Waytz, A., Akalis, S., Cacioppo, J.T.: When we need a human: motivational determinants of anthropomorphism. Soc. Cogn. 26, 143–155 (2008)
Waytz, A., Heafner, J., Epley, N.: The mind in the machine: anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 52, 113–117 (2014)
Gong, L.: How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Comput. Hum. Behav. 24, 1494–1509 (2008)
de Visser, E.J., Monfort, S.S., McKendrick, R., Smith, M.A.B., McKnight, P.E., Krueger, F., Parasuraman, R.: Almost human: anthropomorphism increases trust resilience in cognitive agents. J. Exp. Psychol. Appl. 22, 331–349 (2016)
Dzindolet, M.T., Peterson, S.A., Pomranky, R.A., Pierce, L.G., Beck, H.P.: The role of trust in automation reliance. Int. J. Hum. Comput. Stud. 58, 697–718 (2003)
Skitka, L.J., Mosier, K.L., Burdick, M.: Does automation bias decision-making? Int. J. Hum. Comput Stud. 51, 991–1006 (1999)
Mosier, K.L., Skitka, L.J., Heers, S., Burdick, M.: Automation bias: decision making and performance in high-tech Cockpits. Int. J. Aviat. Psychol. 8, 47–63 (1998)
de Visser, E.J., Krueger, F., McKnight, P.: The world is not enough: trust in cognitive agents. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting (2012)
Merritt, S.M., Heimbaugh, H., LaChapell, J., Lee, D.: I trust it, but I don’t know why. Hum. Factors J. Hum. Factors Ergon. Soc. 55, 520–534 (2013)
McKnight, D.H., Choudhury, V., Kacmar, C.: Developing and validating trust measures for e-commerce: an integrative typology. Inf. Syst. Res. 13, 334–359 (2002)
Komiak, S.Y.X., Wang, W., Benbasat, I.: Trust building in virtual salespersons versus in human salespersons: similarities and differences. e-Service J. 3, 49–64 (2004)
Rousseau, D.M., Sitkin, S.B., Burt, R.S.: Not so different after all: a cross-discipline view of trust. Acad. Manag. Rev. 23, 393–404 (1998)
Tomlinson, E.C., Mayer, R.C.: The role of causal attribution dimensions in trust repair. Acad. Manag. Rev. 34, 85–104 (2009)
Mayer, R.C., Davis, J.H., Schoorman, F.D.: An integrative model of organizational trust. Acad. Manag. Rev. 20, 709–734 (1995)
Barki, H., Robert, J., Dulipovici, A.: Reconceptualizing trust: a non-linear Boolean model. Inf. Manag. 52, 483–495 (2015)
Epley, N., Waytz, A., Cacioppo, J.T.: On seeing human: a three-factor theory of anthropomorphism. Psychol. Rev. 114, 864–886 (2007)
Waytz, A., Morewedge, C.K., Epley, N., Monteleone, G., Gao, J.-H., Cacioppo, J.T.: Making sense by making sentient: effectance motivation increases anthropomorphism. J. Pers. Soc. Psychol. 99, 410–435 (2010)
Riedl, R., Javor, A., Gefen, D., Felten, A., Reuter, M.: Oxytocin, trust, and trustworthiness: the moderating role of music (2017)
Qiu, L., Benbasat, I.: Evaluating anthropomorphic product recommendation agents: a social relationship perspective to designing information systems. J. Manag. Inf. Syst. 25, 145–182 (2009)
Al-Natour, S., Benbasat, I., Cenfetelli, R.T.: Creating rapport and intimate interactions with online virtual advisors. SIGHCI 2007 Proceedings (2007)
Al-Natour, S., Benbasat, I., Centefelli, R.: Trustworthy virtual advisors and enjoyable interactions: designing for expressiveness and transparency. In: Proceedings of the European Conference on Information Systems (ECIS). Regensburg, Germany (2010)
Benbasat, I., Dimoka, A., Pavlou, P.A., Qiu, L.: Incorporating social presence in the design of the anthropomorphic interface of recommendation agents: insights from an fMRI study. In: Proceedings of the International Conference on Information Systems (ICIS) (2010)
Cyr, D., Head, M., Larios, H., Pan, B.: Exploring human images in website design: a multi-method approach. MIS Q. 33, 539–566 (2009)
von der Pütten, A.M., Krämer, N.C., Gratch, J., Kang, S.-H.: “It doesn’t matter what you are!” Explaining social effects of agents and avatars. Comput. Hum. Behav. 26, 1641–1650 (2010)
Appel, J., von der Pütten, A., Krämer, N.C., Gratch, J.: Does humanity matter? Analyzing the importance of social cues and perceived agency of a computer system for the emergence of social reactions during human–computer interaction. Adv. Hum. Comput. Interact. pp. 1–10 (2012)
Fiske, S.T., Taylor, S.E.: Social Cognition: From Brains to Culture. Sage, London (2013)
Riedl, R., Mohr, P.N.C., Kenning, P.H., Davis, F.D., Heekeren, H.R.: Trusting humans and avatars: a brain imaging study based on evolution theory. J. Manag. Inf. Syst. 30, 83–114 (2014)
Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., Kircher, T.: Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 3, 1–11 (2008)
Poole, A., Ball, L.J.: Eye tracking in HCI and usability research. Encycl. Hum. Comput. Interact. 1, 211–219 (2006)
Just, M.A., Carpenter, P.A.: Eye fixations and cognitive processes. Cogn. Psychol. 8, 441–480 (1976)
Léger, P.M., Sénecal, S., Courtemanche, F.: Precision is in the eye of the beholder: application of eye fixation-related potentials to information systems research. J. Assoc. Inf. Syst. 15, 651–678 (2014)
Gilzenrat, M.S., Nieuwenhuis, S., Jepma, M., Cohen, J.D.: Pupil diameter tracks changes in control state predicted by the adaptive gain theory of locus coeruleus function. Cogn. Affect. Behav. Neurosci. 10, 252–269 (2010)
Reimer, J., Froudarakis, E., Cadwell, C.R., Yatsenko, D.: Pupil fluctuations track fast switching of cortical states during quiet wakefulness. Neuron 84, 355–362 (2014)
Satterthwaite, T.D., Green, L., Myerson, J., Parker, J.: Dissociable but inter-related systems of cognitive control and reward during decision making: evidence from pupillometry and event-related fMRI. Neuroimage. pp. 1017–1031 (2007)
Preuschoff, K., Hart, B.T.: Pupil dilation signals surprise: evidence for noradrenaline’s role in decision making. Front. Neurosci. 5, article 115 (2011)
Lavín, C., Martín, R.S., Jubai, E.R.: Pupil dilation signals uncertainty and surprise in a learning gambling task. Front. Behav. Neurosci 7, article 218 (2014)
Riedl, R., Davis, F.D., Hevner, A.R.: Towards a NeuroIS research methodology: intensifying the discussion on methods, tools, and measurement. J. Assoc. Inf. Syst. 15, i–xxxv (2014)
Xu, Q., Riedl, R.: Understanding online payment method choice: an eye-tracking study. In: Proceedings of the 32th International Conference on Information Systems, pp. 1–12, Shanghai (2011)
Riedl, R., Hubert, M., Kenning, P.: Are there neural gender differences in online trust? An fMRI study on the perceived trustworthiness of eBay offers. MIS Q. 34, 397–428 (2010)
Compeau, D.R., Higgins, C.A.: Computer self-efficacy: development of a measure and initial test. MIS Q. pp. 189–211 (1995)
Waytz, A., Cacioppo, J., Epley, N.: Who sees human? Perspect. Psychol. Sci. 5, 219–232 (2010)
MacDorman, K.F., Green, R.D., Ho, C.C., Koch, C.T.: Too real for comfort? Uncanny responses to computer generated faces. Comput. Hum. Behav. 25, 695–710 (2009)
© 2018 Springer International Publishing AG
Seeger, AM., Heinzl, A. (2018). Human Versus Machine: Contingency Factors of Anthropomorphism as a Trust-Inducing Design Strategy for Conversational Agents. In: Davis, F., Riedl, R., vom Brocke, J., Léger, PM., Randolph, A. (eds) Information Systems and Neuroscience. Lecture Notes in Information Systems and Organisation, vol 25. Springer, Cham. https://doi.org/10.1007/978-3-319-67431-5_15
Print ISBN: 978-3-319-67430-8
Online ISBN: 978-3-319-67431-5