1 How Language Influences Trust Building

Words do not just transport a certain message. They also indicate our emotional states, our deeply grounded attitudes, and our relationship to our communication partner. Hence, words communicate not only with but also beyond their content. When it comes to building trust with words, one might well ask for trust directly (e.g., “Please trust me, I know what I’m talking about”). However, language offers further ways to indicate that one’s message is trustworthy. When deciding whether or not to trust somebody, people have to analyze the language of the message itself. If message givers talk in an eloquent and structured way, recipients may assess their words as more trustworthy than the same information presented in an unelaborated way (Lev-Ari and Keysar 2010). Recipients seem to be well aware of the mechanisms guiding how and when trustworthiness is attributed (Thon and Jucks submitted). In general, trustworthiness is not an all-or-nothing issue, but might best be conceptualized as a continuum. Finally, but just as importantly, trustworthiness is also built up through specific wordings (e.g., using words such as “mostly” to emphasize the tentativeness of a scientific finding).

Many settings provide no more information than the actual words. In these cases, the words alone have to serve as an indicator of whom to trust. Following the model of organizational trust (Mayer et al. 1995), trustworthiness is then assessed by interpreting these words in terms of the ability, benevolence, and integrity they indicate in the provider of the information. Ability refers to the speakers’ competence; benevolence, to their willingness to help; and integrity, to their adherence to a set of values that the trustor finds acceptable. In the following, we shall outline what certain words contribute to the attribution of trustworthiness. We shall illustrate this with three scenarios that are relevant in online communication: giving peer-to-peer advice, retrieving information from other users of varying expertise, and interacting with a spoken dialogue system. In each of these scenarios, we shall focus on the specific way that words contribute to trust building through: (a) self-disclosure and the communication of empathy, (b) technical language and cues regarding the fragility of evidence, and (c) perceiving a shared view through lexical overlaps. Finally, we outline future research perspectives on the role of language in online trust-building processes.

2 Self-Disclosure and Empathy in Online Peer-to-Peer Interactions

The Internet has become a social environment that people use to exchange personal information. Various social platforms offer space for public discussions in which people may post their specific questions to unknown others. This often takes place in a peer-to-peer context, for instance, students exchanging knowledge and advice on procrastination in online forums (Thon and Jucks 2014). Procrastination is the behavior of postponing intended actions despite possible negative consequences, and it is a phenomenon with which most students are familiar (cf. Steel 1991). Imagine a typical forum post from a student starting with a description of the context of her problem: For example, she wants to pass several exams in the coming weeks, but instead of studying, she often finds herself browsing the Internet. Another forum user then responds by making suggestions on how to overcome her procrastination (e.g., by setting an incentive for successful study sessions). When giving advice, people often draw on their own personal experience (e.g., “I always reward myself after having worked through one chapter”). This “process of making the self known to others” (Jourard and Lasakow 1958, p. 91) is referred to as self-disclosure. The extent of information shared is influenced by the discloser, the recipient, and the specific context of the conversation (see Ignatius and Kokkonen 2007, for a review), and it varies along three dimensions: duration, breadth, and depth (cf. Cozby 1973). Duration refers to the length of the self-disclosure and can be measured by the number of utterances starting with a first-person personal pronoun (e.g., “I procrastinate”). Breadth describes the range of different contents addressed (cf. Barak and Gluck-Ofri 2007), that is, (a) personal information or facts (e.g., one’s academic year), (b) thoughts or opinions (e.g., hoping to pass an exam), and (c) feelings (e.g., being sad). Depth, finally, refers to the intimacy of the disclosed content.

Sharing personal information with one person but not another permits the establishment of trust and intimacy in relationships. At the same time, providing personal information poses a risk by exposing one’s vulnerability to the recipient. Sharing information therefore demonstrates a willingness to be vulnerable and can be interpreted as an offer of trust. Such an offer is often reciprocated (cf. Jourard 1971), leading to increased self-disclosure in the communication (e.g., Barak and Gluck-Ofri 2007; McAllister and Bregman 1985). Hence, the disclosure of personal experiences does not just provide information on, for example, possible strategies to prevent procrastination; it also functions as a communication strategy supporting trust-building processes. However, boundaries for the ownership of the disclosed information need to be negotiated and managed (cf. Petronio 2002). “If one trusts the recipient of the personal information, then one can act with relative freedom” (Joinson and Paine 2007, p. 242). Nonetheless, compared to talking to a fellow student in person, online communication transmits information on fewer dimensions (cf. Clark and Brennan 1991). Information about the specific audience of a conversation (e.g., bystanders, eavesdroppers, and side participants; cf. Clark 1996) is not as overt as in face-to-face settings, and this makes the question of whom we are talking to more difficult to answer.
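
To make the duration measure more concrete, the following is a minimal sketch that counts utterances opening with a first-person pronoun. It is offered only as an illustration under strong simplifying assumptions (naive sentence splitting and a small, hypothetical pronoun list); it is not the coding scheme used in the studies cited above, and breadth and depth would still require manual content coding.

```python
import re

# First-person personal pronouns; this small list is an assumption for illustration.
FIRST_PERSON = {"i", "i'm", "i've", "we", "my", "our"}

def self_disclosure_duration(post: str) -> int:
    """Count utterances that start with a first-person pronoun."""
    utterances = re.split(r"[.!?]+", post)  # naive sentence splitting
    return sum(
        1
        for u in utterances
        if u.strip() and u.strip().split()[0].lower() in FIRST_PERSON
    )

post = ("I want to pass several exams in the coming weeks. "
        "I often find myself browsing the Internet instead of studying. "
        "Does anyone have advice?")
print(self_disclosure_duration(post))  # -> 2
```
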
Online media play a crucial role in these communication settings because they address different audiences and thereby provide different levels of privacy. Interlocutors in e-mail communication actively determine who takes part in the conversation. Compared to other online communication media, e-mail therefore feels relatively private (Frye and Dornisch 2010). In contrast, online forums have multiple actors and audiences, and the extent of privacy is determined by the features of the specific forum. Although there is high variability across platforms, most forums are public, and privacy is therefore low. Thus, one-to-one interactions between interlocutors in online forums are only pseudo-private: The boundary between private and public communication is blurred. When two students exchange their personal experiences of procrastination in an online forum thread, millions of people could be lurking. And because self-disclosure can also lead to negative experiences when establishing a relationship and seeking advice, such as being rejected, it remains risky even under anonymous conditions.

As outlined above, revealing and concealing private information are both of interest when seeking advice from others on procrastination. However, they are also dialectical, that is, opposing processes. Models in different disciplines attempt to explain how people deal with this dialectic by proposing some form of weighing of the benefits and risks of self-disclosure (e.g., Afifi and Steuber 2009; Omarzu 2000). Whereas social interactions require some kind of self-disclosure in order to establish a trusting relationship, the extent of this self-disclosure can be regulated to satisfy privacy needs. Addressing this interplay between self-disclosure and privacy in different media, Thon and Jucks (2014) investigated how sensitively users assess and react to their specific audience in online communication. A predominantly student sample answered a written inquiry from a student seeking advice on procrastination. Thon and Jucks varied whether the inquiry contained high emotion-based self-disclosure or not and whether the communication situation was public or private. Results showed that participants sensitively detected the interlocutor’s self-disclosure and were aware of the degree of privacy in the context. Surprisingly, high emotion-based self-disclosure had a negative impact on perceived benevolence. It may well be that the problem-focused content of the inquiry contributed to the perception that the interlocutor in the low self-disclosure condition was more concerned about the well-being of others. In addition, participants disclosed personal information in their responses regardless of whether they had received an offer of trust in the first place and regardless of the privacy context. Interestingly, although participants did not react specifically to the offer of trust, they did respond with more positive emotion words when the inquiry contained emotional self-disclosure. Engaging with and connecting to the interlocutor’s emotional state can be interpreted as a form of empathy that possibly facilitates the formation of trust. Trust and empathy are closely related (e.g., Ickes et al. 1990), but in face-to-face interactions, empathy is conveyed largely through nonverbal cues (Ickes 1997; Ickes et al. 1990). Communicating empathy through emotion words might therefore be another communication strategy with which to establish trust in text-based online communication. Similarly, Feng et al. (2004) found that people who not only inferred others’ feelings correctly but also gave supportive responses increased interpersonal trust in an online textual environment. Overall, these findings indicate that both self-disclosure and communicated empathy are important mechanisms for establishing trust in online peer-to-peer communication, and that they might even be prioritized over other important needs in online environments, such as privacy (Moll and Pieschl 2015; Thon and Jucks 2014).

3 Technical Language and Cues Regarding the Fragility of Evidence as Indicators of an Interlocutor’s Trustworthiness

People seek health information from others in order to make health-related decisions. Although access to online information can empower health information seekers (cf. Bromme and Jucks 2014; Hu and Sundar 2009), the availability of information per se does not necessarily contribute to patients’ actual understanding of a medical issue. Moreover, online communication is not restricted to peer-to-peer exchanges—as in the previous scenario of this chapter—but also extends to expert–layperson communication. Many “ask the expert” sites provide a communication platform between an expert and a less knowledgeable person, with online health advice being a very prominent topic (Jucks and Bromme 2007). The joint goal in such communication settings is to empower patients to make well-informed health-related decisions. However, patients’ limited understanding of medical issues creates a dependence on the advice givers. Tseng and Fogg (1999) argue that one cannot depend on information as such but only on the person providing it. Hence, trust in information always involves credibility. Overall, while searching for information on the Internet, patients have to identify aspects indicating the trustworthiness of the authors and the credibility of the advice. Asking themselves whom to trust involves questions such as whether experts are trustworthy because of their academic status, because of their social role, or because they talk knowledgeably. Should they place more trust in an advice giver who uses medical terminology, or should they believe someone who provides comprehensible information?

According to Mayer et al. (1995), the question of whom to trust is a question of who is able to give credible advice, is benevolent, and has integrity. Persons with high levels of ability are termed experts in their specific area of expertise. With regard to health-related issues, a medical expert should be expected to be more credible than someone without medical expertise (e.g., a lawyer). Medical expertise can be characterized by features such as status, long-term training, certificates, and membership of a professional organization. Nonetheless, in an online environment, there are no doctor’s assistants and no status-indicating features such as lab coats or expensive medical equipment. Hence, experts giving health advice are identified primarily by their professional knowledge. Because the vast majority of online communication is text-based, word use becomes crucial. As one example of experts’ specific language use, technical language could enhance perceived trustworthiness by indicating expertise. Features influencing the comprehensibility of technical language are the use of technical terms, the frequency of words, and the complexity of sentences (Pickering and Garrod 2004). Focusing on word choice, technical terms can be defined as “meaningful entities needed to perform a task in the area of expertise” (Bromme 1996, p. 184, translated). Hence, technical terms are used in particular contexts (e.g., when doctors discuss a medical case), but may not be well understood outside these areas (e.g., when doctors communicate with patients). Therefore, assumptions regarding the effect of technical terms on credibility are twofold: On the one hand, technical terms are perceived as more complex and difficult to understand (Jucks and Paus 2011) and can therefore serve as an indicator of a person’s expertise. On the other hand, translating information containing technical language into comprehensible information when giving health advice to laypersons can also be understood as a function of expertise (DeVito 1995; Stehr and Grundmann 2015; Windshuttle and Elliot 1999). That is, experts need to adapt their language to a nonexpert audience (Jucks et al. 2016). Hence, online health advice might be perceived as more credible when it is easy to understand. Previous studies on the effect of technical language on credibility have delivered mixed findings. On the one hand, Thomm and Bromme (2012) found that scientific texts including relativizations (i.e., hedging qualifiers) and citations are perceived as more scientific and credible. These findings suggest that people hold assumptions about the use of scientific language. When investigating the influence of tentativeness cues on the credibility of scientific arguments, Thiebach et al. (2015) demonstrated that arguments including cues to the fragility of evidence were judged to be more scientific and more credible. These results indicate that more complicated language might increase credibility. On the other hand, when examining the credibility of written assertions, Scharrer et al. (2012) showed that people judged assertions to be less plausible when the texts included complicated terminology. Moreover, Thon and Jucks (submitted) varied cues on the profession (medical vs. nonmedical) and the word choice (technical vs. everyday terminology) of people providing online health advice.
Note that this experiment was carried out in German, a language in which technical terms used by doctors (e.g., myocardial infarction) often have an everyday equivalent used by laypersons (e.g., heart attack). Participants were asked to judge the credibility of the medical information given (e.g., “A myocardial infarction is one of the most common causes of death in industrialized nations”) and, subsequently, the trustworthiness of the advice givers (e.g., “Mr. Peters, doctor of medicine”). Results demonstrated that authors with a medical profession were perceived not only as having higher expertise but also as being more benevolent and as having more integrity. Hence, health advice givers with a medical profession were judged to be more trustworthy than those with a nonmedical profession. In addition, messages from authors with a medical profession were more often accepted as true than messages from authors with a nonmedical profession. The advice givers’ word choice interacted with the background information about their professions: Statements including everyday terms were judged to be more credible when used by nonmedical authors, whereas statements including technical terms were judged to be more credible when used by medical authors. Overall, medical authors were judged to be the most credible independently of their word choice, whereas nonmedical authors were credible only when using everyday terminology. These findings indicate complex relations between aspects of technical language and their impact on the attribution of trustworthiness to experts. Even though difficult specialist terminology can signal expertise, the findings also indicate that comprehensible everyday terminology leads to more credibility when the interlocutor’s medical expertise is low. Hence, simply talking big will not increase the credibility of one’s information in online communication, but might actually decrease it—unless one can further demonstrate that one is qualified to use the language. Future research should focus more strongly on the interplay between directly provided information (e.g., “trust me, I am a doctor”) and expertise conveyed implicitly through language, such as the use of medical terms when giving health advice. It would also be interesting to take into account the complex nature of interpersonal interactions by examining, for example, multiple turns between interlocutors.

4 Repetition of Words as an Indicator of Trustworthiness in a Spoken Dialogue System

Having examined written communication in the previous two sections, we now move on to spoken communication in the third scenario. Furthermore, we explore communication with an artificial communication partner.

Language reflects the relationships we maintain with other people, and we react to the way people talk to us. For example, we can adjust our choice of words to those employed by our conversational partner (so-called lexical alignment; e.g., Branigan and Pearson 2006), and we can express either convergence with or divergence from the conversational partner and the topic (Communication Accommodation Theory; Giles et al. 1991). In a discussion on abortion, for example, using the term “unborn child” instead of the previously used term “fetus” signals the position of an anti-abortionist and the wish to distance oneself from the presented point of view (cf. Danet 1980; see also lexical differentiation, Van der Wege 2009). Furthermore, linguistic adjustment can serve as a marker of the social quality of a relationship (Giles et al. 1991; Ireland and Pennebaker 2010). In addition, adopting the communication partner’s words reflects politeness (Branigan et al. 2010, p. 2358; Torrey et al. 2006) and evokes positive feelings in the other person (Bradac et al. 1988; Branigan et al. 2010; van Baaren et al. 2003).

Nowadays, we communicate not only with other humans but also with computers. As early as 1966, the chatterbot “Eliza” was able to hold quite natural written conversations with people, and people entrusted Eliza with a great deal of information (Weizenbaum 1966). Linguistic patterns such as the adjustment described above are likewise observable in human–computer interaction. As Brennan (1996) has described, people react politely when the system is polite, although this may cause miscommunication. Nonetheless, the type of conversational partner also influences linguistic behavior. In a series of experiments, Branigan et al. (2011) had subjects perform a word reference task in a Wizard-of-Oz setting. That is, they made people believe they were communicating with either a human or a computer when, in reality, they always collaborated with a computer. Branigan and her colleagues found a greater number of adopted words when people thought they were communicating with computers than when they were convinced they were communicating with humans. Furthermore, subjects adopted more of the terms used by computers when they considered them to be less capable (although, in fact, the computers’ capabilities remained the same). In an experiment with a more natural setting, we examined how convergence on the lexical level depends on the conversational partner and language style (Linnemann and Jucks 2016). We varied whether the communication partner was a human or a computer system (realized as a Wizard-of-Oz experiment) and whether the employed language style was elaborated or restricted. The number of adopted words turned out to be higher when the conversational partner employed a restricted language style. Moreover, an elaborated language style led subjects to prefer the human partner in terms of likeability and cognitive demand.
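
To illustrate what a measure of lexical convergence could look like, the following is a minimal sketch that computes the proportion of a partner’s content words reused in a reply. The tokenization, stopword list, and overlap measure are simplifying assumptions for illustration; they are not the published coding procedures of Branigan et al. (2011) or Linnemann and Jucks (2016).

```python
# Simple stopword list so that only content words count as "adopted";
# an assumption for illustration, not a validated list.
STOPWORDS = {"the", "a", "an", "and", "or", "is", "are", "it",
             "to", "of", "in", "please", "okay"}

def content_words(utterance: str) -> set:
    """Lowercase, strip basic punctuation, and drop function words."""
    tokens = utterance.lower().replace(",", " ").replace(".", " ").split()
    return {t for t in tokens if t not in STOPWORDS}

def adoption_rate(partner_utterance: str, reply: str) -> float:
    """Proportion of the partner's content words reused in the reply."""
    partner = content_words(partner_utterance)
    if not partner:
        return 0.0
    return len(partner & content_words(reply)) / len(partner)

# Did the speaker adopt the partner's term "couch" rather than saying "sofa"?
print(adoption_rate("Move the couch, please", "Okay, the couch is moved"))  # -> 0.5
```

A set-overlap measure like this deliberately ignores word order and frequency; finer-grained analyses would track which specific referring expressions (e.g., “couch” vs. “sofa”) are taken over across turns.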

Meanwhile, computer systems have become more and more powerful: They are now approaching natural communication abilities and are capable of affective interaction, multiple speech acts, and multilevel learning (López-Cózar et al. 2015).

Almost natural human–computer interactions are finding their way into everyday life through spoken dialogue systems (SDS; Edlund et al. 2008). These are computer systems that are able to understand and produce spoken language in various applied fields. For example, López-Cózar et al. (2015) list intelligent environments, in-car applications, personal assistants, smart homes, and interaction with robots and with assistants for disabled and elderly people. In particular, the use of personal assistants—well-known and common representatives are, for instance, Siri from Apple® and Google Now® installed on mobile devices—affects many people’s communicative routines and personal data. With each question and answer, users reveal sensitive data without knowing exactly where they will be stored and how they will be processed and combined. SDS have become increasingly powerful (Allen et al. 2001). When SDS are capable of maintaining natural conversations, people can perceive them as effective communication partners and social actors (Nass and Lee 2000). At that point, trust comes into play (cf. Tseng and Fogg 1999): Users can assess an SDS on personal components such as the ability, benevolence, and integrity found in Mayer et al.’s (1995) model. Furthermore, research and development aim to construct SDS that are able to maintain long-term relationships with particular users (López-Cózar et al. 2015). For that purpose, SDS have to be able to recognize individual users through their voice, preferences, interests, and social relationships. Not only should SDS always be ready to answer requests and perform tasks (human initiative; Mavridis 2015), but they must also be proactive and, on occasion and when appropriate, start a conversation by themselves or contribute helpful information without being asked explicitly (mixed-initiative dialogue; Mavridis 2015). Thus, SDS can represent attentive, caring, and close assistants that are always ready to talk. Therefore, if people are to accept their daily use, they need to perceive SDS as benevolent and as having integrity (Mayer et al. 1995). To benefit from these tempting possibilities, users need to trust their human-like communication partner. Prior research has shown that even far less developed systems can get people to disclose intimate information about themselves and may even be preferred to human beings (Petrie and Abell 1995; Weisband and Kiesler 1996). Today, a permanent connection to the Internet dramatically enhances the knowledge and communication abilities of computer systems (Mavridis 2015). Admittedly, this involves the storage and use of personal data on remote servers beyond the control of users. That is, users depend on the goodness and morality of their artificial communication partner—they have to trust it (Tseng and Fogg 1999).

What does this mean in detail? Consider the example of a student in her first term: Beginning one’s studies represents an important step in life, requiring much organization and finding solutions to personal issues such as familiarizing oneself with a new environment, making new friends, or coping with loneliness. A personal assistant can support the student, on the one hand, by delivering information and, on the other hand, by listening actively and showing sympathy. On a linguistic level, the latter is strengthened by lexical adjustment as depicted above, and this simultaneously increases the student’s willingness to trust (see Linnemann and Jucks 2016). The student can activate the SDS either by pressing a button on her device or by talking directly to the system and calling its name. The common systems Siri and Google Now, for instance, can be addressed by saying “Hey Siri!” or “Okay Google!”, respectively, followed by a request. No physical contact with a technical device is necessary, and the user can employ a normal, natural voice. This activation is the first trust-based decision directed toward the SDS. Imagine this student activating her personal SDS after a long day of studying, having lots of questions and new experiences in mind:

Student: “I attended my first lecture today.”

SDS: “You attended your first lecture? How did it go?”

Student: “Whew, it was really exciting, but also very hard to follow. The topic was quite complicated and it was a bit noisy. Is that normal?”

SDS: “It is quite normal that the first lecture is really exciting and hard to follow. A survey revealed that 80 % of freshmen reported feelings of excitement and 70 % felt overwhelmed by the new situation.”

The SDS actively shows interest in the student’s activities. Repeating her statement in the form of an enquiry underlines its attentiveness and intention to care, and it elicits a more detailed answer from the student: She entrusts the system with a personal experience. The SDS comments on her utterance and enriches it with information retrieved from the Internet. This strengthens the relationship and enhances the perceived trustworthiness of the SDS. The duration of the relationship makes it easier for the student to assess the SDS’s capabilities and leads her to monitor the wording of her own utterances less and less consciously. As a consequence, the SDS’s adjustment of words usually goes unnoticed as well.
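
To make this echo mechanism concrete, here is a minimal sketch of how an enquiry that reuses the student’s own words might be generated. It is a toy illustration of lexical repetition under strong simplifying assumptions (naive tokenization, a small hypothetical pronoun map, and a canned follow-up question); it does not describe how Siri, Google Now, or any other SDS is actually implemented.

```python
# Simplistic first- to second-person mapping; an assumption for illustration.
PRONOUN_MAP = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "we": "you", "our": "your",
}

def echo_as_enquiry(statement: str) -> str:
    """Turn a user's statement into an enquiry that reuses the user's words."""
    words = statement.rstrip(".!? ").split()
    shifted = [PRONOUN_MAP.get(w.lower(), w) for w in words]
    echo = " ".join(shifted)
    # The follow-up question is canned for this example.
    return echo[0].upper() + echo[1:] + "? How did it go?"

print(echo_as_enquiry("I attended my first lecture today."))
# -> "You attended your first lecture today? How did it go?"
```
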

Moreover, the SDS is able to handle a wide range of requests such as writing e-mails, managing finances, calculating taxes, translating, looking something up on the Internet, and so forth. Once again, this involves very personal information, and trust is indispensable for smooth and natural communication between humans and SDS. We (Linnemann and Jucks 2016) have started to identify how language—and especially lexical adjustment—serves as a mediating parameter. Many more aspects of language—phonological, syntactic, pragmatic, et cetera—have to be taken into account and investigated further. From a psychological perspective, it will be interesting to compare trust building in interpersonal communication (e.g., Thon and Jucks 2014) with trust building in human–computer interaction (e.g., Linnemann and Jucks 2016). This could contribute to the acceptance and further development of SDS. The economic relevance of SDS is very high: In 2012, the global market for intelligent virtual assistants was valued at over 350 million US dollars, and the annual growth rate from 2013 to 2020 is expected to exceed 30 % (Grand View Research 2014). Furthermore, the combination of technology and language seems to offer a very promising way to gain insights into what accounts for trustworthiness: When technology comes into play, we gain a comparison point for human behavior.

5 Future Perspectives on the Issue of Trust in Words and Words of Trust

In this chapter, we have outlined three scenarios in which the wording of messages contributes to trust building: (a) self-disclosure and the communication of empathy, (b) technical language and cues regarding the fragility of evidence, and (c) perceiving a shared view through lexical overlaps. What they all have in common is that the choice of words is a vehicle for establishing trust in online communication—regardless of whether it is written or spoken and whether the interaction is with another human or an artificial interlocutor. We have pointed out that there are multiple and complex reactions to wording. This chapter has provided an overview of the literature on typical online communication settings and has shown that Mayer et al.’s (1995) model of organizational trust offers a helpful framework with which to investigate the influence of language on trust. The components of the model—ability, benevolence, and integrity—can be applied to the broad range of settings presented in this chapter.

In the first setting of a peer-to-peer forum discussion, we outlined the role of self-disclosure in building trust. Because sharing information indicates a willingness to be vulnerable, we assumed that self-disclosure is interpreted as an offer of trust and therefore increases trustworthiness. However, in Thon and Jucks’ (2014) problem-based advice setting, emotion-based self-disclosure even had detrimental effects on the perception of a student’s benevolence. The second setting addressed the retrieval of medical information from other online users with varying medical expertise. Thon and Jucks’ (submitted) findings indicate complex relations in the attribution of trustworthiness between background information on users’ expertise and the technicality of their messages. The third setting, communication with an SDS, shows that the model’s components can also be used to investigate human–computer interaction. Linnemann and Jucks’ (2016) results demonstrated that a computer system’s elaborated language style was a precondition for perceived ability. Complex SDS that serve as personal assistants and present themselves as human-like interlocutors can be perceived as benevolent and as having integrity. Applying Mayer et al.’s components of trustworthiness in the context of artificial intelligence offers insights into the human trust-building process in general, because programmed features can be manipulated systematically or held constant.

Overall, experimental psychology has enhanced our understanding of the effects of word usage on trust-building processes, demonstrating that there are multiple reactions to the different words used to communicate online. Nonetheless, we do not yet have the statistical data necessary to study how these words combine and interact. Hence, one big challenge for future research is the empirical study of the interplay between the different word usages that have been shown to impact trust.