
9.1 Introduction

Currently the wider trust research area (academics and industry practitioners), working in the interests of organisations and governments, values environments that promote trusting behaviour and foster trust [1]. This may be because, as reflected in Fukuyama’s famous line ‘trust greases the wheels’ [2], a state of trust results in higher productivity. Evaluation methods reflect this pursuit of trust, measuring for the presence and intensity of trust. When the goal is to promote trust, the approach to measurement is clear: measure the level of trust before and after an interaction or intervention [3].

If instead, as we explain in this chapter, a practitioner considers the design of a digital environment from a user’s perspective and with the user’s interests at heart, then both trust and distrust may be valid choices [4]. For instance, there may be a good reason why a user should purchase from a particular vendor. Measuring whether trust has occurred does not reveal much beyond whether the user was coerced to engage with a system. It does not indicate whether the user was able to make the trust choice that was in his or her interests, which is the type of experience we want to design. Once the needs of the user are in the foreground, the role of context in a trust scenario is emphasised. Context is different for every user and is central to how trust and distrust are regarded [5].

Rather than systems that pretend to be trustworthy or foster and enhance trust, designers and developers are now creating digital systems (such as sets of web pages and mobile apps) that empower the user about trust choices, which we describe as TEU environments. Examples of TEU environments are dating sites that help users negotiate fraught relations [6] or an application that allows individuals to self-select and come together to develop a creative project [7]. Nurse et al. [8] present a model in which users can plug in their individual trust preferences and, at appropriate times, receive a graph suggesting future actions.

The aim of this chapter is to develop a means to evaluate whether systems that are designed to be TEU are indeed empowering users confronting trust choices, so that developers, researchers, designers and others interested in working in users’ interests have some means to interrogate their work. We argue that the lens through which to understand whether an environment empowers a user is to investigate whether uncertainty has been reduced. Trust researchers agree that there are several results of trust and distrust [9]. A reduction in uncertainty is common to both the experience of trust and distrust and is an outcome that is beneficial for the user. The user is clearer about how best to proceed, which we define as a form of empowerment. In contrast, without a state of either trust or distrust, a user may be caught up in a cycle of assessing possibilities and considering how he/she should act, in other words, keep wondering about whether he/she should be either trusting or distrusting. Such a cycle is resource intensive for an individual [9]. To assess trust and distrust from a user’s perspective, we draw on the work of Cofta [9], who distilled work across the trust research area and argues that users look for evidence to trust across several dimensions. In short, the dimensions are continuity (how long has a trustee existed in a community?), competence (does the trustee have the skills to deliver on an interaction?) and motivation (does the trustee have a commitment to working in the trustor’s interests?).

Often when theory meets practice there are complexities, and this issue is no exception. This chapter outlines some of the issues practitioners need to tackle. We first review how trust in a digital environment is usually valued and measured. Examples of alternative design approaches that attempt to empower trust for the user are outlined. We then identify challenges associated with evaluating TEU environments. Finally, we suggest one path: to measure whether uncertainty has been reduced for the user. The technique of surveys arranged around Likert scale statements is an established means to gauge attitude change. Survey statements focusing on different areas of trust evidence can help a designer access the nuances of users’ understanding of a TEU.

9.2 Our Perspective

This chapter takes a social science perspective, more specifically, a user experience (UX) design viewpoint. UX is the design and communication of a system from a user’s perspective and overlaps with other practices such as interaction design, accessibility, usability and human-computer interaction (HCI) [10]. Within the social sciences, trust is broadly understood as a context-bound relationship within which the trustor, in a position of vulnerability, is confident that another party (the trustee) will respond in the trustor’s interest. However, static definitions of trust do not contribute much when considering the design of trust and distrust in a practical setting, as the notion of trust is only meaningful when understood in context. The difficulty of defining trust was raised by Luhmann [11], who pointed to society as the place where trust interactions are grounded. The notion of context is stressed by social scientists because it emphasises the shift in an understanding of trust depending on who you are, where you are and the moment in time [12]. All disciplines conceptualise and define trust depending on the outcome sought by the researcher and the research area [13]. An outcome sought by the social science discipline is to problematise a situation, i.e. to problem-set rather than problem-solve [14]. To study trust and distrust in digital environments from a social science perspective is to explore power relations, design, choice, and control across the Internet, which are significantly underexplored to date [15]. If issues of power are not acknowledged or interrogated, then there is the risk that what is in the interests of the most powerful is assumed to be what is best for all. There are many challenges for the designers of TEU environments.

Our motivation is to gather data to inform a project we are undertaking, ‘Device Comfort’, a personal interface that speculates about the state of interactions in an environment and its owner’s current context [16]. Working on behalf of the individual, the interface is designed specifically for the purpose of health and wellbeing and can manage an individual’s health data, aggregated from a range of sources. The interface unites several sources of information, such as nearby devices and the predilections of the user, to present to the user an overall ‘comfort level’. What the user does with this guidance is ultimately up to the user. The idea is to provide an opportunity for the user to have a ‘second thought’. Developing measures of success for this interface was the original impetus for this chapter. Before they begin development, designers benefit from a sense of the measures of success and the outcomes for users they should aim for in their projects. Usually the activities of evaluating and measuring are regarded as the domain of more quantitatively orientated disciplines. However, all developers need a sense of how well their projects are functioning and need to access the power of numbers to indicate change. The issue of evaluation and trust is a practical concern for designers and developers of digital environments. The intended audiences of this chapter are those who wish to apply theory about users and trust in practice in order to create websites, ‘apps’ and other digital outputs.

9.3 A Problem of Bias and Emphasis

Design is never neutral; it always works in the interests of one party over another [17]. Unsurprisingly, owners of digital environments design their spaces so that their business models and agendas are served. Although some practitioners of ‘user-centred’ design claim that they put the user first, this prioritisation is debatable: Blythe et al. [18] argue that in fact the claim is unfounded and the label ‘user-centred’ design is simply a marketing device. When owners of a digital environment employ design strategies for their space in order to increase the success of their mission and their engagement with users, one of the first qualities they seek is trustworthiness and the trust of their users (regardless of whether trust is deserved). Trust is so often touted as the magical ‘make or break’ component of a design [1].

The academic trust research area also values a result of trust (not distrust) and emphasises the study of trust (rather than distrust). Trust is regarded as beneficial and a success, while distrust is considered a negative state and an outcome to be avoided [1, 19]. Blythe et al. [18] argue that the acceptance of business and commercial values is the default position of the wider human-computer design industry, owing to the close links between academia and industry.

The valuation of trust and a positive outcome for organisations and businesses can be seen in the use of language and the prioritisation of goals by researchers and practitioners. For instance, the area of virtual work systems and trust is shaped by the work of Mayer et al. [20] who define trust as ‘the willingness to be vulnerable to the actions of another party’ and cooperation, which complements the goals of management [21], is seen as a measure of trust [20]. A popular theme in the research area is how trust can propagate within virtual team environments [22].

E-commerce practitioners look to the speed and number of sales as indicators of trust [23, 24]. We see this commercial interest translate into guidelines for designers to give an interface the appearance of trust, regardless of the nature of the contents (for instance [25]). A popular recommendation is to develop a ‘professionally designed’ site, one that a designer with traditional graphic skills has created in order to provide an aura of authority (for instance [26] and [27]). Other researchers provide detail of what might constitute a professional appearance. Colour, it is also argued, plays a role in the formation of a professional site; for instance, the use of blue can promote trust, as does the avoidance of black [28]. Inclusion of photographs of company people on a website, incorporating their names, can also build a trustworthy picture, especially photographs of people in ‘everyday situations’. The idea is to engineer ‘human warmth’ into a digital environment [29].

If we take a user’s perspective, we know that there are good reasons for us not to trust what is presented to us in digital environments. There are very good reasons why an employee should not collaborate in the virtual workplace, even if it is in the interest of the company. For instance, management may favour one particular employee and a virtual workplace is designed to hand this employee all intellectual property at the expense of other employees [30]. In the domain of e-commerce, sometimes a user should not buy the advertised product or engage with a particular service. The product may not be what it should be or may be a ruse for a user to provide credit card details [31].

9.4 TEU Environments

Trust and distrust differ depending on who you are, as the following example illustrates. Let us assume that an individual has recently been diagnosed with a certain medical syndrome that could be a life threatening condition. The individual is exhibiting new symptoms. As it is the weekend, there is nobody for the individual to turn to and the individual logs into a portal focusing on the condition. The site includes a range of content including written advice and discussion, videos and advertising material. The trust issue for the user is working out whose advice to follow. It is difficult for the user to determine who owns the site. The material may be marketing a particular medication that may be unsuitable. On the other hand, advocates of a philosophy may be providing advice that is also biased and not in this individual’s interests. From the perspective of other stakeholders, the individual’s vulnerability is an opportunity for profit. Researchers exploring this case from a commercial perspective could explore whether the design increased the visitation rates of the site, whether the individual recommended the site to others, or bought products or services. From our perspective, the TEU position, we are interested in whether the site enabled the user to make the choice about his/her problem that was in his/her interests, guided by his/her beliefs, expectations, customs and the other elements shaping the context that Zack and McKenney outline [32]. The context could include factors such as risk, visual design elements, and presentation of language. This example demonstrates how difficult it is to empower a user to form trust or distrust on his/her own terms. Designers develop solutions to these problems for users, for instance, providing tools for users to decode the bias behind the visual design of a site or to organise their observations about trustworthiness.

In contrast to a commercial design that attempts to convince its audience that it is the correct choice, a trust empowering design enables users to form their own choices. In addition to the examples provided in the introduction, we are seeing the rise of ‘trustware’, systems that attempt to assist individuals to form trust perceptions about others by allowing users to translate the reputation they have developed in one network to another [33]. Examples of these systems include TrustCloud and Legit. The demand for trustware is increasing as the sharing economy grows [34]. As the phenomenon of strangers sharing valuable resources continues, there need to be means for individuals to work out with whom they should interact. Although ‘start-up’ entrepreneurial business models are currently highly influential in the design of these systems, this area of technology design is in its infancy [34]. In the near future we will see a diverse range of trustware approaches, and perhaps systems dedicated to specific industries, such as health. An example is a system that helps individuals negotiate smoking cessation advice provided by a range of sources. Approaches to evaluation are needed to assess these designs, examining whether users are able to form the trust choices that are in their interests.

9.5 Complexities of Designing TEU Environments

TEU environments need to engage with a range of potential users’ interpretations of the contexts they encounter. Trust and context are strongly interlinked. Trust is a social construction that is only meaningful when understood in context, i.e. the ‘here and now’ (cf. [35]). A user’s interpretation of context is shaped by a whole range of factors, such as power relations, social conventions, traditions, expectations, habits and memory [32]. Suchman [36] in her landmark work ‘Plans and Situated Actions’ argues that technological developments that ignore context result in unsuccessful technology that is not accepted by its user base. As we review in this section, recognising the unavoidable link between trust and context adds levels of complexity when considering the design and evaluation of environments intended to empower individual users.

Drawing on the authority of sources of advice deemed trustworthy, without any further exploration, is problematic. The practice of designers, governments and companies assuming that they know what is best for users and telling them what to do is known as ‘benevolent paternalism’ [37]. Often the exponents of benevolent paternalism try to distinguish themselves from commercial practices that seek to convince users to adopt a certain behaviour in order to increase the profits of a company. But there are still problems with ‘benevolent paternalism’. Advice can be biased, for instance, motivated by political and religious agendas. Advice provided by authorities cannot provide an indisputable answer on every occasion for all people. As authorities can disagree over the solution to seemingly uncontroversial issues, there is no guidance to suggest which authority is correct. For instance, following the example above of the user with a medical condition, an individual could seek advice from three online doctors when there are two different medications for the condition. One doctor may prescribe the first medication, another may prescribe the second, and the third may prescribe a combination. It is therefore difficult to decide which advice should dominate, and the consequence in this scenario may be medication that does not suit the patient’s needs. However, Srnicek and Williams [38] point out that to refuse all advice is misguided and does not take advantage of the expertise available to us in the modern world. Such a rejection of authority does not recognise the nuances by which individuals are controlled in society. Designing trust empowering systems is a political and complicated exercise.

An evaluation of a design incorporating a ‘benevolent paternalism’ perspective may test for whether the presentation of information in a digital environment allowed the user to make the ‘correct’ decision regarding a medical condition. But when one acknowledges the role of context, we can see how the ‘pre-prepared’ approach is limited. Defenders of ‘benevolent paternalism’ may argue that right or wrong answers can be successful for the majority of the population. However, we argue that it is impossible to know when and how such judgments can be applied, due to the role of context, and thus ‘benevolent paternalism’ is a questionable design strategy.

When the importance of context is acknowledged, the potential of a digital system or an authority to pre-determine right or wrong answers for individual users in a trust scenario is limited. Context, i.e. the environment in which understandings are made, can only be constructed between people as they read it, participate within it, and work out how they might function in a specific situation. Trusting and distrusting are not entirely rational thought processes: they are a combination of subjective and objective thinking. The response depends on the individual. As Möllering [39] writes, there is an element of trust that is always unaccountable and ‘mystical’, otherwise what is being discussed is not trust and could more aptly be described as ‘calculation’. When one of the elements in a context is altered, then the outcome may be different. This is why Marsh et al. [40] suggest that TEU environments should allow users to monitor and intervene, so that users can have a role in interpreting their contexts.

It is also problematic for designers to simply draw on the authority of ‘trustworthy’ sources, otherwise known as second-hand trust [41]. There is no such thing as a consistently reliable trustworthy source. The generators of what might once have been considered trustworthy or even truthful information, governments and non-government organisations, no longer wield the same respect as in the past [42]. The authority, bias and competence of these sources are now questioned on a regular basis [43]. Additionally, as users of digital environments, we know that spammers continually attempt to replicate what might be regarded as a trustworthy agent. The problem of assessing trustworthiness is multi-layered. A trust-empowering interface should not attempt to provide a definitive answer, but instead aim to keep a ‘case open’ ready to receive new developments.

9.6 Isolating a Means to Evaluate for Trust Empowerment

The creation of a TEU environment, a system that empowers its users to negotiate trust on their own terms, requires resolution of many design challenges. How can we measure whether a digital environment does indeed empower users to negotiate trust on their own terms? As we argue, measuring the presence of trust is not appropriate; distrust may be a valid option for a certain user in a specific context. Additionally, we cannot test that users have made the ‘correct’ trust choice, as the capacity to judge another’s trust perception is limited. Evaluating whether a design includes elements we think empower trust is not ideal; it is the user’s perspective that is relevant.

We argue that one way to assess the ability of an environment to empower the user regarding trust is for researchers to measure the level of uncertainty before and after interaction, and by implication, the level of certainty, as we will explain shortly. Certainty is a subjective sense of conviction or validity about one’s attitude or opinion [44]. Certainty is when a user knows what he or she would like to do and what is important to him or her. By uncertainty, we mean that the user is unclear about what to do or how to proceed. In our scenario, a reduction in uncertainty as a result of interacting with a TEU environment would mean that the user is clearer about what trust choice is best suited to his/her needs. The experience of the interaction with the digital system has assisted the user to negotiate and interrogate trust. It is this quality of assistance that we value, as this type of experience ‘empowers’ users rather than simply supporting their current status. The aim of the evaluation is to see if uncertainty is reduced as a result of an interaction. In this section we explain why attitude certainty, which we argue is a proxy for both trust and distrust, is an appropriate way to measure trust empowerment. In the following section, we explain how a change in uncertainty levels can be measured.

Trust researchers agree on several outcomes of trust and distrust, including cooperation, willingness for vulnerability, confidence, and a reduction of uncertainty (as documented in [45]). Measuring how much a user is willing to cooperate or be vulnerable focuses on what the user might be agreeable to, or arguably, how much a user can be exploited. The notion of confidence does centre more on the user’s interests, but can be co-opted to suit the demands of commerce and government. According to [46], this is due to the impact of the ‘New Management Era’, the movement to streamline the public sector in the U.K. and the U.S., in which confidence is regarded as a means to move forward with more reforms. Thus, we argue that the concept of ‘confidence’ is not suited to our purposes because some users may associate the term with managerial approaches.

We argue that studying whether there is a reduction in uncertainty for the user is the most suitable means to explore the success of a TEU environment. It is a result orientated to a user’s interests, and the term does not have strong societal connotations, unlike the word confidence. A reduction in uncertainty is a result of both trust and distrust. Focusing on the possibility of a reduction allows an interrogation of whether a trustor has received a benefit of either trust or distrust. Without trust or distrust, the user is caught up in the cycle of exploring possibilities [5]. With trust or distrust, some future possibilities are foreclosed; as Clark [15] says, there is a ‘call to action’. Distrust is at least as important as trust in this view. Although often seen as a negative state, distrust can in fact resolve a complex scenario, closing down possible paths for the individual to choose as well as protecting the individual from negative consequences. Thus we use a reduction in uncertainty as a proxy for trust empowerment, a means to understand whether a user of a digital environment has indeed been empowered regarding trust. Researchers use proxies to explore the notions of trust and distrust as neither concept can be directly observed [47]; for instance, [48] use the presence of cohesion in a team as a proxy for trust, while [49] use the occurrence of an alliance of two parties.

Jøsang et al. (see [50]) also emphasise the role of uncertainty in trust interactions. In their view, subjective logic, a probability calculus that works with uncertainty, can help solve trust problems. Their response is to develop a suite of algorithms that draw on a range of users’ opinions in order to dissipate the impact of uncertainty. The design work we attempt aims to engage the user in an active role within a digital system; in contrast, Jøsang et al. seek to automate decisions on behalf of the user. A TEU system could allow a user to choose which interactions are handled automatically and which ones require further interrogation and customisation. This is a research issue for further investigation.
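To make the contrast concrete, the following Python sketch illustrates the flavour of subjective logic that Jøsang et al. work with: an opinion holds belief, disbelief and uncertainty components, and fusing independent opinions shrinks the uncertainty component. The `Opinion` class and `fuse` function are our own illustrative names, not code from [50]; the fusion rule shown is the standard cumulative form.

```python
# Minimal sketch of a subjective-logic opinion: belief (b), disbelief (d)
# and uncertainty (u) with b + d + u = 1, plus a base rate (a).
# Names here are illustrative, not from any published library.
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float          # belief
    d: float          # disbelief
    u: float          # uncertainty
    a: float = 0.5    # base rate (prior expectation)

    def expected(self) -> float:
        # Probability expectation of the opinion: E = b + a * u
        return self.b + self.a * self.u

def fuse(x: Opinion, y: Opinion) -> Opinion:
    # Cumulative fusion of two independent opinions: pooling evidence
    # reduces the uncertainty component.
    k = x.u + y.u - x.u * y.u
    return Opinion(
        b=(x.b * y.u + y.b * x.u) / k,
        d=(x.d * y.u + y.d * x.u) / k,
        u=(x.u * y.u) / k,
        a=x.a,
    )

# Two hypothetical opinions about the same advice provider.
site = Opinion(b=0.4, d=0.1, u=0.5)   # a tentative first impression
peer = Opinion(b=0.6, d=0.2, u=0.2)   # a second, more confident view
combined = fuse(site, peer)
print(combined.u)  # fused uncertainty is lower than either input's
```

The point of interest for this chapter is precisely the `u` component: in Jøsang et al.'s automated approach, the system itself drives uncertainty down, whereas a TEU design leaves that reduction to the user.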

9.7 How Can Uncertainty Levels Be Measured?

We argue that a change in the user’s uncertainty levels before and after interacting with a digital environment can be a proxy for whether trust empowerment has occurred. In this section, we explore means to understand whether a digital environment has reduced uncertainty. There are well-developed techniques to evaluate the strength of an attitude. One way is to ask the participant to self-report via a survey undertaken before and after an experience; the two results are then compared. The field of psychology has well-developed survey techniques to undertake these measurements and determine how strongly a respondent holds an opinion via self-report. Several fields have drawn on these techniques, including marketing and political science (see [51] for an overview). Likert and Thurstone were notable early attitude researchers; Likert developed the eponymous scale to measure attitudes quantifiably [52]. A common argument by survey practitioners is that strong attitudes are more likely to persist across time, influence behaviour, and predict behaviour than attitudes that are less certain [53]. Maio and Haddock [53], who have surveyed the field of the psychology of attitudes, argue that a measurement of attitude strength is an indication of a reduction in uncertainty. Thus there is the potential for suitable construct validity, i.e. that the operationalised measure does indeed study the variable under consideration.

To develop survey questions revolving around trust, we recommend that survey writers work from the three dimensions of trust on which the field agrees: competence, motivation, and continuity [4]. The evidence users seek in order to proceed in a trust interaction falls into these categories, and analysing a design in terms of these dimensions allows us to understand an environment from a user’s perspective. The dimensions of trust are interlinked and overlap but can be described as follows. Competence refers to whether the trustee has the ability and skill to fulfil the requirements of the interaction [45]. Motivation has to do with shared interest: does the trustee have an interest in working towards the welfare of the trustor? Finally, the dimension of continuity is about whether there is a possibility of a connection between the trustor and trustee beyond the current encounter. Do the trustor and trustee belong to similar communities? Will their paths cross again? The important point to note is that we do not seek to test for the presence and strength of continuity, competence and motivation, but for how clear a respondent is about his/her perception and conviction regarding these dimensions. By studying whether a user is more certain about the different dimensions of trust evidence, we can see if a TEU environment has indeed empowered the user regarding trust.

We now turn to the survey content to evaluate the ‘Device Comfort’ initiative, an interface that assists users with health decisions in different everyday situations. The interface aggregates advice from different locations and helps the user interpret the advice in accordance with the user’s preferences. We wish to explore whether this element does empower the user regarding trust.

The following survey statements are examples of what we will use to evaluate the performance of our interface. Each statement, which forms a survey item, focuses on one of the three dimensions of trust evidence as argued by Cofta [9]. By isolating this evidence into the three dimensions, we can gain insight into the nuances of trust and investigate whether there is a shift in just one dimension of trust, for instance, ‘continuity’, or whether there is a more generalised change in trust impression. Such insights are invaluable to designers because the guidance can inform the design of one interface element over another. Participants in our study will be asked to reflect on their perception of an advice provider at different points in time. They will be asked to rate the following statements:

  • The advice provider has the appropriate expertise and background to provide advice. (To determine the dimension of competence)

  • The advice provider has a desire to work in my interests. (To determine the dimension of motivation)

  • The advice provider has been a member of relevant communities for a long time. (To determine the dimension of continuity)

To close the survey and to ascertain how certain the respondent is, we draw on the work of attitude researchers [54, 55], who have developed questions to be used with a Likert scale to gauge certainty. These statements provide an opportunity to compare the strength of a user’s attitude so that a change can be detected:

  • I am sure that my attitude towards the advice provider is correct.

  • I feel confident that my attitude towards the advice provider is the most accurate attitude possible.

  • I believe that if someone challenged my views on the advice provider I would be able to easily defend my point of view.

  • I do not think that my attitude towards the advice provider is going to change.
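Once both administrations of the survey are complete, the certainty items above can be scored and compared. The sketch below is a minimal illustration, assuming each statement is rated on a 1-5 Likert scale; the function name and sample ratings are ours, and a real analysis would also check scale reliability and apply a paired significance test across participants.

```python
# Illustrative scoring for the four attitude-certainty items above,
# each rated 1-5 (1 = strongly disagree, 5 = strongly agree).
# Names and sample ratings are ours, not from the chapter's instrument.

def certainty_score(ratings):
    # Mean of the four certainty items; higher = a more certain attitude.
    if len(ratings) != 4:
        raise ValueError("expected one rating per certainty item")
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be on the 1-5 Likert scale")
    return sum(ratings) / len(ratings)

# One participant, surveyed before and after using the interface.
pre = certainty_score([2, 3, 2, 2])    # fairly uncertain beforehand
post = certainty_score([4, 4, 3, 4])   # clearer after the interaction

uncertainty_reduced = post > pre
print(pre, post, uncertainty_reduced)  # 2.25 3.75 True
```

A rise in the score between the two administrations would be read, under our argument, as a reduction in uncertainty and hence as evidence of trust empowerment, whether the participant ended up trusting or distrusting the advice provider.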

Naturally, there are limitations to the survey approach, and there are questions that require further exploration. Social science literature debates the issues, which are often context specific (see [56] for an overview). For instance, how many survey items gather a suitable amount of data about an interaction? What are the criteria? How much of a reduction in uncertainty is considered a success, and does this change across contexts? If so, why? Recruiting users to complete a survey is difficult; having each participant complete two surveys is even more so. The survey needs to be administered at a time when the participant is mindful of the experience of the digital system.

Additionally, an increase in attitude strength across the two surveys may be a result of a participant’s familiarisation with the context in which they are placed for the research. Experimental researchers often encounter this issue. Can an intervention really change behaviour in the fashion intended, or are the results the effect of the participants simply being involved in a study that has primed them to think in certain ways [53]? Familiarity is part of the trust equation, and familiarity breeds trust [11]. There is, however, a predictive validity issue. Is the process really isolating a shift in uncertainty? Is the approach measuring the effect of trust and distrust, or are other variables, such as familiarity or memory, entering into the equation and interfering with the results? Refining the boundaries of trust, familiarity, and attitude strength is another task for future research.

A future direction is to explore the potential of social network data, rather than surveys, to evaluate whether a digital environment empowers trust. Researchers can harvest public comments written by users of social media sites (such as Facebook and Twitter), and conclusions can be drawn about how different sets of users are responding to new events and products. Sometimes users utilise the hashtag (#) as a means to signal to others that they want their comments to be linked to other discussions around a certain topic. The practice of sentiment analysis, which draws inferences about how users are thinking and feeling from their social media activity, may provide precedents for exploring whether a user is feeling more or less certain about his/her trust interactions.
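As a rough sketch of how such an analysis might begin, the toy pass below counts markers of certainty and uncertainty in posts. The word lists, sample posts and function are entirely illustrative; a genuine study would use a validated lexicon or an established sentiment analysis toolkit.

```python
# Toy lexicon-based pass over social media posts, counting markers of
# certainty and uncertainty. The word lists are illustrative stand-ins
# for a validated lexicon.

CERTAIN = {"definitely", "sure", "certain", "convinced", "trust"}
UNCERTAIN = {"maybe", "unsure", "confused", "doubt", "wondering"}

def certainty_signal(post: str) -> int:
    # Positive = more certainty markers than uncertainty markers.
    words = {w.strip(".,!?#").lower() for w in post.split()}
    return len(words & CERTAIN) - len(words & UNCERTAIN)

posts = [
    "Still unsure about this advice site, maybe it's biased #health",
    "I'm convinced now, definitely going to trust their advice",
]
signals = [certainty_signal(p) for p in posts]
print(signals)  # [-2, 3]
```

Tracking how such a signal changes for a set of users before and after an intervention would parallel the pre/post survey comparison, though disentangling certainty about trust from general sentiment remains an open problem.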

9.8 Conclusion

Often industry and academia value an outcome of trust for their projects, and evaluation methods follow a similar path. Yet an outcome of trust may not suit the user of a project. From the user’s perspective, distrust may be as valuable as trust. Some practitioners create projects that work in users’ interests and empower them regarding trust (which we refer to as TEU environments). These practitioners need measures to understand the impact of their designs. Arriving at an evaluation method is not a straightforward proposition. Assessing whether trust has formed is not appropriate, nor is testing for right and wrong answers in connection with trust.

In order to evaluate whether a TEU environment is successful, we suggest evaluating whether a reduction in uncertainty for the user has occurred as a result of interacting with the environment. A reduction in uncertainty is one of the outcomes shared by both trust and distrust that is commonly agreed upon in the research area. A reduction in uncertainty levels can be measured via surveys administered before and after a user interacts with a TEU environment. The work of attitude researchers provides guidance on the use of Likert scales in a survey to identify a shift in attitude.