1 Introduction

The rapid growth of artificial intelligence (AI) technologies and their pervasive influence in our lives have heightened the importance of ethics, values, and a human-centric approach to the design and development of AI (Floridi et al. 2018). Modern AI is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” (OECD 2019, pp. 23–24). Capabilities such as decision-making and autonomy differentiate AI from traditional technologies, such as cars or computer software, which work within protocols that are under human control. AI’s autonomous functioning has changed the balance of power between humans and machines, requiring humans to trust technology. Moreover, deep learning algorithms that drive current AI lack transparency and explainability (Shin 2021), which adds to the challenge of creating trustworthy technology.

Trust is an essential construct in human relationships with an extensive body of literature (Rotenberg 2019). Human trust in technology (Hancock et al. 2011), particularly in automation and AI, has also engendered significant interest in the academic community. Researchers believe trust is a fundamental step toward the social acceptance of new and potentially disruptive technologies (Wu et al. 2011). However, unlike other technologies, AI presents unique challenges for researchers because of manifestations such as anthropomorphic features, natural language processing, affective computing, and conversational abilities. These capabilities necessitate a new socio-technical understanding of trust that extends beyond mere functionality to the human-like characteristics of the machine (Choung et al. 2022). In this emerging understanding, attributes such as concern for the well-being of society, benevolence, helpfulness, fairness, and compassion have become necessary components of trust in human-AI interactions (Thiebes et al. 2021).

To develop trustworthy AI systems, various organizations and even governments have proposed ethics guidelines for AI (Jobin et al. 2019; Hagendorff 2020). These guidelines encourage fairness and promote AI applications that are unbiased, non-discriminatory, and beneficial to society. Special care is given to the unintended consequences of AI and its effects on vulnerable populations (Borgesius 2018). Although ethical values are actively promoted by AI technologists, it is not clear to what extent they influence trust among users. Therefore, in this study, we examine the importance assigned by the general population to ethical values of AI and the influence of these values on trust.

Specifically, we develop a multidimensional, multilevel understanding of trust using a survey of a representative sample of participants in the United States (N = 525). Through regression analysis, we explore the effects of demographics, dispositional trust, institutional trust, and familiarity with AI on the two key dimensions of trust. In the last step, we examine the perceived importance of the ethics requirements of AI among users and the effect of these requirements on trust perceptions.

2 Trust in AI: a multidimensional and multilevel approach

Trust is a fundamental human mechanism required to cope with vulnerability, uncertainty, complexity, and ambiguity in situations that collectively constitute a risk (Colquitt et al. 2007). As we begin to realize the potential of AI, to ensure continued success it is essential to understand the social and psychological mechanisms of trust in human-AI interaction (Gillath et al. 2021). Trust is defined as “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another” (Rousseau et al. 1998, p. 395). Traditionally, trust is tied to relationships between people and is required to build mutuality and interdependence between parties in human communication. In addition, trust is a multidimensional concept that is associated with trustees’ characteristics, intentions, and behaviors (Lee and See 2004; Schoorman et al. 2007).

Mayer et al. (1995) define trust in humans as an amalgam of one’s belief in another’s ability, benevolence, and integrity. Ability refers to skills and competencies to successfully complete a given task. Benevolence pertains to whether the trustee has positive intentions that are not based purely on self-interest. And integrity describes the trustee’s sense of morality and justice, such that the trusted party’s behaviors are consistent, predictable, and honest. This three-dimensional understanding of trust is vital to interpersonal interactions, as it contributes to reliability and integrity in our dealings with fellow humans.

Past studies have extended the concept of interpersonal trust to human-technology relationships (Calhoun et al. 2019), especially to technology with human-like characteristics (Gillath et al. 2021). But some studies have noted that human users hold different expectations of technologies and that the principles of interpersonal trust are not directly applicable to human-to-machine trust (e.g., Madhavan and Wiegmann 2007). McKnight et al. (2011) explained that trust in technology is qualitatively different from trust in people primarily because humans are moral agents, whereas technology lacks volition and moral responsibility. They offered three analogous dimensions for trust in technology—functionality, reliability, and helpfulness—to replace ability, integrity, and benevolence. Functionality refers to the capability of the technology, which the authors likened to human ability. Reliability is the consistency of operation, analogous to integrity, and helpfulness indicates whether the specific technology is useful to users, comparable to the benevolence dimension of human trust.

Although AI is a technology, it replaces or augments humans in tasks and decisions. In that sense, it is more than simply a technology, and McKnight’s definition and dimensions of trust in technology may require adjustment when applied to AI. Unlike traditional technologies that rely on user input and the execution of rules programmed by humans, AI has more autonomy. Further, AI’s technical abilities and human-like characteristics create an interesting duality. For technology applications with greater humanness, a trust-in-humans scale was better at predicting relevant outcomes than a trust-in-technology scale (Lankton et al. 2015), which calls for a conceptualization of trust that combines trust in the technology with trust in its human-like characteristics.

Though a conceptual definition of trust in AI is still evolving, it appears that at least two dimensions of AI, functionality and human-like characteristics, are relevant (Thiebes et al. 2021). Drawing from previous work, we focused on human-like trust in AI (benevolence and integrity) and functionality trust in AI (Mayer et al. 1995; McKnight et al. 2011). The former dimension pertains to the social and cultural values of the algorithms and the values and ethics that undergird the design of AI technology. The latter dimension relates to the reliability, competency, expertise, and robustness of the technology. For human-like trust in AI, trust is placed in the AI agent or system itself rather than in specific actions or operations (Choung et al. 2022).

2.1 Trust propensity and trust in institutions

Trust in the human-like attributes of AI is a holistic conceptualization that is subject to influences across levels of analysis, from the individual (i.e., intrapersonal and interpersonal) to the collective (i.e., institutions and society) (Fulmer and Dirks 2018). This multilevel framework of trust furthers our understanding of the process through which trust is built and calibrated over time (Hoff and Bashir 2015). Therefore, this study incorporates dispositional trust (propensity to trust others) and institutional trust (trust in institutions).

Propensity to trust, or dispositional trust, is the general tendency to trust another person (Mayer et al. 1995). People hold different levels of propensity to trust, and it is a relatively stable disposition. Researchers have found that the propensity to trust another person predicts initial trustworthiness, which is the perception of how trustworthy another person is (Colquitt et al. 2007; Alarcon et al. 2018). Another trust-building process that may apply to AI is institution-based trust. Because AI is developed, deployed, and regulated by corporations and governments, trust in those institutions may transfer to the technology itself. Trust in institutions, however, is at an all-time low, exacerbated by mis- and disinformation (Fulmer and Dirks 2018; Edelman 2021). In this study, we predict that trust propensity and trust in institutions are predictors of trust in AI.

H1: Trust propensity (dispositional trust) and trust in institutions (institutional trust) are positively associated with trust in AI.

2.2 Familiarity

While trust propensity and trust in institutions influence initial trust, trust changes over time based on familiarity and knowledge (Gefen et al. 2003). Familiarity reduces uncertainty and counteracts concerns by drawing on past experience (Gulati 1995), contributing to experiential trust, which is trust built on experience. Familiarity builds trust by creating an appropriate context in which to predict and interpret the other party’s behavior (Chen and Dhillon 2003). Furthermore, greater familiarity implies greater accumulated knowledge derived from previous interactions, which leads to higher levels of trust (Gefen 2000). Based on these findings, the following hypothesis is derived.

H2: Familiarity (experiential trust) is positively related to trust in AI.

3 Ethics principles of AI

If trust in technology is partly dependent on human characteristics such as integrity and benevolence, then values and ethics take on added importance. Ethics in AI is a socio-technical challenge that demands an appropriate balance between the benefits of the technology and social norms and values (Chatila and Havens 2019). Although AI promises significant benefits to human life and well-being, experts from various perspectives concur that thoughtful consideration of the fairness, accountability, and transparency of the technology is critical to developing trustworthy AI (Shin and Park 2019) and ensuring a sustainable society (Arogyaswamy 2020). Ethical AI enables the flourishing of all members of a society, including vulnerable populations, such as children, the elderly, and those with less power in the social hierarchy (Torresen 2018). For example, AI must respect the rights of employees even as it creates benefits for employers. Similarly, the rights of consumers, such as their privacy, must be considered alongside benefits to corporations.

The role of AI, then, is more than finding the appropriate solution within a specific domain. It includes evaluating outcomes and solutions offered by AI against societal values, which vary considerably by society. Furthermore, societal values are constantly in flux, requiring a framework that is flexible and can evolve over time. Recognizing these challenges, ethicists, thought leaders, nations, and corporations have issued ethics guidelines (Jobin et al. 2019). Despite political and ideological differences between countries, as a human race we share common values, such as individual rights, justice, and the common good, which serve as cross-cutting themes across these proposals.

An analysis (Hagendorff 2020) of ethics guidelines from Google, Microsoft, and IBM, from the Organization for Economic Co-operation and Development (OECD) and the Institute of Electrical and Electronics Engineers (IEEE), and from the governments of China, the United States, and the European Union (EU) has identified common ethical requirements for AI. These include privacy, human agency, transparency, explainability, safety, and cybersecurity, to name a few. A succinct framework of ethics requirements can be found in the guidance offered by the European Commission’s High-Level Expert Group on AI (AI HLEG) (2019) (see Table 1 for a summary).

Table 1 Seven ethics requirements for trustworthy AI proposed by AI HLEG

The framework offered by the AI HLEG is particularly relevant for this study because it connects the trustworthiness of AI with ethics, which is the crux of the research reported in this paper. Drawing from fundamental human rights and the principles of respect for human autonomy, prevention of harm, fairness, and explicability, the EU framework offers seven ethics requirements, which are summarized in Table 1: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; social and environmental well-being; and accountability. It is important to note, as the authors acknowledge, that some of these ethical values and requirements may be at odds with one another. For example, improving safety in autonomous vehicles may require relinquishing more control to AI, which may be antithetical to the value of human autonomy. Such competing goods and ethical tensions are common in AI applications in domains such as health, education, and law enforcement.

Despite the widespread understanding in the AI community of the importance of such moral dilemmas and ethics requirements, it is not clear how the public perceives them and what influence, if any, these requirements have on trust in AI. To examine the perceived importance of the ethical requirements of AI and their relationship to trust, the following two research questions are proposed. The influence of ethics requirements is examined using regression after controlling for demographics as well as dispositional, institutional, and experiential trust.

RQ1: What is the perceived importance among the public of the ethical requirements of AI?

RQ2: What influence, if any, do the ethical requirements of AI have on trust in AI?

4 Method

4.1 Participants and procedure

Data were collected from the general U.S. population using an online survey conducted in April 2021 through Qualtrics. A quota sampling method was used to ensure the representativeness of participants. A total of 525 respondents (age M = 45.43, SD = 17.83; 50.1% women; 65.9% White, 12% Black or African American, 12% Hispanic, 5.9% Asian, 4.4% other) participated in the study. The average time for survey completion was 10 min.

4.2 Measures

Survey items and the reliability of the scales are presented in the appendix. The survey was administered to current and potential consumers of AI products, and they were asked to evaluate smart technologies in general, such as smart home products (e.g., Google Home, Ring) and voice assistants (e.g., Siri, Alexa).

Dispositional trust or trust propensity was measured with three items (Frazier et al. 2013): “I usually trust people until they give me a reason not to trust them,” “I generally give people the benefit of the doubt when I first meet them,” and “My typical approach is to trust new acquaintances until they prove I should not trust them.” Institutional trust was the mean of trust in three institutions—the federal government, corporations, and big technology companies. Experiential trust was calculated by averaging frequency of use of AI consumer products (smart home devices, smart speakers, virtual assistants, and wearable devices). In the last section of the survey, respondents rated the importance of each of the seven ethical requirements proposed by AI HLEG.
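As an illustration of how such composite measures can be constructed, the sketch below computes mean scale scores and an internal-consistency estimate (Cronbach’s alpha) in Python with pandas. The file name and column names are hypothetical placeholders for illustration, not the actual variable names used in the study.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# df holds one row per respondent; column names below are hypothetical.
df = pd.read_csv("survey_responses.csv")

# Composite (mean) scores for the predictor scales described above.
df["dispositional_trust"] = df[["disp1", "disp2", "disp3"]].mean(axis=1)
df["institutional_trust"] = df[["trust_gov", "trust_corp", "trust_bigtech"]].mean(axis=1)
df["experiential_trust"] = df[["use_smart_home", "use_smart_speaker",
                               "use_virtual_assistant", "use_wearable"]].mean(axis=1)

print(cronbach_alpha(df[["disp1", "disp2", "disp3"]]))  # reliability of dispositional trust
```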

The items to measure trust in AI were constructed to represent the three pillars of the construct used in trust in humans (Mayer et al. 1995) and trust in technology (McKnight et al. 2011): benevolence/helpfulness, integrity/reliability, and competence/functionality. Based on Choung et al.’s (2022) factor analysis, which yielded a two-factor structure, the items measuring benevolence and integrity were grouped together, and competence was treated as a separate dimension. The first dimension comprises human-like trust in AI (six items), and the second comprises functionality trust in AI (five items).
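The two-dimensional structure described above could be examined with an exploratory factor analysis along the following lines. This is only a sketch using the third-party factor_analyzer package and hypothetical item names; it is not the analysis reported by Choung et al. (2022).

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package: factor-analyzer

# df is the respondent-level data frame from the previous sketch;
# hl1..hl6 and fn1..fn5 are hypothetical names for the 11 trust-in-AI items.
human_like_items = ["hl1", "hl2", "hl3", "hl4", "hl5", "hl6"]
functionality_items = ["fn1", "fn2", "fn3", "fn4", "fn5"]
trust_items = df[human_like_items + functionality_items]

fa = FactorAnalyzer(n_factors=2, rotation="oblimin")  # oblique rotation; factors may correlate
fa.fit(trust_items)
loadings = pd.DataFrame(fa.loadings_, index=trust_items.columns,
                        columns=["human_like", "functionality"])
print(loadings.round(2))

# Composite scores for the two trust dimensions used as outcomes.
df["human_like_trust"] = df[human_like_items].mean(axis=1)
df["functionality_trust"] = df[functionality_items].mean(axis=1)
```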

4.3 Analytic approach

After examining the frequencies and means of public perceptions of AI, we used hierarchical regression for the analyses. The regression analyses examined human-like trust and functionality trust in AI as separate outcomes, and the predictor variables were entered in four stages. Respondents’ age, gender, education, and income levels were entered in the first stage as control variables. Trust propensity and levels of trust in institutions were entered at stage two. Familiarity with AI technologies was entered at stage three. Importance ratings of the seven ethical requirements for AI were entered together at stage four. Intercorrelations, means, and standard deviations for the independent and dependent variables are presented in Table 2. All correlations were statistically significant, and the predictor variables were moderately correlated with the dependent variables.
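A minimal sketch of this staged (hierarchical) regression, assuming the composite variables constructed in the earlier sketches and hypothetical names for the ethics-importance ratings, might look as follows in Python with statsmodels. The paper does not state the software or exact model specification used, so this is illustrative only.

```python
import statsmodels.formula.api as smf

# Predictor blocks entered in four stages, mirroring the analytic approach above.
stages = [
    "age + C(gender) + education + income",                        # Stage 1: demographics
    "dispositional_trust + institutional_trust",                   # Stage 2
    "experiential_trust",                                          # Stage 3
    "human_agency + robustness_safety + privacy + transparency "
    "+ diversity_fairness + societal_wellbeing + accountability",  # Stage 4: ethics importance
]

def hierarchical_ols(outcome, data):
    previous, terms = None, []
    for stage in stages:
        terms.append(stage)
        model = smf.ols(f"{outcome} ~ " + " + ".join(terms), data=data).fit()
        if previous is not None:
            f_change, p_value, df_num = model.compare_f_test(previous)
            print(f"{outcome}: dR2 = {model.rsquared - previous.rsquared:.3f}, "
                  f"F({int(df_num)}, {int(model.df_resid)}) = {f_change:.2f}, p = {p_value:.4f}")
        previous = model
    return previous  # full model; standardize variables first to obtain beta weights

full_human_like = hierarchical_ols("human_like_trust", df)
full_functionality = hierarchical_ols("functionality_trust", df)
```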

Table 2 Intercorrelations, means, standard deviations and ranges of variables used in regression analysis (N = 525)

5 Results

5.1 Levels of trust in AI

Overall, people held greater functionality trust in AI (M = 3.64, SD = 0.92) than trust in the human-like characteristics of AI (M = 3.27, SD = 1.03) (see Fig. 1). The levels of trust in AI were lower than general trust in other people (M = 3.75, SD = 0.97) but greater than trust in institutions (M = 2.94, SD = 1.20). This suggests that AI technologies are currently relatively well trusted and that maintaining trust in AI may require shoring up trust in institutions, including Big Tech companies.

Fig. 1 Level of trust in AI

5.2 Predictors of trust in AI

Results from regression analyses (Tables 3, 4) showed that age and education were significant predictors of the level of human-like trust (age: \(\beta\) = – 0.22, p < 0.001; education: \(\beta\) = 0.25, p < 0.001) and functionality trust in AI (age: \(\beta\) = – 0.17, p < 0.001; education: \(\beta\) = 0.20, p < 0.001). Younger adults with higher education levels exhibited greater trust in AI. In addition, level of income was a significant positive predictor of functionality trust in AI (\(\beta\) = 0.12, p < 0.05).

Table 3 Summary of hierarchical regression analyses for variables predicting human-like trust in AI (N = 525)
Table 4 Summary of hierarchical regression analyses for variables predicting functionality trust in AI (N = 525)

H1 postulated that propensity to trust and trust in institutions would positively predict trust in AI, after controlling for the demographics. As predicted, general trust propensity was a significant predictor of both human-like trust in AI (\(\beta\) = 0.23, p < 0.001) and functionality trust in AI (\(\beta\) = 0.31, p < 0.001). Trust in institutions also positively predicted human-like trust (\(\beta\) = 0.44, p < 0.001) and functionality trust in AI (\(\beta\) = 0.29, p < 0.001). Trust propensity and institutional trust together explained an additional 28% of the variance in human-like trust in AI (F (2, 518) = 127.34, p < 0.001) and an additional 22% of the variance in functionality trust in AI (F (2, 518) = 85.62, p < 0.001). Therefore, H1 was supported.

Introducing familiarity with AI technologies explained an additional 6% of the variance in human-like trust in AI (F (1, 517) = 61.02, p < 0.001) and 4% of the variance in functionality trust in AI (F (1, 517) = 33.50, p < 0.001). As predicted, familiarity was a significant positive predictor of both trust dimensions (human-like trust: \(\beta\) = 0.33, p < 0.001; functionality trust: \(\beta\) = 0.27, p < 0.001), corroborating H2.
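For readers who wish to reproduce these incremental tests, the significance of each block’s contribution is conventionally assessed with an F test on the change in R²; we assume the standard formulation:

\[
F_{\text{change}} = \frac{\left(R^{2}_{\text{full}} - R^{2}_{\text{reduced}}\right)/k}{\left(1 - R^{2}_{\text{full}}\right)/(n - p - 1)},
\]

where k is the number of predictors added at a given stage, p is the total number of predictors in the fuller model, and n is the sample size. For example, at stage two the four demographic controls plus the two trust variables give p = 6, so the residual degrees of freedom are 525 − 6 − 1 = 518, consistent with the F(2, 518) values reported above; at stage three, p = 7 yields the reported F(1, 517).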

5.3 Perceived importance of ethics requirements of AI

RQ1 focused on the perceived importance of the seven ethics requirements of AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; social and environmental well-being; and accountability.

Fig. 2 Importance of ethics requirements in AI

As illustrated in Fig. 2, the three ethical principles of AI considered most important were accountability (M = 4.03, SD = 1.04), transparency (M = 3.97, SD = 1.02), and privacy and data governance (M = 3.93, SD = 1.09), followed by diversity, non-discrimination and fairness (M = 3.91, SD = 1.11), technical robustness and safety (M = 3.84, SD = 1.04), human agency and oversight (M = 3.84, SD = 1.04), and societal and environmental well-being (M = 3.76, SD = 1.14). These means indicate small differences in importance ratings of the ethical requirements offered by the EU.

5.4 Ethics requirements and trust in AI

RQ2 asked whether people’s perceptions of the importance of ethics requirements predicted trust in AI. The regression analysis results showed that after accounting for the variables entered in the first three steps, adding perceptions of the importance of the seven ethics guidelines explained an additional 2% of the variance in human-like trust in AI (F (7, 510) = 3.39, p < 0.001) and an additional 6% of the variance in functionality trust in AI (F (7, 510) = 7.89, p < 0.001).

Participants’ perceived importance of one of the seven ethics requirements—societal and environmental well-being (\(\beta\) = 0.10, p < 0.05)—was a statistically significant predictor of human-like trust in AI. Two other ethics requirements, technical robustness and safety (\(\beta\) = 0.12, p < 0.05) and accountability (\(\beta\) = 0.12, p < 0.05), contributed to an increased level of functionality trust in AI.

6 Discussion

It is widely understood that AI is more than just a technology. Given its increasing importance in people’s lives and the human characteristics it manifests, it is recognized as a socio-technical system that relies on human trust and must comply with ethical values. Hence, both trust and ethics were examined in this study. First, two dimensions emerged from our analysis—trust in the technical and functional ability of AI, and trust in the human-like characteristics of AI—which served as separate outcome variables regressed on demographics, levels of trust, familiarity, and the perceived importance of AI ethics requirements.

Age, education, and familiarity with smart consumer technologies were significant predictors of both dimensions of trust. Younger and more educated individuals indicated greater trust in AI, as did those more familiar with smart consumer technologies, familiarity serving as our indicator of experiential trust. Further, propensity to trust other people (dispositional trust) as well as trust in institutions (institutional trust) were strongly correlated with both dimensions of trust and remained significant predictors even after controlling for familiarity and demographic variables. Collectively, these findings point to an optimistic future for AI, with positive sentiments among younger generations and those with more familiarity with AI consumer products. Further, trust in AI was greater than trust in institutions, such as the government, corporations, and big tech companies, and second only to trust in other people. Overall, these findings highlight the importance of individual, organizational, and cultural contexts in the understanding of trust in AI and its effects (Lee and See 2004). The rest of this section offers a detailed discussion of findings related to trust, followed by a discussion of the effects of ethical practices of AI on trust and the need for governance principles that conform to ethics.

6.1 Trust in AI

Results suggest that functional trust in AI is on par with trust in other people and significantly greater than trust in institutions. Furthermore, as we had expected, functional trust was higher among those who use smart home products, chatbots, conversational agents, and other consumer technologies. The positive correlation between user experience and trust augurs well for the future of AI and is in line with recent findings of positive attitudes toward AI (Helberger et al. 2020). For instance, Araujo et al. (2020) reported that decisions made by algorithms are evaluated more positively than decisions made by human experts. This optimistic bias in perceptions of AI can be attributed to “machine heuristics” (Sundar and Kim 2019) or “algorithmic appreciation” (Logg et al. 2019). Both concepts highlight the tendency in human thinking to regard machines and algorithms as objective and unbiased, and therefore trustworthy. Some researchers have suggested that this positive stereotyping of technology explains our high initial trust in AI, which can erode as people accumulate experiences (Lee and See 2004).

The other dimension of trust that emerged from our analysis is human-like trust in AI, which combines the human attributes of integrity and benevolence. Combining the integrity and benevolence dimensions into one dimension for AI deviates from the three-dimensional conceptualization of trust presented in previous studies. On this dimension, trust in AI was significantly lower than trust in other humans but still greater than trust in institutions, such as governments. The gap between trust in the functionality of AI and trust in its human-like characteristics can be explained in part by algorithmic aversion (Burton et al. 2020). Contrary to algorithmic appreciation (Logg et al. 2019), algorithmic aversion occurs when users realize that algorithms are imperfect, for example when they lack an understanding of the human condition or reduce decisions to a cold process bereft of human emotion (Dietvorst et al. 2015; Lee 2018). These concerns about algorithms have raised awareness of the need for ethics in AI and forced a reckoning at major corporations like Google. Given the reach of AI, researchers like Timnit Gebru demand higher ethical standards to address discrimination, promote equity, and contribute to overall social and environmental well-being (Simonite 2021). Though some of these demands may be beyond the reach of AI at this stage in its development, experts urge us to consider the future, referred to as the point of singularity (Arogyaswamy 2020), when machine intelligence may surpass human intelligence and offer solutions that have hitherto eluded humans. For smart machines to deliver on this promise, it is essential to develop a strong foundation of values and ethics in the AI systems that we create.

6.2 Ethics of AI

A key objective of this study was to investigate the perceived importance of the seven ethical requirements of AI advanced by the AI HLEG, which address the critical issues confronting the field. We found that accountability, transparency, and privacy and data governance had the highest importance ratings. The focus on accountability highlights the need to establish self-governance and professional accountability mechanisms to gain and maintain trust in AI systems (Mittelstadt 2019), which can be accomplished through setting policies and training developers. Given the number of data breaches and the extensive use of personal data by corporations, it is not surprising that privacy rose to the top three in perceived importance. Along with privacy, transparency also rose to the top. Machine learning, an essential part of modern AI, has been criticized widely in the media for its “black box” approach to solutions, which may have contributed to concerns about transparency. Further, transparency is closely associated with accountability because most users are willing to share some data if corporations are more transparent about potential risks and rewards. The combination of accountability, transparency, and privacy suggests that the deepest concerns in AI center on individual safety, which was given more importance than broader concerns such as social and environmental well-being.

Interestingly, the importance assigned to these issues by survey respondents aligns with the issues emphasized in ethics guidelines created by professional organizations and corporations. After analyzing ethics guidelines from 22 major organizations, Hagendorff (2020) found that privacy, fairness, non-discrimination, accountability, transparency, and safety were the topics that received the most attention. From this analysis and our findings, it appears that both people and policies are more focused on the do-no-harm principle than on the do-good principle, which holds AI to standards of improving the well-being of society and the environment. However, when the focus shifts from functionality trust to human-like trust in AI, the humanistic requirement (positive impact on societal and environmental well-being) was a significant predictor, offering additional justification for differentiating between functionality trust and human-like trust in AI.

Overall, our findings highlight the significance of human values and ethics in AI. To translate ethical principles into practice, a mediating governance scheme is necessary (Mittelstadt 2019). Successful governance requires a multilevel approach with interdependencies among the transnational/national, organizational, and individual levels. Although experts, professional organizations, and companies have announced ethical guidelines, as reviewed earlier in this paper, rules and regulatory frameworks for enforcement have not kept pace (Hagendorff 2020). Recent papers have proposed ethics-based governance through certification or accreditation programs (Roski et al. 2021) and ethics-based auditing (EBA) as governance mechanisms to help organizations realize their ethical commitments (Mökander and Floridi 2021; Mökander et al. 2021; Mökander and Axente 2021). Along with organizational-level efforts, empowering citizens to learn and exercise their rights and to critically evaluate AI technologies will provide a solid grounding for the development and prosperity of ethical and trustworthy AI.

Other approaches to self-governance include radical transparency, such as the TuringBox project, which requires developers to share their source code for examination by other developers and independent researchers (Epstein et al. 2018). While such approaches may work in academic settings, they may not gain traction among corporations that employ AI for competitive advantage. The role of competition and profit is a challenge for current ethical frameworks in AI, which are drawn from the principles of beneficence, non-maleficence, autonomy, and justice that are the bedrock of bioethics (Floridi and Cowls 2019). However, unlike the field of medicine, AI has neither an explicit “caring” obligation nor professional requirements, such as the Hippocratic oath or licensure. Further, unlike medicine, AI has no longstanding tradition of translating abstract values into concrete policies or a legal framework to impose accountability for malpractice (Mittelstadt 2019). In short, the soft approach currently in place is insufficient to enforce accountability.

As AI ethics has come under scrutiny, classical theories have been examined, such as the Kantian categorical imperative, which holds that there are universal rights and wrongs, and the utilitarian principle of minimizing harm and maximizing good for the most people. There appears to be an emerging consensus that virtue ethics, which emphasizes the moral character of the person over specific actions, is a promising avenue toward more responsible AI (Abney 2012). Virtues are dispositions to act in a certain way, and humans involved in the development of AI can be trained to develop virtues and integrate ethics into all aspects of design and development. But passing virtue ethics on to an autonomous agent or AI system is a formidable task because the understanding of context, intentionality, complex thought, and the consequences of moral actions remains elusive in AI (Allen and Wallach 2012). Despite this limitation, virtue ethics is a promising approach through which humans can integrate ethics into all their interactions with AI. In summary, a combination of bottom-up virtue ethics and top-down regulatory frameworks that hold organizations more accountable appears to be a pragmatic way forward for developing ethical AI.

6.3 Limitations and future directions

Our findings offer empirical evidence of the correlation between ethical principles and trust in AI, which must be evaluated keeping in mind some of the limitations of this study. First, we focused on AI technology only in consumer products (i.e., smart technologies), which is limited in scope. Future research must explore the ethical requirements of AI in specific domains, such as health, criminal justice, or autonomous vehicles, in which trust is critical for success. In future work, a better understanding of fairness, transparency, accountability, safety, and robustness within each domain will be critical for the widespread acceptance of these technologies. Second, the survey offers a snapshot of the public’s level of trust and attitudes toward ethics at the current moment. However, researchers have pointed out that trust is a dynamic construct that evolves over time and can erode quickly after an adverse event (Hoff and Bashir 2015), which underscores the need for a longitudinal approach to the study of trust. Third, the findings from this study are correlational, and we cannot assume causation between the ethics of AI and trust in AI. Lastly, as with any online panel, our study respondents may not be truly representative of the U.S. population. The fact that all study participants had access to the internet skews the sample toward those with greater access to and experience with various forms of information technology. Future studies should examine more representative samples with varying levels of experience with technology.

7 Conclusion

During a time of increased misgivings about human-established institutions, trust in AI remains relatively high. Instead of a laissez-faire approach, this is the time to incorporate ethics and values into AI systems and nurture these sensibilities among technologists through training in virtue ethics. As we develop intelligent artifacts that may someday surpass human intelligence, it behooves us to build internal mechanisms that do not perpetuate human biases and that root out bad data and malevolent actors. There is an equally pressing imperative to build AI systems with an ethical compass that elevates human life and creates a society in which humans and machines interact in harmony. Human-AI interaction must strive for higher goals that go beyond productivity and profitability, fostering understanding and compassion among humans. For example, a polite conversational agent with a calm voice could encourage polite behavior in humans, especially as the use of such agents becomes pervasive in modern society. Though humans have struggled to achieve these goals, the optimistic ambition is that, with assistance from AI, we may come closer to these ideals, which is at the heart of trustworthy AI.