1 Introduction

In an overview of the ethics of artificial intelligence (AI), Boddington (2017) discusses fundamental ethical issues in AI, including the general issues of benefit vs. harm, justice and fairness, the nature of moral agency and moral motivation, and transparency. She suggests that many of the ethical challenges in AI are associated with its rapid development and relate to issues of predictability, sociocultural change, the handling of huge volumes of data, and the allocation of responsibility. These issues are particularly relevant to AI because, in the future, AI may replace human decision-making, judgement, perception, and action, and may simulate human emotions (ibid., p. 29; Vallverdú and Casacuberta 2009).

The potentially transformative nature of AI makes it difficult to evaluate its benefit-to-harm ratio now. The issue of AI safety demonstrates the complexity of such an assessment, and self-driving cars currently offer a good vantage point for observing society's response to the impact of AI. In the words of Russell et al. (2015): “If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits”.

In January 2017, the Future of Life Institute organized the Asilomar Conference on Beneficial AI in California (Future of Life Institute 2017). A large number of pioneering researchers in AI, economics, law, ethics, and philosophy discussed the ethical issues of AI and formulated a set of guidelines for future AI research (Kurzweil 2005; Kurzweil Network 2017), comprising 23 principles listed on the homepage of the Future of Life Institute (https://futureoflife.org/ai-principles/). According to these principles, the goal of AI research should be to create beneficial, rather than undirected, AI. The principles recommend serious consideration of safety, failure transparency, responsibility, alignment with human values, liberty and privacy, shared benefit and prosperity, human control and non-subversion, and the planning for and mitigation of risks.

The World Economic Forum (WEF) has suggested nine potential ethical issues of AI in the future (Bossman 2016), in the following order: unemployment, increasing inequality, impact on human behavior and interaction, safeguarding against mistakes, racism (AI bias), security, unforeseen and unintended consequences, losing control to singularity, and robots’ rights (humane treatment of AI). Curiously, what is absent from the WEF report is a direct mention of human privacy, especially in the context of emerging emotional AI (EAI) technologies and their ability to harvest personal data from individuals without their conscious awareness (non-conscious data collection). In fact, concern over the use of AI to read human faces and body gestures is increasingly prevalent, as the recent pushback against facial detection and recognition technologies demonstrates.

EAI describes digital technologies, interfaces, and software that read and react to human emotion via text, images, voice, computer vision, and biometric sensing (Dailey et al. 2002; Bartneck and Suzuki 2005; McStay 2018). EAI experts are concerned that advances in machine learning and the increasing accuracy of emotion-sensing technologies in smart cities will greatly expand the real-time extraction of personal data from individuals’ daily lives, monitoring their patterns, routines, habits, preferences, idiosyncrasies, and geospatial coordinates (Anderson et al. 2018). Such practices raise ethical questions regarding civil liberties, privacy, non-conscious data protection, and governance. Moreover, as EAI moves toward greater levels of complexity in automated thinking, it may not even be clear to the creators of these systems how decisions are reached. This means that the more authority these systems have and the more autonomously they operate within society, the less explainable and understandable their decision-making may become, and the harder it may be to hold anyone responsible for it.

As AI continues to progress and become more sophisticated, and long before it reaches a broad, human level of intelligence over a wide range of skills, there may be opportunities as well as challenges in many areas such as employment, law enforcement, medicine, defense/warfare systems, and transportation. With advances in affective computing, machine learning, and AI, machines are gaining the ability to gauge, sense, learn from, and interact with human emotions (McStay 2020; Ortony 2003). As an emerging layer in the design of smart cities, EAI may change how people experience the urban environment. These developments may create ethical issues, because societal needs and perspectives are often ancillary in commercially driven smart-city deployments. It will therefore be important to anticipate and address these issues as EAI is rolled out in cities.

In this context of the imminent embeddedness of AI in society, and to gain some insight into how this embeddedness may be received by the young people who will live with it, we conducted a survey of college students at an international university in Japan. We asked a survey question based on the WEF criteria. This paper reports the survey and its results and compares them with those of similar studies.

2 Research methods

In a large classroom of 233 college students enrolled in a general course on bioethics, we asked the students for their cooperation in a study about AI in the future. This class was selected because foundation subjects are taken by students of all groups, which improves the degree to which the results can be generalized to the school as a whole, and because students become familiar with ethics and ethical terms through the course. The students were asked to upload their answers to the survey question to a digital repository within one month. Only five students did not participate, resulting in a total of 228 responses (a 98% response rate). The respondents included 63 (28%) Japanese and 165 (72%) non-Japanese students, mainly from China, Korea, Indonesia, Vietnam, Thailand, Uzbekistan, Mongolia, India, and Bangladesh, with a few from other countries. The survey question was as follows:

“The World Economic Forum has suggested nine potential ethical issues of artificial intelligence (AI) in the future. Which one do you think is the most important and explain the reason why: AI control of society, AI discrimination (racism or bias), AI impact on human behavior and emotions, AI mistakes, AI security, increasing inequality, malicious and evil AI, robots’ rights and emotions, unemployment.”

We required an explanation for the selected ethical issue in order to encourage the students to reflect on their responses. The responses were collected from the digital repository, and the selected ethical issue and the reasons behind the selection were recorded for each. The 228 anonymized essays were saved as documents in a folder for text mining and sentiment analysis. We then used the R programming software (v4.0.2) for text mining and sentiment analysis of the students’ responses, specifically the ‘tm’ package (Meyer et al. 2008; Feinerer 2019) and the ‘Syuzhet’ package (Jockers 2017), respectively.
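As a rough illustration of this step, a minimal R sketch using the tm package is given below. The folder name "essays" and the specific preprocessing choices (lowercasing, stop-word removal, and so on) are our assumptions, since the paper does not report them.

```r
library(tm)

# Load the saved essay documents into a corpus
# ("essays" is an assumed folder name; the paper does not specify it)
corpus <- VCorpus(DirSource("essays", encoding = "UTF-8"))

# Standard preprocessing (assumed; the paper does not report these steps)
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, stripWhitespace)

# Document-term matrix: one row per essay, one column per term
dtm <- DocumentTermMatrix(corpus)
```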

This combination of sentiment analysis and content analysis of students’ essays on the subject of AI is intended as a novel contribution to the literature, which is dominated by studies using questionnaires or semi-structured interviews (Brougham and Haar 2017; Nadarzynski et al. 2019; Sarwar et al. 2019; Shen et al. 2020; Sit et al. 2020). Future research could validate the results of this study by incorporating theoretical frameworks such as the STARA scale (Brougham and Haar 2017), by conducting semi-structured interviews with students about various issues related to AI (Nadarzynski et al. 2019), or by applying a multi-level Bayesian analysis of how socio-demographic factors influence students’ perceptions of AI (Vuong et al. 2020).

3 Findings and results

Among the 228 students who responded to the survey question, 63 (28%) were Japanese and 165 (72%) non-Japanese; 117 (51%) were female and 111 (49%) male; 31 (13%) were freshmen (1st year), 84 (37%) sophomores (2nd year), 72 (32%) juniors (3rd year), and 41 (18%) seniors (4th year). All respondents were between 19 and 22 years old. The students’ choices of the single most important ethical issue are listed in Table 1.

Table 1 Students’ responses to “which ethical issue would be the most important in AI in the future?”

The most commonly chosen concern was unemployment, selected by 65% of students. The next most common concern was the AI impact on human behavior and emotions (8% of students). Combining those responses with the responses for robots’ rights and emotions shows that EAI was the next most common concern, chosen by about 13% of students. These were followed by increasing inequality and AI control of society, while AI discrimination, malicious AI, AI security, and AI mistakes were each chosen by only a few students.

Next, we applied the text-mining and sentiment-analysis tools. The tm package was used to count the frequency of words across the 228 essays; Fig. 1 charts the most frequent terms. Figure 2 is a word cloud rendered from the same term-frequency counts, in which the size of each word represents its frequency.
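Figures of this kind could be generated from the document-term matrix along the following lines; the use of the ‘wordcloud’ package and the display cutoffs shown are assumptions, as the paper does not name its plotting tools.

```r
# Total frequency of each term across all 228 essays (basis for Fig. 1)
freq <- sort(colSums(as.matrix(dtm)), decreasing = TRUE)
barplot(head(freq, 20), las = 2, ylab = "Frequency")

# Word cloud from the same counts (basis for Fig. 2); word size is
# proportional to term frequency. Using the 'wordcloud' package is an
# assumption; the paper does not say how Fig. 2 was rendered.
library(wordcloud)
set.seed(42)  # fixed seed so the layout is reproducible
wordcloud(words = names(freq), freq = freq,
          max.words = 100, random.order = FALSE)
```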

Fig. 1 This chart shows the frequency of the most common terms used in the 228 essays

Fig. 2 This image shows a word cloud rendered from the term-frequency counts, in which the size of a word represents its frequency

The Syuzhet package was used to analyze sentiments across the 228 essays. It produced a chart of the binary sentiments of the essays (Fig. 3), a chart of the emotional trajectory across all 228 texts (Fig. 4), and a chart of the eight basic emotions expressed in the essays (Fig. 5). The Syuzhet package calculates sentiments using the NRC Emotion Lexicon, a list of words associated with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and two sentiments (negative and positive). The NRC lexicon is a crowd-sourced project that creates word–emotion associations via Amazon’s Mechanical Turk online platform (Mohammad and Turney 2013). By 2013, the project had created emotion associations for more than 10,000 English words.
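A sketch of how this scoring could be run is shown below; reading the essays back from the assumed "essays" folder and the specific plotting calls are illustrative, as the paper reports only the package used.

```r
library(syuzhet)

# Read each essay as a single character string
# ("essays" is the assumed folder name from the sketch above)
files <- list.files("essays", full.names = TRUE)
texts <- sapply(files, function(f)
  paste(readLines(f, warn = FALSE), collapse = " "))

# NRC scoring: one column per basic emotion plus 'negative' and 'positive'
nrc <- get_nrc_sentiment(texts)

colSums(nrc[, c("negative", "positive")])  # binary sentiments (Fig. 3)
barplot(colSums(nrc[, 1:8]), las = 2)      # eight basic emotions (Fig. 5)

# Per-essay valence in essay order, as a simple trajectory (Fig. 4)
plot(get_sentiment(texts, method = "nrc"), type = "l",
     xlab = "Essay", ylab = "Sentiment score")
```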

Fig. 3 This chart shows the binary sentiments (positive vs. negative) of the essays

Fig. 4 This chart shows the emotional trajectory across all 228 texts, which is mainly positive

Fig. 5 This chart shows the eight basic emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust) expressed in the essays. Trust is the most common emotion, followed by fear, while surprise is the least common

4 Discussion

The most common concern of the college students in this study was increasing unemployment related to the use of AI. Wilkinson and Marmot (2003) refer to the social, psychological, and health benefits of work; losing such benefits because of AI could be considered an ethical harm. However, it is difficult to predict how AI will affect employment, because a complex web of factors needs to be considered (Nilsson 1985; Marchant et al. 2014; Pol and Reveley 2017). Certain jobs, such as accounting, seem to be more at risk, as do aspects of jobs that involve highly repetitive actions in structured settings, such as legal research and teaching; some have suggested that in the future even creative work may be taken over by AI (Bostrom 2014). Jobs that require interacting with people and using social intelligence, devising creative and clever solutions, or working in an unpredictable environment would be less at risk (Cyert and Mowery 1987; Bessen 2016; Davenport and Kirby 2016). The students’ next biggest concern was emotional AI, including the AI impact on human emotion and the emotions and rights of robots themselves (13%). Rising inequality came next, selected by 7% of respondents as the most important ethical problem. Indeed, researchers have warned that inequality may increase significantly unless a fraction of AI-created wealth is redistributed to make everyone better off (Korinek and Stiglitz 2017; Tegmark 2017).

Fear about the behavior of autonomous robots is also commonly expressed in Western literature (Liang and Lee 2017; Zhang and Dafoe 2019), and one of the major fears is societal control by autonomous machines. Such fear is not seen in Japan, where a manga series featuring a robotic creature called Astro Boy has contributed to a positive image of robots in society (Honda 2002; Wagner 2009). Interestingly, Japanese Shinto beliefs suggest that robots, like everything else, can have a soul (Boddington 2017). One may hypothesize that the frequent representations of AI in popular culture, especially in Japan, have in many ways instilled the idea of inevitability and thus normalized the introduction of AI into daily life (Kovacic 2018).

In the ‘general population’, attitudes toward AI seem to depend on cultural background and level of awareness. Bartneck et al. (2007) conducted a cross-cultural survey of people’s attitudes toward robots and found that, among nearly 500 participants, those from the US were the most positive toward robots, while those from Mexico were negative toward them. On the topic of robot/AI rights, one study examined 1270 responses of online users to understand their first impressions of proposals to grant 11 possible rights to future autonomous electronic agents, and investigated whether people changed their minds when common misconceptions about robot/AI rights were debunked (Lima et al. 2020). The study found that the majority of participants were not in favor of robot rights, yet believed that preventing cruel treatment was a good idea. It also found that when people were presented with more information addressing common misconceptions, their perceptions became more positive.

There are also issues of security, accuracy, and empathy in AI. A mixed-methods study on the perception of AI chatbots in healthcare showed that most Internet users were receptive to the new technology (Nadarzynski et al. 2019), although there was some hesitancy due to concerns over cyber-security and accuracy, and questions over AI’s capacity for empathy.

To understand public emotions regarding AI and smart cities, media researchers devised an AI-based observation framework to analyze 29,928 social-media conversations about self-driving vehicles (Adikari and Alahakoon 2020). The study showed clear emotional transitions whenever a self-driving vehicle accident occurred.

Previous research among ‘working professionals’ has shown a wide array of attitudes toward AI and its applications, which seem to be associated with profession, nationality, and the employee’s level of awareness of the subject. These studies have found little concern over job loss due to AI among professionals (Brougham and Haar 2017; Sarwar et al. 2019; Pinto dos Santos et al. 2019; Shen et al. 2020).

Brougham and Haar (2017) designed a measurement instrument, the Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA) awareness scale, to measure employees’ perceptions of the future workplace. Testing the scale on 120 employees in New Zealand, they found little concern over job replacement among the participants. A quantitative analysis showed that the more aware employees were of AI and what it meant for their job, the lower their organizational commitment and career satisfaction. This may have to do with the influence of current norms on professional expectations about the roles to be assigned to AI. Shen et al. (2020) conducted a web-based questionnaire and collected responses from 1228 Chinese dermatologists from 30 provinces and regions in China. They found that nearly 96% of the participants believed the role of AI was to assist with diagnosis and treatment. These studies found only weak correlations between a number of stratified factors (age, gender, education degree, professional title, and hospital ownership) and concerns about AI.

As for students, most of those surveyed believed that they needed AI training for their medical degree and that AI would play a major role in their future work. In a survey of 263 medical students, although most respondents thought AI could revolutionize (77%) and improve (86%) radiology, only 17% believed that human physicians would be replaced (Pinto dos Santos et al. 2019). Another survey, of 487 pathologists, found that nearly 75% of respondents were excited and interested in the prospect of AI integration in their work, while 17.6% and 2.1% reported being concerned or extremely concerned, respectively, about job replacement by AI tools.

A survey of 484 medical students at 19 universities in the UK showed that the majority (88%) believed AI would play an important role in healthcare; yet nearly half of the respondents expressed concern over job loss in radiology due to AI (Sit et al. 2020). In the survey by Pinto dos Santos et al. (2019) of 263 medical students, only around 52% were aware of the ongoing discussion about AI in radiology, and 68% said they did not know about the technologies involved.

Our research design asked students to choose and reflect on one of the nine ethical issues proposed by the World Economic Forum by writing a short essay, which was then analyzed using sentiment analysis and text mining. In this setting, most of the students chose to reflect on the problem of job loss due to AI. However, the sentiment analysis found that the students used many words associated with positive sentiment (Fig. 3) and with the emotion of trust (Fig. 5).

This study has a number of limitations. First, the survey was completed by Japanese students as well as international students from many other countries studying at a single university in Japan, so one should be cautious about generalizing the results. Second, the sentiment analysis might not reflect true sentiments, given the mixed cultural backgrounds of the surveyed students. Third, the sentiment analysis accounts for only eight basic emotions and misses culturally laden emotions, such as the complex feelings recognized in Confucian culture (Bartneck et al. 2007; Ivanhoe 2020).

Another limitation of our study was the exclusion of privacy issues in AI applications. The emerging algorithmic manipulation of human emotions can change the way governance is conducted, taking many tolls on human society, including self-censorship (Mantello 2016). A novel series of monitoring technologies is transforming the nature of surveillance by exercising emotional influence at an overwhelming scale, and thus attempting to reshape individual behavior (Fazzin 2019). Such forms of automated governance and algorithm-based soul-training are already in place throughout the world.

5 Conclusion

Our survey of college students at a multicultural university suggests that worry over unemployment is their most serious concern about AI technologies. Our review of the literature confirms that such worries are warranted, as most references in the area agree that AI will have a large impact on the number and type of jobs available in the future. Considering that the survey was conducted in a college of social sciences and humanities, the students’ responses appear to be a genuine reflection of concern about the potential impact of AI on the job market. That said, the sentiment analysis of the students’ texts demonstrated a generally positive attitude toward AI. Trust was the most common emotion, which may seem naïve given AI experts’ concern over the use of EAI for surveillance and the resulting decline of privacy for citizens of the future. The second most common emotion was fear, which does reflect the concern shared by AI experts.

One interesting result of this study was that many students are concerned about emotional AI issues, including the impact of AI on human behavior as well as the emotions and rights of future robots. The review of the literature on this aspect revealed considerable controversy over fundamental concepts, such as the meaning of consciousness, the long path ahead to develop AI systems that are intelligent across a broad range of functions, and the significance of feeling human-like emotions. The responses of college students may therefore reflect a relative lack of awareness of, or sensitivity to, the potential risks of EAI technologies, which implies a need to raise awareness by including related material in college curricula. However, the study also shows that EAI has attracted the attention of a considerable number of college students and is therefore worth more detailed research in the future. AI engineers, too, may want to consider the emotional aspects of AI more seriously in research and development.