
1 Introduction

The influence of artificial intelligence (AI) on people's lives has been growing constantly over the past two decades and is expected to become even more pervasive in the near future. AI is used in various areas, from personal mobile devices and cars with on-board voice-controlled computers to space shuttles and the Alpha space station, and it has good prospects thanks to the continuous improvement of computer hardware and software, on the one hand, and of AI algorithms, which already significantly surpass some human expert competences, on the other. As AI capabilities develop, the range of applications in which it is used will continue to grow, and it is highly likely that AI will soon begin to self-improve through self-optimization of its algorithms, eventually reaching the level of human intelligence and then surpassing it.

In a number of AI applications, from safe driving to medical diagnostics, its superiority over humans has already been proven and documented. Nevertheless, we can neither predict nor control the decisions AI may take in situations where serious injuries caused by an unmanned vehicle colliding with several pedestrians can be avoided only at the cost of serious or fatal injuries to the passengers inside the AI vehicle. Or, vice versa, hitting a pedestrian might seem the best option to a driverless car. Thus, on 18 March 2018, an Uber self-driving car struck and killed a woman on a street in Tempe, Arizona, despite an emergency backup driver being behind the wheel [1].

In addition to all the possible cautions and macabre forecasts, there is another danger, no less significant than physical destruction, namely the danger of a “loss of humanity”, i.e. the loss of the most significant characteristics that distinguish humans from animals or robots.

2 AI on the Rise

Following the success of the Internet and ICTs, Artificial Intelligence is the next big thing of the high-tech industry. Three essential factors, namely next-generation computing architecture, access to historical datasets, and advances in Deep Neural Networks, are accelerating the pace of innovation in Machine Learning and Artificial Intelligence [2].

In 1997, a supercomputer, IBM’s Deep Blue, defeated the then world chess champion, Russian grandmaster Garry Kasparov. In 2016, DeepMind’s computer program AlphaGo beat Go champion Lee Sedol, a Korean professional Go player of 9 dan rank. On 18 June 2018, IBM’s Project Debater engaged in the first-ever live, public debate with humans. Preparing arguments for and against the statement “We should subsidize space exploration,” the AI system stated that its aim was to help “people make evidence-based decisions when the answers aren’t black-and-white” [3].

According to a recent Oxford and Yale University survey of over 350 AI researchers, machines are predicted to be better than us at translating languages by 2024, writing high-school essays by 2026, driving a truck by 2027, working in retail by 2031, writing a book by 2049 and performing surgery by 2053 [4].

AI software is expected to create unprecedented business opportunities and societal value, with ever smarter chat bots providing expert assistance and robot advisors conducting instantaneous research in most fields of the humanities. Artificial intelligence is getting big. The evidence is everywhere. From critical life-saving medical equipment to smart homes, AI will be infused into almost every application and device [2]. A few more years, and humanity will be amazed at AI achievements and face new challenges. To be proactive, researchers should set challenging tasks, raising issues that today seem premature or insignificant, before it becomes too late the day after tomorrow.

Addressing university students in 2017, Russian President Vladimir Putin stated that the nation leading in AI ‘will be the ruler of the world’ and warned that AI offers both ‘colossal opportunities’ and as yet unpredictable threats [5]. The same year, Elon Musk and 116 other technology leaders, including Bill Gates, Steve Wozniak, Stuart Russell and Stephen Hawking, sent a petition to the United Nations warning of a “third revolution in warfare” and calling for new regulations on current and future AI weapons development [6].

Nowadays, more than 50 nations are developing battlefield robots and drones. The most sought-after are robots capable of making the “kill decision” without human control. Such weapons are currently not prohibited by international law, but even if they were, it is highly doubtful they could ever conform to international humanitarian law (IHL) and international human rights law (IHRL), or even to the laws governing armed conflicts, as an AI on the loose would have to avoid mistakes in telling friend from foe and combatants from civilians, and to provide aid to the latter along with wounded comrades in arms. The issue of accountability seems even more complex. So far, these sore questions remain unanswered, while the development of intelligent killing machines has gradually turned into a still unacknowledged arms race. The numerous ‘collateral losses’ among civilians in Afghanistan, Pakistan, Yemen, and Somalia have highlighted how ethically fraught the situation is [6].

3 Specificities of Two Generations

As is known, Generational Theory was developed by William Strauss and Neil Howe on the basis of the repetitive generational cycles they described in US history [7]. Though the theory seems contradictory, having gained many advocates and as many opponents, the idea of comparing generations is quite fruitful, as people separated by 15–20 or more years naturally differ greatly due to different life conditions, which in the 20th century changed every 20–30 years or even faster. The two generations whose representatives’ attitudes are analysed in the present research are Generation X and Generation Y; their life circumstances differed considerably, as the majority of their representatives were born in effectively different countries due to the collapse of the Soviet Union.

Generation X spans the birth years 1965 to 1984 (though the dates are not set in stone) [8]; accordingly, Generation Y spans 1985 to 2004. The traits listed below are generalised, yet they are found in most representatives of these generations (a fact confirmed by participants in several surveys), naturally developed to different extents depending on family upbringing and personal life circumstances.
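For illustration, the year ranges above amount to a simple classification rule. The sketch below is a hypothetical illustration in Python, not part of the study's instruments:

```python
def generation(birth_year: int) -> str:
    """Classify a birth year into a generation using the ranges
    adopted in this paper (the boundaries are not set in stone)."""
    if 1965 <= birth_year <= 1984:
        return "Generation X"
    if 1985 <= birth_year <= 2004:
        return "Generation Y"
    return "other"

print(generation(1970))  # Generation X
print(generation(1999))  # Generation Y
```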

3.1 Generation X

Russian Generation X is characterized by a changed value system (as compared to that of the previous generation, their parents’) and by changes in the quality of life. As this generation faced the tragic collapse of the USSR, the dispelled value of the “public good”, and the need to survive, it developed a number of core qualities, partly inherited from the two previous generations (such as the enduring values of love, care, and family), along with hyper-responsibility (first of all, caring for others, even to the detriment of their own interests), a striving for self-education and self-knowledge, increased anxiety, a sense of internal conflict, emotional instability, and susceptibility to depression. For the most part, this generation entered the new digitalised world easily, although some of its representatives experienced certain difficulties in mastering new ICTs.

3.2 Generation Y

Russian Generation Y has developed a new system of values while getting used to the collapse of the USSR, the emergence of new technologies, and the high probability of terrorist attacks and military conflicts. The key traits characterising this generation therefore include a love of freedom, “lightness” in relation to life, flexibility and adaptability to changing conditions, increased attention to the quality of their lives, a desire for fun and satisfaction, reluctance to accept other people’s shortcomings, attaching greater importance to career than to creating a family, a lack of desire to plan their future, a preference for the Internet over any other source of knowledge, and the devaluation of knowledge, along with credulity, naivety and infantilism (due to the availability of any information). Being digital natives, they are mostly on familiar terms with ICTs and cannot imagine their lives without PCs and all sorts of gadgets. Unlike their parents, they prefer to spend their leisure time in the virtual world, with some of them abusing the digital reality, or the digital reality abusing them.

4 Research Outline

The pilot research presented here was planned to identify the contemporary perceptions of AI by two university populations, academics and students, belonging to two different generations, Generation X and Generation Y, in order to reveal current trends and probe possible future challenges.

The methodology applied in the pilot research includes a questionnaire-based survey designed to suit the research goals, a content analysis of the participants’ responses, and an extensive review of the literature devoted to AI, UAVs and digitalisation at large.

4.1 Research Goal and Hypotheses

The goal of the research is to reveal the attitudes of academics belonging to Generation X and students belonging to Generation Y towards AI, its current achievements and future advances, and to compare them based on a systemic approach and a content analysis of the contemporary discourse.

The goal of the pilot research prompted two hypotheses:

  • Hypothesis 1 – unlike representatives of Generation Y, representatives of Generation X are more restrained in their perception of AI and more cautious as regards its further development.

  • Hypothesis 2 – unlike representatives of Generation X, representatives of Generation Y are on better terms with ICTs and gadgets and take AI for granted, not perceiving it as an equal partner in communication.

The analysis of the global research discourse and preliminary collegial discussions of the issues raised in the questionnaire helped identify five challenges facing mankind in the near future. These challenges are generated by the excessive speed of technological transformation, the rise of AI, drones and battlefield robots, and human inability or unwillingness to tackle sore issues before they start tackling humans. Revealing both the greatness of human intelligence and the utter vulnerability of a humanity torn apart by contradictions and rivalry for world domination in the absence of a comprehensive development strategy for mankind, the challenges are as follows:

  1. The issue of trust seems to be the most serious challenge to date. Although trust has been the most valued quality in human relations since the first humans appeared on the Earth, people have not learned to respect and value it enough not to abuse and betray it. People do not trust strangers, nations do not trust other nations, and this approach will naturally be transferred to human-AI relations, with AI soon learning not to trust people.

  2. Teaching the AI in combat robots and drones to kill people (even the most evil ones) is the shortest way to face the challenge of possible annihilation and merely probable survival.

  3. Underestimating AI’s ability to learn and study, including its soon-to-be-realised need to study people, may lay the ground for a vicious circle in human-AI relations, with people possibly trapped inside it.

  4. The current lack of respect for the still young AI making its first steps can easily be transferred to more mature AI developments, which may soon learn to disrespect and disregard people.

  5. The utilitarian attitude to AI may lead to humans attempting to acquire new slaves, and thus to the challenge of losing some of the important qualities that mankind has gained over the 150 years since slavery was abolished and all people were proclaimed equal. Here, again, it is important to keep in mind AI’s growing ability to learn faster than most humans can imagine.

4.2 Research Methods and Processes

The research methodology rests upon systemic and comparative approaches to the perception of AI and collaboration with it in the present and the future, and includes a literature review, a discourse analysis, and a qualitative empirical study launched as a pilot survey of two university populations’ attitudes and opinions.

The questionnaire consisted of 3 background questions (age, gender, occupation) and 16 thematic research questions. Some of these were scaled according to bipolar Likert scaling, measuring the participants’ positive or negative responses from ‘Fully approve’ to ‘Totally disapprove’ and aimed at revealing the respondents’ attitudes to certain AI issues; the rest were multiple-choice questions intended to identify the respondents’ knowledge, habits or opinions.
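Such bipolar items are commonly coded on a symmetric numeric scale before analysis. The sketch below is a hypothetical illustration; the paper does not describe its coding scheme, and the intermediate labels are assumptions (only the endpoints come from the questionnaire):

```python
# Hypothetical coding of the bipolar Likert items; the endpoint labels
# follow the questionnaire, while the intermediate labels and the
# numeric values are assumptions for illustration only.
LIKERT = {
    "Fully approve": 2,
    "Approve": 1,
    "Neutral": 0,
    "Disapprove": -1,
    "Totally disapprove": -2,
}

def encode(responses):
    """Map a list of Likert labels to numeric scores."""
    return [LIKERT[r] for r in responses]

print(encode(["Approve", "Neutral", "Totally disapprove"]))  # [1, 0, -2]
```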

4.3 Respondents

The survey was designed as a pilot and was voluntary, so the samples were limited by the number of questionnaires handed out, the timeframe, and the participants’ free choice to take part. The pilot research was planned as comparative, engaging students and academics of Lipetsk State Pedagogical University, Russia. The university was chosen for the characteristics described below.

Lipetsk State Pedagogical University (LSPU) belongs to the categories of ‘small’ and applied universities, as its student population is about 5,000 persons and its academic population is 356 persons (288 full-timers and 68 part-timers). LSPU provides education in the following 10 fields: mathematics, technical studies, law, social sciences, philology, psychology, education studies, natural sciences, cultural studies and arts, and physical culture and sports, at all three levels: Bachelor’s, Master’s and postgraduate.

5 Research Results

Participation in the survey was voluntary and intended to encompass as many academics and students as possible. However, as the student population of LSPU is about 14 times larger than the academic one, it was decided that the samples could be unequal, covering twice as many students as academics. Therefore, 40 paper questionnaires were handed out to academics, of which 38 were returned valid for further analysis, and 80 paper questionnaires were distributed among students, of which 78 were returned valid for analysis.

The background questions identified the respondents’ age, gender and occupation, as these data proved the respondents’ relevance to the research, age and occupation in particular, since the research presumed a comparison of two generations, Generation Y and Generation X, represented by students and academics accordingly.

The histograms below (Figs. 1 and 2) show that a majority of the academics were aged from 31 to 45 (68%), with 23% aged from 46 to 55, 3% aged from 26 to 30, 3% aged from 56 to 60, and 3% aged over 60. These data confirm that an overwhelming majority of the academics taking part in the survey belong to Generation X and the rest are very close to it. A majority (83%) of the students participating in the research were under 20 years of age, i.e. Bachelor’s students, with the remaining 17% aged from 21 to 25, i.e. Master’s students. This means that all the student participants belong to Generation Y. Therefore, the samples are relevant to the research goal and can be compared safely.

Fig. 1. Academics’ age groups

Fig. 2. Students’ age groups

Gender is a variable of secondary importance for this research; yet it also supports its validity, as the shares of female and male participants in the two samples are not contradictory: 48% male and 52% female participants in the academic sample, against 31% male and 69% female respondents in the student sample (Figs. 3 and 4 below). However, it would be interesting to conduct deeper research investigating the specificities of women’s and men’s attitudes to AI.

Fig. 3. Academics’ gender

Fig. 4. Students’ gender

The histograms below show that all the employees in the sample are engaged in teaching and that the main occupation of the student sample is studying, which confirms the validity of both groups of participants for the research (see Figs. 5 and 6 below).

Fig. 5. Employees’ occupation

Fig. 6. Students’ occupation

Question 1 was intended to identify the participants’ general attitude to AI, ranging from highly positive to utterly negative. 26% of the academics expressed a highly positive attitude to AI (against 12% of the student sample). The shares of respondents who expressed positive attitudes were very close: 29% of the academics and 26% of the students. A neutral (close to indifferent) attitude was expressed by 13% of the Generation X representatives and 40% of the Generation Y representatives. However, 32% of the academics and 21% of the students were cautious about AI, and 1% of the students expressed an utterly negative attitude. The higher level of cautiousness among the academics supports Hypothesis 1.
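With larger samples, such distributions could also be compared formally. The sketch below is a hypothetical re-analysis, not part of the original study: it applies a chi-squared test of independence to counts reconstructed approximately from the reported percentages and sample sizes (38 academics, 78 students), so the figures are illustrative only:

```python
# Approximate counts reconstructed from the reported percentages;
# response order: highly positive, positive, neutral, cautious, utterly negative.
from scipy.stats import chi2_contingency

academics = [10, 11, 5, 12, 0]   # n = 38
students  = [10, 20, 31, 16, 1]  # n = 78 (rounded)

chi2, p, dof, expected = chi2_contingency([academics, students])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# Caveat: several expected cell counts fall below 5, so with samples this
# small the test is only indicative; a larger follow-up survey is needed.
```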

Question 2 inquired about the presence of AI in the participants’ lives. 52% of the academics and 63% of the students confirmed that AI was present in their lives, 35% and 31% correspondingly found this question difficult to answer, and 13% of the academics and 6% of the students denied AI’s presence in their lives. The shares of those confirming AI’s presence in the two groups were closer to each other than the shares of those who did not feel it. The share of denying academics, more than twice that of the students, supports Hypothesis 2.

Question 3 aimed at revealing the participants’ estimation of the usefulness of AI developments, ranging from ‘Very useful’ to ‘Harmful’. The attitudes of the two groups proved to be very similar, with the academics perceiving AI a bit more warmly: 46% of them found AI very useful (against 21% of the students) and 35% believed AI was useful (against 59% of the students), yet the combined share of positive attitudes was practically the same (81% of the academics and 80% of the students); 16% of the academics and 12% of the students were neutral about the idea, 5% of the students found AI useless, and equal shares of both groups believed that AI developments are harmful. The somewhat warmer welcome from the academics can be accounted for by their professional vision and mission and their habit of being in the vanguard of progress.

Question 4 targeted the respondents’ perception of AI safety/danger. The students’ responses showed a higher level of trust in AI, as 5% of the student respondents were sure of AI being totally safe (against 0% of the academics), 14% of the students and 16% of the academics believed that AI was safe, 59% and 58% accordingly believed AI was neutral, 19% of the students and 26% of the academics found AI dangerous, and 3% of the students thought it was very dangerous. The data received from both groups are very close, though the academics proved to have sensed more danger in AI developments.

Question 5 aimed at identifying the kinds of AI present in the respondents’ lives. For the research purposes, the following AI developments were chosen: chat-bots (Facebook, VKontakte, etc.), chess-playing computers, self-driven autos, electronic concierges (OK Google, Siri, Alisa), bank terminals recognising images, and drones recognising images. The received responses showed that the survey participants must have misunderstood the question and answered it based on their knowledge of certain AI developments rather than on their actual presence in their lives. Thus, 27% of the academics and 40% of the students confirmed the presence of chat-bots (Facebook, VKontakte, etc.), and 38% and 29% accordingly admitted the presence of electronic concierges (OK Google, Siri, Alisa). These data seem quite truthful, but then come responses inspiring much doubt. 6% of the academics and 11% of the students stated they had chess-playing computers in their lives, obviously confusing Deep Blue with chess-playing computer games; 2% of the academics (0% of the students) indicated the presence of self-driven autos in their lives, though such vehicles are still under development in Russia; 27% of the academics and 18% of the students stated the availability of bank terminals recognising images, though their future installation had only been announced by Sberbank some weeks earlier; and 2% of the students (0% of the academics) confirmed the availability of drones recognising images, though, again, such drones are only entering mass production, and only for military and defence purposes.

The received data were contradictory, suggesting that a larger share of students deal with chat-bots, though a larger share of academics seem to use electronic concierges.

Therefore, Question 6 was intended to clarify the respondents’ practices of using chat-bots and electronic concierges and to identify the frequency of their usage. The received responses were predictable and supported Hypothesis 2, confirming that representatives of Generation Y are on better terms with AI developments and ICTs at large: 21% of the students and only 10% of the academics used these developments very often; 27% of the students and just 16% of the academics used them often; 26% of the students and 19% of the academics used them occasionally; 18% of the students and 29% of the academics used them seldom; and only 8% of the students, against 26% of the academics, never used chat-bots and electronic concierges.

Question 7 targeted ethical issues in the mode of building a dialogue with an electronic concierge or chat-bot and inquired whether the respondents would say “thank you” at the end of a conversation. The received data turned out to be quite unexpected: the students proved to be more polite towards chat-bots and electronic concierges and did not forget to thank Siri, Alisa or OK Google in 20% of cases (8% of the students did it always and 12% did it often), whereas only 3% of the academics thanked electronic concierges always and 10% did it often. 22% of the students and 19% of the academics thanked them occasionally, and 21% of the students and 16% of the academics did it seldom, but the share of respondents who never thanked electronic concierges and chat-bots was really high: 37% of the students and 52% of the academics. The greater politeness of the students may be accounted for by the overwhelming majority of female students in the sample and, possibly, by the field of the academics’ expertise (most of them work in the Institute of Natural, Mathematical and Technical Sciences and may be prone to perceiving AI formally, as a set of algorithms). Further interviews with some representatives (both male and female) of both samples may shed light on the reasons for such responses.

However, answering Question 8, aimed at identifying how important it was for the respondents that electronic concierges address them by name and finish conversations with “Thank you for contacting me/asking me”, an overwhelming majority of the student population (76%) confirmed its importance, with 19% admitting high importance, only 4% seeing no difference, and 1% stating low importance. Unlike the representatives of Generation Y, who seemed highly concerned with polite treatment by AI, the representatives of Generation X were not so obsessed with displays of respect by AI: only 25% of the sample confirmed the importance of being called by name at the beginning of a conversation and being thanked at the end, and just 2% found it very important; 43% of the academics did not see much difference, 26% stated low importance, and 4% attached very low importance to it. These data show that representatives of Generation Y are more sensitive to the way AI may treat them in communication (so far) and proved to be much more demanding, with higher expectations, in comparison with the representatives of Generation X.

The difference between the share of students who often or always thanked the electronic concierges and the share of students who expected to be thanked reveals an ethical inconsistency, and this behaviour pattern violates the proverb ‘What goes around comes around’, which was instilled as an important rule, almost a value, in the minds of Generation X.

Question 9 focused on the respondents’ attitude to the possibility of using artificial intelligence in advertising and marketing at large. This information is important for revealing the level of trust in AI as part of persuasive technologies. A much larger share of the student population approved (43%) or fully approved (3%) of AI use in marketing and advertising, in comparison with 16% (approved) and 6% (fully approved) of the academics. Most academics (62%) and many students (40%) expressed a neutral attitude to the issue. The shares of those who disapproved (13% of the academics and 10% of the students) and those who totally disapproved of the idea (3% of the academics and 4% of the students) were almost the same. The data confirmed the higher level of trust characteristic of the student population.

Question 10 was supposed to identify the level of trust in the information provided by electronic concierges. The received data are provided in the table below (Table 1).

Table 1. Levels of trust in information provided by electronic concierges.

The students’ responses demonstrate a higher level of trust in the information provided by electronic concierges.

Going further into the issue of trust in the information provided by intelligent systems, Question 11 was designed to identify the respondents’ attitude to the probability of AI systems providing unreliable information. 5% of the students considered the probability very low (against 0% of the academics), 26% of the students found it low (against 10% of the academics), 24% of the students and 35% of the academics found it difficult to decide, 40% of the students and 45% of the academics admitted it was probable, and 5% of the students and 10% of the academics thought the probability was very high. These data confirm the previously observed higher level of the students’ trust in AI.

Questions 12 and 13 were similar to Questions 10 and 11 but aimed to identify the level of trust in chat-bots. The data received in response to Question 12 were as follows (Table 2).

Table 2. Levels of trust in information provided by chat-bots.

The participants’ responses revealed an overall lower level of trust in chat-bots in comparison with electronic concierges, and a consistently higher level of trust among the representatives of Generation Y.

Question 13 gathered more data on the probability of unreliable information provided by chat-bots. 4% of the students believed that the probability was very low (against 0% of the academics), 14% of the students considered it low (against 10% of the academics), 33% of the students and 26% of the academics chose the option ‘Difficult to say’, 36% of the students and 48% of the academics agreed that it was probable, and 13% of the students and 16% of the academics found the probability very high. These data show a higher level of trust in AI demonstrated by the students and, at the same time, a lower level of trust in the reliability of the information provided by chat-bots in comparison with electronic concierges.

Question 14 was designed to clarify the respondents’ attitudes to the new methods of warfare using artificial intelligence (combat robots, drones, etc.). 10% of the academics and 9% of the students found them very promising, 10% of the academics and 9% of the students believed they were promising, and 26% of the academics and 21% of the students expressed a neutral attitude; however, 19% of the academics and 37% of the students were cautious about the new AI-based warfare, and 35% of the academics and 21% of the students expressed a negative attitude to the issue. Though the shares of positive and neutral attitudes practically coincide, the representatives of Generation Y proved to be more cautious, and the representatives of Generation X showed a more negative attitude to the issue, which can be partly explained by their longer life experience and greater expertise.

Question 15 focused on identifying the stakeholders responsible, in the respondents’ opinion, for the development and use of artificial intelligence. The respondents assigned responsibility to the core stakeholders to the following extent (Table 3).

Table 3. Key stakeholders’ responsibility for AI development and use

The responses revealed the academics’ higher sense of their own responsibility: they blamed primarily society at large, then the government they had voted for, and only then producing corporations, as well as customers/consumers and operators of robotized machines. The students mainly held producing corporations responsible, trying to avoid any personal responsibility for such sore issues. These two behaviour modes are typical of the representatives of the two generations.

Question 16 aimed at identifying the respondents’ attitudes to further AI development. A majority of the responses proved to be positive, which means the Russian university students and employees are in line with the global trend: 13% of the academics and 21% of the students believed that further AI development was undoubtedly worth continuing, and 45% of the academics and 40% of the students found it promising. For 29% of the academics and 26% of the students it was difficult to express an opinion, while 13% of the academics and 10% of the students stated that further AI development should not be continued, and 3% of the students even believed it should be banned. The mainly positive attitudes reflect high expectations of AI advances and achievements, based on the previously received unprecedented support provided by ICTs and the Internet, which made the world smaller and life faster.

6 Conclusions and Discussion

The results of the AI discourse analysis and the responses of the survey participants have revealed differences in the attitudes to AI of the Russian representatives of Generation X and Generation Y, determined by the differences in their attitudes to life in general, to the world, and to themselves.

The findings have highlighted an interim communication rivalry between representatives of Generation Y and AI, as the former have higher expectations of respect and courtesy than they are ready to offer themselves.

The survey has proved both hypotheses and demonstrated a higher level of trust in AI on the part of the students, the representatives of Generation Y. The findings of secondary importance have shown a higher level of trust in electronic concierges in comparison with chat-bots and highlighted a considerable lack of trust in the relevance and reliability of the information provided by both electronic concierges and chat-bots (the latter perceived as less reliable).

The five challenges formulated above prompt a number of priorities, some of them really urgent, primarily control over the development and use of robotized machines and drones.

With AI being integrated into human lives ever more deeply, it is important to learn from the lessons previously taught by the Internet and ICTs and to take preliminary measures that allow people to always remain humans and decision-makers.