
1 Introduction

Mankind is an intelligent species whose individual and communal progress has largely been an interdependent effort [27]. Over the years, this interdependence has been strengthened and elevated by technological advances. Technology's bounty is evident in the steady progress embodied by the smartphones in our hands. The fourth industrial revolution has brought new dimensions to the evolution of technology, spanning the biological, digital, and physical spheres and pushing open new frontiers in many industries with technologies such as blockchain, 3D printing, the internet of things, and much more. Artificial intelligence in particular "has been described as the fourth industrial revolution" itself [8]. This evolution has been triggered by the advancement of machines that can now intuitively develop knowledge going beyond simple if-then steps, opening new arenas that push the boundaries of jobs, economies, customer satisfaction, predictions, medical and climate simulations, and so on.

Artificial intelligence (AI) has become an exciting area of technological development with the promise of changing the world as we know it. Although the majority of the population's understanding of AI comes mostly from movies, much of it negative, Bill Gates has stated that it is "promising and dangerous" [7].

With the UN Sustainable Development Goals focusing on quality education, building resilient infrastructure, promoting sustainable industries, reducing poverty and hunger, and encouraging gender equality, good health, responsible consumption, and more, artificial intelligence is playing a crucial role in shaping industries and lives in ways not imagined or comprehended before [33]. From increasing productivity to providing green solutions, and from smart living to increased accessibility and inclusivity, automation and artificial intelligence have the potential to do tremendous good for the world [16]. However, they also have the potential for negative impact, enlarging the already existing digital divide, increasing the carbon footprint, and so on, which can backfire on the very goals AI can help achieve, depending on geographical, political, and economic standing [35]. More specifically, in a report by top AI experts, "speech synthesis for impersonation," "analysis of human behaviors, moods and beliefs for manipulation," use of "thousand micro-drones" as weapons, and attacks using and controlling autonomous cars "causing them to crash" are just some of the possible uses of AI that make the future threatening [6].

For this reason, in order to prepare future leaders, policy makers, programmers, and developers to develop and use AI in a manner that benefits the greater community and helps achieve the United Nations Sustainable Development Goals and, by extension, the UN 2030 Agenda, it is imperative not only to teach students about the technology and prepare them to develop future systems but also to help them understand the implications, repercussions, and responsibility with which such technology should be developed and used.

Hence, studies on the ethical use of artificial intelligence are emerging as more educators identify the need to instill values in the current generation to protect future digital leaders [29]. This is the premise for this study, which aims to record student perceptions of artificial intelligence and automation during a capstone lecture of an IT ethics course at a tertiary education institution over a 10-year period, in order to understand how, if at all, students view artificial intelligence, how their perceptions may have changed, and how they are influenced by the ethical discussions held during the lesson.

2 So Why Cyber Ethics?

Every year there are numerous small and large cyber-attacks in every country. In 2018 alone, identity theft affected about 60 million people in the United States [31]. This led to a commitment of $15 billion for cyber security in the fiscal year 2019, which is still only part of the entire cyber budget [31]. In 2018, a whistleblower named Jack Poulson, a former Google employee, created a non-profit initiative to question Google's plan to build a censored AI-driven search engine for the Chinese market. Although the stated purpose of the project was to build more natural search results for users and re-enter the Chinese market, it clearly was a customized Google search engine tailored to the Chinese government's needs for surveillance and censorship, about which Google's own Sundar Pichai had told his employees to argue that this was an "exploratory project" [11].

If this is the case for the world's best search engine, the world's most famous social media platform is no exception to cyber breaches and controversies. Without proper consent, Facebook sold the data of 50 million users (later verified to be 90 million) to a political consultancy called Cambridge Analytica and to third parties ahead of the 2016 US Presidential Campaign [30]. The data collection started in the form of a survey run by Cambridge University researcher Aleksandr Kogan, who paid participants a small fee for taking it. When the survey app was installed, it shared the information of the app's firsthand users and of their friends without consent. The Cambridge Analytica scandal is a significant one in cyber history, highlighting the plight of naive digital users who assume that Facebook is "safe" for all. As dreadful as it gets, the damage done by this data malpractice does not stop at the Presidential elections. To date, nobody knows how the sold dataset has been used by all the companies that purchased it, and it is suspected to have traveled beyond borders – possibly into foreign hands [30]. Later in the year, the Wall Street Journal reported that Facebook also admitted to having been hacked by non-political "criminal spammers" who gained access to the sensitive data of 29 million users and their linked accounts [18]. This insensitive and irresponsible attitude earned Facebook the title "Digital Gangsters" from the UK Parliament, which accused the company of an unethical attitude and of performing thoughtless activities "intentionally and knowingly" [24]. Sadly, it is these corporations and conglomerates that currently run the tech show for the entire world, and our future iGens are growing up alongside them.

3 Cyber Ethics for iGens

iGens are children typically born after 1995 who grew up along with the internet – a generation that does not know a world without it. This generation is also called Generation Z, as they succeed Generation Y, who are also tech-savvy but not to the same degree as the iGens [19]. This tech-savvy generation does more than just share information online – it produces, shares, and governs the information available over cyberspace, the space where all this information is stored and shared electronically. Ethical issues result from unwarranted and unverified sharing in that space. Advancements in the usage and sharing of information are the primary reasons why cyber ethics should be considered an important aspect of today's education. While traditional ethics are based on beliefs, norms, and goals that can differ across ethical principles, cyber ethics governs the responsible use of cyberspace through its procedures, values, and practices [32]. Since the internet age is relatively young in ethical experience, users of the internet need to be taught how to use it ethically.

Often, digital natives find themselves in an ethical dilemma despite instruction in the proper use of Information and Communication Technology (ICT). Robert Kruger highlighted that 8 out of 10 students in the United States with internet access in school download unauthorized and unlawful material simply because they are "unaware" that it is wrong [14]. Most of this happens without understanding the serious consequences of such acts. If students were taught the ethics of shoplifting alongside its consequences, they would think twice before acting on a thought. However, digital natives are only sprinkled with technical terms such as software piracy, plagiarism, and digital property rights, without these being categorized as digital "stealing." What happens if this seemingly "unaware" generation grows up blind to the numerous attacks and security breaches that happen worldwide each day [14]?

So the fact remains – education is the foundation for individual progress and a crucial contribution toward society. Studies have shown that education is the key to bridging the gap between classroom and workplace [13]. Cyber-attacks can spark from anywhere and for many reasons – a corrupt employee, unhealthy business competition, criminal groups, or political conspiracy. Technical measures alone, however, do not make for an effective defense. A good cyber defense strategy includes appropriate education about risks and awareness of the cyber world [27]. This gives rise to the need for education in technological advancements, for which the associated moral education is imperative. To make proper sense of the different options available over the internet, parents and children are advised to classify and conceptualize those options with real examples to determine right from wrong. They must not assume safety because of filtering systems and firewalls – rather, education about limits, dos and don'ts, and their consequences is known to work better. Illustrating the actual harm that can be done by plagiarizing or spreading online hate can reduce malicious behavior that wastes people's energy and time [34]. For this reason, Sobiesk et al. proposed a mandatory multi-disciplinary approach to educating netizens so they gain a broader understanding of emerging cyber crises and opportunities.
This era of a digitally divided society demands the facilitation of a cyberspace that is currently clouded with ethical issues "socio-cultural and academic" in nature [21].

4 Artificial Intelligence and Cyber Ethics

Artificial intelligence is, undoubtedly, one of man's most intelligent inventions. Artificially intelligent systems are programmed to follow a series of algorithms, which is why some consider AI systems to be unbiased, fair, ethical, or emotionally neutral. Yet the devices that display intelligence are programmed by intelligent humans who may choose to follow an unethical course, or the machines may learn our flawed and biased behavior by observing it. That is at the heart of research and development into programming moral agency into new technologies so they can manage real-life situations and make decisions ethically and fairly [1]. Frameworks that develop autonomous reasoning and thinking inherit the values of human society based on sociocultural norms. Thus, the field of AI involves weighty matters for consideration such as responsible reasoning, data stewardship, and transparency [9]. Apart from the practical function of any technology, its ethical functioning is crucial to attaining a fair and reasonable state of affairs. For example, an ATM must be ethically tuned to count the right number of bills to ensure fair transactions. Such implicit ethics stand at the core of computer software development, and their absence would rule out the use of computers altogether. An AI agent, however, might require a more explicit representation of ethical decision-making processes; a computerized game of chess that can judge and calculate the next move on the board is an example of explicit ethics [20]. Even though explicit ethics in machines sounds elusive today, our debates are already moving toward whether we should develop lethal autonomous weapons [27].

According to Vinuesa et al. [33], artificial intelligence can play a significant role in achieving the United Nations Sustainable Development Goals (UN SDGs). In fact, the study posits that AI can "enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets" [33]. From "using satellite imagery to start to track how reforestation is progressing; to track(ing) livestock as a means of predicting conflict… [to tracking] smarter traffic signals to reduce congestion and pollution," McConaghy [16] has recorded ways AI can help achieve the goals.

This duality of the technology makes it crucial to ensure that the next generation of tech-savvy graduates joining the workforce is in fact versed in the ethics and moral principles that should govern the development and use of artificial intelligence.

Therefore, this study's objective is to capture whether student understanding of artificial intelligence and its ethical impact has changed over the years, and whether teaching about it as part of an ethics course has any real impact on how students perceive artificial intelligence in today's world.

5 Methodology

To conduct this project, one lecturer's experience teaching a cyber ethics subject at an offshore campus of a Western university was captured. The subject was initially part of the core degree for both business and IT, then became core for engineering and IT only, and then an elective for others. Currently, the subject is a final-year elective for IT and engineering students.

Students taking the subject range in age from about 18 to 22 years. The range is attributed to the fact that the subject was originally a second-year subject, with students toward the younger end of the spectrum until 2014/15, moving to the higher end as more near-graduating students took the course.

Demographically, students enrolling in the subject came from various ethnic, cultural, and curricular backgrounds.

The subject covered wide and diverse topics to address the following learning objectives:

  • Identify the privacy, legal, and security issues related to the introduction of information and communication technologies

  • Explain solutions to security and privacy problems arising from the introduction of technology

  • Evaluate the impact of information technologies through the application of ethical frameworks

  • Explain the role of professional ethics codes of conduct

  • Demonstrate the understanding of the need for social computing and ethics in cyber space

The subject typically has three to four case deliverables, one group research project, an individual reflective essay, and reflective journaling. The key focus for this study is the class discussions during lectures and the reflective journal posts.

6 Sample Size

Students were always encouraged to participate in the journaling experience, which also served as an indicator of class attendance and participation in discussions during lecture sessions. However, the retention rate of responses and engagement was approximately 75%. Ohme et al. [22] have posited that an acceptable response rate in a study is about 66%; while this refers to research studies, student engagement in posting journal entries is counted here as the response studied for this project, and in this respect, 75% is a good response rate. Given this understanding, the total number of responses recorded was n = 1503 from 2009 to 2018 (Table 1).

Table 1 Distribution of responses across study period 2009–2018

6.1 Setting Up the Journal

Self-Reflective Journaling (SRJ) was first introduced as a written submission and later (from 2014) moved online using a learning management system; after each lecture, students were expected to post their thoughts on the concepts covered and comment on the discussions that took place in class. This primarily worked as revision and increased class attendance. SRJ helped record what students thought of the topic discussed in class. It also helped students who were shy or introverted to voice their thoughts and put forward their views. Students had the option to share their views either with everyone in class or only with the lecturer. Students independently researched the topic, uploaded videos, brought in articles for discussion with others, and so on. Overall, it encouraged active student participation, critical thought and application, and reflective writing [12].

SRJ was set up to track student understanding of ethics and how it grew or changed over the semester. Students were expected to post at least one entry every week from Week 4 onward, after every lecture. Although early entries mostly included a summary of the lecture or remarks on how good it was, students soon understood that this wasn't adding value. They began to think about the topic and then started writing posts that critically analyzed it [13].

6.2 Lecture on AI

The lecture on artificial intelligence was considered the capstone lecture for the subject and took place in the last lecture of the semester. By this time, students had already attended lectures on topics such as networking, social media, computer reliability, privacy, security, intellectual property, and professional codes of conduct. Students had also had lessons on ethical theories such as Kantianism, Utilitarianism, Locke's Theory of Property, Social Contract, and others, providing frameworks that students can relate to for decision-making [23].

The teaching revolves around creating "times for telling" [28] by first introducing the topics with guided discovery activities in which the students work in collaborative groups, progressively arguing their ideas and opinions by drawing on their formal and informal knowledge and experiences. The lecturer then creates an "analogous relationship" [10] between stakeholders through scenarios, videos, and questions and gets students to discuss the possible answers. Next, the lecturer draws on the "both sides toward the middle" technique, involving students in the extremes of a continuum of an ethical dilemma. Because all students can see that each end is an extreme situation, they become interested in finding out what they and their peers think and how they would resolve the dilemma in the gray areas. The lecturer then "tests the limits" [10] of the students' understanding of concepts by changing the facts of a case until the decisions begin to differ, allowing students to "see" the influences and reasoning behind their own decisions. Lastly, the lecturer applies the "writing and policy and reversing… role" technique, where students write out actual policies they would want to implement based on the case discussed and are then asked, in Kantian fashion [17], to reverse their role and see whether they could live under such a rule.

By the last lecture, students are used to this model of teaching, and when artificial intelligence theories and cases are introduced, they get into the discussion, often splitting into four segments as shown in Fig. 1.

Fig. 1 Class discussion segments (CDS)

7 Results

Entries were analyzed using qualitative coding. Of interest for this study were the posts on the last capstone chapter, which focused on Artificial Intelligence and Robotics. Below are some sample posts:

7.1 Sample Posts (Figs. 2, 3, 4, 5, and 6)

Students' attitudes during lectures were captured based on where they stood during the CDS activity in class for questions such as "do you think artificially intelligent robots should have human characteristics," "do you think AIR can or should be AMA (artificial moral agents)," "do you think AIR should be weaponized," "do you think AIR should have the capacity to decide on collision decisions," and so on (Fig. 7).

Fig. 2 Sample post 1

Fig. 3 Sample post 2

Fig. 4 Sample post 3

Fig. 5 Sample post 4

Fig. 6 Sample post 5

Fig. 7 Student choice on CDS

Using the circular segmenting of lecture discussions shown in Fig. 1, the answers were distributed across the years as illustrated in Table 2 and Fig. 8.

Table 2 Response rate of student decisions in percentage
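As a concrete illustration of this aggregation step, the following is a minimal sketch of how recorded CDS stances could be turned into the per-year percentages of the kind reported in Table 2. The four segment labels and the sample records are assumptions for illustration only; the study's actual coding scheme and data are not reproduced here.

```python
# Minimal sketch: aggregate (year, segment) stance records into per-year
# percentages. Segment labels and sample data below are assumed, not the
# study's actual coding scheme.
from collections import Counter, defaultdict

SEGMENTS = ["agree", "somewhat agree", "somewhat disagree", "disagree"]  # assumed labels

def segment_percentages(records):
    """records: iterable of (year, segment) pairs, one per student stance."""
    by_year = defaultdict(Counter)
    for year, segment in records:
        by_year[year][segment] += 1
    table = {}
    for year, counts in sorted(by_year.items()):
        total = sum(counts.values())
        table[year] = {s: round(100 * counts[s] / total, 1) for s in SEGMENTS}
    return table

# Hypothetical usage with made-up stances:
sample = [(2009, "disagree"), (2009, "somewhat disagree"),
          (2018, "agree"), (2018, "agree")]
for year, row in segment_percentages(sample).items():
    print(year, row)
```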
Fig. 8 Word count for the most used words or strings of words in online journals from 2009 to 2013 and from 2014 to 2018

Another dimension of the results was based on the class discussions from the capstone lecture session and the journal posts, and these results were quite fascinating.

Toward the beginning, more students seemed to think that the concept of artificial intelligence and automation was "far-fetched," "far from reality," and "to be used with responsibility." This is reflected in their choice of segment during the class activity, where the majority seemed to disagree with the questions posed on the immersion of artificial intelligence and moral decisions.

In later years, more students found it "fun," "very real," "already here," "cool," "no big deal," "great progress," "quick and effective in different areas," something that "reduces human error," and something that "can be weaponized." This is also reflected in the choice of segment, where significantly more students stood on "agree" – that is, responses became more positive than negative.

Furthermore, using Bletzer's [3] method of visualizing qualitative data through word clouds, feedback from students was segmented into 2009–2013 and 2014–2018. A word cloud is "a visual representation of a collection of text documents that uses various font sizes, colors, and spaces to arrange and depict significant words" [5].

As the study's purpose was to capture student feedback, the qualitative data were extracted from the journal entries for the topic of AI and Automation. The data were then organized in alphabetical order. Many students used similar terms, so this exercise helped identify the top 20 words or strings of words describing how they felt about automation and ethics. Based on frequency and student emphasis, the words were then ranked from 20 down to 1 (see Table 3). A free online builder, WordClouds.com, was used, with the extracted and cleaned data uploaded as a .csv file. Maramba et al. [15] posited that web-based text processing tools such as word cloud sites are useful in providing a "quick evaluation of unstructured…feedback," and such tools were used to generate the word clouds shown in Fig. 8.

Table 3 CSV list of words or strings of words
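For readers who wish to replicate this step, the following is a minimal sketch of the frequency-counting and .csv export described above, assuming the cleaned journal entries are available as plain-text files. The directory name, the term list, and the output file name are hypothetical placeholders, not the study's actual data or exact tool chain.

```python
# Minimal sketch: count occurrences of candidate words/strings of words across
# journal entries and export the top 20 as a CSV for a word-cloud builder.
# Paths and terms below are illustrative assumptions, not the study's data.
import csv
import re
from collections import Counter
from pathlib import Path

def top_terms(entries, terms, n=20):
    """Return the n most frequent terms (words or strings of words)
    across all journal entry texts, matched case-insensitively."""
    counts = Counter()
    for text in entries:
        lowered = text.lower()
        for term in terms:
            counts[term] += len(re.findall(re.escape(term.lower()), lowered))
    return counts.most_common(n)

# Hypothetical usage: one .txt file per student journal entry.
entries = [p.read_text(encoding="utf-8")
           for p in Path("journal_entries_2014_2018").glob("*.txt")]
terms = ["very real", "already here", "reduces human error", "far-fetched"]

with open("wordcloud_input.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["weight", "term"])  # frequency first, then the term itself
    for term, count in top_terms(entries, terms):
        writer.writerow([count, term])
```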

8 Discussion and Lessons Learned

The findings are very interesting when looking at how successive generations of students' attitudes toward artificial intelligence and artificially intelligent beings have changed over the years. Three significant results are noteworthy:

  1. Over the years, students' perception of artificial intelligence has changed to become significantly positive.

  2. Students do not fully comprehend all the implications of artificial intelligence.

  3. Irrespective of how they view the technological advancement, actively engaging in discussions on moral principles and ethical dilemmas helps students discuss and debate the necessity for transparency and predictability in decision-making in both the development and use of artificial intelligence.

When looking at students' attitudes through their discussions in class and journal posts, one obvious reason for the difference between 2009–2013 and 2014 onward may be the speed at which technology in general has advanced in the last decade, and specifically the degree to which AI has immersed itself in daily life in the form of Siri, Alexa, and so on. Millennials grew up as technology and information took over; in contrast, iGens were born into technology, growing up depending on technological devices and decision-making daily [2]. Moreover, the significant change over the years could also be attributed to millennials graduating and iGens coming into classrooms. While millennials were and remain slightly skeptical of artificial intelligence and slow to adopt it, iGens are more fluent, easily adapting to changing technological advances, finding the features and advancements fascinating, and being used to interacting with artificially intelligent devices and apps [35].

It is also important to observe that while students in the early phase of the study were more skeptical of AI and its potential and readily agreed that caution and responsibility were required, students in the latter phase were enthusiastic about the technology. Yet when discussion ensued, they were more intrigued and wanted deeper understanding and discussion of issues such as the "citizenship of machines such as Sophia" and "not having human-like features or skin because that made them feel uncomfortable," particularly as they engaged in ethical debate in class and then posted their journal entries online, highlighting the impact of the ethics class [12].

These are important findings, giving educators and researchers an understanding of how fast and varied the generations' views of AI advances are, how little they understand of the real possibilities of AI, and how their views may be shaped.

The importance and effectiveness of teaching AI ethics to students can also be seen in the findings of this study. Upon following the discussions and engaging in the lecture activities, students, whether millennials or iGens, do seem to respond to the theories and concepts, which nudge them to look at all perspectives – not simply to accept AI but to question our dependence on it and decide where and how to be responsible with decision-making.

9 Conclusion

This study focuses on AI and the rise of AI advancements with the advent of the fourth industrial revolution. The primary reasons for focusing on AI are its potential to benefit society, the economy, and the environment, the rate at which it is becoming popular, and the constant demand and need to create more "thinking machines" [4]. Introducing future developers, innovators, and users of AI to ethical guidelines and moral values for how to build and use these systems may be key to better awareness and the ability to answer tough questions about obligations and responsibilities.

This study has recorded how, from one generation to the next, students' perceptions of the safety of a society immersed in artificially intelligent devices have changed over the years. The change is significantly positive and pro technological advancement. Early in the study, students did not think that technological advancements in automation and artificial intelligence would be possible but thought it worthwhile to be careful. In later years, students were excited by what they saw and heard about how far technological advancements had come, with classic examples discussed in class such as Boston Dynamics robots jumping, robotic dogs running, and so on. They voiced not necessarily being worried about the ethics of such developments because they felt the responsibility comes with the development, so we "just" must be careful. Upon further discussion and debate using ethical frameworks such as Kantian goodwill and intention or Utilitarian consequences, they accepted that this responsibility must be developed and understood, not taken for granted or left implicit.

While this may be an expected outcome given how technologically immersed the new generations are, a significant finding of this study is that although students did seem more pro-artificial intelligence, more pro-autonomous devices, and more in favor of giving such devices choices, in their journal posts students ultimately voiced the concerns discussed in class, showing a possible shift in their perceptions and questioning the moral position of artificial intelligence. This highlights the importance of teaching ethics courses to students in engineering and IT. It also highlights a significant concern: students may not have fully grasped the implications of the advances being made or envisioned for the future.

The findings of this study are significant to academics, researchers, program coordinators, tertiary education management, and policy makers because they are a step toward establishing the urgency and need to take ethics in AI seriously. In particular, the study shows how students possibly do not understand the full extent of AI's potential to do harm, or the matter of bias in developing or using AI in ways that may be injurious to the economy, politics, or even society, underscoring the seriousness and urgency of running stand-alone ethics courses for computing and engineering students.

This study has tremendous future scope. The next stage of the research will examine the learning objectives, content, ethical frameworks, and assessments in order to propose a masterclass that academics can use, after developing an in-depth understanding of student perceptions of the ethics of artificial intelligence using quantitative methods.