1 Introduction

At the heart of all technology is the intention to make human life easier in some way. Technology speeds up processes to make them more efficient (and often more effective) and relieves people’s physical and mental burdens. It can enable the faster and more accurate delivery of completed tasks and give individuals the opportunity to accomplish actions that would never have been possible otherwise. For instance, steam engines not only ran according to timetables and time zones; they also enabled people to carry loads on time from one place to another (see, e.g., Zerubavel 1982). Similarly, combustion engines rendered air travel possible (Hiereth and Prenninger 2007), which exponentially increased the pace of mobility and communication. These forms of technology changed human life permanently and dramatically. Humans learnt to live in radically different ways. This same rule may indeed apply to the COVID-19 era.

From axes to software applications, technical artefacts are tools people utilise to pursue their objectives—life goals as well as mundane everyday tasks. Yet, goals and actions have ethical dimensions and consequences. The intention behind the action is the definitive factor that determines whether the goal is ‘good or evil’ (Ferrarello 2015; Shotter 1995). In other words, do the goals correspond to ethics and codes of moral conduct—i.e. how people have learnt to behave in the correct way according to cultural principles? An additional consideration is whether, by behaving in a certain way, the actor is treating others how they would want to be treated (Gensler 2013). Or is the intention behind the goal and associated actions to generate gain at the expense of others? Actions associated with these goals may be deemed necessary, allowed or forbidden (von Wright 1963). Yet, the relationship between these dimensions may be (and often is) extremely complex. The concepts and rules of ethics are intimately connected to the use of technical artefacts. Thus, at any stage of its lifespan, technology should be considered in terms of its ethical aspects. Mario Bunge coined the term ‘technoethics’ in 1974 to emphasise the ethical responsibilities of technologists (Bunge 1977). It is increasingly used to describe the concurrent relationship between technology and ethics and how they exist in relation to moral codes.

Questions of ethics have been debated for centuries. Ethics by nature trigger emotions, both in relation to the topic itself and the framework it provides for evaluating phenomena. Moreover, ethics incite emotional reactions within people regarding their own actions and the consequences of these actions. Hume (1751/1998), like many other British, American and Commonwealth philosophers since, directly linked ethical actions to human emotions (Ayer 1936; Hume 1751/1998; Moore 1991; Stevenson 1944). As Hume (1888, 457) explained, ‘Morals excite passions, and produce or prevent actions. Reason itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason’.

Hume’s position connects ethics with emotions through the control of actions. The act of understanding ethics through emotions is referred to as ‘emotivism’ (Malik 2014; van Roojen 2018). According to emotivism, ethical propositions—and thoughts—are manifested and expressed through emotional states rather than cognitively produced and explicit facts (Ayer 1936; Hume 1751/1998; Stevenson 1944). Therefore, emotions and emotional states seem to play a crucial role in defining what is ethical in the human mind. If the consequences of actions lead to states that are emotionally negative or potentially destructive and harmful, it is often assumed that these actions should not be undertaken. If an individual cognitively processes (thinks about) these cause–effect relationships, they may decide not to take such actions. Performing such actions regardless of the consequences causes people to enter a state of cognitive dissonance (Stone and Cooper 2001). This also can be classified as moral dissonance—when an individual acts against his or her moral codes and beliefs (Breslavs 2013), inducing high levels of negative arousal (intensely experienced and unsettling negative emotions). The dynamics within this state are highly complex and produce an understanding of emotions that is both multidimensional and conflicting.

For instance, an educated and aware individual may know that the development of artificial intelligence (AI) systems is based on exploiting vast amounts of data collected from people without their knowledge. Whether individuals have given their consent through pop-up messages that interrupt their initial web page viewing, without carefully reading the privacy disclosures or understanding the nature of ‘cookies’, or whether the data utilised have been obtained entirely without individuals’ consent, substantial amounts of data (big data) are needed to ‘feed’ machine learning. In this example, the educated individual in question comprehends that, from one perspective, AI developers are in fact stealing data from other individuals in order to enable the system to learn and operate. Thus, a negative emotional tone is set regarding the ethical correctness of the technology. However, the AI system is convenient, efficient and enables the individual to perform complex actions that they would not otherwise be able to do. This helps them to fulfil their career and life goals. The dissonance stems from the fact that the individual is not entirely emotionally positive (happy) with the ethics behind the technology’s development, but they are indeed satisfied with how it works as an enabler. In sum, the essence of emotivism is the association of ethics and moral codes with the human emotional system and its properties.

Technology, human actions, culture, ethics and emotions are all tightly interwoven (Chen et al. 2020; Hume 1751/1998). Where cultural products—including technology—are concerned, ethics always provide a framework, whether explicit or implicit, for the complex emotional reactions that individuals experience in relation to technology design and its consequences. Thus, it is relevant to discuss the relationship between emotions and technoethics, as within the context of technology design experience (from the perspective of both the design team and users), the two cannot be neatly separated. This chapter focuses on the conceptual foundations of studying emotions and technoethical synergies. It emphasises that ethics can be understood as one of the main links between emotions and technology. As the above example illustrates, along with the nature and rapid pace of the technological development of today’s world (and arguably throughout human history), ethical values have evolved.

Emerging AI systems dominate the current cultural landscape. Robots, chatbots, other bots, agents, AI in social media (e.g. Facebook’s Sophia) and autonomous cars will change the way people live. Transportation, industry, banking, culture, administration, medical care and learning are already experiencing dramatic shifts (Tegmark 2017). These shifts involve processes and operational models as much as they entail the rupturing and reformation of human–technology and human–human relationships. In an instant, the COVID-19 crisis radically altered human–human interactions. Learning, meetings, business, clinical and diagnostic sessions (health and mental care and other therapy/life exchanges), cultural consumption (concerts, theatre, art, etc.) and research suddenly moved from face-to-face events to internet-mediated transactions. When it was possible to meet in person, many chose to meet other individuals in an unmediated fashion due to their own moral stance—what they considered as being ethically correct interpersonal behaviour. Traditional academic discourse and public sentiment have frequently raised concerns about technology (computers, robots and AI) compensating for genuine human-to-human contact (see, e.g., Barnes 1996; Cerulo 2009; MacDorman et al. 2009; Turkle 2007). Yet, during an epidemic, there is a simultaneous need to maintain distance while also remaining connected. Thus, many have embraced information technology for its ability to uphold social life and real-time interactions—even those who previously found it ethically challenging.

Every technological innovation generates effects that can be experienced to varying degrees across society. For instance, the development of agriculture has had fundamental repercussions across all areas of human life—technological, behavioural, societal, psychological, etc. (Bernal 1969; Hendrick 2009)—which have inevitably shaped current human societies and cultures. Another example is gunpowder, which can be understood as the bridge between the medieval and modern eras of human societies (Britannica 2020). Its invention was as much about how it was created as it was about what it did. The development of gunpowder marked the first true application of theory (reason and rationality) to empirical experimentation in harnessing and exploiting energy. This, in turn, laid the foundations for scientific problem-solving and development through equations and theoretical modelling before it was applied in real-life settings. Chain reactions from this discovery can be seen across society, from scientific and educational institution formation and curricula development to implementation of the technological breakthroughs that gunpowder-related discoveries enabled (e.g. the ballistic pendulum for velocity measurement). Yet in this case, the ethical basis and conflict—dissonance—between moral principles (taking another person’s life) and technosocietal advancement constitute an uneasy relationship between emotions and technoethical systems.

2 Ethical Neutrality of Technical Artefacts

Without knowledge of concrete uses, cultural framing or understanding of the design intention, technical artefacts and objects themselves are ethically neutral. When encountering a device or technical artefact, without background knowledge, it is generally impossible to say whether it is good or evil (i.e. if it will be used for benefit or detriment). Technology is culturally constructed and exists as an expression of cultural values, beliefs and social ways of being and doing (societal systems). Underlying its development, there is always intention and intentionality—a knowingness and goal-related state of being that direct human actions and shape human-generated products. However, there is no guarantee that even technical artefacts created with the best of intentions will not be used in unethical ways to reach unethical goals.

The relationship between what is understood to be either good or evil is multi-faceted and complex and depends on a range of factors: (a) who is creating, (b) what is being created, (c) why it is being created, (d) for whom and (e) how others who are not involved in the development will receive the results of the production. In short, the technoethical relationship between a person and the design depends entirely on the relationship between the producer(s) (the commissioner(s) and the creator(s)) and the receiver(s)—customers, users, citizens, allies and opponents (Hodgson 1983). If an individual has no prior knowledge of an object when it is encountered for the first time, it appears to be simply a mass and form of materials (Ramsey 2016), even if the technology has been constructed or composed intentionally (Borgmann 2012; Cooley 1995; Jonas 1982). Intentionality still rests within the minds of humans within human–technology relationships (Cardon 2018; Devillers 2020; Haikonen 2020). The technology itself is incapable of intentionality or consciousness. Therefore, the human–technology relationship defines technoethical ambiguity.

This ambiguity between an artefact and its uses can be illustrated using the example of drones. A drone can be used for a range of humanitarian acts such as transporting food and medicines as well as aiding the performance of actions in inhumane circumstances such as warfare and unsolicited surveillance. Thus, drones are also notorious for their invasive qualities; governments worldwide have established laws forbidding their flight within certain distances of private houses due to their potential use for spying and stalking and taking unauthorised photographs. Drones are also associated with drugs and weapon smuggling as well as unfair, unethical and unprecedented warfare activity against civilians in various countries. This mixture of humane and inhumane intentions, and the use and actions associated with drone technology, renders emotional sentiment towards them conflicting and controversial. Yet a drone cannot be rendered either good or evil in its own right. Only when it is attached to human intention, behaviour and cultural framing—the discourse surrounding the technology (in the media, politics, news, entertainment, etc.)—does it enter a conceptual dialogue with the individual and group’s ethical basis.

The term ‘technology’ has multiple meanings and connotations. It refers to the practical application of theoretical knowledge (Merriam-Webster 2020); its history was discussed above in relation to the development of gunpowder. From a historical perspective (derived from the Greek word techne), technology is anything human made; it is closely connected to understandings of tools and other artefacts humans use to shape their environment (Derry and Williams 1960). What is now understood as art played a major role in the formation of history, collective memory and the infusion of human imaginings into the surrounding physical environment. Ethical frameworks and networks of trust are established through action and speech or speech acts (McNair and Paretti 2010; Jarvenpaa et al. 2004). The establishment or disbandment of trust and ethical evaluation processes may be triggered by something as subtle as an utterance—e.g. ‘Be careful, online games can be dangerous’—a locutionary act (Austin 1975) or framed by a more conventional political speech act.

While Bauhaus and Volkswagen have attained relatively widespread acceptance on international markets, these movements and brands were associated with the Nazi regime through Adolf Hitler’s political rhetoric in Mein Kampf (McGuire 1977; O’Shaughnessy 2009). Without taking the cultural–political context into account, Bauhaus design may be understood from a functional perspective as furniture. Volkswagen could likewise be understood simply as a car manufacturer—on a basic level, producing vehicles for transportation. Yet, through deeper associative thinking and knowledge of political world history, the technology and design produced by both brands can be connected to Nazism and crimes against humanity, disrupting the moral grounding for the consumption of their products. The design objects are no longer simply objects but tools of Nazi propaganda and political rhetoric connected to the Holocaust. From a functional perspective, cars can be understood as enabling faster, broader travel opportunities than horses and trains. While seemingly purely functional, from another ethical perspective, Volkswagen vehicles and automobiles in general may be seen as speeding the pace of human communication and travel for the purpose of exponentially increasing productivity. Thus, an ethical evaluation of cars may explore ‘Who really benefits from the increase in human speed?’ There is increased productivity and increased consumption, all at the price of the increased risk of serious accidents and potential fatalities.

Thus, on the surface, objects may appear to be technoethically neutral. Yet, the act of creating (designing and producing), the act of using and discursive (speech) acts surrounding the technology design always ricochet continuously across a spectrum of ethical alternatives. These in turn affect the ways in which people emotionally experience technologies and their uses. Ethical issues that are relevant in acting with technical artefacts should always be considered in the context of people and how they use technology in their physical actions and speech. AI is no exception. Its ethical value is based on the ways in which people design, implement and utilise AI capabilities. This means that in order to consider emotional weighting and experience during the design and development process, design teams should chart and account for both possible uses associated with intention—pro-humane versus inhumane (for others’ benefit or to others’ detriment)—and possible interpretations (i.e. connections to history, culture and socio-political discourse).

3 Actions, Rules and Ethics

Human actions are based on cognitive information processes. People are information-processing animals; thus, all aspects of human actions should be considered information processes and the results of human information processing (Newell and Simon 1972; Norman 1976; Rumelhart et al. 1972). This section describes technoethics in light of information processing and actions guided by information processes.

Individuals and groups are guided by ethical norms and principles. Ethical consciousness, or ethical and moral sensitivity, varies between people (Knapp et al. 2009). While every individual has a moral code as well as implicit and explicit sets of ethical principles, some people seek information and engage in ethical reflection more actively than others (Cohen et al. 2001). Ethically conscious people try to avoid engaging in situations and making decisions in which other people, or groups of people, are placed at a disadvantage. Thus, when adhering to ethical codes such as the Golden Rule—the Code of Hammurabi—individuals understand that they should treat others how they would like to be treated themselves. The phrase ‘an eye for an eye’ originated from this code. It is claimed that this code, and Hammurabi himself, are accounted for in the Hebrew Old Testament in the story of Moses (the Mosaic Code) in which God (Yahweh) inscribed a set of laws, the Ten Commandments, on two stone tablets. These laws are: (1) worship only one God, the genuine God and no other (interpreted in many ways, this can be seen to emphasise the importance of spiritual awareness and humanity and not to prioritise anything else); (2) do not disrespect God; (3) keep a day aside each week for God and spiritual health (rest); (4) respect one’s parents; (5) do not murder; (6) do not commit adultery; (7) do not steal; (8) do not lie; (9) do not lust after someone else’s partner and (10) do not be greedy or jealous (avoid ill intentions). Thus, in many ways, the Ten Commandments (the code) provide the foundations for modern-day law (Green 2000).

In both Western and non-Western societies, the premise of treating others how one would want to be treated is a benchmark for social–cultural being (civilisation) and ethics (United Nations 2020). Respect (Jing) is central to Confucianism (Chan 2006); tied to this are the elements of religion, moral rules and etiquette. The Confucian Golden Rule consists of two parts, chung and shu (Ivanhoe 1990). These are referred to as the ‘One Thread’ of the Analects of Confucius (a book of sayings) as they are tightly woven together (Ivanhoe 1990, p. 17), representing reversibility—the notion that actions taken on other people are subsequently returned to or reversed back upon the actor (Fung 1953). Chung represents the things that one should do for others, and shu represents what should not be done to others (Ivanhoe 1990). In Islam, it is stated that ‘None of you believes until he wishes for his brother what he wishes for himself’ (An-Nawawi’s Forty Hadith 13, cited in Islamic Network Group 2020).

All religions promote the Golden Rule in some way; thus all cultures have more or less the same basic underlying framework for ethics and moral principles that shapes people’s ways of relating to one another. Thus, ethics and the information attached to them—the knowingness or awareness of when people act towards others in a positive or negative (exploitative) way—are to varying degrees so embedded within the social–cultural being of humanity that they are intrinsically connected to emotions. This is particularly true for higher-order cognitive processing, in which deeper associative thinking—the information processing that is connected to evaluating, or appraising, cultural products (technology)—connects an individual to the reflective state of what the design, its creation, production, and consumption means for oneself and for others.

Yet, like any other area of emotional scholarship and human experience, the relationship between ethics and emotions is complex. Ethical information processes can be studied from various points of view. Ethical norms and changes in moral codes can be seen as a result of alterations in ethical and legal discourses. While laws and regulations have been influenced by the Golden Rule as discussed above, they are still cultural constructions in which people have actively created social and behavioural frameworks by which citizens must abide (Dror 1957). Thus, depending on legislative systems and governmental structures, measures may be taken to establish laws that somewhat go against basic human values as set forth in the Golden Rule, possibly creating major dissonance in the beginning, yet gradually smoothing out as the codes of behaviour become normalised and people grow accustomed to them. From a historical perspective, these types of laws can be seen in relation to Nazi Germany’s treatment of Jews and South Africa’s apartheid.

In his Critique of Practical Reason (1788/2004), Immanuel Kant described a rational approach to ethics. He argued that stealing, murdering, deceiving and adultery, for instance, would spell the end of civilisation. In order to bypass this form of reasoning, political entities have institutionalised a form of othering that exceptionalises other cultural and ethnic groups, rendering them outside the realm of human values and civilisation. The indigenous people of Australia, for example, were not recognised as having cultures of their own due to a lack of recognisable European political structures and other technological designs—i.e. architecture and post-enlightenment cultural and institutional establishments. This meant that they were not recognised as Australian citizens until 1948 (1944 in Western Australia) and were unable to vote in parliamentary elections until 1967 (Western Australian Museum 2017). Institutional and individual acts of racism (breaking the Golden Rule) were somewhat accepted, as indigenous people were not considered ‘neighbours’ or ‘siblings’, but rather as beings that existed outside of culture and civilisation—not adhering to the Golden Rule and resembling animals more than humans. To institutionalise and thus cultivate this understanding, indigenous Australian artefacts and remains were placed in the technology of museums (text, display, architecture, layout and institutional discourse) in Australia and throughout Europe—separating them from what was considered the humanised norm of European society (Giles 2006; Turnbull 2007).

This act of ethical value shifting can be seen across the board of institutionalised humanity and is one of the main mechanisms of war. Culture—art and technological design—has played instrumental roles in reframing the relationship between humans and ethical standards, as witnessed in the tight connections between Hitler, Bauhaus and Volkswagen as well as Filippo Marinetti and the Futurist Movement with Benito Mussolini. Fascist ideas shaped human life, behaviour and society at the political governance level and at the practical life-shaping technological design level. Modern understandings of the ways in which technology, business and public discourse also shift ethical values can be seen in the acceptance of apps such as Tinder, Ashley Madison, Nude and even Facebook—in which promiscuity, deception and adultery are simply part of the design and business model. Axiology (the study of the nature of values) is extremely useful in this context, as the shifting of ethical values and manipulation of the Golden Rule may be a sign of greater questions or challenges facing humankind.

Ethical values and corresponding actions may be considered from an empirical perspective, particularly in relation to the ways in which people’s behaviour changes according to the context. This is no modern insight; for example, people may behave radically against the Golden Rule and then seek absolution upon entering a designated site of spirituality (the confession box). Alternatively, in line with Kant (1788/2004) and his idea that people should act in the way they would hope all other rational people would act—just as if it were a universal law—ethics could be considered contextually optimal psychological and social processes. This helps explain why there are subtle variations in interpretations of the Golden Rule between cultures, as no two cultural groups have exactly the same conditions (Westermarck 2017).

Ethical information processes and their analyses represent a specific approach to the study of ethics, which can be supported by its importance in designing an optimal world according to ethical values. To use an illustration from elderly care, instead of simply representing external academic norms related to the right, or correct, kind of patient care, design teams can strive to understand how people are really cared for, for example, in units for senior citizens and what norms caretakers follow in their daily lives. This type of empirical ethics is intimately connected to the analysis of ethical processes but with an important difference. The former moves the focus from academic discussions to life as people live it, which leads to the tacit and explicit development of a society’s ethical framework and moral codes. The latter refers to the analysis of how norms are created, serving as an empirical model of meta-ethical processes in everyday life.

In the late 1800s and early 1900s, Westermarck (2017) studied the relativity of norms and values through empirical ethics. His analysis of ethical information processes takes a slightly different form: it concentrates on the process of creating the social norms and ethical values people follow in their daily lives. Drawing on the traditions of John Stuart Mill, Westermarck opposed the normalisation of ethics and any objective understanding of morals. Instead, his work focused on examining the contextualisation and situational relatedness of ethics and moral codes. That is, Westermarck argued that depending on the situation in which an individual found herself, her ethical stance should adjust according to what would be most useful. Thus, utilitarian ethics are guiding rules that maintain that individuals should act in a way that would instil the greatest amount of happiness for as many people as possible. As Mill (1863) stated:

Now the utilitarian standard is not the agent’s own greatest happiness, but the greatest amount of happiness altogether. It may be defined as the rules and precepts for human conduct by the observance of which happiness might be, to the greatest extent possible, secured to all mankind…

Both Westermarck and Mill go beyond utilitarianism to argue that ethics also form the moral code of behaviour that supports people’s desire to be connected to one another. Human beings harbour social feelings towards their fellow humans, which increases their personal interest in ensuring the well-being of others. To this end, the creation of values and how these are adopted and abided by as social processes are important in research on ethical information processes. To contextualise this ethical discussion within contemporary technology design, development and business, ethics can be seen here to be based on the analysis of real-life value creation processes, or ‘process ethics’, in order to distinguish this way of understanding ethics from more static and normative approaches.

Value creation is an important element of design thinking. Design is a value generation process. If researchers understand the value creation process in light of the ways in which ethical values exist, operate and morph, they can improve the progress of design by providing empirical information from various angles of the process. This shift from reflective to active involvement and influence is vital when designing ethical AI systems. Academic discussion is one example of a value generation process, but administrative, journalistic and law-making processes are equally important for the overall framework of how ethics and emotions influence one another in technology design. Moreover, the most important value generation process—the arena within which ethical guidelines and moral codes are discursively and socially formalised in an open society—is popular discourse and discussion between citizens.

4 A Glimpse at the Axiology of Technoethics

Ethics are social, cultural and psychological mechanisms that regulate how people should act. Knowledge of what is allowed or forbidden is expressed in the form of ethical principles and rules. Today, there are ethical requirements for AI and other emerging technologies. This knowledge opens an axiological or rule-based view to future technoethics. Ethical rules can be divided into two broad classes. The first is fundamental rules, which have been widely accepted over millennia, such as the Golden Rule. The second takes the form of ethical guidelines. These have recently emerged, for example, to clarify how people should treat one another in online spaces. The Association of Internet Researchers (2020), for instance, has published four sets of guidelines since 2002 in an attempt to prevent researchers from exploiting individuals or private data on the internet. The guidelines state that all internet users are responsible for treating all content providers as vulnerable individuals whose privacy should be maintained and intellectual output respected.

Axiology (value theory) is the study and structural theory of values (Bahm 1993; Hart 1971; Hartman 2011). Axiology can be applied to study the formation of ethical rules and has been used quite extensively in areas such as environmental ethics (Muraca 2011). It has its origins in Moore’s (1959) Principia Ethica. Findlay (1970) noted that axiology was treated as the ‘tail end of ethics’—values being created through ethics. He highlighted this as an irony, since axiology should be considered in light of the formation of ethics. Axiology was originally discussed in terms of the study of ‘ultimately worthwhile things’ (Findlay 1970, p. I). Yet, even according to this early definition, it is difficult, if not redundant, to argue about which of the two—ethics or values—is more important. This chapter has so far detailed that ethics are formed out of interest (value) in humanity and well-being for fellow humans; values can be formed through ethics, yet ethics also shape these values and the level of human sensitivity (ethical sensitivity and awareness) to them. Ethical rules represent and promote the maintenance of human values; they explicate how people should act or live in an ethically correct way. In Plato’s terms, such rules aspire to ensure maximum happiness (Frede 2017). In technoethics, ethical rules express how people should utilise technological capacity to support their quality of life. This means that from both design and development perspectives as well as the usage perspective, technology should be developed and used for the benefit, well-being and happiness of as many people as possible. This line of ethical thinking should guide people in their decisions and subsequent actions in any technologically related situation.

Typical examples of ethical principles can be found in the UN’s Universal Declaration of Human Rights:

A1: All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

A3: Everyone has the right to life, liberty and security of person.

A26.1: Everyone has the right to education. Education shall be free, at least in the elementary and fundamental stages. Elementary education shall be compulsory. Technical and professional education shall be made generally available and higher education shall be equally accessible to all on the basis of merit.

A26.2: Education shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms. It shall promote understanding, tolerance and friendship among all nations, racial or religious groups and shall further the activities of the United Nations for the maintenance of peace.

A26.3: Parents have a prior right to choose the kind of education that shall be given to their children.

These examples of human rights are very general, but they can be used to derive ethical principles for technologies. For example, as with the Golden Rule, the first article suggests that one should not use technology to harm other people—particularly their liberty, life or security. Article 26 encourages the use of technology for egalitarian purposes, affording equal access to education.

Examples of technoethical rules, which specify general ethical principles for technology and especially for emerging AI systems, may include the following:

E1: Individuals should be respected, and technological system solutions should not violate their dignity as human beings, or their rights, freedoms and cultural diversity.

E2: Individual freedom and choice. Users should have the ability to control, cope with and make personal decisions about how to live on a day-to-day basis, according to their own rules and preferences, rather than being forced into technologically oriented ways of doing and thinking.

E3: This rule concerns the role of people and the capability of AI systems (AIS) to answer for decisions and to identify errors or unexpected results. AIS should be designed so that their effects align with a plurality of fundamental human values and rights; this means taking into account diversity between groups and individuals.

Example 1 promotes human values through respect for individual freedom and diversity. This is similar to Example 2, in which people should additionally have the ability to control and make personal decisions regarding the technology they are affiliated with, in order to adopt, adapt and appropriate the designs to their own life circumstances. As is observed in Example 3, technoethical rules specify uses of advanced intelligent systems such as AI to assist in the maintenance of human values and rights. These examples lead to the development of an overall image of ethical human–technology relations that nurture human dignity. Human dignity should in turn provide the basis for determining the direction, ethical guidelines and governance of technology and its design (Zardiashvili and Fosch-Villaronga 2020). Ethics apply to all people; thus ethical principles should be applied to both the design and utilisation of technological systems. To more clearly understand the relationship between content associated with ethics and connections to emotional aspects of technology, the next section explores how these ethical rules are created in social actions.
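To make the link between such rules and design practice more concrete, the following minimal sketch (in Python) shows one way a design team might encode E1–E3 as a review checklist. It is an illustrative assumption rather than a prescribed instrument: the prompt questions, dataclass fields and the example system description are hypothetical and not taken from the rules above.

```python
# Illustrative sketch only: the prompt questions and the example system
# description are hypothetical, not part of the chapter's rule set.
from dataclasses import dataclass


@dataclass
class TechnoethicalRule:
    rule_id: str          # e.g. 'E1', 'E2', 'E3'
    principle: str        # short statement of the rule
    questions: list[str]  # prompts a design team answers during review


RULES = [
    TechnoethicalRule(
        "E1", "Respect for dignity, rights, freedoms and cultural diversity",
        ["Could the system violate a user's dignity or rights?",
         "Does the design account for cultural diversity?"]),
    TechnoethicalRule(
        "E2", "Individual freedom and choice",
        ["Can users control and make personal decisions about the system?",
         "Are users forced into technologically oriented ways of doing and thinking?"]),
    TechnoethicalRule(
        "E3", "Accountability and alignment with plural human values",
        ["Can the system answer for its decisions and surface errors?",
         "Whose values does the system's behaviour align with?"]),
]


def review(system_description: str) -> None:
    """Print a technoethical review checklist for a proposed design."""
    print(f"Technoethical review: {system_description}")
    for rule in RULES:
        print(f"  [{rule.rule_id}] {rule.principle}")
        for question in rule.questions:
            print(f"      - {question}")


if __name__ == "__main__":
    review("AI triage assistant for a public health service")
```

Used in this way, the rules become prompts that a design team answers and documents for each design iteration, rather than abstract principles consulted only after release.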

5 Technoethical Process

Hume (1751/1998) discussed the role of emotions in creating ethical rules. He argued that value does not simply emerge from facts. Reason operates via descriptive propositions that cannot control the emotions that are essential elements in prescriptive, ethical propositions. For instance, knowledge of the dangers of particular substances does not automatically lead to the idea that one should not drink or smoke. In other words, despite knowing the dangers of alcohol and tobacco, people do not necessarily have an automatic negative emotional reaction towards these substances. Instead, the alcohol and tobacco industries are alive and well due to the fact that the ‘dangerous’ element of knowledge about these substances is not prioritised. Other elements and associations—highly social—are connected with these such as partying, relaxing, socialising, etc. Thus, Hume’s guillotine, or the is–ought (to be) argument, is still an important ethical dilemma and one cannot say that it has been solved (Saariluoma 2020). Moreover, this dilemma may always be present as it represents the conflict between shared human values—human rights and respect for others—and vested interests in oneself disconnected from the common good (Black 1964; Kazavin 2020). To gain clarity on this issue, the cognitive–affective act of processing ethical information must be analysed.

Human experience, i.e. conscious mental representation in the human mind, forms a central component of human information processing and thinking (Chalmers 1996; Dennett 1993; Rousi 2013a). The information content of experiences and mental representations can be called mental content (Saariluoma 2003). Mental representations have cognitive and emotional dimensions (Rousi et al. 2010). Both play an important role in ethical information processing. From one perspective, Hume’s guillotine separates these: cognitive processing based on raw facts does not necessarily incite or become connected to emotional processing, yet other matters such as self-investment and/or social (socio-economic) benefit carry emotional weighting (value). Thus, the saying ‘do as I say and not as I do’ represents the conflict between cognition, action and emotion. Moreover, from an ethical perspective, it is also important to understand the role of emotional valence in mental content.

Ethically, an important type of mental content is emotional valence (Oatley 1992). Although emotions and their processing and experience are complex, most emotions can be divided into those with either positive or negative associations. However, the so-called negative basic emotions such as sadness (e.g. grief, death of a loved one) or anger (fury towards the offender) may be interpreted and even experienced in a positive way. For instance, sadness about a loss may be linked to happiness for a friendship or anger about being betrayed may be seen as a motivation to heal and move on. Thus, valence and its felt intensity (arousal, see, e.g., Kron et al. 2015 and Robinson et al. 2004) are crucial aspects of experienced emotions and their cognitive functions. Once again, these reactions are generated based on complex, dynamic processes that inevitably—consciously and subconsciously—inform an individual whether or not the evaluated (appraised) phenomenon in question is beneficial or detrimental to an individual’s personal well-being. This cognitive–affective process is referred to as appraisal (Folkman and Lazarus 1984; Frijda 1993; Lazarus and Smith 1988). Appraisal, and this complicated relationship between an individual’s emotions and ethics, poses perhaps the greatest challenge to Hume’s guillotine. While emotions may be influenced at the cognitive level by knowledge of ethics and the need to act for the maximum benefit of oneself and others, from the individual emotional perspective, opportunities may be observed through technological capacities such as the ability to collect mass amounts of data (big data) for the purposes of modelling, targeted marketing, sale (business intelligence) and further exploitation, which give the individual a technological and economic competitive edge at the expense of others. That is, individual gain is prioritised over collective gain.

Valence arguably renders emotivism and emotionally loaded ethical thinking possible as it tempers the semantic value of cognitive–affective content according to benefit or detriment and relational ways of being (Joseph 2009; Wolff 2019). Often linked to linguistics (the technology of language) and other cultural expression, emotivism in ethics begins with the idea that situations of life and respective experiences are either emotionally positive or negative (pleasant or unpleasant). The emotional analysis of the consequences of actions thus provides the basis for the ethical analysis of actions and action types. For example, the Golden Rule can be seen as a generalisation of situational experiences of deeds in which the principle is either followed or violated. Thus, emotional and ethical information processing involves the analysis and experience of emotional valence in terms of emotional meaning (the consequences for the individual as well as others or in light of others) and can be considered the first point of the ethical process.

Consequently, the development of ethical norms is grounded in the quality of valence that people associate with various emotional situations. However, it is not possible to finalise the analysis of the ethical process with emotions and their valences (Saariluoma 2020; Saariluoma and Leikas 2020). Life situations are the consequences of human actions—not necessarily carried out by the individual herself but by humans and their constructions in general. Thus, the value of actions can be defined based on the valence of situations that arise as a consequence of particular types of actions. Norms describe what kinds of actions have had emotionally positive or negative consequences. In many contexts, and as a discursive ‘rule of thumb’, actions leading to pain are not generally acceptable, while actions leading to positive emotions are often referred to as ‘good’. Yet, in the context of punishment and war, pain inflicted on the perceived offender or enemy can be emotionally experienced as positive by a collective group of individuals. Likewise, positive emotions experienced while engaging with video games can be considered negative from the perspectives of game addiction, reduced social engagement, physiological problems and depression.

The first step in defining technoethical principles is to classify situations according to their emotional values. That is, what types of emotions (positive–negative; aroused–passive) arise in relation to technology and associated factors—context, intention, technological type, logic, etc. From an emotional perspective, actions leading to various technology interaction situations can be organised into two types of categories with a follow-up question—beneficial or detrimental and for whom (the individual, the collective, humankind in general)? Even in situations in which emotions are not consciously experienced, such as elevator travel or basic graphical user–interface interaction, an ongoing evaluation process is always occurring. In studies on elevator use, for instance, Rebekah Rousi (2013b; 2014) observed that the ‘experience of no experience’—no conscious emotional experience—revealed that the technology interaction had been positive. That is, the relationship between positive and negative technological experiences with conscious and unconscious emotions is complex, dynamic and highly contextualised (Winkielman and Berridge 2004). In the context of elevator use, an individual engages in a fully embodied interaction, meaning that the stakes are high in terms of safety and security. Faulty function in the technology would mean either serious injury or loss of life. Thus, negative interactions entail a conscious emotional experience.
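The classification step described above can be made concrete with a small sketch. The following Python fragment is an illustrative assumption, not an instrument drawn from the studies cited: its enum values, example situations and the simple 'consciously felt' heuristic are hypothetical. It shows one way situations might be organised by valence, arousal and beneficiary, echoing the 'experience of no experience' observation.

```python
# Illustrative classification of technology-interaction situations by
# emotional value and beneficiary; all concrete values are assumptions.
from dataclasses import dataclass, field
from enum import Enum


class Valence(Enum):
    POSITIVE = 1
    NEGATIVE = -1


class Arousal(Enum):
    AROUSED = "aroused"
    PASSIVE = "passive"


class Beneficiary(Enum):
    INDIVIDUAL = "individual"
    COLLECTIVE = "collective"
    HUMANKIND = "humankind in general"


@dataclass
class InteractionSituation:
    description: str
    valence: Valence
    arousal: Arousal
    benefits: set = field(default_factory=set)    # who gains from the interaction
    detriments: set = field(default_factory=set)  # who bears the cost

    def is_consciously_felt(self) -> bool:
        # Heuristic echoing the 'experience of no experience' observation:
        # smooth, positive, low-arousal interactions often pass without
        # conscious emotion, while negative or high-arousal ones are felt.
        return self.valence is Valence.NEGATIVE or self.arousal is Arousal.AROUSED


situations = [
    InteractionSituation(
        "Uneventful elevator ride", Valence.POSITIVE, Arousal.PASSIVE,
        benefits={Beneficiary.INDIVIDUAL}),
    InteractionSituation(
        "Personal data resold without consent", Valence.NEGATIVE, Arousal.AROUSED,
        benefits={Beneficiary.INDIVIDUAL},      # the data buyer
        detriments={Beneficiary.COLLECTIVE}),   # the users
]

for situation in situations:
    print(situation.description, "->", situation.valence.name,
          situation.arousal.name, "consciously felt:", situation.is_consciously_felt())
```

The point of such a structure is not the code itself but the discipline it imposes: every situation must be assigned a valence, an arousal level and an answer to 'beneficial or detrimental, and for whom', before design decisions are made.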

At the interaction design level, issues relating to mass data collection, cookie use, clickbait and indeed AI and bots may be dismissed as minor. In other words, users may not think beyond the surface (interactional) level, whereby symbolic interactions matter (Saariluoma and Rousi 2015). These symbolic interactions may not generate strong conscious emotional reactions. Yet, upon further reflection, higher order cognitive emotions are engaged through consideration of the reasons behind internet-based data collection and subsequently the further use of collected data (financial gain and social–political control). Consider the knowledge of personal data being sold onwards: the owners of the website generate profits from one’s data without consent or offering royalties to the users. Then, consider the intentions of those who buy the data: Are they in line with one’s own values, desires and interests related to personal safety, security and freedom? For example, based on engaging in a seemingly free online game, what do the webpage owners gain at the eventual expense of the users? If these ethical questions—and, moreover, facts—were readily apparent to users, stronger and more reactive emotional responses would be experienced.

The generation of ethical norms is a cultural constructive process that involves emotivism and allocating ethical–emotional meaning (Cohen et al. 1992; Hofstede 1980; Jeffrey et al. 2004; Markus and Kitayama 1994). Knowledge of the need to avoid alcohol use, for instance, as it leads to health and social problems, may or may not serve as a deterrent for alcoholism. The mechanisms associated with alcohol and its use (i.e. social–cultural conditions) should be examined in order to understand which emotions dominate (and why) when people choose to engage in this consumable technology. Alcoholism can be classified as a situation in life, while drinking, its surrounding behaviour and antecedents are the actions that lead to this situation. Emotions regarding alcoholism and its long-term consequences, as seen in Korsakoff’s syndrome—a preventable memory disorder (Kopelman et al. 2009)—do not rest merely in emotional issues. The control and analysis of consequences are always cognitive (information processing) issues leading to the study of knowledge, otherwise known as hermeneutics (Gadamer 2008), and experience (phenomenology). Thus, differently to human cognitive and emotional encoding, technoethical processes are intimately linked to the whole (Barnes and Thagard 1996; Thagard 2008). In terms of how emotions operate within these cognitive processes, it is useful to refer to research on appraisal theory (Folkman and Lazarus 1984; Frijda 1993). This theory explains benefit–detriment and safety–threat relationships in terms of how emotions emerge through evaluations of encounters and experience at various levels of cognition and through different types of emotions—from basic or primal emotional reactions (Ekman 1999; Ortony and Turner 1990) to higher order cognitive emotions (Clark 2010; LeDoux and Brown 2017).

The basic unit of ethical analysis is the cognitive representation of actions associated with emotional valence. From the perspective of evolutionary psychology, humans are programmed to avoid actions that eventually lead to negative emotional states such as harm, pain, humiliation, guilt and/or stress (Neuberg et al. 2011; Petersen et al. 2012). Cognitive analysis renders it possible to understand how humans should act to avoid negative, destructive and exploitative experiences. This type of approach is useful when designing technology for ethical emotional experience. Difficulties arise, however, due to the fact that ethical experiences and interpretations of ethics differ not only between individuals, communities and cultures but also according to the context. The ethical qualities of AI that is developed to help doctors optimise the effectiveness and efficiency of disease diagnosis and immunisation development, such as is needed now with the COVID-19 virus, suggest that such technology is beneficial for a large population of people. However, the scientists, developers and companies that have the resources to develop such powerful systems will also inevitably be able to exploit these findings financially. Thus, the end result may be that the subsequent developments are experienced more as a financial win for those who ‘have’ (parties who already have strong socio-economic resources) than as a health benefit for the broader global population (a mixture of ‘haves’ and ‘have nots’).

Ethical norms can be seen as a consequence of both informal (everyday) and formal (or political) discourses and are the subjects of study within the area of discourse ethics (Habermas and Cronin 1993). Thus, discourse is an integral element to consider within technoethical processes. Not only does discourse reveal the semantic and interpretative framing of particular issues within the context of society, but it also reveals the power relationships, causes and intentionality behind the development of the ethical norms themselves. Hume (1751/1998) neglected to highlight three essential components of ethical generation processes. These components relate to: (1) life circumstances, (2) information analysis and perceived cause–effect relationships and (3) socio-ethical discourse. Firstly, the intertwined relationship between ethics and emotions depends on life circumstances and subjective conditions. That is, assessments of benefit/detriment, winners/losers and ethical interpretation always occur from the standpoint of the individual party and their approach to ethical analysis (their ethical stance: how they use the available information to justify their and others’ actions in relation to ethics). Secondly, ethical processing entails the evaluation and analysis of information related to various actions leading to various types of situations, the information produced within the situations and the subsequent outcomes. Thirdly, socio-ethical discourse needs to be incorporated into the analysis in order to define the social and historical properties of a situation. Though Hume theorised the cognitive–affective–behavioural triad in terms of emotions, reason and action, his guillotine irrationally broke the process.

Hume’s guillotine is a consequence of a mistaken analysis of the ethical process and the ethicality of actions. He was not concerned with how ethics arise from the dynamics of emotional experience and the simultaneous analysis of situations. Cognitive and emotional aspects of situations are encoded parallel to the evaluation of ethical stance and appraisal. This is why the very question of whether or not (cognitive) facts are utilised to define (emotional) values is senseless. Facts and values are simply two sides of the same mental event. Social discourse operates in a way that generalises ideas—establishing hegemony (Gramsci 2006; Laclau and Mouffe 2014)—that relate actions to cognition and subsequent (perhaps idealised) social and individual emotions. Ironically, from a technological development perspective, an accurate analysis of the ethical process as a whole—one that incorporates social, biological, political and psychological components—informs the development of weak and strong AI technology, for instance. This is due to the fact that technologists can ascertain the socio-cultural components of emotional experience, from and in conjunction with biological components, meaning that contextually aware emotional systems could be established in relation to information types and surrounding social discourse. But would these systems be ethical? And would this knowledge about the interaction between social knowledge, conventions and moral codes (ethics) and cognitive–affective processing be used for the right purposes?

6 Conclusion

Knowledge of ethical information processing can help to circumvent Hume’s guillotine. Hume makes the fundamental (unsupported) generalised assumption that emotions and cognition are opposites in the human mind. This establishes a binary approach to cognition and emotions, which has led to the popular myth that reason and rationality are based on the cognition of facts, whereas emotions and their expression are somewhat fuzzy, subjective reactions. However, emotions guide attention, prioritise information, enable and disable memory, and shape memory. In so doing, they operate on various levels throughout society—from the highly individual (subjective) to the mass (social) and political levels. At the beginning and the end of science are always emotions. The very act of separating what is considered factual information and cognition from emotional processing cannot be supported (Dalgleish and Power 2004; Thagard 2008). One cannot conclude that the two concepts are opposites based on the fact that they are expressed differently. Instead, they are components of the same evolutionary system. This is why it is essential to analyse the ethical uses and emotional consequences of actions when discussing technoethics.

It is not necessary to deem all technologies that are attached to a business model unethical and controversial to constructive social thought. This is because technology, and how it is supported through economic models, is a result of the world’s constructed socio-economic conditions. The problems emerge from an imbalance within benefit–detriment relationships and issues of power and control. Moreover, the intentionality behind the technology development—i.e. produced for diagnosis and world health or for financial gain, destruction and control—shifts the ethical–emotional relationship between controversial and acceptable boundaries. Tobacco, for instance, can be seen to have positive effects in terms of weight control and social health (cigarettes as networking tools). Yet, from the perspective of world health, it causes various types of cancer and lethal cardiac diseases and is still legal and on the global markets due to corporate profit. Nuclear energy is another controversial technology. It is a clean technology and relatively economical to use, yet it is questionable. The consequences of malfunction are so severe, and the destructive effects are so permanent, that it is unclear whether it should be used. The critical evaluative factors involved in tackling this problem are closely linked to ethical questions regarding the risks versus benefits—those who gain and those who potentially lose. The outcomes of such evaluative processes can never be certain. Emotions experienced in relation to these processes will always be conflicting and varying.

There are also destructive technologies. For example, bots spreading intentional disinformation are not meant to increase the quality of human life (Saariluoma and Maksimainen 2012). They are instead intended and designed to influence emotion and shift social discourse for (usually political or financial) gain. They are meant to rupture societies (divide and conquer) and, through instability, to establish power through false structures—e.g. similarly to hackers and individuals guilty of cybercrime who establish cybersecurity companies. From this perspective, any form of disinformation can only be harmful. As seen in conjunction with the 2016 US presidential election and the infiltration of fake news via social media, Soviet propaganda similarly hid and manipulated important statistical facts. Thus, disinformation created illusory bubbles, which have been gradually refuted, consequently leading to political changes, mixed emotions and a lack of trust.

These examples show that improving the quality of life, which is ultimately defined emotionally, does not always drive technology design and construction. Human values will always vary. However, technology design is value motivated, rendering value and ethical analysis an important part of modern technology design, particularly where emotions are concerned. The development of ethical norms is grounded in the analysis of emotional situations and vice versa. Thus, ethical process analysis should not end with a seeming understanding of emotions; rather, ethics and emotions should be understood as being intertwined—relying on cognitive–affective, social, cultural, political and contextual interactions and relations. Situations of life are consequences of actions.

Information systems and emerging technology are involved in carrying out increasingly complex actions and processes. It is thus essential to develop ethical capacities to accommodate these systems at the levels of technological logic (input and design), and social costs and benefits, in addition to economic modelling and sustainability. Designing from an ethical perspective means increasing the benefits for as many individuals and parties as possible and being transparent about the cost–benefit dimensions—indicating who gets what, when and how. The operational roles of technology can be very independent; thus, it is essential that they adhere to a code of principles that define ethical practices.

There are two approaches to ethical AI logic: ethically weak and ethically strong AI systems. The former can apply ethical rules that have been programmed into systems, directing behaviour in given situations. In this case, AI would be able to recognise critical features in situations and select actions based on this information. In the latter type of system, ethics are simply a human-implanted feature in recognition- and action-based systems. The outcomes of technoethical processes are sets of guidelines, referred to here as moral norms, with a codification process that can be termed ‘norming’. In this technoethical process—or synthetic ethical–emotional process—moral norms and rules are the outcomes of programming in relation to moral and ethical codes. These inform the norming process, which classifies the combination of situations, actions and contexts resulting in particular circumstances as either positive or negative, depending on the application of these moral norms within the particular situation. From the perspective of a situationally aware AI system, ethical rules are utilised as criteria to evaluate the emotional consequences of deeds. The classic Golden Rule, for example, expresses human experience when people do not take into account how their actions affect others. Thus, by implementing a codified framework that may assess hypothetical alternatives about how the machinery (if conscious) would itself experience the consequences of its own actions, the system may be presented with the right solution regarding what the most appropriate action in those given circumstances would be.
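A minimal sketch of how an 'ethically weak' system of this kind might apply codified moral norms is given below. The norm names, the recognised situation features and the candidate actions are hypothetical assumptions, not a specification from this chapter; the sketch only illustrates norming as the application of programmed rules, including a reversal test inspired by the Golden Rule, to features the system can recognise.

```python
# Illustrative 'ethically weak' rule application: a hand-coded norm table is
# checked against recognised features of candidate actions. All names and
# values here are hypothetical examples.
from dataclasses import dataclass


@dataclass
class CandidateAction:
    name: str
    # Hypothetical-consequence features the system is assumed to recognise.
    harms_others: bool
    violates_privacy: bool
    acceptable_if_reversed: bool  # would the actor accept being treated this way?


# 'Norming': codified moral norms mapped to checks over recognised features.
MORAL_NORMS = {
    "do_no_harm":      lambda a: not a.harms_others,
    "respect_privacy": lambda a: not a.violates_privacy,
    "golden_rule":     lambda a: a.acceptable_if_reversed,
}


def evaluate(action: CandidateAction) -> tuple[bool, list[str]]:
    """Return whether the action passes all norms, and which norms it violates."""
    violated = [name for name, check in MORAL_NORMS.items() if not check(action)]
    return (not violated, violated)


candidates = [
    CandidateAction("share anonymised usage summary", False, False, True),
    CandidateAction("sell identifiable user data", True, True, False),
]

for action in candidates:
    permitted, violated = evaluate(action)
    verdict = "permitted" if permitted else f"rejected ({', '.join(violated)})"
    print(action.name, "->", verdict)
```

The limits of such a system are exactly those discussed above: the norms, the recognisable features and the reversal judgement are all implanted by humans, so the ethical weight of the outcome still rests with the people who codify them.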

Understanding the origins of social discourse may be considered a ‘which came first—the chicken or the egg’ type of question. An individual’s primary ethical representations and schemas form the basis of social discourse, while social discourse largely informs an individual’s personal ideas about ethics and moral codes. A charismatic figure may influence social discourse and interpretative frames for many communities on numerous subjects. Yet, this charismatic figure would not have been born in a vacuum; rather, they too are the result of social–cultural conditions. Through cultural influence as well as small-scale and large-scale, formal and informal discussions, people form their views about what are the most important and fundamental ethical experiences and respective rules, as examined in scholarship on discourse ethics (Habermas 2009; Habermas and Cronin 1993). In discourse ethics, representations are submitted to argumentative or foundational analysis. Each primary representation or ethical rule is subjected to the foundational discourse. Any ethical rules that cannot be argumentatively supported will be rejected. The discourse itself has layers and sub-discourses. The main outcome is a system of ethical concepts, rules and principles. This chapter describes the unification of emotional, cognitive and social analysis as the ethical information process. It argues that in emotional–technological design, ethical information processing is key not only for knowing how to design for emotions, or why design teams emotionally experience their technological designs in certain ways, but also for understanding ethical ways of successfully implementing emotions in AI technology design.