
1 Introduction

Artificial Intelligence (AI) is the study of the design, development, application, and operationalisation of complex algorithmic systems that train themselves to think and act as intelligently as human beings (Schrader and Ghosh 2018). Powered by Machine Learning (ML) algorithms, through which they adaptively interpret and continuously self-learn by analysing large new datasets (Walsh et al. 2019; Floridi and Cowls 2019; Bankins 2021; Giermindl et al. 2021), these “intelligent virtual agents” recognise and respond to specific environments (Acemoglu and Restrepo 2019; Teng 2020) that would otherwise have required human-level intelligence and intervention (Biller-Andorno et al. 2020; Naude and Dimitri 2020). These environments, which require pattern identification in data and the generation of predictions to assist decision-making processes (Nadimpalli 2017; Biller-Andorno et al. 2020; Krijger 2021), include ‘Knowledge Reasoning’, ‘Machine Learning (ML)’, ‘Robotics’, ‘Computer Vision’, ‘Visual Perception’, ‘Speech Recognition’, ‘Voice Biometrics’, and ‘Natural Language Processing (NLP)’ (Coombs et al. 2020).

NLP, as an “interdisciplinary field of AI where the machine can be used to understand and manipulate natural language text or speech” (Bender and Friedman 2018; Li et al. 2021), includes text generation, speech recognition, question answering, speech-to-text, and text-to-speech. In this research study, the term AI refers mainly to NLP, since the focus is on a banking Contact Centre suite of AI technologies.

The AI industry forecasts huge economic promise, potential, and benefits (Brynjolfsson and McAfee 2012), with AI making things ‘easier, cheaper, and abundant’ (Naude and Dimitri 2020). AI has the potential to increase human wellbeing, boost most sectors of the economy, improve societies, and deliver generally good-quality customer service (Arambula and Bur 2020; James and Whelan 2022). As Dolganova (2021) points out, AI increases “customer loyalty, trust, and could boost sales by up to 30% per annum”.

Despite these undoubted economic and commercial success stories, AI also has a transformative social and cultural influence on society (Mika et al. 2019). AI has demonstrated a pattern of entrenching social divides and amplifying or worsening social inequality, particularly among historically marginalized groups (Hagerty and Rubinov 2019).

A few examples of unethical AI behaviour in the United States are:

  (a) “a job recruiting tool being biased against women” (Hagerty and Rubinov 2019);

  (b) “Latin and African American borrowers faced with discriminatory credit algorithms” (Hagerty and Rubinov 2019);

  (c) “bias regarding race, gender, and/or sexual orientation in sentiment analysis systems, and natural language processing technologies” (Hagerty and Rubinov 2019);

  (d) “an AI legal system that discriminated against African American and Hispanic men when making decisions about granting parole which has since become infamous” (Taddeo and Floridi 2018);

  (e) “the US recidivism prediction algorithm that allegedly mislabelled African American defendants as ‘high-risk’ at nearly twice the rate as it mislabelled white defendants” (Angwin et al. 2016, cited in Krijger 2021);

  (f) “hiring algorithms that, based on analysing previous hiring decisions, penalized applicants from women’s colleges for technology related positions” (Dastin 2018);

  (g) “a healthcare entry selection program that exhibited racial bias against African American patients” (Obermeyer et al. 2019); and

  (h) Google’s use of AI to identify photos, people, and objects in the Google Photos service, which has not always produced correct results; likewise, camera systems that miss the mark on racial sensitivity have disproportionately labelled Black people as criminals (Alotaibi 2018).

All of these incidents led governments, academics, civil society and non-profit organisations, research institutes, and even the private sector to embark on initiatives to ensure ethical AI behaviour. This gave rise to Ethical AI, which broadly refers to “the fair and just development, use, and management of AI technologies” (Bankins 2021). Since AI systems are socio-technical in nature, considering the ethical implications of their design, development, deployment, and operation became of critical importance for society. Non-governmental organisations like AI4Good led to the development of the AI4People ethical AI framework by Floridi et al. in 2018 (Kindylidi and Cabral 2021); initiatives by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI-HLEG) led to the publishing of the Ethics Guidelines for Trustworthy AI, or Sustainable AI (Kindylidi and Cabral 2021); the Japanese Society for Artificial Intelligence (JSAI) identified ethical principles to be followed by developers of artificial intelligence systems (Dolganova 2021); and the Atomium European Institute for Science, Media and Democracy published a white paper outlining five principles of AI ethics (Dolganova 2021). Responsible AI, which is “concerned with the design, implementation, and use of ethical, transparent, and accountable AI technology in order to reduce biases, promote fairness, equality, and to help facilitate interpretability and explainability of outcomes” (Trocin et al. 2021), was also born.

These initiatives were not limited to the non-profit, governmental, and research sectors; the private sector also came on board:

  (a) IBM launched “Everyday Ethics for Artificial Intelligence: A practical guide for designers and developers” (Ashby 2020);

  (b) Microsoft leadership uses “six principles mainly derived from the AI4People framework to guide its use of their AIs” (Bankins 2021); and

  (c) Google has also formulated “seven principles of artificial intelligence that the company is following in creating and using AI technologies” (Dolganova 2021).

Most AI practitioners are pressured to prioritise commercial interests over public or ethical considerations (Mittelstadt 2020). This raises the question of whether a Balanced AI is achievable, one in which the business benefits or commercial viability of AI (opportunity), ethics in AI design, development, and utilisation, and the opportunity cost of AI are all balanced. The research question and problem is therefore whether an ethically designed, developed, implemented, and utilised NLP for Contact Centres in South Africa can continue to have positive effects on business performance. This question cannot be fully answered in this SLR publication, as the case study is still to be undertaken, but the conceptual framework(s) gained from this exercise will be used as a guide to conduct the case research. The primary goal of this SLR publication was to conduct a thematic analysis of academic literature towards achieving a Balanced NLP.

2 Methodology

A systematic literature review (SLR) (Paré et al. 2015) was conducted to identify relevant research papers with meta-analyses for qualitative synthesis. It followed eight stages, namely: conceptualisation and protocol identification, searching the literature, screening the articles, selecting the relevant articles, performing thematic analysis, presenting the results, developing the framework, and concluding. These stages are illustrated in Fig. 1 below:

Fig. 1. Systematic literature review and meta-analyses research flow

The literature search, screening, and selection were conducted in five steps, as shown in Table 1 below: sources searched, search keywords, search strategy, inclusion criteria, and exclusion criteria.

  • (i) Sources Searched

    Two main databases, Web of Science and Google Scholar, were used as the main sources for the literature search. In addition, six mainly Information Systems (IS) journal websites were used: the Journal of Management Information Systems (JMIS); European Journal of Information Systems (EJIS); MIS Quarterly; Information Systems Journal (ISJ); Journal of Strategic Information Systems (JSIS); and Journal of Information Technology (JIT).

  • (ii) Search Keywords

    The keywords “AI Implementation”, “AI Adoption”, “Ethical AI Implementation”, “NLP”, and “Ethical NLP” were used to conduct the literature search on the above databases and journal websites. In total, over 350 articles were found.

  • (iii) Search Strategy

    The two identified databases and the journal websites listed above were interrogated. Using combinations of the above keywords, the search targeted articles published from January 2018 to December 2021. Interestingly, three relevant articles from MIS Quarterly, the International Journal of Information Management, and Critical Social Policy had already been accepted for publication in 2022; those three papers were also included.

    Table 1. Steps for searching and selecting the articles
  • (iv) Inclusion Criteria

    With the search strategy discussed in (iii) above, approximately 64 relevant articles were selected. Of those, Google Scholar contributed about 45 articles (roughly 70% of all included publications) and Web of Science about 19 (roughly 30%).

    The 2021 calendar year was the most prominent, with over 33% of all ethical-AI-related articles published that year. Publications from 2020 and 2019 were tied second at around 22% each, as shown in Table 2 below.

    Table 2. Year of article publications included in the SLR

    Across the journals, the Association for Computational Linguistics and the European Journal of Information Systems had the largest number of articles, with four each, as shown in Table 3 below. The Institute of Electrical and Electronics Engineers (IEEE) and Minds and Machines came second, tied at three articles each. There was a very long tail of one publication each from various academic institutions, research institutes, MIS Quarterly, and other journals such as the Journal of Strategic Information Systems.

    Table 3. Some selected journal publications included in the SLR > 1 article
  • (v) Exclusion Criteria

    All publications before January 2018 were excluded, as were those not relevant to the ethical AI debate. This resulted in around 307 articles being excluded from the SLR. The field of ethical implementation and utilisation of AI is highly dynamic, and ethical AI is a phenomenon that has only recently been thrust into the spotlight; hence the decision to restrict the literature search to the period from January 2018.
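The screening steps above (search window, keyword relevance, exclusion) can be sketched as a simple filter. This is an illustrative reconstruction under assumptions, not the authors' actual tooling: the `Article` record, the `include` helper, and the sample entries are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    year: int
    keywords: set[str]

# Search keywords from the SLR protocol (step ii)
SEARCH_TERMS = {"ai implementation", "ai adoption",
                "ethical ai implementation", "nlp", "ethical nlp"}

def include(article: Article) -> bool:
    """Apply the inclusion/exclusion criteria: published January 2018
    onwards (in-press 2022 papers allowed) and matching at least one
    search keyword."""
    in_window = 2018 <= article.year <= 2022
    relevant = bool({k.lower() for k in article.keywords} & SEARCH_TERMS)
    return in_window and relevant

# Hypothetical sample of search results
articles = [
    Article("Ethical NLP in banking", 2021, {"Ethical NLP"}),
    Article("Expert systems survey", 2015, {"AI Adoption"}),   # excluded: pre-2018
    Article("Cloud billing models", 2020, {"FinOps"}),         # excluded: not relevant
]
selected = [a for a in articles if include(a)]
print([a.title for a in selected])  # → ['Ethical NLP in banking']
```

In the actual review this filtering was done manually across the two databases and six journal websites; the sketch only makes the stated criteria explicit.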

3 Analysis of Literature Identified

All the literature reviewed came from the 64 relevant articles included in the study using combinations of the keywords “AI Implementation/Adoption”, “Ethical AI Implementation/Adoption”, “NLP”, and “Ethical NLP”. These are articles published since January 2018 that deal with ethics-related AI issues.

Floridi and Cowls (2019) posit that the ethical debate around AI dates back to the 1950s and 1960s. However, it is only in recent years that impressive advances in the capabilities and applications of AI systems have brought the opportunities and risks of AI for society into sharper focus (Yang et al. 2018). To recap, Table 4 below shows some definitions of artificial intelligence from various authors and researchers:

Table 4. Artificial Intelligence (AI) definitions

3.1 Ethical Issues in AI

Alotaibi (2018) lists nine leading ethical issues related to artificial intelligence: unemployment; inequality; humanity; artificial stupidity; racist robots; security; evil genies; singularity; and robot rights. These are exactly the same ethical issues raised by Julia Bossmann at the World Economic Forum (WEF) in 2016, which demonstrates some unanimity on ethical AI issues.

Table 5. Leading ethical issues

On unemployment, the rise of Intelligent Automation (IA), the intelligent automation of knowledge and service work (Coombs et al. 2020), could lead to job substitution, especially for office and clerical tasks, sales and commerce, transport and logistics, manufacturing, construction, some aspects of financial services, and some types of services such as translation and tax consultancy (Degryse 2016). IA differs from previous forms of automation because of machine learning, whereby AI machines can learn, adapt, and improve over time based on newly available data (Coombs et al. 2020; Coombs 2020), which could lead to job polarization. Job polarization is a phenomenon where “advance technology seems to be displacing workers away from middle-skill/middle-pay jobs down to low-skill/low-wage jobs, where these workers further depress low-skill wages, or for a lucky few who are retrained, up to high-skill jobs where the workers enjoy higher productivity and higher wages” (Dau-Schmidt 2017).

3.2 Global AI Ethical Initiatives

Table 5 above on leading AI ethical issues, together with the unethical AI behaviours listed in the introduction, has led to global initiatives to combat unethical AI design, development, implementation, and utilisation, especially in the global north. Ethical AI broadly refers to “the fair and just development, use, and management of AI technologies” (Bankins 2021, 5). Nakata et al. (2021) list some major global initiatives led by governments, national and international organisations, research institutes, civil society (non-profit), and the private sector pertaining to AI ethics principles since 2016 (Jobin et al. 2019). Some of those initiatives are listed below:

  (i) 5 principles on “Ethically Aligned Design” by the IEEE in 2017;

  (ii) 6 principles on “Future Computed” by Microsoft in 2018;

  (iii) 9 principles on “Statement on AI, Robotics, and ‘Autonomous’ Systems” by the European Group on Ethics in Science and New Technologies in 2018;

  (iv) 5 principles on “AI in the UK: ready, willing, and able” by the UK Select Committee on AI in 2018;

  (v) 5 principles on “Everyday Ethics for Artificial Intelligence” by IBM in 2018;

  (vi) 7 principles on “Sony Group AI Ethics Guidelines” by Sony in 2018;

  (vii) 5 principles on “An Ethical Framework for a Good AI Society” by AI4People in 2018;

  (viii) 7 principles on “Principles of Human-Centric AI Society” by Japan’s Cabinet Office in 2019; and

  (ix) 4 principles on “Ethics Guidelines for Trustworthy AI” by the EU’s High-Level Expert Group on Artificial Intelligence (AI-HLEG) in 2019.

  • Source: adapted from Nakata et al. (2021)

These initiatives and the subsequent principles and guidelines have led to the emergence of terms like the AI4Good and AI4People frameworks (Floridi et al. 2018; Cowls et al. 2021), Trustworthy AI (Mökander and Floridi 2021; Cowls et al. 2021), Responsible AI (Kindylidi and Cabral 2021), and Emotional AI (McStay 2019). Some of these are discussed in detail below:

3.2.1 Responsible AI

Responsible AI is concerned with the design, development, implementation, and utilisation of ethical, transparent, and accountable AI technology in order to reduce biases, promote fairness and equality, and help facilitate interpretability and explainability of outcomes (Trocin et al. 2021). It is a fairly new phenomenon that investigates the ethics of AI to understand the moral responsibility of these technologies (Kindylidi and Cabral 2021).

3.2.2 Trustworthy AI

Ethics Guidelines on Artificial Intelligence (‘the Guidelines’) for Trustworthy AI were released by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI-HLEG). They call for developing trustworthy AI by ensuring that systems are lawful, ethical, and robust (European Commission 2019).

In general, these frameworks consist of several key themes, exemplified by the AI-HLEG Guidelines for trustworthy AI, which lay out seven key requirements (European Commission 2019; Cowls et al. 2021; Krijger 2021):

  (i) Human Agency and Oversight: “AI systems should allow humans to make informed decisions and be subject to proper oversight.”;

  (ii) Technical Robustness and Safety: “AI systems need to be resilient, secure, safe, accurate, reliable, and reproducible.”;

  (iii) Privacy and Data Governance: “Adequate data governance mechanisms that fully respect privacy must be ensured.”;

  (iv) Transparency: “The data, system and AI business models should be transparent and explainable to stakeholders.”;

  (v) Diversity, Non-discrimination and Fairness: “Unfair bias must be avoided to mitigate the marginalisation of vulnerable groups and the exacerbation of discrimination.”;

  (vi) Societal and Environmental Well-being: “AI systems should be sustainable and benefit all human beings, including future generations.”; and

  (vii) Accountability: “Responsibility and accountability for AI systems and their outcomes should be ensured.”

3.2.3 Research Institutes on Ethical AI

Below are just some selected research institutes that also developed some ethical AI principles.

  (a) The Capgemini Research Institute designed six core principles and characteristics of ethical AI (Capgemini Research Institute 2019; Dolganova 2021): “ethical actions from design to application”; “transparency”; “explainability of the functioning of AI”; “the interpretability of the results”; “fairness, lack of bias”; and “the ability to audit”.

  (b) The European Institute for Science, Media and Democracy published a white paper outlining five principles of AI ethics (Floridi et al. 2020; Dolganova 2021): promoting human well-being; harmlessness (“confidentiality, security, and ‘attention to opportunities’”); autonomy (“the right of people to make their own decisions”); fairness (“respect for the interests of all parties that can be influenced by the actions of the system with AI, the absence of discrimination, the possibility of eliminating errors”); and explainability (“transparency of the logic of artificial intelligence, accountability”).

Other fields, especially health care, have developed their own frameworks for ethical AI considerations. In pathology, transparency, accountability, and governance are of critical importance in an AI system (Chauhan and Gullapalliyz 2021); in otolaryngology, the principles adopted are consent and patient privacy, beneficence, non-maleficence, and justice (Arambula and Bur 2020).

3.2.4 Private Sector Ethical AI Initiatives

The private sector, especially the Tech Companies, also developed some practical standards and guidelines in designing and developing AI within their companies:

  (a) IBM’s “Everyday Ethics for Artificial Intelligence: A practical guide for designers and developers” (IBM 2018; Ashby 2020) was published in 2018 and is in full operation. Its principles are: purpose expressed as unambiguously prioritized goals; truth about the past and present; variety of possible actions; predictability of the future effects of actions; intelligence to choose the best actions; influence on the system being regulated; ethics expressed as unambiguously prioritized rules; integrity of all subsystems; and transparency of ethical behaviour (IBM 2018).

  (b) Microsoft developed and published five principles that its leadership uses to guide the company’s use of AI (Bankins 2021). These are mainly linked to the AI4People framework (Bankins and Formosa 2021; Bankins 2021) and are: fairness (“aligned to the justice norm”); reliability and safety (“aligned to the beneficence norm”); privacy and security (“aligned to the non-maleficence norm”); inclusiveness (“aligned to the justice norm”); and transparency and accountability (“aligned to the explicability norm”) (Microsoft n.d.).

  (c) Google has also developed and published seven principles that it uses to design, develop, and utilise these AI technologies (Dolganova 2021): “AI should be socially useful”; “it is necessary to strive to avoid unfair influence on people”; “application of best security practices”; “responsibility for the actions of AI in front of people”; “ensuring guarantees of confidentiality, proper transparency and control over the use of data”; “maintaining standards of excellence”; and “limiting the use of potentially harmful and offensive software products” (Pichai 2018).

3.3 AI4People Framework

The AI4Good initiative led to the development of the AI4People Unified Framework of Principles for AI in Society (Floridi et al. 2018, 2020; Floridi and Cowls 2019). This framework seems to encapsulate all the earlier ethical AI principles, consolidating them into five unified ethical AI principles (Floridi et al. 2018, 2020; Floridi and Cowls 2019; Beil et al. 2019): (a) beneficence: “Promoting Well‑Being, Preserving Dignity, and Sustaining the Planet”; (b) non‑maleficence: “Privacy, Security and ‘Capability Caution’”; (c) justice: “Promoting Prosperity and Preserving Solidarity”; (d) explicability: “Enabling the Other Principles Through Intelligibility and Accountability”; and (e) autonomy: “The Power to Decide (to Decide)”.

3.4 AI Ethical Design

The robustness of the design and the agility of the architecture of Trustworthy AI are extremely important and must be built into the system from the beginning (Mökander and Floridi 2021). This can also lead to a process of continuous improvement through re-design. The IEEE has set up four Ethically Aligned Design (EAD) standards to address the ethical concerns of AI (Alotaibi 2018): (a) “Model Process for Addressing Ethical Concerns during System Design”; (b) “Transparency of Autonomous Systems”; (c) “Data Privacy Process”; and (d) “Algorithmic Bias Considerations”.

The Japanese Society for Artificial Intelligence (JSAI) also articulated ethical principles to be followed by designers and developers of AI systems (Dolganova 2021). These include:

  (a) respect for human rights and respect for cultural diversity;

  (b) compliance with laws and regulations, as well as not harming others;

  (c) respect for privacy;

  (d) justice;

  (e) security;

  (f) good faith;

  (g) accountability and social responsibility; and

  (h) self-development and promotion of understanding of AI by society (JSAI 2017).

Leidner and Plachouras (2017) had originally proposed six ethical-by-design principles: (i) “proactive and not reactive”; (ii) “ethical set as a default setting”; (iii) “ethics embedded into the process”; (iv) “end to end ethics”; (v) “visibility and transparency”; and (vi) “respect for user values”. These were further reiterated by Chauhan and Gullapalliyz (2021) in their Inclusive AI Design and No Bias design principles. Schrader and Ghosh (2018) also proposed an ethical AI development framework comprising five components: (a) “Identify ethical issues of AI”; (b) “Improve human awareness of AI”; (c) “Engage in dialogical collaboration with AI”; (d) “Ensure the accountability of AI”; and (e) “Maintain the integrity of AI”.

3.5 NLP Business Benefits

For most businesses, the primary goal when implementing an AI technology or AI innovation is to enable product and service innovation, enhance the customer experience, improve customer service, increase efficiency and productivity, and improve decision making (Amazon 2020). In other cases, such as the financial services industry, risk assessment and fraud detection and prevention are additional priorities (Nadimpalli 2017). Because the focus of this AI study is NLP, which is mainly utilised in the services sector, the AI business benefits of interest are revenue generation and cost optimisation, customer service, efficiency improvements and high productivity, and competitiveness.

3.5.1 Revenue Generation and Cost Optimisation

Revenue and sales forecasts with the deployment of NLP solutions are astronomical. For instance, NLP can process a thousand times more queries and transactions than would otherwise have been handled by over 100 people (Nadimpalli 2017). This is coupled with cost-saving and optimisation use cases, such as “a Bank that saved over $9 million to customers after implementing an AI tool to scan small business transactions for fake invoices” (Quest et al. 2018). Scaling these AI technologies is also simple, and the benefits are huge. Through AI implementation in the Anti-Money-Laundering space in banking, detection of suspicious activity increased more than 20-fold, with quicker resolution as well (Quest et al. 2018).

3.5.2 Customer Experience

The deployment of AI in customer service environments has significantly enhanced customer experience (Dolganova 2021), and this has become a huge competitive advantage. The introduction of NLP in financial services improved the Customer Satisfaction Index (CSI) score, the complaints-to-compliments ratio, and the Net Promoter Score (NPS) (Quest et al. 2018). With forecasts that over 80% of consumers are likely to buy from a brand that offers personalised service, the need for AI to enhance customer experience is even greater (Arambula and Bur 2020; Amazon 2020).
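Of the metrics just mentioned, NPS has a standard, widely used formula: the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6) on a 0-10 "likelihood to recommend" scale. The sketch below shows that conventional calculation; the sample ratings are hypothetical, not data from the cited studies.

```python
def net_promoter_score(ratings: list[int]) -> float:
    """Standard NPS: % promoters (9-10) minus % detractors (0-6),
    computed over 0-10 'likelihood to recommend' survey ratings."""
    if not ratings:
        raise ValueError("no survey ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical post-NLP-rollout survey: three promoters, one passive (7),
# one detractor, giving (3 - 1) / 5 * 100 = +40.
print(net_promoter_score([10, 10, 9, 7, 3]))  # → 40.0
```

Scores range from −100 (all detractors) to +100 (all promoters); passives (7-8) count in the denominator but in neither numerator term, which is why improving merely satisfied customers to promoters moves the score.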

3.5.3 Efficiency Improvements and Enhanced Productivity

The implementation of AI improves accuracy and efficiency in various areas. In contact centres, NLP empowers customers through self-service, shortens the Average Handle Time (AHT), especially for query resolution, significantly improves contact quality since upfront authentication is enabled, and reduces customer effort (Leins et al. 2020). In the health care sector, AI improves efficiency and accuracy through image analysis, robot-assisted surgery, and drug discovery, augmenting the ability to provide quality health care (Hague 2017). These AI models not only predict image analysis outcomes but also identify the specific areas of an image for examination and show the level of confidence in the prediction (Hague 2017).

3.5.4 Competitive Advantage or Strategic Competition

NLP is a strategic competition tool: it allows a business not only to gain new customers but also to significantly enhance the service offered to existing customers, which is a huge competitive advantage (Arambula and Bur 2020).

4 Analysis, Synthesis, and Interpretation

Following all the ethical AI initiatives undertaken, Jobin et al. (2019) contended that there are around 84 ethical AI guideline documents and around 11 principles: transparency; justice and fairness; non-maleficence; responsibility; privacy; beneficence; freedom and autonomy; trust; dignity; sustainability; and solidarity. However, the Floridi et al. (2020) team, which was part of the AI4Good initiative, found evidence of convergence of these 11 principles into around five, which align with the AI4People ethical AI principles: transparency; justice and fairness; non-maleficence; responsibility; and privacy.

Of the 84 AI guidelines, 43 (just over 50%) come from five more economically developed countries: the USA, the UK, Germany, France, and Finland (Jobin et al. 2019). The rest are shared among the EU, the EC, other EU institutions, Australia, and Canada.

4.1 The AI Triple Challenge

The triple AI challenge is to ensure that AI is designed, developed, and operated in such a way that the business opportunity, the ethical considerations of the AI, and the opportunity cost are all balanced (Floridi et al. 2018). If these three components, or themes as we call them in this thematic analysis study, are fully considered, then a Balanced AI (NLP) will be born. The ethics (avoidance of overuse or wilful misuse of AI) and the business benefits (the opportunity from NLP/AI innovation) emanating from the adoption of AI have been discussed above, but the opportunity cost (underuse) has not yet been fully explored. The primary objective is to avoid both the misuse and the underuse of NLP (AI) technologies (Floridi et al. 2018).

The opportunity cost of AI is the underuse of the technology (Floridi et al. 2018): the potential benefit forgone because an option was not chosen. Because it is unseen, an opportunity cost is easy to overlook. Within AI, the broader risk is that the technology may be underused out of fear of overuse or misuse, hence the need to balance these in the Balanced NLP (AI) (Floridi et al. 2018).

4.1.1 Opportunity Theme

For the Balanced NLP (AI) to be realised, the NLP must satisfy all the necessary business case metrics, especially return on investment. Within the NLP realm, Customer Experience, Revenue Generation & Cost Optimisation, Competitiveness, and Efficiency Improvements & High Productivity are the key business metrics to satisfy (Table 6).

Table 6. Summary of the SLR outcome of the opportunity theme

4.1.2 Ethics Theme

The outcome of the SLR ethics theme of the AI triple challenge is summarised in Table 7 below. The ethical principles consolidate into: Beneficence & Dignity; Justice & Fairness; Accountability & Responsibility; Human Agency & Oversight; Non-Maleficence, Governance, Privacy, and Security; and Transparency, Autonomy, & Freedom.

Beneficence and Dignity

For AI to achieve Balanced AI status in the global south, it must benefit everyone. South Africa in particular faces a crisis of inequality, unemployment, and poverty, and for any AI to be trusted it must create jobs rather than substitute jobs or reduce staff. Job creation and augmentation must be at the centre, instead of staff reduction or job polarization. AI should not further exacerbate the inequality problem; instead, everyone must benefit equally: the company, employees, and customers. Fundamentally, the AI must be seen to be addressing some of the socio-economic issues endemic in this society.

Justice and Fairness

The AI must also be seen to be helping ensure fairness in society. The global south comes from a painful past of colonisation and exploitation, and AI must contribute to correcting some of those injustices. Racism was rife, and AI should not be seen to be furthering racist behaviours. Languages, dialects, and accents vary from region to region, and AI must adapt to these through machine learning. There should be no bias, and proper pronunciation of African names would lead to an AI that could be trusted.

Non-maleficence, Governance, Privacy, and Security

Most countries in the global south are underdeveloped because of poor service delivery and corruption by elected officials. For the Balanced AI to be trustworthy, it must be seen to be exposing these corrupt practices, not being misused by the corrupt elite. It must comply with laws such as POPIA in South Africa and should stand in solidarity with the marginalised.

Table 7. Summary of the SLR outcome of the ethics theme

4.1.3 Opportunity Cost Theme

From the reviewed literature, the principles that could be grouped under the opportunity cost theme are Transparency and Explainability, as well as Accountability and Responsibility. The transparency principle thus counts as an opportunity cost as well as an ethics principle, making a second appearance here. For the Balanced NLP (AI) to be a reality, the inner workings (features and functionality) of these AI systems should be explainable by people, so that the systems are not underused or underutilized (Table 8).

Table 8. Summary of the SLR outcome of the opportunity cost theme

4.2 Ethical AI Design

Based on the data from the SLR, the ethical AI design principles can be consolidated into Data Privacy and Security, No Algorithmic Bias, Integrity of Subsystems, and Interpretability and Robustness of the AI, as shown in Table 9 below. All other ethical design considerations can be accommodated within these principles. Therefore, for a Balanced AI to be achieved, these ethical AI design principles must be adhered to.

Table 9. Summary of the SLR outcome of the ethical AI design

4.3 Balanced NLP Conceptual Framework

Based on the Ethical AI Challenge themes identified above and the ethical AI design principles, Fig. 2 below represents the recommended conceptual framework for the Balanced NLP from this SLR.

The ethical AI design is fundamental in ensuring the future ethical behaviour of the Balanced AI. The integrity of the data and its privacy are critical, since AI depends on the availability of data to continue learning and adapting. These lead to robust algorithms that will not end up engaging in unethical conduct. The design is therefore the input and the starting point of this conceptual framework.

The opportunity, ethics, and opportunity cost themes are the outcomes of the model. From an ethics perspective, the Balanced AI recommends five ethical AI principles, rather than the 11 suggested by Jobin et al. (2019), and close to the five solid principles suggested by Floridi et al. (2018, 2020): (i) Beneficence & Dignity; (ii) Justice & Fairness; (iii) Human Agency & Oversight; (iv) Non-Maleficence, Governance, Privacy, and Security; and (v) Transparency, Autonomy, & Freedom. As discussed in the guidelines above, if these ethical principles are adhered to, the Balanced AI would be achieved.

From an opportunity cost perspective, there are two main principles to consider that should be outcomes of the Balanced AI: (i) Transparency and Explainability, and (ii) Accountability and Responsibility. To avoid underutilising the AI system, the ability to explain and understand all the features and functionality of the system is fundamental, hence transparency and explainability.

Fig. 2. Conceptual framework for a balanced NLP

From the opportunity theme, or business benefits, perspective, the Balanced NLP should be able to meet the return on investment (ROI) targets of Customer Experience, Revenue Generation & Cost Optimization, Competitiveness, and Efficiency Improvements & High Productivity. There are various metrics to test and validate these as part of the operational environment, and they would need to be regularly revisited to check the continued sustainability of the Balanced AI investment.
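As a minimal illustrative sketch of how such ROI targets could be regularly re-validated (the metric names follow the opportunity theme, but the target and observed values below are hypothetical assumptions):

```python
# Hypothetical ROI check across the four opportunity-theme metrics.
# Targets and observed values are illustrative, not case-study data.

TARGETS = {
    "customer_experience": 0.80,          # e.g. customer satisfaction score
    "revenue_and_cost": 1.20,             # e.g. benefit/cost ratio
    "competitiveness": 0.10,              # e.g. market-share growth
    "efficiency_and_productivity": 0.15,  # e.g. handling-time reduction
}

def roi_gaps(observed, targets=TARGETS):
    """Return each metric that falls short of its target, with the shortfall."""
    return {m: round(targets[m] - v, 2)
            for m, v in observed.items() if v < targets[m]}

observed = {
    "customer_experience": 0.84,
    "revenue_and_cost": 1.10,
    "competitiveness": 0.12,
    "efficiency_and_productivity": 0.15,
}

print(roi_gaps(observed))  # {'revenue_and_cost': 0.1} -> revisit this metric
```

Running such a check on each review cycle would operationalise the regular revisiting of the Balanced AI investment described above.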

5 Recommendation

The above conceptual framework in Fig. 2 will be used as a guide for the case research study, where the influence of ethical AI will be tested against the business models of the Contact Centre. The proposed case research study will be conducted inductively, using qualitative data collection techniques and thematic data analysis to test the conceptual framework's validity.

5.1 Balanced NLP

For the Balanced NLP to be achieved, the intersection of ethics, opportunity, and opportunity cost should be proven, as seen in Fig. 3 below:

Fig. 3. Balanced NLP - intersection of ethics, opportunity, and opportunity cost

  • (i) Ethics - Opportunity

    From this SLR, the link or intersection between the opportunity and ethics themes is critically important for the Balanced NLP to be achieved. This could not be clearly demonstrated in this review, as the case research is not yet finalised, but it will need to be fully explored and explained.

  • (ii) Ethics - Opportunity Cost

    The link between the ethics and opportunity cost themes has been demonstrated in this SLR, where ethics and opportunity cost principles converge. The Transparency and Explainability principles and the Accountability and Responsibility principles are seen here as part of the opportunity cost theme, although most authors classify them as ethics principles. In the final case research study, not much emphasis will be placed on this intersection.

  • (iii) Opportunity - Opportunity Cost

    From this SLR, the link or intersection between the opportunity and opportunity cost themes is also of critical importance for the Balanced NLP to be achieved. This could not be clearly demonstrated in this review, as the case research is not yet finalised, but it will need to be fully explored and explained.

  • (iv) Balanced NLP

    The opportunity theme plays a central role in achieving the Balanced NLP. Its relationship with both the ethics and opportunity cost themes will determine whether a Balanced NLP - where ethics, opportunity, and opportunity cost intersect - is possible. This will be fully explored and explained in the case research study.
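The pairwise intersections above can be sketched as simple set operations over the SLR principle groupings. In this hypothetical encoding, only the ethics/opportunity cost overlap is populated; the links involving the opportunity theme remain empty pending the case research study:

```python
# Illustrative encoding of the three themes as sets of principles,
# following the SLR groupings discussed above.

ethics = {
    "Beneficence & Dignity", "Justice & Fairness", "Human Agency & Oversight",
    "Non-Maleficence, Governance, Privacy, and Security",
    "Transparency, Autonomy, & Freedom",
    # Classified by most authors as ethics principles as well:
    "Transparency and Explainability", "Accountability and Responsibility",
}
opportunity_cost = {"Transparency and Explainability",
                    "Accountability and Responsibility"}
opportunity = {"Customer Experience", "Revenue Generation & Cost Optimization",
               "Competitiveness", "Efficiency Improvements & High Productivity"}

# The ethics/opportunity cost overlap demonstrated in this SLR:
print(sorted(ethics & opportunity_cost))
# ['Accountability and Responsibility', 'Transparency and Explainability']

# The opportunity links are still empty here, awaiting the case research study:
print(opportunity & ethics, opportunity & opportunity_cost)  # set() set()
```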

6 Concluding Remarks

The ethical AI design process is as important as any of the above themes in ensuring a Balanced NLP outcome. The ethical AI design encompasses and overarches the three other themes, as its application is important in guaranteeing the ethicalness of the AI.

Finally, to constantly check whether the NLP continues to meet the required standards of the Balanced AI, an ethics-based auditing process is recommended. This is a governance mechanism used by businesses that design and deploy AI systems to control the influence or behaviour of the AI system (Mökander and Floridi 2021). It is an audit process that continuously evaluates and monitors system outputs and reports on the system's performance characteristics (Mökander and Floridi 2021).
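As a minimal sketch of such continuous auditing (the metrics and thresholds below are hypothetical assumptions for illustration, not those of Mökander and Floridi 2021), each monitoring window of system outputs could be evaluated against agreed standards and reported on:

```python
# Minimal sketch of an ethics-based audit step: each monitoring window of
# system outputs is evaluated against agreed thresholds and a report entry
# is produced. The metrics and thresholds are illustrative only.

THRESHOLDS = {"accuracy": 0.90, "group_disparity_max": 0.05}

def audit_batch(metrics, thresholds=THRESHOLDS):
    """Evaluate one monitoring window and report any breaches."""
    breaches = []
    if metrics["accuracy"] < thresholds["accuracy"]:
        breaches.append("accuracy below standard")
    if metrics["group_disparity"] > thresholds["group_disparity_max"]:
        breaches.append("fairness disparity above limit")
    return {"compliant": not breaches, "breaches": breaches}

report = audit_batch({"accuracy": 0.93, "group_disparity": 0.07})
print(report)
# {'compliant': False, 'breaches': ['fairness disparity above limit']}
```

Run continuously over system outputs, a step of this kind would feed the reporting on performance characteristics that the ethics-based auditing process requires.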

6.1 Future Research Areas

As indicated above, the conceptual framework in Fig. 2 will be validated in the case research study to determine the influence of ethical AI on business performance or business models in the Contact Centre. Other areas of study will be influenced by the outcome of that case research.