
1 Introduction

Recent research by the HCI community (amongst others) on the development of AI systems and applications has started to shed light on new challenges for the designers involved [2,3,4]. Those challenges call for new approaches, methods and tools for designing with AI. This work is mainly focused on consumer-facing products and services, such as medical decision support systems, autonomous driving services, spam filters, and movie or music recommenders [3, 5, 6]. In those domains the focus is on customized user scenarios. Adding insights from qualitative studies in the domain of industrial AI, namely use cases from B2B factory automation where optimization is the main driver, brings another angle and perspective to the scientific discourse, which is currently lacking in this area. This investigation teases out similarities and differences among the challenges and provides an overview of the current findings, followed by an overview and analysis of a selection of Human-Centered-AI principles [7,8,9], which are intended to offer new ways of dealing with AI and ML systems. Hence, this paper seeks to take a step towards the development of new methods and tools for design in the age of AI. Mapping problems to solutions makes an examination of the current status possible, showing which challenges can already be addressed and which issues still need to be investigated.

2 Overview of Design for AI Challenges

ML and AI based systems call for new methods and tools because of their complex (eco-)systems, which learn and evolve over time [10]. This in turn means that the interactions between AI based systems and their users change over time as the systems learn, potentially causing unwanted user experiences and difficulties in dealing with the product or service. Those interactions are above all “multimodal” and “non-visual” [11]. Invisible algorithms are a new “design material” [2]. Designers do not yet seem to have grasped the potential of ML and are not incorporating AI technology when generating ideas for new products and services. Additionally, the process of developing ML systems currently consists of “lengthy and costly development cycles” [5], is mainly driven by statistics and lacks the human(-centered) perspective. The behaviour of those systems is therefore not comparable to human logic, which makes it hard to investigate and foresee with the tools and methods used by UX designers to date. After all, the algorithms are only as good as the data they are trained with, meaning those systems make mistakes [e.g. 12]. In sum, it is necessary to rethink the current approach to developing those smart and intelligent agents.

2.1 Case Studies from B2B Factory Automation

The initial B2B factory automation use case deals with improving the factory planning process of a production site for industrial controls. Time series prediction with neural networks is the chosen ML approach. A qualitative study with the development team members, among them a UX designer, and other stakeholders involved was conducted and published [1]. From this research, 14 themes were derived which represent the pitfalls and challenges encountered during the development of the factory planning solution. Since then, further research was conducted: the findings from the initial case study were enriched with information from two additional internal projects. The domain and field of application are the same for all three use cases; they differ in the location of the production sites, the development process, the products produced and the technical solution, resulting in a total of 15 themes.
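To illustrate the general idea behind the chosen ML approach, the following minimal sketch shows time series prediction with a small neural network: past demand values in a sliding window serve as input features for predicting the next value. The synthetic data, window size and model settings are illustrative assumptions and do not reflect the actual project setup.

```python
# Minimal sketch (illustrative only): forecasting the next value of a demand
# time series with a small neural network using sliding-window regression.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
months = np.arange(120)
# Synthetic monthly demand: trend + yearly seasonality + noise
demand = 500 + 2 * months + 50 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, 120)

window = 12  # use the last 12 months to predict the next month
X = np.array([demand[i:i + window] for i in range(len(demand) - window)])
y = demand[window:]

# Hold out the last 12 months for evaluation
X_train, X_test = X[:-12], X[-12:]
y_train, y_test = y[:-12], y[-12:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

forecast = model.predict(X_test)
print(f"Mean absolute error on the hold-out year: {np.mean(np.abs(forecast - y_test)):.1f} pieces")
```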

2.2 Additional Findings

One of the two additional case studies had a new hire as a requirement in order to start the project. An additional factory planner was recruited, with a background in computer science. This was a strategic decision in order to combine domain knowledge with the technical skills needed to improve the current planning process. It meant that the end user of the final ML solution was the same person who created it, and who was able to gather the data, clean it, and train, test and validate the models. “It was a lot of work for a single person… Being a user and expert in one person was a very efficient setting… resulting in a very fast Proof of Concept (PoC).” (Computer scientist) However, when scaling the solution across the factory and other departments, the team faced very similar challenges to the initial use case. Other stakeholders and decision makers lacked the AI expertise of the system’s creator. Additionally, UX was not part of the development process and no user research was conducted, so the solution did not meet the needs of the other planners. This resulted in a lack of trust in the output of the system. As with the initial use case, a lot of effort and energy was consumed by a rigid corporate culture and people’s risk-averse mindset.

The third case study had a completely different setting. The development team, including the AI and ML expertise, was provided entirely by an external agency. The agency had primarily one contact person at the company, namely the coordinator of the planners. This person also had access to the database and served as the single point of contact for the external partner. Throughout the course of the project this person gained a lot of technical knowledge regarding the final solution. “My role and tasks changed from managing the planners, which I still have to do, to feeding the algorithm with data and providing the output to the planners.” (Project manager) Missing involvement of the planners during development resulted in rejection of the solution and made it a challenge to hand over the model handling process to an internal team. Additionally, the external partner used third-party software which, apart from data privacy issues that needed to be overcome, was very helpful for the overall speed of the project. However, the off-the-shelf solution is fixed to the provided models, making it a challenge to include a new product in the forecast for which no historical data has been captured (Table 1).

Table 1. Overview of the 14 themes from the initial B2B case study plus one derived from the additional use cases.

2.3 Related Work

The HCI and creative community has already started to investigate issues related to the challenges of designing for AI. Their findings originate from talking to UX practitioners, ranging from experts with experience in designing for AI through to students currently being trained in the tools and methods of classical UX or design [2, 3, 13], as well as from UX practitioners reflecting and reporting on their own experience while designing for AI. Some researchers relate their findings to a specific use case, whereas others do not take the domain into account. There is also a large number of reports and articles available from non-scientific sources; these are neglected in this overview but were important background for understanding the relevant challenges and become relevant again in the section on Human-Centered-AI principles. From this work, five topics were condensed and are introduced in more detail below. Additionally, these issues are compared to the findings from the B2B use cases from factory automation.

Lack of AI Expertise. One aspect that emerged from those studies is that most designers lack detailed knowledge about the technology. Designers for the most part understood the overarching concepts, but did not make distinctions such as between supervised and unsupervised learning [2, 13]. These issues resulted in the development of learning materials for designers. Interestingly, those with work experience in the field of AI based products and systems did not seem concerned about lacking AI and ML expertise [13]. When talking about AI and ML systems they referred to example products and services. Those examples were very limited in their diversity and represented simple technological approaches, such as spam filters and recommenders. This lack of knowledge about the capabilities of AI and ML might also be an obstacle when identifying and choosing the right technology to address a problem or user need, resulting in AI or ML based systems not being taken into account as a technical solution.

Analysing the use cases from the B2B factory automation domain revealed a very similar issue. When the study participants were asked to rate their own AI expertise, those without a computer science or data science background interestingly rated their expertise similarly to the domain experts. They compared themselves not to experts in the field, but to their own knowledge at the start of the project versus at the end. “Compared to the beginning of the project, I gained a lot of knowledge about the technology.” (Product manager) It became clear that a certain degree of knowledge, not necessarily at expert level, but a basic understanding of concepts such as regression, supervised/unsupervised learning, decision trees, random forests, etc., is very helpful for working together.
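As a hedged illustration of the kind of basic concepts mentioned here (and not of the models used in the case studies), the following sketch shows supervised learning with a random forest: a model is fitted on labelled examples and then predicts the target for unseen cases.

```python
# Illustrative sketch of supervised learning with a random forest.
# The data is synthetic and serves only to demonstrate the concept.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Labelled examples: features X with a known target y
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)            # learning from labelled examples
predictions = model.predict(X_test)    # predicting targets for unseen examples

print("MAE on unseen data:", round(mean_absolute_error(y_test, predictions), 2))
```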

Define the Starting Point. The classical design/Human-Centered-Design process starts with defining a starting point, or a problem statement that needs to be solved, mostly based on user research [14], and with making sure that the designed product or service answers a market need (human desirability [15]). Technological feasibility and business viability are not the main drivers. In contrast, many data science projects start with a given data set and from there define what can be solved by an algorithm [3]. Thus both processes operate with different purposes, and it is difficult to bring them together. The initial design idea might not take a data-driven algorithm into account, resulting in the integration of UX methods and tools late in the development process when the main decisions and direction are already set, causing usability problems and, in the worst case, rejection of the developed solution.

A very similar insight emerged from the B2B use cases in factory automation. UX was integrated fairly late in the process (or not at all) and was perceived as a negative aspect, since it caused more features and needs to be integrated into the final solution than initially considered. “UX really is about the right timing… if it comes too late in the process it cannot influence the direction anymore.” (Product manager) Another issue related to the definition of the starting point was the initiation of the project. In all three cases it was an initiative by management, leaving out the voices of other very important stakeholders and resulting in a lack of engagement by the users and other project members.

Missing Data Literacy/-Centricity. AI and ML depend mainly on statistical approaches and data sets and are therefore driven by data centricity [13, 16]. Typically this data answers a precise set of questions framed by the data scientists, closely related to the training and validation of the chosen technical solution. In contrast, the qualitative approach preferred by UX practitioners is a divergent research method. While this form of data enquiry is also very helpful for ML projects, e.g. when defining the starting point of a project or when supporting and enriching the statistical data set, it is equally helpful for designers to embrace the data-driven culture of AI and ML engineers. Drawing insights from quantitative data and understanding data sets created by telemetry and machine sensors are much-needed skills in a data-driven context such as AI and ML.

The above challenge was not directly mentioned in the B2B use cases. However, a related issue is the need for data visualization. The workflows of the different team members in one use case included different ways and tools to communicate and present their data (e.g. Excel, Tableau and SAP apps). The raw numbers generated by the neural network, representing the pieces to be produced, were not enough to validate either their accuracy or their reliability. “I need a graph that shows figures from the past and the forecast in order to examine whether I can trust the output of the neural network. A number in a cell in an Excel file means nothing to me.” (Data analyst) Similar to the different data approaches, there is not one visualization that fits all involved stakeholders. It could be the role of designers to negotiate and facilitate the different needs and find the best way to communicate the output of the algorithm.
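The kind of chart the data analyst asked for can be approximated by plotting historical figures and the forecast on one time axis. The values below are invented for illustration; only the form of the visualization matters.

```python
# Sketch: presenting historical demand and the model forecast in one chart
# so stakeholders can judge the plausibility of the output.
# All numbers are invented for illustration.
import matplotlib.pyplot as plt

history = [520, 540, 515, 580, 600, 590, 620, 640, 615, 660, 680, 670]  # past 12 months
forecast = [690, 700, 685, 720]                                         # next 4 months

months_hist = list(range(1, len(history) + 1))
months_fc = list(range(len(history) + 1, len(history) + len(forecast) + 1))

plt.figure(figsize=(8, 4))
plt.plot(months_hist, history, marker="o", label="Historical demand")
plt.plot(months_fc, forecast, marker="x", linestyle="--", label="Forecast")
plt.axvline(len(history) + 0.5, color="grey", linewidth=1)  # boundary between past and forecast
plt.xlabel("Month")
plt.ylabel("Pieces")
plt.legend()
plt.tight_layout()
plt.show()
```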

Struggle to Work with Data Scientists. Co-creation with data scientists is new territory for design and UX practitioners. A shared process model or guidance on methods and tools does not exist yet, partly due to the issues mentioned above regarding data literacy, as well as differing domain jargon and mindsets. So far, mainly experience from best practice applies [13]. This shows that working together on AI related use cases is currently the most promising way for designers to influence the UX of AI in a positive direction. UX practitioners who do not have direct access to data scientists in their daily work struggle even more with the design of AI and ML based systems. They lack feedback on technical feasibility and therefore often fall back on known design patterns and technical solutions, and thus do not create innovative new products and services. Additionally, the data collected and synthesized by designers can hardly be encoded on a one-to-one basis into a statistical model [17]. In order to analyse the data that is needed to address user needs, a conversation with a data scientist at an early stage of the development process is very helpful.

In the B2B factory automation use case where a UX practitioner was part of the team working closely with the data scientists, the iterative working mode was an essential factor for the success of this co-creation and a common denominator for both professions. “To keep the sprints and present results on a regular basis was key for the success of this project.” (ML engineer) The willingness of both sides to learn from and negotiate the input of the other expert is the basic requirement for a fruitful collaboration. In this case, some features derived from the user study of the B2B use case, which concerned post-processing steps of the algorithm’s output, were deliberately left out. The team agreed to focus on the pure output of the algorithm without any post-processing, which would bring the most value to the user, even if as a consequence the post-processing still needed to be done by the users. Both professions need to be open to these kinds of tradeoffs.

Difficult Prototyping. Prototyping is an essential part of the design process. It is used for idea generation in the early stages as well as for idea testing and validation later in the process. It is always used as a medium to communicate ideas to other stakeholders and users, and to evaluate whether a service or product is worth pursuing for implementation. Some user experiences for AI and ML applications can be prototyped, such as voice assistants, chatbots or recommenders. They all have in common that they are represented by an interface. The interaction with the user takes the form of a conversation or an action on a screen, which can be faked with the ‘Wizard-of-Oz’ method [18]. Still, this method has its disadvantages. Without a real data set and algorithmic model in the background, the designer cannot verify what kind of errors the system might produce, and it is therefore hard to gather the related feedback from users. This is even harder for AI and ML applications which do not have an interface. For any prototyping tool out there, a real data set and a real model are necessary. Those prerequisites are a barrier when it comes to the design of intelligent systems [2, 5].

In the B2B factory automation use cases, where a neural network produced a forecast of the pieces to be sold in the future, it did not make sense to ‘fake’ a model. The teams needed to develop a functional prototype with real data in order to validate its usefulness. This process already consumed a certain amount of resources and time. Asking management for the commitment to provide time and budget created a degree of pressure for a successful proof of concept. “In hindsight, I think we preferred to show the line charts of the products where the AI predictions performed really well. … We wanted to meet the expectations of management.” (Data scientist) Another aspect that became clear at a later stage in the process was that scaling from the initial functional prototype and a small data sample to a productive environment in the cloud and a larger data set caused trouble for the development teams (except for the use case which used the third-party solution). In one case it even resulted in reduced accuracy of the final model.

2.4 Comparing the Findings

Many of the findings from research scholars confirm the insights from analysing the B2B factory automation use cases. Four out of the five challenges above were confirmed; one was mentioned in a different context, but also noted. Not all 15 themes of the B2B case studies are mentioned in the related work, which may be due to the open character of the case studies, which did not focus primarily on UX topics. Issues such as company culture and the mindset of the different stakeholders were important topics in the B2B domain but were not mentioned in the related work. Those topics are more related to change management than to design inquiries. However, in order for AI to be successfully implemented in such a setting, the culture and mindset issues need to be addressed too.

3 Overview of Human-Centered-AI Principles

3.1 Purpose and Definition

Due to the challenges outlined above, the design and research community has already started to propose different approaches to solving the problems of designing for AI. Those approaches are united in a call for a greater focus on the human perspective in AI systems [7,8,9]. As a result, the creative community and other practitioners have created a number of different Human-Centered-AI principles [19, 20]. The human-centered perspective is perceived as central to the process for design practitioners tasked with AI development [21].

“Human-centered AI is about defining the goals of AI to meet human needs and to work within human environments. … Not only do we need a set of new tools and techniques to make AI work in practice, but we need to shift the process by which AI is even designed in the first place.” (Agarwal, Abhay and Regalado, Marcy [7])

3.2 Proposed Solutions

Companies such as Microsoft, Google and IBM, amongst others, have come up with Human-Centered-AI principles. These companies already use AI solutions in consumer-facing domains and have experience with implemented AI solutions. They have a vast amount of data accessible through their portfolio of applications, as well as the workforce and the know-how to develop their own algorithms. There are also individual design and UX practitioners working on AI projects who have published their thoughts and principles on the web. The different resources were examined and from this body of information the selection for the overview was made. The Human-Centered-AI principles from Microsoft, from Google and from two individuals, namely Abhay Agarwal (former Microsoft and currently lecturer at Stanford d.school) and Marcy Regalado (a Stanford d.school graduate), published as Lingua Franca, were selected. The scientific nature of the Microsoft work, the helpful worksheets of the Google guidebook and the great detail of the Lingua Franca principles are the reasons behind this selection, which together provides a great variety of principles.

3.3 Explanations and Analysis

The following section introduces the three chosen resources in more detail, followed by an analysis of where they differ and where they correspond, as well as of potential areas for further investigation.

Microsoft provides a very comprehensive collection drawn from 20 years of experience and from collecting AI design recommendations from various sources. The baseline consists of thoughts and ideas from Eric Horvitz’s “Principles of Mixed-Initiative User Interfaces” [22]. Those principles are then enriched with contemporary publications from the private sector, illustrating the most up-to-date concepts. Microsoft divides the whole set of principles into four steps, each of which requires certain aspects to be taken into consideration. In total their approach comprises 18 different principles. The main purpose of the principles is to support UX experts with guidance when designing the interaction of AI systems with users.

Initially it is crucial to “make clear what the system can do”. This is a way to guide users’ expectations of the ML system. Setting those expectations too high will result in an unsatisfying experience while using the smart solution. It is therefore important to manage those expectations right from the beginning. The second set of guidelines deals with aspects of the interaction with the system, for example the wording the system uses to communicate with the user. Misleading language might evoke social injustice or reinforce stereotypes. A third segment is devoted to the failure of the system, which shows the importance of this aspect when designing for ML. The algorithm is not perfect; it is trained on data generated by humans and can contain errors and mistakes. Solutions should therefore communicate how they derived their results (so-called Explainable AI, XAI [23]). It is necessary to provide the possibility of handing over control of the system to the human user, as well as being honest about the fact that the machine might be wrong, or at least unsure about its output. Finally, every ML application should be able to learn from its interaction with users and improve over time; that is, after all, the strength of ML technology. Therefore collecting feedback from users is a crucial step in the process in the long run. Furthermore, it is helpful to inform users about new releases and features of the system, in order to maintain trust in its reliability and performance.

The work from Microsoft is primarily focused on the user experience of the final solution. It does not provide advice for the initial steps of developing the algorithms, such as data preparation, problem definition or choice of technology. Furthermore, the guidelines are built around the idea of a graphical user interface as the means of representation between the AI system and the user. It is therefore questionable whether they also apply to non-visual products and services. The principles are provided as a set of cards, naming the principle on the front and giving an example of its use on the back.

Google provides the People + AI Guidebook. It consists of six chapters which follow the product development flow. Similar to Microsoft, Google provides an explanation of each chapter, as well as a related worksheet that supports the use of the principles. The principles are primarily based on the knowledge and input of internal project teams, enriched with academic research and expert opinions. The target audience is UX professionals as well as product managers who want to put more focus on the users when developing AI systems.

The aspects mentioned in the Google guidebook are very similar to the Microsoft ones; only the order and arrangement of the principles differ slightly. The sections about explainability, feedback and failure are present in both cases and represent a common ground of general AI challenges. Those aspects are widely adopted in other principles on Human-Centered-AI [19, 20]. However, Google’s guidebook also puts emphasis on the preparation and problem definition of AI systems as a relevant step of the overall design process. This part is called “User Needs + Defining Success”. It implies that the starting point of any Human-Centered-AI project should be a user need rather than a technology-first approach. It also provides a list of tasks for which AI automation might be useful and of cases where augmentation of the user is the better choice. The section also contains thoughts about the success criteria of the product or service. Another part is about “Data Collection + Evaluation”. Designers and artists can provide valuable input with qualitative research methods, adding meaning to purely statistical data points. This data then needs to be transformed into a format that can be used to train the algorithm. The section raises the question of whether the development team should use a given data set or establish their own. The latter might be needed when a given data set is biased towards a certain user group, for example towards a certain gender or age group.
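A first, very simple check for the kind of bias mentioned above is to inspect how sensitive attributes are distributed in the data set before training. The column names and values in this sketch are hypothetical.

```python
# Sketch: checking whether a data set is skewed towards a certain gender
# or age group before it is used for training. Data is hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender":    ["f", "m", "m", "m", "m", "f", "m", "m"],
    "age_group": ["18-29", "30-44", "30-44", "45-59", "30-44", "18-29", "30-44", "60+"],
    "label":     [1, 0, 1, 1, 0, 0, 1, 0],
})

# Share of each group in the data set
print(df["gender"].value_counts(normalize=True))
print(df["age_group"].value_counts(normalize=True))

# Label distribution per group: large differences can indicate bias
print(df.groupby("gender")["label"].mean())
```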

Thus, the Google principles are not limited to AI systems represented by an interface. They also take into consideration tasks and steps that are important when designing the AI or ML based system itself. The detailed worksheets per section make it easy to follow the development path from the beginning of a project to the final solution. The explanations and examples use a small amount of data science jargon which one needs to know in order to understand the relevant information.

Lingua Franca - A Design Language for Human-Centered AI - is currently the most comprehensive selection of principles and guidelines for designing AI systems. It is published by Abhay Agarwal and Marcy Regalado. In addition to their handbook, which introduces seven different aspects of design intervention, they also provide eight principles that every AI system should follow. They have started to collect example elements and patterns that represent concrete solutions to the stated principles. They do not mention a specific target audience or user group for their principles, as their main goal is to strengthen the human perspective in AI development.

Their approach is very similar to the general design process, starting with the initial problem selection and definition, followed by observing human behaviour. However, the section about data and sensemaking also puts emphasis on statistical methods and knowledge, attempting to provide a basic grounding in those approaches for the novice, since these skills cannot be taken for granted amongst designers and artists. Another part of the handbook deals with the choice of technologies. This is not AI specific, but in order to choose the right technology in an era of ML and AI, a certain degree of literacy about the possibilities and features of the different concepts is necessary. Sometimes even the decision not to use ML at all is an important insight. Prototyping and a section about ethics and responsible design are the final steps.

Lingua Franca’s collection of handbook, guidelines and design patterns applies to a broad variety of AI solutions and represents a huge source of information. However, the amount of information can be overwhelming. It is not easy to navigate the online catalogue, which contains a lot of cross-references and links to articles and webpages. Additional hands-on worksheets are missing, as is a section about the implementation of AI based systems and their evaluation and feedback structures.

3.4 Conclusion and Missing Pieces

Most Human-Centered-AI principles share a set of very similar aspects: problem definition, need finding or data collection in general, explainability, trust, feedback and how to handle failure are commonly regarded as important. This is a good starting point for further research and for the development of a set of principles that can guide artists and designers along the development process of AI systems. However, there is still a need for new tools and methods that work alongside the guidelines. Taking into account the specific context in which the guidelines will be used is missing in most principles and could be the missing link to make them work and add value in practice. The biggest value may be provided by a collection of different sets of principles that enables flexible use.

4 Mapping Challenges and Principles

Comparing the five challenges with the introduced principles is this paper’s attempt to analyse which problems can already be solved and which need further investigation. This is obviously not the final list of challenges designers and UX practitioners will face, but it should at least start to shift the conversation from pure problem spaces towards solution spaces, providing concrete methods and tools for the design of AI and thereby opening up space for new insights on the missing pieces.

4.1 Which Design Challenges Are Addressed?

Define the Starting Point. Starting with the right problem and defining it well is the most important aspect of all principles (included in the Google guidebook and Lingua Franca). This is where the team makes many decisions that are hard to change later in the development of the AI system, and it is crucial for designers to be part of this initial step. Too often companies try to implement AI solutions where they are not really needed, or are even inadequate. Google provides a list of recommendations on when using AI is useful and when a classic heuristic-based solution is preferable [9]. Designers can support problem selection and definition with qualitative user research methods to spot a user-driven need and to make an informed decision on whether to use AI as a solution at all. The biggest challenge here is researching technology that is not yet in use, as there is often no obvious existing behaviour to look at or existing preferences to discuss and explore. One helpful approach is not to talk about AI in the research, but instead to talk about assistance [24]. Research participants may struggle to distinguish between prominent media-driven perceptions of AI, fueled by fear and negative effects, and the reality of their own behaviour. Helpful tools are cultural probes [25] and anything that helps to understand the current workflow of the research participants, such as workflow process mapping [26]. For example, to improve the demand planning process in a factory, it is crucial to talk to the planners, understand how they currently plan, which tools they use and which other stakeholders are involved, in order to understand the whole ecosystem.

Missing Data Literacy/-Centricity. Data scientists are used to working with data, mostly quantitative, statistical data, whereas designers are used to working with qualitative data. In the era of AI it is important to be data literate in both worlds (included in the Google guidebook and Lingua Franca). Understanding statistical data sets and being able to gain insights from them is new to most designers, but adds value to idea generation and to working in the context of AI. Enriching those data sets with qualitative insights is the best strategy towards creating human-centered design for AI. It also helps to detect whether additional data is needed. In the age of AI, translating user needs into a format which can be used to train an algorithm is a crucial step; “matching user needs with data needs” is part of the Google material. Sometimes this also means neglecting findings from user research because they cannot be translated into a format that can be used to train a model.

Prototyping. Developing ML and AI systems is a lengthy and costly process. Therefore, besides working iteratively, prototyping is a crucial and helpful step (included in Lingua Franca). Unfortunately there are hardly any tools yet that can quickly prototype the training and evaluation of an AI algorithm without actually developing and training it. ‘Wizard of Oz’ as a method became very popular for prototyping voice assistants and chatbots, but does not help with AI systems that are supposed to predict and forecast user behaviour. Starting with a small sample of a given data set, then training and evaluating an algorithm on this data to judge whether the approach is feasible, is the most promising approach here. Still, this requires a degree of computer science skill and quite some time. None of the principles gives better advice in this area. Finding smart solutions to this problem would enable designers and other professions to speed up the development of AI systems that benefit users.
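One way to make the “small sample first” approach concrete is to train a simple model on a slice of the data and compare its error against a naive baseline, such as repeating the last observed value; if the model cannot beat the baseline, investing in a full AI solution is questionable. The data and parameters in this sketch are assumptions for illustration.

```python
# Sketch: quick feasibility check on a small data sample.
# Does a simple model beat a naive "repeat the last value" baseline?
# Data and parameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
series = 100 + np.cumsum(rng.normal(0, 5, 60))  # small synthetic sample of 60 points

window = 6
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
split = len(X) - 10
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

model = LinearRegression().fit(X_train, y_train)
model_mae = np.mean(np.abs(model.predict(X_test) - y_test))

# Naive baseline: predict that the next value equals the last value in the window
baseline_mae = np.mean(np.abs(X_test[:, -1] - y_test))

print(f"Model MAE: {model_mae:.2f}, baseline MAE: {baseline_mae:.2f}")
print("Worth pursuing" if model_mae < baseline_mae else "Reconsider the approach")
```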

4.2 Which Aspects Are Still Missing?

None of the Human-Centered-AI principles mentions methods or ways to successfully collaborate with data scientists. This might be partly due to the fact that an improvement in data literacy might also positively influence co-creation. However, it might also be that this issue is not really perceived as crucial within the creative community; only when working in the field of AI and ML applications might it become a noticeable factor, as mentioned by UX practitioners working in the area. Taking one of the B2B use cases as an example, where the user and the technical expert were the same person, this combination sped up the whole development process but ultimately failed at implementation due to a lack of stakeholder management and user engagement. Nevertheless, it demonstrates the importance of bringing together different views and skills in an AI driven project. The computer scientist gained an advantage from the domain knowledge and vice versa. The same applies when designers and data scientists team up. Methods and tools should not exist primarily for designers to co-create with data scientists; likewise, open-minded, data-driven people can also benefit from the designers’ point of view.

Lack of AI expertise is not included in any of the Human-Centered-AI principles. Still, it is a real challenge for designers in the age of AI, creating a barrier to using the full potential of the technology, which needs to be overcome. In related work, UX designers working on AI projects report using familiar examples of AI and ML products as references in order to explain AI features [13]. This is their workaround for the missing AI expertise. Those examples are very limited at the moment. A wide collection of AI and ML example case studies, alongside a variety of training and educational material, would therefore add great value for the creative community.

Although data literacy is mentioned, the guidance given is very generic. Another promising approach to equipping designers and UX practitioners with the skill set needed for data-centric practices is teaching them (basic) statistical data processing techniques [16]. In the respective study, two approaches were tested with master’s degree students. Group A received a data set from university records related to their master’s thesis and was asked to use this data to come up with an idea for a new product or service. Group B was introduced to some basic data collection tools, such as web crawlers [27], and was taught how to use this additional kind of data for their projects. Both groups were taught how to clean and pre-process data. All participants answered a questionnaire at the end of the workshops and reported that the additional data added value to their overall design process.
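The kind of cleaning and pre-processing taught to both groups can be as simple as the following pandas sketch; the columns, values and rules are hypothetical examples of typical steps.

```python
# Sketch of basic data cleaning and pre-processing steps of the kind
# taught in such workshops. Column names, values and rules are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "course": ["Design", "design ", "Engineering", None, "Design"],
    "grade":  [1.7, 2.3, None, 1.0, 5.5],
    "year":   [2019, 2019, 2020, 2020, 2020],
})

# 1. Drop rows with missing values in key columns
df = df.dropna(subset=["course", "grade"])

# 2. Normalize inconsistent text entries
df["course"] = df["course"].str.strip().str.capitalize()

# 3. Remove implausible values (here: grades outside a valid 1.0-4.0 range)
df = df[df["grade"].between(1.0, 4.0)]

print(df)
```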

Likewise, prototyping is discussed very generically, partly because there are as yet hardly any tools for prototyping AI available. A promising source for addressing this issue is the ‘Wekinator’ by Rebecca Fiebrink [28, 29]. It is an open source tool which supports artists and musicians in their creative work and features supervised machine learning algorithms. The artist only needs to provide input data and the corresponding output; the model is then trained on those data points. No coding skills are needed. Similar to this is the ‘Delft AI toolkit’ by Philip van Allen [11, 30], which is targeted more towards prototyping physical objects. It also provides models for different applications, such as speech-to-text, and only input data for training the models is needed. Another trend is so-called ‘democratizing AI’, meaning the attempt to make AI and ML technology available to non-experts [see e.g. 31]. The downside of all the mentioned tools and applications is that the algorithms are fixed and limited to those that come with the package. Additionally, they are not transparent to the artists, designers and other people who use the tools. This is not necessarily a problem for prototyping, but when it comes to implementing the solution in a real-world scenario the artist or designer again lacks the technical know-how to develop their concept at scale.

5 Conclusion

Neither the overview of challenges nor the overview of principles claims to be a complete list. They represent a selection drawn from a large number of articles and publications, chosen to provide a summary of relevant topics. Due to the lack of publications focused on industrial AI (B2B), this paper used research from consumer-facing applications as a point of comparison with an industrial setting. The comparison showed that some challenges are similar and others slightly different, and it aims to help design and UX practitioners quickly gain an overview of the current state of design inquiries regarding AI and ML development. Instead of adding new issues to the list, this paper connects the given challenges to proposed solutions, shifting the current discussion from a primary focus on problem spaces towards a focus on solution spaces.

The Human-Centered-AI principles provide a resource for designing AI and ML based systems on a very general level. They only partly answer the call for new methods and tools for designing intelligent systems in the age of AI. The topics they cover fit the challenges that need to be addressed well. However, except for the Google ‘People + AI Guidebook’, they lack actionable worksheets, concrete examples and detail to support the general descriptions, making it hard to use them on their own as a set of new tools for design and UX practitioners. This implies that the research work needs to be continued.

Training material for designers and UX practitioners, as proposed by research scholars [16, 32], is a promising supplementary measure to support the creative community in dealing with the challenges imposed by AI and ML. It could work alongside Human-Centered-AI principles, providing more detail on certain topics such as data (pre-)processing. Moreover, collecting case studies from AI and ML based projects would provide additional value. Those example use cases could serve as a resource for addressing the lack of AI expertise, for example by illustrating how to choose the right model and by introducing different development approaches as well as other relevant aspects. The examples could be created to specifically target the needs and input demanded by design and UX practitioners.