1 Introduction

Rapidly changing market demands, increasing competitive pressure, an increasing demand for fair working conditions, and competition for qualified personnel require innovative industries. One of the key concepts to address these challenges is the digitized and connected factory, where data-based processes are optimised and automated to increase the effectiveness and efficiency of production and improve working conditions [5, 22, 23].

In addition to machine data, smart sensors in the Internet of Things can collect data from human workers, such as a worker’s performance, their actions in the factory, movement data, or possibly even their well-being through camera systems. The collection and use of this personal data can have both advantages and disadvantages: on the one hand, the data can be used to make work safer, more ergonomic, or more comfortable, for example by tailoring tasks to the individual’s abilities or automatically adjusting workstations to the individual’s height. On the other hand, there is a risk of workers being exploited, for example by identifying and cancelling break times, or of sensitive personal data being shared with third parties. This involves collecting and processing not only machine data but also real-time data about an employee’s activities or movements, such as work speed to assess performance or body height to adjust machines ergonomically. The collection and use of personal data is therefore a sensitive issue that can invade employees’ privacy and be perceived by them as a risk [36].

Although employee acceptance of data-based technologies is an essential prerequisite for their successful implementation, current research often takes a techno-centric perspective and neglects the employees’ perspective [27, 28]. Therefore, this study contributes to a better understanding of the acceptance, perception, and willingness to share personal data in the smart factory. The article concludes with a research agenda and actionable guidelines on how to incorporate the social perspective into the design of monitoring systems in smart factories.

1.1 Human Perspective is Key to Industry 4.0 Adoption

Recent technological innovations have led to significant changes in many areas, including production, with the emergence of Industry 4.0 and Industry 5.0 [15, 22]. These advances have the potential to improve efficiency and productivity, but their success depends on acceptance and adoption by organizations and individuals [48]. In this context, acceptance is defined as a positive adoption of an idea in the sense of an active willingness to use it, not merely reactive acquiescence [12]. So far, a rather techno-driven approach has been taken that neglects the human perspective, even though humans are precisely the counterpart of cyber-physical technology. However, one key factor influencing acceptance is the consideration of the human perspective. Considering the human perspective can lead to more user-friendly and easier-to-use technology [33]. In addition, by listening to people and meeting their needs, trust can be built between the user and the technology. This is particularly important in the context of Industry 4.0 and Industry 5.0, where the adoption of new technologies may require significant changes to existing processes and practices. This approach recognizes that technology should not be imposed on people, but developed in partnership with them [47].

It is more important than ever to understand attitudes and personal concerns about the use of personal data and perceptions of privacy, especially in the connected digital industry, where using employees’ data can help optimise the work process. What exactly is meant by data privacy in the context of the smart factory is clarified as follows.

1.2 Information Privacy and Data Sensitivity in the Smart Factory

First, it is important to have a general idea of the multifaceted and complex construct of data privacy. There are numerous approaches to defining privacy, and they vary depending on the discipline. Privacy was described as the right to be “undisturbed” in the late 19th century, and roughly a century later defined as a state of limited access or isolation [37, 46]. Altman referred to privacy as a kind of control over access to one’s self and the disclosure of information [2]. With digitization, privacy is no longer purely physical; a person’s data and information become relevant, so the concept has been transferred to the online context and called information privacy.

Information Privacy. According to Bhave [4], information privacy is about perceived control over the collection, storage, use, disclosure and dissemination of employee information. In other words, it is about control over the information that could be made available to others. The willingness to share data depends on the context and the audience with which the information is shared [31]. Furthermore, because data in the smart factory is stored digitally and is therefore persistent, easily replicated, scaled, and may be shared with third parties, employees are concerned about how it is used, which may have a negative impact on trust, productivity, and efficiency.

Smart factory research has mainly taken a technology-centric perspective. Hence, the human perspective in general, and attitudes to privacy and the willingness to share data in particular, are insufficiently understood. The term Digital Shadow has been established to describe the aggregation, linking, and abstraction of data about physical objects, including humans [5, 27]. It integrates data directly related to the human production environment, such as behavioural patterns, data on work patterns and performance, physiological and cognitive parameters, as well as socio-demographic data [27]. Sensor-based data collection has been explored with the aim of increasing safety at work, improving employee health and satisfaction, or identifying inefficiencies caused by over- or underworking. Wearables, such as watches or sensors integrated into clothing, can collect physiological data such as heart rate or blood pressure to detect stress, physical inactivity, fatigue, or physically demanding work [28].

Perceived Data Sensitivity. Data sensitivity refers to the level of risk associated with disclosing certain types of information. This is particularly relevant because, according to several studies, perceived sensitivity correlates with the willingness to share data. The higher the perceived sensitivity of data, the lower the willingness to share data [30, 39, 41].

In the context of the adoption of fitness wearables, the perceived sensitivity and importance of different types of personal data was empirically investigated by Lidynia [25]. Both the perceived sensitivity and the perceived importance of tracking varied across the available parameters. GPS data, weight, and sleep analysis were perceived as more sensitive, whereas UV radiation, number of stairs, or hours spent standing were perceived as less sensitive. Heart rate, steps and GPS position were perceived as particularly important, whereas current UV exposure, outdoor temperature and blood glucose levels were not perceived as important in the fitness context.

For workplace monitoring, Tolsdorf et al. investigated the perceived sensitivity of certain types of data and employees’ willingness to disclose their data [41]. The study found that the perceived sensitivity of data for workplace monitoring differs from other contexts. In addition to the employee perspective on workplace monitoring, the expectations and concerns of other stakeholders are also relevant, but have not been sufficiently explored. Pütz addressed this gap in a study with over 700 managers [34] and found that most managers expect improved well-being and workplace design. Still, privacy concerns are seen as a key barrier to the adoption of monitoring systems.

In summary, using personal data in the smart factory offers many opportunities for both companies and employees. However, this potential can only be exploited if the employees’ willingness to disclose personal information and the social acceptance of these approaches are well understood. This study contributes to this understanding by evaluating the willingness to disclose various types of information in two different contexts: first, using a cobot (collaborative robot) in human-robot collaboration, and second, using a chatbot. Both contexts have in common that they build on an autonomous agent of future smart factories. Yet, the first represents a human-technology interface on the factory shopfloor (“Blue Collar” work), while the latter is envisioned to support company managers in the form of conversational agents (“White Collar” work). In both contexts, the use of automated agents will likely gain relevance in the near future, and both approaches may profit from using personal data for improved adaptation and personalisation.

1.3 Empirical Approach and Logic of Procedure

To gain insights into the willingness to disclose personal data in the context of smart factories, we chose a two-method approach. Figure 1 shows an overview of the research process.

Fig. 1. Overview of the research process showing the qualitative and quantitative measures to address the research questions.

First, we used focus groups to gather qualitative insights into people’s motives and barriers, as well as a ranking of the perceived sensitivity of different types of personal data. The following guiding questions facilitated the group discussions and elicited the participants’ different opinions and viewpoints:

  • What are the benefits or motives of sharing personal data in a smart working scenario?

  • What are the barriers you associate with collecting personal data in the smart factory?

  • What personal data are you willing to share and what data do you consider too sensitive to share?

Second, building on the results of the focus groups, we included all relevant factors in an online survey to empirically model people’s attitudes. We developed two scenarios to measure the preconditions for and overall acceptance of a cobot and a chatbot that build on the use of personal data. The research questions of the study were:

  • What data is considered sensitive in the context of the smart factory?

  • What data is associated with being of interest to the company from an employee’s perspective?

  • Which acceptance-relevant factors have an impact on the intention to use smart technologies requiring personal data, exemplified by a cobot and a chatbot scenario?

The following section is structured according to our study process. First, we present the procedure and results of the pre-study (focus groups). Second, we illustrate the design of the main study (scenario-based online survey).

2 Qualitative Focus Groups (Pre-study)

The aim of the focus groups was to identify and discuss the potentials, motives, and barriers of younger and older adults for the use of personal data in smart factories. The advantage of the qualitative approach is the possibility of gaining new insights in an exploratory way by generating opinions and ideas that serve as a basis for an extended acceptance analysis. For this purpose, two focus groups were conducted as part of a bachelor thesis at RWTH Aachen University in winter 2021/22. Participation was voluntary and not compensated. The sample, procedure, and results are briefly outlined below.

2.1 Sample

Participants were recruited from the authors’ social circles. The inclusion of different age groups allowed for the consideration of age effects regarding attitudes towards privacy and the general acceptance of data collection and use in Industry 4.0 and 5.0. The first focus group included four younger participants (two male, two female) aged between 21 and 29 years (\(M = 23.3\), \(SD = 3.9\)), belonging to the generation of “Digital Natives” [29] who have grown up with digital technologies. The second focus group was conducted with three participants (two male, one female) aged between 58 and 61 years (\(M = 59.7\), \(SD = 1.5\)), belonging to the so-called “Silver Surfers” or “Best Agers” generation [3, 8]. 75.0% of younger respondents reported having experience of industrial production through a mechanical job or work experience in manufacturing. 66.6% of the older participants stated that they had experience of industrial manufacturing as an engineer or as a former workshop mechanic.

2.2 Procedure

First, participants were informed about data protection and the voluntary nature of the focus groups. As an introduction to the topic of general data collection, participants were encouraged to brainstorm about data collected by technologies such as apps in everyday life. As a next step, participants were introduced to the idea of a smart factory and brainstormed about data that is already needed and collected by technologies and that could be of interest in future smart factory scenarios. A short video clip presented a possible human-robot interaction in production. Subsequently, possible personal data of interest for such a collaboration were collected and divided into possible advantages and disadvantages. Finally, the collected data types were ranked by usefulness and sensitivity. After the discussion, participants completed a short paper-and-pencil questionnaire covering demographics, industry experience, and technical affinity. The focus groups were audio recorded and transcribed verbatim. Conventional content analysis was used to identify key motivators of and barriers to the willingness to share personal data in the context of the smart factory.

2.3 Main Results

The analysis focused on the identification of motives and barriers, as well as a ranking of the use of personal data in Industry 4.0 from the perspective of employees. Figure 2 gives an overview of the main issues mentioned.

Fig. 2. The main motives, barriers and data ranking by sensitivity identified in the pre-study (n = 7).

What are the Benefits or Motives of Sharing Personal Data in a Smart Working Scenario? The motives associated with data sharing in the context of smart factories could be categorised into five themes. The specific motivation to provide personal data is based on expected health-related, psychological, work-related, assessment-related, and social benefits.

What are the Barriers You Associate with Capturing Personal Data in the Smart Factory? Barriers to sharing personal data were identified in five themes: uncertainty about data collection, psychological reasons, work-related reasons, negative consequences for oneself, and lack of value of data collection.

Which Personal Data Are You Willing to Disclose and Which Data Seem Too Sensitive for Disclosure? From the brainstorming results, participants were asked to rank the collected data types by sensitivity. Participants ranked the data differently, especially according to their generation. Younger participants were willing to share data about personal feedback, material consumption, and attendance and absence times. Older participants categorised information about productivity, ergonomics, and stress as disclosable. Conversational data, medical records, and sexual data were classified as sensitive by younger participants. Older participants classified sexual data, the menstrual cycle, and marital status as very sensitive.

3 Quantitative Scenario-Based Survey (Main Study)

Based on the results from above, we developed a scenario-based online survey. Our goal was to empirically model the willingness to disclose personal information towards autonomous agents in the smart factory and how that relates to the overall acceptance of a cobot or chatbot scenario.

3.1 Methodological Approach

To understand what factors influence attitudes towards the use of smart technologies requiring personal data, we developed two scenarios and assessed their evaluations using a survey with a within-subject design. Figure 3 illustrates the study’s design. First, participants were familiarised with the context of the smart factory and the need to collect worker data for successful human-robot interaction in a digitally networked work environment.

The main part consisted of a cobot and a chatbot scenario (in randomized order), and each scenario outlined the general capabilities of either the cobot or the chatbot. In the scenarios, we asked the participants to imagine that they were working in a large industrial company with more than 32,000 employees. We further explained that the systems aim to increase the efficiency of work processes and that they require the collection of demographic, work-related, and health-related data.

Fig. 3. Process of scenario-based online survey. Order of cobot and chatbot scenario is randomly assigned (within-subject design).

We assessed participants’ acceptance of the systems with ten items covering the dimensions overall intention to use, performance expectancy, hedonic motivation, and effort expectancy. The dimensions and items were derived from the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) [11, 44]. Trust in automation was added as a factor with four items, as it is likely to be related to acceptance [24]. We also asked participants to evaluate the scenarios on a semantic differential consisting of 20 opposing word pairs adapted from Hassenzahl’s AttrakDiff [18] (e.g., “The possibility of using a cobot and the AI behind it that can access all the necessary personal data, I would find ...” important—unimportant, controllable—uncontrollable, etc.).
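For illustration only, the sketch below shows how such multi-item constructs are commonly scored by averaging the Likert items belonging to each dimension. The item names, the dimension-to-item mapping, and the DataFrame layout are assumptions, not the study’s actual instrument or analysis code.

```python
# Minimal scoring sketch (assumed): average the Likert items of each dimension
# into one composite score per participant. Item names are hypothetical placeholders.
import pandas as pd

SCALES = {
    "intention_to_use":       ["itu_1", "itu_2"],
    "performance_expectancy": ["pe_1", "pe_2", "pe_3"],
    "hedonic_motivation":     ["hm_1", "hm_2"],
    "effort_expectancy":      ["ee_1", "ee_2", "ee_3"],
    "trust_in_automation":    ["trust_1", "trust_2", "trust_3", "trust_4"],
}

def score_scales(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one column per acceptance dimension, averaged over its items."""
    return pd.DataFrame(
        {dim: responses[items].mean(axis=1) for dim, items in SCALES.items()}
    )
```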

In both scenarios, a smooth, personalised workflow required the disclosure of different types of personal data, such as name, age, height, date of birth, error rate, and speed of workflows in the cobot scenario, and name, age, working days, attendance records, sick days, holidays, and overtime worked in the chatbot scenario. Prior to the demographic section at the end, participants were asked to rate seven data categories (work-related, demographic, movement profile, health-related, biometric, expressive behaviour, and personal data) regarding their perceived sensitivity and their potential interest to the company on five-point Likert items.

3.2 Sample Description and Statistical Analysis

A total of 152 surveys were fully completed by 39% male and 61% female participants aged between 18 and 81 (\(M = 31.5\), \(SD = 14.8\)). Participants’ current occupations were spread across different fields (e.g., health (36%), IT (15%), engineering (29%)). Due to the small sample size, non-parametric tests were calculated: Spearman’s rank correlation \(r_s\), Friedman’s test to analyse differences between related measurements, and Cohen’s d to determine effect sizes. The significance level was set at \(\alpha = .05\).
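As a hedged sketch, the named non-parametric procedures could be reproduced with SciPy along the following lines; the synthetic data, variable names, and the specific pairing of measures are placeholders and not the study’s data or analysis script.

```python
# Hedged sketch (not the authors' code) of the named non-parametric procedures,
# run on synthetic placeholder data of the same size (n = 152).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 152
intention_cobot = rng.integers(1, 7, size=n).astype(float)     # hypothetical 6-point ratings
intention_chatbot = rng.integers(1, 7, size=n).astype(float)
performance_expectancy = rng.integers(1, 7, size=n).astype(float)

# Spearman's rank correlation between an acceptance factor and intention to use
r_s, p_rs = stats.spearmanr(performance_expectancy, intention_cobot)

# Friedman test for differences between several related (within-subject) ratings
chi2, p_fr = stats.friedmanchisquare(intention_cobot, intention_chatbot,
                                     performance_expectancy)

# Cohen's d as an effect-size estimate for a paired contrast between the scenarios
diff = intention_cobot - intention_chatbot
d = diff.mean() / diff.std(ddof=1)

print(f"r_s = {r_s:.2f} (p = {p_rs:.3f}), chi2 = {chi2:.2f} (p = {p_fr:.3f}), d = {d:.2f}")
```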

4 Results

The results are presented guided by the research questions.

Specification of Data Willing to Share with the Company. Seven categories were ranked according to their sensitivity, as can be seen in Fig. 4. With an average rating above three (min. 1, max. 5), work-related data (\(M = 3.51\), \(SD = 0.88\)) and demographic data (\(M = 3.35\), \(SD = 1.08\)) were evaluated positively and considered disclosable. Data on the movement profile turned out to be just below the midpoint (\(M = 2.71\), \(SD = 1.11\)). The four remaining categories were evaluated as not disclosable: biometric data (\(M = 1.84\), \(SD = 1.09\)), health-related data (\(M = 1.84\), \(SD = 0.88\)), expressive behaviour (\(M = 1.68\), \(SD = 1.14\)), and personal data (\(M = 1.37\), \(SD = 0.67\)).

Specification of Data Assumed to Be Important for the Company. Data arising directly from employment, such as work-related data (\(M = 4.26\), \(SD = 0.84\)) and the movement profile (\(M = 3.97\), \(SD = 1.07\)), were seen as important for the company. Demographic data (\(M = 3.39\), \(SD = 1.12\)) and expressive behaviour (\(M = 3.02\), \(SD = 1.38\)) were also rated as important for the company. The three remaining categories were, on average, assessed as not important for the company: personal data (\(M = 2.20\), \(SD = 1.16\)), biometric data (\(M = 2.58\), \(SD = 1.46\)), and health-related data (\(M = 2.92\), \(SD = 1.18\)).

Fig. 4. Willingness to share data with company and perceived importance of data to company with 1 = not at all willing to share to 5 = totally willing to share (n = 152).

Acceptance-Relevant Factors for the Intention to Use a Cobot or Chatbot in an Interconnected Digital Working Scenario. As Table 1 shows, acceptance ratings were on average high and relatively similar in both scenarios.

Regarding the acceptance of the two scenarios, a small but significant difference was found between the cobot and the chatbot scenario (\(U = 1951.5\), \(Z = -3.442\), \(p < .001\)). The relationships between the acceptance-relevant factors and the intention to use such smart factory concepts (cobot vs. chatbot) differed slightly: the use of a cobot was most strongly related to hedonic motivation, followed by performance expectancy, trust, and effort expectancy. The intention to use a chatbot, in contrast, was most strongly related to performance expectancy, followed by hedonic motivation, trust, and effort expectancy (see Fig. 5). Figure 6 shows the acceptance ratings of the cobot and chatbot scenarios.

Table 1. Descriptive statistics and difference in mean value of TAM, UTAUT2 factors and trust regarding the evaluated cobot and chatbot scenario (min. = 1, max. = 6).
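As an illustration only, the scenario comparison and the factor–intention relationships reported above could be computed along the following lines; the DataFrame layout, column names, and helper functions are hypothetical and not taken from the study.

```python
# Hedged sketch (assumed, not the authors' analysis code): scenario comparison and
# factor-intention correlations. Column names such as "intention_to_use" are placeholders.
import pandas as pd
from scipy import stats

def compare_scenarios(cobot_itu: pd.Series, chatbot_itu: pd.Series):
    """Mann-Whitney U test between the intention-to-use ratings of both scenarios."""
    u, p = stats.mannwhitneyu(cobot_itu, chatbot_itu, alternative="two-sided")
    return u, p

def factor_correlations(scores: pd.DataFrame, factors: list[str]) -> pd.Series:
    """Spearman rank correlation of each acceptance factor with intention to use."""
    return pd.Series(
        {f: stats.spearmanr(scores[f], scores["intention_to_use"])[0] for f in factors}
    ).sort_values(ascending=False)
```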

In general, the chatbot scenario was associated with more positive attributes than the cobot scenario. For the chatbot, attributes such as fair, transparent, intuitive, welcome, exciting to use, and generally positive stood out. The most positive attributes for the cobot were that it is trustworthy, fast, and accelerating. However, the cobot was also associated with more negative attributions such as inhibiting, disruptive, disconnected, and negative. Both technologies were rated as neither bad nor good, neither discouraging nor motivating, and as more difficult to use than easy, but still more controllable than uncontrollable.

Fig. 5. Correlations between the queried acceptance factors and intention to use the chatbot or cobot (\(n = 152\)).

Fig. 6. Results of acceptance assessment of cobot and chatbot scenario with semantic differential (\(n = 152\)).

5 Discussion

Data is driving the digital transformation of production, bringing huge changes to shop floor operations, production planning and management. In addition to increasing the efficiency and effectiveness of existing production systems, new applications are emerging, such as interactive agents in the form of collaborative robots (to support shop floor operations) or chatbots (to support management activities). However, as soon as workers are involved in a socio-technical production system, the collection and use of data can affect the right to self-determination and perceived autonomy of individuals. In particular, when systems collect and build on workers’ personal data, the expected benefits need to be carefully weighed against individuals’ motives, barriers and perceptions of privacy.

As these trade-offs are currently not sufficiently understood, this study investigated the motives, barriers, and acceptance of technologies that use personal data in a manufacturing context. We used a two-method approach: a qualitative pre-study identified the motives and barriers for personal data sharing in the context of smart factories. In a next step, a scenario-based survey quantified acceptance and its predictors, as well as the willingness to disclose personal data, exemplified by a cobot and a chatbot scenario.

Motives of Data Sharing. The motives identified for sharing data act as drivers. Health-related reasons were highlighted, such as the possibility of alerting a worker, preventive health care, and quick first-aid measures enabled by wearables. These results are in line with the state of research showing that the more a technology improves the worker’s health, the higher the acceptance of sharing personal data [20, 40]. The same was found for psychological motives, such as reducing workers’ stress, uncertainty, and fear. Even more positive associations were found for work-related aspects. This study could also confirm that the willingness to disclose data goes hand in hand with the personal benefits that arise, whether work-related, assessment-related, or personal: knowing that one’s own productivity will increase [16], being relieved from physically demanding work [42], or being able to compare one’s own performance with colleagues, for fun or for real [35], are some examples. It can be stated that sharing personal data is linked to many motives that increase the willingness to share data. Further research should examine the identified motives in more detail and provide incentives that address privacy concerns.

Barriers of Data Sharing. Barriers to data sharing were associated with uncertainties about data collection and privacy concerns. Workers may be hesitant to share their personal data due to concerns about how it will be used and who will have access to it. Privacy concerns have been studied in different domains, e.g., in the digital health care context with mHealth apps [45] or in the context of ambient assisted living [38]. In either case, the intention to share data with the offered technology can be supported by guaranteeing that data is collected and stored securely and by transparently informing users about the stages the data passes through and with which providers it is shared. Moreover, the decision to share data, and with whom, should be adjustable. In the context of the Internet of Production (IoP) with cyber-physical workplaces, the possibility of voluntarily sharing data with whomever workers want could turn out to be challenging. However, transparency of data sharing needs to be considered a prerequisite for positive acceptance [21]. Further concerns were mentioned regarding fear of negative feedback, unnecessary data collection, the feeling of being monitored all the time, or fear of job loss, among others. Taking these concerns seriously is very important, and communicating transparently about the benefits the new technology might bring can already relieve these severe fears. An empirically investigated phenomenon describing the gap between attitude and actual behaviour is known as the privacy paradox: although people generally express concern about their data, want to protect it, and want to have control over who can access it, they nevertheless disclose a great amount of personal information [1] as the result of a risk-benefit analysis [13]. This indicates that the intention to provide data depends on the trade-off between the personal benefit and the perceived privacy risk. Thus, further research needs to be done on possible benefits, taking the motives identified in this study into account.

Perceived Sensitivity of Data Sharing. Our main study provided further insight into data sensitivity, i.e., the level of risk associated with disclosing personal data, and the willingness to disclose it. Only demographic and work-related data were seen as less sensitive and thus shareable within the working context. These findings are congruent with the results of Tolsdorf et al. [41], who found that data groups related to the employment context show a significantly lower perceived sensitivity and a higher willingness to share. According to Tolsdorf, hair color, occupation, language skills, shift schedules, and business trips were the least sensitive data. Further research should focus on the assessment of personal data and pursue a classification of different data clusters [41]. For the actual use of sensitive data in the work context, communication concepts need to be developed that transparently explain the use and processing of the data to the worker. Moreover, further research should investigate influencing factors such as culture and age, which have been shown to affect the perception of data sensitivity [6, 26]. Asked the other way around which data they assumed to be important for the company, participants named work-related data, movement profiles, demographic data, and expressive behaviour; in part, data they themselves considered too sensitive to share. Further research is needed that first identifies the concrete data types needed in cyber-physical work environments and then evaluates them according to their perceived sensitivity.

Acceptance of the Chatbot and Cobot Scenarios. In this study, participants evaluated two scenarios that included several kinds of personal data on whose basis a working interaction between human and technology could take place. Both the chatbot and the cobot scenario were rated positively. Four factors were prominent in measuring the acceptance of both scenarios: hedonic motivation, performance expectancy, trust, and effort expectancy.

Cobot Scenario. The strongest influence on the intention to use was found for hedonic motivation. Fun as a motivating factor for sharing data and interacting with a cobot is also reflected in current empirical research [10]. To achieve an active willingness to interact with a networked cyber-physical system that requires personal data, playful and motivational elements need to be considered in its design. To identify adequate motivational factors, users must be involved in the empirical development, following user-centred design. Performance expectancy, understood as utility, was the second most important factor. As cobots are designed to work side by side with the human operator, fluent interaction and high performance based on worker data such as location or height are essential for successful joint task performance. A clear understanding and predictability of the cobot’s movement, speed, acceleration, and other behaviour, taking into account the required personal data of the worker, needs to be established [32]. Trust in automation, as generated by the expected predictability, credibility, and usefulness of the technology [24], was also considered important for the intention to work with a cobot. Similar to the need for high performance, workers need to be able to trust the technological functions of the cobot for a successful collaboration. Once again, the clusters of personal data required for the specific collaborative work context need to be explored and their benefits transparently explained to users in order to achieve a positive roll-out for Industry 4.0. Effort expectancy is defined as the extent to which a user perceives the interaction with the cobot as user-friendly and easy. In this study, effort expectancy was rated significantly lower than in the chatbot scenario. A smooth collaboration requires not only personal user data, but also knowledge of and experience in working with robots. The mere idea of working with a cobot can give an indication of the importance of effort expectancy; however, this result should only be taken as an indication. Therefore, user studies with real interaction and a focus on the required user data need to be conducted.

Chatbot Scenario. Here, performance expectancy showed the strongest correlation with the intention to use. It is defined as the degree to which a person believes that using the chatbot will help them achieve gains in job performance [43]. The chatbot acts as a conversational agent which, in this study, provides information about a holiday request based on personal and work-related data. This study provided insights into the general evaluation of two technological innovations enabled by advances in artificial intelligence and machine learning. As the implementation of conversational agents such as chatbots will increase in general, and specifically in the context of smart factories, deeper insights into the willingness to disclose personal data and into privacy concerns when interacting with chatbots are needed [19]. Hedonic motivation was the second most important factor and underlines that the interaction with the chatbot needs to be designed with joyful elements [9]. On a conversational basis, it would be interesting to study the impact and practical use of humour within the chatbot. In addition, privacy concerns and the willingness to share personal data depending on the human-like characteristics of the chatbot may influence adoption and require further research [14]. Trust in automation was understood as reliance on the actions of the chatbot. Effort expectancy was found to be an important explanation for the intention to use a chatbot; it reflects the extent to which potential users think that much or little effort is necessary to use an appropriate chatbot and to learn how to deal with it. Since chatbots are programmed to conduct human-like dialogue with natural language understanding, they are perceived as human-like or anthropomorphic [17], which promotes the intention to use them.

Both contexts were rated on a semantic differential, and again the chatbot was described with more positive attributes than the cobot, which may be explained by the fact that interactions with chatbots are already common, e.g., in booking contexts, so users have more experience with them [7].

The following key findings and actionable recommendations have emerged from this study:

  • Employees’ willingness to share personal data increases with personal benefits.

  • Transparent communication and detailed information about data processing is very important.

  • High acceptance of either the cobot or the chatbot can be reached by considering factors such as fun, performance, trust, and effort expectancy.

  • Playful, joyful and motivational elements need to be considered in their design.

  • A clear understanding and predictability regarding the cobot’s movement, speed, acceleration are important.

  • Cobots need to be designed to be trustworthy.

  • Human-like and anthropomorphic patterns are relevant.

Furthermore, the results show that the perception of sensitivity differs according to the context of use and the type of data. This indicates that there is no one-size-fits-all solution for the use of personal data in smart factories, but that data use must be negotiated individually with the affected employees.

The study can be seen as a starting point for various research projects on the acceptance of cobots and chatbots. Understanding individual privacy concerns and trust in automation is a prerequisite for developing practical guidelines that can serve as a benchmark for the successful roll-out of such technologies in industry.

It is foreseeable that the collection and use of personal data, in addition to machine and production data, will increase in the near future. As this affects employees’ self-determination, autonomy, and perceived privacy, responsible (research and) innovation (R(R)I) mandates taking this into account when designing future work environments. As privacy perceptions in smart factories are currently insufficiently understood, further research needs to integrate the perspectives of all stakeholders affected by the digital transformation of production (e.g. employees, managers, or trade unions). The different perspectives on the motives and barriers for data sharing can then be integrated and translated into a socially and collectively negotiated protocol for the use of personal data in smart factories. In the face of skills shortages caused by demographic change, the participatory integration of workers’ perspectives is gaining importance beyond responsible research and innovation: creating smart, fair, and accepted work environments that promote self-determination and autonomy can become a key competitive advantage in attracting and retaining talent.