1 Technological alterations in twenty-first-century workplaces

Since the beginning of the twenty-first century, many industries and organizations have been making micro- and macro-alterations to their technological structure. These technological changes help them provide wider and swifter services to their customers and clients (Lasi et al. 2014; Saucedo-Martínez et al. 2017), compete with their counterparts for a larger share of the market (Holweg 2008; Rüßmann et al. 2015; Saucedo-Martínez et al. 2017), and enable employees to perform their tasks with fewer errors (Longo et al. 2017; Maguire 2001).

Adopting these technological innovations in the workplace is not cost-free. In many cases, companies may need to redesign jobs and working procedures (Davis et al. 2012; Lasi et al. 2014; Rüßmann et al. 2015) or enroll their employees in dedicated training programs (Bekker and Long 2000; Boothby et al. 2010). Moreover, companies need to know the degree to which their employees will accept these innovations before deploying them widely in the workplace.

2 Dealing with technological changes

Due to rapid technological progress, scientists tend to redesign the way a particular product is produced or a service is delivered to customers. These alterations affect not only jobs but also the duties of workers (Pacaux-Lemoine et al. 2017). For example, factory assembly lines are increasingly automated and computerized to make them flexible and modular (Lasi et al. 2014; Zhang et al. 2019).

To cope with this automatization process, factories must provide employees with equipment that supports the transition. Employees are expected to take on more supervisory and regulatory tasks rather than being mere task performers. In doing so, they will need to improve their capacities for anticipating, planning, and reacting to problems (Gorecky et al. 2014; Pacaux-Lemoine et al. 2017; Stoessel et al. 2008; Zamfirescu et al. 2014). As such, the operators’ ability to collect and process information is crucial for the smooth functioning of today’s factories, which can be a problem given the limited capacity of humans to process information (Lindblom and Thorvald 2014). Moreover, this increase in the complexity of the tasks to be performed is likely to stretch the operators’ mental models (Moray 1998; Rasmussen and Rouse 1981; Rouse and Morris 1986), namely the psychological representations of the work situation used by operators to describe, explain and predict the functioning of the system they are operating (Johnson-Laird 1983; Johnson-Laird et al. 1998; Rasmussen and Rouse 1981; Rouse and Morris 1986; Moray 1998). Since mental models rely at least in part on the ability to process information (Johnson-Laird 1983; Johnson-Laird et al. 1998), increased automation may lead to cognitive overload because more complex mental models are required. Accordingly, two-thirds of observed technological accidents in industry are attributed to human error, probably because human limitations are underestimated (Pacaux-Lemoine et al. 2017).

Different solutions have been suggested to compensate for these limitations of the human cognitive system, such as redesigning human–machine interfaces (e.g., Villani et al. 2017) or optimizing the organization of assembly lines (e.g., Fast-Berglund and Stahre 2013). In this review, we focus on another of these solutions: the use of wearable cognitive assistants (Hao and Helo 2017; Unzeitig et al. 2015). The specificity of these devices is that they aim to reduce the operators’ need to search for information by giving them the right information at the right time, in an adapted form, wherever they are located in the factory (Unzeitig et al. 2015). In other words, these assistants simplify the human–machine interface by directly providing the needed information to the operators, either proactively (when the operators need to be informed of an important event such as an error in the assembly line) or reactively (when the operators request information such as the level of a given stockpile).

As such, wearable cognitive assistants are expected to enhance human–machine interaction (Romero et al. 2016), minimize cognitive workload and errors (Romero et al. 2015), and maximize job attitudes and well-being-related outcomes in workers (Richter et al. 2018). Some of them, such as intelligent virtual assistants (e.g., Lamontagne et al. 2014), ultra-portable devices like smartphones and smartwatches (e.g., Aehnelt and Urban 2014; Ziegler et al. 2015), and augmented reality devices (e.g., Dunston 2008; Jetter et al. 2018; Regenbrecht et al. 2005; Schwald and De Laval 2003; Syberfeldt et al. 2017), have already been tested in industry. Nonetheless, whether these assistants can make up for the cognitive limitations of human operators and improve job attitudes and well-being-related outcomes in the workplace remains unclear.

3 The current study

Exploring the extent to which wearable cognitive assistants can influence cognitive performance and well-being requires an integrated theoretical framework combining elements from cognitive, social and organizational psychology. The current study, therefore, conducts a critical review of these fields to identify, report and evaluate their most significant works and models in this regard. First, we examine how the current cognitive psychology literature can offer useful leads on how to design and test wearable cognitive assistants in the workplace. We argue that these assistants are likely to help operators cope with job requirements, leading to a better person–job fit (Edwards 1991, 2008). Second, we present two well-established models, the Job Characteristics Model (JCM) and the Technology Acceptance Model (TAM), which have been extensively used with technological solutions other than wearable cognitive assistants. We show that these models provide useful theoretical insight for explaining the extent to which these cognitive assistants could also enhance job satisfaction and make the job more likely to match workers’ needs, preferences and values, in other words, the second way of improving the person–job fit (Edwards 1991, 2008; Kristof-Brown et al. 2005).

4 Contribution of wearable cognitive assistants to overcoming humans’ cognitive limitations

Due to increasing automatization, human agents are becoming the primary bulwark against dysfunctions in assembly lines. Accordingly, the ability of these agents to process information is crucial. The system responsible for processing and storing information, known as working memory, limits both the processing speed and the amount of information that individuals can hold (Baddeley 2012; Barrouillet and Camos 2015; Cowan 2010; Engle and Kane 2004; Logie 2011; Oberauer et al. 2012). Workers may face situations in a company that considerably use up their working memory (Lindblom and Thorvald 2014). This is the case for situations that generate interfering thoughts (i.e., where workers have to think about one task while performing another); situations that lead to a cognitive tunnel (i.e., where information relevant to the task at hand is scattered across several different places); situations where operators are required to keep new information in memory; and situations of cognitive constraint (i.e., where relevant information is not easily accessed because it is swamped by other information).

It is, therefore, essential to have a theoretical understanding of how working memory operates under such circumstances before applying wearable cognitive assistants in the workplace. Working memory is one of the most studied structures in cognitive psychology (Baddeley 2012; Barrouillet and Camos 2015; Cowan 2010; Engle and Kane 2004; Logie 2011; Oberauer et al. 2012). Because definitions of working memory vary, it is hard to provide a single precise definition (Cowan 2017); however, the main models and definitions share common points. First, working memory serves to manage new, unusual or complex situations, as opposed, by definition, to well-known, simple situations involving automated processes. Second, working memory is strongly associated with cognitive task control. Thus, we use working memory when we have to focus on a specific task, maintain an active goal, or when checking for errors requires effort or sustained attention. It also comes into play when we have to divide or alternate our attention between several tasks, or when a conflict between different actions must be resolved. Third, we also use working memory to ignore irrelevant cues, in particular when information has to be kept active in a context with a high level of distraction.

Working memory models differ in the factors they hold responsible for its limitations. A first limitation concerns the amount of information that can be processed simultaneously (Cowan 2010). According to Cowan (2016), for example, only four elements of information can be stored and processed in the focus of attention. These elements may be very basic (e.g., a digit, a word) or more elaborate, in that they form a chunk of elements (e.g., memorizing several digits that form a number). Another limitation relates to the time spent focusing on the information in working memory (Barrouillet and Camos 2015). On this account, traces in working memory decay with time and need attention to be re-activated and maintained. In other words, when attention is paid to something other than holding information in working memory, this information deteriorates. A third possibility is to see environmental interference as the main cause of the deterioration of information in working memory (Oberauer et al. 2012). The more the environment contains elements that may interfere with information held in working memory (e.g., when someone talks while we try to retain a list of words), the more traces in working memory deteriorate. Alternatively, it may be our innate ability to control our attention and, therefore, ignore distractors (Engle and Kane 2004) that is the more crucial factor. Finally, the multi-component model (Baddeley 2012; Logie 2011, 2018) regards working memory as an agglomerate of components specialized in processing a given type of information (e.g., visual, verbal, and spatial). In this view, each component can process a limited amount of information, but the total amount of information processed can be maximized when the information is of different types. In particular, it would appear that verbal information can be stored independently of other information via a self-repeating mechanism (articulatory rehearsal: Camos 2015). It is also worth noting that many studies suggest that knowledge already acquired (and stored in long-term memory) has a major impact on how working memory operates (e.g., Barrouillet et al. 2004; Unsworth 2010). Accordingly, experts have quicker access to their long-term memory, and the knowledge stored there is better organized (Ericsson and Kintsch 1995). Consequently, their use of working memory is improved, because such information is generally more relevant and more accessible.
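Among these accounts, the time-based one lends itself to a compact formal statement. As a simplified rendering (our notation, not a quotation of the original papers), the time-based resource-sharing account expresses cognitive load as the proportion of time during which attention is captured by concurrent processing and is thus unavailable for reactivating decaying memory traces (Barrouillet et al. 2004; Barrouillet and Camos 2015):

```latex
% Simplified cognitive-load expression in the spirit of time-based
% resource sharing (Barrouillet et al. 2004): N processing steps, each
% capturing attention for roughly a seconds, within a window of T
% seconds. Memory traces decay while CL is high, because attention
% cannot then be used to refresh them.
\[
  \mathrm{CL} \;=\; \frac{a\,N}{T}, \qquad 0 \le \mathrm{CL} \le 1 .
\]
```

The closer CL gets to 1, the less time remains to refresh memory traces, which is precisely the situation a cognitive assistant should help avoid.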

Interestingly, Wickens (1980, 2002, 2008) developed a model aimed at describing information processing in multitask contexts that shares some similarities with classic working memory models. Like the multi-component model (Baddeley 2012; Logie 2011, 2018), Wickens’ model postulates the existence of different resources, but it places more emphasis on the different stages of processing (perception, information processing, response selection and execution), and especially on perception, by differentiating, for example, focal and ambient vision. This model has been used to predict the extent to which a multitasking situation leads to an overload in information processing (Wickens 2008). Others have used it to predict why and when operators shift from one task to another (Wickens and Gutzwiller 2017).

All the models presented above are based on solid empirical results, and it thus seems reasonable to assume that all the factors set out here explain the limits of working memory, albeit to differing degrees. In summary, any technological solution aimed at offloading the operators’ cognitive system should present a small amount of information at a time, limit information overlapping in time, reduce possible environmental interference, favor the use of different modules, adapt the content and quantity of information to users’ level of expertise, and avoid resource overlapping at all levels of information processing. We believe that wearable cognitive assistants are particularly well suited to meet these recommendations. Since they display critical information to the operators only when needed or when solicited, wearable cognitive assistants are more likely than other technological solutions to provide only a small amount of information with little overlap in time. Moreover, because wearable cognitive assistants can display information to each operator individually (and directly at their own location), they open the opportunity to personalize the format for each of them, based on each given work environment (to avoid environmental interference and overlap in the type of information) and on the level of expertise. Altogether, these characteristics make wearable cognitive assistants a very promising way of taking into account the most up-to-date knowledge about the limitations of the cognitive system.

It is also possible to address the question of the limits of working memory in terms of the more operational concept of cognitive load (or mental load), which focuses on the quantity of demands experienced by operators and the way they react to them (Sweller 2011; Sweller et al. 2011; Zheng 2017). As mentioned earlier, human operators are active regulators of information (Bannon 1995) who respond to demands coming from their environment. In doing so, there are three possible situations they may encounter: (1) if the demands of the working environment exceed operators’ cognitive capacities, they are assumed to be in a situation of mental overload (Lindblom and Thorvald 2014; Hancock et al. 1995), which is a source of errors and stress. (2) If the demands of the environment are too weak, operators find themselves in a situation of cognitive underload (Hancock et al. 1995; Pattyn et al. 2008), which manifests itself in task disengagement and a fall in vigilance, resulting in longer reaction times and more errors (Pattyn et al. 2008). (3) An ideal situation (Hancock et al. 1995), between the extremes of (1) and (2), is where the demands of the environment are sufficient but do not exceed operators’ cognitive capacities. This comfort zone may vary from one individual to another (Martin 2013) or change depending on the circumstances (e.g., night work vs. day work), and corresponds to the ideal situation in terms of both performance and well-being. Ideally, wearable cognitive assistants would, therefore, aim to offload operators’ working memory so as to bring them into their cognitive comfort zone.
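The three situations above can be made concrete with a toy sketch. Everything here is a hypothetical illustration: the load values and zone bounds are placeholders, since the comfort zone varies across individuals (Martin 2013) and circumstances and would have to be calibrated per operator and per context.

```python
"""Toy illustration of the underload/comfort/overload logic described above.
The thresholds and load estimates are hypothetical placeholders."""
from dataclasses import dataclass

@dataclass
class ComfortZone:
    lower: float  # below this: cognitive underload (disengagement, vigilance drop)
    upper: float  # above this: mental overload (errors, stress)

def classify_load(load: float, zone: ComfortZone) -> str:
    """Map a normalized load estimate (0..1) onto the three situations."""
    if load < zone.lower:
        return "underload"
    if load > zone.upper:
        return "overload"
    return "comfort"

# Example: a night-shift operator might have a narrower comfort zone.
day_zone = ComfortZone(lower=0.30, upper=0.80)
night_zone = ComfortZone(lower=0.35, upper=0.70)

for load in (0.2, 0.5, 0.9):
    print(load, classify_load(load, day_zone), classify_load(load, night_zone))
```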

Several tools have been proposed to measure cognitive load directly and reliably (for a discussion of indirect, e.g., performance-based, measures, see: Zheng 2017). One possibility is to use validated surveys or interviews completed by operators once they have finished a given activity (e.g., Hart 2006; Hart and Staveland 1988; Paas et al. 1994). Many studies have used this approach to measure cognitive load in different environments. For example, Leppink et al. (2014) proposed a scale distinguishing at least two sub-components of cognitive load: an intrinsic component that corresponds to the complexity of the information to be kept in memory, and an extraneous component corresponding to poor mastery of the aims of the task, which results in unnecessary cognitive operations. Nonetheless, this measurement approach has its limits. For instance, operators may be unable to differentiate between the difficulty of a task and the personal effort invested in it (Veltman and Gaillard 1996).
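For illustration, here is a minimal sketch of how the NASA-TLX (Hart and Staveland 1988), one survey of this family, is typically scored. The ratings and pairwise-comparison weights below are made-up example data; the unweighted mean corresponds to the “Raw TLX” discussed by Hart (2006).

```python
"""Sketch of NASA-TLX scoring: six 0-100 subscales, plus 15 pairwise
comparisons that yield per-subscale weights summing to 15."""

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings: dict[str, float]) -> float:
    """Raw TLX: unweighted mean of the six 0-100 subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings: dict[str, float], weights: dict[str, int]) -> float:
    """Weighted TLX: each rating is weighted by the number of pairwise
    comparisons (0-5) its subscale won; the 15 comparisons sum to 15."""
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

# Made-up responses of one operator after a work session.
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 30, "effort": 60, "frustration": 40}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}

print(f"Raw TLX: {raw_tlx(ratings):.1f}")                 # 45.8
print(f"Weighted TLX: {weighted_tlx(ratings, weights):.1f}")  # 57.0
```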

Another measurement approach is to use physiological indicators, although this type of measure can be difficult to get accepted and used in a factory setting (for a discussion, see: Zheng 2017). Eye tracking is one of the most commonly used physiological indicators; it is based on the assumption that what we look at is what is being cognitively processed (Just and Carpenter 1980). Eye tracking is thus thought to provide indications about internal cognitive processes and, among other things, about the level of mental load.

Eye-tracking studies have relied on several indicators, such as fixation time (e.g., De Greef et al. 2009; De Rivecourt et al. 2008; Duchowski 2002), pupillary dilatation (e.g., De Rivecourt et al. 2008; Schwalm et al. 2008) or blink rate (Benedetto et al. 2011; Recarte et al. 2008). Other physiological measures, which can be combined with the eye-tracking signal (De Rivecourt et al. 2008), have also been shown to correlate with mental load.
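As a toy illustration of two of these indicators, the sketch below computes mean fixation duration and blink rate from event timestamps. The event lists are fabricated examples; a real eye tracker’s software would supply such events in its own format.

```python
"""Toy computation of two eye-tracking indicators: mean fixation
duration and blink rate, from made-up event timestamps."""

def mean_fixation_duration_ms(fixations: list[tuple[float, float]]) -> float:
    """Fixations given as (start_s, end_s) pairs; returns mean duration in ms."""
    durations = [(end - start) * 1000 for start, end in fixations]
    return sum(durations) / len(durations)

def blink_rate_per_min(blink_times_s: list[float], recording_s: float) -> float:
    """Number of blinks per minute over the recording window."""
    return len(blink_times_s) / (recording_s / 60)

fixations = [(0.10, 0.42), (0.55, 0.80), (1.05, 1.61)]  # seconds
blinks = [2.1, 7.8, 13.4, 19.9]                         # seconds
print(f"Mean fixation: {mean_fixation_duration_ms(fixations):.0f} ms")
print(f"Blink rate: {blink_rate_per_min(blinks, recording_s=30):.1f} /min")
```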

For example, an increase in heart rate has been observed during risky job-related procedures in various studies. In some of these studies, heart rate measurements were combined with measurements of body acceleration so as to differentiate between heart rate fluctuations due to bursts of physical activity and those due to phases of heavy mental load. Similarly, using a flight simulator, heart rate was shown to increase during take-off and landing (Wilson 2002), two procedures known to generate a high cognitive load. Furthermore, a reduction in heart rate variability was observed during a computer-based piloting task (Durantin et al. 2014), in an air-traffic control simulator (Rowe et al. 1998), and in a boat cockpit simulator (Murai et al. 2008). Such a reduction was also found with remote measurement using a camera (McDuff et al. 2014). Other physiological measures associated with an increase in mental load include electrodermal activity (Setz et al. 2010), infrared spectroscopy (Durantin et al. 2014), facial thermography (Murai et al. 2008, 2015), and electroencephalography (Krol and Zander 2018; Zander and Kothe 2011).
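To make the heart-rate-variability measures concrete, the following sketch computes two standard time-domain HRV metrics, SDNN and RMSSD, from inter-beat (RR) intervals. The RR series are made-up examples, chosen so that the “high load” series shows the reduced variability reported above.

```python
"""SDNN and RMSSD, two standard heart-rate-variability metrics,
computed from made-up inter-beat (RR) interval series in milliseconds."""
import math

def sdnn(rr_ms: list[float]) -> float:
    """Sample standard deviation of RR intervals (overall variability)."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((rr - mean) ** 2 for rr in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms: list[float]) -> float:
    """Root mean square of successive RR differences (short-term variability)."""
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(len(rr_ms) - 1)]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

rest = [812, 845, 790, 860, 805, 838, 795]     # ms, resting
loaded = [702, 698, 705, 700, 696, 703, 699]   # ms, heavy mental load
for label, rr in (("rest", rest), ("high load", loaded)):
    print(f"{label}: SDNN={sdnn(rr):.1f} ms, RMSSD={rmssd(rr):.1f} ms")
```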

These measures could also be useful tools for assessing whether wearable cognitive assistants really reduce mental load, and for comparing different iterations of these technologies. Adapting them to real situations, which differ from those found in a laboratory or a simulator, nonetheless remains a real challenge.

5 Theoretical foundations supporting the use of wearable cognitive assistants

We have seen that, in the context of the changes in twenty-first-century industry, jobs involve ever greater cognitive demands, and that cognitive assistants seem a promising way to help workers respond to these demands efficiently. In other words, cognitive assistants could improve the person–job fit, in that they would help employees’ skills and abilities cope with job requirements (Edwards 1991, 2008; Kristof-Brown et al. 2005). Person–job fit is linked with important outcomes such as job performance (e.g., Bhat and Rainayee 2016; June and Mahmood 2011) and job satisfaction (for a meta-analysis, see: Kristof-Brown et al. 2005). Nonetheless, job satisfaction is probably more closely linked to a second aspect of person–job fit: the fact that the job fulfills the needs, preferences and values of the workers (Edwards 1991, 2008; Kristof-Brown et al. 2005). In the following section, we argue that this type of person–job fit can be addressed by two well-established models: the Job Characteristics Model (JCM) and the Technology Acceptance Model (TAM). First, we introduce these models; second, we explain how they provide theoretical insights and explanations in support of the use of wearable cognitive assistants in the workplace.

5.1 The Job Characteristics Model and its adaptation to new technologies

The Job Characteristics Model was initially developed by Hackman and Oldham (1980). The basic idea behind this theory was to find the antecedents of job satisfaction and job motivation in the workplace. Hackman and Lawler (1971) observed that, although many studies had aimed at enriching workplace climates to counter the dissatisfaction that comes from doing routine tasks, there were very few theories and tools to identify the extent to which the characteristics of the job itself influence job satisfaction and job motivation. The theory is founded on the following propositions: (1) individuals are more likely to behave in a certain way if they think they will be rewarded (taking reward in the broad sense of both monetary and psychological reward); (2) rewards are more valuable to individuals if they meet their physical or psychological needs; (3) working conditions are assumed to lead to better performance if they enable these needs to be met; (4) the needs related to work tend to be high-level needs (personal development, the feeling of achieving something important) rather than low-level needs (safety, well-being); (5) high-level needs are met when workers are aware that they have achieved something valuable or meaningful (Hackman and Oldham 1980).

Based on these propositions, it can be theorized that job characteristics indeed influence motivation, and it even becomes possible to define the characteristics of a job that workers will find motivating. As such, motivating job characteristics should follow several principles: they should yield a psychological reward in the form of a sense of achievement or of having done an important job; they should allow workers to feel responsible for their work; and, lastly, they should enable workers to be aware of their performance and efficiency. In addition, those who strive hardest to achieve should be the most sensitive to these characteristics.

Relying on this theoretical base, Hackman and Oldham (1975) proposed a survey known as the Job Diagnostic Survey (JDS). They proposed that favorable work outcomes (e.g., high motivation, high job satisfaction, good performance, low absenteeism and low turnover) are obtained when three psychological states are attained (Fig. 1). The three states correspond to the aforementioned job characteristics that workers find motivating (Hackman and Lawler 1971): a sense of achievement or of having done an important job; a sense of responsibility with regard to one’s work; and awareness of one’s performance and efficiency. According to the theory, underlying these three psychological states are five core job dimensions: skill variety; task identity (the fact that the work requires the jobholder to complete a whole task); task significance (the fact that the job affects other people’s lives); autonomy; and knowledge of results (feedback). How these five core job characteristics affect favorable work outcomes is influenced in turn by differences in individuals’ need for personal accomplishment (Fig. 1).

Fig. 1 Job Characteristics Model (Hackman and Oldham 1980, p. 90). Reprinted by permission of Pearson Education, Upper Saddle River, New Jersey
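For reference, the JDS combines these five core dimensions into a single index, the Motivating Potential Score (Hackman and Oldham 1975). The way the index is built is worth noting, because the two information-related dimensions discussed below, autonomy and feedback, enter multiplicatively rather than additively:

```latex
% Motivating Potential Score (MPS) from the Job Diagnostic Survey
% (Hackman and Oldham 1975). The three meaningfulness-related
% dimensions are averaged, while autonomy and feedback multiply the
% result, so a near-zero score on either of them collapses the index.
\[
  \mathrm{MPS} =
  \frac{\text{skill variety} + \text{task identity} + \text{task significance}}{3}
  \times \text{autonomy} \times \text{feedback}
\]
```

Under this weighting, any intervention that raises autonomy or feedback, as wearable cognitive assistants are argued to do below, has a disproportionate effect on the job’s motivating potential.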

The impact of job characteristics on job satisfaction thus rests on three main antecedents, two of which are associated with the regulation of information: autonomy and feedback. It is precisely on these two aspects that wearable cognitive assistants can be assumed to contribute: first, by giving operators access to the information they need when they need it, and second, by enabling them to be more autonomous and more aware of the progress achieved as a result of their efforts. According to Hackman and Oldham (1975), this improvement should thus enhance operators’ job satisfaction.

A good number of studies currently support the Job Characteristics Model (for a review and discussion, see Oldham and Hackman 2010). For example, in a meta-analysis of data from laboratory and field studies, Fried and Ferris (1987) found that all five job characteristics of the model correlate moderately to strongly with positive work outcomes. This meta-analysis also suggested that feedback is the characteristic that correlates most strongly with job satisfaction. Furthermore, the model has been enriched repeatedly since its original version. The main development was to complement the motivational approach adopted by Hackman and Oldham (1975) with other approaches, leading to more complex models (Campion and Thayer 1985; Campion 1988; Edwards et al. 1999). More recently, Morgeson and Humphrey (2006) continued this integrative effort by proposing a model combining the job characteristics present in the literature. They identified 21 job characteristics that are likely to influence job satisfaction, job performance, and absenteeism. Of those, 8 relate directly to operators’ regulation of information and are thus likely to be positively impacted by wearable cognitive assistants: autonomy (divided in this study into autonomy in decision making, in work scheduling and in work methods), feedback from the job, job complexity, information processing, problem solving, and feedback from others. In a meta-analysis totaling more than 200,000 participants, Humphrey et al. (2007) showed that these 8 characteristics (with the different kinds of autonomy counting as a single factor) correlate very strongly with job satisfaction and moderately with job performance. Overall, the meta-analysis supports the model, albeit with a few reservations: first, the results vary with the population studied and, second, the idea of critical psychological states is not supported by the data.

Following this short review of the literature on job characteristics, we postulate that wearable cognitive assistants are likely to enhance job satisfaction because they are likely to influence the eight job characteristics related to the regulation of information. First, by making critical information (such as errors or the completion of a goal) proactively accessible to the operators, wearable cognitive assistants should increase feedback from the task and, if they include a feature allowing communication between operators, they should also enhance feedback from others. Second, because they give operators reactive access to personalized information about their work, wearable cognitive assistants should make information about the task at hand more accessible, which should increase information processing and optimize job complexity. This better access to information is also likely to enhance the operators’ capacity to solve problems. Indeed, problem solving relies on working memory when the problems are analytic in nature (Hambrick and Engle 2003). We have seen that better access to personalized information should offload the operators’ working memory, and one could, therefore, infer that it should improve analytic problem solving (but not necessarily creative problem solving; Wiley and Jarosz 2012). Moreover, wearable cognitive assistants should also help operators better plan their activities, because they make information such as stock levels and the performance of machines at different levels of the assembly line easily available.

In summary, by allowing for better information management, and particularly if they provide feedback on work progress, wearable cognitive assistants should make work regulation tasks easier and, therefore, promote job satisfaction, as suggested by the meta-analysis of Humphrey et al. (2007).

5.2 The Technology Acceptance Model and its adaptation to new technologies

To sum up, cognitive assistants seem to be a promising way to enhance the person–job fit, by helping employees cope with the cognitive demands of more complex work, but also by enhancing job characteristics that have been shown to be linked with job satisfaction. Nonetheless, these improvements are conditional on workers actually making use of the technology (Alexandre et al. 2018). In other words, by adding a technological solution, one basically risks trading a bad human–machine interaction for a bad human–technology interaction. This risk has been widely studied with other technologies in the framework of the Technology Acceptance Model, which aims at predicting the degree to which individuals tend to adopt a new technology. But as we shall see, only a few studies have examined the question of acceptance in the case of wearable cognitive assistants in the factory. Before reviewing these studies, we first briefly present the TAM and its extensions.

The TAM has been the subject of many recent studies and has led to the development of various research measures and tools in the field of experimental social psychology. Case-based studies show that whenever a new technology has not been successfully accepted, it has failed to have a positive impact. As an example, Venkatesh and Bala (2008) reported that the unsuccessful implementation of IT systems at Hewlett–Packard in 2004 resulted in losses of 160 million dollars. One of the major causes of this failure was the exclusion of human users from the implementation process (Regenbrecht et al. 2005). It is, therefore, crucial to have a research tool that measures the degree to which employees will accept a new technology in their workplace before it is widely deployed or purchased.

The Technology Acceptance Model (TAM) was initially developed by Davis (1989). According to this model (Fig. 2), the degree to which a technology is adopted depends on individuals’ intention to use it. In turn, this intention to use can be predicted by individuals’ attitude towards the technology, that is, their positive or negative perceptions of it. These perceptions are conditioned, in turn, by two factors, namely perceived ease of use and perceived usefulness. Consequently, insofar as repeated use may alter these two factors (an assistive device may appear easier to use with time, its perceived usefulness may decline the more it is used, etc.), the attitude towards a given technology, the intention to use it, and, ultimately, its actual use may also change over time.

Fig. 2 Technology Acceptance Model developed by Davis (1989). Diagram taken from Yousafzai et al. (2010)
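To make this chain of constructs concrete, here is a minimal sketch of how responses to a TAM-style questionnaire could be scored and combined. The items, responses and path weights are hypothetical placeholders, not values taken from Davis (1989); real studies estimate the weights from data (e.g., via regression or structural equation modeling).

```python
"""Minimal sketch of scoring a TAM-style questionnaire.
All item wordings, responses and weights are hypothetical."""

def scale_mean(item_scores: list[int]) -> float:
    """Average of 7-point Likert items (1 = strongly disagree ... 7 = strongly agree)."""
    return sum(item_scores) / len(item_scores)

# Hypothetical responses of one operator about a wearable assistant.
perceived_usefulness = scale_mean([6, 5, 6, 7])   # e.g., "improves my performance"
perceived_ease_of_use = scale_mean([4, 5, 4, 4])  # e.g., "easy to learn"

# Hypothetical path weights: in the TAM, both perceptions feed the
# intention to use, with ease of use also acting through usefulness.
W_PU, W_PEOU = 0.6, 0.3
intention_to_use = W_PU * perceived_usefulness + W_PEOU * perceived_ease_of_use

print(f"PU={perceived_usefulness:.2f}, PEOU={perceived_ease_of_use:.2f}, "
      f"intention score={intention_to_use:.2f}")
```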

To date, numerous studies have examined the TAM’s simplicity of use and robustness. For example, in a meta-analysis summarizing 26 studies, Ma and Liu (2004) found evidence for the structure of the model and the links it assumes between the different variables. The model has also been revised on several occasions, mainly with a view to enriching it with additional variables. For example, a second version of the model includes antecedents of perceived usefulness (Venkatesh and Davis 2000), including social norms (e.g., how individuals think people close to them expect them to behave), which seem to have a direct influence on the intention to use a given technology. This version of the model also leaves out the concept of attitude and proposes perceived ease of use as a determinant of both intention to use and perceived usefulness. Other authors have suggested including perceived system performance (Liu and Ma 2006). Lastly, we should take into account the Unified Theory of Acceptance and Use of Technology developed by Venkatesh et al. (2003) and Venkatesh and Bala (2008), because this theory takes variables of the Technology Acceptance Model and incorporates them in a more complex model (Fig. 3). In conclusion, despite the many add-ons, the original structure of the model devised by Davis (1989) is virtually unchanged and, for the moment at least, does not appear to have been called into question (for discussions of other possible improvements to the model, see for example: Belletier et al. 2018; De Oca and Nistor 2014; Harrison et al. 2014; Nistor 2014; Nistor et al. 2014a, b; Venkatesh et al. 2012).

Fig. 3 Technology Acceptance Model, revised version (Venkatesh and Bala 2008)

Several authors have already suggested using the TAM and/or the UTAUT in the context of the industrial implementation of wearable devices (e.g., Hannola et al. 2017; Zhao et al. 2018). Their results may give some interesting insight into the acceptance of wearable cognitive assistants. For example, Son et al. (2012) studied the use of laptop computers for managing job-related tasks in construction projects at three construction companies in South Korea. They administered a questionnaire measuring the classic TAM variables as well as several probable antecedents of these variables, such as social influence or top management support. Their results showed that the main determinant of the acceptance of computers is their perceived usefulness, which was in turn predicted by social influence (how the operator thinks his/her social circle views the technology), job relevance (to what extent the operator thinks the technology is applicable to his/her job), and top management support (to what extent the operator thinks that management understands the technology and supports its use). Calisir et al. (2014) obtained similar results with the introduction of a web-based learning system in a Turkish car factory. This learning system was deployed in a training center and was based on existing training targeting blue-collar workers. Behavioral intention to use the system was predicted by perceived usefulness, which was in turn predicted by content quality, namely the extent to which the learning content was designed to match workers’ needs. In the same automotive context, in Italy and the United Kingdom, Jetter et al. (2018) used the TAM during the implementation of augmented reality software on tablets. The software was designed to help operators during maintenance, service, repair and inspection operations; it was, for example, able to make hidden components visible and to display the required work steps or real-time information about vehicle parts. The authors measured the TAM variables and different aspects of perceived performance (time and errors, cognitive load, spatial representation) before and after a representative task (proof of concept) was performed with the augmented reality software. Once again, perceived usefulness proved to be a good predictor of the intention to use the device. Interestingly, perceived usefulness was positively impacted by the subjective reduction of time and errors, but not by subjective cognitive load or by the subjective improvement in the spatial representation of the task. These results give first indications of how workers judge the usefulness of such devices. To sum up, perceived usefulness seems to be the most crucial predictor of technology acceptance in the industrial context, although more research is needed to better understand what factors influence this perception, so as to maximize the probability that cognitive assistants will be accepted.

In conclusion, the original Technology Acceptance Model is still a popular model by dint of its simplicity. In its classic version, it includes only three variables (which makes it easy to use in a factory setting) and is considered very useful for predicting, explaining and monitoring acceptance. Beyond these advantages, a more up-to-date acceptance model may also prove very useful in the future for enriching our understanding of the acceptance of wearable cognitive assistants in industry. Such a model could draw on more comprehensive versions of the TAM, for example the UTAUT (Dwivedi et al. 2011; Venkatesh et al. 2003), or on other complementary models (see for example: Arnold et al. 2018). By moving towards a more complex model, it should be possible to draw up specific recommendations for industry with a view to optimizing acceptance of wearable cognitive assistants.

6 Integration, challenges and opportunities

A recent challenge for many industries has been to achieve a good fit between workers and their jobs (Edwards 2008). Several past approaches attempted to improve this fit by focusing on the characteristics of the workplace, such as changing its organization, increasing the automatization of assembly lines, or reducing the physical demands of work. Another possibility is a human-centered approach (Bannon 1995; Edwards 1991). According to this view, the person–job fit is two-sided (Edwards 1991, 2008; Kristof-Brown et al. 2005): first, workers should have enough capacity to perform their job optimally; second, the job should provide a good level of satisfaction and motivation for the workers. In this article, we argued that both issues can be theoretically addressed in the case of wearable cognitive assistants. Indeed, we have seen that cognitive assistants should be able to maintain an optimal working memory load by providing information when needed and in the right format. Moreover, according to the Job Characteristics Model, because they are likely to improve the feedback on performance provided to workers as well as their autonomy, wearable cognitive assistants should improve job satisfaction.

Nonetheless, the implementation of wearable cognitive assistants in the factory should be undertaken with several precautions. For example, one can imagine that a poorly designed cognitive assistant could overload workers’ working memory by giving them too much information. While cognitive psychology provides several indications about the form in which information should be displayed, it remains necessary to assess which of these indications are the most critical in the factory context. One valuable tool for this may be cognitive task analysis (Crandall et al. 2006; Schraagen et al. 2000), a set of methods for collecting, analyzing and describing what operators are thinking. More specifically, these methods aim to capture what holds the operators’ attention, what strategies they are using, and how they are making decisions. In other words, the idea is to determine the mental models used by operators and how they are using them. These tools are thus particularly well suited to identifying information that is central but difficult to access and, therefore, needs to be displayed by a cognitive assistant. Moreover, job characteristics theory indicates that cognitive assistants designed to provide feedback about performance are the most likely to increase job satisfaction, but exactly how such feedback should be given remains to be examined. To summarize, the integration of cognitive assistance technology in industry seems promising and opens a new field of research. We believe that future studies on this topic should be theoretically driven to optimize the efficacy of wearable cognitive assistants. Finally, the last important point is workers’ acceptance of new technology. Some recent studies have begun to apply the Technology Acceptance Model to wearable devices in industry. Pursuing this line of research and building a reliable model of technology acceptance is of primary importance, because an objectively efficient solution would be of limited interest if its deployment failed.

7 Discussion

Our review suggests that wearable cognitive assistants are strong candidates for improving the person–job interaction, both by enhancing employees’ capacity to respond to job demands more efficiently and by making the job more likely to fulfill employees’ needs. Moreover, we detailed these two advantages using recent results in cognitive psychology and two well-established models, the JCM and the TAM.

First, we showed how wearable cognitive assistants can contribute to offloading working memory and optimizing operators’ cognitive load by providing fast and personalized access to information. In agreement with the JCM, using wearable cognitive assistants should, therefore, lead to a more flexible work plan, which could in turn enhance work autonomy. Second, as the JCM also suggests, these assistants could contribute to other job characteristics such as feedback. When employees wear cognitive assistants, they receive segmented information that provides direct and immediate feedback on the quality and quantity of their performance. This should allow them to stay updated on how they are performing their tasks and, where necessary, to make small, fast and efficient adjustments. In addition, as the feedback can be presented in an adapted format (visual, textual, or aural, and taking into account the level of expertise), it can reduce the misunderstanding and confusion that might arise if it were received through a more classic human–machine interface. Finally, wearable cognitive assistants may also offer a way to improve communication between operators: they allow critical information to be exchanged very quickly without necessarily requiring physical movement along the assembly line. For important and concise messages, the feedback between operators could, therefore, be increased. Overall, immediate feedback (whether from the machine or from other humans) is likely to reduce work errors and save employees’ time and energy, leading to better person–machine interaction.

Second, we suggested that, as has been extensively shown with other technologies, these improvements are conditional on workers’ acceptance of the wearable cognitive assistants. We suggested that the TAM can help address this challenge, first by providing a very simple and robust model, and second by offering numerous ways to enrich it. We also reviewed studies suggesting that the perceived usefulness of the devices is the most critical point in ensuring their acceptance. As such, according to the TAM, a wearable cognitive assistant is most likely to be used if employees find it useful. Perceived usefulness can be improved when a wearable cognitive assistant helps employees perform the same tasks with less effort and energy and with greater efficacy and accuracy.

Altogether, the models we reviewed indicate that, by facilitating access to information, wearable cognitive assistants can lead to several improvements in the workplace, provided that employees accept them. Combining the JCM and the TAM provides a solid foundation for identifying the contributions of both models to improving the person–job interaction. Separately and together, the two models not only show the advantages of using wearable cognitive assistants in the factory, but also provide criteria for detecting where these devices can compensate for human limitations and where their design still needs improvement to match employees’ needs in the workplace.

8 Limitations and suggestions

It should be noted that although new wearable technologies seem promising for industry, the scientific literature still warns us to remain skeptical when applying them (Regenbrecht et al. 2005). One way to maximize the chances of these technologies becoming genuinely innovative workplace solutions is to conduct rigorous experiments in factory settings (for studies using this approach, see for example: Gorecky et al. 2017; Prinz et al. 2016). Such studies have the advantage of directly involving end users, which is likely to optimize operator acceptance (Aedo et al. 2010). Moreover, as mentioned earlier, cognitive task analysis (Crandall et al. 2006; Schraagen et al. 2000) can provide a good understanding of the task and how it is performed by operators. This analysis is useful for selecting the information a cognitive assistant should display, and for maximizing its perceived usefulness.

However, field studies still require adaptations of the traditional study tools used in experimental psychology. This constraint is particularly important in the case of physiological measures of mental load. Even using surveys may prove complicated if they take too much time to complete. All too often, operators can only devote a limited amount of time to experiments and, on top of their work, are expected to take part in training courses, optimization workshops, and the like. In addition, to prioritize data that are as close as possible to normal factory conditions, experiments need to be as unobtrusive as possible: they must take up as little of operators’ time as possible and, ideally, must not get in the way of their work.

Another point to take into account when carrying out a scientific study in a factory setting is the wide range of occupations and jobs encountered there. Since producing a representative sample means finding people who are doing a similar job, it is essential to start by clearly defining the scope of the study. A machine must be chosen that is representative of the research topic and that has enough operators performing the same task on it (on different cycles); ideally, therefore, there should be several identical machines in the factory. It is also important to bear in mind that results can only be extrapolated to similar machines. For the study to have a satisfactory impact, it is, therefore, preferable to choose a machine that is key for the group (for example, a machine that will soon be used in greater numbers) or one found in several factories. Lastly, each machine may be used to perform different tasks, so it is important to define which task is of interest to the study and to focus on it. Further experiments will be needed to explore other tasks of interest performed on the same machine.

In this review, we mentioned several times that wearable cognitive assistants should be able to proactively give important information (such as an error) to the operators. However, it is important to remember that work interruptions are often very costly for operators (e.g., Schultz et al. 2003). To avoid being counter-productive, wearable cognitive assistants should, therefore, give priority to information requested by operators over information imposed by external circumstances that could interrupt the task being carried out. The proactive delivery of information should be restricted to the most critical information and, at the very least, less important information should be given during work breaks (Bailey and Iqbal 2008; Kolbeinsson et al. 2017a, b); a minimal sketch of such a policy follows below. One way to identify this critical information is to involve operators in the studies early on, since they can provide useful indications about their specific work conditions. More generally, inappropriate design of cognitive assistants may result in a decrement in performance instead of the expected improvement. For example, Bolstad et al. (2006) list several designs that impaired users’ situational awareness (SA), namely their capacity to perceive, understand and predict the critical elements of the environment (Endsley 1995; Endsley and Connors 2008). This SA (which cognitive assistants aim to improve) can be impaired in specific situations called SA demons (e.g., the out-of-the-loop syndrome, where exaggerated automation leads the user to disengage from the task), which the design of cognitive assistants should try to avoid.
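The sketch below illustrates the interruption policy just described: reactive (requested) information is delivered immediately, proactive notifications interrupt only if critical, and everything else is held for the next work break. The priority levels, queue logic and messages are hypothetical design choices for illustration, not a published specification.

```python
"""Hypothetical interruption policy for a wearable cognitive assistant:
deliver requested info now, interrupt only for critical events, and
defer routine proactive info to work breaks (cf. Bailey and Iqbal 2008;
Kolbeinsson et al. 2017a, b)."""
from collections import deque
from enum import Enum

class Priority(Enum):
    CRITICAL = 0   # e.g., an error on the assembly line
    ROUTINE = 1    # e.g., a stock-level update

class AssistantScheduler:
    def __init__(self) -> None:
        self.deferred: deque[str] = deque()

    def on_operator_request(self, message: str) -> str:
        # Reactive information never interrupts: the operator asked for it.
        return f"DELIVER NOW: {message}"

    def on_proactive_event(self, message: str, priority: Priority) -> str:
        if priority is Priority.CRITICAL:
            return f"INTERRUPT: {message}"
        self.deferred.append(message)  # hold until a natural break
        return f"DEFERRED ({len(self.deferred)} pending)"

    def on_work_break(self) -> list[str]:
        pending, self.deferred = list(self.deferred), deque()
        return pending

scheduler = AssistantScheduler()
print(scheduler.on_proactive_event("Station 3 jam", Priority.CRITICAL))
print(scheduler.on_proactive_event("Stock at 60%", Priority.ROUTINE))
print(scheduler.on_operator_request("Show torque spec for part A12"))
print("At break:", scheduler.on_work_break())
```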

Notably, we saw that several scales can be used to measure operators’ feeling of difficulty and their perception of cognitive load and risk of errors. We also saw that the perceived reduction of errors is probably a good predictor of the perceived usefulness of wearable cognitive assistants and, ultimately, of their acceptance (Jetter et al. 2018). Interestingly, operators might not be aware of their real performance in terms of the occurrence of omission errors (Vanderhaegen et al. 2019). This attentional dissonance between felt attention and effective attention can be detected using heart rate recording (Vanderhaegen et al. 2019). This type of measurement thus offers an interesting avenue of research concerning the probable link between attentional dissonance and acceptance.

Finally, in this review, we focused only on the way wearable cognitive assistants can bring information to the operators. Nonetheless, wearable assistants could also be used as wearable sensors collecting information about the operator (such as cardiac activity, motion, and sleep) or the surrounding environment (such as gas levels or dangerous areas). These data could in turn be used to personalize the information sent to the operators. For example, a piece of information could be displayed for a longer time if the operator is tired, or the display of minor information could be delayed if the operator is experiencing a high level of stress. Although very interesting, the topic of wearable sensors in the factory brings its own scientific and ethical questions (e.g., Heikkilä et al. 2018; Osswald et al. 2013; Zander and Kothe 2011) and is, therefore, beyond the scope of our review.

9 Conclusion

To conclude, with the development of industry in the twenty-first century, operators have to integrate more information to regulate the working of machines that are increasingly automated, modular and flexible. To relieve operators of this heavy mental load, wearable cognitive assistants offer a way of facilitating access to information and information processing.

When developing and studying the application of these technologies in the workplace, we suggest that research in cognitive psychology, the JCM and the TAM should be taken into account. Together, they provide solid theoretical grounds in support of using wearable cognitive assistants to enhance the person–job fit. Research in cognitive psychology suggests that the aim of a wearable cognitive assistant should be to maintain cognitive load in a “comfort zone”, and it provides useful insight into their design. The JCM suggests that wearable cognitive assistants should also increase autonomy and provide rapid feedback, thereby increasing job performance and satisfaction. The TAM further specifies that these improvements depend on operators’ acceptance, which will be maximized when two aspects of wearable cognitive assistants, namely perceived usefulness and perceived ease of use, are seriously taken into consideration. Altogether, these findings provide a framework for designing surveys and experiments that test the hypothesized associations in future studies.