11.1 Introduction

The foundations, methods, and recent developments for the generation of the Digital Twin and its applications in manufacturing have been presented and discussed at length in the preceding chapters of this book. In the discussion sections of the respective chapters, the authors have presented and discussed specific research and practical challenges of a particular process chain or step in isolation, focused on a dedicated project goal. The aim of this final chapter is twofold: first, to chart the way for the further development of the DigiTwin solution as presented in this book, and, second, to integrate and present current trends and challenges with respect to the Digital Twin and its associated research (sub)domains, in particular Artificial Intelligence, in a comprehensive overview. To perform this integration in a structured, methodical manner, the socio-technical dimensions of the Digital Twin are also explored. Recent research directions in non-technical areas (medicine, social science, economics) show the further spread of the Digital Twin as a holistic, domain-independent approach. With this procedure, the authors follow the structure for the final chapter already developed in prior research on Concurrent Engineering [1] and Systems Engineering [2].

For the presentation and discussion of current trends and challenges in the Digital Twin, a stable taxonomy with dedicated dimensions, each described by a few characteristics, is beneficial. This set of dimensions helps to generalize the conclusions and to prevent the frequent entanglement of Digital Twin descriptions with specific industries and a vast array of application domains. The dimensions used are based upon the taxonomy of the Digital Twin developed by Otto et al. [3] and are as follows (see Sect. 3.2):

  • Data link

  • Purpose

  • Conceptual elements

  • Model accuracy

  • Interface

  • Synchronization

  • Data input

  • Time of creation.

Although these eight dimensions are not exhaustive, they make up an assessment framework that facilitates the processes to configure, implement and exploit a specific expression of the Digital Twin according to a particular strategy. This procedure focuses more on the application than on the architecture of a dedicated solution. Subsequently, the benefits of specific Digital Twins can be identified more easily. Thus, a taxonomy provides a valuable contribution to filling that research gap, as it helps to demarcate Digital Twin concepts from each other. The fulfilment of the individual dimensions based on the recent review is highlighted in Table 11.1, which is an extension of Table 3.1 [3].

Table 11.1 Practice of Digital Twin, derived from taxonomy [3]

Based on the comprehensive literature review, a particular, central type of Digital Twin can be identified (highlighted in grey), whose characteristics, in sum, are named most frequently [3]. Therefore, most known implementations of the Digital Twin exhibit the following characteristics (Table 11.1):

  • A bi-directional data link is given if data flows between the digital and the physical representation in both directions simultaneously. Some authors see it as a mandatory part of a Digital Twin [4,5,6].

  • The purpose of a Digital Twin is mostly the processing of data (monitoring, analysis, forecasting, or optimization) [7,8,9].

  • Conceptual elements: although a Digital Twin is mostly directly bound to its physical counterpart in a one-to-one ratio, the occurrence of an independent relationship is significant. In such a case, a Digital Twin can be seen in combination with other physical systems or one system can possess multiple Digital Twins [10,11,12].

  • The physical object is mostly described with identical accuracy, although partial (variable) accuracy makes sense for an efficient representation and fast processing [13,14,15].

  • Depending on the purpose and working environment (e.g. fully automated production), the implemented Digital Twins possess either a machine-to-machine interface, a human–machine interface, or both [16,17,18]. This characteristic therefore does not provide a substantial distinction.

  • Although synchronization between the Digital Twin and the physical counterpart is not always mandatory to build a Digital Twin, research mostly assumes that a Digital Twin frequently obtains data updates during its lifecycle [19] or that a synchronization occurs between the Digital Twin and the physical counterpart [20, 21].

  • Data are the “fuel” of the Digital Twin [22]. Therefore, Digital Twins receive input data from sensors or databases, either directly as pure, raw data from data acquisition devices or pre-processed (e.g., by analytic software) before use in the Digital Twin. Here, a substantial distinction could only be made by a further breakdown of the data processing [23,24,25].

  • Analysing the time of creation, which describes the chronological order in which the respective parts of the Digital Twin come into existence, it becomes evident that the physical counterpart is mostly developed first [26, 27]. This indicates that the Digital Manufacturing Twin and the Digital Instance Twin prevail in the practical exploitation of the Digital Twin [28]. It can also be explained by the fact that the Digital Twin is subsequently generated for durable goods [29]. In this sense, the Digital Twin can be understood as a means for the digitalization of the long-lasting built environment (see Chaps. 4 and 8) [30].
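
To make the use of this taxonomy concrete, the eight dimensions can be captured in a simple data structure against which a specific Digital Twin is classified and compared with Table 11.1. The following is a minimal, illustrative Python sketch; all type and value names are assumptions derived from the list above, not definitions taken from [3].

```python
from dataclasses import dataclass
from enum import Enum

class DataLink(Enum):
    ONE_DIRECTIONAL = "one-directional"
    BI_DIRECTIONAL = "bi-directional"

class Purpose(Enum):
    PROCESSING = "processing of data"        # monitoring, analysis, forecasting, optimization
    TRANSFER = "data transfer and repository"

class Accuracy(Enum):
    IDENTICAL = "identical"
    PARTIAL = "partial"                      # variable accuracy for efficient representation

class Interface(Enum):
    MACHINE_TO_MACHINE = "M2M"
    HUMAN_MACHINE = "HMI"
    BOTH = "M2M+HMI"

class TimeOfCreation(Enum):
    PHYSICAL_FIRST = "physical counterpart first"
    DIGITAL_FIRST = "digital part first"

@dataclass
class DigitalTwinProfile:
    """One concrete expression of a Digital Twin along the eight dimensions."""
    data_link: DataLink
    purpose: Purpose
    conceptual_elements: str    # e.g. "one-to-one" or "independent"
    model_accuracy: Accuracy
    interface: Interface
    synchronization: str        # e.g. "continuous", "periodical", "none"
    data_input: str             # e.g. "raw", "pre-processed"
    time_of_creation: TimeOfCreation
```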

Besides the eight dimensions explained above, further views and aspects play an important role in the conception, implementation and exploitation of the Digital Twin [31]. The data volume and complexity caused by new concepts and associated domains such as Cyber-Physical Systems (CPS) and the Internet of Things (IoT) demand that a Digital Twin continuously works with current data [32]. This calls for methods and architectures for modelling and simulation with a high level of mutual interaction [33]. These developments increasingly require an interdisciplinary or transdisciplinary approach to solve Digital Twin challenges, as the underlying constructs are themselves inter- or transdisciplinary (e.g., they include new types of labour organisation or intercultural collaboration) [34]. The implementation is reinforced through different levels of aggregation, across system scope and time [35]. With respect to time, the shift of industry and academia towards an integrated perspective on products and services implies that a full lifecycle perspective on development is becoming prevalent [36]. The perspective of the Digital Twin is mostly holistic: it remains active during the entire product lifecycle rather than being constrained to a single phase [37].

For a rapidly emerging system like the Digital Twin, different levels of technological uncertainty require different levels of integration [38]. Integration difficulties become prominent, however, in implementation projects with higher uncertainty. The Digital Thread offers a solution for a continuous connection of all digital models across the entire product lifecycle [39]. It impacts recent trends regarding digitization, such as the consistency and traceability of data (see Chap. 7). In addition, there are consequential issues of configuration and risk management [40].

The outline of the chapter is as follows. In Sect. 11.2, the research and development directions of the DigiTwin solution presented in this book are drawn and discussed. In Sects. 11.3 and 11.4, Self-X mechanisms for experts and non-experts are presented. In Sect. 11.5, the trends of the Digital Twin are discussed, following the eight dimensions given above. In a similar fashion, Sect. 11.6 highlights challenges in research and practice. Finally, a demarcation from related areas and a conclusion on the presented work are given in Sect. 11.7.

11.2 Further Developments of DigiTwin Solution

This book shows how a Digital Twin of a production system can be efficiently created. The focus of the explained method is on the initial set-up, which means that a Digital Twin can be created once with the method, but no further adjustment of the Digital Twin is carried out after the method has been performed. The operation of the Digital Twin can take place from this point on, but production systems change over time. The following aspects are particularly relevant here, although other changes to the production system are of course also conceivable:

  • Adjustment activities change the production sequences, which in turn changes processing times, for example. This is driven by the general aim of production management to increase productivity through minor adjustments. However, negative productivity developments are also conceivable if, for example, higher quality requirements arise or machines wear out.

  • The production schedule may change over time. This input variable for the production system, meaning which product is to be produced at which time, is usually subject to constant alterations. However, major changes can also occur, for example due to new customers or modified sales strategies, so that the changes in the production schedule have an impact on the production system and its structure.

  • New products or product variants cause adjustments to be made to the production processes or new ones to be developed. In the simplest case, the changes that arise for the Digital Twin may involve the insertion of new products and the specification of processing times, or in the most extensive case they may lead to a layout adjustment of the production system.

  • Besides products, new machines may be added to the production system or old machines may be removed from the production system. This leads to new product movements through the production system, resulting in a comprehensive change.

  • The layout of the production system can change by moving objects in the production hall. This results in changed product movements, which in particular changes the transport times.

The aforementioned changes mean that the resulting Digital Twin must be updated over time in order to remain a valid representation of the real production system. The relevance of this aspect was presented by Denkena et al. [41]. However, updating the Digital Twin is not part of the method presented in this book. Nevertheless, the aspect of updating is discussed in this section in the form of an outlook, since the updating process was always considered during the development of the method, and the method as such imposes requirements on updating. To illustrate the updating process, three general cases are explained here, which involve different levels of complexity or effort for updating and should therefore be considered separately from each other. In ascending order of complexity, these cases are:

  1. Parameter-based updating of the Digital Twin: The changes in the real system mean that the Digital Twin can remain in its current form, but individual parameters have to be updated.

  2. Structural partial update of the Digital Twin: The changes in the real system require an adjustment of the Digital Twin, but the general structure can be preserved, so that a new scan is not necessary or only isolated areas have to be re-recorded.

  3. Structural update of the Digital Twin: The changes in the real production system are so extensive that a new recording is required and the Digital Twin has to be completely recreated.

11.2.1 Parameter-Based Updating of the Digital Twin

The first case applies when, as mentioned above, the production schedule is changed or adjustments to the production system lead to altered processing times. Moreover, new products that do not lead to structural modifications of the production system can also be addressed with this case. It is therefore a matter of adjusting parameters of the Digital Twin, such as processing or set-up times.

For this case, update options are already provided in the method presented in this book. For example, with predefined user interfaces, new products can be inserted simply and without knowledge of the simulation software. Machining and set-up times can be added or updated in the Digital Twin via the work plans that can be stored here. The menu navigation therefore also allows existing products or the production schedule to be adapted. These processes and functionalities were presented in detail in Chap. 9 of this book.

A useful addition for this case is the connection of the Digital Twin to the existing IT systems, especially Enterprise Resource Planning (ERP) or Manufacturing Execution Systems (MES). However, a connection to machine or operating data is also reasonable at this point, as for example explained by Donhauser et al. [42]. With this interface connection to ERP or MES, information about products, production schedules, work plans, machining and set-up times, or similar can be transmitted directly to the Digital Twin. The updating of the Digital Twin is carried out automatically here, so that this process does not require any further handling and thus provides a practical extension to the method presented.

For the interface from ERP and MES to the Digital Twin, a data transfer is also provided in the method described in this book. Here, transfer tables with a defined format were placed in the simulation software for each product and production schedule. This format can be utilised for an ERP or MES export in order to transfer the data in a form that can be directly integrated into the Digital Twin. This procedure was also prototypically tested in a use case, but there the MES export had to be pre-processed before it was transferred to the simulation software in order to achieve the desired form of the data. For this purpose, a macro was developed in MS Excel, so that the data transfer can be highly automated. However, it would be desirable if, in the next step, the MES export were directly available in the defined format. This is a planned further step.
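
To illustrate such a pre-processing step, the following minimal Python sketch reshapes a hypothetical MES export into a per-product transfer table. All column names, the delimiter, and the time units are assumptions; the actual transfer-table format is the one defined in the simulation software.

```python
import csv

def mes_export_to_transfer_table(mes_csv: str, out_csv: str) -> None:
    """Reshape a hypothetical MES export into the transfer-table layout."""
    with open(mes_csv, newline="") as f:
        rows = list(csv.DictReader(f, delimiter=";"))

    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f, delimiter=";")
        writer.writerow(["Product", "Operation", "Machine",
                         "SetupTime_min", "ProcessingTime_min"])
        for r in rows:
            writer.writerow([
                r["MATERIAL_NO"],               # product identifier in the MES (assumed)
                r["OPERATION_NO"],              # work-plan step (assumed)
                r["WORKCENTER"],                # target machine / station (assumed)
                float(r["SETUP_TIME"]) / 60.0,  # assuming the MES stores seconds,
                float(r["PROC_TIME"]) / 60.0,   # while the twin expects minutes
            ])

# Usage: convert one export file into the format the simulation software ingests.
mes_export_to_transfer_table("mes_export.csv", "transfer_table_product.csv")
```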

The simplest update process described here, which only involves parameter adjustment, is thus achievable in the next development steps. The basic functionalities are already considered in the current method. However, it should be noted at this point that, due to the large number of different MES and ERP systems, a generic solution can only be achieved with standardised transfer protocols.

11.2.2 Structural Partial Update of the Digital Twin

A much more far-reaching updating process occurs when, in addition to the parameters of the Digital Twin, the structure must also be adapted. This is always the case when objects, such as machines, are added, removed or displaced in the real production system. However, this case can also arise with comprehensive changes to products or the production schedule, if these result in major modifications to the production system that cannot be mapped by a pure parameter adjustment of the Digital Twin.

A structural update can of course be carried out by people who have expertise in the simulation software. However, this manual process would be costly as well as error-prone, and the corresponding IT expertise must be available. In the spirit of the method described in this book, an efficient update method would therefore lead to a better result, since again no IT expertise is needed, the effort is low, and the costs for the update are predictable.

For this reason, an expansion of the method is planned for the next development steps and is presented below as an outlook. Accordingly, it is a research concept at this point in time, without any current findings. Figure 11.1 provides an overview of the research approach.

Fig. 11.1 Research approach for the structural update of the Digital Twin

The approach consists of three key aspects, shown in Fig. 11.1, which are needed to proceed from the current Digital Twin (left-hand side of the figure) to an updated Digital Twin (right-hand side of the figure):

  1. It is intended that the change in the real system is recorded by the customer himself, which is particularly beneficial if the customer is located far away. This means that if, for example, a machine is moved to a different location in the production hall or a new machine is added to the production system, only this changed area is scanned. In this way, the overall effort is reduced to a few objects of interest.

    To enable the customer to do this, a simple process with a simple, easy-to-use device is needed. Here, an application for common smartphones is developed with which a production area can be recorded. Photogrammetry is used to generate the required point cloud. During the recording, the customer receives augmented reality feedback on whether the respective area has been correctly recorded: captured objects are displayed in green on the mobile phone display. Finally, the recording can be sent directly to the offices that will further process it. For this purpose, the secure data transfer presented in Chap. 7 is used. The volume of data generated and transferred here is significantly lower than for a scan of the entire production system. If the initial result is not sufficient, this procedure can be repeated as often as necessary. Accordingly, a simple smartphone application is created that enables the client to scan the changed areas himself. The service offered here is a lean process that can be carried out quickly without the need to visit the customer on site.

  2. The second aspect is the object recognition in the changed areas and the adaptation of the Digital Twin. Here, the method presented in this book must be adapted so that single areas of the Digital Twin can be segmented and modified. This poses a particular challenge because the edges between the existing and the modified areas have to be designed to fit each other; this must be resolved by an additional registration. Apart from that, this new updating method is quite similar to the method already explained for the initial creation of the Digital Twin. Here, too, a point cloud is first created from the scan (but here by the customer). Subsequently, object recognition identifies new objects and inserts them into the CAD model, which in a further step is transferred to the simulation software to adjust the Digital Twin.

  3. The last aspect involves an extension that mainly relates to the parameter-based updating of the Digital Twin. Likewise, this aspect represents an extension of the method presented in this book. The core idea is that behavioural models are stored for the single objects, especially machine tools. These behavioural models represent the functions and capabilities of the objects, so that changed conditions, such as new products, wear, malfunctions, etc., are directly represented by the models and do not require any further updating process for the Digital Twin. Accordingly, this aspect removes the need for updating processes for the Digital Twin.

With the three aspects presented, an efficient updating process is therefore available that represents a sensible extension of the existing method. A scalable Digital Twin of manufacturing is created in a short time with low effort, in which behavioural models of machines are embedded. In the sense of the automation pyramid, a vertical integration between the different planning levels of manufacturing is created, making the production planning and control of industrial companies more reliable, efficient and comprehensive [43]. The resulting Digital Twin becomes more accurate through parameterisation from the machine level and requires fewer updating processes.

11.2.3 Structural Update of the Digital Twin

For the last case of an update process, it is assumed that the changes in the production system are so large that partial updates of single areas cannot be carried out with a reasonable effort. Such cases occur in the context of fundamental organisational changes to the production system or when a far-reaching restructuring of the system is carried out.

The partial update process described in Sect. 11.2.2 is then too costly, as the additional effort required to assemble existing and updated parts of the Digital Twin becomes too great. Accordingly, in such a case, a renewed implementation of the method from this book would be recommended. This means that the complete factory floor is scanned again (see Chap. 4) and the Digital Twin is recreated with the upstream object recognition (see Chap. 8).

It should be emphasised at this point, however, that most processes can be shortened here, since an initial creation already exists. Accordingly, company-specific data on products, for example, no longer needs to be fundamentally re-entered, but can be taken over from previous processes. Another useful extension is the integration of the machine models, as described in Sect. 11.2.2. This additional aspect then removes the need for further updates.

By using the taxonomy explained in Sect. 3.2, the classification of this new approach can be updated from the status given in Sect. 3.6 as follows. With regard to the data link, this Digital Twin becomes conditionally bi-directional (in the case of an update). The purpose dimension remains data transfer and repository. The accuracy dimension remains partial (e.g., adjustable according to the process requirements). The interface dimension is still machine-to-machine. Synchronization between the physical and the digital part is non-existent. The data input is fed with raw data. The time of creation becomes changeable: at first, it is pre-defined by the built environment, with the physical counterpart coming first; afterwards, the order can change to the digital part first and vice versa.
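
Using the illustrative profile structure sketched in Sect. 11.1, this reclassification could be written down as follows. This is a hypothetical instantiation; in particular, the value for the conceptual elements is an assumption, since this dimension is not restated above.

```python
updated_solution = DigitalTwinProfile(
    data_link=DataLink.BI_DIRECTIONAL,               # conditionally, only in case of an update
    purpose=Purpose.TRANSFER,                        # data transfer and repository
    conceptual_elements="one-to-one",                # assumption: not restated in the text
    model_accuracy=Accuracy.PARTIAL,                 # adjustable to the process requirements
    interface=Interface.MACHINE_TO_MACHINE,
    synchronization="none",
    data_input="raw",
    time_of_creation=TimeOfCreation.PHYSICAL_FIRST,  # changeable after the initial build
)
```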

11.3 Self-X Digital Twin on Backend

Advanced production systems need to interact with the physical and social world, as systems explicitly react to human behaviour and environmental conditions, and influence both through their actions. Thus, any two (sub-)systems coming into contact with the same part of the environment or group of users can influence each other in ways that can hardly be anticipated at design time. Undesired effects may be the consequence of unintended, implicit interaction of elements that were not fully predictable or relevant beforehand. When designing Digital Twins, respective mechanisms have to be implemented, making the system autonomic through a combination of self-capabilities, such as self-configuration, self-healing, self-optimization, self-protection, self-awareness, etc. [44]. This approach makes a system self-adaptable and a self-deciding support system for various activities [46].

The Self-X mechanisms of Digital Twins are suited to increasing the value of production systems in operation [45]. Subsequently, the focus lies on continuous improvement with respect to different objectives. For example, energy or power optimization can be evaluated under different conditions. Other optimization scenarios can include:

  • Reduced breakdown

  • Reduced vibration

  • Increased mechanical life

  • Reduced fuel consumption.

The Digital Twin describes a production system including all necessary characteristics down to the sub-systems of individual units. Therefore, the IT components of mechatronic systems comprise the control system in the physical machine and its entire Digital Twin. With the Self-X architecture, another component is added: the Self-X-Controller, which is responsible for deciding on and initiating the Self-X activities. As the Self-X concept adds a lot of new functionality, a new architecture is needed for extending the Digital Twin concept with Self-X-Controllers (Fig. 11.2). The production layer communicates with AI services through the connecting layer, sending real or simulated data to AI applications. While the AI layer evaluates individual processes, the additional Self-X-Controller layer tracks superordinate trends. If the difference between interpreted physical sensor readings and digital objectives becomes too high, the Self-X-Controller begins to intervene. The architecture must be scalable, distributed, and flexibly adaptable for different use cases in production [46].

Fig. 11.2 Layers from production to Self-X-Controller

For an efficient implementation of this new architecture, information (protocols of production processes, simulation results, machine states, etc.) must be transferred in real-time, making it accessible for the Artificial Intelligence (AI) services.

With this data, the AI service generates an AI model. If its accuracy is sufficient, the AI model is downloaded directly to the Edge Device at the machine and executed within the production environment. During production, the AI models predict all values and the Self-X-Controller assesses the difference between the real and the predicted values. If the difference is too high, the Self-X-Controller sends the data to the backend, a supporting higher-level IT infrastructure. If necessary, the AI model is retrained, including the most recent data, to increase its accuracy. If the accuracy of the new model is sufficiently high, the new AI model can be deployed on the Edge Device at the machine [47].

The Self-X-Controller itself uses AI methods like machine learning in order to realize the Self-X properties for improving the production system. In this case, the backend generates the AI models with supervised self-learning systems. The AI models are deployed to the Edge Device at the machine by the backend. Again, if the accuracy is too low, the backend updates the AI model on the Edge Device (Fig. 11.3) [48].

Fig. 11.3 Steps from the data with the Self-X-Controller

Fig. 11.4 Steps and phases on the Edge Device

Fig. 11.5 Digital Thread connects lifecycle phases across six layers

First, the real object sends its data to the backend, where all received data is stored (1). Step (2) consists of querying the data with defined rules in order to generate the AI model (3). The generated AI models are validated (4) with the data. If everything is sufficient, the new AI model (5) is complete and the backend deploys it to the Edge Device. Once the first AI model is deployed, the continuous validation process (6) between real and predicted values starts. Should the validation process fail, a signal is sent to the Self-X-Controller. The Self-X-Controller (7) analyses the problem and restarts the training (3), also incorporating all newly stored data. After the training finishes, the backend validates (4) the new AI model. If the accuracy of the new AI model (5) is improved, the backend deploys the new AI model to the Edge Device.
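
A minimal sketch of this loop, written in Python, could look as follows; the classes, the stubbed training and validation, and both thresholds are illustrative placeholders rather than the actual implementation.

```python
import random

ACCURACY_THRESHOLD = 0.95    # assumed acceptance criterion for a new AI model
DEVIATION_THRESHOLD = 0.10   # assumed limit for the real-vs-predicted gap

class EdgeDevice:
    def __init__(self):
        self.model = None                 # the currently deployed AI model

class Backend:
    def __init__(self):
        self.data = []                    # (1) all received data is stored

    def receive(self, record):
        self.data.append(record)

    def train_and_deploy(self, edge: EdgeDevice):
        training_set = [r for r in self.data if r is not None]  # (2) query with defined rules
        model = ("ai-model", len(training_set))                 # (3) generate/retrain (stub)
        accuracy = random.uniform(0.90, 1.00)                   # (4) validate with data (stub)
        if accuracy >= ACCURACY_THRESHOLD:                      # (5) accept the new model and
            edge.model = model                                  #     deploy it to the Edge Device

class SelfXController:
    def validate(self, real: float, predicted: float,
                 backend: Backend, edge: EdgeDevice):
        # (6) continuous validation between real and predicted values
        if abs(real - predicted) > DEVIATION_THRESHOLD:
            # (7) analyse the problem and restart training with all newly stored data
            backend.train_and_deploy(edge)

# Usage: the real object streams data; the controller reacts to validation failures.
backend, edge, controller = Backend(), EdgeDevice(), SelfXController()
backend.receive({"sensor": 0.42})
backend.train_and_deploy(edge)            # initial model generation and deployment
controller.validate(real=0.50, predicted=0.35, backend=backend, edge=edge)
```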

11.4 Self-X Digital Twin on Edge-Devices

In the near future, the generation and roll-out of Digital Twins for production systems including AI services is expected not to require a high degree of expertise in the field of AI technologies. Following the backend-supervised Self-X Digital Twins described in the previous section, this next step will help enable the widespread usage of Digital Twins in production.

In this future step, a user gets a new Edge Device including its Digital Twin. This device has some interfaces like Profinet, Ethernet, etc. When the customer plugs the device into the switch cabinet, it configures the communication interface and establishes the kind of Digital Twin to be used [49].

Phase 1 consists of storing and interpreting all received signals. The device has no prior knowledge of the related physical processes. Among all signals, the most relevant ones must be determined and their types classified in this initial stage. In this phase, it is important to recognize underlying correlations and perform dimension reduction.

Phase 2 consists of the signal classification. After a certain time, patterns are deduced from the incoming signals, such as “the machine is in operation” or “the machine does not operate”. The system analyses the data in more and more detail. After this, the system has classified the signals and recognized allowed and disallowed modes of operation for the machine.

Phase 3 is the model creation. The device uses all stored classifications and generates the initial AI model. It automatically calculates a topology of an AI model based on the selected signal classifications and trains the model using the associated data.

Phase 4 includes the continuous monitoring and self-optimization based on real-time data. The Digital Twin monitors the physical system by itself and optimizes processes on the basis of new data records.

When the device reaches the fourth and final phase, it is able to modify itself for the first time, allowing for self-optimisation. To ensure the accuracy of the AI models in use, the Self-X-Controller begins to continuously monitor the difference between real and predicted values. If the deviation is too high, the Self-X-Controller triggers the learning process of the AI model, thus reducing the gap between real and predicted values. Once the gap is under a defined threshold, the Self-X-Controller stops the learning process. The device is also capable of learning and evaluating new AI models in parallel to ongoing operations. If a new AI model shows improved performance, that is, a better approximation of the underlying incoming sensor data, the Self-X-Controller switches models and activates the improved AI model [50]. Figure 11.4 shows the assignment of the steps to the phases.
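
The four phases can be illustrated with a compact pipeline, sketched here under the assumption that a scikit-learn-style library is available on the Edge Device; the signal dimensions, the model choice, and the threshold are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Phase 1: store all received signals and reduce dimensions via their correlations
signals = np.random.rand(1000, 32)           # stand-in for the raw signal buffer
pca = PCA(n_components=8).fit(signals)
reduced = pca.transform(signals)

# Phase 2: classify the signals into operating modes (e.g. "in operation" / "idle")
modes = KMeans(n_clusters=3, n_init=10).fit_predict(reduced)

# Phase 3: generate the initial AI model; here it learns to predict the next value
X, y = reduced[:-1], reduced[1:, 0]
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

# Phase 4: continuous monitoring; a too-large gap between real and predicted
# values would trigger the Self-X-Controller to restart the learning process
def deviation_too_high(window: np.ndarray, threshold: float = 0.1) -> bool:
    r = pca.transform(window)
    return float(np.mean(np.abs(r[1:, 0] - model.predict(r[:-1])))) > threshold

print(deviation_too_high(np.random.rand(100, 32)))
```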

The process is analogous to the backend-enabled approach shown before, but now all functionality is located on the Edge Device directly at the machine.

11.5 Trends in Digital Twin

In this section, trends in research and practice are identified. This overview covers different levels of system scope and technological uncertainty, as applied to the eight dimensions identified in the introduction. Together, this allows a more accurate expression of the eight dimensions in terms of aggregation levels [2].

Keeping the above in mind, the following considers the trends evolving in the different dimensions of the Digital Twin. Although the taxonomy is not subdivided in further detail and all dimensions are equally weighted, this structural classification of dimensions is not as homogeneous as it appears. Within the dimensions, the handling of data is a prominent topic. Looking at the recent literature, the future Digital Twin will have the following properties regarding data handling [51]: rapid bi-directional data links [52], comprehensive processing of data [53], and performant synchronization between the physical counterpart and the Digital Twin (continuous or periodical) [54]. Building on a deep integration of Digital Twins, modern technologies like CPS and IoT play a crucial role, but the further development of the Digital Twin is not limited to their inclusion [55].

Furthermore, while a Digital Twin itself is a deeply integrated system, its integration with further IT systems is of crucial importance. The concept of Configuration Lifecycle Management (CLM) has a significant influence on the Digital Twin, which makes it possible to create temporally valid views of the product across IT system boundaries and to manage the product configurations across all phases of the product lifecycle. The essential function of CLM is to generate the various views of the digital product model in the course of the lifecycle, to keep them consistent and to document their validity over time. It uses cross-system and cross-disciplinary baselines for this purpose. These baselines document the status of the configuration at a specific point in time or the degree of maturity and, therefore, also control the representation of the Digital Twin. Baselines enable companies to immediately and reliably respond to the question of whether and how the product or asset meets the requirements placed on it at any point in the process or in what state the asset was at a defined point in time, for example which product configuration was delivered to the customer [56].

A high-performance PLM integration platform, containing connectors to all involved IT systems, is required to manage the configuration of a product in a traceable manner along its entire life cycle (Fig. 11.5). The higher the number of different systems interacting across various domains, the higher the importance of proper traceability of data and decisions. As an intermediate layer across IT systems, it creates the prerequisites for bringing together the information from the individual IT systems in a manner corresponding to the Digital Thread concept. Here, the Digital Thread is the unifying element which connects all phases of product lifecycle with layers of the enterprise architecture [56].

In industries such as mechanical and plant engineering or shipbuilding, enterprises face the challenge that the manufacturer who builds and provides the Digital Twin is not necessarily the operator and user who feeds it with operating data. Therefore, both the digital data and the operating data, or at least part of them, must be exchanged and synchronized across companies in order to keep the Digital Twin up-to-date and to be able to use the operating data for the continuous improvement of the physical asset. Consequently, issues such as data security [57], protection of intellectual property [58], and ownership of data [59] play a very central role when setting up and using a Digital Twin application [56].

Today, more and more customers are demanding that their suppliers deliver digital data and models to support Digital Twin applications along with physical assets. With the help of CLM, users can not only control the scope of information provided, but also the level of detail of the information and the formats in which it is provided. They can be compiled largely automatically and made available to the customer as a data package, for example in 3D PDF format [59].

The basis for traceability and CLM is the lightweight integration of the domains involved in the development and their IT tools (data sources) through the generation of a consistent data model. The data objects contained in this data model serve as placeholders for the results that typically occur during product development (e.g. based on the V-model): deliverables or configuration items (CIs). With the help of predefined link types (trace links) with permitted start and end data objects, traceability requirements across domains and IT tools are implemented. The data model and link types can easily be expanded or configured [56]. In this way, Digital Twins can be embedded in the PLM system landscape.
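
A minimal sketch of such a data model could look as follows; the object types and link-type names are assumptions, chosen to mirror a V-model-style development.

```python
from dataclasses import dataclass

# Predefined link types: each trace link restricts the permitted start/end object types.
LINK_TYPES = {
    "satisfies":  ("Design", "Requirement"),
    "verifies":   ("TestCase", "Requirement"),
    "implements": ("Software", "Design"),
}

@dataclass(frozen=True)
class DataObject:
    """Placeholder for a development result: a deliverable or configuration item (CI)."""
    name: str
    type: str          # e.g. "Requirement", "Design", "TestCase"

def create_trace_link(link_type: str, start: DataObject, end: DataObject):
    permitted_start, permitted_end = LINK_TYPES[link_type]   # easily expandable configuration
    if (start.type, end.type) != (permitted_start, permitted_end):
        raise ValueError(f"'{link_type}' does not permit {start.type} -> {end.type}")
    return (link_type, start, end)

# Usage: a cross-domain trace link from mechanical design to a system requirement.
req = DataObject("REQ-042", "Requirement")
design = DataObject("DES-007", "Design")
link = create_trace_link("satisfies", design, req)
```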

The Digital Twin, as a means to work with a high-fidelity representation of the real world, is also gaining attention outside the manufacturing industry. A comprehensive representation of trends may also include research from healthcare [60], the social sciences [61], and economics [62].

The Digital Twin initiative aims at revolutionizing healthcare for the benefit of citizens and society through the creation of Digital Twins—computer models of individuals that allow identification of the individually best therapy, prevention or health care, making unavoidable mistakes (such as ineffective treatment recommendations) in complex situations safely, cheaply, and quickly on computer models of reality rather than in reality. For that, the vast and ever-growing knowledge base on biological mechanisms as well as healthcare and research data generated from countless individuals was leveraged to generate a generic reference model. This can be individualized with molecular, analytical and other diagnostic data from an individual into a Digital Twin, a digital self that accompanies each person from birth onwards, adapting and reacting as humans do. Digital Twins are accurate computer models of the key biological processes within every individual that keep humans healthy or lead to disease. Using these twins, individually optimal therapies, preventive or lifestyle measures can be identified, without exposing individuals to unnecessary risk and the healthcare system to unnecessary costs [60].

A virtual society can be built by a collection of Digital Twins which are autonomous systems (agents) and their activities. The aim of such Digital Twins is to represent and strengthen human activities in virtual societies. Such Digital Twins need to be able to interact with each other using operations and, at least in part, will inevitably need to be able to interact autonomously like human beings. Digital Twins will also carry out simulations and other calculations and actively feed the results back to the real targets. Thus, they will bridge the gap between the real and the virtual. Digital Twins can be viewed as part of CPS when considered together with real human beings or objects that have received feedback. Additionally, there are Digital Twins generated by other Digital Twins, with properties that do not exist in reality (called derivative Digital Twins). Digital Twins are autonomous agents that internally hold data on people and objects; thus they can form some kind of society and need an appropriate environment. By giving the operating environments of Digital Twins or derivative Digital Twins high-level functionality, or even linking them with external applications, it will be possible to create various virtual societies [61].

The Digital Twin of the economy provides an economic tool and platform which allows the dynamic-experimental development as well as the objective-transparent evaluation of macroeconomic policies. It is constructed on two consecutive methods: first, the building plan of the underlying economic model is established by an economic architecture, that is, an investigative approach to uncover hidden contexts for more transparency; second, the complex nature of the economy, as designed or created after that blueprint, is then dynamically and realistically simulated by agent-based modelling. The purpose of this tool and its platform, apart from the provision of reliable recommendations for political decision-making, is the promotion of interdisciplinary research to enhance the state of knowledge in economics and to facilitate the transformation to a much more economically or ecologically sustainable as well as socially fairer economy or society. Macroeconomists and politicians thereby get an adequate instrument which can more realistically replicate the real-life economy and, thus, provide better (unbiased) advice [62].

11.6 Challenges in Digital Twin

In this section, challenges in research and practice for the near future are identified. This section builds upon the trends identified in Sect. 11.5—whereas current initiatives were described in detail there, this section will focus on major assumptions, limitations and gaps in Digital Twin research and applications.

Some authors perceive the greatest limitation of the definition of the Digital Twin in its restriction to “physical products” in the real space. Consequently, it would be nonsensical to call the concept a Digital Twin as long as the physical counterpart is not yet manufactured or existing [63]. Moreover, the physical twin emerges as the realization of a digital model. However, the Digital Twin comprises several use cases and is more a whole strategy than just a single instance. The digital model of a not yet existing physical entity in the real space can be seen as a Digital Twin, as long as a physical product is emerging in the near future and the use case contributes to the overall twinning strategy [63].

The first challenge is the implementation of the Digital Twin. Without a unified definition and standard architecture, Digital Twin implementation is still not the subject of systematic consolidation, and most studies as well as commercial offerings cover only partial process views. For the conception of the Digital Twin, the first methodical results are available [63], based on the Digital Twin types explained in Sect. 3.3. Such procedures facilitate agile approaches for the implementation of the Digital Twin [64], based on comprehensive Product Lifecycle Management (PLM). One explanation for this gap between theory and practice is the differing interpretations of how a Digital Twin can actually be applied [65].

Similarly, a managerial contribution could be the development of a process model to standardize the implementation process and the selection of components of a Digital Twin [3]. To achieve this goal, several user-driven industry initiatives have been constituted [66]. The aim of such associations is to bring the parallel development strands together to form an industrial Digital Twin and to develop it as an open-source solution together with the member companies. At this time, practitioners cannot compare existing Digital Twins, or those under development, with taxonomies from the literature, nor equip their Digital Twins with all necessary characteristics. One of the first tasks is the definition of a reference architecture for the Digital Twin, similar to RAMI [67].

With regard to the eight dimensions of the Digital Twin, some challenges are weighted higher depending on the domain in which the Digital Twin is being implemented [68]. These challenges are primarily technical and mainly reflect the data handling: acquisition, collection and processing of high-dimensional data; time-series, multi-modal, and multi-source data communication. In a large organization, collecting data from a considerably large number of IoT devices, collating it according to time frequencies and preprocessing it for input to machine learning is a challenging big data task [69]. In many user scenarios, the proper handling of Big Data is the decisive criterion for the implementation of the Digital Twin [70].

High-fidelity bi-directional synchronization is especially challenging for large-scale industries and requires resources and a high-throughput Industrial Internet of Things (IIoT) connection [71]. For proper functioning, the Digital Twin demands synchronization of data, model, and service (DMS). An IoT system constructed by this method looks like a “device-Digital-Twin-application”. It requires a specific IIoT architecture: local collection devices (the edge) and cloud systems implement the unified DMS framework to form partial Digital Twins with different functions, which logically constitute the Digital Twin of the equipment; this isolates the access between business applications and machines and makes the system more secure [72]. Compared with deploying the Digital Twin only in the cloud, this concept strengthens the collaboration between the local side and the cloud and, moreover, promotes the synchronization of the virtual and the real [71].

Filling the gaps between virtual and physical systems by a physically bound Digital Twin can open new perspectives in Smart Manufacturing. In this case, the Digital Twin represents manufacturing cells, simulates system behaviours, predicts process faults, and adaptively controls manipulated variables. Apart from highly performant interfaces, the manufacturing cell demands Machine Learning approaches for the industrial control process that are able to acquire process knowledge, schedule manufacturing tasks, identify optimal actions, and demonstrate control robustness. The intelligent control algorithms are trained and verified upfront, before being deployed to the physical world for implementation [73]. Managing high-dimensional data alongside the various other software used by an industry, and combining these with expert deep learning skills and equipment, is a tedious task [69]. New methods like continual learning and federated learning seem promising for the use of the Digital Twin but require further research [74, 75].

Accuracy and data quality (see Chap. 7) challenge the Digital Twin in many aspects: using partial accuracy can improve the overall system performance [76]. Improving hydrodynamic model accuracy without compromising computational efficiency has always been of high interest for safe and cost-effective marine operations. With the continuous development of sensor technology and computational capacity, an improved Digital Twin concept for vessel motion prediction can be realized based on an onboard online adaptive hydrodynamic model. This makes possible a practical approach for tuning important vessel hydrodynamic model parameters based on simulated onboard sensor data of the vessel motion response [76].

One of the almost unlimited fields for the Digital Twin is the fast and predictive identification of misbehaviour and damage of technical products, based on a simplified framework in the context of dynamical systems. Especially promising is the integration of physics-based models with Machine Learning in order to investigate several damage scenarios. This approach uses an interpretable (physics-based) model to build a fast Digital Twin that is connected to the physical twin to support real-time engineering decisions. Different classifiers and different model parameters can be considered to achieve the best accuracy. The most important advance of this approach is the possibility of integrating physics-based models with machine learning for different scenarios [77].
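
As an illustration of this idea (a generic sketch, not the framework of [77]), a simple physics-based model can generate training data for several damage scenarios, from which a classifier learns to identify the scenario observed on the physical twin:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def simulate(stiffness, n=200, dt=0.01, m=1.0, c=0.05):
    """Physics-based model: free vibration of a damped mass-spring system."""
    x, v, xs = 1.0, 0.0, []
    for _ in range(n):
        a = (-c * v - stiffness * x) / m   # Newton's second law
        v += a * dt                        # semi-implicit Euler integration
        x += v * dt
        xs.append(x)
    return np.array(xs)

# Assumed damage scenarios: healthy, moderate, and severe stiffness loss.
scenarios = {0: 100.0, 1: 80.0, 2: 60.0}
X, y = [], []
for label, k in scenarios.items():
    for _ in range(50):                    # noisy realizations per damage scenario
        X.append(simulate(k) + np.random.normal(0, 0.02, 200))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# Usage: classify a response observed on the physical twin (here itself simulated).
observed = simulate(78.0) + np.random.normal(0, 0.02, 200)
print("identified damage scenario:", clf.predict([observed])[0])
```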

Human factors affect the implementation and operation of the Digital Twin in multiple roles and ways. In the implementation phase of a Digital Twin, building the software requires a development team of software engineers and subject-matter experts to test the suitability of the software for the particular task. This gives the development its transdisciplinary character [78]. Moreover, simulation-based optimization provides faster and more efficient solutions. Conventionally, solutions calculated by analytical models are fed to the simulation software manually (for example, hardware-in-the-loop) rather than the optimization being done on the simulation software by means of mechanisms like reinforcement learning [69]. For larger systems, which include supply chain networks, logistics, etc., a global implementation level is more desirable [69].

For the user and operator of a Digital Twin, handling and presenting the huge amount of data and information collected in a Digital Twin during operation in an intuitive manner, without mental overload, remains a challenge. By integrating graphics, audio, and real-world objects, Augmented Reality (AR) enables users to visualize and interact with Digital Twin data at a new level. AR gives the opportunity to provide an intuitive and continual visualization of the Digital Twin data. The challenge is to move parts of the Human–Machine Interface (HMI) to a portable AR device (Microsoft HoloLens, Google Glass, or similar) in order to visualize the Digital Twin data of a dedicated process in a real environment (e.g. manufacturing, maintenance). Such an application supports the operator in monitoring and controlling the technical system (e.g. a machine tool) and, at the same time, enables interaction with and management of the Digital Twin data, providing an intuitive and consistent HMI [79]. In a logistics use case, a concept integrating the Digital Twin and adaptive automation was developed. The use of a digital representation with adapting autonomy allows the strengths of humans and machines to be combined, so that the operator uses his cognitive advantage to provide specific support when the machine reaches its limits [80].

Finally, security is one of the prevailing constraints of every IT system and therefore remains a challenge for the Digital Twin, too. With the Digital Twin operating across multiple industrial partners and inventory sites, growing security concerns are inevitable. Not only the cross-industry ownership and security concerns, in particular regarding shared data, but also a leak of real-time monitoring data can be hazardous to an enterprise [69]. Preventive measures can be organizational [58] and technical [57]. While industrial environments are increasingly equipped with sensors and integrated into enterprise networks, the attack surface grows continuously and demands new approaches. On the other hand, Digital Twins provide a chance for protection and can contribute to enterprise security by simulating attacks and analyzing their effect on the virtual counterpart. However, the integration of Digital Twin security simulations into enterprise security strategies is currently neglected. This task falls to the Security Operations Center (SOC), which takes on the challenge of developing a process-based security framework to incorporate Digital Twin security simulations into the SOC [81]. Nevertheless, insufficient security remains a limitation of the Digital Twin.

11.7 Closing Remarks and Conclusions

The rising digitalization is changing almost every segment of our daily lives in business and leisure. As just one expression of this, Digital Twins are a powerful means to optimize reality using its high-fidelity digital surrogate. Up to now, this huge potential has been recognized primarily in the manufacturing industry, but also in healthcare, society, and economics. In this chapter, trends, challenges and future perspectives associated with the Digital Twin were presented. As has become evident, the Digital Twin is an encompassing approach of digitization for tackling complex, real-world problems. The Digital Twin is also an important element in overarching approaches like Concurrent Engineering, Systems Engineering and Transdisciplinary Engineering. In essence, multiple disciplines, multiple functional roles, and multiple stakeholders need to collaborate in the processes making up the engineering, healthcare, societal, or economic systems. Moreover, a lifecycle perspective supported by a holistic PLM approach is essential in achieving a solution that is both useful and usable in the context in which the challenging user scenario exists.

The product lifecycle is distributed almost everywhere across the globe: product and process development, production, and exploitation of a product are rarely fixed to a dedicated place [82]. This almost endless distribution poses high challenges to users, their organization, methods, and tools in terms of communication, efficiency, and interoperability. The synchronous availability of the digital and the physical product alone brings interoperability into play as an indispensable prerequisite. It is reinforced by the dynamic interaction between the technical and social characteristics of new product, process, and service development. Considering the human factor, e.g. human–machine interaction [83], a Digital Twin becomes part of a socio-technical system. However, a human could also be considered the subject of a Digital Twin (e.g. in medicine, pharmacy, or medical engineering). This moves the scope of the Digital Twin into the area of transdisciplinary engineering [84, 85].

Throughout this chapter, several aspects of a specific expression of the Digital Twin have been described based on the previous chapters of this book. For all the aspects making up a system, various challenges have been identified: efficiency, accuracy, speed, etc. Across all aspects, the main challenge lies in the seamless integration of methods and tools suitable to support the dynamic and evolving nature of the modern production system to be developed, including the development system itself. An important fact that is often overlooked is that these innovative solutions have become available only recently and are subject to rapid improvement. The market share of products and services encompassing the Digital Twin is rising dramatically.

Coming from astronautics and aerospace with their typically long-lasting products, the Digital Twin is often mentioned in the context of PLM, a system closely related to the Digital Twin. In the case of a one-of-a-kind product, the Digital Twin comprises the auxiliary data of the PLM: the fit is almost complete. In other cases (serial products or products with a high variance), PLM manages several Digital Twins, which can differ from each other depending on the type and the underlying product. Generally speaking, this is not a static representation in PLM, because the two systems would be linked throughout the entire lifecycle of the system. The virtual and real systems would be connected as the system goes through the four phases of creation, production (manufacture), operation (sustainment/support), and disposal [86].

The Digital Twin is a powerful tool with widespread, adaptable capabilities, combining simulation, autonomy, agent-based modelling, Machine Learning, prototyping, decision making, self-optimisation, and Big Data into one [87]. These sub-systems should be tailored and prioritised depending on demand and the specific user scenarios. The advancement of and research in these sub-systems at times create a hindrance for the development of the Digital Twin. As up-and-coming as the Digital Twin technology is, there are several technical and domain-dependent challenges that still need to be addressed [69]. One of them is certainly the presentation and visualization of the Digital Twin, its components and the operational outcome for the users and the stakeholders. However, a seamless integration of IoT, Machine Learning, and data is the distinguishing feature of a powerful and efficient product.

From its initial focus on manufacturing, the Digital Twin is spreading into different areas (e.g. automotive, healthcare, building technology, economics, social sciences, education, etc.). The Digital Twin also aims to optimize the product design and, at the same time, the design processes (i.e. concept generation, material selection, design verification, and decision making). The Digital Twin can effectively assist concept generation and redesign based on the data from the Digital Twin of an existing product. Associated with other emerging technologies (i.e. big data analysis, AR/VR, cloud, edge computing, etc.), the Digital Twin can be used to analyze mass data from the real environment along the whole product lifecycle and improve the visibility of design for verification. The Digital Twin can play a vital role in simplifying the design processes by employing digital prototyping, testing simulation, and prediction, but it still has many potential applications in the product design stage, such as the self-optimization of products in service [88].

Despite this progress and individual project-based efforts, significant implementation gaps exist in the field, which have delayed the widespread adoption of the Digital Twin. Major reasons for this delay are the lack of a universal Digital Twin reference framework, domain dependence, security concerns regarding shared data, the reliance of the Digital Twin on other technologies, and the lack of Digital Twin performance metrics [69]. The lack of standardization is particularly significant because it is unlikely that a complex system will come from a single source in a challenging environment. As the Digital Twin is a tool whose many sub-components can be spread across collaborators and industry partners, developing regulations and security mechanisms is imperative for its widespread adoption, in order to overcome the concerns regarding data sharing [69]. There should, however, be openness to the input of other disciplines, including their methods and tools needed to deal with their aspect of the overall problem. Selecting methods, applying them, as well as further developing these methods in the context of complex societal problems, cannot be the task of one discipline alone [28].