3.1 Introduction

The advent of the “Industry 4.0” concept, which addresses new paradigms for production and logistics systems, has allowed a reformulation of industrial processes by considering new perspectives for the integration and control of dynamic processes in real time (Thoben et al. 2017; Zhong et al. 2017; Panetto et al. 2019). In this context, the industrial digitalization resulting from these recent changes, supported by new technologies such as the Internet of Things, big data, blockchain, and artificial intelligence, enabled the emergence of the concept of cyber-physical systems. In these systems, data and processes can be connected in a dynamic and integrated way, bringing industry to a new perspective in which intelligent systems present themselves as a feasible and promising solution for the planning and control of production and logistics systems (Wang et al. 2015; Kusiak 2018; Ivanov et al. 2019; Dolgui et al. 2019; Kück et al. 2016, 2017).

Technological advances have increased data availability and volume in production systems (Tao et al. 2018), reducing the cost of collecting and storing large sets of information (Peres and Fogliatto 2018; Megahed and Jones-Farmer 2015). In this direction, the concept of “Smart Manufacturing” arises, referring to production systems integrated with sensors and computational platforms and the intensive use of data modeling and predictive engineering (Kusiak 2018). Such approaches based on real-time information can provide high-quality platforms for decision-makers (Heger et al. 2017). Simulation models are among the most widely used quantitative approaches for modelling production and logistics systems, allowing operations to be simulated and decisions to be made on aspects involving various resources and contexts (Borshchev 2013; Agostino et al. 2019), in addition to enabling the integration of optimization approaches and data analysis (Lee et al. 2015; Frazzon et al. 2013; Ivanov et al. 2019).

The integration of analytical tools with real-time data has become a promising field of research involving industrial systems (Heger et al. 2017). In this context, the evolution of information technologies and the increasing digitalization of production and operations connect physical and information flows within production systems (Lee et al. 2015). This cyber-physical view allows the acquisition of system state data that can be used to support better decisions along production networks, with great potential to change paradigms regarding the management of processes with a high degree of accuracy and productivity, supported mainly by Internet of Things technologies (Monostori et al. 2016; Tu et al. 2018).

New paradigms associated with technological development, involving remote sensing of machines and devices as well as real-time connectivity, have allowed the development of the concepts of “online simulation” (Cardin and Castagna 2009, 2011), “coupling of simulation and production” (Bergeron et al. 2009; Zülch et al. 2002), and, more recently, “Digital Twin” (Kritzinger et al. 2018; Weyer et al. 2016). These approaches address the integration of sensors and quantitative simulation and optimization models in industrial operations.

This emerging concept has been discussed in both practical and academic environments. A Digital Twin can be defined as a simulation model that reflects, in a timely manner, the state of a corresponding twin based on historical data, real-time sensor data, and a physical model (Glaessgen and Stargel 2012). Theoretical studies such as those developed by Kritzinger et al. (2018) and Weyer et al. (2016) point out the potential of applying Digital Twins in industrial environments involving production and logistics processes. Recent applications such as Zheng et al. (2019) corroborate this potential, but highlight the need for studies that broadly and generically systematize the methods, tools, and concepts related to the development of Digital Twin applications.

This gap in the literature guides the development of this research: there is no consolidated reference model in the literature that deals with the application of simulation and optimization models to real-time data for synchronous and data-oriented decision-making, connecting shop floor and logistics operations with the management and dynamic control of processes.

In this way, this chapter presents a conceptual model for a data exchange framework in cyber-physical systems, allowing the development of real-time simulation applications in industry. For this purpose, a bibliometric analysis was conducted to analyze the current state of the art and practice regarding the application of simulation models to control and scheduling in dynamic production and logistics systems. The setup and results of this analysis are presented in the next section. The remainder of this chapter is structured as follows: Sect. 3.3 describes the results of the analysis with a focus on the concepts and applications of simulation and Digital Twins. Section 3.4 describes the Digital Twin approach for production control and scheduling, which is afterwards evaluated in the fifth section. The chapter closes with a conclusion and outlook.

3.2 A Bibliometric Analysis on Simulation in Production and Logistic Systems

To analyze the dynamics of research evolution considering simulation in production and logistics systems in the context of Industry 4.0, a bibliometric analysis was performed. The final search was conducted in August 2019 in the Scopus database using the terms “Simulation,” “Digital Twin,” “Production,” and “Logistics” in combination with “Industry 4.0,” “Cyber-Physical Systems,” and “Smart Manufacturing” (see Fig. 3.1), applied to the titles, abstracts, and keywords of the papers. For the portfolio, only publications in journals and in English were considered. A total of 249 papers were found. The final search string is presented as follows:

Fig. 3.1
figure 1

Search strategy

Search string

TITLE-ABS-KEY(((simulatio* OR "digital twin") AND (productio* OR manufactur* OR logistic* OR scm) AND ("industry 4.0" OR "smart manufacturing" OR "cyber-physical system*"))) AND (LIMIT-TO (DOCTYPE, "ar")) AND (LIMIT-TO (LANGUAGE, "English"))

Figure 3.2 shows the temporal evolution of publications in the selected portfolio. 2001 was the first year in which a publication appeared in a journal indexed in the considered database: Qiu et al. (2001) developed a discrete simulation system to control a flexible manufacturing system considering real-time data. From 2011 onwards, the number of publications has grown consistently; between 2016 and 2017 alone, publications increased by 246%. This analysis shows the growing interest in simulation model applications in production and logistics systems in the context of Industry 4.0.

Fig. 3.2
figure 2

Publication temporal evolution

Figure 3.3a shows the ten journals with the highest concentration of publications in the analyzed group. IFAC-PapersOnLine, IEEE Access, and the International Journal of Advanced Manufacturing Technology were the journals with the highest number of publications. Figure 3.3b shows the ten most cited journals; in this case, there is a clear emphasis on the International Journal of Production Research as well as several other journals that link studies of operations management, logistics, and technology.

Fig. 3.3
figure 3

(a) Journals of analyzed group. (b) Most cited journals

Figure 3.4 shows the main countries with studies in the research field, categorized by publications authored in a single country (in blue) and in multiple countries (in red). Germany stands out with 21 publications, followed by Italy with 16, Korea with 12, and the other countries with 10 or fewer publications. The high concentration of studies produced by institutions in Germany is mainly explained by the creation of the governmental “Industrie 4.0” (I4.0) program to promote the computerization of industrial processes (Thoben et al. 2017).

Fig. 3.4
figure 4

Publications by country

Figure 3.5 shows a keyword co-occurrence network built by multidimensional scaling (Huang et al. 2005) using an edge betweenness centrality clustering algorithm (Prell 2012). This analysis allows the identification of a main cluster of terms (in red) that deals with the intersection between the themes investigated in this research. The terms “Industry 4.0”, “simulation”, “smart manufacturing”, and “cyber-physical system” appear as central nodes in the co-occurrence network. Other related terms such as Internet of Things, cloud computing, virtual reality, and big data demonstrate the connection between the research fields involving management, engineering, and computing disciplines.

Fig. 3.5
figure 5

Keywords co-occurrence network

Figure 3.6 illustrates the dynamics of evolution of the main themes over time. The frequency of the articles’ keywords over the last 5 years was analyzed. It is important to highlight that some of the terms in the literature are used in similar contexts; the intention of this analysis is only to understand the dynamics of evolution of the concepts. It is possible to identify the growing interest in research involving Industry 4.0. Another point evidenced is the maturation of the concept of Digital Twin in relation to the classic concept of simulation. The growth in the application of Digital Twin approaches is mainly due to technological development in conjunction with the concepts and approaches proposed by Industry 4.0.

Fig. 3.6
figure 6

Thematic evolution over time

In the next section, the concepts and applications of the analyzed papers are discussed, aiming at the identification of research and practice opportunities.

3.3 Simulation and Digital Twin: Concepts and Applications

Several authors discuss theoretical and conceptual aspects involving simulation models in production and logistics systems in the context of Industry 4.0. Turner et al. (2016) reviewed the literature on discrete event simulation and virtual reality in industry; the article addresses real-time integration, communication protocols, system design, and model application, and the authors highlight the potential of applying these technologies jointly in the industrial environment. Weyer et al. (2016) investigated the future of modeling and simulation applications by focusing on aspects of cyber-physical systems. The authors highlight the importance of simulation models in the decision-making process and propose a literature-based framework for modeling cyber-physical systems. Polenghi et al. (2018) reviewed surveys on the application of simulation models in manufacturing processes; the authors classified the articles and proposed an integration of models for simulation-based decision-making.

Simulation approaches have evolved in different stages: (1) simulation of a specific device based on special tools; (2) simulation of a generic device based on standard tools; (3) multilevel and multidisciplinary simulation. Currently, a new wave of transformation is occurring with the possibility of developing Digital Twin models based on real-time simulation (Tao et al. 2018; Qi and Tao 2018). Weyer et al. (2016) argue that there have been three waves of simulation models and that, recently, a new paradigm called Digital Twin has emerged with the possibility of incorporating real-time data and the digitalization of the industrial environment.

In the context of the application of Digital Twin models, some articles have reviewed the literature and the application requirements. Tao and Zhang (2017) investigated the application of Digital Twin models as a new paradigm towards intelligent manufacturing. In this context, a Digital Twin can be defined as a multiphysics, multiscale, probabilistic, ultrafidelity simulation that reflects, in a timely manner, the state of a corresponding twin based on historical data, real-time sensor data, and a physical model (Glaessgen and Stargel 2012). Figure 3.7 shows a Digital Twin conceptual model for real-time simulation.

Fig. 3.7
figure 7

Conceptual model for Digital Twin

Some practical applications involving the development of DT models are found in the literature. Wang et al. (2020) developed a DT application for material handling on the shop floor of a manufacturing process in Southern China; the application of the model in the case study resulted in reduced energy consumption and improved routing. Sujová et al. (2018) developed a DT model for 8 assembly lines, with 15 work points in each one. The main objective was to integrate the control of operations using simulation models with real data. The authors report increased responsiveness and control capacity of the system, with greater availability of data for synchronous decision-making.

The bibliometric analysis allowed us to understand the dynamics of evolution of the research area, as well as to analyze the main concepts related to the development of real-time simulation models in industry. In the next section, the proposed model for the development of a real-time simulation approach is presented, and a case study with real industrial data is used to evaluate the model. The objective of the proposed framework is to serve as a generic model for the implementation of Digital Twin solutions in dynamic production and logistics systems.

3.4 Building a Digital Twin for Planning and Control of Production

This section describes our approach of a Digital Twin for the planning and control of production. This Digital Twin is automatically synchronized with the state of the shop floor and optimizes the dispatching rules used for each machine in the system when a stochastic event occurs. The section first presents the definition of the Digital Twin applied to the problem of production planning and control, followed by a description of the approach. This research is based on previous research of the working group, as presented in Frazzon et al. (2018).

3.4.1 Definition of the Digital Twin

The basic idea of a Digital Twin is to build a digital representation of a physical object, e.g. a job shop production system in a factory, which represents the real system and is updated automatically in case of changes. This scenario is shown in Fig. 3.8. On the left-hand side is the object in the real world, in this case the job shop production system, and on the right-hand side is the Digital Twin, its digital representation. Between those two components there is a bi-directional information flow, from the real-world object to the Digital Twin (a) and from the Digital Twin to the real-world object (b).

Fig. 3.8
figure 8

Basic idea of a Digital Twin: information flows are bi-directional between the physical and digital object

Following the definition of Kritzinger et al. (2018), there are different levels of intensity of information integration. On the lowest level, no automatic data exchange between the real system and its digital representation takes place. All updates are done manually; this type is therefore called a Digital Model. The first step towards a real Digital Twin is the automatic update of the digital representation (a), but not vice versa. In this case, changes in the real-world object are automatically applied to the digital representation. This type is called a Digital Shadow. To complete the Digital Twin, it also needs to influence the real-world object directly, i.e. to have an automatic information flow from the Digital Twin to the real-world object (b).
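To make these levels concrete, the following minimal Python sketch distinguishes them by which of the two information flows (a) and (b) is automated; the names and the classification function are our own illustration, not a formalization given by Kritzinger et al. (2018).

```python
from enum import Enum


class IntegrationLevel(Enum):
    """Levels of integration between a physical object and its digital
    representation, following Kritzinger et al. (2018)."""
    DIGITAL_MODEL = "manual updates only"            # no automatic data flow
    DIGITAL_SHADOW = "automatic physical-to-digital"  # only flow (a) automated
    DIGITAL_TWIN = "automatic in both directions"     # flows (a) and (b) automated


def classify(auto_physical_to_digital: bool, auto_digital_to_physical: bool) -> IntegrationLevel:
    """Classify a setup by which information flows are automated."""
    if auto_physical_to_digital and auto_digital_to_physical:
        return IntegrationLevel.DIGITAL_TWIN
    if auto_physical_to_digital:
        return IntegrationLevel.DIGITAL_SHADOW
    return IntegrationLevel.DIGITAL_MODEL
```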

To implement a Digital Twin of a production system, the digital representation therefore needs to be updated regularly and in real time with the current data from the production system. This data consists of the current machine status (idle, working, down), the current job on each machine, and the processing times, making it possible to create a digital copy of the real system. The other way round, the system also needs to apply changes to the real production line, e.g. by applying a new schedule for a specific machine.

To ensure that the necessary data to build the Digital Twin is available, it needs to be gathered from different sources: for example, the current status of the machines from the manufacturing execution system (MES), while the information about the jobs resides in an enterprise resource planning (ERP) system.
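As an illustration only, the following Python sketch shows how such a synchronization step might look; the connector functions and field names are hypothetical placeholders, not interfaces of an actual MES or ERP product.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class MachineState:
    machine_id: str
    status: str                 # "idle", "working", or "down"
    current_job: Optional[str]  # job currently on the machine, if any
    processing_time: float      # recorded processing time of the current operation


@dataclass
class Job:
    job_id: str
    product: str
    due_date: float
    remaining_ops: List[str]


@dataclass
class TwinState:
    machines: Dict[str, MachineState] = field(default_factory=dict)
    jobs: Dict[str, Job] = field(default_factory=dict)


def synchronize(read_mes_status, read_erp_jobs) -> TwinState:
    """Build the current digital state from the two (hypothetical) data sources."""
    state = TwinState()
    for record in read_mes_status():   # e.g. a query against the MES
        state.machines[record.machine_id] = record
    for job in read_erp_jobs():        # e.g. a query against the ERP
        state.jobs[job.job_id] = job
    return state
```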

3.4.2 Proposed Method for Production Planning and Control Using a Digital Twin

Figure 3.9 gives an overview of the proposed method. On the left side, the real, physical production system is shown, and on the right side the Digital Twin approach, i.e. the synchronized digital representation of the production system in a simulation model. This model is kept up to date with the current state of the real system and allows the dispatching rules in the real system to be optimized. The arrows show the main communication between the components and are described in the following. From the production system, the current system state is sent regularly, on the one hand to a trigger function (a) and on the other hand to the simulation model (b).

The trigger function watches for changes in the production system. It reacts to events like new jobs, the absence of a worker, or the breakdown of a machine. In such a case, the simulation-based optimization is triggered (c), which then selects a new optimal set of dispatching rules for each individual machine. Additionally, this method also triggers periodic optimizations, e.g. once per month. The simulation model is the Digital Twin of the real system. To keep this model synchronous, it also receives the current system state (b) and updates its settings accordingly, e.g. broken machines of the real production system are disabled.

If an optimization is triggered, the meta-heuristic, in this case a genetic algorithm, starts to reselect the dispatching rule for each individual machine. The meta-heuristic generates a population of possible solutions and uses the simulation model to evaluate them. In detail, a possible configuration is determined by the algorithm and sent to the simulation model (d). The simulation model simulates this configuration for a given time and returns the key performance indicators (KPIs) of this simulation run to the optimization (e). KPIs from the simulation are, e.g. the number of tardy jobs or the flow time. This data is fed into the fitness function of the genetic algorithm, and a fitness value is then assigned to the individuals. This fitness can be used to identify the current best solution for the given problem. The genetic algorithm continues the optimization for some generations until it finishes due to its termination criterion. Then it returns the current best set of dispatching rules. This solution is sent to the production system and applied at the production (f). This process is also shown in Fig. 3.10.
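As a rough illustration of this control loop, the following Python sketch outlines the trigger and optimization cycle under simplifying assumptions: `simulate`, `get_system_state`, `apply_rules`, and the event source are hypothetical stand-ins for the simulation model and the shop-floor connection, and the elitist recombination shown here simplifies the roulette wheel selection used in the actual method.

```python
import random
from typing import Callable, Dict, List

RULES = ["MDD", "EDD", "ODD", "SRPT", "SPT", "SLK"]  # candidate dispatching rules


def random_rule_assignment(machines: List[str]) -> Dict[str, str]:
    """One individual: a dispatching rule assigned to every machine."""
    return {m: random.choice(RULES) for m in machines}


def optimize(machines: List[str], system_state, simulate: Callable,
             generations: int = 5, pop_size: int = 100) -> Dict[str, str]:
    """Simulation-based optimization: evaluate rule assignments with the
    Digital Twin simulation (d)/(e) and evolve them with a genetic algorithm."""
    population = [random_rule_assignment(machines) for _ in range(pop_size)]
    for _ in range(generations):
        # evaluate each configuration in the simulation; lower KPI value = better
        scored = sorted(((simulate(system_state, ind), ind) for ind in population),
                        key=lambda pair: pair[0])
        elite = [ind for _, ind in scored[:10]]
        offspring = []
        while len(offspring) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = {m: random.choice((a[m], b[m])) for m in machines}  # crossover
            if random.random() < 0.1:                                   # mutation
                child[random.choice(machines)] = random.choice(RULES)
            offspring.append(child)
        population = elite + offspring
    return min(population, key=lambda ind: simulate(system_state, ind))


def control_loop(events, machines, get_system_state, simulate, apply_rules):
    """(a)-(f): on each triggering event, synchronize, reoptimize, and apply."""
    for event in events:                    # e.g. breakdown, new jobs, monthly tick
        state = get_system_state()          # (b) synchronize the twin
        best_rules = optimize(machines, state, simulate)  # (c)-(e)
        apply_rules(best_rules)             # (f) push the new rule set to production
```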

Fig. 3.9
figure 9

Digital Twin approach for production planning and control

Fig. 3.10
figure 10

Sequence diagram of the process

For this research, the production system was emulated in an emulation model, which is a simulation model of the real production system. To distinguish this model from the simulation model used in the optimization, it is called the emulation model. This emulation allows the authors to run experiments and refine the method without interfering with the activities in the real factory. During the experiments, it was possible to let machines break down and recover at previously defined times and to use stochastic processing and setup times. It also provides the possibility to rerun identical scenarios and test different strategies. The emulation also sends the data as described in the data exchange framework.

3.4.3 Data Exchange Framework

Figure 3.11 shows the Data Exchange Framework, which depicts the participants and the different systems that interact to provide the data for this Digital Twin approach. Three participants are shown: on the left-hand side the suppliers, which provide raw material, on the right-hand side the customers, which demand products, and in the middle the manufacturer. For the manufacturer, the figure shows the participating information systems. Next to the ERP and MES, which have already been mentioned, a Production Data Acquisition (PDA) system and a Machine Data Acquisition (MDA) system are shown. Both systems automatically collect data about current activities on the shop floor.

Fig. 3.11
figure 11

Data Exchange Framework, continuation of Frazzon et al. (2018)

The central element of this framework is the MES, which combines all the necessary data, stores it, and provides it as input for the Digital Twin approach. It is also responsible for applying the resulting, newly selected dispatching rules on the shop floor level.

3.5 Use Case

The developed approach was applied in a job shop of a Brazilian supplier for the automotive industry. The performance of the individual dispatching rules selected by the approach is compared to the currently used scheduling approach and to the use of a static, system-wide dispatching rule. This section is structured as follows: first the scenario is described; afterwards the applied approaches, the considered KPIs, and the experiment configuration are presented. The section closes with the results of the different approaches and a comparison of the methods.

The scenario is shown in Fig. 3.12. It contains four production lines. The horizontal main line (black) has two workstations with parallel machines (yellow) which are shared by jobs from two other lines (green, dashed and blue, dotted). The fourth line (red) produces parts that are assembled with parts of the main line. The whole scenario contains 20 workstations, which group 28 individual machines. Each workstation has between one and four machines and is marked by a box in the graphic. A buffer is located in front of each workstation. On these lines, 24 products are produced. Each product has a defined route, which essentially follows one of the production lines, but some jobs skip machines in the line. Therefore, the number of operations differs per product, ranging between two and nine operations. The processing times are heterogeneous for each product. Setup times are defined for the workstations m2 and m3 and apply on every product change.

Fig. 3.12
figure 12

Layout of the production line

The given monthly demand is split into weekly demands, as there are weekly deliveries to the customers. This demand is converted into jobs using a given economic lot size, which is a cost-optimal amount to be produced. In total, 2570 jobs are released on a monthly basis.
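A minimal sketch of this conversion is given below, assuming a simple rounding-up policy that is not stated in the text; the demand figures and lot size in the example are placeholders.

```python
import math


def jobs_from_demand(monthly_demand: int, weeks: int, lot_size: int) -> int:
    """Split a monthly demand into weekly demands and convert each weekly
    demand into jobs of one economic lot size (rounding up is an assumption)."""
    weekly_demand = monthly_demand / weeks
    jobs_per_week = math.ceil(weekly_demand / lot_size)
    return jobs_per_week * weeks


# Example with placeholder numbers: 4 weekly deliveries, lot size of 50 units.
print(jobs_from_demand(monthly_demand=1000, weeks=4, lot_size=50))  # -> 20 jobs
```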

3.5.1 Experiment Details

This study compares the quality of the developed Digital Twin approach with two benchmarks: on the one hand, the use of static dispatching rules at the machine level, and on the other hand, a static, monthly calculated schedule. A dispatching rule is used to decide which of the jobs waiting at a machine should be produced next. As these rules decide only on the basis of the current queue state, they are highly flexible, but they do not consider the overall system state and therefore only optimize the performance of an individual machine.
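For illustration, two of the rules evaluated later, Earliest Due Date and Shortest Remaining Processing Time, can be sketched as simple selections over the local machine queue; the job representation below is a hypothetical simplification.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class QueuedJob:
    job_id: str
    due_date: float                   # due date of the whole job
    remaining_processing_time: float  # processing time still needed for all remaining operations


def earliest_due_date(queue: List[QueuedJob]) -> Optional[QueuedJob]:
    """EDD: of all jobs currently waiting at the machine, pick the one with
    the earliest due date. Only the local queue state is considered."""
    return min(queue, key=lambda job: job.due_date) if queue else None


def shortest_remaining_processing_time(queue: List[QueuedJob]) -> Optional[QueuedJob]:
    """SRPT: pick the waiting job with the least remaining processing time."""
    return min(queue, key=lambda job: job.remaining_processing_time) if queue else None
```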

The other benchmark is a schedule, which gives for each machine an ordered list of jobs to be processed. Production is executed following this list. Such a schedule optimizes the overall system but is not flexible if a job is delayed due to longer processing times at a previous operation. In this case, one tardy job might lead to additional tardy jobs, as capacities at machines are kept free to fulfil the delayed job.

3.5.1.1 KPIs

These two benchmarks are compared to the proposed Digital Twin approach using three KPIs. The first KPI is the number of tardy jobs: as each job has a due date, a job becomes tardy if it is finished after this due date. This KPI only counts the total number of tardy jobs, not the total tardiness. The second KPI is the throughput time, which is the duration a job takes from the beginning of its first operation until the end of its last operation; it omits the waiting time at the first workstation before processing starts. The third KPI is the monthly working time usage. This is the mean value of the monthly maximum flow time of all jobs that have a due date in that month. It resembles the mean of a monthly cMax calculation; the overall cMax has the problem that it is highly influenced by the last month.
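Under the assumption of a simple job record with release, start, completion, and due-date timestamps (hypothetical fields, since the underlying data model is not given here), these three KPIs could be computed as follows.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class FinishedJob:
    release_time: float      # time the job entered the system
    first_op_start: float    # start of the first operation
    completion_time: float   # end of the last operation
    due_date: float


def number_of_tardy_jobs(jobs: List[FinishedJob]) -> int:
    """KPI 1: count jobs finished after their due date (tardiness amount ignored)."""
    return sum(1 for j in jobs if j.completion_time > j.due_date)


def mean_throughput_time(jobs: List[FinishedJob]) -> float:
    """KPI 2: mean duration from the start of the first operation to completion,
    omitting the initial waiting time before the first workstation."""
    return sum(j.completion_time - j.first_op_start for j in jobs) / len(jobs)


def monthly_working_time_usage(jobs: List[FinishedJob],
                               month_of: Callable[[float], int]) -> float:
    """KPI 3: mean, over all months, of the maximum flow time of the jobs
    that are due in that month."""
    per_month = defaultdict(list)
    for j in jobs:
        per_month[month_of(j.due_date)].append(j.completion_time - j.release_time)
    return sum(max(flows) for flows in per_month.values()) / len(per_month)
```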

3.5.1.2 Experiment Configuration

The experiment is executed with stochastic influences. The processing and setup times are treated as stochastic values: the basic values are multiplied by a stochastic factor drawn from a triangular distribution (with the parameters min 0.98, mode 1.01, and max 1.10). These processing times are recorded, and the mean values of the data from the last 6 months are used in the optimization method. Typically, the times of individual machine breakdowns and reactivations are defined by distributions for the mean time between failures (MTBF) and the mean time to repair (MTTR), but for this experiment they were defined statically in advance to correspond to real breakdowns. Six failures occur on simple machines and five failures on a single machine in a machine group during the observation period. These breakdowns last between 1.4 and 6.5 working days, with a mean duration of 4.1 working days.

The optimization uses a genetic algorithm that was run with the following settings: 100 individuals, 10 elitism individuals, mutation rate 10%, crossover rate 90%. The selection of parents is done by roulette wheel selection. However, each optimization only runs for five generations; in comparison to, e.g. 100 generations, our experiments have shown that in this case the method provides better results with fewer iterations. The population of the genetic algorithm used for the optimization is created using a strategy that reuses knowledge of a previously optimized population, if available. A new population is first initialized randomly, then 20% of the individuals are replaced by the best 20% of the individuals of the old population, and additionally the benchmark individuals are added, which use the same dispatching rule on all machines; for each dispatching rule considered in the optimization, one benchmark individual is added.

Orders for the next 3 months are taken into account during the optimization. During the optimization, 30 replications of the simulation model are used to determine the fitness of the individuals. The simulation model is adapted to the changed system state each time a new optimization is triggered. Replanning takes place both monthly and when events like machine breakdowns or new job arrivals occur. The total simulation time is 20 months, of which the first eight months are a transient phase and the last 12 months are the observation period.
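Two of these ingredients can be sketched briefly in Python. The triangular factor uses the stated parameters, while the seeding routine assumes a rule-per-machine encoding and a fitness-sorted old population, both of which are our own illustration.

```python
import random
from typing import Callable, Dict, List

Individual = Dict[str, str]  # dispatching rule per machine (illustrative encoding)


def stochastic_time(base_time: float) -> float:
    """Multiply the basic processing or setup time by a factor drawn from a
    triangular distribution with min 0.98, mode 1.01, and max 1.10."""
    return base_time * random.triangular(0.98, 1.10, 1.01)


def seed_population(machines: List[str], rules: List[str],
                    old_population: List[Individual],
                    random_individual: Callable[[], Individual],
                    size: int = 100) -> List[Individual]:
    """Initialize randomly, replace 20% with the best 20% of the old population
    (if available), and add one benchmark individual per dispatching rule."""
    population = [random_individual() for _ in range(size)]
    if old_population:
        n = size // 5
        population[:n] = old_population[:n]  # assumes old_population is sorted by fitness
    for rule in rules:
        population.append({m: rule for m in machines})  # same rule on all machines
    return population
```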

3.5.1.3 Experiments

First, the use of dispatching rules is evaluated. For this purpose, 28 dispatching rules available in the used simulation software jasima were evaluated. For each rule, a separate experiment was executed in which the dispatching rule was set at all machines. For each simulation, 30 replications were executed and the mean values of the selected KPIs were calculated. During this experiment, stochastic influences are applied, but no optimization takes place. Afterwards, these benchmark data were used to select the dispatching rules that are listed as benchmark values in the comparison and that also form the rule set from which the optimization selects its individual rules.

The next step was to calculate the second benchmark, applying a static schedule. This schedule is generated monthly. For the benchmark, the schedule was calculated using a planning method that utilizes a dispatching rule to decide the order of jobs in the schedule. This method runs all jobs of the current month with the rule and records the order of the jobs. The generated schedule is then executed by a special dispatching rule, the ScheduleExecutor. This rule simply processes the schedule of the individual machine sequentially and only releases job operations for processing according to the schedule if they are currently in the queue. This means that a machine may be unable to process jobs, despite being idle with orders waiting in front of it, because it is waiting for a particular job to arrive. For every previously selected dispatching rule, a monthly schedule was calculated using this method and subsequently evaluated in 30 replications.
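The behaviour of such a schedule-following rule can be sketched as follows; this is a simplified illustration of the described logic, not the jasima implementation.

```python
from typing import List, Optional


class ScheduleExecutor:
    """Follows a fixed, ordered job list for one machine: the next scheduled
    job is only released when it is actually waiting in the queue."""

    def __init__(self, scheduled_job_ids: List[str]):
        self.schedule = list(scheduled_job_ids)
        self.position = 0

    def next_job(self, queue: List[str]) -> Optional[str]:
        if self.position >= len(self.schedule):
            return None                      # schedule finished
        wanted = self.schedule[self.position]
        if wanted in queue:                  # release only the scheduled job
            self.position += 1
            return wanted
        return None                          # machine stays idle and waits
```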

Afterwards, the optimization of the individual machine dispatching rules was executed, with the selected dispatching rules considered as candidates. For each KPI, a separate optimization run is performed, as the method only considers one criterion in its fitness function. Each optimization is done in 10 replications, and the results are the mean values of these replications.

3.5.2 Results

The results for the three experiments that have been conducted are summarized below.

3.5.2.1 Benchmark Dispatching Rules

Table 3.1 shows, for each KPI, the benchmark results of the five best dispatching rules. The throughput time was converted from seconds to hours and the average monthly working time usage to working days. Under the assumption that work is carried out in two shifts 5 days a week, the working time of a working day is 15 h. The best dispatching rules are Modified Due Date (MDD), Earliest Due Date (EDD), Modified Operation Due Date (MOD), Operation Due Date (ODD), Least (global) Slack (SLK), Shortest Processing Time (SPT), Shortest Remaining Processing Time (SRPT), SI × (SI), First Come First Served (FCFS or FIFO), and Critical Ratio (CR). The table shows that for each KPI a different rule is appropriate.

Table 3.1 Average quality of the best five dispatching rules in each category from 30 replications

From these rules, those have to be selected that are used as benchmarks for the optimization, as well as the rules that the optimization can select for individual machines.

The results show that the MDD rule is the best for the number of tardy orders. The EDD and ODD rules perform well and also achieve a good performance for the third criterion. For the throughput time, the table shows that, of these rules, only the MDD rule is also found among the best five for this KPI. This can be explained by the fact that the focus of these rules is different: while in the previous case the delays of orders were reduced, in this case the completion of the individual orders should take place as quickly as possible, and therefore the throughput time of the orders should be minimized. The SPT rule is a typical representative here. The SRPT rule shows an even better quality and is therefore also selected. The other rules are significantly worse in the benchmark run and are therefore not considered further. The third criterion is the average monthly working time usage. The SLK rule provides the best result here. With EDD, MDD, and ODD, the remaining best rules for this criterion are also good for the first criterion. The rules MDD, EDD, ODD, SRPT, SPT, and SLK are therefore selected as the rule set.

3.5.2.2 Benchmark Schedule

The second benchmark is the schedule. Table 3.2 shows an excerpt of the best scheduling results. These results are significantly worse than the application of dispatching rules because of the dynamic, stochastic influences in the production system, which are not considered in the fixed schedule. Again, for each KPI a different schedule, based on a different dispatching rule, provides the best result. Therefore, the schedules generated by EDD, SPT, and SLK are used as scheduling benchmarks.

Table 3.2 Excerpt from the best schedule benchmark runs

3.5.2.3 Optimization

Table 3.3 shows the mean results for each optimization and the three KPIs. As expected, for each KPI the corresponding optimization delivers the best result. Somewhat special is the optimization for the reduction of tardy jobs, as it also reduces the monthly working time usage quite successfully, to nearly the same level as the dedicated optimization for this KPI.

Table 3.3 Results of the optimization for each of the three KPIs: [1] number of tardy jobs, [2] throughput time, and [3] average monthly working time usage

In the following, these results are considered and evaluated for each KPI.

The first criterion is the number of delayed orders. Table 3.4 lists the average number of tardy orders over 30 replications each for the selected dispatching rules, the three schedules, and the three optimizations according to the three criteria. The schedule selected for this target criterion, the schedule calculated by EDD, is regarded as the comparative value for the evaluation. It turns out that this schedule, with an average of 235.6 late orders, is of only mediocre quality. Some dispatching rules, as well as the optimizations according to two of the criteria, provide better results. On average, the optimization gives the best result with 92.3 late orders, which corresponds overall to just 3.6% tardy jobs instead of 9.2% with a monthly schedule.

Table 3.4 Results for the criterion “number of tardy jobs”

For the second criterion, the results presented in Table 3.5 are structured analogously. The schedule that was calculated using the SRPT rule is used here as the comparison value. The optimization for the corresponding target criterion is again better than the schedule. Altogether, however, two dispatching rules are better than the optimization method. This is somewhat surprising, since the benchmark rules SRPT and SPT were also added to the population as benchmark individuals. Accordingly, the optimized solution should actually be much closer to the quality of these dispatching rules. Apparently, however, these rules do not perform well enough during the evaluation to be selected as the corresponding rule set. Further research is necessary to gain insight into why this happens.

Table 3.5 Results for criterion “throughput time”

For the last criterion, when looking at the results in Table 3.6, it is noticeable that the differences between the results seem to be much smaller here than for the other criteria. The maximum deviation is only 5.0%, but in absolute terms this is 1.04 working days, which corresponds to 1.04 days ⋅ 15 h/day = 15.6 h. This observation makes it clear that the deviation is more substantial than the percentage suggests. The optimizations according to two of the criteria, both the corresponding one and the number of delayed orders, do quite well here, but cannot keep up with the SLK benchmark rule. As with the previous criterion, further adjustments to the optimization method are therefore required. The similar quality of the optimization according to the number of delayed orders indicates that a small number of delayed orders also leads to a good use of the available working time.

Table 3.6 Results for criterion “monthly working time usage”

Taking all three optimization criteria into account, it can be concluded that, in all cases, the results of the method are better than those of a previously calculated schedule that is not adjusted with current system data during its execution. For the criterion number of delayed orders, the optimization finds a rule selection with a significantly better quality than a simple, identical dispatching rule on all machines. In this case, the additional effort for the selection of the dispatching rules is especially worthwhile.

3.6 Conclusion and Outlook

The recent development of real-time approaches with the advent of Industry 4.0 technologies has been driving applications of simulation models with direct links to operational data. This new scenario enables the development of Digital Twin models, which represent a robust approach for the optimization and data analysis in industrial contexts. In the consulted literature, a clear research direction embraces the development of Digital Twin concepts and approaches, along with their application in real scenarios. The maturation of the Digital Twinning concept seems to go hand in hand with the development of Industry 4.0, both in the literature and in practice. However, there is no consolidated reference model in the literature that guides the application of simulation and optimization models for real-time data treatment for synchronous and data-oriented decision-making.

The data exchange framework of the proposed Digital Twin approach for production planning and control aims to address this gap in the literature and practice and to enable the development of real-time simulation models. The presented use case allows for the evaluation of the proposed approach, demonstrating promising results for future adoption. The approach is able to improve all KPIs in comparison to a static schedule that does not consider the current system state. For the number of late jobs, the selection of individual dispatching rules even outperforms a static dispatching rule on all machines. The reselection of a new set of rules is also quite fast: for the given scenario it takes around 20 s on a standard computer, which is much faster than the approach the industrial partner currently uses for its scheduling.

The current approach still lacks some aspects of the real world. Currently, no maintenance strategy is considered; to be more realistic, the method should consider preventive maintenance and planned maintenance jobs. Additionally, raw material is not considered: it is assumed that there is always enough material in stock to fulfil the demand of the customers, but in reality some production stops might originate from missing raw material. Both aspects need to be taken into account in further research.

The described method provides a way to build a real-time simulation model of a production system with high-resolution data from the real process. This model could also be used for different purposes, e.g. for the simulation of alternative system designs in the case of a renewal or enlargement of the production, or to test different strategies, e.g. for maintenance, without interfering with the real production.

Overall this data-driven approach offers possibilities to use the newly available data from the shop floor to improve the operational performance and to foster value creation.