1 Introduction

In Chapter "Traditional Simulation Applications in Industry 4.0" we discussed ways that traditional DES can be used to meet traditional factory modeling needs. We also discussed some of the challenges found in Industry 4.0 implementations and how some of those challenges can be met with traditional DES products. In this chapter we discuss Industry 4.0 challenges and opportunities that generally cannot be met with traditional DES products, and we introduce a relatively new solution to those problems. Some material in this chapter is adapted from similar material in the book Simio and Simulation: Modeling, Analysis, Applications [1] and is included with permission.

2 The Need for a Digital Twin

In the consumer environment the Internet of Things (IoT) provides a network of connected devices, with secure data communication, to allow them to work together cooperatively. In the manufacturing environment, the Industrial Internet of Things (IIoT) allows the machines and products within the process to communicate with each other to achieve the end goal of efficient production. With the growing move to Industry 4.0, increased digitalization is bringing its own unique challenges and concerns to manufacturing. One way of meeting those challenges is with the use of a Digital Twin.

A Digital Twin provides a virtual representation of a product, part, system or process that allows you to see how it will perform, sometimes even before it exists. Sometimes this term is applied at the device level. A digital twin of a device will perform in a virtual world very similar to how the real device performs in the physical world. An important application is a digital twin of the entire manufacturing facility that, again, performs in a virtual world very similar to how the entire manufacturing facility performs in the physical world. The latter, broader definition of a digital twin may seem unattainable, but it is not. Just as with design-oriented models, our objective is not to create a 'perfect model' but rather to create a model that is 'close enough' to generate results useful in meeting our objectives. Let's explore how we can meet that goal.

According to some practitioners, a model can be called a digital twin only when it is fully connected to all the other systems containing the data that enable it. Until then, a standalone simulation model is referred to as a virtual factory model; it becomes a digital twin when it is fully connected and runs in real-time (or near real-time) mode, driven by data from the relevant ERP, MES, and similar systems. To function effectively as a digital twin, a model must therefore not only read and process data from external sources but also be capable of being generated from data. This allows the model to react to changes in the data beyond mere property values; for example, adding a resource or machine in the data automatically creates it in the model and its schedules simply by importing the latest data.
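
To make the data-generated idea concrete, here is a minimal Python sketch (with a purely hypothetical resources schema; this is not any vendor's actual import mechanism) of how each row in a resources table can become an object in the model, so that adding a machine to the data automatically adds it to the model:

```python
# Minimal sketch of model generation from data. The CSV schema
# (Name, SetupMinutes, Capacity) is invented for illustration.
import csv
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    setup_min: float
    capacity: int

def build_model(resources_csv: str) -> dict[str, Machine]:
    """Create one Machine object per data row; re-importing newer
    data rebuilds the model without touching any modeling code."""
    model = {}
    with open(resources_csv, newline="") as f:
        for row in csv.DictReader(f):
            model[row["Name"]] = Machine(
                name=row["Name"],
                setup_min=float(row["SetupMinutes"]),
                capacity=int(row["Capacity"]),
            )
    return model
```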

In the current age of Industry 4.0 the exponential growth of technological developments allows us to gather, store and manipulate data like never before. Smaller sensors, cheaper memory storage and faster processors, all wirelessly connected to the network, facilitate dynamic simulation and modeling in order to project the object into the digital world. This virtual model is then able to receive operational, historical and environmental data.

Technological advancements have made the collection and sharing of large volumes of data much easier, as well as facilitating its application to the model and the processing involved in iterating through various possible scenarios to predict and drive outcomes. Of course, data security is an ever-important consideration with a Digital Twin, as with any digital modeling of critical resources.

As a computerized version of a physical asset, the Digital Twin can be used for various valuable purposes. It can estimate the asset's remaining useful life, predict breakdowns, project performance, and estimate financial returns. Using this data, design, manufacture and operation can be optimized in order to benefit from forecasted opportunities.

Implementing a useful Digital Twin is a three-stage process:

Establish the model: More than just overlaying digital data on a physical item, the subject is simulated using 3D software. Interactions with the model then communicate all the relevant parameters. Data is imposed, and the model 'learns', through similarity, how it is supposed to behave.

Make the model active: By running simulations, the model continuously updates itself according to the data, both known and imposed. Taking information from other sources including history, other models, connected devices, forecasts and costs, the software runs permutations of options to provide insights, relative to risk and confidence levels.

Learn from the model: Using the resulting prescriptions and suggestions, plans can be put into action to create or manipulate the situation in the real-life industrial context, in order to achieve optimal outcomes in terms of utilization.

A Digital Twin is more than just a blueprint or schematic of a device or system; it is an actual virtual representation of all the elements involved in its operation, including how these elements dynamically interact with each other and their environment. The great benefits come through monitoring these elements, improving diagnostics and prognostics, and investigating root causes of any issues in order to increase efficiencies and overall productivity.

A correctly generated Digital Twin can be used to dynamically calibrate the operational environment to positively impact every phase of the product lifecycle: design, building, and operation. For any such application, before the digital twin model is created the objectives and expectations must be well understood. Only then can a model be created at the correct fidelity to meet those objectives and provide the intended benefits. Examples of those benefits include:

  • Equipment monitors its own state and can even schedule maintenance and order replacement parts when required.

  • Mixed model production can be loaded and scheduled to maximize equipment usage without compromising delivery times.

  • Fast rescheduling in the event of resource changes reduces losses by re-optimizing loading to meet important delivery dates.

3 The Role of Simulation-Based Scheduling

The rise of Industry 4.0 has expedited the need for simulation of the day-to-day scheduling of complex systems with expensive and competing resources. This has extended the value of simulation beyond its traditional role of improving system design into the realms of providing faster, more efficient process management and increased performance productivity. With the latest technologies, like Risk-based Planning and Scheduling (RPS), the same model that was built for evaluating and generating the design of the system can be carried forward to become an important business tool in scheduling day to day operations in the Industry 4.0 environment.

With digitalized manufacturing, connected technologies now form the smart factory, having the ability to transmit data to help with process analysis and control during the production process. Sensors and microchips are added to machines, tools and even to the products themselves. This means that ‘smart’ products made in the Industry 4.0 factory can transmit status reports throughout their journey, from raw material to finished product.

Increased data availability throughout the manufacturing process means greater flexibility and responsiveness, making the move towards smaller batch sizes and make-to-order possible.

In order to capitalize on this adaptivity, an Industry 4.0 scheduling system needs to:

  • Accurately model all elements

  • Compute schedules quickly

  • Provide easy visualization.

With IoT devices, big data and cloud computing as features of Industry 4.0, the scheduling system needs more than ever to bridge the gap between the physical and digital worlds.

Traditionally, there are three approaches to scheduling: manual, constraint-based and simulation.

Although labor-intensive, manual scheduling can be effective in smaller or less complex systems. But a manual approach becomes impractical in a large, highly dynamic production environment, due to the sheer volume and complexity of data.

Constraint-based scheduling involves the solution of equations that are formulated to represent all the system constraints. A mathematical model could be built of all the elements of a Smart factory; however, it would be highly complicated to populate and solve, probably taking a long time to do so. Key aspects would have to be ignored or simplified to allow for a solution which, when found, would be difficult to interpret, visualize and implement.
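
To illustrate what "a set of equations representing the constraints" looks like, here is a deliberately tiny sketch using the open-source PuLP library: three jobs (with invented processing times) must be sequenced on one machine without overlapping, minimizing total completion time. Even this toy case needs binary precedence variables and big-M constraints; a full smart factory multiplies this complexity enormously.

```python
# Toy constraint-based scheduling formulation (illustrative only).
import pulp

jobs = {"A": 4, "B": 3, "C": 5}        # processing times (invented)
M = sum(jobs.values())                 # big-M for disjunctive constraints

prob = pulp.LpProblem("sequencing", pulp.LpMinimize)
start = {j: pulp.LpVariable(f"s_{j}", lowBound=0) for j in jobs}
names = list(jobs)
# y[i, j] = 1 if job i precedes job j on the machine
y = {(i, j): pulp.LpVariable(f"y_{i}_{j}", cat="Binary")
     for i in names for j in names if i < j}

for (i, j), v in y.items():
    prob += start[i] + jobs[i] <= start[j] + M * (1 - v)  # i before j
    prob += start[j] + jobs[j] <= start[i] + M * v        # j before i

prob += pulp.lpSum(start[j] + jobs[j] for j in jobs)      # objective
prob.solve(pulp.PULP_CBC_CMD(msg=False))
for j in names:
    print(j, "starts at", pulp.value(start[j]))
```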

Often scheduling today is done for a department or section of the facility to reduce complexity, both for manual and constraint-based scheduling. These localized schedules result in process buffers between sections, in the form of time, inventory, or capacity.

Simulation-based scheduling stands out as the best solution for Industry 4.0 applications. Each element of the system can be modeled, and data assigned to it. The resources, in terms of equipment, tools and workers, can be represented, as well as the materials consumed and produced in the process. In this way, the flow of jobs through the system can be simulated, showing exact resource and material usage at each stage for real-time status updates.
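
The core mechanism is easy to sketch. The toy Python model below (an illustration only, not a commercial scheduling engine; the jobs, routings, and times are invented) simulates two jobs flowing through their routings on two machines. The log of operation start and finish times it produces is, in effect, the schedule:

```python
# Minimal deterministic simulation-based scheduler: jobs flow through
# their routings, each machine serves one job at a time (FIFO), and
# every operation's start/finish is logged -- the log IS the schedule.
import heapq
from collections import deque

def simulate(jobs, machines):
    """jobs: {name: [(machine, minutes), ...]} routings.
    Returns [(start, finish, job, machine)] sorted by time."""
    queues = {m: deque() for m in machines}   # operations waiting
    busy = {m: False for m in machines}
    events, log, clock = [], [], 0.0
    for name, routing in jobs.items():
        queues[routing[0][0]].append((name, 0))  # release all at t=0

    def try_start(m):
        if not busy[m] and queues[m]:
            name, step = queues[m].popleft()
            busy[m] = True
            dur = jobs[name][step][1]
            log.append((clock, clock + dur, name, m))
            heapq.heappush(events, (clock + dur, name, step, m))

    for m in machines:
        try_start(m)
    while events:
        clock, name, step, m = heapq.heappop(events)
        busy[m] = False
        if step + 1 < len(jobs[name]):        # route to next station
            nxt = jobs[name][step + 1][0]
            queues[nxt].append((name, step + 1))
            try_start(nxt)
        try_start(m)
    return sorted(log)

plan = simulate({"J1": [("Cut", 3), ("Weld", 2)],
                 "J2": [("Weld", 4), ("Cut", 1)]},
                machines=["Cut", "Weld"])
```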

Decision logic can be embedded in the model, for example to select minimum changeover times, and custom rules can be added from worker experience. These combine to produce a set of rules that accurately models the actual flow of materials and components through the system.

This means that simulation-based scheduling software can perform calculations and permutations on all aspects of the production process. This ability, combined with the large volume of real-time data provided by the digitalized workstations, means that scheduling is fast, detailed and accurate.

Thus, the three main requirements for scheduling in Smart factories are satisfied by simulation-based scheduling software:

  • Accurate modeling of all elements—a flexible model is generated from computerized information, including full representation of operating constraints as well as custom rules.

  • Fast computation of schedules—calculation of schedules and scheduling alternatives, comparison and distribution is carried out quickly and precisely.

  • Easily visualized—computerized simulation allows the schedule to be communicated clearly and effectively across all organizational levels.

Improved labor effectiveness is another benefit of simulation-based scheduling. The detail generated enables technologies like smart glasses, which may be one of the most significant ways of enabling the labor force by providing employees with timely, detailed instructions. By constantly re-evaluating the schedule against actual, current data, the simulation model can direct each worker, via smart glasses, to the next task to perform in the most efficient way.

While such a schedule is an essential part of a smart factory, the model can play an even more integral role than just scheduling.

4 Simulation as the Digital Twin

The IT innovations of Industry 4.0 allow data collected from its digitalized component systems in the smart factory to be used to simulate the whole production line using Discrete Event Simulation software. Real-time information on inventory levels, component histories, expiration dates, transport, logistics and much more can be fed into the model, developing different plans and schedules through simulation. In this way, alternative sources of supply or production deviations can be evaluated against each other while minimizing potential loss and disruption.

When change happens, be it a simple stock out or equipment breakdown or an unexpected natural disaster on a huge scale, simulation models can show how downstream services will be affected and the impact on production. Revised courses of action can then be manually or automatically assessed, and a solution implemented.

The benefits of using simulation to schedule and reduce risk in an Industry 4.0 environment include assuring consistent production in which costs are controlled and quality is maintained under any set of circumstances.

By leveraging their scheduling capability, highly data-driven simulation models can also fill the role of a Digital Twin. Figure 1 illustrates how a simulation model can sit at the core of a smart factory. It can communicate with all the critical sub-systems, collect planning and real-time execution information, automatically create a short-term schedule, and distribute the components and results of that schedule back to each sub-system for further action. Advanced simulation-based scheduling software is uniquely suited for such an application due to its ability to communicate in batch or real-time with any sub-system, model the complex behavior required to represent the factory, execute sophisticated techniques to generate a suitably 'optimal' schedule, report that schedule back to stakeholders for execution, then wait for a deviation from plan to be reported which could cause a repeat of the process. This fills an important gap left in most smart factory plans.

Fig. 1 Digital twin enabling the smart factory

5 Tough Problems in Planning and Scheduling

Planning and scheduling are often discussed together because they are related applications. Planning is the “big-picture” analysis—how much can or should be made, when, where, and how, and what materials and resources will be required to make it? Planning is typically done on an aggregate view of the capacity assuming infinite material. Scheduling is concerned with the operational details—given the current production situation, actual capacities, resource availabilities, and work in progress (WIP), what priorities, sequencing, and tactical decisions will result in best meeting the important goals? Where planning is required days, weeks or months ahead of execution, scheduling is often done only minutes, hours, or days ahead. In many applications, planning and scheduling tasks are done separately. In fact, it is not unusual for only one to be done while the other may be ignored.

One simple type of planning is based on lead times. For example, if averages have historically indicated that most parts of a certain type are "normally" shipped 3 weeks after order release, it will be assumed that—regardless of other factors—when we want to produce one, we should allow 3 weeks. This often does not adequately account for resource utilization. If you have more parts in process than "normal," the lead times may be optimistic.

Another simple type of planning uses a magnetic board, white board, or a spreadsheet to manually create a Gantt chart to show how parts move through the system and how resources are utilized. This can be a very labor-intensive operation, and the quality of the resulting plans may be highly variable, depending on the complexity of the system and the experience level of the planners.

A third planning option is a purpose-built system—a system that is designed and developed using custom algorithms usually expressed in a programming language. These are highly customized to a particular domain and a particular system. Although they have the potential to perform quite well, they often have a very high cost and implementation time and low opportunity for reuse because of the level of customization.

One of the most popular general techniques is Advanced Planning and Scheduling (APS). APS is a process that allocates production capacity, resources, and materials optimally to meet production demand. There are a number of APS products on the market designed to integrate detailed production scheduling into the overall Enterprise Resource Planning (ERP) solution, but these solutions have some widely recognized shortcomings. For the most part the ERP system and day-to-day production remain disconnected, largely due to two limitations that impede their success: Complexity and Variation.

Complexity. The first limitation is the inability to effectively deal with indeterminately complex systems. Although purpose-built systems can potentially represent any system, the cost and time required to create a detailed, custom-built system often prevents it from being a practical solution. Techniques such as those discussed above tend to work well if the system is very close to a standard benchmark implementation, but to the extent the system varies from that benchmark, the tool may lack enough detail to provide an adequate solution. Critical situations that are not handled include complex material handling (e.g., cranes, robotic equipment, transporters, workers), specialized operations and resource allocations (e.g., changeovers, sequence dependent setups, operators), and experience-based decision logic and operating rules (e.g., order priorities, work selection rules, buffering, order sequence).

Variation. A second limitation is the inability to effectively deal with variation within the system. All processing times must be known, and all other variability is typically ignored. For example, unpredictable downtimes and machine failures aren't explicitly accounted for; problems with workers and materials never occur, and other negative events don't happen. The resulting plan is by nature overly optimistic. Figure 2 illustrates a typical scheduling output in the form of a Gantt chart where the green dashed line indicates the slack between the (black) planned completion date and the (gray) due date. Unfortunately, it is difficult to determine if the planned slack is enough. It is common that what starts off as a feasible schedule turns infeasible over time as variation and unplanned events degrade performance. It is normal to have large discrepancies between predicted schedules and actual performance. To protect against delays, the scheduler must buffer with some combination of extra time, inventory, or capacity; all these add cost to the system.

Fig. 2 Typical Gantt chart produced in planning

The problem of generating a schedule that is feasible given a limited set of capacitated resources (e.g. workers, machines, transportation devices) is typically referred to as Finite Capacity Scheduling (FCS).

There are two basic approaches to Finite Capacity Scheduling. The first approach is a mathematical optimization approach in which the system is defined by a set of mathematical relationships expressed as constraints. An algorithmic solver is then used to find a solution to the mathematical model that satisfies the constraints while striving to meet an objective such as minimizing the number of tardy jobs. Unfortunately, these mathematical models fall into a class of problems referred to as NP-hard, for which there are no known efficient algorithms for finding an optimal solution. Hence, in practice, heuristic solvers must be used that are intended to find a "good" solution as opposed to an optimal solution to the scheduling problem. Two well-known examples of commercial products that use this approach are the ILOG product family (CPLEX) from IBM, and APO-PP/DS from SAP.

The mathematical approach to scheduling has well-known shortcomings. Representing the system by a set of mathematical constraints is a very complex and expensive process, and the mathematical model is difficult to maintain over time as the system changes. In addition, there may be many important constraints in the real system that cannot be accurately modeled using the mathematical constraints and must be ignored. The resulting schedules may satisfy the mathematical model but are not feasible in the real system. Finally, the solvers used to generate a solution to the mathematical model often take many hours to produce a good candidate schedule. Hence these schedules are often run overnight or over the weekend. The resulting schedules typically have a short useful life because they are quickly outdated as unplanned events occur (e.g. a machine breaks down, material arrives late, workers call in sick).

This section was not intended as a thorough treatment, but rather a quick overview of a few concepts and common problems. For more in-depth coverage we recommend Factory Physics [2].

6 Simulation-Based Scheduling

As an alternative to the mathematical approach discussed above, another approach to Finite Capacity Scheduling is based on using a simulation model to capture the limited resources in the system. The concept of using simulation tools as a planning and scheduling aid has been around for decades. This author used simulation to develop a steel-making scheduling system in the early 1980s. In scheduling applications, we initialize the simulation model to the current state of the system and simulate the flow of the actual planned work through the model. To generate the schedule, we must eliminate all variation and unplanned events when executing the simulation.

Simulation-based scheduling generates a heuristic solution—but can do so in a fraction of the time required by the optimization approach. The quality of the simulation-based schedule is determined by the decision logic that allocates limited resources to activities within the model. For example, when a resource such as a machine goes idle, a rule within the model is used to select the next entity for processing. This rule might be a simple static ranking rule such as the highest priority job, or a more complex dynamic selection rule such as a rule that minimizes a sequence dependent setup time, or a rule that selects the job based on urgency by picking the job with the smallest value of the time remaining until the due date, divided by the work time remaining (critical ratio).
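
As a sketch of how such selection rules look in practice (the job fields here are assumed for illustration; real products expose this through their own rule definitions):

```python
# Dispatch rules invoked whenever a machine goes idle, to pick the
# next job from its queue. Job dictionaries are illustrative.
def critical_ratio(job, now):
    """Time remaining until due date / work remaining; values below
    1 mean the job is already projected late, so smaller is more
    urgent."""
    return (job["due"] - now) / job["work_remaining"]

def select_next(queue, now, rule="critical_ratio"):
    if rule == "priority":                  # static ranking rule
        return min(queue, key=lambda j: j["priority"])
    return min(queue, key=lambda j: critical_ratio(j, now))

queue = [{"due": 40, "work_remaining": 10, "priority": 2},
         {"due": 25, "work_remaining": 12, "priority": 1}]
print(select_next(queue, now=5))  # picks the job with ratio (25-5)/12
```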

Many of the simulation-based scheduling systems have been developed around a data-driven pre-existing, or “canned,” job shop model of the system. For example, the system is viewed as a collection of workstations, where each workstation is broken into a setup, processing, and teardown phase, and each job that moves through the system follows a specific routing from workstation to workstation. The software is configured using data to describe the workstations, materials, and jobs. If the application is a good match for the canned model, it may provide a good solution; if not, there is limited opportunity to customize the model to your needs. You may be forced to ignore critical constraints that exist in the real system but are not included in the canned model.

It is also possible to use a general purpose discrete event simulation (DES) product for Finite Capacity Scheduling. Figure 3 illustrates a typical architecture for using a DES engine at the core of a planning and scheduling system. The advantages of this approach include:

Fig. 3 Architecture of a typical simulation-based scheduling system

  • It is flexible. A general-purpose tool can model any important aspects of the system, just like in a model built for system design.

  • It is scalable. Again, similar to simulations for design, it can (and should) be done iteratively. You can solve part of the problem and then start using the solution. Iteratively add model breadth and depth as needed until the model provides the schedule accuracy you desire.

  • It can leverage previous work. Since the system model required for scheduling is very similar to that which is needed (and hopefully was already used) to fine tune your design, you can extend the use of that design model for planning and scheduling.

  • It can operate stochastically. Just as design models use stochastic analysis to evaluate system configuration, a planning model can stochastically evaluate work rules and other operational characteristics of a scheduling system. This can result in a “smarter” scheduling system that makes better decisions from the start.

  • It can be deterministic. You can disable the stochastic capabilities while you generate a deterministic schedule. This will still result in an optimistic schedule as discussed above, but because of the high level of detail possible, this will tend to be more accurate than a schedule based on other tools. And you can evaluate how optimistic it is (see next point).

  • It can evaluate risk. It can use the built-in stochastic capability to run AFTER the deterministic plan has been generated. By again turning on the variation—all the bad things that are likely to happen—and running multiple replications against that plan, you can evaluate how likely you are to achieve important performance targets. You can use this information to objectively adjust the schedule to manage the risk in the most cost-effective way.

  • It supports any desired performance measures. The model can collect key information about performance targets at any time during model execution, so you can measure the viability and risk of a schedule in any way that is meaningful to you.

However, there are also some unique challenges in trying to use a general purpose DES product for scheduling, since they have not been specifically designed for that purpose. Some of the issues that might occur include the following:

  • Scheduling Results: A general purpose DES typically presents summary statistics on key system parameters such as throughput and utilization. Although these are still relevant, the main focus in scheduling applications is on individual jobs (entities) and resources, often presented in the form of a Gantt chart or detailed tracking logs. This level of detail is typically not automatically recorded in a general purpose DES product.

  • Model Initialization: In design applications of simulation we often start the model empty and idle and then discard the initial portion of the simulation to eliminate bias. In scheduling applications, it is critical that we are able to initialize the model to the current state of the system—including jobs that are in process and at different points in their routing through the system. This is not easily done with most DES products.

  • Controlling Randomness: Our DES model typically contains random times (e.g. processing times) and events (e.g. machine breakdowns). During generation of a plan, we want to be able to use the expected times and turn off all random events. However, once the plan is generated, we would like to include variation in the model to evaluate the risk with the plan. A typical DES product is not designed to support both modes of operation (a minimal sketch of such a toggle appears after this list).

  • Interfacing to Enterprise Data: The information that is required to drive a planning or scheduling model typically resides in the company’s ERP system or databases. In either case, the information typically involves complex data relations between multiple data tables. Most DES products are not designed to interface to or work with relational data sources.

  • Updating Status: The planning and scheduling model must continually adjust to changes that take place in the actual system, e.g. machine breakdowns. This requires an interactive interface for entering status changes.

  • Scheduling User Interface: A typical DES product has a user interface that is designed to support the building and running of design models. In scheduling and planning applications, a specialized user interface is required by the staff that employs an existing model (developed by someone else) to generate plans and evaluate risk across a set of potential operational decisions (e.g. adding overtime or expediting material shipments).
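
As promised above, one minimal way to support both modes of operation is to route every random quantity through a sampler that returns the distribution mean in deterministic (planning) mode and a random draw in stochastic (risk analysis) mode. This Python sketch illustrates the pattern; it is not any particular product's design:

```python
import random

class Sampler:
    """Single switch between plan generation and risk analysis."""
    def __init__(self, deterministic=False, seed=None):
        self.deterministic = deterministic
        self.rng = random.Random(seed)

    def triangular(self, low, mode, high):
        if self.deterministic:
            return (low + mode + high) / 3.0  # triangular mean
        return self.rng.triangular(low, high, mode)

    def downtime_occurs(self, prob):
        # Unplanned events are simply suppressed while planning.
        return (not self.deterministic) and self.rng.random() < prob

plan_mode = Sampler(deterministic=True)             # generate the plan
risk_mode = Sampler(deterministic=False, seed=7)    # replicate against it
```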

A new approach, Risk-based Planning and Scheduling (RPS), is designed to overcome these shortcomings to fully capitalize on the significant advantages of a simulation approach.

7 Risk-Based Planning and Scheduling

Risk-based Planning and Scheduling (RPS) is a technology that combines deterministic and stochastic simulation to bring the full power of traditional DES to operational planning and scheduling applications [3]. The technical background for RPS is more fully described in Deliver On Your Promise: How Simulation-Based Scheduling Will Change Your Business [4]. RPS extends traditional APS to fully account for the variation that is present in nearly every production system and provides the necessary information to the scheduler to allow the upfront mitigation of risk and uncertainty. RPS makes dual use of the underlying simulation model. The simulation model can be built at any level of detail and can incorporate all the random variation that is present in the real system.

RPS begins by generating a deterministic schedule by executing the simulation model with randomness disabled (deterministic mode). This is roughly equivalent to the deterministic schedule produced by an APS solution but can account for much greater detail when necessary.

However, RPS then uses the same simulation model with randomness enabled (stochastic mode) to replicate the schedule execution multiple times (employing multiple processors when available), and records statistics on the schedule performance across replications. The recorded performance measures include the likelihood of meeting a target (such as a due date), the expected milestone completion date (typically later than the planned date, based on the underlying variation in the system), as well as optimistic and pessimistic completion times (percentile estimates, also based on variation). Contrast Fig. 2 with the RPS analysis presented in Fig. 4. Here the risk analysis has identified that even though Order-02 appears to have adequate slack, there is a relatively low likelihood (47%) that it will complete on time after considering the risk associated with that particular order, and the resources and materials it requires. Having an objective measure of risk while still in the plan development phase provides the opportunity to mitigate risk in the most effective way.

Fig. 4 Gantt chart identifying high-risk order
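
The risk measures themselves are straightforward to compute once the replications have been run. The sketch below (with assumed inputs; this is not Simio's internal implementation) derives an order's on-time likelihood and optimistic/pessimistic completion estimates from the completion times observed across replications:

```python
# Given one order's completion time from each stochastic replication,
# estimate its on-time likelihood and percentile completion dates.
import statistics

def risk_profile(completion_times, due):
    times = sorted(completion_times)
    n = len(times)
    pct = lambda p: times[min(n - 1, int(p * n))]
    return {
        "on_time_likelihood": sum(t <= due for t in times) / n,
        "expected": statistics.mean(times),
        "optimistic_p10": pct(0.10),
        "pessimistic_p90": pct(0.90),
    }

# e.g. for an order's completion times (hours) over 100 replications:
# risk_profile(times, due=72) -> {"on_time_likelihood": 0.47, ...}
```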

RPS uses a simulation-based approach to scheduling that is built around a purpose-built simulation model of the system. The key advantage of this is that the full modeling power of the simulation software is available to fully capture the constraints in your system. You can model your system using the complete simulation toolkit. You can use custom objects for modeling complex systems (if your simulation software provides that capability). You can include moving material devices, such as forklift trucks or AGVs (along with the congestion that occurs on their travel paths), as well as complex material handling devices such as cranes and conveyors. You can also accurately model complex workstations such as ovens and machining centers with tool changers.

RPS imposes no restrictions on the type and number of constraints included in the model. You no longer must assume away critical constraints in your production system. You can generate both the deterministic plan and associated risk analysis using a model that fully captures the realities of your complex production and supply chain. You can also use the same model that is developed for evaluating changes to your facility design to drive an RPS installation, which means a single model can be used to drive improvements to your facility design as well as to your day-to-day operations.

RPS implemented as a Digital Twin can be used as a continuous improvement platform to continuously review operational strategies and perform what-if analysis while generating the daily schedule. It can be used off-line to test things like the introduction of a new part to be produced or a new machine or line to be installed. When you update the model to reflect the new reality or decision rules, it can then be promoted to be the live operational model, immediately affecting the schedule based on the changes without having to re-implement the software or make costly updates.

The same model can be extended into the planning horizon to ensure better alignment between the master plan and the detailed factory schedule, and hence better supply chain performance. For example, the same model might run with a 3 to 6 week horizon for planning, a 1 to 3 week horizon for scheduling, and perhaps a 1 or 2 day horizon for the detailed production schedule used in execution. This ensures material availability, as procurement will be based on the correct requirement dates. This more accurate information can then be used to update the ERP system, for example feeding updates back to SAP.

RPS can even be linked to optimization programs like OptQuest. You can set corporate KPIs and run automatic experiments to find the best configuration for things such as buffer sizes, resource schedules, dispatching rules, etc. to effectively run the factory and then schedule accordingly.
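
Conceptually, such an experiment is a search over configurations, each evaluated by running the scheduling model against a KPI. The sketch below substitutes a simple grid search for a commercial optimizer like OptQuest, and evaluate_kpi is a hypothetical stand-in for running the model (a toy surrogate keeps the sketch runnable):

```python
# Grid-search experiment over buffer sizes and dispatching rules.
import itertools

def evaluate_kpi(buffer_size, rule):
    """Placeholder: in practice, run the scheduling model with this
    configuration for several replications and return a KPI such as
    mean tardiness. A toy surrogate stands in here."""
    return abs(buffer_size - 10) + (0 if rule == "critical_ratio" else 1)

best = min(
    itertools.product([5, 10, 20], ["priority", "critical_ratio"]),
    key=lambda cfg: evaluate_kpi(*cfg),
)
print("best configuration:", best)
```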

Let's end this chapter by using Simio to build and analyze a system similar to what we did in Chapter "Traditional Simulation Applications in Industry 4.0", but this time following a data-driven approach, such as you might use if you were building a digital twin of an existing system and could use data that already existed in an MES system like Wonderware or an ERP system like SAP. For this example, we will assume that the data is stored in a B2MML-compliant format, and we will start our model-building effort by importing that data.

8 Modeling: A Data-First Approach to Scheduling

In Sect. 5 of Chapter "Traditional Simulation Applications in Industry 4.0", we practiced building a partially data-driven model with the model-first approach. Another approach is to create the model from existing data. This data-generated approach is appropriate when you have an existing system and the model configuration data already exists in enterprise resource planning (ERP) systems (e.g., SAP), MES systems (e.g., Wonderware), spreadsheets, or elsewhere. A significant benefit of this approach is that you can create a base working model much faster. Now that we have a bit more modeling and scheduling background, let's build a model from a common data standard (B2MML) and then explore how we might enhance that model.

B2MML is an XML implementation of the ANSI/ISA-95 family of standards (ISA-95), known internationally as IEC/ISO 62264. B2MML consists of a set of XML schemas […] that implement the data models in the ISA-95 standard. Companies […] may use B2MML to integrate business systems such as ERP and supply chain management systems with manufacturing systems such as control systems and manufacturing execution systems. [5]
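
As a rough illustration of what consuming such data involves (the element names below are invented for illustration and are not the exact B2MML schema), an XML document is parsed into rows that can populate the model's data tables:

```python
# Sketch: turn B2MML-style XML into table rows for a data table.
import xml.etree.ElementTree as ET

doc = """<MaterialInformation>
  <Material><ID>Steel-01</ID><Quantity>40</Quantity></Material>
  <Material><ID>Alum-02</ID><Quantity>15</Quantity></Material>
</MaterialInformation>"""

rows = [
    {"id": m.findtext("ID"), "qty": int(m.findtext("Quantity"))}
    for m in ET.fromstring(doc).iter("Material")
]
print(rows)   # these rows would feed the model's Materials table
```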

The system we are modeling has two machines to choose from for each of four operations, as illustrated in Fig. 5. Each product will have its own routing through the machines. We will start by using some built-in tools to set up the data tables and configure the model with predefined objects that will be used by the imported data. Then we will import a set of B2MML data files to populate our tables. We will also import some dashboard reports and table reports to help us analyze the data.

Fig. 5 Overview of data-generated model

8.1 Configuring the Model for Data Import

Simio B2MML-compliant tables include: Resources, Routing Destinations, Materials, Material Lots, Manufacturing Orders, Routings, Bill Of Materials, Work In Process, and Manufacturing Orders Output. We will be creating all of these tables and importing all except the last one. But before we can import them, we must configure the model for their use. To do this we go to the Schema ribbon on the Data tab and press the Scheduling button to the right, as illustrated in Fig. 6. After indicating Yes to continue, you next select whether your routings are based on products (e.g., all products that are the same have the same routing) or orders (e.g., each order has its own independent routing). We will select the Product Based Routing Type for this example. This will create the set of data tables with the B2MML-compliant data schemas and add additional objects to your model that are customized to work with the B2MML data.

Fig. 6 Configuring model for B2MML data import

8.2 Data Import

We are now ready to import the data. Select the Resources table. Choose the Create Binding option on the Content ribbon, select CSV, and select the file named Resources.csv from the folder named DataFirstModelDataFiles found in the student downloads files. Then click the Import Table button on the Content ribbon. If you navigate to the Facility view, you will see that the resources have been added to the model.

Navigate back to the Data tab. Repeat the above process with each of the seven other tables, binding each to its associated CSV file, then importing it. After completing the imports, if you navigate back to the Facility view, you will see our completed model. The navigation view of Fig. 7 illustrates the custom objects that were added to this model when you clicked the Configure Scheduling Resources button. If you select the Shape1 object, you can see in the Properties window on the right that it is a SchedServer custom object and that many of the properties like the Work Schedule, Processing Tasks, and Assignments have been preconfigured to draw data directly from the tables. If the properties seem familiar, it is because SchedServer was actually derived from (and almost identical to) the Server object in the Standard Library.

Fig. 7 Model after importing B2MML data

8.3 Running and Analyzing the Model

Our model has been completely built and configured using the data files! You can now run the model interactively and see the animation. Before we can use this model for scheduling, we must go to Advanced Options on the Run ribbon and select Enable Interactive Logging. Note that each custom object we used already has its option set to log its own resource usage. Now you can go to the Planning tab and click the Create Plan button to generate the Gantt charts and other analysis previously discussed.

Let's import some predefined dashboards that were designed to work with this data schema. These dashboards are saved as XML files and can be found in the same folder as the CSV files. The three dashboards provide material details, order details, and a dispatch list for use by operators. To import these dashboards, go to the Dashboard Reports window of the Results tab (not the Results window under Planning) and select the Dashboards ribbon. Select the Import button and select the Dispatch List.xml file from the same folder used above. Repeat this process with the Materials.xml file and the Order Details.xml file. If you go back to the Planning tab—Results window—Dashboard Reports sub-tab, you can now select any of the three reports for display. Figure 8 illustrates the Order Details dashboard report.

Fig. 8 Order details dashboard report

Finally, let's add a couple of traditional reports. To import these reports, go to the Table Reports window of the Results tab (again, not the Results window under Planning) and select the Table Reports ribbon. Select the Import button for ManufacturingOrdersOutput and select the Dispatch List Report.repx file from the same folder used above. Repeat this process for ManufacturingOrders with the OrderDetails.repx file. Importing these two files has now defined the reports for use in the Planning tab. If you go back to the Planning tab—Results window—Table Reports sub-tab, you can now select either of the two new custom reports for display. Figure 9 illustrates the Dispatch List report for the Cut1 resource.

Fig. 9 Dispatch list report for Cut1 resource

While this was obviously a small example, it illustrates the potential for building entire models from existing data sources such as B2MML, Wonderware MES, and SAP ERP systems. This approach can provide an initial functioning model with relatively low effort. The model can then be enhanced with extra detail and logic to provide better solutions. This is a very powerful approach!

9 Additional Information and Examples

If you installed Simio so that you can follow along with the examples, you already have additional resources at hand to learn more. The Simio software includes the e-book Planning and Scheduling with Simio: An Introduction to Simio Enterprise Edition. You can find this on the Books button on the Support ribbon. This is an excellent place to continue your exploration of simulation-based scheduling. This book covers the standard data schemas and many of the general scheduling concepts and how each of those is addressed in Simio.

The Simio software also includes the e-book Deliver on Your Promise: How Simulation-Based Scheduling will Change Your Business [4]. This book is great for managers who want a better understanding of the complex process of scheduling. It provides more detail on some of the topics discussed in this chapter and describes a few case studies. You are encouraged to share this pdf (or the printed version available on-line) with managers who are looking to solve their scheduling problems.

The Simio software includes three scheduling examples that are each thoroughly documented in accompanying pdf files. These files are located under the Examples button on the Support ribbon:

  • Scheduling Discrete Part Production

  • Scheduling Bicycle Assembly

  • Scheduling Batch Beverage Production.

10 Summary

In Chapter “Traditional Simulation Applications in Industry 4.0”, we discussed ways that traditional DES could be used to meet some smart factory modeling needs and we illustrated with a model using Simio. While many DES products can fulfill important aspects of that role, there are many challenges remaining. In this chapter, we discussed some of those remaining Industry 4.0 challenges and opportunities.

We discussed the concept of a digital twin and how it addresses many of those challenges. Then we continued by examining how modern simulation software can be used to create a digital twin of the entire factory. We looked at some of the tough problems in planning and scheduling and the weaknesses of common approaches—weaknesses that often prevent realizing an effective solution. We discussed how simulation can be used to overcome many of these problems, especially using data-driven and data-generated models.

We continued with a discussion of how Simio's patented Risk-based Planning and Scheduling (RPS) provides a unique solution. Then we ended by creating a simple data-generated model from a set of B2MML-compatible data files. Finally, we have provided resources for additional learning opportunities.

Combining traditional simulation, RPS, and optimization, you could follow modeling phases like the following:

  1. Build the DES model to assess the design.

  2. Use that model to optimize system configuration.

  3. Add details and heuristics to prepare the model for scheduling use.

  4. Use the model to optimize heuristics and tune the system to achieve best results overall.

  5. Use the model to generate a proven, feasible schedule.

  6. Use variability analysis (RPS) to evaluate risk and assess the schedule robustness.

  7. Optimize short-term options to improve robustness and effectiveness at the lowest cost.

All of these can take place using a single tool and a single model. And the 3D animation supports and encourages stakeholder buy-in at each phase. A well-designed model is the simplest model that meets the objectives for each phase. Then, rather than having a static tool that can only be changed by "the experts", the model animation and graphical logic definition make it easy to understand and to change incrementally as needed over its lifespan, adapting to refined heuristics and system changes.

There are many advantages to using simulation in Industry 4.0 applications, and new applications are being discovered every day, particularly relating to designing, assessing, and implementing digital twins.