1 Introduction

The marketplace is so demanding that companies must change their procedures and approaches to stay ahead. Users continually demand new products, and competitors deliver products with new features, only some of which appeal to users. Hence, new product development (NPD) must quicken its process to stay ahead and, in turn, be competitive.

Technological advances have driven growth in factory productivity since the industrial revolution, which is now in its fourth stage. The first, at the end of the 18th century, came with the steam engine powering factories. The second, at the beginning of the 20th century, came with electricity enabling mass production, and is associated with iconic names such as Henry Ford and Frederick Taylor. The third revolution started in the 1970s with digital automation entering industries, benefiting from the power of electronics and information technologies [1]. Currently, the fourth stage, or Industry 4.0, takes advantage of technologies that connect all stages of product development. Further, Industry 4.0 links the physical world and the cyber world to create a Smart Factory (refer to Fig. 1). Such a factory needs components for monitoring, control, and interconnection to the cloud, resulting in a decentralized and optimizable factory [2].

Fig. 1: Illustration of components and connections for an Industry 4.0 Smart Factory

Standard practices in system development include modeling and simulation (e.g., to verify properties or reinforce decisions). Normally, simulation solutions optimize operations and predict failures [3, 4]. Yet, they are commonly performed before the process starts and are only updated after a fault appears, which might result in flaws.

Further, as data from the production process grows, an important feature of Industry 4.0 is determining which data is useful for improving the manufacturing process [5]. Additionally, improving quality in Industry 4.0 requires viewing the system as a multi-tiered system whose data must be optimized. For example, with machine data, inner sensors, external sensors, and the human component, it is necessary to detect which data is relevant and, from it, develop a quality monitoring system that determines whether a part is within or out of specifications [6].

As an alternative, Industry 4.0 uses Digital-Twins for prediction and prevention [7, 8]. A manufacturing system couples with its digital equivalent to predict errors with a small delay between data acquisition and response. Additionally, a Digital-Twin can simulate various scenarios, exploiting synchronization with the sensors, and provide a virtual representation of the system [8, 9]. The Digital Twin is regarded as the next step in modeling, simulation, and optimization technology [10]. A definition is given by Glaessgen and Stargel [11]: “digital twin is an integrated multi-physics, multi-scale, probabilistic simulation of a complex product and uses the best available physical models, sensor updates, etc., to mirror the life of its corresponding twin”.

Even though there is much literature on the Digital-Twin, it remains a broad concept, and agreement over its features and scope has not been reached. Open questions include whether the Digital-Twin data flows with the physical counterpart in both directions (physical-to-virtual and vice versa) [8, 12]; whether the Digital-Twin should act on the control system of its physical counterpart [12]; whether the control must adapt its parameters immediately or only after an event has occurred [13]; and how much information the Digital-Twin passes, at what communication speed, or whether it is only an offline simulation approach [14].

A smart factory process that can benefit from a Digital-Twin (using simulations and optimization) is the computer numerical control (CNC) machine tool (CNCMT) [15]. According to Luo et al. [15], a CNCMT must run precise simulations using design parameters and actual working conditions; must self-sense its conditions; should self-adjust to produce in less time, with less waste, and with better quality; should self-predict faults before any serious fault occurs; and should self-assess its status, optimize working parameters, and make decisions based on machine learning. Moreover, CNCMTs are autonomous robots and can use simulations to find better paths and optimize their processes.

All those requirements are quite time- and resource-consuming. For this reason, a Digital-Twin can be beneficial by exchanging information back and forth with the CNCMT about the process (e.g., position, movements, sensors, and energy-consumption information) and updating the control of the CNCMT immediately or passing the data on for the machining of the next board. In other words, the Digital-Twin can simulate the process several times, find the optimal parameters, and send them back to the CNCMT (while it performs other activities) to correct its behavior. Moreover, the Digital-Twin creates a path for cyber-physical integration in manufacturing, which is important for smart manufacturing [16].

This work presents how a metaheuristic optimization algorithm helps printed circuit board (PCB) manufacturing, with a Simulink implementation that uses scheduling and workers operating simultaneously for a Digital-Twin. Furthermore, this work presents a smart-manufacturing case using a CNCMT with synchronized simulation and optimization of the drilling process of a three-phase inverter PCB. The drilling process, besides having many holes in different positions, also involves different diameters that require changing the drilling tool. This leads to three optimization requirements: first, the path taken must be optimized so the tool travels the shortest distance. Second, in some cases it is useful to change the tool many times and focus on traveling the least distance (changing tools is almost automatic), yet in other situations it is useful to keep the same tool for a longer period, since changing it would be time- and energy-consuming. Third, the machine must be able to switch to new product designs and optimize for the new conditions.

2 Industry 4.0

Industry 4.0 is driven by nine foundations, or interconnected technological advances [17] (see Fig. 2): (1) autonomous robots, (2) simulation, (3) horizontal and vertical system integration, (4) the Industrial Internet of Things, (5) cybersecurity, (6) the cloud, (7) additive manufacturing, (8) augmented reality, and (9) big data and analytics.

Fig. 2: The nine foundations of Industry 4.0

These nine technological advances can work together to develop and speed up the process. An example of interconnectivity is a machining tool that uses autonomous robots and additive manufacturing for some of its parts and has its sensors connected to the network to verify the programmed trajectories. This information is also used for planning times and movements using simulations, optimization, the cloud, and big data analytics. Then, information is sent back to correct the actuators’ movements, reducing energy consumption. Finally, all the different data transmissions need cybersecurity to guarantee safe communication.

2.1 Digital twin

The term Digital-Twin refers to a digital duplicate of a physical entity that virtualizes its physical conditions. Digital-Twins can model, simulate, and optimize technology [10, 18]; using the direct connection between the physical and virtual models, simulation can be done in real time. Information must be sent seamlessly to allow the virtual and physical entities to exist together.

Digital-Twins integrate all nine technological advances to create a digital simulation model that updates and changes as it receives information from its physical analog. Furthermore, Digital-Twins can analyze theoretical values from big data together with real values to optimize, simulate, monitor, and verify system operations [19]. Additionally, if the Digital-Twin is correctly implemented, it could interact directly with the supply chain and smart logistics. For example, in Fig. 3 the physical component and the Digital-Twin share sensor information, from which the Digital-Twin can run simulations and perform optimization to improve production. Further, the Digital-Twin can be improved using information from the cloud, supply chain, management, and smart logistics.

Fig. 3: The structure proposed for a Digital-Twin. In the proposal, a physical device shares information from its sensors with the Digital-Twin, which uses it to run simulations and optimize production

Some of the popular uses for Digital Twins in manufacturing include:

  • Quality management

    Continuous checking of product data has clear benefits over random inspection in quality management. A Digital-Twin can track and model the entire production process to determine where a quality problem might occur [20]. It can also analyze the product materials to check whether better materials and/or production processes can be used [21].

  • System planning/virtual start-up

    Historical analysis of similar systems allows predicting the performance of a system that has not yet been built. Digital-Twins use historical information to model various scenarios resembling the desired one and determine which sections of a factory to enhance [22]. Further, using cloud computing, a data bank can be created with images of old machining information (e.g., images of past PCB designs). Then, using these images, a classification algorithm (e.g., a convolutional neural network) can detect components and the type of design, and suggest improvements [19].

  • Logistics planning

    The supply chain can be optimized with the help of a Digital-Twin by providing a clearer view of how materials are being used and by automating the supply of goods. For example, if the plant works with lean manufacturing, a Digital-Twin can increase its efficiency [23].

  • Product development

    Digital-Twins can help develop new products using virtual simulations, permitting production information to be mixed with other real-world information (e.g., customer experience) [19, 24].

  • Product redesign

    Adapting manufacturing to different products can be run first in a Digital-Twin, allowing the model to observe how much production will be affected and to check how to adjust the process to the new design. This can be done by simulating how the new product interacts with the existing equipment and optimizing its production time [25, 26].

As can be seen in the previous list, optimization and simulation are a large part of using a Digital-Twin. The selected algorithm was Ant Colony Optimization; however, this study does not focus on whether one optimization algorithm is more beneficial than another, but on how optimization benefits a Digital-Twin. Hence, the specific algorithm is not important, but it is explained to show that it requires some time to finish obtaining the optimal values.

Additionally, this work compares with other works such as [27], which uses the simulation of a beam to prove the concept of interaction between a Digital-Twin and its physical counterpart. Also, following the idea of [28], the physical twin can be observed, and predictions can be made with the Digital-Twin using simulation. In contrast, this work, besides using simulation and optimization to get the best performance, also integrates schedules, workers, and different PCB designs into the Digital-Twin.

3 Implementation

Most real-life problems are complex and often not solved easily, at least not analytically. For instance, the Traveling Salesman Problem (TSP) is not simple, since it is NP-complete and such problems are hard to solve [29]. For example, for 52 drilling holes there are \(51!/2 \approx 7.7556 \times 10^{65}\) different possible orderings of the drilling holes, and finding the best solution would be time-consuming. Thus, finding the best solution for a PCB with hundreds of holes is close to unfeasible.

Even so, one often needs an operating value that, although it might not be the best one (global optimum), is good enough for the problem (local optimum). Optimization algorithms try to solve such problems by finding “good-enough” solutions from the set of available alternatives.

3.1 Optimization

Optimization algorithms find minima or maxima using heuristics to quicken the exploration of the search space. As previously explained, the TSP is a specific case of an optimization problem whose main goal is to find the route with the shortest distance to visit a set of cities. One of the most used algorithms to solve the TSP is Ant Colony Optimization (ACO). ACO mimics how ants find the shortest path to food by laying a trail of pheromones from the nest to the food source. The trail that is explored more becomes the route with the shortest distance.

The ACO algorithm can be described simply as follows (a more elaborate form can be found in [30]):

  • Initialize an ant colony

  • Initialize pheromone trails and random attraction levels

  • Repeat until a termination criterion is met:

    • Choose a path for each ant with probability P

    • Advance to the next chosen state

    • Update the ants’ pheromone trails

    • Update pheromone attraction levels

  • Return the best pheromone trail

Mathematically, an ant moves from node i to node j with probability:

$$ p_{i,j} = \frac{(\tau_{i,j}^{\alpha})(\eta_{i,j}^{\beta})}{\sum_{j'}(\tau_{i,j'}^{\alpha})(\eta_{i,j'}^{\beta})} $$
(1)

where α and β are parameters that control the influence of τi,j and ηi,j, which are, respectively, the amount of pheromone on edge i,j and the desirability of edge i,j. The amount of pheromone is updated according to:

$$ \tau_{i,j}= (1-\rho)\tau_{i,j} + {\varDelta}\tau_{i,j} $$
(2)

where ρ is the rate of pheromone evaporation and Δτi,j is the amount of pheromone deposited, given by:

$$ {\varDelta}\tau_{i,j} = \begin{cases} Q/L_{k}, & \text{if ant \textit{k} travels on edge $i,j$} \\ 0, & \text{otherwise} \end{cases} $$
(3)

where Q is the pheromone deposit constant and Lk is the cost of the k-th ant’s tour, which is normally the distance. The pseudocode can be found in Fig. 4.

Fig. 4: Ant colony pseudocode
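
To make the loop concrete, the following is a minimal MATLAB sketch of ACO over a precomputed cost matrix; the function name and interface are illustrative assumptions, not the code used in this work, and the tour is treated as open (the drill does not return to the first hole).

% Minimal ACO sketch (illustrative, not the authors' original code).
% C is an n-by-n cost matrix between holes; the tour is left open, an
% assumption made here since the drill need not come back to the start.
function best = acoPath(C, nAnts, nIter, alpha, beta, rho, Q)
    n = size(C, 1);
    eta = 1 ./ (C + eps);                 % desirability of each edge
    tau = ones(n);                        % initial pheromone levels
    best.cost = inf; best.tour = [];
    for it = 1:nIter
        dTau = zeros(n);                  % pheromone deposited this iteration
        for k = 1:nAnts
            tour = zeros(1, n); tour(1) = randi(n);
            visited = false(1, n); visited(tour(1)) = true;
            for s = 2:n                   % build a tour node by node
                i = tour(s-1);
                p = (tau(i,:).^alpha) .* (eta(i,:).^beta);   % Eq. (1)
                p(visited) = 0;
                p = p / sum(p);
                tour(s) = find(rand <= cumsum(p), 1);        % roulette wheel
                visited(tour(s)) = true;
            end
            idx = sub2ind([n n], tour(1:end-1), tour(2:end));
            L = sum(C(idx));              % cost of this ant's open tour
            if L < best.cost, best.cost = L; best.tour = tour; end
            dTau(idx) = dTau(idx) + Q / L;                   % Eq. (3)
        end
        tau = (1 - rho) * tau + dTau;                        % Eq. (2)
    end
end

Leaving the tour open matches the drilling setting, where the head simply stops at the last hole rather than returning to the first one.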

3.2 Cost equation

As seen before, this problem involves finding the shortest distance between the perforations, with the addition that each tool change adds extra time. Hence, this problem is a mixture of the TSP with extra conditions for tool changes.

As an initial solution, the first cost function includes a penalization for tool changes in the form of extra distance traveled, as given by the following cost equation:

$$ \mathcal{L}_{ij} = \frac{d_{ij} + P}{v} $$
(4)

where dij is the distance from node i to node j, P is the penalization constant (applied when a tool change is required), and v is the horizontal speed of the drilling tool. The distance is calculated as

$$ d_{ij} = \sqrt{ (x_{j} - x_{i})^{2} + (y_{j} - y_{i})^{2} } $$
(5)

As a secondary test, the cost function was further developed so that it uses a tool-changing point (or home) as a reference for the optimization. The resulting cost equation is

$$ \mathcal{L}_{ij} = \begin{cases} \frac{d_{ij} }{v}, & \text{if no tool change is required} \\ T_{Tc}, & \text{if tool change is required} \end{cases} $$
(6)

where TTc is the time for tool changing, defined as

$$ T_{Tc} = \frac{d_{io}}{v} + \frac{d_{oj}}{v} + tc $$
(7)

where dio and doj are the distances from node i to the tool-changing point and from the tool-changing point to node j, and tc is a time constant that accounts for changing the tool and for the energy consumed by the transitory position response of the actuators.

Since the controllers in the actuators can reach almost any position, energy consumption is usually linked to the controller effort. That is, if the position controller must reach the reference position in a short period of time, the energy required by the controller increases. There is thus a trade-off between the time to reach the position and the energy spent; if the energy is high, extra cooling must be added and the cost of manufacturing the PCB increases. Hence, the value of tc must consider the time for changing tools as well as the time for reaching the reference position with the energy demanded by the controller. Another optimization algorithm could be run to find its optimal value; in this paper, the value of tc is fixed according to conventional requirements without running such an algorithm. As a result, tc is defined as

$$ tc=tct + trp $$
(8)

where tct is the time for changing the tool and trp is the time for reaching the position reference, which equals the ratio between the energy spent by the controller and the number of products required per day. The tool-changing time is treated as a priority when more than two tool changes occur in a short period of time.
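
To make the cost concrete, a minimal MATLAB sketch of Eqs. (6)–(8) could look as follows; the function name, argument layout, and the fixed tc value are assumptions for illustration, not the exact implementation used in this work.

% Sketch of the second cost equation, Eqs. (6)-(8). xy is an n-by-2 matrix
% of hole coordinates (mm), tool an n-by-1 vector of tool indices, home the
% tool-changing point, v the horizontal speed (mm/s), and tc the fixed
% tool-change time of Eq. (8) (tc = tct + trp).
function C = drillCost(xy, tool, home, v, tc)
    n = size(xy, 1);
    C = zeros(n);
    for i = 1:n
        for j = 1:n
            if tool(i) == tool(j)
                C(i,j) = norm(xy(j,:) - xy(i,:)) / v;   % Eq. (6), no change
            else                                         % change via home
                dio = norm(home - xy(i,:));              % node i -> home
                doj = norm(xy(j,:) - home);              % home -> node j
                C(i,j) = (dio + doj) / v + tc;           % Eq. (7)
            end
        end
    end
end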

Once the problem had been adapted to a pure distance optimization of the perforations, the ACO algorithm was implemented. ACO ran for 250 iterations with a population of 30 ants and learning parameters: rate of evaporation ρ = 0.5, influence controls α = 2 and β = 6, and pheromone deposit constant Q = 1.
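
Tying the two hypothetical sketches above together with these settings would then look like:

% Hypothetical run with the stated parameters: 30 ants, 250 iterations,
% rho = 0.5, alpha = 2, beta = 6, Q = 1, and one of the four candidate
% tool-changing corners, e.g., (200, 0). xy, tool, and tc come from the design.
C    = drillCost(xy, tool, [200 0], 50, tc);
best = acoPath(C, 30, 250, 2, 6, 0.5, 1);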

3.3 Simulink

In addition, to further elaborate the Digital-Twin, the model was built in Simulink (see Fig. 5). The model had worker allocation, a part generator, a buffer to store PCBs not yet milled, a milling machine, and a conveyor belt to take the piece out.

Fig. 5: Developed model and sub-model of the drilling process. (a) Main model. (b) Sub-model

To elaborate further, the milling machine had a sub-model consisting of: a system that gets the card, allocates a worker, waits for the PCB to be loaded, releases the worker, the milling machine itself, another worker allocation, a wait for the PCB to be unloaded, a worker release, and finally sending the finished PCB out. In more detail, the sub-model receives information about the holes’ distribution to start making the plaques (Fig. 5b). It starts work on the first PCB with a random drill sequence and, at the same time, starts the optimization process, which returns a new optimized drilling sequence before the first PCB finishes so it can be used for the rest of that batch. Further, to account for the time a worker takes to load and unload a PCB, the sub-model uses worker allocation and deallocation with 40 s to finish each task (this time could be adjusted to the real loading time). Lastly, the system adds a random extra time of up to 5% of the time to finish the plaque to emulate any eventuality.
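
As a rough sketch of this timing logic (a plain MATLAB stand-in for the Simulink blocks, with hypothetical names and the 0–5% eventuality term described in Section 4):

% Rough sketch of the sub-model timing: the first PCB of a batch is drilled
% with the random (unoptimized) route while ACO runs in parallel; the rest
% use the optimized route. Workers take 40 s each to load and to unload.
function t = batchTime(tRandom, tOptimized, nBoards)
    loadUnload = 40 + 40;                            % load + unload tasks
    t = loadUnload + tRandom * (1 + 0.05*rand);      % first PCB of the batch
    for b = 2:nBoards
        t = t + loadUnload + tOptimized * (1 + 0.05*rand);  % remaining PCBs
    end
end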

In this case, the model used a predefined schedule that sends a new batch of similar PCBs to the drilling machine once it has finished processing all the plaques of the previous batch. Each PCB batch has a random size and a random number and placement of holes. It is worth mentioning that a random schedule generator was used for next-day planning; it could be replaced with a real plant that processes orders as they arrive.

3.4 Case study parameters

For this case study, the main goal of the designed inverter was to serve as a driver for controlling BLDC motors. Table 1 shows the main parameters of the implemented circuit.

Table 1 Inverter’s design parameters

An early stage of the design was implemented as a surface-mount circuit. The final components, however, are dual in-line packages (DIP), which had to be either through-hole mounted or inserted into a socket on the printed circuit board (PCB).

The PCB was manufactured with DIP components that need different perforation diameters. For automation, the PCB goes through a CNC machine that must change the tools for every diameter.

The main problem is visiting all the points while accounting for the time lost in tool changes; this often compromises the manufacturing time and, in some cases, may also compromise the tool’s structural integrity. Additionally, simulation software normally produces a non-optimal drilling sequence. Hence, both the paths and the tool changes need to be optimized, which results in better usage of the machine time.

The holes are distributed as follows:

  • The 0.8128 (mm) diameter is required by the sockets for the components with DIP packages, such as the ATmega16 microcontroller, the IR2112 MOSFET gate drivers, and the MAX232 serial-interface IC, among others.

  • The 3.302 (mm) diameter is used for through-hole components such as resistors, capacitors, and the crystal oscillator. As Fig. 6 shows, this is the most common hole size required for the implementation.

  • The 1.2 (mm) diameter is used by the screw terminals, which serve the digital power connections, the motor connections, and the supply voltage of the power electronics stage.

  • The 1.1 (mm) diameter is required by the 5 (W), 10 (Ω) resistor selected for the dynamic brake stage.

  • The 1.016 (mm) diameter is for the terminals of the DB-9 female connector (USB-serial communication), the header connectors, the 7805 linear voltage regulator, and the IRF3710 MOSFETs.

  • The 3.302 (mm) diameter is also used for the DB-9 female connector’s screw terminals to ensure a good connection with the USB-serial cable.

  • The 0.9144 (mm) diameter is necessary for the 104K polyester capacitors connected in parallel to the power supply.

Fig. 6: Holes’ distribution with different colors for each diameter

Table 2 summarizes all the components needed for the design. It shows the quantity of each component, its name, the drill radius, and the number of holes per component.

Table 2 List of the components for the PCB
Table 3 Required drill radius with their corresponding number of drills

The number of drills required by the final design for each tool can be seen in Table 3.

Figure 7 shows the simulated diameters and trajectories of the design without optimization, as sent by the PCB software used for the design task (EAGLE). For the manufacturing process, the copper plate used was 200 (mm) wide by 200 (mm) long.

Fig. 7: Simulated drilling of the PCB. (a) Left: simulated scaled positions and diameters of the drills in the copper plate. (b) Right: simulated non-optimized coordinate sequence sent by the design software

Figure 7 (left) and Table 3 show that the total number of drilling holes is 361, with six different diameters. Further, observing the calculated final trajectory (Fig. 7, right), and considering a distance of 200 mm every time the tool needs to be changed, the total distance traveled is 51,668 mm. At a velocity of 50 mm/s this takes ~17 min, and it would take longer with more holes. The result is far from optimal, since only ~3 of these PCBs can be manufactured in 1 h. Hence, the optimization must solve for all the hole positions and the required tool changes.
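
The travel-time estimate follows directly from these numbers:

$$ t = \frac{51{,}668\ \text{mm}}{50\ \text{mm/s}} \approx 1033\ \text{s} \approx 17.2\ \text{min}, \qquad \frac{60\ \text{min}}{17.2\ \text{min}} \approx 3.5\ \text{PCBs per hour} $$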

4 Methodology

For industrial purposes, the algorithm was implemented in MATLAB and Simulink to find the drilling process’s final path. Both tools were selected because they have dedicated hardware support, which allows them to connect to real-time simulation and to real systems. The optimization function implemented in MATLAB reads all the coordinates (x and y) and the tool required for each hole. Then a model is created that reduces the distance between the coordinates. For all conditions, a velocity of 50 mm/s was considered.

As a start, two situations were evaluated to measure the cost equations: first, for the first cost equation, a penalization of 50 mm, 100 mm, or 200 mm was added whenever there was a tool change. Second, to test the second cost equation, four different tool-changing points were used: (x,y) = (0,0), (0,200), (200,0), and (200,200), covering all four corners.

The second part consisted of testing the Simulink Digital-Twin model. The model worked from a predefined schedule and ran for 8 h. The schedule sends a new batch of PCBs to the drilling machine after the previous batch has been finished. Each PCB batch has a random size distributed from 1 to 10, a random number of holes distributed from 200 to 400, and random hole positions within a 200 × 200 mm area.
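
A minimal sketch of such a batch generator (uniform distributions are assumed, as are the six drill diameters taken from the case study; names are illustrative):

% Hypothetical batch generator matching the stated distributions: batch size
% 1-10, 200-400 holes per design, positions in a 200 x 200 mm area; six drill
% diameters are assumed, matching the case study in Table 3.
function batch = randomBatch()
    batch.nBoards = randi([1 10]);           % identical PCBs in the batch
    nHoles        = randi([200 400]);
    batch.xy      = 200 * rand(nHoles, 2);   % hole positions (mm)
    batch.tool    = randi(6, nHoles, 1);     % drill/tool index per hole
end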

The process operates without optimization and with optimization using the first and second cost equations. It is important to remember that the optimized route is calculated while the first PCB of each batch is being drilled; hence, that first PCB runs as if there were no optimization, while the subsequent PCBs with the same configuration use the optimized route. Additionally, to represent a real change of PCB, a random extra time between 0 and 5% is added to each route.

Lastly, the system was run for the PCB case study, and the result is shown with the holes drilled and the components installed.

5 Results

First, Fig. 8 shows the evolution of the ACO algorithm using penalizations of 50 mm, 100 mm, and 200 mm for tool changes, which result in the process requiring 8, 7, and 6 tool changes, respectively. The left side shows the evolution of the ACO algorithm, with final costs of 68.46 s, 78.10 s, and 88.38 s; the right side shows the final drilling path. Each color represents a tool, and the red lines stand for changes of tool in the drilling process. It is worth noticing that a low penalization focuses primarily on nearby drilling holes, without regard to tool changes, whereas a higher penalization makes the optimization focus first on the holes with the same diameter and only later on the tool change. This implies that the optimization avoids tool changes that are not strictly necessary.

Fig. 8: ACO using penalizations of 50 mm, 100 mm, and 200 mm. On the left, the evolution of the optimization; on the right, each colored line represents the path taken for one of the hole diameters, and red represents a change of tool between paths. The first row corresponds to 50 mm, the second to 100 mm, and the third to 200 mm

Second, the experiment was run using the second cost equation with different tool-changing points; Figs. 9 and 10 show these cases. It is worth noticing that with this cost the tool is changed only 6 times, and that the lower-right corner, (x,y) = (200,0), yields the smallest cost, 98.12 s. This can be explained by more tool changes occurring near that corner, so it is better to start there. Hence, it would be important to initially check each corner, which could be done in the Digital-Twin before starting; this will be implemented in a future version.

Fig. 9: ACO starting at the lower-left corner (top row) and the lower-right corner (bottom row). On the left, the evolution of the optimization; on the right, each colored line represents the path taken for one of the hole diameters, and red represents a change of tool between paths

Fig. 10: ACO starting at the upper-left corner (top row) and the upper-right corner (bottom row). On the left, the evolution of the optimization; on the right, each colored line represents the path taken for one of the hole diameters, and red represents a change of tool between paths

Third, the first row of Fig. 11 shows the number of parts produced in each run (8 h), and the second row shows each time a worker was allocated during the run. The columns represent, from left to right, the runs with no optimization and with optimization using the first and second cost equations. Without optimization only 11 plaques can be made; with optimization, 32 and 38 plaques are made with the first and second equations, respectively. Further, without optimization the line can only handle 2 or 3 design changes, whereas with optimization it handles 7 to 8 different PCB designs. Thus, the use of the Digital-Twin can greatly improve the production of plaques.

Fig. 11: The number of plaques made and workers’ activation time for an 8-h simulation. In the first row, each step represents a new PCB being drilled; the second row shows every time a worker unloads and loads a new PCB. The first column is without optimization; the second and third columns use the first and second cost equations

Lastly, as evidence, Fig. 12 shows the final PCB after drilling, from both the bottom and top views.

Fig. 12: Final PCB with its corresponding drilled holes. (a) Bottom view of the manufactured PCB. (b) Top view of the manufactured PCB

6 Conclusions

Our aim was to test whether the use of a Digital-Twin can be highly beneficial in product redesign by simulating how new products will affect the production line. It was applied to the smart manufacturing of a PCB, examining how redesigns affect time consumption. The results of this research suggest that:

  • Depending on the variable analysis, the cost function, and the use of optimization, the drilling-hole sequence can be designed before construction.

  • Using a speed of 50 mm/s, the time without optimization was ~17 min; conversely, with optimization most cases take ~2 min (120 s), well under half of that.

  • It took less time to run the optimization than to drill one PCB; hence, the optimization can run in parallel while the first PCB is drilled, and its results can be used for the next PCBs in the batch.

  • Using the Simulink model with optimization increases the number of manufactured PCBs.

  • With optimization, the workers have to replace the PCBs more often.

All of this would reduce the time enormously, especially for large-scale production, which is helpful when there is high demand for new PCB orders and reconfiguration is required.

7 Future work

The presented work has the following limitations and considerations:

  1. It does not include real-time IoT communication for updating information from the complete process

  2. The Digital-Twin is only a local representation that does not integrate the complete supply chain

  3. The optimization algorithm has yet to be deployed into an embedded digital system that could provide information to predict the performance of the process

  4. The data regarding failures is not stored and modeled

  5. An economic study showing the main advantages of using this optimization algorithm is not included

  6. This research does not assess all metaheuristic optimization methods

  7. Only one type of material is evaluated in designing the PCB

  8. The manufacturing time could also change according to new degrees of freedom of each tool, so the number of degrees of freedom could also be optimized

  9. This research does not study the optimization of component placement

As part of future work, as in [5, 6], it would be important to check which variables are relevant and, from this data, develop a quality monitoring system that determines whether a part is within or out of specifications. The use of different materials should also be checked, along with optimization to use more of the PCB area. Finally, further PCB models with a higher number of holes and different specifications would need to be examined.