1 Introduction

My interest in what would become Digital Twins (DTs) actually dates to the early 70s. I had been fortunate enough to be selected, in the summer before my senior year at my Catholic high school, Cabrini High, for a National Science Foundation program at Oakland University in Rochester, Michigan. This program drew students from all over the Detroit area and was called a “Math Camp”.

Of the three courses at the “camp”, two were about math, but one was on programming. We were taught to program in Fortran on the university’s IBM 1620. Because of that, by the end of my freshman year at the University of Detroit, I was working as a systems programmer on timesharing operating systems and compilers for a company called Applied Computer Timesharing or ACTS Computing. ACTS had two different timesharing systems, a GE 465 and later a GE 265 that was located at the Ford Motor Company No. 2 Engineering Center.

In the early 70s, one of the ACTS salespeople came to me with a project that he was interested in. What was then the local telephone company, Michigan Bell, had a significant problem with people cutting telephone lines that were buried on their property. Michigan Bell had started a program called Miss Dig.

Miss Dig was an attractive employee of the telephone company who was dressed in a miniskirt and white knee-high go-go boots.Footnote 1 In print and TV ads, Miss Dig encouraged people to call the telephone company before they dug on their property. It was and still is an expensive proposition to send a team out to a property every time somebody calls, so the salesperson was wondering whether this might be solved by computers.

I thought about the problem. My thought was that if we could create what would now be considered a Digital Twin of the counties that Michigan Bell was operating in, we could indicate which parts of a property had a telephone line in them. The problem was that the solution I envisioned meant dividing the area up into one-square-foot cells and then indicating for each cell whether there was a telephone line in it or not. A quick calculation, however, showed that the amount of data this required would completely dwarf the capacity of the GE 465 timesharing system, which had 64k 24-bit words or 192,000 bytes of internal memory and maybe 100 MB of disk space.
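
A back-of-the-envelope version of that calculation, using purely illustrative numbers for a single county, shows the scale of the mismatch:

    # Back-of-the-envelope storage estimate (illustrative numbers only).
    SQ_FT_PER_SQ_MILE = 5280 * 5280        # 27,878,400 square feet in a square mile
    county_area_sq_miles = 600             # assumed area of one county served by Michigan Bell
    bits_per_cell = 1                      # one bit per square foot: line present or not

    total_cells = county_area_sq_miles * SQ_FT_PER_SQ_MILE
    total_bytes = total_cells * bits_per_cell // 8

    ge_465_disk_bytes = 100 * 10**6        # roughly 100 MB of disk on the GE 465
    print(f"Cells to track: {total_cells:,}")
    print(f"Storage needed: {total_bytes / 10**6:,.0f} MB for one county")
    print(f"That is about {total_bytes / ge_465_disk_bytes:.0f}x the available disk space")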

However, I continued to think from time to time about the usefulness of representing physical things in digital space. Later in the 70s, when I was involved with the world’s first supercomputer, the Illiac IV, I quickly concluded that even that computer system didn’t have anywhere near the capacity for a project like this.

By my 30s I had started my own computer company and had the privilege of interacting with some of the pioneers of the information networking area, such as Bob Metcalfe, one of the inventors of Ethernet. However, these early personal computer systems clearly didn’t have the capacity for virtualization.

By the late 90s, I had tired of being a corporate executive, even if it was the company I founded. In fact, I had taken my company public in the mid 90s, so it was not really my company any longer. I was also spending far more time dealing with lawyers and accountants than the technology. I decided that it was time to do something different.

That something different was enrolling in a new multi-disciplinary, executive-oriented doctoral program (EDM) at Case Western Reserve University in Cleveland, Ohio. I was interested in moving beyond the commercial aspects of information processing and delving deeper into the underlying theory and constructs about information itself.

I was particularly interested in the idea that the information that was embedded in physical objects could be stripped from those objects and created as an entity. Because of the exponential increases predicted by Moore’s Law, we were rapidly approaching a point where the information that we could obtain by being in physical possession of a physical object could be replicated digitally within a computer.

This idea of the duality of a physical object and its embedded information and a virtual object, the information itself, was a concept that began to crystallize early on in the EDM program [16].Footnote 2 This duality of objects, both physical and virtual, is known today as the Digital Twin Model.

I had the idea of the Digital Twin model at the beginning of the millennium. However, because of the compute-, storage-, and bandwidth-intensive requirements of the Digital Twin model, the exponential increases in those capabilities only began making the Digital Twin a reality by the middle of the 2010s. To understand how the Digital Twin concept took shape after the Case Western doctoral program, I would like to describe the path that it has taken over the past two decades.

2 The First Digital Twin Model

The origins of the Digital Twin concept and its associated model are well established in both industry and academic literature. As shown in Fig. 1, the Digital Twin Model was first presented at a Society of Manufacturing Engineers (SME) conference in Troy, Michigan in October of 2002 [9]. The presentation was on the support/operational phase of the product lifecycle. The model did not even have a name. The slide was simply entitled, “Conceptual Ideal for PLM”.

Fig. 1

The viewgraph from the first presentation about the Digital Twin as part of the Conceptual Ideal for Product Lifecycle Management (PLM). The slide shows a flow between Real Space and Virtual Space with data and information transferred between them; Virtual Space is subdivided into VS1, VS2, …, VSn

A little later that year, I presented the Digital Twin in a more general way at the organizational meeting of what we were calling the Product Lifecycle Management Development Consortium (PLM DC) [10]. This was a meeting in the Lurie Engineering Center at the University of Michigan. The purpose was to garner commitment for what was then a new product-oriented concept, Product Lifecycle Management (PLM).

The idea was to create a research center at the University of Michigan focusing on PLM applied research. I was a Co-Director of the Center. The attendees were of two types: engineering and information technology executives from the auto industry, both OEMs and Tier 1s, and representatives from the nascent PLM software community that included EDS,Footnote 3 Dassault, PTC, and MatrixOne (subsequently acquired by Dassault).

While the Digital Twin model was the same model presented earlier at the SME conference, the model was intended to convey its applicability across the entire lifecycle of the product. Because of the automotive industry attendees, there was a strong engineering and manufacturing focus.

The Digital Twin model from the 2002 time frame is substantially unchanged from today’s Digital Twin model. The original model contains the same three main components as today’s model. These components are: (a) physical space and its products, (b) virtual or digital space and its products, and (c) the connection between the two spaces.

There was a fourth component in this original Digital Twin model. Since the idea of virtual or digital spaces was relatively new at the time, this component was intended to emphasize that unlike physical space, where there is a single instance that we have access to, virtual or digital spaces have an infinite number of instances that we can use. Because it was an automotive group, the example I used to illustrate this was crash testing. I said, “in physical space, the car that is crash tested is destroyed and cannot be used again. In these virtual spaces, we can crash test that same vehicle repeatedly.”

When the Digital Twin model was introduced in 2002, I did not even give it a name. As Fig. 1 shows, it was simply the “Conceptual Ideal for PLM.” I did name the concept in a 2005 paper as the Mirrored Spaces Model [11], but changed the name in my first book on PLM in 2006 [12] to the Information Mirroring Model, where I also called the Digital Twin a virtual doppelganger [27].

It was not until 2010 that the concept acquired its “Digital Twin” name. I was a consultant to NASA, and modeling and simulating spacecraft was instrumental in thinking about the Digital Twin and its entailments.Footnote 4 The NASA colleague who brought me into NASA, John Vickers, took the inelegant names I had for the DT concept at the time and coined the actual “Digital Twin” name. He also introduced the Digital Twin within NASA in his 2010 roadmap [21].

In spite of this, the Information Mirroring Model name stayed the same in my second book on PLM [13]. However, I did take the hint that John Vickers was on to something with the name and included a footnote mentioning John and the “Digital Twin” name. The Digital Twin name finally made it into my work in my often cited manufacturing white paper [14]. From that point on, I have used Digital Twin as the name of the concept and model.

“Digital Twin” well conveys the conceptual idea behind the Digital Twin that persists through today. It is the idea that the information about a proposed or actual product can and should be an artifact in its own right. This allows us to move work that has historically been done in the physical world into the virtual world, as I shall discuss later in this chapter.

3 Digital Twin Model Today

It is useful to review just what a Digital Twin is. The Digital Twin Model is a concept that, as shown in Fig. 1, consists of three main elements: an actual or intended physical element on the left side that currently exists or will exist in the physical world (the “Physical Twin”) [15]; the virtual or digital counterpart on the right side that exists in the virtual or digital world (the “Digital Twin”); and the communication channel of data and information between these two elements (the “Digital Thread”).

Figure 2 is the Digital Twin model today. The graphics are better courtesy of my time with NASA, when we put together presentations for a DoD conference [5]. The model itself is pretty much the same as the original model in Fig. 1. There are three main characteristics or core components of the Digital Twin model. On the left side, we have physical space and physical products that we have always had since time immemorial and still will continue to have in the real world. We will always require real, physical products to perform work in the physical world.

Fig. 2

The model of the Digital Twin as practiced today, showing the correspondence between the Virtual and Physical Space. Physical space is on the left and virtual space on the right; data flows from physical to virtual space (solid arrow) and information flows from virtual to physical space (dotted arrow)

On the right side is the idea of a virtual space. This is our digital representation of the products that are over on the left side, and the information about those products is contained in this virtual space. The third component is the connection between the physical space and the virtual space. What we want to convey here is the idea of moving data from the physical space environment into virtual space to create and inform our virtual product. We then want to use the information from our virtual space over in the physical space.

The connection between the two spaces is commonly referred to as the “Digital Thread.” I’m not a fan of that term, because I have hung on by a thread too often in my career. The term “thread” doesn’t give me a lot of comfort. I’d rather have a digital cable or a digital pipe. However, we are stuck with the “Digital Thread” terminology.

The reason we want to do this is the premise that we want to move work from the physical world of the twentieth century and before to the virtual world of the twenty-first century and beyond. As I will discuss below, we want to substitute information for wasted physical resources.

This is the key as to why Digital Twin is so important moving forward in product development, manufacturing, and operations/sustainment. This is the current Digital Twin model. It hasn’t changed very much since it was introduced in 2002. However, we are now able to implement it with information technology that wasn’t available in the early 2000s.
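
To make the three components and the two flows between them concrete, here is a minimal sketch in code; the class and field names are purely illustrative and not part of any standard:

    from dataclasses import dataclass, field

    @dataclass
    class PhysicalTwin:
        """The product as it exists in physical space."""
        serial_number: str
        sensor_readings: dict = field(default_factory=dict)

    @dataclass
    class DigitalTwin:
        """The virtual counterpart that holds the product's information."""
        serial_number: str
        state: dict = field(default_factory=dict)

    class DigitalThread:
        """The connection between the two spaces."""

        def push_data(self, physical: PhysicalTwin, digital: DigitalTwin) -> None:
            # Data flows from physical space to create and inform the virtual product.
            digital.state.update(physical.sensor_readings)

        def pull_information(self, digital: DigitalTwin) -> dict:
            # Information flows from virtual space back for use in physical space.
            return dict(digital.state)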

4 Digital Twin Scale and Scope

The wide-spread interest in Digital Twin dates from about the middle of the last decade. It roughly coincides with publication of one of the first popular press articles on the topic. The article was originally published in the Economist GE Look ahead section, entitled, The Digital Twin: Could this be the 21st-century approach to productivity enhancements? The article was re-published by the World Economic Forum, where it is still available [28].

In the 2015 timeframe, there were few mentions of the Digital Twin. Searching Google then, there would have been few hits, with most of them being mine. In 2019, there were about a million hits. Today, in 2023, there are over 9M hits, with the count of image results even higher. In Google Scholar, the number of academic papers has increased from around 1,000 to over 52,000 in 2023.

The scope of the Digital Twin is also increasing rapidly. The Digital Twin was originally the purview of aerospace and manufacturing. It has rapidly expanded from that into ships [7, 20], railway infrastructure [1], oil and gas (O&G) [19], and smart buildings and smart cities [2].

One of the rapidly growing areas is healthcare, with Digital Twins of people [23]. There are articles on implants, cardiac care [24], prosthetics [17], and Covid treatments [3]. There is even an article that proposes Digital Twins should help select your spouse [6]. Digital Twins are also of interest to the pharmaceutical industry. There is an article that suggests that it is unethical to use placebos on humans in the standard double-blind tests; the ethical approach proposed is that placebos should be given only to the Digital Twin of the human [22].

I had always thought that Digital Twins could be used for non-tangible things like processes. My premise was that if we could visualize it, we could create its Digital Twin. I intentionally kept away from the intangible early on so as not to add additional elements that might be confusing. However, we are now seeing Digital Twins of manufacturing processes, supply chains [4], financial products, and other intangible things. While there is always a danger of over-hyping a concept, there does seem to be the ability to move even intangible work from the physical world into the virtual world.

5 Digital Twin Types

As shown in Fig. 3, I divide the product lifecycle into four phases: create, build, operate/sustain, and dispose.Footnote 5 While the lines of demarcation between the phases are not bright lines, this is a useful way of looking at the lifecycle of a product. Under the framework of Digital Twin, there are three types of Digital Twins depending on the phase of the product lifecycle. Figure 4 shows the three types of Digital Twins.

Fig. 3

The four phases of the Product Lifecycle that are useful for thinking about the uses of Digital Twins in Product Support: create, build, operate/sustain, and dispose, shown as a cycle with a photograph of a rocket at the center

Fig. 4

The types of Digital Twins that accompany different phases of the Product Lifecycle: the Digital Twin Prototype (DTP), all products that CAN BE made; the Digital Twin Instance (DTI), individual products that ARE made; and the Digital Twin Aggregate (DTA), all products that HAVE BEEN made

5.1 Digital Twin Prototype (DTP)

We first start off with what is called the Digital Twin Prototype (DTP), which is the prototypical Digital Twin. This is the idea that we have a Digital Twin before we have a physical product. This is because we want to move as much work as we possibly can into the virtual realm. We would like to create our product, test our product, manufacture our product, and support our product virtually. Only when we get the product as perfect as we can make it do we want to manufacture the physical product. If we are going to make mistakes, the virtual realm is the place to make them, because the cost of mistakes made virtually approaches zero.

The DTP is all the products that can be made. The DTP is the product and its variants. As we can see from the figure, the product takes shape over time. The product goes from an idea to a first manufactured article.

Virtual Reality (VR) technology is extremely useful here. The ability for humans to use their highest-bandwidth input, their eyes, allows them to process much more data than seeing reams of numbers. Humans, as creatures of the physical world, need to see things to fully understand what is occurring.

5.2 Digital Twin Instance (DTI)

When we start to produce production products, we transition to creating Digital Twin Instances (DTIs). These are all the products that are made. I now want to create a Digital Twin Instance of a specific product. For a specific manufactured instance, I don’t have Geometric Dimensioning and Tolerancing (GD&T), such as X ± 0.05 mm; I have the specific measured value for that specific part. I have serial numbers and not just simply the name of this assembly.

Since we are going to want to track this product throughout its life, we need the As-Built of that product instance as it is created. The requirement for a DTI is driven from the business use case of having this information. The need for a DTI corresponds to the complexity and importance of the product. The F-35 pictured in the figure needs a DTI. A paper clip doesn’t.

A good deal of the information for a DTI is going to come from the first type of Digital Twin, the DTP. This information will not need to be duplicated. However, we now move from the ideal specifications to the measurements of individual products.
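
A small sketch of that shift, with made-up part names and numbers, shows how a DTP tolerance becomes a DTI measurement:

    from dataclasses import dataclass

    @dataclass
    class DTPDimension:
        """Design intent: nominal value plus tolerance (e.g., X +/- 0.05 mm)."""
        name: str
        nominal_mm: float
        tolerance_mm: float

    @dataclass
    class DTIDimension:
        """As-built: the actual measurement for one serialized instance."""
        name: str
        serial_number: str
        measured_mm: float

        def within_spec(self, spec: DTPDimension) -> bool:
            return abs(self.measured_mm - spec.nominal_mm) <= spec.tolerance_mm

    spec = DTPDimension("shaft_diameter", nominal_mm=25.00, tolerance_mm=0.05)
    built = DTIDimension("shaft_diameter", serial_number="SN-000123", measured_mm=25.03)
    print(built.within_spec(spec))   # True: this instance conforms to the DTP specification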

Augmented Reality (AR) technology will be increasingly important for DTIs. Instead of the Physical Twin and the Digital Twin being separate things, AR allows the Digital Twin to be overlaid on its Physical Twin. With AR equipment, technicians can not only see the product in front of them, but they can also see the performance of that product, such as temperature gradients, fuel flow speeds, or power outputs.

5.3 Digital Twin Aggregate (DTA)

The third type of Digital Twin is the Digital Twin Aggregate (DTA). This is the aggregation of all the products that have been made. We can collect and aggregate the data from the population of products to provide value.

We would like to predict issues or failures with the product before they occur. We would like to correlate certain sensor readings with resulting issues. When we see those sensor readings in products, we can alert the user that this is an indication that a future problem has a high probability of occurring. We want to move from periodic maintenance to condition-based maintenance. While we would prefer to have causation rather than correlation, we will happily accept correlations if they prevent product failures, even at the expense of replacing some parts too early.

Digital Twin Aggregates (DTAs) are the aggregation or composite of all the DTIs. DTAs are both longitudinal and latitudinal representations of behavior. Their longitudinal value is to correlate previous state changes with subsequent behavioral outcomes. This enables, for example, prediction of component failure when certain sensor data occurs. Latitudinal value can occur via a learning process, when a later group of DTIs learn from the experiences of previous products. That learning can be conveyed to the rest of the DTIs from then on. Figure 5 shows an example of DTI and DTA use in interrogation, prediction, and learning.

Fig. 5

An illustration of the relationship between Digital Twin Aggregates (DTAs) and Digital Twin Instances (DTIs) and how they are used in interrogation, prediction, and learning. Interrogative examples include: what is the pressure at inlet A? how many hours has the engine been firing? and what is the speed of the turbine blades? Predictive examples cover inlet, bearing, and catastrophic failure. Learning examples include: what blade RPM minimizes fuel usage? and what should the thrust setting be to maximize performance?
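
A highly simplified sketch of the longitudinal idea, aggregating DTI sensor histories and flagging a precursor reading that has historically preceded failures (the field names and thresholds are invented for illustration):

    # Each DTI contributes its history; the DTA aggregates across the population.
    dti_histories = {
        "SN-001": [{"bearing_temp_c": 82, "failed_within_30d": False},
                   {"bearing_temp_c": 118, "failed_within_30d": True}],
        "SN-002": [{"bearing_temp_c": 90, "failed_within_30d": False},
                   {"bearing_temp_c": 121, "failed_within_30d": True}],
        "SN-003": [{"bearing_temp_c": 85, "failed_within_30d": False}],
    }

    def failure_rate_above(threshold_c: float) -> float:
        """Fraction of readings above the threshold that preceded a failure."""
        hot = [r for history in dti_histories.values() for r in history
               if r["bearing_temp_c"] > threshold_c]
        if not hot:
            return 0.0
        return sum(r["failed_within_30d"] for r in hot) / len(hot)

    # If hot readings correlate strongly with failures, alert DTIs that report them.
    if failure_rate_above(115) > 0.8:
        print("Alert: readings above 115 C correlate with failure within 30 days")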

6 Digital Twin Types Throughout the Lifecycle

Since the Digital Twin model applies to the entire lifecycle of create, build, and operate/sustain, we need to understand which Digital Twin types apply to the various lifecycle phases. We can see visually in Fig. 6 how this occurs.

Fig. 6

The application of the different Digital Twin types through the Product Lifecycle: the DTP in the create phase, the DTP and DTI in the build phase, and the DTI and DTA in the operate/sustain phase

The top line of the figure is the create phase. The figure shows the standard view of the Digital Twin model with both the physical and virtual product. However, in this phase, we create the virtual product first. In keeping with moving our work into the virtual world, we ideally would like to design the product, test the product, manufacture the product, and support the product all virtually. Only when we have all the issues worked out, do we want to create a physical product. Obviously, this is the ideal. We currently may do some physical prototypes at this phase, but we are seeing many organizations dramatically reduce the need for those physical prototypes.

However, we need to make a physical product and put it into production. It’s not enough to have the designs of the product that we have perfected. We need to physically produce it. This means that we need to create a Bill of Process (BoP). This is reflected in the representation of the far-right side.

There is a misconception that manufacturing is simply a function of engineering. The reality is that we need the desired product plans to also be the result of a manufacturing function. It is at this stage that we have both the design plans and the manufacturing plan that results in those design plans being realized. We are using the DTP here.

The next level is the build phase. It is in this phase that we move into production. As shown by the digital threads, we are using the BoP from our Digital Twin of the designated production equipment to provide that information to the DTIs of the actual machines. Those machine DTIs will then provide the required information to the physical machines on the factory floor. Those physical machines then produce the physical products on the left.

As we are producing those physical products, we want to create the As-builts of their Digital Twin counterparts. These are the DTIs. We need to capture the necessary data of the actual measurements of what we have produced, key process measurements of how we have produced them, serial numbers of required parts, and quality control data to assure ourselves that these products have been produced to our specifications.

In this build phase, we use the DTP information and create the DTIs. Again, much of the information that the DTIs need will come from the DTP. We will not need to duplicate that information. However, the DTP contains specifications with tolerances. The DTIs will have the exact measurements of how these instances were built.

The bottom level is the operate/sustain phase. In this phase, we want each DTI to reflect any changes to its corresponding physical product. We want to capture behavior and performance metrics. We also want the DTI to maintain longitudinal data. Unlike in the physical world, where once the moment passes we have little access to history, in the DTI we can capture that data and therefore have full access to the product’s history of behavior.

It is in this phase that we create the Digital Twin Aggregate from the DTIs. We can start to correlate precursor data with ensuing results. This will allow us to predict future behaviors, such as product failures, as we collect more and more data from the DTIs. We can engage in machine learning to determine how products decline from their optimum performance at the beginning of their operation and then pass that on to later versions of the product.

Finally, by using the DTA, as shown by the digital thread going from the operate/sustain phase back to the create phase, we can close the loop between product design and actual product behavior. Too often, future generations of products exhibit the same flaws. This is a result of the product designers not knowing that their assumptions about product performance are incorrect. Using the DTA can correct this issue.

7 Digital Twin Underlying Economics

Intelligence can be defined as goal seeking while minimizing resources. We perform what I will call “tasks” to accomplish goals. As shown in Fig. 7, if we take any physical task, we can divide that task into two parts. In the left bar, the lower part is the most efficient use of physical resources to complete the task. The upper part is any use of physical resources above that, which, by definition, is wasted physical resources. To assess this properly, there are constraints on this task. The first is that the task’s goal can be successfully completed.

Fig. 7

The use of Digital Twins to achieve efficient use of resources in performing “tasks” to accomplish goals. The information versus time, energy, and material trade-off is shown as two vertical bars: the left bar, C(t, e, m), has the efficient use of resources at the bottom and inefficiency (waste) at the top; the right bar, C(t, e, I), has the efficient use of resources at the bottom and information at the top. The condition shown is \( \sum_{0}^{N} C_{I} < \sum_{0}^{N} C_{W}(t, e, m) \)

The second constraint is that the most efficient use of resources is set prior to the task. As an example, consider building a runway. One way would be to have thousands of people prepare the runway with hand tools, as the Chinese people did in World War II. The second way would be to have earthmoving equipment deployed to build the runway. The cost, time, and deficiencies of the two methods will vary, and may vary greatly. However, in this paradigm, the most efficient use of resources is relative to the physical resources that are available for the task.

Our method of evaluation is chosen to be cost. In a capitalistic society we can cost physical resources. We can cost the time of human labor. We can cost the time of capital equipment that is involved. We can cost the energy that we use for the task.

Finally, we can cost the material that we will use in completing the task. We also have overhead costs. These are costs that are incurred simply by virtue of performing the task. These overhead costs start when the task starts and end when the task ends.

The right bar shows the impact of information. The bars are similar in that the most efficient use of physical resources is the same in both the right and the left bar. However, the right bar shows the ideal situation, where information is a replacement for all wasted resources. In our imperfect physical world, this will never happen. However, the key point here is that information is a replacement not for the physical resources we require to perform the task in the most efficient manner, but a substitute for the wasted resources over and above that.

This is not simply about efficiency. It is also about effectiveness. If we have information that our approach to the task will not result in the goal being reached, we would not be expending the resources on the futile effort.

However, information is not costless. The issue we have with information is that there is no unit of measure that we can use for the cost of information for our task. We do expend physical resources to produce information. In our current environment, this is the hardware and software necessary, the human resources required to engage with the hardware and software, and the energy required to power our equipment. For a task, we can use these costs as a proxy for the cost of information.

This brings us to the condition that allows us to state that information is a substitute or replacement for wasted resources. This is indicated by the formula in the figure. That condition is that the cost of the information is less than the cost of the wasted physical resources in performing the task over all the times the task is performed.

In most situations, it would make very little sense for us to create information systems for simple tasks. It would make sense to use trial and error to perform such tasks. The wasted resources will be substantially less than the cost of information.

For complex tasks that are repeated, the cost of information has been shown to be less than the cost of the wasted physical resources. While the impact of information technology has been debated, it is apparent even to the casual observer that the exponential increase in computing capability over the last 50–60 years has had a substantial effect on both efficiency and effectiveness.
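
A toy numeric version of the condition in Fig. 7, with entirely hypothetical costs, shows how repetition drives the trade-off:

    # Condition from Fig. 7: total information cost < total cost of wasted resources.
    fixed_information_cost = 2_000        # hypothetical: hardware, software, people, energy
    marginal_information_cost = 50        # hypothetical information cost per repetition
    wasted_resources_per_task = 150       # hypothetical waste of time, energy, and material

    def information_pays_off(times_performed: int) -> bool:
        total_information_cost = fixed_information_cost + marginal_information_cost * times_performed
        total_wasted_resources = wasted_resources_per_task * times_performed
        return total_information_cost < total_wasted_resources

    print(information_pays_off(5))     # False: for a rarely repeated task, trial and error is cheaper
    print(information_pays_off(100))   # True: a complex, repeated task justifies the information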

This is the economics that is driving Digital Twins. In fact, it is economics that is behind digital transformation in general and is driving the movement of work from the physical world to the virtual world.

8 Digital Twin Fallacy

There is a widespread fallacy that the Digital Twin does not exist until and unless there is a physical product. In fact, the majority of academic papers in a survey took this position [26]. Some authors go so far as to equate Digital Twins with human twins. This is overspecifying the “twin” metaphor.

Metaphors are extremely powerful in invoking complex mental constructs and even entire mental spaces in humans. Metaphors are not simply comparisons, but generative devices that allow rich understandings, new perspectives, and generative ideas that open up areas of opportunities that had not previously been thought of [8].

The twin metaphor has only two key attributes: duality and strong similarity. With respect to this fallacy discussion, there is no metaphorical requirement for timeline simultaneity or for the precedence of one type of twin before another type of twin.

That is, there is no requirement that a twin only exists if its counterpart exists simultaneously. Nor is there a requirement that one type of twin, the Physical Twin, must exist before the other type of twin, the Digital Twin, can exist. The only requirement is that a twin’s counterpart exist at some point in the twin’s lifecycle. This means the Digital Twin can exist prior to the creation of a physical counterpart and can also exist after the physical counterpart ceases to exist or is retired.

The requirement that there must be an actual physical thing before there can be a Digital Twin is simply a wrong perspective. The key differentiator of whether a digital model and its associated information is a Digital Twin is that it is intended that this model become a physical product and that its physical counterpart is realized.

It is that intention and the work that goes into the realization of that intention that differentiates a digital model from a Digital Twin. If the physical counterpart is never realized, then the digital model was never a Digital Twin. A digital model of a flying carpet will never become a Digital Twin because there is no intention, let alone ability, to make it a physical product.

From its inception, the Digital Twin has always been intended to exist in all four phases of the product lifecycle: create, build, operate/sustain, and dispose [12]. It is embodied in the saying that “no one goes into a factory, pounds on some metal, and hopes an airplane will come out.” A tremendous value of the Digital Twin is that it does exist before there is a physical product.

Dispelling these misperceptions of the “twin” metaphor, I see five major reasons why the Digital Twin does not require a physical product before the Digital Twin exists. These reasons are:

  • The DT framework should cover the entire product lifecycle

  • The DT is especially valuable during the create phase

  • The DT does exist prior to the physical product – it just has a different name

  • The DT regresses to being functionally siloed if there is no DT prior to the physical product

  • The DT existing only after there is a physical product is conceptually inelegant and piecemeal

The practical reality, even for those claiming that the Digital Twin does not exist prior to a physical product, is that there actually is a Digital Twin within their organization before there is a physical product. This product information continues to exist throughout the entire product lifecycle. In all those situations, it simply has a different name. It may be called the digital model, the digital design, the digital systems model, or some such variation. However, it has most if not all of the characteristics of the Digital Twin Prototype (DTP).

While the Digital Twin has value across the product lifecycle, the DTP is especially valuable in the create phase. It is in this phase of the lifecycle that work can be moved from the physical world into the virtual world. If virtual products can be modelled and tested in a virtual environment, replacing physical prototypes and testing, the potential for a reduction of wasted physical resources is substantial. As noted above, these wasted resources include not only material, energy, and labor time, but also elapsed development time.

Even though the create phase may be short in comparison to the entire lifecycle of a product that may span decades, decisions in this phase have a major impact in determining future product costs. Estimates of product cost determination during the create phase are as high as 80% [18]. There is an increasing ability to perform virtual testing at a fraction of the cost and in less time to replace physical testing. This has the potential to reduce costs, improve quality, and reduce time to market.

A major problem with renaming the DTP as something different and not having that something in the Digital Twin framework is that it encourages and maintains functional siloing. If this differently named thing exists in engineering, prior to manufacturing having a Digital Twin of the instance of a physical product, then this information will tend not to be shared between engineering and manufacturing.

The powerful aspect of the Digital Twin is that it is product centric throughout the entire product lifecycle. Information is populated and consumed irrespective of the functional area. For the Digital Twin to exist only after moving to manufacturing diminishes greatly its effect. A substantial amount of the information for a specific Digital Twin Instance (DTI) is contained in the Digital Twin Prototype.

Finally, it is inelegant and piecemeal to not have the Digital Twin encompass the entire product lifecycle. The intent of the Digital Twin is to have a framework that persists throughout the entire lifecycle. That has been the intent since the origination of the Digital Twin concept. Requiring the Digital Twin to only exist once there is a physical product is inconsistent with that approach. Having different types of Digital Twins, DTP, DTI, and DTA, allows us to have a consistent framework, yet differentiate how the Digital Twin manifests itself at different phases of product lifecycle.

9 Digital Twin Evolution

The Digital Twin is evolving at a fast rate. Figure 8 shows this evolution. The move from physical to virtual maturity is along the x-axis and shows the progression over time. The evolution of information in both scale and scope is shown on the y-axis. Obviously as we move more and more work into the virtual space, the amount and complexity of information increases.

Fig. 8

Different phases in the evolution of the Digital Twin, from Traditional to Intelligent DTs: Traditional, Transitional, Conceptual, Replicative, and Front Running Simulation (FRS) phases, plotted as an increasing trend of information evolution (y-axis) against physical-to-virtual maturity (x-axis)

9.1 Traditional – Phase 0

This is labelled Phase 0 because it is the phase that humans have primarily been in from the beginning of time until recently. At the far-left side is what is called the traditional representation. As soon as an idea for a product began to take shape, it immediately took a physical form. It had to be translated into atoms almost immediately to be shared with other people. Initially this was in the form of sketches and physical models, scale or otherwise. In the mid-1800s it started to take the form of blueprints that had measurement details about the product. This continued throughout the twentieth century.

We started processing information in computers in the late 60s and early 70s. In the 80s, the ability to put geometric information in computers began.Footnote 6 However, this information was effectively just an electronic version of blueprints. In fact, CAD, which stands for Computer-Aided Design, was mostly a means to capture this 2D geometric information in a computer and to be able to print out multiple versions of it. Up until this point in time, duplicating blueprints was done by hand.

9.2 Transitional – Phase 1

The Transitional Phase, Phase 1, is the beginning of the Digital Twin era. The 2000s marked a seismic shift in moving this information into virtual space. The development of 3D models in a computer was a quantum leap in terms of having geometric information fully contained within a computer. Not only did 3D models give a visual representation of the product from any angle, but they allowed for the integration of multiple parts. We started to get comfortable with being able to manipulate virtual objects within the computer space without having to have them take physical shape in the real world first. This reduced the need for physical prototypes, as these physical prototypes began to be replaced by Digital Mockup Units (DMUs).

The ability to simulate the behavior of these geometric models was the next big step. Not only could we do form and fit for our geometric models, but we could also simulate and analyze their behavior. This was not something new. What was new was that the computing resources capable of doing this moved from requiring a supercomputer to an ordinary computer. This was courtesy of the advances predicted by Moore’s Law.

9.3 Conceptual – Phase 2

The Conceptual Phase is when we take a concept or model and start to ask, “what if”. We begin to create processes and technologies to experiment, test, and even begin to implement the “what ifs”. The purpose is to determine if the concept creates value on an ad hoc basis. For powerful concepts, such as the Digital Twin and the underlying premise of moving work from physical space into virtual space, this can be a period of explosive growth in scale and scope. There is ample evidence that this is occurring.

In this phase, the Digital Twin is an entity that we conceptually create from disparate and even fragmented data sources. We use different existing systems to pull data from. We start building correlations and even causations of data source inputs to results. We build different simulation views and determine how well they map to reality. We start to put manual processes in place to pull the data from different sources, even if on an ad hoc basis, to create a Digital Twin view.

We attempt to determine if our concept and ensuing models can do two things: replicate past and current reality and predict future states. At this stage, the aphorism that all models are wrong, but some of them are useful is accurate. We want to determine the useful aspects and refine the models so that they both replicate reality and allow us to predict outcomes, even if probabilistically.

At this phase, while there may not be a discrete, tangible Digital Twin, there is enough substance that we can generate a shared view among users that the Digital Twin exists.

9.4 Replicative – Phase 3

Phase 3 is what I call the Replicative Phase. It is in this phase that we have Digital Twins that are not in do-it-yourself form but exist as entities. This will require technological platforms that will do what we did ourselves in the conceptual phase and pull together the necessary information to present to us actual Digital Twins.

While in some cases the platform system may have its own repositories for information, this is not a requirement. Because it is information and not physical artifacts, the information can exist logically in different places and applications. What the platform will do, on some sort of basis depending on the immediacy of the data, is pull together the requisite information and present it digitally.

The platform will need access to this information on a secured basis. The platform will need mechanisms to obtain this information, either in the form of data transfers or APIs. In some cases, the platform may simply act as a pass-through for existing systems that contain that information. For example, the PLM system may contain all the geometric information needed to project a complete visual Digital Twin.
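
A minimal sketch of this pass-through idea, in which a platform assembles a Digital Twin view from source systems it does not own; the source names and record shapes are invented for illustration:

    from typing import Callable, Dict

    # Each source system exposes a fetcher; the platform owns none of the data.
    SourceFetcher = Callable[[str], Dict]

    sources: Dict[str, SourceFetcher] = {
        "plm_geometry":  lambda serial: {"cad_model": f"{serial}.step"},
        "mes_as_built":  lambda serial: {"measurements": {"shaft_diameter_mm": 25.03}},
        "iot_telemetry": lambda serial: {"bearing_temp_c": 92},
    }

    def assemble_digital_twin_view(serial: str) -> Dict:
        """Pull the requisite information together on demand and present it as one view."""
        view = {"serial_number": serial}
        for name, fetch in sources.items():
            view[name] = fetch(serial)   # in practice: a secured API call or data transfer
        return view

    print(assemble_digital_twin_view("SN-000123"))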

We would expect the platforms to support all three Digital Twin types: DTP, DTI, and DTA. In the DTP or create phase, the platform would support the development of the product so that at any point in time the Digital Twin of the to-be product is taking shape. The platform could also include behavior simulations that have been certified as validated replicas of the physical world. This would mean that testing could take place in the virtual world and to a great extent replace physical testing.

The platform would need to capture or have access to the actual as-built products and their DTIs. The platform would then be able, at any point in time in the future, to show a replica of what the physical product of that DTI was doing. The driving economic value that will make these platforms economically viable and attract investment is being able to collect the data of the product as it is in use and replace wasted physical resources.

The ability to create the Digital Twin Aggregate and, through either correlation or causation, predict future performance will be a significant opportunity for revenue production. In this phase, the Digital Twin platform would support interrogation of the product status at any point in time and would allow for degrees of prediction. The opportunity is that the more these platforms scale, the more data is turned into information, and the more real value is produced.

9.5 Front Running – Phase 4

Phase 4 is the Front Running Phase, utilizing the Intelligent Digital Twin (IDT). It is characterized by what I refer to as Front Running Simulations (FRS) that are constantly occurring with a product’s Digital Twin. The Digital Twin is “intelligent” because AI is employed to constantly assess the data and make predictions. This phase is marked by moving from a platform that is reactive to inquiries from its users to a platform that proactively presents information to its users on a constant basis.

This means that this platform is online all the time. In the create phase, the IDT is in cued availability mode. This means the IDT is a constant agent looking at what the user is doing. The Digital Twin is constantly looking at the vast amount of data it has access to from the different sources it is connected to and, with its cues from the user, providing information that it perceives the user needs to know about.

For example, if the user is designing a new part, the IDT will look at the requirements for both geometry and behavior and then propose parts that have the same key geometry characteristics and key behaviors. It will also be constantly running simulations for both fit and behavior of those parts as the part is developed, to prevent the user from wasting time on things that will not meet the requirements. This means we can move from periodic reviews to continuous reviews. It will also do this at higher levels of the full system so that, in essence, there is a constant review of the complete system.

For DTIs, the proposal is that the IDT will constantly be running a simulation (FRS) ahead of the performance of the product. At every new periodic t0, the IDT will run a simulation of the future, predicting potential system states, especially ones that are predicted to cause problems. For example, the IDT will constantly be projecting into the future and warning the user of impending failures or malfunctions.

In this phase, the IDT acts as a crystal ball, projecting outcomes with probabilities, utilizing not only the DTI itself but the DTA of all the products that it has information for. With enough data from its ever-growing population of products, the Digital Twin’s front running capability will be able to put probabilities on its predictions. For example, the FRS will predict that a specific part with the current sensor readings will fail in the next month with a 60% probability but fail within 2 months with a 95% probability.
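
A schematic sketch of such a front-running loop, in which the IDT repeatedly simulates forward from the current state and reports failure probabilities; the degradation model and all numbers are placeholders:

    import random

    def probability_of_failure(current_wear: float, days_ahead: int, runs: int = 1000) -> float:
        """Monte Carlo front-run: fraction of simulated futures that cross the failure limit."""
        failures = 0
        for _ in range(runs):
            wear = current_wear
            for _ in range(days_ahead):
                wear += random.gauss(0.01, 0.005)   # placeholder daily degradation model
            if wear >= 1.0:                         # 1.0 is an assumed failure threshold
                failures += 1
        return failures / runs

    # At every new t0, the IDT front-runs the simulation ahead of the physical product.
    current_wear = 0.71   # hypothetical state estimated from the DTI's latest sensor data
    for horizon_days in (30, 60):
        p = probability_of_failure(current_wear, horizon_days)
        print(f"Probability of failure within {horizon_days} days: {p:.0%}")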

Clearly this will be the compute-intensive phase. However, projections of computing capability for the next decade or two predict that there will be a tremendous amount of computing capacity and associated information technology capabilities, such as storage and communication bandwidth, available. Current rough predictions of computing capability are that from a current capability of 80 billion transistors, by 2030 that will rise to 6 trillion and by 2040 to 885 trillion.

By preventing product failures, warning of errors in human judgment, and preventing avoidable failures, the IDT will be in a position to substitute information for the waste of physical resources. Especially in preventing catastrophic failures and the resulting loss of life, the IDT takes the Digital Twin to its logical conclusion.

10 Digital Twin Progress Through Testing

With apologies to Alan Turing [25], the critical requirement is to have the computer simulate everything in the universe, except human intelligence. If we can fully simulate the inanimate universe but only obtain assistance from AI, we will obtain tremendous value for products throughout the product lifecycle.

Modeling and simulation (M&S) are about representing physical products and their behaviors in a digital or virtual environment. In this context, a model is a static representation of the physical product. The current technology allows for this model to be a three-dimensional replication that has complete fidelity in terms of dimensioning. The behavior is modeled in mathematical form, describing the forces acting on the physical product and the forces the physical product generates and acts upon the environment.

Simulation is dynamic. Simulation adds the component of time and describes how the product changes as forces act on it and how the forces it generates act on the environment. Simulation shows the changes in geometry as the mathematical behavior model of forces transforms material. A simulation of a vehicle crash test shows, at a user-defined time scale, the deformation of all components of the automobile as it crashes into a barrier.
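
As a minimal illustration of the difference, a static model holds the product’s properties, while a simulation steps that model through time under applied forces; this is a one-dimensional toy, not a crash code:

    # Toy 1-D simulation: a static model (mass, state) stepped through time under a force.
    mass_kg = 1500.0           # static model property of the vehicle
    position_m = 0.0
    velocity_m_s = 20.0        # approaching the barrier at 20 m/s
    braking_force_n = -9000.0  # constant decelerating force acting on the product

    dt = 0.01                  # simulation time step in seconds
    t = 0.0
    while velocity_m_s > 0.0:
        acceleration = braking_force_n / mass_kg      # F = m * a, rearranged
        velocity_m_s = max(0.0, velocity_m_s + acceleration * dt)
        position_m += velocity_m_s * dt
        t += dt

    print(f"Stopped after {t:.2f} s over {position_m:.1f} m")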

Simulation relies on two things: an increasing knowledge of the physics that determine the physical environment and computing power to calculate the physics at the required scale and fidelity. This has meant that there have been limitations of the products that could be modelled and simulated. A few decades ago, only simple product systems could be simulated. With the exponential increase predicted by Moore’s Law, today even complex product systems can be simulated.

The question is: how do we decide how well we are doing with M&S? Over a decade and a half ago, I proposed some Tests of Virtuality to answer that question. The Tests of Virtuality were modeled on the Turing Test.

The original formulation of the Grieves Test of Virtuality had three distinct tests: a visual test, a performance test, and a reflection test. The format of the tests was similar. An observer was exposed to the physical and the virtual versions. If he or she could not tell the difference between the two versions, then the test was passed.

In the Visual Test, the observer looked at the video screens of a product placed in a physical room and the Digital Twin version. The observer could ask for any spatial manipulation to be done. The observer could ask to see the product from any angle.

The observer could ask that the product be disassembled and look at any individual component. In the example of a car, the observer could ask that the doors be opened to look at the interior or that the hood be opened to look in the engine compartment. If the observer cannot tell the difference between the physical version and the Digital Twin version, then the Grieves Visual Test is said to be passed.

The behavioral test was a little more difficult. The observer had the same two views of the product, the physical and the Digital Twin one. The observer could then ask that forces be generated and/or applied to both and observe the results. For an airplane, this could mean that its jet engine would be turned on, and the plane sent down the runway to take off. This would be an internal force.

The observer could also ask that once the plane was flying that it be put into a steep dive and see the forces that acted upon it. That would be an external force test. If the observer could not tell the difference between the physical and the virtual performance, then it passed the Grieves Test of Performance.

The third test is a test of reflectivity. Reflectivity was defined as the property that any change to the physical product would be reflected in its Digital Twin. Again, we have the observer and the Physical Twin and Digital Twin versions. In this case, the test is that the observer can see no differences between the two versions. Using the example of an oil rig, if the observer were to compare every valve setting, every gauge, every pump serial number, there would be no difference between the two versions. If that were the case, then the Grieves Test of Reflectivity would be passed.

These tests were meant to be ideal tests. The tests were always meant to be tied to use cases that would provide value to the user. Only those things that provided value were intended to be tested. If there was no value in disassembling a product and determining if the physical and Digital Twin versions were identical, then that would not be part of the test. If a component of the oil rig above was serialized, but there was no interest in tracking serial numbers, then the Physical Twin and Digital Twin versions not being identical would be irrelevant.

So where are we a decade and a half later? The short answer is that these tests of virtuality are easily passed daily. A decade and a half ago, we were close to passing the visual test. Today there is no question that the visualization of physical products and their Digital Twins has the fidelity and granularity that we need for most use cases.

The behavior and reflectivity tests were proposed before IoT became so prevalent. Back then, the issue for these tests was going to be getting the appropriate sensor and instrumentation data to maintain the Digital Twin version. Today, with our smart products, we routinely get the sensor information that we need to pass both of those tests. Again, we need to remember that this is driven by use cases. The value to the user needs to be there to justify expending the resources necessary to maintain the Digital Twin for all these tests.

At this time, I would like to propose a new Grieves Test of Virtuality: the Grieves Test of Prediction. This is a slightly different and much harder test. In this test version, the observer asks that the Digital Twin version be moved a certain amount of time into the future. The observer then waits that amount of time. When that time has elapsed, the observer compares the two versions. If the states of the Physical Twin and Digital Twin are effectively identical, then the Grieves Test of Prediction is passed.
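
A schematic of how such a prediction test could be scored, comparing the Digital Twin’s forecast state with the Physical Twin’s observed state after the elapsed time; the state variables and tolerances are illustrative only:

    def prediction_test_passed(predicted_state: dict, observed_state: dict,
                               tolerances: dict) -> bool:
        """Pass if every predicted variable matches the observed one within tolerance."""
        return all(abs(predicted_state[k] - observed_state[k]) <= tolerances[k]
                   for k in tolerances)

    # Digital Twin advanced 30 days into the future at t0; compare at t0 + 30 days.
    predicted = {"bearing_temp_c": 95.0, "vibration_mm_s": 4.2, "oil_pressure_kpa": 310.0}
    observed  = {"bearing_temp_c": 93.5, "vibration_mm_s": 4.4, "oil_pressure_kpa": 305.0}
    tolerances = {"bearing_temp_c": 2.0, "vibration_mm_s": 0.5, "oil_pressure_kpa": 10.0}

    print(prediction_test_passed(predicted, observed, tolerances))   # True: states effectively identical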

11 Conclusion

In approximately two decades, the Digital Twin concept and model has gone from simply being the “Underlying Premise of PLM” to being considered for use cases in all aspects of human endeavors. This is from the tangible complex products that the Digital Twin was initially created for to intangible processes, such as supply chains, logistic systems, and monetary systems. Digital Twins are being proposed for such complex systems as cities, the earth itself, and humans, especially in healthcare.

This is being driven by the exponential increase in information capabilities predicted by Moore’s Law. This increase in capability is allowing us to move work from the physical world into the virtual world. We want to do this because information is a substitute for wasted physical resources.

The Digital Twin needs to cover the entire lifecycle of its physical targets. It is a fallacy that a Digital Twin only exists once there is a physical artifact. There is tremendous value in using the Digital Twin before a physical product exists. Our ideal is to create the product virtually, test the product virtually, manufacture the product virtually, and support the product virtually. Only when we get it all right, do we move physical atoms to make a physical product.

To cover the entire product lifecycle, the Digital Twin has three types: the Digital Twin Prototype, the Digital Twin Instance, and the Digital Twin Aggregate. I expect the Digital Twin to rapidly evolve from the conceptual stage that it is currently in, to the Replicative Platform stage, and then into an Intelligent Digital Twin with Front Running Simulations. These new phases will be compute intensive, but if Moore’s Law continues to hold true, we will have the computing capability to enable these future Digital Twins.