1 From the Digital Twin to the Digital Thread

The Industry 4.0 movement towards smart advanced manufacturing presupposes a thorough revolution in the perception and practice of the “Art of Manufacturing”, one that shifts the innovation focus increasingly from the traditional crafts and engineering disciplines, like mechanical engineering, materials science and plant construction, to immaterial assets, like knowledge, flexibility and ultimately models for their exploration ‘in silico’. The role of models is well understood where it concerns blueprints of facilities, CAD models of machines and parts, P&IDs of various tooling processes in production [68], and various behavioural models like stress curves of materials, yields of various processes, and isolated simulations. The new trend in modelling concerns the Digital Twin of processes, machines and parts. That the essence of a Digital Twin has not yet stabilised is witnessed by the fact that everyone seems to have a different definition. For example, as of today Wikipedia [67] provides its own description (note: not a definition): “A digital twin is a digital replica of a living or non-living physical entity.[1] By bridging the physical and the virtual world, data is transmitted seamlessly allowing the virtual entity to exist simultaneously with the physical entity. Digital twin refers to a digital replica of potential and actual physical assets (physical twin), processes, people, places, systems and devices that can be used for various purposes.” and “Definitions of digital twin technology used in prior research emphasize two important characteristics. Firstly, each definition emphasizes the connection between the physical model and the corresponding virtual model or virtual counterpart. [8] Secondly, this connection is established by generating real time data using sensors. [2]”. Wikipedia then lists 10 different definitions, from 2012 to 2019, that include references to concepts as diverse as multilevel, multiphysics, multiscale, real time, cloud platform, lifecycle, health condition, and many more. A recent, quite realistic and encompassing definition is by Ashtari et al. [65]: “The Digital Twin is a virtual representation of a physical asset in a Cyber-Physical Production System (CPPS), capable of mirroring its static and dynamic characteristics. It contains and maps various models of a physical asset, of which some are executable, called simulation models. But not all models are executable, therefore the Digital Twin is more than just a simulation of a physical asset. Within this context, an asset can be an entity that already exists in the real world or can be a representation of a future entity that will be constructed.”

In this context - an adaptation of Cyberphysical Systems to the production environment - most engineers accept the need for continuous models or discretised numerical simulation models, as in finite element analysis. They are however not familiar with models for software, nor are they aware that software - and thus software correctness and software quality - is essential to the information propagation, aggregation, and analysis that combines collections of such models to deliver, or at least enable, the desired insight. In other words, the Digital Thread that connects real things and their twin models, but also the communication networks, the decision algorithms, and the visualisations needed for design, construction, and operation within a mature Industry 4.0 environment, is still not in the sphere of awareness of those responsible. Thus software in general, and software models in particular, are hopelessly underestimated in their relevance, complexity, challenges and cost.

The modern version of the Digital Thread can be seen as the information-relay framework that enables a holistic view and traceability of an asset along its entire lifecycle. This framework includes any data, behaviours, models, protocols, security mechanisms, and their standards related to the asset, as well as to the context where it is expected to operate. As such, it goes well beyond the customary understanding of the thread as mostly a collection of data, with the Digital Twin cast as a new way of managing variants in a Product Line Engineering context and the Digital Thread as the new counterpart of established Product Lifecycle Management [13].

For a Principal Investigator in Confirm responsible for the co-direction of the Cyberphysical Systems research hub, involved in the Virtual and Physical Testbeds hub, and working with Digital Twins to deliver the Confirm platform, the current unawareness in industry and academia alike of the high demands and enormous potential of the Digital Thread is a threat and an opportunity at the same time. We found that the first step of communication needed to reach professionals who do not know much about software, less about models, and even less about (discrete) behavioural models for software and systems, is to build a micro-size demonstrator in exactly their domain.

In the rest of the paper we describe the micro-demo we built this summer, essentially a web-based UR3 robot controller (Sect. 2). We then give some background on the model-driven Integrated Development Environment DIME [5] that we used to build it, and on the Robotics DSL (domain-specific language) we built to integrate robotics capabilities in DIME (Sect. 3). Finally, we offer some considerations on the prospect of using this approach and framework as the technical platform for building the Confirm Digital Thread and Digital Twin shared assets (Sect. 4).

Fig. 1. The simulator provided by UR.

2 Bringing the UR3 Controller to the Web

The mini-project we identified as the first nugget of model-driven design and development of a demonstrator in the robotics field for Confirm literally unleashes the normally tethered UR3 robot control. Universal Robots, an originally Danish company acquired in 2015 by Teradyne, is a leader in collaborative robots (cobots), i.e. robots that can work sharing the same space with humans instead of having to be isolated in shielded work cells. UR was the first company to deliver commercially viable collaborative robots. This ability to act next to or with humans enables advanced robotics to help with mixed product assembly, and is transforming companies and entire industries.

The UR3 is their smallest cobot model: a compact table-top robot for light assembly tasks and automated workbench scenarios. It weighs only 11 kg, but has a payload of 3 kg, 360-degree rotation on all wrist joints, and infinite rotation on the end joint. Confirm has bought one for demonstration and outreach purposes, and a few small applications have been built with it using the programming techniques offered by the UR3.

Fig. 2. UR3 web controller: the main page.

2.1 Programming the UR3

Polyscope GUI

A tablet with a graphical user interface for steering and commanding the robot is tethered to it. This Polyscope GUI is programmed using the touch screen of the tablet with URP files. Figure 1 shows the Polyscope GUI in the UR3 simulator environment: one can control the robot movements using the arrows on the left, or program waypoints by entering the coordinates on the right. It is also possible to upload scripts in the UR script language to the tablet.

UR Script Language

We instead used the UR script language. UR Script has variables, types, flow-of-control statements, functions, etc. In addition, it has several built-in variables and functions which control the I/O and the movements of the robot. UR script commands can be sent from a host computer or PC via an Ethernet TCP socket connection directly to the UR robot, for motion and action control without using the tablet pendant. We use the UR simulator to execute .urp programs. At the bottom of Fig. 1 we see that the simulator screen provides buttons to go to a Home position, to perform Freedrive using the arrows, and to move to a Zero Position.
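To make the socket mechanics concrete, the following Java sketch sends a single URScript motion command to a robot (or to the simulator) over TCP. It assumes UR's secondary client interface on port 30002, which accepts newline-terminated URScript source; the host address and joint targets are illustrative values, not taken from the demo.

import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

/**
 * Minimal sketch: send one URScript command to a UR robot or simulator.
 * The IP address and joint angles below are illustrative assumptions.
 */
public class URScriptSender {

    // UR's secondary client interface accepts URScript source lines.
    private static final int SECONDARY_PORT = 30002;

    public static void main(String[] args) throws Exception {
        String robotHost = "192.168.0.10"; // hypothetical robot/simulator IP

        try (Socket socket = new Socket(robotHost, SECONDARY_PORT);
             Writer out = new OutputStreamWriter(
                     socket.getOutputStream(), StandardCharsets.US_ASCII)) {
            // movej drives all six joints to the given targets (in radians).
            out.write("movej([0.0, -1.57, 0.0, -1.57, 0.0, 0.0], a=1.4, v=1.05)\n");
            out.flush();
        }
    }
}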

The RoboDK Simulator

RoboDK is a simulator and offline programming software for industrial robots. It can generate script and URP files which can be executed by a UR robot. Additionally, it is possible to execute programs on the robot from RoboDK if the robot is connected to the computer. RoboDK can also import script files into the simulator. This makes it possible to simulate existing script programs, modify them, and re-export them.

Fig. 3. UR3 web controller: the Move to Coordinates page.

2.2 Case Study: A Web UR3 Controller

We used the script language and the TCP socket to program the UR3, creating a web-based application that controls the robot in a very simple fashion. After the IP address of the robot has been entered on a first screen, the main page of the controller appears (Fig. 2): one can bring the robot to a predefined Initial Position (equivalent to the simulator’s Home button), bring it to a predefined Test position (equivalent to the simulator’s Zero Position), ask it to Move to Coordinates, Pause the robot, Stop it, and perform its Shutdown. All these buttons drive the simulator or, alternatively, the real robot. The UR Manual button links to the online manual provided by UR. The only action that requires further inputs is Move to Coordinates: Fig. 3 shows the corresponding page.

As mentioned before, the Web application is connected via TCP to the robot, and the execution happens on the simulator or on the real UR3 cobot.

But how did we program the Web application?

It was started this summer as a small Proof of Concept demo by two interns from INSA Rouen in France, mathematics students after their 2nd year, not really familiar with programming - in particular not with Web programming, Java, or deployment.

Fig. 4. The main workflow of the control application.

3 DIME as a Model Driven IDE for Robotics

Neither the “program” that steers the UR3 controller application nor the GUI layer for the Web is actually hand-programmed in code. We see in Fig. 4 that models cover the complete web application design thanks to the use of DIME [5], the DyWA Integrated Modelling Environment [45], currently the most advanced and comprehensive Cinco-product [43]. DIME allows the integrated modelling of all the aspects needed for the design of a complete web application in terms of Graphical Domain-Specific Languages (GDSLs). Figure 4 shows that models capture the control flow as in the Service Logic Graphs of METAFrame [53, 57] and jABC [36, 44, 61] before DIME. In DIME we additionally have data models and UI models in the same Integrated Development Environment (IDE). In fact, we see in this model:

  • the control flow, showing that from the Start we proceed to the GetAddress page, marked with the symbol of a UI model, and from there to the PublicHome page, another UI model, from which the control branches (solid arrows) lead to one of the subsequent actions: ReturnInitialPosition, ControlRobot, EnterCoordinates, EnterPauseDuration, Stop, Exit.

  • the data flow, showing that there is a Data Context containing the data collected or used in the application. The data-flow arrows are dotted: they connect the data from the output branches (like the IPaddress and its type, here text, the coordinates, and the pause time) to the data context, which models the store (i.e. the memory), and the data context to the inputs of the subsequent processes. For example, the MoveCoordRobot process receives both the Coordinates and the IPaddress.

  • the subprocesses, like ReturnInitialPosition and ControlRobot, which have their own models.

  • the Web pages, with GUI models, which are the only part the user experiences, as the processes work in the background.

  • We also see that this is a reactive application: there is no End action; all the paths through the web application workflows lead back to the PublicHome main page of Fig. 2 (see the sketch after this list).
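This reactive shape, i.e. that PublicHome remains reachable from every state, is exactly the kind of behavioural property that simple automated checks on such models can establish. The following Java sketch illustrates the simple reachability variant of the check on a hand-coded abstraction of the control flow of Fig. 4; the transition map is our illustration, not DIME's internal model representation.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Illustrative check on a hand-made abstraction of the workflow of Fig. 4:
 * from every state, PublicHome is reachable again (the simple reachability
 * variant of the reactive property described above).
 */
public class ReactivityCheck {

    static final Map<String, List<String>> FLOW = Map.of(
        "Start", List.of("GetAddress"),
        "GetAddress", List.of("PublicHome"),
        "PublicHome", List.of("ReturnInitialPosition", "ControlRobot",
            "EnterCoordinates", "EnterPauseDuration", "Stop", "Exit"),
        "ReturnInitialPosition", List.of("PublicHome"),
        "ControlRobot", List.of("PublicHome"),
        "EnterCoordinates", List.of("PublicHome"),
        "EnterPauseDuration", List.of("PublicHome"),
        "Stop", List.of("PublicHome"),
        "Exit", List.of("PublicHome"));

    /** True if target is reachable from 'state' in one or more steps. */
    static boolean eventuallyReaches(String state, String target) {
        Deque<String> todo = new ArrayDeque<>(FLOW.getOrDefault(state, List.of()));
        Set<String> seen = new HashSet<>();
        while (!todo.isEmpty()) {
            String s = todo.pop();
            if (s.equals(target)) return true;
            if (seen.add(s)) todo.addAll(FLOW.getOrDefault(s, List.of()));
        }
        return false;
    }

    public static void main(String[] args) {
        boolean reactive = FLOW.keySet().stream()
            .allMatch(s -> eventuallyReaches(s, "PublicHome"));
        System.out.println("PublicHome reachable from every state: " + reactive);
    }
}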

All these models are interdependently connected, shaping the ‘one thing’ [39, 62] global model in a manner which is formal, yet easy to understand and to use. The continuous, model-driven deployment cycle is simplified to the extent that the code can be one-click-generated and deployed as a complete, ready-to-run web application, following the methodology and process of [5].

3.1 Behavioural Models and Feature DSL

The power of DIME as a modelling and development tool for code-less [10] and even no-code development is connected with behavioural model-driven design as advocated in [31] and [32]: the original DyWA offered the user a web-based definition facility for the type schema of any application domain of choice. Coupled with the defined types is the automatic generation of corresponding Create, Read, Update, Delete (CRUD) operations, so that application experts are able to model domain-specific business processes which are directly executable in our modelling environment. Upon change, the prototype can be augmented or modified stepwise by acting on one or more types in the type schema, the corresponding data objects, and the executable process models, while maintaining executability at all times. As every step is automated via a corresponding code generator, no manual coding is required.
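As an illustration of what such generated operations amount to, the sketch below shows a plausible shape of the CRUD interface that generation could derive for a type such as the coordinates used in our demo. The record and interface are invented for illustration; the actual generated code depends on DyWA's schema and generators.

import java.util.List;
import java.util.Optional;

// Hypothetical user-defined type, as it might appear in a type schema.
record Coordinates(long id, double x, double y, double z) {}

/**
 * Illustrative shape of the CRUD operations that DyWA-style generation
 * could derive from the type above (names and signatures are invented).
 */
interface CoordinatesCrud {
    Coordinates create(double x, double y, double z);  // Create
    Optional<Coordinates> read(long id);               // Read one
    List<Coordinates> readAll();                       // Read all
    Coordinates update(Coordinates changed);           // Update
    void delete(long id);                              // Delete
}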

Fig. 5. Subprocess MoveCoordRobot: sends the Move command to the robot.

In this case, we see that the main workflow includes various other process models, indicated by the little graph symbol. For example, the MoveCoordRobot subprocess that sends the coordinates to the robot is shown in Fig. 5. Upon start it receives the coordinates collected on the respective web page (see Fig. 3); it then prepares the UR script program that initialises the robot and instructs it to move to those coordinates, sends the commands, and shuts down the robot.
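Written out in plain code, the script-preparation step of this subprocess amounts to something like the following sketch. The concrete commands (set_tcp, movel, powerdown) and the fixed tool orientation are illustrative assumptions on our part; the demo itself assembles and sends the program via the atomic SIBs shown in Fig. 5.

import java.util.Locale;

/**
 * Sketch of the URScript program the MoveCoordRobot subprocess prepares:
 * initialise the robot, move to the user's coordinates, shut down.
 * Commands and the fixed orientation are illustrative assumptions.
 */
public class MoveCoordScript {

    static String build(double x, double y, double z) {
        return String.format(Locale.ROOT,
              "def move_and_shutdown():\n"
            + "  set_tcp(p[0.0, 0.0, 0.0, 0.0, 0.0, 0.0])\n"  // initialise tool frame
            + "  movel(p[%.4f, %.4f, %.4f, 0.0, 3.14, 0.0], a=1.2, v=0.25)\n" // linear move
            + "  powerdown()\n"                                // shut the robot down
            + "end\n",
            x, y, z);
    }

    public static void main(String[] args) {
        // The resulting program would be written to the robot's TCP socket
        // exactly as in the URScriptSender sketch of Sect. 2.1.
        System.out.print(build(0.20, -0.30, 0.25));
    }
}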

Fig. 6. Overview of the created processes: the behavioural Features DSL.

The collection of such processes is a domain-specific language (DSL) at the feature [1, 9, 16, 28] level: these processes each accomplish one user-level intent, which is the definition of a feature. Additionally, these features are behavioural: the processes express operations of the UR3 (i.e., what it does) instead of static affordances (i.e., what its components are). We see in Fig. 6 the collection of subprocesses created for this application. Behavioural features are usually domain-specific but application-independent: we could easily reuse them to create a different demonstration with the UR3. As in DIME, we distinguish

  • Basic processes, like the CreateUser process needed to log in and access the web application. Basic processes are DIME-level processes that deal with the essential elements of a web application, like creating users. This process was in fact reused from a previous application rather than developed specifically for this demo.

  • Interaction processes are executed client-side within the user’s web browser. They define the immediate interaction between user and application and can be regarded as a site map.

  • Interactable processes are slightly restricted basic processes that are provided as REST services and can thus be included in interaction processes.

Two further kinds of processes are not used here:

  • Long-running processes, which describe the entire lifecycle of entities. They integrate interactions with one or multiple users as well as business logic in the form of interactable and basic processes.

  • Security processes, which realise access control based on the specific user currently logged in and on the association, e.g. in the form of access rights, to other entities in the system.

It is also very likely that we could easily reuse the behavioural features in a demo for a different UR robot, like the larger UR5, the even larger UR10, or the brand-new UR16. The members of the UR cobot family are built in a physically similar way: they all have the same arm component types, only scaled dimensionally or made internally more robust, so they have the same functionalities and thus share the same list of behavioural capabilities. What makes a difference is the reach, own weight, and maximum load, which impact the parameters (i.e., the concrete data) but not the collection of capabilities. As the workflows are parameterised in the data, they could very likely be reused as they are (Fig. 7).

Fig. 7. Overview of the created SIBs: the native UR3 DSL.

3.2 The UR3 Native DSL

As we see in Fig. 4, the process models can contain other process models, hierarchically and also recursively, collected in this case in the Feature DSL of Fig. 6. At some point, however, processes also include atomic actions, like InitiateRobot, MoveCoordinatesRobot, ShutdownRobot, and CommandRobot in Fig. 5. Figure 7 shows the collection of UR3-specific atomic actions (called SIBs, for service-independent building blocks) used in this application. Notice the symbol of an atom, indicating their basic character, i.e. that they refer either to native code or to an external API or service. This distinguishes them from the processes. While processes are models (Service Logic Graphs in our terminology), are “implemented” by models, and are symbolised by a little graph icon, atomic SIBs are implemented by code. They are therefore the basic units (thus “atoms”) from a model point of view, and they embody and provide the link to the traditional programming level.

The native SIB palette robot.sibs encapsulates at the model level a subset of the native commands of the UR script programming language. The script language contains more commands; in order to use them in this or other applications, they would need to be encapsulated similarly, included in the palette, and made available to the application modellers.
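The shape of such an encapsulation can be sketched as follows: each URScript command of interest is wrapped as one typed Java method, which the palette then exposes as an atomic SIB. Class and method names are our invention for illustration; DIME's actual SIB declaration mechanism differs in detail.

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.Locale;

/**
 * Illustrative shape of the native code behind a robot.sibs palette:
 * one typed Java method per encapsulated URScript command. Names and
 * signatures are invented; DIME's SIB declaration differs in detail.
 */
public final class RobotSibs {

    /** "CommandRobot": send a raw URScript program to the robot. */
    public static void commandRobot(String robotIp, String urScript) {
        try (Socket socket = new Socket(robotIp, 30002);
             Writer out = new OutputStreamWriter(
                     socket.getOutputStream(), StandardCharsets.US_ASCII)) {
            out.write(urScript);
            out.flush();
        } catch (IOException e) {
            throw new IllegalStateException("Robot connection failed", e);
        }
    }

    /** "MoveCoordinatesRobot": linear move to a Cartesian target. */
    public static void moveCoordinatesRobot(String robotIp,
                                            double x, double y, double z) {
        commandRobot(robotIp, String.format(Locale.ROOT,
            "movel(p[%.4f, %.4f, %.4f, 0.0, 3.14, 0.0], a=1.2, v=0.25)\n",
            x, y, z));
    }

    /** "ShutdownRobot": power the arm down via the built-in powerdown(). */
    public static void shutdownRobot(String robotIp) {
        commandRobot(robotIp, "powerdown()\n");
    }

    private RobotSibs() {}
}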

Like the Feature DSL, this DSL too is reusable. While the Feature DSL is a collection of models, and can thus change and evolve largely independently of programming artefacts in native code, the evolution of the Native DSLs depends on changes in the languages, APIs, and (service) interfaces they encapsulate. In this sense, the use of these layered DSLs also provides a very clean structure for organising and managing domain and application evolution, which usually lie in distinct areas of competence and responsibility:

  • On the basis of the available Native DSLs, application designers can change, enrich and evolve their application, like our UR3 Controller demo, at will without any need for programming. Modifications happen in the GUI or in the business logic, on the basis of existing native libraries and thus with the same code base. The new workflows and GUI models are compiled to new application code and deployed without the need for any programming skill or activity. In this sense, our XMDD approach [38, 42] is a zero-code environment for this kind of application design and evolution.

  • If the UR script language changes, or if an API or another interface, library, or piece of code encapsulated in a Native DSL changes, this happens in the decision and management sphere of the organisation or entity responsible for that platform or code. In such a case it may be necessary to revisit the corresponding Native SIB DSL and potentially take action. Changes to implementations internal to a platform that do not impact the interface leave the Native DSL valid as is. Interface changes, however, can lead to the addition, modification or deletion of operations, which may require the Native DSL manager to modify the palette accordingly. Potentially this leads to versioning of atomic SIBs, because applications relying on the older palette may have to be kept running on the previous version. Modifications and deletions in a Native DSL may also have consequences for the applications that use them. This is however no different from the usual practice with any application relying on external code or libraries.

3.3 GUI Models and Their DSL

GUI models allow the definition of the user interface of the web application. As we see in Fig. 8 (right), they reflect the structure of the individual web pages. They can be included within the sitemap processes as an interaction point for the user, as in this case, or within other GUI models to reuse already modelled parts. Examples of the latter are the Header GUI model at the top, containing the UR logo and the top line (Remote Robot Control) that is the title of the demo, and the Footer GUI model, containing the logos of the university, Confirm, SFI, etc.

Fig. 8. The DIME GUI palette (left) and the GUI editor for the control page.

On the left we see the GUI Native DSL of DIME. It is important to note that this GUI palette comes with DIME and is shared by all the applications, in whatever domain, built with DIME. Not only does it ensure a common technological basis for all DIME applications, it also enables application designers with no experience of GUI and web programming, like our interns, to create really nice applications without having to learn the various server-side and client-side technologies, nor to deal with compilation and deployment stacks. In our application, the students concentrated their effort exclusively on the creation of the Native UR DSL, the conception of the application, and its design and debugging at the level of the (executable) models. If the common GUI palette is not enough, there is a way to write Native Frontend Components, similar to the Java Native SIBs.

In the design, the connection between the different model types is evident in the combination of the GUI model of Fig. 8 (right) and the workflow model in Fig. 4. When composing the GUI model of the homepage, the designer drags and drops the various buttons and elements from the GUI palette onto the canvas, places them, and defines their surface properties, like what is written on a button and its colour. Thanks to the knowledge about these models and model elements embedded in the DIME environment, adding a button to the page, for example, makes its GUI model gain one outgoing branch with that name. As a consequence, once the designer has defined the Home GUI model in Fig. 8 (right), so that it appears in the list of defined GUIs, and then drags it onto the workflow canvas when composing the workflow model of Fig. 4, the symbol of the Home GUI automatically displays the six outgoing branches, each labelled with the name of a button in the corresponding GUI model. In this way the consistency and correctness of the connections between different model types and different aspects and elements of the application are enforced or supported by DIME, in its nature as a knowledge-infused IDE for models.

4 XMDD for the Digital Thread

The rising dependence on intersectoral knowledge and the convergence of economic sectors are increasingly experienced in Industry 4.0. This happens internally due to the smart manufacturing model, which connects hardware, software, networks and communications, materials, business models, laws and regulations. It also happens externally, because manufacturing concerns all kinds of goods, in any sector, and there is a need for more flexibility in moving from one product or kind of product to another due to increased fluctuations and opportunities in market demand.

As described last year in [32], referring to the Irish and global situation, a number of “needs” are mentioned over and over again:

  • the need to integrate across specialisation domains, spanning across various disciplines of research and professions;

  • the need to become increasingly agnostic about the specific technologies: the programming language, the operating system, the data management/information system, the communication networks, the runtime platforms, and more;

  • the need to be future-ready: projects, collaborations, consortia and alliances change. No IT product can afford to be locked into technological walled gardens; the need to be as technology- and platform-independent as possible is voiced over and over again;

  • the need to be able to try fast and improve fast: time to market is important, but time to test/time to retest are equally important. What is called “continuous development” or “continuous integration” needs to be supported as the new mainstream paradigm of system design and evolution.

The answer rests in the up-skilling of knowledge, at the same time going straight to the essential bits, without lengthy and costly education paths. At the centre of the Smart Manufacturing movement lie the increased use of models, first and foremost the creation and adoption of the Digital Twin, and the increased integration of heterogeneous models in a Digital Thread. This has to happen within the next few years, and it will have to happen with the current labour force in the design and decision workshops. This population of selected experts is sectorally trained, sectorally competent, and unlikely to be able or willing to undergo significant retraining specifically in software.

The creation of a software system is itself innovation, and so is the change or evolution of an existing system. Consistently with the widely successful school of lean approaches to innovation, one needs to fail fast and eliminate waste. The key is to recognise as early as possible that something is not as wished, possibly before investing time and resources into producing what will need to be amended, and to make changes right away on the artefacts available at that stage. The systematic use of portable and reusable models, in a single paradigm, combined with generative approaches to code, has the potential to provide a Next Wave of IT that by and large meets the ambitious goals described above.

Revisiting some of the aspects mentioned in [32] after over one year of further interaction with various industries and more direct experience of what works and what is too complicated or looks too unfamiliar to be successful, we think that the approach described in the UR3 case study has a number of essential traits that make it attractive to the new public of Industry 4.0.

4.1 Innovation Directions

An efficient way to deal with this paradigm addresses a variety of aspects and innovation directions, that we briefly summarise.

The cultural aspect is made accessible by resorting to the description of artefacts within domain models, rather than code: the immediacy of understanding of these simple models has a chance to concretise “what happens when” (addressing comprehension, structure, documentation, training) and “what happens if” (covering simulation and prognosis) to a nearly haptic level. In discussions, engineers called the models and elements “blocks” of what happens: things one can assemble and construct in a virtual way but that feel concrete and real. The concept of DSLs and domain-specific palettes of features and native SIBs in DIME covers this aspect in an intuitive way.

The code aspect is addressed by separating the (application-specific or reusable) “logic” from the implementation of the operations and the system. What is native is system-side, and it encapsulates and integrates what is already there, reusing it and making it better understandable. Models, simulation environments, tools and machines, but also AI tools, visualisation tools, communication networks, IoT devices, etc. come with APIs amenable to the DSL transformation and integration we just illustrated for the UR3. Here, the service-oriented computing embraced by DIME and its underlying integration paradigm leverage component-based design together with a high-level description of the interfaces and properties of the components.

The testing aspect is streamlined and anticipated in the design lifecycle by choosing modelling languages that facilitate an early-stage “checking” of the model structure and of the architectural and behavioural compatibilities. This is happening at the Digital Twin level with the choice of adequate simulation languages and environments. There is a spike of interest in SysML due to the influence of the traditional UML community on the software engineering mainstream, but not without criticism due to the heterogeneous and heavyweight approach to modelling it imposes. This is perceived as costly and bulky, so there is interest in lighter alternatives. On the software side, Architecture Analysis and Description Languages (AADLs) cover the architectural and static aspects, while the use of formal models like the KTS used in jABC [36] and DIME [5], and in general the graph-based models supported by Cinco [43], also allows a behavioural analysis. Early analysis capabilities, e.g. by model checking on the kind of models DIME provides, and in some cases a correct-by-construction synthesis of property-conform models, seem useful to deliver the “speed” of early detection and even avoidance of errors that makes the subsequent testing of the (mostly generated) code much faster.

The dissemination and adoption aspect is going to be taken care of by sharing such models, for example within the Confirm community of practice. Such models are understandable to the domain experts, and to a good extent (at the level of the processes and features) independent of the specific technology of a vendor. In several cases it would be possible to also share the implementations of the building blocks, at the level of Native DSLs, using for instance Open Source Software facilities and structures [64]. Libraries of services have been in use in the telecommunications domain since the ’80s [59]; they are increasingly in use in bioinformatics and geo-information systems, and are slowly taking centre stage in the advanced manufacturing community as well, albeit mostly still in the form of reference architectures and shared component models for which standards need to be developed.

The speed to change is accelerated by using generative approaches that transform the models into lower-level descriptions, possibly into code. The speed and quality advantage of this demo in comparison with a hand-crafted programmed version was immediately evident to all the non-software engineers who saw it. The core engine for this is the generative approach from models to code and - an aspect that arose by itself - to re-targetable code. In jABC, Cinco and DIME we use both model-to-model and model-to-code transformations, starting from the Genesys approach of [19] and [17], up to the generalised approach that Cinco adopts also at the tool metalevel [43].

The rich description of the single components, data, and applications is achieved by means of both domain-independent and domain-specific knowledge about the functionalities, the data and business objects, and the application’s requirements and quality profile, as in language-oriented programming [11, 66] or language-driven engineering [51]. The variety of vendor-specific robot programming languages and the individual APIs coming with sensors, actuators and IoT devices are, in spite of standardisation efforts, a well-known obstacle to interoperability. Mapping from abstract DSLs to native DSLs, via code or via smaller adapter processes that manage the data and protocol adaptation, is easy and has already been demonstrated in DIME. It could bring significant relief here.

The scalability and speed of education are supported by teaching domain specialists to deal with these domain-specific models, their analysis and composition, and their validation, using tools that exploit the domain-specific and contextual knowledge to detect the suitability (or not) of the current version of the application’s models for solving a certain problem in a certain (regulatory, technological, economic) context. We have had successes with school pupils and with postgraduate students with backgrounds in biology and geography [21]. We are now creating an online course that will provide a gentle introduction to these models and a number of e-tivities, so that professionals and students alike will be able to self-pace and gain experience in the use of these models.

The quick evolution is delivered by means of integrated design environments that support the collaboration of all the professional profiles and stakeholders on the same set of models and descriptions, as in the XMDD [38, 42] and One Thing Approach [39, 62], applied to models of systems and of test cases [47].

The structuring approaches are based e.g. on hierarchy [54, 55, 58], on several notions of features as in [7, 16, 20, 56], or on contracts for abstraction and compositionality as in [14]. These structures allow a clean and incremental factoring of well-characterised system components. This factoring aids the hierarchical and collaborative organisation of complex or large systems, while supporting the intuition of the domain experts. It also supports finding, perhaps approximately, the most opportune units of reuse (for development, evolution and testing) and units of localised responsibility, e.g. for maintenance, evolution and support.

4.2 Experiential Evidence so Far

We used DIME, Cinco or its predecessors in a large number of industry projects, research projects, and educational settings. Various aspects of the overall picture needed in a suitable and mature integrated modelling environment for the Digital Thread have already been exercised in those contexts. Each effort focussed mostly on a single aspect or a small number of aspects, given the focus of the respective project, while a Confirm Digital Thread platform would be required to match and surpass them all. Still, it is useful to summarise what we know we can do.

The acceptance of formality as a means to clarify interfaces, behaviours and properties, and as a precondition to verify and prevent instead of implementing, testing, and then repairing case by case, is essential to meet the challenges and demands of the future platforms of IT provision and use. In this sense, we need both Archimedean points [63] and future-oriented knowledge management for change management [31], with fundamental attention to usability by non-IT specialists. With this work, we intend to leverage the XMDD approach [38, 42] and the previous work on evolution-oriented software engineering, under the aspect of simplicity [41] and continuous systems engineering [40], especially from the perspective of reconciling domain experts and software professionals [30]. Models with a formal underpinning are likely to be central to the wish of many manufacturers to be able to reconfigure production, which demands simplicity and predictability in evolution and reconfiguration.

We build upon over a decade of previous experience gathered in various application domains. Specifically, our own work in scientific workflows, summarised in [27], spans from the initial projects in 2006 [33] to platforms for accessing complex genetic data manipulations in the bioinformatics domain (the Bio-jETI platform of [23, 34] and the agile GeneFisher-P [24]). These platforms, and other similar experiences in geo-information systems, have proven that we are able to create platforms that virtualise and render interoperable collections of third-party services and tools that were not designed for interoperability. This happened with essentially the technology exemplified in the tiny UR3 remote controller. We hope that this previous experience may give manufacturing designers and producers confidence that such a platform is indeed feasible for the Digital Thread as well.

The tools we intend to use span from the Cinco-products [43] DIME and DyWA to the most recent work on DSLs for decision services [12]. The availability of well-characterised service collections makes a semantic web-like approach feasible for these DSLs: their classification in terms of taxonomies and labelled properties brings semantics and semantics-based composition within reach. In these respects, we have long experience of modelling, integration, and synthesis [35, 37, 60], up to entire synthesis-based platforms and applications [25, 26] as well as benchmark generation platforms [52]. The domain-specific knowledge captured in native DSLs and their taxonomic classification, as well as the feature-level DSLs and collections of properties, may in fact lead to synthesisable workflows and processes within the Digital Thread.

We have experience of various tool integration techniques [29] and service integration platforms, and of the Semantic Web Challenge [48, 49]. This past experience has taught us that technology choices in the interfaces, design choices, and degrees of uniformity in the structure and presentation of APIs make a huge difference to the degree of automation achievable when producing native DSLs from such native libraries and APIs. Here we unfortunately expect a large manual effort: from current experience with IoT devices, sensors and actuators, we expect to need bespoke solutions for each device and manufacturer, and to face constant changes across a product’s evolution. Interface and API design appears today in fact almost accidental; interfaces are at the moment among the most severely under-managed assets in the Industry 4.0 landscape.

We have long used features to model variability [20], introduced various categories of constraints to define structural and behavioural aspects of variability [18], and provided constraint-driven safe service customisation adapting features to various contexts [6], up to the higher-order processes of [46]. Concerning the effect of knowledge on testing, we specifically addressed efficient model-supported regression testing [15] and the hybrid testing of web applications with various tools and approaches [50]. We used many techniques that leverage the mathematical nature of the models: proofs of correctness of the data flow [22] and of the control flow [3], and games to enhance diagnosis capabilities in model-driven verification [4]. We expect these techniques to be applicable also to the integration of Digital Twins in a Digital Thread, and we expect that their impact on shorter design times (i.e., quick prototyping and validation of a new application) and increased quality (i.e., less testing) will lead to a significant speed-up in comparison with the current patchwork of ad-hoc code.

We are convinced that all these abilities will be essential for the realisation of an integrated, efficient and correct Digital Thread for the connected and evolving industrial critical systems of tomorrow, to achieve true Sustainable Computing in Continuous Engineering platforms for Industry 4.0.