1 Introduction

Cyber-Physical Systems (CPS) are present in a variety of safety- and mission-critical domains [24]. Given the pervasiveness of CPS and their criticality to the daily functioning of society, it is vital for such systems to operate reliably. However, since they generally function in an inherently complex and unpredictable physical environment, a major difficulty with these systems is that they must be designed and operated in the presence of uncertainty. By uncertainty we mean the lack of certainty (i.e., of knowledge) about the timing and nature of inputs, the state of a system, a future outcome, and other relevant factors.

As a first crucial step in addressing this challenge, we feel that it is necessary to understand the phenomenon of uncertainty and all its relevant manifestations. This entails systematically identifying, classifying and specifying the uncertainties that might arise at any of the three levels of a CPS: Application, Infrastructure, and Integration. Based on studying and analyzing existing uncertainty models developed in other fields, including philosophy, physics, statistics and healthcare [5–8], we have defined an uncertainty conceptual model for CPS (U-Model) with the following objectives: (1) provide a unified and comprehensive description of uncertainties to both researchers and practitioners, (2) classify uncertainties with the aim of identifying common representational patterns when modeling uncertain behaviors, (3) provide a reference model for systematically collecting uncertainty requirements, (4) serve as a methodological baseline for modeling uncertain behaviors in CPS, and, last but not least, (5) provide a basis for standardization of the conceptual model, leading to its broader application in practice.

To verify the completeness and validity of the U-Model, we validated it using uncertainty requirements collected from two industrial case studies from two different domains: (1) Automated Warehouses developed by ULMA Handling Systems (www.ulmahandling.com/en/), Spain, and (2) GeoSports (fpx.se/geo-sports/) developed by Future Position X, Sweden. This empirical validation was performed systematically in several stages and resulted in several revisions of the U-Model as well as a refined set of uncertainty requirements. The version of the U-Model that emerged from this work is presented in this paper. Based on the results of this validation, we discovered, on average across the two case studies, 61.5 % additional uncertainties that were not identified in the initial specifications. The rest of this paper is organized as follows: Sect. 2 presents the background and a running example. Section 3 presents the U-Model. Section 4 presents the evaluation and discussion. Section 5 discusses related work, and we conclude the paper in Sect. 6.

2 Background and Running Example

A CPS is defined in [1] as: “A set of heterogeneous physical units (e.g., sensors, control modules) communicating via heterogeneous networks (using networking equipment) and potentially interacting with applications deployed on cloud infrastructures and/or humans to achieve a common goal”, and is conceptually shown in Fig. 1. As defined in [1], uncertainty can occur at the following three levels (Fig. 1): (1) Application level: due to events/data originating from the application of the CPS; (2) Infrastructure level: due to interactions, including events/data, among physical units, networking infrastructure, and/or cloud infrastructure; (3) Integration level: due to interactions among uncertainties at the first two levels, or to interactions between the application and infrastructure levels.

Fig. 1. Conceptual model of a Cyber-Physical System [1]

Due to confidentiality constraints, the actual industrial CPS case studies that we used to evaluate the U-Model (Sect. 4) cannot be described in detail. Instead, to illustrate the conceptual model, we chose a Videoconferencing System (VCS) developed by Cisco, Norway, which has been used in our previous projects.

A typical VCS sends and receives audio/video streams to and from other VCSs in a videoconference, including dedicated hardware-based VCSs, software-based VCSs for PCs, and cloud-based VCS solutions (e.g., WebEx), as shown in Fig. 2 (inspired by [9] and our existing collaboration with Cisco). To support videoconferences, Cisco provides a complex infrastructure (Fig. 2) comprising a variety of hardware such as gateways (e.g., Expressway) and dedicated servers (e.g., Telepresence and Unified Call Management servers). In Fig. 2, we also show the various levels at which uncertainties can occur in the context of our running example. For example, at Site 2, the interactions of Application level uncertainties in VCS 2 with uncertainties in the Telepresence Servers are shown as Integration level uncertainties.

Fig. 2. Running example – Videoconferencing System (VCS)

To facilitate understanding of the concepts, we describe the VCS and the aspects of the physical world it interacts with in a somewhat simplified form. Among other functions, the VCS controls the movement of a set of cameras that are directly attached to it via wired/wireless media. This can also be performed via a cloud-based VCS application (e.g., WebEx) in addition to dedicated hardware-based solutions. In the course of a videoconference, a number of different uncertainties arise due to the complex and heterogeneous collection of networks, cloud-based infrastructures, and VCSs.

3 Uncertainty Conceptual Model

The U-Model comprises the Belief Model, the Uncertainty Model and the Measure Model. Their key elements are presented below; further details are available in [10].

3.1 Belief Model

The U-Model takes a subjective approach to representing uncertainty. This means that uncertainty is modeled as a state (i.e., worldview) of some agent or agency – henceforth referred to as a BeliefAgent – that, for whatever reason, is incapable of possessing complete and fully accurate knowledge about some subject of interest. Since it lacks perfect knowledge, a BeliefAgent possesses a set of subjective Beliefs about the subject. These may be valid, if the beliefs accurately represent the facts, or invalid, if they do not. A Belief is an abstract concept, but it can be expressed in concrete form via one or more explicit BeliefStatements. Different BeliefAgents may hold different views about a given subject, which is why each BeliefStatement is associated with a particular BeliefAgent. Note that a BeliefAgent does not necessarily represent a human individual; it could be a community of individuals, some non-human organism, or even a technological system, such as a computer system.

These and other core concepts of the U-Model are represented as a class diagram in Fig. 3, where subjective concepts are shown as grey-filled boxes and objective concepts as unfilled boxes. Subjective concepts are manifestations of the imperfect knowledge of a BeliefAgent. Conversely, objective concepts reflect objective reality and are, therefore, independent of BeliefAgents and their imperfections. One significant characteristic of subjective concepts is that they can vary over time, as might occur, e.g., when more information becomes available.

Fig. 3. The Core Belief Model

Uncertainty (lack of confidence) represents a state of affairs whereby a BeliefAgent does not have full confidence in a Belief that it holds. This may be due to various factors: lack of information, inherent variability in the subject matter, ignorance, or even physical phenomena such as the Heisenberg uncertainty principle. While Uncertainty is an abstract concept, it can be represented by a corresponding Measurement expressing, in some concrete form, the subjective degree of uncertainty that the agent attaches to a BeliefStatement. Since uncertainty is a subjective notion, a Measurement should not be confused with the degree of validity of a BeliefStatement; instead, it indicates the level of confidence that the agent has in the statement.

Finally, note that this model is intentionally kept very general, which allows it to be extended and customized for a variety of purposes, e.g., uncertainty model-based testing of CPS in the context of our project. Figure 3 does not show the complete model; e.g., to reduce visual clutter, some of the OCL constraints have been omitted. The complete model is described in [10]. In the remainder of this section, we examine key concepts of the core model in more detail and illustrate some of them using the running VCS example (see Table 1).

Table 1. Running example – Dial of VCS

Belief, BeliefAgent and BeliefStatement.

A Belief is an implicit, subjective explanation or description of some phenomena or notions held by a BeliefAgent. It is an abstract concept whose only concrete manifestation is a BeliefStatement. In our running example, a test engineer at Cisco may have his/her own Beliefs about how a VCS works. When coding test cases, he/she concretizes these Beliefs as executable test scripts that may or may not correspond to the actual implementation of the VCS. A BeliefStatement in this context could be manifested as one executable test case file; in other contexts it may correspond to other artifacts, e.g., source code.

A BeliefAgent is a physical entity owning one or more Beliefs about phenomena/notions. A BeliefAgent can take actions based on its Beliefs. In our example of CPS testing, BeliefAgents include: (1) Application level: software test engineers focusing on testing new versions of the VCS software, and (2) Infrastructure level: network engineers focusing on testing a VCS under diverse network conditions.

A BeliefStatement is a concrete and explicit specification of some Belief held by a BeliefAgent about possible phenomena or notions belonging to a given subject area. A BeliefStatement can be an aggregate of two or more component BeliefStatements, or it may require one or more prerequisite BeliefStatements.

The concrete form of a BeliefStatement can vary, and may represent informal pronouncements made by individuals or groups, documented textual specifications expressed in either natural or formal languages, formal or informal diagrams, etc.

Due to the complex nature of objective reality and our human and technical limitations, it may not always be possible to determine whether or not a BeliefStatement is valid. Furthermore, the validity of a statement may only be meaningfully defined within a given context or purpose at a given point of time. Thus, the statement that “the Earth can be represented as a perfect sphere” may be perfectly valid for some purposes but invalid or only partly valid for others. For our needs, we are more interested in analyzing the uncertainties in a BeliefStatement than in studying its validity.

In our example, we define the following BeliefStatements: (1) Application level: The VCS will successfully connect to another VCS 70 % of the time (see Table 1); (2) Infrastructure level: The Expressway gateway is successful 99 % of the time in connecting a Cisco VCS with a third party VCS (see Table 1); and (3) Integration level: A VCS communicates with the Expressway gateway with a 90 %–95 % success rate.
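To make these relationships concrete, the sketch below encodes the Belief Model concepts as simple Python classes and instantiates the three example BeliefStatements. The class names, attributes, and the prerequisite link between statements are illustrative assumptions of ours, not the normative U-Model definition, which is given as a UML class diagram (Fig. 3) and in [10].

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative, simplified rendering of the Core Belief Model (Fig. 3).
@dataclass
class BeliefAgent:
    name: str    # e.g., a test engineer, a team, or even a computer system
    level: str   # CPS level the agent focuses on (Application, Infrastructure, ...)

@dataclass
class BeliefStatement:
    text: str            # concrete, explicit specification of an (implicit) Belief
    agent: BeliefAgent   # the BeliefAgent holding the underlying Belief
    level: str           # CPS level the statement refers to
    prerequisites: List["BeliefStatement"] = field(default_factory=list)

# The three example BeliefStatements from the running VCS example
test_engineer = BeliefAgent("software test engineer", "Application")
network_engineer = BeliefAgent("network engineer", "Infrastructure")

s1 = BeliefStatement("The VCS will successfully connect to another VCS 70 % of the time",
                     test_engineer, "Application")
s2 = BeliefStatement("The Expressway gateway is successful 99 % of the time in connecting "
                     "a Cisco VCS with a third party VCS", network_engineer, "Infrastructure")
s3 = BeliefStatement("A VCS communicates with the Expressway gateway with a 90-95 % success rate",
                     test_engineer, "Integration",
                     prerequisites=[s1, s2])  # hypothetical prerequisite relation, for illustration
```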

Evidence, EvidenceKnowledge, IndeterminacySource and IndeterminacyKnowledge.

Evidence is either an observation or a record of a real-world event occurrence or, alternatively, the conclusion of some formalized chain of logical inference that provides information that can contribute to determining the validity (i.e., truthfulness) of a BeliefStatement. Evidence is inherently an objective phenomenon, representing something that actually happened. This means that we exclude here the possibility of counterfeit or invented evidence. Nevertheless, although Evidence represents objective reality, it need not be conclusive in the sense that it removes all doubt (Uncertainty) about a BeliefStatement. Consider the Application level BeliefStatement “The VCS successfully dials to another VCS 70 % of the time”: the Evidence for the 70 % dial success rate may be obtained from the execution of 100 test cases on the VCS in the past week (see Evidence in Table 1).

EvidenceKnowledge expresses an objective relationship between a BeliefStatement and relevant Evidence. It identifies whether the corresponding BeliefAgent is aware of the appropriate Evidence. Thus, an agent may be aware that it knows something (KnownKnown), or it may not be explicitly aware of Evidence that it nonetheless relies on (UnknownKnown). This is formally expressed by the two constraints attached to EvidenceKnowledge (Fig. 3). An example is provided in Table 1.

Indeterminacy is a situation whereby the full knowledge necessary to determine the required factual state of some phenomena/notions is unavailable. This is an abstract concept whose only concrete manifestation is in the form of an IndeterminacySource. As noted earlier, this may be due either to subjective reasons (e.g., agent ignorance) or to objective reasons (e.g., the Heisenberg uncertainty principle). It is also useful to explicitly identify the factors that lead to Uncertainty, referred to as IndeterminacySources. An IndeterminacySource represents a situation whereby the information required to ascertain the validity of a BeliefStatement is indeterminate in some way, resulting in Uncertainty being associated with that statement. One possible source of indeterminacy can be another BeliefStatement, which is why the latter is a specialization of IndeterminacySource (Fig. 3). For example, for the BeliefStatement “The VCS successfully dials to another VCS 70 % of the time”, there might be several IndeterminacySources. One possibility is incorrect operator behavior, where an incomplete name of the target VCS is specified (see IndeterminacySource in Table 1).

IndeterminacyNature represents the specific kind of indeterminacy and can be one of the following: (1) InsufficientResolution – the information available about the phenomenon in question is not sufficiently precise; (2) MissingInfo – the full set of information about the phenomenon in question is unavailable at the time when the statement is made; (3) Non-determinism – the phenomenon in question is either practically or inherently non-deterministic; (4) Composite – a combination of more than one kind of indeterminacy; (5) Unclassified – indeterminate indeterminacy.
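As a small illustration, these kinds can be rendered as a plain enumeration; the Python sketch below is our own encoding, not part of the U-Model specification.

```python
from enum import Enum

# Illustrative encoding of the IndeterminacyNature kinds listed above.
class IndeterminacyNature(Enum):
    INSUFFICIENT_RESOLUTION = "available information is not sufficiently precise"
    MISSING_INFO = "full information unavailable when the statement is made"
    NON_DETERMINISM = "practically or inherently non-deterministic phenomenon"
    COMPOSITE = "combination of more than one kind of indeterminacy"
    UNCLASSIFIED = "indeterminate indeterminacy"
```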

IndeterminacyKnowledge expresses an objective relationship between an IndeterminacySource and the awareness that the BeliefAgent has of that source. So, even though it is agent specific, it is still an objective concept since it does not represent something that is declared by the agent. For instance, an agent may be aware that it does not know something about a possible source (KnownUnknown), or the agent may be completely unaware of a possible source of indeterminacy (UnknownUnknown).

KnowledgeType (represented as an enumeration) has four values: (1) KnownKnown indicates that an associated BeliefAgent is consciously aware of some relevant aspect; (2) KnownUnknown (Conscious Ignorance) indicates that an associated BeliefAgent understands that it is ignorant of some aspect; (3) UnknownKnown (Tacit Knowledge) indicates that an associated BeliefAgent is not explicitly aware of some relevant aspect, but may be able to exploit it in some way; (4) UnknownUnknown (Meta Ignorance) indicates that an associated BeliefAgent is unaware of some relevant aspect.

At a given point in time, a BeliefAgent always makes a statement based on KnownKnown Evidence and a KnownUnknown IndeterminacySource. Splitting EvidenceKnowledge and IndeterminacyKnowledge provides the flexibility to enable transitions among different knowledge types (e.g., from UnknownKnown to KnownKnown), based on the evolution of the EvidenceKnowledge and IndeterminacyKnowledge related to the associated BeliefAgent. For the BeliefStatement “The VCS successfully dials to another VCS 70 % of the time” with the IndeterminacySource improper operator behavior, the KnowledgeType of the IndeterminacyKnowledge is KnownUnknown.
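The sketch below shows one way the KnowledgeType enumeration and the restrictions described above (EvidenceKnowledge limited to KnownKnown/UnknownKnown, IndeterminacyKnowledge to KnownUnknown/UnknownUnknown) could be encoded. This reflects our reading of the constraints attached to EvidenceKnowledge and IndeterminacyKnowledge in Fig. 3, not their exact OCL formulation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class KnowledgeType(Enum):
    KNOWN_KNOWN = auto()      # consciously aware of a relevant aspect
    KNOWN_UNKNOWN = auto()    # conscious ignorance
    UNKNOWN_KNOWN = auto()    # tacit knowledge
    UNKNOWN_UNKNOWN = auto()  # meta ignorance

@dataclass
class EvidenceKnowledge:
    knowledge_type: KnowledgeType

    def __post_init__(self):
        # Evidence is either consciously known or only tacitly known to the agent.
        assert self.knowledge_type in (KnowledgeType.KNOWN_KNOWN, KnowledgeType.UNKNOWN_KNOWN)

@dataclass
class IndeterminacyKnowledge:
    knowledge_type: KnowledgeType

    def __post_init__(self):
        # An IndeterminacySource is, by definition, not fully known to the agent.
        assert self.knowledge_type in (KnowledgeType.KNOWN_UNKNOWN, KnowledgeType.UNKNOWN_UNKNOWN)

# Example from the running VCS example: improper operator behavior is a KnownUnknown
dial_source_knowledge = IndeterminacyKnowledge(KnowledgeType.KNOWN_UNKNOWN)
```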

Measurement and Measure.

Measurement, when associated with a given IndeterminacySource, represents the optional quantification (or qualification) that specifies the degree of indeterminacy of that source. For example, in the case of a Non-determinism IndeterminacySource, its measurement could be expressed by a probability or a probability density function. For the example presented in Table 1, ‘70 %’ is the measurement of the IndeterminacySource improper operator behavior.

Measurement, when associated with Uncertainty, is a subjective concept representing the actual measured value of an uncertainty as defined by a BeliefAgent. It may be possible to specify a Measurement that quantifies in some way (e.g., as a probability) the degree of the uncertainty that a BeliefAgent associates with a BeliefStatement. Measurement, when associated with Belief, represents the set of measured values of all the uncertainties contained in a BeliefStatement defined by a BeliefAgent. Several constraints on Measurement ensure that each Measurement owned by either Belief, Uncertainty or IndeterminacySource has a unique Measure. Currently, we model three different measures, i.e., Probability, Ambiguity and Vagueness, which are discussed in the Measure Model (Sect. 3.3). In the future, we will provide UML model libraries for Measurement when implementing the U-Model as a UML profile. Measure is an objective concept specifying the method of measuring uncertainty. More details are presented in Sect. 3.3.
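A minimal sketch of the distinction between Measure (an objective measuring method) and Measurement (a subjective value attached by a BeliefAgent), including the constraint that each Measurement has exactly one Measure; the attribute names are our own illustrative choices.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Measure:
    name: str          # objective measuring method, e.g., "Probability"

@dataclass
class Measurement:
    measure: Measure   # each Measurement refers to exactly one Measure
    value: float       # subjective value assigned by a BeliefAgent

# The 70 % dial success rate from Table 1, expressed as a probability measurement
probability = Measure("Probability")
dial_success = Measurement(probability, 0.70)
```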

3.2 Uncertainty Model

This model (Fig. 4) was inspired by concepts defined in the literature on uncertainty [11–15] and is an adjunct to the Core Belief Model (Sect. 3.1). The uncertainty model expands on Uncertainty from several different viewpoints and introduces related abstractions. Notice that Uncertainty has a self-association. This self-association facilitates: (1) relating different Application level uncertainties to each other, (2) relating different Infrastructure level uncertainties to each other, (3) relating Application level and Infrastructure level uncertainties to each other, (4) relating Integration level uncertainties to each other, and (5) relating Application, Integration, and Infrastructure level uncertainties to each other. This self-association can be specialized into different types of relationships such as orderings and dependencies. Here, we intentionally did not specialize it, to keep the model general so that it can be specialized for various purposes and contexts. In the rest of this section, we discuss each subtype of Uncertainty and its associated concepts.

Fig. 4. The core uncertainty model

Uncertainty, Lifetime and Pattern.

Uncertainty represents a situation whereby a BeliefAgent lacks confidence in a BeliefStatement. Figure 4 shows a conceptual model for different types of Uncertainty, inspired by the concepts reported in [12, 14, 15]. Uncertainty is specialized into the following types: (1) Content – a situation whereby a BeliefAgent lacks confidence in content existing in a BeliefStatement; (2) Environment – a situation whereby a BeliefAgent lacks confidence in the surroundings of a physical system existing in a BeliefStatement; (3) GeographicalLocation – a situation whereby a BeliefAgent lacks confidence in a geographical location existing in a BeliefStatement; (4) Occurrence – a situation whereby a BeliefAgent lacks confidence in the occurrence of events existing in a BeliefStatement; (5) Time – a situation whereby a BeliefAgent lacks confidence in time existing in a BeliefStatement. For example, for the BeliefStatement “The VCS successfully calls another VCS 70 % of the time”, the Uncertainty is whether the dialing to another VCS will be successful or not, and it is classified as an Occurrence uncertainty. In the case of the BeliefStatement “The Expressway gateway is successful 99 % of the time in connecting a Cisco VCS with a third party VCS”, the Uncertainty is in the connection of the gateway with the third party VCS, and the type of uncertainty is again Occurrence (see type of Uncertainty in Table 1).
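These subtypes can be sketched as a simple classification, shown below for the two example statements; this rendering as an enumeration, rather than the UML specialization hierarchy of Fig. 4, is an illustrative simplification of ours.

```python
from enum import Enum

# Illustrative encoding of the Uncertainty subtypes of Fig. 4.
class UncertaintyKind(Enum):
    CONTENT = "lack of confidence in content in a BeliefStatement"
    ENVIRONMENT = "lack of confidence in the surroundings of a physical system"
    GEOGRAPHICAL_LOCATION = "lack of confidence in a geographical location"
    OCCURRENCE = "lack of confidence in the occurrence of events"
    TIME = "lack of confidence in time"

# Classification of the two example BeliefStatements from the running example
classification = {
    "The VCS successfully calls another VCS 70 % of the time": UncertaintyKind.OCCURRENCE,
    "The Expressway gateway is successful 99 % of the time in connecting "
    "a Cisco VCS with a third party VCS": UncertaintyKind.OCCURRENCE,
}
```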

Lifetime represents an interval of time during which an Uncertainty exists. That is, an Uncertainty may appear temporarily and then disappear. On the other hand, an Uncertainty could be persistent, i.e., it remains until appropriate actions are taken to resolve it. An example of Lifetime is shown in Table 1. We show two types of time in Fig. 5: (1) Real Time, showing the actual passing of time, and (2) Testing Time, i.e., a time point in real time at which a testing activity was performed, e.g., a call attempt to establish a videoconference (stimulus to the system under test) or the reception of a response from the system about the success or failure of the call (test result). Time points t_n are shown on Testing Time in Fig. 5. A BeliefStatement can be made at any point in real time; for example, three versions of BeliefStatement B1 (B1.1, B1.2, and B1.3) can be made at different points of time, as shown in Fig. 5. The Lifetime of the Uncertainty (the occurrence of a successful dial) in BeliefStatement B1 is t_n − t_{n−1}, i.e., the difference between the time at which the dial was initiated and the time at which the response from the system was received for B1.3.
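As a small illustration of how such a Lifetime could be computed during testing, the sketch below uses hypothetical timestamps; the function name is ours and not part of the U-Model.

```python
from datetime import datetime, timedelta

def lifetime(dial_initiated: datetime, response_received: datetime) -> timedelta:
    """Lifetime of the occurrence uncertainty for one dial attempt: t_n - t_{n-1}."""
    return response_received - dial_initiated

# Hypothetical timestamps for BeliefStatement B1.3
t_dial = datetime(2016, 3, 1, 10, 0, 0)   # dial initiated (t_{n-1})
t_resp = datetime(2016, 3, 1, 10, 0, 4)   # response received (t_n)
print(lifetime(t_dial, t_resp))           # 0:00:04
```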

Fig. 5. Example of Lifetime and Pattern of Uncertainty

Figure 6 shows a conceptual model for the occurrence Pattern of Uncertainty, inspired by concepts reported in [14, 16, 17]. Notice that the patterns presented in this section are by no means a complete set of the patterns that may exist for an Uncertainty; rather, we only present the most common ones.

Fig. 6. The Patterns of Uncertainty

A Periodic uncertainty occurs at regular intervals of time, whereas a Persistent uncertainty lasts forever. The definition of “forever” varies; e.g., an uncertainty may exist permanently until appropriate actions are taken. On the other hand, an uncertainty may not be resolvable and thus remains forever. Both Periodic and Persistent inherit from Systematic, which means that these types of patterns occur in some methodical manner, i.e., a pattern that can be described in a mathematical way.

An uncertainty with an Aperiodic pattern occurs at irregular intervals of time; Aperiodic is further specialized into Sporadic and Transient. A Sporadic uncertainty occurs occasionally, whereas a Transient uncertainty occurs temporarily. The Systematic and Aperiodic uncertainty patterns inherit from Temporal, which means that they both inherently involve the notion of time. If an uncertainty occurs without a definite method, purpose or conscious decision, the pattern it follows is referred to as Random. For example, looking at Fig. 5, a pattern of the Uncertainty (the occurrence of a successful call attempt) can be derived after collecting values of the Lifetime of the Uncertainty (see Pattern in Table 1).
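The taxonomy of Fig. 6 can be summarized as the small class hierarchy below; this is an illustrative rendering of the inheritance relations described above, not the model itself.

```python
# Illustrative class hierarchy mirroring the Pattern taxonomy of Fig. 6.
class Pattern: ...
class Temporal(Pattern): ...        # patterns with an inherent notion of time
class Systematic(Temporal): ...     # describable in a mathematical way
class Periodic(Systematic): ...     # occurs at regular intervals
class Persistent(Systematic): ...   # lasts until resolved, or forever
class Aperiodic(Temporal): ...      # occurs at irregular intervals
class Sporadic(Aperiodic): ...      # occurs occasionally
class Transient(Aperiodic): ...     # occurs temporarily
class Random(Pattern): ...          # no definite method, purpose or conscious decision
```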

Locality and Risk.

Locality (see Fig. 4) is a particular place or position at which an Uncertainty occurs in a BeliefStatement. For example, for the BeliefStatement “The VCS successfully dials to another VCS 70 % of the time”, the Locality of the Uncertainty (whether the call attempt to another VCS will be successful or not) is the invocation (position) of the dial API of the VCS (see Locality in Table 1).

An uncertainty may have an associated Risk, and high-risk uncertainties deserve special attention. As shown in Fig. 4, an Uncertainty might or might not be associated with a Risk, which can be classified into four levels according to the ISO 31000 – Risk Management standard [18]. The Level/Rating is derived from the Measurement owned by the Uncertainty (e.g., the Probability of the Occurrence of an Uncertainty) and the Measurement owned by the Effect (e.g., a high impact), using the risk matrix in [19] or any other matrix. For example, for the BeliefStatement “The VCS successfully calls another VCS 70 % of the time”, the Risk associated with the Uncertainty in this BeliefStatement is low, or could even be ignored (see Risk in Table 1).
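A minimal sketch of how a risk Level/Rating could be derived from the two Measurements is shown below. The impact labels and thresholds are hypothetical, chosen only for illustration; they are not taken from ISO 31000 [18] or from the ESC risk matrix [19].

```python
# Hypothetical derivation of a four-level risk rating from an uncertainty's
# occurrence probability and the impact of its Effect; thresholds are illustrative.
def risk_level(occurrence_probability: float, impact: str) -> str:
    impact_weight = {"negligible": 1, "minor": 2, "major": 3, "critical": 4}[impact]
    score = occurrence_probability * impact_weight
    if score < 0.5:
        return "low"
    if score < 1.5:
        return "medium"
    if score < 2.5:
        return "high"
    return "extreme"

# Hypothetical inputs for the dial example: a 30 % failure probability with
# negligible impact yields "low", consistent with the running example.
print(risk_level(0.30, "negligible"))  # "low"
```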

3.3 Measure Model

Figure 7 shows the Measure Model of the U-Model, inspired by concepts reported in [12–14]; it is by no means complete. Depending on the type of Uncertainty, a variety of measures could be applied, and new ones can be proposed when needed. We aim to give a high-level introduction to commonly known measures.

Fig. 7. Measure Model

An uncertainty may be described ambiguously (Ambiguity). For example, in the statement “The camera is down”, the ambiguity lies in the interpretation of “down”, i.e., the camera is either facing down or disconnected. Interested readers may consult [20] for various measures of Ambiguity. Another common way of measuring Uncertainty is in a vague manner (i.e., Vagueness), which can be further classified into Fuzziness and NonSpecificity. Regarding Fuzziness, an uncertainty may be measured using fuzzy methods; more details can be found in the fuzzy logic literature, e.g., [20]. In certain cases, it may not be possible to measure an uncertainty quantitatively; instead, qualitative measurements can be used. Such qualitative measurements are classified under NonSpecificity. Finally, a common way of measuring uncertainty is via Probability. For example, for the BeliefStatement “The VCS successfully calls another VCS 70 % of the time”, the Uncertainty is measured by Probability (see Measure in Table 1).
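The measures discussed above can be summarized as the small hierarchy below (an alternative, hierarchy-based rendering of Measure compared to the earlier sketch); it is illustrative only and, like Fig. 7, not complete.

```python
# Illustrative hierarchy of the measures in Fig. 7; further measures can be added.
class Measure: ...
class Probability(Measure): ...       # e.g., 0.7 for the 70 % call success rate
class Ambiguity(Measure): ...         # more than one plausible interpretation
class Vagueness(Measure): ...
class Fuzziness(Vagueness): ...       # fuzzy-logic-based quantification
class NonSpecificity(Vagueness): ...  # qualitative characterization
```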

4 Evaluation

This section presents the results of the industrial case studies that we conducted to evaluate the U-Model and collect uncertainty requirements. The first case study concerns an Automated Warehouse (AW) provided by ULMA Handling Systems and the second concerns GeoSports (GS) by Future Position X (further details in [10]).

4.1 Development and Validation of Uncertainty Requirements and U-Model

We collected uncertainty requirements from the two industrial case studies as follows. The uncertainty requirements were collected as part of an EU project on testing CPS under uncertainty (www.u-test.eu). An initial set of uncertainty requirements was collected by the industrial partners themselves and later classified into the three CPS levels: Application, Infrastructure, and Integration. Subsequently, researchers from Simula Research Laboratory conducted one workshop per partner to further refine the requirements. For AW, the onsite workshop took around three days, whereas for GS a one-day onsite workshop was organized.

The validation procedure is summarized in Fig. 8 and comprises two parallel validation processes. The first validation process is related to the validation of the U-Model and was mainly conducted by the researchers. The second validation process focuses on the validation of uncertainty requirements and was mainly performed by the industrial partners.

Fig. 8. Development and validation of uncertainty requirements and U-Model

The U-Model was developed incrementally (Activities A1 and A2 in Fig. 8), based on existing models in the literature and other related published works (see Sect. 5 for details). The Simula team validated the conceptual model using two types of examples, shown as inputs to A2 in Fig. 8: (1) examples of uncertainties from domains other than CPS, and (2) a subset of the VCS requirements. As a result, an initial version of the U-Model was produced, referred to as U-Model V.1 in Fig. 8.

In parallel, initial uncertainty requirements (Reqs V.1) were provided (Activity B1 in Fig. 8) by the industrial partners based on their domain knowledge, the existing requirements of their CPS, and some information from the real operation of the CPS. These initial uncertainty requirements were used as input for A3, which focused on further refining the U-Model. In addition, the researchers inspected the collected uncertainty requirements using a requirements inspection checklist provided in [21] and gave the industrial partners a set of comments on how to improve their requirements. There were two key outputs of the A3 activity: U-Model V.2 and the comments to refine the requirements. These comments were used by the industrial partners to produce a second version of the requirements (Reqs V.2) in B2.

4.2 Evaluation Results

For each of the industrial case studies, we mapped the three versions of the uncertainty requirements (Reqs V.1, Reqs V.2, and Reqs V.4) to the three versions of the U-Model (V.1 to V.3). The numbers of instances of the concepts are shown in columns x (mapping Reqs V.1 to U-Model V.1), y (mapping Reqs V.2 to U-Model V.2), and z (mapping Reqs V.4 to U-Model V.3) of Table 2, respectively. Notice that Reqs V.3 was the result of the onsite workshops together with U-Model V.3; these requirements are not mapped to the model, since both the conceptual model and the requirements were refined together. We analyzed in total 20 use cases for AW and 18 use cases for GS. Notice that the number of use cases for each case study did not change during the requirements collection and U-Model validation process; they were selected at the beginning of the process to capture and specify the key functionalities of the CPS.

Table 2. Evaluation results of uncertainty requirements and U-Model

Based on the final version of the requirements, we can see from Table 2 that the most common types of identified uncertainties are Content uncertainties, with 91 instances (the last column in Table 2), and Occurrence uncertainties, with 205 instances. On the other hand, a relatively lower number of Time uncertainties (50), Environment uncertainties (32), and GeographicalLocation uncertainties (31) were found in the case studies. Most of the time, uncertainties are due to InsufficientResolution (42 instances), MissingInfo (31 instances) or Non-determinism (89 instances). In terms of Measure, our analysis revealed that, across the case studies, 76 of the uncertainties may be measured with Fuzziness measures, 119 with NonSpecificity, and 148 with Probability. Notice that Table 2 does not show the concepts for which no instances were identified in any of the case studies.

In Table 2, the R1 = y/x − 1 column represents the percentage increase in concepts explicitly captured in Reqs V.2 as compared to Reqs V.1. The R2 = z/y − 1 column shows the percentage increase in concepts explicitly captured in Reqs V.4, i.e., including unknown uncertainties that were not explicitly specified in Reqs V.2. As can be seen from Table 2, in the case of AW we identified on average 143 % additional uncertainties for R1 (1.43) and 51 % additional uncertainties for R2 (0.51). For GS, these increases are 239 % (2.39) for R1 and 72 % (0.72) for R2. In total, for R1 we identified on average 191 % (1.91) additional uncertainties, whereas for R2 we identified on average 61.5 % (0.615) unknown uncertainties.
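A small sketch of the R1/R2 computation, applied to hypothetical instance counts (x, y, z below are not the actual values from Table 2):

```python
# R1 = y/x - 1 and R2 = z/y - 1, applied to hypothetical instance counts.
def increase(before: int, after: int) -> float:
    return after / before - 1

x, y, z = 10, 24, 36          # hypothetical counts for one U-Model concept
r1 = increase(x, y)           # 1.4  -> 140 % additional uncertainties in Reqs V.2
r2 = increase(y, z)           # 0.5  ->  50 % additional uncertainties in Reqs V.4
print(f"R1 = {r1:.2f}, R2 = {r2:.2f}")  # R1 = 1.40, R2 = 0.50
```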

As can also be seen in Table 2, exact data (e.g., probabilities) and risk information were not available at the time of the study. Such data will be collected using questionnaire-based surveys in the future to quantify the identified uncertainties. In addition, we did not observe any pattern for the occurrences of the identified uncertainties. Moreover, the Belief part of the conceptual model (e.g., the concepts Belief and BeliefAgent) was derived to help understand Uncertainty and is not relevant for this validation.

5 Related Work

Uncertainty is a term that has been used in various fields such as philosophy, physics, statistics and engineering to describe a state of limited knowledge in which it is impossible to exactly describe the existing state, a future outcome, or which of several possible outcomes will occur [18]. Various uncertainty models have been proposed in the literature from different perspectives for various domains. For instance, from an ethics perspective, uncertainties are classified into objective uncertainty and subjective uncertainty, both of which are further divided into subcategories to support decision-making [5]. In healthcare, uncertainty has often been defined as “the inability to determine the meaning of illness-related events” [6], and comprehensive domain-specific uncertainty models (e.g., [7]) have been proposed, as discussed in [8].

Uncertainty has been receiving more and more attention in recent years in both system and software engineering, especially for CPS, which are required to be increasingly context aware [22–24]. Moreover, CPS inherently involve tight interactions between various engineering disciplines, information technology, and computer science, which magnifies uncertainties. Therefore, adequate treatment of uncertainty becomes increasingly relevant for any non-trivial CPS. However, to the best of our knowledge, no comprehensive uncertainty conceptual model exists in the literature that focuses specifically on CPS design or on system/software engineering in general. In the remainder of this section, we discuss how the concepts uncovered during the literature review align with our proposed conceptual model.

The U-Model concepts BeliefAgent, BeliefStatement, and Belief of the Belief Model were adapted from [12]. The author of [12] postulates that uncertainty involves a statement whose truth is expected by a person, and therefore the truth might differ for different persons (defined as BeliefAgent in our model). However, as we discussed in Sect. 3.1, we assigned a broader meaning to BeliefAgent, which can be an individual, a community of individuals, or a technology. The U-Model concepts Environment and Locality were adapted from [12, 25–27], and we related them to the other U-Model concepts.

Our knowledge conceptual model aligns well with the model of knowledge reported in [28], where the authors looked at how to manage different types of known and unknown knowledge in order to distinguish what is known from what is not known. Knowledge is also classified there from a different perspective: something that everyone knows, tacit knowledge, conscious ignorance and meta-ignorance; the objective is to better understand ignorance. The author of [29] also studied unknowns and provided a taxonomy particularly focusing on ignorance (captured as KnownUnknown and UnknownUnknown in our conceptual model). In our conceptual model, we further elaborate these concepts and capture them as KnowledgeType, which is associated with Evidence and IndeterminacySource via EvidenceKnowledge and IndeterminacyKnowledge.

We classified uncertainties into various types including Content, Time and Occurrence. In [12], a chapter is dedicated to the discussion of content uncertainty and its measurement. The other two types of uncertainty are mentioned in [12, 14, 15], with examples but without clear definitions. We adopted the associated measurements in our conceptual model. Different types of sources of uncertainty for various purposes have been identified in the literature. In [30], the authors captured sources of uncertainty by considering risk and reliability analyses, based on which they classified uncertainty. The authors of [15, 31] identified sources of uncertainty in active systems. In [23, 32], the authors described the sources of uncertainty in software engineering in general. In the U-Model, we capture sources of uncertainty with the concepts IndeterminacySource and IndeterminacyNature.

Aleatory and Epistemic uncertainties are the two generic categories of uncertainty discussed in many works [30, 33]. According to [30], Aleatory uncertainty is due to the inherent randomness of phenomena, whereas Epistemic uncertainty is mainly due to a lack of knowledge. Both types are covered in the U-Model: Non-determinism (an IndeterminacyNature in the U-Model) represents the randomness underlying Aleatory uncertainty, and Epistemic uncertainty is covered by the MissingInfo IndeterminacyNature.

In [34], the author noted that uncertainty can occur in a random or systematic manner. In the Pattern part of the U-Model, we further elaborated the “systematic” concept by introducing Pattern and its subcategories. In the literature, uncertainty is often related to Risk. The acquisition project team of the US Air Force Electronic System Center (ESC) proposed a risk matrix for evaluating risks [19]; they introduced the concepts of Risk, impact, likelihood of occurrence, and rating of Risk, and also identified their relations. We reused these concepts and linked them with Uncertainty.

6 Conclusion

Cyber-Physical Systems (CPS) often consist of heterogeneous physical units (e.g., sensors, control modules) communicating via various networking equipment and interacting with applications and humans. Uncertainty is thus inherent in CPS, due to the tight interactions between hardware, software and humans, and the need for such systems to be increasingly context aware. To understand uncertainty in the context of CPS, a unified and comprehensive uncertainty conceptual model should be derived. The U-Model is such a conceptual model: it was developed in an EU project, based on a thorough review of existing uncertainty models from various domains (e.g., philosophy, healthcare), and refined and validated with two industrial CPS case studies from different domains. Based on the results of several stages of validation, we obtained the current version of the conceptual model in addition to refined uncertainty requirements. On average, we identified 61.5 % additional (previously unknown) uncertainties that were not explicitly specified in the uncertainty requirements collected from the two case studies.