1 Introduction

Self-adaptive systems enable the dynamic adaptation of software systems with respect to their internal state and the situations of their surrounding execution environment. Such information about a system is gathered using internal system monitors and sensor networks, respectively. Dynamic software adaptation takes place with the objective of providing a better Quality of Service (QoS), or the most appropriate service behavior, with respect to the current situations of the system's execution environment [29].

System monitors and sensors gather information about the various internal and external situations to which software systems could respectively adapt. Adapting software systems based on such information poses two requirements on a system's design. Systems must: (1) provide a continuous adaptation process, where components can be introduced into, or removed from, the system unannounced without stopping the system's execution, and (2) assure that the system's adaptations remain consistent with respect to the expected behavior of the system and other adaptations made to the system [43]. The former characteristic is important to ensure that systems effectively change their structure or behavior to provide a more appropriate service with respect to their changing environment. The latter characteristic is important to avoid errors or system crashes that may arise due to the (incorrect) interaction between adaptations during the execution of the system.

There are different techniques fostering the dynamic adaptation of software systems with respect to their execution environment, for example, dynamic software recomposition or program upgrades. Ensuring the consistency of these adaptations is, however, a more challenging task, requiring that the system behaves as expected in all possible situations in which it may be adapted —that is, that all possible combinations of adaptations are consistent. The number of possible adaptation combinations increases exponentially with the definition of every new adaptation. It is thus infeasible to design a model that takes all combinations into account beforehand.

Self-adaptive systems are extended with orchestration models to deal with the complexity of adaptations and their interaction. We focus the discussion of this chapter on the description of different approaches for self-adaptation that enable reasoning about and management of adaptations' properties. In particular, the approaches presented hereinafter are assessed with respect to the consistency assurances they provide for adaptations and their run-time interaction. Rather than presenting concrete self-adaptive development models for consistency assurance, we describe the general approaches that can be (and are) used to provide a concrete orchestration model. To structure the discussion and assessment of the approaches, we group them into four categories, as follows: (1) formal approaches, backed by a mathematical formalism in which consistency of the system can be verified; (2) architectural modeling approaches, which are based on abstract architectural descriptions (e.g., UML models) of the system structure to represent and manage adaptations; (3) rule-based approaches, which use production rule systems to describe adaptations and actively verify the validity of their rules; and (4) transition system approaches, which are defined in a state-transition fashion describing adaptations and managing the actions between them.

2 Background

Before delving into the analysis of self-adaptive systems and their consistency assurance models, we describe the properties that these systems should satisfy with respect to their expected execution environments. Self-adaptation is envisioned to provide the most benefits for large-scale, critical, time-constrained, or distributed systems. The three properties presented here are extracted taking these application domains into account [14]. An application domain of adaptive systems is that of Cyber-Physical Systems (CPSs), for example, smart-home systems, where different household appliances can adapt their behavior according to user preferences, interaction with other appliances, their internal state, or conditions from the surrounding environment. The smart-home system is used as a unifying example throughout our analysis of existing consistency models for self-adaptive systems.

2.1 Case Study: Smart-Home Systems

Smart-home systems have become a de facto example of highly dynamic systems exhibiting adaptations with respect to their surrounding environment [20, 30]. In smart-home systems, household appliances (e.g., stereo, television, windows, heater, lights, ...) are interconnected creating a (wireless) sensor network. Different services can be defined on top of the sensor network to control devices remotely or even autonomously, for example, remote light switching or automatic temperature adjustment. Household inhabitants have different preferences about the settings and usage of appliances. Hence, services in a smart-home system would benefit from dynamic customization, taking into account users' preferences and the surrounding execution environment of the service.

As a concrete example of adaptation and interaction we will use the case of two services deployed in the system: user detection, and room ambient services. The user detection service is used to identify users moving about a smart-home. User presence can be detected using infrared or RFID sensors; users' identities can be verified using their mobile devices (through the Wi-Fi connection), RFID tags (through RFID readers), or voice and facial recognition (respectively using noise sensors and cameras deployed around the house). The room ambient services use ambient sensors, such as temperature, humidity, air quality, noise, or photocells, to adjust room conditions to users' preferences. Conditions in a room change autonomously as users are identified in it.

Adaptation scenario: Having acquired a user's identity, appliances can adapt to provide more appropriate room ambient conditions, taking into account both users' preferences and the external situations of the environment. For example, room temperature can be set based on a user's particular comfort level, current weather conditions, and season of the year. The way in which the system satisfies these situations leads to many different possibilities of adaptation. For example, opening the windows to cool the room down if it is neither raining nor winter, or turning on the air-conditioning if the user would rather keep the windows closed. The user identification service is also responsible for contacting the authorities in case of emergencies. This process may vary according to the particular emergency. For example, if users cannot be identified properly, an alert is sent to the police and the home is locked to avoid burglary. In case of fire or flood, however, the fire department should be contacted instead, giving firefighters free access into the house. If identified users are detected to have a (medical) emergency (e.g., falls, or severe injuries), the appropriate medical team is contacted and granted access into the home.

In the presented scenario of the smart-home system example, it is possible to see that interaction is a central factor in dynamic adaptations. Moreover, adaptation interaction is inherent to services sharing a base application domain. Therefore, interactions must always be accounted for; otherwise the system may present inconsistent or erroneous behavior. In our example, inconsistencies due to interactions can occur whenever both the user detection and room ambient services are required to contact the authorities in customized ways. The system needs to ensure the contacted authorities are consistent with the detected emergency; for example, the firefighters should be called with the appropriate information to be prepared for the situation (i.e., fire or flood). Additionally, both services actuate over the same appliances (e.g., doors and windows) depending on the situation at hand. In case of burglary, the user detection service may intend to close the windows, while the room ambient service may need to keep them open to maintain the room's temperature.

Therefore, we assert that self-adaptive systems require a model that effectively expresses and manages interactions between adaptations in order to ensure the consistency of the system. To accomplish this, the model must manage all possible interactions between adaptations, even if these are not explicitly defined beforehand.

2.2 Consistency Assurance Requirements

Software models for the development of self-adaptive systems should satisfy the following three properties in order to assure the consistent interaction between adaptations.

  • M.1 Abstraction: Adaptations are the central entity to model self-adaptive systems. Hence, models managing these systems must provide the capability to represent (at run time) the system as a composition of its adaptations, their defined interactions (if any), and their states. The purpose of these representations is to support the run-time verification of system properties in order to ensure their consistency. The system representation at run time must be concise and express the different states of the system and the transitions (actions) between such states. An additional restriction is set for the abstraction property: abstractions to manage the consistency between adaptations should correspond, as much as possible, to the abstractions used to develop the system. This is to avoid inconsistencies between the verification and execution layers of the system.

  • M.2 Run-time Consistency: Interactions between adaptations may be unknown as adaptations are introduced to the running system. New or unknown adaptations can be harmful to the system as their behavior may conflict with other adaptations already existing in the running system, for example, by providing a contradictory behavior. This can be evidenced in the smart-home system with the introduction of the room ambient adaptation that opens a room’s windows. This new behavior contradicts that of the user detection service in case of burglary. Every time an adaptation is introduced to the running system, it should be guaranteed that this does not lead to inconsistent system states. At run time, adaptations should be verified to assure their interactions are consistent.

  • M.3 Offline Consistency: Run-time consistency verification can be too heavyweight, especially in large systems. To mitigate this, the model should also provide the capability to reason, as interactions between adaptations are defined, about system properties that may influence the system's overall run-time consistency. The analysis process must also be extensible to take into account interactions that may occur between adaptations at run time, even if they were not explicitly defined.

3 Approaches for Consistency Management

We now provide an overview of the different models that can be used to manage adaptation interaction in self-adaptive systems. As previously mentioned, the studied approaches are characterized into four main groups: formal, architectural modeling, rule-based, and transition system. Each of the approaches is presented in a structured way where we describe the generalities of the model, how it enables consistency management capabilities, and an evaluation of the three consistency assurance requirements. The discussion and evaluation of the approaches is driven via the smart-home system example introduced in Sect. 2.1. A comparative analysis of the approaches is presented in Sect. 4.

The four classification groups are taken from the commonalities of concrete run-time models used for the coordination of (self-adaptive) software systems. Therefore, to complement the description of the different approaches, we present a description of a concrete run-time model based on the approach managing the run-time adaptations. Note that by run-time models, here and in the remainder of the chapter, we mean an abstraction of the system’s state space and the different actions between states that can be accessed and maintained dynamically.

3.1 Formal Approaches

Formal approaches are the basis for the specification and analysis of software systems. These approaches are normally used at early stages of the development process, benefiting from the provided mathematical tools to prove or identify interesting system properties. Three common uses of formal approaches in software systems development are to: (1) disambiguate the concepts embedded in the system, (2) analyze, verify, or prove properties about the system, and (3) raise the level of abstraction of the system. While formal approaches are normally developed in parallel to an adaptive system (and therefore do not provide adaptation abstractions), we include them in our study due to their powerful reasoning capabilities and their tight integration with the development of self-adaptive systems. Here, we focus on the latter two categories, paying special attention to the approaches that can be used in the development of dynamic adaptations. Moreover, we highlight the approaches that focus on the verification of interaction consistency between adaptations. Formal approaches are normally developed as abstract reasoning engines that are used to specify a given (adaptive) system, or are coupled to one as a reasoning engine. Therefore, the approaches presented hereinafter may not be directly applicable to develop self-adaptive systems, but could be used together with the approaches described in later sections to develop the system and assure its consistency.

The formal approaches described here are computational logic, algebraic specification, and Satisfiability Modulo Theories, selected for our analysis because they have been used in the implementation of concrete models for the assurance of consistency of self-adaptive systems.

Computational Logic. Computational logic is used to reason about computations of software systems through logic formalisms. The logic programming paradigm embodies computational logic by putting forward mathematical logic statements as the base instructions to build a software system. Statements are used to express properties about the states in a system, while fact rules express transitions between these states. In order to analyze a system, its states are given an abstract representation appropriate for the intended purpose of the system, for example, an Abstract Syntax Tree (AST) representation or a control flow graph, used to reason about the consistency of the system's evolution by querying the system to identify behavioral or design errors [19]. Different logic systems can be used to further reason about the consistency of the system. For example, using the Linear Temporal Logic (LTL) or Computation Tree Logic (CTL) temporal modal logics [8], it is possible to reason about properties that should be satisfied over a period of time (e.g., point liveness) [68].

Logic systems can be used for the specification of dynamically adaptive systems by associating statements, transition rules, and queries with adaptable program elements. Statements express the conditions in which an adaptation is applicable. For example, in our case study, the statements :when (livingroom_sensor == "sunny") or window(livingRoom, sunny, open) express the conditions in which windows open in the room ambient service. These statements are applied to the windows if two conditions hold: the current room is the living room, and the weather sensor signals it is sunny. Transition rules verify statements' validity, and based on the result adaptations are composed into or withdrawn from the system. If the statements match the conditions specified in the transition rule, then the system transitions to that state, and the adaptation associated with the statement (open in our case) is incorporated into the system. A transition rule matching the window statement, written as a production rule, is shown at Line 5 of Snippet 3 in Sect. 3.3. Finally, a set of queries (or formulae) may be used to define the properties that should hold during the execution of the system; for example, the modal formula query \(\square \)(window == "closed" U (weather == sunny)) verifies that the windows are always closed, unless it is sunny.
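To make these mechanics concrete, the following minimal Python sketch (our own illustration; the fact and rule names are hypothetical and do not come from any cited system) mimics the statement/rule/query structure: facts describe the current situation, a transition rule composes the open adaptation when its conditions hold, and a query checks the invariant expressed by the modal formula above.

    # Minimal statement/rule/query sketch (hypothetical names, our illustration).
    facts = {("room", "livingRoom"), ("weather", "sunny"), ("window", "closed")}

    def rule_open_window(fs):
        # Transition rule: compose the 'open' adaptation when the current room
        # is the living room and the weather sensor signals it is sunny.
        if ("room", "livingRoom") in fs and ("weather", "sunny") in fs:
            return (fs - {("window", "closed")}) | {("window", "open")}
        return fs

    def query_window_invariant(fs):
        # Query standing in for [](window == "closed" U weather == sunny):
        # the window may only be open while it is sunny.
        return ("window", "open") not in fs or ("weather", "sunny") in fs

    facts = rule_open_window(facts)
    assert query_window_invariant(facts)  # consistent: open only while sunny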

Interaction inconsistencies between adaptations can be identified using logic programming, for example, if at least one of the defined queries is unsatisfied. To assure the consistency of the system two possibilities exist. Statically, all possible combinations between adaptations would have to be tested, running all applicable queries for each scenario. Dynamically, whenever an adaptation takes place, all queries associated with the adaptations currently composed in the system are verified. While the two possibilities are plausible, note that both require the complete verification of all defined queries. This can be too heavyweight for large systems with many queries. Moreover, an additional abstract model of the system is used to verify the system properties, where, a priori, the states of the system and transitions between such states are represented.

Consistency requirements satisfaction: Computational logic provides a model to reason about the system in which system states are defined as statements, transitions between states are defined as transition rules, and management of adaptations takes place via querying the system states. Consistent interactions between adaptations in self-adaptive systems defined using logic programming, as presented here, are given by sets of adaptations that can satisfy all queries associated with them, satisfying the run-time consistency requirement. Additionally, analysis of interactions and correctness of the logic system can be performed statically, satisfying the offline consistency requirement. This last requirement is further supported by the use of other logic systems (complemented by system abstractions, e.g., ASTs) to reason about properties of the system. Computational logic enables adaptation (via parametric adaptation) through the verification of statements and transition rules; however, in existing approaches adaptations cannot be explicitly defined as system elements during development. Hence, the abstraction requirement is not satisfied.

Application to self-adaptive systems: Logic approaches have been used mainly for the specification of adaptive systems, applied to the specification of sockets [68], or monitoring in embedded systems [64]. Computational logic approaches can also be used to manage self-adaptive systems if they are extended to express adaptations and interactions between program entities (similarly to the extensions required to reason over source code). Since the systems must be extended and modified, the verification and reasoning techniques defined with computational logic must be revisited. While logic programming languages cannot be used directly in the implementation of full-fledged self-adaptive systems, their ideas can serve as the basis to develop self-adaptive software systems, as is the case in rule-based approaches (cf. Sect. 3.3).

Algebraic Specification. Different mathematical formalisms can also be used to model and foster formal reasoning about software systems and their properties. The idea behind these approaches is to propose abstractions for a system, within the boundaries of a specific mathematical formalism, in such a way that they can be used as decision engines for different system properties. Here we present those formalisms suitable to reason about self-adaptive systems.

Process algebras [24, 34] are used to model concurrent processes, providing high-level abstractions for operations between processes such as parallel composition, communication, replication, restriction, and synchronization. The most prominent representative of process algebras is the \(\pi \)-calculus specification [48], in which it is possible to fully express the way in which different components communicate. In particular, it is possible to express the conditions under which two processes communicate (via restrictions). Restrictions can be used to limit how and when the main process of the system communicates. Similarly, restrictions can capture the context (i.e., situations in the surrounding environment) in which processes execute. Such contexts can be used to define the different adaptations of a system, such that composing different available processes will exhibit different behavior for different situations of their surrounding environment. In our case study's example, the restriction to open a room's window is that the weather is sunny; if this restriction, \(\nu l := \textsc {weather} == sunny\), is satisfied, then the window state carried through process p will change from its current value \(\bar{s}\) to OPEN. Otherwise, the adaptation is not composed into the system and the state remains S. The complete formula can be written as \( \{ \} \vdash \textsc {Window} \xrightarrow {p} \{ state \} \vdash S \, | \, (\nu l)(\bar{s} \rightarrow OPEN)\).
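As a rough Python analogue of this restriction (our own sketch, not π-calculus syntax), the guard plays the role of the restriction \(\nu l\), and the adaptation is composed only when the guard holds over the environment:

    # Restriction-guarded transition (our analogue of the formula above).
    def restricted_step(state, env, restriction, new_state):
        # The adaptation is composed only if the restriction on the
        # environment is satisfied; otherwise the state remains unchanged.
        return new_state if restriction(env) else state

    env = {"weather": "sunny"}
    nu_l = lambda e: e["weather"] == "sunny"   # the restriction (nu l)
    assert restricted_step("CLOSED", env, nu_l, "OPEN") == "OPEN"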

Coalgebras, and in particular coalgebraic specification [38], have been used to express the dynamic behavior of systems. Coalgebraic specifications are structured using notions of inheritance and aggregation from Object-Oriented Programming (OOP), which are used to model state-based systems (where the state is considered as a black box). Dynamic adaptations can be expressed using coalgebraic specification by describing the conditions in which particular (pieces of) behavior should take place. A coalgebraic specification consists of a list of operations on the elements of the systems (i.e., types of objects) and a list of assertions that such elements must satisfy.

These formalisms are used on their own as an abstract model to prove consistency or decidability properties about the systems they model. Process algebras enable developers to specify and restrict the computation flow of a system, which is used to reason about possible interactions between system entities. Inconsistencies in such interactions can be detected, for example, by identifying livelocks or deadlocks. Redundancies can be identified via behavioral equivalences (using bisimulation equivalence) of composed computation flows. Coalgebraic specifications allow us to reason about dynamic behavior of a system in terms of invariance and bisimilarity. Additionally, modal operators may be used within the coalgebraic specifications as invariants for reasoning about future states, and safety of progress/modal formulae among other system properties.

Consistency requirements satisfaction: Algebraic specification approaches are used to represent program properties, rather than to represent the actual program, and are consequently not used at run time. Therefore, it is not possible to have run-time verification of interactions between program entities, dissatisfying the run-time consistency requirement. Similar to computational logic approaches, algebraic specification does not provide the concept of adaptation. Rather, adaptations are created implicitly, for example, using dedicated communication channels and restrictions. This complicates the definition of the system, as one of its concepts is hidden in the specification. Hence, these approaches also fail to satisfy the abstraction requirement. Note, however, that it is possible to capture the context of the system. Using this property and the natural specification available to reason about different system properties, the offline consistency requirement is satisfied.

Application to self-adaptive systems: In order to build self-adaptive systems using algebraic specification approaches, these need to be complemented with additional programming abstractions to represent and execute states of programs and the transitions between such states. Concrete models complementing algebraic specifications include abstract state machines [9], algebraic Petri nets [25], or alternating automata [61]. With the help of such models it is possible to introduce explicit notions of adaptations and structure a system's behavior in accordance with its algebraic specification.

Satisfiability Modulo Theories. Satisfiability Modulo Theories (SMT) is an approach used to decide whether a set of first-order logic formulae describing a problem is satisfiable. Here we present two approaches used in self-adaptive systems development.

Model Checking: Model checking is an analysis and verification technique based on the specification and restriction of system properties by means of logic formulae. Model checking techniques for the verification of logic formulae include abstract interpretation, partial order reduction, or automated theorem proving. The most prominent model checking technique is based on the boolean SATisfiability problem (SAT), which is commonly used to decide whether a system satisfies a set of logic formulae [53].

Model checking, and in particular SAT solvers, can be used in the context of self-adaptive systems as a reasoning tool. Interactions between adaptations within a system are usually represented by means of transition systems. Such a representation can be used to create a model of the system using a specification language (e.g., Promela), which can then be translated to logic formulae and verified by a model checking engine (e.g., SPIN). Snippet 1 shows the specification of the events and adaptations of the smart-home system example using Promela. The transition rules describe the channel in which events are received (e.g., from sensors as channel ? WeatherSensorEvent). Processing of received information in the channel leads to the verification of the conditions in which an adaptation should be applied (e.g., remove the closeWindow adaptation to introduce the openWindow one whenever it is sunny). Snippet 2 shows the translation of Promela rules into LTL formulae (the first two lines in Snippet 2) in SPIN, where [] and U are, respectively, the global and until modal operators.

Snippets 1 and 2. Promela specification of the smart-home system events and adaptations, and its translation into LTL formulae (listings not reproduced).
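Since the Promela listings are not reproduced here, the following Python sketch (our reconstruction of the idea only, not the original Snippets 1 and 2) shows the essence of what the model checker does: it exhaustively explores the reachable states of a small transition system for the window adaptations and reports any state violating the invariant behind the LTL formula.

    # Explicit-state exploration sketch (our reconstruction of the idea).
    def transitions(state):
        # Weather events, possible in every state; each event atomically swaps
        # in the matching adaptation (closeWindow <-> openWindow).
        yield ("openWindow", "sunny")     # it becomes sunny
        yield ("closeWindow", "rainy")    # it starts raining

    def violates(state):
        # Invariant behind [](window == "closed" U weather == sunny):
        # the openWindow adaptation may only be active while it is sunny.
        window, weather = state
        return window == "openWindow" and weather != "sunny"

    frontier, seen = [("closeWindow", "rainy")], set()
    while frontier:                       # exhaustive reachable-state search
        s = frontier.pop()
        if s not in seen:
            seen.add(s)
            frontier.extend(transitions(s))

    print([s for s in seen if violates(s)])   # [] -> the formulae are satisfiable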

Inconsistencies between adaptations are identified if the model checker cannot assign values to all variables in the formulae such that these are satisfiable (i.e., their valuation is true).

Similar to algebraic specification, model checking is a verification approach, complementing the development and execution of self-adaptive systems. On its own, model checking does not provide means to explicitly represent adaptations; rather, it provides a way to verify the consistency of transition rules defined between them. The abstraction requirement is thus not satisfied. Model checking does provide support for run-time verification from the implementation of the system by means of abstraction and symbolic algorithms built into SAT solvers, which can even be used at run time [21, 57], satisfying the run-time consistency requirement. Finally, model checking techniques enable the analysis of system properties when combined with a self-adaptive development approach. As an example, the EventCJ language provides high-level abstractions for the definition of adaptations and their dynamic behavior, which has been used to develop routing, mobile, and web systems. Such abstractions are specified using Promela and verified at development time in SPIN to assure the validity of transition rules between adaptations. However useful, we note that using multiple specifications of the system (i.e., the state space, its specification model, and the program itself) can be error prone, as all representations must co-evolve over time. A unified model for the system and its specification is therefore desired. Given these properties, model checking satisfies the offline consistency requirement.

Constraint Satisfaction Problem: The Constraint Satisfaction Problem (CSP) is a technique to solve SMT problems by modeling the interactions between domain variables as restrictions on the system's functionality (i.e., constraints). CSPs are specified by a set of variables \({{\varvec{x}}} := (x_1, \ldots , x_n)\) with domains \({{\varvec{X}}} := (X_1, \ldots , X_n)\), over which constraints are defined as clauses of the form \((\varPhi _1({{\varvec{x}}}), \ldots , \varPhi _m({{\varvec{x}}})).\) Variables are associated with entities in the system (e.g., they represent system variables or objects), while clauses define relations between such variables. Similar to model checkers, CSP solvers find values for the variables in \({{\varvec{x}}}\) such that the valuation for all clauses \(\varPhi _i({{\varvec{x}}})\) is true. If not all constraints are satisfied, the solver responds with an “unsatisfiable” message, possibly accompanied by the sets of constraints that could not be satisfied [10]. Using constraint solvers, adaptations can be modeled by defining constraints that associate structural and behavioral system properties with particular situations of the system's surrounding environment. Adaptation management can be achieved using one of two approaches.

A first approach [59] considers the adaptation of system entities to maintain the satisfiability of the constraints defined in the system. Upon changes in the system's surrounding environment, system entities are adapted so that all constraints are satisfied. Consider the smart-home system in which constraints are extracted from the system's model, based on the current values of its variables, as shown in Fig. 1. The model defines weak constraints, a special type of constraint that is not required to be satisfied if the full set of constraints cannot be satisfied. Evaluation of the constraints in the right-hand side of Fig. 1 results in an unsatisfiable set containing constraints \(\{4., 5., 7.\}\), evidencing three inconsistencies in the system's execution. As a consequence, the system must be adapted to resolve as many inconsistencies as possible. A possible adaptation of the model to resolve such conflicts would be to change the state of the variable Window.open from false to true (i.e., opening the window). After applying this resolution, constraint 5. is the only constraint that remains in the unsatisfiable set. Note that this constraint cannot be resolved, as possible adaptations would raise inconsistencies with other interacting constraints (i.e., \(\{6., 7., 4.\}\)).

Fig. 1. Constraints associated with the smart-home system example (adapted from [59]).
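To illustrate this first approach, the following Python sketch (our own; the constraints loosely paraphrase those of Fig. 1 and do not use the sm@rt framework's actual API) evaluates a constraint set over the current model values and flips the adaptable window variable when doing so shrinks the unsatisfiable set.

    # Constraint-driven adaptation sketch (our illustration; names loosely
    # follow Fig. 1, not the sm@rt framework's actual API).
    model = {"window_open": False, "temperature": 27, "preferred_max": 24,
             "raining": False}

    constraints = [
        ("room not above preference",
         lambda m: m["temperature"] <= m["preferred_max"] or m["window_open"]),
        ("windows closed when raining",
         lambda m: not m["raining"] or not m["window_open"]),
    ]

    def unsat(m):
        # Names of the constraints whose valuation is false.
        return [name for name, c in constraints if not c(m)]

    print(unsat(model))                  # ['room not above preference']
    candidate = dict(model, window_open=True)
    if len(unsat(candidate)) < len(unsat(model)):
        model = candidate                # opening the window resolves the conflict
    print(unsat(model))                  # []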

A second approach consists of using constraints to drive the system adaptation. In this approach entities are dynamically composed as part of the system if and only if their associated constraints are satisfiable in the context of the system's surrounding environment. Here, system adaptations are driven by changes in the surrounding environment—that is, the introduction or withdrawal of data gathered by sensors. Such data can be explicitly represented by the definition of a context model [21]. Constraints are introduced to complement the context model, defining consistency relations between contexts. As new information is gathered, the constraints defined in the system are evaluated. The system will then execute the actions associated with those contexts that do not lead to inconsistencies—that is, sets of constraints with true valuations.

Constraint solvers are used to provide a decision layer for software systems by introducing constraints which define or restrict interactions between their entities. Constraint solvers do not provide an explicit abstraction to represent adaptations, but rather rely on the abstractions provided by the underlying system. Constraint solvers, thus, do not satisfy the abstraction requirement. Using constraint solvers, adaptations take place after all constraints defined in the system are evaluated upon changes in the system's surrounding environment. Such changes can be consequences of the constraint variables' values changing, or of constraints being added to or withdrawn from the system. The evaluation process (i.e., constraint solving) ensures that as many constraints as possible are satisfied in the system (by means of adaptation). For example, the sm@rt framework [59] defines constraints extracted from the system's structure and adapts its values to maximize the number of satisfied constraints, and consequently to assure that interactions between system entities are consistent. The framework has been used for the development of smart CPSs and cloud infrastructure management. The constraint solving process, thus, satisfies the run-time consistency requirement. We note that while it is possible to evaluate the constraints at earlier stages of the development process, such evaluation is equivalent to run-time constraint evaluation. Since constraint solvers do not provide additional mechanisms to reason about the system's consistency beforehand, the offline consistency requirement is not satisfied.

3.2 Architectural Modeling Approaches

Software systems’ models as understood within the Model-Driven Engineering (MDE) community are a series of artifacts used to describe and design (parts of) a system. A large body of research has been applied to ensure the consistency among different models describing software systems as they evolve over time [52, 60]. Modeling approaches were initially thought out in the setting of static modification of the system’s structure, where the system could be stopped and modified offline. Interest in the application of model modification at run time has given rise to the Models@runtime [49] community. Model modification at run time entails the interaction of three model component definitions of a system: (1) a model describing the main structure and adaptation points of the system, (2) a model specifying the system environment and variables relevant for the adaptation, and (3) the model’s adaptations —that is, specific artifacts modifying the structure of the system with respect to its environment. These model components are overseen by a reasoning component, responsible for ensuring that adaptations are composed with the main structure of the system whenever the conditions specified by the system environment model are satisfied.

The architectural modeling approaches described here, model transformations and feature-oriented domain analysis, are selected because they have been used in the implementation of concrete models for the assurance of consistency of self-adaptive systems.

Model Transformations. Model transformation consists of a description of the system using a modeling language (like UML), where system adaptations are described as independent models. Changes signaled from the system are enacted by transitioning from the current model to a new model containing the required adaptations. A common approach to transition between models is to define adaptations as graph transformation rules transitioning from one model to a model containing the adaptation [23, 56].

We follow the approach of Degrandsart et al. [23] as an example to drive the discussion. The surrounding environment is defined using context domains, which are multi-dimensional spaces representing the domains of the situations that are relevant in the surrounding execution environment of the system. Specific situations in the environment are realizations of context domains, giving concrete values for (some of) their dimensions. The impact of a context change from context c to context d is specified by a transformation rule \(r_{c,d}\) explicitly expressing the effect of context changes over model artifacts. In the smart-home system example, take the base application (i.e., without adaptations) as a context c, and the situation where a user is identified in a room as a context d. The effect on the class Window of changing from the former context to the latter is to add two new pieces of behavior to it: the functionality to open() and close() the windows, as shown at the top of Fig. 2. The example in Fig. 2 shows the adaptation of the model (for the aforementioned situation) matching the adaptation elements with the models to which they are applied. Given a model \(M_{c}\) of the system matching (through function m) adaptation component L, \(M_c\) is transformed into a new model \(M_d\) matching (through function \(m'\)) adaptation component R, by following the transformation rule \(r_{c,d}\). The rule is applied whenever there is a context change from c to d —that is, the values defined in context c changed to give rise to context d on a given context domain. Note that not all possible transformation rules have to be defined. The model allows the transitive composition of transformation rules by relying on the sequential composition of graph transformations available in its execution language AGG [63].
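A minimal Python analogue of the rule \(r_{c,d}\) (our own sketch; AGG's actual rule syntax is graphical and differs) shows the intent: on a context change from the base context to the user-detected context, the matched Window element is extended with the open() and close() behavior.

    # Analogue of rule r_{c,d} (our sketch, not AGG's graphical rule syntax).
    model_c = {"Window": {"operations": []}}

    def r_c_d(model, source, target):
        # Apply the transformation only for the matching context change.
        if (source, target) != ("base", "userDetected"):
            return model
        adapted = dict(model)
        adapted["Window"] = {"operations": model["Window"]["operations"]
                             + ["open()", "close()"]}
        return adapted    # the model M_d matching component R

    model_d = r_c_d(model_c, "base", "userDetected")
    print(model_d["Window"]["operations"])    # ['open()', 'close()']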

Graph transformation approaches can also be considered rule-based approaches, as they transition between system states via transition rules; however, we classify them separately here, as the focus of this section is on the transformation of graph artifacts. Rule-based systems are discussed in Sect. 3.3.

Fig. 2. Model transformation responding to context change, from no context active to a user-detected context, in the smart-home system example.

There are two situations in which the interaction between adaptations can conflict: whenever one transformation rule deletes a model element required by a second transformation rule as an input, or whenever a transformation rule changes attribute values of the adaptation vector such that a second transformation rule is no longer applicable. These conflicts are identified in the model by analyzing the coverability property of adaptations defined in the system —that is, analyzing whether all adaptations defined in the system can be reached from the initial state of the system by means of graph transformation rules. The analysis of conflicts between adaptations takes place during the design phase of the system through a pair-wise analysis. Conflicts have to be resolved manually by domain experts, and re-analyzed. Once all conflicts are resolved, the application and its adaptations can be deployed.

Requirements satisfaction: Model transformation systems define adaptations as components in transformation rules. Each component contains the adapted object and is associated with the transformations the model object must undergo. The abstraction requirement is satisfied in these models as adaptations are explicitly represented by system entities. Moreover, the state of the system can be represented by the sequential composition of graph transformations from the base model. These models also provide the possibility to manage interactions between adaptations at design time, by means of the conflict identification analysis, thus satisfying the offline consistency requirement. The run-time consistency requirement, however, is not satisfied in these models as conflicts are not verified at run time. If conflicts are overlooked during the design of the system, the introduction or composition of adaptations could still yield inconsistent system behavior due to conflicting adaptations.

Application to self-adaptive systems: Model transformations are well suited to software systems that undertake a modeling approach for their conception. The introduction of a model transformation approach can be naturally integrated with the development process of the system, as has been the case for smart visit applications [23]. Note that this model (and similar transformation approaches such as the Atlas Transformation Language (ATL) [41]) presents disadvantages discouraging its use for the development of self-adaptive systems. First, the system is not represented by a single model, but by a set of models (i.e., the basic representation, the activity model, and the adaptations of the system). Such a proliferation of models can become complex to manage as systems grow and more entities, and situations to which these can adapt, are added. Second, interaction between adaptation dimensions (or their values) cannot be expressed. As a matter of fact, overlap between adaptation dimensions is known to cause conflicts between adaptations —that is, whenever the value space of one dimension is contained in the value space of another dimension, this will cause conflicts during the reduction and transitive closure calculation of the system analysis.

Feature-Oriented Domain Analysis. Feature models emerged with the goal of expressing distinct functionality applicable to entities in a software system. The Feature-Oriented Programming (FOP) paradigm [54] arose to consider the implementation aspects of features. In the setting of FOP, a software system's artifacts are considered as aggregations of features, defining the main modules of the system and their functional behavior. The Feature-Oriented Domain Analysis (FODA) methodology [42] was proposed as a modeling technique to represent such modules and their sub-parts. Adaptations can be represented in feature models as possible functionality associated with a given artifact.

In the smart-home system example, the functionality associated with windows can be modeled as shown in Fig. 3. Software products are built by combining features described in the feature model, starting from the root until the leaf nodes. A possible product composition of the smart-home system is a house environment consisting of the Airco, Camera, MonitorUser, UnknownUser, Window, Close, Infrared, RFIDReader features. Note, however, that not all possible combinations of features would make a consistent product according to the expected behavior of the system. In the previous feature composition, if we add the Open feature associated with the Window appliance, the resulting product would be inconsistent with the requirement to lock down windows for unknown users.

Fig. 3. Design of the smart-home system example using features.

To avoid inconsistent product configurations, feature models can be enhanced with constraints defining interactions between features. Common interaction constraints define exclusion, inclusion, or requirement relations between features. These constraints can be expressed by extending the feature diagram with semantic elements expressing the relations, or using a textual definition [36]. Below we present two constraints defined textually. Constraint (C1) ensures that whenever the UnknownUser feature is selected in the product, the Open feature is not, and vice versa. Constraint (C2) ensures that if the KnownUser feature is selected, then the Open and Close features are also selected.
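As the textual constraint definitions themselves are not reproduced here, the following Python encoding (our own; real feature-model tooling uses dedicated constraint languages) renders C1 and C2 as implications over a product's feature set:

    # C1 and C2 as implications over a feature set (our encoding).
    def c1(product):    # UnknownUser and Open exclude each other
        return not ({"UnknownUser", "Open"} <= product)

    def c2(product):    # KnownUser requires both Open and Close
        return "KnownUser" not in product or {"Open", "Close"} <= product

    product = {"Airco", "Camera", "MonitorUser", "UnknownUser",
               "Window", "Close", "Infrared", "RFIDReader"}
    assert c1(product) and c2(product)     # the configuration is consistent
    assert not c1(product | {"Open"})      # adding Open violates C1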

Feature interaction reasoning techniques in FOP are used to identify systems that would yield inconsistencies due to the interactions of the features composing them. Traditionally, this has been treated as a static problem tackled via automated reasoning based on a variety of approaches, such as CSP, model checking, computational logics, or state machines [11]. Analyzing interaction between features becomes even more challenging when composing the features at run time, in the majority of cases being an undecidable problem. Feature interactions can be identified using on-line managers [55], and negotiation agents [2]. However, identification of feature interaction problems is only one piece of the puzzle. Inconsistencies caused by interactions also need to be resolved at run time. To overcome this problem, recent approaches use an upfront definition of policies between features that may interact at run time [46]. Whenever a feature is included in the program at run time, the policies are verified. In case of an interaction problem, the corrective process defined by the policy associated with the conflicting features is applied.

Requirements satisfaction: In self-adaptive systems, adaptations can be abstracted as features, defining the relations between adaptations following the FODA process. As adaptations have a direct abstraction as features, these approaches satisfy the abstraction requirement. Inconsistency identification and resolution is based on automated reasoning approaches and predefined policy rules. As a consequence, not all possible interactions may have a resolution strategy, causing inconsistencies to persist at run time due to the unannounced appearance of features during execution, dissatisfying the run-time consistency requirement. Identification of inconsistencies can be performed statically, satisfying the offline consistency requirement. Nonetheless, verification and application of conflict resolution policies have been reported as costly operations, making these analysis techniques less suitable for highly dynamic settings such as those proposed by self-adaptive systems.

Application to self-adaptive systems: The use of features is common in the design of Software Product Lines (SPLs), since they model the concept of multiple products having a common structure. At run time, these models are still used as a basis of dynamic SPLs [33], where variations of a running software system are generated using incremental move or regenerative composable component construction techniques. These techniques are mainly characterized by synthesis and modification steps, as sketched below. In the synthesis step, a new base model is generated using the new configuration that gathers applicable adaptations. In the modification step, the differences between the original base model and the generated model are calculated. These two steps are supported by an internal rule engine of the system, gathering adaptation policies and evaluating their feasibility from the space of possible adaptations. FODA has been used in adaptive systems in the domains of automated manufacturing [33], and critical systems [13].
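The synthesis and modification steps can be pictured with a short Python sketch (our own illustration, not a concrete dynamic SPL engine): synthesis builds the target feature configuration from the applicable adaptations, and modification computes the delta against the running configuration.

    # Synthesis and modification steps (our illustration).
    running = {"Window", "Close", "UnknownUser"}

    def synthesize(base, applicable):
        # Synthesis step: generate the new configuration from the base
        # model and the currently applicable adaptations.
        return base | applicable

    def modification(old, new):
        # Modification step: the delta to enact on the running system.
        return {"add": new - old, "remove": old - new}

    target = synthesize({"Window", "Close"}, {"KnownUser", "Open"})
    print(modification(running, target))
    # {'add': {'KnownUser', 'Open'}, 'remove': {'UnknownUser'}}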

3.3 Rule-Based Approaches

Rule-based approaches consist of (external) production rule systems coordinating the behavior of a software system. Rule execution engines are used to maintain a clear separation between data and business logic. The idea behind rule execution engines is to process complex rules efficiently as data is made available from internal (e.g., system monitors) or external (e.g., sensors) sources. Rule execution engines exhibit one of two implementations to process such data. (1) Forward-chaining engines [28] infer the validity of rules describing the system based on the available input data. Immediately after processing the data, the actions described by the evaluated rules are enacted. (2) Backward-chaining engines [66] infer the validity of rules based on the system's goals. A path of rules to be executed is built backwards from the goals until the rules satisfied by the input data are found. Once the path is complete, the actions described by these rules are executed.

Rule execution engines are used as coordination models providing a consistent view of software systems. The view of the system can be generated, for example, by means of a fact space [50] (using computational logic for its definition and processing). Data gathered about the system's state is modeled as facts, and transition rules describe conditions on such states. Each rule is associated with a set of actions to be taken whenever it becomes valid. At run time, the visible actions of the system correspond to those actions for which their associated rules are valid, while actions associated with invalid or inconsistent rules are oblivious to the system execution. Snippet 3 shows the description of two facts (Lines 2 and 3) and two rules (at Line 5 and Line 11) for the smart-home system example using Crime [50], a language for developing context-aware applications using a fact space model. The :monitorUser rule is used to monitor the status of a user if this is known. This rule is applicable if and only if there is a fact in the current fact space registering a user with their id for which the status is known. Note that fact spaces allow us to restrict the domain in which production rules are applicable. For example, the :changeWindowState production rule is only applicable when the user fact is available for a specific room fact space (the rule matches with the window fact as given by its parameters). Rules applicable for the complete system are conditioned in the public fact space.

Snippet 3. Facts and production rules of the smart-home system example in Crime (listing not reproduced).
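As the Crime listing is not reproduced here, the following Python sketch (our own; Crime's actual syntax and matching semantics differ) captures the mechanics described above: facts live in named fact spaces, and a rule's actions are active only while a fact in its space matches the rule's pattern.

    # Forward-chaining fact-space sketch (our illustration, not Crime syntax).
    fact_spaces = {
        "public":     [("user", "u1", "known")],
        "livingRoom": [("window", "w1", "closed"), ("user", "u1", "known")],
    }

    rules = [
        # (name, fact space, pattern, action)
        ("monitorUser", "public",
         lambda f: f[0] == "user" and f[2] == "known",
         lambda f: "monitor(%s)" % f[1]),
        ("changeWindowState", "livingRoom",
         lambda f: f[0] == "user" and f[2] == "known",
         lambda f: "openWindow(w1)"),
    ]

    def active_actions(spaces, rules):
        # A rule's actions are composed only while a matching fact exists;
        # unmatched rules contribute no actions to the system.
        return [(name, action(fact))
                for name, space, pattern, action in rules
                for fact in spaces.get(space, []) if pattern(fact)]

    print(active_actions(fact_spaces, rules))
    # [('monitorUser', 'monitor(u1)'), ('changeWindowState', 'openWindow(w1)')]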

Rule-based approaches provide support for the development of self-adaptive systems by introducing adaptation entities as facts and the conditions evaluating such facts as rules. Adaptations are managed at run time by means of fact matching as specified by the rule engine. Whenever a fact in the fact space matches a rule, the adaptation is considered available and the actions described by the rule are composed into the system —that is, they are executed. Unmatched facts represent unavailable adaptations and the actions described by their rules are withdrawn from the fact space —that is, they are not executed.

Requirements satisfaction: Rule-based approaches can represent adaptations as facts; however, there is no explicit representation of interactions between adaptations. Interactions are represented implicitly as a consequence of rule matching. Since there is no explicit representation of adaptations or their interaction, we conclude rule-based approaches do not satisfy the abstraction requirement. In contrast, the natural rule matching algorithm lends itself to effective run-time management of adaptations, ensuring that new actions become available in the system only when their associated rules are matched by consistent facts (adaptations); provided conflict resolution algorithms are set in place, this partially satisfies the run-time consistency requirement. Even if rule-based approaches can manage the run-time consistency of the system by ensuring that production rule actions only take place whenever their conditions are satisfied, it is still up to the programmer to ensure that the provided facts are correct and there are no accidental interactions, starvation, or cyclic delegations between production rules. Rule-based engines do not provide support to ensure this and therefore do not satisfy the offline consistency requirement.

Application to self-adaptive systems: Rule-based approaches are used to process and solve complex problems based on available data. Similar to CSP approaches, in order to enable self-adaptation, rule-based approaches must be extended to account for adaptations. Once the notion of adaptation has been defined and software systems can be modularly composed in different ways according to their surrounding environment (i.e., available data), rule-based approaches can be used to ensure the system is composed consistently. For example, forward-chaining engines can be extended, as was done in Crime, to generate context-aware systems [50], where actions executed in rules are made available based on the validity of facts. Such an approach can be used to implement pervasive applications. Backward-chaining engines can be used to enable behavior adaptation at the service composition level for distributed systems, as explored in the Semantic Service Overlay Network (SSON) [22]. In SSON, services' tasks (components of a service) can be adapted at run time if task providers disappear from the network. Adaptation of a task takes place by replacing the missing task with a set of tasks providing equivalent behavior.

3.4 Transition System Approaches

Transition systems are used to represent a system’s states and the actions between them. Using such representation, software systems are said to transition from one state to another via predefined actions. Transition systems are normally represented as graph-like structures, where nodes denote the states of the system and labeled edges denote actions to transition between states. Transition systems are used as a modeling technique for software systems providing a structural representation and a formal basis to reason about their properties.

The transition system approaches described here, automata and labeled transition systems, statecharts, dataflow graphs, and Petri nets, are selected because they have been used in the implementation of concrete models for the assurance of consistency of self-adaptive systems.

Automata and Labeled Transition Systems. Automata [35] are graph-based models used to describe systems' behavior based on their possible states and the set of actions to be taken at each state. Automata are normally used to verify system properties, such as program termination. Automata are preferred as modeling techniques for software systems because they present a model that is easy to operate on, for example, to merge, intersect, and compose (in parallel) software systems. Similar to automata, Labeled Transition Systems (LTS) describe systems as sets of states and transition functions between such states. Since LTS are normally represented by means of automata, we consider them jointly in the following.

Different automaton models have been used to represent and manage self-adaptive systems.

  1. Using states to represent the execution flow of the system, in which each state can execute a certain set of actions, and the transitions between states follow the control flow of the application under development [7, 37].

  2. Extending states with behavioral constraints, as in S[B]-Systems [65], where each possible state is expressed and the transitions between states are explicitly specified.

Two characteristics are common to these models. First, adaptations are not explicitly represented in the model; rather, the different states of the system are defined, and adaptations are implicit in the actions the system can take in each state or the transitions between states. Second, the “current state” is given by a single node in the graph —that is, only one of the states of an automaton is “active” at a time. This requires explicitly representing possible state combinations in the system, leading to state explosion as the system grows.

Nonetheless, automata could be used directly to represent system adaptations, as shown for the smart-home system in Fig. 4, where state transitions represent the changes detected in the surrounding environment. Note that since adaptations occur unannounced and unordered, all states should be reachable from any other state, making the automaton cluttered and difficult to manage. On the contrary, inconsistent states can be explicitly represented as non-final states.

Fig. 4. Automaton model for the smart-home system example.

A possible inconsistency in the smart-home system example can exist whenever a user's preferences are set to open the windows, and there is an unknown user, requiring the windows to be closed. In such a situation the state UserUnknownUserWindow would be reached in Fig. 4. On the one hand, this approach has the drawback that all inconsistencies must be known beforehand by developers, which is infeasible in self-adaptive systems; moreover, it significantly increases the size of the automaton. On the other hand, if no inconsistencies are explicitly modeled, then verifying system properties such as termination becomes trivial (the system always terminates as all its states are final).
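A small Python sketch (our own; the Idle state and event names are hypothetical, while UserUnknownUserWindow is taken from the text) illustrates this encoding: adaptations are states, detected environment changes are transitions, and the conflicting combination is a non-final state.

    # Automaton sketch (our illustration; state names echo Fig. 4).
    transitions = {
        ("Idle", "userKnown"): "UserWindow",             # open the windows
        ("Idle", "userUnknown"): "UnknownUserWindow",    # close the windows
        ("UserWindow", "userUnknown"): "UserUnknownUserWindow",  # conflict
    }
    final_states = {"Idle", "UserWindow", "UnknownUserWindow"}

    def run(events, state="Idle"):
        for event in events:
            state = transitions.get((state, event), state)
        return state

    state = run(["userKnown", "userUnknown"])
    print(state, state in final_states)
    # UserUnknownUserWindow False -> a reachable inconsistent (non-final) state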

Requirements satisfaction: Using automata or LTS, adaptations are modeled as states of the system in which behavior specific to that state, and only that state, is available. Such a model allows the explicit definition of adaptations, satisfying the abstraction requirement. Additionally, automata models have the advantage of providing a firsthand view of the system. At run time, this can be used to validate that the system remains in a consistent state. Furthermore, recent approaches complement automata models for self-adaptation to verify that new adaptations are consistent with respect to the initial specification of the system [37]. Bringing together these two characteristics, this approach satisfies the run-time consistency requirement. Finally, automata are commonly used due to their decision characteristics, to compose automata and reason about their properties (e.g., program termination), hence satisfying the offline consistency requirement.

Application to self-adaptive systems: As previously mentioned, recent approaches, such as ActiveFORMS [37], foster the use of automata to build self-adaptive systems, applied to CPSs. In this approach the system uses automata as its run-time model, in charge of analyzing and executing adaptations according to the system's surrounding environment. The system further presents a goal management model that allows the modification of the run-time model itself —that is, the (re)definition/deletion of adaptations and their interactions at run time. The goals of the system are specified as boolean rules and are verified dynamically whenever the automata run-time model needs to be modified. Such a verification process takes place to ensure that modifications of the system's goals remain consistent with respect to its formal specification. Other approaches have used automata to model adaptive ecology systems [65], and camera detection systems for UAVs [7].

Statecharts. Statecharts provide a specification of the internal behavior of system components, as well as of the interactions between components, in the setting of reactive and event-based systems [31]. Statecharts are an extension of state transition diagrams (i.e., LTS), proposed to tackle the state explosion problem by modularizing components in a hierarchical way. Additionally, the model enables system designers to easily express general properties such as clustering, orthogonality, or refinement.

Figure 5 shows a statecharts diagram of the smart-home system example, where states are represented as rounded rectangles, events are represented as labeled arcs between states, and conditionals are represented as circles. In Fig. 5, clustering and hierarchy between states are depicted by the containment relation: inner states sit at a deeper level of the hierarchy than outer states. Orthogonality is depicted between the Detected user and Emergency states by means of the dotted line separating the two states. Whenever the containing state is entered, the two sub-states operate independently of each other, applying the User adaptation state in the first case and the Monitoring state in the second. Refinement is the property of statecharts to capture the different components of the system. For example, Fig. 5 represents the user detection component (the refinement) of a smart-home system.

Fig. 5.

Statecharts model for the user detection service.

The visual representation provided by statecharts doubles as a formal specification of the system. Such a specification can be used to analyze different properties of the system [44].
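As a toy illustration of orthogonality, the following sketch broadcasts events to two independent regions, each reacting through its own transition table. The region and state names echo Fig. 5; the events and transition tables are assumptions made for illustration:

# Sketch of a statechart with two orthogonal regions, as in Fig. 5.
# Entering the container activates both regions independently.
class Region:
    def __init__(self, name, initial):
        self.name, self.state = name, initial

    def handle(self, event, table):
        # Each region reacts to events using its own transition table.
        self.state = table.get((self.state, event), self.state)

detected_user = Region("Detected user", initial="User adaptation")
emergency = Region("Emergency", initial="Monitoring")

user_table = {("User adaptation", "user_left"): "Idle"}
emergency_table = {("Monitoring", "smoke_detected"): "Alarm"}

# An event is broadcast to both orthogonal regions; each reacts (or not).
for event in ["smoke_detected", "user_left"]:
    detected_user.handle(event, user_table)
    emergency.handle(event, emergency_table)

print(detected_user.state, "/", emergency.state)  # Idle / Alarm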

Requirements satisfaction: Since statecharts are a structural extension of automata, they also satisfy the abstraction and offline consistency requirements. Abstraction is satisfied by defining the states of the system as adaptations in which specific behavior is available. Offline consistency is satisfied by the mechanisms to reason about system properties beforehand. This mechanism can be supported by means of model checking techniques, taking advantage of the explicit representation of the system behavior and of state changes as reactions to events. The run-time consistency requirement is, however, not satisfied. In order to satisfy this requirement, statecharts would have to be extended with support to introduce new states, for example, via event triggering. Introduction of new states also requires a run-time verification process.

Dataflow Graphs. Dataflow programming [40] represents programs as directed graphs where data flows between nodes along the graph’s arcs. The graph’s nodes represent program entities (e.g., modules, objects, or instructions, according to the level of abstraction used), and directed arcs represent data dependencies between program entities. Dataflow graphs are used both to specify the computation (i.e., they are the system) and to manage the system.

Figure 6 shows the user detection example of the smart-home system expressed as a dataflow graph. For the example, we use dataflow graphs as defined in AmbientTalk/R\(^V\) [47]. The management model of dataflow systems consists of operators processing input data (e.g., node RFIDSensor processing the movementDetected event in Fig. 6) and producing output data (e.g., node HouseEmpty producing the empty boolean value in Fig. 6). Dependencies between nodes are explicitly represented by edges in the graph. Data always flows from the outputs generated by a node to the input of another node. A node in a dataflow graph is said to be enabled to fire if there are values available for all of its inputs. Node firing (execution) can take place at one of two moments: once all of a node’s inputs have received a (new) value, or whenever one of its input values changes. These two approaches are referred to as synchronous dataflow and asynchronous dataflow, respectively. Arcs leaving nodes like RFIDSensor are called forking arcs; whenever data reaches a forking arc, it is duplicated and sent to each of the subsequent nodes. Dataflow graphs enable the parallel execution of processes. Tasks that can be computed in parallel are depicted by the H\(_i\) dotted lines in Fig. 6. All operations in the same line fire in parallel (as long as they can fire). That is, nodes UserIdentification and MonitorUser can be computed in parallel if RFIDSensor produces output data.

Fig. 6.

Dataflow graph representation for user detection in the smart-home system.
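The firing discipline described above can be sketched as follows. The node names follow Fig. 6, while the firing logic is a simplified single-input asynchronous variant assumed for illustration (real dataflow engines are considerably more involved):

# Simplified asynchronous dataflow sketch: a node fires as soon as one of
# its inputs changes; a forking arc duplicates data to several successors.
class Node:
    def __init__(self, name, fn, successors=()):
        self.name, self.fn, self.successors = name, fn, successors

    def receive(self, value):
        out = self.fn(value)
        print(f"{self.name} fired -> {out}")
        for succ in self.successors:   # forking arc: duplicate to all
            succ.receive(out)

user_identification = Node("UserIdentification", lambda v: f"id({v})")
monitor_user = Node("MonitorUser", lambda v: f"log({v})")
rfid_sensor = Node("RFIDSensor", lambda v: v,
                   successors=(user_identification, monitor_user))

# A movementDetected event enables RFIDSensor, whose output flows in
# parallel (conceptually) to UserIdentification and MonitorUser.
rfid_sensor.receive("movementDetected")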

A system’s progress is unequivocally dictated by its dataflow graph. Hence, dataflow graphs successfully manage software systems. If adaptations are modeled as system entities (i.e., nodes in the graph), they become available as soon as the appropriate input is provided to their nodes. Adaptations are then managed using the dataflow as control. This means that if new sources of data appear in the surrounding environment, new adaptations could be added dynamically. However, dataflow graphs do not provide a direct way to analyze properties of the system. The system and its adaptations will be consistent if and only if the definition of the graph is. To verify the consistency of the graph, dataflow graphs must be extended with rule-based systems or model checkers.

Requirements satisfaction: Dataflow graphs satisfy the abstraction requirement by defining adaptations as nodes in the graph. This provides a level of abstraction similar to that of the other transition systems in this section. Dataflow graphs have been successfully used as visual programming languages, where the dataflow graph is not only a representation of the system but indeed its execution model, to which new nodes can be added at run time. This provides an adaptive control structure for the system’s data flow. Such a visual run-time model of the system can be used as a debugger [32] in dataflow programming languages like NL. However, it is not possible to verify that existing or added adaptations are consistent with each other, either at run time or beforehand. As a consequence, dataflow graphs do not satisfy the run-time or offline consistency requirements.

Application to self-adaptive systems: The computation style offered by dataflow graphs is of particular use in situations requiring data processing, as is the case in reactive programming. The data processing control structure used in reactive programming is extended with adaptation-specific constructs in the Flute [4] language, enabling users to define self-adaptive systems that respond to continuous input from their surrounding environment; this is used, for example, in adaptive personalized assistants. In Flute, developers do not directly interact with the dataflow graph but rather are exposed to language abstractions that create, manage, and modify the graph behind the scenes. However, such languages do not directly provide verification mechanisms for consistency. In order to enable dynamic verifications, the language would need to be extended to account for consistency.

Petri Nets. Petri nets [51] are directed graph structures consisting of places, transitions, and tokens. Places represent the different states of the system, transitions represent actions between states, and tokens represent data flowing in the net. Tokens reside in places, indicating that the place is currently active. At a given moment in time, the set of tokens in all places is defined as a marking. Tokens (data) flow from one place to another if the transition between them fires. In the basic Petri net model, transitions are enabled to fire whenever all of their input places are marked.
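A minimal encoding of these execution semantics, assuming a plain place/transition net with illustrative place and transition names, could look as follows:

# Basic Petri net sketch: a transition is enabled when all of its input
# places hold a token; firing moves tokens from inputs to outputs.
marking = {"UserDetected": 1, "WindowClosed": 1, "WindowOpen": 0}

transitions = {
    "openWindow": {"inputs": ["UserDetected", "WindowClosed"],
                   "outputs": ["WindowOpen"]},
}

def enabled(t):
    return all(marking[p] > 0 for p in transitions[t]["inputs"])

def fire(t):
    assert enabled(t), f"{t} is not enabled"
    for p in transitions[t]["inputs"]:
        marking[p] -= 1
    for p in transitions[t]["outputs"]:
        marking[p] += 1

if enabled("openWindow"):
    fire("openWindow")
print(marking)  # {'UserDetected': 0, 'WindowClosed': 0, 'WindowOpen': 1}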

In Petri nets, similarly to automata, adaptations can be defined as states in the net associated with specific behavior that becomes visible only when the state is active. Moreover, interaction between adaptations can be modeled by defining transitions between the states representing adaptations, so that the active state of an adaptation can influence that of other adaptations defined in the system. Figure 7 shows a Petri net model for the smart-home system example expressing the interaction between the KnownUser (K), Window (W), Security (S), and UnknownUser (U) adaptations, following the specification of self-adaptive systems defined by the context Petri nets (CoPNs) [18] run-time model.

Fig. 7.

Smart-home system adaptations’ interaction design using CoPN.

Petri nets provide a firsthand view of the model at run time, which can be used to manage the consistency of the system as adaptations become active and inactive. That is, the conditions under which an adaptation can become active or inactive can be constrained by the state of other adaptations. Such constraints are encoded as combinations of transition firings. This means constraints are automatically verified and enforced at run time as part of the execution semantics of the Petri net (i.e., the flow of tokens). Additionally, Petri nets are formal specifications of software systems; as such, they also enable the analysis of different system properties like liveness, deadlocks, or reachability. Using the analysis techniques already provided for Petri nets, it is possible to reason about the consistency of self-adaptive systems beforehand. For example, inconsistencies in the interaction between adaptations could be identified whenever there are deadlocks, it is no longer possible to activate a given adaptation, or it is possible to activate adaptations that should not be active in the presence of other adaptations [16].
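For instance, a brute-force exploration of the reachable markings can flag markings in which mutually exclusive adaptations are active simultaneously. The sketch below is self-contained; the K/W/S/U places echo Fig. 7, but the transition structure is an assumption made for illustration, and practical tools use far more efficient analyses:

from collections import deque

# Sketch: explore all reachable markings of a small net and flag those in
# which two adaptations that should exclude each other are both active.
TRANSITIONS = {
    "activateWindow":   ({"K": 1}, {"W": 1}),   # (consumes, produces)
    "activateSecurity": ({"U": 1}, {"S": 1}),
}

def fire(marking, consumes, produces):
    m = dict(marking)
    for p, n in consumes.items():
        if m.get(p, 0) < n:
            return None                 # transition not enabled
        m[p] -= n
    for p, n in produces.items():
        m[p] = m.get(p, 0) + n
    return m

def reachable(initial):
    seen, queue = set(), deque([initial])
    while queue:
        m = queue.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        for consumes, produces in TRANSITIONS.values():
            nxt = fire(m, consumes, produces)
            if nxt is not None:
                queue.append(nxt)
    return [dict(k) for k in seen]

for m in reachable({"K": 1, "U": 1}):
    if m.get("W", 0) > 0 and m.get("S", 0) > 0:
        print("inconsistent marking reachable:", m)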

Requirements satisfaction: Petri nets combine the modeling power of automata and dataflow graphs, allowing developers to define adaptations as particular states (places) of the system, satisfying the abstraction requirement. Note that in Petri nets there is no state explosion problem, as their inherent support for concurrency allows different states to be active simultaneously by means of the Petri net marking. Interactions between adaptations are defined through the firing of actions (transitions), which follow the operational semantics of Petri nets. Additionally, new adaptations can be introduced to the system by means of Petri net composition. Since the consistency of adaptations is verified at run time as transitions fire, the model satisfies the run-time consistency requirement. Finally, the analyses provided by the Petri net formalism allow us to reason about different system properties, satisfying the offline consistency requirement.

Application to self-adaptive systems: Petri nets have been used to model the dynamics of software systems. Their applicability to self-adaptive systems is realized, for example, in the CoPNs model [18]. This model defines a Petri net structure as the run-time model of self-adaptive systems, and it uses a programming language layer for the interaction with the model —that is, developers do not directly manipulate the Petri net, but rather interact with it through a language layer. In particular, CoPNs are designed to deal with interactions between adaptations, allowing their definition as part of the run-time model of the system. Furthermore, CoPNs define management and analysis modules for the system, in which it is possible to verify the consistent activation of adaptations at run time [17], and analyze the coherence and consistency of the system beforehand [16].

4 Analysis and Challenges of Consistency Management

Section 3 outlines different models and techniques used in the development of self-adaptive systems to assure the run-time consistency of adaptations as they interact with each other. This section analyzes the approaches with respect to the requirements M.1 through M.3 put forward in Sect. 2.2 and discusses avenues for future work in the development of concrete models to manage self-adaptive systems’ consistency.

4.1 Analysis of Consistency Assurance Models

Whenever a system’s behavior changes over time, there are no guarantees that the new behavior complies with the expected behavior of the system, possibly leading to system errors or inconsistencies. It is therefore of the utmost importance to manage the adaptation process in a way that enables the system to ensure new behavior interacts correctly with that already deployed in the system. The description of the approaches presented in Sect. 3 is motivated by this observation, focusing on a development perspective for the verification and analysis of the system’s consistency with respect to its changing environment. Table 1 presents a summary of the described approaches and their satisfaction of the requirements for consistent interaction between adaptations (shown as black boxes in the table). In the following, we describe the properties of each approach that lead to the satisfaction or dissatisfaction of each requirement.

Table 1. Satisfaction of consistency interaction properties.

 

M.1 Abstraction:

Managing self-adaptive systems’ adaptations at run time requires modeling adaptations explicitly, such that adaptations’ state and their interactions are well defined in the system. Such an explicit representation of adaptations provides a structured model to dynamically manage changes in the system’s states in response to its surrounding environment. Additionally, it is possible to model adaptations’ interactions such that their state changes as a consequence of changes to other adaptations too. Formal and rule-based approaches do not satisfy this abstraction requirement, while architectural modeling and transition system approaches do.

Formal approaches are normally used for the specification of software systems by means of formulae. Using such a specification, it is then possible to reason about different system properties (beforehand). In the context of self-adaptive systems, formal approaches have been used as external models to prove properties about the system, rather than to model its behavior or drive its development or execution. Even though formal approaches (e.g., modal logics [68]) have been extended to take adaptation concepts into account, they still play an auxiliary role in system development, that is, they are used for specification and analysis purposes. As a consequence, a system’s consistency is limited by its specification. In particular, expressing adaptation interactions as formulae requires modeling every possible interaction by a formula, which can become cumbersome or even unfeasible for large-scale systems; if no automated support to generate such formulae is provided, the assurances about the system’s consistency are limited. These reasons lead us to conclude that, however useful for supporting self-adaptive systems development, formal approaches do not provide the necessary abstractions to develop consistent self-adaptive systems.

Architectural modeling approaches provide a structured view of the system at design time. In this view, the main entities of the system have an explicit representation in a graphical notation that can be mapped into the system’s development environment. Similarly to other system entities, adaptations can be defined as independent model entities, while interactions between adaptations are explicitly defined by means of rules expressing how other system entities (and adaptations) are affected by a new adaptation. Such an explicit representation of adaptations in a system model can be used for the development of the system; however, a transformation between the model and its implementation (e.g., using ATL or FOP) would be required. Additionally, not all rule systems defining interactions are suitable for these approaches. The rule systems used are required to have a transitive closure property, so that not all interactions need to be defined explicitly, as is the case, for example, in FODA approaches.

Rule-based approaches (partially) abstract the state of the system through facts, over which it is possible to reason by means of rules. Facts are statements about the state of variables in the system, while rules are used to take dedicated actions depending on the validity of facts. Rule systems do not provide the means to define adaptations explicitly; rather, adaptations need to be regarded as particular facts that are only valid under certain situations. Additionally, the definition of interactions between adaptations is not explicitly supported in rule-based systems; rather, the evaluation of rules generates implicit dependencies between them.
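A minimal forward-chaining sketch illustrates this style: adaptations end up encoded as facts whose validity is established by rules, while interactions between rules remain implicit. All fact and rule definitions here are hypothetical:

# Minimal forward-chaining sketch: rules fire when their guard holds over
# the current facts; adaptations are just facts derived by rules.
facts = {"user_known": True, "windows_open": False}

rules = [
    # (guard over facts, action updating facts); note the implicit
    # dependency: both rules write the same fact, windows_open.
    (lambda f: f.get("user_known"),     lambda f: f.update(windows_open=True)),
    (lambda f: not f.get("user_known"), lambda f: f.update(windows_open=False)),
]

def evaluate(facts, rules):
    changed = True
    while changed:                      # re-evaluate until a fixed point
        changed = False
        for guard, action in rules:
            before = dict(facts)
            if guard(facts):
                action(facts)
            if facts != before:
                changed = True
    return facts

print(evaluate(facts, rules))  # {'user_known': True, 'windows_open': True}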

Transition system approaches effectively abstract the states and actions of the system by representing them as graph-like models, where graph nodes represent adaptations and interactions between such adaptations are represented as edges between nodes. In these approaches, the state of an adaptation depends on whether the graph node representing the adaptation is the current node of execution. The advantage of using transition system approaches is that they provide a firsthand view of the system. Moreover, these approaches can be mapped directly to the execution environment of the system; thus, there is no need for additional transformations of the transition system specification, as is required for model-based architectural approaches.

We recognize that transition system approaches are best suited to model self-adaptive systems. In transition system approaches, adaptations and their interactions can be naturally abstracted as states of the system and actions between those states, respectively. Such abstractions provide a firsthand view of the system that can be effectively used during its execution to orchestrate operations (e.g., composition, inclusion, removal) over adaptations, as well as to verify them.

M.2 Run-time Consistency:

The run-time consistency requirement refers to the ability of software systems to assure that newly introduced adaptations do not break the original system behavior, or that already provided by other adaptations. To satisfy this requirement, an adaptation management approach must allow the run-time introduction of adaptations. Moreover, if the approach provides a means to reason about system properties, such a mechanism must be triggered whenever adaptations are introduced to or withdrawn from the run-time environment. Ensuring the run-time consistency of adaptations is principally supported by formal and rule-based approaches: in the former case through dynamic verification of the system’s specification, and in the latter case through the execution of rules defining the control flow of the system. Note that two of the transition system approaches, namely automata and Petri nets, also satisfy this requirement. This is due to the execution semantics accompanying these models, which provide a dynamic behavior similar to that specified by rule-based systems.

Formal approaches express system specifications in terms of formulae. Such formulae are evaluated using the techniques developed in each approach in order to prove the satisfiability of the system’s properties. Formulae may specify features, interactions, or constraints in a system. The general technique used in formal approaches (except for algebraic specifications, which do not satisfy the run-time consistency requirement) to guarantee the system’s consistency as adaptations are introduced to and withdrawn from the system (i.e., by adding or removing formulae) is to verify the satisfiability of the formulae set whenever it changes. If the set of formulae is large, the verification process may be time-consuming; therefore, formulae verification is normally performed statically. In recent years, researchers have explored techniques to speed up the verification process in order to use verification techniques at run time. Some of the formalisms in which run-time techniques can be used include computational logic and model checking [68], and CSP [59].
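The general technique can be pictured as re-checking the satisfiability of the constraint set whenever it changes, as in the following brute-force sketch (real approaches rely on dedicated solvers; the variables and constraints are illustrative):

from itertools import product

# Brute-force satisfiability sketch: each constraint is a predicate over a
# boolean assignment; adding an adaptation means adding constraints and
# re-checking the whole set.
variables = ["windows_open", "user_known"]
constraints = [
    lambda a: a["user_known"] or not a["windows_open"],  # unknown user => closed
]

def satisfiable(constraints):
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            return True
    return False

print(satisfiable(constraints))                      # True
constraints.append(lambda a: a["windows_open"])      # new adaptation's demand
constraints.append(lambda a: not a["user_known"])    # unknown user present
print(satisfiable(constraints))                      # False: inconsistent set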

Architectural modeling approaches are defined statically, where all adaptations are defined during the system’s design. The run-time management featured in these approaches is limited to the composition of the adaptations so defined. If new adaptations are introduced at run time, these are not verified. Any inconsistency that may have been introduced through interactions with new adaptations, or that may have been overlooked during the system’s design, will persist at run time. For example, whenever the domains of adaptations’ attribute values have a containment relation, errors might appear at run time [23].

Rule-based approaches can publish facts and production rules at any moment in time. Once a production rule is published into the fact space, it is automatically evaluated together with all existing production rules. Changes in the behavior of the system (i.e., adaptations) take effect immediately according to the validity of the new facts associated with them. However, the consistency of the system relies on programmers to ensure that facts are correct and that there are no accidental interactions between facts.

Transition system approaches, similarly to architectural modeling approaches, are used to model the states and actions of a system. As such, these approaches can only provide verification of system properties if the approach’s underlying specification allows them to do so, as is the case for automata and Petri nets. Automata composition (i.e., union, product, intersection) may preserve the properties of its components; for example, the union of regular automata is a regular automaton. In the Petri net case, properties such as liveness or deadlock freedom can be verified at run time, for example, as new places and transitions are added. Similar verification mechanisms could be defined for statecharts or dataflow graphs, although these are currently not implemented in the concrete models surveyed. Unlike architectural approaches, automata and Petri nets are defined with an execution semantics that enables the verification of the semantics of the system at run time. That is, if adaptations’ interactions are encoded within the model, it can be assured that the desired interactions are maintained (e.g., CoPN [16]).

Formal approaches are the ideal candidates to enable the run-time verification of self-adaptive systems. Formal approaches encode properties about a system (e.g., consistency) as part of their specification, allowing the system to re-evaluate such a specification whenever it changes.

M.3 Offline Consistency:

The offline consistency requirement refers to the ability to reason about the system and its properties at early stages of the development cycle. Upfront analysis of systems’ properties is motivated by two observations from the conclusions of the run-time consistency requirement. First, run-time verification of all system properties may be too time-consuming for medium- or large-scale systems. Second, the run-time verification process relies on the composition properties of the model. For example, it should be the case that, given two consistent adaptations, their composition is also consistent. The support that upfront analysis can provide is to reduce the time of verification at run time and to assure that the initial adaptations are consistent. Most of the surveyed approaches provide a means to reason about systems’ properties beforehand, in some cases by complementing the approach with one of the other approaches described.

Formal approaches are by definition reasoning frameworks. The ability to analyze system properties is inherent to these approaches, be it through formal verification or theorem proving. Properties, like consistency, must be encoded into the formalism, taking into account the relations describing the interaction between adaptations. Once these have been defined, the satisfiability of such properties can be analyzed. It is, however, unclear in some cases how to capture the run-time properties of the system by means of a formal model, in particular if the model’s main focus is not on the run-time behavior of the system. Such a problem can be solved by complementing the formal approach with an approach providing a run-time representation of the system, as transition system approaches do. Common properties verified by formal approaches include system liveness and deadlocks, program invariants, and progress and safety of future states.

Architectural modeling approaches are introduced with the sole purpose of providing the means to reason about system properties using dedicated analyses. These analyses use the abstract model of the system to identify inconsistencies between defined system entities (e.g., adaptations) and their possible composition or interaction at run time. Analyses in architectural modeling approaches are normally supported in combination with formal approaches, for example, model checking or constraint solving techniques, to analyze the system beforehand.

Rule-based approaches are not concerned with the analysis of system properties beforehand, so no offline consistency support is provided for any kind of system property. These approaches are intended as reactive approaches in which system rules and facts are evaluated as they are needed, or as they become available. If inconsistencies arise from the definition or evaluation of rules, it is up to the programmer to identify and solve them.

Transition system approaches provide support for structural system properties, such as composition or hierarchization. Even though they provide only limited support for the analysis of run-time properties of the system, the supported analyses could be used beforehand to reason about properties such as reachability, timeliness, liveness, or coherence between adaptations. As is the case with architectural modeling approaches, the analysis of transition system approaches is normally supported by formal approaches, like model checking or CSP.

The majority of the approaches presented here provide means to analyze system properties beforehand; thus, many of them would be suitable to assure the consistency of self-adaptive systems. However, as noted, some of the approaches rely on analysis techniques provided by formal approaches. As a consequence, formal approaches can be put forward to enable the consistency analysis of self-adaptive systems.

 

Three things are apparent from Table 1. A first observation is that all approaches satisfy either the run-time or the offline consistency requirement. This means that, to some extent, existing approaches for self-adaptation are concerned with managing interactions consistently. A second observation is that formal approaches are naturally suited for the verification and analysis (respectively satisfying the run-time and offline consistency requirements) of software systems. Nonetheless, the importance of an explicit representation of adaptations (the abstraction requirement) stands out as a vital characteristic for the development of self-adaptive systems. Finally, with the exception of automata and Petri nets, none of the approaches satisfies all three requirements. As a consequence, in order to satisfy all three requirements it is likely that two or more of the described approaches need to be combined in a concrete model, as is the case for some of the concrete models discussed in Sect. 3.

In particular, we note there is a close relation between transition system approaches and formal approaches, where the former are used for the explicit representation and run-time management of adaptations, and the latter provide the underlying specification of the system. Such a specification is used to assure consistency properties of the system through dedicated analyses at development time, and through run-time verification during execution. There is also a strong connection between architectural approaches and rule-based approaches, where the former provide a concrete representation of the system and its adaptations, as well as analysis mechanisms to reason about system properties (e.g., consistency between model artifacts). Note that analyses in architectural approaches are usually supported by formal approaches like logic programming [19] or description logics [67]. Rule-based approaches can be used to manage and actuate adaptations at run time, as is the case, for example, of AGG in [23]. Finally, approaches like discrete control theory [45] also live at the intersection of the different approaches discussed here. Discrete control systems can be used to deal with inconsistencies [3] in autonomous systems or to assure the correctness of adaptive component systems [1]. Discrete control systems are realized by a set of automata structures specifying the main system and its properties. Graph transition systems are used to combine the different automata and to enable or disable specific behavior in the system. Consistency or correctness properties of the system are verified using a logic system describing the system and its general goal, much in the way model-checking approaches work.

It is important to recognize that although satisfiability of the consistency assurance requirements can result from the combination of transition system approaches with formal approaches, such a combination is not a necessary condition. Concrete models, such as ActivFORMS [37], context Petri nets [17], and reactive control techniques [5], integrate verification and analysis mechanisms into the development environment in order to manage the interaction between adaptations at run time, and to reason about system properties beforehand, respectively.

Table 2 presents a summary highlighting the trade-offs of using the different approaches presented in this chapter for assuring the consistency of adaptations’ interactions as part of the development process of self-adaptive systems.

Table 2. Summary of consistency assurance approaches for self-adaptive software systems.

4.2 Challenges in Self-adaptation Consistency

Current efforts in consistency assurance for self-adaptive systems foster the management of adaptations as they are introduced to and withdrawn from a running system. The concrete consistency assurance models presented in Sect. 3 are a starting point to ensure their run-time consistency. However, as it is possible to see, these models assume a central entity managing the consistency of the entire system, or assume no time restrictions on the adaptation process. Such assumptions restrict the applicability of the models, since the target domain of self-adaptive systems is large-scale, heterogeneous, and distributed environments, as found, for example, in time-constrained and critical systems.

In order to fully embrace the vision of self-adaptive systems, we recognize that there are challenges that still need to be addressed to assure the consistency of self-adaptive systems in large-scale, heterogeneous, and distributed environments. We discuss these challenges and possible directions to address them.

Efficiency. The adaptation process proposed in self-adaptive systems is based on the MAPE loop (i.e., Monitoring, Analysis, Planning, and Execution). This entails that an active monitoring entity observes changes in the system’s surrounding environment; once changes are sensed, the analysis and planning phases follow to assess the applicability of adaptations; finally, an execution phase enacts the system adaptations. The approaches described in this chapter are used during the analysis and planning phases to manage the consistent interaction between adaptations. However, as systems grow, the adaptation process can become too time-consuming, lagging the execution of the system as a result.
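The loop can be sketched schematically as follows; the sensor, analysis, planning, and execution components are illustrative stand-ins rather than parts of any concrete model:

import random

# Schematic MAPE loop sketch: Monitor, Analyze, Plan, Execute over a shared
# knowledge base. All components here are illustrative stand-ins.
class TemperatureSensor:
    name = "temperature"
    def read(self):
        return random.choice([18, 22, 30])

def analyze(readings, knowledge):
    # A change is worth adapting to if the reading differs from what we knew.
    return {k: v for k, v in readings.items() if knowledge.get(k) != v}

def plan(changes):
    return ["activate_cooling" if changes.get("temperature", 0) > 25
            else "deactivate_cooling"]

def execute(adaptations):
    print("executing:", adaptations)

knowledge = {}
for _ in range(3):                                                # a few MAPE cycles
    readings = {s.name: s.read() for s in [TemperatureSensor()]}  # Monitor
    changes = analyze(readings, knowledge)                        # Analyze
    if changes:
        execute(plan(changes))                                    # Plan + Execute
    knowledge.update(readings)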

The concrete implementations of the different approaches studied in this chapter present small and medium application examples in which execution lag can be disregarded, as it has no visible impact on the usability of the system. The impact remains to be assessed for large systems, in which the analysis and planning phases may take several seconds or even minutes, thus hampering the system’s usability. Responsiveness is crucial in critical or time-constrained systems, in which delaying execution may have repercussions on the execution of the whole system. This observation follows the requirements defined for consistency assurance in the development of self-adaptive systems put forward in Sect. 2.2. To prove responsive, consistency assurance models must perform an initial system analysis offline (Requirement M.3), and can continue verifying the system at run time (Requirement M.2) [12].

Incremental verification techniques [58] are introduced to reduce analysis and verification execution time, following the two-step model of the run-time and offline consistency requirements. The purpose of incremental verification is to reduce, as much as possible, the time it takes to verify system properties upon run-time changes to the system’s surrounding environment. As a consequence, these models allow a system specification to be provided with incomplete information. An initial analysis is performed with the information available offline, and verification continues dynamically as missing information is filled in at run time. Incremental verification techniques can be implemented by reusing state machine approaches as the structured specification of the system, where the variable parts of the system are verified at run time by reusing information previously obtained from a model checker [39], or where probabilities are associated with unknown information beforehand and such parts of the system are verified as information becomes available (using a probabilistic Markov chain) [6, 27]. Such techniques for incremental verification are used, respectively, to efficiently verify the structure of event- and component-based systems as it changes dynamically [39], and to assure the reliability of software systems as they adapt to their surrounding environment [26].

Concurrency. During the evaluation of consistency management models for self-adaptive systems, we considered adaptations to be processed sequentially. As changes are sensed in the environment, they are processed atomically and one by one, that is, the processing of one adaptation will not start until the processing of the previous one has finished. This method proved successful for the models at hand, in that it is possible to manage and verify incoming adaptations at run time. However, such a model of computation restricts the applicability and expressiveness of self-adaptive systems, which cannot then cope with large-scale and distributed environments, where changes to the environment take place simultaneously.

In self-adaptive systems, adaptations take place in response to continuous updates from a sensor network gathering information about the system’s surrounding environment (context) and internal state (self). In a simplified model, each adaptation can be thought of as being associated with a single sensor. Depending on the information provided by sensors, adaptations may be composed into or withdrawn from the system. Sensor information updates take place unannounced, and thus there is no single synchronization point at which to process gathered information. Moreover, two or more sensors could signal information updates simultaneously. Managing simultaneous adaptation requests sequentially presents two main problems. (1) The unpredictability of the system increases. The order in which adaptation requests are processed may differ between runs of the system, and this ordering can affect the overall decision to compose the adaptations. Different subsets of adaptations may be composed in the system for a given set of simultaneous requests, raising the unpredictability of the system’s observed behavior. (2) The verification of incoming adaptations becomes order-dependent. As an adaptation is introduced, its composition can be affected by other adaptations being composed or withdrawn at the same time, with which it may interact; the outcome of the consistency verification may thus change depending on the moment at which the evaluation is performed. In addition to these problems, we note that the concurrent management and evaluation of adaptations becomes ever more important in the context of distributed systems, as the likelihood of multiple sensors signaling updates simultaneously increases. Based on these observations, we put forward the need to evaluate adaptation requests concurrently in the concrete models for consistency assurance.
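Problem (1) can be observed in the following toy sketch, in which two simultaneous requests, processed sequentially in either order, leave the system with different adaptation sets (the conflict relation is an assumption made for illustration):

# Toy illustration of order-dependence: adaptations are composed one by one,
# and a request is rejected if it conflicts with an already active adaptation.
CONFLICTS = {("Window", "Security"), ("Security", "Window")}

def compose_sequentially(requests):
    active = []
    for adaptation in requests:
        if all((adaptation, a) not in CONFLICTS for a in active):
            active.append(adaptation)
    return active

simultaneous = ["Window", "Security"]                # signaled at the same time
print(compose_sequentially(simultaneous))            # ['Window']
print(compose_sequentially(reversed(simultaneous)))  # ['Security']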

In order to embrace the concurrent evaluation and verification of adaptations at run time, consistency assurance models have to be extended with a concurrent programming model. Although some of the surveyed approaches are already equipped with the notion of concurrency (e.g., process algebras, dataflow graphs, and Petri nets) and could therefore be used to deal with it, the concrete models currently used in self-adaptive systems do not use this property for the processing, evaluation, or verification of adaptations. The impact of extending the models to deal with the concurrent evaluation of adaptations is currently unknown and needs to be assessed.

Distribution. Self-adaptive systems are not always contained in a single machine. Target application domains for self-adaptive systems are usually large-scale systems composed of multiple distributed components that interact with each other. In such systems, some components may be unknown to others, for example due to network failures or component mobility. The approaches surveyed here assume an adaptation model with complete knowledge of the system. However, such a model for managing adaptations’ consistency does not hold in the context of distributed systems.

To deal with distribution, adaptive systems need to account for distributed features at their core (i.e., discovery, communication, failures). Current self-adaptive system models [15, 62] provide such features to enable self-adaptation in a distributed system. Nonetheless, to manage the consistency between adaptations, extensions to consistency management approaches are required. In a distributed setting, where the totality of the system may be unknown, each distributed component is required to manage and assure the consistency of its own adaptations. This implies having an independent management and verification module for each component. Furthermore, the consistency management modules need to be extended to collaborate with other management modules that may be discovered at run time.

Incremental management and verification techniques are commonly used in distributed systems to reason about their properties. Similar techniques could also be used to manage the consistency between adaptations in self-adaptive systems. As in the incremental verification processes envisioned to tackle the efficiency challenge, each distributed component is in charge of ensuring its own consistency. As other components are discovered, the consistency managers need to synchronize (and re-evaluate) to ensure that the complete system (i.e., the discovered components) is consistent, possibly reusing the results previously obtained for each independent component. As mentioned previously, working with distributed systems further evidences the requirement to deal with concurrency, as adaptations can be requested simultaneously. However, note that as each component processes its own adaptation requests, dealing with simultaneous requests is resolved at the inter-component synchronization level. Known concurrency synchronization techniques (e.g., barriers) could be applied to deal with simultaneous adaptation requests in distributed components, as sketched below.
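In the following sketch of that last point, a barrier forces components to align their adaptation rounds before a joint consistency check; Python threads stand in for distributed components, and all names are illustrative:

import threading

# Sketch: each "component" processes its own adaptation requests, then waits
# at a barrier so that a joint consistency check sees a stable global state.
N_COMPONENTS = 3
barrier = threading.Barrier(N_COMPONENTS)
lock = threading.Lock()
global_state = {}

def component(name, request):
    with lock:
        global_state[name] = request     # local adaptation processing
    barrier.wait()                       # synchronize adaptation rounds
    if name == "component-0":            # one component runs the joint check
        print("joint consistency check over:", global_state)

threads = [threading.Thread(target=component,
                            args=(f"component-{i}", f"adapt-{i}"))
           for i in range(N_COMPONENTS)]
for t in threads: t.start()
for t in threads: t.join()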

Support. From the surveyed approaches, it is possible to observe that the concrete models used to assure consistency between adaptations provide the support needed to deal with the consistency requirements. Nonetheless, for the most part, these models are detached from the other parts of self-adaptive system construction (e.g., language and tool support). The specification, analysis, verification, and management of consistency in self-adaptive systems should not represent an overhead for the developers of the system, as expressed by Requirement M.1 in Sect. 2.2. Developers should only be required to work with a comprehensive set of tools allowing them to build, test, and verify their system.

We recognize that to raise the acceptance of self-adaptive systems, a more involved development process is required. This encompasses both a seamless integration between all phases of self-adaptive system construction (e.g., specification, development, verification, analysis, testing) and better tool support for each of these phases. Existing tools, for example CoPNs, already present a step forward in this direction. Such tools need to be complemented by adding support for verifying further system properties.

5 Conclusion

Self-adaptive systems are introduced to enable and manage the dynamic adaptation of software systems’ structure and behavior. Adaptations are applied to the system in response to information gathered from a sensor network. The goal behind dynamic adaptations is to provide more appropriate system behavior with respect to the situations of its surrounding execution environment and to improve the system’s Quality of Service (QoS). However, dynamic modification of a system’s behavior does not come without complications. As behavior changes at run time, the actual behavior composed in the system may present errors due to, among other causes, inconsistencies in the interactions between its adaptations.

This chapter provides an overview of different general approaches that can be used to assure adaptations’ consistent interaction in self-adaptive systems. We classified existing approaches into four categories, formal, architectural modeling, rule-based, and transition system approaches, according to the level of abstraction at which each approach is developed. To compare the approaches with one another, we put forward three requirements for the management and assurance of consistency, that is, abstraction, run-time consistency, and offline consistency. These requirements were extracted from prototypical self-adaptive system examples, and driven by a concrete case of smart-home systems.

The observations of our comparison revealed that most of the approaches provide support for one or two of the consistency requirements, but nearly half of them fail to support the abstraction requirement. As a matter of fact, only two of the approaches satisfy all three requirements. Notwithstanding, it is also possible to identify complementary features between the four categories, where requirements dissatisfied by one approach could be satisfied by combining it with another. Formal approaches provide the strongest foundations to assure consistency between adaptations, as they offer analysis and verification techniques to reason about system properties. However, such approaches are only used as additional artifacts in the development of self-adaptive systems. Architectural modeling approaches are ideal for designing the system, as they provide specific abstractions for the definition of adaptations. Using the specification of the system design, it is possible to reason about its properties offline. Nonetheless, there are no means to reason about the system at run time, since only those entities preconceived during the design of the system are part of the reasoning process. In contrast, rule-based approaches are designed to manage the dynamics of software systems. However, the only abstraction offered by these approaches is that of facts, which can be verified at run time for satisfiability. Moreover, in rule-based systems, it is up to the developer to assure that the system is consistent, as there are no integrated offline reasoning mechanisms. Finally, transition system approaches are ideal for abstracting adaptations, providing a firsthand view of the system’s structure. Depending on the specific model used, the dynamics of the system can be specified such that consistency is verified as the system progresses. Moreover, some of these approaches, supported by formal approaches (e.g., model checking), can use the system abstraction as a specification to reason about system properties offline. However, not all concrete models provide such functionality.

As a result of the trade-offs presented by all approaches, we foresee that successful consistency assurance approaches will combine transition systems and formal approaches, since these offer an explicit representation of the system, and the means to verify and reason about different system properties, respectively. The combination of these two approaches can be observed in the concrete models that do support all three requirements, namely automata and Petri nets [16, 26, 37, 39]. In addition to the analysis of consistency assurance models, we also identified the relevance of the models with respect to their application domains.

Self-adaptive systems are envisioned to operate in large-scale distributed environments, interconnecting multiple subsystems and gathering information from multiple sources. Such inherent characteristics of self-adaptive systems revealed four challenges that need to be addressed to successfully assure consistency between adaptations with respect to their target systems. Each of the challenges, efficiency, concurrency, distribution, and support, constitutes an interesting avenue to explore, in which a combination of transition systems and formal approaches (e.g., respectively using concurrent processing models and incremental verification) can further complement a concrete model for consistency assurance of self-adaptive systems.