1 Motivation

Embedded systems are computer systems that have a dedicated function within a larger system, often with real-time constraints [19]. Hence, their performance is vital. However, good performance is hard to achieve, because embedded systems feature increasingly heterogeneous, parallel and distributed architectures, and may comprise many product lines and different configurations.

Here, we consider service-oriented systems [10,11,12,13,14,15], a subclass of embedded systems, in which: (i) services are provided to the environment and accessed via so-called requests; (ii) each service request leads to exactly one response; (iii) service requests are functionally isolated from each other; but (iv) requests may affect each other’s performance by competing for the same resources in the service-oriented system.

We propose a performance evaluation framework that can be used to evaluate the performance of service-oriented systems based on real measurements for calibration (Contribution \(\mathcal {C}1\)). We realize this framework via iDSL, which comprises the domain-specific, high-level iDSL language (Contribution \(\mathcal {C}2\)) to model service-oriented systems and the iDSL toolchain (Contribution \(\mathcal {C}3\)) to evaluate the performance of these systems in a fully automatic fashion. This approach separates the description of the user concerns from the solution approach, in accordance with the Declarative Performance Engineering (DPE, [16]) approach.

2 The High-Level iDSL Language

The iDSL language [10,11,12,13,14,15] has been developed to model service-oriented systems. It is tailored to be used and understood by system designers and experts in the service-oriented systems domain, in line with \(\mathcal {C}2\). Figure 1 depicts the six high-level concepts of the iDSL language, as follows. A service system (Fig. 1-C) provides services to consumers in its environment. A consumer can send a request for a specific service at a certain time, after which the system responds with some delay. A service is implemented using a process (A), resources (B) and a mapping between them. A process decomposes high-level service requests into atomic tasks, each of which is assigned to a resource in the mapping. A resource performs one atomic task at a time, in a certain amount of time. When multiple services are invoked, their resource needs may overlap, causing contention and making performance analysis harder. A scenario (D) consists of a number of service requests, invoked over time, to observe specific performance behavior of the system. A study (E) evaluates a selection of systematically chosen scenarios to derive the system’s underlying characteristics. Finally, measures of interest (F) define which performance metrics to obtain, given a system in a scenario.

Fig. 1. The meta model of the iDSL language

Table 1. An example service-oriented system, modeled using the iDSL language

For illustration, Table 1 provides an example iDSL language instance of a medical imaging system [14, Sect. 3], as follows. The process contains a sequence of the processes “image_pre_processing”, “image_processing” and “image_post_processing”. In turn, process “image_processing” decomposes into “motion_compensation”, “noise_reduction” and “contrast”. Each atomic process has a load, i.e., an amount of work to perform. The resource contains a CPU with rate 2, i.e., it processes 2 loads per time unit, and a GPU with rate 5. The system combines the process and resource, and has a mapping that connects atomic tasks to resources. The scenario encompasses two streams of requests for the system’s only service. Both streams have fixed inter-arrival times of 400. One stream has an initial delay of 0; the initial delay of the other is determined by an offset parameter, a variable that is defined in the so-called design space of the study. Finally, the measure contains two measures of interest for the performance evaluation.
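To make the structure of this example concrete, the following Python sketch mirrors the process tree, resources and mapping of Table 1 and computes the end-to-end execution time of a single, uncontended request. It is only a sketch: the load values and the exact mapping are hypothetical (they are defined in Table 1, not in the text above), and the only modeled relation is that an atomic task of load L on a resource of rate R takes L/R time units.

# Minimal sketch (not actual iDSL): process, resources and mapping of the
# Table 1 example, with hypothetical load values.

RESOURCES = {"CPU": 2, "GPU": 5}          # rate = loads processed per time unit

# Atomic tasks of the imaging service, in execution order, with hypothetical loads.
ATOMIC_TASKS = [
    ("image_pre_processing",  40),
    ("motion_compensation",   100),
    ("noise_reduction",       80),
    ("contrast",              60),
    ("image_post_processing", 20),
]

# Hypothetical mapping of atomic tasks onto resources.
MAPPING = {
    "image_pre_processing":  "CPU",
    "motion_compensation":   "GPU",
    "noise_reduction":       "GPU",
    "contrast":              "GPU",
    "image_post_processing": "CPU",
}

def service_latency(tasks, mapping, resources):
    """End-to-end latency of one uncontended request: tasks run sequentially,
    and each atomic task takes load / rate time units on its resource."""
    return sum(load / resources[mapping[name]] for name, load in tasks)

print(service_latency(ATOMIC_TASKS, MAPPING, RESOURCES))  # 78.0 time units here

Contention between the two request streams of the scenario is deliberately not captured by this sketch; analyzing that interplay is what the simulation and model checking of Sect. 3 are for.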

3 The Integrated iDSL Toolchain

In this section, we discuss the iDSL toolchain which ranges from creating an iDSL language instance to generating performance artifacts, in line with \(\mathcal {C}3\).

Creating the performance model involves the joint modeling of a case study in the iDSL language by a modeler and an analyzer. The modeler determines how the system behaves and creates a system model, i.e., a process, resource and system (cf. Fig. 1-A, B and C). The analyzer determines how the system is used and creates a study, i.e., a scenario, study and measure (cf. Fig. 1-D, E and F).

During the modeling process, the Eclipse Integrated Development Environment [2] supports the user. This environment provides, among other things, syntax highlighting, code completion, and input validation, e.g., checking the code for invalid references, unused objects and ambiguous definitions. Warnings and information boxes are also displayed, e.g., when the design space is too large.

Under the hood, the iDSL grammar has been defined using the Xtext framework [18]. The toolchain functionality is programmed in the Xtend language [17].

In the following, we briefly describe the four main activities that constitute the performance analysis toolchain of iDSL.

Process Measurements. Measurements are performed on a real system and injected into the iDSL model for calibration [15, Sect. 3]. The text-processing tool AWK [1] is used to facilitate this.

1. Perform measurements on a real system [15, Sect. 3.1].

2. Create Gantt charts: group measurements into execution times [15, Sect. 3.2].

3. Generate Empirical Distribution Functions (EDFs) [15, Sect. 3.3].

4. Inject the EDFs of step 3 into the iDSL model via a model transformation: represent EDFs as probabilistic alternative (PALT, [4]) constructs, in line with \(\mathcal {C}1\). For illustration, we have drawn 100 numbers from a normal distribution (\(\mu =100\), \(\sigma =10\)) [7], representing measurements. Table 1g then shows the resulting EDF in iDSL. For instance, “2 atom load 91” means that the 100 drawn numbers contain the value 91 two times. A sketch of this step is given below.
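The following Python sketch illustrates step 4 for the stated example: it draws 100 values from a normal distribution with \(\mu =100\) and \(\sigma =10\), counts how often each (rounded) value occurs, and prints one line per distinct value in the spirit of the “2 atom load 91” entries of Table 1g. The seed and the rounding to integers are assumptions for illustration; the exact PALT syntax emitted by the toolchain is not reproduced here.

import random
from collections import Counter

# Step 4, sketched: build an EDF from 100 hypothetical measurements drawn
# from a normal distribution (mu=100, sigma=10), as in the running example.
random.seed(42)
samples = [round(random.gauss(100, 10)) for _ in range(100)]

edf = Counter(samples)  # maps each observed value to its number of occurrences

# One line per distinct value, in the spirit of Table 1g:
# e.g., "2 atom load 91" means the value 91 was drawn 2 times.
for value in sorted(edf):
    print(edf[value], "atom load", value)

In iDSL, such an EDF becomes a probabilistic alternative in which each observed value appears as a branch weighted by its count.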

Fig. 2. Four ways of representing latencies, generated from the iDSL code (Color figure online)

Model Simplification. iDSL determines whether the model can practically be evaluated [12, Sect. 4.3]. If not, the model is simplified via a transformation, in two steps, as follows; a sketch of both steps is given after the list.

1. Cluster similar measurements in each generated EDF [12, Sect. 4.1].

2. Increase the time unit of all time occurrences in the model [12, Sect. 4.2].
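The following Python sketch conveys the idea behind both steps without claiming to reproduce iDSL's actual algorithms: the values of an EDF are clustered into bins of a chosen width, which reduces the number of alternatives, and all time values are then rescaled to a coarser time unit. The bin width, rescaling factor and example counts are hypothetical.

from collections import Counter

def cluster_edf(edf, bin_width):
    """Step 1, sketched: replace each EDF value by the centre of its bin,
    merging similar measurements into a single alternative."""
    clustered = Counter()
    for value, count in edf.items():
        centre = (value // bin_width) * bin_width + bin_width // 2
        clustered[centre] += count
    return clustered

def rescale_times(edf, factor):
    """Step 2, sketched: increase the time unit by 'factor', i.e.,
    express all time values in the coarser unit."""
    rescaled = Counter()
    for value, count in edf.items():
        rescaled[round(value / factor)] += count
    return rescaled

# Hypothetical EDF (value -> count), clustered and rescaled.
edf = Counter({91: 2, 95: 5, 100: 10, 104: 6, 110: 3})
print(cluster_edf(edf, bin_width=10))   # Counter({105: 16, 95: 7, 115: 3})
print(rescale_times(edf, factor=10))    # times expressed in tens of time units

Both steps reduce the size of the underlying model, at the cost of some accuracy in the latency distributions.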

Model Evaluation. The evaluation of the model is delegated to the Modest toolset [4], as follows.

1. Create Modest models: transform the iDSL model into Modest [11, Sect. 4.3].

2. Evaluate the Modest models for performance using the Modest toolset:

   (a) Discrete-event simulation: yields average latencies [14, Sect. 4.2].

   (b) Timed Automata (TA) model checking: a binary search for absolute latency bounds [14, Sect. 4.2]; a sketch of this search is given after the list.

   (c) Probabilistic Timed Automata (PTA) model checking: an iterative algorithm in which cumulative latency probabilities are computed one at a time [13, Sect. 4].

   (d) Efficient PTA model checking: a carefully constructed combination of the aforementioned techniques [12, Sect. 6].

3. Parse results: parse the Modest results into high-level iDSL results.
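To illustrate step 2(b), the following Python sketch performs a binary search for an absolute upper bound on the latency. The oracle latency_always_below(model, bound) is a hypothetical stand-in for a TA model-checking query; the actual queries iDSL issues to the Modest toolset are not reproduced here.

def upper_latency_bound(model, latency_always_below, lo=0, hi=10_000, precision=1):
    """Binary search for the smallest bound (up to 'precision') below which
    every latency of 'model' stays. 'latency_always_below' stands in for a
    TA model-checking query and must be monotone in the bound."""
    assert latency_always_below(model, hi), "hi must already be a valid bound"
    while hi - lo > precision:
        mid = (lo + hi) // 2
        if latency_always_below(model, mid):
            hi = mid   # mid is a valid upper bound: continue searching below it
        else:
            lo = mid   # mid is too small: continue searching above it
    return hi

# Hypothetical usage with a fake oracle whose true worst-case latency is 478.
fake_oracle = lambda model, bound: bound >= 478
print(upper_latency_bound("demo_model", fake_oracle))   # prints 478

A symmetric search yields the absolute lower bound.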

Create Visualizations. This final activity turns the parsed results into intuitive graphs.

1. Latency breakdown chart (see Fig. 2a): displays the structure of a service, i.e., the underlying processes and resources, and its dynamics, i.e., process latencies and resource utilizations.

2. Multi-design latency Cumulative Distribution Function (CDF, see Fig. 2b): provides latency CDFs for multiple designs in one graph, to easily determine the effect of design decisions.

3. Latency bar chart (see Fig. 2c): shows the successive latencies of a service, which provides insight into jitter, i.e., the variation of latencies.

4. Latency CDF (see Fig. 2d): provides a lower bound (purple) and an upper bound (red) CDF, whose difference is the result of how nondeterminism is resolved.

Figures 2a–c are based on discrete-event simulations, and Fig. 2d on PTA model checking. Figure 2a is generated with GraphViz [3], the others with GNUplot [6].
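As an indication of what such a latency CDF amounts to, the following Python sketch computes an empirical CDF from a list of simulated latencies and writes it as plain value/probability pairs that can be plotted, e.g., with GNUplot. The latency values are hypothetical, and this is not the toolchain's actual plotting code.

# Sketch: turn (hypothetical) simulated latencies into an empirical CDF,
# written as value/probability pairs that a plotting tool can read.
latencies = [478, 452, 461, 478, 455, 470, 461, 452, 466, 478]

def empirical_cdf(values):
    """Return (value, P[latency <= value]) pairs in ascending order."""
    ordered = sorted(values)
    n = len(ordered)
    return [(v, (i + 1) / n) for i, v in enumerate(ordered)]

with open("latency_cdf.dat", "w") as out:
    for value, prob in empirical_cdf(latencies):
        out.write(f"{value} {prob}\n")

For Fig. 2d, two such curves are shown, one per resolution of the nondeterminism, which yields the lower and upper bound CDFs.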

4 Background

iDSL differs from tools such as the Modest toolset [4], Storm [8], UPPAAL [9] and PRISM [5]. Whereas the latter provide relatively generic, widely applicable languages, iDSL offers a domain-specific language (\(\mathcal {C}2\)) that supports measurement-based calibration (\(\mathcal {C}1\)), together with a fully automated toolchain (\(\mathcal {C}3\)).