
1 Motivations and Related Work

Artificial intelligence systems often adopt a sensory infrastructure characterized by a high degree of device heterogeneity, both in terms of energy consumption profiles and of the types of measurements collected. One of the application scenarios in which this feature is most evident is Ambient Intelligence (AmI), characterized by the adoption of pervasive and ubiquitous sensors for monitoring relevant ambient features. Data fusion, by enabling high-level context information to be obtained from raw sensory data, may offer a solution to the need to cope with such heterogeneity and to manage data that may be only partially correlated with the phenomenon of interest [1, 2]. Considerable attention has been devoted to context information such as user presence in monitored areas [3–5] or current user activities [6, 7].

When dealing with multi-sensor data fusion, one of the most relevant issues is the management of the non-negligible level of uncertainty and noise in the data gathered by low-cost devices. To deal with this problem, several papers in the literature have suggested adopting a probabilistic approach, such as Naive Bayes classifiers, Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs), as described in [7], which compares the performance of these three approaches on different types of datasets, adopting a semi-supervised learning scheme. In [8] a distributed and adaptive Bayesian network is proposed for the detection of anomalies in WSN data.

In line with state-of-the-art research, we propose a Bayesian adaptive system devoted to inferring user activity through a set of low-cost sensors embedded into a Wireless Sensor Network (WSN) [9], as preliminarily described in [10] for detecting user presence [11]. A WSN comprises a large set of wireless sensor nodes, pervasively deployed in the environment and capable of performing on-board computations. These devices are characterized by limited, non-renewable energy resources. This latter feature makes the maximization of the network lifetime a crucial goal, with the proviso that its achievement should not excessively sacrifice the inference accuracy. The proposed system aims to dynamically find the best trade-off between these two contrasting goals, maximizing both the WSN lifetime and the quality of the information gathered. This problem is addressed by minimizing both the uncertainty of the inferred knowledge and the energy consumption of the sensory infrastructure.

In order to solve our multi-objective problem, two objective functions have to be formally defined. For the uncertainty function we used the classic definition provided in [12]. The definition of an objective function representing the energy consumption of sensor nodes is a rather more complex problem. Several papers have dealt with the issue of minimizing the energy consumption of a WSN [13–15]. In [14] the author describes PAMAS, a MAC-layer protocol which reduces the power cost of routing packets over shortest-hop routes. In [13] the authors propose a node cost model for their clustering-based protocol, which uses randomized rotation of local cluster base-stations (cluster heads) to distribute the energy load among sensor nodes. In [15] the authors offer an analysis of the power consumption model for the communication module of a generic WSN node. To the best of our knowledge, most of the research in the literature deals with the problem of maximizing the WSN lifetime either at the MAC level or at the routing level. In contrast, our system manages the entire sensory infrastructure at a higher level, making our approach independent of low-level details.

This chapter is structured as follows. Section 2 provides a general description of the system proposed here, in terms of the concepts involved and the relations among them, together with a formal definition of the Bayesian Network (BN) adopted and of the quality indices exploited to evaluate system performance. The self-configuration problem through which the system adapts its sensory infrastructure is described in Sect. 2.6. Section 3 details the results of the experimental evaluation of the proposed system, and finally, Sect. 4 states our conclusions and proposes some future developments of our work.

2 Proposed System

We propose the adoption of an AmI system whose sensory infrastructure is based on Wireless Sensor Networks composed of off-the-shelf, low-cost devices. This feature makes it possible to maintain a low intrusiveness for the users and for the monitored premises, but implies that the signals gathered are, in general, only partially correlated with the feature of interest.

To overcome this problem, the adopted system exploits a Bayesian network (BN) as a framework for performing multi-sensor data fusion. In particular, the BN aims to detect the activity performed by the user in the monitored premises.

In order to evaluate the behavior of the current sensory infrastructure, we defined two quality indices, expressing the actual energy consumption of the sensory devices and the quality of the gathered information. These indices are continuously monitored in order to detect anomalous situations; whenever one of them exceeds a given threshold, an alarm is triggered and the system reconfigures the sensory infrastructure.

A meta-level for self-configuration is implemented over the BN, as shown in Fig. 1. This high-level component tries to achieve the best trade-off between the degree of confidence of the Bayesian network and the energy consumption of the sensory infrastructure, producing a plan that states which sensory devices have to be activated or deactivated.

Fig. 1: Block diagram for the proposed system

2.1 Conceptual Representation

We formally modeled the concepts characterizing our domain, and the relationships between them, through an ontology. This formalism allows us to better understand the structure and behavior of our system, and to support automatic interaction with other AmI components. The proposed ontology also makes it possible to describe the components of our system, namely the sensory infrastructure, the inference engine and the optimization module. The relationships among these components are shown in Fig. 2: the optimization module changes the configuration of the sensory infrastructure in order to find the best trade-off between energy consumption and the quality of the information obtained, thus affecting the accuracy of the inference engine.

Fig. 2: Taxonomy of system components and their relationships, as described in the proposed ontology

The role of the optimization module is represented by the concepts and relationships depicted in Fig. 3. At each time step, the optimization module observes the inference accuracy characterizing the inference engine and the power consumption caused by the sensory infrastructure. These two indices are verified against two fixed thresholds, and whenever one index exceeds its threshold, an alarm is fired, thus triggering the reconfiguration of the sensory infrastructure. The formal definition of such indices is provided in the following section.

Fig. 3: Description of concepts involved in the functioning of the optimization module

As shown in Fig. 4, the system knows that the sensory infrastructure is composed of several sensors, each of which consumes energy and contributes to the energy consumption of the whole sensory infrastructure. Switching a sensor on or off affects not only this consumption, but also the set of sensory readings gathered in a given time step. Since the inference engine takes the gathered sensory readings as input, each change in the state of the sensory infrastructure indirectly affects the accuracy of the inference process.

Fig. 4: The ontology proposed represents the indirect dependency between the status of the sensory infrastructure and both energy consumption and inference accuracy

2.2 Basic Definitions

Before describing the structure of the BN, we provide some formal definitions, which are required to state both the structure of the Bayesian system and the multi-objective problem.

\({{\mathcal {X}}}\) : the set of activity IDs (numerical);

\(n_{{\mathcal {X}}}\) : the number of all possible activities, i.e., \(n_{{\mathcal {X}}} = \sharp {({{\mathcal {X}}})}\);

\(x\) : a generic activity, i.e., \(x \in {{\mathcal {X}}}\);

\(x_t\) : a generic activity performed at time step \(t\), i.e., \(x_t \in {{\mathcal {X}}}\);

\({\mathcal {T}}\) : the set of all possible time steps;

\(t\) : a generic time step, i.e., \(t \in {\mathcal {T}}\);

\({\mathcal {S}}\) : the set of sensor IDs (numerical);

\(n_{\mathcal {S}}\) : the number of all sensors, i.e., \(n_{\mathcal {S}} = \sharp {({\mathcal {S}})}\);

\(s\) : a generic sensor, i.e., \(s \in {\mathcal {S}}\);

\(c_{s,t}\) : the state of sensor \(s\) at time \(t\); \(c_{s,t} \in \left\{ 0,1 \right\} \), where 0 means that sensor \(s\) is OFF;

\(\mathbf{c }_t\) : the binary vector encoding the configuration of the sensory infrastructure at time step \(t\), i.e., \(\mathbf{c }_t \in \left\{ 0,1 \right\} ^{n_{\mathcal {S}}}\);

\({\mathcal {I}}(\mathbf{c }_t)\) : the subset of sensors ON in the configuration \(\mathbf{c }_t\), i.e., \({\mathcal {I}}(\mathbf{c }_t) = \left\{ s \in {\mathcal {S}} \;|\; c_{s,t} = 1 \right\} \);

\({\mathcal {E}}\) : the set of numerical IDs, one for each possible value of the sensory readings;

\(e_t^s\) : the reading gathered by sensor \(s\) at time \(t\), i.e., \(e_t^s \in {\mathcal {E}}\);

\(e_t^{{\mathcal {I}}(\mathbf{c }_t)}\) : the set of readings gathered by the active sensors at time \(t\) (ordered by sensor ID), i.e., \(e_t^{{\mathcal {I}}(\mathbf{c }_t)} = \left\{ e_t^s \;|\; s \in {\mathcal {I}}(\mathbf{c }_t)\right\} \);

\(e_{1:t}^{{\mathcal {I}}(\mathbf{c }_k)}\) : the set of sensory readings gathered from the initial time step to \(t\), i.e., \(e_{1:t}^{{\mathcal {I}}(\mathbf{c }_k)} = \left\{ e_k^s \;|\; 1 \le k \le t \;,\; s \in {\mathcal {I}}(\mathbf{c }_k)\right\} \).

The definitions given above are used in the rest of the chapter to formally define the inference process of the proposed BN. In particular, defining the BN requires a state transition model, expressing the probability that the user will perform a particular activity at the next time step given the current activity, i.e., \(p(x_t | x_{t-1})\), and a sensor model, expressing the probability that a specific set of sensor readings is gathered by the sensory infrastructure given the activity performed by the user, i.e., \(p(e_t^{{\mathcal {I}}(\mathbf{c }_t)} | x_t)\). Assuming that the location of each device does not change over time, the state of the sensory infrastructure is fully specified by the binary vector \(\mathbf{c }_t= (c_{1,\,t}, c_{2,\,t},\,\ldots \,, c_{n_{\mathcal {S}},t})\). It is worth noting the role of \({\mathcal {I}}(\mathbf{c }_t)\), which can be seen as an operator that, given a configuration of the sensory infrastructure, returns the set of sensors active at time \(t\), thus indicating which sensors actually contribute to inferring context knowledge.
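To make these definitions concrete, the following Python sketch shows one possible in-memory representation of the quantities introduced above; the variable names, the illustrative sizes and the uniform/random initial values are our own assumptions and do not reproduce the original system.

```python
import numpy as np

n_activities = 5       # n_X: number of possible activities (illustrative value)
n_sensors = 34         # n_S: number of sensors (the value used in Sect. 3)

# State transition model p(x_t | x_{t-1}): entry [i, j] is the probability of
# moving from activity i to activity j; here initialized as uniform.
transition = np.full((n_activities, n_activities), 1.0 / n_activities)

# Sensor model p(e_t^s = 1 | x_t) for binary readings: entry [s, x] is the
# probability that sensor s fires while activity x is being performed.
rng = np.random.default_rng(0)
sensor_model = rng.random((n_sensors, n_activities))

# Configuration c_t of the sensory infrastructure: a binary vector of length n_S.
c_t = np.ones(n_sensors, dtype=int)

def active_sensors(c):
    """The operator I(c): indices of the sensors that are ON in configuration c."""
    return np.flatnonzero(c)
```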

2.3 Inference Engine

The structure shown in Fig. 5, together with the probabilistic state transition model, i.e., \(p(x_t | x_{t-1})\), and the probabilistic sensor model, i.e., \(p(e_t^{{\mathcal {I}}(\mathbf{c }_t)} | x_t)\), fully defines the Bayesian network. The network allows the inference engine to build its own belief about the activity currently being performed by the user, taking as input the whole observation set, as follows:

Fig. 5: Structure of the Bayesian Network for detecting user activity

$$\begin{aligned} Bel(x_t;\mathbf c _t) = p(x_t | e_{1:t}^{{\mathcal {I}}(\mathbf{c }_k)}), \end{aligned}$$
(1)

This belief is evaluated over a state \(x_t \in {\mathcal {X}}\), and it is parametric with respect to the configuration of the sensory infrastructure \(\mathbf{c }_t\). Evaluating this belief requires knowledge both of the evolution of the sensory infrastructure over time, i.e., \(\mathbf{c }_1, \mathbf{c }_2,\,\ldots \,, \mathbf{c }_t\), and of the whole set of sensory readings gathered over the same interval. Equation (1) can be expressed recursively thanks to the assumption of independence between the different measurements given the state value, and to the validity of the Markov assumption [16].

Indeed, by using the Bayes rule, it is possible to derive the following equation:

$$\begin{aligned} Bel(x_t;\mathbf{c }_t)&= p(x_t | e_{1:t}^{{\mathcal {I}}(\mathbf{c }_k)}) = p(x_t | e_{t}^{{\mathcal {I}}(\mathbf{c }_t)}, e_{1:t-1}^{{\mathcal {I}}(\mathbf{c }_k)}) = \\&= \eta \times p(e_{t}^{{\mathcal {I}}(\mathbf{c }_t)} | x_t, e_{1:t-1}^{{\mathcal {I}}(\mathbf{c }_k)}) \times p(x_t | e_{1:t-1}^{{\mathcal {I}}(\mathbf{c }_k)}), \nonumber \end{aligned}$$
(2)

where \(\eta \) is a normalizing factor.

The Markov assumption makes it possible to neglect the sensory readings gathered up to \(t-1\) when the current state \(x_t\) is known; thus the following equation holds:

$$\begin{aligned} p(e_{t}^{{\mathcal {I}}(\mathbf{c }_t)} | x_t, e_{1:t-1}^{{\mathcal {I}}(\mathbf{c }_k)}) = p(e_{t}^{{\mathcal {I}}(\mathbf{c }_t)} | x_t). \end{aligned}$$
(3)

The assumption of independence between measurements, given the state \(x_t\), allows the following factorization:

$$\begin{aligned} p(e_{t}^{{\mathcal {I}}(\mathbf{c }_t)} | x_t) = \prod _{s \in {\mathcal {I}}(\mathbf{c }_t)} p(e_t^s | x_t). \end{aligned}$$
(4)

Consequently, the belief can be expressed through the following equation:

$$\begin{aligned} Bel(x_t;\mathbf c _t) = \eta \prod _{s \in {\mathcal {I}}(\mathbf{c }_t)} p(e_t^s | x_t) p(x_t | e_{1:t-1}^{{\mathcal {I}}(\mathbf{c }_k)}). \end{aligned}$$
(5)

The last term in Eq. 5 can be further decomposed as follows:

$$\begin{aligned} p(x_t | e_{1:t-1}^{{\mathcal {I}}(\mathbf{c }_k)})&= \sum _{x_{t-1} \in {\mathcal {X}}} p(x_t, x_{t-1} | e_{1:t-1}^{{\mathcal {I}}(\mathbf{c }_k)}) \nonumber \\&= \gamma \sum _{x_{t-1} \in {\mathcal {X}}} p(x_t | x_{t-1}, e_{1:t-1}^{{\mathcal {I}}(\mathbf{c }_k)}) p(x_{t-1} | e_{1:t-1}^{{\mathcal {I}}(\mathbf{c }_k)})\\&= \gamma \sum _{x_{t-1} \in {\mathcal {X}}} p(x_t | x_{t-1}, e_{1:t-1}^{{\mathcal {I}}(\mathbf{c }_k)}) Bel(x_{t-1} ; \mathbf{c }_{t-1}), \nonumber \end{aligned}$$
(6)

where \(\gamma \) is a normalizing factor.

The substitution of equation (6) in equation (5) and a further application of the Markov assumption lead to the following recursive definition of the belief:

$$\begin{aligned} Bel(x_t;\mathbf{c }_t) = \eta \prod _{s \in {\mathcal {I}}(\mathbf{c }_t)} p(e_t^s | x_t) \sum _{x_{t-1} \in {\mathcal {X}}} p(x_t | x_{t-1}) Bel(x_{t-1} ; \mathbf{c }_{t-1}), \end{aligned}$$
(7)

where \(\gamma \) is absorbed into the normalization factor \(\eta \). It is worth noting that this expression of the belief is directly reflected in the graphical representation of the proposed BN shown in Fig. 5.
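As an illustration, the recursive update of Eq. (7) can be sketched in Python as follows. The function assumes binary sensor readings and reuses the array layout of the sketch in Sect. 2.2 (a `transition` matrix and a `sensor_model` matrix); this layout is our own assumption rather than the authors' implementation.

```python
import numpy as np

def belief_update(bel_prev, c_t, e_t, transition, sensor_model):
    """One step of Eq. (7): compute Bel(x_t; c_t) from Bel(x_{t-1}; c_{t-1}).

    bel_prev:     array of length n_X with the previous belief.
    c_t:          binary configuration vector of length n_S.
    e_t:          dict mapping sensor id -> binary reading, for the active sensors.
    transition:   matrix with transition[i, j] = p(x_t = j | x_{t-1} = i).
    sensor_model: matrix with sensor_model[s, x] = p(e_t^s = 1 | x_t = x).
    """
    # Prediction term: sum over x_{t-1} of p(x_t | x_{t-1}) Bel(x_{t-1}; c_{t-1}).
    predicted = transition.T @ bel_prev
    # Correction term: product over the active sensors of p(e_t^s | x_t).
    likelihood = np.ones_like(predicted)
    for s in np.flatnonzero(c_t):
        p_on = sensor_model[s]                 # p(e_t^s = 1 | x_t) for every x_t
        likelihood *= p_on if e_t[s] == 1 else 1.0 - p_on
    bel = likelihood * predicted
    return bel / bel.sum()                     # eta: normalization factor
```

Starting from a uniform belief, repeated calls to this function over the readings of a day produce the sequence of beliefs whose entropy is analyzed in the next section.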

2.4 Uncertainty Index

We define the uncertainty index at time step \(t\) on the basis of the classical definition of the entropy of the probability distribution of a random variable:

$$\begin{aligned} U(\mathbf{c }_t) = -\sum _{x_t \in {\mathcal {X}}} Bel(x_t ;\mathbf{c }_t) \log _2(Bel(x_t ;\mathbf{c }_t)). \end{aligned}$$
(8)

By varying the configuration \(\mathbf{c }_t\) of the sensory infrastructure, it is possible to decrease the uncertainty of the belief and thus to improve the information inferred at the next time step. This index therefore makes it possible to predict a configuration of the sensory infrastructure that yields a lower degree of uncertainty in the inferred knowledge.
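A direct Python transcription of Eq. (8), under the usual convention that \(0 \log 0 = 0\), might look as follows (a minimal sketch, not the authors' code):

```python
import numpy as np

def uncertainty_index(bel):
    """U(c_t) of Eq. (8): Shannon entropy, in bits, of the current belief."""
    p = bel[bel > 0.0]                 # drop zero entries: 0 * log2(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```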

2.5 Power Consumption Index

Generally, sensor nodes are able to monitor their own residual energy. If \(E_s(t)\) indicates the residual energy of node \(s\) at time step \(t\), typically associated with its battery charge, the residual energy of the entire sensory infrastructure can be expressed as follows:

$$\begin{aligned} E(t) = \sum _{s=1}^{n_{\mathcal {S}}} E_s(t). \end{aligned}$$
(9)

In what follows we will omit the explicit dependency of \(E\) on \(t\). Assuming that \(E\) is differentiable over small time steps, the energy variation can be approximated to the first order as follows:

$$\begin{aligned} dE = \sum _{s=1}^{n_{\mathcal {S}}} dE_s. \end{aligned}$$
(10)

By dividing both sides by \(dt\), we obtain the following expression:

$$\begin{aligned} \frac{dE}{dt} = \sum _{s = 1}^{n_{\mathcal {S}}} \frac{dE_s}{dt} \Rightarrow P = \sum _{s=1}^{n_{\mathcal {S}}} P_s, \end{aligned}$$
(11)

where \(P = P(t)\) is the total power consumption of the sensory infrastructure and \(P_s = P_s(t)\) is the power consumption of the sensor \(s\) at \(t\).

Fig. 6: Structure of the communication module of a WSN sensor node [15]

Obviously, the power consumption depends heavily on the configuration of the sensory infrastructure, thus we express the power consumption as a parametric function, as follows:

$$\begin{aligned} P(\mathbf{c }_t) = \sum _{s \in {\mathcal {I}}(\mathbf{c }_t)} P_s. \end{aligned}$$
(12)

In the literature, there is a considerable body of work on the form of \(P_s\) for a single WSN node. By adopting one of these models, it is possible to compute the power consumption of the whole sensory infrastructure. In this chapter, the model presented in [15] is adopted. Figure 6 illustrates the internal structure of the communication module of a typical WSN node, together with the power consumption of each component. The total power consumption for transmitting and for receiving is denoted by \(P_T(d)\) and \(P_R\) respectively; it is worth noting that the consumption required for transmitting depends on the transmission range. These values are computed, based on the structure and power consumption of each component of the communication module, according to the following equations:

$$\begin{aligned}&P_T(d) = P_{TB} + P_{TRF} + P_A(d) = P_{T0} + P_A(d), \nonumber \\&P_R = P_{RB} + P_{RRF} + P_L = P_{R0}. \end{aligned}$$
(13)

In Eq. (13) the term \(P_A(d)\) represents the power consumption of the amplifier, and it is the only term depending on the transmission range. The other terms can be modeled as constant values: \(P_{T0}\) for the constant part of the power consumption of the transmitting circuit, and \(P_{R0}\) for the power consumption of the receiving circuit. \(P_A(d)\) depends on several physical features, such as the characteristics of the antenna and of the propagation medium. For example, by supposing that signals propagate in free space, i.e., in a vacuum without obstacles, the term \(P_A(d)\) can be expressed as follows:

$$\begin{aligned} P_A(d) = \frac{P_R}{G_T G_R} \left( \frac{4 \pi d}{\lambda }\right) ^2, \end{aligned}$$
(14)

where \(G_T\) and \(G_R\) are the gains of the transmitting and receiving antennas respectively, \(P_R\) is the power required by the receiving antenna, \(\lambda \) is the wavelength adopted, and \(d\) is the distance between the antennas. Equation (14) is the well-known Friis formula [17] and summarizes the features of the medium and the physical characteristics of the device. More general versions of this equation exist, which take into account non-vacuum propagation, namely both the presence of obstacles and different media [17]. Equation (14) shows the strong interdependence among transmission power, device features and the environment in which the sensory infrastructure is deployed.
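As a sketch of how Eqs. (12)–(14) can be combined, the following Python fragment computes the free-space amplifier power and a total power index in which every active sensor is charged with one transmission and one reception per time step; the default parameter values and this uniform duty cycle are simplifying assumptions of ours, not values taken from [15].

```python
import math

def amplifier_power(d, p_r, g_t=1.0, g_r=1.0, wavelength=0.125):
    """P_A(d) of Eq. (14), Friis free-space model.

    d: distance between antennas [m]; p_r: power required at the receiver [W];
    g_t, g_r: antenna gains; wavelength ~0.125 m for a 2.4 GHz radio (assumed).
    """
    return (p_r / (g_t * g_r)) * (4.0 * math.pi * d / wavelength) ** 2

def power_consumption_index(c, p_t0, p_r0, d, p_r):
    """P(c_t) of Eq. (12), assuming each active sensor performs one transmission
    over distance d and one reception per time step (cf. Eq. (13))."""
    per_node = (p_t0 + amplifier_power(d, p_r)) + p_r0   # P_T(d) + P_R
    n_active = int(sum(c))                               # |I(c_t)|
    return per_node * n_active
```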

2.6 Self-Configuration Behavior

The self-configuration capability of the proposed system allows it to find the optimal configuration of the sensory infrastructure autonomously, based on the uncertainty of the inference engine and on the energy consumption of the sensor nodes. In order to quantify these contrasting goals we propose to exploit the uncertainty index \(U(\mathbf{c }_t)\), described in Sect. 2.4, and the power consumption index \(P(\mathbf{c }_t)\), described in Sect. 2.5.

The configuration problem is a multi-objective problem with two objective functions to be minimized:

$$\begin{aligned} \left\{ \begin{array}{l} f_1(\mathbf{c }_t) = U(\mathbf{c }_{t}) \\ f_2(\mathbf{c }_t) = P(\mathbf{c }_{t}). \end{array} \right. \end{aligned}$$
(15)

In order to avoid drastic changes in the sensory infrastructure, the configuration is allowed to change the status of at most one sensor at each time step. Formally, this dynamic constraint is expressed as follows:

$$\begin{aligned} \mathbf{c }_{t+1} \in \varGamma (\mathbf{c }_t), \end{aligned}$$
(16)

where \(\varGamma (\mathbf{c }_t)\) defines the region of admissible configurations of the sensory infrastructure, given the current one. This set of configurations is obtained from \(\mathbf{c }_t\) by switching at most one sensor on or off. Formally, it is defined as follows:

$$\begin{aligned} \varGamma (\mathbf{c }_t) =\left\{ \hat{\mathbf{c }}_t \ :\ \sum _{i=1}^{n_{\mathcal {S}}} \left| c_{i,t} -\hat{c}_{i,t} \right| \le 1 \right\} \end{aligned}$$
(17)

In order to solve the multi-objective problem defined in Eq. (15), we chose to look for Pareto-optimal solutions, as proposed in [18] in the context of multi-objective genetic algorithms.

The pseudocode for the self-configuration algorithm is shown in Algorithm 1. The algorithm proposed here consists of three parts: (i) the delimitation of the admissible region, according to Eq. (17); (ii) the identification of the Pareto-optimal solutions; and (iii) the selection, from among the admissible and Pareto-optimal solutions, of the one that improves the index which triggered the alarm. A sketch of these three parts is given after the algorithm.

Algorithm 1: Self-configuration of the sensory infrastructure
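Since Algorithm 1 is given only as pseudocode, the following Python sketch reproduces its three parts under our own assumptions: `estimate_u` and `estimate_p` are caller-supplied estimators of the two indices for a candidate configuration, and ties on the Pareto front are broken arbitrarily.

```python
def admissible_region(c):
    """Gamma(c_t) of Eq. (17): c itself plus every configuration that differs
    from it in the state of exactly one sensor."""
    region = [list(c)]
    for s in range(len(c)):
        neighbour = list(c)
        neighbour[s] = 1 - neighbour[s]
        region.append(neighbour)
    return region

def pareto_front(candidates, f1, f2):
    """Keep the candidates that are not dominated on (f1, f2), both minimized."""
    scored = [(f1(c), f2(c), c) for c in candidates]
    return [(u, p, c) for (u, p, c) in scored
            if not any(v <= u and q <= p and (v < u or q < p)
                       for (v, q, _) in scored)]

def reconfigure(c, estimate_u, estimate_p, alarm):
    """Among the admissible, Pareto-optimal configurations, pick the one that
    best improves the index that fired the alarm ('uncertainty' or 'power')."""
    front = pareto_front(admissible_region(c), estimate_u, estimate_p)
    key = (lambda t: t[0]) if alarm == "uncertainty" else (lambda t: t[1])
    return min(front, key=key)[2]
```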

2.7 System Overview

The overall behavior of the proposed system is described by the pseudocode in Algorithm 2. Two main parts are identifiable: the belief update and the self-configuration. The belief update is performed according to the classical equations of a Bayesian filter, as described in Sect. 2.3. The system then verifies whether the current sensory configuration triggers any alarm and, if necessary, performs the self-configuration described in Sect. 2.6.

Algorithm 2: Overall behavior of the proposed system
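A compact Python sketch of this loop, reusing the functions introduced in the previous sketches (`belief_update`, `uncertainty_index`, `reconfigure`) and treating the two thresholds and the index estimators as caller-supplied parameters, might look like this:

```python
def run_step(bel, c, e_t, u_max, p_max, models, estimate_u, estimate_p):
    """One iteration of Algorithm 2: belief update, alarm check and, if an
    alarm is fired, self-configuration of the sensory infrastructure."""
    transition, sensor_model = models
    bel = belief_update(bel, c, e_t, transition, sensor_model)

    if uncertainty_index(bel) > u_max:            # uncertainty alarm
        c = reconfigure(c, estimate_u, estimate_p, "uncertainty")
    elif estimate_p(c) > p_max:                   # power consumption alarm
        c = reconfigure(c, estimate_u, estimate_p, "power")
    return bel, c
```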

3 Experimental Evaluation

3.1 Experimental Setting

In order to evaluate the performance of the proposed system we used a synthetic dataset built on the basis of the WSU CASAS Datasets [7], whose rows have the following form:

\(<\) day, time, sensor_name, sensor_measure, activity, label \(>\).

Each term in a row is expressed according to the following BNF grammar:

$$\begin{aligned} \begin{array}{l} \mathtt{day } \rightarrow \mathtt{yy-mm-dd } \\ \mathtt{time } \rightarrow \mathtt{hh:mm:ss } \\ \mathtt{sensor\_name } \rightarrow \mathtt{M0[01-31] } \;|\; \mathtt{D001 } \;|\; \mathtt{D002 } \;|\; \mathtt{D004 }\\ \mathtt{sensor\_measure } \rightarrow \mathtt{ON } \;|\; \mathtt{OFF } \;|\; \mathtt{OPEN } \;|\; \mathtt{CLOSE }\\ \mathtt{activity } \rightarrow \mathtt{activity\_label } \;|\; \epsilon \\ \mathtt{label } \rightarrow \mathtt{begin } \;|\; \mathtt{end } \;|\; \epsilon \end{array} \end{aligned}$$

It is worth noting that our synthetic dataset only contains the readings of movement sensors and of door-state sensors, whereas the temperature readings present in the original DB were discarded because of the low correlation between this physical phenomenon and the activity performed by the user.

On the basis of the dataset adopted, it is possible to properly define the sets \({\mathcal {X}}, {\mathcal {S}}, {\mathcal {T}}\) and \({\mathcal {E}}\) as required in Sect. 2.2. In the case under consideration, the definition of \({\mathcal {X}}\) simply requires the distinct activity labels to be considered and each of them to be associated with a unique numerical ID. An analogous procedure involving sensors is required to define \({\mathcal {S}}\). In order to define \({\mathcal {T}}\) we considered the number of seconds in a 24 h day, divided them into intervals of 30 s, and assigned a unique numerical ID to each interval. In order to define the set \({\mathcal {E}}\), a preprocessing of the original DB was required. Let us suppose that the DB contains two distinct rows (denoted \(row_i\) and \(row_j\), where \(i < j\)), associated with the same sensor \(s\), and that the label is ON for \(row_i\) and OFF for \(row_j\). If \(t_i\) and \(t_j\) are the values of the time field of \(row_i\) and \(row_j\) respectively, then our DB has to contain an entry for each \(t \in \left[ t_i, t_j \right] \) indicating that the sensor \(s\) is active, i.e., \(e_t^s=1\).
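The interval-expansion step described above can be sketched in Python as follows; the timestamp format and the handling of OPEN/CLOSE as equivalents of ON/OFF are our own assumptions about the dataset encoding.

```python
from datetime import datetime, timedelta

STEP_SECONDS = 30                     # length of a time step, as defined above

def step_id(stamp):
    """Numerical ID of the 30-s interval of the day containing `stamp`."""
    midnight = stamp.replace(hour=0, minute=0, second=0, microsecond=0)
    return int((stamp - midnight).total_seconds() // STEP_SECONDS)

def expand_activations(rows, sensor):
    """Return the set of time-step IDs t for which e_t^s = 1, given dataset rows
    of the form (day, time, sensor_name, sensor_measure, activity, label)."""
    active, start = set(), None
    for day, time, name, measure, *_ in rows:
        if name != sensor:
            continue
        stamp = datetime.strptime(f"{day} {time}", "%Y-%m-%d %H:%M:%S")  # format assumed
        if measure in ("ON", "OPEN"):
            start = stamp
        elif measure in ("OFF", "CLOSE") and start is not None:
            t = start
            while t <= stamp:         # mark every 30-s step in [t_i, t_j]
                active.add(step_id(t))
                t += timedelta(seconds=STEP_SECONDS)
            start = None
    return active
```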

3.2 Experimental Results

The original DB contains some unclassified sensory readings; the authors of [7] adopted a semi-supervised approach [19] to deal with this lack of information. For the same purpose, we used the Expectation-Maximization (EM) algorithm. In order to evaluate the performance of our system we adopted ten-fold cross validation, dividing our DB into ten parts.

We compared the performance of three different systems. The first system is obtained by deactivating the self-configuring behavior and favors minimization of the uncertainty index, thus keeping all sensors permanently on. The second system is also obtained by deactivating the self-configuring behavior, but it favors minimization of the power consumption index, thus keeping only a minimal subset of sensors on; this subset is fixed and consists of 10 of the 34 available sensors. The third system is obtained by activating the self-configuring behavior.

The performance of these three systems is compared in Fig. 7, which shows the trend of the uncertainty index and of the power consumption index during a given day. As expected, the first base-line system, with all sensors on, obtains the lowest level of uncertainty but the maximum level of power consumption. In contrast, the second base-line system, with a fixed and limited set of sensors on, is characterized by the highest level of uncertainty and the minimum level of power consumption. The proposed adaptive system, able to self-configure the sensory infrastructure, shows an uncertainty level close to that of the first base-line system, with a significant reduction in power consumption. Tables 1 and 2 summarize the mean accuracy over all of the tests in the cross validation phase.

Fig. 7: Comparison of the trend of the uncertainty index and of the power consumption index during a given day for the proposed system and the two base-line systems considered here

Table 1 Mean accuracy, for all of the tests considered in the cross validation, of the proposed Adaptive System compared with the two base-line systems considered (All Sensors On and Subset of Sensors On)

4 Conclusions

This chapter describes formal and practical details of the design and implementation of an adaptive Bayesian system for performing multi-sensor data fusion in an Ambient Intelligence scenario. The adaptivity consists of dynamic self-configuration of the underlying sensor network, with the aim of finding the best trade-off between the uncertainty of the inferred knowledge and the power consumption of sensory devices.

Table 2 Overall mean accuracy of the proposed Adaptive System compared with the two base-line systems considered (All Sensors On and Subset of Sensors On)

The proposed system has been evaluated on a synthetic dataset based on a well-known dataset for Smart Homes available in the literature. The experimental results show a clear energy saving as compared with a static approach where all sensor nodes are always on, at the cost of a small reduction in inference accuracy. On the other hand, the capability of dynamically selecting which sensors to keep on was found to produce a clear advantage in terms of inference accuracy over a static approach in which only a fixed subset of sensor nodes is on.