
1 Beesper SmartBridge

Beesper is a flexible system for wireless distributed data collection, which makes it possible to measure, visualize and analyse environmental parameters of various types, including ambient temperature, pressure, humidity, light intensity, noise, air quality, vibrations and crack size, amongst others. A Beesper system is composed of a network of low-power wireless sensor nodes and one or more bridge nodes that collect the data and send it to a remote server for further analysis. In order to be deployable in any environment, all components of a Beesper system are battery operated and guarantee multi-year monitoring activity. This requirement limits the computational power of the Beesper Bridge and the type of analysis that can be performed on the collected data. To overcome this limitation, a new battery-operated “smart bridge” is being developed to process both high-complexity data, like streams of images, and low-complexity data, like temperature, on site. These features allow real-time monitoring and reduce the need for constant communication with a remote server, which may be accessible only with limited bandwidth due to the physical location of the monitoring system (e.g. mountains).

The Beesper SmartBridge system is a solar-powered, battery-operated intelligent monitoring device that acts as a Beesper bridge and is also able to perform advanced processing on site. The system must also be able to optimize power consumption, in order to be deployable in remote areas where the only power supply is solar power and a battery.

The main applications developed are:

  • Target-based landslide movement monitoring, using frequent images from cameras and feature extraction based on machine vision and particle swarm optimization;

  • Markerless rockfall and rockwall composition identification, using frequent images from cameras, through machine vision feature extraction and machine learning classification;

  • Rockwall dilatation analysis, through machine learning using data from a wireless sensor network (WSN);

and two support applications:

  • Network traffic manager

  • Power monitor and board monitor logger

A specific installation of the system may comprise only a subset of these applications, depending on the monitoring requirements; in particular, this chapter will focus on the latter two main applications.

The artificial intelligence system will be able to adapt itself to each landslide or rockwall condition by performing the training on the specific location, thus adapting its internal memory. This process will lead to better performance, while lowering the cost of the solution and the time to market with respect to a system custom made for each scenario.

The HARPA methodology allows the system power consumption to be adjusted dynamically by changing each application's set of resource requirements (its Application Working Mode, or AWM), thus enabling improved quality of service, extended battery life and higher reliability.

1.1 Hardware Prototype

The Beesper SmartBridge system is composed of a solar panel, a solar charger, a DC/DC converter and a battery, which provide power to a multicore ARM board. Internet connectivity can be provided by various means such as UMTS/LTE, Ethernet, WiMAX or point-to-point radio links.

Connection to the Beesper WSN is provided by a standard Beesper Bridge directly connected to the Beesper SmartBridge. The custom radio interface will be integrated in the Beesper SmartBridge in the final hardware version.

The system has been assembled in a weatherproof box with filtered air vents and it has been installed on a pole at about 3 m height to avoid tampering.

Details of the hardware components may be found in Table 13.1 and a picture can be found in the left part of Fig. 13.1.

Fig. 13.1 Left: Beesper SmartBridge prototype. Right: Beesper SmartBridge prototype installed on a pole

Table 13.1 Beesper SmartBridge prototype hardware components

1.2 Applications

The two core applications of the Beesper SmartBridge provide the main monitoring capabilities of the system and are developed to provide state-of-the-art performance.

1.2.1 Video Rockfall Detection

The aim of this application is to check whether any rockfall, or major modification, has occurred on the whole monitored rock wall, using a high-resolution camera.

It is possible to identify three major processing steps:

  1. Once the application is started, it powers up the camera, acquires a set of images in fast sequence and evaluates their quality to select the best one for further processing. A sample image is shown in the left part of Fig. 13.2. In case of insufficient image quality (e.g. low light, heavy rain, …), the system may inhibit further application executions for a period of time.

    Fig. 13.2 Video Rockfall detection application. Left: Original image. Centre: Classification of vegetation, rock and sky. Right: Rockwall modification detection

  2. The second step is to perform the actual computation by classifying the target image and tracking the relevant parts of the image to detect rockfalls, as shown in Fig. 13.2.

  3. The third step is to save the selected image and the results to temporary local storage, where they are queued for remote transmission. During the current test deployment phase, an additional local copy for long-term storage is written to a local flash memory.

After these three steps, the application waits to be rescheduled by HARPA-OS.
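The execution cycle described above can be summarized with a short sketch. Everything in the following Python code (the simulated camera burst, the gradient-based quality metric, the placeholder classifier) is a hypothetical stand-in rather than the actual Beesper SmartBridge implementation:

```python
"""Illustrative sketch of the Video Rockfall detection cycle (all names are
hypothetical placeholders, not the real Beesper SmartBridge software)."""
import numpy as np

def capture_burst(n_shots=5, shape=(480, 640)):
    # Placeholder for the camera burst acquisition (step 1).
    rng = np.random.default_rng()
    return [rng.random(shape) for _ in range(n_shots)]

def sharpness(image):
    # Simple gradient-energy proxy used here as the image quality metric.
    gy, gx = np.gradient(image)
    return float(np.mean(gx ** 2 + gy ** 2))

def classify_and_track(image):
    # Placeholder for step 2: vegetation/rock/sky classification and
    # change detection against a reference image.
    return {"rockfall_detected": False, "changed_pixels": 0}

def run_cycle(quality_threshold=1e-3):
    shots = capture_burst()
    best = max(shots, key=sharpness)
    if sharpness(best) < quality_threshold:
        # Insufficient quality (low light, heavy rain, ...): inhibit
        # further executions for a period of time.
        return {"status": "inhibit"}
    result = classify_and_track(best)   # step 2
    # Step 3: save image and results locally and queue them for upload
    # (omitted here); then return control to the scheduler.
    return {"status": "done", **result}

if __name__ == "__main__":
    print(run_cycle())
```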

1.2.2 Extensometer Analysis

The aim of this application is to model, classify and predict the behaviour of the rockwall boulders, monitored using a set of extensometers and wireless temperature sensors, in order to detect anomalies and dangerous events. The proprietary extensometer sensors used in this application are able to measure submillimetre rock displacements, highlighting very subtle variations that may be correlated with the temperature of the rock. The site selected to host the HARPA application deployment is already monitored using a standard Beesper Wireless Sensor Network, thus a dataset spanning about 300 days is already available. The data is retrieved from the Beesper server using a set of APIs via the Internet. During the development phase, the whole historical dataset of extensometer data has been downloaded and processed in order to train the machine learning system, which is based on a proprietary framework built on hierarchical temporal-aware neural networks [1]. The historical data is labelled by experts (geologists) as normal or abnormal behaviour and a model is then trained for each extensometer. At run time, the updated data for each extensometer is retrieved from the Beesper server and processed by the corresponding model. The model outputs the probability that the current situation represents an abnormal behaviour or an anomaly.
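The per-extensometer train-and-score workflow described above can be sketched as follows. A simple displacement-vs-temperature regression with a residual-based score stands in for the proprietary hierarchical temporal-aware network; only the overall structure (one model per extensometer, trained on expert-labelled history and producing an anomaly probability at run time) mirrors the description, and the data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

class ExtensometerModel:
    """Stand-in for the proprietary temporal-aware network: it models the
    normal displacement/temperature relation and scores deviations."""
    def fit(self, temperature, displacement):
        self.coef = np.polyfit(temperature, displacement, deg=1)
        residuals = displacement - np.polyval(self.coef, temperature)
        self.sigma = residuals.std() + 1e-9
        return self

    def anomaly_score(self, temperature, displacement):
        # 0 = perfectly normal, -> 1 = far from the learned behaviour.
        r = (displacement - np.polyval(self.coef, temperature)) / self.sigma
        return float(1.0 - np.exp(-0.5 * r ** 2))

# Training phase: one model per extensometer on ~300 days of history
# labelled as normal by the geologists (synthetic data here).
temp = rng.uniform(-5, 30, 300)                  # rock temperature [°C]
disp = 0.02 * temp + rng.normal(0, 0.01, 300)    # displacement [mm]
models = {"ext_01": ExtensometerModel().fit(temp, disp)}

# Run time: score the latest reading retrieved from the Beesper server.
print(models["ext_01"].anomaly_score(temperature=10.0, displacement=0.9))
```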

1.3 Application Testing

The Video Rockfall detection application and the Fessurimeter analysis application have been tested to identify the power consumption sources on the ARM board alone, without the other hardware components of the Beesper SmartBridge, such as the solar charger or the DC/DC converter. The test has been performed using the ODROID Smart Power device, which logs voltage, current and power of the system load. Moreover, this device, by cooperating with the on-board electronics, is able to measure the energy consumption of the USB, SoC, Ethernet and board. The values for the camera and other devices are derived from laboratory experiments.

Table 13.2 reports the time and energy needed for each application execution, while Table 13.3 reports the energy consumption per board component for each application execution, computed in a controlled environment.

Table 13.2 Time and energy comparison of the two main applications
Table 13.3 Energy consumption per board component for each application execution computed in a controlled environment of the two main applications

The two applications have quite different power consumption profiles and time requirements due to the specific processing performed, as can be observed in Fig. 13.3.

Fig. 13.3 Power usage of the two main applications. Top: Video Rockfall detection. Bottom: Fessurimeter analysis

The Video Rockfall detection application has a quicker acquisition phase where the connected camera captures a set of pictures and then the ARM board computes the results. Between different application executions, the system is idle.

The Fessurimeter analysis application has a much longer acquisition phase in which the system queries and waits for the WSN data from a remote server via the Internet; during this phase the system is effectively idle and the power consumption is not directly affected by this activity. This behaviour may make the second application appear more expensive in terms of total energy consumption per cycle (0.3438 Wh vs. 0.0834 Wh), whereas considering only the processing part it is actually less expensive (0.0685 Wh vs. 0.0718 Wh) than the camera-based one. In particular, analysing the distribution of the energy consumption per component shows that most of the consumption in the Fessurimeter analysis application is due to the idle consumption of the board itself and not to the SoC during the active processing of the data, since the latter accounts for only about 20% of the overall energy consumption.
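As a consistency check using the figures above (assuming the ~20% refers to the share of the per-cycle energy spent in active processing): 0.0685 Wh / 0.3438 Wh ≈ 0.199 ≈ 20%, which leaves roughly 0.3438 Wh − 0.0685 Wh ≈ 0.275 Wh per cycle attributable to the idle consumption of the board during the acquisition wait.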

1.4 HARPA Integration

HARPA-OS provides a managed environment in which the applications run. HARPA-OS allocates resources to the applications depending on a set of policies considering, for example, the temperature of the system, the available energy, the system load and data from the system monitors. The allocation is done by reconfiguring the application to a specific AWM (Application Working Mode) defined and profiled offline. The different AWMs represent different application modalities with different computational requirements: for instance, the application may reduce the accuracy of the algorithm or the duty cycle. Once a computational unit has finished, the application must return control to HARPA-OS in order to be reconfigured if necessary. This interval should be in the range of seconds. Given this workflow, the applications best suited to exploit HARPA-OS are the ones that analyse streaming data.
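The execution model described above can be sketched as follows. The class and method names are illustrative placeholders that mimic the configure/run/return-control cycle; they are not the actual HARPA-OS interface:

```python
"""Conceptual sketch of the configure/run/return-control cycle described
above; RuntimeManager and its methods are illustrative placeholders."""
import time

class RuntimeManager:
    def __init__(self):
        self.awm = 1          # current Application Working Mode
    def configure(self):
        # In HARPA-OS the AWM is chosen according to policies (temperature,
        # available energy, load, monitors); here it is simply returned.
        return self.awm

def application_cycle(awm):
    # One computational unit; its accuracy/duty cycle depends on the AWM.
    work_seconds = {0: 0.5, 1: 1.0, 2: 2.0, 3: 4.0}[awm]
    time.sleep(work_seconds * 0.001)   # placeholder for the real processing

manager = RuntimeManager()
for _ in range(3):
    awm = manager.configure()   # the manager may reconfigure the AWM here
    application_cycle(awm)      # run one unit (seconds-range in practice)
    # control returns to the manager before the next cycle
```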

In the Beesper SmartBridge system, a specific logger reads the current energy status, and the associated resources, from memory and provides this information to HARPA-OS. The “monitors” used by HARPA-OS are:

  • Current battery level and

  • Current power provided by the solar panels.

The “knob” used by HARPA-OS to tune the system behaviour is the applications' duty cycle; the different AWMs are defined in Table 13.4. The duty cycle for the different applications depends also on the physical phenomena to be monitored and on the sensors used; for instance, the wireless extensometers provide a new value every 15 min.

Table 13.4 Application Working Mode (AWM) duty cycle

HARPA-OS will reconfigure the applications to different AWMs depending on the monitor values, according to Table 13.5, in order to maximize the uptime of the system, by selecting a less power-hungry configuration when energy is scarce, and to improve the quality of service, by selecting a higher monitoring frequency when energy is abundant.

Table 13.5 AWM selection
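As an illustration of this selection logic, the following Python sketch maps the two monitors to an AWM. The thresholds are hypothetical placeholders and do not reproduce the actual values of Table 13.5:

```python
def select_awm(battery_pct, solar_power_w):
    """Illustrative AWM selection from the two monitors (battery level and
    solar panel power); the thresholds below are assumptions, not the real
    values of Table 13.5."""
    if battery_pct < 30 and solar_power_w < 5:
        return 0   # energy scarce: lowest duty cycle to preserve uptime
    if battery_pct < 60:
        return 1   # sufficient quality of service
    if solar_power_w > 20 or battery_pct > 90:
        return 3   # energy abundant: highest monitoring frequency
    return 2       # good quality of service

print(select_awm(battery_pct=45, solar_power_w=12))   # -> 1
```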

2 Real-World HARPA Testing

The location selected to test the HARPA system is Pietra di Bismantova, a well-known tourist destination for climbers and hikers in the district of Castelnovo ne’ Monti (Reggio Emilia, Italy). This location has already been monitored by a standard Beesper system, comprising cameras and a WSN, since January 2015. This pre-existing system provides Internet connectivity to the Beesper SmartBridge via Ethernet, purely for installation convenience; mains power is not connected to the Beesper SmartBridge.

The HARPA-based system was installed at Bismantova from 13th November 2015 to 18th September 2016 and is shown in Fig. 13.1. Due to development constraints, the system was initially deployed with only the Video Rockfall detection application, while the Fessurimeter analysis application was deployed on 20th April 2016.

The total energy generated (and used) by the whole system during the 310-day testing period is 25.89 kWh, as measured by the solar charger. After taking into account battery inefficiencies and the solar charger's energy consumption, the energy spent by the computation system (including the camera) over the whole period is 19.6 kWh.

The idle power of the system is computed by considering all the logged data in which no active process is running; the resulting average idle power is 6.0784 W.

The target is to reach the maximum uptime of the system with the highest quality of service possible. The system shuts off during night time, since there would be no light to perform image-based analysis.
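As a rough back-of-envelope check (not part of the original measurements): the 19.6 kWh spent by the computation system over 310 days corresponds to about 19,600 Wh / 310 ≈ 63 Wh per day, which at the measured idle power of 6.0784 W covers roughly 63 Wh / 6.08 W ≈ 10 h of idle-equivalent operation per day. This is consistent with a system that shuts off at night and must modulate its daytime activity through the AWMs.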

2.1 Collected Data Analysis for Each Application Working Mode

During the testing period, the system experienced all weather and illumination conditions, which made it possible to test system performance when the available energy was scarce or abundant; in particular, all the different AWMs were selected during the test, thus it is possible to compute aggregate statistics over all the system AWMs.

2.1.1 Load Power Analysis

The left part of Fig. 13.4 shows the aggregate load power statistics over the whole testing period. The height of each bin reflects the time spent by the system at that specific load power. Since the system is idle most of the time, it is possible to observe a peak around 6 W, which is the idle power consumption of the system. The right part of Fig. 13.4 shows a detail of the right tail of the previous plot. In this case, it is possible to observe a difference in the load power of the system in the different AWMs; in particular, AWM 2 and 3 show a considerable amount of time spent at high load power.

Fig. 13.4 Left: Power used by the system in each Application Working Mode (AWM). Right: Detailed view of the distribution “tail”
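The per-AWM load power statistics of Fig. 13.4 can be reproduced from the power log with a few lines of analysis code. The following Python sketch assumes a hypothetical log format (one record per sample with the active AWM and the measured load power); the synthetic data and bin edges are illustrative only:

```python
"""Sketch of the aggregate load-power statistics, from a hypothetical log."""
import numpy as np

rng = np.random.default_rng(1)
log = [{"awm": int(a), "load_power_w": float(p)}
       for a, p in zip(rng.integers(0, 4, 1000),
                       rng.normal(6.1, 1.5, 1000).clip(min=4.0))]

bins = np.arange(4.0, 16.0, 0.5)
for awm in range(4):
    power = np.array([e["load_power_w"] for e in log if e["awm"] == awm])
    hist, _ = np.histogram(power, bins=bins)
    # Each bin counts log samples, i.e. a proxy for time spent at that power.
    print(f"AWM {awm}: {hist.sum()} samples, mean {power.mean():.2f} W")
```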

2.1.2 Solar Panel Power and Battery Level Analysis

Figure 13.5 shows the battery level and solar panel power distributions in the various AWMs. As expected, the logged battery level and the power generated by the panel correlate with the selected AWM of the system: when the battery level and generated power are low, the selected AWM is lower, while with a high battery level or high generated power the selected AWM is higher.

Fig. 13.5 Solar panel and battery percentage statistics in each AWM

It has been observed that the battery level reading is not reliable during active charging by the solar panels: it reaches 100% in a very short amount of time and the reported value does not represent the energy actually accumulated. The current solution is a custom filter that ignores non-realistic values, while a more appropriate solution would require dedicated hardware components to measure the effective amount of charge flowing into the battery.
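The exact filtering rule is not described in the text; the following Python sketch shows one hypothetical way such a filter could reject implausible battery readings by bounding the step between consecutive accepted values:

```python
def filter_battery_levels(levels, max_step_pct=5.0):
    """Hypothetical filter for unrealistic battery readings during active
    charging: readings that jump more than max_step_pct with respect to the
    last accepted value are ignored (the actual Beesper filter is custom and
    not detailed in the text)."""
    accepted = [levels[0]]
    for value in levels[1:]:
        if abs(value - accepted[-1]) <= max_step_pct:
            accepted.append(value)
        else:
            accepted.append(accepted[-1])   # hold the last plausible value
    return accepted

print(filter_battery_levels([62, 63, 100, 64, 65, 100, 66]))
# -> [62, 63, 63, 64, 65, 65, 66]
```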

2.1.3 Application Working Mode Power Consumption Analysis

By further analysing the logged data, it is possible to calculate the average energy for each run of each application AWM. Table 13.6 shows that the choice of AWM can increase the energy requirement of the system by up to 35%. A dynamic resource allocation system like HARPA-OS should be able to exploit this property.

Table 13.6 AWM average power consumption

As previously discussed, the average energy for the Fessurimeter analysis application is affected by the length of the acquisition phase, during which the system is idle; this behaviour shifts the average application energy towards the idle value. Moreover, the very high consumption of the Video Rockfall detection application in AWM 3 is also due to the removal of a wait period in case of a low-quality image (e.g. low light, heavy rain, fog, bad camera initialization, …).

2.1.4 System Temperature Analysis

HARPA-OS provides an innovative scheduler, Tempura, able to allocate the available resources to the different applications while assuring that the thermal behaviour of the system is kept under control and the system performance is guaranteed.

During the testing period, the system has not reached critical temperatures leading to an emergency shutdown.

By further analysing the logged data, it is possible to compute a histogram representing the temperature distribution logged in the various AWMs. Figure 13.6 highlights how the current AWM is directly linked to the CPU temperature. In particular, it is possible to observe a maximum temperature reached by the system in all AWMs; this behaviour will be analysed in Sect. 13.2.1.5.

Fig. 13.6 System temperature statistics in each AWM

Moreover, during the testing period the box temperature never surpassed 30 °C, and the CPU temperature in AWM 2 and 3 is not able to cool down to the enclosure box temperature, since the duration of the Video Rockfall detection application is (or is very close to) the scheduling cycle, thus the system is always under active load.

2.1.5 Execution Time Analysis

The Tempura scheduler allocates the available resources in order to guarantee a constant execution time. Figure 13.7 shows the correlation between CPU temperature (on the horizontal axis) and execution time (on the vertical axis) of the Video Rockfall detection application, where the colour encodes the frequency of each event. It is possible to observe how the execution time clusters around two different values independently of the temperature. This is caused by the different exit conditions of the Video Rockfall detection application: if the image quality is not sufficient, the last stage of processing is skipped, resulting in a lower execution time.

Fig. 13.7 Correlation between CPU temperature and execution time

These results indicate the benefits achieved when HARPA-OS controls the application scheduling through the Tempura resource allocation policy:

  • Execution time is independent of the temperature of the system;

  • The maximum temperature of the system is under control and does not lead to safety shutdowns (thus no QoS issues): the hard threshold of 95 °C has never been reached, even under the heavy load of the two applications running at the same time in the highest AWM during summer time;

  • When two applications are running, the resources are reallocated to guarantee a predictable performance level.

2.2 Performance Analysis

The main target of the Beesper SmartBridge system is to ensure a constant monitoring service of the rockwall, thus the main performance measure is the uptime of the system. The uptime reported in Table 13.7 has been computed by analysing the logs and considering whether the system was able to perform a successful operation (data acquisition, data processing and result communication) during each day.

Table 13.7 Beesper SmartBridge uptime

Nevertheless, this coarse measure is not sufficient to evaluate the true performance of a monitoring system. To address this, two new measures of the quality of service of the system are introduced and described in the following sections:

  • Uptime Quality of Service (UQoS)

  • Performance Quality of Service (PQoS)

2.2.1 Uptime Quality of Service Analysis

The Uptime Quality of Service (UQoS) is defined based on the percentage of time the system is active over one day. This index is directly correlated with the reliability of a monitoring system and is defined in Table 13.8. The colours indicated in the table are used in the figures containing the Beesper SmartBridge UQoS results.

Table 13.8 Uptime Quality of Service (UQoS) definition
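Since Table 13.8 is not reproduced here, the mapping below is only an illustrative sketch: it uses the band names that appear in the text (UQoS-0, UQoS-10, UQoS-90) with assumed thresholds, whereas the complete set of bands and the exact thresholds are those defined in Table 13.8:

```python
def daily_uqos(active_seconds, day_seconds=24 * 3600):
    """Map the daily active time to a UQoS band. Only the bands explicitly
    mentioned in the text are used here and the thresholds are illustrative
    assumptions (the authoritative definition is Table 13.8)."""
    active_pct = 100.0 * active_seconds / day_seconds
    if active_pct >= 90:
        return "UQoS-90"
    if active_pct >= 10:
        return "UQoS-10"
    return "UQoS-0"

print(daily_uqos(active_seconds=14 * 3600))   # ~58% of the day -> UQoS-10
```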

Figure 13.8 shows this index as a colour-coded scale on a calendar in order to highlight the time of the year with respect to the performance of the system. The colour of the days represents the UQoS that the system achieved on the specific day.

Fig. 13.8 Daily Uptime Quality of Service (UQoS) of the HARPA system over the calendar months

To perform this analysis, the applications' current AWM has not been considered, since all application AWMs assure a minimum level of service.

It is possible to observe in Fig. 13.8 how, during spring and summer, the system is active with the best possible UQoS almost all the time, thanks to the increased solar power available. The only red day, in May 2016, is due to a software bug that prevented the system from booting up, a condition that was automatically resolved the day after without requiring an on-site intervention.

Table 13.9 and Fig. 13.9 show the percentage of time at which the system reached each UQoS level, month by month. It is possible to observe that the system reaches at least the minimum UQoS-10 for most of the year and the optimal UQoS-90 during late spring, summer and early autumn. From an application point of view, these results indicate that this level of performance is adequate to create a battery-powered monitoring system able to provide consistent performance for most of the year.

Fig. 13.9 Percentage of time the system reached each UQoS month by month

Table 13.9 Percentage of time the system reached each UQoS month by month

These results will be compared in Sect. 13.3.2.2 with those obtained by simulating the system performance without HARPA-OS.

2.2.2 Performance Quality of Service Analysis

The Performance Quality of Service (PQoS) measures how often the Beesper SmartBridge monitors the environment; thus, it is able to provide a measure of the system performance in assessing the rockwall condition.

This index is directly correlated with the active AWM of the system as:

  • AWM 0 = very low PQoS

  • AWM 1 = sufficient PQoS

  • AWM 2 = good PQoS

  • AWM 3 = best PQoS

A further analysis of the log files reveals the AWM distribution over the different months, making it possible to assess the impact of the HARPA-OS scheduler. In particular, Fig. 13.10 shows how AWM 0 was more frequent when less solar power was available, while AWM 2 and 3, which allow a higher quality of service, can be safely scheduled when there is no risk of reducing battery life. Table 13.10 contains the detailed data of Fig. 13.10 and highlights how AWM 0 and AWM 2 are active for a significant amount of time during the months when the available power was, respectively, scarce or plentiful. The data in these plots and table are computed over the overall time of the month, not on a per-day basis as for the UQoS.

Fig. 13.10 PQoS of the system: percentage of time for each month for each AWM

Table 13.10 PQoS of the system: percentage of time for each month for each AWM
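The monthly AWM distribution of Fig. 13.10 and Table 13.10 can be obtained by aggregating, for each month, the time spent in each AWM. The following Python sketch assumes a hypothetical log of (month, AWM, seconds) records; the numbers are synthetic and purely illustrative:

```python
"""Sketch of the per-month AWM time-share computation (synthetic data)."""
from collections import defaultdict

log = [("2016-01", 0, 2.1e6), ("2016-01", 1, 5.5e6), ("2016-01", 2, 1.2e6),
       ("2016-07", 1, 3.0e6), ("2016-07", 2, 2.6e6), ("2016-07", 3, 2.2e6)]

totals = defaultdict(float)
per_awm = defaultdict(float)
for month, awm, seconds in log:
    totals[month] += seconds
    per_awm[(month, awm)] += seconds

for month in sorted(totals):
    shares = {awm: 100 * per_awm[(month, awm)] / totals[month]
              for awm in range(4)}
    print(month, {a: f"{s:.1f}%" for a, s in shares.items()})
```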

These results are consistent with the data presented in Fig. 13.4, where the various AWM show different power requirements.

These results, together with those from the UQoS analysis, show the effectiveness of the HARPA methodology in increasing the monitoring performance of the Beesper SmartBridge, without impacting the uptime of the system, when a greater amount of energy is available.

In order to understand the impact on the system uptime also during the other periods, a comparison with a simulated system is presented in Sect. 13.3.

3 Evaluating HARPA Impact: A Simulation Approach

HARPA-OS provides a dynamic resource allocation system and thus a variable level of performance and power consumption. A simulated system with a standard static resource allocation is used to evaluate the impact of HARPA-OS on the Beesper SmartBridge uptime.

The simulated system exploits the data logged by the Beesper SmartBridge during the testing period, in particular the actual power consumption and the solar radiation available at the installation site. To further increase the accuracy of the simulation, a set of laboratory tests was performed to determine the efficiency of the various Beesper SmartBridge components, in particular the battery, the DC/DC converter and the solar charger. All this information has been used to create a custom energy simulator of the Beesper SmartBridge.

3.1 Available Energy Simulation

The total solar power generated is autonomously logged by the solar charger device, which the system queries every 30 s in order to save the current value. Since the system was not active the whole time, after a period during which the system was off the first value read from the solar charger corresponds to the total amount of energy generated since the last query.

To obtain a reliable simulation model, it is important to have realistic solar radiation data for each hour, and not only the total amount. It is therefore necessary to perform an interpolation process to estimate the energy generated while the system was off, while keeping the total generated energy consistent.
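The interpolation can be sketched as follows. Since the exact method is not detailed in the text, the lump-sum reading that follows an off period is simply spread uniformly over the elapsed hours, which keeps the total generated energy consistent:

```python
def redistribute_energy(readings):
    """Readings: list of (hours_since_previous_reading, energy_wh). After an
    off period the first reading lumps all the energy generated since the
    last query; here that lump is spread uniformly over the elapsed hours,
    keeping the total consistent. Uniform spreading is only an assumption,
    not the actual interpolation used for the simulator."""
    hourly = []
    for hours, energy_wh in readings:
        hours = max(int(round(hours)), 1)
        hourly.extend([energy_wh / hours] * hours)
    return hourly

# 1 h normal reading, then a 4 h gap whose 60 Wh lump is spread over 4 hours.
print(redistribute_energy([(1, 12.0), (4, 60.0)]))
# -> [12.0, 15.0, 15.0, 15.0, 15.0]
```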

The total amount of energy generated by the Beesper SmartBridge during the testing period has been measured as 25.89 kWh, while the total amount of energy available to the simulated system has been computed as 25.48 kWh, a difference of −1.58%, which makes the simulated values comparable with the real system. These values do not account for the solar charger power consumption and have therefore been scaled accordingly before being used in the simulator.

3.2 Application Working Mode Simulation

Two different simulation tests are performed, considering the system always running in AWM 1 (sufficient PQoS) or AWM 3 (best PQoS), as defined in Sect. 13.2.2.2.

Moreover, as seen in Fig. 13.10, AWM 1 is the most frequently active AWM; thus, similar uptime results to the HARPA-based system can be expected, apart from the exploitation of AWM 0 to save power. Instead, the simulation results with AWM 3 are likely to be quite different, given the much higher power requirements.

The simulation is performed by creating a model of the system that receives as input the simulated available energy and simulates application executions, battery and solar panel status hour by hour. The simulated test has the same temporal duration as the real-world test, in order to allow comparison. The required energy for each application execution is computed, depending on the selected AWM, using the data collected during system testing.

In order to have a fair comparison between the real system and the simulated one, only the power consumption related to the Video Rockfall detection application has been considered, since it is the only application that was active during the whole recording period.
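The simulation loop can be sketched as follows. Battery capacity, efficiencies and the per-execution energy used below are illustrative placeholders rather than the values measured in the laboratory tests, and a single application execution per hour is assumed for simplicity:

```python
"""Simplified hour-by-hour sketch of the static-AWM simulation loop; all
numeric parameters are illustrative placeholders."""

def simulate(hourly_solar_wh, exec_energy_wh, idle_power_w=6.08,
             battery_capacity_wh=800.0, charge_eff=0.85):
    battery_wh = battery_capacity_wh / 2
    successful_days = 0
    day_ok = False
    for hour, solar_wh in enumerate(hourly_solar_wh):
        # Charge the battery with the (efficiency-scaled) hourly solar energy.
        battery_wh = min(battery_capacity_wh, battery_wh + charge_eff * solar_wh)
        # Hourly demand: idle consumption plus one application execution.
        demand = idle_power_w * 1.0 + exec_energy_wh
        if battery_wh >= demand:
            battery_wh -= demand
            day_ok = True            # at least one successful operation today
        if hour % 24 == 23:          # end of the simulated day
            successful_days += day_ok
            day_ok = False
    return successful_days

# Two synthetic days: a sunny one and an overcast one.
solar = [0]*8 + [40]*8 + [0]*8 + [0]*8 + [6]*8 + [0]*8
print(simulate(solar, exec_energy_wh=0.0834))
```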

3.2.1 Uptime Days Comparison

Table 13.11 reports the uptime results of the simulated systems compared to the real HARPA-based system, together with the energy requirements for AWM 1 and AWM 3. The uptime is computed, as in Sect. 13.2.2, by considering whether the system performs a successful operation (data acquisition, data processing and result communication) during a single day. From this table, it is possible to observe how the HARPA system achieves higher uptime values, even accounting for the 1.58% energy error previously reported. Moreover, the small difference between the HARPA system and the simulated systems is consistent with expectations, considering how the uptime is computed.

Table 13.11 Uptime comparison between the simulated and HARPA-based systems

Nevertheless, this very high-level comparison is not able to highlight short-term differences between the different approaches, thus a more fine-grained measure, like the UQoS of the simulated systems, should be computed.

3.2.2 Uptime Quality of Service Comparison

In Sect. 13.2.2.1, the Uptime Quality of Service (UQoS) has been defined as the percentage of time the system is active over one day, and the results for the HARPA-based system were presented. The same definition and colour coding of Table 13.8 are used here for the simulated systems.

Table 13.12 reports a quantitative analysis comparing the HARPA system with the simulated systems.

Table 13.12 UQoS results of the simulated systems compared to the HARPA-based system

With respect to the AWM 1 simulated system, the HARPA system has a reduced number of UQoS-0 days, in which the minimum UQoS is not reached, increasing instead the number of UQoS-10 days thanks to its dynamic scheduling of AWMs. This is strong evidence of the effectiveness of a dynamic approach, such as the one presented in HARPA-OS, which makes it possible to reach a minimum level of quality of service by enforcing dynamic energy saving measures.

The comparison with the simulated AWM 3 system shows, as expected, a greatly reduced UQoS; in particular, it is not able to reach the very good UQoS-90 level consistently.

As a qualitative comparison between the HARPA system and the simulated systems, it is possible to analyse the UQoS in a calendar plot as shown in Fig. 13.11. The colour of the days represents the UQoS that the system achieved on the specific day. A similar plot was presented for the HARPA system in Fig. 13.8.

Fig. 13.11 UQoS comparison between the real and simulated systems in a calendar plot

The comparison between the HARPA system and the AWM 1 simulated system shows very similar UQoS levels, as expected, during most of the year. Nevertheless, it is important to remember that the HARPA system provides a higher level of performance, through AWM 2 and AWM 3, in particular during the months with more solar power available, as demonstrated in Sect. 13.2.2.2.

Moreover, even during summer time, when solar power is abundant, a dynamic approach like HARPA has an impact on the UQoS of the system: the simulated AWM 3 system is not able to maintain a constantly high UQoS, while the AWM 1 system shows more UQoS-0 and UQoS-10 days than the HARPA system.

Table 13.13 reports the analytical data of Fig. 13.11 and highlights in bold how the simulated AWM 1 and HARPA systems show consistent results while the simulated AWM 3 system is not able to reach similar performance levels.

Table 13.13 UQoS comparison between the real and simulated systems month by month

Figure 13.12 shows a qualitative comparison of the simulated systems and the HARPA system over the UQoS level reached during each month.

Fig. 13.12 UQoS comparison between the real and simulated systems month by month

4 Conclusions

The Beesper SmartBridge prototype presented in this chapter is the HARPA demonstrator in the low-end embedded systems domain.

The Beesper SmartBridge is a HARPA-based, solar-powered and battery-operated intelligent monitoring device able to autonomously perform markerless rockfall and rockwall composition identification (using frequent images from cameras, analysed through machine vision feature extraction and machine learning classification) and rockwall dilatation analysis (through machine learning using data from a WSN), as detailed in Sect. 13.1. Since the monitoring system is battery operated, the main target is to provide consistent monitoring performance while maximizing the system uptime.

The Beesper SmartBridge has been successfully installed at Pietra di Bismantova (Reggio Emilia, Italy) from 13th November 2015 to 18th September 2016 for a total of 310 days as reported in Sect. 13.2.

The laboratory testing of the system and applications, reported in Sect. 13.1.4, shows the suitability of the Beesper SmartBridge for the HARPA approach, given that the various applications run concurrently with tunable power consumption.

The system was able to log accurate information during the whole test duration, and Sect. 13.2.1 presents an analysis of the load power, battery percentage and power generated by the solar panel with respect to the various AWMs, highlighting how the AWM selection process based on external factors is successful.

Finally, the average energy required by each application AWM has been computed from the logged data, giving a real-world estimate of the energy needs of this system. These results, reported in Table 13.6, show that the selection of an AWM has a significant impact on the total energy requirements (up to 35% more than the idle power); thus a dynamic resource allocation approach like HARPA-OS is appropriate.

HARPA-OS provides a temperature- and energy-aware scheduler, Tempura, and Sect. 13.2.1.5 reports its impact during the whole test period. It has been demonstrated how the Tempura scheduler allocates resources to the two applications in order to provide consistent execution times, while keeping the temperature of the system under control. It is worth remarking that the system operated correctly both during winter, with low temperatures, and during summer, with higher environmental temperatures.

The Beesper SmartBridge is a monitoring system in which the duty cycles of the various applications directly influence the quality of the system performance; thus, the duty cycle is the main “knob” used to tune system performance. The various Application Working Modes (AWMs) are defined with different “wait intervals” between executions and thus exhibit different levels of average power consumption. The main system “monitors” are the battery level, the available solar power and the system temperature.

To evaluate the performance of the system, three different measures of the quality of service have been defined in Sect. 13.2.2, and two simulated systems with static AWM definitions have been created.

The simulated systems exploit the data logged by the Beesper SmartBridge during the testing period, in particular the actual power consumption and the solar radiation available at the installation site. To further increase the accuracy of the simulation, a set of laboratory tests was performed to determine the efficiency of the various Beesper SmartBridge components, in particular the battery, the DC/DC converter and the solar charger. All this information has been used to create a custom simulator.

The uptime evaluation resulted in 286 days of uptime out of 310 (92.26%) for the HARPA-OS-based system, while the simulated systems in AWM 1 and AWM 3 scored, respectively, 267 (86.13%) and 246 (79.35%) days of uptime. The small difference between the HARPA system and the simulated systems is consistent with the uptime definition and with the fact that AWM 1 is the AWM most frequently selected by HARPA-OS, as reported in Table 13.10.

The Performance Quality of Service (PQoS) focuses on the distribution of the application AWMs during each month, from AWM 0 (very low PQoS) to AWM 3 (best PQoS), and Sect. 13.2.2.2 shows the impact of the HARPA-OS dynamic scheduler. In particular, AWM 0 is more frequent (up to 23.90% in January) when less solar power is available, while AWM 2 can be safely scheduled when there is no risk of reducing battery life (up to 33.15% in July).

The results are confirmed when comparing the Uptime Quality of Service (UQoS) performance index, as reported in Table 13.12: the HARPA system, compared to the simulated AWM 1 system, reduces the number of days with insufficient UQoS-0 by 60% and increases the number of days with sufficient UQoS-10 by 47%, thanks to its dynamic scheduling of resources.

From Table 13.13, it is possible to observe that HARPA-OS also has a strong impact during the months with more solar power available. For instance, the simulated AWM 3 system reaches the UQoS-90 level only 58% of the time, while the HARPA system reaches the same UQoS-90 almost 100% of the time during the summer months.

It is not possible to compare the Performance Quality of Service (PQoS) index, since the AWM is fixed in the simulated systems.

These important results demonstrate how HARPA-OS is able to allow a greater PQoS when possible, while keeping a minimum PQoS and a consistent UQoS during the rest of the time.

All these results provide strong evidence of the effectiveness of a dynamic approach, such as the one presented in HARPA-OS, in increasing the overall quality of the service provided by a real-world system equipped with such functionality. Moreover, the Beesper SmartBridge, with HARPA-OS, has been installed outdoors for almost one year without any need for maintenance, demonstrating the stability and reliability of the hardware and software solutions while providing a good level of performance. Finally, the system has collected an extremely valuable dataset of high-quality pictures, temperatures and rock dilatations over almost one full year, which will be used to greatly improve the quality of the computation results in the next development phase.