
1 Introduction

Wireless sensor networks, social networks and distributed systems are becoming the most important ingredients of the new computing era, the Internet of Things (IoT). The IoT digitally interconnects everyday objects with the Internet, including electronic ones such as home appliances or cars, which may be controlled by applications running on a Smartphone (SP), tablet, or any other personal device, e.g. [1]. By 2017, there will be 8.6 billion handheld or personal mobile devices, nearly 1.4 mobile devices per capita; that is, over 10 billion mobile-connected devices (including M2M modules), exceeding the world's population at that time (7.6 billion) [2]. These multi-sensor, multi-network electronic devices have thus become a fundamental part of the bridge to knowledge of the physical world, reaching any kind of (measurable) information, anytime, anywhere.

It is true that the quality of MEMS sensors, including those embedded into SPs, is lower than that of dedicated instruments in their respective fields (take an accelerograph in seismology, or a magnetometer in navigation, as examples). However, two important considerations must be taken into account. First, SP manufacturers are continuously expanding the hardware (Hw) and software (Sw) features of SPs, making their measurements more reliable and accurate. Second, by collecting data from a large number of SPs, a technique known as mobile crowd-sensing (MCS), it is possible to obtain a huge, low-cost overlay network that uses the sensing capabilities of individual SPs, where the average of the community (given a sufficient number of individuals) can be computed with a high degree of accuracy, e.g. [3]. Nowadays, community resources such as SPs are exploited in opportunistic-sensing applications [4] that attempt to solve current global problems in areas such as telecommunication, entertainment and seismology. It is on one of these fields that our research focuses: we present a real-time and economic solution to a natural hazard, earthquakes. The earth's movements should be regularly monitored; through an online service it is possible to learn about a zone and its hazard in seconds. Moreover, with affordable sensors it is possible to monitor a whole area, learn its physical characteristics and, most importantly, detect seismic movements and raise early warnings that provide extra time for making better decisions. This is one of the key contributions of this work.

However, the SP market is full of highly heterogeneous devices with different hardware and software characteristics. Without a unification process, this heterogeneity would restrict the usable sensors, decreasing the number of devices in the network and therefore its accuracy. The Open Geospatial Consortium (OGC) [5] solves this problem with the Sensor Web Enablement (SWE) framework [6] and, specifically, its Sensor Observation Service (SOS) component, which defines a unified communication standard for sensors and sensor services, achieving a higher level of compatibility. This means lower costs and better-quality sensor communications, because a single protocol is used instead of several proprietary ones, which cause serious interoperability problems [7]. Much of this project focuses on matching the communication scheme to the system requirements: standardization of sensor data, quick access to data, real-time communication, and immediate notifications. For this, we adopt SWE as our distributed-systems paradigm.

Seismic activity is increasing, and with it the risks it brings; so much so that April 2014 saw a world-record number of large earthquakes (greater than 6.5) [8]. Some places are more exposed to this type of natural disaster, such as the countries that make up the "Pacific Ring of Fire", where at least 80% of all earthquakes occur [9]. Take the case of Ecuador, the country on which the validation of the proposed architecture is based: in 2013 alone it registered 2,420 seismic events (more than 6 per day), and around 10% of them exceeded a magnitude of 4.

The rest of the paper is organized as follows: previous and related work in the area, with their respective contributions, can be found in Sect. 2. Section 3 contains the proposed architecture and its justification. The evaluation and results are presented in Sect. 4, and Sect. 5 presents the conclusions of our research along with future work.

2 Motivation and Related Work

The use of SPs in the field of Earthquake Early Warning Systems (EEWS) is booming. Reference [10] is a project that detects seismic events using MEMS accelerometers: if the acceleration exceeds a threshold value, the information is transmitted to a server, which estimates the intensity and the hypocenter. References [11, 12] are projects that use static devices composed of a fixed accelerometer and a personal computer, providing good accuracy by using P and S waves [13] as the peak-detection mechanism. Reference [14] is a system that uses SPs to measure the acceleration and then determine the arrival of an earthquake; in contrast to our proposal, which uses the accelerometer as the principal sensor, [14] also includes a compass sensor for peak validation. Reference [15] uses MEMS accelerometers, a seismological processing unit and 3 types of alarms, with an average effective warning time of 8.1 s for detection and 12.4 s before the maximum vibrations. On the other hand, there are EEWS based on other types of communication. Reference [16] was in operation from December 2000 to June 2001 in Taiwan; it proposed a Virtual Subnet Network for EEWS able to detect an event in 30 s, with warnings arriving on average 22 s after the origin time at cities more than 145 km from the source. The architecture proposed here detects the maximum acceleration peak 12 s in advance, surpassing the results of the projects mentioned above (see Sect. 4). Reference [17] gathers data from SPs and implements a distributed decision-making process on virtual servers provided by the Google App Engine architecture [18]. References [19, 20] detect the waves of a quake and report the event using Google Cloud to Device Messaging (C2DM).

All of these works have motivated us to build a different and innovative EEWS. We propose exploiting community resources such as SPs together with a reliable and robust real-time communication infrastructure, especially during a natural disaster. Contrary to [10], our seismic detection uses dynamic thresholds to distinguish between a repetitive sudden movement by the user and an actual seismic event. Our architecture addresses additional challenges, such as incorporating heterogeneous devices (thus gaining scalability), new detection algorithms, mobility, and suitability to the demands of the application environment, unlike [11, 12]. In contrast to our work, [12] performs accurate validations without taking into account either the processing time or the computational cost, considerations that are implicit in our working process due to SP usage. Furthermore, our work complements [11, 12], widely covering the future work proposed by both. Reference [14] still has great limitations in usability and efficiency: an accurate orientation cannot be obtained if the device is in constant motion, due to issues with the compass, which forces the system to remain stationary. Should disaster strike one day and Google cease service, [17, 19, 20] would cease to work too; more realistically, should Google go down for an hour, these projects would also be down for an hour without the least idea of what happened. They are completely at the mercy of Google.

A service-oriented approach for sensor access and usage is used by several groups: [21, 22] use proprietary standards and data repositories, each with different functions and different ways of accessing and using sensors. SWE and its SOS component have been successfully applied to indoor [23] and outdoor scenarios. Reference [24] presents an Internet-based urban environment observation system that is able to monitor several environmental variables (temperature, humidity, seismic activity) in real time. In conclusion, the SWE approach offers a standard method for using sensor data that facilitates a rapid response to a disaster scenario, and this work takes advantage of it.

Finally, regarding post-event management, a project worth emphasizing is [25], a European Union project funded with 1.2 million euros. Reference [25] investigates current media such as Facebook, Twitter and YouTube, and uses them in crisis management to promote collaboration between first responders and citizens. Its open issues include the marginalization of people who do not use social media and the reliability of information, e.g. the spreading of false information.

3 System Architecture

The accelerograph network developed in this paper is based on a three-layer hierarchical architecture for EEWS, as shown in Fig. 1. On Layer 1, SPs are used as processing units and send samples to the Intermediate Server (IS), corresponding to Layer 2, as soon as the SP detects a seismic peak through a specifically designed detection process. Each IS decides whether there was a seismic event or not, immediately notifies its own users and, at the same time, communicates the incident to the Control Center (CC), the third layer. The data gathered from the sensors are inserted into a SOS (SWE) in the IS. The CC aggregates different applications that make decisions based on the information available in each SOS.

Fig. 1. Hierarchical 3-layered architecture: Layer 1: Smartphone application and acceleration processing; Layer 2: The intermediate server; Layer 3: The control center.

The three layers and the SOS are integrated in a scalable manner (1 or more SPs, 1 or more ISs) until the system is complete, in order to verify that they interact properly, cover the required functionalities and conform to the requirements. The design furthermore contributes to non-functional requirements: agile and easy portability, simple maintenance, integrity, confidentiality and availability of information (security) across the architecture, reduced cost in locating errors and, indispensably, an economical, huge sensor network.

3.1 Layer 1: Client Application and Acceleration Processing

The SP application must be simple, must not interfere with the user's daily activities, must not drain the battery and must be a great help during and after a seismic catastrophe, in order to assist crisis managers in making better decisions. Figure 1 shows the designed and implemented algorithm to detect the acceleration peaks that represent the destructive power of an earthquake.

An accelerogram is defined as the union of seismic signal and noise over time. The Discrete Fourier Transform (DFT) [26] is used to move from the time domain to the frequency domain, where low-pass filters remove the high frequencies corresponding to noise. The Short Term Averaging/Long Term Averaging algorithm (STA/LTA) [27] is then applied because of its well-known event-detection capabilities in seismology, its low computational load and its low energy consumption, all of which contribute to the overall success of the system. We keep a dynamic threshold in order to distinguish between the user's periodic movements (running, jogging, or walking) and a real seismic peak: samples that fall below the threshold calculated in each window are discarded, and processing continues. The application accesses the GPS sensor to obtain the user's current location, which is necessary for validation at the IS and important for the SOS; if the application cannot access this sensor, the sample is not sent to the IS. Finally, since a hard real-time system must maintain a single timeline throughout the architecture, the NTP protocol is used to synchronize the whole network.
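As an illustration of this processing chain, the following minimal sketch combines a DFT-based low-pass filter, the STA/LTA ratio and a dynamic trigger threshold. The cutoff frequency, window lengths and threshold rule are illustrative assumptions, not the exact values used in the deployed application.

```python
# Sketch of the Layer-1 peak-detection chain: DFT low-pass filtering
# followed by an STA/LTA trigger with a dynamic threshold. All numeric
# parameters here are illustrative assumptions.
import numpy as np

def lowpass_fft(signal, fs, cutoff_hz=10.0):
    """Remove high-frequency noise via the DFT: zero bins above cutoff."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def sta_lta(signal, fs, sta_win=1.0, lta_win=30.0):
    """Classic STA/LTA ratio computed on the signal energy."""
    energy = signal ** 2
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="same")
    lta[lta == 0] = np.finfo(float).eps   # avoid division by zero
    return sta / lta

def detect_peaks(accel, fs=50.0, base_trigger=3.0):
    """Dynamic threshold: raise the trigger level with the recent ratio
    spread so periodic user motion (walking, jogging) is ignored."""
    ratio = sta_lta(lowpass_fft(accel, fs), fs)
    threshold = base_trigger + 2.0 * np.std(ratio)  # adapts per window
    return np.flatnonzero(ratio > threshold)        # candidate peak indices
```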

3.2 Layer 2: Intermediate Server

The whole process performed by the IS is required to ensure the global reliability of the system, under the following assumptions: (1) the samples from the first layer are independent of each other; (2) the greater the number of samples analyzed, the higher the reliability; (3) mathematical and statistical processes support the data fusion; (4) the IS is able to receive information from heterogeneous devices. Figure 2 presents an overview of the IS process, which is described below in greater detail:

Fig. 2. Intermediate Server: Sliding window configuration (A, B, C, D, E) (Kruskal Wallis) and temporal and spatial analysis.

Spatial Analysis.

Each IS works over a physical area, whereas other projects either limit themselves to subdividing the samples into rectangular areas, as in [17], or do not take location into consideration at all. Attenuation equations show that intensity decreases as distance increases; in a seismic event, a zone "A" would measure a greater acceleration than SPs in a farther zone "B". A balance is necessary between effectiveness and number of samples: if the coverage distance is too small, the IS may be left without samples to test; if it is too large, the samples lose correlation. Using Ecuador's attenuation equations [28], and setting a magnitude of 5 as the minimum intensity with its corresponding acceleration, the calculated distance was 35 km. Samples whose latitude and longitude do not satisfy the Haversine distance check [29] against the IS's location are discarded and must be handled by a closer IS, as in Fig. 2.
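A minimal sketch of this spatial filter follows; the function and constant names are ours, but the 35 km radius is the one derived above from the attenuation equations.

```python
# Spatial filter sketch: keep a sample only if the great-circle
# (Haversine) distance between the SP and this IS is at most 35 km.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_DISTANCE_KM = 35.0   # magnitude-5 minimum intensity -> 35 km radius

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def in_coverage(sample_lat, sample_lon, is_lat, is_lon):
    """True if the sample belongs to this IS; otherwise it is
    discarded here and must be handled by a closer IS."""
    return haversine_km(sample_lat, sample_lon,
                        is_lat, is_lon) <= MAX_DISTANCE_KM
```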

Sampling Test.

A minimum sample test [30] is necessary to determine whether the number of SPs that have sent a seismic peak is enough to infer that an earthquake has actually happened. It determines how many active SPs, out of all those registered in the IS, are enough to generalize to the population with a confidence level of 95% (0.95) and a margin of error of 5% (0.05). Both SP and IS perform validations to determine which SPs are alive (active) and which are not: first, SPs send beacons and constantly monitor the network for reconnections; second, the IS validates the time of the last connection and, after a fixed period (30 min), changes the SP's state to inactive.
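A hedged sketch of such a check is shown below. We assume the test in [30] corresponds to the standard Cochran sample-size formula with finite-population correction, which matches the confidence and error levels above; the helper names are ours.

```python
# Minimum-sample sketch: Cochran's formula with finite-population
# correction (an assumption about what the test in [30] computes).
from math import ceil

def minimum_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Smallest number of active SPs needed to generalize to all SPs
    registered in the IS (worst-case proportion p = 0.5,
    z = 1.96 for a 95% confidence level)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)   # finite-population correction
    return ceil(n)

def earthquake_plausible(reporting_sps, registered_sps):
    """Enough independent peak reports to suspect a real event?"""
    return reporting_sps >= minimum_sample_size(registered_sps)

# e.g. with 2,000 registered SPs, about 323 reports are required
```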

Kruskal Wallis Test.

Kruskal-Wallis [31] is a non-parametric analysis-of-variance (ANOVA) test used to compare samples or groups of samples, and it fits the nature of the seismic data well. A periodic sliding-window algorithm has been developed in order to couple Kruskal-Wallis to seismic data over time. Figure 2 shows the configuration (A, B, C, D, E) that is tested to find the optimal one, i.e. the one whose results show the best correlation as measured by the Kruskal-Wallis Probability (KWP) and, most importantly, the longest warning time ahead of an earthquake. The optimal configuration (explained in Sect. 4) is (0.3, 1, 20, 5, 1): the algorithm is executed about three times per one-second window (B = 1, A = 0.3, E = 1), checking whether the variability of the samples exceeds a KWP of 0.5 or not. Next, to eliminate the risk of notifying the user of very close replicas, it validates that the time between the last and the present event is at least 20 s (C = 20). Finally, the minimum intensity to alert on is 5 (D = 5).
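The sketch below illustrates one plausible reading of this sliding-window scheme using SciPy's Kruskal-Wallis implementation. The parameter interpretation (A as the step, B as the window size), the direction of the KWP comparison and the hypothetical intensity_of helper are our assumptions; E is left out because its role is not fully specified in the text.

```python
# Periodic sliding-window Kruskal-Wallis sketch with the optimal
# configuration (A, B, C, D, E) = (0.3, 1, 20, 5, 1), interpreted as:
# every A = 0.3 s compare a B = 1 s window against the previous one;
# alert only if C = 20 s have passed since the last event and the
# estimated intensity reaches D = 5.
from scipy.stats import kruskal

A_STEP_S, B_WINDOW_S = 0.3, 1.0
C_WAIT_S, D_MIN_INTENSITY = 20.0, 5
KWP_LIMIT = 0.5

def window_pairs(samples, fs):
    step, size = int(A_STEP_S * fs), int(B_WINDOW_S * fs)
    for start in range(size, len(samples) - size, step):
        yield samples[start - size:start], samples[start:start + size]

def detect_events(samples, fs, intensity_of, last_event_t=-C_WAIT_S):
    """intensity_of is a hypothetical helper mapping a window of
    accelerations to an MMI-like intensity estimate."""
    events = []
    for i, (prev_w, cur_w) in enumerate(window_pairs(samples, fs)):
        _, p_value = kruskal(prev_w, cur_w)
        t = B_WINDOW_S + i * A_STEP_S              # current window time
        if (p_value < KWP_LIMIT                    # windows differ (assumed)
                and t - last_event_t >= C_WAIT_S   # skip close replicas
                and intensity_of(cur_w) >= D_MIN_INTENSITY):
            events.append(t)
            last_event_t = t
    return events
```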

Message Queue Telemetry Transport (MQTT).

When the IS detects a seismic event, it sends an alarm to its users in coverage and, at the same time, to the CC, using MQTT [32] as the messaging protocol for real-time notifications. MQTT is well suited to devices such as SPs, which have reduced resources: it has low power usage, distributes information efficiently to one or many receivers and, in addition, offers privacy and security. MQTT is also easy to deploy, since it only requires modifying a configuration file in which parameters such as security or QoS are set according to the requirements of each system.
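As a small illustration, the following sketch publishes an alarm with the paho-mqtt client; the broker address, topic name and JSON payload layout are illustrative assumptions, since the paper only prescribes MQTT itself.

```python
# IS-side alarm sketch using paho-mqtt. Broker host, topic and payload
# format are hypothetical placeholders, not values from the paper.
import json
from paho.mqtt import publish

BROKER_HOST = "is.example.org"      # hypothetical IS broker
ALARM_TOPIC = "eews/alerts/quito"   # hypothetical topic layout

def publish_alarm(intensity, lat, lon, event_time):
    payload = json.dumps({
        "intensity": intensity, "lat": lat, "lon": lon, "time": event_time,
    })
    # QoS 1 gives broker-acknowledged delivery; retain=True lets SPs
    # that reconnect after the shaking still receive the last alert.
    publish.single(ALARM_TOPIC, payload, qos=1, retain=True,
                   hostname=BROKER_HOST, port=1883)
```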

Sensor Web Enablement – Sensor Observation Service.

At the present time it is of the utmost importance to achieve a high degree of information decentralization, and SWE technology makes it possible to access information easily and securely, enabling the near real-time communication that this research requires. SWE describes sensor observations and (web) services to access those data structures, especially over the World Wide Web. The advantage of using this standard is that it does not prescribe any implementation, so each project can build its own service architecture in its preferred language. This research focuses on the SOS, which supports a common data-registration model for any SP, leaving the SOS's capacity as the only restriction; in our design the SOS is distributed (one SOS per IS), which makes it possible to gather the data flows generated by the sensors without scalability problems.

Data in the SOS.

Data quality and QoS interoperability are important issues to address in sensor-web standards development activities. A key communication component at the application level is the interaction with the SOS. The main advantage is that the interface is provided via the web (HTTP), so that each SP, regardless of its characteristics, can easily communicate with the SOS (i.e. registering and inserting observations). The SOS basically offers two levels of interfaces (Fig. 3):

  • An interface for sensors: it consists of registering each sensor in the SOS and then sending measurements. The first step is performed by means of a registerSensor operation, which saves a new sensor. Once the sensor has been registered, it can start sending measurements at certain intervals, which depend on the physical quantity being measured and on the degree of control required; this operation is called insertObservation. The SOS supports both fixed and mobile sensors; in the latter case, mobile sensors must also send their current location (besides the measurements in the insertObservation operation) through an operation called updateSensor. A schematic request is sketched after Fig. 3.

  • An interface for external processes, through which any application can access historical data (even real-time data) regarding any registered sensor. Note that, as the SOS service centralizes all sensors, it is possible to search and apply simple spatio-temporal filters, e.g. “get all sensors that monitor temperature” or “get all sensors located in an area”.

Fig. 3. Basic sequence diagram for the SOS service.
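The schematic sketch below shows how an SP could post an observation to the SOS over HTTP. The real OGC SOS InsertObservation payload follows the O&M/SWE XML schemas and is considerably more verbose; the endpoint URL, offering id and the trimmed XML here are illustrative placeholders only.

```python
# Schematic SP-side interaction with the SOS over HTTP. Endpoint and
# XML body are placeholders; real requests follow the OGC SOS schemas.
import requests

SOS_ENDPOINT = "http://is.example.org/sos"   # hypothetical IS endpoint

INSERT_OBSERVATION_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<sos:InsertObservation service="SOS" version="2.0.0"
    xmlns:sos="http://www.opengis.net/sos/2.0">
  <!-- one om:OM_Observation per sample: procedure (the SP), observed
       property (acceleration), phenomenon time, location, result -->
  <sos:offering>sp-accelerometer-offering</sos:offering>
  <sos:observation><!-- om:OM_Observation elided for brevity --></sos:observation>
</sos:InsertObservation>"""

def insert_observation(xml_body=INSERT_OBSERVATION_TEMPLATE):
    """POST an InsertObservation request; the SOS answers with an XML
    acknowledgement (or an ows:ExceptionReport on failure)."""
    response = requests.post(
        SOS_ENDPOINT, data=xml_body.encode("utf-8"),
        headers={"Content-Type": "application/xml"})
    response.raise_for_status()
    return response.text
```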

3.3 Layer 3: Control Center

The CC must behave as a good command-and-control post, delivering information about global risks to the emergency management centers (firefighters, police, ambulances and others) and helping them make proper decisions; an example of such a center is the Geophysical Institute of the National Polytechnic School (IGEPN) [33] in Ecuador. The CC allows the system to extend from a pre-event to a post-event management schema. In a first stage, each SP helps the CC by sending multimedia information such as comments, pictures and videos, helping to build a global, real-time view of what is going on in a disaster; in a second stage, the CC helps the users make better post-event decisions by providing "tips" about the closest aid centers and safer, faster routes (users, on their own, are totally unaware of the real situation of the disaster).

4 Performance Evaluation

For a more accurate and realistic validation, the IGEPN provided seismic data from recent earthquakes in (or near) Ecuador: (1) 2013/02/09, 09:14; Pasto, Colombia, 345 km north of Quito; lasted 30 s; Richter magnitude 7.4; in Ecuador it was felt in around 5 provinces, with an intensity of 4 in Quito; (2) 2012/02/08, 13:50; Esmeraldas, Ecuador, 288 km north of Quito; lasted 6 s; in Quito it was felt with an intensity of 5.2; (3) 2011/10/29, 10:54; Quito, Ecuador; Richter magnitude 4.3.

To determine the best set of configuration parameters (A, B, C, D, E), the validation process is run for each earthquake and the configurations are then compared, taking two considerations into account. First, the parameter C (Waiting Time), which corresponds to the minimum time between two detected seismic peaks, is set to 1 (C = 1) in order to compare which configuration detects the highest number of seismic peaks, as shown in Table 1; if C is too large, a peak (aftershock) cannot be detected even if it is higher than the previous one. Second, the parameter D (Minimum Intensity) is set to 2 (D = 2) to check that the algorithm is able to sense an earthquake even of very low intensity. In the optimal configuration, however, D is set to 5, which according to the Modified Mercalli Intensity Scale (MMI) [34] has a very light damage potential and is perceived as moderate. The analysis of each earthquake signal leads us to choose the best configuration, achieving a balance at (0.3, 1, 20, 5, 1):

Table 1. Intermediate Server. Sliding window configuration (A, B, C, D, E) comparison.
  • It reduces the number of false positives thanks to a KWP average of 0.362, low enough to avoid recognizing all data as seismic peaks yet higher than in the other configurations.

  • Table 1 highlights the good correlation between window samples, and the data correlate well precisely at the instant when the maximum seismic peak occurs.

  • This configuration allows an early warning to be raised 12 s before the maximum seismic peak occurs, providing extra time even for the epicentral zone; this is the best result obtained.

  • The optimal configuration detects the highest number of seismic peaks in each signal; for example, it reached 11 detected peaks in the Pasto, Colombia earthquake.

  • This configuration perceives a lower MMI, implying that it is possible to raise alerts for earthquakes causing less damage; moreover, this can become extra information that the IGEPN (or another CC) needs.

5 Conclusions and Future Work

It is impossible to know where and when a seismic event will happen; an earthquake is unpredictable at the epicenter. Therefore, the best way to mitigate damage to infrastructure, assets and even human lives is early detection, for which a real-time architecture and efficient communication between actors become requirements. In our case, a large part of the success rests on heterogeneous actors (smartphones), which together form Layer 1 of our architecture. The key point is therefore the standardization of sensor data, which is achieved by SWE and its SOS component: they gather sensor observations in a standard way, allowing easy integration with any terminal and improving communication across the whole design. A Web-based (HTTP) connection lets all of these community sensors communicate with their SOS (i.e. registering and inserting observations) in a real-time and secure way. Furthermore, the incorporation of a SOS makes it possible to have thousands of sensors, each with different advantages and limitations, which translates into efficiency and accuracy.

This yields a modular and scalable architecture. Layer 2 is a server, named the Intermediate Server, with enough capacity to listen to and process the SPs' samples, detect a seismic event and notify all clients in the covered area using the information collected in the SOS. This server implements temporal and spatial analyses not present in other works, contributing to the success of the proposal. The last layer, the Control Center, can manage the current information properly, first to help the aid centers distribute their resources (human or monetary), and second to help the users make better post-event decisions, given that they are totally unaware of the real situation of the disaster.

The architecture was validated by means of actual past data from Ecuador, a country under constant seismic risk. Our solution anticipates the maximum seismic peak by 12 s at the seismic focus; this margin can be greater in areas farther from the epicenter, and the benefits can grow depending on the earthquake's features (time, duration and location).

Given the limitations of the validation, our first step will be to improve the structure of the testing process and to reach agreements with centers that have suitable testing facilities, such as rooms with earthquake simulators, to achieve better validation and improvement of the overall system. As further work, it would be interesting to address more disaster scenarios, considering totally heterogeneous multiple sensors under the same design and coupling the detection process, in the first two layers, to the type of natural disaster: fires, volcanic eruptions, and many more.