1 Introduction

The application of Wireless Sensor Networks (WSNs) is widespread and growing fast [1]. The design of intelligent sensors with low power consumption, low weight, and wireless connectivity makes it possible to use sensors in healthcare. Sensors attached to a patient’s body can sense, process, and generate data (such as the pulse rate of a critically ill patient), which are eventually delivered to a control center for analysis by medical personnel. However, this introduces challenges such as congestion control and energy consumption, which are typical issues in Healthcare Wireless Sensor Networks (HWSNs) [2, 3].

WSNs may be used in a variety of healthcare applications. For example, in hospitals, WSNs can monitor the ECG of heart disease patients, enabling early detection of problems such as heart attacks. At home, asthma and stress monitoring, as well as monitoring of elderly or disabled people, is possible using WSNs [2, 4–7]. WSNs may also be used in medical emergencies. In the majority of emergency cases, such as diabetic crises, reaction time is a critical issue: a long reaction time may cause irreparable damage or even lead to the death of the patient. Monitoring via sensors allows remote (non-hospital) monitoring of the patient. It also increases the efficiency of primary medical care in hospitals by reducing the caregiver’s reaction time. Here, the sensors monitor a patient’s physical condition and send vital signs to the hospital or doctors immediately. Patients can be prioritized based on the severity of their condition. Accurate and quick delivery of patient vital signs is crucial for making efficient and real-time decisions [8]. Applications of sensor networks should provide adequate quality of service to guard against delay and packet loss, and should also enhance the lifetime of the network.

In many HWSNs (for example, when sensors are implanted inside the patient’s body) [4, 9], the nodes must work continuously and without interruption for months, and sometimes years, without the possibility of replacing or recharging the energy source. Therefore, optimizing the energy consumption of the sensor nodes is always a critical issue. In a medical emergency, different types of sensors may sense a large number of events in a short period of time, produce large amounts of data, and try to transmit these data to the sink (medical center) with the help of intermediate nodes. This leads to bursts of data on the network, which in turn may cause congestion at the intermediate nodes. Congestion directly increases the end-to-end delay and the packet loss. In modern wireless biomedical sensor networks, increases in these two parameters for packets that carry ECG signals may even result in the death of the patient [10]. Furthermore, when congestion occurs, packet loss causes packet retransmissions to increase accordingly. Retransmission directly affects the lifetime of battery-limited nodes, which in turn reduces the network lifetime. For these reasons, congestion in HWSNs is more challenging than in other applications.

In WSNs, two types of congestion may occur [11, 12] (see Fig. 1). The first type, node-level congestion (Fig. 1a), is common in conventional networks. It happens when the packet arrival rate exceeds the packet service rate, causing buffer overflow in the node. This type of congestion can result in packet loss and increased queuing delay; packet loss leads to retransmission, which consumes additional energy. The second type is link-level congestion (Fig. 1b). Wireless channels are shared by several nodes using Carrier Sense Multiple Access (CSMA) protocols, and collisions may occur when multiple active sensor nodes try to seize the channel at the same time. Link-level congestion increases packet service time, decreases link utilization (which reduces overall throughput), and wastes energy at the sensor nodes. Both types of congestion have a direct impact on energy efficiency and QoS.

Fig. 1

Congestion conditions in wireless sensor networks. a Node-level congestion (buffer overflow), b link-level congestion (link collision)

Recently, congestion control in WSNs has received considerable attention [13, 14]. It is composed of three mechanisms: congestion detection, congestion notification, and rate adjustment. Depending on the congestion control protocol, the following objectives may be pursued to improve its efficiency: (1) Energy efficiency must be improved in order to extend the system’s lifetime; therefore, in congestion control protocols, packet loss must be avoided or reduced to the lowest possible value in order to save energy. (2) Traditional QoS metrics such as packet loss, packet delay, and throughput must be supported; for example, in multimedia applications the end-to-end delay also needs to be guaranteed [15]. (3) Fairness should be guaranteed so that each node achieves a fair share of the bandwidth.

The existing congestion control protocols for WSNs cannot be directly applied to HWSNs for two reasons. First, these protocols do not consider the special QoS requirements of the data carried in these environments. Second, these protocols treat the congestion problem only after it occurs. In HWSN applications, it is also necessary to avoid congestion in advance, before it occurs, rather than only reacting to it.

In this paper, an optimized congestion management protocol is proposed. We propose an Active Queue Management (AQM) scheme to avoid congestion according to a packet drop probability. We also propose a three-state automaton to calculate the congestion level in each node, and an optimization function for calculating the new sending rate of the child nodes.

The rest of this paper is organized as follows. In Sect. 2, a brief review of prior related work on congestion control is presented. The details of the proposed model are explained in Sect. 3. In Sect. 4, the performance of the proposed model is evaluated using computer simulation, and Sect. 5 concludes the paper.

2 Related Works

In recent years, congestion control protocols have been studied extensively in WSNs [11, 12, 15–24]. Congestion is an important problem in wireless sensor networks because it leads to packet loss and retransmissions, which waste the sensors’ limited battery energy.

Congestion Detection and Avoidance (CODA) [16] uses open-loop and closed-loop rate adjustment. In open-loop hop-by-hop backpressure, a node that experiences congestion broadcasts backpressure messages upstream towards the source nodes to reduce their sending rate. In closed-loop multi-source regulation, the sink controls the congestion of multiple sources. CODA detects congestion based on buffer occupancy as well as wireless channel load, and its rate adjustment algorithm works like additive increase multiplicative decrease (AIMD).

Congestion Control and Fairness (CCF) [12] is a protocol that uses packet service time to detect congestion in each intermediate sensor node and to deduce the available service rate. This protocol controls congestion in a hop-by-hop manner, with exact rate adjustments based on the available service rate and the number of each node’s children. CCF guarantees simple fairness by giving each node the same throughput. However, because its rate adjustment relies only on packet service time, it can suffer from low utilization when sensor nodes do not generate enough traffic or when there is a significant Packet Error Rate (PER).

In Priority for Congestion Control Protocol (PCCP) [11], a hop-by-hop upstream congestion control protocol for WSNs is proposed. The congestion degree in PCCP is measured as the ratio of packet inter-arrival time to the packet service time. This protocol utilizes a cross-layer optimization and imposes a hop-by-hop approach to control congestion based on the introduced congestion degree and the node priority index. It has also been demonstrated that PCCP achieves energy efficiency and flexible weighted fairness for both single-path and multipath routing.

The Learning Automata-Based Congestion Avoidance Scheme (LACAS) [18] is appropriate for HWSN applications. The advantage of LACAS is that it is intelligent: it learns from the past and improves its performance over time. The automata stationed in the different sensor nodes select the optimal action at each moment based on knowledge of previous congestion levels. The results show that the algorithm successfully avoids congestion in typical HWSNs, which is needed for a reliable congestion control mechanism.

In priority-based rate control [15], a model is presented for congestion control and service differentiation in WMSNs. The proposed congestion control protocol adjusts the traffic source rates based on the current congestion in the upstream nodes and the priority of each traffic source. Results show that it achieves a low packet loss probability, provides low queuing delay, and guarantees bandwidth for high-priority real-time traffic.

Fairness-Aware Congestion Control protocol (FACC) [17] has provided a new mechanism to control congestion and achieve reasonably fair bandwidth allocation in resource-constrained WSNs. In this protocol, intermediate sensor nodes are categorized into near-source nodes and near-sink nodes. A per-flow state is maintained by the near-source nodes and an approximately fair rate is allocated to each passing flow. Near-sink nodes do not need to maintain a per-flow state. The nodes use a lightweight probabilistic dropping algorithm based on queue occupancy and hit frequency. FACC provides better performance than previous approaches in terms of throughput, packet loss, energy efficiency, and fairness.

Enhanced Congestion Detection and Avoidance (ECODA) [19] proposes a novel energy-efficient congestion control scheme for sensor networks. It uses a flexible queue scheduler in which packets are scheduled according to their priority. ECODA handles both transient and persistent congestion efficiently: it adopts a hop-by-hop congestion control scheme for transient congestion and, for persistent congestion, adjusts the bottleneck node based on source data sending rate control. ECODA achieves efficient congestion control and flexible weighted fairness for different classes of traffic, leading to lower packet loss, higher energy efficiency, and better QoS in terms of throughput, fairness, and delay.

The fuzzy congestion controller for wireless sensor networks (FCC) [20] develops a fuzzy rule base as well as membership functions. It uses the channel load and the queue size of intermediate nodes as congestion indications to form the inputs. The output is derived from the fuzzy rule base, and the fuzzy inference engine determines the new source rates. This algorithm reduces packet loss compared with non-fuzzy methods, and it improves throughput and energy efficiency.

A prioritization-based congestion control protocol for healthcare monitoring applications in wireless sensor networks is proposed in [23]. It provides congestion control and service prioritization for real-time monitoring of patients’ vital signs using wireless biomedical sensor networks. The protocol assigns different priorities to physiological signals and provides better quality of service for transmitting highly important vital signs. Congestion control is performed by considering both the congestion situation in the parent node and the priority of the child nodes when assigning network bandwidth to signals from different patients. In [25], a class-based congestion control protocol with service differentiation is proposed to reduce queuing delay and provide reliable data transmission. It distinguishes between different kinds of traffic: delay-sensitive, drop-sensitive, and control traffic. Delay-sensitive traffic comes from emergency conditions, where the emergency data must be delivered within a reasonable delay. Drop-sensitive traffic comes from patients’ normal conditions, where the data can tolerate some delay but need a low packet drop rate. Control traffic is generated for network maintenance and for doctors’ periodic or on-demand reports.

In [26], a queue management and energy-efficient congestion control protocol for wireless body sensor networks is designed. The proposed protocol focuses on efficient queue management to provide reliability and reduce packet loss. In [24], a distributed congestion control algorithm is proposed for tree-based communications in wireless sensor networks. This protocol allocates a fair and efficient transmission rate to each node. The congestion control algorithm apportions the total change in aggregate traffic over a control interval among the individual flows to achieve fairness.

3 Proposed Congestion Management Protocol

In this section we describe our proposed congestion management protocol for HWSNs. The protocol is motivated by the limitations of existing popular congestion control protocols such as PCCP and CCF. These protocols do not include congestion avoidance mechanisms. Furthermore, the CCF protocol cannot effectively allocate the remaining capacity and does not use a work-conserving scheduling algorithm, which leads to lower throughput. The PCCP protocol performs poorly in providing relative priority under different congestion situations: in the case of low congestion, PCCP increases the scheduling and data rate of all traffic sources without paying attention to their priority index, while in the case of high congestion it decreases the sending rate of all traffic sources based on their priority index. Another problem with PCCP is that it considers only geographical priority and does not support multi-sensor settings with dynamic priorities (reflecting a patient’s overall health status), which is necessary in HWSNs. Finally, neither protocol considers the importance of a node (e.g. the heart rate of a patient with heart disease compared to the heart rate of a patient with a broken leg).

The proposed protocol solves these problems by properly adjusting the sending rate of each child node using an optimization function. The optimization function adjusts the sending rate of each child node based on the available output bandwidth and the dynamic priority level of each child node.

The proposed protocol architecture is shown in Fig. 2. It consists of two stages. The first stage tries to avoid congestion by using an AQM scheme: it decides either to accept or to drop an incoming packet according to a packet drop probability. If the incoming packet is accepted, then in the second stage three mechanisms are used to control possible congestion, namely Automata-based Congestion Detection (ACD), Implicit Congestion Notification (ICN), and Optimized Rate Adjustment (ORA).

Fig. 2

Architecture of the proposed congestion management protocol

ACD detects congestion by a three-state machine and virtual queue status. The output of ACD is the congestion level \((0\le CL\le 1)\).

If CL exceeds a specific threshold, the Optimized Rate Adjustment (ORA) procedure is called. This procedure solves an optimization problem and obtains new share rates for the child nodes. Since in applications like healthcare each node (patient) can be equipped with multiple sensors having different priorities, we use priority as an important parameter in our model. We use the Child Priority (CHP) and the Available Bandwidth (AB) as the input parameters of the optimization function (\(AB=1-CL\)). The ORA procedure calculates the new share rates for each child node as well as for the local source traffic.

The proposed protocol uses Implicit Congestion Notification (ICN), which piggybacks congestion information in the header of data packets and avoids sending additional control messages, which in turn improves energy efficiency.

Before describing the proposed model in detail, we provide some definitions related to priority:

Definition 1

Static Priority at node i \((STP^{\mathrm{i}})\) is defined as:

$$\begin{aligned} STP^{i}=PaI^{i}\cdot \sum _k {SP_k^i } \end{aligned}$$
(1)

where the patient importance \(0\le PaI^{i}\le 1\) at node \(i\) is the importance degree of the patient; for example, a heart disease patient and a diabetic patient have different importance. The parameter \(0\le SP_k^i \le 1\) denotes the priority of the kth data source in node \(i\), where \(k\) is the sensor type on the patient’s body. Its value can be set manually for service differentiation.

Definition 2

Global Priority at node \(i\), \(GP^{i}\), is defined as:

$$\begin{aligned} GP^{i}=TP^{i}+STP^{i}, \end{aligned}$$
(2)

where \(TP^{i}\) is the Transit data Priority at node \(i\): the priority of the transit data traffic forwarded through node \(i\).

The Global Priority Ratio \(GPR_j^i \) for each virtual queue \(j\) in node \(i\) is defined as follows (the structure of the node’s queue is described in detail in the next section):

$$\begin{aligned} GPR_1^i =\frac{STP^{i}}{GP^{i}},\quad GPR_j^i =\frac{GP^{j}}{GP^{i}},\quad j=2,3,\ldots ,N_i +1 \end{aligned}$$
(3)

where \(N_{i }\) is the number of child nodes of the node \(i\).

Definition 3

The Child Priority of child \(j\) at node \(i\), \(CP_j^i \), is defined as:

$$\begin{aligned} CP_j^i =\frac{GPR_j^i .(1-CL_j^i )\cdot {\overline{TW}} _j^i \cdot {\overline{delay}} _j^i }{\sum _{j=1}^{N_i +1} {GPR_j^i \cdot (1-CL_j^i )\cdot {\overline{TW}} _j^i \cdot {\overline{delay}} _j^i } } \end{aligned}$$
(4)

where average traffic weight \(({\overline{TW}} _j^i )\) and average delay \(({\overline{delay}} _j^i )\) of each virtual queue \(j\) in node \(i\) are obtained as follows:

$$\begin{aligned} {\overline{TW}} _j^i =\frac{\sum _k {m_k^j (i)\cdot SP_k } }{\sum _k {SP_k } },\quad {\overline{delay}} _j^i =\frac{\sum _k {\overline{delay}}_k^j (i)\cdot SP_k }{\sum _k {SP_k } } \end{aligned}$$
(5)

Here, \(m_k^j (i)\) is the number of packets with source priority \(k\) in virtual queue \(j\). \(delay\) is the time elapsed from packet generation up to the current time, and \(\overline{delay} _k^j (i)\) is the average packet delay of type \(k\) in virtual queue \(j\) at node \(i\). \(0\le CP_j^i \le 1\) is the normalized child priority. Note that the child priority CP for each node \(i\) is independent of the other nodes.
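As an illustration, the priority computations of Eqs. (1)–(5) can be sketched in Python; the function names, grouping, and the example values are our own assumptions, not part of the protocol specification:

```python
# Sketch of the priority computations of Eqs. (1)-(5).

def static_priority(pai, sps):
    """Eq. (1): STP^i = PaI^i * sum_k SP_k^i."""
    return pai * sum(sps)

def global_priority(tp, stp):
    """Eq. (2): GP^i = TP^i + STP^i."""
    return tp + stp

def gpr_ratios(stp_i, gp_i, gp_children):
    """Eq. (3): GPR_1^i = STP^i/GP^i for local traffic,
    GPR_j^i = GP^j/GP^i for each child j."""
    return [stp_i / gp_i] + [g / gp_i for g in gp_children]

def child_priorities(gprs, cls, tws, delays):
    """Eq. (4): normalized child priority CP_j^i per virtual queue,
    weighted by congestion level, traffic weight, and average delay."""
    raw = [g * (1 - c) * t * d
           for g, c, t, d in zip(gprs, cls, tws, delays)]
    total = sum(raw)
    return [r / total for r in raw]
```

Because Eq. (4) normalizes by the sum over all virtual queues, the resulting \(CP_j^i\) always sum to 1 within a node, which is what makes them usable as queue shares later.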

3.1 Congestion Avoidance Algorithm

Active Queue Management (AQM) schemes are mechanisms that avoid congestion and provide quality of service (QoS) by proactively dropping packets [27–31]. With these schemes, congestion can be controlled, and network performance measures such as network delay, link utilization, packet loss rate, and system fairness are improved. The proposed algorithm uses a flexible procedure for queue management: separate virtual queues store the input packets from each child node. That is, a physical queue of size QL is shared virtually among the traffic of \(N\) child nodes and one local source, based on their importance and priority (Fig. 3). A child node with higher priority can use more queue space. However, the boundaries between these virtual queues are not fixed; when one of the child nodes has free space in its own virtual queue, the others can use that space if necessary.

Fig. 3

A physical queue is divided into N \(+\) 1 virtual queues

When a packet is received from a child node \(j\) by a parent node, a packet drop probability \(Pdrop_{j}\) is computed for the packet. First, the initial value of the packet drop probability for child \(j\), \(Pdrop_j^{pri} \), is calculated as follows:

$$\begin{aligned} {Pdrop}_j^{pri} =\left\{ {\begin{array}{l@{\quad }l} q_j <\alpha _1 \cdot QL\cdot {CP}_j &{} 0 \\ \alpha _1 \cdot QL\cdot {CP}_j \le q_j \le \alpha _2 \cdot QL\cdot {CP}_j &{} \frac{q_j -\alpha _1 \cdot QL\cdot {CP}_j }{\alpha _2 \cdot QL\cdot {CP}_j -\alpha _1 \cdot QL\cdot {CP}_j } \\ q_j >\alpha _2 \cdot QL\cdot {CP}_j &{} 1 \end{array}} \right. \end{aligned}$$
(6)

Here \(q_{j}\) is the queue size of the jth virtual queue (which belongs to the jth child), and QL is the maximum size of the physical queue. \(\alpha _{1}\) and \(\alpha _{2}\) are lower and upper thresholds, respectively, which are adjusted through simulations. \(CP_{j}\) is the normalized child node priority.

Note that \(QL \cdot CP_{j}\) represents the virtual queue share of child \(j\) in the physical queue QL. After the initial value of the packet drop probability is computed, the AQM mechanism calculates the drop probability \(Pdrop_{j}\) as follows:

$$\begin{aligned} {Pdrop}_j = \beta _1 \cdot {\delta qv}_j - \beta _2 \cdot \left( 1-\left( \sum \nolimits _{m=1}^k { q_m } /QL\right) \right) + {Pdrop}_j^{pri} \end{aligned}$$
(7)

The primary value of the drop probability, \(Pdrop_{j}^{pri}\), is adjusted based on the network condition. The second term of (7) specifies the total free space in the physical queue. Multiplying this value by the tunable parameter \(\beta _{2}\) causes \(Pdrop_{j}\) to be reduced; in other words, the more free space there is, the lower the packet drop probability. The strength of this effect depends on the \(\beta _{2}\) parameter.

\(\delta qv_{j}\) is the level of variation in the length of the \(j\)th virtual queue and is calculated as:

$$\begin{aligned} {\delta qv}_j =\frac{ q_j^{new} - q_j^{old} }{QL \cdot {CP}_j } \end{aligned}$$
(8)

The value of \(\delta qv_{j}\) can be positive or negative, and it is multiplied by the coefficient \(\beta _{1}\), as presented in Eq. (7). Note that the parameters in Eqs. (6)–(8) are updated periodically: \( q_j^{old} \) is the \(j\)th virtual queue size at the start of the last period, and \( q_j^{new}\) is the \(j\)th virtual queue size at the end of the last period (the beginning of the new period). If the change in the \(j\)th virtual queue size is positive, \(\delta qv_{j}\) is positive, remains positive after multiplication by \(\beta _{1}\), and causes \(Pdrop_{j}\) to increase. In other words, if the virtual queue length grows, the packet drop probability increases.

Finally, the packet is queued or dropped according to the drop probability \(Pdrop_{j}\). If the packet is not dropped and is queued in its virtual queue in the parent node \(i\), then our protocol checks the congestion status of the parent node and decides whether or not to control congestion.
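The AQM acceptance test of Eqs. (6)–(8) can be sketched as follows. The default values of the thresholds and coefficients are illustrative placeholders only, since the paper tunes \(\alpha_1\), \(\alpha_2\), \(\beta_1\), and \(\beta_2\) by simulation:

```python
import random

# Sketch of the AQM drop decision of Eqs. (6)-(8); parameter defaults
# are illustrative assumptions, not values from the paper.

def pdrop_pri(q_j, QL, cp_j, alpha1=0.5, alpha2=0.9):
    """Eq. (6): initial drop probability from virtual-queue occupancy."""
    lo, hi = alpha1 * QL * cp_j, alpha2 * QL * cp_j
    if q_j < lo:
        return 0.0
    if q_j > hi:
        return 1.0
    return (q_j - lo) / (hi - lo)

def pdrop(q_j_new, q_j_old, queues, QL, cp_j, beta1=0.2, beta2=0.1):
    """Eqs. (7)-(8): final drop probability for child j's packet."""
    dqv = (q_j_new - q_j_old) / (QL * cp_j)        # Eq. (8): queue variation
    free = 1.0 - sum(queues) / QL                  # free fraction of QL
    p = beta1 * dqv - beta2 * free + pdrop_pri(q_j_new, QL, cp_j)
    return min(max(p, 0.0), 1.0)                   # clamp to [0, 1]

def accept(q_j_new, q_j_old, queues, QL, cp_j):
    """Queue the packet with probability 1 - Pdrop_j."""
    return random.random() >= pdrop(q_j_new, q_j_old, queues, QL, cp_j)
```

The clamp to [0, 1] is our own safeguard: Eq. (7) can formally produce values slightly outside that range when the queue shrinks or the free-space term dominates.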

3.2 Congestion Control Algorithm

(1) Three-state automaton: We use automata to determine the congestion level in each node, and we assume that automata are stationed in all intermediate nodes. The concept underlying automata theory is that an automaton is a self-operating machine, or a mechanism that responds to a sequence of instructions in a certain way to reach a certain goal. We map our proposed three-state automaton to the 5-tuple {Q, A, B, F(.,.), G(.)} of [32]:

Q \(=\) {Normal, Congestion Avoidance, Congested} is the set of finite states.

A \(=\) {0 \(<\) congestion level \(<\) 1} is the output.

B \(=\) {\(q_{1},q_{2} ,{\ldots }, q_{k}\)} is the set of inputs to the automaton, representing the queue portion of each child in the physical queue at any time t.

F(.,.): Q \(\times \) B \(\rightarrow \) Q is the transition function, i.e. the function that determines the state of the automaton at the subsequent time instant (t \(+\) 1) in terms of the state and input at instant t, such that q(t \(+\) 1) \(=\) F(Q, B).

G(.): Q \(\rightarrow \) A is the output function, which determines the output of the automaton (the congestion level).

Our proposed automaton is presented in Fig. 4. States are illustrated by circles and transitions by arrows. When the automaton meets a condition of Eq. (9), it makes a transition (jump) to another state based on its transition function, which takes the current state and each child’s virtual queue portion as its inputs. When a packet is accepted and queued in a virtual queue in the parent node, the three-state machine calculates the congestion level for the child node’s traffic.

Fig. 4

The proposed automaton states

The automaton transitions are as follows:

$$\begin{aligned}&\hbox {I}:\hbox { if }\sum \nolimits _{m=1}^{k} q_m >QL\cdot \alpha _1 \,or\, \exists j \,|\, q_j >QL\cdot CP_j \nonumber \\&\hbox {II}:\hbox { if }\sum \nolimits _{m=1}^{k} q_m >QL\cdot \alpha _2 \,or\, \forall j\in S,\,L_j >0.95\cdot QL\cdot CP_j \nonumber \\&\hbox {III}:\hbox { if }\sum \nolimits _{m=1}^{k} q_m <QL\cdot \alpha _2 \nonumber \\&\hbox {IV}:\hbox { if }\sum \nolimits _{m=1}^{k} q_m <QL\cdot \alpha _1 \nonumber \\&0\le \alpha _1 \le \alpha _2 \le 1 \end{aligned}$$
(9)

I. Initially, all nodes are in the Normal state. In this state, the arrival rate from the node’s neighborhood is less than or equal to the service rate of the corresponding queue, so congestion does not occur at any node and the average virtual queue lengths stay below their portions. When the total length of the virtual queues exceeds \(QL\cdot \alpha _{1}\), or at least one virtual queue exceeds its own portion, the neighborhood transmission rate has increased and the node changes from the Normal state to the Congestion Avoidance state.

II. In the Congestion Avoidance state, when the arrival rate increases, the algorithm prevents congestion as much as possible; more rigor is applied to preventing congestion than in the Normal state. If the total length of the virtual queues exceeds \(QL\cdot \alpha _{2}\), or for every neighbor in the set \(S\) (where \(S\) contains half of the \(k\) neighbors) the size of the related virtual queue exceeds 95 percent of its portion (a value chosen through repeated experiments), the node changes to the Congested state.

III. In the Congested state, the node is congested. If the total length of the queues drops below \(QL\cdot \alpha _{2}\), the node returns to the Congestion Avoidance state. This helps the node adapt when congestion is reduced and aids in measuring the congestion level.

IV. Similarly, if in the Congestion Avoidance state the total length of the queues drops below \(QL\cdot \alpha _{1}\), the node returns to the Normal state.

\(\alpha _{1}\) and \(\alpha _{2}\) are the thresholds for moving from the Normal state to the Congestion Avoidance state and from the Congestion Avoidance state to the Congested state, respectively.
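A minimal sketch of the automaton's transition function follows. The threshold values are illustrative (the paper tunes \(\alpha_1\) and \(\alpha_2\) experimentally), and we encode the "\(\forall j\in S\)" condition of transition II as "at least half of the virtual queues are more than 95% full":

```python
# Sketch of the three-state automaton of Fig. 4 and Eq. (9).

NORMAL, AVOIDANCE, CONGESTED = "Normal", "CongestionAvoidance", "Congested"

def next_state(state, queues, portions, QL, alpha1=0.5, alpha2=0.9):
    """One transition: queues[j] is virtual queue j's current size,
    portions[j] = QL * CP_j is its share of the physical queue."""
    total = sum(queues)
    if state == NORMAL:
        if total > alpha1 * QL or any(q > p for q, p in zip(queues, portions)):
            return AVOIDANCE                                  # transition I
    elif state == AVOIDANCE:
        near_full = sum(q > 0.95 * p for q, p in zip(queues, portions))
        if total > alpha2 * QL or near_full >= len(queues) / 2:
            return CONGESTED                                  # transition II
        if total < alpha1 * QL:
            return NORMAL                                     # transition IV
    elif state == CONGESTED:
        if total < alpha2 * QL:
            return AVOIDANCE                                  # transition III
    return state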

(2) Automata based Congestion Detection (ACD): The node congestion level \(ncl_{i}\) reflects the current congestion level at \(i\) sensor node. In each ith node with \(k\) child nodes (Fig. 3), the congestion level of jth child \((cl_{j})\), is calculated according to its current virtual queue and automaton status. We suppose that each child node which is sending traffic, buffers it in a separate virtual queue. Initially all queues are in the Normal state. So if a child node’s traffic is queued in a virtual queue with Normal state, and its size \(q_{i }\) is more than its share, the child node congestion level \((cl_{j})\) is calculated as:

$$\begin{aligned} \forall j,\,if\,\frac{ q_j }{QL. {CP}_j }\ge 1\,then\quad {cl}_j =\frac{ q_j }{QL\cdot {CP}_j }-1,\, else\quad {cl}_j =0 \end{aligned}$$
(10)

where \(QL\cdot CP_{j }\) is the jth child node’s portion of physical queue QL in node \(i\). If sent traffic from a child node is queued in a virtual queue with Congestion Avoidance state, the child node congestion level is obtained as:

$$\begin{aligned} \forall j,if \left\{ \begin{array}{l} \frac{ q_j }{ {QL}\cdot {CP}_j }<1/3\rightarrow {cl}_j =0\\ 1/3<\frac{ q_j }{QL\cdot C P_j }<2/3\rightarrow {cl}_j =\frac{ q_j }{ {QL}\cdot {CP}_j }\cdot \frac{ L_j }{ {QL}\cdot {CP}_j }\\ \frac{ q_j }{ {QL}\cdot {CP}_j }>2/3\rightarrow {cl}_j =\frac{ L_j }{ {QL}\cdot {CP}_j} \end{array} \right. \end{aligned}$$
(11)

where \(L_{j }\) is the average length of jth virtual queue, with input rate \(\mu \), and service rate \(\lambda \) in a server system computed as [33]:

$$\begin{aligned}&L_j =\frac{\rho }{1-\rho }-\frac{(k+1)\cdot \rho ^{k+1} }{1- \rho ^{k+1} },\quad k=QL\cdot {CP}_j \nonumber \\&\rho =\frac{\lambda \cdot (1- P_k )}{\mu } \nonumber \\&P_k =\frac{(1-\rho ), \rho ^k }{1- \rho ^{k+1} } \end{aligned}$$
(12)

\(K \) is the total queue size and \(P=(\lambda /\mu )\). If the sent traffic from a child node is queued in a virtual queue with Congestion state, the child node congestion level is calculated as:

$$\begin{aligned} \forall j,if \left\{ \begin{array}{l} \frac{ q_j }{ {QL}\cdot {CP}_j }<1/3\rightarrow {cl}_j =\frac{ q_j }{ {QL}\cdot {CP}_j }\cdot \frac{ L_j }{ {QL}\cdot {CP}_j }\\ \frac{ q_j }{ {QL}\cdot {CP}_j }>1/3\rightarrow {cl}_j =\frac{ q_j }{ {QL}\cdot {CP}_j} \end{array} \right. \end{aligned}$$
(13)

\(cl_{j}\) is utilized for rate adjustment. Using the congestion level equation (13), each \(i\) sensor node may calculate the node congestion level \(ncl_{i }\) as:

$$\begin{aligned} ncl_i =\sum \nolimits ^{k}_{j=1} CP_j .cl_j \end{aligned}$$
(14)

The node congestion level (ncl) indicates the available bandwidth in the node (\(AB = 1- ncl \)). When the packet arrival rate is larger than packet service rate \((ncl>0)\), the node experiences congestion.

(3) Implicit Congestion Notification (ICN): There are two methods to inform congestion information: Explicit Congestion Notification (ECN) and Implicit Congestion Notification (ICN). The explicit congestion notification uses specific control messages and causes additional overhead. So, we use the advantage of the broadcast nature of wireless channels (we assume that Omnidirectional antenna is used), so that the child nodes can obtain congestion and acknowledgment information from the packet’s header which are then forwarded towards the sink by parent nodes. In this way transmission of additional control messages is avoided in the implicit congestion notification.

(4) Optimized Rate Adjustment (ORA): Suppose that node \(i\) has \(k\) child nodes which they send packets to their parents to forward them toward the sink, with their initial rate. The maximum rate for each node is \(r_{max}\). Congestion will occur in node \(i \) if the total average rate of child nodes is more than service rate. In this situation, an optimization problem will be used to compute the new sending rate for child nodes (Fig. 4). Following, we propose an optimization problem, to calculate a new sending rate for child nodes.

$$\begin{aligned} MinF_i =\sum _{j=1}^{N+1} {CP_j }\cdot \frac{AB_i -S_j }{\varepsilon +S_j} \end{aligned}$$
(15)
$$\begin{aligned}&S.t. \\&\sum _{j=1}^{N+1} {S_j \le \alpha \cdot AB_i },\quad S_j \ge 0 , 0\le \alpha \le 1 \end{aligned}$$
(15-1)
$$\begin{aligned}&where \\&0\le CP_j \le 1,\quad AB_i =1-ncl_i \end{aligned}$$
(15-2)

Figure 5 clarifies Eq. (15). Equations (15-1) and (15-2) present the conditions of optimization problem. The aim of optimization is to minimize the cost function of Eq. (15). In this function, \(N\) is the number of child nodes of the node \(i\) and \(CP_{j }\) is the jth child node’s normalized priority. This function calculates \(N\) new share rates for the child nodes’ data traffic and one for the local source node’s data traffic. \(AB_{i }\) is the available bandwidth in node \(i\). The overall congestion level at node \(i\) is denoted by \(ncl_{i}\). If the node congestion level increases, its available bandwidth will be decreased. For node \(i\) with \(N\) child nodes, the total share of all child nodes is obtained as Eq. (15-1), where \(s_{j }\) is the portion of \(j\)th child node for sending data determined by node \(i\). This share is proportional to the maximum allowed bandwidth of the child nodes. Therefore the new sending rate of each child node is calculated as in Fig. 5 \((r_{new} =r_{max}\times s_{i})\). In Eq. (15-1), \(\alpha \) is a constant which controls the maximum rate of the total share in relation with available bandwidth.

Fig. 5
figure 5

Rate adjustment model used in intermediate nodes

A fundamental characteristic of the proposed algorithm is that not only the priority of the nodes but also the current network status is taken into account, and the transmission portions are adjusted accordingly.

In Fig. 6, a run of the optimization function for a node with two child nodes is depicted for \(CP_{1} =0.6, CP_{2}=0.4, \alpha =0.9, AB=0.8\).

Fig. 6
figure 6

A run of optimization function
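To make the rate adjustment concrete, the problem of Eq. (15) can also be solved in closed form: the objective is strictly decreasing in every \(S_j\), so the optimum lies on the boundary \(\sum_j S_j = \alpha \cdot AB_i\), and the Lagrangian conditions give \(\varepsilon + S_j \propto \sqrt{CP_j}\). The sketch below (in Python, not the authors' MATLAB implementation) applies this derivation to the run of Fig. 6; the function name and the value of \(\varepsilon\) are illustrative assumptions.

```python
import math

def optimal_shares(cp, ab, alpha, eps=1e-3):
    """Closed-form solution of Eq. (15), minimizing
    sum_j CP_j * (AB - S_j) / (eps + S_j)
    subject to sum_j S_j <= alpha * AB and S_j >= 0.

    The objective decreases in every S_j, so the minimum lies on
    sum_j S_j = alpha * AB; zeroing the Lagrangian gradient gives
    (eps + S_j) proportional to sqrt(CP_j)."""
    budget = alpha * ab + len(cp) * eps            # = sum of (eps + S_j)
    scale = budget / sum(math.sqrt(p) for p in cp)
    return [scale * math.sqrt(p) - eps for p in cp]

# Run of Fig. 6: two child nodes, CP1 = 0.6, CP2 = 0.4, alpha = 0.9, AB = 0.8
shares = optimal_shares([0.6, 0.4], ab=0.8, alpha=0.9)
# Each child's new rate would then be r_new = r_max * s_j (Fig. 5).
```

As expected, the higher-priority child receives the larger share, and the shares sum to \(\alpha \cdot AB = 0.72\).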

3.3 Output Scheduler

At the output of each node, a scheduler services the packets in the virtual queues based on the priority of the source or child traffic. We use a weighted fair queuing (WFQ) algorithm, which services each virtual queue in proportion to its weight and can guarantee fairness between source and child traffic. The priorities of the source traffic and the children's transit traffic, as in relation (4), are used as the weights of the corresponding virtual queues.
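A minimal sketch of such a scheduler, using the classic per-queue virtual finish times of WFQ, could look as follows. The class and the packet representation are illustrative assumptions, not the authors' implementation.

```python
import heapq

class WFQScheduler:
    """Minimal weighted fair queuing over virtual queues (illustrative).

    Each virtual queue q has a weight w_q (the priority from relation (4)).
    A packet of size L arriving at queue q gets the virtual finish time
    F = max(last_F[q], now) + L / w_q; the packet with the smallest
    finish time is serviced first."""

    def __init__(self, weights):
        self.weights = weights                       # queue id -> weight
        self.last_finish = {q: 0.0 for q in weights}
        self.heap = []                               # (finish, seq, queue, pkt)
        self.seq = 0                                 # FIFO tie-breaker

    def enqueue(self, queue, packet, size, now=0.0):
        start = max(self.last_finish[queue], now)
        finish = start + size / self.weights[queue]
        self.last_finish[queue] = finish
        heapq.heappush(self.heap, (finish, self.seq, queue, packet))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        _, _, queue, packet = heapq.heappop(self.heap)
        return queue, packet
```

With weights 0.6 for source traffic and 0.4 for a child's transit traffic, source packets accumulate virtual finish times more slowly and are therefore serviced proportionally more often, while the child queue is never starved.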

4 Performance Evaluation of the Proposed Protocol

We evaluate the performance of the proposed protocol using MATLAB and OPNET [34]. The optimization function of Eq. (15), along with its related functions, is implemented in MATLAB, while the simulation process is carried out in OPNET. Since both tools are built on C++, they can be linked and run concurrently, with OPNET calling the MATLAB functions whenever needed. The network topology used in the simulation is a simple single-path network, as shown in Fig. 7. Table 1 presents the parameters used in the simulations.

Fig. 7
figure 7

The single path topology used in simulation

Table 1 Simulation parameters

The structure of the proposed data packet header is shown in Fig. 8.

Fig. 8
figure 8

The structure of the data packet header. PT Packet type, FP flow priority, SndAd sender address, Sqn sequence number, CT created time, NH next hop

The packet type distinguishes data packets from control packets (rate adjustment packets). The flow priority indicates the source sensor's priority. The sender address specifies the child number for subsequent rate share determination. The sequence number field is a positive integer used to detect duplicate packets. The created time field is used to calculate the data packet delay. The next hop determines the next node to which a data packet should be delivered.

The structure of the rate adjustment control packet in the proposed protocol is shown in Fig. 9.

Fig. 9
figure 9

The structure of the rate adjustment control packet. PT packet type, AB available bandwidth, S share rate, SRV service rate, NH next hop

The available bandwidth indicates the free bandwidth of the parent node. The \(S\) field is the share rate of the child node. The service rate is the parent node's maximum service rate.

For comparison purposes, we also implemented backpressure (25 %), backpressure (75 %), CCF and PCCP in our simulations. Backpressure refers to CODA's hop-by-hop congestion-control mechanism [16]. If a sensor node is congested (based on channel utilization and buffer occupancy), it periodically sends backpressure messages to its child nodes, which reduce their forwarding rates by a fixed percentage (25 % or 75 % in our simulations) in response to each backpressure message. If a child node is itself a data source, it reduces its data generation rate by the same percentage.

4.1 Throughput

We first run the simulation in a normal situation in which all nodes generate packets and have the same priority and importance. This can be the case in a hospital Intensive Care Unit (ICU) or another ward in which all patients are in bed and have the same priority. As shown in Fig. 10, similar to PCCP, we consider a fixed service time for all sensor nodes. Figure 10 shows the sum of the throughput for all nodes. We define the network total throughput as the packet receiving rate of the sink over the total bandwidth of the sink. For example, if the packet receiving rate of the sink is 900 packets/s and the total bandwidth of the sink is 1000 packets/s, the network total throughput is 0.9.

Fig. 10
figure 10

System total normalized throughput (fixed service time)

A comparison between CCF, PCCP and our proposed protocol is shown in Fig. 10. Among the five compared protocols, CCF, PCCP and the proposed protocol achieve the highest throughput, between 0.9 and 1. As shown in the figure, the average throughput of CCF, PCCP and the proposed protocol is 0.97, 0.96 and 0.99, respectively. According to our simulation results, the average throughput of backpressure (25 %) and backpressure (75 %) is 0.85 and 0.72, respectively. The reason is that the backpressure algorithms can hardly adapt to an appropriate level of bandwidth share, while the other three protocols assign the bandwidth according to a specific policy. Because of their low throughput, the two backpressure algorithms are omitted from the remaining results.

In Fig. 11, sensor nodes 1, 9 and 13 are turned off in the time interval [40 s, 80 s]. This models the case in which some patients do not transmit data in some intervals, so the other nodes should be able to scale up their sending rates. As mentioned before, CCF cannot effectively allocate the remaining bandwidth, so it has a lower throughput in the interval [40 s, 80 s] when some nodes do not have sufficient source traffic. The proposed protocol and PCCP use the remaining bandwidth better and thus achieve higher throughput than the other protocols. Since the proposed protocol includes the available bandwidth in the calculation of the optimal new share rates, its throughput is better than that of PCCP in the interval [40 s, 80 s].

Fig. 11
figure 11

Total normalized throughput (nodes 1, 9 and 13 are off in the interval [40 s, 80 s])

When nodes 1, 9 and 13 generate little traffic or stop sending data during the interval [40 s, 80 s], our proposed protocol detects that the congestion level in these nodes has dropped, so it calls the optimization function and calculates new portions for their child nodes (i.e., nodes 10 and 11 for node 13). As a result, the proposed protocol maintains a higher throughput during this interval, as shown in Fig. 11.

4.2 End-to-End Delay and Jitter

We consider the end-to-end delay a fundamental parameter, which is very important in critical applications like healthcare. End-to-end delay is the time taken for a packet to travel from a source node to the sink. Figure 12 shows the end-to-end delay for the different protocols; the end-to-end delay of the proposed protocol is lower than that of the other algorithms. A low end-to-end delay is essential for a critical patient's packets.

Fig. 12
figure 12

End to end delay

We define jitter as the difference between the end-to-end delays of two consecutive packets. In a remote healthcare system that monitors patients or elderly people, video or acoustic sensors may be used. These kinds of sensors are not jitter-tolerant. In such applications, jitter is a critical issue and should be kept to the minimum possible value.
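Under this definition, per-packet jitter can be computed directly from the measured delays. A minimal sketch, where the delay samples are assumed to come from the created time field of the data packet header:

```python
def jitter(delays):
    """Jitter under the paper's definition: the absolute difference
    between the end-to-end delays of consecutive packets (seconds)."""
    return [abs(b - a) for a, b in zip(delays, delays[1:])]

# e.g. four measured end-to-end delays in seconds
j = jitter([0.10, 0.12, 0.11, 0.20])
# approximately [0.02, 0.01, 0.09], up to floating-point rounding
```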

As shown in Fig. 12, the end-to-end delay for backpressure algorithms (25 %) and (75 %) is higher than CCF, PCCP and the proposed protocols. The end-to-end delay for backpressure algorithms (25 %) and (75 %) is above 1 s while it is under 1 s for the other three protocols.

In the simulated scenario, the jitter is shown for CCF, PCCP and the proposed protocol. In this scenario, sensor nodes 1, 9 and 13 were turned off during the time interval [40 s, 80 s]. As can be seen from Fig. 13, the jitter of our proposed protocol is lower than that of CCF and PCCP. The reason is that the proposed protocol applies an optimization function that uses the network conditions and the available bandwidth of the parent node to assign the optimal sending rate to the child nodes. This decreases both the end-to-end delay and the jitter.

Fig. 13
figure 13

Jitter

4.3 Packet Loss

We define the packet loss probability as:

$$\begin{aligned} P_{loss} =\frac{N_{loss}}{N_{loss} +N_{received}} \end{aligned}$$
(16)

where \(N_{loss}\) is the number of packets dropped in the network during each epoch (epoch time \(=\) 1 s), whether due to the AQM \(P_{drop}\) of Eq. (7), queue overflow, or the packet error rate (PER) of the network. \(N_{received}\) is the number of packets received during the epoch.

PCCP uses one queue for all transit traffic and another for local source traffic. In contrast, we use a single queue for all traffic and divide it into \(N_{i}+1\) virtual queues: one virtual queue for local source traffic and one for each of the \(N_{i}\) child nodes, where \(N_{i}\) is the number of child nodes of node \(i\).

Figures 14 and 15 present the packet loss probability and the cumulative packet loss over time, respectively. As Fig. 14 shows, the packet loss probabilities of CCF, PCCP and the proposed protocol are lower than those of the backpressure algorithms, because when congestion occurs, the backpressure algorithms cannot allocate optimized rates to the source and child nodes and hence their packet loss rate is higher.

Fig. 14
figure 14

Packet loss probability (buffer size \(=\) 150 packets)

Fig. 15
figure 15

Cumulative packet loss over time (buffer size \(=\) 150 packets)

Since PCCP maintains two separate queues, if the sending rate of a source or child node exceeds its share, its packets are dropped immediately and are not allowed to use another child node's queue share. Similarly, CCF allocates an equal portion to each child node at every moment. Consequently, packet loss increases in both PCCP and CCF.

In the first step, the proposed protocol defines a dynamic weight for the child nodes using Eq. (4). According to this weight, each child node's portion is determined in each time interval. The protocol then exploits its new AQM algorithm: if a child node's allocated virtual queue is about to fill while there are empty places in the other virtual queues, those places are used instead of dropping the packets. When a child node uses another node's portion, the congestion level of the node is increased in the second, automaton-based phase of congestion control. Considering the free bandwidth along with the weight and importance of the child nodes, the optimization function then divides the new portions among them so that each node's input rate equals its output rate, which reduces the packet loss probability.
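The borrowing behavior of the virtual queues can be sketched as below. This is an illustrative model under stated assumptions (packet-count accounting, capacities proportional to the weights of Eq. (4)), not the paper's exact AQM, which additionally applies the probabilistic drop of Eq. (7).

```python
class VirtualQueueBuffer:
    """One physical buffer split into N+1 virtual queues (illustrative).

    Each virtual queue gets a capacity share proportional to its weight.
    A packet whose own virtual queue is full may borrow free space from
    the other virtual queues instead of being dropped; such borrowing is
    what later raises the node's congestion level in the automaton stage."""

    def __init__(self, buffer_size, weights):
        total = sum(weights)
        self.capacity = [buffer_size * w / total for w in weights]
        self.occupancy = [0] * len(weights)   # packets per virtual queue
        self.buffer_size = buffer_size
        self.borrowed = 0                     # packets held in borrowed space

    def admit(self, queue):
        if sum(self.occupancy) >= self.buffer_size:
            return False                      # physical buffer full: drop
        if self.occupancy[queue] >= self.capacity[queue]:
            self.borrowed += 1                # using another queue's free share
        self.occupancy[queue] += 1
        return True
```

For example, with a 10-packet buffer and weights 0.6 and 0.4, the first virtual queue can hold 6 packets of its own share and then borrow the remaining 4 slots before any drop occurs.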

Therefore, the packet loss of the proposed method is lower than that of CCF and PCCP even at the beginning of data transmission (Fig. 14). This is especially important for the sensitive traffic flows of HWSNs, in which a patient's status may change suddenly and there is little time for analyzing the situation and making decisions. As can be seen, the other methods usually suffer more packet loss at the beginning of the transmission.

4.4 Queue Size

The queue size is the number of packets in the queue. Figure 16 illustrates the mean queue size over time at node 14, the node closest to the sink. Queue size is a major factor in delay: a larger queue causes a longer delay. Our proposed protocol computes the optimal portions for the child nodes via the optimization function, so that the input rate always stays below the output rate at each node.

Fig. 16
figure 16

Mean queue size at the node closest to the sink (buffer size \(=\) 150)

By using the optimization function together with the available bandwidth at each node's output and the priority of each child node, our proposed protocol allocates a suitable portion to each child node so that the total sending rate of the children matches the parent's available bandwidth. In this way, the parent's queue length can be managed so that overflow does not occur.

It is clearly seen in Fig. 16 that the queue length of our proposed protocol is lower than that of PCCP, whose queue length may exceed half of the buffer at some moments, leading to more packet loss as well as more delay. In contrast, the proposed AQM provides reasonable stability for the mean queue size and may stabilize the queue length around a desired level.

4.5 Energy Performance Comparison

In this section, the performance of the proposed protocol in terms of energy efficiency is compared with the aforementioned protocols. In the different simulations, both the remaining energy and the consumed energy are considered.

According to the results shown in Fig. 17, the normalized fairness of energy consumption in the network is higher for the proposed protocol than for the other protocols. Equation (17) computes the deviation (DEV) of the nodes' remaining energy from the average remaining energy (Ave) of the whole network; here, \(Energy_{i}\) is the remaining energy of node \(i\) at the end of the simulation. In the worst case, half of the nodes have no energy left and the other half remain full; normalizing DEV by this worst-case deviation yields the fairness measure. The closer the remaining energies of the nodes are to each other, the more successful the protocol. A fairness of one corresponds to the best, fairest case (all nodes have the same remaining energy), while a fairness of zero corresponds to the most unfair energy consumption. As shown in Fig. 17, the proposed protocol achieves higher fairness than the other protocols.

$$\begin{aligned}&DEV=\sum _{i=1}^n {( {Energy}_i -Ave)}^2 \nonumber \\&Fairness=1-DEV/DEV_{worst} \end{aligned}$$
(17)
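Equation (17) can be computed as below. Because the paper does not spell out \(DEV_{worst}\), this sketch assumes the stated worst case (half the nodes empty, half full, relative to the same average), which gives \(DEV_{worst} = n \cdot Ave^2\); that interpretation, like the function name, is an assumption.

```python
def fairness(remaining):
    """Eq. (17): normalized fairness of the nodes' remaining energy.

    DEV is the squared deviation of each node's remaining energy from
    the network average Ave; DEV_worst is taken as the case where half
    the nodes are empty and half are full, i.e. DEV_worst = n * Ave**2
    (an interpretation of the text)."""
    n = len(remaining)
    ave = sum(remaining) / n
    dev = sum((e - ave) ** 2 for e in remaining)
    dev_worst = n * ave ** 2
    return 1 - dev / dev_worst

fairness([1.0, 1.0, 1.0, 1.0])   # all nodes equal: perfectly fair, 1.0
fairness([0.0, 0.0, 2.0, 2.0])   # half empty, half full: worst case, 0.0
```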
Fig. 17
figure 17

Normalized fairness over energy

Because the sensor nodes used in healthcare are of different types and hence have different lifetimes, we assumed a network of sensor nodes with different initial energies. The initial sending rate of all nodes is 10 packets/s. Under these conditions, the nodes running the proposed algorithm show better fairness in energy consumption.

Since all packets in a sensor network are forwarded towards the sink, the nodes closer to the sink carry the most traffic and hence usually use up their energy before the other nodes. In Fig. 18, we show the energy of the node closest to the sink. As can be seen, the proposed method increases the network lifetime through improved traffic management. (Note that we define the network lifetime as the time at which the first node in the network dies.)

Fig. 18
figure 18

Energy consumption in closest node to sink

Overall, the results of Figs. 17 and 18 show that the proposed protocol is more energy efficient.

4.6 Source Data Transfer Rate Adjustment

Figure 19 shows the sending rate of source node 0 under the different protocols. In this experiment, the initial sending rates of node 0 and node 5 are assumed to be 100 and 10 packets/s, respectively. The service rate of the node nearest the sink, node 14, is 700 packets/s. All nodes have the same significance and priority, which can happen in a part of an HWSN environment where all patients have the same initial status.

An important problem in HWSNs arises when a node suddenly starts operating at a high traffic rate. In this situation, the first moments of data transmission are particularly important, since the most significant decisions about the patient's health are made during that time.

Although CCF and PCCP adjust the rate properly, they do so only based on node priorities and do not take into account the traffic conditions of a specific node. As can be seen in Fig. 19, the proposed method considers the traffic conditions in addition to the importance and priority of the nodes. Thus, it does not reduce the sending rate immediately, but does so gradually with respect to the initial sending rate of the nodes. This allows the most important data to be transferred during the initial period, before any reduction of the sending rate.

Fig. 19
figure 19

a Node 0 source rate, b Node 5 source rate

Thanks to its efficient rate adjustment method, the proposed algorithm achieves the largest total source data sending rate in comparison with the CCF and PCCP protocols.

5 Conclusion

In this paper, we presented a congestion management protocol for healthcare wireless sensor networks (HWSNs). The best scenario for the proposed protocol is stationary patients in a special ward of a hospital or a clinic, particularly apoplectic or brain-dead patients. The protocol consists of two stages. In the first stage, a novel active queue management (AQM) scheme is proposed to avoid congestion and provide quality of service (QoS). This scheme uses separate virtual queues on a single physical queue to store the input packets from each child node based on the importance and priority of the source's traffic. With this scheme, when one of the child nodes has free space in its virtual queue, the others can use that space if necessary.

If the incoming packet is accepted by the proposed AQM, the second stage starts, in which several mechanisms are used to control congestion: the proposed protocol detects congestion by means of a three-state machine and the virtual queue status, and adjusts the children's sending rates by an optimization function.

Simulation results confirmed that the proposed protocol is more efficient than the CCF, PCCP and backpressure algorithms in terms of packet loss, energy efficiency, end-to-end delay and fairness. As a future direction of this research, the efficiency of the proposed protocol for mobile patients in a hospital or city can be studied.