1 Introduction

Generally, WSNs are composed of one or more sinks and hundreds to thousands of sensor nodes scattered over the physical space. Combining sensing, processing, and wireless communication, the sensor nodes detect physical phenomena, process the raw data, and report the required information to the sink [1]. These sensors are very small and can sense, measure, and aggregate data from the environment. The typical task of a sensor node is to collect data from the scene of an event and send the information to a sink node [2]. Congestion in a WSN mainly occurs for two fundamental reasons [3]: multiple nodes attempt to transmit data over the same channel at the same time, or an intermediate node fails to forward the received data to the next relaying node.

Applications of WSNs in area, environment, and habitat monitoring require the sensor nodes to periodically collect and route data towards a sink [4]. In addition, each sensor node is equipped with only a limited amount of buffer space, so whenever the data arrival rate at a node exceeds its data forwarding rate, congestion starts to build up at that node. Such congestion and data loss frequently occur at nodes located in the vicinity of a static sink [5]. Data loss at these nodes occurs because, at any given instant, a sink can communicate with only one or a limited number of sensor nodes [6]. The sink therefore executes a round-robin-like algorithm to give all of its neighboring nodes an equal chance to transfer data to it [7]. Consequently, during any given time slot only a subset of the sink's neighboring nodes can transfer data to the sink, while the remaining nodes wait for their turn. Meanwhile, the waiting nodes continue to receive data from their own neighbors [8].
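The effect of this round-robin service can be illustrated with a small sketch (not taken from the paper): the sink serves one neighboring node per slot while every neighbor keeps receiving upstream traffic, so the buffers of the nodes around the sink grow steadily.

```python
# Toy illustration of queue build-up at the sink's neighbours: the sink polls one
# neighbour per slot (round-robin), while each neighbour receives one packet per
# slot from its own upstream nodes, so every queue grows over time.
from collections import deque

neighbours = {f"n{i}": deque() for i in range(4)}
for slot in range(12):
    for q in neighbours.values():
        q.append("pkt")                                # upstream traffic keeps arriving
    served = list(neighbours)[slot % len(neighbours)]  # sink serves one neighbour per slot
    neighbours[served].popleft()                       # only that neighbour drains a packet

print({n: len(q) for n, q in neighbours.items()})      # each backlog reaches 9 packets
```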

Depending on the application type, flows carrying high volumes of data can be given guaranteed congestion control by aggregating one or more packets and by offering light, medium, or tight control according to the requirement [9]. When many source nodes transmit data continuously, their flows may intersect. Even though such continuous source nodes lead to congestion, they also help to improve the sensing quality [10]. It is particularly hard to predict where flows will intersect, owing to network dynamics and fluctuations in radio channel quality over time [11]. At any instant, the congestion arising in an ad-hoc wireless sensor network can be considerably reduced, and the transmission between sender and receiver can be carried out over the best path between them, by using a suitable protocol [12]. Further congestion control schemes improve throughput through load balancing and sustain the highest possible rate. The learning automata-based congestion avoidance algorithm (LACAS) [13] addresses the congestion problem in healthcare WSNs. The fairness-aware congestion control (FACC) protocol [14] controls congestion and achieves an approximately fair bandwidth allocation for different flows. The traffic-aware dynamic routing (TADR) algorithm [15] routes packets around the congested regions and scatters the excess packets along multiple paths consisting of idle and under-loaded nodes.

Contributions This paper focuses on controlling congestion with respect to next-hop selection and the queuing delays caused by high-density sensor deployments, both of which affect data transmission between source and destination. A fast congestion control (FCC) scheme with an efficient routing technique is proposed.

The rest of this paper is organized as follows: recent related works are reviewed in Sect. 2, the problem methodology and system model are described in Sect. 3, the proposed FCC scheme is detailed in Sect. 4, the performance of the proposed FCC scheme evaluated using a network simulator is presented in Sect. 5, and the conclusion is given in Sect. 6.

2 Related works

Congestion in WSNs can be avoided through effective decision-making in sink management using the hierarchical tree alternative path (HTAP) resource control algorithm [16]. The behaviour of HTAP under different node conditions shows that the HTAP algorithm does not add any significant overhead to an already heavily loaded network in the event of congestion.

Uthra et al. [17] have proposed a probabilistic congestion prediction and efficient rate control method. In the probabilistic approach, the occurrence of congestion is predicted from the data traffic and buffer occupancy, and normal operation is restored by regulating the data flow. The flow is regulated in the rate control process using a back-off based rate regulation scheme. Throughput is improved and packet drops are reduced using a split protocol.

Gholipour et al. [18] have proposed a traffic-aware routing algorithm in which packets are routed around congested areas by exploiting idle or lightly loaded nodes. The routing decision is made using two independent potential fields based on depth and traffic loading. The depth field is constructed, as in other gradient-based routing protocols, to find the shortest paths for packets. The second field reflects the traffic load of the neighboring nodes that may become the next forwarder. Once the traffic load exceeds a threshold, it indicates that there is congestion at a node on the route towards a specific sink, and packets then flow along other less-loaded paths.

Gholipour et al. [19] have presented a support vector machine (SVM) based approach for adjusting the data transmission rate, leading to congestion avoidance in WSNs. SVMs were chosen because they outperform other classification methods and allow the data rate to be adjusted during data transmission. This is because SVM-based prediction can yield better results for such problems than other techniques.

Kafi et al. [20] have proposed an efficient protocol that avoids interference and provides high fairness of bandwidth utilization among sensor nodes. Congestion and interference in the inter-path and intra-path problem regions are alleviated by taking into account the differences between link capacities at the scheduling stage. Linear programming is used to achieve optimal utilization of the available bandwidth.

Chen et al. [21] have presented a hierarchy-based congestion control and energy-balanced scheme (CcEbH). In the hierarchical topology, the neighbors of a node are explicitly divided into three types: upstream nodes, nodes at the same level, and downstream nodes. In the congestion avoidance mechanism, a node uses other same-level neighbors to forward data when its downstream node is congested. The congestion control mechanism detects the congestion of a node from its queue length and its sending and receiving rates, and informs its upstream nodes to find another next hop to relieve the congestion.

Whenever congestion occurs, the intermediate nodes do not receive the data packets correctly, and so the packets cannot be routed correctly towards the destination. Nikokheslat et al. [22] proposed an approach to address this problem of delivering transmitted packets to the sink node. They used a hierarchical tree and grid structure to construct the basic topology and Prim's algorithm to select suitable neighbors. The hierarchical tree structure and a resource control algorithm are used to manage the topology and control congestion. A dynamic queue arrangement is used to transmit data with increasing priorities. When congestion occurs, the proposed algorithm attempts to reduce it by selecting nodes of the same level and substituting them.

Coupling of QoS with congestion control in multihop WSNs is proposed by Chen et al. [23]. The queue scheduling scheme (DQS) and Semi-TCP provide per-packet QoS and effective hop-by-hop congestion control. The proposed techniques efficiently handle various issues, for example by taking into account the latest transit time, which is used to estimate the latest departure time for packets and to handle overdue packets, and by using an adaptive ACK mechanism to improve the overall performance for both non-real-time and real-time applications. Vijayakumar et al. [27, 28] proposed periodic assessment of deployed components for their smooth functioning, focusing on regular checks carried out to ensure proper working of the applications. Varatharajan et al. [29] proposed wearable sensor devices for the early detection of Alzheimer's disease using a dynamic time warping algorithm.

3 Problem methodology and network model

3.1 Problem methodology

Ding et al. [24] have proposed a congestion control scheme based on an optimized routing algorithm (CCOR). A queuing network model captures the interdependencies between the packet generation rate, random access at the MAC layer, and traffic scheduling. This model is used to observe the congestion level of the sensor nodes and to identify the bottlenecks effectively. Two independent functions are defined from the node locations and the packet service rates: a link gradient function and a traffic-aware function. The link gradient function is used to establish the shortest paths for packets, while, if the congestion level exceeds a particular threshold, the traffic-aware function ensures that packets are increasingly sent through lightly loaded regions to mitigate congestion. The CCOR algorithm enables each sensor node to distribute the traffic load among multiple paths, which ensures fair utilization of the wireless network resources and links. In our previous work [25], congestion control was achieved by reducing the overhead and increasing the routing efficiency through a backpressure routing algorithm.

The earlier congestion control techniques [16]-[25] do not consider the mobility of sensor nodes, which affects network performance in terms of correct hop selection and the hop handoff process. For this reason, the proposed FCC scheme, which consists of two optimization algorithms, is required. The next-hop selection process is performed by a multi-input time on task optimization algorithm; the inputs considered are the waiting time (delay), received signal strength, and mobility. The efficient route between source and destination is then computed by a modified gravitational search algorithm, which minimizes the energy consumption and routing overhead and improves connectivity in terms of network lifetime. The performance of the proposed hybrid-optimization-based FCC scheme is compared with the existing CCOR scheme [24] using the NS-2 tool, with respect to packet drops, energy consumption, average hops, and network lifetime.

3.2 Network model

As shown in Fig. 1, the network model of the proposed work consists of sensor nodes and a sink node. The sensor nodes are deployed arbitrarily, sense the surrounding information periodically, and then forward the packets to the sink over a multi-hop path. Both the sensor nodes and the sink node are randomly scattered in the square area considered. In this work, the mobility characteristics of all sensor nodes are assumed to be isomorphic, and all nodes have a similar initial state. The energy supply of each sensor node is fixed, and a node varies its transmission power according to the distance to its receiver. The strength of the received signal helps in determining the distance between the transmitter and the receiver.
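The model only states that the received signal strength is used to infer the transmitter-receiver distance; the sketch below shows one standard way of doing this via log-distance path-loss inversion. The reference power, reference distance, and path-loss exponent are illustrative assumptions, not values taken from the paper.

```python
def distance_from_rss(rss_dbm, p0_dbm=-40.0, d0=1.0, path_loss_exp=2.7):
    """Invert the log-distance model rss = p0 - 10*n*log10(d/d0) to estimate d."""
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10 * path_loss_exp))

print(distance_from_rss(-67.0))   # roughly 10 m under these assumed parameters
```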

Fig. 1 Network model of the proposed work

When two source nodes transmit their packets to the same intermediate node 1, which then selects node 2 as the next hop for data transmission, congestion naturally arises at node 2, since node 2 has to forward the data of two or more sources. The proposed scheme instead identifies the best next-hop node so that information can be forwarded immediately. During node mobility a quick handoff is expected, and at that moment our optimization algorithm supplies the best next neighboring node. With multipath routing, the information is scattered among the nearby nodes, which eventually deliver it to the sink at the cost of a few additional hops; this eliminates congestion while increasing the network lifetime. The next-hop selection algorithm is combined with the altered gravitational search algorithm to compute an efficient route from the sources to the sink.

4 Proposed fast congestion control (FCC) scheme

There are two phases in the proposed fast congestion control scheme. In phase I of our proposed FCC approach, multiple inputs at varying time intervals are processed to identify an ideal next hop. The inputs are obtained from three parameters, namely the event waiting delay, received signal strength, and mobility, over discrete time intervals. In phase II the efficient route is computed between source and sink. The detailed structure of both phases is discussed in this section.

4.1 Next hop selection using multi-input time on task optimization algorithm

The generic time on task model [26] addresses the time taken to complete a given task. Since a player may carry out a different task upon being redirected, the time for each assignment, including the time spent in redirection, may be attributed to the user, and the ultimate task selection is decided by the user. The task time metric gives the time taken to complete a routine or alternative task in an independent and efficient way. The multi-input time metric is generic in that it does not depend on any specific task assigned or to be assigned, which makes it suitable for an e-learning framework. Here, multi-input time on task is translated into the context of a predefined assignment with respect to subordinate thresholds.

The relative changes in engagement can be assessed using the generic time on task metric. For a given task \(t_x \) and a particular relative engagement change \(\Delta E\) (e.g., 5, 10, 15%), an upper threshold value \(TV_{t_x , \Delta E} \) can be defined by the following condition: the engagement of a user decreases by \(\Delta E\) whenever the time T taken to complete the task exceeds the threshold.

$$\begin{aligned} \hbox {If} \left( {T\ge TV_{t_x , \Delta E} } \right) \hbox { then engagement decreases by } \Delta E \end{aligned}$$
(1)

Multi-input time on task threshold values \(TV_{t_x , \Delta E} \) can be computed for several predefined relative engagement changes \(\Delta E\) in order to increase the granularity of engagement monitoring. Depending on the chosen granularity, the multi-input time on task metric can be maintained for the whole session, for a given level, or for every individual redirected task. When the computed metric reaches or crosses the threshold, this can be reported or an adaptation procedure can be invoked. Since different sessions, levels, or redirected tasks have specific properties, for the same player the multi-input time on task metric will take different values across different sessions, levels, or tasks. Moreover, different players may have a different engagement for the same task, and thus the metric will have different values across individuals. As long as the relative distance between the thresholds remains intact, the degree of engagement remains high; once the relative distance increases, the degree of engagement diminishes. Hence an improper task assignment should be distinguished from a proper one accordingly.
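As a concrete reading of Eq. (1), the following sketch checks an observed task-completion time against several predefined thresholds and reports the largest engagement drop that is triggered; the threshold values in the example are hypothetical placeholders, not values from the paper.

```python
def engagement_drop(task_time, thresholds):
    """thresholds maps a relative drop dE (in %) to its upper time threshold TV_{t_x,dE}."""
    triggered = [dE for dE, tv in thresholds.items() if task_time >= tv]   # condition of Eq. (1)
    return max(triggered) if triggered else 0      # 0: no threshold exceeded, engagement kept

tv_table = {5: 30.0, 10: 45.0, 15: 60.0}           # assumed TV values (seconds) for one task
print(engagement_drop(50.0, tv_table))             # -> 10: engagement decreases by 10%
```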

Fig. 2 Queue waiting time delay optimization

The route selector automatically calculates a threshold value for the queue waiting time using the multi-input time on task mechanism. The basic multi-input time on task computation requires the route selector to run an initial probing session that collects all the routing metrics from the sensor nodes along the routing path. The engagement-related information is then used by the multi-input time on task metric to validate the predefined threshold. If the computed threshold is greater than the predefined one, the newly computed threshold overwrites it as the new predefined threshold \(\Delta E\). The working steps of the multi-input time on task threshold computation for the queue waiting time are given in Fig. 2.
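A minimal sketch of this threshold-update step, under the assumption that the probing session simply reports sampled queue waiting times and that the computed threshold is taken as the maximum observed wait (the paper does not specify the statistic):

```python
def update_queue_threshold(predefined, sampled_waits):
    computed = max(sampled_waits)                  # assumed statistic from the probing session
    return computed if computed > predefined else predefined   # overwrite only if larger

threshold = 0.05                                   # initial predefined threshold (seconds)
threshold = update_queue_threshold(threshold, [0.02, 0.04, 0.08, 0.03, 0.07])
print(threshold)                                   # -> 0.08, the new predefined threshold
```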

Fig. 3 Mobility model with the next neighboring node

Event waiting delay The receiving queue, represented by Q\(_{RX}\), denotes the arrival data queue in which arriving data wait for processing at the node. The transmission queue, represented by Q\(_{TX}\), denotes the data queue in which aggregated data wait for transmission to the neighbor node. For a node \(n_i \), the data transmission rate is decided by the node itself, the data receiving rate is decided by its upper neighboring node \(n_{i+1} \), and the overhearing rate is decided by its lower neighboring node \(n_{i-1} \). For a node \(n_i \), the data transmission rate \(\left( {{R}'_i } \right) \) is determined by the data generation rate \(\left( {R_{DG} } \right) \) and the actual data transmission rate \(\left( {R_{AD} } \right) \). The total event waiting time of data aggregation in \(\hbox {Q}_{\hbox {RX}} \) is as follows:

$$\begin{aligned} \tau _e \left( i \right) =\frac{{R}''_{i+1} }{\left( {2R_i +R_{AD} } \right) } \end{aligned}$$
(2)

The data arrival rate in \(\hbox {Q}_{\hbox {TX}} \) is given by:

$$\begin{aligned} {R}'_i =R_i +\frac{R_i }{R_i +R_{AD} }R_{AD} \end{aligned}$$
(3)

The average data transmission time, with the next-node selection time \(\left( s \right) \), and the next-node waiting time are as follows:

$$\begin{aligned} \tau _{tx} \left( i \right) =\frac{s}{1-{R}'_i s} \end{aligned}$$
(4)
$$\begin{aligned} \tau _s \left( i \right) =\frac{{R}'_i \left( {\tau _{tx} \left( i \right) } \right) ^{2}}{2\left( {1-{R}'_i \cdot \tau _{tx} \left( i \right) } \right) } \end{aligned}$$
(5)

A node has to wait for a period of time if its upper or lower neighbors are transmitting data; this waiting time is the channel waiting time.

$$\begin{aligned} \tau _c \left( i \right) =2{R}'_i \times \tau _{tx}^2 \left( i \right) \end{aligned}$$
(6)

The total event waiting delay is the time interval from when event data is generated at a node until the data is received by the sink node in an N-hop network.

$$\begin{aligned} T_d =\sum _{i=1}^N {\left( {\tau _e \left( i \right) +\tau _s \left( i \right) +\tau _c \left( i \right) +\tau _{tx} \left( i \right) } \right) } \end{aligned}$$
(7)
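Eqs. (2)-(7) can be combined into a small per-hop delay calculation, as in the sketch below. Symbol names follow the text: R_i is the node's own rate, R_AD the actual data transmission rate, R_in the arrival rate from the upper neighbor (\(R''_{i+1}\), assumed given), and s the next-node selection time; the example rates are hypothetical.

```python
def node_delays(R_i, R_ad, R_in, s):
    tau_e = R_in / (2 * R_i + R_ad)                           # Eq. (2): event waiting time in Q_RX
    R_tx = R_i + (R_i / (R_i + R_ad)) * R_ad                  # Eq. (3): data arrival rate in Q_TX
    tau_tx = s / (1 - R_tx * s)                               # Eq. (4): average transmission time
    tau_s = (R_tx * tau_tx ** 2) / (2 * (1 - R_tx * tau_tx))  # Eq. (5): next-node waiting time
    tau_c = 2 * R_tx * tau_tx ** 2                            # Eq. (6): channel waiting time
    return tau_e + tau_s + tau_c + tau_tx                     # one hop's contribution to Eq. (7)

def total_event_waiting_delay(hops):
    return sum(node_delays(*h) for h in hops)                 # Eq. (7): sum over the N hops

# Three hypothetical hops: (R_i, R_AD, R_in) in packets/s and s in seconds.
path = [(5.0, 2.0, 3.0, 0.01), (4.0, 2.5, 3.5, 0.01), (6.0, 1.5, 2.0, 0.01)]
print(total_event_waiting_delay(path))
```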

Received signal strength It is based on metrics such as distance and transmission energy. If the source node transmits a packet with energy \(E_{S-next} \), the received signal strength at a sensor node at a distance of \(d_{ic} +d_s \) can be expressed as follows:

$$\begin{aligned} RSS=\left( {\frac{E_{S-next} }{4\pi \left( {d_{ic} +d_s } \right) }} \right) ^{2}\cdot \hbox {tlt2} \end{aligned}$$
(8)

Mobility model For a moving sensor node, the distance between the source and the relay node changes with time, which causes link failure problems in the wireless network. Consider the basic mobility model with a source node and a relay node as shown in Fig. 3. The path along which the source node moves is modelled as a straight line, together with the horizontal and vertical angles and its distance from the selected relay node. Let C be the projection of the relay node onto the movement path. The distance between the source and the relay node decreases as the source moves towards C and increases as it moves away from point C.

Let \(d_1 , d_2 , \ldots , d_n \) be the link distances corresponding to the moving points \(p_1 , p_2 , \ldots , p_n \), with \(p_1<p_2 \cdots <p_n \) and \(d_1<d_2 \cdots <d_n \). In transmission mode, the source node can move from one position to another as required, which may cause a link failure between the source and its next neighboring node. The handoff regions can be established using a particular path loss model. Let d be the distance between the source and the handoff point, and let M be the probability of the handoff point p, as follows:

$$\begin{aligned} M=\left\{ {{\begin{array}{l} {p \left( {d_{i+1}<d<d_i } \right) ; \quad for\, i=1, 2, \ldots , n-1 } \\ {p \left( {0<d<d_i } \right) ; \quad for\, i= n } \\ \end{array} }} \right. \end{aligned}$$
(9)

Once the source identifies an engagement (relay) node, it begins to transmit data. As the source moves, the signal strength with the engaged node varies yet remains dependable. After the source has covered a certain area, denoted by position C in Fig. 3, additional nodes apart from the already engaged node may be detected. The source ignores such nodes as long as it maintains a strong (dependable) signal relationship with the already engaged node. As the source moves further, the link quality starts deteriorating. At that point, using the handoff regions, the speed of the node, and its position relative to the reference point C, a new node is designated as the next engaging node. The zone in which the source node currently resides determines the next zone, i.e., point n is followed by point \(n+1\) with respect to its distance to the reference point.
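The zone test implied by Eq. (9) can be sketched as follows. The zone distances are assumed to be ordered so that \(d_{i+1} < d_i\), as the inequalities in Eq. (9) require, and the probability \(p(\cdot)\) attached to each zone is assumed to come from the path-loss model mentioned above; the distances used in the example are hypothetical.

```python
def mobility_zone(d, zone_dist):
    """zone_dist[i-1] holds d_i; return the zone index i whose interval contains d."""
    n = len(zone_dist)
    for i in range(1, n):                           # i = 1, ..., n-1 (first case of Eq. (9))
        if zone_dist[i] < d < zone_dist[i - 1]:     # d_{i+1} < d < d_i (list is 0-based)
            return i
    if 0 < d < zone_dist[n - 1]:                    # i = n (second case of Eq. (9))
        return n
    return None                                     # outside every zone: hand off to a new relay

# Hypothetical zone distances in metres, ordered so that d_{i+1} < d_i.
print(mobility_zone(27.0, [50.0, 35.0, 20.0, 10.0]))   # -> 2, since d_3 = 20 < 27 < d_2 = 35
```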

After normalizing all of the above objective functions into the range 0 to 1 for efficient optimization, the optimal next neighboring node is identified as given below.

$$\begin{aligned} F=r_1 T_d +r_2 RSS+r_3 M \end{aligned}$$
(10)

It is subject to \(\hbox {next}_{\hbox {node}} =\hbox {Min} \left( \hbox {F} \right) \), where the multiple inputs are T\(_{d}\), RSS, and M, and r\(_{1}\), r\(_{2}\), and r\(_{3}\) are random weighting parameters that ensure the values do not fall on extreme ends.
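A minimal sketch of this phase-I selection rule: each candidate next hop reports its delay (Eq. (7)), received signal strength (Eq. (8)), and mobility metric (Eq. (9)); every input is min-max normalized into [0, 1], and the candidate minimizing F in Eq. (10) is chosen. The weights and measurements below are illustrative, and the normalization is assumed to be oriented so that smaller values indicate a better candidate (e.g., RSS expressed as a cost).

```python
def normalize(values):
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

def select_next_hop(candidates, r1=0.4, r2=0.3, r3=0.3):
    """candidates: list of (node_id, T_d, rss_cost, M); returns the id minimizing F."""
    td = normalize([c[1] for c in candidates])
    rss = normalize([c[2] for c in candidates])
    mob = normalize([c[3] for c in candidates])
    f = [r1 * td[i] + r2 * rss[i] + r3 * mob[i] for i in range(len(candidates))]   # Eq. (10)
    best = min(range(len(candidates)), key=lambda i: f[i])                         # Min(F)
    return candidates[best][0]

# Hypothetical neighbour measurements: (id, delay in s, RSS cost, mobility metric).
neighbours = [("n1", 0.08, 0.6, 2.0), ("n2", 0.03, 0.2, 1.0), ("n3", 0.05, 0.9, 3.0)]
print(select_next_hop(neighbours))    # -> "n2"
```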

Network lifetime: the time at which the first node in the network exhausts its energy and can no longer send packets, thereby degrading the functionality of the network.

4.2 Route computation using altered gravitational search algorithm

An altered gravitational search algorithm is used to choose an efficient path for transmitting the information to the sink. The algorithm selects the best hops to transmit the sensed information to the sink node. Since the source node keeps moving, an efficient way of computing the route for transmitting the information to the sink is needed. Our proposed altered gravitational search algorithm serves this purpose through an analogy with Newton's law of gravity, which states that every particle attracts every other particle with a force (F) proportional to the product of their masses and inversely proportional to the square of the distance between them.

$$\begin{aligned} F=G \left[ {\frac{P_1 P_2 }{D^{2}}} \right] \end{aligned}$$
(11)

where F is the force between the particles, G is the gravitational constant, P\(_{1}\) and P\(_{2}\) are the masses of the two particles, and D is the distance between them. Here each source node transmits the information to its selected next neighboring node, which forwards it towards the sink; the intermediate node collects and aggregates the information before sending it on to the sink. However, when an intermediate node receiving data begins to drift away from the sink node, it should no longer be considered, irrespective of whether it still has battery power. The altered gravitational search algorithm is then used to calculate a new route for transmitting data to the sink via a new intermediate node.
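The paper does not detail how the altered gravitational search algorithm assigns masses, so the sketch below only illustrates the idea: the force of Eq. (11) is used to rank candidate intermediate nodes, with a node's residual energy standing in for its mass (an assumption) and nodes drifting away from the sink excluded, as described above.

```python
import math

G = 6.674e-11      # gravitational constant, used here purely as a scale factor

def force(p1, p2, distance):
    return G * (p1 * p2) / (distance ** 2)            # Eq. (11)

def pick_relay(source, sink, candidates):
    """candidates: list of (node_id, (x, y), residual_energy); source and sink are (x, y)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    best, best_score = None, -1.0
    for node_id, pos, energy in candidates:
        if dist(pos, sink) >= dist(source, sink):     # skip relays drifting away from the sink
            continue
        score = force(energy, 1.0, dist(pos, sink))   # stronger pull: closer, energy-rich relay
        if score > best_score:
            best, best_score = node_id, score
    return best

# Hypothetical layout: source at (0, 0), sink at (100, 0), three candidate relays.
relays = [("a", (40.0, 10.0), 0.8), ("b", (60.0, -5.0), 0.5), ("c", (90.0, 40.0), 0.9)]
print(pick_relay((0.0, 0.0), (100.0, 0.0), relays))   # -> "c"
```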

5 Experimental results

The proposed FCC scheme is simulated using the Network Simulator (NS-2) tool with a large network area of 1000\(\times \)1000 m\(^{2}\), high-density sensor nodes, and a sink node. The simulation time of the proposed system is set to 500 s. The bandwidth is 22 MHz for all channels and the transmit power of each node is 1 W; the user's and base station's transmit powers are fixed at 17 dBm and 20 dBm, respectively. IEEE 802.11n in the 5 GHz band is used as the physical layer, with MAC service data unit aggregation and 1500 bytes of payload data per frame. The performance of the proposed FCC scheme is analysed in two different test scenarios, i.e., varying the number of nodes and varying the node mobility. The results are compared with the existing congestion control scheme, CCOR [24], in terms of data loss ratio, energy consumption, average number of hops, and network lifetime. The experimental parameters and test scenarios are given in Tables 1 and 2.

Table 1 Experimental parameters
Table 2 Test scenarios
Fig. 4 Performance comparison of data drop ratio with varying number of nodes

Fig. 5 Performance comparison of energy consumption with varying number of nodes

Fig. 6 Performance comparison of average hop counts with varying number of nodes

Fig. 7 Performance comparison of network lifetime with varying number of nodes

Fig. 8 Performance comparison of data drop ratio with varying node mobility

Fig. 9 Performance comparison of energy consumption with varying node mobility

Fig. 10 Performance comparison of average hop counts with varying node mobility

Fig. 11 Performance comparison of network lifetime with varying node mobility

5.1 Varying number of nodes

In this test, the number of nodes is varied from 100 to 500 in steps of 100, with the network area fixed at 1000\(\times \)1000 m\(^{2}\) and the node mobility (speed) at 500 m/s. The performance comparison of the data drop ratio is given in Fig. 4. From this plot, the FCC scheme achieves lower data losses than the CCOR scheme. In the FCC scheme, packets that are about to be dropped because of the queue length are caught and spread over the network, so that they eventually reach the sink; in contrast, in the CCOR technique such packets are lost. The performance comparison of energy consumption is given in Fig. 5. From this plot, we infer that the FCC scheme consumes less energy than the CCOR scheme. In principle, the FCC scheme uses more hops and therefore consumes more energy per packet, but it avoids much of the packet loss and the associated re-forwarding. For the same number of nodes, the average number of routing hops of FCC is higher than that of CCOR, as shown in Fig. 6, because the data are distributed over multiple hops before reaching the sink, resulting in a reduced loss rate. The performance comparison of network lifetime is given in Fig. 7. The plot clearly shows that the proposed FCC scheme achieves a longer network lifetime than the CCOR scheme, because it selects the best congestion-free node, which is suitable for data forwarding in any situation.

5.2 Varying node mobility

In this test, we vary the node mobility (speed) over 100, 200, 300, 400, and 500 m/s with the network area fixed at 1000 \(\times \) 1000 m\(^{2}\) and the number of sensors at 500. The performance comparison of the data loss ratio is given in Fig. 8. From this plot, it is inferred that the FCC scheme achieves lower data losses than the CCOR scheme.

The performance comparison of energy consumption is given in Fig. 9. From this plot, the FCC scheme consumes less energy than the CCOR scheme. The average number of intermediate hops between source and sink node is shown in Fig. 10; for the same number of nodes, the average number of routing hops of FCC is higher than that of CCOR. The performance comparison of network lifetime is given in Fig. 11. The plot clearly shows that the FCC scheme achieves a longer network lifetime than the CCOR scheme.

6 Conclusion

In this work, a fast congestion control (FCC) scheme based on a hybrid optimization algorithm has been proposed for WSNs in a closed experimental setup. First, the best intermediate node is selected using a multi-input time on task optimization algorithm; the waiting delay, received signal strength, and mobility are the inputs of this algorithm, which reduces congestion and improves the network lifetime. Then, a path between the sources and the sink node is computed using a modified gravitational search algorithm, which provides a better routing path. Experimental results show that the FCC scheme can effectively control congestion, minimizing data loss and energy consumption and maximizing the network lifetime at the cost of a slightly higher average hop count, compared to the existing CCOR scheme. Since NS-2 has limited scalability, the work can be carried forward to a distributed environment by using other real-time distributed network protocols.