1 Introduction

Recent years have witnessed an escalation in the deployment of wireless multi-hopping technologies for accessing the internet via 4G/5G cellular links [1,2,3] and Wi-Fi hotspots [4,5,6,7]. Multi-hop communications considerably widen the network coverage and boost the system capacity of cellular and Wi-Fi networks. Multi-hop device-to-device (D2D) communication remains a vibrant research topic in 5G networks [8,9,10], as it improves spectrum efficiency and attains better fairness in sharing network resources. A collection of self-governing and self-healing wireless nodes combines to form a multi-hop wireless network (MHWN) [11,12,13,14]. Wireless nodes in an MHWN share information among themselves through a chain of intermediate wireless relay nodes without any pre-existing communication infrastructure. The wireless relay nodes function as routers that can maintain, update, repair, and reconstruct routes autonomously. The MHWN, with its ubiquitous computing capability, self-configuration, and faster deployment, is an excellent choice for emergency response communication, battlefield communication, disaster communication, and community wireless networks. Figure 1 shows the schematic representation of multi-hopping at the last-mile cellular wireless network.

Fig. 1 Multi-hop networks deployed at the last-mile cellular wireless network

In the wired or wireless internet, TCP [15] remains the dominant transport protocol, as it handles an enormous volume of global internet data traffic and reliably delivers data packets between end devices. TCP plays an influential role in delivering data traffic between wireless end systems and remote cloud servers in the 5G-enabled internet of things (IoT) for smart city applications [16,17,18,19]. In recent years, an extensive array of experiments and studies was carried out to validate the performance of TCP under the different channel and traffic conditions of ultra-dense 5G networks [20,21,22,23]. TCP's congestion control algorithm remains a prime research agenda due to its responsibility for maintaining the system throughput rate below the network capacity, i.e., controlling the packet transmission rate to avoid buffer overflows at the wireless relay nodes. For each connection, TCP maintains two state variables, the congestion window (cwnd) at the source and the receiver window (rwnd) at the destination, which regulate the data flow into the network based on the network capacity or receiver buffer capacity. TCP's congestion control process is governed primarily by the slow start (SS) phase, the congestion avoidance (CA) phase, and the loss or congestion recovery phase.

The sender fixes the threshold limit for the SS phase, commonly known as SSThresh, based on the receiver's advertised capacity during the connection negotiation phase. In the SS phase, the cwnd grows by one segment for each acknowledgment (ACK) packet, doubling every RTT, and the sender swiftly invokes the CA phase once the cwnd exceeds the SSThresh. In the CA phase, the cwnd is incremented by one packet for each round trip time (RTT) until a missing ACK or packet loss is detected. On experiencing packet loss through a missing ACK or timeout, TCP invokes the congestion control process by flatly reducing the transmission rate to half of its current cwnd size. In the congestion recovery phase, the sender gradually increments the cwnd by one packet for each RTT until the steady-state condition (cwnd = network capacity) is attained.
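The window dynamics described above can be sketched in a few lines of Python (a simplified model: units are packets, one step corresponds to one RTT, and the initial window, SSThresh, and loss position are illustrative):

```python
def aimd_step(cwnd, ssthresh, loss):
    """One RTT of standard TCP behaviour (simplified model)."""
    if loss:
        # Congestion control: flat reduction to half the current cwnd.
        ssthresh = max(cwnd / 2, 2)
        cwnd = ssthresh
    elif cwnd < ssthresh:
        cwnd *= 2          # slow start: cwnd doubles every RTT
    else:
        cwnd += 1          # congestion avoidance: +1 packet per RTT
    return cwnd, ssthresh

cwnd, ssthresh = 1, 64     # illustrative initial window and SSThresh
trace = []
for rtt in range(10):
    cwnd, ssthresh = aimd_step(cwnd, ssthresh, loss=(rtt == 7))
    trace.append(cwnd)
```

The trace grows exponentially until SSThresh, then linearly, and halves at the simulated loss, matching the three phases described above.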

The congestion control algorithms for MHWN are categorized broadly into independent and network-assisted congestion control approaches. The independent congestion control approaches remain vulnerable to the impact of RTT delay spikes, which occur due to the rerouting of data packets at the wireless relay nodes associated with node mobility. Several research studies and experimental investigations in the past [24,25,26,27,28] reveal that the throughput stability of TCP traffic deteriorates in the wireless environment due to spurious initiation of the congestion control process for non-congested packet drops (link failures and channel errors). In contrast, the network-assisted congestion control approach remains the preferred choice under wireless conditions, as it minimizes the impact of spurious rate reduction based on explicit congestion notification (ECN) [29, 30] from the intermediate nodes. The sender responds to the ECN message by initiating the congestion control process, thereby maintaining flow stability and preventing network congestion in its infancy.

In the MHWN, the three significant limitations that cripple the throughput, fairness, and end-to-end message latency performance of the standard TCP implementation are:

(i) TCP, with its weakly designed congestion control algorithm, fails to recognize non-congested packet losses (channel-error and link-failure losses) and initiates a spurious congestion control process, resulting in severe throughput degradation.

(ii) The flat-rate reduction initiated by the sender during the congestion control process severely affects the bandwidth-sharing capability of the TCP connection with the lowest utility rate.

(iii) In the loss recovery phase, the cwnd growth with a slower flow convergence rate requires more RTTs to reach the steady-state condition, resulting in a longer time to complete the data flow between the source and destination processes.

The FAIR+ algorithm proposed in this article overcomes traditional TCP's limitations by implementing an aggressive cwnd growth pattern, a utilization-level-based rate reduction process, and faster window growth in the loss recovery process. The subsequent sections of this article are organized as follows: Sect. 2 concisely discusses the background and related works on network-assisted congestion control approaches; Sect. 3 describes the TCP FAIR+ working mechanism; Sect. 4 derives the analytical model of throughput stability using duality theory; Sect. 5 outlines the simulation results and analysis of the proposed TCP FAIR+ against the standard TCP variants; and Sect. 6 concludes the article.

2 Background and related works

This section briefly discusses various ECN-based congestion control algorithms proposed for the multi-hop wireless environment. The ECN implementation requires a modification of the relay router queuing discipline and initiates a congestion notification when the packet ingress rate exceeds the queue threshold level. The Jersey [31] and New Jersey [32] algorithms were the first of their kind to introduce the ECN approach in the relay node to alert the end stations about network congestion. Furthermore, these algorithms rely on the available bandwidth estimation (ABE) of Westwood [33] for fixing a new SSThresh value after each window reduction process. Although these algorithms successfully regulate the spurious rate reduction impact, they fail to achieve a fast convergence rate in the loss recovery phase due to their slow cwnd growth pattern.

MAD-TCP [34] implementation requires modification in the medium access control (MAC) and internet protocol (IP) packet headers to warn the sender about buffer overflows at the relay routers, MAC layer contention, link failures, and channel errors. The sender initiates the rate reduction or route repair process based on the internet control message protocol (ICMP) notifications. The feedback assisted congestion control method [35] requires feedback value computation based on ECN and the updated advertised window (awnd) value from the ACK packet to initiate the congestion control process. The link-layer originated explicit link status notification (LL-ELSN) [36] approach prevents the sender from spurious timeout during the packet losses associated with the link failure and channel errors based on the explicit re-transmission start notification of the relay node.

The joined TCP bandwidth allocation (TBA) algorithm and wireless explicit congestion notification (WECN) mechanism [37] estimate the optimal cwnd size that the destination can handle during each reception process, and the WECN mechanism is intended to give the sender an early warning about the congested wireless link. During ECN marking, the receiver process fixes a suitable cwnd size in the advertised window that the destination can handle without overwhelming the wireless link capacity.

The non-congestion events (NCE) [38] implementation separates non-congested losses from bottleneck link losses based on the ECN approach. Its second approach differentiates the non-congested losses by comparing the flight-size information of the packets with a link latency threshold value. However, the NCE algorithm fails to address the flat-rate reduction and slower convergence rate of the traditional NewReno approach. The non-congested re-transmission timeout (NRT) [39] comprises NRT-detection, NRT-differentiation, and NRT-reaction. In NRT-detection, the sender interprets ACK packets marked with the ECN flag as an indication of a bottleneck wireless link and cuts down the current transmission rate by half of its capacity. In NRT-differentiation and reaction, the sender interprets the packet loss of unmarked ACK packets as a spurious timeout and initiates re-transmission without reducing the current transmission rate. The NRT congestion control algorithm, although capable of curbing unnecessary rate reduction during non-congested losses, neglects the limitations of flat-rate reduction and slower convergence rate.

The joint congestion control and scheduling (JCCS) [40] approach determines the minimum transmission rate of the sender based on a virtual throughput estimation approach. The congestion control process is initiated based on explicit feedback from the wireless relay node. However, the window-based approach results in a slower recovery rate during the loss or congestion recovery process. TCP optimal queue selection (OQS) [41] is a joint effort of sender-side TCP and the wireless relay node's dual-queue algorithm, intended to address the flow fairness and throughput stability issues in asymmetric wireless networks. In the dual-queue approach, the primary buffer holds the data packets, and the secondary buffer holds the ACK packets. The primary and secondary buffers implement distinct queuing disciplines, i.e., active queue management (AQM) for the primary and first-in, first-out (FIFO) for the secondary buffer. At the sender side, the congestion control process is initiated based on the primary buffer's ECN mechanism. During the congestion control process, the sender implements a flat-rate reduction similar to that of the NewReno approach.

The feedback-assisted recovery (FAR) [42] implements concave window growth to achieve faster convergence during the recovery process. FAR uses three decisive points to fix the new SSThresh during the rate reduction process. However, the FAR implementation requires more tuning parameters to achieve faster flow convergence compared to the traditional AIMD approach. Inter-vehicular access network (IVAN) TCP [43], developed for the vehicular environment, follows a concave cwnd growth pattern and pushes more data packets into the network. However, under node mobility and channel error conditions, IVAN experiences an enormous number of packet drops compared to the linear window increment pattern.

In summary, the ECN-based approaches yield better throughput performance under a multi-hop wireless environment. However, two inherent limitations compromise the throughput and flow fairness performance of these approaches: (1) flat dispatching-rate reduction during the congestion control process and (2) slow flow convergence in the loss recovery phase. The proposed FAIR+ approach is designed to overcome these limitations by implementing the GBRC algorithm in the rate reduction process and the AWR algorithm in the loss recovery phase cwnd growth.

3 Proposed TCP FAIR+ congestion control approach

The proposed TCP FAIR+ implementation requires modifications in the SS phase cwnd growth pattern, congestion control process, and loss recovery algorithm.

3.1 SS phase window growth

In the SS phase, the sender gradually increments the transmission rate (cwnd) until it finds the network capacity. The proposed FAIR+ algorithm fixes the initial SSThresh at the maximum window size (Max_wnd) and partitions the entire cwnd growth into low and high utilization regions based on the growth threshold level δ (δ = \(\frac{Max\_wnd}{2}\)). In the first RTT, the sender fixes the cwnd at the initial window size [44]; in subsequent RTTs, the sender increments the cwnd for each successfully received ACK packet, doubling it every RTT. In the region below \(\frac{\delta}{8}\), cwnd is updated as

$$cwnd_{(k + 1)} = cwnd_{(k)} + 1,\quad cwnd < \frac{\delta}{8}$$
(1)

When the cwnd exceeds \(\frac{\delta}{8}\), FAIR+ implements a new increment pattern that allows the source node to grab the available network bandwidth quickly, and the cwnd is updated as

$$cwnd_{(k + 1)} = cwnd_{(k)} + \left( (2*Max\_wnd) - cwnd_{(k)} \right)*\gamma$$
(2)

The parameter γ is the rate increment factor and takes the value \(\frac{1}{4}\). A smaller γ value results in a slower growth rate, while a larger value leads to buffer overflows in the bottleneck routers. When the cwnd reaches Max_wnd, the sender swiftly initiates the CA phase, and the cwnd is updated as

$$cwnd_{(k + 1)} = cwnd_{(k)} + \frac{1}{cwnd_{(k)}}$$
(3)

The pseudocode for the SS and CA phase is presented as 

figure d
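As a complement to the pseudocode, the SS/CA growth rules in Eqs. (1)–(3) can be sketched per ACK as follows (a minimal sketch; the Max_wnd value and function name are our illustrative assumptions):

```python
MAX_WND = 64            # maximum window size in packets (illustrative)
DELTA = MAX_WND / 2     # growth threshold: delta = Max_wnd / 2
GAMMA = 0.25            # rate increment factor, gamma = 1/4

def fairplus_window_update(cwnd):
    """Advance cwnd by one ACK according to the FAIR+ SS/CA rules."""
    if cwnd < DELTA / 8:
        return cwnd + 1                                # Eq. (1)
    if cwnd < MAX_WND:
        nxt = cwnd + (2 * MAX_WND - cwnd) * GAMMA      # Eq. (2)
        return min(nxt, MAX_WND)                       # cap at Max_wnd
    return cwnd + 1 / cwnd                             # Eq. (3): CA phase
```

Starting from a small window, Eq. (2) closes a quarter of the gap to 2·Max_wnd per ACK, which is what gives FAIR+ its aggressive SS phase growth.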

The SS phase of the FAIR+ algorithm implements a faster cwnd growth pattern compared to that of the existing TCP implementation [45] that allows the sender to grab the available network bandwidth quickly (Fig. 2).

Fig. 2 SS phase cwnd growth of FAIR+ and AIMD

3.2 Congestion control process

The congestion in the relay node leads to performance degradation in the form of queuing latency, buffer overflows, and even collapse of the entire network. At the sender side, the congestion control process is initiated based on the two-level congestion notifications. The proposed dual queue dual marking (DQDM) (Fig. 3) algorithm has two significant functionalities.

(i) The packet classifier drives the incoming data and control packets into the appropriate queues (data (QD) and control (QC) buffers).

(ii) The QC holds the non-data packets and ensures no loss of control packets in the relay routers; it implements a FIFO queuing mechanism. The QD accommodates the data packets and implements AQM [46, 47], which maintains different threshold levels to notify the sender of queue build-up.

Fig. 3 The proposed DQDM model

The control packets in the MHWN remain the lifeline for network survivability, as they carry acknowledgment packets and vital link-related packets such as route request, route reply, link error, and link repair/update messages. The QC acts as a separate pipe that holds these vital control packets and occupies 10% of the total queue size (QT). For every packet arrival, the QC compares the instantaneous queue size (QInst) with the queue limit (Qlimit) and discards packets when QInst exceeds Qlimit. The QD occupies 90% of QT and works on the following strategies: (i) enqueuing data packets and (ii) congestion alerts at regular intervals based on the queue accumulation levels (queue threshold 1 (QTH1) and queue threshold 2 (QTH2)).

Fixing the queue threshold levels (QTH1 and QTH2) remains a prime concern: a larger QTH1 results in slower responsiveness and buffer bloat at the relay nodes, whereas a smaller QTH1 value leads to frequent congestion notifications. For faster responsiveness towards queue build-up, the QD uses a minimum of 5 data packets as QTH1 and a maximum of 15 data packets as QTH2. The idea behind two-level notifications is to alert the sender about incipient or severe congestion in the wireless relay routers. The QD computes the average queue size (QAvg) [48] instead of the actual queue size (Qcur), with the highest marking probability (Mp = 1), and the QAvg value is computed as

$$Q_{Avg}(K + 1) = Q_{Avg}(K) + Q_{wt}\left( Q_{cur}(K) - Q_{Avg}(K) \right)$$
(4)

where QAvg(K + 1) is the average queue size at packet (K + 1), QAvg(K) is the average queue size at packet (K), Qwt is the queue weight, and Qcur(K) is the current queue size at packet (K).

The relay router initiates a congestion notification (level 1 or level 2) when the packet arrival rate exceeds the marking threshold QTH1 or QTH2, respectively. The pseudocode for packet processing in the QD is given as

figure e
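The QD processing above can be sketched as follows (a simplified model that omits dequeuing and drops: QTH1 = 5 and QTH2 = 15 follow the text, while the queue weight Qwt = 0.2 and the class name are our illustrative assumptions):

```python
class DataQueue:
    """Simplified QD of the DQDM model: EWMA queue size, Eq. (4),
    with two marking thresholds for two-level ECN."""

    def __init__(self, q_th1=5, q_th2=15, q_wt=0.2):
        self.q_th1, self.q_th2, self.q_wt = q_th1, q_th2, q_wt
        self.q_avg = 0.0
        self.q_cur = 0          # instantaneous queue length (packets)

    def enqueue(self):
        """Enqueue one data packet and return the congestion level
        (0 = none, 1 = incipient / CE-1, 2 = severe / CE-2)."""
        self.q_cur += 1
        # Eq. (4): Q_avg(K+1) = Q_avg(K) + Q_wt * (Q_cur(K) - Q_avg(K))
        self.q_avg += self.q_wt * (self.q_cur - self.q_avg)
        if self.q_avg >= self.q_th2:
            return 2            # level-2 notification (mark CE-2)
        if self.q_avg >= self.q_th1:
            return 1            # level-1 notification (mark CE-1)
        return 0

q = DataQueue()
levels = [q.enqueue() for _ in range(20)]   # a sustained arrival burst
```

Under a sustained burst, the EWMA lags the instantaneous queue, so the level-1 notification fires only after several arrivals and the level-2 notification well after that, which filters out transient spikes.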

In the IP header, one-bit congestion experience fields (CE-1 and CE-2) are added for two-level congestion notifications, as shown in Fig. 4a. For higher-layer support, explicit congestion notification echo flags (ECE-1 and ECE-2) are added in the TCP header, as shown in Fig. 4b. The relay router enables CE-1 or CE-2 in the IP header for level 1 or level 2 notifications. Based on CE-1 or CE-2, the receiver sets the ECE-1 or ECE-2 flag in the ACK packets moving towards the sender. The receiver keeps setting the ECE-1 or ECE-2 echo until it receives the sender's response flag, i.e., the congestion window reduced (CWR) flag. The overall communication model of the FAIR+ algorithm (Fig. 5) is developed based on [49].

Fig. 4 a IP header modification and b TCP flag modification

Fig. 5 FAIR+ communication model

3.3 Rate adjustment and loss recovery

On receiving an ACK packet with the ECE-1 flag enabled, FAIR+ implements a growth-based rate control (GBRC) algorithm that lessens the dispatching rate based on the window utility threshold (δ). When the cwnd growth is in the low utilization region (cwnd ≤ δ), the new SSThresh is computed as

$$SS_{Thresh} \, = \,\beta_{1} *cwnd_{(k)} ;\quad cwnd \le \delta$$
(5)

Conversely, when the cwnd growth is in the high utilization region (cwnd > δ), new SSThresh is computed as

$$SS_{Thresh} = \beta_{2} *cwnd_{(k)};\quad cwnd > \delta$$
(6)

For the second notification level, the sender fixes the new SSThresh irrespective of its utilization levels.

$$SS_{Thresh} = \beta_{3} *cwnd_{(k)}$$
(7)
$$\beta_{3} = 2*\left( \beta_{1} - \beta_{2} \right)$$
(8)

where β1 = 0.7 and β2 = 0.5 are the window reduction factors; larger β1 and β2 values result in buffer overflows in the relay router, and smaller values result in slower window recovery. In the congestion recovery phase, the proposed FAIR+ implements the AWR algorithm, which updates its cwnd based on a new linear equation as follows

$$cwnd_{(k + 1)} = cwnd_{(k)} + \frac{\eta *Max\_wnd}{cwnd_{(k)}}$$
(9)

where η is the window growth factor and takes the value 2; a larger η value results in faster oscillation, and a smaller η value results in slower window recovery. The pseudocode for the rate adjustment and loss recovery approach is given as

figure f
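A minimal sketch of the GBRC threshold selection in Eqs. (5)–(8) and one AWR growth step from Eq. (9) (the Max_wnd value and function names are our illustrative assumptions; β1, β2, and η follow the text):

```python
BETA1, BETA2 = 0.7, 0.5              # window reduction factors
BETA3 = 2 * (BETA1 - BETA2)          # Eq. (8): beta3 = 0.4
ETA = 2                              # AWR window growth factor
MAX_WND = 64                         # illustrative maximum window
DELTA = MAX_WND / 2                  # utilization threshold delta

def gbrc_ssthresh(cwnd, level):
    """New SSThresh after a level-1 or level-2 ECN notification."""
    if level == 2:
        return BETA3 * cwnd          # Eq. (7): severe congestion
    if cwnd <= DELTA:
        return BETA1 * cwnd          # Eq. (5): low utilization region
    return BETA2 * cwnd              # Eq. (6): high utilization region

def awr_step(cwnd):
    """One-RTT window growth in the loss recovery phase, Eq. (9)."""
    return cwnd + (ETA * MAX_WND) / cwnd
```

Note how a low-utilization flow keeps 70% of its window while a high-utilization flow keeps only 50%, which is the utilization-aware alternative to the traditional flat halving.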

The performance of the proposed FAIR+ algorithm is validated using two research methodologies: (i) an analytical model and (ii) simulation experiments. The finite state machine for the FAIR+ algorithm is presented in Fig. 6.

Fig. 6 FAIR+ state transition diagram

4 Throughput stability model

The TCP throughput stability model based on duality theory [50] derives the best possible transmission rate xs at which the sender achieves the maximum link utility Us(xs) without exceeding the network capacity. Figure 7 represents the closed-loop feedback model of the network-assisted congestion control approach.

Fig. 7 TCP closed-loop feedback model

The throughput stability model of the FAIR+ algorithm considers the source node transmission rate xs as the primal variable and the wireless relay node's queue accumulation level as the dual variable p. The sender initiates rate reduction based on the relay node queue length, commonly known as the link price (or congestion measure) of the link. The maximum link utility Us(xs) depends on the following factors: the wireless link capacity (Lc), the number of senders (s), the minimum transmission rate (mxs), and the maximum transmission rate (Mxs). The link capacity Lc depends on the signal to interference and noise ratio (SINR) of the radio channel, derived as [51]

$$L_{c} = W\log(1 + K\gamma)$$
(10)

where γ is the instantaneous SINR of wireless link l, W is the signal bandwidth, and the constant K is fixed depending on the bit error rate, modulation, and coding rate. To achieve the optimal transmission rate xs, the sender needs to satisfy the following conditions; the objective function in the primal variable xs is defined as

$$\mathop{\max}\limits_{mx_{s} \le x_{s} \le Mx_{s}} \sum\limits_{s} U_{s}(x_{s})$$
(11)
$$\text{subject to}\quad \sum\limits_{s} x_{s} \le L_{c},\quad \forall l$$
(12)

The first condition states that the sender attains the maximum link utility Us(xs) by choosing a transmission rate xs between mxs and Mxs, where mxs ≥ 0 and Mxs ≤ ∞. The second condition states that the aggregate optimal transmission rate xs does not exceed the link capacity Lc. Conditions 1 and 2 are combined by deriving the duality Eq. (13).

$$\mathop{\min}\limits_{p \ge 0} D(p) = \sum\limits_{s} \left( \mathop{\max}\limits_{mx_{s} \le x_{s} \le Mx_{s}} U_{s}(x_{s}) - x_{s}p \right) + \sum\limits_{l} p*L_{c}$$
(13)

where Us(xs) is a strictly concave, non-decreasing function, i.e., higher utility results in a larger transmission rate. The xs(p) is the inverse of the utility function (\(U_{s}(p)^{-1}\)) and lies between mxs and Mxs.

$$x_{s} (p) = \left[ {U_{s} (p)^{ - 1} } \right]_{{mx_{s} }}^{{Mx_{s} }}$$
(14)

The transmission rate xs is a function of the congestion measure variable p. In the equilibrium (steady-state) condition, the source node transmission rate xs equals the link capacity. The sender infers the equilibrium state through the congestion measure variable p (queue length information). In the equilibrium condition, the utility function Us(xs) equals p:

$$U_{s} (x_{s} ) = p\,;\quad \forall \,s$$
(15)

For each sender, Us(xs) is a function of xs, given by

$$U_{s}(x_{s}) = \int f_{s}(x_{s})\,dx,\quad x_{s} \ge 0$$
(16)

The second term in Eq. (13) denotes the sum over links of the congestion price times the link capacity.

The proposed FAIR+ algorithm uses the relay node's queue length information as the congestion measure variable (p(t)) and reduces the transmission rate based on the TCP flow's current cwnd growth level. Let the equilibrium window size be ws, and the round trip time delay be RTTs. The average data transmitted per unit time, xs(t), is given by

$$x_{s} (t) = \frac{{w_{s} (t)}}{{RTT_{s} }}$$
(17)

The sender increments the window size for every successful acknowledgment with probability (1 − d) and reduces the transmission rate based on the loss probability d (d = p). The FAIR+ algorithm implements two different rate reduction approaches based on the cwnd growth level. Let r1 be the rate reduction factor for a cwnd with a lower growth level and r2 be the rate reduction factor for a cwnd with a higher growth level. From Eqs. (5) and (6), the reduced transmission rates for the low (r1) and high (r2) growth flows are computed as

$$\begin{aligned} r_{1} & = \frac{7}{10};\quad cwnd \le \delta \\ r_{2} & = \frac{1}{2};\quad cwnd > \delta \end{aligned}$$
(18)

The flow with the highest window growth (r2) is taken into consideration for the analysis, as the goal of the optimization algorithm is to find the maximum link utility Us(xs). Furthermore, the window increment rate of the proposed AWR algorithm in the loss recovery phase varies between 1/ws and 4/ws (Eq. (9)). For every positive acknowledgment (probability 1 − d), the window is incremented per unit time as

$$\frac{{4x_{s} (t)(1 - d(t))}}{{w_{s} (t)}}$$
(19)

Similarly, for ECN marked acknowledgment (d), the rate is reduced by ws/2 and the window is decremented per unit time as

$$\frac{{x_{s} (t)w_{s} (t)d(t)}}{2}$$
(20)

At equilibrium, average window increment is balanced by average window decrement per unit time

$$\frac{{4x_{s} (t)(1 - d(t))}}{{w_{s} (t)}} = \frac{{x_{s} (t)w_{s} (t)d(t)}}{2}$$
(21)

The loss rate d(t) is derived by substituting Eq. (17) in Eq. (21)

$$d(t) = \frac{8}{{8 + x_{s}^{2} (t)RTT_{s}^{2} }}$$
(22)

We know that the congestion measure equals the loss rate (p = d); substituting Eq. (22) into Eq. (16) gives

$$U_{s} (x_{s} ) = \int {\frac{8}{{8 + x_{s}^{2} (t)RTT_{s}^{2} }}dx,\quad x_{s} \ge {0}}$$
(23)

After integrating Eq. (23), the maximum link utility Us(xs) is derived as

$$U_{s}(x_{s}) = \frac{\sqrt{8}}{RTT_{s}}\tan^{-1} \left( \frac{x_{s}RTT_{s}}{\sqrt{8}} \right)$$
(24)
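The integration step can be checked numerically; the sketch below compares a midpoint-rule integral of Eq. (23) against the antiderivative \((\sqrt{8}/RTT_{s})\tan^{-1}(x_{s}RTT_{s}/\sqrt{8})\) (the RTT value and integration limit are our illustrative choices):

```python
import math

RTT = 0.1  # RTT_s in seconds (illustrative)

def loss_rate(x):
    """Eq. (22): d = 8 / (8 + x^2 * RTT^2)."""
    return 8.0 / (8.0 + (x * RTT) ** 2)

def utility_numeric(x, steps=100000):
    """Midpoint-rule integral of Eq. (23) from 0 to x."""
    dx = x / steps
    return sum(loss_rate((i + 0.5) * dx) for i in range(steps)) * dx

def utility_closed(x):
    """Antiderivative of 8/(8 + x^2 RTT^2): (sqrt(8)/RTT)*atan(x*RTT/sqrt(8))."""
    return (math.sqrt(8) / RTT) * math.atan(x * RTT / math.sqrt(8))

numeric = utility_numeric(50.0)
closed = utility_closed(50.0)
```

The two values agree to several decimal places, confirming that the arctangent form is the integral of the loss-rate expression in Eq. (22).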

The FAIR+ iterative algorithm can be derived by incorporating the DQDM congestion marking mechanism. Let Qb be the queue size of the bottleneck wireless relay router, and QThresh be the marking threshold. When the arrival rate λ exceeds QThresh, the wireless relay router initiates a congestion notification with the highest probability (mp(t) = 1), as shown in Fig. 8.

Fig. 8 DQDM packet marking probability

As illustrated in Fig. 8, the marking probability mp(t) of DQDM is derived as

$$m_{p}(t) = \begin{cases} 0; & Q_{b} < Q_{Thresh} \\ 1; & Q_{b} \ge Q_{Thresh} \end{cases}$$
(25)

The relationship between ws(t), the ECN-marked acknowledgment probability d(t) at link l, and the end-to-end ECN probability mp(t) per unit time is given by

$$m_{p} (t) = w_{s} (t)d(t)$$
(26)

In each RTT, the sender increases the rate by 4/RTTs with probability 1 − mp(t) (positive acknowledgment) per unit time and trims down the window by ws/(2RTTs) with probability mp(t). The average change in window growth over time period t is given by

$$\frac{4(1 - m_{p}(t))}{RTT_{s}} - \frac{w_{s}(t)m_{p}(t)}{2RTT_{s}}$$
(27)

By substituting Eqs. (17) and (26) in Eq. (27)

$$\frac{4(1 - x_{s}(t)RTT_{s}d(t))}{RTT_{s}} - \frac{x_{s}^{2}(t)RTT_{s}d(t)}{2}$$
(28)

Equation (28) can be rewritten per unit time (dividing by RTTs) as

$$\frac{4(1 - x_{s}(t)RTT_{s}d(t))}{RTT_{s}^{2}} - \frac{x_{s}^{2}(t)d(t)}{2}$$
(29)

Based on the marking probability mp(t), the rate adjustment mechanism of the proposed FAIR+ algorithm is derived as

$$x_{s}(t + 1) = \left[ x_{s}(t) + \frac{4(1 - x_{s}(t)RTT_{s}d(t))}{RTT_{s}^{2}} - \frac{d(t)x_{s}^{2}(t)}{2} \right]$$
(30)

Based on Eqs. (22) and (30), the source rate of FAIR+ is compared with ECN-enabled NewReno (Fig. 9) for 50 RTT samples. The following parameters are used for the computation: QTH1 = 5 packets and QTH2 = 15 packets, RTTs varies between 60 and 150 ms, and ws(t) attains a maximum value of 1030 packets. The proposed FAIR+ attains an improved transfer rate over the standard NewReno approach under varying RTT conditions. In the subsequent section, the duality model is validated by MHWN simulations under three diverse scenarios.
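The rate trajectory of Eq. (30) can be explored numerically. The sketch below interprets the bracketed update as an Euler step with a small step size DT (our assumption, added for numerical stability, along with the non-negativity projection and the RTT value) and takes d(t) from Eq. (22):

```python
RTT = 0.1                                   # RTT_s in seconds (illustrative)
DT = 0.01                                   # Euler step size (assumption)

def loss_rate(x):
    """Eq. (22): d = 8 / (8 + x^2 * RTT^2)."""
    return 8.0 / (8.0 + (x * RTT) ** 2)

def rate_derivative(x):
    """Growth and reduction terms of the Eq. (30) update."""
    d = loss_rate(x)
    return 4 * (1 - x * RTT * d) / RTT ** 2 - d * x ** 2 / 2

x = 1.0                                     # initial source rate (pkts/unit time)
for _ in range(5000):
    # Projected Euler step: rates are kept non-negative.
    x = max(x + DT * rate_derivative(x), 0.0)
```

With these constants, the growth and reduction terms balance at xs = 1/RTT, so the iteration rises from its initial value and settles at that fixed point, mirroring the stable source-rate behaviour reported for Fig. 9.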

Fig. 9 Source rate of FAIR+ and NewReno/ECN

5 Simulation and analysis

Multi-hop wireless simulation experiments were carried out to validate the analytical model's claims. The propagation and traffic models were created using network simulator 2 (NS-2) [52], and wireless node movements were generated using the BonnMotion mobility generator [53]. A total of 204 simulation experiments were performed with a 95% confidence interval under three patterns (hop length, traffic connections, and mobility) of multi-hop wireless scenarios. As recommended by RFC 5166 [54], the average throughput, average goodput, average end-to-end latency, and flow fairness metrics were chosen for the analysis.

(i) Average throughput is the measure of successful data packets received over the radio communication channel during a specified time interval.

(ii) Average goodput is the measure of the actual (excluding header) data rate successfully received over the radio communication channel during a specified time interval.

(iii) Average end-to-end data packet latency is the measure of the RTT delay between the sending and receiving processes.

(iv) Flow fairness is the measure of resource sharing among competing flows with different RTTs.

Table 1 gives precise details on simulation parameters used for the MHWN simulation.

Table 1 Simulation parameters for indoor and outdoor wireless models

5.1 Hop length analysis

The path or hop length analysis measures the performance of the FAIR+ algorithm and the existing approaches under varying hop-length conditions. The multi-hop simulation experiments are carried out by constructing a chain topology with a minimum path length of 2 hops and a maximum of 10 hops. The hop-length experiment results in Fig. 10 are summarized as follows: the FAIR+ algorithm under minimum and maximum hop-length conditions attains an average throughput of 5920 KB/s and 1950 KB/s, respectively, for a simulation duration of 1000 s. As the hop length increases, there is a considerable reduction in the transmission rate due to processing and queuing delays at the intermediate wireless relay routers. However, under both (minimum and maximum) path length conditions, the FAIR+ algorithm attains an average throughput improvement of 31.58% and 34.35% against Westwood, 23.31% and 17.43% against OQS, and 25.16% and 24.61% against the NRT approach. Similarly, FAIR+ attains an average goodput of 5710 KB/s and 1780 KB/s under the two path length conditions, outperforming the existing congestion control approaches: an improvement in average goodput of 28.02% and 39.95% against Westwood, 23.11% and 16.11% against OQS, and 26.61% and 29.71% against NRT.

Fig. 10 Hop length versus metrics: a average throughput (KB/s), b average goodput (KB/s), c average message latency (ms), and d fairness (%)

The flow stability in the network path influences the end-to-end message latency between the sender and receiver processes. The DQDM algorithm implemented at the relay router responds quickly to an increase in the queue accumulation level by initiating ECN, and the sender maintains flow stability by regulating outbound network traffic, which results in minimum end-to-end average message latency. For maximum and minimum path length conditions, the FAIR+ algorithm attains a substantial reduction in average message latency against Westwood (60.68% and 56.94%), OQS (51.39% and 47.64%), and NRT (57.58% and 52.14%). TCP traffic connections with the FAIR+ implementation attain higher bandwidth-sharing capability due to the cwnd-utilization-based rate control during ECN marking and the AWR algorithm in the loss recovery phase. FAIR+ attains a flow fairness of 97.8%, against Westwood (91%), OQS (94%), and NRT (93%), under both path length conditions.

5.2 Traffic connections analysis

The traffic handling capacity of the proposed FAIR+ algorithm is measured by conducting simulation experiments with increasing traffic flows, from two connections to a maximum of ten connections. A grid topology (22 × 22) was chosen for conducting the traffic flow experiments, and the traffic analysis results are summarized as follows: the FAIR+ algorithm attains a throughput of 4670 KB/s and 1530 KB/s for light-load and heavy-load conditions (Fig. 11). Under increasing traffic flow conditions, the FAIR+ algorithm initiates rate reduction during incipient congestion based on the relay node's queue accumulation level, thereby maintaining flow stability with an improved throughput and goodput rate over the existing approaches. Under light and heavy load conditions, the FAIR+ algorithm yields an average throughput improvement of 36.65% and 35.35% against Westwood, 31.11% and 14.37% against OQS, and 35.33% and 24.18% against the NRT approach. Similarly, FAIR+ attains a better goodput of 4410 KB/s and 1410 KB/s for light and heavy traffic conditions, with a goodput improvement of 35.33% and 24.18% against Westwood, 35.33% and 24.18% against OQS, and 35.33% and 24.18% against NRT.

Fig. 11 Traffic flows versus metrics: a average throughput (KB/s), b average goodput (KB/s), c average message latency (ms), and d fairness (%)

The average end-to-end message latency under high traffic conditions is governed by regulating the arrival rate to within the relay router's processing capability. FAIR+'s rate adjustment based on queue-length notification achieves better flow stability at the relay routers. Under high traffic, FAIR+ attains 48.71%, 38.58%, and 43.96% reductions in average message latency against Westwood, OQS, and NRT, respectively. Similarly, FAIR+ attains 98% flow fairness due to its cwnd-utilization-based reduction and the AWR algorithm in the loss recovery phase. Conversely, the existing approaches attain flow fairness of 92% (Westwood), 94% (OQS), and 94% (NRT) due to their flat-rate reduction during the congestion control process and their cwnd growth pattern in the loss recovery phase.
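The fairness advantage attributed to cwnd-utilization-based reduction can be illustrated against the flat-rate (window-halving) alternative. The exact GBRC reduction rule is not reproduced here; the proportional form below is an assumed sketch of the idea that lightly utilized flows are cut less than heavily utilized ones:

```python
def flat_rate_reduce(cwnd):
    """Conventional flat-rate response: halve cwnd on any congestion signal."""
    return max(cwnd // 2, 1)

def utilization_based_reduce(cwnd, bytes_in_flight):
    """Assumed GBRC-style sketch: the cut scales with cwnd utilization,
    so flows consuming more than their share back off harder, nudging
    competing flows toward a fair allocation."""
    utilization = bytes_in_flight / cwnd     # fraction of the window in use
    return max(int(cwnd * (1 - 0.5 * utilization)), 1)

# A fully utilized flow is halved; a half-utilized flow keeps 75%.
print(utilization_based_reduce(100, 100))   # 50
print(utilization_based_reduce(100, 50))    # 75
```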

5.3 Mobility analysis

Node mobility experiments were conducted to measure the performance of the FAIR+ congestion control algorithm at node speeds ranging from 5 m/s to a maximum of 20 m/s. The following observations were noted from the node mobility analysis. In a mobile wireless environment, frequent link breakage and route re-association between nodes, caused by fast node movement, result in packet drops. The FAIR+ algorithm uses ECN markings to identify non-congested packet losses and prevents needless transmission-rate reduction, which substantially improves its throughput and goodput performance. FAIR+ attains an average throughput of 3730 KB/s and 940 KB/s under low and high node mobility conditions, respectively (Fig. 12), i.e., throughput improvements of 18.76% and 39.89% against Westwood, 7.51% and 13.82% against OQS, and 15.54% and 26.7% against NRT.
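The loss-discrimination behaviour described above can be sketched as a sender-side rule: a loss that follows recent ECN marking is treated as congestive and triggers a window cut, while a loss with no ECN history is attributed to mobility-induced link breakage and only retransmitted. This is an assumed simplification of FAIR+'s actual recovery logic:

```python
def react_to_loss(ecn_seen_recently, cwnd, ssthresh):
    """Assumed sketch: cut the window only for congestion-correlated
    losses; treat ECN-free losses as mobility-induced link errors."""
    if ecn_seen_recently:
        ssthresh = max(cwnd // 2, 2)
        cwnd = ssthresh               # congestive loss: back off
    # Non-congested loss: retransmit the missing segment, keep the rate.
    return cwnd, ssthresh

print(react_to_loss(True, 40, 20))    # (20, 20)
print(react_to_loss(False, 40, 20))   # (40, 20)
```

Avoiding the needless cut in the second case is what preserves throughput under high mobility, where route breakages (not buffer overflows) dominate the loss pattern.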

Fig. 12 Node mobility versus metrics: a average throughput (KB/s), b average goodput (KB/s), c average message latency (ms), and d fairness (%)

FAIR+ attains average goodput rates of 3591 KB/s and 880 KB/s under low and high mobility conditions, i.e., improvements of 18.11% and 42.95% against Westwood, 8.63% and 16.93% against OQS, and 16.21% and 32.72% against NRT. Furthermore, FAIR+ attains a substantial reduction in average end-to-end message latency due to its queue-accumulation-assisted rate control, which keeps the TCP traffic within the network capacity. Under both mobility conditions, FAIR+ reduces the average latency by 38.17% and 29.12% against Westwood, 22.51% and 10.12% against OQS, and 31.72% and 18.25% against NRT. Under node mobility, the FAIR+ algorithm attains 95% flow fairness, against Westwood (89.4%), OQS (90%), and NRT (90%).
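The paper does not state which fairness metric yields the percentages above; Jain's fairness index is the common choice for per-flow throughput fairness and is shown here as an assumed illustration. It equals 1.0 for perfectly equal shares and decreases as the allocation skews:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 = equal share."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# Equal flows are perfectly fair; a skewed allocation scores lower.
print(jain_fairness([100, 100, 100, 100]))   # 1.0
print(round(jain_fairness([160, 80, 80, 80]), 3))
```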

6 Conclusion

The ECN-based FAIR+ approach proposed in this article implements the DQDM, GBRC, and AWR algorithms to overcome the limitations of flat-rate reduction and slow flow convergence in the multi-hop wireless environment. At the relay node, the DQDM algorithm initiates level 1 (incipient) and level 2 (severe) congestion notifications based on the queue accumulation level. At the sender side, the GBRC algorithm initiates rate reduction based on the cwnd utilization level, which considerably mitigates the flat-rate reduction limitation and allows fair sharing of network resources among competing TCP flows. In the congestion recovery phase, the AWR algorithm implements a faster cwnd increment pattern, resulting in a faster steady-state convergence rate. The performance of the FAIR+ algorithm is analytically validated using a duality-theory-based throughput stability model, which derives the best possible transmission rate from the window utility and queue-length information. Simulation analysis likewise shows that FAIR+ attains considerable improvements in average throughput, average goodput, and flow fairness, and a significant decrease in average end-to-end message latency, against the other schemes under different wireless scenarios.