1 Introduction

Communication networks are growing rapidly and becoming increasingly complex to manage. This is especially true of traditional network designs, protocol stacks and network systems, which struggle to provide satisfactory solutions to current networking demands. Software Defined Networking (SDN) is a promising approach to computer networking that separates the data plane from the control plane and couples them through a centralized controller, allowing the network operator to program packet forwarding. The architecture of SDN is shown in Fig. 1. Several researchers define SDN as the separation of the control plane from the data plane; such a network differs from traditional networks, which were more complex and difficult to manage because legacy networks relied on protocols that let nodes cooperate with other nodes to exchange data [1,2,3,4].

TCP acts as the de-facto transport protocol standard for applications running over advanced Internet of Things (IoT) based Software Defined Networking (SDN) networks. Using this advanced IoT-based network technology, healthcare centers have consolidated their networks around a centralized data center, which connects many healthcare centers deployed across geographic locations all over the world. Internet content caching for medical data has received much attention in the healthcare domain as a primary means of saving network resources and improving Quality of Service (QoS). The rapid growth of medical data in healthcare centers raises the challenge of delivering data efficiently to remote healthcare locations [5]. Over the last decade, researchers have continuously improved the performance of medical data delivery to remote healthcare centers. This research targets the performance of medical data delivery using a content caching based congestion control scheme for healthcare centers over advanced IoT-based SDN networks.

The target of IoT-based SDN networks is to increase network bandwidth to 1000 times that of 4G networks [6, 7], which suggests that advanced IoT-based SDN networks are well suited to healthcare centers deployed across many remote locations [8, 9]. However, given the huge volume of medical data against the overhead of healthcare networks, an obvious tension arises in advanced IoT-based SDN networks [10]. In Fig. 1, numerous advanced IoT-based devices ubiquitously connect to the SDN network, making the backhaul a bottleneck and a source of congestion. Thus there is a need to design a content caching based congestion control scheme for such healthcare SDN networks.

Fig. 1 SDN architecture [11]

A packet loss event indicates the occurrence of congestion in SDN networks, so content caching or congestion control schemes are used to remedy it. At each packet loss event, TCP Reno, the trademark congestion control scheme, and TCP Compound, the default congestion control scheme of the Microsoft Windows operating system, reduce the size of the Congestion Window (cwnd) by 50% of its previous value. TCP CUBIC, however, reduces its cwnd by only 20% of its current size. The growth and reduction parameters of cwnd are defined by the AIMD algorithm of the congestion control scheme [12]. AIMD consists of two parameters: additive increase, the cwnd growth parameter (\(\alpha \)), and multiplicative decrease, the cwnd reduction parameter (\(\beta \)). The general form of the AIMD algorithm for TCP Reno is described in Eq. 1, where \(\alpha =1\) is the cwnd growth parameter and \(\beta =0.5\) is the cwnd reduction parameter at each loss event. Figure 2 illustrates the behavior of TCP Reno under AIMD: at each packet loss event, it reduces the cwnd by 50% of its previous size. TCP Compound uses the same \(\alpha \) and \(\beta \) values as TCP Reno, while most newer congestion control schemes use modified values of \(\alpha \) and \(\beta \) as their cwnd growth and reduction parameters. TCP CUBIC modified the AIMD algorithm of TCP Reno with enhanced values of the \(\alpha \) and \(\beta \) parameters.

$$\begin{aligned} AIMD:\left\{ \begin{array}{lll} ACK &{} = &{} cwnd\leftarrow cwnd+\frac{\alpha }{cwnd} \\ Loss &{} = &{} cwnd\leftarrow (1-\beta )\times cwnd \end{array}\right\} \end{aligned}$$
(1)
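To make the AIMD update concrete, the following minimal Python sketch implements the two rules of Eq. 1; the function names are illustrative, and the parameter values in the comments are the ones quoted in this section.

def aimd_on_ack(cwnd: float, alpha: float) -> float:
    """Additive increase: grow cwnd by alpha per window of ACKs (Eq. 1)."""
    return cwnd + alpha / cwnd

def aimd_on_loss(cwnd: float, beta: float) -> float:
    """Multiplicative decrease: cut cwnd by a fraction beta at a loss event (Eq. 1)."""
    return (1.0 - beta) * cwnd

# TCP Reno / TCP Compound: alpha = 1.0, beta = 0.5 (50% reduction per loss)
# TCP CUBIC:               beta = 0.2 (20% reduction per loss)
print(aimd_on_loss(100.0, beta=0.5))  # 50.0, Reno/Compound after a loss
print(aimd_on_loss(100.0, beta=0.2))  # 80.0, CUBIC after a loss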
Fig. 2 Behavior of TCP Reno congestion control mechanism

In TCP CUBIC, the values of \(\alpha \) and \(\beta \) are 0.3 and 0.2 respectively, which is why TCP CUBIC reduces cwnd by 20% instead of the 50% used by TCP Reno and TCP Compound. Equation 2 gives the percentage reduction in cwnd size at each loss event for TCP Reno, TCP Compound and TCP CUBIC [13,14,15]. Because of this, after each packet loss event, flows configured with TCP CUBIC do not release a sufficient amount of link bandwidth to remedy the congestion. As a result, new incoming TCP flows cannot obtain a proper share of the available link bandwidth, leading to unfair sharing of the available link bandwidth and slow convergence of flows in the network. These problems arise frequently in advanced IoT-based SDN networks. The issues faced by congestion control schemes are measured using the protocol fairness and convergence time performance metrics [16,17,18,19,20]. Since the cwnd reduction percentage of a TCP CUBIC flow is lower than that of TCP Reno and TCP Compound flows, the average cwnd of a TCP CUBIC flow is also higher than that of TCP Reno and TCP Compound flows.

$$\begin{aligned} Loss{:}\left\{ \begin{array}{llllll} cwnd &{} = &{} (1-\beta ) \times cwnd &{} 50\%\,\,Reduction &{}TCP\,\, Reno &{} \beta =0.5 \\ cwnd &{} = &{} (1-\beta ) \times cwnd &{} 50\%\,\,Reduction &{}TCP\,\,\, Compound &{} \beta =0.5 \\ cwnd &{} = &{} (1-\beta ) \times cwnd &{} 20\%\,\,Reduction &{}TCP\,\,\, CUBIC &{} \beta =0.2 \end{array}\right\} \nonumber \\ \end{aligned}$$
(2)

The remainder of the paper is organized as follows: Sect. 2 highlights the research gap. Section 3 provides an extensive literature review of congestion control schemes. Section 4 explains the design, development and implementation of CCM-LE in TCP CUBIC. Section 5 presents the results and discussion, and Sect. 6 concludes the paper and outlines future work.

2 Research Gap

After each packet loss event, the percentage reduction of cwnd varies across congestion control schemes. The minimum cwnd reduction rate is \(12.5\%\), the default of Scalable TCP [21]; the maximum is \(50\%\), the default of TCP Reno. Many high-speed congestion control schemes are also configured with a \(50\%\) cwnd reduction parameter, such as TCP Compound, HighSpeed TCP [22] and Hamilton TCP [23]. The cwnd reduction rate after each packet loss event affects protocol behavior and the convergence speed of TCP flows. According to [19, 24], TCP CUBIC is still a congestion control scheme under development, so more evaluation studies of TCP CUBIC are needed. Their research notes that TCP CUBIC has a low cwnd decrease parameter (\(\beta \)) and should release more bandwidth for new incoming flows; this is why TCP CUBIC has an RTT fairness problem. Thus, more research is needed on the adaptive adjustment of the cwnd reduction parameter (\(\beta \)) of TCP CUBIC, i.e., the percentage reduction in cwnd size after each loss event. To overcome the unfair sharing of available link bandwidth and the slow convergence of TCP flows, a scheme is proposed that increases the cwnd reduction rate and is based on content caching based congestion control.

The above research gap motivates addressing the reduction percentage of the cwnd of TCP CUBIC at each packet loss event. The cwnd reduction percentage of TCP CUBIC flows must be increased at each packet loss event so that the available link bandwidth can be shared fairly and quickly among TCP flows. The aim of this research is to enhance the performance of the TCP CUBIC congestion control scheme for advanced IoT-based SDN networks. This aim is achieved by designing and developing a new content caching based congestion control scheme for loss events that increases the cwnd reduction percentage after each packet loss event, so that available link bandwidth can be shared quickly and fairly among TCP CUBIC flows.

3 Literature Review

The following is a history of congestion control schemes proposed to solve various congestion issues in networks. Driven by changes in network distance, link bandwidth and operating systems, older congestion control schemes have been repeatedly enhanced and upgraded. Legacy and newer congestion control schemes are discussed as follows:

TCP Tahoe [12] is the oldest of all TCP congestion control schemes. It added three congestion control modules: slow start, congestion avoidance and fast retransmit. A major contribution of TCP Tahoe is its enhanced RTT estimator. If the timer expires before an ACK is received, the source assumes that the segment is lost and retransmits it; this timer expiration is called a timeout. TCP Tahoe does not work efficiently when multiple packets are dropped. At each packet loss event, it resets cwnd to 1, i.e., \(cwnd=1\), as shown in Fig. 3. Because it uses the packet loss event as a congestion indication, it is known as a loss-based congestion control scheme.

Fig. 3 TCP Tahoe congestion window growth behavior [12]

TCP Reno was developed by Van Jacobson as an enhanced form of TCP Tahoe and is considered the trademark of TCP congestion control schemes. After each loss event the cwnd is set to half of its previous value, as shown in Fig. 4; that is, it decreases cwnd by \(50\%\), with a cwnd reduction parameter (\(\beta \)) of 0.5. If the source is still receiving ACKs and a number of duplicate ACKs arrive, TCP Reno enters the fast recovery phase and retransmits the lost packet, but unlike Tahoe it does not fall back into slow start [25]. The main problem with TCP Reno is that the fast retransmit scheme assumes only one segment is lost; if more than one segment is lost, ACK clocking is lost and timeouts occur. ACK starvation is another problem in TCP Reno, arising from the ambiguity of duplicate ACKs. Since it uses the packet loss event as the congestion indication in the network, it is also known as a loss-based congestion control scheme.

Fig. 4 TCP Reno congestion window growth behavior [26]

TCP Vegas [27] is a delay-based congestion control mechanism which uses variations in measured throughput, instead of packet loss events, as the indication of congestion. TCP Vegas calculates the difference \({\varDelta }\) between the expected throughput \(R_{E}=(cwnd/RTT_{min})\) and the actual throughput \(R_{A}=(cwnd/RTT)\), where \(RTT_{min}\) is the base RTT of the first segment and RTT is the sampled RTT. \(R_{E}\) and \(R_{A}\) are updated once per RTT. TCP Vegas adjusts cwnd according to the phase it is in and the value of \({\varDelta }\): it compares \({\varDelta }\) with \(\gamma \) in the slow start phase, and with \(\alpha \) and \(\beta \) in the congestion avoidance phase, to determine the cwnd adjustment. In TCP Vegas, the value of \(\beta \) is 0.5, hence the cwnd reduction rate is \(50\%\) at each packet loss event. The gap \({\varDelta }\) is estimated once per RTT as given in Eq. 3.

$$\begin{aligned} {\varDelta }=\left( R_{E} - R_{A} \right) \times RTT_{min} \end{aligned}$$
(3)
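A minimal Python sketch of the Vegas gap estimate of Eq. 3, using the quantities defined above (the names are illustrative); the result can be read as roughly the number of segments the flow has queued in the network.

def vegas_delta(cwnd: float, rtt_min: float, rtt: float) -> float:
    """Gap between expected and actual throughput, scaled by the base RTT (Eq. 3)."""
    expected = cwnd / rtt_min  # R_E
    actual = cwnd / rtt        # R_A
    return (expected - actual) * rtt_min

# cwnd = 40 segments, base RTT = 100 ms, sampled RTT = 125 ms
print(vegas_delta(40.0, 0.100, 0.125))  # 8.0 segments of backlog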

TCP Newreno [28] is the same as TCP Reno but behaves more intelligently during fast recovery: the wait for the retransmit timer is eliminated when multiple packets are lost. TCP Newreno is based on the idea of partial ACKs. When multiple packets are lost, the ACK for a retransmitted packet acknowledges some, but not all, of the packets sent before fast retransmit. TCP Newreno retransmits one packet per RTT until all lost packets are retransmitted, and fast recovery only begins when three duplicate ACKs are received. It also uses the packet loss event as the congestion indication and is known as a loss-based congestion control mechanism. Scalable TCP [21] is an enhanced form of TCP Newreno. Scalable TCP increases cwnd by 1% upon receipt of each ACK and, at a loss event, decreases it by \(12.5\%\) of the current cwnd size. The values of the congestion window increase parameter \(\alpha \) and decrease parameter \(\beta \) are therefore 0.01 and 0.125 respectively.

Compound TCP [14], developed by Microsoft for the Windows Vista operating system and designed for long distance, high bandwidth networks, incorporates a scalable delay-based component in the style of TCP Vegas into TCP Reno. It is therefore called a loss- and delay-based congestion control mechanism. Compound TCP maintains two windows concurrently: a regular cwnd based on TCP Reno and a delay window (dwnd) based on TCP Vegas. The sending rate of Compound TCP is determined by win, the sum of these two windows, as described in Eq. 4. The cwnd reduction parameter \(\beta \) is 0.5, the same as in TCP Reno.

$$\begin{aligned} win=\left\{ \begin{array}{ll} cwnd+0 &{} Slow\, \, Start\\ cwnd+dwnd &{} Congestion\, \, Avoidance \end{array}\right\} \end{aligned}$$
(4)
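A short sketch of the window composition of Eq. 4; the maintenance of dwnd itself (the Vegas-style delay component) is omitted here.

def compound_window(cwnd: float, dwnd: float, in_slow_start: bool) -> float:
    """Compound TCP sending window (Eq. 4): dwnd contributes only in
    congestion avoidance, not during slow start."""
    return cwnd if in_slow_start else cwnd + dwnd

print(compound_window(50.0, 30.0, in_slow_start=True))   # 50.0
print(compound_window(50.0, 30.0, in_slow_start=False))  # 80.0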

TCP CUBIC [15] is an enhanced form of BIC TCP [29] and TCP Reno, and is a loss-based congestion control mechanism. TCP CUBIC uses the packet-train concept to estimate the available link bandwidth, and adopts a new slow start module, HyStart, which uses a safe exit point during the slow start phase to switch the connection from slow start to congestion avoidance. TCP CUBIC uses a cubic growth function of the time elapsed since the last congestion event, exploiting both the concave and convex regions of the cubic function for cwnd growth, as shown in Fig. 5. After a cwnd reduction due to a loss event, TCP CUBIC registers \(W_{max}\) as the cwnd size at which the loss event occurred. It then decreases cwnd by a constant decrease parameter \(\beta \) equal to 0.2, enters the congestion avoidance phase, and begins to increase cwnd along the concave region of the cubic function until cwnd reaches \(W_{max}\). The cwnd grows very fast just after the reduction, but slows as it approaches \(W_{max}\); around \(W_{max}\), the growth becomes almost zero.

Fig. 5 Typical behavior of CUBIC congestion window curve [30]

After that, TCP CUBIC increases cwnd along the convex region of the cubic function. In steady state, cwnd stays close to \(W_{max}\), achieving the highest network utilization. Equation 5 gives the growth function of TCP CUBIC, where C is the TCP CUBIC scaling parameter, t is the time elapsed since the last window reduction, and K is the time the function requires to increase cwnd from W to \(W_{max}\) (when no further loss events occur). K is calculated using the function given in Eq. 6. TCP CUBIC sets \(W(t+RTT)\) as the candidate target value of cwnd.

$$\begin{aligned} W(t)= & {} C(t-K)^{3}+W_{max} \end{aligned}$$
(5)
$$\begin{aligned} K= & {} \root 3 \of {\frac{W_{max}\, \, \beta }{C}} \end{aligned}$$
(6)

Based on the current cwnd size, TCP CUBIC runs in three different modes. Let cwnd be the current window size: if cwnd is less than the window that TCP Reno would reach at time t after the last loss event, TCP CUBIC is in TCPMode; otherwise, if cwnd is less than \(W_{max}\), it is in ConcaveMode, and if cwnd is greater than \(W_{max}\), it is in ConvexMode, as described in Eq. 7.

$$\begin{aligned} TCP \,\,CUBIC\, \, Mode = \left\{ \begin{array}{ll} TCP \, Mode &{} if\, \, cwnd< cwnd_{TCP Reno} \\ Concave &{} if \, \, cwnd<W_{max}\\ Convex &{} if\, \, cwnd>W_{max} \end{array}\right\} \end{aligned}$$
(7)
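The growth function, the recovery period K and the mode selection can be summarized in the following sketch; a minimal illustration assuming C = 0.4, the scaling value also used later in Eqs. 19 and 20.

def cubic_window(t: float, w_max: float, beta: float = 0.2, c: float = 0.4) -> float:
    """cwnd of TCP CUBIC at time t since the last loss event (Eqs. 5 and 6)."""
    k = (w_max * beta / c) ** (1.0 / 3.0)  # recovery period K (Eq. 6)
    return c * (t - k) ** 3 + w_max

def cubic_mode(cwnd: float, cwnd_reno: float, w_max: float) -> str:
    """Operating mode of TCP CUBIC for the current cwnd (Eq. 7)."""
    if cwnd < cwnd_reno:
        return "TCPMode"
    return "Concave" if cwnd < w_max else "Convex"

# After a loss at W_max = 100 the window restarts at 80, grows fast,
# flattens near W_max, then turns convex:
for t in (0, 2, 4, 6):
    print(t, round(cubic_window(t, w_max=100.0), 1))  # 80.0, 98.1, 100.0, 105.0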

The cwnd growth of this protocol is shown in Fig. 6: cwnd increases exponentially during the slow start phase, and at each packet loss event TCP CUBIC reduces its cwnd by only \(20\%\).

Fig. 6 TCP CUBIC congestion window growth behavior

Marfia et al. [31] proposed TCP Libra, an enhanced form of TCP Newreno. They used NS-2 to evaluate the performance of TCP Libra and Jain's fairness formula to show the intra- and inter-protocol fairness of TCP Libra flows. They also compared TCP Libra with TCP SACK [32], TCP Vegas, TCP CUBIC and TCP Hybla [33] in terms of fairness, goodput and fast convergence.

Xu et al. [34] combined the loss-based congestion estimation component of BIC TCP [29] with the delay-based congestion estimation component of EEFAST [35] to propose a Hybrid Congestion Control mechanism (HCC TCP) for high-speed networks. They used NS-2 with a dumbbell topology to compare HCC TCP against high-speed TCPs in terms of efficiency, fairness and TCP friendliness, and found HCC TCP to be better than TCP Illinois [36] and FAST TCP [37]. Fu et al. [38] proposed the BIPR congestion control mechanism for satellite networks, using the probing method of BIC TCP, and used NS-2 with a dumbbell topology to evaluate TCP BIPR in terms of throughput, link utilization and Jain's fairness.

To tackle the shortcomings of Scalable TCP, Wang et al. [39] proposed DSTCP, an improved form of Scalable TCP that dynamically adjusts the cwnd size according to the link's congestion level, improving the TCP friendliness and stability of Scalable TCP. They compared the performance of DSTCP with TCP Reno, YeAH TCP [40], TCP CUBIC and Scalable TCP, using NS-2 with the drop-tail algorithm and throughput, bandwidth utilization, TCP friendliness and stability as performance metrics. Elmannai et al. [41] proposed TCP University of Bridgeport (TCP UB) by integrating features of TCP Westwood and TCP Vegas for Mobile Ad Hoc Networks (MANETs). Using NS-2, they compared TCP UB with TCP Westwood and TCP Vegas in terms of goodput and found that TCP UB outperformed both.

Hagag and El-Sayed [42] proposed a new congestion control mechanism, TCP WestwoodNew, which improves the performance of TCP Westwood by enhancing its congestion avoidance feature. To evaluate it, they used NS-2 and compared the throughput and packet loss rate of TCP Reno, TCP Newreno, TCP Tahoe, TCP Westwood, TCP SACK and TCP Vegas against TCP WestwoodNew. Lv and Zhang [43] proposed TCP PN, based on features of BIC TCP and TCP Vegas, to optimize performance in private networks. Froldi and Fonseca [44] proposed a Datagram Congestion Control Protocol (DCCP) for high-speed networks and used NS-2 with a dumbbell topology to evaluate the new protocol with respect to Jain's fairness index.

Because of the popularity of Linux-based HTTP servers, TCP CUBIC is close to being the new de facto standard for Internet congestion control [18]. Thus, in 2013, many researchers worked on improving the performance of TCP CUBIC. Wang et al. [18] introduced, for the first time, delay-based information as a congestion indication into the TCP CUBIC congestion control mechanism. Using NS-2 with a dumbbell topology, they compared the performance of CUBIC-FIT, TCP CUBIC, TCP Reno, TCP FIT and Compound TCP in terms of throughput and protocol fairness (Jain's fairness index).

Kozu et al. [17] improved the protocol fairness of TCP CUBIC by tuning its value of K, the time period required for TCP CUBIC to increase its cwnd from \(W_{last\_max}\) to \(W_{max}\). They used an emulator on the FreeBSD operating system and repeated the experiments while tuning the value of K. They also proposed a method to improve the protocol fairness of TCP CUBIC by adjusting K according to the RTT, as described in Eq. 8, where x(RTT) is a function of RTT that monotonically decreases as RTT increases.

$$\begin{aligned} K=\root 3 \of {\frac{W_{max}\, \, \beta }{C}} \times x(RTT) \end{aligned}$$
(8)

Today most mobile devices run the Android operating system, where TCP CUBIC is also the default congestion control mechanism [45]. For mobile devices, Gwak et al. [45] proposed WiCUBIC, an enhanced form of TCP CUBIC that can distinguish losses induced by the wireless channel from losses induced by network congestion. WiCUBIC slightly improved the throughput performance of TCP CUBIC in wireless environments. At each packet loss event, WiCUBIC decreases cwnd by \(20\%\) of its current size (as in TCP CUBIC).

Goyzueta and Chen [19] worked on the concave region of the cwnd curve of TCP CUBIC. Using NS-2, they analyzed the fast convergence mechanism of TCP CUBIC. According to them, TCP CUBIC is still a protocol under development and more evaluation studies of it are needed. According to [19], TCP CUBIC has a low cwnd decrease parameter (\(\beta \)) and should release more bandwidth for new incoming flows.

To address the shortcomings of HighSpeed TCP, Qureshi et al. [24] proposed Quick TCP (Q-TCP) for high-speed networks. Q-TCP integrates features of HighSpeed TCP and TCP CUBIC, optimizing the slow start algorithm of HighSpeed TCP and the Additive Increase Multiplicative Decrease (AIMD) algorithm of TCP CUBIC. Q-TCP was evaluated using a dumbbell topology in NS-2, and its performance was compared with TCP Newreno, HighSpeed TCP, TCP CUBIC, BIC TCP, Scalable TCP, Hamilton TCP and FAST TCP with respect to throughput and inter-/intra-protocol fairness between flows. According to [24], TCP CUBIC has an RTT fairness problem, so more research is needed on the adaptive adjustment of its cwnd reduction parameter (\(\beta \)), also known as the percentage reduction in cwnd size after each loss event.

For ultra high-speed networks, Wang et al. [46] proposed TCP FAST-FIT to improve the link utilization and TCP friendliness of FAST TCP [37]. They used a hardware-based emulator for the performance analysis, with network utilization and TCP friendliness as performance metrics.

3.1 Analysis of TCP Congestion Control Mechanisms

In Sect. 3, a systematic literature review of various TCP congestion control mechanisms has been conducted. The analysis shows that most new or enhanced congestion control mechanisms are based on the features of TCP Reno, TCP Vegas, or both; a summary of the analysis is given in Table 1. Mechanisms based on the characteristics of TCP Reno use the packet loss event as the indication of congestion occurrence in the network and are known as loss-based congestion control mechanisms. Mechanisms based on the features of TCP Vegas use delay (variation in throughput) as the indication of imminent congestion and are known as delay-based congestion control mechanisms. Mechanisms that use features of both TCP Reno and TCP Vegas, referred to as hybrid congestion control mechanisms, use both packet loss events and delay variations as indications of congestion. Two further types, rate-based and ECN-based congestion control mechanisms, are beyond the scope of this research.

Table 1 Analysis of TCP congestion control mechanisms
Table 2 Strength and weakness of TCP congestion control mechanisms

A new or enhanced congestion control mechanism is designed at two levels: flow level and packet level. The flow-level design aims for high utilization, low queuing delay, low packet loss rate, fairness and stability, while the packet-level design implements these flow-level goals within the constraints imposed by end-to-end control. The congestion avoidance mechanisms of TCP Reno and its variants are based on the packet-level AIMD model, which induces flow-level properties such as throughput, fairness and stability. Depending on the congestion detection technique, there are three major approaches to congestion control; Table 2 summarizes these approaches together with the congestion control methodologies they use. The approaches are described as follows:

(i) Loss-Based Approach

Loss-based congestion control mechanisms use the packet loss event as the indication of congestion in the network. All mechanisms in this category modify the cwnd growth and reduction parameters of TCP Reno; in other words, they enhance its Additive Increase Multiplicative Decrease (AIMD) algorithm. Mechanisms in this category include TCP Tahoe, TCP Reno, TCP Newreno, Scalable TCP, HighSpeed TCP, Hamilton TCP, BIC TCP and TCP CUBIC.

(ii) Delay-Based Approach

Delay-based congestion control mechanisms use variations in throughput as the indication of congestion occurrence in the network. Their major weakness is that they are not competitive with loss-based approaches, and this weakness is difficult for delay-based mechanisms to remedy on their own [14]. In delay-based mechanisms, the delay increases as the queue begins to fill, so they are least aggressive when the packet sending rate is near the link capacity: as the link approaches saturation, delay-based mechanisms do not increase cwnd aggressively. Examples of delay-based congestion control mechanisms are TCP Vegas and FAST TCP.

(iii) Loss- and Delay-Based Approach

Loss- and delay-based congestion control mechanisms use both packet loss events and throughput variations as indications of congestion. Mechanisms of this approach are also known as hybrid congestion control mechanisms; Africa TCP, Compound TCP, YeAH TCP and Fusion TCP are popular examples.

In this research, TCP CUBIC, an end-to-end loss-based congestion control mechanism, is enhanced for long distance, high bandwidth networks. The enhanced TCP CUBIC modifies the cwnd reduction percentage of TCP CUBIC, and this enhancement improves protocol fairness and the convergence time of flows.

4 Congestion Control Module for Loss Event

In a network, when multiple flows of a congestion control mechanism transmit data over the same link, they do not share the available link bandwidth fairly. Initial flows occupy all the available link bandwidth very quickly and, at a packet loss event, do not release enough bandwidth. As a result, congestion occurs while more flows keep entering the network, causing a bandwidth shortage. This problem arises in TCP CUBIC because, at each packet loss event, TCP CUBIC reduces its cwnd by only 20% of its current size; TCP CUBIC flows therefore do not free up a sufficient amount of bandwidth for new incoming flows or for remedying the congestion. Consequently, congestion is prolonged in the network and network performance decreases.

To solve this bandwidth shortage, the Congestion Control Module for Loss Event (CCM-LE) is proposed, based on the cwnd reduction mechanism of TCP CUBIC. The purpose of CCM-LE is to reduce cwnd by a sufficient amount at each loss event, increasing the available link bandwidth so that all new competing flows on the link can share the bandwidth fairly and quickly.

4.1 Design of CCM-LE

CCM-LE is based on the Additive Increase Multiplicative Decrease (AIMD) algorithm and the cwnd reduction mechanism of TCP CUBIC. Under AIMD, if there is no congestion in the network, the sender increases its cwnd additively; in the presence of congestion, the sender drops its cwnd by 50% of its current size. The AIMD algorithm for TCP Reno, with default values \(\alpha =1\) and \(\beta =0.5\), is described in Eq. 9; TCP Reno and TCP Compound behave the same after each packet loss event. This cwnd reduction after each packet loss event increases the available link bandwidth on the network path. The AIMD algorithm has two parts: an additive increase in cwnd and a multiplicative decrease in cwnd. The first part relates to the convergence time of flows (sharing the available link bandwidth within a shorter time), and the second part is responsible for protocol fairness (a fair share of the available link bandwidth) among competing flows. The general form of AIMD is given in Eq. 10, where \(\alpha \) represents the additive increase (cwnd growth) parameter and \(\beta \) the multiplicative decrease (cwnd reduction) parameter applied after each packet loss event.

$$\begin{aligned}&AIMD_{For \,\, Standard \,\, TCP}:\left\{ \begin{array}{lll} ACK &{} : &{} cwnd\leftarrow cwnd+\frac{1}{cwnd} \\ Loss &{} : &{} cwnd\leftarrow (1-0.5)\times cwnd \end{array}\right\} \end{aligned}$$
(9)
$$\begin{aligned}&\quad AIMD:\left\{ \begin{array}{llll} At \,\, each \,\, ACK &{} : &{} cwnd_{new}\leftarrow cwnd+\frac{\alpha }{cwnd} &{} Additive Increase \\ At \,\, each \,\, Loss \,\, Event &{} : &{} cwnd_{new}\leftarrow (1-\beta )\times cwnd &{} Multiplicative Decrease \end{array}\right\} \qquad \end{aligned}$$
(10)
Fig. 7 Design concept of CCM-LE

CCM-LE uses the multiplicative-decrease concept from the second part of the AIMD algorithm. Figure 7 shows the relation between AIMD, the default version of TCP CUBIC, referred to as TCP CUBIC-Default, and the enhanced TCP CUBIC implementing CCM-LE, referred to as TCP CUBIC-(CCM-LE). At each packet loss event, TCP CUBIC-Default decreases cwnd by the multiplicative decrease factor \((1-\beta )\), i.e., a 20% reduction, whereas TCP CUBIC configured with CCM-LE reduces it by a factor \((1-\mu \times \beta )\), i.e., a 30% reduction. The AIMD algorithms for TCP CUBIC-Default and TCP CUBIC-(CCM-LE) are given in Eqs. 11 and 12 respectively. In Eq. 11, \(\alpha =0.3\) and \(\beta =0.2\); in Eq. 12, \(\alpha =0.5\) and \(\beta =0.3\).

$$\begin{aligned}&AIMD_{TCP \,\, CUBIC-(Default)}:\left\{ \begin{array}{lll} ACK &{} : &{} cwnd\leftarrow cwnd+\frac{0.3}{cwnd} \\ Loss &{} : &{} cwnd\leftarrow (1-0.2)\times cwnd \end{array}\right\} \end{aligned}$$
(11)
$$\begin{aligned}&\quad AIMD_{TCP \,\, CUBIC-(CCM-LE)}:\left\{ \begin{array}{lll} ACK &{} : &{} cwnd\leftarrow cwnd+\frac{0.5}{cwnd} \\ Loss &{} : &{} cwnd\leftarrow (1-0.3)\times cwnd \end{array}\right\} \qquad \quad \end{aligned}$$
(12)
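The two loss-event rules of Eqs. 11 and 12 differ only in the scaling of the decrease parameter; a minimal sketch of this contrast, with mu = 1.5 as introduced by CCM-LE:

BETA_CUBIC = 0.2  # default multiplicative decrease parameter of TCP CUBIC
MU = 1.5          # CCM-LE scaling factor for the decrease parameter

def loss_default(cwnd: float) -> float:
    """TCP CUBIC-Default at a loss event (Eq. 11): 20% reduction."""
    return (1.0 - BETA_CUBIC) * cwnd

def loss_ccm_le(cwnd: float) -> float:
    """TCP CUBIC-(CCM-LE) at a loss event (Eq. 12): 30% reduction."""
    return (1.0 - MU * BETA_CUBIC) * cwnd

print(round(loss_default(100.0), 1))  # 80.0, releases 20% of the window
print(round(loss_ccm_le(100.0), 1))   # 70.0, releases 30%, freeing more bandwidth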

The purpose of CCM-LE is to decrease the cwnd after a loss event such that protocol fairness is maximized and the convergence time of flows is minimized, mitigating the unfair share of available link bandwidth among flows. This is achieved with an enhanced decrease rule for cwnd after each loss event: CCM-LE increases the multiplicative decrease parameter \(\beta \) used in TCP CUBIC, so that after each congestion occurrence or packet loss event, current flows release more bandwidth for new incoming flows. The default decrease parameter (\(\beta \)) of TCP CUBIC reduces the cwnd of a flow by only \(20\%\) per packet loss event, which is a very low percentage; new flows therefore cannot obtain a proper share of the available bandwidth, causing unfairness, long convergence times and congestion in the network. CCM-LE scales the multiplicative decrease parameter (\(\beta \)) by a new variable \(\mu \), whose experimentally and statistically determined value is 1.5, so that after each packet loss event flows reduce their cwnd by \(30\%\), releasing more bandwidth for other incoming flows. This change in the decrease parameter assures a fair bandwidth distribution and shorter convergence times among competing flows, and CCM-LE thereby improves overall protocol fairness and convergence time.

Fig. 8 Dynamics of congestion window growth

CCM-LE uses the cwnd dynamics of TCP CUBIC, which relies only on the packet loss event as a congestion indication, since it is a loss-based congestion control mechanism. Figure 8 shows the cwnd growth dynamics of CCM-LE, which registers \(W_{max}\) as the cwnd size at which the loss event occurred, with \(W_{last\_max}\) being the previous maximum cwnd size. Figure 8 depicts the previous and current maximum cwnd sizes graphically, along with the instant of the packet loss event and the time period K. At each loss event, CCM-LE decreases cwnd by a constant decrease factor \((1-\mu \times \beta )\) and then enters the congestion avoidance phase. The pseudo code of CCM-LE at each packet loss event is described in Algorithm 1. As cwnd gets close to \(W_{max}\), its growth slows down, and near \(W_{max}\) the increment becomes almost zero, as shown in Fig. 8. Equation 13 gives the growth function of CCM-LE, where C is the original TCP CUBIC parameter, t is the time elapsed since the last cwnd reduction, and \(\bar{K}\) is the time the function requires to increase cwnd from \(W_{last\_max}\) to \(W_{max}\) (when no further loss events occur). CCM-LE increases this time period. \(\bar{K}\) for CCM-LE is calculated using Eq. 14, the original value of K for TCP CUBIC is given in Eq. 15, and the difference between the two time periods (\(\bar{K}_{CCM-LE}\) and \(K_{CUBIC}\)) is denoted by \({\varDelta }\).

Algorithm 1 Pseudo code of CCM-LE at each packet loss event
$$\begin{aligned} W_{(t)}= & {} C(t-\bar{K})^{3}+W_{max} \end{aligned}$$
(13)
$$\begin{aligned}&\bar{K}_{TCP \,\, CUBIC-(CCM-LE)}=\root 3 \of {\frac{W_{max} \times \mu \times \beta }{C}} \end{aligned}$$
(14)
$$\begin{aligned}&K_{TCP \,\, CUBIC-(Default)}=\root 3 \of {\frac{W_{max} \times \beta }{C}} \end{aligned}$$
(15)
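A hedged sketch of the per-loss bookkeeping that Algorithm 1 and Eqs. 13-15 describe; the state layout and control flow below are assumptions reconstructed from the text, not the exact published code.

from dataclasses import dataclass

BETA, MU, C = 0.2, 1.5, 0.4  # CCM-LE parameters used in this section

@dataclass
class CcmLeState:
    cwnd: float
    w_max: float = 0.0       # cwnd at the most recent loss event
    w_last_max: float = 0.0  # cwnd at the previous loss event

def on_loss_event(s: CcmLeState) -> float:
    """Register W_max, apply the (1 - mu*beta) cut, and return K_bar (Eq. 14)."""
    s.w_last_max = s.w_max
    s.w_max = s.cwnd                      # remember where the loss occurred
    s.cwnd = (1.0 - MU * BETA) * s.cwnd   # 30% multiplicative decrease
    return (s.w_max * MU * BETA / C) ** (1.0 / 3.0)  # recovery period K_bar

s = CcmLeState(cwnd=100.0)
print(round(on_loss_event(s), 2), round(s.cwnd, 1))  # 4.22 70.0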
Table 3 Variables used for the calculation of the CCM-LE difference (\({\varDelta }\))

Different variables are used in the calculation of \({\varDelta }\); they are summarized in Table 3. W represents the cwnd size right before the loss event occurred, \(\mu \times \beta \) is the multiplicative decrease parameter of cwnd used by CCM-LE, iT is the ith round trip time of the flow, and C represents the scaling factor of the original TCP CUBIC. \(\bar{K}\) is the time period required to increase cwnd from W to \(W_{max}\). The packet loss probability of a given network is denoted by p, and q denotes the expected number of packets delivered per loss event; Eq. 16 gives the relation between p and q. Based on TCP CUBIC, in the ith RTT after a loss event, the cwnd size is \(C(iT-\bar{K})^3+W_{max}\). The total number of packets sent in a loss epoch (the time interval between two consecutive loss events) is given in Eq. 17.

$$\begin{aligned} q= & {} \frac{1}{p} \end{aligned}$$
(16)
$$\begin{aligned} q= & {} \frac{1}{p}=\sum _{i=0}^{i=n}(C(iT-\bar{K})^{3}+W_{max}) \end{aligned}$$
(17)

Substituting the value of \(\bar{K}\) from Eq. 14 into Eq. 17 gives:

$$\begin{aligned} \frac{1}{p}=\sum _{i=0}^{i=n}\left( C\left( iT-\root 3 \of {\frac{W_{max} \times \mu \times \beta }{C}}\right) ^{3}+W_{max}\right) \end{aligned}$$
(18)

Letting \(W_{max}=w\) and substituting the values of \(\mu \), \(\beta \) and C into Eqs. 14 and 15, the time intervals (required for cwnd to grow from W to \(W_{max}\)) for CCM-LE and TCP CUBIC are given by Eqs. 19 and 20:

$$\begin{aligned}&\bar{K}_{TCP \,\,CUBIC-(CCM\_LE)}=\root 3 \of {\frac{w \times 1.5 \times 0.2}{0.4}} \end{aligned}$$
(19)
$$\begin{aligned}&\quad K_{TCP \,\,CUBIC-(Default)}=\root 3 \of {\frac{w \times 0.2}{0.4}} \end{aligned}$$
(20)

Subtracting Eq. 20 from Eq. 19, Eq. 21 gives the difference between these two time intervals.

$$\begin{aligned} (\bar{K}-K)=\root 3 \of {0.75\times w}-\root 3 \of {0.5\times w}\approx 0.115\times \root 3 \of {w} \end{aligned}$$
(21)

Since \((\bar{K}-K)\) denotes the value of \({\varDelta }\), Eq. 22 gives the required result.

$$\begin{aligned} {\varDelta }\approx 0.115\times \root 3 \of {w} \end{aligned}$$
(22)
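A quick numeric check of Eqs. 19-22 for an illustrative window size of w = 1000 segments (a value chosen here for illustration, not taken from the paper's experiments):

w = 1000.0  # W_max right before the loss event (illustrative)
k_bar = (w * 1.5 * 0.2 / 0.4) ** (1.0 / 3.0)  # Eq. 19: cube root of 0.75 w
k = (w * 0.2 / 0.4) ** (1.0 / 3.0)            # Eq. 20: cube root of 0.50 w
delta = k_bar - k                             # Eq. 22
print(round(k_bar, 3), round(k, 3), round(delta, 3))  # 9.086 7.937 1.149

The computed difference of 1.149 matches \({\varDelta }\approx 0.115\times \root 3 \of {w}=1.15\) for w = 1000.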

Due to this difference \({\varDelta }\), TCP CUBIC configured with CCM-LE as its cwnd reduction mechanism improves its protocol fairness and convergence time. To validate the results, an experimental setup is designed to evaluate TCP CUBIC with its default reduction mechanism and TCP CUBIC with CCM-LE in short RTT and long RTT networks. In these experiments, CCM-LE is evaluated in terms of intra-protocol fairness, inter-protocol fairness and the convergence time of competing flows. The results show that the CCM-LE cwnd decrease mechanism minimizes convergence time and maximizes protocol fairness, providing higher protocol fairness and better convergence time results than TCP CUBIC and state-of-the-art congestion control mechanisms (TCP Compound, BIC TCP, HighSpeed TCP and TCP Reno). The findings of this contribution are as follows:

(i) At each packet loss event, TCP CUBIC flows configured with the default cwnd reduction mechanism decrease the cwnd by 20% of its current size, which leaves less of the available link bandwidth usable by new incoming flows in the network. When TCP CUBIC is configured with CCM-LE as its cwnd reduction mechanism, the cwnd reduction percentage at each packet loss event increases, making more of the available link bandwidth usable by new incoming flows.

(ii) At each packet loss event, the default version of TCP CUBIC decreases the cwnd by 20% of its current size, so flows need less time to grow the cwnd from \(W_{last\_max}\) to \(W_{max}\), as shown in Fig. 8. TCP CUBIC configured with CCM-LE increases the cwnd reduction percentage at each packet loss event, which in turn increases the time required for a flow to grow the cwnd from \(W_{last\_max}\) to \(W_{max}\) (again see Fig. 8).

(iii) Due to the enhancement of the congestion window reduction mechanism of TCP CUBIC, the performance of flows in terms of protocol fairness and convergence time is improved.

5 Results and Discussion

This section presents the results of the Congestion Control Module for Loss Event (CCM-LE) to verify the performance of the module. Results are reported in terms of protocol fairness and convergence time, comparing CCM-LE against the default Congestion Window (cwnd) reduction mechanism of TCP CUBIC. For this purpose, CCM-LE is implemented in TCP CUBIC in place of its default cwnd reduction mechanism: TCP CUBIC with its default reduction mechanism is referred to as TCP CUBIC-Default, and TCP CUBIC configured with CCM-LE as TCP CUBIC-(CCM-LE). The results are also compared with state-of-the-art congestion control mechanisms. Protocol fairness and convergence time of TCP CUBIC-Default, TCP CUBIC-(CCM-LE), BIC TCP, TCP Compound, TCP Reno and HighSpeed TCP are presented graphically and explained in detail as follows:

5.1 Protocol Fairness Comparison

A fair distribution of the available link bandwidth among all competing flows of a congestion control mechanism is of significant importance for utilizing the total available link bandwidth of the network fairly and quickly. Figure 9 shows a comprehensive protocol fairness comparison of TCP CUBIC-Default and TCP CUBIC-(CCM-LE) with respect to buffer queue size, link bandwidth, and long and short RTT networks. The buffer queue size, calculated as a percentage of the Bandwidth Delay Product (BDP), affects protocol fairness; the link bandwidth in Mbps and the RTT configuration of flows in milliseconds (ms) can also change the fairness behavior of congestion control mechanisms. Figure 9a shows that for any percentage of the BDP value, the fairness of TCP CUBIC-(CCM-LE) is higher than that of TCP CUBIC-Default; TCP CUBIC-(CCM-LE) thus shows higher protocol fairness than TCP CUBIC for any buffer queue size, and CCM-LE flows share the available link bandwidth more fairly with each other than TCP CUBIC-Default flows. Figure 9b shows that for any link bandwidth configuration (100, 200, 300, 400 and 500 Mbps), CCM-LE flows have higher protocol fairness than TCP CUBIC-Default flows.

Fig. 9 Protocol fairness comparison of TCP CUBIC-Default and TCP CUBIC-(CCM-LE). a Buffer queue wise protocol fairness. b Link bandwidth wise protocol fairness. c Long RTT wise protocol fairness. d Short RTT wise protocol fairness

Figure 9c shows the protocol fairness of TCP CUBIC-Default and TCP CUBIC-(CCM-LE) when the competing flows are configured in a long RTT network scenario, i.e., long distance, high bandwidth networks. For any RTT value (50, 70, 90, 110, 130, 150, 170 and 190 ms), the competing flows of TCP CUBIC-(CCM-LE) have higher protocol fairness than those of TCP CUBIC-Default; CCM-LE flows thus share the available link bandwidth more fairly with each other in long distance, high bandwidth networks. Figure 9d shows the protocol fairness in a short RTT network scenario, i.e., short distance, high bandwidth networks; here too, the protocol fairness of CCM-LE is higher than that of TCP CUBIC-Default. In conclusion, flows configured with CCM-LE release more bandwidth (reduce a larger percentage of the cwnd) at each packet loss event, so that incoming flows can take a sufficient share of the available link bandwidth for communication. As a result, the available link bandwidth is distributed fairly among the competing flows when they are configured with CCM-LE.

Fig. 10 Protocol fairness comparison of TCP CUBIC-(CCM-LE) with state-of-the-art congestion control mechanisms at 100 Mbps link bandwidth. a Inter-protocol fairness (Short RTT). b Intra-protocol fairness (Short RTT). c Inter-protocol fairness (Long RTT). d Intra-protocol fairness (Long RTT)

Fig. 11 Protocol fairness comparison of TCP CUBIC-(CCM-LE) with state-of-the-art congestion control mechanisms at 500 Mbps link bandwidth. a Inter-protocol fairness (Short RTT). b Intra-protocol fairness (Short RTT). c Inter-protocol fairness (Long RTT). d Intra-protocol fairness (Long RTT)

Figures 10 and 11 show the comparison of CCM-LE with state-of-the-art congestion control mechanisms at 100 and 500 Mbps link bandwidth respectively. The results show that TCP CUBIC achieves higher inter- and intra-protocol fairness when configured with CCM-LE as its cwnd reduction mechanism instead of the default one. All six congestion control mechanisms show very high inter-protocol fairness in short RTT networks. Overall, TCP CUBIC improves its performance by using CCM-LE.

Figure 12 shows the mean protocol fairness comparison of CCM-LE with state-of-the-art congestion control mechanisms at 100, 200, 300, 400 and 500 Mbps link bandwidth. The results show that, with the CCM-LE cwnd reduction mechanism, TCP CUBIC improves its inter- and intra-protocol fairness in both short and long RTT networks; flows of TCP CUBIC-(CCM-LE) thus share the available link bandwidth more fairly with each other than flows of TCP CUBIC-Default. Overall, TCP CUBIC-(CCM-LE) improves protocol fairness by 13% compared with TCP CUBIC-Default.

Fig. 12 Mean protocol fairness comparison of TCP CUBIC-(CCM-LE) with state-of-the-art congestion control mechanisms. a Mean protocol fairness (Short RTT). b Mean protocol fairness (Long RTT)

Fig. 13 Stability comparison of TCP CUBIC-Default and TCP CUBIC-(CCM-LE) with respect to protocol fairness. a Buffer queue wise stability. b Link bandwidth wise stability. c Long RTT wise stability. d Short RTT wise stability

To validate the stability of CCM-LE with respect to protocol fairness, the Coefficient of Variation (CoV) is calculated: the standard deviation of all fairness values obtained from the simulation results divided by their mean. A higher CoV value means lower stability of the congestion control mechanism. Figure 13 shows the stability comparison of TCP CUBIC-Default and TCP CUBIC-(CCM-LE) in four ways: by buffer queue size, by link bandwidth, and in long and short RTT networks. Figure 13a compares the stability of protocol fairness with respect to buffer queue size: the fairness of both mechanisms is more stable at lower buffer queue sizes than at higher ones, but for any buffer queue size (0.0-2.0) the CoV of TCP CUBIC-(CCM-LE) is lower than that of TCP CUBIC-Default, which confirms the higher fairness stability of TCP CUBIC-(CCM-LE) flows. Figure 13b shows the stability comparison with respect to link bandwidth: the CoV of both mechanisms grows as link bandwidth increases, meaning fairness stability is lower at high link bandwidths; nevertheless, for any link bandwidth configuration (100, 200, 300, 400 and 500 Mbps), the fairness stability of TCP CUBIC-(CCM-LE) flows is higher than that of TCP CUBIC-Default flows. Figure 13c shows the stability of both mechanisms when flows are configured in a long RTT network scenario: the CoV values of both mechanisms are very high at longer round trip times, indicating low stability in long RTT networks; the fairness stability of the two mechanisms is the same at 50 ms RTT. Figure 13d shows the fairness stability in short RTT networks, where the stability of TCP CUBIC-(CCM-LE) is again higher than that of TCP CUBIC-Default. In conclusion, the fairness stability of TCP CUBIC-(CCM-LE) exceeds that of TCP CUBIC-Default in every network configuration, confirming that flows share the available link bandwidth fairly with each other when CCM-LE is used as the reduction mechanism.
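The stability metric is simply the coefficient of variation of the fairness samples; a minimal sketch follows (the sample values are illustrative, not taken from the paper's data):

import statistics

def coefficient_of_variation(samples: list[float]) -> float:
    """CoV = standard deviation / mean; a lower CoV means more stable fairness."""
    return statistics.stdev(samples) / statistics.mean(samples)

print(round(coefficient_of_variation([0.95, 0.97, 0.96, 0.99, 0.94]), 3))  # 0.02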

5.2 Convergence Time Comparison

This section presents the convergence time results of two flows of TCP CUBIC. TCP CUBIC flows configured with the default cwnd reduction mechanism are referred to as TCP CUBIC-Default flows, whereas TCP CUBIC flows configured with CCM-LE are referred to as TCP CUBIC-(CCM-LE) flows. First, the convergence times of two flows of TCP CUBIC-Default and two flows of TCP CUBIC-(CCM-LE) are presented graphically with respect to RTT at 200 Mbps link bandwidth. Second, convergence time results for the two mechanisms are presented with respect to link bandwidth in Mbps. Third, the convergence time of two TCP CUBIC-(CCM-LE) flows is compared comprehensively with state-of-the-art congestion control mechanisms (BIC TCP, TCP Compound, TCP Reno and HighSpeed TCP) with respect to short RTT, long RTT, buffer queue size and link bandwidth. Finally, the mean convergence times of two flows of each of the above mechanisms are presented graphically as bar graphs. A short convergence time means the flows converge quickly, i.e., existing flows share the available link bandwidth with new incoming flows in less time.

Figure 14 compares the convergence time of two flows of TCP CUBIC-Default and TCP CUBIC-(CCM-LE). Per the simulation parameters, flow 1 starts transmitting data at 1 s and flow 2 at 150 s. As flow 2 enters the network, flow 1 starts decreasing its cwnd while flow 2 keeps increasing its cwnd; after some time, the two flows converge and share the available link bandwidth fairly. In Fig. 14, flow 1 and flow 2 are configured with 200 Mbps link bandwidth, at round trip times of 8 ms and 16 ms. The results show that the convergence time of TCP CUBIC-(CCM-LE) flows is shorter than that of TCP CUBIC-Default; in other words, TCP CUBIC-(CCM-LE) flows share the available link bandwidth fairly with each other in a shorter convergence time than flows of TCP CUBIC-Default.

Fig. 14 Convergence time comparison of two flows of TCP CUBIC-Default and TCP CUBIC-(CCM-LE). a Link bandwidth = 200 Mbps, RTTs = 8 ms. b Link bandwidth = 200 Mbps, RTTs = 16 ms. c Link bandwidth = 200 Mbps, RTTs = 8 ms. d Link bandwidth = 200 Mbps, RTTs = 16 ms

Figure 15 presents the mean convergence time of two flows of TCP CUBIC-Default and TCP CUBIC-(CCM-LE) with respect to link bandwidth, as a bar graph in which each bar represents the mean convergence time of the two flows. At 100-200 Mbps link bandwidth, TCP CUBIC-(CCM-LE) flows have a longer convergence time than TCP CUBIC-Default flows, whereas at 300-500 Mbps they have a shorter one. The arrows in Fig. 15 point to the improvement in convergence time obtained by using CCM-LE instead of the default reduction mechanism of TCP CUBIC. At 300 Mbps link bandwidth, the convergence time of TCP CUBIC-(CCM-LE) is much shorter than that of TCP CUBIC-Default: in a network with 300 Mbps link bandwidth, competing flows configured with CCM-LE share the available link bandwidth with new incoming flows much faster than TCP CUBIC-Default flows. In high bandwidth networks, TCP CUBIC-(CCM-LE) flows have shorter convergence times than TCP CUBIC-Default flows. Overall, TCP CUBIC-(CCM-LE) achieves better convergence times than the default version of TCP CUBIC.

Fig. 15 Convergence time comparison of TCP CUBIC-Default and TCP CUBIC-(CCM-LE)

Fig. 16 Convergence time comparison of TCP CUBIC-(CCM-LE) with state-of-the-art congestion control mechanisms. a Short RTT wise convergence time. b Long RTT wise convergence time. c Buffer queue wise convergence time. d Link bandwidth wise convergence time

Fig. 17 Mean convergence time comparison of TCP CUBIC-(CCM-LE) with state-of-the-art congestion control mechanisms

Figure 16 shows the convergence time comparison of TCP CUBIC-(CCM-LE) with state-of-the-art congestion control mechanisms (BIC TCP, TCP Compound, TCP Reno and HighSpeed TCP), with respect to short RTT, long RTT, buffer queue size and link bandwidth. The convergence time of all congestion control mechanisms is very low in short RTT networks, as shown in Fig. 16a: the flows of each mechanism share the available link bandwidth very quickly when configured in short RTT networks. In short RTT networks, the convergence time of TCP CUBIC-(CCM-LE) is shorter than that of TCP CUBIC-Default and TCP Compound, but longer than that of BIC TCP, TCP Reno and HighSpeed TCP (Fig. 16a); TCP CUBIC-(CCM-LE) therefore outperforms TCP CUBIC-Default and TCP Compound in terms of convergence time. In long RTT networks, all six mechanisms show relatively lower convergence time performance than in short RTT networks; there, the convergence time of TCP CUBIC-(CCM-LE) is longer than that of BIC TCP, TCP Reno and HighSpeed TCP but shorter than that of TCP CUBIC-Default and TCP Compound, as shown in Fig. 16b. TCP CUBIC-(CCM-LE) improves its convergence time performance over TCP CUBIC-Default in both short and long RTT networks.

Figure 16c shows the convergence time of the congestion control mechanisms with respect to buffer queue size; TCP CUBIC-(CCM-LE) improves on the state-of-the-art congestion control mechanisms. Figure 16d shows the convergence time with respect to link bandwidth, varied over 100, 200, 300, 400 and 500 Mbps among the nodes. The change in link bandwidth does not affect the convergence time of TCP CUBIC and BIC TCP flows; however, at higher link bandwidths (300-500 Mbps), the convergence time of HighSpeed TCP and TCP Reno increases. Figure 16d also indicates that TCP CUBIC-(CCM-LE) performs better than TCP CUBIC-Default, TCP Compound, TCP Reno and HighSpeed TCP.

Figure 17 gives a comprehensive performance analysis of TCP CUBIC-(CCM-LE) against state-of-the-art congestion control mechanisms, comparing the convergence time of two flows of each mechanism in four settings: similar RTTs, non-similar RTTs, long RTT networks and short RTT networks. All the congestion control mechanisms show excellent convergence time performance in short RTT networks and relatively low performance in long RTT networks. When both flows of a mechanism have similar RTTs, convergence time performance is better than with non-similar RTTs. The arrows in Fig. 17 show that TCP CUBIC-Default increases its convergence time in long RTT networks, i.e., TCP CUBIC-Default flows need more time to share the available link bandwidth fairly with each other than in short RTT networks. TCP CUBIC-(CCM-LE) improves convergence time over TCP CUBIC-Default in all four cases discussed above; overall, TCP CUBIC-(CCM-LE) flows improve convergence time performance by 17% compared with TCP CUBIC-Default flows.

6 Conclusion and Future Work

This research has addressed the issues that TCP CUBIC faces at each packet loss event. In the congestion control module for loss event, the reduction percentage of the cwnd after each packet loss event is increased by 10 percentage points (from 20% to 30%), which improves protocol fairness and convergence time. The focus of this research is to maximize the performance of the enhanced TCP CUBIC in long distance, high bandwidth networks using the enhanced cwnd reduction parameter.

In a network topology, initial flows grab all the available link bandwidth very quickly and, at packet loss events, do not reduce their cwnd sufficiently. As a result, congestion occurs while more flows are still entering the network, causing a shortage of available link bandwidth and, in turn, low network performance. To overcome this issue, the Congestion Control Module for Loss Event (CCM-LE) is proposed in this research, based on the cwnd reduction parameter applied at each packet loss. The purpose of CCM-LE is to reduce cwnd by a sufficient amount at each packet loss event so that the available link bandwidth increases and all new competing flows on the link can share it fairly and quickly. CCM-LE increases the reduction percentage of the cwnd after each packet loss event by scaling the multiplicative reduction parameter \(\beta \) used in TCP CUBIC, so that current flows release more bandwidth for new incoming flows; at each packet loss event, CCM-LE reduces cwnd by the multiplicative decrease parameter \((\mu \times \beta )\) instead of \((\beta )\) as used in TCP CUBIC. CCM-LE is evaluated in terms of protocol fairness and the convergence time of competing flows. The results show that the CCM-LE cwnd reduction parameter yields a fair distribution of the available link bandwidth among all competing flows with a shorter convergence time, providing higher protocol fairness and better convergence time results in long distance, high bandwidth networks than TCP CUBIC. Using CCM-LE, the enhanced TCP CUBIC improves protocol fairness by 13% and convergence time by 17% compared with the default version of TCP CUBIC. This research can be further extended to develop an RTT-dependent cwnd reduction mechanism.

Some suggestions for expanding and improving future research and projects are discussed here. This research focused on a methodology to control and avoid congestion, using a mechanism that deals with the issues affecting the performance of SDN and its underlying platforms; the proposed CCM-LE technique achieves the desired performance for TCP CUBIC in an SDN network environment. We will therefore extend this work to cover higher transmission rates and buffer limits, and to develop an RTT-dependent cwnd reduction mechanism to further investigate the performance of TCP CUBIC in SDN environments.