
1 Introduction

With the rapid development of emerging wireless communication technologies, a large number of wired/wireless heterogeneous networks have been widely deployed. However, the traditional TCP protocol, originally designed primarily for wired networks, can no longer meet the performance requirements of increasingly complex wired/wireless heterogeneous networks. In wired networks, the random bit-error rate (BER) is negligible and congestion is the main cause of packet loss, while in wireless networks, frequent random packet losses are unavoidable due to non-negligible factors such as high BER, fading and blackout [1]. The traditional TCP's reactive congestion control and avoidance mechanism, which assumes that all packet losses are due to congestion, is incapable of differentiating these mixed packet losses. Therefore, TCP without modification suffers throughput degradation when used in heterogeneous networks.

In the past few years, many research efforts have been made to adapt TCP to heterogeneous networks. Such works can be classified into three main categories: split-connection schemes [2], localized link-layer solutions [3], and end-to-end schemes [4]. Among them, strictly end-to-end schemes at the transport layer have received much attention because they require no support from the network. TCP Westwood [5], a representative end-to-end congestion control algorithm for wireless networks, has evolved into many variants. When a packet loss occurs, it sets the slow-start threshold (ssthresh) and congestion window (cwnd) based on bandwidth estimation, which eliminates the influence of wireless random loss to some extent and provides significant throughput gains. However, the following shortcomings still exist when TCP Westwood is applied in heterogeneous networks with high BER. (1) It has no loss differentiation mechanism, so it misinterprets random packet losses as congestion losses and triggers unnecessary congestion control. (2) It leaves the slow-start phase intact, as traditional TCP Reno [6] does, blindly initializing ssthresh with a fixed value, which leads to a decline in bandwidth utilization.

In order to overcome the above disadvantages, we propose a novel TCP BRJ scheme with a loss differentiation mechanism and the following improvements. (1) In the slow-start phase, TCP BRJ adjusts the initial ssthresh in real time according to the bandwidth estimation. (2) Every time an acknowledgment (ACK) of new data segments arrives during the congestion avoidance phase, TCP BRJ calculates the RTT jitter and divides the network state into five congestion grades based on this jitter. (3) When three duplicate ACKs occur, TCP BRJ differentiates random packet losses from congestion packet losses based on the network congestion grade and reacts accordingly.

2 Overview of TCP Westwood

TCP Westwood (TCPW) is a sender-only modification of TCP NewReno [7]. The TCP sender updates its bandwidth estimation (BWE) each time an ACK for new data is received. The estimation is based on the information carried in the ACKs and on the rate at which ACKs are received. After a packet loss indication, the sender uses BWE to set cwnd and ssthresh properly. Further details of BWE are presented in [5]. Here, we describe how the estimation is used to set cwnd and ssthresh. First, in TCPW, the cwnd dynamics during slow start and congestion avoidance are unchanged, i.e., cwnd grows exponentially and linearly, respectively. A packet loss is indicated by (a) the reception of three duplicate ACKs, or (b) a coarse timeout expiration. In case (a), TCPW sets cwnd and ssthresh as follows.

  if (three DUPACKs are received) {     // packet loss
      ssthresh = (BWE * RTTmin) / seg_size;
      if (cwnd > ssthresh) cwnd = ssthresh;     // congestion avoidance
  }

When a coarse timeout expires, the cwnd and ssthresh are set as follows:

  if (coarse timeout expires) {
      cwnd = 1;
      ssthresh = (BWE * RTTmin) / seg_size;
      if (ssthresh < 2) ssthresh = 2;
  }
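
For clarity, the window adjustments above can be expressed as a minimal Python sketch. The function names, and the assumption that BWE is given in bytes per second with RTTmin in seconds and seg_size in bytes (so that cwnd and ssthresh are counted in segments), are ours for illustration and are not part of the original TCPW specification.

  # Sketch of the TCPW reaction to loss events (not the reference implementation).
  # Assumes bwe in bytes/s, rtt_min in seconds, seg_size in bytes,
  # and cwnd/ssthresh counted in segments.

  def on_three_dupacks(cwnd, bwe, rtt_min, seg_size):
      """Reaction to three duplicate ACKs in TCPW."""
      ssthresh = (bwe * rtt_min) / seg_size
      if cwnd > ssthresh:        # continue in congestion avoidance
          cwnd = ssthresh
      return cwnd, ssthresh

  def on_coarse_timeout(bwe, rtt_min, seg_size):
      """Reaction to a coarse timeout expiration in TCPW."""
      cwnd = 1
      ssthresh = max((bwe * rtt_min) / seg_size, 2)
      return cwnd, ssthresh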

3 RTT Jitter and Congestion Grade

A packet's round-trip time (RTT) consists of three parts: transmission delay, propagation delay and queuing delay. If all packets follow the same path and are of the same size, the transmission delay and propagation delay are fixed. The changes of RTT are then determined only by the queuing delay, which indicates the congestion degree of the network. The network state can therefore be characterized to a large extent by the RTT jitter, so TCP BRJ uses an adaptive RTT jitter estimate to predict the network state and then divides the network congestion into different grades according to this prediction.

We use the following formula to calculate the RTT jitter based on RTT:

$$ J_{\mathrm{sample}}(k) = \mathrm{RTT}_{k} - \mathrm{RTT}_{k-1} \tag{64.1} $$

$$ J_{\mathrm{estimate}}(k) = \alpha \cdot J_{\mathrm{estimate}}(k-1) + (1-\alpha) \cdot J_{\mathrm{sample}}(k) \tag{64.2} $$

where RTTk and Jsample(k) denote, respectively, the RTT and the RTT jitter sample measured when the kth ACK is received at the sender. To improve the accuracy of the RTT jitter, we use an exponentially weighted moving average (EWMA) filter in formula (64.2) to calculate the RTT jitter estimate Jestimate(k), where α ∈ [0, 1]; in a conventional EWMA filter the gain 1 − α is fixed (typically 0.125, i.e., α = 0.875). From (64.2), we can see that if α is small, Jestimate(k) depends mainly on Jsample(k); otherwise it depends mainly on Jestimate(k − 1). However, when the network state changes frequently, it is difficult to obtain an accurate RTT jitter estimate with a fixed α. In TCP BRJ, we update the value of α dynamically according to the network state, as described in the following sections: if the network state changes quickly, α is set to a smaller value; if the network state is stable, a larger α is used. In this way, TCP BRJ obtains a more accurate RTT jitter estimate. The pseudocode is shown below.

  if (network states change quickly) {
      α = | Jsample(k) − Jestimate(k−1) | / Jestimate(k−1);
      if (α > 1) α = 1 / α;
  } else {
      α = 0.875;     // network states are stable
  }
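
As an illustration, the adaptive filter can be sketched in Python as follows. The class name, the explicit unstable flag signalling a quickly changing network state (the paper does not specify how this condition is detected), and the guard against a zero previous estimate are our own additions.

  class JitterEstimator:
      """EWMA RTT-jitter estimator with the adaptive alpha of (64.2)."""

      def __init__(self):
          self.prev_rtt = None     # RTT(k-1)
          self.j_estimate = 0.0    # J_estimate(k-1)

      def update(self, rtt, unstable):
          """Feed RTT(k); return the new jitter estimate J_estimate(k)."""
          if self.prev_rtt is None:            # first sample: no jitter yet
              self.prev_rtt = rtt
              return self.j_estimate
          j_sample = rtt - self.prev_rtt       # (64.1)
          self.prev_rtt = rtt
          if unstable and self.j_estimate != 0:    # network state changes quickly
              alpha = abs(j_sample - self.j_estimate) / abs(self.j_estimate)
              if alpha > 1:
                  alpha = 1 / alpha
          else:                                    # network state is stable
              alpha = 0.875
          # (64.2)
          self.j_estimate = alpha * self.j_estimate + (1 - alpha) * j_sample
          return self.j_estimate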

This paper takes advantage of a network congestion flag (NCF) to realize a fine-grained classification of the network congestion degree. The NCF is given by NCF = (Jestimate(k) − T)/T, i.e., the relative difference between the RTT jitter estimate and a constant T, where T is set to 0.5 ms, an empirical value obtained from simulation experiments. According to the value of NCF, the network congestion degree is divided into five grades, as shown in Table 64.1.

Table 64.1 Division of congestion grade
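
To make the mapping concrete, the sketch below computes the NCF and assigns a congestion grade. Since the numerical boundaries of Table 64.1 are not reproduced here, the thresholds used below are placeholder values of our own and should be replaced by the values in the table.

  T = 0.0005  # reference jitter T = 0.5 ms, expressed in seconds

  def congestion_grade(j_estimate):
      """Map the RTT-jitter estimate to a congestion grade 1..5 via the NCF.

      The thresholds are illustrative placeholders, not the values of Table 64.1.
      """
      ncf = (j_estimate - T) / T
      if ncf < -0.5:
          return 1     # network utilization very low
      elif ncf < 0.0:
          return 2
      elif ncf < 0.5:
          return 3     # normal network state
      elif ncf < 1.0:
          return 4     # slight congestion
      else:
          return 5     # serious congestion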

4 Congestion Control of TCP BRJ

We note that in TCPW, the initial value of ssthresh is artificially set to 1, which can hardly adapt to the network state when the bottleneck link bandwidth is unknown, and an inappropriate value of ssthresh leads to a decline in bandwidth utilization. In this paper, we propose an improved slow-start phase, which adjusts the initial ssthresh and cwnd in real time according to BWE. During the detection period, if the current ssthresh is lower than the BWE-based estimate, the sender sets ssthresh equal to it and increases cwnd slightly faster than the exponential growth in TCPW, which makes the slow-start phase converge faster. The pseudocode of the improved slow-start phase is shown below.

  if (ssthresh < (BWE * RTTmin) / seg_size)
      ssthresh = (BWE * RTTmin) / seg_size;
  if (cwnd < ssthresh)
      cwnd = cwnd + 1.5;
  else
      go to the congestion avoidance phase;

The improved congestion avoidance phase of TCP BRJ not only realizes the division of congestion grades but also adopts a new cwnd adjustment strategy. The process of the improved congestion avoidance phase is shown in Table 64.2.

Table 64.2 The adjustments in improved congestion avoidance phase

Firstly, when the congestion grade is 1 or 2, the network utilization is very low, and cwnd is increased by 1.1 and 1.2, respectively, in order to make full use of the network resources as quickly as possible. Secondly, a congestion grade of 3 indicates a normal network state, and we increase cwnd by 1. Thirdly, when the congestion grade grows to 4, which indicates slight congestion, the sender decreases the growth rate of cwnd appropriately. When the congestion grade is equal to 5, the network is seriously congested, and we increase cwnd by only a very small value.
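
The per-ACK increments described above can be summarized in a small lookup, sketched below. The increments for grades 1-3 follow the text; the paper does not give numerical values for grades 4 and 5, so the 0.5 and 0.1 used here are assumptions for illustration only.

  def avoidance_increment(grade):
      """Per-ACK cwnd increment in the improved congestion avoidance phase.

      Grades 1-3 follow the paper; 0.5 and 0.1 are placeholder values for the
      unspecified reduced growth (grade 4) and quite small value (grade 5).
      """
      increments = {1: 1.1, 2: 1.2, 3: 1.0, 4: 0.5, 5: 0.1}
      return increments[grade]

  # Usage on each new ACK during congestion avoidance:
  # cwnd += avoidance_increment(congestion_grade(j_estimate))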

The proposed TCP BRJ also has a novel loss differentiation mechanism based on the bandwidth estimation and the RTT jitter. When three duplicate ACKs occur, TCP BRJ differentiates random packet losses from congestion packet losses based on the network congestion grade and reacts accordingly: if the congestion grade is larger than 3, TCP BRJ regards the packet loss as a congestion loss; otherwise, as a random loss. More details are shown below.

  Three duplicate ACKs are received:

  if (grade <= 3) {                         // wireless random loss
      ssthresh = int(cwnd * 4/5);
      cwnd = ssthresh + 3;
  } else if (grade == 4) {                  // congestion loss
      sstemp = (BWE * RTTmin) / seg_size;
      ssthresh = int(sstemp);
      if (cwnd > sstemp) cwnd = (cwnd + sstemp) / 2;
  } else {                                  // serious congestion loss
      sstemp = (BWE * RTTmin) / seg_size;
      ssthresh = int(sstemp);
      if (cwnd > sstemp) cwnd = cwnd * 0.6;
  }
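
The loss-differentiation logic above translates directly into the following Python sketch; the variable names follow the pseudocode, and the bandwidth units are assumed as in the earlier sketches.

  def on_duplicate_acks(cwnd, grade, bwe, rtt_min, seg_size):
      """React to three duplicate ACKs according to the congestion grade."""
      if grade <= 3:                          # wireless random loss
          ssthresh = int(cwnd * 4 / 5)
          cwnd = ssthresh + 3
      elif grade == 4:                        # congestion loss
          sstemp = (bwe * rtt_min) / seg_size
          ssthresh = int(sstemp)
          if cwnd > sstemp:
              cwnd = (cwnd + sstemp) / 2
      else:                                   # serious congestion loss (grade 5)
          sstemp = (bwe * rtt_min) / seg_size
          ssthresh = int(sstemp)
          if cwnd > sstemp:
              cwnd = cwnd * 0.6
      return cwnd, ssthresh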

5 Performance Evaluations

In this section, we simulate and analyze the performance improvements of TCP BRJ in terms of throughput, bandwidth utilization, fairness and friendliness, compared with TCP WestwoodNR (the NS-2 module of TCP Westwood with the NewReno feature) and TCP Reno in heterogeneous networks, using the NS-2 simulator.

Throughput is an important measure of network performance; it is calculated as the effective amount of data delivered through the network, based on the ACKs received at the sender. The simulation topology is depicted in Fig. 64.1.

Fig. 64.1 Simulation topology for throughput

A single TCP connection running a long-lived FTP application delivers data from 0 to 100 s. We run the simulation for BRJ, Reno and WestwoodNR, respectively. The BER at the wireless bottleneck link varies from 0.01 to 10 %.

The comparison of throughput is shown in Fig. 64.2. For random loss rates smaller than 0.1 %, all TCP schemes perform close to each other. BRJ starts to outperform the other TCP variants when the loss rate exceeds 0.1 %, and its throughput is especially superior when the loss rate varies from 1 to 5 %. At a very practical wireless loss rate of 2 %, BRJ outperforms WestwoodNR by 34.3 % and Reno by 95.3 %. Furthermore, the advantage of BRJ persists across various bottleneck bandwidths, as shown in Fig. 64.3.

Fig. 64.2 Throughput comparison

Fig. 64.3 Throughput under different bottleneck bandwidths

We use the fairness index proposed in [8] to evaluate the fairness of the TCP schemes. Multiple connections of the same TCP scheme must interoperate and converge to their fair share. The simulation topology is shown in Fig. 64.4, where 10 flows of the same TCP scheme share a 20 Mb/s bottleneck link with 1-5 % BER. We simulate the different TCP schemes, and the results are shown in Fig. 64.5. We can see that all TCP variants achieve a satisfactory fairness index. Moreover, Fig. 64.6, which shows the total throughput of the 10 TCP flows, indicates that TCP BRJ greatly improves the bandwidth utilization of the bottleneck link compared to the others.

Fig. 64.4 Simulation topology for multiple flows

Fig. 64.5 Fairness index comparison

Fig. 64.6 Total throughput comparison

A friendly TCP scheme should coexist with other TCP variants without starving them. To verify the friendliness of TCP BRJ towards TCP Reno, we use the simulation topology in Fig. 64.4. Ten TCP flows run together through a bottleneck link with 0.1 % BER, where m TCP BRJ flows coexist with n TCP Reno flows. We vary the proportion of the two TCP schemes by adjusting m and n. All ten connections are expected to share the bottleneck bandwidth equally, i.e., roughly 2 Mbps per connection.

The results are shown in Fig. 64.7. We observe that the bandwidth allocation of each TCP connection is close to its fair share of the bottleneck link, except that TCP BRJ achieves a slightly higher throughput than Reno, which remains within a tolerable range.

Fig. 64.7 Friendliness of BRJ towards Reno

6 Conclusion

In this paper, we propose a new TCP scheme, TCP BRJ, to improve TCP performance in wired/wireless heterogeneous networks. TCP BRJ has an improved slow-start phase, realizes a novel congestion avoidance phase, and incorporates a loss differentiation mechanism. The simulation results show that TCP BRJ provides a significant performance improvement in throughput, bandwidth utilization and fairness over the other schemes in heterogeneous networks with high BER, especially 1-2 %, which is typical of wireless links. Moreover, for practical purposes, TCP BRJ is friendly towards TCP Reno.