1 Introduction

In recent years, the number of mobile-broadband subscribers has grown at a double-digit rate. By 2017, the number of mobile-broadband subscriptions had reached 4.3 billion, more than 50% of the world's population [1]. User equipment (UE) is usually equipped with multiple network interfaces such as 3G/4G and Wi-Fi. A multipath congestion control protocol allows a UE to utilize multiple paths to transmit data. Using a multipath transport protocol at the UE achieves higher throughput, provides smoother hand-off, and improves availability.

The multipath congestion control protocol (MPTCP) proposed in [9, 10] couples the congestion windows of the subflows from one source. It is designed to satisfy three design goals: 1) improving throughput, 2) balancing congestion among the subflows, and 3) being friendly to single-path TCP (a.k.a. do no harm). The authors of [13] have shown, via experiments and analysis, that MPTCP is not Pareto-optimal: the throughputs of some single-path TCP flows are decreased without increasing the throughput of any other flow. A Pareto-optimal protocol called the opportunistic linked-increases algorithm (OLIA) is proposed to address this problem. However, OLIA turns out to be unresponsive in some specific scenarios, as shown in [14]. The Balia protocol, proposed in [14], addresses this drawback of OLIA. It is also proved in [14] that responsiveness is traded off against TCP friendliness in many loss-based multipath congestion control protocols. Our previous work [11] designs a class of multipath TCPs based on a multipath network utility maximization (NUM) model. The work in [12] applies this framework to design mReno, which is compatible with TCP Reno and also satisfies the three design goals of MPTCP.

MPTCP, mReno, OLIA, and Balia are loss-based multipath congestion control protocols, which cannot fully utilize the bandwidth of high bandwidth-delay product (BDP) links because of their conservative congestion control (additive increase, multiplicative decrease). A different type of congestion control protocol is needed for high-BDP connections. Fast TCP is a well-known single-path delay-based congestion control protocol and is suitable for high-BDP links [5,6,7]. In this work, we design a delay-based multipath congestion control protocol, called mFast, which is compatible with Fast TCP and hence appropriate for high-BDP connections.

Our design framework is based on the NUM model introduced by Kelly et al. [2]. Several TCP variants such as Reno, Vegas, and Fast have been reverse-engineered, and it turns out that these TCPs correspond to specific utility functions in NUM [3,4,5]. We extend the NUM model to a multipath NUM in which the utility is redesigned for multipath flows. Based on the relation between the single-path NUM and Fast TCP [5,6,7], we forward-engineer mFast from a multipath NUM model (see Fig. 1). The performance of mFast, including load balancing, TCP friendliness, and throughput improvement, is evaluated via analysis as well as simulations.

Fig. 1 The design flow-chart of mFast

The remainder of the paper is organized as follows. Section 2 presents related work. Section 3 introduces the design of mFast. Section 4 investigates the characteristics of mFast such as its equilibrium point, TCP friendliness, and stability. The experiments and conclusions are presented in Sections 5 and 6, respectively.

2 Related works

Few studies have designed multipath congestion control protocols for high-BDP networks. MPCubic [17] is one of them; it is shown to achieve throughput improvement, load balancing, and fairness. Developed from TCP Cubic, its subflows adjust their congestion windows based on the time elapsed since the last packet drop, so it is a loss-based protocol. However, packet loss is a rare event on high-BDP connections; hence MPCubic inherits the drawbacks of loss-based protocols on high-BDP networks, such as achieving much less fine-grained load balancing than a delay-based approach. Delay-based congestion control is appropriate for high-BDP networks because queueing delay can be estimated more accurately than packet loss in such networks.

TCP Vegas [8] and Fast TCP [5] are two well-known delay-based protocols for single-path flows. Both have the same equilibrium point; however, Vegas is suitable for low-BDP networks whereas Fast TCP is suitable for high-BDP networks. Weighted Vegas (wVegas), proposed in [16], uses delay as the congestion price fed back to the source. It obtains fine-grained load balancing and provides an option for multipath TCP in data-center networks [15]. However, because wVegas is extended from TCP Vegas, it also inherits the drawbacks of TCP Vegas, which is appropriate only for low-BDP paths [16].

The paper [18] has proposed a multipath TCP that is also extended from Fast TCP. However, its congestion control is not developed from a NUM model, and that work does not provide a performance analysis of the protocol.

3 From single-path Fast TCP to multipath Fast TCP

3.1 Single-path Fast TCP

Let us consider a network with a set of links \(\mathcal L\) in which each link \(l \in \mathcal L\) has finite capacity \(c_{l}\). The network is shared by a set of flows \(\mathcal S\). Let \(w_{s}\), \(x_{s}\), and \(q_{s}\) be the congestion window size, throughput, and queueing delay of flow \(s\), respectively. A single-path flow running Fast TCP updates its congestion window size according to the following equation

$$ w_{s} \leftarrow w_{s} + \gamma(\alpha_{s} - x_{s}q_{s}), $$
(1)

where \(\gamma\) is a constant between 0 and 1 determining the step size of the algorithm, and \(\alpha_{s}\) is a protocol parameter that controls fairness and also indicates the number of backlogged packets that flow \(s\) keeps in the network. The congestion price \(q_{s}\) of flow \(s\) is the sum of the congestion prices \(p_{l}\) of all links \(l\) along the path of flow \(s\); in Fast TCP, \(q_{s}\) is the queueing delay of flow \(s\).

Let \(d_{s}\) be the propagation delay of flow \(s\). The round-trip time of flow \(s\) is \(d_{s}+q_{s}\); hence, the flow rate is estimated by [5,6,7]

$$ x_{s}= \frac{w_{s}}{d_{s}+q_{s}}. $$
(2)
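For concreteness, a minimal sketch of the update (1)–(2) in Python is given below. This is an illustration only, not a reference Fast TCP implementation; the default values of alpha and gamma are hypothetical, and base_rtt is assumed to be estimated as the minimum observed RTT.

```python
def fast_tcp_update(w, rtt, base_rtt, alpha=200.0, gamma=0.5):
    """One Fast TCP window update following Eqs. (1)-(2).

    w        : current congestion window (packets)
    rtt      : latest round-trip time d_s + q_s (seconds)
    base_rtt : propagation delay d_s, estimated as the minimum observed RTT
    """
    q = max(rtt - base_rtt, 0.0)        # queueing delay q_s
    x = w / rtt                         # rate estimate, Eq. (2)
    w = w + gamma * (alpha - x * q)     # window update, Eq. (1)
    return max(w, 1.0)                  # keep at least one packet in flight
```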

It is known that TCP Vegas and Fast TCP have the same equilibrium point, which is the implicit solution of the NUM model with logarithmic utility functions [5]

$$\begin{array}{@{}rcl@{}} \text{Max.} &&~ \sum\limits_{s \in \mathcal S} \alpha_{s}\log(x_{s}) \end{array} $$
(3)
$$\begin{array}{@{}rcl@{}} \text{s.t.}&& ~ \sum\limits_{s:l \in \mathcal L_{s}} x_{s} \leq c_{l}, ~\forall l \in \mathcal L, \end{array} $$
(4)

where \(\alpha_{s}\) is the number of backlogged packets on the path \(\mathcal L_{s}\) of flow \(s\). Constraint (4) means that the sum of the rates of all flows traversing link \(l\) must not exceed the link capacity.

3.2 Multipath Fast TCP (mFast)

We extend the above NUM model of Fast TCP to a model for multipath flows in which a flow \(s\) may include several subflows \(r \in s\). (We abuse notation by using \(s\) to denote both a source and the set of subflows from source \(s\); see Table 1 for the main notation used in the paper.) Let \(\mathcal F=\{(s,r)\}_{r\in s, s \in \mathcal S}\) be the set of subflows of all flows in the network and \(R \in \{0, 1\}^{|\mathcal L|\times|\mathcal F|}\) be the routing matrix, in which \(R_{lr} = 1\) if subflow \(r\) uses link \(l\), and 0 otherwise. A natural extension of Fast TCP's utility function to a multipath flow \(s\) is given by

$$ \alpha_{s}\log \left( \sum\limits_{r \in s} x_{s,r} \right), \forall s \in \mathcal S. $$
(5)

Function (5) is concave but not strictly concave. Hence, the extended multipath NUM with the utility given by Eq. 5 may have multiple optimal solutions; for example, if two subflows of a flow traverse exactly the same links, any split of the total rate between them achieves the same utility.

Table 1 Main notations

We therefore consider the following multipath utility function associated with flow \(s\)

$$ U_{s}(\boldsymbol{x}_{s}) = (1-\epsilon) \alpha_{s}\log \left( \sum\limits_{r \in s} x_{s,r} \right) + \epsilon \sum\limits_{r \in s} \alpha_{s}\log (x_{s,r}), $$
(6)

for all \(s \in \mathcal S\), where 𝜖 is a positive constant in (0,1]. The utility \(U_{s}(\boldsymbol{x}_{s})\) is a linear combination of the coupled part \(\alpha_{s}\log({\sum}_{r \in s} x_{s,r})\) and the uncoupled part \({\sum}_{r \in s} \alpha_{s}\log(x_{s,r})\), and it is strictly concave. The extended multipath NUM is given by

$$\begin{array}{@{}rcl@{}} \text{Max.} &&~ U(\boldsymbol{x}) = \sum\limits_{s \in \mathcal S} U_{s}(\boldsymbol{x}_{s}), \end{array} $$
(7)
$$\begin{array}{@{}rcl@{}} \text{s.t.} &&~ R\boldsymbol{x} \leq \boldsymbol c, \end{array} $$
(8)

The optimization problem (7)–(8) has a unique optimal solution since its objective is a strictly concave function and its feasible set is convex [20]. (Readers may refer to our previous work [11] for more details on the multipath NUM model.)
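As a sanity check, the optimum of (7)–(8) can be computed numerically for a small instance. The sketch below is ours and assumes the cvxpy package together with a hypothetical topology of two links shared by a two-subflow mFast flow and a single-path flow; it is not part of the protocol itself.

```python
import cvxpy as cp
import numpy as np

eps, alpha = 0.1, 200.0              # protocol parameters (illustrative values)
c = np.array([100.0, 80.0])          # link capacities
# Subflows 0 and 1 belong to the multipath flow; subflow 2 is a
# single-path flow sharing link 0 with subflow 0.
R = np.array([[1, 0, 1],
              [0, 1, 0]])            # routing matrix (links x subflows)
x = cp.Variable(3, pos=True)         # subflow rates

U_multi = (1 - eps) * alpha * cp.log(x[0] + x[1]) \
          + eps * alpha * (cp.log(x[0]) + cp.log(x[1]))   # utility (6)
U_single = alpha * cp.log(x[2])      # utility (6) with a single subflow
prob = cp.Problem(cp.Maximize(U_multi + U_single), [R @ x <= c])
prob.solve()
print("optimal subflow rates:", x.value)
```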

Consider the following window update of mFast for subflow \(r\) of multipath flow \(s\), corresponding to the utility function (6)

$$ w_{s,r} \!\leftarrow\! w_{s,r} + \gamma\left( \left( (1 - \epsilon)\frac{x_{s,r}}{x_{s}} + \epsilon\right)\alpha_{s} - x_{s,r}q_{s,r}\right), \forall r \in s, $$
(9)

where \(x_{s} = {\sum }_{r \in {s}}{x_{s,r}}\) for all \(s \in \mathcal S\).
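A per-RTT sketch of update (9) in Python might look as follows. This is our illustration, not the authors' implementation; the per-subflow measurements rtts and base_rtts are assumed to be available at the sender.

```python
def mfast_update(w, rtts, base_rtts, alpha=200.0, gamma=0.5, eps=0.01):
    """One mFast window update for all subflows of a flow, following Eq. (9).

    w, rtts, base_rtts : lists indexed by subflow r
    """
    q = [max(rtt - b, 0.0) for rtt, b in zip(rtts, base_rtts)]   # q_{s,r}
    x = [wi / rtt for wi, rtt in zip(w, rtts)]                   # x_{s,r}
    x_total = sum(x)                                             # x_s
    new_w = []
    for wi, xi, qi in zip(w, x, q):
        weight = (1 - eps) * xi / x_total + eps                  # coupling term
        new_w.append(max(wi + gamma * (weight * alpha - xi * qi), 1.0))
    return new_w
```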

Let \(y_{l}\) be the aggregate throughput of all subflows on link \(l\), \(y_{l} = {\sum}_{(s,r): l \in \mathcal L_{s,r}} x_{s,r}\), and \(\boldsymbol y\) be the vector \(\{y_{l}\}_{l \in \mathcal L}\). Similarly, let \(q_{s,r} = {\sum}_{l \in \mathcal L_{s,r}} p_{l}\) be the sum of the congestion prices of all links along the path of subflow \((s,r)\), and \(\boldsymbol q\) be the vector \(\{q_{s,r}\}_{(s,r) \in \mathcal F}\). We have

$$\begin{array}{@{}rcl@{}} \boldsymbol y & =& R \boldsymbol x, \end{array} $$
(10)
$$\begin{array}{@{}rcl@{}} \boldsymbol q & =& R^{T}\boldsymbol p. \end{array} $$
(11)

Using the relation \(x_{s,r} = \frac {w_{s,r}}{d_{s,r}+q_{s,r}}\), for all \((s,r) \in \mathcal F\), we represent (9) with a fluid model as follows

$$\begin{array}{@{}rcl@{}} && \dot{w}_{s,r} =\gamma\left( \left( (1-\epsilon)\frac{x_{s,r}}{x_{s}}+\epsilon \right)\alpha_{s} - x_{s,r}q_{s,r}\right), \forall r \in s, s \in \mathcal S, \end{array} $$
(12)
$$\begin{array}{@{}rcl@{}} && \dot{p}_{l} = \gamma_{l}(y_{l}-c_{l})^{+}_{p_{l}}, \forall l \in \mathcal L, \end{array} $$
(13)

where \((a)^{+}_{x}=a\) if \(x > 0\) and \((a)^{+}_{x}=\max\{0, a\}\) if \(x \leq 0\).
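The dynamics (12)–(13) can also be integrated numerically. The following Euler-integration sketch is a toy example of ours (one mFast flow with two subflows, each on its own link); the capacities, delays, gains, and step size are illustrative and are not the settings used in Section 5.

```python
import numpy as np

R = np.array([[1, 0],
              [0, 1]])                      # routing matrix (links x subflows)
c = np.array([100.0, 80.0])                 # link capacities (packets/s)
d = np.array([0.05, 0.05])                  # propagation delays d_{s,r}
alpha, gamma, gamma_l, eps = 200.0, 0.5, 0.1, 0.01

w = np.array([10.0, 10.0])                  # subflow windows w_{s,r}
p = np.zeros(2)                             # link prices (queueing delays)
dt = 0.01                                   # Euler step size
for _ in range(200000):
    q = R.T @ p                             # subflow prices, Eq. (11)
    x = w / (d + q)                         # subflow rates
    y = R @ x                               # link loads, Eq. (10)
    w_dot = gamma * (((1 - eps) * x / x.sum() + eps) * alpha - x * q)   # Eq. (12)
    raw = gamma_l * (y - c)
    p_dot = np.where(p > 0, raw, np.maximum(raw, 0.0))                  # Eq. (13)
    w += dt * w_dot
    p += dt * p_dot
print("approximate equilibrium rates:", w / (d + R.T @ p))
```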

Notice that mFast is backward compatible with single-path Fast TCP: if a flow has only one subflow, then \(x_{s,r} = x_{s}\), the coefficient \((1-\epsilon)\frac{x_{s,r}}{x_{s}} + \epsilon\) equals 1, and (9) reduces to the Fast TCP window update (1).

4 Performance analysis

We analyze the equilibrium point, stability, TCP friendliness and throughput improvement of mFast in this section.

4.1 Uniqueness of equilibrium point

The equilibrium point \((\boldsymbol w, \boldsymbol p)\) of the dynamic system given by Eqs. 12–13 satisfies

$$\begin{array}{@{}rcl@{}} \frac{(1-\epsilon)}{{\sum}_{i\in {s}} x_{s,i}} + \frac{\epsilon}{x_{s,r}} &=& \frac {q_{s,r}}{\alpha_{s}}, \forall r \in s, s \in \mathcal S \end{array} $$
(14)
$$\begin{array}{@{}rcl@{}} p_{l} (y_{l} -c_{l})&=& 0 , \forall l \in \mathcal L, \end{array} $$
(15)

where \(x_{s,r} = \frac {w_{s,r}}{d_{s,r}+q_{s,r}}\), for all \(r \in s, s \in \mathcal S\).

It is easy to check that this point satisfies the Karush-Kuhn-Tucker conditions of the optimization problem (7)–(8) [20]; hence, it is also an optimal solution of problem (7)–(8). Since the objective of this problem is strictly concave (𝜖 > 0), the optimal solution is unique.

4.2 Throughput improvement

Informally, a multipath protocol is said to improve throughput if its flow rate is greater than the rate of any single-path flow sharing the same path [9]. We assume that a single-path Fast TCP flow \(s\) with rate \(x_{s}\) shares the same path with subflow \(r\) of a multipath flow \(m\) with total rate \(x_{m}\). We also assume that the protocol parameter \(\alpha\) is the same for all flows. At the equilibrium point, we have

$$ \frac {q_{m,r}}{\alpha} = \frac{(1-\epsilon)}{x_{m}} + \frac{\epsilon}{x_{m,r}} = \frac{1}{x_{s}}, \forall r \in m. $$
(16)

Since \(x_{m,r} \leq x_{m}\), Eq. 16 gives \(\frac{1}{x_{s}} = \frac{1-\epsilon}{x_{m}} + \frac{\epsilon}{x_{m,r}} \geq \frac{1-\epsilon}{x_{m}} + \frac{\epsilon}{x_{m}} = \frac{1}{x_{m}}\). Therefore \(x_{m} \geq x_{s}\), with strict inequality whenever the multipath flow has more than one subflow with positive rate, which implies that mFast improves the throughput compared to a single-path Fast TCP flow on the same path. With 𝜖 ≈ 0, the multipath flow rate approaches the single-path flow rate under the same network conditions; a higher 𝜖 yields a larger throughput improvement for the multipath flow.

4.3 Load-balancing

Load balancing is the ability to shift traffic toward less congested paths. When 𝜖 is very small, the equilibrium condition (14) yields \(q_{s,r} \approx \frac{\alpha_{s}(1-\epsilon)}{x_{s}}\) for all \(r \in s\), i.e., all subflows of the same source see approximately the same congestion price. This means the controller always tries to balance congestion among the subflows of a source, and a subflow on a more congested path obtains a lower rate. With a larger 𝜖, the approximation becomes less accurate; in other words, the controller balances congestion among the subflows of the same source less tightly as 𝜖 grows.

4.4 TCP friendliness

A multipath flow is TCP friendly if it does not dominate the rate of the single-path flows that share the same link. As in [14], we verify the friendliness of mFast using the one-bottleneck-link network in which a multipath flow and a single-path flow go through a common bottleneck link (Fig. 2).

Fig. 2 One-bottleneck-link topology for testing the friendliness of mFast

Theorem 1

A higher 𝜖 yields worse friendliness.

Proof

Let \(f_{1}\) be the multipath flow with \(N\) subflows and \(f_{2}\) be the single-path flow (see Fig. 2). At the equilibrium point,

$$ \frac{1-\epsilon}{x_{1}} + \frac{\epsilon}{x_{1,r}} = \frac{q}{\alpha_{1}}, \forall r \in f_{1}, $$
(17)

where \(x_{1}\) is the total rate of \(f_{1}\) and \(q\) is the queueing delay of the link. Hence, \(x_{1,r}\) has the same value for every subflow \(r\) of the multipath flow, so that

$$ x_{1} = N x_{1,r}. $$
(18)

Hence,

$$ x_{1}=(1+(N-1)\epsilon)\frac{\alpha_{1}}{q}. $$
(19)

On the other hand, at the equilibrium point, the single-path flow \(f_{2}\) has \(x_{2}=\frac{\alpha_{2}}{q}\). Substituting into \(x_{1} + x_{2} = c\), where \(c\) is the capacity of the link, we obtain

$$ q=\frac{(1+(N-1)\epsilon)\alpha_{1} + \alpha_{2}}{c}. $$
(20)

Also from \(x_{1} + x_{2} = c\), we have

$$ x_{1} = c - \frac{\alpha_{2}}{q}. $$
(21)

From Eqs. 20 and 21, increasing 𝜖 increases \(q\) according to Eq. 20, which in turn increases \(x_{1}\) according to Eq. 21. Therefore, a higher 𝜖 results in less friendliness. □
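A quick numeric check of Eqs. 19–21 with illustrative parameters of our own choosing (N = 2, α1 = α2 = 200 packets, c = 1000 packets/s; these are not the simulation settings of Section 5) shows how the multipath flow's share grows with 𝜖.

```python
def one_bottleneck_shares(N=2, alpha1=200.0, alpha2=200.0, c=1000.0, eps=0.01):
    """Equilibrium rates on the one-bottleneck topology of Fig. 2 (Eqs. 19-21)."""
    q = ((1 + (N - 1) * eps) * alpha1 + alpha2) / c    # Eq. (20)
    x1 = (1 + (N - 1) * eps) * alpha1 / q              # Eq. (19), multipath flow
    x2 = alpha2 / q                                    # single-path Fast TCP flow
    return x1, x2

for eps in (0.01, 0.1, 0.5):
    x1, x2 = one_bottleneck_shares(eps=eps)
    print(f"eps={eps}: multipath {x1:.1f}, single-path {x2:.1f}")
```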

4.5 Stability

In this subsection, we analyze the local asymptotic stability of the equilibrium point in the case that the rank of the routing matrix \(R\) equals the number of subflows (\(R\) has full column rank). If \(R\) does not have full column rank, we prove local asymptotic stability for the special case in which a single-path flow and a multipath flow share a bottleneck link (Fig. 2). The analysis of the general case is left for future study.

Theorem 2

If the routing matrix \(R\) has full column rank (the rank of \(R\) equals the number of subflows), then the equilibrium \((\boldsymbol w, \boldsymbol p)\) described by Eqs. 14–15 is locally asymptotically stable.

Theorem 3

With the network topology given in Fig. 2, the equilibrium \((\boldsymbol w, \boldsymbol p)\) described by Eqs. 14–15 is locally asymptotically stable.

The proofs of Theorems 2 and 3 are provided in Appendix A.

5 Evaluations

We verify the performance of mFast, including fairness to single-path Fast TCP, efficiency, and responsiveness, via simulations. We use NS-2 [21] with the SACK option and a data packet size of 1000 bytes. The data rate is sampled every 2 seconds. Figure 3 describes the topologies used in the simulations. The simulations are conducted with two values of 𝜖, 0.01 and 0.1, to investigate the trade-offs between the features of mFast.

Fig. 3 Network topologies used in the simulations

5.1 TCP friendliness

We evaluate the friendliness of mFast to single-path Fast TCP over a bottleneck link using the topology described in Fig. 3a. Two single-path Fast TCP flows and an mFast flow with two subflows share a bottleneck link with bandwidth 100 Mbps and propagation delay 100 ms. We can see in Fig. 4 that, at steady state with 𝜖 = 0.01, the total throughput of the subflows of the multipath flow approximates the throughputs of the single-path flows. This means that mFast meets the third design goal of a multipath protocol. When we increase 𝜖, fairness decreases: a higher 𝜖 yields a higher throughput for the multipath flow and hence less friendliness. The value of 𝜖 thus controls the trade-off between friendliness and efficiency in mFast.

Fig. 4 Fairness of mFast

5.2 Load balancing and responsiveness

Load balancing of a multipath protocol is the ability to distribute its traffic among the available paths. It implies that the multipath protocol can pool the bandwidths of all the links that its subflows traverse. This feature also complements routing-based traffic engineering when the network condition changes temporarily. In the closed-fence topology shown in Fig. 3b, each mFast flow transmits over two separate links, so the four mFast flows share the total bandwidth of the four links (i.e., 380 Mbps). The ideal throughput of each mFast flow is therefore 95 Mbps. In this simulation, during the period from 100 s to 200 s, background traffic of 25 Mbps is transmitted on the 160 Mbps link using constant-bit-rate flows. In Fig. 5, all the multipath flows tend to fairly share the pooled links, achieving close to the ideal value of 95 Mbps, both before the background traffic is turned on and after it is turned off.

Fig. 5 Congestion balance and responsiveness of mFast

In agreement with the previous observation, a smaller 𝜖 yields better fairness than a larger one. Additionally, Fig. 5 shows that mFast responds well to changes in the network condition: all the flows adjust their rates quickly when the background traffic starts and stops.

5.3 Throughput improvement

We verify the efficiency of mFast using the topology given in Fig. 3c. A two-path multipath flow and two single-path flows share two separate links with bandwidths of 80 Mbps and 100 Mbps; the propagation delay of both links is 50 ms. We can see in Fig. 6 that the throughput of the multipath flow is higher than that of the single-path flows on both links. In addition, a higher 𝜖 yields a larger throughput improvement for the multipath flow but less friendliness, as shown in Section 5.1.

Fig. 6 mFast improves the throughput

6 Conclusions

We have proposed a multipath protocol, called mFast, which is suitable for high-BDP networks. Based on a multipath NUM model, we have derived a protocol that is fair to single-path Fast TCP. Moreover, we have shown the efficiency, responsiveness, and load-balancing capability of mFast via analysis as well as extensive simulations. Local stability for some special cases has also been proved.