1 Introduction

The Digital Video Broadcasting (DVB) Project is a consortium committed to designing standards for video and data services. The satellite service of DVB is divided between the transmission system (from providers to users) and the control channel (from users to providers). The current standards for the satellite links are DVB-S2 (broadcast channel) and DVB-RCS (return channel). In 2013, DVB validated the specifications for the second generation of the satellite return channel, DVB-RCS2. An overview of the system is given in [1], [2] details the standards for the satellite lower layers, and [3] details the higher-layer specifications. This new version for the transmission of data from home users to satellite gateways features enhanced security, improved quality of service, and support for IPv6. This enables the user terminals not only to transmit control data but also to send HTTP requests and other low-volume uplink Internet traffic.

The quality of experience for a user browsing the web or sending an email is closely linked to latency. As an example, the authors of [4] explain that Google measured that “an additional 500 ms to compute (a search) [...] resulted in a 25 % drop in the number of searches done by users.” To improve the web experience over DVB-RCS2 links, we propose to adapt the channel access strategy and reduce the transmission time of the first packets of a transmission. While the standard states that both access methods can be implemented (and that switching between random and dedicated access is possible), no insight is provided as to their potential benefits. Before starting an engineering process to design an algorithm that combines both methods, it therefore seems necessary to assess the performance of each.

Preliminary studies [5, 6] have presented the simulation tool PCA and evaluated the use of random access methods in the context of the DVB-RCS2 system. In this paper, we extend these works by providing an in-depth analysis of the utility of driving the selection of the DVB-RCS2 channel access method with transport layer (TCP) protocol information.

The remainder of the article is organized as follows. We present the different components of the DVB-RCS2 specifications in Section 2, where we also summarize the main studies assessing the impact of access methods on the performance of transport layer protocols. In Section 3, we present our NS-2 module, which enables the modeling of the DVB-RCS2 channel access by leveraging experimental data. Section 4 presents the network parameters used in this study. To compare the dedicated and random access methods, we evaluate in Section 5 the overall network performance for a scenario that includes multiple TCP sessions. We provide further insights into TCP performance with the different access methods in Section 6, where we evaluate the time needed to transmit the first packets of a TCP session. We discuss additional issues related to switching between access methods in Section 7 and conclude in Section 8.

2 State of the art

2.1 Multi-Frequency time division multiple access (MF-TDMA)

On an MF-TDMA link, the capacity is shared at the access point. The access point forwards traffic between one or more satellite gateways and the satellite terminals (home users) over the shared medium, therefore covering both uplink and downlink scenarios. We provide some definitions of the terms used in this paper to describe MF-TDMA processes:

  • Flow: data transfer at the transport layer;

  • Datagram: network layer segment of a flow;

  • Link layer data unit (LLDU): N_data bytes of a fragmented datagram;

  • Physical layer data unit (PLDU): LLDU with optional N_repair recovery bytes (N = N_data + N_repair);

  • Block: a PLDU can be split into N_block blocks if the access method requires it;

  • Frame: “time × frequency” set of blocks transmitted between gateway and users, generated every T_F;

  • Slot: element of a frame where a block can be scheduled.

The DVB-RCS2 link connects the satellite gateway and the home users (Satellite Terminals, ST): the satellite medium is shared among the users and protocols must allocate resources to each of them. The channel access methods define the way data can be transmitted from the home users (ST) to the satellite gateway over the DVB-RCS2 link. From the satellite gateway to the ST, time-division multiplexing is used, guaranteeing a sufficiently high capacity for each terminal. The return link (DVB-RCS2) is therefore the performance bottleneck of the connection in terms of delay.

The resource is distributed every T_F = 45 ms by the network control center (NCC). The NCC adapts the distribution of the available slots at each frame in order to (1) accept late-comer flows; (2) adapt the time slot reservation depending on each user's characteristics (different priorities between the users); (3) adjust the distribution of time slots depending on the network load; (4) optimize the “mod-cod” (modulation and coding) at the physical layer for dedicated access methods [1]. The NCC transmits a “burst time plan” (BTP) to the users, indicating when and how to transmit data. Several timeslots are available per frequency. A “time × frequency” block is called a “frame” [2, sec. 7.5.1.1], detailed in Fig. 1.

Fig. 1

Time × frequency block of time slots

In every time slot, an access scheme may allow terminals to transmit data on a given sub-“time × frequency” block. There are more blocks than carriers, and we refer to these blocks simply as “frequencies” in the rest of the document. We denote by N_S the number of time slots available per frequency. The frequencies on which data is transmitted can be divided depending on the access method: F_R frequencies are dedicated to the random access methods and F_D are reserved for the dedicated access methods. In total, a frame can carry N_S × (F_R + F_D) slots. The BTP specifies on which timeslots each user can transmit data [2, sec. 6.2.2.8]. The resource distribution depends on the nature of the flows, the network load, and the access methods. The standard defines two strategies for channel access:

  • Dedicated access [2, sec. 7.2.6]: timeslots are reserved for the transmission between the ST and the gateway. This induces a negotiation delay.

  • Random access [2, sec. 7.2.5]: in order to simplify and minimize the negotiation at the MAC level, there is no reservation of timeslots.

2.1.1 Capacity allocation schemes

DVB-RCS2 features capacity allocation schemes that control how the capacity is given to an ST: continuous rate assignment (CRA), rate-based dynamic capacity (RBDC), volume-based dynamic capacity (VBDC), and free capacity allocation (FCA). CRA, RBDC, and VBDC are dedicated access methods, as they are the result of specific user requests, whereas FCA is a random access method. The rationale of this paper is to assess how TCP congestion and flow control are affected by dedicated and random access methods. For the sake of simplicity, we consider the CRA and FCA capacity allocation schemes, as they are basic allocation schemes that do not increase the complexity of the simulation outputs, as RBDC and VBDC would. It is worth pointing out that RBDC and VBDC could easily be integrated into our NS-2 module presented in Section 3 and [5]. We note that considering RBDC and VBDC would result in slightly different numbers, but considering only CRA and FCA nonetheless allows us to derive qualitative conclusions on the respective impact of dedicated vs. random access methods on TCP.

2.1.2 Dedicated access with CRA

Depending on the load on the network, the NCC computes an adequate BTP for each ST that has requested satellite capacity and established the connection [2, sec. 7.2.6.3]. These methods therefore ensure a reliable transmission of data but introduce a negotiation delay of at least one RTT. The reservation ensures that capacity is fairly distributed: for example, if there are 40 slots available and 10 users, each user can transmit data on 4 slots.

2.1.3 Random access with FCA

With random access methods, in order to simplify and minimize the negotiation at the MAC level, there is no reservation of timeslots. The NCC cannot ensure that different terminals transmit data on separate blocks, so reliable transmission cannot be guaranteed. Stronger error codes are therefore introduced at the physical layer: N_repair redundancy bytes are added to the reduced N_data bytes to form a code word of N = N_data + N_repair bytes, which is split into N_block blocks. N_ra slots form a random access block (RA block) on which erasure codes are introduced [2, sec. 7.2.5.2]. Each transmitter randomly spreads its N_block blocks across the N_ra slots of the RA block for spectral diversity. In [7], the authors define guidelines to design random access methods and assess the performance of CRDSA [8]. Among the different random access methods, we can cite multi-slots coded ALOHA (MuSCA) [9], ALOHA [10], diversity slotted ALOHA [11], and contention resolution diversity slotted ALOHA (CRDSA) [8]. Random access methods are exploited not only in the context of DVB-RCS2 but also in vehicular networks [12] and sensor networks [13].
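As an illustration of this spreading mechanism, the following sketch places the burst replicas of several terminals on one RA block and counts the slots where replicas collide. The block size, the number of replicas, and the absence of interference cancellation are simplifying assumptions of this sketch, not prescriptions of the standard.

```python
# Minimal sketch (not the standard's algorithm): each terminal places N_block
# burst replicas on distinct, randomly chosen slots of an RA block. Collision
# resolution (e.g., CRDSA's interference cancellation) is intentionally omitted.
import random
from collections import Counter

N_RA = 100    # slots per RA block (value used later in this paper)
N_BLOCK = 3   # replicas per PLDU (e.g., CRDSA-3 or MuSCA-3)

def place_bursts(num_users, rng=random):
    """Return a Counter mapping slot index -> number of bursts landing on it."""
    occupancy = Counter()
    for _ in range(num_users):
        for slot in rng.sample(range(N_RA), N_BLOCK):  # distinct slots per user
            occupancy[slot] += 1
    return occupancy

if __name__ == "__main__":
    occ = place_bursts(num_users=30)
    collided = sum(1 for n in occ.values() if n > 1)
    print(f"occupied slots: {len(occ)}, slots with more than one burst: {collided}")
```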

The performance of random access methods can be described by the probability that a receiver decodes its N_data useful bytes, depending on the number of users that transmit data on an RA block.

In [2, sec. 7.2.5.1.3], the authors advise that “the ST shall by default not transmit in contention timeslots for traffic, but may do this when explicitly allowed by indication in the lower layer service descriptor or by other administrative means,” making random access methods used mostly for the login procedure [2, sec. 9.2.3] and only optionally for traffic. Moreover, in [2, sec. 6.2.2.8], the specifications define that the “terminal burst time plan table version 2” (TBTP2) “may be used to assign dedicated access timeslots, […] allocate timeslots for random access,” and indicate in the BTP the access method used for each timeslot. The specifications thus allow both random and dedicated access methods to be introduced without detailing in which proportion the timeslots of the frames must be divided between them, even though [2, sec. 7.2.5.1.3] highlights the preference for dedicated access methods.

2.2 Existing studies on the impact of DVB-RCS2 channel access methods on TCP

2.2.1 TCP and dedicated access

In [14], the authors analyze the interactions between the TCP and DVB-RCS control loops by exploring the performance of TCP over the different capacity allocation categories defined in the DVB-RCS standard. They show that the performance of access schemes is strongly linked to traffic characteristics such as the size of the flow or the required QoS. In [15], the authors highlight that, in the context of demand assigned multiple access (DAMA), delay variability severely impacts the performance of TCP. They run simulations with NS-2 and analyze the MAC–TCP interactions to improve the performance of TCP New Reno, then propose a cross-layer technique based on queue sizes at the MAC layer. Their proposal is adapted neither to short flows nor to the DVB context, and it focuses only on dedicated access methods. In [16], the author analyzes the performance of competing TCP flows using different return channel satellite terminals (RCSTs) and competing for the DVB-RCS return link through DAMA mechanisms. They consider an emulated network and observe relatively poor performance, but they only focus on the dedicated access method.

The studies presented above [14–16] focus on dedicated access methods and highlight the difficulties encountered by TCP when transmitting data on the return link. They show the negative impact of the queuing delay introduced by access methods, without comparing their results to the case where random access methods are applied.

2.2.2 TCP and random access

In a mobile context (mobile cars and satellite links), the authors of [17] show that, when users act as senders, random access methods are not suitable; however, depending on the size of the transmitted file, there is a certain interest in dedicating more timeslots to random access methods when the users act as receivers. Their results cannot be leveraged in our specific context, because their model targets mobile satellite links and the capacities and access strategies considered do not apply. However, following this idea, the authors of [18] highlight a possible advantage of introducing more random access methods in DVB-RCS2. More recently, the authors of [19] assess the issues encountered by TCP over CRDSA in the context of DVB-RCS2. However, they do not conduct extensive simulations, either in terms of the traffic considered or of the channel access strategies, which makes their study insufficient to properly determine the benefits of using random schemes instead of dedicated schemes.

In [20], the authors evaluate how the loss events induced by random access methods impact TCP performance when TCP NewReno is used to carry sustained-rate traffic. They conclude that there is an interest in using random access schemes for small flows, which is an interesting input for cross-validating our results. However, this work is not entirely in the scope of our paper, as (1) we intend to evaluate the impact of the various access methods (dedicated and random) on the internal parameters of TCP, such as the evolution of its congestion window, and (2) the metrics that they provide hardly let us assess the latency for short flows.

The analysis of the poor performance of TCP over highly variable delays is relevant [14], and reducing the connection establishment delay with random access methods might be of interest [18–20]. However, existing studies do not provide a definitive answer as to whether introducing random access methods to carry data traffic is beneficial. As far as we know, there is a lack of relevant comparisons of the various channel access strategies in the context of DVB-RCS2.

2.3 Simulating DVB-RCS2

We now survey available tools to simulate DVB-RCS2.

As discussed in [21], emulation can be expensive, not only because of the technology involved but also because of the “man-in-the-loop” manipulations and synchronization. Additionally, new satellite transmission schemes may require support that is not yet validated or implemented. Since we assess experimental access methods, we prefer running parameterizable simulations to experimenting on less flexible real deployments of DVB-RCS2 networks. A few models of the DVB-S2/RCS satellite network in NS-2 have already been proposed [22, 23]. These NS-2 modules attempt to be as close as possible to the real system. While accurate, they are not flexible enough for our study, as we want to integrate specific inputs, such as the performance of experimental random access methods or other internal parameters of the NCC component. For this purpose, we implemented a specific module that emulates channel access on top of experimental physical channel access (PCA) traces.

3 Physical channel access (PCA)

In Fig. 2, we compare the enque() and deque() methods of DropTail and DropTail/PCA. With DropTail, when the enque() method adds packet P_N+1, it is added at the end of the sending buffer and transmitted after P_1,…,P_N have been transmitted with the deque() method. With DropTail/PCA, when a packet is enque()ed, it is also added to the sending buffer. However, depending on the access method introduced, only a subset of the datagram is considered sent with each frame. Each subset of a datagram is transmitted with the same access method, and when the last byte of a datagram has been transmitted, deque(), which is called every T_F, removes the packet from the sending buffer and passes it along. More details on the implementation of PCA can be found in [5].
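The following sketch summarizes this behavior; it is an illustrative simplification of the PCA logic, not the actual NS-2 code, and the class name, units, and fixed per-frame allocation are assumptions made for the example.

```python
# Illustrative sketch of the PCA idea: packets are enqueued normally, but on
# each frame (every T_F) only the bits granted by the access method are drained;
# a packet leaves the queue once it has been fully "transmitted".
from collections import deque

class PcaQueueSketch:
    def __init__(self, bits_per_frame):
        self.bits_per_frame = bits_per_frame   # allocation decided by the access method
        self.buffer = deque()                  # entries: [packet_id, remaining_bits]

    def enque(self, packet_id, size_bits):
        self.buffer.append([packet_id, size_bits])

    def deque_on_frame(self):
        """Called every T_F: drain up to bits_per_frame and return fully sent packets."""
        budget = self.bits_per_frame
        delivered = []
        while self.buffer and budget > 0:
            packet = self.buffer[0]
            consumed = min(packet[1], budget)
            packet[1] -= consumed
            budget -= consumed
            if packet[1] == 0:
                delivered.append(self.buffer.popleft()[0])
        return delivered

if __name__ == "__main__":
    q = PcaQueueSketch(bits_per_frame=7722)  # e.g., the MuSCA allocation of Section 5.1
    q.enque("P1", 12000)                     # one 1500-byte datagram
    for frame in range(1, 4):
        print(f"frame {frame}: delivered {q.deque_on_frame()}")
```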

Fig. 2

enque() and deque() methods: capacity allocation with DropTail and PCA

4 Simulation parameters

4.1 Framing parameters

The following parameters have to be specified prior to starting a simulation:

  • cutConnect_: time after which the connection between the gateway and the user is closed (in seconds);

  • esN0_: signal-to-noise ratio of the channel (in dB);

  • switchAleaDet_: sequence number at which the access method switches from random to dedicated (see Section 7);

  • frameDuration_: duration of a frame (T_F);

  • nbSlotPerFreq_: number of time slots per frequency (N_S);

  • sizeSlotRandom_: useful number of bits that can be sent on one RA block (i.e., where random access methods are introduced) (N_data);

  • sizeSlotDeter_: useful number of bits for each time slot where dedicated access methods are introduced (N_data);

  • rtt_: two-way link delay (in seconds);

  • freqRandom_: number of frequencies used for random access (F_R);

  • nbFreqPerRand_: number of frequencies comprised in an RA block ((F_R × N_S)/N_ra);

  • freqDeter_: number of frequencies used for dedicated access (F_D);

  • maxThroughtput_: maximum authorized throughput for one given flow (in Mbps);

  • nbSlotRndFreqGroup_: number of blocks a PLDU is split into for distribution in one RA block (N_block);

  • boolAntennaLimit_: boolean indicating whether one transmitter has one (False) or F_R + F_D (True) single terminal amplifiers (see Section 4.2).

4.2 Single terminal amplifier limitation

In DVB-RCS2, the outdoor unit includes a unique RF power amplifier: therefore, terminals cannot send data on different frequencies at once. This limitation imposes a maximum number of slots that a user is allowed to occupy in each frame. The standard also limits the hopping distance while switching frequencies; this is neglected in our model.

As an example, if N_S = 40 slots (a short sketch follows this list):

  • with a dedicated access, a single user can use N_S = 40 slots;

  • with a random access, N_block = 3 and N_ra = 40: a single user can use ⌊N_S/N_block⌋ = 13 slots.
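A minimal sketch of this limitation with the example values above; the variable names are ours.

```python
# Single terminal amplifier limitation: a terminal cannot transmit on several
# frequencies at once, so it is restricted to the slots of one frequency.
N_S = 40       # slots per frequency
N_BLOCK = 3    # blocks per PLDU with random access (e.g., CRDSA-3 or MuSCA-3)

max_slots_dedicated = N_S             # 40 slots per frame
max_slots_random = N_S // N_BLOCK     # floor(40 / 3) = 13 slots per frame

print(max_slots_dedicated, max_slots_random)   # -> 40 13
```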

4.3 Parameters

We consider a frame of 45 ms covering 100 frequencies (i.e., comprising 100 × 40 slots). Furthermore, 100 slots are grouped into a random access (RA) block, which therefore spans 2.5 frequencies. We base the choice of parameters on the specifications defined in [2] and present them in Table 1. We provide more details about sizeSlotRandom_ and sizeSlotDeter_ in Section 4.4.

Table 1 Use case simulation parameters
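For reference, the frame-level arithmetic implied by these parameters can be summarized as follows; the values are taken from the text above, and the derived quantities are straightforward consequences rather than additional specifications.

```python
# Framing parameters used in this study (values from the text above).
frame_duration_s = 0.045        # T_F
n_frequencies = 100             # F_R + F_D
n_slots_per_freq = 40           # N_S
ra_block_slots = 100            # slots grouped into one RA block (N_ra)

slots_per_frame = n_frequencies * n_slots_per_freq        # 100 x 40 = 4000 slots
ra_block_freqs = ra_block_slots / n_slots_per_freq        # 100 / 40 = 2.5 frequencies
ra_blocks_per_frame = slots_per_frame // ra_block_slots   # 4000 / 100 = 40 RA blocks

print(slots_per_frame, ra_block_freqs, ra_blocks_per_frame)   # -> 4000 2.5 40
```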

4.4 Access methods

4.4.1 Dedicated access

Each slot of length 1.09 ms carries 536 symbols. In the simulations, we consider a “clear sky” scenario with a signal-to-noise ratio equal to 8.6 dB. We assume that the users apply a code of rate R = 2/3 combined with 8PSK modulation to encode a packet of 920 information bits into a codeword of 1380 bits, i.e., 460 symbols. Due to the encapsulation at the physical layer, the physical layer data unit is increased to 536 symbols.

4.4.2 Random access

Users connecting to the satellite with random access generally use an operating point lower than for the dedicated access. In the rest of this article, we take a margin of 3.5 dB. Thus, for the “clear sky” scenario, we consider that (E_s/N_0)_random = (E_s/N_0)_dedicated − 3.5 = 8.5 − 3.5 = 5 dB. In systems using CRDSA at 5 dB, each user can apply error-correcting codes of rate R_CRDSA = 2/3, associated with QPSK, to encode a packet of 613 bits (597 information bits and 16 header bits) into a codeword of 920 bits, i.e., 460 symbols. The error-correcting code used is a turbo code. As detailed in [8], N_block bursts of length about 530 symbols are then created. The number of generated bursts depends on the version of CRDSA. In this paper, we study the performance of regular CRDSA-3 (N_block = 3). The N_block bursts are transmitted randomly into N_block slots of an RA block.

With MuSCA [9, 24] as the random access method, users encode a packet of 680 bits (594 information bits and 86 header bits) with a turbo code of rate 1/4 associated with QPSK modulation to create codewords of 1380 symbols. The codeword is split into N_block = 3 parts to generate N_block = 3 bursts sent on time slots of the same RA block.
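As a sanity check, the burst sizes quoted above for dedicated access and for CRDSA follow directly from the code rate and the modulation order; the small helper below reproduces this arithmetic (MuSCA is omitted because its header and encapsulation overheads are not fully detailed here).

```python
# Back-of-the-envelope check of the codeword sizes quoted in the text.
def n_symbols(codeword_bits, bits_per_symbol):
    return codeword_bits / bits_per_symbol

# Dedicated access: 920 information bits, code rate 2/3, 8PSK (3 bits/symbol).
dedicated_codeword_bits = 920 / (2 / 3)        # 1380 bits
print(n_symbols(dedicated_codeword_bits, 3))   # -> 460.0 symbols

# CRDSA at 5 dB: 613-bit packet, code rate 2/3, QPSK (2 bits/symbol).
print(n_symbols(920, 2))                       # 920-bit codeword -> 460.0 symbols
```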

Figure 3 depicts the performance in terms of packet loss ratio (PLR) depending on the number of packets transmitted per RA block by CRDSA-3 and MuSCA-3.

Fig. 3

Packet loss rate and number of packets sent on a RA block of 100 slots with CRDSA and MuSCA at 5 dB

4.5 Topology, traffic, and hypothesis

We consider two nodes in NS-2. The first node represents the set of ST and the second node acts as the satellite gateway. The PCA module presented in Section 3 is introduced on the link from the ST to the satellite gateway. We present in Table 2 the different traffic types considered in the following sections. The size of the IP datagrams is 1500 bytes, and the queue at the transmitter is large enough to prevent overflow.

Table 2 Traffic generation

Various TCP options could have been considered, as they might have an impact on the resulting traffic load on the return link. As an example, web browsing results in up to 6 TCP flows for a single page, and the acknowledgments that would be sent from the home user to the server would be synchronized for the first objects downloaded. This synchronization would result in a momentarily high traffic load on the return link that might be hard to handle and induce loss of acknowledgments. In this context, delayed acknowledgments or TCP pacing would break that synchronization and reduce the potential loss of crucial acknowledgments. We do not consider these options in the rest of the paper and use default implementations of TCP. We note, however, that improvements at the transport layer may result in slight performance variations. We also note that considering a single access point is a simplification, but it does not overly affect the qualitative results derived from our simulations. Finally, we did not dive into the lower layer parameterization, as we are interested in a high-level view of the performance at the transport layer.

5 Evaluating the performance of random access methods for data traffic

In this section, we assess the benefits of utilizing random access methods in DVB-RCS2 for transmitting data.

5.1 Experimental scenario

We consider the parameters detailed in Table 1, i.e., there are (F_D + F_R) × N_S = 100 × 40 = 4,000 slots per frame. We also introduce the single terminal amplifier limitation detailed in Section 4.2, which limits the maximum number of slots per TCP session to 13 for random access methods and 40 for dedicated access methods. Moreover, the capacity is fairly shared between the N_U ST using a dedicated access method; therefore, the maximum number of slots per TCP session in this case is \(\min ((F_{D}+F_{R}) \times N_{S} / N_{U},N_{S})=N_{S} \times \min ((F_{D}+F_{R})/N_{U},1)\).

When N_U ≤ (F_D + F_R) (i.e., with our parameters, when N_U ≤ 100), a TCP session can transmit ⌊N_S/N_block⌋ × N_data = (40/1) × 920 = 36,800 bits per frame with a dedicated access, or ⌊40/3⌋ × 594 = 7,722 bits with MuSCA as random access. With dedicated access methods, each TCP session can transmit more data per frame when the load is low. When the load increases: (1) with dedicated access methods, the goodput of each TCP session decreases and (2) with random access methods, the goodput of each TCP session remains the same, but there might be collisions at the MAC layer, resulting in a lower goodput.
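The per-frame figures above can be reproduced with the following sketch; the values are taken from the text and the variable names are ours.

```python
# Per-frame goodput of one TCP session in the low-load regime (N_U <= F_R + F_D).
N_S = 40                  # slots per frequency
N_DATA_DEDICATED = 920    # useful bits per slot with dedicated access
N_DATA_MUSCA = 594        # useful bits per slot with MuSCA
N_BLOCK_DEDICATED = 1     # no splitting with dedicated access
N_BLOCK_RANDOM = 3        # MuSCA-3 splits each PLDU into 3 blocks

bits_dedicated = (N_S // N_BLOCK_DEDICATED) * N_DATA_DEDICATED   # 40 x 920 = 36,800 bits
bits_random = (N_S // N_BLOCK_RANDOM) * N_DATA_MUSCA             # 13 x 594 = 7,722 bits

print(bits_dedicated, bits_random)   # -> 36800 7722
```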

Considering phenomena (1) and (2), we now evaluate which access scheme allows each TCP session to achieve a better goodput when the load increases. We consider traffic case A of Table 2: we measure the range of network loads for which random access methods are more suitable than dedicated access methods to transmit data.

5.2 Throughput and datagram loss rate

We show in Fig. 4 the average number of datagrams sent per TCP session. Additionally, Fig. 5 gives the error probability, computed from the number of datagrams dropped at the gateway level and the number of datagrams successfully transmitted. Finally, to better assess the impact of the datagram errors on the transmission efficiency, we show in Fig. 6 the efficiency, defined as the ratio of the average number of transmitted datagrams to the maximum number of datagrams that could have been transmitted without errors.

Fig. 4

Average number of datagrams sent per TCP session in 20 s

Fig. 5

Average number of datagrams lost per TCP session in 20 s

Fig. 6

Transmission efficiency: the ratio of the average number of datagrams transmitted to the maximum number of datagrams that can be transmitted in 20 s per FTP session

From Fig. 5, we see that datagram errors appear from N = 100 for CRDSA and N = 300 for MuSCA. This results in a transmission efficiency (the ratio of the average number of datagrams transmitted to the maximum number of datagrams that can be transmitted in 20 s per FTP session) that decreases when the number of FTP sessions increases, as illustrated in Fig. 6. However, Fig. 4 illustrates that when N ≤ 300, one TCP session transmits more datagrams with dedicated access methods than with any random access method. Moreover, when the load increases, the datagram errors increase accordingly with random access methods and, as a result, the average number of datagrams sent per TCP session decreases. As an example, with MuSCA as access method, one TCP session transmits on average 50 datagrams, whereas with dedicated access, one TCP session transmits on average more than 350 datagrams.

5.3 Discussion

We observed in our NS-2 simulations that, even though the throughput of each TCP session decreases when the load of the network increases, dedicated access methods enable the transmission of more data than random access schemes, which lose a large number of datagrams: phenomenon (1) of Section 5.1 is not as harmful as phenomenon (2).

Indeed, we earlier derived that the maximum number of slots available per frame per TCP session is defined by \(N_{S} \times \min ((F_{D}+F_{R})/N_{U},1)\). When N_U ≥ (F_R + F_D), a TCP session can transmit \((F_{R}+F_{D}) \times N_{S}/N_{U} \times N_{data}^{DE}\) bits per frame with a dedicated access, and \(\lfloor N_{S}/N_{block} \rfloor \times N_{data}^{RA}\) with MuSCA as a random access method. For a random access method to become preferable, the number of users \(N_{U}^{\prime} \geq (F_{R}+F_{D})\) from which it allows more data to be transmitted should verify Eq. 2.

$$ (F_{R}+F_{D}) \times N_{S}/N_{U}^{\prime} \times N_{data}^{DE} \leq \lfloor N_{S}/N_{block} \rfloor \times N_{data}^{RA} $$
(1)
$$ N_{U}^{\prime} \geq \frac{(F_{R}+F_{D}) \times N_{S} \times N_{data}^{DE}}{\lfloor N_{S}/N_{block} \rfloor \times N_{data}^{RA}} $$
(2)
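Applying Eq. 2 with the parameters of this paper gives the numerical threshold discussed below; the sketch is only a transcription of the formula, and the rounding choices are ours.

```python
# Numerical application of Eq. (2) with the parameters of this study.
import math

F_TOTAL = 100        # F_R + F_D
N_S = 40             # slots per frequency
N_BLOCK = 3          # MuSCA-3
N_DATA_DE = 920      # useful bits per dedicated slot
N_DATA_RA = 594      # useful bits per slot with MuSCA

# Smallest number of users N_U' from which random access would send more data per frame.
n_u_prime = math.ceil(F_TOTAL * N_S * N_DATA_DE / ((N_S // N_BLOCK) * N_DATA_RA))

# These users would be spread over the 40 RA blocks, using 13 slots each (Section 4.2).
users_per_ra_block = n_u_prime * 13 / 40

print(n_u_prime, round(users_per_ra_block))   # -> 477 155
```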

Figure 7 illustrates the performance that a random access method should offer to be more efficient than dedicated access methods when the load on the network is high. We denote by N_MaxRA the number of users from which the random access method starts to introduce errors (i.e., N_MaxRA = 300 with MuSCA).

Fig. 7

Illustration of \(N_{U}^{\prime }\)

With our parameters, applying (2), the minimal desirable \(N_{U}^{\prime}\) is such that \(N_{U}^{\prime} \geq 477\). With MuSCA as a random access method, these \(N_{U}^{\prime}\) users would be equally spread among the 40 RA blocks. There would be \(N_{U}^{\prime} \times 13/40 = 477 \times 13/40 \approx 155\) users per RA block, which MuSCA cannot carry, as illustrated in Fig. 3.

As far as we know, there is no random access method that can carry traffic and verify (2). However, this equation can help to assess the load from which it is interesting to transmit some data with random access methods.

Both NS-2 simulations and mathematical expressions illustrate that the transmission of data is more efficient with dedicated access methods, as random access methods allow less data to be transmitted in a given frame and errors might occur. However, the next section looks more closely at the detailed performance of one given flow in order to explain the rationale for introducing random access methods to carry short data flows.

6 Transmission times of short flows

Section 5 showed that data transmission is more efficient with dedicated access methods. However, considering that (1) there is a connection delay introduced by dedicated access methods and (2) there is a large proportion of short flows in the Internet (measured in [25, 26]), we now study the benefits that random access methods can provide in terms of transmission delay for short flows when there are no errors (we focus on cases where N < N_MaxRA in Fig. 7).

6.1 TCP sessions

We consider the joint generation of traffic cases A and B of Table 2. In Fig. 8, we plot the evolution of the TCP segment sequence for one typical flow decoded at the receiver side with 150 TCP sessions. We consider flows that do not lose datagrams when random access methods are introduced. We can see the progression of the TCP congestion window in the slow start phase, with CWND and RTT shown in the figure. Overall, this figure illustrates that the RTT needed to establish the connection with dedicated access delays the transmission of the first datagrams. We can also see that the time needed to transmit two datagrams is smaller with dedicated access (denoted T2) than with random access (denoted T1). As a result, with dedicated access, the progression of the congestion window is faster, but starts later.

Fig. 8

Evolution of TCP segment sequence number reception

The amount of useful bits that can be sent on each slot is 597 with CRDSA and 594 with MuSCA: when there are no losses, which is the case in the results presented in Fig. 8, the performance for a given flow is similar with both random access methods.

We present the time needed to transmit 30 kB (traffic case B of Table 2) in Table 3. When there are 200 TCP sessions, both CRDSA and MuSCA transmit the 30 kB 90 ms faster than dedicated access. We confirm that with more TCP sessions, the time needed to transmit 30 kB with dedicated access methods is larger, i.e., resulting in a lower throughput for each user. Conversely, the transmission of those 20 datagrams is faster with random access methods (when there are no errors). We propose in the next section to validate this interpretation by considering a more realistic traffic model.

Table 3 Transmission times of 30 kB

6.2 HTTP traffic with Packmime

Packmime [27] is an NS-2 module that models HTTP traffic. It is controlled by a rate parameter, i.e., the average number of new connections that start each second. This module enables us to model clients (ST) that send requests to the servers (satellite gateway). We consider the traffic case D of Table 2.

We let Packmime generate 2000 requests and observe the flows' performance with dedicated and random access methods (the performance of MuSCA and CRDSA is identical). The size of the requests generated by the clients is within [150; 650] bytes and the rate is set to 500. We define the transmission time as the time between the transmission of the first request (SYN/SYN ACK of 40 bytes) from the client and the reception of the last one at the server. The results are summarized in Table 4, which shows the minimum, median, and maximum request transmission times.

Table 4 HTTP request transmission times

This confirms that the transmission of short requests is faster with random access methods.

6.3 Short flows with errors

The conclusions of the previous section must be adjusted with evaluations of the impact of the network load (and resulting collisions) on the transmission time. We now consider scenarios where the network load is too high for random access methods to transmit data without errors (N > N_MaxRA). We measure the maximum transmission times of short flows when error events occur. The goal of this section is also to determine the minimum delay introduced by retransmissions, depending on the number of datagrams and the state of the network.

We consider that one user transmits D datagrams. We denote by P_err(N) the probability to lose a datagram with N users (from Fig. 5), and by Δ = {d_1; d_2; …; d_D} the set of datagrams of one flow. \(P(r_{d_{i}}=R)\) is the probability that the datagram d_i is retransmitted R times and is determined by (4).

$$ \forall R \in \mathbb{N} , \forall N \in \mathbb{N} , \forall d_{i} \in {\Delta} , $$
(3)
$$ P(r_{d_{i}} = R) = (1-P_{err}(N)) \times P_{err}(N)^{R} $$
(4)

We determine the probability of having at least \(\hat{R}\) retransmissions for one datagram. As one retransmission increases the delay by at least one RTT, we propose a simple lower bound of the time needed to transmit a flow composed of several datagrams. As illustrated in Fig. 9, we focus on the first datagrams of one TCP session and do not consider the congestion avoidance phase. We compute a lower bound for the supplementary delay introduced by loss events.

$$ P(\hat{R})=\sum_{k=0}^{D-1} \binom{D}{k} P(r<R)^{k} \times P(r=R)^{D-k} $$
(5)
$$ P(\hat{R})=\sum_{k=0}^{D-1} \binom{D}{k} \left( \sum_{i=0}^{R-1} P(r=i) \right)^{k} \times P(r=R)^{D-k} $$
(6)
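Equations (4) to (6) can be evaluated numerically as follows; the sketch is a direct transcription, and the error probability used in the example is a placeholder, the actual values being read from Fig. 5.

```python
# Direct transcription of Eqs. (4)-(6); P_err must be taken from Fig. 5.
from math import comb

def p_retx(R, p_err):
    """Eq. (4): probability that a given datagram is retransmitted exactly R times."""
    return (1 - p_err) * p_err ** R

def p_at_least_one_needs(D, R, p_err):
    """Eqs. (5)-(6): probability that at least one of D datagrams is retransmitted R times."""
    p_less = sum(p_retx(i, p_err) for i in range(R))   # P(r < R)
    p_eq = p_retx(R, p_err)                            # P(r = R)
    return sum(comb(D, k) * p_less ** k * p_eq ** (D - k) for k in range(D))

if __name__ == "__main__":
    # Placeholder example: probability that at least one of 3 datagrams needs a
    # retransmission when the per-datagram loss probability is 10 %.
    print(p_at_least_one_needs(D=3, R=1, p_err=0.1))
```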
Fig. 9

Datagrams errors and short flows

Using (4) and (6), we derive the numerical evaluations of Table 5, based on the error probabilities (and resulting delays) for different numbers of users taken from Fig. 5.

Table 5 Retransmission probabilities

When N ≥ N_MaxRA, the probability of collision events, inducing retransmissions, cannot be neglected. During the transmission of three datagrams, the probability for one of them to be lost can be up to 22 % with CRDSA and 300 TCP sessions, adding at least 500 ms to the transmission time. We showed in Fig. 8 that the transmission of three datagrams without loss is faster with random access methods: when a loss event occurs, the whole benefit provided by random access methods is lost. We jointly generated traffic cases A and C of Table 2. The starting time is different from case B, since we consider that there might be losses and retransmissions in this section, and we did not want to bias the results by limiting the maximum transmission time. When there are 200 TCP sessions (resp. 300 TCP sessions), with CRDSA (resp. MuSCA) as random access method, over 250 runs, the maximum transmission time is 4.325 s (resp. 4.145 s).

7 Discussion: mixing random and dedicated access methods

We verified in Section 6 that the transmission of the first packets of a flow is indeed faster with random access methods under low network loads (i.e., N < N_MaxRA). These results are consistent with those presented in [20], where the performance of TCP over DVB-RCS2 with random access schemes is assessed. We cannot compare our numerical values, as we considered different metrics and they only look at normalized TCP goodput, but the qualitative trend is the same.

Figure 9 shows that, depending on the traffic load, the benefits provided by random access vary. When the load increases, the capacity is shared among the users with dedicated access and the time needed to transmit a given amount of datagrams increases. We quantified the situations where datagram errors introduce a delay that cancels out the benefits that random access methods may provide.

We show, in Fig. 10, where a switch from random to dedicated access could be introduced to improve the transmission of both short and long flows. The idea is to enable a faster transmission of the first packets of the TCP sessions by adapting the choice of the access method. To speed up the transmission of the first datagrams, they must be transmitted with random access methods. This should, however, be done only when the network load is low enough, that is, in the congestion range for which random access methods can guarantee the error-free delivery (or recovery) of the transmitted packets. Obviously, one issue would be to predict the traffic load for the next frame, but we believe that in 45 ms (a superframe duration) the traffic does not have time to change drastically. In order to guarantee room for flows transmitting over the random access method, the traffic over the dedicated access methods could be shaped so that the peaks of incoming traffic are reduced; this can be achieved with techniques similar to those presented in [28]. The results presented in this article let us argue that, for a given TCP session, starting with random access before switching to dedicated access can be beneficial. This would improve the end-user experience of the service as well as the spectrum efficiency, by allowing the first data packets to be sent earlier on RA blocks rather than waiting for a dedicated reservation. This is in line with [29], which presents analytical derivations of the benefits of such switching for a given application; these benefits require further validation through simulations. This idea is also mentioned in IP over Satellite (IPoS) standards. We argue for the integration of a similar strategy in the current DVB-RCS2 standards.
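A possible realization of this switch, mirroring the switchAleaDet_ parameter of Section 4.1, is sketched below. The sequence threshold, the load estimate, and the reuse of N_MaxRA as an error-free load limit are assumptions chosen for illustration; they are not part of the DVB-RCS2 specification.

```python
# Hedged sketch of the proposed switch: the first segments of a TCP session use
# random access (to avoid the reservation delay), later segments use dedicated
# access, and random access is bypassed altogether when the estimated load
# exceeds the error-free region observed in Figs. 3 and 7.
SWITCH_SEQ = 3        # segments sent on random access before switching (illustrative)
N_MAX_RA = 300        # load above which random access starts losing datagrams (MuSCA)

def pick_access(segment_index, estimated_load):
    """Return the access method to use for a given segment of a session."""
    if segment_index < SWITCH_SEQ and estimated_load < N_MAX_RA:
        return "random"
    return "dedicated"

if __name__ == "__main__":
    for seq in range(5):
        print(seq, pick_access(seq, estimated_load=150))
```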

Fig. 10

Switch from random to dedicated access

8 Conclusion

The current DVB-RCS2 standards do not mandate a specific channel access strategy for the return satellite link (transmission of data from home users to satellite gateways). In this article, we compared the impact of both dedicated and random access methods on the performance of TCP. We note that our simulations used a few simplifying assumptions with respect to the lower layer schemes and the topology. While this might impact the accuracy of the quantitative results, the qualitative trends are clear. We presented an NS-2 module, PCA, that emulates the access to the DVB-RCS2 return link. PCA can be directly reused for any other scenario where satellite channel access methods are involved. We showed that the transmission of data is more efficient with dedicated access methods, as random access methods allow less data to be transmitted per frame and errors might occur. However, we also found that the transmission of short flows is faster with random access methods. Service providers can benefit from reduced page loading times: for example, Amazon found that its revenue increased by 1 % for every 100 ms reduction in page loading time [30]. We believe that, in order to provide a good web experience, random access methods and a switch between both schemes should be considered as part of the DVB-RCS2 specification effort.