1 Introduction

Free space optics (FSO) has advantages over radio frequencies (RF) for inter-satellite links (ISL) because optics allows high bit rates to be achieved [8]. In addition, optical transmission through space is not affected by atmospheric factors such as fog, dust, sand, and heat, which can easily cause significant degradation or even disruption of terrestrial FSO links. However, the factors that do affect transmission over the long distances of ISLs are signal attenuation, the power budget and the sensitivity of the receiver.

Satellite-based networking is developing quickly in order to provide diverse services for space stations, satellite constellations and Earth data networks. Satellite constellations extend the reach of limited ground-space communications [18], providing higher overall network capacity and enabling the use of free space optics. The development of network protocols that can satisfy the high demands of ISL requires new designs in several layers of the ISO/OSI Reference Model [49] and of the TCP/IP model as well.

The traffic transmitted over inter-satellite links is constrained by the long distances between satellites and, as a consequence, by long propagation delays and poor channel performance, resulting in high bit error rates (BER). BER measurements are useful to quantify the degradation of the transmission. However, the BER does not clearly indicate how the goodput is affected [27]; i.e., the relationship between the bit error rate and the amount of useful data successfully received per unit time at the application layer is not trivial [34]. In [34], the authors find that, in this free space optical communication scenario, a linear relationship is useful to investigate the effects of BER on the packet error rate (PER); it was derived under a worst-case assumption.

In this work we take the amount of data successfully transmitted and useful to the application layer as the goal, rather than simply increasing the bit rate. The TCP protocol plays an important role in retransmitting segments of data received with errors and hence is included in this analysis. The system under investigation consists of end-to-end optical communications between satellites at varying distances. TCP flows are used to generate traffic at 1 Gbps and 4 Gbps using the SSFNet simulator [12]. Parallel transmission using WDM is also simulated to estimate the TCP performance of four optical links, each at a 1 Gbps rate, giving an aggregated rate of 4 Gbps.

Multilayer analyses have been presented in previous work, but the analysis shown here is new in the context of FSO and in considering a BER/PER relationship for TCP flows [26]. To the best of the authors’ knowledge, this work is the first of its kind to propose such a multidisciplinary approach, and this is one of the main contributions of this paper.

A cross-layer analysis is presented in order to understand fundamental problems with the long delays and high bit error rates incurred in the inter-satellite communications. The use of multiple wavelengths in WDM to overcome these limitations is also presented in this paper. In addition, the use of the window scale option (WSO) for TCP is evaluated by simulation.

This paper is organized as follows. In Sect. 2, an overview of satellite communications and the use of TCP in ISL are presented. In Sect. 3 the concept of WDM, wavelength striping and the problems faced by TCP in long fat pipes are described. Section 4 shows the methodology used to calculate the BER and the cross-layer analysis in order to calculate the TCP performance. In Sect. 5, the performance of TCP and BER using direct and coherent optical detection are assessed through simulation. The results of the TCP performance with 1 Gbps, 4 Gbps and 4 × 1 Gbps WDM are presented in Sect. 6. Finally, in Sect. 7, conclusions are given.

2 Related work

Arthur C. Clarke’s 1945 paper in Wireless World [11] introduced the idea of a constellation of three geostationary satellites providing full equatorial coverage of the Earth, using the geostationary earth orbit (GEO). At present, GEO systems approximate Clarke’s idea and bring to the world a data communications infrastructure for both developing countries and the more technically advanced services of developed countries. Low-earth-orbit (LEO) and medium-earth-orbit (MEO) satellite constellations, using orbits lower than the geostationary orbit, have been developed, as have highly elliptical orbit (HEO) constellations. These give full global or targeted coverage of the Earth [47]. Communication between different orbits, usually referred to as a multi-layer satellite network, has also been studied; in this work we call it a multi-orbit satellite network to avoid confusion with the ISO/OSI multi-layer analysis presented in this paper. The multi-orbit satellite network (MOSN), which is more complex than a single-orbit network and has longer distances between satellites, has been an important research field in satellite communications since the 1990s. Kimura proposed a double-orbit satellite constellation made up of LEO and MEO satellites [30]. Li et al. and Akyildiz each designed multi-orbit satellite networks with different ISL spacing topologies and studied the routing problem in MOSN [2, 30, 32]. The long round trip times (RTT) observed in MOSN due to the long distances between satellites are significant and require in-depth analysis [43]. Examples of distances between satellites are shown in Fig. 1.

Fig. 1 Examples of FSO links and ISL distances representing long RTTs for TCP communications

In [2], Akyildiz points out the communication problems that TCP faces arising from the RTT and from the congestion window, which represents the amount of unacknowledged data that the sender can have in transit to the receiver [3, 23, 46]. Latency affects TCP adversely since the growth of the window size depends on the RTT between data segments and their acknowledgments. Other work proposes several TCP variants to improve performance in satellite communications [1, 3, 4, 19].

TCP congestion management is composed of two important algorithms. The slow-start and congestion avoidance algorithms allow TCP to increase the data transmission rate without overwhelming the network. They use a variable called the congestion window (CWND), which is the size of the sliding window used by the sender. TCP cannot inject more than CWND segments of unacknowledged data into the network. The general behavior of TCP’s algorithms is referred to as Additive Increase Multiplicative Decrease (AIMD). For example, the TCP Reno variant increases the congestion window by one packet per window of data acknowledged, and halves the window for every window of data containing a packet drop [6, 20, 23, 24].
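As an illustration of AIMD, the following sketch reproduces Reno-style window updates in units of MSS-sized segments; ACK handling, fast retransmit and timeouts are abstracted away, and the threshold value is illustrative only.

```python
# Minimal sketch of TCP Reno-style AIMD congestion window updates (segments).
class RenoWindow:
    def __init__(self, ssthresh_segments=45):        # ~65 KB / 1460 B MSS (illustrative)
        self.cwnd = 1.0                              # congestion window [segments]
        self.ssthresh = float(ssthresh_segments)     # slow-start threshold

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0                         # slow start: +1 per ACK (exponential per RTT)
        else:
            self.cwnd += 1.0 / self.cwnd             # congestion avoidance: +1 per RTT (additive increase)

    def on_loss(self):
        self.ssthresh = max(self.cwnd / 2.0, 2.0)    # multiplicative decrease
        self.cwnd = self.ssthresh                    # fast recovery (a timeout would reset cwnd to 1)
```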

HighSpeed TCP for Large Congestion Windows was introduced by Floyd [17] as a modification of TCP’s congestion control mechanism to improve the performance of connections with large congestion windows. HighSpeed TCP modifies the response function, introducing a new relation between the average congestion window and the steady-state packet drop rate. The HighSpeed TCP response function maintains the property of being a straight line on a log-log scale, which is also the response function of regular TCP in the presence of low to moderate congestion [13]. Work on large windows suitable for long fat networks (LFNs, pronounced ‘elephants’) [7, 25] has been designed to overcome implementation problems with communications that exhibit high bandwidth-delay products, such as ISL [1, 17, 38, 48].

It has been observed in [31] that random losses lead to significant throughput deterioration when the product of the loss probability and the square of the bandwidth-delay product is larger than one. In addition, the authors showed that for multiple connections sharing a bottleneck link, TCP is unfair to connections with high round-trip delays. It was concluded that changes in both the transport and the network layers are required to provide good end-to-end performance over high-speed networks.

Although TCP has been carried over ISL successfully, the long propagation delay to satellites in geostationary earth orbit (GEO) imposes limitations on data applications. In this work, we propose the use of the window scale option for TCP in order to transfer data efficiently when the bandwidth-delay product is greater than 64 kbytes. Additionally, we consider in our analysis the use of WDM to send information in parallel.

3 Context: parallel communications and long latency effects

It is important to consider the effects of latency and degradation in FSO-ISL communications due to the attenuation of the signal and noise. TCP and 10 Gigabit Ethernet with 64B/66B scrambler coding are used for the analysis presented.

A single-channel link with a data rate of 4 Gbps is analyzed using two methods of signal detection: direct detection for distances of less than 20,000 km, and coherent detection for distances between 20,000 km and 85,000 km. These results are compared with the calculation for a four-wavelength WDM link at a 1 Gbps data rate per wavelength (4×1 Gbps), which corresponds to a 4 Gbps aggregated data rate. Thus, a comparative analysis of 4 Gbps and 4×1 Gbps for TCP is presented in this paper.

3.1 Wavelength division multiplexing and wavelength striping

The Internet Protocol (IP) over WDM is used for high-capacity networks. While IP provides a common platform for services, WDM offers large bandwidth and throughput. In optical communications, wavelength-division multiplexing (WDM) is a technology that multiplexes several optical carrier signals onto a single optical fiber, or into a free-space propagation medium, by using different wavelengths of laser light (i.e., coherent light) to carry different signals. WDM is equivalent to frequency-division multiplexing (FDM), the term often used in applications with frequencies outside the optical spectrum where different frequencies carry different signals. WDM is therefore a technique that can reduce the impact of high-speed inequalities by partitioning a huge bandwidth into multiple wavelengths, each operating at a slower speed better matched to the network interface [10, 15]. Although one problem with WDM is the latency incurred during switching, it can be useful for point-to-point links and for systems in which the propagation time is much longer than the switching time. An alternative is wavelength-striping, also known as WDM Bit-Per-Wavelength (BPW). This technique reduces the switching latency by sending the information in parallel and switching in parallel. Other differences from WDM are that no parallel-to-serial conversion is needed and that parallel pulses are launched simultaneously on different wavelengths [35]. In a microprocessor, the equivalent would be to send eight bits with each bit transmitted on a different line, so that if the eight lines are observed simultaneously a byte is transmitted in a single time-slot. This concept of sending bits per wavelength, or fragments of packets per wavelength per time-slot, is described in this work as “wavelength-striping”.
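As a simple illustration of the striping idea, the sketch below splits a packet into stripes and launches them in parallel on several wavelengths per time-slot; the wavelength count and stripe size are arbitrary choices for the example, not system parameters.

```python
# Illustrative wavelength-striping: fragments ("stripes") of a packet are sent
# in parallel on W wavelengths within each time-slot and reassembled in order.
def stripe(packet: bytes, wavelengths: int = 4, stripe_size: int = 1):
    """Return a list of time-slots; each slot holds one stripe per wavelength."""
    chunks = [packet[i:i + stripe_size] for i in range(0, len(packet), stripe_size)]
    slots = []
    for i in range(0, len(chunks), wavelengths):
        slots.append(chunks[i:i + wavelengths])   # one slot = parallel stripes on λ0..λ(W-1)
    return slots

def reassemble(slots):
    return b"".join(chunk for slot in slots for chunk in slot)

packet = bytes(range(16))
slots = stripe(packet, wavelengths=4, stripe_size=1)
assert reassemble(slots) == packet                # parallel launch, serial-equivalent payload
```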

This paper evaluates WDM as a means to compensate for long propagation delays and considers its use in future ISL implementations. Figure 2 illustrates how information (packets) coming from different sources is distributed and transmitted using four wavelengths. Thus, if more wavelengths carrying packets are added to the transmission, a higher aggregate throughput is achieved.

Fig. 2 Wavelength-striping with semi-synchronous time-slots

When wavelength-striping is used and the packets arrive at the switch fabric, all the wavelengths are switched per time-slot. The time-slot and the stripe-size are related, because the number of bits allocated to a stripe is restricted by the time-slot. When switching is performed with fixed time-slots, the stripes of data can be allocated into the time-slots, which can be adjusted to the size of the stripes plus a guard-band gap. The switching time in time-slotted networks can be reduced by the use of wavelength-striping when the stripe-size is short, but this has an adverse effect on long-distance communications.

Figure 2 shows stripes allocated to the wavelengths in time-slots. For example, the stripe distributed over the four wavelengths can be as short as a single bit. However, inserting individual bits into a time-slot requires complex electronic control to perform the high-speed switching, and results in a bit-level time-slot that is impractical for the upper layers. On the other hand, if the stripe-size is excessively long, the advantage of fast switching is lost and the system may offer no advantage over WDM. Consequently, the optimum stripe-size depends on the control characteristics and the requirements of the switch fabric and applications. For ISL it is desirable to have stripe-sizes as large as possible to compensate for long RTT values. In addition, the switching time required for ISL is not significant in comparison with the huge propagation time and thus, in this case, it is advantageous to use WDM instead of wavelength-striping with short stripes. The optimal stripe-size can be calculated as a trade-off between the link capacity (i.e., the bandwidth-delay product) and the switching performance required.

This paper investigates the use of multiple wavelengths, which gives several advantages that improve the performance of FSO-ISL. Although wavelength-striping is recommended for links with transmission-time ≫ propagation-time (e.g., short-reach interconnects), its use with TCP is impractical when transmission-time ≪ propagation-time, since the slow-start of the congestion window is directly governed by the RTT. TCP’s slow-start limits the growth of the window of each connection; therefore, when WDM is used, a higher aggregated throughput is possible.

High transmission frequencies and data rates suffer from detection problems and are more sensitive to noise. WDM splits the data rate across different wavelengths, so the rate per wavelength is a fraction of the original data rate. As a consequence, noise effects and detection problems are reduced.

Latency affects TCP adversely because the growth of the window size depends on the RTT. For example, the RTT between Earth and a geostationary satellite is of the order of 500 ms, and for ISL it can vary between 100 ms and 533 ms; these long RTTs have a significant impact on the performance of the communication and, as a consequence, the goodput is reduced considerably.
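A quick calculation, using the RTT range quoted above, shows how severely an unscaled 64 KB window caps the goodput of a single connection regardless of the link rate; the RTT values are illustrative examples from that range.

```python
# Throughput ceiling of a single TCP connection limited to a 64 KB window:
# at most one full window can be delivered per RTT.
WINDOW = 65_535 * 8          # maximum unscaled window [bits]

for rtt_ms in (100, 250, 533):
    ceiling = WINDOW / (rtt_ms / 1000.0)
    print(f"RTT {rtt_ms:>3} ms -> max ~{ceiling / 1e6:5.2f} Mbps per connection")
# Even at RTT = 100 ms the ceiling is ~5 Mbps, far below a 1 Gbps link.
```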

The ISL cannot be used at its full capacity by a single TCP connection for a single high-bandwidth application, due to the small buffer space at the sender and receiver. This limits the use of satellites for seamless internetworking. There is, however, no limit to the number of concurrent TCP flows that can be multiplexed together to fill the satellite channel [47]. Section 5 describes the scaling-up of the TCP window and the increase of the buffers at the sender and receiver, and Sect. 6 shows the simulation results. In addition, it has been suggested that multiple parallel connections increase the throughput, making better use of existing ISL [5]. The BER as a function of distance is assessed for the FSO model and the results are used to simulate the packet error rate (PER) during the transmissions. These results are used to obtain the goodput (i.e., the amount of useful data successfully transmitted) of the 4 Gbps link and of the 4×1 Gbps link over long distances. It is shown in the following sections that WDM improves the goodput of the TCP transmission.

4 Methodology

The system under investigation consists of end-to-end optical communications between satellites. Satellite distances differ, as shown in Fig. 1, and hence examples of different distances between satellites are simulated. TCP flows are used to generate traffic at 1 Gbps and 4 Gbps and the communications are simulated with the SSFNet simulator [12]. Parallel transmission using WDM is also simulated in order to calculate the TCP performance of four optical links, each at a 1 Gbps rate.

4.1 Comparison of single channel 4 Gbps and 4×1 Gbps WDM links

A comparison of a single channel at 4 Gbps and a WDM link of four channels at 1 Gbps is presented to show that there are significant advantages in increasing the number of wavelengths rather than the bit rate of a single channel. This comparison is assessed with the corresponding BER for the inter-satellite links and for the optical detection methods used, as shown in the next section.

BER as a function of data rate and distance for optical coherent and direct detection

At the detector electronics, the instantaneous signal to noise ratio (SNR) is given by:

$$ \mathrm{SNR} = \frac{(\mathrm{Res} \times P_R)^2}{i_{N}^2} $$
(1)

with ‘Res’ being the device’s responsivity, as obtained in [39],

$$ \mathrm{Res} = q_e \eta \frac{\lambda}{hc} $$
(2)

where q_e is the electron charge, η is the quantum efficiency of the photodetector (p-i-n), h is Planck’s constant and c is the speed of light in vacuum.

Then, let the power emitted by the transmitter be P_T. The instantaneous received optical power P_R, as shown in [42], is given by

$$ P_R = P_T \eta_R \eta_T G_R G_T \biggl(\frac{\lambda}{4 \pi z}\biggr)^2 $$
(3)

where
  • z: distance between two satellites
  • η_R: optical efficiency of the receiver
  • η_T: optical efficiency of the transmitter
  • G_R: receiver telescope gain
  • G_T: transmitter telescope gain.

For a case in which the aperture size A of both telescopes is the same, the telescope gain is:

$$ G_T = G_R = \biggl(\frac{\pi A}{\lambda} \biggr)^2. $$
(4)

The noise signal i_N in Eq. (1) arises mainly from two sources: thermal noise and shot noise. The dominant noise source depends strongly on the detection method chosen. For direct detection, thermal noise is dominant and therefore restricts the ability of the link to operate close to the photodetector quantum limit. The thermal noise signal is then given by

$$ i_{N_{th}}^2 = \frac{4 K_B T}{R} B_e , $$
(5)

where K_B is Boltzmann’s constant, T is the operating temperature, R is the load resistance and B_e is the receiver electrical bandwidth. This last factor is adjusted according to the bit rate of the incoming signal. For on-off keying (OOK) modulation, B_e is twice the channel bandwidth (BW) to prevent signal distortion [39].

For a coherent detection scheme, on the other hand, the noise at the detector is given primarily by the shot noise of the local oscillator. This enables the receiver electronics to operate close to the quantum limit. In an ISL where laser power is limited, coherent detection is an advantage because of the long distances involved; however, the hardware required to implement this detection scheme is more complicated than for direct detection [39]. Assuming that the requirements for a coherent detection scheme can be met, the performance of an FSO-ISL is assessed.

The shot noise at the receiver is given by

$$ i_{N_{\mathrm{sh}}}^2 = 2 q_e \mathrm{Res} P_R B_e , $$
(6)

where B_e is adjusted according to the bit rate of the incoming signal.

Using the previous equations, the SNR is determined as a function of the distance z. The BER, as obtained in [39], is related to the SNR as follows,

$$ \mathrm{BER} = \frac{1}{\sqrt{2 \pi \mathrm{SNR}}} e^{-\mathrm{SNR}/2}. $$
(7)

In Eq. (7), we assume that the decision threshold is zero; that is, for zero current the logic value is ’0’. The BER as a function of the distance between satellites is plotted later in Sect. 6.
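To make the chain of Eqs. (1)–(7) concrete, the sketch below evaluates the BER at a given distance and bit rate for both detection schemes. The physical constants are standard; the link parameters (wavelength, aperture, efficiencies, temperature, load resistance) are placeholders rather than the values of Table 1, so the absolute numbers will not reproduce Fig. 6 exactly.

```python
# Sketch of the BER-vs-distance calculation of Eqs. (1)-(7); parameter values
# below are placeholders, except P_T = 0.9 W, which is quoted in Sect. 6.1.
import math

q_e = 1.602e-19     # electron charge [C]
h   = 6.626e-34     # Planck's constant [J s]
c   = 3.0e8         # speed of light in vacuum [m/s]
K_B = 1.381e-23     # Boltzmann's constant [J/K]

lam = 1550e-9       # wavelength [m]                         (placeholder)
eta = 0.8           # detector quantum efficiency            (placeholder)
eta_R = eta_T = 0.8 # receiver/transmitter optics efficiency (placeholder)
A   = 0.25          # telescope aperture [m]                 (placeholder)
P_T = 0.9           # transmit power [W]                     (Sect. 6.1)
T   = 300.0         # receiver temperature [K]               (placeholder)
R   = 50.0          # load resistance [ohm]                  (placeholder)

def ber(z, bitrate, coherent=False):
    """BER of the FSO-ISL at distance z [m] for the given bit rate [bps]."""
    Res = q_e * eta * lam / (h * c)                                      # Eq. (2)
    G = (math.pi * A / lam) ** 2                                         # Eq. (4), G_T = G_R
    P_R = P_T * eta_R * eta_T * G * G * (lam / (4 * math.pi * z)) ** 2   # Eq. (3)
    B_e = 2 * bitrate                                                    # OOK: B_e = 2 x BW
    if coherent:
        i_N2 = 2 * q_e * Res * P_R * B_e                                 # Eq. (6), shot noise
    else:
        i_N2 = 4 * K_B * T * B_e / R                                     # Eq. (5), thermal noise
    snr = (Res * P_R) ** 2 / i_N2                                        # Eq. (1)
    return math.exp(-snr / 2) / math.sqrt(2 * math.pi * snr)             # Eq. (7)

# Example usage: direct detection at 10,000 km and coherent detection at 50,000 km, 1 Gbps.
print(ber(10_000e3, 1e9), ber(50_000e3, 1e9, coherent=True))
```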

4.2 Physical layer

The case scenarios investigated reflect the advanced technology used in ISL and are considered in the calculations. The factors that system designers wish to optimize on board a satellite are size, weight, power and cost. These factors are used as basic reference points; examples of equipment characteristics used in ISL are listed below [22, 41].

  • Data rate 0.5–1 Gbps

  • Range > 10,000 km

  • Estimated power consumption 55–140 W

The literature indicates a range between 0.5 and 1 watt for the transmission power. The parameters used for the simulation are shown in Table 1 [8, 44].

Table 1 Parameters for the BER calculation

4.3 Considerations for the BER/PER cross-layer analysis

The worst-case scenario is when a single bit damaged or lost in the Physical Layer causes a whole packet to be damaged in the upper layers. In this case, the receiving TCP does not acknowledge the damaged segment (the loss being signaled through duplicate acknowledgments or a timeout) and retransmission takes place, adding further delay to the transmission. Another significant consideration is that block coding affects the interaction between the Optical Physical Layer and the transported data; this represents a cross-layer effect since it adds encoding bits at the Physical Layer. As a consequence, the sequences of ones and zeros are modified.

Figure 3 shows how a single damaged bit causes a packet error; this may affect the goodput because a whole packet loss requires a TCP retransmission. Without this condition, it is not trivial to find a relationship between BER and PER.

Fig. 3 Worst case scenario where each single bit-error causes a packet-error

Taking into consideration the worst case scenario, the linear relationship between BER and packet error rate (PER) is expressed as:

$$ \mathrm{PER} = 8 \times \mathrm{BER} \times \mathrm{MTU} \times 66/64, $$
(8)

where MTU is the maximum transmission unit; following the Ethernet standard it is set to 1500 bytes for the simulations and later increased to improve performance. The factor of 8 converts bytes to bits, and the 66/64 factor (equal to 1.03125) accounts for the 64B/66B line coding.
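A direct implementation of Eq. (8), with the MTU values used later in the simulations, illustrates the magnitude of the worst-case PER.

```python
# Worst-case BER-to-PER mapping of Eq. (8): every bit error is assumed to
# corrupt a distinct packet; 66/64 accounts for the 64B/66B coding overhead.
def per_from_ber(ber: float, mtu_bytes: int = 1500) -> float:
    per = 8 * ber * mtu_bytes * 66 / 64
    return min(per, 1.0)            # PER is a probability; clamp the linear estimate

for mtu in (1500, 12000):
    print(mtu, per_from_ber(1e-9, mtu))
# e.g. BER = 1e-9 and MTU = 1500 bytes give PER ~ 1.2e-5
```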

The block coding (n,k) used in the 10 Gigabit Ethernet standard has been adopted for the analysis in this work in order to establish the relationship between TCP and BER in free space optical links. In an (n,k) block code, data bits are grouped into data words of k bits, which are mapped onto code words of n bits, with k < n; for 64B/66B coding, k = 64 and n = 66.

Block coding is a commonly used technique because it reduces the ambiguity of arbitrary transitions between binary states and also helps to maintain timing information when long strings of ones or zeros are present. Block codes restrict the longest interval in which no level transitions occur and therefore help to recover timing, or synchronization. The 64B/66B scrambling, which is a one-to-one mapping of the data stream, provides data-whitening. Such data-whitening randomizes the distribution of data errors, improving their uniformity. Scrambling produces a random data pattern by modulo-2 addition of a known bit sequence to the data stream, and the resulting randomness of the scrambled data provides an adequate amount of timing information [29]. In addition, the 64B/66B code used in 10 Gbps Ethernet [37] improves the homogeneity of the incidence of data errors among the packets received by the Network Layer. Another advantage of scrambling is that no additional bandwidth is required, because it is a binary exclusive-OR (XOR) operation with a selected sequence of bits. The consequence of using the 64B/66B code is therefore the elimination of the non-uniformity of data errors [33, 35]. We believe that modern coding techniques, such as network coding [14, 16, 28, 45], may also be applicable to our approach, though we leave that study for future work.
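The scrambling operation described above can be sketched as a self-synchronous scrambler of the kind used by the 10 Gigabit Ethernet 64B/66B layer (generator polynomial 1 + x^39 + x^58); the bit-by-bit form below is written for clarity rather than efficiency.

```python
# Self-synchronous scrambler/descrambler sketch, G(x) = 1 + x^39 + x^58.
# Scrambling is a modulo-2 (XOR) operation, so no extra bandwidth is needed,
# and the descrambler recovers the data without an explicit seed exchange.
MASK = (1 << 58) - 1

def scramble(bits, state=0):
    out = []
    for d in bits:
        s = d ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)   # taps at delays 39 and 58
        out.append(s)
        state = ((state << 1) | s) & MASK                   # register holds scrambled bits
    return out

def descramble(bits, state=0):
    out = []
    for s in bits:
        d = s ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)
        out.append(d)
        state = ((state << 1) | s) & MASK                   # register holds received bits
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0] * 8
assert descramble(scramble(data)) == data                   # one-to-one mapping, as in Sect. 4.3
```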

If the scrambling code has to work with a cyclic redundancy check (CRC) implemented in the Link Layer, then the selected sequence could interact with it in a negative way: the ability of the CRC to detect errors is reduced whenever its generator polynomial has factors in common with the scrambler polynomial [9]. In this paper, the Link Layer is considered with error detection but not error correction. The influence of the lower layers on TCP performance is treated as part of the payload of a TCP segment; thus it can be generalized that the encapsulation of the bits into the lower layers affects TCP as indicated in Fig. 3, where the worst-case scenario is shown and each bit-error causes a packet error. Otherwise it is not feasible to carry out a suitable cross-layer analysis.

The ability of the 64B/66B code to distribute errors uniformly is discussed in [34]. This is a significant property which allows errors in the Physical Layer to be related to errors in the layers above: since the errors in the Physical Layer are uniformly distributed, packet errors are also uniformly distributed and therefore a connection between BER and PER can be established.

5 PER as a function of delay (distance) and analysis of traffic performance

5.1 Scaling of the window

The limited scaling of the TCP window, and hence of the TCP sending rate, is a significant limitation of current standard implementations of TCP. However, results in previous work suggest that by scaling up the TCP window the performance improves considerably [36, 40].

If the window scale option (WSO) [23] is not used, the maximum value of the slow-start threshold (SST) is 65 KB. After the SST is reached, the growth of the window size becomes linear, governed by the congestion-avoidance algorithm [21, 25]. The conventional SST for all versions of the TCP congestion window is limited to 65 KB unless a scaling option is used. The WSO allows exponential growth of the congestion window up to 1.07 GB, i.e. 65,535 × 2^14, where the exponent 14 is the maximum shift count carried in the 3-byte window scale option of the TCP header; the last of these three bytes is the shift count, which is announced during connection setup when the WSO is used.
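The required shift count can be estimated from the bandwidth-delay product; the sketch below uses rates and RTTs from the scenarios discussed in this paper and follows the RFC 1323 limit of 14 on the shift.

```python
# Relating the ISL bandwidth-delay product to the TCP window scale shift count.
import math

def required_shift(rate_bps: float, rtt_s: float) -> int:
    bdp_bytes = rate_bps * rtt_s / 8                   # bytes in flight needed to fill the pipe
    if bdp_bytes <= 65_535:
        return 0                                       # the unscaled 16-bit window suffices
    return min(14, math.ceil(math.log2(bdp_bytes / 65_535)))   # RFC 1323 caps the shift at 14

for rate, rtt in ((1e9, 0.100), (1e9, 0.533), (4e9, 0.533)):
    print(f"{rate/1e9:.0f} Gbps, RTT {rtt*1000:.0f} ms -> shift {required_shift(rate, rtt)}")
# 1 Gbps x 533 ms ~ 66.6 MB in flight, i.e. a shift of 10 (window units of 1024 bytes).
```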

The role of window scaling was explored by implementing a customized option for the SSFNet simulator [12], an option not previously available; this new scaling implementation was developed in previous work and it will be referred to in this paper as the “scaling up (Sup)” model [40].

The ‘Sup’ model uses the maximum congestion window available in TCP (i.e., 1.07 GB) for both the sending and receiving windows. ‘Sup’ directly modifies the limit of the congestion window’s exponential growth without changing the TCP header. It thus differs from the original idea of WSO because the scaling is not announced between the receiver and the server during the synchronization of the communication, i.e. when the transmission is initiated. By modifying the congestion window threshold and the advertised window in the ‘Sup’ model, scaling is available for all transmissions without being announced.

Even if the WSO and the proposed ‘scaling up’ differ in the initial period of the communication (i.e., TCP synchronization), the scaling of the congestion window is similar during the slow-start growth. The exponential growth is controlled by the value of the slow-start threshold (SST). In the simulation model, the SST is expanded, always allowing the window to grow to 1 GB without modifying the TCP header.

Figure 4 shows the difference between the scaled-up and the conventional growth of the TCP window. The growth of the window in a TCP connection is as follows:
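A minimal sketch, assuming the slow-start and congestion-avoidance behavior described above; the threshold and segment values are illustrative, not those of the simulator.

```python
# Congestion window growth per RTT: exponential (slow start) up to the
# slow-start threshold, then linear (congestion avoidance), no losses assumed.
def window_after(rtts: int, sst_bytes: float, mss: int = 1500) -> float:
    cwnd = mss
    for _ in range(rtts):
        if cwnd < sst_bytes:
            cwnd = min(2 * cwnd, sst_bytes)   # slow start: doubles every RTT
        else:
            cwnd += mss                       # congestion avoidance: +1 MSS per RTT
    return cwnd

print(window_after(20, sst_bytes=65_535))     # conventional: ~65 KB in ~6 RTTs, then +1 MSS per RTT
print(window_after(20, sst_bytes=1.07e9))     # 'scaling up': exponential growth toward ~1 GB
```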

Fig. 4 Congestion window vs. time

5.2 TCP traffic scenarios

We use the SSFNet simulator to determine the TCP throughput on the basis of the predicted PER, which is related to the BER using Eq. (8). Several concurrent TCP flows are generated and transmitted over the free-space optical link, both over a single link and over a four-wavelength link between two satellites. The aim is to generate considerable traffic; sending concurrent flows of data provides full-pipe utilization for both of the 4 Gbps configurations examined. The WDM transmission simulations are performed at a 1 Gbps rate for each of the four wavelengths and therefore, ideally, the aggregated transmission rate is 4 Gbps.

The goodput of the TCP connections is affected by the link delay, as shown in Fig. 5. Even though the data rate is 4 Gbps, the 1 Gbps link achieves nearly the same goodput. This is because the slow-start growth of the TCP window increases as a function of the number of acknowledgments received and of the number of segments. These segments are set to the maximum segment size (MSS) of 1500 bytes, which is standard for Ethernet and TCP, and the MSS is then increased up to 12,000 bytes, which is close to the simulation limit. This simulation corresponds to the case where the propagation time is greater than the transmission time; the dominant delay of the transmission is therefore the propagation of the signal over the link distance at the speed of light.
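The dominance of the propagation delay can be checked with a quick calculation; the distance and segment size below are example values taken from the ranges used in this section.

```python
# Transmission (serialization) time vs. one-way propagation time for an ISL.
C = 3.0e8                                    # speed of light [m/s]

def times(mss_bytes: int, rate_bps: float, distance_km: float):
    t_tx = mss_bytes * 8 / rate_bps          # time to serialize one segment
    t_prop = distance_km * 1e3 / C           # one-way propagation delay
    return t_tx, t_prop

for rate in (1e9, 4e9):
    t_tx, t_prop = times(1500, rate, 10_000)
    print(f"{rate/1e9:.0f} Gbps: tx {t_tx*1e6:.1f} us  vs  prop {t_prop*1e3:.1f} ms")
# At 10,000 km the ~12 us (1 Gbps) or ~3 us (4 Gbps) serialization time is
# negligible next to ~33 ms of propagation, so 1 Gbps and 4 Gbps single links
# see nearly the same goodput.
```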

Fig. 5 Time vs. distance for 1 Gbps and 4 Gbps where the transmission time ≪ propagation time

The width of the bars in Fig. 5 represents the transmission delay and the slope of the diagonals represents the propagation delay (for simplicity, the slopes of the diagonals are not proportional to the width of the bars).

Four case scenarios have been chosen for the analysis of the ISL delay. These delays were assessed for the distances between satellites given in Table 2.

Table 2 ISL distances and corresponding one-way propagation delays

6 Results

6.1 BER as a function of delay for FSO-ISL

We assume a fixed transmission power of 0.9 W to calculate the BER as a function of delay for FSO-ISL. The parameters used for the analysis are those shown previously in Table 1. From Eq. (7), we find that direct detection results in a higher BER than coherent detection; however, coherent detection requires more complex hardware. Therefore, direct detection is suggested for distances below 20,000 km and coherent detection for distances above 20,000 km.

The BER/PER relationship described above is used to perform the TCP traffic flow simulations. The BER results in a PER that greatly increases for ISL distances above 20,000 km, as shown in Fig. 6. Thus the TCP simulations for distances above 50,000 km include the BER effect. The BER results for 1 Gbps and 4 Gbps are shown in Fig. 6.

Fig. 6 BER vs. distance: direct and coherent detection for 1 Gbps and 4 Gbps

With coherent detection it is possible to transmit over longer distances than with direct detection. In both cases, the BER is higher for transmissions at 4 Gbps. Thus, with lower data rates it is possible to keep a lower BER, and for this reason it is suggested that the data rate be split across several wavelengths.

These results have been assessed for delays corresponding to the distances between satellites shown previously in Table 2. We take the PER for TCP from the cross-layer analysis described in Sect. 4.3 and assess it using Eq. (8).

6.2 Scaling-up of the window and maximum segment size increase

We calculate the goodput for several serial links by varying the distance of each link according to the delays shown in Table 2. A BER of 10⁻⁹ is used for the calculations, corresponding to the distances shown in Fig. 6 for the direct and coherent detection methods described previously. In addition, the maximum segment size (MSS) is increased to compare the link transmission performance; the increased MTU improves performance if the MSS is also increased, as shown in the simulation results. The goodput with and without the scaling-up (Sup) of the TCP window size is simulated and the results are shown in Fig. 7.

Fig. 7 Goodput vs. line delay for 1 Gbps, varying the maximum segment size (MSS), with and without scaling up of the TCP window (WSO)

The goodput of the TCP connections is affected by the link delay, as shown in Fig. 8 and Table 3. The WDM link delivers the data faster because the slow-start growth of the TCP window increases as a function of the number of acknowledgments received. Therefore, the RTT delay is the key factor determining performance when 1 Gbps and 4 Gbps are compared.

Fig. 8 Goodput vs. distance for 1 Gbps and 4 Gbps single links and for 4×1 Gbps WDM

Table 3 Aggregated goodput with and without wavelength striping and with two detection methods

Because of the four-fold increase in the aggregated data rate, important implications can be deduced from the results. The goodput decreases with distance, but the four-fold difference between the 4 Gbps single link and the 4 Gbps wavelength-striped link is maintained even as the distance increases. Thus, increasing the number of wavelengths increases the aggregated throughput proportionally and helps to compensate for the diminished performance of TCP due to long delays. It is interesting to observe that the goodput for the 1 Gbps and 4 Gbps single links is practically the same because of the dominance of the propagation delay.

Table 3 confirms that the goodput for 1 Gbps and 4 Gbps is practically the same because of the dominance of the propagation delay. A trade-off between the cost of adding more wavelengths and the resulting performance should therefore be considered.

7 Conclusions

In this paper we described how WDM can increase the goodput of the transmission. A cross-layer analysis is established using a relationship between BER and PER for TCP flows with 64B/66B encoding. This relationship is not trivial for most application scenarios; here the analysis of a worst-case scenario is shown, and our findings also indicate the limits of this research direction.

This work provides strong multidisciplinary foundations and an in-depth analysis of significant characteristics that can improve the performance of FSO over ISL. The TCP window is expanded using a customized simulation of the window scale option, and the results show that for long propagation delays it is useful to scale the window size. Simulations of the direct and coherent optical detection methods are assessed and the corresponding BER for the inter-satellite links is presented.

The analysis shown in this paper suggests that WDM is a promising solution to overcome latency related problems in FSO over ISL. The advantages of increasing the number of wavelengths rather than increasing the transmission data rate are shown when the propagation delay is greater than the transmission delay.

We believe that network coding may be applied to our approach with good results. Recent work has shown that using such techniques for satellite systems is a promising solution, and our future work will look further in that direction.