3.1 Control Center Design

This chapter describes the design aspects of a typical Mission Control Center (MCC). The German Space Operations Center (GSOC) is taken as an example (Fig. 3.1).

Fig. 3.1 Position of the mission control center in the ground segment

The Mission Control Center—as the name implies—is the central ground facility of a space mission. It is the central point where all data and management information concerning the spacecraft are consolidated. These data are received, checked, and processed; decisions are made; and—in case of an emergency—the respective procedures are performed in order to restore the nominal conditions of the mission. How the MCC operates is defined by its design, which specifies its capabilities, flexibility, and robustness. MCC operations are also defined by the people working within, primarily of course by the Flight Operations Team, but also by all personnel responsible for interfaces and infrastructure. Their work in the background is equally important. Finally, the design of the MCC needs to conform to the customer requirements and provide a safe and secure environment for spacecraft operations. This includes not only purely technical solutions but also the respective environment for the people working there.

Within this chapter we focus on several aspects of the design of a control center. First the necessary infrastructure is analyzed, then the design of the local control center network is examined, followed by the software needed.

A so-called multi-mission operations concept allows greater operational flexibility and an easier phasing-in of new missions. The decision whether the MCC is laid out as a multi-mission or a single-mission facility should be made early in the design process, as it has far-reaching effects on the overall design, especially on the IT infrastructure and the network. The operations concept (multi-mission or single-mission) also has a large impact on how personnel, especially the mission operations teams, are assigned.

In this chapter we focus on a multi-mission environment, based on the GSOC design. A multi-mission environment is typically more complex to design but offers greater flexibility for the integration of future projects. However, there are also situations where a multi-mission concept may not be adequate. Missions which have no or only minor similarities (e.g., because of different requirements, security aspects, etc.) cannot easily be grouped together into a multi-mission environment. It would be difficult, for example, to integrate a scientific mission whose data are more or less publicly accessible with a military mission with very strict security requirements.

In the following we will discuss different aspects of the design of the facility itself (the building), of the different office and operational subsystems, and of the IT hardware. The construction of the building itself will not be a topic here; however, some important issues will be mentioned.

3.1.1 Infrastructure

Identifying a suitable location for the control center building is the first task in planning the infrastructure. Several important aspects have to be considered here. Adverse geological conditions should be taken into account: geologically unstable zones (earthquakes) as well as areas subject to frequent flooding should be avoided if possible. Nevertheless, appropriate measures should be taken to make the control center less vulnerable to natural or technical contingencies. This can be achieved, e.g., by using a redundancy concept on different levels. Redundant power supply is essential: Uninterruptible Power Supplies (UPS) can provide constant power to all of the MCC's systems during short power breaks, or bridge the time gap until diesel generators start up. The latter can provide power even for several days. A completely independent backup control center provides the highest level of redundancy.

Concerning redundancy of the communication infrastructure, it is advantageous if the control center is located close to a large city hub, where multiple independent connections to the telecommunication network are available. A dedicated communications antenna for the MCC provides additional independence in case the terrestrial communication lines are cut.

Maintenance aspects should also be given a great deal of attention. Occasionally it will be necessary to replace parts of the equipment (hard drives, switches, workstations, etc.). In most cases this can be done without an impact on operations; sometimes, however, it becomes necessary to shut down parts of or even the complete MCC. In such a case planning and coordination are vital. Affected projects need to be informed, maintenance tasks need to be scheduled carefully, and backup solutions have to be discussed (what happens if the maintenance tasks take longer than planned or are not successful?). Additionally, it should be considered that some equipment deteriorates considerably when its power supply is switched on and off repeatedly. Many electronic devices react very sensitively to such power cycling. The damage caused by the power cycling might even be worse than the problem which caused the initial maintenance action.

The MCC facility has to meet security standards, as required by law, company policy, or project requirements. An access control system includes the basic technical infrastructure (protective doors, door key management, respective locking policies) as well as more sophisticated elements like access terminals with key cards and the respective key card management, monitoring cameras in critical rooms and corridors, and alarm systems (intruder alarms). Additionally, security personnel should be available on site at all times. More or less strict visitor control may be implemented, depending on the projects located at the facility as well as on the facility's capabilities. In general, visitors will have no access at all to the network and data processing facilities, whereas conference or display areas with mock-ups of satellites may be treated as low-security zones.

The facility needs to be equipped and maintained for the safety of its personnel. This includes emergency exits and signage, fire and smoke alarm systems (which might be connected to the local fire brigade), and different kinds of fire extinguishing facilities. Especially the latter might be essential for larger computing or power (UPS) facilities; a central fire suppression system can be installed (e.g., an argon inert gas extinguishing system or a similarly sophisticated system designed to avoid damage to the equipment). Finally, procedures for the case of fire have to be developed and put in place. Especially for spacecraft operations, the procedures for the spacecraft operators have to be precise: they should clearly define under which circumstances the control room has to be evacuated, when and whether running systems should be shut down, and how they shall be recovered afterwards.

3.1.1.1 Control Rooms

At the heart of the MCC are the control rooms. Depending on the available resources and needs, there may be several control rooms. They can be of different sizes and might serve different purposes. They can be assigned permanently to certain space missions or might be assigned only for specific phases of a mission (such as a LEOP—Launch and Early Orbit Phase). Control rooms will usually be equipped with air conditioning, not only for human comfort but also for the computing hardware. As already mentioned above, the room shall have emergency exits and has to provide enough space for the operational team and additional equipment like printers and voice and video systems. A photocopier shall be available as well, though possibly outside the control room itself, as it produces a considerable amount of noise, which is not desirable inside the control room.

Space missions may draw a large amount of public attention, which shall, however, not affect operations. Visitor areas with large glass windows allow a direct view into the control room and give an impression of the operation of a spacecraft mission. It should be possible to use blinds (or a similar solution) to cover the windows for situations or missions that do not want public visibility (Fig. 3.2).

Fig. 3.2 Control room of the German Space Operations Center (GSOC)

Control rooms should allow changes in the configuration of consoles, as spacecraft missions might have changing requirements on the control room layout during the mission lifetime. This requires a forward-looking initial design of elements like cabling for network, telephone, voice, and power.

Control room consoles typically require not only access to the operational systems but also to the office network (for e-mail and documentation) and to the Internet (e.g., to allow representatives of the satellite manufacturer or customers to access their company network over VPN).

Facilities like restrooms or the coffee kitchen should be located close to the control room to minimize the time spacecraft operators need to spend outside the control room.

Besides the central spacecraft control rooms, there are several other rooms which are typically necessary during operations. The flight dynamics team typically has its own control room for the LEOP phase, to be able to closely coordinate with the flight team and quickly exchange data products. The Network and Systems control room is a communication hub connecting all incoming and outgoing connections to and from the MCC and providing voice communication with the outside world for monitoring and coordination of the Ground Station Network or other important external operational interfaces. Also staff from satellite manufacturers or customers may need dedicated rooms where they can perform important off-line tasks in close vicinity to mission operations. These rooms may need specific access control.

3.1.1.2 Public Space in the Control Center

Finally, control centers need enough office space for their own employees as well as for guests. Depending on the overall purpose of the control center facility, different approaches to the layout of the areas for public space, presentation, and catering may be chosen. Many facilities for military or communication purposes will require only relatively small presentation and catering areas. Large national control centers which perform LEOPs and many public relations activities will require large areas for the interested public, along with press and meeting rooms as well as areas for the catering of visitors. Space missions are still at the cutting edge of technology and hold fascination for many people. A control center offers the unique possibility of fostering the public's interest in space missions and technology. Therefore it should provide the respective means for public information and education.

Before we take a closer look at the computing network in the next section, we quickly go through some other subsystems and elements, which are no less important but are not the focus here.

3.1.1.3 Server Rooms and Computer Hardware

The server room is equipped with server, routing, and switching equipment. When designing the system, the servers require special attention, as their reliability, flexibility, and capacity are defined to a large extent by the effort which is needed for maintenance and extension of the system. Until recently, the design was dictated by the use of powerful servers. However, this concept was not very flexible; one defective element of the server caused its total outage, and bringing it back to full operation took a considerable amount of time.

As the flexibility and redundancy requirements increased (accompanied by an increase in server count), the focus shifted to so-called blade servers. They can be packed densely and support a growing number of applications. At the same time they provide backup capabilities and easy exchange of defective modules. Currently, however, virtualization is the trend. Virtualized application servers are even easier to maintain. They may be seen as a single pool of computing and storage resources which can be used in a very flexible way. In the times when physical servers were used, running 10 applications meant having 20 physical servers in place (including backup). The same situation in a virtualized environment, however, requires only two physical machines (for prime and backup), each hosting ten virtual applications. The decision which technology to use should be made on a case-by-case basis. Each technology has its pros and cons; for example, virtualization with all its advantages is not very well suited for applications with intensive network traffic (because a single physical port of the server hardware has to be shared). The virtualization principle is, however, the prerequisite for a further improvement in the maintainability of the control room hardware (operator consoles). This is achieved by using thin client terminals. As these thin client terminals do not contain local hard drives or other moving parts, their reliability and expected lifetime are much higher than those of conventional PCs.

Other advantages are decreased power consumption and less heat buildup, which in turn will result in much less effort for providing adequate air-conditioning for both computer hardware and human operators.
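The consolidation arithmetic behind the virtualization example above can be made explicit. The following is a minimal sketch in Python; the capacity of ten virtual applications per host is taken from the text, everything else is illustrative:

```python
import math

def physical_servers_dedicated(n_apps: int) -> int:
    """One prime and one backup machine per application."""
    return 2 * n_apps

def physical_servers_virtualized(n_apps: int, apps_per_host: int = 10) -> int:
    """Prime/backup host pairs, each hosting several virtual applications."""
    hosts = math.ceil(n_apps / apps_per_host)
    return 2 * hosts  # one prime and one backup host per group

print(physical_servers_dedicated(10))    # 20 machines, as in the text
print(physical_servers_virtualized(10))  # 2 machines, as in the text
```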

Data storage is also an important topic to consider. For most office applications, the hard drives of the office computers and possibly Network Attached Storage (NAS) may be sufficient. Storage of the spacecraft data and documentation requires a different approach, as security, continuity, and collaboration issues have to be addressed. Here again several solutions may be considered, from high-capacity NAS-like storage, through SAN (Storage Area Network) for short-term and working storage, to data vaults and long-term archives (in the form of magnetic tapes with automated refresh mechanisms).

3.1.2 Control Center Network

In this section we will take a closer look at the design, security, and maintenance aspects of the control center network, or LAN. The computing network is the backbone which connects all of the subsystems within the MCC. It is the linking element, and at the same time it protects specific systems from unauthorized access.

3.1.2.1 Network Topologies

Figure 3.3 shows the principle of the network connection between GSOC and the ground station in Weilheim. Two separate paths can be identified, each designed to fulfill its special requirements. On the left side there is the so-called Office Path, connecting the Office LANs of GSOC and Weilheim (e.g., to allow teams at both sites to exchange documents). This connection is realized with the help of a VPN over the DLR campus network. On the other side there is a highly reliable, redundant SDH connection for the real-time satellite data, connecting the Operational LANs of both sites. The same data router also carries Voice over IP (VoIP) traffic multiplexed onto the data connection, providing highly reliable voice communication for the operators at the MCC and the ground station.

Fig. 3.3 Control center example of link to the ground station

The above example shows that there are two independent network branches, the Office and the Operational LAN. This separation is also realized in the network structure within the control center itself; it reflects the solution introduced to fulfill the security requirements.

The Operational LAN (also called Ops-LAN) is the high-security network area. It is physically separated from the other networks, with only very limited access from the outside. File transfers are allowed only via a specific FTP server located in a so-called DMZ (Demilitarized Zone), and real-time connections from ground stations are likewise only allowed through a firewall and only to trusted locations having similarly secured operational networks.

The DMZ mentioned above is the typical separator between different LANs. It is a separate network segment with two entry points, each protected by a firewall. DMZs typically contain only firewalls and FTP servers; for specific applications, however, the outermost DMZ may also include an application (Web) server to provide selected MCC services to the outside world.

The Office LAN is the network part typically used in the offices of the MCC and is intended for general office work like viewing and working on documentation as well as e-mail access. The Office LAN has Internet access; however, this functionality is restricted (the Internet can be accessed from the Office LAN, but the Office LAN cannot be accessed from the Internet, so it is only a "one-way" access). The Office LAN is controlled: only registered devices may access the network, and the IP addresses are managed centrally by the network administrator of the MCC.

Another network presented in the figures below is the so-called Ops-Support-LAN. This is not necessarily needed for every control center; it encompasses supporting systems which may need more access to the outside world but at the same time are very important for the operational systems (e.g., they deliver command files). In the example below, the Ops-Support-LAN contains the Flight Dynamics and Mission Planning systems.

Every area has its own service and security segment, hosting proxies, virus scanners, and authentication, name, time, and file servers. Clients are not able to open direct connections to hosts outside their LAN area; connections can only be established via the proxies in the service segment. These proxies are directly connected to virus scanners in the DMZs, where the incoming and outgoing traffic is scanned.

Figure 3.4 shows an example of an MCC network. The real-time TM/TC connections with the ground stations and external partners are shown at the bottom of the figure (Ops-LAN). Essential operational files are also transported on these network parts. It can be seen that to transfer a file from the Ops-LAN to an external customer over the Internet, it has to pass several firewalls and DMZs before it is made available on an FTP server in the outermost DMZ.

Fig. 3.4 Control center example network
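The multi-hop rule described above can be illustrated with a small sketch. The zone names and the hop table below are invented for illustration and do not reproduce GSOC's actual topology; the point is only that a product leaving the Ops-LAN is relayed from one staging FTP server to the next and is never delivered directly:

```python
# Hypothetical zone-to-zone hop table: each zone may hand a file only to
# its direct neighbor (via a firewall-protected FTP server), never further.
ALLOWED_HOPS = {
    "ops-lan": "inner-dmz",
    "inner-dmz": "outer-dmz",
    "outer-dmz": "internet",
}

def transfer_path(src: str, dst: str) -> list:
    """Return the chain of zones a file must traverse from src to dst."""
    path = [src]
    while path[-1] != dst:
        nxt = ALLOWED_HOPS.get(path[-1])
        if nxt is None:
            raise ValueError(f"no route from {src} to {dst}")
        path.append(nxt)
    return path

print(transfer_path("ops-lan", "internet"))
# ['ops-lan', 'inner-dmz', 'outer-dmz', 'internet']
```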

3.1.2.2 Network Technologies

The MCC network is based on the TCP/IP protocol and underlying Ethernet. The type of cabling depends on the available resources; in principle, however, fiber optic cabling offers a bigger potential for future upgrades and—also an important factor—is tap-proof. Typically it also provides higher bandwidths and thus data rates, so that for future expansions only equipment like routers or switches needs to be exchanged. Exchanging the cabling, on the other hand, is very expensive and may require a high effort. Also, equipment with interfaces for fiber optic cabling is in general much more expensive than for conventional cables. Therefore it may be reasonable to implement a hybrid solution: fiber optics between the big hubs, and connections to the individual users' office equipment (PCs or laptops) by means of copper Ethernet cables.

As already mentioned, the control center network constitutes the backbone of all operational systems and is strictly critical for operations. This requires appropriate support from skilled personnel. Depending on the size of the control center, and thus also of the network and the projects being supported, it may be necessary to have network support personnel available permanently, either through shift work or an on-call service.

Another aspect of the network is its maintenance. In many cases it will be possible to perform maintenance with minimal or no impact on the running operational business. This may be the case if, for example, equipment needs to be exchanged and this can be performed in the time between two satellite passes. But even then, and especially when there may be a real impact on operations (like an outage of the systems for some hours), there is a need for appropriate planning and preparations (i.e., backup regulations, spacecraft onboard autonomy).

3.1.3 Control Center Software

3.1.3.1 General

Within this section a few prominent examples of specific software used within a control center are presented. Their generic functionality will be explained using GSOC applications as examples. Standard programs and software packages like office software and operating systems are not discussed here.

The software of a control center is special and often custom-made, although there are a number of commercial software packages on the market which support satellite operations. However, they are typically not cheap (because the customer base is small), and it always has to be evaluated how well they fit into the specific environment of the control center and the satellite project in question. The software not only has to process, e.g., telemetry or perform orbit calculations, but also needs to provide interfaces to other packages or systems. Some of these interfaces may be proprietary, which makes the usage of off-the-shelf products impossible.

Ground station and control center software, and especially its interfaces, can be divided analogously to the data flow paths.

On one hand, we have the real-time data stream, which flows from the spacecraft through the ground station to the MCC. On the other hand, a multitude of data has to be transported in the form of files. These are also called "products." They contain data like event predictions, converted telemetry extracts, input from external parties, ground station predictions, etc. They have to be exchanged between internal and external partners. Due to the high number of files and the need for timely delivery as well as reliability, most of these transfers should be performed automatically. File transfer happens more or less asynchronously and is not as time critical as the real-time data stream.

The real-time data is often called "online," and file transfers are correspondingly named "off-line." The diagram in Fig. 3.5 shows these two types of information flow between the ground segment subsystems.

Fig. 3.5 Online (solid lines) and off-line (dotted lines) data transfers

For both types of data flow there is dedicated software which generates, processes, transfers, and converts the data, as shown in Fig. 3.5.

The online communication is realized with four main elements in a chain. The baseband software is usually installed at the ground station site. It performs basic tasks on the lowest level, like frame synchronization, error correction, or time stamping. The service provider delivers the data from the ground station to the corresponding MCC; currently, in most cases the Space Link Extension (SLE) service is used, which is described in more detail in Sect. 3.1.3.2. The service user acts as the counterpart to the ground station on the MCC side; very commonly this is again SLE user software, which receives the SLE data and provides it to the Monitoring and Control (M&C) system in the respective format. Finally, the spacecraft M&C system (also called TM/TC processor) provides the actual data processing and the user interface for the flight controllers.
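The four-element chain can be pictured as a simple pipeline. The following Python sketch only mirrors the structure described above; all class and function names are invented, and real implementations use CCSDS-conformant SLE libraries rather than plain function calls:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    data: bytes
    timestamp: float
    corrected: bool = False

def baseband(raw: bytes, t: float) -> Frame:
    """Ground station: frame synchronization, error correction, time stamping."""
    return Frame(data=raw, timestamp=t, corrected=True)

def sle_provider(frame: Frame) -> dict:
    """Ground station side: wrap the frame for WAN transfer (e.g., as RAF data)."""
    return {"service": "RAF", "frame": frame}

def sle_user(transfer_unit: dict) -> Frame:
    """MCC side: unwrap the SLE transfer unit for the M&C system."""
    return transfer_unit["frame"]

def mc_system(frame: Frame) -> None:
    """M&C / TM-TC processor: process and display for the flight controllers."""
    print(f"TM frame at t={frame.timestamp}: {len(frame.data)} bytes")

mc_system(sle_user(sle_provider(baseband(b"\x1a\xcf\xfc\x1d" + b"\x00" * 8, 0.0))))
```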

The off-line communication is similarly built out of four components. The generating and processing systems produce or use files; these systems may also include the M&C system mentioned above. Dedicated storage systems provide the required hardware, but also the corresponding data management software; this includes in particular all databases required for operations. Automated file transfer software moves the files; one implementation example, the Automated File Distribution (AFD) software, is described in more detail in Sect. 3.1.3.3. Finally, since security plays a major role in expensive and sensitive satellite missions, firewalls and virus scanning software are deployed as well.

3.1.3.2 Space Link Extension Gateway System

To communicate safely with sometimes distant ground stations, specific communication means need to be used. This includes the communication lines, which need to be ordered and maintained (ISDN, VSAT, leased line); in most cases commercially available lines are utilized. Protocols tailored to the specifics of space missions also need to be employed. A frequently used protocol is SLE. SLE is based on CCSDS standards and is widely accepted by agencies and companies operating ground stations, because SLE ensures interoperability: in contrast to previous systems, cross support can be made available without the need to readapt interfaces for each mission and each customer. SLE is based on a client-server architecture and allows the transfer of telecommands and telemetry, which are encapsulated within SLE packets and can be transported that way over the WAN (Wide Area Network) (Fig. 3.6).

Fig. 3.6 Communication between the ground station and the mission control center via a WAN using the SLE gateway. The ground station equipment and software, like the HF and baseband systems, interface with the SLE Service Provider, which is connected via the network and its equipment to the SLE User on the MCC side, which in turn interfaces with the MCC software and hardware, like the monitoring and control software

As mentioned already, SLE is client-server based. The role of the server is taken by the so-called SLE service provider. The service provider is located at the ground station, and on request it provides the services related to that station. These services are called Forward Command Link Transfer Unit (FCLTU) for telecommanding and Return Channel Frames (RCF) or Return All Frames (RAF) for telemetry. These services are specified in detail in the corresponding CCSDS standards (cf. Table 3.2).

On the opposite side of the network, the SLE user is located at the control center. It manages the abovementioned services, performs all required protocol conversions, and acts as an interface to the satellite Monitoring and Control (M&C) system. GSOC, for example, uses the SLE Switch Board (SSB), which is able to receive telemetry flows from different stations in parallel and can send them to different M&C instances. In the opposite direction, it receives telecommands from the M&C software, converts them into the SLE format, and sends them to the corresponding station.

3.1.3.3 Automated File Distribution Subsystem

The file transfer is a key element of the off-line communication described above.

GSOC uses AFD, an open-source tool developed by Deutscher Wetterdienst (the German Weather Service). This system is used by all GSOC satellite projects. The system is placed in a multi-mission environment, and each project defines its own file transfer matrix, which acts as an input for the AFD configuration. The matrix defines what kind of files shall be transferred, from where to where, and how often. As soon as the configuration is activated, the AFD system starts monitoring the defined directories and performs the transfers fully autonomously. AFD is especially useful in the case of complex network structures, as depicted in Fig. 3.7 for the three LANs used at GSOC.

Fig. 3.7 AFD configuration in GSOC network
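A file transfer matrix reduced to its essence might look like the following sketch. Directory names, destinations, and intervals are invented; the real AFD uses its own configuration files and runs as a daemon, whereas here a single polling round stands in for the monitoring loop:

```python
import pathlib

# Hypothetical transfer matrix: (source directory, destination, poll interval in s)
TRANSFER_MATRIX = [
    ("/ops/outgoing/orbit",      "ftp://dmz-ftp/project_a/", 60),
    ("/ops/outgoing/tm_extract", "ftp://dmz-ftp/customer/",  300),
    ("/office/incoming/sched",   "/ops/incoming/schedule/",  120),
]

def poll_once(matrix) -> None:
    """Check the configured directories and report files due for transfer."""
    for src, dst, _interval in matrix:
        src_dir = pathlib.Path(src)
        if not src_dir.is_dir():
            continue  # directory not present on this host
        for f in src_dir.glob("*"):
            print(f"would transfer {f} -> {dst}")

poll_once(TRANSFER_MATRIX)
```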

3.1.3.4 Spacecraft Monitoring and Control System

As already laid out above, the spacecraft M&C software package has the tasks of receiving and unpacking the telemetry, processing and displaying the data, and processing and encoding the telecommands and sending them out via its interfaces.

It is the central software component used by the Flight Operations Team in the control room of the control center.

In many cases this is a monolithic application, but designs with separate components for the different tasks are also in use. In the last decades quite a number of different systems have been developed and are commercially available. At GSOC, mainly the SCOS-2000 system is used, which was developed by the European Space Agency (ESA).

In many control centers and especially for LEOP operations, the mission data needs to be available at various workplaces at the same time. Therefore a server/client functionality is usually included.

As depicted in Fig. 3.8, there are two basic data flow paths in an M&C system. Dedicated interfaces are required on both paths to establish the communication with the data user interface, which in turn ensures the connection with the ground station. Telecommands, which are usually grouped in so-called command stacks, need to be processed until they can be sent out via the interface: they need to be encoded and packetized. On the other hand, the incoming telemetry also requires processing by the monitoring and control system: the packets need to be "opened" and their content attributed to the original parameters. This raw telemetry still cannot be efficiently displayed for the flight controllers, since it is so far only a bit pattern, which first needs to be calibrated to appear as meaningful physical values, e.g., temperatures or currents. Finally, there is an option to perform an automatic limit check on selected telemetry parameters: their values are compared against predefined thresholds; in some cases more sophisticated mathematical operations are performed as well. The result of that check can either be just a highlighting of the corresponding telemetry item on the display, an alarm for the flight controller, or even an automated, predefined reaction of the system. The central component of the M&C system, which enables all the processing described above, is the spacecraft database, also called Mission Information Base (MIB). It contains the definitions of the telemetry and telecommand streams, the calibration information, and the limit definitions. Although most M&C systems are developed along standards and are intended for use in all types of missions, it typically requires some effort to adapt the software for each mission. In the end the M&C system has to reflect the capabilities of the onboard data handling system of the spacecraft, as described in Sect. 6.2.

Fig. 3.8 The functional components of a typical monitoring and control system. Two paths can be distinguished: coming from the ground station, the telemetry requires several processing steps until it can be displayed on a telemetry client: after being received at the interface of the MCS, the telemetry packets need to be unpacked, the content calibrated, and the corresponding limit checks performed. On the opposite path, the telecommands of the so-called command stacks need to be encoded, packets generated, and finally submitted via a defined transmission interface. Further essential components are listed as well
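A minimal sketch of the telemetry-side steps (raw value, calibration, limit check) may clarify the data flow. The parameter name, the linear calibration, and the thresholds below are invented stand-ins for what a real Mission Information Base would define:

```python
# Toy MIB entry: calibration polynomial and limits for one invented parameter.
MIB = {
    "THERM_01": {
        "calibration": (-50.0, 0.5),   # raw -> deg C: offset + slope * raw
        "soft_limits": (-10.0, 45.0),
        "hard_limits": (-20.0, 60.0),
    },
}

def calibrate(name: str, raw: int) -> float:
    """Convert the raw bit pattern into a physical value."""
    offset, slope = MIB[name]["calibration"]
    return offset + slope * raw

def limit_check(name: str, value: float) -> str:
    """Compare the calibrated value against the predefined thresholds."""
    lo, hi = MIB[name]["hard_limits"]
    if not lo <= value <= hi:
        return "ALARM"
    lo, hi = MIB[name]["soft_limits"]
    if not lo <= value <= hi:
        return "WARNING"
    return "NOMINAL"

raw_value = 130                       # bit pattern extracted from a TM packet
temp = calibrate("THERM_01", raw_value)
print(temp, limit_check("THERM_01", temp))   # 15.0 NOMINAL
```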

After the acquisition of the spacecraft, the satellite starts to send telemetry and the ground station antenna receives it. The ground station performs demodulation and decoding; first error checks and possibly error corrections are applied to the data stream. All the received data is stored locally in a short-term or long-term archive (depending on the ground station and its capabilities). Furthermore, the data stream is provided on the WAN interface to the MCC. There, the data is received, further decoded, further error corrections are performed, and finally every portion of the actual telemetry is processed, analyzed, and stored, and some part of it is displayed at the spacecraft operators' consoles for direct view.

Spacecraft commands travel in the opposite direction. In contrast to the telemetry, which streams almost continuously during the contact and often contains redundant or repetitive information, commands are sent only using special operations procedures, as described in detail in Sect. 2.2. The telecommand is sent out of the M&C system at the MCC, packed into the respective transfer protocols, and transferred over the WAN to the ground station. In the meantime, the ground station needs to have established the uplink, which is defined as a stable radio connection with the spacecraft; this time the ground antenna is sending and the spacecraft antenna is receiving. Often, to compensate for frequency variations introduced by the Doppler effect, a sweep needs to be performed.
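The size of the required sweep can be estimated from the first-order Doppler relation, where $f_0$ is the carrier frequency, $v_r$ the radial (line-of-sight) velocity, and $c$ the speed of light; the numbers below are illustrative for an S-band uplink to a LEO satellite:

```latex
\Delta f = f_0 \,\frac{v_r}{c}
\approx 2.1\times 10^{9}\,\mathrm{Hz}\cdot
        \frac{7\times 10^{3}\,\mathrm{m/s}}{3\times 10^{8}\,\mathrm{m/s}}
\approx 49\,\mathrm{kHz}
```

An offset of this order has to be covered by the ground station's carrier sweep until the onboard receiver locks.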

The telecommand packets coming from the MCC are modulated onto the carrier frequency and radiated in the direction of the satellite.

3.1.4 Outlook

Although the basic function of control centers is the same and constant over time, their design depends on and evolves with the requirements of the missions they are responsible for. Some design principles have been maintained since the beginning and can be found all over the world, while new requirements emerge with new technologies. In that way each control center will be unique.

3.2 Ground Station Network

The Ground Station Network (GSN) plays a major role in space missions. It has to establish communication with the spacecraft and with other control centers, support specific spacecraft characteristics, and provide operability and safety of the mission. Due to its nature, the GSN takes part in cross support activities between different organizations and agencies.

The GSN covers several functional aspects—the communication path between control center and the ground stations (transporting online data, off-line data, voice), the management of the stations and their antennas, as well as coordination tasks and station scheduling.

Communication with the spacecraft, as an essential part of spacecraft operations, mainly comprises the reception of telemetry, the transmission of telecommands, and tracking. There are several approaches to improve the operability and safety of the mission. First of all, we try to extend the contact time, typically by introducing additional ground stations to the network and selecting optimally located stations. For some missions (like ISS, LANDSAT, SILEX, and Sentinel) a viable option is the use of a GEO relay satellite. This has been done since the TDRS program to support the Space Shuttle. More relay satellites have become available and the market is expanding (ARTEMIS and, in the near future, EDRS).

A specific case is the LEOP (Launch and Early Orbit Phase), as contact has to be established reliably for the first time in the spacecraft's life for several critical tasks. The GSN has the task of reducing the time from separation of the spacecraft from the launcher until the first acquisition. The first acquisition station has to perform the first tracking of the satellite (to allow exact orbit determination) and to receive the telemetry to assess the spacecraft's health after launch. If time allows or it is required, the first acquisition station also performs the uplink, to execute time-critical operations like commanding a sun-pointing mode or deploying the solar panels.

3.2.1 Station Selection

During the design of the GSN, several technical properties, parameters, and requirements need to be considered. They are mostly provided in the form of the Space-to-Ground Interface Control Document, while some others may be found in the Spacecraft Design Document and the requirement documents (starting from the Mission Requirements, through the Customer Requirements, to the Ground Segment Requirements).

The analysis starts with the main mission characteristic, which is the orbit. Based on the knowledge of the location and speed of the satellite during the mission phases, we can decide which ground stations may potentially be used, depending on their geographical location (see also Sect. 4.1.3.1). For earth missions, the orbit type can change during the LEOP, but remains rather stable during the routine phase. Orbit types are categorized by the height (in other words, the distance from the Earth), the inclination of the orbit plane against the earth's equator, and the shape of the orbit path.

The majority of satellites are on circular orbits. Here we differentiate between Low Earth Orbit (LEO) with heights of up to 1,000 km, Geostationary Earth Orbit (GEO) at approximately 36,000 km, and everything in between, called Medium Earth Orbit (MEO). Orbits may have different inclinations, for GEO mostly zero degrees, for polar-orbit LEOs close to 90°, and for many other LEOs somewhere in between. A spacecraft flying in LEO with an inclination of, say, 55° can essentially be contacted only by stations whose latitude is at most of that order.
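More precisely, a station can also lie somewhat poleward of the maximum ground-track latitude, since its antenna sees the spacecraft anywhere within a visibility circle around the station. The following is a minimal sketch of this geometry, assuming a spherical earth, a circular orbit, and an illustrative minimum antenna elevation of 5°:

```python
import math

R_EARTH = 6371.0  # mean earth radius in km

def visibility_radius_deg(height_km: float, min_elev_deg: float) -> float:
    """Earth-central angle from the sub-satellite point to the visibility limit."""
    eps = math.radians(min_elev_deg)
    lam = math.acos(R_EARTH / (R_EARTH + height_km) * math.cos(eps)) - eps
    return math.degrees(lam)

def max_station_latitude(incl_deg: float, height_km: float,
                         min_elev_deg: float = 5.0) -> float:
    """Highest station latitude that can still see the spacecraft."""
    return incl_deg + visibility_radius_deg(height_km, min_elev_deg)

# A 55-deg-inclination LEO at 500 km is reachable from stations up to ~72 deg:
print(round(max_station_latitude(55.0, 500.0), 1))
```

For a 500 km orbit the visibility circle adds roughly 17° to the inclination, so the 55° example would be reachable from stations up to about 72° latitude.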

Polar stations are of high importance for polar-orbiting earth observation missions, as they allow contact on practically every single orbit. For missions with low inclination, like GEO, they are not usable, however.

Other earth satellites are on elliptical (= eccentric) orbits. This results in varying orbit height and velocity, from very low heights (around 100 km only) up to 60,000 km (way beyond GEO). The reason for this can be either a transfer phase between different orbits (e.g., GTO = geostationary transfer orbit), an improved observability from the ground (e.g., Molniya-type orbits), or scientific requirements.

The GSN needs to be carefully fitted to the mission. In the case of highly elliptical orbits, a spacecraft at apogee is visible to many ground stations for long times (hours); at the same time the signal strength is significantly weaker. At perigee, on the other hand, the spacecraft will be at a very low height with extreme speed. The resulting antenna tracking speed is very high and excludes most antennas.
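The required tracking rate for an overhead perigee pass can be estimated as the ratio of spacecraft velocity to height; with the illustrative values from the text (a height of about 100 km and a perigee velocity of roughly 10 km/s, typical of a highly eccentric orbit):

```latex
\omega_{\max} \approx \frac{v_{\mathrm{perigee}}}{h_{\mathrm{perigee}}}
= \frac{10\,\mathrm{km/s}}{100\,\mathrm{km}}
= 0.1\,\mathrm{rad/s} \approx 5.7^{\circ}/\mathrm{s}
```

This is far above what most large dish antennas can follow.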

Missions to the Moon, Mars, or farther into space are called deep space missions. Such spacecraft no longer orbit the Earth. Due to the long distances, the required antennas are larger in size to reach the required sensitivity. They can be built with a slower maximum tracking speed, as the apparent target motion is dominated by the earth's rotation (Figs. 3.9 and 3.10).

Fig. 3.9 Typical LEOP Ground Station Network for LEO Spacecraft

Fig. 3.10 Typical LEOP Ground Station Network for GEO Spacecraft in GTO

Let us look briefly at other parameters which influence the choice of ground stations or antennas. We already mentioned the signal strength. Two main parameters which influence the space link quality are the transmitting power (Equivalent Isotropically Radiated Power, EIRP) of the spacecraft and the ground station, and the reception sensitivity (antenna gain over noise temperature, also called G/T). The space link quality (= link budget) calculation (see Sect. 1.3.3.2) shows how much margin is left under the different conditions during the mission.
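The structure of such a calculation can be sketched in a few lines. This is a deliberately simplified downlink budget; all input values are illustrative, and a real budget (see Sect. 1.3.3.2) includes many further losses (pointing, atmosphere, polarization, implementation):

```python
import math

BOLTZMANN_DB = -228.6     # 10*log10(Boltzmann constant), dBW/(K*Hz)
C = 299_792_458.0         # speed of light, m/s

def free_space_loss_db(dist_m: float, freq_hz: float) -> float:
    """Free-space path loss for a given slant range and frequency."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / C)

def link_margin_db(eirp_dbw: float, g_over_t_dbk: float, dist_m: float,
                   freq_hz: float, rate_bps: float, ebn0_req_db: float) -> float:
    """Margin = achieved C/N0 minus required C/N0 for the given data rate."""
    cn0 = (eirp_dbw + g_over_t_dbk
           - free_space_loss_db(dist_m, freq_hz) - BOLTZMANN_DB)
    return cn0 - (ebn0_req_db + 10 * math.log10(rate_bps))

# Illustrative LEO S-band downlink: 10 dBW EIRP, G/T of 20 dB/K,
# 2000 km slant range, 2.2 GHz, 1 Mbps, 4 dB required Eb/N0.
print(round(link_margin_db(10, 20, 2_000e3, 2.2e9, 1e6, 4.0), 1))  # ~29.3 dB
```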

Other parameters which influence the choice of the stations are the downlink and uplink frequencies, generally referred to as frequency bands, as shown in Table 3.1. Not all stations have antennas supporting all possible frequency bands; stations are (at least up to now) specialized depending on their purpose. Thus, stations supporting GEO missions typically have antennas with Ku- and Ka-band capabilities, whereas LEO stations have S- and X-band capabilities. Deep space antennas typically also support S- and X-band frequencies, however with a much larger dish diameter (to provide better EIRP and G/T).

Table 3.1 Frequency band assignments as used in space operations

The situation is even more complex during the LEOP of geostationary satellites, where S-band is traditionally used before switching over to Ku- or Ka-band for payload IOT (in-orbit test) and routine operations.

There is an increasing interest in supporting LEO satellites in Ka-band due to the higher available bandwidth. The trend is also apparent for the LEOP support of GEO satellites, as S-band is under increasing pressure and interference from other spectrum users like mobile Internet access. The necessary ground station infrastructure is slowly being built up.

Another aspect of station selection is the bandwidth. Most ground stations support the full available downlink data rate in the specific frequency band. For the uplink, limitations sometimes apply. Up to now, relatively low data rates (between 4 and 20 kbps, kilobits per second) have been used for the uplink, and the capabilities and equipment of existing stations were designed accordingly. A tendency toward increased uplink data rates can be observed, driven by trends like more frequent software uploads. Not all ground stations can support this.
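A back-of-the-envelope calculation shows why these rates matter for software uploads (the file size is illustrative; protocol overhead and retransmissions are ignored):

```python
def upload_minutes(size_bytes: int, rate_bps: float) -> float:
    """Transfer time for a file at a given uplink data rate."""
    return size_bytes * 8 / rate_bps / 60

print(round(upload_minutes(1_000_000, 4_000), 1))   # ~33.3 min at 4 kbps
print(round(upload_minutes(1_000_000, 20_000), 1))  # ~6.7 min at 20 kbps
```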

Other parameters, which we only mention here without further discussion, are modulation type, coding, randomization, space link data format, and finally specific tracking requirements for ranging and Doppler measurements. Fortunately, all of these are standardized, and the respective CCSDS (Consultative Committee for Space Data Systems) and ECSS (European Cooperation for Space Standardization) standards shown in Table 3.2 describe them in detail.

Table 3.2 The most important CCSDS and ECSS standards for space mission communication. They are available from the Web sites of the organizations

The technical compatibility between the spacecraft and the station is essential; thus one should not rely on standards alone. Before the launch of the spacecraft, a Radio-Frequency Compatibility Test (often referred to as RF-Comptest) is usually arranged. This test assures the compatibility of the radio interfaces and is used to prepare the station configuration for use during LEOP. The RF-Comptest is typically performed at the latest six months before launch, as soon as the so-called "RF-Suitcase" is available. The RF-Suitcase typically contains the flight model of the RF equipment with some parts of the On-Board Computer (OBC). In some cases even the whole spacecraft is transported to the ground station for testing; in that case, however, a clean room needs to be available. In most cases only the relatively small RF-Suitcase is provided.

There are a few more aspects one should not forget when designing the GSN. The accuracy of the carrier separation between a spacecraft's multiple antennas or between multiple spacecraft is essential, as interference may occur, rendering one or both space links unusable. This may lead to the exclusion of up- or downlink over specific geographic regions or during specific times.

Another aspect is the spacecraft autonomy: depending on it, fewer ground stations may be needed, since the satellite can survive several hours on its own in case of an outage in the ground station network. In the case of low autonomy or critical applications (e.g., precise orbit keeping), on the other hand, the requirements on the GSN increase dramatically. Finally, onboard constraints may require more frequent contact times (e.g., limited onboard data storage).

3.2.2 Station Communication

The selected ground stations are connected to the control center by a communication network. Different network communication characteristics have to be considered and used. The choices we make will be based on baseline requirements like the bandwidth needed to support the mission, the availability at specific locations, and the total cost over the mission lifetime.

3.2.2.1 Communication Paths

There are several ways to connect to ground stations.

Most often, leased lines are used. The backbone technology of these lines is in the hands of the telecom provider, and the control center may or may not suffer some disadvantages due to that. But even though we cannot influence the selection, we need to know it in order to judge the quality of the link. Keywords here are SDH (Synchronous Digital Hierarchy), ATM (Asynchronous Transfer Mode), and MPLS (Multi-Protocol Label Switching) (Fig. 3.11).

Another technology which is still operationally meaningful is ISDN (Integrated Services Digital Network). It does not provide much bandwidth (64 kbps per line), but it is very reliable and provides the option of a dial-up connection. This is analogous to the phone system at home: you pay as you use it. This may be especially interesting for small missions with limited budgets. Unfortunately, ISDN technology is aging, and many telecom operators are phasing it out because it requires a separate infrastructure, is utilized less frequently, and provides relatively little flexibility in comparison to modern IP networks.

A specific variant of communication is the so-called roof-to-roof communication, which is typically realized by VSAT (very small aperture terminal). This is nothing other than satellite dishes installed at the premises of the MCC and its partner center or station, communicating over a geostationary communication satellite. The advantage is an extremely high flexibility (you can virtually connect your MCC with every point on the Earth) at decent bandwidths. The solution is, however, typically quite expensive (rental of the GEO transponder), and it introduces considerable delays into the data stream.

Finally, let us say some words about the Virtual Private Network (VPN) over the Internet. This option may seem very attractive, as the Internet "costs nothing," virtually provides "unlimited bandwidth," and "is everywhere." However, one should consider whether the disadvantages are not too large. It may not cost much, but you actually share it with an unspecified number of other users, and there is no "support for the Internet" available. The bandwidth varies permanently, and good access is not necessarily available everywhere. Moreover, many operational systems are not allowed to be attached to a network that has Internet access. In general, it is suggested not to use the Internet as a transport technology for real-time TM/TC or other critical applications where security and data integrity play a role. Still, it is a viable solution for off-line data transfers and for the connection between the MCC and the manufacturer site for simulation and testing with the Central Checkout System (CCS).

Fig. 3.11 Example of redundant ground station connection

3.2.2.2 Data Transfer Methods

Consideration is required for the data types which shall be exchanged over the communication lines. For telemetry and telecommands as well as for voice interfaces, a good, reliable real-time connection is needed, but not necessarily with high bandwidth, whereas management information, scheduling, tracking, and pointing data may be handled in a more cost-friendly manner. Nowadays practically all traffic is based on the TCP/IP protocol, whereas in the past some proprietary transport protocols were used (DECnet, X.25, NASCOM).

At the application level, the CCSDS Space Link Extension (SLE) standard is extensively used for real-time communication, whereas FTP is the main file transfer mechanism. Some changes are expected in the future, especially for file transfer and management information exchange, and the respective standardization work is being performed right now. More online services as well as mission operation services will become available.

Addressing the operational aspects of the GSN, one should keep in mind that all of the connections need to be tested prior to LEOP, and so the arrangement and integration of communication lines and ground stations need to be performed at the right time. The personnel need to be trained to operate the network, and procedures for the different operational scenarios need to be prepared and validated.

3.2.2.3 GSN Examples

Now that we know all aspects of the GSN, we may look at the example presented in Fig. 3.12.

Fig. 3.12 Communication within GSN for the LEOP of a typical GEO mission

This is an example of the GSN and communication infrastructure for the LEOP of a geostationary satellite. At the bottom, the MCC (in this case GSOC) is located. Solid lines represent voice communication, whereas dotted lines represent the data connections (real-time TM/TC).

The MCC has a voice connection, either over the voice system or by telephone, with the launch site, the ground station Weilheim, and the respective Network Management Centers (NMCs) of the external partners (PrioraNet, CNES, and ISRO) (Fig. 3.13). The data links are implemented fully redundantly using different routes. Weilheim is integrated with two SDH 2 Mbps links, whereas the connection to the PrioraNet stations is available over two levels of NMCs with a VSAT terminal and ISDN. Similarly, the CNES and ISRO networks have been integrated with a single NMC each.

One can clearly see that partnerships with external partners and agencies allow extending the GSN into a sophisticated spacecraft support network, raising the total reliability to very high levels. On the other hand, this results in a complex network, which needs to be controlled, planned, and maintained. Not to be neglected are contractual and financial aspects, with questions like: Do we always get the highest priority at the specific station? How much does it cost in the long term? Is it possible to get discounts if we consolidate our requirements with only one provider? What would be the trade-offs of forgoing a specific station?

Fig. 3.13 International GSN cooperation from the perspective of an MCC like GSOC

3.2.3 LEOP and Routine Operations

Within this section, we will look a bit more at the operational aspects of the control center infrastructure, the network, and the GSN. The main operational work is divided equally between the Ground Data System (GDS), network, and hardware groups, whereas the GDS performs most of the coordination work as well as the GSN operations.

The GDS team acts as an interface between the satellite operations teams and the other communication and ground station support groups. It manages the GSN for the satellite missions and is responsible for the interfaces with the external partners supporting the missions. In other words, the GDS manages the operational internal and external interfaces of the MCC.

The GDS accompanies each satellite project within the MCC from the very early phases (mission studies, Phase A) until the very end (Phase F, decommissioning). Within the projects, the GDS is one of the project's subsystems, where it works on fulfilling the GDS part of the project requirements and acts as the expert for the GSN and the control center infrastructure. This construction allows project managers to direct all communication and infrastructure questions and requests to one person (the designated project responsible from the GDS). That person takes care of the distribution and coordination within the Communication and Ground Stations department or with external partners. This allows high synergy, a high reuse of resources, and an optimal work distribution.

Other tasks and areas of responsibility of GDS are listed below:

  • Participate in meetings and project reviews (PDR, CDR, TAR, etc.)

  • Take responsibility for the project's Ground Station Network

  • Manage the interfaces to the external partners, including contractual and technical agreements

  • Provide first-level expertise for all network, communications, and infrastructure questions and issues

  • Coordinate all operations-related activities within the Communication and Ground Stations department

  • Prepare work packages and work package descriptions

  • Prepare cost calculations related to communication and infrastructure

  • Prepare relevant project documentation [requirements, design, test plans, reports, ICDs (Interface Control Documents), DMRs (Detailed Mission Requirements)]

  • Assess and implement new operational solutions for communications and infrastructure

  • Participate in international standardization organizations on operational communication topics (CCSDS, IOAG)

In the example of GSOC, the GDS group includes three specific subgroups, which have their own tasks.

3.2.3.1 GDS Engineering Team (NOPE)

The GDS NOPE (Network Operations Engineer) team is a group of engineers taking care of the technical and organizational tasks related to specific satellite missions. Typically each mission has a designated GDS engineer who accompanies the project from phase C (cf. Sect. 2.1), participates in project meetings, and plays the role of contact person in the absence of the GDS manager. The most important tasks of the GDS engineers are:

  • Mission preparation

  • LEOP (Launch and Early Orbit Phase) preparation (configurations, coordination)

  • Active mission support (NOPE) during LEOP (also on shift)

  • Performance of tests (Data Flow Tests (DFT), connection tests, configuration tests)

  • Preparation of the configuration for all data connections for the mission

  • Configuration coordination with external partners

  • Preparation of reports

  • Troubleshooting and failure analysis

3.2.3.2 Systems (Network and Systems Control)

The Network and Systems Control (sometimes also called Network Management Center) is—at least from the communications point of view—in charge of MCC operations. The team consists of a number of operators, who work in shifts to cover 24/7 operations, and support engineers, who coordinate the shift team and manage the work and operational processes within the Network Control Room (Systems Room). "Systems" can be compared to a central phone switchboard, where all connections (operational and technical) from all MCC control rooms are routed (switched) to the ground stations worldwide. Systems also plays the role of a voice center, as it is permanently staffed and has contact (either via telephone or a special voice system) with all projects and all stations, allowing quick reaction in case of contingencies or emergencies. This function is, for example, used to coordinate extraordinary contact requests on holidays or at night, when the scheduling office is not staffed.

3.2.3.3 “Systems” (Network and Systems Control Team)

The main tasks of the “Systems” team are:

  • Network control during routine operations (establishing connections on project request and according to the schedule)

  • Support for NOPE during LEOP

  • Monitoring of the connections and network within GSOC and to external partners

  • Support of contingency and emergency scheduling and operations

3.2.3.4 Scheduling Office

A dedicated scheduling office is a function which becomes necessary with an increasing number of missions and available antennas at a control center. Coordination between these two elements becomes essential to avoid conflicts and to increase the synergy between missions. The scheduling office tasks can be performed by one person, and the office needs to be staffed only during office hours. The tasks of scheduling are listed below; a simplified sketch of priority-based contact planning follows the list:

  • Receive ground station support requests from projects and coordinate allocation at the organization’s own and at external ground stations

  • Perform contact planning according to mission requirements applying mission priority rules

  • Publish the weekly contact plan (schedule) for all MCC missions and resources

  • Provide support and propose solutions in case of conflicts
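The core of the conflict-handling task can be sketched as follows. All mission names, the antenna model, and the rule that the highest priority wins (with conflicting lower-priority requests flagged for re-coordination) are invented for illustration; real contact planning also handles visibility windows, setup times, and negotiated swaps:

```python
from dataclasses import dataclass

@dataclass
class Request:
    mission: str
    antenna: str
    start: float   # hours of day, simplified
    end: float
    priority: int  # lower number = higher priority

def overlaps(a: Request, b: Request) -> bool:
    """Two requests conflict if they want the same antenna at the same time."""
    return a.antenna == b.antenna and a.start < b.end and b.start < a.end

def schedule(requests: list) -> list:
    """Grant requests in priority order, dropping conflicting lower ones."""
    granted = []
    for req in sorted(requests, key=lambda r: r.priority):
        if not any(overlaps(req, g) for g in granted):
            granted.append(req)
    return granted

reqs = [
    Request("MISSION_A", "ANT-1", 10.0, 10.5, priority=1),
    Request("MISSION_B", "ANT-1", 10.2, 10.8, priority=2),  # conflict: dropped
    Request("MISSION_B", "ANT-2", 10.2, 10.8, priority=2),
]
for g in schedule(reqs):
    print(g.mission, g.antenna, g.start, g.end)
```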

When we look at the operations work along the mission lifetime, most of it may be divided into three phases: preliminary preparation (design), detailed preparation (design), and mission execution, which contains specific events like the LEOP. Interestingly, most of the associated tasks apply equally to all systems and subsystems, as they are performed in more or less the same way, just at different levels.

In the preliminary preparation phase the work focuses mainly on the analysis of the customer requirements. This consists of checking whether the existing system fulfills the requirements or whether changes or upgrades have to be considered. This is important, because it will drive the costs. Based on that, detailed requirements for the subsystem are defined.

Another task is to prepare the general design (concept), which includes interface specification. This part is continued in the detailed design definition phase, where also test and verification plans need to be created.

The implementation phase is typically very busy for everybody in a space mission project, and it is no different for the control center infrastructure, network, GDS, and GSN. In detail, it is necessary to implement and integrate all subsystems, which encompasses hardware, software, and service procurement (like communication lines), installation, and testing. Sometimes hardware or software has to be delivered to cooperating partners, which means that the necessary export licenses have to be obtained in time.

Aside from that, the frequencies have to be coordinated and licensed. This is typically done at national level for ground stations. The spacecraft owner needs to apply at the International Telecommunication Union (ITU) for the allocation of the communication frequencies.

Also, a specification for the external partners needs to be prepared and issued in the form of Detailed Mission Requirements (DMR). This, however, may be done only as soon as the respective contracts with these partners are in place. So, as one can see, there is a lot of paperwork which needs to be taken care of in advance.

Finally, in that phase the complex set of technical and operational tests and validations is performed. They are based on the previously prepared test plans and include all subsystems, from data processing, through communication, up to end-to-end tests (including all components) and simulations. Especially the latter are important for the validation of previously prepared operational procedures (e.g., emergency procedures). Not to be neglected is staff planning and training, on both the technical and the operational side.

The LEOP marks the border between the preparation phases and routine operations. All systems need to be handed over to the operational team before LEOP; this is typically done formally during the Operational Readiness Review (ORR).

During operations (including LEOP) a number of tasks are performed repeatedly, like the scheduling of ground contacts, the preparation and execution of passes, reporting, and accounting. At the same time, the whole GSN and MCC infrastructure needs to be monitored and controlled, and maintenance needs to be performed. The interfaces to external partners, to all ground stations, and of course the internal interfaces need to be handled. In case of anomalies and failures, the actions need to be performed according to the procedures, error reports need to be generated, and any anomaly or failure shall be tracked with a respective Discrepancy Report, to avoid such cases in the future.