
1 Introduction

The explosive growth in the demand for broadband and mobile services goes hand in hand with the radical proliferation of media services of various purposes and types, and the media sector has been identified as a vertical industry highly impacted by the enhanced mobile broadband capabilities of 5G and beyond networks. At the same time, the media sector is gradually becoming an integral part of transportation, as a variety of media-related services (especially infotainment and safety/security-related ones) can be provided to passengers in a wide range of situations.

A number of projects and programmes (EU-funded, nationally funded, etc., e.g. the 5G-PPP projects 5G-PICTURE [1], 5G-MEDIA [2], 5G-SOLUTIONS [3], 5GMediaHUB [4] and 5G-ZORRO [5]) focus on the technical realization of Content Delivery Network (CDN) concepts and principles for the provisioning of infotainment media services, and on their deployment over 5G infrastructures.

To this end, the 5G-PPP 5G-PICTURE project [1] delivered a 5G infrastructure paradigm able to support a wide variety of vertical services, including infotainment services in a railway environment over mmWave wireless access deployed along the tracks. The 5G-PICTURE solution enabled, among others, on-board multimedia entertainment/infotainment services for railways and telecommunication services at ultra-dense hotspots (e.g. stadiums) with huge traffic demands and extremely irregular and seasonal characteristics.

Focusing on the media sector, the 5G-PPP 5G-MEDIA project [2] presented an agile programming, verification and orchestration platform for services, and the development of network functions and applications to be deployed in large-scale media service deployments. The innovative components of the platform have been exploited for the delivery of ultra-high-definition (UHD) media over CDNs. 5G-MEDIA provided Media Service Providers with the ability to build flexible and adaptable media distribution service chains, made up of virtualized functions, and to deliver UHD media content over 5G network infrastructures to end users. Moreover, the 5G-PPP 5G-SOLUTIONS project [3] aims to prove and validate 5G infrastructures in delivering use cases of significant vertical domains, including the media industry. As media services distributed through multiple CDNs need to be shielded from content delivery degradation and outages, the project leverages the ETSI 5G MEC architecture capabilities by introducing caching of popular content at the network edge. Using active monitoring by smart proxies paired with gNBs (5G access nodes), in combination with enhanced client-side analytics, the project provides a solution that predictively decides to switch to alternative CDNs or to cache content, in order to prevent degradation or improve the Quality of Service (QoS).

Considering the extension of large-scale architectures for the support of media services, the 5G-PPP 5GMediaHUB project [4] aims to accelerate the testing and validation of innovative 5G-empowered media applications and NetApps from third parties through an open, integrated experimentation facility. The project builds an elastic, secure and trusted multi-tenant service execution and NetApps development environment, and leverages 5G technologies to explore their applicability as a media distribution network. Emphasis is placed on CDN-aided multi-domain content caching deployments and their evaluation. In addition, the 5G-PPP 5G-ZORRO project [5] envisions the long-term evolution of 5G and its capability to deliver diverse vertical applications, including media applications. 5G services are envisioned to be deployed over logically segmented and geographically distributed virtualized infrastructure resources, which can be allocated flexibly according to traffic or user-generated requirements. For instance, content popularity dynamics present significant variations in flash crowd scenarios, e.g. video sharing in stadiums, sharing a breaking-news live feed, etc. In such cases, 5G-ZORRO introduces a novel approach based on leveraging third-party resources to support scalable, pervasive vCDN services (live and/or VoD).

In this landscape, the 5G-PPP 5G-VICTORI project [6] is extending existing 5G experimentation facilities by adopting a novel solution for the integration of CDN-aided infotainment services in 5G network deployments, enabling the uninterrupted delivery of such services with high quality to static and mobile users. The project focuses on delivering two implementation paradigms, available for experimentation at operational railway sites (as indicative high-mobility environments) in Greece and Germany. This paper presents the implementation paradigm at an experimental railway facility in Patras, Greece, and discusses the initial evaluation results.

This paper is organized as follows: Sect. 2 presents the requirements and KPIs of the media services vertical in various dense, static and mobile environments and discusses the high-level 5G network deployment options. In Sect. 3, the 5G-VICTORI network deployment paradigm is described, on top of which a multi-level CDN service is deployed, specifically addressing the challenges and characteristics of operational railway environments. Sect. 4 describes the lab setup in which the paradigm was initially validated and presents the obtained performance evaluation results. Finally, conclusions are drawn in Sect. 5.

2 Media Services and Requirements in 5G-VICTORI

2.1 Overview of Vertical Service Requirements and KPIs

Media services constitute the most representative category of the enhanced Mobile Broadband (eMBB) profile of 3GPP standardized services [7]. Various factors determine the requirements and Key Performance Indicators (KPIs) that the underlying networks have to fulfill (e.g. format, purpose and scope of the service, interactivity, type of terminal device, environment, etc.). In general, the most critical requirement for infotainment services is the provision of high data rates at various availability levels.

At the same time, in modern railway transportation there is a demand for novel services addressing various end users and diverse rail operations. These services, collectively denoted as Future Railway Mobile Communication System (FRMCS) services, correspond to applications for passengers, critical and emergency services for stakeholders engaged in train operation, as well as complementary services related to the optimization of train operation ([3, 5]). FRMCS services are typically categorized into “Business”, “Performance” and “Critical” services.

In particular, “Business” services refer to communication and broadband connectivity services provided to passengers located at railway facilities, i.e. at train stations/platforms and on-board trains. These services include infotainment (e.g. internet access, Video on Demand (VoD), linear TV services, etc.), digital mobility, travel information services, etc., and they are the focus of this paper. At this point, the interests of the media and transportation vertical sectors come together, and their requirements need to be converged in 5G and beyond network deployments. In this landscape, 5G-VICTORI aims to provide a cross-vertical deployment paradigm for (CDN-aided) VoD and near-real-time, linear TV services. These services pose the following high-level requirements (aligned with [9]):

  • Support for High-resolution Real-time Video Quality of video content/TV streaming channels, implying jitter below 40 ms, end-to-end latency below 100 ms and guaranteed data rates of 5–10 Mbps per stream.

  • Considering the access network capacity, aggregate data rates should be at least 150–500 Mbps to support the CDN “data shower” capability; this KPI is further analyzed in Sect. 3.2. Considering also all FRMCS service categories, as mentioned in [10], in cases where coverage along the tracks is not possible, data rates of 1–1.5 Gbps are required at places where the train resides for some time, e.g. at platforms, train depots, etc.

  • Low Channel/Stream Switching Time, corresponding to the time between the triggering of a channel switch and the presentation of the new channel on screen; typically under 1–2 s, corresponding to an end-to-end network latency below 150 ms.

  • Total Wagon User Density, accounting for 100–300 users per train (at peak time) on large, highly congested trains.

  • Seamless service provisioning to train wagons at high speed, reflecting the vertical-specific requirement for mobility at train velocities (reaching 100 km/h and in some cases 250 km/h) for all services provided on-board.

  • Service Availability on board the train, wherever it resides along the tracks (and at station platforms), with “Business” services being highly tolerant to low availability levels.

  • Service deployment time, that is the time required for deploying all CDN components over the 5G infrastructure through its orchestration layer and verifying initial connectivity between them, should be less than 90 min (adhering to 5G-PPP targets for 5G networks).

2.2 Network Deployment Requirements and Options

Radio access network planning in railway environments needs to take into consideration the fact that area characteristics along the railway tracks may vary between remote, isolated areas with challenging terrain for radio coverage (e.g. mountainous areas, tunnels, long straight track segments, etc.) and metropolitan areas (e.g. with high buildings, tunnels, many curves, etc.). Considering also the access network capacity requirements along the tracks and, where track-side coverage is not possible, at places where the train resides for some time (e.g. at platforms, train depots, etc.), it becomes obvious that there is no single solution to address such an environment.

Currently, private GSM-R (Global System for Mobile Communications – Railway) networks serve part of the railway communication needs, mainly the rail-critical services, while public and on-board WiFi networks provide connectivity for “Business” services. To meet the service requirements and KPIs, novel architectural solutions and network deployment options tailored to the railway environment need to be considered. The fact that most services are consumed in the railway facilities, and that most of them are only relevant in this environment, makes non-public network (NPN) deployments [11] a candidate deployment option, as long as service continuity is ensured for services (especially of the “Business” type) consumed across private and public networks.

Other factors may also lead to extending public networks to the railway facilities as another deployment option. In these cases, a distributed core network deployment allowing service deployment and processing at edge compute resources, with traffic offloading at Mobile Edge Computing (MEC) network elements, can serve well the purpose of meeting the service performance requirements (especially for low latency), while optimizing public network utilization and performance. Moving one step further, for services that are consumed and/or processed at train level, the inclusion of on-board edge resources in the distributed network and service deployment setup can be considered.

Leveraging these concepts, the 5G-VICTORI project has proposed three deployment options for the case of the railway vertical network [12]:

  • Option 1: NPN (as autonomous edge): a standalone (silo) 5G Core Network (5GCN) deployed at the central premises of the vertical facility.

  • Option 2: NPN with distributed data plane (multiple edges): a 5GCN deployed at the central premises of the vertical, with several User Plane Functions (UPFs) and application servers deployed at several sites of the vertical facility.

  • Option 3: Fully disaggregated 5G architecture: a 5GCN deployed at the central premises of the vertical, UPFs and application servers deployed at several sites of the vertical facility, and a disaggregated Radio Access Network (RAN) segment.

Considering the logical network design and configuration, the performance requirements and the diversity of the complete set of FRMCS services necessitate the adoption of 5G network slicing, at service and tenant level. The 3GPP distinction between uRLLC (ultra-Reliable Low Latency Communications) and eMBB (enhanced Mobile Broadband) services can be used as the basis for network-layer slicing, over either a single private network deployment or a distributed public one.
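As a minimal illustration of such service-level slicing, the sketch below maps the two relevant service classes to the 3GPP-standardized Slice/Service Types (SST 1 for eMBB and SST 2 for URLLC, per TS 23.501); the profile fields and target values are indicative assumptions derived from the KPIs of Sect. 2.1, not the project's actual slice templates.

```python
# Indicative mapping of FRMCS service classes to network slices.
# SST values follow 3GPP TS 23.501; the KPI targets below are assumptions
# based on the requirements listed in Sect. 2.1, not project slice templates.

SLICE_PROFILES = {
    "business_embb": {               # infotainment / CDN "data shower" traffic
        "sst": 1,                    # eMBB
        "min_aggregate_dl_mbps": 150,  # per-train aggregate (lower bound)
        "max_aggregate_dl_mbps": 500,  # per-train aggregate (upper bound)
        "max_latency_ms": 100,
        "max_jitter_ms": 40,
    },
    "critical_urllc": {              # rail-critical services (out of scope here)
        "sst": 2,                    # URLLC
    },
}

def select_slice(service_class: str) -> dict:
    """Return the slice profile that a service of the given class maps to."""
    return SLICE_PROFILES[service_class]

if __name__ == "__main__":
    print(select_slice("business_embb"))
```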

3 Media Services Deployment for 5G-VICTORI

3.1 5G Network Deployment

Considering the CDN services, as well as the deployment requirements and challenges related to the railway environment, 5G-VICTORI has proposed a disaggregated, layered experimentation framework ([12, 13]) extending the 5G-VINNI experimentation facility in Patras (Patras5G) [14] along a railway track in the area of Patras, Greece.

Access and Transport Network: A straightforward access network deployment option is the installation of gNBs providing coverage along the tracks and at platforms. In addition, the 5G-VICTORI solution includes a multi-technology dense transport network providing transport coverage along the tracks to support disaggregated RAN options. Transport aggregation is also considered.

An on-board train installation is also foreseen, serving two purposes: to connect the train systems to the 5G network and to provide services to end users. It comprises: (1) 5G CPEs on-board the train allowing connectivity of the train systems to the 5G networks, (2) on-board compute servers for on-board application processing, data storage, etc., and (3) on-board WiFi connectivity for end users.

However, a key assumption for the “Business” services is that there can be service coverage gaps along the tracks, attributed either to the lack of RAN nodes or to the unavailability of RAN nodes due to the prioritization of other traffic. In this case, asynchronous services can rely on high-data-rate access network connectivity at train stations and platforms.

Edge Computing: For the deployment of services, cloud compute resources are available at the central cloud infrastructure of the Patras5G facility (University of Patras premises). In addition, to achieve high network performance and to optimize resource utilization, edge and far-edge computing are integrated as MEC (Mobile Edge Computing) in the Patras5G facility using the aforementioned deployment options. The NPN option is supported via the deployment of the complete 5GCN close to the vertical premises. The implementation is based on the Patras 5G Autonomous Edge, a portable “box” that is ideal for on-premise 5G deployments and contains everything from the 5G NR and 5GCN to service orchestration on a virtualized environment based on OpenStack.

Last-mile edge computing is considered on-board the train, to provide the necessary storage and compute capabilities for asynchronous “Business” services. Virtualization of the edge resources allows them to be used for various applications/services and to be integrated at various layers with the rest of the multi-technology, multi-layer network setup. In practice, virtualized edge resources can be integrated with the non-3GPP last-mile transport and WiFi access connectivity layer to host part of the asynchronous applications, enabling service delivery temporarily during the periods when 5G access network coverage is unavailable.

Network Management, Orchestration and Slicing: Adhering to the 5G-VICTORI architecture, a network management and service orchestration layer operates on top of the aforementioned facility to ensure that services are delivered across the edges ([12, 13]). This layer is based on Open Source MANO (OSM) and Openslice ([15, 16]), and it enables the automated deployment of services and of multiple customized slices over the complete infrastructure (access, transport and core). In the specific setup, this layer follows the Network Slice as a Service (NSaaS) paradigm.

In the context of the 5G-VICTORI railway use case, compute and network resources are included in the orchestration layer instance. The CDN components are managed as third-party Virtual Network Functions (VNFs), with the necessary Network Service Descriptors (NSDs), by the service orchestration layer; thus, automated deployment of these services across the distributed compute resources is supported, along with the creation of an end-to-end slice.
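The following is a schematic sketch of this automated deployment flow. The orchestrator client, its methods, the package names and the placement labels are hypothetical wrappers for illustration only, not the actual OSM/Openslice APIs or descriptors used in the project.

```python
# Schematic sketch of the automated CDN chain deployment through the
# orchestration layer. The client class, its methods, package names and
# placement labels are hypothetical, not the actual OSM/Openslice APIs.

from dataclasses import dataclass

@dataclass
class OrchestratorClient:
    endpoint: str

    def onboard_package(self, package_path: str) -> str:
        """Upload a VNF/NS package and return its identifier (hypothetical)."""
        raise NotImplementedError

    def instantiate_ns(self, nsd_id: str, placement: dict, slice_id: str) -> str:
        """Instantiate a network service with per-VNF placement hints (hypothetical)."""
        raise NotImplementedError

def deploy_cdn_chain(orch: OrchestratorClient) -> str:
    # Onboard the three CDN stages as VNF packages, then the NSD chaining them.
    for pkg in ("central_cdn_vnf.tar.gz", "main_cdn_vnf.tar.gz", "edge_cdn_vnf.tar.gz"):
        orch.onboard_package(pkg)
    nsd_id = orch.onboard_package("cdn_3stage_nsd.tar.gz")

    # Instantiate the chain, placing each stage at the appropriate compute site
    # (central cloud, station MEC, on-board edge) inside the eMBB slice.
    return orch.instantiate_ns(
        nsd_id,
        placement={
            "central_cdn": "patras5g-cloud",  # central facility premises
            "main_cdn": "station-mec",        # train-station MEC
            "edge_cdn": "train-edge",         # on-board compute server
        },
        slice_id="embb-cdn",
    )
```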

3.2 CDN Deployment

In this context, 5G-VICTORI aims to showcase that it is possible to provide, through a multi-stage caching CDN platform, continuous TV and VoD content to railway passengers as they move between train stations, without full 5G coverage along the tracks. In other words, it aims to showcase how the combination of the advanced, multi-stage CDN platform with the 5G-enabled “data shower”, when available, can alleviate the delays and content gaps occurring when content is delivered directly from the existing content origins. This implies a multi-stage CDN platform with appropriate caching capabilities. The CDN platform provided comprises three CDN stages (considered as a three-stage linear application graph) implemented by three CDN application components:

  1. the Central CDN Server, serving as a point of connection to various sources/TV/VoD providers etc., mainly responsible for receiving the (CDN) source content and preparing it for delivery;

  2. the Main CDN Server, serving as the main caching point that receives content from the Central CDN Server and provides the necessary functionalities and elements to support content delivery (storage and streaming) to end users; this is also called the Train Station CDN Cache/Server;

  3. the Edge CDN Server, providing the last-mile caching server that receives content from the Main CDN Server; this is also called the Train CDN Cache/Server.

In this linear application graph of the three CDN components, the most delay- and data-rate-critical interface, which determines the overall CDN service performance, is the one between the Edge CDN Server and the Main CDN Server. Connectivity over this interface is provided through a 5G slice.

The media content can be either of VoD type (e.g. pre-stored content that can be stored at the UoP cloud) or live TV pulled by the CDN from a niche content origin extending a commercial TV platform. The Content Origin platform comprises an Origin, an OTT (Over The Top) Encoder and Headend equipment, with interfaces that provide the CDN platform with access to a number of linear TV channels. The latter requires interconnectivity between the 5G Core site and the content origin. The delays occurring over this interconnection link constitute exactly the problem to be solved by the three-stage CDN platform. In this context, the CDN service is the vertical service under evaluation/validation in 5G-VICTORI, and the content origin is the necessary counterpart needed to prove its operation/performance.

The purpose of operating the CDN in a “data shower” fashion in the railway environment is to ensure that the Train CDN Cache is updated with popular content (not yet acquired) each time the train reaches a train station, i.e. whenever the train resides within 5G network coverage. This can be either VoD content that was not completely transferred during the previous stop due to its size, or live content that was broadcast during the train's trip from one stop to another and shall be stored on the train to be played in a time-shifted manner (delayed by some minutes). This update is performed in a pull-based manner, i.e. the Train CDN Cache requests content from the Train Station CDN Cache, so that, when leaving the station, the Train CDN Cache contains as much popular content as possible (from the content available for download at the previous stop).
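For illustration, the following is a minimal sketch of such a pull-based cache update, assuming hypothetical helper objects for coverage detection and content transfer; popularity ranking, catalogue layout and transfer mechanics are simplified assumptions, not the project's implementation.

```python
# Minimal sketch of the pull-based "data shower" update (an assumption for
# illustration, not the actual 5G-VICTORI implementation): while the train is
# within 5G coverage at a station, the Train CDN Cache pulls the most popular
# items it does not yet hold from the Train Station CDN Cache.

def data_shower_update(train_cache, station_cache, in_coverage) -> None:
    """Pull missing popular content while the 5G link is available.

    train_cache / station_cache: hypothetical cache objects exposing
      contents(), most_popular(), fetch(item) and store(data).
    in_coverage: callable returning True while the 5G link is up.
    """
    # Rank the station's catalogue by popularity, skipping what is already on board.
    wanted = [item for item in station_cache.most_popular()
              if item not in train_cache.contents()]

    for item in wanted:
        if not in_coverage():              # connectivity lost: stop, resume at next station
            break
        data = station_cache.fetch(item)   # pulled over the eMBB slice
        train_cache.store(data)            # partially transferred VoD can resume later
```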

Evidently, the necessary aggregate data rate between the Train CDN Cache and the Train Station CDN Cache is a function of the number of channels to be provided to end users, of the time the train resides at a train station, and of the time it takes the train to arrive at the next station with available connectivity (and Train Station CDN Cache). For instance, assuming 3–10 TV channels of 7–10 Mbps each, a stop time of about 3–5 min at the train station and a travel time of 10–15 min between two stations, the necessary data rate of the Train CDN Cache – Train Station CDN Cache link ranges between 150 and 500 Mbps, as sketched in the rough calculation below.
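A rough dimensioning relation consistent with these figures (an illustrative assumption, not a formula taken from the project deliverables) is that the content consumed over a full stop-plus-travel interval must be prefetched during the stop:

$$ R_{\text{shower}} \approx N_{\text{ch}} \cdot r_{\text{ch}} \cdot \frac{T_{\text{stop}} + T_{\text{travel}}}{T_{\text{stop}}} $$

For example, 5 channels at 9 Mbps with a 5-min stop and a 12-min travel time give $5 \cdot 9 \cdot 17/5 \approx 153$ Mbps, while 10 channels at 10 Mbps with a 3-min stop and a 12-min travel time give $10 \cdot 10 \cdot 15/3 = 500$ Mbps, spanning the 150–500 Mbps range quoted above.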

The proposed deployment scenarios assume that the Central CDN Server is deployed at the central facility premises. At the MEC level, the Main CDN Server (packaged as VNFs) is deployed and provides the necessary functionalities and elements to support content delivery (storage and streaming) to end users. An additional Edge CDN Server is deployed at the train's on-board compute resources; it is responsible for preloading and caching large amounts of content and serving the passengers even during disconnection periods. The Content Origin point(s) are located both at the central facility premises (i.e. as a video streaming server hosting VoD content), to emulate local streaming services, and remotely, to emulate a Content Origin point of a real CDN network deployment.

As the train approaches the station, the on-board 5G CPE connects to the available 5G RAN in order to download streaming content in a “data shower” fashion to the Edge CDN Server, which is deployed at the on-board compute resources of the train, with the aid of the aforementioned multi-level CDN solution.

The CDN service deployment for the demonstration activities of the 5G-VICTORI project, across the remote premises of the TV source, the central regional cloud facilities, the local railway premises or the Central Train Station, and the train, is presented in Fig. 1. The underlying 5G network infrastructure and the deployment of the 5G network functions are also presented, along with the interfaces, the 5G slices and the CDN application flow.

Fig. 1. Deployment blueprint for CDN service in static and mobile environments

4 Experimental Setup and Results

4.1 Lab CDN Deployment

For the deployment and evaluation of the different services of the above use case, the first testing phases took place in a lab setup. Remote configuration and testing of the application at the Patras5G facility was feasible. The CDN solution is containerized, and the Central CDN Server and the Main CDN Server (Train Station Cache) were deployed in an NFV-compatible format at the Patras5G cloud facility. The lab setup was also connected to an external TV streaming content source provided by a remote niche (CDN) Content Origin platform in Athens. The Edge CDN Server (Train Cache) was deployed on a laptop connected to a 5G CPE, through which it was connected to the Main CDN Server/cache over the NPN 5G network deployed at the UoP lab. An eMBB slice (meeting the relevant requirements) was configured for this session. The same laptop also emulated the end-user/passenger UE.

The access network configuration in the Patras experimentation setup is the following: frequency band n78, 100 MHz bandwidth, 30 kHz subcarrier spacing, 4x4 MIMO, TDD, UL/DL modulation 256-QAM/256-QAM. The setup for the lab testing, along with the different facilities of the deployment, is illustrated in Fig. 2.

Fig. 2. The setup for the lab testing of the CDN scenario

4.2 Performance Results

The CDN (data shower) application was evaluated in detail through the execution of the following five tests and the measurement of the relevant KPIs. The network conditions and metrics were monitored by monitoring software and the results were displayed on a local screen.

Initially, the CDN Application Scenario Deployment test was performed, in which the CDN components were deployed through the orchestration layer at the UoP cloud and the average time required for the VNF deployment was measured. It should be noted that a low application deployment time (< 90 min) is a key target KPI, measured as the difference between the timestamps of the application deployment request and the application deployment completion. From the results, a service deployment time of 5.36 min was achieved.
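The measurement itself reduces to a timestamp difference; a trivial sketch (with illustrative timestamps, not the measured values) is given below.

```python
# Trivial sketch of the deployment-time KPI measurement. The timestamps are
# illustrative, not the measured values: the KPI is the wall-clock difference
# between the deployment request and the completion of the last CDN VNF,
# compared against the 90-min target.

from datetime import datetime

def deployment_time_minutes(request_ts: datetime, completion_ts: datetime) -> float:
    """Return the service deployment time in minutes."""
    return (completion_ts - request_ts).total_seconds() / 60.0

t_request = datetime(2022, 3, 1, 10, 0, 0)
t_complete = datetime(2022, 3, 1, 10, 7, 30)
assert deployment_time_minutes(t_request, t_complete) < 90  # KPI target: < 90 min
```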

After the deployment of the CDN components, initial connectivity between all of them was verified and evaluated. The data rate of the link between the Edge CDN cache and the Main CDN cache was measured, as well as the data rate between the Central CDN Server and the Main CDN cache (despite both being deployed as VNFs at the same facility in the context of the lab testing). An average of around 95 Mbps was measured using the iperf tool and/or computed from the volume of data transferred in a specific amount of time.

Moving on to the synchronization of the Main CDN cache with the latest linear TV content from the Central CDN Server, the performance of the Periodic Update functionality was measured. The end-to-end 5G network latency was measured at around 35 ms on average on a high-bandwidth slice.

The next step was to evaluate the data shower mechanism from the Main CDN cache to the Edge CDN cache, which was also verified. The CDN (data shower) scenario experimentation was complemented with performance measurements of the content distribution to passengers on board. An average of around 75 Mbps was measured using the iperf tool, resulting in the transfer of approximately 2.5 GB of content in 4 min on the train, and therefore adequate content on board to serve passengers for almost 38 min after the loss of connectivity. The content was measured to be distributed to the passengers at an approximate data rate of 88.36 Mbps.
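These figures are mutually consistent under the assumption of an on-board consumption rate of roughly 8.8 Mbps, i.e. about one stream at the per-stream rates listed in Sect. 2.1 (an assumption made only for this back-of-the-envelope check, not a reported parameter):

$$ 75\ \text{Mbps} \times 240\ \text{s} = 18\,000\ \text{Mb} \approx 2.25\ \text{GB}, \qquad \frac{2.5\ \text{GB} \times 8\,000\ \text{Mb/GB}}{8.8\ \text{Mbps}} \approx 2\,270\ \text{s} \approx 38\ \text{min}. $$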

5 Conclusions

This paper has provided an overview of the media service requirements and KPIs, and of their relation to the FRMCS “Business” services category, as an indicative media-transportation cross-vertical use case. These applications have been used as a basis for the definition of the system specifications of the 5G-VICTORI solution. Delivering a high-performance deployment for these demanding verticals entails network planning based on various technologies and on the placement of compute resources in the right proximity to the end user.

To this end, the specifications have been translated into an experimental deployment for the testing and performance evaluation of services in an operational railway environment in the area of Patras, Greece. The deployment entails the integration of multi-level CDN platforms with private 5G network deployments that include edge computing capabilities and edge caching on-board the train. Multi-level CDN capabilities are enabled via “data showers” at selected locations along the train route. Initial lab testing activities have validated the applicability of such extensions to the railway environment, taking into consideration the time trains reside/wait at platforms, the number of stations along the route and the travel time between them, as well as the Video on Demand (VoD) and semi-real-time TV streaming service requirements.