4.1 The Converging Evolution of Telecommunications

If it is true that “you could not step twice into the same river” (HeraclitusFootnote 1), then by supporting the evolution of the Internet, telecom infrastructure is transforming itself, ultimately becoming part of the river and taking on the very nature of the Internet. But what, actually, is the nature of the Internet?

The Internet is at the dawn of the digitization era, where everything and everyone will be connected everywhere. Most of our daily tasks will be automated, our lives will be simplified and our decision making improved. It will be challenging to stay disconnected and live a normal life. This will not be the final stage of Internet evolution but, as argued by Steve Case (2016), the beginning of the third Internet era.

The first Internet era was defined by the building of the Internet infrastructure (1985–2000), a very creative period of pioneers who laid the foundations for everything that followed, linking content online with a URL and making it discoverable. During the second wave (2000–2015), mostly consumer centric, the focus turned from connecting people to creating new ways for them to access information, leveraging the smartphone revolution, a seamless integration of hardware, software and services which unleashed the app economy. Companies like Google and Facebook were able to develop on top of the Internet infrastructure to create search and social networking capabilities, while apps like WhatsApp and Snapchat became the most successful smartphone companions. The third era, or wave, is on the way. It will be characterized by a period in which the Internet is integrated into every aspect of everyday life, in increasingly ubiquitous ways. This will vastly transform most of the major “real world” sectors like entertainment, health, education, transportation, energy, financial services, food and other industries representing the largest part of the world economy.

However, the Internet is telecommunications’ most unintended success. Its rise was a revolution that happened despite the efforts of telecom companies to harness it; in the end, it was the telecom companies that were ruled and transformed by the Internet.

During the first Internet wave, the telecom infrastructure was the critical and exclusive gateway enabling slow, painful access to a world of marvels online under the standard telecom paradigm of circuit switching. In circuit switching, two telephones (or two computers) establish a communication channel (circuit) through the network before they can communicate. The circuit works as if the telephones were physically connected by a dedicated line. This guarantees that the full bandwidth of the channel is dedicated to the call alone and remains reserved for the duration of the communication session. Circuit switching is relatively inefficient, since the communication channel is reserved whether or not the connection is used, but it has the advantage of ensuring the best possible quality of communication given the available resources. Accordingly, during the first Internet wave a dial-up Internet connection allowed users to navigate or to talk over the phone, but not to do both unless they had a second and very expensive dedicated data line.

In the second wave, the “always on” imperative forced telecom providers to change the underpinning communication technology from circuit switching to packet switching. In packet switching, every communication is split into small pieces, called packets, which are transmitted through the network independently. Each packet is labelled with its destination address and a sequence number for ordering it in relation to the other packets. At the destination, the packets are reassembled in sequence to reproduce the original message. In this way, every packet can be routed via a different path and the network bandwidth is shared by packets from multiple competing communication sessions, resulting in a more efficient use of the network but also a potential loss of quality compared to the service guaranteed by circuit switching. But the risk was worth the savings, because network capacity was becoming a scarce resource.
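To make the mechanism concrete, here is a minimal Python sketch of packetization and reassembly. It is illustrative only: the packet size, names and message are invented, and no real network transmission takes place.

```python
from dataclasses import dataclass
import random

@dataclass
class Packet:
    destination: str   # address label carried by every packet
    seq: int           # sequence number used for reordering at the destination
    payload: bytes     # a small slice of the original message

def packetize(message: bytes, destination: str, size: int = 8) -> list:
    """Split a message into fixed-size packets, each labelled and numbered."""
    return [Packet(destination, i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets: list) -> bytes:
    """Reorder packets by sequence number and rebuild the original message."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

msg = b"Packet switching shares the network among many sessions."
packets = packetize(msg, destination="host-B")
random.shuffle(packets)              # packets may arrive out of order via different paths
assert reassemble(packets) == msg    # the destination still recovers the original message
```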

Packet switching saw its first large-scale adoption on mobile phones, permitting people to talk on the phone and navigate at the same time, but then spread to the Internet, with applications like Skype, and to more traditional fixed lines. The technology behind voice over the phone changed, adopting Internet communication standards: the conversation flow was split into thousands of data packets sent on a best-effort basis, without any quality-of-service guarantee, over a normal data line using Voice-over-Internet-Protocol (VoIP) technology. This transformation unveiled big opportunities for telco operators along with new services for end users. But it was also the beginning of the “internetization” of telecom technologies, the inner transformation of the telecom infrastructure, which was adopting more and more solutions and technologies developed or refined by Internet players. From then on, the transformation soon became irreversible.

However, during the upcoming third wave, this evolution of telecom infrastructure will go even further down this path. Telephone exchanges will be converted into data centers, telecom equipment will be virtualized on commoditized computer hardware, and traditional network architectures will turn into software-defined networks.

It will be a bigger revolution than anything that has ever occurred in the history of telecommunications. As we will illustrate in this section, these developments are needed to serve the rising demand for services generated by a growing number of connected devices, all transmitting more data and requiring higher network quality. The importance of telecommunications networks has never been so critical.

4.1.1 A Growing Number of Connected People

If the health of an industry were to be judged only by the demand for its products or services, the outlook for the telecommunications sector could not be better.

First, the potential market for telecommunications—the entire world population—continues to grow, and to no small degree. An average growth rate of 1.2% per year may not look like much, but in 18 years (from 2000 to 2018) it equates to an increase of 1.5 billion people, a total growth of 25% in world population.

Second, in telecommunications, everything else grows even faster. In the same period the number of Internet users rose by 3.8 billion (14% CAGR) and mobile subscriptions by 7.8 billion (15% CAGR), adding more than 4.9 billion unique mobile users to the telecommunication market (a 13% CAGR). Only fixed-telephone subscriptions have decreased, albeit just by 52 million (–0.7% CAGR, Fig. 4.1).

Fig. 4.1
figure 1

The evolution of the global population, subscriptions and users in telecommunication services (source: World Bank, ITU, GSMA and author’s estimations, 2018)

However, this scenario is not destined to last. Between 2018 and 2022, the world population will increase by 318 million, which means the growth rate will slow to 1% CAGR (a –17% change). In the same period, only 621 million Internet users will be added (3.6% CAGR, a 75% drop) and just 605 million mobile subscriptions (1.7% CAGR, an 88% decrease), with 290 million new unique mobile users (1.3% CAGR, –90%). Fixed-telephone subscriptions will continue to decline, but at a quicker pace, losing 87 million lines (–2.5% CAGR, a decrease eight times faster than before).
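All of these growth figures are compound annual growth rates (CAGR). As a hedged illustration of how such rates are derived, the short Python snippet below reproduces two of them from rounded, approximate start and end values (the exact ITU and World Bank series are not used here).

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

# Rounded, illustrative figures for 2000-2018 (18 years), not the exact source data.
print(f"World population: {cagr(6.1e9, 7.6e9, 18):.1%} per year")   # roughly 1.2%
print(f"Internet users:   {cagr(0.4e9, 4.2e9, 18):.1%} per year")   # roughly 14%
```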

However, this evolution will not be the same in every country, a fact which will shape the kind of telecom infrastructures that will serve these new potential users. The main reason is that the population is larger, and grows faster, in the poorest countries. While in developed countries, which represent just 17% of the world population, the average growth rate is only 0.4%, in developing countries (83% of the world population) it is more than triple that (1.4%), and the least developed countries, a subset of developing countries accounting for 13% of the world population, grow more than six times as fast (2.6%). This means that of the total population growth since 2005 (1.1 billion people), about 94% (more than a billion) were not born in the wealthiest and most developed countries. For those populations, which make up the largest share of humanity and occupy the greater part of our planet, the cost of telecommunications infrastructure will be a real issue, because fixed broadband networks are far more expensive to implement and have many more constraints than mobile networks.

As a consequence, even if it is true that “the world is going mobile”, the imperative “mobile first” means different things in different areas of the world, and this will continue to be the case. If at a global level the ratio between mobile broadband users and fixed broadband users in 2018 was 4.9 to 1, this ratio is undoubtedly destined to grow all over the world to create an ever-expanding distance between developing countries and developed countries (Fig. 4.2). In the more prosperous nations of the world, this ratio will be 3.4 mobile users for each user on the fixed network, while developing countries will range from 5.9 to 1, up to 21 to 1 in less developed countries.

Fig. 4.2
figure 2

Ratio evolution between mobile broadband and fixed broadband users in developing and developed countries (source: ITU, Global and Regional ICT statistics 2018, https://www.itu.int/en/ITU-D/Statistics/Pages/stat/default.aspx)

These differences are significant not so much for the relative distances that appear among different areas of the world, but for the differences in absolute terms that create incentives for the development of the telecommunication networks of the future. About 17% of the world’s population lives in developed countries, where population growth is low but telecom operators can afford to invest in fixed broadband networks, even if people are increasingly abandoning fixed telephony. The rest of the world, whose population is still growing at varying rates, will not go through the same model of telecommunications development as the most developed countries. On the contrary, less developed regions are focusing most of their efforts on mobile telecommunications networks, or on alternatives that offer coverage at low cost, such as satellites. These fundamental differences are reflected not only in the development plans of telecommunications operators but also in those of telecom equipment producers. Both must decide in which direction to push development efforts for new mobile technologies, and that decision will impact everybody else who is interested in the future of Internet services.

4.1.2 A Growing Number of Connected Devices

Telecommunication operators have seen their customer base grow by more than eight billion subscribers over the last 20 years. But the most significant growth factor for the future will no longer be the number of users; it will be the number of connected devices owned by each user.

Between 2018 and 2022, more than 22 billion new Internet of Things (IoT) devices will be connected, with an average growth rate of 34% per year. Sensors, cameras, smart speakers, smart lockers and hundreds of other types of devices will accumulate investments of more than 4.6 trillion dollars. Another 10 trillion dollars will be added to this figure between 2023 and 2026, to install more than 31 billion connected devices, reaching an installed base of more than 64 billion devices and an average annual expenditure of 3.3 trillion dollars in 2026 (Fig. 4.3).

Fig. 4.3
figure 3

IoT installed base and yearly spending 2016–2026 (source: Business Insider Intelligence, The Internet of Things Report 2019)

This enormous number of devices, once connected, is destined to change many sectors, and indeed our entire world. While the number of annual installations (Fig. 4.4) is expected to skyrocket from the current 1.5 billion in 2018 to 8.3 billion in 2023, the average cost of an installation, which could involve many devices, will decline from 2019 until 2022 thanks to economies of scale. Then, it will rise again due to an expansion in the average size of the installation. This proliferation of IoT devices everywhere and in every aspect of our future life has already begun, but only just. Soon, with a Cambrian explosion creating thousands of new IoT typologies and use cases, a long journey will begin that will make these objects more and more intelligent, useful and reliable, thanks to the use of artificial intelligence, new communication technologies and better planning.

Fig. 4.4
figure 4

IoT annual installations and average yearly spending per installation 2016–2026 (source: Business Insider Intelligence, The Internet of Things Report 2019)

IoT devices already power much of the developing data-based economy, and are transforming the relationship between the physical and digital worlds for enterprises, consumers, and governments. Companies are using devices to automate and optimize workflows and decrease labor costs. The most-used types of IoT solutions are remote monitoring devices, asset tracking systems, smart facility management and wearables.

The consumer and business IoT markets differ significantly. The former is made up of the portions of the IoT that serve end users in their homes or personal lives, like smart speakers, smart home devices, smart thermostats or smart lockers; but it is not only about devices. Companies like Samsung and Whirlpool are integrating smart appliances with ecommerce applications, and beginning to build services out of smart home devices.

Governments are investing in creating smart cities using a range of technologies aimed at reducing crime, saving money, facilitating small business and improving environmental conditions. Smart cities leverage IoT devices like connected sensors, lights and meters to gather data to analyze. These data provide insights on infrastructure, population and public services, and enable cities to create efficiencies that affect the lives of their residents, as discussed by Gatti and Chiarella in Chap. 6.

No matter how you look at it, the IoT market is destined to bring great changes, becoming a natural complement of our daily life, just as smartphones are today. On average, on a global scale, we will go from 1.1 IoT devices per capita to 7 in just 8 years. But in many advanced economies, such as the US, growth will be much stronger, going from 2.5 to 26 IoT devices per capita.

This will represent a major challenge for telecommunication infrastructure, especially because this trend is coupled with a skyrocketing number of users, subscribers and devices. As we will see later in this chapter, telecommunication infrastructure will have to deeply evolve its technology and architecture to be able to connect and serve these users. But even if the technical answer to this challenge is very complex, the main result will be simple: a huge growth in data traffic.

4.1.3 A Growing Flow of Data

Overall, Internet traffic will triple from 2017 to 2022, from 122 exabytesFootnote 2 (EB) per month to 396 EB by 2022 (Cisco 2018), which represents a CAGR of 26% (Fig. 4.5).

Fig. 4.5
figure 5

Forecast of Internet traffic per month by 2022 (source: Cisco VNI Global IP Traffic Forecast, 2017–2022, November 2018)

Globally, per capita Internet traffic has followed a similarly steep growth curve over the past few years. In 2000, per capita Internet traffic was 10 megabytes (MB) per month; in 2007 it was still well under 1 gigabyte (GB) per month; by 2017 it had reached 16 GB per capita. This number will top 50 GB per capita by 2022.

Internet traffic continues to proliferate, exceeding all expectations. Indeed, this forecast represents a slight rise over past predictions, which projected a CAGR of 24% from 2016 to 2021 (Cisco 2017), mainly caused by an increase in the share of mobile traffic as a percentage of the total IP traffic.

All this traffic will not be distributed evenly between fixed and mobile networks in different countries; instead, there are a variety of models of network usage and device adoption. However, these models are more complex than the simple distinction between developing and developed countries. For example, a growing number of nations are seeing fixed-traffic growth that rivals that of their mobile traffic. The United States is the outlier in this trend, with an upturn in fixed Internet traffic of 26% in 2017 and in mobile of 23% over the same period. Japan, Korea, Canada, Germany and Sweden all have fixed growth that is only slightly lower than mobile, but most countries have significantly higher growth rates for mobile than for fixed connections (Fig. 4.6).

Fig. 4.6
figure 6

Fixed and mobile Internet traffic growth rates (source: Cisco VNI Global IP Traffic Forecast, 2017–2022, November 2018)

The relationship between fixed and mobile networks is more complex than that of two alternative worlds. When more mobile data is transmitted, this does not necessarily mean more traffic on mobile networks. In fact, just the opposite is true: a continuously increasing part of data traffic, e.g. from smartphones, is offloaded to wifi networks which are connected to wired networks (Fig. 4.7). For this reason, streaming movies or music on mobile devices usually transits on fixed networks, not mobile ones. This offloading role of wifi networks, which became dominant in 2015 and never stopped growing, will top 63% of the global mobile traffic in 2021. In fact, mobile networks could not handle all the data traffic generated by all the mobile devices if it were not for wifi networks, at least not with the current network architecture.

Fig. 4.7
figure 7

Offloading of mobile traffic to wifi, % of global mobile traffic (source: Cisco VNI Global IP Traffic Forecast, 2017–2022, November 2018 and Venkateshwar et al. 2019a)

Public wifi networks keep multiplying (Fig. 4.8). Globally, total wifi hotspots (including homespotsFootnote 3 and public hotspots) will quadruple from 124 million in 2017 to 549 million by 2022. Hotels, cafes and restaurants will have the highest number of hotspots globally by then, but the fastest growth is in healthcare facilities such as hospitals. This continuous expansion is the reason for the emergence of associations like the Wireless Broadband Alliance (WBA), founded by AT&T, BT, Cisco Systems, Comcast, Intel, KT Corporation, Liberty Global, NTT Docomo and Orange, among others. Operating like a consortium, together they manage more than 30 million hotspots globally. Their goal is to create opportunities for service providers, enterprises and cities to improve customer experience on wifi and similar technologies, but also to eventually serve new markets like IoT. Flexibility and low cost make wifi networks an important cornerstone for mobile users. Although intrinsically insecure and insufficiently effective at managing interference in an environment increasingly dense with wireless devices, wifi networks naturally behave like an infrastructure without really being one.

Fig. 4.8
figure 8

Global public wifi hotspots: 2015–2022 (source: Cisco VNI Global IP Traffic Forecast, 2017–2022, November 2018 and Venkateshwar et al. 2019a)

Globally, the rise in Internet traffic will be higher on mobile networks than on fixed networks (Table 4.1). So, it is not surprising that the percentage of total data transmitted on the move will increase as well. However, fixed network traffic will remain dominant by far, even if its share will decrease slightly, from 85 to 78% of the total. Meanwhile, consumers, who already represent the largest traffic segment (generating 83% of it), will create even more traffic in the future (27% CAGR between 2017 and 2022) compared to businesses (23% CAGR).

Table 4.1 Global Internet traffic growth 2017–2022

From a geographical point of view, despite becoming only the second-fastest growing IP traffic area (surpassed by Latin America), Asia Pacific is—and is destined to remain—the region with the highest share of total Internet traffic in the world, going from 38% in 2018 up to 44% in 2022. North America is in second place, with Europe a distant third, and this order will hold until 2022. Instead, thanks to its very high growth in Internet traffic, Latin America will replace the Middle East and Africa in the penultimate position.

However, the growth rates of Internet traffic inside these main continental areas reflect only the evolution of their final users’ activity. Everyone on the Internet is connected to everyone else, but not all of them are equally important for all the others. At a global level, the volumes of international Internet traffic between geographic areas, no matter the direction, naturally give rise to a ranking of importance depending on the concentration. As shown in Fig. 4.9, it is no surprise that the U.S. and Canada are still the center of the global Internet. In fact, they attract and concentrate the largest share of traffic, measured in terabits per second (Tbps).Footnote 4 Although less so than in the past, their central position is still indisputable. These countries are followed by Europe, which is a hub for the Middle East and Africa, with Asia in third place but rapidly rising.

Fig. 4.9
figure 9

Global inter-regional traffic: Tbps (source: TeleGeography 2019)

4.1.4 The Evolution of the Consumer Market

This top-down scenario of demand evolution in the telecommunications market would not be complete without adding some apparently marginal details about the ongoing transformation of the telecom infrastructure and its structural components.

The first, and most important, concerns the characteristics of consumer traffic, by far the most important component of the demand for communication services. Not only is consumer demand growing faster than business demand, but this growth applies to both fixed and mobile traffic, with the latter rising at almost twice the fixed rate (Table 4.2).

Table 4.2 Global consumer Internet traffic growth: 2017–2022

The main source of traffic is—not surprisingly—video, which in 2018 represented about 75% of total traffic. With an average growth rate of 34% (CAGR), in 2022 video will account for 82% of the total, the equivalent of ten billion DVDs per month. In part, this leap will be boosted by the increase in transmission quality. In Ultra-High Definition (UHD or 4K), the bit rate for video streaming runs at about 15–18 Mbps, more than double the HD rate and nine times more than Standard Definition (SD). Given that by 2022 about 62% of the installed flat-panel TV sets will be UHD, up from 23% in 2017 (Cisco 2018), this proliferation of video usage should come as no surprise.
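A rough, back-of-the-envelope conversion shows why higher video quality translates into so much more traffic. The bit rates below are the approximate figures cited above (and an assumed SD rate of about 2 Mbps), not precise measurements.

```python
def gb_per_hour(mbps: float) -> float:
    """Convert a streaming bit rate in Mbps into gigabytes of traffic per hour."""
    return mbps * 3600 / 8 / 1000   # Mbit/s -> Mbit/h -> MB/h -> GB/h

for name, mbps in [("SD", 2.0), ("HD", 7.5), ("UHD/4K", 16.0)]:   # approximate rates
    print(f"{name:7s} at ~{mbps:4.1f} Mbps -> ~{gb_per_hour(mbps):4.1f} GB per hour of streaming")
```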

Furthermore, 4K video is not the final step in the evolution of video quality. BS8K, the first broadcast channel in 8K technology (also known as Full UHD or FUHD, requiring double the bit rate of 4K), was launched by the Japanese broadcaster NHK on December 1, 2018. This move aimed to begin experimenting in view of the 2020 Summer Olympics in Tokyo, which will be broadcast entirely in 8K. As the bar on average video quality rises, video traffic is likely to intensify even further.

Apart from video traffic and web browsing, online gaming will be the most important traffic generator, growing ninefold between 2017 and 2022. Gaming on demand (or cloud gaming) and streaming platforms for gamers have been in development for several years, and they now appear to be sufficiently mature from a technological standpoint. In traditional on-console gaming, such as with a PlayStation or Xbox, graphical processing is performed locally on the gamer’s console or computer, without creating Internet traffic. With streaming platforms for gamers, instead, the graphics of the game are produced on a remote server and transmitted over the network to the gamer, just like a Netflix video streamed from the cloud to the user. As cloud gaming becomes more and more popular, gaming could turn into one of the largest Internet traffic generators. This would bring with it an important advantage: a powerful ally in the fight against counterfeiting and piracy. This was a winning move in the music industry, and it is succeeding in the movie business too. A case in point: file sharing is no longer increasing in absolute numbers, and its share of total traffic is actually declining, from 7 to 2% (–3% CAGR).

Virtual Reality (VR) and Augmented Reality (AR) applications today are still too insignificant to be included in a ranking like the one in Table 4.2, but in the future they could be the biggest potential traffic generators. Indeed, VR and AR are poised to grow 12-fold over the next 5 years (65% CAGR), a promising development that stems mainly from downloads of large virtual reality content files and applications. But this will prove to be a very conservative prediction if virtual reality streaming wins the popularity it deserves.

Another major trend is the fact that busy-hour traffic (defined as traffic in the busiest 60-min period of the day) continues to grow faster than average Internet traffic (calculated as the simple “average” of the Internet traffic during a day), which is quickly losing relevance (Cisco 2018). This phenomenon is noteworthy because service providers plan network capacity according to peak rates rather than average rates, and those two measures are diverging (Fig. 4.10). Between 2017 and 2022, global busy-hour Internet use will grow at a CAGR of 37%, compared with 30% for average Internet traffic, a gap destined to widen more and more.

Fig. 4.10
figure 10

Average Internet traffic and busy-hour traffic (source: Cisco VNI Global IP Traffic Forecast, 2017–2022, November 2018)
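A simple projection makes the divergence tangible: if busy-hour traffic grows at 37% per year and average traffic at 30%, the peak-to-average gap widens every year. The indices below are illustrative, starting from an arbitrary common baseline of 100 in 2017.

```python
# Illustrative projection of how the busy-hour/average ratio diverges over five years.
busy, avg = 100.0, 100.0                # arbitrary common baseline for 2017
for year in range(2018, 2023):
    busy *= 1.37                        # busy-hour traffic CAGR
    avg *= 1.30                         # average traffic CAGR
    print(f"{year}: busy-hour index {busy:5.0f}, average index {avg:5.0f}, ratio {busy / avg:.2f}")
```

Since capacity is planned on the peak curve, by 2022 operators would have to dimension the network for roughly 30% more capacity than the average-traffic curve alone would suggest.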

Again, video is the main underlying reason for accelerated busy-hour traffic growth. Video has a “prime time,” unlike other forms of traffic (such as web browsing and file sharing), which are spread almost evenly throughout the day. Because of this video consumption pattern, the Internet now has a much busier busy hour, and Internet traffic at this time will grow faster than average traffic. More specifically, this happens because video, which is gaining traffic share, has a higher peak-to-average ratio than data or file sharing. In addition, the composition of Internet video is changing, with more live video, ambient video and video calling; all these uses have a peak-to-average ratio even higher than on-demand video. For telecom operators, this trend will create more demand for faster and more reliable connections. But it also puts pressure on them to increase investments in network capacity, which is already scarce.

Speed is always a critical factor in Internet traffic, but sometimes for counterintuitive reasons. The Jevons paradox, or Jevons effect, is well known in environmental economics: increased efficiency in the use of a resource leads to increased consumption of that resource (e.g. a higher number of fuel-efficient cars leads to more car usage and thus greater fuel consumption). This paradox runs counter to the assumption, common among governments and environmentalists, that efficiency gains will lower resource consumption, an assumption that ignores the rebound effects of improved efficiency. But the paradox applies to telecommunication usage just as it does to fuel-efficient cars. In fact, service providers have discovered that users with greater bandwidth generate more traffic. When speed accelerates, users stream and download greater volumes of content. By 2022, around the globe, households with high-speed fiber connectivity will generate 31% more traffic than households connected by xDSL or cable broadband (Cisco 2018). The average fiber-to-the-home (FTTH) household generated 86 GB of traffic per month in 2017, and will produce 264 GB per month by 2022.

From the point of view of telecommunication infrastructure, this means that telco operators must also factor the rebound effects of their investments into their calculations, expanding the capacity of their access infrastructure more than past trends would suggest after every network performance upgrade. On top of this, operators should also invest more than proportionally to improve the bandwidth of the backhaul connectionFootnote 5 every time they upgrade the access technology of their networks, migrating for example from ADSL to FTTH. If such upgrades are not monetized at all, or only partially monetized, as has happened several times in the past, then the net effect for a company investing in improving its network performance will be a decline in profitability. This will be accompanied by reduced network quality (unless additional investments are made to cover the corresponding rebound effects) and, again, lower profitability.

4.2 The Evolution of the Telecom Network as a Consequence of Demand Evolution

It is rare that a product can revolutionize an entire industry. But the iPhone launch in 2007 set in motion a chain of changes that, like a tectonic event, radically transformed the mobile telecommunications landscape. This device converted the “raw material” offered by the industry from voice communication with some messaging and little data into a data service. Voice and messaging are still offered and promoted separately; from a technological point of view, however, they are both data services wrapped in different packages, even if voice is still billed as the main service, accounting for 52% of revenues (Fig. 4.11).

Fig. 4.11
figure 11

Breakdown of global wireless revenues (source: Bloomberg, Ovum, Company Reports, Barclays Research, 2019). Note: Messaging and data revenues estimated as a percentage of total wireless data service revenue

In this industry, in which the raw material has completely changed, the network can no longer be taken for granted by its users. In the past, the most difficult test for any telephone network was the ability to handle the explosion of calls on Mother’s Day. In this new scenario, the busiest day for a network can happen any day: for example, when a new season of a popular series is released for streaming, or when a smartphone OS upgrade is made available, or every time there is a new popular event, or a combination of these circumstances. A network today needs to be always ready to reach a new, higher peak (Donovan and Prabhu 2017).

Moreover, as we have seen in Sect. 4.1, there are more and more people all over the world, with more connected devices, transmitting more data and creating higher peaks of network utilization for uses that are increasingly critical. This creates a problem for telecom infrastructure that is difficult and expensive to solve with traditional equipment. If network loads cannot be forecast and continue to grow at such a rapid pace, the rigidity and cost of traditional equipment make it very expensive to respond in an effective way. Capacity must be sized for peaks, remaining unused for the rest of the time; and overcapacity must also be factored in to create a safety margin for the continuous growth of traffic.

If the only way to ensure a high-quality, sustainable user experience during the rapid expansion of network traffic is the ability to quickly scale capacity up and scale it out geographically, then a network should behave like cloud computing. What this means is that the network should be able to expand its capacity automatically, following predefined rules, when there is a peak in demand—and all this without active intervention by the telecom provider. Then, when the peak is over, the network should reduce the capacity allocated to manage the peak and reallocate it to deal with another peak in another area. Or this surplus capacity could be put on stand-by, waiting for another surge in traffic demand somewhere else. But there are two practical constraints to consider here: to create savings, telecom operators should use commodity hardware and centralize resources so that they can be reallocated as needed.
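The predefined scaling rules mentioned here can be sketched as a simple threshold-based autoscaler. The fragment below is a conceptual sketch only: the thresholds, the step size and the `utilization()` and `set_capacity()` hooks are hypothetical placeholders, not any operator's actual API.

```python
def autoscale(region, utilization, set_capacity,
              current, minimum, maximum,
              high=0.80, low=0.30, step=0.25):
    """Grow a region's capacity during demand peaks and release it afterwards.

    utilization: callable returning the region's current load (0..1)
    set_capacity: callable applying the new capacity value to the region
    current/minimum/maximum: capacity units allocated to the region
    """
    load = utilization(region)
    if load > high and current < maximum:
        current = min(maximum, current * (1 + step))   # scale up to absorb the peak
    elif load < low and current > minimum:
        current = max(minimum, current * (1 - step))   # release idle capacity
    set_capacity(region, current)    # freed capacity can be reallocated to another area
    return current
```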

Managing the network like cloud computing, without specialized, dedicated hardware but using standard commodity servers, is possible only if operators radically switch away from traditional network equipment and use software-defined networking (SDN) and network function virtualization (NFV) instead.

SDNs call for a completely different approach, abstracting physical networking resources (e.g. switches, routers) and replacing them with software. SDN is a solution developed by telecom operators years ago but widely and successfully adopted in data centers. An SDN centralizes network intelligence and decision making, while the forwarding components that implement the central rules remain distributed. An internal study by Bell Labs shows that SDNs reduce operational costs by more than 50% compared to legacy technologies, and improve traffic optimization by as much as 150% in terms of capacity utilization (Weldon 2016). In addition, SDNs make it possible to separate non-mission-critical workloads, transferring compute and storage processes to low-cost data center facilities and services, such as those offered by public cloud providers.
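Conceptually, the split between a centralized control plane and distributed forwarding elements can be sketched in a few lines of Python. This is a toy model only: no real SDN protocol (such as OpenFlow) is used, and the class and rule names are invented for illustration.

```python
class Switch:
    """A forwarding element: it only applies the rules pushed by the controller."""
    def __init__(self, name):
        self.name, self.flow_table = name, {}     # destination -> output port

    def install_rule(self, destination, out_port):
        self.flow_table[destination] = out_port

    def forward(self, destination):
        return self.flow_table.get(destination, "drop")

class Controller:
    """Centralized network intelligence: decides the paths and pushes the rules."""
    def __init__(self, switches):
        self.switches = switches

    def program_path(self, destination, hops):
        # hops: list of (switch_name, out_port) pairs describing the chosen path
        for switch_name, port in hops:
            self.switches[switch_name].install_rule(destination, port)

switches = {name: Switch(name) for name in ("edge-1", "core-1")}
controller = Controller(switches)
controller.program_path("10.0.0.7", [("edge-1", "port2"), ("core-1", "port5")])
print(switches["edge-1"].forward("10.0.0.7"))     # -> port2
```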

Complementing SDN with NFV has an even stronger impact on savings and flexibility in network management. NFV can replace network devices (load balancers, firewalls, intrusion detection devices, for example) with software (virtualizing them) and run them on commodity hardware. The network and almost all its components can then be reconfigured and provisioned via software to quickly meet fluctuating needs and demands.

For a network, changing ‘quickly’ does not mean changing instantaneously. However, in the new paradigm of virtual network infrastructure even milliseconds can matter, because latencyFootnote 6 and bandwidthFootnote 7 are the most critical requirements that networks need to manage, even more so in the future, and they are strictly interconnected.

4.2.1 The Problem of Bandwidth and Latency in Telecommunication Networks

Often Internet service providers advertise their connections using bandwidth as the main metric for speed. They claim that their connection is as fast as 100 Mbps, or that their speed is 20% faster than their competitors’. But these claims are misleading. Bandwidth is the amount of data a user can receive every second; it is not a measure of speed. If the Internet connection were a pipe, bandwidth would measure how wide, or narrow, the pipe was, while latency would be how fast a drop of the liquid it carries moves from one end to the other.

Distance is the primary cause of latency. The optical impulse, moving approximately at the speed of light, incurs about 4.5 ms of latency for every 1000 km, and therefore a response time of 1 ms requires a proximity of about 100 km or less.
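The propagation figure can be reproduced with a one-line calculation: light in a silica fiber travels at roughly the speed of light in vacuum divided by the refractive index of the glass (about 1.47, an approximate value), giving on the order of 5 ms per 1000 km one way and about 1 ms for a 100 km round trip.

```python
C_KM_PER_S = 299_792         # speed of light in vacuum, km/s (approx.)
REFRACTIVE_INDEX = 1.47      # typical value for silica fiber (approx.)

def one_way_latency_ms(distance_km: float) -> float:
    """Propagation delay of an optical signal over a given fiber distance."""
    return distance_km / (C_KM_PER_S / REFRACTIVE_INDEX) * 1000

print(f"1000 km, one way:   {one_way_latency_ms(1000):.1f} ms")      # ~4.9 ms
print(f"100 km, round trip: {2 * one_way_latency_ms(100):.2f} ms")   # ~1 ms
```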

The other cause of network latency is the delay induced by network hops.Footnote 8 Every hop adds some delay to a transmission, because data packets must be routed and/or queued for delivery over an interface that may have lower capacity than the sum of the input flows. This queuing delay is less than a millisecond on average, but in times of severe congestion it can add up to tens of milliseconds. If traffic congestion cannot be managed or avoided, the performance of latency-sensitive services will be unpredictable.
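A rough end-to-end latency budget simply adds the per-hop queuing delays to the propagation delay. The figures below are illustrative assumptions, in line with the sub-millisecond average and tens-of-milliseconds congestion cases just described.

```python
def end_to_end_latency_ms(distance_km: float, hops: int, per_hop_queue_ms: float) -> float:
    """Propagation delay plus the queuing delay accumulated at every hop."""
    propagation_ms = distance_km / 204_000 * 1000   # ~204,000 km/s in fiber (approx.)
    return propagation_ms + hops * per_hop_queue_ms

# A 500 km path crossing 8 hops, uncongested vs. congested (illustrative values).
print(f"Uncongested: {end_to_end_latency_ms(500, 8, 0.2):.1f} ms")   # ~4 ms
print(f"Congested:   {end_to_end_latency_ms(500, 8, 10):.1f} ms")    # ~82 ms
```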

In order to offer low-latency service guarantees, providers must minimize the number of network hops and maximize the available bandwidth. These dual requirements essentially mandate the creation of edge computing nodesFootnote 9 and ultra-high-capacity networks in fiber optics to provide the required connectivity to these nodes.

Low latency is a critical requisite for ensuring that SDN and NFV work effectively. But this characteristic is also important when considering interactions with humans. A nerve impulse travels at a maximum speed of approximately 100 meters per second (m/s) in the human body. Therefore, the time required to propagate a signal from the hand to the brain, excluding the time required for the brain to process the signal, is approximately 10 ms. As network latency approaches this same level, it becomes possible to interact with a distant object with no perceived difference compared to interacting with a local object (Weldon 2016).

In autonomous cars, at 120 km/h, a distance of 3 m corresponds to roughly 100 ms of delay. With about 90% of this time allocated to the processing required for the driving application to make the decision and the vehicle to act on the resulting instructions, only 10 ms can be allocated to network latency, with little tolerance for variance and extremely high availability required. Similarly, a low-latency and high-bandwidth network is key to enabling a new wave of innovative VR and AR applications, with content and processing power in the cloud. Physiologically, the vestibulo-ocular reflex (VOR) in humans coordinates eye and head movements to stabilize images on the retina. Studies have shown the VOR to require approximately 7 ms. Therefore, to avoid user disorientation, including occasional nausea, a similar level of latency must be guaranteed to VR and AR applications by the network in order to achieve mass market adoption (Weldon 2016).
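The latency budgets cited in this section follow from simple distance-over-speed arithmetic; the values below are the approximate figures given in the text, rounded for illustration.

```python
# Hand-to-brain nerve propagation: roughly 1 m at about 100 m/s.
print(f"Nerve impulse, hand to brain: {1.0 / 100 * 1000:.0f} ms")          # ~10 ms

# Autonomous car: time needed to cover 3 m at 120 km/h.
speed_m_s = 120 * 1000 / 3600                                              # ~33.3 m/s
travel_ms = 3 / speed_m_s * 1000
print(f"Car at 120 km/h covers 3 m in: {travel_ms:.0f} ms")                # ~90-100 ms

# If ~90% of that budget goes to sensing, deciding and actuating,
# only about a tenth is left for the network round trip.
print(f"Network share of the budget:  {0.10 * travel_ms:.0f} ms")          # ~10 ms
```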

4.2.2 The Telecom Network and Its Evolution

Telecom networks are changing at every level, global and national. While reducing latency and increasing bandwidth to serve growing demand is the main driver of this evolution, at the top level, where the international cables and cloud data centers sit, controlling connections is the main issue.

The structure of global telecom networks can be mapped in a simplified way as in Fig. 4.12. The big international cables that encircle the globe connect to national and local networks in facilities called international telephone gateways (for voice calls) or Internet Exchange Points (for Internet connections). Here the big carriers exchange their traffic or interconnect their networks.

Fig. 4.12
figure 12

A simplified map of a telecom network

International telephone gateways have maintained almost the same hierarchical structure as in the past, with international carriers at the top, receiving and routing international traffic from national and local operators. Internet Exchange Points, in contrast, are developing a quite different structure compared to previous years. First, there were as many as 488 of them in 2018, including exchanges in locations that are marginal for traffic routing. This increase in number has diluted the traffic of the large interconnection hubs, reducing the risk of traffic congestion while shortening the average distance of communications and the average latency.

Second, but more importantly, since 2010 big content providers, including Google, Facebook, Microsoft and Amazon, have started buying international cables to route traffic generated by their own companies and their clients on their own infrastructures. In 2006, the percentage of traffic controlled by Internet backbone providers was 80%. In 2018, for the first time, they were surpassed by content providers, who routed 54% of the global international traffic on their international cables (TeleGeography 2019).

Finally, the main providers of cloud computing services, who are these same content providers, have moved their cloud data centers up in the Internet hierarchy, connecting them in many cases directly to the exchange points. This choice is justified by the fact that 86% of the global computational load is already performed in the cloud (a figure that will reach 94% in 2021, Fig. 4.13). This is an enormous share. More importantly, 90% of total Internet traffic already goes through the cloud (a share that will hit 95% in 2021).

Fig. 4.13
figure 13

Cloud computing—global computational load and traffic (source: Cisco Global Cloud Index: Forecast and Methodology: 2016–2021, 2018)

At a lower level, under the international gateways or the Internet exchanges, there are the national networks with their telecom exchanges at core and metro level. National telecom networks are organized hierarchically to cover the entire territory of a given country and are interconnected through a redundant backbone. Between the metro-level network and the access level lies the edge network, which will be tremendously important for the future of telecommunications. At a lower level still, there is the access network, connecting urban telecom exchanges to end users. This is divided into a primary network, which goes from the telecom exchanges to the distribution cabinet, and a secondary network, from the cabinet to the final user.

Mobile networks and fixed wireless access (FWA) networks can interconnect with one another and with the central office using radio links, without laying cables. This type of connection is cheaper but also of lower quality than cable. For this reason, especially in mobile networks, radio links have been gradually set aside as residual solutions, with preference going to fiber optic connections.

From the map in Fig. 4.12, it is easy to see that there is a single network that connects all its components and users, even if it is controlled by different players using different technologies. But from the point of view of the final user it may seem different, because access to the network can be either fixed, mobile or FWA.

The Future That Comes from the Cloud

Cloud computing is a very successful business model. In 2020 its global turnover will surpass that of more traditional IT (IDC 2017). The market leader Amazon Web Services (AWS) had revenues of $25.7 billion in 2018 but was able to maintain 47% growth year-on-year. Microsoft, its closest contender, garnered revenues of $23.2 billion, up 56% from 2017.

The market is highly concentrated in the hands of a few companies: AWS, Microsoft, IBM, Google and Alibaba together hold 75% of the total market (Gartner 2019).

However, cloud computing could be considered a successful technology model too:

  • SDN and NFV implement solutions that have been the norm in cloud computing for years.

  • Edge computing has already been tested by cloud providers that today are offering specialized solutions.

  • IoT will be a potential market for telecom operators but it is an actual market for cloud providers.

  • Cloud data centers are far more energy efficient than telecom central offices.

  • The first successful implementation of augmented reality on a global scale was Pokemon Go in 2016 on Google’s edge network.

4.2.3 The Evolution of Fixed Networks

Before the deregulation of the telecommunication industry in the early 1990s, telecom operators offered a limited portfolio of content and services, built on proprietary platforms and limited to the walled garden of their network realm. Fiber to the home (FTTH) was seen as the ultimate solution for broadband access; PayTV video services were considered to be the killer application that would fund the cost of deploying a new optical access infrastructure. Just after the start of the deregulation process, the advent of the Internet and of the World Wide Web opened a new perspective, providing a platform for sharing content efficiently.

However, after the initial excitement, the realization emerged that the cost associated with deploying new wired infrastructure in fiber to every home was enormous. It would take decades to roll out the new networks, and video services offered limited additional revenue potential. Combined, these factors meant the estimated returns on the investments would take more than a decade to materialize, a period that was deemed unacceptable by investors and shareholders.

Consequently, access network providers started looking for alternative technologies to reuse their existing infrastructure to enable faster deployment of broadband services with an acceptable return on investment. In 1997, incumbent telecom operators started using new digital subscriber line (DSL) technology over their twisted pair copper wires. At the same time, cable operators introduced cable modem technology over their coaxial cable, using the so-called hybrid fiber-coaxial (HFC) technology. Both DSL and cable modems were relatively economical to deploy and offered acceptable bandwidth. The result was that FTTH was nearly shelved everywhere and restricted to greenfield deployments where the relative economics were comparable to those of copper-based technologies (Weldon 2016).

However, there were two noteworthy exceptions: Japan and Korea. In both cases, fiber deployments in metropolitan areas were considered a long-term strategic priority by the government, and the high density of housing made the economics more affordable. Later, China joined these two countries, due to a lack of existing copper infrastructure in large parts of the country and a desire to create a future-proof solution.

Access capacity for DSL services improved exponentially. Asymmetric digital subscriber line (ADSL), followed by its improved version (ADSL2), was well suited to early web browsing on the Internet, while very-high-bit-rate DSL (VDSL) was ideal for the delivery of video. Then the introduction of vectoring gave new impetus to investments. VDSL was able to support up to 100 Mbps and, if multiple copper pairs were available, their capacity could be combined through bonding across pairs, further enhancing performance. The latest DSL standards, Vplus and G.fast, are about to be deployed.

As with DSL, the same happened for cable networks using the data-over-cable service interface specification (DOCSIS) standard. In 1997, DOCSIS 1.0 provided the first specification for a non-proprietary, high-speed data service infrastructure capable of supporting Internet web browsing. DOCSIS 1.1 added the ability to differentiate traffic flows to upgrade service quality, while DOCSIS 2.0 expanded the upstream bandwidth, allowing VoIP telephony. DOCSIS 3.0 significantly boosted capacity by bonding channels which, combined with the new and improved DOCSIS 3.1, reached 10 Gbps downstream and 1 Gbps upstream, thanks to the use of a wider spectrum and better modulation.

Similarly, the evolution of the optical network has improved its already high performance while also reducing its costs. The passive optical network (PON) has emerged as the most economical choice because it enables multiple subscribers (typically 32) to share a downstream laser, passively split towards each home with individual drop fibers in a tree-like structure. The first generation of PON was the Gigabit PON (GPON) standard, which allowed 2.5 Gbps downstream and 1.25 Gbps upstream. In light of the international success of GPON, 2010 saw the release of a second generation called XG-PON (or 10-GPON), with transmission capacity amplified significantly compared to the previous generation (shared speeds of 10 Gbps downstream and 2.5 Gbps upstream). This standard, although available for some years, has not been widely adopted due to its higher cost compared to the much more common GPON system. Starting in 2012, a new standard called NG-PON2 (Next-Generation Passive Optical Network 2) was launched, with two possible options: TWDM PON (Time and Wavelength Division Multiplexing PON) and PtP WDM PON (Point-to-Point Wavelength Division Multiplexing PON). TWDM PON consists of overlapping several XG-PON systems (up to eight) operating at different wavelengths, thus creating a multi-channel optical transmission system. This new system can offer, on a single optical tree, up to eight times the transmission capacity of a single XG-PON system, reaching 80 Gbps downstream and 20 Gbps upstream or, optionally, even 80 Gbps symmetrically. The PtP WDM PON option refers to a system in which each optical channel is dedicated to an individual user, using software to create a point-to-point system on a point-to-multipoint physical network. In 2016, the standard for an additional PON, halfway between XG-PON and NG-PON2, was introduced; this new standard was called XGS-PON. It is a “symmetrical” version of the XG-PON system (10 Gbps symmetrically) but simpler than NG-PON2 (as it is not multichannel). XGS-PON has already reached technological maturity and garnered commercial interest thanks to the abundant availability of upstream bandwidth, which makes it more suitable for future applications (Weldon 2016). A key feature of the different PON generations is that, using a different allocation of wavelengths, they can coexist on the same infrastructure. Therefore, the new generation can be introduced incrementally into the network, even where the consolidated GPON technology has already been adopted, to gradually offer the higher-speed service only where the need arises.
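Because PON capacity is shared across the split, per-subscriber bandwidth depends on both the generation and the split ratio. The quick calculation below assumes a fully loaded 1:32 split and the nominal shared rates listed above; real deployments also lose some capacity to protocol overhead.

```python
# Nominal shared downstream/upstream rates in Gbps for each PON generation.
PON_GENERATIONS = {
    "GPON":              (2.5, 1.25),
    "XG-PON":            (10.0, 2.5),
    "XGS-PON":           (10.0, 10.0),
    "NG-PON2 (TWDM x8)": (80.0, 20.0),
}
SPLIT = 32   # subscribers sharing one optical tree (a typical split ratio)

for name, (down_gbps, up_gbps) in PON_GENERATIONS.items():
    print(f"{name:18s}: ~{down_gbps / SPLIT * 1000:6.0f} Mbps down, "
          f"~{up_gbps / SPLIT * 1000:6.0f} Mbps up per subscriber at full load")
```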

As a final remark on the evolution of fixed networks, we can say that the DSL and DOCSIS standards have evolved, improving their performance and the quality of their electronics. At the same time, however, their cost has increased, while the maximum usable length of the copper segment has decreased, requiring a fiber connection that comes closer and closer to the user. It is legitimate to ask whether it is still rational to keep investing in copper, given its high maintenance costs and the lack of a long-term outlook for the old copper networks. In contrast, PON technologies have continuously shored up their performance and minimized their limitations, coming closer and closer to the performance of an active connection (one fiber running straight from the central office to the user). But while the perceived value of a good Internet connection is increasing, overcoming the problem of its cost, PON standards cannot yet offer anything more than a fast, cheap connection, without added services on top that can differentiate it from the competition or create additional value.

4.2.4 The Evolution of Mobile Networks

Retrospectively, the evolution of mobile telecommunications seems simple: a new generation every 10 years. The first generation of mobile phones (1G) appeared on the scene in the 1980s, exploiting analog technology that supported voice-only calls, with poor battery life and voice quality, little security and a tendency to drop calls.

Then, in the 1990s, cell phones received their first major upgrade when technology went from 1G to 2G on GSM networks. This was a radical transformation. The switch from analog to digital communications brought call and text encryption along with data services such as SMS, picture messages and MMS. Thanks to digital modulation, voice calls were free from background noise. Only with 2.5G, also known as GPRS, did packet switching come into the picture, with data rates of 64–144 kbps, making voice calls possible during data transmission. With the GSM Evolution (EDGE or 2.75G), speeds hit 1 Mbps to satisfy increasingly data-hungry users.

Data transmission was also the key to the evolution to 3G, introduced commercially in 2001. The goals set out for this third generation of mobile communication were to facilitate data transmission and to support a wider range of applications at a lower cost. The 3G standard was based on a new technology called UMTS (Universal Mobile Telecommunications System) and a new core network architecture able to support more simultaneous active calls and/or data sessions. The maximum speed for 3G was around 2 Mbps for non-moving devices and 384 kbps in moving vehicles, giving rise to the term “mobile broadband,” which first applied to 3G cellular technology. As with the previous generation, 3G evolved into the much faster 3.5G and 3.75G, as more features were introduced to prepare for the advent of the following generation.

Conceived in 2000, first standardized in 2008, but only deployed in 2010, 4G or LTE (Long Term Evolution) is still the dominant mobile technology, and also the first to be globally adopted. Very different from its previous iteration, 4G was made possible essentially thanks to advancements in electronics. 4G can provide high speed, high quality and high capacity to users while improving security and lowering the cost of voice and data services, multimedia and Internet over IP. Potential and current applications include mobile web access, IP telephony, gaming services, high-definition mobile TV, video conferencing, 3D television and cloud computing services. The top speed shot up to 1 Gbps for a stationary or walking user and 100 Mbps when the device is moving.

In all these generations there were two constants. First, every new generation added more frequencies to those used by the previous one. Second, newer generations of phones were designed to be only backward-compatible, so a 4G phone can communicate through a 3G or even 2G network but not the other way around. The same will be true for the fifth mobile generation (5G), which will gradually be rolled out beginning in 2019.

5G networks are not an evolution of 4G networks, because their architecture is completely revolutionized with respect to the previous generation. This has several consequences for businesses that we will analyze in Sect. 4.5.2. To make a comparison, what the markets wanted from the evolution of 4G networks is the equivalent in the automotive industry of demanding a car that is 100 times lighter and 100 times more resistant: the only way this is possible is by completely upending the paradigm. More specifically, the new network will upgrade existing 4G networks in several ways:

  • 5G networks can be 100 times faster than their 4G antecedent, up to 10 Gbps.

  • Latency will potentially decrease up to 1 ms, which is 30–50 times better than before.

  • It will be possible to have up to one million connections per km2, 100 times more than 4G, which would be useful to support IoT.

  • Mobility will be improved, enabling connectivity on high speed trains moving up to 500 km/h, which is about 1.5 times better than 4G.

  • 5G will support NFV, SDN and network slicing,Footnote 10 while 4G networks were inflexible.

  • The radio interface will be 90% more energy efficient than 4G.

Mobile phone standards up to 4G were defined to serve the needs of a mass market. On the contrary, 5G was designed to serve a sum of vertical markets with very different and somewhat conflicting needs. Some of these verticals have the constraint of low latency and great bandwidth, no matter the conditions, as with virtual reality applications; others have only the constraint of low power consumption, no matter the latency or the available bandwidth, as with some IoT devices.

With 5G there will be no discernible differences between wired and wireless connections, opening a range of possibilities that can take advantage of near-instantaneous response and high data speeds. 5G will offer companies blazing-fast connections and the ability to use the cloud seamlessly for computation-intensive tasks with real-time decision-making, or for retrieving all the data needed for local decision-making.

However, big opportunities do not come at a small price. The challenge is how to meet government-mandated coverage goals even where business justification is lacking. It has been estimated that the rollout cost for 5G across Europe would be significantly higher than for 4G, running between 300 and 500 billion € (GSMA 2019b), an enormous commitment for European telecom operators.

In parallel with the evolution of mobile telephony standards, there have also been some developments in the use of the radio spectrum for mobile communication. The portion of the spectrum used for any radio communication is very important. To prevent interference between various users, every use of radio waves is strictly regulated by national laws and coordinated by an international body, the ITU. Different parts of the radio spectrum are allocated to different technologies and applications. Mobile telecom operators and broadcast television stations have well-defined limits. In some cases, parts of the radio spectrum are sold or licensed to operators of transmission services. But being a fixed and scarce resource contested by an increasing number of users, the radio spectrum has become more and more congested and precious.

A part of the spectrum is “unlicensed” or “license-free”, with predefined rules to mitigate interference. Basically, anyone can use these bands and, if they obey the rules, they have the right to transmit within given power limits. But they have no right to receive: in other words, no one has any guarantee that there will not be interference from other similar systems, as they would have in licensed 2G, 3G or 4G bands. Nevertheless, if the transmission is local and covers only small distances, this problem is usually negligible. Indeed wifi, which has the lion’s share of data transmission (see Sect. 4.1.3), works only in the unlicensed spectrum.

While other wireless technologies like LoRaFootnote 11 or MultefireFootnote 12 only use the unlicensed spectrum, standards like WiMaxFootnote 13 use both the licensed and unlicensed spectrum. But technologies such as LTE, which is the base for 4G, typically work on licensed spectrum, although they can be implemented using unlicensed bands in private implementations covering a plant, an office, or a stadium, for example. The global opportunity for “private LTE” (and in the future possibly “private 5G”) in industrial and business-critical environments is significant. Global revenues for the private LTE addressable market are projected to skyrocket from $22.1 billion in 2017 to $118.5 billion in 2023, a 32.3% CAGR. The corresponding device shipment volumes are expected to jump from 170.7 million in 2017 to 765.1 million in 2023, a 28.4% CAGR (Harbor Research 2018).
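As a quick check of how these growth figures are derived, the compound annual growth rate follows directly from the start and end values; the short Python sketch below (purely illustrative, using the Harbor Research figures quoted above) reproduces the reported rates.

  # Illustrative check of the private LTE CAGR figures quoted above.
  def cagr(start: float, end: float, years: int) -> float:
      """Compound annual growth rate between two values over a number of years."""
      return (end / start) ** (1 / years) - 1

  revenue_cagr = cagr(22.1, 118.5, 2023 - 2017)   # ~0.323 -> 32.3% per year
  device_cagr = cagr(170.7, 765.1, 2023 - 2017)   # ~0.284 -> 28.4% per year
  print(f"Revenue CAGR: {revenue_cagr:.1%}, device CAGR: {device_cagr:.1%}")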

In the U.S., the unlicensed spectrum is even more appealing given the presence of the Citizens Broadband Radio Service (CBRS). This is a relatively large, 150 MHz portion of the spectrum in the 3550–3700 MHz range, almost all of which falls within the 5G range. What is unique about this band is the fact that it is one of the few in the US authorized for multiple use cases, rather than being licensed to one operator or available for unlicensed use only. The Federal Communications Commission (FCC), the American regulator, has authorized three tiers of users under its CBRS rules (incumbents, priority access licensees and general authorized access users), leaving the use of the entire band open to unlicensed-style users, albeit with the lowest priority. The importance of the CBRS lies in its being a credible potential base for cable operators to offer a wireless service with a small investment, and powerful leverage for unconventional operators to disrupt the telecom business.
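The three-tier structure can be pictured as a simple priority scheme administered by a spectrum access system: incumbent users are always protected, priority access licensees come next, and general authorized access (unlicensed-style) users may transmit only where no higher tier would suffer interference. The Python sketch below is a deliberately simplified illustration of that ordering, not a representation of the actual FCC rules or of any real spectrum access system.

  # Simplified illustration of the CBRS three-tier priority scheme (not the real rules).
  from enum import IntEnum

  class Tier(IntEnum):          # lower value = higher priority
      INCUMBENT = 0             # e.g. federal radar and satellite earth stations
      PRIORITY_ACCESS = 1       # Priority Access License (PAL) holders
      GENERAL_AUTHORIZED = 2    # unlicensed-style users, lowest priority

  def may_transmit(requester: Tier, active_users: list[Tier]) -> bool:
      """Grant a channel only if no higher-priority user is already active on it."""
      return all(requester <= user for user in active_users)

  print(may_transmit(Tier.GENERAL_AUTHORIZED, [Tier.PRIORITY_ACCESS]))   # False
  print(may_transmit(Tier.PRIORITY_ACCESS, [Tier.GENERAL_AUTHORIZED]))   # True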

There are different ways to create disruption in the infrastructure sector, as explained by Venzin and Konert (2020), but the CBRS could change a significant part of the telecommunication ecosystem, especially in rural areas. Google, Amdocs, CommScope, Federated Wireless, Key Bridge, and Sony have already applied to become administrators of the CBRS band, ensuring real-time allocation of bandwidth between the various users based on the kind of license. Amazon is also undertaking significant testing involving the CBRS band, not just for a wireless network but also for backhaul infrastructure. One example is the use of AWS to support private LTE networks running on the CBRS spectrum. The growing interest of big players such as Google and Amazon in the CBRS spectrum highlights other potential paths of evolution for the technology. For instance, if this spectrum does allow more localized networks, each with their own network cores (similar to local cable companies), companies such as Google and Amazon are well positioned to serve as neutral host networks that manage traffic across private networks through a centralized hub (Venkateshwar et al. 2019b).

4.3 The Value of the Networks for OTTs and the Consequences for Traditional Telecom Operators

There is an interesting AT&T video from 1993 that describes the future of telecommunications as they imagined it then,Footnote 14 just before the Internet era began. There would be e-mail, mobile telecommunications, smartphones, e-commerce, search engines, and cloud computing. Everything that was imagined back then came true. But telecommunications companies such as AT&T and many others which had accurately envisioned the future were not the protagonists who were able to bring that future about. Telecommunication companies have invested in many of these services, such as search engines, e-mail, messaging apps, digital content and more. But in the end, they were unable to capitalize on their efforts and were forced to stand on the sidelines watching while others reaped the fruits of the Internet.

The real beneficiaries of the Internet revolution were a bunch of start-ups that became Internet giants. The telecom operators call them the over-the-top players (OTT) because they provide their services directly to their users, bypassing the companies that traditionally acted as controllers or distributors of any service provided through telecommunications networks. But telecommunications operators, after losing control of access to the network, now risk losing the battle to manage the value created around the network too. And that could have a big impact on the future of telecommunication infrastructures.

The main threat comes from telecom companies losing economic relevance. In 2018 there were 17 telecommunication companies in the Fortune Global 500 list, with combined revenues of $1.22 trillion. On the same list there were 46 technology companies, with revenues of $2.66 trillion. Among the ten largest companies in the world by capitalization at the end of 2018, seven belonged to the technology sector, with a combined value of $4.1 trillion, 78% of the total. None was in the telecom business (Financial Times Global 500 rankings). The same ranking in 1997, before the dot.com bubble burst, showed a combined value of 1.5 trillion dollars, of which 20% was represented by two tech companies. Just one telecom company was included in the list, accounting for 10% of the total capitalization. In between, there was a process of value erosion for telecom operators, which today manage a business that is far more important for its users than it is for the operators themselves.

Together, the American GAFAM (Google, Apple, Facebook, Amazon, Microsoft) and the Asian BAT (Baidu, Alibaba, Tencent) form the OTT group. This is a de facto oligopoly dominating most segments (search, social media, communication, e-commerce, video) with very few real competitors. The companies that do compete typically operate in a single segment (e.g. Netflix, Uber, Airbnb, JD.Com, Expedia), or in a local market (e.g. Yandex and Mail.ru in Russia, Naver and Daum in South Korea, Rakuten in Japan) (iDate 2019). The two groups differ significantly in their financial performance: the GAFAM quintet out-earns the BAT trio by a ratio of several dozen to one. But the Asian OTTs have an extraordinary growth trajectory: +30% per annum on average for the past several years. Moreover, the OTTs’ EBITDA-to-revenue ratio exceeds 30% in most cases; the only exception is Amazon, and for a good reason.

To keep revenues growing, Amazon continuously cross-finances its ventures, sacrificing its margins, only to end up once again enjoying profits well above those of other OTT companies. In both GAFAM and BAT, capex is relatively low. Most invest less than 10% of their revenue in infrastructure (compared to 18% for telcos), with the exception of Google and Facebook, which are investing heavily in data centers and submarine cables (iDate 2019). Because of their low capex, OTT players have an enormous amount of free cash flow to invest. This huge influx of liquidity allows the Internet giants to make dozens of small but strategic acquisitions a year without antitrust intervention (The Economist 2019), protecting their core business and further fortifying their positions. Investing in start-ups, and sometimes in veteran players too, enables them to move rapidly into new sectors, including non-digital ones (iDate 2019). For example, Amazon spent $13.7 billion to acquire Whole Foods.

Nonetheless, the OTTs rely mainly on telecom networks for their business and their evolution. They all offer or use cloud computing or cloud-based services as a core activity. Consequently, the telecom network is a key conditioning factor for them. In fact, many have invested in research into telecom infrastructures to keep up the pressure on telco companies to upgrade networks and improve connectivity in underserved areas or in underdeveloped countries, to expand their markets.

Below we will provide a short analysis of the main initiatives undertaken by OTTs. Our aim is to evaluate their impact on the evolution of the telecom business but, mostly, on the evolution of telecom infrastructures, as summarized in Table 4.3.

Table 4.3 A summary of OTT activity in telecommunication infrastructures and adjacent markets

4.3.1 Google Alphabet

The foremost telecom investment from the OTTs is Google Fiber. Launched by Google in 2010, Google Fiber was later moved under the Access division after Alphabet Inc. became Google’s parent company in 2015. Google Fiber was substantially reorganized in 2016, when Alphabet started slashing capital expenditures for its Other Bets segment, where the fiber company was the biggest source of cash drain. “Capex for that segment totalled $181 million in 2018, down significantly from $493 million the year before and $1.37 billion in 2016. Google at the time credited the bulk of that sum to deploying its fiber network” (Gallagher 2019).

Whether intentionally or not, Google Fiber has certainly had something to do with the pace at which 1 Gbps broadband was deployed by telecom operators such as AT&T and Verizon and by the US cable industry. According to the American Internet & Television Association, speeds of up to 1 Gbps are available today across 80% of the US via cable networks, an upward leap from just 5% in 2016. It is hard to say how much credit for that pace should be given to the spectre of Google Fiber, but some are convinced that its role was decisive, even as cable providers plan to push toward symmetrical 10 Gbps speeds (Baumgartner 2019).

Because Google’s mission is “to make sure that information serves everyone, not just a few,” other Alphabet companies are also pursuing initiatives with similar goals. An example is Project Loon, started in October 2017 within X (formerly Google X) and spun out into a separate company, Loon LLC, in July 2018. The company uses high-altitude balloons placed in the stratosphere at altitudes of between 18 and 25 km, using the LTE standard to create an aerial wireless network. At the beginning Loon used the unlicensed spectrum, but the company then started cooperating with local telecommunication operators on the cellular spectrum, delivering basic Internet connectivity to more than 100,000 people in Puerto Rico and to some of Kenya’s most inaccessible regions in 2019. A huge impact with a modest investment.

4.3.2 Facebook

Similar to Google, Facebook’s mission is “to bring affordable access to selected Internet services to less developed countries by increasing efficiency and facilitating the development of new business models around the provision of Internet access.” In keeping with this mission, Facebook launched Internet.org in 2013.

Based on a partnership with Samsung, Ericsson, MediaTek, Opera Software, Nokia and Qualcomm, Internet.org and Free Basics, the app through which it delivers its services, were being used by more than 100 million people as of December 2018. In March 2014, as part of the Internet.org initiative, Facebook announced a connectivity lab with the goal of bringing the Internet to everybody and acquired Ascenta, a maker of solar-powered drones. The company then expanded this lab activity to low-Earth orbit and geosynchronous satellites for establishing Internet connectivity in other areas. For all three projects, Facebook appears to rely on free space optical (FSO)Footnote 15 or laser communication (Harris 2019).

In 2016, with a similar purpose but a different nature, Facebook launched the Telecom Infra Project (TIP) at Mobile World Congress in Barcelona. Born as a collaborative effort with an engineering focus, TIP and its annual meeting (the TIP Summit) have become the most prominent reference point for all those who seek to generate disruption in the telecommunications infrastructure sector. Funded at its start by Facebook, TIP is jointly steered by its group of founding tech and telecom companies. The project has more than 500 participating member organizations, including all the main telecom operators, suppliers, developers, integrators, start-ups and other entities. TIP is organized in three strategic network areas that collectively make up an end-to-end network: Access (including Radio Access Network, or RAN, solutions), Backhaul, and Core and Management. At Mobile World Congress 2019, TIP was able to showcase the interoperability of its technologies in its first end-to-end telecom network demonstration.

4.3.3 Microsoft

Microsoft too has invested heavily in telecommunications, but from a very different angle. In 2011, it acquired Skype Technologies in an $8.5 billion deal; according to Trefis, in 2018 Skype had an estimated user base of 1.43 billion worldwide. In 2014 Skype accounted for the equivalent of 39% of the combined international call volume of every telco in the world (TeleGeography 2014), so Skype itself was a source of disruption for the telecommunications sector. Since then, things have changed dramatically and have even got worse for telecom operators.

Today there are many alternatives to Skype: WhatsApp, WeChat, Facebook Messenger, Viber, Line, Tango, Google Hangouts, and Samsung’s ChatOn. But none of them was conceived, as Skype was, to also have a telephone number on the public switched telephone network (PSTN), substituting a fixed telephone line with software. Moreover, Microsoft has not stopped investing in Skype, adding new features such as artificial intelligence with the ability to translate calls into 12 different languages in real time.

What is more, in recent years Microsoft has continued to invest in international submarine cables, such as the New Cross Pacific (NCP) Cable Network, Hibernia Express and AcquaComms, in order to be autonomous in connecting its data centers over long distances.

4.3.4 Amazon

Amazon has made many investments aimed at profiting from telecommunication disruption. In September 2018, Amazon Web Services announced a partnership with Iridium Communications to develop a satellite-based network called CloudConnect for IoT applications. In January 2019, Iridium completed its $3 billion satellite network Iridium NEXT, consisting of 75 satellites launched by SpaceX, of which Iridium is the largest non-government customer.

Moreover, Amazon Web Services (AWS) announced AWS Ground Station, a plan to build a dozen satellite transmission facilities throughout the world. Ground stations are essentially antenna-equipped facilities that can send and receive data from satellites orbiting the earth. Amazon will let customers rent access to these stations in the same manner that they lease access to its cloud data centers. Using this new service, companies that are too small to build and operate their own satellite transmission infrastructure will be able to access satellite services on-demand. Amazon will make it low cost and very simple, so as to replicate the key success factors of its cloud computing platform.

4.3.5 An Evaluation of the OTT Approach in the Telecom Business

Some of the moves by OTTs are aimed at putting pressure on the telecom industry, as in the case of Google, to speed up fiber investments, or Facebook, to improve quality and reduce the cost of telecom equipment. The aim of the latter is to spread Internet broadband to every remote location on the planet. Others, however, have the goal of substantially changing the telecommunication world by creating new forms of communication, as is the case with Microsoft’s Skype, or offering access to a completely new communication network, as with Amazon’s satellite network, to create a different kind of communication wherever possible. A comparison between OTTs and traditional telco operators is summarized in Table 4.4.

Table 4.4 A comparison between OTT as a whole and a typical telco operator profile

Everyone has learned that in technology, realizing a desired effect takes more than just investing; it is more effective to apply the right kind of pressure. What experience has shown in recent years is that OTTs are much more adept at achieving their objectives than telecommunication companies are at defending their own markets. But the real difference is that OTTs are playing on their home field, in a more favorable position. They have more technical skills, move faster and are less worried about failing in the struggle to innovate. They look at physical infrastructure as an unbearable burden that should be reduced to a minimum. All the key components of their products or services should either use proprietary technologies or adhere to an open standard.

Traditional telecom operators, on the contrary, have been delegating innovation to equipment vendors for years. Being complex giants, they move slowly. Because they have a make-no-mistakes culture, they are used to levels of reliability the Internet world cannot afford. Traditional telcos are intimately linked to physical infrastructure, which they consider an entry barrier and a source of competitive advantage. They are recent converts to open standards, only because they have seen the positive effects on OTTs, but they have never controlled their key technologies. In the end, their playing field is increasingly the problematic one of internetization, a world dominated by the standards of the Internet, with its technical solutions and its disruptive business models.

4.4 Evolution of the Telecom Industry and Regulation Issues

4.4.1 The Telecom Industry Evolution

Despite increasingly strong global demand for data and mobile telephony, sustained by a steady proliferation of fixed broadband connections, this magic moment of a favourable market has not translated into revenues in the same way all over the world (Fig. 4.14). Since the 2011 crisis, telecommunications revenues have risen by 8% on a global scale. Nonetheless, due to more intense regulatory and competitive pressure, this trend has not been seen across the board. In other words, revenues are up everywhere except in Europe. The Middle East and Africa saw the best of this trend, with revenue growth of 29%, almost double that of Latin America and Asia and more than triple that of North America. In Europe, by contrast, revenues decreased by 8% over the same period, with only a minimal trend reversal in 2017.

Fig. 4.14 Telecom revenues by region, 2011–17, index numbers (source: iDate 2018)

In the European scenario, mobile revenues (representing 51% of total telecom revenues) dropped by 13% and fixed telephony revenues (18% of the total) by 36%. These declines were not fully compensated by a 15% increase in fixed broadband, which unfortunately represents only about one-third of industry revenues (Fig. 4.15).

Fig. 4.15 Telecom revenues in Europe (EU 28) by service, 2011–17, index numbers (source: iDate 2018)

But how was that possible if demand for telecommunications services was so strong, as we have seen above? The answer is a generalized downturn in prices in Europe. This happened in fixed broadband, where average revenue per user (ARPU) fell by 6% (Fig. 4.16), although growing volumes managed to offset this decline. The mobile sector saw a much stronger decrease in average prices (13%), which volumes did not compensate for, leading to a sharp drop in revenues.

Fig. 4.16 Telecom ARPU in Europe according to the European Telecommunications Network Operators’ Association (ETNO) by service, 2011–17, index numbers (the ETNO perimeter includes the EU 28 plus Albania, FYR Macedonia, Iceland, Norway, Switzerland and Turkey; source: iDate 2018)
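The arithmetic behind this offset argument is straightforward: since revenue is roughly ARPU multiplied by the number of users, a 13% drop in ARPU requires about 15% more users just to keep revenue flat. A minimal sketch, with purely hypothetical volumes:

  # Revenue = ARPU x users: how much volume growth is needed to offset a price decline?
  def revenue(arpu: float, users: int) -> float:
      return arpu * users

  base = revenue(arpu=100.0, users=1_000_000)       # hypothetical starting point
  after_cut = revenue(arpu=87.0, users=1_000_000)   # ARPU down 13%, volumes flat
  breakeven_growth = 1 / 0.87 - 1                   # ~14.9% more users needed
  print(f"Revenue change with flat volumes: {after_cut / base - 1:.1%}")   # -13.0%
  print(f"User growth needed to offset the cut: +{breakeven_growth:.1%}")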

Because of this negative trend, European telecom operators have devoted an increasingly large share of their sales to infrastructure investments compared to their peers, and European incumbents even more so (Table 4.5). In 2018 the capex-to-sales ratio was 14.1% in the USA, while for European telecom incumbents the figure was 17.5% and for European telecom challengers 15.2%.

Table 4.5 Capex to sales ratio for main telecom operator aggregations, percentage

Despite this, in relative terms American operators boosted their investments by 21% from 2010 to 2016, while the figure for their European counterparts was 17% (Fig. 4.17). In absolute terms, that was possible thanks to the more favourable evolution of revenues in the US, which sustained an increase in capex from 51.8 billion € in 2010 to 62.8 billion € in 2016. This figure was almost 33% higher than in the European Union, where the 28 member states (EU 28) stepped up their efforts from 40.5 billion € in 2010 to 47.2 billion €. In terms of spending per capita, this meant that American operators invested 193.9 € of capex per capita, more than twice the 85.0 € in the ETNO perimeter. In the meantime, Japan had just completed its investment cycle, after creating a state-of-the-art infrastructure.

Fig. 4.17 Telecom Tangible Capex (excluding Spectrum), 2011–17, index numbers (source: iDate 2018)
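As a sanity check, the per-capita figures are simply total capex divided by population; the sketch below reproduces the US number (the population value is an approximation introduced here, not a figure from the source).

  # Per-capita capex = total capex / population (population is an approximate assumption).
  us_capex_2016_eur = 62.8e9        # from the text
  us_population_2016 = 324e6        # rough 2016 US population, assumption
  print(f"US capex per capita: {us_capex_2016_eur / us_population_2016:.1f} EUR")  # ~193.8 EUR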

Europe is struggling to find a way to overcome its problem of slow investment, and prospects are not terribly promising. The profitability of European telecom operators has been sliding since 2011 in all the main countries (Fig. 4.18). In fact, profitability is at much lower levels than in the USA. Case in point: Italy’s profitability is just one-third that of the US and falling. Even if the situation is expected to improve in France and Germany, levels unfortunately remain too low to justify and support the new 5G investment cycle in the eyes of telecom companies’ shareholders.

Fig. 4.18 Country ROCE of telecom operators (excluding specialized operators), 2011–19 (source: Patrick et al. 2018)

However, a more detailed analysis of telecom profitability shows wide differences across Europe. The Nordic countries stand out as the most profitable, with a ROCE ranging from 10.4 to 11.9%, well above the sample average of 7.9%. This is because of smaller national size, stable competition, solid profitability and relatively low spectrum costs. Due to lower capex and much lower spectrum spend, profitability in Spain is much higher than in other EU markets, while Italy represents the worst case (Fig. 4.19), with high spectrum costs and intense competition.

Fig. 4.19 Country ROCE of telecom operators (excluding specialized operators): a comparison of the USA vs. selected European countries (source: elaborations on Venkateshwar et al. 2019a)

Estimates by BCG (Bock and Wilms 2016), Accenture (2017) and the European Commission (2016) indicate that the actual pace of investment in Europe will not be sufficient to achieve the Gigabit Society objectives set for the European UnionFootnote 16 by 2025. These objectives are as follows:

  • All schools, transport hubs and main providers of public services as well as digitally intensive enterprises should have access to Internet connections with download/upload speeds of 1 Gigabit of data per second.

  • All European households, rural or urban, should have access to networks offering a download speed of at least 100 Mbps, which can be upgraded to 1 Gigabit.

  • All urban areas as well as major roads and railways should have uninterrupted 5G wireless broadband coverage, starting with fully-fledged commercial service in at least one major city in each EU member state by as early as 2020.

The cost of reaching the EU connectivity objectives is estimated at 500 billion € in investments from 2016 to 2025. These funds would come largely from the private sector, but under current investment trends, there is a 155 billion € investment shortfall, according to European Commission calculations.

Even in a scenario in which the telecom sector continuously inflates the capex-to-revenue ratio for the benefit of investments, this position is difficult to sustain without incremental revenues. Indeed, according to a survey by McKinsey & Company (Grijpink et al. 2019) based on interviews with 46 chief technology officers at large telcos around the globe, while the majority of North American telecom operators (56%) will have large-scale 5G deployments before 2020, no other region is above 40%. What explains this difference? Most operators surveyed (60%) think that the biggest challenge to their 5G strategies is identifying a business case, but this was the answer of 100% of European operators and of only 11% of North American operators (Fig. 4.20). This viewpoint is a sharp departure from the rollouts of earlier mobile generations, such as 2G and 3G, when Europe led the technology’s introduction. It is not a problem of confidence in the technology, which is high, but of uncertainty about whether and how soon 5G can fuel new products and services that customers are willing to pay for.

Fig. 4.20 International telecom operators, share of respondents that chose “Business case” as top challenge for 5G, by national area (source: Grijpink et al. 2019)

There are three other elements that emerge from the research that are equally noteworthy and will have an impact on the future of telecom infrastructures:

  • The uncertain economics of 5G are spurring telcos to consider alternative business models. About 93% of the respondents said they expect network sharing to expand with efforts to bring 5G to areas where it does not make sense to have multiple networks. Moreover, approximately 90% anticipate that third-party neutral hosts will supply part of the network, running it for several operators.

  • While the top reason for investing in 5G is network leadership, which means pure competitive pressure, at least at the outset the majority of telecom operators see enhanced mobile broadband, IoT, fixed wireless access (see Sect. 4.4 for more details) and mission-critical applications as the most prevalent applications for 5G. These are not the revolutionary use cases touted by 5G enthusiasts; nonetheless, in the eyes of telecommunications operators they are the most credible applications.

  • From a global perspective, the survey confirms a new scenario in regional technological leadership. Although North America is still in the lead, Asia is keeping pace and Europe is waiting for a clearer view on use case economics to accelerate.

Is this last point another proof of the beginning of a new and negative phase for European telecommunications? European telecom operators no doubt face pressure from regulatory bodies and competition. At the same time, they are loading their balance sheets to undertake investments while trying to meet shareholders’ expectations of preserving the historical dividend distribution. They are also defending their position from the potential threats of a long-awaited industry consolidation through mergers or acquisitions. As a result, the European Commission is struggling to incentivize the start of a new investment cycle in the telecom industry.

4.4.2 Regulation Issues in Europe

The European Commission is in a difficult position, as its declarations about the Gigabit society and the strategic importance of digital connectivity for European competitiveness are not tightly coupled with results. The European role in the digital arena remains very weak and its technological leadership is losing ground, while prices and competition have favored European citizens, as seen above.

Following the European Commission’s proposal for a new Electronic Communications Code in September 2016, a political agreement was reachedFootnote 17 in June 2018 to update the EU’s telecom regulatory framework (last updated in 2009). The new directive was adopted by the Parliament and then by the Council in November 2018, and member states have until 21 December 2020 to transpose it into national legislation.

The code sets a new regulatory objective of promoting access to, and take-up of, very high capacity connectivity (fixed and mobile) across the European Union. This is in addition to the existing objectives of promoting competition, contributing to the development of the internal market and fostering the interests of EU citizens.

The Commission proposal addresses four existing directives, on the Framework, Access, Authorization and Universal Service. The code would amend these directives and integrate all four into a single new legal text with two major objectives:

  1. Enhance the deployment of 5G networks by ensuring the availability of 5G radio spectrum in the EU by the end of 2020 and by providing operators with predictability for at least 20 years regarding spectrum licensing;

  2. Facilitate the roll-out of new, very high capacity fixed networks by:

    • Making rules for co-investment more predictable and promoting risk sharing in the deployment of very high capacity networks;

    • Promoting sustainable competition for the benefit of consumers, with a regulatory emphasis on the real bottlenecks, such as wiring, ducts and cabling inside buildings;

    • Creating a specific regulatory regime for wholesale-only operators (see Sect. 4.4).

The last point is in part a new proposal that could open sizeable investment spaces to institutional investors in telecommunications, especially if the whole set of guidelines included in the code is matched with the opportunities arising from the evolution of the telecom infrastructure.

4.4.3 The Geopolitical Role of Telecom Investments

The evolution of telecom infrastructure is so critical for OTTs that they are actively involved in trying to influence it. But they are not the only ones.

Telecommunication infrastructure is a general-purpose technology (Bresnahan and Trajtenberg 1995) that, as such, has a big impact on potential productivity gains and economic growth across major economic sectors and on a large scale. Governments are therefore paying more attention to their comparative position in the deployment and adoption of telecommunication infrastructures, while there is growing evidence of the socio-economic impact of this kind of investment on economic growth, local development, the labor market, firm productivity and entrepreneurship (Alizadeh 2017; Edquist et al. 2018; Oughton et al. 2018; Abrardi and Cambini 2019).

This is even more palpable because the most recent developments in manufacturing and IT (Internet-of-Things, artificial intelligence, augmented or virtual reality, blockchain, big data, additive manufacturing, etc.) have ever-increasing telecommunication needs, both fixed and wireless. Further, given the importance of cloud computing, which is “where” most of the most advanced technologies are located, an obsolete telco infrastructure could delay or reduce the impact of these innovations.

To quantify the economic relevance of telecommunications, consider this: reaching the objectives set by the European Commission for the “Gigabit Society by 2025” will trigger investments that could boost European GDP by an additional 910 billion € ($1.023 trillion). In addition, 1.3 million new jobs will be created by 2025, according to European Commission evaluations.Footnote 18

Likewise, mobile technologies make a significant contribution to socioeconomic development around the world. In 2018, these technologies and related services generated $3.9 trillion of economic value (4.6% of GDP) globally. This contribution will reach $4.8 trillion (4.8% of GDP) by 2023 as countries derive ever greater benefit from the improvements in productivity and efficiency brought about by more widespread take-up of mobile services. The global mobile ecosystem generated $1.1 trillion of economic value in 2018 with infrastructure providers accounting for $80 billion (7%). Further ahead, 5G technologies are expected to contribute $2.2 trillion to the global economy over the next 15 years (GSMA 2019a).

Moreover, in every national plan to improve competitiveness in manufacturing (and all the major countries have one), the role of telecommunications is critical. This is true for “Industrie 4.0”, the German national plan launched in 2013 that leverages Cyber-Physical Systems (CPS) to defend the future of Germany in manufacturing. It is also true for “The Next Wave of Manufacturing” in Australia (2013), “Made Different” in Belgium (2013), “Make in India” in India (2014), “Produktion 2030” in Sweden (2014), “Smart Industry” in the Netherlands (2014), “Manufacturing Innovation 3.0” in South Korea (2014), “Industrial Value Chain Initiative” in Japan (2015), “Made-in-China 2025” in China (2015), “Industrial Internet of Things” in Canada (2015), “Industrie du Futur” in France (2015), “Industrial Strategy” in the UK (2015) and “Industria 4.0” in Italy (2016). This is also the reason why China, Korea and Japan had such a strong government push to lead the world in fiber adoption and in 5G plans.

However, this industrial perspective views the telecom infrastructure as a means to improve competitiveness in manufacturing. But there is also an industrial opportunity that sees telecom infrastructure from the opposite standpoint. Leading the adoption of a technology (e.g. 5G or fiber networks) gives a country the opportunity to develop and nurture national champions up to the point where the underlying products reach a level of maturity that makes them competitive for export to other countries.

This was China’s strategy in fiber optics, for example. China had 347 million subscribers to FTTH or FTTB (Fiber-to-the-Building) lines in 2018, compared with 19 million in North America and 26 million in Western Europe (iDate 2018). Forecasts for 2021 set that number at 421 million in China, but only 26 million in North America and 52 million in Western Europe. Consuming 58% of the optical fiber produced in the world, China has successfully become the worldwide leader in passive fiber, the de facto global standard in optical networks.

China is trying to implement the same approach in mobile networks. In 2G, China had no role to speak of; it developed a China-only standard in 3G and had some marginal participation in 4G research. But after LTE (4G), which was the first global telecommunication standard, 5G will be the first truly universal standard, redesigning telecom networks from the ground up. In 5G, China is in first place among patent owners, controlling 31% of the Standard-Essential Patents (SEPs) for 5G networks. The country also leads the world as a contributor to research, with 40% of the total standard proposals submitted (Table 4.6). In addition, the number of Chinese representatives in the International Telecommunication Union (ITU),Footnote 19 technical specification groups (TSG) and sub-groups has increased from 8 out of 57 in 2013 to 10 in 2017 (Lee and Chau 2017, 2019).

Table 4.6 Standard-Essential Patents (SEPs) for 5G and 5G Standard Proposals, owners and contributors by country

Once 5G networks begin to be built and deployed, the control of technical standards will influence which companies will win lucrative equipment contracts. Whoever owns a significant portion of the patents in the underlying technology should be able to be more effective in bidding for network projects. It is a commercial advantage which parlays itself into a security advantage: whoever controls the technology has an intimate knowledge of how it was built and where all the doors and buttons are (Zhong 2018).

Finally, if China ends up dominating 5G networks, the authority will also shift toward China to set standards for future network technologies such as 6G, which is already under development. Furthermore, whoever dictates the standards will dominate future products because early developments will be faster and work better than others. In sum, commercial power almost directly translates into standard-setting power.

Actually, the market leaders in telecom equipment are already Chinese (Huawei and ZTE): in a few years they managed to surpass Nokia Networks and Ericsson, and they are contending for some of Cisco’s niches too. Since 2012, when a US congressional report revealed that the Chinese government could potentially use Huawei’s equipment to spy on Americans, telecom security has been a top concern in the United States. Both ZTE and Huawei have been effectively blocked from major US telecom networks due to fears that their gear could be used for espionage. In addition, US authorities have pulled the companies’ smartphones from US military bases and stopped all sales by ZTE and Huawei to the government.

In August 2018, Australia excluded Chinese telecommunications equipment manufacturers from the country’s 5G rollout over fears of possible cyber espionage. The decision was based on the belief that 5G networks will be more vulnerable to security breaches because they will be less centralized than current networks, with more sensitive network activity occurring in a multitude of locations closer to users (Strumpf and Cherney 2018). Japan, the UK, Germany and Italy have also started studying the prospect of a similar ban, with restrictions on Huawei and ZTE ahead of the rollout of their 5G networks. It is impossible to predict the outcome of this battle over the control of technology, but it is already clear that it has changed forever the perception of the consequences of technology choices.

Clearly, the geopolitical impact of telecommunications investments will be stronger in the future than in the past, and investments in mobile and fixed networks are most likely destined to follow the same path. Moreover, with a few exceptions, almost all the big telecom incumbents are controlled by national governments with heightened sensitivity to competitiveness issues linked to technology and cyber-safety. This will make government interventions in the technological infrastructures of their countries more and more likely, even in Europe.

Therefore, in the future there will be huge investments in fiber networks and 5G, pushed by strong user demand. To face this investment cycle, telecom companies must commit a huge amount of resources, but they also need new skills and a fresher approach. Considering the negative trend in Europe in terms of profitability and revenues, we can anticipate a probable outcome: soon there will be very interesting opportunities in Europe to invest in telecommunications infrastructure that were unimaginable in the past. But most likely that will not be good news for telecom operators.

4.5 Emerging Investment Opportunities in the Telecom Industry

As explained in the previous sections, for different reasons, telecom operators face an extraordinary number of critical challenges, and will continue to do so. These challenges, listed below, often call for decisions that cannot be postponed, and almost always require new investments despite growing uncertainty about returns.

  • Mobile networks’ transition to 5G in a scenario of uncertainty as to the sustainability of the business cases;

  • Mounting competitive pressure from the OTTs in different arenas, increasingly targeting some cornerstones of the telecom business;

  • Improving fixed/mobile network quality to guarantee lower latency and more reliable connections, with or without edge computing;

  • Creating a business case for edge networking, or leaving the floor to players like the OTTs, which could further weaken telecoms’ traditional business and harm the future profitability of 5G networks;

  • The decommissioning of a large number of central offices and the transformation of the remaining ones into data centers;

  • Peak-time traffic increasing faster than average traffic, adding to the need for greater backhauling capacity due to rebound effects from faster connections on mobile and fixed networks;

  • Geopolitical issues delaying every answer to telecom infrastructure challenges and potentially making it more critical and expensive;

  • Additional investments in submarine transcontinental cables if telecom operators want to compete with the OTTs on network performance.

Traditionally, telecom operators have always been very jealous of their business, especially incumbents. They take great pride in controlling their network and every aspect of its operation. But things are changing. The telecom industry ranks second among the industries most reliant on outsourcing: 72% of its executives currently outsource or offshore services. Moreover, in 46% of these companies, demand for outsourced technology is boosted by an in-house lack of talent (Nash 2017). Furthermore, as we have seen, there is growing pressure on the telecom industry over financial results. Therefore, telcos may be less effective in defending their business from outside investments, or may be tempted by opportunities to contain their capital commitments.

In this scenario, especially in Europe, where decreasing revenues and thinner margins are coupled with a tighter procompetitive regulation, there could be a proliferation of new opportunities for investing in the telecommunication industry. These range from fixed to mobile networks, but the 5G transformation could create even more lucrative opportunities across the two networks.

4.5.1 Emerging Infrastructure Investments in Fixed Networks and from Network Evolution

In fixed networks, given the existing configuration, there could be four major cases of separable infrastructures, giving rise to different models (Fig. 4.21):

1. Vertically Integrated

  • In a vertically integrated infrastructure, the separable infrastructures may include access to ducts and poles, and sometimes other structural passive elements of the network; this is the case in Japan.

  • The separation of ducts and poles is a complex operation with a high execution risk, because it is difficult to manage contractually and even in day-by-day operations.

  • With this model, incumbents try to exert tight control over the value chain and to improve their cash flow profile.

  • Duplication of vertical infrastructures creates a high barrier for new entrants which, in turn, after the initial investment, works as a barrier against other potential entrants.

  • This model tends to be very closed to external investments unless forced open by the national regulator.

2. Passive sharing

  • As with Openreach in the UK, this model is easier to realize and can capture a large part of the revenue potential.

  • The infrastructure owner lacks direct control over the revenue stream and marketing to the end user, but this model can ensure stable cash flows.

  • An effective and credible regulator is needed.

  • Interesting investment opportunities can open up if vertical service providers are able to differentiate their services.

3. Active sharing

  • This model, widespread in Asia and in India, creates large infrastructure providers with stable cash flows.

  • It creates additional margins for modest incremental investment, giving an incentive for continuous updates.

  • It must be technically credible yet flexible.

  • With an effective and credible regulator, this is the model that best fits the technological evolution taking place.

  • Small retail service providers may struggle if there are no commercial and operational standards for wholesale.

4. Full separation

  • This model, realized in the Netherlands, is the most difficult to implement.

  • It creates additional margins for modest incremental investment for the infrastructure owner and the network operator.

  • It must be technically credible yet flexible.

  • This model can catalyze many resources, especially from local entities, but needs an effective and credible regulator.

  • Theoretically, this is the perfect pro-competitive model, but it is difficult to manage in practice.

  • Small retail service providers may struggle if there are no commercial and operational standards for wholesale.

Fig. 4.21 Separable infrastructures in the traditional telecom infrastructure (source: adapted from Alcatel-Lucent, FTTH Council)

Since the vertically integrated model is the natural monopolistic starting point for fixed networks in developed countries, most of the evolutions towards other models are driven by the need to facilitate investment in new technologies. Broadband, ADSL but mostly FTTx,Footnote 20 is the real trigger that could open up new space for investments in telecom networks. But, due to the delay accumulated in fiber deployment, mainly in rural areas and in Europe, there could be even more investment opportunities.

Therefore, political pressure for investments in fiber will likely intensify. The primary reason for this is that in a 5G future, fiber densification is a mandatory requirement to ensure backhauling connections to the thousands of new micro cells, creating the service umbrella for this extremely promising evolution. But the changing structure of the network (whether or not it supports 5G communications) is in itself a source of new kinds of investment opportunities in telecom infrastructures, as illustrated in Fig. 4.22.

Fig. 4.22 Separable infrastructures in the telecom infrastructure formed by ongoing and future network evolutions

Starting from the physical infrastructure closest to the final user, there is the pure (1) fiber wholesaler, which provides the fiber lines (the grey lines in the figure), with or without the FTTB or FTTH connections (the green dots in the figure). This model was codified for the first time by the European Commission in the new regulatory framework. The first real example of this new business model is the Italian Open Fiber, financed by private and public money. The aim here is to realize a fiber network in areas that are under-served or far from the coverage plans of the main operators, but also in areas already served by other fiber providers without an FTTH or FTTB infrastructure. It is too early to judge the sustainability of this business model, but it looks promising. Its weakest points, being an almost greenfield operation, are the timing of coverage, which requires effective and timely execution, and the ability to transform that coverage into subscribers at a fast pace by leveraging the appropriate marketing approach.

Vouchers and Incentives for FTTH/FTTB Take-Up

Governments in Europe have just started to give financial incentives, funded by the EU, especially to families, to increase the user base of fiber networks. In the new European communications code, promoting access to, and take-up of, very high capacity connectivity is a new regulatory objective. The incentives are not directly linked to the market structure, but when there is a pure fiber wholesaler it is much easier to satisfy the regulatory requirements. The first proposal came in Italy in 2014; it has since been approved but never launched. By contrast, since 2016 Denmark has been fully operational with a tax break of up to 1600 € per family. Since 2018, the United Kingdom and Greece have been in a pilot phase, the UK with a voucher of up to £3000 for SMEs and up to £500 for individuals, Greece with a 48 € discount on installation plus a discount of 13 € per month for 2 years on the subscription cost for an FTTH connection in selected areas. Germany is moving in the same direction: in 2018 some telecom associations proposed that the government adopt a voucher program to incentivize FTTH or FTTB connections, offering up to 1500 € per installation.

In the present phase of radical transformation of telecommunication networks, the pure fiber wholesaler could have an advantage in not remaining a pure passive provider of infrastructures. For example, an opportunity in 5G networks is the shift of radio coverage from macro cells (a few very powerful cells, covering a very large area) to small cells (many more cells, about 6–10 times more, much smaller than 4G but able to ensure a very high throughput). For small cells, the business of traditional “tower companies” could be replicated through infrastructures with a smaller scale but a vast coverage, like that of a fiber wholesaler network. Every point along the fiber network with a minimum of space, having a fiber connection and easy access to a power supply, could readily be used as a base for mobile radio stations. This is the business of “enercom”: wherever energy plus communication is available, there is value, and this value will grow.

The Emerging Enercom Infrastructures

The evolution of energy and telecommunication infrastructure, both in a phase of turbulent change (see Di Castelnuovo and Biancardi 2020), is partially overlapping. Wherever there are electrical infrastructures, the presence of a form of communication adds value, enabling new business models or different kinds of services. To mention just a few: smart grids, demand-response systems, V2G (Vehicle-to-Grid), EV recharging points, smart street lights, and smart lighting. Wherever there is a source of communication or a communication device, there are electrical needs to be met in different ways and forms. Some examples are: low or high voltage power, batteries, battery back-up, solar panels and batteries for power autonomy, redundant supply of energy, and surge suppression systems. Therefore, every public site equipped with both energy and communication (hence the term “enercom site”) will have a different value in the future from a strategic perspective. In fact, each enercom site is a potential piece of a larger infrastructural telecom network (for example, a small cell for 5G, a point for FWA distribution, or part of a network using unlicensed spectrum).

Moreover, the telecommunication industry has a problem with energy cost and supply, because the proliferation in communication traffic analyzed in Sect. 4.1.3 will also lead to a substantial hike in energy consumption. Over the next few years, in fact, the global energy bill of telecommunication networks will surge from $40 billion in 2011 to $343 billion in 2025, with wireless networks accounting for over 70% of the total (Weldon 2016). Telecommunications represent about 2% of worldwide electricity consumption, whereas the entire ICT sector (including data centers, devices, computers and peripherals) accounts for about 6%. The network energy bill typically runs between 7 and 15% of the operational expenses of telecommunication service providers in developed countries and up to 40–50% in some developing countries (Intelligent Energy 2012; GSMA 2014; Kim 2017). A major European network operator stated that its energy bill would hit the $1 billion mark by 2020 (Le Maistre 2014), whereas those of some of the large operators in the USA had already topped this figure in 2012. In the UK and Italy, telecommunications operators are the largest consumers of electricity, utilizing about 1% of the total electricity generation of their countries. For these reasons, the energy problem has become critical in telecommunications, as has any form of energy saving or any potential use of renewable energy sources. This leads to the opportunity to develop an “enercom business” that manages and optimizes all the energy needs of telecom infrastructure.

In any case, there is a third possible business for enercom infrastructures. IoT devices and sensors, mostly equipped with batteries, individually tend to consume relatively small amounts of energy in absolute terms, but as we have seen earlier, an enormous number of such devices will be deployed. Therefore, all the activities related to enercom management are key to monitoring and managing such networked infrastructures, replacing and recycling batteries and maintaining devices.

Another option for enriching the business of a fiber wholesaler is the Open Service Exchange Operator (OSEO). To a dark fiber infrastructure, the OSEO adds a technical layer that simplifies the day-by-day operations of monitoring fiber lines. This operator also provides a business support system for selling, delivering, invoicing, administering and managing the final users of fixed fiber operators (see Sect. 4.2.2). The OSEO can increase the revenues of the wholesaler and enlarge its market by making it possible to change the billing operator without any physical intervention, greatly improving opex. The OSEO also makes it possible to offer innovative, customized subscription plans with time-based service, for example for vacation homes: such a plan might work at full bandwidth over the weekend, and at a reduced speed during the week, solely for security and monitoring purposes.

The pure fiber wholesaler, which sells to the (2) fixed telco operator (FTO), shares a large part of its destiny with it. The FTO is a relatively new business model. The key to its success appears to be its ability to execute and to differentiate its offer with a convincing service proposition. Its business could be relatively poor or rich in terms of infrastructure, depending on whether or not the fiber wholesaler manages the fiber connection from the basement of a building to the home (the green dot in the figure) in an FTTH scheme, or to the building for FTTB. The natural evolution of this model is to enrich the fiber connection with a “triple play” (telephone, Internet connection and media services), but other services such as Internet security can complement the offering too.

Just a little further away from the final user, we find the (3) specialized business telco operator, which works mainly with business customers and on their premises (the red dots in the figure). These customers are served with fiber connections used to distribute other telecom or IT services, which might include network management, security, wifi or more sophisticated forms of wireless connections such as Multefire, Sigfox, Lora, CBRS or other services in the IoT market that work on the unlicensed spectrum. Some of the business models enabled by the OTTs (and described in Sect. 4.3) could belong to this category even if they offer their final users a fixed wireless access service or a form of mobile connection. The specialized business telco operator could be a small-scale enterprise or part of a larger network with a sizeable infrastructure.

The (4) Fixed Wireless Access (FWA) operator (the yellow dots in the figure) is an emerging business model based on a proprietary infrastructure that connects a point-of-presence (POP), reached via leased fibers or lines, to users in fixed positions through a wireless link. In some cases, this is the solution to coverage problems in rural areas, but sometimes it is also a cheaper alternative in densely populated areas. AT&T and Verizon are using the FWA model in urbanized areas where a low population density does not justify further investment to bring in other forms of high-speed connection. Google Fiber, instead, after the acquisition of WebPass, a specialized FWA operator, is using it in some very dense urban areas, such as San Francisco. FWA can be delivered in many ways. Usually, the fixed wireless broadcasting equipment is installed on the roofs of buildings, on balconies or outside a window to ensure an obstruction-free connection, since most FWA receivers are conceived to be connected in line of sight for better signal reception. FWA can also be implemented as a point-to-multipoint or multipoint-to-multipoint infrastructure, as with 5G.

Fixed Wireless Access still has a business model in evolution, without a dominant technical solution. The most promising one appears to be 5G, which is so flexible that it can also serve fixed installations with special equipment, ensuring a connection quality similar to that of a fixed line but at a much lower cost. Exploiting beam-forming and millimetre wave spectrum (which are part of the 5G technology) provides a considerable performance boost to wireless broadband services. As of 2018, Verizon already had a 5G FWA program up and running on a small scale in Sacramento and Cincinnati, and forecasts are that by 2024 more than 12 million households in the US will receive home Internet service via 5G through FWA points (Newman 2019b).

In this scenario, traditional (5) mobile telecom operators (MTO) can choose to take advantage of fiber densification and improve their network density (the dark blue dots in the figure). Since expectations for 5G mainly center on performance, the most critical requirement to deliver on this promise is cell densification, which means having many more cells, each covering a smaller area, and bringing fiber to every micro cell. From this perspective, as anticipated above in this section, there could be space for a new kind of tower company. In fact, fiber wholesalers or specialized business telco operators can form a new kind of infrastructure, without owning big towers but having access, control or simply installation rights on enercom points like public lampposts, electric or telephone poles, electric substations or telecom secondary stations. The development of this kind of infrastructure is only beginning, but with ongoing progress in 5G deployment, many owners of small urban infrastructures may realize they are sitting on an asset that is truly valuable, both for themselves and for MTOs.

At a similar stage is the business model of the (6) edge cloud operator (the small light blue cloud in the figure). Also known as fog computing, it is partially linked to 5G networks and still under development. Edge computing is a data processing model in which sensors and connected devices transmit data to a nearby computing device for processing, instead of sending it back to the cloud or a remote data center. Edge computing solutions are located close to where applications or data are utilized, so users do not have to wait for communication to travel back and forth to the cloud or a remote server, and delays due to latency are minimized. This allows edge computing users to make real-time decisions and to automate processes, since it takes almost no time to create and analyze data and then act on it. There is a second reason that makes edge computing so important: by using an edge computing solution, companies process their data locally, meaning they can extract what is useful out of raw data and store only the insights in the cloud. This cuts down on the volume of data they need to send to the cloud (reducing networking needs) as well as the amount of data kept in cloud storage.
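One way to see why proximity matters is a simple latency budget: even ignoring processing and queuing, light in optical fiber propagates at roughly 200 km per millisecond, so a 1 ms round-trip target caps the distance to the computing resource at around 100 km. The short sketch below illustrates this back-of-the-envelope reasoning; the distances are illustrative, not taken from the source.

  # Back-of-the-envelope propagation delay: why edge computing sits close to the user.
  SPEED_IN_FIBER_KM_PER_MS = 200.0   # roughly two-thirds of the speed of light in vacuum

  def round_trip_ms(distance_km: float) -> float:
      """Round-trip propagation delay over fiber, ignoring processing and queuing."""
      return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

  for distance in (10, 100, 1000, 5000):   # edge node vs. regional vs. remote data center
      print(f"{distance:>5} km -> {round_trip_ms(distance):6.2f} ms round trip")
  # A data center 5000 km away costs ~50 ms in propagation alone, far above a 1 ms target.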

Edge computing has many use cases. It could greatly improve efficiency when processing the growing volumes of data-rich video from security cameras and other camera-based monitoring solutions, for example. Business Insider Intelligence forecasts that smart city systems, which include connected cameras, will generate nearly 180 billion terabytes of data a year by 2023 (Newman 2019a). Gartner research (van der Meulen 2018) reports that around 10% of enterprise-generated data is created and processed outside a traditional centralized data center or cloud, but by 2025 this figure is predicted to reach 75%. Moreover, the growing complexity of vehicles and the amount of data they record pose a problem for automakers and operators looking to process that data. A connected car generates thousands of GB of data every day, without taking into account additional autonomous features. In fact, an autonomous vehicle could churn out 4000 GB of data every day, according to Intel’s estimates, and total data exchanged between vehicles and the cloud could reach 10 billion GB per month, based on Toyota’s forecasts. This raw data streaming to the cloud can be critical for improving autonomous driving capabilities, but the volume is staggering and could overwhelm both cloud systems and cellular networks. By 2023, vehicles in the US will generate 8 ZB annually, up from 0.72 ZB in 2018 (Business Insider Intelligence, The EDGE Computing Report 2018), creating an opportunity for edge computing.
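
To put these estimates on a common scale, the back-of-envelope conversion below restates them in comparable units (decimal units assumed, 1 ZB = 10^12 GB). It is only a consistency sketch of the figures quoted above, not an independent estimate.

```python
# Back-of-envelope unit conversions for the vehicle-data figures quoted above
# (decimal units assumed: 1 PB = 1e6 GB, 1 ZB = 1e12 GB).
GB_PER_ZB = 1e12
GB_PER_PB = 1e6

per_vehicle_gb_year = 4_000 * 365        # Intel: 4000 GB/day per autonomous vehicle
toyota_zb_year = 10e9 * 12 / GB_PER_ZB   # Toyota: 10 billion GB/month, as ZB per year
us_growth_factor = 8 / 0.72              # US vehicle data, 2023 vs 2018 (ZB per year)

print(f"one autonomous vehicle: ~{per_vehicle_gb_year / GB_PER_PB:.1f} PB per year")
print(f"vehicle-to-cloud traffic (Toyota forecast): ~{toyota_zb_year:.2f} ZB per year")
print(f"growth in US vehicle data, 2018 to 2023: ~{us_growth_factor:.0f}x")
```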

Edge servers can form clusters or micro data centers, providing processing power or data storage where more computing capacity is needed locally. With local processing, telcos could reduce data loads on their networks and generate additional revenues, while companies can choose to send only meaningful insights to the cloud. With edge computing, the more technical structure of 5G networks moves away from the core of the network, but not necessarily into the hands of telecom companies. Edge computing acts as if it were part of cloud computing, only closer to the final user. But the OTTs are much better at managing and operating the cloud, which they created, and they are also better equipped to profit from it than telecom operators. For example, since 2017 Amazon AWS has been selling edge computing solutions connected to its cloud computing infrastructure, and since 2019 all the services on its Elastic Compute Cloud have been available at the edge of the network through a relatively small but powerful device. This development has two infrastructural implications. First, unless OTTs accept the role of pure technology providers in edge computing, it will be very difficult for edge computing to become a separate infrastructure to be developed, an outcome totally unanticipated at the inception of 5G. Second, small enercom points could instead serve to support edge computing, especially if they can form a capillary infrastructure.

Finally, there is the (7) cloud computing level (the small grey clouds in the figure), which is becoming a different and more effective computing paradigm for all the players that intend to leverage IT: telecom and IT companies and the clients of both. At the moment, from a business point of view, there is no question that OTTs have been more successful in the cloud business than telecom operators. The latter have tried to compete for cloud services, but with poor results in terms of market share (which is still negligible), because telcos struggle to keep pace with OTTs in competitiveness and innovation. Global leader Amazon AWS, for example, has cut its prices 65 times since its launch in 2006, roughly once every 2 months and 6 days. This translates into an average price reduction of around 14% per year over the period from 2008 to 2018, which means that in 10 years prices have dropped to less than a quarter of the original starting level. What is more, in 2018 alone Amazon AWS launched 1985 new functionalities on its cloud platform, an average of more than five new functions every day.
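
A quick arithmetic check, sketched below, shows that these figures are mutually consistent; the inputs are simply the numbers quoted above.

```python
# Arithmetic check of the AWS figures quoted above.
months_since_launch = (2018 - 2006) * 12          # ~144 months from 2006 to 2018
print(f"65 price cuts -> roughly one every {months_since_launch / 65:.1f} months")

residual_price = (1 - 0.14) ** 10                 # 14% average yearly cut over 2008-2018
print(f"after 10 years, prices fall to ~{residual_price:.0%} of the starting level")

print(f"1985 new features in 2018 -> ~{1985 / 365:.1f} per day")
```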

However, from the infrastructural point of view, the transformation of their central offices into data centers, along with fiber densification, could be a cost-saving opportunity for telecom companies. The number of offices will drop; the capex of every central office will be much lower, using the same commoditized hardware as cloud computing data centers; even the opex will be lower in a re-engineered architecture. Therefore, the old central offices could be sold or repurposed as a different infrastructure.

4.5.2 Emerging Infrastructure Investments in Wireless Networks

The huge investments needed to deploy 5G networks, and the opportunities these networks open, will transform the whole mobile industry landscape.Footnote 21 Already now, they create incentives to find alternative solutions to the traditional proprietary model of mobile operators (see Sect. 4.4.1). This opens a large window of opportunity for infrastructure investors willing to contribute to financing the 5G infrastructure, in whole or in part. As specified in the box below, infrastructure sharing is already being put into practice, although for the moment it is limited to agreements between peers.

A Common Infrastructure for 5G Networks

Some countries are already exploring a single infrastructure across different operators for 5G networks:

  • In South Korea, wireless carriers and Internet Service Providers (ISPs), with a combined annual capex of $6 billion, are pooling resources to build out 5G, with expected capex savings of around $1 billion over 10 years.

  • In China, the largest enercom agreement in the world is under way. China Mobile, China Telecom and China Unicom jointly own China Tower, controlling about 2.5 million towers. This partnership signed a deal with the State Grid Corporation of China (SGCC, the country’s largest state-run electric utility company) to share resources in telecommunications infrastructure and electricity. By sharing telecommunications towers, the deployment of 5G and smart grids is accelerating, lowering the installation and operating costs of Chinese mobile infrastructure. A similar agreement has already been negotiated with China Southern Power Grid, another state-owned electric utility, to share resources and establish regular cooperation. According to the Chinese press, feasibility studies and applications on power and communication infrastructure resource sharing have already been carried out by the companies in Fujian, Yunnan, Hainan and Hubei.

  • In the USA, the National Security Council (NSC) proposed a state-owned 5G infrastructure involving AT&T, Verizon, T-Mobile and Sprint, with a combined annual capex of $30 billion.

  • In Italy, Telecom Italia (TIM) and Vodafone Italy have agreed to an active 5G network sharing project and are examining a move to share 4G infrastructure as well. The two companies would combine their respective mobile tower networks, which together cover some 22,000 sites, to support faster deployment of 5G over a wider geographic area, at a lower cost.

In general, since 2012 there has been growing interest in negotiating agreements involving fixed and wireless infrastructures. This trend could result in synergies and savings on mobile networks, improving the offering profile. In total, 26 partnerships have been negotiated, of which 20 (77%) are in Europe, followed by 2 (8%) in Asia, involving 29 different countries. Four proposed partnerships have been abandoned, 2 are pending and 20 have been signed, for a total declared value of 180 billion €, on average 7 billion € per deal (Venkateshwar et al. 2019a).
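
As a quick consistency check on these figures (the inputs are simply those reported above), the shares and the average deal size follow directly:

```python
# Consistency check of the partnership figures quoted above.
total_deals = 26
europe_deals, asia_deals = 20, 2
total_value_bn_eur = 180

print(f"Europe: {europe_deals / total_deals:.0%}, Asia: {asia_deals / total_deals:.0%}")
print(f"average declared value per deal: ~{total_value_bn_eur / total_deals:.0f} billion EUR")
```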

The antenna site is the easiest component of a mobile network to share. A typical as-is model, illustrated in Fig. 4.23, has an antenna positioned on an authorized site. The site may be exclusively available to a mobile network operator (MNO) or shared with another MNO. In this way, the second MNO can reduce costs by giving up an alternative site and placing its antenna, connected to its own network, on the same site. A mobile virtual network operator (MVNO), hosted on the network of the second MNO, does not need another antenna on the same site but, being only a virtual operator, leverages the existing equipment.

Fig. 4.23 The as-is model of a mobile network

To enhance performance, 5G networks are denser in populated places because of the greater use of small cells, which cover a smaller area compared to the typical macro cells of 3G/4G. Thus, 5G networks require a great number of small sites within urban areas, either outdoor or indoor, in shopping centers or stadiums, for example. These sites can be shared in basically two ways. First, as in Fig. 4.24, Model 1, an MNO physically controls a privately owned site that can be shared with another MNO; since 5G is more flexible in terms of configurations, another antenna is not needed, almost as would be the case with an MVNO. Second, as in Model 2, the site can be controlled by an intermediary, which rents the site out as a neutral host. The site itself can either already be equipped as an enercom point, or be equipped by the MNOs, which can share the site and the antenna.

Fig. 4.24 Two models of site sharing in 5G

Especially in rural areas, a backhaul is needed for 5G cell sites. This can potentially represent another service sold to the MNOs by the site owner, increasing its revenues. Edge computing or energy back-ups could be other potential services.

A different approach to sharing telecommunications infrastructures is based on services. Vertical markets with specialized requirements could be an opportunity for intermediaries who are familiar with the relevant requirements and industries (DotEcon and Axon Partners 2018). These intermediaries could be in an optimal position to assemble connectivity services targeting industry needs, bundling them with other specialized services to differentiate their offering from the non-specialized communication services of a typical MNO (Fig. 4.25). For example, intermediaries could serve hospitals with low-latency services for remote surgical interventions. Being able to identify and address such specialized needs, these companies can develop an infrastructure able to complement medical equipment used in emergencies, bundling 5G connectivity with edge computing and other supporting hardware to offer a service capable of operating a portable ultrasound system or an electrocardiograph. Differentiation by price may allow niche services to develop and be paid for by users with specific needs, whilst avoiding price increases for users who do not require these additional functionalities.

Fig. 4.25 Sharing bundled services through intermediaries in vertical industries

Therefore, in specialized vertical markets there could be other business models for 5G deployment. In a typical as-is model, MNOs use their spectrum to provide connectivity and negotiate with a vertical and/or an original equipment manufacturer (OEM) customer to provide bespoke connectivity (Fig. 4.26, As Is Model).

Fig. 4.26 Different approaches in vertical industries or for specialized OEM

A variation of this paradigm is represented by Model 1 in Fig. 4.26. A vertical industry and/or a specialized OEM customer uses a self-supplied private 5G network solution because of concerns regarding public network security, quality or cost. A private infrastructure can be developed that leverages 5G standards but uses unlicensed spectrum, thus bypassing traditional MNOs. This approach could be successful in highly specialized industries such as oil, for example, and may be a likely scenario especially in the case of slow deployment of 5G networks. But, once developed, these private wireless networks will remain, reducing the potential 5G market for traditional operators.

A further variation of this model (Model 2) is a joint venture within a vertical industry (or part of it), possibly between some OEM customers and network operators, to share the cost of 5G network deployment. In this way the infrastructure would be deployed more quickly, but it would remain under the control of an MNO that could still sell other services.

This approach, on a larger scale, could work as in Model 3. Here, new intermediaries could enter the market, negotiate deals with a large number of mobile operators and then market a single “connectivity solution” to the vertical and/or OEM customers.

The distinctive technical characteristics of 5G networks result in the ability to manage a large number of devices simultaneously, with low latency or particularly high data transmission rates. There are many areas of application for 5G, from healthcare to the automotive sector to logistics. Mobile operators will be able to configure networks in different ways to offer tailored solutions. But telecommunication services will simply be an ingredient, and sometimes a small ingredient, of the recipe that wins over the market.

The 5G era opens the prospect for telecommunication operators of providing differentiated services for a number of different verticals simultaneously. Furthermore, service innovation should become faster and more effective. Thus, the emergence of 5G could lead to significant changes within the value chain for mobile data connectivity, both by modifying the traditional business models of telecom operators and by providing new opportunities for intermediaries of various types. It may even be possible to create new “merchant markets” where various connectivity services are exchanged at the wholesale level between operators and orchestrated across physical networks to create a certified communication service for customers.

There are several changes to the current telecom business that may emerge with extensive 5G deployment. Each change can potentially create great risks for telcos and service providers but, taken together, with staff reskilling and a suitable supporting infrastructure, those risks can also be transformed into valuable opportunities.

4.6 Conclusion

Telecommunication companies are in the midst of many overlapping transformations. Telecommunication operators, traditionally rich and enjoying solid financial resources, have never left large investment spaces open within their sector. Thanks to the evolution of regulation, competition and technology, this will no longer be true.

Soon, there will be great room for investments, which may differ in size and quality, but this space will emerge at the intersection of:

  • New definitions of “investable assets”, identified by technology;

  • New business models, designed by the competition; and

  • New roles defined by both technology and competition.

From the point of view of a potential investor in telecommunication infrastructures, the rule for distinguishing between mature, promising and risky opportunities is not easy to identify. But since technology is the source of this reshaping of the traditionally slow-moving world of telecommunications, technology itself could be the key.

Compared to the past, telecom infrastructure presents two distinctive evolutions:

  1. There is a sort of Cambrian explosion in the number and variety of assets that can form an asset base or be part of a larger definition of an asset base (e.g. pure passive fiber to be used by a pure fiber wholesaler, a specialized business telecom operator or an edge cloud operator; enercom points that could be an asset base to rent to service providers, MNOs or specialized business telecom operators).

  2. There is a clear trend toward transforming basic telecommunication assets (e.g. unlicensed radio spectrum, satellite communications, edge computing sites, enercom sites) using software to create different business models and potential disruptions.

Despite the many discussions on 5G, the evolution of telecommunication networks and their seductive promises of extraordinary performance, the business case for 5G is still vague for telecom operators, while the technology is already popular among their potential customers. And this is a significant potential risk: the monetization of innovation is always risky. As for the growing demand for communication services, 5G, fiber and satellite will provide enormous technical improvements, creating great opportunities. But no one will have any guarantee regarding economic returns. The forces surrounding these improvements will decide for everybody. On one side there are the OTTs, which are intensely interested and investing in telecommunication technologies; they dominate the software component along the entire value chain of telecommunications and are in the best position to judge and influence its evolution. On the opposite side there are traditional telecom operators, struggling to keep pace with the innovation imposed by the OTTs but without the right set of skills to shape the fight for dominance in the future of telecommunications. In the middle there are the current and future customers of this evolution, both companies and individuals.

Assets that have a software component (e.g. FWA, low-power wide-area devices, solutions in the unlicensed spectrum) look riskier, because these assets could be disrupted by new business models, new evolutions and new combinations of assets. More traditional and essential assets, like naked fiber or small urban locations equipped with power or poles, appear to be components of a structure that will be complex and always evolving, but that is starting to have some solid, even if minimal, cornerstones. New assets, like enercom infrastructures, based on the recombination of more traditional assets but answering a widespread need in the industry, will instead have a bright future.

The ancient alchemists believed that emptiness does not exist in nature. Maybe that is also true in the highly competitive market of telecom services, because every market space left unfilled will be served in some other way by someone else. From this perspective, the telecom industry could benefit from the contribution of other industries in keeping pace with the market, following its evolutions and transformations, and doing its best to create the best of possible futures.