1 Ubiquitous Computing

In the beginning was the word, and the word was about the vision of a new economy:

It begins and ends with people. Our people are the architects of the new Internet economy. Our clients grasp the magnitude of the opportunity. Together, we’re changing everything. Anything less would simply be another job. (Proxicom 1997)

The quote above is from Proxicom’s Little Red Book, a brochure for new hires laying out the values and the vision of the company. Proxicom was a full-service e-business professional services provider, one of the so-called “Little Five” (alongside Razorfish, Sapient, Scient, and Viant) that were challenging the consulting arms of the “Big Five” accounting firms (Sadler 2001). Everything, everywhere, on every device was the motto, based on the vision of “ubiquitous computing” (Weiser 1991). And nothing less did these pioneers try to achieve by breaking the rules and by rebuilding processes, entire industries, and markets.

Ubiquitous computing eventually started to become a reality by the end of the 1990s, when the Internet was exploding. New technologies made it possible to scale and implement ever more sophisticated websites and web-based solutions. The emerging mobile Internet was an early glimpse into a wirelessly interconnected world. Looking back, it turns out that the ideas behind ubiquitous computing were spot on. Many, if not most, of the related ideas would come true. Back in 1997, we simply did not expect that implementing all of this would take us so long.

2 The Journey: Or How We Got Here

Sometimes when you deal with a complex situation, it is helpful to step back and look at what happened in the past to better understand where you are standing today. What we call the Internet of Everything, which encompasses technologies like 5G and architecture concepts like Edge Computing, did not drop out of a box; it was a long, iterative development process that took several steps and hurdles over the years.

The three main components building today’s communications ecosystem are:

  1. The (inter-networking) network

  2. Hard- and software

  3. Mobile communications

2.1 The Becoming of the Inter-Networking Network

Let us celebrate a 50th birthday: It was on October 29, 1969, that the first technologies of what would one day become the Internet were put into action, forming the basic elements of the “ARPANET” (Advanced Research Projects Agency Network). This computer communication network was initially established between universities and research institutes and eventually expanded across the entire United States. With the possibility to connect computers across the country, another challenge needed to be solved: the question of how to address other users in the network. In 1971, a computer scientist named Ray Tomlinson had the idea to use the @ symbol for this purpose (Allman 2012). Also in 1971, the University of Hawaii developed a wireless network, the so-called AlohaNet (McClelland 2017), which connected computer systems via radio. With the invention of TCP/IP (Cerf and Kahn 1974), a new era was in the making. This technology was based on the idea of packet switching, allowing several computers to share a single network without the data packets interfering with each other. Together with the Ethernet standard, TCP/IP was to become the norm for interconnecting computer systems. Finally, on January 1, 1983, all 400 hosts of what had been the ARPANET until then were migrated to the new TCP/IP protocol. From an inter-networking perspective, this was one of the founding pillars of today’s Internet.

With computers getting connected all over the world, efficient processing and structuring of information became more and more important. Since the switchover to TCP/IP, it took another 6 years for the most essential building elements of the Internet to be defined:

  • HTML, the Hypertext Markup Language. It was Sir Tim Berners-Lee who wrote a memo to his manager at CERN, a European research facility based in Geneva, suggesting the introduction of a general information system (Berners-Lee 1989). Eventually his concept was agreed upon, and Berners-Lee wrote browser software to access his HTML data, which he called:

  • “World Wide Web.” What was initially only the name of a piece of software would become synonymous with what is globally available by now: the World Wide Web as a technology platform of interconnected content created what we call “the Internet” today.

But to create a ubiquitous computing world, it took more than a standard for inter-networking, content access, and distribution. Long before software was eating hardware for breakfast, it required loads of hardware available to many, many people. These nerds, programmers, software engineers, and coders were the ones who developed the ecosystem of programming languages, resources, and tools we have available today.

2.2 Hard- and Software Evolution

Machine-based computing was pioneered by Konrad Zuse, a German engineer, prior to the Second World War. Eventually he created the Z3, the first binary 22-bit floating point calculator the world had ever seen. The company Konrad Zuse founded, the Zuse KG, went on to sell a total of 251 computers before it was eventually acquired by Siemens in 1967. This shows how exponential curves start: with small numbers. We know the rest of the story: with the invention of semiconductor-based transistors, vacuum tubes were replaced and things started rolling and scaling.

Moore formulated his law in 1965; in its popular form it states that semiconductor performance doubles every 18 months, and it has held true to this day (Hiremane 2005). This incredible increase in performance, accompanied by falling prices, drove the computing world from mainframes to home computers such as the Apple II, the IBM PC, SUN, the Commodore C64, and the Sinclair ZX81, followed by the Apple Macintosh and many more ever since. The important aspect of this part of technology history is the sudden and unprecedented availability of computing power in the hands of hundreds of thousands of people. These new machines were affordable not only to professional software engineers; everyone was able to own one. Once the genie was out of the bottle, there was no stopping the exploding software industry.
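To get a feeling for what such a doubling rate means, the following toy calculation in Python (the time spans are arbitrary and chosen purely for illustration) shows how quickly the performance multiplier explodes:

```python
# A toy illustration of the exponential nature of Moore's law as stated above:
# performance doubling every 18 months. The chosen time spans are arbitrary
# assumptions for the sake of the example.
def moore_factor(years: float, doubling_period_years: float = 1.5) -> float:
    """Performance multiplier after a given number of years."""
    return 2 ** (years / doubling_period_years)

for years in (10, 25, 50):
    print(f"After {years:2d} years: x{moore_factor(years):,.0f}")
# After 10 years the factor is about 100, after 25 years about 100,000, and
# after 50 years about 10 billion: this is how exponential curves that start
# with small numbers end up reshaping an entire industry.
```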

When machine computing started, coding software basically meant punching holes into punch cards, putting them into a reader, and hoping that you had not punched the wrong hole. Since then, the art of writing code has leapt forward along with the invention of supporting operating systems. Examples are IBM’s OS/360, CP/M, Unix, and MS-DOS, just to name the most relevant ones of the early years. It was this development of standardized systems that allowed the portability of software and therefore the scaling of the young industry. With the rise of the Apple Macintosh and its revolutionary operating system in 1984, yet another era began: graphical user interfaces changed the way of interacting with computers forever. WYSIWYG, what you see is what you get, was the new interface standard. Microsoft would need another 6 years before releasing their first widely adopted graphical user interface with their Windows operating system, soon to be followed by Linux, BeOS, and NeXTSTEP. The latter, also a Unix derivative, would eventually become the core of the current Apple Mac OS X with the return of Steve Jobs to the company he had co-founded.

To program all these mainframes and computer systems with their individual operating systems, suitable and stable programming languages and frameworks were needed. Since the invention of computers, mankind has invented an incredible number of programming languages; a quick check on the Internet easily yields over 600. One would be surprised how many of them one has never heard of. A simple self-test is to write down the ones you do know, which may become a rather revealing exercise given our growing dependency on software.

2.3 Mobile Telecommunications Everywhere

Yet, one more element was needed to achieve ubiquity: wireless/mobile communication. Mobile communication turned out to have the largest imprint on the world as we know it today. Elements of this technology were actually foreseen in articles in a book called “Die Welt in 100 Jahren” (The world in 100 years) by German journalist Arthur Bremer (1910). Already at the beginning of the twentieth century, this book spoke about devices to transport voice and video via radio. It even predicted the availability of wireless handheld devices in everybody’s pocket. And it may be hard to believe, but in 1926, a fast train connecting Berlin and Hamburg already had a public phone booth providing “wireless” phone calls from the train to fixed-line receivers. Today we know that those engineers were on the right track back then already.

This promising development of the early years of the last century took a long break during World War II, but eventually picked up again in the “Wirtschaftswunderland” Germany once the war was over. With the economic success of the country, the need for mobile telecommunication surged and resulted in a series of mobile network technologies being launched in the market. In Germany, these networks were named alphabetically: in 1958 the A-Network was launched, followed by the B-Network (1972). Finally, the C-Network, the last analog mobile communications network, was launched in 1986. These analog networks are considered the first generation of mobile telecommunication networks and therefore named 1G networks. The actual 1G telephones had little to do with today’s mobile phones and were typically built into cars and other vehicles, as their sheer size and weight did not allow them to be carried around much.

The analog mobile communications age came to an end with 2G, second-generation, networks. Mobile communications started being based on digital compression algorithms, enabled by fast, low-energy number-crunching processors. For the transmission of data, these mobile networks started operating on a tiered network architecture similar to that of the Internet, but on a different protocol: the SS7 protocol (Techopedia 2019). In 1987, the GSM (Groupe Spécial Mobile) group released a MoU introducing the GSM standard (GSMA 2019a). The first German 2G GSM network was launched by DeTeMobil, a subsidiary of Deutsche Post, in 1992. At the time, the German post office (Deutsche Post) was in charge of all of the country’s telecommunication services. DeTeMobil was followed by the mobile network operator Mannesmann Mobilfunk, a diversification of the Mannesmann steel portfolio. Following the alphabetic order, the E-Net was launched with the mobile network operators E-Plus and Viag Interkom. Today, DeTeMobil has become Deutsche Telekom and is widely known in some markets as T-Mobile. Mannesmann Mobilfunk was eventually acquired by Vodafone. And Viag Interkom first became O2, which was then acquired by Telefónica, which eventually swallowed the fourth operator E-Plus as well.

When telecommunication started, it was already all about data. Morse code was the way to transport characters and numbers over wires across the land or over radio at sea. Still, for a long time afterward, telecommunication was perceived mainly as a voice-based service. Of course, there were still telegrams, telex, and later fax machines, mainly in offices and at the post office. But these were expensive and bulky machines.

It took until the introduction of GSM for another chapter of mobile communication to be opened: sending and receiving data from and to everybody. Utilizing some leftover signaling capacity in the SS7 network protocol, the Short Message Service (SMS) was invented by a Deutsche Post employee. It was introduced to the market under the brand name “D1-Alpha.” Nobody expected what would become of a service limited to 160 characters, which tediously had to be typed on a numeric keypad. Not only did SMS, at an initial price tag of 39 German Pfennig, contribute large parts of the profits on mobile operators’ balance sheets in the coming years; it also opened the doors for large-scale application-based data transmission via wireless telecommunication networks. SMS enabled the first M2M (machine-to-machine) data use cases on GSM networks.

3 Mixing The Dough

It took 25 years to get the ingredients prepared for a market and technology mixture that would expand like yeast dough. We had the World Wide Web, the Internet, and the network technology to connect millions of computers, programmed by an ever-growing number of software developers. We had affordable computers, and we had a global mobile communications network that was about to grow exponentially with accelerating globalization and the rising number of World Wide Web users. It was the beginning of what would later be called the dot-com bubble: an overhyped market with lots of technical and business fantasy, setting the example of hockey-stick business case dreams and, resulting from these, exaggerated company valuations.

Along with these new technologies, corresponding associations and standardization bodies were created. The W3C, the World Wide Web Consortium, was founded by Tim Berners-Lee in 1994 (W3C 2019), the GSM Association (GSMA) in 1995, and ICANN, the California-based nonprofit “Internet Corporation for Assigned Names and Numbers,” in 1998. ICANN is the organization that is home to the Internet Assigned Numbers Authority (IANA) group, which is in charge of Internet domain names and IP numbering. The market was creating its own rules with a new paradigm of interconnection, international interoperability, and, step by step, interchangeable data formats. This was a major breakthrough in a computer world that had been protected by walled gardens for the longest time. In those days, you could not even send e-mails from the AOL online service to its competitor CompuServe.

But of course, it was exactly companies like AOL that made the Internet a common asset. They gave millions of private households access to the Internet with their “You’ve got mail” alert on receiving new e-mails, a soundbite which eventually became famous with the identically named movie. Many others were using the CompuServe or Prodigy services to go online, and yet others simply bought themselves a 3.5 in. disk with the necessary PPP drivers to connect to the next Internet server at their university. Mosaic, the first widely used graphical browser, pushed the window to the World Wide Web open, soon to be overtaken by the Netscape Communications browser. Ever since, several other browsers have followed, competing on speed, HTML interpretation, and cross-platform compatibility to this day. With the World Wide Web and browsers available and websites piling up on the Internet, finding content became a challenge. This was the opportunity of search services like Yahoo! and AltaVista, which launched in 1995, soon to be followed by Google. The latter would eventually become synonymous with web search in today’s languages.

Toward the end of the 1990s, the Internet was the driving force of business. E-commerce startups and e-business consultancies such as the previously mentioned Little Five were challenging brick-and-mortar business models. Disintermediating established value chains was the motto. A famous example is the Polymerland online marketplace project at GE Plastics. Following Jack Welch’s motto “Destroy your own business” (Martinson 2000), the world’s largest plastic pellet manufacturer was kicking out the intermediaries in its cascaded distribution chain. On the B2C (business-to-consumer) side, Amazon was attacking the booksellers’ market, and eBay was inspiring many other marketplaces in the B2C and B2B (business-to-business) segments to offer similar online auctioning platforms.

On the network architecture side, the addressable space in the Internet was getting tight with the rapid increase in websites and hosts. To solve this issue, IPv6 was introduced in 1998, expanding the address space to 2^128 (about 3.4 × 10^38) possible addresses. In parallel, security became an ever-growing concern on the web, and TLS (Transport Layer Security) was introduced as the successor to SSL (Secure Sockets Layer). Companies like Thawte started their Web of Trust model, providing web security certificates via their network of registered and certified notaries.

In the same year, on the mobile communications side of things, 3GPP, the 3rd Generation Partnership Project, united several regional standardization bodies. Their aim was to lay out the technical specifications of the third generation of mobile networks; it was the initiation of 3G/UMTS networks (3GPP 2019). Today, the 3GPP organization is headquartered in Sophia Antipolis, a technology park in southern France and one of Europe’s high-tech hubs, which has been a sweet spot of technical excellence for a long time.

In these last years of the ending millennium, Siemens delivered the first GSM data module to the market, which allowed the use of mobile data transmission with an industry-proof device. Another would-be mobile web “killer” application, WAP (Wireless Application Protocol), was less successful: it delivered an awful user experience on SMS-style text web pages, and first implementations failed due to a lack of viable business cases. Yet another technology that would influence our everyday lives was defined by a newly formed association: the Wireless Ethernet Compatibility Alliance (WECA) branded its new technology Wi-Fi and would later rename itself the Wi-Fi Alliance (Information Gatekeepers 2002). Today, the first question of every kid in the world entering a new building is: “Do you have Wi-Fi? What is the password?”

At the beginning of the new millennium, our civilization survived the year 2000 bug without major damage and started into a new chapter of a mobilized Internet world. Microsoft released Windows Mobile, and the widely used Handspring Palm PDAs (personal digital assistants) were enhanced with GSM functionality under the Treo product line. Nokia’s Communicator was running the mobile operating system Symbian S60. And Nokia, with some 30% market share of all mobile phone sales worldwide, was Europe’s most valuable company (Young 1999). With the introduction of the data transmission protocol GPRS, also called 2.5G, the GSM networks expanded their data transfer capacities to a staggering 40 kbit/s, similar to what you would get via a regular phone-line modem at the time.

The stage was set for a new term to show up: the Internet of Things (IoT). Coined by Kevin Ashton at Procter & Gamble (Cole 2018), the term was originally intended to describe a solution to track goods in the supply chain via RFID chips. But even before that, the first devices had been connected via M2M (machine-to-machine) communication, such as connected Coke vending machines (IBM 2018) or connected toasters (Rebaudengo 2012). What IoT needed to flourish were further technological frameworks to simplify the architectural setup. As such, the introduction of XML (Extensible Markup Language) by the W3C, and later REST (Representational State Transfer) as well as JSON (JavaScript Object Notation), would give developers the architectural stability to build what were called Web Services. Eventually Amazon would become the leading web services provider with its subsidiary Amazon Web Services (AWS), which it founded in 2006, turning the infrastructure built for its own increasing internal IT demands into a product. Cloud Computing was born and has diversified into Software as a Service (SaaS), e.g., SAP S/4HANA; Platform as a Service (PaaS), e.g., IBM Bluemix/Watson; and Infrastructure as a Service (IaaS), e.g., the Oracle Cloud Infrastructure (OCI). Meanwhile, entire corporate IT system landscapes are being delivered as cloud-based applications, with the advantages of scalability, reliability, constant upgrades and bug fixes, and more security than most IT departments could ever achieve themselves.
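To illustrate the kind of web service interaction this stack enabled, here is a minimal, hypothetical sketch in Python: a JSON-encoded sensor reading posted to a REST endpoint. The URL, field names, and token are invented for illustration and do not refer to any real service.

```python
import requests  # widely used third-party HTTP library for REST calls

# A hypothetical IoT reading, expressed as a plain JSON-serializable dict.
reading = {
    "deviceId": "vending-machine-42",
    "timestamp": "2019-07-01T12:00:00Z",
    "temperatureCelsius": 4.2,
    "stockLevel": 17,
}

# POST the reading to an invented REST endpoint; requests serializes the
# dict to JSON and sets the Content-Type header automatically.
response = requests.post(
    "https://api.example.com/v1/readings",        # hypothetical endpoint
    json=reading,
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=5,
)
response.raise_for_status()
print("Stored reading:", response.json())
```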

On the mobile network side, data bandwidth and radio coverage were continuously increasing. The leading mobile phone suppliers of the early millennium, Nokia, Motorola, and Siemens Mobile, were offering smartphones with e-mail, calendar, and web applications and further smart features built in. In 2002, the first 3G/UMTS networks were launched, promising speeds of up to 384 kbit/s. However, even in highly developed countries, EDGE, or 2.75G, is still the best connection you can get in some areas to this day. For short-range connectivity of smart devices, the standards ZigBee and Z-Wave were established and triggered new Internet of Things product lines around sensors and actuators organizing themselves in local mesh networks, such as smart home automation controlling air conditioning, heating, lighting, and other appliances.

In the years to come, the GSMA was pushing the mobile carrier market with further, even faster transmission technologies. 3G was followed by “fake” LTE in its 3.9G version in 2010, even though true LTE+/4G would not be around before 2014. The Internet received a major feature update with the standardization of HTML5, whose first draft appeared in 2008. This led to many new website designs with new features like responsive web design. Eventually HTML5 put an end to energy-consuming Flash websites and Flash-based banner advertising. Further wireless data transmission technologies for connected things like LoRa, NB-IoT, and LTE-M were established, paving the way for connecting the 50 billion smart devices Cisco predicted for the year 2020 (Cisco 2011).

With all this focus on technologies and standards, we must not disregard the perhaps most important aspect of this story: the human contribution. Along with all these developments over 50 years, a continuously growing number of software developers was required to do the job. What was once elite knowledge limited to scientists in white coats and to electrical engineers has become a job for many more people: jeans-and-t-shirt-dressed global nomads, sitting on palm beaches and making a living with laptops on their knees. By the end of 2018, according to IDC, the number of software developers worldwide had reached 22.3 million, with a growing tendency (IDC 2018). Soon we will reach the point when more people on this planet are developing software than are involved in building cars (Wickham 2017). This is a paradigm change we will need to consider: the transformation of value creation, from building things to building code.

Finally, one major disruption split the world into a before and after. On January 9, 2007, at the Macworld Conference & Expo in San Francisco, Steve Jobs announced “… a product that comes along that changes everything …” The first iPhone would ring the final bell for the old league of mobile phone producers. Once it was widely available in the market, and together with Google’s mobile operating system Android, released in 2008, the term “smart phone” would mean something very different than ever before. For consumers, the iPhone and Android-based devices made the dream of ubiquitous computing come true: all the content of this world at your fingertips, at any place, at any time.

4 Status Quo 2019

Not everybody needs to be able to cook a stew, but maybe everybody should be capable of appreciating the work that has been done, just as it would be helpful for everybody to be able to identify the ingredients of the soup. Exactly this was the intention of the previous chapters, and I hope you enjoyed the cooking session.

Here we are in the year 2019 and we have come a long way, eventually arriving at a junction of fixed and mobile networks, where end users, applications, and devices share the Internet for all conceivable sorts of communication and data. The underlying architectures and standards have achieved a stable, cross-operational environment for the use cases we are seeing today. The industry behind hardware, software, and networks has gone through several cycles and leaps of innovation and consolidation. This has resulted in a global footprint of thousands of technology firms delivering elements to the entire ecosystem. The market is led by a winning pack of a few global players that dominate the playing field: Amazon, Apple, Google, IBM, Microsoft, and, not to forget, the forward-storming Chinese players Alibaba, Baidu, Huawei, Tencent (WeChat), and Xiaomi.

On the mobile network side of things, operators are still busy upgrading their 3G and 3.9G systems to 4G-LTE+ (LTE-Advanced), with entire regions outside areas of high population density being stuck without any acceptable data coverage (Opensignal 2019). With only 65% availability of high-speed LTE connections, Germany ranks a disappointing 70th in an international comparison. The average maximum download speed in Germany is lower than 15 Mbit/s. That is less than half of the 35 Mbit/s the Netherlands is achieving, not to mention the 50 Mbit/s of South Korea (Speedcheck 2019). In this context, Germany is a developing country.

The good news is that the number of mobile telephony subscribers has, for the first time ever, surpassed the number of fixed network subscribers in Germany, and this trend is going to continue. Regarding data consumption in Germany, fixed networks are still in the lead with 45,000 million gigabytes (GB) of data. Mobile has a share of only 1,993 million GB, but with a 44% year-over-year growth rate, it is going to eat into the big chunk of the pie chart (Bundesnetzagentur 2019a). Talking about appetite: data is eating voice, and since the introduction of VoLTE, the difference between voice and data will foreseeably become void. With the auctioning of the 5G spectrum (Bundesnetzagentur 2019b), the legal framework is prepared for the once again four mobile network operators to offer 5G in Germany. The first 5G networks have already been switched live in dedicated areas. 5G is starting to make its way into our connected reality, carrying big promises of a high-speed mobile network in the near future.
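As a rough back-of-the-envelope check of the “eating into the pie chart” claim above, the following sketch assumes, purely for illustration, that the 44% growth rate and the fixed-line volume both stay constant, which of course they will not:

```python
import math

# Mobile and fixed-line data volumes as quoted above (per year).
fixed_gb = 45_000e6   # 45,000 million GB (fixed networks)
mobile_gb = 1_993e6   # 1,993 million GB (mobile networks)
growth = 1.44         # 44% year-over-year growth of mobile data

# Years until mobile volume would match today's fixed-line volume,
# under the crude assumption of constant growth.
years = math.log(fixed_gb / mobile_gb) / math.log(growth)
print(f"Mobile volume would match today's fixed volume after ~{years:.0f} years")
# Prints roughly 9 years: a crude illustration of why mobile data is said
# to be eating into the big chunk of the pie chart.
```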

5 The Evolutionary Revolution to 5G

When it comes to keeping marketing terminology such as “LTE” or “5G” apart from the physical reality, you have to understand that the technical development of cellular or mobile networks happens through successive specification releases worked out in the working groups of the previously cited 3GPP organization. The intention behind 3GPP’s activities is to reduce complexity and to avoid fragmentation of technologies with each progressive 3GPP radio access technology. There will be no switch turning off 4G and turning on 5G tomorrow; rather, it will be a side-by-side development, overlaying and morphing one network into the other over the years to come. In this sense, 5G is actually not a revolution but more of an evolution: an extension of performance plus the add-on of new technologies to existing setups.

From a use case perspective, 5G is intended to combine these three different scenarios:

  (a) Ultra-fast data transmission through enhanced Mobile Broadband (eMBB)

  (b) The M2M network of the Internet of Things, offering the capability of addressing and communicating with literally billions of devices through massive Machine Type Communications (mMTC), and finally

  (c) The support of time- or mission-critical applications through Ultra-Reliable and Low Latency Communications (URLLC)

Let us elaborate further on the most important features of 5G. The major difference from preceding GSM technologies is the significantly higher maximum data transmission rate. LTE (LTE+ or LTE-Advanced) has a theoretical maximum data transmission rate of 500–1000 Mbit/s. Fully fledged real-world 5G+ is promised to deliver up to 20 Gbit/s. To put it in more illustrative words: downloading a UHD movie will take only a few seconds. The secret behind this huge increase is, for one, the improved overall technological architecture, but also the extension of the utilized frequencies. While the older mobile networking technologies were operating on a spectrum between 0.8 and 2.6 GHz, 5G is designed to also work on frequencies between 6 and 300 gigahertz (GHz) to achieve its full advantages. As physics teaches us, increasing frequencies come along with shortening wavelengths, so the spectrum is moving toward millimeter waves in the future. This allows much denser information packaging, so much more data can be carried along on each wave segment. On the downside, the higher the frequencies, the shorter the reach. Millimeter waves cannot easily pass through objects or walls, which limits such theoretical 5G use cases significantly. Still, the performance improvement is significant, even when used on today’s frequencies.
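A quick back-of-the-envelope calculation illustrates the download claim. The movie size of roughly 20 GB is an assumption for illustration; the peak rates are the theoretical best-case values quoted above:

```python
# Rough download times for a UHD movie at the peak rates quoted above.
movie_gigabytes = 20                 # assumed size of a UHD movie
movie_gigabits = movie_gigabytes * 8 # convert bytes to bits

for label, rate_gbit_s in [("LTE-Advanced (1 Gbit/s peak)", 1.0),
                           ("5G (20 Gbit/s peak)", 20.0)]:
    seconds = movie_gigabits / rate_gbit_s
    print(f"{label:30s} -> {seconds:6.0f} s")
# 160 s versus 8 s: at its promised peak rate, 5G would indeed bring the
# download of a UHD movie down to a few seconds.
```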

Besides speed improvements, 5G will support technologies like Beamforming, Massive Multiple Input Multiple Output (MIMO), and Network Slicing.

  • Beamforming makes it possible to shape the propagation of the radio waves so as to maximize coverage in a defined target area. Instead of radiating the signal spherically, a beamformed signal looks more like a baseball bat hitting the ball (the receiving device) right where it flies (see the sketch after this list).

  • Massive Multiple Input Multiple Output (MIMO) is the bundling of several antennas of a wireless network, which allows transmitting and receiving several data signals simultaneously over the same radio channel.

  • Network Slicing will allow the creation of a multilayered bundle of virtual networks on the same physical network infrastructure, providing different quality-of-service levels and thereby serving different purposes. Data packets of different importance and criticality can be kept apart, for example e-mails being separated from real-time traffic information.
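The following toy sketch illustrates the beamforming idea from the first bullet above: a uniform linear array of antenna elements is given per-element phase shifts so that the signal adds up in the direction of the receiver and largely cancels elsewhere. The element count, spacing, and steering angle are arbitrary assumptions chosen purely for illustration:

```python
import numpy as np

# Toy beamforming example with a uniform linear antenna array.
# Assumptions: 8 elements, half-wavelength spacing, receiver at 30 degrees.
n_elements = 8
spacing_wavelengths = 0.5   # element spacing in multiples of the wavelength
steer_deg = 30.0            # direction we want the beam to point at

def array_factor(theta_deg: float) -> float:
    """Relative field strength of the array in direction theta_deg."""
    theta = np.radians(theta_deg)
    steer = np.radians(steer_deg)
    n = np.arange(n_elements)
    # Each element gets a phase shift so that signals add up coherently in
    # the steering direction and (partially) cancel in other directions.
    phase = 2 * np.pi * spacing_wavelengths * n * (np.sin(theta) - np.sin(steer))
    return abs(np.sum(np.exp(1j * phase))) / n_elements

for angle in (0, 15, 30, 45, 60):
    print(f"{angle:3d} deg -> relative gain {array_factor(angle):.2f}")
# The gain is 1.0 at 30 degrees (the target) and much lower elsewhere:
# the energy is concentrated toward the receiver instead of being
# radiated spherically.
```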

Being more efficient and faster in sending to and receiving from end devices will allow for another major improvement of 5G networks: the optimization of latency. This will drop from a best case of 10 ms today to under 2 ms, allowing time-critical systems, such as remote surgery or the remote control of fast-moving objects, to use near-real-time data connectivity.
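To make these numbers tangible, a small worked example (assuming a vehicle travelling at 120 km/h, a value not taken from the text) shows how far a remotely controlled object moves while a command is still in transit:

```python
# How far does a fast-moving object travel during one network latency period?
speed_m_per_s = 120 / 3.6   # 120 km/h expressed in metres per second

for latency_ms in (10, 2):
    distance_m = speed_m_per_s * latency_ms / 1000
    print(f"At {latency_ms:2d} ms latency the vehicle moves {distance_m:.2f} m "
          "before a remote command can even arrive.")
# Roughly 0.33 m versus 0.07 m: for remote control of fast-moving objects,
# every millisecond of network latency translates directly into distance.
```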

After first pilots in 2018, the first commercial 5G networks were actually launched in 2019. The first 5G devices were presented at the industry’s major trade fair, the Mobile World Congress (MWC) in Barcelona, that year. At the time this article was written, about one dozen 5G handsets had made it to the market. Apple, as one of the major players in the smartphone market, has not implemented 5G in its brand-new iPhone 11 yet, though.

Along with all the increased data transfer rates of 5G, one more phenomenon is growing in importance: the necessity to operate on these massive amounts of data right after or right before the data enters or leaves the network. Computing capacity is required at exactly this point to guarantee the low-latency promise. Sending all this information first through the mobile operator’s network and on through the Internet into the cloud would ruin the time advantage right away. This is where the term “Edge Computing” comes into play, to unlock the key benefits of 5G: speed, low latency, and the capability to handle huge numbers of connected devices in the network.

6 Edge Computing

To put it in simple words: we need to minimize the travel time for data. Edge Computing is about installing computer hardware at the edge of the network, that is, at the antennas and base stations. Having stored our information in the cloud on platforms, we now need to bring the number-crunching power of such services closer to the application that needs it, in minimum time. This works just fine with a technology called network and server virtualization, which allows computing power to be allocated dynamically to the most suitable location, depending on various criteria, while still being addressed as a virtual entity.

With the implementation of network virtualization, it no longer matters where you actually locate your systems. Consequently, to realize the key 5G benefits of latency and speed, they can be located as close as possible to the edge. This is exactly what Edge Computing is all about (Ericsson 2019). Besides reducing data travel time to and from the end application or device, Edge Computing also reduces the overall load on the core network of mobile operators, as well as on the backbones of the Internet as such: following the principle that data should not have to travel far in the network reduces the overall data load on the Internet.
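A simple back-of-the-envelope sketch makes the point about data travel time. The distances and the signal speed in fiber are illustrative assumptions, not measured values:

```python
# Why proximity matters: pure propagation delay to a distant cloud region
# versus an edge node at the operator's base station site.
# Assumptions: ~200,000 km/s signal speed in optical fiber, a cloud region
# 1,500 km away, and an edge node 10 km away.
FIBER_SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay for one request/response pair, in milliseconds."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

print(f"Cloud (1500 km): {round_trip_ms(1500):.1f} ms just for propagation")
print(f"Edge  (  10 km): {round_trip_ms(10):.2f} ms just for propagation")
# ~15 ms versus ~0.1 ms: the distant round trip alone would already consume
# the entire sub-2 ms latency budget that 5G promises, before any processing.
```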

One other data-hungry technology requiring Edge Computing is the field of AI (Artificial Intelligence) and machine learning. Huge amounts of data are required both for training AI systems and for using AI solutions. While text-based language processing, so-called chatbots, runs on relatively small amounts of data, requirements get bigger once you start using spoken language, as entire recordings need to be transferred. With the analysis of video content, for example to implement face recognition technologies or even interpretations of mood and situational context, even more data will need to be exchanged.

The same is true for the thousands of applications not yet invented around the use of sensor data of all kinds in all circumstances of human life, industry, environment, farming, etc., which is supported by another recent extension of mobile networks: Narrowband IoT (NB-IoT) and LTE-M, two standards specifically targeting the connection of billions of smart devices.

Our world is changing into a matrix of information. Not yet like the movie “The Matrix,” but with the invention of digital twins of physical realities, we are heading strongly in this direction.

7 Benefits of 5G and Edge Computing

Business is about making money, and so is building 5G networks, installing Edge Computing capacity, or developing AI platforms. The GSMA expects the mobile network subscriber base to grow to 5.8 billion subscribers, by then representing 71% of the world population. Over the same time horizon, until 2025, the number of IoT devices is expected to scale up to 25 billion. Even if the latter is only half of what Cisco predicted in 2011, it still correlates with revenue expectations of 1.1 trillion USD (GSMA 2019b). There will be literally no aspect of human society that is not going to be connected with what we call the Internet of Things, or, to express it more correctly, the Internet of Everything. Digital is the new normal. Or, to say it in the words of the German digital transformation insider Karl-Heinz Land: “Everything that can be digitalized, will be digitalized.”

And the industry has understood this very well. According to IDG, data analytics is the top IoT benefit for enterprises, and the largest share of future investments will go there, along with IoT security and IoT management and connectivity (IDG 2018). Never before has data had such value for all aspects of human society. Be it your online or offline purchasing behavior, your medical information, the availability of the next bus, or the traffic jam prediction of your connected car: all of this needs IoT and data analysis. And thinking beyond the human interaction layer, even more data is being used in background processes steering and controlling machines, our infrastructure, our industries, simply our entire civilization.

8 Top Five Use Case Categories

In its “The Internet of Things: Mapping the Value beyond the Hype” report from June 2015, the McKinsey Global Institute framed nine use case scenarios for the Internet of Things, together with a forecast of revenues and data usage (McKinsey 2015). Many of the IoT cases can be applied seamlessly to 5G, even though for some this may not always make much sense, as the key USPs of 5G may not even be required for the respective IoT application. Let us have a look at a short list of application scenarios for 5G and Edge Computing which seem to be the most promising.

8.1 Human Beings

Everything is all about people, so this category deserves to be named first. When talking about IoT, 5G, and human beings, some of the older folks may start thinking about “The Borg,” a technologically enhanced collective society and race from the TV series Star Trek. I guess it does not have to be that scary. Indeed, there is a lot of promising potential in connecting human data with the cloud. We all know the simplest scenario of such a data application: the emergency room in a hospital, where somebody is connected to the monitors and, in case something develops in the wrong direction, an alarm goes off, calling the doctors for help. This self-explanatory scenario can easily be extended to real life outside of hospitals today. Especially for elderly people, a real-time connection via an IoT monitoring device can save lives or avoid further complications. In a medical emergency, using smart AI analysis on a combination of recent and historical body data, together with the background of the patient’s health files, would drastically improve the quality of medical treatment decisions.

But what works for grandma works just as well for your newborn while you are watching TV in the other room. And it works for all of us when going running in the park, hiking in the mountains, or riding our bikes. Maybe it is only used as a gamification of our daily activity, reminding us to be more health and body conscious. Fitness trackers like Fitbit together with fitness apps like Runtastic, heart-rate and pulse measurement devices, blood-sugar sensors, and much more are available on the market, as all-in-one smartwatches or as specialized applications. The value of the combination of 5G and Edge in this context is the possibility to connect, in real time and at any place where there is coverage, to corresponding data sources, AI health analysis, and further connected features in the cloud. Certainly a clear case from which each of us would benefit.

But without doubt, the most convincing argument for 5G is even simpler: billions of mobile subscribers love to use bandwidth, unlimited data packages, and transmission speed for all kinds of entertainment, mainly movies and games, which thanks to advanced edge-based Content Delivery Networks (CDNs) will soon be delivered as real-time 3D experiences. And if you ask the experts in the GSM arena, that is exactly the main reason why their networks are being upgraded.

8.2 Smart Mobility

When riding an ICE train in Germany about 5 years ago, one could be stunned to hear that the communicated reason for unprocessed seat reservations was the missed handover of a 3.5″ floppy disk at the previous train station. Hopefully this is managed by wireless solutions in the meantime, and it shows how smart mobility starts with such simple services. Of course, trains, planes, trucks, and cars are at the core of smart connected devices, where all of the qualities provided by 5G and Edge Computing will come into play. Mobility bypassed the limits of sustainable growth a long time ago. Traffic jams, parking nightmares, and accidents are only the tip of the iceberg of a severely dysfunctional feature of human society. On the other hand, a globalized economy and an interacting world population are not imaginable without mobility either. As a consequence, mobility needs an upgrade.

Together with the advance of electric cars such as Tesla’s, a new concept had its breakthrough: an over-the-air, remotely updatable car operating system, with the promise of real Level 5 self-driving capability in the foreseeable future. But it is not only the remote software updates of such cars that require high-speed, reliable, and secure mobile communications. A whole bunch of other features and functions in today’s cars use online connectivity. For example, eCall, an automatic accident detector, alerts first responders and the police. Real-time information on delicate functional elements of the car can be forwarded for pattern recognition purposes to the manufacturers’ databases and AI systems for failure analysis.

Already today, traffic information is being received and sent by the car’s navigation system, informing the driver and at the same time contributing to the cloud-based traffic information status. Real-time and latency-critical data for controlling and steering the car will be pulled from the cloud and from the surrounding mesh, far exceeding the perception bandwidth of the human driver. A whole flood of near-real-time information on surrounding traffic situations, special weather conditions, accident warnings, traffic congestion updates, and similar information will be processed by smart vehicles. Human society will remain mobile. But in the future, humans may be passengers only, no matter whether they are using cars, trains, planes, or flying taxis.

8.3 Smart Logistics

One of the oldest IoT case studies is the Power-by-the-Hour service offering of the jet turbine manufacturer Rolls-Royce. It was in 1962 that Rolls-Royce started to sell its engines by the hours they were used, instead of selling the machine as a piece of equipment. Back then, this was a revolutionary approach which changed the market dynamics, and it was hard to handle: it took quite some effort to pull the usage data together (Garvey 2016). Today, accessing such data is possible as a result of global GSM network coverage. Even with plenty of white spots in between, it is pretty much impossible for a plane not to pass a mobile network base station in reach. And today it is not only the hours that are being counted: it is all the data the machine produces that is of interest to a manufacturer running a usage-based business model. Predictive maintenance, the early indication of things going wrong in a machine long before they really happen, is based on number-crunching pattern recognition and will become even more reliable as smarter AI is developed.

Of course, smart logistics encompasses all the other aspects of mobile connectivity, such as tracking and tracing, route planning, avoiding weather conditions by integrating real-time weather information, controlling the temperature and humidity of transported goods, and much more. But smart logistics may also be about smart contracting, merging the fields of export banking and logistics into a new set of services. This could be done by deploying blockchain technologies for parcel tracking in combination with Bank Payment Obligations (BPOs) or the older Letters of Credit (LoCs) and a real-time insurance package on top of it. A trustworthy combination of knowing exactly where your goods are, where your money is, and that everything is properly insured.

In many of these scenarios it is not the speed and the bandwidth of the network that are relevant, but the promise of 5G to handle a huge number of individually connected devices, which makes the difference compared to earlier network setups. Tracking everything, everywhere, and receiving a stream of information which can be processed in minimal time to trigger a counterreaction based on the given data is what will change the logistics and mobility of the future.

8.4 Smart Environment

Within the context of IoT and 5G, a lot is being said about smart farming, which really should be considered part of a Smart Environment category. Without a doubt, this is a use case where mobile connectivity can help feed the world. Continuously updated information on temperature, soil moisture, and plant growth, and the precise identification of when countermeasures against insects or plant diseases are needed, will help the agricultural industry feed the world with improved sustainability, less chemistry, and healthier food as the outcome. The combination of sensors and smart machinery, such as smart harvesters, weed-eliminating robots, and more, is already finding its way from startup status to production.

But there is another aspect, which does not make headlines until we read about disasters: measuring the activity of this planet, of its climate, its forests, its oceans. Scientists have been covering the planet with sensors for all kinds of information, and it has been complicated and costly to maintain such data aggregation in the past. With 5G and accompanying IoT network technologies, the concept of Smart Dust (Marr 2018) is becoming reality. Smart dust consists of tiny devices, so-called micro-electro-mechanical systems (MEMS), for collecting and communicating information. The vision is that these can be distributed from a helicopter or plane even in the most difficult terrain. With such technology, our knowledge about what is going on in our entire environment will grow exponentially. Much that is based on limited data and constructed theories nowadays will become factual knowledge in the future.

8.5 Smart Industry

What Germany calls “Industry 4.0” is an industry-focused perspective on IoT. Consequently, there is a Germany-specific 5G provision with the possibility to set up non-operator, so-called “campus networks.” These are networks limited to a specific, confined area, such as an industrial site. For such locations, dedicated 5G networks can be set up after receiving a special license within the 3700–3800 MHz range from Germany’s Federal Network Agency (Bundesnetzagentur). Deutsche Telekom has already launched a campus network prototype together with OSRAM (Telekom 2019), and other industrial players are evaluating the feasibility of such projects.

The reasoning behind using 5G instead of fixed cabling of assets or standard Wi-Fi technology is the combination of secure connectivity with the hope of less interference, and therefore more network reliability. And, of course, building and operating a wireless network for a whole industrial complex with potentially tens of thousands of connected devices and machines is not an easy job, not even for large corporate IT departments. That is a skill predominantly found in the technology departments of mobile network operators. It remains to be proven whether the increased cost of building such networks (with niche-market equipment being necessary due to the special frequency range) pays off through the expected business cases and productivity gains.

Wrapping it all up, we are truly looking into a future of ubiquitous computing, as once envisioned at the end of the last millennium. The number of use cases of interconnected devices and applications, and the pervasiveness of this technology in all aspects of human life, human society, business, and science, will without a doubt represent the next step in the development of the human race. 5G and Edge Computing are only intermediate technologies along this way. And considering the developments of the last 50 years, today’s developments are rather evolutionary ways of rolling out ever-improving components and successor technologies. Trusting Gartner’s most recent hype cycle, 5G is at the Peak of Inflated Expectations, so regarding networks and connectivity, the 5G market is on a train with a clear destination: downwards, before it goes up again (Gartner 2019).

9 Conclusions/What Is Left to Do

Being a step away from the cliff of the valley of despair, or, to stay with Gartner’s terminology, the Trough of Disillusionment, 5G will develop from a hyped marketing term into a very real part of our technological ecosystem. No doubt, the implementation of all 5G capabilities in their full breadth, especially the high-frequency, low-latency, and maximum-speed vision, is going to take years. Initially, we will see 5G networks mainly in highly populated regions or in specific industrial areas, where either many subscribers are being served or IoT use cases are being prototyped. This will all go in parallel with the existing and still expanding 4G coverage, which will be more than sufficient for many use cases for quite a while. 4G coverage of remote areas, uninterrupted coverage on highways and train lines, and coverage in architecturally difficult parts of cities may deserve priority in the years to come, compared to getting it all onto 5G, which may need significantly more antennas and masts than today’s network setup.

Rolling out 5G networks across entire countries will be a challenge. Not only does it require enormous investments in many more base stations and antennas, due to the shorter range and reach of high-frequency 5G networks, it also requires connecting all of these network elements with fiber cables. Considering the fact that today, at least in Germany, we are far from having full 3G or 4G coverage across the nation, not to mention full coverage by every network operator in every corner of the country, we can get a feeling for how long it may take to get the 5G job done.

Besides this, it may be a legitimate question to ask whether we really want to entrust any kind of mission-critical, security-relevant, data-based decision to the availability and quality of the mobile Internet connection at a given time. Just imagine sitting in a car that requires a continuous high-speed online connection to ensure its self-driving functions. Based on our everyday experience of failing connectivity, or of difficulties with connections between different mobile network providers, this would not feel very good.

Accepting this dilemma, other solutions need to be considered. Staying with mobility: self-driving cars, automated flying taxis, and whatever else the future might bring will need sufficient AI power on board to deliver the essential Level 5 functions even without network availability. Connectivity to Internet-based computing power or other real-time information may therefore only be an add-on benefit to improve the user experience of such autonomous systems.

Way more important will be the possibility to gather information from your immediate surroundings, beyond sight, but within reach of possible interaction based on the direction and speed of your own system’s movement. 5G is promising some peer-to-peer (P2P) functionality down the road, but this remains to be validated, and it would come with the same bad gut feeling described above. Looking at the history of peer-to-peer networks (the most prominent example is probably the music-sharing platform Napster), we do have working concepts of direct connectivity between network elements at hand. During the protests in Hong Kong in 2019, a peer-to-peer chat app called Bridgefy successfully bypassed mobile network-based control mechanisms and spontaneously generated mesh networks among the participating chat users.

Such mesh networks may have to be developed for mobility and other time-critical, moving, or regional systems as well. It should be possible to come up with a safe, spontaneous network-building standard based on NFC, Bluetooth, or any other wireless protocol, or a combination of several protocols, to provide what is obviously needed. Of course, this would bypass the centralized approach and control regime of the established mobile network kingdoms of today.

Be it for mobility solutions, for democracy and freedom of speech, or for simple financial or technological reasons, I have no doubt that we will see this become real. We will be living the ubiquitous dream.

Even more than we can imagine today.