
4.1 Introduction

Energy systems are moving from static and centralised architectures towards more flexible and distributed structures as the share of distributed energy resources (DERs) grows [6]. At the same time, the utilisation of ICT is expanding throughout geographically distributed networks, allowing an increasing number of subsystems and components to be monitored and controlled remotely. In particular, the large-scale deployment of DERs, smart distribution systems, and real-time markets will boost the utilisation of automation and ICT.

As the energy flow becomes increasingly bidirectional, the amount of exchanged information grows, and the demands placed on communications become more stringent and more versatile because of the diversity of, e.g. offered ancillary services, end-users' preferences, and market regulations. For example, improved communication solutions are needed to increase controllability and to ensure high quality and flexibility of distributed energy systems. This poses challenges for today's communication systems, because communication cost, flexibility, quality of service, availability, response time, and security do not always meet all of the expectations.

We also need to understand that it will take time before the new TSO-DSO coordination schemes and market models presented in Chap. 1 are realised. Meanwhile, ICT solutions will keep evolving, making it difficult to predict the optimal and most cost-efficient communication technologies for different parts of the energy system.

4.1.1 Conceptual Reference Model

In order to select appropriate technologies, we first need to identify the relevant ICT requirements in the different layers of the energy system. The conceptual model depicted in Fig. 4.1 can be used in the dialogue between ICT and energy personnel to capture the main data exchange operations and their requirements in different TSO-DSO coordination schemes.

Fig. 4.1

A conceptual reference model illustrating actors, system components, and services in the energy system [19]

The model presents actors, system components, and services. In the figure, grey rounded boxes represent core business actors/roles in the different coordination schemes. A stakeholder can play multiple business actor roles; for example, an aggregator can do both technical aggregation and energy trading. The market operator (MO) role can be played by various stakeholders depending on the market scenario: the TSO (central market), the DSO (local market), the TSO and DSO jointly (shared market), or an IMO (independent market).

Blue boxes represent the main system components, a.k.a. system actors or entities, used by business actors. The trading system (TS) is devoted to exchanging information with the market management system, e.g. schedules for prequalification, procurement, or activation of ancillary services. DMS/EMS-SCADA is considered here as the system used for network monitoring and control operations. Respectively, the market management system (MMS) is dedicated to running market processes (by the TSO, DSO, or IMO) and to establishing a link between the market operator and stakeholders.

Connecting blue lines represent external data exchange links between system components, while thinner lines represent internal communication links. We analyse the system operations mainly from an energy market point of view. Market operations are not time critical, so they do not require an immediate response. Only DMS/EMS-SCADA-related remote control and monitoring, involving the real-time exchange of measured data, is considered time sensitive.

Green boxes represent core ancillary services, including frequency and voltage control and congestion management. The pictures in the middle represent the grid infrastructure and distributed energy resources, from high voltage down to low voltage, which helps map the energy market events to the physical grid entities.

4.2 Review of Enabling Technologies

We need to understand the capabilities of the available technologies with respect to today's and tomorrow's requirements of ancillary service (AS) applications. Some ASs may have very relaxed criteria, whereas others may have stringent requirements for reliability and response time. The latter types of services are used mainly for control and protection of critical parts of the energy system.

Communication technologies can be divided into wired and wireless ones. The former includes technologies that utilise, e.g. copper and fibre-optic cables and that can be considered reliable and secure. Wireless technologies, on the other hand, are more flexible, agile, and less expensive to deploy and operate. They have become more compelling for utilities and aggregators as communication network performance and security have improved significantly and end devices have become less expensive. Unfortunately, there is no one-size-fits-all technology available.

Next, we go through some potential wireless technologies introduced for machine-to-machine (M2M) communications and the Internet of Things (IoT).

4.2.1 Existing Mobile Communication Technologies

Mobile communication technologies allow us to communicate from different locations without the use of cables. These technologies exploit a cellular network infrastructure distributed over a wide geographical area. Each cell has a base station in a fixed location. Together, the cells provide wide radio coverage, so that a mobile terminal can keep communicating while moving from cell to cell during data transmission. High availability, low operational and device costs, and steadily increasing performance and quality have made these technologies very compelling for energy systems.

GSM (Global System for Mobile Communications), UMTS (Universal Mobile Telecommunications System), and LTE (Long-Term Evolution) are examples of worldwide licensed mobile communication technologies.

GSM is a second-generation (2G) cellular technology offering digital voice calls and limited data services (SMS and MMS messages). GSM has evolved significantly over the years. General Packet Radio Service (GPRS) added packet-switched functionality to GSM for an always-on data connection. It also offered higher data rates by aggregating several GSM time slots: around 14 kbit/s in the uplink direction (from a terminal to a base station) and 40 kbit/s in the downlink (from a base station to a terminal). EDGE (Enhanced Data rates for GSM Evolution) improved data rates to a few hundred kbit/s. EDGE is generally seen as a 2.5G technology between GSM and UMTS.

UMTS is the third-generation (3G) cellular technology that offered greater spectral efficiency and higher bandwidths, enabling multimedia services and Internet access. UMTS has also evolved over the years. High-speed packet access (HSPA) and evolved high-speed packet access (HSPA+) offer data rates above 10 Mbit/s. They use the improved high-speed downlink packet access (HSDPA) protocol and its uplink equivalent (HSUPA). This enables multimedia services, interactive gaming, and large file downloads.

LTE is the most recently deployed technology. It is a fourth-generation (4G) mobile technology offering high-speed data services and more flexible use of frequency allocations. LTE base stations collaborate directly and can exchange information about, e.g. current load levels, predicted capacity, and coverage levels in order to steer their coverage and capacity according to the cellular network load. For example, if demand increases at a particular location, a base station can shrink its coverage in order to increase capacity locally; the excluded areas are then served by other base stations. Making these decisions in the base stations flattens the LTE architecture and improves the cellular network performance and reliability. LTE also exploits advanced antenna techniques and beamforming to increase data rates and reliability. Improvements in data rates in 3G and 4G mobile technologies are presented in Fig. 4.2.

Fig. 4.2

Evolution of cellular technologies and data rates [15]

One complementary mobile communication technology designed for critical communication, especially for public safety authorities, is TETRA (Terrestrial Trunked Radio). This technology is also used by several DSOs and TSOs. TETRA is designed to provide secure, reliable, and robust communication services. It offers the Short Data Service (SDS), which is comparable to GSM's Short Message Service (SMS) but delivered over a guaranteed and secure pipe. One drawback of TETRA is that it is deployed by a relatively small end-user group, so service and device costs are rather high compared to other mobile technologies. Like GSM, TETRA is a narrowband technology with very limited data transmission capabilities.

The current trend is towards converging LTE and mission-critical TETRA technologies. Three evolution paths have been proposed: the first sees LTE (its future releases or 5G) as a replacement; the second sees LTE as a complementary technology to be used together with TETRA to get the best of both (for example, voice over TETRA and data over LTE); and the third is to adopt a range of different narrowband and broadband standards according to operational and business requirements, spectrum availability, and legislation.

4.2.2 Future Mobile Communication Technology

The next mobile generation is 5G, which is no longer intended exclusively for human communications. The volume of data and information communicated with and between things and machines is anticipated to increase drastically. Moreover, applications will span from traditional voice and video towards industrial automation, virtual reality, automated driving, and robot applications, as illustrated in Fig. 4.3.

Fig. 4.3

5G verticals and potential applications [9]

There are three main performance dimensions defined for 5G:

  • Massive machine-type communications (mMTC) is designed to connect millions of inexpensive sensors and machines. Contention-based and connectionless access procedures are supported. Connectivity is evaluated in terms of how well the applications work regardless of device type, time, or location.

  • Ultra-reliable and low-latency communication (URLLC) is designed to connect more complex devices with stricter requirements on reliability and availability. While data transmissions from mMTC devices are infrequent and delay tolerant, ultra-reliable machine-type communication (uMTC) addresses services requiring high reliability and short latencies. uMTC services are typically safety or mission critical.

  • Enhanced mobile broadband (eMBB) is designed to provide capacity enhancements. It can be considered an extension of conventional mobile broadband with improved peak, average, and cell-edge data rates, capacity, and coverage.

Figure 4.4 presents the key performance indicators (KPIs) characterising the three 5G dimensions in different colours: eMBB (green), URLLC (yellow), and mMTC (blue) [12].

Fig. 4.4

5G KPIs according to service type [12]

5G is planned to be launched around the year 2020, according to the timeline in Fig. 4.5. It is designed for carrying mission-critical and massive machine-type traffic and for efficiently connecting device types other than mobile terminals.

Fig. 4.5

Standardisation timeline [3]

The standardisation of 5G is still an ongoing process, but some key 5G requirements and capabilities have been agreed.

5G will be a single radio access network (single RAN) technology built upon new radio access technologies and evolved existing ones like LTE, HSPA, GSM, and Wi-Fi. The benefit of the single RAN approach is that mobile operators can simplify their cellular network architecture by operating different radio technologies on a single multipurpose hardware platform. This platform will exploit both licensed and unlicensed bands, the latter being used to provide additional capacity in a best-effort manner.

5G will be a unified and programmable infrastructure that offers a scalable service experience everywhere and anytime. This means that changes in the logical cellular network architecture can be made simply through software updates. This flexibility will be achieved by exploiting upcoming architectural trends like network clouds, software-defined networking (SDN), network function virtualisation (NFV), multi-access edge computing (MEC), and fog computing (FC).

Network clouds allow resource pooling that reduces overprovisioning and underutilisation of cellular network resources. For example, base stations dedicated to low-latency services could be connected directly to a small nearby data centre, whereas, when no latency-critical services are involved, the connection can be to a large data centre further away. Such flexibility would allow mobile network operators or energy stakeholders to deploy data centres of different sizes to meet specific service needs. The more general fog computing concept offers an architecture that exploits end-user clients or near-user edge devices to carry out storage, communication, computation, and control in a communication network.
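As a toy illustration of this placement flexibility, the following Python sketch routes a service to a data centre according to its latency budget. The site names, round-trip times, and selection rule are our own assumptions, not part of any 5G specification.

```python
# Illustrative sketch only: place a service at an edge or core data centre
# based on its latency budget. All names and numbers are assumptions.

DATA_CENTRES = [
    {"name": "edge-dc", "rtt_ms": 5, "capacity": "small"},
    {"name": "regional-dc", "rtt_ms": 20, "capacity": "medium"},
    {"name": "core-dc", "rtt_ms": 60, "capacity": "large"},
]

def place_service(latency_budget_ms: float) -> dict:
    """Pick the farthest (usually largest) data centre that still fits the budget."""
    candidates = [dc for dc in DATA_CENTRES if dc["rtt_ms"] <= latency_budget_ms]
    if not candidates:
        raise ValueError("no data centre satisfies the latency budget")
    return max(candidates, key=lambda dc: dc["rtt_ms"])

print(place_service(10))   # latency-critical service -> edge-dc
print(place_service(100))  # no latency-critical need -> core-dc
```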

SDN decouples the control and user data planes of communication network devices and provides a logically centralised network view and control. In a complementary way, NFV decouples cellular network functionality from dedicated hardware and promotes software-driven implementation of system functionality. Together, SDN and NFV offer new tools for cellular network load optimisation and improved resilience: they enable repositioning of cellular network functions according to network load, service quality, or operational needs. Multi-access edge computing moves cloud computing capabilities and IT services to the edge of the mobile network, enabling higher reliability and lower latency for real-time data exchange.

The main enhancements of 5G with target numbers are presented in Fig. 4.6.

Fig. 4.6

Performance enhancements designed for 5G [13]

Table 4.1 summarises main 5G trends focusing on reliable, high-quality, and flexible data services.

Table 4.1 A summary of major trends in 5G

4.2.3 Wireless Sensor Networks

Wireless sensor networks (WSNs) are infrastructures containing sensing, computing, and communication elements that give the ability to measure, collect, and react to events in a restricted area or space. They are designed especially for flexible communication. Typical WSN applications in the grid domain are near-field metering and monitoring.

A sensor network can be homogeneous or heterogeneous. In a homogeneous WSN, all nodes have the same capabilities, whereas in a heterogeneous network, some nodes are assigned more responsibility for the sensor network load and control. Based on the network's activity pattern, WSN applications can be divided into time-driven, event-driven, and query-driven.

In a time-driven implementation, sensors transmit their readings periodically: sampling and communication occur at known, predetermined times. In an event-driven sensor network, the sensors monitor the area and transmit information only when something meaningful happens, which minimises data traffic and the transmission of redundant information. The last category is query-driven systems, where gathered information is stored locally in the sensor nodes and retrieved on demand with queries. Scheduled communication protocols are typically used in time-driven implementations and on-demand protocols in event-driven ones. The sketch below contrasts the first two modes.
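The following minimal Python sketch illustrates the difference between time-driven and event-driven behaviour; the sensor interface, period, and threshold are hypothetical.

```python
import random
import time

def read_sensor() -> float:
    """Hypothetical sensor reading, e.g. a line temperature in degrees C."""
    return 20.0 + random.gauss(0.0, 1.5)

def time_driven(period_s: float, rounds: int) -> None:
    """Time-driven node: transmit every period, regardless of the value."""
    for _ in range(rounds):
        print("TX", read_sensor())
        time.sleep(period_s)

def event_driven(threshold: float, rounds: int) -> None:
    """Event-driven node: transmit only when the value changes meaningfully,
    minimising traffic and redundant information."""
    last_sent = None
    for _ in range(rounds):
        value = read_sensor()
        if last_sent is None or abs(value - last_sent) >= threshold:
            print("TX", value)
            last_sent = value
        time.sleep(0.01)

time_driven(0.01, 3)
event_driven(2.0, 10)
```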

One widely deployed sensor network topology is the cluster star, a hybrid of the star and mesh topologies. Sensor nodes are divided into normal nodes and cluster heads, a.k.a. sinks. Normal nodes communicate only with their cluster head, whereas cluster heads can also communicate with each other. This structure allows most of the sensors in the network to be very simple; only a few nodes need additional memory and processing capabilities. The communication link between the sensor network and the global network typically runs through dedicated gateways using licensed or unlicensed radio access technologies.
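A cluster head can be sketched as a node that buffers its members' readings and forwards an aggregate towards the gateway; the class and names below are illustrative only.

```python
# Toy cluster-star routing: normal nodes talk only to their cluster head,
# and the head forwards aggregated data towards the gateway.
class ClusterHead:
    def __init__(self, name: str, gateway) -> None:
        self.name, self.gateway, self.buffer = name, gateway, []

    def receive(self, reading: float) -> None:
        self.buffer.append(reading)
        if len(self.buffer) >= 3:  # aggregate before forwarding
            self.gateway(self.name, sum(self.buffer) / len(self.buffer))
            self.buffer.clear()

gateway = lambda head, avg: print(f"gateway <- {head}: avg={avg:.2f}")
head = ClusterHead("CH-1", gateway)
for reading in (20.1, 20.4, 19.8):
    head.receive(reading)  # normal nodes' readings routed via the head
```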

4.2.4 Low-Power Wide Area Networks

Low-power wide area networks (LPWANs) are designed for M2M applications that have a long communication link (up to tens or even hundreds of kilometres), low data rates, and long battery lives. Devices can operate unattended for long periods of time. LPWANs typically operate at frequencies below 1 GHz, because sub-GHz bands offer longer range and better building penetration, e.g. in indoor spaces.

LPWA technologies are split into two subcategories (see Table 4.2). Current proprietary LPWA technologies, such as SigFox, LoRa, M-Bus, and Dash-7, operate in unlicensed spectrum, while Clean Slate and the 3GPP-standardised cellular IoT technologies, e.g. NB-IoT and NB-LTE-M, operate in licensed spectrum. LTE-M and NB-LTE-M are complementary solutions addressing different use cases: LTE-M has higher capacity, but NB-LTE-M has slightly lower cost and better coverage. The differences between the LPWA technologies are summarised in Table 4.2.

Table 4.2 LPWA IoT connectivity overview [14]

4.2.5 Unlicensed LPWA Technologies

SigFox, LoRa, Wireless M-Bus, and Dash-7 are the best-known unlicensed-band LPWA technologies.

SigFox is a narrowband (or ultra-narrowband) technology with a low noise level. It is bidirectional, but its capacity in the downlink direction (i.e. from the base station to the endpoint) is more limited. SigFox owns all of its technology, from the backend data and cloud server to the endpoint software, but it has opened its endpoint technology to silicon manufacturers and vendors. The business idea is to keep applications very inexpensive and to offer already-installed nationwide networks. The drawback is that only one SigFox network can be deployed in an area, due to exclusive arrangements with the selected network operator. Moreover, the technology is not suitable for continuous communication, due to its relatively high latency and low predictability.

LoRa is a wideband spread-spectrum (chirp spread spectrum) technology with an inherently higher noise level. Due to efficient coding, its communication link budget figures are about the same as SigFox's. LoRa uses the same radio in base stations and endpoints. Consequently, a LoRa terminal costs more than a SigFox terminal, but a LoRa base station is cheaper, making the overall technology less expensive for network deployment. The LoRa ecosystem is open, so basically anyone, e.g. large network operators, private companies, and start-ups, can build and manage their own networks. However, there are open issues related to roaming from public to public and from private to private networks. Although the LoRa ecosystem itself is open, it contains a black box element: Semtech is the only company that makes the LoRa radio.

Wireless M-Bus has been standardised specifically for the smart grid domain. The base interface is M-Bus, and the wireless part is merely an extension. Although Wireless M-Bus has been deployed for advanced metering systems (AMS), its limitation to IPv4 may restrict deployments in Asia [10].

The last unlicensed LPWA technology is Dash-7, which originates from the ISO/IEC 18000-7 standard. The technology was originally used for military logistics but has evolved to support mid-range LPWA applications. The network topology is a tree or star. The technology forces end devices to check the channel periodically for possible downlink transmissions. As a result, Dash-7 has much lower downlink latency than other LPWA technologies, but at the expense of higher energy consumption [16].

Table 4.3 summarises the key features of the four unlicensed LPWA technologies.

Table 4.3 Properties of unlicensed LPWA technologies

4.2.6 Licensed LPWA Technologies

The challenge with unlicensed systems is that communication is not guaranteed: other devices can use the same frequency band and interfere with it. 3GPP introduced machine-type communication (MTC) in LTE to cover machine-to-machine communications, including all types of data communication without human intervention. In addition to conventional GSM and LTE, three new licensed cellular IoT technologies, eMTC, NB-IoT, and EC-GSM-IoT, have been standardised in 3GPP [1, 2]. Their features are presented in Table 4.4.

Table 4.4 Licensed MTC and IoT technologies

The NB-IoT specification builds on synergies with the existing mobile network infrastructure. It provides an extension to LTE with flexible deployment options; the 3GPP specifications allow three of them: stand-alone, guard band, and in-band.

LPWANs are an emerging area of the IoT, and they represent a huge market opportunity as the IoT matures. The technology is anticipated to create new M2M use cases in which connectivity costs are measured in a few dollars per year rather than per month. It is estimated that there will be 3.5 billion LPWA connections and 1.3 billion cellular network connections by the year 2025 [4]. That is equivalent to the current number of global cellular subscriptions, but the density of connected devices is likely to be less uniform (Fig. 4.7).

Fig. 4.7

Analysys Mason’s forecasts for LPWA and cellular M2M connections 2015–2025 [4]

The deployment of IoT for energy systems will gradually extend towards utility and transport networks by 2020. The goal after 2025 is to have plug-and-play smart objects, which can be installed in any environment with an interoperable backbone allowing them to communicate with other smart objects in the vicinity.

4.2.7 Service Architectures

Future energy systems struggle not only with how to coordinate the operation of millions of different devices and subsystems but also with how to manage and orchestrate an increasing number of services. Poor performance and inflexibility may result from an inadequate service architecture. IT enterprises have long struggled with constantly changing application versions and upgrades, integration, security issues, and so on, so different types of service-oriented architectures (SOAs) have been designed to cope with these challenges.

4.2.7.1 Enterprise Service Bus

The core concept of the enterprise service bus (ESB) architecture is that different applications are integrated by putting a communication bus between them and enabling each application to talk via the bus. This decouples applications and services from each other, allowing them to communicate without dependency on, or knowledge of, other applications on the bus. An ESB provides basic adaptation, translation, and routing, plus so-called commodity services including event handling and queuing, data transformation and protocol conversion, security, and exception handling. The ESB defines a set of rules and principles for integrating numerous applications together.
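The decoupling idea can be illustrated with a minimal in-memory bus. This is a deliberately simplified sketch: real ESB products add queuing, transformation, security, and exception handling on top of this publish/route pattern.

```python
from collections import defaultdict
from typing import Callable

class MiniBus:
    """Toy enterprise service bus: applications publish to topics and
    subscribe to topics, without knowing about each other."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Routing is the bus's job; publishers never address receivers directly.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MiniBus()
bus.subscribe("meter.readings", lambda m: print("settlement got", m))
bus.subscribe("meter.readings", lambda m: print("monitoring got", m))
bus.publish("meter.readings", {"meter_id": "MP-001", "kwh": 3.2})
```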

The ESB architecture is suitable for large monolithic systems with built-in security and service orchestration. It is therefore applicable at the TSO-DSO level and can be bought as a turnkey solution from big service providers. However, ESB solutions are too expensive and too inflexible at the aggregator and DER levels, where the heterogeneity of applications and of small mobile or embedded devices is higher. Moreover, applications and services have evolved since the ESB was introduced: nowadays, services are more distributed and have significantly shorter lifecycles, e.g. mobile and IoT applications. The conventional all-or-nothing ESB implementation is too expensive and restrictive for smaller enterprises, so new complementary architecture concepts were needed.

Integration of monolithic and agile IoT services is a challenging task. To lower the complexity, complementary technologies such as SOA Gateways and microservices are applied. They are simpler and thus can enable a lightweight deployment with higher agility and lower cost.

4.2.7.2 SOA Gateway

The SOA Gateway was initially created to protect internal applications when interfaces previously protected only by firewalls are exposed to external parties. An SOA Gateway offers additional security and is applicable for separating the secure, centralised core parts of a system from its distributed and less reliable parts.

The gateway is typically deployed as a hardware component that seamlessly controls access to services, protects information through data-level encryption, ensures the integrity of a message through signatures, and controls corporate information flow.

Key benefits of the SOA Gateway are as follows:

  • Scales from point solutions to enterprise-wide deployment

  • Relies on configuration rather than integration

  • No central rules or brokers

  • Easy to plug in and out, loosely coupling the system

  • Incremental upgrades and patches without service interruptions

On the other hand, key disadvantages are as follows:

  • Slower communication compared to directly integrated services.

  • A single point of failure can bring down all communications in the enterprise.

  • High configuration and maintenance complexity.

The strengths of the SOA Gateway relate to security, high-performance transformation, and edge-based protocol mediation. An SOA Gateway can act as a bridge between different technologies, which is important for building integrated enterprise and IoT/mobile services. It also offers secure extensions to applications deployed in public or private cloud environments.
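The mediation role can be sketched as follows: the gateway authenticates an external message before forwarding it to a protected internal service. The pre-shared key, payload, and service below are assumptions for the sketch; production gateways would rely on certificates and TLS rather than a bare HMAC.

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # assumption: pre-shared key per external party

def internal_service(payload: str) -> str:
    """Stand-in for a protected internal application."""
    return f"processed: {payload}"

def gateway(payload: str, signature_hex: str) -> str:
    """Toy SOA Gateway: verify message integrity/authenticity at the edge,
    then mediate the call to the internal service."""
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_hex):
        raise PermissionError("rejected at the gateway: bad signature")
    return internal_service(payload)

sig = hmac.new(SHARED_KEY, b"activate DER-42", hashlib.sha256).hexdigest()
print(gateway("activate DER-42", sig))
```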

4.2.7.3 Microservice Architecture

In microservice architecture, an application or service is decomposed into multiple small, granular, independently deployable services. Microservices are designed with agility in mind [7]. The services are very simple and focus on doing only one function well. As a result, they are easier to test and validate, which ensures higher service quality.

The architecture encourages developers to implement IT solutions as microservices without using any intermediate integration products such as an ESB. Since the parts are deployed independently, they can also be scaled independently to fulfil different end-user needs. A minimal sketch of such a service follows.
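For example, a settlement calculation could live in its own tiny service. The endpoint name, tariff, and message fields below are hypothetical; the sketch uses only the Python standard library.

```python
# A deliberately minimal "microservice" doing one thing: converting a
# metered energy value into a settlement amount.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

TARIFF_EUR_PER_KWH = 0.25  # assumption for illustration

class SettlementHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/settle":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        reading = json.loads(self.rfile.read(length))
        body = json.dumps({"meter_id": reading["meter_id"],
                           "amount_eur": reading["kwh"] * TARIFF_EUR_PER_KWH}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), SettlementHandler).serve_forever()
```

Such a service could be exercised with an HTTP POST of {"meter_id": "MP-001", "kwh": 10} to /settle and could be replaced, redeployed, or scaled without touching any other service.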

Microservice architecture offers the highest flexibility and support for IoT and mobile services, with low OPEX and CAPEX, because:

  • Each service can be built with the best-suited technologies and tools, allowing high flexibility for implementation.

  • Multiple software developers can deliver services independently enabling continuous delivery and frequent releases while keeping the rest of the system stable.

  • If a service goes down, it only affects the parts that directly depend on it; the other parts continue to function.

In practice, microservices cannot be used alone in energy systems, since large TSOs, DSOs, and aggregators have internal proprietary or legacy systems that cannot be converted into microservices. Instead of focusing on one service architecture, the most suitable and flexible solution for a future energy system is to take advantage of all of them: microservices can address specific service cases executed at the edges of the grid; the ESB can take care of service orchestration in the core grid and cater for all integration needs requiring high security and reliability; and the SOA Gateway can act as a bridge between the ESB and microservice environments.

4.2.8 Data Hub

Data hubs have recently been deployed in energy systems as well. A data hub is a central service platform that facilitates transparent and neutral exchange of market information and execution of business processes between all market parties [21].

Metering operators are responsible for collecting metered values directly from smart meters. The metered values are sent to a data hub through standardised processes, timeframes, and communication formats. A data hub contains the data necessary for consumption settlement and for the execution of market processes, such as master data related to consumers and metering points, metered values with a relevant time resolution, and historical data for analysis purposes. In a data hub, transactions and actual metered values are interlinked with the identified metering point and can be traced upwards or downwards through the executed processes. The sketch below illustrates this linking.
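A minimal validation sketch is given below; the field names, formats, and rules are our assumptions, not those of any specific national data hub.

```python
# Toy data hub record check: metered values must reference a known
# metering point so transactions can be traced up and down the chain.
from datetime import datetime

KNOWN_METERING_POINTS = {"MP-0001", "MP-0002"}

def validate_metered_value(record: dict) -> dict:
    """Reject records that cannot be linked to a metering point or that
    lack a resolvable timestamp or numeric value."""
    if record["metering_point_id"] not in KNOWN_METERING_POINTS:
        raise ValueError("unknown metering point: cannot link transaction")
    datetime.fromisoformat(record["timestamp"])  # raises if malformed
    float(record["kwh"])                         # raises if not numeric
    return record

print(validate_metered_value(
    {"metering_point_id": "MP-0001",
     "timestamp": "2019-06-01T12:00:00",
     "kwh": "3.2"}))
```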

Additionally, a data hub provides other functionalities, such as calculating settlement and imbalance data and providing data for the market parties' reconciliation and aggregation processes. Information about how consumers and market players are related is managed through standardised market processes. Figure 4.8 illustrates communication between suppliers and DSOs before and after data hub deployment: dedicated point-to-point connections are replaced with a common data hub connection.

Fig. 4.8

Communication before and after a data hub deployment

The benefits of having a data hub are a single set of services that all system actors can connect to and trust, rather than many peer-to-peer connections, and a coherent, verified data set in a well-specified format.

One drawback is the possibility of a single point of failure in the system. However, a data hub is seldom hosted on a single server. It is typically distributed, even geographically, across dedicated server farms and hosted by certified data management companies. This means that the overall reliability and availability of the system is higher than, or at least on the same level as, a peer-to-peer network of servers at different companies' premises.

4.2.9 Blockchain

Blockchain is the technology that was invented in 2008 to create the peer-to-peer digital cash Bitcoin. A blockchain can be regarded as an electronically distributed ledger: essentially an asset transaction database that can be shared across a network of multiple sites, geographies, or institutions. All participants within a network can have their own identical copy of the ledger. Any changes to the ledger are reflected in all copies in minutes or, in some cases, seconds. The assets can be financial, legal, physical, or electronic. The security and accuracy of the assets stored in the ledger are maintained cryptographically through the use of keys and signatures. Entries can also be updated by one, some, or all of the participants, according to rules agreed by the network [22].

The uniqueness of this technology lies in the fact that blockchains are maintained by a shared or distributed network of participants, not by a centralised entity, so there is no central validation system. By design, a blockchain is inherently resistant to modification of the data, as the sketch below illustrates.
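The tamper-resistance can be demonstrated with a minimal hash chain. This is only a sketch of the core idea: real blockchains add consensus, digital signatures, and Merkle trees on top of it, and the transaction fields below are invented for illustration.

```python
# Each block stores the hash of its predecessor, so changing any
# historical entry invalidates every later block.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transaction})

def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, {"from": "DER-1", "to": "AGG-1", "kwh": 5})
append_block(chain, {"from": "AGG-1", "to": "TSO", "kwh": 5})
print(verify(chain))           # True
chain[0]["tx"]["kwh"] = 50     # tamper with history
print(verify(chain))           # False: the chain no longer validates
```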

Transactions can be created collaboratively by multiple writers without any party exposing itself to security threats. This is what allows delivery-versus-payment settlement to be performed safely over a blockchain without a trusted intermediary. Another important feature of distributed ledger technologies (DLT) is the extensive use of cryptography to store, secure, and validate asset transactions.

A blockchain is, in essence, an open, permissionless system in which all participants can contribute to the validation process. In energy trading markets, however, blockchains or distributed ledger technologies could be deployed as permission-based systems with authorised participants only.

Digital cryptocurrencies like Bitcoin were implemented with blockchain 1.0. Since then, blockchain technology has evolved: blockchain 2.0 provides smart contracts that go beyond simple cash transactions, and blockchain 3.0 extends the application domain beyond currency, finance, and markets to domains like health, government, science, literacy, culture, art, and energy.

Potential uses of blockchain technology in the energy domain are quite diverse, from micro energy markets to energy and flexibility trading. Blockchain technology has the potential to make trading processes far more efficient, lower the cost of trading, improve regulatory control, and eliminate unnecessary intermediaries. Security, privacy, non-repudiation, traceability, immutability, and availability are fundamental characteristics inherent to blockchain technology. Its decentralised approach and peer-to-peer architecture make it very robust: a failure of a single node, or even multiple nodes, will not bring down the entire system. Real-time processing of mass data, with payment/settlement as a by-product of the trading process, could also be realised. The use of blockchain enforces common data formats and communication protocols, which also promotes future cross-border operations.

Blockchain technology also has shortcomings. The computational work consumes a lot of energy, and it remains unclear how to deal with outdated versus active data. Blockchains will always be less performant than centralised databases. Privacy is also a concern: transaction data should be accessible only to authorised users, not to all participants. Moreover, blockchain technology is not yet mature. It is evolving fast, and a rich ecosystem of players experimenting with different blockchain variants is emerging. Apart from the Bitcoin case, proof of the technology's promise in other domains has still to be delivered, and it may take years before blockchain technology is suitable for ancillary services. Although blockchain technology has a lot of potential in energy and flexibility trading, economic, legal, and regulatory issues will ultimately determine whether blockchains are used.

4.3 Analysis Process and Classification of ICT Requirements

The provision of ancillary services (ASs) from distribution networks involves coordination between different actors and systems. The data exchanged among them imposes ICT requirements that need to be known during the system design phase.

The Smart Grid Architecture Model (SGAM), created by the Smart Grid Coordination Group/Reference Architecture Working Group (SG-CG/RA) [5], presents a structured approach for modelling Smart Grid architectures. The basis of the SGAM is a three-dimensional framework consisting of domains, zones, and layers used to distinguish process and information management viewpoints. The physical domains represent the electrical energy conversion chain: generation, transmission, distribution, distributed energy resources (DERs), and customer premises. The hierarchical zones represent power system management levels: market, enterprise, operation, station, field, and process. Domains and zones (the two-dimensional axes) span five abstract interoperability layers, depicted in Fig. 4.9 and briefly described in Table 4.5.

Fig. 4.9

Smart Grid Architecture Model (SGAM) [5]

Table 4.5 SGAM layers

The SGAM approach can also be used for modelling the proposed TSO-DSO coordination schemes with ICT requirements.

At the business and function levels, existing ICT solutions are largely ready to support the different coordination schemes. This readiness comes from the fact that ICT infrastructures already provide services to multiple industry sectors. In addition, TSOs and DSOs have other, more stringent communication needs, e.g. grid protection, that pose requirements exceeding those defined by the coordination schemes. The available and future communication technologies presented in the previous section provide a good set of alternatives for fulfilling the high-level communication requirements between interacting systems.

4.3.1 Data Protocols

Data protocols are required to convey information over a communication link. From the ICT viewpoint, the most relevant ancillary service procedures on the market side are prequalification and procurement; on the control side, activation and settlement are the most critical. The presented coordination schemes appear very different from a market model perspective, but from the ICT viewpoint they have a lot of similarities: only the data exchange links between sources and targets change. In the case of distributed market models, tighter coordination is needed between TSOs and DSOs, as well as between other actors and systems. Increased coordination especially affects interoperability and security requirements.

The content of the exchanged information, and the fulfilment of the associated requirements, are vital for fluent, efficient, reliable, and secure data exchange between components of the energy system. The amount of information sent and received and its criticality affect data management, security requirements, and communication and computation loads. The following data protocol properties need to be considered:

  • Data structures and formats

  • Data size

  • Implementation complexity

  • Availability and cost-efficiency

  • Open or restricted (international/de facto/proprietary standard)

  • Security

  • Legacy issues

  • Available communication technologies

Regarding the data being exchanged, suitable protocols and standards already exist. For example, the Smart Energy Grid Coordination Group (SG-CG, a.k.a. SEG-CG) has provided recommendations for suitable communication and information standards to achieve interoperability in energy systems throughout Europe. The SG-CG recommends the following standards for market interactions [11]:

  • EDI: not really a standard but a library by ENTSO-E containing several documents and definitions for the harmonisation and implementation of standardised electronic data interchange in the context of achieving EU energy policy goals. The Market Data Exchange Standard (MADES) comprises standard protocols and utilises IT best practices to create a mechanism for exchanging data (documents) over any TCP/IP communication network and to facilitate business-to-business information exchange, as described in the IEC 62325-351 and IEC 62325-451 standards.

  • IEC 62325: a set of standards describing a framework for energy market communications. Its main parts cover the communication between market participants and market operators. The common information model (CIM) specifies the semantic basis for this message exchange.

The IEC 62325 and ENTSO-E EDI standards provide the core foundations for ancillary service market processes. Table 4.6 summarises the exchanged data types supported by the EDI and IEC 62325 standards.

Table 4.6 Market data types in ICT standards and protocols

IEC 62325 includes data formats for, e.g. bids, market results, acknowledgements, and settlements. IEC 62351 is considered the reference standard for security in smart grid environments; it aims at improving the security of automation systems in the power system domain.
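For illustration, such a market document can be assembled as structured XML. The element names below are a simplified stand-in and do not reproduce the actual IEC 62325 schemas.

```python
# A simplified, hypothetical stand-in for a CIM-style market bid document.
# Python standard library only.
import xml.etree.ElementTree as ET

def make_bid_document(mrid: str, mw: float, price_eur_mwh: float) -> bytes:
    doc = ET.Element("BidDocument")  # hypothetical root element
    ET.SubElement(doc, "mRID").text = mrid
    series = ET.SubElement(doc, "BidTimeSeries")
    ET.SubElement(series, "quantity.MW").text = str(mw)
    ET.SubElement(series, "price.amount").text = str(price_eur_mwh)
    return ET.tostring(doc, encoding="utf-8")

print(make_bid_document("BID-001", 1.5, 42.0).decode())
```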

The standards proposed at the European level in pursuit of interoperability must be used whenever possible. They set high-level requirements for the design of the communication architecture. Proprietary solutions should be limited to cases where existing standards cannot be deployed, for example, when an end device in the process or field zone has too little computational capacity to run complex standardised protocols. In such cases, the use of de facto standards is justified. If a dedicated gateway component is used to convert information between standardised and de facto protocols, the use of a de facto standard has a low impact on overall interoperability.

Security is becoming increasingly important as remote control and system automation are extended to the edges of the grid. Encryption and digital certificates are already widely deployed in market platforms, but this does not diminish the relevance of considering integrity, availability, confidentiality, authentication, and non-repudiation aspects in communication. Improved security can also degrade system performance by increasing overall latency and the volume of exchanged data and by requiring significantly more processing capacity. The system design is often a compromise between performance, cost, and security.
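As an illustration of authentication and non-repudiation, the sketch below signs a hypothetical bid message with an Ed25519 key pair, using the third-party Python cryptography package, so that tampering is detected on verification.

```python
# pip install cryptography  (third-party dependency; assumed for this sketch)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hypothetical bid payload; in practice this would be a standardised document.
bid = b'{"mrid": "BID-001", "mw": 1.5, "price_eur_mwh": 42.0}'
signature = private_key.sign(bid)

public_key.verify(signature, bid)  # passes silently: message is authentic
try:
    public_key.verify(signature, bid + b"tampered")
except InvalidSignature:
    print("tampered message rejected")  # integrity violation detected
```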

In summary, existing data exchange protocols and standards offer a good foundation and suit the proposed coordination schemes well. The main question in the future will be more related to privacy and data ownership in the case of cross-border systems.

4.3.2 Process of Capturing ICT Requirements

In the SmartNet project [20], a process was developed to identify communication and ICT requirements in five TSO-DSO coordination schemes. The focus was on the ancillary services presented in Chap. 1: frequency control (FC), automatic and manual frequency restoration reserves (aFRR/mFRR), and voltage control (VC). Figure 4.10 shows the five distributed and centralised market models in rows and the investigated ancillary services in columns. Not all ancillary services are relevant to all coordination schemes; the excluded combinations are marked with red minus symbols in Fig. 4.10. For example, frequency control is not relevant in the local AS, shared balancing responsibility, and integrated flexibility market models, nor is voltage control in the centralised AS or integrated flexibility market models.

Fig. 4.10

Mapping of ancillary services and coordination schemes [17]

To discover ICT requirements in the different TSO-DSO coordination scenarios, the SGAM framework is applied together with the IEC 62559 and ELECTRA use case design methodologies and design templates [8]. The SmartNet project extended this process by adding ICT requirements to each layer of the SGAM model. The developed process is iterative and incremental: the iterative approach is used to refine requirements and to discover possible gaps in the design by studying revised specifications, and the incremental approach is used to extend the design step by step from the business layer down to the component layer.

The process is divided into three stages: (i) classification of ICT requirements, (ii) harmonisation of ICT requirements and creation of the common architecture model, and (iii) testing with a system realisation. These stages are presented as large white, blue-framed boxes in Fig. 4.11. The process involves four main iteration cycles (blue circles) that take the information in the green boxes as input and create the outcomes shown in the blue boxes. The main artefacts of the process are the SGAM architecture model for ancillary services with ICT recommendations, and ICT recommendations for system realisations.

Fig. 4.11

Analysis procedure used for capturing ICT requirements and specifying the architecture design [19]

The following subsections describe the details of each process stage.

4.3.2.1 Stage 1: Classification of ICT Requirements

This stage captures and prioritises ICT requirements by analysing the five TSO-DSO coordination schemes and the associated ancillary service use cases. The first stage focuses on the business and function layers. The ELECTRA use case template is used to identify business and system actors and their interactions. In the SmartNet project, the template was enhanced to also include functional ICT requirements related to [18]:

  • Communication technologies

  • Latency and security aspects

  • Bandwidth

  • Coverage

  • Scalability

  • Ownership

  • Terminal density

  • Interface flexibility

  • OPEX and CAPEX costs

  • Market characteristics

  • Reliability

  • Security (integrity, availability, confidentiality, authentication, non-repudiation)

  • Data and communication protocols

At this stage, the identified requirements are still generic and business/function driven, focusing mainly on market characteristics and interactions between business actors.

4.3.2.2 Stage 2: Harmonisation of ICT Requirements and Creation of the Common Architecture Model

The second stage refines the architecture design by extending the design and the ICT requirements to the information, network, and component layers. This involves complementing the functional requirements with data structures and protocols. At this stage, suitable data and security-related protocols, e.g. EDI and IEC 62325, are investigated. The design covers physical system components and their interactions. The corresponding ICT requirements are mapped to the following requirement categories, each with its own properties:

  • Security

  • Communication

  • Latency

  • Data protocol

  • Device

Categorisation of the ICT requirements is needed to ensure that their number remains manageable and that they can be aligned across all SGAM layers to form a common architecture model.

ICT systems evolve faster than energy systems, making it difficult to choose optimal ICT solutions with a long life expectancy for different parts of the energy system. As a result, system functions and their requirements will tend to change more frequently in the future. To cope with this, the SmartNet project developed a parametrised SGAM realisation with the Enterprise Architect (EA) architecture design tool. Parametrisation means that ICT requirements and their threshold values can be altered and their effects on the different coordination schemes analysed.

The system actors, business actors, and interacting systems covering all TSO-DSO coordination schemes are presented in Table 4.7. As stated earlier, the proposed coordination schemes have a lot of similarities from the ICT viewpoint. All coordination schemes are assumed to utilise the existing communications infrastructure; they differ mainly in the IT systems used for calculating market clearings and aggregations.

Table 4.7 System actors in TSO-DSO coordination schemes

Figures 4.12 and 4.13 show the information exchange between system components in the common TSO-DSO and local AS market models. In the figures, arrows show the direction of data flows. The thickness of a line indicates how many different types of messages are exchanged between the system components; the actual number of sent messages (the communication cost) depends on the system realisation and the information exchange interval, and the data exchange can be periodic or event driven. The colour of a line shows whether the communication link is external (black) or internal (blue). An internal link is considered more reliable and secure, since it is managed by a single actor. Figure 4.12 shows that the most critical system component in the common TSO-DSO market model is the market management system (MO MMS) operated by the common market operator, and that the majority of the interactions run over external communication links.

Fig. 4.12

Key interactions between system components in a common TSO-DSO market model

Fig. 4.13

Key interactions between energy system components in a local AS market model

In the local AS market model, the critical point is the local DSO's market management system (DSO MMS). There are more blue lines than in the common TSO-DSO model, indicating that more information can be transmitted over internal links. This improves the reliability and security of the overall system.

Requirements related to, e.g. latency, reliability, and security tend to change as energy systems, markets, and communication technologies evolve. Latency and security requirements are major factors in decisions between wireless and wired connections. The amounts of data exchanged in trading and resource control are not considered large, so speed is not a critical factor. Wireless connections are more cost-effective and flexible, but wired connections offer more speed and reliability.

Figure 4.14 shows an example where the model parametrisation is used to assess the deployment possibilities of wireless communication technologies. The threshold values for latency and security are set as:

$$ \mathrm{Latency} \le 100\ \mathrm{ms} \quad \mathrm{or} \quad \mathrm{Security\ level} \ge 4 \qquad (4.1) $$
Fig. 4.14

Communication types: wireless connections are shown in green, and wired connections are shown in black

If none of the messages exchanged between two system components meets these stricter criteria, a wireless option can be utilised for the communication link. The outcome is presented in Fig. 4.14, where wired connections (black) carry the more stringent latency and/or security requirements and wireless connections (green) the more relaxed ones. The diagram indicates that wireless connections could be deployed quite widely; wired connections are mainly needed for exchanging resource control information with SCADA systems at the TSO, DSO, aggregator, and DER levels. The sketch below shows the corresponding classification rule.
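The classification rule of Eq. (4.1) can be expressed as a short check over the messages of a link; the message attributes below are illustrative.

```python
# A link can be wireless only if none of its messages requires
# latency <= 100 ms or security level >= 4 (Eq. 4.1).
LATENCY_THRESHOLD_MS = 100
SECURITY_THRESHOLD = 4

def link_type(messages: list) -> str:
    strict = any(m["latency_ms"] <= LATENCY_THRESHOLD_MS or
                 m["security_level"] >= SECURITY_THRESHOLD
                 for m in messages)
    return "wired" if strict else "wireless"

market_link = [{"latency_ms": 5000, "security_level": 3}]  # relaxed
scada_link = [{"latency_ms": 50, "security_level": 5}]     # stringent
print(link_type(market_link))  # wireless
print(link_type(scada_link))   # wired
```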

The advantage of a parametrised model is that ICT requirements and threshold values can be altered and their effects on the different TSO-DSO coordination schemes compared. However, it is important to keep in mind that the outcome of the analysis depends on how precisely the ICT requirements and threshold values can be defined for each connection in the target system. Therefore, ICT requirements need to be analysed systematically in all SGAM layers.

4.3.2.3 Stage 3: Testing with a System Realisation

The last stage of the analysis process is to utilise the system design in a real implementation. In the SmartNet project, one of the target systems was the Danish pilot described in detail in Chap. 6. This pilot used summer houses with swimming pools for the provision of ancillary services: the electrical load of the swimming pool water heaters is used for balancing. The pilot's coordination scheme is a mix of the "shared balancing responsibility" and "common TSO-DSO" AS market models.

The following four analysis steps were taken (Fig. 4.15):

  1. Defining the sequence of core information exchanges between pilot components, utilising the conceptual reference model.

  2. Selecting ICT requirements for each information exchange event from the ICT requirements catalogue generated during the overall system design.

  3. Configuring the selected ICT requirements to match the pilot-specific requirements.

  4. Creating a parametrised SGAM realisation to see which system components are needed and which information exchange links could be implemented over wireless technologies.

Fig. 4.15

A graphical representation of the analysis steps for the Danish pilot

Although the chart in step 4 presents only a few links between components (wireless connections in green and wired connections in black), the created digital representation includes different types of diagrams with ICT requirements covering all SGAM layers. This provided additional design support for the pilot system realisation described in Chap. 6.

4.4 Conclusion

This chapter has presented the importance of ICT in future energy systems. ICT requirements need to be identified all the way from business use cases down to system components. From the ICT viewpoint, the five presented coordination schemes have a lot of similarities, and existing data exchange protocols and standards, e.g. EDI and IEC 62325, form a good foundation for future energy systems. To enable cross-border interoperability, common data and protocol standards should be used whenever possible.

Wireless technologies are expected to play a bigger role in future systems and could be utilised especially at the edges of the grid thanks to their cost-efficiency and flexibility. This chapter has also presented enabling technologies that offer new alternatives for communications. Choosing the optimal communication solution, however, depends on several factors, e.g. regulation, business and market models, existing infrastructure, end-user requirements, and investment and operation costs.

The selected service architecture has a significant impact on an energy system's interoperability, flexibility, and security. Conventional ESB solutions are applicable in the core parts of the system, e.g. at the TSO-DSO level. SOA Gateways offer additional security by making it possible to separate the secure and insecure parts of the network, and microservices can be a cost-effective way of building services for DER-level trading. In addition, global data providers, e.g. Amazon and Google, are potential new stakeholders for data provision.

The analysis revealed that aggregators have a central role in the new market models. Security threats are highest at the edges of the grid, where investments in communication quality and security are lowest. Security aspects are adequately and systematically planned and implemented by DSOs, TSOs, and large aggregators, but regulatory support may be needed for small DER owners and aggregators, which may lack the competence or capital to invest in the equipment and software needed to secure their communication links and data.

Finally, the chapter presented a process for capturing ICT requirements and showed how a parametrised SGAM model can support system design and implementation. The outcome, however, depends on how well the ICT requirements of the system can be identified in all SGAM layers and on how much technological, economic, and regulatory uncertainty there is.