1 Networks and Storage: An Introduction

The traditional view of electric power systems suggested that power cannot be efficiently stored and that the partially controllable supply and exogenous demand have to be matched at all times. The networks in electric-power systems are traditionally divided into the high-voltage “transmission” system and the lower-voltage “distribution” system. The transformers connecting the low-voltage distribution system to the high-voltage transmission system are often in so-called distribution substations. From the point of view of the distribution system, the substation provides a source of electric power, and traditionally the only source. From the point of view of a transmission system, the aggregate demand of customers (in the traditionally unobservable low-voltage distribution system) is “revealed” as voltage at the substations. The voltage in the transmission system then drives the power flow within the transmission system, as well as the generation of power at the (so-called synchronous) generators. This view is now being challenged by the increasing availability of demand response management, distributed generation, storage, and generators that cannot easily respond to voltage changes by changing power output. Still, electric power transmission is of paramount importance, and its physical shape is changing only very slowly.

1.1 The Physical Reality

Let us elaborate upon the physical reality of power transmission. Electric power transmission can be implemented using a variety of means, the most common being overhead power lines. In overhead power lines, pylons are connected by (most often) multiple high-tension lines, each of which is typically made of aluminium wires wrapped around a steel core. There may be additional sensors along the line, such as fiber optics for capturing the temperature gradient, or sensors measuring the magnetic field induced by the current flowing along the line. Overhead power lines typically carry alternating current (AC), as explained below.

As an alternative to overhead power lines, one can use underground and submarine power transmission, albeit at much higher investment costs and sometimes higher operational costs. The investment costs are higher due to the need for excavation, insulation, and power electronics. Underground or under sea, high-voltage (HV) cables require considerable amounts of insulation (often based on pressurised oil or polyethylene). Because alternating current allows only for very short cables (under 50 km), due to their high capacitance, one often uses high-voltage direct current (HVDC), which requires considerable investment in (and losses at) the power electronics involved in the AC-DC and DC-AC conversion (known as rectifiers and inverters, respectively). While it remains unclear how quickly HVDC cables will spread beyond the situations where they are already deployed (cf. the underground power transmission and distribution in Denmark, the Baltic Cable between Germany and Sweden, and the NorNed cable between Norway and the Netherlands), it is clear that in many countries one already needs to model both AC and DC transmission, as well as the related power electronics.

One connects multiple power lines at so-called substations. At their most basic, these can house a large slab of metal (e.g., copper), called a “bus”, onto which the power lines are physically attached and which equalises the voltage of the connected ends of the connected power lines. More often, one connects multiple transmission lines to multiple coils (so-called windings) of a transformer. The transformer can step up or step down the voltage, in discrete steps, depending on how the windings are connected. There can be step-up transformers connecting a power station to the high-voltage transmission network (e.g., 400 kV), step-down transformers within the transmission network (e.g., to 220 kV and 110 kV), and then there are step-down transformers connecting the high-voltage transmission system to the distribution system (e.g., 55 kV). Traditionally, the settings of the transformer have been limited to the choice of the step (tap) and fixed for substantial periods of time, although this is changing (cf. FACTS devices below). Such a transformer would still often be referred to as a bus in an abstract view of buses connected by branches.

1.2 Models of Electric Power

Let us now elaborate upon the models of electric power. Although in general the current and voltage are arbitrary signals in both alternating- and direct-current systems, and one could hence use signal processing throughout, one often assumes harmonic currents and voltages to simplify the modelling of real-world power systems. There, voltage and current are sine waves, each with a magnitude, an angular frequency ω, and a phase φ. (Notice that one can use the Fourier transform to approximate any signal by a sum of sinusoids.) Then, the integral defining the average power transmitted has a closed-form solution: it equals the product of the root-mean-square voltage, the root-mean-square current, and the cosine of the phase difference, as made explicit below. Together with the usual relationships of:

  • ohmic heating (i.e., losses equal to the product of the resistance and the square of the current),

  • Kirchhoff’s current law (i.e., the sum of currents flowing into a bus equals the sum of currents flowing out of it),

one can formulate a variety of mathematical models for the harmonic currents, all of which are non-convex.
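To make the closed form referred to above explicit, consider the standard sinusoidal steady state with $v(t) = V_m \cos(\omega t)$ and $i(t) = I_m \cos(\omega t - \varphi)$:

\[
P_{\mathrm{avg}} = \frac{1}{T}\int_0^{T} v(t)\, i(t)\, \mathrm{d}t
= \frac{V_m I_m}{2}\cos\varphi
= V_{\mathrm{rms}}\, I_{\mathrm{rms}} \cos\varphi,
\qquad
P_{\mathrm{loss}} = R\, I_{\mathrm{rms}}^2,
\]

where $T = 2\pi/\omega$ is the period, $V_{\mathrm{rms}} = V_m/\sqrt{2}$, $I_{\mathrm{rms}} = I_m/\sqrt{2}$, and $R$ is the resistance of the conductor responsible for the ohmic heating above.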

A key choice in formulating a mathematical model of harmonic currents is the choice of which sine waves to represent, and the choice between their polar and rectangular representation. Generally speaking, using the rectangular representation, one can often derive a polynomial optimisation problem (POP), while using the polar representation, one obtains a problem with trigonometric constraints. In some cases, it may also be beneficial to combine both representations, especially when one considers piece-wise linearisations. The key choices studied so far include:

  • polar power and polar voltage [69, 70, e.g.], where power generated at generators and all voltages (except for a reference bus) are employed

  • rectangular power and polar voltage [294, e.g.]

  • rectangular power and rectangular voltage [168, 262, e.g.], where power generated at generators and all voltages (except for a reference bus) are employed

  • rectangular current and rectangular voltage [294, 335, e.g.], where currents and voltages are represented

  • rectangular current injection.

We refer to [64] for a survey of the history of these formulations. We note that one may take a problem with trigonometric constraints and derive a polynomial optimisation problem [74] using substitutions and one of several well-known trigonometric identities, as illustrated below. This way, one obtains many further polynomial optimisation formulations.
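As a minimal illustration of these choices, consider a single bus voltage. In the polar representation, $V = |V|\,e^{j\theta}$; substituting the rectangular coordinates $e = |V|\cos\theta$ and $f = |V|\sin\theta$, together with the identity $\cos^2\theta + \sin^2\theta = 1$, turns trigonometric constraints into polynomial ones:

\[
V = |V|\,e^{j\theta} = e + jf, \qquad e^2 + f^2 = |V|^2,
\]

so that, for example, a bound $\underline{V} \le |V| \le \overline{V}$ becomes the polynomial constraint $\underline{V}^2 \le e^2 + f^2 \le \overline{V}^2$, and the terms $|V_i||V_k|\cos(\theta_i - \theta_k)$ and $|V_i||V_k|\sin(\theta_i - \theta_k)$ appearing in the power-flow equations become $e_i e_k + f_i f_k$ and $f_i e_k - e_i f_k$, respectively.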

1.3 Approximations

Considering the non-convexity, one often utilises approximations of a widely varying quality and widely varying shape, depending on the choices above. The simplest approaches proceed as if the problem were convex, although it clearly is not, and apply gradient methods or Newton's method directly to the non-convex problem. In this case, convergence guarantees can be obtained only for starting points within the vicinity of a local optimum; recently, it has been shown that whether one is close enough is actually testable [270].

Without a starting point in the vicinity of a local optimum, one often considers either convex relaxations or mixed-integer convex approximations (e.g., piece-wise linearisations), as in much of optimisation and control. In the simplest relaxation, known as the Direct Current (DC) model (but confusingly applied to AC systems or systems combining AC and DC transmission), the network structure is taken into account, including the capacity of the transmission links, but a simplified version of the Kirchhoff laws is used, so that the corresponding constraints become linear. In more sophisticated convex relaxations, one uses semidefinite programming (SDP) and second-order cone programming (SOCP). One should like to note that such sophisticated convexifications [168, 279, e.g.] can be made arbitrarily strong, i.e., with solutions arbitrarily close to the solution of the non-convex problem, albeit at a major expense of computational power. Within mixed-integer convex approximations, one often considers piece-wise linearisations.
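To make the “DC” simplification concrete: assuming voltage magnitudes close to 1 p.u., small angle differences, and negligible line resistance, the active power flow on a branch between buses $i$ and $k$ with reactance $x_{ik}$ becomes a linear function of the voltage angles, and the nodal balance becomes a linear, network-flow-like constraint:

\[
p_{ik} \approx \frac{\theta_i - \theta_k}{x_{ik}}, \qquad
\sum_{k\,:\,(i,k)\in\mathcal{L}} p_{ik} = g_i - d_i \;\;\text{for every bus } i, \qquad
|p_{ik}| \le \overline{p}_{ik},
\]

where $g_i$ and $d_i$ denote the generation and demand at bus $i$, $\mathcal{L}$ is the set of branches, and $\overline{p}_{ik}$ is the branch capacity.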

The most common convexifications include, depending on the choice of the variables:

  • rectangular power and polar voltage can be piece-wise linearised in either an inexact but well-performing fashion, or an asymptotically exact but rather less well-performing fashion

  • rectangular power and voltage yields very strong semidefinite-programming relaxations, and convergent hierarchies of semidefinite-programming relaxations

  • rectangular current and voltage, which produces weaker convex relaxations, but may be suitable for use in optimal transmission switching and network expansion planning, where the current may be set to zero without considering a high-degree polynomial

  • rectangular current injection, which may again be suitable for the use in optimal transmission switching and network expansion planning, whenever the degree of a polynomial is less of a concern than the dimension of the system.

It should be noted that the convex and piece-wise convex approximations are an active area of research, and many of the rules of thumb above may yet be invalidated. Finally, one sometimes uses the so-called “transportation models”, where network flows of units of energy are considered.

Let us now consider the time scale for the application of the approximation. Clearly, the changes to demand and (consequently) voltage are continuous. Some changes of limits on the power output may be continuous (e.g., wind power at low winds), while others may be discontinuous (e.g., where there is no momentum, such as when a wind turbine gets disconnected due to high winds). As in much of optimisation and control, one often considers a discretisation of time and classical batch-optimization algorithms that compute optimal operations based on a fully-specified input, valid at one point in time. One should note, however, that with the increasing volatility, this may prove inadequate. Novel algorithms that capture the inherent time-varying nature of the problem and leverage on-line optimization techniques as well as insights from control theory [39, 40, 87, 88, 192, 394, 409, 457] show a certain promise. One should like to point out that they present only initial approaches to an otherwise very open problem, considering their use of crude convex approximations of the non-convex problem. The use of on-line non-convex optimisation [271] is nascent.

1.4 Looking Beyond

Going beyond the traditional view of power systems, energy storage [108] is a very active area of research within electrical engineering and materials engineering. Current large-scale implementations are based on pumped hydroelectric energy storage (PHES), which provides close to 40 GW of capacity in Europe and a similar capacity in the United States. Pilot projects involve lithium-ion batteries (cf. deployments in New England and Australia), lead batteries, sodium batteries, (super)capacitors, pumped-storage underwater reservoirs, spinning rotary machinery (fly-wheels), compressed air, heavy-goods trains pushed uphill, cranes lifting weights in the air or in a mine shaft, and many other suggestions. It should be noted that the reach of pumped hydro is limited to areas with the appropriate physical geography, while the pilot projects have not yet shown a system that would be clearly commercially feasible to operate at scale. See Sect. 6. It is hence not clear what shape and form energy storage will eventually take.

Finally, in demand response management (DRM), one hopes to “emulate” energy storage by incentivising customers to amend their consumption in real time. We refer to Sect. 1 above. Although the first related policies were proposed decades ago, large-scale deployments are still rather limited to, e.g., deferrable loads in industrial refrigeration. Still, DRM excites many, due to its zero losses, and hence costs bounded from below only by zero. One can construe a “virtual power plant” (VPP) as being formed this way.

2 An Overview of Network-Constrained Optimization Problems

At a high level, network-constrained problems of electric power systems can be characterised by the market environment they consider and by the time horizon. In vertically integrated systems, the strategic management of the electrical network is performed in an integrated fashion by the monopolist, whereas in market-based ones, the responsibilities are split between the operators of the generating capacity (GenCos), the operators of the high-voltage transmission system (TSOs), the operators of the lower-voltage distribution system (DSOs), and possibly market operators and regulators. Long-term planning problems include:

  • Network Expansion Problems: Expand the networks by constructing new branches and possibly removing old ones. Additionally, the decision of installing network technologies, together with their siting, can be considered in the expansion and reinforcement process.

  • Energy Storage System (ESS) Siting and Sizing: Deciding the location and the size of an ESS, e.g. [320].

  • Smart Grid Design: The actual design of a smart grid includes the siting and sizing of technologies that could enhance the observability and controllability of the system and include: Phasor Measurement Units (PMUs), Wide-Area Measurement Systems (WAMS), and notably Flexible Alternating Current Transmission Systems (FACTS).

As a sub-problem of a long-term problem, or independently over a shorter time horizon, one considers a variety of operations problems:

  • Load Flow (LF): LF is actually not an optimization problem, but rather a calculation of the power flowing along an electrical network, once the generation schedule and the load at the substations have been fixed. While not an optimization problem, it gives evidence on the network’s operating points under different conditions. LF can also be integrated in “what if” analyses.

  • Optimal Power Flow (OPF): The OPF problem deals with the continuous-valued decisions within the optimization of the generating cost and the operations of renewable energy sources (esp. hydropower), considering the electricity grid. In considering the grid, OPF takes into account the non-linear Kirchhoff laws and the restrictions on the power flow on each branch (transmission line) and on the voltage angles. Typically, the generation cost optimization is performed with the status of all units (on or off) fixed to a feasible status found otherwise. Similarly to the LF, OPF can also be used as a what-if analysis tool.

  • Security Constrained (SC) Problems: Integrated problems, wherein one wants to consider a detailed set of constraints modelling the reliability of the power plants and of the grid, as well as the physics of the grid itself. Typically, the goal is to find a least-cost schedule of production and flows that is also resistant to the unpredictable failure of any one of the components (power plant, network branch, etc.). The n-1 security problem refers to a single fault. From a methodological standpoint one could consider n-k models with k faults, and some models in this direction have been presented. In practice, TSOs tend to decouple OPF or unit commitment from n-k models, solving the latter problem by adding security requirements to an already quasi-fixed solution from SCUC [44].

In the following sections, we introduce these problems in turn.

One should like to note that the two horizons are not disconnected. It is very important to consider the operation and scheduling of generation and storage units already at the design phase, to determine the most convenient combination of technology selection and size. This is especially true when dealing with the sizing of energy storage. Long-term storage systems have recently caught much attention due to their ability to compensate the seasonal intermittency of renewable energy sources. However, compensating renewable fluctuations at the seasonal scale is particularly challenging: on the one hand, only a few systems, such as hydro storage, hydrogen storage, and large thermal storage, can be used for this purpose; on the other hand, the optimization problem is complicated by the different periodicities of the involved operation cycles, i.e., from daily to yearly. This implies long time horizons with fine resolution, which, in turn, translates into very large optimization problems. Furthermore, such systems often require the integration of different energy carriers, including electricity, heat, and water. Exploiting the interaction between different energy infrastructures, in the so-called multi-energy systems (MES), makes it possible to improve the technical, economic, and environmental performance of the overall system [290].

To consider another such integrated problem, consider the discrete decisions (the so-called unit commitment problem) as a sub-problem at the design phase, which implies taking into account the expected profiles of electricity and fuel prices, weather conditions, and electricity and thermal demands along entire years. Moreover, the technical features of conversion and storage units should be accurately described. The resulting optimization problem can be described through a mixed-integer nonlinear program (MINLP), which is often simplified into a mixed-integer linear program (MILP) due to the global optimality guarantees and the effectiveness of the available commercial solvers (e.g., CPLEX, Gurobi, Mosek, etc.). In this context, integer variables are generally implemented to describe the number of installed units of a given technology, whereas binary variables are typically used to describe the on/off status of a certain technology. Furthermore, decomposition approaches relying on heuristic algorithms for unit selection and sizing have been proposed. A comprehensive review of MINLP, MILP, and decomposition approaches for the design of MES including storage technologies has been carried out by Elsido et al. [112]. However, independently of the implemented approach, significant model simplifications are required to maintain the tractability of the problem. Such simplifications include limiting the number of considered technologies, restricting technology installation to a subset of locations, analysing entire years based on seasonal design days or weeks, or aggregating the hours of each day into a few periods. Such integrated problems are a major direction for future research.

3 Problems of Network Expansion Planning

Network expansion planning (NEP) is one of the main strategic decisions in power systems and has a deep, long-lasting impact on the operations of the system. Relatively recent developments in power systems, such as renewable integration or regional planning, have considerably increased the complexity and relevance of this problem.

NEP has multiple criteria, albeit frequently combined into a single objective function, perhaps by considering the costs of the multiple criteria in a single monetary objective. The main criteria are usually: costs, environmental impact, market integration, and certain “exogenous” factors. Costs are measured by attributes such as the investment and operating costs of the transmission decisions, but also the operating costs of the system. In the cost criterion, one can also consider reliability. Environmental impact is determined by attributes such as the amount of renewable integration or curtailment avoided at the system level and the impact of the line construction. Market integration is accounted as the number of hours of market splitting. Social acceptance is an exogenous criterion; nowadays, it is a major concern of the planning process and the cause of many delays.

Among the current challenges to be addressed in network expansion planning, we can mention the following:

  • coordination with generation expansion planning (GEP), as discussed previously. On the one hand, GEP is a deregulated business activity, while NEP is mostly regulated. On the other hand, generation investments can take around 3 years, while network expansion needs to be anticipated over longer periods.

  • renewable integration is one of the major drivers for investing in new transmission lines. Onshore and offshore wind power, and solar generation, are renewable technologies currently being developed at large scale to meet the low-carbon electricity generation targets. A large part of this generation is located in remote areas far from the load centers, requiring transmission reinforcements or new connections. Besides, the intermittent nature of these renewables introduces operational challenges and, from the network planning point of view, many varied operating situations should be considered.

  • market integration is the current paradigm to achieve a competitive, sustainable, and reliable electric system, and the network is a facilitator in this process. The creation of a European internal market with strong enough interconnection capacity among the member states increases the scope of the planning process, from a national activity to a European scale.

A variety of mathematical optimization techniques are used for solving the network expansion planning problem.

Classical methods include linear, nonlinear, and mixed-integer programming methods. Linear optimization ignores the discrete nature of the investment decisions, but it can still be useful if the system is too large to be solved with discrete variables, or if a relaxed solution is good enough. A transportation or a direct-current (DC) load flow fits in this linear formulation. Nonlinear, in particular quadratic, models appear as a way to represent transmission losses. Finally, mixed-integer optimization allows considering the integer nature of the decisions. If stochasticity in some parameters is included, then models become stochastic and, therefore, decomposition techniques should be used for large-scale systems. Among them, Benders decomposition, Lagrangian relaxation, and column generation are frequently used.

There is a vast array of academic literature on the subject. References [201, 261, 276] provide a good starting point. In the following two sections, we point to original research on transmission (Sect. 4) and distribution (Sect. 5) network expansion planning.

4 Transmission Network Expansion Planning (TNEP)

Transmission network expansion concerns the expansion of the high-voltage part of the network. Market integration is the current paradigm to achieve a competitive, sustainable and reliable electric system and the network is a facilitator in this process. This is particularly true in the case of the EU. These challenges have been addressed in a vast array of both projects and papers. Thorough reviews of the academic literature on this topic can be found in [364, 387, 388].

A wide variety of models and the corresponding mathematical optimization techniques are used in solving the network expansion planning. Initially, models can be classified as either linear, non-linear, mixed-integer linear (MILP), or mixed-integer non-linear (MINLP). Linear models, often based on transportation or direct-current (DC) load flow, ignore the discrete nature of the investment decisions, but can be useful as an approximation. Nonlinear models, often quadratic, represent transmission losses, but still usually ignore the discrete nature of the investment decisions. MILP approaches allow for the integer nature of the investment decisions to be considered, but are restricted to an approximation of the non-linearity, either using piece-wise linearisations or linearisations including the DC and transportation load flow. If stochasticity in some of the parameters is considered, then models may become challenging to solve, and decomposition techniques are often used for large-scale instances. Finally, one may consider the full MINLP model: there, both the discrete and non-linear features of the problem are modelled faithfully, but the problem is challenging.

Further, one may consider a wide variety of objectives, although a single objective function is often obtained by combining the multiple criteria into one, e.g., by considering the monetary costs associated with each criterion and minimising the total monetary costs across all criteria. The main criteria are usually: investment costs, costs of operations (OPEX), reliability issues, environmental impact, market integration factors, and, rarely, other factors. While investment costs are often relatively straightforward to estimate, the associated operational expenses may be harder to estimate, especially considering the long planning horizon often considered. Similarly, the impact on reliability is often modelled only very approximately. Environmental impact is often evaluated in terms of the amounts of renewable integration made possible, or curtailment avoided at the system level, in response to the line construction. Market integration is accounted as the number of hours of market splitting. When the monetary costs of such criteria cannot be approximated, metaheuristic approaches may provide a sample of the feasible solutions, without any guarantees of their distance to optimality.

Considering that GEP is a deregulated business activity, while NEP is mostly regulated at both national and super-national levels, one may also introduce market considerations explicitly. For example, one may consider an equilibrium in a pool-based market at one level, possibly including spot prices, and the transmission and generation expansion at another level. Such bi-level and multi-level models have been attempted, but often increase the complexity to a point where real-life applicability is limited, considering the extent of many markets: many super-national markets are already in operation, and the eventual creation of a single European internal market with strong-enough interconnection capacity among the member states, for instance, increases the scope and complexity of the planning process.

Further, one may attempt to solve a problem combining the expansion of transmission (NEP) with the expansion of generation (GEP). Clearly, generation expansion has bearing upon network expansion, and vice versa. In particular, renewable integration is one of the major drivers for investing in new transmission lines. Onshore and offshore wind power, and solar generation are renewable technologies currently being developed at large scale to meet the low-carbon electricity generation targets. A large part of this generation is located in remote areas far from the load centres, and hence requires transmission reinforcements or new connections. Besides, the intermittent nature of these renewables introduces operational challenges and, from the network planning point of view, many varied operation situations should be considered. In such integrated problems, the size of the instances grows.

Within linear models, such as transportation or direct-current (DC) load-flow, general-purpose linear programming optimisation software is often used, based either on simplex or interior-point (barrier) methods. Often, it turns out to be challenging to devise a problem-specific method, whose performance improves upon the general-purpose methods. Still, in case of particularly large-scale instances, problem-specific decompositions such as column generation are used.

Within nonlinear models, often quadratic, a wide variety of methods is used, considering the limitations of the general-purpose non-linear programming optimisation software. Since 1990, interior-point methods have been most popular. First-order methods, including gradient and coordinate descent, and their stochastic variants, had been used prior to this and also very recently, inspired by their resurgence within machine learning.

Within MILP models, there has been much recent progress in general-purpose optimisation software based on branch-and-bound-and-cut. Often, modest instances, considering either piece-wise linearisations or uncertainty, can be solved exactly using the general-purpose software; a toy example follows.
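As a minimal sketch of such a MILP (not taken from the surveyed references), the code below formulates a transportation-model variant of network expansion: binary variables decide which candidate lines to build, and flows are bounded by the capacities of the existing and the built lines. The three-bus data, the costs, and the use of the PuLP modelling library (with its bundled CBC solver) are illustrative assumptions only.

```python
# Toy transportation-model network expansion: decide which candidate lines to
# build so that demand can be met at minimum investment-plus-generation cost.
# Assumes the PuLP library (with its bundled CBC solver); all data is illustrative.
import pulp

buses = [0, 1, 2]
demand = {0: 0.0, 1: 0.0, 2: 1.5}           # loads (per unit)
gen_cap = {0: 2.0, 1: 1.0, 2: 0.0}          # generator capacities
gen_cost = {0: 10.0, 1: 30.0, 2: 0.0}       # generation costs per unit of energy

existing = {(0, 1): 1.0}                    # existing lines and their capacities
candidate = {(1, 2): 1.0, (0, 2): 1.0}      # candidate lines and their capacities
build_cost = {(1, 2): 50.0, (0, 2): 80.0}   # annualised investment costs

m = pulp.LpProblem("toy_network_expansion", pulp.LpMinimize)
g = {i: pulp.LpVariable(f"g_{i}", 0, gen_cap[i]) for i in buses}
build = {l: pulp.LpVariable(f"build_{l[0]}_{l[1]}", cat="Binary") for l in candidate}
lines = {**existing, **candidate}
# flows may be negative (flow from the second bus to the first)
f = {l: pulp.LpVariable(f"f_{l[0]}_{l[1]}", -lines[l], lines[l]) for l in lines}

# the capacity of a candidate line is available only if the line is built
for l in candidate:
    m += f[l] <= candidate[l] * build[l]
    m += f[l] >= -candidate[l] * build[l]

# nodal balance: generation plus net inflow equals demand at every bus
for i in buses:
    inflow = pulp.lpSum(f[l] for l in lines if l[1] == i)
    outflow = pulp.lpSum(f[l] for l in lines if l[0] == i)
    m += g[i] + inflow - outflow == demand[i]

# objective: generation costs plus investment costs of the built lines
m += (pulp.lpSum(gen_cost[i] * g[i] for i in buses)
      + pulp.lpSum(build_cost[l] * build[l] for l in candidate))

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({l: int(build[l].value()) for l in candidate}, "cost:", pulp.value(m.objective))
```

A DC load-flow variant would add angle variables and couple them to the flows on the built lines through big-M constraints, as sketched for transmission switching in Sect. 9.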

Decompositions, such as Benders decomposition, Lagrangian relaxation or column generation are frequently used.

Within MINLP models, the methods are an active area of research, considering the limitations of the general-purpose non-linear programming optimisation software. Marecek et al. [294] survey three convergent approaches, based on piece-wise linearisation of certain higher-dimensional surfaces, on the method of moments, and on combining lifting and branching. The preliminary conclusion is that combining lifting and branching may be the most promising.

We refer to [201, 239, 261, 276, 364, 387, 388] for detailed surveys. See [294] for the impact of the choice of model (AC vs. PWL vs. DC), [389] for an illustration of the impact of security of transmission constraints, and [423] for an example of the impact of the uncertainty.

Software

Within two-stage approaches, there is a long tradition of work on decomposition methods [30, 344], although even a monolithic scenario expansion may be tractable [273, 383, 410], when AC and security of transmission constraints are ignored and the model of the network [383] is sufficiently coarse. The incorporation of market considerations [348, 364] complicates matters considerably. Within multi-stage approaches, there are very well-developed decompositions [5].

5 Distribution Network Expansion Planning (DNEP)

Network capacities were designed with a wide safety margin, so for a long time expansion planning in electrical energy systems was concentrated on generation expansion planning (GEP), with the goal of covering cumulative demand uncertainty based on averaged historic demand data, in a monopolistic environment for energy transmission. These were modelled as stochastic optimization problems with a one-dimensional demand distribution, represented by two-stage or multi-stage scenario trees generated by Monte Carlo methods. The models went to the limit of computational possibilities at any point in time and included binary decision variables; with a risk-neutral approach, only expected values over the scenarios were considered in the objective function along the time horizon. Very limited use was made of risk-averse measures.

In order to solve the large-scale problems, decomposition methods played a central role, in particular the following methodologies (a minimal sketch of a two-stage Benders loop on a toy problem follows the list):

  • Two-stage Benders Decomposition (BD) for linear problems [37]. See [24, 275, 424], among many others.

  • Multistage Benders Decomposition (BD) methodology for linear problems. See [45] among others.

  • Two-stage Lagrangian Decomposition (LD) heuristic methodology. See [68, 115, 118, 172, 173, 267, 330, 332], among many others. See also [405, 421] for two surveys on the state of the art of two-stage stochastic unit commitment and the use of LD with bundle methods. See also [113, 371, 422] for two-stage LD approaches with bundle methods applied to energy problems.

  • Multistage Clustering Lagrangian Decomposition (MCLD) heuristic methodology. See [121, 126, 128, 287], among others.

  • Regularization methods. See [26, 267, 317, 368, 369, 386], among others.

  • Progressive Hedging algorithm (PHA) for multistage primal decomposition. See [363, 438], among others.

  • Nested Stochastic Decomposition (NSD). See [8, 86, 122, 127, 181, 246, 323, 345, 346, 367, 391, 463], among others.

  • Multistage cluster primal decomposition. See [9, 10, 17, 34, 126, 287, 339, 377, 448], among others.

  • Parallelized decomposition algorithms. See [8, 9, 10, 16, 26, 38, 269, 317, 339, 367, 377, 448], among others.
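To make the flavour of these decompositions concrete, the sketch below implements a minimal L-shaped (two-stage Benders) loop for a toy capacity-expansion problem under demand uncertainty, using scipy. The data, the closed-form scenario duals, and the stopping tolerance are illustrative assumptions only; real applications replace the toy subproblem by scenario LPs (or MILPs) and far richer first-stage decisions.

```python
# Minimal L-shaped (two-stage Benders) sketch: first-stage capacity x, and a
# second-stage penalty q per unit of unserved demand in each scenario.
import numpy as np
from scipy.optimize import linprog

c_inv, q, x_max = 4.0, 10.0, 20.0               # investment cost, penalty, bound
demands = np.array([5.0, 10.0, 15.0])           # scenario demands
probs = np.array([0.3, 0.4, 0.3])               # scenario probabilities

cuts_A, cuts_b = [], []                         # optimality cuts: -B*x - theta <= -A
for it in range(20):
    # master over (x, theta): min c_inv*x + theta subject to accumulated cuts
    A_ub = np.array(cuts_A) if cuts_A else None
    b_ub = np.array(cuts_b) if cuts_b else None
    res = linprog(c=[c_inv, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, x_max), (0, None)], method="highs")
    x_k, theta_k = res.x

    # scenario subproblems: Q(x, s) = min q*y s.t. y >= d_s - x, y >= 0,
    # whose optimal dual is lambda_s = q if d_s > x, and 0 otherwise
    lam = np.where(demands > x_k, q, 0.0)
    expected_Q = float(probs @ (lam * np.maximum(demands - x_k, 0.0)))
    if expected_Q <= theta_k + 1e-6:            # theta matches the recourse value
        break
    B = float(probs @ lam)                      # cut: theta >= A - B*x
    A = float(probs @ (lam * demands))
    cuts_A.append([-B, -1.0])
    cuts_b.append(-A)

print(f"capacity x = {x_k:.2f}, total expected cost = {c_inv * x_k + expected_Q:.2f}")
```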

Today, new power production possibilities, technological developments, and deregulation bring along several new sources of uncertainty with highly differing levels of variability. In addition to traditional demand, these are foremost dependencies on wind, market prices, mobile electricity consumers like cars, power exchanges at the international level, local energy producers at the distribution network level and, to a lesser extent, solar radiation. This introduces complex and volatile load and demand structures that pose a severe challenge for strategic planning in production and transmission and, on a shorter time scale, in distribution. Networks may now be equipped with new infrastructure like Phasor Measurement Units (PMUs) and other information technology in order to improve their cost efficiency. At the same time, these upgraded networks should ensure high standards of reliability in their daily use and resilience against natural or human-caused disasters. Companies now have teams devoted to the task of generating suitable planning data.

In optimization models, the emphasis has shifted to high-dimensional stochastic data and to considering risk reduction measures instead of expected values. Computationally, integrated models considering all relevant aspects are out of reach. Even for simplified models it is often difficult, or not known how, to provide stochastic data of sufficient quality [399]. Alternatives are then:

  • robust optimization, where distributions are replaced by “easier” uncertainty sets [36],

  • methods where uncertainties are replaced by a kind of interval arithmetic equipped with scenario-dependent probabilities [447],

  • stochastic approaches, where some input data follow probability density functions and some can be represented by fuzzy membership functions [397],

  • information-gap decision theory, which aims at hedging against information errors [35, 355].

Methods for solving these stochastic optimization problems with binary decision variables employ the same decomposition approaches listed above, but much more care needs to be devoted to the properties of the decomposition. For risk-averse measures in multistage models, methods are distinguished regarding their “time consistency” or “time inconsistency”. So far, stochastic dynamic programming approaches are the most suitable ones for dealing with the time consistency property of risk measures, so that the original stochastic problem may be decomposed more easily via scenario clustering and cluster-dependent risk levels.

In power generation optimization models for big companies, the following issues are of relevance, mainly addressed in the context of market competition:

  • when and where to install how much new production capacity, mainly considering wind generators and thermal plants.

  • how to extend or renew hydro plants and where to install what pumping capacities. Today, solar power is typically handled at the level of distribution networks.

In contrast, competition is not an issue for transmission and distribution network operators. Regulations on efficiency, reliability and resilience levels are the driving force in the following problems:

  • when and where to install how much network capacity and information equipment,

  • reducing transmission losses,

  • reducing distribution losses (both reducing technical losses and detecting non-technical ones).

Challenges today and for the future comprise:

  • The robust approach allows for safe optimization with uncertain data. What information can be extracted from these robust solutions, e.g., on which additional data would be needed to improve the quality of the model?

  • Several risk averse measures have been proposed, each with its advantages and disadvantages. How to make use of them in the best way?

  • How to deal with endogenous uncertainty, i.e., with optimizing big player decisions that influence the probability distributions that are optimized over?

  • How to construct hierarchical decomposition approaches in a consistent way?

  • How to make use of high-performance computing (HPC, multi-core or Distributed) in decomposition approaches?

  • How to integrate chance constraints (ICC), e.g., with respect to reliability or resilience?

General goals for future models include: increasing the level of integration; bringing models closer to reality by avoiding the excessive linearization of nonlinear aspects; reducing the gap between methods used in academia and those applied in practice; making use of new monitoring devices and communication systems; exploring the chances of cooperation between electric and other energy commodity systems.

On the software side, general-purpose stochastic optimization software still seems far away. Planning models are highly problem-dependent and off-the-shelf packages are not available. Companies use modelling languages like GAMS [398], AMPL, AIMMS, or Python, together with standard solvers, to develop problem-specific approaches.

6 Energy Storage System (ESS) Siting and Sizing

It is very important to consider the operation and scheduling of generation and storage units already at the design phase, to determine the most convenient combination (i.e., minimum objective function) of technology selection and size. This is especially true when dealing with the selection, sizing, and unit commitment of long-term, or seasonal, energy storage. Long-term storage systems have recently caught much attention due to their ability to compensate the seasonal intermittency of renewable energy sources. However, compensating renewable fluctuations at the seasonal scale is particularly challenging: on the one hand, only a few systems, such as hydro storage, hydrogen storage, and large thermal storage, can be used for this purpose; on the other hand, the optimization problem is complicated by the different periodicities of the involved operation cycles, i.e., from daily to yearly. This implies long time horizons with fine resolution, which, in turn, translates into very large optimization problems. Furthermore, such systems often require the integration of different energy carriers, e.g., electricity, heat, and hydrogen. Exploiting the interaction between different energy infrastructures, in the so-called multi-energy systems (MES), makes it possible to improve the technical, economic, and environmental performance of the overall system [290].

In this framework, including the unit commitment problem already at the design phase implies taking into account the expected profiles of electricity and gas prices, weather conditions, and electricity and thermal demands along entire years. Moreover, the technical features of conversion and storage units should be accurately described. The resulting optimization problem can be described through a mixed-integer nonlinear program (MINLP), which is often simplified into a mixed-integer linear program (MILP) due to the global optimality guarantees and the effectiveness of the available commercial solvers (e.g., CPLEX, Gurobi, Mosek, etc.). In this context, integer variables are generally implemented to describe the number of installed units of a given technology, whereas binary variables are typically used to describe the on/off status of a certain technology. Furthermore, decomposition approaches relying on meta-heuristic algorithms for unit selection and sizing have been proposed. A comprehensive review of MINLP, MILP, and decomposition approaches for the design of MES including storage technologies has been carried out by Elsido et al. [112]. However, independently of the implemented approach, significant model simplifications are required to maintain the tractability of the problem. Such simplifications include limiting the number of considered technologies, restricting technology installation to a subset of locations, analyzing entire years based on seasonal design days or weeks, or aggregating the hours of each day into a few periods.
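As a minimal illustration of how such integer and binary variables typically enter the design problem (in an illustrative notation, not taken from [112]), consider a single technology with at most $n^{\max}$ identical units of capacity $\overline{P}$ and minimum load $\underline{P}$, where $n \in \mathbb{Z}_{\ge 0}$ counts the installed units, $y_t \in \{0,1\}$ is the on/off status in period $t$, and $P_t$ is the total output:

\[
\min \;\; c^{\mathrm{inv}}\, n + \sum_{t} c^{\mathrm{op}}_t\, P_t
\quad\text{s.t.}\quad
y_t \le n, \qquad
\underline{P}\, y_t \le P_t, \qquad
P_t \le \overline{P}\, n, \qquad
P_t \le n^{\max}\,\overline{P}\, y_t, \qquad
n \le n^{\max},
\]

so that the technology can only run if at least one unit is installed, and its output respects minimum-load and capacity limits; demand balances and storage dynamics then couple such blocks across technologies and time periods.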

7 Optimal Power Flow (OPF)

In the optimal power flow problem, the costs of generation and transmission of electric energy are optimised, taking into account the active and reactive power generation limits, demand requirements, bus voltage limits, and network flow limits. In the alternating-current (AC) model, OPF is formulated as a non-convex optimisation problem (ACOPF), which is generally difficult to solve due to the non-linear nature of the power-flow constraints. The problem was first formulated in 1962, and a large number of optimization algorithms and relaxations have been proposed since then [308, and references therein].

The direct-current optimal power flow (DCOPF) is a popular approximation, based on a linear programming problem obtained through the linearisation of the power flow equations. While DCOPF is useful in a wide variety of applications, a solution of DCOPF may not satisfy the non-linear power flow equations; the resulting solution may hence be infeasible and of limited utility.
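As a minimal, self-contained illustration of the DCOPF as a linear program, the sketch below dispatches two generators on a three-bus network using scipy; the susceptances, limits, costs, and load are chosen purely for demonstration.

```python
# Toy DCOPF on a 3-bus network solved as a linear program with scipy.
# Variables: x = [g0, g1, th1, th2]; bus 0 is the angle reference (th0 = 0).
# All data (susceptances, limits, costs, load) is illustrative only.
import numpy as np
from scipy.optimize import linprog

b01 = b12 = b02 = 10.0          # branch susceptances (1/x, per unit)
fmax = 1.0                      # branch flow limit (per unit)
load2 = 1.5                     # demand at bus 2
cost = [10.0, 30.0, 0.0, 0.0]   # generation costs for g0, g1; angles cost nothing

# nodal balance (equalities), using f_ik = b_ik * (th_i - th_k)
A_eq = np.array([
    [1.0, 0.0,  b01,         b02      ],   # bus 0: g0 = b01*(0-th1) + b02*(0-th2)
    [0.0, 1.0, -(b01 + b12), b12      ],   # bus 1: g1 = b01*(th1-0) + b12*(th1-th2)
    [0.0, 0.0, -b12,         b02 + b12],   # bus 2: -load = b02*(th2-0) + b12*(th2-th1)
])
b_eq = np.array([0.0, 0.0, -load2])

# branch flow limits (inequalities): |f_ik| <= fmax
A_flow = np.array([
    [0.0, 0.0, -b01,  0.0],    # f_01 = b01*(0 - th1)
    [0.0, 0.0,  b12, -b12],    # f_12 = b12*(th1 - th2)
    [0.0, 0.0,  0.0, -b02],    # f_02 = b02*(0 - th2)
])
A_ub = np.vstack([A_flow, -A_flow])
b_ub = np.full(6, fmax)

bounds = [(0.0, 2.0), (0.0, 2.0), (None, None), (None, None)]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("dispatch g0, g1 =", res.x[:2], "cost =", res.fun)
```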

Numerous heuristic algorithms have been proposed for the OPF, including Newton-Raphson, Lagrangian relaxation, and primal-dual interior point methods. Although some of these algorithms can handle large-scale networks, most of them can only compute a stationary point, usually without any assurance on the quality of the solution. That is because most of the algorithms rely on first-order (Karush-Kuhn-Tucker) necessary conditions of optimality, which cannot guarantee even a locally optimal solution of the non-convex problem, without considering the presence in the basin of attraction of a global optimum [270].

Alternatively, the OPF can be formulated as a non-convex quadratically constrained quadratic program, or, more generally, a polynomial optimisation problem (POP). There, convex relaxations within second-order cone programming (SOCP) and semidefinite programming (SDP) can be applied. In contrast to the other proposed approaches, convex relaxations make it possible to check whether a solution is globally optimal. If the solution is not optimal, the relaxations provide a lower bound, and hence a bound on how far any feasible solution is from optimality. In particular, [27] proposed the first semidefinite programming relaxation of the ACOPF for general networks. Its strengthened versions [295] make it possible to find globally optimal solutions for several well-known instances; a minimal code sketch of the basic relaxation is given after the list below. More recently, the moment and sum-of-squares approaches have been used [168, 250] to build hierarchies of improving SDP relaxations for a polynomial programming formulation of the ACOPF. To overcome the computational complexity of using SDP and polynomial programming, sparsity has been exploited [168, 281] to simplify the SDP relaxation of the OPF. A number of challenges remain:

  • To further improve the scalability of SDP relaxations, Alternating Direction Method of Multipliers (ADMM)-based computation can be used to solve sparse, large-scale SDPs [281].

  • Alternatively, cheaper hierarchies are being investigated based on LP and SOCP relaxations. In the future, a combination of hierarchies mixing constraints from different cones may be envisioned.

  • Another issue to address is the development of techniques to certify infeasibility of optimal power flow instances.

  • From an industrial point of view, dealing with incomplete data is one of the issues models and tools have to address. Aggregations of industrial data may lead to physically non-meaningful models, since some sections of the power network are not represented in the data.
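The following sketch illustrates the basic (rank-relaxed) semidefinite relaxation referred to above on a single-line, two-bus example, lifting the complex voltages into a Hermitian matrix W ≈ V V^H. The two-bus data and the use of cvxpy with an SDP-capable solver (e.g., SCS) are illustrative assumptions only.

```python
# Rank relaxation (W ~ V V^H) of a two-bus ACOPF: a generator at bus 0 feeds a
# fixed complex load at bus 1 over a single line. Illustrative data; assumes
# cvxpy with an SDP-capable solver (e.g., SCS) is installed.
import numpy as np
import cvxpy as cp

y_line = 1.0 / (0.01 + 0.1j)                 # series admittance of the line
Y = np.array([[y_line, -y_line],
              [-y_line, y_line]])            # bus admittance matrix
s_load = 0.8 + 0.3j                          # complex demand at bus 1
v_min, v_max = 0.95, 1.05                    # voltage magnitude limits (p.u.)

W = cp.Variable((2, 2), hermitian=True)      # lifted variable; W = V V^H if rank one
S_inj = cp.diag(W @ Y.conj().T)              # nodal injections S_i = sum_k conj(Y_ik) W_ik

constraints = [
    W >> 0,                                  # positive semidefiniteness (the relaxation)
    S_inj[1] == -s_load,                     # bus 1 injects the negative of its load
    cp.real(W[0, 0]) >= v_min**2, cp.real(W[0, 0]) <= v_max**2,
    cp.real(W[1, 1]) >= v_min**2, cp.real(W[1, 1]) <= v_max**2,
]
# minimise the active power generated at bus 0
prob = cp.Problem(cp.Minimize(cp.real(S_inj[0])), constraints)
prob.solve(solver=cp.SCS)

rank_one = np.linalg.matrix_rank(W.value, tol=1e-4) == 1
print("generation at bus 0:", prob.value, "| numerically rank-1:", rank_one)
```

If the optimal W is (numerically) rank one, the relaxation is exact and a globally optimal voltage profile can be recovered from its leading eigenvector; otherwise, its optimal value is a lower bound on the ACOPF cost.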

8 Security-Constrained Optimal Power Flow (SCOPF)

The security-constrained optimal power flow (SCOPF) is an extension of the standard OPF which takes into account line outages that have an effect on the line flows. The SCOPF problem is modelled as a non-convex, large-scale, mixed-integer non-linear optimization problem, with both continuous and discrete variables. The optimization problem determines a generation dispatch with the lowest cost while respecting the constraints, both under normal operating conditions and under specified disturbances, such as outages or equipment failures. A number of issues make the SCOPF much more challenging than the OPF problem: the significantly larger problem size, the need to handle discrete variables describing control actions (e.g., the start-up of generating units and network switching), and the variety of corrective control strategies in the post-contingency states.

Similarly to OPF problems, different solution approaches have been proposed to solve the SCOPF problem, such as linear programming approximations and heuristics, in addition to non-linear-programming-based methods. For example, to obtain feasible solutions, [180] propose to adjust the generation levels with the commitment states obtained in the dual solution of the Lagrangian relaxation [211].
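Abstractly, and in a commonly used notation (the exact constraint functions depend on the chosen power-flow model), the SCOPF couples a base case c = 0 with contingency cases c = 1, ..., K:

\[
\min_{u_0,\dots,u_K,\;x_0,\dots,x_K} \; f(u_0, x_0)
\quad\text{s.t.}\quad
g_c(u_c, x_c) = 0,\;\; h_c(u_c, x_c) \le 0 \;\; (c = 0,\dots,K),
\qquad
\|u_c - u_0\| \le \Delta_c \;\; (c = 1,\dots,K),
\]

where $u_c$ are the controls and $x_c$ the state variables in case $c$, $g_c$ and $h_c$ collect the power-flow equations and operating limits of the (post-contingency) network, and $\Delta_c$ bounds the corrective actions allowed after contingency $c$ (with $\Delta_c = 0$ recovering the purely preventive formulation).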

9 Optimal Transmission Switching (OTS)

Optimal transmission switching deals with changing the transmission network topology in order to improve voltage profiles, increase transfer capacity, and reduce the market power of some market participants. The topology is changed primarily by the deliberate outage of some specific transmission lines. Further, one may also consider the use of phase shifters (which change the angle difference between two adjacent buses) and other Flexible Alternating Current Transmission System (FACTS) devices (which can, among others, increase or decrease the impedance between two adjacent buses). The change in topology can be done by one or a combination of the following actions:

  • Deliberate outage of some specific transmission lines

  • Adding phase shifters (these devices can change the angle difference between two connected buses)

  • Adding Flexible Alternating Current Transmission System (FACTS) devices (these devices can increase/decrease the impedance of two connected buses in the system)

  • Adding reactive series impedance (these devices can increase the impedance of two connected buses in the system) [400]

The idea of topology dispatch has been studied for several decades [170, 196, 299, 334], although it has gained much attention recently thanks to [143, 196], who have demonstrated how it can provide the electricity market with greater efficiency and competition. This idea was further developed in [197, 198, 373, 429] by considering not only the normal operation, but also the N-1 contingencies, financial transmission rights (FTR), and Flexible Alternating Current Transmission System (FACTS) devices. The unit commitment problem constrained by the transmission system is solved in [428].

Much of this early modelling work has been performed using linear programming (LP) approximations of the alternating-current power flow and can be applied to large-scale transmission systems. The present best LP formulations have been presented by Kocuk et al. [244] and Fattahi et al. [137].

Much recent work considers non-linear relaxations, in order to model the alternating-current transmission constraints without piece-wise linearisation. Jabr [219] proposes an SOCP relaxation and [245] extends it. Marecek et al. [294] have experimented with the sparse variant of the method of moments for two formulations, lift-and-branch-and-bound using SDP relaxations, and certain piece-wise linearisations. Capitanescu and Wehenkel [65] and Sahraei-Ardakani et al. [372] study heuristics based on non-linear optimisation. Generally, convergent methods considering the line-use decision within the alternating current model [219, 245, 294] have turned out to be challenging.

For mixed-integer linear-programming (MILP) models, there has been much recent progress in general-purpose optimisation software based on branch-and-bound-and-cut. Often, modest instances, considering either piece-wise linearisations or uncertainty, can be solved exactly using the general-purpose software. Decompositions, such as Benders decomposition, Lagrangian relaxation, or column generation [428, 429], are frequently used beyond that.
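Within such MILP models, the on/off status of a line is typically captured by a binary variable $z_{ik}$, coupled to the DC flow equation through a big-M (disjunctive) constraint; in an illustrative notation:

\[
\bigl| p_{ik} - b_{ik}(\theta_i - \theta_k) \bigr| \;\le\; M_{ik}\,(1 - z_{ik}),
\qquad
|p_{ik}| \;\le\; \overline{p}_{ik}\, z_{ik},
\qquad z_{ik} \in \{0, 1\},
\]

where $b_{ik}$ is the branch susceptance, so that a switched-off line ($z_{ik} = 0$) carries no flow and imposes no relationship between the angles at its endpoints, while a switched-on line ($z_{ik} = 1$) recovers the usual DC flow equation; the choice of $M_{ik}$ strongly affects the tightness of the relaxation.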

For mixed-integer non-linear programming (MINLP) models, the methods are an active area of research [65, 83, 219, 245, 294, 372], considering the limitations of the general-purpose non-linear programming optimisation software. Marecek et al. [294] survey three convergent approaches, based on piece-wise linearisation of certain higher-dimensional surfaces, on the method of moments, and on combining lifting and branching. The preliminary conclusion is that combining lifting and branching may be the most promising.

See also transmission network expansion planning (Sect. 4), which is structurally very closely related, although the uncertainty is often modelled differently. Note also that one would often [429] like to expand the network knowing that one can perform switching later.

10 Optimal Network Islanding and Restoration

Power systems are usually subject to disturbances, which may lead to a loss of synchronization between groups of generators and possibly to blackouts.

System islanding refers to the condition in which some areas of the transmission or distribution system are disconnected from the main grid, while the power supply in those areas continues from local generating facilities. It may happen automatically, after some transmission lines are tripped by local relays to isolate the faulted region. The role of the system operator is to optimally maintain the balance between the generation and the demand in each island. The main idea is to minimise the total amount of load shedding needed to maintain such a balance and to avoid a blackout.

There are two types of islanding:

  • Intentional Islanding: This is done by determining optimal splitting points (so-called splitting strategies) to split the entire interconnected transmission network into islands, ensuring generation/load balance and satisfaction of transmission capacity constraints, when islanded operation of the system is unavoidable [402]. It is considered an emergency response for isolating failures that might propagate and lead to major disturbances [340].

  • Unplanned Islanding: This is an unplanned condition which should be avoided [136]. Islanding detection techniques are applied to reduce the risk of this event. This phenomenon is due to line tripping, equipment failures, human errors, and so on [268].

Studies [442] have shown that, by intentionally splitting the system into islands, wide-area blackouts could have been prevented for several large disturbance events, e.g., [419]. The objective would be to isolate the faulty part of the network in order to limit the spread of a cascading failure. Intentional islanding is therefore attracting an increasing amount of attention. Islands should be designed such that they are balanced in load and generation and have stable steady-state operating points that satisfy voltage and line limits. Further, the action of splitting should not cause transient instability. Since this problem potentially involves a 0-1 decision for every line in the network, the search space grows exponentially with the size of the network, leading to a considerable computational challenge.

Most approaches in the literature deal with finding a pre-determined islanding strategy that could be implemented in case of a network fault irrespective of where the fault occurs. The simplest example of this is forming islands by only requiring that load and generation are balanced. In [232], a three-phase ordered binary decision diagram (OBDD) method is proposed that determines a set of islanding strategies. The approach uses a reduced graph-theoretical model of the network to minimize the search space for islanding; power flow analyses are subsequently executed on islands to exclude strategies that violate operating constraints, e.g., line limits.

An alternative strategy that aims to avoid transient problems is to split the network into electromechanically stable islands, commonly by splitting so that generators with coherent oscillatory modes are grouped. If the system can be split along boundaries of coherent generator groups while not causing excessive imbalance between load and generation, then the system is less likely to lose stability. Typically, these strategies additionally consider load-generation balance and other constraints; algorithms include exhaustive search [445], minimal-flow minimal-cutset determination using breadth-/depth-first search [436], and graph simplification and partitioning [441]. The authors of [226] note that splitting based simply on slow coherency is not always effective under complex oscillatory conditions, and propose a framework that, iteratively, identifies the controlling group of machines and the contingencies that most severely impact system stability, and uses a heuristic method to search for a splitting strategy that maintains a desired margin. Wang et al. [437] employed a power-flow tracing algorithm to first determine the domain of each generator, i.e., the set of load buses that ‘belong’ to each generator. Subsequently, the network is coarsely split along domain intersections before refinement of boundaries to minimize imbalances.

While several useful strategies exist for determining pre-planned islanding decisions, little attention has been paid to islanding in response to particular contingencies. If, for example, a line failure occurs and subsequent cascading failures are likely, it may be desirable to isolate a small part of the network—the impacted area—from the rest. A method that does not take the impacted area into account when designing islands may leave this area within an arbitrarily large section of the network, all of which may become insecure as a result.

In [414] (for DC network constraints) and [415] (extended to AC constraints), the authors propose an optimization-based approach to system islanding and load shedding. Given some uncertain set of buses and/or lines, solving an optimization problem determines (1) the optimal set of lines to cut, (2) how to adjust the outputs of generators, and (3) which loads to shed. The authors assume that this is done intentionally under central control and not left to automatic safety devices. A key feature of the method is that any islands created satisfy the power flow equations and the operating constraints. Therefore, if a transiently stable path is followed from a pre-islanding state to the post-islanding operating point, the islanded network will be balanced, with minimal disruption to load.

Optimal network restoration refers to a class of actions taken by the network operator to bring the power system back to its normal condition following a complete or partial collapse. Intentional system islanding can be one of these actions [73, 365], but generally the methods are only partially developed.

From a mathematical perspective, the islanding MILP problem has similarities with the transmission switching problem [199] (cf. Sect. 9), in that the decision variables include which lines to disconnect, while power flow constraints must be satisfied following any disconnection. Similar decision variables are also involved in transmission expansion planning [294] (cf. Sect. 4). All three approaches—expansion planning, transmission switching, and islanding—may be seen as network topology optimization problems with added power flow constraints.

11 Operations of Smart Grids

The smart grid paradigm improves upon the controllability and control of existing power systems. With the increased penetration of distributed production (solar, wind), energy storage (pumped storage, batteries, compressed air storage, and plug-in hybrid electric vehicles), transmission switching, and controllable elements called FACTS (see below), power flows can be and need to be dynamically adjusted in order to improve reliability and efficiency. Also, a partial load shifting from peak hours to off-peak hours is possible. Such opportunities also increase the complexity of the design and operations of the power system. A broad class of novel optimization problems hence emerges, with the focus varying from power system to power system.

In power systems where peak demand occurs in one season, while the peak generation from renewables occurs in another season [169], the focus has largely been on improvements to the efficiency of power generation and the reliability of power transmission under stress due to peak demand or peak generation from renewables. The improvements are made possible by the so-called flexible alternating current transmission system (FACTS) devices [260], which are now routinely installed at generators, at the interconnection of one national transmission system (TS) with others, and elsewhere, such that the national transmission system operators (TSOs) gain more control over the power flows in their TS [84, 359]. FACTS devices intended for steady-state operations include:

  • load tap changer (LTC), e.g., thyristor-controlled load tap changers, which make it possible to vary the tap ratio rapidly

  • phase-shifters (PS), e.g., thyristor-controlled phase shifters, which make it possible to vary the phase angle rapidly

  • series capacitors (SC), e.g., a thyristor-controlled series capacitor coupled in parallel with a thyristor-controlled reactor (TCR), which make it possible to smooth the output of the reactor by varying the reactance

  • interphase power controller (IPC), which makes it possible to control reactive and active power independently

  • static VAR compensator (SVC), which is a source or sink of reactive power

  • static synchronous compensators (STATCOM), which make it possible to control either the nodal voltage magnitude or the reactive power injected at the bus.

The availability of such devices underlies the corrective actions available in response to stress. To summarize the book-length treatment of [3]:

  • when voltages are too low, one supplies reactive power (using STATCOM, SVC)

  • when the voltages are too high, reactive power is absorbed (using STATCOM, SVC)

  • when thermal limits are exceeded, load is reduced (using SC, IPC)

  • when loop flows appear, series reactance is adjusted (using IPC, SC, PS)

  • when power flow direction is reversed, phase angles are adjusted (using IPC, SC, PS).

It is hence believed that wider availability of FACTS devices will lead to increased stability of power systems. The non-convex optimization problems combining efficiency and reliability objectives, decisions on FACTS settings, and the constraints of alternating-current power flows remain a major challenge.

Especially in power systems where peak demand and peak renewable generation occur within the same season, there is an additional focus on energy storage and demand response management. One report [351] estimates the potential demand response capability in the US at about 20,500 megawatts (MW), or 3% of total peak demand. This estimate is obtained by combining a variety of readily deferrable loads, comprising:

  • pumped energy storage, which has been introduced into a number of power systems since the 1950s, and remains important to the present day

  • large industrial customers, e.g., in refrigeration and gas network operations, who are being moved to flexible contracts that allow for load shedding

  • charging of electric cars, which could eventually become a major load.

Many other loads may become deferrable, should the regulatory environment change such that retail prices vary over time and load-control switches (e.g., remotely controlled relays, or relays relying on price data, such as learning thermostats connected to domestic air conditioning) become widespread. In some regions, such as California, where photovoltaic generation is widespread and peak demand is driven by air conditioning, the resulting savings can be considerable. Notice, however, that a number of challenges remain. First, there is the issue of information provision: in many markets with dynamic pricing, customers do not have access to data on current prices. The immediate announcement of prices may lead to swings in demand, whereas no announcement may make it impossible to reach the best possible efficiency. Second, the regulatory framework has to be compatible with free markets. Third, if the decision making is to remain centralised, one needs to model the behaviour of the users. Because of the numerous difficulties of doing so, a number of mechanism-design studies and distributed decision-making schemes have been proposed.
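As an illustration of the kind of price-responsive deferral discussed above, the following sketch greedily schedules a deferrable load (say, an electric-car charge or a pre-cooling cycle) into the cheapest hours before a deadline. The hourly prices, energy requirement, and deadline are hypothetical assumptions, and a real load-control device would also have to respect comfort and network constraints.

  # Greedy scheduling of a deferrable load into the cheapest hours before a deadline
  # (all data are hypothetical).
  def schedule_deferrable_load(prices, energy_needed, max_per_hour, deadline):
      """Place a deferrable energy requirement into the cheapest hours among
      hours 0..deadline-1; returns the energy drawn in each hour."""
      plan = [0.0] * len(prices)
      for hour in sorted(range(deadline), key=lambda h: prices[h]):  # cheapest hours first
          if energy_needed <= 0:
              break
          draw = min(max_per_hour, energy_needed)
          plan[hour] = draw
          energy_needed -= draw
      return plan

  # Hypothetical retail prices (currency units per kWh) for 8 hours.
  prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.35, 0.40]
  print(schedule_deferrable_load(prices, energy_needed=6.0, max_per_hour=2.0, deadline=6))
  # -> [0.0, 0.0, 2.0, 2.0, 2.0, 0.0, 0.0, 0.0]: the load is served in the three cheapest hours

Under time-varying retail prices, even such a simple rule shifts consumption away from peak hours; the centralised and mechanism-design approaches mentioned above aim to coordinate many such responses without causing synchronised demand swings.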

Overall, smart grids require both changes to the power systems’ infrastructure and changes to their control mechanisms, which in turn require generation, distribution, transmission, and consumption to be modelled jointly. Although much innovative thinking is required, any progress on the underlying problems (mainly LF, OPF, ONI, and OTS) remains relevant.

12 Energy Storage Operations Management

12.1 Storage Systems

The increased awareness of the environmental impact and carbon footprint of all energy sources has motivated the recent widespread adoption of Renewable Energy Sources (RES). However, the intrinsically intermittent and non-schedulable nature of such naturally generated energy introduces a new source of uncertainty in the operation and planning of electric power systems. This poses a critical threat to the power grid, since its stability relies on the balance between energy production and demand [251]. Therefore, as the installed capacity of RES keeps increasing, the need to compensate for the fluctuations caused by non-dispatchable energy sources has become one of the most compelling drivers of research in the power-grid scientific community.

There are many ways to mitigate the variability of power generation from RES. On the one hand, there have been many efforts to improve the accuracy of power generation forecasts for renewable sources; notable recent efforts in this direction can be found in [21] and [443], and in the book [311]. Another possibility for handling the intermittent nature of RES is to use conventional (i.e., dispatchable) power plants as back-up, to improve the resiliency and flexibility of the overall mix of power plants in the grid. Obviously, this solution brings back the pollution issues associated with the use of conventional power plants [283] and [439]. A further opportunity is provided by hydro power plants, as they can respond quickly and absorb some of the energy fluctuations; however, hydro resources are limited by their availability and by their unsuitability for frequent charge-discharge cycles.

In light of this discussion, there is a general consensus that Energy Storage Systems (ESSs) may provide a viable way to systematically support power generation from RES, as they represent a cost-effective, flexible, and fast tool to smooth and regularize intermittent power generation [32]. The next sections describe the main technologies employed to build storage devices, their main applications in power grids, and the main mathematical methods used to solve grid-related optimization problems when storage devices are explicitly taken into account.

12.2 Technology

The physical characteristics of a storage system must be adapted to the particular service of interest. For instance, an ESS that has to provide primary frequency regulation will have different characteristics from one intended to provide local supply to a private house. Accordingly, storage techniques can be divided into four categories [215]:

  • Low power applications (e.g., transducers, private houses)

  • Medium power applications (e.g., individual electrical systems, town supply)

  • Peak levelling and network connection applications

  • Power-quality applications

For the first two categories, we consider small-scale systems in which energy can be stored in flywheels (kinetic energy), fuel cells (hydrogen), or supercapacitors. The last two categories are instead large-scale applications, where the most used technologies rely on storing the energy as gravitational potential energy (e.g., hydraulic systems), thermal energy, or compressed air. Finally, note that Electric Vehicles (EVs, either fully electric or plug-in hybrid) have recently been treated as ESSs, due to their ability to behave as a battery while the vehicle is idle and connected to the grid. Given the special characteristics of EVs (whose main purpose is clearly to serve as mobile vehicles, not as batteries), a specific and more detailed discussion of their usage is given elsewhere (add a link to the wiki entry on electric vehicles).

For a more detailed discussion of storage technologies and their technical characteristics, we refer to [32, 215] and to the more recent [79, 459].

12.3 Benefits

The use of ESSs, due to their versatility and flexibility, can lead to a number of advantages for the power grid, both from a technical and an economic perspective. In what follows we list the main services that storage technology can bring. For a more detailed discussion the interested reader can refer to [32, 175, 215, 459], and especially to [459, Section 3].

  • Ancillary services: ESSs can help regulate the active power supplied by non-dispatchable generation and provide primary frequency and voltage control, thereby improving the transient response of the power grid. This would remove the need to keep expensive dispatchable back-up power generation and would greatly facilitate the penetration of wind and solar power. Examples of technologies in the ancillary-services segment are pumped storage for longer-duration applications such as load following, reserve capacity, and spinning reserves, and flywheels for high-power, short-duration applications such as frequency regulation.

  • Energy arbitrage: ESSs make it possible to purchase inexpensive electric energy, available during periods when prices or system marginal costs are low, to charge the storage system, so that the stored energy can substitute for the expensive primary power used in peak-load power stations. Alternatively, ESSs could store excess energy production from RES, which would otherwise be lost. A typical example is pumped storage: during periods when demand is low, these stations use electricity to pump water from a lower reservoir to an upper reservoir; when demand is very high, the water flows out of the upper reservoir and drives the turbines to generate high-value electricity for peak hours (a minimal optimization sketch follows this list).

  • Network savings: Power consumption during the day fluctuates considerably, meaning that the minimum level of consumption is usually much lower than the maximum daily peak (especially during summer and winter). This leads to over-sizing the production units, transmission lines, and associated equipment, which are tailored to absorb the peak demand. On the other hand, local supply in the form of ESSs would help compensate for load variations and would make it possible to operate transmission and distribution networks with lighter designs, closer to the average daily consumption than to the peak demand.
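The energy-arbitrage item above can be made concrete by a small linear program: charge when prices are low, discharge when they are high, subject to capacity, power, and efficiency limits. The sketch below again uses the PuLP package; the prices and storage parameters are hypothetical assumptions, not data from the cited references.

  # Energy-arbitrage sketch: maximise profit from buying low and selling high,
  # subject to a first-order (linear) state-of-charge model (hypothetical data).
  import pulp

  prices = [20, 15, 12, 18, 45, 60, 55, 30]   # hypothetical prices per MWh, one per hour
  E_max, P_max = 4.0, 1.0                     # storage capacity (MWh) and power limit (MW)
  eta_c, eta_d = 0.95, 0.95                   # charging and discharging efficiencies

  prob = pulp.LpProblem("arbitrage", pulp.LpMaximize)
  c = [pulp.LpVariable(f"charge_{t}", 0, P_max) for t in range(len(prices))]
  d = [pulp.LpVariable(f"discharge_{t}", 0, P_max) for t in range(len(prices))]
  e = [pulp.LpVariable(f"energy_{t}", 0, E_max) for t in range(len(prices) + 1)]

  prob += pulp.lpSum(prices[t] * (d[t] - c[t]) for t in range(len(prices)))  # profit
  prob += e[0] == 0                                                          # start empty
  for t in range(len(prices)):
      # First-order state-of-charge dynamics with charge/discharge efficiencies.
      prob += e[t + 1] == e[t] + eta_c * c[t] - (1.0 / eta_d) * d[t]

  prob.solve()
  print([round(d[t].value() - c[t].value(), 2) for t in range(len(prices))])  # net market position per hour

The state-of-charge constraint used here is an instance of the simple first-order linear storage models discussed in Sect. 12.4 below.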

Given the diverse applications above, the mathematical problems associated with ESSs that are of most interest for the power grid concern their optimal siting (i.e., finding the most convenient locations at which to install them) and sizing within the power grid [459]. The next section reviews the most widely used techniques for addressing such problems.

12.4 Models

Different models, with different levels of accuracy, have been developed in the literature to describe the functioning of storage devices. The level of detail usually depends on the particular application of interest and, more generally, on the level of detail with which the other power grid devices have been modelled. When accurate models of the batteries are not required, a simple first-order linear equation may be used [e.g., 77, 139, 140]. Such simple models are appropriate when one is not interested in the point-wise behaviour of the system (as the low-level electrical behaviour of the ESS is neglected) but, for example, when the study focuses on the effects of the transient behaviour of the power grid [139, 140]. More sophisticated and realistic models can be found in [336, 337], where many other low-level details of a storage unit are also taken into account (e.g., life cycle, ageing, DC link, specific technology).
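As an illustration of such a first-order linear model (the notation and the separate charging and discharging efficiencies below are generic modelling assumptions, not taken from the cited works), the stored energy E_t can be propagated in discrete time as

  E_{t+1} = E_t + \eta_c \, p^{ch}_t \, \Delta t - \frac{p^{dis}_t}{\eta_d} \, \Delta t,

where p^{ch}_t and p^{dis}_t denote the charging and discharging powers, \eta_c and \eta_d the corresponding efficiencies, and \Delta t the time step. This is also the state-of-charge constraint used in the arbitrage sketch of Sect. 12.3.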

While the simultaneous determination of the optimal location and size of ESSs is known to be a non-deterministic polynomial-time (NP-)hard problem [459], different strategies have been adopted to tackle it. These include Monte Carlo simulations, more analytic approaches (such as dynamic programming, mixed-integer linear programming, and second-order cone programming), and certain heuristic methods.
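As a minimal illustration of the Monte Carlo route (the net-load scenarios, candidate sizes, dispatch rule, and reliability measure below are hypothetical assumptions, not the procedure of [459]), one can evaluate candidate storage sizes against random net-load profiles and compare the resulting expected unserved energy:

  # Monte Carlo sizing sketch: compare candidate storage capacities by the average
  # energy they fail to serve over random net-load scenarios (hypothetical data).
  import random

  def unserved_energy(capacity_mwh, power_mw, net_load):
      """Greedy dispatch of a storage unit against an hourly net-load profile
      (load minus renewables, in MW); returns the energy that cannot be served (MWh).
      Ideal efficiency and a full initial charge are simplifying assumptions."""
      soc, unserved = capacity_mwh, 0.0
      for nl in net_load:
          if nl > 0:                                  # deficit hour: discharge
              discharge = min(nl, power_mw, soc)
              soc -= discharge
              unserved += nl - discharge
          else:                                       # surplus hour: recharge
              soc = min(capacity_mwh, soc + min(-nl, power_mw))
      return unserved

  random.seed(0)
  scenarios = [[random.gauss(0.0, 2.0) for _ in range(24)] for _ in range(500)]

  for capacity in (2, 4, 6, 8, 10):                   # candidate sizes in MWh
      avg = sum(unserved_energy(capacity, 2.0, s) for s in scenarios) / len(scenarios)
      print(f"capacity {capacity:>2} MWh -> average unserved energy {avg:.2f} MWh/day")

In practice, such simulation-based evaluations are combined with the analytic approaches listed above, which can additionally optimize the siting of the devices within the network.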