1 Introduction

Risk measures are widely used in both financial regulation and economic decisions. Since the seminal work of Artzner et al. [2], risk measures have commonly been defined as functionals on a space of random variables or, with the assumption of law-invariance, on the set of their distribution functions. The most popular risk measures are value-at-risk (VaR) and expected shortfall (ES); see Artzner et al. [2] and Föllmer and Schied [17, Chap. 4] for the classic theory of risk measures, and documents from the Basel Committee on Banking Supervision (BCBS), e.g. BCBS [3, Chap. MAR10], for regulatory practice in banking.

In this paper, we propose a novel framework for measures of risk under uncertainty. Let us first explain our motivation. We take market risk as our primary example, although our discussions naturally apply to many other types of risks. A portfolio is associated with a future loss random variable \(X\) representing the portfolio risk. The loss \(X\) has two important practical aspects: the specification and the modelling.

1) The specification refers to how \(X\) is defined in terms of the underlying risk factors (e.g. asset prices, exchange rates, credit scores or volatilities). More precisely, \(X\) is the financial loss (or gain) from holding assets, derivatives or other investments in the portfolio. Mathematically, the specification of \(X\) is represented by a function \(X:\Omega \to \mathbb{R}\) which maps each state of the future financial world (each element of the sample space \(\Omega \)) into a realised loss.

2) The modelling refers to the statistical assessment of the likelihood and the severity of a loss \(X\). The modelling of \(X\) is usually summarised by a distribution, or a collection of distributions in case of model uncertainty, under some estimated or hypothetical (e.g. in stress testing) probability measures \(\mathbb{P}\in \mathcal {P}\), where \(\mathcal {P}\) is the set of probability measures on the sample space \(\Omega \).

In the classic framework of Artzner et al. [2], a risk measure \(\rho \) is defined on a set \(\mathcal {X}\) of random variables, and the risk value \(\rho (X)\) is thereby determined by the specification of \(X\). The modelling of \(X\) is, however, implicit in this setting: if a probability ℙ is assumed available, then the distribution of \(X\) under ℙ is determined by its specification.

There is a visible gap in the classic setting \(\rho :\mathcal {X}\to \mathbb{R}\). In practice, neither \(X\) nor ℙ is fixed in general. A change in \(X\) means adjusting positions via trading financial securities. A change in ℙ means an update of the modelling, estimation and calibration of the random world. In financial practice, both \(X\) and ℙ evolve on a daily basis for a trading desk, yet they are modified for very different reasons.

For another concrete example, suppose that a regulator specifies a risk measure (e.g. ES at the level 0.975 as in BCBS [3]), and two firms assess the risk of the same portfolio separately. Due to different modelling and data processing techniques used by the two firms, their reported ES values are typically not the same. However, the loss random variable \(X\) from the portfolio is the same for both firms. Therefore, the risk measure should be determined not only by the specification of \(X\), but also by the modelling information. In practice, modelling is always subject to uncertainty (called ambiguity in decision theory). Even in the simple estimation of a parametric model, the plausible models are not unique; see Gilboa and Schmeidler [19] for a classic treatment of ambiguity.

Motivated by the above observations, we propose a new framework of risk measures taking into account both the specification and the modelling of random losses. We choose a set of probability measures instead of a single probability measure as the input for the modelling component. Formally, we introduce generalised risk measures \(\Psi :\mathcal {X}\times 2^{\mathcal {P}}\to [-\infty ,\infty ]\) which have two input arguments: a random variable \(X\in \mathcal {X}\) representing the specification of the loss, and a set \(\mathcal {Q}\subseteq \mathcal {P}\) of probability measures representing the modelling of the random world; each probability measure in \(\mathcal {P}\) is called a scenario. Our framework includes the traditional law-invariant risk measures as a special case when \(\mathcal {Q}\) is a pre-specified singleton. When \(\mathcal {Q}\) is fixed but not a singleton, our generalised risk measures include the scenario-based risk measures of Wang and Ziegel [31]. The framework also incorporates other complicated decision criteria addressing model uncertainty in the literature, which will be discussed later.

We take the perspective of a regulator who designs a regulatory capital assessment scheme that must be complied with by financial institutions. Financial institutions (or their trading desks) can choose their portfolio positions with losses \(X\in \mathcal {X}\), and subject to passing regulatory backtests for statistical prudence, they can also choose their internal models \(\mathcal {Q}\subseteq \mathcal {P}\). The generalised risk measure is crucial to the design of the capital assessment procedure, because it acts on portfolios and models from financial institutions and computes regulatory capital requirements. Therefore, our theoretical framework closely resembles the regulatory practice in the fundamental review of the trading book (FRTB) of BCBS [3]; see Wang and Ziegel [31] for discussions on the risk assessment practice of FRTB, and Cambou and Filipović [4] for model and scenario aggregation methods in solvency assessment. It is important to note that the input scenario set \(\mathcal {Q}\) does not necessarily contain the decision maker’s subjective probability governing the random world, because most models are simplifications or approximations, as argued by Cerreia-Vioglio et al. [6].

Figure 1 illustrates a stylised risk assessment procedure, reflecting many of the above considerations. There are four roles: regulator (external), risk analyst (internal), portfolio manager (internal) and model risk manager (internal). In Fig. 1, except for the regulator’s actions, the other actions are changing dynamically on a daily (or similar) basis, making it clear that one should take both \(\mathcal {Q}\) and \(X\) as inputs and allow them to vary in a unified framework.

Fig. 1: A stylised procedure for risk assessment practice

1.1 Contribution and structure of the paper

As explained above, in the literature on the axiomatic theory of risk measures, one often first designs axioms to identify desirable risk measures without model uncertainty and then puts model uncertainty into the model as an exogenous object. This approach, although easy to apply, is unsatisfactory from a decision-theoretic point of view as it does not identify desirable axioms for risk measures when model uncertainty is taken as input. One of our main contributions is to provide an axiomatic framework of generalised risk measures which allows us to consider properties on both the model uncertainty and the random losses, thus addressing this practical issue for the first time. The rigorous mathematical formulation of generalised risk measures is laid out in Sect. 2.

Since generalised risk measures are defined as mappings from \(\mathcal {X}\times 2^{\mathcal {P}}\) to the (extended) real line, the mathematical structure is much more complicated than that of traditional risk measures. We establish several theoretical results for this new framework. In Sect. 3, we obtain an axiomatic characterisation of worst-case generalised risk measures via some simple properties (Theorem 3.1). The worst-case generalised risk measures are the most practical and they appear extensively in the literature on risk measures and optimisation.

Law-invariance is a crucial property that connects loss random variables to statistical models. In the traditional framework, law-invariant risk measures can be equivalently expressed as functionals on a space of distributions; this is no longer true in our generalised framework. We provide three different forms of law-invariance which reflect different considerations: strong law-invariance, loss law-invariance, and scenario law-invariance; see Sect. 4 for details. In general, the three notions of law-invariance are not equivalent and reflect very different modelling considerations. Indeed, if strong law-invariance is assumed, our framework can be converted to the traditional setting without many mathematical difficulties. However, in practice, strong law-invariance may not be desirable, and technical complications arise when it has to be weakened. In Sect. 4, we show an equivalence between strong law-invariance and a combination of two weaker versions of law-invariance under mild conditions (Theorem 4.3). Moreover, we express worst-case generalised risk measures with various kinds of law-invariance as functions defined on distributions (Proposition 4.6). Therefore, from traditional law-invariant risk measures defined on distributions, we can easily construct generalised risk measures satisfying certain desirable properties.

In Sect. 5, we focus on coherent generalised risk measures, which are analogues to the coherent traditional risk measures of Artzner et al. [2], and characterise the simplest form (expectation-type) in Theorem 5.1. Moreover, we propose the notion of ambiguity sensitivity and establish an equivalence between strong law-invariance and a combination of a weaker law-invariance and ambiguity sensitivity (Theorem 5.3). In addition, together with a few simple properties, the combination of the weaker law-invariance and ambiguity sensitivity implies coherence, which supports coherent risk measures in the traditional framework from a completely novel perspective.

In Sect. 6, we discuss some connections of our framework to decision theory. In particular, we characterise the multi-prior expected utility of Gilboa and Schmeidler [19] with several properties (Proposition 6.2) and obtain an axiomatic characterisation for robust generalised risk measures (Proposition 6.3). The latter are closely related to the variational preferences of Maccheroni et al. [24]. Section 7 contains further discussions and remarks. The proofs of all theorems and propositions are in the Appendix.

1.2 Connections to other frameworks in the literature

Our framework is in sharp contrast to the existing ones in the literature on risk management. We have already discussed the difference between our framework and the classic frameworks of risk measures (see Artzner et al. [2] and Föllmer and Schied [17, Chap. 4]) or preferences (see Wakker [29, Chap. 8] for a comprehensive treatment) which are all defined on \(\mathcal {X}\). The setting of scenario-based risk measures of Wang and Ziegel [31] is also motivated by the regulatory framework of BCBS [3] and aims to understand uncertainty in risk measures, but is mathematically quite different. Scenario-based risk measures are mappings on \(\mathcal {X}\) determined by the distributions of the random losses under a collection of pre-specified scenarios. Since the scenarios are fixed, the key question of how a risk measure reacts when scenarios change is left unaddressed. As such, the mathematical results in this paper have no overlap with Wang and Ziegel [31].

Model uncertainty is an important topic in economic decision theory. In the classic setting of Anscombe and Aumann [1], a risk (called a lottery) is represented by a collection of possible distributions, whereas in our framework, the input consists of a random variable and a collection of probability measures, which interact with each other. There are many recent developments in this stream of literature which focus on the characterisation of preferences under uncertainty via some axioms. For a non-exclusive list, we mention the multi-prior expected utility of Gilboa and Schmeidler [19], the multiplier preferences of Hansen and Sargent [20], the smooth ambiguity preference of Klibanoff et al. [22], the variational preference of Maccheroni et al. [24] and the model misspecification preference of Cerreia-Vioglio et al. [6]. They can be formulated as examples of our framework, as will be illustrated in Example 6.1.

Some conceptual frameworks in decision theory reflect similar considerations towards risk and uncertainty as ours. In particular, Cerreia-Vioglio et al. [6] studied preferences under model misspecification, and their set of structured models corresponds to our set \(\mathcal {Q}\) of scenarios. An earlier work closely related to our framework is Gajdos et al. [18], where the authors studied preferences defined on the outcome mapping (an act) and the set of possible probabilities; thus conceptual similarity is clear. Nevertheless, since the main context of our work is financial risk assessment instead of decision making, the axioms and properties considered in this paper, as well as technical results and their implications, are completely different from [18] and [6].

In the operations research literature, Delage et al. [8] recently investigated a model for decision making with and without uncertainty and analysed the conditions under which random decisions are strictly better than deterministic ones. Model uncertainty also widely appears in robust optimisation; see El Ghaoui et al. [13], Zhu and Fukushima [33] and Zymler et al. [34] for optimising risk measures under uncertainty. In the above literature, model uncertainty is generally pre-specified and regarded as an objective fact, whereas we study the properties of risk measures taking model uncertainty as an input argument that can vary over all possible choices.

2 A framework for measures of risk and uncertainty

2.1 Notation

We begin by stating some notation used throughout. Let \((\Omega , \mathcal {F})\) be a measurable space and \(\mathcal {P}\) the class of atomless probability measures defined on ℱ. Recall that a probability measure \(P\) on \((\Omega , \mathcal {F})\) is atomless if there exists a uniform random variable on \((\Omega , \mathcal {F},P)\). The set of all subsets of \(\mathcal {P}\) is denoted by \(2^{\mathcal {P}}\). Let \(\mathcal {X} \) be the space of bounded random variables and ℳ the set of compactly supported distributions on ℝ. For \(X,Y\in \mathcal {X}\) and \(P, Q \in \mathcal {P}\), we write \(X\vert _{P} \stackrel{\,\mathrm{d}}{=}Y \vert _{Q}\) if the distribution of \(X\) under \(P\) is identical to that of \(Y\) under \(Q\). We denote by \(F_{X\vert P} \in \mathcal {M}\) the distribution of \(X\) under \(P\). For an increasing set function \(\nu : \mathcal {F}\to \mathbb{R}\) with \(\nu [\varnothing ]=0\), the Choquet integral (e.g. Föllmer and Schied [17, Definition 4.76]) with respect to \(\nu \) is defined as

$$ \int X \,\mathrm{d}\nu := \int _{-\infty}^{0} (\nu [X \geq x] - \nu [\Omega ] ) \,\mathrm{d}x + \int _{0}^{\infty} \nu [X \geq x] \,\mathrm{d}x , \qquad X \in \mathcal {X}. $$
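
To make the formula concrete, the following sketch (in Python; purely illustrative and not part of the formal development, with a hypothetical capacity and loss vector) evaluates the Choquet integral on a finite sample space via the equivalent comonotonic sum, in which the outcomes are sorted by decreasing loss and each loss is weighted by the capacity increment of its upper level set.

```python
import numpy as np

def choquet_integral(x, nu, omega):
    """Choquet integral of the loss vector x (indexed by omega) with respect to an
    increasing set function nu with nu(frozenset()) = 0, on a finite sample space.
    Uses the equivalent comonotonic-sum form: sort outcomes by decreasing loss and
    weight each loss by the capacity increment of its upper level set."""
    order = sorted(range(len(omega)), key=lambda i: -x[i])
    total, prev, upper = 0.0, 0.0, set()
    for i in order:
        upper.add(omega[i])
        cur = nu(frozenset(upper))
        total += x[i] * (cur - prev)
        prev = cur
    return float(total)

# Hypothetical example: a distorted uniform capacity nu = h(P) on four states.
omega = ["w1", "w2", "w3", "w4"]
P = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}
h = lambda t: np.sqrt(t)                  # a concave distortion with h(0) = 0, h(1) = 1
nu = lambda A: h(sum(P[w] for w in A))
x = np.array([-1.0, 0.5, 2.0, 3.0])       # losses at w1, ..., w4
print(choquet_integral(x, nu, omega))     # weights large losses more heavily than E^P[X]
```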

We note in the following example that the same random variable \(X\) can be continuously distributed under one atomless probability measure \(P\) and discretely distributed under another atomless probability measure \(Q\). Working with atomless probability measures allows us to study continuously distributed as well as discrete random variables in a unified framework.

Example 2.1

Consider \((\Omega ,\mathcal {F})=([0,1]^{2},\mathcal {B}([0,1]^{2}))\), where ℬ denotes the Borel \(\sigma \)-algebra. Let \(P = \lambda \times \lambda \) and \(Q=\delta _{1} \times \lambda \), where \(\lambda \) is Lebesgue measure on \([0,1]\) and \(\delta _{1}\) is the point mass at 1. Note that both \(P\) and \(Q\) are atomless probability measures. Let \(X(s,t)=s\) for \((s,t)\in [0,1]^{2}\). Then the distribution of \(X\) is the uniform distribution on \([0,1]\) under \(P\) and the point mass at 1 under \(Q\).
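
A short simulation (purely illustrative; the sample size and seed are arbitrary) confirms this dichotomy: the same specification \(X(s,t)=s\) produces a uniform sample under \(P=\lambda \times \lambda \) and a degenerate sample under \(Q=\delta _{1}\times \lambda \).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# First coordinate of (s, t) under P = lambda x lambda: uniform on [0, 1].
s_P = rng.uniform(0.0, 1.0, n)
# First coordinate under Q = delta_1 x lambda: identically 1 (X depends only on s).
s_Q = np.ones(n)

X = lambda s: s  # the same specification X(s, t) = s under both measures

print(np.quantile(X(s_P), [0.25, 0.5, 0.75]))  # approx (0.25, 0.5, 0.75): uniform under P
print(np.unique(X(s_Q)))                       # [1.]: point mass at 1 under Q
```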

2.2 A new and generalised framework for risk measures

Traditionally, risk measures as in Artzner et al. [2] and Föllmer and Schied [17, Chap. 4] are mappings from \(\mathcal {X}\) to ℝ. We call them traditional risk measures. Note that a traditional risk measure does not require the specification of a probability measure unless we additionally assume law-invariance; this is further discussed in Sect. 4.

In the new framework that we work with in this paper, the input of a risk measure is a combination of the loss \(X\) and a set \(\mathcal {Q}\) of possible probability measures that represents the best knowledge of the underlying random nature. To distinguish from the traditional setting, we refer to these functionals as generalised risk measures.

Definition 2.2

A generalised risk measure is a mapping \(\Psi :\mathcal {X}\times 2^{\mathcal {P}}\to [-\infty , +\infty ]\). It is called standard if \({\Psi (s\vert \mathcal {Q}) = s}\) for all \({s\in \mathbb{R}}\) and \({\mathcal {Q}\subseteq \mathcal {P}}\). A single-scenario risk measure is a mapping \(\Psi :\mathcal {X}\times \mathcal {P}\to [-\infty ,+\infty ]\).

Clearly, a single-scenario risk measure is precisely a generalised risk measure with its second argument confined to singletons of scenarios. For a singleton \(\{P\}\subseteq \mathcal {P}\), we use the simpler notation \({\Psi (X\vert P):=\Psi (X\vert \{P\})}\). For any fixed \(P\), the mapping \(X\mapsto \Psi (X\vert P)\) is a risk measure in the traditional sense. Moreover, we use the notation \(\Psi (X\vert \mathcal {Q})\) instead of \(\Psi (X,\mathcal {Q})\) to emphasise the different roles of \(X\in \mathcal {X}\) and \(\mathcal {Q}\in 2^{\mathcal {P}}\).

The requirement of standardisation reflects the consideration that for any fixed constant \(s\), \(\Psi (s\vert \mathcal {Q})\) does not depend on the input scenarios \(\mathcal {Q}\). The range of \(\Psi \) is chosen as \([-\infty ,+\infty ]\) in our general framework to allow the greatest generality. In practical applications, one may restrict the range to be ℝ or \((-\infty ,+\infty ]\).

Generalised risk measures are much more complicated as a mathematical object than traditional risk measures since their input includes both a random loss \(X\) and a set \(\mathcal {Q}\) of probability measures. Below we collect some basic properties to consider for a generalised risk measure \(\Psi \).

(A1) Uncertainty aversion: \(\Psi (X\vert \mathcal {Q})\le \Psi (X\vert \mathcal {R}) \) for all \(X\in \mathcal {X}\) and \(\mathcal {Q}\subseteq \mathcal {R}\subseteq \mathcal {P}\).

(A2) Scenario monotonicity: \(\Psi (X\vert \mathcal {Q}) \leq \Psi (Y\vert \mathcal {Q})\) if \(\Psi (X\vert P ) \leq \Psi (Y\vert P )\) for all \(P \in \mathcal {Q}\).

(A3) Scenario upper bound: \(\Psi (X \vert \mathcal {Q}) \leq \sup _{P\in \mathcal {Q}} \Psi (X\vert P)\) for all \(X \in \mathcal {X}\) and \(\mathcal {Q} \subseteq \mathcal {P}\).

Property (A1) means that the evaluation of a risk weakly increases if model uncertainty increases, and this reflects an aversion to model uncertainty. Property (A2) means that if under each possible scenario, \(X\) is evaluated to be less risky than \(Y\), then the overall evaluation of the risk of \(X\) should not be more than that of \(Y\). Property (A3) means that the overall evaluation of \(X\) is not more extreme than that of \(X\) evaluated under the worst-case scenario.

Properties (A2) and (A3) are quite natural and are satisfied by most examples of generalised risk measures in their various disguises in the risk management and decision theory literature; we discuss some of them later.

Property (A1) is more specialised as it leads to worst-case risk evaluation or decision making (Theorem 3.1 below) axiomatised in decision theory by Gilboa and Schmeidler [19]. This property is not satisfied in models where uncertainty is aggregated in some form of averaging, such as taking a weighted average of risk evaluations as in the average ES of Wang and Ziegel [31] or the smooth ambiguity model of Klibanoff et al. [22]. Indeed, if a new scenario \(P\) is added to an existing collection \(\mathcal {Q}\) of scenarios and a random loss \(X\) is considered safe under \(P\), then it may be desirable in risk management practice to reduce the assessment of riskiness of \(X\) by including \(P\), that is, \(\Psi (X \vert \mathcal {Q}\cup \{P\})< \Psi (X\vert \mathcal {Q})\), violating (A1).

In decision theory, after a proper translation between the two frameworks, the preferential version of (A2) appears in Cerreia-Vioglio et al. [6] as \(Q\)-separability, and (A1) is genuinely weaker than monotonicity in model ambiguity of [6] on preferences. For a fixed set \(\mathcal {Q}\) of scenarios, (A1) and (A2) are respectively similar to ambiguity aversion and ambiguity monotonicity in Delage et al. [8], which are formulated for distributions rather than random variables.

2.3 Examples: VaR and ES

We first give a few examples in this section, and more will be discussed later. The two popular traditional risk measures in banking and insurance are value-at-risk (VaR) and expected shortfall (ES); see Embrechts et al. [15] for a review. Both risk measures in the classic formulation are defined with a fixed scenario \(P\in \mathcal {P}\), and allowing \(P\) to vary, we can treat them as single-scenario risk measures in Definition 2.2. For a level \(\alpha \in (0,1]\), the VaR under \(P\) is defined as

$$ \mathrm {VaR}_{\alpha} (X\vert P) = \inf \{x \in \mathbb{R} : P[X\le x] \geq \alpha \} , \qquad X\in \mathcal {X}, $$

and the ES under \(P\) is defined as

$$ \mathrm {ES}_{\alpha}(X\vert P) = \frac{1}{1-\alpha} \int _{\alpha}^{1} \mathrm {VaR}_{ \beta }(X\vert P) \,\mathrm{d}\beta , \qquad X\in \mathcal {X}. $$
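
For a fixed scenario \(P\), both quantities can be approximated from a Monte Carlo sample of \(X\) under \(P\). The sketch below is a minimal illustration under a hypothetical lognormal loss model: the lower quantile is used, matching the infimum in the definition of VaR, and ES is obtained by averaging \(\mathrm {VaR}_{\beta}\) over a grid of levels \(\beta \in (\alpha ,1)\).

```python
import numpy as np

def var_es(sample, alpha):
    """Empirical VaR and ES at level alpha from an i.i.d. sample of X under P.
    VaR is the lower alpha-quantile; ES averages VaR_beta over beta in (alpha, 1),
    mirroring the two definitions above."""
    x = np.sort(sample)
    n = len(x)
    def var(a):  # inf{x : P_n[X <= x] >= a} for the empirical measure P_n
        return x[min(int(np.ceil(a * n)) - 1, n - 1)]
    betas = np.linspace(alpha, 1.0, 2000, endpoint=False)
    es = float(np.mean([var(b) for b in betas]))
    return var(alpha), es

# Hypothetical scenario P: the loss X is lognormal under P; alpha = 0.975 as in BCBS [3].
rng = np.random.default_rng(1)
loss = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)
print(var_es(loss, 0.975))  # VaR_0.975 ~ exp(1.96) ~ 7.1; ES_0.975 is larger
```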

We first show some properties of VaR and ES in our setting. These properties follow from existing properties of VaR and ES with a fixed \(P\), but the concavity or convexity with respect to scenarios has not been formally studied in the literature, since our framework is new.

Proposition 2.3

For any fixed level \(\alpha \in (0,1)\), the single-scenario risk measure \((X,P)\mapsto \mathrm {ES}_{\alpha}(X\vert P)\) is convex in \(X\) and concave in \(P\), while \((X,P)\mapsto \mathrm {VaR}_{\alpha}(X\vert P)\) is neither convex nor concave in \(X\) or \(P\).

Remark 2.4

The statement that \((X,P)\mapsto \mathrm {VaR}_{\alpha}(X\vert P)\) is neither convex nor concave in \(X\) or \(P\) may fail if \(P\) is an atomic probability measure. For instance, if \(P\) is a discrete measure with probability mass \(1/n\) on \(n\) points, then \(\mathrm {VaR}_{\alpha}(\, \cdot \, | P)=\mathrm {ES}_{\alpha}(\, \cdot \, | P)\) for \(\alpha >1-1/n\), making the statement false. Recall that we work with atomless probability measures throughout.

Building on the single-scenario VaR and ES, we can define generalised risk measures such as worst-case VaR and worst-case ES via

$$\begin{aligned} \overline{\mathrm{VaR}}_{\alpha}(X\vert \mathcal {Q}) &:= \sup _{P\in \mathcal {Q}} \mathrm {VaR}_{\alpha }(X\vert P), \\ \overline{\mathrm{ES}}_{\alpha}(X\vert \mathcal {Q}) &:= \sup _{P \in \mathcal {Q}} \mathrm {ES}_{\alpha}(X\vert P), \qquad (X,\mathcal {Q})\in \mathcal {X}\times 2^{ \mathcal {P}}. \end{aligned}$$

We refer to El Ghaoui et al. [13] for optimisation of the worst-case VaR, Zhu and Fukushima [33] for optimisation of the worst-case ES, and Wang and Ziegel [31] for their theoretical properties. The worst-case VaR and worst-case ES are both standard and satisfy (A1)–(A3).
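
When \(\mathcal {Q}\) is a finite set, the worst-case versions reduce to a maximum of per-scenario evaluations. The following sketch is a hypothetical illustration in which each scenario is represented by a sampler of \(X\) under that scenario; the scenario names, distributions and sample sizes are all assumptions made for the example.

```python
import numpy as np

def empirical_es(sample, alpha):
    """Empirical ES_alpha: average of the largest (1 - alpha) fraction of the losses."""
    x = np.sort(sample)
    return float(x[int(np.floor(alpha * len(x))):].mean())

# Hypothetical finite scenario set Q: each P is represented by a sampler of X under P.
rng = np.random.default_rng(2)
scenarios = {
    "baseline": lambda n: rng.normal(0.0, 1.0, n),
    "high_vol": lambda n: rng.normal(0.0, 2.0, n),
    "stressed": lambda n: 1.5 * rng.standard_t(3, n),
}

alpha, n = 0.975, 200_000
per_scenario = {name: empirical_es(sim(n), alpha) for name, sim in scenarios.items()}
worst_case_es = max(per_scenario.values())  # the supremum over P in Q from the display above
print(per_scenario, worst_case_es)
```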

For a given fixed \(\mathcal {Q}\subseteq \mathcal {P}\), several other examples of ES and VaR with aggregated scenarios, such as averages (with respect to a pre-specified measure over \(\mathcal {Q}\)) and inf-convolutions (for a finite \(\mathcal {Q}\)), are also considered by Wang and Ziegel [31] and Castagnoli et al. [5]. For instance, we can define an average ES by

$$ (X,\mathcal {Q})\mapsto \int _{\mathcal {Q}} \mathrm {ES}_{\alpha}(X\vert Q) \,\mathrm{d}\mu _{\mathcal {Q}}(Q), $$
(2.1)

where \(\mu _{\mathcal {Q}}\) is a measure over \(\mathcal {Q}\) for each \(\mathcal {Q}\subseteq \mathcal {P}\). The average ES in (2.1) is standard and satisfies (A2) and (A3); it does not satisfy (A1) in general. We remark that although sharing many common forms and examples, our framework is fundamentally different from the existing ones in the literature, as it is crucial for a generalised risk measure to use \(\mathcal {Q}\) as an input variable instead of a pre-specified collection.
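
The next sketch (again with hypothetical scenarios and equal weights \(\mu _{\mathcal {Q}}\)) contrasts the average ES in (2.1) with the worst-case ES: enlarging \(\mathcal {Q}\) by a benign scenario lowers the average but cannot lower the supremum, which is exactly the failure of (A1) discussed in Sect. 2.2.

```python
import numpy as np

def empirical_es(sample, alpha):
    x = np.sort(sample)
    return float(x[int(np.floor(alpha * len(x))):].mean())

rng = np.random.default_rng(3)
alpha, n = 0.975, 200_000

# Hypothetical scenarios: ES of X under each P in Q, aggregated with equal weights mu_Q.
es_Q = [empirical_es(rng.normal(0.0, 1.0, n), alpha),          # baseline scenario
        empirical_es(rng.normal(0.0, 2.0, n), alpha)]          # high-volatility scenario
es_R = es_Q + [empirical_es(rng.normal(-1.0, 0.5, n), alpha)]  # enlarge Q by a benign scenario

print(np.mean(es_Q), np.mean(es_R))  # the average ES drops after enlarging Q: (A1) fails
print(max(es_Q), max(es_R))          # the worst-case ES cannot drop: (A1) holds
```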

3 Worst-case generalised risk measures

In this section, we present our first theoretical result, a characterisation of generalised risk measures satisfying (A1) as the supremum of risk measures in the traditional sense. This allows us to apply many results on traditional risk measures to generalised risk measures.

Theorem 3.1

Fix a generalised risk measure \(\Psi : \mathcal {X}\times 2^{\mathcal {P}}\rightarrow \mathbb{R}\).

(i) Suppose that \(\Psi \) is standard. Then \(\Psi \) satisfies (A1) and (A2) if and only if it admits a representation as

$$ \Psi (X\vert \mathcal {Q}) = \sup _{P\in \mathcal {Q}} \Psi (X\vert P), \qquad (X, \mathcal {Q})\in \mathcal {X}\times 2^{\mathcal {P}}. $$
(3.1)

(ii) The mapping \(\Psi \) satisfies (A1) and (A3) if and only if it admits a representation (3.1).

Using Theorem 3.1, we can pin down the forms of possible generalised risk measures by specifying properties on the simpler object \(\Psi (X\vert P)\) for \(X\in \mathcal {X}\) and \(P\in \mathcal {P}\). Theorem 3.1 is a general functional form of the specific preferential characterisation treated in Cerreia-Vioglio et al. [6, Theorem 2].

Definition 3.2

For a given generalised risk measure \(\Psi \), the single-scenario risk measure \((X,P)\mapsto \Psi (X\vert P)\), \(X\in \mathcal {X}\), \(P\in \mathcal {P}\), is called the core of \(\Psi \).

By Theorem 3.1, the cores correspond via (3.1) one-to-one to standard generalised risk measures that satisfy (A1) and (A2). Note that in general, the core of \(\Psi \) does not determine \(\Psi \) on \(\mathcal {X}\times 2^{\mathcal {P}}\) if the conditions in Theorem 3.1 are not satisfied.

In case (3.1) holds, we say that the core \(\Psi \) on \(\mathcal {X}\times \mathcal {P}\) induces the generalised risk measure \(\Psi \) on \(\mathcal {X}\times 2^{\mathcal {P}}\). Many results in this paper are stated for cores instead of the generalised risk measure. Nevertheless, when we speak of cores, we do not need to assume the worst-case form (3.1) or any of (A1)–(A3).

Some simple examples for worst-case generalised risk measures are collected below, and they appear in forms similar to those in the classic theory of risk measures.

Example 3.3

(i) The expectation core

$$ \Psi (X\vert P)= \mathbb{E}^{P} [X], \qquad (X,P)\in \mathcal {X}\times \mathcal {P}, $$

induces the generalised risk measure

$$ \Psi (X\vert \mathcal {Q})= \sup _{P\in \mathcal {Q}} \mathbb{E}^{P} [X] , \qquad (X, \mathcal {Q})\in \mathcal {X}\times 2^{\mathcal {P}}. $$

For a fixed \(\mathcal {Q}\), \(\Psi (\, \cdot \, \vert \mathcal {Q})\) is the robust representation of a traditional coherent risk measure of Artzner et al. [2]. This class of risk measures is the most studied in the literature, and we pay special attention to it in Sect. 5.

(ii) Let \(\gamma :\mathcal {P}\to \mathbb{R}\) be a non-constant function on \(\mathcal {P}\). The penalised-mean core

$$ \Psi (X\vert P)= \mathbb{E}^{P} [X] - \gamma (P), \qquad (X,P)\in \mathcal {X}\times \mathcal {P}, $$

induces the generalised risk measure

$$ \Psi (X\vert \mathcal {Q})= \sup _{P\in \mathcal {Q}} \big( \mathbb{E}^{P} [X] - \gamma (P)\big), \qquad (X, \mathcal {Q})\in \mathcal {X}\times 2^{\mathcal {P}}. $$

For a fixed \(\mathcal {Q}\), \(\Psi (\, \cdot \, \vert \mathcal {Q})\) is the robust representation of a traditional convex risk measure of Föllmer and Schied [17, Theorem 4.16].

(iii) For \(\alpha \in (0,1)\), the VaR core \((X,P)\mapsto \mathrm {VaR}_{\alpha }(X\vert P) \) induces the worst-case VaR in Sect. 2.3.

(iv) For \(\alpha \in (0,1)\), the ES core \((X,P)\mapsto \mathrm {ES}_{\alpha }(X\vert P) \) induces the worst-case ES in Sect. 2.3.

4 Three formulations of law-invariance

For a given \(P\), the functional \(X\mapsto \Psi (X\vert P)\) is a traditional risk measure and properties can be imposed on it. A more interesting and non-trivial question is the interplay between \(X\) and \(P\) for the core \(\Psi \), which we address below. Since \(P\in \mathcal {P}\) is interpreted as a scenario for us to generate a statistical model for the loss \(X\), the evaluation of the risk should depend on the distribution of \(X\). Motivated by this, we consider three forms of law-invariance for the generalised risk measure \(\Psi \) or its core:

(B1) Strong law-invariance: \(\Psi (X\lvert P ) = \Psi (Y\lvert Q)\) for \(X,Y \in \mathcal {X}\) and \(P, Q \in \mathcal {P}\) with \(X\vert _{P} \stackrel{\,\mathrm{d}}{=}Y \vert _{Q}\).

(B2) Loss law-invariance: \(\Psi (X\lvert P ) = \Psi (Y\lvert P)\) for \(X,Y \in \mathcal {X}\) and \(P \in \mathcal {P}\) with \(X\vert _{P} \stackrel{\,\mathrm{d}}{=}Y \vert _{P}\).

(B3) Scenario law-invariance: \(\Psi (X\lvert P ) = \Psi (X\lvert Q)\) for \(X \in \mathcal {X}\) and \(P,Q \in \mathcal {P}\) with \(X\vert _{P} \stackrel{\,\mathrm{d}}{=}X \vert _{Q}\).

Remark 4.1

In this paper, all properties (Ax) reflect how \(\Psi \) reacts to \(\mathcal {Q}\), all properties (Bx) reflect how \(\Psi \) reacts to the distributions of the risk, all properties (Cx) reflect consideration for \(\Psi \) in terms of a traditional risk measure, all properties (Dx) are relevant to a mapping defined on the set \(\mathcal {P}\) of measures, and all properties (Ex) reflect consideration on decision-theoretic preference.

Clearly, (B1) is stronger than both (B2) and (B3). Each of (B1)–(B3) reflects the consideration that the probability measure \(P\) in \(\Psi (X\vert P)\) is used to model the distribution of the loss \(X\). More precisely, (B1) is an agreement of risk assessment for the same distribution across different scenarios and different losses, whereas (B2) only yields the agreement for each particular scenario, and (B3) only yields the agreement for each particular loss. The following example shows that (B1)–(B3) are genuinely different concepts.

Example 4.2

(i) The cores in Examples 3.3 (i), (iii) and (iv) are strongly law-invariant.

(ii) The core in Example 3.3 (ii) is loss law-invariant, but in general not scenario law-invariant.

(iii) Let \(\beta :\mathcal {X}\to \mathbb{R}\) be a non-constant function on \(\mathcal {X}\). The core

$$ \Psi (X \vert P)=\mathbb{E}^{P} [X] - \beta (X), \qquad (X,P)\in \mathcal {X}\times \mathcal {P}, $$

is scenario law-invariant, but in general not loss law-invariant.

Since (B1) implies both (B2) and (B3), one may wonder whether (B2) and (B3) jointly imply (B1), which turns out to be a tricky question. In other words, we aim to show from (B2) and (B3) that \(\Psi (X\vert P ) = \Psi (Y\vert Q)\) holds for \(P,Q\in \mathcal {P}\) and \(X,Y\in \mathcal {X}\) satisfying \(X\vert _{P} \stackrel{\,\mathrm{d}}{=}Y \vert _{Q}\). Denote by \(F\) the distribution of \(X\) under \(P\), which is the same as that of \(Y\) under \(Q\). If there exists \(Z\in \mathcal {X}\) which has the distribution \(F\) under both \(P\) and \(Q\), then we have the desired chain of equalities \(\Psi (X\vert P) = \Psi (Z\vert P) = \Psi (Z\vert Q) = \Psi (Y \vert Q) \). Unfortunately, the existence of such a \(Z\) depends on the specification of \(P\), \(Q\) and cannot be expected in general; this problem is non-trivial and has been studied in detail by Shen et al. [27]. In the result below, we show that under the extra assumption that the measurable space \((\Omega ,\mathcal {F})\) is standard Borel (i.e., isomorphic to the Borel space on \([0,1]\)), it is possible to find an intermediate measure \(R\) and two random variables \(Z,W\in \mathcal {X}\) such that the chain of equalities

$$ X\vert _{P} \stackrel{\,\mathrm{d}}{=}Z\vert _{P} \stackrel{\,\mathrm{d}}{=}Z\vert _{R} \stackrel{\,\mathrm{d}}{=}W\vert _{R} \stackrel{\,\mathrm{d}}{=}W \vert _{Q} \stackrel{\,\mathrm{d}}{=}Y \vert _{Q} $$

holds, and this gives the desired statement \(\Psi (X\vert P) =\Psi (Y\vert Q)\) needed for (B1).

Theorem 4.3

For a core \(\Psi \), (B1) implies both (B2) and (B3). If \((\Omega ,\mathcal {F})\) is standard Borel, then (B2) and (B3) together are equivalent to (B1).

Remark 4.4

If \((\Omega ,\mathcal {F})\) is not standard Borel, it remains unclear whether the equivalence (B2+B3) ⇔ (B1) holds. For applications in finance and risk management, it is typically sufficient to use a standard Borel space because one can construct countably many independent Brownian motions on the corresponding probability space. The assumption of a standard Borel space is used in some classic literature on risk measures, e.g. Delbaen [9] and Jouini et al. [21].

Loss law-invariance (B2) seems to be always desirable to assume in practice, because if two random losses \(X\) and \(Y\) share the same distribution under a chosen scenario \(P\) of interest, then it is natural to assign the same risk value to these two losses. For a fixed collection \(\mathcal {Q}\in 2^{\mathcal {P}}\), this property defines the \(\mathcal {Q}\)-based risk measure of Wang and Ziegel [31]. On the other hand, it may not always be desirable to assume (B3); although two scenarios may give the same distribution of a loss \(X\), the riskiness may not be understood as the same, as illustrated by the following example.

Example 4.5

Let \(P\) represent a good and \(Q\) an adverse economic scenario (e.g. COVID-19). Assume that the distribution of \(X\) is the same under \(P\) and \(Q\), which means that \(X\) is independent of the particular economic factor which generates \(P\) and \(Q\). The values \(\Psi (X\vert P)\) and \(\Psi (X\vert Q)\) quantify the riskiness of \(X\) when \(P\) or \(Q\), respectively, is the chosen scenario. Since \(P\) describes a better economy, the risk manager may think that \(X\) is more acceptable in this situation, leading to \(\Psi (X\vert P)<\Psi (X\vert Q)\). For instance, the core in Example 3.3 (ii), the robust representation of convex risk measures, reflects this consideration, and it is not scenario law-invariant.

Next, we collect some representation results based on (A1), (A3) and (B1)–(B3). Recall that ℳ is the set of compactly supported distributions on ℝ. Throughout, we define

$$ \Sigma = \{ \psi :\mathcal {M}\to [-\infty ,+\infty ]\}. $$

Each mapping \(\psi \in \Sigma \) represents a traditional law-invariant risk measure treated as a functional on ℳ instead of on \(\mathcal {X}\).

Proposition 4.6

Let \(\Psi \) be a generalised risk measure.

(i) The mapping \(\Psi \) satisfies (A1), (A3) and (B1) if and only if there exists \(\psi \in \Sigma \) such that

$$ \Psi (X\vert \mathcal {Q}) = \sup _{P\in \mathcal {Q}} \psi (F_{X|P}), \qquad (X, \mathcal {Q})\in \mathcal {X}\times 2^{\mathcal {P}}. $$

(ii) The mapping \(\Psi \) satisfies (A1), (A3) and (B2) if and only if there exists \(\{\psi _{P}: P\in \mathcal {P} \} \subseteq \Sigma \) such that

$$ \Psi (X\vert \mathcal {Q}) = \sup _{P\in \mathcal {Q}} \psi _{P} (F_{X|P}), \qquad (X, \mathcal {Q})\in \mathcal {X}\times 2^{\mathcal {P}}. $$

(iii) The mapping \(\Psi \) satisfies (A1), (A3) and (B3) if and only if there exists \(\{\psi _{X}: X\in \mathcal {X} \} \subseteq \Sigma \) such that

$$ \Psi (X\vert \mathcal {Q}) = \sup _{P\in \mathcal {Q}}\psi _{X} (F_{X|P}), \qquad (X, \mathcal {Q})\in \mathcal {X}\times 2^{\mathcal {P}}. $$

5 Coherent generalised risk measures

In this section, we pay special attention to the most important class of traditional risk measures, namely coherent risk measures of Artzner et al. [2]. We first provide a characterisation for a generalised risk measure to have the form of coherent risk measures in Example 3.3, and then discuss a few additional properties specific to our setting.

5.1 A characterisation for coherent risk measures

We give a simple characterisation of the coherent risk measures in Example 3.3 (i). Coherent risk measures including ES (for a fixed scenario) are the most studied class of risk measures in the finance and engineering literature. We first list some properties of traditional risk measures of Artzner et al. [2] and Föllmer and Schied [17, Chap. 4]. These properties are formulated for the traditional risk measure \(X\mapsto \Psi (X\vert \mathcal {Q})\) on \(\mathcal {X}\) for each fixed \(\mathcal {Q}\subseteq \mathcal {P}\), and we denote this by \(\Psi _{\mathcal {Q}}\).

(C1) Monotonicity: \(\Psi _{\mathcal {Q}} (X) \le \Psi _{\mathcal {Q}} (Y) \) for all \(X,Y \in \mathcal {X}\) with \(X \le Y\).

(C2) Cash-additivity: \(\Psi _{\mathcal {Q}} (X+m) = \Psi _{\mathcal {Q}} (X)+m\) for all \(X \in \mathcal {X}\) and \(m \in \mathbb{R}\).

(C3) Positive homogeneity: \(\Psi _{\mathcal {Q}} (\lambda X) = \lambda \Psi _{\mathcal {Q}} (X)\) for all \(\lambda > 0\) and \(X \in \mathcal {X}\).

(C4) Subadditivity: \(\Psi _{\mathcal {Q}} (X + Y ) \leq \Psi _{\mathcal {Q}}(X) + \Psi _{ \mathcal {Q}} (Y) \) for all \(X,Y \in \mathcal {X}\).

Following the terminology for traditional risk measures, a generalised risk measure \(\Psi \) is monetary if it satisfies (C1) and (C2), and coherent if it satisfies (C1)–(C4). We further state a strong property imposed on the cores.

(C0) Additivity of the core: \(\Psi (X+Y\vert P) = \Psi (X\vert P) + \Psi (Y\vert P)\) for all \(X,Y \in \mathcal {X}\) and \(P\in \mathcal {P}\).

The property (C0) will be a key property to pin down the form of coherent traditional risk measures.

Theorem 5.1

A standard generalised risk measure \(\Psi \) satisfies (A1), (A2), (B2), (C1) and (C0) if and only if it is uniquely given by

$$\begin{aligned} \Psi (X\vert \mathcal {Q})= \sup _{P\in \mathcal {Q}} \mathbb{E}^{P} [X] , \qquad (X, \mathcal {Q})\in \mathcal {X}\times 2^{\mathcal {P}}. \end{aligned}$$
(5.1)

Moreover, \(\Psi \) in (5.1) satisfies (C2)–(C4).

The most important property used in Theorem 5.1 is the additivity of the core (C0), which may be seen as quite strong. As a primary example in financial practice, the Chicago Mercantile Exchange (CME) uses coherent risk measures of the form (5.1) to determine margin requirements for a portfolio of instruments; see McNeil et al. [25, Sect. 2.3]. In the CME approach, under each fixed scenario, the risk factors move in a particular deterministic way and hence the portfolio loss assessment is additive; thus (C0) is natural in this context.
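
A toy margin calculation in the CME spirit is sketched below; all positions, scenarios and per-unit losses are hypothetical. Under each scenario the risk factors move deterministically, so the portfolio loss is additive across positions as in (C0), and the margin is the worst loss over the scenario set as in (5.1).

```python
# Hypothetical positions: quantity held in each instrument (negative = short).
positions = {"futures": 10, "call_option": -5}

# Hypothetical scenarios: each scenario prescribes a deterministic loss per unit of
# each instrument, so the portfolio loss under a scenario is additive, as in (C0).
scenarios = {
    "price_up_3pct":   {"futures": -3.0, "call_option": -2.5},
    "price_down_3pct": {"futures":  3.0, "call_option":  1.0},
    "vol_spike":       {"futures":  0.0, "call_option": -4.0},
}

def portfolio_loss(scenario):
    return sum(qty * scenario[name] for name, qty in positions.items())

# Margin requirement = worst-case loss over the scenario set, as in (5.1).
margin = max(portfolio_loss(s) for s in scenarios.values())
print({name: portfolio_loss(s) for name, s in scenarios.items()}, margin)
```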

5.2 Ambiguity sensitivity and comonotonically additive risk measures

As we have seen from Example 4.2, strong law-invariance (B1) is genuinely stronger than the weaker notions of (B2) and (B3). In the following result, we connect weak and strong law invariance via an additional property which is related to the core of the generalised risk measure \(\Psi \).

(B4) Ambiguity sensitivity: For all \(X\in \mathcal {X}\), \(P,Q\in \mathcal {P}\) and \(\lambda \in [0,1]\), we have \(\Psi (X \lvert \lambda P + (1-\lambda )Q ) \ge \lambda \Psi (X \lvert P ) + (1- \lambda ) \Psi (X\lvert Q)\). Moreover,

$$ \Psi (\mathbf{1}_{A} \vert \lambda P + (1-\lambda )Q ) = \lambda \Psi (\mathbf{1}_{A} \vert P ) + (1-\lambda ) \Psi (\mathbf{1}_{A} \vert Q ) $$

for all \(A\in \mathcal {F}\) such that \(P[A]=Q[A]\).

The first statement of (B4) intuitively means that due to ambiguity on the distribution of \(X\), the risk of \(X\) under a mixture is larger than the mixture of its risks under \(P\) and \(Q\); this is the concavity in \(P\) in Proposition 2.3. For instance, a random variable \(X\) which is constant under both \(P\) and \(Q\) may be random (Bernoulli) under \(\lambda P + (1-\lambda )Q \), and hence its risk should be larger under the mixture than under the individual scenarios. Regarding the second statement of (B4), if the probability measures \(P\) and \(Q\) agree on how likely an event \(A\) is, then there is no ambiguity on \(A\) and its risk under a mixture should be simply the mixture of its risks under \(P\) and \(Q\). Another explanation is provided in the following example.

Example 5.2

Assume that \(P\) is used by one risk analyst and \(Q\) by another. The manager would like to use \(\lambda P + (1-\lambda )Q\), a mixture of \(P\) and \(Q\), to reflect the knowledge of both analysts. For simplicity, the random loss \(X\) is assumed to be the indicator of a loss event \(A\). If \(P\) and \(Q\) give different assessments of the probability of \(A\), the manager would be worried about the discrepancy in the models, and her final risk assessment \(\Psi (X\vert \lambda P + (1-\lambda )Q )\) is more than \(\lambda \Psi (X\vert P) + (1-\lambda )\Psi (X\vert Q)\), the weighted average of the two analysts’ assessments. On the other hand, if \(P\) and \(Q\) give the same probability to \(A\), there is no disagreement in predicting \(A\). In this case, her risk assessment of \(\mathbf{1}_{A}\) is the same as the weighted average of the two analysts’ assessments.

Another property essential to our next characterisation result is comonotonic additivity, which is intimately linked to Choquet integrals; see e.g. Wang et al. [30].

(C5) Comonotonic additivity: \(\Psi _{\mathcal {Q}}(X+Y) = \Psi _{\mathcal {Q}}(X) + \Psi _{\mathcal {Q}}(Y)\) for all \(X,Y \in \mathcal {X} \) which are comonotonic.

Recall that two random variables \(X\) and \(Y\) are called comonotonic if they satisfy \((X(\omega ) - X(\omega '))(Y( \omega ) - Y(\omega ')) \geq 0\) for all \((\omega , \omega ') \in \Omega \times \Omega \).

The following result characterises loss law-invariant risk measures with ambiguity sensitivity: they turn out to be strongly law-invariant risk measures without this assumption. The proof is quite technical and relies on Lyapunov’s convexity theorem as well as a few characterisation results on Choquet integrals in Wang et al. [30]. It is important to note that when we say the core satisfies some properties (C1)–(C5), this means that it satisfies these properties as a traditional risk measure.

Theorem 5.3

For a core \(\Psi \), the following are equivalent:

(i) The mapping \(\Psi \) is loss law-invariant, ambiguity sensitive, monetary and comonotonically additive, i.e., \(\Psi \) satisfies (B2), (B4), (C1), (C2) and (C5).

(ii) The mapping \(\Psi \) is strongly law-invariant, coherent and comonotonically additive, i.e., \(\Psi \) satisfies (B1) and (C1)–(C5).

(iii) There exists an increasing concave function \(h :[0,1]\to [0,1]\) such that \(h(0)=0=1-h(1)\) and

$$ \Psi (X \vert P) = \int X \,\mathrm{d}(h \circ P), \qquad (X,P)\in \mathcal {X}\times \mathcal {P}. $$
(5.2)

There has been an extensive debate in both academia and industry on whether subadditivity (C4) proposed by Artzner et al. [2] is a good criterion for risk measures used in regulatory practice, as (C4) is the key property which distinguishes VaR and ES; see Embrechts et al. [14], Embrechts et al. [16] and the references therein. By Theorem 5.3, from the perspective of multiple models, we can obtain (C4) by using ambiguity sensitivity (B4). Hence our framework and results offer a novel decision-theoretic reason to support coherent risk measures (in particular, ES over VaR) without directly assuming subadditivity (C4).
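
As a numerical check of (5.2) (purely illustrative; the sampling model and seed are arbitrary), the Choquet integral \(\int X \,\mathrm{d}(h \circ P)\) can be evaluated on an empirical sample by weighting the decreasingly ordered losses with the increments of \(h\). For the concave distortion \(h(t)=\min \{t/(1-\alpha ),1\}\), this reproduces \(\mathrm {ES}_{\alpha}\), in line with Example 3.3 (iv).

```python
import numpy as np

def distortion_risk(sample, h):
    """Choquet integral of X w.r.t. h composed with the empirical measure of the sample:
    order the losses decreasingly and weight them by the increments of h."""
    x = np.sort(sample)[::-1]  # x_(1) >= x_(2) >= ...
    n = len(x)
    w = h(np.arange(1, n + 1) / n) - h(np.arange(0, n) / n)
    return float(np.dot(w, x))

def empirical_es(sample, alpha):
    x = np.sort(sample)
    return float(x[int(np.floor(alpha * len(x))):].mean())

# Hypothetical sample of X under a scenario P; alpha = 0.975 as in BCBS [3].
rng = np.random.default_rng(4)
loss = rng.standard_t(5, size=200_000)
alpha = 0.975
h = lambda t: np.minimum(t / (1 - alpha), 1.0)  # increasing, concave, h(0) = 0, h(1) = 1

print(distortion_risk(loss, h))   # the Choquet integral in (5.2)
print(empirical_es(loss, alpha))  # matches the empirical ES_0.975 of the same sample
```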

6 Connection to decision theory

In this section, we discuss the connection of our generalised risk measures to classic notions in decision theory, as model uncertainty has been dealt with extensively in the decision-theoretic literature and traditional risk measures are intimately linked to decision preferences in various forms; see e.g. Drapeau and Kupper [11]. We first present a list of decision-theoretic criteria as examples for our framework, followed by characterisation results of two classic notions: the multi-prior expected utility of Gilboa and Schmeidler [19] and the variational preferences of Maccheroni et al. [24].

6.1 Examples of generalised risk measures in decision theory

Our framework includes many criteria in decision theory as typical examples. Although the considerations behind these criteria differ from those of our paper, the following examples show the generality of our framework.

Example 6.1

(i) The multi-prior expected utility of Gilboa and Schmeidler [19] has a numerical representation

$$ \Psi (X\vert \mathcal {Q}) = u^{-1}\Big(\min _{P \in \mathcal {Q}} \mathbb{E}^{P}[u(X)] \Big),$$

where \(u\) is a strictly increasing utility function.

(ii) The variational preference of Maccheroni et al. [24] has a numerical representation

$$ \Psi (X\vert \mathcal {Q}) = \min _{P \in \mathcal {Q}} \big(\mathbb{E}^{P}[u(X)] - \gamma (P)\big),$$

where \(u\) is a strictly increasing utility function and \(\gamma : \mathcal {P}\to [-\infty ,+\infty )\) is a penalty function. The multiplier preferences of Hansen and Sargent [20] correspond to a special choice of \(\gamma \) which is the Kullback–Leibler divergence from a reference scenario.

(iii) Let \(\mathcal {Q}\subseteq \mathcal {P}\) be pre-specified and \(\mu \) a probability measure on \(\mathcal {Q}\). The smooth ambiguity preference of Klibanoff et al. [22] has a numerical representation

$$ \Psi (X\vert \mathcal {Q}) = \phi ^{-1} \bigg(\int _{\mathcal {Q}} \phi \Big(u^{-1} \big(\mathbb{E}^{P}[u(X)] \big)\Big)\,\mathrm{d}\mu (P) \bigg),$$

where \(u\) is a strictly increasing utility function and \(\phi \) a strictly increasing function. Note that in this formulation, \(\mu \) needs to be specified together with \(\mathcal {Q}\) and hence should be considered as an input of \(\Psi \) in our framework; see Sect. 7 for more discussion on this.

(iv) The imprecise information preference of Gajdos et al. [18] has a numerical representation

$$ \Psi (X\vert \mathcal {Q}) = u^{-1}\Big(\min _{P \in \phi (\mathcal {Q})} \mathbb{E}^{P}[u(X)] \Big), $$

where \(u\) is a strictly increasing utility function and \(\phi \) a selecting function (assumed to exist) reflecting the decision maker’s attitude to imprecision.

(v) The model misspecification preference of Cerreia-Vioglio et al. [6] has a numerical representation

$$ \Psi (X\vert \mathcal {Q}) = \min _{P \in \mathcal {P}} \Big( \mathbb{E}^{P} [ u(X) ] + \min _{Q \in \mathcal {Q}} c(P, Q) \Big), $$

where \(u\) is a strictly increasing utility function and \(c\) a distance on the set of measures which penalises the model misspecification.
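
As a numerical illustration of items (i) and (ii) in Example 6.1 (a sketch only; the utility, penalty and scenario set are hypothetical choices, not prescribed by the cited papers), both criteria reduce to a minimum of per-scenario evaluations when \(\mathcal {Q}\) is finite.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Hypothetical payoff X sampled under each scenario P in a finite set Q.
samples = {
    "P1": rng.normal(0.05, 0.10, n),
    "P2": rng.normal(0.02, 0.15, n),
    "P3": rng.normal(0.00, 0.05, n),
}

u = lambda x: 1.0 - np.exp(-2.0 * x)          # a strictly increasing (exponential) utility
u_inv = lambda y: -np.log(1.0 - y) / 2.0
gamma = {"P1": 0.00, "P2": 0.01, "P3": 0.02}  # a hypothetical penalty gamma(P)

eu = {P: float(np.mean(u(x))) for P, x in samples.items()}

# (i) Multi-prior expected utility of Gilboa and Schmeidler [19].
multi_prior = u_inv(min(eu.values()))

# (ii) Variational preference of Maccheroni et al. [24].
variational = min(eu[P] - gamma[P] for P in samples)

print(multi_prior, variational)
```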

6.2 Multi-prior expected utilities

Gilboa and Schmeidler [19] proposed the notion of multi-prior expected utility in decision theory. Motivated by that, we consider a preference on \(\mathcal {X}\times \mathcal {S}\) which is represented by a total pre-order ⪯, where \(\mathcal {S}\) is the collection of all finite subsets of \(\mathcal {P}\). For tractability, we consider \(\mathcal {S}\) instead of \(2^{\mathcal {P}}\) in this subsection. The decision maker compares one pair of a risk and a set of scenarios with another such pair. This setting was studied by Gajdos et al. [18]. We denote by ≃ the equivalence under ⪯. As above, we write \((X, P)\) if the set of scenarios has only one element \(P\). For decisions among \((X_{1}, \mathcal {Q}_{1}), (X_{2}, \mathcal {Q}_{2}) \in \mathcal {X}\times \mathcal {S}\), we propose the following axioms, similar to those we have seen so far in this paper, but defined for preferences instead of generalised risk measures.

(E1) Strong law-invariance: \((X, P) \simeq (Y, Q)\) for any \(P, Q \in \mathcal {P}\) and \(X, Y \in \mathcal {X}\) satisfying \(X\vert _{P} \stackrel{\,\mathrm{d}}{=}Y \vert _{Q}\).

(E2) Uncertainty aversion: \((X, \mathcal {Q}) \preceq (X, \mathcal {R}) \) for any \(X\in \mathcal {X}\) and \(\mathcal {R}, \mathcal {Q} \in \mathcal {S}\) with \(\mathcal {R}\subseteq \mathcal {Q}\).

(E3) Uncertainty bound: For any \(X \in \mathcal {X}\) and \(\mathcal {Q} \in \mathcal {S}\), there exists some \(P \in \mathcal {Q}\) such that \((X, P) \preceq (X, \mathcal {Q})\).

(E4) Independence: For any \(P,Q \in \mathcal {P}\), any \(X, Y \in \mathcal {X}\) satisfying \(X\vert _{Q} \stackrel{\,\mathrm{d}}{=}Y \vert _{Q}\) and any \(\alpha \in (0,1)\), we have

$$ (X, P) \preceq (Y, P) \quad \Longleftrightarrow \quad \big(X, \alpha P+(1- \alpha )Q\big) \preceq \big(Y, \alpha P+(1-\alpha )Q\big). $$

(E5) Continuity: For any \(P,Q,R \in \mathcal {P}\) and \(X \in \mathcal {X}\), if \((X, P) \preceq (X, Q) \preceq (X, R)\), then there exists \(\alpha \in [0,1]\) such that \((X, \alpha P+(1-\alpha )R) \simeq (X, Q)\).

Proposition 6.2 illustrates a decision-theoretic characterisation for the multi-prior expected utility. The proof is based on Theorem 3.1 and the classic result of von Neumann and Morgenstern [28, Chap. 3].

Proposition 6.2

A preferenceon \(\mathcal {X}\times \mathcal {S}\) satisfies (E1)–(E5) if and only if it is a multi-prior expected utility, i.e., there exists a function \(u: \mathbb{R}\to \mathbb{R}\) such that

$$ (X_{1}, \mathcal {Q}_{1}) \preceq (X_{2}, \mathcal {Q}_{2}) \quad \Longleftrightarrow \quad \min _{P\in \mathcal {Q}_{1}} \mathbb{E}^{P}[ u (X_{1})] \leq \min _{P \in \mathcal {Q}_{2}} \mathbb{E}^{P}[ u (X_{2})]. $$
(6.1)

The strong law-invariance (E1) which allows us to translate ⪯ to a preference on the set of distributions on the real line is crucial for this representation result. The properties (E2) and (E3) are reasonable for uncertainty-averse decision makers, and they correspond to (A1) and (A3), respectively, in the framework of generalised risk measures. The properties (E4) and (E5) correspond to the independence and continuity axioms of von Neumann and Morgenstern [28, Chap. 3], respectively.

6.3 Robust generalised risk measures

In addition to the worst-case generalised risk measure characterised in Theorem 3.1, another popular form of risk measures involving multiple probability measures arises from the robust representation of convex risk measures as in Example 3.3 (ii). More precisely, a traditional convex risk measure \(\rho \) of Föllmer and Schied [17, Theorem 4.16] takes the form, for some \(\mathcal {Q} \subseteq \mathcal {P}\),

$$ \rho (X) = \sup _{P\in \mathcal {Q}} \big( \mathbb{E}^{P}[X] - \gamma (P)\big), \qquad X \in \mathcal {X}, $$
(6.2)

where \(\gamma :\mathcal {P}\to (-\infty ,+\infty ]\) is a penalty function. Moreover, the variational preference of Maccheroni et al. [24] takes a similar form to (6.2) with the mean \(\mathbb{E}^{P}\) replaced by an expected utility; see Example 6.1 (ii). Note that in the setting of numerical representation of preferences, a negative sign needs to be applied to a generalised risk measure to transform it to a preference functional.

Inspired by (6.2) and the variational preferences of Maccheroni et al. [24], we consider generalised risk measures with the form, for some \(\psi \in \Sigma \),

$$ \Psi (X\vert \mathcal {Q}) = \sup _{P\in \mathcal {Q}} \big( \psi (F_{X|P}) - \gamma (P)\big), \qquad (X, \mathcal {Q})\in \mathcal {X}\times 2^{\mathcal {P}}. $$
(6.3)

Clearly, if \(\psi \) is the mean functional, then (6.3) yields the traditional (convex) risk measure (6.2) for a given \(\mathcal {Q}\). The generalised risk measure in (6.3) is loss law-invariant (B2), but neither scenario law-invariant (B3) nor strongly law-invariant (B1). In order to characterise (6.3), we further impose the following technical property which says that the difference between the values of the core evaluated on \(P\) and \(Q\) for identically distributed losses only depends on \(P\) and \(Q\), but not on the random loss.

(B5) If \(X\vert _{P} \stackrel{\,\mathrm{d}}{=}Y \vert _{Q}\) and \(Z\vert _{P} \stackrel{\,\mathrm{d}}{=}W \vert _{Q}\), then

$$ \Psi (X\lvert P )- \Psi (Y\lvert Q) = \Psi (Z\lvert P )- \Psi (W \lvert Q). $$

Proposition 6.3

Let \(\Psi \) be a generalised risk measure. Then \(\Psi \) satisfies (A1), (A3), (B2) and (B5) if and only if there exist a penalty function \(\gamma : \mathcal {P}\rightarrow \mathbb{R}\) and some \(\psi \in \Sigma \) such that the representation (6.3) holds.

Property (B5) can be roughly interpreted as saying that the magnitude of penalisation for a given scenario \(P\) is independent of the risky position \(X\) being evaluated. This may be seen as a bit artificial. Our characterisation in Proposition 6.3 is mainly motivated by the great popularity of the robust representation of convex risk measures and variational preferences, and we omit a detailed discussion of the economic desirability or undesirability of (B5).

7 Concluding remarks

The new framework of generalised risk measures introduced in this paper allows a unified formulation of measures of risk and uncertainty. Our results are only a first attempt to understand the new setting, and many further questions arise, both economic and mathematical, especially regarding the interplay between the risk variable \(X\) and the uncertainty collection \(\mathcal {Q}\) for a generalised risk measure, as the new functionals are by definition more sophisticated than traditionally studied objects.

Worst-case generalised risk measures are characterised with a few axioms in Theorem 3.1. Another popular way of handling model uncertainty is to use a weighted average of risk evaluations. In the case of a finite collection \(\mathcal {Q}\), we can always use the arithmetic average as risk evaluation, that is, generate \(\Psi \) via its core by

$$ \Psi (X\vert \mathcal {Q}) = \frac {1}{|\mathcal {Q}|}\sum _{Q\in \mathcal {Q}} \Psi (X\vert Q) . $$

Certainly, such a formulation does not satisfy (A1) but satisfies (A2) and (A3). In general, to allow different weights and infinite collections, one needs to associate each collection \(\mathcal {Q}\) with a measure as in (2.1) or in the smooth ambiguity preference of Klibanoff et al. [22] in Example 6.1 (iii). Such a measure can either be pre-specified or treated as an input of \(\Psi \), thus slightly extending our framework.

We have studied several popular properties such as law-invariance, coherence and comonotonic additivity, but many more properties in the new framework remain to be explored as the literature on traditional risk measures is very rich. In particular, the desirability of theoretical properties in risk management practice requires thorough study, as they may have different interpretations from their traditional counterparts. For instance, additivity of the core may be sensible in our framework (Theorem 5.1) and nicely connects to the scenario-based margin calculation used by CME. However, such a property is not desirable for traditional risk measures as it essentially forces the risk measure to collapse to the mean; see e.g. Liebrich and Munari [23] and Chen et al. [7].

Finally, we mention that in some formulations of generalised risk measures, not all choices of the input scenario \(\mathcal {Q}\) are economically meaningful. In particular, for a given penalty function \(\gamma \) on \(\mathcal {P}\), the core \(\Psi (X\vert P)= \mathbb{E}^{P} [X] - \gamma (P)\) in Example 3.3 (ii) or \(\Psi (X\vert P)= \mathbb{E}^{P} [u(X)] - \gamma (P)\) in Example 6.1 (ii) is not meant to be used directly with a single \(P\); the use of \(\gamma \) already implicitly implies that there is some level of model uncertainty, and it is supposed to be coupled with the worst-case operation. The value \(\Psi (X\vert P)\) for a standalone \(P\) is thus difficult to interpret and should not be used for decision making without properly specifying the uncertainty collection \(\mathcal {Q}\). On the other hand, such a situation does not happen for instance in the worst-case or average-type generalised risk measures based on traditional risk measures, such as the worst-case ES.