1 Introduction

For many decision making problems under uncertainty, it may be essential to consider multiple, possibly conflicting, stochastic performance criteria. Stochastic multicriteria decision making problems arise in a wide range of areas, including financial portfolio optimization, humanitarian relief network design, production scheduling, public debt management, and homeland security budget allocation (see, e.g., Balibek and Köksalan 2010; Hu et al. 2011; Köksalan and Şakar 2016; Noyan 2012; Noyan et al. 2016). In such problems, we can represent the stochastic outcomes of interest by a random vector, each dimension of which corresponds to a particular decision criterion. Comparing the potential decisions then requires specifying preference relations among random vectors. It is also crucial to compare the random outcomes based on the decision makers’ risk preferences. These concerns call for optimization models that incorporate multivariate risk-averse preference relations into constraints and/or objectives. The class of models that incorporates multivariate risk preferences into the constraints using benchmarking relations has received some attention in the recent literature. Alternatively, in this study, we introduce a new class of models whose objective is to optimize a multivariate risk measure.

First, we review the existing literature on risk-averse multicriteria optimization models that feature benchmarking preference relations. In this line of research initiated by Dentcheva and Ruszczyński (2009), two types of benchmarking relations are modeled as constraints: multivariate risk-averse relations based on second-order stochastic dominance (SSD) and conditional value-at-risk (CVaR). These models assume that a benchmark random outcome vector is available and extend univariate (scalar-based) preference rules to the multivariate (vector-based) case by using linear scalarization functions. The linear scalarization corresponds to the weighted-sum approach, which is widely used in multicriteria decision making (Steuer 1986; Ehrgott 2005); the scalarization coefficients are interpreted as the weights representing the relative (subjective) importance of each decision criterion.

In many decision making situations, especially those involving multiple decision makers, it can be difficult to determine a single weight vector. There are many alternative methods to elicit the relative weight of each criterion, including multiattribute weighting, swing weighting, and the analytic hierarchy process (for a review, see von Winterfeldt and Edwards 1986; Saaty 2000). However, even a single expert’s relative weights can differ substantially depending on the elicitation approach used, as shown in Schoemaker and Waid (1982) and Borcherding et al. (1991). The problem of choosing a single weight vector is further exacerbated if multiple experts are involved. To address these ambiguity and inconsistency issues, a so-called robust approach considers a collection of weight vectors within a prescribed scalarization set instead of a single weight vector. Various scalarization sets are considered in the literature, such as the set of all non-negative coefficients, arbitrary polyhedral sets, and arbitrary convex sets (see, e.g., Dentcheva and Ruszczyński 2009; Homem-de-Mello and Mehrotra 2009; Hu et al. 2012, respectively).

While the majority of existing studies focus on enforcing multivariate SSD relations (see, e.g., Dentcheva and Ruszczyński 2009; Homem-de-Mello and Mehrotra 2009; Hu et al. 2012; Dentcheva and Wolfhagen 2013), this modeling approach can be overly conservative in practice, leading to highly demanding constraints that sometimes cannot be satisfied. For example, due to this infeasibility issue, Hu et al. (2011) solve such an optimization problem with relaxed SSD constraints. As an alternative, Noyan and Rudolf (2013) propose to use a multivariate preference relation based on CVaR; their approach is motivated by the fact that the univariate SSD relation is equivalent to a continuum of \({\text {CVaR}}\) inequalities (Dentcheva and Ruszczyński 2006). The authors consider polyhedral scalarization sets and show that their CVaR-based methodology can be extended to optimization problems featuring benchmarking constraints based on a wider class of coherent risk measures. In our study, we follow the line of research of Noyan and Rudolf (2013), which provides sufficient flexibility to obtain feasible problem formulations and to capture a wide range of risk preferences, including risk-neutral and worst-case approaches.

Optimization models under both types of multivariate preference relations (SSD and CVaR) are challenging, since they require introducing infinitely many univariate risk constraints, one for each weight vector in the scalarization set. For polyhedral scalarization sets, Homem-de-Mello and Mehrotra (2009) and Noyan and Rudolf (2013) show that enforcing the corresponding univariate risk constraint for a finite (but exponentially large) subset of weight vectors is sufficient to model the multivariate SSD and CVaR relations, respectively. These finite representation results allow them to develop finitely convergent delayed cut generation algorithms, where each cut is obtained by solving a mixed-integer programming (MIP) problem. Since solving these MIP formulations is the main computational bottleneck, Küçükyavuz and Noyan (2016) develop computationally effective solution methods for the cut generation problems arising in both types of optimization models.

As outlined earlier, the existing literature on risk-averse multicriteria optimization problems mainly focuses on multivariate risk-constrained models, where a benchmark random vector is available and the goal is to find a solution with a multivariate outcome vector that is preferable to the benchmark (with respect to the multivariate SSD or CVaR relation). In this paper, we propose a novel model which does not require a given benchmark and aims to optimize the risk associated with the decision-based random vector of outcomes. In this sense, the problem we consider can be seen as a risk-averse stochastic multiobjective optimization problem. There are, in general, two types of approaches to solve stochastic multiobjective problems: (1) eliminate the stochastic nature of the problem by replacing each random objective function with one of its summary statistics; (2) eliminate the multiobjective structure of the problem by aggregating the multiple objectives into a single random objective function. For recent surveys on these two types of approaches we refer to Gutjahr and Pichler (2016) and Ben Abdelaziz (2012). The first (non-aggregation based) approach results in a traditional deterministic multiobjective problem and requires the identification of multiple (typically exponentially many) non-dominated solutions on the efficient frontier. Ultimately, however, the decision makers need to specify the weights for each criterion to choose among the non-dominated solutions. In the second (aggregation-based) approach, one can consider a weighted sum of the multiple objectives and solve the resulting stochastic problem to obtain a solution. However, the weights to be used in either approach can be ambiguous and inconsistent due to the presence of conflicting criteria and the lack of consensus among multiple experts. Alternatively, in the second approach of aggregating multiple objectives into one, one can use an aggregated (but non-scalarized) single objective via stochastic goal programming. This approach considers random and/or deterministic goals (benchmark values) for the different objectives and constructs a single objective based on a function of the deviations from the goals. However, a benchmark goal may not be immediately available in all practical applications. For problems where the relative importance of the criteria is ambiguous and a benchmark performance vector is not available, we propose to focus on the worst-case CVaR with respect to the prescribed scalarization set and employ a recent notion of CVaR robustness in the context of stochastic multicriteria optimization.

In a related line of work, to address the ambiguity and inconsistency in the weights used to scalarize the multiple criteria in the objective function of a deterministic optimization problem, Ogryczak (2010) and Hu and Mehrotra (2012) consider minimax-type robustness with respect to a given weight set. Note that such existing robust weighted-sum models assume that either the problem parameters are deterministic or the decision makers are risk-neutral. For an overview of minimax robustness for multiobjective optimization problems we refer to Ehrgott et al. (2014). However, some multicriteria decision-making problems of recent interest, such as disaster preparedness (Hu and Mehrotra 2012) and homeland security (Hu et al. 2011), involve uncertain events with small probabilities but dire consequences. Therefore, it is crucial to incorporate risk aversion into multicriteria optimization models, which is the main focus of our study. Note that the risk-averse model we propose in this paper features the risk-neutral version as a special case.

In the recent literature, another type of CVaR robustness appears in the univariate case, stemming from distributional robustness. Zhu and Fukushima (2009) and Wozabal (2014) consider optimizing the worst-case CVaR and a wider class of convex risk measures (of a scalar-based random variable), respectively. However, this line of work assumes that there is ambiguity in the underlying probability distribution and expresses the worst case with respect to a specified set of distributions. In contrast, we assume that the underlying probability distribution is known but there is ambiguity in the scalarization vector (i.e., the relative importance of the multiple criteria) within a polyhedral set; this leads to a worst-case multivariate CVaR measure. For robust optimization in general, the interested reader may refer to Ben-Tal et al. (2009) and Bertsimas et al. (2011).

1.1 Our contributions

We incorporate risk aversion into multicriteria optimization models using the concept of multivariate CVaR. We propose a maximin-type model that optimizes the worst-case CVaR over a scalarization set. While the worst-case multivariate CVaR measure was recently introduced in the finance literature to assess the risk of given portfolio vectors (see, e.g., Rüschendorf 2013), to the best of our knowledge, there is no model or method to assist decision makers in choosing a portfolio vector that optimizes this risk measure. In this paper, we fill this gap and give an optimization model that maximizes the worst-case multivariate CVaR. To demonstrate the adequacy of the proposed model, we show that the risk measure of interest is coherent in an appropriate multivariate sense, and that an optimal solution of the model is not dominated in an appropriate stochastic sense. These two properties are highly desirable in risk-averse optimization and multicriteria optimization, respectively.

Unlike the risk-neutral version with a polyhedral weight set, in the risk-averse case, the inner optimization problem involves a concave minimization. Hence, the problem in general can no longer be solved as a compact linear program (as in Hu and Mehrotra 2012). Therefore, we propose a delayed cut generation-based solution algorithm and show that the cut generation problem can be modeled as a bilinear program that contains the multiplication of the scalarization variables and some binary variables used for representing CVaR. We demonstrate that the assumptions on the scalarization set allow us to employ the reformulation-linearization technique (RLT) (Sherali and Adams 1994; Sherali et al. 1998) to strengthen the resulting MIP formulations of the cut generation problem. We observe that the cut generation subproblems in the proposed algorithm have a structure similar to those encountered in solving the related multivariate CVaR-constrained optimization model. The cut generation problem for this related model has been identified as the major bottleneck in earlier work, and progress has been made in strengthening the corresponding formulations for the general probability case. Recognizing the importance of reducing the cut generation solution times, we utilize the RLT technique to obtain stronger and computationally more efficient cut generation formulations for optimization under multivariate CVaR constraints, especially for the equal probability case. Note that the equiprobable case is of particular importance for Monte Carlo sampling-based stochastic optimization models. While similar bilinear formulations and their weak linearizations exist in the literature for the CVaR-constrained optimization model, our work is the first to observe that these formulations are amenable to significant strengthening using the RLT reformulations and other polyhedral developments we provide. This observation in turn reduces the overall solution time considerably, as we show in our extensive computational study.

1.2 Outline

The rest of the paper is organized as follows. In Sect. 2, we introduce the new worst-case CVaR optimization model and provide some analytical results to highlight the appropriateness of the proposed modeling approach. This section also presents a cut generation algorithm and effective mathematical programming formulations of the original optimization problem and the corresponding cut generation problems for some special cases. We describe how to apply some of these algorithmic features to the multivariate CVaR-constrained models in Sect. 3. Section 4 gives a hybrid model that includes both the multivariate CVaR-based constraints and objective. We give a unified methodology that solves the hybrid model, integrating the algorithmic developments in Sects. 2 and 3. Section 5 is dedicated to the computational study, while Sect. 6 contains concluding remarks.

2 Worst-case CVaR optimization model

In our study, we consider a multicriteria decision making problem where d random performance measures of interest associated with the decision vector \(\mathbf {z}\) are represented by the random outcome vector \(\mathbf {G}(\mathbf {z})=(G_1(\mathbf {z}),\ldots ,G_d(\mathbf {z}))\). All random variables in this paper are assumed to be defined on some finite probability spaces; we simplify our exposition accordingly. Let \((\varOmega ,2^\varOmega ,\mathbbm {P})\) be a finite probability space with \(\varOmega =\{\omega _1,\ldots ,\omega _n\}\) and \(\mathbbm {P}(\omega _i)=p_i, i=1,\dots ,n\). In particular, denoting the set of feasible decisions by Z, the random outcomes are determined according to the outcome mapping \(\mathbf {G}{:}\,Z\times \varOmega \rightarrow \mathbbm {R}^d\), and the random outcome vector \(\mathbf {G}(\mathbf {z}){:}\,\varOmega \rightarrow \mathbbm {R}^d\) is defined by \(\mathbf {G}(\mathbf {z})(\omega )=\mathbf {G}(\mathbf {z},\omega )\) for all \(\omega \in \varOmega \). For a given elementary event \(\omega _i\) the mapping \(\mathbf {g}_i{:}\,Z\rightarrow \mathbbm {R}^d\) is defined by \(\mathbf {g}_i(\mathbf {z})=\mathbf {G}(\mathbf {z},\omega _i)\). Let \(C\subset \mathbbm {R}_+^d\) be a polyhedron of scalarization vectors, each component of which corresponds to the relative importance of the respective criterion. We naturally assume, without loss of generality, that C is a subset of the unit simplex, \(C_f \), i.e., \(C\subseteq C_f :=\{\mathbf {c}\in \mathbbm {R}_+^d\; | \; \sum _{i \in [d]} c_i = 1 \}\).

Before proceeding to give our definitions and models, we need to make a note of some conventions used throughout this paper, and recall a basic definition. The set of the first n positive integers is denoted by \([n]= \{1,\ldots ,n\}\), while the positive part of a number \(x\in \mathbbm {R}\) is denoted by \([x]_+ = \max (x, 0)\). We assume that larger values of random variables are preferred. We quantify the risk associated with a random variable via a risk measure (specifically, CVaR) where higher values correspond to less risky random outcomes. In this context, risk measures are often referred to as acceptability functionals. Our presentation follows along the lines of Pflug and Römisch (2007) and Noyan and Rudolf (2013). Recall that for a univariate random variable X with (not necessarily distinct) realizations \(x_1,\ldots ,x_n\) and corresponding probabilities \(p_1,\ldots ,p_n\), the conditional value-at-risk at confidence level \(\alpha \in (0,1]\) is given by (Rockafellar and Uryasev 2000)

$$\begin{aligned} {\text {CVaR}}_{\alpha }(X)&= \max \left\{ \eta -\frac{1}{\alpha }\mathbbm {E}\left( [\eta -X]_+\right) {:}\eta \in \mathbbm {R}\right\} \end{aligned}$$
(1)
$$\begin{aligned}&= \max \left\{ \eta -\frac{1}{\alpha }\sum _{i\in [n]}p_i w_i \,{:}\, w_i\ge \eta -x_i,\quad \forall ~i\in [n],\quad \mathbf {w}\in \mathbbm {R}_+^{n},~\eta \in \mathbbm {R}\right\} \end{aligned}$$
(2)
$$\begin{aligned}&=\max \limits _{k\in [n]}\left\{ x_k-\frac{1}{\alpha }\sum \limits _{i\in [n]}p_i\left[ x_k-x_i\right] _+\right\} , \end{aligned}$$
(3)

where the last equality follows from the well-known result that the maximum in definition (2) is attained at the \(\alpha \)-quantile, which is known as the value-at-risk (\({\text {VaR}}\)) at confidence level \(\alpha \) (denoted by \({\text {VaR}}_\alpha (X)\)), and that \({\text {VaR}}_{\alpha }(X)=x_k\) for at least one \(k\in [n]\). For risk-averse decision makers, typical choices for the confidence level are small values such as \(\alpha =0.05\). Note that \({\text {CVaR}}_{\alpha }(X)\), as defined in (1), is concave in X. For example, suppose that the random variable X represents the return of an investment and its realizations are equally likely. In this context, larger values are preferred; \({\text {VaR}}_{\alpha }(X)\) provides a lower bound on the return that is exceeded with the high probability \(1-\alpha \), while \({\text {CVaR}}_{\alpha }(X)\) measures the average severity of the return given that it is no larger than \({\text {VaR}}_{\alpha }(X)\). When \(\alpha =k/n\) for some \(k\in [n]\), \({\text {VaR}}_{\alpha }(X)\) is equal to the kth smallest realization and \({\text {CVaR}}_{\alpha }(X)\) is the average of the k smallest (least favorable) realizations.
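To make representation (3) concrete, the following sketch (in Python; the function name and the example numbers are ours, purely for illustration, not part of the paper) evaluates \({\text {CVaR}}_\alpha (X)\) directly from the realizations and probabilities.

```python
# Sketch: evaluate CVaR_alpha(X) for a discrete random variable via
# representation (3); illustrative code, not a prescribed implementation.

def cvar(realizations, probs, alpha):
    """CVaR at confidence level alpha in (0, 1], for realizations x_1, ..., x_n
    with probabilities p_1, ..., p_n (larger outcomes are preferred)."""
    return max(
        x_k - sum(p_i * max(x_k - x_i, 0.0)
                  for x_i, p_i in zip(realizations, probs)) / alpha
        for x_k in realizations
    )

# Equiprobable example with n = 4 and alpha = 2/4: CVaR is the average of the
# two smallest realizations, (1 + 3) / 2 = 2, while VaR is the 2nd smallest, 3.
assert cvar([1.0, 3.0, 5.0, 7.0], [0.25] * 4, 0.5) == 2.0
```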

The significance of modeling robustness against the ambiguity and inconsistency in relative weights motivates us to introduce a new robust optimization model for the stochastic multicriteria decision making problem of interest. To model the risk aversion of the decision makers, we use CVaR as the acceptability functional. In particular, we focus on the recently introduced worst-case multivariate CVaR (Rüschendorf 2013) with respect to the specified scalarization set C, which we review next.

Definition 2.1

(Worst-case multivariate polyhedral \({\text {CVaR}}\)) Let \(\mathbf {X}\) be a d-dimensional random vector and \(C\subseteq C_f\) a set of scalarization vectors. The worst-case multivariate polyhedral CVaR (WCVaR) at confidence level \(\alpha \in (0,1]\) with respect to C is defined as

$$\begin{aligned} {\text {WCVaR}}_{C,\alpha }( \mathbf {X})=\min \limits _{\mathbf {c}\in C}{\text {CVaR}}_{\alpha }\left( \mathbf {c}^\top \mathbf {X}\right) . \end{aligned}$$
(4)
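Since a concave functional minimized over the polytope C attains its minimum at an extreme point (a fact exploited later in Sect. 2.2.1), \({\text {WCVaR}}_{C,\alpha }\) can be evaluated by enumeration whenever the extreme points of C are at hand. A minimal sketch, reusing the cvar helper above (illustrative, not the paper's algorithm):

```python
# Sketch: evaluate WCVaR_{C,alpha}(X) of Definition 2.1, assuming the extreme
# points of the polyhedron C are available.

def wcvar(scenarios, probs, extreme_points, alpha):
    """scenarios: n outcome vectors x_i in R^d; extreme_points: extreme points
    of C. Returns min_{c in C} CVaR_alpha(c^T X) for the discrete case."""
    return min(
        cvar([sum(c_j * x_j for c_j, x_j in zip(c, x)) for x in scenarios],
             probs, alpha)
        for c in extreme_points
    )

# When C is the whole unit simplex C_f, its extreme points are the unit
# vectors, so WCVaR reduces to the worst component-wise CVaR.
```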

Following a risk-averse approach, we propose to optimize \({\text {WCVaR}}_{C,\alpha }\) for a given confidence level \(\alpha \in (0,1]\) and a scalarization set C, and introduce a new class of robust multicriteria optimization problems of the general form

$$\begin{aligned} \left( \mathbf {W}\text {-}\mathbf {CVaR}\right) :~&\max \limits _{\mathbf {z}\in Z}\; \; \min \limits _{\mathbf {c}\in C} \;\;{\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {G}(\mathbf {z})\right) . \end{aligned}$$
(5)

We note that the proposed risk-averse \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) problem features the risk-neutral version, proposed in Hu and Mehrotra (2012), as a special case when \(\alpha =1\). Another special case appears in the literature (Ehrgott 2005) for a sufficiently small value of \(\alpha \) (corresponding to the worst-case); it optimizes the worst value of a particular weighted sum over the set of scenarios. This robust version of the weighted sum scalarization problem is clearly a special case of \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) if we assume that all scenarios are equally likely, \(\alpha =1/n\), and there is a single scalarization vector in the scalarization set C.

It is important to note that the major difficulty of the proposed optimization problem \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \), and the related models in the literature, stems from the presence of the joint acceptability functional \({\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {G}(\mathbf {z}))\). One might wonder why an alternative model that maximizes the scalarization of component-wise acceptability functionals, i.e., \(\max _{\mathbf {z}\in Z} \min _{\mathbf {c}\in C} \sum _{i\in [d]} c_i {\text {CVaR}}_\alpha (G_i(\mathbf {z}))\) is not preferred. After all, this approach would lead to more tractable reformulations; for example, the alternative model can be formulated as a linear program when there is no integrality restriction on the decision vector \(\mathbf {z}\), Z is a polyhedral set, and the mapping \(\mathbf {g}_i(\mathbf {z})\) is linear in \(\mathbf {z}\) for all \(i\in [n]\). However, such a model completely ignores the correlation between the random variables \(G_i(\mathbf {z}),~i\in [d]\). The worst \(\alpha \) proportion of scenarios with respect to one criterion would most likely not coincide with the worst \(\alpha \) proportion of scenarios with respect to the other criteria, except for the very trivial case when \(G_i(\mathbf {z}),~i\in [d],\) are comonotone random variables. Therefore, using the aforementioned alternative modeling approach could only be justified to capture the multivariate risk in the trivial case when the worst-case scenarios of the multiple random outcomes coincide, which does not appear to be the typical situation in optimization with conflicting criteria. In all other cases, it would be a conservative approximation. In this paper, we are interested in exact models and methods that optimize a multivariate risk measure based on the joint behavior of the random outcomes of interest.
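A toy computation (with illustrative numbers, reusing the cvar helper above) makes this difference concrete:

```python
# Two equiprobable scenarios, alpha = 1/2, so CVaR_alpha is the smallest
# realization. Criterion 1 realizes (0, 10) and criterion 2 realizes (10, 0):
# the criteria are anti-comonotone and hedge each other perfectly.
p, c = [0.5, 0.5], (0.5, 0.5)

joint = cvar([0.5 * 0 + 0.5 * 10, 0.5 * 10 + 0.5 * 0], p, 0.5)       # = 5.0
split = c[0] * cvar([0, 10], p, 0.5) + c[1] * cvar([10, 0], p, 0.5)  # = 0.0

# The component-wise model pools the worst scenario of each criterion even
# though they never occur together, so it underestimates acceptability by 5.
assert joint == 5.0 and split == 0.0
```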

In the remainder of this section, we first provide some analytical results to highlight the appropriateness of the proposed model (Sect. 2.1). Then, in Sect. 2.2, we develop methods to solve this new class of problems.

2.1 Coherence and stochastic Pareto optimality

We first analyze the properties of \({\text {WCVaR}}_{C,\alpha }\) as a risk measure and then show that an optimal solution of \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) is Pareto optimal according to a certain stochastic dominance relation.

Desirable properties of risk measures have been axiomatized starting with the work of Artzner et al. (1999), in which the concept of coherent risk measures for scalar-valued random variables is introduced. There are several approaches to define the concept of coherency for the vector-valued random variables (see, e.g., Jouini et al. 2004; Burgert and Rüschendorf 2006; Rüschendorf 2013; Hamel et al. 2013). For example, Hamel et al. (2013) introduce set-valued conditional value-at-risk for multivariate random variables; using such set-valued functionals as risk measures is appropriate for financial market models with transaction costs (see, e.g., Jouini et al. 2004). Our approach is more aligned with the studies which consider multivariate risk measures that map a random vector to a scalar value; in particular, we consider the following definition of coherence in the multivariate case (Ekeland and Schachermayer 2011).

We say that a functional \(\rho {:}\, L^\infty (\varOmega ,2^\varOmega ,\mathbbm {P};\mathbbm {R}^d)\rightarrow \mathbbm {R}\) mapping a d-dimensional random vector to a real number is coherent in dimension d (in other words, \(\rho \) is a coherent acceptability functional in dimension d; equivalently, \(-\rho \) is a coherent risk measure in dimension d) if \(\rho \) has the following properties (for all d-dimensional random vectors \(\mathbf {V},\mathbf {V}_1,\mathbf {V}_2\)):

  1. (i)

    Normalized \(\rho (\mathbf {0})=0\).

  2. (ii)

    Monotone \(\mathbf {V}_1\le \mathbf {V}_2~\Rightarrow ~\rho (\mathbf {V}_1)\le \rho (\mathbf {V}_2)\).

  3. (iii)

    Positive homogeneous \(\rho (\lambda \mathbf {V})=\lambda \rho (\mathbf {V})\) for all \(\lambda >0\).

  4. (iv)

    Superadditive \(\rho (\mathbf {V}_1+\mathbf {V}_2)\ge \rho (\mathbf {V}_1)+\rho (\mathbf {V}_2)\).

  5. (v)

    Translation invariant (equivariant) \(\rho (\mathbf {V}+\lambda \mathbf {e})=\rho (\mathbf {V})+\lambda \) for all \(\lambda \in \mathbbm {R}\).

The constant vector \(\mathbf {e}\) denotes the vector of ones \((1,1,\ldots ,1)\). It is easy to see that for \(d=1\) the definition coincides with the notion of coherence for scalar-valued random variables (Artzner et al. 1999); we remind the reader that we provide the definition for acceptability functionals, along the lines of Pflug and Römisch (2007). In the monotonicity property (ii), we consider the usual component-wise ordering; i.e., \(\mathbf {V}_1\le \mathbf {V}_2\) if \(\mathbf {V}_1(j)\le \mathbf {V}_2(j)\) for all \(j\in [d]\).

The next result indicates that the proposed risk measure is of particular importance since it satisfies the desirable properties axiomatized in the above definition of coherence.

Proposition 2.1

Consider a one-dimensional functional \(\rho \) and a scalarization set \(C\subseteq C_f\), and let \(\rho _{C}(\mathbf {X})=\min \limits _{\mathbf {c}\in C}\rho (\mathbf {c}^\top \mathbf {X})\) for a d-dimensional random vector \(\mathbf {X}\). If \(\rho \) is a coherent acceptability functional (\({-}\rho \) is a coherent risk measure), then \(\rho _{C}\), the worst-case functional in dimension d (with respect to C), is also coherent.

Proof

It is easy to verify that \(\rho _{C}\) is normalized, monotone, and positive homogeneous. To show that \(\rho _{C}\) is superadditive, let us consider two d-dimensional random vectors \(\mathbf {V}_1\) and \(\mathbf {V}_2\). Then, by the superadditivity of \(\rho \) and the minimum operator, we have \(\rho _{C}(\mathbf {V}_1+\mathbf {V}_2)=\min \nolimits _{\mathbf {c}\in C}\rho (\mathbf {c}^\top (\mathbf {V}_1+\mathbf {V}_2))\ge \min \nolimits _{\mathbf {c}\in C}(\rho (\mathbf {c}^\top \mathbf {V}_1)+\rho (\mathbf {c}^\top \mathbf {V}_2)) \ge \min \nolimits _{\mathbf {c}\in C}\rho (\mathbf {c}^\top \mathbf {V}_1)+ \min \nolimits _{\mathbf {c}\in C}\rho (\mathbf {c}^\top \mathbf {V}_2)=\rho _{C}(\mathbf {V}_1)+\rho _{C}(\mathbf {V}_2)\). The translation invariance of \(\rho _{C}\) follows from the assumptions that \(\sum _{j\in [d]}c_j=1\) and \(\rho \) is translation invariant: for any constant \(\lambda \), \(\rho _{C}(\mathbf {V}+\lambda \mathbf {e})=\min \nolimits _{\mathbf {c}\in C}\rho (\mathbf {c}^\top (\mathbf {V}+\lambda \mathbf {e}))=\min \nolimits _{\mathbf {c}\in C}\rho (\mathbf {c}^\top \mathbf {V}+\lambda )=\min \nolimits _{\mathbf {c}\in C}\rho (\mathbf {c}^\top \mathbf {V})+\lambda =\rho _{C}(\mathbf {V})+\lambda \). \(\square \)

We note that one can also consider a stronger notion of translation invariance in condition (v) of the above definition of coherence; for example, Burgert and Rüschendorf (2006) state it as follows: \(\rho (\mathbf {V}+\lambda \mathbf {e}_j)=\rho (\mathbf {V})+\lambda \) for all \(j\in [d]\) and \(\lambda \in \mathbbm {R}\), where \(\mathbf {e}_j\) is the standard basis vector (1 in the jth component, 0 elsewhere). Rüschendorf (2013) claims that \(\rho _{C}(\mathbf {X})\) is coherent when \(\rho \) is a coherent acceptability functional, even with the above mentioned stronger translation invariance property. However, this claim is not correct even for the unit simplex \((C=C_f)\), as we explain next. Since \(\rho (\mathbf {c}^\top \mathbf {V})\) is concave in \(\mathbf {c}\), the minimum in the definition of \(\rho _{C}(\mathbf {V})\) is attained at an extreme point of C, i.e., \(\rho _{C}(\mathbf {V})=\min \{\rho (V_1),\rho (V_2),\ldots ,\rho (V_d)\}\) if C is a unit simplex. Suppose that \(\rho (V_j),~j\in [d],\) are not all equal, which implies that there exists an index \(j^*\in [d]\) such that \(\rho _{C}(\mathbf {V}) < \rho (V_{j^*})\). Then, for any \(\lambda >0\), by the monotonicity of \(\rho \), we have \(\rho _{C}(\mathbf {V}+\lambda \mathbf {e}_{j^*})=\min \{\min _{j \in [d]{\setminus }\{j^*\}} \rho (V_j), \rho (V_{j^*}+\lambda )\}=\min _{j \in [d]{\setminus } \{j^*\}}\rho (V_j)=\rho _{C}(\mathbf {V})< \rho (V_{j^*})< \rho (V_{j^*}+\lambda )=\rho (V_{j^*})+\lambda \). In particular, \(\rho _{C}(\mathbf {V}+\lambda \mathbf {e}_{j^*})=\rho _{C}(\mathbf {V})\ne \rho _{C}(\mathbf {V})+\lambda \), so the stronger translation invariance property fails.
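The failure is easy to reproduce numerically; the check below (illustrative, taking \(\rho ={\text {CVaR}}_{0.5}\), \(d=2\), and \(C=C_f\), and reusing the cvar helper above) instantiates the argument:

```python
# Numeric instance of the counterexample: d = 2, C = C_f (unit simplex), so
# rho_C(V) = min(rho(V_1), rho(V_2)) with rho = CVaR_{0.5} on two equiprobable
# scenarios.
p = [0.5, 0.5]
rho = lambda xs: cvar(xs, p, 0.5)

v1, v2 = [0.0, 0.0], [10.0, 10.0]     # rho(V_1) = 0 < rho(V_2) = 10, so j* = 2
rho_C = min(rho(v1), rho(v2))         # = 0

lam = 1.0                             # shift only component j* = 2 by lam
shifted = min(rho(v1), rho([x + lam for x in v2]))

assert shifted == rho_C and shifted != rho_C + lam  # strong invariance fails
```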

We next discuss the Pareto efficiency/optimality of the solutions of \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \). For deterministic multiobjective optimization problems, the concept of Pareto optimality is well-known and it defines a dominance relation to compare the solutions with respect to the multiple criteria. It is natural to consider the “non-dominated” solutions as potentially good solutions. Here, we recall two widely-used Pareto optimality concepts:

  • A point \(\mathbf {z}^* \in Z\) is called Pareto optimal if there exists no point \(\mathbf {z}\in Z\) such that

    $$\begin{aligned} G_j(\mathbf {z})\ge G_j(\mathbf {z}^*) \text { for all }j \in [d] \text { and }G_j(\mathbf {z})> G_j(\mathbf {z}^*) \text { for at least one index }j\in [d]. \end{aligned}$$
    (6)
  • A point \(\mathbf {z}^* \in Z\) is called weakly Pareto optimal if there exists no point \(\mathbf {z}\in Z\) such that

    $$\begin{aligned} G_j(\mathbf {z})> G_j(\mathbf {z}^*) \text { for all }j \in [d]. \end{aligned}$$
    (7)

In contrast to the deterministic case, in a stochastic context there is no single widely-adopted concept of Pareto optimality. The challenge stems from the stochasticity of the criteria; \(G_1(\mathbf {z}),\ldots ,G_d(\mathbf {z})\) are in general random variables for any decision vector \(\mathbf {z}\in Z\), and one should specify how to compare solutions in terms of these random objective criteria. To this end, in this paper, we use the stochastic dominance rules and introduce stochastic dominance-based Pareto optimality concepts below for stochastic multiobjective optimization problems. For \(k\in \mathbb {N}_0=\{0,1,\ldots \}\), let us denote the kth degree stochastic dominance (kSD) relation by \(\succeq _{(k)}\); we refer the reader to Appendix A for a brief review of these relations (see, also, Ogryczak and Ruszczyński 2001).

Definition 2.2

(Stochastic dominance-based Pareto optimality) A point \(\mathbf {z}^* \in Z\) is called k SD-based Pareto optimal for some \(k\in \mathbb {N}_0\) if there exists no point \(\mathbf {z}\in Z\) such that

$$\begin{aligned} G_j(\mathbf {z})\succeq _{(k)} G_j(\mathbf {z}^*) \text { for all }j \in [d] \text { and }G_j(\mathbf {z})\succ _{(k)} G_j(\mathbf {z}^*) \text { for at least one index }j\in [d]. \end{aligned}$$
(8)

Definition 2.3

(Stochastic dominance-based weak Pareto optimality) A point \(\mathbf {z}^* \in Z\) is called weakly k SD-based Pareto optimal for some \(k\in \mathbb {N}_0\) if there exists no point \(\mathbf {z}\in Z\) such that

$$\begin{aligned} G_j(\mathbf {z})\succ _{(k)}G_j(\mathbf {z}^*) \text { for all }j \in [d]. \end{aligned}$$
(9)

These stochastic Pareto optimality concepts are based on comparing the random variables \(G_j(\mathbf {z})\) and \(G_j(\mathbf {z}^*)\) [in relations (6) and (7)] using a univariate stochastic dominance rule for each criterion \(j\in [d]\). Such a component-wise dominance relation provides a natural and intuitive approach for extending the concept of traditional Pareto optimality to the stochastic case. A closely related but slightly different notion of efficiency, based on the realizations under each scenario, is presented in Ben Abdelaziz (2012). Alternatively, one can consider a multivariate stochastic dominance relation as in Ben Abdelaziz et al. (1995). However, multivariate stochastic dominance relations are very restrictive (see, e.g., Müller and Stoyan 2002), and finding a non-dominated solution according to such a multivariate relation may not even be possible. For other generalizations of the Pareto efficiency concept to multiobjective stochastic problems we refer to Ben Abdelaziz (2012).

We next focus on the zeroth-order stochastic dominance (ZSD) rule (also known as statewise dominance) defined in Appendix A, and present a close analogue of Theorem 2.2 in Hu and Mehrotra (2012), which provides some managerial insights about our new model \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \).

Proposition 2.2

Let \(C \subseteq C_f\) and \(\mathbf {z}^*\) be an optimal solution of \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \).

  1. (i)

    \(\mathbf {z}^*\) is a weakly ZSD-based Pareto optimal solution of \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \).

  2. (ii)

    If for every \(\mathbf {c}\in C\) we have \(c_j > 0\) for all \(j \in [d]\), then \(\mathbf {z}^*\) is a ZSD-based Pareto optimal solution of \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \).

  3. (iii)

    If \(\mathbf {z}^*\) is a unique optimal solution of \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \), then it is a ZSD-based Pareto optimal solution of \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \).

Proof

Let us assume for contradiction that \(\mathbf {z}^*\) is not a weakly ZSD-based Pareto optimal solution of \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \). Then there exists \({\hat{\mathbf {z}}} \in Z\) such that \(G_j({\hat{\mathbf {z}}},\omega _i)> G_j(\mathbf {z}^*,\omega _i)\) for all \(i \in [n]\) and \(j \in [d]\). By the non-negativity of \(\mathbf {c}\in C\) and the observation that \(c_k>0\) for at least one index \(k\in [d]\) for every \(\mathbf {c}\in C\), we have \(\sum _{j \in [d]}c_jG_j({\hat{\mathbf {z}}},\omega _i)> \sum _{j \in [d]}c_jG_j(\mathbf {z}^*,\omega _i)\) for all \(i \in [n]\) and \(\mathbf {c}\in C\). Then, by the monotonicity of CVaR it is easy to see that \({\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {G}({\hat{\mathbf {z}}})\right) >{\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {G}(\mathbf {z}^*)\right) \) holds for any \(\alpha \in (0,1]\) and \(\mathbf {c}\in C\). Therefore, the following inequalities hold and result in a contradiction:

$$\begin{aligned} \begin{aligned}\max \limits _{\mathbf {z}\in Z}\; \; \min \limits _{\mathbf {c}\in C} \;\;{\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {G}(\mathbf {z})\right)&\ge \min \limits _{\mathbf {c}\in C} \;\;{\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {G}({\hat{\mathbf {z}}})\right) \\&>\min \limits _{\mathbf {c}\in C} \;\;{\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {G}(\mathbf {z}^*)\right) =\max \limits _{\mathbf {z}\in Z}\; \; \min \limits _{\mathbf {c}\in C}\;\;{\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {G}(\mathbf {z})\right) . \end{aligned} \end{aligned}$$

This completes the proof of part (i). The proofs of parts (ii) and (iii) follow from similar arguments. \(\square \)

We would like to emphasize that the \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) model keeps the stochastic nature of the weighted sum, and is novel in terms of incorporating the risk associated with the inherent randomness. Therefore, it calls for the development of the stochastic Pareto efficiency concepts discussed above. In contrast, in some of the existing stochastic multiobjective optimization models, summary statistics such as expected value, CVaR or variance are used as the multiple criteria (see, for example, Köksalan and Şakar 2016 for a stochastic portfolio optimization problem with three criteria: expected return, CVaR and a liquidity measure). Using these summary statistics, the resulting problem becomes a deterministic multicriteria optimization problem, for which the well-defined deterministic Pareto optimality concepts can be applied. One method of obtaining Pareto optimal solutions is to scalarize these multiple criteria using a single weight vector in the scalarization set C. By heuristically searching over C using the weighted Tchebycheff program, multiple solutions on the deterministic efficient frontier are generated, and then an interactive method is employed for the decision makers to choose among these solutions. To illustrate this approach, consider a modification of the portfolio optimization problem in Köksalan and Şakar (2016), where \(G_1(\mathbf {z})\) is the uncertain return of the portfolio and \(G_2(\mathbf {z})\) is a random liquidity measure. Suppose that two criteria are considered: \({\text {CVaR}}_\alpha (G_1(\mathbf {z}))\) and \({\text {CVaR}}_\alpha (G_2(\mathbf {z}))\). Thus, for a fixed \(\mathbf {c}\in C\), the objective is to maximize the scalarization of the component-wise acceptability functionals, leading to the following problem: \(\max _{\mathbf {z}\in Z}\{c_1{\text {CVaR}}_\alpha (G_1(\mathbf {z}))+c_2{\text {CVaR}}_\alpha (G_2(\mathbf {z}))\}\). In contrast, in our model, we search over \(\mathbf {c}\in C\) such that the worst-case multivariate CVaR is maximized while considering the joint behavior of the random outcomes of interest: \(\max _{\mathbf {z}\in Z}\min _{\mathbf {c}\in C}\{{\text {CVaR}}_\alpha (c_1G_1(\mathbf {z})+c_2G_2(\mathbf {z}))\}\). First, note that the term in the minimization in the \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) model is different from the objective of the interactive approach, because the order of the CVaR and scalarization operations cannot be interchanged. Only in the special case where the decision makers are risk-neutral (i.e., \(\alpha =1\)) can the order of the CVaR (expectation) and scalarization operations be interchanged. The stochastic multiobjective modeling approaches based on component-wise risk measures (or acceptability functionals), such as the one proposed by Köksalan and Şakar (2016), have the advantage of providing a set of non-dominated solutions to choose from. In cases where there is a conflict or ambiguity in the decision makers’ preferences and the joint behavior of the random performance measures is of particular interest, our model can be used to directly determine a single solution which optimizes the worst-case CVaR performance of the scalarized random outcomes with respect to all the possible choices of the preference weights; this set of possible preference weights of the multiple criteria is closely related to the well-known uncertainty set in robust optimization (see, e.g., Ben-Tal et al. 2009).
The resulting single solution could be seen as a worst-case compromise decision that could be used as a benchmark or an additional input to support the decision making process.

2.2 Solution methods

In this section, we give reformulations and solution methods for \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \). We also provide improved formulations for the important special case when each scenario has an equal probability. Before proceeding to describe the solution methods we first show that \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) is a convex program under certain conditions.

Proposition 2.3

If Z is a convex set and \(G_j(\mathbf {z})\) is concave in \(\mathbf {z}\in Z\) for all \(j\in [d]\), then \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) is a convex program.

Proof

It is sufficient to prove that the mapping \(\mathbf {z}\mapsto \min \limits _{\mathbf {c}\in C}{\text {CVaR}}_{\alpha }(\mathbf {c}^\top \mathbf {G}(\mathbf {z}))\) is concave. Recall that by our assumptions \(c_j\) is non-negative and \(G_j(\mathbf {z})\) is concave in \(\mathbf {z}\in Z\) for all \(j \in [d]\) and \(\mathbf {c}\in C\). Since any non-negative combination of concave functions is also concave, the mapping \(\mathbf {z}\mapsto \mathbf {c}^\top \mathbf {G}(\mathbf {z})\) is concave for any \(\mathbf {c}\in C\). Then, by the monotonicity and concavity of CVaR, the mapping \(\mathbf {z}\mapsto {\text {CVaR}}_{\alpha }(\mathbf {c}^\top \mathbf {G}(\mathbf {z}))\) is concave for each \(\mathbf {c}\in C\), and the claim follows because the pointwise minimum of a family of concave functions is concave. \(\square \)

2.2.1 General probability case

We first observe that, for the concave outcome mappings \(G_j,~j\in [d]\), the inner optimization problem in (5) is a concave minimization over a convex set, which implies that an optimal solution of the inner problem occurs at an extreme point of C. Let \({\hat{\mathbf {c}}}^{1},\ldots , {\hat{\mathbf {c}}}^{N}\) be the extreme points of C. Then, using the definition of CVaR given in (2), we can formulate (5) as follows:

$$\begin{aligned} \max&\; \psi \end{aligned}$$
(10a)
$$\begin{aligned} \text {s.t.}&\; \psi \le \eta _{\ell }-\frac{1}{\alpha }\sum _{i \in [n]}p_iw_{\ell i},&\forall ~\ell \in [N] \end{aligned}$$
(10b)
$$\begin{aligned}&w_{\ell i} \ge \eta _{\ell }-({\hat{\mathbf {c}}}^{\ell })^\top \mathbf {g}_i(\mathbf {z}),&\forall ~\ell \in [N],i \in [n] \end{aligned}$$
(10c)
$$\begin{aligned}&\mathbf {z}\in Z, \quad \mathbf {w}\in \mathbbm {R}_+^{N\times n},\quad \varvec{\eta }\in \mathbbm {R}^N,\quad \psi \in \mathbbm {R}. \end{aligned}$$
(10d)

Note that if the mapping \(\mathbf {g}_i(\mathbf {z})\) is linear in \(\mathbf {z}\) for all \(i\in [n]\), Z is a polyhedral set, and \(\mathbf {z}\) is a continuous decision vector, then the formulation (10) is a linear program. Under certain assumptions on the scalarization set, the number of extreme points of C may be polynomial (we will discuss these cases in Sect. 2.2.2), and hence the resulting formulation (10) is compact. However, in general, the number of extreme points, N, is exponential. Therefore, we propose a delayed cut generation algorithm to solve (10). We start with an initial subset of scalarization vectors \({\hat{\mathbf {c}}}^{1},\dots , {\hat{\mathbf {c}}}^{L}\) and solve an intermediate relaxed master problem (RMP), which is obtained by replacing N with L in (10). Solving the RMP provides us with a candidate solution denoted by \((\mathbf {z}^*,\psi ^*,\mathbf {w}^*,\varvec{\eta }^*)\). At each iteration, we solve a cut generation problem:

$$\begin{aligned} \left( \mathbf {CutGen-Robust}\right) {}{:} \quad \quad \min _{\mathbf {c}\in C} {\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {G}(\mathbf {z}^*)\right) . \end{aligned}$$

If the optimal objective function value of the cut generation problem is not smaller than \(\psi ^*\), then the current solution \((\mathbf {z}^*,\psi ^*,\mathbf {w}^*,\varvec{\eta }^*)\) is optimal. Otherwise, the optimal solution \(\mathbf {c}^t\) at iteration t gives a violated inequality of the form \(\psi \le {\text {CVaR}}_{\alpha }((\mathbf {c}^t)^\top \mathbf {G}(\mathbf {z}))\). We augment the RMP by setting \({\hat{\mathbf {c}}}^{L+1}\leftarrow \mathbf {c}^t\) and \(L\leftarrow L+1\).
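The resulting procedure can be summarized by the following skeleton; solve_rmp and solve_cutgen are placeholder oracle names (not part of the paper) for solving the RMP and \(\left( \mathbf {CutGen-Robust}\right) \), respectively.

```python
# Skeleton of the delayed cut generation algorithm (hypothetical oracle names).

TOL = 1e-6  # optimality tolerance

def delayed_cut_generation(initial_scalarizations):
    C_hat = list(initial_scalarizations)      # current subset {c^1, ..., c^L}
    while True:
        z_star, psi_star = solve_rmp(C_hat)   # RMP: formulation (10) with N = L
        c_t, obj = solve_cutgen(z_star)       # min_{c in C} CVaR_alpha(c^T G(z*))
        if obj >= psi_star - TOL:             # no violated inequality: stop
            return z_star, psi_star
        C_hat.append(c_t)                     # adds psi <= CVaR_alpha((c^t)^T G(z))
```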

Observe that in the multivariate CVaR-constrained problems studied in Noyan and Rudolf (2013) and Küçükyavuz and Noyan (2016), given a random benchmark vector \(\mathbf {Y}\), the cut generation problems are of the form \(\min _{\mathbf {c}\in C} {\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {G}(\mathbf {z}^*))-{\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {Y})\) (we will revisit this cut generation problem in Sect. 3). Due to the similar structure, we can use the formulations and enhancements given in Noyan and Rudolf (2013) and Küçükyavuz and Noyan (2016) to formulate the cut generation problem \(\left( \mathbf {CutGen-Robust}\right) \) as a mixed-integer program. Let \(\mathbf {X}=\mathbf {G}(\mathbf {z}^*)\) with the realizations \(\mathbf {x}_1,\ldots ,\mathbf {x}_n\) (i.e., \(\mathbf {x}_i=\mathbf {g}_i(\mathbf {z}^*), i\in [n]\)). The representation of CVaR in (3) leads to the following formulation of \(\left( \mathbf {CutGen-Robust}\right) \):

$$\begin{aligned} \min \quad&\mu \end{aligned}$$
(11a)
$$\begin{aligned} \text {s.t.}\quad&\mu \ge \mathbf {c}^\top \mathbf {x}_k-\frac{1}{\alpha }\sum \limits _{i\in [n]}p_i\left[ \mathbf {c}^\top \mathbf {x}_k-\mathbf {c}^\top \mathbf {x}_i\right] _+,&\forall ~k\in [n] \end{aligned}$$
(11b)
$$\begin{aligned}&\mathbf {c}\in C,\quad \mu \in \mathbbm {R}. \end{aligned}$$
(11c)

The shortfall terms \([\mathbf {c}^\top \mathbf {x}_k-\mathbf {c}^\top \mathbf {x}_i]_+\) in inequalities (11b) present a computational challenge. Introducing additional variables and constraints, we can linearize these terms using big-M type constraints, and obtain an equivalent MIP formulation similar to the one proposed by Noyan and Rudolf (2013) for the cut generation problems arising in optimization under multivariate polyhedral CVaR constraints. However, the big-M type constraints may lead to weak LP relaxation bounds and computational difficulties. In order to deal with these difficulties, Küçükyavuz and Noyan (2016) propose an improved model based on a new representation of \({\text {VaR}}_\alpha \), which we describe next. Let

$$\begin{aligned} M_{ik}=\max \left\{ \max \limits _{\mathbf {c}\in C}~\left\{ \mathbf {c}^\top \mathbf {x}_k-\mathbf {c}^\top \mathbf {x}_i\right\} ,0\right\} \text { and } M_{ki}=\max \left\{ \max \limits _{\mathbf {c}\in C}~\left\{ \mathbf {c}^\top \mathbf {x}_i-\mathbf {c}^\top \mathbf {x}_k\right\} ,0\right\} . \end{aligned}$$

Also let \(M_{i*} = \max _{k\in [n]} M_{ik}\) and \(M_{*i} = \max _{k \in [n]} M_{ki}\) for \(i \in [n]\), and \({\tilde{M}}_j = \max \{c_j \; : \; \mathbf {c}\in C\}\) for \(j \in [d]\). Then, the following inequalities hold for any \(\mathbf {c}\in C\):

$$\begin{aligned}&z \le \mathbf {c}^\top \mathbf {x}_i + \beta _i M_{i*} ,&\forall ~i \in [n] \end{aligned}$$
(12a)
$$\begin{aligned}&z \ge \mathbf {c}^\top \mathbf {x}_i - (1 - \beta _i) M_{*i},&\forall ~i \in [n] \end{aligned}$$
(12b)
$$\begin{aligned}&z = \sum _{i \in [n]} \varvec{\xi }_i^\top \mathbf {x}_i, \end{aligned}$$
(12c)
$$\begin{aligned}&\xi _{ij} \le {\tilde{M}}_j u_i,&\forall ~i \in [n] ,j \in [d] \end{aligned}$$
(12d)
$$\begin{aligned}&\sum _{i \in [n]} \xi _{ij} = c_j,&\forall ~j \in [d] \end{aligned}$$
(12e)
$$\begin{aligned}&\sum _{i \in [n]} p_i\beta _i \ge \alpha , \end{aligned}$$
(12f)
$$\begin{aligned}&\sum _{i \in [n]} p_i\beta _i-\sum _{i \in [n]} p_iu_i \le \alpha -\epsilon , \end{aligned}$$
(12g)
$$\begin{aligned}&\sum _{i \in [n]} u_i = 1, \end{aligned}$$
(12h)
$$\begin{aligned}&u_i \le \beta _i,&\forall ~i\in [n] \end{aligned}$$
(12i)
$$\begin{aligned}&\varvec{\beta },~\mathbf {u}\in \{0,1\}^n, \quad \varvec{\xi }\in \mathbbm {R}_+^{n \times d}, \quad z \in \mathbbm {R}, \end{aligned}$$
(12j)

if and only if \(z = {\text {VaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\). Here \(\epsilon \) is a sufficiently small positive constant that ensures that the constraint (12g) is equivalent to the strict inequality \(\sum _{i \in [n]} p_i\beta _i-\sum _{i \in [n]} p_iu_i < \alpha \). Denoting the finite set of all non-zero probabilities of events by \(\mathcal {K}=\left\{ \mathbbm {P}(S){:}\,S\in 2^\varOmega ,~\mathbbm {P}(S)>0\right\} \), it is easy to see that \(\epsilon \) can be taken as any number that satisfies \(0<\epsilon<\min \left\{ \alpha -\kappa {:}\,\kappa \in \mathcal {K}\cup \{0\},~\kappa <\alpha \right\} \). For example, for the equiprobable case and \(\alpha =k/n\) for some \(k\in [n]\), we let \(0<\epsilon <\frac{1}{n}\). The logical variable \(u_i\) equals 1 only if the ith scenario corresponds to \({\text {VaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\), the logical variable \(\beta _i\) equals 1 only if \(\mathbf {c}^\top \mathbf {x}_i\le {\text {VaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\), and, for \(i\in [n]\) and \(j\in [d]\), the variable \(\xi _{ij}\) equals \(c_j\) if \(u_i = 1\) and 0 otherwise.
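To make the roles of the logical variables concrete, the following sketch (illustrative only; it ignores ties among realizations) computes \({\text {VaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\) as the \(\alpha \)-quantile and the associated values of \(\varvec{\beta }\) and \(\mathbf {u}\):

```python
# Sketch: VaR_alpha of the scalarized outcomes c^T x_i, with the logical
# variables of (12): beta_i = 1 iff c^T x_i <= VaR, u marks the scenario
# attaining VaR. (Ties among realizations are ignored for simplicity.)

def var_and_logicals(values, probs, alpha):
    order = sorted(range(len(values)), key=lambda i: values[i])
    cum = 0.0
    for i in order:                  # accumulate probability in sorted order
        cum += probs[i]
        if cum >= alpha:             # first scenario reaching level alpha
            k_star = i
            break
    var = values[k_star]
    beta = [int(v <= var) for v in values]
    u = [int(i == k_star) for i in range(len(values))]
    return var, beta, u
```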

Based on the representation of \({\text {VaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\) given in (12), we propose an alternative formulation for \(\left( \mathbf {CutGen-Robust}\right) \):

$$\begin{aligned} \min \quad&z -\frac{1}{\alpha }\sum \limits _{i\in [n]}p_iv_{i} \end{aligned}$$
(13a)
$$\begin{aligned} \mathrm {s.t.} \;&(12a) {-}(12i), \end{aligned}$$
(13b)
$$\begin{aligned}&v_i - \delta _i = z - \mathbf {c}^\top \mathbf {x}_i,&\forall ~i \in [n] \end{aligned}$$
(13c)
$$\begin{aligned}&v_i \le M_{i*} \beta _i,&\forall ~i \in [n] \end{aligned}$$
(13d)
$$\begin{aligned}&\delta _i \le M_{*i}(1 - \beta _i),&\forall ~i \in [n] \end{aligned}$$
(13e)
$$\begin{aligned}&\varvec{\beta },~\mathbf {u}\in \{0,1\}^n, \quad \varvec{\xi }\in \mathbbm {R}_+^{n \times d}, \quad z \in \mathbbm {R}, \end{aligned}$$
(13f)
$$\begin{aligned}&\mathbf {c}\in C,\quad \mathbf {v},~\varvec{\delta }\in \mathbbm {R}_+^n. \end{aligned}$$
(13g)

In this formulation, it is guaranteed that \(v_i= [z - \mathbf {c}^\top \mathbf {x}_i]_+\) and \(\delta _i= [ \mathbf {c}^\top \mathbf {x}_i-z]_+\) for \(i\in [n]\).

2.2.2 Equal probability case

To keep our exposition simple, we consider confidence levels of the form \(\alpha =k/n\) for some \(k\in [n]\), and refer to Noyan and Rudolf (2013) for an extended MIP formulation with an arbitrary confidence level. In this case, an alternative formulation of \(\left( \mathbf {CutGen-Robust}\right) \), adapted from Noyan and Rudolf (2013), is given by the bilinear program

$$\begin{aligned} \min \quad&\; \frac{1}{k} \sum _{i \in [n]} \sum _{j \in [d]} x_{ij}c_j\beta _i \\ \text {s.t.}&\; \sum _{i \in [n]}\beta _i=k,&\\&\varvec{\beta }\in [0,1]^n, \quad \mathbf {c}\in C. \end{aligned}$$

Note that we can relax the integrality of \(\varvec{\beta }\) in this formulation, which follows from the observation that in the special case of equal probabilities and \(\alpha =k/n\), \( {\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\) is the average of the k smallest of the n realizations (\(\mathbf {c}^\top \mathbf {x}_i,~i \in [n]\)). Using McCormick envelopes (McCormick 1976), we can linearize the bilinear terms \(c_j\beta _i\) in the objective function. Introducing the additional variables \(\gamma _{ij}=c_j\beta _i\), for all \(i \in [n]\) and \(j \in [d]\), an equivalent MIP formulation is stated as:

$$\begin{aligned} \min \quad&\frac{1}{k} \sum _{i \in [n]} \sum _{j \in [d]} x_{ij}\gamma _{ij} \end{aligned}$$
(14a)
$$\begin{aligned} \text {s.t.}&\; \gamma _{ij} \le c_j,&\forall ~i \in [n],~j \in [d] \end{aligned}$$
(14b)
$$\begin{aligned}&\gamma _{ij} \ge c_j- {\tilde{M}}_j(1-\beta _i),&\forall ~i \in [n],~j \in [d] \end{aligned}$$
(14c)
$$\begin{aligned}&\gamma _{ij} \le {\tilde{M}}_j\beta _i,&\forall ~i \in [n],~j \in [d] \end{aligned}$$
(14d)
$$\begin{aligned}&\sum _{i \in [n]}\beta _i=k,&\end{aligned}$$
(14e)
$$\begin{aligned}&\varvec{\beta }\in \{0,1\}^n, \quad \varvec{\gamma }\in \mathbbm {R}_+^{n\times d}, \quad \mathbf {c}\in C. \end{aligned}$$
(14f)

For \(i \in [n]\), if \(\beta _i = 1\), then constraint (14b) together with (14c) enforces that \(\gamma _{ij} = c_j\), for all \(j \in [d]\). For \(i \in [n]\), if \(\beta _i = 0\), then constraint (14d) enforces \(\gamma _{ij}\) to be 0.
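Since \(\beta _i\) is binary and \(0\le c_j\le {\tilde{M}}_j\), the McCormick relaxation is exact here. A brief numeric check (illustrative) enumerates both values of \(\beta \) and confirms that (14b)–(14d), together with \(\gamma \ge 0\), leave \(\gamma =c\beta \) as the only feasible value:

```python
# Check (illustrative): for binary beta and 0 <= c <= M, the constraints
#   gamma <= c,  gamma >= c - M * (1 - beta),  gamma <= M * beta,  gamma >= 0
# force gamma = c * beta exactly.
M = 1.0
grid = [g / 1000.0 for g in range(1001)]   # candidate gamma values in [0, 1]
for beta in (0, 1):
    for c in (0.0, 0.3, M):
        feasible = [g for g in grid
                    if g <= c and g >= c - M * (1 - beta) and g <= M * beta]
        assert feasible == [c * beta]
```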

Let \(P:= \{ (\varvec{\gamma }, \varvec{\beta }, \mathbf {c})\in \mathbbm {R}^{n\times d}_+\times \{0,1\}^n \times C \;| \; \varvec{\gamma }= \varvec{\beta }\mathbf {c}^\top , \sum _{i \in [n]} \beta _i = k \}.\) Then we have \(\min _{\mathbf {c}\in C}\; {\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {X}) = \min _{(\varvec{\gamma }, \varvec{\beta }, \mathbf {c}) \in P} \frac{1}{k}\sum _{i \in [n]} \sum _{j \in [d]} x_{ij}\gamma _{ij}.\) Note that the structure of P also appears in pooling problems (cf. Gupte et al. 2017). The next proposition gives the convex hull of P for a special choice of C using the reformulation-linearization technique (RLT) (Sherali and Adams 1994).

Proposition 2.4

(Sherali et al. 1998; Gupte et al. 2017) If C is a unit simplex (i.e., \(C=C_f\)), then the convex hull of P is described by:

$$\begin{aligned} \textit{conv}(P) =&\left\{ \left( \varvec{\gamma }, \varvec{\beta }, \mathbf {c}\right) \in \mathbbm {R}^{n\times d}_+\times [0,1]^n \times C\;| \;\gamma _{ij} \le c_j,~ i \in [n],~ j \in [d], \right. \\&\left. \quad \sum _{j\in [d]} \gamma _{ij}=\beta _{i},~ i \in [n], \quad \sum _{i \in [n]} \gamma _{ij} =k c_j,~ j \in [d]\right\} . \end{aligned}$$

Using the fact that \(C\subseteq C_f\) and Proposition 2.4, we can strengthen the formulation (14) as follows:

$$\begin{aligned} \min \quad&\frac{1}{k} \sum _{i \in [n]} \sum _{j \in [d]} x_{ij}\gamma _{ij} \end{aligned}$$
(15a)
$$\begin{aligned} \mathrm {s.t.} \;&\gamma _{ij} \le c_j,&\forall ~i \in [n], ~j \in [d] \end{aligned}$$
(15b)
$$\begin{aligned}&\sum _{j \in [d]} \gamma _{ij} =\beta _i,&\forall ~i \in [n] \end{aligned}$$
(15c)
$$\begin{aligned}&\sum _{i \in [n]} \gamma _{ij} = k c_j,&\forall ~j \in [d] \end{aligned}$$
(15d)
$$\begin{aligned}&(14c) {-}(14d), \end{aligned}$$
(15e)
$$\begin{aligned}&\mathbf {c}\in C, \quad \varvec{\beta }\in \{0,1\}^n,\quad \varvec{\gamma }\in \mathbbm {R}_+^{n \times d}. \end{aligned}$$
(15f)

Note also that if C is the unit simplex (\(C=C_f\)), then the integrality restrictions on \(\varvec{\beta }\) can be relaxed in (15) and the cut generation problem is an LP. However, recall that if C is the unit simplex, then C has only d extreme points, given by \({\hat{\mathbf {c}}}^{\ell } = \mathbf {e}_\ell \) for \(\ell \in [d]\). Hence, in this case, the overall problem formulation (10) itself is a compact LP when the mapping \(\mathbf {g}_i(\mathbf {z})\) is linear in \(\mathbf {z}\) for all \(i\in [n]\) and Z is a polyhedral set without integrality restrictions, even under general probabilities.

Furthermore, using the additional information on the structure of the scalarization polytope C and the RLT technique, we can obtain stronger formulations. Suppose that \(C=\{\mathbf {c}\in \mathbbm {R}_+^d\;| B\mathbf {c}\ge \mathbf b\}\), for a given \(r\times d\) matrix B and \(\mathbf b=(b_1,\ldots ,b_r)\). Let \(B_\ell \) be the \(\ell \)th row of B. Then, we can strengthen the formulation (14) as follows:

$$\begin{aligned} \min \quad&\frac{1}{k} \sum _{i \in [n]} \sum _{j \in [d]} x_{ij}\gamma _{ij} \end{aligned}$$
(16a)
$$\begin{aligned} \mathrm {s.t.} \;&\sum _{j\in [d]} B_{\ell j}\gamma _{ij} -b_\ell \beta _i \le B_\ell \mathbf {c}- b_\ell ,&\forall ~i \in [n],~\ell \in [r] \end{aligned}$$
(16b)
$$\begin{aligned}&\sum _{j\in [d]} B_{\ell j}\gamma _{ij} -b_\ell \beta _i \ge 0,&\forall ~i \in [n],~\ell \in [r] \end{aligned}$$
(16c)
$$\begin{aligned}&\sum _{i \in [n]}( \sum _{j\in [d]} B_{\ell j}\gamma _{ij} -b_\ell \beta _i )= k(B_\ell \mathbf {c}- b_\ell ),&\forall ~\ell \in [r] \end{aligned}$$
(16d)
$$\begin{aligned}&\mathbf {c}\in C, \quad \varvec{\beta }\in \{0,1\}^n, \quad \varvec{\gamma }\in \mathbbm {R}_+^{n \times d}. \end{aligned}$$
(16e)

It is known that if \(C=\{\mathbf {c}\in \mathbbm {R}_+^d\;| B\mathbf {c}\ge \mathbf b\}\) is a d-simplex, then \(conv(P)=\{ (\varvec{\gamma }, \varvec{\beta }, \mathbf {c}) \in \mathbbm {R}^{n\times d}_+\times [0,1]^n \times C\;| (16b){-}(16d) \}\) (Gupte et al. 2017). Therefore, the LP relaxation of (16) is integral in this case.

Remark 2.1

Note that if \( {\tilde{M}}_j=1\) for all \(j\in [d]\) (as is the case when C is the unit simplex), then constraints (14c)–(14d) are implied by (15c)–(15d), and can be dropped from the formulation. However, for the situations where \(\tilde{M}_j<1\) for some \(j \in [d]\), the constraints (14c)–(14d), obtained by applying the RLT technique to the constraints \(c_j\le {\tilde{M}}_j, j\in [d]\), can be useful to reduce the solution time.

Remark 2.2

It is also possible to obtain stronger formulations of (12) by applying the RLT technique for the general probability case. In particular, the RLT procedure based on the constraint \(\sum _{i \in [d]} c_i = 1\) provides the following valid equations, one for each scenario:

$$\begin{aligned} \sum _{j \in [d]} \xi _{ij} = u_i, \quad \forall ~i\in [n], \end{aligned}$$
(17)

which can be added to the formulation (12).

Next we consider an important special case of C that applies to multicriteria optimization when certain criteria are deemed more important than others. In particular, we study the case where C contains ordered preference constraints that take the form

$$\begin{aligned} C= \left\{ \mathbf {c}\in \mathbbm {R}_+^d\; | \; \sum _{j \in [d]} c_j= 1, c_j \ge c_{j + 1}, \quad \forall ~j \in [d-1] \right\} . \end{aligned}$$
(18)

If the set C has the ordered preference structure (18), then we are able to obtain the convex hull of P, which is stated in the following result.

Proposition 2.5

If C is given by (18), then the convex hull of P is described by:

$$\begin{aligned} \textit{conv}(P) =&\left\{ (\varvec{\gamma }, \varvec{\beta }, \mathbf {c}) \in \mathbbm {R}^{n\times d}_+\times [0,1]^n \times C\;| \; (15c), (15d) , \gamma _{ij} \ge \gamma _{ij + 1} ,\right. \\&\left. \gamma _{ij} - \gamma _{ij + 1} \le c_{j} - c_{j + 1}, i \in [n], ~j \in [d-1]\right\} . \end{aligned}$$

Proof

First, we show that the extreme points of C are given by

$$\begin{aligned}&{\hat{\mathbf {c}}}^1 = (1, 0, 0, \ldots , 0) \\&{\hat{\mathbf {c}}}^2= \left( \frac{1}{2}, \frac{1}{2}, 0, \ldots , 0\right) \\&{\hat{\mathbf {c}}}^3= \left( \frac{1}{3}, \frac{1}{3}, \frac{1}{3}, \ldots , 0\right) \\&\qquad \vdots \\&{\hat{\mathbf {c}}}^d= \left( \frac{1}{d}, \frac{1}{d}, \frac{1}{d}, \ldots , \frac{1}{d}\right) . \end{aligned}$$

Let \( {\tilde{\mathbf {c}}} =( {\tilde{c}}_1, {\tilde{c}}_2, \dots , {\tilde{c}}_d)\) be a feasible point of C; by definition, we have \( {\tilde{c}}_1 \ge {\tilde{c}}_2\ge \cdots \ge {\tilde{c}}_d\). First, we show that \( {\tilde{c}}_j \le \frac{1}{j}\) for all \(j \in [d]\). Suppose that there exists \(j \in [d]\) such that \({\tilde{c}}_j > \frac{1}{j}\). Then \(\sum _{i = 1}^j {\tilde{c}}_i \ge j {\tilde{c}}_j > 1\), since \({\tilde{c}}_{i} \ge {\tilde{c}}_{j}\) for all \(i \in [j - 1]\); this results in a contradiction. Hence, for any feasible point, we have \( {\tilde{c}}_j \le \frac{1}{j}\) for all \(j \in [d]\). Next, let \(\lambda _j = j ({\tilde{c}}_{j} - {\tilde{c}}_{j + 1})\) for all \(j \in [d]\), where \({\tilde{c}}_{d + 1} = 0\). Note that \(0 \le \lambda _j \le 1\) for all \(j \in [d]\), and \(\sum _{j = 1}^d \lambda _j = \sum _{j = 1}^d {\tilde{c}}_j = 1\) by a telescoping argument. We have \( {\tilde{\mathbf {c}}} = \sum _{j = 1}^d \lambda _j {\hat{\mathbf {c}}}^j\), which shows that any feasible point \( {\tilde{\mathbf {c}}}\) can be represented as a convex combination of the points \({\hat{\mathbf {c}}}^j\), \(j \in [d]\). As a result, C is a \((d-1)\)-dimensional simplex with d vertices, and the proposition follows from Gupte et al. (2017), similarly to the simplex case discussed after (16). \(\square \)
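As an illustration of the decomposition used in the proof, the following sketch (with hypothetical d = 4 and a sample point of C) verifies that \(\lambda _j = j({\tilde{c}}_j - {\tilde{c}}_{j+1})\) yields a convex combination of the extreme points \({\hat{\mathbf {c}}}^j\):

```python
import numpy as np

d = 4
# Extreme points of (18): c_hat^j has 1/j in its first j components.
C_hat = np.array([[1.0 / j] * j + [0.0] * (d - j) for j in range(1, d + 1)])

c = np.array([0.4, 0.3, 0.2, 0.1])                       # a sample point of C
lam = np.arange(1, d + 1) * (c - np.append(c[1:], 0.0))  # lambda_j = j(c_j - c_{j+1})

assert (lam >= 0).all() and np.isclose(lam.sum(), 1.0)   # convex multipliers
assert np.allclose(lam @ C_hat, c)                       # c = sum_j lambda_j c_hat^j
```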

2.3 Finite convergence

In this section, we study the convergence of the proposed cut generation algorithm.

Proposition 2.6

The delayed cut generation algorithm described in Sect. 2.2 to solve \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) is finitely convergent.

Proof

We show that, given a solution to the RMP, we can find an optimal solution to the cut generation subproblem that is an extreme point of C. As a result, the proposed cut generation algorithm is finitely convergent, because C has finitely many extreme points. For the general probability case, such a vertex optimal solution can be obtained as follows: suppose that we solve one of the MIP formulations of \(\left( \mathbf {CutGen-Robust}\right) \) and obtain an optimal solution \(\mathbf {c}^*\). Let \(\pi \) be a permutation describing a non-decreasing ordering of the realizations of the random variable \({\mathbf {c}^*}^\top \mathbf {X}\), i.e., \({\mathbf {c}^*}^\top \mathbf {x}_{\pi (1)}\le \cdots \le {\mathbf {c}^*}^\top \mathbf {x}_{\pi (n)}\), and define

$$\begin{aligned} k^*=\min \left\{ k\in [n]{:}\,\sum \limits _{i\in [k]} p_{\pi (i)} \ge \alpha \right\} \quad \text {and}\quad K^*=\{\pi (1),\ldots ,\pi (k^*-1)\}. \end{aligned}$$

Then, we can obtain the desired vertex solution \({\hat{\mathbf {c}}}\) by finding a vertex optimal solution of the following linear program:

$$\begin{aligned} \begin{aligned} \min \limits _{\mathbf {c}\in C}~~&\frac{1}{\alpha }\left[ \sum _{i \in K^*}p_i\mathbf {c}^\top \mathbf {x}_i+\left( \alpha -\sum _{i\in K^*}p_i\right) \mathbf {c}^\top \mathbf {x}_{\pi (k^*)}\right] . \end{aligned} \end{aligned}$$

This LP relies on the subset-based representation of \({\text {CVaR}}\) (Theorem 1, Noyan and Rudolf 2013). The feasible set is the polytope C, so there exists a vertex optimal solution \({\hat{\mathbf {c}}}\). It is easy to show that \({\hat{\mathbf {c}}}\) is also an optimal solution of \(\left( \mathbf {CutGen-Robust}\right) \). \(\square \)
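In implementation terms, this recovery step is a sort plus one LP over C. A minimal sketch, assuming SciPy is available and that C is described by \(\sum _{j} c_j = 1\), \(\mathbf {c}\ge \mathbf {0}\), and optional rows \(A_{ub}\,\mathbf {c}\le \mathbf {b}_{ub}\) (the HiGHS solver returns a basic, hence vertex, optimal solution):

```python
import numpy as np
from scipy.optimize import linprog   # SciPy is an assumed dependency

def vertex_solution(c_star, X, p, alpha, A_ub=None, b_ub=None):
    # Recover a vertex optimal solution of (CutGen-Robust) from an optimal
    # c* by solving the subset-based LP above; C is assumed to be given by
    # sum_j c_j = 1, c >= 0, and optional rows A_ub c <= b_ub.
    n, d = X.shape
    order = np.argsort(X @ c_star)                         # permutation pi
    cum = np.cumsum(p[order])
    k_star = int(np.searchsorted(cum, alpha - 1e-12)) + 1  # k* above
    K_star = order[:k_star - 1]                            # the set K*
    obj = (p[K_star] @ X[K_star]
           + (alpha - p[K_star].sum()) * X[order[k_star - 1]]) / alpha
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, d)), b_eq=[1.0],
                  bounds=[(0, None)] * d, method="highs")
    return res.x      # a basic (vertex) optimal solution over C
```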

Furthermore, when equal probabilities are assumed, solving the alternative cut generation formulation (16) with a branch-and-bound (B&B) method is guaranteed to yield the desired vertex optimal solution \(\mathbf {c}\) without solving an additional LP. To see this, note that once the LP relaxation at a B&B node results in an integral \(\varvec{\beta }\), the only remaining constraints enforce \(\mathbf {c}\in C\).

3 Multivariate CVaR-constrained optimization model

In this section, we consider a related class of multicriteria decision making problems, where the decision vector \(\mathbf {z}\) is selected from a feasible set Z and associated random outcomes are determined by the outcome mapping \(\mathbf {G}:Z\times \varOmega \rightarrow \mathbbm {R}^d\). We consider an arbitrary objective function \(f : Z \mapsto \mathbbm {R}\) and assume that a d-dimensional benchmark random vector \(\mathbf {Y}\) is available. We aim to find the best decision vector \(\mathbf {z}\) for which the random outcome vector \(\mathbf {G}(\mathbf {z})\) is preferable to the benchmark \(\mathbf {Y}\) with respect to the multivariate polyhedral CVaR preference relation. Such multivariate CVaR-constrained optimization problems are introduced in Noyan and Rudolf (2013). Given a polyhedron of scalarization vectors \(C\subseteq C_f\) and a confidence level \(\alpha \in (0,1]\), the problem is of the general form:

$$\begin{aligned} \max \;&f(\mathbf {z}) \end{aligned}$$
(19a)
$$\begin{aligned} \text {s.t.}\;&{\text {CVaR}}_{\alpha }\left( \mathbf {c}^\top \mathbf {G}(\mathbf {z})\right) \ge {\text {CVaR}}_{\alpha }\left( \mathbf {c}^\top \mathbf {Y}\right) ,&\forall ~ \mathbf {c}\in C\end{aligned}$$
(19b)
$$\begin{aligned}&\mathbf {z}\in Z. \end{aligned}$$
(19c)

The benchmark random vector can be defined on a different probability space, but in practical applications it often takes the form \(\mathbf {Y}= \mathbf {G}(\bar{\mathbf {z}})\), where \(\bar{\mathbf {z}}\) is a benchmark decision.

Observe that (19b) contains infinitely many inequalities. Noyan and Rudolf (2013) show that these inequalities can be replaced with those for a finite subset of scalarization vectors corresponding to the vertices of a higher dimensional polyhedron. The authors propose a delayed cut generation algorithm, which involves the solution of a relaxed master problem (RMP-B) to obtain a candidate solution \({\hat{\mathbf {z}}}\in Z\), and the following cut generation subproblem:

$$\begin{aligned} \left( \mathbf {CutGen-Benchmark}\right) {}: \quad \min _{\mathbf {c}\in C} \;&{\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {X}\right) - {\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {Y}\right) , \end{aligned}$$
(20)

where \(\mathbf {X}=\mathbf {G}({\hat{\mathbf {z}}})\). If the optimal objective function value of \(\left( \mathbf {CutGen-Benchmark}\right) \) is non-negative, then \({\hat{\mathbf {z}}}\) is optimal; otherwise, we obtain a solution \(\mathbf {c}^* \in C\) such that the corresponding CVaR inequality in (19b) is violated. We augment RMP-B by adding this violated CVaR constraint and re-solve it. According to Noyan and Rudolf (2013), the main bottleneck of this delayed cut generation algorithm is solving the cut generation problem (20), since it is generally nonconvex. Therefore, the main focus of this section is the cut generation problem. Throughout the rest of this paper, we assume that \(\mathbf {Y}\) is a random vector with (not necessarily distinct) realizations \(\mathbf {y}_1,\ldots ,\mathbf {y}_m\) and corresponding probabilities \(q_1,\ldots ,q_m\). As before, we let \(\mathbf {g}_i({\hat{\mathbf {z}}})=\mathbf {x}_i=(x_{i1}, \ldots , x_{id} )\) for all \(i \in [n]\).
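Before turning to MIP formulations, note that evaluating the cut-generation objective at a fixed candidate \(\mathbf {c}\) is elementary via the subset-based representation of \({\text {CVaR}}\). A minimal sketch for discrete \(\mathbf {X}\) and \(\mathbf {Y}\) (the function names and data layout are ours):

```python
import numpy as np

def cvar(outcomes, probs, alpha):
    # Subset-based representation of CVaR_alpha of a discrete random
    # variable (larger outcomes preferred), as in Noyan and Rudolf (2013).
    order = np.argsort(outcomes)
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, alpha - 1e-12)) + 1
    head = order[:k - 1]
    return (probs[head] @ outcomes[head]
            + (alpha - probs[head].sum()) * outcomes[order[k - 1]]) / alpha

def cut_violation(c, X, p, Y, q, alpha):
    # Objective value of (CutGen-Benchmark) at a candidate c; a negative
    # value certifies a violated CVaR inequality in (19b).
    return cvar(X @ c, p, alpha) - cvar(Y @ c, q, alpha)
```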

To solve (20), we first need to represent \({\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\) and \({\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {Y})\) appropriately. Using the LP representation (2) for \({\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {Y})\), we can reformulate \(\left( \mathbf {CutGen}-\mathbf {Benchmark}\right) \) as

$$\begin{aligned} \min \quad&{\text {CVaR}}_\alpha \left( \mathbf {c}^\top \mathbf {X}\right) - \eta + \frac{1}{\alpha }\sum _{l \in [m]} q_l w_l\nonumber \\ \mathrm {s.t.} \;&w_l \ge \eta - \mathbf {c}^\top \mathbf {y}_l,&\forall ~l \in [m] \end{aligned}$$
(21a)
$$\begin{aligned}&\mathbf {w}\in \mathbbm {R}_+^m,\quad \eta \in \mathbbm {R}, \quad \mathbf {c}\in C. \end{aligned}$$
(21b)

The minimization of the concave term \({\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\) causes computational difficulties. For this cut generation problem, Küçükyavuz and Noyan (2016) introduce a MIP formulation based on the VaR representation of CVaR (see (12)), which is given by

$$\begin{aligned} \min \quad&z - \frac{1}{\alpha }\sum _{i \in [n]} p_iv_i - \eta + \frac{1}{\alpha } \sum _{l \in [m]} q_lw_l \end{aligned}$$
(22a)
$$\begin{aligned} \mathrm {s.t.} \;&(13b){-}(13e), ~(21a),\end{aligned}$$
(22b)
$$\begin{aligned}&\varvec{\beta },~\mathbf {u}\in \{0,1\}^n, \quad \varvec{\xi }\in \mathbbm {R}_+^{n \times d}, \quad z,\eta \in \mathbbm {R},\end{aligned}$$
(22c)
$$\begin{aligned}&\mathbf {c}\in C,\quad \mathbf {v},\varvec{\delta }\in \mathbbm {R}_+^n,\quad \mathbf {w}\in \mathbbm {R}_+^m. \end{aligned}$$
(22d)

The authors demonstrate that this formulation, which we refer to as \(\left( \mathbf {MIP-CVaR}\right) \), along with computational enhancements, outperforms existing models for \(\left( \mathbf {CutGen-Benchmark}\right) \) under general probabilities. In this section, we consider the special case of equal probabilities, and propose strengthened MIP formulations for the cut generation problems using the RLT technique.

As in Sect. 2.2.2, to keep our exposition simple, we consider confidence levels of the form \(\alpha =k/n\) and assume that all the outcomes of \(\mathbf {X}\) are equally likely. For this special case, similar to the development in Sect. 2.2.2, Noyan and Rudolf (2013) give the equivalent formulation below:

$$\begin{aligned} \min \quad&\frac{1}{k}\sum \limits _{i\in [n]}\varvec{\gamma }_i^\top \mathbf {x}_i - \eta + \frac{1}{\alpha }\sum _{l\in [m]}q_lw_l \end{aligned}$$
(23a)
$$\begin{aligned} \text {s.t.}\quad&(14b){-} (14e), ~(21a), \end{aligned}$$
(23b)
$$\begin{aligned}&\varvec{\beta }\in \{0,1\}^n, \quad \varvec{\gamma }\in \mathbbm {R}_+^{n\times d}, \quad \mathbf {c}\in C,\quad \mathbf {w}\in \mathbbm {R}_+^m,\quad \eta \in \mathbbm {R}. \end{aligned}$$
(23c)

As before, \(\tilde{M}_j=\max \{c_j:~\mathbf {c}\in C\}\). Suppose that the vertices of the polytope C are known and given as \(\{\hat{\mathbf {c}}_1,\ldots ,\hat{\mathbf {c}}_N\}\). Then, we can simply set \(\tilde{M}_{j}=\max \limits _{\ell \in [N]} \hat{c}_{\ell j}\). Furthermore, we can use the RLT-based strengthening of (14b)–(14e) and obtain the following MIP formulation:

$$\begin{aligned} \left( \mathbf {MIP-Special}\right) {}:&\min \quad&\frac{1}{k}\sum \limits _{i\in [n]}\varvec{\gamma }_i^\top \mathbf {x}_i - \eta + \frac{1}{\alpha }\sum _{l\in [m]}q_lw_l \end{aligned}$$
(24a)
$$\begin{aligned}&\text {s.t.}\quad&(15b){-} (15e),~(21a), \end{aligned}$$
(24b)
$$\begin{aligned}&\varvec{\beta }\in \{0,1\}^n,\quad \varvec{\gamma }\in \mathbbm {R}_+^{n \times d},\quad \mathbf {c}\in C, \quad \mathbf {w}\in \mathbbm {R}_+^m,\quad \eta \in \mathbbm {R}. \end{aligned}$$
(24c)

In addition, we can use the RLT technique to further strengthen this formulation using any additional constraints in C as in (16); we provide some numerical results on the performance of such strengthened versions in the computational study (Sect. 5.2).
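Computing the coefficients \(\tilde{M}_j\) from a vertex list is immediate. A small sketch, using as an example the d = 4 extreme points of the ordered preference set (18) derived in Proposition 2.5:

```python
import numpy as np

# Vertex list of C: here the d = 4 extreme points of the ordered
# preference set (18), as derived in Proposition 2.5.
V = np.array([[1.0, 0.0, 0.0, 0.0],
              [1/2, 1/2, 0.0, 0.0],
              [1/3, 1/3, 1/3, 0.0],
              [1/4, 1/4, 1/4, 1/4]])

M_tilde = V.max(axis=0)     # M~_j = max_l c_hat_{l j}
print(M_tilde)              # [1.0, 0.5, 0.333..., 0.25], cf. Sect. 5.2
```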

From Proposition 2.4, we can obtain the minimum of \({\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\) by solving a linear program when C is a d-simplex. However, even in the special case of the unit simplex, constraints (15b)–(15d) are not sufficient to describe the convex hull of the set of feasible solutions to (24), due to the additional constraints (21a)–(21b) representing \({\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {Y})\). To show this and to develop potentially stronger MIP formulations, we derive a class of valid inequalities that define facets of the convex hull of the feasible solutions to (24). Let

$$\begin{aligned} \mathcal S :=&\Bigg \{\left( \varvec{\gamma }, \mathbf {c}, \varvec{\beta }, \eta , \mathbf {w}\right) \in \mathbbm {R}_+^{n \times d}\times \mathbbm {R}_+^d \times \{0,1\}^n \times \mathbbm {R}\times \mathbbm {R}^m_+ \,| \varvec{\gamma }= \varvec{\beta }\mathbf {c}^\top , ~\sum _{j \in [d]} c_j = 1,\\&\sum _{i \in [n]} \beta _i = k, \mathbf {c}^\top \mathbf {y}_l \ge \eta - w_l,~\forall ~l \in [m]\Bigg \}. \end{aligned}$$

Proposition 3.1

For any \(i \in [n]\), \(s \in [m]\), and \(t \in [m]{\setminus } \{s\}\), the inequality

$$\begin{aligned} \mathbf {c}^\top \mathbf {y}_s - \sum _{j \in [d]} \left( y_{sj}-y_{tj}\right) \gamma _{ij} \ge \eta - w_s - w_t, \end{aligned}$$
(25)

is valid for \(\mathcal S\). In addition, inequality (25) defines a facet of \(conv(\mathcal S)\) if and only if \(s \in [m]\) and \(t\in [m]{\setminus } \{s\}\) are such that \(y_{sj}< y_{tj}\) and \(y_{sj'} > y_{tj'}\) for some \(j,j'\in [d]\).

Proof

Suppose that \(\beta _i = 0\), then \(\gamma _{ij} = 0\) for all \(j \in [d]\). Hence, inequality (25) reduces to

$$\begin{aligned} \mathbf {c}^\top \mathbf {y}_s \ge \eta - w_s - w_t , \end{aligned}$$

which is valid since \(w_t \ge 0\). Otherwise, suppose that \(\beta _i = 1\), then \(\gamma _{ij} = c_j\) for all \(j \in [d]\), and inequality (25) reduces to

$$\begin{aligned} \mathbf {c}^\top \mathbf {y}_t \ge \eta - w_t - w_s, \end{aligned}$$

which is valid, because \(w_s \ge 0\), for all \(s \in [m]\). We provide the facet proof in Appendix B (see Proposition B.2). \(\square \)

Note that applying the RLT procedure directly to the additional constraints

$$\begin{aligned} \mathbf {c}^\top \mathbf {y}_l \ge \eta - w_l,&\quad \forall ~l \in [m], \end{aligned}$$
(26)

in the set \(\mathcal S\) would lead to additional bilinear terms \(\eta \beta _i\) and \(w_l\beta _i\) that would need to be linearized by introducing additional variables and big-M constraints. The proposed inequalities (25) can also be obtained by an indirect application of the RLT procedure as follows. Given \(i \in [n]\), \(s \in [m]\), and \(t \in [m]{\setminus } \{s\}\), multiply constraint (26) for \(l=s\) with \((1-\beta _i)\), constraint (26) for \(l=t\) with \(\beta _i\), constraint \(0\ge -w_s\) with \(\beta _i\), and constraint \(0\ge -w_t\) with \((1-\beta _i)\), and sum up the resulting inequalities to obtain inequality (25) (the undesirable nonlinear terms cancel out with this selection of multipliers). It is interesting to note that such an application of RLT yields facet-defining inequalities, as claimed in Proposition 3.1.
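A quick numerical sanity check of the validity claim is easy to set up: sample random points of \(\mathcal S\), taking the minimal feasible \(w_l = [\eta - \mathbf {c}^\top \mathbf {y}_l]_+\), and test inequality (25). All data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m, k = 6, 3, 4, 2
Y = rng.normal(size=(m, d))                  # hypothetical benchmark realizations

for _ in range(1000):                        # random points of the set S
    c = rng.dirichlet(np.ones(d))            # c >= 0, sum_j c_j = 1
    beta = np.zeros(n)
    beta[rng.choice(n, size=k, replace=False)] = 1.0
    gamma = np.outer(beta, c)                # gamma = beta c^T
    eta = rng.normal()
    w = np.maximum(eta - Y @ c, 0.0)         # minimal feasible w_l
    for i in range(n):
        for s in range(m):
            for t in range(m):
                if t != s:
                    lhs = Y[s] @ c - (Y[s] - Y[t]) @ gamma[i]
                    assert lhs >= eta - w[s] - w[t] - 1e-9   # inequality (25)
```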

Alternative VaR-based formulations. Here, without loss of generality, we assume that all the realizations of \(\mathbf {c}^\top \mathbf {X}\) are non-negative (or equivalently, \(\mathbf {x}_i\) is non-negative for all \(i \in [n]\)). Then, it is easy to show that \(\left( \mathbf {CutGen-Benchmark}\right) \) can be formulated as follows:

$$\begin{aligned} \min ~~&\frac{1}{k}\sum \limits _{i\in [n]}\theta _i -\eta + \frac{1}{\alpha }\sum _{l \in [m]}q_lw_l\nonumber \\ \text {s.t.}\;&\theta _i\ge \mathbf {c}^\top \mathbf {x}_i -(1-\beta _i)M_{i},&\forall ~ i\in [n] \end{aligned}$$
(27a)
$$\begin{aligned}&\sum _{i\in [n]}\beta _i=k,\end{aligned}$$
(27b)
$$\begin{aligned}&(21a),\end{aligned}$$
(27c)
$$\begin{aligned}&\mathbf {c}\in C, \quad \varvec{\beta }\in \{0,1\}^n,\quad \varvec{\theta }\in \mathbbm {R}_+^n,\quad \mathbf {w}\in \mathbbm {R}_+^m,\quad \eta \in \mathbbm {R}. \end{aligned}$$
(27d)

In this formulation, \(M_i\) is the largest possible value of \(\theta _i\) (e.g., \(M_i=\max \limits _{\mathbf {c}\in C} \mathbf {c}^{\top }\mathbf {x}_{i}\)). This new formulation again follows from the observation that in the special case of equal probabilities and \(\alpha =k/n\), \({\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\) corresponds to the weighted sum of the smallest k realizations of \(\mathbf {c}^\top \mathbf {X}\). In this special case, \({\text {VaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\) corresponds to the kth smallest realization, and the model guarantees that \(\theta _i=\mathbf {c}^\top \mathbf {x}_i\) if \(\mathbf {c}^\top \mathbf {x}_i\le {\text {VaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\), and \(\theta _i=0\) otherwise. However, this MIP formulation is weak due to the big-M constraints (27a). Hence, we can take advantage of the new representation of VaR given in (12) to develop a stronger MIP formulation:

$$\begin{aligned} \left( \mathbf {MIP\_VaR\_Special}\right) {}:&\min \quad&\frac{1}{k}\sum \limits _{i\in [n]} \theta _i -\eta +\frac{1}{\alpha }\sum \limits _{l\in [m]} q_lw_{l} \end{aligned}$$
(28a)
$$\begin{aligned}&\text {s.t.} \quad&z\le \mathbf {c}^\top \mathbf {x}_{i}+\beta _iM_{i*},\qquad \qquad \qquad \qquad \forall ~i\in [n]\nonumber \\\end{aligned}$$
(28b)
$$\begin{aligned}&\theta _i\ge \mathbf {c}^\top \mathbf {x}_{i}-(1-\beta _i)M_{i},\qquad \qquad \quad \forall ~i\in [n]\nonumber \\\end{aligned}$$
(28c)
$$\begin{aligned}&z \ge \theta _i,\qquad \qquad \qquad \qquad \qquad \qquad \qquad \,\,\forall ~i\in [n]\nonumber \\\end{aligned}$$
(28d)
$$\begin{aligned}&z=\sum \limits _{i\in [n]}\varvec{\xi }_i^\top \mathbf {x}_i,\end{aligned}$$
(28e)
$$\begin{aligned}&\sum \limits _{i\in [n]}\xi _{ij}=c_j,\qquad \qquad \qquad \qquad \qquad \,\,\,\forall ~j\in [d]\nonumber \\\end{aligned}$$
(28f)
$$\begin{aligned}&\sum _{j \in [d]} \xi _{ij} = u_i, \qquad \qquad \qquad \qquad \qquad \,\,\, \forall ~i\in [n] \nonumber \\\end{aligned}$$
(28g)
$$\begin{aligned}&\sum \limits _{i\in [n]}\beta _i=k,\end{aligned}$$
(28h)
$$\begin{aligned}&(12d), (12h){-} (12i), (12a),\end{aligned}$$
(28i)
$$\begin{aligned}&\mathbf {c}\in C, \quad \varvec{\beta },\mathbf {u}\in \{0,1\}^n,\quad \mathbf {w}\in \mathbbm {R}_+^m,\quad \eta ,z \in \mathbbm {R}, \end{aligned}$$
(28j)
$$\begin{aligned}&\varvec{\xi }\in \mathbbm {R}_+^{n \times d}, \quad \varvec{\theta }\in \mathbbm {R}_+^n. \end{aligned}$$
(28k)

In this formulation, the variable \(z={\text {VaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\) is represented by \(\sum \limits _{i\in [n]}\varvec{\xi }_i^\top \mathbf {x}_i=\sum \limits _{i\in [n]}u_i\mathbf {c}^\top \mathbf {x}_i\), and it is guaranteed that \(\xi _{ij}=c_j u_i\) for all \(i\in [n]\) and \(j\in [d]\). These bilinear terms are linearized by using the McCormick envelopes and their RLT strengthening, based only on the information that C is a subset of the unit simplex. Additional constraints on the scalarization set C can be used to further strengthen the above formulation. Notice that, in contrast to (12), this formulation includes the RLT strengthening equality (17) [i.e., (28g)].
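The following sketch checks, on hypothetical data with C the unit simplex (so \(\tilde{M}_j=1\)), that the exact bilinear values \(\xi _{ij}=c_ju_i\) satisfy the McCormick envelopes and the RLT equalities of (28f)–(28g); we assume here that exactly one \(u_i\) equals 1, marking the VaR scenario:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5, 4
c = rng.dirichlet(np.ones(d))              # c in the unit simplex, so M~_j = 1
u = np.zeros(n)
u[rng.integers(n)] = 1.0                   # assumed: u marks one VaR scenario
xi = np.outer(u, c)                        # exact bilinear values xi_ij = c_j u_i

# McCormick envelopes of xi_ij = c_j u_i with 0 <= c_j <= M~_j = 1, u_i binary:
assert (xi <= u[:, None] + 1e-12).all()                     # xi_ij <= M~_j u_i
assert (c[None, :] - xi <= (1 - u)[:, None] + 1e-12).all()  # c_j - xi_ij <= M~_j (1 - u_i)
assert (xi >= -1e-12).all() and (xi <= c[None, :] + 1e-12).all()
# RLT equalities (28f) and (17)/(28g):
assert np.allclose(xi.sum(axis=0), c)      # sum_i xi_ij = c_j
assert np.allclose(xi.sum(axis=1), u)      # sum_j xi_ij = u_i
```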

Finally, we note that \(\left( \mathbf {MIP\_VaR\_Special}\right) \) can also be applied to solve \(\left( \mathbf {CutGen}-\mathbf {Robust}\right) \) by dropping the variables and constraints associated with \({\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {Y})\), leading to enhanced versions of (13) for the equal probability case. We test its computational performance in Sect. 5.2.
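Both big-M families used in this section (e.g., \(M_i=\max _{\mathbf {c}\in C}\mathbf {c}^\top \mathbf {x}_i\) in (27a) and (28c)) are maxima of linear functions over the polytope C, so they can be computed by enumerating its vertex list; a minimal numpy sketch (the helper name is ours):

```python
import numpy as np

def big_m_coefficients(X, V):
    # M_i = max_{c in C} c^T x_i for each scenario i, obtained by
    # enumerating the vertex list V of C (a linear function attains
    # its maximum over a polytope at a vertex).
    return (X @ V.T).max(axis=1)     # one coefficient per scenario
```

When the vertex list is unavailable, each coefficient can instead be obtained by solving a small LP over C.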

4 Hybrid model

In this section, we present a hybrid model that includes both the multivariate CVaR constraints and the robust objective based on the worst-case CVaR. We show that the algorithms in Sects. 2 and 3 can be integrated into a unified methodology to solve the hybrid model of the form

$$\begin{aligned} \left( \mathbf {Hybrid}\right) {}{:}&\max _{\mathbf {z}\in Z} \quad&\min _{\mathbf {c}\in C} \;\;{\text {CVaR}}_{\alpha } \left( \mathbf {c}^\top \mathbf {X}\right) \nonumber \\&\text {s.t.}\quad&{\text {CVaR}}_{\alpha }\left( \mathbf {c}^\top \mathbf {G}(\mathbf {z})\right) \ge {\text {CVaR}}_{\alpha }\left( \mathbf {c}^\top \mathbf {Y}\right) ,&\forall ~\mathbf {c}\in C. \end{aligned}$$
(29)

For a given subset of scalarization vectors \(\tilde{C}:=\{\tilde{\mathbf {c}}^{1},\ldots ,\tilde{\mathbf {c}}^{{\tilde{L}}}\}\subset C\), a relaxed master problem (RMP-H) is given by

$$\begin{aligned} \max _{\mathbf {z}\in Z } \quad&\min _{\mathbf {c}\in C} \;\;{\text {CVaR}}_{\alpha } \left( \mathbf {c}^\top \mathbf {X}\right) \end{aligned}$$
(30a)
$$\begin{aligned} \text {s.t. }&{\text {CVaR}}_{\alpha }\left( \left( \tilde{\mathbf {c}}^{\ell }\right) ^\top \mathbf {G}(\mathbf {z})\right) \ge {\text {CVaR}}_{\alpha }\left( \left( \tilde{\mathbf {c}}^{\ell }\right) ^\top \mathbf {Y}\right) ,&\forall ~\ell \in [{\tilde{L}}]. \end{aligned}$$
(30b)

We can represent the constraints (30b) by linear inequalities, leading to the following equivalent reformulation of RMP-H:

$$\begin{aligned} \begin{aligned} \max \quad&\min \limits _{\mathbf {c}\in C} \;\;{\text {CVaR}}_{\alpha } \left( \mathbf {c}^\top \mathbf {X}\right) \\ \text {s.t.} \,\,&{\tilde{\eta }}_{r} -\frac{1}{ \alpha }\sum \limits _{i\in [n]} p_i{\tilde{w}}_{r i}\ge {\text {CVaR}}_{\alpha }\left( \left( {\tilde{\mathbf {c}}}^{r}\right) ^\top \mathbf {Y}\right) ,&\forall ~r\in [{\tilde{L}}]\\&{\tilde{w}}_{r i} \ge {\tilde{\eta }}_{r} - \left( {\tilde{\mathbf {c}}}^{r}\right) ^\top \mathbf {g}_i(\mathbf {z}),&\forall ~r\in \left[ {\tilde{L}}\right] ,~i\in [n]\\&{\tilde{\mathbf {w}}} \in \mathbbm {R}_+^{{\tilde{L}}\times n}, \quad {\tilde{\varvec{\eta }}} \in \mathbbm {R}_+^{{\tilde{L}}}, \quad \mathbf {z}\in Z.\\ \end{aligned} \end{aligned}$$

As discussed in Sect. 2.2, we can handle the maximin type objective function of interest using a finitely convergent delayed cut generation algorithm. In this spirit, suppose now that \({\hat{C}}=\{\hat{\mathbf {c}}^{1},\dots ,\hat{\mathbf {c}}^{L}\}\subset C\) is a given subset of scalarization vectors used to calculate the worst-case CVaR. In line with the formulation given in (10), RMP-H takes the following form:

$$\begin{aligned} \max \quad&\psi \end{aligned}$$
(31a)
$$\begin{aligned} \text {s.t.}&\quad {\tilde{\eta }}_{r} -\frac{1}{\alpha }\sum \limits _{i\in [n]}p_i{\tilde{w}}_{ri}\ge {\text {CVaR}}_{\alpha }\left( \left( {\tilde{\mathbf {c}}}^{r}\right) ^\top \mathbf {Y}\right) ,\qquad \qquad \forall ~r\in \left[ {\tilde{L}}\right] \end{aligned}$$
(31b)
$$\begin{aligned}&\quad {\tilde{w}}_{ri} \ge {\tilde{\eta }}_{r} -\left( {\tilde{\mathbf {c}}}^{r}\right) ^\top \mathbf {g}_i(\mathbf {z}),\qquad \qquad \qquad \qquad \forall ~r\in \left[ {\tilde{L}}\right] ,~i\in [n]\end{aligned}$$
(31c)
$$\begin{aligned}&\quad \psi \le \eta _{\ell }-\frac{1}{\alpha }\sum _{i \in [n]}p_iw_{\ell i},\qquad \qquad \qquad \qquad \,\,\, \forall ~ \ell \in [L] \end{aligned}$$
(31d)
$$\begin{aligned}&\quad w_{\ell i} \ge \eta _{\ell }-\left( {\hat{\mathbf {c}}}^{\ell }\right) ^\top \mathbf {g}_i(\mathbf {z}),\qquad \qquad \qquad \qquad \,\, \forall ~ \ell \in [L],~i \in [n] \end{aligned}$$
(31e)
$$\begin{aligned}&\quad {\tilde{\mathbf {w}}} \in \mathbbm {R}_+^{{\tilde{L}}\times n},\quad \mathbf {w}\in \mathbbm {R}_+^{L\times n},\quad {\tilde{\varvec{\eta }}} \in \mathbbm {R}_+^{{\tilde{L}}},\quad \psi \in \mathbbm {R}, \quad \mathbf {z}\in Z. \end{aligned}$$
(31f)

Given a solution to the RMP-H (31), two types of cut generation problems are solved to identify whether the current solution is optimal or there is a scalarization vector \(\mathbf {c}\in C\) for which at least one of the constraints (10b) and (29) is violated. As discussed in Sect. 2.2, for minimizing the worst-case CVaR, it is sufficient to consider the extreme points of C. On the other hand, for the multivariate CVaR relation, it is sufficient to consider the finitely many \(\mathbf {c}\) vectors obtained as the projections of the vertices of the higher dimensional polyhedron \(P(C,\mathbf {Y})\) given by (Noyan and Rudolf 2013)

$$\begin{aligned} P(C,\mathbf {Y})=\left\{ \left( \mathbf {c},\eta ,\mathbf {w}\right) \in C \times \mathbbm {R}\times \mathbbm {R}_+^m {:}\, w_l \ge \eta - \mathbf {c}^\top \mathbf {y}_l,\quad l\in [m]\right\} . \end{aligned}$$
(32)

Thus, generating the violated constraints associated with those particular vertex scalarization vectors at each iteration guarantees the finite convergence of the delayed cut generation algorithm of \(\left( \mathbf {Hybrid}\right) \). In other words, the provable finite convergence depends on finding a solution to the cut generation problems \(\left( \mathbf {CutGen-Robust}\right) \) and \(\left( \mathbf {CutGen-Benchmark}\right) \), which is an extreme point of C and the projection of a vertex of \(P(C,\mathbf {Y})\), respectively. In Sect. 2.3, we discuss how to obtain a vertex optimal solution of \(\left( \mathbf {CutGen-Robust}\right) \) from an optimal solution obtained by solving one of its MIP formulations [such as (13)]. For obtaining the desired vertex optimal solution of \(\left( \mathbf {CutGen-Benchmark}\right) \), we refer to Noyan and Rudolf (2013).

Remark 4.1

The finitely convergent delayed cut generation algorithms for \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) and \(\left( \mathbf {Hybrid}\right) {}\) remain valid even if we relax the assumption that \(G_j(\mathbf {z})\) is concave in \(\mathbf {z}\in Z\) for all \(j \in [d]\). Under this more general setting, the claim that it is sufficient to consider the extreme points of C for minimizing the worst-case CVaR follows from the finite representation of the multivariate CVaR relation [see (32)] with a deterministic (single-realization) benchmark, i.e., \(m=1\). In this special case, it is easy to see that the projections of the vertices of the polyhedron \(P(C,\mathbf {Y})\) in (32) coincide with the vertices of the set C, as desired.

5 Computational study

In the first part of our computational study, we investigate the value of the proposed \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) model with respect to a robust risk-neutral model and a multivariate CVaR-constrained model. We also report on the performance of the cut generation algorithm for the \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) model. In the second part, we demonstrate the computational effectiveness of the MIP formulations developed in Sect. 3 for the cut generation problem arising in multivariate CVaR-constrained optimization models.

5.1 Worst-case multivariate CVaR optimization

We explore the effectiveness of the proposed \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) model by applying it to a homeland security budget allocation (HSBA) problem (Hu et al. 2011). This problem studies the allocation of a fixed budget to ten urban areas, which are classified into three groups: (1) higher risk: New York; (2) medium risk: Chicago, San Francisco Bay Area, Washington DC-MD-VA-WV, and Los Angeles-Long Beach; (3) lower risk: Philadelphia PA-NJ, Boston MA-NH, Houston, Newark, and Seattle-Bellevue-Everett. The risk share of each area is measured based on four criteria: (1) property losses, (2) fatalities, (3) air departures, and (4) average daily bridge traffic. To represent the inherent randomness, a random risk-share matrix \(A: \varOmega \rightarrow \mathbbm {R}_+^{4 \times 10}\) is considered, where \(A_{ij}\) denotes the proportion of losses in urban area j relative to the total losses for criterion i. The set \(Z = \{\mathbf {z}\in \mathbbm {R}_+^{10} : \; \sum _{j \in [10]}z_j = 1 \}\) represents all feasible allocations, and the associated random performance measures are specified via a penalty function that charges allocations falling below the risk shares. The negatives of these budget misallocations associated with each criterion are used to construct the random outcome vector \(\mathbf {G}(\mathbf {z})=(G_1(\mathbf {z}),\ldots ,G_4(\mathbf {z}))\), as given below, to be consistent with our setup, in which larger values of the random variables are preferred:

$$\begin{aligned} G_i(\mathbf {z}) = -\sum _{j \in [10]} \left[ A_{ij} - z_j\right] _+ , \quad i\in [4]. \end{aligned}$$
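A direct implementation of this outcome mapping for one scenario is a one-liner in, say, numpy; the risk-share matrix below is a hypothetical stand-in for the data of Hu et al. (2011):

```python
import numpy as np

def outcomes(A, z):
    # G_i(z) = -sum_j [A_ij - z_j]_+ : the (negated) total budget
    # misallocation under the risk shares, one value per criterion.
    return -np.maximum(A - z, 0.0).sum(axis=1)

rng = np.random.default_rng(3)
A = rng.dirichlet(np.ones(10), size=4)   # hypothetical 4 x 10 risk shares
z = np.full(10, 0.1)                     # uniform allocation of the budget
print(outcomes(A, z))                    # one scenario; repeat per scenario
```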

Hu et al. (2011) model this HSBA problem using optimization under multivariate polyhedral SSD constraints based on two benchmarks: one based on average government allocations (Department of Homeland Security’s Urban Areas Security Initiative)—denoted by \(\mathbf {G}(\mathbf {z}^G)\), and one based on the suggestions in the RAND report (Willis et al. 2005)—denoted by \(\mathbf {G}(\mathbf {z}^R)\). On the other hand, Noyan and Rudolf (2013) replace the SSD constraints with CVaR-based ones, leading to the following optimization model:

$$\begin{aligned} \max \quad&\min _{\mathbf {c}\in C} \mathbbm {E}\left( \mathbf {c}^\top \mathbf {G}(\mathbf {z})\right) \end{aligned}$$
(33a)
$$\begin{aligned} \text {s.t.}\quad&{\text {CVaR}}_{\alpha }\left( \mathbf {c}^\top \mathbf {G}(\mathbf {z})\right) \ge {\text {CVaR}}_{\alpha }\left( \mathbf {c}^\top \mathbf {G}(\mathbf {z}^R)\right) ,\quad \quad \forall ~ \mathbf {c}\in C\end{aligned}$$
(33b)
$$\begin{aligned}&{\text {CVaR}}_{\alpha }\left( \mathbf {c}^\top \mathbf {G}(\mathbf {z})\right) \ge {\text {CVaR}}_{\alpha }\left( \mathbf {c}^\top \mathbf {G}(\mathbf {z}^G)\right) ,\quad \quad \forall ~ \mathbf {c}\in C\end{aligned}$$
(33c)
$$\begin{aligned}&\mathbf {z}\in Z. \end{aligned}$$
(33d)

We benchmark our model \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \), defined in (5), against two relevant existing models: the first one, which we refer to as \(\left( \mathbf {B}\text {-}\mathbf {CVaR}\right) {}\), is obtained from (33) by dropping (33c) (the government benchmark is ignored for simplicity), and the second one is the risk-neutral counterpart of our model (Hu and Mehrotra 2012):

$$\begin{aligned} \left( \mathbf {W}\text {-}\mathbf {Exp}\right) {:} \quad \max _{\mathbf {z}\in Z} \;&\min _{\mathbf {c}\in C} \; \mathbbm {E}\left( \mathbf {c}^\top \mathbf {G}(\mathbf {z})\right) . \end{aligned}$$

We follow the data generation scheme described in Noyan and Rudolf (2013) and consider their “base case” scalarization set given by \(C=C_{{\text {Base}}}:=\{\mathbf {c}\in \mathbbm {R}_+^4:\; \sum _{j \in [4]} c_j= 1, ~c_j \ge c_j^* - \frac{\theta }{3},~ j \in [4]\}\), where \(\mathbf {c}^*=(1/4,1/4,1/4,1/4)\) and \(\theta = 0.25\). We also consider a second choice of C, which involves so-called ordered preferences: \(C = C_{{\text {Ord}}}:=\{ \mathbf {c}\in \mathbbm {R}_+^4:\; \sum _{j \in [4]} c_j= 1, ~c_2 \ge c_1 \ge c_3 \ge c_4 \}\). This choice relies on the assumption that the second criterion (based on fatalities) is the most important one, followed by the first criterion (based on property losses), the third criterion (based on air departures), and the fourth criterion (based on average daily bridge traffic). For further details on data generation, we refer to Hu et al. (2011) and Noyan and Rudolf (2013).

In our benchmarking analysis, we consider the equal probability case, set \(n = 500\), and obtain the results for the three models \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\), \(\left( \mathbf {W}\text {-}\mathbf {Exp}\right) {}\), and \(\left( \mathbf {B}\text {-}\mathbf {CVaR}\right) {}\) under each value of \(\alpha \in \{0.05, 0.1,0.15\}\). The results on allocation decisions (averaged over ten randomly generated instances) are reported in Table 1. As seen from these results, for each setting, \(\left( \mathbf {B}\text {-}\mathbf {CVaR}\right) {}\) provides solutions that allocate most of the budget (at least 51%) to the area with the highest risk (New York). This is primarily due to the fact that New York has a large (\(58.61\%\)) allocation in the RAND benchmark. On the other hand, the percentage of the budget allocated to the five lower-risk urban areas is less than 20 and 12 for the scalarization sets \(C_{{\text {Base}}}\) and \(C_{{\text {Ord}}}\), respectively. Since the set \(C_{{\text {Ord}}}\) involves scalarization vectors giving more priority to the second criterion (based on fatalities), \(\left( \mathbf {B}\text {-}\mathbf {CVaR}\right) {}\) suggests allocating even more budget to New York, the most populated area, with a significantly higher risk share associated with fatalities; for the raw data for fatalities and the remaining three criteria, see Table 1 in Hu et al. (2011). As expected, the allocation decisions obtained by \(\left( \mathbf {B}\text {-}\mathbf {CVaR}\right) {}\) with benchmarking constraints are sensitive to the particular benchmark allocations. On the other hand, the robust risk-neutral model \(\left( \mathbf {W}\text {-}\mathbf {Exp}\right) {}\) provides a more “averaged” solution compared to \(\left( \mathbf {B}\text {-}\mathbf {CVaR}\right) {}\) and \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\). For both choices of the scalarization set, \(\left( \mathbf {W}\text {-}\mathbf {Exp}\right) {}\) always allocates more budget to the medium-risk areas than the other models. For example, for the instances with \(C_{{\text {Ord}}}\) and \(\alpha = 0.05\), it allocates over five percent more budget to such areas than \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\), and this behavior is also observed under the other settings. The results of \(\left( \mathbf {W}\text {-}\mathbf {Exp}\right) {}\) are consistent with its “risk-neutral” nature.

Finally, we would like to emphasize that \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) allocates more budget to the areas with lower risk compared to the other models. In particular, for the instances with the scalarization set \(C_{{\text {Base}}}\), \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) allocates on average close to three percent more budget to such areas than \(\left( \mathbf {W}\text {-}\mathbf {Exp}\right) \). These results are consistent with the risk-averse perspective of \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \). Moreover, it is much less conservative than \(\left( \mathbf {B}\text {-}\mathbf {CVaR}\right) {}\) with respect to its allocation to New York.

Table 1 Model benchmarking results for the HSBA data with \(n=500\)

In summary, in the presence of multiple criteria with ambiguous weights, we recommend the use of the risk-neutral model \(\left( \mathbf {W}\text {-}\mathbf {Exp}\right) \) if the decision makers are interested in the average performance. This expectation-based decision making approach is justified if the uncertain environment does not feature extreme events, or by the Law of Large Numbers if the same decisions are made repeatedly under similar conditions. Otherwise, when it is essential to hedge against a potentially high level of random variability, we recommend that the decision makers use \(\left( \mathbf {B}\text {-}\mathbf {CVaR}\right) \) or \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \) depending on the availability of a reasonable benchmark random outcome vector—often based on a benchmark decision. More specifically, \(\left( \mathbf {B}\text {-}\mathbf {CVaR}\right) \) should be used in the presence of a benchmark to be outperformed, while \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) \) is the model of preference if the decision makers do not have an alternative decision to benchmark against.

We next provide some insights about the solution times of our model \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) for the instances under consideration. Unless otherwise stated, all computations in this study are performed on a 64-bit Windows Server 2012 R2 Datacenter machine with a 2.40 GHz Intel Xeon E5-2630 processor and 32 GB RAM. The vertices of both types of scalarization sets are known, and there are only four of them. Thus, we could easily solve \(\left( \mathbf {W}\text {-}\mathbf {CVaR}\right) {}\) using the compact LP formulation (10). For the HSBA instances with \(C_{{\text {Base}}}\), \(\alpha =0.1\), and \(n=500\), it takes at most 20 s to obtain an optimal solution; even for \(n = 5000\) it takes at most 60 s. We observe that while the cut generation algorithm we propose is essential only when the number of extreme points of C is exponential, it can also be useful when the number of extreme points is small. For example, for \(C_{{\text {Base}}}\), the compact LP takes 200 s on average for the three hardest instances with \(n=5000\) and \(\alpha =0.15\), whereas the cut generation algorithm takes 20 s on average and generates only three extreme points of \(C_{{\text {Base}}}\). This difference in solution times can be attributed to the large number of scenario-dependent constraints and variables introduced in (10b)–(10c) for each extreme point of C.

5.2 Multivariate polyhedral CVaR-constrained optimization

In this section, we perform a detailed analysis on comparing the computational performance of the alternative MIP formulations of \(\left( \mathbf {CutGen-Benchmark}\right) \) under equal probabilities. Note that for the HSBA instances, \(\mathbf {Y}\) is already well-defined since the benchmark allocations are given. To obtain the realizations of the random vector \(\mathbf {X}\), we solve the corresponding RMP-B once, and use its optimal solution to calculate the realizations of the associated 4-dimensional random vector \(\mathbf {X}\).

All the optimization problems are modeled with the AMPL mathematical programming language. All runs were executed on four threads of a Lenovo(R) workstation with two Intel® Xeon® 2.30 GHz E5-2630 CPUs and 64 GB memory, running Microsoft Windows 8.1 Pro x64 Edition. All reported times are elapsed times, and the time limit is set to 3600 s unless otherwise stated. CPLEX 12.2 is invoked with its default set of options and parameters. If optimality is not proven within the time allotted, we record both the best lower bound on the optimal objective value (retrieved from CPLEX and denoted by \({\text {LB}}\)) and the best available objective value (denoted by \({\text {UB}}\)). Since the optimal objective function value can take any value, including 0, we report the following relative optimality gap: \({\text {ROG}}=|{\text {LB}}-{\text {UB}}|/|{\text {LB}}|\). We report the results averaged over two instances with different benchmarks (based on the Government and RAND benchmarks) for each combination of \(\alpha \) and n. In all the tables in this section, the “Time” column reports the average solution time, and the “B&B Nodes” column reports the average number of nodes explored during the branch-and-cut process.

One can obtain slightly different versions of the presented MIP formulations by applying the RLT technique to different types of available information (such as valid lower and upper bounds on the scalarization vectors). We next describe the alternative MIP formulations of \(\left( \mathbf {CutGen-Benchmark}\right) \) for which we report results in Tables 2 and 3.

  • \(\left( \mathbf {MIP-CVaR}\right) \) The best available benchmark model proposed by Küçükyavuz and Noyan (2016); it is based on the VaR representation (12) and its formulation is given by (22). For further computational enhancements, we added the valid inequality (17), and deleted the set of big-M constraints (12d).

  • \(\left( \mathbf {MIP\_VaR\_Special}\right) \) This new formulation is also based on the VaR representation (12), but it is valid only for the case of equal probabilities. Its formulation is given in (28); (12d) is deleted as in \(\left( \mathbf {MIP-CVaR}\right) \).

  • \(\left( \mathbf {MIP-Special}\right) \) This new model is obtained by applying the RLT-based strengthening to (23). The formulation (24) involves the inequalities obtained by applying the RLT procedure based on the unit simplex condition and the upper bounding constraints. We also apply the RLT procedure based on the lower bounding information (\(c_j \ge L^c_j\), \(j \in [d]\)), which provides the following valid inequalities:

    $$\begin{aligned} \gamma _{ij}\ge&L^c_j \beta _i,\quad \forall ~i\in [n],~j\in [d], \end{aligned}$$
    (34)
    $$\begin{aligned} -\gamma _{ij}+c_j\ge&L^c_j(1-\beta _i),\quad \forall ~i\in [n],~j\in [d]. \end{aligned}$$
    (35)

    Unless stated otherwise, \(\left( \mathbf {MIP-Special}\right) \) refers to the formulation obtained by adding the constraints (34)–(35) to (24).

Following Remark 2.1, in the unit simplex case we drop the redundant constraints (those obtained from the upper and lower bounding information). In Table 4, “Base Special” refers to the model obtained from \(\left( \mathbf {MIP-Special}\right) \) by deleting the constraints (14c)–(14d) and (34)–(35); it involves only the most effective constraints (those obtained from the unit simplex condition).

From Table 2, we can see that \(\left( \mathbf {MIP-Special}\right) \) solves a majority of the test instances in the shortest amount of time. However, there are some instances (for example, for the HSBA data with the unit simplex, \(\alpha =0.01\), and \(n=1000, 1500\)) for which \(\left( \mathbf {MIP-Special}\right) \) solves only one of the four instances within the time limit, as opposed to \(\left( \mathbf {MIP\_VaR\_Special}\right) \), which solves three of them. In addition, both of the new formulations we propose significantly outperform the existing formulation \(\left( \mathbf {MIP-CVaR}\right) \) for the equal probability case.

Table 2 Computational performance of the alternative MIPs for \(\left( \mathbf {CutGen-CVaR}\right) {}\) under equal probabilities
Table 3 Computational performance of the alternative enhanced MIPs (fixing, bounding, ordering inequalities) for \(\left( \mathbf {CutGen-CVaR}\right) {}\) under equal probabilities

Furthermore, we can apply the computational enhancements proposed in Küçükyavuz and Noyan (2016) to the proposed formulations, namely variable fixing, bounding, and a class of valid inequalities referred to as the ordering inequalities (on the \(\beta \) variables). The variable fixing method recognizes scenarios whose scalarized outcomes are guaranteed to be larger than VaR, and fixes the corresponding \(\beta \) variables to zero. In particular, for each \(k\in [n]\), a set \(L_k:=\{i\in [n]{\setminus } k: \max _{\mathbf {c}\in C}\{\mathbf {c}^\top (\mathbf {x}_i-\mathbf {x}_k)\}<0\}\) (resp., \(H_k:=\{i\in [n]{\setminus } k: \max _{\mathbf {c}\in C}\{\mathbf {c}^\top (\mathbf {x}_k-\mathbf {x}_i)\}<0\}\)) is defined to denote the scenarios that are guaranteed to yield lower (resp., higher) scalarized outcomes than scenario k. Note the correction in the definition of the set \(L_k\) (resp., \(H_k\)) compared to that in Küçükyavuz and Noyan (2016): we exclude scenario i with \(\max _{\mathbf {c}\in C}\{\mathbf {c}^\top (\mathbf {x}_i-\mathbf {x}_k)\}=0\) (resp., \(\max _{\mathbf {c}\in C}\{\mathbf {c}^\top (\mathbf {x}_k-\mathbf {x}_i)\}=0\)) from the set \(L_k\) (resp., \(H_k\)). In addition, for the existing MIP (23) and \(\left( \mathbf {MIP-Special}\right) \), we introduce upper and lower bounds on \({\text {CVaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\); for the others, which involve the z decision variable (representing VaR), we introduce upper and lower bounds on \({\text {VaR}}_\alpha (\mathbf {c}^\top \mathbf {X})\). Table 3 summarizes our computational experience with these enhancements; a sketch of the fixing rule follows this paragraph. The “Remaining Binary Var.” column reports the percentage of binary variables remaining after preprocessing, and the “# of Order. Ineq.” column reports the number of ordering inequalities added to the formulations. Observe that there is a significant reduction in the number of binary variables. Furthermore, many ordering inequalities are added to strengthen the formulation. As a result, instances that were not solvable to optimality by any of the methods reported in Table 2 can now be solved to optimality by at least one of the new formulations. We also note that the total time spent on preprocessing (calculating the big-M coefficients and handling all the enhancements: fixing, bounding, and ordering inequalities), which is not included in the reported times, is negligible.
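As an illustration of the variable-fixing rule, the following sketch computes the sets \(L_k\) by maximizing the linear functions \(\mathbf {c}^\top (\mathbf {x}_i-\mathbf {x}_k)\) over the vertex list of C (the sets \(H_k\) are obtained symmetrically by swapping the roles of \(\mathbf {x}_i\) and \(\mathbf {x}_k\)); the strict inequality reflects the correction noted above, and the helper name is ours:

```python
import numpy as np

def fixing_sets(X, V, tol=1e-9):
    # L_k = { i != k : max_{c in C} c^T (x_i - x_k) < 0 }, computed by
    # enumerating the vertex list V of C; scenarios in L_k are guaranteed
    # to yield strictly lower scalarized outcomes than scenario k.
    n = X.shape[0]
    L = []
    for k in range(n):
        gaps = (X - X[k]) @ V.T          # c^T (x_i - x_k) at each vertex
        L.append([i for i in range(n)
                  if i != k and gaps[i].max() < -tol])
    return L
```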

Table 4 Computational performance of the RLT procedure for \(\left( \mathbf {CutGen-CVaR}\right) {}\) under equal probabilities

Next, we use the additional information on C to obtain stronger RLT formulations. Our experiments are reported in Table 4 for the scalarization sets \(C_{{\text {Base}}}\) and \(C_{{\text {Ord}}}\). We observe that the RLT-based strengthening using only the unit simplex information (15b)–(15d), reported in the column titled “Base Special”, is not very effective. Recall (Remark 2.1) that when there exists an index \(j \in [d]\) such that \(\tilde{M}_j=\max \{c_j:~\mathbf {c}\in C\}<1\), the constraints (14c)–(14d) are not redundant for \(\left( \mathbf {MIP-Special}\right) \). In fact, for the HSBA instances, including these inequalities in \(\left( \mathbf {MIP-Special}\right) \) leads to a significant reduction in the computational time, as reported in the second column of Table 4. It is surprising to observe that \(\left( \mathbf {MIP-Special}\right) \) could solve some instances in very short CPU time, while it reaches the time limit when (14c)–(14d) are dropped.

When the extreme points of C are available, we can easily obtain the upper and lower bounds on the components of \(\mathbf {c}\). For \(C_{{\text {Ord}}}\), which includes the ordered preference constraints \(c_j \ge c_{j+1}\), the RLT procedure yields the corresponding inequalities (see Proposition 2.5):

$$\begin{aligned} \gamma _{ij} \ge \gamma _{ij + 1} ,\quad \forall ~i \in [n], ~ j \in [d-1], \end{aligned}$$
(36)
$$\begin{aligned} \gamma _{ij} - \gamma _{ij + 1} \le c_{j} - c_{j + 1},\quad \forall ~i\in [n],~j\in [d-1]. \end{aligned}$$
(37)

In addition, for this case, \( \tilde{ \mathbf {M}}=(1,1/2,1/3,1/4)\) and \(\mathbf {L}^c=(1/4,0,0,0)\). In our computational experiments reported in Table 4, we use the RLT strengthening of the upper bounding inequalities together with that of the ordered preference constraints defining \(C_{{\text {Ord}}}\).

Table 4 demonstrates that the most effective solution method for cut generation under equal probabilities is to use the formulation \(\left( \mathbf {MIP-Special}\right) \) with all enhancements: fixing, bounding, ordering inequalities on \(\varvec{\beta }\), and the RLT-based strengthening using the additional inequalities defining C.

6 Conclusions

In this paper, we study risk-averse models for multicriteria optimization problems under uncertainty. First, we introduce a model that optimizes the worst-case multivariate CVaR, and develop a finitely convergent delayed cut generation algorithm for finite probability spaces. In addition, for the cut generation problem, which is in general a mixed-integer program, we give a stronger formulation for the equiprobable case using the reformulation linearization technique. Next, we observe that similar polyhedral enhancements are also useful for a related class of multivariate CVaR-constrained optimization problems that has attracted attention recently. Our computational study demonstrates the effectiveness of the proposed solution methods for both classes of models.