
1 Introduction

Understanding the worst case revenue obtained by simple pricing mechanisms is a fundamental question in Economics and Computation [2, 3, 10, 16, 18]. In this context, probably the most basic setting corresponds to selling a single item to n buyers with valuations given by independent and identically distributed random variables. Here the simplest possible mechanism is that of setting a fixed price (a.k.a. anonymous price) for the item, and the benchmark to which we compare is the revenue obtained by Myerson’s optimal mechanism [25]. Through the well established connection between posted pricing mechanisms and prophet inequalities [5, 7, 15], evaluating this revenue gap is equivalent to determining the best possible single threshold prophet inequality in the i.i.d. case. Thus, a result of Ehsani et al. [9] establishes that the performance of a fixed threshold policy when facing i.i.d. samples is at least a fraction \(1-1/e\) of that of the optimal mechanism, and that this bound is best possible.Footnote 1 Footnote 2 In this paper, we explore this basic question under an additional large markets assumption that is relevant to most modern online marketplaces.

In our study we take the viewpoint of prophet inequalities rather than that of pricing mechanisms, mostly because this has become the standard in the literature. Let us thus briefly recall some of the basics. For a fixed positive integer n, let \(X_1,\ldots , X_n\) be non-negative, independent random variables and \(\mathcal {S}_n\) their set of stopping rules. A classic result of Krengel and Sucheston, and Garling [22, 23] asserts that \(\mathbb {E}(\max \{X_1,\ldots ,X_n\})\le 2 \sup \{\mathbb {E}(X_s):s\in \mathcal {S}_n\}\) and that two is the best possible bound. The study of this type of inequality, known as prophet inequalities, was initiated by Gilbert and Mosteller [13] and attracted a lot of attention in the eighties [17, 20, 21, 27, 28]. In particular, Samuel-Cahn [28] noted that rather than looking at the set of all stopping rules, one can obtain the same guarantee by using a single threshold stopping rule, in which the decision to stop depends on whether the value of the currently observed random variable is above a certain threshold. A natural restriction of this setting, which we consider here, is the case in which the random variables are identically distributed. This problem was studied by Hill and Kertz [17], who provided the family of worst possible instances, from which Kertz [20] proved that no stopping rule can extract more than a fraction of roughly 0.745 of the expectation of the maximum. Later, Correa et al. [6] proved that this value is in fact tight. We note, however, that the optimal stopping rule in this i.i.d. case cannot be achieved by a fixed threshold policy. Indeed, the best such policy has an approximation guarantee of \(1-1/e \approx 0.632\) [9].
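As a quick illustration of Samuel-Cahn's observation (a Monte Carlo sketch, not part of the formal development), the following runs a single threshold rule on i.i.d. Exp(1) samples, with the threshold set at the median of the maximum; the distribution and the parameters n and trials are illustrative choices.

```python
import math
import random

# Single threshold rule on i.i.d. Exp(1) samples (illustrative choice).
# Threshold = median of the maximum: solve (1 - e^{-T})^n = 1/2.
random.seed(0)
n, trials = 5, 200_000
T = -math.log(1.0 - 0.5 ** (1.0 / n))

reward = max_val = 0.0
for _ in range(trials):
    xs = [random.expovariate(1.0) for _ in range(n)]
    max_val += max(xs)
    # stop at the first sample exceeding T; collect 0 if none does
    reward += next((x for x in xs if x > T), 0.0)

ratio = reward / max_val
print(round(ratio, 3))
```

On this i.i.d. instance the single threshold rule recovers roughly two thirds of the expected maximum, comfortably above the worst-case guarantee of 1/2 for general independent samples.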

In the last two decades, prophet inequalities have gained particular attention due to their close connection with online mechanisms. The connection involves mapping the random variables in the prophet inequality setting to the virtual valuations in the pricing setting, and the expectation of the maximum value in the prophet inequality setting to the revenue of the optimal mechanism in the pricing setting. This relation was first studied by Hajiaghayi et al. [15], who showed that prophet inequalities can be interpreted as posted price mechanisms for online selection problems. Later, Chawla et al. [5] proved that any prophet inequality can be turned into a posted price mechanism with the same approximation guarantee. The reverse direction was proven by Correa et al. [7], and thus the guarantees for optimal stopping problems are in fact equivalent to those for designing posted price mechanisms. Furthermore, in the i.i.d. setting, fixed threshold stopping rules become equivalent to fixed price policies.

In this work we study single threshold prophet inequalities in a large market regime, where the random variables arriving over time are i.i.d. according to a known and fixed distribution. The essential difference with the classic setting is that rather than considering the worst distribution for each possible market size n, we first fix the distribution and then let n grow to infinity. Our main question is thus to understand to what extent one can obtain improved single threshold prophet inequalities (or fixed price policies) when the market is very large. Interestingly, this setting, though with general stopping rules, was considered three decades ago by Kennedy and Kertz [19]. They prove that the optimal stopping rule recovers at least a 0.776 fraction of the expectation of the maximum, establishing that there is a significant advantage when compared to the 0.745 bound of the classic i.i.d. setting [17, 20]. Kennedy and Kertz realize that the limit problem may be ill behaved and thus impose an extreme value condition.Footnote 3 This condition is, essentially, the analogue of a central limit theorem for the maximum of an i.i.d. sample, and it is the cornerstone of extreme value theory.

A natural question that then arises is whether the result obtained by Kennedy and Kertz [19] for the optimal stopping rule also holds for the much simpler single threshold policies. We answer this question in the positive, proving that the large market assumption allows one to obtain a guarantee of 0.712, significantly improving upon the bound of \(1-1/e\) [9]. We further consider the case of selecting (or selling) k items with a fixed threshold policy and prove that this large market advantage vanishes as k grows.

1.1 Our Results

For every positive integer n, consider an i.i.d. sample \(X_1,X_2,\ldots ,X_n\) with \(X_j\) distributed according to F for every \(j\in \{1,\ldots ,n\}\), where F is a distribution over the non-negative reals. Given a value T, consider the simple algorithm that stops the first time a random variable exceeds T. Then, for each distribution F, we are interested in the limit ratio between the reward of this simple stopping rule, namely the probability \(1-F^n(T)\) that some \(X_i\) exceeds T times the expected value of such an \(X_i\) conditioned on it being larger than T, and the expectation of the maximum of the sample, which we denote by \(M_n\). Our quantity of interest is thus:

$$\begin{aligned} \text {apx}(F)=\liminf _{n\rightarrow \infty }\sup _{T\in \mathbb {R}_+}\frac{1-F^n(T)}{\mathbb {E}(M_n)}\left( T+\frac{1}{1-F(T)}\int _{T}^{\infty }(1-F(s))\mathrm ds\right) . \end{aligned}$$
(1)

Our first main result shows that 0.712 is a tight lower bound for \(\text {apx}(F)\) when the distribution satisfies the extreme value condition. This value is substantially better than the known bound of \(1-1/e\) by Ehsani et al. [9] and thus represents a significant advantage for the large markets setting. We remark that we are mainly interested in the case of distributions F with unbounded support, since one can show that \(\text {apx}(F)=1\) when F is of bounded support.
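As a sanity check on (1) (an illustrative computation, not part of the paper's proofs), for the Pareto distribution of parameter \(\alpha \) one has \(\mathbb {E}(X\mid X>T)=T\alpha /(\alpha -1)\) and \(\mathbb {E}(M_n)=\varGamma (n+1)\varGamma (1-1/\alpha )/\varGamma (n+1-1/\alpha )\), so the ratio can be evaluated directly; at \(\alpha =1.656\), the worst-case parameter identified later in the paper, the supremum over thresholds is already close to 0.712 for large n:

```python
import math

# Finite-n evaluation of the ratio in (1) for Pareto(alpha), alpha = 1.656.
alpha, n = 1.656, 10**6

# E(M_n) = Gamma(n+1) Gamma(1 - 1/alpha) / Gamma(n+1 - 1/alpha), via lgamma
log_EMn = (math.lgamma(n + 1) + math.lgamma(1 - 1 / alpha)
           - math.lgamma(n + 1 - 1 / alpha))
EMn = math.exp(log_EMn)
a_n = n ** (1 / alpha)  # scaling sequence F^{-1}(1 - 1/n)

def ratio(T):
    # 1 - F^n(T), computed stably, times E(X | X > T) = T alpha / (alpha - 1)
    hit = -math.expm1(n * math.log1p(-T ** (-alpha)))
    return hit * T * alpha / (alpha - 1) / EMn

# scan thresholds T = a_n * U over a grid of U values
best = max(ratio(a_n * (0.5 + i / 1000)) for i in range(2501))
print(round(best, 3))
```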

A natural and practically relevant extension of the single selection prophet inequality is the setting in which we can select up to k different samples (or sell k items). We call this the k-selection problem, and we study whether the large market advantage continues to be significant beyond the single selection case. To this end, we provide a lower bound on the approximation factor achievable by the best single threshold policy, again under the extreme value condition. More specifically, for each value of k, the approximation factor is bounded via a (computationally) simple optimization problem. In particular, the bound presented when \(k=1\) follows by obtaining the exact solution of the optimization problem. The performance obtained by our characterization yields prophet inequalities that represent an advantage for the k-selection problem. However, we also show that this advantage vanishes as \(k\rightarrow \infty \). Indeed, we prove that for each integer k the approximation factor is more than \(1-1/\sqrt{2k\pi }\), and that there exists F for which this lower bound is asymptotically tight in k. This tightness, together with the recent result of Duetting et al. [8] establishing that the approximation ratio of the k-selection problem (without the large market assumption) is almost exactly \(1-1/\sqrt{2k\pi }\),Footnote 4 implies that the large market advantage vanishes as \(k\rightarrow \infty \). For an illustration, Fig. 1 depicts the bound obtained by our optimization problem and compares it with \(1-1/\sqrt{2k\pi }\). We finally note that, as a direct corollary, when F satisfies the extreme value condition and for large markets, we can derive the worst case ratio between the optimal single threshold prophet inequality obtained by our characterization theorem and the value obtained by the optimal dynamic policy of Kennedy and Kertz, that is, the adaptivity gap. This ratio is, roughly, at most 1.105.

Fig. 1. Our optimal revenue guarantee over k (continuous line) vs. the bound of \(1-1/\sqrt{2k\pi }\) (dashed line).

As already mentioned, our main result for the multiple selection problem translates into a fixed price policy when the buyers’ valuations are independently and identically distributed, say according to F.Footnote 5 Of course, this works as long as the distribution of the virtual values of F, call it G, satisfies the extreme value condition. This motivates the following questions: when F satisfies the extreme value condition, can we guarantee that the distribution G of the virtual valuations also satisfies it? And, if this is the case, do F and G fall in the same extreme value family? We answer these questions in the positive under some mild assumptions.

2 Preliminaries

We recall that F is a distribution if it is a right-continuous and non-decreasing function, with limit equal to zero in \(-\infty \) and equal to one in \(+\infty \). We consider F to be absolutely continuous in \(\mathbb {R}\), and we denote its density by f or \(F'\), depending on the context. In general, F is not invertible, but we work with its generalized inverse, given by \(F^{-1}(y)=\inf \{t\in \mathbb {R}:F(t)\ge y\}\). We denote by \(\omega _0(F)=\inf \{t\in \mathbb {R}:F(t)>0\}\) and \(\omega _1(F)=\sup \{t\in \mathbb {R}:F(t)<1\}\), and we call the interval \((\omega _0(F),\omega _1(F))\) the support of F. Given a sequence \(\{X_j\}_{j\in \mathbb {N}}\) of i.i.d. random variables with distribution F, we denote by \(M_n=\max _{j\in \{1,\ldots ,n\}}X_j\).

One of the main goals in extreme value theory is to understand the limit behavior of the sequence \(\{M_n\}_{n\in \mathbb {N}}\). Just as the central limit theorem characterizes the convergence in distribution of the average of random variables to a normal distribution, a similar result can be obtained for the sequence of maxima \(\{M_n\}_{n\in \mathbb {N}}\), but this time there are three possible limit situations. One of the possible limits is the Gumbel distribution \(\varLambda (t)=\exp (-e^{-t})\); we call these distributions the Gumbel family. Given \(\alpha >0\), the second possible limit is the Fréchet distribution of parameter \(\alpha \), defined by \(\varPhi _{\alpha }(t)=\exp (-t^{-\alpha })\) if \(t\ge 0\), and zero otherwise; we call these distributions the Fréchet family. Finally, given \(\alpha >0\), the third possibility is the reversed Weibull distribution of parameter \(\alpha \), defined by \(\Psi _{\alpha }(t)=\exp (-(-t)^{\alpha })\) if \(t\le 0\), and one otherwise; we call these distributions the reversed Weibull family. We now state formally the extreme value theorem, a result due independently to Gnedenko [14] and Fisher and Tippett [11].
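As a numerical illustration (a sanity check, not part of the paper), for the exponential distribution one can take \(a_n=1\) and \(b_n=\log n\), and the convergence of \(F^n(a_nt+b_n)\) to the Gumbel distribution is already visible at moderate n:

```python
import math

# F^n(t + log n) = (1 - e^{-t}/n)^n -> exp(-e^{-t}) for Exp(1) samples,
# with scaling a_n = 1 and shifting b_n = log n.
def F(t):  # Exp(1) CDF
    return -math.expm1(-t) if t > 0 else 0.0

n = 10**5
err = max(abs(F(t + math.log(n)) ** n - math.exp(-math.exp(-t)))
          for t in [i / 10 for i in range(-20, 41)])
print(err < 1e-4)
```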

Theorem 1

(see [26]). Let F be a distribution for which there exist a positive real sequence \(\{a_n\}_{n\in \mathbb {N}}\) and another sequence \(\{b_n\}_{n\in \mathbb {N}}\) such that \((M_n-b_n)/a_n\) converges in distribution to a random variable with distribution H, namely, \(\mathbb {P}\left( M_n-b_n\le a_n t\right) =F^n(a_nt+b_n)\rightarrow H(t)\) for every \(t\in \mathbb {R}\) when \(n\rightarrow \infty \). Then one of the following possibilities holds: H is the Gumbel distribution, H is in the Fréchet family, or H is in the reversed Weibull family (see Table 1).

In the following, we say that a distribution F satisfies the extreme value condition if there exist sequences \(\{a_n\}_{n\in \mathbb {N}}\), that we call the scaling sequence, and \(\{b_{n}\}_{n\in \mathbb {N}}\), that we call the shifting sequence, satisfying the condition of Theorem 1.Footnote 6 It can be shown that for every distribution F with extreme type in the reversed Weibull family we have \(\omega _1(F)<\infty \) [26, Proposition 1.13, p. 59]. When F has extreme type Fréchet, we have \(\omega _1(F)=\infty \) [26, Proposition 1.11, p. 54]. For distributions with extreme type Gumbel the picture is not as clear, since \(\omega _1(F)\) may be either finite or infinite. In our analysis we need a tool from extreme value theory related to the order statistics of a sample according to F. We denote the order statistics of a sample of size n by \(M_n=M_n^1\ge M_n^2\ge \cdots \ge M_n^n\).

Theorem 2

(see [24]). Let F be a distribution satisfying the extreme value condition with the scaling and shifting sequences \(\{a_n\}_{n\in \mathbb {N}}\) and \(\{b_n\}_{n\in \mathbb {N}}\) such that \(\mathbb {P}\left( M_n-b_n\le a_n t\right) \rightarrow H(t)\) for every \(t\in \mathbb {R}\) when \(n\rightarrow \infty \). Then, for each \(j\in \{1,2,\ldots ,n\}\) and every \(t\in \mathbb {R}\) we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {P}\left( M_n^j-b_n\le a_n t\right) = H(t) \sum _{s=0}^{j-1} \frac{(-\log H(t))^s}{s!}. \end{aligned}$$
Table 1. Summary of the three possible extreme value distributions. The Fréchet family and the reversed Weibull family are associated with a parameter \(\alpha \in (0,\infty )\). Recall that for \(\alpha >0\), the Pareto distribution of parameter \(\alpha \) is given by \(1-t^{-\alpha }\) for \(t\ge 1\) and zero otherwise.
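For the exponential distribution the left-hand side of Theorem 2 has a closed form (the j-th order statistic exceeds x exactly when at least j samples do), so the theorem can be checked directly; this is an illustrative verification with \(a_n=1\), \(b_n=\log n\) and H the Gumbel distribution:

```python
import math

# Exact law of the j-th order statistic of n Exp(1) samples vs. the limit
# in Theorem 2; note -log H(t) = e^{-t} for the Gumbel distribution.
def exact_cdf(n, j, x):
    F = -math.expm1(-x)  # Exp(1) CDF; x > 0 on the grid below
    # P(M_n^j <= x): at most j-1 of the n samples exceed x
    return sum(math.comb(n, i) * (1 - F) ** i * F ** (n - i) for i in range(j))

def limit_cdf(j, t):
    H = math.exp(-math.exp(-t))
    return H * sum(math.exp(-s * t) / math.factorial(s) for s in range(j))

n = 10**4
err = max(abs(exact_cdf(n, j, t + math.log(n)) - limit_cdf(j, t))
          for j in (1, 2, 3) for t in [i / 10 for i in range(-10, 31)])
print(err < 1e-2)
```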

A distribution V is in the Von Mises family if there exist \(z_0\in \mathbb {R}\), a constant \(\theta >0\) and a function \(\mu :(\omega _0(V),\infty )\rightarrow \mathbb {R}_+\) absolutely continuous with \(\lim _{u \rightarrow \infty } \mu '(u)=0\), such that for every \(t\in (z_0,\infty )\) we have

$$\begin{aligned} 1-V(t)=\theta \exp \Big ( - \int \limits _{z_0}^t \frac{1}{\mu (s)} \mathrm ds \Big ). \end{aligned}$$
(2)

We call such \(\mu \) an auxiliary function of V. We summarize next some technical results related to the Von Mises family of distributions that we use in our analysis.

Lemma 1

(see [26]). Let V be in the Von Mises family with auxiliary function \(\mu \) and such that \(\omega _1(V)=\infty \). Then, V has extreme type Gumbel, and the shifting and scaling sequences may be chosen respectively as \(b_n = V^{-1}(1-1/n)\) and \( a_n = \mu (b_n)\) for every n. Furthermore, we have \(\lim _{t \rightarrow \infty } \mu (t)/t = 0\) and \(\lim _{t\rightarrow \infty }(t+x\mu (t))=\infty \) for every \(x\in \mathbb {R}\).

For example, the exponential distribution of parameter \(\lambda \) is in the Von Mises family, with constant auxiliary function \(1/\lambda \), \(\theta =1\) and \(z_0=0\). Furthermore, for every positive integer n we have \(b_n=F^{-1}(1-1/n)=(\log n)/\lambda \) and \(a_n=\mu (b_n)=1/\lambda \). We need a few results from extreme value theory. In particular, a relevant property states that every distribution with extreme type Gumbel can be represented by a distribution in the Von Mises family in the following precise sense.

Lemma 2

(see [26]). Let F be a distribution satisfying the extreme value condition with \(\omega _1(F)=\infty \). Then, F has extreme type Gumbel if and only if there exists V in the Von Mises family and a positive function \(\eta :(\omega _0(F),\infty )\rightarrow \mathbb {R}_+\) with \(\lim _{t \rightarrow \infty } \eta (t)= \eta ^{\star }>0\) such that \(1-F(t)=\eta (t)(1-V(t))\) for every \(t\in (\omega _0(F),\infty )\).

Then, whenever F has extreme type Gumbel there exists a pair \((V,\eta )\) satisfying the condition guaranteed in Lemma 2, and in this case we say that \((V,\eta )\) is a Von Mises representation of the distribution F.

3 Prophet Inequalities in Large Markets Through Extreme Value Theory

We say that a stopping rule for the k-selection problem with an i.i.d. sample \(X_1,X_2,\ldots ,X_n\) is a single threshold policy if there exists a threshold value T such that we select the first \(\min \{k,|Q|\}\) samples attaining a value larger than T, where Q is the subset of samples attaining a value larger than T. Consider the random variable \(\mathcal {R}^n_{k,T}\) equal to the sum of the first \(\min \{k,|Q|\}\) samples attaining a value larger than T. In particular, its distribution is completely determined by the sample size n, the distribution F and the threshold T. We are interested in understanding the value

$$\begin{aligned} \text {apx}_k(F)=\liminf _{n\rightarrow \infty }\sup _{T \in \mathbb {R}_+}\frac{\mathbb {E}(\mathcal {R}^n_{k,T})}{\sum _{j=1}^k \mathbb {E}(M_n^{j})}, \end{aligned}$$

where \(M_n^1\ge M_n^2\ge \cdots \ge M_n^n\) are the order statistics of a sample of size n according to F. We observe that when \(k=1\) the value \(\text {apx}_k(F)\) corresponds to the value \(\text {apx}(F)\) in (1). Now we present formally our main results for prophet inequalities in the k-selection problem.

Theorem 3

Let F be a distribution over the non-negative reals that satisfies the extreme value condition. Then, the following holds.

  1. (a)

    When F has an extreme type Fréchet of parameter \(\alpha \), we have that \(\text {apx}_k(F) \ge \varphi _k(\alpha )\), where \(\varphi _k:(1,\infty )\rightarrow \mathbb {R}_+\) is given by

    $$\begin{aligned} \varphi _k(\alpha )=\frac{\varGamma (k)}{\varGamma (k+1-1/\alpha )} \max _{x\in (0, \infty )} x \exp (-x^{-\alpha }) \sum _{j=1}^k \sum _{s=j}^{\infty } \frac{x^{-s \alpha }}{s!}. \end{aligned}$$
    (3)

    In particular, we have \(\text {apx}_k(F)\ge 1-1/\sqrt{2\pi k}\) for every distribution F with extreme type in the Fréchet family.

  2. (b)

    When F has extreme type in the Gumbel or reversed Weibull families, we have that \(\text {apx}_k(F)=1\) for every positive integer k.
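The bound in (3) can be evaluated numerically; the following sketch (grid resolution and series truncation are pragmatic choices) checks it against the lower bound \(1-1/\sqrt{2\pi k}\) of part (a), at the parameter \(\alpha =1.656\) that is worst for \(k=1\):

```python
import math

# Grid evaluation of phi_k(alpha) from (3), truncating the inner series at
# s = 60 (the tail is negligible on this grid).
def phi_k(k, alpha):
    c = math.exp(math.lgamma(k) - math.lgamma(k + 1 - 1 / alpha))
    best = 0.0
    for i in range(1, 2000):
        x = i / 100
        u = x ** (-alpha)
        # s-th term of the series with exp(-u) folded in, to avoid overflow
        term = [math.exp(s * math.log(u) - u - math.lgamma(s + 1))
                for s in range(61)]
        best = max(best, x * sum(sum(term[j:]) for j in range(1, k + 1)))
    return c * best

for k in (1, 2, 5, 10):
    assert phi_k(k, 1.656) >= 1 - 1 / math.sqrt(2 * math.pi * k)
print(round(phi_k(1, 1.656), 3))
```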

Theorem 4

Let F be the Pareto distribution with parameter \(\alpha =2\). Then, for every \(\varepsilon >0\) there exists a positive integer \(k_{\varepsilon }\) such that for every \(k\ge k_{\varepsilon }\) it holds that \(\text {apx}_{k}(F) \le 1-(1-\varepsilon )/\sqrt{2\pi k}\).

Observe that by Theorem 3 we have that for each integer k the approximation factor is more than \(1-1/\sqrt{2k\pi }\) under the large market assumption. Moreover, by Theorem 4 this lower bound is in fact asymptotically tight in k for the distributions with extreme type Fréchet of parameter \(\alpha =2\). This tightness, together with the recent result of Duetting et al. [8] establishing that the approximation ratio of the k-selection problem without the large market assumption is almost \(1-1/\sqrt{2k\pi }\), allows us to obtain the surprising result that the large market advantage vanishes as \(k\rightarrow \infty \).

Despite the tightness result established in Theorem 4, for small values of k our bound is in fact substantially better than \(1-1/\sqrt{2k\pi }\). Consider a distribution F with extreme type Fréchet of parameter \(\alpha \in (1,\infty )\). By Theorem 3 (a), when \(k=1\) it holds that

$$\begin{aligned} \varphi _1(\alpha )=\frac{1}{\varGamma (2-1/\alpha )}\sup _{x\in (0,\infty )}x\Big (1-\exp (-x^{-\alpha })\Big ), \end{aligned}$$

for every \(\alpha \in (1,\infty )\). The optimum for the above optimization problem as a function of \(\alpha \) is attained at the smallest real non-negative solution \(U^*(\alpha )\) of the first order condition \(U^{\alpha }+\alpha =U^{\alpha }\exp (U^{-\alpha })\), which is given by

$$\begin{aligned} U^*(\alpha )=\left( -\frac{1}{\alpha }\left( \alpha W_{-1}\left( -\frac{1}{\alpha }e^{-1/\alpha }\right) +1\right) \right) ^{-1/\alpha }, \end{aligned}$$

where \(W_{-1}\) is the negative branch of the Lambert W function. Therefore, we have

$$\begin{aligned} \varphi _1(\alpha )=\frac{\alpha }{\varGamma (2-1/\alpha )}\cdot \frac{U^*(\alpha )}{U^*(\alpha )^{\alpha }+\alpha }. \end{aligned}$$

The minimum value is at least 0.712 and is attained at \(\alpha ^*\approx 1.656\). Note that when \(\alpha \) approaches one or \(\infty \), the function \(\varphi _1\) goes to one, and thus the unique minimizer is \(\alpha ^*\approx 1.656\).
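The closed form above can be evaluated numerically; in this sketch \(W_{-1}\) is computed by bisection (using that \(we^w\) is decreasing on \((-\infty ,-1]\)), and a grid search over \(\alpha \) locates the minimum of \(\varphi _1\):

```python
import math

def lambert_w_minus1(y):  # y in (-1/e, 0); returns w <= -1 with w e^w = y
    lo, hi = -50.0, -1.0   # w e^w decreases from ~0 to -1/e on this range
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.exp(mid) > y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def phi1(alpha):
    w = lambert_w_minus1(-math.exp(-1 / alpha) / alpha)
    U = (-(alpha * w + 1) / alpha) ** (-1 / alpha)  # U*(alpha)
    return alpha / math.gamma(2 - 1 / alpha) * U / (U ** alpha + alpha)

alphas = [1 + i / 1000 for i in range(1, 3000)]
best_alpha = min(alphas, key=phi1)
print(round(best_alpha, 3), round(phi1(best_alpha), 3))
```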

We highlight here that, even though Theorem 3 implies that \(\text {apx}_1(F)\) is at least \(\varphi _1(\alpha ^*) \approx 0.712\) when F has extreme type Fréchet, this bound is in fact reached by the Pareto distribution with parameter \(\alpha ^*\) and therefore this bound is tight.

Given our closed expression for the function \(\varphi _1\), we can compare it with the closed expression obtained by Kennedy and Kertz for the revenue guarantee of the optimal dynamic policy [19]. Given a distribution F, for every positive integer n let \(v_n=\sup \{\mathbb {E}(X_{\tau }):\tau \in \mathcal {T}_n\}\) and consider the stopping time given by \(\tau _n=\min \{k\in \{1,\ldots ,n\}:X_k>v_{n-k}\}\). In particular, \(v_n=\mathbb {E}(X_{\tau _n})\) for every positive integer n. The following summarizes the result of Kennedy and Kertz [19] for the optimal dynamic policy: when F has extreme type Fréchet of parameter \(\alpha \), there exists \(\nu :(1,\infty )\rightarrow (0,1)\) such that \(\lim _{n\rightarrow \infty }v_n/\mathbb {E}(M_n)=\nu (\alpha )\). Furthermore, \(\lim _{\alpha \rightarrow \infty }\nu (\alpha )=\lim _{\alpha \rightarrow 1}\nu (\alpha )=1\) and \(\nu (\alpha )\ge 0.776\) for every \(\alpha \in (1,\infty )\). The function \(\nu \) is given by

$$\begin{aligned} \nu (\alpha )=\frac{1}{\varGamma (2-1/\alpha )}\left( 1-\frac{1}{\alpha }\right) ^{1-\frac{1}{\alpha }}, \end{aligned}$$

and we have \(\varphi _1(\alpha )\le \nu (\alpha )\) for every \(\alpha \in (1,\infty )\). Kennedy and Kertz show that the asymptotic approximation obtained by their multi-threshold policy when the distribution has extreme type in the Gumbel or reversed Weibull families is equal to one. Our Theorem 3 (b) shows that for both families we can attain this value by using just single threshold policies. The adaptivity gap is the ratio between the optimal prophet inequality obtained by a single threshold policy and the value obtained by the multi-threshold policy of Kennedy and Kertz. As a corollary of our result for \(k=1\), we obtain an upper bound on the adaptivity gap for distributions satisfying the extreme value condition. For the family of distributions over the non-negative reals satisfying the extreme value condition, the adaptivity gap is at most \(\max _{\alpha \in (1,\infty )}\nu (\alpha )/\varphi _1(\alpha )\approx 1.105\), attained at \(\alpha \approx 1.493\).
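The pointwise inequality \(\varphi _1(\alpha )\le \nu (\alpha )\) can be spot-checked numerically on an illustrative grid, evaluating \(\varphi _1\) through its sup form; the grid is a pragmatic choice and only gives a lower estimate of the sup, which is enough for this one-sided check:

```python
import math

# phi_1(alpha) via the sup form sup_x x (1 - e^{-x^{-alpha}}) / Gamma(2 - 1/alpha);
# the grid under-estimates the sup, which only strengthens the check below.
def phi1(alpha):
    best = max(x / 100 * -math.expm1(-(x / 100) ** (-alpha))
               for x in range(20, 300))
    return best / math.gamma(2 - 1 / alpha)

def nu(alpha):
    return (1 - 1 / alpha) ** (1 - 1 / alpha) / math.gamma(2 - 1 / alpha)

assert all(phi1(a) <= nu(a) for a in [1.05 + i / 20 for i in range(80)])
print("ok")
```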

4 Analysis of the k-Selection Prophet Inequalities

In this section we prove Theorem 3. Throughout the section we introduce some necessary technical results, whose proofs can be found in the full version of the paper. The following proposition gives an equivalent expression for the value \(\text {apx}_k(F)\), which is useful in our analysis.

Proposition 1

Let F be a distribution, let T be a real value and let \(X_1,\ldots ,X_n\) be an i.i.d. sample according to F. Then, for every positive integer k we have \(\mathbb {E}(\mathcal {R}^n_{k,T})=\mathbb {E}\left( X_1 \vert X_1> T \right) \sum _{j=1}^{k} \mathbb {P}(M_n^{j}>T).\)
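Proposition 1 can be checked by simulation; the following sketch uses Exp(1) samples with illustrative values of n, k and T, estimating both sides of the identity from the same runs:

```python
import random

random.seed(1)
n, k, T, trials = 8, 3, 1.5, 200_000

lhs = 0.0                  # estimate of E(R^n_{k,T})
cond_sum = cond_cnt = 0.0  # for E(X | X > T), pooled over runs
exceed = [0.0] * k         # exceed[j] counts runs with M_n^{j+1} > T

for _ in range(trials):
    xs = [random.expovariate(1.0) for _ in range(n)]
    above = [x for x in xs if x > T]
    lhs += sum(above[:k])           # first min(k, |Q|) samples above T
    cond_sum += sum(above)
    cond_cnt += len(above)
    for j in range(k):
        exceed[j] += len(above) > j  # M_n^{j+1} > T iff > j samples above T

lhs /= trials
rhs = (cond_sum / cond_cnt) * sum(e / trials for e in exceed)
rel = abs(lhs - rhs) / rhs
print(rel)
```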

Using Proposition 1 we have that \(\text {apx}_k(F)\) is therefore given by

$$\begin{aligned} \text {apx}_k(F)= \liminf _{n\rightarrow \infty }\sup _{T \in \mathbb {R}_+} \mathbb {E}\left( X \vert X> T\right) \frac{\sum _{j=1}^{k} \mathbb {P}(M_n^{j}>T)}{\sum _{j=1}^k \mathbb {E}(M_n^{j})}, \end{aligned}$$
(4)

where X is a random variable distributed according to F.

4.1 Proof of Theorem 3 (a): The Fréchet Family

In what follows we restrict to the case in which the distribution F has extreme type in the Fréchet family. We remark that if \(\alpha \in (0,1]\), the expected value of a random variable with Fréchet distribution \(\varPhi _{\alpha }\) is not finite. Therefore, we further restrict to the Fréchet family with \(\alpha \in (1,\infty )\). To prove Theorem 3 (a) we require a technical lemma, in which we exploit the structure given by the extreme value condition and show how to characterize the approximation factor of a distribution in the Fréchet family for large values of n. Before stating this lemma, let us introduce a few required results about the Fréchet family.

We say that a positive measurable function \(\ell :(0,\infty )\rightarrow \mathbb {R}\) is slowly varying if for every \(u>0\) we have \(\lim _{t\rightarrow \infty }\ell (ut)/\ell (t)=1.\) For example, the function \(\ell (t)=\log (t)\) is slowly varying, since \(\ell (ut)/\ell (t)=\log (u)/\log (t)+1\rightarrow 1\) when \(t\rightarrow \infty \). On the other hand, the function \(\ell (t)=t^{\gamma }\) is not slowly varying, since for every \(u>0\) we have \(\ell (ut)/\ell (t)=u^{\gamma }\). The following lemma shows the existence of a strong connection between the distributions with extreme type in Fréchet family and slowly varying functions. Recall that for \(\alpha >0\), the Pareto distribution of parameter \(\alpha \) is given by \(P_{\alpha }(t)=1-t^{-\alpha }\) for \(t\ge 1\) and zero otherwise.
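The two examples above can be checked numerically at a single large t (an illustrative spot check rather than a proof of the limits):

```python
import math

# log is slowly varying: ell(ut)/ell(t) -> 1; t^{1/2} is not: the ratio
# stays at u^{1/2}.
t, u = 1e300, 100.0
log_ratio = math.log(u * t) / math.log(t)
sqrt_ratio = math.sqrt(u * t) / math.sqrt(t)
print(log_ratio, sqrt_ratio)
```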

Lemma 3

([26]). Let F be a distribution with extreme type in the Fréchet family. Then, for every positive integer n, we have \(a_n=F^{-1}(1-1/n)\) and \(b_n=0\) are scaling and shifting sequences for F. Furthermore, there exists a slowly varying function \(\ell _F\) such that \(1-F(t)=t^{-\alpha }\ell _F(t)\), for every \(t\in \mathbb {R}_+\). In particular, we have \(1-F(t)=(1-P_{\alpha }(t))\cdot \ell _F(t)\) for every \(t\in \mathbb {R}_+\).

Observe that this lemma says that if F has extreme type Fréchet of parameter \(\alpha \), then it essentially corresponds to a perturbation of a Pareto distribution of parameter \(\alpha \) by a slowly varying function. Let \(\{a_n\}_{n\in \mathbb {N}}\) be a scaling sequence for a distribution F with extreme type in the Fréchet family. Thanks to Lemma 3, the shifting sequence in this case can be taken to be zero. We are now ready to state the main technical lemma.

Lemma 4

Let F be a distribution with extreme type Fréchet of parameter \(\alpha \) and let \(\{a_n\}_{n\in \mathbb {N}}\) be an appropriate scaling sequence. Consider a positive sequence \(\{T_n\}_{n\in \mathbb {N}}\) with \(T_n\rightarrow \infty \) and for which there exists \(U\in \mathbb {R}_+\) such that \(T_n/a_n\rightarrow U\). Then, we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {E}\left( X \vert X> T_n\right) \frac{\sum _{j=1}^{k} \mathbb {P}(M_n^{j}>T_n)}{\sum _{j=1}^k \mathbb {E}(M_n^{j})}= \frac{\varGamma (k)}{\varGamma (k+1-1/\alpha )} U \exp (-U^{-\alpha }) \sum _{j=1}^k \sum _{s=j}^{\infty } \frac{U^{-s \alpha }}{s!}. \end{aligned}$$

We use this lemma to prove Theorem 3 (a).

Proof

(Proof of Theorem 3 (a)). Let F be a distribution with extreme type Fréchet of parameter \(\alpha \). We first prove that for each positive integer k it holds that \(\text {apx}_k(F) \ge \varphi _k(\alpha )\). To this end, for each positive integer n and positive real number U, let \(T_n\) be the threshold given by \(T_n=a_n \cdot U\), where \(\{ a_n\}_{n \in \mathbb {N}}\) is the scaling sequence for the distribution F given by Lemma 3. Then,

$$\begin{aligned} \text {apx}_k(F) \ge \liminf _{n\rightarrow \infty } \mathbb {E}\left( X \vert X> T_n \right) \frac{\sum _{j=1}^{k} \mathbb {P}(M_n^{j}>T_n)}{\sum _{j=1}^k \mathbb {E}(M_n^{j})}. \end{aligned}$$
(5)

Note that \(T_n \rightarrow \infty \), since \(U \in \mathbb {R}_+\) and \(a_n \rightarrow \infty \). Furthermore, \(\lim _{n \rightarrow \infty } T_n/a_n= U\), and then applying Lemma 4 together with inequality (5) we obtain that

$$\begin{aligned} \text {apx}_k(F) \ge \frac{\varGamma (k)}{\varGamma (k+1-1/\alpha )} U \exp (- U^{-\alpha }) \sum _{j=1}^k \sum _{s=j}^{\infty } \frac{U^{-s \alpha }}{s!}. \end{aligned}$$

Given that the inequality above holds for every positive real number U, we have

$$\begin{aligned} \text {apx}_k(F) \ge \frac{\varGamma (k)}{\varGamma (k+1-1/\alpha )} \max _{U \in \mathbb {R}_+} U \exp (- U^{-\alpha }) \sum _{j=1}^k \sum _{s=j}^{\infty } \frac{U^{-s \alpha }}{s!}= \varphi _k (\alpha ). \end{aligned}$$

In the rest of the proof we show that, for each positive integer k and each \(\alpha \in (1, \infty )\), \(\varphi _k(\alpha )\) is lower bounded by \(1-1/\sqrt{2k \pi }\). To this end, we just need to evaluate the objective function of our optimization problem at a well-chosen point. One of the Gautschi inequalities for the Gamma function states that for every \(s \in (0,1)\) and every \(x \ge 1\) we have \(\varGamma (x+1)>x^{1-s}\cdot \varGamma (x+s)\) [12]. Then, setting \(x=k\) and \(s=1-1/\alpha \) yields \(\varGamma (k+1)>k^{1/\alpha } \varGamma (k+1-1/\alpha ).\) Since \(\varGamma (k)=\varGamma (k+1)/k\), we therefore obtain \( k^{1-1/\alpha }>\varGamma (k+1-1/\alpha )/\varGamma (k).\) On the other hand, note that for each \(U \in (0, \infty )\) we have

$$\begin{aligned} U \exp (-U^{-\alpha })\sum _{j=1}^k\sum _{s=j}^\infty \frac{U^{-s \alpha }}{s!}= & {} U \exp (-U^{-\alpha })\left( \sum _{s=1}^k \frac{s\cdot U^{-s \alpha }}{s!}+ k \sum _{s=k+1}^\infty \frac{U^{-s\alpha }}{s!}\right) \\= & {} U \exp (-U^{-\alpha })\left( U^{-\alpha } \sum _{s=0}^{k-1} \frac{U^{-s \alpha }}{s!}+ k \sum _{s=k+1}^\infty \frac{U^{-s\alpha }}{s!}\right) . \end{aligned}$$

In particular, by taking \(U_{k,\alpha }=k^{-1/\alpha }\) we get that

$$\begin{aligned} \varphi _k(\alpha )\cdot \frac{\varGamma (k+1-1/\alpha )}{\varGamma (k)}\ge & {} U_{k,\alpha }\cdot k \exp (-U_{k,\alpha }^{-\alpha }) \left( \sum _{s=0}^{k-1} \frac{U_{k,\alpha }^{-s \alpha }}{s!}+ \sum _{s=k+1}^\infty \frac{U_{k,\alpha }^{-s\alpha }}{s!}\right) \\= & {} U_{k,\alpha }\cdot k \exp (-U_{k,\alpha }^{-\alpha }) \left( \exp (U_{k,\alpha }^{-\alpha })- \frac{U_{k,\alpha }^{-\alpha k}}{k!}\right) \\= & {} k^{1-1/\alpha } \left( 1- \frac{e^{-k}k^k}{k!}\right) \ge \frac{\varGamma (k+1-1/\alpha )}{\varGamma (k)}\left( 1- \frac{1}{\sqrt{2 \pi k}} \right) , \end{aligned}$$

where the first inequality follows since the value of \(\varphi _k(\alpha )\) involves the maximum over \((0,\infty )\), the first equality from the Taylor series for the exponential function and the last inequality is obtained by applying Stirling’s approximation inequality. This concludes the proof of the theorem.    \(\square \)
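Both inequalities used in the chain above, Stirling's bound \(e^{-k}k^k/k!\le 1/\sqrt{2\pi k}\) and the Gautschi inequality, can be spot-checked numerically over a small grid:

```python
import math

for k in range(1, 60):
    # Stirling: e^{-k} k^k / k! <= 1/sqrt(2 pi k)
    stirling_term = math.exp(k * math.log(k) - k - math.lgamma(k + 1))
    assert stirling_term <= 1 / math.sqrt(2 * math.pi * k)
    # Gautschi with x = k, s = 1 - 1/alpha: Gamma(k+1) > k^{1/alpha} Gamma(k+s)
    for alpha in (1.1, 1.656, 2.0, 5.0):
        s = 1 - 1 / alpha
        assert math.lgamma(k + 1) > (1 - s) * math.log(k) + math.lgamma(k + s)
print("ok")
```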

4.2 Proof of Theorem 3 (b): Gumbel and Reversed Weibull Family

In what follows we consider a distribution F with extreme type Gumbel or in the reversed Weibull family. We consider both cases separately. Recall that if F has extreme type in the reversed Weibull family then it holds that \(\omega _1(F)<\infty \), that is, F has bounded support.

We start by showing that when \(\omega _1(F)<\infty \) we have \(\text {apx}_k(F)=1\) for every positive integer k. In particular, the approximation result follows directly from this in the case of a distribution F with extreme type in the reversed Weibull family. When the support of F is upper bounded by \(\omega _1(F)<\infty \), we have \(\mathbb {E}(M_n^j)\le \omega _1(F)\) for every \(j\in \{1,\ldots ,k\}\). For every \(\varepsilon >0\) consider \(T_{\varepsilon }=(1-\varepsilon )\cdot \omega _1(F)\). Then, by the expression in (4) we have that \(\text {apx}_k(F)\) can be lower bounded as \(\text {apx}_k(F)\ge (1-\varepsilon )\cdot \omega _1(F)\cdot \liminf _{n\rightarrow \infty }\sum _{j=1}^{k} \mathbb {P}(M_n^{j}>T_{\varepsilon })/(k\cdot \omega _1(F))=1-\varepsilon \), and we conclude that \(\text {apx}_k(F)=1\).
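For a concrete instance of the bounded support case, for Uniform[0,1] and \(k=1\) everything is in closed form: the reward of threshold T is \((1-T^n)(1+T)/2\) and \(\mathbb {E}(M_n)=n/(n+1)\), so the best single threshold ratio can be computed exactly and is seen to approach one:

```python
import math

# Best single-threshold ratio for Uniform[0,1], k = 1, over a grid of T.
def best_ratio(n):
    EMn = n / (n + 1)
    return max((1 - T ** n) * (1 + T) / 2 / EMn
               for T in [i / 1000 for i in range(1000)])

print(round(best_ratio(10), 3), round(best_ratio(10_000), 4))
```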

In what follows we restrict attention to distributions F with extreme type Gumbel and \(\omega _1(F)=\infty \). Key to our analysis is the result presented in the Preliminaries (Sect. 2) about Von Mises representations for distributions in the Gumbel family. Before proving the theorem, we need some lemmas about the structure of a distribution in the Gumbel family.

Lemma 5

Let F be a distribution with extreme type in the Gumbel family such that \(\omega _1(F)=\infty \) and let \((V,\eta )\) be a Von Mises representation of F such that \(\lim _{t\rightarrow \infty }\eta (t)=\eta ^{\star }\). Let \(\{a_n\}_{n\in \mathbb {N}}\) and \(\{b_n\}_{n\in \mathbb {N}}\) be scaling and shifting sequences, respectively, for V. For every positive integer n consider \(b_n^{\eta }=b_n+a_n\log \eta ^{\star }\). Then, the following holds:

  (a) \(\{a_n\}_{n\in \mathbb {N}}\) and \(\{b_n^{\eta }\}_{n\in \mathbb {N}}\) are scaling and shifting sequences, respectively, for F.

  (b) For every \(U\in \mathbb {R}\) we have \(\lim _{n\rightarrow \infty }(a_nU+b_n^{\eta })= \infty \).

  (c) For every \(U\in \mathbb {R}\) and every positive integer k we have that \(\lim _{n\rightarrow \infty }(a_nU+b_n^{\eta })/\sum _{j=1}^k \mathbb {E}(M_n^{j})=1/k\), where \(M_n^1,\ldots ,M_n^n\) are the order statistics for F.

Lemma 6

Let F be a distribution with extreme type in the Gumbel family and let \(\{\varTheta _n\}_{n\in \mathbb {N}}\) be a sequence of real values such that \(\varTheta _n\rightarrow \infty \). Then, we have \(\lim _{n\rightarrow \infty }\frac{1}{\varTheta _n}\mathbb {E}(X|X>\varTheta _n)=1\), where X is distributed according to F.
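Lemma 6 can be illustrated with the exponential distribution, which has extreme type Gumbel: by memorylessness, \(\mathbb {E}(X\mid X>t)=t+1\) for \(X\sim \mathrm {Exp}(1)\), so the ratio is \(1+1/t\rightarrow 1\). The sketch below cross-checks this closed form by numerically integrating the tail formula \(\mathbb {E}(X\mid X>t)=t+\int _t^{\infty }\mathbb {P}(X>x)\,dx/\mathbb {P}(X>t)\); it is an illustration only, not part of the text.

```python
import math

# Lemma 6 for F = Exp(1) (extreme type Gumbel): E(X | X > t) = t + 1 by
# memorylessness, so E(X | X > Theta_n) / Theta_n = 1 + 1/Theta_n -> 1.
def cond_mean_quad(t, steps=100_000, upper=60.0):
    # numerically integrate the survival function exp(-x) over [t, upper]
    # (midpoint rule) and apply E(X | X > t) = t + integral / P(X > t)
    h = (upper - t) / steps
    tail_integral = sum(math.exp(-(t + (i + 0.5) * h)) for i in range(steps)) * h
    return t + tail_integral / math.exp(-t)

ratios = [cond_mean_quad(t) / t for t in (5.0, 10.0, 20.0)]  # decreases toward 1
```

The ratios decrease toward 1 as the threshold grows, exactly as the lemma predicts for this distribution.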

We are now ready to prove Theorem 3 (b) for the Gumbel family.

Proof

(Proof of Theorem 3 (b) for the Gumbel family). Let F be a distribution with extreme type in the Gumbel family and such that \(\omega _1(F)=\infty \). Consider a Von Mises pair \((V,\eta )\) that represents F and such that \(\lim _{t\rightarrow \infty }\eta (t)=\eta ^{\star }>0\), guaranteed to exist by Lemma 2. Let \(\{a_n\}_{n\in \mathbb {N}}\) and \(\{b_n\}_{n\in \mathbb {N}}\) be scaling and shifting sequences, respectively, for V. For every positive integer n consider \(b_n^{\eta }=b_n+a_n\log \eta ^{\star }\). We can lower bound the value of \(\text {apx}_k(F)\) by

$$ \sup _{U\in \mathbb {R}}\liminf _{n\rightarrow \infty } \frac{\mathbb {E}\left( X \vert X> a_nU+b_n^{\eta } \right) }{a_nU+b_n^{\eta }} \cdot \frac{a_nU+b_n^{\eta }}{\sum _{j=1}^k \mathbb {E}(M_n^{j})}\cdot \sum _{j=1}^{k} \mathbb {P}(M_n^{j}>a_nU+b_n^{\eta }). $$

By Lemma 5 (b), we have \(a_nU+b_n^{\eta }\rightarrow \infty \) for every U when \(n\rightarrow \infty \), and therefore from Lemma 6 we obtain

$$\lim _{n\rightarrow \infty }\frac{\mathbb {E}\left( X \vert X > a_nU+b_n^{\eta } \right) }{a_nU+b_n^{\eta }}=1 ,$$

for every U. Furthermore, Lemma 5 (c) implies that for every U and every positive integer k it holds \((a_nU+b_n^{\eta })/\sum _{j=1}^k \mathbb {E}(M_n^{j})\rightarrow 1/k\). We conclude that for every U

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{\mathbb {E}\left( X \vert X > a_nU+b_n^{\eta } \right) }{a_nU+b_n^{\eta }} \cdot \frac{a_nU+b_n^{\eta }}{\sum _{j=1}^k \mathbb {E}(M_n^{j})}=\frac{1}{k}. \end{aligned}$$

By Lemma 5 (a), \(\{a_n\}_{n\in \mathbb {N}}\) and \(\{b_n^{\eta }\}_{n\in \mathbb {N}}\) are scaling and shifting sequences, respectively, for F. Therefore, by Theorem 2 we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{j=1}^{k} \mathbb {P}(M_n^{j}>a_nU+b_n^{\eta })&= \lim _{n\rightarrow \infty }\sum _{j=1}^{k} \mathbb {P}\left( \frac{M_n^{j}-b^{\eta }_n}{a_n}>U\right) \\&=\sum _{j=1}^{k}\left( 1-\exp \Big (- e^{-U}\Big )\sum _{s=0}^{j-1}\frac{ e^{-sU}}{s!}\right) \\&=k-\exp \Big (-e^{-U}\Big )\sum _{j=1}^{k}\sum _{s=0}^{j-1}\frac{e^{-sU}}{s!}. \end{aligned}$$

Note that the last term is non-negative for every U. Furthermore, we get that

$$\begin{aligned} \lim _{U\rightarrow -\infty }\exp \Big (-e^{-U}\Big )\sum _{j=1}^{k}\sum _{s=0}^{j-1}\frac{ e^{-sU}}{s!}=\inf _{U\in \mathbb {R}}\exp \Big (-e^{-U}\Big )\sum _{j=1}^{k}\sum _{s=0}^{j-1}\frac{ e^{-sU}}{s!}=0 \end{aligned}$$

since, writing \(x=e^{-U}\), each inner sum \(\exp (-x)\sum _{s=0}^{j-1}x^{s}/s!\) is a partial sum of the series \(\sum _{s=0}^{\infty }x^{s}/s!=\exp (x)\) damped by the factor \(\exp (-x)\), and hence tends to 0 as \(x\rightarrow \infty \). We conclude that

$$\begin{aligned} \sup _{U\in \mathbb {R}}\lim _{n\rightarrow \infty } \frac{\mathbb {E}\left( X \vert X> a_nU+b_n^{\eta } \right) }{a_nU+b_n^{\eta }} \cdot \frac{a_nU+b_n^{\eta }}{\sum _{j=1}^k \mathbb {E}(M_n^{j})}\cdot \sum _{j=1}^{k} \mathbb {P}(M_n^{j}>a_nU+b_n^{\eta })=\frac{1}{k}\cdot k=1, \end{aligned}$$

and therefore \(\text {apx}_k(F)= 1\). That concludes the proof for the Gumbel family.   \(\square \)
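As a quick numerical sanity check of the limit computation in the proof (not needed for the argument): writing \(x=e^{-U}\), the subtracted term becomes \(e^{-x}\sum _{j=1}^{k}\sum _{s=0}^{j-1}x^{s}/s!\), which stays between 0 and k, tends to k as \(x\rightarrow 0\) (i.e. \(U\rightarrow \infty \)), and vanishes as \(x\rightarrow \infty \) (i.e. \(U\rightarrow -\infty \)).

```python
import math

# The subtracted term from the proof, written in the variable x = e^{-U}:
# e^{-x} * sum_{j=1}^{k} sum_{s=0}^{j-1} x^s / s!
def subtracted_term(x, k):
    return math.exp(-x) * sum(
        sum(x ** s / math.factorial(s) for s in range(j))
        for j in range(1, k + 1))
```

For k = 3 the term is close to 3 for very small x and numerically 0 for large x, consistent with the infimum over U being 0.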

5 Extreme Types and Virtual Valuations

The virtual valuation associated with a distribution G is given by \(\phi _G(t)=t-(1-G(t))/g(t)\), where g is the density function of G. When v is distributed according to G, we denote by \(G_{\phi }\) the distribution of \(\phi _G(v)\) and by \(G_{\phi }^+\) the distribution of \(\phi ^+_G(v)=\max \{0,\phi _G(v)\}\). Using Theorem 3 we can apply the existing reductions in the literature [5, 7, 15] to translate our optimal guarantees for single threshold prophet inequalities into optimal fixed price mechanisms, as long as \(G_{\phi }^+\) satisfies the extreme value condition. If \(G_{\phi }^+\) has extreme type Fréchet, the revenue gap of the fixed price PPM for the k-selection problem is bounded by a limit of the maximization problem (3) and, for every k, this revenue gap is more than \(1-1/\sqrt{2\pi k}\) and asymptotically tight in k. When \(k=1\) we further have that the revenue gap is roughly 0.712. When \(G_{\phi }^+\) is in the Gumbel or reversed Weibull families, a fixed price PPM recovers the same revenue as that of the optimal mechanism for the k-selection problem, for every positive integer k.
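A concrete example (chosen here for illustration; it is not taken from the text): for the Pareto distribution \(G(t)=1-t^{-\alpha }\) on \([1,\infty )\), which has extreme type Fréchet of parameter \(\alpha \), the virtual valuation is linear, \(\phi _G(t)=t(1-1/\alpha )\). Hence for \(\alpha >1\) the distribution \(G_{\phi }^+\) is again a (rescaled) Pareto and stays in the Fréchet family, and the asymptotic regularity condition of the next section holds with ratio exactly \(1/\alpha \).

```python
# Pareto(alpha) example: G(t) = 1 - t**(-alpha) on [1, inf), with density
# g(t) = alpha * t**(-alpha - 1).
def virtual_valuation(t, alpha):
    survival = t ** (-alpha)           # 1 - G(t)
    density = alpha * t ** (-alpha - 1)
    return t - survival / density      # phi_G(t) = t * (1 - 1/alpha)

def regularity_ratio(t, alpha):
    # (1 - G(t)) / (t * g(t)); constant and equal to 1/alpha for Pareto
    return t ** (-alpha) / (t * alpha * t ** (-alpha - 1))
```

Linearity of \(\phi _G\) is what makes this case transparent: a linear map of a Fréchet-type random variable clearly preserves the extreme type.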

In what follows, we say that a pair \((V,\eta )\) smoothly represents a distribution G if it satisfies the conditions in (2), where V is in the Von Mises family and \(\lim _{t \rightarrow \omega _1(G)} \eta '(t)=0\). We say that a distribution G with extreme type Fréchet of parameter \(\alpha \) satisfies the asymptotic regularity condition if \(\lim _{t\rightarrow \infty }(1-G(t))/(tg(t))=1/\alpha \), where g is the density of G. This holds, for example, whenever g is non-decreasing [26, Proposition 1.15]. In our next result we show that if a distribution G with an extreme type satisfies either of these two conditions, then the distribution \(G_{\phi }^+\) has an extreme type as well, and furthermore, it belongs to the same family.

Theorem 5

Let G be a distribution satisfying the extreme value condition. Then, the following holds:

  (a) If G has extreme type in the Fréchet family and satisfies the asymptotic regularity condition, then \(G_{\phi }^+\) has extreme type in the Fréchet family as well.

  (b) If G has extreme type Gumbel and can be smoothly represented, then \(G_{\phi }^+\) has extreme type Gumbel as well.