1 Introduction

Traditionally, a cryptographic system is deemed secure if it can be formally proved secure in a certain security model, which usually takes for granted that the secret key is perfectly random. Unfortunately, in the real world the secret may only be obtained from a non-uniform distribution. For example, if the secret source is biometric data [8, 11], a physical source [3, 4], a secret with partial leakage, or a group element from Diffie-Hellman key exchange [16, 22], then it cannot satisfy perfect randomness.

Recently, there has been some interest in basing cryptographic primitives on weak secrets, where the only information about the secret is some non-trivial amount of entropy. Formally, the \((T, \varepsilon )\)-security (in the real model) of a cryptographic application \(P\) essentially requires that for any adversary \(\mathsf A \) with resource \(T\) and any non-uniform distribution \(R\), the expectation of \(f(R)\), which we call the ‘weak expectation’, is upper bounded by \(\varepsilon \), where the function \(f(r)\) denotes \(\mathsf A \)’s advantage conditioned on the secret key being \(r\). In the ideal model, the non-uniform distribution \(R\) is replaced with the uniform distribution. Dodis and Yu [13] discovered an elementary inequality that upper bounds the weak expectation of \(f(R)\) by a product of two terms: the first term only depends on the entropy deficiency (i.e. the difference between \(m = \mathrm{length}(R)\) and the amount of entropy it has), and the second is essentially the ‘variance’ of \(f\) under the uniform distribution \(U_m\). However, in [13], only min-entropy is considered for some applications and collision entropy for others. It’s well known that, as a measure of the diversity, uncertainty, or randomness of a system, the Rényi entropy [24] is the most general notion, which includes the Shannon entropy, the min-entropy, and the collision entropy. The advantage of the Rényi entropy over the collision entropy was discussed by Hayashi [18, 19]. We’ll study the upper bound of the ‘weak expectation’ from the most general entropy perspective: the Rényi entropy. When the Rényi entropy is specialized to the min-entropy and the collision entropy, the corresponding results coincide with those of [13]. Moreover, in [13] the entropy is information-theoretic; in reality, it’s infeasible to measure the ‘weak source’ information-theoretically.

The discovery [6, 17, 25] that simple computational assumptions (namely, the existence of one-way functions) make the computational and information-theoretic notions completely different has been one of the most fruitful results in computer science history, with impact on cryptography, complexity theory, and computational learning theory. Two of the fundamental papers [17, 25] found it natural to extend information theory to the computational setting and to define its most fundamental notion, entropy. The most widely used notion is due to Håstad, Impagliazzo, Levin, and Luby [17] (called HILL entropy). In this paper, we’ll use an expanded version of the HILL entropy to study how to overcome ‘weak’ expectations.

APPLICATIONS. Firstly, we apply the inequalities based on the Rényi entropy and the expanded computational entropy to all unpredictability applications (e.g. one-way functions, MACs, and digital signatures). Secondly, the inequalities depending on the Rényi entropy can be applied to indistinguishability applications, including CPA-secure symmetric-key encryption schemes, weak pseudorandom functions, extractors, and non-malleable extractors, similar to the results of [13] that depend on collision entropy. Thirdly, for some indistinguishability applications, such as CPA-secure symmetric-key encryption schemes and weak pseudorandom functions (PRFs), we obtain results based on the expanded computational entropy, while the collision-entropy-based results of [13] are already great improvements over the state of the art [23], with much simpler proofs. We also introduce the concept of weaker computational extractors, whose input is a source of sufficiently high computational entropy and whose output is close to the uniform distribution under computational distance. Then we show a construction of a weaker computational extractor from any weak PRF.

OUR CONTRIBUTION AND TECHNIQUES. We employ the discrete form of the Hölder inequality, so that the weak expectation can be upper bounded by a product of two terms, the first of which is determined by the Rényi entropy, while the second depends only on the expectation of a power of \(|f|\) under the uniform distribution. More formally, we get two main results:

Result 1

If an unpredictability application \(P\) is \((T, \varepsilon )-\)secure in the ideal model, then \(P\) is \((T, (2^d \cdot \varepsilon )^{\frac{1}{\alpha '}})-\)secure in the \((m-d)-\)real\(_\alpha \) model where \(\alpha \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\).

Result 2

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\). If an indistinguishability application \(P\) is \((T', \varepsilon )-\)secure and \((T', T, \gamma )\)-simulatable, then

  1. If \(1 < \alpha ' < 2\), we have that \(P\) is \((T, \sigma )\)-\(\alpha '\)th power secure, where \(\sigma \le (\frac{\varepsilon + \gamma }{2})^{\frac{\alpha '}{2}}\). In particular, \(P\) is \((T, [2^d \cdot (\frac{\varepsilon + \gamma }{2})^{\frac{\alpha '}{2}}]^{\frac{1}{\alpha '}} )-\)secure in the \((m-d)-\)real\(_\alpha \) model.

  2. If \(\alpha ' \ge 2\), we have that \(P\) is \((T, \sigma )\)-\(\alpha '\)th power secure, where \(\sigma \le \frac{\varepsilon + \gamma }{2}\). In particular, \(P\) is \((T, (2^d \cdot \frac{\varepsilon + \gamma }{2})^{\frac{1}{\alpha '}} )-\)secure in the \((m-d)-\)real\(_\alpha \) model.

Compared to previous results, our results are more general, since the previous literature [13] only considers \(\alpha =\infty \) (i.e. the min-entropy) and \(\alpha = 2\) (i.e. the collision entropy).

Though Barak, Dodis et al. [2, 13] proposed a double-run trick to study the connection between the security model and the square-security model for indistinguishability applications, it seems impossible to directly extend this trick to relate the security model to the \(\beta \)th power security model in the ideal world. Fortunately, we find that if the Hölder inequality is employed, then we can adopt the square-security model as a bridge between the security model and the \(\beta \)th power security model. Consequently, Result 2 can be obtained.

In [13], the randomness of a distribution is measured by its entropy, which is information-theoretic. In reality, it’s infeasible to measure the ‘weak source’ information-theoretically. Thus, in this paper, we extend these measures to the computational case. By employing the Rényi entropy, we expand the computational entropy of [17] and study how to overcome the ‘weak’ expectation. We discover that if the attacker’s advantage circuit size is chosen to be the circuit size in the concept of expanded computational entropy, then this kind of computational entropy is useful in overcoming the ‘weak’ expectation, and results similar to Results 1 and 2 can be obtained. We also show some indistinguishability applications, including CPA-secure symmetric-key encryption schemes, weak pseudorandom functions, and weaker computational extractors.

ORGANIZATION. The rest of the paper is organized as follows. In the following section, we recall some concepts and notations to be used in the paper. In Sect. 3, we present how to overcome weak expectations via the Rényi entropy. In Sect. 4, we use an expanded version of the HILL entropy to study how to overcome ‘weak’ expectations. Section 5 concludes the paper.

2 Preliminaries

In this section, we present some notations and definitions that will be used later.

Definition 2.1

([24]) The Rényi entropy of order \(\alpha \) of a random variable \(X\) is defined as

$$H_\alpha (X)=\frac{1}{1-\alpha } \log (\sum \limits _x Pr(X=x)^\alpha ),$$

where \(\alpha \ge 0\) and \(\alpha \ne 1\).

Remark 2.1

If \(\alpha \rightarrow 1\), \(H_\alpha (X)\) converges to the Shannon entropy [9]:

$$H_1(X)=-\sum _x Pr(X=x) \log Pr(X=x).$$

If \(\alpha =2\), \(H_\alpha (X)\) is called the collision entropy of \(X\):

$$H_2(X)= -\log \sum _x Pr(X=x)^2.$$

If \(\alpha \rightarrow \infty \), \(H_\alpha (X)\) converges to the min-entropy:

$$ H_\infty (X)= - \log \max _x Pr(X=x).$$
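The special cases above can be checked numerically. The following sketch (illustrative only, not part of the formal development) computes the Rényi entropy of a toy distribution and verifies the \(\alpha =2\), \(\alpha \rightarrow \infty \), and \(\alpha \rightarrow 1\) behaviour:

```python
import math

def renyi_entropy(probs, alpha):
    """Rényi entropy H_alpha(X) in bits (Definition 2.1); alpha >= 0, alpha != 1."""
    if alpha == float("inf"):
        return -math.log2(max(probs))          # min-entropy
    return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

probs = [0.4, 0.3, 0.2, 0.1]                   # a biased 4-outcome distribution

h2 = renyi_entropy(probs, 2)                   # collision entropy: -log2(sum p^2)
h_min = renyi_entropy(probs, float("inf"))     # min-entropy: -log2(max p)
shannon = -sum(p * math.log2(p) for p in probs)

assert abs(h2 + math.log2(0.30)) < 1e-9        # here sum p^2 = 0.30
assert abs(h_min + math.log2(0.4)) < 1e-9
# H_alpha is non-increasing in alpha, so H_min <= H_2 <= H_1 (Shannon).
assert h_min <= h2 <= shannon
# alpha -> 1 recovers the Shannon entropy.
assert abs(renyi_entropy(probs, 1.000001) - shannon) < 1e-4
```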

It’s very natural to extend the average (aka conditional) collision entropy and min-entropy of [13] to the average (aka conditional) Rényi entropy, as follows.

Definition 2.2

The average (aka conditional) Rényi entropy of a random variable \(X\) conditioned on another random variable \(Z\) is defined as follows.

$$H_\alpha (X|Z) = \frac{1}{1-\alpha } \log (\mathbb {E}_{z \leftarrow Z} [\sum \limits _x Pr(X=x|Z=z)^\alpha ]),$$

where \(z \leftarrow Z\) denotes sampling an element \(z\) according to the distribution \(Z\), \(\alpha \ge 0\) and \(\alpha \ne 1\).

If \(\alpha =2\) (resp. \(\infty \)), \(H_\alpha (X|Z)\) coincides with the definition of average (aka conditional) collision entropy (resp. min-entropy) in [13]. The difference between average (aka conditional) Rényi entropy (when \(1<\alpha \le 2\)) and collision entropy was discussed in [18, 19].
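For concreteness, the conditional quantity of Definition 2.2 can be computed directly from a joint distribution; the following illustrative sketch does so for \(\alpha = 2\) on a toy example:

```python
import math

def cond_renyi_entropy(joint, alpha):
    """H_alpha(X|Z) per Definition 2.2, in bits; joint[z][x] = Pr(X=x, Z=z)."""
    total = 0.0
    for row in joint.values():
        pz = sum(row.values())                                  # Pr(Z=z)
        total += pz * sum((pxz / pz) ** alpha for pxz in row.values())
    return math.log2(total) / (1 - alpha)

# X uniform on {0,1} when Z=0; X fully determined when Z=1.
joint = {0: {0: 0.25, 1: 0.25}, 1: {0: 0.5}}

h2 = cond_renyi_entropy(joint, 2)
# E_z[sum_x Pr(X=x|Z=z)^2] = 0.5*0.5 + 0.5*1 = 0.75, so H_2(X|Z) = -log2(0.75).
assert abs(h2 + math.log2(0.75)) < 1e-9
```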

The advantage of a circuit \(C\) in distinguishing the random variables \(X\), \(Y\) is denoted as \(\varDelta _C(X, Y)= |Pr(C(X)=1)- Pr(C(Y)=1)|\). The statistical distance between two random variables \(X\), \(Y\) is defined by \(\mathsf{{SD}}(X, Y)= \frac{1}{2} \sum \limits _{x} |Pr(X=x)-Pr(Y=x)|= \max \limits _{C} \varDelta _C(X, Y)\). Given a circuit \(D\), define the computational distance \(\delta ^D\) between \(X\) and \(Y\) as \(\delta ^D(X, Y ) = |\mathbb {E}[D(X)]- \mathbb {E}[D(Y )]|\). We write \(\varDelta _C(X, Y|Z)\)(resp. \(\mathsf{{SD}}(X, Y|Z)\), \(\delta ^D(X, Y|Z)\)) as shorthand for \(\varDelta _C((X, Z), (Y, Z))\)(resp. \(\mathsf{{SD}}((X, Z), (Y, Z))\), \(\delta ^D((X, Z), (Y, Z))\)).

Let \(\mathcal {D}_s^{det, \{0, 1\}}\) (resp. \(\mathcal {D}_s^{det, [0, 1]}\)) be the set of all deterministic circuits of size \(s\) with output in \(\{0, 1\}\) (resp. \([0, 1]\)), and let \(\mathcal {D}_s^{rand, \{0, 1\}}\) (resp. \(\mathcal {D}_s^{rand, [0, 1]}\)) be the set of probabilistic circuits of size \(s\) with output in \(\{0, 1\}\) (resp. \([0, 1]\)).

HILL computational entropy is parameterized by quality (how distinguishable is \(X\) from a variable \(Z\) that has true entropy) and quantity (how much true entropy is there in \(Z\)). Formally, it’s defined as follows.

Definition 2.3

([5, 17]) A distribution \(X\) has \(HILL\) entropy at least \(k\), denoted \(H_{\varepsilon , s}^{HILL} (X) \ge k\) if there exists a distribution \(Y\) where \(H_{\infty } (Y) \ge k\), such that \(\forall D \in \mathcal {D}_s^{det, [0, 1]}\), \(\delta ^D(X, Y) \le \varepsilon .\)

Definition 2.4

([21]) Let \((X, Y)\) be a pair of random variables. \(X\) has conditional HILL entropy at least \(k\) conditioned on \(Y\), denoted \(H^{HILL}_{\varepsilon , s}(X|Y) \ge k\), if there exists a collection of distributions \(Z_y\) for each \(y \in Y\), giving rise to a joint distribution \((Z, Y)\), such that \(H_\infty (Z|Y) \ge k\) and \(\forall D \in \mathcal {D}_s^{rand, [0, 1]}\), \(\delta ^D((X, Y), (Z, Y)) \le \varepsilon \).

As shown in [15], HILL entropy (resp. conditional HILL entropy) drawing \(D\) from \(\mathcal {D}_s^{det, \{0, 1\}}\), \(\mathcal {D}_s^{det, [0, 1]}\), \(\mathcal {D}_s^{rand, \{0, 1\}}\), \(\mathcal {D}_s^{rand, [0, 1]}\) is essentially equivalent. However, the above definitions are limited to the min-entropy. It’s natural to expand them to the Rényi entropy, as follows.

Definition 2.5

A distribution \(X\) has Expanded \(HILL\) entropy at least \(k\), denoted \(H_{\alpha , \varepsilon , s}^{EHILL} (X) \ge k\) if there exists a distribution \(Y\) where \(H_\alpha (Y) \ge k\), such that \(\forall D \in \mathcal {D}_s^{det, [a, b]}\), \(\delta ^D(X, Y) \le \varepsilon ,\) where \(a < b\) and \(\alpha \ge 0\) and \(\alpha \ne 1\).

Definition 2.6

Let \((X, Y)\) be a pair of random variables. \(X\) has conditional EHILL entropy at least \(k\) conditioned on \(Y\), denoted \(H^{EHILL}_{\alpha , \varepsilon , s}(X|Y) \ge k\), if there exists a collection of distributions \(Z_y\) for each \(y \in Y\), giving rise to a joint distribution \((Z, Y)\), such that \(H_\alpha (Z|Y) \ge k\) and \(\forall D \in \mathcal {D}_s^{rand, [a, b]}\), \(\delta ^D((X, Y), (Z, Y)) \le \varepsilon ,\) where \(a < b\) and \(\alpha \ge 0\) and \(\alpha \ne 1\).

Similar to [15], Expanded \(HILL\) entropy (resp. conditional EHILL entropy) drawing \(D\) from \(\mathcal {D}_s^{det, \{a, b\}}\), \(\mathcal {D}_s^{det, [a, b]}\), \(\mathcal {D}_s^{rand, \{a, b\}}\), \(\mathcal {D}_s^{rand, [a, b]}\) is essentially equivalent.

ABSTRACT SECURITY GAMES. The security of an application \(P\) is defined via an interactive game between a probabilistic attacker \(\mathsf A \) and a probabilistic challenger \(\mathsf C (r)\), where \(\mathsf C \) is fixed by the definition of \(P\), and where the particular secret key \(r\) used by \(\mathsf C \) is derived from \(U_m\) in the ‘ideal’ setting, and from some distribution \(R\) in the ‘real’ setting. The game can have an arbitrary structure, but at the end \(\mathsf C (r)\) should output a bit, with output 1 indicating that \(\mathsf A \) ‘won’ the game and 0 otherwise.

Given a particular key \(r\), we define the advantage \(f_\mathsf{A }(r)\) of \(\mathsf A \) on \(r\) (against the particular \(\mathsf C \) fixed by \(P\)) as follows. For unpredictability games, \(f_\mathsf{A }(r)\) is the expected value of \(\mathsf C (r)\) taken over the internal coins of \(\mathsf A \) and \(\mathsf C \); for indistinguishability games, \(f_\mathsf{A }(r)\) is the expectation of \(\mathsf C (r)-\frac{1}{2}\), thus \(f_\mathsf{A } (r)= Pr_\mathsf{C , r}(\mathsf A ~ wins) - \frac{1}{2}\). \(f_\mathsf{A }\) is called \(\mathsf A \)’s advantage circuit.

Let \(U_m\) be the uniform distribution over \(\{0, 1\}^m\). Denote \(|\mathbb {E}(f_\mathsf{A }(U_m))|\) as the advantage of \(\mathsf A \) (in the ideal model). For \(c \ge 0\) and \(c \ne 1\), denote \(\max _R|\mathbb {E}(f_\mathsf{A }(R))|\), taken over all distributions \(R\) with \(H_c(R) \ge m-d\), as the advantage of \(\mathsf A \) in the \((m - d)-\)real\(_c\) model, instead of limiting \(c\) to \(\infty \) or 2 as in [13]. Similarly, for \(\alpha \ge 0\) and \(\alpha \ne 1\), denote \(\max _R|\mathbb {E}(f_\mathsf{A }(R))|\), taken over all distributions \(R\) with \(H_{\alpha , \varepsilon , s}^{EHILL} (R)\ge m-d\), as the advantage of \(\mathsf A \) in the \((m-d)-\)real\(_{\alpha , \varepsilon , s}^{EHILL}\) model.

Definition 2.7

([13]) An application \(P\) is \((T, s, \varepsilon )-\)secure (in the ideal model) if the advantage of any \(T-\)bounded \( \mathsf A \) with the advantage circuit size \(s\) is at most \(\varepsilon \).

For \(c \ge 0\) and \(c \ne 1\), an application \(P\) is \((T', \varepsilon ')-\)secure in the \((m-d)-\)real\(_c\) model if the advantage of any \(T'-\)bounded \(\mathsf A \) in the \((m-d)-\)real\(_c\) model is at most \(\varepsilon '\).

Definition 2.8

For \(\alpha \ge 0\) and \(\alpha \ne 1\), an application \(P\) is \((T', s', \varepsilon ')-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s'}^{EHILL}\) model if the advantage of any \(T'-\)bounded \(\mathsf A \) with the advantage circuit size \(s'\) in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s'}^{EHILL}\) model is at most \(\varepsilon '\).

3 Overcoming Weak Expectations via the Rényi Entropy

Given the unavailability of perfectly random secret sources, it’s valuable to discover the connection between the advantages \(|\mathbb {E}(f_\mathsf{A }(U_m))|\) (in the ideal model) and \(\max _R|\mathbb {E}(f_\mathsf{A }(R))|\) (in the \((m - d)-\)real\(_c\) model), where \(c \ge 0\) and \(c \ne 1\). Unfortunately, existing results [13] only consider the min-entropy for unpredictability applications and the collision entropy for indistinguishability applications. In the following, based on the Rényi entropy and the discrete form of the Hölder inequality, we’ll present an inequality that unifies the corresponding connection for both unpredictability and indistinguishability applications. Then we present its applications and discuss its form when side information exists.

Lemma 3.1

([1]) Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\), then for all \((x_1, x_2, \cdots , x_n), (y_1, y_2, \cdots , y_n) \in \mathbb {R}^n\), we have

$$\sum \limits _{k=1}^n|x_k y_k| \le (\sum \limits _{k=1}^n |x_k|^\alpha )^{1/\alpha } (\sum \limits _{k=1}^n |y_k|^{\alpha '})^{1/{\alpha '}},$$

which is the discrete form of the Hölder inequality.
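Lemma 3.1 is easy to verify numerically; the sketch below (illustrative only) checks the inequality for random vectors and several conjugate pairs \((\alpha , \alpha ')\):

```python
import random

def holder_ok(x, y, alpha):
    """Check sum|x_k y_k| <= (sum|x_k|^a)^(1/a) * (sum|y_k|^a')^(1/a')."""
    ap = alpha / (alpha - 1)                      # conjugate: 1/alpha + 1/ap = 1
    lhs = sum(abs(a * b) for a, b in zip(x, y))
    rhs = (sum(abs(a) ** alpha for a in x)) ** (1 / alpha) * \
          (sum(abs(b) ** ap for b in y)) ** (1 / ap)
    return lhs <= rhs + 1e-12                     # small slack for rounding

random.seed(0)
for alpha in (1.5, 2.0, 3.0):
    x = [random.uniform(-1, 1) for _ in range(10)]
    y = [random.uniform(-1, 1) for _ in range(10)]
    assert holder_ok(x, y, alpha)
```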

Theorem 3.1

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\), then for any (deterministic) real-valued function \(f: \{0,1\}^m \rightarrow \mathbb {R}\) and any random variable \(R\) with \(H_\alpha (R)\ge m-d\), we have

$$|\mathbb {E}[f(R)]| \le (2^d \cdot \mathbb {E}[|f(U_m)|^{\alpha '}])^{\frac{1}{\alpha '}}.$$

Proof

From Definition 2.1, \(\sum \limits _{r} Pr(R=r)^\alpha = 2 ^{(1-\alpha ) H_\alpha (R)}\). According to Lemma 3.1, we have

$$\begin{aligned}&|\mathbb {E}[f(R)]|=|\sum \limits _{r} Pr(R=r) f(r)| \le \sum \limits _{r} |Pr(R=r) f(r)|\\&{~~~~~~~~~~~~}\le [\sum \limits _{r} Pr(R=r)^\alpha ]^{\frac{1}{\alpha }} \cdot [\sum \limits _{r} |f(r)|^{\alpha '}]^{\frac{1}{\alpha '}}\\&{~~~~~~~~~~~~}=[2^{(1-\alpha )H_\alpha (R)}]^{\frac{1}{\alpha }}\cdot [\frac{1}{2^m} \cdot \sum \limits _{r} |f(r)|^{\alpha '}]^{\frac{1}{\alpha '}} \cdot (2^m)^{\frac{1}{\alpha '}}\\&{~~~~~~~~~~~~}=[2^{(1-\alpha )H_\alpha (R)}]^{\frac{1}{\alpha }} \cdot \{2^m \cdot \mathbb {E}[|f(U_m)|^{\alpha '}]\}^{\frac{1}{\alpha '}}\\&{~~~~~~~~~~~~} \le 2^ {\frac{(1-\alpha )(m-d)}{\alpha }} \cdot \{2^m \cdot \mathbb {E}[|f(U_m)|^{\alpha '}]\}^{\frac{1}{\alpha '}}\\&{~~~~~~~~~~~~}=(2^d \cdot \mathbb {E}[|f(U_m)|^{\alpha '}])^{\frac{1}{\alpha '}}. \end{aligned}$$
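As a sanity check of Theorem 3.1 (illustrative only), the following sketch samples a distribution \(R\) with \(H_2(R) \ge m - d\) and a toy advantage function \(f\), and confirms the bound for the conjugate pair \(\alpha = \alpha ' = 2\):

```python
import math, random

random.seed(1)
m, d = 4, 1                       # keys in {0,1}^4, entropy deficiency d = 1
alpha, ap = 2.0, 2.0              # conjugate pair: 1/alpha + 1/alpha' = 1

f = [random.uniform(-0.5, 0.5) for _ in range(2 ** m)]   # toy advantage function

# Sample a distribution R on {0,1}^m with H_2(R) >= m - d by rejection.
while True:
    w = [random.random() for _ in range(2 ** m)]
    s = sum(w)
    R = [p / s for p in w]
    if -math.log2(sum(p * p for p in R)) >= m - d:
        break

lhs = abs(sum(p * fr for p, fr in zip(R, f)))                    # |E[f(R)]|
rhs = (2 ** d * sum(fr ** 2 for fr in f) / 2 ** m) ** (1 / ap)   # Theorem 3.1 bound
assert lhs <= rhs + 1e-12
```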

Remark 3.1

  • If \(\alpha \rightarrow \infty \), we get \(|\mathbb {E}[f(R)]| \le 2^d \mathbb {E}[|f(U_m)|]\). Furthermore, if \(f\) is a non-negative function, the result of Theorem 3.1 degenerates into the result of Lemma 3.1 in [13]. It was mentioned there that when the value of \(f\) can be negative, the result in Corollary 3.1 of [13] (i.e. \(\mathbb {E}[f(R)] \le 2^d \mathbb {E}[f(U_m)]\)) is generally false, while our result shows that if absolute values are taken on both sides of the inequality, then the inequality holds.

  • If \(\alpha =2\), the result of Theorem 3.1 degenerates into the result of Lemma 3.2 in [13].

  • If \(\alpha \rightarrow 1\), using L’Hôpital’s rule, we get

    $$\begin{aligned}&\lim \limits _{\alpha ' \rightarrow \infty } \{2^d \cdot \mathbb {E}[|f(U_m)|^{\alpha '}]\}^{\frac{1}{\alpha '}} = \lim \limits _{\alpha ' \rightarrow \infty } [\sum \limits _{r} |f(r)|^{\alpha '}]^{\frac{1}{\alpha '}}\\&= e^{\lim \limits _{\alpha ' \rightarrow \infty } \frac{\ln (\sum \limits _{r} |f(r)|^{\alpha '})}{\alpha '} }=e^{\lim \limits _{\alpha ' \rightarrow \infty } \frac{\sum \limits _{r} |f(r)|^{\alpha '} \ln |f(r)|}{\sum \limits _{r} |f(r)|^{\alpha '}}}=\max \limits _r |f(r)|.\\ \end{aligned}$$

On the other hand, we can easily get that \(|\mathbb {E}[f(R)]| \le \max \limits _r |f(r)|\) irrespective of the Rényi entropy. Thus it’s meaningless to discuss the result under the Shannon entropy.
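The limit computed above, \(\lim _{\alpha ' \rightarrow \infty } (\sum _r |f(r)|^{\alpha '})^{1/\alpha '} = \max _r |f(r)|\), can also be observed numerically on a toy function:

```python
# The alpha'-norm of f approaches max_r |f(r)| as alpha' grows.
f = [0.1, -0.35, 0.2, 0.45]
norms = [sum(abs(v) ** ap for v in f) ** (1 / ap) for ap in (10, 50, 200)]

# The norms decrease toward max_r |f(r)| = 0.45.
assert norms[0] >= norms[1] >= norms[2] >= 0.45 - 1e-9
assert abs(norms[-1] - 0.45) < 1e-3
```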

Corollary 3.1

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\), then for any (deterministic) real-valued function \(f : \{0, 1\}^m \rightarrow [0, 1]\) and any random variable \(R\) with \(H_\alpha (R) \ge m-d\), we have

$$\mathbb {E}[f(R)] \le (2^d \cdot \mathbb {E}[|f(U_m)|^{\alpha '}])^{\frac{1}{\alpha '}} \le (2^d \cdot \mathbb {E}[|f(U_m)|])^{\frac{1}{\alpha '}}.$$

Corollary 3.2

If an unpredictability application \(P\) is \((T, \varepsilon )-\)secure in the ideal model, then \(P\) is \((T, (2^d \cdot \varepsilon )^{\frac{1}{\alpha '}})-\)secure in the \((m-d)-\)real\(_\alpha \) model where \(\alpha \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\).

Remark 3.2

The above corollary shows an upper bound on the advantage of an adversary in the \((m-d)-\)real\(_\alpha \) model, though the upper bound is not tight. Since \( \lim \limits _{\alpha \rightarrow \infty } (2^d \cdot \varepsilon )^{\frac{1}{\alpha '}} = \lim \limits _{\alpha ' \rightarrow 1} (2^d \cdot \varepsilon )^{\frac{1}{\alpha '}} = 2^d \cdot \varepsilon \), the above corollary degenerates into Corollary 3.1 of [13].

Corollary 3.2 can be applied to all “unpredictability” applications such as one-way functions, MACs, and digital signatures. It shows that if the secret sources derived from the real world only guarantee high Rényi entropy, then unpredictability applications proven secure in the ideal model need not be redesigned for the real model, and their real-model security levels can be bounded in terms of the ideal-model security and the Rényi entropy of the ‘weak’ random secret sources.
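As a worked example (with hypothetical parameters, not drawn from the paper), suppose an unpredictability application has ideal-model security \(\varepsilon = 2^{-64}\) and the key has entropy deficiency \(d = 8\). Corollary 3.2 then gives the following real-model bounds for several orders \(\alpha \):

```python
eps, d = 2.0 ** -64, 8          # hypothetical ideal security and entropy deficiency

bounds = {}
for alpha in (1.5, 2.0, 4.0, 1e9):           # 1e9 approximates alpha -> infinity
    ap = alpha / (alpha - 1)                 # conjugate exponent alpha'
    bounds[alpha] = (2 ** d * eps) ** (1 / ap)   # Corollary 3.2 bound

# Larger alpha (a stronger entropy assumption) gives a smaller, i.e. better, bound;
# alpha -> infinity recovers the min-entropy bound 2^d * eps = 2^-56 of [13].
assert bounds[1.5] > bounds[2.0] > bounds[4.0] > bounds[1e9]
assert abs(bounds[1e9] - 2.0 ** -56) < 1e-22
```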

Definition 3.1

Consider \(\beta > 1\). An application \(P\) is \((T, \sigma )-\beta \)th power secure if for any \(T-\)bounded adversary \(\mathsf A \), we have \(\mathbb {E}[|f(U_m)|^\beta ] \le \sigma \), where \(f(r)\) denotes \(\mathsf A \)’s advantage conditioned on the key being \(r\).

Corollary 3.3

If an indistinguishability application \(P\) is \((T, \sigma )-\alpha '\)th power secure in the ideal model, then \(P\) is \((T, (2^d \cdot \sigma )^{\frac{1}{\alpha '}})-\)secure in the \((m-d)-\)real\(_\alpha \) model where \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\).

Though the min-entropy can be generalized to the Rényi entropy for all unpredictability applications, it appears very difficult to connect \(\mathbb {E}[|f(U_m)|^{\alpha '}]\) and \(\mathbb {E}[f(U_m)]\) for indistinguishability applications. The reason is that for indistinguishability applications, the range of the function \(f(r)\) is \([-\frac{1}{2}, \frac{1}{2}]\), so the absolute value on the right-hand side of the inequality in Theorem 3.1 cannot be dropped. Fortunately, Dodis et al. [13] proposed the double-run trick to connect \(\mathbb {E}[f_\mathsf{B }(U_m)]\) and \(\mathbb {E}[f_\mathsf{A }(U_m)^2]\), where \(\mathsf A \) and \(\mathsf B \) are two adversaries in the same kind of attack game with different numbers of oracle queries. We’ll adopt this technique and find the relationship between \(\mathbb {E}[f_\mathsf{A }(U_m)^2]\) and \(\mathbb {E}[|f_\mathsf{A }(U_m)|^{\alpha '}]\) with \(\alpha ' > 1\).

Dodis et al. [13] introduced the concept of simulatability for an indistinguishability application \(P\) and obtained the following lemma (Lemma 3.2) by using the double-run trick.

Lemma 3.2

(see [13]) Assume \(P\) is \((T', \varepsilon )-\)secure and \((T', T, \gamma )\)-simulatable, then \(P\) is \((T, \sigma )\)-square secure, where \(\sigma \le \frac{\varepsilon + \gamma }{2} \).

In particular, \(P\) is \((T, \sqrt{2^{d-1}(\varepsilon + \gamma )})\)-secure in the \((m-d)\)-real\(_2\) model.

In the above lemma, only the collision entropy is considered. We’ll generalize it to the Rényi entropy. Divesh Aggarwal observed that the following lemma can be employed to make this generalization.

Lemma 3.3

Consider \(1 < \beta < 2\). For any (deterministic) real-valued function \(f: \{0, 1\}^m \rightarrow [-\frac{1}{2}, \frac{1}{2}]\), we have

$$ \mathbb {E}[|f(U_m)|^\beta ] \le \{ \mathbb {E}[|f(U_m)|^2]\}^{\frac{\beta }{2}}.$$

The proof of Lemma 3.3 is given in Appendix B.
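Lemma 3.3 is an instance of Jensen’s inequality (since \(x^{\beta /2}\) is concave for \(1< \beta < 2\)); a quick numerical check (illustrative only):

```python
import random

random.seed(2)
f = [random.uniform(-0.5, 0.5) for _ in range(32)]   # toy advantage function
n = len(f)
second_moment = sum(v * v for v in f) / n            # E[|f(U_m)|^2]

for beta in (1.2, 1.5, 1.9):
    lhs = sum(abs(v) ** beta for v in f) / n         # E[|f(U_m)|^beta]
    # Lemma 3.3: E[|f|^beta] <= (E[|f|^2])^(beta/2) for 1 < beta < 2.
    assert lhs <= second_moment ** (beta / 2) + 1e-12
```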

Theorem 3.2

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\). Assume an indistinguishability application \(P\) is \((T', \varepsilon )-\)secure and \((T', T, \gamma )\)-simulatable, then

  1. If \(1 < \alpha ' < 2\), we have that \(P\) is \((T, \sigma )\)-\(\alpha '\)th power secure, where \(\sigma \le (\frac{\varepsilon + \gamma }{2})^{\frac{\alpha '}{2}}\). In particular, by Corollary 3.3, \(P\) is \((T, [2^d \cdot (\frac{\varepsilon + \gamma }{2})^{\frac{\alpha '}{2}}]^{\frac{1}{\alpha '}} )-\)secure in the \((m-d)-\)real\(_\alpha \) model.

  2. If \(\alpha ' \ge 2\), we have that \(P\) is \((T, \sigma )\)-\(\alpha '\)th power secure, where \(\sigma \le \frac{\varepsilon + \gamma }{2}\). In particular, by Corollary 3.3, \(P\) is \((T, (2^d \cdot \frac{\varepsilon + \gamma }{2})^{\frac{1}{\alpha '}} )-\)secure in the \((m-d)-\)real\(_\alpha \) model.

Proof

By Lemma 3.2, we have that \(P\) is \((T, \sigma ')\)-square secure, where \(\sigma ' \le \frac{\varepsilon + \gamma }{2}\).

(1) If \(1 < \alpha ' < 2\), from Lemma 3.3, we obtain that

$$\mathbb {E}[|f_\mathsf{A }(U_m)|^{\alpha '}] \le \{ \mathbb {E}[|f_\mathsf{A }(U_m)|^2]\}^{\frac{\alpha '}{2}} \le (\frac{\varepsilon + \gamma }{2})^{\frac{\alpha '}{2}}.$$

Therefore, \(P\) is \((T, \sigma )\)-\(\alpha '\)th power secure, where \(\sigma \le (\frac{\varepsilon + \gamma }{2})^{\frac{\alpha '}{2}}\).

(2) If \(\alpha ' \ge 2\), then \(\mathbb {E}[|f_\mathsf{A }(U_m)|^{\alpha '}] \le \mathbb {E}[|f_\mathsf{A }(U_m)|^2]\), since \(f_\mathsf{A }(r) \in [-\frac{1}{2}, \frac{1}{2}]\) for all \(r \in \{0, 1\}^m\). Therefore, \(P\) is \((T, \sigma )\)-\(\alpha '\)th power secure, where \(\sigma \le \frac{\varepsilon + \gamma }{2}\).

Remark 3.3

Dodis et al. [13] enumerated the applications of Lemma 3.2 to CPA-secure symmetric-key encryption schemes, weak pseudorandom functions, extractors, and non-malleable extractors. If the weak random sources are measured by the Rényi entropy instead of the collision entropy, we can obtain similar results via Theorem 3.2.

In some settings, the weak secret sources are derived using some procedure during which the attacker gets some side information \(S\) about the secret sources. Therefore, it’s valuable to extend the inequality of Theorem 3.1 to the average (aka conditional) setting. It can then be employed similarly to study the relationship between the average-case ideal and real security quantities. The inequality is as follows.

Theorem 3.3

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\), then for any real-valued function \(f(z, s)\) and any random variables \((Z, S)\), where \(|Z|=m\) and \(H_\alpha (Z|S)\ge m-d\), we have

$$|\mathbb {E}[f(Z, S)]| \le \{2^d \cdot \mathbb {E}[|f(U_m, S)|^{\alpha '}]\}^{\frac{1}{\alpha '}}.$$
  • If \(\alpha \rightarrow \infty \) and \(f\) is a non-negative function, the above result degenerates into the result of Lemma 3.5 (a) in [13].

  • If \(\alpha =2\), the above result degenerates into the result of Lemma 3.5 (b) in [13].

Proof

From Definition 2.2, we have

$$\sum \limits _{s} Pr[S=s] \sum \limits _{z} Pr[Z=z|S=s]^\alpha =2^{(1-\alpha )H_\alpha (Z|S)}.$$

Therefore, from Lemma 3.1, we have

$$\begin{aligned}&|\mathbb {E}[f(Z, S)]|= |\sum \limits _{s, z} Pr[S=s] \cdot Pr[Z=z|S=s] \cdot f(z, s)|\\&= 2^m \cdot |\sum \limits _{s, z} [ (\frac{1}{2^m})^{\frac{1}{\alpha }} \cdot Pr[S=s]^{\frac{1}{\alpha }} \cdot Pr[Z=z|S=s] ] \cdot [(\frac{1}{2^m})^{\frac{1}{\alpha '}} \cdot Pr[S=s]^{\frac{1}{\alpha '}} \cdot f(z, s)]| \\&\le 2^m \cdot \root \alpha \of {\sum \limits _{s, z} \frac{1}{2^m} \cdot Pr[S=s] \cdot Pr[Z=z|S=s]^\alpha } \cdot \root \alpha ' \of {\sum \limits _{s, z} \frac{1}{2^m} \cdot Pr[S=s] \cdot |f(z, s)|^{\alpha '}}\\&= 2^m \cdot \root \alpha \of {\frac{1}{2^m} \cdot 2^{(1-\alpha ) \cdot H_\alpha (Z|S)}} \cdot \root \alpha ' \of { \mathbb {E}[|f(U_m, S)|^{\alpha '}]}\\&\le 2^ {\frac{(1-\alpha )(m-d)}{\alpha }} \cdot \{2^m \cdot \mathbb {E}[|f(U_m, S)|^{\alpha '}]\}^{\frac{1}{\alpha '}}\\&= \{2^d \cdot \mathbb {E}[|f(U_m, S)|^{\alpha '}]\}^{\frac{1}{\alpha '}}.\\ \end{aligned}$$
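A numerical sanity check of Theorem 3.3 (illustrative only, for \(\alpha = \alpha ' = 2\), a two-valued side information \(S\), and a toy joint distribution):

```python
import math, random

random.seed(3)
m, d, alpha = 3, 1, 2.0
ap = alpha / (alpha - 1)                 # conjugate exponent alpha'
n = 2 ** m

# S uniform on two values; sample Pr[Z=z|S=s] by rejection until the
# conditional collision entropy H_2(Z|S) of Definition 2.2 is >= m - d.
while True:
    cond = []
    for _ in range(2):
        w = [random.random() for _ in range(n)]
        t = sum(w)
        cond.append([p / t for p in w])
    coll = sum(0.5 * sum(p ** alpha for p in pz) for pz in cond)
    if math.log2(coll) / (1 - alpha) >= m - d:
        break

# A toy advantage function f(z, s) in [-1/2, 1/2].
f = [[random.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(2)]

lhs = abs(sum(0.5 * sum(p * v for p, v in zip(pz, fs))
              for pz, fs in zip(cond, f)))                       # |E[f(Z, S)]|
rhs = (2 ** d * sum(0.5 * sum(abs(v) ** ap for v in fs) / n
                    for fs in f)) ** (1 / ap)                    # Theorem 3.3 bound
assert lhs <= rhs + 1e-12
```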

4 Overcoming Weak Expectations via the Expanded Computational Entropy

Dodis and Yu [13] found some elegant results about overcoming weak expectations based on the min-entropy and the collision entropy, which are information-theoretic. In reality, the ‘weak source’ may only be measured by efficient algorithms. Thus, computational entropy may be a more valuable tool for studying how to overcome weak expectations. Two papers [17, 25] extended information theory to the computational setting and attempted to define its most fundamental notion, entropy. Since the HILL entropy introduced by Håstad, Impagliazzo, Levin, and Luby [17] is the most widely used, in this section we’ll use an expanded version of the HILL entropy (EHILL entropy) to study how to overcome ‘weak’ expectations. We choose the attacker’s advantage circuit size as the circuit size in the concept of expanded computational entropy, and obtain an inequality via the EHILL entropy. Then we present the relationship between the ideal and real security quantities via the EHILL entropy for all unpredictability applications and some indistinguishability applications. We also show some concrete indistinguishability applications. Finally, we discuss the form of the inequality when side information exists.

Theorem 4.1

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\), then for any (deterministic) real-valued function \(f : \{0, 1\}^m \rightarrow [a, b]\) with size \(s\) and any random variable \(R\) with \(H_{\alpha , \varepsilon _0, s}^{EHILL} (R) \ge m-d\), we have \(|\mathbb {E}[f(R)]| \le \{2^d \cdot \mathbb {E}[|f(U_m)|^{\alpha '}]\}^{\frac{1}{\alpha '}} + \varepsilon _0.\)

Proof

Since \(H_{\alpha , \varepsilon _0, s}^{EHILL} (R) \ge m-d\), we get that there exists a distribution \(Y\) where \(H_\alpha (Y) \ge m-d\), such that \(\forall D \in \mathcal {D}_s^{det, [a, b]}\), \(\delta ^D(R, Y) \le \varepsilon _0.\)

From Theorem 3.1, we have that \(|\mathbb {E}[f(Y)]| \le \{2^d \cdot \mathbb {E}[|f(U_m)|^{\alpha '}]\}^{\frac{1}{\alpha '}}\). Since \(f \in \mathcal {D}_s^{det, [a, b]}\), we have that \(|\mathbb {E}[f(R)]- \mathbb {E}[f(Y)]| \le \varepsilon _0\). Thus

$$|\mathbb {E}[f(R)]| \le \{2^d \cdot \mathbb {E}[|f(U_m)|^{\alpha '}]\}^{\frac{1}{\alpha '}} + \varepsilon _0.$$

Corollary 4.1

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\). For any (deterministic) real-valued function \(f : \{0, 1\}^m \rightarrow [0, 1]\) with size \(s\) and any random variable \(R\) with \(H_{\alpha , \varepsilon _0, s}^{EHILL} (R) \ge m-d\), we have

$$\mathbb {E}[f(R)] \le \{2^d \cdot \mathbb {E}[f(U_m)^{\alpha '}]\}^{\frac{1}{\alpha '}} + \varepsilon _0 \le \{2^d \cdot \mathbb {E}[f(U_m)]\}^{\frac{1}{\alpha '}} + \varepsilon _0.$$

Corollary 4.2

If an unpredictability application \(P\) is \((T, \varepsilon )-\)secure in the ideal model, then \(P\) is \((T, (2^d \cdot \varepsilon ) ^{\frac{1}{\alpha '}} +\varepsilon _0)-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s}^{EHILL}\) model where \(\alpha \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\).

The above corollary shows that if the secret sources derived from the real world only guarantee high expanded computational entropy, then unpredictability applications proven secure in the ideal model need not be redesigned for the real model, and their real-model security levels can be bounded in terms of the ideal-model security and the expanded computational entropy of the ‘weak’ random secret sources.

For indistinguishability applications, since \(f(r) \in [-\frac{1}{2}, \frac{1}{2}]\), we cannot obtain an inequality like that of Corollary 4.1. Instead, we introduce the following definition.

Definition 4.1

Consider \(\beta > 1\). An application \(P\) is \((T, s, \sigma )-\beta \)th power secure if for any \(T-\)bounded adversary \(\mathsf A \) with advantage circuit size \(s\), we have \(\mathbb {E}[|f(U_m)|^\beta ] \le \sigma \), where \(f(r)\) denotes \(\mathsf A '\)s advantage conditioned on the key being \(r\).

Applying this definition to Theorem 4.1, we get that \(\alpha '\)th power security implies real-model security, as follows.

Corollary 4.3

If an application \(P\) is \((T, s, \sigma )-\alpha '\)th power secure, then \(P\) is \((T, s, (2^d \cdot \sigma )^{\frac{1}{\alpha '}} + \varepsilon _0)-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s}^{EHILL}\) model where \(\alpha \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\).

In the following we’ll obtain the relationship between the ideal security quantity and real security quantity.

Theorem 4.2

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\). Assume \(P\) is \((T', s', \varepsilon )-\)secure and \(((T', s'), (T, s), \gamma )-\)simulatable (see Footnote 5). Then

(1) If \(1 < \alpha ' < 2\), we have that \(P\) is \((T, s, \sigma )\)-\(\alpha '\)th power secure, where \(\sigma \le (\frac{\varepsilon + \gamma }{2})^{\frac{\alpha '}{2}}\). In particular, by Corollary 4.3, \(P\) is \((T, s, [2^d \cdot (\frac{\varepsilon + \gamma }{2})^{\frac{\alpha '}{2}}]^{\frac{1}{\alpha '}} + \varepsilon _0)-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s}^{EHILL}\) model.

(2) If \(\alpha ' \ge 2\), we have that \(P\) is \((T, s, \sigma )\)-\(\alpha '\)th power secure, where \(\sigma \le \frac{\varepsilon + \gamma }{2}\). In particular, by Corollary 4.3, \(P\) is \((T, s, (2^d \cdot \frac{\varepsilon + \gamma }{2})^{\frac{1}{\alpha '}} + \varepsilon _0 )-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s}^{EHILL}\) model.
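The two cases of Theorem 4.2 can be packaged as a small bound calculator (function name is ours), combining the \(\sigma\) bound with Corollary 4.3:

```python
import math

# Theorem 4.2 as a calculator (function name ours): from ideal security eps,
# simulatability slack gamma, deficiency d and conjugate exponent alpha' > 1,
# compute the real-model bound via Corollary 4.3.
def theorem42_bound(eps: float, gamma: float, d: float,
                    alpha_p: float, eps0: float) -> float:
    base = (eps + gamma) / 2
    sigma = base ** (alpha_p / 2) if 1 < alpha_p < 2 else base  # cases (1)/(2)
    return (2 ** d * sigma) ** (1 / alpha_p) + eps0

# e.g. eps = gamma = 2^-60, d = 10, alpha' = 2 gives 2^-25:
assert math.isclose(theorem42_bound(2**-60, 2**-60, 10, 2.0, 0.0), 2**-25)
```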

Proof

The result follows directly from Theorem 3.2.

In the following, we show that the above result can be successfully applied to CPA-secure symmetric-key encryption schemes, weak pseudorandom functions (PRFs), and weaker computational extractors.

Theorem 4.3

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\). Assume \(P\) is a \(((2t, 2q), s', \varepsilon )-\)CPA-secure symmetric-key encryption scheme in the ideal model; such schemes are \((((2t, 2q), s'), ((t, q), s), 0)-\)simulatable. Then

(1) If \(1 < \alpha ' < 2\), then \(P\) is \(((t, q), s, [2^d \cdot (\frac{\varepsilon }{2})^{\frac{\alpha '}{2}}]^{\frac{1}{\alpha '}} + \varepsilon _0)-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s}^{EHILL}\) model.

(2) If \(\alpha ' \ge 2\), then \(P\) is \(((t, q), s, (2^d \cdot \frac{\varepsilon }{2})^{\frac{1}{\alpha '}} + \varepsilon _0 )-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s}^{EHILL}\) model.

Proof

The result follows directly from Theorem 4.2.

Definition 4.2

A family \(\mathcal {H}\) of functions \(\{h_r : \{0, 1\}^n \rightarrow \{0, 1\}^l | r \in \{0, 1\}^m\}\) is a \(((t, q), s, \delta )-\)secure weak PRF if for any \(t-\)bounded attacker \(\mathsf A \) with advantage circuit size \(s\), and random \(x\), \(x_1\), \(\cdots \), \(x_{q-1} \leftarrow U_n\) and \(r \leftarrow U_m\), we have

$$\varDelta _\mathsf{A } (h_r(x), U_l| x, x_1, h_r(x_1), \cdots , x_{q-1}, h_r(x_{q-1})) \le \delta .$$

The concept of a weak PRF here is essentially equivalent to the one in [13]; our definition merely adds the parameter \(s\) to the one in [13]. Just like CPA-secure symmetric-key encryption schemes, weak PRFs are easily seen to be \((((2t, 2q), s'), ((t, q), s), 0)-\)simulatable. By Theorem 4.2, we have:
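As a toy illustration of the experiment in Definition 4.2, the sketch below runs one round of the weak-PRF game. HMAC-SHA256 is only an illustrative stand-in for the family \(h_r\), and all names are ours:

```python
import hashlib
import hmac
import os
import secrets

# Toy run of the weak-PRF experiment of Definition 4.2. HMAC-SHA256 is only
# an illustrative stand-in for the family h_r; all names here are ours.
def weak_prf_round(q: int = 4, l: int = 32):
    r = os.urandom(32)                                  # key r <- U_m
    h = lambda x: hmac.new(r, x, hashlib.sha256).digest()
    xs = [os.urandom(16) for _ in range(q - 1)]         # x_1,...,x_{q-1} <- U_n
    examples = [(x, h(x)) for x in xs]                  # (x_i, h_r(x_i)) pairs
    x = os.urandom(16)                                  # challenge input x <- U_n
    b = secrets.randbits(1)                             # hidden bit
    y = h(x) if b == 0 else os.urandom(l)               # either h_r(x) or U_l
    return examples, x, y, b

examples, x, y, b = weak_prf_round()
# A distinguisher A sees (x, y) and the example pairs and guesses b;
# Definition 4.2 bounds its advantage Delta_A by delta.
assert len(examples) == 3 and len(y) == 32 and b in (0, 1)
```

Note that the attacker never chooses inputs: all \(x_i\) and the challenge \(x\) are uniform, which is exactly what distinguishes a weak PRF from a full PRF.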

Theorem 4.4

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\). Assume \(P\) is a \(((2t, 2q), s', \delta )-\)secure weak PRF in the ideal model. Then

(1) If \(1 < \alpha ' < 2\), we have that \(P\) is \(((t, q), s, \sigma )\)-\(\alpha '\)th power secure, where \(\sigma \le (\frac{\delta }{2})^{\frac{\alpha '}{2}}\). In particular, \(P\) is \(((t, q), s, [2^d \cdot (\frac{\delta }{2})^{\frac{\alpha '}{2}}]^{\frac{1}{\alpha '}} + \varepsilon _0)-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s}^{EHILL}\) model.

(2) If \(\alpha ' \ge 2\), we have that \(P\) is \(((t, q), s, \sigma )\)-\(\alpha '\)th power secure, where \(\sigma \le \frac{\delta }{2}\). In particular, \(P\) is \(((t, q), s, (2^d \cdot \frac{\delta }{2})^{\frac{1}{\alpha '}} + \varepsilon _0 )-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s}^{EHILL}\) model.

Extractors have many applications, such as key generation, leakage-resilient encryption, and the derandomization of BPP and IP, but their existence guarantees are still not enough from a computational perspective. We therefore introduce an expanded definition of extractors.

Definition 4.3

(Weaker Computational Extractors) We say that an efficient function \(E_{wc}: \{0, 1\}^m \times \{0, 1\}^n \rightarrow \{0, 1\}^l \) is a \((k, H_{\alpha , \varepsilon _0, s}^{EHILL}, \varepsilon )-\)weaker computational extractor if for all \(R\) (over \(\{0, 1\}^m\)) with \(H_{\alpha , \varepsilon _0, s}^{EHILL}(R)\ge k\), for random \(S\) (uniform over \(\{0, 1\}^n\)), and for all \(D \in \mathcal {D}_s^{ran, [0, 1]}\), we have

$$\delta ^D(E_{wc}(R; S), U_l | S) \le \varepsilon $$

where \(\alpha \ge 0\), \(\alpha \ne 1\), and \(S \leftarrow U_n\) is the random seed of \(E_{wc}\). The value \(L = k-l\) is called the entropy loss of \(E_{wc}\).

Remark 4.1

It should be noted that computational extractors and weak computational extractors have been studied in [10]. Essentially, a computational extractor is an efficient function that maps a source of sufficiently high min-entropy to an output, requiring that the joint distribution of output and seed be computationally indistinguishable from uniform. A weak computational extractor is defined similarly, except that only the output of the function (without the seed) need be indistinguishable from uniform. In this paper, by contrast, we only require that the computational distance between the joint distribution of output and seed and the uniform distribution be upper bounded by \(\varepsilon \); in this sense, the requirement is further relaxed.

Corollary 4.4

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\). If \(\mathcal {H} \overset{def}{=}\{h_r : \{0, 1\}^n \rightarrow \{0, 1\}^l | r \in \{0, 1\}^m\}\) is a \(((2t, 2), s', \delta )-\)secure weak PRF in the ideal model, then

(1) If \(1 < \alpha ' < 2\), then \(E_{wc} (r; z) \overset{def}{=} h_r(z)\) is \(((t, 1), s, [2^d \cdot (\frac{\delta }{2})^{\frac{\alpha '}{2}}]^{\frac{1}{\alpha '}} + \varepsilon _0)-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s}^{EHILL}\) model. Hence, \(E_{wc} (r; z)\) is an \((m-d, H_{\alpha , \varepsilon _0, s}^{EHILL}\), \([2^d \cdot (\frac{\delta }{2})^{\frac{\alpha '}{2}}]^{\frac{1}{\alpha '}} + \varepsilon _0)\)-weaker computational extractor.

(2) If \(\alpha ' \ge 2\), then \(E_{wc} (r; z) \overset{def}{=} h_r(z)\) is \(((t, 1), s, (2^d \cdot \frac{\delta }{2})^{\frac{1}{\alpha '}} + \varepsilon _0 )-\)secure in the \((m-d)-\)real\(_{\alpha , \varepsilon _0, s}^{EHILL}\) model. Hence, \(E_{wc} (r; z)\) is an \((m-d, H_{\alpha , \varepsilon _0, s}^{EHILL}, (2^d \cdot \frac{\delta }{2})^{\frac{1}{\alpha '}} + \varepsilon _0)\)-weaker computational extractor.
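In code form, the construction of Corollary 4.4 is simply the weak PRF keyed by the weak secret and evaluated once on the public seed. HMAC-SHA256 below is again only an illustrative stand-in for \(h_r\):

```python
import hashlib
import hmac
import os

# Corollary 4.4 as a sketch: keying the weak PRF with the weak secret r and
# evaluating it once on the public seed z yields the weaker computational
# extractor E_wc(r; z) = h_r(z). HMAC-SHA256 stands in for h_r.
def E_wc(r: bytes, z: bytes) -> bytes:
    return hmac.new(r, z, hashlib.sha256).digest()

r = os.urandom(32)    # weak secret (sampled uniformly here only for the demo)
z = os.urandom(16)    # public uniform seed S
key = E_wc(r, z)      # extracted key; l = 256 bits for SHA-256
assert len(key) == 32
```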

In the following, we extend the inequality of Theorem 3.1 to the average (aka conditional) setting.

Theorem 4.5

Let \(\alpha , \alpha ' \in (1, \infty )\) and \(\frac{1}{\alpha } + \frac{1}{\alpha '}=1\). Then for any real-valued function \(f(r, s) \in [a, b]\) with size \(s_0\) and any random variables \((R, S)\), where \(|R|=m\) and \(H^{EHILL}_{\alpha , \varepsilon _0, s_0} (R|S)\ge m-d\), we have

$$|\mathbb {E}[f(R, S)]| \le \{2^d \cdot \mathbb {E}[|f(U_m, S)|^{\alpha '}]\}^{\frac{1}{\alpha '}} + \varepsilon _0.$$

Proof

Since \(H^{EHILL}_{\alpha , \varepsilon _0, s_0} (R|S)\ge m-d\), there exists a joint distribution \((Z, S)\) such that \(H_\alpha (Z|S) \ge m-d\) and \(\forall D \in \mathcal {D}_{s_0}^{ran, [a, b]}\), \(\delta ^D((R, S), (Z, S)) \le \varepsilon _0\).

According to Theorem 3.3, we have \(|\mathbb {E}[f(Z, S)]| \le \{2^d \cdot \mathbb {E}[|f(U_m, S)|^{\alpha '}]\}^{\frac{1}{\alpha '}}\).

Since \( f \in \mathcal {D}_{s_0}^{ran, [a, b]}\), we have \(|\mathbb {E}[f(Z, S)]- \mathbb {E}[f(R, S)] | \le \varepsilon _0\). Thus,

$$|\mathbb {E}[f(R, S)] | \le |\mathbb {E}[f(Z, S)]| + \varepsilon _0 \le \{2^d \cdot \mathbb {E}[|f(U_m, S)|^{\alpha '}]\}^{\frac{1}{\alpha '}} + \varepsilon _0.$$

5 Conclusion

In the ideal world, a cryptographic system is considered secure if it can be formally proved secure in a certain security model, which takes for granted that the secret is perfectly random. In the real world, however, the secret may only be obtained from a non-uniform distribution with some non-trivial amount of entropy. The security of a cryptographic scheme is measured by the expectation of some function, called the 'perfect' expectation in the ideal model and the 'weak' expectation in the real model, respectively. We propose some elementary inequalities, via the Rényi entropy and the expanded computational entropy, which show that the 'weak' expectation is not much worse than the 'perfect' expectation. The results apply to all 'unpredictability' applications and to some indistinguishability applications, including CPA-secure symmetric-key encryption schemes, weak pseudorandom functions, and weaker computational extractors. Compared to the results based on the min-entropy and collision entropy by Dodis and Yu [TCC 2013], our results are more general, and we also obtain some results from a computational perspective.