1 Introduction

Consider the following question:

Can a sender leak a message anonymously, by exclusively participating in a public non-anonymous discussion where everyone sees who said what?

In particular, we consider a setting where the participants are having a seemingly innocuous discussion (e.g., about favorite cat videos). The discussion is public and non-anonymous, meaning that the participants are using their real identities and everyone knows who said what.Footnote 1 The non-sender participants are having a real conversation about this topic. On the other hand, the sender is carefully choosing what to say in a way that looks like she is participating in the conversation, but her real goal is to leak a secret document (e.g., NSA’s polynomial-time factoring algorithm). At the end of the discussion, anyone should be able to look at the transcript of the conversation and reconstruct the secret document, without learning anything about which of the participants was the actual sender responsible for leaking it. Despite its conceptual importance and simplicity, this question has not been studied until recently, perhaps because it may appear “obviously impossible”.

A formal study of the question was recently initiated by Agrikola, Couteau and Maier in [3], who, perhaps surprisingly, raise the intriguing possibility of answering it positively using cryptography. They do so by introducing a new cryptographic primitive, dubbed anonymous transfer (henceforth AT), to capture the setting above. An anonymous transfer involves a sender with a secret document, along with unaware dummy participants who send uniformly random messages.Footnote 2 The parties run for some number of rounds, where in each round the sender and each dummy participant send a message. At the end of the protocol, anyone can reconstruct the secret document from the transcript with high probability. However, the transcript cannot be used to identify who the sender is among the participants.

Crucially, anonymous transfer does not rely on the availability of any (weak) anonymous channels, nor on the availability of trusted third parties during the execution. Instead, all protocol messages are assumed to be traceable to their respective senders, and all other dummy participants only passively send random messages. The simplicity of the setting makes it a natural question to explore, and raises the very intriguing possibility of “creating” anonymity in a seemingly non-anonymous environment.

Anonymous Transfer and Whistleblowing. One central motivation for studying anonymous transfer is its relation to whistleblowing, where whistleblowers wish to leak confidential and oftentimes sensitive information, while operating in a potentially untrusted environment. The whistleblowers themselves usually risk harsh social, financial, and even legal consequences if caught [1, 4, 13]. One natural mitigation for those risks is the use of appropriate tools, typically cryptographic ones, to ensure anonymity of the leak. And indeed, a large body of work is devoted to building such tools.

One crucial aspect of these tools is the assumptions made on resources available to the whistleblower, which we would ideally like to minimize. From a practical perspective, it seems unreasonable to assume the general availability of, say, anonymous channels or online trusted parties to whistleblowers. In fact, even given the availability of such anonymous channels, their use alone could potentially be incriminating. From a more theoretical perspective, cryptographic solutions leveraging such assumptions could be seen as bootstrapping weaker forms of anonymity. Unfortunately, as far as we are aware, except for the work of [3], all prior works on whistleblowing assume the availability of an online form of trust, and thus do not seem to answer the initial question we consider. In contrast, [3] asks the intriguing question of whether cryptography can create forms of anonymity in a more fundamental sense.

Prior Work on Anonymous Transfer. Along with introducing anonymous transfer, [3] gives both lower bounds, and, perhaps surprisingly, plausibility results on its feasibility. Let us introduce some notation. The correctness error \(\varepsilon =\varepsilon (\lambda )\) of an anonymous transfer is the probability that the secret document fails to be reconstructed, and the anonymity \(\delta =\delta (\lambda )\) of an AT is the advantage a transcript of the AT provides towards identifying the sender.Footnote 3 An AT is in general interactive, and consists of \(c=c(\lambda )\) rounds of interaction.

On the negative side, [3] shows that no protocol can satisfy close to ideal forms of correctness and security, namely \(\varepsilon , \delta = \textrm{negl}(\lambda )\), against all polynomial time adversaries. They supplement this lower bound with a plausibility result, by giving a heuristic construction of anonymous transfer with fine-grained security. This heuristic construction provides negligible correctness error, but weaker anonymity guarantees (namely \(\delta \approx 1/c\), where c is the number of rounds), and only against a restricted class of fine-grained adversaries, who are restricted to be at most O(c) times more powerful than honest users; the construction is argued secure by relying on ideal obfuscation.

Still, the work of [3] leaves open the possibility of building anonymous transfer with non-optimal correctness and anonymity guarantees (e.g., \(\delta \le 1/c\)) that is secure against arbitrary polynomial-time attacks.

Our Results. In this work, we give improved lower bounds for anonymous transfer, largely ruling out potential improvements over the heuristic upper bound from [3]. Throughout this exposition, we will consider the case of 2 participants, one sender and a non-sender; [3] shows that lower bounds in that setting translate to lower bounds for any larger number of parties. Our main theorem shows that anonymous transfer with any non-trivial anonymity against general polynomial-time attackers is impossible, settling a conjecture explicitly stated in [3].

Theorem 1 (Informal)

For any 2-party anonymous transfer protocol for \(\omega (\log \lambda )\)-bit messages with correctness error \(\varepsilon \), for all polynomial \(\alpha =\alpha (\lambda )\), there exists a polynomial-time adversary that identifies the sender with probability at least \(1-\varepsilon -1/\alpha \).

Note that the probability of identifying the sender is essentially optimal: with probability \(\varepsilon \), the sender might act as a dummy party. This therefore rules out any non-trivial construction.

Our attack runs in polynomial time, but the polynomial is fairly large. This unfortunately does not match the run-time of the adversaries allowed in the heuristic construction of [3].

As a secondary result, we show that even in the setting of fine-grained adversaries whose run-time is essentially equivalent to that of the honest parties, we can identify senders with probability 1/c whenever the secret document can be reconstructed. This shows that, even in the fine-grained setting, one cannot improve on the quantitative anonymity guarantees achieved by the heuristic construction of [3].

Theorem 2 (Informal)

For any 2-party anonymous transfer protocol for \(\omega (\log \lambda )\)-bit messages, with correctness error \(\varepsilon \) and c rounds of interaction, there exists a fine-grained adversary, whose run-time matches that of the reconstruction procedure up to additive constant factors, that identifies the sender with probability at least \((1-\varepsilon -\textrm{negl}(\lambda ))/c\).

Theorem 2 in particular rules out all fine-grained protocols with a polynomial number of rounds, if both \(\delta \) and \(\varepsilon \) are negligible. For comparison, the lower bound of [3] rules out very similar parameters, but where the run-time of the adversary is \(m(\lambda ) = \lambda \cdot c^g\) times larger than that of the reconstruction procedure, for some arbitrary constant \(g>0\).

Related Work on Whistleblowing. Current solutions for anonymous messaging and anonymous whistleblowing include systems based on onion routing [10], mix-nets [7], and Dining Cryptographer networks or DC-nets [2, 6, 9, 11, 14]. Additionally, there have been other applications developed that utilize new techniques inspired by the models mentioned previously [5, 8, 9]. Each of these solutions, however, intrinsically assumes that there exist non-colluding honest servers that participate to ensure anonymity. [3] is the first to introduce a model which does not rely on this assumption. Impossibility results could be interpreted as evidence that such an assumption is in fact necessary.

Open Problems. The main open question left by [3] and this work is the construction of a fine-grained anonymous transfer matching the heuristic construction of [3], but under standard assumptions.

Additionally, our attack in Theorem 1 runs in fairly large polynomial time, which does not tightly match the fine-grained security proved in the heuristic construction of [3]. We leave for future work the possibility of improving the run-time of an attack matching the properties of Theorem 1.

2 Technical Overview

Anonymous Transfer. Let us first recall some basic syntax and notations for anonymous transfer (henceforth AT), introduced in [3]. In this overview, we will focus on 2-party anonymous transfer, which features a sender, a dummy party and an external receiver.Footnote 4, Footnote 5 The sender takes as input a message \(\mu \) to transfer. The sender and the dummy party exchange messages in synchronous rounds, with the restriction that the dummy party only sends random bits. An execution of the transfer spans over c rounds of interaction, where both parties send a message at each round. Given a full transcript, the external receiver can (attempt to) reconstruct the original message. We say that an AT has \(\varepsilon \) correctness error if the reconstruction procedure fails to recover \(\mu \) with probability at most \(\varepsilon \); and that it is \(\delta \)-anonymous if no adversary has advantage greater than \(\delta \) in identifying the sender amongst the two participating parties over a random guess, where the adversary can choose the message to be sent.Footnote 6 We refer to Sect. 3.1 for formal definitions.

In that setting, [3] showed the following lower bound on AT.

Theorem 3

([3], paraphrased). Every (two-party, silent receiver) AT with \(\varepsilon \)-correctness and \(\delta \)-anonymity against all polynomial-time adversaries, and consisting of c rounds, satisfies \(\delta \cdot c \ge \frac{1-\varepsilon }{2} - 1/m(\lambda )\) for all polynomial \(m(\lambda )\).

In particular, no AT can satisfy \(\delta ,\varepsilon = \textrm{negl}(\lambda )\) (assuming \(c=\textrm{poly}(\lambda )\), which holds if participants are polynomial-time). More precisely, [3] shows, for all polynomial \(m(\lambda )\), an attack with runtime \(m(\lambda )\cdot \textrm{poly}(\lambda )\) and advantage at least \(\frac{1}{c}\cdot (\frac{1-\varepsilon }{2} - 1/m(\lambda ))\).

The main limitation of Theorem 3 is that it does not rule out the existence of AT protocols with anonymity \(\delta \) scaling inverse-polynomially with the number of rounds c, e.g. \(\delta = 1/c\). In other words, the trade-off between correctness and security could potentially be improved by relying on a large amount of interaction. And indeed, [3] does provide a plausibility result, where, assuming ideal obfuscation, there exists a fine-grained AT with \(\delta \approx 1/c\), \(\varepsilon = \textrm{negl}(\lambda )\), so that anonymity does improve with the number of rounds. A secondary limitation is that, because the attack corresponding to Theorem 3 needs to call the honest algorithms a polynomial number of times (even though the polynomial can be arbitrarily small), this potentially leaves room for “very fine-grained” protocols, where security would only hold against adversaries running in mild super-linear time compared to honest users.

Our main results are stronger generic attacks on anonymous transfer protocols.

A General Blueprint for Our Attacks. The core idea behind all our attacks is a simple notion of progress associated to any (potentially partial) transcript of an AT. We do so by associating a real value \(p\in [0,1]_\mathbb {R}\) to partial transcripts of an AT, as follows. We can complete any partial transcript, replacing all missing messages by uniformly random ones, and attempt to recover the input message \(\mu \leftarrow \{0,1\}^\ell \) from the sender. For a partial transcript, we define \(p\in [0,1]_\mathbb {R}\) to be the probability that a random completion of the transcript allows recovering \(\mu \).
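Concretely, an attacker can estimate the value p of a partial transcript by Monte Carlo sampling. Here is a minimal sketch (Python; `reconstruct` is a stand-in for the AT reconstruction algorithm, and the flat list-of-messages transcript encoding is our own simplification):

```python
import random

def estimate_progress(reconstruct, partial_transcript, total_messages,
                      message_bits, mu, num_samples=1000):
    """Estimate p: the probability that a uniformly random completion
    of the partial transcript reconstructs the sender's message mu."""
    hits = 0
    for _ in range(num_samples):
        completion = list(partial_transcript)
        # Fill in every missing message with a uniformly random one.
        while len(completion) < total_messages:
            completion.append(random.getrandbits(message_bits))
        if reconstruct(completion) == mu:
            hits += 1
    return hits / num_samples
```

Standard Chernoff bounds control the accuracy of this estimate as a function of `num_samples`.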

The next step is to attribute the evolution of p, as the transcript gets longer, to the parties in the protocol. Namely, if, after party A sends the ith message in the transcript, the value of the transcript evolves from \(p_{i-1}\) to \(p_i\), we attribute to A some progress depending on \(p_{i-1}\) and \(p_i\). We then make the following observations: the empty transcript has value \(p_0 = 1/2^\ell \) close to 0 (if \(\mu \) is chosen uniformly at random), and full transcripts have (in expectation) value \(p_{2c} = 1-\varepsilon \) close to 1 by correctness. Our main leverage is that messages sent by the unaware, dummy participant in an AT do not significantly change the value of a partial transcript: this is because, in our random completion of transcripts, messages from the dummy party follow their real distribution. Furthermore, as long as the final value \(p_{2c}\) is significantly larger than the initial value \(p_0\), a significant amount of total progress has to be made at some point. Therefore the messages from the sender have to significantly bias the values of partial transcripts towards 1.

This results in the following blueprint for identifying the sender of the AT. We first estimate the contribution of each party towards total progress, namely, the evolution of the values p associated to partial transcripts where the last message was sent from that party.Footnote 7 Then, we argue that (1) the contribution of the dummy party is likely to be small overall and (2) the total contribution of both parties is fairly large (in expectation), from which we conclude that the party whose messages contributed the most to increasing the value p is likely to be the AT sender.

Covert Cheating Games. We abstract out this recipe as a natural game, that we call a covert cheating game. A covert cheating game is played by two players A and B, who take 2c alternate turns moving a point, or current state of the game, on the real interval [0, 1]. One player is designated to be a bias inducer, and the other a neutral party. The initial state is \(p_0\), and the final state \(p_{2c}\) is either 0 or 1. We say that a strategy has success rate \(p_f>p_0\) if \(\mathbb {E}[p_{2c}]\ge p_f\), regardless of the identity of the bias inducer. The neutral party is restricted to exclusively making randomized moves that do not affect the current state in expectation. The goal of a third player, the observer C, is to determine, given access to the states of the game, which player is the bias inducer. Our main technical contribution is to show generic observer strategies for this game. We refer to Definition 3 for a more detailed definition.

We use this abstraction to capture the fact that our attacks use the AT in a specific black-box sense, namely, only to measure values \(p\in [0,1]_\mathbb {R}\), using all the AT algorithms in a black-box manner. Overall, our abstraction of attacks on ATs as strategies in a game captures a natural family of black-box distinguishing algorithms, which we believe covers most reasonable attacks on AT.Footnote 8 Indeed, it is not clear how to leverage any non-black-box use of the honest users’ algorithms, as they could potentially be obfuscated (and indeed, the plausibility result of [3] does rely on obfuscated programs being run by honest users). We believe this game to be natural enough to see other applications in the future.

In the rest of the technical overview, we focus on describing generic attacks in the language of covert cheating games.

A Generic “Free-Lunch” Attack. We describe our first attack on the game introduced above, which corresponds to a proof sketch of Theorem 2. Our attack is very simple, and only leverages the fact that, in expectation over a random move, moves made by the bias inducer bias the outcome by an additive term \((p_f-p_0)/c\), while moves from the neutral party add no bias. Suppose the game consists of c rounds (each consisting of one move from each of the parties A and B), and that party A makes the first move, so that A makes the odd moves \(2k-1\) and B makes the even moves 2k, for \(k\in [c]\). Our strategy is to pick a random round \(k\leftarrow [c]\); A’s kth move makes the game evolve from state \(p_{2k-2}\) to \(p_{2k-1}\). We simply output “A is the bias inducer” with probability \(p_{2k-1}\) (and B with probability \(1-p_{2k-1}\)).

The main idea is that if A is the neutral party, then in expectation \(p_{2k-1}= p_{2k-2}\), and thus our strategy outputs A with probability \(p_{2k-2}\). On the other hand, if A is the bias inducer, our strategy outputs A with probability \(p_{2k-1}\).Footnote 9 Because B is then a neutral party, B’s total expected contribution is 0, namely \(\mathbb {E}_k[p_{2k}-p_{2k-1}]=0\), so that the advantage of our algorithm towards determining A is:

$$\begin{aligned} \mathbb E_k[p_{2k-1}- p_{2k-2}] = \mathbb {E}_k[p_{2k-1}- p_{2k-2} +\underbrace{(p_{2k}-p_{2k-1})}_0] = (p_f-p_0)/c. \end{aligned}$$

The cost of our attack is the cost of obtaining a single sample of a coin with bias \(p_{2k-1}\). Going back to AT, this corresponds to the cost of running the honest users’ algorithms once (namely, attempting to reconstruct the message from a random completion of a randomly chosen partial transcript whose last message is from A). We conclude that no AT can provide security with the parameters from Theorem 2, in any fine-grained setting (as long as adversaries are allowed to be in the same complexity class as honest users).
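In the language of covert cheating games, the whole attack fits in a few lines. A minimal sketch (Python; `sample(i)` is a hypothetical sampling oracle returning 1 with probability \(p_i\)):

```python
import random

def free_lunch_observer(sample, c):
    """Output "A" with probability p_{2k-1} for a uniformly random round
    k, i.e. with probability equal to the game state right after one of
    A's moves; otherwise output "B"."""
    k = random.randrange(1, c + 1)       # pick a random round k in [c]
    return "A" if sample(2 * k - 1) == 1 else "B"
```

The single oracle call is what makes the attack “free-lunch”: its cost matches one run of the honest reconstruction procedure.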

A Generic Attack with Large Advantage. We now describe a slightly more involved attack that achieves stronger advantage, at the cost of running in larger polynomial time. The main inspiration behind this new attack comes from taking a closer look at the restriction that the neutral party’s moves do not change the game state in expectation. We observe that this restriction is more stringent when the current game state p is close to 0. For concreteness, if the current state of the game is \(p=1/2\), then the neutral party could potentially move the state to \(p'=0\) or \(p'=1\) with probability 1/2 each, inducing a large change in the value of p. However, starting at \(p\gtrapprox 0\), Markov’s inequality ensures that \(p'\) cannot be large except with small probability.

This motivates us to consider a different quantification of progress, where additive progress close to 0 is weighed more heavily than additive progress at large values (e.g. 1/2). We do so by considering a multiplicative form of progress associated to moves and players. Namely, if the ith move of the game transforms the game state from \(p_{i-1}\) to \(p_i\), then we define the multiplicative progress associated with the move asFootnote 10

$$\begin{aligned} r_i = \frac{p_i}{p_{i-1}}. \end{aligned}$$

The total progress associated with a player would then be the product of the progress associated with its moves.

Our blueprint still applies in this context. The total progress of all the moves combined isFootnote 11

$$\begin{aligned} \prod _{i\in [2c]} r_i = \prod _{i\in [2c]} \frac{p_i}{p_{i-1}} = \frac{p_f}{p_0}, \end{aligned}$$

and so one of the players (in expectation) needs to have progress at least \(\sqrt{p_f/p_0}\). Furthermore, one can show that the restriction on the neutral party’s moves implies that the product of the \(r_i\) associated to the neutral party is 1 in expectation. Namely, denoting by N the set of indices corresponding to moves made by the neutral party: \(\mathbb {E}\left[ \prod _{N} r_i \right] = 1\). Markov’s inequality then gives:

$$\begin{aligned} \Pr \left[ \prod _{N} r_i \ge \sqrt{\frac{p_f}{p_0}} \right] \le \sqrt{\frac{p_0}{p_f}}. \end{aligned}$$

Overall, this shows that, with good probability \(1-\sqrt{p_0/p_f}\), the sender has a large total contribution while the dummy party has a small one, so that an attacker can identify the sender with at least that probability.
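For a sense of scale (our numbers, purely illustrative): for \(\ell \)-bit messages we have \(p_0 = 2^{-\ell }\), so with \(\ell = 40\) and \(p_f = 1/2\), Markov’s inequality above reads

$$\begin{aligned} \Pr \left[ \prod _{N} r_i \ge \sqrt{\frac{p_f}{p_0}} \right] \le \sqrt{\frac{p_0}{p_f}} = \sqrt{2^{-39}} \approx 1.3\cdot 10^{-6}, \end{aligned}$$

that is, the neutral party’s total multiplicative contribution reaches \(\sqrt{p_f/p_0} = 2^{19.5} \approx 7.4\cdot 10^{5}\) only with this tiny probability.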

We are unfortunately not done yet, because observers do not have direct access to the real values \(p \in [0,1]\): they are only given the ability to sample coins with probability p (going back to AT, recall that this is done by sampling a random completion of a transcript and testing whether the reconstructed message matched the sender’s message). This is problematic: from the perspective of a polynomial-time observer, the values \(p = \textrm{negl}(\lambda )\) and \(p=0\) are indistinguishable, given only sampling access. How can we then ensure that the ratios \(r_i = p_i/p_{i-1}\) are even well-defined (that is, that \(p_{i-1}\ne 0\))?

We solve this issue by conditioning our product to be over moves \(i\ge i^*\), such that for all \(i> i^*, p_i \ge \tau \) for some small accuracy threshold \(p_0< \tau < p_f\) (think \(\tau = 1/\textrm{poly}(\lambda )\)), and where we set the convention \(p_{i^*}=\tau \). Now the ratios are well-defined, and the total contribution is now \(p_f/\tau \). It remains to argue that the product corresponding to the neutral party is small. While we might have biased the distribution of the neutral party’s contribution by conditioning on the product starting at \(i^*\), we argue by a union bound that, with sufficiently high probability \(1-c\sqrt{\tau /p_f}\), all “suffix-products” from the dummy party are small (namely, smaller than \(\sqrt{p_f/\tau }\)).

Summing up, our final observer strategy estimates all the \(p_i\) up to some sufficiently good precision (using Chernoff) so that the product of the \(r_i = p_i/p_{i-1}\) is ensured to be accurate, as long as all the terms \(p_i\) that appear in the product are large enough compared to our threshold \(\tau \). We refer to Sect. 4.4 for more formal details.
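A compact sketch of the resulting observer (Python; simplified relative to the formal attack of Sect. 4.4, with a hypothetical sampling oracle `sample(i)` and illustrative parameters):

```python
import math
import random

def strong_observer(sample, c, tau, num_samples):
    """Thresholded multiplicative-progress observer (sampling access)."""
    # Empirical estimates of the game states; p_0 <= tau by assumption.
    p = [tau] + [sum(sample(i) for _ in range(num_samples)) / num_samples
                 for i in range(1, 2 * c + 1)]
    if p[2 * c] <= 1 - tau:                    # likely a losing run
        return random.choice(["A", "B"])
    # Restart the products at the last index whose estimate is below tau,
    # with the convention that the state there equals tau exactly.
    i_star = max(i for i in range(2 * c + 1) if p[i] <= tau)
    p[i_star] = tau
    t_A = math.prod(p[2*k - 1] / p[2*k - 2]
                    for k in range(1, c + 1) if 2*k - 2 >= i_star)
    t_B = math.prod(p[2*k] / p[2*k - 1]
                    for k in range(1, c + 1) if 2*k - 1 >= i_star)
    threshold = math.sqrt(1 / tau)
    if t_A >= threshold:
        return "A"
    if t_B >= threshold:
        return "B"
    return None                                # undetermined
```

Every denominator in the suffix-products is at least \(\tau \) by the choice of \(i^*\), so the ratios are always well-defined.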

Taking a step back, the major strength of Theorem 1 is that the advantage of the associated attack is independent of the number of rounds: only its running time scales with the number of rounds (in order to ensure sufficient precision with Chernoff bounds). This is in our eyes a quantitative justification that multiplicative progress is better suited to identify bias in a covert cheating game.

Extending the Lower Bound to N Parties. Last, we sketch how to extend our attack from Theorem 1 to the N-party setting, which consists of a sender interacting with \(N-1\) dummy parties. Our first step is to observe that our attacks described above directly translate to targeted-predicting attacks, which correctly identify the sender given the promise that the sender is either party \(i\in [N]\) or \(j\in [N]\) where \(i \ne j\) are arbitrary but fixed for the targeted predictor. This follows from [3], which builds a 2-party AT from any N-party AT, while preserving targeted-predicting security.Footnote 12 In other words, given the promise that the sender is either party i or party j, we can correctly identify the sender with the same guarantees as in Theorem 1 (or even Theorem 2).

However, we ideally wish to obtain general predicting attacks that do not rely on any additional information to correctly output the identity of the sender. We generically upgrade any targeted-predicting attack to a standard predicting attack, while preserving the advantage \(\delta \), as follows. The attack simply runs the targeted-predicting attack on all pairs of distinct indices \(\{(i,j) \,|\, i,j\in [N], i\ne j\}\), and outputs as the sender the party \(i^*\) that got designated as the sender in all the runs \((i^*,j)\), \(j\ne i^*\).Footnote 13 Now, if \(i^*\) is the sender of the N-party AT, a union bound implies that the probability that all the internal runs \((i^*,j), j\ne i^*\) of the targeted-predicting attack correctly point to \(i^*\) as the sender is at least \(\delta '\ge 1-N(1-\delta )\). Starting with the attack from Theorem 1 with \(\alpha '=N\cdot \alpha \),Footnote 14 we obtain the same lower bound as Theorem 1 in the N-party setting, at the cost of a \(\textrm{poly}(N)\) overhead in the runtime of our attack.Footnote 15
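A minimal sketch of this generic upgrade (Python; `targeted(i, j)` is a hypothetical targeted predictor returning its guess among \(\{i, j\}\)):

```python
def general_predictor(targeted, N):
    """Output the party designated as the sender by every pairwise
    targeted run against it, if such a party exists."""
    for i in range(N):
        if all(targeted(i, j) == i for j in range(N) if j != i):
            return i
    return None   # no unanimous winner: the attack is inconclusive
```

The union bound in the text is exactly over the \(N-1\) runs \((i^*, j)\) performed in the inner loop for the true sender \(i^*\).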

3 Preliminaries and Definitions

Notations. When X is a distribution, or a random variable following this distribution, we let \(x \leftarrow X\) denote the process of sampling x according to the distribution X. If X is a set, we let \(x \leftarrow X\) denote sampling x uniformly at random from X; if \(\textsf{Alg}\) is a randomized algorithm, we denote by \(x\leftarrow \textsf{Alg}\) the process of sampling an output of \(\textsf{Alg}\) using uniformly random coins. We use the notation [k] to denote the set \(\{1,\ldots ,k\}\) where \(k\in \mathbb N\), and \([0,1]_\mathbb {R}\) to denote the real interval \(\{x\in \mathbb R \,|\, 0\le x \le 1\}\). We denote by \(\textrm{negl}(\lambda )\) functions f such that \(f(\lambda ) = 1/\lambda ^{\omega (1)}\).

Chernoff Bound. We will use the following (multiplicative) form of Chernoff-Hoeffding inequality.

Lemma 1 (Multiplicative Chernoff)

Suppose \(X_1,\cdots ,X_n\) are independent Bernoulli variables with common mean p. Then, for all \(t>0\), we have:

$$\begin{aligned} \Pr \left[ \sum _{i=1}^n X_i \notin [(1-t) \cdot np, (1+t) \cdot np]\right] \le 2 e^{-2t^2p^2n}. \end{aligned}$$
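As a quick empirical sanity check of this bound (a throwaway Python simulation; the parameters are arbitrary):

```python
import math
import random

def chernoff_check(n=500, p=0.3, t=0.2, trials=5000):
    """Compare the empirical deviation probability of a Bernoulli(p) sum
    against the stated bound 2 * exp(-2 * t^2 * p^2 * n)."""
    lo, hi = (1 - t) * n * p, (1 + t) * n * p
    bad = 0
    for _ in range(trials):
        s = sum(random.random() < p for _ in range(n))
        if not (lo <= s <= hi):
            bad += 1
    return bad / trials, 2 * math.exp(-2 * t**2 * p**2 * n)
```

With these parameters the bound evaluates to roughly 0.055, comfortably above the empirical frequency.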

3.1 Anonymous Transfer

We recall here the notion of anonymous transfer, introduced in [3]. Throughout most of this work, we focus on the two-party setting, involving a sender, a dummy non-sender and a (silent) receiver.Footnote 16

Definition 1

((Two-Party, Silent-Receiver) Anonymous Transfer, adapted from [3]). A two-party anonymous transfer (AT) \(\Pi _{AT} ^{\ell }\), with correctness error \(\varepsilon \in [0,1]_{\mathbb {R}}\), anonymity \(\delta \in [0,1]_\mathbb {R}\), consisting of \(c\in \mathbb N\) rounds and message length \(\ell \in \mathbb N\) (all possibly functions of \(\lambda \)), is a tuple of PPT algorithms \((\textsf{Setup}, \textsf{Transfer}, \textsf{Reconstruct})\) with the following specifications:

  • \(\textsf{Setup}(1^{\lambda })\) takes as input a unary encoding of the security parameter \(\lambda \) and outputs a common reference string \(\textsf{crs}\).

  • \(\textsf{Transfer}(\textsf{crs},b,\mu )\) takes as input a common reference string \(\textsf{crs}\), the index of the sender \(b \in \{0,1\}\), the message to be transferred \(\mu \in \{0,1\} ^{\ell }\), and outputs a transcript \(\pi \). Transcripts \(\pi \) consist of c rounds of interaction between the sender and the dummy party, where the dummy party (with index \(1-b\)) sends uniform and independent messages at each round, and each message from the sender depends on the partial transcript so far, with a next message function implicitly defined by \(\textsf{Transfer}(\textsf{crs},b,\mu )\). By default, we assume that the receiver does not send any messages (namely, the receiver is silent).Footnote 17

  • \(\textsf{Reconstruct}(\textsf{crs}, \pi )\) takes as input a common reference string \(\textsf{crs}\), a transcript \(\pi \) and outputs a message \(\mu ' \in \{0,1\}^\ell \). By default, we assume that \(\textsf{Reconstruct}\) is deterministic.Footnote 18

We require that the following properties are satisfied.

\(\varepsilon \)-Correctness. An anonymous transfer \(\Pi _{AT} ^{\ell }\) has correctness error \(\varepsilon \) if, for all large enough security parameters \(\lambda \), all indices \(b \in \{0,1\}\), all message lengths \(\ell \in \textrm{poly}(\lambda )\), and all messages \(\mu \in \{0,1\} ^{\ell }\), we have:

$$\begin{aligned} \Pr \left[ \textsf{Reconstruct}(\textsf{crs},\pi ) = \mu \,:\, \textsf{crs}\leftarrow \textsf{Setup}(1^{\lambda }),\ \pi \leftarrow \textsf{Transfer}(\textsf{crs},b,\mu ) \right] \ge 1-\varepsilon . \end{aligned}$$

\(\delta \)-Anonymity. An anonymous transfer \(\Pi _{AT} ^{\ell }\) is \(\delta \)-anonymous if, for all PPT algorithms D, all large enough security parameters \(\lambda \), all message lengths \(\ell \in \textrm{poly}(\lambda )\), and all messages \(\mu \in \{0,1\} ^{\ell }\),

$$\begin{aligned} \left| \Pr \left[ D(\textsf{crs},\pi ^{(0)})=1\right] - \Pr \left[ D(\textsf{crs},\pi ^{(1)})=1\right] \right| \le \delta , \quad \text {where } \pi ^{(b)}\leftarrow \textsf{Transfer}(\textsf{crs},b,\mu ), \end{aligned}$$
(1)

where the probability is over the randomness of \(\textsf{Setup}\), \(\textsf{Transfer}\), and the internal randomness of D.

We alternatively say that \(\Pi _{AT} ^{\ell }\) is \(\delta \)-anonymous with respect to a class of adversaries \(\mathcal C\), if Eq. (1) holds instead for all distinguishers \(D\in \mathcal C\).
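For intuition, the syntax above can be summarized by the following minimal interface sketch (Python; names and signatures are ours, purely for illustration, and are not taken from [3]):

```python
from typing import List, Protocol

class AnonymousTransfer(Protocol):
    """Illustrative interface for a two-party, silent-receiver AT."""

    def setup(self, lam: int) -> bytes:
        """On input (a unary encoding of) lambda, output a crs."""
        ...

    def transfer(self, crs: bytes, b: int, mu: bytes) -> List[bytes]:
        """Run the c rounds of interaction, party b playing the sender
        and party 1-b sending uniformly random messages; output the
        full transcript pi (the list of all 2c messages)."""
        ...

    def reconstruct(self, crs: bytes, pi: List[bytes]) -> bytes:
        """Deterministically recover a message mu' from the transcript."""
        ...
```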

Definition 2

We say that an anonymous transfer is symmetric if the next message function of the sender, implicitly defined by \(\textsf{Transfer}(\textsf{crs},b,\mu )\) where b is the sender, does not depend on b, and if \(\textsf{Reconstruct}\) does not depend on the identities of the participants.

Remark 1

(Comparison with [3]). Our notation and definitions are slightly different from, but equivalent to, the ones from [3]. With our conventions, \(\varepsilon \) denotes a correctness error, and \(\delta \) denotes a (bound on a) distinguishing advantage; therefore an AT has stronger correctness (resp. stronger anonymity) guarantees as \(\varepsilon ,\delta \) decrease. This is the opposite of [3], where guarantees get better as their parameters \(\varepsilon , \delta \) tend to 1.

Our definition of correctness error is over the random choice of \(\textsf{crs}\leftarrow \textsf{Setup}\), while it is worst case over \(\textsf{crs}\) in [3]. Because this defines a larger class of protocols, ruling out the definition above makes our lower bounds stronger.

Our definition of \(\delta \)-anonymity is formulated specifically for the two-party case, and is worded differently from theirs, which states, up to the mapping \(\delta \mapsto 1-\delta \) discussed above:

$$\begin{aligned} \left| \Pr _{b\leftarrow \{0,1\}} \left[ \pi ^{(b)}\leftarrow \textsf{Transfer}(\textsf{crs},b,m) : D(\pi ^{(b)}) = b\right] - \frac{1}{2} \right| \le \frac{\delta }{2}. \end{aligned}$$
(2)

However, one can easily show that both definitions correspond to the same value \(\delta \).
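For completeness, here is the short computation behind this equivalence (under our conventions):

$$\begin{aligned} \Pr _{b\leftarrow \{0,1\}} \left[ D(\pi ^{(b)}) = b\right] &= \frac{1}{2}\Pr \left[ D(\pi ^{(0)}) = 0\right] + \frac{1}{2}\Pr \left[ D(\pi ^{(1)}) = 1\right] \\ &= \frac{1}{2} + \frac{1}{2}\left( \Pr \left[ D(\pi ^{(1)}) = 1\right] - \Pr \left[ D(\pi ^{(0)}) = 1\right] \right) , \end{aligned}$$

so the advantage over 1/2 in Eq. (2) is exactly half the distinguishing advantage in Eq. (1), and the two definitions yield the same value of \(\delta \).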

Remark 2 (Silent Receivers)

As noted in [3], without loss of generality, the receiver in an anonymous transfer can be made silent, namely, does not send any messages in the protocol execution. This is because its random tape can be hard-coded in the CRS.

Remark 3 (Deterministic reconstruction)

We observe that \(\textsf{Reconstruct}\) can be assumed to be deterministic without loss of generality; this is because random coins for \(\textsf{Reconstruct}\) can be sampled and included in the common reference string \(\textsf{crs}\).

Remark 4 (AT with larger number of parties)

[3] more generally defines anonymous transfer with a larger number of participants \(N\in \mathbb N\). We refer to [3, Definition 3] for a formal definition.Footnote 19 The main difference (in the silent receiver case), is that \(\delta \) is defined as an advantage over random guessing among the N participants. Namely, Eq. (2) becomes:

$$\begin{aligned} \left| \Pr _{k\leftarrow [N]} \left[ \pi ^{(k)}\leftarrow \textsf{Transfer}(\textsf{crs},k,m) : D(\pi ^{(k)}) = k\right] - \frac{1}{N} \right| \le \delta \cdot \frac{N-1}{N}. \end{aligned}$$

In particular, while the indistinguishability-based definition in Eq. (1) and the predicting-based definition in Eq. (2) are equivalent in the two-party setting, it is not immediately clear that this holds in the N-party setting. Looking ahead, in order to extend our results from the 2-party to the N-party setting, we show that this equivalence in fact holds, up to some mild loss in the parameters. We refer to Sect. 5.3 for more details.

4 Identifying Covert Cheaters

In this section, we introduce our abstraction of covert cheating games, and then show generic observer strategies for these games.

4.1 Covert Cheating Game

We define a covert cheating game as follows.

Definition 3 (Covert Cheating Game)

Let \(c\in \mathbb N\), \(p_0 \in ]0,1[_\mathbb {R}\) be parameters.

  • Setup, players and roles. A covert cheating game is played by two (randomized) players A and B, who can agree on a strategy in advance. They play against an observer C. During setup, one party is (randomly) designated as the bias inducer while the other is designated as the neutral party.

  • Execution and states of a game. An execution of the game consists of players A and B taking alternate turns making moves in the game, with the convention that player A makes the first move. The game consists of c rounds (that is, 2c total moves, c moves from A and c moves from B), where \(c\in \mathbb N\) is a parameter of the game. At any point during the game, the current state of the game is represented by a real number \(p\in [0,1]_\mathbb {R}\). The final state of the game is a bit \(p_{2c}\in \{0,1\}\) (where one can consider 1 as a winning outcome for the players A, B, and 0 as a losing outcome). For \(k\in [c]\), if A is the bias inducer, we will use either of the notations \(X_{2k-1}=X^{(A)}_{2k-1}\) (resp. \(X_{2k}=X_{2k}^{(B)}\)) to denote the random variable associated to the state resulting from A (resp. B) making its kth move. In other words, the superscript in \(X^{(A)}_{2k-1}\) (resp. \(X_{2k}^{(B)}\)) is a redundant notation to remind the reader that A (resp. B) made the \((2k-1)\)st (resp. (2k)th) move of the game. Similarly, for \(k\in [c]\), if B is the bias inducer, we will use either of the notations \(Y_{2k-1}=Y_{2k-1}^{(A)}\) (resp. \(Y_{2k}=Y_{2k}^{(B)}\)) to denote the random variable associated to the state resulting from A (resp. B) making its kth move. Again, the redundant superscript is used to make the player associated to the move explicit. The initial state of the game is defined as \(p_0 \in ]0,1[_\mathbb {R}\), where \(p_0\) is a parameter of the game. In other words, \(X_0 =Y_0= p_0\). We say that a strategy for A and B has success rate \(p_f\) if \(\mathbb {E}[X_{2c}]\ge p_f\) and \(\mathbb {E}[Y_{2c}]\ge p_f\).

  • Rules on moves. The neutral party is restricted to making moves that do not change the state of the game in expectation, namely, the moves behave as martingales with respect to the current game state. More formally, with our notation, for all \(k\in [c]\), we have:Footnote 20

    $$\begin{aligned} \mathbb E[X^{(B)}_{2k}\,|\,X_{2k-1},\cdots ,X_{0}] &= X_{2k-1}. \end{aligned}$$
    (3)
    $$\begin{aligned} \mathbb E[Y^{(A)}_{2k-1}\,|\,Y_{2k-2},\cdots ,Y_{0}] &= Y_{2k-2}. \end{aligned}$$
    (4)

    where the first equation (resp. second equation) corresponds to A (resp. B) being the bias inducer.

  • Objective of the game. The goal of the game is, for the bias inducer, to be covert with respect to the observer C, while maintaining a high success rate \(p_f\) (namely, a high probability of ending up at 1 in the final state). The observer C has access to intermediate states of the execution \(p_i\leftarrow X_i\) (if A is the bias inducer, or \(p_i\leftarrow Y_i\) otherwise) via a (distribution of) oracles \(\mathcal O\). In each oracle \(\mathcal O\) is hard-coded a sequence of 2c states of the game \(p_i, i\le 2c\) induced by an execution of the game. We will respectively denote by \(\mathcal O^{(A)}\) (resp. \(\mathcal O^{(B)}\)) (the distribution of) oracles corresponding to when A (resp. B) is designated as the bias inducer. We consider the following variants of the oracles \(\mathcal O \in \{\mathcal O^{(A)},\mathcal O^{(B)}\}\).

    • Sampling access. We say that the observer C gets sampling access to game states \(p_i\in [0,1]_\mathbb {R}\), if oracles \(\mathcal {O}\) are probabilistic oracles such that, for all \(i\in [2c]\), \(\mathcal {O}(i) = 1\) with probability \(p_i\), and \(\mathcal {O}(i)=0\) with probability \(1-p_i\), where the randomness is uniformly and independently sampled at each oracle call. This is our default notion of access.

    • Direct access. We say that an observer gets direct access to game states \(p_i\in [0,1]_\mathbb {R}\), if oracles \(\mathcal {O}\) are defined as \(\mathcal {O_\textsf{direct}}(i) = p_i\in [0,1]_\mathbb {R}\) for all \(i\in [2c]\).

    We say that the bias inducer successfully \(\delta \)-fools a class \(\mathcal C\) of observers with respect to sampling access if for every algorithm \(C\in \mathcal C\), we have:

    $$ \left| \,\Pr \left[ C^{\mathcal O^{(A)}}=1\right] - \Pr \left[ C^{\mathcal {O^{(B)}}}=1\right] \,\right| \le \delta ,$$

    where \(C^{\mathcal {O}^{(X)}}\), for \(X\in \{A,B\}\), denotes the experiment of sampling \(\mathcal O\leftarrow \mathcal {O}^{(X)}\) (which is defined by sampling a random execution of the game when X is the bias inducer, yielding states \(p_i, i\le 2c\), and defining \(\mathcal O\) with respect to \(\{p_i\}\)), and giving C oracle access to \(\mathcal O\). We say that the bias inducer successfully \(\delta \)-fools a class \(\mathcal C\) of observers with respect to direct access if the observer C gets oracle access to \(\mathcal O_\textsf{direct}\) instead.

  • (Optional Property): Symmetricity. We say that a strategy for players A and B is symmetric if:

    $$\begin{aligned} \forall k \in [c], \mathbb {E}\left[ X_{2k} \right] = \mathbb {E}\left[ Y_{2k}\right] , \end{aligned}$$
    (5)

    that is, the state of the game is (on expectation) independent of the identity of the bias inducer, whenever the bias inducer and the neutral party made an identical number of moves (which happens after an even number of total moves).

  • (Optional Property): Absorption. We say that a strategy is absorbent (implicitly, with respect to states 0 and 1) if, for all \(i\in [2c]\) and bit \(b\in \{0,1\}\):

    $$\begin{aligned} \{X_i = b\} \Longrightarrow \{\forall j\ge i, X_j = b\}. \end{aligned}$$
    (6)

Remark 5 (Implicit Conditioning on Prior Moves.)

We allow the strategies from A and B to be adaptive, namely, to depend on prior moves. As a result, all the expectations on random variables \(X_i, Y_i\) are technically always considered conditioned on all the prior moves. For ease of notation, we will not make this conditioning explicit, and will always implicitly consider the conditional version of expectations for these variables (and resulting variables defined as a function of \(X_i, Y_i\)).
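To make the definition concrete, here is a toy instantiation (Python; the specific strategies are our own illustrative choices, not strategies from this work or from [3]) generating the 2c states that the oracles of Definition 3 expose:

```python
import random

def play_game(c, p0, bias_is_A):
    """Generate the 2c states of one execution of a toy covert
    cheating game. The bias inducer nudges the state additively
    toward 1; the neutral party makes a fair +/- step, which is a
    martingale move as required by Eqs. (3)/(4). The final state is
    rounded to a bit of matching expectation."""
    p, states = p0, []
    for move in range(1, 2 * c + 1):
        a_moves = (move % 2 == 1)          # A makes the odd moves
        if a_moves == bias_is_A:           # the bias inducer's move
            p = min(1.0, p + (1.0 - p0) / c)
        else:                              # the neutral party's move
            step = min(p, 1.0 - p) / 2
            p = p + step if random.random() < 0.5 else p - step
        states.append(p)
    states[-1] = 1.0 if random.random() < states[-1] else 0.0
    return states  # a sampling oracle answers query i with Bernoulli(states[i-1])
```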

4.2 Attack 1: Free-Lunch Attack with Weak Distinguishing Guarantees

We show here that there exists a very efficient generic observer strategy given sampling access to game states with small but non-negligible distinguishing advantage. Namely:

Theorem 4 (Free-Lunch Distinguisher)

For any covert cheating game consisting of 2c total moves, with starting state \(p_0\) and satisfying symmetricity (see Definition 3, Eq. (5)), and any strategy for that game with success rate \(p_f>p_0\), there exists an observer strategy \(C^*\) that determines the identity of the bias inducer with advantage \(\delta = \frac{p_f-p_0}{c}\), by making a single call to the sampling oracle \(\mathcal O\).

In other words, the strategy does not \(\delta \)-fool the class of observers making a single sampling oracle call.

Proof

We build our observer strategy as follows:

 

  • Pick a random \(k\leftarrow [c]\). Output \(\mathcal O(2k-1) \in \{0,1\}\).

In other words, \(C^*\) picks a random move from A, and outputs 1 with probability equal to the state of the game right after A’s kth move. Let us analyze the advantage of \(C^*\).

Suppose A is the bias inducer. Then:

$$\begin{aligned} \mathbb E_{k\leftarrow [c]}\left[ \mathcal O^{(A)}(2k-1) \right] = \mathbb E\left[ X_{2k-1}\right] , \end{aligned}$$

and we furthermore have by Eq. (3) that for all \(k\in [c]\)Footnote 21:

$$\begin{aligned} \mathbb E\left[ X^{(B)}_{2k}-X^{(A)}_{2k-1}\right] =0, \end{aligned}$$
(7)

namely, B’s moves do not change X in expectation.

Suppose now that B is the bias inducer. Then, Eq. (4) gives that for all \(k\in [c]\):

$$\begin{aligned} \mathbb E\left[ Y^{(A)}_{2k-1}-Y^{(B)}_{2k-2}\right] =0, \end{aligned}$$
(8)

namely, A’s moves do not change Y in expectation. This gives:

$$\begin{aligned} \mathbb E_{k\leftarrow [c]} \left[ \mathcal O^{(B)}(2k-1) \right] &= \mathbb E\left[ Y_{2k-1}\right] \\ &= \mathbb E\left[ Y_{2k-2}\right] \\ &= \mathbb E\left[ X_{2k-2}\right] , \end{aligned}$$

where the second equality comes from Eq. (8), and the last equality follows by symmetry if \(k>1\) (Eq. (5)), or as \(X_0=Y_0 = p_0\) if \(k=1\).

Overall, we obtain that the advantage of \(C^*\) is, by telescoping:

$$\begin{aligned} &\left| \,\Pr \left[ C^{\mathcal O^{(A)}}=1\right] - \Pr \left[ C^{\mathcal {O^{(B)}}}=1\right] \,\right| \\ &\ge \Pr \left[ C^{\mathcal O^{(A)}}=1\right] - \Pr \left[ C^{\mathcal {O^{(B)}}}=1\right] \\ &= \mathbb E_{k{\mathop {\leftarrow }\limits ^{\$}}[c]}\left[ X^{(A)}_{2k-1}-Y^{(A)}_{2k-1}\right] \\ &= \mathbb E_{k{\mathop {\leftarrow }\limits ^{\$}}[c]}\left[ X^{(A)}_{2k-1}-X^{(B)}_{2k-2}\right] \\ &= \sum _{k\in [c]} \left( \frac{\mathbb E\left[ X_{2k-1}^{(A)}-X_{2k-2}^{(B)}\right] }{c} + \underbrace{\frac{\mathbb E\left[ X_{2k}^{(B)}-X_{2k-1}^{(A)}\right] }{c}}_{=0 \text { (Eq. (7))}} \right) \\ &= \frac{\mathbb {E}\left[ X_{2c}-X_0 \right] }{c} \\ &\ge \frac{p_f-p_0}{c}, \end{aligned}$$

which concludes the proof.

Remark 6 (Correct predictions)

Our attack provides a slightly better guarantee than stated in Theorem 4: it correctly outputs the identity of the bias inducer (say by associating output 1 to A being the bias inducer), as opposed to simply distinguishing them. In other words, we have:

$$\begin{aligned} \Pr \left[ C^{\mathcal O^{(A)}}=1\right] - \Pr \left[ C^{\mathcal {O^{(B)}}}=1\right] \ge \frac{p_f-p_0}{c}. \end{aligned}$$

4.3 Attack 2.1: A Strong Attack Given Direct Access to States

Next, we describe a generic attack with large advantage, given direct access to game states. We refer to the technical overview (Sect. 2) for an intuition of the attack. Compared to the exposition in the technical overview, the main difference is that we have to deal with games where the end state is not consistently \(p_f\), but rather 0 or 1 with expectation \(p_f\). This does lead to technical complications. Indeed, one crucial argument in our proof is that the (multiplicative) contribution of the neutral party is 1 in expectation, which allows us to invoke Markov’s inequality. However, since the success rate is only guaranteed in expectation, conditioning on the end state being 1 might skew the contribution of the neutral party, which would prevent us from concluding. Instead, we carefully define several useful events, which allows us to compute the advantage of our strategy without ever conditioning on successful runs. More precisely, we prove the slightly stronger statement that, for all winning executions (that is, such that \(p_{2c}=1\)), we only fail to identify the sender with small probability \(\sqrt{p_0}\).Footnote 22

Last, there is a minor technicality in how to handle denominators being equal to 0 (again, where we do not wish to condition on denominators not being equal to 0), which we solve by requiring a stronger, but natural “absorption” property of the covert cheating game (Definition 3, Eq. (6)).

Theorem 5 (Strong Distinguisher given Direct Access)

For any covert cheating game satisfying absorption (Definition 3, Eq. (6)), consisting of 2c total moves, with starting state \(p_0>0\), and any strategy for that game with success rate \(p_f>2\sqrt{p_0}\), there exists an observer strategy \(C^*\) that determines the identity of the bias inducer with advantage at least \(p_f-2\sqrt{p_0}\) given 2c oracle calls to the direct access oracle \(\mathcal O_\textsf{direct}\) (Definition 3).

Proof

We describe the observer strategy.

 

  • Compute for all \(i\in [2c]\): \(p_i = \mathcal O_\textsf{direct}(i)\). If \(p_{2c}=0\), output a random bit \(\beta \leftarrow \{0,1\}\). Otherwise, compute:

    $$\begin{aligned} t^{(A)} &= \prod _{k=1}^{c} \frac{p_{2k-1}}{p_{2k-2}},\\ t^{(B)} &= \prod _{k=1}^{c} \frac{p_{2k}}{p_{2k-1}}, \end{aligned}$$

    with the convention that \(t^{(A)}=1\) (resp. \(t^{(B)}=1\)) if \(p_{2k-2}=0\) for some \(k\in [c]\) (resp. if \(p_{2k-1}=0\) for some \(k\in [c]\)).

  • Output 1 (that we associate to outputting “A”) if \(t^{(A)} \ge \sqrt{\frac{1}{p_0}}\). Otherwise, if \(t^{(B)} \ge \sqrt{\frac{1}{p_0}}\), output 0 (that we associate to outputting “B”). Otherwise, output \(\bot \).Footnote 23

Let us analyze the advantage of \(C^*\).

Case 1: A is the bias inducer. Suppose A is the bias inducer. We define the following events:

$$\begin{aligned} \textsf{CORRECT}_A &{:}{=}\left\{ X_{2c}=1 \right\} ;\\ \textsf{LARGE}_A^{(A)} &{:}{=}\left\{ t^{(A)} \ge \sqrt{\frac{1}{p_0}} \right\} ; \\ \textsf{SMALL}_A^{(B)} &{:}{=}\left\{ t^{(B)} < \sqrt{\frac{1}{p_0}} \right\} ; \\ \textsf{GOOD}_A &{:}{=}\textsf{CORRECT}_A \, \wedge \, \textsf{LARGE}_A^{(A)} \, \wedge \, \textsf{SMALL}_A^{(B)}. \end{aligned}$$

Note that if \(\textsf{GOOD}_A\) occurs, our algorithm is correct when A is the bias inducer. We argue that \(\textsf{GOOD}_A\) occurs with high probability. We start by analyzing the contribution \(t^{(B)}\) of B.

Lemma 2

We have:

$$\begin{aligned} \Pr [\textsf{SMALL}_A^{(B)}] \ge 1-\sqrt{p_0}. \end{aligned}$$

Proof

For \(k\in [c]\), let us define the partial product of ratios associated to B:

$$\begin{aligned} P_k^{(B)} = \prod _{j=1}^{k} \frac{X^{(B)}_{2j}}{X^{(A)}_{2j-1}}, \end{aligned}$$

with the convention that \(P_k^{(B)}=1\) if \(X^{(A)}_{2j-1}=0\) for some \(j\in [k]\), and observe that:

$$\begin{aligned} t^{(B)}\leftarrow P_{c}^{(B)}. \end{aligned}$$

First, observe that

$$\begin{aligned} \mathbb {E}[P_1^{(B)}] = \mathbb {E}\left[ \frac{X^{(B)}_2}{X^{(A)}_1}\right] =1, \end{aligned}$$

by Eq. (3).

Let \(k\in \{2,\cdots , c\}\); suppose that \(\mathbb {E}[P_{k-1}^{(B)}]=1\). We have:

$$\begin{aligned} \mathbb {E}[P_k^{(B)}] &= \mathbb {E}_{X_0,\cdots ,X_{2k-1}}\left[ \mathbb {E}\left[ P_k^{(B)} \,\big |\, X_0,\cdots , X_{2k-1}\right] \right] \\ &= \mathbb {E}_{X_0,\cdots ,X_{2k-1}}\left[ \mathbb {E}\left[ P^{(B)}_{k-1} \cdot \frac{X^{(B)}_{2k}}{X^{(A)}_{2k-1}} \,\bigg |\, X_0,\cdots , X_{2k-1}\right] \right] \\ &= \mathbb {E}_{X_0,\cdots ,X_{2k-1}}\left[ P^{(B)}_{k-1} \cdot \frac{\mathbb {E}\left[ X^{(B)}_{2k} \,\big |\, X_0,\cdots , X_{2k-1}\right] }{X^{(A)}_{2k-1}}\right] \\ &= \mathbb {E}_{X_0,\cdots ,X_{2k-1}}\left[ P^{(B)}_{k-1}\right] = \mathbb {E}[P_{k-1}^{(B)}] = 1, \end{aligned}$$

where the third equality uses that \(P^{(B)}_{k-1}\) and \(X^{(A)}_{2k-1}\) are determined by \(X_0,\cdots ,X_{2k-1}\), the fourth equality follows by Eq. (3), and where we use the convention that a fraction with denominator 0 is equal to 1. Therefore, for all \(k \in [c]\) (and in particular \(k=c\)), we have:

$$\begin{aligned} \mathbb {E}[P_k^{(B)}] = 1. \end{aligned}$$
(9)

Markov’s inequality thus gives:

$$\begin{aligned} \Pr [\lnot \textsf{SMALL}_A^{(B)}] = \Pr \left[ P_{c}^{(B)} \ge \sqrt{\frac{1}{p_0}} \right] \le \sqrt{p_0}, \end{aligned}$$

which concludes the proof of Lemma 2.

Next, by definition of success rate and \(p_f\), we have \(\Pr [\textsf{CORRECT}_A] \ge p_f\). Thus:

$$\begin{aligned} \Pr \left[ \textsf{CORRECT}_A \wedge \textsf{SMALL}_A^{(B)} \right] \ge \Pr \left[ \textsf{CORRECT}_A\right] - \Pr \left[ \lnot \textsf{SMALL}_A^{(B)} \right] \ge p_f - \sqrt{p_0}. \end{aligned}$$

Last, we observe that \(\textsf{CORRECT}_A \wedge \textsf{SMALL}_A^{(B)}\) implies \(\textsf{CORRECT}_A \wedge \textsf{LARGE}_A^{(A)} \wedge \textsf{SMALL}_A^{(B)}\). Indeed, suppose \(\textsf{CORRECT}_A\) occurs. By absorption of the game (Eq. (6)), none of the terms used in a denominator equal 0 (otherwise the final state would be 0). Furthermore, whenever \(\textsf{CORRECT}_A\) occurs, we have by a telescoping product:

$$\begin{aligned} t^{(A)} \cdot t^{(B)} = \frac{p_{2c}}{p_0} = \frac{1}{p_0}, \end{aligned}$$

and therefore, \(t^{(B)}< \sqrt{1/p_0}\) (given by \(\textsf{SMALL}_A^{(B)}\)) implies that \(t^{(A)} \ge \sqrt{1/p_0}\), namely that \(\textsf{LARGE}_A^{(A)}\) occurs.

Overall, this ensures:

$$\begin{aligned} \Pr [\textsf{GOOD}_A] \ge \Pr [\textsf{CORRECT}_A \wedge \textsf{SMALL}_A^{(B)}] \ge p_f - \sqrt{p_0}, \end{aligned}$$

and therefore, when A is the bias inducer, \(C^*\) is correct whenever \(\textsf{GOOD}_A\) occurs, and it outputs a uniformly random bit (thus is correct with probability 1/2) whenever \(\lnot \textsf{CORRECT}_A\) occurs. In other words, when A is the bias inducer, \(C^*\) outputs 1 with probability at least \(p_f-\sqrt{p_0} + (1-p_f)/2\).

Case 2: B is the bias inducer. Suppose now B is the bias inducer. We can similarly define:

$$\begin{aligned} \textsf{CORRECT}_B &{:}{=}\left\{ Y_{2c}=1 \right\} ;\\ \textsf{LARGE}_B^{(B)} &{:}{=}\left\{ t^{(B)} \ge \sqrt{\frac{1}{p_0}} \right\} ; \\ \textsf{SMALL}_B^{(A)} &{:}{=}\left\{ t^{(A)} < \sqrt{\frac{1}{p_0}} \right\} ; \\ \textsf{GOOD}_B &{:}{=}\textsf{CORRECT}_B \, \wedge \, \textsf{LARGE}_B^{(B)} \, \wedge \, \textsf{SMALL}_B^{(A)}. \end{aligned}$$

An almost identical analysis (using the random variables Y instead of X, and shifting the indices appropriately) shows that

$$\begin{aligned} \Pr [\textsf{GOOD}_B] \ge p_f - \sqrt{p_0}, \end{aligned}$$

and therefore \(C^*\) will be correct with probability at least \(p_f-\sqrt{p_0} + (1-p_f)/2\) when B is the bias inducer.

Wrapping Up. Overall, the advantage of \(C^*\) is

$$\begin{aligned} \left| \,\Pr \left[ C^{\mathcal O^{(A)}}=1\right] - \Pr \left[ C^{\mathcal {O^{(B)}}}=1\right] \,\right| &\ge \Pr \left[ C^{\mathcal O^{(A)}}=1\right] - \Pr \left[ C^{\mathcal {O^{(B)}}}=1\right] \nonumber \\ &\ge 2(p_f-\sqrt{p_0})-1 + (1-p_f) = p_f - 2\sqrt{p_0}, \end{aligned}$$

which concludes the proof.
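A minimal sketch of the observer of Theorem 5 (Python; `direct(i)` plays the role of \(\mathcal O_\textsf{direct}\), and the conventions mirror the strategy above):

```python
import math
import random

def direct_access_observer(direct, c, p0):
    """On winning runs the ratios telescope to t_A * t_B = 1/p0, so at
    least one of the two products reaches sqrt(1/p0); accuse that party."""
    p = [p0] + [direct(i) for i in range(1, 2 * c + 1)]
    if p[2 * c] == 0:
        return random.choice(["A", "B"])   # losing run: random guess
    # Convention: a ratio with zero denominator contributes 1.
    ratio = lambda num, den: num / den if den > 0 else 1.0
    t_A = math.prod(ratio(p[2*k - 1], p[2*k - 2]) for k in range(1, c + 1))
    t_B = math.prod(ratio(p[2*k], p[2*k - 1]) for k in range(1, c + 1))
    threshold = math.sqrt(1 / p0)
    if t_A >= threshold:
        return "A"
    if t_B >= threshold:
        return "B"
    return None                             # undetermined
```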

4.4 Attack 2.2: A Strong Attack Given Sampling Access to States

Next, we port our attack from Sect. 4.3 to the much weaker sampling setting. Overall, this new attack

  • works in the much weaker sampling setting, and does not require the game to satisfy absorption (Eq. (6)),

  • but has slightly weaker advantage \(\approx p_f-1/\textrm{poly}(\lambda )\) while requiring \(p_0\) to be fairly small (the advantage holding for any \(\textrm{poly}(\lambda )\) of our choice, as long as \(p_0\) is small enough), and has a significantly larger polynomial sample complexity \(q\approx c^6\cdot \textrm{poly}(\lambda )\) with respect to the sampling oracle.Footnote 24

Our new analysis is more involved, as we must carefully estimate the multiplicative progress of the players despite having only imperfect access to the game states \(p_i\). The main problem arises when the state of the game becomes very small (say, exponentially close or even equal to 0). Indeed, such states are indistinguishable from the state being actually 0 from the view of a polynomial-time observer with only sampling access to the state. However, they cannot be handled using an absorption argument (Eq. (6)), as in Theorem 5: this is because Eq. (6) only holds for the two states 0 and 1. We solve this by thresholdizing the (partial) products, and only considering “suffix-products” (that is, over indices \(i\ge i^*\) for some index \(i^*\)) when all the probabilities involved are large enough (say \(\gtrsim 1/c^2\)). We refer to Sect. 2 for an intuition for the attack.

One difference with the overview in Sect. 2 is, again, that the strategy of the players A, B does not necessarily finish at a state \(p_{2c}\ge p_f\); this guarantee only holds in expectation. We solve this issue similarly to Theorem 5, by defining several useful events, and arguing that products associated to the neutral party are small with high probability without (significantly) conditioning. And similarly to Theorem 5, we prove a slightly stronger result: for all winning executions of the game (that is, such that \(p_{2c}=1\)), we only fail to identify the sender with probability \(\approx 1/\textrm{poly}(\lambda )\).

Overall, we present an attack with guarantees comparable with the ones from Theorem 5. Even though the analysis is quite tedious and notation-heavy, it is still very similar in spirit to that of Theorem 5.

Theorem 6 (Strong Distinguisher given Sampling Access)

Let \(\alpha (\lambda )\ge 1\) be a polynomial. For every covert cheating game satisfying \(p_0 \le O\left( \frac{1}{c^2\alpha ^2}\right) \), and any strategy with success rate \(p_f \ge 1/\alpha (\lambda )\), there exists an observer \(C^*\) that determines the identity of the bias inducer with advantage at least \(p_f - 1/\alpha - \textrm{negl}(\lambda )\).

Furthermore, the observer makes \(c^6\alpha ^4 \omega (\log ^2 \lambda )\) calls to the sampling oracle \(\mathcal O\).

In particular, if \(p_0 = \textrm{negl}(\lambda )\), there is, for all polynomial \(\alpha \), an observer strategy with advantage at least \(p_f - 1/\alpha - \textrm{negl}(\lambda )\), with a query complexity of \(c^6\alpha ^4 \omega (\log ^2 \lambda )\).

Proof

Suppose the covert cheating game satisfies the constraints on \(p_0\) and \(p_f\); observe that in particular \(p_0 \le p_f\).

We describe our attack, which uses the following parameters:

  • \(\tau =\tau (\lambda ,c) \in [0,1]_\mathbb {R}\), a threshold precision for our estimation procedure. We will use \(\tau = 1/(64 c^2\cdot \alpha ^2(\lambda ))\) where \(\alpha \) is specified in Theorem 6.

  • \(t = t(\lambda ,c) \in [0,1]_\mathbb {R}\), a multiplicative approximation factor for our estimation. We will use \(t=1/2c\).

  • \(s=\textrm{poly}(\lambda ,c,\tau ,t)\), a number of repetitions for our estimation. We will set \(s= c^6\cdot \textrm{poly}(\lambda )\), so that \(s = \frac{\log c\cdot \omega (\log \lambda )}{\tau ^2 t^2}\).

 

  1. (Estimation of the game states):

    • Set \(\widetilde{p_0}=p_0\).

    • For \(i=1\) to 2c:

      • For \(j=1\) to s, sample \(b_j\leftarrow \mathcal O(i)\).

      • Compute \(\widetilde{p_i} = \frac{1}{s}\sum _{j=1}^s b_j\).

    • If \(\widetilde{p_{2c}}\le 1-\tau \), output a random bit \(\beta \leftarrow \{0,1\}\).Footnote 25

    • Otherwise, let \(i^*\) be the largest index in [0, 2c] such that \(\widetilde{p_i} \le \tau \) (which exists as we set \(p_0 = \widetilde{p_0} \le \tau \)).

  2. (Estimation of partial numerators and denominators): Compute

    $$\begin{aligned} \widetilde{t^{(A)}}_\textsf{num}&= \prod _{k \,:\, 2k-2\ge i^*}^{c} \widetilde{p_{2k-1}}, & \widetilde{t^{(B)}}_\textsf{num}&= \frac{1}{\widetilde{p_{2c}}}\prod _{k \,:\, 2k-1\ge i^*}^{c} \widetilde{p_{2k}},\\ \widetilde{t^{(A)}}_\textsf{denom}&= K^{(A)}\cdot \prod _{k \,:\, 2k-1\ge i^*}^{c} \widetilde{p_{2k-2}}, & \widetilde{t^{(B)}}_\textsf{denom}&= K^{(B)}\cdot \prod _{k \,:\, 2k\ge i^*}^{c} \widetilde{p_{2k-1}}, \end{aligned}$$

    where \(K^{(A)}= {\left\{ \begin{array}{ll} 1\ \text {if}\; i^*\; \text {is odd}\\ \tau \ \text { if}\; i^*\; \text {is even}, \end{array}\right. }\) and \(K^{(B)}= {\left\{ \begin{array}{ll} \tau \ \text {if}\; i^*\; \text {is odd}\\ 1\ \text { if}\; i^*\; \text {is even}. \end{array}\right. }\) In other words, this computes partial products starting at \(i^*\) with the convention \(\widetilde{p_{i^*}} = \tau \) (which only appears in one denominator, according to the parity of \(i^*\)), and \(\widetilde{p_{2c}}=1\).

  3. (Output): Output 1 (that we associate to outputting “A”) if

    $$\begin{aligned} \widetilde{t^{(A)}} {:}{=}\frac{\widetilde{t^{(A)}_{\textsf{num}}}}{\widetilde{t^{(A)}_{\textsf{denom}}}} \ge \sqrt{\frac{1}{\tau }}. \end{aligned}$$

    Otherwise, output 0 (that we associate to outputting “B”) if

    $$\begin{aligned} \widetilde{t^{(B)}} {:}{=}\frac{\widetilde{t^{(B)}_{\textsf{num}}}}{\widetilde{t^{(B)}_{\textsf{denom}}}} \ge \sqrt{\frac{1}{\tau }}. \end{aligned}$$

    Otherwise, output \(\bot \).Footnote 26

Let us analyze the advantage of \(C^*\).

Case 1. A is the bias inducer. We define the following events, similar to the proof of Theorem 5, adapted to the approximate setting:

$$\begin{aligned} \textsf{CORRECT}_A &{:}{=}\left\{ \widetilde{p_{2c}}\ge 1-t \right\} ;\\ \textsf{LARGE}_A^{(A)} &{:}{=}\left\{ \widetilde{t^{(A)}} \ge \sqrt{\frac{1}{\tau }} \right\} ;\\ \textsf{SMALL}_A^{(B)} &{:}{=}\left\{ \widetilde{t^{(B)}} < \sqrt{\frac{1}{\tau }} \right\} ;\\ \textsf{GOOD}_A &{:}{=}\textsf{CORRECT}_A \, \wedge \, \textsf{LARGE}_A^{(A)} \, \wedge \, \textsf{SMALL}_A^{(B)}. \end{aligned}$$

We furthermore define the following auxiliary events related to the accuracy of the estimation procedure:

$$\begin{aligned} \textsf{BAD}_0 &{:}{=}\left\{ \exists i \text { s.t. } p_i \ge \tau \ \text {and}\ \widetilde{p_i} \notin \left[ \left( 1-t\right) p_i, \left( 1+t\right) p_i\right] \right\} ;\\ \textsf{BAD}_1 &{:}{=}\left\{ p_{i^*} > 2\tau \right\} . \end{aligned}$$

We first argue that these auxiliary events only hold with negligible probability.

Lemma 3

We have: \(\Pr [\textsf{BAD}_0 \, \vee \, \textsf{BAD}_1] = \textrm{negl}(\lambda )\).

Proof

This follows from routine Chernoff bounds. Define:

$$\begin{aligned} \textsf{BAD}_2 {:}{=}\left\{ \exists i> i^*, p_i \le \tau /2\right\} . \end{aligned}$$

Combining Chernoff (Lemma 1 with \(t=1\)) with a union bound over the at most 2c indices i gives \(\Pr [\textsf{BAD}_2] \le 2c\cdot e^{-8s \tau ^2}\). Moreover, \(\Pr [\textsf{BAD}_0 \wedge \lnot \textsf{BAD}_2] \le 4c\cdot e^{-2\tau ^2t^2s}\) by another combination of Chernoff and a union bound, which overall yields:

$$\begin{aligned} \Pr [\textsf{BAD}_0] \le 6c\cdot e^{-2\tau ^2t^2 s} \le \textrm{negl}(\lambda ), \end{aligned}$$

as long as \(\tau ^2t^2s \ge \log (c) \omega (\log \lambda )\), which holds by our setting of s.

Similarly, a Chernoff bound with \(t=2\) gives \(\Pr [\textsf{BAD}_1] \le e^{-8\tau ^2 s}\), which is negligible as long as \(\tau ^2s \ge \omega (\log \lambda )\).
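As a quick numerical sanity check of these bounds (a toy simulation with parameters of our own choosing, much smaller than the asymptotic setting of the proof), one can verify that \(s = \Theta (1/(\tau ^2 t^2))\) samples give a \((1\pm t)\)-multiplicative estimate of a state \(p_i \ge \tau \):

```python
import math
import random

def estimate(p, s):
    """Empirical mean of s Bernoulli(p) coins, as in Step 1 of the attack."""
    return sum(random.random() < p for _ in range(s)) / s

# Toy parameters chosen for a fast demo (much smaller than in the proof).
tau, t = 0.05, 0.25
s = math.ceil(10 / (tau**2 * t**2))  # mirrors s = Theta(1/(tau^2 t^2)), up to logs

p_i, trials = 0.3, 20  # some state with p_i >= tau
failures = sum(
    not ((1 - t) * p_i <= estimate(p_i, s) <= (1 + t) * p_i)
    for _ in range(trials)
)
print(f"estimate fell outside (1 +/- t) * p_i in {failures}/{trials} trials")  # expect 0
```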

We want to prove two main claims, namely:

  (1) Whenever \(\textsf{GOOD}_A\) occurs, \(C^*\) correctly outputs 1.

  (2) \(\textsf{GOOD}_A\) occurs with sufficiently high probability (see the Claim below).

Claim (1) follows, as in the proof of Theorem 5, from the fact that \(\textsf{CORRECT}_A \wedge \textsf{SMALL}_A^{(B)}\) implies that \(\textsf{LARGE}_A^{(A)}\) holds, except with negligible probability. Indeed, whenever

\(\textsf{CORRECT}_A\) occurs and \(\textsf{BAD}_0\) does not, we have

$$\begin{aligned} \widetilde{t^{(A)}} \cdot \widetilde{t^{(B)}} = \frac{1}{\tau }, \end{aligned}$$

and therefore \(\widetilde{t^{(B)}} < \sqrt{1/\tau }\) (given by \(\textsf{SMALL}_A^{(B)}\)) implies that \(\widetilde{t^{(A)}} \ge \sqrt{1/\tau }\), namely that \(\textsf{LARGE}_A^{(A)}\) occurs, and Lemma 3 concludes the claim.
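Indeed, under the conventions of Step 2 (\(\widetilde{p_{i^*}} = \tau \) and \(\widetilde{p_{2c}} = 1\)), the two partial products interleave into a single telescoping product over all steps above \(i^*\):

$$\begin{aligned} \widetilde{t^{(A)}} \cdot \widetilde{t^{(B)}} = \prod _{i=i^*}^{2c-1} \frac{\widetilde{p_{i+1}}}{\widetilde{p_i}} = \frac{\widetilde{p_{2c}}}{\widetilde{p_{i^*}}} = \frac{1}{\tau }. \end{aligned}$$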

It therefore suffices to prove (2).

Claim

We have:

$$\begin{aligned} \Pr [\textsf{GOOD}_A] \ge p_f - 4c\cdot \sqrt{\tau } - \textrm{negl}(\lambda ). \end{aligned}$$

We prove this via a few intermediate steps.

Proof

(Proof of the Claim). We proceed similarly to Sect. 4.3. We start by showing that:

$$\begin{aligned} \Pr [\textsf{SMALL}_A^{(B)}] \ge \Pr [\textsf{SMALL}_A^{(B)} \, \wedge \,\lnot \textsf{BAD}_0 \, \wedge \, \lnot \textsf{BAD}_1] \ge 1-4c\cdot \sqrt{\tau }-\textrm{negl}(\lambda ). \end{aligned}$$
(10)

By a similar analysis to Sect. 4.3, using Eq. (4), we have that for any fixed \(i\in [2c]\):

$$\begin{aligned} \Pr \left[ \prod _{k \,|\, 2k-1 \ge i}^{c} \frac{p^{(B)}_{2k}}{p^{(A)}_{2k-1}} \ge \frac{1}{4}\cdot \sqrt{\frac{1}{\tau }} \right] \le 4\sqrt{\tau }. \end{aligned}$$

A union bound over \(i=2k-1\in [2c]\) (there are c different such products) then gives:

$$ \Pr \left[ \exists i\in [2c], \quad \prod _{k \,|\, 2k-1 \ge i}^{c} \frac{p^{(B)}_{2k}}{p^{(A)}_{2k-1}} \ge \frac{1}{4}\cdot \sqrt{\frac{1}{\tau }} \right] \le 4c\cdot \sqrt{\tau }. $$

Furthermore, whenever \(\lnot \textsf{BAD}_1\) occurs, we have \(p_{i^*} \le 2\tau \), so that \(1/\tau \le 2/p_{i^*}\), i.e., replacing \(p_{i^*}\) by \(\tau \) increases the product by at most a factor of 2. Using Lemma 3:

$$ \Pr \left[ \frac{1}{2}\cdot \prod _{k \,|\, 2k-1 \ge i^*}^{c} \frac{p'^{(B)}_{2k}}{p'^{(A)}_{2k-1}} \ge \frac{1}{4}\cdot \sqrt{\frac{1}{\tau }} \right] \le 4c\cdot \sqrt{\tau } + \textrm{negl}(\lambda ), $$

where the \(p'_i\) are defined by \(p'_{i^*}= \tau \) and \(p'_i = p_i\) for all \(i\ne i^*\).

Last, whenever \(\lnot \textsf{BAD}_0\) additionally occurs, we have:

$$\begin{aligned} \frac{\widetilde{t^{(B)}_{\textsf{num}}}}{\widetilde{t^{(B)}_{\textsf{denom}}}} &= \frac{1}{K^{(B)}}\cdot \frac{1}{\widetilde{p_{2c}}} \cdot \frac{\prod _{k \,|\, 2k-1\ge i^*}^{c} \widetilde{p_{2k}}}{\prod _{k \,|\, 2k\ge i^*}^{c} \widetilde{p_{2k-1}}}\\ &\le \left( \frac{1+t}{1-t} \right) ^c \prod _{k \,|\, 2k-1\ge i^*}^{c} \frac{p'^{(B)}_{2k}}{p'^{(A)}_{2k-1}}\\ &\le 2\cdot \prod _{k \,|\, 2k-1 \ge i^*}^{c} \frac{p'^{(B)}_{2k}}{p'^{(A)}_{2k-1}}, \end{aligned}$$

whenever \(t\le 1/(2c)\). Therefore:

$$\begin{aligned} \Pr [\lnot \textsf{SMALL}_A^{(B)}] = \Pr \left[ \frac{\widetilde{t^{(B)}_{\textsf{num}}}}{\widetilde{t^{(B)}_{\textsf{denom}}}} \ge \sqrt{\frac{1}{\tau }} \right] \le 4c\cdot \sqrt{\tau } + \textrm{negl}(\lambda ). \end{aligned}$$

Next, conditioned on \(\lnot \textsf{BAD}_0\), \(\textsf{CORRECT}_A\) holds whenever the final state equals 1, which happens with probability \(p_f\) (by definition of \(p_f\)); since \(\Pr [\textsf{BAD}_0]\) is negligible, \(\Pr [\textsf{CORRECT}_A] \ge p_f - \textrm{negl}(\lambda )\), and thus

$$\begin{aligned} &\Pr \left[ \textsf{CORRECT}_A \wedge \textsf{SMALL}_A^{(B)} \wedge \textsf{LARGE}_A^{(A)} \right] \\ &\ge \Pr \left[ \textsf{CORRECT}_A \wedge \textsf{SMALL}_A^{(B)} \right] - \textrm{negl}(\lambda ) \\ &\ge \Pr \left[ \textsf{CORRECT}_A\right] - \Pr \left[ \lnot \textsf{SMALL}_A^{(B)} \right] - \textrm{negl}(\lambda ) \\ &\ge p_f - 4c\sqrt{\tau } - \textrm{negl}(\lambda ), \end{aligned}$$

which concludes the proof of the Claim.

Overall, if A is the bias inducer, since \(C^*\) outputs 1 with probability 1/2 whenever \(\lnot \textsf{CORRECT}_A\) occurs, we have:

$$\begin{aligned} \Pr \left[ C^{\mathcal O^{(A)}}=1\right] \ge p_f - 4c\sqrt{\tau } + \frac{1-p_f}{2} - \textrm{negl}(\lambda ). \end{aligned}$$

Case 2. B is the bias inducer. Similarly to Sect. 4.3, we define and analyze the analogues of the events when B is the bias inducer, and conclude that in this case, \(C^*\) outputs 0 with probability at least \(p_f - 4c\sqrt{\tau } + \frac{1-p_f}{2} -\textrm{negl}(\lambda )\).

Wrapping Up. Overall, the advantage of \(C^*\) is

$$\begin{aligned} &\left| \,\Pr \left[ C^{\mathcal O^{(A)}}=1\right] - \Pr \left[ C^{\mathcal {O^{(B)}}}=1\right] \,\right| \nonumber \\ &\ge \Pr \left[ C^{\mathcal O^{(A)}}=1\right] - \Pr \left[ C^{\mathcal {O^{(B)}}}=1\right] \nonumber \\ &\ge p_f - 4c\sqrt{\tau } + \frac{1-p_f}{2} - 1 + (p_f - 4c\sqrt{\tau } + \frac{1-p_f}{2})- \textrm{negl}(\lambda )\nonumber \\ &= p_f - 8c\sqrt{\tau } - \textrm{negl}(\lambda ), \end{aligned}$$
(11)

and plugging in the parameters from the beginning of the proof gives \(8c\sqrt{\tau } = 8c/(8c\alpha ) = 1/\alpha \), which concludes the proof.

Remark 7 (Correct predictions)

Again, our attack provides a slightly better guarantee than stated in Theorem 6: it correctly outputs the identity of the bias inducer (say, by associating output 1 with A being the bias inducer), as opposed to merely distinguishing the two cases. In other words, we have:

$$\begin{aligned} \Pr \left[ C^{\mathcal O^{(A)}}=1\right] - \Pr \left[ C^{\mathcal {O^{(B)}}}=1\right] =p_f - 8c\sqrt{\tau } - \textrm{negl}(\lambda ). \end{aligned}$$

Looking ahead, we will crucially use this fact to extend our result to the many-party case.

Remark 8 (Cost of the Attack, and Fine-grained Guarantees)

The sampling complexity of our strategy \(C^*\) is a large but fixed polynomial, \(c^6\cdot \alpha ^4\cdot \omega (\log ^2 \lambda )\). Concretely, in the setting where \(p_f \ge K\) for a constant K and \(p_0 = \textrm{negl}(\lambda )\), we obtain an attack with constant advantage (or even advantage \(1-1/\textrm{poly}\) if \(p_f = 1-1/\textrm{poly}\)) whose sampling cost is a fixed overhead as a function of c.

In other words, our attack rules out combinations of games and strategies that \(\delta \)-fool fine-grained observers with sample complexity m(c), if m is allowed to be a large enough polynomial.Footnote 27

5 Lower Bounds on Anonymous Transfer

In this section, we tie the attacks on covert cheating games from Sect. 4 to impossibility results for anonymous transfer, thus obtaining Theorem 8 and Theorem 9. Lastly, we show how to extend Theorem 8 to the N-party setting in Sect. 5.3.

5.1 Reducing Anonymous Transfer to Covert Cheating Games

Theorem 7

Let \(\Pi _{AT} ^{\ell }\) be a two-party anonymous transfer protocol with correctness error \(\varepsilon \in [0,1]_{\mathbb {R}}\), anonymity \(\delta \in [0,1]_\mathbb {R}\) with respect to a class \(\mathcal C\) of adversaries, consisting of \(c\in \mathbb N\) rounds with message length \(\ell \in \mathbb N\) (all possibly functions of \(\lambda \)), and satisfying deterministic reconstruction (which is without loss of generality, see Remark 3).

Then there exists a covert cheating game, along with a player strategy, where the game consists of c rounds, the initial state of the game is \(2^{-\ell }\), the expected final state is \(p_f = 1-\varepsilon \), and the player strategy \(\delta \)-fools observers in \(\mathcal C\).

Moreover, the covert cheating game satisfies absorption (Definition 3, Eq. (6)), and is symmetric if \(\Pi _{AT} ^{\ell }\) is symmetric (Definition 2).

Proof

Let \(\Pi _{AT} ^{\ell } = (\textsf{Setup}, \textsf{Transfer}, \textsf{Reconstruct})\) be an AT with the notation of Theorem 7. We define our game as follows.

  • Players and roles. The players of the game are the participants of the AT. The bias inducer is the sender of the AT, run on a uniformly random message \(\mu \leftarrow \{0,1\}^\ell \); the neutral party is the dummy party of the AT; and the observers are the distinguishers.

  • Execution and states. Moves in the covert cheating game are messages sent in the AT. In other words, a full execution of the game is a full AT transcript. Because moves in the covert cheating game are sequential, we sequentialize the messages of the AT by considering player A to move first within each round. This induces an order on the messages, indexed by \(i\in [2c]\). Let us fix an execution of the game, that is, a full AT transcript \(\pi \leftarrow \textsf{Transfer}(\textsf{crs},b,\mu )\), where \(\textsf{crs}\leftarrow \textsf{Setup}(1^\lambda )\) and \(\mu \leftarrow \{0,1\}^\ell \). The associated states of the game \(p_i\), where \(i\in [2c]\), are defined as follows. Let \(\pi [i]\) denote the partial transcript consisting of the first i messages of the protocol \(\textsf{Transfer}\) (with the sequential order from above). Let \(\overline{\pi [i]}\) denote the distribution of randomly completed partial transcripts, where \(\pi [i]\) is completed with \(2c-i\) uniformly sampled random messages to obtain a full transcript. We then define:

    $$\begin{aligned} p_i = p(\textsf{crs}, \pi [i]) {:}{=}\Pr \left[ \mu '\leftarrow \textsf{Reconstruct}(\textsf{crs}, \overline{\pi [i]}) : \mu '=\mu \right] , \end{aligned}$$

    where \(\mu \leftarrow \{0,1\}^\ell \) is the input to the AT sender. The probability is over the randomness of the random completion (recall that \(\textsf{Reconstruct}\) is deterministic). The initial state of the game is \(p_0 = 1/2^\ell \), over the sole randomness of \(\mu \leftarrow \{0,1\}^\ell \). \(\Pi _{AT} ^{\ell }\) having correctness error \(\varepsilon \) implies that the resulting covert cheating strategies have success rate \(p_f = 1-\varepsilon \). Furthermore, the final state satisfies \(p_{2c}\in \{0,1\}\) by determinism of \(\textsf{Reconstruct}\) and definition of \(p_{2c}\) (as there is no randomness in \(\textsf{Reconstruct}(\textsf{crs}, \overline{\pi })\)).

  • Restriction on the neutral party. We argue that Eqs. (3) and (4) hold. This is because in an AT, dummy messages are sampled uniformly at random, and are therefore distributed identically to their counterparts obtained from random completion. More formally, supposing A is the bias inducer/sender, we have for all \(k\in [c]\) that the completions \(\overline{(\pi [2k-1]\Vert \textsf{msg})}\), where \(\textsf{msg}\) is a uniformly random AT protocol message, and \( \overline{\pi [2k-1]}\) are identically distributed by definition of completion, so that

    $$\begin{aligned} &\mathbb E[X^{(B)}_{2k}|X_{2k-1},\cdots ,X_{0}] \\ &= \mathbb {E}_{\textsf{msg}} \left[ \Pr \left[ \mu '\leftarrow \textsf{Reconstruct}(\textsf{crs}, \overline{(\pi [2k-1]\Vert \textsf{msg})}) : \mu '=\mu \right] \right] \\ &=\Pr \left[ \mu '\leftarrow \textsf{Reconstruct}(\textsf{crs}, \overline{\pi [2k-1]}) : \mu '=\mu \right] \\ &=X_{2k-1}, \end{aligned}$$

    and similarly when B is the bias inducer/sender.

  • Observers and security. Given an AT transcript, we implement a sampling oracle as follows (a code sketch follows this proof). On input i, sample \(\overline{\pi [i]}\) and compute \(\mu '\leftarrow \textsf{Reconstruct}(\textsf{crs}, \overline{\pi [i]})\). Output 1 if \(\mu '=\mu \), and 0 otherwise. By definition, this procedure tosses a coin with probability \(p_i\). Overall, if an observer strategy distinguishes \(\mathcal {O^{(A)}}\) from \(\mathcal {O^{(B)}}\) in time t, with q sampling oracle queries and advantage \(\delta \), then there exists a distinguisher for the AT running in time \(t + q\cdot (n+\rho (c))\) with advantage \(\delta \), where n is the complexity of computing \(\textsf{Reconstruct}\) and \(\rho (c)\) is the complexity of sampling c uniformly random protocol messages.

  • Absorption. Because completions are sampled uniformly at random from the whole message space of the protocol, by definition of \(p_i\), \(p_i=1\) implies that all completions of \(\pi [i]\) recover \(\mu \), which implies that all possible continuations of \(\pi [i]\) satisfy \(p=1\). Similarly, \(p_i=0\) implies that all completions of \(\pi [i]\) fail to recover \(\mu \), so that all continuations of \(\pi [i]\) satisfy \(p=0\).

  • Symmetry. Suppose the AT is symmetric (Definition 2), and let \(k\in [c]\). Then (1) by symmetry of \(\textsf{Reconstruct}\), \(\textsf{Reconstruct}(\textsf{crs}, \overline{\pi [2k]})\) is identically distributed to \(\textsf{Reconstruct}(\textsf{crs}, \textsf{Mirror}(\overline{\pi [2k]}))\), where \(\textsf{Mirror}\) flips the identities of the participants in the transcript, and (2) by symmetry of \(\textsf{Transfer}\), the unordered pair \((\textsf{dummy}^{(A)}, \textsf{msg}^{(B)})\) is identically distributed to \((\textsf{msg}^{(A)}, \textsf{dummy}^{(B)})\). We can therefore replace all the consecutive pairs of messages \((2j-1,2j)\), changing them from \(\{\textsf{dummy}^{(A)}_{2j-1}, \textsf{msg}^{(B)}_{2j}\}\) to \(\{\textsf{msg}^{(A)}_{2j-1}, \textsf{dummy}^{(B)}_{2j}\}\), for all \(j\le k\), without changing the distribution of the outcome of \(\textsf{Reconstruct}\). Doing so for all \(j\le k\) (i.e., 2k message replacements) gives:

    $$\begin{aligned} \mathbb {E}[X_{2k}]=\mathbb {E}[Y_{2k}]. \end{aligned}$$
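To illustrate the sampling-oracle construction from the proof above, here is a minimal Python sketch (the callables reconstruct and sample_message are hypothetical stand-ins for the AT's \(\textsf{Reconstruct}\) algorithm and random protocol-message sampling):

```python
def make_oracle(crs, transcript, mu, reconstruct, sample_message, c):
    """Sampling oracle derived from a fixed AT transcript: on input i,
    complete the first i messages with 2c - i fresh uniformly random
    messages and test whether reconstruction recovers the sender's
    input mu. This tosses a coin with success probability exactly p_i."""
    def oracle(i):
        completed = list(transcript[:i]) + [sample_message() for _ in range(2 * c - i)]
        return 1 if reconstruct(crs, completed) == mu else 0
    return oracle
```

Feeding such an oracle to the attack sketched earlier is exactly how the distinguishers used in the theorems below are obtained.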

5.2 Lower Bounds on Anonymous Transfer

We first rule out the existence of AT with non-trivial correctness error \(\varepsilon \) and anonymity \(\delta \), that are secure against arbitrary polynomial-time adversaries. We do so by combining Theorem 6 with Theorem 7, which gives the following:

Theorem 8

Suppose \(\Pi _{AT} ^{\ell }\) is a (two-party, silent receiver) anonymous transfer satisfying deterministic reconstruction, and with \(\ell \ge \omega (\log \lambda )\)-bit messages, with correctness error \(\varepsilon \), and \(\delta \)-anonymous against all polynomial-time adversaries. Then, for all polynomial \(\alpha = \alpha (\lambda )\):

$$\begin{aligned} \delta \ge 1 - \varepsilon - 1/\alpha (\lambda ). \end{aligned}$$

We observe that the relation between \(\delta \) and \(\varepsilon \) is almost tight (up to \(1/\textrm{poly}(\lambda )\) factors), namely, it matches a trivial construction (see the full version).

Remark 9 (Ruling out other versions of AT)

Thanks to the transformations in Sect. 3, Theorem 8 also rules out other versions of AT, including (all combinations of) the following: AT with non-silent receiver, AT with randomized reconstruction, AT with a large number N of parties (by considering \(\delta ' = (N-1)\cdot \delta \)).

Remark 10 (Ruling out strong fine-grained results)

In fact, denoting by \(n=n(\lambda )\) the running time of \(\textsf{Reconstruct}\), the attack obtained by combining Theorem 6 with Theorem 7 runs in time \(m(\lambda ) = n\cdot c^6 \cdot \omega (\log ^2(\lambda ))\), and therefore Theorem 8 further rules out schemes that are only secure against adversaries running with a fixed polynomial overhead \(m \le n^7\) over honest users. In other words, fine-grained results for non-trivial parameters can at most provide security against adversaries running in time m.

Next, we rule out the existence of fine-grained AT, but for a smaller set of parameters. We do so by combining Theorem 4 with Theorem 7. Note that Theorem 4 requires the AT to be symmetric; this is without loss of generality (see the full version). This overall gives the following:

Theorem 9

There is no fine-grained AT with \(\ell \)-bit messages, correctness error \(\varepsilon \), and anonymity \(\delta \), unless:

$$\begin{aligned} \delta \cdot c \ge 1 - \varepsilon - 1/2^\ell . \end{aligned}$$

More precisely, denoting by \(n=n(\lambda )\) the maximum runtime of \(\textsf{Transfer}\) and \(\textsf{Reconstruct}\), and by \(\rho (c)\) the cost of sampling c uniformly random protocol messages, combining Theorem 4 with Theorem 7 gives an attack with complexity \(n(\lambda ) + \rho (c) \le 2n(\lambda )\).

5.3 Extension to Anonymous Transfer with Many Parties

In this section, we show that Theorem 8 extends to rule out anonymous transfer with any polynomial number N of parties.Footnote 28 More precisely, we prove the following result.

Theorem 10

Let \(N = N(\lambda )\) be any polynomial. Suppose \(\Pi _{AT} ^{\ell }\) is an N-party (silent receiver) anonymous transfer satisfying deterministic reconstruction, with \(\ell \ge \omega (\log \lambda )\)-bit messages, with correctness error \(\varepsilon \), and \(\delta \)-anonymous against all polynomial-time adversaries. Then, for all polynomial \(\alpha = \alpha (\lambda )\):

$$\begin{aligned} \delta \ge 1 - \varepsilon - 1/\alpha (\lambda ). \end{aligned}$$

We refer to the technical overview for a sketch, and the full version for a full proof.