1 Introduction

Secure multi-party computation (MPC) [33, 61] allows a set of parties to jointly perform a computation on their inputs, in such a way that no coalition of cheating parties can learn any information beyond what is revealed by their outputs (privacy) or affect the outputs of the computation in any way other than by choosing their own inputs (correctness). Since the first seminal works on MPC [6, 13, 33, 56, 61], it has been studied in a variety of different settings and for numerous security notions: There exist protocols secure against passively corrupted (aka semi-honest) parties and against actively corrupted (aka malicious) parties; the underlying network can be synchronous or asynchronous; and the required security guarantees can be information-theoretic or computational—to name but a few of the axes along which the MPC task can be evaluated.

The prevalent model for the design of MPC protocols is the synchronous model, where the protocol proceeds in rounds. In this setting, the round complexity, i.e., the number of rounds it takes for a protocol to deliver outputs, is arguably the most important efficiency metric. Tight lower bounds are known on the round complexity of several MPC tasks. For example, for the well-known problems of Byzantine agreement (BA) and broadcast [48, 53], it is known that any deterministic protocol against an active attacker corrupting a linear fraction of the parties has linear round complexity [25, 28]. This result has quite far-reaching consequences as, starting with the seminal MPC works mentioned above, a common assumption in the design of secure protocols has been that the parties have access to a broadcast channel, which they potentially invoke in every round. In reality, such a broadcast channel might not be available and would have to be implemented by a broadcast protocol designed for a point-to-point network. It follows that even though the round complexity of many MPC protocols is linear in the multiplicative depth of the circuit being computed, their actual running time depends on the number of parties, when executed over point-to-point channels.

The above lower bound on the number of rounds for deterministic BA holds even if fewer than t parties are cheating, as long as all honest parties are required to complete the protocol together, in the same round [26]. Indeed, randomized BA protocols that circumvent this lower bound and run in an expected constant number of rounds (e.g., [4, 27, 29, 44, 50, 55]) do not provide simultaneous termination, i.e., once a party completes the protocol’s execution, it cannot know whether all honest parties have also terminated or if some honest parties are still running the protocol; in particular, the termination round of each party is not known a priori. A protocol with this property is said to have probabilistic termination (PT).

As pointed out by Ben-Or and El-Yaniv [5], when several such PT protocols are executed in parallel, the expected round complexity of the combined execution might no longer be constant (in particular, it might not equal the maximum of the expected running times of the individual protocols). Indeed, when m protocols whose termination round is geometrically distributed (and that therefore have constant expected round complexity) are run in parallel, the expected number of rounds that elapse before all of them terminate is \(\Theta (\log m)\) [16]. While an elegant mechanism was proposed in [5] for implementing parallel calls to broadcast such that the total expected number of rounds remains constant, it came with no guarantee of remaining secure under composition, raising questions about its usability in a higher-level protocol (such as the MPC setting described above). This shortcoming was recently addressed by Cohen et al. [16], who provided a framework for universal composition of PT protocols (building upon the universal composability framework [8]). An application of their result was the first composable protocol for parallel broadcast (with a simulation-based proof) that can be used for securely replacing broadcast channels in arbitrary protocols, and whose round complexity is constant in expectation.
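To illustrate the \(\Theta (\log m)\) effect, the following minimal Python sketch (purely illustrative; the success probability p and the trial count are arbitrary choices of ours) estimates the expected number of rounds until m parallel protocols with geometrically distributed termination rounds have all terminated.

```python
import random

def geometric(p):
    """Sample a geometric random variable: the number of independent
    Bernoulli(p) trials until the first success."""
    rounds = 1
    while random.random() > p:
        rounds += 1
    return rounds

def expected_max(m, p=0.5, trials=20000):
    """Monte Carlo estimate of the expected round in which the last of m
    parallel protocols (each with a geometric termination round) terminates."""
    return sum(max(geometric(p) for _ in range(m)) for _ in range(trials)) / trials

for m in (1, 2, 4, 8, 16, 32, 64):
    print(m, round(expected_max(m), 2))
# The estimates grow roughly linearly in log m, matching the Theta(log m) bound,
# even though each individual protocol terminates after 1/p = 2 rounds in expectation.
```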

Indeed, an immediate application of the composable parallel-broadcast protocol from [16] is plugging it into broadcast-model MPC protocols in order to obtain point-to-point protocols with a round complexity that is independent of the number of parties. In the information-theoretic setting, this approach yields protocols whose round complexity depends on the depth of the circuit computing the function [6, 13, 21, 56], whereas in the computational setting, assuming standard cryptographic assumptions, this approach yields expected-constant-round protocols [2, 3, 23, 32, 34, 41, 47, 51]. However, the resulting point-to-point protocols themselves have probabilistic termination. The techniques used for composing PT broadcast protocols in parallel crucially rely on the fact that broadcast is a privacy-free functionality, and a naïve generalization of this approach to arbitrary PT protocols fails to be secure. This raises the question of whether it is possible to execute arbitrary PT protocols in parallel without increasing the round complexity.

We remark that circumventing lower bounds on round complexity is just one of the areas where such PT protocols have been successfully used. Indeed, randomizing the termination round has proven to be a very useful technique for circumventing impossibilities and improving efficiency in many cryptographic protocols. Notable examples include non-committing encryption [24], cryptographic protocols designed for rational parties [1, 30, 31, 35, 37, 52], concurrent zero-knowledge protocols [10, 14], and parallel repetition of interactive arguments [36, 38]. The rich literature on such protocols motivates a thorough investigation of their security and composability. As mentioned above, the initial foundations for such an investigation were laid in [16], but for arbitrary PT protocols only a round-preserving sequential composition theorem was proven, leaving parallel composition as an open question.

1.1 Our Contributions

Protocol-black-box composition In this work, we investigate the issue of parallel composition for arbitrary protocols with probabilistic termination. In particular, we develop a compiler that, given functionalities \({\mathcal {F}} _1,\ldots ,{\mathcal {F}} _\texttt {M} \) and protocols \(\pi _1,\ldots ,\pi _\texttt {M} \), where for every \(i\in [\texttt {M} ]\) protocol \(\pi _i\) realizes \({\mathcal {F}} _i\) (possibly using correlated randomness as setup), produces a protocol that realizes the parallel composition of the functionalities, denoted \(({\mathcal {F}} _1 \parallel \cdots \parallel {\mathcal {F}} _{\texttt {M}})\).

Our compiler uses the underlying protocols in a black-box manner, guarantees output delivery (i.e., security without abort), and is resilient against a computationally unbounded active adversary, adaptively corrupting up to \(t < n/2\) parties (which is optimal [56]). Moreover, our compiler is round-preserving, meaning that if the maximal (expected) round complexity of the underlying protocols is \(\mu \), then the expected round complexity of the compiled protocol is \(O(\mu )\). For example, if each protocol \(\pi _i\) has constant expected round complexity, then so does the compiled protocol. Recall that this task is quite complicated even for the simple case of BA ([5, 16]). For arbitrary functionalities it is even more involved, since, as we show, the approach from [5] cannot be applied in a functionally black-box way in this case. Thus, effectively, our result is the first round-preserving parallel composition result for arbitrary multi-party protocols/tasks with probabilistic termination.

We now describe the ideas underlying our compiler. In [5] (see also [16]), a round-preserving parallel-broadcast protocol was constructed by iteratively running, for a constant number of rounds, multiple instances of BA protocols (each instance is executed multiple times in parallel, in a batch), hoping that at least one execution of every BA instance will complete. By choosing the multiplicity suitably, this occurs with constant probability, and therefore, the process only needs to be repeated a constant number of times in expectation.

At first sight it might seem that this idea can be applied to arbitrary tasks, but this is not the case. Intuitively, the reason is that if the tasks that we want to compose in parallel have privacy requirements, then making the parties run them in (parallel) “batches” with the same input might compromise privacy, since the adversary will be able to use different inputs and learn multiple outputs of the function(s). This issue is not relevant for broadcast, because it is a “privacy-free” functionality; the adversary may learn the result of multiple computations using the same inputs for honest parties, without compromising security.

To cope with the above issue, our parallel-composition compiler generalizes the approach of [5] in a privacy-preserving manner. At a high level, it wraps the batching technique in an MPC protocol that restricts the parties to using the same input in all protocols for the same function. In particular, the compiler is defined in the Setup-Commit-then-Prove hybrid model [11, 42], where the parties receive private correlated randomness that allows each party to commit to its input values and later execute multiple instances of every protocol, each time proving that the same input value is used in all executions.

The constructions in [11, 42] for realizing the Setup-Commit-then-Prove functionality are designed for the dishonest-majority setting and therefore allow for a premature abort. Since we assume an honest majority, we require security with guaranteed output delivery. A possible way around this would be, as is common in the MPC literature, to restart the protocol upon detecting cheating, or to add a recovery round for every potential abort; this, however, would induce a linear overhead (in the number of parties) on the round complexity of the protocol.

Instead, in order to recover from misbehavior by corrupted parties, we modify the Setup-Commit-then-Prove functionality and secret-share every committed random string between all the parties, using an error-correcting secret-sharing scheme (aka robust secret sharing [12, 22, 56]). In case a party is identified as cheating, every party broadcasts its share of the committed randomness corresponding to that party, reconstructs the correlated randomness for that party, and locally computes the messages corresponding to this party in every instance of every protocol. We also prove that the modified Setup-Commit-then-Prove functionality can be realized in a constant number of rounds, thus yielding no (asymptotic) overhead on the round complexity of the compiler. We refer the reader to Sect. 6.1 for more details.
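The recovery step relies on reconstructing a committed random string from announced shares even when some of them were adversarially modified. The following minimal Python sketch illustrates this error-correcting reconstruction idea with Shamir sharing over a toy prime field and brute-force decoding, in the simpler regime \(n \ge 3t+1\) where decoding is unambiguous; the honest-majority (\(t < n/2\)) scheme actually used in this work additionally relies on authentication and admits a negligible error probability (cf. Sect. 3.2). The field size and parameters are illustrative choices of ours.

```python
import random
from itertools import combinations

P = 2_147_483_647  # a toy prime field; real schemes work over suitably chosen fields

def share(secret, t, n):
    """Shamir-share `secret` using a random polynomial of degree t;
    party i (for i = 1..n) receives the share f(i)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P for i in range(1, n + 1)]

def interpolate(points, x):
    """Evaluate at x the unique polynomial of degree < len(points) through `points`."""
    total = 0
    for xj, yj in points:
        num, den = 1, 1
        for xk, _ in points:
            if xk != xj:
                num = num * (x - xk) % P
                den = den * (xj - xk) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

def robust_reconstruct(shares, t):
    """Recover the secret even if up to t announced shares were replaced:
    find a degree-t polynomial agreeing with at least n - t of the shares
    (unique whenever n >= 3t + 1) and output its value at 0."""
    n = len(shares)
    points = list(enumerate(shares, start=1))
    for subset in combinations(points, t + 1):
        agree = sum(interpolate(list(subset), x) == y for x, y in points)
        if agree >= n - t:
            return interpolate(list(subset), 0)
    raise ValueError("too many corrupted shares")

n, t = 7, 2
shares = share(secret=424242, t=t, n=n)
shares[0] = (shares[0] + 99) % P   # two corrupted parties announce wrong shares
shares[5] = (shares[5] + 7) % P
assert robust_reconstruct(shares, t) == 424242
```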

Functionally black-box composition Our feasibility result described above shows how to compute the parallel composition \(({\mathcal {F}} _1 \parallel \cdots \parallel {\mathcal {F}} _{\texttt {M}})\) of the functionalities \({\mathcal {F}} _1, \ldots , {\mathcal {F}} _{\texttt {M}}\), given black-box access to the protocols \(\pi _1, \ldots , \pi _{\texttt {M}}\). That is, the compiler crucially relies on knowing the messages (i.e., the transcript) of the protocols’ executions, but it does not need to know how these messages are generated. Indeed, given the messages of the underlying protocols, it is possible to exploit MPC techniques to enforce consistency between multiple executions of the protocols.

We proceed to ask whether a similar compiler can be constructed without having access to the transcripts of the protocols’ executions. Specifically, we investigate the question of whether there exists a protocol that securely realizes \(({\mathcal {F}} _1 \parallel \cdots \parallel {\mathcal {F}} _{\texttt {M}})\) given only black-box access to the functionalities \({\mathcal {F}} _1, \ldots , {\mathcal {F}} _{\texttt {M}}\), but not to protocols realizing them. Each \({\mathcal {F}} _i\) is modeled as an ideal functionality that receives the inputs from the parties and returns the output in a probabilistic-termination fashion. We note that this question only makes sense if the functionalities \({\mathcal {F}} _1, \ldots , {\mathcal {F}} _{\texttt {M}}\) are not fixed in advance, but belong to some class of functionalities (see [57]), since otherwise a protocol may always ignore the functionalities \({\mathcal {F}} _1, \ldots , {\mathcal {F}} _{\texttt {M}}\) and implement \(({\mathcal {F}} _1 \parallel \cdots \parallel {\mathcal {F}} _{\texttt {M}})\) from scratch. We thus consider \(\texttt {M} \) classes of functions \({\mathcal {C}} _1, \ldots , {\mathcal {C}} _{\texttt {M}}\), where the \(i\)’th ideal functionality \({\mathcal {F}} _i\) has oracle access to one of the functions in \({\mathcal {C}} _i\), and is denoted \({\mathcal {F}} _i^{{\mathcal {C}} _i}\).

When considering semi-honest corruptions, we show that there indeed exists a round-preserving protocol for parallel composition of arbitrary deterministic functionalities \({\mathcal {F}} _1^{{\mathcal {C}} _1}, \ldots , {\mathcal {F}} _{\texttt {M}}^{{\mathcal {C}} _{\texttt {M}}}\) in a functionally black-box manner. The composition follows the “batching technique” from [5, 16] of invoking each functionality \({\mathcal {F}} _i^{{\mathcal {C}} _i}\) multiple times in parallel for a constant number of rounds, such that with constant probability at least one computation in a batch will successfully produce an output. The reason this approach works is that if the adversary follows the protocol, it cannot provide different inputs to different invocations of \({\mathcal {F}} _i^{{\mathcal {C}} _i}\). Restricting attention to deterministic computations is crucial: even if more than one invocation in a batch produces an output, all invocations of a deterministic function on the same inputs yield the same value, and so the adversary effectively learns only one output value per batch. Note that using standard techniques a randomized functionality can be reduced to a deterministic one in which each party provides an additional random string \(r_i\), and the functionality uses \(r_1\oplus \cdots \oplus r_n\) as its random coins. We refer the reader to Sect. 5 for more details.
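The reduction from randomized to deterministic functionalities mentioned at the end of the paragraph can be sketched as follows (a minimal Python illustration; the toy function g and the parameter choices are ours).

```python
import secrets

def derandomize(g):
    """Wrap a randomized n-party function g(inputs, coins) into a deterministic
    function that additionally takes one random string r_i from every party and
    uses r_1 XOR ... XOR r_n as the coins."""
    def g_det(inputs, coin_shares):
        coins = 0
        for r in coin_shares:
            coins ^= r
        return g(inputs, coins)
    return g_det

# A toy randomized 3-party function: the sum of the inputs, blinded by the coins.
g = lambda inputs, coins: sum(inputs) ^ coins
g_det = derandomize(g)

# Each party contributes a uniform 32-bit string; as long as at least one
# contributor is honest, the XOR of the contributions is uniform.
coin_shares = [secrets.randbits(32) for _ in range(3)]
print(g_det([1, 2, 3], coin_shares))
```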

When considering malicious corruptions, we prove that there exist function-classes for which no “natural” protocol (that is based on any of the known techniques from the literature) can compute their parallel composition in a round-preserving manner while accessing the functions in a black-box way, even in the presence of a single adversarial party. More precisely, we consider the function class \({\mathcal {C}} \) of distributed point functions, consisting of functions \(f_\alpha (x_1,x_2,{\lambda },\ldots ,{\lambda })\) (parametrized by \(\alpha \in \{0,1\}^\kappa \), where \(\kappa \) is the security parameter and \({\lambda } \) is the empty string) that return \(\alpha \) if \(x_1\oplus x_2=\alpha \) and 0 otherwise. We show that for this class the following hold:

  1. Calling each of the ideal functionalities \({\mathcal {F}} _i^{{\mathcal {C}}}\) until termination (i.e., until all parties receive the output) is not round-preserving.

  2. It is impossible to compute the parallel composition without calling every ideal functionality (until some parties receive the output).

  3. Using the same input value in more than one call to any of the ideal functionalities breaks privacy.

We refer the reader to Sect. 6.2 for the details of the lower bound.
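For concreteness, the following is a minimal Python rendering of the distributed point-function class \({\mathcal {C}} \) described above (inputs are modeled as \(\kappa \)-bit integers, and the value of \(\kappa \) is an illustrative choice of ours).

```python
import secrets

KAPPA = 128  # security parameter (illustrative value)

def make_point_function(alpha):
    """f_alpha(x_1, x_2, lambda, ..., lambda): the first two parties contribute
    kappa-bit strings (modeled here as integers), the remaining parties the
    empty string; the output is alpha if x_1 XOR x_2 = alpha, and 0 otherwise."""
    def f_alpha(x1, x2, *rest):
        assert all(x == "" for x in rest)        # lambda = the empty string
        return alpha if x1 ^ x2 == alpha else 0
    return f_alpha

alpha = secrets.randbits(KAPPA)                  # the hidden parameter of the function
f = make_point_function(alpha)
x1 = secrets.randbits(KAPPA)
print(f(x1, x1 ^ alpha, "", ""))                 # the "hitting" pair reveals alpha
print(f(x1, secrets.randbits(KAPPA), "", ""))    # any other pair yields 0 (w.h.p.)
```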

We phrase our results using the framework for composition of protocols with probabilistic termination [16], and extend it (a side result of independent interest) to support parallel composition, reactive functionalities (in order to capture the Setup-Commit-then-Prove functionality), and the higher corruption threshold of \(t < n/2\).

Related work Most relevant to our work are results on compositional aspects of broadcast protocols with probabilistic termination. Lindell et al. [49] studied sequential composition of probabilistic-termination Byzantine agreement protocols, and Ben-Or and El-Yaniv [5] studied round-preserving parallel composition of such protocols. A simplification for parallel composition of probabilistic-termination BA protocols that are based on leader election was presented by Fitzi and Garay [29], and was used by Katz and Koo [44] for analyzing the exact round complexity of probabilistic-termination broadcast in the context of secure computation. Cohen et al. [16] studied probabilistic-termination protocols in the UC framework and constructed a composable parallel-broadcast protocol with a simulation-based proof.

Organization of the paper The rest of the paper is organized as follows. In Sect. 2 we describe the network model, the basics of the probabilistic-termination framework by Cohen et al. [16], and other tools that are used throughout the paper. In Sect. 3 we extend the framework to the honest-majority setting, and in Sect. 4 define parallel composition of PT protocols. Section 5 presents the protocol that achieves round-preserving parallel composition for arbitrary functionalities in a functionally black-box manner against semi-honest adversaries. Section 6 is dedicated to active corruptions; first, the round-preserving protocol-black-box construction is presented, followed by the negative result on round-preserving functionally black-box composition in the case of active corruptions. For ease of exposition, finer details of the model and PT framework are presented in “Appendix.”

2 Model and Preliminaries

In the following, we introduce some necessary notation and terminology. We denote by \(\kappa \) the security parameter. For \(n\in {\mathbb {N}}\), let \([n]=\{1,\cdots ,n\}\). Let \(\textsf {poly}\) denote the set of all positive polynomials, and let PPT denote a probabilistic algorithm that runs in strictly polynomial time. A function \(\nu :{\mathbb {N}} \rightarrow [0,1]\) is negligible if \(\nu (\kappa )<1/p(\kappa )\) for every \(p\in \textsf {poly}\) and large enough \(\kappa \). Given a random variable X, we write \(x\leftarrow X\) to indicate that x is selected according to X.

The statistical distance between two random variables X and Y over a finite set \({\mathord {{\mathcal {U}}}}\), denoted \(\mathsf {\textsc {SD}}(X,Y)\), is defined as \(\frac{1}{2} \cdot \sum _{u\in {\mathord {{\mathcal {U}}}}}\left| {\mathrm {Pr}}\left[ X = u\right] - {\mathrm {Pr}}\left[ Y = u\right] \right| \). Two distribution ensembles \(X=\left\{ X(\kappa )\right\} _{\kappa \in {\mathbb {N}}}\) and \(Y=\left\{ Y(\kappa )\right\} _{\kappa \in {\mathbb {N}}}\) are statistically close (denoted \(X{\mathop {\equiv }\limits ^\mathrm{{s}}}Y\)) if \(\mathsf {\textsc {SD}}(X,Y)\le \nu (\kappa )\) for a negligible function \(\nu \) .
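As a quick aid to the definition, the following is a direct Python transcription of the statistical distance over a finite support.

```python
def statistical_distance(px, py):
    """SD(X, Y) = 1/2 * sum over u of |Pr[X = u] - Pr[Y = u]|, for finite
    distributions given as dicts mapping outcomes to probabilities."""
    support = set(px) | set(py)
    return 0.5 * sum(abs(px.get(u, 0.0) - py.get(u, 0.0)) for u in support)

# Example: a fair bit versus a slightly biased bit.
print(statistical_distance({0: 0.5, 1: 0.5}, {0: 0.6, 1: 0.4}))  # 0.1 (up to float error)
```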

2.1 Synchronous Protocols in UC

We consider synchronous protocols in the model of Katz et al. [46], which is designed on top of the universal composability framework of Canetti [8]. More specifically, we consider n parties \(P _1,\ldots ,P _n\) and a computationally unbounded, adaptive t-adversary that can dynamically corrupt up to t parties during the protocol execution. Synchronous protocols in [46] are protocols that run in a hybrid model where parties have access to a simple “clock” functionality \({\mathcal {F}}_{\textsc {clock}}\). This functionality keeps an indicator bit, which is switched once all honest parties request the functionality to do so, i.e., once all honest parties have completed their operations for the current round. In addition, all communication is done over bounded-delay secure channels, where each party requests the channel to fetch messages that are sent to him, such that the adversary is allowed to delay the message delivery by a bounded and a priori known number of fetch requests. Stated differently, once the sender has sent some message, it is guaranteed that the message will be delivered within a known number of activations of the receiver. For simplicity, we assume that every message is delivered within a single fetch request. A more detailed overview of [46] can be found in “Appendix B.”

2.2 The Probabilistic-Termination Framework

Cohen et al. [16] extended the UC framework to capture protocols with probabilistic termination, i.e., protocols without a fixed output round and without simultaneous termination. This section outlines their techniques; additional details can be found in “Appendix C.”

Canonical synchronous functionalities The main idea behind modeling probabilistic termination is to separate the functionality to be computed from the round complexity that is required for the computation. The atomic building block in [16] is a functionality template called a canonical synchronous functionality (CSF), which is a simple two-round functionality with explicit (one-round) input and (one-round) output phases. The functionality \({\mathcal {F}}_{\textsc {csf}} \) has two parameters: (1) a (possibly) randomized function f that receives \(n+1\) inputs (n inputs from the parties and one additional input from the adversary) and (2) a leakage function \(l\) that determines what information about the input values is leaked to the adversary.

The functionality \({\mathcal {F}}_{\textsc {csf}} \) proceeds in two rounds: In the first (input) round, all the parties hand \({\mathcal {F}}_{\textsc {csf}} \) their input values, and in the second (output) round, each party receives its output. Whenever some input is submitted to \({\mathcal {F}}_{\textsc {csf}} \), the adversary is handed some leakage function of this input; the adversary can use this leakage for deciding which parties to corrupt and which input values to use for corrupted parties. Additionally, the adversary is allowed to input an extra message, which—depending on the function f—might affect the output(s). The detailed description of \({\mathcal {F}}_{\textsc {csf}} \) is given in Fig. 7 in “Appendix C.1.” As a side contribution, in Definition C.1, we extend the definition of CSF to the reactive setting.

Wrappers and traces Computation with probabilistic termination is captured by defining output-round randomizing wrappers. Such wrappers address the issue that while an ideal functionality abstractly describes a protocol’s task, it does not describe its round complexity. Each wrapper is parametrized by a distribution (more precisely, an efficient probabilistic sampling algorithm) D that may depend on a specific protocol implementing the functionality. The wrapper samples a round \(\rho _\mathsf {term}\leftarrow D\), by which all parties are guaranteed to receive their outputs. Two wrappers are considered: The first, denoted \({\mathcal {W}}_{\mathrm {strict}} \), ensures in a strict manner that all (honest) parties terminate together in round \(\rho _\mathsf {term}\); the second, denoted \({\mathcal {W}}_{\mathrm {flex}} \), is more flexible and allows the adversary to deliver outputs to individual parties at any time before round \(\rho _\mathsf {term}\). The detailed descriptions of the two wrappers can be found in “Appendix C.4.”

As pointed out in [16], it is not sufficient to inform the simulator \({\mathcal {S}} \) about the round \(\rho _\mathsf {term}\). In many cases, the wrapper should explain to \({\mathcal {S}} \) how this round was sampled; concretely, the wrapper provides \({\mathcal {S}} \) with the random coins that are used to sample \(\rho _\mathsf {term}\). In particular, \({\mathcal {S}} \) learns the entire trace of calls to ideal functionalities that are made by the protocol in order to complete by round \(\rho _\mathsf {term}\). A trace basically records which hybrids were called by a protocol’s execution, and in a recursive way, for each hybrid, which hybrids would have been called by a protocol realizing that hybrid. The recursion ends when the base case is reached, i.e., when the protocol is defined using the atomic functionalities that are “assumed” by the model. We refer the reader to “Appendix C.3” for further intuition and an illustrative example, and formally define a trace as follows:

Definition 2.1

(Traces) A trace is a rooted tree of depth at least 1, in which all nodes are labeled by functionalities and where every node’s children are ordered. The root and all internal nodes are labeled by wrapped CSFs (by either of the two wrappers), and the leaves are labeled by unwrapped CSFs. The trace complexity of a trace \(T \), denoted \(c_{\mathsf {tr}} (T)\), is the number of leaves in \(T \). Moreover, denote by \({\mathsf {flex_{tr}}} (T)\) the number of nodes labeled by flexibly wrapped CSFs in \(T \).

In this work, we consider an augmented definition of traces, which allows parallel calls to ideal functionalities in the same round. Specifically, a trace is augmented with another (potentially empty) layer of nodes, such that each leaf, in the original definition of a trace, may have a list of (unordered) children. The trace complexity is defined as in Definition 2.1, as the number of original leaves (before augmenting with unordered node-lists). We note that the composition theorems (below) trivially extend to this augmented definition of a trace. To simplify notation, we denote by \(\left[ {\mathcal {F}}_{\textsc {}} {}_1, \ldots , {\mathcal {F}}_{\textsc {}} {}_m\right] \) the node with unordered list of children \({\mathcal {F}}_{\textsc {}} {}_1, \ldots , {\mathcal {F}}_{\textsc {}} {}_m\) (modeling a parallel call to these functionalities). We will also denote by

$$\begin{aligned} \left( \left( \left[ \left[ {\mathcal {F}}_{\textsc {}} {}_1\right] ^k, \ldots , \left[ {\mathcal {F}}_{\textsc {}} {}_m\right] ^k\right] \right) ^l\right) \end{aligned}$$

a trace consisting of an (ordered) sequence of l leaves, each augmented with an (unordered) list containing k nodes labeled by each \({\mathcal {F}}_{\textsc {}} {}_i\). This corresponds to l sequential calls to k parallel instances of each \({\mathcal {F}}_{\textsc {}} {}_i\).
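A minimal Python sketch of this (augmented) trace structure and of the two complexity measures follows; the class and field names are ours, purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TraceNode:
    label: str                                                  # the CSF labeling this node
    wrapper: str = "none"                                       # "strict", "flex", or "none" (unwrapped)
    children: List["TraceNode"] = field(default_factory=list)   # ordered children
    parallel: List["TraceNode"] = field(default_factory=list)   # unordered list attached to a leaf
                                                                # (augmented traces only)

def c_tr(node):
    """Trace complexity: the number of original leaves (nodes without ordered
    children); nodes in the unordered `parallel` lists are not counted."""
    return 1 if not node.children else sum(c_tr(ch) for ch in node.children)

def flex_tr(node):
    """The number of nodes labeled by flexibly wrapped CSFs."""
    return (node.wrapper == "flex") + sum(flex_tr(ch) for ch in node.children)

# The trace (([F_1]^k, ..., [F_m]^k))^l for l = 2, m = 2, k = 3: two sequential
# leaves, each carrying an unordered list of 3 parallel instances of F_1 and F_2.
def batch_leaf(m, k):
    return TraceNode("F_par", parallel=[TraceNode(f"F_{i}") for i in range(1, m + 1) for _ in range(k)])

root = TraceNode("F", wrapper="flex", children=[batch_leaf(m=2, k=3) for _ in range(2)])
print(c_tr(root), flex_tr(root))   # 2 1
```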

Sequential composition of probabilistic-termination protocols When a set of parties execute a probabilistic-termination protocol, or equivalently, invoke a flexibly wrapped CSF, they might get out of sync and start the next protocol in different rounds. The approach in [16] for dealing with sequential composition is to start by designing simpler protocols that are in a so-called synchronous normal form, where the parties remain in sync throughout the execution, and then to compile these protocols into slack-tolerant protocols.

Definition 2.2

(Synchronous normal form) Let \({\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m\) be canonical synchronous functionalities. A synchronous protocol \(\pi \) in the \(({\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m)\)-hybrid model is in synchronous normal form (SNF) if in every round exactly one ideal functionality \({\mathcal {F}}_{\textsc {}} {}_i\) is invoked by all honest parties, and in addition, no honest party hands inputs to other CSFs before this instance halts.

SNF protocols are designed as an intermediate step only, since the hybrid functionalities \({\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m\) are two-round CSFs and, in general, cannot be realized by real-world protocols. In order to obtain protocols that can be realized in the real world, [16] introduced slack-tolerant variants of both the strict and the flexible wrappers, denoted \({\mathcal {W}}_{\mathrm {sl\text {-}strict}} \) and \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} \). These wrappers are parametrized by a slack parameter \(c\ge 0\) and can be used even if parties provide inputs within \(c+1\) consecutive rounds (i.e., they tolerate input slack of c rounds); furthermore, the wrappers ensure that all honest parties obtain output within two consecutive rounds (i.e., they reduce the slack to \(c=1\)). The detailed definitions of the slack-tolerant wrappers are given in “Appendix C.5.” In order to convert SNF protocols into protocols that realize functionalities with slack tolerance, [16] constructed a deterministic-termination compiler \(\mathsf {Comp}_{\textsc {dt}} \), a probabilistic-termination compiler \(\mathsf {Comp}_{\textsc {pt}} \), and a probabilistic-termination with slack-reduction compiler \(\mathsf {Comp}_{\textsc {ptr}} \). Loosely speaking, the composition theorems provide the following guarantees:

  1. If an SNF protocol \(\pi \) realizes a wrapped CSF \({\mathcal {W}}_{\mathrm {strict}} ^D({\mathcal {F}}_{\textsc {}})\) in the \(({\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m)\)-hybrid model, then \(\mathsf {Comp}_{\textsc {dt}} ^c(\pi )\) realizes \({\mathcal {W}}_{\mathrm {sl\text {-}strict}} ^{D',c}({\mathcal {F}}_{\textsc {}})\) in the \(({\mathcal {W}}_{\mathrm {sl\text {-}strict}} ^{D_1,c}({\mathcal {F}}_{\textsc {}} {}_1),\ldots ,{\mathcal {W}}_{\mathrm {sl\text {-}strict}} ^{D_m,c}({\mathcal {F}}_{\textsc {}} {}_m))\)-hybrid model (where \(D'\) is defined using \(D,D_1,\ldots ,D_m\)).

  2. If an SNF protocol \(\pi \) realizes a wrapped CSF \({\mathcal {W}}_{\mathrm {strict}} ^D({\mathcal {F}}_{\textsc {}})\) in the \(({\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m)\)-hybrid model, then \(\mathsf {Comp}_{\textsc {pt}} ^c(\pi )\) realizes \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D',c}({\mathcal {F}}_{\textsc {}})\) in the \(({\mathcal {W}}_{\mathrm {sl}} ^{D_1,c}({\mathcal {F}}_{\textsc {}} {}_1),\ldots ,{\mathcal {W}}_{\mathrm {sl}} ^{D_m,c}({\mathcal {F}}_{\textsc {}} {}_m))\)-hybrid model (where \({\mathcal {W}}_{\mathrm {sl}} ^{D_i,c}({\mathcal {F}}_{\textsc {}} {}_i)\) is \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D_i,c}({\mathcal {F}}_{\textsc {}} {}_i)\) if \(i\in I\) and \({\mathcal {W}}_{\mathrm {sl\text {-}strict}} ^{D_i,c}({\mathcal {F}}_{\textsc {}} {}_i)\) if \(i\notin I\), given a subset \(I\subseteq [m]\) of indices of functionalities to be wrapped using the flexible wrapper).

  3. If an SNF protocol \(\pi \) realizes a wrapped CSF \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {}})\) in the \(({\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m)\)-hybrid model, then \(\mathsf {Comp}_{\textsc {ptr}} ^c(\pi )\) realizes \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D',c}({\mathcal {F}}_{\textsc {}})\) in the \(({\mathcal {W}}_{\mathrm {sl}} ^{D_1,c}({\mathcal {F}}_{\textsc {}} {}_1),\ldots ,{\mathcal {W}}_{\mathrm {sl}} ^{D_m,c}({\mathcal {F}}_{\textsc {}} {}_m))\)-hybrid model (where \({\mathcal {W}}_{\mathrm {sl}} ^{D_i,c}({\mathcal {F}}_{\textsc {}} {}_i)\) is \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D_i,c}({\mathcal {F}}_{\textsc {}} {}_i)\) if \(i\in I\) and \({\mathcal {W}}_{\mathrm {sl\text {-}strict}} ^{D_i,c}({\mathcal {F}}_{\textsc {}} {}_i)\) if \(i\notin I\), for a subset I as above).

The compilers maintain the security and the asymptotic (expected) round complexity of the original SNF protocols. At the same time, the compilers take care of any potential slack that is introduced by the protocol and ensure that the resulting protocol can be safely executed even if the parties do not start the protocol simultaneously. More precise descriptions of the compilers can be found in “Appendix C.6.” As a side contribution, we extend this framework to the honest-majority setting in “Appendix C.”

Finally, in [16], the authors also provided protocols for realizing wrapped variants of the atomic CSF functionality for secure point-to-point communication. This suggested the following design paradigm for realizing a wrapped functionality \({\mathcal {W}}_{\mathrm {sl\text {-}strict}} ({\mathcal {F}}_{\textsc {}})\) (resp., \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ({\mathcal {F}}_{\textsc {}})\)): First, construct an SNF protocol for realizing \({\mathcal {W}}_{\mathrm {strict}} ({\mathcal {F}}_{\textsc {}})\) (resp., \({\mathcal {W}}_{\mathrm {flex}} ({\mathcal {F}}_{\textsc {}})\)) using CSF hybrids \({\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m\). Next, for each of the non-atomic hybrids \({\mathcal {F}}_{\textsc {}} {}_i\), show how to realize \({\mathcal {W}}_{\mathrm {strict}} ({\mathcal {F}}_{\textsc {}} {}_i)\) (resp., \({\mathcal {W}}_{\mathrm {flex}} ({\mathcal {F}}_{\textsc {}} {}_i)\)) using CSF hybrids \({\mathcal {F}}_{\textsc {}} {}_1',\ldots ,{\mathcal {F}}_{\textsc {}} {}_{m'}'\). Proceed in this manner until all CSF hybrids are atomic functionalities. Finally, repeated applications of the composition theorems above yield a protocol for \({\mathcal {W}}_{\mathrm {sl\text {-}strict}} ({\mathcal {F}}_{\textsc {}})\) (resp., \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ({\mathcal {F}}_{\textsc {}})\)) using only atomic functionalities as hybrids.

2.3 A Lemma on Termination Probabilities

The following lemma, which will be used in our positive results, provides a constant lower bound on the probability that when running simultaneously (i.e., in parallel) \(\texttt {N} \) copies of \(\texttt {M} \) probabilistic-termination protocols \(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_{\texttt {M}}\), at least one copy of each \(\pi _{\textsc {}} {}_i\) will complete within \(\texttt {R} \) rounds, for suitable choices of \(\texttt {N} \) and \(\texttt {R} \).

Lemma 2.3

Let \(\texttt {M},\texttt {N},\texttt {R} \in {\mathbb {N}} \). For \(i \in [\texttt {M} ]\) and \(j\in [\texttt {N} ]\), let \(X_{ij}\) be independent random variables over the natural numbers, such that \(X_{i1},\ldots ,X_{i\texttt {N}}\) are identically distributed with expectation \(\mu _i\). Denote \(Y_i = \min \{X_{i1},\ldots ,X_{i\texttt {N}}\}\) and \(\mu =\max \{\mu _1,\ldots ,\mu _\texttt {M} \}\). Then, for any constant \(0<\epsilon <1\), if \(\texttt {R} > \mu \) and \(\texttt {N} > \frac{\log (\texttt {M}/\epsilon )}{\log (\texttt {R}/\mu )}\), it holds that \({\mathrm {Pr}}\left[ \forall i: Y_i < \texttt {R} \right] \ge \ 1-\epsilon \).

Proof

First, notice that

$$\begin{aligned} {\mathrm {Pr}}[\exists i :Y_i \ge \texttt {R} ]&\le \ \sum _{i\in [\texttt {M} ]} {\mathrm {Pr}}[Y_i \ge \texttt {R} ] \\&\le \ \sum _{i\in [\texttt {M} ]} \left( \prod _{j\in [\texttt {N} ]} {\mathrm {Pr}}[X_{ij} \ge \texttt {R} ]\right) \\&{\mathop {\le }\limits ^{(*)}} \ \sum _{i\in [\texttt {M} ]} \left( \prod _{j\in [\texttt {N} ]} \left( \frac{\mu _i}{\texttt {R}}\right) \right) \\&\le \ \sum _{i\in [\texttt {M} ]} \left( \prod _{j\in [\texttt {N} ]} \left( \frac{\mu }{\texttt {R}}\right) \right) \\&= \ \texttt {M} \cdot \left( \frac{\mu }{\texttt {R}}\right) ^\texttt {N}, \end{aligned}$$

where \((*)\) follows from Markov’s inequality. Therefore,

$$\begin{aligned} {\mathrm {Pr}}[\forall i: Y_i < \texttt {R} ] \ =\ 1 - {\mathrm {Pr}}[\exists i :Y_i \ge \texttt {R} ] \ \ge \ 1 - \texttt {M} \cdot \left( \frac{\mu }{\texttt {R}}\right) ^\texttt {N}. \end{aligned}$$

Finally, since \(\texttt {R} > \mu \) it holds that \(1> \mu /\texttt {R} \). Therefore, for constant \(0< \epsilon < 1\), by setting

$$\begin{aligned} \texttt {N} > \frac{\log (\texttt {M}/\epsilon )}{\log (\texttt {R}/\mu )}, \end{aligned}$$

it holds that

$$\begin{aligned} {\mathrm {Pr}}[\forall i: Y_i < \texttt {R} ] \ \ge \ 1 - \texttt {M} \cdot \left( \frac{\mu }{\texttt {R}}\right) ^\texttt {N}\ \ge \ 1-\epsilon . \end{aligned}$$

\(\square \)
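A quick numerical sanity check of the lemma in Python, instantiating each \(X_{ij}\) with a geometrically distributed termination round (an illustrative choice of ours; the lemma itself only uses Markov's inequality and independence).

```python
import math, random

def geometric(p):
    """Number of independent Bernoulli(p) trials until the first success."""
    rounds = 1
    while random.random() > p:
        rounds += 1
    return rounds

M, p, eps = 10, 0.5, 0.1
mu = 1 / p                                                  # E[X_ij] for every i, j
R = 4 * mu                                                  # any R > mu will do
N = math.floor(math.log(M / eps) / math.log(R / mu)) + 1    # smallest integer above the bound

trials = 20000
ok = sum(
    all(min(geometric(p) for _ in range(N)) < R for _ in range(M))
    for _ in range(trials)
)
print(f"N = {N}, empirical Pr[forall i: Y_i < R] = {ok / trials:.3f} (lemma guarantees >= {1 - eps})")
```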

3 Probabilistic Termination with an Honest Majority

In this section, we extend the probabilistic-termination framework [16] (that was defined for the 2/3-majority setting) to the honest-majority regime.

3.1 Fast Sequential Composition

The composition theorems from [16] are defined for \(t<n/3\) (as they focused on perfect security). When moving to the honest-majority setting, i.e., \(t<n/2\), the compilers and composition theorems follow in a straightforward way. The main difference lies in the usage of the Bracha-termination technique (specifically, in Theorem 3.2), where the “termination messages” in the compiled protocol \(\pi '=\mathsf {Comp}_{\textsc {ptr}} ^c(\pi ,D_1,\ldots ,D_m,I)\) must be authenticated. Therefore, there is an additional hybrid functionality that is required for generating correlated randomness to be used for authenticating messages. If information-theoretic security is required, correlated randomness for information-theoretic signatures [54] can be used, whereas if computational security suffices, a public-key infrastructure (PKI) can be used.

  • Correlated Randomness. The correlated-randomness functionality, parametrized by a distribution D, is defined as follows. The function to compute is \(f_{\textsc {corr}} (\lambda ,\ldots ,\lambda ,a)=(R_1,\ldots ,R_n)\), where \((R_1,\ldots ,R_n)\leftarrow D\), and the leakage function is \(l_{\textsc {corr}} (\lambda ,\ldots ,\lambda )=\bot \). We denote by \({\mathcal {F}}_{\textsc {corr}} ^D\) the functionality \({\mathcal {F}}_{\textsc {csf}} \) when parametrized with the above functions \(f_{\textsc {corr}} \) and \(l_{\textsc {corr}} \). We denote by \({\mathcal {F}}_{\textsc {corr-bc}} \) the functionality \({\mathcal {F}}_{\textsc {corr}} ^{D_{\textsc {bc}}}\), where \(D_{\textsc {bc}} \) is the distribution of the correlated randomness needed for information-theoretic broadcast [54], and by \({\mathcal {F}}_{\textsc {pki}} \) the functionality \({\mathcal {F}}_{\textsc {corr}} ^{D_{\textsc {pki}}}\), where \(D_{\textsc {pki}} \) is the distribution of the correlated randomness needed for a public-key infrastructure using standard digital signatures.
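To make \(D_{\textsc {pki}} \) concrete, the following Python sketch samples such correlated randomness; the choice of Ed25519 (via the pyca/cryptography package) is ours and not dictated by the construction, and any standard EUF-CMA-secure signature scheme would do.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def sample_D_pki(n):
    """Sample (R_1, ..., R_n) from D_pki: R_i consists of party i's signing key
    together with the list of all parties' verification keys."""
    signing_keys = [Ed25519PrivateKey.generate() for _ in range(n)]
    verification_keys = [
        sk.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw) for sk in signing_keys
    ]
    return [(sk, verification_keys) for sk in signing_keys]

# Party 1 authenticates a termination message; party 2 verifies it against the PKI.
R = sample_D_pki(n=4)
sk_1, vks = R[0]
msg = b"terminating"
sig = sk_1.sign(msg)
Ed25519PublicKey.from_public_bytes(vks[0]).verify(sig, msg)   # raises on forgery
```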

We state without proof the composition theorems for the honest-majority setting. The proofs follow along the same lines as in [16]. See “Appendix C.6” for additional details, including the definition of \(\mathsf {full}\text {-}\mathsf {trace}\).

Theorem 3.1

Let \({\mathcal {F}}_{\textsc {}}, {\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m\) be canonical synchronous functionalities, let \(t<n/2\), and let \(\pi \) be an SNF protocol that UC-realizes \({\mathcal {W}}_{\mathrm {strict}} ^D({\mathcal {F}}_{\textsc {}})\), with information-theoretic (resp., computational) security, in the \(({\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m)\)-hybrid model, for some depth-1 distribution D, in the presence of an adaptive, malicious t-adversary, and assuming that all honest parties receive their inputs at the same round. Let \(D_1,\ldots ,D_m\) be arbitrary distributions over traces, let \({D^{\mathsf {full}}}= \mathsf {full}\text {-}\mathsf {trace}(D,D_1,\ldots ,D_m)\), and let \(c\ge 0\).

Then, protocol \(\pi '=\mathsf {Comp}_{\textsc {dt}} ^c(\pi ,D_1,\ldots ,D_m)\) UC-realizes \({\mathcal {W}}_{\mathrm {sl\text {-}strict}} ^{{D^{\mathsf {full}}},c} ({\mathcal {F}}_{\textsc {}})\), with information-theoretic (resp., computational) security, in the \(({\mathcal {W}}_{\mathrm {sl\text {-}strict}} ^{D_1,c}({\mathcal {F}}_{\textsc {}} {}_1),\ldots ,{\mathcal {W}}_{\mathrm {sl\text {-}strict}} ^{D_m,c}({\mathcal {F}}_{\textsc {}} {}_m))\)-hybrid model, in the presence of an adaptive, malicious t-adversary, assuming that all honest parties receive their inputs within \(c+1\) consecutive rounds.

Furthermore, the expected round complexity of the compiled protocol \(\pi '\) is

$$\begin{aligned} B_{c} \cdot \sum _{i\in [m]} d_i \cdot E[c_{\mathsf {tr}} (T _i)], \end{aligned}$$

where \(d_i\) is the expected number of calls in \(\pi \) to hybrid \({\mathcal {F}}_{\textsc {}} {}_i\), \(T _i\) is a trace sampled from \(D_i\), and \(B_{c} = 3c+1\) is the blow-up factor.

Theorem 3.2

Let \({\mathcal {F}}_{\textsc {}}, {\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m\) be canonical synchronous functionalities, let \(t<n/2\), and let \(\pi \) be an SNF protocol that UC-realizes \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {}})\), with information-theoretic (resp., computational) security, in the \(({\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m)\)-hybrid model, for some depth-1 distribution D, in the presence of an adaptive, malicious t-adversary, and assuming that all honest parties receive their inputs at the same round. Let \(I\subseteq [m]\) be the subset (of indices) of functionalities to be wrapped using the flexible wrapper, let \(D_1,\ldots ,D_m\) be arbitrary distributions over traces, denote \({D^{\mathsf {full}}}= \mathsf {full}\text {-}\mathsf {trace}(D,D_1,\ldots ,D_m)\), and let \(c\ge 0\). Assume that \({\mathcal {F}}_{\textsc {}} \) and \({\mathcal {F}}_{\textsc {}} {}_i\), for every \(i\in I\), are public-output functionalities.

Then, the compiled protocol \(\pi '=\mathsf {Comp}_{\textsc {ptr}} ^c(\pi ,D_1,\ldots ,D_m,I)\) UC-realizes \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{{D^{\mathsf {full}}},c} ({\mathcal {F}}_{\textsc {}})\), with information-theoretic (resp., computational) security, in the \(({\mathcal {F}}_{\textsc {corr-bc}},{\mathcal {W}} ({\mathcal {F}}_{\textsc {}} {}_1),\ldots ,{\mathcal {W}} ({\mathcal {F}}_{\textsc {}} {}_m))\)-hybrid model, where \({\mathcal {W}} ({\mathcal {F}}_{\textsc {}} {}_i)={\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D_i,c}({\mathcal {F}}_{\textsc {}} {}_i)\) if \(i\in I\) and \({\mathcal {W}} ({\mathcal {F}}_{\textsc {}} {}_i)={\mathcal {W}}_{\mathrm {sl\text {-}strict}} ^{D_i,c}({\mathcal {F}}_{\textsc {}} {}_i)\) if \(i\notin I\), in the presence of an adaptive, malicious t-adversary, assuming that all honest parties receive their inputs within \(c+1\) consecutive rounds.

Furthermore, the expected round complexity of the compiled protocol \(\pi '\) is

$$\begin{aligned} B_{c} \cdot \sum _{i\in [m]} d_i \cdot E[c_{\mathsf {tr}} (T _i)] + 2 \cdot \sum _{i\in [m]} d_i \cdot E[{\mathsf {flex_{tr}}} (T _i)] + 2, \end{aligned}$$

where \(d_i\) is the expected number of calls in \(\pi \) to hybrid \({\mathcal {F}}_{\textsc {}} {}_i\), \(T _i\) is a trace sampled from \(D_i\), and \(B_{c} = 3c+1\) is the blow-up factor.

Theorem 3.3

Let \({\mathcal {F}}_{\textsc {}}, {\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m\) be canonical synchronous functionalities, let \(t<n/2\), and let \(\pi \) be an SNF protocol that UC-realizes \({\mathcal {W}}_{\mathrm {strict}} ^D({\mathcal {F}}_{\textsc {}})\), with information-theoretic (resp., computational) security, in the \(({\mathcal {F}}_{\textsc {}} {}_1,\ldots ,{\mathcal {F}}_{\textsc {}} {}_m)\)-hybrid model, for some depth-1 distribution D, in the presence of an adaptive, malicious t-adversary, and assuming that all honest parties receive their inputs at the same round. Let \(I\subseteq [m]\) be the subset (of indices) of functionalities to be wrapped using the flexible wrapper, let \(D_1,\ldots ,D_m\) be arbitrary distributions over traces, denote \({D^{\mathsf {full}}}= \mathsf {full}\text {-}\mathsf {trace}(D,D_1,\ldots ,D_m)\), and let \(c\ge 0\). Assume that \({\mathcal {F}}_{\textsc {}} {}\) and \({\mathcal {F}}_{\textsc {}} {}_i\), for every \(i\in I\), are public-output functionalities.

Then, the compiled protocol \(\pi '=\mathsf {Comp}_{\textsc {pt}} ^c(\pi ,D_1,\ldots ,D_m,I)\) UC-realizes \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{{D^{\mathsf {full}}},c} ({\mathcal {F}}_{\textsc {}})\), with information-theoretic (resp., computational) security, in the \(({\mathcal {W}} ({\mathcal {F}}_{\textsc {}} {}_1),\ldots ,{\mathcal {W}} ({\mathcal {F}}_{\textsc {}} {}_m))\)-hybrid model, where \({\mathcal {W}} ({\mathcal {F}}_{\textsc {}} {}_i)={\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D_i,c}({\mathcal {F}}_{\textsc {}} {}_i)\) if \(i\in I\) and \({\mathcal {W}} ({\mathcal {F}}_{\textsc {}} {}_i)={\mathcal {W}}_{\mathrm {sl\text {-}strict}} ^{D_i,c}({\mathcal {F}}_{\textsc {}} {}_i)\) if \(i\notin I\), in the presence of an adaptive, malicious t-adversary, assuming that all honest parties receive their inputs within \(c+1\) consecutive rounds.

Furthermore, the expected round complexity of the compiled protocol \(\pi '\) is

$$\begin{aligned} B_{c} \cdot \sum _{i\in [m]} d_i \cdot E[c_{\mathsf {tr}} (T _i)] + 2 \cdot \sum _{i\in [m]} d_i \cdot E[{\mathsf {flex_{tr}}} (T _i)], \end{aligned}$$

where \(d_i\) is the expected number of calls in \(\pi \) to hybrid \({\mathcal {F}}_{\textsc {}} {}_i\), \(T _i\) is a trace sampled from \(D_i\), and \(B_{c} = 3c+1\) is the blow-up factor.

3.2 Fast Parallel Broadcast

Cohen et al. [16], based on Hirt and Zikas [39], defined the unfair parallel-broadcast functionality, in which the functionality informs the adversary which messages it received, and allows the adversary, based on this information, to corrupt senders and replace their input messages.

  • Unfair Parallel Broadcast. In the unfair parallel-broadcast functionality, each party \(P _i\) with input \(x_i\) distributes its input to all the parties. The adversary is allowed to learn the content of each input value from the leakage function (and so it can corrupt parties and change their messages prior to their distribution, based on this information). The function to compute is \(f_{\textsc {upbc}} (x_1,\ldots ,x_n,a)=((x_1,\ldots ,x_n),\ldots ,(x_1,\ldots ,x_n))\) and the leakage function is \(l_{\textsc {upbc}} (x_1,\ldots ,x_n)=(x_1,\ldots ,x_n)\). We denote by \({\mathcal {F}}_{\textsc {upbc}} \) the functionality \({\mathcal {F}}_{\textsc {csf}} \) when parametrized with the above functions \(f_{\textsc {upbc}} \) and \(l_{\textsc {upbc}} \).

The protocol of Katz and Koo [45] realizes (a wrapped version of) \({\mathcal {F}}_{\textsc {upbc}} \) when the parties have a correlated-randomness setup. This yields the following result.

Theorem 3.4

Let \(c\ge 0\) and \(t<n/2\). There exists an efficiently sampleable distribution D such that the functionality \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D,c}({\mathcal {F}}_{\textsc {upbc}})\) has an expected-constant-round complexity, and can be UC-realized in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {F}}_{\textsc {corr-bc}})\)-hybrid model, with information-theoretic security, in the presence of an adaptive, malicious t-adversary, assuming that all honest parties receive their inputs within \(c+1\) consecutive rounds.

The parallel-broadcast functionality is similar to the unfair version, except that the adversary cannot corrupt parties based on the messages they send.

  • Parallel Broadcast. In the parallel-broadcast functionality, each party \(P _i\) with input \(x_i\) distributes its input to all the parties. Unlike the unfair version, the adversary only learns the length of the honest parties’ messages before their distribution, i.e., the leakage function is \(l_{\textsc {pbc}} (x_1,\ldots ,x_n)=(\left| x_1\right| ,\ldots ,\left| x_n\right| )\). It follows that the adversary cannot use the leaked information in a meaningful way when deciding which parties to corrupt. The function to compute is identical to the unfair version, i.e., \(f_{\textsc {pbc}} (x_1,\ldots ,x_n,a)=((x_1,\ldots ,x_n),\ldots ,(x_1,\ldots ,x_n))\). We denote by \({\mathcal {F}}_{\textsc {pbc}} \) the functionality \({\mathcal {F}}_{\textsc {csf}} \) when parametrized with the above functions \(f_{\textsc {pbc}} \) and \(l_{\textsc {pbc}} \).
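The only difference between the two functionalities is the leakage function; in code (a direct Python transcription of the definitions above, with inputs modeled as byte strings):

```python
def f_pbc(inputs, a):
    """f_upbc and f_pbc are identical: every party learns the full vector of
    inputs (the adversary's extra input a is ignored)."""
    return [tuple(inputs)] * len(inputs)

def l_upbc(inputs):
    """Unfair variant: the actual input values are leaked before delivery."""
    return tuple(inputs)

def l_pbc(inputs):
    """Fair variant: only the lengths of the inputs are leaked."""
    return tuple(len(x) for x in inputs)

xs = [b"hello", b"", b"broadcast"]
print(l_upbc(xs))   # (b'hello', b'', b'broadcast')
print(l_pbc(xs))    # (5, 0, 9)
```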

We next show how to realize the parallel-broadcast functionality \({\mathcal {F}}_{\textsc {pbc}} \) in the \({\mathcal {F}}_{\textsc {upbc}} \)-hybrid model, in the honest-majority setting. The construction follows [16], where the only difference is that for \(t<n/2\), perfectly correct error-correcting secret sharing (see Definition A.1) cannot be achieved, and a negligible error probability is introduced. We describe this protocol, denoted \(\pi _{\textsc {pbc}} \), in Fig. 1.

Fig. 1. The parallel-broadcast protocol, in the \(({\mathcal {F}}_{\textsc {psmt}},{\mathcal {F}}_{\textsc {upbc}})\)-hybrid model

Theorem 3.5

Let \(c\ge 0\) and \(t < n/2\). There exists an efficiently sampleable distribution D such that the functionality \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D,c}({\mathcal {F}}_{\textsc {pbc}})\) has an expected-constant-round complexity, and can be UC-realized in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {F}}_{\textsc {corr-bc}})\)-hybrid model, with information-theoretic security, in the presence of an adaptive malicious t-adversary, assuming that all honest parties receive their inputs within \(c+1\) consecutive rounds.

The proof of the theorem follows along the same lines as the proof of [16, Theorem 5.6].

3.3 Fast SFE in the Point-to-Point Model

We conclude this section by showing how to construct a UC-secure SFE protocol that computes a given circuit in expected O(d) rounds, where d is the multiplicative depth of the circuit, independently of the number of parties, in the point-to-point channels model. The protocol is based on the protocol of Cramer et al. [21], denoted \(\pi _{\textsc {sfe}} \), which relies on (parallel) broadcast and (parallel) point-to-point channels, and can therefore be described in the \(({\mathcal {F}}_{\textsc {psmt}},{\mathcal {F}}_{\textsc {pbc}})\)-hybrid model.

Theorem 3.6

Let f be an n-party function, let C be an arithmetic circuit computing f with multiplicative depth d, let \(c\ge 0\), and let \(t < n/2\). Then, there exists an efficiently sampleable distribution D such that the functionality \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D,c}({\mathcal {F}}_{\textsc {sfe}} ^f)\) has round complexity O(d) in expectation, and can be UC-realized in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {F}}_{\textsc {corr-bc}})\)-hybrid model, with information-theoretic security, in the presence of an adaptive, malicious t-adversary, assuming that all honest parties receive their inputs within \(c+1\) consecutive rounds.

Using an adaptively secure constant-round MPC in the broadcast model, such as the protocol of Damgård and Ishai [23] (or the two-round protocol that exists under stronger assumptions [20]), we obtain expected-constant-round MPC over point-to-point channels.

Theorem 3.7

Let f be an n-party function, let \(c\ge 0\), let \(t < n/2\), and assume that one-way functions exist. Then, there exists an efficiently sampleable distribution D such that the functionality \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D,c}({\mathcal {F}}_{\textsc {sfe}} ^f)\) has round complexity O(1) in expectation, and can be UC-realized in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {F}}_{\textsc {pki}})\)-hybrid model, with computational security, in the presence of an adaptive, malicious t-adversary, assuming that all honest parties receive their inputs within \(c+1\) consecutive rounds.

4 Functionally Black-Box Protocols and Parallel Composition

In this section, we extend the probabilistic-termination framework [16] to capture the notions of functionally black-box protocols [57] and of parallel composition of canonical synchronous functionalities.

Functionally black-box protocols Most MPC protocols in the literature explicitly require a representation of the function to be computed, usually in the form of a Boolean circuit, an arithmetic circuit, or a RAM program. This representation is a common parameter to the protocol and is known to all of the parties. Rosulek [57] asked whether this is inherent, or whether it is possible to securely compute a function without “knowing its code.” Specifically, Rosulek [57] defined functionally black-box (FBB) protocols, in which the parties compute a function from within some function class, where each party is given only local oracle access to the function (i.e., the party can query the oracle to learn the evaluation of the function on inputs of its choice). A feasibility result for semi-honest two-party FBB protocols was given in [57] for a class of functions related to blind signatures, as well as an impossibility result (that was strengthened in [43]) ruling out generic FBB protocols, also in the semi-honest case.

Looking ahead, in Sects. 5 and 6.2 we consider parallel composition of FBB computations. That is, given public function-classes \({\mathcal {C}} _1,\ldots ,{\mathcal {C}} _\texttt {M} \) and (hidden) functions \(g_i\in {\mathcal {C}} _i\), each party is given local oracle access to \(g_1,\ldots ,g_\texttt {M} \) (as in [57]), but in addition the parties can globally use \(\texttt {M} \) ideal functionalities to jointly compute each \(g_i\); every ideal functionality is parametrized by a function class \({\mathcal {C}} _i\) and is given oracle access to the corresponding \(g_i\). Because the parties can use an ideal functionality to compute each \(g_i\), the impossibility results from [43, 57] do not apply, and indeed, in Sect. 5 we show that in the semi-honest setting it is possible to compose FBB computations in parallel in a round-preserving manner. In Sect. 6.2 we extend the technique from [43] to rule out round-preserving FBB parallel composition in the malicious setting.

We formalize the notion of functionally black-box protocols of Rosulek [57] in the language of canonical synchronous functionalities. As in [57], we focus on secure function evaluation. The SFE functionality \({\mathcal {F}}_{\textsc {sfe}} ^g\) (see Sect. C.1), parametrized by an n-party function g, is defined as the CSF \({\mathcal {F}}_{\textsc {csf}} ^{f_{\textsc {sfe}},l_{\textsc {sfe}}}\), where \(f_{\textsc {sfe}} (x_1,\ldots , x_n,a)=g(x_1,\ldots , x_n)\) (i.e., computes the function g while ignoring the adversary’s input a) and the leakage function is \(l_{\textsc {sfe}} (x_1,\ldots , x_n)=(|x_1|,\ldots , |x_n|)\). The following definition explains what we mean by a protocol that realizes the secure function evaluation functionality in a black-box way with respect to the function g.

Definition 4.1

Let \({\mathcal {C}} =\{g:(\{0,1\}^*)^n\rightarrow (\{0,1\}^*)^n\}\) be a class of n-party functions. Denote by \({\mathcal {F}}_{\textsc {sfe}} ^{\mathcal {C}} \) the CSF, implemented as an (uninstantiated) oracle machine that in order to compute \(f_{\textsc {sfe}} ^{\mathcal {C}} (x_1,\ldots , x_n,a)\), queries the oracle with \((x_1,\ldots , x_n)\) and stores the response \((y_1,\ldots , y_n)\). The leakage function \(l_{\textsc {sfe}} (x_1,\ldots , x_n)=(|x_1|,\ldots , |x_n|)\) is unchanged.

Then, a protocol \(\pi _{\textsc {}} =(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_n)\) is a functionally black-box (FBB) protocol for (a wrapped version of) \({\mathcal {F}}_{\textsc {sfe}} ^{\mathcal {C}} \), if for every \(f \in {\mathcal {C}} \), the protocol \(\pi _{\textsc {}} ^f=(\pi _{\textsc {}} {}_1^f,\ldots ,\pi _{\textsc {}} {}_n^f)\) UC-realizes \({\mathcal {F}}_{\textsc {sfe}} ^f\).

Parallel composition of CSFs The parallel composition of CSFs is defined in a natural way as the CSF that evaluates the corresponding functions in parallel.

Definition 4.2

Let \(f_1,\ldots ,f_\texttt {M} \) be n-input functions. We define the \((n\cdot \texttt {M})\)-input function \((f_1\parallel \cdots \parallel f_\texttt {M})\) as follows. Upon input \(({ \varvec{x}}_1,\ldots ,{ \varvec{x}}_n)\), where each \({ \varvec{x}}_i\) is an \(\texttt {M} \)-tuple \((x_i^1,\ldots ,x_i^\texttt {M})\), the output is the \(\texttt {M} \)-tuple defined as

$$\begin{aligned} (f_1\parallel \cdots \parallel f_\texttt {M})({ \varvec{x}}_1,\ldots ,{ \varvec{x}}_n) = \left( (y_1^1,\ldots ,y_1^\texttt {M}),\ldots ,(y_n^1,\ldots ,y_n^\texttt {M})\right) , \end{aligned}$$

where \((y_1^j,\ldots ,y_n^j)=f_j(x_1^j,\ldots ,x_n^j)\).

Let \({\mathcal {F}}_{\textsc {csf}} ^{f_1,l_1}, \ldots ,{\mathcal {F}}_{\textsc {csf}} ^{f_\texttt {M},l_\texttt {M}}\) be CSFs and denote \({\mathcal {F}}_{\textsc {}} {}_i={\mathcal {F}}_{\textsc {csf}} ^{f_i,l_i}\). The parallel composition of \({\mathcal {F}}_{\textsc {}} {}_1, \ldots ,{\mathcal {F}}_{\textsc {}} {}_\texttt {M} \), denoted as \(({\mathcal {F}}_{\textsc {}} {}_1\parallel \cdots \parallel {\mathcal {F}}_{\textsc {}} {}_\texttt {M})\), is the CSF defined by the function \((f_1\parallel \cdots \parallel f_\texttt {M})\) and the leakage function \((l_1\parallel \cdots \parallel l_\texttt {M})\).
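Definition 4.2 can be transcribed directly into code; the following Python sketch (the toy functions f_sum and f_max are ours) shows how the per-party M-tuples of inputs are split across the functions and how the outputs are regrouped.

```python
def parallel_compose(*fs):
    """Given n-party functions f_1, ..., f_M (each mapping an n-tuple of inputs
    to an n-tuple of outputs), return the function (f_1 || ... || f_M):
    party i supplies the M-tuple x_i = (x_i^1, ..., x_i^M) and receives
    (y_i^1, ..., y_i^M), where (y_1^j, ..., y_n^j) = f_j(x_1^j, ..., x_n^j)."""
    M = len(fs)
    def composed(*party_inputs):
        n = len(party_inputs)
        # evaluate each f_j on the j-th coordinate of every party's input
        per_function = [fs[j](*(party_inputs[i][j] for i in range(n))) for j in range(M)]
        # regroup the outputs per party
        return tuple(tuple(per_function[j][i] for j in range(M)) for i in range(n))
    return composed

# Example with two 3-party functions: every party learns the sum, resp. the maximum.
f_sum = lambda *xs: tuple(sum(xs) for _ in xs)
f_max = lambda *xs: tuple(max(xs) for _ in xs)
g = parallel_compose(f_sum, f_max)
print(g((1, 10), (2, 20), (3, 5)))   # ((6, 20), (6, 20), (6, 20))
```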

5 Round-Preserving Parallel Composition: Passive Security

In this section, we show that round-preserving parallel composition is feasible, in a functionally black-box manner, facing semi-honest adversaries. The underlying idea of our protocol \(\pi _{\textsc {pfbb}} \) (standing for parallel functionally black-box), formally presented in Fig. 2, is based on a simplified form of the parallel-broadcast protocol of Ben-Or and El-Yaniv [5]. The protocol proceeds in iterations, where in each iteration, the parties invoke, in parallel and using the same input values, sufficiently many instances of each (oracle-aided) ideal functionality, but only for a constant number of rounds. If some party received an output in at least one invocation of every ideal functionality, it distributes all output values and the protocol completes; otherwise, the protocol resumes with another iteration. This protocol retains privacy for deterministic functions with public output, since the adversary is semi-honest, and so corrupted parties will provide the same input values to all instances of each ideal functionality.
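Since Fig. 2 is not reproduced in this section, the following Python pseudocode captures only the iteration structure described above; the helper functions and their names are hypothetical stand-ins rather than the actual protocol specification, and the bound \(\texttt {L} \) on the number of iterations (see Lemma 5.2 below) is elided.

```python
def pi_pfbb_party(my_inputs, invoke_batch, distribute_and_collect, N, R):
    """The iteration structure of pi_pfbb from one party's point of view.
    `invoke_batch(j, x, N, R)` and `distribute_and_collect(outputs)` are
    hypothetical stand-ins for: running N parallel instances of the j-th
    oracle-aided functionality on input x for at most R rounds (returning the
    outputs obtained so far, possibly none), and for exchanging the collected
    outputs with all other parties."""
    M = len(my_inputs)                        # one input per functionality F_1, ..., F_M
    while True:                               # in the actual protocol, at most L iterations
        batches = [invoke_batch(j, my_inputs[j], N, R) for j in range(M)]
        if all(batches):
            # at least one output from every functionality; for deterministic
            # functions all outputs within a batch coincide, so take one of each
            return distribute_and_collect([batch[0] for batch in batches])
        # some functionality produced no output within R rounds: run another iteration
```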

Intuitively, during the simulation of the protocol, the simulator should imitate every call to every ideal functionality toward the adversary. A subtle issue is that in order to do so, the simulator must know the exact trace that is sampled by each instance of each ideal functionality during the execution of the real protocol. Therefore, it is essential that the simulator receive, from the ideal functionality computing the parallel composition, the random coins used to sample the trace for the entire protocol (see Sect. 2.2). By defining the trace-distribution sampler so that the sampled trace consists of all (potential) sub-traces for every instance of every ideal functionality, the simulator can derive the exact random coins used to sample the correct sub-trace for every ideal functionality that is invoked.

Fig. 2 FBB parallel composition in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}), \ldots , {\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}}))\)-hybrid model

Theorem 5.1

Let \({\mathcal {C}} _1, \ldots , {\mathcal {C}} _\texttt {M} \) be deterministic function-classes with public output, let \({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}, \ldots , {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}}\) be oracle-aided secure function evaluation functionalities, and let \(t<n/2\). Let \(D_1,\ldots ,D_\texttt {M} \) be distributions, such that for every \(j\in [\texttt {M} ]\), the round complexity of \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _j})\) has expectation \(\mu _j\). Denote \(\mu =\max \{\mu _1,\ldots ,\mu _\texttt {M} \}\).

Then, there exists a distribution D with expectation \(\mu '=O(\mu )\) such that \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) can be UC-realized by an FBB protocol in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}), \ldots , {\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}}))\)-hybrid model, with information-theoretic security, in the presence of an adaptive, semi-honest t-adversary, assuming that all honest parties receive their inputs at the same round.

In particular, if for every \(j\in [\texttt {M} ]\), the expectation \(\mu _j\) is constant, then \(\mu '\) is constant.

The proof of Theorem 5.1 follows immediately from the following lemma.

Lemma 5.2

Consider the notation in Theorem 5.1 and let \(0<\epsilon <1\).

Then, for any \(\texttt {R}>\mu , \texttt {N} > \frac{\log (\texttt {M}/\epsilon )}{\log (\texttt {R}/\mu )}\) and \(\texttt {L} =\textsf {poly}(\kappa )\), protocol \(\pi _{\textsc {pfbb}} (\texttt {N},\texttt {R},\texttt {L})\) is an FBB protocol for \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pfbb}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\), for a distribution \(D_\mathsf {pfbb}\) with expectation \(\mu _\mathsf {pfbb}=O(\texttt {R})\), in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}), \ldots , {\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}}))\)-hybrid model, with information-theoretic security, in the presence of an adaptive, semi-honest t-adversary, assuming that all honest parties receive their inputs at the same round.

Proof

We start by defining the sampling algorithm for the distribution \(D_\mathsf {pfbb}\), parametrized by \(\texttt {N},\texttt {R},\texttt {L} \), and distributions \(D_1,\ldots ,D_\texttt {M} \). The sampler initially sets \(\alpha \leftarrow 0\) and a trace \(T \) with a root labeled by \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pfbb}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) and no children. Next, independently sample traces \(T _j^{{\tilde{k}}}\leftarrow D_j\), for \(j\in [\texttt {M} ]\) and \(k\in [\texttt {N} ]\) (where \({\tilde{k}}=\alpha \cdot \texttt {N} +k\)), and append

$$\begin{aligned} \left( \left( \left[ \left[ {\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1})\right] ^\texttt {N},\ldots ,\left[ {\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\right] ^\texttt {N} \right] \right) ^\texttt {R},{\mathcal {F}}_{\textsc {psmt}} \right) \end{aligned}$$

to the (initially empty) ordered set of leaves of the trace \(T \) (i.e., \(\texttt {R} \) sequential calls to \(\texttt {N} \) parallel instances of each of the functionalities \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _j})\) (for every \(j\in [\texttt {M} ]\)), followed by a call to \({\mathcal {F}}_{\textsc {psmt}} \)). If for every \(j\in [\texttt {M} ]\), there exists \(k\in [\texttt {N} ]\) such that \(c_{\mathsf {tr}} (T _j^{{\tilde{k}}})<\texttt {R} \), then output \(T \) and halt. Else, set \(\alpha \leftarrow \alpha +1\). If \(\alpha <\texttt {L} \), repeat the sampling process; otherwise output \(T \) and halt.

Following Lemma 2.3, for \(\texttt {R} >\mu \) and \(\texttt {N} >\frac{\log (\texttt {M}/\epsilon )}{\log (\texttt {R}/\mu )}\), it holds that in every iteration, at least one invocation of \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _j})\) will produce output, for every \(j\in [\texttt {M} ]\), with a constant probability. It follows that the expected number of iterations until all honest parties receive output and the protocol terminates is constant, and since each iteration consists of \(O(\texttt {R})\) rounds, the entire execution completes within \(O(\texttt {R})\) rounds in expectation, as required. The failure probability that the protocol will not terminate within \(\texttt {L} =\textsf {poly}(\kappa )\) iterations is negligible.
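
For concreteness, we spell out the calculation behind the constant per-iteration success probability (a self-contained version of the bound invoked via Lemma 2.3): since the round complexity of every instance of the \(j\)'th functionality has expectation \(\mu _j\le \mu \), Markov's inequality gives \(\Pr [c_{\mathsf {tr}} (T _j^{{\tilde{k}}})\ge \texttt {R} ]\le \mu /\texttt {R} \), and therefore

$$\begin{aligned} \Pr \left[ \exists j\in [\texttt {M} ]\ \forall k\in [\texttt {N} ]:\ c_{\mathsf {tr}} (T _j^{{\tilde{k}}})\ge \texttt {R} \right] \le \texttt {M} \cdot \left( \frac{\mu }{\texttt {R}}\right) ^{\texttt {N}} < \texttt {M} \cdot \left( \frac{\mu }{\texttt {R}}\right) ^{\frac{\log (\texttt {M}/\epsilon )}{\log (\texttt {R}/\mu )}} = \epsilon . \end{aligned}$$

Hence every iteration succeeds with probability at least \(1-\epsilon \), independently of previous iterations, and the number of iterations is dominated by a geometric random variable with constant expectation.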

Let \({\mathcal {A}} \) be a semi-honest adversary; we now construct a simulator \({\mathcal {S}} \) for \({\mathcal {A}} \). Initially, \({\mathcal {S}} \) sets the values \(\alpha \leftarrow 0\) and \({ \varvec{y}}\leftarrow \bot \), and starts by receiving leakage messages \((\texttt {leakage},\mathsf {sid},P _i,(l_1,\ldots ,l_\texttt {M}))\) and a trace message \((\texttt {trace},\mathsf {sid},T)\) from \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pfbb}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\), where \(T \) is a depth-1 trace of the form

$$\begin{aligned} \left( \left( \left( \left[ \left[ {\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1})\right] ^\texttt {N},\ldots ,\left[ {\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\right] ^\texttt {N} \right] \right) ^\texttt {R},{\mathcal {F}}_{\textsc {psmt}} \right) ^q\right) \end{aligned}$$

(i.e., q iterations, each consisting of \(\texttt {R} \) sequential calls to \(\texttt {N} \) parallel instances of each \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _j})\), for every \(j\in [\texttt {M} ]\), followed by a call to \({\mathcal {F}}_{\textsc {psmt}} \)). More precisely, \({\mathcal {S}} \) receives the coins that were used by the functionality \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pfbb}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) to sample the trace \(T \), and using these coins, \({\mathcal {S}} \) samples the same traces \(T _j^{{\tilde{k}}}\) that were used to define \(T \).

In order to simulate the \(\alpha \)’th iteration, \({\mathcal {S}} \) sends the message \((\texttt {leakage},\mathsf {sid} _{j,{\tilde{k}}},P _i,l_j)\) to \({\mathcal {A}} \), for every \(j\in [\texttt {M} ]\), every \(k\in [\texttt {N} ]\), and every honest \(P _i\) (where \({\tilde{k}}=\alpha \cdot \texttt {N} +k\)), and receives \((\texttt {input},\mathsf {sid} _{j,{\tilde{k}}},x_i^j)\) from \({\mathcal {A}} \) on behalf of every corrupted party \(P _i\). Since \({\mathcal {A}} \) is semi-honest, it holds that the same \(x_i^j\) is used for each corrupted party \(P _i\) in all instances of the functionality \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _j})\). In the first iteration, \({\mathcal {S}} \) sends to \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pfbb}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\), on behalf of every corrupted party \(P _i\), the message \((\texttt {input},\mathsf {sid},{ \varvec{x}}_i)\), with \({ \varvec{x}}_i=(x_i^1,\ldots ,x_i^\texttt {M})\). When \({\mathcal {A}} \) sends \((\texttt {fetch-output},\mathsf {sid} _{j,{\tilde{k}}})\) requests in round \(\rho \), \({\mathcal {S}} \) answers with \(\bot \) if \(c_{\mathsf {tr}} (T _j^{{\tilde{k}}})>\rho \), and with \((\texttt {output},\mathsf {sid} _{j,{\tilde{k}}},y_j)\) otherwise, where \({ \varvec{y}}=(y_1,\ldots ,y_\texttt {M})\); if \({ \varvec{y}}=\bot \) (i.e., the first time), send \((\texttt {early-output},\mathsf {sid},P _i)\) to \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pfbb}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\), receive \((\texttt {output},\mathsf {sid},{ \varvec{y}})\) and store \({ \varvec{y}}=(y_1,\ldots ,y_\texttt {M})\). In case \({\mathcal {A}} \) sends \((\texttt {early-output},\mathsf {sid} _{j,{\tilde{k}}},P _i)\), for some corrupted \(P _i\), send \((\texttt {output},\mathsf {sid} _{j,{\tilde{k}}},y_j)\) to \({\mathcal {A}} \). If the simulated protocol completes during the \(\alpha \)’th iteration, \({\mathcal {S}} \) sends \((\texttt {early-output},\mathsf {sid},P _i)\) to the functionality \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pfbb}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) on behalf of every party.

Proving that no environment can distinguish between its view in an execution of \(\pi _{\textsc {pfbb}} \) in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}), \ldots , {\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}}))\)-hybrid model, and its view when interacting with \({\mathcal {S}} \) in the ideal computation of \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pfbb}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\), follows via a standard hybrid argument. Starting with the execution of \(\pi _{\textsc {pfbb}} \), the invocations of the ideal functionalities \({\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}), \ldots , {\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) are replaced, one-by-one, with the answers of the simulator \({\mathcal {S}} \). Since \({\mathcal {S}} \) perfectly emulates each such call using the trace it received from the ideal functionality \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pfbb}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\), it follows immediately that the views of the environment in two neighboring hybrids are indistinguishable; therefore, the view of the environment in the simulation is indistinguishable from its view in the execution of \(\pi _{\textsc {pfbb}} \). \(\square \)

6 Round-Preserving Parallel Composition: Active Security

In this section, we consider security against active adversaries. First, in Sect. 6.1, we show how to compute the parallel composition of probabilistic-termination functionalities, in a round-preserving manner, using black-box access to protocols realizing the individual functionalities. In Sect. 6.2, we investigate whether there exists a functionally black-box, round-preserving protocol for the parallel composition of probabilistic-termination functionalities in the malicious setting, and show that for a natural class of protocols following the techniques from [5], this is not the case—i.e., there exist functions such that no such protocol with black-box access to them can compute their parallel composition in a round-preserving manner, tolerating even a single adversarial party.

6.1 Feasibility of Round-Preserving Parallel Composition

In this section, we show how to compile multiple protocols, realizing probabilistic-termination functionalities, into a single protocol that realizes the parallel composition of the functionalities, in a round-preserving manner, and while only using black-box access to the underlying protocols. We start by providing a high-level description of the compiler.

The compiler receives as input protocols \(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_\texttt {M} \), where each protocol \(\pi _{\textsc {}} {}_j\) is defined in the point-to-point model, in which the parties are given correlated randomness in a secure setup phase, i.e., in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {F}}_{\textsc {corr}} ^{D^\mathsf {corr}_j})\)-hybrid model (see Footnote 7). It follows that the next-message function for each party in each protocol is a deterministic function that receives the input value, correlated randomness, private randomness, and history of incoming messages, and outputs a vector of n messages to be sent in the following round (one message for each party); we denote by \(f_{\textsf {nxt-msg}} ^{\pi _{\textsc {}} {}_j,i}\) the next-message function for party \(P _i\) in protocol \(\pi _{\textsc {}} {}_j\). In particular, we note that the entire transcript of the protocol is fixed once the input value, correlated randomness, and private randomness of each party are determined.
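
As an illustration of this interface (the concrete signature below is our own, not the paper's), a next-message function can be viewed as a pure function of the four items listed above; the toy protocol in the example merely broadcasts a masked input in its first round.

```python
# Illustrative sketch (ours) of the next-message interface of a party P_i in some
# protocol pi_j: a deterministic function of (input, correlated randomness, private
# randomness, incoming-message history) returning the n messages of the next round.

def f_nxt_msg_toy(x, r_corr, r_prot, history, n=3):
    round_number = len(history) + 1
    if round_number == 1:
        # Round 1 of the toy protocol: broadcast the input masked with r_corr.
        return [bytes(a ^ b for a, b in zip(x, r_corr))] * n
    return [b""] * n                            # nothing further to send

# The transcript is fixed once input and randomness are fixed (determinism):
msgs = f_nxt_msg_toy(x=b"\x05", r_corr=b"\x03", r_prot=b"", history=[])
assert msgs == [b"\x06"] * 3
```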

The underlying ideas of the compiler are inspired by the constructions in [5, 16], where a round-preserving parallel-broadcast protocol was constructed by iteratively running, for a constant number of rounds, multiple instances of BA protocols (each instance is executed multiple times in parallel), until at least one execution of every BA instance is completed. This approach is indeed suitable for computing “privacy-free” functionalities (such as broadcast), where the adversary may learn the result of multiple computations using the same inputs for honest parties, without compromising security. However, when considering the parallel composition of arbitrary functions, running two instances of a protocol in which the honest parties use the same input values may violate privacy, since the adversary can provide different inputs in the two instances and thereby learn multiple outputs of the function.

The parallel-composition compiler generalizes the above approach in a privacy-preserving manner. The compiler follows the GMW paradigm [33] and is defined in the Setup-Commit-then-Prove hybrid model [11, 42], which generates committed correlated randomness for the parties and ensures that all parties follow the protocol specification. This mechanism allows each party to commit to its input values and later execute multiple instances of each protocol, while proving that the same input value is used in all executions. For simplicity and without loss of generality, we assume that each function is deterministic and has a public output. In this case, it is ensured that if two parties receive output values in two executions of \(\pi _{\textsc {}} {}_j\), then they receive the same output value. The private random coins that are used in each execution only affect the termination round, but not the output value. Using this simplification, we can remove the leader-election phase from the output-agreement technique in [5, 16] and directly use the termination technique from Bracha [7].

Another obstacle is to recover from corruptions without increasing the round complexity. Indeed, in case some party misbehaves, e.g., by using different input values in different instances of the same protocol \(\pi _{\textsc {}} {}_j\), then the Setup-Commit-then-Prove functionality ensures that all honest parties will identify the cheating party. In this case, the parties cannot recover by, for example, backtracking and simulating the cheating party, as this will yield a round complexity that is linear in the number of parties. Furthermore, the protocol must resume in a way such that all instances of a specific protocol \(\pi _{\textsc {}} {}_j\) will use the same input value that the identified corrupted party used throughout the protocol’s execution until it misbehaved (since the cheating party might have learned an output value in one of the executed protocols).

To this end, we slightly adjust the Setup-Commit-then-Prove functionality and secret-share every committed random string \(r_i\) (the correlated randomness for party \(P _i\)) among all the parties, using an error-correcting secret-sharing scheme (see Sect. A.1). Note that this can be done information theoretically as we assume an honest majority [12, 22, 56]. In case a party is identified as cheating, every party broadcasts the share of the committed randomness corresponding to that party, reconstructs the correlated randomness for that party, and from that point onwards, locally computes the messages corresponding to this party in every instance of every protocol. Using this approach, every round in the original protocols \(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_\texttt {M} \) is expanded by a constant number of rounds, and the overall round complexity is preserved.
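
For concreteness, the following Python sketch (ours) shows a plain Shamir-style \((t,n)\) sharing over a prime field, which is the basic building block behind such a backup; the error-correcting decoding that the paper's ECSS scheme additionally provides (reconstruction despite up to t incorrect shares) is omitted here for brevity.

```python
import random

# Minimal Shamir-style (t, n) sharing sketch (ours) over GF(P): any t+1 shares
# reconstruct the secret, any t reveal nothing.  Error correction is omitted.

P = 2 ** 61 - 1  # a prime; in practice the field is chosen to fit the shared strings

def share(secret, n, t):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def poly(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at 0 over GF(P).
    secret = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, n=5, t=2)
assert reconstruct(shares[:3]) == 123456789   # any t + 1 = 3 shares suffice
```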

We prove the following theorem.

Theorem 6.1

Let \({\mathcal {F}}_{\textsc {}} {}_1, \ldots , {\mathcal {F}}_{\textsc {}} {}_\texttt {M} \) be CSFs, let \(t<n/2\), and let \(c\ge 1\). Let \(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_\texttt {M} \) be SNF protocols such that for every \(j\in [\texttt {M} ]\), protocol \(\pi _{\textsc {}} {}_j\) UC-realizes \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {}} {}_j)\) with expected round complexity \(\mu _j\) and information-theoretic security, in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {F}}_{\textsc {corr}} ^{D^\mathsf {corr}_j})\)-hybrid model (for a distribution \(D_j\) and a distribution \(D^\mathsf {corr}_j\) in \(NC^0\)), in the presence of an adaptive, malicious t-adversary, assuming that all honest parties receive their inputs at the same round. Denote \(\mu =\max \{\mu _1,\ldots ,\mu _\texttt {M} \}\).

Then, \({\mathcal {W}}_{\mathrm {sl\text {-}flex}} ^{D,c}({\mathcal {F}}_{\textsc {}} {}_1\parallel \cdots \parallel {\mathcal {F}}_{\textsc {}} {}_\texttt {M})\), for some distribution \(D\) with expectation \(\mu '=O(\mu )\), can be UC-realized with information-theoretic security by a protocol \(\pi _{\textsc {}} \) in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {F}}_{\textsc {corr-bc}})\)-hybrid model, in the same adversarial setting, assuming that all honest parties receive their inputs within \(c+1\) consecutive rounds. In addition, protocol \(\pi _{\textsc {}} \) requires only black-box access to the protocols \(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_\texttt {M} \).

In particular, if for every \(j\in [\texttt {M} ]\), the expectation \(\mu _j\) is constant, then \(\mu '\) is constant.

Proof

(sketch) Without loss of generality, we assume that every CSF \({\mathcal {F}}_{\textsc {}} {}_j\) is deterministic and has public output. The proof for randomized functionalities with private output follows using standard techniques. In Lemma 6.3, we prove that the compiled protocol \(\pi _{\textsc {pbb}} =\mathsf {Comp}_{\textsc {}} (\pi _{\textsc {}} {}_1,\dots ,\pi _{\textsc {}} {}_\texttt {M},\texttt {N},\texttt {R},\texttt {L})\), for \(\texttt {R}>\mu , \texttt {N} > \frac{\log (\texttt {M}/\epsilon )}{\log (\texttt {R}/\mu )}\) and \(\texttt {L} =\textsf {poly}(\kappa )\), UC-realizes \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pbb}}({\mathcal {F}}_{\textsc {}} {}_1\parallel \cdots \parallel {\mathcal {F}}_{\textsc {}} {}_\texttt {M})\) with expected round complexity \(O(\texttt {R})\) and information-theoretic security, in the \(({\mathcal {F}}_{\textsc {pbc}},{\mathcal {F}}_{\textsc {scp}})\)-hybrid model. In Lemma 6.2, we show that a wrapped version of \({\mathcal {F}}_{\textsc {scp}} ({\mathcal {P}},{\mathcal {D}} _\mathsf {parallel}(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_\texttt {M},\texttt {N} \cdot \texttt {L},\texttt {R},\ell ),\mathbf {{\mathcal {R}}}_\mathsf {parallel}, \Pi )\) (explained below) can be implemented, such that every call can be UC-realized in the \(({\mathcal {F}}_{\textsc {psmt}},{\mathcal {F}}_{\textsc {pbc}})\)-hybrid model with constant round complexity and information-theoretic security. Following Theorem 3.5, \({\mathcal {F}}_{\textsc {pbc}} \) can be UC-realized in the \({\mathcal {F}}_{\textsc {smt}} \)-hybrid model with expected-constant-round complexity and information-theoretic security. The proof follows from the sequential composition theorems, Theorems 3.1, 3.2 and 3.3 . \(\square \)

We now proceed to define the Setup-Commit-then-Prove functionality \({\mathcal {F}}_{\textsc {scp}} \) and prove Lemma 6.2 in Sect. 6.1.1, and then prove Lemma 6.3 in Sect. 6.1.2.

6.1.1 The Setup-Commit-Then-Prove Functionality

An important building block in our parallel-composition compiler is the Setup-Commit-then-Prove functionality. This functionality is used in order to allow parties to execute multiple instances of a protocol, using the same inputs, while ensuring input consistency. In addition, in case a party misbehaves and tries to deviate from the protocol or to use different inputs in different executions, the functionality allows all parties to identify this misbehavior and recover, while increasing the round complexity only by a constant factor. This functionality was defined by Ishai et al. [42], based on the Commit-then-Prove functionality of Canetti et al. [11], and was used in order to compile any semi-honest protocol into a protocol that is secure with identifiable abort, facing malicious adversaries, by generating committed setup for the parties and allowing them to prove NP-statements in zero knowledge.

The Setup-Commit-then-Prove functionality \({\mathcal {F}}_{\textsc {scp}} \) is a reactive functionality. (The notion of CSF is extended to the reactive setting in “Appendix C.2.”) The functionality is formally defined in Fig. 3 and is parametrized by a party-set \({\mathcal {P}} \), a distribution \({\mathcal {D}} \), a vector of n NP-relations \(\mathbf {{\mathcal {R}}}\), and a \((t,n)\) error-correcting secret-sharing scheme \(\Pi \). In the first call to the functionality, the parties do not send inputs (more precisely, send the empty input \(\lambda \)). The functionality samples correlated randomness \((r_1,\ldots ,r_n)\leftarrow {\mathcal {D}} \), hands \(r_i\) to \(P _i\) and stores \(r_i\) as the committed witness for \(P _i\). In addition, the functionality secret shares \(r_i\) between all parties. All subsequent calls are used to prove NP-statements on the committed witnesses, i.e., on the \(k\)’th call, for \(k>1\), \(P _i\) sends as its input a statement \(x_{i,k}\); the functionality verifies whether \({\mathcal {R}} _i(x_{i,k},r_i)=1\) and sends \(x_{i,k}\) and the result to all parties.
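
The following Python sketch (ours; the class name and method split are illustrative) captures the reactive behavior just described: the first call samples and shares the committed randomness, and every later call verifies a statement against the stored witness and publishes the result.

```python
# Minimal sketch (ours) of the behaviour of F_scp from Fig. 3.  The distribution D,
# the relations and the sharing scheme are supplied as plain Python callables.

class SetupCommitThenProve:
    def __init__(self, sample_D, relations, share):
        self.sample_D = sample_D      # returns (r_1, ..., r_n)
        self.relations = relations    # relations[i](statement, r_i) -> bool
        self.share = share            # stands in for the (t, n) ECSS scheme
        self.witnesses = None

    def first_call(self):
        r = self.sample_D()
        self.witnesses = r
        shares = [self.share(r_i) for r_i in r]   # shares[i][u] is P_u's share of r_i
        # Party P_i receives its own r_i together with one share of every r_u.
        return [(r[i], tuple(shares[u][i] for u in range(len(r)))) for i in range(len(r))]

    def prove_call(self, i, statement):
        ok = self.relations[i](statement, self.witnesses[i])
        return statement, ok           # all parties learn (statement, ok)


# Toy usage: 3 parties, r_i is an int, the relation checks statement == r_i + 1.
f = SetupCommitThenProve(
    sample_D=lambda: (10, 20, 30),
    relations=[lambda x, r: x == r + 1] * 3,
    share=lambda r_i: (r_i, r_i, r_i),   # placeholder "sharing", NOT error-correcting
)
f.first_call()
print(f.prove_call(0, 11))               # (11, True)
```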

Fig. 3 The Setup-Commit-then-Prove functionality

Ishai et al. [42] showed how to realize a slightly reduced version of the Setup-Commit-then-Prove functionality (in which the functionality does not secret share the witnesses), unconditionally, with identifiable abort, facing an arbitrary number of corrupted parties. Their protocol is based on the “MPC-in-the-head” approach that was put forth by Ishai et al. [40] with the goal of using information-theoretic MPC to construct a zero-knowledge protocol. In a nutshell, given a statement x and a witness \({\omega }\), the prover secret shares the witness as \({\omega }={\omega }_1\oplus \cdots \oplus {\omega }_m\) (for some m) and emulates in its head an m-party t-secure MPC protocol with perfect correctness (e.g., BGW [6]) where the \(i\)’th party has input \(({\omega }_i,x)\). The MPC task is to reconstruct the witness \({\omega }={\omega }_1\oplus \cdots \oplus {\omega }_m\), evaluate the relation \({\mathcal {R}} (x,{\omega })\) and provide each party with the resulting bit. Next, the prover sends to the verifier a commitment to the local views of each of the m virtual parties (consisting of the input \({\omega }_i\), the random coins, and the incoming messages). In turn, the verifier asks the prover to open t of these commitments, and verifies that these views are consistent and that the output is 1. Intuitively, completeness holds by the correctness of the MPC, soundness holds by the perfect security, and zero knowledge holds since the views of t parties leak no information about the other inputs; so the overall security of the zero-knowledge protocol reduces to that of the commitment scheme.

In the following lemma, we adjust and simplify the protocol from [42] for the honest-majority setting, and show how to realize \({\mathcal {F}}_{\textsc {scp}} \) with guaranteed output delivery.

Lemma 6.2

Let \(\mathbf {{\mathcal {R}}}=({\mathcal {R}} _1,\ldots ,{\mathcal {R}} _n)\) be a vector of NP-relations, let \({\mathcal {D}} \) be an efficiently sampleable distribution in \(NC^0\), and let \(t<n/2\). Consider the representation of the reactive CSF, \({\mathcal {F}}_{\textsc {scp}} ({\mathcal {P}},{\mathcal {D}},\mathbf {{\mathcal {R}}},\Pi )\), as a sequence of (non-reactive) CSFs \(({\mathcal {F}}_{\textsc {csf}} ^{{\tilde{f}}_{\textsc {scp}} ^1,l_{\textsc {scp}} ^1},\ldots ,{\mathcal {F}}_{\textsc {csf}} ^{{\tilde{f}}_{\textsc {scp}} ^q,l_{\textsc {scp}} ^q})\), where \({\tilde{f}}_{\textsc {scp}} ^j\) is defined as in Definition C.1. Denote \({\mathcal {F}}_{\textsc {scp}} ^j={\mathcal {F}}_{\textsc {csf}} ^{{\tilde{f}}_{\textsc {scp}} ^j,l_{\textsc {scp}} ^j}\).

Then, there exists a vector of distributions \({ \varvec{D}}=(D_1,\ldots , D_q)\) such that for every \(j\in [q]\), the wrapped functionality \({\mathcal {W}}_{\mathrm {strict}} ^{D_j}({\mathcal {F}}_{\textsc {scp}} ^j)\) can be UC-realized in the \(({\mathcal {F}}_{\textsc {psmt}},{\mathcal {F}}_{\textsc {pbc}})\)-hybrid model with constant-round complexity and information-theoretic security, in the presence of an adaptive, malicious t-adversary.

Proof

(sketch) We start with a high-level description of the protocol in [42], for a single prover that proves a single statement (the extension to the multi-instance version of many provers that prove many statements follows via the JUC theorem [9]). As this protocol is designed for the dishonest-majority setting, it does not guarantee output delivery, but achieves identifiable abort.

As mentioned above, the protocol follows the “MPC-in-the-head” approach [40, 41], where the prover emulates in its head a protocol in which m servers, each holding as input a share of the witness \({\omega }\), compute for a public statement x the bit \(b={\mathcal {R}} (x,{\omega })\) and output \((x,b)\). The prover publicly commits to the view of each server, and the verifiers challenge the prover to open the views of some of the servers. The verifiers accept the statement x if and only if all opened views are consistent.

For simplicity, consider an ideal functionality \({\mathcal {F}}_{\textsc {corr}} ^{\textsc {1scp}}\) for generating the correlated randomness for the parties. As we argue later, this functionality can be realized in constant rounds in the broadcast model via generic MPC. (Looking ahead, the distribution \({\mathcal {D}} \) will consist of n independent copies of the following values, one for each party that acts as the prover.) The parties receive the following correlated randomness from \({\mathcal {F}}_{\textsc {corr}} ^{\textsc {1scp}}\):

  • The prover receives signed secret shares of the witness \(({\omega }_i,\sigma ({\omega }_i))\), for \(i\in [m]\), where \(({\omega }_1,\ldots ,{\omega }_m)\) are shares of the prover’s witness \({\omega }\), and \(\sigma ({\omega }_i)\) is an information-theoretic signature of \({\omega }_i\).

  • The prover also receives random strings \(v_1,\ldots ,v_m\) along with corresponding signatures \(\sigma (v_1),\ldots ,\sigma (v_m)\), that will be used for committing to the servers’ views in the m-party protocol.

  • Every \(P _i\) receives a challenge string \(c_i\) along with an information-theoretic signature \(\sigma (c_i)\).

  • Every party \(P _i\) receives a verification key for each of the signature values (note that all of the signing keys are hidden from the parties).

The prove phase consists of three rounds:

  1.

    The prover emulates in its head the protocol for computing \(b={\mathcal {R}} (x,{\omega }_1\oplus \cdots \oplus {\omega }_m)\) and broadcasts, for every \(j\in [m]\), a commitment to the view of the \(j\)’th server as \(\text{ view}_j\oplus v_j\) along with \(\sigma (v_j)\).

  2.

    Every party \(P _i\) broadcasts the committed random string \(c_i\) along with its signature \(\sigma (c_i)\). All parties locally compute \(c=\sum {c_j}\) and use it to choose a subset \({\mathcal {J}} \subseteq [m]\). In case some party \(P _i\) sends invalid values, all parties identify \(P _i\) as corrupted and abort.

  3.

    The prover broadcasts \(\text{ view}_j\) and \(\sigma ({\omega }_j)\), for every \(j\in {\mathcal {J}} \), and all parties validate consistency.

Since in this work we consider an honest majority, we can simplify the protocol and achieve security with guaranteed output delivery. In the second round, instead of having each \(P _i\) broadcast the committed randomness \((c_i,\sigma (c_i))\), we have the parties jointly compute the XOR function to agree on a subset \({\mathcal {J}} \subseteq [m]\); that is, each party enters a (locally chosen) random string as its input and receives the XOR of all strings (if a party does not input a value it is simply ignored and treated as 0). Since this functionality can be represented using a constant-depth circuit, we can use the protocol of Cramer et al. [21], whose round complexity is O(d), where d is the depth of the circuit, and which provides information-theoretic security in the broadcast-hybrid model against an adaptive malicious t-adversary. A second modification we require is that \({\mathcal {F}}_{\textsc {corr}} ^{\textsc {1scp}}\) secret shares the witness \({\omega }\) between all parties using an error-correcting secret-sharing scheme, which can be easily achieved in the honest-majority setting.
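
The following sketch (ours) illustrates the modified challenge generation: each party contributes a locally chosen random string, the strings are XORed, and the result deterministically selects the subset \({\mathcal {J}} \subseteq [m]\); deriving \({\mathcal {J}} \) from the joint string via SHA-256 is our own illustrative choice, not the paper's.

```python
import hashlib
import secrets

# Sketch (ours) of the joint XOR coin toss and its mapping to a challenge set J.

def xor_strings(strings):
    out = bytearray(len(strings[0]))
    for s in strings:
        for idx, byte in enumerate(s):
            out[idx] ^= byte
    return bytes(out)

def challenge_to_subset(c, m, size):
    # Derive a pseudorandom ordering of [m] from c and keep the first `size` indices.
    ranked = sorted(range(m),
                    key=lambda j: hashlib.sha256(c + j.to_bytes(4, "big")).digest())
    return sorted(ranked[:size])

contributions = [secrets.token_bytes(16) for _ in range(5)]  # one per party (0 if absent)
c = xor_strings(contributions)
print(challenge_to_subset(c, m=10, size=3))
```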

It is left to show that the sampling algorithm for the correlated randomness that is used in [42] can be represented using a constant-depth circuit; the proof then follows by again using the protocol from [21]. By assumption, sampling a value from the distribution \({\mathcal {D}} \) can be done using a constant-depth circuit. In addition, the information-theoretic commitments in [42] are computed using information-theoretic signatures (see “Appendix A.2”), such that no party knows the signing key. This means that the adversary cannot generate signatures on its own. In addition, each signature will only need to be verified once (by each party). Using the information-theoretic signatures construction from [60] (see Theorem A.3), the degree of the polynomial that is used to generate the signature is bounded by the number of signatures the adversary is allowed to see from each honest party, which in our case is constant. We conclude that the randomness-sampling algorithm can be represented using a constant-depth circuit. \(\square \)

In the following, we will consider the distribution \({\mathcal {D}} _\mathsf {parallel}(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_\texttt {M},q,\texttt {R},\ell )\), which prepares correlated randomness for (at most) q executions of each of the protocols \(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_\texttt {M} \), each running for exactly \(\texttt {R} \) rounds, where each \(\pi _{\textsc {}} {}_j\) is defined in the \(({\mathcal {F}}_{\textsc {psmt}},{\mathcal {F}}_{\textsc {corr}} ^{D^\mathsf {corr}_j})\)-hybrid model. For each of the q instances of every protocol \(\pi _{\textsc {}} {}_j\), the correlated randomness consists of three parts: first, sample correlated randomness for \(\pi _{\textsc {}} {}_j\) from the distribution \(D^\mathsf {corr}_j\); next, sample independent random coins (local for each party) from the corresponding distribution \(D_{\pi _{\textsc {}} {}_j}\), suitable for \(\texttt {R} \) rounds; finally, sample random coins that are used to mask all the communication (explained below). The parameter \(\ell =\textsf {poly}(\kappa )\) represents the maximum of the input length and the maximal message length in the protocols \(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_\texttt {M} \). The distribution is formally defined in Fig. 4.

The masking of the communication in the \(\rho \)’th round of the \(k\)’th instance of protocol \(\pi _{\textsc {}} {}_j\) is performed as follows:

  • When \(P _i\) wants to send messages \((m_i^1,\ldots ,m_i^n)\) (where \(m_i^u\) is sent privately to \(P _u\)), party \(P _i\) sets for every \(u\in [n]\), \({\tilde{m}}_i^u=m_i^u \oplus r^\mathsf {mask}_{i,j,k,\rho ,u}\) and broadcasts \({\tilde{{ \varvec{m}}}}_i=({\tilde{m}}_i^1,\ldots ,{\tilde{m}}_i^n)\).

  • When \(P _i\) receives messages \({\tilde{{ \varvec{m}}}}_u=({\tilde{m}}_u^1,\ldots ,{\tilde{m}}_u^n)\) from \(P _u\), \(P _i\) computes \(m_u^i={\tilde{m}}_u^i \oplus r^\mathsf {mask}_{u,j,k,\rho ,i}\), and uses \(m_u^i\) as the message \(P _u\) sent him.
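
In code, the masking is simply a one-time pad applied per recipient before broadcasting, and stripped on receipt; the following sketch (ours) shows both directions for a single round of a single instance.

```python
import secrets

# Sketch (ours) of the communication masking: P_i pads the message intended for P_u
# with the pre-shared value r^mask_{i,j,k,rho,u} before broadcasting it, and P_u
# removes the same pad on receipt.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def mask_outgoing(messages, pads):
    """messages[u]: plaintext for P_u; pads[u]: pad shared between the sender and P_u."""
    return [xor_bytes(m, pad) for m, pad in zip(messages, pads)]

def unmask_incoming(masked_for_me, my_pad):
    return xor_bytes(masked_for_me, my_pad)

pads = [secrets.token_bytes(4) for _ in range(3)]   # r^mask values from the setup
plain = [b"msg0", b"msg1", b"msg2"]
broadcasted = mask_outgoing(plain, pads)            # what P_i actually broadcasts
assert unmask_incoming(broadcasted[1], pads[1]) == b"msg1"
```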

Fig. 4 The correlated-randomness distribution for parallel composition

The vector of relations \(\mathbf {{\mathcal {R}}}_\mathsf {parallel}=({\mathcal {R}} _\mathsf {parallel}^1,\ldots ,{\mathcal {R}} _\mathsf {parallel}^n)\), described below, will be used to verify that every party sends its messages in multiple executions of a protocol according to the specification of the protocol, while using the same input value in all executions. Formally, for every \(i\in [n]\), the relation \({\mathcal {R}} _\mathsf {parallel}^i\) consists of pairs \(((\alpha ,\rho ,{\tilde{{ \varvec{m}}}},{\tilde{{ \varvec{h}}}}),{ \varvec{r}}_i)\), satisfying:

  • \(\alpha \) is an integer representing the iteration number.

  • \(\rho \) is an integer representing the round number.

  • The vector of messages \({\tilde{{ \varvec{m}}}}=({\tilde{{ \varvec{m}}}}_{i,1,\rho },\ldots ,{\tilde{{ \varvec{m}}}}_{i,\texttt {M},\rho })\) is structured such that for every \(j\in [\texttt {M} ]\), \({\tilde{{ \varvec{m}}}}_{i,j,\rho }=({\tilde{{ \varvec{m}}}}_{i,j,1,\rho },\ldots ,{\tilde{{ \varvec{m}}}}_{i,j,\texttt {N},\rho })\), and for every \(k\in [\texttt {N} ]\), \({\tilde{{ \varvec{m}}}}_{i,j,k,\rho }=({\tilde{m}}_{i,j,k,\rho ,1},\ldots ,{\tilde{m}}_{i,j,k,\rho ,n})\) represents the message \(P _i\) broadcasts in the \(\rho \)’th round of the \(k\)’th execution of protocol \(\pi _{\textsc {}} {}_j\), in the \(\alpha \)’th iteration.

  • The vector of history-messages \({\tilde{{ \varvec{h}}}}=({\tilde{{ \varvec{h}}}}^\texttt {input},{\tilde{{ \varvec{h}}}}_1,\ldots ,{\tilde{{ \varvec{h}}}}_\texttt {M})\) is structured such that \({\tilde{{ \varvec{h}}}}^\texttt {input} =({\tilde{{ \varvec{m}}}}^\mathsf {input}_1,\ldots ,{\tilde{{ \varvec{m}}}}^\mathsf {input}_n)\) and for \(j\in [\texttt {M} ]\), \({\tilde{{ \varvec{h}}}}_j=({\tilde{{h}}}_{j,1},\ldots ,{\tilde{{h}}}_{j,\texttt {N}})\), and each \({\tilde{{h}}}_{j,k}\) represents the history-transcript until the \(\rho \)’th round of the \(k\)’th execution of protocol \(\pi _{\textsc {}} {}_j\), in the \(\alpha \)’th iteration.

  • The random coins are \({ \varvec{r}}_i=({ \varvec{r}}^\mathsf {input}_i,{ \varvec{r}}_i^1,\ldots ,{ \varvec{r}}_i^\texttt {M})\), with \({ \varvec{r}}_i^j=(r^\mathsf {corr}_{i,j},r^\mathsf {prot}_{i,j},r^\mathsf {mask}_{i,j})\), as defined in \({\mathcal {D}} _\mathsf {parallel}\).

  • For every \(j\in [\texttt {M} ]\) and \(k\in [\texttt {N} ]\), denote by \({ \varvec{m}}_{i,j,k,\rho }=(m_{i,j,k,\rho ,1},\ldots ,m_{i,j,k,\rho ,n})\) the unmasked messages, where for every \(u\in [n]\), \(m_{i,j,k,\rho ,u}={\tilde{m}}_{i,j,k,\rho ,u}\oplus r^\mathsf {mask}_{i,j,{\tilde{k}},\rho ,u}\), with \({\tilde{k}}=\alpha \cdot \texttt {N} +k\). Similarly, denote by \({h}_{i,j,k}\) the unmasked history obtained from \({\tilde{{h}}}_{i,j,k}\) using the corresponding randomness in \(r^\mathsf {mask}_{i,j,{\tilde{k}}}\). Denote \((x_i^1,\ldots ,x_i^\texttt {M})={\tilde{{ \varvec{m}}}}^\mathsf {input}_i\oplus { \varvec{r}}^\mathsf {input}_i\). Then, for every j and k, it holds that

    $$\begin{aligned} { \varvec{m}}_{i,j,k,\rho }=f_{\textsf {nxt-msg}} ^{\pi _{\textsc {}} {}_j,i}(x_i^j,r^\mathsf {corr}_{i,j,{\tilde{k}}},r^\mathsf {prot}_{i,j,{\tilde{k}}},{h}_{i,j,k}). \end{aligned}$$

    That is, \({ \varvec{m}}_{i,j,k,\rho }\) is the output of the next-message function of \(P _i\) in protocol \(\pi _{\textsc {}} {}_j\) on input \(x_i^j\), correlated randomness \(r^\mathsf {corr}_{i,j,{\tilde{k}}}\), private randomness \(r^\mathsf {prot}_{i,j,{\tilde{k}}}\) and history \({h}_{i,j,k}\).

In the sequel, for simplicity, we will denote by \({\mathcal {F}}_{\textsc {scp}} \) the functionality

$$\begin{aligned} {\mathcal {F}}_{\textsc {scp}} ({\mathcal {P}},{\mathcal {D}} _\mathsf {parallel}(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_\texttt {M},\texttt {N} \cdot \texttt {L},\texttt {R},\ell ),\mathbf {{\mathcal {R}}}_\mathsf {parallel}, \Pi ). \end{aligned}$$

6.1.2 Round-Preserving Parallel-Composition Compiler

We are now ready to present our protocol-black-box (PBB) parallel-composition compiler, formally described in Fig. 5.

Lemma 6.3

Consider the notation in Theorem 6.1 and let \(0<\epsilon <1\).

Then, for parameters \(\texttt {R}>\mu , \texttt {N} > \frac{\log (\texttt {M}/\epsilon )}{\log (\texttt {R}/\mu )}\), and \(\texttt {L} =\textsf {poly}(\kappa )\), the compiled protocol \(\pi _{\textsc {pbb}} =\mathsf {Comp}_{\textsc {}} (\pi _{\textsc {}} {}_1,\dots ,\pi _{\textsc {}} {}_\texttt {M},\texttt {N},\texttt {R},\texttt {L})\) UC-realizes \({\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pbb}}({\mathcal {F}}_{\textsc {}} {}_1\parallel \cdots \parallel {\mathcal {F}}_{\textsc {}} {}_\texttt {M})\) with expected round complexity \(\mu _\mathsf {pbb}=O(\texttt {R})\), in the \(({\mathcal {F}}_{\textsc {pbc}},{\mathcal {F}}_{\textsc {scp}})\)-hybrid model and same adversarial setting, assuming that all honest parties receive their inputs at the same round. In addition, the compiler requires only black-box access to the protocols \(\pi _{\textsc {}} {}_1, \ldots , \pi _{\textsc {}} {}_\texttt {M} \).

To prove security of the construction, we construct a simulator for the dummy adversary, which simulates the functionality \({\mathcal {F}}_{\textsc {scp}} \) and all honest parties. At a high level, the simulation proceeds as follows. Since every protocol \(\pi _{\textsc {}} {}_j\) realizes \({\mathcal {F}}_{\textsc {}} {}_j\), there exists a simulator \({\mathcal {S}} _j\) for the dummy adversary. In order to simulate the \(k\)’th instance of each protocol \(\pi _{\textsc {}} {}_j\), the simulator \({\mathcal {S}} \) invokes an instance of \({\mathcal {S}} _j\), denoted \({\mathcal {S}} _j^k\), and receives correlated randomness \({\tilde{r}}^\mathsf {corr}_{i,j,k}\) for every corrupted party \(P _i\). The simulator \({\mathcal {S}} \) samples randomness from the distribution \({\mathcal {D}} _\mathsf {parallel}\), adjusts the correlated randomness for every corrupted party accordingly and hands the adversary \(({ \varvec{r}}_i,v_1^i,\ldots ,v_n^i)\) as the answer from the first call to \({\mathcal {F}}_{\textsc {scp}} \), where \((v_i^1,\ldots ,v_i^n)\) are shares of zero, for every \(i\in [n]\). For the input-commitment message, the simulator broadcasts commitments of zero for the honest parties (i.e., random messages). The \(k\)’th instance of \(\pi _{\textsc {}} {}_j\) is simulated now using \({\mathcal {S}} _j^k\), where \({\mathcal {S}} \) masks/unmasks the messages between \({\mathcal {A}} \) and \({\mathcal {S}} _j^k\), appropriately. Every message sent by \({\mathcal {A}} \) on behalf of a corrupted party \(P _i\) is validated by \({\mathcal {S}} \) according to the relation \({\mathcal {R}} _\mathsf {parallel}^i\), and in case it is invalid, \({\mathcal {S}} \) locally computes the messages for \(P _i\), using its input and correlated randomness. In case party \(P _i\) gets corrupted, the simulator corrupts the dummy party in the ideal computation and learns its input; next, \({\mathcal {S}} \) hands the input to each simulator \({\mathcal {S}} _j^k\), receives the random coins for \(P _i\) in each instance of \(\pi _{\textsc {}} {}_j\), updates the random coins \({ \varvec{r}}_i\) accordingly and hands it to \({\mathcal {A}} \). This is a valid simulation for the dummy adversary, following the security guarantees of each simulator \({\mathcal {S}} _j\) and of the secret-sharing scheme \(\Pi \).

Proof

For simplicity of notation, denote \({\mathcal {F}}_{\textsc {pbb}} ={\mathcal {W}}_{\mathrm {flex}} ^{D_\mathsf {pbb}}({\mathcal {F}}_{\textsc {}} {}_1\parallel \cdots \parallel {\mathcal {F}}_{\textsc {}} {}_\texttt {M})\). We start by defining the sampling algorithm for the distribution \(D_\mathsf {pbb}\), parametrized by \(\texttt {N},\texttt {R},\texttt {L} \) and distributions \(D_1,\ldots ,D_\texttt {M} \). The sampler initially sets \(\alpha \leftarrow 0\) and a trace \(T \) with root labeled by \({\mathcal {F}}_{\textsc {pbb}} \) and children \(({\mathcal {F}}_{\textsc {scp}} ^1,{\mathcal {F}}_{\textsc {pbc}})\). Next, independently sample traces \(T _j^{{\tilde{k}}}\leftarrow D_j\), for \(j\in [\texttt {M} ]\) and \(k\in [\texttt {N} ]\) (where \({\tilde{k}}=\alpha \cdot \texttt {N} +k\)), and append

$$\begin{aligned} \left( \left( {\mathcal {F}}_{\textsc {scp}} ^i,{\mathcal {F}}_{\textsc {pbc}} \right) ^\texttt {R},{\mathcal {F}}_{\textsc {pbc}},{\mathcal {F}}_{\textsc {pbc}},{\mathcal {F}}_{\textsc {pbc}} \right) \end{aligned}$$
Fig. 5 The parallel-composition compiler in the \(({\mathcal {F}}_{\textsc {pbc}},{\mathcal {F}}_{\textsc {scp}})\)-hybrid model

to the leaves of the trace \(T \) (i.e., \(\texttt {R} \) sequential calls to \(({\mathcal {F}}_{\textsc {scp}} ^i,{\mathcal {F}}_{\textsc {pbc}})\), followed by three sequential calls to \({\mathcal {F}}_{\textsc {pbc}} \)). If for every \(j\in [\texttt {M} ]\), there exists \(k\in [\texttt {N} ]\) such that \(c_{\mathsf {tr}} (T _j^{{\tilde{k}}})<\texttt {R} \), then output \(T \) and halt. Else, set \(\alpha \leftarrow \alpha +1\). If \(\alpha <\texttt {L} \), repeat the sampling process; otherwise output \(T \) and halt.

Following Lemma 2.3, for \(\texttt {R} >\mu \) and \(\texttt {N} >\frac{\log (\texttt {M}/\epsilon )}{\log (\texttt {R}/\mu )}\), it holds that in every iteration, at least one execution of \(\pi _{\textsc {}} {}_j\) will produce output, for every \(j\in [\texttt {M} ]\), with a constant probability. It follows that the expected number of iterations until all honest parties receive output and the protocol terminates is constant, and since each iteration consists of \(\texttt {R} \) rounds, the entire execution completes within expected \(O(\texttt {R})\) rounds, as required. The failure probability that the protocol will not terminate within \(\texttt {L} =\textsf {poly}(\kappa )\) iterations is negligible.

We construct a simulator \({\mathcal {S}} \) for the dummy adversary \({\mathcal {A}} \). Let \({\mathcal {Z}} \) be an environment. The simulator \({\mathcal {S}} \) uses, in a black-box way, the simulators \({\mathcal {S}} _j\), for \(j\in [\texttt {M} ]\), where every \({\mathcal {S}} _j\) simulates the dummy adversary for protocol \(\pi _{\textsc {}} {}_j\). The simulators \({\mathcal {S}} _j\) are guaranteed to exist since every \(\pi _{\textsc {}} {}_j\) UC-realizes \({\mathcal {F}}_{\textsc {}} {}_j\). Each simulator \({\mathcal {S}} _j\) is invoked (at most) \(q=\texttt {L} \cdot \texttt {N} \) times; denote by \({\mathcal {S}} _j^k\) the \(k\)’th invocation of \({\mathcal {S}} _j\).

We consider the representation of the reactive CSF \({\mathcal {F}}_{\textsc {scp}} \) as a sequence of (non-reactive) CSFs \(({\mathcal {F}}_{\textsc {csf}} ^{{\tilde{f}}_{\textsc {scp}} ^1,l_{\textsc {scp}} ^1},\ldots ,{\mathcal {F}}_{\textsc {csf}} ^{{\tilde{f}}_{\textsc {scp}} ^{q'},l_{\textsc {scp}} ^{q'}})\), where \({\tilde{f}}_{\textsc {scp}} ^j\) is defined as in Definition C.1 and \(q'=\texttt {L} \cdot \texttt {R} +1\). Denote \({\mathcal {F}}_{\textsc {scp}} ^j={\mathcal {F}}_{\textsc {csf}} ^{{\tilde{f}}_{\textsc {scp}} ^j,l_{\textsc {scp}} ^j}\).

The simulator \({\mathcal {S}} \) proceeds as follows:

  • Initially, set \({ \varvec{x}}_1=\cdots ={ \varvec{x}}_n=\bot \) and \({ \varvec{y}}=\bot \).

  • Simulating the first call to functionality \({\mathcal {F}}_{\textsc {scp}} \):

    1.

      Sample correlated randomness \(({ \varvec{r}}_1,\ldots ,{ \varvec{r}}_n)\leftarrow {\mathcal {D}} _\mathsf {parallel}(\pi _{\textsc {}} {}_1,\ldots ,\pi _{\textsc {}} {}_\texttt {M},q,\texttt {R},\ell )\), where \(q=\texttt {N} \cdot \texttt {L} \) and \({ \varvec{r}}_i=({ \varvec{r}}^\mathsf {input}_i,{ \varvec{r}}_i^1,\ldots ,{ \varvec{r}}_i^\texttt {M})\) with \({ \varvec{r}}_i^j=(r^\mathsf {corr}_{i,j},r^\mathsf {prot}_{i,j},r^\mathsf {mask}_{i,j})\), for \(i\in [n]\) and \(j\in [\texttt {M} ]\).

    2.

      For every \(j\in [\texttt {M} ]\) and \(k\in [q]\), invoke \({\mathcal {S}} _j^k\), request the correlated randomness for every corrupted party \(P _i\) and get from \({\mathcal {S}} _j^k\) the values \({\tilde{r}}^\mathsf {corr}_{i,j,k}\). Set \(r^\mathsf {corr}_{i,j,k}\leftarrow {\tilde{r}}^\mathsf {corr}_{i,j,k}\) in \({ \varvec{r}}_i\).

    3.

      Receive from \({\mathcal {A}} \) the message \((\texttt {input},\mathsf {sid},\lambda )\), for a corrupted \(P _i\) (that \({\mathcal {A}} \) sends to \({\mathcal {F}}_{\textsc {scp}} \)).

    4.

      For every party \(P _i\), compute \(({\tilde{v}}_i^1,\ldots ,{\tilde{v}}_i^n)\leftarrow \textsf {Share} (0^{|{ \varvec{r}}_i|})\).

    5.

      Send the message \((\texttt {output},\mathsf {sid},({ \varvec{r}}_i,{\tilde{v}}_1^i,\ldots ,{\tilde{v}}_n^i))\) to \({\mathcal {A}} \) as the response from \({\mathcal {F}}_{\textsc {scp}} \), for every corrupted \(P _i\).

  • Simulating the first (input-commitment) message:

    1.

      For every honest party \(P _i\), send a random string \({\tilde{{ \varvec{m}}}}^\mathsf {input}_i\) to \({\mathcal {A}} \).

    2.

      For every corrupted party \(P _i\), receive the message \({\tilde{{ \varvec{m}}}}^\mathsf {input}_i\) from \({\mathcal {A}} \) and extract the input value \({ \varvec{x}}_i={\tilde{{ \varvec{m}}}}^\mathsf {input}_i \oplus { \varvec{r}}^\mathsf {input}_i\). In case \({\mathcal {A}} \) does not send a message for \(P _i\), set \({ \varvec{x}}_i\) and \({ \varvec{r}}^\mathsf {input}_i\) to predetermined default values and locally compute \({\tilde{{ \varvec{m}}}}^\mathsf {input}_i={ \varvec{x}}_i\oplus { \varvec{r}}^\mathsf {input}_i\).

    3.

      Set \({\tilde{{ \varvec{m}}}}^\mathsf {input}=({\tilde{{ \varvec{m}}}}^\mathsf {input}_1,\ldots ,{\tilde{{ \varvec{m}}}}^\mathsf {input}_n)\).

  • Send inputs to \({\mathcal {F}}_{\textsc {pbb}} \):

    1.

      For every corrupted party \(P _i\), send the message \((\texttt {input},\mathsf {sid},{ \varvec{x}}_i)\) to \({\mathcal {F}}_{\textsc {pbb}} \).

    2.

      Upon receiving message \((\texttt {leakage},\mathsf {sid},P _i,(l_1,\ldots ,l_\texttt {M}))\) from \({\mathcal {F}}_{\textsc {pbb}} \), for an honest party \(P _i\), store the leakage information.

    3.

      In addition, \({\mathcal {S}} \) receives \((\texttt {trace},\mathsf {sid},T)\), from \({\mathcal {F}}_{\textsc {pbb}} \), where \(T \) is a depth-1 trace of the form

      $$\begin{aligned} \left( {\mathcal {F}}_{\textsc {scp}} ^1,{\mathcal {F}}_{\textsc {pbc}},\left( \left( {\mathcal {F}}_{\textsc {scp}} ^i,{\mathcal {F}}_{\textsc {pbc}} \right) ^\texttt {R},{\mathcal {F}}_{\textsc {pbc}},{\mathcal {F}}_{\textsc {pbc}},{\mathcal {F}}_{\textsc {pbc}} \right) ^{q''}\right) , \end{aligned}$$

      (i.e., initially, calls to \({\mathcal {F}}_{\textsc {scp}} ^1\) and \({\mathcal {F}}_{\textsc {pbc}} \), followed by \(q''\) iterations of calling \(\texttt {R} \) times \(({\mathcal {F}}_{\textsc {scp}} ^i,{\mathcal {F}}_{\textsc {pbc}})\) and 3 rounds of \({\mathcal {F}}_{\textsc {pbc}} \) in order to agree on termination). More precisely, \({\mathcal {S}} \) receives the coins that were used by the functionality \({\mathcal {F}}_{\textsc {pbb}} \) to sample the trace \(T \) from \(D_\mathsf {pbb}\). Using these coins, \({\mathcal {S}} \) samples the same traces \(T _j^{{\tilde{k}}}\) that were used to define \(T \).

  • Simulating the \(\alpha \)’th iteration:

    1.

      For every \(i\in [n]\), \(j\in [\texttt {M} ]\) and \(k\in [\texttt {N} ]\), set \({h}_{i,j,k},{\tilde{{h}}}_{j,k}\leftarrow \lambda \). Denote \({ \varvec{h}}_i^j=({h}_{i,j,1},\ldots ,{h}_{i,j,\texttt {N}})\), \({\tilde{{ \varvec{h}}}}^j=({\tilde{{h}}}_{j,1},\ldots ,{\tilde{{h}}}_{j,\texttt {N}})\) and \({\tilde{{ \varvec{h}}}}=({\tilde{{h}}}^1,\ldots ,{\tilde{{h}}}^\texttt {M})\). For \(k\in [\texttt {N} ]\), let \({\tilde{k}}=\alpha \cdot \texttt {N} +k\).

    2.

      For \(j\in [\texttt {M} ]\) and \(k\in [\texttt {N} ]\), send to \({\mathcal {S}} _j^{{\tilde{k}}}\), on behalf of every honest party, the leakage messages \((\texttt {leakage},\mathsf {sid},P _i,l_j)\), as well as \((\texttt {trace},\mathsf {sid},T _j^{{\tilde{k}}})\). In addition, send to \({\mathcal {S}} _j^{{\tilde{k}}}\) the message \((\texttt {input},\mathsf {sid},x_i^j)\), for every corrupted party \(P _i\), and receive from \({\mathcal {S}} _j^{{\tilde{k}}}\) the messages \((\texttt {input},\mathsf {sid},x_i^j)\) that are sent to \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {}} {}_j)\), for the corrupted parties.

    3.

      Execute for rounds \(\rho =1,\ldots ,\texttt {R} \):

      1. (a)

        For every honest \(P _i\), send \((\alpha ,\rho ,{\tilde{{ \varvec{m}}}}_{i,\rho },{\tilde{{ \varvec{h}}}})\) to \({\mathcal {A}} \) as the leakage from \({\mathcal {F}}_{\textsc {scp}} \) for \(P _i\), where \({\tilde{{ \varvec{m}}}}_{i,\rho }\) is prepared as follows. For every corrupted party \(P _u\), receive from \({\mathcal {S}} _j^{{\tilde{k}}}\) the message \(m_{i,j,k,\rho ,u}\) and set \({\tilde{m}}_{i,j,k,\rho ,u}=m_{i,j,k,\rho ,u}\oplus r^\mathsf {mask}_{i,j,{\tilde{k}},\rho ,u}\). For every honest \(P _u\), sample a random message \({\tilde{m}}_{i,j,k,\rho ,u}\). Set \({\tilde{{ \varvec{m}}}}_{i,\rho }=({\tilde{{ \varvec{m}}}}_{i,1,\rho },\ldots ,{\tilde{{ \varvec{m}}}}_{i,\texttt {M},\rho })\), where \({\tilde{{ \varvec{m}}}}_{i,j,\rho }=({\tilde{{ \varvec{m}}}}_{i,j,1,\rho },\ldots ,{\tilde{{ \varvec{m}}}}_{i,j,\texttt {N},\rho })\) with \({\tilde{{ \varvec{m}}}}_{i,j,k,\rho }=({\tilde{m}}_{i,j,k,\rho ,1},\ldots ,{\tilde{m}}_{i,j,k,\rho ,n})\).

      2. (b)

        For every corrupted party \(P _i\), receive from \({\mathcal {A}} \) the message \((\texttt {input},\mathsf {sid},(\alpha ,\rho ,{\tilde{{ \varvec{m}}}}_{i,\rho },{\tilde{{ \varvec{h}}}}_i))\) (intended to \({\mathcal {F}}_{\textsc {scp}} \)), and check whether \({\tilde{{ \varvec{h}}}}_i={\tilde{{ \varvec{h}}}}\) and \({\mathcal {R}} _\mathsf {parallel}^i((\alpha ,\rho ,{\tilde{{ \varvec{m}}}}_{i,\rho },{\tilde{{ \varvec{h}}}}_i),{ \varvec{r}}_i)=1\). If so, set \(b_i=1\) else \(b_i=0\).

      3. (c)

        For every corrupted party \(P _i\) with \(b_i=1\), it holds that \({\tilde{{ \varvec{m}}}}_{i,\rho }=({\tilde{{ \varvec{m}}}}_{i,1,\rho },\ldots ,{\tilde{{ \varvec{m}}}}_{i,\texttt {M},\rho })\), where \({\tilde{{ \varvec{m}}}}_{i,j,\rho }=({\tilde{{ \varvec{m}}}}_{i,j,1,\rho },\ldots ,{\tilde{{ \varvec{m}}}}_{i,j,\texttt {N},\rho })\) and \({\tilde{{ \varvec{m}}}}_{i,j,k,\rho }=({\tilde{{ \varvec{m}}}}_{i,j,k,\rho ,1},\ldots ,{\tilde{{ \varvec{m}}}}_{i,j,k,\rho ,n})\). Denote, for every honest \(P _u\), \(m_{i,j,k,\rho ,u}={\tilde{m}}_{i,j,k,\rho ,u}\oplus r^\mathsf {mask}_{i,j,{\tilde{k}},\rho ,u}\).

      4. (d)

        For every honest party \(P _i\), denote \({\tilde{{ \varvec{h}}}}_i={\tilde{{ \varvec{h}}}}\) and \(b_i=1\), and send to the adversary \({\mathcal {A}} \) \((\texttt {output},\mathsf {sid},(((\alpha ,\rho ,{\tilde{{ \varvec{m}}}}_{1,\rho },{\tilde{{ \varvec{h}}}}_1),b_1),\ldots ,((\alpha ,\rho ,{\tilde{{ \varvec{m}}}}_{n,\rho },{\tilde{{ \varvec{h}}}}_n),b_n)))\) as the response from \({\mathcal {F}}_{\textsc {scp}} \), on behalf of every corrupted party.

      5. (e)

        For every corrupted party \(P _i\) with \(b_i=1\) and every honest \(P _u\), forward to \({\mathcal {S}} _j^{{\tilde{k}}}\) the message \(m_{i,j,k,\rho ,u}\).

      6. (f)

        For every corrupted party \(P _i\) with \(b_i=0\), compute \((v_i^1,\ldots ,v_i^n)\leftarrow \textsf {Share} ({ \varvec{r}}_i)\) and send \(v_i^u\) to \({\mathcal {A}} \), on behalf of every honest party \(P _u\). (Note that \({ \varvec{r}}_i\) may change as the simulation proceeds, in case \(P _i\) gets corrupted dynamically.)

      7. (g)

        For every corrupted party that has been previously identified, locally compute the messages \(m_{i,j,k,\rho ,u}\), for every honest party \(P _u\), using \(({ \varvec{x}}_i,{ \varvec{r}}_i)\) and \({h}_{i,j,k}\), and send them to \({\mathcal {S}} _j^{{\tilde{k}}}\).

      8. (h)

        Append the messages \(({\tilde{{ \varvec{m}}}}_{1,\rho },\ldots ,{\tilde{{ \varvec{m}}}}_{n,\rho })\) to the global history \({\tilde{{ \varvec{h}}}}\). For every corrupted party \(P _i\), append the messages \((m_{1,j,k,\rho ,i},\ldots ,m_{n,j,k,\rho ,i})\) to the local history \({h}_{i,j,k}\).

      9. (i)

        Upon receiving \((\texttt {early-output},\mathsf {sid},P _i)\) from \({\mathcal {S}} _j^k\) for an honest party \(P _i\), simulate the honest party receiving the output message.

      10. (j)

        Upon receiving \((\texttt {early-output},\mathsf {sid},P _i)\) from \({\mathcal {S}} _j^{{\tilde{k}}}\) for a corrupted party \(P _i\), if \({ \varvec{y}}=\bot \), send \((\texttt {early-output},\mathsf {sid},P _i)\) to \({\mathcal {F}}_{\textsc {pbb}} \) and receive back \((\texttt {output},\mathsf {sid},{ \varvec{y}})\), with \({ \varvec{y}}=(y_1,\ldots ,y_\texttt {M})\). Send \((\texttt {output},\mathsf {sid},y_j)\) to \({\mathcal {S}} _j^{{\tilde{k}}}\).

      11. (k)

        For every \(j\in [\texttt {M} ]\) and \(k\in [\texttt {N} ]\), check whether \(\rho =c_{\mathsf {tr}} (T _j^{{\tilde{k}}})\). If so and \({ \varvec{y}}=\bot \), send \((\texttt {early-output},\mathsf {sid},P _i)\) to \({\mathcal {F}}_{\textsc {pbb}} \) and receive back \((\texttt {output},\mathsf {sid},{ \varvec{y}})\), with \({ \varvec{y}}=(y_1,\ldots ,y_\texttt {M})\). Send \((\texttt {output},\mathsf {sid},y_j)\) to \({\mathcal {S}} _j^{{\tilde{k}}}\) for every corrupted party.

  • Explaining corruption requests: upon a corruption request for \(P _i\), proceed as follows.

    1.

      Corrupt the dummy party \(P _i\) and learn its input \({ \varvec{x}}_i=(x_i^1,\ldots ,x_i^\texttt {M})\).

    2.

      For every \(j\in [\texttt {M} ]\) and \(k\in [\texttt {L} \cdot \texttt {N} ]\):

      1. (a)

        Instruct \({\mathcal {S}} _j^k\) to corrupt \(P _i\) and provide it with input \(x_i^j\).

      2. (b)

        Receive from \({\mathcal {S}} _j^k\) the view of \(P _i\), i.e., its internal randomness \({\tilde{r}}^\mathsf {prot}_{i,j,k}\), correlated randomness \({\tilde{r}}^\mathsf {corr}_{i,j,k}\) and history \({h}_{i,j,k}\).

      3. (c)

        For every \(u\in [n]\) and every message \(m_{u,j,k,\rho ,i}\) in \({h}_{i,j,k}\) (message from \(P _u\) to \(P _i\) in the \(\rho \)’th round of the \(k\)’th instance of \(\pi _{\textsc {}} {}_j\), for \(\rho \in [\texttt {R} ]\)), compute \({\tilde{r}}^\mathsf {mask}_{u,j,k,\rho ,i} = m_{u,j,k,\rho ,i} \oplus {\tilde{m}}_{u,j,k,\rho ,i}\). Similarly, compute \({\tilde{r}}^\mathsf {mask}_{i,j,k,\rho ,u} = m_{i,j,k,\rho ,u} \oplus {\tilde{m}}_{i,j,k,\rho ,u}\).

      4. (d)

        Compute \({\tilde{{ \varvec{r}}}}^\mathsf {input}_i={\tilde{{ \varvec{m}}}}^\mathsf {input}_i\oplus { \varvec{x}}_i\).

      5. (e)

        Adjust the random coins \(r^\mathsf {corr}_{i,j,k}\leftarrow {\tilde{r}}^\mathsf {corr}_{i,j,k}\), \(r^\mathsf {prot}_{i,j,k}\leftarrow {\tilde{r}}^\mathsf {prot}_{i,j,k}\), \(r^\mathsf {mask}_{i,j,k}\leftarrow {\tilde{r}}^\mathsf {mask}_{i,j,k}\) and \({ \varvec{r}}^\mathsf {input}_i\leftarrow {\tilde{{ \varvec{r}}}}^\mathsf {input}_i\) in \({ \varvec{r}}_i\).

    3. 3.

      Provide the input \({ \varvec{x}}_i\) and the adjusted correlated randomness \({ \varvec{r}}_i\) to \({\mathcal {Z}} \) as the internal state of party \(P _i\).

We next prove that no environment can distinguish an interaction with the dummy adversary and the honest parties running protocol \(\pi '\) in the \(({\mathcal {F}}_{\textsc {pbc}},{\mathcal {F}}_{\textsc {scp}})\)-hybrid model from an interaction with the simulator \({\mathcal {S}} \) and the dummy honest parties computing \({\mathcal {F}}_{\textsc {pbb}} \), except with negligible probability. We prove this using a series of hybrid games; the output of each game is the output of the environment.

The game \(\text{ HYB}^1_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\). This is exactly the execution of the protocol \(\pi _{\textsc {pbb}} \) in the \(({\mathcal {F}}_{\textsc {pbc}},{\mathcal {F}}_{\textsc {scp}})\)-hybrid model with environment \({\mathcal {Z}}\) and dummy adversary \({\mathcal {A}}\).

The games \(\text{ HYB}^{2,{i^*}}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\), for \(0\le {i^*}\le n\). In these games, we modify \(\text{ HYB}^1_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\) as follows. During the first call to \({\mathcal {F}}_{\textsc {scp}} \), proceed as follows. For every \(i\) with \({i^*}< i\le n\), compute \((v_i^1,\ldots ,v_i^n)\leftarrow \textsf {Share} ({ \varvec{r}}_i)\) (as before) and send the share \(v_i^k\) to every party \(P _k\). For every \(i\le {i^*}\), compute \((v_i^1,\ldots ,v_i^n)\leftarrow \textsf {Share} ({ \varvec{r}}_i)\) and \(({\tilde{v}}_i^1,\ldots ,{\tilde{v}}_i^n)\leftarrow \textsf {Share} (0^{|{ \varvec{r}}_i|})\); next, send the share \({\tilde{v}}_i^k\) to every corrupted \(P _k\) and the share \(v_i^k\) to every honest party \(P _k\).

Claim 6.4

\(\text{ HYB}^1_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}} {\mathop {\equiv }\limits ^\mathrm{{s}}}\text{ HYB}^{2,n}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\).

Proof

Note that \(\text{ HYB}^1_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}} \equiv \text{ HYB}^{2,0}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\). For every \(0\le {\tilde{i}}< n\), it holds that \(\text{ HYB}^{2,{\tilde{i}}}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}} {\mathop {\equiv }\limits ^\mathrm{{s}}}\text{ HYB}^{2,{\tilde{i}}+1}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\). This follows from the security of the ECSS scheme and the honest-majority assumption. In particular, the shares \(\{v_1^u,\ldots ,v_n^u\}_u\) held by the adversary (for all corrupted \(P _u\)) completely hide the random coins \({ \varvec{r}}_i\) of every honest party \(P _i\), and remain consistent even if \(P _i\) gets adaptively corrupted and \({ \varvec{r}}_i\) is revealed. This holds since all honest parties receive valid shares of \({ \varvec{r}}_i\); therefore, \({ \varvec{r}}_i\) will be correctly reconstructed even if all corrupted parties provide incorrect shares. The claim follows using a standard hybrid argument. \(\square \)
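
For intuition, the hiding property used above can be illustrated with a minimal sketch that uses Shamir's scheme as a simplified stand-in for the ECSS scheme (the ECSS scheme additionally provides error-corrected reconstruction, which is what guarantees that \({ \varvec{r}}_i\) is recovered correctly even when corrupted parties contribute incorrect shares). The sketch is illustrative only and not part of the proof; the field size and parameters are toy choices.

import random
P = 2**61 - 1   # prime field modulus; a hypothetical stand-in for the ECSS field
N, T = 7, 3     # n parties, at most t < n/2 corruptions
def share(secret, n=N, t=T, prime=P):
    # Degree-t Shamir sharing: any t shares are uniformly distributed; t+1 shares determine the secret.
    coeffs = [secret] + [random.randrange(prime) for _ in range(t)]
    return [sum(c * pow(u, e, prime) for e, c in enumerate(coeffs)) % prime for u in range(1, n + 1)]
random.seed(1)
r_i = 123456789   # an honest party's coins (toy value)
trials = 20000
# Sanity check: the marginal of P_1's share of r_i matches that of P_1's share of 0 (both uniform over the field).
mean_r = sum(share(r_i)[0] for _ in range(trials)) / trials
mean_0 = sum(share(0)[0] for _ in range(trials)) / trials
print(f"avg share of r_i: {mean_r:.3e}, avg share of 0: {mean_0:.3e}, field size / 2: {P/2:.3e}")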

The game \(\text{ HYB}^3_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\). In this game, we modify \(\text{ HYB}^{2,n}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\) as follows. The input-commitment message \({\tilde{{ \varvec{m}}}}^\mathsf {input}_i\) sent by an honest party is chosen uniformly at random. In addition, in every call to \({\mathcal {F}}_{\textsc {scp}} \), the returned value \(b_i\) for every honest party \(P _i\) is always set to 1. Finally, the random coins for honest parties that are corrupted adaptively are set as \({ \varvec{r}}^\mathsf {input}_i={\tilde{{ \varvec{m}}}}^\mathsf {input}_i\oplus { \varvec{x}}_i\).
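
The adjustment of the coins upon adaptive corruption is the standard one-time-pad equivocation technique: since the input-commitment message is now uniform, it can later be explained as a masking of any input. The following is a minimal illustrative sketch (toy parameters, not part of the protocol description).

import secrets
KAPPA = 16  # toy length in bytes (hypothetical, for illustration only)
m_tilde = secrets.token_bytes(KAPPA)   # uniformly random input-commitment message, fixed up front
x_i = secrets.token_bytes(KAPPA)       # the real input, learned only upon adaptive corruption
r_input = bytes(a ^ b for a, b in zip(m_tilde, x_i))   # explain m_tilde by adjusting the masking coins
assert bytes(a ^ b for a, b in zip(x_i, r_input)) == m_tilde   # the adjusted coins are consistent with m_tilde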

Claim 6.5

\(\text{ HYB}^{2,n}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}} \equiv \text{ HYB}^3_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\).

Proof

The claim immediately follows: honest parties always send correct messages (hence \(b_i=1\) for every honest \(P _i\)), and an honest party's input-commitment message is a one-time-pad masking of its input with uniformly random coins, hence itself uniformly distributed; moreover, the coins assigned upon adaptive corruption are consistent with this message by construction. \(\square \)

The games \(\text{ HYB}^{4,{j^*},{k^*}}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\), for \(0\le {j^*}\le \texttt {M} \) and \(0\le {k^*}\le q=\texttt {N} \cdot \texttt {L} \). In these games, we modify \(\text{ HYB}^3_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\) as follows. For \((j,k)< ({j^*},{k^*})\) (i.e., for \(j< {j^*}\), or \(j={j^*}\) and \(k<{k^*}\)), the \(k\)’th execution of protocol \(\pi _{\textsc {}} {}_j\) is replaced with the simulated transcript generated by \({\mathcal {S}} _j^k\).

More specifically, the experiment interacts with the ideal functionality \({\mathcal {F}}_{\textsc {pbb}} \). Initially, it receives the leakage messages \((\texttt {leakage},\mathsf {sid},P _i,(l_1,\ldots ,l_\texttt {M}))\) and the coins used to sample a trace \(T \) from \(D_\mathsf {pbb}\) (via the message \((\texttt {trace},\mathsf {sid},T)\)); using these random coins, sample the traces \(T _j^k\) from \(D_j\).

Each simulator \({\mathcal {S}} _j^k\) (for \((j,k)< ({j^*},{k^*})\)) is invoked and is given the leakage information \(l_j\) and the trace \(T _j^k\). The simulator \({\mathcal {S}} _j^k\) provides correlated randomness \({\tilde{r}}^\mathsf {corr}_{i,j,k}\) for every corrupted party; the random coins \({ \varvec{r}}_i\) for corrupted \(P _i\) are adjusted accordingly. Once \({\mathcal {A}} \) sends the input-commitment message \({\tilde{{ \varvec{m}}}}^\mathsf {input}_i\) for a corrupted \(P _i\), extract the input value \({ \varvec{x}}_i=(x_i^1,\ldots ,x_i^\texttt {M})\) and hand \({\mathcal {S}} _j^k\) the value \(x_i^j\) as the input value for \(P _i\). The interaction with \({\mathcal {S}} _j^k\) is done as in the simulation, i.e., validate the messages from \({\mathcal {A}} \) and unmask valid messages before forwarding them to \({\mathcal {S}} _j^k\), and mask messages from \({\mathcal {S}} _j^k\) using \(r^\mathsf {corr}_{i,j,k}\). Messages from identified corrupted parties are locally computed and sent to \({\mathcal {S}} _j^k\). When \({\mathcal {S}} _j^k\) sends \((\texttt {early-output},\mathsf {sid},\cdot )\) requests, respond with the output value \(y_j\) from \({ \varvec{y}}=(y_1,\ldots ,y_\texttt {M})\); if \({ \varvec{y}}=\bot \), first forward the message to \({\mathcal {F}}_{\textsc {pbb}} \) to obtain \({ \varvec{y}}\).

Claim 6.6

\(\text{ HYB}^3_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}} {\mathop {\equiv }\limits ^\mathrm{{s}}}\text{ HYB}^{4,\texttt {M},q}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\).

Proof

Note that \(\text{ HYB}^3_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}} \equiv \text{ HYB}^{4,0,0}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\). For every \(0\le j< \texttt {M} \) and \(0\le k<q\), it holds that \(\text{ HYB}^{4,j,k}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}} {\mathop {\equiv }\limits ^\mathrm{{s}}}\text{ HYB}^{4,j,k+1}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\) and, similarly, \(\text{ HYB}^{4,j,q}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}} {\mathop {\equiv }\limits ^\mathrm{{s}}}\text{ HYB}^{4,j+1,1}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\); otherwise, there exists an environment that violates the security of the simulation provided by \({\mathcal {S}} _j\) (resp., \({\mathcal {S}} _{j+1}\)). The claim follows using a standard hybrid argument. \(\square \)

The proof of Lemma 6.3 follows since \(\text{ HYB}^{4,\texttt {M},q}_{\pi _{\textsc {pbb}}, {\mathcal {A}}, {\mathcal {Z}}}\) (for \(q=\texttt {N} \cdot \texttt {L} \)) exactly describes the simulation done by \({\mathcal {S}} \). \(\square \)

6.2 An Impossibility of FBB Round-Preserving Parallel Composition

In this section, we prove that for a natural class of protocols, which follow and/or extend in various ways the techniques of Ben-Or and El-Yaniv [5] (Footnote 8), there exist functions such that no protocol in this class can compute their parallel composition in a round-preserving manner while accessing the functions in a black-box way, even when tolerating just a single adversarial party. Although this is not a general impossibility result, it indicates that the batching approach of [5] is limited to semi-honest security (Sect. 5) and/or to functionally white-box transformations.

We observe that this impossibility serves as an additional justification for the optimality of our protocol-black-box parallel composition (Sect. 6.1). Indeed, on the one hand, it formally confirms the generic observation that the naïve parallel composition of a set of PT functionalities does not preserve their round complexity. On the other hand, and most importantly, it proves that all existing techniques for composing PT functionalities in parallel in the natural (FBB) manner fail to preserve the round complexity. Hence, the only known round-preserving compositions for such functionalities are the protocol-black-box compiler presented in Sect. 6.1 and less efficient non-black-box techniques. The breadth of the class of protocols excluded by our impossibility result supports our conjecture that there exists no round-preserving FBB protocol for the parallel composition of PT functionalities. Proving this conjecture is, in our opinion, a very interesting direction for future research.

We first argue informally why the approach of [5] cannot be directly extended to privacy-sensitive functions. The idea in [5] for allowing each of the n parties to broadcast its value is to have each of the n parties participate in \(m=O(\log n)\) parallel invocations (hereafter called batches, to avoid confusion with the goal of parallel broadcast of different messages) of broadcast as sender with the same input. Each of those batches is executed in parallel for a fixed (constant) number of rounds (for the same broadcast message); this increases the probability that sufficiently many parties receive output from each batch. At the end of each batch execution, the parties check whether they jointly hold the output, and if not, they repeat the computation of the batches. It might seem that this idea can be applied to arbitrary tasks, but this is not the case. The reason this idea fails when the functionality has any privacy requirements is that the adversary can input different values to the different calls of the functionality within a batch, and thereby learn more information about the honest parties' inputs than a single evaluation would reveal.

Batched-parallel composition The above issue with privacy appears whenever a function is invoked twice in the same round on the same inputs from honest parties. Indeed, in this case the adversary can use different inputs to each invocation and learn information as sketched above. The same attack can be extended to composition protocols which invoke the function in two different rounds \(\rho \) and \(\rho '\); as long as the adversary knows these rounds, he can still launch the above attack on privacy. Generalizing the claim even further, for specific classes of functions, it suffices that there are two (possibly different) functions which are evaluated on the same inputs in rounds \(\rho \) and \(\rho '\). This excludes protocols that might attempt to avoid using some functionality \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {}} {}_j)\) by invoking some other \({\mathcal {W}}_{\mathrm {flex}} ^{D_{j'}}({\mathcal {F}}_{\textsc {}} {}_{j'})\).

To capture the above generalization, we define the class of batched-parallel composition protocols: A protocol \(\pi _{\textsc {}} \) implementing the PT parallel composition \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {}} {}_1\parallel \cdots \parallel {\mathcal {F}}_{\textsc {}} {}_\texttt {M})\) in the \(({\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {}} {}_1),\ldots ,{\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {}} {}_\texttt {M}))\)-hybrid model (for some distributions \(D, D_1,\ldots ,D_\texttt {M} \)) is a batched-parallel composition protocol if it has the following structure. It proceeds in rounds, where in each round the protocol might initiate (possibly multiple) calls to any number of the hybrid functionalities \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {}} {}_{j})\) and/or continue calls that were initiated in previous rounds. Furthermore, there exist two publicly known protocol rounds \(\rho \) and \(\rho '\), and indices \(j,j',\ell \in [\texttt {M} ]\), such that for the input vector \({ \varvec{x}}=({ \varvec{x}}_1,\ldots ,{ \varvec{x}}_n)\) that \(\pi \) gives to \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {}} {}_1\parallel \cdots \parallel {\mathcal {F}}_{\textsc {}} {}_\texttt {M})\) (where \({ \varvec{x}}_i=(x_i^1,\ldots ,x_i^\texttt {M})\)) the following properties are satisfied:

  1. 1.

    In round \(\rho \) the functionality \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {}} {}_{j})\) is called on input \({ \varvec{x}}^\ell =(x_1^\ell ,\ldots ,x_n^\ell )\) and at least two of its rounds are executed (Footnote 9).

  2. 2.

    In round \(\rho '\) the functionality \({\mathcal {W}}_{\mathrm {flex}} ^{D_{j'}}({\mathcal {F}}_{\textsc {}} {}_{j'})\) is also called on input \({ \varvec{x}}^\ell \) and at least two of its rounds are executed.

Note that the protocol in [5] (as well as our semi-honest protocol from Sect. 5) is an example of a batched-parallel composition protocol, with \(\rho =\rho '=1\) and with \(j=j'=\ell \) being the index of any one of the hybrid functionalities. Indeed, in the first round of these protocols, each functionality is invoked at least twice on the same inputs. More generally, protocols that follow this structure, e.g., ones that do not call all functionalities in every phase or that have variable batch sizes, can also be described as batched-parallel composition protocols.

We next show that there are classes of functions \({\mathcal {C}} _1,\ldots ,{\mathcal {C}} _\texttt {M} \) such that for any protocol \(\pi _{\textsc {}} \) that securely computes the parallel composition \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) while given hybrid access to the PT functionalities \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\), the following properties hold simultaneously:

  1. 1.

    \(\pi _{\textsc {}} \) has to call each of the hybrids \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\) (for at least 2 rounds each) (Footnote 10).

  2. 2.

    The naïve solution of \(\pi _{\textsc {}} \) calling each of the \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\)'s in parallel until they terminate is not round-preserving (for an appropriate choice of the \(D_i\)'s).

  3. 3.

    \(\pi _{\textsc {}} \) cannot be a batched-parallel composition protocol.

The above shows that the classes \({\mathcal {C}} _1,\ldots ,{\mathcal {C}} _\texttt {M} \) not only exclude the existence of a batched-parallel composition protocol, but also exclude all other known solutions. This implies that for these classes of functions, every known approach, as well as its generalizations, fails to compute the parallel composition of the corresponding functionalities in an FBB and round-preserving manner.

Theorem 6.7

Let \(\texttt {M} =O(\kappa )\). There exist n-party function-classes \({\mathcal {C}} _1,\ldots ,{\mathcal {C}} _\texttt {M} \) and distributions \(D_1,\ldots , D_\texttt {M} \), such that the following properties hold in the presence of a malicious adversary corrupting any one of the parties:

  1. 1.

    The protocol that calls each \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\) in parallel (once) until it terminates is not round-preserving (i.e., its expected round complexity is asymptotically higher than that of the distributions \(D_i\)).

  2. 2.

    Any \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}),\ldots ,{\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}}))\)-hybrid protocol for computing \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \) \(\cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) has to make a meaningful call (i.e., a call that executes at least two rounds) to each PT hybrid \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\).

  3. 3.

    There exists no functionally black-box batched-parallel composition protocol for computing \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}),\ldots ,{\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}}))\)-hybrid model, where D has (asymptotically) the same expectation as \(D_1,\ldots ,D_\texttt {M} \).

Proof

Let \(D_1=\cdots =D_\texttt {M} \) be the geometric distribution with parameter 1/2. (This means that the expected round complexity of each \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\) is constant.) Then, Property 1 follows immediately from the observation in [5] (see also [16]), which implies that the expected round complexity of the naïve protocol that executes each functionality in parallel until its completion is \(\Theta (\log \texttt {M})\), i.e., super-constant.
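
This growth can also be observed empirically. The following Monte Carlo sketch (illustrative only, not part of the proof; the trial count is an arbitrary choice) estimates the expected number of rounds until \(\texttt {M} \) independent instances, each terminating in a geometrically distributed round with parameter 1/2, have all terminated; the averages grow with \(\log _2\texttt {M} \).

import math
import random
random.seed(0)
def rounds_until_done(p=0.5):
    # Termination round of a single PT functionality: geometric with parameter p.
    r = 1
    while random.random() > p:
        r += 1
    return r
trials = 5000
for M in (1, 4, 16, 64, 256):
    # The naive parallel composition waits for the slowest of the M instances.
    avg = sum(max(rounds_until_done() for _ in range(M)) for _ in range(trials)) / trials
    print(f"M = {M:3d}: average rounds {avg:5.2f}, log2(M) = {math.log2(M):4.1f}")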

We next turn to Property 2. Toward proving it we first prove the following useful lemma.

Lemma 6.8

There exists a family of functions \({\mathcal {C}} =\{f_\alpha \}_{\alpha \in \{0,1\}^ \kappa }\) such that there exists no FBB protocol for computing the family of (oracle-aided) n-party SFE-functionalities \(\{{\mathcal {F}}_{\textsc {}} ^{f_\alpha }\}_{f_\alpha \in {\mathcal {C}}}\), which is secure against a semi-honest adversary corrupting any one of the n parties.

Proof

Let \(f_\alpha \) be the function that takes inputs \(x_1\in \{0,1\}^\kappa \) and \(x_2\in \{0,1\}^\kappa \) from \(P _1\) and \(P _2\), respectively (and a default empty string \({\lambda } \) from every \(P _j\) with \(j\in \{3,\ldots ,n\}\)) and outputs \(y_i\) to each \(P _i\) as follows:

  • \( y_1=\left\{ \begin{array}{ll} x_2 &{}, \text { if } x_1 \oplus x_2=\alpha \\ 0^\kappa &{}, \text { otherwise.} \end{array}\right. \)

  • \( y_2=\left\{ \begin{array}{ll} x_1 &{}, \text { if } x_1 \oplus x_2=\alpha \\ 0^\kappa &{}, \text { otherwise.} \end{array}\right. \)

  • For every \(j\in \{3,\ldots ,n\}\), \( y_j=\left\{ \begin{array}{ll} x_1 \oplus x_2 &{}, \text { if } x_1 \oplus x_2=\alpha \\ 0^\kappa &{}, \text { otherwise.} \end{array}\right. \)
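
For concreteness, the case distinction above can be summarized in code. The following Python sketch is purely illustrative (strings are modeled as integers, the integer 0 stands for \(0^\kappa \), and the parameter values are toy choices for the example):

KAPPA = 4  # toy bit-length; strings are modeled as KAPPA-bit integers
def f_alpha(alpha, x1, x2, n):
    # Inputs: x1 from P_1, x2 from P_2; parties P_3,...,P_n contribute the empty input.
    if x1 ^ x2 == alpha:
        return [x2, x1] + [x1 ^ x2] * (n - 2)   # on a "hit", P_1 and P_2 learn each other's input, the rest learn alpha
    return [0] * n                              # otherwise, every party receives 0^kappa
print(f_alpha(alpha=0b1010, x1=0b0110, x2=0b1100, n=4))  # hit:  [12, 6, 10, 10]
print(f_alpha(alpha=0b1111, x1=0b0110, x2=0b1100, n=4))  # miss: [0, 0, 0, 0]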

The argument that there exists no FBB protocol for the above function family is inspired by [43, Theorem 1]. Concretely, assume toward contradiction that such a protocol \(\pi _{\textsc {}} \) exists (Footnote 11) and consider the following experiment. Pick \(x_1,x_2,\alpha \) uniformly and independently at random and run \(\pi ^{f_\alpha }\) with inputs \(x_1\) and \(x_2\) for \(P _1\) and \(P _2\), respectively, and input \({\lambda } \) for all other parties. Then, we argue that the following events have negligible probability of occurring (where the probability is taken over the choice of \(x_1,x_2, \alpha \) and the random coins \({ \varvec{r}}=(r_1,\ldots , r_n)\) of the parties):

  1. (A)

    Any of the parties (i.e., any of the \(\pi _i\)’s) queries its oracle \(f_\alpha \) with \((p,q,{\lambda },\ldots ,{\lambda })\) such that \(p \oplus q=\alpha \).

  2. (B)

    Any of the parties queries its oracle \(f_\alpha \) with \((p,q,{\lambda },\ldots ,{\lambda })\) such that \(p\oplus q=x_1 \oplus x_2\).

The fact that (A) occurs with negligible probability is due to the fact that \(\alpha \) is uniformly random.
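
To make this quantitative (a sketch, under the standard assumption that the parties make at most \(Q=Q(\kappa )=\mathrm {poly}(\kappa )\) oracle queries in total): as long as no query has hit \(\alpha \), all oracle answers equal \(0^\kappa \) and are therefore independent of \(\alpha \), so \(\alpha \) remains uniformly distributed given the transcript; a union bound over the queries then gives

$$\begin{aligned} {\mathrm {Pr}}[\text {(A)}] \le Q(\kappa )\cdot 2^{-\kappa }=\textsf {negl}(\kappa ). \end{aligned}$$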

The fact that (B) occurs with negligible probability follows from the protocol's privacy (and is, in fact, independent of the distribution of \(\alpha \)). Indeed, suppose that the probability that \(P _i\) makes a query \((p,q,\lambda ,\ldots ,\lambda )\) such that \(p\oplus q=x_1 \oplus x_2\) is noticeable (i.e., not negligible). Consider an adversary corrupting \(P _i\) and outputting the list of values \(p\oplus q\), for each query \((p,q,{\lambda },\ldots ,{\lambda })\) that \(P _i\) makes to its oracle. By assumption, this list will include \(x_1\oplus x_2\) with noticeable probability. However, in the ideal-world execution, the simulator, even if it knows \(\alpha \), will be unable to produce a list of values containing \(x_1\oplus x_2\) with noticeable probability, since \(x_1\) and \(x_2\) are chosen uniformly at random, and a simulator corrupting a single party cannot learn both \(x_1\) and \(x_2\). This implies that the above adversary cannot be simulated, which contradicts the protocol's privacy.

Let now R denote the set of all \(({ \varvec{r}},x_1,x_2,\alpha )\) such that in the above experiment, neither event (A) nor (B) occurs. From the above argument we know that \({\mathrm {Pr}}[({ \varvec{r}},x_1,x_2,\alpha )\in R]>1-\nu (\kappa )\), for a negligible function \(\nu \).

Next, as in [43, Theorem 1], we consider the coupled experiment in which we use the same \({ \varvec{r}},x_1,x_2\) as above, but run the protocol \(\pi ^{f_{\alpha ^*}}\) where \(\alpha ^*=x_1\oplus x_2\). As in [43, Theorem 1], we can prove that this experiment proceeds identically to the original one (recall that the two experiments differ only in the oracle being used); in particular, all oracle queries will be answered by \(0^\kappa \) to all parties. The reason for this is that an oracle query \((p,q,\lambda ,\ldots ,\lambda )\) is answered by a value other than \(0^\kappa \) in the first experiment only if \(p\oplus q=\alpha \), and in the second experiment only if \(p\oplus q=x_1 \oplus x_2\), neither of which occurs for combinations \(({ \varvec{r}},x_1,x_2,\alpha )\in R\). This in particular implies that the output vector \({ \varvec{y}}=(y_1,\ldots ,y_n)\) that the parties \(P _1, \ldots , P _n\) receive from the protocol will be identical in both experiments for \(({ \varvec{r}},x_1,x_2,\alpha )\in R\).

It follows that, since \({\mathrm {Pr}}[({ \varvec{r}},x_1,x_2,\alpha )\in R]>1-\nu (\kappa )\), the distributions of \({ \varvec{y}}\) in the two experiments are at most negligibly far apart. However, since the protocol is required to output the correct value with overwhelming probability, in the first experiment (where \(x_1,x_2,\alpha \) are uniformly distributed) \({\mathrm {Pr}}[{ \varvec{y}}=(0^\kappa )^n]=1-\textsf {negl}(\kappa )\), whereas in the second experiment (where \(\alpha ^*=x_1\oplus x_2\)) \({\mathrm {Pr}}[{ \varvec{y}}\ne (0^\kappa )^n]=1-\textsf {negl}(\kappa )\). This yields a noticeable advantage in distinguishing the outputs of the two experiments, leading to a contradiction. \(\square \)

Lemma 6.9

For the above function class \({\mathcal {C}} \), let \({\mathcal {C}} _i={\mathcal {C}} \) for each \(i\in [\texttt {M} ]\). Then, any FBB protocol for computing \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) with hybrid access to all \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\)’s needs to invoke at least two rounds of every hybrid \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\).

Proof

Assume toward contradiction that there exists a protocol \(\pi _{\textsc {}} \) computing \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) such that for some i, protocol \(\pi _{\textsc {}} \) might execute at most one round of \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\). We will prove that no such protocol can exist for a uniformly random choice of \(\mathbf {\alpha }=(\alpha _1,\ldots ,\alpha _\texttt {M})\) (Footnote 12). This is sufficient since, by an averaging argument, it implies that there exists no such protocol that is secure for all choices of \(\mathbf {\alpha }\), as required by the definition in [57]. (Indeed, if there exists a protocol that is secure for all choices of \(\mathbf {\alpha }\) (independently of \(\mathbf {\alpha }\)), then there exists one that is secure for a randomly chosen \(\mathbf {\alpha }\).)

First, we observe that any call to a functionality \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _j})\) that executes less than two rounds can be simulated by the protocol that does nothing during this execution (without of course increasing the round complexity). The reason is that, by definition of flexibly wrapped CSFs (Sect. 2.2), any such call gives no output (the first round is an input-only round). Thus, we can assume without loss of generality that \(\pi _{\textsc {}} \) makes no call to \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\).

Next, observe that any protocol that computes \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) can be adjusted to compute \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\) for any \(i\in [\texttt {M} ]\). Indeed, one simply needs to invoke the protocol for \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) and take its \(i\)'th output as the output of \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\). Hence, \(\pi _{\textsc {}} \) can be trivially turned into a protocol for computing \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\) (without ever accessing \({\mathcal {W}}_{\mathrm {flex}} ^{D_i}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _i})\)). Note that since we assume that \(\pi _{\textsc {}} \) is FBB, the parties are given access to all other functionalities \({\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _1}}), \ldots , {\mathcal {W}}_{\mathrm {flex}} ^{D_{i-1}}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _{i-1}}}),{\mathcal {W}}_{\mathrm {flex}} ^{D_{i+1}}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _{i+1}}}),\ldots , {\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _\texttt {M}}})\), and can make oracle queries to all underlying functions \(\{f_{\alpha _1},\ldots ,f_{\alpha _\texttt {M}}\}\setminus \{f_{\alpha _i}\}\).

Finally, we observe that the argument of Lemma 6.8 can easily be extended to the above scenario, in which the aim is to compute \(\{{\mathcal {F}}_{\textsc {}} ^{f_{\alpha _i}}\}_{f_{\alpha _i}\in {{\mathcal {C}}}}\) in an FBB manner, but where the parties have oracle access to the function \(f_{\alpha _i}\) (as required by the definition of FBB protocols [57]) and, in addition, have oracle access to \(f_{\alpha _1},\ldots ,f_{\alpha _{i-1}},f_{\alpha _{i+1}},\ldots ,f_{\alpha _{\texttt {M}}}\) and ideal access to \({\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _1}}), \ldots , {\mathcal {W}}_{\mathrm {flex}} ^{D_{i-1}}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _{i-1}}}),{\mathcal {W}}_{\mathrm {flex}} ^{D_{i+1}}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _{i+1}}}),\ldots ,{\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _\texttt {M}}})\). The reason is that the behavior of the functions \(f_{\alpha _1},\ldots ,f_{\alpha _{i-1}},f_{\alpha _{i+1}},\ldots ,f_{\alpha _{\texttt {M}}}\) is independent of \(f_{\alpha _i}\), and these oracles and ideal functionalities can therefore be trivially emulated by means of an information-theoretic MPC protocol (recall that there is only one corruption) that implements a globally accessible oracle to \(f_{\alpha _1},\ldots ,f_{\alpha _{i-1}},f_{\alpha _{i+1}},\ldots ,f_{\alpha _{\texttt {M}}}\) and the ideal functionalities \({\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _1}}), \ldots , {\mathcal {W}}_{\mathrm {flex}} ^{D_{i-1}}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _{i-1}}})\), \({\mathcal {W}}_{\mathrm {flex}} ^{D_{i+1}}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _{i+1}}}),\ldots ,{\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {csf}} ^{f_{\alpha _\texttt {M}}})\). (Since \(\mathbf {\alpha }\) is chosen uniformly at random, their role in computing \(f_{\alpha _i}\) can be emulated by choosing independent \(\alpha _j\)'s for \(j\in [\texttt {M} ]\setminus \{i\}\).) Thus, if there were a protocol computing \(\{{\mathcal {F}}_{\textsc {}} ^{f_{\alpha _i}}\}_{f_{\alpha _i}\in {{\mathcal {C}}}}\) in the above hybrid world, it could be trivially converted into a protocol that does not access the hybrids or make oracle calls to the functions other than \(f_{\alpha _i}\), contradicting Lemma 6.8. Observe that the above extension is independent of the round complexity, and adding a PT structure to the hybrids does not affect the impossibility. \(\square \)

We complete the proof of the theorem by proving Property 3 for the above choice of \({\mathcal {C}} _1,\ldots ,{\mathcal {C}} _\texttt {M} \) and \(D_1,\ldots ,D_\texttt {M}.\)

Lemma 6.10

There exists no functionally black-box batched-parallel composition protocol for computing \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}})\) in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _1}),\ldots ,{\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{{\mathcal {C}} _\texttt {M}}))\)-hybrid model, tolerating a static adversary actively corrupting any one of the parties.

Proof

As before, it suffices to prove that there exists no batched-parallel composition protocol \(\pi _{\textsc {}} \) that is secure for computing \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _1}}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _\texttt {M}}})\) in the \(({\mathcal {F}}_{\textsc {smt}},{\mathcal {W}}_{\mathrm {flex}} ^{D_1}({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _1}})\), \(\ldots ,{\mathcal {W}}_{\mathrm {flex}} ^{D_\texttt {M}}({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _\texttt {M}}}))\)-hybrid model, for a uniformly random choice of \(\mathbf {\alpha }=(\alpha _1,\ldots ,\alpha _\texttt {M})\).

Assume toward contradiction that such a protocol \(\pi _{\textsc {}} \) exists, which is secure against a malicious (i.e., active) adversary corrupting any one party, and assume without loss of generality that party \(P _1\) is corrupted. Let \((\rho ,j), (\rho ',j')\), and \(\ell \) denote the values that are guaranteed to exist by the fact that \(\pi _{\textsc {}} \) is a batched-parallel composition protocol, and denote by \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _j}})\) and \({\mathcal {W}}_{\mathrm {flex}} ^{D_{j'}}({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _{j'}}})\) the corresponding functionalities indexed by j and \(j'\). We will denote by \({ \varvec{x}}_1=(x_1^1,\ldots ,x_1^\texttt {M})\in (\{0,1\}^\kappa )^\texttt {M} \) the input of \(P _1\) and by \({ \varvec{x}}_2=(x_2^1,\ldots ,x_2^\texttt {M})\in (\{0,1\}^\kappa )^\texttt {M} \) the input of \(P _2\) to \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _1}}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _\texttt {M}}})\). Consider an environment that chooses all inputs to the parties uniformly at random but hands its adversary the first \(\kappa -2\) bits of the input \(x_2^\ell \). (Recall that the batched-parallel composition requires that \(P _2\) inputs \(x_2^\ell \) to \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _j}})\) in round \(\rho \) and to \({\mathcal {W}}_{\mathrm {flex}} ^{D_{j'}}({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _{j'}}})\) in round \(\rho '\).)

Denote the output from the evaluation of \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _1}}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _\texttt {M}}})\) as \(({ \varvec{y}}_1,\ldots ,{ \varvec{y}}_n)\), where each \({ \varvec{y}}_{i}=(y_i^1,\ldots ,y_i^\texttt {M})\) is the output of \(P _i\). Because all inputs are independently and uniformly distributed, the simulator gains no information on the missing (i.e., last) two bits of \(x_2^\ell \), neither from its knowledge of \(\alpha _\ell \) nor from the inputs and outputs of \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _1}}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _\texttt {M}}})\) other than its \(\ell \)'th output. In other words, the only way the simulator might learn additional information on the last two bits of the input \(x_2^\ell \) of the honest party \(P _2\) is from the corrupted \(P _1\)'s \(\ell \)'th output \(y_1^\ell \) of the ideal functionality \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _1}}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _\texttt {M}}})\).

In the analysis below we use the following notation. Given a string \({ \varvec{x}}=(x_1,\ldots ,x_m)\in \{0,1\}^m\), denote by \({{ \varvec{x}}}[{{{\hbox {i},\ldots ,\hbox {j}}}}] \), for \(i<j\), the substring \((x_i,\ldots ,x_j)\). Consider the following cases for the input \(x_1^\ell \) that \({\mathcal {S}} \) hands to the \(\ell \)’th functionality in \({\mathcal {W}}_{\mathrm {flex}} ^D({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _1}}\parallel \cdots \parallel {\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _\texttt {M}}})\):

  1. 1.

    \({x_1^\ell }[{{{1,\ldots ,\kappa -2}}}] \ne {x_2^\ell }[{{{1,\ldots ,\kappa -2}}}] \oplus {\alpha _\ell }[{{{1,\ldots ,\kappa -2}}}] \). In this case, by correctness of \(\pi \), the \(\ell \)’th output \(y_1^\ell \) equals \(0^\kappa \) independently of the last two bits of \(x_1^\ell \). Hence, in this case the simulator is able to output the last two bits of \(x_2^\ell \) with probability 1/4 (i.e., the best he can do is guess).

  2. 2.

    \({x_1^\ell }[{{{1,\ldots ,\kappa -2}}}] = {x_2^\ell }[{{{1,\ldots ,\kappa -2}}}] \oplus {\alpha _\ell }[{{{1,\ldots ,\kappa -2}}}] \). In this case, we consider the following event

    $$\begin{aligned} E_1 :{x_1^\ell }[{{{\kappa -1,\kappa }}}] ={x_2^\ell }[{{{\kappa -1,\kappa }}}] \oplus {\alpha _\ell }[{{{\kappa -1,\kappa }}}]. \end{aligned}$$
    • If \(E_1\) occurs, then the simulator will see that the output \(y_1^\ell \ne 0^\kappa \), so can output \({x_2^\ell }[{{{\kappa -1,\kappa }}}] ={x_1^\ell }[{{{\kappa -1,\kappa }}}] \oplus {\alpha _\ell }[{{{\kappa -1,\kappa }}}] \) as his guess for the last two bits of \(x_2^\ell \), which will always (with probability 1) be the correct guess (by definition of the function). Note that \({\mathrm {Pr}}[E_1]=1/4\) since the simulator knows \(\alpha _\ell \) (by the definition of FBB [57]) and has no information on \({x_2^\ell }[{{{\kappa -1,\kappa }}}] \).

    • If \(E_1\) does not occur, then \({\mathcal {S}}\) will see that \(y_1^\ell =0^\kappa \), from which it can deduce that \({x_2^\ell }[{{{\kappa -1,\kappa }}}] \ne {x_1^\ell }[{{{\kappa -1,\kappa }}}] \oplus {\alpha _\ell }[{{{\kappa -1,\kappa }}}] \), but gets no more information on \({x_2^\ell }[{{{\kappa -1,\kappa }}}] \). Hence, the probability of outputting \({x_2^\ell }[{{{\kappa -1,\kappa }}}] \) in this case is at most 1/3 (i.e., the probability of guessing among the 2-bit strings that are not equal to \({x_1^\ell }[{{{\kappa -1,\kappa }}}] \oplus {\alpha _\ell }[{{{\kappa -1,\kappa }}}] \)).

Hence, the total probability that a simulator outputs a correct guess for \({x_2^\ell }[{{{\kappa -1,\kappa }}}] \) is

$$\begin{aligned} P _{\mathcal {S}} \le 1/4+(3/4)(1/3)=1/2. \end{aligned}$$

To complete the proof, we will describe an adversary who outputs the two last bits of \(x_2^\ell \), i.e., \({x_2^\ell }[{{{\kappa -1,\kappa }}}] \), with probability noticeably higher than 1/2; this implies a noticeable distinguishing advantage between the real world and the ideal world.

The adversary chooses two distinct random two-bit strings \({ \varvec{b}},{ \varvec{b}}'\in \{0,1\}^2\) and inputs \(({x_2^\ell }[{{{1,\ldots ,\kappa -2}}}],{ \varvec{b}})\oplus \alpha _j\) to the execution of \({\mathcal {W}}_{\mathrm {flex}} ^{D_j}({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _j}})\) in round \(\rho \) and \(({x_2^\ell }[{{{1,\ldots ,\kappa -2}}}],{ \varvec{b}}')\oplus \alpha _{j'}\) to the execution of \({\mathcal {W}}_{\mathrm {flex}} ^{D_{j'}}({\mathcal {F}}_{\textsc {sfe}} ^{f_{\alpha _{j'}}})\) in round \(\rho '\). Once the adversary receives \(P _1\)'s outputs from the above two functionalities (denote them by \({\hat{y}}_{1,j}\) and \({\hat{y}}_{1,j'}\); Footnote 13), he does the following: If \({\hat{y}}_{1,j}\ne 0^\kappa \) or \({\hat{y}}_{1,j'}\ne 0^\kappa \) (an event that happens with probability 1/2, since the last two bits of \(x_2^\ell \) are uniformly distributed and a nonzero output occurs in the first call exactly when they equal \({ \varvec{b}}\) and in the second call exactly when they equal \({ \varvec{b}}'\)), then the adversary outputs \({ \varvec{b}}\) (resp., \({ \varvec{b}}'\)) as his guess of the last two bits of \(x_2^\ell \) (equivalently, he can read them off the nonzero output, which equals \(x_2^\ell \)). By correctness of the protocol, the guess will be correct in this case except with negligible probability. Otherwise, the adversary outputs a random string from \(T=\{00,01,10,11\}\setminus \{{ \varvec{b}},{ \varvec{b}}'\}\); the probability of a correct guess in this case is 1/2, since the correct value has to be one of the two strings in T. Hence, the overall probability that this adversary outputs the right guess for the last two bits of \(x_2^\ell \) is \(3/4-\nu \), where \(\nu \) is a negligible function implied by the error probabilities in the above protocol. The output of the adversary is therefore distinguishable from the output of the best simulator, which contradicts the assumed security of \(\pi _{\textsc {}} \). \(\square \)
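
The two bounds above (3/4 for the adversary and 1/2 for the simulator) can also be checked empirically. The following Monte Carlo sketch is purely illustrative and not part of the proof; strings are modeled as integers, the integer 0 stands for \(0^\kappa \), and the bit-length and variable names are hypothetical choices made only for the simulation.

import random
random.seed(0)
KAPPA = 10     # toy bit-length (hypothetical); only the last two bits of x_2^l matter here
TRIALS = 40000
def f(alpha, x1, x2):
    # P_1's output of f_alpha; the integer 0 stands for the all-zero string 0^kappa.
    return x2 if x1 ^ x2 == alpha else 0
adv_ok = sim_ok = 0
for _ in range(TRIALS):
    x2 = random.randrange(1, 1 << KAPPA)       # honest P_2's input x_2^l (nonzero, to keep the 0^kappa encoding unambiguous)
    prefix, suffix = x2 >> 2, x2 & 0b11        # the attacker/simulator knows prefix and tries to guess suffix
    alpha_j = random.randrange(1 << KAPPA)     # function index of the hybrid call in round rho
    alpha_jp = random.randrange(1 << KAPPA)    # function index of the hybrid call in round rho'
    alpha_l = random.randrange(1 << KAPPA)     # function index of the l-th composed function
    # Real-world adversary: two distinct suffix guesses b, b', one per hybrid call.
    b, bp = random.sample(range(4), 2)
    y = f(alpha_j, ((prefix << 2) | b) ^ alpha_j, x2)
    yp = f(alpha_jp, ((prefix << 2) | bp) ^ alpha_jp, x2)
    if y:
        guess = b
    elif yp:
        guess = bp
    else:
        guess = random.choice([s for s in range(4) if s not in (b, bp)])
    adv_ok += (guess == suffix)
    # Ideal-world simulator: a single evaluation of f_{alpha_l} with one suffix guess c.
    c = random.randrange(4)
    if f(alpha_l, ((prefix << 2) | c) ^ alpha_l, x2):
        sim_guess = c
    else:
        sim_guess = random.choice([s for s in range(4) if s != c])
    sim_ok += (sim_guess == suffix)
print(f"adversary success rate: {adv_ok / TRIALS:.3f} (analysis: 3/4)")
print(f"simulator success rate: {sim_ok / TRIALS:.3f} (analysis: 1/2)")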

This completes the proof of Theorem 6.7. \(\square \)