1 Introduction

The round complexity of secure multiparty computation (MPC) has been the subject of intensive research. In this work, we continue this study, focusing on the case of computationally secure MPC protocols without an honest majority. We start with some relevant background.

The Semi-honest Model. Consider first the simpler setting of semi-honest adversaries, who may passively corrupt an arbitrary subset of the parties. In the two-party case, Yao’s protocol [Yao86] is a two-round protocol that can rely on any two-round oblivious transfer (OT) protocol. The latter primitive is not only simple and minimal (as a special case of the general result), but also one that pragmatically serves as a useful basis for protocol design. Indeed, two-round OT can be implemented at a low amortized cost with interactive preprocessing [Bea95, IKNP03] and even from scratch [BCG+19]. The question of generalizing Yao’s two-round protocol to the multiparty setting remained open for many years. This question was settled by Garg and Srinivasan [GS18] and Benhamouda and Lin [BL18], who showed that two-round OT indeed suffices also for two-round MPC with an arbitrary number of parties.

Black-Box vs. Non-black-box Constructions. A major distinction between Yao’s two-party protocol and the recent MPC protocols from [GS18, BL18] is the way in which the OT primitive is used. While the former makes a black-boxFootnote 1 use of OT, in the sense that the construction uses the next-message function of the OT protocol as an oracle, the latter MPC protocols cannot use the OT protocol as an oracle and need to depend on its implementation. This qualitative difference results in a big efficiency gap between the two types of protocols, raising a question about the possibility of a black-box alternative for the multiparty case. Unfortunately, Applebaum et al. [ABG+20] obtained a negative answer: for any \(n\ge 3\), general two-round n-party MPC protocols cannot make a black-box use of two-round OT. More recently, Patra and Srinivasan [PS21] closed the remaining gap, presenting a black-box construction of three-round MPC protocols from two-round OT (improving over a previous four-round protocol from [ACJ17]). This gives us a full understanding of the round complexity of black-box semi-honest MPC based on two-round OT.

From Semi-honest to Malicious. The case of security against malicious adversaries is far less understood. Targeting the goal of matching the round complexity of semi-honest protocols, one needs to rely on a setup assumption. (See Sect. 1.2 for discussion of results in the plain model.) A minimal form of setup, originating from non-interactive zero knowledge (NIZK) proofs [BFM88], assumes the availability of a common random string or, more generally, a (structured) common reference string. Our results will apply to both kinds of setup, to which we collectively refer as a CRS setup. Given a CRS setup, NIZK can serve as a general round-preserving tool for enforcing an honest behavior by malicious parties. However, general NIZK-based protocols are inherently non-black-box. This raises the following natural question about the round complexity of black-box MPC in the CRS model:

Can we make a black-box use of two-round OT to obtain two-round (resp., three-round) two-party (resp., multiparty) MPC protocols with security against malicious adversaries?

To make this question more precise, we need to specify which kind of two-round OT we consider. Ideally, one would have liked to use semi-honest two-round OT as a basis for round-optimal malicious-secure MPC. However, even if we restrict our attention to realizing the OT functionality, it is not known how to construct two-round malicious-secure OT in the CRS model from two-round semi-honest OT, let alone in a black-box way.Footnote 2 Indeed, while semi-honest two-round OT protocols are quite easy to construct from essentially every concrete assumption known to imply public-key encryption, obtaining similar protocols with malicious security required ingenious new ideas [PVW08, DGH+20, BF22]. Given this state of affairs, we use malicious-secure two-round OT in the CRS model as our basic building block.

Known Results. In the case of two-party “sender-receiver” functionalities, which take inputs from both parties but only deliver the output to the designated receiver, the above question was answered in the affirmative in [IKO+11]. Such two-party protocols, known as non-interactive secure computation (NISC) protocols, provide a black-box extension of Yao’s protocol that offers security against malicious parties while still requiring only two rounds. However, for general two-party functionalities that deliver outputs to both parties, no analogous result is known. Note that such a “two-sided NISC” protocol cannot be obtained by simply running two instances of standard (one-sided) NISC in parallel, since there is nothing preventing a malicious party from using different inputs in the two executions. The question is similarly open for three-round MPC with \(n\ge 3\) parties. Partial progress was made in [PS21], where black-box protocols were constructed from a stronger variant of two-round OT that required some form of adaptive security. However, this extra requirement not only makes the OT primitive qualitatively stronger, but also excludes some existing protocols from the literature (such as CDH- and LPN-based protocols from [DGH+20]). A different kind of progress was recently made in [IKSS21, IKSS22], where positive answers were given using two distinct kinds of idealized setups: either the random oracle model or a random OT correlations setup. To conclude, without strong idealized setups and without strengthening the OT primitive, the above question remained open in both the two-party and the multiparty case.

1.1 Our Contribution

We settle the above question by presenting the first round-optimal black-box constructions of MPC protocols from two-round (malicious-secure) OT in the CRS model. In the two-party case, we obtain the first extension of the black-box NISC protocol from [IKO+11] to general functionalities that deliver outputs to both parties.

Informal Theorem 1

There is a (two-round, malicious-secure) two-sided NISC protocol in the CRS model that makes a black-box use of two-round malicious-secure OT in the CRS model.

See Theorem 1 for a formal version. From a concrete efficiency perspective, this compiler may be better than the recent random-oracle-based compiler from [IKSS22] in that it replaces a computational security parameter by a statistical one.Footnote 3 We also obtain an arithmetic variant of this result that makes a black-box use of the underlying field as well as of a two-round malicious-secure OLE protocol over the same field [CDI+19, BDM22]. (See Theorem 2 for a formal statement.) This variant leverages another advantage of black-box constructions, namely respecting the arithmetic nature of an underlying semi-honest protocol.

Informal Theorem 2

There is a three-round malicious-secure MPC protocol in the CRS model that makes a black-box use of two-round malicious-secure OT in the CRS model.

See Theorem 3 for a formal statement. The optimality of three rounds follows from the proof of the black-box separation in [ABG+20]. While the main theorem statement of [ABG+20] only refers to a separation between two-round MPC and two-round semi-honest OT, the oracle used for the separation actually implies malicious-secure two-round OT.

Open Questions. Our results leave several avenues for future work.

  • Is two-round semi-honest OT sufficient? As discussed above, the non-triviality of realizing malicious-secure two-round OT from concrete assumptions suggests that even a non-black-box round-preserving compiler from semi-honest to malicious OT would be difficult to obtain. Moreover, the existence of black-box constructions in the random oracle model [MR19, IKSS22] makes a potential black-box separation more challenging.

  • Does three-round OT suffice in the multiparty case? We conjecture that the black-box separation from [ABG+20] can be extended to rule out this possibility, and leave formalizing this to future work.

  • Can similar results be obtained in the OT-hybrid model, namely using calls to an ideal OT oracle rather than an OT protocol? For standard (one-sided) NISC, this is achieved by the construction from [IKO+11]. In the multiparty case, this question is open even in the semi-honest model, where evidence for the difficulty of settling it in the negative is given in [ABG+20]. Our protocols, similarly to the ones from [PS21], inherently make use of the messages generated by the OT protocol, and thus cannot replace the protocol by an oracle.

1.2 Related Work

A long line of work [KOS03, KO04, Wee10, Goy11, ORS15, GMPP16, ACJ17, BHP17, BGJ+18, HHPV18, CCG+20] studied the question of minimizing the round complexity of MPC with security against any number of malicious parties in the plain model. In this setting, one cannot hope to match the round complexity of our protocols in the CRS model regardless of the underlying assumptions. Indeed, protocols with black-box simulators must use at least four rounds [KO04, GMPP16]. Even if one allows a non-black-box (polynomial-time) simulation, two-round two-party protocols are unlikely to exist even for the special case of zero knowledge functionalities [BLV03]. We note that there is an interesting line of work [GGJS12, KMO14, GKP17, BGJ+17, BGI+17, ABG+21, FJK21, AMR21] that aims to get around this lower bound by considering a weaker notion of security, namely, security with super-polynomial time simulation (SPS) security.

The quest for minimizing the round complexity of MPC in the plain model, with a standard notion of simulation-based security, culminated in the work of Choudhuri et al. [CCG+20], who obtained a four-round protocol making a non-black-box use of four-round malicious-secure OT. The round complexity of this protocol is optimal for protocols with black-box simulation. If one additionally requires the construction to be black-box, as in this work, a five-round protocol in the plain model making use of strong flavors of two-round semi-honest OT was given in [IKSS21]. See [CCG+20, IKSS21] and references therein for a more comprehensive survey of this line of work.

2 Technical Overview

In this section, we describe the key technical ideas behind our construction of black-box two-sided NISC (in Sect. 2.1) and our black-box three-round secure multiparty computation protocol (in Sect. 2.2).

2.1 Black-Box Two-Sided NISC

Challenges. The main challenge in constructing a two-sided NISC protocol is to design a “black-box” mechanism wherein the adversarial party is somehow forced to use the same input when it plays the role of the sender and the receiver respectively. This is the reason why a natural attempt of constructing a two-sided NISC by running two instances of one-sided NISC in opposite directions fails. The prior works of Ishai et al. [IKSS21, IKSS22] constructed such a “black-box” mechanism by making use of the IPS compiler [IPS08]. However, both these works resorted to idealized setups such as OT correlations, or made use of random oracles, in order to implement the watchlist mechanism, one of the key building blocks in this compiler. As we explain later, we encounter significant barriers when trying to remove these idealized setups from the above works. To circumvent these barriers, we develop new techniques to implement this mechanism and prove the security of the protocols based only on a two-round malicious-secure OT. In the rest of this subsection, we elaborate on this in more detail. As our work also builds on the IPS compiler, we recall the main ideas behind this compiler below.

IPS Compiler. At a high level, the IPS compiler constructs a malicious-secure MPC protocol in the dishonest-majority setting via a combination of: (1) a malicious-secure honest-majority client-server MPC protocol (called the outer protocol)Footnote 4, and (2) a semi-honest secure protocol in the dishonest-majority setting (called the inner protocol). In more detail, each party in the compiled protocol plays the role of a client in the outer protocol. The parties then invoke the inner protocol to emulate the computation done by the servers. Since the outer protocol can be information-theoretic, the computations done by the servers avoid any cryptographic operations. This feature enables the compiled protocol to be black-box. However, as such, this compilation results in an insecure protocol. This is because an adversary can cheat in all the executions of the inner protocol and break their security (as they are only secure against semi-honest adversaries). Since the outer protocol is only guaranteed to be secure as long as at most a constant (\(< 1/2\)) fraction of the servers is corrupted, by corrupting all the servers the adversary effectively breaks the security of the outer protocol and could extract non-trivial information about the honest parties' inputs. To prevent such an attack, the IPS compiler uses a novel cut-and-choose mechanism referred to as the watchlist protocol. In this protocol, each party chooses a random subset of the servers as its private “watchlist”. The watchlist protocol provides the party with the input and the randomness used by every other party in those server executions that are on its watchlist; the input and randomness of the other server executions remain hidden. Every party then checks whether the server executions on its watchlist are emulated correctly and aborts if it detects any inconsistency. This guarantees that if the adversary cheats in many server executions, then all the honest parties will detect this and abort, preventing the adversary from learning any useful information about the inputs of the honest parties. On the other hand, if the adversary cheats in only a small number of server executions, then we can rely on the security of the outer MPC protocol to show that the adversary learns only the output of the functionality.
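As a concrete, purely illustrative example, the following Python sketch captures the check that each party performs on its watched executions; the function names are placeholders of our choosing and are not part of the formal description of the IPS compiler.

```python
def watchlist_check(watched, opened, transcripts, recompute):
    """Illustrative cut-and-choose check for the watchlist mechanism.

    watched:     indices of the server executions on this party's private watchlist
    opened:      opened[j] = (input, randomness) revealed by the watchlist protocol
                 for each watched execution j
    transcripts: transcripts[j] = messages the other party actually sent in execution j
    recompute:   recompute(inp, rand) = messages an honest party would send with (inp, rand)
    """
    for j in watched:
        inp, rand = opened[j]
        if recompute(inp, rand) != transcripts[j]:
            return False  # inconsistent emulation detected in a watched execution: abort
    return True  # with high probability, cheating (if any) is confined to few executions
```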

Prior Works. For two-sided NISC, recent black-box protocols from [IKSS21, IKSS22] used the watchlist mechanism to catch a cheating adversary that uses different inputs while playing the role of the sender and the receiver respectively. Specifically, if the adversary uses inconsistent inputs in many server executions, then the honest party detects this via the watchlist mechanism and aborts the execution. On the other hand, if the adversary uses inconsistent inputs in a small number of executions, then the servers that are emulated by these executions can be considered corrupted in the outer protocol. Since the outer protocol is secure as long as at most a constant fraction of the servers is corrupted, this prevents the adversary from breaking the privacy of the honest party's inputs. This was the main intuition behind both works. However, these works differed in their choice of the outer protocol, the inner protocol, and the implementation of the watchlist mechanism. We tabulate these choices in Table 1. A common limitation of these two works is their reliance on idealized setups, such as OT correlations [IKSS21] or a random oracle [IKSS22], to implement the watchlist mechanism. We now explain the challenges in trying to remove the idealized setups from these works.

Table 1. Choice of outer protocol, inner protocol, and idealized model for watchlist implementation in prior works.

Need for Idealized Models. The key reason why the prior works needed to resort to idealized models is a subtle technical difficulty in implementing the IPS compiler. Specifically, the simulator in the IPS compiler needs to know the set of executions that are watched by the corrupted party before it sends its first-round message on behalf of the honest party. Note that in the real world, the honest party's input and randomness corresponding to the adversary's watched executions pass the consistency check and hence, we need to make sure that these checks pass even in the ideal world. Hence, the simulator needs to produce a consistent input and randomness that explain the inner protocol messages in all the executions that are watched by the corrupted party. This is further complicated in the rushing adversary model, where the adversary expects to see the first-round message from the honest party before it sends its own first-round message. Hence, if the set of watched executions is known to the simulator only after it sends the first-round message, then the simulator needs to produce randomness that consistently explains the “simulated” first-round inner protocol message w.r.t. some input. In other words, the simulator needs to equivocate the first-round message of the inner protocol. This requires stronger assumptions. However, if we use idealized setups, then the simulator can learn the watched executions of the corrupted party before it sends the first-round message. In particular, this is done by allowing the simulator to implement the dealer while setting up the OT correlations in [IKSS21], or to program the output of the random oracle in [IKSS22]. The above issue also precludes a natural attempt of implementing the watchlist functionality using a two-round k-out-of-m OT protocol. Indeed, the first-round message that encodes the set of watched executions is sent by the adversary only after it receives the first-round message from the honest party and hence, this approach too requires the first-round message of the inner protocol to be equivocal.

Our Solution. To overcome this difficulty, we need a watchlist protocol implementation where the simulator can bias the watched executions of the corrupted parties, whereas the corrupted parties cannot bias the watched executions of the honest parties. These two conflicting features are obtained simultaneously via a coin-tossing protocol. Specifically, the watched executions of each party are sampled randomly, where the randomness is contributed by both parties and not just by the receiving party. This ensures that the simulator can set the randomness on behalf of the honest party in such a way that the corrupted party receives a randomly sampled set of executions that was chosen prior to sending the first-round message. At the same time, since the receiving party also contributes part of the randomness, the corrupted party cannot bias the set of watched executions of the honest party. This helps in overcoming the above-mentioned technical difficulty in implementing the IPS compiler. The next question is whether we can construct such a watchlist protocol. Indeed, the work of Ishai et al. [IKO+11] provides an instantiation that makes black-box use of a two-round malicious-secure OT. However, such a watchlist protocol alone does not solve all the issues, as we elaborate below.
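The following Python fragment is a toy illustration of the coin-tossing idea only (the actual watchlist protocol is realized via the black-box construction of [IKO+11]): since the selection tape is the XOR of both contributions, neither party alone can bias it, while a simulator that extracts the corrupted party's contribution can force the tape to a value it sampled in advance.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Honest derivation: the tape selecting the watched executions is the XOR of
# both parties' contributions, so neither party alone can bias it.
def derive_tape(rho_mine: bytes, rho_other: bytes) -> bytes:
    return xor_bytes(rho_mine, rho_other)

# Simulator's viewpoint: having extracted the corrupted party's contribution,
# it can choose its own contribution so that the combined tape equals a value
# sampled before any message was sent.
target_tape = secrets.token_bytes(16)      # sampled ahead of time
rho_corrupted = secrets.token_bytes(16)    # stands in for the extracted contribution
rho_simulated = xor_bytes(target_tape, rho_corrupted)
assert derive_tape(rho_simulated, rho_corrupted) == target_tape
```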

Need for Watchlist Output at the End of the First Round. While the above watchlist protocol ensures that the simulator has the power to bias the watched executions of the corrupted parties, it leads to new incompatibility issues with the prior techniques. Specifically, the prior works crucially relied on the output of the watchlist protocol being delivered to the honest parties at the end of the first round. This is indeed possible if we rely on idealized setups. However, in the above-described approach, the honest party learns the output of the watchlist protocol only after the corrupted party sends its second-round message. Hence, it can only perform the watchlist checks after it has sent its final-round message in the protocol (since we are dealing with rushing adversaries). This leads to new problems, which we now explain in more detail.

Firstly, the work of Ishai et al. [IKSS21] considered a two-round semi-honest inner protocol where the first-round message could be equivocated (see Table 1). However, a malicious party can also equivocate its first-round message and thereby break the security of the inner protocol. This was not a problem in their setting, since the output of the watchlist protocol is made available to the honest parties at the end of the first round (this is possible in the OT correlations model). Hence, the honest party can detect if the first-round message is equivocated in many inner protocol executions and abort if this is the case. However, in our watchlist protocol, the output is delivered to the honest party only at the end of the second round. By this time, the honest party would have sent the second-round message in the inner protocol, and the adversary could potentially recover the entire input of the honest party.

In a more recent work, Ishai et al. [IKSS22] removed the need for an inner protocol with first-message equivocality by considering a “stronger” outer protocol. This outer protocol, which they term a pairwise verifiable MPC protocol, is a two-round client-server MPC protocol that additionally satisfies a special error-correction property. Specifically, for any choice of second-round messages from the corrupted servers, the error-correction property requires that the output of the honest client remains the same. Unfortunately, obtaining such a protocol against standard malicious adversaries is hard due to known barriers [GIKR02]. To overcome this, Ishai et al. considered security against weaker adversaries called pairwise verifiable adversaries. Roughly speaking, pairwise verifiable adversarial clients are restricted to send a first-round message such that the messages received by all the honest servers pass some consistency check. However, this restriction of only considering pairwise verifiable adversaries also seems incompatible with our watchlist protocol. Specifically, before we send the second-round message, we need to make sure that the first-round messages in the outer protocol pass the pairwise consistency checks, and we must proceed only if these checks pass. In the work of Ishai et al. [IKSS22], this was made possible by making use of a random oracle. But in our setting, since the output of the watchlist functionality is delivered only after the honest party sends its second-round message, we cannot perform this check before sending the final message. Thus, an adversarial party can use first-round messages in the outer protocol that do not pass the pairwise consistency check and completely break the privacy of the honest party's inputs.

Our Approach. We note that the above-mentioned incompatibility issue could be alleviated if we use a two-round malicious-secure oblivious transfer protocol that has an equivocal first-round message [GS18, PS21]. Specifically, such an OT protocol forces a corrupt receiver to send a valid first-round message, but enables the simulator to equivocate the first-round message to both bits 0 and 1. Thus, a malicious party is forced to send a valid first-round message, whereas the simulator could equivocate the first-round message as in [IKSS21]. However, we do not know of a black-box construction of this primitive from any two-round malicious-secure OT protocol.Footnote 5 Moreover, recent protocols from the literature (such as ones based on CDH and LPN [DGH+20]) do not satisfy this property. Our goal here is to overcome this issue by making only black-box use of a two-round malicious-secure OT.

Instead of relying on a pairwise verifiable MPC protocol, our solution to this problem is to rely on a standard outer protocol satisfying security with abort, for instance, the one given by Ishai, Kushilevitz, and Paskin [IKP10, Pas12]. To make this outer protocol compatible with the IPS compiler, Ishai et al. [IKSS21] observed that the inner protocol needs to additionally satisfy first-round equivocality (see Table 1). The key insight behind our solution is that first-round equivocality of the inner protocol is actually overkill, and we can instead use a far weaker security property. We now explain this in detail.

Recall that the watchlist mechanism is guaranteed to catch a malicious party that cheats in a large number of inner protocol executions. However, a malicious party can cheat in a small number of executions such that it goes undetected by the watchlist of the honest party with some non-negligible probability. In this case, we should be able to rely on the security of the outer protocol, as the number of malicious server corruptions is “small”. However, in order to invoke this security property, we need to compute the inner protocol output received by the honest party in each of the executions where the adversary has cheated. This corresponds to the second-round message sent by the corrupted servers to the honest client, and we need to provide this information to the simulator of the outer protocol. [IKSS21] argued that if the inner protocol satisfies first-round equivocality, then it is possible for the simulator to compute this output. In particular, the simulator can equivocate the first-round message as per the honest party's input and then use the corresponding randomness to compute the output of this inner protocol execution. In this work, we observe that this property can be weakened, specifically, to what we call output equivocality. This property requires that if the adversary cheated in generating the second-round message, then the simulator (which is additionally provided the input of the honest party) must produce an output that is computationally indistinguishable from the honest party's output in the real execution. Specifically, instead of requiring the entire first-round message to be equivocable, we only need the output computation to be equivocable. This property is implied by first-message equivocality and could potentially be realized under weaker assumptions. Further, since the output of our watchlist protocol is delivered only after we send the second-round message, we need our inner protocol to also be secure against malicious receivers. Hence, it is sufficient to construct an inner protocol that is secure against malicious receivers and also satisfies output equivocality.

Somewhat surprisingly, both of these properties can be obtained simultaneously if we simply replace the two-round semi-honest OT in Yao's protocol with a two-round malicious-secure OT. Specifically, the security against malicious receivers follows from the folklore observation about Yao's protocol when instantiated with a two-round malicious-secure OT protocol. The output equivocality property is argued using the security of the oblivious transfer against malicious senders. In particular, the simulator can use the extractor for the OT protocol to extract both labels for each input wire of the garbled circuit that was generated. Now, given the honest party's input, the output-equivocal simulator can simply evaluate the received garbled circuit on the labels corresponding to the honest receiver's input and output the result of the evaluation. From the sender security of the OT protocol, we infer that the output of this evaluation is computationally indistinguishable from the honest evaluation. This allows us to construct an inner protocol with the desired properties and thereby instantiate the IPS compiler.
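The following sketch summarizes the output-equivocation step; all names are placeholders of our choosing (it is not part of the formal construction in Sect. 3, and it assumes the extracted labels and the public evaluation procedure are given as inputs).

```python
def equivocal_output(garbled_circuit, both_labels, receiver_input, evaluate):
    """Illustrative output equivocation for Yao's protocol with malicious-secure OT.

    both_labels:    both_labels[i] = (label_0, label_1) for input wire i, obtained
                    from the malicious sender's OT messages via the OT extractor
    receiver_input: the honest receiver's input bits, given to the simulator only
                    when the output has to be equivocated
    evaluate:       the public garbled-circuit evaluation procedure
    """
    chosen_labels = [both_labels[i][bit] for i, bit in enumerate(receiver_input)]
    return evaluate(garbled_circuit, chosen_labels)
```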

The full description of the inner protocol along with the security properties it needs to satisfy is given in Sect. 3. The construction and the security analysis of our two-sided black-box NISC protocol can be found in Sect. 4.

Further Remarks. We observe that there is no need to rely on a special inner protocol that was constructed based on Yao’s garbled circuits. Instead, we can start with any one-sided OT-based NISC protocol. This follows from the fact that security against malicious receivers comes for free, and the output equivocality follows from security of the one-sided NISC against malicious senders. Thus, our work can be viewed as a black-box construction of two-sided NISC from any one-sided NISC. This allows us to directly transfer any efficiency improvements in the one-sided NISC setting to the more challenging two-sided NISC. Furthermore, this allows us to upgrade known one-sided NISC protocols in the arithmetic setting [CDI+19, DIO21] (making a black-box use of the underlying field) to similar two-sided NISC protocols.

2.2 Black-Box Three-Round MPC

To construct a black-box three-round MPC protocol, we again rely on the IPS compiler. Specifically, we start with an outer protocol that supports an arbitrary number of clients and satisfies security with selective abort (such a protocol was constructed in [IKP10, Pas12]). As in the black-box two-sided NISC case, we implement the watchlist protocol via a coin-tossing based approach. This enables the simulator to bias the watched executions of the corrupted parties before it sends its first-round message on behalf of the honest parties. The only difference from the two-sided NISC case is that we need to rely on an inner protocol that runs in three rounds (due to the black-box impossibility of [ABG+20]). To make the inner protocol compatible with the above outer protocol, we need it to satisfy the following two additional properties:

  • Robustness: Even if the adversary cheats in generating the messages in the first two rounds of the protocol, it cannot break the privacy of the honest parties' inputs. This is needed since the output of the watchlist is delivered only at the end of the second round, and any cheating in the first two rounds should not enable the corrupted party to break the privacy of the honest parties.

  • Last Round Equivocality: If the adversary has cheated in the first two rounds, then the simulator, when provided with the inputs of all the honest parties, must produce a last-round message which is computationally indistinguishable from the one in the real execution. This is needed to generate the last-round message in the inner protocol executions where the adversary has cheated in the first two rounds.

We note that robustness and last round equivocality were also needed in the inner protocol used in [IKSS21]. However, their inner protocols either run in two rounds (in the presence of OT correlations) or in four rounds in the plain model. Here, our focus is on constructing such an inner protocol in three rounds in the CRS model.

Constructing the Multiparty Inner Protocol. Our first observation is that to construct such an inner protocol for computing arbitrary functionalities, it is sufficient to construct an inner protocol that computes the \(\textsf{3MULTPlus}\) functionality. \(\textsf{3MULTPlus}\) is a special multiparty functionality that takes \((x_1,y_1)\) from the first party, \((x_2,y_2)\) from the second party, and \((x_3,y_3)\) from the third party, where \(x_i,y_i\) are bits, and delivers \(x_1 \cdot x_2 \cdot x_3 + y_1 + y_2 + y_3\) to all the parties. Indeed, the standard bootstrapping results from \(\textsf{3MULTPlus}\) to general functions [BGI+18, GIS18, ABG+20] for the case of semi-honest adversaries also extend to the above security definition. Thus, it is sufficient to construct an inner protocol for the \(\textsf{3MULTPlus}\) functionality that satisfies both robustness and last round equivocality.
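For concreteness, a toy reference implementation of the \(\textsf{3MULTPlus}\) functionality is given below; interpreting the arithmetic over GF(2) is our reading of the bit-valued inputs and outputs.

```python
def three_mult_plus(shares):
    """shares = [(x1, y1), (x2, y2), (x3, y3)], each coordinate a bit.
    Returns x1*x2*x3 + y1 + y2 + y3 over GF(2); every party receives this bit."""
    x_prod, y_sum = 1, 0
    for x_i, y_i in shares:
        x_prod &= x_i
        y_sum ^= y_i
    return x_prod ^ y_sum

# Small sanity checks.
assert three_mult_plus([(1, 0), (1, 1), (1, 0)]) == 0
assert three_mult_plus([(1, 1), (1, 1), (1, 1)]) == 0
```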

The starting point of our construction of such a protocol is the work of Patra and Srinivasan [PS21], who gave a construction in the semi-honest setting based on any two-round semi-honest OT protocol. The main result that we prove is that if we replace the two-round semi-honest OT protocol in their construction with a two-round malicious-secure version, then the resulting protocol is robust. However, proving this is not straightforward and requires a careful security analysis (this appears in Proposition 2). To prove last round equivocality, we observe that the last-round message sent by each party in the protocol of [PS21] is obtained by decrypting some sender OT message. As in the two-sided NISC setting, we show that this message can be equivocated if the two-round OT protocol is secure against malicious senders. This allows us to construct a three-round inner protocol that satisfies robustness and equivocality by making black-box use of a two-round malicious-secure OT.

The formal description of the security properties along with the construction and the proof of security of the multiparty inner protocol appears in Sect. 5.

Putting Things Together. As mentioned before, our three-round black-box multiparty protocol is obtained by combining the two-round coin-tossing based watchlist protocol with a three-round inner protocol satisfying both robustness and last round equivocality. At the end of the second round, the output of the watchlist protocol is delivered to all the parties. If the adversary cheats in many inner protocol executions, then this is detected by the honest parties, who abort before sending the final-round message. In this case, we rely on the robustness property of the inner protocol to show that the adversary learns no information about the private inputs of the honest parties. On the other hand, if the adversary cheats in only a small number of executions, then we corrupt the corresponding servers in the outer protocol. We use the last round equivocality to generate the final message in the inner protocol for these executions. We finally rely on the security of the outer protocol to argue that only the output of the functionality is leaked to the adversary, since the number of server corruptions is “small”.
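The following control-flow sketch (every callable is a placeholder for the corresponding sub-protocol algorithm; it is only meant to illustrate the round structure) highlights that the watchlist checks can be performed only after the second round, and that an honest party aborts before its last message if it observes too much cheating.

```python
def run_honest_party(round1, round2, round3, output, watchlist_cheaters, send, receive, lam):
    send(round1())                              # round 1: inner-protocol + watchlist messages
    msgs1 = receive()
    send(round2(msgs1))                         # round 2
    msgs2 = receive()
    cheated = watchlist_cheaters(msgs1, msgs2)  # watchlist output available only now
    if len(cheated) >= lam:
        return None                             # many cheated executions: abort before round 3
    send(round3(msgs2))                         # round 3: last-round messages
    return output(receive())                    # few cheated executions: outer protocol security
```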

The construction of the three-round black-box MPC protocol and the proof of security can be found in Sect. 6.

2.3 Another Perspective

A different way to view our techniques is as follows. Let us start with the simplest, round-optimal semi-honest protocols for 2PC and MPC that make black-box use of two-round semi-honest OT. For the case of two parties, we consider Yao's protocol, and for the case of multiple parties, we consider the protocol from the work of Patra and Srinivasan [PS21]. In both these protocols, we replace the underlying semi-honest OT protocol with a malicious-secure OT protocol and ask what security properties are satisfied by this modification. In this work, we show that the properties satisfied correspond to those required of the inner protocols. Later, we use the IPS compiler to bootstrap this “weaker” security notion to standard malicious security. However, this runs into several technical hurdles (as explained earlier), and we develop new techniques to overcome them.

Organization. We assume basic familiarity with the definitions of the standard building blocks used in our construction. We provide the formal definitions in the full version. We give the description of the two-party inner protocol in Sect. 3. In Sect. 4, we give our construction of black-box two-sided NISC protocol. We give the construction of our multiparty inner protocol in Sect. 5. In Sect. 6, we give our construction of black-box multiparty protocol that runs in three rounds.

3 Two-Party Inner Protocol

In this section, we define a two-party protocol that satisfies some special properties (which we call a two-party inner protocol). We give a construction of such a two-party inner protocol making black-box use of a two-round malicious-secure OT. In the next section, we use this protocol to construct a two-sided NISC.

3.1 Definition

A two-round two-party protocol for computing a two-party function f is given by a tuple of PPT algorithms \((\textsf{Setup},\varPi _1,\varPi _2,\textsf{out}_{\varPi })\). The \(\textsf{Setup}\) algorithm takes in the security parameter \(1^\lambda \) (encoded in unary) and outputs the common reference string \(\textsf{crs}\). \(\varPi _1\) is run by the receiver; it takes in \(\textsf{crs}\) and the receiver input \(x_0\) and outputs \(\pi _1\). \(\varPi _2\) is run by the sender; it takes in \(\textsf{crs}\), \(\pi _1\), and the sender input \(x_1\) and outputs \(\pi _2\). \(\textsf{out}_{\varPi }\) takes in \(\textsf{crs}\), \(\pi _2\), \(x_0\), and the random tape of \(\varPi _1\), and outputs \(f(x_0,x_1)\).
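This syntax can be wired together as in the following sketch (purely illustrative; the four algorithms are passed in as placeholders, and the random tape of \(\varPi _1\) is made explicit):

```python
import os

def run_inner_protocol(Setup, Pi1, Pi2, out_Pi, lam, x0, x1):
    crs = Setup(lam)
    r0 = os.urandom(32)              # receiver's random tape for Pi_1
    pi1 = Pi1(crs, x0, r0)           # round 1: receiver -> sender
    pi2 = Pi2(crs, pi1, x1)          # round 2: sender -> receiver
    return out_Pi(crs, pi2, x0, r0)  # receiver's output, equal to f(x0, x1)
```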

Definition 1

A two-party protocol \((\textsf{Setup},\varPi _1,\varPi _2,\textsf{out}_{\varPi })\) for computing a functionality f that delivers the output to the receiver is said to be a two-party inner protocol if there exists a (stateful) PPT simulator-extractor pair \((\textsf{Sim}_{\varPi },\textsf{Ext}_{\varPi })\) such that the following properties hold:

  • Security Against Malicious Receivers: For any (stateful) non-uniform PPT adversary \(\mathcal {A}\) corrupting the receiver and for any sender input \(x_1\), we have:

    $$ \textsf{Real}_R(1^\lambda ,\mathcal {A},x_1) \approx _c \textsf{Ideal}_R(1^\lambda ,\mathcal {A},x_1,(\textsf{Sim}_{\varPi },\textsf{Ext}_{\varPi })) $$

    where \(\textsf{Real}_R\) and \(\textsf{Ideal}_R\) are described in Fig. 1.

  • Correctness of Extraction. For any non-uniform PPT adversary \(\mathcal {A}\) corrupting the receiver, we have

    $$ \textrm{Pr} \Big [\textsf{Ext}_\varPi (R,\textsf{td},\varPi _1(\textsf{crs},x_0;r_0)) \ne x_0 \;\Big |\; (\textsf{crs},\textsf{td}) \leftarrow \textsf{Sim}_\varPi (1^\lambda ,R),\ (x_0,r_0) \leftarrow \mathcal {A}(\textsf{crs}) \Big ] \le \textsf{negl}(\lambda ) $$
  • Robust Security Against Semi-Malicious Senders (a.k.a. output equivocality): Informally, this property requires that if a malicious sender sends a second-round message that is not explainable (by providing a valid (input, randomness) pair), then there is an equivocal simulator that, when given the private input of the honest receiver, computes an output such that the joint distribution of the view of \(\mathcal {A}\) and the output of the honest receiver in the real execution is computationally indistinguishable from that in the ideal execution using this special simulator. Formally, for any (stateful) non-uniform PPT adversary \(\mathcal {A}\) corrupting the sender and for any receiver input \(x_0\), we have:

    $$ \textsf{Real}_S(1^\lambda ,\mathcal {A},x_0) \approx _c \textsf{Ideal}_S(1^\lambda ,\mathcal {A},x_0,(\textsf{Sim}_{\varPi },\textsf{Ext}_{\varPi })) $$

    where \(\textsf{Real}_S\) and \(\textsf{Ideal}_S\) are described in Fig. 2.

Fig. 1. Descriptions of \(\textsf{Real}_R\) and \(\textsf{Ideal}_R\) experiments.

Fig. 2. Descriptions of \(\textsf{Real}_S\) and \(\textsf{Ideal}_S\) experiments.

Remark 1

We note that correctness of extraction is implied by security against malicious receivers. However, for ease of use in the next section, we state it as a separate property.

We defer the proof of the following proposition to the full version.

Proposition 1

Assume black-box access to a two-round oblivious transfer protocol secure against malicious adversaries in the common random/reference string model. Then, there exists a two-party inner protocol satisfying Definition 1 for computing any two-party functionality f. The computational and communication complexity of the protocol is \(\textsf{poly} (\lambda ,|f|)\) where |f| denotes the circuit-size of f.

3.2 Construction from One-Sided NISC

We note that any one-sided NISC protocol gives rise to a two-party protocol satisfying Definition 1. This is because security against malicious receivers is implied by the security of one-sided NISC against malicious receivers. Robust security against semi-malicious senders is implied by security of one-sided NISC against malicious senders. Thus, we get the following corollary.

Corollary 1

Let f be an arbitrary two-party functionality. Assume black-box access to a one-sided NISC protocol that securely computes f. Then, there exists a two-party inner protocol for computing f satisfying Definition 1. The computational and communication complexity of the protocol are the same as those of the NISC protocol.

4 Two-Sided Black-Box NISC

In this section, we give our construction of black-box two-sided NISC protocol. We prove the following theorems.

Theorem 1 (Black-box two-sided NISC)

Assume black-box access to a two-round oblivious transfer protocol secure against malicious adversaries in the common random/reference string model. Then, there exists a two-round protocol for securely computing any two-party functionality f against malicious adversaries in the common random/reference string model where both parties get the output of f at the end of the protocol. The computational and communication complexity of the protocol is \(\textsf{poly} (\lambda ,|f|)\) where |f| denotes the circuit-size of f.

Theorem 2 (Black-box arithmetic two-sided NISC)

Let \(\mathbb {F}\) be a finite field and let f be a two-party functionality that is computable by an arithmetic branching program over \(\mathbb {F}\). Assume black-box access to a two-round oblivious linear evaluation (OLE) protocol over \(\mathbb {F}\) and an oblivious transfer protocol that is secure against malicious adversaries in the common random/reference string model. Then, there exists a two-round protocol for securely computing f against malicious adversaries in the common random/reference string model where both parties get the output of f at the end of the protocol. The computational and communication complexity of the protocol is \(\textsf{poly} (\lambda ,|f|)\) where |f| denotes the size of the branching program computing f and the protocol makes black-box use of \(\mathbb {F}\).

4.1 Building Blocks

The construction makes use of the following building blocks:

  1. 1.

    A two-round, two-client, m-server outer MPC protocol \(\varPsi = (\textsf{Share},\textsf{Eval},\textsf{Dec})\) for computing the function f that satisfies security with abort against t server corruptions. We set \(t = 2\lambda \) and \(m = 3t+1\). Based on [IKP10, Pas12], we give a construction of such a protocol in the full version; it makes black-box use of a PRG, and \(\textsf{Eval}\) does not involve cryptographic operations.

  2. 2.

    A two-round, two-party inner protocol (see Definition 1) \((\textsf{Setup}_{\varPi _j},\varPi _{j,1},\varPi _{j,2},\) \(\textsf{out}_{\varPi _j})\) that delivers output to the receiver and computes \(\textsf{Eval}(j,\cdot )\) for each \(j \in [m]\). From Proposition 1 and Corollary 1, such a protocol can be constructed making black-box use of a two-round malicious-secure OT protocol or a one-sided NISC protocol.

  3. 3.

    A two-round malicious-secure two-party computation protocol \((\textsf{CRSGen},\varPhi _1,\) \(\varPhi _2,\textsf{out}_{\varPhi })\) for computing the \(\textsf{Sel}_{\lambda ,m}\) functionality. The \(\textsf{Sel}_{\lambda ,m}\) functionality takes in a string \(\rho _1\) from the receiver and \((\rho _2,(s_1,\ldots ,s_m))\) from the sender. It computes \(\rho _1\oplus \rho _2\) and uses it as a random tape to select a random multiset (with replacement) K of [m] of size \(\lambda \). It then outputs \((K,\{s_i\}_{i \in K})\) to the receiver. [IKO+11] gave a two-round black-box protocol for computing \(\textsf{Sel}_{\lambda ,m}\) based on a two-round malicious-secure OT protocol. (A toy sketch of this functionality is given right after this list.)
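As a toy illustration of the \(\textsf{Sel}_{\lambda ,m}\) functionality itself (the way the combined tape is expanded into a multiset is not fixed above; the hash-seeded PRG below is an arbitrary illustrative choice):

```python
import hashlib, random

def sel(rho1: bytes, rho2: bytes, s: list, lam: int):
    """Ideal Sel_{lam,m}: rho1 comes from the receiver, (rho2, s) from the sender.
    Uses rho1 XOR rho2 as a random tape to pick a size-lam multiset K of [m]
    (with replacement) and returns (K, {s_i}_{i in K}) to the receiver."""
    m = len(s)
    tape = bytes(a ^ b for a, b in zip(rho1, rho2))
    rng = random.Random(hashlib.sha256(tape).digest())  # deterministic expansion of the tape
    K = [rng.randrange(m) for _ in range(lam)]
    return K, {i: s[i] for i in K}
```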

The key lemma that we will prove in this section is the following.

Lemma 1

Assume black-box access to a PRG and the protocols \(\{\varPi _j\}_{j \in [m]}\) and \(\varPhi \) as described above. Then, there exists a two-round protocol for securely computing any two-party functionality f against malicious adversaries where both parties get the output of f at the end of the protocol.

Theorem 1 is obtained by instantiating \(\{\varPi _j\}_{j \in [m]}\) from Proposition 1. To obtain Theorem 2, we observe that in the protocols of [IKP10, Pas12], if f is computable by an arithmetic branching program, then \(\textsf{Eval}(j,\cdot )\) is computable by a log-depth arithmetic circuit and does not involve any cryptographic operations. Thus, we can instantiate \(\varPi _j\) for each \(j \in [m]\) using the one-sided NISC protocol for computing log-depth arithmetic circuits based on two-round malicious-secure OLE [IKO+11, CDI+19, DIO21] via Corollary 1.

We give the construction of the protocol in Sect. 4.2 and the proof of security in Sect. 4.3.

4.2 Construction

Let \(P_0\) and \(P_1\) be the two parties with private inputs \(x_0\) and \(x_1\) respectively. The parties additionally have as a common input the description of the function f. We give the formal description of the construction in Fig. 3.

Fig. 3. Black-Box Two-Sided NISC Protocol

4.3 Proof of Security

We give the description of the simulator below and show that the real and the ideal executions are computationally indistinguishable. Since the protocol is symmetric w.r.t. both \(P_0\) and \(P_1\), we assume without loss of generality that \(P_1\) is corrupted by \(\mathcal {A}\).

Description of \(\textsf{Sim}\).

  • \(\textsf{CRSGen}(1^\lambda )\): \(\textsf{Sim}\) does the following:

    1. 1.

      It chooses \((\textsf{crs}^0,\textsf{td}^0,\phi ^0_1) \leftarrow \textsf{Sim}_{\varPhi }(1^\lambda ,S)\) and \((\textsf{crs}^1,\textsf{td}^1) \leftarrow \textsf{Sim}_{\varPhi }(1^\lambda ,R)\).

    2. 2.

      It samples a uniform multiset \(K^1\) of [m] of size \(\lambda \).

    3. 3.

      For each \(j \in K^1\), it samples \(\textsf{crs}^0_j,\textsf{crs}^1_j \leftarrow \textsf{Setup}_{\varPi _j}(1^\lambda )\).

    4. 4.

      For each \(j \not \in K^1\), it samples \((\textsf{crs}^0_j,\textsf{td}^0_j,\pi ^0_{j,1}) \leftarrow \textsf{Sim}_{\varPi _j}(1^\lambda ,S)\) and \((\textsf{crs}^1_j,\textsf{td}^1_j) \leftarrow \textsf{Sim}_{\varPi _j}(1^\lambda ,R)\).

    5. 5.

      It outputs \((\{\textsf{crs}^0_j,\textsf{crs}^1_j\}_{j \in [m]},\textsf{crs}^0,\textsf{crs}^1)\) as the CRS of the overall protocol.

  • Round-1: To generate the first round message, \(\textsf{Sim}\) does the following:

    1. 1.

      It runs the simulator \(\textsf{Sim}_{\varPsi }\) for the outer protocol by corrupting the client \(P_1\) and the set of servers given by \(K^1\). \(\textsf{Sim}_{\varPsi }\) provides \(\textsf{Sim}\) with \(\{x^0_j\}_{j \in K^1}\).

    2. 2.

      For each \(j \in K^1\), it computes \(\pi ^0_{j,1} \leftarrow \varPi _{j,1}(\textsf{crs}^0_j,x^0_j;r^0_j)\) for uniformly chosen \(r^0_j\).

    3. 3.

      It sends \(\phi ^0_1\) and \(\{\pi ^0_{j,1}\}_{j \in [m]}\) to \(\mathcal {A}\).

    4. 4.

      It receives the first round message from \(\mathcal {A}\). For each \(j \not \in K^1\), it computes \(x^1_j \leftarrow \textsf{Ext}_{\varPi _j}(\textsf{td}^1_j,\pi ^1_{j,1})\). It computes \(\rho ^1_1 \leftarrow \textsf{Ext}_{\varPhi }(R,\phi ^1_1,\textsf{td}^1)\).

  • Round-2: To generate the second round message, \(\textsf{Sim}\) does the following:

    1. 1.

      It sends \(\{x^1_j\}_{j \not \in K^1}\) to \(\textsf{Sim}_{\varPsi }\) as the first round message from the corrupted client to the honest servers. \(\textsf{Sim}_{\varPsi }\) queries the ideal functionality on input \(x_1\) and \(\textsf{Sim}\) forwards this query to its own ideal functionality. It forwards the response from the ideal functionality back to \(\textsf{Sim}_{\varPsi }\). \(\textsf{Sim}_{\varPsi }\) sends \(\{z^1_j\}_{j \not \in K^1}\) as the second round message from the honest servers to the corrupted client.

    2. 2.

      For each \(j \not \in K^1\), it generates \(\pi ^1_{j,2} \leftarrow \textsf{Sim}_{\varPi _j}(R,\textsf{crs}^1_j,z^1_j,x^1_j)\). For each \(j \in K^1\), it generates \(\pi ^1_{j,2}\) as \(\varPi _{j,2}(\textsf{crs}^1_{j},\pi ^1_{j,1},x^0_j;t^1_j)\) for uniformly chosen \(t^1_j\).

    3. 3.

      It generates \(\phi ^1_2 \leftarrow \textsf{Sim}_{\varPhi }(R,\{K^1,\{x^0_j,r^0_j,t^1_j\}_{j \in K^1}\})\).

    4. 4.

      It sends \(\phi ^1_2\) and \(\{\pi ^1_{j,2}\}_{j \in [m]}\) to \(\mathcal {A}\).

    5. 5.

      It receives the second round message from \(\mathcal {A}\). For each \(j \not \in K^1\), it computes \(\textsf{st}_j \leftarrow \textsf{Ext}_{\varPi _j}(S,\textsf{td}^0_j,\pi ^0_{j,2})\).

    6. 6.

      It also computes \((\rho ^0_2,s^1_1,\ldots ,s^1_m) \leftarrow \textsf{Ext}_{\varPhi }(S,\phi ^0_2,\textsf{td}^0)\).

  • Output Computation: To compute the output, \(\textsf{Sim}\) does the following:

    1. 1.

      It chooses a uniform multiset \(K^0\) of [m] of size \(\lambda \) and uses it to perform the same checks as done by honest \(P_0\) using \(\{s^1_j\}_{j \in K^0}\). If any of the checks fail, it instructs the ideal functionality to deliver \(\bot \) to \(P_0\).

    2. 2.

      Otherwise, it initializes an empty set \(C_1\).

    3. 3.

      For each \(j \not \in K^1\),

      1. (a)

        It parses \(s^1_j\) as \((\overline{x}^1_j,r^1_j,t^0_j)\).

      2. (b)

        If either \((\overline{x}^{1}_j,r^{1}_j)\) is not a consistent (input, randomness) pair that explains the message \(\pi ^{1}_{j,1}\), or if \((\overline{x}^{1}_j,t^{0}_j)\) is not a consistent (input, randomness) pair that explains the message \(\pi ^{1}_{j,2}\), then it adds j to \(C_1\).

    4. 4.

      If \(|C_1| \ge \lambda \), then it instructs the ideal functionality to output \(\bot \) to \(P_0\). Otherwise, it instructs \(\textsf{Sim}_{\varPsi }\) to adaptively corrupt the set of servers indexed by \(C_1\) and obtains \(\{x^0_j\}_{j \in C_1}\).

    5. 5.

      For each \(j \in C_1\), it computes \(z^0_j\) as \(\textsf{Sim}_{\varPi _j}(S,\textsf{st}_j,\pi ^0_{j,2},x^0_j)\). For each \(j \in K^1\), it computes \(z^0_j\) as \(\textsf{out}_{\varPi _j}(\textsf{crs}^0_j,\pi ^0_{j,2},(x^0_j,r^0_j))\).

    6. 6.

      It sends \(\{z^0_j\}_{j \in C_1 \cup K^1}\) to \(\textsf{Sim}_{\varPsi }\) as the second round message from the corrupted servers to the honest client. If \(\textsf{Sim}_{\varPsi }\) instructs the \(P_0\) to abort, then \(\textsf{Sim}\) instructs the ideal functionality to deliver \(\bot \) to \(P_0\). Otherwise, it instructs it to deliver the output of f to \(P_0\).

Proof of Indistinguishability.

  • \(\underline{\textsf{Hyb}_1}:\) This corresponds to the output of the real experiment, which comprises the view of \(\mathcal {A}\) corrupting \(P_1\) and the output of honest \(P_0\).

  • \(\underline{\textsf{Hyb}_2}:\) In this hybrid, we make the following changes:

    1. 1.

      Sample \((\textsf{crs}^0,\textsf{td}^0,\phi ^0_1) \leftarrow \textsf{Sim}_{\varPhi }(1^\lambda ,S)\).

    2. 2.

      Obtain \(\phi ^0_2\) from \(\mathcal {A}\).

    3. 3.

      Compute \((\rho ^0_2,(s^1_1,\ldots ,s^1_m)) \leftarrow \textsf{Ext}_{\varPhi }(S,\phi ^0_2,\textsf{td}^0)\).

    4. 4.

      Sample \(\rho ^0_1\) uniformly from \(\{0,1\}^*\) and sample a multiset \(K^0\) of size \(\lambda \) from [m] using \(\rho ^0_1 \oplus \rho ^0_2\) as the random tape.

    5. 5.

      Use \((K^0,\{s^{1}_j\}_{j \in K^0})\) to perform the same checks described in output computation.

    In Lemma 2, we show from the simulation security of \(\varPhi \) against corrupted senders that \(\textsf{Hyb}_1 \approx _c \textsf{Hyb}_2\).

  • \(\underline{\textsf{Hyb}_3}:\) In this hybrid, we make the following changes:

    1. 1.

      Sample \((\textsf{crs}^1,\textsf{td}^1) \leftarrow \textsf{Sim}_{\varPhi }(1^\lambda ,R)\).

    2. 2.

      Obtain \(\phi ^1_1\) from \(\mathcal {A}\).

    3. 3.

      Compute \(\rho ^1_1 \leftarrow \textsf{Ext}_{\varPhi }(R,\phi ^1_1,\textsf{td}^1)\).

    4. 4.

      Sample a multiset \(K^1\) of size \(\lambda \) from [m] using a random tape \(\rho ^1\).

    5. 5.

      Generate \(\phi ^1_2 \leftarrow \textsf{Sim}_{\varPhi }(R,\{K^1,\{s^0_j\}_{j \in K^1}\})\).

    6. 6.

      Use \(\phi ^1_2\) to generate the final round message in the protocol.

    In Lemma 3, we use the simulation security of \(\varPhi \) against corrupted receivers to show that \(\textsf{Hyb}_2 \approx _c \textsf{Hyb}_3\).

  • \(\underline{\textsf{Hyb}_4}:\) In this hybrid, we make the following changes:

    1. 1.

      For each \(j \in [m]\), we parse \(s^1_j\) as \((\overline{x}^1_j,r^1_j,t^0_j)\).

    2. 2.

      We initialize an empty set \(C_1\).

    3. 3.

      For each \(j \not \in K^1\),

      1. (a)

        If either \((\overline{x}^{1}_j,r^{1}_j)\) is not a consistent (input, randomness) pair that explains the message \(\pi ^{1}_{j,1}\), or if \((\overline{x}^{1}_j,t^{0}_j)\) is not a consistent (input, randomness) pair that explains the message \(\pi ^{1}_{j,2}\), then we add j to \(C_1\).

    4. 4.

      If \(|C_1| \ge \lambda \), then we abort and use \(\bot \) as the output of honest \(P_0\).

    In Lemma 4, we show that \(\textsf{Hyb}_3 \approx _s \textsf{Hyb}_4\).

  • \(\underline{\textsf{Hyb}_5}:\) In this hybrid, we make the following changes:

    1. 1.

      Before generating the CRS, we sample a uniform multiset \(K^1\) of [m] with size \(\lambda \).

    2. 2.

      We sample \((\textsf{crs}^0_j,\textsf{td}^0_j,\pi ^0_{j,1}) \leftarrow \textsf{Sim}_{\varPi _j}(1^\lambda ,S)\) for each \(j \not \in K^1\). We use \(\{\textsf{crs}^0_j\}_{j \not \in K^1}\) as part of the CRS and use \(\{\pi ^0_{j,1}\}_{j \not \in K^1}\) to generate the first round message from \(P_0\).

    3. 3.

      We receive the second round message from \(\mathcal {A}\) (that includes \(\pi ^0_{j,2}\) for each \(j \in [m]\)) and extract \(\{(\overline{x}^{1}_j,r^1_j,t^0_j)\}_{j \in [m]}\) as before.

    4. 4.

      For each \(j \not \in K^1\), we compute \(\textsf{st}_j \leftarrow \textsf{Ext}_{\varPi _j}(S,\textsf{td}^0_j,\pi ^0_{j,2})\).

    5. 5.

      We compute the set \(C_1\) as before.

    6. 6.

      For each \(j \in C_1\), we set \(z^0_j = \textsf{Sim}_{\varPi _j}(S,\textsf{st}_j,\pi ^0_{j,2},x^0_j)\).

    7. 7.

      For each \(j \in K^1\), we compute \(z^0_j\) as before.

    8. 8.

      For each \(j \not \in C_1 \cup K^1\), we set \(z^0_j = \textsf{Eval}(j,x^0_j,\overline{x}^1_j)\).

    In Lemma 5, we rely on the robust security of \(\varPi _j\) against semi-malicious senders to show that \(\textsf{Hyb}_4 \approx _c \textsf{Hyb}_5\).

  • \(\underline{\textsf{Hyb}_6}:\) In this hybrid, we make the following changes:

    1. 1.

      We generate \((\textsf{crs}^1_j,\textsf{td}^1_j) \leftarrow \textsf{Sim}_{\varPi _j}(1^\lambda ,R)\) for each \(j \not \in K^1\).

    2. 2.

      On receiving \(\{\pi ^1_{j,1}\}_{j \in [m]}\) from \(\mathcal {A}\), we run \(\textsf{Ext}_{\varPi _j}(\textsf{td}^1_j,\pi ^1_{j,1})\) to obtain \(x^1_j\) for each \(j \not \in K^1\).

    3. 3.

      For each \(j \not \in K^1\), we generate \(\pi ^1_{j,2} \leftarrow \textsf{Sim}_{\varPi _j}(R,\textsf{crs}^1_{j},z^1_j = \textsf{Eval}(j,x^0_j,x^1_j),x^1_j)\). We use this to generate the second round message from \(P_0\).

    In Lemma 6, we use the security of \(\varPi _j\) against malicious senders for each \(j \in [m]\) to show that \(\textsf{Hyb}_5 \approx _c \textsf{Hyb}_6\).

  • \(\underline{\textsf{Hyb}_7}:\) In this hybrid, for each \(j \not \in K^1 \cup C_1\), we use \(z^1_j\) instead of \(z^0_j\) to compute the output of honest \(P_0\). It follows from the correctness of extraction property of \(\{\varPi _j\}_{j \not \in K^1 \cup C_1}\) that \(z^0_j = z^1_j\) for each \(j \not \in K^1 \cup C_1\) except with negligible probability and hence, \(\textsf{Hyb}_6 \approx _s \textsf{Hyb}_7\).

  • \(\underline{\textsf{Hyb}_8}:\) In this hybrid, we make the following changes:

    1. 1.

      We start running the simulator \(\textsf{Sim}_{\varPsi }\) by corrupting the client \(P_1\) and the set of servers indexed by \(K^1\). We receive \(\{x^0_j\}_{j \in K^1}\) from the simulator and use this to generate the first round message from \(P_0\).

    2. 2.

      On receiving \(\{\pi ^1_{j,1}\}_{j \in [m]}\) from \(\mathcal {A}\), we run \(\textsf{Ext}_{\varPi _j}(\textsf{td}^1_j,\pi ^1_{j,1})\) to obtain \(x^1_j\) for each \(j \not \in K^1\). We send \(\{x^1_j\}_{j \not \in K^1}\) to \(\textsf{Sim}_{\varPsi }\) as the first round message from the adversarial client \(P_1\) to the honest servers.

    3. 3.

      \(\textsf{Sim}_{\varPsi }\) queries its ideal functionality on an input \(x_1\) and we forward this to our ideal functionality and respond with \(f(x_0,x_1)\).

    4. 4.

      \(\textsf{Sim}_{\varPsi }\) provides \(\{z^1_j\}_{j \not \in K^1}\). We use this to generate \(\pi ^1_{j,2} \leftarrow \textsf{Sim}_{\varPi _j}(R,\textsf{crs}^1_j,z^1_j,x^1_j)\) for each \(j \not \in K^1\).

    5. 5.

      We receive the second round message from \(\mathcal {A}\) and use this to extract \(\{s^1_j\}_{j \in [m]}\) as before. We compute the set \(C_1\) and abort if \(|C_1| \ge \lambda \).

    6. 6.

      For each \(j \not \in K^1\), we compute \(\textsf{st}_j \leftarrow \textsf{Ext}_{\varPi _j}(S,\textsf{td}^0_j,\pi ^0_{j,2})\).

    7. 7.

      We now instruct \(\textsf{Sim}_{\varPsi }\) to adaptively corrupt the set of servers corresponding to \(C_1\) and obtain \(\{x^0_j\}_{j \in C_1}\). We then compute \(z^0_j = \textsf{Sim}_{\varPi _j}(S,\textsf{st}_j,\pi ^0_{j,2},x^0_j)\) for each \(j \in C_1\). We compute \(z^0_j\) for each \(j \in K^1\) as before.

    8. 8.

      We send \(\{z^0_j\}_{j \in C_1 \cup K^1}\) as the second round message from the corrupted servers to the honest client to \(\textsf{Sim}_{\varPsi }\). If \(\textsf{Sim}_{\varPsi }\) instructs the client to abort, we instruct \(P_0\) to do the same. Otherwise, we instruct \(P_0\) to output \(f(x_0,x_1)\).

    In Lemma 7, we use the security of the outer protocol to argue that \(\textsf{Hyb}_7 \approx _c \textsf{Hyb}_8\). Notice that \(\textsf{Hyb}_8\) is identically distributed to the ideal world using \(\textsf{Sim}\); a high-level sketch of this simulator is given below.
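For readability, the following Python sketch summarizes the control flow of \(\textsf{Sim}\) as described by the steps of \(\textsf{Hyb}_8\). It is purely illustrative: every object it manipulates (sim_psi, pi, adversary, ideal_f) is a hypothetical abstraction of \(\textsf{Sim}_{\varPsi }\), the protocols \(\{\varPi _j\}\), the adversary \(\mathcal {A}\), and the ideal functionality, and bookkeeping such as recovering \(\{s^1_j\}\), \(C_1\), and the states \(\textsf{st}_j\) is folded into helper calls.

```python
# Illustrative control-flow sketch of the simulator Sim implicit in Hyb_8.
# None of the names below come from the paper; they are stand-ins only.

def simulate_honest_P0(lam, m, K1, sim_psi, pi, adversary, ideal_f):
    # Step 1: corrupt P_1 and the servers in K^1; use the simulated shares of
    # the honest client for those servers to build P_0's first round message.
    x0_K1 = sim_psi.corrupt(client=1, servers=K1)
    msgs1 = adversary.first_round(x0_K1)

    # Step 2: extract x^1_j for every honest server and hand them to Sim_Psi
    # as the adversarial client's first round message.
    x1 = {j: pi.extract(j, msgs1[j]) for j in range(m) if j not in K1}
    sim_psi.client_first_round(x1)

    # Step 3: forward Sim_Psi's ideal-functionality query to our own functionality.
    x1_input = sim_psi.ideal_query()
    y = ideal_f(x1_input)
    sim_psi.ideal_response(y)

    # Step 4: Sim_Psi supplies z^1_j for the honest servers; simulate pi^1_{j,2}.
    z1 = sim_psi.honest_server_outputs()
    msgs2_out = {j: pi.simulate_sender(j, z1[j], x1[j]) for j in z1}

    # Step 5: receive the adversary's second round, recover C_1, abort if large.
    msgs2_in = adversary.second_round(msgs2_out)
    C1 = pi.inconsistent_servers(msgs2_in)
    if len(C1) >= lam:
        return "abort"

    # Steps 6-7: adaptively corrupt C_1 and recompute z^0_j for C_1 and K^1.
    x0_C1 = sim_psi.adaptive_corrupt(C1)
    z0 = {j: pi.simulate_receiver_output(j, msgs2_in[j], x0_C1[j]) for j in C1}
    z0.update({j: pi.watchlist_output(j, msgs2_in[j]) for j in K1})

    # Step 8: let Sim_Psi decide whether the honest client aborts.
    return "abort" if sim_psi.client_aborts(z0) else y
```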

Lemma 2

Assuming the simulation security of the protocol \(\varPhi \) against corrupted senders, we have \(\textsf{Hyb}_1 \approx _c \textsf{Hyb}_2\).

Proof

Assume for the sake of contradiction that \(\textsf{Hyb}_1\) and \(\textsf{Hyb}_2\) are computationally distinguishable with non-negligible advantage. We show that this contradicts the simulation security of the protocol \(\varPhi \) against corrupted senders.

We start interacting with the external challenger and provide a uniformly chosen random string \(\rho ^0_1\) as the challenge receiver input. The challenger responds with \(\textsf{crs}^0\). We use this to generate the CRS in the overall protocol. The challenger also sends \(\phi ^0_1\). We use this to generate the first round message in the protocol by sampling the other components of the first round message as in \(\textsf{Hyb}_1\). We generate the second round message as before and obtain the second round message from \(\mathcal {A}\). We forward \(\phi ^0_2\) from the second round message received from \(\mathcal {A}\) to the external challenger. The external challenger provides us with \(K^0\) and \(\{s^1_j\}_{j \in K^0}\) as the output of the honest \(P_0\). We use this to perform the same checks as described in the output computation. We finally output the view of \(\mathcal {A}\) and the output of \(P_0\).

If the CRS, the messages in the protocol \(\varPhi \), and the output of the honest \(P_0\) are generated as in the real experiment, then the output of the above reduction is identically distributed to \(\textsf{Hyb}_1\). Else, it is identically distributed to \(\textsf{Hyb}_2\). Thus, if \(\textsf{Hyb}_1\) and \(\textsf{Hyb}_2\) are computationally distinguishable with non-negligible advantage then this breaks the simulation security of \(\varPhi \) against corrupted senders and this is a contradiction.

Lemma 3

Assuming the simulation security of the protocol \(\varPhi \) against corrupted receivers, we have \(\textsf{Hyb}_2 \approx _c \textsf{Hyb}_3\).

Proof

Assume for the sake of contradiction that \(\textsf{Hyb}_2\) and \(\textsf{Hyb}_3\) are computationally distinguishable with non-negligible advantage. We show that this contradicts the simulation security of \(\varPhi \) against corrupted receivers.

We interact with the external challenger and provide a uniformly chosen \(\rho ^1_2\) and \((s^0_1,\ldots ,s^0_m)\) as the challenge sender input. The external challenger provides us with \(\textsf{crs}^1\) and we use this to generate the CRS of the overall protocol. We start interacting with the adversary and obtain the first round message \(\phi ^1_1\) from it. We forward this to the external challenger. The external challenger provides us with the second round message \(\phi ^1_2\) and we use this to generate the second round message in the overall protocol. We compute the output of honest \(P_0\) as before and finally output the view of \(\mathcal {A}\) and the output of the honest \(P_0\).

Note that if the messages in the protocol \(\varPhi \) and the CRS are generated by the external challenger as in the real experiment then the output of the above reduction is distributed identically to \(\textsf{Hyb}_2\). Else, it is distributed identically to \(\textsf{Hyb}_3\). Thus, if \(\textsf{Hyb}_3\) and \(\textsf{Hyb}_2\) are computationally distinguishable with non-negligible advantage then this breaks the simulation security of \(\varPhi \) against corrupted receivers and this is a contradiction.

Lemma 4

\(\textsf{Hyb}_3 \approx _s \textsf{Hyb}_4\).

Proof

Note that the only difference between \(\textsf{Hyb}_3\) and \(\textsf{Hyb}_4\) is that in \(\textsf{Hyb}_4\) we abort if \(|C_1| \ge \lambda \). To show that \(\textsf{Hyb}_3\) and \(\textsf{Hyb}_4\) are statistically close, we prove that if the above condition holds, then in \(\textsf{Hyb}_3\), the checks done by the honest \(P_0\) fail with overwhelming probability.

Note that \(K^0\) is distributed as a random multiset of [m] of size \(\lambda \). If \(C_1 \cap K^0 \ne \emptyset \), then the honest \(P_0\) in \(\textsf{Hyb}_3\) also aborts. We show that this event happens with overwhelming probability.

$$\begin{aligned} \textrm{Pr} [|K^0 \cap C_1| = 0]&= \left( 1 - \frac{|C_1|}{m}\right) ^{\lambda } \\&\le e^{-|C_1|\lambda /m} \\&\le e^{-\lambda ^2/m} \\&\le e^{-\Omega (\lambda )} \end{aligned}$$

where the last inequality follows since \(m = O(\lambda )\). This completes the proof of the lemma.
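As a numerical sanity check of this bound (not part of the proof), the sketch below evaluates the exact probability and the exponential upper bound for illustrative parameter choices; the constants \(m = 4\lambda \) and \(|C_1| = \lambda \) are assumptions made only for this example.

```python
import math

# Compare Pr[|K^0 cap C_1| = 0] = (1 - |C_1|/m)^lambda with e^{-|C_1| * lambda / m}
# for a few illustrative security parameters, assuming m = 4*lambda and |C_1| = lambda.
for lam in (40, 80, 128):
    m = 4 * lam
    c1 = lam
    exact = (1 - c1 / m) ** lam        # exact probability for a random multiset K^0
    bound = math.exp(-c1 * lam / m)    # the bound e^{-|C_1| * lambda / m}
    print(f"lambda={lam:3d}  exact={exact:.3e}  bound={bound:.3e}")
```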

Lemma 5

Assuming the robust security of \(\varPi _j\) against semi-malicious senders for each \(j \in [m]\), we have \(\textsf{Hyb}_4 \approx _c \textsf{Hyb}_5\).

Proof

Assume for the sake of contradiction that \(\textsf{Hyb}_4\) and \(\textsf{Hyb}_5\) are distinguishable with non-negligible advantage. We sample a uniform multiset \(K^1\) of [m] of size \(\lambda \). We now show that if \(\textsf{Hyb}_4\) and \(\textsf{Hyb}_5\) are computationally distinguishable then this contradicts the robust security of \(\varPi _j\) against semi-malicious senders for some \(j \not \in K^1\).

Let \(\prec \) be a total order on the set \([m] \setminus K^1\). If \(\textsf{Hyb}_4\) and \(\textsf{Hyb}_5\) are distinguishable with non-negligible advantage, then by a standard averaging argument there exist hybrids \(\textsf{Hyb}_{4,j}\) and \(\textsf{Hyb}'_{4,j}\) (described below) that are distinguishable with non-negligible advantage. In both hybrids, for each \(j^* \prec j\), \((\textsf{crs}^0_{j^*},\pi ^0_{j^*,1})\) is generated as in \(\textsf{Hyb}_5\), whereas for each \(j^*\) with \(j \prec j^*\), \((\textsf{crs}^0_{j^*},\pi ^0_{j^*,1})\) is generated as in \(\textsf{Hyb}_4\). The only difference is that \((\textsf{crs}^0_j,\pi ^0_{j,1})\) is generated as in \(\textsf{Hyb}_5\) in \(\textsf{Hyb}_{4,j}\), whereas it is generated as in \(\textsf{Hyb}_4\) in \(\textsf{Hyb}'_{4,j}\). We use this to construct an attacker that breaks the robust security of \(\varPi _j\) against semi-malicious senders.
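To spell out the averaging step, write \(\varepsilon \) for the distinguishing advantage of a fixed PPT distinguisher \(\mathcal {D}\) between \(\textsf{Hyb}_4\) and \(\textsf{Hyb}_5\), and let \(j_1 \prec \cdots \prec j_{m'}\) enumerate \([m] \setminus K^1\). Since \(\textsf{Hyb}'_{4,j_1} = \textsf{Hyb}_4\), \(\textsf{Hyb}_{4,j_{m'}} = \textsf{Hyb}_5\), and \(\textsf{Hyb}_{4,j_k} = \textsf{Hyb}'_{4,j_{k+1}}\), the triangle inequality gives

$$ \varepsilon \le \sum _{k=1}^{m'} \Big |\textrm{Pr} [\mathcal {D}(\textsf{Hyb}_{4,j_k}) = 1] - \textrm{Pr} [\mathcal {D}(\textsf{Hyb}'_{4,j_k}) = 1]\Big |, $$

so some index \(j_k\) witnesses advantage at least \(\varepsilon /m'\), which is still non-negligible.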

We interact with the external challenger and provide \(x^0_j\) as the challenge receiver message. The challenger provides \((\textsf{crs}^0_j,\pi ^0_{j,1})\). We use this to generate the CRS and the first round message of the overall protocol. We receive the second round message from the adversary and use it to extract \(\{(\overline{x}^1_j,r^1_j,t^0_j)\}_{j \in [m]}\). We compute the set \(C_1\) as before and abort if \(|C_1| \ge \lambda \). For each \(j \in C_1\), we send \(\overline{x}^1_j\) and an arbitrary \(t^0_j\) (that does not explain the messages correctly) along with \(\pi ^0_{j,2}\) to the external challenger. If \(j \not \in C_1 \cup K^1\), we send \((\overline{x}^1_j,t^0_j)\) along with \(\pi ^0_{j,2}\) to the external challenger. We receive the output \(z^0_j\) and use this to compute the output of the overall protocol as before.

We note that if \((\textsf{crs}^0_j,\pi ^0_{j,1},z^0_j)\) was generated by the external challenger as in the \(\textsf{Real}_S\) experiment then the output of the above reduction is identically distributed to \(\textsf{Hyb}'_{4,j}\). Else, it is distributed identically to \(\textsf{Hyb}_{4,j}\). Thus, if \(\textsf{Hyb}_{4,j}\) and \(\textsf{Hyb}'_{4,j}\) are distinguishable with non-negligible advantage, then the above reduction breaks the robust security of \(\varPi _j\) against semi-malicious senders with non-negligible advantage and this is a contradiction.

Lemma 6

Assuming the security of \(\varPi _j\) against malicious receivers for each \(j \in [m]\), we have \(\textsf{Hyb}_5 \approx _c \textsf{Hyb}_6\).

Proof

Assume for the sake of contradiction that \(\textsf{Hyb}_5\) and \(\textsf{Hyb}_6\) are distinguishable with non-negligible advantage. We sample a uniform multiset \(K^1\) of [m] of size \(\lambda \). We now show that this contradicts the security of \(\varPi _j\) against malicious receivers for some \(j \not \in K^1\).

Let \(\prec \) be a total order on the set \([m] \setminus K^1\). If \(\textsf{Hyb}_5\) and \(\textsf{Hyb}_6\) are distinguishable with non-negligible advantage, then by a standard averaging argument there exist hybrids \(\textsf{Hyb}_{5,j}\) and \(\textsf{Hyb}'_{5,j}\) (described below) that are distinguishable with non-negligible advantage. In both hybrids, for each \(j^* \prec j\), \((\pi ^1_{j^*,2},\textsf{crs}^1_{j^*})\) is generated as in \(\textsf{Hyb}_6\), whereas for each \(j^*\) with \(j \prec j^*\), \((\pi ^1_{j^*,2},\textsf{crs}^1_{j^*})\) is generated as in \(\textsf{Hyb}_5\). The only difference is that \((\textsf{crs}^1_j,\pi ^1_{j,2})\) is generated as in \(\textsf{Hyb}_6\) in \(\textsf{Hyb}_{5,j}\), whereas it is generated as in \(\textsf{Hyb}_5\) in \(\textsf{Hyb}'_{5,j}\). We use this to construct an attacker that breaks the security of \(\varPi _j\) against malicious receivers.

We interact with the external challenger and provide \(x^0_j\) as the challenge sender input. We obtain \(\textsf{crs}^1_j\) from the external challenger. We receive \(\pi ^1_{j,1}\) from the adversary and forward this to the challenger. The challenger responds with \(\pi ^1_{j,2}\) and we use these to generate the view of the adversary \(\mathcal {A}\) and compute the output of \(P_0\) as in \(\textsf{Hyb}'_{5,j}\).

We note that if \((\pi ^1_{j,2},\textsf{crs}^1_j)\) was generated by the external challenger as in \(\textsf{Real}_R\) then the output of the above reduction is identically distributed to \(\textsf{Hyb}'_{5,j}\). Else, it is distributed identically to \(\textsf{Hyb}_{5,j}\). Thus, if \(\textsf{Hyb}_{5,j}\) and \(\textsf{Hyb}'_{5,j}\) are distinguishable with non-negligible advantage, then the above reduction breaks the security of \(\varPi _j\) against malicious receivers with non-negligible advantage and this is a contradiction.

Lemma 7

Assuming the security of the outer MPC protocol \(\varPsi = (\textsf{Share},\textsf{Eval},\textsf{Dec})\), we have that \(\textsf{Hyb}_7 \approx _c \textsf{Hyb}_8\).

Proof

Assume for the sake of contradiction that \(\textsf{Hyb}_7\) and \(\textsf{Hyb}_8\) are computationally distinguishable with non-negligible advantage. We show that this contradicts the security of the outer protocol \(\varPsi \).

We start interacting with the outer protocol challenger and provide \(x_0\) as the honest client input. We instruct the challenger to corrupt \(P_1\) and the set of servers indexed by \(K^1\). The challenger provides \(\{x^0_j\}_{j \in K^1}\) as the first round message from the honest client to the corrupted servers, and we use this to generate the first round message in the protocol. On receiving the first round message from \(\mathcal {A}\), we obtain \(x^1_j \leftarrow \textsf{Ext}_{\varPi _j}(\textsf{td}^1_j,\pi ^1_{j,1})\) for each \(j \not \in K^1\) and send \(\{x^1_j\}_{j \not \in K^1}\) to the challenger as the first round message from the adversarial client to the honest servers.

The challenger replies with \(\{z^1_j\}_{j \not \in K^1}\). We use this to generate \(\pi ^1_{j,2} \leftarrow \textsf{Sim}_{\varPi _j}(z^1_j,x^1_j)\) for each \(j \not \in K^1\) and compute the second round message of the overall protocol. We receive the second round message from the adversary and use this to extract \(\{s^1_j\}_{j \in [m]}\) as before. We compute the set \(C_1\) and abort if \(|C_1| \ge \lambda \). Additionally, for each \(j \in C_1\), we compute \(\textsf{st}_j \leftarrow \textsf{Ext}_{\varPi _j}(S,\textsf{td}^0_j,\pi ^0_{j,2})\).

We now instruct the challenger to adaptively corrupt the set of servers corresponding to \(C_1\) and obtain \(\{x^0_j\}_{j \in C_1}\). We then compute \(z^0_j = \textsf{Sim}_{\varPi _j}(\textsf{st}_j,\pi ^0_{j,2},x^0_j)\) for each \(j \in C_1\), and compute \(z^0_j\) for each \(j \in K^1\) as before. We send \(\{z^0_j\}_{j \in C_1 \cup K^1}\) to the challenger as the second round message from the corrupted servers to the honest client. If the challenger instructs the client to abort, we instruct \(P_0\) to do the same. Otherwise, we instruct \(P_0\) to output whatever the challenger provides as the output. We output the view of \(\mathcal {A}\) and the output of \(P_0\).

Note that if the messages received from the challenger are computed as in the real execution of the protocol \(\varPsi \), then the output of the above reduction is distributed identically to \(\textsf{Hyb}_7\). Else, it is distributed identically to \(\textsf{Hyb}_8\). Hence, if \(\textsf{Hyb}_7\) and \(\textsf{Hyb}_8\) are distinguishable with non-negligible advantage, then the above reduction breaks the security of the outer protocol \(\varPsi \) with non-negligible advantage and this is a contradiction.

5 Multiparty Inner Protocol

In this section, we define a three-round multiparty protocol satisfying certain special properties, which we call a multiparty inner protocol, and give a construction based on two-round malicious-secure oblivious transfer. In the next section, we will use this multiparty inner protocol as the key ingredient to construct a three-round malicious-secure protocol for general functionalities.

5.1 Definition

A three-round n-party protocol for computing a function f is given by a tuple of PPT algorithms \((\textsf{Setup},\varPi _1,\varPi _2,\varPi _3,\textsf{out}_{\varPi })\) and has the following syntax. The \(\textsf{Setup}\) algorithm takes in the security parameter \(1^\lambda \) (encoded in unary) and outputs the common reference string \(\textsf{crs}\). For each \(r \in [3]\), \(\varPi _r\) is the r-th round message function that takes in \(\textsf{crs}\), the index i of the party, the transcript seen so far (denoted by \(\pi (r-1)\)), the i-th party’s private input \(x_i\), and its random tape \(r_i\), and outputs \(\pi ^i_r\). \(\textsf{out}_{\varPi }\) is the public decoder (see [ABG+20] for the definition of a publicly decodable MPC protocol) that takes in the transcript \(\pi (3)\) of the three rounds and outputs \(f(x_1,\ldots ,x_n)\).
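To make this syntax concrete, the following Python sketch spells out the interface of such a protocol. The class and method names are illustrative assumptions only and do not come from the construction.

```python
# Illustrative interface for a three-round publicly decodable protocol
# (Setup, Pi_1, Pi_2, Pi_3, out_Pi); names are hypothetical.
from abc import ABC, abstractmethod
from typing import Any, List


class ThreeRoundInnerProtocol(ABC):
    @abstractmethod
    def setup(self, lam: int) -> Any:
        """Takes the security parameter (1^lambda) and outputs the crs."""

    @abstractmethod
    def round_msg(self, r: int, crs: Any, i: int, transcript: List[Any],
                  x_i: Any, r_i: bytes) -> Any:
        """Pi_r: given crs, party index i, the transcript pi(r-1) of the first
        r-1 rounds, the private input x_i and random tape r_i, output pi^i_r."""

    @abstractmethod
    def out(self, transcript: List[Any]) -> Any:
        """Public decoder out_Pi: recovers f(x_1, ..., x_n) from pi(3) alone."""
```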

Definition 2

A three-round n-party protocol \((\textsf{Setup},\varPi _1,\varPi _2,\varPi _3,\textsf{out}_{\varPi })\) for computing a function f is said to be a multiparty inner protocol with publicly decodable transcript if it satisfies:

  • Correctness: For any choice of inputs \(x_1,\ldots ,x_n\), we have:

    $$ \textrm{Pr} [\textsf{out}_{\varPi }(\pi (3)) = f(x_1,\ldots ,x_n)] = 1 $$

    where \(\pi (3)\) is the transcript generated in the first three rounds of the protocol.

  • Security: For any subset \(M \subset [n]\) of the parties, there exists a (stateful) PPT simulator \(\textsf{Sim}_{\varPi }\) such that for any (stateful) non-uniform PPT adversary \(\mathcal {A}\) corrupting the set of parties given by M and for any set \(\{x_i\}_{i \in [n] \setminus M}\) of the honest party inputs, we have:

    $$ \Big |\textrm{Pr} [\textsf{Real}(1^\lambda ,M,\mathcal {A},\{x_i\}_{i \in [n] \setminus M}) = 1] - \textrm{Pr} [\textsf{Ideal}(1^\lambda ,M,\mathcal {A},\{x_i\}_{i \in [n] \setminus M},\textsf{Sim}_{\varPi }) = 1] \Big | \le \textsf{negl}(\lambda ) $$

    where \(\textsf{Real}\) and \(\textsf{Ideal}\) experiments are described in Fig. 4.

In this section, we state the following proposition and defer the proof to the full version.

Proposition 2

Assume black-box access to a two-round oblivious transfer protocol that is secure against malicious adversaries in the common random/reference string model. Then, there exists a three-round multiparty inner protocol satisfying Definition 2 for computing any n-party functionality f. The computational and communication complexity of this protocol is \(\textsf{poly} (\lambda ,n,|f|)\), where |f| is the circuit-size of f.

Fig. 4. Descriptions of \(\textsf{Real}\) and \(\textsf{Ideal}\) experiments.

6 Round-Optimal Black-Box MPC

In this section, we give a construction of a three-round MPC protocol that makes black-box use of two-round malicious-secure oblivious transfer. The round-optimality of this construction follows from [ABG+20]. We prove the following theorem.

Theorem 3 (Black-box three-round MPC)

Assume black-box access to a two-round oblivious transfer protocol that is secure against malicious adversaries in the common random/reference string model. Then, there exists a three-round protocol for computing any n-party functionality f in the common random/reference string model that satisfies security with unanimous abort against malicious adversaries corrupting an arbitrary subset of the parties. The protocol works over broadcast channels and its computational and communication complexity is \(\textsf{poly} (\lambda ,n,|f|)\) where |f| is the circuit-size of f.

The proof of this theorem is deferred to the full version of the paper.