1 Introduction

The field of secure multiparty computation (MPC), and more broadly fault-tolerant distributed computation, constitutes a deep and rich literature, yielding a vast assortment of protocols providing strong robustness and even seemingly paradoxical privacy guarantees. A central setting is that of n parties who jointly compute a function of their inputs while maintaining correctness (and possibly privacy) in the face of adversarial behavior by a constant fraction of corrupted parties.

Since the original seminal results on secure multiparty computation [2, 11, 25, 36], the vast majority of MPC solutions to date assume that every party can (and will) communicate with every other party. That is, the underlying point-to-point communication network forms a complete graph. Indeed, many MPC protocols begin directly with every party secret sharing his input across all other parties (or simply sending his input, in the case of tasks without privacy such as Byzantine agreement [17, 34, 35]).

There are two classes of exceptions to this rule, which consider MPC on incomplete communication graphs.

Fixed-Graph Model. The first corresponds to an area of work investigating achievable security guarantees in the setting of a fixed partial communication network. In this model, communication is allowed only along edges of a fixed graph, known a priori, and corruptions may be chosen as a function of the graph's structure. This setting is commonly analyzed within the distributed computing community. In addition to positive results, this is the setting of many fundamental lower bounds: For example, to achieve secure Byzantine agreement against t corruptions, the graph must be \((t+1)\)-connected [16, 21]. For graphs with lower connectivity, the best one can hope for is a form of “almost-everywhere agreement,” where some honest parties are not guaranteed to output correctly, as well as restricted notions of privacy [10, 19, 23, 26, 27]. Note that because of this, one cannot hope to achieve protocols with standard security in this model with \(o(n^2)\) communication, even for simple functionalities such as Byzantine agreement.

Dynamic-Graph Model. The second, more recent approach addresses a model where all parties have the ability to initiate communication with one another, but make use of only a subset of these edges as determined dynamically during the protocol. We refer to this as the “dynamic-graph model.” When allowing for negligible error (in the number of parties), the above lower bounds do not apply, opening the door for dramatically different approaches and improvements in complexity. Indeed, distributed protocols have been shown for Byzantine agreement in this model with as low as \(\tilde{O}(n)\) bits of communication [6, 33], and secure MPC protocols have been constructed whose communication graphs have degree o(n), and even as low as \(\mathrm {polylog}(n)\) [3, 5, 9, 15]. However, unlike the deep history of the model above, the current status is a sprinkling of positive results. Little is known about what types of communication graphs must be generated by a secure MPC protocol execution.

Gaining a better understanding of this regime is motivated not only by fundamental questions, but also by the need for guiding principles for future protocol design. In this work, we take a foundational look at the dynamic-graph model, asking:

$$ \textit{What properties of induced communication graphs are necessary to support secure computation?} $$

On the necessity of graph expansion. Classical results tell us that the fully connected graph suffices for secure computation. Protocols achieving low locality indicate that a variety of significantly sparser graphs, with many low-weight cuts, can also be used [3, 5, 9, 15]. We thus consider a natural extension of connectivity to the setting of low degree. Although the positive results in this setting take different approaches and result in different communication graph structures, we observe that in each case, the resulting sparse graph has high expansion.

Roughly, a graph is an expander if every subset of its nodes that is not “too large” has a “large” boundary. Expander graphs have good mixing properties and in a sense “mimic” a fully connected graph. There are various ways of formalizing expansion; in this work we consider a version of edge expansion, pertaining to the number of outgoing edges from any subset of nodes. We consider a variant of the expansion definition which is naturally monotonic: that is, expansion cannot decrease when extra edges are added (note that such monotonicity also holds for the capacity of the graph to support secure computation).

Indeed, expander graphs appear explicitly in some works [9, 33], and implicitly in others (e.g., using random graphs [31], pseudorandom graphs [5], and averaging samplers [6], to convert from almost-everywhere to everywhere agreement). High connectivity and good mixing intuitively go hand-in-hand with robustness against corruptions, where adversarial entities may attempt to impede or misdirect information flow.

This raises the natural question: Is this merely an artifact of a convenient construction, or is high expansion inherent? That is, we investigate the question: Must the communication graph of a generic MPC protocol, tolerating a linear number of corruptions, be an expander graph?

1.1 Our Results

More explicitly, we consider the setting of secure multiparty computation with n parties in the face of a linear number of active corruptions. As is common in the honest-majority setting, we consider protocols that guarantee output delivery. Communication is modeled via the dynamic-graph setting, where all parties have the ability to initiate communication with one another, and use a subset of edges as dictated by the protocol. We focus on the synchronous setting.

Our contributions are of the following three kinds:

Formal definitional framework. As a first contribution, we provide a formal framework for analyzing and studying the evolving communication graph of MPC protocols. The framework abstracts and refines previous approaches concerning specific properties of protocols implicitly related to the graph structure, such as the degree [5]. This gives a starting point for studying the relation between secure computation and further, more general, graph properties.

Upper bounds. We present secure protocols whose induced communication graphs are decidedly not expander graphs, within a range of settings. This includes protocols with computational security, with information-theoretic security, with low locality, and even with low locality and adaptive security (in a hidden-channels model [9]), all under the common assumption of some form of input-independent setup information. The resulting communication graph has a low-weight cut, splitting the n parties into two equal (linear) size sets with only poly-logarithmic edges connecting them.

Theorem 1

(MPC with non-expanding communication graph, informal). For any efficient functionality f and any constant \(\epsilon >0\), there exists a protocol in the PKI model, assuming digital signatures, securely realizing f against \((1/4-\epsilon ) \cdot n\) static corruptions, such that with overwhelming probability the induced communication graph is non-expanding.

Theorem 1 is stated in the computational setting with static corruptions; however, this approach extends to various other settings, albeit at the expense of a lower corruption threshold. (See Sect. 4 for more details.)

Theorem 2

(extensions of Theorem 1, informal). For any efficient functionality f, there exists a protocol securely realizing f, in the settings listed below, against a linear number of corruptions, such that with overwhelming probability the induced communication graph is non-expanding:

  • In the setting of Theorem 1 with poly-logarithmic locality.

  • Unconditionally, in the information-theoretic PKI model (with or without low locality).

  • Unconditionally, in the information-theoretic PKI model, facing adaptive adversaries.

  • Under standard cryptographic assumptions, in the PKI model, facing adaptive adversaries, with poly-logarithmic locality.

As an interesting special case, since our protocols are over point-to-point channels and do not require a broadcast channel, these results yield the first Byzantine agreement protocols whose underlying communication graphs are not expanders.

The results in Theorems 1 and 2 all follow from a central transformation converting existing secure protocols into ones with low expansion. At a high level, the first n/2 parties will run a secure computation to elect two representative committees of poly-logarithmic size: one amongst themselves and the other from the remaining n/2 parties. These committees will form a “communication bridge” across the two halves (see Fig. 1). The setup is used to certify the identities of the members of both committees to the receiving parties, either via public-key infrastructure for digital signatures (in the computational setting) or correlated randomness for information-theoretic signatures [37, 38] (in the information-theoretic setting).
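To illustrate the effect of such a bridge on expansion, the following small computation (an illustration of ours, not part of the construction; the committee size \(\log ^2 n\) is the concrete choice made in Sect. 4) compares the number of bridge edges crossing the two halves with the size of each half, for growing n.

```python
# The cut separating the two halves is crossed only by committee-to-committee edges, so its
# weight is |C1| * |C2| = log^4(n), regardless of how densely each half communicates internally.
import math

for n in [2 ** k for k in (10, 20, 40, 80)]:
    committee = math.log2(n) ** 2          # |C1| = |C2| = log^2(n), as chosen in Sect. 4
    crossing = committee ** 2              # bridge edges crossing the halves: log^4(n)
    ratio = crossing / (n / 2)             # cut weight relative to the size of each side
    print(f"n = 2^{round(math.log2(n))}: crossing = {crossing:.0f}, crossing/(n/2) = {ratio:.2e}")
```

For small n the poly-logarithmic term still dominates, but the ratio vanishes as n grows, which is precisely the sense in which the induced graph fails to be an expander.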

Interestingly, this committee-based approach can be extended to the adaptive setting (with setup), in the hidden-channels model considered by [9], where the adversary is not aware which communication channels are utilized between honest parties. Here, care must be taken not to reveal more information than necessary about the identities of committee members, to protect them from being corrupted.

As a side contribution, we provide the first instantiation of a protocol with poly-logarithmic locality and information-theoretic security (with setup), by adjusting the protocol from [5] to the information-theoretic setting.

Theorem 3

(polylog-locality MPC with information-theoretic security, informal). For any efficient functionality f and any constant \(\epsilon >0\), there exists a protocol with poly-logarithmic locality in the information-theoretic PKI model, securely realizing f against computationally unbounded adversaries statically corrupting \((1/6-\epsilon ) \cdot n\) parties.

Lower bounds. On the other hand, we show that in some settings a weak form of expansion is a necessity. In fact, we prove a stronger statement, that in these settings the graph must have high connectivity. Our lower bound is in the setting of adaptive corruptions, computational (or information-theoretic) security, and without setup assumptions. Our proof relies only on correctness of the protocol and not on any privacy guarantees; namely, we consider the parallel broadcast functionality (aka interactive consistency [35]), where every party distributes its input to all other parties. We construct an adversarial strategy in this setting such that no protocol can guarantee correctness against this adversary if its induced communication graph at the end of the protocol has any cut with sublinearly many crossing edges (referred to as a “sublinear cut” from now on).

Theorem 4

(high connectivity is necessary for correct protocols, informal). Let \(t(n)\in \varTheta (n)\). Any t(n)-resilient protocol for parallel broadcast in the computational setting, tolerating an adaptive, malicious adversary, cannot maintain an induced communication graph with a sublinear cut.

Theorem 4 in particular implies that the resulting communication graph must have a form of expansion. We note that in a weaker communication model, a weaker form of consensus, namely Byzantine agreement, can be computed in a way that the underlying graph (while still an expander) has low-weight cuts [32].

It is indeed quite intuitive that if a sublinear cut exists in the communication graph of the protocol, and the adversary can adaptively corrupt a linear number of parties t(n), then he could corrupt the parties on the cut and block information flow. The challenge, however, stems from the fact that the cut is not known a priori but is only revealed over time, and by the point at which the cut is identifiable, all necessary information may have already been transmitted across the cut. In fact, even the identity of the cut and visible properties of the communication graph itself can convey information to honest parties about input values without actual bits being communicated. This results in a surprisingly intricate final attack, involving multiple indistinguishable adversaries, careful corruption strategies, and precise analysis of information flow. See below for more detail.

1.2 Our Techniques

We focus on the technical aspects of the lower bound result.

Overview of the attack. Consider an execution of the parallel broadcast protocol over random inputs. At a high level, our adversarial strategy, denoted \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \), will select a party \({P} _{i^{*}}\) at random and attempt to block its input from being conveyed to honest parties. We are only guaranteed that a sublinear cut will remain somewhere in the graph. Because the identity of the eventual cut is unknown, it cannot be attacked directly. We take the following approach:

  1. Phase I. Instead, our attack will first “buy time” by corrupting the neighbors of \({P} _{i^{*}}\), blocking information flow of its input \(x_{i^{*}}\) to the remaining parties. Note that this can only continue up to a certain point, since the degree of \({P} _{i^{*}}\) will eventually surpass the corruption threshold (as we prove). But, the benefit of this delay is that in the meantime, the communication graph starts to fill in, which provides more information about the locations of the potential cuts.

    For this to be the case, it must be that the parties cannot identify that \({P} _{i^{*}}\) is under attack (otherwise, the protocol may instruct many parties to quickly communicate to/from \({P} _{i^{*}}\), forcing the adversary to run out of his “corruption budget” before the remaining graph fills in). The adversary thus needs to fool all honest parties and make each honest party believe that he participates in an honest execution of the protocol. This is done by maintaining two simulated executions: one pretending to be \({P} _{i^{*}}\) running on a random input, and another pretending (to \({P} _{i^{*}}\)) to be all other parties running on random inputs. Note that for this attack strategy to work it is essential that the parties do not have pre-computed correlated randomness such as PKI.

  2. Phase II. We show that with noticeable probability, by the time the Phase I corruption budget (a linear number of parties) is exhausted, all parties in the protocol have high (linear) degree. In turn, we prove that the current communication graph can have at most a constant number of sublinear cuts.

    In the remainder of the execution, the adversary will simultaneously attack all of these cuts. Namely, he will block information flow from \({P} _{i^{*}}\) across any of these cuts by corrupting the appropriate “bridge” parties, giving up on each cut one by one once a certain threshold of edges has crossed it.

If the protocol is guaranteed to maintain a sublinear cut, then necessarily there will remain at least one cut for which all Phase II communication across the cut has been blocked by the adversary. Morally, parties on the side of this cut opposite \({P} _{i^{*}}\) should not have learned \(x_{i^{*}}\), and thus the correctness of the protocol should be violated. Proving this, on the other hand, requires surmounting two notable challenges.

  1. We must prove that there still remains an uncorrupted party \({P} _{j^{*}}\) on the opposite side of the cut. It is not hard to show that each side of the cut is of linear size, that \({P} _{i^{*}}\) has a sublinear number of neighbors across the cut (all of which are corrupted), and that a sublinear number of parties get corrupted in Phase II. Hence, there exist parties across the cut that are not neighbors of \({P} _{i^{*}}\) and that are not corrupted in Phase II. However, by the attack strategy, all of the neighbors of the virtual \({P} _{i^{*}}\) are corrupted in Phase I as well, and this is also a linear-size set, which is independent of the real neighbors of \({P} _{i^{*}}\). Therefore, it is not clear that there will actually remain honest parties across the cut by the end of the protocol execution.

  2. More importantly, even though we are guaranteed that no bits of communication have been passed along any path from \({P} _{i^{*}}\) to \({P} _{j^{*}}\), this does not imply that no information about \(x_{i^{*}}\) has been conveyed. For example, since the graph develops as a function of parties’ inputs, it might be the case that this situation of \({P} _{j^{*}}\) being blocked from \({P} _{i^{*}}\) only occurs when \(x_{i^{*}}\) equals a certain value.

We now discuss how these two challenges are addressed.

Guaranteeing honest parties across the cut. Unexpectedly, we cannot guarantee existence of honest parties across the cut. Instead, we introduce a different adversarial strategy, which we prove must have honest parties blocked across a cut from \({P} _{i^{*}}\), and for which there exist honest parties who cannot distinguish which of the two attacks is taking place. More explicitly, we consider the “dual” version of the original attack, denoted \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \), where party \({P} _{i^{*}}\) is corrupted and instead pretends to be under attack as per \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) above.

Blocking honest parties from \(x_{i^{*}}\) in \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) does not contradict correctness explicitly on its own, as \({P} _{i^{*}}\) is corrupted in this case. It is the combination of both of these attacks that will enable us to contradict correctness. Namely, we prove that:

  • Under the attack \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \), there exists a “blocked cut” \((S,\bar{S})\) with uncorrupted parties on both sides. By agreement, all uncorrupted parties output the same value \(y_{i^{*}}\) as the \({i^{*}}\)’th coordinate of the output vector.

  • The view of some of the uncorrupted parties under the attack \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) is identically distributed as that of uncorrupted parties in the original attack \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \). Thus, their output distribution must be the same across the two attacks.

  • Since under the attack \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \), the party \({P} _{i^{*}}\) is honest, by validity, all uncorrupted parties in \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) must output the correct value \(y_{i^{*}}=x_{i^{*}}\).

  • Thus, uncorrupted parties in \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) (who have the same view) must output the correct value \(x_{i^{*}}\) as well.

Altogether, this implies that all honest parties in interaction with \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \), in particular \({P} _{j^{*}}\) who is blocked across the cut from \({P} _{i^{*}}\), must output \(y_{i^{*}}=x_{i^{*}}\).

Bounding information transmission about \(x_{i^{*}}\). The final step is to show that this cannot be the case, since an uncorrupted party \({P} _{j^{*}}\) across the cut in \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) does not receive enough information about \(x_{i^{*}}\) to fully specify the input. This demands delicate treatment of the specific attack strategy and analysis, as many “side channel” signals within the protocol can leak information on \(x_{i^{*}}\). Corruption patterns in Phase II, and their timing, can convey information “across” the isolated cut. In fact, even the event of successfully reaching Phase II may be correlated with the value of \(x_{i^{*}}\).

For example, say the cut at the conclusion of the protocol is \((S_1,\bar{S}_1)\) with \({i^{*}}\in S_1\) and \({j^{*}}\in \bar{S}_1\), but at the beginning of Phase II there existed another cut \((S_2,\bar{S}_2)\), for which \(S_1 \cap S_2 \ne \emptyset \), \(S_1 \cap \bar{S}_2 \ne \emptyset \), \(\bar{S}_1 \cap S_2 \ne \emptyset \), and \(\bar{S}_1 \cap \bar{S}_2 \ne \emptyset \). Since any “bridge” party in \(\bar{S}_2\) that receives a message from \(S_2\), gets corrupted and discards the message, the view of honest parties in \(\bar{S}_1\) might change as a result of the corruption related to the cut \((S_2,\bar{S}_2)\), which in turn could depend on \(x_{i^{*}}\).

Ultimately, we ensure that the final view of \({P} _{j^{*}}\) in the protocol can be simulated given only “Phase I” information, which is independent of \(x_{i^{*}}\), in addition to the identity of the final cut in the graph, which reveals only a constant amount of additional entropy.

Additional subtleties. The actual attack and its analysis are even more delicate. E.g., it is important that the degree of the “simulated \({P} _{i^{*}}\)” (maintained by the adversarial strategy \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \)) reaches the threshold faster than that of the real \({P} _{i^{*}}\). In addition, in each of these cases the threshold, and hence the transition to the next phase, could be reached in the middle of a round, requiring detailed treatment.

1.3 Open Questions

This work leaves open many interesting lines of future study.

  • Bridging the gap between upper and lower bounds. This amounts to identifying the core properties that do, or do not, necessitate graph expansion. Natural candidates suggested by our work are the existence of setup information and adaptive corruptions in the hidden or visible (yet private) channels model.

  • What other graph properties are necessary (or not) to support secure computation? Our new definitional framework may aid in this direction.

  • Our work connects graph theory and secure protocols, giving rise to further questions and design principles. For example, can good constructions of expanders give rise to new communication-efficient MPC? On the other hand, can necessity of expansion (in certain settings) be used to argue new communication complexity lower bounds?

Paper Organization

Basic notations are presented in Sect. 2. In Sect. 3, we provide our formalization of the communication graph induced by an MPC protocol and related properties. In Sect. 4, we describe our upper bound results, constructing protocols with non-expanding graphs. In Sect. 5, we prove our lower bound.

2 Preliminaries

Graph-theoretic notations. Let \(G=(V,E)\) be an undirected graph of size n, i.e., \(|V|=n\). Given a set \(S\subseteq V\), we denote its complement set by \(\bar{S}\), i.e., \(\bar{S}=V\setminus S\). Given two disjoint subsets \(U_1,U_2\subseteq V\), we define the set of all edges in G with one endpoint in \(U_1\) and the other endpoint in \(U_2\) as

$$ \textsf {edges}_G(U_1,U_2):= \left\{ (u_1,u_2):u_1\in U_1, u_2\in U_2, \text { and } (u_1,u_2)\in E\right\} . $$

We denote by \(|\textsf {edges}_G(U_1,U_2)|\) the total number of edges going across \(U_1\) and \(U_2\). For simplicity, we denote \(\textsf {edges}_G(S)=\textsf {edges}_G(S,\bar{S})\). A cut in the graph G is a partition of the vertices V into two non-empty, disjoint sets \(\{S,\bar{S}\}\). An \(\alpha \)-cut is a cut \(\{S,\bar{S}\}\) such that \(|\textsf {edges}_G(S)|\le \alpha \).

Given a graph \(G=(V,E)\) and a node \(i\in V\), denote by \(G\setminus \{i\}=(V',E')\) the graph obtained by removing node i and all its edges, i.e., \(V'=V\setminus \{i\}\) and \(E'=E\setminus \{(i,j) \mid j\in V'\}\).
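For concreteness, the following is a small, illustrative Python transcription of these notations (ours, not part of the paper), representing an undirected graph as a pair (V, E) with E a set of unordered pairs stored as tuples.

```python
def edges(G, U1, U2):
    """edges_G(U1, U2): all edges of G with one endpoint in U1 and the other in U2."""
    V, E = G
    return {(u, v) for (u, v) in E if (u in U1 and v in U2) or (u in U2 and v in U1)}

def cut_size(G, S):
    """|edges_G(S)| = |edges_G(S, complement of S)|."""
    V, E = G
    return len(edges(G, S, V - S))

def is_alpha_cut(G, S, alpha):
    """{S, complement of S} is an alpha-cut if at most alpha edges cross it."""
    return cut_size(G, S) <= alpha

def remove_node(G, i):
    """G \\ {i}: remove node i together with all of its edges."""
    V, E = G
    return (V - {i}, {(u, v) for (u, v) in E if i not in (u, v)})

G = ({1, 2, 3, 4}, {(1, 2), (2, 3), (3, 4), (4, 1)})   # the 4-cycle 1-2-3-4-1
assert cut_size(G, {1, 2}) == 2                        # the cut {1,2} vs {3,4} is crossed twice
assert is_alpha_cut(G, {1, 2}, 2)                      # hence it is a 2-cut
assert remove_node(G, 1) == ({2, 3, 4}, {(2, 3), (3, 4)})
```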

MPC Model. We consider multiparty protocols in the stand-alone, synchronous model, and require security with guaranteed output delivery. We refer the reader to [7, 24] for a precise definition of the model. Throughout the paper we assume malicious adversaries that can deviate from the protocol in an arbitrary manner. We will consider both static corruptions, where the set of corrupted parties is fixed at the onset of the protocol, and adaptive corruptions, where the adversary can dynamically corrupt parties during the protocol execution. In addition, we will consider both PPT adversaries and computationally unbounded adversaries.

Recall that in the synchronous model protocols proceed in rounds, where every round consists of a send phase followed by a receive phase. The adversary is assumed to be rushing, meaning that he can determine the messages for corrupted parties after seeing the messages sent by the honest parties. We assume a complete network of point-to-point channels (broadcast is not assumed), where every party has the ability to send a message to every other party. We will normally consider secure (private) channels, where the adversary learns that a message has been sent between two honest parties, but not its content. If public-key encryption is assumed, this assumption can be relaxed to authenticated channels, where the adversary can learn the content of all messages (but not change them). For our upper bound in the adaptive setting (Sect. 4.2) we consider hidden channels (as introduced in [9]), where the adversary does not even know whether two honest parties have communicated or not.

3 Communication Graphs Induced by MPC Protocols

In this section, we present formal definitions of properties induced by the communication graph of interactive protocols. These definitions are inspired by previous works in distributed computing [29, 30, 32, 33] and multiparty computation [3, 5, 9] that constructed interactive protocols with low locality.

3.1 Ensembles of Protocols and Functionalities

In order to capture certain asymptotic properties of the communication graphs of generic n-party protocols, such as edge expansion and locality, it is useful to consider a family of protocols that are parametrized by the number of parties n. This is implicit in many distributed protocols and in generic multiparty protocols, for example [2, 17, 25, 34, 35]. We note that for many large-scale protocols, e.g., protocols with low locality [3, 5, 29, 30, 32, 33], the security guarantees increase with the number of parties, and in fact, the number of parties is assumed to be polynomially related to the security parameter.

Definition 1

(protocol ensemble). Let \(f=\{f_n\}_{n\in {\mathbb {N}}}\) be an ensemble of functionalities, where \(f_n\) is an n-party functionality, let \(\pi =\{\pi _n\}_{n\in {\mathbb {N}}}\) be an ensemble of protocols, and let \({\mathcal {C}} =\{{\mathcal {C}} _n\}_{n\in {\mathbb {N}}}\) be an ensemble of classes of adversaries (e.g., \({\mathcal {C}} _n\) is the class of PPT t(n)-adversaries). We say that \(\pi \) securely computes f tolerating adversaries in \({\mathcal {C}} \) if for every n that is polynomially related to the security parameter \({\kappa } \), it holds that \(\pi _n\) securely computes \(f_n\) tolerating adversaries in \({\mathcal {C}} _n\).

In Sect. 4, we will consider several classes of adversaries. We use the following notation for clarity and brevity.

Definition 2

Let \(f=\{f_n\}_{n\in {\mathbb {N}}}\) be an ensemble of functionalities and let \(\pi =\{\pi _n\}_{n\in {\mathbb {N}}}\) be an ensemble of protocols. We say that \(\pi \) securely computes f tolerating adversaries of the form \(\textsf {type}\) (e.g., static PPT t(n)-adversaries, adaptive t(n)-adversaries, etc.), if \(\pi \) securely computes f tolerating adversaries in \({\mathcal {C}} =\{{\mathcal {C}} _n\}_{n\in {\mathbb {N}}}\), where for every n, the set \({\mathcal {C}} _n\) is the class of adversaries of the form \(\textsf {type}\).

3.2 The Communication Graph of a Protocol’s Execution

Intuitively, the communication graph induced by a protocol should include an edge \((i,j)\) precisely if parties \({P} _i\) and \({P} _j\) exchange messages during the protocol execution. For instance, consider the property of locality, corresponding to the maximum degree of the communication graph. When considering malicious adversaries that can deviate from the protocol using an arbitrary strategy, it is important to consider only messages that are sent by honest parties and messages that are received by honest parties. Otherwise, every corrupted party can send a message to every other corrupted party, yielding a subgraph with degree \(\varTheta (n)\). We note that restricting the analysis to only consider honest parties is quite common in the analysis of protocols.

Another issue that must be taken into consideration is flooding by the adversary. Indeed, there is no way to prevent the adversary from sending messages from all corrupted parties to all honest parties; however, we wish to count only those messages which are actually processed by honest parties. To model this, the receive phase of every communication round is composed of two sub-phases:

  1. The filtering sub-phase: Each party inspects the list of messages received in the previous round, according to specific filtering rules defined by the protocol, and discards the messages that do not pass the filter. The resulting list of messages is appended to the local transcript of the protocol.

  2. The processing sub-phase: Based on its local transcript, each party computes the next-message function and obtains the list of messages to be sent in the current round along with the list of recipients, and sends them to the relevant parties.

In practice, the filtering procedure should be “lightweight,” such as verifying validity of a signature. However, we assume only an abstraction and defer the actual choice of filtering procedure (as well as corresponding discussion) to specific protocol specifications.
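As an illustration, the following sketch (ours; the filter predicate and next-message function are arbitrary stand-ins for protocol-specific choices) shows the shape of the two sub-phases.

```python
def receive_phase(transcript, incoming, passes_filter, next_message):
    # 1. Filtering sub-phase: discard messages that do not pass the protocol's filter and
    #    append the surviving messages to the local transcript.
    kept = [msg for msg in incoming if passes_filter(msg, transcript)]
    transcript = transcript + kept
    # 2. Processing sub-phase: evaluate the next-message function on the local transcript,
    #    obtaining the (recipient, message) pairs to be sent in the current round.
    outgoing = next_message(transcript)
    return transcript, outgoing

# Toy instantiation: keep only messages flagged as carrying a valid signature, and echo the
# most recently kept payload to party P3.
transcript, outgoing = receive_phase(
    transcript=[],
    incoming=[("P4", "spam", False), ("P2", "share", True)],
    passes_filter=lambda msg, _: msg[2],
    next_message=lambda tr: [("P3", tr[-1][1])] if tr else [])
print(transcript)   # [('P2', 'share', True)]
print(outgoing)     # [('P3', 'share')]
```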

We now turn to define the communication graph of a protocol’s execution, by which we mean the deterministic instance of the protocol obtained by fixing the adversarial strategy and all inputs and random coins of the parties and of the adversary. We consider protocols that are defined in the correlated-randomness model (e.g., for establishing PKI). This is without loss of generality, since by defining the “empty distribution,” where every party is given an empty string, we can also model protocols in the plain model. Initially, we focus on the static setting, where the set of corrupted parties is determined at the onset of the protocol. We discuss the adaptive setting in the full version [4].

Definition 3

(protocol execution instance). For \(n \in {\mathbb {N}}\), let \(\pi _n\) be an n-party protocol, let \({\kappa } \) be the security parameter, let \({ \varvec{x}}=(x_1,\ldots ,x_n)\) be an input vector for the parties, let \({ \varvec{\rho }}=(\rho _1,\ldots ,\rho _n)\) be correlated randomness for the parties, let \(\mathcal{A} \) be an adversary, let \({z} \) be the auxiliary information of \(\mathcal{A} \), let \({\mathcal {I}} \subseteq [n]\) be the set of indices of corrupted parties controlled by \(\mathcal{A} \), and let \({ \varvec{r}}=(r_1,\ldots ,r_n,r_\mathcal{A})\) be the vector of random coins for the parties and for the adversary.

Denote by \(\textsf {instance}(\pi _n)=(\pi _n,\mathcal{A},{\mathcal {I}},{\kappa },{ \varvec{x}},{ \varvec{\rho }},{z},{ \varvec{r}})\) the list of parameters that deterministically define an execution instance of the protocol \(\pi _n\).

Note that \(\textsf {instance}(\pi _n)\) fully specifies the entire views and transcript of the protocol execution, including all messages sent to/from honest parties.

Definition 4

(communication graph of protocol execution). For \(n \in {\mathbb {N}}\), let \(\textsf {instance}(\pi _n)=(\pi _n,\mathcal{A},{\mathcal {I}},{\kappa },{ \varvec{x}},{ \varvec{\rho }},{z},{ \varvec{r}})\) be an execution instance of the protocol \(\pi _n\). We now define the following communication graphs induced by this execution instance. Each graph is defined over the set of n vertices [n].

  • Outgoing communication graph. The directed graph \(G_\textsf {out}(\textsf {instance}(\pi _n))=([n],E_\textsf {out})\) captures all the communication lines that are used by honest parties to send messages. That is,

    $$ E_\textsf {out}(\textsf {instance}(\pi _n)) = \left\{ (i,j) \mid {P} _i \text { is honest and sent a message to } {P} _j\right\} . $$
  • Incoming communication graph. The directed graph \(G_\textsf {in}(\textsf {instance}(\pi _n))=([n],E_\textsf {in})\) captures all the communication lines in which honest parties received messages that were processed (i.e., excluding messages that were filtered out). That is,

    $$ E_\textsf {in}(\textsf {instance}(\pi _n))\! =\! \left\{ (i,j) \mid {P} _j \text { is honest and processed a message received from } {P} _i\right\} . $$
  • Full communication graph. The undirected graph \(G_\textsf {full}(\textsf {instance}(\pi _n))=([n],E_\textsf {full})\) captures all the communication lines in which honest parties received messages that were processed, or used by honest parties to send messages. That is,

    $$ E_\textsf {full}(\textsf {instance}(\pi _n)) = \left\{ (i,j) \mid (i,j)\in E_\textsf {out}\text { or } (i,j)\in E_\textsf {in}\right\} . $$

We will sometimes consider ensembles of protocol instances (for \(n\in {\mathbb {N}}\)) and the corresponding ensembles of graphs they induce.

Looking ahead, in subsequent sections we will consider the full communication graph \(G_\textsf {full}\). Apart from making the presentation clear, the graphs \(G_\textsf {out}\) and \(G_\textsf {in}\) are used for defining \(G_\textsf {full}\) above, and the locality of a protocol in Definition 5. Note that \(G_\textsf {out}\) and \(G_\textsf {in}\) are interesting in their own right, and can be used for a fine-grained analysis of the communication graph of protocols in various settings, e.g., when transmitting messages is costly but receiving messages is cheap (or vice versa). We leave it open as an interesting problem to study various graph properties exhibited by these two graphs.
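The following minimal sketch (ours, using a hypothetical message-log format not defined in the paper: entries (sender, receiver, processed), where processed records whether the receiver kept the message after filtering) shows how the three graphs of Definition 4 could be extracted from an execution transcript.

```python
def communication_graphs(honest, log):
    E_out = {(i, j) for (i, j, processed) in log if i in honest}                 # honest sender
    E_in = {(i, j) for (i, j, processed) in log if j in honest and processed}    # honest receiver, processed
    E_full = {frozenset(e) for e in E_out | E_in}                                # undirected union
    return E_out, E_in, E_full

honest = {1, 2, 3}                  # party 4 is corrupted
log = [(1, 2, True),                # honest -> honest, processed
       (4, 3, False),               # corrupted -> honest, filtered out (flooding attempt)
       (4, 1, True),                # corrupted -> honest, processed
       (4, 4, True)]                # corrupted -> corrupted: ignored by all three graphs
E_out, E_in, E_full = communication_graphs(honest, log)
assert E_out == {(1, 2)}
assert E_in == {(1, 2), (4, 1)}
assert E_full == {frozenset({1, 2}), frozenset({1, 4})}
```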

3.3 Locality of a Protocol

We now present a definition of communication locality, aligning with that of [5], with respect to the terminology introduced above.

Definition 5

(locality of a protocol instance). Let \(\textsf {instance}(\pi _n) = (\pi _n,\mathcal{A},{\mathcal {I}},{\kappa },{ \varvec{x}},{ \varvec{\rho }},{z},{ \varvec{r}})\) be an execution instance as in Definition 3. For every honest party \({P} _i\) we define the locality of party \({P} _i\) to be the number of parties from which \({P} _i\) received and processed messages, or to which \({P} _i\) sent messages; that is,

$$ \ell _i(\textsf {instance}(\pi _n)) = \left| \left\{ j \mid (i,j) \in E_\textsf {out}\right\} \cup \left\{ j \mid (j,i) \in E_\textsf {in}\right\} \right| . $$

The locality of \(\textsf {instance}(\pi _n)\) is defined as the maximum locality of an honest party, i.e.,

$$ \ell (\textsf {instance}(\pi _n)) = \max _{i\in [n]\setminus {\mathcal {I}}}\left\{ \ell _i(\textsf {instance}(\pi _n))\right\} . $$
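A direct, illustrative transcription of this definition (ours), computed from the edge sets \(E_\textsf {out}\) and \(E_\textsf {in}\) of Definition 4:

```python
def locality(honest, E_out, E_in):
    def l_i(i):                                            # locality of honest party P_i
        sent_to = {j for (a, j) in E_out if a == i}        # parties P_i sent messages to
        recv_from = {j for (j, b) in E_in if b == i}       # parties whose messages P_i processed
        return len(sent_to | recv_from)
    return max(l_i(i) for i in honest)                     # locality of the instance

E_out = {(1, 2), (1, 3), (2, 3)}
E_in = {(4, 1), (1, 2), (1, 3), (2, 3)}
print(locality({1, 2, 3}, E_out, E_in))   # 3: party 1 touches parties {2, 3, 4}
```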

We proceed by defining locality as a property of a protocol ensemble. The protocol ensemble is parametrized by the number of parties n. To align with standard notions of security where asymptotic measurements are with respect to the security parameter \({\kappa } \), we consider the situation where the growth of n and \({\kappa } \) are polynomially related.

Definition 6

(locality of a protocol). Let \(\pi =\{\pi _n\}_{n\in {\mathbb {N}}}\) be a family of protocols in the correlated-randomness model with distribution \(D_{\pi }=\{D_{\pi _n}\}_{n\in {\mathbb {N}}}\), and let \({\mathcal {C}} =\{{\mathcal {C}} _n\}_{n\in {\mathbb {N}}}\) be a family of adversary classes. We say that \(\pi \) has locality \(\ell (n)\) facing adversaries in \({\mathcal {C}} \) if for every n that is polynomially related to \({\kappa } \), every input vector \({ \varvec{x}}=(x_1,\ldots ,x_n)\), every auxiliary information \({z} \), every adversary \(\mathcal{A} \in {\mathcal {C}} _n\) running with \({z} \), and every set of corrupted parties \({\mathcal {I}} \subseteq [n]\), it holds that

$$ {\mathrm {Pr}}\left[ \ell (\pi _n,\mathcal{A},{\mathcal {I}},{\kappa },{ \varvec{x}},{z}) > \ell (n)\right] \le \textsf {negl}({\kappa }), $$

where \(\ell (\pi _n,\mathcal{A},{\mathcal {I}},{\kappa },{ \varvec{x}},{z})\) is the random variable corresponding to \(\ell (\pi _n,\mathcal{A},{\mathcal {I}},{\kappa },{ \varvec{x}},{ \varvec{\rho }},{z},{ \varvec{r}})\) when \({ \varvec{\rho }}\) is distributed according to \(D_{\pi _n}\) and \({ \varvec{r}}\) is uniformly distributed.

3.4 Edge Expansion of a Protocol

The measure of complexity we study for the communication graph of interactive protocols will be that of edge expansion (see discussion below). We refer the reader to [18, 28] for more background on expanders. We consider a definition of edge expansion which satisfies a natural monotonic property, where adding more edges cannot decrease the graph’s measure of expansion.

Definition 7

(edge expansion of a graph). Given an undirected graph \(G=(V,E)\), the edge expansion ratio of G, denoted h(G), is defined as

$$\begin{aligned} h(G)=\min _{\{S\subseteq V: |S|\le \frac{|V|}{2}\}} \frac{|\textsf {edges}(S)|}{|S|}, \end{aligned}$$

where \(\textsf {edges}(S)\) denotes the set of edges between S and its complement \(\bar{S}=V\setminus S\).
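For intuition, the following brute-force computation of h(G) (ours; exponential in |V|, so usable only on tiny examples) contrasts a complete graph with a graph consisting of two triangles joined by a single bridge edge.

```python
from itertools import combinations

def edge_boundary(E, S):
    return sum(1 for (u, v) in E if (u in S) != (v in S))

def h(V, E):
    best = float('inf')
    for k in range(1, len(V) // 2 + 1):                 # subsets of size at most |V|/2
        for S in combinations(V, k):
            best = min(best, edge_boundary(E, set(S)) / k)
    return best

V = list(range(6))
K6 = {(u, v) for u in V for v in V if u < v}            # complete graph on 6 vertices
bridged = {(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)}   # two triangles, one bridge
print(h(V, K6))        # 3.0
print(h(V, bridged))   # 0.333...: witnessed by the cut {0,1,2} vs {3,4,5}, crossed by one edge
```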

Definition 8

(family of expander graphs). A sequence \(\{G_n\}_{n\in {\mathbb {N}}}\) of graphs is a family of expander graphs if there exists a constant \(\epsilon >0\) such that \( h(G_n) \ge \epsilon \) for all n.

We now consider the natural extension of graph expansion to the setting of protocol-induced communication graphs.

Definition 9

(bounds on edge expansion of a protocol). Let \(\pi =\{\pi _n\}_{n\in {\mathbb {N}}}\), \(D_{\pi }=\{D_{\pi _n}\}_{n\in {\mathbb {N}}}\), and \({\mathcal {C}} =\{{\mathcal {C}} _n\}_{n\in {\mathbb {N}}}\) be as in Definition 6.

  • A function f(n) is a lower bound of the edge expansion of \(\pi \) facing adversaries in \({\mathcal {C}} \), denoted \(f(n) \le h_{\pi ,D_{\pi },{\mathcal {C}}}(n)\), if for every n that is polynomially related to \({\kappa } \), every \({ \varvec{x}}=(x_1,\ldots ,x_n)\), every auxiliary information \({z} \), every \(\mathcal{A} \in {\mathcal {C}} _n\) running with \({z} \), and every \({\mathcal {I}} \subseteq [n]\), it holds that

    $$ {\mathrm {Pr}}\left[ h(G_\textsf {full}(\pi _n,\mathcal{A},{\mathcal {I}},{\kappa },{ \varvec{x}},{z})) \le f(n)\right] \le \textsf {negl}({\kappa }), $$

    where \(G_\textsf {full}(\pi _n,\mathcal{A},{\mathcal {I}},{\kappa },{ \varvec{x}},{z})\) is the random variable \(G_\textsf {full}(\pi _n,\mathcal{A},{\mathcal {I}},{\kappa },{ \varvec{x}},{ \varvec{\rho }},{z},{ \varvec{r}})\), when \({ \varvec{\rho }}\) is distributed according to \(D_{\pi _n}\) and \({ \varvec{r}}\) is uniformly distributed.

  • A function f(n) is an upper bound of the edge expansion of \(\pi \) facing adversaries in \({\mathcal {C}} \), denoted \(f(n) \ge h_{\pi ,D_{\pi },{\mathcal {C}}}(n)\), if there exists a polynomial relation between n and \({\kappa } \) such that for infinitely many n, for every \({ \varvec{x}}=(x_1,\ldots ,x_n)\), every auxiliary information \({z} \), every \(\mathcal{A} \in {\mathcal {C}} _n\) running with \({z} \), and every \({\mathcal {I}} \subseteq [n]\), it holds that

    $$ {\mathrm {Pr}}\left[ h(G_\textsf {full}(\pi _n,\mathcal{A},{\mathcal {I}},{\kappa },{ \varvec{x}},{z})) \ge f(n)\right] \le \textsf {negl}({\kappa }). $$

Definition 10

(expander protocol). Let \(\pi =\{\pi _n\}_{n\in {\mathbb {N}}}\), \(D_{\pi }=\{D_{\pi _n}\}_{n\in {\mathbb {N}}}\), and \({\mathcal {C}} =\{{\mathcal {C}} _n\}_{n\in {\mathbb {N}}}\) be as in Definition 6. We say that the communication graph of \(\pi \) is an expander, facing adversaries in \({\mathcal {C}} \), if there exists a constant function \(\epsilon (n)>0\) such that \(\epsilon (n)\le h_{\pi ,D_{\pi },{\mathcal {C}}}(n)\).

We note that most (if not all) secure protocols in the literature are expanders according to Definition 10, both in the realm of distributed computing [17, 20, 22, 29, 30, 32, 33] and in the realm of MPC [2, 3, 5, 9, 25]. Proving that a protocol is not an expander according to this definition requires exhibiting an adversary for which the edge expansion is sub-constant. Looking ahead, both in our constructions of protocols that are not expanders (Sect. 4) and in our lower bound, showing that non-expander protocols can be attacked (Sect. 5), we use a stronger definition, which requires that the edge expansion be sub-constant facing all adversaries; see Definition 11 below. While this makes our positive results stronger, we leave it as an interesting open question to attack protocols that do not satisfy Definition 10.

Definition 11

(strongly non-expander protocol). Let \(\pi =\{\pi _n\}_{n\in {\mathbb {N}}}\), \(D_{\pi }=\{D_{\pi _n}\}_{n\in {\mathbb {N}}}\), and \({\mathcal {C}} =\{{\mathcal {C}} _n\}_{n\in {\mathbb {N}}}\) be as in Definition 6. We say that the communication graph of \(\pi \) is strongly not an expander, facing adversaries in \({\mathcal {C}} \), if there exists a sub-constant function \(\alpha (n)\in o(1)\) such that \(\alpha (n) \ge h_{\pi ,D_{\pi },{\mathcal {C}}}(n)\).

We next state a useful observation (proven in the full version [4]) that will come into play in Sect. 5: if the communication graph of \(\pi \) is strongly not an expander, then there must exist a sublinear cut in the graph.

Lemma 1

Let \(\pi =\{\pi _n\}_{n\in {\mathbb {N}}}\) be a family of protocols in the correlated-randomness model with distribution \(D_{\pi }=\{D_{\pi _n}\}_{n\in {\mathbb {N}}}\), and let \({\mathcal {C}} =\{{\mathcal {C}} _n\}_{n\in {\mathbb {N}}}\) be such that \({\mathcal {C}} _n\) is the class of adversaries corrupting at most \(\beta \cdot n\) parties, for a constant \(0<\beta <1\).

Assuming the communication graph of \(\pi \) is strongly non-expanding facing adversaries in \({\mathcal {C}} \), there exists a sublinear function \(\alpha (n)\in o(n)\) such that for infinitely many n’s the full communication graph of \(\pi _n\) has an \(\alpha (n)\)-cut with overwhelming probability.
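A minimal sketch of the argument (the full proof, including the probabilistic quantification over \({ \varvec{\rho }}\) and \({ \varvec{r}}\), appears in [4]): sub-constant expansion already exhibits a sublinear cut via the minimizing set in Definition 7,

$$ h\bigl (G_\textsf {full}\bigr ) \le \alpha '(n) \in o(1) \;\Longrightarrow \; \exists \, S,\ |S|\le \tfrac{n}{2}:\ |\textsf {edges}(S)| \le \alpha '(n)\cdot |S| \le \frac{\alpha '(n)\cdot n}{2} =: \alpha (n) \in o(n). $$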

4 MPC with Non-Expanding Communication Graph

In this section, we show that in various standard settings, the communication graph of an MPC protocol is not required to be an expander graph, even when the communication locality is poly-logarithmic. In Sect. 4.1, we focus on static corruptions and computational security. In Sect. 4.2, we extend the construction to the information-theoretic setting and to the adaptive-corruption setting. The proofs for these extensions can be found in the full version [4].

4.1 Computational Security with Static Corruptions

We start by considering the computational setting with static corruptions.

Theorem 5

Let \(f=\{f_n\}_{n\in {\mathbb {N}}}\) be an ensemble of functionalities, let \(\delta >0\), and assume that one-way functions exist. Then, the following holds in the PKI-hybrid model with secure channels:

  1. Let \(\beta < 1/4-\delta \) and let \(t(n)=\beta \cdot n\). Then, f can be securely computed by a protocol ensemble \(\pi \) tolerating static PPT t(n)-adversaries such that the communication graph of \(\pi \) is strongly not an expander.

  2. Let \(\beta < 1/6-\delta \) and let \(t(n)=\beta \cdot n\). Then, f can be securely computed by a protocol ensemble \(\pi \) tolerating static PPT t(n)-adversaries such that (1) the communication graph of \(\pi \) is strongly not an expander, and (2) the locality of \(\pi \) is poly-logarithmic in n.

  3. Let \(\beta < 1/4-\delta \), let \(t(n)=\beta \cdot n\), and assume in addition the secret-key infrastructure (SKI) model and the existence of public-key encryption schemes. Then, f can be securely computed by a protocol ensemble \(\pi \) tolerating static PPT t(n)-adversaries such that (1) the communication graph of \(\pi \) is strongly not an expander, and (2) the locality of \(\pi \) is poly-logarithmic in n.

Proof

The theorem follows from Lemma 2 (below) by instantiating the hybrid functionalities using existing MPC protocols from the literature.

  • The first part follows using honest-majority MPC protocols that exist assuming one-way functions in the secure-channels model, e.g., the protocol of Beaver et al. [1] or of Damgård and Ishai [14].

  • The second part follows using the low-locality MPC protocol of Boyle et al. [5] that exists assuming one-way functions in the PKI model with secure channels and tolerates \(t= (1/3-\delta )n\) static corruptions.

  • The third part follows using the low-locality MPC protocol of Chandran et al. [9] that exists assuming public-key encryption in the PKI and SKI model with authenticated channels and tolerates \(t<n/2\) static corruptions.   \(\square \)

Ideal Functionalities used in the Construction. The proof of Theorem 5 relies on Lemma 2 (below). We start by defining the notations and the ideal functionalities that will be used in the protocol considered in Lemma 2.

Signature notations. Given a signature scheme \((\textsf {Gen},\textsf {Sign},\textsf {Verify})\) and m pairs of signing and verification keys \((\texttt {sk}_{i},\texttt {vk}_{i})\leftarrow \textsf {Gen} (1^{\kappa })\) for \(i\in [m]\), we use the following notations for signing and verifying with multiple keys:

  • Given a message \(\mu \) we denote by \(\textsf {Sign} _{\texttt {sk}_{1},\ldots ,\texttt {sk}_{m}}(\mu )\) the vector of m signatures \(\sigma =(\sigma _1,\ldots ,\sigma _m)\), where \(\sigma _i\leftarrow \textsf {Sign} _{\texttt {sk}_{i}}(\mu )\).

  • Given a message \(\mu \) and a signature \(\sigma =(\sigma _1,\ldots ,\sigma _m)\), we denote by \(\textsf {Verify} _{\texttt {vk}_{m+1},\ldots ,\texttt {vk}_{2m}}(\mu ,\sigma )\) the verification algorithm that for every \(i\in [m]\) computes \(b_i\leftarrow \textsf {Verify} _{\texttt {vk}_{m+i}}(\mu ,\sigma _i)\), and accepts the signature \(\sigma \) if and only if \(\sum _{i=1}^m b_i \ge m-t\), i.e., even if up to t signatures are invalid.

We note that it is possible to use multi-signatures or aggregated signatures in order to obtain better communication complexity; however, we use the notation above both for simplicity and as a step towards the information-theoretic construction in the following section.
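To make the threshold rule concrete, the following toy sketch (ours; the HMAC-style tags are an insecure stand-in for an actual signature scheme, and verification is performed with the keys matching the signers) illustrates only the counting rule that accepts when at least \(m-t\) individual signatures verify.

```python
import hashlib, hmac, os

def gen():                        # toy Gen(1^kappa): the "verification key" equals the signing key,
    sk = os.urandom(32)           # so this is really a MAC, not a signature scheme -- illustration only
    return sk, sk

def sign(sk, msg):                # toy Sign_sk(mu)
    return hmac.new(sk, msg, hashlib.sha256).digest()

def verify(vk, msg, sig):         # toy Verify_vk(mu, sigma_i)
    return hmac.compare_digest(hmac.new(vk, msg, hashlib.sha256).digest(), sig)

def multi_sign(sks, msg):         # Sign_{sk_1,...,sk_m}(mu) = (sigma_1,...,sigma_m)
    return [sign(sk, msg) for sk in sks]

def multi_verify(vks, msg, sigma, t):
    # Accept iff at least m - t of the individual signatures are valid.
    return sum(verify(vk, msg, s) for vk, s in zip(vks, sigma)) >= len(vks) - t

m, t = 8, 2
keys = [gen() for _ in range(m)]
mu = b"C_2 = {3, 7, 12}"
sigma = multi_sign([sk for sk, _ in keys], mu)
sigma[0] = b"\x00" * 32           # one corrupted signer contributes garbage ...
print(multi_verify([vk for _, vk in keys], mu, sigma, t))   # ... still True, since 7 >= m - t = 6
```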

The Elect-and-Share functionality. In the Elect-and-Share m-party functionality, every party \({P} _i\) has a pair of inputs \((x_i,\texttt {sk}_{i})\), where \(x_i\in {\{0,1\}^{*}}\) is the “actual input” and \(\texttt {sk}_{i} \) is a private signing key. The functionality starts by electing two random subsets \({\mathcal {C}} _1,{\mathcal {C}} _2\subseteq [m]\) of size \(n'\), and signing each subset using all signing keys. In addition, every input value \(x_i\) is secret shared using a \((t',n')\) error-correcting secret-sharing scheme. Every party receives as output the subset \({\mathcal {C}} _1\), whereas a party \({P} _i\), for \(i\in {\mathcal {C}} _1\), receives an additional output consisting of a signature on \({\mathcal {C}} _1\), the signed subset \({\mathcal {C}} _2\), and one share of each of the m input values.

The Reconstruct-and-Compute functionality. The Reconstruct-and-Compute functionality is an m-party functionality. Denote the party-set by \(\{{P} _{m+1}, \ldots , {P} _{2m}\}\). Every party \({P} _{m+i}\) has an input value \(x_{m+i}\in {\{0,1\}^{*}}\), and potentially an additional input consisting of a signed subset \({\mathcal {C}} _2\subseteq [m]\) and a vector of m shares. The functionality starts by verifying the signatures, where every invalid input is ignored. The signed inputs should define a single subset \({\mathcal {C}} _2\subseteq [m]\) (otherwise the functionality aborts), and the functionality uses the additional inputs of parties \({P} _{m+i}\), for every \(i\in {\mathcal {C}} _2\), in order to reconstruct the m-tuple \((x_1,\ldots ,x_m)\). Finally, the functionality computes \(y=f(x_1,\ldots ,x_{2m})\) and hands y as the output for every party.

The Output-Distribution functionality. The m-party Output-Distribution functionality is parametrized by a subset \({\mathcal {C}} _1\subseteq [m]\). Every party \({P} _i\), with \(i\in {\mathcal {C}} _1\), hands in a value, and the functionality distributes the majority value among these inputs to all the parties.

Constructing Non-Expander Protocols.

High-level overview of the protocol. Having defined the ideal functionalities, we are ready to present the main lemma. We start by describing the underlying idea behind the non-expanding MPC protocol \(\pi ^{\mathsf {ne}}_n\) (Fig. 2). At the onset of the protocol, the party-set is partitioned into two subsets of size \(m=n/2\), a left subset and a right subset (see Fig. 1). The left subset will invoke the Elect-and-Share functionality, which elects two subsets \({\mathcal {C}} _1,{\mathcal {C}} _2\subseteq [m]\) of size \(n'=\log ^2(n)\). The parties in the left subset corresponding to \({\mathcal {C}} _1\) and the parties in the right subset corresponding to \({\mathcal {C}} _2\) will form a “bridge”. The parties in \({\mathcal {C}} _1\) will receive shares of all input values of parties in the left subset, and transfer them to \({\mathcal {C}} _2\). Next, the right subset of parties will invoke the Reconstruct-and-Compute functionality, where each party hands in its input value, and parties in \({\mathcal {C}} _2\) additionally provide the shares they received from \({\mathcal {C}} _1\). The functionality reconstructs the left subset’s inputs, computes the function f, and hands the output to the right subset. Finally, \({\mathcal {C}} _2\) transfers the output value to \({\mathcal {C}} _1\), and the left subset invokes the Output-Distribution functionality in order to distribute the output value to all the parties.
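The following highly simplified sketch (ours) illustrates this data flow, with the three hybrid functionalities modeled as local trusted procedures, signatures omitted, and the \((t',n')\) error-correcting secret sharing replaced by plain replication for brevity; it is not a secure implementation.

```python
import math, random

def pi_ne(inputs, f):
    n = len(inputs); m = n // 2; n_prime = round(math.log2(n) ** 2)
    left, right = list(range(m)), list(range(m, n))

    # Elect-and-Share (left half): elect C1 from the left indices and C2 from the right,
    # and hand (here: replicated) "shares" of the left inputs to the parties in C1.
    C1, C2 = random.sample(left, n_prime), random.sample(right, n_prime)
    shares_at_C1 = {i: list(inputs[:m]) for i in C1}

    # Bridge, left -> right: C1 forwards its shares to C2 (with replication, one copy suffices).
    shares_at_C2 = {j: shares_at_C1[C1[0]] for j in C2}

    # Reconstruct-and-Compute (right half): reconstruct the left inputs from C2's shares and
    # evaluate f on all 2m inputs.
    left_inputs = shares_at_C2[C2[0]]
    y = f(left_inputs + inputs[m:])

    # Bridge, right -> left, then Output-Distribution (left half): every party learns y.
    return [y] * n

outs = pi_ne(list(range(256)), sum)       # n = 256, so n' = 64 <= m = 128
print(outs[0], outs[-1])                  # 32640 32640
```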

Fig. 1. The non-expanding subsets in the protocol \(\pi ^{\mathsf {ne}}\). The sets \({\mathcal {C}} _1\) and \({\mathcal {C}} _2\) are of poly-logarithmic size and the sets \({\mathcal {P}} _1\) and \({\mathcal {P}} _2\) are of linear size. The number of edges between \({\mathcal {P}} _1\) and \({\mathcal {P}} _2\) is poly-logarithmic.

Lemma 2

Let \(f=\{f_n\}_{n\in 2{\mathbb {N}}}\), where \(f_n\) is an n-party functionality for \(n=2m\), let \(\delta >0\), and assume that one-way functions exist. Then, in the PKI-hybrid model with secure channels, where a trusted party additionally computes the m-party functionality-ensembles defined above (Elect-and-Share, Reconstruct-and-Compute, and Output-Distribution) tolerating \(\gamma \cdot m\) corruptions, there exists a protocol ensemble \(\pi \) that securely computes f tolerating static PPT \(\beta n\)-adversaries, for \(\beta <\min (1/4-\delta ,\gamma /2)\), with the following guarantees:

  1. The communication graph of \(\pi \) is strongly not an expander.

  2. Denote by \(f_1,f_2,f_3\) the Elect-and-Share, Reconstruct-and-Compute, and Output-Distribution functionality-ensembles, respectively. If protocol-ensembles \(\rho _1,\rho _2,\rho _3\) securely compute \(f_1,f_2,f_3\) (resp.) with locality \(\ell _\rho =\ell _\rho (m)\), then \(\pi ^{f_i\rightarrow \rho _i}\) (where every call to \(f_i\) is replaced by an execution of \(\rho _i\)) has locality \(\ell =2\cdot \ell _\rho + \log ^2(n)\).

Fig. 2. The non-expanding MPC protocol \(\pi ^{\mathsf {ne}}_n\) in the hybrid model with the Elect-and-Share, Reconstruct-and-Compute, and Output-Distribution functionalities.

Proof

For \(m\in {\mathbb {N}}\) and \(n=2m\), we construct the n-party protocol \(\pi ^{\mathsf {ne}}_n\) (see Fig. 2) in the hybrid model with the Elect-and-Share, Reconstruct-and-Compute, and Output-Distribution functionalities. The parameters for the protocol are \(n'=\log ^2(n)\) and \(t'=(1/2-\delta )\cdot n'\). We start by proving in Proposition 1 that the protocol \(\pi ^{\mathsf {ne}}_n\) securely computes \(f_n\). Next, in Proposition 2 we prove that the communication graph of \(\pi ^{\mathsf {ne}}\) is strongly not an expander. Finally, in Proposition 3 we prove that by instantiating the functionalities using low-locality protocols, the resulting protocol has low locality.

Proposition 1

For sufficiently large n, the protocol \(\pi ^{\mathsf {ne}}_n\) securely computes the function \(f_n\), tolerating static PPT \(\beta n\)-adversaries, in the hybrid model with the Elect-and-Share, Reconstruct-and-Compute, and Output-Distribution functionalities.

The proof of Proposition 1 can be found in the full version [4].

Proposition 2

The communication graph of the protocol \(\pi ^{\mathsf {ne}}\) is strongly not an expander, facing static PPT \(\beta n\)-adversaries.

Proof

For \(n=2m\), consider the set \({\mathcal {P}} _1=\{{P} _1,\ldots ,{P} _m\}\) and its complement \({\mathcal {P}} _2={\mathcal {P}} \setminus {\mathcal {P}} _1\). For any input vector and for every static PPT \(\beta n\)-adversary, with overwhelming probability it holds that \(|{\mathcal {P}} _1|=n/2\) and \(|\textsf {edges}({\mathcal {P}} _1,{\mathcal {P}} _2)|\le \log ^2(n)\cdot \log ^2(n)\). Therefore, considering the function

$$ f(n)=\frac{2\log ^4(n)}{n}, $$

it holds that \(f(n)\in o(1)\) and f(n) is an upper bound of the edge expansion of \(\pi ^{\mathsf {ne}}\). We conclude that the communication graph of \(\pi ^{\mathsf {ne}}\) is strongly not an expander.    \(\square \)

Proposition 3

Let \(\rho _1,\rho _2,\rho _3\), and \(\pi ^{f_i\rightarrow \rho _i}\) be the protocols defined in Lemma 2, and let \(\ell _\rho =\ell _\rho (m)\) be the upper bound of the locality of \(\rho _1,\rho _2,\rho _3\). Then \(\pi ^{f_i\rightarrow \rho _i}\) has locality \(\ell =2\cdot \ell _\rho + \log ^2(n)\).

Proof

Every party in \({\mathcal {P}} _1\) communicates with \(\ell _\rho \) parties when executing \(\rho _1\), and with at most another \(\ell _\rho \) parties when executing \(\rho _3\). In addition, every party in \({\mathcal {C}} _1\) communicates with all \(n'=\log ^2(n)\) parties in \({\mathcal {C}} _2\). Similarly, every party in \({\mathcal {P}} _2\) communicates with \(\ell _\rho \) parties when executing \(\rho _2\), and parties in \({\mathcal {C}} _2\) communicate with all \(n'\) parties in \({\mathcal {C}} _1\). It follows that the maximal number of parties that a party communicates with during the protocol is \(2\cdot \ell _\rho +\log ^2(n)\).    \(\square \)

This concludes the proof of Lemma 2.    \(\square \)

4.2 Additional Results

Information-Theoretic Security. The protocol in Sect. 4.1 relies on digital signatures, hence, security is guaranteed only in the presence of computationally bounded adversaries. Next, we gain security facing all-powerful adversaries by using information-theoretic signatures. We prove the following theorem in the full version [4].

Theorem 6

Let \(f=\{f_n\}_{n\in {\mathbb {N}}}\) be an ensemble of functionalities and let \(\delta >0\). Then, the following holds in the IT-PKI-hybrid model with secure channels:

  1. Let \(\beta <1/4-\delta \) and let \(t=\beta \cdot n\). Then, f can be t-securely computed by a protocol ensemble \(\pi \) tolerating static t(n)-adversaries such that the communication graph of \(\pi \) is strongly not an expander.

  2. Let \(\beta <1/12-\delta \) and let \(t=\beta \cdot n\). Then, f can be t-securely computed by a protocol ensemble \(\pi \) tolerating static t(n)-adversaries such that (1) the communication graph of \(\pi \) is strongly not an expander, and (2) the locality of \(\pi \) is poly-logarithmic in n.

Adaptive Corruptions. In this section, we focus on the adaptive setting, where the adversary can corrupt parties dynamically, based on information gathered during the course of the protocol.

Adjusting Lemma 2 to the adaptive setting is not straightforward, since once the subsets \({\mathcal {C}} _1\) and \({\mathcal {C}} _2\) are known to the adversary he can completely corrupt them. A first attempt to get around this problem is not to reveal the entire subsets in the output of the Elect-and-Share functionality, but rather, to let each party in \({\mathcal {C}} _1\) learn the identity of a single party in \({\mathcal {C}} _2\) with which he will communicate. This way, if a party in \({\mathcal {C}} _1\) (resp. \({\mathcal {C}} _2\)) gets corrupted, only one additional party in \({\mathcal {C}} _2\) (resp. \({\mathcal {C}} _1\)) is revealed to the adversary. This solution comes at the price of tolerating a smaller fraction of corrupted parties, namely, a \((1/8 - \delta )\) fraction.

This solution, however, is still problematic in the adaptive setting if the adversary can monitor the communication lines, even when they are completely private (as in the secure-channels setting). The reason is that once the adversary sees the communication that is sent between \({\mathcal {C}} _1\) and \({\mathcal {C}} _2\), he can completely corrupt both subsets. This problem is inherent when the communication lines are visible to the adversary; therefore, we turn to the hidden-channels setting that was used by Chandran et al. [9], where the adversary does not learn whether a message is sent between two honest parties.

Theorem 7

Let \(f=\{f_n\}_{n\in {\mathbb {N}}}\) be an ensemble of functionalities, let \(\delta >0\), let \(\beta <1/8-\delta \), and let \(t=\beta \cdot n\). Then, the following holds in the hidden-channels model:

  1. Assuming the existence of one-way functions, f can be securely computed by a protocol ensemble \(\pi \) in the PKI model tolerating adaptive PPT t(n)-adversaries such that the communication graph of \(\pi \) is strongly not an expander.

  2. Assume in addition the SKI model and non-committing encryption. Then, f can be securely computed by a protocol ensemble \(\pi \) in the PKI model tolerating adaptive PPT t(n)-adversaries such that (1) the communication graph of \(\pi \) is strongly not an expander, and (2) the locality of \(\pi \) is poly-logarithmic in n.

  3. f can be securely computed by a protocol ensemble \(\pi \) in the IT-PKI model tolerating adaptive t(n)-adversaries such that the communication graph of \(\pi \) is strongly not an expander.

5 Expansion is Necessary for Correct Computation

In this section, we show that in certain natural settings there exist functionalities such that the final communication graph of any MPC protocol that securely computes them must be an expander. In fact, we prove a stronger statement: removing a sublinear number of edges from such graphs will not disconnect them. We consider the plain model, in which parties do not have any trusted setup assumptions, and a PPT adaptive adversary, and we focus on parallel multi-valued broadcast (also known as interactive consistency [35]), where every party has an input value, and all honest parties agree on a common output vector such that if \({P} _i\) is honest then the \(i\)’th coordinate equals \({P} _i\)’s input. In particular, our proof does not rely on any privacy guarantees of the protocol, merely on its correctness.

For simplicity, and without loss of generality, we assume the security parameter is the number of parties n.

Definition 12

(parallel broadcast). A protocol ensemble \(\pi =\{\pi _n\}_{n\in {\mathbb {N}}}\) is a t(n)-resilient, parallel broadcast protocol with respect to input space \(\{{\{0,1\}^n}\}_{n\in {\mathbb {N}}}\), if there exists a negligible function \(\mu (n)\) such that for every \(n\in {\mathbb {N}}\), every party \({P} _i\) in \(\pi _n\) has input \(x_i\in {\{0,1\}^n}\) and outputs a vector of n values \({ \varvec{y}}_i=(y^i_1,\ldots , y^i_n)\) such that the following is satisfied, except with probability \(\mu (n)\). Facing any adaptive, malicious PPT adversary that dynamically corrupts and controls a subset of parties \(\{{P} _j\}_{j\in {\mathcal {I}}}\), with \({\mathcal {I}} \subseteq [n]\) of size \(|{\mathcal {I}} |\le t(n)\), it holds that:

  • Agreement. There exists a vector \({ \varvec{y}}=(y_1,\ldots , y_n)\) such that for every party \({P} _i\) that is honest at the conclusion of the protocol it holds that \({ \varvec{y}}_i={ \varvec{y}}\).

  • Validity. For every party \({P} _i\) that is honest at the conclusion of the protocol it holds that the \(i\)’th coordinate of the common output equals his input value, i.e., \(y_i=x_i\).
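The two guarantees above can be checked mechanically on a finished execution; the following is a minimal sketch (the party indices, inputs, and corruption set in the example are toy values of our own choosing):

```python
def check_parallel_broadcast(inputs, outputs, honest):
    """Check the two guarantees of Definition 12 on a finished execution.

    inputs[i]  -- x_i, the input of party P_i
    outputs[i] -- the output vector y_i of party P_i (None if corrupted)
    honest     -- indices of the parties that are honest at the end
    """
    honest = sorted(honest)
    # Agreement: all honest parties output the same vector y.
    y = outputs[honest[0]]
    agreement = all(outputs[i] == y for i in honest)
    # Validity: the i'th coordinate of y equals P_i's input for honest P_i.
    validity = all(y[i] == inputs[i] for i in honest)
    return agreement and validity

# Toy run with n = 3 and P_1 corrupted (its coordinate may be arbitrary):
print(check_parallel_broadcast(
    inputs=[0b101, 0b000, 0b111],
    outputs=[[0b101, 0b010, 0b111], None, [0b101, 0b010, 0b111]],
    honest=[0, 2]))   # True
```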

Recall that a connected graph is k-edge-connected if it remains connected whenever fewer than k edges are removed. We are now ready to state the main result of this section. We note that as opposed to Sect. 4.2, where we considered adaptive corruptions in the hidden-channels model, this section considers the parallel secure message transmission (SMT) model, formally defined in Sect. 5.1, where the adversary is aware of communication between honest parties, but not of the message content.
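The edge-connectivity notion recalled above can be tested on a concrete communication graph; here is a small sketch using networkx (the toy graphs are our own examples):

```python
import networkx as nx

def is_alpha_edge_connected(G: nx.Graph, alpha: int) -> bool:
    """A connected graph is alpha-edge-connected iff no set of fewer than
    alpha edges disconnects it, i.e., its global minimum edge cut has
    size at least alpha."""
    return nx.is_connected(G) and nx.edge_connectivity(G) >= alpha

# Toy examples: the complete graph on 10 nodes is 9-edge-connected,
# while a path on 10 nodes is only 1-edge-connected.
print(is_alpha_edge_connected(nx.complete_graph(10), alpha=9))  # True
print(is_alpha_edge_connected(nx.path_graph(10), alpha=2))      # False
```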

Theorem 8

Let \(\beta >0\) be a fixed constant, let \(t(n)=\beta \cdot n\), and let \(\pi =\{\pi _n\}_{n\in {\mathbb {N}}}\) be a t(n)-resilient, parallel broadcast protocol with respect to input space \(\{{\{0,1\}^n}\}_{n\in {\mathbb {N}}}\), in the parallel SMT hybrid model (in the computational setting, tolerating an adaptive, malicious PPT adversary). Then, the communication graph of \(\pi \) must be \(\alpha (n)\)-edge-connected, for every \(\alpha (n)\in o(n)\).

From Theorem 8 and Lemma 1 (stating that if \(\pi \) is strongly not an expander then there must exist a sublinear cut in the graph) we get the following corollary.

Corollary 1

Consider the setting of Theorem 8. If the communication graph of \(\pi \) is strongly not an expander (as per Definition 11), then \(\pi \) is not a t(n)-resilient parallel broadcast protocol.

The remainder of this section goes towards proving Theorem 8. We start by presenting the communication model in Sect. 5.1. In Sect. 5.2, we prove a graph-theoretic theorem that will be used in the core of our proof and may be of independent interest. Then, in Sect. 5.3 we present the proof of Theorem 8. Some of the proofs are deferred to the full version [4].

5.1 The Communication Model

We consider secure communication channels, where the adversary can see that a message has been sent but not its content (in contrast to the hidden-communication model, used in Sect. 4.2, where the communication between honest parties was hidden from the eyes of the adversary). A standard assumption when considering adaptive corruptions is that in addition to being notified that an honest party sent a message, the adversary can corrupt the sender before the receiver obtained the message, learn the content of the message, and replace it with another message of its choice that will be delivered to the receiver. Although the original modular composition framework [7] does not give the adversary such power, this ability became standard after the introduction of the secure message transmission (SMT) functionality in the UC framework [8]. As we consider synchronous protocols, we use the parallel SMT functionality that was formalized in [12, 13].Footnote 9

Definition 13

(parallel SMT). The parallel secure message transmission functionality \(f_{\mathsf {psmt}}\) is a two-phase functionality. For every \(i,j\in [n]\), the functionality initializes a value \(x^i_j\) to be the empty string \({\epsilon } \) (the value \(x^i_j\) is the message to be sent from \({P} _i\) to \({P} _j\)).

  • The input phase. Every party \({P} _i\) sends a vector of n messages \((v^i_1,\ldots ,v^i_n)\). The functionality sets \(x^i_j=v^i_j\), and provides the adversary with leakage information on the input values. As we consider rushing adversaries, who can determine the messages to be sent by the corrupted parties after receiving the messages sent by the honest parties, the leakage function should leak the messages that are to be delivered from honest parties to corrupted parties. Therefore, the leakage function is

    $$ l_\mathsf {psmt} \left( (x^1_1,\ldots ,x^1_n),\ldots ,(x^n_1,\ldots ,x^n_n)\right) =\left( y^1_1,y^1_2, \ldots , y^n_{n-1}, y^n_n\right) , $$

    where \(y^i_j=|x^i_j|\) in case \({P} _j\) is honest and \(y^i_j=x^i_j\) in case \({P} _j\) is corrupted. We consider adaptive corruptions, and so the adversary can corrupt an honest party during the input phase based on this leakage information, and send a new input on behalf of the corrupted party (note that the messages have not yet been delivered to the honest parties).

  • The output phase. In the second phase, the messages are delivered to the parties, i.e., party \({P} _i\) receives the vector of messages \((x^1_i,\ldots ,x^n_i)\).

In addition, we assume that the parties do not have any trusted-setup assumption.
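To make the leakage function \(l_\mathsf {psmt} \) concrete, the following is a minimal sketch of the leakage computed in the input phase (the message matrix and corruption set in the example are toy values):

```python
def psmt_leakage(messages, corrupted):
    """Leakage of the parallel SMT functionality in the input phase.

    messages[i][j] is the message x^i_j that party P_i sends to P_j.
    For an honest receiver P_j the adversary learns only the length
    |x^i_j|; for a corrupted receiver P_j it learns x^i_j itself.
    """
    n = len(messages)
    return [
        [messages[i][j] if j in corrupted else len(messages[i][j])
         for j in range(n)]
        for i in range(n)
    ]

# Toy run with n = 3 parties, where P_2 (index 2) is corrupted:
msgs = [["", "hi", "key"], ["yo", "", "ok"], ["a", "bc", ""]]
print(psmt_leakage(msgs, corrupted={2}))
# [[0, 2, 'key'], [2, 0, 'ok'], [1, 2, '']]
```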

5.2 A Graph-Theoretic Theorem

Our lower-bound proof is based on the following graph-theoretic theorem, which we believe may be of independent interest. We show that every graph in which every node has a linear degree can be partitioned into a constant number of linear-size sets that are pairwise connected by sublinearly many edges. These subsets are “minimal cuts” in the sense that every sublinear cut in the graph is a union of some of these subsets. The proof of the theorem is given in the full version [4].

Definition 14

(\((\alpha ,d)\)-partition). Let \(G=(V,E)\) be a graph of size n. An \((\alpha ,d)\)-partition of G is a partition \(\varGamma =(U _1,\ldots ,U _\ell )\) of V that satisfies the following properties:

  1. For every \(i\in [\ell ]\) it holds that \(|U _i|\ge d\).

  2. For every \(i\ne j\), there are at most \(\alpha \) edges between \(U _i\) and \(U _j\), i.e., \(|\textsf {edges}_G(U _i,U _j)|\le \alpha \).

  3. For every \(S\subseteq V\) such that \(\{S,\bar{S}\}\) is an \(\alpha \)-cut, i.e., \(|\textsf {edges}_{G}(S)|\le \alpha \), it holds that there exists a subset \(J\subsetneq [\ell ]\) for which \(S=\bigcup _{j\in J} U_j\) and \(\bar{S}=\bigcup _{j\in [\ell ]\setminus J} U_j\).

In Theorem 9 we show that if every node in the graph has a linear degree d(n), and \(\alpha (n)\) is sublinear, then for sufficiently large n there exists an \((\alpha (n),d(n))\)-partition of the graph; moreover, the partition can be found in polynomial time.

Theorem 9

Let \(c>1\) be a constant integer, let \(\alpha (n)\in o(n)\) be a fixed sublinear function in n, and let \(\{G_n\}_{n\in {\mathbb {N}}}\) be a family of graphs, where \(G_n=([n],E_n)\) is defined on n vertices, and every vertex of \(G_n\) has degree at least \(\frac{n}{c}-1\). Then, for sufficiently large n it holds that:

  1. There exists an \((\alpha (n),n/c)\)-partition of \(G_n\), denoted \(\varGamma \); it holds that \(\left| \varGamma \right| \le c\).

  2. An \((\alpha (n),n/c)\)-partition \(\varGamma \) of \(G_n\) can be found in (deterministic) polynomial time, given the \(n\times n\) adjacency matrix of \(G_n\).

Note that if for every n there exists an \(\alpha (n)\)-cut in \(G_n\), then it immediately follows that \(\left| \varGamma \right| >1\), i.e., the partition is not the trivial partition of the set of all nodes.
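As a sanity check for Definition 14 (this is not the partition-finding algorithm of Theorem 9, which appears in the full version [4]), the following sketch verifies properties 1 and 2 of a candidate \((\alpha ,d)\)-partition; property 3 quantifies over all \(\alpha \)-cuts and is not checked here:

```python
import networkx as nx
from itertools import combinations

def check_partition_props_1_2(G: nx.Graph, parts, alpha: int, d: int) -> bool:
    """Check properties 1 and 2 of Definition 14 for a candidate partition
    `parts` (a list of disjoint vertex sets covering V):
      1. every part has size at least d;
      2. any two distinct parts are joined by at most alpha edges."""
    if set().union(*parts) != set(G.nodes) or sum(len(p) for p in parts) != len(G):
        return False                      # not a partition of V
    if any(len(p) < d for p in parts):
        return False                      # property 1 violated
    for U, W in combinations(parts, 2):
        crossing = sum(1 for u in U for w in W if G.has_edge(u, w))
        if crossing > alpha:
            return False                  # property 2 violated
    return True

# Toy example: two cliques of size 5 joined by a single edge.
G = nx.disjoint_union(nx.complete_graph(5), nx.complete_graph(5))
G.add_edge(0, 5)
print(check_partition_props_1_2(G, [set(range(5)), set(range(5, 10))],
                                alpha=1, d=5))   # True
```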

5.3 Proof of Main Theorem (Theorem 8)

High-level overview of the attack. For \(n\in {\mathbb {N}}\), consider an execution of the alleged parallel broadcast protocol \(\pi _n\) over uniformly distributed n-bit input values for the parties \((x_1,\ldots ,x_n)\in _R ({\{0,1\}^n})^n\). We define two ensembles of adversarial strategies \(\{\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \}_{n\in {\mathbb {N}}}\) and \(\{\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \}_{n\in {\mathbb {N}}}\) (described in full in Sect. 5.3).

The adversary \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) corrupts a random party \({P} _{i^{*}}\), and simulates an honest execution on a random input \(\tilde{x}_{i^{*}}\) until \({P} _{i^{*}}\) has degree \(\beta n/4\). Next, \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) switches the internal state of \({P} _{i^{*}}\) with a view that is consistent with an honest execution over the initial input \(x_{i^{*}}\), where all other parties have random inputs. The adversary \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) continues by computing the \((\alpha (n),n/c)\)-partition \(\{U_1,\ldots ,U_\ell \}\) of the communication graph (where c is a constant depending only on \(\beta \); this is possible due to Theorem 9), and blocking every message that is sent between every pair of \(U_i\)’s. In Lemma 3, we show that there exist honest parties that at the conclusion of the protocol have received a bounded amount of information on the initial input value \(x_{i^{*}}\).

The second adversary, \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \), is used for showing that under the previous attack, every honest party will eventually output the initial input value \(x_{i^{*}}\) (Lemma 4). This is done by having \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) corrupt all the neighbors of \({P} _{i^{*}}\), while keeping \({P} _{i^{*}}\) honest, and simulating the previous attack towards the remaining honest parties.

We show that there exist honest parties whose view is identically distributed under both attacks, and since they output \(x_{i^{*}}\) in the latter, they must also output \(x_{i^{*}}\) in the former. By combining both of these lemmata, we then derive a contradiction.

Proof

(Proof of Theorem 8). First, since we consider the plain model, without any trusted setup assumptions, known lower bounds [21, 34, 35] state that parallel broadcast cannot be computed for \(t(n)\ge n/3\); therefore, we can focus on \(0<\beta <1/3\), i.e., the case where \(t(n)=\beta \cdot n <n/3\).

Assume toward a contradiction that \(\pi \) is a t(n)-resilient parallel broadcast protocol in the above setting, and that there exists a sublinear function \(\alpha (n)\in o(n)\) such that the communication graph of \(\pi \) is not \(\alpha (n)\)-edge-connected, i.e., for sufficiently large n there exists a cut \(\{S_n,\bar{S}_n\}\) of weight at most \(\alpha (n)\).

Notations. We start by defining a few notations. For a fixed n,Footnote 10 consider the following independently distributed random variables

$$ \textsc {InputsAndCoins}=\left( X_1,\ldots ,X_n,R_1,\ldots ,R_n,\tilde{X}_1,\ldots ,\tilde{X}_n,\tilde{R}_1,\ldots ,\tilde{R}_n,{I^{*}}\right) , $$

where for every \(i\in [n]\), each \(X_i\) and \(\tilde{X}_i\) take values uniformly at random in the input space \({\{0,1\}^n}\), each \(R_i\) and \(\tilde{R}_i\) take values uniformly at random in \({\{0,1\}^{*}}\), and \({I^{*}}\) takes values uniformly at random in [n]. During the proof, \((X_i,R_i)\) represent the pair of input and private randomness of party \({P} _i\), whereas \((\tilde{X}_1,\ldots ,\tilde{X}_n,\tilde{R}_1,\ldots ,\tilde{R}_n,{I^{*}})\) correspond to the random coins of the adversary (used in simulating the two executions towards the honest parties). Unless stated otherwise, all probabilities are taken over these random variables.

Let \(\textsc {RedExec}\) be a random variable defined as

$$ \textsc {RedExec}:=\left( X_{-{I^{*}}},\tilde{X}_{I^{*}},R_{-{I^{*}}},\tilde{R}_{I^{*}}\right) . $$

That is, \(\textsc {RedExec}\) contains \(X_i\) and \(R_i\) for \(i\in [n]\setminus \{{I^{*}}\}\), along with \(\tilde{X}_{I^{*}}\) and \(\tilde{R}_{I^{*}}\). We denote by the “red execution” an honest protocol execution when the inputs and private randomness of the parties are \((X_{-{I^{*}}},\tilde{X}_{I^{*}},R_{-{I^{*}}},\tilde{R}_{I^{*}})\). We denote by the “blue execution” an honest protocol execution when the inputs and private randomness of the parties are \((\tilde{X}_{-{I^{*}}},X_{I^{*}},\tilde{R}_{-{I^{*}}},R_{I^{*}})\). Note that such a sample fully determines the view and transcript of all parties in an honest simulated execution of \(\pi _n\).
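The red/blue bookkeeping can be made concrete with the following minimal sketch, which samples the variables of \(\textsc {InputsAndCoins}\) and assembles the inputs and coins driving the two executions (the bit lengths are placeholders of our own choosing):

```python
import secrets

def sample_inputs_and_coins(n: int, coin_len: int = 128):
    """Sample (X_i, R_i), (X~_i, R~_i) for all i, and a uniform index I*."""
    X  = [secrets.randbits(n) for _ in range(n)]         # real inputs
    R  = [secrets.randbits(coin_len) for _ in range(n)]  # real coins
    Xt = [secrets.randbits(n) for _ in range(n)]         # adversary's inputs
    Rt = [secrets.randbits(coin_len) for _ in range(n)]  # adversary's coins
    i_star = secrets.randbelow(n)
    return X, R, Xt, Rt, i_star

def red_and_blue_inputs(X, R, Xt, Rt, i_star):
    """Red execution: every party uses (X_i, R_i), except position i*,
    which uses (X~_{i*}, R~_{i*}).  Blue execution: the mirror image --
    position i* keeps (X_{i*}, R_{i*}) and all others use (X~_i, R~_i)."""
    n = len(X)
    red  = [(Xt[i], Rt[i]) if i == i_star else (X[i], R[i]) for i in range(n)]
    blue = [(X[i], R[i]) if i == i_star else (Xt[i], Rt[i]) for i in range(n)]
    return red, blue

# Usage:
X, R, Xt, Rt, i_star = sample_inputs_and_coins(n=8)
red, blue = red_and_blue_inputs(X, R, Xt, Rt, i_star)
```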

Let FinalCut\(^\textsf {corrupt}\) be a random variable defined over \(2^{[n]}\cup \{\bot \}\). The distribution of FinalCut\(^\textsf {corrupt}\) is defined by running protocol \(\pi \) until its conclusion with adversary \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) (defined in Sect. 5.3) on inputs and coins sampled according to \(\textsc {InputsAndCoins}\). If at the conclusion of the protocol there is no \(\alpha (n)\)-cut in the graph, then set the value of FinalCut\(^\textsf {corrupt}\) to be \(\bot \); otherwise, set the value to be the identity of the smallest \(\alpha (n)\)-cut \(\{S,\bar{S}\}\) in the communication graph according to some canonical ordering on the \(\alpha (n)\)-cuts. We will prove that, conditioned on the value of \(\textsc {RedExec}\), FinalCut\(^\textsf {corrupt}\) can take only one of a constant number of values, depending only on \(\beta \) (and not on n).

Let \({\mathcal {E}} _1\) denote the event that \({P} _{I^{*}}\) is the last among all the parties to reach degree \(\beta n/4\) in both the red and the blue honest executions of the protocol. More precisely, the event that \({P} _{I^{*}}\) reaches degree \(\beta n/4\) in both executions, and if it has reached this degree in round \(\rho \) in the red (blue) execution, then all parties in the red (blue) execution have degree at least \(\beta n/4\) in round \(\rho \).

Let \({\mathcal {E}} _2\) denote the event that the degree of \({P} _{I^{*}}\) reaches \(\beta n/4\) in the red execution before, or at the same round as, in the blue execution. Note that \({\mathcal {E}} _1\) and \({\mathcal {E}} _2\) are events with respect to two honest executions of the protocol (the red execution and the blue execution) that are defined according to \(\textsc {InputsAndCoins}\). In the adversarial strategies that are used in the proof, the corrupted parties operate in a way that indeed induces the red and blue executions, and so the events \({\mathcal {E}} _1\) and \({\mathcal {E}} _2\) are well defined in an execution of the protocol with those adversarial strategies.

In Sect. 5.3, we formally describe two adversarial strategies, \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) and \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \). We denote by \(Y^{\textsf {corrupt}}_{{I^{*}}}\), respectively \(Y^{\textsf {honest}}_{{I^{*}}}\), the random variable that corresponds to the \({I^{*}}\)’th coordinate of the common output of honest parties, when running over random inputs with adversarial strategy \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \), respectively \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \).

Proof structure. Our proof follows from two main steps. In Lemma 3, stated in Sect. 5.3, we show that in an execution of \(\pi _n\) on random inputs with adversary \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \), it holds that (1) \({\mathrm {Pr}}\left[ {\mathcal {E}} _1 \cap {\mathcal {E}} _2\right] \ge 1/2n^2 - \textsf {negl}(n)\), and that (2) conditioned on the event \({\mathcal {E}} _1 \cap {\mathcal {E}} _2\), there exists an honest party \({P} _{j^{*}}\) such that \(X_{I^{*}}\), conditioned on \({\mathcal {E}} _1 \cap {\mathcal {E}} _2\) and on the view of \({P} _{j^{*}}\) at the conclusion of the protocol, still retains at least \(n/4\) bits of entropy. This means, in particular, that \({P} _{j^{*}}\) will output the value \(X_{I^{*}}\) only with negligible probability. Hence, by agreement, the probability that any of the honest parties outputs \(X_{I^{*}}\) in an execution with \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) is negligible. In particular,

$$ {\mathrm {Pr}}\left[ Y^{\textsf {corrupt}}_{{I^{*}}}=X_{I^{*}}\mid {\mathcal {E}} _1 \cap {\mathcal {E}} _2\right] =\textsf {negl}(n). $$

In Lemma 4, stated in Sect. 5.3, we show that in an execution of \(\pi \) on random inputs with adversary \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \), it holds that (1) with overwhelming probability all honest parties output \(X_{I^{*}}\) (this holds by validity, since \({P} _{I^{*}}\) remains honest), i.e.,

$$ {\mathrm {Pr}}\left[ Y^{\textsf {honest}}_{{I^{*}}}=X_{I^{*}}\right] \ge 1-\textsf {negl}(n), $$

and that (2) conditioned on the event \({\mathcal {E}} _1 \cap {\mathcal {E}} _2\), there exists an honest party whose view is identically distributed as in an execution with \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \), therefore,

$$ {\mathrm {Pr}}\left[ Y^{\textsf {corrupt}}_{{I^{*}}}=Y^{\textsf {honest}}_{{I^{*}}} \mid {\mathcal {E}} _1 \cap {\mathcal {E}} _2\right] \ge 1-\textsf {negl}(n). $$

Combining the two lemmata yields a contradiction: since \({\mathcal {E}} _1 \cap {\mathcal {E}} _2\) occurs with noticeable probability, the two guarantees of Lemma 4 imply that \({\mathrm {Pr}}\left[ Y^{\textsf {corrupt}}_{{I^{*}}}=X_{I^{*}}\mid {\mathcal {E}} _1 \cap {\mathcal {E}} _2\right] \ge 1-\textsf {negl}(n)\), whereas Lemma 3 shows that this probability is negligible.    \(\square \)

Defining Adversarial Strategies. As discussed above, the main idea behind the proof is to construct two dual adversarial strategies showing that, on the one hand, the output of all honest parties must contain the initial value of a randomly chosen corrupted party, while on the other hand, there exist parties that receive only a bounded amount of information on this value during the course of the protocol.

We use the following notation for defining the adversarial strategies. Virtual parties that exist only in the head of the adversary are denoted with a tilde. In particular, for a random \({i^{*}}\in [n]\), we denote by \({\tilde{{P}}} _{i^{*}}\) a virtual party that emulates the role of \({P} _{i^{*}}\) playing with the real parties using a random input in the so-called “red execution,” and by \(\{{\tilde{{Q}}} _i\}_{i\ne {i^{*}}}\) virtual parties that emulate an execution over random inputs towards \({P} _{i^{*}}\).Footnote 11

The adversary \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \). At a high level, the adversary \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) chooses a random \({i^{*}}\in [n]\) and isolates the honest party \({P} _{i^{*}}\). The adversary \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) consists of three phases. In Phase I, \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) induces two honestly distributed executions.

  • The first (red) execution is set by simulating an honest execution of a virtual party \({\tilde{{P}}} _{i^{*}}\) over a random input \(\tilde{x}_{i^{*}}\) towards all other parties. The adversary corrupts any party that sends a message to \({P} _{i^{*}}\), blocks its message, and simulates \({\tilde{{P}}} _{i^{*}}\) receiving this message. Whenever \({\tilde{{P}}} _{i^{*}}\) should send a message to some \({P} _j\), the adversary corrupts \({P} _j\) and instructs him to proceed as if he received the intended message from \({\tilde{{P}}} _{i^{*}}\).

  • For the second (blue) execution, \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) emulates a virtual execution with virtual parties \(({\tilde{{Q}}} _1,\ldots ,{\tilde{{Q}}} _n)\setminus \{{\tilde{{Q}}} _{i^{*}}\}\) on random inputs towards the honest party \({P} _{i^{*}}\). To do so, whenever \({P} _{i^{*}}\) sends a message to \({P} _j\) in the real execution, the adversary corrupts \({P} _j\), instructing him to ignore this message, and simulates this message from \({P} _{i^{*}}\) to \({\tilde{{Q}}} _j\) in the virtual execution (that is running in the head of the adversary). Whenever a party \({\tilde{{Q}}} _j\) sends a message to \({P} _{i^{*}}\) in the virtual execution, the adversary corrupts the real party \({P} _j\) and instructs him to send this message to \({P} _{i^{*}}\) in the real execution.

Phase II begins when the degree of \({\tilde{{P}}} _{i^{*}}\) in the red execution is at least \((\beta /4) \cdot n\); if \({P} _{i^{*}}\) reaches this threshold earlier in the blue execution, the attack fails. Phase III begins when the degree of \({P} _{i^{*}}\) in the real execution is at least \((\beta /4) \cdot n\).

Ideally, Phase I will continue until all parties in the real execution have a linear degree, and before the adversary uses half of his “corruption budget”, i.e., \((\beta /2)\cdot n\). This would be the case if we were to consider a single honest execution of the protocol, since we show that there always exists a party that is the last to reach the linear-degree threshold with a noticeable probability. However, as the attack induces two independent executions, in which the degrees of the parties can grow at different rates, care must be taken. We ensure that even though \({P} _{i^{*}}\) runs in the blue execution, by the time \({P} _{i^{*}}\) reaches the threshold, all other parties (that participate in the red execution) will already have reached the threshold, and can be partitioned into “minimal” \(\alpha (n)\)-cuts, as follows.

The adversary allocates \((\beta /4)\cdot n\) corruptions for the red execution and \((\beta /4)\cdot n\) corruptions for the blue execution. We show that with a noticeable probability, once \({\tilde{{P}}} _{i^{*}}\) has degree \((\beta /4)\cdot n\) in the red execution, all other parties in the red execution also have high degree. Consider the communication graph of the red execution without the virtual party \({\tilde{{P}}} _{i^{*}}\) (i.e., after removing the node \({i^{*}}\) and its edges); by Theorem 9 there exists an \((\alpha (n),(\beta /4)n-1)\)-partition of this graph into a constant number of linear-size subsets that are connected by sublinearly many edges, denoted \(\varGamma =\{U _1,\ldots ,U _\ell \}\) (in particular, this partition is independent of \(x_{i^{*}}\)). In Phase II, the adversary continues blocking outgoing messages from \({P} _{i^{*}}\) towards the real honest parties, until the degree of \({P} _{i^{*}}\) in the real execution is \(\beta n/4\). In addition, \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) blocks any message that is sent between two subsets in the partition, by corrupting the recipient and instructing him to ignore messages from outside of his subset.

In Phase III, which begins when \({P} _{i^{*}}\) has high degree in the real execution, the adversary adds \({P} _{i^{*}}\) to one of the subsets in the partition, in which \({P} _{i^{*}}\) has many neighbors, and continues to block messages between different subsets in the partition until the conclusion of the protocol.

We note that special care must be taken in the transition between the phases, since such a transition can happen in the middle of a round, after processing some of the messages but not all. Indeed, if the transition to the next phase were to happen only at the end of the round, the adversary might need to corrupt too many parties. For this reason, in Phases I and II, we handle the messages to and from \({P} _{i^{*}}\) one by one, and check whether the threshold has been met after each such message.
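The phase structure of \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) can be summarized by the following highly simplified sketch of how a single message is treated; the function names and the `part_of` map are our own abstractions, and all corruption-budget and virtual-execution bookkeeping is omitted:

```python
from enum import Enum

class Phase(Enum):
    I = 1    # both virtual executions are honestly simulated
    II = 2   # the partition Gamma is fixed; P_{i*} not yet in any subset
    III = 3  # P_{i*} has been added to a subset with many of its neighbors

def honest_i_star_action(phase, sender, receiver, i_star, part_of):
    """A rough sketch of how A^{honest-i*} handles one message sender->receiver.
    `part_of` maps a party to its subset of Gamma (in Phase III it also
    maps i_star to the subset it was added to).  Returns a label for the
    adversary's action."""
    if phase is Phase.I:
        if receiver == i_star:
            # Corrupt the sender, block the real message, and deliver it
            # to the virtual party P~_{i*} in the red execution.
            return "corrupt sender; redirect into red execution"
        if sender == i_star:
            # Corrupt the receiver, have it ignore the real message, and
            # feed the message to Q~_receiver in the blue execution.
            return "corrupt receiver; redirect into blue execution"
        return "deliver"
    if phase is Phase.II and sender == i_star:
        # Outgoing messages of P_{i*} are still blocked in Phase II.
        return "block"
    if sender in part_of and receiver in part_of and part_of[sender] != part_of[receiver]:
        # Phases II and III: no message may cross the partition.
        return "corrupt receiver; instruct it to ignore the message"
    return "deliver"

# Example: in Phase I, a message from party 7 to the isolated party i* = 3
# is intercepted and fed into the red execution.
print(honest_i_star_action(Phase.I, sender=7, receiver=3, i_star=3, part_of={}))
```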

The adversary \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \). The adversary \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) corrupts the randomly chosen party \({P} _{i^{*}}\), and emulates the operations of an honest \({P} _{i^{*}}\) that is being attacked by \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \).

In Phase I, the adversary \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) induces two honestly distributed executions, by simulating an honest execution of a virtual party \({\tilde{{P}}} _{i^{*}}\) over a random input \(\tilde{x}_{i^{*}}\) towards all other honest parties (the red execution), and furthermore, runs in its mind a virtual execution over the initial input \(x_{i^{*}}\) and random inputs \(\tilde{x}_i\) for \(i\ne {i^{*}}\) (the blue execution). This phase continues until \({\tilde{{P}}} _{i^{*}}\) has degree \(\beta n/4\) in the red execution (no parties other than \({P} _{i^{*}}\) are being corrupted). If all other parties in the red execution have high degree, then the adversary finds the partition of the red graph as in the previous attack (the partition is guaranteed by Theorem 9).

In Phase II, the adversary continues simulating the corrupted \({P} _{i^{*}}\) towards the real honest parties until the degree of \({P} _{i^{*}}\) in the real execution is \(\beta n/4\); however, his communication is based on the view in the blue execution at the end of Phase I (this is no longer an honest-looking execution). During this phase, \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) blocks any message that is sent between two subsets in the partition.

In Phase III, which begins when \({P} _{i^{*}}\) has high degree (in the real execution), \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \) adds \({P} _{i^{*}}\) to one of the subsets in the partition, in which \({P} _{i^{*}}\) has many neighbors, and continues to block messages between different subsets in the partition until the conclusion of the protocol.

The Core Lemmata. In the full version [4] we prove the following core lemmata that conclude the proof of the theorem.

Lemma 3

Consider an execution of \(\pi _n\) on random inputs \((X_1,\ldots ,X_n)\) for the parties with adversary \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \), and the events \({\mathcal {E}} _1\) and \({\mathcal {E}} _2\) as defined in Sect. 5.3. Then, it holds that:

  1. \({\mathrm {Pr}}\left[ {\mathcal {E}} _1 \cap {\mathcal {E}} _2\right] \ge 1/{2n^2} -\textsf {negl}(n)\).

  2. Conditioned on the event \({\mathcal {E}} _1 \cap {\mathcal {E}} _2\), there exists an honest party \({P} _{J^{*}}\) such that

    $$ H(X_{I^{*}}\mid {\mathcal {E}} _1 \cap {\mathcal {E}} _2, \textsc {view}^\textsf {corrupt} _{J^{*}})\ge n/4, $$

    where \(\textsc {view}^\textsf {corrupt} _{J^{*}}\) is the random variable representing the view of \({P} _{J^{*}}\) at the end of the protocol.

Lemma 4

Consider an execution of \(\pi _n\) on random inputs \((X_1,\ldots ,X_n)\) for the parties with adversary \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \). Then, conditioned on the event \({\mathcal {E}} _1 \cap {\mathcal {E}} _2\) it holds that:

  1. The \({I^{*}}\)’th coordinate of the common output \(Y^{\textsf {honest}}_{{I^{*}}}\) equals the initial input \(X_{I^{*}}\) of \({P} _{I^{*}}\), except for negligible probability, i.e.,

    $$ {\mathrm {Pr}}\left[ Y^{\textsf {honest}}_{{I^{*}}}=X_{I^{*}}\mid {\mathcal {E}} _1 \cap {\mathcal {E}} _2\right] \ge 1-\textsf {negl}(n). $$

  2. The \({I^{*}}\)’th coordinate of the common output \(Y^{\textsf {honest}}_{{I^{*}}}\) in an execution with \(\mathcal{A} ^{\textsf {honest-}{i^{*}}}_n \) equals the \({I^{*}}\)’th coordinate of the common output \(Y^{\textsf {corrupt}}_{{I^{*}}}\) in an execution with \(\mathcal{A} ^{\textsf {corrupt-}{i^{*}}}_n \), except for negligible probability, i.e.,

    $$ {\mathrm {Pr}}\left[ Y^{\textsf {honest}}_{{I^{*}}}=Y^{\textsf {corrupt}}_{{I^{*}}} \mid {\mathcal {E}} _1 \cap {\mathcal {E}} _2\right] \ge 1-\textsf {negl}(n). $$