Abstract
The goal of secure multi-party computation (MPC) is to allow a set of parties to perform an arbitrary computation task, where the security guarantees depend on the set of parties that are corrupted. The more parties are corrupted, the weaker the guarantees become, and typically all guarantees are lost once the number of corrupted parties exceeds a certain corruption bound.
Early protocols, and also many recent ones, are only statically secure, in the sense that they provide no security guarantees if the adversary is allowed to choose adaptively which parties to corrupt. Security against an adversary with this stronger capability is called adaptive security, and a significant body of literature is devoted to achieving it; it is known to be a difficult problem. In particular, a main technical obstacle in this context is the so-called “commitment problem”, where the simulator is unable to consistently explain the internal state of a party with respect to its pre-corruption outputs. As a result, protocols typically resort to cryptographic primitives like non-committing encryption, incurring a substantial efficiency loss.
This paper provides a new, clean-slate treatment of adaptive security in MPC, exploiting the specification concept of constructive cryptography (CC). A new natural security notion, called CC-adaptive security, is proposed, which is technically weaker than standard adaptive security but nevertheless captures security against a fully adaptive adversary. Known protocol examples separating between adaptive and static security are also insecure in our notion. Moreover, our notion avoids the commitment problem and thereby the need to use non-committing or equivocal tools. We exemplify this by showing that the protocols by Cramer, Damgard and Nielsen (EUROCRYPT’01) for the honest majority setting, and (the variant without non-committing encryption) by Canetti, Lindell, Ostrovsky and Sahai (STOC’02) for the dishonest majority setting, achieve CC-adaptive security. The latter example is of special interest since all UC-adaptive protocols in the dishonest majority setting require some form of non-committing or equivocal encryption.
C.-D. Liu-Zhang—This work was partially carried out while the author was at ETH Zurich.
1 Introduction
1.1 Multi-party Computation
Secure multi-party computation (MPC) is one of the most fundamental problems in cryptography. It considers the setting where a set of parties wish to carry out a computation in a secure manner, where security informally means that parties obtain the correct output of the computation, while at the same time keeping their local inputs as private as possible.
A crucial step towards meaningfully designing and analyzing cryptographic protocols is to come up with appropriate definitions of security. Formulating good definitions is highly non-trivial: the definition should closely capture the aspects that we care about, while at the same time being simple and usable, even minimal, avoiding as much as possible unnecessary artifacts.
There is a vast literature on security definitions in the field of MPC. Initial works [3, 8, 21, 26, 42] considered the stand-alone setting for the task of secure function evaluation [25, 49, 50]; this setting examines only the protocol at hand and does not capture what it means to use the protocol in a larger context. It was not until several years later that definitions in so-called composable frameworks for general reactive tasks were introduced [9, 20, 30, 34, 39, 43, 46]. Such definitions aim to capture all aspects of a protocol that can be relevant with respect to any possible application, hence the term universal composability [9].
An important aspect of security definitions for secure computation is the way in which the corrupted parties are chosen. Here, two models are commonly considered. The static security model assumes that the set of corrupted parties is fixed before the computation starts and does not change. In the more general adaptive security model, the adversary may corrupt parties during the protocol execution, based on information that has been gathered so far. Indeed, adaptive security captures important concerns regarding cryptographic protocols that static security does not capture. These include scenarios where attackers, viruses, or other adversarial entities can take advantage of the communication to decide which parties to corrupt.
The currently considered standard MPC definition for adaptive security is the one introduced by Canetti [9] in the UC framework. The UC-adaptive security definition follows the well-known simulation paradigm, and is formalized by comparing the execution of the protocol in the real world, to an ideal world that has the desired security properties by design. Intuitively, it guarantees that for any attack in the real world performed by an adversary that can adaptively corrupt parties, the attack can be equivalently performed in the ideal world, achieving a similar effect. This is formalized by the existence of a single simulator that has to simulate the entire protocol execution, with respect to any environment where the protocol is being executed.
1.2 The Commitment Problem in Adaptive Security
Despite the fact that the current standard notion has been the cornerstone of adaptive security in MPC and has led to the development of many beautiful cryptographic protocols and primitives, one could argue that the definition is too strong.
To show this, consider the following example: Let \(\boldsymbol{\pi }\) be any protocol for secure function evaluation that is adaptively secure. Now, consider a modified protocol \(\boldsymbol{\tilde{\pi }}\), where each party i first commits to its input, using, for example, any (non-equivocable) perfectly hiding and computationally binding commitment scheme, and publishes the commitment. Then, all parties execute the protocol \(\boldsymbol{\pi }\). The commitments are never used again, and in particular they are never opened. Intuitively, protocol \(\boldsymbol{\tilde{\pi }}\) should be adaptively secure, since the commitments do not reveal any secret information (the commitments are even statistically independent of the inputs!). However, protocol \(\boldsymbol{\tilde{\pi }}\) is no longer adaptively secure: we run into the so-called commitment problem, where the simulator is unable to consistently explain the internal state of the parties that are adaptively corrupted. This is because the simulator first has to publish a commitment on behalf of each honest party without knowing its input, and later, upon corruption, output an internal state on behalf of each party that is consistent with its input and the previously published commitment.
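To make the example concrete, the following toy sketch (our illustration, not a scheme from the works cited here) implements a Pedersen-style perfectly hiding, computationally binding commitment over a tiny group; the parameters p, q, g, h and the helper names are chosen for illustration only. The final assertion demonstrates the perfect-hiding property that makes the published commitments statistically independent of the inputs.

```python
import secrets

# Toy Pedersen-style commitment over a small prime-order subgroup.
# Parameters are illustrative only -- far too small for real security.
p, q = 23, 11        # p = 2q + 1; the squares mod p form a subgroup of order q
g, h = 4, 9          # generators of that subgroup; log_g h assumed unknown

def commit(x, r=None):
    """Commit to x in Z_q; returns (commitment, opening randomness)."""
    if r is None:
        r = secrets.randbelow(q)
    return (pow(g, x, p) * pow(h, r, p)) % p, r

def verify(c, x, r):
    return c == (pow(g, x, p) * pow(h, r, p)) % p

# Perfect hiding: for every input x, the commitment is uniformly distributed
# over the same subgroup, so the commitment reveals nothing about x.
dist = lambda x: {commit(x, r)[0] for r in range(q)}
assert dist(0) == dist(5) == dist(10)
```

A simulator committing on behalf of an honest party faces exactly the problem described above: without the discrete logarithm of h, it cannot later exhibit randomness r opening its commitment to the party's true input.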
Common ways to address this issue include the use of non-committing encryption (see e.g. [12, 13]) or the assumption of securely erasable memory (see e.g. [4]), thereby incurring a substantial efficiency loss or an extra assumption.
However, at a more general level, this raises the question of whether one could have an alternative security definition that is not subject to this issue, but still captures natural security guarantees under adaptive corruption:
Is there a natural MPC security definition that captures security guarantees under adaptive corruption and is not subject to the commitment problem?
There have been a number of works that aim to solve this issue. One line of work [7, 45, 47] considers simulators that have super-polynomial running time; such approaches come at the price of being technically involved or of sacrificing composition guarantees. Another approach [1] disallows certain activation sequences by the environment that cannot be simulated, avoiding some of the complications of the other approaches, but sacrificing some guarantees by excluding certain attacks. A recent work [31] addressed this issue by proposing a notion that formalizes guarantees that hold within a certain interval, between two events, requiring the simulation to work within each interval without forcing it to be consistent across intervals. Although this approach seems promising, the guarantees turn out to be too weak for MPC applications: in particular, the corruptions can only depend on “external” events, and not on the outputs from the given resources.
1.3 Contributions
CC-Adaptive Security. Intuitively, an MPC protocol should provide, at any point during the protocol execution, security guarantees to the set of honest parties at that point. That is, for every set of parties, there is a guarantee as long as these parties are honest. This is exactly what CC-adaptive security captures: we phrase the guarantees naturally as the intersection (i.e. conjunction) of the guarantees for every set of so-far honest parties. Informally, we require the same protocol to realize a possibly different functionality \(\mathcal {F}_X\) for each subset X of parties. In each statement, there must exist a simulator that correctly simulates the protocol execution, as long as the parties in X are honest, without having access to the secret inputs and outputs of parties in X. As soon as a party in X gets corrupted, the guarantee for this set is dropped. (However, guarantees for other so-far honest sets still remain.)
The corruptions are completely adaptive in the strong and usual sense: the selection of which parties become corrupted can be based on information gathered during the protocol execution. The more parties are corrupted, the fewer guarantees remain.
Technically, the commitment problem does not arise because the guarantees are dropped (i.e. the simulation stops) at the point where a party in X gets corrupted. Therefore, the simulator does not need to explain the secret state of a party in X. This is in contrast to previous adaptive security definitions, which require the existence of a single simulator that explains all possible cases.
The described guarantees are naturally phrased within the constructive cryptography (CC) [38,39,40] composable framework, where each guarantee corresponds to a set specification of systems, and the conjunction of guarantees is simply the intersection of specifications. The protocol can then achieve all these guarantees within a single construction statement.
Comparison with Standard Static and Adaptive Security. At a technical level, we show that our new definition lies in-between the current standard UC-security definitions for static and adaptive security, respectively. Interestingly, popular examples that separate the standard static and adaptive security notions and do not exploit the commitment problem, also separate static from CC-adaptive security, therefore giving evidence that CC-adaptive security gives strong adaptive security guarantees. More concretely, we show the following.
Static vs CC-Adaptive Security. We first show that CC-adaptive security implies static security in all settings. Moreover, we also show that CC-adaptive security is strictly stronger than static security: for the case of passive corruption and a large number of parties, the protocol shown in [12] separates the notions of static and CC-adaptive security, and in the case of active corruption and at least three parties, the protocol shown in [10] makes the separation.
Adaptive vs CC-Adaptive Security. We show that UC-adaptive security is strictly stronger than CC-adaptive security in all settings, by showing a protocol example based on the commitment problem.
Applications. We demonstrate the usefulness of our notion with two examples, showing that known protocols achieve strong adaptive security guarantees without the use of non-committing or equivocal encryption.
CDN Protocol. First, we show that the protocol by Cramer, Damgard and Nielsen [17] (CDN) based on threshold (additively) homomorphic encryption (THE) achieves CC-adaptive security in the honest majority setting. In the passive corruption setting, the protocol is described assuming solely the key setup for the THE scheme, while in the active corruption setting, the protocol is described assuming in addition a multi-party zero-knowledge functionality. This shows that the CDN protocol approach achieves strong adaptive security guarantees as-is, even when using an encryption scheme that commits to the plaintext.
CLOS Protocol. Second, we show that the variant of the protocol by Canetti, Lindell, Ostrovsky and Sahai [13] (CLOS) that does not use non-committing encryption, previously only proven statically secure, actually achieves CC-adaptive security in the dishonest majority setting. This is achieved by showing that the oblivious transfer from [25] achieves CC-adaptivity, and the CLOS compiler transforming passive to active protocols preserves CC-adaptivity. Note that, to the best of our knowledge, all previous UC-adaptive protocols in the dishonest majority setting required some form of non-committing or equivocal encryption.
1.4 Further Related Work
The problem of MPC with adaptive security was first studied by Canetti, Feige, Goldreich and Naor [12], and there is a large literature on MPC protocols with adaptive security. In the case of honest majority, it was shown that classical MPC protocols are adaptively secure [5, 15, 48]. Using the results in [32, 33], it was shown that these protocols achieve UC adaptive security with abort in the plain model, or guaranteed output delivery in the synchronous model. A more efficient protocol was shown in [19], following the CDN-approach based on threshold homomorphic encryption and assuming a CRS. In the case of dishonest majority, the protocols achieve security with abort, and all known protocols assume some form of non-committing encryption or equivocation. The first work achieving adaptive security for dishonest majority was the protocol by Canetti, Lindell, Ostrovsky and Sahai [13], assuming a CRS setup. Since then, several subsequent works have improved its round and communication complexity (e.g. [6, 14, 16, 18, 23]). The work by Garg and Sahai [24] considered adaptive security in the stand-alone model without trusted setup.
The work by Garay, Wichs and Zhou [22] considers the notion of semi-adaptive security for two parties, which gives guarantees for the case where one party is corrupted, and the other party is honest and can be adaptively corrupted. In contrast, our security notion imposes guarantees also when both parties start out honest.
2 Preliminaries: Constructive Cryptography
The basic concepts of the Constructive Cryptography framework by Maurer and Renner [38,39,40] needed for this paper are quite natural and are summarized below.
2.1 Specifications and Constructions
A basic idea, which one finds in many disciplines, is that one considers a set \(\varPhi \) of objects and specifications of such objects. A specification \(\mathcal {U}\subseteq \varPhi \) is a subset of \(\varPhi \) and can equivalently be understood as a predicate on \(\varPhi \) defining the set of objects satisfying the specification, i.e., being in \(\mathcal {U}\). Examples of this general paradigm are the specification of mechanical parts in terms of certain tolerances (e.g. the thickness of a bolt is between 1.33 and 1.34 mm), the specification of the property of a program (e.g. the set of programs that terminate, or the set of programs that compute a certain function within a given accuracy and time limit), or in a cryptographic context the specification of a close-to-uniform n-bit key as the set of probability distributions over \(\{0,1\}^n\) with statistical distance at most \(\epsilon \) from the uniform distribution.
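The last, cryptographic example can be phrased directly as a membership predicate. The sketch below (hypothetical helper names, our own illustration) checks whether a given distribution over \(\{0,1\}^n\) lies in the specification of \(\epsilon \)-close-to-uniform keys.

```python
from itertools import product

def stat_dist(p_dist, q_dist):
    """Statistical (total variation) distance between two distributions,
    each given as a dictionary mapping outcomes to probabilities."""
    keys = set(p_dist) | set(q_dist)
    return 0.5 * sum(abs(p_dist.get(k, 0) - q_dist.get(k, 0)) for k in keys)

def in_spec(dist, n, eps):
    """Membership predicate of the specification: distributions over
    {0,1}^n with statistical distance at most eps from uniform."""
    uniform = {bits: 2 ** -n for bits in product((0, 1), repeat=n)}
    return stat_dist(dist, uniform) <= eps

# A slightly biased single bit: distance 0.01 from the uniform bit.
biased = {(0,): 0.51, (1,): 0.49}
assert in_spec(biased, 1, 0.02) and not in_spec(biased, 1, 0.005)
```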
A specification corresponds to a guarantee, and smaller specifications hence correspond to stronger guarantees. An important principle is to abstract a specification \(\mathcal {U}\) by a larger specification \(\mathcal {V}\) (i.e., \(\mathcal {U}\subseteq \mathcal {V}\)) which is simpler to understand and work with. One could call \(\mathcal {V}\) an ideal specification to hint at a certain resemblance with terminology often used in the cryptographic literature. If a construction (see below) requires an object satisfying specification \(\mathcal {V}\), then it also works if the given object actually satisfies the stronger specification \(\mathcal {U}\).
A construction is a function \(\gamma :\varPhi \rightarrow \varPhi \) transforming objects into (usually in some sense more useful) objects. A well-known example of a construction useful in cryptography, achieved by a so-called extractor, is the transformation of a pair of independent random variables (say a short uniform random bit-string, called seed, and a long bit-string for which only a bound on the min-entropy is known) into a close-to-uniform string.
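As a minimal illustration of this kind of construction (a sketch under simplifying assumptions, not the specific extractor construction alluded to above), the inner product over GF(2) of a uniform seed with a weak source yields a single output bit that is close to uniform whenever the source has sufficient min-entropy:

```python
import secrets

def inner_product_bit(seed, source):
    """Extract one bit as the inner product <seed, source> over GF(2).
    The seed is uniform and independent of the source; for a source with
    sufficient min-entropy the output bit is close to uniform."""
    assert len(seed) == len(source)
    return sum(s & x for s, x in zip(seed, source)) & 1

# Usage: a fresh uniform 16-bit seed plus a weak 16-bit source -> one bit.
seed = [secrets.randbelow(2) for _ in range(16)]
weak_source = [1, 0] * 8          # stand-in for a high-min-entropy sample
bit = inner_product_bit(seed, weak_source)
assert bit in (0, 1)
```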
A construction statement of specification \(\mathcal {S}\) from specification \(\mathcal {R}\) using construction \(\gamma \), denoted \(\mathcal {R} \xrightarrow {\gamma } \mathcal {S}\), is of the form

$$\mathcal {R} \xrightarrow {\gamma } \mathcal {S} \;:\Longleftrightarrow \; \gamma (\mathcal {R}) \subseteq \mathcal {S}, \qquad \text {where } \gamma (\mathcal {R}) := \{\gamma (R) \mid R \in \mathcal {R}\}.$$

It states that if construction \(\gamma \) is applied to any object satisfying specification \(\mathcal {R}\), then the resulting object is guaranteed to satisfy (at least) specification \(\mathcal {S}\).
The composability of this construction notion follows immediately from the transitivity of the subset relation:

$$\mathcal {R} \xrightarrow {\gamma } \mathcal {S} \;\wedge \; \mathcal {S} \xrightarrow {\gamma '} \mathcal {T} \;\Longrightarrow \; \mathcal {R} \xrightarrow {\gamma ' \circ \gamma } \mathcal {T}.$$
2.2 Resources and Converters
The above natural and very general viewpoint of specifications is taken in Constructive Cryptography, where the objects in \(\varPhi \) are systems, called resources, with interfaces to the parties considered in the given setting.
Resources. A resource \(R\) is a reactive system with interfaces. Formally, resources are modeled as random systems [37, 41], where the interface address and the actual input value are encoded as part of the input; the system then answers with an output value at the same interface. One can take several independent resources \(R_1,\dots , R_k\) and form a new resource \([R_1,\dots , R_k]\) whose interface set is the union of the individual interface sets. This resource is called the parallel composition.
Converters. A converter models the local actions executed by a party at its interface, which can be thought of as a system or protocol engine. Formally, converters are modeled as random systems with two interfaces, an outside interface and an inside interface. At its inside, the converter gives input to the party’s interface of the resource and at the outside it emulates an interface (of the transformed resource). Upon an input at an outside interface, the converter is allowed to make a bounded number of queries to the inside interfaces, before returning a value at the queried interface. Applying a converter induces a mapping \(\varPhi \rightarrow \varPhi \). We denote the set of converters as \(\varSigma \).
For a converter \(\alpha \) and a resource \(R\), we denote by \(\alpha ^i R\) the resource obtained by applying the converter to the resource at interface i. Converter attachment satisfies composition order invariance: applying converters at distinct interfaces commutes. That is, for any converters \(\alpha \) and \(\beta \), any resource \(R\), and any distinct interfaces j, k, we have \(\alpha ^j \beta ^k R = \beta ^k \alpha ^j R\).
Distinguisher. A distinguisher D is a reactive system that interacts with a resource by making queries at its interfaces, and outputs a bit. The advantage of D in distinguishing two resources \(R\) and \(T\) is defined as

$$\varDelta ^D(R, T) := \Pr [D(T)=1] - \Pr [D(R)=1].$$
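For intuition, the advantage can be estimated empirically in a toy model where single-query resources are sampling functions; the names and the coin-flip example below are our own illustration, not part of the formal framework.

```python
import random

def advantage(D, R, T, trials=20000, seed=1):
    """Monte-Carlo estimate of |Pr[D(T)=1] - Pr[D(R)=1]| for single-query
    'resources' modeled as functions from an RNG to an output."""
    rng = random.Random(seed)
    hits_R = sum(D(R(rng)) for _ in range(trials)) / trials
    hits_T = sum(D(T(rng)) for _ in range(trials)) / trials
    return abs(hits_T - hits_R)

R = lambda rng: rng.random() < 0.5    # fair-coin resource
T = lambda rng: rng.random() < 0.6    # biased-coin resource
D = lambda out: int(out)              # distinguisher: output the observed bit

est = advantage(D, R, T)              # true advantage of this D is 0.1
assert abs(est - 0.1) < 0.03
```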
2.3 Relaxations
Often a construction statement does not achieve a desired specification \(\mathcal {S}\), but only a relaxed version of \(\mathcal {S}\). We capture this via so-called relaxations [40], which map specifications to weaker, or relaxed, specifications. A relaxation formalizes the idea that we are often happy with resources being almost as good as a target resource specification. For example, one could consider the relaxation that maps a resource \(S\) to the set of resources that are indistinguishable from \(S\).
Definition 1
Let \(\varPhi \) denote the set of all resources. A relaxation \(\phi : \varPhi \rightarrow 2^{\varPhi }\) is a function such that \(R \in \phi (R)\), for all \(R \in \varPhi \). In addition, for a specification \(\mathcal {R}\), we define \(\mathcal {R}^{\phi } := \bigcup _{R \in \mathcal {R}} \phi (R)\).
Relaxations satisfy two important properties. The first is that \(\mathcal {S} \subseteq \mathcal {S}^{\phi }\). The second is that \(\mathcal {R} \subseteq \mathcal {S}\) implies \(\mathcal {R}^{\phi } \subseteq \mathcal {S}^{\phi }\). This simplifies modular analysis, as it means that one can typically consider assumed resources that are completely ideal, i.e., not relaxed. More concretely, from the statements \(\mathcal {R} \subseteq \mathcal {S}^{\phi }\) and \(\mathcal {S} \subseteq \mathcal {T}^{\phi '}\), one can conclude that \(\mathcal {R} \subseteq \mathcal {S}^{\phi } \subseteq (\mathcal {T}^{\phi '})^{\phi } = \mathcal {T}^{\phi \circ \phi '}\), where the middle inclusion uses the second property.
In the following, we introduce a few generic types of relaxations [31, 40] that we will use throughout the paper.
\(\boldsymbol{\epsilon }\)-Relaxation. We introduce a fundamental relaxation that captures computational security based on explicit reductions. For that, we define a function \(\epsilon \) that maps distinguishers to their respective advantage in [0, 1]. The usual interpretation is that \(\epsilon (D)\) is the advantage in the underlying computational problem of the distinguisher which is modified by the reduction.
Definition 2
Let \(\epsilon \) be a function that maps distinguishers to a real value in [0, 1]. We define the \(\epsilon \)-relaxation of a resource \(R\) as:

$$R^{\epsilon } := \{ S \in \varPhi \mid \varDelta ^D(R, S) \le \epsilon (D) \text { for all } D \}.$$
Until-Relaxation. Sometimes we want to consider guarantees that hold up to the point where a certain event happens. This is formally modeled by considering an additional so-called monotone binary output (MBO) [41], which is a binary value that can switch from 0 to 1, but not back. Such an MBO can for example model that all inputs to the system are distinct (no collisions).
Definition 3
Let \(R\) be a resource, and let \(\mathcal {E}\) be an MBO for the resource. We denote by \(\mathsf {until}_{\mathcal {E}}(R)\) the resource that behaves like \(R\), but halts when \(\mathcal {E} = 1\). That is, for any inputs from the point when \(\mathcal {E}=1\) (and including the input that triggered the condition), the output is \(\bot \).
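A minimal sketch of \(\mathsf {until}_{\mathcal {E}}(\cdot )\) in code (our illustration, with a resource modeled as a simple callable) shows the halting behavior, using the no-collision MBO mentioned above:

```python
class Until:
    """Wrap a resource (a callable query -> answer) with a monotone binary
    output (MBO): once the predicate fires it stays 1, and every output
    from that point on -- including the triggering query -- is None (⊥)."""
    def __init__(self, resource, mbo_fires):
        self.resource = resource
        self.mbo_fires = mbo_fires
        self.halted = False

    def query(self, x):
        if not self.halted and self.mbo_fires(x):
            self.halted = True
        return None if self.halted else self.resource(x)

# MBO from the text: 'all inputs are distinct'; it fires on a collision.
seen = set()
def collision(x):
    if x in seen:
        return True
    seen.add(x)
    return False

r = Until(lambda x: x * 2, collision)
assert r.query(3) == 6
assert r.query(4) == 8
assert r.query(3) is None     # collision: the MBO fires, output is ⊥
assert r.query(5) is None     # and every later output stays ⊥
```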
The until-relaxation of a system \(R\) [31] consists of the set of all systems that behave equivalently up to the point where the MBO is set to 1.
Definition 4
Let \(R\) be a resource, and let \(\mathcal {E}\) be an MBO for the resource. The \(\mathcal {E}\)-until-relaxation of \(R\), denoted \(R^{\mathcal {E}]}\), is the set of all systems that have the same behavior as \(R\) until \(\mathcal {E}=1\). That is,

$$R^{\mathcal {E}]} := \{ S \in \varPhi \mid \mathsf {until}_{\mathcal {E}}(S) \equiv \mathsf {until}_{\mathcal {E}}(R) \}.$$
Combined Relaxation. In this paper we are interested in the relaxation that corresponds to the intuitive interpretation of “the set of all systems that behave equally until \(\mathcal {E}=1\), provided the assumption underlying \(\epsilon \) holds”. However, it was proven in [31] that the \(\epsilon \)-relaxation and the until-relaxation do not generally commute, i.e., \((\mathcal {R}^{\mathcal {E}]})^\epsilon \not \subseteq (\mathcal {R}^\epsilon )^{\mathcal {E}]}\) and \((\mathcal {R}^{\mathcal {E}]})^\epsilon \not \supseteq (\mathcal {R}^\epsilon )^{\mathcal {E}]}\), and therefore it is not clear whether either of the two corresponds to the intuitive interpretation. Moreover, choosing one of them would partially limit the composability of such statements: if one construction assumes \(\mathcal {S}^{\mathcal {E}]}\) to construct \(\mathcal {T}\), and another one constructs \(\mathcal {S}^\epsilon \), then adjusting the first construction to use \(\mathcal {S}^\epsilon \) is not trivial. Following the solution in [31], we consider the following combined relaxation.
Definition 5
Let \(R\) be a resource, \(\mathcal {E}\) be an MBO, and \(\epsilon \) be a function mapping distinguishers to a real value in [0, 1]. The \((\mathcal {E},\epsilon )\)-until-relaxation of \(R\), denoted \(R^{{\mathcal {E}}:{\epsilon }}\), is defined as follows:
The combined relaxation benefits from the following desired properties, as shown in [31].
Lemma 1
Let \(\mathcal {R}\) be a specification, \(\mathcal {E}_1\), \(\mathcal {E}_2\) be MBOs for the resource, and \(\epsilon _1\), \(\epsilon _2\) be functions mapping distinguishers to a real value in [0, 1]. Then,
where \(\epsilon _{\mathcal {E} \vee \mathcal {E}'}(D) = \epsilon (D \circ \mathsf {until}_{\mathcal {E} \vee \mathcal {E}'})\) is the advantage of the distinguisher interacting with the projected (by the function \(\mathsf {until}_{\mathcal {E} \vee \mathcal {E}'}(\cdot )\)) resource, and analogously for \(\epsilon '_{\mathcal {E} \vee \mathcal {E}'}\).
Lemma 2
Let \(\mathcal {R}\) be a specification, \(\mathcal {E}\) be an MBO for the resource, and \(\epsilon \) be a function mapping distinguishers to a real value in [0, 1]. Further let \(\alpha \) be a converter, and let i be an interface of \(\mathcal {R}\). The \((\mathcal {E},\epsilon )\)-until-relaxation is compatible with converter application and with parallel composition. That is,
1.
\(\alpha ^i \left( \mathcal {R}^{{\mathcal {E}}:{\epsilon }}\right) \subseteq \left( \alpha ^i \mathcal {R} \right) ^{{\mathcal {E}}:{\epsilon _{\alpha }}}\), for \(\epsilon _{\alpha }(D) := \epsilon (D\alpha ^i)\), where \(D\alpha ^i\) denotes the distinguisher that first attaches \(\alpha \) at interface i of the given resource, and then executes D.

2.
\([\mathcal {R}^{{\mathcal {E}}:{\epsilon }}, \mathcal {S}] \subseteq [\mathcal {R}, \mathcal {S}]^{{\mathcal {E}}:{\epsilon _{\mathcal {S}}}}\), for \(\epsilon _{\mathcal {S}}(D) := \sup _{S \in \mathcal {S}} \epsilon (D[\cdot , S])\), where \(D[\cdot , S]\) denotes the distinguisher that emulates \(S\) in parallel to the given resource, and then executes D.
3 Multi-party Constructive Cryptography with Adaptive Corruption
In this section, we present our model for n-party constructions with adaptive corruption. In this setting, we consider scenarios where an adversary may “hack” into the parties’ systems during the protocol execution.
Multi-party Resources. A multi-party resource for n parties is a resource with \(n+2\) interfaces: a set \(\mathcal {P}= \{1, \dots , n\}\) of n party interfaces, an adversary interface A and a free interface W [2]. The party interfaces allow each party to have access to the resource. The adversary interface models adversarial access to the resource. The free interface allows direct access by the environment to the resource, and is used to model aspects that are not used by the parties, but also not controlled by the adversary.
Examples of such resources are the available network resource (which allows parties to communicate with each other), as well as the constructed computation resources (which allow the parties to perform arbitrary computations on their secret inputs).
Protocols. A protocol consists of a tuple of converters \(\boldsymbol{\pi } = (\pi _1,\dots ,\pi _n)\), one for each party. We denote by \(\boldsymbol{\pi } R\) the resource where each converter \(\pi _j\) is attached to party interface j.
Basic Construction Notion. We say that a protocol \(\boldsymbol{\pi }\) constructs specification \(\mathcal {S}\) from specification \(\mathcal {R}\) if and only if \(\boldsymbol{\pi } \mathcal {R}\) satisfies specification \(\mathcal {S}\), i.e., \(\boldsymbol{\pi } \mathcal {R}\subseteq \mathcal {S}\).
Definition 6
Let \(\mathcal {R}\) and \(\mathcal {S}\) be specifications, and let \(\boldsymbol{\pi }\) be a protocol. We say that \(\boldsymbol{\pi }\) constructs \(\mathcal {S}\) from \(\mathcal {R}\), if and only if \(\boldsymbol{\pi } \mathcal {R}\subseteq \mathcal {S}\).
The specifications \(\boldsymbol{\pi } \mathcal {R}\) and \(\mathcal {S}\) are usually called the real world and the ideal world, respectively.
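Definition 6 is literally a subset check. For intuition, a toy sketch (with integers standing in for resources and a plain function standing in for the protocol; all names are our own illustration):

```python
def constructs(pi, R_spec, S_spec):
    """Definition 6 as a literal subset check on finite toy specifications:
    the real world pi(R_spec) must be contained in the ideal world S_spec."""
    return {pi(R) for R in R_spec} <= S_spec

# Toy objects: 'resources' are integers, the 'protocol' doubles them.
R_spec = {1, 2, 3}
S_spec = {2, 4, 6, 8}
assert constructs(lambda r: 2 * r, R_spec, S_spec)        # {2,4,6} ⊆ S_spec
assert not constructs(lambda r: 2 * r + 1, R_spec, S_spec)  # {3,5,7} ⊄ S_spec
```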
Typical constructions in the literature describe the ideal specification \(\mathcal {S}\) via the so-called simulation paradigm, that is, by showing the existence of a simulator \(\sigma \) attached to the adversary interface of a fixed ideal resource \(S\), which can for example be a resource that computes a function over private inputs. Note that this is just one particular way of defining the security guarantees in our framework: one can express different types of ideal specifications, as we will show in our examples (see [31, 35] for further examples in other settings).
Protocol Converters as Resources. In order to model adaptive corruptions in Constructive Cryptography, we consider only trivial converters [40]. More concretely, we consider the class \(\varSigma \) of trivial converters, which only define a wiring between resource interfaces, and the protocol engines are then interpreted as resources. In more detail, when writing a resource \(\alpha ^i R\) consisting of a converter \(\alpha \) attached to interface i of resource \(R\), we understand the converter \(\alpha \) as a resource, denoted \(\tilde{\alpha }\), in parallel with \(R\), and we consider a trivial converter \(\beta \) for interface i that simply connects \(\tilde{\alpha }\) and \(R\), i.e., we have \(\alpha ^i R = \beta ^i [R, \tilde{\alpha }]\). This interpretation is depicted in Fig. 1.
Modeling Corruptions. Protocol converters as resources have, like any resource, an adversary interface A and a free interface W. Corruption is modeled explicitly as an input to the resource via the free interface W. Upon input \(\mathsf {corrupt}\) at interface W, the resource adds additional capabilities at the adversary interface A.
One can then model different types of corruption. In order to model passive corruption, we require that upon input \(\mathsf {corrupt}\) at interface W, the resource makes accessible the entire local state at interface A. One can then access the local state of the resource via interface A with an input \(\mathtt {leak}\). If active corruption is considered, the adversary can in addition take over the inside and outside interfaces of the protocol engine via the adversarial interface A. That is, any inputs given at the inside or outside interface are first made available to A, who then decides what the values are.
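A toy sketch of passive corruption in this style (our illustration; a real protocol engine has richer inside and outside interfaces): before \(\mathsf {corrupt}\) arrives at W, nothing is exposed at A; afterwards, \(\mathtt {leak}\) returns the local state.

```python
class ProtocolEngine:
    """Toy protocol engine as a resource with a party interface, an
    adversary interface A and a free interface W, supporting passive
    corruption as described in the text."""
    def __init__(self):
        self.state = []            # local state accumulated by the engine
        self.corrupted = False

    def party_input(self, msg):    # party interface
        self.state.append(msg)
        return "ack"

    def free_input(self, msg):     # free interface W
        if msg == "corrupt":
            self.corrupted = True

    def adv_input(self, msg):      # adversary interface A
        if msg == "leak" and self.corrupted:
            return list(self.state)
        return None                # nothing is exposed before corruption

e = ProtocolEngine()
e.party_input("x1")
assert e.adv_input("leak") is None     # honest: state stays hidden
e.free_input("corrupt")
assert e.adv_input("leak") == ["x1"]   # passive corruption: state leaks at A
```

Active corruption would in addition route all inputs at the inside and outside interfaces through A, letting the adversary decide the values.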
4 CC-Adaptive Security
In an MPC protocol, the set of corrupted parties grows during the protocol execution, and at the same time, the set of parties that benefit from security guarantees, i.e. the set of so-far honest parties, shrinks.
We propose a very natural way to understand such guarantees obtained from an MPC protocol with adaptive corruption. The idea is to understand the guarantees as simultaneously achieving guarantees for every set of so-far honest parties. More concretely, we explicitly state one separate guarantee for every subset \(X \subseteq \mathcal {P}\) of the parties, and as long as parties in X are honest, the guarantee remains. That is, the guarantee is dropped as soon as any party in X gets corrupted.
The corruptions are completely adaptive as usual, and the choice of which parties become corrupted can be based on information gathered during the protocol execution. The more parties are corrupted, the fewer sets remain so-far honest, and therefore the fewer guarantees remain.
The described guarantees can naturally be captured within the constructive cryptography framework, where each guarantee corresponds to a resource specification, and the conjunction of guarantees is simply the intersection of specifications.
As we will show in Sect. 4.2, our notion of CC-adaptive security lies strictly in-between the standard UC-security notions of static and adaptive security. Popular examples that separate static and adaptive security and are not based on the commitment problem also separate static and CC-adaptive security, while examples based on the commitment problem separate our notion from UC-adaptive security. This shows that CC-adaptive security achieves strong resilience against adaptive corruption, while at the same time overcoming the commitment problem.
4.1 Definition of the Security Notion
As sketched above, our security notion gives a guarantee for every set of so-far honest parties. That is, we give an explicit guarantee for each subset \(X \subseteq \mathcal {P}\) of parties, which lasts as long as the set X is honest, irrespective of whether the other parties are honest or not. The guarantee provides privacy to the parties in X, and is described as usual, by requiring the existence of a simulator (for this set X) that correctly simulates the protocol execution. The simulator has to simulate without knowing the secret inputs and outputs of parties in X, but since the guarantee holds irrespective of the honesty of other parties, we allow the simulator to have access to the inputs of parties in \(\overline{X} = \mathcal {P}\setminus X\). Moreover, as soon as a party in X is corrupted, the guarantee for this set is lost (and therefore the simulation stops at this point). However, guarantees for other so-far honest sets still remain.
Finally, we state the guarantees with respect to a (monotoneFootnote 3) adversary structure \(\mathcal {Z} \subseteq 2^{\mathcal {P}}\), meaning that if too many corruptions happen, i.e., the set of corrupted parties exceeds the adversary structure, all guarantees are lost.
Addressing Adaptive Concerns. We will see with the examples below that this security notion captures the typical adaptive concerns: leaking sensitive information in the network, and also information leaked from an internal state of a party.
The former is prevented since the adversary is fully adaptive and can corrupt parties based on information seen in the network. Note, however, that when considering a small number of parties, and in particular two parties, our definition is intuitively very close to that of static security. This is because when X contains one party, the guarantee holds only for adversarial strategies that involve corrupting the (single) party in \(\overline{X}\).Footnote 4 (When the number of parties increases, many of the sets \(\overline{X}\) contain a large number of parties, and the guarantee holds even when the adversary adaptively corrupts parties among these large sets. See Lemma 5.)
To see why the definition also limits the information leaked from an internal state, note that upon corruption of party i, all guarantees for sets X with \(i \notin X\) still remain. In all those simulations, the internal state of party i must be explained (from the inputs of parties in \(\overline{X}\)). Concretely, for X being the so-far honest set, the state of party i does not reveal anything beyond what can be inferred from the current corrupted set.
Avoiding the Commitment Problem. Intuitively, the commitment problem does not arise, because the guarantees are lost (i.e. the simulation stops) at the point where a party in X gets corrupted. Therefore, the simulator does not need to explain the secret state of any party in X. Moreover, for parties that are in \(\overline{X}\), the simulator can consistently explain the secret state because it has access to the inputs of these parties.
Let \(\mathcal {E}_X\) be the MBO indicating whether any party in X is corrupted. Moreover, let \(\sigma _{\overline{X}}\) be a simulator that has access to the inputs of all parties from the set \(\overline{X}\).Footnote 5 Formally, any inputs given at interfaces from parties in \(\overline{X}\), are forwarded to the adversary interface. Moreover, note that we only allow the simulator to modify the inputs of actively corrupted parties.Footnote 6
Further let \(\mathcal {E}_{\mathcal {Z}}\) be the MBO that is set to 1 when the set of corrupted parties does not lie in \(\mathcal {Z}\). For the common case of threshold corruption where the adversary structure contains all sets of up to t parties, we denote \(\mathcal {E}_{t}\) the MBO that is set to 1 when more than t parties are corrupted.
We require that for each set of parties X, there must be a simulator \(\sigma _{\overline{X}}\) that simulates the protocol execution until any party in X is corrupted, or the adversary structure is no longer respected, i.e., until \(\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}} = 1\).
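To make the halting condition concrete, the MBOs \(\mathcal {E}_X\), \(\mathcal {E}_{\mathcal {Z}}\) and \(\mathcal {E}_t\) can be sketched as predicates on the current corrupted set (a minimal illustration with our own function names, not part of the formal framework):

```python
# Sketch of the MBOs from the text. An MBO is a monotone binary output: once
# it is 1 it stays 1; here we evaluate each one as a predicate on the current
# set of corrupted parties.

def mbo_X(corrupted, X):
    """E_X = 1 as soon as any party in X is corrupted."""
    return 1 if corrupted & X else 0

def mbo_Z(corrupted, Z):
    """E_Z = 1 when the corrupted set no longer lies in the (monotone)
    adversary structure Z, i.e. is contained in no set of Z."""
    return 0 if any(corrupted <= S for S in Z) else 1

def mbo_t(corrupted, t):
    """Threshold case: E_t = 1 when more than t parties are corrupted."""
    return 1 if len(corrupted) > t else 0

def simulation_halts(corrupted, X, Z):
    """The guarantee for X is dropped once E_X v E_Z = 1."""
    return mbo_X(corrupted, X) or mbo_Z(corrupted, Z)

# Example: 4 parties, threshold structure Z = all sets of size <= 2, X = {1}.
Z = [frozenset(s) for s in ({1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4})]
X = {1}
assert simulation_halts({2, 3}, X, Z) == 0     # X honest, within Z
assert simulation_halts({1}, X, Z) == 1        # a party in X corrupted
assert simulation_halts({2, 3, 4}, X, Z) == 1  # structure exceeded
```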
Definition 7
Protocol \(\boldsymbol{\pi }\) CC-adaptively constructs specification \(\mathcal {S}\) from \(\mathcal {R}\) with error \(\epsilon \) and adversary structure \(\mathcal {Z}\), if for each set \(X \subseteq \mathcal {P}\), there exists (an efficient) simulator \(\sigma _{\overline{X}}\), such that \(\boldsymbol{\pi } \mathcal {R} \subseteq (\sigma _{\overline{X}} \mathcal {S})^{{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}:{\epsilon }}\). In short, \(\boldsymbol{\pi } \mathcal {R}\) satisfies the following intersection of specifications: \(\boldsymbol{\pi } \mathcal {R} \subseteq \bigcap _{X \subseteq \mathcal {P}} (\sigma _{\overline{X}} \mathcal {S})^{{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}:{\epsilon }}\).
Moreover, we say that \(\boldsymbol{\pi }\) CC-adaptively constructs \(\mathcal {S}\) from \(\mathcal {R}\) with error \(\epsilon \) up to t corruptions if \(\boldsymbol{\pi } \mathcal {R} \subseteq \bigcap _{X \subseteq \mathcal {P}} (\sigma _{\overline{X}} \mathcal {S})^{{\mathcal {E}_X \vee \mathcal {E}_{t}}:{\epsilon }}\).
The following lemma shows that this type of construction statement benefits from desirable composition guarantees.
Lemma 3
Let \(\mathcal {R}\), \(\mathcal {S}\), \(\mathcal {T}\) be specifications, and let \(\boldsymbol{\pi }\), \(\boldsymbol{\pi }'\) be protocols. Further let \(\mathcal {Z} \subseteq 2^{\mathcal {P}}\) be a monotone set. Then, we have the following composition guarantees:
if \(\boldsymbol{\pi }\) CC-adaptively constructs \(\mathcal {S}\) from \(\mathcal {R}\) with error \(\epsilon \), and \(\boldsymbol{\pi }'\) CC-adaptively constructs \(\mathcal {T}\) from \(\mathcal {S}\) with error \(\epsilon '\), then the composed protocol \(\boldsymbol{\pi }' \boldsymbol{\pi }\) CC-adaptively constructs \(\mathcal {T}\) from \(\mathcal {R}\) with error \((\epsilon _{\pi '})_{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}} + (\epsilon '_{\sigma _{\overline{X}}})_{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}\), where \((\epsilon _{\pi '})_{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}\) is the advantage of the distinguisher that first attaches \(\pi '\) to the given resource, and then interacts with the projected resource, and analogously for \((\epsilon '_{\sigma _{\overline{X}}})_{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}\).
Furthermore, if \(\boldsymbol{\pi }\) CC-adaptively constructs \(\mathcal {S}\) from \(\mathcal {R}\) with error \(\epsilon \), then for any resource \(T\), protocol \(\boldsymbol{\pi }\) CC-adaptively constructs \([\mathcal {S}, T]\) from \([\mathcal {R}, T]\) with error \(\epsilon (D[\cdot , T])\), where \(D[\cdot , T]\) is the distinguisher that emulates \(T\) in parallel to the given resource, and then executes D.
Proof
The proof can be found in Sect. A.
4.2 Comparison to Traditional Notions of Security
In this section we show how to phrase the standard notions of static and adaptive security within our framework, and further show that our new definition lies in-between these two notions.
Static Security. In the standard notion of static security, the set of protocol engines that are corrupted is fixed before the computation starts and does not change. The possible corruption sets are modelled by a given adversary structure \(\mathcal {Z} \subseteq 2^{\mathcal {P}}\). Given a set \(Z \in \mathcal {Z}\), we denote by \(\boldsymbol{\pi }_{\overline{Z}}\mathcal {R}\) the real-world resource, where the set of protocol engines \(\pi _i\), \(i \in Z\), are corrupted. The security definition requires the existence of a simulator \(\sigma _Z\) that simulates the protocol execution and has control over the inputs and outputs from corrupted parties. As usual, in the passive case, the simulator can read these values, while in the active case, it can also change them.
Definition 8
Protocol \(\boldsymbol{\pi }\) statically constructs specification \(\mathcal {S}\) from \(\mathcal {R}\) with error \(\epsilon \) and adversary structure \(\mathcal {Z}\), if for each possible set of corrupted parties \(Z \in \mathcal {Z}\), there exists a simulator \(\sigma _Z\) such that \(\boldsymbol{\pi }_{\overline{Z}}\mathcal {R} \subseteq (\sigma _Z \mathcal {S})^{\epsilon }\), where \(\boldsymbol{\pi }_{\overline{Z}}\) indicates that protocol converters \(\pi _i\), \(i \in Z\), are corrupted, and \(\sigma _Z\) indicates that the simulator has control over the inputs and outputs of parties in Z.
Lemma 4
CC-adaptive security implies static security.
Proof
Let \(\boldsymbol{\pi }\) be a protocol that constructs \(\mathcal {S}\) from specification \(\mathcal {R}\) with error \(\epsilon \) and adversary structure \(\mathcal {Z}\), with CC-adaptive security. We prove that \(\boldsymbol{\pi }\) also satisfies static security with the same parameters. Fix a set \(Z \in \mathcal {Z}\). Consider the particular corruption strategy, where parties in Z are corrupted at the start of the protocol execution, and no more corruptions happen.
In this case, \(\mathcal {E}_{\overline{Z}} \vee \mathcal {E}_{\mathcal {Z}} = 0\), because no party in \(\overline{Z}\) is corrupted, and the set of corrupted parties lies within \(\mathcal {Z}\). Therefore, for the case where \(X = \overline{Z}\), there must exist a simulator \(\sigma _Z\) (with access to the inputs and outputs of parties in Z, which are corrupted) that satisfies \(\boldsymbol{\pi }_{\overline{Z}}\mathcal {R} \subseteq (\sigma _{Z}\mathcal {S})^{{\mathcal {E}_{\overline{Z}} \vee \mathcal {E}_{\mathcal {Z}}}:{\epsilon }} = (\sigma _Z \mathcal {S})^{\epsilon }\).
In the following, we show that known examples of protocols that separate the standard notions of static and adaptive security [10, 12] also separate static and CC-adaptive security, in the case of both passive and active corruption.
Lemma 5
For passive corruption and a large number of parties, CC-adaptive security is strictly stronger than static security.
Proof
We consider the classical example from Canetti et al. [12]. Consider a secure function evaluation protocol with guaranteed output delivery where the parties evaluate the function that outputs \(\bot \). The adversary structure contains all sets of up to \(t = O(n)\) parties.
The protocol \(\boldsymbol{\pi }\) proceeds as follows: A designated party D secret-shares its input to a randomly selected set U of \(\kappa \) parties (out of all parties except D) using a \(\kappa \)-out-of-\(\kappa \) sharing scheme, where \(\kappa \) is the security parameter. Then, D makes the set U public (e.g. by sending the set to all parties). Subsequently, all parties output \(\bot \).
It is known that \(\boldsymbol{\pi }\) achieves static security. This is because an adversary not corrupting D only learns D’s secret if U happens to be the predefined set of corrupted parties, which occurs with probability exponentially small in \(\kappa \). More concretely, for each \(Z \in \mathcal {Z}\) not containing D, the probability that \(U = Z\) is \(\binom{n-1}{\kappa }^{-1} = \mathtt {neg}(\kappa )\). (Note that in the case where \(U \ne Z\), the simulator trivially succeeds simply by emulating the shares as random values.)
Now we show that \(\boldsymbol{\pi }\) does not achieve CC-adaptive security. Consider the singleton set \(X = \{D\}\), containing only the designated party. Note that U does not contain D, since D chooses a set of \(\kappa \) parties randomly from the set of parties without D. The adversary can then corrupt the set of parties in U to find out D’s secret without corrupting D. Note that the simulator has access to all inputs from parties in \(\overline{X}\), but has no access to D’s input. Formally, the simulator \(\sigma _{\overline{X}}\) has to output shares for parties in U that add up to D’s input, without knowing the input, which is impossible.
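The gap between the static and the adaptive adversary can be traced with a toy \(\kappa \)-out-of-\(\kappa \) additive sharing (a sketch with our own names and an illustrative modulus; not the protocol's actual code):

```python
import secrets

P = 2**61 - 1  # prime modulus for additive sharing (illustrative parameter)

def share(secret, kappa):
    """kappa-out-of-kappa additive sharing: all kappa shares are needed."""
    shares = [secrets.randbelow(P) for _ in range(kappa - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

n, kappa, x_D = 10, 4, 42
others = list(range(1, n))                        # all parties except D (party 0)
U = secrets.SystemRandom().sample(others, kappa)  # D's random recipient set
shares = dict(zip(U, share(x_D, kappa)))          # one share per party in U

# A static adversary must fix its corrupted set before U is published, and
# guesses U only with probability 1/binom(n-1, kappa). An adaptive adversary
# simply waits until U is public and then corrupts exactly the parties in U:
assert reconstruct([shares[i] for i in U]) == x_D
```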
Lemma 6
For active corruption, CC-adaptive security is strictly stronger than static security, as long as there are at least three parties.
Proof
We consider the example from Canetti et al. [10] with three parties D, \(R_1\) and \(R_2\). D has as input two bits \(b_1,b_2 \in \{0,1\}\), and \(R_1\), \(R_2\) have no input. The ideal resource evaluates the function f that, on input \((b_1,b_2)\), outputs \(b_1\) to \(R_1\), \(b_2\) to \(R_2\), and \(\bot \) to D. The adversary structure contains \(\{D,R_1\}\).
The protocol \(\boldsymbol{\pi }\) proceeds as follows: at step 1, D sends \(b_1\) to \(R_1\). Then, at step 2, D sends \(b_2\) to \(R_2\). Finally, at step 3, each \(R_i\) outputs the bit it received from D and terminates, and D outputs \(\bot \) and terminates.
It was proven that \(\boldsymbol{\pi }\) achieves static security: for the set \(Z = \{D\}\), the simulator gets the values \(b_1'\) and \(b_2'\) from the adversary interface, and it sends \((b_1',b_2')\) to the ideal resource, which forwards each \(b_i'\) to \(R_i\). It is easy to see that this simulator perfectly simulates the protocol execution. The case where \(Z = \{D,R_1\}\) is similar.
For the set \(Z = \{R_1\}\), the simulator obtains the output \(b_1\) from the ideal resource, so it can simply forward this bit to the adversary interface. Again, it is easy to see that the simulation is successful.
Now let us argue why the above protocol does not satisfy CC-adaptive security. To show this, consider the singleton set \(X = \{R_2\}\), containing only party \(R_2\). We will show that the adversary can break correctness of the protocol by 1) learning the value \(b_1\) that is sent to \(R_1\), and 2) depending on the value received after step 1 from the so-far honest D, possibly corrupting D and modifying the value that is sent to \(R_2\) at step 2. More concretely, the adversary strategy is as follows: initially corrupt \(R_1\), and learn the value \(b_1\) from D. If \(b_1 = 1\), then corrupt D and choose the value \(b_2' = 0\) as the value that is sent to \(R_2\) at step 2. With this strategy, in the real-world protocol, whenever \(b_1 = 1\), \(R_2\) never outputs 1.
Consider the case where the input to D is \((b_1,b_2) = (1,1)\). As argued above, in the real world \(R_2\) outputs 0. However, since D is honest at step 1, the simulator \(\sigma _{\overline{X}}\) (even with knowledge of the input of D) has no power to change the input of D. Therefore, D inputs (1, 1) to the ideal resource, and hence the output of party \(R_2\) in the ideal world is 1.
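The real/ideal gap exploited above can be checked mechanically (a toy trace of this adversary strategy, with our own function names and the input bits written \(b_1, b_2\)):

```python
def real_world_R2_output(b1, b2):
    """Real execution of pi against the adaptive adversary: corrupt R1
    initially; if the bit D sends to R1 at step 1 is 1, corrupt D and
    replace the bit sent to R2 at step 2 by 0."""
    seen = b1                  # step 1: adversary (as R1) learns b1
    if seen == 1:
        return 0               # step 2: corrupted D sends 0 to R2
    return b2                  # D stayed honest; R2 receives b2

def ideal_world_R2_output(b1, b2):
    """Ideal evaluation of f: D was honest when providing input (b1, b2),
    and no simulator may change it afterwards, so R2 outputs b2."""
    return b2

assert real_world_R2_output(1, 1) == 0
assert ideal_world_R2_output(1, 1) == 1   # no simulator can close this gap
```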
Adaptive Security. In the standard notion of UC-adaptive security, the ideal world is described as a single specification that consists of a simulator with passive or active capabilities (i.e., it can read, or additionally change, the inputs and outputs of corrupted parties) attached to the adversary interface of a fixed ideal resource. Moreover, guarantees are given as long as the corrupted parties respect the adversary structure.
Definition 9
Protocol \(\boldsymbol{\pi }\) UC-adaptively constructs specification \(\mathcal {S}\) from \(\mathcal {R}\) with error \(\epsilon \) and adversary structure \(\mathcal {Z}\), if there is a simulator \(\sigma \), such that \(\boldsymbol{\pi }\mathcal {R} \subseteq (\sigma \mathcal {S})^{{\mathcal {E}_{\mathcal {Z}}}:{\epsilon }}\).
Lemma 7
UC-adaptive security implies CC-adaptive security.
Proof
Let \(\boldsymbol{\pi }\) be a protocol that constructs \(\mathcal {S}\) from specification \(\mathcal {R}\) with error \(\epsilon \) and adversary structure \(\mathcal {Z}\), with standard adaptive security. We prove that \(\boldsymbol{\pi }\) also satisfies CC-adaptive security. For each set \(X \subseteq \mathcal {P}\), we have that there exists a simulator \(\sigma _{\overline{X}}\) such that \(\boldsymbol{\pi }\mathcal {R} \subseteq (\sigma _{\overline{X}}\mathcal {S})^{{\mathcal {E}_{\mathcal {Z}}}:{\epsilon }}\). This is because one can consider the simulator \(\sigma _{\overline{X}}\) that ignores the inputs from parties in \(\overline{X}\) and simulates according to the UC-adaptive simulator \(\sigma \). Moreover, we have that \((\sigma _{\overline{X}}\mathcal {S})^{{\mathcal {E}_{\mathcal {Z}}}:{\epsilon }} \subseteq (\sigma _{\overline{X}}\mathcal {S})^{{\mathcal {E}_{X} \vee \mathcal {E}_{\mathcal {Z}}}:{\epsilon }}\), because \(\mathcal {E}_{\mathcal {Z}} = 1\) implies \(\mathcal {E}_{X} \vee \mathcal {E}_{\mathcal {Z}} = 1\). Therefore, \(\boldsymbol{\pi }\mathcal {R} \subseteq \bigcap _{X \subseteq \mathcal {P}} (\sigma _{\overline{X}}\mathcal {S})^{{\mathcal {E}_{X} \vee \mathcal {E}_{\mathcal {Z}}}:{\epsilon }}\). \(\square \)
Lemma 8
For passive corruption and any number of parties, CC-adaptive security does not imply UC-adaptive security.
Proof
Consider a secure function evaluation protocol where parties evaluate the function that outputs \(\bot \). The adversary structure contains all sets of up to \(t=2\) parties. The protocol \(\boldsymbol{\pi }\) proceeds as follows: A designated party D computes a commitment to its private input, using a (non-equivocable) perfectly hiding and computationally binding commitment scheme, and makes this commitment public. Then, all parties output \(\bot \).
The protocol does not achieve standard adaptive security. Consider the corruption strategy where D is corrupted after it has sent its commitment. The simulator first has to come up with a commitment without knowing D’s input, and then, upon corruption, learns D’s input and has to output randomness consistent with D’s input. Since the commitment is non-equivocable, this is not possible. That is, the simulation strategy runs into the commitment problem.
It is easy to see that \(\boldsymbol{\pi }\) satisfies CC-adaptive security. This is because for any set X not containing D, the simulator can read D’s input, so the simulation is straightforward. On the other hand, for any set X containing D, the simulation is only required until the point in time where D becomes corrupted (without including the answer to the corruption query, i.e., there is no need to output D’s private state).
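The commitment problem, and why CC-adaptive security sidesteps it, can be illustrated with a toy Pedersen commitment (perfectly hiding, computationally binding). Parameters and names are ours, with insecurely small numbers; the point is that equivocating an opening is exactly as hard as computing the discrete-logarithm trapdoor \(k\), which a simulator does not have:

```python
# Toy Pedersen commitment Com(m; r) = g^m * h^r mod p in a subgroup of
# order q (tiny, insecure parameters; for illustration only).
p, q, g = 23, 11, 2            # g has order 11 modulo 23
k = 6                          # trapdoor: h = g^k; the simulator does NOT know k
h = pow(g, k, p)

def commit(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

# The UC-adaptive simulator must publish a commitment for D before knowing
# D's input, so it commits to 0:
r = 3
c = commit(0, r)

# When D is later corrupted with true input m = 5, the simulator must open
# c as m. An opening r_equiv exists (perfect hiding), but computing it
# requires the trapdoor k, i.e. solving a discrete logarithm:
m = 5
r_equiv = (r - m * pow(k, -1, q)) % q
assert commit(m, r_equiv) == c     # equivocation works ONLY given k
```

Under CC-adaptive security the simulator never faces this step: for sets X containing D, the simulation stops at D's corruption, and for all other sets it knows D's input from the start.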
5 Some Ideal Resource Specifications
In this section we introduce some typical ideal specifications that will be used in later sections, such as the network model, broadcast and MPC. We consider the setting with open authenticated and asynchronous channels, where the adversary can choose to drop messages. As a consequence, the ideal building blocks for broadcast and MPC that we consider achieve so-called security with abort.
5.1 Communication Primitives
Network Model. We consider a multi-party communication network with point-to-point asynchronous authenticated channels among any pair of parties, in line with the \(\mathcal {F}_{\mathsf {auth}}\) functionality in [9]. Asynchronous means that the channels do not guarantee delivery of the messages, and the messages are not necessarily delivered in the order in which they are sent. Authenticity ensures that a recipient will only receive a message from a sender if the sender actually sent the message. In the case of adaptive corruptions, this authenticity requirement holds as long as both sender and recipient are honest. In particular, a dishonest sender is allowed to modify the messages that have been sent, as long as they have not been delivered to the recipient yet. We denote \(\mathcal {N}\) the complete network of pairwise authenticated channels (see Sect. C).
Broadcast with Abort. In our protocols, we assume that parties have access to an authenticated broadcast channel with abort [27], which guarantees that no two honest parties receive different messages, and does not guarantee delivery of messages. We denote by \(\mathcal {BC}\) the broadcast specification that allows any party to broadcast. Such a broadcast resource can be constructed with standard adaptive security and arbitrary number of corruptions in the communication network \(\mathcal {N}\) using the protocol by Goldwasser and Lindell [27].Footnote 7 See Sect. C for a detailed description.
5.2 MPC with Abort
We briefly describe an ideal resource \(\mathsf {MPC}\) capturing secure computation with abort (and no fairness). The resource has \(n+2\) interfaces: n party interfaces, an adversary interface A and a free interface W. Via the free interface, the resource keeps track of the set of corrupted parties. The resource allows each party i to input a value \(x_i\), and then, once all honest parties have provided their inputs, it evaluates the function \(y = f(x_1,\dots ,x_n)\) (corrupted parties can change their input as long as the output has not been evaluated). The adversary can then select which parties obtain the output and which do not.
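The behaviour just described can be summarized as a small state machine (a sketch with our own method names; the paper's formal resource has explicit interfaces and message formats):

```python
class MPCWithAbort:
    """Sketch of secure function evaluation with abort: inputs, one
    evaluation of f, adversarially scheduled output delivery."""

    def __init__(self, f, n):
        self.f, self.n = f, n
        self.inputs, self.corrupted, self.y = {}, set(), None

    def corrupt(self, i):                 # tracked via the free interface W
        self.corrupted.add(i)

    def give_input(self, i, x):
        # Corrupted parties may overwrite their input until f is evaluated.
        if self.y is None and (i not in self.inputs or i in self.corrupted):
            self.inputs[i] = x

    def evaluate(self):                   # once all honest parties gave input
        honest = set(range(1, self.n + 1)) - self.corrupted
        if self.y is None and honest <= self.inputs.keys():
            self.y = self.f(tuple(self.inputs.get(i) for i in range(1, self.n + 1)))

    def deliver(self, i):                 # adversary decides who learns y
        return self.y                     # withholding delivery models abort


mpc = MPCWithAbort(lambda xs: sum(x for x in xs if x is not None), 3)
mpc.give_input(1, 10); mpc.give_input(2, 20); mpc.give_input(3, 12)
mpc.evaluate()
assert mpc.deliver(1) == 42
```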
5.3 Multi-party Zero-Knowledge
Let R be a binary relation, consisting of pairs (x, w), where x is the statement and w is a witness to the statement. A zero-knowledge proof allows a prover to prove to a verifier knowledge of w such that \(R(x,w) = 1\). In our protocols we will consider the multi-party version, which allows a designated party i to prove a statement towards all parties according to relation R. Such a resource \(\mathsf {ZK}_{i,R}\) can be seen as a special instance of the MPC-with-abort resource \(\mathsf {MPC}_{f}\), where the function f simply takes as input (x, w) from the designated party i, and no input from any other party, and outputs x in the case \(R(x,w) = 1\), and \(\bot \) otherwise. We denote \(\mathsf {ZK}_{R}\) the parallel composition of \(\mathsf {ZK}_{i,R}\), for \(i \in [n]\).
Such a resource can be constructed assuming \(\mathcal {BC}\) and a CRS even with standard adaptive security for arbitrary many corruptions (see e.g. [13]). Alternatively, the resource can be constructed less efficiently solely from \(\mathcal {BC}\) for the case where \(t<n/2\) (see e.g. [48] with [33]).
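As a sanity check, the function f realized by \(\mathsf {ZK}_{i,R}\) is simply the following (a sketch; \(\bot \) rendered as None, relation and values illustrative):

```python
def f_zk(R):
    """Return the function f computed by the MPC-with-abort resource
    realizing ZK for relation R: the prover i inputs (x, w); the output
    is x if R(x, w) holds, and bottom (None) otherwise."""
    def f(x, w):
        return x if R(x, w) == 1 else None
    return f

# Toy relation: w is a square root of x modulo 11.
R_sqrt = lambda x, w: 1 if (w * w) % 11 == x else 0
f = f_zk(R_sqrt)
assert f(5, 4) == 5      # 4^2 = 16 = 5 (mod 11): statement is published
assert f(5, 3) is None   # 3^2 = 9 != 5: proof fails
```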
5.4 Oblivious Transfer
An oblivious transfer involves a designated sender s with input \((x_1,\dots ,x_\ell )\) and a designated receiver r with input \(i \in \{1,\dots ,\ell \}\). The output for the receiver is \(x_i\), and the sender has no output. For our purposes, we can see the resource as a special instance of the MPC-with-abort resource \(\mathsf {MPC}_{f}\), where the function f simply takes as input \((x_1,\dots ,x_\ell )\) from the designated sender s, an input i from the designated receiver r, and no other inputs, and outputs \(x_i\) to the receiver, and nothing to any other party.
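Analogously, the function f for the OT instance of \(\mathsf {MPC}_{f}\) is just (a sketch; 1-indexed choice as in the text):

```python
def f_ot(x, i):
    """Function f for oblivious transfer: the sender inputs the tuple
    x = (x_1, ..., x_l), the receiver inputs i in {1, ..., l}; only the
    receiver obtains the output x_i."""
    return x[i - 1]

assert f_ot(("a", "b", "c", "d"), 3) == "c"
```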
6 Application to the CDN Protocol
In this section we show that the protocol by Cramer, Damgard and Nielsen [17] based on threshold (additively) homomorphic encryption essentially achieves MPC with abort and with CC-adaptive security, in the communication network \(\mathcal {N}\) of authenticated asynchronous channels. With similar techniques, one could achieve MPC with guaranteed output delivery and with CC-adaptive security in a synchronous communication model (assuming a broadcast specification).
The CDN protocol is perhaps the iconic example that suffers from the commitment problem, and the goal of this example is to conceptually distil out at which steps the protocol is subject to relevant adaptive attacks, and conclude that the CDN-approach of broadcasting encrypted inputs in the first step and computing on ciphertexts, actually achieves strong adaptive security guarantees, even when the encryption commits to the plaintext. We showcase the applicability of our definition with two versions of CDN, for passive corruption below, and active corruption in the full version [28].
Finally, note that the protocol is typically described assuming a synchronous network, where the protocol advances round by round, and messages sent at round r are assumed to arrive by round \(r+1\). However, our assumed network \(\mathcal {N}\) is asynchronous. To address this, we follow the standard approach of executing a synchronous protocol over an asynchronous network (see [33]). The idea is simply that each party waits for all round-r messages before proceeding to round \(r+1\). The consequence is that the CDN protocol, which achieves full security over a synchronous network, achieves security with abort over an asynchronous network.
6.1 Passive Corruption Case
The protocol relies on an adaptively secure threshold homomorphic encryption scheme (see for example the scheme by Lysyanskaya and Peikert [36], which is based on the Paillier cryptosystem [44]). In such a scheme, given the public key, any party can encrypt a message. However, decrypting the ciphertext requires the collaboration of at least \(t+1\) parties, where t is a parameter of the scheme. The scheme is additively homomorphic in the sense that one can perform additions on ciphertexts without knowing the underlying plaintexts (see Sect. B).
For a plaintext a, let us denote \(\overline{a}\) an encryption of a. Given encryptions \(\overline{a}\), \(\overline{b}\), one can compute (using homomorphism) an encryption of \(a+b\), which we denote \(\overline{a}\boxplus \overline{b}\). Similarly, from a constant plaintext \(\alpha \) and an encryption \(\overline{a}\) one can compute an encryption of \(\alpha a\), which we denote \(\alpha \boxdot \overline{a}\). For concreteness, let us assume that the message space of the encryption scheme is the ring \(R = \mathbf {Z}_N\), for some RSA modulus N.
Let us first describe a version of the protocol for the passive case (see the section below for a complete description in the active case). The protocol \(\varPi _{\mathtt {pcdn}}\) starts by having each party publish encryptions of its input values. Then, parties compute addition and multiplication gates to obtain a common ciphertext, which they jointly decrypt using threshold decryption. Any linear operation (addition or multiplication by a constant) can be performed non-interactively, due to the homomorphism property of the threshold encryption scheme. Given encryptions \(\overline{a}\), \(\overline{b}\) of input values to a multiplication gate, parties can compute an encryption of \(c = ab\) as follows:
1. Each party i chooses a random \(d_i \in \mathbf {Z}_N\) and distributes encryptions \(\overline{d_i}\) and \(\overline{d_ib}\) to all parties.
2. Parties compute \(\overline{a} \boxplus (\boxplus _{i}\overline{d_i})\) and decrypt it using threshold decryption.
3. Parties set \(\overline{c} = (a + \sum _{i}d_i) \boxdot \overline{b} \boxminus (\boxplus _{i}\overline{d_ib})\).
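The three steps can be traced with textbook Paillier as the additively homomorphic scheme (a single-party toy run: insecure parameters, plain decryption in place of threshold decryption, and our own function names; not the CDN implementation):

```python
import secrets
from math import gcd

# Textbook Paillier with toy primes (illustration only; lam = phi(N)).
p_, q_ = 1117, 1103
N, N2 = p_ * q_, (p_ * q_) ** 2
lam = (p_ - 1) * (q_ - 1)
g = N + 1

def enc(m):
    r = secrets.randbelow(N - 2) + 2
    while gcd(r, N) != 1:
        r = secrets.randbelow(N - 2) + 2
    return (pow(g, m, N2) * pow(r, N, N2)) % N2

def dec(c):
    return (pow(c, lam, N2) - 1) // N * pow(lam, -1, N) % N

def badd(c1, c2):          # boxplus:  Enc(a), Enc(b)  -> Enc(a + b)
    return (c1 * c2) % N2

def bmul(alpha, c):        # boxdot:   alpha, Enc(a)   -> Enc(alpha * a)
    return pow(c, alpha, N2)

def bsub(c1, c2):          # boxminus: Enc(a), Enc(b)  -> Enc(a - b)
    return badd(c1, bmul(N - 1, c2))

# Multiplication gate on Enc(a), Enc(b), following steps 1-3 above:
a, b = 20, 31
ca, cb = enc(a), enc(b)
d = secrets.randbelow(N)                   # step 1: mask d; publish Enc(d), Enc(d*b)
cd, cdb = enc(d), bmul(d, cb)
masked = dec(badd(ca, cd))                 # step 2: decrypt Enc(a + d)
cc = bsub(bmul(masked, cb), cdb)           # step 3: Enc((a+d)*b - d*b) = Enc(a*b)
assert dec(cc) == (a * b) % N
```

The point of the masking is that only \(a + \sum _i d_i\) is ever decrypted, which is uniformly distributed and hence reveals nothing about a.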
The main problem that arises when dealing with standard adaptive security, even in the passive case, is the commitment problem: the simulator has to first output encryptions on behalf of the so-far honest parties during the input stage, and then, if one of these honest parties is later corrupted, the simulator learns the real input of this party and must reveal its internal state to the adversary. However, the simulator is now stuck, since the real input is not consistent with the encryption output earlier. To overcome this issue, protocols usually make use of non-committing encryption schemes. An exception to this is the protocol by Damgard and Nielsen [19], a variant of the CDN protocol that achieves even standard adaptive security, and overcomes the commitment problem by assuming a CRS which is programmed in a very clever way.
We show that this issue does not arise when aiming for CC-adaptive security. Technically, for each subset of parties \(X \subseteq \mathcal {P}\), the simulator only needs to lie about the inputs of parties in X, since it knows the inputs of the other parties. Moreover, the simulation is only required until the point where a party in X gets corrupted, and so we do not need to justify the internal state of this party. We propose CC-adaptive security as a natural security goal to aim for, providing strong security guarantees against adaptive corruption, and at the same time overcoming the commitment problem. Conceptually, this example shows that the CDN-approach achieves such strong adaptive security guarantees, without the need to use non-committing encryption tools or erasures. Note that in the passive case, the protocol assumes solely a setup for threshold homomorphic encryption, whereas the protocol in [19] requires in addition a CRS.
Key Generation. As usual, we model the setup for the threshold encryption scheme with an ideal resource \(\mathsf {KeyGen}\) that generates its keys. The resource \(\mathsf {KeyGen}\) simply generates the public key \(\mathtt {ek}\) and private key \(\mathtt {dk}= (\mathtt {dk}_1,\dots ,\mathtt {dk}_n)\) for the threshold encryption scheme, and outputs to each party i the public key \(\mathtt {ek}\) and its private key share \(\mathtt {dk}_i\), and to the adversary the public key \(\mathtt {ek}\).
Theorem 1
Protocol \(\varPi _{\mathtt {pcdn}}\) CC-adaptively constructs \(\mathsf {MPC}_{f}\) from \([\mathcal {N},\mathsf {KeyGen}]\), with error \(\epsilon \) and up to \(t<n/2\) passive corruptions, where \(\epsilon \) reduces distinguishers to the corresponding advantage against the security of the threshold encryption scheme and is described in the proof.
Proof
Let \(\mathcal {R} = [\mathcal {N},\mathsf {KeyGen}]\) and \(\mathcal {S} = \{\mathsf {MPC}_{f}\}\). Fix a set \(X \subseteq \mathcal {P}\). We need to show that there is a simulator \(\sigma _{\overline{X}}\) such that \(\varPi _{\mathtt {pcdn}}\mathcal {R} \subseteq (\sigma _{\overline{X}} \mathcal {S})^{{\mathcal {E}_X \vee \mathcal {E}_{t}}:{\epsilon }}\). At any point in time, if the event \(\mathcal {E}_X \vee \mathcal {E}_{t} = 1\), the simulator halts. The simulator works as follows.
Key Generation. The simulator \(\sigma _{\overline{X}}\) simulates this step by invoking the simulator for the threshold encryption scheme. Let \(\mathtt {ek}\) denote the public key, and \(\mathtt {dk}_i\) denote the decryption key share for party i. It then outputs \(\mathtt {ek}\) at the adversary interface.
Network Messages. The simulator simulates each step of the protocol, given that all messages before that step have been delivered by the adversary, i.e., the simulator receives all the corresponding \(\mathtt {deliver}\) messages at the adversary interface. If not, it simply keeps waiting. The messages in the steps below are output to the adversary at the corresponding steps, upon receiving the corresponding \(\mathtt {leak}\) messages at the adversary interface.
Input Stage. For each party i that is honest at this step and gave input to the ideal resource, \(\sigma _{\overline{X}}\) outputs an encryption \(c_i\) on behalf of this party at the adversary interface. If \(i \in X\), then the simulator does not know the party's input, and computes the ciphertext \(c_i = \mathtt {Enc}_{\mathtt {ek}}(0)\) as an encryption of 0. Otherwise, \(i \notin X\) and the simulator knows the input \(x_i\), so it computes the ciphertext \(c_i = \mathtt {Enc}_{\mathtt {ek}}(x_i)\).
For each party i that is corrupted at this point, the simulator knows its input \(x_i\), and forwards this input to the ideal resource.
Addition Gates. This step can be simulated in a straightforward manner, performing local homomorphic operations on behalf of each honest party.
Multiplication Gates. Let \(\overline{a}\) and \(\overline{b}\) denote the ciphertexts input to the multiplication gate. The simulator \(\sigma _{\overline{X}}\) can execute the honest protocol. That is, it generates a random value \(d_i\) on behalf of each honest party i, and locally computes \(\overline{d_i} = \mathtt {Enc}_{\mathtt {ek}}(d_i)\), and \(\overline{d_ib} = d_i \boxdot \overline{b}\). It then outputs the pair of ciphertexts to the adversary interface. For each corrupted party at this step, the simulator obtains the pair of ciphertexts \(\overline{d_i}\) and \(\overline{d_ib}\).
Upon receiving the pairs of ciphertexts from all parties, the simulator computes the ciphertext \(\overline{a} \boxplus (\boxplus _{i}\overline{d_i})\), and simulates an honest threshold decryption protocol for this ciphertext. That is, the simulator outputs a decryption share of the ciphertext at the adversary interface. Upon obtaining \(t+1\) decryption shares, it reconstructs the plaintext. Let \(a + \sum _{i}d_i\) be the reconstructed plaintext. Then, it computes the ciphertext \(\overline{c} = (a + \sum _{i}d_i) \boxdot \overline{b} \boxminus (\boxplus _{i}\overline{d_ib})\).
Output. The simulator inputs to the ideal resource \((\mathtt {deliver},j)\), for each party j that obtains an output in the protocol. It also obtains the output with the instruction \(\mathtt {leakOutput}\). Then, upon obtaining an output y from the ideal resource, use the simulator of the (adaptively secure) threshold decryption protocol to compute decryption shares on behalf of the honest parties (see [36], where one can simply choose as the inconsistent party one of the parties in X).
Corruptions. On input a command \(\mathtt {leak}\) at interface A.i, if i is corrupted, the simulator outputs the internal state of party i. Note that this is easily done, since for parties not in X the simulator has access to their inputs. If any party in X gets corrupted, the corresponding MBO is triggered, \(\mathcal {E}_X = 1\), and the simulator halts.
We now prove that \(\varPi _{\mathtt {pcdn}}\mathcal {R} \subseteq (\sigma _{\overline{X}} \mathcal {S})^{{\mathcal {E}_X \vee \mathcal {E}_{t}}:{\epsilon }}\), for the simulator \(\sigma _{\overline{X}}\) described above. For that, we first describe a sequence of hybrids.
Hybrid \(H_1\). In this system, we assume that the simulator has access to all inputs from the parties. It then executes the real-world protocol, except that the key generation and the decryption are executed using the respective simulators for the threshold encryption scheme. By security of the threshold encryption scheme, \(\mathsf {until}_{\mathcal {E}_X \vee \mathcal {E}_{t}}\left( \varPi _{\mathtt {pcdn}}\mathcal {R} \right) \) is \(\epsilon _1\)-close to \(\mathsf {until}_{\mathcal {E}_X \vee \mathcal {E}_{t}} \left( H_1\right) \), where \(\epsilon _1\) is the advantage of the distinguisher (modified by the reduction) against the security of the threshold encryption scheme. Moreover, by definition we have that \(\mathsf {until}_{\mathcal {E}_X \vee \mathcal {E}_{t}} \left( H_1\right) \in H_1^{\mathcal {E}_X \vee \mathcal {E}_{t}]}\). Therefore, \(\varPi _{\mathtt {pcdn}}\mathcal {R} \subseteq H_1^{{\mathcal {E}_X \vee \mathcal {E}_{t}}:{\epsilon _1}}\).
Hybrid \(H_2\). The simulator in addition sets the input encryptions of the honest parties in X at the Input Stage to encryptions of 0. By semantic security of the threshold encryption scheme, and following the same reasoning as above, we have that \(H_1 \in H_2^{{\mathcal {E}_X \vee \mathcal {E}_{t}}:{\epsilon _2}}\), where \(\epsilon _2\) is the advantage of the distinguisher (modified by the reduction) against the semantic security of the encryption scheme. Moreover, the hybrid specification \(\{H_2\}\) corresponds exactly to the ideal specification \((\sigma _{\overline{X}} \mathcal {S})\).
Combining the above steps, we have that \(\varPi _{\mathtt {pcdn}}\mathcal {R} \subseteq (\sigma _{\overline{X}} \mathcal {S})^{{\mathcal {E}_X \vee \mathcal {E}_{t}}:{\epsilon _X}}\), where \(\epsilon _X = \epsilon _1 + \epsilon _2\). Therefore, choosing the function \(\epsilon \) where \(\epsilon (D) = \sup _{X \subseteq \mathcal {P}} \{\epsilon _X(D)\}\), the statement follows. \(\square \)
7 Application to the CLOS Protocol
In this section, we show another application of our new definition, with the iconic CLOS protocol [13], which is based on the classical GMW protocol [25].
We show that the CLOS protocol can be used to achieve a CC-adaptively secure protocol for an arbitrary number of active corruptions, assuming a CRS resource and the existence of enhanced trapdoor permutations. Note that since CC-adaptivity implies static security, and some form of setup is required even for static security, it is impossible to achieve CC-adaptivity in the plain model (where only the network is assumed) in the dishonest majority setting. However, in contrast to the UC-adaptive version of the CLOS protocol, the construction does not require the use of augmented non-committing encryption. In fact, to the best of our knowledge, all UC-adaptively secure protocols in the dishonest majority setting require some form of non-committing encryption.
Theorem 2
Assume that enhanced trapdoor permutations exist. Then, there exists a non-trivial protocol (see Note 8) that CC-adaptively constructs \(\mathsf {MPC}_{f}\) from \([\mathcal {N},\mathsf {CRS}]\), for appropriate error \(\epsilon \) (as defined by the steps below) and for any number of active corruptions.
We only sketch the proof of the above theorem, following the steps of the CLOS protocol. First, we show a construction of a passively secure protocol assuming the asynchronous communication network \(\mathcal {N}\). This construction is obtained by first constructing an ideal oblivious transfer (OT) resource, and then designing a secure computation protocol assuming an ideal OT. The following lemma shows that the protocol \(\varPi _{\mathtt {ot}}\) of [25] achieves CC-adaptive security. We describe the protocol and the proof of the following lemma in the full version [28].
Lemma 9
Assume that enhanced trapdoor permutations exist. Then, \(\varPi _{\mathtt {ot}}\) CC-adaptively constructs \(\mathsf {OT}\) from \(\mathcal {N}\), for error \(\epsilon \) (described in the proof), and for any number of passive corruptions.
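To make the role of the trapdoor permutation concrete, the following sketch illustrates the classical EGL-style OT idea behind \(\varPi _{\mathtt {ot}}\). It is emphatically not the protocol of [25] itself nor its CC-adaptive simulation: textbook RSA with toy primes stands in for the (enhanced) trapdoor permutation, the least-significant bit stands in for a hardcore predicate, and all names are illustrative.

```python
# Illustrative (insecure) 1-out-of-2 OT of bits from a trapdoor permutation:
# the receiver knows the preimage of only one of the two values it sends, so
# it can unmask only the chosen bit; the sender cannot tell which one.
import random

# Toy RSA as the trapdoor permutation f(x) = x^e mod N, f^{-1}(y) = y^d mod N.
p, q = 1000003, 1000033
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # trapdoor, known to the sender only

def ot(sender_bits, choice):
    """Receiver learns sender_bits[choice]; the other bit stays masked."""
    # Receiver: y[choice] has a known preimage r; the other value is sampled
    # directly, so its preimage is unknown to the receiver.
    r = random.randrange(1, N)
    y = [None, None]
    y[choice] = pow(r, e, N)
    y[1 - choice] = random.randrange(1, N)
    # Sender: masks each bit with the "hardcore bit" of the preimage.
    z = [sender_bits[i] ^ (pow(y[i], d, N) & 1) for i in (0, 1)]
    # Receiver: unmasks only the chosen position, using r.
    return z[choice] ^ (r & 1)

assert ot((0, 1), 1) == 1
```

Since \(f\) is a permutation on all of \(\mathbb {Z}_N\), the receiver always recovers the chosen bit exactly; the security of the other bit rests on the hardcore predicate, which this toy lsb does not actually provide.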
Given that UC-adaptive security implies CC-adaptive security by Lemma 7, and there is a UC-adaptively secure MPC protocol assuming an ideal OT resource [13], we have the following lemma:
Lemma 10
There exists a non-trivial protocol that CC-adaptively constructs \(\mathsf {MPC}_f\) from \([\mathcal {N},\mathsf {OT}]\), with no error, and for any number of passive corruptions.
As a corollary of the above two lemmas and the composition guarantees from Lemma 3, we have:
Corollary 1
Assume that enhanced trapdoor permutations exist. There exists a non-trivial protocol that CC-adaptively constructs \(\mathsf {MPC}_f\) from \(\mathcal {N}\), for error \(\epsilon \) (defined by the composition Lemma 3 and error from Lemma 9), and for any number of passive corruptions.
Second, we use the CLOS compiler, which transforms any passively secure protocol operating over the network \(\mathcal {N}\) into an actively secure protocol assuming in addition an ideal commit-and-prove \(\mathsf {CP}\) resource (see the full version [28] for a description). One can see that the compiler preserves the adaptivity type, in the sense that if the passive protocol is CC-adaptively secure, then so is the compiled protocol.
Corollary 2
Let \(\varPi \) be a multi-party protocol and let \(\varPi '\) be the protocol obtained by applying the CLOS compiler. Then, the following holds: if \(\varPi \) CC-adaptively constructs \(\mathsf {MPC}_f\) from \(\mathcal {N}\) for error \(\epsilon \) and any number of passive corruptions, then \(\varPi '\) CC-adaptively constructs \(\mathsf {MPC}_f\) from \([\mathcal {N},\mathsf {CP}]\) for error \(\epsilon '\) defined in the proof and any number of active corruptions.
Proof (Sketch)
The proof in CLOS shows that a malicious adversary cannot cheat in the compiled protocol because the resource \(\mathsf {CP}\) checks the validity of each input received. In particular, they show that for any adversary interacting in the compiled protocol \(\varPi '\), there is a passive adversary interacting in protocol \(\varPi \) that simulates the same view.
More precisely, the proof shows that the specifications \(\varPi \tau \mathcal {N}\) and \(\varPi ' [\mathcal {N},\mathsf {CP}]\) are the same, where \(\tau \) is the translation converter attached at the adversary interface, which translates from the adversary in \(\varPi '\) to the adversary in \(\varPi \). (Typically, such a translation is called a simulator and is placed between a real and an ideal resource; here, the translation is between two real resources.)
Fix a set \(X \subseteq \mathcal {P}\). Since \(\varPi \) CC-adaptively constructs \(\mathsf {MPC}_f\) from \(\mathcal {N}\) for error \(\epsilon \) and any number of passive corruptions, we have that \(\varPi \mathcal {N}\subseteq (\sigma _{\overline{X}} \mathsf {MPC}_f)^{{\mathcal {E}_X}:{\epsilon }}\).
Using Lemma 2, this implies the desired result: \(\varPi ' [\mathcal {N},\mathsf {CP}] = \varPi \tau \mathcal {N} \subseteq (\tau \sigma _{\overline{X}} \mathsf {MPC}_f)^{{\mathcal {E}_X}:{\epsilon '}}\), where \(\epsilon ' = \epsilon _\tau \). \(\square \)
It was also shown in [13] that \(\mathsf {CP}\) can be constructed with UC-adaptive security assuming a zero-knowledge resource \(\mathsf {ZK}\) and broadcast \(\mathsf {BC}\). Given that \(\mathsf {ZK}\) can be constructed assuming a resource \(\mathsf {CRS}\) and broadcast \(\mathsf {BC}\), and \(\mathsf {BC}\) can be constructed from \(\mathcal {N}\), the authors conclude that \(\mathsf {CP}\) can be constructed from \(\mathsf {CRS}\) and \(\mathcal {N}\). Therefore, since UC-adaptive security implies CC-adaptive security, Lemma 7 shows:
Corollary 3
There exists a non-trivial protocol that CC-adaptively constructs \(\mathsf {CP}\) from \([\mathcal {N},\mathsf {CRS}]\), for error \(\epsilon \) (as in [13]), and for any number of active corruptions.
From Corollaries 1, 2 and 3, and the composition Lemma 3 we achieve the theorem statement.
Notes
- 1. This is reminiscent of the environment access to the global setup in UC [11].
- 2. One could alternatively model the input \(\mathsf {corrupt}\) as given at the adversary interface A, with an additional mechanism to ensure that the real- and ideal-world corruptions are the same; for example, making the set of currently corrupted parties available via the free interface W.
- 3. If \(Z \in \mathcal {Z}\) and \(Z' \subseteq Z\), then \(Z' \in \mathcal {Z}\).
- 4. However, note that for passive corruption and a small number of parties, static security intuitively provides sufficiently strong guarantees, as the static adversary can always guess upfront which set of parties will be corrupted.
- 5. Our basic approach of stating a guarantee per set X allows one to consider many different definitions, depending on which information is accessible to the simulator. We choose to leak solely the inputs of parties in \(\overline{X}\), since this seems to be the minimal information necessary to overcome the commitment problem.
- 6. Allowing the simulator to modify the inputs of honest parties in \(\overline{X}\) results in an unnecessarily weak notion.
- 7. Note that in broadcast with abort, parties are allowed to output \(\bot \) even when the sender is honest.
- 8. The ideal specification does not require any of the simulators to deliver the messages to the parties. This implies that a protocol that “hangs” (i.e., never sends any messages and never generates output) securely realizes any ideal resource, which is uninteresting. Following [13], we therefore let a non-trivial protocol be one for which all parties generate output if the adversary delivers all messages and all parties are honest.
References
Backes, M., Dürmuth, M., Hofheinz, D., Küsters, R.: Conditional reactive simulatability. In: Gollmann, D., Meier, J., Sabelfeld, A. (eds.) ESORICS 2006. LNCS, vol. 4189, pp. 424–443. Springer, Heidelberg (2006). https://doi.org/10.1007/11863908_26
Badertscher, C., Maurer, U., Tackmann, B.: On composable security for digital signatures. In: Abdalla, M., Dahab, R. (eds.) PKC 2018. LNCS, vol. 10769, pp. 494–523. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-76578-5_17
Beaver, D.: Secure multiparty protocols and zero-knowledge proof systems tolerating a faulty minority. J. Cryptol. 4(2), 75–122 (1991)
Beaver, D., Haber, S.: Cryptographic protocols provably secure against dynamic adversaries. In: Rueppel, R.A. (ed.) EUROCRYPT 1992. LNCS, vol. 658, pp. 307–323. Springer, Heidelberg (1993). https://doi.org/10.1007/3-540-47555-9_26
Ben-Or, M., Goldwasser, S., Wigderson, A.: Completeness theorems for non-cryptographic fault-tolerant distributed computation (extended abstract). In: 20th ACM STOC, pp. 1–10. ACM Press (May 1988)
Benhamouda, F., Lin, H., Polychroniadou, A., Venkitasubramaniam, M.: Two-round adaptively secure multiparty computation from standard assumptions. In: Beimel, A., Dziembowski, S. (eds.) TCC 2018. LNCS, vol. 11239, pp. 175–205. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-03807-6_7
Broadnax, B., Döttling, N., Hartung, G., Müller-Quade, J., Nagel, M.: Concurrently composable security with shielded super-polynomial simulators. In: Coron, J.-S., Nielsen, J.B. (eds.) EUROCRYPT 2017. LNCS, vol. 10210, pp. 351–381. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56620-7_13
Canetti, R.: Security and composition of multiparty cryptographic protocols. J. Cryptol. 13(1), 143–202 (2000)
Canetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: 42nd FOCS, pp. 136–145. IEEE Computer Society Press (October 2001)
Canetti, R., Damgård, I., Dziembowski, S., Ishai, Y., Malkin, T.: On adaptive vs. non-adaptive security of multiparty protocols. In: Pfitzmann, B. (ed.) EUROCRYPT 2001. LNCS, vol. 2045, pp. 262–279. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44987-6_17
Canetti, R., Dodis, Y., Pass, R., Walfish, S.: Universally composable security with global setup. In: Vadhan, S.P. (ed.) TCC 2007. LNCS, vol. 4392, pp. 61–85. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-70936-7_4
Canetti, R., Feige, U., Goldreich, O., Naor, M.: Adaptively secure multi-party computation. In: 28th ACM STOC, pp. 639–648. ACM Press (May 1996)
Canetti, R., Lindell, Y., Ostrovsky, R., Sahai, A.: Universally composable two-party and multi-party secure computation. In: 34th ACM STOC, pp. 494–503. ACM Press (May 2002)
Canetti, R., Poburinnaya, O., Venkitasubramaniam, M.: Equivocating Yao: constant-round adaptively secure multiparty computation in the plain model. In: Hatami, H., McKenzie, P., King, V. (eds.) 49th ACM STOC, pp. 497–509. ACM Press (June 2017)
Chaum, D., Crépeau, C., Damgård, I.: Multiparty unconditionally secure protocols (extended abstract). In: 20th ACM STOC, pp. 11–19. ACM Press (May 1988)
Cohen, R., Shelat, A., Wichs, D.: Adaptively secure MPC with sublinear communication complexity. In: Boldyreva, A., Micciancio, D. (eds.) CRYPTO 2019. LNCS, vol. 11693, pp. 30–60. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26951-7_2
Cramer, R., Damgård, I., Nielsen, J.B.: Multiparty computation from threshold homomorphic encryption. In: Pfitzmann, B. (ed.) EUROCRYPT 2001. LNCS, vol. 2045, pp. 280–300. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44987-6_18
Dachman-Soled, D., Katz, J., Rao, V.: Adaptively secure, universally composable, multiparty computation in constant rounds. In: Dodis, Y., Nielsen, J.B. (eds.) TCC 2015. LNCS, vol. 9015, pp. 586–613. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46497-7_23
Damgård, I., Nielsen, J.B.: Universally composable efficient multiparty computation from threshold homomorphic encryption. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 247–264. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45146-4_15
Datta, A., Küsters, R., Mitchell, J.C., Ramanathan, A.: On the relationships between notions of simulation-based security. In: Kilian, J. (ed.) TCC 2005. LNCS, vol. 3378, pp. 476–494. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-30576-7_26
Dodis, Y., Micali, S.: Parallel reducibility for information-theoretically secure computation. In: Bellare, M. (ed.) CRYPTO 2000. LNCS, vol. 1880, pp. 74–92. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-44598-6_5
Garay, J.A., Wichs, D., Zhou, H.-S.: Somewhat non-committing encryption and efficient adaptively secure oblivious transfer. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 505–523. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03356-8_30
Garg, S., Polychroniadou, A.: Two-round adaptively secure MPC from indistinguishability obfuscation. In: Dodis, Y., Nielsen, J.B. (eds.) TCC 2015. LNCS, vol. 9015, pp. 614–637. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46497-7_24
Garg, S., Sahai, A.: Adaptively secure multi-party computation with dishonest majority. In: Safavi-Naini, R., Canetti, R. (eds.) CRYPTO 2012. LNCS, vol. 7417, pp. 105–123. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32009-5_8
Goldreich, O., Micali, S., Wigderson, A.: How to play any mental game or a completeness theorem for protocols with honest majority. In: Aho, A. (ed.) 19th ACM STOC, pp. 218–229. ACM Press (May 1987)
Goldwasser, S., Levin, L.: Fair computation of general functions in presence of immoral majority. In: Menezes, A.J., Vanstone, S.A. (eds.) CRYPTO 1990. LNCS, vol. 537, pp. 77–93. Springer, Heidelberg (1991). https://doi.org/10.1007/3-540-38424-3_6
Goldwasser, S., Lindell, Y.: Secure computation without agreement. In: Malkhi, D. (ed.) DISC 2002. LNCS, vol. 2508, pp. 17–32. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-36108-1_2
Hirt, M., Liu-Zhang, C.-D., Maurer, U.: Adaptive security of multi-party protocols, revisited. Cryptology ePrint Archive, Report 2021/1175 (2021). https://ia.cr/2021/1175
Hirt, M., Zikas, V.: Adaptively secure broadcast. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 466–485. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-13190-5_24
Hofheinz, D., Unruh, D., Müller-Quade, J.: Polynomial runtime and composability. J. Cryptol. 26(3), 375–441 (2013)
Jost, D., Maurer, U.: Overcoming impossibility results in composable security using interval-wise guarantees. In: Micciancio, D., Ristenpart, T. (eds.) CRYPTO 2020. LNCS, vol. 12170, pp. 33–62. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56784-2_2
Katz, J., Maurer, U., Tackmann, B., Zikas, V.: Universally composable synchronous computation. In: Sahai, A. (ed.) TCC 2013. LNCS, vol. 7785, pp. 477–498. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36594-2_27
Kushilevitz, E., Lindell, Y., Rabin, T.: Information-theoretically secure protocols and security under composition. In: Kleinberg, J.M. (ed.) 38th ACM STOC, pp. 109–118. ACM Press (May 2006)
Küsters, R., Tuengerthal, M., Rausch, D.: The IITM model: a simple and expressive model for universal composability. J. Cryptol. 33(4), 1461–1584 (2020). https://doi.org/10.1007/s00145-020-09352-1
Liu-Zhang, C.-D., Maurer, U.: Synchronous constructive cryptography. In: Pass, R., Pietrzak, K. (eds.) TCC 2020. LNCS, vol. 12551, pp. 439–472. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-64378-2_16
Lysyanskaya, A., Peikert, C.: Adaptive security in the threshold setting: from cryptosystems to signature schemes. In: Boyd, C. (ed.) ASIACRYPT 2001. LNCS, vol. 2248, pp. 331–350. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-45682-1_20
Maurer, U.: Indistinguishability of random systems. In: Knudsen, L.R. (ed.) EUROCRYPT 2002. LNCS, vol. 2332, pp. 110–132. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-46035-7_8
Maurer, U.: Constructive cryptography – a new paradigm for security definitions and proofs. In: Mödersheim, S., Palamidessi, C. (eds.) TOSCA 2011. LNCS, vol. 6993, pp. 33–56. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-27375-9_3
Maurer, U., Renner, R.: Abstract cryptography. In: Innovations in Computer Science. Citeseer (2011)
Maurer, U., Renner, R.: From indifferentiability to constructive cryptography (and back). In: Hirt, M., Smith, A. (eds.) TCC 2016. LNCS, vol. 9985, pp. 3–24. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-53641-4_1
Maurer, U., Pietrzak, K., Renner, R.: Indistinguishability amplification. In: Menezes, A. (ed.) CRYPTO 2007. LNCS, vol. 4622, pp. 130–149. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-74143-5_8
Micali, S., Rogaway, P.: Secure computation. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 392–404. Springer, Heidelberg (1992). https://doi.org/10.1007/3-540-46766-1_32
Micciancio, D., Tessaro, S.: An equational approach to secure multi-party computation. In: Kleinberg, R.D. (ed.) ITCS 2013, pp. 355–372. ACM (January 2013)
Paillier, P.: Public-key cryptosystems based on composite degree residuosity classes. In: Stern, J. (ed.) EUROCRYPT 1999. LNCS, vol. 1592, pp. 223–238. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48910-X_16
Pass, R.: Simulation in quasi-polynomial time, and its application to protocol composition. In: Biham, E. (ed.) EUROCRYPT 2003. LNCS, vol. 2656, pp. 160–176. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-39200-9_10
Pfitzmann, B., Waidner, M.: Composition and integrity preservation of secure reactive systems. IBM Thomas J. Watson Research Division (2000)
Prabhakaran, M., Sahai, A.: New notions of security: achieving universal composability without trusted setup. In: Babai, L. (ed.) 36th ACM STOC, pp. 242–251. ACM Press (June 2004)
Rabin, T., Ben-Or, M.: Verifiable secret sharing and multiparty protocols with honest majority (extended abstract). In: 21st ACM STOC, pp. 73–85. ACM Press (May 1989)
Yao, A.C.: Protocols for secure computations (extended abstract). In: 23rd FOCS, pp. 160–164. IEEE Computer Society Press (November 1982)
Yao, A.C.-C.: How to generate and exchange secrets (extended abstract). In: 27th FOCS, pp. 162–167. IEEE Computer Society Press (October 1986)
Appendices
A Proof of Lemma 3
The proof of the lemma follows from the properties of the \(\epsilon \)-relaxation and the until-relaxation, and is in line with the composition theorem for interval-wise relaxations in [31].
We start from the first property, which shows sequential composition. That is, if \(\boldsymbol{\pi }\) constructs \(\mathcal {S}\) from \(\mathcal {R}\) with error \(\epsilon \), and \(\boldsymbol{\pi '}\) constructs \(\mathcal {T}\) from \(\mathcal {S}\) with error \(\epsilon '\), then one can construct \(\mathcal {T}\) from \(\mathcal {R}\) with a new error \(\tilde{\epsilon }\), corresponding essentially to the sum of \(\epsilon \) and \(\epsilon '\).
for \(\tilde{\epsilon } = (\epsilon _{\pi '})_{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}} + (\epsilon '_{\sigma _{\overline{X}}})_{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}\), where \((\epsilon _{\pi '})_{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}\) is the advantage of the distinguisher that first attaches \(\pi '\) to the given resource and then interacts with the projected resource, and analogously for \((\epsilon '_{\sigma _{\overline{X}}})_{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}\).
Let \(X \subseteq \mathcal {P}\) be a set. From the first part, Lemma 2 and composition order invariance, we have: \(\boldsymbol{\pi '}\boldsymbol{\pi }\mathcal {R} \subseteq \boldsymbol{\pi '}(\sigma _{\overline{X}} \mathcal {S})^{{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}:{\epsilon }} \subseteq (\sigma _{\overline{X}} \boldsymbol{\pi '}\mathcal {S})^{{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}:{(\epsilon _{\pi '})_{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}}}\).
Moreover, from the second part we have: \(\sigma _{\overline{X}} \boldsymbol{\pi '}\mathcal {S} \subseteq \sigma _{\overline{X}} (\sigma '_{\overline{X}} \mathcal {T})^{{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}:{\epsilon '}} \subseteq (\sigma _{\overline{X}} \sigma '_{\overline{X}} \mathcal {T})^{{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}:{(\epsilon '_{\sigma _{\overline{X}}})_{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}}}\).
Combining both statements and using Lemma 1 yields: \(\boldsymbol{\pi '}\boldsymbol{\pi }\mathcal {R} \subseteq (\sigma _{\overline{X}} \sigma '_{\overline{X}} \mathcal {T})^{{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}:{\tilde{\epsilon }}}\).
The second property ensures that the construction notion achieves parallel composition. That is, if \(\boldsymbol{\pi }\) constructs \(\mathcal {S}\) from \(\mathcal {R}\), then it also constructs \([\mathcal {S},\mathcal {T}]\) from \([\mathcal {R},\mathcal {T}]\).
for \(\tilde{\epsilon }\) defined by \(\tilde{\epsilon }(D) = \epsilon (D[\cdot , T])\), where \(D[\cdot , T]\) denotes the distinguisher that emulates \(T\) in parallel to the given resource, and then executes D.
Let \(X \subseteq \mathcal {P}\) be a set. From composition order invariance and Lemma 2, we have: \(\boldsymbol{\pi }[\mathcal {R},\mathcal {T}] = [\boldsymbol{\pi }\mathcal {R},\mathcal {T}] \subseteq [(\sigma _{\overline{X}} \mathcal {S})^{{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}:{\epsilon }},\mathcal {T}] \subseteq (\sigma _{\overline{X}} [\mathcal {S},\mathcal {T}])^{{\mathcal {E}_X \vee \mathcal {E}_{\mathcal {Z}}}:{\tilde{\epsilon }}}\).
B Threshold Homomorphic Encryption
We recall the definition of a threshold encryption scheme. A threshold encryption scheme with standard adaptive security can be found in [36], based on the Paillier cryptosystem.
Definition 10
A homomorphic threshold encryption scheme consists of the following algorithms:
-
(Key generation) The key generation algorithm is parameterized by (t, n) and outputs \((\mathtt {ek},\mathtt {dk}) = \mathtt {Keygen}_{(t,n)}(1^\kappa )\), where \(\mathtt {ek}\) is the public key, and \(\mathtt {dk}= (\mathtt {dk}_1,\dots ,\mathtt {dk}_n)\) is the list of secret keys.
-
(Encryption) There is an algorithm \(\mathtt {Enc}\) which, on input the public key \(\mathtt {ek}\) and a plaintext m, outputs an encryption \(\overline{m} = \mathtt {Enc}_{\mathtt {ek}}(m;r)\) of m, with random input r.
-
(Partial decryption) There is an algorithm that, given as input a decryption key \(\mathtt {dk}_i\) and a ciphertext c, outputs a decryption share \(d_i = \mathtt {DecShare}_{\mathtt {dk}_i}(c)\).
-
(Reconstruction) Given \(t+1\) decryption shares \(\{d_i\}\), one can reconstruct the plaintext \(m = \mathtt {Rec}(\{d_i\})\).
-
(Additively Homomorphic) There is an algorithm which, given the public key \(\mathtt {ek}\) and encryptions \(\overline{a}\) and \(\overline{b}\), outputs a uniquely determined encryption \(\overline{a+b}\). We write \(\overline{a+b} = \overline{a} \boxplus \overline{b}\). Likewise, there is an algorithm performing subtraction: \(\overline{a-b} = \overline{a} \boxminus \overline{b}\).
-
(Multiplication by constant) There is an algorithm which, given the public key \(\mathtt {ek}\), a plaintext a and a ciphertext \(\overline{b}\), outputs a uniquely determined encryption \(\overline{a\cdot b}\). We write \(\overline{a\cdot b} = a \boxdot \overline{b}\).
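The interface above can be exercised with a deliberately insecure toy scheme. The sketch below uses \(\mathtt{Enc}_s(m;r) = (r, m + s\cdot r)\) over a prime field, which is linear and hence additively homomorphic, with the secret key Shamir-shared; it only illustrates the share/reconstruct flow and is in no way the Paillier-based scheme of [36]. All function names are ours; the toy scheme also has no meaningful public key.

```python
# Toy (t,n) "homomorphic threshold encryption": a decryption share of
# c = (r, m + s*r) under key share s_i is s_i * r, which is itself a Shamir
# share (at point i) of s*r, so t+1 shares reconstruct s*r and hence m.
import random

P = 2_147_483_647  # prime plaintext modulus

def keygen(t, n):
    """Shamir-share a random secret key s among n parties (degree-t poly)."""
    coeffs = [random.randrange(P) for _ in range(t + 1)]  # coeffs[0] = s
    dks = [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
           for i in range(1, n + 1)]
    return coeffs[0], dks  # (secret key, list of key shares)

def enc(s, m):
    r = random.randrange(1, P)
    return (r, (m + s * r) % P)

def add(c1, c2):   # the [+] operation on ciphertexts
    return ((c1[0] + c2[0]) % P, (c1[1] + c2[1]) % P)

def cmul(a, c):    # the [*] operation: multiplication by a plaintext constant
    return (a * c[0] % P, a * c[1] % P)

def dec_share(dk_i, c):
    i, s_i = dk_i
    return (i, s_i * c[0] % P)  # Shamir share (at point i) of s*r

def rec(shares, c):
    """Lagrange-interpolate s*r at x = 0 from t+1 decryption shares."""
    sr = 0
    for i, d_i in shares:
        lam = 1
        for j, _ in shares:
            if j != i:
                lam = lam * j % P * pow(j - i, -1, P) % P
        sr = (sr + lam * d_i) % P
    return (c[1] - sr) % P

s, dks = keygen(t=2, n=5)
ca, cb = enc(s, 10), enc(s, 32)
c_sum = add(ca, cb)
assert rec([dec_share(dk, c_sum) for dk in dks[:3]], c_sum) == 42
```

Note how any \(t+1\) of the n key shares suffice, and how the homomorphic operations commute with partial decryption, which is exactly what the multiplication-gate simulation in Sect. 6 relies on.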
C Description of Communication Primitives
1.1 C.1 Network Model
We first describe a single-message authenticated channel \(\mathsf {AUTH}_{i,j}\) from party i to party j. A multi-message authenticated channel is then obtained via parallel composition of single-message resources. The resource has \(n+2\) interfaces: n party interfaces, an adversary interface A, and a free interface W.
The channel expects an input message m at interface i, which is stored upon receipt. The adversary can learn the input message and can choose to deliver it by making it available at interface j. Moreover, if party i is corrupted, the adversary can inject a new message, as long as the original message has not been delivered yet.
Let \(\mathcal {N}\) be the complete network of pairwise authenticated channels, i.e., the parallel composition of \(\mathsf {AUTH}_{i,j}\), for \(i,j \in [n]\).
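The behavior of \(\mathsf {AUTH}_{i,j}\) can be summarized as a small state machine. The sketch below reduces the interfaces to method calls; the method names are ours, since the paper specifies the resource only in prose.

```python
# Minimal state-machine sketch of the single-message channel AUTH_{i,j}:
# the adversary sees the message and schedules delivery, and may inject a
# replacement only if the sender is corrupted and delivery has not occurred.
class AUTH:
    def __init__(self, i, j):
        self.i, self.j = i, j
        self.msg = None            # message stored at the resource
        self.delivered = False     # whether j can already read it
        self.sender_corrupted = False

    def input(self, party, m):     # interface i: send
        if party == self.i and self.msg is None:
            self.msg = m

    def leak(self):                # interface A: learn the traffic
        return self.msg

    def deliver(self):             # interface A: schedule delivery to j
        if self.msg is not None:
            self.delivered = True

    def inject(self, m):           # interface A: only if sender corrupted
        if self.sender_corrupted and not self.delivered:
            self.msg = m

    def output(self, party):       # interface j: receive
        if party == self.j and self.delivered:
            return self.msg
        return None
```

For an honest sender, `inject` is a no-op, which is precisely the authenticity guarantee of the channel.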
1.2 C.2 Broadcast with Abort
The resource has \(n+2\) interfaces: n party interfaces, an adversary interface A, and a free interface W. As a first step, we model a broadcast resource \(\mathsf {BC}_i\) where party i is the sender. We then define the broadcast specification \(\mathcal {BC}\), which allows any party to broadcast, as the specification containing the parallel composition of the broadcast resources \(\mathsf {BC}_i\), for each \(i \in [n]\).
The broadcast specification \(\mathcal {BC}\) we consider corresponds to that of broadcast with abort [27], and does not guarantee delivery of messages. It only guarantees that no two uncorrupted parties will receive two different messages.
As pointed out by Hirt and Zikas [29], traditional broadcast protocols do not construct the stronger broadcast functionality that simply forwards the sender’s message to all parties, since they are subject to adaptive attacks in which the adversary corrupts an initially honest sender depending on the broadcast message, and then changes that message. Therefore, we consider the “relaxed” version, which allows an adaptively corrupted sender to change the message sent, as long as the message has not been delivered to any party.
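These guarantees can be made explicit in a small sketch of \(\mathsf {BC}_i\): every recipient outputs either the (current) sender message or \(\bot \), so no two honest parties can receive different non-\(\bot \) values, and an adaptively corrupted sender may swap the message only before any delivery. Method names are illustrative, not part of the formal resource.

```python
# Toy sketch of the broadcast-with-abort resource BC_i: all non-bot outputs
# come from the single stored message, and a corrupted sender can change it
# only while no party has received an output yet.
BOT = object()  # stands for the abort output

class BC:
    def __init__(self, sender, n):
        self.sender, self.n = sender, n
        self.msg = None
        self.out = {}                  # outputs fixed so far, per party
        self.sender_corrupted = False

    def input(self, party, m):         # sender interface
        if party == self.sender and self.msg is None:
            self.msg = m

    def change(self, m):               # interface A: adaptive sender attack
        if self.sender_corrupted and not self.out:
            self.msg = m               # allowed only before any delivery

    def deliver(self, party, abort=False):  # interface A: per-party delivery
        if self.msg is not None and party not in self.out:
            self.out[party] = BOT if abort else self.msg

    def output(self, party):
        return self.out.get(party)
```

Once a single output is fixed, `change` is blocked, capturing the consistency guarantee: parties may disagree only in that some output \(\bot \).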
Copyright information
© 2021 International Association for Cryptologic Research
Hirt, M., Liu-Zhang, CD., Maurer, U. (2021). Adaptive Security of Multi-party Protocols, Revisited. In: Nissim, K., Waters, B. (eds) Theory of Cryptography. TCC 2021. Lecture Notes in Computer Science(), vol 13042. Springer, Cham. https://doi.org/10.1007/978-3-030-90459-3_23
Print ISBN: 978-3-030-90458-6
Online ISBN: 978-3-030-90459-3