1 Introduction

Secure computation (MPC) is the area of cryptography concerned with mutually distrusting parties who wish to compute some function f on their private inputs, with each party receiving a private output. If we consider p parties \(P_1,\dots , P_p\), where party \(P_i\) has input \(x_i\), the parties wish to learn their respective outputs \(y_i\). We can thus describe the function to compute as \(f(x_1, x_2, \dots , x_p)=(y_1, y_2, \dots , y_p)\). It was shown in the 80’s how to realize this, even against a malicious adversary taking control over a majority of the parties [24]. With feasibility in place, much research has been carried out trying to make MPC as efficient as possible. One specific approach to efficient MPC, which has gained a lot of traction, is based on secret sharing [3, 5, 24]: Each party secretly shares his or her input with the other parties. The parties then parse f as an arithmetic circuit consisting of multiplication and addition gates. In a collaborative manner, based on the shares, they then compute the circuit to obtain shares of the output, which they can then open.
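To make the secret-sharing approach concrete, here is a minimal Python sketch of additive sharing over a prime field; the modulus and helper names are our own illustrative choices, not fixed by this paper.

```python
import random

P = 2**61 - 1  # a Mersenne prime standing in for the field F (assumption)

def share(x, p):
    """Split x into p additive shares that sum to x mod P."""
    shares = [random.randrange(P) for _ in range(p - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Open a shared value by summing all shares."""
    return sum(shares) % P

def add_gate(xs, ys):
    """Addition gates are local: each party adds its own two shares."""
    return [(xi + yi) % P for xi, yi in zip(xs, ys)]
```

Note that addition requires no interaction at all; only multiplication gates need communication, which is why the preprocessing discussed below focuses on multiplication triples.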

Our Contributions. Using the secret-sharing approach opens up the possibility of malicious parties using “inconsistent” shares in the collaborative computation. To combat this, most protocols add a MAC on the true value shared between the parties. If someone cheats, it is then possible to detect this when verifying the MAC [14, 16, 33].

In this paper we take a different approach to ensure correctness: We have each party commit to its shares towards the other parties using an additively homomorphic commitment. We then have the collaborative computation take place on the commitments instead of the pure shares. Thus, if some party tries to change its share during the protocol then the other parties will notice when the commitments are opened in the end (as the opening will be invalid).

By taking this path, we can present the following contributions:

  1.

    An efficient and black-box reduction from random multiparty homomorphic commitments to two-party additively homomorphic commitments.

  2.

    Using these multiparty commitments we present a new secret-sharing based MPC protocol with security against a malicious adversary corrupting a majority of the parties. Leveraging the commitments, our approach does not use any MAC scheme and does not rely on a random oracle or any specific number theoretic assumptions.

  3.

    The new protocol has several advantages over previous protocols in the same model. In particular our protocol works over fields of arbitrary characteristic, independent of the security parameter. Furthermore, since our protocol computes over committed values it can easily be composed inside larger protocols. For example, it can be used for computing committed OT in a very natural and efficient way.

  4.

    We suggest an efficient realization of our protocol which relies only on a PRG, coin-tossing and OT. We give a detailed comparison of our scheme with other dishonest-majority, secret-sharing based MPC schemes, showing that the efficiency of our scheme is comparable to, and in several cases preferable over, the state of the art.

High Level Idea. We start from any (possibly interactive) two-party additively homomorphic commitment scheme. To achieve the most efficient result, without relying on a random oracle or specific number theoretic assumptions, we consider the scheme of [18], since it has been shown to be highly efficient in practice [35, 36]. This scheme, along with others [9,10,11], works on commitments to vectors of m elements over some field \(\mathbb {F} \). For this reason we also present our results in this setting; thus any of these schemes could be used.

The first part of our protocol constructs a large batch of commitments to random values. The actual value in such a commitment is unknown to any party; instead, each party holds an additive share of it. This is done by having each party pick a random message and commit to it towards every other party, using the two-party additively homomorphic commitment scheme. The resulting multiparty commitment is the sum of all the messages the parties committed to, which is uniformly random if there is at least one honest party. We must ensure that a party commits to the same message towards all other parties; to this end the parties agree on a few (random) linear combinations over the commitments, which are then opened and checked.
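The consistency check via random linear combinations can be illustrated with a toy model in which a “commitment” is simply the value recorded by each receiver; the function and parameter names below are ours, not the paper’s.

```python
import random

P = 101  # small prime field, for illustration only (assumption)

def consistency_check(committed, num_checks=40):
    """committed[j] holds the batch of messages one party committed
    toward receiver j; in this toy model a 'commitment' is just the
    recorded value.  The receivers agree on random linear combinations
    over the batch, open them, and compare the results."""
    gamma = len(committed[0])
    for _ in range(num_checks):
        r = [random.randrange(P) for _ in range(gamma)]
        opened = [sum(rk * m for rk, m in zip(r, msgs)) % P
                  for msgs in committed]
        if any(o != opened[0] for o in opened[1:]):
            return False  # inconsistent commitments detected
    return True
```

A single inconsistent position survives one random combination with probability 1/|F|, so repeating the check drives the cheating probability down exponentially.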

Based on these random additively shared commitments, the parties execute a preprocessing stage to construct random multiplication triples. This is done in a manner similar to MASCOT [29], yet a bit different, since our scheme supports computation over arbitrary small fields while MASCOT requires a field of size exponential in the security parameter. More specifically, the Gilboa protocol [23] for multiplication of additively shared values is used to compute the product of two shares of the commitments between each pair of parties. However, this is not maliciously secure, as the result might be incorrect and a few bits of information on the honest parties’ shares might be leaked. To ensure correctness, cut-and-choose and sacrificing steps are executed. First, a few triples are opened and checked for correctness. This ensures that not all triples are incorrectly constructed. Next, the remaining triples are mapped into buckets, where some triples are sacrificed to check the correctness of another triple. At this point all the triples are correct except with negligible probability. Finally, since the above process grants the adversary the ability to leak some bits from the honest parties’ shares, the parties engage in a combining step, where triples are randomly “added” together to ensure that the result will contain at least one fully random triple.
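The algebra behind a sacrificing step can be sketched as follows. We write it on opened values purely to show the check; in the protocol these operations are performed homomorphically on committed shares, and the names and modulus are our own illustrative choices.

```python
import random

P = 2**31 - 1  # prime modulus standing in for F (assumption)

def sacrifice_check(t1, t2):
    """Check triple t1 = (a, b, c) by sacrificing t2 = (x, y, z):
    for a random challenge r, open rho = r*a - x and sigma = b - y and
    verify r*c - z - sigma*x - rho*y - rho*sigma = 0, which holds
    whenever c = a*b and z = x*y, and fails except with probability
    about 1/|F| otherwise."""
    a, b, c = t1
    x, y, z = t2
    r = random.randrange(1, P)
    rho = (r * a - x) % P      # opened value
    sigma = (b - y) % P        # opened value
    return (r * c - z - sigma * x - rho * y - rho * sigma) % P == 0
```

Expanding the check with c = ab and z = xy shows every term cancels; if c = ab + e with e ≠ 0, the check equals r·e minus a fixed offset, which vanishes for at most one challenge r.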

As the underlying two-party commitments are for vectors of messages, our protocol immediately features single-instruction multiple-data (SIMD) operation, which allows multiple simultaneous executions of the same computation (over different inputs). However, when performing only a single execution we would like to use only one element out of the vector and save the rest of the elements for later use. We do so by preprocessing reorganization pairs, following the same approach presented in MiniMAC [12, 15, 16], which allow performing a linear transformation over a committed vector.

With the preprocessing done, the online phase of our protocol proceeds like previous secret-sharing based MPC schemes such as [14, 16, 29]. That is, the parties use their share of the random commitments to give input to the protocol. Addition is then carried out locally and the random multiplication triples are used to interactively realize multiplication gates.
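The triple-based multiplication in the online phase can be sketched like this (a toy over a prime field; in our protocol the same steps are applied to committed shares, and all names here are ours):

```python
import random

P = 2**61 - 1  # prime field stand-in (assumption)

def share(v, p):
    s = [random.randrange(P) for _ in range(p - 1)]
    s.append((v - sum(s)) % P)
    return s

def open_shares(s):
    return sum(s) % P

def beaver_mult(xs, ys, ta, tb, tc):
    """Multiply shared x and y using a shared triple (a, b, c = a*b):
    open eps = x - a and delta = y - b, then each party locally forms
    its share of z = c + eps*b + delta*a + eps*delta."""
    eps = open_shares([(xi - ai) % P for xi, ai in zip(xs, ta)])
    delta = open_shares([(yi - bi) % P for yi, bi in zip(ys, tb)])
    zs = [(ci + eps * bi + delta * ai) % P
          for ai, bi, ci in zip(ta, tb, tc)]
    zs[0] = (zs[0] + eps * delta) % P  # public constant added by one party
    return zs
```

Since a and b mask x and y, the opened values eps and delta reveal nothing about the inputs, and only one opening round is needed per multiplication gate.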

Efficiency. In Table 1 we count the amount of OTs, two-party commitments and coin tossing operations required in the different commands of our protocol (specifically, in the Rand, Input, ReOrg, Add and Mult commands).

The complexities describe what is needed to construct a vector of m elements in the underlying field in the amortized sense. When using the commitment scheme of [18] it must hold that \(m\ge s/\lceil \log _2(|\mathbb {F} |)\rceil \) where s is the statistical security parameter.

Table 1. Amortized complexity of each instruction of our protocol (Rand, Input, ReOrg, Add and Mult), when constructing a batch of \(2^{20}\) multiplication triples, each with m independent components, among p parties. The quadratic complexity in the number of two-party commitments reflects the fact that our protocol is constructed from any two-party commitment scheme in a black-box manner, so each party independently commits to every other party for each share it possesses.

1.1 Related Work

Comparison to SPDZ and TinyOT. In general, having the parties commit to their shares allows us to construct a secret-sharing based MPC protocol à la SPDZ [14, 29], but without the need for shared and specific information-theoretic MACs. This gives us several advantages over the SPDZ approach:

  • We get a light preprocessing stage of multiplication triples as we can base this on commitments to random values, which are later adjusted to reflect a multiplication. Since the random values are additively homomorphic and committed, this limits the adversary’s possible attack vector. In particular we do not need an authentication step.

  • Using the commitment scheme of [18] we get the possibility of committing to messages in any field \(\mathbb {F} \) among p parties, using communication of only \(O(\log (|\mathbb {F} |)\cdot p^2)\) bits, amortized. This is also the case when \(\mathbb {F} \) is the binary field or of characteristic different from 2. In comparison, SPDZ requires the underlying field to be of size \(\varOmega (2^s)\), where s is a statistical security parameter.

  • The TinyOT protocol [7, 30, 33], on the other hand, only works over GF(2) and requires a MAC of \(\tilde{O} (s)\) bits on each secret bit. This gives a larger overhead than SPDZ, MiniMAC and our protocol, and limits its use-case to the evaluation of Boolean circuits.

Comparison to MiniMAC. The MiniMAC protocol [16] uses an error correcting code over a vector of data elements. It can be used for secure computation over small fields without adding long MACs to each data element. Unfortunately, however, the authors of [16] did not describe how to realize the preprocessing needed, and neither did the follow-up works [12, 15]. The only efficient preprocessing protocol for MiniMAC that we know of is the one presented in [19] based on OT extension. However, this protocol has its quirks:

  • It only works over fields of characteristic 2.

  • The ideal functionality described is different from the ones in [12, 15, 16]. Furthermore, it is non-standard in the sense that the corruption an adversary can apply to the shares of honest parties may depend on the inputs of the honest parties.

  • There is no proof that this ideal functionality works in the online phase of MiniMAC.

  • There seems to be a bug in one of the steps of the preprocessing of multiplication triples. We discuss this in further detail in the full version [20].

OT Extensions. All the recent realizations of the preprocessing phase of secret-sharing based protocols such as SPDZ, MiniMAC and TinyOT are implemented using OT. The same goes for our protocol. Not too long ago this would not have been a practically efficient choice, since OT generally requires public-key operations. However, the seminal work of Beaver [4] showed that it is possible to extend a few OTs, using only symmetric cryptography, to achieve a practically unbounded number of OTs. Unfortunately, Beaver’s protocol was not practically efficient, but much research has been carried out since then [1, 2, 25, 28, 33], culminating in a maliciously secure OT extension where a one-out-of-two OT of 128-bit messages with \(s=64\) can be done, in the amortized sense, in 0.3 \({\upmu }\)s [28].

Commitment Extensions. Using additively homomorphic commitments for practical MPC is a path which would also not have been possible even just a few years ago. However, much research has been carried out on “commitment extension” in recent years. All such constructions that we know of require a few OTs in a preprocessing phase, after which construction and opening of commitments can be done using cheap symmetric or information-theoretic primitives. The work on such extensions started in [22] and independently in [11]. A series of follow-up works [6, 9, 10, 18, 35] presented several improvements, both asymptotic and practical. Of these works, [35] is of particular interest since it presents an implementation (based on the scheme of [18]) and showed that committing to and opening 128-bit messages with \(s=40\) can be done in less than 0.5 \({\upmu }\)s and 0.2 \({\upmu }\)s respectively, in the amortized sense, for a batch of 500,000 commitments.

It should be noted that Damgård et al. [11] also achieved commitments that are both additively and multiplicatively homomorphic. They use this to obtain an MPC protocol cast in the client/server setting. We take some inspiration from their work, but note that their setting and protocols are quite different from ours: they use verifiable secret sharing to achieve the multiplicative property, and so their scheme is based on threshold security, meaning they get security against a constant fraction of the servers in a client/server protocol.

Relation to [13]. The protocol by Damgård and Orlandi [13] also considers maliciously secure secret-sharing based MPC in the dishonest majority setting. Like ours, their protocol is based on additively homomorphic commitments where each party is committed to its share to thwart malicious behavior. However, unlike ours, their protocol only works over large arithmetic fields and uses a very different approach. Specifically, they use the cut-and-choose paradigm along with packed secret sharing in order to construct multiplication triples. Furthermore, to get random commitments in the multiparty setting, they require public-key encryption for each commitment. Thus, the number of public-key operations they require is linear in the number of multiplication gates in the circuit to compute. In our protocol the number of public-key operations can be limited to depend only on the security parameter, as we only require public-key primitives to bootstrap the OT extension.

Other Approaches to MPC. Other approaches to maliciously secure MPC in the dishonest majority setting exist. One example is Yao’s garbled circuits [31, 32, 37], where the parties first construct an encrypted Boolean circuit and then evaluate it locally. Another approach is “MPC-in-the-head” [26, 27], which efficiently combines any protocol in the malicious honest-majority setting and any protocol in the semi-honest dishonest-majority setting into a protocol secure in the malicious dishonest-majority setting.

2 Preliminaries

Parameters and Notation. Throughout the paper we use “negligible probability in s” to refer to a probability that is \(o(1/poly(s))\) for every polynomial poly(s) in \(s\in \mathbb {N}\). Similarly we use “overwhelming probability in s” to denote a probability that is \(1-o(1/poly(s))\) for every polynomial poly(s), where s is the statistical security parameter.

There are \(p\in \mathbb {N} \) parties \(P_1,\ldots ,P_p\) participating in the protocol. The notation [k] refers to the set \(\{1,\ldots ,k\}\). We write vector variables in boldface. We use square brackets to select a specific element of a vector, that is, \(\mathbf {x} [\ell ] \in \mathbb {F} \) is the \(\ell \)’th element of the vector \(\mathbf {x} \in \mathbb {F} ^m\) for some \(m\ge \ell \). We assume that vectors are column vectors and use \(\Vert \) to denote concatenation of rows, that is, \(\mathbf {x} \Vert \mathbf {y} \) with \(\mathbf {x}, \mathbf {y} \in \mathbb {F} ^m\) is an \(m\times 2\) matrix. We use \(*:\mathbb {F} ^m \times \mathbb {F} ^m \rightarrow \mathbb {F} ^m\) to denote component-wise multiplication and \(\cdot : \mathbb {F} \times \mathbb {F} ^m \rightarrow \mathbb {F} ^m\) to denote scalar multiplication. We will sometimes abuse notation slightly and consider \(\mathbb {F} \) as a set of elements, and thus use \(\mathbb {F} \backslash \{0\}\) to denote the elements of \(\mathbb {F} \) excluding the additive neutral element 0.

If S is a set we assume that there exists an arbitrary but globally known deterministic ordering of the set and let \(S[i]=S_i\) denote the i’th element under this ordering. In general we always assume that sets are stored as lists under such an ordering. When needed we use (a, b, ...) to denote a list of elements in a specific order. Letting A and B be two sets s.t. \(|A|=|B|\), we abuse notation by letting \(\{(a, b)\}\in (A, B)\) denote \(\{(A[i], B[i])\}_{i\in [|A|]}\), i.e. a and b denote the i’th element of A and B, respectively.

All proofs and descriptions of protocols are given in the Universally Composable (UC) framework [8].

Ideal Functionalities. We list the ideal UC-functionalities we need for our protocol. Note that we use the standard functionalities for Coin Tossing, Secure Equality Check, Oblivious Transfer and Multiparty Computation.

We need a coin-tossing functionality that allows all parties to agree on uniformly random elements in a field. For this purpose we describe a general, maliciously secure coin-tossing functionality in Fig. 1.

Fig. 1. Ideal functionality \(\mathcal {F} _{\textsf {CT}} \)

Furthermore, we need to be able to securely evaluate equality of values. This functionality is described in Fig. 2. Notice that this functionality allows the adversary to learn the honest parties’ inputs after it supplies its own. Furthermore, we allow the adversary to learn the result of the equality check before any honest party, which gives it the chance to abort the protocol. Thus this functionality should only be used on data that is not private. The functionality can, for example, be implemented using a commitment scheme where each party commits to its input towards every other party. Once all parties have committed, the parties open the commitments and each party locally evaluates whether all values are equal.

Fig. 2. Ideal functionality \(\mathcal {F} _{\textsf {EQ}} \)
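One simple instantiation of this commit-then-open equality check — the text above does not fix a concrete commitment scheme, so the hash-based commitment and all names below are our own illustrative choices — can be sketched as:

```python
import hashlib
import secrets

def commit(value: bytes):
    """Hash-based commitment: com = H(r || value); (r, value) opens it."""
    r = secrets.token_bytes(32)
    return hashlib.sha256(r + value).digest(), r

def verify(com, r, value: bytes):
    return hashlib.sha256(r + value).digest() == com

def equality_check(inputs):
    """Round 1: every party commits to its input.  Round 2: once all
    commitments are exchanged, everyone opens and compares locally."""
    coms = [commit(v) for v in inputs]
    assert all(verify(c, r, v) for (c, r), v in zip(coms, inputs))
    return all(v == inputs[0] for v in inputs[1:])
```

Committing before opening prevents a rushing party from choosing its “input” after seeing the others’, which is exactly what the two-round structure of \(\mathcal {F} _{\textsf {EQ}} \) captures.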

We also require a standard 1-out-of-2 OT functionality, denoted \(\mathcal {F} _{\textsf {OT}} \), as described in Fig. 3.

Fig. 3. Ideal functionality \(\mathcal {F} _{\textsf {OT}} \)

Finally, a fully fledged MPC functionality, very similar to the one described in previous works such as SPDZ and MiniMAC, is described in Fig. 4. Note that the functionality maintains a dictionary \(\mathsf {id}\) that maps indices to values stored by the functionality. The expression \(\mathsf {id}[k]=\bot \) means that no value is stored by the functionality at index k in that dictionary. Also note that the functionality is described as operating over vectors from \(\mathbb {F} ^m\) rather than over elements from \(\mathbb {F} \). This is because our protocol allows up to m simultaneous secure computations of the same function (on different inputs) at the price of a single computation; thus, every operation (such as input, random, add, multiply) is done component-wise on a vector from \(\mathbb {F} ^m\). As we describe later, it is indeed possible to perform a single secure computation when needed.

Fig. 4. Ideal functionality \(\mathcal {F} _{\textsf {MPC}\hbox {-}\mathbb {F} ^{m}} \)

Dependencies between functionalities and protocols. We illustrate the dependencies between the ideal functionalities just presented and our protocols in Fig. 5. We see that \(\mathcal {F} _{\textsf {CT}} \) and \(\mathcal {F} _{\textsf {EQ}} \), along with a two-party commitment scheme \(\mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}} \) (presented in the next section), are used to realize our multiparty commitment scheme in protocol \(\varPi _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \). Functionalities \(\mathcal {F} _{\textsf {CT}} \) and \(\mathcal {F} _{\textsf {EQ}} \) are again used, along with \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \) and \(\mathcal {F} _{\textsf {OT}} \), to realize the augmented homomorphic commitment scheme \(\varPi _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \). \(\varPi _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) constructs all the preprocessed material, in particular multiplication triples, needed in order to realize the fully fledged MPC protocol \(\varPi _{\textsf {MPC}\hbox {-}\mathbb {F} ^{m}} \).

Fig. 5. Outline of functionalities and protocols.

Arithmetic Oblivious Transfer. Generally speaking, arithmetic oblivious transfer allows two parties \(P_i\) and \(P_j\) to obtain additive shares of the product of two elements \(x,y\in \mathbb {F} \), where \(P_i\) privately holds x and \(P_j\) privately holds y.

A protocol for achieving this in the semi-honest setting is presented in [23] and used in MASCOT [29]. Let \(\ell =\lceil \log _2 |\mathbb {F} |\rceil \) be the number of bits required to represent elements of the field \(\mathbb {F} \); the protocol then has the parties run \(\ell \) (possibly parallel) rounds, each of which invokes an instance of the general oblivious transfer functionality (\(\mathcal {F} _{\textsf {OT}} \)). This is described by procedure \(\mathsf {ArithmeticOT}\) in Fig. 6.

Fig. 6. Procedure \(\mathsf {ArithmeticOT}\)

The use of arithmetic OT to construct multiplication triples. In our protocol we use the above procedure to multiply two elements \(\mathbf {x},\mathbf {y} \in \mathbb {F} ^m\) such that one party privately holds \(\mathbf {x} \) and the other party privately holds \(\mathbf {y} \). Specifically, we can do this using m invocations of the \(\mathsf {ArithmeticOT}\) procedure; thus, to multiply elements from \(\mathbb {F} ^m\) we make a total of \(m \lceil \log _2(|\mathbb {F} |)\rceil \) calls to the \(\texttt {transfer} \) command of the \(\mathcal {F} _{\textsf {OT}} \) functionality.

Even when using a maliciously secure OT functionality to realize this procedure, the procedure itself does not become maliciously secure. We discuss how to handle this in Sect. 4.2.
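A sketch of the Gilboa-style product sharing, with one \(\mathcal {F} _{\textsf {OT}} \) transfer per bit of the receiver's input replaced by a trivial stand-in function; the modulus and all names are our own illustrative choices.

```python
import random

P = 2**31 - 1              # prime modulus standing in for F (assumption)
ELL = P.bit_length()       # ell = ceil(log2 |F|) OT rounds

def ot(m0, m1, choice):
    """Stand-in for one transfer call to the OT functionality."""
    return m1 if choice else m0

def arithmetic_ot(x, y):
    """Gilboa-style product sharing: the sender holds x, the receiver
    holds y.  Returns additive shares (a, b) with a + b = x*y mod P."""
    a, b = 0, 0
    for t in range(ELL):
        r = random.randrange(P)
        y_t = (y >> t) & 1
        # sender offers (r, r + x*2^t); receiver selects with bit y_t
        b = (b + ot(r, (r + x * (1 << t)) % P, y_t)) % P
        a = (a - r) % P
    return a, b
```

Summing over the bits, the receiver obtains \(\sum_t r_t + x\sum_t y_t 2^t\) and the sender holds \(-\sum_t r_t\), so the shares add up to \(x\cdot y\).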

3 Homomorphic Commitments

In this section we present the functionalities for two-party and multiparty homomorphic commitment schemes. However, we present a realization only for the multiparty case, since it uses a two-party homomorphic commitment scheme in a black-box manner and is therefore not bound to any specific realization.

For completeness, and for concreteness of the efficiency analysis, we present a realization of the two-party homomorphic commitment scheme in the full version [20].

3.1 Two-Party Homomorphic Commitments

Functionality \(\mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}} \) is held between two parties \(P_i \) and \(P_j \), in which \(P_i \) commits to some value \(\mathbf {x} \in \mathbb {F} ^m\) toward party \(P_j \), who eventually holds the commitment information, denoted \([\mathbf {x} ]^{i,j}\). In addition, by committing to some value \(\mathbf {x} \) party \(P_i \) holds the opening information, denoted \(\langle {\mathbf {x}} \rangle ^{i,j}\), such that having \(P_i \) send \(\langle {\mathbf {x}} \rangle ^{i,j}\) to \(P_j \) is equivalent to issuing the command Open on \(\mathbf {x} \) by which \(P_j \) learns \(\mathbf {x} \).

The functionality works in a batch manner, that is, \(P_i \) commits to \(\gamma \) (random) values at once using the Commit command. These \(\gamma \) random values are considered “raw” commitments since they have not been processed yet. The sender turns a commitment from “raw” to “actual” by issuing either the Input or the Rand command on it: the Input command modifies the committed value to the sender’s choice, whereas the Rand command keeps the (random) value of the commitment. In both cases the commitment is considered “actual” and no longer “raw”. Actual-commitments can then be combined using the Linear Combination command to construct new actual-commitments.

To keep track of the commitments the functionality uses two dictionaries: \(\mathsf {raw}\) and \( \mathsf {actual} \). Both map from identifiers to committed values, such that the mapping returns \(\bot \) if no mapping exists for the identifier. We stress that a commitment is either raw or actual, but not both; that is, for every identifier at least one of \(\mathsf {raw}\) and \( \mathsf {actual} \) returns \(\bot \). To issue the Commit command, the committer is instructed to choose a set I of \(\gamma \) fresh identifiers, i.e. a set of identifiers I such that for every \(k\in I\), both \(\mathsf {raw}\) and \( \mathsf {actual} \) return \(\bot \). The functionality is formally described in Fig. 7.

Fig. 7. Ideal functionality \(\mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}} \)

To simplify the presentation of our protocol we use shorthands for invocations of the functionality’s commands, as follows: Let \([\mathbf {x} _{k}]^{i,j}\) and \([\mathbf {x} _{k'}]^{i,j}\) be two actual-commitments issued by party \(P_i \) toward party \(P_j \) (i.e. the committed values are stored in \(\texttt {actual} [k]\) and \(\texttt {actual} [k']\) respectively). The Linear Combination command of Fig. 7 allows computing the following operations, which will be used in our protocol. The operations are defined over \([\mathbf {x} _{k}]^{i,j}\) and \([\mathbf {x} _{k'}]^{i,j}\) and result in the actual-commitment \([\mathbf {x} _{k''}]^{i,j}\):

  • Addition. (Equivalent to the command \((\texttt {linear}, \{({k}, \mathbf {1}), ({k'}, \mathbf {1})\}, \mathbf {0}, {k''})\).)

    $$ [\mathbf {x} _{k}]^{i,j}+[\mathbf {x} _{k'}]^{i,j}=[\mathbf {x} _{k}+\mathbf {x} _{k'}]^{i,j}=[\mathbf {x} _{k''}]^{i,j} ~~~\text {and}~~~ \langle {\mathbf {x} _{k}} \rangle ^{i,j}+\langle {\mathbf {x} _{k'}} \rangle ^{i,j}=\langle {\mathbf {x} _{k}+\mathbf {x} _{k'}} \rangle ^{i,j}=\langle {\mathbf {x} _{k''}} \rangle ^{i,j} $$
  • Constant Addition. (Equivalent to the command \((\texttt {linear}, \{({k}, \mathbf {1})\}, \varvec{\beta }, {k''})\).)

    $$ \varvec{\beta } +[\mathbf {x} _{k}]^{i,j}=[\varvec{\beta } +\mathbf {x} _{k}]^{i,j}=[\mathbf {x} _{k''}]^{i,j} ~~~\text {and}~~~ \varvec{\beta } +\langle {\mathbf {x} _{k}} \rangle ^{i,j}=\langle {\varvec{\beta } +\mathbf {x} _{k}} \rangle ^{i,j}=\langle {\mathbf {x} _{k''}} \rangle ^{i,j} $$
  • Constant Multiplication. (Equivalent to the command \((\texttt {linear}, \{({k}, \varvec{\alpha })\}, \mathbf {0}, {k''})\).)

    $$ \varvec{\alpha } *[\mathbf {x} _{k}]^{i,j}=[\varvec{\alpha } *\mathbf {x} _{k}]^{i,j}=[\mathbf {x} _{k''}]^{i,j} ~~~\text {and}~~~ \varvec{\alpha } *\langle {\mathbf {x} _{k}} \rangle ^{i,j}=\langle {\varvec{\alpha } *\mathbf {x} _{k}} \rangle ^{i,j}=\langle {\mathbf {x} _{k''}} \rangle ^{i,j} $$

Realization of these operations depends on the underlying two-party commitment scheme. In the full version [20] we describe how addition of commitments and scalar multiplication are supported by the scheme of [18], and we show how to extend their scheme to also enable component-wise multiplication of an actual-commitment with a public vector from \(\mathbb {F} ^m\), following the same approach used in MiniMAC [16]. In the following we assume that component-wise multiplication by a public vector is supported by the two-party scheme.

3.2 Multiparty Homomorphic Commitments

Functionality \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \), presented in Fig. 8, is a generalization of \(\mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}} \) to the multiparty setting, where the commands Init, Commit, Input, Rand, Open and Linear Combination have the same purpose as before. The additional command Partial Open allows the parties to open a commitment to a single party only (in contrast to Open, which opens a commitment to all parties). As before, the functionality maintains the dictionaries \(\mathsf {raw}\) and \( \mathsf {actual} \) to keep track of the raw and actual commitments. The major change in the multiparty setting is that every party takes both the role of committer and receiver (i.e. of both \(P_i \) and \(P_j \) from the two-party setting). For every commitment stored by the functionality (either raw or actual), both the commitment information and the opening information are secret shared between \(P_1,\ldots ,P_p\) using a full-threshold secret sharing scheme.

Fig. 8. Ideal functionality \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \)

3.3 Realizing \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \) in the \((\mathcal {F} _{\textsf {EQ}}, \mathcal {F} _{\textsf {CT}}, \mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}})\)-hybrid Model

Let us first fix the notation for the multiparty homomorphic commitments: We use \([\![{\mathbf {x}}]\!] \) to denote a (multiparty) commitment to the message \(\mathbf {x} \). As mentioned above, both the message \(\mathbf {x} \) and the commitment \([\![{\mathbf {x}}]\!] \) are secret shared between the parties, that is, party \(P_i\) holds \(\mathbf {x} ^i\) and \([\![{\mathbf {x}}]\!] ^i\) such that \(\mathbf {x} =\sum _{i\in [p]} \mathbf {x} ^i\) and \([\![{\mathbf {x}}]\!] ^i\) is composed of the information described in the following. By issuing the Commit command, party \(P_i\) sends \([\mathbf {x} ^i]^{i,j}\) for every \(j\ne i\) (by invoking the Commit command from \(\mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}} \)). Thus, party \(P_i\) holds the opening information for all instances of the commitments to \(\mathbf {x} ^i\) toward all other parties, that is, it holds \(\left\{ \langle {\mathbf {x} ^i} \rangle ^{i,j}\right\} _{j\in [p]\smallsetminus \{i\}}\). In addition, \(P_i\) holds the commitment information received from every other party \(P_j\) for its share \(\mathbf {x} ^j\) (for \(j\ne i\)), that is, it holds \(\left\{ [\mathbf {x} ^j]^{j,i}\right\} _{j\in [p]\smallsetminus \{i\}}\). All the information that \(P_i\) holds with regard to the value \(\mathbf {x} \) is denoted by \([\![{\mathbf {x}}]\!] ^i\), which can be seen as a share of the multiparty commitment \([\![{\mathbf {x}}]\!] \).

In protocol \(\varPi _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \) (from Fig. 9) each party has a local copy of the \(\mathsf {raw}\) and \( \mathsf {actual} \) dictionaries described above, that is, party \(P_i\) maintains \(\mathsf {raw}^i\) and \( \mathsf {actual} ^i\). In the protocol, \(P_i\) may be required to store \([\![{\mathbf {x}}]\!] ^i\) (i.e. its share of \([\![{\mathbf {x}}]\!] \)) in a dictionary (either \(\mathsf {raw}^i\) or \( \mathsf {actual} ^i\)) under some identifier k; in such a case \(P_i\) actually assigns \(\mathsf {raw}^i[k]=\left\{ [\mathbf {x} ^j]^{j,i}, \langle {\mathbf {x} ^i} \rangle ^{i,j} \right\} _{j\in [p]\smallsetminus \{i\}}\), which may also be written as \(\mathsf {raw}^i[k]=[\![{\mathbf {x}}]\!] ^i\).

In the following we explain the main techniques used to implement the instructions of functionality \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \) (we skip the instructions that are straightforward):

Linear operations. From the linearity of the underlying two-party homomorphic commitment functionality it follows that linear combinations over multiparty commitments can be computed locally by every party. We define the notation in the natural way as follows: Given multiparty commitments \([\![{\mathbf {x}}]\!] \) and \([\![{\mathbf {y}}]\!] \) and a constant public vector \(\mathbf {c} \in \mathbb {F} ^m\):

  • Addition. For every party \(P_i\):

    $$\begin{aligned}{}[\![{\mathbf {x}}]\!] ^i + [\![{\mathbf {y}}]\!] ^i= & {} \left\{ [ \mathbf {x} ^j ]^{j,i}, \langle {\mathbf {x} ^i} \rangle ^{i,j} \right\} _{j\in [p]\smallsetminus \{i\}} + \left\{ [ \mathbf {y} ^j ]^{j,i}, \langle {\mathbf {y} ^i} \rangle ^{i,j} \right\} _{j\in [p]\smallsetminus \{i\}} \\= & {} \left\{ [ \mathbf {x} ^j ]^{j,i} + [ \mathbf {y} ^j ]^{j,i}, \langle {\mathbf {x} ^i} \rangle ^{i,j} + \langle {\mathbf {y} ^i} \rangle ^{i,j} \right\} _{j\in [p]\smallsetminus \{i\}}\\= & {} \left\{ [ \mathbf {x} ^j+\mathbf {y} ^j ]^{j,i}, \langle {\mathbf {x} ^i+\mathbf {y} ^i} \rangle ^{i,j} \right\} _{j\in [p]\smallsetminus \{i\}} = [\![{\mathbf {x} +\mathbf {y}}]\!] ^i \end{aligned}$$
  • Constant addition. The parties obtain \([\![{\varvec{\beta } +\mathbf {x}}]\!] \) by having \(P_1\) set \(\mathbf {x} ^1 = \mathbf {x} ^1+\varvec{\beta } \); then party \(P_1\) computes:

    $$ \varvec{\beta } +[\![{\mathbf {x}}]\!] ^i = \varvec{\beta } + \left\{ [ \mathbf {x} ^j ]^{j,i}, \langle {\mathbf {x} ^i} \rangle ^{i,j} \right\} _{j\in [2,p]} = \left\{ [ \mathbf {x} ^j ]^{j,i}, \varvec{\beta } + \langle {\mathbf {x} ^i} \rangle ^{i,j} \right\} _{j\in [2,p]} = [\![{\varvec{\beta } +\mathbf {x}}]\!] ^i $$

    and all other parties \(P_j\) compute:

    $$\begin{aligned} \varvec{\beta } +[\![{\mathbf {x}}]\!] ^j&= \varvec{\beta } + \left\{ [ \mathbf {x} ^i ]^{i,j}, \langle {\mathbf {x} ^j} \rangle ^{j,i} \right\} _{i\in [p]\smallsetminus \{j\}} = \left\{ [\mathbf {x} ^i ]^{i,j}, \langle {\mathbf {x} ^j} \rangle ^{j,i} \right\} _{i\in [2,p]\smallsetminus \{j\}} \cup \left\{ [ \varvec{\beta } + \mathbf {x} ^1 ]^{1,j}, \langle {\mathbf {x} ^j} \rangle ^{j,1} \right\} \\&= [\![{\varvec{\beta } +\mathbf {x}}]\!] ^j \end{aligned}$$
  • Constant multiplication. For every party \(P_i\):

    $$ \varvec{\alpha } *[\![{\mathbf {x}}]\!] ^i = \varvec{\alpha } * \left\{ [ \mathbf {x} ^j ]^{j,i}, \langle {\mathbf {x} ^i} \rangle ^{i,j} \right\} _{j\in [p]\smallsetminus i} = \left\{ \varvec{\alpha } * [ \mathbf {x} ^j ]^{j,i}, \varvec{\alpha } * \langle {\mathbf {x} ^i} \rangle ^{i,j} \right\} _{j\in [p]\smallsetminus i} = [\![{\varvec{\alpha } *\mathbf {x}}]\!] ^i $$

Notice that public addition is carried out by adding the constant \(\varvec{\beta } \) to only one commitment (we arbitrarily chose \(P_1\)’s). This is because the true value committed to in a multiparty commitment is additively shared among the p parties; if \(\varvec{\beta } \) were added to each share, then the value actually committed to would be \(p\cdot \varvec{\beta } \) too large! For public multiplication, on the other hand, we need to multiply the constant \(\varvec{\alpha } \) into each commitment, so that the sum of the shares is multiplied by \(\varvec{\alpha } \).
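To see the share arithmetic concretely, here is a small Python sketch (toy field size, party count and values chosen for illustration, not taken from the paper) of why \(\varvec{\beta }\) goes into exactly one share while \(\varvec{\alpha }\) multiplies all of them:

```python
import random

p = 3          # number of parties (illustrative)
F = 101        # a small prime field (illustrative)

# Additively share a secret x among p parties.
x = 42
shares = [random.randrange(F) for _ in range(p - 1)]
shares.append((x - sum(shares)) % F)
assert sum(shares) % F == x

beta = 7

# Correct: add beta to a single share (here party 1's).
ok = shares.copy()
ok[0] = (ok[0] + beta) % F
assert sum(ok) % F == (x + beta) % F

# Wrong: adding beta to every share shifts the secret by p*beta.
bad = [(s + beta) % F for s in shares]
assert sum(bad) % F == (x + p * beta) % F

# Constant multiplication, in contrast, must touch every share.
alpha = 5
scaled = [(alpha * s) % F for s in shares]
assert sum(scaled) % F == (alpha * x) % F
```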

Commit. The parties produce commitments in batches rather than one at a time. Suppose the parties wish to produce \(\gamma \) commitments; then each party picks \(\gamma +s\) uniformly random messages from \(\mathbb {F} ^m\). Each party commits to each of these \(\gamma +s\) messages toward every other party using a separate instance of the Commit command of \(\mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}} \), and thus separate randomness.

Note that a malicious party might use the two-party commitment scheme to commit to different messages toward different parties, which leads to an inconsistent multiparty commitment. To thwart this, we have the parties execute random linear combination checks as done for batch-opening of commitments in [18]: The parties invoke the coin-tossing protocol to agree on an \(s\times \gamma \) matrix \(\mathbf {R} \) with elements in \(\mathbb {F} \). In the following we denote the element in the qth row and kth column of \(\mathbf {R} \) by \(\mathbf {R} _{q,k}\). Every party computes s random linear combinations of the opening information that it holds toward every other party. Similarly, every party computes s combinations of the commitments that it obtained from every other party. The coefficients of the qth combination are determined by the qth row of \(\mathbf {R} \), and the qth vector from the s “extra” committed messages is added to the combination. That is, let the \(\gamma +s\) messages committed by party \(P_i\) toward \(P_j\) be \(\mathbf {x} _1^{i,j},\ldots ,\mathbf {x} ^{i,j}_{\gamma +s}\); then the qth combination computed by \(P_j\) is \(\left( \sum _{k\in [\gamma ]} \mathbf {R} _{q,k} \cdot [{\mathbf {x} ^{i,j}_k}] \right) + [{\mathbf {x} ^{i,j}_{\gamma +q}}] \) and the combination computed by \(P_i\) is \(\left( \sum _{k\in [\gamma ]} \mathbf {R} _{q,k} \cdot \langle {\mathbf {x} ^{i,j}_k} \rangle \right) + \langle {\mathbf {x} ^{i,j}_{\gamma +q}} \rangle \). Then \(P_i\) opens the result to \(P_j\), who checks that it is correct. If \(P_i\) was honest it committed to the same values toward all parties, so \(\mathbf {x} _k^i=\mathbf {x} _k^{i,j}=\mathbf {x} ^{i,j'}_{k}\) for all \(k\in [\gamma +s]\) and \(j\ne j'\in [p]\smallsetminus \{i\}\). Likewise for the other parties, so if everyone is honest they all obtain the same result from the opening of the combination. Thus, a secure equality check would be correct in this case.
However, if \(P_i\) cheated and committed to different values toward different parties, then this is detected with overwhelming probability, since the parties perform s such combinations.
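The consistency check can be sketched in Python as follows; the field size and the parameters \(\gamma \) and s are illustrative placeholders, and commitments are modeled as plain field elements rather than actual two-party commitments:

```python
import random

F, gamma, s = 2**13 - 1, 5, 4   # illustrative field and parameters

def combinations(msgs, R):
    # qth check value: sum_k R[q][k] * msgs[k] + msgs[gamma + q]  (mod F)
    return [(sum(R[q][k] * msgs[k] for k in range(gamma)) + msgs[gamma + q]) % F
            for q in range(s)]

msgs_to_j  = [random.randrange(F) for _ in range(gamma + s)]
msgs_to_j2 = msgs_to_j.copy()          # honest: same batch toward every party

R = [[random.randrange(F) for _ in range(gamma)] for _ in range(s)]

# Honest committer: the s combinations agree for all receiving parties.
assert combinations(msgs_to_j, R) == combinations(msgs_to_j2, R)

# Cheating committer: one message differs toward one party.
msgs_to_j2[2] = (msgs_to_j2[2] + 1) % F
cheat_caught = combinations(msgs_to_j, R) != combinations(msgs_to_j2, R)
# The cheat escapes only if R[q][2] == 0 for every q: probability |F|^(-s).
assert cheat_caught or all(R[q][2] == 0 for q in range(s))
```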

Input. Each party does a partial opening (see below) of a raw, unused commitment toward the party that is supposed to give input. Based on the opened message the inputting party computes a correction value. That is, if the raw commitment, before issuing the input command, is a shared commitment to the value \(\mathbf {x} \) and the inputting party wishes to input \(\mathbf {y} \), then it computes the value \(\varvec{\epsilon } =\mathbf {y}-\mathbf {x} \) and sends this value to all parties. All parties then add \([\![{\mathbf {x}}]\!] +\varvec{\epsilon } \) to the dictionary \( \mathsf {actual} \) and remove it from the dictionary \(\mathsf {raw}\). Since the party giving input is the only one who knows the value \(\mathbf {x} \), and it is uniformly random, this does not leak anything about \(\mathbf {y} \).
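A plain-values Python sketch of the input step; the shared random mask x stands in for the raw commitment, and the field and party count are illustrative:

```python
import random

F, p = 257, 3   # illustrative field and party count

def share(v):
    # Additively share v among p parties.
    sh = [random.randrange(F) for _ in range(p - 1)]
    sh.append((v - sum(sh)) % F)
    return sh

# Raw commitment: a uniformly random mask x, opened to the input party.
x_shares = share(random.randrange(F))
x = sum(x_shares) % F

# Input party wants to input y; it broadcasts the correction eps = y - x.
y = 123
eps = (y - x) % F

# All parties add the public constant eps to one share (e.g. P1's).
y_shares = x_shares.copy()
y_shares[0] = (y_shares[0] + eps) % F
assert sum(y_shares) % F == y
# eps reveals nothing about y: x is uniform and known only to the input party.
```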

Fig. 9.
figure 9

Protocol \(\varPi _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \)

We prove the following theorem in the full version [20].

Theorem 3.1

Protocol \(\varPi _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \) (of Fig. 9) UC-securely realizes functionality \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \) (of Fig. 8) in the \(\mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}} \), \(\mathcal {F} _{\textsf {CT}} \), and \(\mathcal {F} _{\textsf {EQ}} \)-hybrid model, against a static and malicious adversary corrupting any majority of the parties.

4 Committed Multiparty Computation

4.1 Augmented Commitments

In the malicious, dishonest-majority setting, our protocol, like other protocols, works in the offline-online model. The offline phase consists of constructing sufficiently many multiplication triples which are later used in the online phase to carry out secure multiplications over committed, secret-shared values. To this end, we augment functionality \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \) with the instruction \(\mathbf{Mult}\) that uses the multiparty raw-commitments created by the Commit instruction of Fig. 8 and produces multiplication triples of the form \(([\![{\mathbf {x}}]\!], [\![{\mathbf {y}}]\!], [\![{\mathbf {z}}]\!])\) such that \(\mathbf {x} *\mathbf {y} =\mathbf {z} \). Note that a single multiplication triple actually consists of three multiparty commitments to values from \(\mathbb {F} ^m\) such that \(\mathbf {z} \) is the componentwise multiplication of \(\mathbf {x} \) and \(\mathbf {y} \), that is, \(\mathbf {z} _q=\mathbf {x} _q\cdot \mathbf {y} _q\) for every \(q\in [m]\). Hence, this gives the ability to securely evaluate up to m instances of a circuit at the same cost as evaluating a single instance (i.e. in case the parties want to evaluate some circuit m times but with different inputs each time) where all m instances are evaluated simultaneously. If the parties wish to evaluate only \(m'<m\) instances of the circuit, say \(m'=1\), they do so by using only the values stored in the first component of the vectors, while ignoring the rest of the components. However, using a multiplication triple consumes all components of \(\mathbf {x},\mathbf {y} \) and \(\mathbf {z} \) even if the parties wish to use only their first component. To avoid such a loss we augment \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \) with the instruction ReOrg.
The ReOrg instruction preprocesses reorganization pairs which are used to compute a linear operator over a multiparty commitment. For example, this enables the parties to “copy” the first component to another, new multiparty commitment, such that all components of the new multiparty commitment are equal to the first component of the old one. For instance, the linear operator \(\phi \in \mathbb {F} ^{m\times m}\) whose first column is all 1 and all other columns are all 0 transforms the vector \(\mathbf {x} \) to \(\mathbf {x} '=\mathbf {x} _1,\ldots ,\mathbf {x} _1\) (m times). Applying \(\phi \) to \(\mathbf {y} \) and \(\mathbf {z} \) as well results in a new multiplication triple \((\mathbf {x} ',\mathbf {y} ',\mathbf {z} ')\) where only the first component of \((\mathbf {x},\mathbf {y},\mathbf {z})\) got used (rather than all their m components). We note that the construction of reorganization pairs is done in a batch for each function \(\phi \), resulting in the additive destruction of s extra raw commitments (i.e. an additive overhead). In the ReOrg command, described in Fig. 10, the linear operator \(\phi \) is applied to L raw commitments in a batch manner. The inputs to \(\phi \) are the messages stored by the functionality under identifiers from the set X and the outputs override the messages stored by the functionality under identifiers from the set Y. The messages stored under identifiers from the set R are destroyed (this reflects the additive overhead of that command).
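A sketch of the copy-first-component operator \(\phi \) described above (illustrative values; NumPy is used only for the matrix-vector products). The last assertion checks that applying \(\phi \) to all three vectors of a used triple again yields a valid componentwise triple:

```python
import numpy as np

m = 4
# phi: first column all ones, rest zero -> maps x to (x_1, x_1, ..., x_1).
phi = np.zeros((m, m), dtype=np.int64)
phi[:, 0] = 1

x = np.array([5, 7, 9, 11])
assert (phi @ x).tolist() == [5, 5, 5, 5]

# For a triple (x, y, z) with z = x * y componentwise, the images
# (phi x, phi y, phi z) form a valid triple in every component:
y = np.array([2, 3, 4, 5])
z = x * y
assert ((phi @ x) * (phi @ y)).tolist() == (phi @ z).tolist()
```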

Adding instructions Mult and ReOrg to the \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \) functionality, we get the augmented functionality \(\mathcal {F} _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) formally presented in Fig. 10.

Fig. 10.
figure 10

Ideal functionality \(\mathcal {F} _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \)

Realizing \(\mathcal {F} _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \). The protocol \(\varPi _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) is formally presented in Figs. 12 and 13. In the following we describe the techniques used in \(\varPi _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) and present the analysis that determines the number of multiplication triples that should be constructed in one batch for the protocol to be secure. Specifically, in Sect. 4.2 we describe how to implement the \(\mathbf{Mult}\) command and in Sect. 4.3 we describe how to implement the ReOrg command.

4.2 Generating Multiplication Triples

Secure multiplication in our online phase, as in previous works in the field, is performed using multiplication triples (also known as Beaver triples). In our work a multiplication triple is of the form \(([\![{\mathbf {x}}]\!],[\![{\mathbf {y}}]\!],[\![{\mathbf {z}}]\!])\) where \([\![{\mathbf {x}}]\!],[\![{\mathbf {y}}]\!] \) and \([\![{\mathbf {z}}]\!] \) are multiparty commitments to messages \(\mathbf {x},\mathbf {y} \) and \(\mathbf {z} \) respectively, as defined in Sect. 3.3, and \(\mathbf {z} =\mathbf {x} *\mathbf {y} \). The construction of triples is done in a batch and consists of four parts, briefly described below and further explained and analyzed afterward:

  1.

    Construction. Using the arithmetic OT procedure formalized in Sect. 2 the parties first construct multiplication triples that may be “malformed” and “leaky” in case of a malicious adversary. Here malformed means that they are incorrect, i.e. \(\mathbf {x} *\mathbf {y} \ne \mathbf {z} \) and “leaky” means that the adversary has tried to guess the value of the share of an honest party (the term is further explained below).

  2.

    Cut-and-Choose. The parties select \(\tau _1\) triples at random which they check for correctness. If any of these triples are malformed then they abort. Otherwise, when mapping the remaining triples into buckets, with overwhelming probability all buckets will contain at least one correct triple.

  3.

    Sacrificing. The remaining triples (from the cut-and-choose) are mapped to buckets, \(\tau _1\) triples in each bucket, such that at least one of the triples is correct. Each bucket is then tested for correctness; the check outputs only a single triple while the other \(\tau _1-1\) are discarded. This step guarantees that either the output triple is correct or a malformed triple is detected, in which case the protocol aborts.

  4.

    Combining. As some of the triples may be “leaky”, the adversary can carry out a selective-failure attack, that is, probe whether its guess was correct or not. If the guess is affected by the input of an honest party then the adversary learns that input. Thus, as the name suggests, the goal of this step is to produce a non-leaky triple by combining \(\tau _2\) triples, which are the result of the sacrificing step (and thus are guaranteed to be correct), where at least one of the \(\tau _2\) is non-leaky. As we will see later, this condition is satisfied with overwhelming probability.

Construction. The triples are generated in a batch, that is, sufficiently many triples are generated at once. However, the construction of each triple is independent of the others. Thus, we proceed by describing how to generate a single triple. The parties select three raw-commitments, denoted \([\![{\mathbf {x}}]\!], [\![{\mathbf {y}}]\!], [\![{\mathbf {z} '}]\!] \), that were generated by \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \). The goal of this step is to change \([\![{\mathbf {z} '}]\!] \) to \([\![{\mathbf {z}}]\!] \) such that \([\![{\mathbf {z}}]\!] = [\![{\mathbf {x} *\mathbf {y}}]\!] \).

Recall that for a message \(\mathbf {x} \) that is committed to by all parties, each party \(P_i\) knows \(\mathbf {x} ^i\) such that \(\mathbf {x} = \sum _{i\in [p]}\mathbf {x} ^i\). Thus, the product \(\mathbf {x} *\mathbf {y} \) equals \(\left( \sum _{i\in [p]}\mathbf {x} ^i \right) * \left( \sum _{j\in [p]}\mathbf {y} ^j\right) = \sum _{i\in [p]} \mathbf {x} ^i*(\sum _{j\in [p]}\mathbf {y} ^j)\). In order to have each party \(P_i\) hold a value \(\mathbf {z} ^i\) such that \(\sum _{i\in [p]}\mathbf {z} ^i=\mathbf {x} *\mathbf {y} \), we let party \(P_i\) use the arithmetic OT procedure (as described in Sect. 2) to obtain a share of the multiplication \(\mathbf {x} ^i*\mathbf {y} ^j\) for every \(j\in [p]\), where \(P_i\) inputs \(\mathbf {x} ^i\) and \(P_j\) inputs \(\mathbf {y} ^j\). After \(P_i\) has multiplied its share \(\mathbf {x} ^i\) with all other parties’ shares \(\mathbf {y} ^j\), the sum of the resulting shares is \(\mathbf {x} ^i*(\sum _{j\in [p]}\mathbf {y} ^j)\) (assuming honest behavior). If all parties do the same, then every party ends up holding a share of \(\mathbf {x} *\mathbf {y} \) as required. Remember that we want \(P_i\) to hold a share of \([\![{\mathbf {x} *\mathbf {y}}]\!] \) and not just a share of \(\mathbf {x} *\mathbf {y} \) (i.e. we want all shares to be committed). To this end, every party broadcasts the difference \(\mathbf {t} \) between the new share and the old share, that is, \(P_i\) broadcasts \(\mathbf {t} ^i=\mathbf {z} ^i-{\mathbf {z} '^i}\); then the parties perform a constant addition to the old commitment, that is, they compute \([\![{\mathbf {z}}]\!] =[\![{\mathbf {z} '}]\!] +\sum _{i\in [p]} \mathbf {t} ^i\).
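The share-of-product computation can be sketched as follows; the arithmetic OT is modeled in the clear by the mask r, the field and party count are illustrative, and scalars stand in for the vectors of \(\mathbb {F} ^m\):

```python
import random

F, p = 101, 3   # illustrative field and party count

xs = [random.randrange(F) for _ in range(p)]   # x^i held by P_i
ys = [random.randrange(F) for _ in range(p)]   # y^j held by P_j
x, y = sum(xs) % F, sum(ys) % F

# Arithmetic OT between P_i (input x^i) and P_j (input y^j and mask r^j):
# P_i learns x^i*y^j + r^j, P_j keeps -r^j; together they share x^i*y^j.
zs = [0] * p
for i in range(p):
    zs[i] = (zs[i] + xs[i] * ys[i]) % F         # diagonal term, local
    for j in range(p):
        if j == i:
            continue
        r = random.randrange(F)                 # P_j's mask
        zs[i] = (zs[i] + xs[i] * ys[j] + r) % F # P_i's OT output
        zs[j] = (zs[j] - r) % F                 # P_j's share

assert sum(zs) % F == (x * y) % F               # shares of x*y
```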

Discussion. As described above, party \(P_i\) (for \(i\in [p]\)) participates in \(p-1\) instantiations of the arithmetic OT functionality with every other party \(P_j\) (for \(j\ne i\)). The arithmetic OT functionality is of the form \((\mathbf {x} ^i,(\mathbf {y} ^j, \mathbf {r} ^j)) \mapsto (\mathbf {x} ^i * \mathbf {y} ^j + \mathbf {r} ^j, \bot )\), that is, \(P_i\) inputs its share \(\mathbf {x} ^i\) of \(\mathbf {x} \), party \(P_j\) inputs its share \(\mathbf {y} ^j\) of \(\mathbf {y} \) and a random value \(\mathbf {r} ^j\). The functionality outputs \(\mathbf {x} ^i * \mathbf {y} ^j + \mathbf {r} ^j\) to \(P_i\) and nothing to \(P_j\). Then, to get a sharing of \(\mathbf {x} ^i * \mathbf {y} ^j\) we instruct \(P_i\) to store \(\mathbf {x} ^i * \mathbf {y} ^j + \mathbf {r} ^j\) and \(P_j\) to store \(-\mathbf {r} ^j\) (see Sect. 2). Even if this arithmetic OT subprotocol is maliciously secure, it will only give semi-honest security in our setting when composed with the rest of the scheme. Specifically, there are two possible attacks that might be carried out by a malicious adversary:

  1.

    Party \(P_j\) may input \(\tilde{\mathbf {y}}^j\ne \mathbf {y} ^j\), with \(\mathbf {e} =\tilde{\mathbf {y}}^j-\mathbf {y} ^j\), in the instantiation of the arithmetic OT with every other \(P_i\), where \(\mathbf {y} ^j\) is the value it is committed to. This results in the parties obtaining committed shares of the triple \(([\![{\mathbf {x}}]\!], [\![{\mathbf {y}}]\!], [\![{\mathbf {x} *(\mathbf {y} +\mathbf {e})}]\!])\). We call such a triple a “malformed” triple.

  2.

    In the arithmetic OT procedure party \(P_j\) may influence the output of \(P_i\) such that \(P_i\) obtains \(\mathbf {x} ^i * \mathbf {y} ^j + \mathbf {r} ^j\) only if the kth component of \(\mathbf {x} ^i\) is equal to some value “guessed” by \(P_j\); otherwise \(P_i\) obtains some garbage value in \(\mathbb {F} ^m\). A similar attack can be carried out by \(P_i\) on \(\mathbf {y} ^j\) when computing over a “small” field (see the description of the malicious behavior in Sect. 2). In both cases, the parties obtain committed shares of the triple \(([\![{\mathbf {x}}]\!], [\![{\mathbf {y}}]\!], [\![{\mathbf {x} *\mathbf {y}}]\!])\) only if the malicious party made a correct guess on an honest party’s share, and of an incorrect triple otherwise. Thus, when this triple is used later on, the malicious party learns whether it guessed correctly depending on whether the honest parties abort; that is, the triple is vulnerable to a “selective attack”. We call such a triple “leaky”, since it might leak private bits from the input of an honest party.

We take three countermeasures (described in the next items) to produce correct and non-leaky triples:

  1.

    In the Cut-and-Choose step we verify that a few (\(\tau _1\)) randomly selected triples have been constructed correctly. This is done by having each party open its committed shares associated with these triples and all parties verifying that the triples have been constructed according to the protocol. This step is required to ensure that not all triples are malformed, as a prerequisite for the sacrificing step (below) in which the triples are mapped to buckets. When working over \(\mathbb {F} =\text {GF}(2)\), this step is strictly needed to eliminate the case that all triples are malformed. For other fields, this step reduces the number of triples that need to be constructed in the batch.

  2.

    In the Sacrificing step we make sure that a triple is correct (i.e. not malformed) by “sacrificing” \(\tau _1-1\) other triples which are used as “one-time pads” of the correct triple. As we treat a batch of triples at once, the probability that an incorrect triple passes this step undetected is negligible in s (the analysis is presented below). Having the parties commit (in the construction step) to \(\tau _1\cdot T\) triples, by the end of this step there will be T correct triples.

  3.

    In the Combining step we partition the constructed (correct but possibly leaky) triples into buckets of \(\tau _2\) triples each, and show that for a sufficiently large number of triples output by the sacrificing step, the probability that there exists a bucket in which all triples are leaky in a specific component is negligible in s. We show how to combine the \(\tau _2\) triples in a bucket and produce a new triple which is non-leaky. This is done twice, first to remove leakage on the \(\mathbf {x} \) component and second to remove leakage on the \(\mathbf {y} \) component.

Cut-and-Choose. The parties use \(\mathcal {F} _{\textsf {CT}} \) to randomly pick \(\tau _1\) triples to check. Note that \(\tau _1\) is the bucket size used in Sacrificing below and in practice can be as low as 3 or 4. It was shown in [21] that when partitioning the triples into buckets of size \(\tau _1\) (where many of them may be malformed), then by sampling and checking only \(\tau _1\) triples, the probability that there exists a bucket full of malformed triples is negligible. Formally:

Corollary 4.1

(Corollary 6.4 in [21]). Let \(N=\tau _1+\tau _1(\tau _2)^2\cdot T\) be the number of constructed triples where \(s\le \log _2 \left( \frac{(N\cdot \tau _1+\tau _1)!}{N\cdot \tau _1!\cdot (N\cdot \tau _1)!} \right) \), then, by opening \(\tau _1\) triples it holds that every bucket contains at least one correct triple with overwhelming probability.

Hence, it is sufficient to open (and discard) \(\tau _1\) triples out of the triples from the Construction step and hand the remaining to the Sacrificing step below.

Sacrificing. In the following we describe how to produce \((\tau _2)^2 \cdot T\) correct triples out of the \(\tau _1\cdot (\tau _2)^2\cdot T\) that remained from the cut-and-choose step, and analyze what T and \(\tau _1\) should be in order for all produced \((\tau _2)^2\cdot T\) triples to be correct with overwhelming probability. The \(\tau _1\cdot (\tau _2)^2\cdot T\) triples are uniformly assigned to buckets where each bucket contains \(\tau _1\) triples, denoted \(\{ t_k\}_{k\in [\tau _1]}\). For simplicity, in the following we assume that \(\tau _1=3\). For every bucket, the parties apply the procedure \(\mathsf {CorrectnessTest}\) (see Fig. 11) to triples \(t_1\) and \(t_2\). If the procedure returns successfully (i.e. the parties do not abort) they run the procedure again, this time with triples \(t_1\) and \(t_3\). Finally, if the procedure returns successfully from the second invocation as well, then the parties consider \(t_1\) a correct triple; otherwise they abort the protocol. We note that this procedure is similar to the one used in [14] and other works.

Fig. 11.
figure 11

Procedure \(\mathsf {CorrectnessTest}(t_1,t_2)\)
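Fig. 11 is not reproduced in this excerpt; the following Python sketch shows a standard Beaver-style sacrifice with a public random challenge r, from which the concrete \(\mathsf {CorrectnessTest}\) may differ in its details. Values appear in the clear here, whereas the protocol keeps everything inside commitments and only opens \(\rho \) and \(\sigma \):

```python
import random

F = 2**31 - 1   # illustrative prime field

def sacrifice(t1, t2, r):
    # Consume t2 to vouch for t1: open rho and sigma, then check that a
    # public linear combination of the commitments opens to zero.
    (x1, y1, z1), (x2, y2, z2) = t1, t2
    rho   = (r * x1 - x2) % F      # opened
    sigma = (y1 - y2) % F          # opened
    return (r * z1 - z2 - sigma * x2 - rho * y2 - rho * sigma) % F == 0

def triple(delta=0):
    x, y = random.randrange(F), random.randrange(F)
    return (x, y, (x * y + delta) % F)   # delta != 0 -> malformed

r = random.randrange(1, F)                   # public nonzero challenge
assert sacrifice(triple(), triple(), r)      # two correct triples pass
assert not sacrifice(triple(delta=1), triple(), r)   # malformed t1 caught
```

With two correct triples the check value is identically zero; with a malformed \(t_1\) it equals \(r\cdot \delta \ne 0\), so the cheat is always caught for a nonzero challenge.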

Security. The correctness and security are explained in [14]. However, for completeness we prove the following lemma in the full version [20], which states that after the sacrificing step all produced triples are correct with overwhelming probability:

Lemma 4.2

When \(2^{-s} \le \frac{(|\mathbb {F} |-1)^{1-\tau _1}\cdot (\tau _2)^2\cdot T\cdot (\tau _1\cdot (\tau _2)^2\cdot T)!\cdot \tau _1!}{(\tau _1\cdot (\tau _2)^2\cdot T+\tau _1)!}\) all the \((\tau _2)^2 \cdot T\) triples that are produced by the sacrificing step are correct except with probability at most \(2^{-s}\).

Combining. The goal of this step is to produce T non-leaky triples out of the \((\tau _2)^2\cdot T\) triples remaining from the sacrificing step above. We do this in two sub-steps: first we remove the leakage (with regard to the arithmetic OT) on the sender’s side and then the leakage on the receiver’s side. In each of the sub-steps we map the triples to buckets of size \(\tau _2\) and produce a single non-leaky triple from each bucket. In the following we first show how to produce one triple from each bucket given the a priori knowledge that at least one of the triples in the bucket is non-leaky (but we do not know which one it is), and later we show how to obtain such buckets. Denote the set of \(\tau _2\) triples by \(\{([\![{\mathbf {x} _k}]\!], [\![{\mathbf {y} _k}]\!], [\![{\mathbf {z} _k}]\!])\}_{k\in [\tau _2]}\). We produce the triple \(([\![{\mathbf {x} '}]\!], [\![{\mathbf {y} '}]\!], [\![{\mathbf {z} '}]\!])\) out of that set in the following way: The parties compute

$$ [\![{\mathbf {x} '}]\!] =[\![{\textstyle \sum \nolimits _{k\in [\tau _2]}\mathbf {x} _k}]\!] ~~\text {and}~~ [\![{\mathbf {y} '}]\!] =[\![{\mathbf {y} _1}]\!] ~~\text {and}~~ [\![{\mathbf {z} '}]\!] =[\![{\left( \textstyle \sum \nolimits _{k\in [\tau _2]}\mathbf {x} _k \right) *\mathbf {y} _1}]\!] $$

which constitute the triple \(([\![{\mathbf {x} '}]\!], [\![{\mathbf {y} '}]\!], [\![{\mathbf {z} '}]\!])\). It is easy to see that \([\![{\mathbf {x} '}]\!] \) can be computed locally since it requires additions and constant multiplications only. Furthermore, \(\mathbf {x} '\) is completely hidden since at least one of \(\mathbf {x} _1,\ldots ,\mathbf {x} _{\tau _2}\) was not leaked (and it is guaranteed by the construction step that it is chosen uniformly at random from \(\mathbb {F} ^m\)). However, notice that \([\![{\mathbf {z} '}]\!] \) cannot be computed locally, since this would require multiplying the two multiparty commitments \([\![{\left( \sum _{k\in [\tau _2]} \mathbf {x} _k \right) }]\!] \) and \([\![{\mathbf {y} _1}]\!] \). Thus, to obtain \([\![{\mathbf {z} '}]\!] \) the parties first compute \([\![{\varvec{\epsilon } _k}]\!] =[\![{\mathbf {y} _1-\mathbf {y} _k}]\!] \) and open \(\varvec{\epsilon } _k\) for every \(k=2,\ldots ,\tau _2\). They then compute \([\![{\mathbf {z} '}]\!] =[\![{\mathbf {z} _1 + \sum _{k=2}^{\tau _2} \left( \varvec{\epsilon } _k*\mathbf {x} _k + \mathbf {z} _k \right)}]\!] \) by a local computation only.
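A plain-values Python sketch of combining one bucket (illustrative field and \(\tau _2\); scalars stand in for vectors). The assertion checks that \((\mathbf {x} ',\mathbf {y} ',\mathbf {z} ')\) is again a multiplication triple:

```python
import random

F, tau2 = 101, 3   # illustrative field and bucket size

# tau2 correct triples (x_k, y_k, z_k) with z_k = x_k * y_k.
trips = []
for _ in range(tau2):
    xk, yk = random.randrange(F), random.randrange(F)
    trips.append((xk, yk, (xk * yk) % F))

x_new = sum(xk for xk, _, _ in trips) % F   # x' = sum of x_k, local
y_new = trips[0][1]                          # y' = y_1

# eps_k = y_1 - y_k is opened; z' is then computed locally.
z_new = trips[0][2]
for xk, yk, zk in trips[1:]:
    eps = (y_new - yk) % F
    z_new = (z_new + eps * xk + zk) % F

assert (x_new * y_new) % F == z_new          # (x', y', z') is a valid triple
```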

We prove the following lemma in the full version [20]:

Lemma 4.3

Having a batch of at least \(\root \tau _2-1 \of {\frac{(s \cdot e)^{\tau _2} \cdot 2^s}{\tau _2}}\) triples as input to a combining step, every bucket of \(\tau _2\) triples contains at least one non-leaky triple with overwhelming probability in s in the component that has been combined on.

For instance, when \(\mathbb {F} =\text {GF}(2)\), having \(s=40\), \(\tau _1=3\), \(\tau _2=4\), it is required to construct \(T\approx 8.4 \cdot 10^5\) correct and non-leaky triples in a batch. Choosing \(\tau _2=3\) instead means that \(\approx 2.29 \cdot 10^8\) triples are required.
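These batch sizes can be reproduced from the bound of Lemma 4.3, assuming the binding constraint is the second combining run, which consumes \(\tau _2\cdot T\) triples (the first run, on \((\tau _2)^2\cdot T\) triples, is then satisfied automatically):

```python
from math import e

def min_batch(s, tau2):
    # Lemma 4.3: a combining step on buckets of size tau2 needs at least
    # ((s*e)**tau2 * 2**s / tau2) ** (1/(tau2-1)) input triples.
    return ((s * e) ** tau2 * 2 ** s / tau2) ** (1 / (tau2 - 1))

s = 40
# Combining runs twice (x- then y-component); the second run takes
# tau2*T triples as input, so T is roughly min_batch / tau2.
for tau2 in (4, 3):
    T = min_batch(s, tau2) / tau2
    print(f"tau2={tau2}: T ~ {T:.3g}")
```

For \(\tau _2=4\) this gives \(T\approx 8.4\cdot 10^5\) and for \(\tau _2=3\) it gives \(T\approx 2.3\cdot 10^8\), matching the figures quoted above.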

Working over Non-binary Fields. When \(\mathbb {F} \) is a field of odd characteristic there is a gap between the maximal field element and the maximal value that fits in the same number of bits. For instance, when working over \(\mathbb {F} _{11}\) the maximal element is \(10_{10}=1010_2\) while the maximal value that fits in 4 bits is \(15_{10}=1111_2\), i.e. there is a gap of 5 values. That means that an adversary could input a value that is not in the field, which might compromise security.

We observe that the only place where this type of attack matters is in the \(\mathsf {ArithmeticOT}\) procedure, since in all other steps the values that the adversary inputs percolate to the underlying homomorphic commitment scheme. In the following we analyze this case: To multiply \(x^i\) and \(y^j\) with \(x^i,y^j\in \mathbb {F} _\mathcal {P}\) and \(\mathcal {P}\) prime, the parties \(P_i\) and \(P_j\) participate in a protocol of \(\lceil \log \mathcal {P} \rceil \) steps. In the qth step, where \(q\in [\lceil \log \mathcal {P} \rceil ]\), party \(P_i\) inputs the bit \(x^i_q\) and \(P_j\) inputs \(s^0_q=r_q\) and \(s^1_q=r_q+y^j\) to the \(\mathcal {F} _{\textsf {OT}} \) functionality. The functionality outputs \(s^{x^i_q}_q\) to \(P_i\), who updates the running sum of the result. At the end of this process the parties hold shares of the product \(z=x^i\cdot y^j\).
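A Python sketch of this bit-decomposed multiplication (toy prime, values in the clear). Note that the sketch makes the per-round weight \(2^q\) on the sender's message explicit; the text folds these constants into the protocol, cf. the remark on appropriate constants for extension fields at the end of this section:

```python
import random

P = 11                          # toy prime field F_P
L = P.bit_length()              # ceil(log2(P)) OT rounds

def arithmetic_ot(x_i, y_j):
    # Round q: sender P_j offers s0 = r_q and s1 = r_q + 2^q * y_j;
    # receiver P_i selects with the qth bit of x_i.
    share_i, share_j = 0, 0
    for q in range(L):
        r = random.randrange(P)
        s0, s1 = r, (r + (2 ** q) * y_j) % P
        chosen = s1 if (x_i >> q) & 1 else s0
        share_i = (share_i + chosen) % P
        share_j = (share_j - r) % P
    return share_i, share_j

x_i, y_j = 7, 9
a, b = arithmetic_ot(x_i, y_j)
assert (a + b) % P == (x_i * y_j) % P   # shares of x^i * y^j
```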

We first examine the cases in which either \(s^0_q\) or \(s^1_q\) is not in the prime field, i.e. it belongs to the gap \(\mathsf {gap}=[2^{\lceil \log \mathcal {P} \rceil }]\smallsetminus \mathbb {F} _\mathcal {P}\). We first note that if both of them are in \(\mathsf {gap}\) then this is certainly detected by \(P_i\) (since \(P_i\) receives one of them as the \(\mathcal {F} _{\textsf {OT}} \) output). If only one of \(s^0_q,s^1_q\) is in \(\mathsf {gap}\) then one of two cases occurs:

  1.

    If the value that \(P_i\) received from \(\mathcal {F} _{\textsf {OT}} \) is in \(\mathsf {gap}\) then it is detected immediately as before (since \(P_i\) clearly sees that the value is not in \(\mathbb {F} _\mathcal {P}\)) and \(P_i\) can abort. Since this is the preprocessing phase, it is independent of any secret input.

  2.

    If the value that \(P_i\) received from \(\mathcal {F} _{\textsf {OT}} \) is in \(\mathbb {F} _\mathcal {P}\) but the other value is not, then it is guaranteed that the value \(P_i\) obtains is a correct share. That the dishonest \(P_j\) obtains a share in the gap is actually the same case as if \(P_j\) adds an incorrect value to its sum such that it lands in the gap. This leads to two cases:

    (a)

      If the incorrect value is \(s^0_q\ne r_q\) then this is equivalent to adding \(s^0_q \bmod \mathcal {P}\), which leads to an incorrect share of z. This case is detected in the sacrificing step.

    (b)

      If the incorrect value is \(s^1_q\ne r_q+y^j\) then this is equivalent to adding \(s^1_q \bmod \mathcal {P}\). As above, this leads to an incorrect share of z, which is detected in the sacrificing step.

The last case is when either \(r_q\) or \(y^j\) (or both) is not in \(\mathbb {F} _\mathcal {P}\) but the sum \(s^1_q\) is. This is equivalent to choosing \(y^j\in \mathbb {F} _\mathcal {P}\) and \(r_q'=s^1_q-y^j \bmod \mathcal {P}\), so that the value \(P_j\) adds to its sum is incorrect (since it is different from \(r_q'\)), and thus this is detected in the sacrificing step as before.

Similarly, consider a corrupted receiver who sets the bits of its input to represent an element in \(\mathsf {gap}\). We observe that this is equivalent to a receiver who inputs an incorrect value (a value it is not committed to) for the following reason: The adversary knows nothing about the honest sender’s share \(y^j\). Let the value that \(P_i\) inputs be \(\tilde{x}^i\); the \(\mathsf {ArithmeticOT}\) procedure then outputs shares of \(\tilde{x}^i y^j \bmod \mathcal {P}=(\tilde{x}^i \bmod \mathcal {P})(y^j \bmod \mathcal {P})\). Now, if \(\tilde{x}^i\bmod \mathcal {P}=0\) (i.e. \(\tilde{x}^i=\mathcal {P}\)) then this is detected by the sacrificing procedure (as \(\tilde{x}^i\) is not a field element and the resulting triple is incorrect). Otherwise, if \(\tilde{x}^i\bmod \mathcal {P}\ne 0\) then the result \(\tilde{x}^i y^j \bmod \mathcal {P}\) is a random element in the field \(\mathbb {F} _\mathcal {P}\) and the same analysis as in the proof of Lemma 4.2 applies.

Finally we note that the math still works out in case we use an extension field rather than a plain prime field. Using the \(\mathsf {ArithmeticOT}\) procedure we can still multiply one bit at a time: the parties simply multiply by the appropriate constants in the extension field (performing any necessary polynomial reduction) instead of by a power of two.

Fig. 12.
figure 12

Protocol \(\varPi _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) - Part 1

Fig. 13.
figure 13

Protocol \(\varPi _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) - Part 2

We prove the following theorem in the full version [20].

Theorem 4.4

The method Mult in \(\varPi _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) (Fig. 13) UC-securely implements the method \(\mathbf{Mult }\) in functionality \(\mathcal {F} _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) (Fig. 10) in the \(\mathcal {F} _{\textsf {OT}} \)-, \(\mathcal {F} _{\textsf {EQ}} \)- and \(\mathcal {F} _{\textsf {CT}} \)-hybrid model against a static and malicious adversary corrupting a majority of the parties.

4.3 Reorganization of Components of a Commitment

The parties might want to move elements of \(\mathbb {F} \) around or duplicate elements of \(\mathbb {F} \) within a message. In general we might want to apply a linear function \(\phi \) to a vector in \(\mathbb {F} ^m\), resulting in another vector in \(\mathbb {F} ^m\). To do so, the parties need to preprocess pairs of the form \(([\![{\mathbf {x}}]\!],[\![{\phi (\mathbf {x})}]\!])\) where \(\mathbf {x} \) is random. This is done by first taking a pair of random commitments \(([\![{\mathbf {x}}]\!],[\![{\mathbf {y}}]\!])\) (as output by the Commit instruction of \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \)); then party \(P_i\) broadcasts \(\varvec{\epsilon } ^i=\phi (\mathbf {x} ^i)-\mathbf {y} ^{i}\) (i.e. it first applies \(\phi \) to its own share). Note that from the linearity of \(\phi \) it follows that \(\sum _{i\in [p]} \phi (\mathbf {x} ^i) = \phi (\sum _{i\in [p]} \mathbf {x} ^i)= \phi (\mathbf {x})\), thus \(\sum _{i\in [p]} \varvec{\epsilon } ^i= \sum _{i\in [p]} \left( \phi (\mathbf {x} ^i)-\mathbf {y} ^{i} \right) = \phi (\mathbf {x})-\mathbf {y} \). Then the parties compute \([\![{\mathbf {y} '}]\!] = [\![{\mathbf {y}}]\!] + \sum _{i\in [p]} \varvec{\epsilon } ^i = [\![{\mathbf {y}}]\!] +\phi (\mathbf {x})-\mathbf {y} = [\![{\phi (\mathbf {x})}]\!] \). For security reasons this is done simultaneously for a batch of \(\nu +s\) pairs. Finally, the parties perform s random linear combination tests over the batch by producing a uniformly random matrix \(\mathbf {R} \in \mathbb {F} ^{s\times \nu }\) (using \(\mathcal {F} _{\textsf {CT}} \)). Let \(\mathbf {R} _{q,k}\) be the element in the qth row and kth column of \(\mathbf {R} \). To perform the test, the parties divide the \(\nu +s\) pairs into two sets A, B of \(\nu \) and s pairs respectively. For each pair \(\left( [\![{\mathbf {z} _{q}}]\!],[\![{\mathbf {z} _{q'}}]\!] \right) \) in B, for \(q\in [s]\), they compute and open

$$ [\![{\mathbf {s} _q}]\!] = [\![{\mathbf {z} _{q}}]\!] + \sum _{k \in [\nu ]} \mathbf {R} _{q,k}\cdot [\![{\mathbf {x} _k}]\!] \quad \text {and} \quad [\![{\bar{\mathbf {s}}_q}]\!] = [\![{\mathbf {z} _{q'}}]\!] + \sum _{k \in [\nu ]} \mathbf {R} _{q,k}\cdot [\![{\mathbf {y} _{k}}]\!] $$

Each party now verifies that \(\phi \left( \mathbf {s} _q\right) = \bar{\mathbf {s}}_q\). If this is so, they accept. Otherwise they abort.
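The adjustment step above can be illustrated with a small Python sketch. Commitments are modeled here simply as additive sharings over a toy prime field (the real protocol of course uses the UC commitments of \(\mathcal {F} _{\textsf {HCOM}\hbox {-}\mathbb {F} ^{m}} \)); the field size, number of parties and the example map \(\phi \) are arbitrary choices for illustration:

```python
import random

P_MOD = 2**61 - 1   # toy prime field; the paper's F can be any finite field
NUM_PARTIES = 3
M = 4               # message length m

def share(vec):
    """Additively share a vector among NUM_PARTIES (stand-in for [[x]])."""
    shares = [[random.randrange(P_MOD) for _ in vec] for _ in range(NUM_PARTIES - 1)]
    last = [(v - sum(s[j] for s in shares)) % P_MOD for j, v in enumerate(vec)]
    return shares + [last]

def reconstruct(shares):
    return [sum(s[j] for s in shares) % P_MOD for j in range(M)]

def phi(vec):
    """Example linear reorganization: duplicate the first element, drop the last."""
    return [vec[0], vec[0], vec[1], vec[2]]

# Preprocess one pair ([[x]], [[phi(x)]]) from two random commitments
x = [random.randrange(P_MOD) for _ in range(M)]
y = [random.randrange(P_MOD) for _ in range(M)]
x_sh, y_sh = share(x), share(y)

# Each party broadcasts eps^i = phi(x^i) - y^i
eps = [[(phi(x_sh[i])[j] - y_sh[i][j]) % P_MOD for j in range(M)]
       for i in range(NUM_PARTIES)]
eps_sum = [sum(e[j] for e in eps) % P_MOD for j in range(M)]

# [[y']] = [[y]] + sum_i eps^i : one party adds the public constant to its share
y2_sh = [list(s) for s in y_sh]
y2_sh[0] = [(y2_sh[0][j] + eps_sum[j]) % P_MOD for j in range(M)]

assert reconstruct(y2_sh) == phi(x)   # [[y']] now commits to phi(x)
```

The batched test with the matrix \(\mathbf {R} \) then amounts to checking \(\phi \) on s random linear combinations of such pairs.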

Based on this we state the following theorem, which we prove in the full version [20].

Theorem 4.5

The method ReOrg in \(\varPi _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) of Fig. 12 UC-securely implements the method \(\mathbf{ReOrg }\) in functionality \(\mathcal {F} _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) of Fig. 10 in the \(\mathcal {F} _{\textsf {OT}} \)-, \(\mathcal {F} _{\textsf {EQ}} \)- and \(\mathcal {F} _{\textsf {CT}} \)-hybrid model against a static and malicious adversary corrupting a majority of the parties.

5 Protocol for Multiparty Computation

In Fig. 14 we show how to realize a fully fledged arithmetic MPC protocol secure against a static and malicious adversary, with the possibility of corrupting a majority of the parties. This protocol is very similar to the one used in MiniMAC [16] and thus we will not dwell on its details.

Fig. 14. Protocol UC-realizing \(\mathcal {F} _{\textsf {MPC}\hbox {-}\mathbb {F} ^{m}} \) in the \(\mathcal {F} _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) model.

We prove the following theorem in the full version [20]:

Theorem 5.1

The protocol in Fig. 14 UC-securely implements the functionality \(\mathcal {F} _{\textsf {MPC}\hbox {-}\mathbb {F} ^{m}} \) of Fig. 10 in the \(\mathcal {F} _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \)-hybrid model against a static and malicious adversary corrupting a majority of the parties.

6 Efficiency

Practical Optimizations. Several significant optimizations can be applied to our protocol. We chose to describe them here, rather than earlier, for ease of presentation. In the following we present each of the optimizations and sketch its security.

Using less storage. As we mentioned before, the two-party homomorphic commitment scheme of [18] can be used as an implementation of functionality \(\mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}} \). Briefly, in this two-party commitment scheme the committer holds a set of 2m vectors from \(\mathbb {F} ^{\gamma }\), namely the vectors \(\bar{\mathbf {s}}_1^0,\bar{\mathbf {s}}_1^1,\ldots ,\bar{\mathbf {s}}_m^0,\bar{\mathbf {s}}_m^1\), whereas the receiver chooses a set of m bits \(b_1,\ldots ,b_m\), denoted as “its choice of watchbits”, and obtains the m vectors \(\bar{\mathbf {s}}_1^{b_1},\ldots ,\bar{\mathbf {s}}_m^{b_m}\), denoted as “the watchbits”.

Recall that in our multiparty homomorphic commitment scheme party \(P_i\) participates as a receiver in \(p-1\) instances of the two-party commitment scheme, one with each other party. This means that \(P_i\) needs to remember its choice of watchbits for every other party, and to update its stored values accordingly for every linear operation that is performed over the commitments. For instance, let \([\![{\mathbf {x}}]\!],[\![{\mathbf {y}}]\!] \) be two multiparty commitments between three parties; then party \(P_1\) stores \([\![{\mathbf {x}}]\!] ^1 = \left\{ \left\{ [\mathbf {x} ^2]^{2,1}, [\mathbf {x} ^2]^{3,1} \right\} , \left\{ \langle {\mathbf {x} ^1} \rangle ^{1,2}, \langle {\mathbf {x} ^1} \rangle ^{1,3} \right\} \right\} \). When performing the operation \([\![{\mathbf {x}}]\!] +[\![{\mathbf {y}}]\!] \), \(P_1\) ends up with

$$[\![{\mathbf {x} +\mathbf {y}}]\!] ^1= \left\{ \left\{ [\mathbf {x} ^2]^{2,1}+[\mathbf {y} ^2]^{2,1}, [\mathbf {x} ^2]^{3,1}+ [\mathbf {y} ^2]^{3,1} \right\} ~,~ \left\{ \langle {\mathbf {x} ^1} \rangle ^{1,2}+\langle {\mathbf {y} ^1} \rangle ^{1,2}, \langle {\mathbf {x} ^1} \rangle ^{1,3} + \langle {\mathbf {y} ^1} \rangle ^{1,3} \right\} \right\} $$

To make this more efficient, \(P_i\) can choose the bits \(b_1, \ldots , b_m\) only once and use them in all instances of the two-party commitment scheme. This makes the process of applying linear operations over commitments simpler and does not require \(P_1\) to store separate commitments for each of the \(p-1\) parties. Applying the optimization to the above example, \(P_1\) stores only a single value for the commitment part, that is, now \(P_1\) needs to store

$$[\![{\mathbf {x} +\mathbf {y}}]\!] ^1= \left\{ [\mathbf {x} ^2]^{2,1}+[\mathbf {y} ^2]^{2,1} + [\mathbf {x} ^2]^{3,1}+ [\mathbf {y} ^2]^{3,1} ~,~ \left\{ \langle {\mathbf {x} ^1} \rangle ^{1,2}+\langle {\mathbf {y} ^1} \rangle ^{1,2}, \langle {\mathbf {x} ^1} \rangle ^{1,3} + \langle {\mathbf {y} ^1} \rangle ^{1,3} \right\} \right\} $$

Security follows from the underlying commitment scheme, since what we now do is simply equivalent to storing a sum of commitments in a single instance of the two-party scheme.

In a bit more detail, since \(\mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}} \) is UC-secure, it is secure under composition. Furthermore, considering the worst case where only a single party is honest and all other parties are malicious and colluding, we notice that the above optimization is equivalent to executing \(p-1\) instances of \(\mathcal {F} _{\textsf {2HCOM}\hbox {-}\mathbb {F} ^{m}} \), but where the same watchbits are chosen by the honest party. This is almost the same as calling Commit p times. The only exception is that the seeds of the committing party, \(P_i \), in the calls to \(\mathcal {F} _{\textsf {OT}} \) are different in our optimized protocol. Thus it is equivalent to the adversary being able to select p potentially different seeds in the calls to Commit. However, the outputs of the PRG calls are indistinguishable from random in both cases, and so the two distributions are indistinguishable, assuming p is polynomial in the security parameter.
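Why reusing the same choice bits keeps the folding sound can be seen in a deliberately simplified toy model of a watchbit-style commitment (this sketch is a stand-in for, not a faithful rendering of, the scheme of [18]; all parameters are illustrative):

```python
import random

P_MOD, M = 97, 5   # toy field and message length

def commit(msg, b):
    """Toy watchbit commitment: the committer holds (s^0, s^1) with
    s^1 = s^0 + msg; per position j the receiver watches s_j^{b_j}."""
    s0 = [random.randrange(P_MOD) for _ in range(M)]
    s1 = [(s0[j] + msg[j]) % P_MOD for j in range(M)]
    watched = [(s0[j], s1[j])[b[j]] for j in range(M)]
    return (s0, s1), watched

def open_ok(opening, watched, b):
    """The receiver checks the opening against its watched positions."""
    s0, s1 = opening
    return all(watched[j] == (s0[j], s1[j])[b[j]] for j in range(M))

b = [random.randrange(2) for _ in range(M)]   # chosen once, reused everywhere
x = [random.randrange(P_MOD) for _ in range(M)]
y = [random.randrange(P_MOD) for _ in range(M)]
op_x, w_x = commit(x, b)
op_y, w_y = commit(y, b)

# Because both instances use the same b, the receiver can fold its watched
# data into a single vector and still verify the summed opening:
w_sum = [(w_x[j] + w_y[j]) % P_MOD for j in range(M)]
op_sum = ([(op_x[0][j] + op_y[0][j]) % P_MOD for j in range(M)],
          [(op_x[1][j] + op_y[1][j]) % P_MOD for j in range(M)])
assert open_ok(op_sum, w_sum, b)

# and the summed opening indeed commits to x + y:
assert [(op_sum[1][j] - op_sum[0][j]) % P_MOD for j in range(M)] == \
       [(x[j] + y[j]) % P_MOD for j in range(M)]
```

Had the two instances used different choice bits, the sum of the watched vectors would in general not match any single watched position of the summed opening, which is exactly why the optimization fixes the bits once and for all.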

Optimized \(\mathsf {CorrectnessTest}\). Recall that in the sacrificing step of protocol \(\varPi _{\textsf {AHCOM}\hbox {-}\mathbb {F} ^{m}} \) (see Fig. 13) the parties perform two openings of commitments for every bucket (the opening is described as part of the \(\mathsf {CorrectnessTest}\) in Fig. 11). That is, beginning the step with \(\tau _1\cdot (\tau _2)^2\cdot T\) triples (which are assigned to \((\tau _2)^2\cdot T\) buckets) leads to the opening of \((\tau _1-1)\cdot (\tau _2)^2\cdot T\) triples.

Since we require that the results of all of these openings be \(\mathbf {0}\), any linear combination over these openings will be \(\mathbf {0}\) as well if they are correct. On the other hand, if one or more of the openings are not zero, the result of a random linear combination over the openings is \(\mathbf {0}\) with probability at most \(\frac{1}{|\mathbb {F} |}\). Thus, agreeing on s random linear combinations over the openings, and opening only these, detects an incorrect triple with overwhelming probability.
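A sketch of this batched check in Python, with the openings modeled as plain field elements over a toy field (in the protocol they are vectors in \(\mathbb {F} ^m\), which only lowers the cheating probability further):

```python
import random

P_MOD, S = 101, 40   # toy field; S plays the role of the parameter s

def passes_batched_check(openings):
    """Check S random linear combinations instead of every single opening.
    A batch with a nonzero entry passes one combination with probability
    1/|F|, hence all S of them with probability at most |F|^-S."""
    for _ in range(S):
        coeffs = [random.randrange(P_MOD) for _ in openings]
        if sum(c * o for c, o in zip(coeffs, openings)) % P_MOD != 0:
            return False
    return True

assert passes_batched_check([0] * 1000)   # an all-correct batch always passes

bad = [0] * 1000
bad[123] = 7                              # one incorrect triple in the batch
assert not passes_batched_check(bad)      # caught with overwhelming probability
```

Only the s combinations are opened, instead of all \((\tau _1-1)\cdot (\tau _2)^2\cdot T\) values.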

Optimized opening. In the online phase of our protocol, for every multiplication gate the parties need to open some random commitments using the Open command. The implementation of the Open command requires interaction between every pair of parties; thus the communication complexity is \(\varOmega (T\cdot p^2)\), where T is the number of multiplication gates in the circuit. Following the same idea as used in SPDZ and MiniMAC, we note that we can reduce the communication complexity to O(p) per gate by performing a “partial opening” of a commitment \([\![{\mathbf {x}}]\!] \): First, every party \(P_i\) sends its share \(\mathbf {x} ^i\) to \(P_1\). Then \(P_1\) computes \(\mathbf {x} = \sum _{j\in [p]} \mathbf {x} ^j\) and sends \(\mathbf {x} \) back to everyone. This incurs a communication complexity of O(p) rather than \(O(p^2)\). At the end of the evaluation of the circuit, the parties perform s random linear combinations over the commitment values that were “partially opened” earlier. Then they open the results of the linear combinations using the Open command. If one of the opened results has a wrong value (i.e. it does not conform with the result of the linear combination of the values sent by \(P_1\) in the partial openings), the parties abort.

Using this optimization leads to a communication complexity of \(\varOmega (T\cdot p+s\cdot p^2)\). Security follows by the same arguments as used in SPDZ and MiniMAC. In particular, nothing is leaked during the execution of the gates before the output is opened, and since the adversary does not know the random linear combinations in advance he cannot send manipulated values that pass this check.
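The partial-opening pattern and the deferred check can be sketched as follows, again with commitments modeled as toy additive sharings and with illustrative parameter sizes:

```python
import random

P_MOD = 2**31 - 1                       # toy prime field
NUM_PARTIES, NUM_VALS, S = 4, 10, 3     # S plays the role of s

def partial_open(shares):
    """Every party sends its share to P1 (p-1 messages); P1 broadcasts the
    sum (p-1 messages) -- O(p) communication instead of O(p^2)."""
    return sum(shares) % P_MOD

# additive sharings of the values opened at multiplication gates
shares = [[random.randrange(P_MOD) for _ in range(NUM_PARTIES)]
          for _ in range(NUM_VALS)]
claimed = [partial_open(sh) for sh in shares]   # values announced by P1

def check(claimed_vals):
    """End of evaluation: open S random combinations with the real Open
    command and compare against the partially opened values."""
    for _ in range(S):
        r = [random.randrange(P_MOD) for _ in range(NUM_VALS)]
        truly = sum(r[k] * sum(shares[k]) for k in range(NUM_VALS)) % P_MOD
        announced = sum(r[k] * claimed_vals[k] for k in range(NUM_VALS)) % P_MOD
        if truly != announced:
            return False   # parties abort
    return True

assert check(claimed)                 # an honest P1 passes

cheat = list(claimed)
cheat[3] = (cheat[3] + 1) % P_MOD     # P1 lies about one partially opened value
assert not check(cheat)               # caught with overwhelming probability
```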

Optimizing for large fields. If the field we compute in contains at least \(2^s\) elements, then the construction of multiplication triples becomes much lighter. First, see that in this case it is sufficient to have only two triples per bucket for sacrificing. This is because the adversary’s success probability of getting an incorrect triple through the \(\mathsf {CorrectnessTest}\) in Fig. 11 is less than \(|\mathbb {F} |^{-1}\le 2^{-s}\). Next, we see that it is possible to eliminate the combining step on the \(\mathbf {y} \) components of the triples. This follows since the party inputting x into the \(\mathsf {ArithmeticOT}\) procedure in Fig. 6 can now only succeed in a selective failure attack on the honest party’s input y if he manages to guess y. To see this, notice that if the adversary changes the qth bit of his input x then the result of the computation will differ from the correct result by a term \(y\cdot 2^{q-1}\). But since y is a random element of a field of at least \(2^s\) elements, the adversary can guess this offset, and thus pass the \(\mathsf {CorrectnessTest}\), with probability at most \(2^{-s}\); hence his cheating attempt is caught with overwhelming probability. Furthermore, the combining on \(\mathbf {x} \) is now also overly conservative in the bucket size \(\tau _2\). To see this, notice that the adversary only gets to learn at most \(s-1\) bits in total over all triples. This means that he cannot fully learn the value of a component of \(\mathbf {x} \) for all triples in the bucket (since it is at least s bits long), which is what our proof bounding his success probability assumes. Instead we can now bound his success probability by considering different attack vectors and using the Leftover Hash Lemma to compute the maximum amount of leakage he can learn when combining fewer than \(\tau _2\) triples in a bucket, as done in [29]. However, we leave the details of this as future work.
To conclude, even when using the very conservative bound on bucket size, we get that it now takes only \(6m\log (|\mathbb {F} |)\) OTs, amortized, when constructing \(2^{21}\) triples, instead of \(27m\log (|\mathbb {F} |)\) when \(s=40\).
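The selective-failure argument can be checked on a toy model of the bit-decomposed product underlying \(\mathsf {ArithmeticOT}\) (a sketch only; here bits are indexed from 0, so flipping bit q shifts the result by \(y\cdot 2^{q}\)):

```python
import random

P_BITS = 61
P_MOD = 2**P_BITS - 1   # a Mersenne prime; any field with >= 2^s elements works

def arithmetic_ot_result(x_bits, y):
    """Toy receiver-side assembly of x*y: each bit of x selects (via one OT
    in the real procedure) the correlated term y * 2^q."""
    return sum(bit * (y << q) for q, bit in enumerate(x_bits)) % P_MOD

x = random.randrange(P_MOD)
y = random.randrange(1, P_MOD)
x_bits = [(x >> q) & 1 for q in range(P_BITS)]
honest = arithmetic_ot_result(x_bits, y)
assert honest == (x * y) % P_MOD

# Flipping bit q of x shifts the result by exactly +/- y * 2^q, a nonzero
# offset the adversary cannot predict without guessing the random y, so the
# CorrectnessTest catches the cheat with overwhelming probability.
q = 17
cheat_bits = list(x_bits)
cheat_bits[q] ^= 1
cheated = arithmetic_ot_result(cheat_bits, y)
diff = (cheated - honest) % P_MOD
assert diff == (y << q) % P_MOD or diff == (-(y << q)) % P_MOD
```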

Efficiency Comparison. The computationally heavy parts of our protocol are the use of oblivious transfers and of the underlying homomorphic two-party commitments. Both of these are rather efficient in practice, given the state-of-the-art constructions of Keller et al. ([28], for OT) and of Frederiksen et al. ([18], for two-party homomorphic commitments). It should be noted that if one wishes to use a binary field, or another small field, with the commitment scheme of Frederiksen et al. [18], then it is necessary to use a code based on algebraic geometry internally. Such codes are however not as efficient to compute as, for example, the BCH code used in the implementation of [18] done in [35].

Table 2. Comparison of the overhead of OTs needed, in the amortized sense. All values should be multiplied by \(p(p-1)\) to get the true number of needed OTs. We differentiate between regular OTs and the more efficient correlated random OTs with error (COTe) [29]. We assume that the computational security parameter satisfies \(\kappa \ge m\log (|\mathbb {F} |)\); otherwise some complexities increase. For [7, 29], which work over \(\mathbb {F} =GF(2)\), \(m=1\) is possible. We assume at least \(2^{21}\) triples are generated, which gives the smallest numbers for the protocols. *) Using optimization 4 in Sect. 6, requiring \(|\mathbb {F} |\ge 2^s\).

Notice that the number of OTs our protocol requires is a factor of \(O(m\log (|\mathbb {F} |))\) greater than the number of commitments it requires. Therefore, in Table 2 we compare our protocol with [7, 19, 29] purely based on the number of OTs needed. This gives a fair estimate of the efficiency of our protocol compared to the current state-of-the-art protocols for the same setting (static, malicious majority in the secret sharing approach).

Furthermore, we note that both [19, 29] (which are used as the underlying preprocessing phase for MiniMAC) require a factor of between O(m) and \(O(m^2)\) more coin tosses than our protocol. The reason is that in our protocol it is sufficient to perform the random linear combinations using a random scalar from \(\mathbb {F} \) (i.e. a scalar multiplication), whereas [19, 29] require a componentwise multiplication with a random vector from \(\mathbb {F} ^m\). Note that in the comparison in Table 2 we adjusted the complexity of [19] to fit what is needed to securely fix the issue regarding the sacrificing, which we present in the full version [20].

7 Applications

Practically all maliciously secure MPC protocols require some form of commitments. Some, e.g. the LEGO family of protocols [17, 18, 34, 35], also require these commitments to be additively homomorphic. Since our MPC protocol works directly on such commitments, we believe it can be used as a component in a greater scheme with small overhead, as all private values are already committed to. Below we consider one such specific case: constructing committed OT from a general MPC protocol.

7.1 Bit Committed OT

The bit-OT two-party functionality \((b,x_0,x_1)\mapsto (x_b,\bot )\) can be realized using a secure evaluation of a circuit containing a single AND gate and two XOR gates: Let b denote the choice bit and \(x_0, x_1\) the bit messages, then \(x_b=b\wedge (x_0\oplus x_1)\oplus x_0\).
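This gate identity is easy to verify exhaustively:

```python
def ot_gate(b, x0, x1):
    """One AND and two XORs: x_b = (b AND (x0 XOR x1)) XOR x0."""
    return (b & (x0 ^ x1)) ^ x0

# exhaustive check of the truth table: the receiver obtains x_b
for b in (0, 1):
    for x0 in (0, 1):
        for x1 in (0, 1):
            assert ot_gate(b, x0, x1) == (x1 if b else x0)
```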

We notice that all shares in our protocol are based on two-party commitments. This means that constructing a circuit as described above will compute an OT based on shares which are committed to. Thus we can efficiently realize an OT functionality working on commitments. Basically, we use \(\mathbb {F} =\) GF(2) and compute a circuit with one layer of AND gates computing the functionality above. In the end we only open towards the receiver. At any later point in time it is possible for the sender to open the commitments to \(x_0\) and \(x_1\), no matter what the receiver chose. The receiver can likewise open b. We note that we generally need to open m committed OTs at a time (since a message has m components). If this is not possible in the given application, we can use reorganization pairs to open only specific OTs, by simply branching each output message (consisting of m components) into m output messages, each of which opens only a single component, and thus only a single actual OT.

Furthermore, since we are in the two-party setting, and because of the specific topology of the circuit, we do not need each multiparty commitment to be the sum of commitments between each pair of parties. Instead the receiving party simply commits to b towards the sending party using a two-party commitment. Similarly the sending party commits to \(x_0\) and \(x_1\) towards the receiving party using a two-party commitment. Now, when they construct a multiplication triple they only need to do one OT per committed OT they construct: the receiver inputting his b and the sender inputting \(x_0\oplus x_1\). Since the sender should not learn anything computed by the circuit, the parties do not need to complete the arithmetic OT in the other direction.

In this setting we have \(\mathbb {F} =\) GF(2) (hence \(m\ge s\)), \(p=2\), and one multiplication gate when constructing a batch of m committed OTs. Plugging these into the equations in Table 1, we see that the amortized cost of a single committed OT is 36 regular string OTs of \(\kappa \) bits and \(108/m\le 108/s\le 3\) (for \(s=40\)) commitments, for batches of m committed OTs.

It is also possible to achieve committed OT using other MPC protocols; in particular the TinyOT protocols [7, 33] have a notion of committed OT as part of their internal construction. However, our construction is quite different.