1 Introduction

Universal Composability (UC) [14] is currently the most popular framework for designing and proving security of cryptographic protocols under arbitrary composition. It allows one to prove that a protocol remains secure even in complex scenarios consisting of multiple nested protocol executions. The benefit of UC is that, as a formal framework, it allows one to discuss the different aspects of an interactive protocol with mathematical precision. But in practice, one often sees that protocol security is argued only at a very high level. This is partially due to the complexity of fully expressing (and then proving) a protocol in UC, but also because achieving provable (UC) security sometimes requires additional, seemingly unnecessary protocol steps or assumptions.

One such case is that of (public) verifiability, which is the focus of this work. A verifiable protocol allows each protocol participant \(\mathcal {P}_i\) to check whether another party \(\mathcal {P}_j\) obtained a certain claimed output at the end of the protocol (or whether it aborted). A publicly verifiable protocol has this property even for external verifiers that did not take part in the protocol itself. Public verifiability is particularly important in the setting of decentralized systems and public ledgers (e.g. blockchains [24, 27, 28, 32, 35]), where new parties can join an ongoing protocol execution on-the-fly after verifying that their view of the protocol is valid. It also plays a central role in a recent line of research [2, 5, 11, 33] on secure multiparty computation (MPC) protocols that rely on a public ledger to achieve fairness (i.e. ensuring either all parties obtain the protocol output or nobody does, including the adversary) by penalizing cheating parties, circumventing fundamental impossibility results [25]. Protocol verifiability also finds applications in MPC protocols that have identifiable abort such as [8, 9, 30], where all parties in the protocol either agree on the output or agree on the set of cheaters. Furthermore, public verifiability is an intrinsic property of randomness beacons [22, 23] and a central component of provably secure Proof-of-Stake blockchain protocols [5, 24, 33]. However, most of these works achieve (public) verifiability by relying on heavy tools such as non-interactive zero knowledge proof systems and strong assumptions such as adaptive security of the underlying protocols.

1.1 The Problems of Achieving (Public) Verifiability in UC

Consider a UC functionality \(\mathcal {F}\) which has one round of inputs by the parties \(\mathcal {P}=\{\mathcal {P}_1,\dots ,\mathcal {P}_n\}\), computes outputs based on the inputs and finally sends these outputs to each \(\mathcal {P}_i\). In this work, we are interested in adding verifiability to \(\mathcal {F}\) to obtain an extended functionality \(\mathcal {F}^{\texttt{V}}\). This functionality \(\mathcal {F}^{\texttt{V}}\) performs the same operations as \(\mathcal {F}\), but it additionally allows verifiers to confirm that certain inputs were provided by a party \(\mathcal {P}_i\) to \(\mathcal {F}^{\texttt{V}}\) to perform these operations and that certain outputs of these operations were given to \(\mathcal {P}_i\) from \(\mathcal {F}^{\texttt{V}}\). Moreover, we want to obtain a protocol \(\varPi ^{\texttt{V}}\) realizing \(\mathcal {F}^{\texttt{V}}\) from an existing protocol \(\varPi \) that realizes \(\mathcal {F}\). More concretely, we are interested in compiling a UC-secure protocol \(\varPi \) into a UC-secure counterpart \(\varPi ^{\texttt{V}}\) that has (public) verifiability.

The (intuitive) first step is to construct \(\varPi ^{\texttt{V}}\) where each party commits to its inputs and randomness. The parties then run \(\varPi \) using the committed input and randomness, exchanging authenticated messages. This approach assumes that we are okay with revealing the inputs after \(\varPi \) is completed in case cheating is suspected, which we discuss in more detail below. Intuitively, this yields a simple verification procedure: each party can use the committed inputs and randomness of all other parties to re-execute \(\varPi \) in its head and compare the resulting messages to the authenticated protocol transcript. Any external verifier could do the same based on the commitments and an authenticated transcript of \(\varPi \). Unfortunately, using this simple approach leads to adaptivity problems when trying to prove \(\varPi ^{\texttt{V}}\) secure: in the security proof, the simulation must be performed without knowledge of the actual inputs to the functionality. But afterwards, these inputs become known to the verifier, so the simulator must be able to explain the whole transcript of \(\varPi \) in terms of the previously unknown inputs, requiring adaptive security of \(\varPi \) to begin with. Similar issues have been observed before (e.g. [30]). This means that any such \(\varPi ^{\texttt{V}}\) would be quite inefficient, since adaptive protocols \(\varPi \) are often significantly less efficient than their counterparts with static security.
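To make the re-execution idea concrete, the following sketch checks a claimed transcript against opened input/randomness pairs. It is purely illustrative: `next_message` is a hypothetical deterministic next-message function, a broadcast communication model is assumed, and none of the names are part of our actual compiler.

```python
def verify_by_reexecution(next_message, n_parties, n_rounds, openings, transcript):
    """Re-execute the protocol 'in the head' from every party's opened
    (input, randomness) pair and compare against the authenticated
    transcript.  `transcript[(rho, i)]` is the message party i broadcast
    in round rho; `openings[i]` is the opened pair (x_i, r_i)."""
    for rho in range(n_rounds):
        for i in range(n_parties):
            x_i, r_i = openings[i]
            # messages party i received in the previous round (broadcast model)
            received = (b"".join(transcript[(rho - 1, j)]
                                 for j in range(n_parties) if j != i)
                        if rho > 0 else b"")
            if transcript[(rho, i)] != next_message(i, x_i, r_i, rho, received):
                return False  # party i deviated from the protocol
    return True
```

Since the next-message function is deterministic given input, randomness and received messages, any honestly generated transcript passes this check, while a single substituted message makes it fail.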

Consider, as an example, a 2PC protocol \(\varPi _{2PC}\) with active security based on Garbled Circuits (GCs) such as [18, 34]. Protocol \(\varPi _{2PC}\) is executed by a sender \(\mathcal {P}_1\) and a receiver \(\mathcal {P}_2\) (where only \(\mathcal {P}_2\) obtains output) as follows: (1) \(\mathcal {P}_1\) generates multiple GCs together with input keys for each circuit. \(\mathcal {P}_1\) commits to the GCs and their input keys. It inputs the input keys belonging to \(\mathcal {P}_2\) into an Oblivious Transfer (OT) functionality \(\mathcal {F}_{OT}\); (2) \(\mathcal {P}_2\) uses \(\mathcal {F}_{OT}\) to obtain its input keys; (3) \(\mathcal {P}_1\) decommits the GCs and its own input keys; (4) \(\mathcal {P}_2\) evaluates the GCs. Both parties run a consistency check showing that most GCs were correctly generated and that their input keys are consistent. The security proof of \(\varPi _{2PC}\) (for static security) usually consists of simulators for a corrupted sender (\(\mathcal {S}_1\)) and receiver (\(\mathcal {S}_2\)). \(\mathcal {S}_1\) sends random inputs to \(\mathcal {F}_{OT}\), extracts the inputs of \(\mathcal {P}_1\) and then checks that the GCs were generated correctly by the malicious \(\mathcal {P}_1\). For \(\mathcal {S}_2\) the standard strategy is to first extract the input \(x_2\) of the malicious \(\mathcal {P}_2\) using \(\mathcal {F}_{OT}\), then to obtain the output y from the functionality \(\mathcal {F}_{2PC}\), to choose a random input \(\tilde{x}_1\) and finally to simulate GCs such that they output y for the input keys of \(\tilde{x}_1, x_2\). In order to make \(\varPi _{2PC}\) verifiable (with respect to revealing inputs and outputs), let \(\mathcal {F}^{\texttt{V}}_{2PC}\) release the real input \(x_1\) of \(\mathcal {P}_1\) after the computation finished. 
But in \(\mathcal {S}_2\) we generated the GCs such that they output y for the dummy input \(\tilde{x}_1\), so the garbling may not even be a correct garbling of the given circuit. There might not exist randomness that explains the output of \(\mathcal {S}_2\) consistently with \(x_1\), unless \(\varPi _{2PC}\) was an adaptively secure protocol.

This seems counter-intuitive: beyond the technical reason to allow (UC) simulation of verifiability, we see no explanation why only adaptively secure protocols should be verifiable when following the aforementioned compilation steps.

1.2 Our Contributions

In this work, we show how to compile a large class of statically UC-secure protocols into publicly verifiable versions that allow a party to non-interactively prove that it obtained a certain output by revealing its input. We focus on a setting where at least one party is assumed to be honest, and where the compiled protocol was already maliciously secure to begin with. While revealing an input is a caveat, this flavor of (public) verifiability is sufficient for a number of applications (e.g. [5, 7, 23]) and allows us to circumvent the need for expensive generic zero knowledge proofs and adaptive security (as needed in [30, 33]). We introduce a compiler relying only on commitments and “joint authentication” functionalities that can be realized with cheap public-key primitives.

Our approach is compatible with protocols realizing non-reactive functionalities such as OT, Commitments or Secure Function Evaluation. We describe a standard wrapper for any such functionality to equip it with the interfaces necessary for non-interactive verification, allowing external verifiers to register and to perform verification. This wrapper is designed to amalgamate the reactive nature of UC with non-interactivity and might be of independent interest. Extending the results to reactive functionalities is an interesting open problem.

When is Revealing Inputs for Verification Justifiable? Although our focus on revealing inputs might seem very restrictive, there is a quite substantial set of protocols where it can be applied. As a starting point, our techniques can be used to instantiate preprocessing for UC-secure MPC with Identifiable Abort without adaptive assumptions [9, 30]. Our approach also applies when one wants to publicly and randomly sample from a distribution and identify cheaters who disturbed the process. For example, our results have already been used as an essential tool in follow-up work constructing UC randomness beacons [23]. A third application is to bootstrap MPC without output verifiability to MPC with output verifiability without revealing inputs. Here, each physical party \(\mathcal {P}_i\) in the protocol \(\varPi _{MPC}\) runs two virtual parties \(\mathcal {P}_i^C, \mathcal {P}_i^V\). It gives \(\mathcal {P}_i^C\) the actual input x (while \(\mathcal {P}_i^V\) has no input), and both parties obtain the same output y from \(\varPi _{MPC}\). Now, in order to convince a verifier that \(\mathcal {P}_i\) had y as output, it can “sacrifice” \(\mathcal {P}_i^V\) and reveal its randomness for verification. Observe that this requires \(\varPi _{MPC}\) to be secure against a dishonest majority of parties. A fourth application lies in achieving cheater identification in the output phase of MPC protocols, which is a prerequisite for obtaining MPC with monetary fairness such as [2, 5, 11, 33]. For example, using our techniques, it is possible to construct the publicly verifiable building blocks of the output phase of Insured MPC [5] and related applications [7], since the inputs of the output phase with cheater identification are supposed to be revealed anyway. In [5] the authors had to individually redefine each functionality with respect to verifiability and reprove the security of each protocol involved.
Using our techniques, we show in the full version [6] that this tedious task can be avoided and that the same result can be obtained by inspecting the primitives used in their protocol and verifying that the protocols fulfill the requirements of our compiler.

Shortcomings of Other Approaches. It is well known that adaptively secure protocols usually have larger computation or communication overheads (or require stronger assumptions) than their statically secure counterparts. For example, Yao’s Garbling Scheme and optimizations thereof are highly efficient with static security (e.g. [38]) but achieve similar performance with adaptive security only for NC1-circuits [31] (unless one relies on Random Oracles [10]). When implementing \(\varPi _{2PC}\), one would additionally have to realize an adaptively UC-secure \(\mathcal {F}_{OT}\), which is again cheaper with static than with adaptive security. This is also true when OT-extension is used [20, 21].

Previous works such as [33] obtain public verifiability, even without revealing inputs and without adaptive protocols, by using generic UC-NIZKs. They follow the GMW paradigm [29] where each party would prove in every protocol step of \(\varPi \) that it created all messages correctly, given all previous messages as well as commitments to inputs and randomness. To the best of our knowledge, no work that uses UC-NIZKs to achieve verifiability estimated concrete parameters for their constructions. This is because the UC-NIZKs, in addition to proving the protocol steps, also have to use the code of the cryptographic primitives in a white-box way. That also means that UC-NIZKs cannot be applied if the compiled protocol \(\varPi \) uses Random Oracles, which are popular in efficient protocols.

Another solution for verifiability, which also does not require an adaptive protocol and which works in case \(\varPi \) is an MPC protocol, is to i) let \(\varPi \) commit to the output \(y_i\) of each \(\mathcal {P}_i\) by running a commit algorithm for a non-interactive commitment scheme inside \(\varPi \); ii) output all these commitments to all parties, which sign them and broadcast the signed commitments to each other; and iii) reveal outputs and commitment openings to the respective parties. Obviously, this does not generalize to arbitrary protocols \(\varPi \), whereas our approach does. Additionally, in this approach one needs to evaluate the commitment algorithm white-box in MPC. Evaluating cryptographic primitives inside MPC can be costly, in particular if the MPC protocol is defined over a ring where the commitment algorithm has a large circuit. This also rules out cheap Random Oracle-based commitments.

Efficiency. The only overheads our compiler adds to the original protocol are a simple commitment (e.g. based on a random oracle) to the input/randomness of each party and the subsequent joint authentication of this commitment as well as of all protocol messages. If messages are exchanged over public channels, this joint authentication can be done by requiring each party to compute multisignatures on the messages exchanged in each round and then combining these signatures into a single multisignature, saving on space. If messages are exchanged over private channels, there is an extra overhead of computing 2 modular exponentiations and transmitting a string of security parameter size per message, which is needed for our private joint authentication scheme. The verification procedure requires the verifier to re-execute the protocol on the jointly authenticated transcript of the protocol using a party’s opened input/randomness. While this seems expensive, notice that executing the protocol’s next-message function is strictly cheaper than verifying a NIZK showing that every message in the transcript is correctly computed according to this function, as required by previous schemes, which would also add the overhead of having each party compute such a NIZK for every message it sends.
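A minimal sketch of such a random-oracle-based commitment, with SHA-256 standing in for the random oracle (function names are illustrative, not part of the compiler):

```python
import hashlib
import secrets

def ro_commit(msg: bytes) -> tuple[bytes, bytes]:
    """Commit to msg (e.g. a party's serialized input/randomness pair) as
    c = H(d || msg) with a fresh 256-bit decommitment value d.  Hiding
    comes from d masking msg under the random-oracle heuristic, binding
    from the collision resistance of H."""
    d = secrets.token_bytes(32)
    return hashlib.sha256(d + msg).digest(), d

def ro_open(c: bytes, d: bytes, msg: bytes) -> bool:
    """Verify an opening (d, msg) of commitment c."""
    return hashlib.sha256(d + msg).digest() == c
```

Committing is thus a single hash evaluation per party, which is why this step adds essentially no overhead compared to running the original protocol.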

1.3 Our Techniques

We construct a compiler that generically achieves public verifiability for protocols with one round of input followed by multiple computation and output rounds as formalized in Sect. 2. For this, we start with an observation similar to [30], namely that by fixing the inputs, randomness and messages in a protocol \(\varPi \) we can get guarantees about the outputs. This is because fixing the inputs, randomness and received messages essentially fixes the view of a party, as the messages generated and sent by a party are deterministic given all of these other values. Our compiler creates a protocol \(\varPi ^{\texttt{V}}\) that fixes parties’ input and randomness pairs by having parties commit to these pairs and authenticate the messages exchanged between parties in such a way that an external party can verify such committed/authenticated items after the fact. This idea of fixing messages for the purpose of public verifiability is not new, and other works that focus on it such as [3, 33] have taken a similar route. However, fixing all messages exchanged in the original protocol \(\varPi \) is costly and might be overkill for some protocols. We explore this concept in the notion of transcript non-malleability as defined in Sect. 3. There, we formalize the intuition that for some protocols not all exchanged messages need to be fixed: e.g. an adversary that is allowed to replace messages exchanged between dishonest parties possibly does not have enough leverage to forge a consistent transcript for a different output.

Proving Security in UC: It might seem obvious that \(\varPi ^{\texttt{V}}\), i.e. a version of \(\varPi \) with all of its inputs and messages fixed, is publicly verifiable and implements \(\mathcal {F}^{\texttt{V}}\). Unfortunately, as we outlined above, a construction of a simulator \(\mathcal {S}^{\texttt{V}}\) in the proof of security needs to assume that \(\varPi \) is adaptively secure. In Sect. 3.1 we address this by using input-aware simulators (or über simulators) \(\mathcal {S}^{\texttt{U}}\). These are special simulators which can be parameterized with the inputs for the simulated honest parties, generating transcripts consistent with these inputs but indistinguishable from the transcripts of \(\mathcal {S}\). We then embed an über simulator of a protocol \(\varPi \) into the publicly verifiable functionality \(\mathcal {F}^{\texttt{V}}\). This delegates the simulation of \(\varPi \) to the internal über simulator of \(\mathcal {F}^{\texttt{V}}\) – whereas in our naive approach, \(\mathcal {S}^{\texttt{V}}\) had to simulate \(\varPi \) itself. Since we let \(\mathcal {F}^{\texttt{V}}\) only release the transcripts that \(\mathcal {S}^{\texttt{U}}\) generates, this does not leak any additional information to the adversary. Moreover, \(\mathcal {S}^{\texttt{U}}\) now also extracts the inputs of the dishonest parties.

Getting Über Simulators (Almost) for Free: Following our example with \(\varPi _{2PC}\) from Sect. 1.1, \(\mathcal {S}_1\) for a corrupted sender uses a random input to \(\mathcal {F}_{OT}\) and otherwise follows \(\varPi _{2PC}\). Towards constructing \(\mathcal {S}^{\texttt{U}}_1\), observe that since \(\mathcal {F}_{OT}\) by its own UC-security hides the input of \(\mathcal {P}_2\), running \(\mathcal {S}_1\) inside \(\mathcal {F}^{\texttt{V}}_{2PC}\) using real inputs of \(\mathcal {P}_2\) is indistinguishable and we can use such a modified \(\mathcal {S}_1\) as \(\mathcal {S}^{\texttt{U}}_1\). Conversely, we can also construct \(\mathcal {S}^{\texttt{U}}_2\), which runs \(\varPi _{2PC}\) based on the input \(x_1\) that it obtains. By the UC-security of \(\varPi _{2PC}\), the distribution of \(\mathcal {S}^{\texttt{U}}_2\) will be indistinguishable from \(\mathcal {S}_2\). As can be seen from this example, an efficient über simulator need not be artificial or a strong requirement, but can often be obtained quite simply from either the existing protocol or the existing \(\mathcal {S}\). Requiring one also differs from requiring adaptivity of \(\varPi _{2PC}\): \(\mathcal {S}^{\texttt{U}}_2\) still only requires \(\varPi _{2PC}\) to be statically secure. In fact, this strategy for constructing an über simulator works for any protocol whose security proof simulates the online phase using “artificial” fixed inputs and otherwise runs the protocol honestly while extracting inputs (e.g. MPC protocols such as [26, 36]). Hence, we can directly make a large class of protocols verifiable. This is discussed further in the full version [6].

How to Realize Transcript Non-malleability. Besides fixing inputs and randomness, in order to construct compilers from \(\varPi \) to \(\varPi ^{\texttt{V}}\) we need to fix the transcript of \(\varPi \). For this, we have parties in \(\varPi ^{\texttt{V}}\) use what we call “joint authentication” (defined in Sect. 4). Joint Authentication works for both public and private messages. In the public case, joint authentication is achieved by having all parties sign a message sent by one of them. In the private case, we essentially allow parties to authenticate commitments to private messages that are only opened to the rightful receivers. Later on, any party who received that private message (i.e. the opening of the commitment to the message) can publicly prove that it obtained a certain message that was jointly authenticated by all parties involved in \(\varPi ^{\texttt{V}}\). More importantly, joint authentication does not perform any communication itself but provides authentication tokens that can be verified in a non-interactive manner. In our example with \(\varPi _{2PC}\), this means that both \(\mathcal {P}_1,\mathcal {P}_2\) initially commit to their inputs and randomness and then sign all exchanged messages (checking that each message is signed by its sender).
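The following toy sketch illustrates both flavors of joint authentication. Per-party HMAC keys stand in for digital signatures, so unlike a real instantiation, verification here requires the keys; a deployment would use actual signatures (possibly compressed into a multisignature), and all names below are illustrative.

```python
import hashlib
import hmac
import secrets

# Toy stand-in for signatures: per-party MAC keys.  A real instantiation
# would use digital signatures so that anyone can verify the tokens.
KEYS = {i: secrets.token_bytes(32) for i in range(3)}

def sign(i: int, msg: bytes) -> bytes:
    return hmac.new(KEYS[i], msg, hashlib.sha256).digest()

def joint_auth_public(msg: bytes) -> dict[int, bytes]:
    """Public joint authentication: every party signs the public message.
    The resulting token set could be compressed into one multisignature."""
    return {i: sign(i, msg) for i in KEYS}

def token_valid(msg: bytes, token: dict[int, bytes]) -> bool:
    return (set(token) == set(KEYS) and
            all(hmac.compare_digest(sign(i, msg), t) for i, t in token.items()))

def joint_auth_private(priv_msg: bytes):
    """Private joint authentication: the parties authenticate a commitment
    to the message; only the rightful receiver gets the opening d and can
    later publicly prove what was sent by revealing (d, priv_msg)."""
    d = secrets.token_bytes(32)
    c = hashlib.sha256(d + priv_msg).digest()
    return c, joint_auth_public(c), d   # c and its token are public, d is private
```

Note that no extra communication rounds are modeled here: the tokens are data that any holder can later check non-interactively.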

Putting Things Together. We use the techniques described above to compile any protocol \(\varPi \) that fits one of our transcript non-malleability definitions and UC-realizes a functionality \(\mathcal {F}\) in the \(\mathcal {F}_1,\ldots ,\mathcal {F}_n\)-hybrid model into a protocol \(\varPi ^{\texttt{V}}\) that UC-realizes a publicly verifiable \(\mathcal {F}^{\texttt{V}}\) in the \(\mathcal {F}^{\texttt{V}}_1,\ldots ,\mathcal {F}^{\texttt{V}}_n\)-hybrid model (i.e. assuming that the setup functionalities can also be made publicly verifiable). Moreover, if a global functionality is used as setup, it must allow all parties to make the same queries and obtain the same answers, so that the verification procedure can be performed. Our compilation technique has two main components: 1. commit to and authenticate each party’s input and randomness pairs of \(\varPi \) (fixing input and randomness pairs); 2. execute \(\varPi \) and use public/secret joint authentication to jointly authenticate each exchanged protocol message (fixing the transcript). These steps achieve two goals: allowing parties to publicly and non-interactively show that they have a certain input/randomness pair and transcript, making \(\varPi \) transcript non-malleable, since the adversary cannot lie about its input, randomness or view of the transcript. In order to realize the public verifiability interface of \(\mathcal {F}^{\texttt{V}}\), we have a party open its input and randomness pair as well as its view of the transcript, which could not have been forged, allowing the verifier to execute an honest party’s steps as in \(\varPi \) to verify that a given output is obtained. When proving security of this compiler, we delegate the simulation of the original steps of \(\varPi \) to an über simulator \(\mathcal {S}^{\texttt{U}}\) for \(\varPi \) embedded in \(\mathcal {F}^{\texttt{V}}\). 
This guarantees that the transcript of \(\mathcal {S}\)’s simulated execution of \(\varPi ^{\texttt{V}}\) is consistent with honest parties’ inputs if they activate public verification and reveal their input. To compile our example GC protocol, we now combine all of the aforementioned steps and additionally assume that \(\mathcal {F}_{OT}\) as well as the commitment functionality are already verifiable. By the compiler theorem, the resulting protocol is verifiable according to our definition. In the full version [6] we give a detailed example, showing how verifiability is easily achieved for the protocol of [5].
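An end-to-end toy version of the two compiler steps and the resulting verification might look as follows. As before, `next_message` is a hypothetical deterministic next-message function, broadcast channels are assumed, SHA-256 models the random-oracle commitment, and the joint-authentication tokens on the transcript are elided.

```python
import hashlib
import secrets

H = lambda *parts: hashlib.sha256(b"".join(parts)).digest()

def run_compiled(next_message, n, n_rounds, inputs, rands):
    """Honest execution of the compiled protocol (sketch): step 1 fixes
    each party's (input, randomness) pair via a commitment; step 2 runs
    the protocol and records every message in a transcript that the
    parties would jointly authenticate."""
    decs = {i: secrets.token_bytes(32) for i in range(n)}
    comms = {i: H(decs[i], inputs[i], rands[i]) for i in range(n)}   # step 1
    transcript = {}
    for rho in range(n_rounds):                                      # step 2
        for i in range(n):
            rcvd = (b"".join(transcript[(rho - 1, j)] for j in range(n) if j != i)
                    if rho > 0 else b"")
            transcript[(rho, i)] = next_message(i, inputs[i], rands[i], rho, rcvd)
    return comms, decs, transcript

def publicly_verify(next_message, n, n_rounds, comms, i, opening, transcript):
    """A verifier checks party i's claim: the opening must match the fixed
    commitment, and re-executing party i's steps on the authenticated
    transcript must reproduce exactly the messages recorded for i."""
    d, x, r = opening
    if H(d, x, r) != comms[i]:
        return False
    for rho in range(n_rounds):
        rcvd = (b"".join(transcript[(rho - 1, j)] for j in range(n) if j != i)
                if rho > 0 else b"")
        if transcript[(rho, i)] != next_message(i, x, r, rho, rcvd):
            return False
    return True
```

The verifier's work is one commitment check plus re-running a single party's next-message function, matching the efficiency discussion above.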

1.4 Related Work

Despite being very general, UC has seen many extensions, such as UC with joint state [19] or Global UC [16], aiming at capturing protocols that use global ideal setups. Verifiability for several kinds of protocols has been approached from different perspectives, such as cheater identification [8, 30], verifiability of MPC [4, 37], incoercible secure computation [1], secure computation on public ledgers [2, 11, 33], and improved definitions for widely used primitives [12, 13]. Another approach to avoiding the adaptivity requirement was presented in [9], but it only works for functionalities without input. A different notion of verifiability was put forward in publicly verifiable covert 2PC protocols such as [3] and its follow-up works, where parties can show that the other party has cheated. Here, both the 2PC protocol and therefore also the verifiability guarantee only hold against covert adversaries, while we focus on the malicious setting. To the best of our knowledge, no previous work has considered a generic definition of non-interactive public verifiability for malicious adversaries in the UC framework, nor a black-box compiler achieving such a notion without requiring adaptive security of the underlying protocol or ZK proof systems.

2 Preliminaries

We denote the security parameter by \(\kappa \). The set \(\{1,\dots ,n\}\) is denoted by [n] while we write \([n]_i\) to mean \([n]\setminus \{i\}\). We denote by \(\textsf{negl}(x)\) the set of negligible functions in x and abbreviate probabilistic polynomial time as PPT. We write \(\{0,1\}^{\textsf{poly} (\kappa )}\) to denote a set of bit-strings of polynomial length in \(\kappa \).

Secure Protocols. A protocol \(\varPi \) run by n parties (which we denote as \(\mathcal {P}=\{\mathcal {P}_1,\dots ,\mathcal {P}_n \}\)) consists of the algorithms \(\texttt{nmes},\texttt{out}\) and additional parameters: the number of parties n, the setup resources \(\mathcal {F}_1,\dots ,\mathcal {F}_r\), the number of output rounds G, the number of rounds \(H_\tau \) to obtain each output \(\tau \in [G]\) as well as the communication and output model. We assume that external system parameters \(s\in \{0,1\}^{\textsf{poly} (\kappa )}\) are fixed for the protocol. Each party \(\mathcal {P}_i\) uses their input \(x_i \in \mathcal {X}\) as well as randomness \(r_i\in \{0,1\}^{\textsf{poly} (\kappa )}\) for the actual protocol. Formally, \(\varPi \) is described in Fig. 1 with algorithms \(\texttt{nmes}\) and \(\texttt{out}\) defined as follows:

  • \(\texttt{nmes}\) is a deterministic polynomial-time (DPT) algorithm which on input the party number i, protocol input \(x_i\in \mathcal {X}\), randomness \(r_i\in \{0,1\}^{\textsf{poly} (\kappa )}\), auxiliary input \(s\in \{0,1\}^{\textsf{poly} (\kappa )}\), output round \(\tau \in [G]\), round number \(\rho \in [H_\tau ]\) and previous messages \(\mathcal {M}_{\cdot ,i}\) from parties and \(\mathcal {N}_{\cdot ,i}\) from resources outputs \(\{\texttt{m}_{i,j}^{(\tau ,\rho )}\}_{j\in [n]_i }, \{ \texttt{mres}_{i,q}^{(\tau ,\rho )}\}_{q\in [r]}\).

  • \(\texttt{out}\) is a DPT algorithm which on input the party number i, the protocol input \(x_i\in \mathcal {X}\), randomness \(r_i \in \{0,1\}^{\textsf{poly} (\kappa )}\), auxiliary input \(s\in \{0,1\}^{\textsf{poly} (\kappa )}\), output round \(\tau \in [G]\), a set of messages \(\mathcal {M}_{\cdot ,i}\) from parties and \(\mathcal {N}_{\cdot ,i}\) from resources outputs \(\texttt{y}_{i}^{(\tau )}\) which is either an output value or \(\bot \). The values \(x_i, r_i\) might not be necessary in every protocol, so \(\texttt{out}\) might run without these.

Algorithm \(\texttt{nmes}\) generates two different types of messages: 1. \(\texttt{m}\)-messages, which are used for communication among parties; 2. \(\texttt{mres}\)-messages, which are exchanged between a party and a functionality. Therefore, each \(\texttt{mres}\)-message consists of an interface (\({\textbf {Input}}_{i},{\textbf {Compute}}^{(\tau )},{\textbf {Output}}_{i}^{(\tau )}\)) with whom the party wants to communicate as well as the actual payload. Each message that is an output of \(\texttt{nmes}\) may either be an actual string or a symbol \(\bot \), meaning that no message is sent to a party/functionality in this round. We write \(\texttt{m}_{i,j}\) whenever we mean that a message was sent from party \(\mathcal {P}_i\) to \(\mathcal {P}_j\). Similarly, we write \(\texttt{mres}_{i,q}\) when the message was sent from \(\mathcal {P}_i\) to \(\mathcal {F}_q\) and \(\texttt{mres}_{q,i}\) when sent from \(\mathcal {F}_q\) to \(\mathcal {P}_i\). We denote messages received by party \(\mathcal {P}_i\) from another party as \(\mathcal {M}_{\cdot ,i}\) and those sent by \(\mathcal {P}_i\) to another party as \(\mathcal {M}_{i,\cdot }\). We write \(\mathcal {N}_{\cdot ,i}\) for all messages that \(\mathcal {P}_i\) received from resources while \(\mathcal {N}_{i,\cdot }\) denotes messages which \(\mathcal {P}_i\) sent to resources.
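As an illustration, this bookkeeping can be captured by a small data structure (illustrative Python; the round indices \((\tau ,\rho )\) are folded into a single key for brevity):

```python
class Transcript:
    """Message sets of a protocol run.  m[(i, j, rnd)] holds the message
    m_{i,j} sent from P_i to P_j in round rnd; res[(i, q, rnd)] and
    res[(q, i, rnd)] analogously hold mres-messages exchanged between a
    party P_i and a resource F_q."""
    def __init__(self):
        self.m = {}    # party-to-party messages
        self.res = {}  # party <-> resource messages

    def M_to(self, i):    # M_{.,i}: messages P_i received from parties
        return {k: v for k, v in self.m.items() if k[1] == i}

    def M_from(self, i):  # M_{i,.}: messages P_i sent to parties
        return {k: v for k, v in self.m.items() if k[0] == i}

    def N_to(self, i):    # N_{.,i}: messages P_i received from resources
        return {k: v for k, v in self.res.items() if k[1] == i}
```

The index filters mirror the paper's notation: the first index is always the sender, the second the receiver.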

Fig. 1. The generic protocol \(\varPi \).

Communication and Output Model: We do not restrict how messages are exchanged, except that their length is polynomial in \(\kappa \). If messages are sent through point-to-point secure channels, then we call this private communication. If parties instead send the same message to all other parties, then we consider this broadcast communication. Parties may arbitrarily mix private and broadcast communication. We do not restrict the output \(\texttt{y}_{i}^{(\tau )}\) which each party obtains at the end of the computation, meaning that the \(\texttt{y}_{i}^{(\tau )}\) might all be different.

Universal Composition of Secure Protocols. In this work we use the (Global) Universal Composability or (G)UC model [14, 16] for analyzing security. We focus on dishonest-majority protocols, as e.g. honest-majority protocols can have all parties vote on the result (if broadcast is available). Protocols are run by interactive Turing Machines (iTMs) which we call parties. We assume that each party \(\mathcal {P}_i\) in \(\varPi \) runs in PPT in the implicit security parameter \(\kappa \). The PPT adversary \(\mathcal {A}\) will be able to corrupt k out of the n parties, denoted as \(I\subset \mathcal {P}\). We opt for the static corruption model, where parties are corrupted from the beginning, as this is the model for which most efficient protocols are currently developed. Parties can exchange messages with each other and also with PPT resources, also called ideal functionalities. To simplify notation we assume that the messages between parties are sent over secure channels.

We start with protocols that are already UC-secure, but not verifiable. For this, we assume that the ideal functionality \(\mathcal {F}\) of a protocol \(\varPi \) follows the pattern described in Fig. 2: following Fig. 1 we consider protocols with one input and G output rounds. This is general enough to e.g. model commitment schemes. At the same time, our setting is not strong enough to permit reactive computations which inherently make the notation a lot more complex.

Fig. 2. The generic functionality \(\mathcal {F}\).

It is not necessary that all of the interfaces which \(\mathcal {F}\) provides are used for an application. For example in the case of coin tossing, no party \(\mathcal {P}_i\) ever has to call \({\textbf {Input}}_{i}\). While \({\textbf {Input}}_{i},{\textbf {Output}}_{i}^{(\tau )}\) are fixed in their semantics, the application may freely vary how \({\textbf {Compute}}^{(\tau )}\) may act upon the inputs or generate outputs. The only constraint is that for each of the \(\tau \in [G]\) rounds, \({\textbf {Compute}}^{(\tau )}\) sets output values \((\texttt{y}_{1}^{(\tau )},\dots ,\texttt{y}_{n}^{(\tau )})\).

As usual, we define security with respect to a PPT iTM \(\mathcal {Z}\) called environment. The environment provides inputs to and receives outputs from the parties \(\mathcal {P}\). Furthermore, the adversary \(\mathcal {A}\) will corrupt parties \(I\subset \mathcal {P}\) in the name of \(\mathcal {Z}\) and gain full control over I. To define security, let \(\varPi \circ \mathcal {A}\) be the distribution of the output of \(\mathcal {Z}\) when interacting with \(\mathcal {A}\) in a real protocol instance \(\varPi \). Furthermore, let \(\mathcal {S}\) denote an ideal world adversary and \(\mathcal {F}\circ \mathcal {S}\) be the distribution of the output of \(\mathcal {Z}\) when interacting with parties which run with \(\mathcal {F}\) instead of \(\varPi \) and where \(\mathcal {S}\) takes care of adversarial behavior.

Definition 1 (Secure Protocol)

We say that \(\varPi \) securely implements \(\mathcal {F}\) if for every PPT iTM \(\mathcal {A}\) that maliciously corrupts at most k parties there exists a PPT iTM \(\mathcal {S}\) (with black-box access to \(\mathcal {A}\)) such that no PPT environment \(\mathcal {Z}\) can distinguish \(\varPi \circ \mathcal {A}\) from \(\mathcal {F}\circ \mathcal {S}\) with non-negligible probability in \(\kappa \).

In our (public and secret) Join Authentication protocols we use the standard functionalities for digital signatures \(\mathcal {F}_{\textsf{Sig}}\) [15] and key registration \(\mathcal {F}_{\textsf{Reg}}\) [17]. Moreover, to simplify our compiler description, we use an authenticated bulletin board functionality \(\mathcal {F}_{\textsf{BB}}\) described in the full version [6].

Fig. 3. The functionality wrapper \(\mathcal {F}^{\texttt{V}}[\mathcal {F}]\). The modifications to the interface \({\textbf {Input}}_{i}\) and the new interface \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\) are discussed in Sect. 3.2.

2.1 Verifiable Functionalities

We extend the functionality \(\mathcal {F}\) from Sect. 2 to provide a notion of non-interactive verification using a functionality wrapper \(\mathcal {F}^{\texttt{V}}\) described in Fig. 3. For this, we assume that there are additional parties \(\mathcal {V}_i\) which can partake in the verification. These, as well as regular protocol parties, can register at runtime as verifiers of the computation using a special interface \({\textbf {Register Verifier}}\). Once registered, these verifiers are allowed to check the validity of outputs for parties that have initiated verification at any point. We keep track of this using the set of verifiers \(\mathcal {V}\) (which is initially empty) inside the functionality. For values whose output has been provided using the interface \({\textbf {Output}}_{i}^{(\tau )}\) (which we inherit from the definition of \(\mathcal {F}\) in Sect. 2) we allow the parties \(\mathcal {P}\) to use an interface called \({\textbf {Activate Verification}}\) to enable everyone in \(\mathcal {V}\) to check their outputs via the interface \({\textbf {Verify}}_{i}\). The modifications to \({\textbf {Input}}_{i}\) and the new interface \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\) are related to the über simulators discussed in Sect. 3.2.

Notice that, in our constructions, a verifier \(\mathcal {V}_i \in \mathcal {V}\) can perform verification with the help of data obtained in two different ways: 1. receiving verification data from another verifier \(\mathcal {V}_j \in \mathcal {V}\) or a party \(\mathcal {P}_i \in \mathcal {P}\); 2. reading verification data from a publicly available resource such as \(\mathcal {F}_{\textsf{BB}}\). In case \(\mathcal {V}_i\) obtains verification data from another party in \(\mathcal {V}\cup \mathcal {P}\), that party might be corrupted, allowing the ideal adversary \(\mathcal {S}\) to interfere (e.g. by providing corrupted verification data or by not answering at all). When \(\mathcal {V}_i\) obtains verification data from a setup resource that is untamperable by the adversary, \(\mathcal {S}\) has no influence on the verification process. To model these cases, \(\mathcal {F}^{\texttt{V}}\) might implement only \({\textbf {Register Verifier (private)}}\) or only \({\textbf {Register Verifier (public)}}\), respectively. We do not require \(\mathcal {F}^{\texttt{V}}\) to implement both of these interfaces simultaneously, and thus define the properties of \(\mathcal {F}^{\texttt{V}}\) according to which of them is present:

  • A functionality which implements the interface Register Verifier (public) is said to have Public Verifier Registration. We say that it has Private Verifier Registration if it implements Register Verifier (private).

  • A functionality which implements the interfaces Activate Verification and \({\textbf {Verify}}_{j}\) and which has Verifier Registration is called Non-Interactively Verifiable (NIV). If it has Public Verifier Registration then it is Publicly Verifiable; if it has Private Verifier Registration then it is Privately Verifiable.

3 Verifiable Protocols

We now present our definitions of non-interactively verifiable protocols. For this, we will first sketch a classification for the robustness of a protocol to attacks on its “inherent” verifiability. Then, we define properties that are necessary to achieve simulation-based security for our approach to verifiability.

Our approach to verification (as outlined in Sect. 1.3) is to leverage properties for verifiability that are already built into a protocol \(\varPi \). As the verifier can only rely on the protocol transcript, consider how such a transcript comes into existence: we first run an instance of \(\varPi \) with an adversary \(\mathcal {A}\). Afterwards, the adversary may change parts of the protocol transcript in order to trigger faulty behavior in the outputs of parties. If the adversary cannot trigger erroneous behavior this way, then we can establish the correctness of an output of such a protocol using the messages of its transcript, some opened inputs and randomness, as well as some additional properties of \(\varPi =(\texttt{nmes},\texttt{out})\).

Transcript Validity: If our verification relies on the transcript of \(\varPi \), then a transcript is invalid if (i) messages that a party \(\mathcal {P}_i\) claims to have sent were not received by the receiving party \(\mathcal {P}_j\), (ii) messages to and from a NIV functionality \(\mathcal {F}^{\texttt{V}}\) were not actually sent or received by \(\mathcal {P}_i\), or (iii) in case a party \(\mathcal {P}_i\) reveals its input \(x_i\) and randomness \(r_i\), the messages \(\mathcal {P}_i\) claims to have sent are inconsistent with \(x_i, r_i\) when considering \(\texttt{nmes}\) and the remaining transcript. We formalize this as Transcript Validity in the full version [6].

Transcript Non-Malleability: Tampering by the adversary with the transcript is tolerable unless it leads to two self-consistent protocol transcripts with outputs \(\widehat{\texttt{y}}_{i}^{(\tau )} \ne \texttt{y}_{i}^{(\tau )}\) for some \(\mathcal {P}_i\) such that both \(\widehat{\texttt{y}}_{i}^{(\tau )}, \texttt{y}_{i}^{(\tau )} \ne \bot \). To prevent this, transcript validity is a necessary, but not a sufficient condition: if no messages, inputs or randomness of any party are fixed, then \(\mathcal {A}\) could easily generate two correctly distributed transcripts for different outputs that fulfill this definition using the standard UC simulator of \(\varPi \). We now describe a security game that constrains \(\mathcal {A}\) beyond transcript validity:

  1. \(\mathcal {A}\) runs the protocol with a challenger \(\mathcal {C}\), which simulates honest parties whose inputs and randomness \(\mathcal {A}\) does not know, generating a transcript \(\tau \).

  2. The adversary obtains some additional, potentially secret information of the honest parties from \(\mathcal {C}\), upon which it outputs two valid protocol transcripts \(\varPi _0,\varPi _1\).

  3. \(\mathcal {A}\) wins if \(\varPi _0,\varPi _1\) coincide in certain parts with \(\tau \), while the outputs of some party \(\mathcal {P}_i\) are different and not \(\bot \).
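As a deliberately simplified illustration, the winning condition of step 3 can be sketched in code. Transcripts are modeled as dictionaries from message positions to values; `protected`, `is_valid` and `output_of` are hypothetical stand-ins for the protocol-specific parts of the game (which parts must coincide with \(\tau \), transcript validity, and the output function):

```python
BOT = None  # stand-in for the abort symbol ⊥


def adversary_wins(tau, pi0, pi1, protected, is_valid, output_of):
    """Winning condition (step 3) of the transcript non-malleability game:
    both forged transcripts are valid, coincide with the challenge
    transcript tau on the protected positions, and assign some party two
    different outputs, neither of which is ⊥."""
    if not (is_valid(pi0) and is_valid(pi1)):
        return False
    if any(pi0.get(p) != tau.get(p) or pi1.get(p) != tau.get(p)
           for p in protected):
        return False
    y0, y1 = output_of(pi0), output_of(pi1)
    return y0 != y1 and y0 is not BOT and y1 is not BOT
```

The parameters \(\nu ,\textsf{RIR},\mu \) discussed next determine, in particular, which positions end up in the protected set.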

We want to cover a diverse range of protocols which might come with different guarantees. We consider scenarios regarding: (1) whether the dishonest parties can change their inputs and randomness after the execution (parameter \(\nu \)); (2) the set \(\textsf{RIR}\) of parties that will reveal their input and randomness later; and (3) which protocol messages the adversary can replace when it attempts to break verifiability by presenting a fake transcript (parameter \(\mu \)).

The parameters \(\nu ,\textsf{RIR}\) have the following impact: if \(\nu =\textsf{ncir}\) then the dishonest parties are not committed to their input and randomness at the beginning of the execution, and anything that is revealed by parties in \(I \cap \textsf{RIR}\) might be altered by the adversary. If instead \(\nu =\textsf{cir}\) then all parties are committed to their input and randomness at the beginning of the execution and the adversary cannot alter the \(x_i, r_i\) revealed for verification by honest or dishonest parties from \(\textsf{RIR}\). For \(\mu \) we give the adversary the following choices:

  • \(\mu =\textsf{ncmes}\): \(\mathcal {A}\) can replace all messages by all parties.

  • \(\mu =\textsf{chsmes}\): \(\mathcal {A}\) can replace messages from corrupted senders.

  • \(\mu =\textsf{chmes}\): \(\mathcal {A}\) can replace messages exchanged between corrupted parties.

  • \(\mu =\textsf{cmes}\): \(\mathcal {A}\) cannot replace any message.

The full definition of Transcript Non-Malleability is given in the full version [6].
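The four choices for \(\mu \) can be summarized as a small predicate; a sketch, where `sender_corrupt` and `receiver_corrupt` indicate whether the endpoints of a given transcript message lie in the corrupted set I:

```python
def may_replace(mu: str, sender_corrupt: bool, receiver_corrupt: bool) -> bool:
    """Whether the adversary may replace a given transcript message
    under the parameter mu described above."""
    if mu == "ncmes":
        return True                                 # all messages by all parties
    if mu == "chsmes":
        return sender_corrupt                       # messages from corrupted senders
    if mu == "chmes":
        return sender_corrupt and receiver_corrupt  # between corrupted parties only
    if mu == "cmes":
        return False                                # no message at all
    raise ValueError(f"unknown mu: {mu}")
```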

3.1 Simulating Verifiable Protocols: Input-Aware Simulation

Most simulators \(\mathcal {S}\) for UC-secure protocols \(\varPi \) work by executing an internal copy of the adversary \(\mathcal {A}\), towards which they simulate interactions with simulated honest parties and ideal functionalities in the hybrid model where \(\varPi \) is defined. In general, \(\mathcal {S}\) receives no external advice: it generates random inputs for the simulated honest parties and responses of the simulated ideal functionalities with the aid of a random input tape, from which it samples all necessary values. A crucial point for our approach, however, is being able to parameterize the operation of the simulators of protocols being compiled, as well as giving them external input on how queries to simulated functionalities should be answered.

We need simulators with such properties in order to obtain publicly verifiable versions of existing protocols without requiring them to be adaptively secure, as explained in Sect. 1.1. Basically, in the publicly verifiable version of a protocol, we wish to embed a special simulator into the publicly verifiable functionality that it realizes. This allows us to “delegate” the simulation of the compiled protocol, while the simulator for the publicly verifiable version handles the machinery needed to obtain public verifiability. This simplifies the security analysis of publicly verifiable versions of existing UC-secure protocols, since only the added machinery for public verifiability must be analysed.

Über Simulator \(\mathcal {S}^{\texttt{U}}\): We now introduce the notion of an über simulator for a UC-secure protocol \(\varPi \) realizing a functionality \(\mathcal {F}\). We denote über simulators by \(\mathcal {S}^{\texttt{U}}\), while we denote by \(\mathcal {S}\) the simulator used in the original UC proof. Basically, an über simulator \(\mathcal {S}^{\texttt{U}}\) takes the inputs to be used by the simulated honest parties, as well as the randomness of the functionality, as external parameters, and uses these in interactions with the adversary. It furthermore outputs (through a special tape) the randomness used by these simulated parties. Instead of interacting with an internal copy of the adversary, an über simulator interacts with an external copy. Moreover, an über simulator allows for responses to queries to simulated functionalities to be given externally. Otherwise, \(\mathcal {S}^{\texttt{U}}\) operates like a regular simulator, e.g. extracting corrupted parties’ inputs.

In the case of a probabilistic functionality \(\mathcal {F}\), the über simulator \(\mathcal {S}^{\texttt{U}}\) also receives the randomness tape used by \(\mathcal {F}\). \(\mathcal {S}^{\texttt{U}}\) uses this tape to determine the random values that will be sampled by \(\mathcal {F}\), simulating an execution compatible with such values and with the inputs from honest parties (if they have any).

Most existing simulators for protocols realizing the vast majority of natural UC functionalities can be trivially modified to obtain an über simulator, as we explain in the full version [6]. This is because they basically execute the protocol as an honest party would, except that they use random inputs and leverage the setup to equivocate the output in the simulated execution. Departing from such a simulator, an über simulator can be constructed by allowing the simulated honest party inputs to be obtained externally, rather than generated randomly.

Syntax of Über Simulator \(\mathcal {S}^{\texttt{U}}\): Let \(\mathcal {S}^{\texttt{U}}\) be a PPT iTM with the same input and output tapes as a regular simulator \(\mathcal {S}\) plus additional ones as defined below:

  • Input tapes: a tape for the input from the environment \(\mathcal {Z}\), a tape for messages from an ideal functionality \(\mathcal {F}\), a tape for inputs for the simulated honest parties, a tape for messages from a copy of an adversary \(\mathcal {A}\) (either connected to \(\mathcal {A}\) or to \(\mathcal {F}^{\texttt{V}}\)’s \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\) interface) and a tape for messages from the global ideal functionalities in the hybrid model where \(\varPi \) is defined. If \(\mathcal {F}\) is probabilistic, \(\mathcal {S}^{\texttt{U}}\) also receives \(\mathcal {F}\)’s random tape.

  • Output tapes: tapes for output to \(\mathcal {Z}\), tapes for messages to \(\mathcal {F},\mathcal {A}\), tapes for messages for the global ideal functionalities in the hybrid model where \(\varPi \) is defined as well as a special “control output tape” that outputs the randomness used by simulated honest parties.
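For reference, the tape layout enumerated above can be written down as a plain container; a structural sketch only, with field names of our choosing:

```python
from dataclasses import dataclass, field


@dataclass
class UberSimulatorTapes:
    """Tapes of an über simulator S^U as enumerated above (naming sketch)."""
    # --- input tapes ---
    from_env: list = field(default_factory=list)       # input from Z
    from_func: list = field(default_factory=list)      # messages from F
    honest_inputs: list = field(default_factory=list)  # inputs for simulated honest parties
    from_adv: list = field(default_factory=list)       # from A, or from F^V's NMF interface
    from_global: list = field(default_factory=list)    # from global setup functionalities
    func_random_tape: bytes = b""                      # only used if F is probabilistic
    # --- output tapes ---
    to_env: list = field(default_factory=list)
    to_func: list = field(default_factory=list)
    to_adv: list = field(default_factory=list)
    to_global: list = field(default_factory=list)
    control_out: list = field(default_factory=list)    # randomness of simulated honest parties
```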

For any PPT iTM \(\mathcal {S}^{\texttt{U}}\) with the input and output tapes defined above, we then say that \(\mathcal {S}^{\texttt{U}}\) is an über simulator if it has the properties of simulation- and execution-consistency, which are described in Definitions 2 and 3 below. Simulation consistency says that any regular ideal world execution of \(\mathcal {F}\) with \(\mathcal {S}\) is indistinguishable from an execution of \(\mathcal {F}\) with \(\mathcal {S}^{\texttt{U}}\) where \(\mathcal {S}^{\texttt{U}}\) operates as \(\mathcal {S}\) does (i.e. with direct access to a copy of the adversary \(\mathcal {A}\) and the global setup) but is parameterized by the dummy honest party inputs instead of choosing simulated honest party inputs at random. Formally, simulation consistency is as follows:

Definition 2 (Simulation Consistency)

Let \(\varPi \) be a protocol UC-realizing functionality \(\mathcal {F}\) using ideal functionalities \(\mathcal {F}_1,\dots ,\mathcal {F}_r\) as setup and let \(\mathcal {S}\) be the simulator from the UC-security proof. We say that the PPT iTM \(\mathcal {S}^{\texttt{U}}\) is Simulation-consistent for \((\varPi ,\mathcal {F},\mathcal {S})\) if the following distributions are indistinguishable for all PPT iTM \(\mathcal {Z}\):

  1. \(\mathcal {F}\circ \mathcal {S}\): The distribution of outputs of \(\mathcal {Z}\) in an ideal execution of \(\mathcal {F}\) and \(\mathcal {S}\) executing an internal copy of adversary \(\mathcal {A}\) and potentially a set of global functionalities.

  2. \(\mathcal {F}\circ \mathcal {S}^{\texttt{U}}\): The distribution of outputs of \(\mathcal {Z}\) in an ideal execution of \(\mathcal {F}\) with \(\mathcal {S}^{\texttt{U}}\), where \(\mathcal {S}^{\texttt{U}}\)’s corresponding input/output tapes are connected directly to a copy of \(\mathcal {A}\) and to global setup functionalities (instead of \(\mathcal {F}^{\texttt{V}}\)’s \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\) interface). \(\mathcal {S}^{\texttt{U}}\)’s tapes for simulated honest party inputs are initialized with the same inputs that are provided by the dummy honest parties to \(\mathcal {F}\) and \(\mathcal {S}^{\texttt{U}}\) is given a uniformly random tape.

\(\mathcal {Z}\) gives inputs to all parties as in the standard UC simulation experiment but only has access to the same input/output tapes of \(\mathcal {S}^{\texttt{U}}\) that it can access for \(\mathcal {S}\).

We remark that \(\mathcal {S}^{\texttt{U}}\) does not have two explicitly different modes of operation depending on whether it is executed inside \(\mathcal {F}^{\texttt{V}}\) or in the experiment of Definition 2. In both scenarios, \(\mathcal {S}^{\texttt{U}}\) has the same input/output tapes and the same access to \(\mathcal {F}\)’s interfaces; the sole difference is that its input/output tapes for a copy of the adversary and for global setup functionalities are connected either directly to the adversary and to these global functionalities (in the experiment of Definition 2) or to \(\mathcal {F}^{\texttt{V}}\)’s \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\) interface (when executed inside \(\mathcal {F}^{\texttt{V}}\)). This observation is important when arguing why \(\mathcal {S}^{\texttt{U}}\) does not give \(\mathcal {F}^{\texttt{V}}\)’s ideal adversary (i.e. \(\mathcal {F}^{\texttt{V}}\)’s simulator) any undue advantage by, e.g., leaking information about honest parties’ inputs.

Execution consistency states that the randomness for simulated honest parties output by an über simulator \(\mathcal {S}^{\texttt{U}}\) parameterized with the same inputs as the real honest parties must be consistent with the randomness of a real protocol execution. We use the following formal definition:

Definition 3 (Execution Consistency)

Let \(\varPi \) be a UC-secure implementation of the functionality \(\mathcal {F}\) in the \(\mathcal {F}_1,\dots ,\mathcal {F}_r\)-hybrid model and let \(\mathcal {S}\) be the simulator of the proof. We say that the PPT iTM \(\mathcal {S}^{\texttt{U}}\) is Execution-consistent for \((\varPi ,\mathcal {F},\mathcal {S})\) if for all PPT iTM \(\mathcal {Z}\) and PPT iTM \(\mathcal {A}\) the following distributions are indistinguishable:

  1. The distribution of outputs of \(\mathcal {Z}\) in a real execution of \(\varPi \) with adversary \(\mathcal {A}\) and honest parties \(\mathcal {P}_1,\ldots ,\mathcal {P}_k\) whose input and randomness pairs are \((x_{h_1},R_{h_1}),\dots ,(x_{h_k},R_{h_k})\) in the \(\mathcal {F}_1,\dots ,\mathcal {F}_r\)-hybrid model. The tuple of honest party randomness \((R_{h_1},\dots ,R_{h_k})\) is output by \(\mathcal {S}^{\texttt{U}}\) after an execution with \(\mathcal {F}\) where \(\mathcal {S}^{\texttt{U}}\) interacts with a copy of \(\mathcal {A}\) and \(\mathcal {S}^{\texttt{U}}\)’s tapes for simulated honest party inputs are initialized with the same honest party inputs \((x_{h_1},\ldots ,x_{h_k})\) as those given to \(\mathcal {P}_1,\ldots ,\mathcal {P}_k\).

  2. The distribution of outputs of \(\mathcal {Z}\) in a real execution of \(\varPi \) with adversary \(\mathcal {A}\) and honest parties \(\mathcal {P}_1,\ldots ,\mathcal {P}_k\) with inputs \((x_{h_1},\ldots ,x_{h_k})\) in the \(\mathcal {F}_1,\dots ,\mathcal {F}_r\)-hybrid model where the honest party randomness is sampled by \(\mathcal {Z}\).

\(\mathcal {Z}\) gives inputs to all parties in both the ideal and real executions as in the standard UC simulation experiment, the difference being that in 1. honest party randomness is provided by \(\mathcal {S}^{\texttt{U}}\) and in 2. it is sampled by \(\mathcal {Z}\).

For any PPT iTM \(\mathcal {S}^{\texttt{U}}\) with the input and output tapes defined above we then say that \(\mathcal {S}^{\texttt{U}}\) is an über simulator if it is simulation- and execution-consistent. We summarize this in the full version [6].

Über Simulators with Global Setup: In order to argue that \(\mathcal {S}^{\texttt{U}}\) does not leak any information on honest parties’ inputs through \(\mathcal {F}^{\texttt{V}}\)’s \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\) interface, we will restrict the class of global functionalities that can be used as setup in compiled protocols. Intuitively, we require that every global functionality used by such a protocol provides all parties with access to the same interfaces and answers queries from all parties with the same answer (as, e.g., a global random oracle does). This is necessary both for practical and technical reasons: 1-(practical) the verification procedure of our compiler needs the same access to the global setup as the party who activated verification and revealed its input/randomness; 2-(technical) \(\mathcal {S}^{\texttt{U}}\) must not be able to distinguish whether it is operating within \(\mathcal {F}^{\texttt{V}}\) or within the experiment of Definition 2. In order to achieve these goals, we introduce the notion of Admissible Global Setup in Definition 4 and restrict our compiler to work only on protocols with Admissible Global Setup.

Definition 4 (Admissible Global Setup)

A global ideal functionality \(\mathcal {G}\) is admissible if:

  • All parties \(\mathcal {P}_i \in \mathcal {P}\) have access to the same interfaces (i.e. all parties can send the same queries to \(\mathcal {G}\)).

  • For all of \(\mathcal {G}\)’s interfaces and all possible queries Q, there exists a single response R such that, upon receiving the query Q from any party \(\mathcal {P}_i \in \mathcal {P}\), \(\mathcal {G}\) returns R.
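For intuition, a global random oracle satisfies both conditions of Definition 4: any party may query it, and the response is a function of the query alone. A minimal sketch:

```python
import hashlib


class GlobalRandomOracle:
    """Sketch of an admissible global setup per Definition 4: a single
    interface open to all parties, where the response R depends only on
    the query Q, never on the identity of the querying party."""

    def query(self, party_id: str, q: bytes) -> bytes:
        # party_id is accepted from any party but deliberately ignored,
        # so every party receives the identical response R to a query Q.
        return hashlib.sha256(q).digest()
```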

3.2 Functionalities \(\mathcal {F}^{\texttt{V}}\) with Embedded Über Simulator \(\mathcal {S}^{\texttt{U}}\)

We now outline how an über simulator \(\mathcal {S}^{\texttt{U}}\) (defined in the full version [6]) for the protocol \(\varPi \) is used with a functionality \(\mathcal {F}^{\texttt{V}}\). Note that \(\mathcal {S}^{\texttt{U}}\) is internally executed by the functionality wrapper \(\mathcal {F}^{\texttt{V}}\) presented in Fig. 3, which can be accessed by an ideal adversary (i.e. \(\mathcal {F}^{\texttt{V}}\)’s simulator) interacting with \(\mathcal {F}^{\texttt{V}}\) through the interfaces \({\textbf {Input}}_{i}\) and \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\). Moreover, \(\mathcal {F}^{\texttt{V}}\) allows \(\mathcal {S}^{\texttt{U}}\) to query admissible global setup functionalities \(\mathcal {F}_1,\ldots ,\mathcal {F}_r\) (according to Definition 4) on behalf of honest parties.

The internal \(\mathcal {S}^{\texttt{U}}\) executed by \(\mathcal {F}^{\texttt{V}}\) takes care of simulating the original protocol \(\varPi \) (realizing \(\mathcal {F}\)) that is compiled into a publicly verifiable protocol \(\varPi ^{\texttt{V}}\) realizing \(\mathcal {F}^{\texttt{V}}[\mathcal {F}]\), while the external simulator \(\mathcal {S}^{\texttt{V}}\) interacting with \(\mathcal {F}^{\texttt{V}}\) takes care of simulating the additional protocol steps and building blocks used to obtain public verifiability in \(\varPi ^{\texttt{V}}\). In order to do so, \(\mathcal {F}^{\texttt{V}}\) parameterizes \(\mathcal {S}^{\texttt{U}}\) with the inputs of all honest parties \(\mathcal {P}_i\), which are received through the interface \({\textbf {Input}}_{i}\), as well as with the randomness of \(\mathcal {F}\) if the functionality is probabilistic. As the execution progresses, \(\mathcal {S}^{\texttt{V}}\) executes the compiled protocol \(\varPi ^{\texttt{V}}\) with an internal copy \(\mathcal {A}\) of the adversary, extracts the messages of the original protocol \(\varPi \) from this execution and forwards these messages to \(\mathcal {S}^{\texttt{U}}\) through the interface \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\). Moreover, \(\mathcal {S}^{\texttt{V}}\) provides answers to queries from \(\mathcal {A}\) to setup functionalities as instructed by \(\mathcal {S}^{\texttt{U}}\), also through the interface \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\). All the while, queries from the honest parties simulated by \(\mathcal {S}^{\texttt{U}}\) to setup functionalities are directly forwarded back and forth by \(\mathcal {F}^{\texttt{V}}\).
If verification is ever activated by an honest party \(\mathcal {P}_i\) (and \(\mathcal {P}_i \in \textsf{RIR}\)), \(\mathcal {F}^{\texttt{V}}\) not only leaks that party’s input to \(\mathcal {S}^{\texttt{V}}\) but also that party’s randomness \(R_{h_i}\) in the simulated execution (provided by \(\mathcal {S}^{\texttt{U}}\)). As we discuss in Sect. 5, this allows \(\mathcal {S}^{\texttt{V}}\) to simulate verification, since it now has both a valid transcript of an execution of \(\varPi ^{\texttt{V}}\) with \(\mathcal {A}\) and an input and randomness pair that matches that transcript (provided by \(\mathcal {F}^{\texttt{V}}\) with the help of \(\mathcal {S}^{\texttt{U}}\)).

We remark that this strategy does not give the simulator \(\mathcal {S}^{\texttt{V}}\) any extra power in simulating an execution of the compiled protocol \(\varPi ^{\texttt{V}}\) towards \(\mathcal {A}\) beyond the power that the simulator \(\mathcal {S}\) for the original protocol \(\varPi \) already has. We will establish that the access to \(\mathcal {S}^{\texttt{U}}\) given by \(\mathcal {F}^{\texttt{V}}\) to \(\mathcal {S}^{\texttt{V}}\) does not allow it to obtain any information about the inputs of honest parties. Notice that in an execution with admissible global setup (according to Definition 4), the only difference between \(\mathcal {S}^{\texttt{U}}\)’s execution within \(\mathcal {F}^{\texttt{V}}\) and \(\mathcal {S}^{\texttt{U}}\)’s execution in the experiment of Definition 2 is that, when it is executed within \(\mathcal {F}^{\texttt{V}}\), its input/output tapes for a copy of the adversary are connected to \(\mathcal {S}^{\texttt{V}}\) via the \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\) interface. Hence, the only way \(\mathcal {S}^{\texttt{U}}\) could detect that it is being executed within \(\mathcal {F}^{\texttt{V}}\) and leak any undue information is through its interaction on the adversary input/output tapes. However, the definition in the full version [6] establishes that this interaction is indistinguishable from that of the original simulator \(\mathcal {S}\) for protocol \(\varPi \). Since \(\varPi \) is UC-secure, an execution of \(\mathcal {F}\) with \(\mathcal {S}\) does not leak any information about the simulated parties’ inputs (i.e. inputs randomly picked by \(\mathcal {S}\)), as such leakage would trivially allow \(\mathcal {Z}\) to distinguish an execution of \(\mathcal {F}\) with \(\mathcal {S}\) from a real world execution of \(\varPi \) with \(\mathcal {A}\). Thus, by the definition of an über simulator in the full version [6] and the UC security of \(\varPi \), \(\mathcal {S}^{\texttt{U}}\) does not leak any information about honest party inputs to \(\mathcal {S}^{\texttt{V}}\) via the interface \(\mathbf {NMF_{\mathcal {S}^{\texttt{U}}}}\) when executed within \(\mathcal {F}^{\texttt{V}}\).

4 Joint Authentication Functionalities

We now define authentication functionalities that serve as building blocks for our compiler. These functionalities allow for a set of parties to jointly authenticate messages but do not deliver these messages themselves. Later on, a verifier can check that a given message has indeed been authenticated by a given set of parties, meaning that they have received this message through a channel and agree on it. More interestingly, we extend this functionality to allow for joint authentication of private messages that are only known in encrypted form.

As opposed to classical point-to-point or broadcast authenticated channels, our functionalities do not deliver messages to the receiving parties and consequently do not ensure consensus. These functionalities come into play in our compiler later as they allow for verifiers to check that all parties who executed a protocol agree on the transcript (that might contain private messages) regardless of how the messages in the transcript have been obtained. Having the parties agree on which messages have been sent limits the adversary’s power to generate an alternative transcript aiming at forging a proof that the protocol reached a different outcome, i.e. our notion of transcript non-malleability.

Public Joint Authentication: First, consider the simple case of authenticating public messages (known by all parties participating in the joint authentication procedure). Here, the sender starts by providing a message and ssid pair to the functionality and joint authentication is achieved after each of the other parties sends the same pair to the functionality. This can be implemented by a simple protocol where all parties sign each message received from each other party in each round, sending the resulting signatures to all other parties. A message is considered authenticated if it is signed by all parties. Notice that this protocol does not ensure consensus and can easily fail if a single party does not provide a valid signature on a single message, which an adversary corrupting any party (or the network) can always cause (this is captured in the functionality). Functionality \(\mathcal {F}_{\textsf{PJAuth}}\) is described in the full version [6].
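The simple sign-everything protocol sketched above can be illustrated as follows. As a self-contained stand-in for \(\mathcal {F}_{\textsf{Sig}}\) we use HMAC tags under per-party keys; a real instantiation would use an EUF-CMA signature scheme with keys registered via \(\mathcal {F}_{\textsf{Reg}}\):

```python
import hashlib
import hmac


def sign(key: bytes, ssid: bytes, msg: bytes) -> bytes:
    # Toy stand-in for a party's signature on the (ssid, message) pair.
    return hmac.new(key, ssid + b"|" + msg, hashlib.sha256).digest()


def jointly_authenticated(keys: dict, ssid: bytes, msg: bytes, sigs: dict) -> bool:
    """A pair (ssid, msg) counts as jointly authenticated only if every
    party contributed a valid signature on it; a single missing or invalid
    signature makes authentication fail, mirroring the described protocol."""
    return all(
        pid in sigs and hmac.compare_digest(sigs[pid], sign(k, ssid, msg))
        for pid, k in keys.items()
    )
```

Note how the all-or-nothing check reflects the failure mode described above: an adversary corrupting a single party can always withhold one signature and thereby prevent joint authentication.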

Secret Joint Authentication: We further define a functionality \(\mathcal {F}_{\textsf{SJAuth}}\) (described in the full version [6]). This functionality works similarly to \(\mathcal {F}_{\textsf{PJAuth}}\), allowing parties to jointly authenticate messages received through private channels to which they have access. However, it also allows for bureaucrat parties who observe the encrypted communication (but do not see plaintext messages) over the private channel to jointly authenticate a committed version of these plaintext messages. If a private message is revealed by its sender (or one of its receivers), \(\mathcal {F}_{\textsf{SJAuth}}\) allows third parties (including the bureaucrats) to verify that this message is indeed the one that was authenticated.

In order to capture the different actions of each party it interacts with, \(\mathcal {F}_{\textsf{SJAuth}}\) is parameterized by the following (sets of) parties: a sender \(\mathcal {P}_{\texttt{snd}}\) that can input messages to be jointly authenticated; a set of parties \(\mathcal {P}\) who receive input messages from \(\mathcal {P}_{\texttt{snd}}\) and jointly authenticate them; and a set of bureaucrats \(\mathcal {B}\) who jointly authenticate that \(\mathcal {P}_{\texttt{snd}}\) has sent a certain (unknown to them) committed message to \(\mathcal {P}\). \(\mathcal {F}_{\textsf{SJAuth}}\), like \(\mathcal {F}_{\textsf{PJAuth}}\), does not aid in sending messages, notifications about sent messages, or joint authentication information to any party. The responsibility for sending messages lies with \(\mathcal {P}_{\texttt{snd}}\), while \(\mathcal {P}_{\texttt{snd}}\) or any \(\mathcal {P}_i \in \mathcal {P}\) can notify other parties that plaintext verification is possible.

We realize \(\mathcal {F}_{\textsf{SJAuth}}\) with a signature scheme and a certified encryption scheme with plaintext verification, i.e. an encryption scheme with two properties: (1) all parties’ public keys are registered in a PKI, making sure that encrypted messages can only be opened by the intended receiver; (2) both encrypting and decrypting parties can generate publicly verifiable proofs that a certain message was contained in a given ciphertext. The private channel itself is realized by encrypting messages under the encryption scheme, while joint authentication is achieved by having all parties in \(\mathcal {P}\) (including the sender) and bureaucrats in \(\mathcal {B}\) sign the resulting ciphertext. To prove that a certain message was indeed contained in the ciphertext, the receiver(s) recovers the plaintext message and a proof of plaintext validity from the ciphertext and sends those to the verifier(s). A verifier then checks that the message was indeed contained in a previously sent ciphertext and that this ciphertext has been signed by all parties in \(\mathcal {P}\) and \(\mathcal {B}\). This construction and a concrete realization are described in the full version [6].
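The verifier's two checks in this construction can be sketched generically; `check_proof` and `check_sig` are hypothetical callbacks standing in for the plaintext-validity proof verification and signature verification of the concrete schemes:

```python
def verify_revealed_message(msg, proof, ciphertext, signatures,
                            parties, bureaucrats, check_proof, check_sig):
    """Verifier side of the F_SJAuth realization sketched above:
    (1) the revealed plaintext matches the previously sent ciphertext
        via the proof of plaintext validity;
    (2) the ciphertext carries valid signatures from all parties in P
        and all bureaucrats in B."""
    if not check_proof(msg, proof, ciphertext):
        return False
    signers = list(parties) + list(bureaucrats)
    return all(check_sig(pid, ciphertext, signatures.get(pid)) for pid in signers)
```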

Authenticating Inputs and Randomness: To provide authentication of inputs and randomness we adapt the functionality \(\mathcal {F}_{\textsf{SJAuth}}\), as the desired capability is essentially message authentication without a receiver. In the full version [6] we present a functionality \(\mathcal {F}_{\textsf{IRAuth}}\) that implements this.

5 Compilation for Input-Revealing Protocols

We now sketch how to compile protocols from Sect. 2 into non-interactively verifiable counterparts. As we focus on protocols according to the definition in the full version [6], there are 8 combinations of the parameters \((\nu ,\mu )\) of \((\nu ,\textsf{RIR},\mu )\)-transcript non-malleable protocols to consider. Furthermore, we might either have public or private verifier registration, which in total yields 16 different definitions. To avoid redundancy, we now outline how to achieve the respective verifiability in each setting. We simplify notation by using just a single verifier \(\mathcal {V}\).

Assume a \((\nu ,\textsf{RIR},\mu )\)-transcript non-malleable protocol \(\varPi \) that UC realizes the functionality \(\mathcal {F}\) in the (global) \(\mathcal {F}_1,\dots ,\mathcal {F}_r\)-hybrid model with über simulator \(\mathcal {S}^{\texttt{U}}\) for \((\varPi ,\mathcal {F},\mathcal {S})\). Then compilation works as follows:

  1. We describe how to construct a protocol \(\varPi ^{\texttt{V}}\) by modifying \(\varPi \) with access to a signature functionality \(\mathcal {F}_{\textsf{Sig}}\), a key registration functionality \(\mathcal {F}_{\textsf{Reg}}\) and the authentication functionalities \(\mathcal {F}_{\textsf{PJAuth}},\mathcal {F}_{\textsf{SJAuth}},\mathcal {F}_{\textsf{IRAuth}}\). We furthermore require that the hybrid functionalities \(\mathcal {F}_1,\dots ,\mathcal {F}_r\) used in \(\varPi \) can be replaced with verifiable counterparts \(\mathcal {F}^{\texttt{V}}_1,\dots ,\mathcal {F}^{\texttt{V}}_r\).

  2. In Appendix 3.2 we show that \(\varPi ^{\texttt{V}}\) UC-realizes \(\mathcal {F}^{\texttt{V}}[\mathcal {F}]\) in the \(\mathcal {F}^{\texttt{V}}_1,\dots ,\mathcal {F}^{\texttt{V}}_r\)-hybrid model by constructing a simulator \(\mathcal {S}^{\texttt{V}}\).

Protocol Compilation - The Big Picture. To verify, we let the verifier \(\mathcal {V}\) locally simulate each party whose output is to be checked and which participated in an instance of \(\varPi \). This check is based on the inputs, randomness and messages related to such a party (and/or other parties), which \(\mathcal {V}\) obtains for this process. In case of public verifier registration we assume that a bulletin board is available which holds the protocol transcript, whereas in case of private registration the verifier contacts one of the protocol parties to obtain a transcript, which it can then verify non-interactively. We stress that the bulletin board which may contain the protocol transcript does not have to be used to exchange messages during the actual protocol run.

In \(\varPi \) we assume that messages are exchanged either secretly between two parties or via a broadcast channel. Furthermore, parties may send messages to hybrid functionalities or receive messages from them. An adversary may be able to replace certain parts of the protocol transcript. As long as the protocol is \((\nu ,\textsf{RIR},\mu )\)-transcript non-malleable and the adversary's ability to maul the protocol transcript is constrained to those parts permitted by the definition, the overall construction achieves verifiability. We now explain, on a high level, the modifications to \(\varPi \) for the different values of \(\mu , \nu \):

  • \(\mu =\textsf{ncmes}\): The adversary can replace all messages of any party at will; messages are exchanged just as in \(\varPi \).

  • \(\mu =\textsf{chsmes}\): Before the protocol begins, each \(\mathcal {P}_i\) generates a signing key with \(\mathcal {F}_{\textsf{Sig}}\) and registers it with \(\mathcal {F}_{\textsf{Reg}}\). Whenever \(\mathcal {P}_i\) sends a message \(\texttt{m}_{i,j}\) to \(\mathcal {P}_j\), it uses \(\mathcal {F}_{\textsf{Sig}}\) to authenticate \(\texttt{m}_{i,j}\) with a signature \(\sigma _{i,j}\). \(\mathcal {V}\) will later be able to verify exactly those messages that were sent by honest parties, since \(\mathcal {A}\) can forge messages and signatures on behalf of dishonest parties.

  • \(\mu =\textsf{chmes}\): Each message that is sent or received by an honest party must remain unaltered. Each party does the same as for \(\mu =\textsf{chsmes}\), but whenever \(\mathcal {P}_i\) receives a message \(\texttt{m}_{j,i}\) from \(\mathcal {P}_j\), it additionally uses \(\mathcal {F}_{\textsf{Sig}}\) to authenticate \(\texttt{m}_{j,i}\) with a signature \(\sigma _{j,i}\). Now \(\mathcal {V}\) can establish for each message of the protocol whether both sender and receiver signed the same message. \(\mathcal {A}\) can only alter messages that were both sent and received by dishonest parties.

  • \(\mu =\textsf{cmes}\): Here even dishonest parties cannot replace their messages before verification. To achieve this, we use \(\mathcal {F}_{\textsf{SJAuth}},\mathcal {F}_{\textsf{PJAuth}}\) as defined in Sect. 4, which the parties now use to register their private message exchanges. \(\mathcal {V}\) can then use these functionalities to validate the transcript.

  • \(\nu =\textsf{ncir}\): After each \(\mathcal {P}_i\) has set up a key with \(\mathcal {F}_{\textsf{Sig}}\) and registered it with \(\mathcal {F}_{\textsf{Reg}}\), each party signs both its input \(x_i\) and its randomness \(r_i\) using \(\mathcal {F}_{\textsf{Sig}}\) before sending them in \({\textbf {Activate Verification}}\). \(\mathcal {V}\) only accepts signed values which it can verify via \(\mathcal {F}_{\textsf{Sig}}\). \(\mathcal {A}\) can replace the pairs \((x_j,r_j)\) of dishonest parties \(\mathcal {P}_j\) by generating different signatures.

  • \(\nu =\textsf{cir}\): The parties use \(\mathcal {F}_{\textsf{IRAuth}}\) to authenticate their inputs and randomness. \(\mathcal {V}\) uses \(\mathcal {F}_{\textsf{IRAuth}}\) to check the validity of the revealed values \(x_i, r_i\) that it obtained.

  • Hybrid Functionalities: Replace the hybrid functionalities \(\mathcal {F}_1,\dots ,\mathcal {F}_r\) with NIV counterparts, i.e. with functionalities \(\mathcal {F}^{\texttt{V}}_1,\dots ,\mathcal {F}^{\texttt{V}}_r\) that have the same interfaces as defined in Sect. 2.1. To achieve public verifiability, each such \(\mathcal {F}^{\texttt{V}}_q\) must also be publicly verifiable. If a global functionality is used as setup, it must be admissible according to Definition 4, so that the verification procedure can re-execute the protocol. For any such \(\mathcal {F}^{\texttt{V}}_q\), \(\mathcal {V}\) can establish whether a message \(\texttt{mres}_{q,i}\) was indeed sent to \(\mathcal {P}_i\). If \(\mathcal {F}^{\texttt{V}}_q\) also reveals inputs, \(\mathcal {V}\) can additionally test whether a message \(\texttt{mres}_{i,q}\) claimed to have been sent by \(\mathcal {P}_i\) was indeed received by the functionality.
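The double-signing idea behind \(\mu =\textsf{chmes}\) above can be illustrated with a small sketch. HMAC again serves as a symmetric stand-in for \(\mathcal {F}_{\textsf{Sig}}\)/\(\mathcal {F}_{\textsf{Reg}}\) (so the verifier here holds the keys, unlike with real signatures), and all names are illustrative.

```python
# Sketch of mu = chmes: sender and receiver both sign each exchanged
# message, so a verifier can check that they agree on the transcript.
import hashlib, hmac, os

keys = {"Pi": os.urandom(32), "Pj": os.urandom(32)}  # stand-in for F_Reg

def sign(party: str, data: bytes) -> bytes:
    return hmac.new(keys[party], data, hashlib.sha256).digest()

def transcript_entry(sender, receiver, message):
    # Both endpoints authenticate the same (direction, message) pair.
    tag = f"{sender}->{receiver}:".encode() + message
    return {"msg": message, "sender": sender, "receiver": receiver,
            "sig_send": sign(sender, tag), "sig_recv": sign(receiver, tag)}

def verifier_accepts(entry) -> bool:
    tag = f"{entry['sender']}->{entry['receiver']}:".encode() + entry["msg"]
    return (hmac.compare_digest(entry["sig_send"], sign(entry["sender"], tag))
            and hmac.compare_digest(entry["sig_recv"], sign(entry["receiver"], tag)))

e = transcript_entry("Pi", "Pj", b"round-1 message")
assert verifier_accepts(e)
e["msg"] = b"mauled message"   # altering the transcript breaks both checks
assert not verifier_accepts(e)
```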

Public Verifiability Compiler. The basic idea is to turn any \((\nu ,\textsf{RIR},\mu )\)-transcript non-malleable protocol into a \((\textsf{cir},\textsf{RIR},\mu )\)-transcript non-malleable protocol by forcing the adversary to commit to all the corrupted parties’ randomness and inputs, and then to turn it into a \((\textsf{cir},\textsf{RIR},\textsf{cmes})\)-transcript non-malleable protocol by forcing the adversary to commit to all messages. While this might be overkill for some protocols, we focus on the worst-case scenario of compiling \((\textsf{ncir},\textsf{RIR},\textsf{ncmes})\)-transcript non-malleable protocols, since it is the most challenging. Once a protocol is \((\textsf{cir},\textsf{RIR},\textsf{cmes})\)-transcript non-malleable, its execution becomes deterministic and can be verified upon revealing the randomness, input and transcript of any party that activates the verification. All the verifier has to do is execute the protocol's next-message function on this randomness and input, taking the received messages from the transcript. We present a detailed description of this compiler and a formal theorem statement together with its proof in the full version [6].
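Under the assumptions above, verification reduces to deterministic re-execution: with input, randomness and all received messages fixed, each message the party should have sent is determined. A toy sketch (the next-message function and all names are hypothetical):

```python
# Sketch of the verification step for a (cir, RIR, cmes)-transcript
# non-malleable protocol: the verifier re-runs the next-message function
# on the revealed (input, randomness) and compares each produced message
# to the committed transcript.
def next_message(x, r, round_no, received):
    """Toy next-message function: deterministic in (x, r, received)."""
    return ("msg", round_no, x + r + sum(received))

def verify_party(x, r, sent, received_per_round):
    """sent[k] is the message the transcript claims was sent in round k."""
    for k, claimed in enumerate(sent):
        if next_message(x, r, k, received_per_round[k]) != claimed:
            return False   # transcript inconsistent with revealed (x, r)
    return True

x, r = 7, 13
received = [[1, 2], [3]]                      # messages taken from transcript
sent = [next_message(x, r, 0, received[0]),   # honest run of the party
        next_message(x, r, 1, received[1])]
assert verify_party(x, r, sent, received)
assert not verify_party(x, r + 1, sent, received)  # wrong randomness rejected
```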