1 Introduction

Deniability, first introduced by Dolev, Dwork, and Naor [30], is a notion that has received considerable attention because of its application to authentication protocols. This property allows a user to argue to a third party that it did not take part in a protocol execution. The usual argument made by the user is that the server could have simulated a valid communication transcript without actually interacting with the user.

A variant of deniability was considered in the case of encryption schemes [15, 16, 63], where a public \(\textsf{Expl}\) algorithm allows anyone to open any ciphertext to any message without the secret key. Since we can publicly open ciphertexts, the random coins cannot serve as proof that a particular message is encrypted.

A similar concept was recently introduced to ring signatures [58] under the name unclaimability. The property states that no one can claim to be the signer of a particular ring signature \(\sigma \). The premise is similar: there exists an \(\textsf{Expl}\) algorithm that allows any of the ring members to generate random coins that reproduce the same \(\sigma \).

Deniability and unclaimability are related notions. In the former, we consider a malicious server that tries to obtain undeniable proof of an interaction. In the latter, the malicious party is a different user who tries to make it impossible for honest users to explain an interaction/signature. Interestingly, the deniability and unclaimability definitions studied in the literature only consider scenarios where the party producing a transcript/signature/ciphertext is honest, but may eventually become corrupted in the future.

1.1 Contribution

We introduce a new property for argument systems called explainability, which informally resembles deniability and unclaimability. We consider interactive and non-interactive variants of such systems. We show that achieving strong explainability is hard and requires very strong primitives like witness encryption (WE) and indistinguishability obfuscation (iO). Our contribution can be summarized as follows.

New Definitions. We introduce a new property for argument systems that we call explainability, i.e., the ability for anyone with a valid witness \(\textsf{wit}\) to compute the random coins \(\textsf{coins}\) that “explain” a given argument \(\arg \). By “explain,” we mean that the witness and coins result in the same argument string \(\arg = \textsf{Prove}(\textsf{stmt},\textsf{wit}; \textsf{coins})\), or in the same transcript of an interaction given the same instance of the verifier. Thus, if one can explain an argument for all witnesses and all coins, then such an argument/transcript cannot serve as proof that a particular witness was used. We account for certain subtle differences between interactive and non-interactive arguments. In both cases, we consider malicious prover explainability, where a prover tries to create a proof that other provers cannot explain with a different but valid witness. In this case, we require the protocol to be unique, in the sense that it is infeasible for a malicious prover to produce two different arguments (or transcripts) that the verifier accepts given the same statement and random coins. For the interactive case, we also consider a malicious verifier (similar to deniability) that can abort the protocol execution or send corrupt messages to make it impossible for provers with a different witness to explain the current interaction. Since there is no interaction with a verifier in the non-interactive case, we instead consider a scenario where the common reference string (if used) is maliciously generated. We refer to this case as malicious setup explainability. Additionally, we call a (non-)interactive argument system fully explainable when it is explainable even if both the setup/verifier and the prover are malicious.

Implications. To study the power of explainable arguments, we prove several interesting implications.

  • We show that if an argument system is malicious verifier explainable, then it is also witness indistinguishable.

  • We show that non-interactive malicious prover explainable arguments and one-way functions imply witness encryption (WE). This result serves as evidence that constructing such arguments is difficult and requires strong cryptographic primitives.

Constructions of Interactive Explainable Arguments. We introduce new properties for witness encryption that we call robustness and plaintext awareness. Informally, robustness ensures that decryption is independent of which witness is used; in other words, there do not exist two valid witnesses for which a ciphertext decrypts to different messages (or \(\bot \)). Plaintext awareness ensures that an encrypter must know the plaintext it encrypted. We then show how to leverage robust witness encryption to construct interactive explainable arguments. The resulting protocol is round-optimal, predictable, and can be instantiated to yield an optimally laconic argument. Given that the witness encryption is plaintext aware, we can show that the protocol is zero-knowledge. Finally, assuming the witness encryption is extractably secure, we can show that our protocol is a proof of knowledge.

Constructions of Non-interactive Explainable Arguments. We show how to construct malicious setup and malicious prover explainable arguments from indistinguishability obfuscation. While malicious prover explainable arguments can trivially be built using techniques from Sahai and Waters [63], the case of malicious setup explainable arguments is more involved and requires us to use dual-mode witness-indistinguishable proofs. Furthermore, we show how to build fully explainable arguments, additionally assuming NIZK.

Why Study Explainable Arguments? Argument systems are fundamental primitives in cryptography. While some privacy properties like zero-knowledge already give a strong form of deniability, our notion of explainability is much stronger, as it considers the extreme case where the prover's coins are leaked or are chosen maliciously. For example, using our explainable arguments, we can construct explainable interactive anonymous authentication schemes, where anonymity is defined similarly as in ring-signature schemes (see the full paper [45]). Notably, we can construct CCA1-secure encryption with deniability as defined by Sahai and Waters [63] from CPA-secure deniable encryption and our explainable arguments, assuming random oracles. Our deniable encryption is a variant of the Naor-Yung transform [56], but relies only on witness indistinguishability instead of zero-knowledge, which allows us to instantiate the transformation using our explainable arguments.

Malicious Verifier/Setup Explainability. We consider adversaries that are substantially more powerful than what is usually studied in the literature, e.g., in deniable authentication schemes or ring signatures. In particular, in our case, the user can deny an argument even when the adversary asks to reveal the user's random coins used to produce the argument. Immediate real-world examples of such powerful adversaries are rogue nation-state actors that might have the right to confiscate a user's hardware and apply effective forensic techniques to obtain the random seeds as evidence against the user. We believe that the threat posed by such potent adversaries may prevent the use of, e.g., ring signatures by whistleblowers, as the anonymity notions provided might be insufficient.

Malicious Prover Explainability. The main application we envision for malicious prover explainability is internet voting. An essential part of a sound and fair voting scheme is to prevent the selling of votes by malicious voters. We note that the vote-selling issue is not limited to actual bribery but, perhaps more critically, includes forcing eligible voters to vote for a particular candidate. In this case, an authoritarian forces others to deliver evidence that they voted for a particular option or participated in a specific digital event. The authoritarian here may be an abusive family member, a corrupt supervisor, or an employer. Our strong unclaimability notion is essential to handle such drastic cases, mainly because users might be coerced or bribed to use specific coins in the protocol.

1.2 Related Work

Explainability of the verifier was used by Bitansky and Choudhuri [8] as a step in proving the existence of deterministic-prover zero-knowledge proofs. In their definition, they use the fact that the choices of a verifier can be “explained” by outputting random coins that lead to the same behaviour. This can later be used to transform the system into one that is secure even against a malicious verifier. In contrast, we consider explainability of the prover. While arguments with our type of explainability have not been studied before, there exist some related concepts. Here we give an overview of the related literature.

Deniable Authentication. Dolev, Dwork, and Naor [30] first introduced the concept of deniability. The first formal definition is due to Dwork, Naor, and Sahai [32]. Deniability was studied in numerous works [25, 48, 55] in the context of authentication protocols. The concept was later generalized to authenticated key exchange and was first formally defined by Di Raimondo and Gennaro [26]. Since then, deniable key exchange protocols have received much attention from the community [11, 24, 27, 28, 46, 49, 51, 65, 66, 67, 68, 69]. In such protocols, deniability is informally defined as a party's ability to simulate the transcript of an interaction without actually communicating with the other party. Since each party can generate a transcript itself, the transcript cannot be used as proof to a third party that the interaction took place. At a high level, deniability is very similar to zero-knowledge, but it is important to mention that Pass [59] showed some subtle differences between the two notions.

Deniable Encryption. Deniable encryption was first introduced by Canetti, Dwork, Naor, and Ostrovsky [15]. Here we deal with a post-compromise situation, where an honest encrypter may be forced to “open” a ciphertext. In other words, given a ciphertext, it should be possible to show a message and randomness that result in the given ciphertext. Deniable encryption was intensively studied [1, 7, 20, 21, 22, 41, 57, 63]. Very recently, Canetti, Park, and Poburinnaya [16] generalized deniable encryption to the case where multiple parties are compromised and showed constructions, also assuming indistinguishability obfuscation.

Ring Signatures. Early forms of deniability were the main motivation for the work of Rivest, Shamir, and Tauman [61], which introduced the concept of ring signatures. This early concept took into account a relaxed form of deniability where only the secret key of a user may leak. Very recently, the authors of [58] extended ring signatures with additional deniability properties. For example, they show a signer-deniable ring signature where any signer may generate random coins that, together with its secret key, result in the given signature. However, they must assume the signer is honest at the moment of signature generation. In our argument setting, we make no such assumption.

We are the first to study arguments with unclaimability and deniability properties that allow denying a protocol execution even when the prover is forced to reveal all its random coins, or where the prover chooses its coins maliciously. Previous works mostly address a post-compromise setting, whereas some of our explainability notions take a malicious prover into account. We believe that our primitives may find applications in voting protocols as a means of providing consistency checks or anonymous authentication of votes. For example, the protocols from [17, 62] rely on a trusted party to verify a voter's signature; that party knows the user's vote. Using our explainable arguments, we can build (see the full paper) a simple anonymous authentication protocol without degrading receipt freeness of the voting scheme and, in effect, remove the trust assumption in terms of privacy.

Receipt Freeness and Coercion Resistance in Voting Schemes. Some of our definitions and potential applications are tightly connected to voting schemes. In particular, our definition of malicious prover explainability poses, at a high level, the same requirements for an argument system as receipt freeness or coercion resistance for voting schemes [6, 47, 54, 64]. Since we focus on a single primitive, our definitions are much simpler in comparison to those for complex voting systems. For example, the definition from [17] involves numerous oracles, defines a set of parties, and assumes trusted parties. Our definition of malicious prover explainability is simple and says that it is infeasible to produce two different arguments for the same statement that both verify correctly.

Outline of the Paper. In Sect. 3 we give definitions of explainable argument systems. In Sect. 4 we construct non-interactive explainable arguments. In Sect. 5 we introduce robust witness encryption and apply it to build interactive explainable arguments. Finally, in Sect. 6, we show how to apply explainable arguments to construct deniable CCA-secure encryption. In the full paper [45], we recall all definitions of the primitives used, show an explainable anonymous authentication protocol, and give all security proofs.

2 Preliminaries

Notation. We denote execution of an algorithm \(\textsf{Alg}\) on input x as \(a \leftarrow \textsf{Alg}(x)\), where the output is assigned to a. Unless said otherwise, we assume that algorithms are probabilistic and choose some random coins internally. In some cases, however, we write \(\textsf{Alg}(.; r)\) to denote that \(\textsf{Alg}\) proceeds deterministically on input a seed \(r \in \{0, 1\}^{s}\) for some integer \(s\). We denote an execution of a protocol between parties V and P by \(\langle \textsf{Prove}(.) \rightleftharpoons \textsf{Verify}(.) \rightarrow x \rangle = \textsf{trans}\), where x is the output of \(\textsf{Verify}\) after completion of the protocol, and \(\textsf{trans}\) is the transcript of the protocol. A transcript \(\textsf{trans}\) contains all messages sent between \(\textsf{Prove}\) and \(\textsf{Verify}\) and the input of \(\textsf{Verify}\). We write \(\textsf{View}(\textsf{Prove}(.) \rightleftharpoons \textsf{Verify}(.))\) to denote the view of \(\textsf{Verify}\). The view contains the transcript, all input to \(\textsf{Verify}\) including its random coins, and its internal state. We say that a function \(\textsf{negl}: \mathbb {N}\mapsto \mathbb {R}^+\) is negligible if for every constant \(c>0\) there exists an integer \(N_c \in \mathbb {N}\) such that for all \(\lambda > N_c\) we have \(\textsf{negl}(\lambda ) < \lambda ^{-c}\).

Standard Definitions. We use a number of standard cryptographic tools throughout the paper, including: pseudorandom generators and Goldreich-Levin hard-core bits [39], existentially unforgeable and unique signature schemes [37, 42], zero-knowledge (ZK) and witness-indistinguishable (WI) argument systems, non-interactive ZK arguments from non-falsifiable assumptions [35], dual-mode witness-indistinguishable proofs [43], CCA1-secure and publicly deniable encryption [63], witness encryption [36] and extractable witness encryption [40], indistinguishability obfuscation [3], and punctured pseudorandom functions [13, 14, 50].

3 Explainable Arguments

In this section, we introduce the security notions for explainable arguments.

3.1 Interactive Explainable Arguments

In an interactive argument system, the prover uses a witness \(\textsf{wit}\) for statement \(\textsf{stmt}\) to convince the verifier that the statement is true. The communication between the prover and the verifier creates a transcript \(\textsf{trans}\) that contains all the exchanged messages. An interactive explainable argument system allows a prover with a different witness \(\textsf{wit}^*\) to generate random coins \(\textsf{coins}\) for which \(\textsf{Prove}(\textsf{stmt}, \textsf{wit}^*; \textsf{coins})\), interacting with the same instance of the verifier (i.e., the verifier uses the same random coins), creates the same transcript \(\textsf{trans}\). In other words, any prover with a valid witness can provide random coins that explain the interaction in \(\textsf{trans}\). More formally:

Definition 1

(Interactive Explainable Arguments). An interactive argument system \(\varPi _\mathcal {R}= (\textsf{Prove}, \textsf{Verify})\) for language \(\L _\mathcal {R}\) is an interactive explainable argument system if there exists an additional \(\textsf{Expl}\) algorithm:

  • \(\textsf{Expl}(\textsf{stmt}, \textsf{wit}, \textsf{trans})\): takes as input a statement \(\textsf{stmt}\), any valid witness \(\textsf{wit}\) (i.e., \(\mathcal {R}(\textsf{stmt},\textsf{wit})=1\)) and a transcript \(\textsf{trans}\), and outputs \(\textsf{coins}\in \textbf{Coin}_\textsf{Prove}\) (i.e., coins that are in the space of the randomness used by \(\textsf{Prove}\)),

which satisfies the correctness definition below.

Definition 2

(Correctness). For all security parameters \(\lambda \), all statements \(\textsf{stmt}\in \L _\mathcal {R}\), and all \(\textsf{wit}, \textsf{wit}^{*}\) such that \(\mathcal {R}(\textsf{stmt},\textsf{wit})=\mathcal {R}(\textsf{stmt},\textsf{wit}^{*})=1\), we have

$$\begin{aligned}&\langle \textsf{Verify}(\textsf{stmt}) \rightleftharpoons \textsf{Prove}(\textsf{stmt}, \textsf{wit}) \rangle =\\&\langle \textsf{Verify}'(\textsf{stmt}; \textsf{trans}) \rightleftharpoons \textsf{Prove}(\textsf{stmt}, \textsf{wit}^{*}; \textsf{coins}_E)\rangle = \textsf{trans}, \end{aligned}$$

where \(\textsf{coins}_E \leftarrow \textsf{Expl}(\textsf{stmt}, \textsf{wit}^{*}, \textsf{trans})\), \(\textsf{coins}_E \in \textbf{Coin}_\textsf{Prove}\), and \(\textsf{Verify}'\) sends its messages as in \(\textsf{trans}\) as long as \(\textsf{Prove}\) answers as in \(\textsf{trans}\). If the output of \(\textsf{Prove}\) does not match \(\textsf{trans}\), then \(\textsf{Verify}'\) aborts and outputs \(\bot \).
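
Concretely, the replay check behind this definition can be sketched in code. The following Python fragment is a minimal illustration with entirely hypothetical interfaces: the prover is modeled as a stateful object answering one verifier message at a time, and \(\textsf{trans}\) as a list of (verifier message, prover message) rounds.

```python
def check_transcript_replay(new_prover, expl, stmt, wit_star, trans):
    """Correctness check of Definition 2 (hypothetical interfaces).

    `new_prover(stmt, wit, coins)` instantiates an honest Prove with
    explicit coins; `trans` is a list of (v_msg, p_msg) rounds."""
    coins_e = expl(stmt, wit_star, trans)      # coins_E <- Expl(stmt, wit*, trans)
    prover = new_prover(stmt, wit_star, coins_e)
    for v_msg, p_msg in trans:                 # Verify' replays its messages...
        if prover.respond(v_msg) != p_msg:     # ...and aborts on any mismatch
            return False                       # (stand-in for outputting bottom)
    return True                                # same transcript reproduced
```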

Remark 1

Note that a naive way to implement the \(\textsf{Expl}\) algorithm would be to encode the transcript into \(\textsf{coins}_E\) and make the \(\textsf{Prove}\) algorithm “replay” the contained messages. However, such a scheme would not be desirable, since an adversary could easily distinguish such coins from honestly generated ones. Therefore, we require that \(\textsf{coins}_E \in \textbf{Coin}_\textsf{Prove}\) to ensure that \(\textsf{coins}_E\) can be given as input to an honest \(\textsf{Prove}\) algorithm.

The above definition constitutes a correctness definition for explainable arguments and assumes that all parties are honest. Informally, we require that given a witness and a transcript of an interaction between a verifier and a prover (with a possibly different witness), \(\textsf{Expl}\) generates \(\textsf{coins}\) such that an honest prover returns the same messages, given that the verifier sends its messages as in \(\textsf{trans}\).

Below we describe explainability against a malicious verifier. Roughly speaking, this property says that a transcript produced during an execution between a malicious verifier and an honest prover P should be explainable. The goal of the verifier is to send messages to the prover P that elicit responses no other prover (with a different witness) would send. If the adversary succeeds, then the transcript (possibly together with P's random coins) can be used as proof to a third party that P indeed took part in the communication. Recall that P may be forced to reveal its random coins after completing the protocol.

Definition 3

(Malicious Verifier Explainability). For a security parameter \(\lambda \), we define the advantage \(\textsf{Adv}^{\textsf{MVExpl}}_\mathcal {A}(\lambda )\) of an adversary \(\mathcal {A}= (\mathcal {A}_1,\mathcal {A}_2,\mathcal {A}_3)\) as

$$\begin{aligned} 1 - \Pr [\langle \mathcal {A}_3(\textsf{stmt}; \textsf{coins}_\mathcal {A}) \rightleftharpoons \textsf{Prove}(\textsf{stmt}, \textsf{wit}^{*} ; \textsf{coins}_P) \rangle = \textsf{trans}], \text { where } \end{aligned}$$
$$\begin{aligned} (\textsf{stmt}, \textsf{wit}, \textsf{wit}^{*}, \textsf{st})&\leftarrow \mathcal {A}_1(\lambda ),\\ \textsf{trans}= \langle \textsf{coins}_\mathcal {A}\leftarrow \mathcal {A}_2(\textsf{stmt}; \textsf{st})&\rightleftharpoons \textsf{Prove}(\textsf{stmt}, \textsf{wit}) \rangle , \\ \textsf{coins}_P&\leftarrow \textsf{Expl}(\textsf{stmt}, \textsf{wit}^{*}, \textsf{trans}), \\ \textsf{wit}\not = \textsf{wit}^{*}, ~&~ \mathcal {R}(\textsf{stmt}, \textsf{wit}) = \mathcal {R}(\textsf{stmt}, \textsf{wit}^*) = 1, \end{aligned}$$

where the probability is taken over the random coins of \(\textsf{Prove}\). Furthermore, \(\mathcal {A}_3\) sends the same messages to \(\textsf{Prove}\) as in \(\textsf{trans}\) as long as the responses from the prover are as in \(\textsf{trans}\).

We say that an interactive argument system is malicious verifier explainable if for all adversaries \(\mathcal {A}= (\mathcal {A}_1, \mathcal {A}_2,\mathcal {A}_3)\) such that \(\mathcal {A}_1, \mathcal {A}_2, \mathcal {A}_3\) are PPT algorithms there exists a negligible function \(\textsf{negl}(.)\) such that \(\textsf{Adv}^{\textsf{MVExpl}}_\mathcal {A}(\lambda ) \le \textsf{negl}(\lambda )\). We say that the argument system is malicious verifier statistically explainable if the above holds for an unbounded adversary \(\mathcal {A}\).

Let us now consider a scenario where proving ownership of an argument is beneficial to the prover, but, at the same time, the system requires the proof to be explainable. A malicious prover tries to prove the statement in a way that makes it impossible for others to “claim” the generated proof. For this property, it is easy to imagine a malicious prover that sends messages to the verifier that the verifier accepts but that no honest prover would ever send. In practice, we may imagine that an adversary runs a different implementation of the prover, for which the distribution of the sent messages deviates from that of the original implementation. Later, to “claim” the transcript, the adversary may prove that the transcript is indeed the result of the different algorithm, not the honest one. Note that such a “claim” is sound if an honest prover would never produce such messages. To prevent such attacks, we require that there is only one (computationally feasible to find) valid way for a prover to respond to the messages of an honest verifier.

Definition 4

(Uniqueness/Malicious Prover Explainability). We define the advantage \(\textsf{Adv}^{\textsf{MPExpl}}_{\mathcal {A}}(\lambda )\) of an adversary \(\mathcal {A}= (\mathcal {A}_1,\mathcal {A}_2,\mathcal {A}_3)\) as

$$\begin{aligned} \Pr \left[ \begin{array}{c} \langle 1 = \textsf{Verify}(\textsf{stmt}; \textsf{coins}_V) \rightleftharpoons \mathcal {A}_2(\textsf{st}_1) \rightarrow \textsf{st}_2 \rangle \\ \ne \langle 1 = \textsf{Verify}(\textsf{stmt}; \textsf{coins}_V) \rightleftharpoons \mathcal {A}_3(\textsf{st}_2) \rangle \end{array} \right] , \end{aligned}$$

where \((\textsf{st}_1, \textsf{stmt}) \leftarrow \mathcal {A}_1(\lambda )\) and the probability is taken over the coins \(\textsf{coins}_V\).

We say that an interactive argument system is malicious prover explainable if for all PPT adversaries \(\mathcal {A}\) there exists a negligible function \(\textsf{negl}(.)\) such that \(\textsf{Adv}^{\textsf{MPExpl}}_\mathcal {A}(\lambda ) \le \textsf{negl}(\lambda )\). We say that the system is malicious prover statistically explainable if the above holds for an unbounded \(\mathcal {A}\).

Theorem 1

If \((\textsf{Prove},\textsf{Verify},\textsf{Expl})\) is a malicious verifier (statistically) explainable argument system, then it is also (statistically) witness indistinguishable.

Definition 5

We say that an interactive argument system is fully explainable if it is malicious prover explainable and malicious verifier explainable.

3.2 Non-interactive Explainable Arguments

Here we present definitions for non-interactive explainable arguments. Similarly to the interactive case, we begin by defining what it means for a system to be explainable.

Definition 6

(Non-Interactive Explainable Arguments). A non-interactive argument system \(\varPi _\mathcal {R}= (\textsf{Setup}, \textsf{Prove},\textsf{Verify})\) for language \(\L _\mathcal {R}\) is a non-interactive explainable argument system if there exists an additional \(\textsf{Expl}\) algorithm:

  • \(\textsf{Expl}(\textsf{crs}, \textsf{stmt}, \textsf{wit}, \arg )\): takes as input a common reference string \(\textsf{crs}\), a statement \(\textsf{stmt}\), any valid witness \(\textsf{wit}\), and an argument \(\arg \), and outputs random coins \(\textsf{coins}\),

which satisfies the correctness definition below.

Definition 7

(Correctness). For all security parameters \(\lambda \), all statements \(\textsf{stmt}\in \L _\mathcal {R}\), all \(\textsf{wit}, \textsf{wit}^{*}\) such that \(\mathcal {R}(\textsf{stmt},\textsf{wit})=\mathcal {R}(\textsf{stmt},\textsf{wit}^{*})=1\), and all random coins \(\textsf{coins}_P \in \textbf{Coin}_\textsf{Prove}\), we have

$$\begin{aligned} \textsf{Prove}(\textsf{crs}, \textsf{stmt}, \textsf{wit}; \textsf{coins}_P) = \textsf{Prove}(\textsf{crs}, \textsf{stmt}, \textsf{wit}^{*}; \textsf{coins}_E) \end{aligned}$$

where \(\textsf{coins}_E \leftarrow \textsf{Expl}(\textsf{crs},\textsf{stmt}, \textsf{wit}^{*}, \arg )\), \(\textsf{coins}_E \in \textbf{Coin}_\textsf{Prove}\) and \(\textsf{crs}\leftarrow \textsf{Setup}(\lambda )\).
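
The non-interactive condition is even simpler to state operationally. Below is a minimal Python sketch, again with hypothetical interfaces, where `prove` is deterministic once its coins are fixed and `expl` must return coins in the same coin space:

```python
import secrets

def check_explainability(prove, expl, crs, stmt, wit, wit_star):
    """Correctness check of Definition 7 (hypothetical interfaces)."""
    coins_p = secrets.token_bytes(32)           # honest prover's coins
    arg = prove(crs, stmt, wit, coins_p)        # argument under wit
    coins_e = expl(crs, stmt, wit_star, arg)    # explain using wit*
    # Explainability: coins_E must reproduce the identical argument string.
    return prove(crs, stmt, wit_star, coins_e) == arg
```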

Now we define malicious setup explainability. Note that a malicious verifier cannot influence the explainability of an argument because there is no interaction with the prover. Hence, the malicious verifier from the interactive setting is replaced with an untrusted setup. An adversary might generate parameters that cause the \(\textsf{Expl}\) algorithm to output coins yielding a different argument, or even to fail on certain witnesses. In some sense, we can think of the adversary as wanting to subvert the common reference string against the deniability of certain “targeted” witnesses.

Definition 8

(Malicious Setup Explainability). We define the advantage \(\textsf{Adv}^{\textsf{MSExpl}}_{\mathcal {A}}(\lambda )\) of an adversary \(\mathcal {A}\) by the following probability

$$\begin{aligned} 1 - \Pr \left[ \begin{array}{cc} \arg ^{*} = \arg : \begin{array}{c} (\textsf{stmt}, \textsf{wit}, \textsf{wit}^{*}, \textsf{crs}) \leftarrow \mathcal {A}(\lambda ) \\ \textsf{wit}\not =\textsf{wit}^{*} \\ \mathcal {R}(\textsf{stmt}, \textsf{wit}) = \mathcal {R}(\textsf{stmt}, \textsf{wit}^{*}) =1 \\ \arg \leftarrow \textsf{Prove}(\textsf{crs},\textsf{stmt}, \textsf{wit}); \\ \textsf{coins}_P \leftarrow \textsf{Expl}(\textsf{crs}, \textsf{stmt}, \textsf{wit}^{*}, \arg ); \\ \arg ^{*} \leftarrow \textsf{Prove}(\textsf{crs}, \textsf{stmt}, \textsf{wit}^{*}; \textsf{coins}_P) \end{array} \end{array} \right] , \end{aligned}$$

where the probability is taken over the random coins of the prover \(\textsf{Prove}\). We say that a non-interactive argument is malicious setup explainable if for all PPT adversaries \(\mathcal {A}\) there exists a negligible function \(\textsf{negl}(.)\) such that \(\textsf{Adv}^{\textsf{MSExpl}}_{\mathcal {A}}(\lambda ) \le \textsf{negl}(\lambda )\). We say that a non-interactive argument is malicious setup statistically explainable if the above holds for an unbounded adversary \(\mathcal {A}\). Moreover, we say that a non-interactive argument is perfectly malicious setup explainable if \(\textsf{Adv}^{\textsf{MSExpl}}_{\mathcal {A}}(\lambda ) = 0\).

Theorem 2

If there exists a malicious setup explainable non-interactive argument, then there exists a two-move witness-indistinguishable argument, where the verifier’s message is reusable. In other words, given a malicious setup explainable non-interactive argument, we can build a private-coin ZAP.

Malicious prover explainability is defined similarly to the interactive case. In the non-interactive setting the definition is simpler to formalize: the adversary merely has to return two arguments that verify correctly but whose canonical representations differ.

Definition 9

(Uniqueness/Malicious Prover Explainability). We define the advantage of an adversary \(\mathcal {A}\) against malicious prover explainability of \(\textsf{ExArg}\) as \(\textsf{Adv}^{\textsf{MPExpl}}_{\mathcal {A}}(\lambda ) = \Pr [\arg _1 \ne \arg _2]\), where \(\textsf{crs}\leftarrow \textsf{Setup}(\lambda )\) and \((\textsf{stmt}, \arg _1, \arg _2) \leftarrow \mathcal {A}(\lambda )\) are such that \(\textsf{Verify}(\textsf{crs}, \textsf{stmt}, \arg _1) = \textsf{Verify}(\textsf{crs}, \textsf{stmt}, \arg _2) = 1\), and the probability is over the random coins of \(\textsf{Setup}\). We say that a non-interactive argument is malicious prover explainable if for all PPT adversaries \(\mathcal {A}\) there exists a negligible function \(\textsf{negl}(.)\) such that \(\textsf{Adv}^{\textsf{MPExpl}}_{\mathcal {A}}(\lambda ) \le \textsf{negl}(\lambda )\). We say that a non-interactive argument is malicious prover statistically explainable if the above holds for an unbounded adversary \(\mathcal {A}\). Moreover, we say that an argument system is perfectly malicious prover explainable if \(\textsf{Adv}^{\textsf{MPExpl}}_{\mathcal {A}}(\lambda ) = 0\).
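
In code, the adversary's goal is simply a uniqueness violation; a sketch of the winning condition under the same hypothetical interface:

```python
def mp_expl_wins(verify, crs, stmt, arg1, arg2):
    # The adversary wins when two *different* argument strings both verify
    # for the same statement, i.e., accepted arguments are not unique.
    return (arg1 != arg2
            and verify(crs, stmt, arg1) == 1
            and verify(crs, stmt, arg2) == 1)
```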

For full explainability, we combine malicious prover and malicious setup explainability.

Definition 10

(Full Explainability). We define the advantage of an adversary \(\mathcal {A}\) against full explainability of \(\textsf{ExArg}\) by the following probability

$$\begin{aligned} \textsf{Adv}^{\textsf{FExpl}}_{\mathcal {A}}(\lambda ) = \Pr [\arg _1 \ne \arg _2] \end{aligned}$$

where \((\textsf{stmt}, \textsf{crs}, \arg _1, \arg _2) \leftarrow \mathcal {A}(\lambda )\) is such that \(\textsf{Verify}(\textsf{crs}, \textsf{stmt}, \arg _1) = \textsf{Verify}(\textsf{crs}, \textsf{stmt}, \arg _2) = 1\). We say that a non-interactive argument is fully explainable if for all PPT adversaries \(\mathcal {A}\) there exists a negligible function \(\textsf{negl}(.)\) such that \(\textsf{Adv}^{\textsf{FExpl}}_{\mathcal {A}}(\lambda ) \le \textsf{negl}(\lambda )\). We say that the non-interactive argument is fully statistically explainable if the above holds for an unbounded adversary \(\mathcal {A}\). Moreover, we say that an argument system is perfectly fully explainable if \(\textsf{Adv}^{\textsf{FExpl}}_{\mathcal {A}}(\lambda ) = 0\).

Theorem 3

If \(\textsf{ExArg}\) is a fully explainable argument, then \(\textsf{ExArg}\) is a malicious setup and malicious prover explainable argument.

Theorem 4

Given that one-way functions and malicious prover explainable, selectively sound non-interactive (resp. two-move) arguments for \(\textsf{NP}\) exist, there exists a witness encryption scheme for \(\textsf{NP}\).

4 Non-interactive Explainable Arguments

In this section, we show that it is possible to construct malicious setup explainable non-interactive argument systems from falsifiable assumptions. We also show a fully explainable argument assuming non-interactive zero-knowledge. As both schemes are nearly identical and differ only in a few lines, we mark the lines or algorithms specific to the malicious setup explainable argument with \(\circ \), and those specific to the fully explainable argument with \(\dagger \).

Fig. 1. Circuits for \(\textsf{ProgProve}_{\circ }^1\), \(\textsf{ProgProve}_{\dagger }^1\) and \(\textsf{ProgVerify}\). Note that the \(\textsf{ProgProve}\) circuits differ only in line 1.

Scheme 1

(Non-interactive Explainable Argument). Let \(\nabla = \circ \) for the malicious setup explainable argument, and \(\nabla = \dagger \) for the fully explainable argument. Let \(\textsf{DMWI}\) be a dual-mode proof, \(\textsf{NIWI}\) be a non-interactive witness indistinguishable proof, \(\textsf{Com}\) be an equivocal commitment scheme, \(\textsf{Sig}\) be a unique signature scheme, and \(\textsf{PRF}\) be a punctured pseudorandom function. We construct the non-interactive argument system \(\textsf{ExArg}^{\nabla } = (\textsf{Setup}, \textsf{Prove}, \textsf{Verify})\) as follows; a data-flow sketch of the prover follows the scheme.

  • \(\textsf{Setup}(\lambda , \L _\mathcal {R})\):  

    1.:

    Choose \(K \leftarrow \textsf{PRF}.\textsf{Setup}(\lambda )\) and \(\textsf{crs}_{\textsf{DMWI}}\) \(\leftarrow \) \(\textsf{DMWI}.\textsf{Setup}(\lambda \), \(\textsf{modeSound}\); \(\textsf{coins}_S)\), where \(\textsf{coins}_S\) are random coins.

    2.:

    \(O_{\textsf{Prove}} \leftarrow \textsf{Obf}(\lambda , \textsf{ProgProve}_{\nabla }^1[\textsf{pp}, \textsf{crs}_{\textsf{DMWI}}, K]; \textsf{coins}_P)\), where \(\textsf{ProgProve}_{\nabla }^1\) is given by Fig. 1 and \(\textsf{coins}_P\) are random coins.

    3\(^{\circ }\).:

    Define statement \(\textsf{stmt}^{\circ }_{\textsf{Setup}}\) as

    $$\begin{aligned} \left\{ \begin{array}{c} \exists _{i \in [2], K, \textsf{coins}_P} O_{\textsf{Prove}} \leftarrow \textsf{Obf}(\lambda , \textsf{ProgProve}_{\circ }^i[\textsf{pp}, \textsf{crs}_{\textsf{DMWI}}, K]; \textsf{coins}_P) ~~ \vee \\ \exists _{\textsf{mode}, \textsf{coins}_S} \textsf{crs}_{\textsf{DMWI}} \leftarrow \textsf{DMWI}.\textsf{Setup}(\lambda , \textsf{mode}; \textsf{coins}_S) \wedge \textsf{mode}= \textsf{modeWI}\end{array}\right\} . \end{aligned}$$
    3\(^{\dagger }\).:

    Define statement \(\textsf{stmt}^{\dagger }_{\textsf{Setup}}\) as

    $$\begin{aligned} \{\exists _{K, \textsf{coins}_P} O_{\textsf{Prove}} \leftarrow \textsf{Obf}(\lambda , \textsf{ProgProve}_{\dagger }^1[\textsf{pp}, \textsf{crs}_{\textsf{DMWI}}, K]; \textsf{coins}_P) \}. \end{aligned}$$
    4.:

    Set \(\textsf{wit}_{\textsf{Setup}} = (1, K, \textsf{coins}_P)\).

    5\(^{\circ }\).:

    \(\pi \leftarrow \textsf{NIWI}.\textsf{Prove}(\textsf{stmt}_{\textsf{Setup}}^{\circ }, \textsf{wit}_{\textsf{Setup}})\).

    5\(^{\dagger }\).:

    \(\pi \leftarrow \textsf{NIZK}.\textsf{Prove}(\textsf{stmt}_{\textsf{Setup}}^{\dagger }, \textsf{wit}_{\textsf{Setup}})\).

    6.:

    Compute \(O_{\textsf{Verify}} \leftarrow \textsf{Obf}(\lambda , \textsf{ProgVerify}[K])\) and output \(\textsf{crs}= (O_{\textsf{Prove}}, O_{\textsf{Verify}}, \textsf{pp}, \textsf{etd}, \textsf{crs}_{\textsf{DMWI}}, \pi )\).

  • \(\textsf{Prove}(\textsf{crs}, \textsf{stmt}, \textsf{wit}; r)\):  

    1\(^{\circ }\).:

    Set \(\textsf{stmt}^{\circ }_{\textsf{Setup}}\) as in the setup algorithm.

    1\(^{\dagger }\).:

    Set \(\textsf{stmt}^{\dagger }_{\textsf{Setup}}\) as in the setup algorithm.

    2\(^{\circ }\).:

    If \(\textsf{NIWI}.\textsf{Verify}(\textsf{stmt}_{\textsf{Setup}}^{\circ }, \pi ) = 0\) return \(\bot \).

    2\(^{\dagger }\).:

    If \(\textsf{NIZK}.\textsf{Verify}(\textsf{stmt}_{\textsf{Setup}}^{\dagger }, \pi ) = 0\) return \(\bot \).

    3\(^{\circ }\).:

    Run \(\textsf{wit}' \leftarrow \textsf{DMWI}.\textsf{Prove}(\textsf{crs}_{\textsf{DMWI}}, \textsf{stmt}, \textsf{wit}; r)\) and \(\arg \leftarrow O_{\textsf{Prove}}(\textsf{stmt}, \textsf{wit}')\).

    3\(^{\dagger }\).:

    Run \(\arg \leftarrow O_{\textsf{Prove}}(\textsf{stmt}, \textsf{wit})\).

    4.:

      Run \(\textsf{vk}_s \leftarrow O_{\textsf{Verify}}(\textsf{stmt})\).

    5.:

      If \(\textsf{Sig}.\textsf{Verify}(\textsf{vk}_s, \arg , \textsf{stmt}) \ne 1\) return \(\bot \).

    6.:

      Otherwise, return \(\arg \).

  • \(\textsf{Verify}(\textsf{crs}, \textsf{stmt}, \arg )\):  

    1.:

      Run \(\textsf{vk}_s \leftarrow O_{\textsf{Verify}}(\textsf{stmt})\).

    2.:

      Output \(\textsf{Sig}.\textsf{Verify}(\textsf{vk}_s, \arg , \textsf{stmt})\).

  • \(\textsf{Expl}(\textsf{crs}, \textsf{stmt}, \textsf{wit}, \arg )\):   

    1.:

      Output 0.
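
Ignoring the obfuscation machinery, the prover's data flow in the \(\circ \)-variant can be sketched as follows. Every callable below (`setup_stmt`, `niwi_verify`, `dmwi_prove`, `sig_verify`, and the obfuscated programs unpacked from the CRS) is a hypothetical stand-in for the corresponding primitive of Scheme 1:

```python
def prove_circ(prims, crs, stmt, wit, r):
    """Sketch of Prove for ExArg^o (hypothetical interfaces only)."""
    o_prove, o_verify, pp, etd, crs_dmwi, pi = crs       # crs as output by Setup
    stmt_setup = prims["setup_stmt"](pp, crs_dmwi, o_prove)  # rebuild stmt_Setup
    if prims["niwi_verify"](stmt_setup, pi) == 0:        # step 2: reject a bad CRS
        return None                                      # stand-in for bottom
    wit_prime = prims["dmwi_prove"](crs_dmwi, stmt, wit, r)  # step 3: DMWI witness
    arg = o_prove(stmt, wit_prime)        # obfuscated circuit outputs a signature
    vk = o_verify(stmt)                   # statement-specific verification key
    if prims["sig_verify"](vk, arg, stmt) != 1:          # step 5: sanity check
        return None
    return arg                            # step 6: the (unique) argument
```

Uniqueness of the signature scheme makes the output independent of the witness and the coins used, which is why \(\textsf{Expl}\) can simply output the fixed coins 0.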

Fig. 2. Circuits for \(\textsf{ProgProve}_{\circ }^{2}\), \(\textsf{ProgProve}_{\dagger }^{2}\) and \(\textsf{ProgVerify}^{*}\) used in the soundness proof of the non-interactive argument.

Theorem 5

Let \(\textsf{ExArg}^{\circ }\) be the system given by Scheme 1. The system \(\textsf{ExArg}^{\circ }\) is computationally sound (in the selective setting) assuming indistinguishability obfuscation of \(\textsf{Obf}\), pseudorandomness in punctured points of \(\textsf{PRF}\), mode indistinguishability of the \(\textsf{DMWI}\) scheme, and unforgeability of the signature scheme (Fig. 2).

Theorem 6

Given that the signature scheme \(\textsf{Sig}\) is unique, \(\textsf{NIWI}\) is perfectly sound, \(\textsf{DMWI}\) is a dual-mode proof, and all primitives are perfectly correct, the argument system \(\textsf{ExArg}^{\circ }\) is malicious setup explainable.

Theorem 7

Let \(\textsf{ExArg}^{\dagger }\) be the system given by Scheme 1. The system \(\textsf{ExArg}^{\dagger }\) is computationally sound (in the selective setting), assuming indistinguishability obfuscation of \(\textsf{Obf}\), pseudorandomness in punctured points of \(\textsf{PRF}\), zero-knowledge of the \(\textsf{NIZK}\) scheme and unforgeability of the signature scheme.

Theorem 8

Given that the signature scheme \(\textsf{Sig}\) is unique, \(\textsf{NIZK}\) is sound, and all primitives are perfectly correct, the argument system \(\textsf{ExArg}^{\dagger }\) is fully explainable.

Corollary 1

The scheme is witness indistinguishable against a malicious setup.

Proof

Witness indistinguishability follows from explainability of the argument system and Theorem 2.

Theorem 9

Let \(\textsf{ExArg}^{\nabla }\) be the system given by Scheme 1 for \(\nabla = \circ \) or \(\nabla = \dagger \). \(\textsf{ExArg}^{\nabla }\) is zero-knowledge in the common reference string model.

5 Robust Witness Encryption and Interactive Explainable Arguments

We introduce robust witness encryption and show a generic transformation from any standard witness encryption scheme to a robust witness encryption scheme.

Definition 11

(Robust Witness Encryption). We call a witness encryption scheme \(\textsf{WE}= (\textsf{Enc}, \textsf{Dec})\) a robust witness encryption scheme if it is correct, secure and robust as defined below:

  • Robustness: A witness encryption scheme \((\textsf{Enc}, \textsf{Dec})\) is robust if for all PPT adversaries \(\mathcal {A}\) there exists a negligible function \(\textsf{negl}(.)\) such that

    $$\begin{aligned} \Pr \left[ m_0 \ne m_1: \begin{array}{c} (\textsf{stmt}, \textsf{ct}, \textsf{wit}_0, \textsf{wit}_1) \leftarrow \mathcal {A}(\lambda ); \\ \mathcal {R}(\textsf{stmt}, \textsf{wit}_0) = \mathcal {R}(\textsf{stmt}, \textsf{wit}_1) = 1; \\ m_0 \leftarrow \textsf{Dec}(\textsf{stmt}, \textsf{wit}_0, \textsf{ct}); \\ m_1 \leftarrow \textsf{Dec}(\textsf{stmt}, \textsf{wit}_1, \textsf{ct})\end{array} \right] \le \textsf{negl}(\lambda ), \end{aligned}$$

    We call the scheme perfectly robust if the above probability is always zero.

Below we define plaintext awareness [5], tailored to the case of witness encryption.

Definition 12

(Plaintext Aware Witness Encryption). Let \(\textsf{WE}= (\textsf{Enc}, \textsf{Dec})\) be a witness encryption scheme. We extend the scheme with an algorithm \(\textsf{Verify}\) that, on input a ciphertext \(\textsf{ct}\) and a statement \(\textsf{stmt}\), outputs a bit indicating whether the ciphertext is in the ciphertext space. Additionally, we define an algorithm \(\textsf{Setup}\) that on input the security parameter \(\lambda \) outputs a common reference string \(\textsf{crs}\), and an algorithm \(\textsf{Setup}^{*}\) that additionally outputs a trapdoor \(\tau \). We say that the witness encryption scheme for a language \(\L \in \textbf{NP}\) is plaintext aware if for all PPT adversaries \(\mathcal {A}\) there exists a negligible function \(\textsf{negl}(.)\) such that

$$\begin{aligned} |&\Pr [\mathcal {A}(\textsf{crs})=1: \textsf{crs}\leftarrow \textsf{Setup}(\lambda )]\\ -&\Pr [\mathcal {A}(\textsf{crs})=1: (\textsf{crs}, \tau ) \leftarrow \textsf{Setup}^{*}(\lambda )]| \le \textsf{negl}(\lambda ), \end{aligned}$$

and there exists a PPT extractor \(\textsf{Ext}\) such that

$$\begin{aligned} \Pr \left[ \textsf{msg}\leftarrow \textsf{Ext}(\textsf{stmt}, \textsf{ct}, \tau ): \begin{array}{c} (\textsf{crs}, \tau ) \leftarrow \textsf{Setup}^{*}(\lambda ); \\ (\textsf{ct}, \textsf{stmt}) \leftarrow \mathcal {A}(\textsf{crs}); \\ \textsf{Verify}(\textsf{stmt}, \textsf{ct}) = 1 \end{array} \right] \ge 1- \textsf{negl}(\lambda ), \end{aligned}$$

where for all witnesses \(\textsf{wit}\) such that \(\mathcal {R}(\textsf{stmt}, \textsf{wit})=1\) we have \(\textsf{msg}=\textsf{Dec}(\textsf{stmt}, \textsf{wit}, \textsf{ct})\), and the probability is taken over the random coins of \(\textsf{Setup}\) and \(\textsf{Setup}^{*}\).

Scheme 2

(Generic Transformation). Let \(\textsf{WE}= (\textsf{Enc}, \textsf{Dec})\) be a witness encryption scheme and \(\textsf{NIZK}= (\textsf{NIZK}.\textsf{Prove}, \textsf{NIZK}.\textsf{Verify})\) be a proof system. We construct a robust witness encryption scheme \(\textsf{WE}_{rob}\) as follows; a code sketch follows the scheme.

  • \(\textsf{Enc}_{rob}(\lambda , \textsf{stmt}, \textsf{msg})\)

    1.:

      Compute \(\textsf{ct}_\textsf{msg}\leftarrow \textsf{WE}.\textsf{Enc}(\lambda , \textsf{stmt}, \textsf{msg})\).

    2.:

      Let \(\textsf{stmt}_\textsf{NIZK}\) be defined as \(\{ \exists _{\textsf{msg}} \;\; \textsf{ct}_\textsf{msg}\leftarrow \textsf{WE}.\textsf{Enc}(\lambda , \textsf{stmt}, \textsf{msg})\}\).

    3.:

      Compute \(\pi \leftarrow \textsf{NIZK}.\textsf{Prove}( \textsf{stmt}_\textsf{NIZK}, \textsf{wit})\) using witness \(\textsf{wit}= (\textsf{msg})\).

    4.:

      Return \(\textsf{ct}= (\textsf{ct}_\textsf{msg},\pi )\).

  • \(\textsf{Dec}_{rob}(\textsf{stmt}, \textsf{wit}, \textsf{ct})\)

    1.:

      Set the statement \(\textsf{stmt}_\textsf{NIZK}\) as \(\{ \exists _{\textsf{msg}} \;\; \textsf{ct}_\textsf{msg}\leftarrow \textsf{WE}.\textsf{Enc}(\lambda , \textsf{stmt}, \textsf{msg}) \}\).

    2.:

      If \(\textsf{NIZK}.\textsf{Verify}( \textsf{stmt}_\textsf{NIZK}, \pi ) = 0\), then return \(\bot \). Otherwise return \(\textsf{WE}.\textsf{Dec}(\textsf{stmt}, \textsf{wit}, \textsf{ct}_\textsf{msg})\).
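
The transformation is mechanical enough to render directly. The following Python sketch uses hypothetical `we_enc`/`we_dec` and `nizk_prove`/`nizk_verify` callables standing in for the underlying primitives:

```python
def enc_rob(we_enc, nizk_prove, lam, stmt, msg):
    """Scheme 2 encryption: a WE ciphertext plus a NIZK of well-formedness."""
    ct_msg = we_enc(lam, stmt, msg)                      # step 1
    stmt_nizk = ("is-we-ciphertext", lam, stmt, ct_msg)  # "exists msg: ct = Enc(...)"
    pi = nizk_prove(stmt_nizk, (msg,))                   # step 3, witness wit = (msg)
    return (ct_msg, pi)                                  # step 4

def dec_rob(we_dec, nizk_verify, lam, stmt, wit, ct):
    ct_msg, pi = ct
    stmt_nizk = ("is-we-ciphertext", lam, stmt, ct_msg)  # step 1
    if nizk_verify(stmt_nizk, pi) == 0:                  # step 2: reject malformed ct
        return None                                      # stand-in for bottom
    return we_dec(stmt, wit, ct_msg)                     # decrypt the inner WE
```

Robustness then follows as in Theorem 11: perfect soundness of the \(\textsf{NIZK}\) rules out ill-formed ciphertexts, and perfect correctness of \(\textsf{WE}\) forces every valid witness to decrypt a well-formed ciphertext to the same plaintext.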

Theorem 10

(Security and Extractability). Scheme 2 is an (extractably) secure witness encryption scheme if \(\textsf{WE}\) is an (extractably) secure witness encryption scheme and \(\textsf{NIZK}\) is zero-knowledge (in the common reference string or RO model).

Theorem 11

(Robustness and Plaintext Awareness). Scheme 2 is robust if the witness encryption scheme \(\textsf{WE}\) is perfectly correct and the \(\textsf{NIZK}\) proof system is perfectly sound (in the common reference string or RO model). If the \(\textsf{NIZK}\) proof system is a proof of knowledge (in the common reference string or RO model), then Scheme 2 is plaintext aware.

5.1 Fully Explainable Arguments from Robust Witness Encryption

In this subsection, we tackle the problem of constructing fully explainable arguments. The system is described in detail by Scheme 3.

Scheme 3

(Interactive Explainable Argument). The argument system consists of \(\textsf{Prove}\), \(\textsf{Verify}\) and \(\textsf{Expl}\), where the protocol between \(\textsf{Prove}\) and \(\textsf{Verify}\) is specified as follows (a sketch of one run follows the scheme). \(\textsf{Prove}\) takes as input a statement \(\textsf{stmt}\) and a witness \(\textsf{wit}\), and \(\textsf{Verify}\) takes as input \(\textsf{stmt}\). First, \(\textsf{Verify}\) chooses a uniformly random \(r\), computes \(\textsf{ct}\leftarrow \textsf{Enc}_{rob}(\lambda ,\textsf{stmt},r)\) and sends \(\textsf{ct}\) to \(\textsf{Prove}\). Then \(\textsf{Prove}\) computes \(\arg \leftarrow \textsf{Dec}_{rob}(\textsf{stmt},\textsf{wit},\textsf{ct})\) and sends \(\arg \) to \(\textsf{Verify}\). Finally, \(\textsf{Verify}\) returns 1 iff \(\arg = r\). The explain algorithm \(\textsf{Expl}\) is as follows.

  • \(\textsf{Expl}(\textsf{stmt}, \textsf{wit}, \textsf{trans})\): On input the statement \(\textsf{stmt}\), the witness \(\textsf{wit}\) and a transcript \(\textsf{trans}\), output \(\bot \).
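
One protocol run can be sketched as follows, with `enc_rob`/`dec_rob` as in the sketch above, here assumed to be already bound to their primitives (hypothetical interfaces throughout):

```python
import secrets

def run_scheme3(enc_rob, dec_rob, lam, stmt, wit):
    """One execution of Scheme 3. The verifier's only message is a
    robust-WE ciphertext of a random r; the prover's only message is
    its decryption, so the transcript is independent of the witness."""
    r = secrets.token_bytes(lam // 8)     # verifier's secret challenge
    ct = enc_rob(lam, stmt, r)            # challenge: encrypt r under stmt
    arg = dec_rob(lam, stmt, wit, ct)     # prover: any valid witness works
    return arg == r                       # verifier outputs 1 iff r recovered
```

Since \(\textsf{Dec}_{rob}\) takes no random coins and, by robustness, returns the same value under every valid witness, \(\textsf{Prove}\) is deterministic and there are no coins to explain; this is why \(\textsf{Expl}\) may simply output \(\bot \).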

Theorem 12

(Soundness). Scheme 3 is an argument system for the \(\textsf{NP}\) language \(\L _\mathcal {R}\), assuming the witness encryption scheme \(\textsf{WE}\) for \(\L _\mathcal {R}\) is secure. Furthermore, if the underlying witness encryption scheme \(\textsf{WE}\) is extractable, then Scheme 3 is an argument of knowledge.

Theorem 13

(Zero-Knowledge). Scheme 3 is zero-knowledge given that the underlying witness encryption scheme \(\textsf{WE}\) is plaintext aware.

Theorem 14

(Explainability). Scheme 3 is fully explainable assuming the underlying witness encryption scheme is robust (or plaintext aware) and correct.

Remark 2

Scheme 3 is predictable in the sense that the verifier can “predict” the value of the prover’s arguments/proof [33]. Furthermore, the protocol is optimally laconic [12], as the verifier can encrypt single bits.

Theorem 15

Let \(\textsf{WE}\) be a (non-robust) perfectly correct witness encryption scheme for \(\textsf{NP}\). Let \(\varPi \) be an interactive public-coin zero-knowledge proof protocol for \(\textsf{NP}\). Then there exists a malicious verifier explainable (and witness-indistinguishable) argument for \(\textsf{NP}\).

6 Applications

In this section, we show how to apply explainable arguments. We focus on constructing a CCA1-secure publicly deniable encryption scheme using malicious setup explainable arguments as a building block. Our transformation is based on the one from Naor and Yung [56], but we replace the NIZK proof system with a NIWI. In the full version, we show how to build a deniable anonymous credential scheme from malicious prover explainable arguments. Here we note that the anonymous credential system is a straightforward application of malicious prover explainable arguments and standard signature schemes.

The main idea behind the Naor-Yung construction is to use two CPA-secure ciphertexts \(\textsf{ct}_1, \textsf{ct}_2\) and a NIZK proof that both contain the same plaintext. The soundness property ensures that a decryption oracle can use either of the secret keys (since the decrypted message would be the same), and zero-knowledge allows the security reduction to change the challenge ciphertext, i.e., change the two CPA ciphertexts. We note that in our approach we replace the NIZK with a NIWI, which to the best of our knowledge has not been done before.

Scheme 4

(Generic Transformation from CPA to CCA). Let \(\mathcal {E} = (\textsf{KeyGen}_\textsf{cpa},\textsf{Enc}_\textsf{cpa},\textsf{Dec}_\textsf{cpa})\) be a CPA-secure encryption scheme and \((\textsf{NIWI}.\textsf{Setup},\textsf{NIWI}.\textsf{Prove},\textsf{NIWI}.\textsf{Verify})\) be a non-interactive witness-indistinguishable proof system (a code sketch of the transformation follows the scheme). Additionally, let the statement \(\textsf{stmt}_\textsf{cpa}\) be defined as

$$\begin{aligned} \{ (&\exists _{\textsf{msg}} \; \textsf{ct}_1 \leftarrow \textsf{Enc}_\textsf{cpa}(\textsf{pk}_1,\textsf{msg}) \; \wedge \; \textsf{ct}_2 \leftarrow \textsf{Enc}_\textsf{cpa}(\textsf{pk}_2,\textsf{msg}) ) \;\; \vee \;\; \\ (&\exists _{\alpha , \beta } \mathcal {H}_\mathbb {G}(\textsf{ct}_1,\textsf{ct}_2) = (g^\alpha ,g^\beta , g^{\alpha \cdot \beta }) )\}, \end{aligned}$$

where \(\mathcal {H}_\mathbb {G}\) is defined as above.

  • \(\textsf{KeyGen}_\textsf{cca1}(\lambda )\):  

    1.:

      generate CPA secure encryption key pairs \((\textsf{pk}_1,\textsf{sk}_1) \leftarrow \textsf{KeyGen}_\textsf{cpa}(\lambda )\) and \((\textsf{pk}_2,\textsf{sk}_2) \leftarrow \textsf{KeyGen}_\textsf{cpa}(\lambda )\),

    2.:

      generate a common reference string \(\textsf{crs}\leftarrow \textsf{NIWI}.\textsf{Setup}(\lambda )\),

    3.:

      set \(\textsf{pk}_\textsf{cca1}= (\textsf{pk}_1, \textsf{pk}_2, \textsf{crs})\) and \(\textsf{sk}_\textsf{cca1}= \textsf{sk}_1\).

  • \(\textsf{Enc}_{\textsf{cca1}}(\textsf{pk}_\textsf{cca1}, \textsf{msg})\)

    1.:

      compute ciphertexts \(\textsf{ct}_1 \leftarrow \textsf{Enc}_\textsf{cpa}(\textsf{pk}_1,\textsf{msg})\) and \(\textsf{ct}_2 \leftarrow \textsf{Enc}_\textsf{cpa}(\textsf{pk}_2,\textsf{msg})\),

    2.:

      compute the NIWI proof \(\varPi \leftarrow \textsf{NIWI}.\textsf{Prove}(\textsf{crs},\textsf{stmt}_\textsf{cpa}, (\textsf{msg}))\),

    3.:

      return the ciphertext \(\textsf{ct}= (\textsf{ct}_1,\textsf{ct}_2,\varPi )\).

  • \(\textsf{Dec}_{\textsf{cca1}}(\textsf{sk}_\textsf{cca1}, \textsf{ct})\)

    1.:

      return \(\bot \) if \(\textsf{NIWI}.\textsf{Verify}(\textsf{crs},\textsf{stmt}_\textsf{cpa},\varPi ) = 0\),

    2.:

      return \(\textsf{msg}\leftarrow \textsf{Dec}_\textsf{cpa}(\textsf{sk}_1,\textsf{ct}_1)\).
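
The whole transformation can be sketched over hypothetical CPA-scheme and NIWI callables; the DDH-tuple branch of \(\textsf{stmt}_\textsf{cpa}\) matters only inside the security proof, so honest encryption always uses the equal-plaintexts branch:

```python
def keygen_cca1(cpa_keygen, niwi_setup, lam):
    pk1, sk1 = cpa_keygen(lam)
    pk2, _sk2 = cpa_keygen(lam)            # sk2 is never used for decryption
    crs = niwi_setup(lam)
    return (pk1, pk2, crs), sk1            # pk_cca1, sk_cca1

def enc_cca1(cpa_enc, niwi_prove, pk_cca1, msg):
    pk1, pk2, crs = pk_cca1
    ct1 = cpa_enc(pk1, msg)                # same plaintext under both keys
    ct2 = cpa_enc(pk2, msg)
    stmt = ("stmt-cpa", pk1, pk2, ct1, ct2)
    proof = niwi_prove(crs, stmt, (msg,))  # equal-plaintexts branch witness
    return (ct1, ct2, proof)

def dec_cca1(cpa_dec, niwi_verify, pk_cca1, sk1, ct):
    pk1, pk2, crs = pk_cca1
    ct1, ct2, proof = ct
    stmt = ("stmt-cpa", pk1, pk2, ct1, ct2)
    if niwi_verify(crs, stmt, proof) == 0:
        return None                        # stand-in for bottom
    return cpa_dec(sk1, ct1)               # soundness: either key would do
```

Soundness of the \(\textsf{NIWI}\) ensures that the decryption oracle could equally have answered using \(\textsf{sk}_2\), which, as explained above, is the crux of the CCA1 argument.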

Theorem 16

Scheme 4 is an encryption scheme secure against non-adaptive chosen ciphertext attacks (CCA1) in the random oracle model, assuming \(\mathcal {E}\) is an encryption scheme secure against chosen plaintext attacks and \(\textsf{NIWI}\) is a sound and witness-indistinguishable proof system.

Theorem 17

Scheme 4 is a publicly deniable encryption scheme secure against non-adaptive chosen ciphertext attacks (CCA1) in the random oracle model, assuming \(\mathcal {E}\) is a publicly deniable encryption scheme secure against chosen plaintext attacks and \(\textsf{NIWI}\) is a malicious setup explainable argument system.

7 Conclusions

In this paper, we introduced new security definitions for interactive and non-interactive argument systems that formally capture a property called explainability. Such arguments can be used to construct CCA1-secure deniable encryption and deniable anonymous authentication. We also introduced a new property for witness encryption called robustness, which may be of independent interest. An interesting open question is whether such argument systems can be constructed from simpler primitives, or whether such strong primitives are inherently necessary because malicious prover explainability implies uniqueness of the proof.