1 Introduction

For almost a decade mass surveillance has caused controversy, widespread protests, and scandals; and yet, it is on the rise. The NSA, for instance, illegally collected phone-call data from all Verizon customers, and the data of all calls occurring in the Bahamas and Afghanistan [28]. During the present COVID-19 pandemic, Germany used contact-tracing data to pursue criminal investigations [26]. The need for privacy-enhancing solutions that provide transparency and limit mass surveillance has never been greater.

User privacy is a human right acknowledged by Article 12 of the Universal Declaration of Human Rights [42]: “No one shall be subjected to arbitrary interference with his privacy [...] or correspondence [...]. Everyone has the right to the protection of the law against such [...] attacks.” The European General Data Protection Regulation (GDPR) and e-Privacy legislation both aim to protect privacy in digital environments, requiring minimal, transparent, secure, and user-controlled storage of data. Increasingly aware of mass surveillance, users now more often choose to secure their communications by using, e.g., WhatsApp and Viber. In mobile networks, data is encrypted by mobile network operators, which have an incentive to improve the privacy they offer their users.

Unfortunately, mobile data remains at risk, especially when exceptional access to it lies with a single entity. In 2016, only the integrity of Tim Cook (Apple's CEO) and his awareness of the danger of such a precedent prevented encrypted phone data from being handed over to the FBI [30]. His refusal was no lasting deterrent, however, and risks to privacy and encryption grow every day [25].

Law-enforcement agencies argue that mobile data can be pivotal to investigations, which are undermined by individuals “going dark” [21]. Even strong privacy advocates, such as Abelson et al. [6], agree that targeted investigations, limited in scope and motivation, can be legitimate and useful (see the corruption scandal regarding Nicolas Sarkozy's campaign funds). It is this type of limited Lawful Interception (LI) that emerging EU legislation advocates [24].

Recent research [45] has tried to find a middle ground between privacy and lawful interception, enabling the latter only at high cost. Unfortunately, that approach sacrifices a crucial real-world requirement: timely (exceptional) decryption [2]. In this paper we define and instantiate Lawful-Interception Key-Exchange (\(\mathsf {LIKE}\)), a novel primitive allowing mobile users to E2E-encrypt their conversations while still providing for exceptional lawful interception. This would remove the “[...] need to choose between compliance and strong encryption” (cf. Joel Wallenstrom).

\(\mathsf {LIKE}\) combats mass surveillance in multiple ways. It fine-grains the window of interception to a single session, such that one exceptional opening will not affect past or future conversations. Moreover, the freshness used in the protocol is user-, not authority-generated, thus removing the need for a centralized, secure party storing all the keys. We also divide the responsibility of exceptional opening between multiple authorities, which must all agree in order to lawfully intercept communications. Finally, we make the primitive versatile, allowing either a single authority or several authorities to ultimately retrieve the session key. This makes our solution more privacy-preserving than today's mobile protocols, while at the same time remaining compatible with the strictest LI-supporting 3GPP texts [2].

Lawful Interception. Regardless of one's stance on it, Lawful Interception (LI) is part of our world, regulated by laws and standards. Ignoring it can lead to privacy threats (subversion or mass surveillance). Solutions that provide privacy but no LI are discarded as unlawful, regardless of their merits. We take the alternative approach: we analyze LI requirements and provide for them technically, while still maximizing data privacy.

LI requirements in mobile communications are authored by 3GPP and standardized by organizations such as ETSI. LI is the procedure through which a law-enforcement agency, holding a legal warrant, can obtain information about phone calls: either metadata (time of calls, identity of callers) or contents (of conversations happening in real time). By law, a mobile operator must provide the data requested and specified by a warrant. Three main types of requirements regulate LI: user-privacy requirements, LI-security requirements, and requirements on encryption.

R1:

User Privacy: the interception is limited in time (as dictated by the warrant) and to a targeted user. The law enforcement agency should not get data packages or conversations outside the warrant, from the same or other users.

R2:

LI Security: LI must be undetectable by either users (whose quality of service should stay the same) or non-authorized third parties (including other law-enforcement agencies). Intercepted data must be provided promptly, with no undue delay.

R3:

Special case: if it implements encryption, the operator must provide either decrypted content, or a means to decrypt it (e.g., a decryption key). However, if the users employ other means of encryption, not provided by the operator, the latter is not obliged to provide decrypted (or decryptable) conversations.

1.1 Our Contributions

LIKE. Based on these requirements, we define a novel cryptographic primitive called Lawful-Interception Key Exchange (\(\mathsf {LIKE}\), in short). This protocol allows (only) the end users to compute session keys in the presence of a variable number n of authorities (e.g., a court of justice, a law enforcement agency, operators), which all parties must agree on; the operators output a public session state. Given the session state, authorities may each extract a trapdoor. The use of all n trapdoors can yield a session key. Importantly, unless they are an authority, operators cannot recover the end users’ key; instead they forward and verify the compliance of exchanged messages (else the operator aborts).

We also formalize the following strong properties:

C:

Correctness: Under normal conditions, Alice and Bob obtain the same key. Moreover, this is the key retrieved through the collaboration of all the authorities by means of lawful interception (requirement R2).

KS:

Key-security: If at least one authority and both users are honest for a given session, that session's key remains indistinguishable from a random key of the same length, even for an adversary that controls all the remaining parties (including the other authorities and the operator) (requirements R1 and R2).

NF:

Non-frameability: A collusion of malicious users, the authorities, and the operator cannot frame an honest user as having participated in a session she was not a party to (requirement R2).

HO:

Honest Operator: If an honest operator forwards the so-called session state (see below) of a session it deems correct, then the key recovered by the authorities is the one that the session transcript should have yielded (requirement R3). Thus, operators can prove that this protocol is compliant with LI specifications.

Our Protocol and Implementation. As our second contribution, we describe an instantiation of LIKE using standard building blocks (signatures, zero-knowledge proofs, and signatures of knowledge), which we prove secure provided that the Bilinear Decisional Diffie-Hellman problem is intractable, the signature scheme is unforgeable, and our zero-knowledge proofs/signatures are secure.

Mindful of practical requirements, we place most of the burden during AKE on the operator (not on the endpoints). Our proofs and signatures of knowledge can be implemented simply, based on Schnorr proofs and, respectively, Chaum-Pedersen proofs (made non-interactive with Fiat-Shamir). The two endpoints do have to compute a pairing operation; however, we explain that this computation can be delegated to the mobile phone, leaving a single exponentiation (in the target group) to be performed on the USIM card.
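To make the Fiat-Shamir technique concrete, the following toy Python sketch shows a non-interactive Chaum-Pedersen proof that two group elements share the same discrete logarithm. The parameters (a tiny safe prime, generators 4 and 9, SHA-256 as the Fiat-Shamir hash) are illustrative and insecure; this is not the paper's implementation.

```python
import hashlib
import secrets

# Toy, INSECURE demo parameters: safe prime p = 2q + 1 (a real deployment
# would use a >= 2048-bit group or an elliptic curve).
p, q = 467, 233
g, h = 4, 9  # two generators of the order-q subgroup of quadratic residues

def fs_challenge(*elems):
    """Fiat-Shamir challenge: hash all public values into Z_q."""
    data = b"|".join(str(e).encode() for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def cp_prove(x):
    """Chaum-Pedersen proof that log_g(u) = log_h(v) = x."""
    u, v = pow(g, x, p), pow(h, x, p)
    r = secrets.randbelow(q)
    a1, a2 = pow(g, r, p), pow(h, r, p)   # commitments
    c = fs_challenge(g, h, u, v, a1, a2)
    s = (r + c * x) % q                   # response
    return (u, v), (a1, a2, s)

def cp_verify(u, v, a1, a2, s):
    c = fs_challenge(g, h, u, v, a1, a2)
    return (pow(g, s, p) == a1 * pow(u, c, p) % p and
            pow(h, s, p) == a2 * pow(v, c, p) % p)

x = secrets.randbelow(q)
(u, v), (a1, a2, s) = cp_prove(x)
assert cp_verify(u, v, a1, a2, s)
```

The same pattern, with one base instead of two, yields the Schnorr proof of knowledge of a discrete logarithm.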

The complexity of the opening procedure is reduced. Some steps are parallelizable and run in constant time (trapdoor generation), but others are linear (combining the trapdoors). Even so, the computational burden remains minimal and in line with requirements R2 (constant quality of service, since only the authorities run LI, not the operator) and R3 (no undue delay is incurred by the operator, and only minimal delays occur at the authorities). A proof-of-concept implementation given in Sect. 7 illustrates this point.

1.2 Related Work

Existing encryption in mobile networks is not E2E secure, providing privacy only with respect to non-authorized third parties. This solution is compliant with LI requirements [3,4,5] because the secure channel it provides is between the user and the operator (rather than user-to-user); thus the operator has unrestricted access to all user communications. Our \(\mathsf {LIKE}\) protocol provides much stronger privacy in that respect.

\(\mathsf {LIKE}\) also provides much stronger guarantees than key-escrow [8, 11, 19, 20, 22, 27, 32,33,34,35, 37, 38, 40, 41, 44]: we can handle malicious authority input; we fine-grain exceptional opening so that it only holds for one session at a time; we allow authorities to remain offline at all times except for exceptional opening; we minimize storage and computational costs for users and authorities; we have no central key-generation authority which knows all secret keys; and we guarantee the new properties of non-frameability and honest operator, which are tailored to the LI requirements analyzed above. In return, our use-case is narrower than typically considered in key-escrow: by considering the case of mobile communications, we can safely assume that parties always use the operator as a proxy, unlike in generic key-escrow settings.

Our work is motivated by the same problem handled by [12, 45], but we take a different approach: rather than making LI computationally costly, we limit its scope, divide exceptional opening between several authorities, and fine-grain access. Our setup also resembles that of reverse firewalls [36], which build on related work pioneered by Young and Yung [23, 31, 46]. Although apparently similar, the setup of these works is complementary to ours. Kleptography describes ways for users to abuse protocols such that the latter appear to be running normally, while subliminal information is allowed to leak. For instance, in key exchange, a government agency could substitute the implementation of the protocol with one containing a backdoor, making the key recoverable even without formal LI. We also consider the case in which Alice and Bob might be malicious (the HO property). However, rather than wanting to exfiltrate information (the adversarial goal in typical reverse firewalls), in our case Alice and Bob aim to cheat by choosing protocol contributions that might make LI malfunction. To be provably secure, LIKE protocols must prevent this.

2 Preliminaries

Notations. By \(x \leftarrow y\) we mean that variable x takes the value y, while \(x \xleftarrow {\$} X\) indicates that x is chosen from the uniform distribution on X. The notation \(\llbracket 1,n \rrbracket \) is short for \(\{1, 2, \cdots , n\}\). Let \(\mathsf {A}(x) \rightarrow a \) express that algorithm \(\mathsf {A}\), running on input x, outputs a, and let \(\mathsf {P}\langle \mathsf {A} (x),\mathsf {B} (y)\rangle (z)\rightarrow (a,b) \) express that protocol \(\mathsf {P}\) implements the interactions of \(\mathsf {A} (x)\rightarrow a\) and \(\mathsf {B} (y)\rightarrow b\), where z is an additional public input of \(\mathsf {A} \) and \(\mathsf {B} \). Let \( \lambda \) be a security parameter.

Building Blocks. We assume the reader’s familiarity with the notion of unforgeability for signature schemes (EUF-CMA), also visible in our full version [7]. We denote by \(\mathsf {Adv}_{\mathsf {DS}}^{\mathsf {EUF\text {-}CMA}}(\lambda ) \) the maximal advantage an adversary has against the signature scheme that we employ.

Definition 1

(BDDH Assumption [15]). Let \(\mathbb {G} _1 = \langle g_1\rangle \), \(\mathbb {G} _2 = \langle g_2\rangle \), and \(\mathbb {G} _T\) be groups of prime order p, with p of bit-length \(\lambda \). Let \(e:\mathbb {G}_1\times \mathbb {G}_2 \rightarrow \mathbb {G}_T\) be a type-3 bilinear map. The Bilinear Decisional Diffie-Hellman (BDDH) assumption holds in \((\mathbb {G} _1, \mathbb {G} _2,\mathbb {G} _T,e)\) if, given \((a,b,c,d_1)\xleftarrow {\$} (\mathbb {Z}^*_p)^4\), \(d_0\leftarrow abc\), and \(\beta \xleftarrow {\$} \{0,1\}\), no PPT adversary \(\mathcal {A}\) can guess \(\beta \) from \((g_1^a,g_2^a, g_1^b,g_2^b, g_1^c, g_2^c,e(g_1,g_2)^{d_\beta })\) with non-negligible advantage. We denote by \(\mathsf {Adv}_{}^{\mathsf {BDDH}}(\lambda ) \) the maximum such advantage over all PPT adversaries.
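For concreteness, the advantage of a fixed adversary \(\mathcal {A}\) can be written in the standard real-or-random form (a common formulation, not quoted verbatim from [15]): \(\mathsf {Adv}_{\mathcal {A}}^{\mathsf {BDDH}}(\lambda ) = \left| \Pr \left[ \mathcal {A}(g_1^a,g_2^a, g_1^b,g_2^b, g_1^c, g_2^c,e(g_1,g_2)^{d_\beta }) = \beta \right] - \frac{1}{2} \right| \), where the probability is taken over the random choices of \(a,b,c,d_1,\beta \) and the coins of \(\mathcal {A}\).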

Signature of Knowledge. Let \(\mathcal {R}\) be a binary relation and let \(\mathcal {L}\) be a language such that \(s \in \mathcal {L} \Leftrightarrow (\exists w, (s,w)\in \mathcal {R})\). A Non-Interactive Proof of Knowledge (NIPoK) [10] allows a prover to convince a verifier that he knows a witness w such that \((s,w)\in \mathcal {R}\). Here, we follow [16] and write \( \mathsf {NIPoK} \left\{ w : (s,w)\in \mathcal {R} \right\} \) for the proof of knowledge of w for the statement s and the relation \(\mathcal {R}\). A signature of knowledge essentially allows one to sign a message and prove in zero-knowledge that a particular statement holds for the key [17]. In this paradigm, w is a secret key and s is the corresponding public key.

Definition 2

(Signature of Knowledge). Let \(\mathcal {R}\) be a binary relation and \(\mathcal {L}\) be a language such that \(s \in \mathcal {L} \Leftrightarrow (\exists w, (s,w)\in \mathcal {R})\). A Signature of Knowledge for \(\mathcal {L}\) is a pair of algorithms \((\mathsf {SoK},\mathsf {SoKver})\) with \( \mathsf {SoK} _{m} \left\{ w : (s,w)\in \mathcal {R} \right\} \rightarrow \pi \) and \(\mathsf {SoKver} ( m, s, \pi ) \rightarrow b\), such that:

  • Perfect Zero Knowledge: There exists a polynomial time algorithm \(\textsf {Sim}\), the simulator, such that \(\textsf {Sim}(m,s)\) and \( \mathsf {SoK} _{m} \left\{ w : (s,w)\in \mathcal {R} \right\} \) follow the same probability distribution.

  • Knowledge Extractor: There exist a PPT knowledge extractor \(\textsf {Ext}\) and a negligible function \(\epsilon _{\mathsf {SoK}}\) such that, for any algorithm \(\mathcal {A} {}^{\textsf {Sim}(\cdot ,\cdot )}(\lambda )\) that has access to a simulator forging signatures for chosen instance/message tuples and that outputs a fresh tuple \((s,\pi , m)\) with \(\mathsf {SoKver} ( m, s, \pi ) =1\), the extractor \(\textsf {Ext}^{\mathcal {A} {}}(\lambda )\), given access to \(\mathcal {A} {}(\lambda )\), outputs w such that \((s,w)\in \mathcal {R}\) with probability at least \(1-\epsilon _{\mathsf {SoK}}(\lambda )\).

We omit the definition of NIPoK, which is identical to that of SoK but without the messages.
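To illustrate the paradigm, a Schnorr-style signature of knowledge for the relation \(u = g^w\) simply binds the message into the Fiat-Shamir hash: verifying the signature convinces one that the signer knew the secret exponent. The Python sketch below uses toy, insecure parameters and is only an illustration of the \(\mathsf {SoK}/\mathsf {SoKver}\) interface, not the paper's instantiation.

```python
import hashlib
import secrets

# Toy, INSECURE demo parameters: safe prime p = 2q + 1; g generates
# the order-q subgroup of quadratic residues mod p.
p, q, g = 467, 233, 4

def challenge(u, a, m):
    """Fiat-Shamir challenge binding statement u, commitment a, and message m."""
    data = f"{u}|{a}|{m}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sok_sign(x, m):
    """SoK_m{ x : u = g^x }: signing m proves knowledge of x in zero-knowledge."""
    u = pow(g, x, p)
    r = secrets.randbelow(q)
    a = pow(g, r, p)
    s = (r + challenge(u, a, m) * x) % q
    return u, (a, s)

def sok_verify(u, m, a, s):
    return pow(g, s, p) == a * pow(u, challenge(u, a, m), p) % p

u, (a, s) = sok_sign(123, "session-transcript")
assert sok_verify(u, "session-transcript", a, s)
```

Dropping the message m from the hash input recovers a plain NIPoK of the discrete logarithm.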

3 LIKE Protocols

Lawful-interception (authenticated) key-exchange (\(\mathsf {LIKE}\)) consists of two mechanisms: a multiparty AKE protocol between two mobile subscribers (users) and their mobile network operators (each performing the same actions), and an Extract-and-Open mechanism for LI, run between a number of authorities.

Intuition. Our AKE component allows user Alice, subscribing to operator \(\mathsf {O} _\mathsf {A} \), and Bob, subscribing to \(\mathsf {O} _\mathsf {B} \), to compute a session key in the presence of \(\mathsf {O} _\mathsf {A} \) and \(\mathsf {O} _\mathsf {B} \). The operators do not compute the session key, only some auxiliary session state, which allows for session authentication and later key recovery.

In the Extract-and-Open component, each authority uses its secret key to extract a trapdoor from the session state. The trapdoors are then used together to open the session key.
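To build intuition (only), the trapdoor mechanism can be mimicked in a plain prime-order group: the session key is blinded by the product of all authority public keys raised to a session-specific exponent, and each trapdoor removes exactly one blinding share, so all n trapdoors are needed to open. This toy Python sketch uses insecure parameters and hypothetical names; it is not the paper's pairing-based construction.

```python
import secrets

# Toy, INSECURE parameters: safe prime p = 2q + 1; g generates the
# order-q subgroup of quadratic residues mod p.
p, q, g = 467, 233, 4

def a_keygen():
    """Hypothetical AKeyGen: secret exponent a_i, public key g^{a_i}."""
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def encapsulate(k, apks):
    """Blind session key k under the product of all authority public keys."""
    r = secrets.randbelow(q - 1) + 1
    blind = 1
    for pk in apks:
        blind = blind * pow(pk, r, p) % p
    return pow(g, r, p), k * blind % p           # toy "session state" (R, C)

def td_gen(sk, R):
    return pow(R, sk, p)                         # authority i's trapdoor R^{a_i}

def open_key(C, trapdoors):
    joint = 1
    for t in trapdoors:
        joint = joint * t % p
    return C * pow(joint, -1, p) % p             # strip all n blinding shares

n = 3
keys = [a_keygen() for _ in range(n)]
k = pow(g, secrets.randbelow(q), p)              # toy session key (group element)
R, C = encapsulate(k, [pk for _, pk in keys])
assert open_key(C, [td_gen(sk, R) for sk, _ in keys]) == k
```

Note that any n-1 trapdoors leave one blinding factor in place, so opening fails without unanimous authority participation, which mirrors the division of responsibility described above.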

Formalization. Let \(\mathsf {USERS}\) be a set of mobile users and \(\mathsf {OPS} \) be a set of operators, such that each user is affiliated to precisely one operator. We also consider a set of authorities \(\mathsf {AUTH}\) of cardinality \(|\mathsf {AUTH} |\), with elements indexed as \(\varLambda _{1}, \dots , \varLambda _{{|\mathsf {AUTH} |}} \).

Let \(\mathsf {PARTIES}\) be the set of all participants: \(\mathsf {USERS} \cup \mathsf {OPS} \cup \mathsf {AUTH} \). Mobile users have no super-role: \(\mathsf {USERS} \cap \mathsf {AUTH} = \emptyset = \mathsf {USERS} \cap \mathsf {OPS} \). Syntactically, \(\mathsf {OPS} \cap \mathsf {AUTH} = \emptyset \); however, operators can act as authorities by registering a second set of (authority) credentials and using those for opening. This is described in the full version.

Parties, attributes, and oracles are formally introduced in the next section. We use a dot notation to refer to attributes of a party \(\mathsf {P} \): \(\mathsf {A}\).\(\mathsf {PK}\) is Alice's public key and \(\varLambda _{i}.\mathsf {SK} \) is the secret key of the \({i}\text {-th}\) authority. We write \(\mathsf {O} _\mathsf {A} \) to indicate Alice's operator even though we have no user registration and can run the protocol with any two operators chosen by the users. The operators are assumed to relay all communication (as is the case today). The parties can agree on a variable number n of (distinct) authorities, with \(n \in \llbracket 1, |\mathsf {AUTH} | \rrbracket \).

Definition 3

A lawful interception key exchange ( \(\mathsf {LIKE}\) ) protocol is defined by the following algorithms:

  • \(\mathsf {Setup} (1^\lambda )\rightarrow \mathsf {pp} \): Takes as input a security parameter and outputs public system parameters \(\mathsf {pp}\), known to all parties.

  • \(\mathsf {{U}KeyGen} (\mathsf {pp})\rightarrow (\mathsf {U}.\mathsf {PK},\mathsf {U}.\mathsf {SK})\): Takes as input the public parameters \(\mathsf {pp}\) and outputs a user key pair.

  • \(\mathsf {{O}KeyGen} (\mathsf {pp})\rightarrow (\mathsf {O}.\mathsf {PK},\mathsf {O}.\mathsf {SK})\): Takes as input the public parameters \(\mathsf {pp}\) and outputs an operator key pair.

  • \(\mathsf {{A}KeyGen} (\mathsf {pp})\rightarrow ({{\varLambda _{}}.\mathsf {PK}},{{\varLambda _{}}.\mathsf {SK}})\): Takes as input the public parameters \(\mathsf {pp}\) and outputs an authority key pair.

  • \(\mathsf {AKE} {}\langle \mathsf {A} (\mathsf {A}.\mathsf {SK}), \mathsf {O} _{\mathsf {A}}(\mathsf {O} _{\mathsf {A}}.\mathsf {SK}), \mathsf {O} _{\mathsf {B}}(\mathsf {O} _{\mathsf {B}}.\mathsf {SK}), \mathsf {B} (\mathsf {B}.\mathsf {SK})\rangle ( \mathsf {PK}_{\mathsf {A} \rightarrow \mathsf {B}}) \rightarrow (\mathsf {k} _\mathsf {A}, \mathsf {sst} _\mathsf {A},\mathsf {sst} _\mathsf {B}, \mathsf {k} _\mathsf {B})\): An authenticated key-exchange protocol between users \((\mathsf {A},\mathsf {B}) \in \mathsf {USERS} ^2\) and their operators \((\mathsf {O} _\mathsf {A},\mathsf {O} _\mathsf {B}) \in \mathsf {OPS} ^2\), the latter providing active middleware at all times. The parties each take as input a secret key, and they all have access to the same set of public values \(\mathsf {PK}_{\mathsf {A} \rightarrow \mathsf {B}}\) containing: parameters \(\mathsf {pp} \), public keys \((\mathsf {A}.\mathsf {PK}, \mathsf {B}.\mathsf {PK})\), and a vector of authority public keys \(\mathsf {APK} =(\varLambda _{i}.\mathsf {PK})_{i=1}^n\) with (distinct) \(\varLambda _{i} \in \mathsf {AUTH} \) for all i. At the end of the protocol, \(\mathsf {A} \) (resp. \(\mathsf {B} \)) returns a session secret key \(\mathsf {k} _\mathsf {A} \) (resp. \(\mathsf {k} _\mathsf {B} \)) and the operator \(\mathsf {O} _\mathsf {A} \) (resp. \(\mathsf {O} _\mathsf {B} \)) returns a (public) session state \(\mathsf {sst} _\mathsf {A} \) (resp. \(\mathsf {sst} _\mathsf {B} \)). In case of failure, the parties output a special symbol \(\bot \) instead.

  • \(\mathsf {Verify} (\mathsf {pp},\mathsf {sst}, \mathsf {A}.\mathsf {PK}, \mathsf {B}.\mathsf {PK}, \mathsf {O}.\mathsf {PK}, \mathsf {APK})\rightarrow \mathsf {b} \): Takes as input a session state \(\mathsf {sst} \), user public keys \(\mathsf {A}\).\(\mathsf {PK}\) and \(\mathsf {B}\).\(\mathsf {PK}\), an operator public key \(\mathsf {O}\).\(\mathsf {PK}\), and a set of authority public keys \(\mathsf {APK} = (\varLambda _{i}.\mathsf {PK})_{i=1}^n\), and outputs a bit \(\mathsf {b} =1 \) if \(\mathsf {sst}\) was correctly generated and authenticated by \(\mathsf {O}\), and \(\mathsf {b} =0\) otherwise.

  • \(\mathsf {TDGen} (\mathsf {pp},{{\varLambda _{}}.\mathsf {SK}},\mathsf {sst})\rightarrow \varLambda _{}.t \): Takes as input authority secret key \({{\varLambda _{}}.\mathsf {SK}}\) and session state \(\mathsf {sst} \), and outputs trapdoor \(\varLambda _{}.t\).

  • \(\mathsf {Open} (\mathsf {pp},\mathsf {sst}, \mathsf {APK}, \mathcal {T}) \rightarrow \mathsf {k} \): Takes as input session state \(\mathsf {sst} \), two vectors \(\mathsf {APK} =(\varLambda _{i}.\mathsf {PK})_{i=1}^n\) with n distinct public keys, and \(\mathcal {T} =(\varLambda _{i}.t )_{i=1}^n\) (authority public keys and corresponding trapdoors), and outputs either a session key \(\mathsf {k} \), or a symbol \(\bot \).

We defer the full correctness definition to Appendix A. It essentially states that, if parameters are normally generated, then for an honestly-run protocol session: both operators approve it, both users accept and compute the same key, and that same key can be extracted from either operator's session state.

To use a \(\mathsf {LIKE}\) scheme, keys are generated based on the parties’ role: \(\mathsf {{U}KeyGen} \) for users, \(\mathsf {{O}KeyGen} \) for operators, and \(\mathsf {{A}KeyGen} \) for authorities. Then, Alice, Bob, and the operators run \(\mathsf {AKE} {}\) as described above. At the end, Alice and Bob share a session key (unknown to the operator), and each operator returns a public session state \(\mathsf {sst} \). The values included in \(\mathsf {sst}\) are protocol-dependent, and they must allow the authorities to verify that the session was run correctly and recover the session key. The verification is not exclusively meant for authorities: the algorithm \(\mathsf {Verify} \) checks the validity of the session and authenticates its participants.

The authorities may later retrieve session keys. Each authority verifies the soundness of a given \(\mathsf {sst} \), then extracts a trapdoor to the session key, using its secret key. Given all the trapdoors, the algorithm \(\mathsf {Open} \) retrieves the session key. Depending on the LI scenario, the \(\mathsf {Open}\) algorithm may be run by one or multiple parties (thus, whoever has all the trapdoors extracts the key). Our protocol is versatile and can adapt to many cases, some of which are shown in the full version.

4 Security Model

This section formalizes \(\mathsf {LIKE}\) security: an essential contribution of the paper, which provides much stronger guarantees than regular authenticated key exchange or key escrow. We begin by giving the adversarial model, then list the oracles which adversaries can use to manipulate honest parties, and finally formalize security games for each property.

The Adversarial Model. We assume all parties (users, operators, authorities) have unique and unchanging roles. A single physical entity may nevertheless play multiple roles by registering in our scheme multiple times (as an authority and an operator, for instance). Each party \(\mathsf {P} \) is associated with these attributes:

  • (\(\mathsf {SK}\), \(\mathsf {PK}\)): long-term private (resp. public) keys \(\mathsf {SK}\) (resp. \(\mathsf {PK}\)). Such keys are output by \(\mathsf {{U}KeyGen} \), \(\mathsf {{O}KeyGen} \), or \(\mathsf {{A}KeyGen} \).

  • \(\gamma \): a corruption flag, indicating whether that party has been corrupted (1) or not (0). The flag starts out as 0, and once it changes to 1, it can never be flipped back to 0.

Alice, Bob, and their operators run \(\mathsf {AKE} {}\) in sessions. At each new session, a new instance of each party is created (yielding four instances per session). We denote by \(\pi _{\mathsf {P}}^{i}\) the \({i}\text {-th}\) instance generated during the experiment, where \(\mathsf {P} \) denotes the corresponding party. In addition to the long-term keys and corruption bits of the party, instances keep track of the following attributes:

  • \(\mathsf {sid}\): a session identifier consisting of a tuple of session-specific values, like public parameters or randomness. This attribute stores only state pertinent to that session, does not include secret values, and is computable by the instances running the session. The value of \(\mathsf {sid}\) is initially set to \(\bot \) and changes during the protocol run.

  • \(\mathsf {{PID}}\): partner identifiers. If \(\mathsf {P} \in \mathsf {USERS} \), then \(\mathsf {{PID}} \in \mathsf {USERS} \) such that \(\mathsf {P} \ne \mathsf {{PID}} \); else \(\mathsf {P} \in \mathsf {OPS} \) and \(\mathsf {{PID}} \in \mathsf {USERS} ^2\), consisting of two distinct parties.

  • \(\mathsf {{OID}}\): operator identifiers. If \(\mathsf {P} \in \mathsf {USERS} \) then \(\mathsf {{OID}} \in \mathsf {OPS} ^2\). Else \(\mathsf {{OID}} \in \mathsf {OPS} \).

  • \(\mathsf {{AID}}\): distinct authority identifiers such that \(\mathsf {{AID}} \in \mathsf {AUTH} ^n\).

  • \(\alpha \): an accept flag, undefined (\(\alpha =\bot \)) until the instance terminates, either in an abort (setting \(\alpha =0\)) or without error (\(\alpha =1\)).

  • \(\mathsf {k}\): the session key, initialized to \(\bot \), and modified if the protocol terminates without error. Operator instances do not have this attribute.

  • \(\mathsf {sst}\): a set of values pertaining to the instance’s view of the session, initialized to \(\bot \) and modified if the protocol terminates without error. User instances do not have this attribute.

  • \(\rho \): a reveal bit, initialized to 0 and set to 1 if the adversary reveals a session key \(\mathsf {k} \ne \bot \). Operator instances do not have this attribute.

  • \(\mathsf {b}\): a bit chosen uniformly at random upon the creation of the instance.

  • \(\tau \): the transcript of the session, initialized as \(\bot \), then updated to the ordered list of messages sent and received by that instance.

An auxiliary function \(\mathsf {IdentifySession}\) (\(\mathsf {sst}\), \(\pi _{}^{}\)) is defined, taking as input session state \(\mathsf {sst}\) and a party instance \(\pi _{}^{}\), outputting 1 if \(\pi _{}^{}\) took part in the session where \(\mathsf {sst}\) was created, and 0 otherwise. We need this at opening, when authorities must extract and verify the session identifier for themselves.

Notice that, unlike in typical AKE, we have two types of parties running each session (users and operators), and three attributes that describe partners: mobile user partners in \(\mathsf {{PID}}\), operator partners in \(\mathsf {{OID}}\), and authorities partnering it in \(\mathsf {{AID}}\). Moreover, \(\mathsf {IdentifySession} \) and \(\mathsf {sst} \) are protocol dependent, i.e., they will have a different instantiation depending on the protocol we analyse.

We define matching conversations as follows:

Definition 4

(Matching Instances). For any \((i,j)\in \mathbb {N}^2\) and \((\mathsf {A},\mathsf {B})\in \mathsf {USERS} ^2\) such that \(\mathsf {A} \ne \mathsf {B} \), we say that \(\pi _{\mathsf {A}}^{i}\) and \(\pi _{\mathsf {B}}^{j}\) have matching conversation if all the following conditions hold: \(\pi _{\mathsf {A}}^{i}.\mathsf {sid} \ne \bot \), \(\pi _{\mathsf {A}}^{i}.\mathsf {sid} =\pi _{\mathsf {B}}^{j}.\mathsf {sid} \), and \(\pi _{\mathsf {A}}^{i}.\mathsf {{AID}} = \pi _{\mathsf {B}}^{j}.\mathsf {{AID}} \). If two instances \(\pi _{\mathsf {A}}^{i}\) and \(\pi _{\mathsf {B}}^{j}\) have matching conversation, we sometimes say, by abuse of language, that \(\pi _{\mathsf {A}}^{i}\) matches \(\pi _{\mathsf {B}}^{j}\).
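The predicate of Definition 4 is mechanical to check. A minimal Python sketch, where instances are modeled as hypothetical dictionaries carrying the \(\mathsf {sid}\) and \(\mathsf {{AID}}\) attributes (with \(\bot \) rendered as None):

```python
def matches(pi_a, pi_b):
    """Matching-instances predicate: same non-bot sid and same AID."""
    return (pi_a["sid"] is not None and
            pi_a["sid"] == pi_b["sid"] and
            pi_a["AID"] == pi_b["AID"])

pi_1 = {"sid": ("pp", "nonce42"), "AID": ("L1", "L2")}
pi_2 = {"sid": ("pp", "nonce42"), "AID": ("L1", "L2")}
assert matches(pi_1, pi_2)
```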

Oracles. We define \(\mathsf {LIKE}\) security in terms of games that an adversary plays against a challenger. In each game, the adversary \(\mathcal {A} \) may query some or all of the oracles below. Intuitively, \(\mathcal {A}\) may register honest or malicious participants (using \(\mathsf {{Register}}\) ), initiate new sessions (using \(\mathsf {{NewSession}}\) ), interact in the AKE protocol (using \(\mathsf {{Send}}\) ), corrupt parties (using \(\mathsf {{Corrupt}}\) ), reveal session keys (using \(\mathsf {{Reveal}}\) ), or reveal LI trapdoors (using \(\mathsf {{RevealTD}}\) ). Finally, the adversary has to query a testing oracle (\(\mathsf {{Test}}\)) and tell whether the output was the real session key or a random one. Each oracle aborts if queried with ill-formatted input, or if insufficient information exists for the response.

  • \(\mathsf {{Register}} (\mathsf {P}, \mathtt {role}, \mathsf {PK}) \rightarrow \bot \cup \mathsf {P}.\mathsf {PK} \): On input party \(\mathsf {P} \not \in \mathsf {USERS} \cup \mathsf {OPS} \cup \mathsf {AUTH} \), role \(\mathtt {role} \in \{ \mathtt {user},\mathtt {operator},\mathtt {authority}\}\), and public key \(\mathsf {PK} \):

    • If \(\mathtt {role} = \mathtt {user}\) (resp. \(\mathtt {operator}\), \(\mathtt {authority}\)), add \(\mathsf {P} \) to the set \( \mathsf {USERS} \) (resp. \(\mathsf {OPS} \), \(\mathsf {AUTH} \)).

    • If \(\mathtt {role} = \mathtt {user}\) (resp. \(\mathtt {operator}\) and \(\mathtt {authority}\)) and \(\mathsf {PK} = \bot \), run \(\mathsf {{U}KeyGen} (\mathsf {pp})\rightarrow (\mathsf {P}. \mathsf {PK}, \mathsf {P}. \mathsf {SK})\) (resp. \(\mathsf {{O}KeyGen} \) and \(\mathsf {{A}KeyGen} \)).

    • If \(\mathsf {PK} \ne \bot \), set \(\mathsf {P}.\gamma {}= 1\), \(\mathsf {P}.\mathsf {PK} = \mathsf {PK} \), and \(\mathsf {P}.\mathsf {SK} = \bot \).

    Finally, return \(\mathsf {P}.\mathsf {PK} \).

  • \(\mathsf {{NewSession}} (\mathsf {P},\mathsf {{PID}},\mathsf {{OID}}, \mathsf {{AID}})\rightarrow \pi _{\mathsf {P}}^{i}\): On input party \(\mathsf {P} \in \mathsf {USERS} \cup \mathsf {OPS} \), if \(\mathsf {P} \in \mathsf {USERS} \) then \(\mathsf {{PID}},\mathsf {{OID}} \) and \( \mathsf {{AID}} \) must be such that: \(\mathsf {{PID}} \in \mathsf {USERS} \) with \(\mathsf {P} \ne \mathsf {{PID}} \); \(\mathsf {{OID}} \in \mathsf {OPS} ^2\); and \(\mathsf {{AID}} \in \mathsf {AUTH} ^{*}\) with \(|\mathsf {{AID}} | \ne 0\). If \(\mathsf {P} \in \mathsf {OPS} \), then \(\mathsf {{PID}},\mathsf {{OID}} \), and \(\mathsf {{AID}} \) are such that \(\mathsf {{PID}} \in \mathsf {USERS} ^2\); \(\mathsf {{OID}} \in \mathsf {OPS} \); and \(\mathsf {{AID}} \in \mathsf {AUTH} ^{*}\) such that \(|\mathsf {{AID}} | \ne 0\). On the \({i}\text {-th}\) call to this oracle, return new instance \(\pi _{\mathsf {P}}^{i}\) with already-set values for \(\mathsf {{PID}} \), \(\mathsf {{OID}} \), and \(\mathsf {{AID}} \).

  • \(\mathsf {{Send}}\) (\(\pi _{\mathsf {P}}^{i}\), m)\(\rightarrow m'\): Send message m to instance \(\pi _{\mathsf {P}}^{i}\) and return message \(m'\) according to the protocol (potentially \(\bot \) for an inexistent, aborted, or terminated instance, for an ill-formed m, or if \(\mathsf {P}.\mathsf {SK} =\bot \)).

  • \(\mathsf {{Reveal}}\) (\(\pi _{\mathsf {P}}^{i}\))\(\rightarrow \mathsf {k} \): For accepting user instance \(\pi _{\mathsf {P}}^{i}\), return the session key \(\pi _{\mathsf {P}}^{i}.\mathsf {k} \) and set \(\pi _{\mathsf {P}}^{i}.\rho = 1\). For accepting operator instance \(\pi _{\mathsf {P}}^{i}\) return the session state \(\pi _{\mathsf {P}}^{i}.\mathsf {sst} \) and set \(\pi _{\mathsf {P}}^{i}.\rho = 1\). If \(\pi _{\mathsf {P}}^{i}\) is not an accepting instance (\( \alpha \ne 1 \)), return \(\bot \).

  • \(\mathsf {{Corrupt}}\) (\(\mathsf {P}\))\(\rightarrow \mathsf {P}.\mathsf {SK} \): Return \(\mathsf {P}.\mathsf {SK} \) of input party \(\mathsf {P} \) (all roles) and set \(\mathsf {P}.\gamma = 1\) for all instances of this party.

  • \(\mathsf {{Test}} {}(\pi _{\mathsf {P}}^{i}) \rightarrow \widetilde{\mathsf {k}}\): Return \(\bot \) if the input instance \(\pi _{\mathsf {P}}^{i}\) is not an accepting user instance, or if this oracle has previously been queried and returned a non-\(\bot \) value. Else, if \(\pi _{\mathsf {P}}^{i}.\mathsf {b} = 0\), return \(\pi _{\mathsf {P}}^{i}.\mathsf {k} \); otherwise return a randomly-sampled r from the same domain as \(\pi _{\mathsf {P}}^{i}.\mathsf {k} \). This oracle can only be called once per experiment, so only one instance bit \(\mathsf {b}\) is actually used.

  • \(\mathsf {{RevealTD}} {}(\mathsf {sst}, \mathsf {A}, \mathsf {B}, \mathsf {O}, (\varLambda _{i})_{i=1}^n,l)\rightarrow \varLambda _{l}.t\): If \(\mathsf {Verify} (\mathsf {pp}, \mathsf {sst}, \mathsf {A}.\mathsf {PK}, \mathsf {B}.\mathsf {PK}, \mathsf {O}.\mathsf {PK}, (\varLambda _{i}.\mathsf {PK})_{i=1}^n) = 1\), run \(\varLambda _{l}.t \leftarrow \mathsf {TDGen} (\mathsf {pp},\varLambda _{l}.\mathsf {SK}, \mathsf {sst})\), return \(\varLambda _{l}.t\); else return \(\bot \).

Notice that the operators do not contribute in a traditional sense to the security of the key; instead they verify the soundness of the exchanges and produce a session state \(\mathsf {sst}\), which proves the operator’s honesty to the authorities. The two operators need not agree on \(\mathsf {sst}\), and only one \(\mathsf {sst}\) is required for the opening procedure (as is the case in most applications). However, if during AKE one operator validates the session but the other does not, the session is ultimately aborted.

We define \(\mathsf {LIKE}\) security in terms of three properties: key-security (KS), non-frameability (NF), and honest operator (HO). For the reader’s convenience, some useful notations from the model may also be found summarized in Appendix A.

Key-Security. Our KS game extends AKE security [13]. The adversary may query all the oracles above (subject to key-freshness as defined below) and must distinguish a real session key from one randomly chosen from the same domain.

The KS notion is much stronger than traditional 2-party AKE. The attacker can adaptively corrupt all but the users targeted in the challenge, all but one out of the n selected authorities, and all the operators; \(\mathcal {A}\) may also register malicious users, retroactively corrupt users (thus ensuring forward secrecy), and learn all but one trapdoor in the challenge session. Thus, the session key is only known by the endpoints, and by the collusion of all the n authorities (LI); all other (collusions of) parties fail to distinguish it from a random key. More formally, the tested target instance must be key-fresh with respect to Definition 5. The full game \({\mathsf {Exp}^{\mathsf {\mathsf {KS}}}_{\mathsf {LIKE},\mathcal {A}}(\lambda )}\) is given in Table 1.

Definition 5

(Key Freshness). Let \(\pi _{\mathsf {P}}^{j}\) be the \({j}\text {-th}\) created instance, associated to party \(\mathsf {P} \in \mathsf {USERS} \). Let \(\mathcal {A} {}\) be a PPT adversary against \(\mathsf {LIKE}\) . Parse \(\pi _{\mathsf {P}}^{j}.\mathsf {{PID}} \) as \(\mathsf {P} '\) and \(\pi _{\mathsf {P}}^{j}.\mathsf {{AID}} \) as \((\varLambda _{i})_{i=1}^n\). The key \(\pi _{\mathsf {P}}^{j}.\mathsf {k} \) is fresh if all the following conditions hold:

Table 1. Games for key-security (\(\mathsf {KS} \), left) and non-frameability (\(\mathsf {NF} \), right).
  • \(\pi _{\mathsf {P}}^{j}.\alpha =1\), \(\mathsf {P}.\gamma =0\) when \(\pi _{\mathsf {P}}^{j}.\alpha \) became 1, and \(\pi _{\mathsf {P}}^{j}.\rho =0\).

  • if \(\pi _{\mathsf {P}}^{j}\) matches \(\pi _{\mathsf {P} '}^{k}\) for \(k \in \mathbb {N}\), then: \(\pi _{\mathsf {P} '}^{k}.\alpha =1\), \(\mathsf {P} '.\gamma =0\) when \(\pi _{\mathsf {P} '}^{k}.\alpha \) became 1, and \(\pi _{\mathsf {P} '}^{k}.\rho =0\).

  • if no \(\pi _{\mathsf {P} '}^{k}\) matches \(\pi _{\mathsf {P}}^{j}\), \(\mathsf {P} '.\gamma =0\).

  • \(\exists \ l\in \llbracket 1,n \rrbracket \) such that for any \(\mathsf {O} \in \pi _{\mathsf {P}}^{j}.\mathsf {{OID}} \), \(\mathcal {A}\) has never queried \(\mathsf {{RevealTD}} {({\mathsf {sst}}, {\mathsf {A}}, {\mathsf {B}},\mathsf {O}, ({\varLambda _{i} '})_{i=1}^{{n'}},{l'})}\) and:

    • \(\varLambda _{l}.\gamma =0\) and \(\varLambda _{l} =\varLambda _{l'} '\);

    • \(\mathsf {IdentifySession} (\mathsf {sst}, \pi _{\mathsf {P}}^{j})=1\).

Definition 6

For an adversary \(\mathcal {A}\) the value:

$$\begin{aligned} \mathsf {Adv}_{\mathsf {LIKE},\mathcal {A}}^{\mathsf {KS}}(\lambda ) := \Big | \mathbb {P}\left[ \mathsf {Exp}^{\mathsf {\mathsf {KS}}}_{\mathsf {LIKE},\mathcal {A}}(\lambda ) =1\right] - \frac{1}{2} \Big | \end{aligned}$$

denotes its advantage against \(\mathsf {Exp}^{\mathsf {\mathsf {KS}}}_{\mathsf {LIKE},\mathcal {A}}(\lambda )\). A lawful-interception authenticated key-exchange scheme \(\mathsf {LIKE}\) is key-secure if for all PPT \({\mathcal {A}}\), \(\mathsf {Adv}_{\mathsf {LIKE},\mathcal {A}}^{\mathsf {KS}}(\lambda )\) is a negligible function of the security parameter \(\lambda \).

Non-frameability. In this game, \(\mathcal {A}\) attempts to frame a user \(\mathsf {P} ^*\) for running an AKE session (with session state \(\mathsf {sst}\)) which \(\mathsf {P} ^*\) rejected or did not take part in. The adversary may corrupt all parties apart from \(\mathsf {P} ^*\), as formalized in Table 1.

Definition 7

The advantage of an adversary \( \mathcal {A} {} \) in the non-frameability experiment \( \mathsf {Exp}^{\mathsf {NF}}_{\mathsf {LIKE},\mathcal {A} {}}(\lambda )\) in Table 1 is defined as:

$$\begin{aligned} \mathsf {Adv}_{\mathsf {LIKE},\mathcal {A}}^{\mathsf {NF}}(\lambda ) = \mathbb {P}\left[ \mathsf {Exp}^{\mathsf {NF}}_{\mathsf {LIKE},\mathcal {A} {}}(\lambda )=1\right] . \end{aligned}$$

A lawful-interception authenticated key-exchange scheme \(\mathsf {LIKE}\) is non-frameable if all PPT adversaries \({\mathcal {A}}\) have negligible \(\mathsf {Adv}_{\mathsf {LIKE},\mathcal {A} {}}^{\mathsf {NF}}(\lambda ) \) as a function of \(\lambda \).

Table 2. The honest-operator \(\mathsf {HO}\) game, where \(q_\mathsf {r}\) is the number of queries to \(\mathsf {{Register}} \), and \(\mathsf {P} _i\) is the party input as the \({i}\text {-th}\) such query.

Honest Operator. The HO game captures the fact that honest operators will abort if they detect ill-formed or non-authentic messages. The adversary must create a valid session state \(\mathsf {sst} \), accepted by the operators, for which the authorities' \(\mathsf {Open}\) algorithm yields an incorrect session key. The attacker can provide some trapdoors and corrupt all the parties except the operator that approves \(\mathsf {sst}\).

Ideally, \(\mathsf {LIKE}\) schemes should guarantee that if the (honest) operator approves a session, then the extracted key (output in \(\mathsf {Exp}^{\mathsf {HO}}_{\mathsf {LIKE},\mathcal {A} {}}(\lambda )\), see Table 2) will be the one used by Alice and Bob. However, malicious endpoints could run a protocol perfectly, then use a different key (e.g., exchanged out of band) to encrypt messages. This flaw is universal to secure-channel establishment featuring malicious endpoints. The best a \(\mathsf {LIKE}\) protocol can guarantee is that the extracted key is the one that would have resulted from an honest protocol run yielding \(\mathsf {sst}\). We express this in terms of a key extractor, which, given an operator instance and a set of public keys, outputs the key \(\mathsf {k} \) associated with the session in which the instance took part.

Definition 8

(Key Extractor). For any \(\mathsf {LIKE} \), a key extractor \(\mathsf {Extract}(\cdot ,\cdot )\) is a deterministic unbounded algorithm such that, for any users \(\mathsf {A} \) and \(\mathsf {B} \), operators \(\mathsf {O} _\mathsf {A} \) and \(\mathsf {O} _\mathsf {B} \), and set of n authorities \((\varLambda _{i})_{i=1}^n\), any set \(\{\mathsf {pp},\) \(\mathsf {A}.\mathsf {PK},\) \(\mathsf {A}.\mathsf {SK},\mathsf {B}.\mathsf {PK},\) \(\mathsf {B}.\mathsf {SK},\) \(\mathsf {O} _\mathsf {A}.\mathsf {PK},\) \(\mathsf {O} _\mathsf {A}.\mathsf {SK},\) \(\mathsf {O} _\mathsf {B}.\mathsf {PK},\) \(\mathsf {O} _\mathsf {B}.\mathsf {SK},\) \(\mathsf {k},\) \(\mathsf {sst},\) \(\mathsf {APK} =(\varLambda _{i}.\mathsf {PK})_{i=1}^n,\) \((\varLambda _{i}.\mathsf {SK})_{i=1}^n,\) \(\tau _\mathsf {A}, \tau _\mathsf {B} , \mathsf {PPK} \}\) generated as follows:

  • \(\mathsf {pp} \leftarrow \mathsf {Setup} (\lambda )\); \((\mathsf {A}.\mathsf {PK},\mathsf {A}.\mathsf {SK})\leftarrow \mathsf {{U}KeyGen} (\mathsf {pp})\); \((\mathsf {B}.\mathsf {PK},\mathsf {B}.\mathsf {SK})\leftarrow \mathsf {{U}KeyGen} (\mathsf {pp})\);

  • \((\mathsf {O} _\mathsf {A}.\mathsf {PK},\mathsf {O} _\mathsf {A}.\mathsf {SK})\leftarrow \mathsf {{O}KeyGen} (\mathsf {pp})\); \((\mathsf {O} _\mathsf {B}.\mathsf {PK},\mathsf {O} _\mathsf {B}.\mathsf {SK})\leftarrow \mathsf {{O}KeyGen} (\mathsf {pp})\);

  • \(\forall i \in \llbracket 1,n\rrbracket \), \((\varLambda _{i}.\mathsf {PK},\varLambda _{i}.\mathsf {SK})\leftarrow \mathsf {{A}KeyGen} (\mathsf {pp})\);

  • \( (\mathsf {k}, \mathsf {sst} _\mathsf {A}, \mathsf {sst} _\mathsf {B}, \mathsf {k}) \leftarrow \mathsf {AKE} {}\langle \mathsf {A} (\mathsf {A}.\mathsf {SK}),\mathsf {O} _\mathsf {A} (\mathsf {O} _\mathsf {A}.\mathsf {SK}),\)

    \(\mathsf {O} _\mathsf {B} (\mathsf {O} _\mathsf {B}.\mathsf {SK}),\mathsf {B} (\mathsf {B}.\mathsf {SK})\rangle (\mathsf {pp}, \mathsf {A}.\mathsf {PK},\mathsf {B}.\mathsf {PK}, \mathsf {APK})\);

  • \( \tau _\mathsf {A} \) is the transcript of the execution yielding \(\mathsf {sst} _\mathsf {A} \) from \(\mathsf {O} _\mathsf {A} \)’s point of view;

  • \( \tau _\mathsf {B} \) is the transcript of the execution yielding \(\mathsf {sst} _\mathsf {B} \) from \(\mathsf {O} _\mathsf {B} \)’s point of view;

  • \(\mathsf {PPK} \leftarrow \{\mathsf {O} _\mathsf {A}.\mathsf {PK},\mathsf {O} _\mathsf {B}.\mathsf {PK},\mathsf {A}.\mathsf {PK},\mathsf {B}.\mathsf {PK} \} \cup \{\varLambda _{i}.\mathsf {PK} \}_{i=1}^n\);

it holds that \(\forall (U,P)\in \{\mathsf {A}, \mathsf {B} \}^{2}\) with \(U\ne P\), and for any instance \(\pi _{\mathsf {O} _U}\) such that \(\pi _{\mathsf {O} _U}.\tau = \tau _U\), \(\pi _{\mathsf {O} _U}.\mathsf {{PID}} = P\), \(\pi _{\mathsf {O} _U}.\mathsf {{AID}} = (\varLambda _{i})_{i=1}^n\), and \(\pi _{\mathsf {O} _U}.\mathsf {sst} = \mathsf {sst} \): \(\mathsf {Pr}[\mathsf {Extract}(\pi _{\mathsf {O} _U},\mathsf {PPK})=\mathsf {k} ] = 1\).

Notice that our extractor is unbounded, as it must be in order to preserve key security (otherwise the extractor would allow the operator to find the session key).

Definition 9

For any lawful interception key-exchange scheme \(\mathsf {LIKE} \) that admits a key extractor \(\mathsf {Extract}\) the advantage of an adversary \( \mathcal {A} {} \) in the honest-operator game \( \mathsf {Exp}^{\mathsf {HO}}_{\mathsf {LIKE},\mathcal {A} {}}(\lambda )\) in Table 2 is defined as: \(\mathsf {Adv}_{\mathsf {LIKE},\mathcal {A} {}}^{\mathsf {HO}}(\lambda ) =\)

$$\begin{aligned} \mathbb {P}\left[ \begin{array}{l} (k_*,\pi _{\mathsf {O}}^{},\mathsf {PPK}) \leftarrow \mathsf {Exp}^{\mathsf {HO}}_{\mathsf {LIKE},\mathcal {A} {}}(\lambda ); \\ k\leftarrow \mathsf {Extract}(\pi _{\mathsf {O}}^{},\mathsf {PPK}) \end{array}: \begin{array}{l} k\not = \bot \wedge \ k_*\not = \bot \\ \wedge \ k \not = k_* \end{array}\right] . \end{aligned}$$

A \(\mathsf {LIKE}\) scheme is honest-operator secure if, for all adversaries running in time polynomial in \(\lambda \), \( \mathsf {Adv}_{\mathsf {LIKE},\mathcal {A} {}}^{\mathsf {HO}}(\lambda ) \) is negligible as a function of \(\lambda \).

5 Our Protocol

Our \(\mathsf {LIKE}\) scheme requires a signature scheme \(\mathsf {DS} = (\mathsf {SGen}, \mathsf {SSig}, \mathsf {SVer})\); a signature-of-knowledge scheme \((\mathsf {SoK},\mathsf {SoKver})\) for proving knowledge of a discrete logarithm in a group \(\mathbb {G}_2 = \langle g_2\rangle \); and two NIZK proofs of knowledge: one, denoted \(\mathsf {NIPoK} \left\{ x: y = g_1^{x}\right\} \), proving knowledge of the discrete logarithm of \(y = g_1^x\) in a cyclic group \(\mathbb {G}_1 = \langle g_1\rangle \) for private witness x; and another, denoted \(\mathsf {NIPoK} \left\{ x: y_1 = g_1^{x} \wedge y_T = g_T^{x}\right\} \), proving knowledge of the common discrete logarithm of \(y_1 = g_1^{x}\) and \(y_T = g_T^{x}\) in same-order groups \(\mathbb {G}_1 = \langle g_1\rangle \) and \(\mathbb {G}_T = \langle g_T\rangle \) for private witness x.

The proof and the signature of knowledge of a discrete logarithm can be instantiated by applying Fiat-Shamir to Schnorr’s protocol [39]. For the proof, we use the hash of the statement and the commitment as the challenge; for the signature of knowledge, we additionally include the message in the hash [17]. The proof of discrete-logarithm equality can be instantiated by applying Fiat-Shamir to the Chaum-Pedersen protocol [18].
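To make the Fiat-Shamir construction concrete, the sketch below shows a Schnorr proof of knowledge of a discrete logarithm made non-interactive by hashing the statement and commitment (and, for a signature of knowledge, the message) into the challenge. This is our own illustrative toy, not the paper's implementation: the group (q = 1019, P = 2q + 1 = 2039, generator g = 4 of the order-q subgroup of \(\mathbb{Z}_P^*\)) is far too small to be secure.

```python
import hashlib
import secrets

# Toy Schnorr group: q = 1019 (prime), P = 2*q + 1 = 2039 (safe prime);
# g = 4 generates the order-q subgroup of quadratic residues in Z_P^*.
# These parameters only illustrate the Fiat-Shamir transform.
q, P, g = 1019, 2039, 4

def challenge(y, t, msg):
    """Fiat-Shamir challenge: hash of statement, commitment, and message.
    With msg = "" this is a plain NIPoK; a non-empty msg turns the proof
    into a signature of knowledge on msg."""
    h = hashlib.sha256(f"{g}|{y}|{t}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % q

def nipok_prove(x, msg=""):
    """Prove knowledge of x such that y = g^x (Schnorr + Fiat-Shamir)."""
    y = pow(g, x, P)
    r = secrets.randbelow(q)      # commitment randomness
    t = pow(g, r, P)              # commitment
    c = challenge(y, t, msg)
    s = (r + c * x) % q           # response
    return y, (t, s)

def nipok_verify(y, proof, msg=""):
    t, s = proof
    c = challenge(y, t, msg)
    return pow(g, s, P) == (t * pow(y, c, P)) % P   # g^s =? t * y^c

x = secrets.randbelow(q - 1) + 1
y, (t, s) = nipok_prove(x, msg="A||B||authorities")
assert nipok_verify(y, (t, s), msg="A||B||authorities")
assert not nipok_verify(y, (t, (s + 1) % q), msg="A||B||authorities")
```

Completeness follows from \(g^s = g^{r + cx} = t \cdot y^c\); a tampered response fails the same check.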

Our scheme follows the syntax in Sect. 3. We divide its presentation into four components: (a) setup and key generation, (b) authenticated key-exchange, (c) public verification, and (d) lawful interception.

Setup and Key Generation. This part instantiates the four following algorithms of the \(\mathsf {LIKE}\) syntax presented in Sect. 3.

  • \(\mathsf {Setup} (1^\lambda )\): Based on \(\lambda \), chooses three groups \(\mathbb {G}_1 = \langle g_1\rangle \), \(\mathbb {G}_2 = \langle g_2 \rangle \), and \(\mathbb {G}_T\) of prime order p of bit-length \(\lambda \), together with a type-3 bilinear map \(e:\mathbb {G}_1\times \mathbb {G}_2 \rightarrow \mathbb {G}_T\), and outputs \(\mathsf {pp} = (1^\lambda ,\mathbb {G}_1,\mathbb {G}_2,\mathbb {G}_T,e,p, g_1, g_2)\).

  • \(\mathsf {{U}KeyGen} (\mathsf {pp})\): Runs \((\mathsf {U}.\mathsf {PK},\mathsf {U}.\mathsf {SK}) \leftarrow \mathsf {SGen} (\mathsf {pp})\) and returns \((\mathsf {U}.\mathsf {PK},\mathsf {U}.\mathsf {SK})\).

  • \(\mathsf {{O}KeyGen} (\mathsf {pp})\): Runs \((\mathsf {O}.\mathsf {PK},\mathsf {O}.\mathsf {SK}) \leftarrow \mathsf {SGen} (\mathsf {pp})\) and returns \((\mathsf {O}.\mathsf {PK},\mathsf {O}.\mathsf {SK})\).

  • \(\mathsf {{A}KeyGen} (\mathsf {pp})\): Picks \({{\varLambda _{}}.\mathsf {SK}}\xleftarrow {\$} \mathbb {Z}^*_p\), sets \(\varLambda _{}.\mathsf {pk} \leftarrow g_1^{{{\varLambda _{}}.\mathsf {SK}}}\) and \(\varLambda _{}.\mathsf {ni} \leftarrow \mathsf {NIPoK} \{{{\varLambda _{}}.\mathsf {SK}}: \varLambda _{}.\mathsf {pk} = g_1^{{{\varLambda _{}}.\mathsf {SK}}}\}\), lets \({{\varLambda _{}}.\mathsf {PK}}\leftarrow (\varLambda _{}.\mathsf {pk},\) \(\varLambda _{}.\mathsf {ni})\), and returns \(({{\varLambda _{}}.\mathsf {PK}},{{\varLambda _{}}.\mathsf {SK}})\).

The setup algorithm is run only once; key-generation is run once per party. The users and operators generate signature keys required in the authenticated key-exchange step. Authorities generate private/public keys, then prove (in zero-knowledge) that they know the private key. This prevents attacks in which authorities choose keys that cancel out other authority keys, breaking key security.

Fig. 1. The AKE component of \(\mathsf {LIKE}\), with operators \(\mathsf {O} _\mathsf {A} \) and \(\mathsf {O} _\mathsf {B} \) under a single heading. Operators run protocol steps in turn, forwarding messages to the next participant. If some verification fails, we assume operators instantly abort. The only message not forwarded by \(\mathsf {O} _\mathsf {A} \) to Alice is marked in the dashed box. Note that \(X_1\) is not used during this protocol; we will see later that this element is required to generate the trapdoors.

Authenticated Key Exchange. Whenever two users communicate, they run the AKE component of our \(\mathsf {LIKE}\) syntax, \(\mathsf {AKE} {}\langle \mathsf {A} (\mathsf {A}.\mathsf {SK}),\mathsf {O} _\mathsf {A} (\mathsf {O} _\mathsf {A}.\mathsf {SK}),\mathsf {O} _\mathsf {B} (\mathsf {O} _\mathsf {B}.\mathsf {SK}),\)\(\mathsf {B} (\mathsf {B}.\mathsf {SK})\rangle \) \((\mathsf {pp},\mathsf {A}.\mathsf {PK}, \mathsf {B}.\mathsf {PK}, \mathsf {APK})\). Our protocol is described in Fig. 1. For readability we “merge” \(\mathsf {O} _{\mathsf {A}}\) and \(\mathsf {O} _{\mathsf {B}}\), since they act almost identically. Neither \(\mathsf {O} _{\mathsf {A}}\) nor \(\mathsf {O} _{\mathsf {B}}\) computes session keys.

In Fig. 1, Alice, Bob, and the operators first verify the public keys of the n authorities, aborting if their NIZK proofs are incorrect. (In the figure we omit stating that any failed verification leads to an abort.) If a user has already performed these verifications, future sessions with the same authority keys can proceed directly.

The heart of the protocol is the exchange between Alice and Bob. Alice generates a secret x and sends \( X_1=g_1^{x}, X_2=g_2^x \), with an associated signature of knowledge linking this key share to \( \omega \). Bob proceeds similarly, sampling y and sending \( Y=g_2^y \) and a signature of knowledge. The endpoints also send a signature over the transcript, thus authenticating each other and their conversation.

The two operators verify the messages they receive, aborting in case of failure, and otherwise forwarding the messages. An exception is Bob’s last message, verified by both operators, but not forwarded to Alice. The operators check the signatures and signatures of knowledge and the equality of the exponents in Alice’s DH elements.

Given Bob’s public value Y, all the authority public keys, and her private exponent x, Alice computes her session key as a pairing of the product of all authority public keys and Y, raised to her secret x: \(\mathsf {k} =e(\prod _{i=1}^{n}\varLambda _{i}.\mathsf {pk},Y)^x\). By bilinearity, this equals \( e(\prod _{i=1}^{n}\varLambda _{i}.\mathsf {pk},g_2^y)^x = e(\prod _{i=1}^{n}\varLambda _{i}.\mathsf {pk},g_2^x)^y = e(\prod _{i=1}^{n}\varLambda _{i}.\mathsf {pk},X_2)^y\), which is Bob’s key. Thus our scheme is correct for the endpoints. Note that all endpoint computations involving secret values can be done on a SIM card, except the pairing. Since the pairing is over public values, it can be delegated to a less secure environment (such as the phone); the final exponentiation can then be done on the SIM card. The operator’s output is the session state \(\mathsf {sst}\), the signed session transcript from that operator’s point of view.
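The endpoint-correctness argument above can be checked mechanically in a toy model of the pairing. The sketch below is our own simplification, not the paper's mcl-based implementation: it represents \(\mathbb{G}_1\)- and \(\mathbb{G}_2\)-elements by their exponents and defines \(e(g_1^a, g_2^b) = g_T^{ab}\) in a tiny subgroup of \(\mathbb{Z}_P^*\). Exposing exponents destroys all security, but the model verifies that Alice's and Bob's key derivations agree.

```python
import secrets

# Toy "pairing": G1/G2 elements are tracked by their exponents, and
# e(g1^a, g2^b) = gT^(a*b) in the order-q subgroup of Z_P^*
# (q = 1019 prime, P = 2q + 1 = 2039, gT = 4 of order q).
q, P, gT = 1019, 2039, 4

def pair(a, b):
    """e(g1^a, g2^b) = gT^(a*b)."""
    return pow(gT, (a * b) % q, P)

n = 2                                       # two authorities, as in Sect. 7
auth_sk = [secrets.randbelow(q - 1) + 1 for _ in range(n)]
pk_prod_exp = sum(auth_sk) % q              # exponent of prod_i Lambda_i.pk

x = secrets.randbelow(q - 1) + 1            # Alice's secret: X1 = g1^x, X2 = g2^x
y = secrets.randbelow(q - 1) + 1            # Bob's secret:   Y  = g2^y

k_alice = pow(pair(pk_prod_exp, y), x, P)   # e(prod_i pk_i, Y)^x
k_bob   = pow(pair(pk_prod_exp, x), y, P)   # e(prod_i pk_i, X2)^y
assert k_alice == k_bob                     # both endpoints derive the same key
```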

Verification. We instantiate \(\mathsf {Verify}\) as follows:

  • \(\mathsf {Verify} (\mathsf {pp}, \mathsf {sst}, \mathsf {A}.\mathsf {PK}, \mathsf {B}.\mathsf {PK}, \mathsf {O}.\mathsf {PK}, \mathsf {APK})\rightarrow b\): Parse \(\mathsf {APK} \) as a set \((\varLambda _{i}.\mathsf {PK})_{i=1}^n\), parse each \(\varLambda _{i}.\mathsf {PK} \) as \((\varLambda _{i}.\mathsf {pk},\varLambda _{i}.\mathsf {ni})\), and set \(\omega \leftarrow \mathsf {A} \Vert \mathsf {B} \Vert (\varLambda _{i})_{i=1}^{n} \). Parse \(\mathsf {sst}\) as \(\omega '\Vert m_X\Vert m_Y\Vert \sigma ^1_Y\Vert \sigma _X\Vert \sigma ^2_Y\Vert \sigma _O\), \(m_X\) as \( X_1 \Vert X_2 \Vert \mathsf {ni} _X\), and \(m_Y\) as \(Y \Vert \mathsf {ni} _Y\). If:

    • \(\forall i \in \llbracket 1,n \rrbracket \), \(\mathsf {NIPoKver}(\varLambda _{i}.\mathsf {pk}, \varLambda _{i}.\mathsf {ni}) =1\);

    • \(e(X_1,g_2)=e(g_1,X_2)\);

    • \(\mathsf {SoKver} ( \omega , (g_2,X_2), \mathsf {ni} _{X}) =\mathsf {SoKver} ( \omega , (g_2,Y), \mathsf {ni} _{Y}) =1\);

    • \(\mathsf {SVer} (\mathsf {B}.\mathsf {PK},\sigma ^1_Y, \omega \Vert m_X\Vert m_Y)=1\);

    • \(\mathsf {SVer} (\mathsf {A}.\mathsf {PK},\sigma _X, \omega \Vert m_X\Vert m_Y\Vert \sigma _Y^1)=1\);

    • \(\mathsf {SVer} (\mathsf {B}.\mathsf {PK},\sigma ^2_Y, \omega \Vert m_X\Vert m_Y\Vert \sigma _Y^1\Vert \sigma _X)=1\);

    • \(\mathsf {SVer} (\mathsf {O}.\mathsf {PK},\sigma _O, \omega \Vert m_X\Vert m_Y\Vert \sigma _Y^1\Vert \sigma _X\Vert \sigma _Y^2)=1\);

    then the algorithm returns 1, else it returns 0.

Intuitively, the signatures authenticate Alice, Bob, and the operator outputting \(\mathsf {sst}\) . The \(\mathsf {Verify}\) algorithm retraces the operator’s verifications of the authorities’ proofs, of the signatures and SoKs, and of the equality of the discrete logarithms for \(X_1\) and \(X_2\).

Lawful Interception. Lawful interception employs two algorithms, one for trapdoor generation, and another which uses all the trapdoors to extract a key.

  • \(\mathsf {TDGen} (\mathsf {pp},{{\varLambda _{}}.\mathsf {SK}}, \mathsf {sst})\): Parse \(\mathsf {sst}\) as \((\omega \Vert m_X\Vert m_Y\Vert \sigma ^1_Y\Vert \sigma _X \Vert \sigma ^2_Y\Vert \sigma _O)\). Compute \( \varLambda _{}.t_1 \leftarrow e(X_1,Y)^{{\varLambda _{}}.\mathsf {SK}}\) and \(\varLambda _{}.t_2 \leftarrow \mathsf {NIPoK} \left\{ {{\varLambda _{}}.\mathsf {SK}} : \varLambda _{}.\mathsf {pk} = g_1^{{\varLambda _{}}.\mathsf {SK}}\wedge \varLambda _{}.t_1= e(X_1,Y)^{{\varLambda _{}}.\mathsf {SK}} \right\} \), set \(\varLambda _{}.t \leftarrow (\varLambda _{}.t_1,\varLambda _{}.t_2)\), and return \(\varLambda _{}.t\).

  • \(\mathsf {Open} (\mathsf {pp},\mathsf {sst}, \mathsf {APK}, \mathcal {T}) \): Parse \(\mathcal {T} \) as \((\varLambda _{i}.t)_{i=1}^{n}\), parse \(\mathsf {sst}\) as \(\mathsf {A} \Vert \mathsf {B} \Vert (\varLambda _{i})_{i=1}^{n}\)\(\Vert m_X\Vert m_Y\Vert \sigma ^1_Y\Vert \sigma _X\Vert \sigma ^2_Y\Vert \sigma _O\), \(m_X\) as \( X_1 \Vert X_2 \Vert \mathsf {ni} _X\) and \(m_Y\) as \(Y \Vert \mathsf {ni} _Y\), \(\mathsf {APK} \) as \((\varLambda _{i}.\mathsf {PK})_{i=1}^n\), parse each \(\varLambda _{i}.\mathsf {PK} \) as \((\varLambda _{i}.\mathsf {pk},\varLambda _{i}.\mathsf {ni})\), each \( \varLambda _{i}.t \) as \((\varLambda _{i}.t_1,\)\(\varLambda _{i}.t_2)\) and verify the non-interactive proof of knowledge: \(\mathsf {NIPoKver} \)\(((g_1,\varLambda _{i}.\mathsf {pk}, (X_1,Y), \varLambda _{i}.t_1),\varLambda _{i}.t_2)\); if any verification fails, the \(\mathsf {Open}\) algorithm returns \( \bot \). Compute and return \(\mathsf {k} \leftarrow \prod _{i=1}^n (\varLambda _{i}.t_1)\).

Informally, each authority \( \varLambda _{i} \) generates a trapdoor \( \varLambda _{i}.t_{1} = e(X_1,Y)^ {\varLambda _{i}.\mathsf {SK}} \) as well as a proof \(\varLambda _{i}.t_{2}\) ensuring that \( \varLambda _{i}.t_{1}\) has been generated correctly, using the same \(\varLambda _{i}.\mathsf {SK} \) associated at key-generation with that authority (i.e., \(\log _{g_1}(\varLambda _{i}.\mathsf {pk}) = \log _{e(X_1,Y)}(\varLambda _{i}.t_{1})\)). The session key is recovered by multiplying all the trapdoors from \( \varLambda _{1}.t_{1} \) to \( \varLambda _{n}.t_{1} \), i.e., \( \prod _{i=1}^n \varLambda _{i}.t_{1} = \prod _{i=1}^n e(X_1,Y)^ {\varLambda _{i}.\mathsf {SK}} = \prod _{i=1}^ne(g_1^x,g_2^{ y})^{\varLambda _{i}.\mathsf {SK}} = e(g_1,g_2^{xy})^{\sum _{i=1}^n \varLambda _{i}.\mathsf {SK}} = e(g_1^{\sum _{i=1}^n\varLambda _{i}.\mathsf {SK}},g_2^{x\cdot y}) = e(\prod _{i=1}^n\varLambda _{i}.\mathsf {pk},X_2)^y\). This is indeed Bob’s key, proving correctness with respect to LI.
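The opening equation can be checked in the same illustrative exponent-tracked "pairing" model used earlier (our own insecure simplification, not the paper's implementation): each authority's first trapdoor component is \(e(X_1,Y)^{\varLambda_i.\mathsf{SK}}\), and only the product of all n of them reconstructs the session key.

```python
import secrets

# Same toy model: e(g1^a, g2^b) = gT^(a*b) in the order-q subgroup of
# Z_P^* (q = 1019, P = 2039, gT = 4). Insecure; correctness check only.
q, P, gT = 1019, 2039, 4

n = 2
auth_sk = [secrets.randbelow(q - 1) + 1 for _ in range(n)]
x = secrets.randbelow(q - 1) + 1            # Alice: X1 = g1^x
y = secrets.randbelow(q - 1) + 1            # Bob:   Y  = g2^y

# The session key k = e(prod_i pk_i, X2)^y = gT^(sum_i sk_i * x * y).
k = pow(gT, (sum(auth_sk) * x * y) % q, P)

# TDGen, per authority: t1 = e(X1, Y)^sk_i = gT^(x*y*sk_i).
e_x1_y = pow(gT, (x * y) % q, P)
trapdoors = [pow(e_x1_y, sk, P) for sk in auth_sk]

# Open: multiply all n trapdoor components to recover the key.
opened = 1
for t1 in trapdoors:
    opened = (opened * t1) % P
assert opened == k                          # Open recovers Bob's key
```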

Complexity. Due to the NIZK verifications and to computing the product of the n authority public keys, the AKE protocol runs in time linear in n. If, however, the authorities do not change between protocol runs (as is likely in practice), these values can be verified and computed once and cached, making the per-session runtime constant in n.

6 Security

We present our main theorem in this section, provide proof sketches in Appendix B, and give full proofs in the full version [7].

In the following, let \(\mathsf {sid}:= X_2 \Vert Y \), and define \(\mathsf {IdentifySession} (\mathsf {sst}, \pi _{\mathsf {P}}^{j})\) for some party \(\mathsf {P} \) and integer j as follows. Parsing \(\mathsf {sst} \) as:

$$\mathsf {A} \Vert \mathsf {B} \Vert (\varLambda _{i})_{i=1}^{n}\Vert X_1 \Vert X_2 \Vert \mathsf {ni} _X \Vert Y \Vert \mathsf {ni} _Y \Vert \sigma ^1_Y\Vert \sigma _X\Vert \sigma ^2_Y\Vert \sigma _O,$$

the algorithm \(\mathsf {IdentifySession} (\mathsf {sst}, \pi _{\mathsf {P}}^{j})\) returns 1 iff:

  • \( X_2 \Vert Y = \pi _{\mathsf {P}}^{j}.\mathsf {sid} \),

  • if \(\pi _{\mathsf {P}}^{j}\) plays the role of Alice then \(\mathsf {P} =\mathsf {A} \) and \( \pi _{\mathsf {P}}^{j}.\mathsf {{PID}} = \mathsf {B} \), else \(\pi _{\mathsf {P}}^{j}.\mathsf {{PID}} =\mathsf {A} \) and \( \mathsf {P} = \mathsf {B} \), and

  • \(\pi _{\mathsf {P}}^{j}.\mathsf {{AID}} = (\varLambda _{i})_{i=1}^{n}\).

Theorem 1

Suppose \(\mathsf {LIKE}\) is instantiated with an EUF-CMA signature scheme \(\mathsf {DS}\), extractable and zero-knowledge proofs/signature of knowledge, and let \(\mathsf {pp}\) be chosen such that the BDDH assumption holds. Then:

  • Our protocol is key-secure and, for all PPT adversaries \(\mathcal {A} \) making at most \(q_\mathsf {r}\) queries to \(\mathsf {{Register}} \), \(q_\mathsf {ns}\) queries to \(\mathsf {{NewSession}} \), \(q_\mathsf {s}\) queries to \(\mathsf {{Send}} \) and \(q_\mathsf {t}\) queries to \(\mathsf {{RevealTD}} \):

    $$\begin{aligned} \mathsf {Adv}_{\mathsf {LIKE},\mathcal {A} }^{\mathsf {KS}}(\lambda ) \le \frac{q^2_\mathsf {s}}{p} + q_\mathsf {ns} \cdot q^2_\mathsf {r} \cdot \left( \mathsf {Adv}_{\mathsf {DS}}^{\mathsf {EUF\text {-}CMA}}(\lambda ) + q_\mathsf {ns} \cdot q_\mathsf {r} \cdot \right. \\ \left. \left( (2 \cdot q_\mathsf {t} + q_\mathsf {s}) \cdot \epsilon _{\mathsf {SoK}}(\lambda ) + q_\mathsf {r} \cdot \epsilon _{\mathsf {NIPoK}}(\lambda ) + \mathsf {Adv}_{}^{\mathsf {BDDH}}(\lambda ) \right) \right) . \end{aligned}$$
  • Our protocol is non-frameable, and for all PPT adversaries \(\mathcal {A}\) making at most \(q_\mathsf {r}\) queries to \(\mathsf {{Register}} \), we have: \( \mathsf {Adv}_{\mathsf {LIKE},\mathcal {A} }^{\mathsf {NF}}(\lambda ) \le q_\mathsf {r}\cdot \mathsf {Adv}_{\mathsf {DS}}^{\mathsf {EUF\text {-}CMA}}(\lambda ) \).

  • Our protocol is honest-operator secure; for all PPT adversaries \(\mathcal {A} \) making at most \(q_\mathsf {r}\) queries to \(\mathsf {{Register}} \), the advantage \(\mathsf {Adv}_{\mathsf {LIKE},\mathcal {A} {}}^{\mathsf {HO}}(\lambda )\) is bounded as stated in the full version [7].

Important ingredients in our protocol are the proofs and signatures of knowledge. We start with the latter. Without Alice’s and Bob’s signatures of knowledge, given honestly-generated \( X_1, X_2, Y\) from the challenge session, \(\mathcal {A}\) could choose random values r, s, run the protocol using \(X'_1\leftarrow X_1^r\), \(X'_2\leftarrow X_2^r\), and \(Y'\leftarrow Y^s\) for two corrupted users and the same authorities, obtain \(\mathsf {sst} \), and run \(\mathsf {{RevealTD}} {}\) on \(\mathsf {sst} \) for each authority. Using \(\mathsf {Open} \), the adversary obtains \( \mathsf {k} ' = \prod _{i=1}^n e(X'_1,Y')^{\varLambda _{i}.\mathsf {SK}} = \left( \prod _{i=1}^n e(X_1,Y)^{\varLambda _{i}.\mathsf {SK}} \right) ^{r\cdot s}\), and can compute the targeted key as \( \mathsf {k} =(\mathsf {k} ')^{1/(r\cdot s)}\). This attack fails if \(\mathcal {A}\) must prove knowledge of the discrete logarithms of \(X'_2\) and \(Y'\). Moreover, \(\mathcal {A}\) cannot reuse \(X_2\) and Y together with the signature of the honest user, since that signature includes the identities of the users in its message.

Each authority must also prove knowledge of its secret key. Without this proof, an authority \(\varLambda _{j} \) could pick a random r and set: \(\varLambda _{j}.\mathsf {pk} = g_1^r/\left( { \prod _{i=1;i\not = j}^n {\varLambda _{i}.\mathsf {pk}}}\right) .\) In this case, Alice’s session key becomes \( \mathsf {k} = e\left( \prod \limits _{i=1}^n \varLambda _{i}.\mathsf {pk},Y \right) ^x = e(g_1^r,g_2^y)^x=e(X_1,Y)^r,\) which the rogue authority can compute on its own from the public values \(X_1\) and Y. This attack is impossible if the authority must prove knowledge of its secret key.
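The rogue-key attack can be replayed in the same toy exponent-tracked pairing model (an illustrative simplification of ours): the rogue public key cancels the honest keys, so the product of all public keys collapses to \(g_1^r\) and the session key becomes \(e(X_1,Y)^r\), computable from public values and r alone.

```python
import secrets

# Toy model again: e(g1^a, g2^b) = gT^(a*b) in the order-q subgroup of
# Z_P^* (q = 1019, P = 2039, gT = 4). We replay the rogue-key attack
# that the authorities' NIPoK of their secret keys rules out.
q, P, gT = 1019, 2039, 4

honest_sk = [secrets.randbelow(q - 1) + 1 for _ in range(2)]
r = secrets.randbelow(q - 1) + 1            # rogue authority's chosen value

# Rogue key pk_j = g1^r / prod(honest pks): its exponent is r - sum(honest),
# so the product of ALL public keys collapses to g1^r.
rogue_exp = (r - sum(honest_sk)) % q
pk_prod_exp = (sum(honest_sk) + rogue_exp) % q
assert pk_prod_exp == r % q

x = secrets.randbelow(q - 1) + 1            # Alice's secret
y = secrets.randbelow(q - 1) + 1            # Bob's secret

k = pow(gT, (pk_prod_exp * x * y) % q, P)     # Alice's session key
k_rogue = pow(pow(gT, (x * y) % q, P), r, P)  # e(X1, Y)^r, public values + r
assert k == k_rogue   # the rogue authority recovers the key by itself
```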

Finally, each authority must prove that the trapdoor is well-formed. This is essential for the HO property: otherwise, the authority can output a fake trapdoor, distorting the output of the algorithm \(\mathsf {Open} \).

7 Proof-of-Concept Implementation

We provide a proof-of-concept C implementation of \(\mathsf {LIKE}\), which runs the entire primitive from setup to key-recovery, but omits network exchanges. The implementation features two authorities, Alice, Bob, and a single operator (since both operators would have to perform the same computations). The choice of two authorities was arbitrary: one for the justice system, and another for law-enforcement. More authorities would translate into a slightly higher runtime for the endpoints (one exponentiation per added authority), and a higher runtime for LI, depending on which authorities may view the session key (see also the full version [7]). However, even in the worst case the added complexity only involves more multiplications and, potentially, the establishment of a secure multiparty channel. We measure the time-complexity of various operations during key exchange and recovery, and provide averaged results in Fig. 2.

Setup. All tests are done on a Debian GNU/Linux 10 (buster) machine with an AMD Ryzen 5 1600 Six-Core Processor and 8GB RAM. We use the Ate pairing [43] over the BN462 Barreto-Naehrig curve [29] with 128-bit security, thus following the recommendations of [9] for pairing curve parameters. We use the base points described in [29] as the generators of groups \(\mathbb {G}_{1}\) and \(\mathbb {G}_{2}\).

For the signature scheme we use ed25519 [14], with SHA-256 as our hash function, and we implement the signatures/proofs of knowledge as explained in Sect. 5. We use mcl [1] for our elliptic-curve and pairing computations, and OpenSSL for ed25519 and SHA-256.

Results. Figure 2 shows our results, averaged over 5000 protocol runs. The protocol steps involving the authorities (trapdoor generation/opening) have complexity linear in the number of authorities n (\(n=2\) in our case), because all trapdoors are necessary to open, and opening is linear in n. In the figure, A, B, and O denote the total runtimes of Alice, Bob, and the operator, respectively, during AKE (omitting pre-computation). Verification, TDGen, and Open represent the runtimes of those algorithms; the last two could be improved through parallelization. The runtime given for TDGen is the cumulative time for the two authorities we consider; each authority thus runs for about half that value. For reference we also include the runtime of a single pairing, since it is our most expensive computation.

Fig. 2. Average CPU time (in milliseconds) for 5000 trials.

Our results above are promising, but not yet sufficient for protocol deployment. An immediate next step would be to implement the protocol on a mobile phone. Stronger security might even require the long-term secrets of Alice and Bob to be stored in a secure element, such as the SIM card, possibly delegating some computations to the phone as described in Sect. 5. Moreover, our protocol (and \(\mathsf {LIKE}\) at large) would require further testing and standardization prior to deployment.

8 Conclusion

In this paper we bridge the gap between privacy and fine-grained lawful interception by introducing \(\mathsf {LIKE}\), a new primitive for channel establishment that is secure against everyone except Alice, Bob, and a collusion of all n legitimate authorities. In addition, users cannot be framed for wrongdoing, even by a collusion of all the authorities and operators. Finally, both the operator and the authorities are guaranteed that the LI procedure reveals the key that Alice and Bob should have computed in the given session.

We instantiate \(\mathsf {LIKE}\) using efficient NIZK proofs, signatures of knowledge, and digital signatures. As our proof-of-concept implementation also demonstrates, the protocol is fast, and even the most expensive computations can be delegated from a secure element to the mobile phone.

So far, our protocol only works for domestic communications, with both operators subject to an identical authority set. An extension to multiple authority sets is not obvious and we thus leave this as an open question.