1 Introduction

In standard communication systems, protocols are designed to provide messaging services with end-to-end encryption. Essentially, secure communication reduces to continuously exchanging keys, because each message requires a new key. In bidirectional two-party secure communication, participants alternate roles as senders and receivers. Modern instant messaging protocols are essentially asynchronous: in a two-party communication, messages should be transmitted (or the key exchange should proceed) even when the counterpart is offline. Moreover, being able to send payload data without requiring any online exchange is a major design goal called zero round-trip time (0-RTT). Finally, the moment when a participant wants to send a message is undetermined, meaning that participants take random roles (sender or receiver) without any synchronization; they could even send messages at the same time.

Even though many systems were designed for the privacy of their users, they rapidly faced security vulnerabilities caused by compromises of the participants' states. In this work, compromising a participant means obtaining some information about its internal state; we call this an exposure. The desired security notion is that compromised information should not reveal more than what trivial attacks already allow. For instance, the compromised state of a participant should not allow decryption of messages exchanged in the past. This is called forward secrecy. Typically, forward secrecy is obtained by updating states with a one-way function and deleting old entries [13, 14]. A popular mechanical device that allows forward movement but prevents moving backward is called a ratchet. In the context of secure communication, a ratchet-like action is achieved by using fresh randomness in every state update, so that a compromised state is not sufficient for the decryption of any future communication either. This is called future secrecy, backward secrecy, post-compromise security, or even self-healing. One thesis of the present work is that healing after an active attack involving a forgery is not a desirable property: we show that it implies insecurity. If, after one participant is compromised and impersonated, the communication self-heals, then some adversary can mount a trivial attack which goes undetected. We also show other insecure cases. Hence, we rather mandate that communication be cut after active attacks.
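As an illustration, the one-way state update behind forward secrecy, and the extra randomness behind post-compromise security, can be sketched as follows. This is a toy Python sketch using SHA-256 as the one-way function; the domain-separation labels and key-derivation details are our own assumptions, not any concrete protocol:

```python
import hashlib
import os

def ratchet(state: bytes, coins: bytes = b"") -> tuple:
    """One ratchet step: derive a message key, then overwrite the state
    with a one-way update.  Mixing in fresh coins gives a ratchet-like
    post-compromise effect; with coins = b"" only forward secrecy is
    illustrated (an exposed old state cannot be recomputed from the new
    one, but all future states are predictable)."""
    key = hashlib.sha256(b"key" + state).digest()
    new_state = hashlib.sha256(b"chain" + state + coins).digest()
    return new_state, key

state = os.urandom(32)
state, k1 = ratchet(state)                  # deterministic update: forward secrecy only
state, k2 = ratchet(state, os.urandom(32))  # randomized update: ratcheting
```

An adversary exposing `state` after the second call learns neither `k1` nor `k2`, since both the hash and the deletion of the old state are one-way.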

Previous Work. The security of key exchange was studied by many authors. The prominent models are the CK and eCK models [4, 12].

Techniques for ratcheting first appeared in real-life protocols, starting with the Off-the-Record (OTR) communication system by Borisov et al. [3]. The Signal protocol designed by Open Whisper Systems [16] later gained a lot of interest from messaging companies. Today, the WhatsApp messaging application, which uses the Signal protocol, has reached billions of users worldwide [18]. A broad survey of the various techniques and terminologies was made at S&P 2015 by Unger et al. [17]. At CSF 2016, Cohn-Gordon et al. [6] studied bidirectional ratcheted communication and proposed a protocol; however, their protocol does not offer 0-RTT and requires synchronized roles. At EuroS&P 2017, Cohn-Gordon et al. [5] formally studied Signal.

0-RTT communication with forward secrecy was achieved using puncturable encryption by Günther et al. at EUROCRYPT 2017 [9]. Later on, at EUROCRYPT 2018, Derler et al. made it reasonably practical by using Bloom filters [7].

At CRYPTO 2017, Bellare et al. [2] gave a secure ratcheting key exchange protocol. Their protocol is unidirectional and does not allow receiver exposure.

At CRYPTO 2018, Poettering and Rösler (PR) [15] studied bidirectional asynchronous ratcheted key agreement and presented a protocol which is secure in the random oracle model. Their solution further relies on hierarchical identity-based encryption (HIBE) but offers stronger security than required for practical usage, leaving ample room for improving the protocol. At the same conference, Jaeger and Stepanovs (JS) [10] obtained similar results but focused on secure communication rather than key agreement. They proposed another protocol relying on HIBE. In both results, HIBE is used to construct encryption/signature schemes with key-update security. This is a rather new notion allowing forward secrecy, but it is expensive to achieve. In both cases, it was claimed that the depth of the HIBE remains really small. However, when participants are disconnected and keep sending several messages, the depth increases rapidly. Consequently, the HIBE needs unbounded depth.

At EUROCRYPT 2019, Jost, Maurer, and Mularczyk (JMM) [11] designed another ratcheting protocol which offers "near-optimal" security and does not use HIBE. Nevertheless, it still has a huge complexity: when messages alternate well (i.e., no participant sends two messages without receiving one in between), the total number of operations for processing the messages is linear in their number. However, when messages accumulate before alternating (for instance, because the participants are disconnected by the network), the complexity becomes quadratic. This is also the case for PR [15] and JS [10]. One advantage of the JMM protocol [11] is its resilience to random coin leakage, as discussed below.

At EUROCRYPT 2019, Alwen, Coretti, and Dodis (ACD) [1] designed two other ratcheting protocols aiming at instant decryption, i.e., the ability to decrypt even though some previous messages have not been received yet. This is closer to real-life protocols, but it comes with a potential threat: keys to decrypt undelivered messages are stored until those messages are delivered. Hence, the adversary can choose to hold messages back and decrypt them after a future state exposure. This prevents forward secrecy. Furthermore, unless the direction of communication changes (or more precisely, unless the epoch increases), their protocols are not really ratcheting, as no random coins are used to update the state. This weakens post-compromise security as well. In Table 1, we mark this weaker security accordingly (not to say "insecure" in the model we are interested in) because it is the best that can be obtained with immediate decryption. The lighter of the two protocols does not compete in the same category because it mostly uses symmetric cryptography. It is more efficient but offers lower security: corrupting the state of one participant allows impersonating either participant to the other and also decrypting the messages that either sends. Other protocols do not have this weakness. The second ACD protocol [1] (in the full version) uses asymmetric cryptography.

Some authors address the corruption of random coins in different ways. Bellare et al. [2] and JMM [11] allow leaking the random coins just after use. JS [10] allow leaking them just before use only. ACD [1] allow adversarially chosen random coins. In most of the protocols, revealing (or choosing) the random coins implies revealing some part of the new state, which allows decrypting incoming messages; it is thus comparable to a state exposure. JMM [11] offers better security, as revealing the random coins reveals the new state (and allows decryption) only when the previous state was already known.

Table 1. Comparison of protocols: complexity for exchanging messages in alternating or accumulating mode, with timings (in seconds) of comparable implementations and asymptotic complexities; and types of coin-leakage security ("state exposure" means that coin leakage implies a state exposure).

Our Contributions. We give a definition for a bidirectional asynchronous ratcheted key agreement along with its security properties. We start setting the stage with some definitions (such as the matching status), then identify all cases leading to trivial attacks. We split them into direct and indirect leakages. Then, we define security with a key-indistinguishability game (privacy). We also consider the resistance to forgery (impersonation) and the resistance to attacks which would heal after active attacks (unrecoverability). We use these two notions as building blocks to prove key indistinguishability. We finally construct a secure protocol. Our design choices are detailed below and compared to other papers.

  1.

    Simplicity. Contrary to previous work, we define security in a very comprehensive way by bringing all notions under the umbrella of a cleanness predicate which identifies and captures all trivial ways of attacking.

  2.

    Strong security. In line with previous works, the adversary in our model can see the entire communication between participants and controls the delivery. Of course, he can replace messages with anything. Scheduling communications is under the control of the adversary: the times when a participant sends or receives messages are decided by the adversary. Moreover, the adversary is capable of corrupting participants by exposing their internal data. We separate two types of exposures: the exposure of the state (which is kept in the internal machinery of a participant) and the exposure of the key (which is produced by the key agreement and given to an external protocol). This is because states are (normally) kept secure in our protocol, while the generated key is transferred to other applications which may leak for various reasons. We do not consider exposure of the random coins.

  3.

    Using the result of an exposure allows the adversary to be active, e.g., by impersonating the exposed participant. However, the adversary is not allowed to use exposures to mount a trivial attack. Identifying such trivial attacks is not easy. As a design goal, we choose not to forbid more than what the intuitive notion of ratcheting captures. We do forbid a bit more than PR [15] and JS [10] (which are considered to have "optimal" security) and than JMM [11] (which has "near-optimal" security), allowing lighter building blocks in exchange. Namely, we need no key-update primitives and have linear-time complexity in terms of the number of exchanged messages, even when the network is occasionally down. This translates into an important speedup factor, as shown in Table 1. We argue that this is a reasonable choice enabling ratchet security as we define it: barring trivial leakage, a message is private once its reception is acknowledged in a subsequent message from the receiver.

  4.

    Sequence integrity. We believe that duplex communication is reliably enforced by a lower-level protocol, which is assumed to handle non-malicious packet loss (e.g., by resend requests) and to reconstruct the correct sequence order. The only case we have to care about is when an adversary consistently prevents the delivery of a message. We choose to make the transmission of subsequent messages impossible under such an attack. In contrast, ACD [1] advocate immediate decryption, even when a message is missing. This lowers security, and we chose not to have it.

In the protocol, correctness implies that both participants generate the same keys. We define the notions of matching status, direct leakage, and indirect leakage. With these definitions, we aim to separate trivial attacks and trivial forgeries from the non-trivial cases. Direct and indirect leakages define when the adversary can trivially deduce a generated key due to the exposure of a participant, who can be either the key's owner (direct) or their counterpart (indirect).

We construct a secure protocol. We build our construction on top of a public-key cryptosystem and a signature scheme, and achieve strong security without key-update primitives or random oracles. We further show that even a weakly secure unidirectional scheme implies public-key encryption.

Notations. We have two characters: Alice and Bob. When \(P\) designates a participant, \(\overline{P}\) refers to \(P\)'s counterpart. We use the roles sender and receiver; the counterpart of the sender role is the receiver role and vice versa. When participants have exclusive roles (like in unidirectional cases), we call them the sender and the receiver. PPT stands for probabilistic polynomially bounded. Negligible means asymptotically smaller than the inverse of any polynomial in the security parameter.

Structure of the Paper. In Sect. 2, we define our protocol along with its correctness and security definitions. Section 3 proves that a simple unidirectional scheme implies public-key cryptography. In Sect. 4, we define the security notions of unforgeability and unrecoverability. In Sect. 5, we give our construction. Due to space limitations, some material was moved to the full version of this paper [8], including the definitions of the underlying primitives and the proofs of our results.

2 Bidirectional Asynchronous Ratcheted Communication

2.1 Definition and Correctness

Definition 1

(Ratcheted key agreement). A bidirectional asynchronous ratcheted key agreement consists of the following PPT algorithms:

  • Setup: This defines the common public parameters \(\mathsf {pp}\).

  • Key generation: This generates the secret key \(\mathsf {sk}_P\) and the public key \(\mathsf {pk}_P\) of a participant \(P\).

  • \(\mathsf {Init}(1^{\lambda },\mathsf {pp},\mathsf {sk}_P,\mathsf {pk}_{\overline{P}},P) \rightarrow \mathsf {st}_P\): This sets up the initial state of given his secret key and the public key of his counterpart.

  • Send: The sending algorithm inputs the current state \(\mathsf {st}_P\) of \(P\). It outputs a tuple with an updated state, a message to transmit, and a generated key.

  • Receive: The receiving algorithm inputs the current state \(\mathsf {st}_P\) and an incoming message. It outputs a triple consisting of a flag indicating acceptance or rejection of the message, an updated state, and a key.
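For concreteness, the five algorithms can be summarized as an interface sketch. This is a hypothetical Python rendering: the class name, method names, and type aliases are our own choices, as Definition 1 only spells out the \(\mathsf {Init}\) signature:

```python
from typing import Tuple

# Hypothetical type aliases for the values of Definition 1.
State, Msg, Key, Flag = bytes, bytes, bytes, bool

class RatchetedKeyAgreement:
    """Interface sketch of the five algorithms of Definition 1."""
    def setup(self, lam: int) -> bytes:
        """Return the common public parameters pp."""
        raise NotImplementedError
    def gen(self, lam: int, pp: bytes) -> Tuple[bytes, bytes]:
        """Return a participant's key pair (sk, pk)."""
        raise NotImplementedError
    def init(self, lam: int, pp: bytes, sk_P: bytes, pk_Pbar: bytes, P: str) -> State:
        """Return the initial state st_P of participant P."""
        raise NotImplementedError
    def send(self, st: State) -> Tuple[State, Msg, Key]:
        """Return an updated state, a message to transmit, and a key."""
        raise NotImplementedError
    def receive(self, st: State, msg: Msg) -> Tuple[Flag, State, Key]:
        """Return an accept/reject flag, an updated state, and a key."""
        raise NotImplementedError
```

A concrete scheme subclasses this interface; the games below only interact with a scheme through these five calls.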

For convenience, we define the following initialization procedure for all games. It returns the initial states as well as some publicly available information.

[Figure: the initialization procedure.]

Initialization is splittable in the sense that private keys can be generated by their holders with no need to rely on an authority (except maybe for the authentication of the public keys). Other protocols from the literature assume a trusted initialization.

We consider bidirectional asynchronous communications. In Fig. 1, Alice and Bob run some sequences of sending and receiving operations without any prior agreement. Their time scales are different; that is, Alice and Bob run the algorithms asynchronously. We consider a notion of time relative to a participant \(P\): formally, the time for \(P\) is the number of elementary steps that \(P\) executed since the beginning of the game. We assume no common clock. However, events occur in a game, and we may have to compare the times of two different participants by reference to the scheduling of the game; e.g., we could say that some time for \(P\) happens before some time for \(\overline{P}\). Normally, scheduling is under the control of the adversary, except in the correctness game, in which there is no adversary. There, we define the scheduling by a sequence of actions; reading the sequence tells who executes the next step of the protocol.

The protocol also uses random roles: Alice and Bob can both send and receive messages. They take their roles (sender or receiver) in a sequence, but the role sequences of Alice and Bob are not necessarily synchronized. Sending and receiving are refined by the oracle call defined in Fig. 2.

Correctness. We say that a ratcheted communication protocol functions correctly if the receiver accepts the update information and generates the same key as its counterpart. Correctness implies that the keys received by one participant have been generated in the same order as the keys sent by the other. We formally define the correctness game in Fig. 2, in which we maintain several variables: for each participant, one list keeps the keys generated when sending and another keeps the keys generated when receiving; similarly, one list keeps the messages sent and another keeps the messages received and accepted. The sequences only keep values for which a key was actually generated.

Each of these variables is relative to a participant \(P\), and we consider its value at a given time \(t\) for \(P\): for instance, the sequence of messages which were received and accepted by \(P\) by time \(t\).

Fig. 1. The message exchange between Alice and Bob.

We initialize the two participants in the game in Fig. 2. The scheduling is defined by a sequence of tuples, each saying either that a given participant must send or that it must receive. In this game, communication between the participants uses a waiting queue for messages in each direction. Each participant has a queue of incoming messages and pulls them in the order in which they were pushed. Messages sent by a participant are buffered in the queue of the counterpart.
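The scheduling and queueing mechanics of the correctness game can be sketched as follows. This Python toy uses a symmetric hash chain as a stand-in scheme (no security is claimed for it); it only exercises the per-direction queues and the prefix condition of the game:

```python
import hashlib
from collections import deque

H = lambda *parts: hashlib.sha256(b"|".join(parts)).digest()

# Toy stand-in scheme: each direction is a hash chain seeded with a
# shared secret, so received keys equal sent keys, in order.
def toy_send(chain):
    key = H(b"key", chain)
    msg = H(b"msg", chain)               # what travels on the wire
    return H(b"upd", chain), msg, key    # (new chain, message, key)

def toy_receive(chain, msg):
    acc = (msg == H(b"msg", chain))      # reject out-of-order/forged messages
    return acc, H(b"upd", chain), H(b"key", chain)

def correctness_game(schedule, seed=b"seed"):
    """Run a scheduling sequence of (participant, role) actions.
    Returns 1 if correctness is violated, 0 otherwise."""
    chains = {("A", "snd"): H(b"AB", seed), ("B", "rcv"): H(b"AB", seed),
              ("B", "snd"): H(b"BA", seed), ("A", "rcv"): H(b"BA", seed)}
    queue = {"A": deque(), "B": deque()}   # incoming queue per participant
    sent = {"A": [], "B": []}
    received = {"A": [], "B": []}
    for P, role in schedule:
        Q = "B" if P == "A" else "A"
        if role == "snd":
            chains[(P, "snd")], msg, k = toy_send(chains[(P, "snd")])
            queue[Q].append(msg)           # buffer in the counterpart's queue
            sent[P].append(k)
        else:
            if not queue[P]:
                continue                   # nothing to pull
            acc, chains[(P, "rcv")], k = toy_receive(chains[(P, "rcv")],
                                                     queue[P].popleft())
            if not acc:
                return 1
            received[P].append(k)
    for P, Q in (("A", "B"), ("B", "A")):  # received keys must be a prefix of sent keys
        if received[P] != sent[Q][:len(received[P])]:
            return 1
    return 0
```

Any honest schedule, including accumulating ones where a participant sends several messages in a row, makes the game return 0.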

Definition 2

(Correctness). We say that the protocol is correct if for all scheduling sequences, the game of Fig. 2 never returns 1. Namely, for each participant, the sequence of received keys is always a prefix of the counterpart's sequence of sent keys, and each receiving call accepts.

Fig. 2. The correctness game.

Security. We model our security notion with an active adversary who can access some of the states of Alice or Bob, along with their secret keys, enabling the adversary to act both as a sender and as a receiver. For simplicity, we have only Alice and Bob as participants. (Models with more participants would be asymptotically equivalent.) We focus on three main security notions: key indistinguishability (denoted \(\mathsf {KIND}\)) under the compromise of states or keys, unforgeability of information which would be accepted, and recovery from impersonation, by which the two participants would restore secure communication without noticing a (trivial) impersonation resulting from a state exposure. A challenge in these notions is to eliminate the trivial attacks. Unforgeability and unrecoverability will be useful to prove \(\mathsf {KIND}\) security.

2.2 Security

The adversary can access four oracles: the message exchange oracle, the state exposure oracle, the key exposure oracle, and the challenge oracle.

  • Message exchange. This is essentially the message exchange procedure defined in Fig. 2. The adversary calls it with three inputs: a participant \(P\); a role for \(P\) (sender or receiver); and a message if the role is receiver. The adversary gets the produced message (for the sender role) or the acceptance flag (for the receiver role) in return.

  • State exposure. The adversary can expose the state of Alice or Bob. The oracle takes a participant \(P\) as input and returns the full state of \(P\).

  • Key exposure. The adversary can expose the generated key by calling this oracle. On input a participant \(P\), it returns the last key generated by \(P\). If no key was generated yet, an error is returned.

  • Challenge. This oracle can be called only once. It returns a challenge key which, depending on the challenge bit \(b\), is either generated uniformly at random or given as the last generated key of a participant specified as input. The oracle cannot be queried if no key was generated yet.

We specifically separate key exposure from state exposure because the key generated by the protocol will be used by an external process which may leak it. Thus, key exposures can be more frequent than state exposures, but they harm security less.

To define security, we avoid trivial attacks. Capturing the trivial cases in a broad sense requires a new set of definitions. All of them are intuitive.

Intuitively, \(P\) is in a matching status at a given time if his state does not depend on an active attack (i.e., it could result from a correctness game).

Definition 3

(Matching status). We say that \(P\) is in a matching status at time \(t\) for \(P\) if 1. at any moment of the game before time \(t\) for \(P\), the sequence of messages received and accepted by \(P\) is a prefix of the sequence of messages sent by \(\overline{P}\) — this defines the time \(t'\) for \(\overline{P}\) when \(\overline{P}\) sent the last message received by \(P\); 2. at any moment of the game before time \(t'\) for \(\overline{P}\), the sequence of messages received and accepted by \(\overline{P}\) is a prefix of the sequence of messages sent by \(P\). We further say that time \(t\) for \(P\) originates from time \(t'\) for \(\overline{P}\).

The first condition clearly states that each message received (and accepted) by \(P\) was sent before by the counterpart of \(P\), in the same order, without any loss in between. The second condition similarly verifies that those messages from \(\overline{P}\) only depend on information coming from \(P\). In Fig. 1, Bob is in a matching status with Alice because he receives the messages in the exact order in which Alice sent them. In general, as long as no adversary switches the order of messages or successfully creates fake messages for either party, the participants are always in a matching status.
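The prefix conditions of Definition 3 amount to a simple check on the transcripts, which can be sketched as follows (a minimal Python sketch over abstract message lists):

```python
def is_prefix(received, sent):
    """Condition of Definition 3: every message accepted by a participant
    was sent earlier by the counterpart, in order, with no gaps."""
    return len(received) <= len(sent) and sent[:len(received)] == received

def matching_status(recv_P, sent_Pbar_upto, recv_Pbar_upto, sent_P):
    """Both prefix conditions together: P's received messages match what
    Pbar sent, and (up to the originating time) Pbar's received messages
    match what P sent."""
    return is_prefix(recv_P, sent_Pbar_upto) and is_prefix(recv_Pbar_upto, sent_P)
```

For instance, a reordered or dropped message breaks the prefix relation, which is exactly what ends the matching status.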

The key exchange literature often defines a notion of partnering which is simpler. Here, asynchronous random roles make it more complicated.

Here is an easy property of the notion of matching status.

Lemma 4

If \(P\) is in a matching status at time \(t\), then \(P\) is also in a matching status at any earlier time. Similarly, if \(P\) is in a matching status at time \(t\), and \(t\) for \(P\) originates from \(t'\) for \(\overline{P}\), then \(\overline{P}\) is in a matching status at time \(t'\).

Definition 5

(Forgery). Given a participant \(P\) in a game, we say that a message received by \(P\) is a forgery if, just before \(P\) received it, \(P\) was in a matching status, but no longer after receiving it.

In a matching status, any message received by \(P\) must correspond to a message sent by \(\overline{P}\), and the sequences must match. This motivates the following notion.

Definition 6

(Corresponding calls). Let \(P\) be a participant. We consider a receiving call by \(P\) which accepts a message. We say that this receiving call corresponds to a sending call by \(\overline{P}\) if the accepted message is the sent one and \(P\) is in a matching status at the time of the accepting call.

Lemma 7

In a correct protocol, two corresponding sending and receiving calls generate the same key.

Definition 8

(Ratcheting period). A maximal time interval during which a participant makes no sending call is called a ratcheting period of this participant.

In Fig. 1, the intervals and are ratcheting periods.

We now define when the adversary can trivially obtain a key generated by \(P\) due to an exposure. We distinguish the case when the exposure was done on \(P\) (direct leakage) from the case when it was done on \(\overline{P}\) (indirect leakage).

Definition 9

(Direct leakage). Let \(t\) be a time and \(P\) be a participant. We say that \((P,t)\) has a direct leakage if one of the following conditions is satisfied:

  • There is an at a time such that the last call which is executed by before time and the last call which is executed by before time are the same.

  • is in a matching status and there exists and such that time originates from time ; time originates from time ; there is one at time ; there is one at time ; and there is no between time and time .

[Figure: illustration of the second case of direct leakage.]

In the first case, it is clear that the key exposure gives the key. In the second case (in the figure), the state which leaks allows the adversary to simulate all deterministic receiving calls (by skipping all sending calls) and to compute the key. The reason why we can allow the adversary to skip the sending calls is that they produce messages which are supposed to be delivered to the counterpart after the considered time, so they have no impact on the key.

Consider Fig. 1. Suppose is in between time and . According to our definition and the last call is at time . It is a , thus the second case cannot apply. The next call is at time . In this case, has a direct leakage if there is a key exposure of Alice between and .

Suppose now that . We have , the last call is a , it is at time , and originates from time which itself originates from the origin time for . We say that has a direct leakage if there is a key exposure between or a state exposure of Bob before time . Indeed, with this last state exposure, the adversary can ignore all and simulate all to derive .

Definition 10

(Indirect leakage). We consider a time \(t\) and a participant \(P\). Let \(t'\) be the time of the last successful message exchange call of \(P\) and consider its input role. We say that \((P,t)\) has an indirect leakage if \(P\) is in a matching status at time \(t\) and one of the following conditions is satisfied:

  • There exists a corresponding to that and making a which has a direct leakage for .

  • There exists and such that is in a matching status at time , originates from , originates from , there is one at time , and .

[Figure: illustration of the second case of indirect leakage.]

In the first case, the key is also computed by \(\overline{P}\) and leaks from there. The second case (in the figure) is more complicated: it corresponds to an adversary who gets the internal state of one participant by an exposure, then simulates the subsequent calls with the messages exchanged until the one sent at the considered time, ignoring the other calls, to recover the key.

For example, let be a time between and in Fig. 1. We take . The last call is at time , it is a and corresponds to a at time , but originates from time . We say that has an indirect leakage for if there exists a direct leakage for at a time between and (first condition) or there exists a call at a time (after time ), originating from a time before time , so (second condition). In the latter case, the adversary can simulate with the updates sent at time and to derive the key .

Exposing the state of a participant gives certain advantages to the attacker and makes trivial attacks possible. In our security game, we avoid those attack scenarios. In the following lemma, we show that direct and indirect leakages capture the times when the adversary can trivially win. The proof is straightforward.

Lemma 11

(Trivial attacks). Assume that the protocol is correct. For any participant \(P\) and time \(t\), if \((P,t)\) has a direct or indirect leakage, the adversary can compute the corresponding key.

So far, we mostly focused on matching-status cases, but there could be situations with forgeries. Some are unavoidable; we call them trivial forgeries.

Definition 12

(Trivial forgery). Let \(\mathsf {msg}\) be a forgery received by \(P\). At the time just before the receiving call, \(P\) was in a matching status; we assume that this time for \(P\) originates from time \(t'\) for \(\overline{P}\). If there is a state exposure of \(\overline{P}\) during the ratcheting period of \(\overline{P}\) starting at time \(t'\), we say that \(\mathsf {msg}\) is a trivial forgery.

We define the security game in Fig. 3. Essentially, the adversary plays with all oracles. At some point, he makes one challenge call which returns either the same result as a key exposure (for one value of the challenge bit \(b\)) or some random value (for the other value of \(b\)). The goal of the adversary is to guess \(b\). The challenge call can be made only once, and it defines the participant and the time at which it is made. It also defines the last message which was used (either sent or received) to carry the challenged key from the sender to the receiver. This call is not allowed at the beginning of the game, when no key was generated yet. The adversary is not allowed to make a trivial attack, as defined by a cleanness predicate appearing in Step 6 of the game in Fig. 3. Identifying the appropriate cleanness predicate is not easy: it must clearly forbid trivial attacks but also allow efficient protocols. In what follows, we use the following predicates:

  • : has no direct or indirect leakage.

  • : received no trivial forgery until has seen .

    (This implies that is not a trivial forgery. It also implies that if never sees , then received no trivial forgery at all.)

  • : received no forgery until has seen .

  • : the message carrying the challenged key was sent by a participant, then received and accepted by the counterpart, then some subsequent message was sent by the counterpart, then this message was received and accepted by the original sender.

    (Here, the original sender could be the challenged participant or his counterpart. This accounts for the receipt of the message being acknowledged through a response.)

  • : there is no and no query. ( is the receiver.)

Lemma 11 says that the adopted cleanness predicate must imply the no-leakage predicate in all considered games; otherwise, no security is possible. It is however not sufficient, as it hardly covers trivial attacks involving forgeries.

The acknowledgment-based predicate expresses that any acknowledged sent message is secure. Another way to say it: a key generated by the participant who starts a round trip must be safe. This is the notion of healing by ratcheting. Intuitively, the resulting security notion is fair enough.

Bellare et al. [2] consider the unidirectional setting with a dedicated cleanness predicate. Other papers like PR [15] and JS [10] implicitly use a permissive cleanness predicate. They show that this is sufficient to build secure protocols, but it is probably not the minimal cleanness predicate. (It is nevertheless called "optimal".) JMM [11] exclude cases where a participant received a (trivial) forgery and then had a state exposure before the challenge message was seen. Actually, they use a cleanness predicate ("near-optimal" security) which is somewhere between ours and the PR/JS one: our cleanness implies the JMM cleanness, which itself implies the PR/JS cleanness.

In our construction ("sub-optimal"), we use a cleanness predicate based on trivial forgeries. However, in Sect. 4.1, we define unforgeability, which implies that security with the predicate forbidding only trivial forgeries and security with the predicate forbidding all forgeries are equivalent. (See Theorem 16.) One drawback is that this forbids more than the PR/JS-style security. The advantage is that we can achieve security without key-update primitives. We will prove in Theorem 19 that this security is enough to achieve security with the acknowledgment-based predicate, thanks to the unrecoverability security which we define in Sect. 4.2. Thus, our cleanness notion is fair enough.

Fig. 3. The key indistinguishability game. (The message exchange oracle is defined in Fig. 2.)

Definition 13

(\(\mathsf {KIND}\) security). Let \(C_\mathsf {clean}\) be a cleanness predicate. We consider the game of Fig. 3. We say that the ratcheted key agreement is \(C_\mathsf {clean}\)-\(\mathsf {KIND}\)-secure if for any PPT adversary, the advantage

$$ \mathsf {Adv}_\mathcal {A}(1^\lambda )= \left| \Pr \left[ \mathsf {KIND}_{0, C_\mathsf {clean}}^{\mathcal {A}}(1^\lambda ) \rightarrow 1 \right] - \Pr \left[ \mathsf {KIND}_{1, C_\mathsf {clean}}^{\mathcal {A}}(1^\lambda ) \rightarrow 1 \right] \right| $$

of \(\mathcal {A}\) in the \(\mathsf {KIND}\) security game is negligible.

3 Unidirectional Ratcheted Key Exchange Implies Public-Key Encryption

We now prove that a weakly secure unidirectional asynchronous ratcheted key exchange (a straightforward variant of our primitive in which messages can only be sent by one participant, whom we call the sender, and can only be received by the other, whom we call the receiver) implies public-key encryption. Namely, we can construct a key encapsulation mechanism (KEM) out of it. We recall the KEM definition in the full version [8]. We consider a unidirectional scheme which is secure for the following cleanness predicate:

: the adversary makes only three oracle calls which are, in order, , , and .

This cleanness predicate implies all other considered predicates; hence, it is more restrictive. Our result implies that it is unlikely that even such a weakly secure scheme can be constructed from symmetric cryptography.

Theorem 14

Given a unidirectional protocol, we can construct a KEM with the following properties. The correctness of the protocol implies the correctness of the KEM. The security of the protocol under the above cleanness predicate implies the security of the KEM.

Proof

Assuming a unidirectional protocol, we construct a KEM as follows:

  • Key generation: run the setup, key generation, and initialization algorithms of the protocol; set the KEM public key to the sender's initial state and the KEM secret key to the receiver's initial state.

  • Encapsulation: run the sending algorithm on the public key; output the produced message as the ciphertext and the produced key as the encapsulated key.

  • Decapsulation: run the receiving algorithm on the secret key and the ciphertext; output the produced key.

The KEM security game with adversary \(\mathcal {A}\) works as in the left-hand side below. We transform \(\mathcal {A}\) into an adversary against the unidirectional scheme as in the right-hand side below.

[Figure: the KEM game (left) and its simulation by the constructed adversary (right).]

We can check that the cleanness predicate is satisfied. The game with the constructed adversary simulates perfectly the game with \(\mathcal {A}\). So, the security of the unidirectional scheme implies the security of the KEM.
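The shape of this wrapper can be sketched as follows. This is an interface-level Python sketch: `ToyUniARK` is a hypothetical, symmetric stand-in with no security whatsoever; it only shows how key generation, encapsulation, and decapsulation map onto the initialization, sending, and receiving algorithms:

```python
import hashlib
import os

class ToyUniARK:
    """Hypothetical unidirectional scheme (insecure toy): the sender and
    receiver states share a seed, and each key is derived from the seed
    and a fresh random message."""
    def init(self):
        s = os.urandom(32)
        return s, s                          # (sender state, receiver state)
    def send(self, st):
        r = os.urandom(32)
        k = hashlib.sha256(st + r).digest()
        return st, r, k                      # (updated state, message, key)
    def receive(self, st, msg):
        return True, st, hashlib.sha256(st + msg).digest()

def kem_gen(ark):
    """KEM key generation: run the scheme's initialization; the sender's
    state becomes pk and the receiver's state becomes sk."""
    st_S, st_R = ark.init()
    return st_S, st_R                        # (pk, sk)

def kem_enc(ark, pk):
    """Encapsulation: one sending call; the message is the ciphertext."""
    _, ct, k = ark.send(pk)
    return ct, k

def kem_dec(ark, sk, ct):
    """Decapsulation: one receiving call on the ciphertext."""
    acc, _, k = ark.receive(sk, ct)
    return k if acc else None
```

With a genuinely secure unidirectional scheme plugged in, this wrapper is exactly the reduction of Theorem 14.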

4 Unforgeability and Unrecoverability

4.1 Unforgeability

Another security aspect of the key agreement is that no message should be forgeable by any bounded adversary, except trivially by state exposure. This security notion is independent from key indistinguishability but is certainly nice to have for explicit authentication in key agreement. Besides, it is easy to achieve. We will also use it as a helper to prove key indistinguishability, by reducing the cleanness predicate forbidding all forgeries to the one forbidding only trivial forgeries.

Let the adversary interact with the oracles in any order. For unforgeability, we eliminate the trivial forgeries (as defined in Definition 12). The game is defined in Fig. 4.

Definition 15

(Unforgeability). Consider the unforgeability game in Fig. 4 associated with the adversary \(\mathcal {A}\). Let the advantage of \(\mathcal {A}\) be the probability that the game outputs 1. We say that the scheme is unforgeable if, for any PPT adversary, the advantage is negligible.

Fig. 4. The three security games of Sect. 4.

We can now justify why forgeries in the key indistinguishability game must be trivial for a scheme with unforgeability.

Theorem 16

If a scheme is unforgeable, then, for each of the considered cleanness predicates, security with respect to trivial forgeries implies security with respect to all forgeries.

4.2 Recovery from Impersonation

A priori, it seems nice to be able to restore a secure state after a state exposure of a participant takes place. We show here that it is not a good idea.

Let \(\mathcal {A}\) be an adversary playing the two games in Fig. 5. In the left strategy, \(\mathcal {A}\) exposes a participant with a state exposure query (Step 2). Then, the adversary impersonates this participant by running the sending algorithm on its own (Step 3). Next, the adversary "sends" a message to the counterpart, which is accepted (due to correctness) because it was generated with the exposed state. In Step 5, \(\mathcal {A}\) lets the legitimate sender generate a message by calling the message exchange oracle. If security self-restores at this step, then the counterpart accepts this message, which was sent by the legitimate sender. It is clear that the strategy shown on the left side of Fig. 5 is equivalent to the strategy shown on the right side of the same figure (which only switches Alice and the adversary, who run the same algorithm). Hence, both lead to acceptance with the same probability. The crucial point is that the forgery in the right-hand strategy becomes non-trivial, which implies that the protocol is not unforgeable. In addition, if such a phenomenon occurs, we can build an adversary for the key indistinguishability game which passes the cleanness condition. Thus, we lose key indistinguishability. Consequently, security should not self-restore.

Fig. 5. Two recoveries succeeding with the same probability.

We define the unrecoverability notion with another game in Fig. 4. Essentially, in this game, the adversary wins if the receiver accepts some message sent by the legitimate sender after the adversary has made a successful forgery. We further use this notion as a second helper to prove key indistinguishability with the acknowledgment-based cleanness predicate.

Definition 17

(security). Consider game in Fig. 4 associated to the adversary . Let the advantage of in succeeding playing the game be . We say that the ratcheted communication protocol is -secure, if for any PPT adversary, the advantage is negligible.

-security is easy to achieve using a collision-resistant hash function.

To be sure that no message was received before it was sent, we need the following security notion. In the game, the adversary tries to make receive a message before it was sent by .

Definition 18

(security). For the game in Fig. 4, let be an adversary. The advantage of is the probability that is returned. We say that the ratcheted communication protocol is -secure, if for any adversary limited to a polynomially bounded number of queries, the advantage is negligible.

Theorem 19

If a is -secure, -secure, and -secure, then it is -secure.

5 Our Protocol

We construct a in Fig. 6. We use a public-key cryptosystem , a digital signature scheme , a one-time symmetric encryption , and a collision-resistant hash function . They are all defined in the full version [8]. First, we construct a naive signcryption from and by

$$\begin{aligned} \mathsf {SC}.\mathsf {Enc}(\overbrace{\mathsf {sk}_S,\mathsf {pk}_R}^{\mathsf {st}_S},\mathsf {ad},\mathsf {pt})= & {} \mathsf {PKC}.\mathsf {Enc}(\mathsf {pk}_R,(\mathsf {pt},\mathsf {DSS}.\mathsf {Sign}(\mathsf {sk}_S,(\mathsf {ad},\mathsf {pt})))) \\ \mathsf {SC}.\mathsf {Dec}(\underbrace{\mathsf {sk}_R,\mathsf {pk}_S}_{\mathsf {st}_R},\mathsf {ad},\mathsf {ct})= & {} \begin{array}[t]{l} (\mathsf {pt},\sigma )\leftarrow \mathsf {PKC}.\mathsf {Dec}(\mathsf {sk}_R,\mathsf {ct})\;; \\ \mathsf {DSS}.\mathsf {Verify}(\mathsf {pk}_S,(\mathsf {ad},\mathsf {pt}),\sigma )\;?\;\mathsf {pt}\;:\;\bot \\ \end{array} \end{aligned}$$
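For concreteness, this sign-then-encrypt composition can be sketched in Python. The primitives below are deliberately insecure toy stand-ins (the toy “public-key” scheme uses pk = sk, and the toy signature is an HMAC), chosen only so that the control flow of SC.Enc and SC.Dec is executable; a real instantiation would use the PKC and DSS schemes defined in the full version [8].

```python
import hashlib
import hmac
import pickle

# --- Toy, INSECURE stand-ins for PKC and DSS (structure only) ---
def pkc_enc(pk, pt):
    # Placeholder for an IND-CCA public-key encryption.
    return ("ct", pk, pickle.dumps(pt))

def pkc_dec(sk, ct):
    tag, pk, blob = ct
    assert tag == "ct" and pk == sk  # toy scheme: pk == sk
    return pickle.loads(blob)

def dss_sign(sk, msg):
    # Placeholder for an unforgeable signature (here: HMAC, so sk == pk).
    return hmac.new(sk, pickle.dumps(msg), hashlib.sha256).digest()

def dss_verify(pk, msg, sig):
    expected = hmac.new(pk, pickle.dumps(msg), hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

# --- The naive signcryption SC from the equation above ---
def sc_enc(sk_S, pk_R, ad, pt):
    # Sign (ad, pt) with the sender's key, then encrypt (pt, sig) for the receiver.
    sig = dss_sign(sk_S, (ad, pt))
    return pkc_enc(pk_R, (pt, sig))

def sc_dec(sk_R, pk_S, ad, ct):
    # Decrypt, then verify the signature on (ad, pt); return None (i.e. ⊥) on failure.
    pt, sig = pkc_dec(sk_R, ct)
    return pt if dss_verify(pk_S, (ad, pt), sig) else None
```

Note that the associated data ad is authenticated but not transmitted: decryption under a different ad rejects, mirroring the ⊥ output in the definition.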

Second, we extend to multi-key encryption called due to the multiple layers of keys. Third, we transform into a unidirectional ratcheting scheme . Finally, we design . (See Fig. 6.)

The state of a participant is a tuple where is the hashing key, is the iterated hash of all sent messages, and is the iterated hash of all received messages. We also have two lists and . They are lists of states to be used for unidirectional communication: sending and receiving. Both lists are growing but entries are eventually erased. Thus, they can be compressed. (Typically, only the last entry is not erased.)
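The iterated transcript hashes can be sketched as follows, assuming SHA-256 as the collision-resistant hash (the paper's hash function is keyed by the hashing key stored in the state; the key is omitted here for brevity):

```python
import hashlib

def transcript_update(h_prev: bytes, msg: bytes) -> bytes:
    # Absorb one message into the running transcript digest.
    # h_prev has fixed length (32 bytes), so the encoding is unambiguous.
    return hashlib.sha256(h_prev + msg).digest()

# A participant keeps one digest for sent and one for received messages.
h_sent = bytes(32)  # initial (empty-transcript) value
for m in (b"msg1", b"msg2"):
    h_sent = transcript_update(h_sent, m)
```

Collision resistance guarantees that two equal digests correspond to equal transcripts, which is what makes these digests usable as a compact summary of all sent and received messages.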

The idea is that the entry of for a participant is associated to the entry of for its counterpart . Every time a participant sends a message, it creates a new pair of states for sending and receiving and sends the sending state to its counterpart , to be used in case wants to respond. If the same participant keeps sending without receiving anything, it accumulates receiving states this way. Conversely, a participant who has received many messages before starting to send has accumulated many sending states. Its message is sent using all those states in the procedure. After sending, all but the last sending state are erased, and the message indicates the erasures to the counterpart , who erases the corresponding receiving states accordingly. Our onion encryption needs to ensure complexity (so we cannot compose encryptions, as ciphertext overheads would produce complexity). For that, we use a one-time symmetric encryption with a key in , which is split into shares . Each share is -encrypted in one state. Only the last state is updated (as the others are meant to be erased).
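The key-splitting step can be sketched as follows: the one-time key k is split into n XOR-shares, one per accumulated state, so that every share (and hence every layer of the onion) is needed to recover k. This is a minimal sketch of the sharing alone; in the actual construction of Fig. 6, each share is additionally encrypted under one of the accumulated states.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(k: bytes, n: int) -> list:
    # Pick n-1 uniformly random shares; the last share is chosen so that
    # the XOR of all n shares equals k. Any n-1 shares reveal nothing about k.
    shares = [os.urandom(len(k)) for _ in range(n - 1)]
    last = k
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def combine_shares(shares: list) -> bytes:
    k = bytes(len(shares[0]))
    for s in shares:
        k = xor_bytes(k, s)
    return k
```

Because each share has the length of the key rather than of the full ciphertext, stacking one share per state keeps the total overhead linear in the number of states, which is the complexity constraint stated above.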

The protocol is quite efficient when participants alternate their roles well, because the lists are often flushed to contain only one unerased state. It also becomes more secure due to ratcheting: any exposure has very limited impact. If there are unidirectional sequences, the protocol becomes less and less efficient due to the growth of the lists.

Fig. 6.
figure 6

Our protocol.

We state the security of our protocol below. Proofs are provided in the full version [8]. In the full version [8], we also show that our protocol does not offer -security.

Theorem 20

We consider the protocol from Fig. 6.

  • is correct.

  • is -secure.

  • If is collision-resistant, then is -secure.

  • If is -secure and is collision-resistant, then is -secure.

  • If is -secure and is -secure, then is --secure.

Consequently, due to Theorem 16, we deduce --security. The advantage of treating --security specifically is that we clearly separate the required security assumptions for .

Similarly, due to Theorem 19, we deduce --security.

6 Conclusion

We studied the protocols and security. For security, we identified three important security objectives: the protocol should be -secure; the protocol should be resistant to forgery attacks (-security); and the protocol should not self-heal after impersonation (-security). By relaxing the cleanness notion in -security, we designed a protocol based on an -secure cryptosystem and a one-time signature scheme. We used neither random oracles nor key-update primitives.