Keywords

1 Introduction

In recent years, academia and industry working groups have made substantial efforts towards realizing next-generation smart-connectivity Systems-of-Systems. These systems have evolved from local, standalone systems into safe and secure solutions ranging from cyber-physical end devices to edge servers and cloud facilities. The core pillar in such ecosystems is the establishment of trust-aware service graph chains, comprising both resource-constrained devices running at the edge and container-based technologies. The primary existing mechanism for establishing trust is trusted computing, which addresses the need for verifiable evidence about a system and the integrity of its trusted computing base. To this end, related specifications provide the foundational concepts, such as measured boot, sealed storage, platform authentication and remote attestation.

An essential component in building such trusted computing services is a highly secure anchor (either software- or hardware-based) that serves as a Root-of-Trust (RoT), providing cryptographic functions, measuring and reporting the behavior of running software, and storing data securely. Examples include programmable Trusted Execution Environments (TEEs) and fixed-API devices, like the Trusted Platform Module (TPM) [25]. Such components are considered inherently secure elements and implement hardened protections (e.g., tamper-proofing or side-channel protections), because any vulnerability in them can compromise the security assurance of the overall system. This sets the challenge ahead: because such RoTs are by definition trusted, all internal operations, including the handling of cryptographic data, can be idealized. However, this does not directly translate to the security of the overall application that leverages such RoTs.

Thus, formal verification of protocols and interfaces has become a fundamental process for identifying bugs or misuse cases when building complex interacting systems, as several earlier works have shown, e.g., [6, 22]. There are two noteworthy difficulties when considering formal verification in these settings. First, the inherent fact that protocol verification is an undecidable problem for an unbounded number of protocol executions and an unbounded size of terms. Second, the additional difficulty faced by verification tools when considering protocols with non-monotonic mutable global state, which may provoke numerous false attacks or failures to prove security; see [7]. Trusted computing protocols fall into both categories, because the non-volatile memory of the RoT can be regarded as a state that may change between executions of the protocol.

Indeed, several works on symbolic verification of hardware RoTs that are part of other high-level functionalities take into account the usage of persistent state [2, 7]. Some notable examples in the context of the TPM are the works by Shao et al., which cover specific subsets of TPM functionalities, such as Enhanced Authorization (EA) [22] or HMAC authorization [21], identifying misuse cases. Also, Xi et al. [31] and Wesemeyer et al. [30] conduct formal analysis and verification of the Direct Anonymous Attestation (DAA) protocol of the TPM. On the other hand, Delaune et al. [8] propose a Horn-clause framework, proving its soundness, and use ProVerif to approximate the TPM's internal state space, helping to address non-termination issues.

A recurrent characteristic when formally verifying the aforementioned scenarios is that there is usually no distinction between the cryptography used for self-consumption of the RoT (e.g., the mechanism it uses for secure storage) and cryptography that is provided specifically for the overall application. This has the inherent drawback that reasoning about the security of a large application or service forces reasoning about the internals of the security device itself, thus, limiting the scope of the reasoning that automated analysis tools can achieve.

Contribution. The main objective of the present work is to propose a methodology for proving security in scenarios based on services that make use of RoTs, by idealizing the internal functionalities of the security device, except those that provide explicit cryptographic functionalities for the service being offered. In order to illustrate our methodology, we concentrate on a class of remote attestation services based on the TPM. Even though we focus on a particular case of attestation, we build an abstract model for a subset of TPM primitives sufficient to implement the core functionalities of generic attestation services. From the perspective of formally verifying RoT-based applications, this model represents a means of reasoning about security and privacy (of offered services) without being bogged down by the intricacies of various crypto primitives considered in the different platforms. We conduct our analysis in the symbolic model of cryptography (Dolev-Yao adversary [9]) using the Tamarin prover [4] and its front-end SAPiC [15]. We define a number of security properties relevant for the considered scenario, and successfully verify them within this framework.

2 Background

In this section we summarize specific notions related to RoT, TPM and remote attestation that will be needed in our discussion to follow. For more details, we refer the reader to [3, 12, 20, 25, 26, 27].

The TPM as a Root of Trust. The Trusted Computing Group (TCG) splits the responsibility of the RoT into three categories: measurement, storage and reporting [25, §9.4]. The RoT for measurement is usually the first piece of BIOS code that is executed on the main processor during boot, and starts a chain of measurements. The RoT for storage and reporting are responsibilities assigned to the TPM, typically implemented as a tamper-resistant hardware embedded in a host device. The TPM and the host device form a platform.

A measurement chain of trust is a mechanism that allows trust to be established from the low-level RoT for measurement up to a higher-level object, e.g., the OS. Each component in the chain measures the next component, and these measurements are checked against reference values, typically provided by the platform manufacturer, before passing control to the next component. For instance, the RoT will measure the (remaining part of the) BIOS, the BIOS will measure the bootloader, and so on. Each component in the chain updates the TPM's Platform Configuration Registers (PCRs), a set of special registers that store the representation of the measurements as a hash chain.
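
The extend operation can be sketched in a few lines of Python (an illustrative sketch, assuming SHA-256 as the PCR bank's hash algorithm; the component names are hypothetical):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """Extend a PCR: the new value is H(old_value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start zeroed at boot; each component in the chain extends the
# digest of the next component before passing control to it.
pcr = bytes(32)
for component in (b"bios", b"bootloader", b"os"):
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())
# The final value commits to the entire ordered sequence of measurements:
# changing or reordering any measurement yields a different PCR value.
```

Note that the final digest depends on the order of extensions, which is exactly the property the measured-boot chain relies on.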

The TPM contains an embedded key pair known as the Endorsement Key (EK), generated and certified by the platform manufacturer. The private part of the EK never leaves the TPM. This key pair uniquely identifies the platform; if it were used to sign platform measurements, it would compromise the platform's privacy. Therefore, the TPM offers mechanisms to generate an arbitrary number of Attestation Keys (AKs) that can be used to attest the platform state by signing the PCR contents. These AKs are generated in such a way that an external verifier can be assured that the signature was generated by a legitimate TPM, without revealing the identity of that TPM.

Moreover, the TPM offers mechanisms to restrict access to TPM commands and objects by defining authorization policies. Most notably, in TPM 2.0, the Enhanced Authorization (EA) mechanism allows defining flexible, fine-grained authorization policies by combining a number of assertions through logical connectors. For instance, a system administrator could create a TPM key and associate with it a policy that allows the usage of that key when (i) the PCRs are in a given state, or (ii) a password is provided and the user is physically present. The authorization policy is stored, within the TPM object, as a hash chain called authPolicy. An authorization session is the mechanism used to pass in authorization values or policies, and to maintain state between subsequent commands. To load or use a TPM object, a session must be created, and the user indicates which assertions must be checked. The TPM checks the assertions and, if they succeed, updates the session attribute policyDigest (a hash chain). If the policyDigest of the session matches the authPolicy of a given object, then access to that object is granted. We refer the reader to [25, §19.7] for the complete details, and to Appendix A for specific illustration purposes.
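
The EA mechanism accumulates assertions into a digest the same way PCRs accumulate measurements. The following Python sketch captures the comparison between policyDigest and authPolicy (illustrative only; the assertion labels are hypothetical placeholders, not actual TPM encodings):

```python
import hashlib

ZERO = bytes(32)

def policy_update(policy_digest: bytes, assertion: bytes) -> bytes:
    """Fold one assertion into a policy digest (hash chain), as done both
    for authPolicy at object creation and for a session's policyDigest."""
    return hashlib.sha256(policy_digest + assertion).digest()

def build_policy(assertions) -> bytes:
    digest = ZERO
    for a in assertions:
        digest = policy_update(digest, a)
    return digest

def access_granted(session_policy_digest: bytes, auth_policy: bytes) -> bool:
    # Access is granted iff the session replayed the same assertions,
    # in the same order, as those baked into the object's authPolicy.
    return session_policy_digest == auth_policy
```

Because the digest is a hash chain, satisfying the assertions in a different order produces a different policyDigest and access is denied.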

Remote Attestation. This is a mechanism for providing evidence of the integrity status of a platform [12, 20]. It is typically realized as a challenge-response protocol that allows a third party (verifier) to obtain an authentic and timely report about the state of an untrusted, and potentially compromised, remote device (prover). The TPM allows implementing privacy-preserving remote attestation protocols. Remote attestation services are currently used in a variety of scenarios, ranging from attestation for isolated execution environments based on the (now outdated) Intel Trusted Execution Technology [13], to more modern approaches used together with Intel Software Guard Extensions, e.g., [14, 29].

From a high-level perspective, a remote attestation protocol requires that the user first creates an AK that will be used to sign attestation reports (quotes). A quote is essentially composed of the contents stored in selected PCRs (which reflect the platform state) signed with the AK. As commented above, the user has the ability to create as many AKs as they wish, but each AK is required to be certified by a third party called the Privacy Certification Authority (PCA). The certification process, detailed in Sect. 4, implies that the PCA knows the relationship between EK and AKs, but the PCA is trusted not to reveal this information, which would break the anonymity of the platform.

We also note that the TCG has an alternative method for performing remote attestation without revealing the EK to a trusted third party, which is known as Direct Anonymous Attestation (DAA) [5]. However, DAA works by design only in conjunction with the TCG specification for TPMs. Since the aim in this paper is to formalize the notion of secure remote attestation in a more general context, we focus on the first approach, i.e., attestation using a PCA. Such protocols represent most of the real-world applications and they do not modify the core characteristics of the remote attestation service. Further, we argue that the formalization methods that we present here can be used as the basis for other hardware-based remote attestation instances, not only TPMs.

The Tamarin Prover and SAPiC. For our modelling approach we have chosen the Tamarin prover [4, 17] and its front-end SAPiC [15]. SAPiC allows modelling protocols in (a dialect of) the applied pi-calculus [1], and converts the process specification into multiset rewrite rules (MSRs) that can be processed by Tamarin. Security properties can be expressed as first-order logic formulas. Everything that SAPiC does can be expressed directly in Tamarin MSRs, but SAPiC provides an encoding (e.g., for locks, reliable channels, state handling) that is likely more concise than the ad-hoc modelling a user would come up with using MSRs. Hence, SAPiC has a better chance of termination. We present a brief description of the SAPiC syntax in Appendix B; see [4, 15] for the complete details.

Tamarin and SAPiC have already been used successfully for modelling TPM functionalities in existing works, e.g., [21, 22, 30], and they offer a convenient syntax for modelling protocols with global state. However, as mentioned above, it is rather challenging to model protocols with arbitrarily mutable global state, as required in the scenario presented in this paper. Therefore, some technical alternatives and manual interventions have been adopted in order to define a realistic adversary; see Sect. 5 below.

3 A Methodology for Modelling Protocols with RoTs

The key idea in our modelling approach is to consider a further layer of abstraction within the traditional symbolic model of cryptography [9], and idealize the internal functionalities of the RoT, except those providing cryptography to a consumer application (like hashing, asymmetric encryption or signatures), capturing their intended semantics rather than their implementation. That is, in addition to the assumption of "perfect cryptography" from the symbolic model (e.g., a ciphertext can only be decrypted by providing the correct key), we abstract and idealize any of the cryptographic operations that the RoT uses for self-consumption and that are not intended to take part in a client application. Consider, for example, the limited amount of non-volatile memory of the RoT. This has the side effect that the RoT needs to use encryption to offload memory contents securely to the host; an ancillary operation unrelated to the client application.

To achieve this, we propose replacing such self-consumption cryptographic functionalities of the RoT with non-cryptographic mechanisms, for example, using a combination of strong access control with channels not visible to the adversary where honest parties interact (private channels). We call this approach the idealized model of cryptography: in addition to restricting the adversary to computing terms using only cryptographic function symbols, it also idealizes the internal operations of the RoT, assuming they are "perfect." We provide a comparison of the assumptions considered in the computational, symbolic and idealized models in Table 1.

Table 1. Assumptions considered in the three models.

Such an idealized model will enable reasoning about (and comparing) different TPM services under various adversarial models and for different security guarantees, excluding any possible implications from the leveraged cryptographic primitives. Further, this approach simplifies the modelling of complex protocols, such as the attestation procedure that we study in this paper, and hence makes automated analysis tools more likely to terminate. From the perspective of formally verifying trusted hardware components, this approach can provide a means of reasoning about security and privacy (of offered services) without being bogged down by the intricacies of the various crypto-related primitives considered in the different platforms. On the downside, the idealized approach that we present here is not suitable for capturing potential vulnerabilities stemming from the TPM internal cryptography [21, 22], nor can it identify timing attacks, such as those presented in [18]. However, the non-perfect secrecy of the TPM's crypto primitives is well studied in the literature; thus, in this paper we opt to focus on idealized models that have a direct impact on real-world applications.

Thus, using this approach, one has to prove or assume two facts. First, that the particular RoT under analysis securely implements those high-level functionalities that are part of a certain application or service. This can be proved under some model of cryptography, which might require significant effort, but this effort only needs to be expended once. Second, that the system is secure when we use an idealized version of the RoT instead of the real device. This task cannot be reused, and needs to be done for each application or service considered. Idealizing the cryptography used internally in a RoT allows the cryptography relevant to the application itself to be analyzed more concisely, and this analysis can be supported by a more broadly accessible set of tools than those used so far to analyze such applications. Further, it makes it possible to address more complex protocols, larger use cases, and compositions of Systems-of-Systems with current formal verification technologies.

An overview of our methodology is as follows:

  i. Identify and select a subset of RoT functionalities that apply to the service.

  ii. Obtain an idealized model of these functionalities, identifying the best approach to model them. Assume or prove security of the idealized functionalities.

  iii. Model the application-specific scenario using the idealized device.

  iv. Define and model the set of security properties to be considered.

The strategy of idealizing as much of the cryptography as possible should make the task of proving that an application or a system is secure, for some specific notion of security, more manageable. Therefore, simplifying the RoT internal cryptographic components is a worthy consideration. To illustrate our methodology, we focus on an attestation protocol based on TPMs. We describe the complete scenario in Sect. 4 and we instantiate our methodology in Sect. 5.

Fig. 1. Remote attestation protocol using a PCA. (*) The PCA can compute credBlob and secret on its own, or by calling the convenience command \(\texttt {TPM2\_MakeCredential}(ek_{\text {pub}}, challenge, ak_{\text {name}})\) on a local TPM.

4 Remote Attestation Using a PCA

The goal of the TPM-based remote attestation protocol presented in this section is to establish a secure communication channel between a Client and a Service Provider (SP), while ensuring the trust status of the Client. The protocol is based on [11], and it is a generic version of a network management protocol presented in [23, 24]. There are four devices that participate in the protocol: the Client, a TPM embedded in the Client, the PCA Server, and the SP. See Fig. 1.

Notation. For an entity A, we denote \(A_\text {pub}\) and \(A_\text {priv}\) its public and private keys, respectively. We denote \(\mathrm {cert}_{A}(x)\) a certificate for object x issued by entity A, and \(\mathrm {sign}_{A}(y)\) a signature of y using private key \(A_\text {priv}\). Also, for a TPM key k we denote \(k_\text {pub}\) and \(k_\text {priv}\) its public and private key parts, \(k_\text {authPolicy}\) its EA policy, \(k_\text {h}\) its TPM object handle [25, §15], and \(k_\text {name}\) its name (a unique identifier computed as a hash of its public data [25, §16]). For clarity of exposition, we omit session management objects and parameters in the description of the protocol.

First of all, the Client receives the PCA certificate, signed with the SP key, and initializes the TPM by extending the PCRs with the current firmware/software values. To establish a secure communication channel between the Client and the SP, the following phases are executed: (a) the Client creates an AK using the TPM, and this AK is certified by the PCA; (b) the Client creates a TLS key using the TPM, which is certified by the PCA and signed by the SP; (c) finally, the Client uses the TLS key to establish a secure communication channel with the SP for exchanging encrypted messages and attestation quotes. The three phases are described in detail below.

Phase 1: Creation and Certification of AK. In the first step, the Client creates an AK via the TPM, which is certified by the PCA in a similar way as in [11]. This initial AK is not bound to a PCR state through EA, because authorization for the AK will be accomplished with the certificate that will be issued by the PCA. The AK certification sub-protocol works as follows. Upon receiving the EK, the AK and a desired fully-qualified domain name (FQDN), the PCA creates a random challenge and protects it using the TPM command \(\texttt {TPM2\_MakeCredential}(ek_{\text {pub}}, challenge, ak_{\text {name}})\), which outputs the values:

  • secret: a random seed encrypted with \(ek_{\text {pub}}\),

  • credBlob: encrypt-then-MAC of the challenge using keys derived from seed.

The Client receives this pair of values and uses the TPM to execute:

$$\begin{aligned} \texttt {TPM2\_ActivateCredential}(ek_\text {h}, ak_\text {h}, credBlob, secret) \end{aligned}$$

which retrieves the challenge after performing some validation checks. Then, the Client sends it to the PCA. If the challenge matches, the PCA is convinced that the AK and EK are bound to a legitimate TPM and issues the AK certificate \(\mathrm {cert}_\text {PCA}(\langle ak_{\text {pub}}, fqdn \rangle )\). See Fig. 1(a).
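
The credential-protection semantics can be sketched as follows (an illustrative Python sketch under simplifying assumptions: we model only the symmetric encrypt-then-MAC part, with an ad-hoc hash-based KDF and stream cipher in place of the KDFs and ciphers mandated by the TPM specification, and we abstract away the asymmetric encryption of seed under \(ek_{\text {pub}}\) by passing seed directly):

```python
import hashlib
import hmac

def kdf(seed: bytes, label: bytes) -> bytes:
    # Illustrative key derivation (the TPM spec uses KDFa, SP800-108).
    return hashlib.sha256(seed + label).digest()

def make_credential(seed: bytes, challenge: bytes, ak_name: bytes):
    """Protect the challenge (up to 32 bytes here) with encrypt-then-MAC
    keys derived from seed, binding it to the AK name. The real command
    additionally encrypts seed to the EK public key, yielding `secret`."""
    enc_key = kdf(seed, b"STORAGE")
    mac_key = kdf(seed, b"INTEGRITY")
    stream = hashlib.sha256(enc_key + ak_name).digest()[: len(challenge)]
    ct = bytes(c ^ s for c, s in zip(challenge, stream))
    tag = hmac.new(mac_key, ct + ak_name, hashlib.sha256).digest()
    return ct, tag  # together: credBlob

def activate_credential(seed: bytes, ak_name: bytes, cred_blob):
    """Verify the MAC (binding to this AK name), then decrypt."""
    ct, tag = cred_blob
    mac_key = kdf(seed, b"INTEGRITY")
    expected = hmac.new(mac_key, ct + ak_name, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("credential blob rejected")
    enc_key = kdf(seed, b"STORAGE")
    stream = hashlib.sha256(enc_key + ak_name).digest()[: len(ct)]
    return bytes(c ^ s for c, s in zip(ct, stream))
```

The binding to ak_name is what convinces the PCA: only a TPM holding the EK private key can recover seed, and the TPM releases the challenge only for the AK named in the blob.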

Phase 2: Creation and Certification of the TLS Key. In this phase, the Client creates a TLS signing key using the TPM, which is bound to a PCR state through EA. At this point, the PCRs have already been extended with appropriate firmware/software measurements using a succession of \(\texttt {TPM2\_PCRExtend}\)s.

Let \(tls_{\text {priv}}, tls_{\text {pub}}\) be the generated TLS key pair. The Client executes the command \(\texttt {TPM2\_Certify}(tls_\text {h}, ak_\text {h})\), which attests the TLS key, in order to vouch that it is protected by a genuine, but unidentified, TPM. This attestation is a signature \(\sigma _\text{certInfo}\) by the AK over information that describes the key, which we abstract as the tuple \(certInfo=\langle tls_{\text {pub}}, tls_\text {authPolicy} \rangle \). The signed tuple, plus some additional information, is known as Subject Key Attestation Evidence (SKAE), an X.509 certificate extension [28]: \(skae=\langle certInfo, \sigma _\text{certInfo}\rangle \).

Next, the Client creates a Certification Signing Request (CSR) for the TLS key. The CSR is composed of the message \(\langle tls_{\text {pub}}, skae \rangle \) and the signature on this message with the private part of the TLS key. This is done using the command:

$$ \sigma _{csr} = \texttt {TPM2\_Sign}(tls_\text {h}, \langle tls_{\text {pub}}, skae \rangle ). $$

The value \(csr = \langle \langle tls_{\text {pub}}, skae \rangle , \sigma _{csr}\rangle \) is sent to the PCA, which verifies both the csr and skae signatures, using \(tls_{\text {pub}}\) and the AK certificate, and forwards the CSR to the SP. The SP issues \(\mathrm {cert}_\text {SP}(tls_{\text {pub}})\), which is sent to the Client. See Fig. 1(b).

Phase 3: TLS Communication and Device Attestation. In this last step, the secure communication channel between the Client and the SP is established. The Client and the SP execute the TLS 1.3 protocol [19] using the TLS key created in the previous phase, to establish a common symmetric session key \(s_\text {key}\). The SP encrypts some random data qData with \(s_\text {key}\) and sends the ciphertext to the Client. Upon receipt, the Client decrypts the ciphertext, obtaining qData. For the attestation part, the Client executes:

$$\begin{aligned} (quote, signature) = \texttt {TPM2\_Quote}(ak_\text {h}, qData, pcrSelection), \end{aligned}$$

to quote the PCR contents pcrDigest referenced in pcrSelection, where \(quote = H(pcrDigest \parallel qData)\) and \(signature = \mathrm {sign}_{ak_{\text {priv}}}(quote)\). The AK certificate and the pair \((quote, signature)\) are sent to the SP, which verifies the signature and checks the PCR values that reflect the trust status of the platform. See Fig. 1(c).
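
The quote computation and the SP's checks can be sketched as follows (hypothetical Python; the signature functions are symbolic stand-ins satisfying verify(sign(m, k), m, pk(k)) = true, in the spirit of our symbolic treatment, not a real scheme):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Symbolic stand-ins for the AK signature.
def pk(sk): return ("pk", sk)
def sign(m, sk): return ("sig", m, sk)
def verify(sig, m, pub): return sig == ("sig", m, pub[1])

def tpm2_quote(ak_priv, q_data: bytes, pcr_digest: bytes):
    """Quote: hash the selected PCR digest with the fresh qData, sign with AK."""
    quote = H(pcr_digest + q_data)
    return quote, sign(quote, ak_priv)

def sp_verifies(quote, signature, ak_pub, q_data, golden_pcr) -> bool:
    # The SP recomputes the quote from its own fresh qData and a reference
    # ("golden") PCR digest, then checks the AK signature over it.
    return quote == H(golden_pcr + q_data) and verify(signature, quote, ak_pub)
```

Including qData in the quote guarantees freshness (the report cannot be replayed), while the AK signature, backed by the PCA certificate, guarantees it originates from a legitimate TPM.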

5 Application of the Modelling Methodology

In this section, we apply the modelling methodology, specifically Steps i–iii, discussed in Sect. 3, to the use case presented in Sect. 4. We address the verification of security properties (Step iv) in Sect. 6. Recall that the objective is to come up with high-level abstractions of the relevant RoT commands that will allow us to model the functionalities of the trusted component not directly related to the cryptographic needs of the target application.

Adversarial Model. We consider the usual Dolev-Yao model [9] for an adversary sitting between the Client and the two servers (PCA, SP). We assume that there is a trusted infrastructure supporting the interactions between the two servers, so their mutual channel is not a viable target for the adversary. Consequently, we consider three processes in our model: the Client, the TPM and the Server, the latter representing both the PCA and the SP. The adversary is allowed to monitor and interfere in the communication between the Client and the Server. However, the communication between the Client and the TPM is treated independently. More precisely, we allow the adversary a certain degree of control over the communication channel between the Client and the TPM. For instance, the adversary can be restricted to passive behaviour, in order to capture more relaxed trust assumptions, such as the case where the Client has malware installed that monitors the interactions between the (host) Client and the TPM, but cannot intercept messages or query the TPM (i.e., use the TPM as an oracle).

Making near zero-trust assumptions for the Client requires additional validation of several processes that are executed on the host, to protect against fully compromised Clients that try either to manipulate the parameters given to the TPM or to use the TPM as a "crypto engine," that is, to send falsified data (which can be considered as part of the attestation) to the TPM to be signed. This essentially amounts to a Dolev-Yao adversarial model in which the adversary can both monitor and modify all interactions between the Client and the TPM. The critical operation to verify is the management of the policies depicting the correct PCR values to which the use of the TLS key is bound. A compromised host can either create a false policy or bind the correct policy to a wrong representation of the PCR values, resulting in incorrect use of the TLS key. Such verification proofs can be done using the TPM's certification functionality and validated by a trusted entity, such as the PCA. PCR values representing the "correct" state of the Client should be circulated by the PCA, signed under its private key, as part of the answer to the TPM2_ActivateCredential command. Since the certification of the TPM has already been included in the described models, for simplicity we have excluded the further modelling of such additional proofs, thus considering only attackers that can monitor but cannot modify the interactions between the Client and the TPM.

Fig. 2. Idealized TPM commands (sketch). See Appendix B for an overview of the SAPiC syntax.

i. Identify and select a subset of RoT functionalities. Since the focus is on attestation services, instead of modelling all the TPM commands and functionalities, we concentrate on a specific subset of commands that are common in most reference scenarios and application domains in relation to this service. Such commands are related to object creation, authorization sessions, EA, PCR extension, remote attestation, and cryptographic operations. Concretely, the commands that we consider are:

  • Key and session management: TPM2_StartAuthSession, TPM2_PolicyPCR, TPM2_PCRExtend, TPM2_Create

  • Attestation: TPM2_MakeCredential, TPM2_ActivateCredential, TPM2_Sign, TPM2_Certify, TPM2_Quote

These commands provide TPM functionalities that are crucial in establishing chains of trust in generic heterogeneous environments. Our intuition is that the models we describe for these commands will serve as a basis both for reasoning about the security of a wide set of TPM-based applications and for reasoning about the security of the TPM's mechanisms themselves.

ii. Obtain an idealized model. Our models for the TPM commands mentioned above are presented in Fig. 2 using SAPiC syntax (see Appendix B). The cryptographic operations that we need to abstract are the creation of hash chains, which are used for updating PCR and policy digest values, secure storage, and object and session management in general. Note that we do not need to model the command TPM2_MakeCredential, as it does not rely on TPM internal secrets. We describe the key points of our modelling below.

  • Modelling hash chains. PCR values are the results of measurements of a platform or software state. When the command \(\texttt {TPM2\_PCRExtend}(pcr_\text {h}, v)\) is executed, the TPM extends the contents of the PCR referenced by \(pcr_\text {h}\) as \(pcrDigest = H(pcrDigest \parallel v)\). Applying \(\texttt {TPM2\_PCRExtend}\) iteratively, a single digest is obtained, representing the sequence of measurements of the platform state. An object can be sealed to a PCR value, meaning that it can only be accessed when the PCR has been appropriately extended to that particular value. As such, access to the object is in fact granted by comparing a digest value stored in the object blob with the PCR hash value. However, the intended semantics is that the object can only be accessed after a certain number of PCR extensions have been executed with the corresponding specified values. Therefore, in our idealized approach, we need an alternative description of the PCR extension that captures this intended semantics. The idea is to keep a record of all PCR extensions in a list \(pcr_\text {list}\), as global, mutable state, appending each new digest provided by the client to this list. More concretely, the idealization of the process of extending a value \(v_n\) to \(pcr_\text {list}\) translates to:

    $$ \text {append}(v_n, pcr_\text {list}) \Rightarrow pcr_\text {list} = (\underbrace{v_1, v_2, \ldots , v_{n - 1}}_{\text {previous } pcr_\text {list}}, v_n) $$

    Then, the adversary cannot gain access to a TPM object that is sealed to a PCR unless he extends \(pcr_\text {list}\) with the corresponding values in the intended order, and not merely with any combination of extensions leading to the same hash value. We apply the same idea to policy digests, which consist of hash chains similar to PCRs.

  • Usage of equational theories. We model cryptographic equational theories as usual; e.g., for signature generation and verification we use the Tamarin built-in function symbols pk/1, sign/2 and verify/3, satisfying \(\mathrm {verify}(\mathrm {sign}(m, k), m, pk(k)) = true\). On the other hand, the command \(\texttt {TPM2\_MakeCredential}\), discussed above, is used to protect a randomly generated value challenge with (a variation of) the encrypt-then-MAC scheme. The seed for the symmetric keys is encrypted with the public part of the EK. The command \(\texttt {TPM2\_ActivateCredential}\) retrieves the value challenge after being provided with the appropriate private keys and executing some internal checks. These commands can be abstracted by an equational theory with function symbols pk/1, makeCredential/3, activateCredential/3 and verifyCredential/3. For an object name n and a private key k it must be satisfied that:

    $$\begin{aligned} \texttt {verifyCredential}(n, k, \texttt {makeCredential}(\texttt {pk}(k), challenge, n))&= \text {true} \\ \texttt {activateCredential}(n, k, \texttt {makeCredential}(\texttt {pk}(k), challenge, n))&= challenge. \end{aligned}$$

    The first equation verifies the correctness of the blob, hence emulates the HMAC operation. The second retrieves the challenge, which models the symmetric decryption process.

  • Secure storage and object management. As a resource-constrained device, the TPM offers the possibility of offloading memory contents to the host in the form of an encrypted blob. The object can then be loaded again if appropriate authorization is provided. We idealize this functionality by assuming that the TPM has unlimited memory space, using as many memory cells as objects or sessions are required. Therefore, we do not model the command TPM2_Load, and assume that objects are readily available through their handles after invoking TPM2_Create. In a real scenario a user might want to offload memory objects (e.g., when the TPM is switched off). But if a security property holds without offloading memory objects, it will also hold when these objects are offloaded, since the adversary will have access to fewer objects. The communication to/from the TPM needs to be carefully addressed: if this channel were made fully available to the adversary, it would overestimate his capabilities, since he could detach, interact with, and reattach the TPM at will at any time. Therefore we need to implement access control (e.g., through private channels) when using some critical commands. This is discussed below.
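
The idealizations above can be mirrored almost verbatim in (hypothetical) Python, treating terms symbolically: the credential commands satisfy the two equations by construction, in contrast to their concrete encrypt-then-MAC implementation, and the PCR is a mutable list of extensions rather than a hash value:

```python
# Symbolic terms are plain Python tuples; no real cryptography is involved.
def pk(k):
    return ("pk", k)

def make_credential(ek_pub, challenge, name):
    return ("makeCredential", ek_pub, challenge, name)

def verify_credential(name, k, blob):
    # Emulates the HMAC check: the blob must be bound to this object name
    # and to the public key matching the private key k.
    return blob[0] == "makeCredential" and blob[1] == pk(k) and blob[3] == name

def activate_credential(name, k, blob):
    # Emulates symmetric decryption: release the protected challenge.
    if not verify_credential(name, k, blob):
        raise ValueError("credential blob rejected")
    return blob[2]

# Idealized PCR: a global, mutable list of extensions. An object sealed to
# a PCR state is accessible only if exactly that sequence of values has
# been extended, in that order.
pcr_list = []

def pcr_extend(v):
    pcr_list.append(v)

def sealed_access_granted(required_sequence):
    return pcr_list == list(required_sequence)
```

Note how sealed_access_granted checks the ordered sequence of extensions directly, capturing the intended semantics without reasoning about hash-chain collisions.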

Fig. 3. Tamarin templates for sending/receiving TPM commands

iii. Model the application-specific scenario using the idealized device. As noted above, in our model we aggregate the PCA and the SP servers depicted in Fig. 1 into a single Server process. The main purpose is to capture the communication between the three processes (Client, TPM, Server) in a way that replicates the real-world interactions and modes of operation as closely as possible, according to the proposed adversarial model. Modelling the channel between the Client and the TPM as a “standard” private channel in the applied pi-calculus caused various technical difficulties for the tool in completing the proofs. For this reason, we have considered several strategies in SAPiC that allow the adversary to observe, but not interfere with, that channel:

  • Tamarin restrictions, used to limit the capabilities of the adversary. We use the templates for sending/receiving TPM commands depicted in Fig. 3, together with the following restriction on the execution traces:

    figure a

    This ensures that in order for a TPM to receive and process a message, an (injective) TPM call must have been executed by the Client process. This forbids an external adversary from calling the TPM arbitrarily, but still allows an internal adversary (e.g., malware) access to the TPM.

  • Usage of the public channel. Whenever possible, we output the response of the TPM to the public channel (e.g., when it produces public keys). To some extent, this may overestimate the capabilities of the adversary, as it assumes that he can always listen to the communication channel between the TPM and the Client. Still, the adversary does not gain access to internal secrets such as the EK or the private parts of objects. That is, in most cases, we can assume that the output of the TPM is available to the adversary and still prove the security properties we are interested in.

  • Direct usage of multiset rewrite rules. The SAPiC calculus has an advanced feature that allows direct access to the multiset rewrite system of Tamarin. To emulate asynchronous message transfers between the Client and the TPM, we apply the following updates to the templates of Fig. 3:

    figure b

    The state fact CmdNameIn(tpm_send_cmd) is produced by the Client, consumed by the TPM process, and never available to the adversary. A similar approach can be devised for the TPM outputs, i.e., the TPM produces a state fact that is consumed by the Client process.
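As an illustration, the access-control restriction of the first strategy and the state-fact handover just described could be sketched as follows. The event and fact names TPMCall and TPMReceive are our own placeholders, not necessarily those of the actual model.

```
// Every message processed by the TPM must stem from a prior
// call issued by the Client process.
restriction tpm_access_control:
  "All cmd #i. TPMReceive(cmd) @ #i
     ==> Ex #j. TPMCall(cmd) @ #j & #j < #i"

/* Asynchronous handover via embedded multiset rewrite steps in SAPiC:
   Client side:  []              --[TPMCall(cmd)]->    [CmdNameIn(cmd)]; ...
   TPM side:     [CmdNameIn(cmd)] --[TPMReceive(cmd)]-> [];              ... */
```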

Source Code. Our model is split into three parts, corresponding to the three phases of the protocol from Sect. 4, namely AK certification, TLS certification and attestation. The SAPiC models are available at [10].

6 Verification

We address our last step (Step iv.) by describing a number of high-level security properties defined for the protocol presented in Sect. 4. These properties are expressed as first-order logic formulas using Tamarin lemmas [4], focusing on integrity, confidentiality and attestation in the different phases. The notation “\(\text {{Ev}}(p_1,\dots ,p_n)@t\)” below stands for an execution of the event named “\(\text {{Ev}}\)” with parameters \(p_1, \dots ,p_n\), executed at time t.

Sanity-Check Properties. We model a number of properties to guarantee the existence of execution traces that reach the end of each possible branch in the protocol. While these reachability properties do not encode any particular security guarantee, they are required to ensure a non-trivial verification of the remaining correspondence properties: a property of the form \(\text {{Event2}}\Rightarrow \text {{Event1}}\) is trivially satisfied if Event2 is never reached. Therefore, for a given party A we define a number of events AFinish\(\_i\)(), where i ranges over the branches of the process that defines party A. We then define the following collection of exists-trace lemmas (one for each branch):

figure c
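For instance, a sanity-check lemma for the first branch of party A could take the following generic shape (a sketch, using the event names from the text):

```
lemma reachable_a_finish_1: exists-trace
  "Ex #t. AFinish_1() @ #t"
```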

Availability of Keys at Honest Processes. We define a number of properties ensuring that all honest parties initially have access to the required trusted key material, so that they can build a chain of trust and successfully complete each phase of the protocol. This property can be treated as a premise in our models. We define the event \(\text {{HasKey}}(label, pid, k)\), where label is the key identifier (e.g., ‘EK’, ‘AK’), pid is a unique process identifier, and k is the key value. This event is launched at the start of each entity process. Therefore, we require:

figure d
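One possible encoding of such a premise, here sketched for the EK with a hypothetical process-start event StartProcess (not part of the original model), is:

```
lemma ek_available:
  "All pid #i. StartProcess(pid) @ #i
     ==> Ex ek #j. HasKey('EK', pid, ek) @ #j"
```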

Key Freshness. This property ensures that the created key material is not reused as a new key during the execution of the protocol. This applies both to asymmetric keys (e.g., the AK public and private parts) and to symmetric keys (e.g., session keys). We define the event \(\text {{GenerateKey}}(label, tid, k)\), where label is the key identifier, tid identifies the thread executing the principal process, and k is the value of the generated key. The property is captured by the lemma:

figure e
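A standard way to state key freshness in Tamarin is that any two generation events for the same key value must coincide (a sketch of the generic shape):

```
lemma key_freshness:
  "All label tid1 tid2 k #i #j.
     GenerateKey(label, tid1, k) @ #i & GenerateKey(label, tid2, k) @ #j
       ==> #i = #j"
```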

Key Secrecy. This property ensures that sensitive key material, namely private or symmetric session keys, is not available to the adversary. Similarly, we define the event \(\text {{SecretKey}}(label, k)\), where label is the key identifier and k represents its value. This event is launched after the corresponding key material has been created in each process. This is modeled through the lemma:

figure f

where \(K(\dots )\) is the event fact denoting adversary knowledge.
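In its generic form, such a secrecy lemma reads (sketch):

```
lemma key_secrecy:
  "All label k #i. SecretKey(label, k) @ #i
     ==> not (Ex #j. K(k) @ #j)"
```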

Authentication. We consider the agreement property from Lowe’s hierarchy [16] on the parameters exchanged in each phase of the protocol. Whenever possible (e.g., for a session key), we also require injective, mutual agreement. As customary, we encode this property through the events \(\text {{ARunning}}(id_A, id_B, pars)\) and \(\text {{BCommit}}(id_B, id_A, pars)\), where the former is placed in the party A being authenticated, and the latter in the party B to whom the authentication is being made. The placement of these events has some flexibility, but not all placements are correct: the \(\text {{ARunning}}\) event must be placed before party A sends its last message, and the \(\text {{BCommit}}\) event must be placed after party B receives and verifies the last message sent by A. The variable pars captures the set of parameters being agreed upon. The corresponding lemma has the following form in its injective version:

figure g

In case the agreement is mutual, we require an additional lemma where the roles of A and B are reversed.
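The generic shape of such an injective agreement lemma, following the usual Tamarin encoding of Lowe-style agreement, is (sketch):

```
lemma injective_agreement:
  "All idA idB pars #i. BCommit(idB, idA, pars) @ #i
     ==> (Ex #j. ARunning(idA, idB, pars) @ #j & #j < #i
           & not (Ex idB2 #i2. BCommit(idB2, idA, pars) @ #i2
                    & not (#i2 = #i)))"
```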

Transfer of Information as Generated. Such lemmas ensure that information generated at different stages of the protocol, such as certificates or attestation reports, is received at the destination process exactly as generated by the process of origin. We model this property through the events \(\text {{GenerateValue}}(label, v)\) and \(\text {{ReceiveValue}}(label, v)\), executed at the sender and receiver side, respectively:

figure h
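A generic lemma of this kind states that every received value was previously generated (sketch):

```
lemma transfer_as_generated:
  "All label v #i. ReceiveValue(label, v) @ #i
     ==> Ex #j. GenerateValue(label, v) @ #j & #j < #i"
```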

No Reuse of Key. This property ensures that a specific key is used only once in its intended context; it is mainly intended for single-use session keys. We therefore define the event \(\text {{UseKey}}(label, k)\) and the lemma

figure i
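Such a single-use property is typically stated by forcing any two usage events for the same key to be the same event (sketch):

```
lemma no_key_reuse:
  "All label k #i #j.
     UseKey(label, k) @ #i & UseKey(label, k) @ #j ==> #i = #j"
```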

No Attestation of Corrupted Client. This property signifies that a Client that is in a corrupted (untrusted) state will not be able to perform a successful attestation. Note that the PCR value of the Client’s TPM will be extended to the expected reference value only if its configuration is in a trusted state. SAPiC offers a non-deterministic branching construct, which we use to model the fact that the Client might be in a trusted or untrusted state, while the remaining parties in the model do not know a priori which one. We define the event \(\text {{Corrupted}}(id_A)\), which is launched at the beginning of the corrupted branch, and reuse the event \(\text {{ServerCommit}}(id_B, id_A, pars)\), launched after a successful verification of the Client attestation report. We express this property as follows:

figure j
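Since the Corrupted event marks the start of the untrusted branch, a generic encoding simply forbids traces that contain both events for the same Client (sketch):

```
lemma no_attestation_of_corrupted_client:
  "not (Ex idA idB pars #i #j.
          Corrupted(idA) @ #i & ServerCommit(idB, idA, pars) @ #j)"
```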

Table 2 summarizes the results of our analysis using SAPiC/Tamarin: the objects to which each property relates, and the number of proof steps taken to prove the corresponding lemmas. The simulations were executed in a VM with 3 cores and 4 GB RAM, on an Intel(R) Core(TM) i5-4570 @ 3.20 GHz. The tool successfully verified all properties as expected, showing the feasibility of our approach in abstracting the functionalities of the TPM as a RoT for reporting.

Table 2. Summary of verified properties in SAPiC/Tamarin

7 Conclusion

In this paper, we introduce a new formal verification methodology whose focus is to idealize the internal functionalities of the RoT in such a way that the cryptographic actions carried out by the RoT are excluded and replaced with non-cryptographic abstractions. This idealized model of cryptography allows for a more effective verification process, especially when complex protocols and extensive use-case scenarios are considered. We formalized the notion of secure remote attestation towards trust-aware service graph chains in “Systems-of-Systems,” and verified Tamarin lemmas showing that our models satisfy the three key security properties that entail secure remote attestation and execution: integrity, confidentiality, and secure measurement. Furthermore, in order to model this service, we also considered additional TPM processes, such as the creation of TPM keys, the Enhanced Authorization (EA) mechanism, the management of the Platform Configuration Registers (PCRs), and the creation and management of policy sessions. We argue that the included TPM commands cover a wide range of TPM-based applications and can hence serve as a baseline for modelling additional TPM functionalities in various application domains. Finally, we believe that our methodology of idealizing the cryptography can serve as an extensible verification methodology that enables rigorous reasoning about the security properties of a RoT in general.