
1 Introduction

Reputation systems provide valuable information about previous transactions and are popular tools to measure the trustworthiness of interacting parties. This measurement relies on the existence of a large number of ratings for one specific subject. But in most practical applications the process of rating reveals, besides the actual rating, a lot of information about the rater. Providers of reputation systems use this information in many different ways that are not necessarily desired by the users, e.g. for profiling users. Moreover, users can feel compelled to rate “dishonestly/benevolently” when they fear negative consequences from negative ratings. Therefore, it is important that the process of rating does not reveal more information than the actual rating. Besides that, reputation systems need to be protected against various attacks to provide trustworthy, reliable and honest ratings. These attacks include self-rating attacks (also known as self-promoting attacks), Sybil attacks, whitewashing attacks, bad mouthing attacks, ballot stuffing attacks, and value imbalance attacks. Both the privacy concerns and the prevention of attacks are discussed frequently in the literature, e.g. [1, 8, 13, 17, 20, 21, 23, 24, 26, 27], although they are not considered simultaneously.

Further important security properties for reputation systems are anonymity, (public) linkability, traceability, and non-frameability, as discussed in [1, 6, 13, 27]. Anonymity means that ratings of honest users are indistinguishable, whereas public linkability requires that anyone can decide whether or not two ratings for the same product were created by the same user. Ratings also need to be traceable: the identity of any rater can be determined by a designated System Manager. Complementing this, non-frameability guarantees that honest parties are not blamed for having rated some product when they did not. The combination of traceability and non-frameability enables penalizing dishonest behavior.

All previously mentioned works consider reputation systems in isolation, although reputation systems are always used in combination with other applications. In such situations stand-alone security definitions, as in [6], do not guarantee security. The Universal Composability (UC) Framework [9] provides a methodology that guarantees security even for composed applications. Informally, in UC the execution of a real-life protocol is compared to the execution of an ideal protocol. If the real-life and ideal protocol executions are indistinguishable, then the real-life protocol is UC-secure. Based on this security definition, Canetti [9] formulates a composition theorem which states that any UC-secure protocol remains secure when it is composed with other protocols.

Our Contribution. We present an ideal functionality for reputation systems \(\mathcal {F}_{\mathsf {RS}}\) in the Universal Composability Framework [9]. Our ideal functionality prevents all previously mentioned attacks and provides anonymity, public linkability, traceability, and non-frameability. In contrast to [6], users can rate each other's products; there is no separation of customers and providers.

Besides defining an ideal functionality we present an efficient protocol for reputation systems that realizes \(\mathcal {F}_{\mathsf {RS}}\). This protocol is influenced by different techniques known from \(\varSigma \)-protocols [16] and (dynamic) group signatures [2,3,4, 7], similar to the scheme in [6]. However, our protocol is more efficient and more flexible than the scheme in [6], and it is secure even under concurrent composition (UC-secure).

2 The Ideal Functionality for Reputation Systems

In the first part of this section, we give some intuition for our ideal functionality of a reputation system \(\mathcal {F}_{\mathsf {RS}}\). The second part concerns the formal definition of \(\mathcal {F}_{\mathsf {RS}}\) in the Universal Composability Framework [9]. We discuss the functionality and its security properties in the third part of the section.

Intuition for Our Reputation System. A meaningful reputation system must provide trustworthy, reliable, and honest ratings. Furthermore, it should be flexible in the sense that it can be combined with many different applications. Therefore, we focus on the process of secure rating and provide a scheme that can be combined with any high-level application. For this reason, the aggregation of ratings and the evaluation of a specific reputation function are excluded from our model. Specifically, we treat the actual rating message as a placeholder for the higher-level application.

We consider reputation systems where users within the system can rate each other's products. The term product refers to anything that can be used as a basis for ratings. Each user in our system has to register once at a System Manager before a product can be rated. This prevents Sybil attacks, whitewashing attacks, bad mouthing attacks, and ballot stuffing attacks, and gives the System Manager the ability to punish misbehaving users. For this to work, the system must prevent users from registering under different identities. Users who do not want to rate products need not register - publishing products and verifying ratings is independent of the registration, which increases trust in the system. Analogously to registration, a product must be purchased prior to rating. This requirement ensures that ratings are only given by raters who use the product. It is also a protection mechanism against value imbalance attacks.

To further increase trust in the reputation system, raters must be able to rate purchased products anonymously. Without anonymity, raters may tend to rate dishonestly when they fear negative consequences from the product owner. At the same time, a product owner must be protected against unjustified negative ratings. This is achieved by giving the System Manager the ability to revoke the anonymity of a rater. Of course, the System Manager must not be able to accuse an honest user of having misbehaved.

The negative side-effects of anonymity are that self-ratings, i.e. ratings for a product from the product owner, are hard to prevent and that a single rater who purchased a product could rate this product multiple times. Therefore we require a reputation system to explicitly forbid self-ratings and to provide linkable ratings: everybody - even outsiders of the system - must be able to detect multiple ratings from the same user for the same product.

As pointed out above, the security requirements a reputation system has to fulfill include - but are not limited to - anonymity for raters, unforgeability and public linkability of ratings, and the ability to determine the raters’ identity. These properties have already been studied in the simpler context of group signatures [2,3,4, 7, 18]. However, reputation systems have more security requirements than group signatures, as they do not consist of a single group of users. Instead, a reputation system can be seen as a collection of multiple group signature schemes - one for each product. Moreover, a single user may offer several products. Hence, in the definition of the security properties the different group signature schemes must be considered in conjunction. Therefore, we adapt and extend these notions and give our formal definition of a secure reputation system in the Universal Composability Framework [9]. This framework guarantees security even for concurrently composed protocols. Stand-alone security definitions do not provide these strong guarantees, which are very important for our reputation system, as we intend it to be combined with other applications.

In addition to the experiment-based security definitions for reputation systems [6] and group signatures [3, 4], our ideal functionality \(\mathcal {F}_{\mathsf {RS}}\) is influenced by the ideal functionalities for digital signatures \(\mathcal {F}_{\mathsf {SIG}}\) [10], public-key encryption \(\mathcal {F}_{\mathsf {PKE}}\) [9], and group signatures [2].

The Universal Composability Framework. In contrast to stand-alone security definitions (both experiment-based and simulation-based), the Universal Composability Framework, introduced by Canetti [9], provides security under concurrent composition of different applications. To achieve this strong security notion, the execution of a real-life protocol is compared to the execution of an ideal protocol. Both protocol executions are controlled by an environment \(\mathcal {Z}\) that tries to distinguish whether it interacts with the real-life protocol or the ideal protocol.

The ideal protocol is described by an ideal functionality \(\mathcal {F}\) that handles every (cryptographic) task as a trusted party and interacts with an ideal adversary \(\mathcal {S}\) (also called a simulator) and all parties involved in the protocol. Every party hands its inputs from the environment securely to \(\mathcal {F}\). Then \(\mathcal {F}\) computes the party’s output and sends it back to the party. Whenever a party receives a message from \(\mathcal {F}\), the party outputs this message directly to the environment. The ideal adversary \(\mathcal {S}\) may corrupt some parties and can block the delivery of messages from \(\mathcal {F}\) to a party. The inputs a party hands to \(\mathcal {F}\) cannot be seen by \(\mathcal {S}\). In the real-life execution all parties compute their outputs by running the defined protocol. Analogously to \(\mathcal {S}\), a real-life adversary \(\mathcal {A}\) may corrupt parties within the real-life protocol execution.

We say that the real-life protocol UC-realizes the ideal protocol, if no environment can distinguish an interaction with the real-life protocol and \(\mathcal {A}\) from an interaction with the ideal protocol and \(\mathcal {S}\). Based on this security definition Canetti [9] formulates a composition theorem which states that any UC-secure protocol is also secure when it is executed concurrently with other protocols.

For our proof of security we will consider black-box simulators \(\mathcal {S}\), denoted by \(\mathcal {S}^{\mathcal {A}}\), that have black-box access to real-life adversaries \(\mathcal {A}\). We also consider a model with ideally authenticated channels, meaning that an adversary is able to read the messages sent but is unable to modify them. We refer to this communication model as the authenticated channels assumption.

2.1 The Formal Definition of \(\mathcal {F}_{\mathsf {RS}}\)

Our ideal functionality interacts with the parties \(P_{\mathrm {IDM}}, P_1, P_2, \ldots , P_n\) and an ideal adversary \(\mathcal {S}\), which is also called a simulator. The party \(P_{\mathrm {IDM}}\) acts as the System Manager, whereas the parties \(P_i\) correspond to the users within the reputation system. Furthermore, \(\mathcal {F}_{\mathsf {RS}}\) manages the lists \(\mathfrak {Params}\), \(\mathfrak {Reg}\), \(\mathfrak {Prods}\), \(\mathfrak {Purch}\), \(\mathfrak {Ratings}\), and \(\mathfrak {Open}\) to store important information. Before giving the formal definition of \(\mathcal {F}_{\mathsf {RS}}\), we explain how these lists are used. We also introduce the notation needed in the definition of \(\mathcal {F}_{\mathsf {RS}}\).

  • \(\mathfrak {Params}\): This list stores all pairs of the form \((P_{\mathrm {IDM}}, pp )\) containing public parameters the simulator \(\mathcal {S}\) gives to \(\mathcal {F}_{\mathsf {RS}}\) during KeyGen-requests. The first component of a pair is fixed to \(P_{\mathrm {IDM}}\), whereas the second component represents the actual parameters given by \(\mathcal {S}\).

  • \(\mathfrak {Reg}\): The list \(\mathfrak {Reg}\) stores pairs of the form \(( pp , P_i)\) containing registration information. The first component stores the public parameters the registered party used in the Register-protocol, whereas the second component is the registered party.

  • \(\mathfrak {Prods}\): All products that are used within the reputation system are stored as 4-tuples \((P_i\), \( prod \), \( ppk \), b) in the list \(\mathfrak {Prods}\). The first component of a tuple declares the product owner, the second is a product identifier (a bitstring chosen by the environment), the third specifies the corresponding product-public key and the fourth component is a validity bit. There can exist different products with the same product identifier, but for different product owners. The validity bit indicates whether the product-public key matches the given product owner and the product identifier.

  • \(\mathfrak {Purch}\): When some party successfully purchased a product, this information is stored as 4-tuple \((P_i, P_j, prod , ppk )\) in the list \(\mathfrak {Purch}\). For every tuple in the list the first component represents the purchaser, whereas the other components determine the product that was purchased (the product owner, the product identifier and the product-public key).

  • \(\mathfrak {Ratings}\): The list \(\mathfrak {Ratings}\) stores the most complex information as 10-tuples of the form \(( pp \), \(P_i\), \(P_j\), \( prod \), \( ppk \), m, \(\sigma \), b, \( lid \), \( oid )\). The components of each tuple represent the following information:

    1. \( pp \) - the public parameters a rating is generated for,

    2. \(P_i\) - the identity of the rater (\(( pp , P_i)\) should match an entry in \(\mathfrak {Reg}\)),

    3. \(P_j\) - the product owner of the product the rating is generated for,

    4. \( prod \) - the product identifier of the product the rating is generated for,

    5. \( ppk \) - the product-public key of the product the rating is generated for (the tuple \((P_i\), \(P_j\), \( prod \), \( ppk )\) should match an entry in \(\mathfrak {Purch}\)),

    6. m - the rating message (a placeholder for high-level applications),

    7. \(\sigma \) - the rating,

    8. b - the validity bit (indicating whether the rating is valid),

    9. \( lid \) - the linking-class identifier, which is managed by the algorithm RebLDB, and

    10. \( oid \) - the opening-proof identifier.

    The linking-class identifier is needed to model the linkability property: two ratings with the same linking-class identifier have the same author. The opening-proof identifier binds a list of opening-proofs to a specific rating. Whenever a new rating is added to the list \(\mathfrak {Ratings}\), \(\mathcal {F}_{\mathsf {RS}}\) uses the current value of a global counter \( lidc \) as the linking-class identifier and increments the counter. The subsequent execution of RebLDB ensures that the rating is put into the correct linking-class, according to the linkability-relation. A more detailed explanation of this behavior and the \( oid \)-mechanism is given in the discussion of the security properties of \(\mathcal {F}_{\mathsf {RS}}\).

  • \(\mathfrak {Open}\): This list stores all opening-proofs as 4-tuples of the form \(( oid , \tau , b, P)\). The first component is an opening-proof identifier that binds a tuple to a specific rating with the same identifier. The second component is the actual opening-proof. The third component is a validity bit indicating whether the proof is valid, and the fourth component is the party claimed to be the author of the associated rating. The value \( oid = \bot \) within a rating expresses that the rating has not been opened yet and hence no opening-proof exists. To uniquely bind opening-proofs to ratings, a global counter \( oidc \) is used and incremented whenever a new opening-proof is bound to an unopened rating.

To manipulate the described lists, we introduce two operations:

  • adding a tuple v to a list L is expressed by \(L.\mathsf {Add}(v)\), and

  • substituting a tuple \(v_{\mathrm {old}}\) with a tuple \(v_{\mathrm {new}}\) is expressed by \(L.\mathsf {Sub}(v_{\mathrm {old}}, v_{\mathrm {new}})\).

Substituting a tuple \(v_{\mathrm {old}}\) means that this tuple is removed from the list, while the tuple \(v_{\mathrm {new}}\) is added to the list.

The classical notation for addressing components of tuples uses indices, i.e. \(v = (v_1, v_2, \ldots , v_n)\), where \(v_i\) is the i’th component of tuple v. We deviate from this notation to prevent confusion with different variables and address the i’th component of a tuple v by v[i].
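For illustration, the following minimal Python sketch (our own; it is not part of the formal definition, and all names are hypothetical) shows how the list operations \(\mathsf {Add}\) and \(\mathsf {Sub}\) and the component access v[i] could be realized:

```python
# Sketch of the list bookkeeping used by F_RS (illustration only).

class FList:
    """A list of tuples supporting the Add and Sub operations."""

    def __init__(self):
        self.entries = []

    def Add(self, v):
        self.entries.append(v)

    def Sub(self, v_old, v_new):
        # substitution: v_old is removed, v_new is added
        self.entries.remove(v_old)
        self.entries.append(v_new)


def comp(v, i):
    """Return v[i], the i'th component of tuple v (1-based, as in the text)."""
    return v[i - 1]


Ratings = FList()
# a 10-tuple (pp, P_i, P_j, prod, ppk, m, sigma, b, lid, oid) as described above
Ratings.Add(("pp", "P_1", "P_2", "prod", "ppk", "msg", "sigma", 1, 0, None))
assert comp(Ratings.entries[0], 9) == 0  # lid is the 9th component
```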

Remark 1

(Technical Details of \(\mathcal {F}_{\mathsf {RS}}\)). Whenever \(\mathcal {F}_{\mathsf {RS}}\) misses some information, the symbol \(\bot \) is used to highlight this fact. The simulator \(\mathcal {S}\) can also output this symbol at some points to indicate that it is not able to respond to a request. Depending on the situation, this is not necessarily a failure.

To avoid repeating code, we introduce the internal activations \(\mathsf {VfyProd}\), \(\mathsf {VfyRtg}\), \(\mathsf {LinkRtgs}\), and \(\mathsf {RebLDB}\). These activations are used only by \(\mathcal {F}_{\mathsf {RS}}\) as internal subroutines and are not callable by parties or adversaries.

The activations for user registration (\(\mathsf {Register}\)) and purchasing a product (\(\mathsf {Purchase}\)) generate outputs to multiple parties. Although this mechanism is rarely used in the UC framework, another example of this technique can be found in the definition of homomorphic UC commitments \(\mathcal {F}_{\mathrm {HCOM}}\) by Damgård et al. [15].

With these prerequisites we now give the formal definition of \(\mathcal {F}_{\mathsf {RS}}\).

[Figures a–e: the formal definition of \(\mathcal {F}_{\mathsf {RS}}\).]

Security Properties of \(\varvec{\mathcal {F}_{\mathsf {RS}}}\) . Like many other ideal functionalities in the UC framework, \(\mathcal {F}_{\mathsf {RS}}\) works as a “registry service” that stores parameters, ratings, and opening-proofs. Using the right parameters, every party is able to check whether ratings and opening-proofs are stored by \(\mathcal {F}_{\mathsf {RS}}\). In all activations, \(\mathcal {F}_{\mathsf {RS}}\) lets the simulator \(\mathcal {S}\) choose the values needed to respond to the activation. The requirements on these values are defined as restrictions for each activation. In the following, we discuss these restrictions and the implied security properties.

  • Registry Key Generation: Similar to the Signature Functionality \(\mathcal {F}_{\mathsf {SIG}}\) [10] and the Public-Key Encryption Functionality \(\mathcal {F}_{\mathsf {PKE}}\) [9], we do not make any security relevant requirements on the public parameters \( pp \).

  • User Registration: Being registered is a prerequisite to rate a product and covers the first step to prevent Sybil attacks, whitewashing attacks, bad mouthing attacks, and ballot stuffing attacks. The user registration models an interactive protocol between \(P_{\mathrm {IDM}}\) and some party \(P_i\). In general, \(\mathcal {F}_{\mathsf {RS}}\) lets the simulator \(\mathcal {S}\) decide whether party \(P_i\) successfully registered, with the following two restrictions: non-registered honest parties communicating with an honest \(P_{\mathrm {IDM}}\) using the right public parameters will always be registered after the protocol execution (\(b = 1\)), and an honest \(P_{\mathrm {IDM}}\) will reject a registration attempt when wrong parameters are used (\(b = 0\)).

  • Product Addition and VfyProd : The NewProduct-activation is used by party \(P_i\) to publish a new product-public key \( ppk \) for a given product \( prod \in \{0,1\}^*\). The value \( ppk \) is bound to the bitstring \( prod \) and to the party requesting it, such that every party can validate the ownership of a product. Formally, this means that a product-public key is only valid for one specific pair \((P, prod )\). This is a very important requirement, because it models unforgeability of product-public keys. Without this property any corrupted party \(P_j\) could “copy” some \( ppk \) (that was generated by an honest party \(P_i\)) and declare foreign ratings as its own: all valid ratings for \((P_i, prod , ppk )\) would also be valid for \((P_j, prod ', ppk ')\). Since we want a reliable, trustworthy, and fair system, such attacks must be prevented. We emphasize that VfyProd is modeled as an internal subroutine within \(\mathcal {F}_{\mathsf {RS}}\) and is implicitly used in other activations.

  • Purchase: Another prerequisite to rate a product is to purchase it. This is necessary to prevent value imbalance attacks. The purchasing protocol is an interactive protocol between two parties: the seller \(P_j\) and the purchaser \(P_i\). Naturally, before a product is purchased its corresponding product-public key is verified; only if this key is valid will the protocol be executed. For two honest parties the purchasing process will finish successfully, whereas the simulator \(\mathcal {S}\) determines the outcome of the protocol execution in any other case.

  • Rating a Product: When party \(P_i\) wants to rate the product \( prod \) with public key \( ppk \) owned by party \(P_j\), \(P_i\) must be registered, must have purchased the specified product, and must not have rated the product before. Being registered is necessary to open ratings, whereas having purchased the product enables rating verifiers to detect self-ratings, bad mouthing attacks, and ballot stuffing attacks. In the case that \(P_{\mathrm {IDM}}\) is honest, \(\mathcal {F}_{\mathsf {RS}}\) guarantees anonymity of raters: the simulator \(\mathcal {S}\) is asked to output a rating \(\sigma \) that is valid for the specified product, without knowing the rating party. Hence, the output rating cannot depend on the rater’s identity. In the case that \(P_{\mathrm {IDM}}\) is corrupted, the simulator \(\mathcal {S}\) obtains the identity of the rater, because in this case anonymity cannot be achieved.

  • Rating Verification and Determining the Raters’ Identity: Given the right parameters, every rating can be verified. Note that ratings are only verified if the specified product is valid. A valid rating guarantees the following properties, even for maliciously generated ratings:

    • Non-Self-Rating: the rater is not the owner of the product.

    • Linkability: the rater purchased the product (will be discussed later in detail).

    • Traceability: the rater is registered and can be identified.

    Every single property is crucial for trustworthy reputation. If self-ratings were not prevented, ballot stuffing attacks would be possible. The same holds for linkability, which will be discussed in detail later. Being able to open ratings is also very important in practical applications, because otherwise misbehaving parties cannot be identified and punished. Hence, it must be guaranteed that honest parties are not blamed for having rated some product when they did not. This property is called non-frameability and is discussed in detail later. \(\mathcal {F}_{\mathsf {RS}}\) not only asks the simulator \(\mathcal {S}\) to validate a rating, but also to determine the rater’s identity. This models the ability of \(P_{\mathrm {IDM}}\) to open every rating, not only those for which an Open-request occurs. Furthermore, it simplifies the definition of \(\mathcal {F}_{\mathsf {RS}}\) without weakening the security properties, because VfyRtg encapsulates all important characteristics of a valid rating in a single and reusable procedure.

  • Linking Ratings and RebLDB : For every party using a reputation system it is important to know whether two valid ratings for the same product are generated by the same party. If this is true, the rater behaved dishonestly. We call this property linkability, which prevents bad mouthing attacks and ballot stuffing attacks. Linkability represents an equivalence relation: \(\mathsf {Link}(x, x) = 1\), \(\mathsf {Link}(x, y) = \mathsf {Link}(y, x)\) and \(\mathsf {Link}(x, y) = 1 \wedge \mathsf {Link}(y, z) = 1 \Rightarrow \mathsf {Link}(x,z) = 1\). The value \( lid \) stored by \(\mathcal {F}_{\mathsf {RS}}\) for every rating represents the equivalence class the rating belongs to. Initially, \( lid \) is set to the current value of a global counter \( lidc \). The linking-class identifiers are updated by the RebLDB algorithm whenever a new rating is added to the list \(\mathfrak {Ratings}\) (via Rate and Verify) or new linking information is obtained (via Link and Judge). This algorithm is only for internal use and not callable by any party. The RebLDB-algorithm merges two equivalence classes in the following cases:

    • Step 2 covers calls to the algorithm from Rate, Verify, and Judge (\(s = \bot \)), where \(P_{\mathrm {IDM}}\) is not corrupted and/or \(X_1\) is an uncorrupted rater (\(X_1 \ne \bot \)). In these cases RebLDB selects all valid ratings for the specified product from the same rater \(X_1\) (the set \(\mathcal {L}\)) and sets the value \( lid \) (\(\ell [9]\) for \(\ell \in \mathcal {L}\)) for all ratings in \(\mathcal {L}\) to the minimal value within the selected ratings.

    • Step 5 handles requests from Link where either the identity of the rater is not known but the simulator \(\mathcal {S}\) tells \(\mathcal {F}_{\mathsf {RS}}\) that these ratings are linkable (Step 6 of Link), or the identity of some corrupted party can be updated for some rating because it is linkable to another rating for which \(\mathcal {F}_{\mathsf {RS}}\) already knows the identity (Step 9 in Link). According to the transitivity of the linkability relation, RebLDB merges the two equivalence classes into one class by selecting all ratings within the two classes (Step 9) and setting \( lid \) to be the smaller of both values. Additionally, if a party identity is given in \(X_1\) or \(X_2\), this value will be set for all ratings within the equivalence class (Step 10).

    • In Steps 11–18 RebLDB verifies that there do not exist more equivalence classes for an honestly generated product than there are purchases of this product. This ensures that after purchasing it is only possible to rate a product once without being linkable. When \(P_{\mathrm {IDM}}\) is corrupted, it is possible that no linking information is available to \(\mathcal {F}_{\mathsf {RS}}\). In this case \(\mathcal {F}_{\mathsf {RS}}\) asks the simulator \(\mathcal {S}\) to link all ratings for the product in question. Without this step a simple attack is possible:

      \(\bullet \) \(\mathcal {Z}\) lets the real-world adversary \(\mathcal {A}\) corrupt \(P_{\mathrm {IDM}}\) and some party \(P_i\), lets \(P_i\) purchase some product from an honest party \(P_j\), generates multiple valid ratings for this product and verifies them.

      \(\bullet \) In this scenario \(\mathcal {F}_{\mathsf {RS}}\) adds the ratings to \(\mathfrak {Ratings}\) during the Verify-protocol, which in turn calls RebLDB. Since no linking information is available to \(\mathcal {F}_{\mathsf {RS}}\), without Step 13 \(\mathcal {F}_{\mathsf {RS}}\) would output error, even when all ratings are linkable. Hence, no protocol could realize \(\mathcal {F}_{\mathsf {RS}}\).

      If after Step 13 there are still more equivalence classes than purchases, this violates the security requirements of \(\mathcal {F}_{\mathsf {RS}}\).

    Summarizing, the handling of equivalence classes is modeled by the RebLDB-algorithm, which uses linking information obtained from the algorithms Rate, Verify, Link, and Judge; a minimal code sketch of this merging step is given after this list.

  • Generating and Verifying Opening-Proofs: Opening-proofs are values that enable every party to verify that a blamed party is really the author of a given rating. This covers the property of non-frameability: no honest party can be accused of being the author of a rating it did not create. \(\mathcal {F}_{\mathsf {RS}}\) asks the simulator \(\mathcal {S}\) to output valid opening-proofs and ignores the output of \(\mathcal {S}\) if the given rating is invalid, a wrong identity is given, or the rating has not been opened yet. Since there can be more than one valid opening-proof, the value \( oid \) is used to connect a rating with its list of opening-proofs. This mechanism ensures that an opening-proof cannot be used to determine a rater’s identity for other ratings.
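As announced above, the following Python sketch illustrates how RebLDB merges two linking-classes. It reflects our reading of Steps 9–10 (assign the smaller \( lid \) to all members of both classes and propagate a known rater identity); the dictionary representation and all names are hypothetical, and the definition in the figures remains authoritative.

```python
# Sketch of the RebLDB merge step for two linking-classes (illustration only).

def merge_linking_classes(ratings, lid_a, lid_b):
    """Unite the classes lid_a and lid_b; the smaller lid survives (Step 9)."""
    target, source = min(lid_a, lid_b), max(lid_a, lid_b)
    # a rater identity known in either class (None if both classes are anonymous)
    known = next((r["rater"] for r in ratings
                  if r["lid"] in (lid_a, lid_b) and r["rater"] is not None), None)
    for r in ratings:
        if r["lid"] == source:
            r["lid"] = target            # move into the surviving class
        if r["lid"] == target and known is not None:
            r["rater"] = known           # propagate the identity (Step 10)


ratings = [{"lid": 3, "rater": None}, {"lid": 7, "rater": "P_4"}]
merge_linking_classes(ratings, 3, 7)
assert all(r["lid"] == 3 and r["rater"] == "P_4" for r in ratings)
```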

3 Realizing \(\mathcal {F}_{\mathsf {RS}}\)

Before introducing the protocol that realizes \(\mathcal {F}_{\mathsf {RS}}\), we give the required preliminaries and building blocks in this section.

Preliminaries. Our realization relies on bilinear groups, the Symmetric External Diffie-Hellman-Assumption, and the Pointcheval-Sanders-Assumption. For completeness, we give the respective definitions in this section.

Definition 1

(Bilinear Groups). A bilinear group \(\mathbb {GD}\) is a set of three cyclic groups \(\mathbb {G}_1, \mathbb {G}_2\) and \(\mathbb {G}_T\), each group of prime order p, along with a bilinear map \(\mathrm {e}:\mathbb {G}_1 \times \mathbb {G}_2 \rightarrow \mathbb {G}_T\) with the following properties:

  1. Bilinearity: for all \(u \in \mathbb {G}_1\), \(v \in \mathbb {G}_2\) and \(a, b \in \mathbb {Z}_{p}:\) \(\mathrm {e}(u^a, v^b) = \mathrm {e}(u, v)^{ab}\).

  2. Non-degeneracy: for \(u \ne 1_{\mathbb {G}_1}\) and \(v \ne 1_{\mathbb {G}_2}:\) \(\mathrm {e}(u, v) \ne 1_{\mathbb {G}_T}\).

  3. The map \(\mathrm {e}\) is efficiently computable.

We will use pairings of Type-3 for our construction, because they allow efficient implementations and the Pointcheval-Sanders-Assumption does not hold in Type-1 and Type-2 pairing groups. Furthermore, for Type-3 pairing groups it is believed that the Decisional-Diffie-Hellman-Problem is hard in both \(\mathbb {G}_1\) and \(\mathbb {G}_2\). This assumption is often referred to as the Symmetric External Diffie-Hellman-Assumption (SXDH) [19].
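Bilinearity can also be checked experimentally, e.g. with the py_ecc library over the Type-3 curve bn128; curve and library are our choices for illustration, not prescribed by the construction:

```python
# Checking e(u^a, v^b) = e(u, v)^{ab} on the Type-3 curve bn128.
# Requires: pip install py_ecc
from py_ecc.bn128 import G1, G2, multiply, pairing

a, b = 6, 7
lhs = pairing(multiply(G2, b), multiply(G1, a))  # e(a*G1, b*G2)
rhs = pairing(G2, G1) ** (a * b)                 # e(G1, G2)^{ab}
assert lhs == rhs
```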

Definition 2

(Bilinear Group Generator). A bilinear group generator, denoted by \(\mathsf {BiGrGen}\), is a probabilistic polynomial time algorithm that, on input \(1^\lambda \), outputs a description of a bilinear group \(\mathbb {GD}\). We denote the output of \(\mathsf {BiGrGen}\) by \(\mathbb {GD}= (p\), \(\mathbb {G}_1\), \(\mathbb {G}_2\), \(\mathbb {G}_T\), \(\mathrm {e}\), \(g_1\), \(g_2)\).

Definition 3

(Pointcheval-Sanders-Problem – PS1). Let \(\mathbb {GD}= (p\), \(\mathbb {G}_1\), \(\mathbb {G}_2\), \(\mathbb {G}_T\), \(\mathrm {e}\), \(g_1\), \(g_2)\) be a bilinear group setting of Type-3, with generators \(g_1 \in \mathbb {G}_1\) and \(g_2 \in \mathbb {G}_2\). Further, let \(g \leftarrow \mathbb {G}_1\), \(\tilde{g} \leftarrow \mathbb {G}_2\), and \(X = g^x\), \(Y = g^y\), \(\tilde{X} = \tilde{g}^x\), \(\tilde{Y} = \tilde{g}^y\), for \(x, y \leftarrow \mathbb {Z}_{p}\). We define the oracle \(\mathcal {O}(m)\) as follows: on input \(m \in \mathbb {Z}_{p}\), choose \(h \leftarrow \mathbb {G}_1 \setminus \{1_{\mathbb {G}_1}\}\) and output \((h, h^{x + m\cdot y})\). Given \((g, Y, \tilde{g}, \tilde{X}, \tilde{Y})\) and unlimited access to oracle \(\mathcal {O}\), the Pointcheval-Sanders-Problem is to output a tuple \((m^*, s, s^{x + m^* \cdot y})\), where \(s \ne 1_{\mathbb {G}_1}\) and \(m^*\) was not asked to \(\mathcal {O}\).

We say the Pointcheval-Sanders-Assumption holds for bilinear group generator \(\mathsf {BiGrGen}\) if for all probabilistic polynomial time adversaries \(\mathcal {A}\) there exists a negligible function \(\mathrm {negl}\) such that

$$\begin{aligned} \Pr \left[ \mathcal {A}^{\mathcal {O}(\cdot )}\left( \mathbb {GD}, g, Y, \tilde{g}, \tilde{X}, \tilde{Y} \right) = \left( m^*, s, s^{x + m^* \cdot y}\right) \right] \le \mathrm {negl}(\lambda ), \end{aligned}$$

where the probability is taken over the random bits used by \(\mathsf {BiGrGen}\) and \(\mathcal {A}\), and the random choices of \(g\), \(\tilde{g}\), \(x\), and \(y\).
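Note that oracle answers, and likewise candidate solutions \((m^*, s, s^{x + m^* \cdot y})\), are publicly checkable: by bilinearity,

$$\begin{aligned} \mathrm {e}\left( h^{x + m\cdot y}, \tilde{g}\right) = \mathrm {e}(h, \tilde{g})^{x + m\cdot y} = \mathrm {e}\left( h, \tilde{X}\cdot \tilde{Y}^{m}\right) , \end{aligned}$$

so validity can be tested using only the public values \(\tilde{g}\), \(\tilde{X}\), and \(\tilde{Y}\). This is exactly the verification equation of Pointcheval-Sanders signatures [25].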

Building Blocks and Intuition for Our Realization. In this section we briefly introduce the building blocks of our realization and explain how they are combined to realize \(\mathcal {F}_{\mathsf {RS}}\). Due to lack of space, all formal definitions are given in the full version of this paper [5].

We use Pointcheval-Sanders Signatures (\(\mathsf {PS}= (\mathsf {KeyGen}, \mathsf {Sign}, \mathsf {Verify})\)) [25] as certificates for registration and for purchased products. We call the certificate for registration a registration token, the certificate for purchased products a rating token. To obtain such tokens every user has to prove knowledge of a self-chosen user-secret-key \( usk \). We use the concurrent zero-knowledge variant of \(\varSigma \)-protocols, which uses Trapdoor Pedersen Commitments (\(\mathsf {PD}= (\mathsf {KeyGen}, \mathsf {Commit}, \mathsf {Reveal}, \mathsf {Equiv})\)) for this purpose.

To rate a product a user has to non-interactively prove knowledge of the registration token, the rating token, and the personal user-secret for which the tokens were generated. As non-interactive proof system we use Signatures of Knowledge [12]. Opening-proofs, generated by \(P_{\mathrm {IDM}}\), are likewise non-interactive proofs of knowledge of opening tokens. These tokens are given by a user \(P_i\) to the System Manager \(P_{\mathrm {IDM}}\) during the registration protocol. In our construction it is important not to publish these tokens, because they can be used to open any rating. Hence, we encrypt opening tokens with the CCA2-secure Cramer-Shoup encryption (\(\mathsf {CS}= (\mathsf {KeyGen}, \mathsf {Enc}, \mathsf {Dec})\)) [14].
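To give a flavor of these building blocks, here is a minimal schoolbook sketch of a Schnorr-style signature of knowledge of a user-secret-key \( usk \) with public image \(U = g^{ usk }\), made non-interactive with a hash. The toy group parameters and all names are our own; the actual construction proves knowledge of the tokens and \( usk \) in the bilinear-group setting and is given in the full version [5].

```python
# Toy Schnorr-style signature of knowledge (illustration only; insecure parameters).
import hashlib
import secrets

q = 1019               # prime subgroup order
p = 2 * q + 1          # 2039, prime; g = 4 generates the order-q subgroup
g = 4


def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q


def sok_sign(usk, message):
    """Prove knowledge of usk with U = g^usk, bound to `message`."""
    U = pow(g, usk, p)
    r = secrets.randbelow(q)       # commitment randomness
    R = pow(g, r, p)
    ch = H(g, U, R, message)       # Fiat-Shamir challenge
    s = (r + ch * usk) % q
    return U, (ch, s)


def sok_verify(U, message, sig):
    ch, s = sig
    R = (pow(g, s, p) * pow(U, -ch, p)) % p   # recompute the commitment
    return ch == H(g, U, R, message)


U, sig = sok_sign(secrets.randbelow(q), "rating for (P_j, prod)")
assert sok_verify(U, "rating for (P_j, prod)", sig)
```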

The Signatures of Knowledge we use need a Random Oracle, which can be modeled as the ideal functionality \(\mathcal {F}_{\mathsf {RO}}\) [22] in the UC framework. We further need the ideal functionalities for Common Reference Strings \(\mathcal {F}_{\mathsf {CRS}}\) [11] and Certification \(\mathcal {F}_{\mathsf {CA}}\) [10]. \(\mathcal {F}_{\mathsf {CRS}}\) is needed for secure commitment schemes like the above mentioned Trapdoor Pedersen Commitments and \(\mathcal {F}_{\mathsf {CA}}\) ensures that users cannot register with different identities.

The output of \(\mathcal {F}_{\mathsf {CRS}}\) is \((\mathbb {GD}, \mathsf {PD}. pk , \mathcal {H}, \mathcal {H}_1, \mathcal {H}_2)\), where \(\mathbb {GD}\) is the output of the bilinear group generator \(\mathsf {BiGrGen}(1^{\lambda })\), \(\mathsf {PD}. pk = (u, v) \in \mathbb {G}_1^2\) is the public key of the Trapdoor Pedersen Commitment scheme, and \(\mathcal {H}:\{0,1\}^* \rightarrow \mathbb {Z}_{p}\), \(\mathcal {H}_1:\{0,1\}^* \rightarrow \mathbb {G}_1\), and \(\mathcal {H}_2:\{0,1\}^* \rightarrow \mathbb {G}_2\) are collision-resistant hash functions. We assume that every party obtains the common-reference string prior to its first activation. We write \(y \leftarrow \mathcal {F}_{\mathsf {RO}}( sid , x)\) to indicate a call to \(\mathcal {F}_{\mathsf {RO}}\) on input \(( sid , x)\) that outputs y to the calling party.

A Protocol for Realizing \(\varvec{\mathcal {F}_{\mathsf {RS}}}\) . We assume that any two parties communicate via authenticated channels. This implies that the identities of communicating parties are known to each other and that the adversary cannot modify a message’s payload.

[Figures f–i: the definition of Protocol \(\varPi _{\mathsf {RS}}\).]

Theorem 1

Under the Authenticated Channels Assumption, the SXDH-Assumption, the Pointcheval-Sanders-Assumption, and the assumption that \(\mathcal {H}, \mathcal {H}_1\), and \(\mathcal {H}_2\) are collision-resistant hash functions, Protocol \(\varPi _{\mathsf {RS}}\) UC-realizes the \(\mathcal {F}_{\mathsf {RS}}\) functionality in the (\(\mathcal {F}_{\mathsf {RO}}, \mathcal {F}_{\mathsf {CRS}}, \mathcal {F}_{\mathsf {CA}}\))-hybrid model, in the presence of static adversaries.

Due to lack of space, we only sketch the proof here. The full proof is given in the full version of this paper [5].

Proof

(Sketch). To prove Theorem 1 we have to show that for any probabilistic polynomial-time real-world adversary \(\mathcal {A}\) there exists a probabilistic polynomial-time ideal-world adversary \(\mathcal {S}\) such that for any probabilistic polynomial-time environment \(\mathcal {Z}\) it holds:

$$\begin{aligned} \Big \{\mathrm {EXEC}_{\mathcal {F}_{\mathsf {RS}}, \mathcal {S}^{\mathcal {A}}, \mathcal {Z}}(1^{\lambda }, z)\Big \}_{\lambda \in \mathbb {N}, z \in \{0,1\}^*} {\mathop {\equiv }\limits ^{c}} \Big \{\mathrm {EXEC}_{\varPi _{\mathsf {RS}}, \mathcal {A}, \mathcal {Z}}^{\mathcal {F}_{\mathsf {RO}}, \mathcal {F}_{\mathsf {CRS}}, \mathcal {F}_{\mathsf {CA}}}(1^{\lambda }, z)\Big \}_{\lambda \in \mathbb {N}, z \in \{0,1\}^*}. \end{aligned}$$

We divide the proof of this statement into three parts. In the first part we define the simulator \(\mathcal {S}\) that interacts with \(\mathcal {F}_{\mathsf {RS}}\) and simulates the cryptographic computations. Note that during \(\mathsf {Rate}\)-requests \(\mathcal {S}\) does not obtain any identifying information of the rater. Hence, \(\mathcal {S}\) uses the zero-knowledge simulator for the Signature of Knowledge that represents a rating. Analogously, opening-proofs are represented by a Signature of Knowledge. Therefore, \(\mathcal {S}\) uses the corresponding zero-knowledge simulator to generate opening-proofs.

In the second part of the proof we define a hybrid game \(\mathcal {G}\) and a corresponding simulator \(\mathcal {S}_1\) for which we prove that no environment \(\mathcal {Z}\) can distinguish whether it interacts with \((\mathcal {F}_{\mathsf {RS}}, \mathcal {S})\) or \((\mathcal {G}, \mathcal {S}_1)\). In this game \(\mathcal {S}_1\) obtains all identifying information during \(\mathsf {Rate}\)-requests and can therefore execute the computations as defined in Protocol \(\varPi _{\mathsf {RS}}\). Opening-proofs, too, can be generated by \(\mathcal {S}_1\) as in Protocol \(\varPi _{\mathsf {RS}}\). Hence, an environment \(\mathcal {Z}\) is only able to distinguish \((\mathcal {F}_{\mathsf {RS}}, \mathcal {S})\) and \((\mathcal {G}, \mathcal {S}_1)\) if it can distinguish between simulated and real ratings and opening-proofs. Under the SXDH-Assumption this is not possible.

In the third part of the proof we show that \(\mathcal {S}_1\) executes exactly the same computations as Protocol \(\varPi _{\mathsf {RS}}\). This implies that any environment \(\mathcal {Z}\) that distinguishes between \((\mathcal {G}, \mathcal {S}_1)\) and \((\varPi _{\mathsf {RS}}, \mathcal {A})\) must be able to make \(\mathcal {F}_{\mathsf {RS}}\) output error where Protocol \(\varPi _{\mathsf {RS}}\) outputs some value, or to make \(\mathcal {F}_{\mathsf {RS}}\) output 0 where Protocol \(\varPi _{\mathsf {RS}}\) outputs 1 (or vice versa). Using different reductions to the Pointcheval-Sanders-Problem and to the CCA2-security of the Cramer-Shoup encryption scheme we show that such environments cannot exist. Hence, \(\varPi _{\mathsf {RS}}\) UC-realizes \(\mathcal {F}_{\mathsf {RS}}\) in the (\(\mathcal {F}_{\mathsf {RO}}, \mathcal {F}_{\mathsf {CRS}}, \mathcal {F}_{\mathsf {CA}}\))-hybrid model.    \(\square \)

A Note on Revocation: Protocol \(\varPi _{\mathsf {RS}}\) can easily be extended to support verifier-local revocation, which revokes a user completely: to revoke the party \(P_i\), the System Manager \(P_{\mathrm {IDM}}\), or even \(P_i\) himself, publishes the value \(\tilde{Y}_i\) as the user’s revocation token \( rt _i\) on a revocation-list \(\mathcal {RL}\). Then any verifier can check whether the author of a given rating \(\sigma = (T_1, T_2, T_3, T_4, T_5, ch , s)\) is revoked by testing whether the equation \(\mathrm {e}(T_5, \tilde{Y}) = \mathrm {e}(\mathcal {H}_1(j, prod ), rt )\) holds for any entry \( rt \in \mathcal {RL}\). Analogously, during \(\mathsf {Purchase}\)-requests the product owner can test whether \(\mathrm {e}(M_i, \tilde{Y}) = \mathrm {e}(g_1, rt )\) holds to detect a revoked user \(P_i\). This revocation mechanism conflicts with our definition of anonymity, and it is an open problem how to prove security when revocation is considered.
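For intuition, the following sketch instantiates the first revocation test on bn128 using py_ecc. The concrete shapes \(T_5 = \mathcal {H}_1(j, prod )^{ usk }\) and \( rt = \tilde{Y}^{ usk }\) are our own assumptions, chosen only so that the stated pairing equation holds; the authoritative definitions are those of Protocol \(\varPi _{\mathsf {RS}}\).

```python
# Verifier-local revocation check e(T5, Y~) == e(H1(j, prod), rt) on bn128.
# Requires: pip install py_ecc
import hashlib
from py_ecc.bn128 import G1, G2, multiply, pairing, curve_order


def H1(j, prod):
    # toy hash-to-G1 by exponent (NOT a secure hash-to-curve construction)
    e = int.from_bytes(hashlib.sha256(f"{j}|{prod}".encode()).digest(), "big")
    return multiply(G1, e % curve_order)


y, usk = 1234567, 7654321              # toy secrets
Y_tilde = multiply(G2, y)              # \tilde{Y} = g_2^y
rt = multiply(Y_tilde, usk)            # assumed revocation token \tilde{Y}_i
T5 = multiply(H1("P_j", "prod"), usk)  # assumed rating component tied to usk

# the rating is revoked iff the equation holds for some rt on the list RL
assert pairing(Y_tilde, T5) == pairing(rt, H1("P_j", "prod"))
```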

Considering Adaptive Adversaries: Theorem 1 only claims security against static adversaries, because anonymity and linkability are conflicting security properties, which impede the construction of UC-secure protocols in the presence of adaptive adversaries. We leave this as an open problem that needs further research.