1 Introduction

The rapid growth of the Internet and ubiquitous connectivity has spurred the development of various collaborative computing systems such as service-oriented computing (SOC), Peer-to-Peer (P2P), and online community systems. In these applications, the accessibility of the information and services offered by these communities makes it both possible and legitimate to communicate with strangers and carry out interactions anonymously, in a way rarely done in “real” life. However, the service consumer usually knows little about the service providers, which often forces the consumer to accept the risk of working with providers without prior interaction or experience. To mitigate these risks for consumers, reputation systems [1,2,3,4] have been deployed as a popular approach to predicting how much a service provider can be trusted. Reputation systems provide communities with means to reduce the potential risk when communicating with people hiding behind virtual identities. They utilize the experience and knowledge accumulated and shared by all participants to assign reputation values to individuals, and attempt to identify dishonest members and prevent their negative effects.

One major challenge in designing reputation mechanisms is ensuring that truthful information is gathered about the actual outcome of each transaction. In the absence of independent means of verification, the efficiency of the reputation mechanism depends entirely on the amount of reputation information received and the quality of each report. It is, however, not at all clear that it is in the best interest of a rational peer to actively provide honest recommendations, because [5]: (1) feedback reporting is usually costly: users need to understand the rating scale, fill in feedback forms, and supervise the submission of the report, all of which requires the time and conscious effort of the reporters; since feedback reporting brings no direct benefit (the information is valuable only to subsequent buyers), rational agents are better off not reporting at all; (2) providing honest positive recommendations lifts the reputation of other peers, so it may be a disadvantage to report on them truthfully; and (3) a peer may fear retaliation for honest negative recommendations.

To address the above problems, researchers have proposed numerous approaches for promoting honest recommendations. One solution is provided by Miller, Resnick, and Zeckhauser [6]. They compare the quality reports of two agents about the same good with one another and apply strictly proper scoring rules to compute a payment scheme that makes honest reporting a Nash equilibrium. Jurca and Faltings [7] study a largely similar setting but use automated mechanism design to compute a budget-optimal payment scheme. Furthermore, they have developed numerous extensions to the base model, such as incorporating collusion resistance [8, 9]. Gudes et al. [10] take a different approach, protecting the privacy of recommendation providers to address the fear of retaliation in reputation systems. They present three different schemes for the private computation of reputation and analyze their advantages and disadvantages in terms of privacy and communication overhead.

A number of strategies have been proposed to counter the impact of such selfish behavior, and an overview of the representative strategies is therefore needed. In this paper, we present a comprehensive discussion of approaches for promoting honest recommendations in reputation systems. Different classes of approaches are described along with their unique characteristics and working principles. A number of proposed schemes are critically reviewed and compared with respect to their effectiveness and efficiency. Some open problems in the area of promoting honest recommendations in reputation systems are also discussed. To the best of our knowledge, we are the first to systematically analyze the schemes for promoting honest recommendations in reputation systems.

The remainder of this paper is structured as follows. Section 2 presents a classification and brief description of existing schemes for motivating honest recommendations in reputation systems. The following two sections discuss the two main categories of approaches in detail, along with their unique characteristics and working principles. Conclusions and future work are given at the end.

2 Taxonomy of Approaches for Promoting Honest Recommendations

Many reputation systems assume that all peers are willing to provide recommendations, but this assumption does not always hold. In self-organized systems (e.g., file sharing, collaboration, and e-commerce) dominated by rational agents acting to maximize their revenues, it is not clear that sharing truthful information is in the best interest of the reporter. Current solutions to this problem can be divided into two categories: preserving the privacy of recommenders and providing incentives to recommenders. The resulting taxonomy is shown in Fig. 1.

Fig. 1. Taxonomy of approaches for promoting honest recommendations in reputation systems.

It has been observed that users in a reputation system often hesitate to provide negative feedback. Resnick and Zeckhauser reported some interesting statistics about eBay’s reputation system [2]: only 0.6% and 1.6% of all feedback provided by buyers and sellers, respectively, was negative, which seems too low to reflect reality. This might be because mutually satisfying transactions are simply the (overwhelming) norm. However, it might also be the case that when feedback providers’ identities are publicly known, reputation ratings are provided in a strategic manner for reasons of reciprocation and retaliation, and do not properly reflect the trustworthiness of the rated parties. For example, a user may have an incentive to provide a high rating because he expects the user he rates to reciprocate and provide a high rating for either the current interaction or possible future ones.

A more general solution to this problem is to compute reputation scores in a privacy-preserving manner [11,12,13,14,15,16,17,18,19]. A privacy-preserving reputation system operates such that the individual feedback of any entity is not revealed to other entities in the system. The implication of private feedback is that there are no consequences for the feedback provider, who is thus uninhibited in providing honest feedback.

Moreover, in a large network there is little incentive for a particular individual to expend resources to maintain the reputation system, and from a game-theoretic perspective not reporting feedback may be advantageous to an agent in a competitive situation. To address these two problems, researchers have been developing incentive mechanisms [5,6,7,8,9, 20,21,22,23,24,25,26]. An incentive mechanism rewards peers who actively give honest recommendations and penalizes peers who are unwilling to give recommendations or who give dishonest ones, so that peers behave as expected and actively provide honest recommendations. The aim of an incentive mechanism is thus to make the reputation mechanism incentive-compatible, i.e., to ensure that it is in the best interest of a rational agent to report reputation information truthfully.

Current incentive mechanisms for promoting honest recommendations can be divided into two categories [4]: market-based approaches and policy-based approaches. A market-based incentive mechanism introduces side payments that make it rational for peers to truthfully share reputation information: peers obtain reputation information by paying some virtual currency and earn virtual currency by providing reputation information. A policy-based incentive mechanism is implemented through a fair differential service mechanism. The goal of service differentiation is not to provide hard guarantees but to create a distinction among peers based on their contributions to the system; the basic idea is that the greater the contribution, the better the relative service.

In the following sections, we introduce the representative strategies in each category and summarize their common problems.

3 Protecting the Privacy of Recommenders

It has been observed that reputation ratings may be provided in a strategic manner for reasons of reciprocation and retaliation, and therefore may not properly reflect the trustworthiness of the rated parties. It thus appears that supporting the privacy of recommendation providers could improve the quality of their ratings [12, 13].

Privacy-preserving reputation computation is straightforward in the presence of a trusted central authority: each provider submits his feedback value to the central authority, which aggregates all feedback and reveals the reputation score while keeping the individual feedback private. Zhang et al. [11] propose a reputation system that can protect the privacy of users offering feedback with the help of a trusted central server. In their scheme, reputation scores submitted to the central server are encrypted and can only be decrypted by it. In addition, the central server returns to the querying user only an aggregated reputation score instead of the collected raw reputation scores. Therefore, it is impossible for any server to know the reputation score a particular client gives it, and clients can be assured of offering honest reputation scores without incurring retaliation.
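
To make the structure of such a centralized scheme concrete, the following minimal Python sketch (our own illustration, not code from Zhang et al. [11]) shows a server that stores submitted scores privately and answers queries only with an aggregate. The encrypt function is a placeholder standing in for public-key encryption under the server’s key, and the neutral default score of 0.5 is an assumption.

```python
# Hypothetical sketch of a centralized privacy-preserving reputation server:
# clients submit (encrypted) scores, and queriers only ever see an aggregate,
# never individual ratings. Encryption is represented by a placeholder.

from collections import defaultdict
from statistics import mean

def encrypt(score: float) -> float:
    """Placeholder: stands in for public-key encryption under the server's key."""
    return score  # a real system would return ciphertext here

class TrustedReputationServer:
    def __init__(self):
        self._ratings = defaultdict(list)   # provider_id -> list of private scores

    def submit(self, provider_id: str, encrypted_score: float) -> None:
        score = encrypted_score             # placeholder for decryption with the server's key
        self._ratings[provider_id].append(score)

    def query(self, provider_id: str) -> float:
        """Return only the aggregated reputation, never the raw feedback."""
        scores = self._ratings[provider_id]
        return mean(scores) if scores else 0.5   # neutral prior when no feedback exists

server = TrustedReputationServer()
server.submit("provider-A", encrypt(0.9))
server.submit("provider-A", encrypt(0.2))
print(round(server.query("provider-A"), 2))      # 0.55 -- individual scores stay hidden
```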

However, preserving privacy in decentralized reputation systems is not trivial, since no such universally trusted central authority is present to collect and report reputation ratings.

Pavlov et al. [12] argue that supporting perfect privacy in decentralized reputation systems is impossible, and as an alternative present three probabilistic schemes that support partial privacy. On the basis of these schemes, they offer three protocols that allow ratings to be provided privately with high probability in decentralized additive reputation systems. The first protocol is not resilient against collusion of users; the other two protocols are probabilistically resistant to collusion of up to n−1 users and require O(n²) and O(n³) messages, respectively, among n users.
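
The additive setting can be illustrated with a secure-sum sketch based on random share splitting. This is our own simplified illustration in the spirit of decentralized additive reputation protocols, not an implementation of Pavlov et al.’s schemes, and it ignores collusion and the message-complexity considerations discussed above.

```python
# Minimal secure-sum sketch for an additive reputation system: each witness
# splits its private feedback into random shares so that no single participant
# sees another's rating, yet the sum of all feedback can still be reconstructed.

import random

def split_into_shares(secret: float, n: int) -> list[float]:
    """Split `secret` into n random shares that sum back to the secret."""
    shares = [random.uniform(-1.0, 1.0) for _ in range(n - 1)]
    shares.append(secret - sum(shares))
    return shares

def private_additive_reputation(feedback: list[float]) -> float:
    n = len(feedback)
    # Each witness i sends share j of its feedback to witness j.
    shares = [split_into_shares(f, n) for f in feedback]
    # Each witness publishes only the sum of the shares it received.
    partial_sums = [sum(shares[i][j] for i in range(n)) for j in range(n)]
    return sum(partial_sums)          # equals sum(feedback); individual values stay hidden

ratings = [0.8, 0.6, 0.9, 0.4]
print(round(private_additive_reputation(ratings), 6))   # 2.7
```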

Kinateder and Pearson [13] suggest a privacy-enhanced peer-to-peer reputation system on top of a Trusted Computing Platform (TCP). The platform’s functionality, along with the use of pseudonymous identities, allows the platform to prove that it is trusted while concealing the real identity of the feedback provider. A possible privacy breach at the IP layer is handled by the use of MIX cascades or anonymous web posting. This approach depends on a specific platform, which is currently arousing controversy in the computing community [14].

Gudes et al. [10] discuss the computation of reputation while preserving members’ private information. Three different schemes for the private computation of reputation are presented, and their advantages and disadvantages in terms of privacy and communication overhead are analyzed.

Hasan et al. [15] present three different privacy-preserving protocols for computing reputation. They vary in strength in terms of preserving privacy; however, a common thread in all three protocols is that they are fully decentralized and efficient. The protocols that are resilient against semi-honest adversaries and non-disruptive malicious adversaries have linear and log-linear communication complexity, respectively.

Liu et al. [26] present a hybrid approach for privacy-preserving recommender systems that combines randomized perturbation and differential privacy. Users’ private data are protected by randomized perturbation, and the privacy of the recommendation result is guaranteed by differential privacy.

Approaches that protect the privacy of recommenders must ensure the anonymity of recommenders while reputation information is collected and trust values are computed. The logic of anonymous recommendation in a reputation system is thus analogous to the logic of anonymous voting in a political system: it potentially encourages truthfulness by guaranteeing secrecy and freedom from explicit or implicit influence. Although this freedom might be exploited by dishonest recommendation providers, who tend to provide exaggerated recommendations, it is highly beneficial for honest ones, protecting them from strategic manipulation. For example, a peer may otherwise have an incentive to provide a high rating because he expects the peer he rates to reciprocate and provide a high rating for either the current interaction or possible future ones.

4 Providing Incentives to Recommenders

Providing rewards is an effective way to improve feedback, according to the widely recognized principle in economics that people respond to incentives. Game theory, the mathematical study of interaction among independent, self-interested agents in multi-agent systems, plays a major role in the design of these mechanisms. A mechanism is a set of rules that maps the actions of the agents to outcomes (payments) for those actions. The aim of providing rewards is to make the reputation mechanism incentive-compatible, i.e., to ensure that it is in the best interest of a rational peer to report reputation information truthfully.

Recent years have witnessed growing interest in incentive mechanisms for promoting honest recommendations. As noted above, current incentive mechanisms for promoting honest recommendations can be divided into two categories [4]: market-based approaches and policy-based approaches.

4.1 Market Based Incentive Mechanism

A market-based incentive mechanism introduces side payments that make it rational for peers to truthfully share reputation information [5,6,7,8,9, 20]. Peers obtain reputation information by paying some virtual currency and earn virtual currency by providing reputation information.

Dellarocas [20] proposes “Goodwill Hunting” (GWH), a feedback mechanism for a trading environment based on the argument that truthful feedback benefits the community as a whole. If buyers provide random feedback, sellers with high product quality will be driven out of the market and buyers will lose profit. The mechanism elicits truthful feedback from buyers by offering rebates on a buyer’s periodic membership fee if the mean and variance of the buyer’s and sellers’ perceptions of transaction quality are consistent across the entire buyer community. Buyers receive a smaller payment if their feedback on sellers’ product quality deviates from the community-wide reports. To provide incentives for participation, buyers receive no rebate if they do not provide feedback. Buyers may also behave badly just before they exit the market; to address this, part of the membership fee is refunded only at the end of the period, on the basis of the buyer’s behavior. However, the GWH mechanism does not deal with buyers’ strategic misreporting and only works when each buyer buys from a given seller only once.

To stimulate reputation-information sharing and elicit honest recommendations, Jurca and Faltings [5, 7,8,9] propose an incentive-compatible reputation mechanism to deal with inactivity and lies. A peer buys a recommendation about a service provider from a special broker called an R-node. After interacting with the provider, the peer can sell its feedback to the same R-node, but gets paid only if its report coincides with the next peer’s report about the same service provider. One issue is that if the recommendation from an R-node is negative, so that the peer decides to avoid the service provider, the peer will not have any feedback to sell. Likewise, in the presence of opportunistic service providers that, for example, alternate between behaving and misbehaving, honest feedback does not guarantee payment. This opens up the possibility that an honest entity accumulates negative revenue and is thus unable to buy any recommendation. Moreover, the effectiveness of their work depends largely on the integrity of the R-nodes, which are assumed to be trusted a priori.

Miller et al. [6] introduce a mechanism similar to that proposed by Jurca and Faltings [5]. In this mechanism, a center maintains peers’ ratings. The center rewards or penalizes each peer on the basis of its ratings and ensures that the mechanism at least breaks even in the long run. More specifically, a peer providing truthful ratings is rewarded and gets paid not by broker agents but by the buyer after the next buyer. To balance transfers among peers, a proper scoring rule is used to determine the amount that each peer is paid for providing truthful feedback. The scoring rule used by the center (i.e., the logarithmic scoring rule) makes truthful reporting a Nash equilibrium in which every peer is better off providing truthful feedback given that every other peer chooses the same strategy. Furthermore, proper scaling of the scoring rules and collection of bonds or entry fees in advance ensure budget balance and the incentives of the mechanism. The mechanism assumes that service providers have fixed quality, which limits its usefulness. As with the mechanism proposed by Jurca and Faltings [5], the truthful equilibrium is not the only equilibrium; there may be non-truthful equilibria in which every peer is better off providing untruthful feedback given that the other peers choose the same strategy. Therefore, this mechanism also cannot deal with the situation where strategic peers collude in giving untruthful feedback.
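
To illustrate why a logarithmic scoring rule rewards truthful reporting, the following sketch scores a peer’s report against the next buyer’s (reference) report. The posterior probabilities, scale, and offset are illustrative assumptions of ours, not parameters from Miller et al. [6]; the point is only that, given the assumed model, a reporter whose true observation is “good” maximizes its expected payment by reporting “good”.

```python
# Hedged sketch of a peer-prediction payment using the logarithmic scoring rule.

import math

# Assumed P(reference report | my report): reporting "good" raises the
# probability that the next (reference) report is also "good".
POSTERIOR = {
    "good": {"good": 0.8, "bad": 0.2},
    "bad":  {"good": 0.3, "bad": 0.7},
}

def log_score_payment(my_report: str, reference_report: str,
                      scale: float = 1.0, offset: float = 3.0) -> float:
    """Payment = scale * (offset + ln P(reference | my_report)); the offset keeps payments positive."""
    return scale * (offset + math.log(POSTERIOR[my_report][reference_report]))

# If my true observation is "good", the reference report is (by my own belief)
# drawn from POSTERIOR["good"]; truthful reporting maximizes my expected payment.
for report in ("good", "bad"):
    expected = sum(POSTERIOR["good"][ref] * log_score_payment(report, ref)
                   for ref in ("good", "bad"))
    print(report, round(expected, 3))
```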

To encourage the exchange of reputation information, Pinocchio [21] rewards participants who advertise their experience to others. At the same time, to protect the reward system from users who may submit inaccurate or random statements to obtain rewards, it uses a probabilistic honesty metric to detect dishonest users and deprive them of rewards. The trust management system sets up a credit balance for each participant, which is credited with a reward for each statement advertised and debited for each query made by that user. The trust management system can set a maximum limit on the amount of credit given as rewards to a participant per minute. If a participant’s credit balance is positive, she can use it to get a discount on future queries; there is no way to cash the credit for money. Pinocchio does not intend to protect against conspiracies or bad-mouthing.
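
A minimal sketch of this kind of credit accounting might look as follows; the class name, parameter values, and per-minute bookkeeping are our own illustrative assumptions rather than Pinocchio’s actual implementation.

```python
# Illustrative credit-accounting sketch in the spirit of Pinocchio [21];
# the reward, query cost, and per-minute cap are assumed values.

class CreditAccount:
    def __init__(self, reward=1.0, query_cost=2.0, max_reward_per_minute=10.0):
        self.balance = 0.0
        self.reward = reward
        self.query_cost = query_cost
        self.max_reward_per_minute = max_reward_per_minute
        self._earned_this_minute = 0.0

    def start_new_minute(self):
        """Reset the per-minute reward counter."""
        self._earned_this_minute = 0.0

    def advertise_statement(self, passes_honesty_metric: bool):
        """Credit a reward only if the honesty metric does not flag the user
        and the per-minute cap has not been reached."""
        if passes_honesty_metric and self._earned_this_minute < self.max_reward_per_minute:
            self.balance += self.reward
            self._earned_this_minute += self.reward

    def query(self):
        """Debit the cost of a query; a positive balance effectively discounts future queries."""
        self.balance -= self.query_cost

account = CreditAccount()
for _ in range(5):
    account.advertise_statement(passes_honesty_metric=True)
account.query()
print(account.balance)   # 3.0: five rewarded statements minus one query
```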

To obtain truthful feedback in a non-verifiable information environment, and inspired by the mechanism-design paradigm in a hidden-knowledge setting, Zhao et al. [22, 23] model the feedback reporting process as a reporting game, design a wage-based incentive mechanism, and provide numerical solutions for the minimum wage required to reinforce truthful strategies. Under their mechanism, querists are not required to estimate or know the truthfulness of feedback when paying wages: the wage paid to reporters depends only on the feedback, regardless of its truthfulness. By following their scheme, truthful revelation is a dominant strategy for all reporters. Unlike most comparison-based schemes, their solution does not require peers to verify the truthfulness of information; that is, it does not require peers to compare a feedback submission with other feedback. The solution requires only localized wage-payment schemes, which greatly reduces the risk of collusion in reporting.

In summary, market-based schemes allow for rich and flexible economic mechanisms and offer side payments to peers who truthfully rate the results of transactions with service providers. Providing truthful feedback about service providers is a Nash equilibrium in these mechanisms. However, they suffer the notable drawback of seeming highly impractical, since they need an infrastructure for accounting and micropayments; much of the research in this field is less concerned with the feasibility of micropayments and instead considers the problems that remain under the assumption that monetary exchanges are possible. Second, these mechanisms assume that all peers share the same truthful opinion and that the majority of peers behave truthfully, and therefore have difficulty with situations where peers collude in giving dishonest recommendations. Furthermore, the mechanisms do not work well if the majority of peers choose to provide dishonest recommendations, because each of these dishonest peers will also be rewarded; honest peers, whose recommendations then differ from those of most other peers, will not be rewarded and will be discouraged from being honest in the future. Third, in addition to the desirable truth-telling equilibria, these incentive mechanisms induce additional equilibria in which peers do not report the truth, so equilibrium selection is an important consideration in practical implementations.

4.2 Policy Based Incentive Mechanism

A policy-based incentive mechanism induces peers to participate in reputation-information sharing as expected by establishing appropriate policies based on their behavior, namely, whether they actively provide honest recommendations. The principle behind such policies is that active and honest recommenders, compared to inactive or dishonest ones, always benefit more from other peers, for example through higher trust values, more interaction opportunities, and a larger amount of honest recommendations in return.

PeerTrust [4] presents a policy-based incentive scheme that adds a reward, as a community context factor, for peers submitting feedback, which may alleviate the feedback incentive problem to some extent. This is accomplished by providing a small increase in reputation whenever a peer provides feedback to others. The community context factor can be defined as the ratio of the total number of feedback reports a peer gives to others during a given time period to the total number of transactions the peer has. Weight factors can be tuned to control the amount of reputation that can be gained by rating others. However, some problems remain. First, it is unclear how to allocate the weight of the community context in the trust metric: if the weight is high, peers can gain high trust values simply by providing recommendations and have little motivation to provide high-quality services; conversely, if the weight is low, the effectiveness of the incentive mechanism is in doubt. Second, although PeerTrust proposes the feedback credibility evaluation algorithm PSM (personalized similarity measure), the reward ignores feedback credibility and is given to all peers providing feedback, which may induce peers to submit large amounts of random or dishonest feedback.
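
The first trade-off can be made concrete with a small sketch; the weighting scheme and the numbers below are our own illustration of a community-context factor, not PeerTrust’s exact trust metric.

```python
# Illustrative blend of transaction-based trust and a community-context reward
# for giving feedback (not PeerTrust's exact formula); context_weight plays the
# role of the tunable weight factor discussed above.

def trust_value(satisfaction_scores, feedbacks_given, transactions, context_weight=0.1):
    base_trust = sum(satisfaction_scores) / len(satisfaction_scores) if satisfaction_scores else 0.0
    community_context = feedbacks_given / transactions if transactions else 0.0
    return (1 - context_weight) * base_trust + context_weight * community_context

# A peer with mediocre service but very active rating behavior:
scores = [0.5, 0.6, 0.5]
print(trust_value(scores, feedbacks_given=30, transactions=30, context_weight=0.1))  # ~0.58
print(trust_value(scores, feedbacks_given=30, transactions=30, context_weight=0.5))  # ~0.77
# A higher context weight lets rating activity substitute for service quality,
# which is exactly the first problem noted above.
```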

T. G. Papaioannou et al. propose a mechanism for providing incentives to report truthful feedback in a peer-to-peer system for exchanging services [24, 25]. Under their approach, both transacting peers (rather than just the client) submit ratings on the performance of their mutual transaction. If these ratings disagree, both transacting peers are punished, since such a disagreement is a sign that one of them is lying. The severity of each peer’s punishment is determined by his non-credibility metric, which is maintained by the mechanism and evolves according to the peer’s record. When under punishment, a peer is not allowed to transact with others for a period that is exponential in his non-credibility value. For each peer, both the non-credibility and the punishment state are public information and are stored so as to be available to other peers. Being barred from transacting causes the punished peer to lose the value offered by others, which provides incentives for peers to report truthfully on their business with others. However, this approach does not deal with colluding liars. Moreover, the punishment policy is questionable from the viewpoint of system availability: some peers provide high-quality services yet submit dishonest recommendations, and under the punishment policy these peers have no chance to provide service because of their dishonest recommendations, so the availability of the whole system is weakened.
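
The punishment logic can be sketched as follows; the growth base, the non-credibility update rule, and the agreement discount are our own illustrative assumptions, not the exact rules of [24, 25].

```python
# Simplified sketch of the disagreement-punishment idea: when the two
# transacting peers' reports disagree, both peers' non-credibility grows and
# both are banned for a period exponential in it.

def update_after_transaction(non_credibility: dict, peer_a: str, peer_b: str,
                             report_a: bool, report_b: bool,
                             base: float = 2.0) -> dict:
    """Return punishment durations (in rounds); empty dict when the reports agree."""
    if report_a == report_b:
        # Agreement: mildly decrease non-credibility (never below zero).
        for p in (peer_a, peer_b):
            non_credibility[p] = max(0, non_credibility.get(p, 0) - 1)
        return {}
    # Disagreement: one of them is lying, so both are punished.
    punishments = {}
    for p in (peer_a, peer_b):
        non_credibility[p] = non_credibility.get(p, 0) + 1
        punishments[p] = base ** non_credibility[p]   # exponential in non-credibility
    return punishments

nc = {}
print(update_after_transaction(nc, "alice", "bob", True, False))   # {'alice': 2.0, 'bob': 2.0}
print(update_after_transaction(nc, "alice", "bob", True, False))   # {'alice': 4.0, 'bob': 4.0}
```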

Zhang et al. [27] develop a novel trust-based incentive mechanism in which buyers first model other buyers using a personalized approach and select the most trustworthy ones as their neighbors, from whom they can ask for advice about sellers. They use the term “neighbor” to refer to a buying agent that is accepted as an advisor of the buyer and becomes part of that buyer’s social network. In addition, sellers model the global reputation of buyers based on the social network. Since buyers model the trustworthiness of potential advisors, advisors that always provide truthful ratings of sellers are likely to be neighbors of many other buyers and are considered reputable in the social network. These agents are able to attract a larger audience to witness their feedback (also known as increasing “broadcast efficiency”). In marketplaces operating with this mechanism, sellers increase quality and decrease the prices of products to satisfy reputable buyers, in order to do business with many other buyers in the market. In consequence, the mechanism creates incentives for buyers to provide truthful ratings of sellers.

Liu et al. [28] present an incentive-compatible reputation mechanism to facilitate trustworthiness evaluation in ubiquitous computing environments. It is based on probability theory and supports reputation evolution and propagation. Their mechanism not only shows robustness against lies but also stimulates honest and active recommendations. The latter is achieved by ensuring that active and honest recommenders, compared to inactive or dishonest ones, elicit the most honest (helpful) recommendations and thus suffer the fewest wrong trust decisions, as validated by simulation-based evaluation.

From the above discussion, we can see that in policy-based incentive mechanisms, peers maintain the recommendation credibility of other peers and use this information in their decision-making processes. Recommendation credibility measures the truthfulness of a peer as a provider of recommendations. Peers in these mechanisms have incentives to provide truthful ratings in order to increase their credibility or decrease their non-credibility; in doing so, they gain higher profit, such as higher trust values, more interaction opportunities, and a larger amount of honest recommendations in return. Policy-based incentive mechanisms differ from one another primarily in the computation of recommendation credibility and in the mapping of credibility to strategies.

5 Conclusions and Future Work

The success of current trust and reputation systems rests on the premise that honest recommendations are obtained [29, 30]. However, without appropriate mechanisms, in most reputation systems under-participation and lying strategies usually yield higher payoffs for peers than honest recommendation strategies. To address this problem, a number of schemes have been proposed to motivate honest recommendations in reputation systems. In this paper we give an overview of existing and proposed schemes, categorize them, analyze the representative strategies in each category, and summarize their common problems. Their principles are explained along with their limitations against the system requirements, as shown in Table 1.

Table 1. Comparison of existing approaches for promoting honest recommendations in reputation systems.

Since existing incentive mechanisms aim to maximize the network’s performance without taking into account the privacy of peers, it is necessary to introduce an incentive mechanism with privacy protection for peers. The combination of privacy protection and incentive mechanisms for promoting honest recommendations would make reputation systems more robust than ever. Because meeting the demands of peers’ privacy protection raises the complexity of the incentive mechanism and increases computation and communication overheads, designing an efficient incentive mechanism with privacy protection for reputation systems is a challenge. Addressing this challenge will be part of our future work.

In addition, we plan to study schemes and protocols achieving privacy in the general case, i.e., in decentralized reputation systems which are not necessarily additive.