Introduction

The continuous development of technologies during the past decade has changed the way people live and carry out their daily activities. Building on this trend, a new paradigm called the Internet of Things (IoT) has been introduced, offering a 3A connectivity service, that is, anyone can be connected at any place and at any time (Arseni et al. 2015). As a result, the way people interact and communicate with each other has changed drastically with the advent of the Internet and especially with today's craze: Online Social Networking (OSN). With the emergence of smartphones, OSN gained further popularity since access to these networks is now more convenient. According to the Pew Research Center (2018), 92% of teens go online daily using their mobile phones.

Based on the recent advances in mobile devices and OSN applications, a new type of networking has emerged: Proximity-based Social Networking (PBSN). PBSN refers to social networking services provided to physically proximate users on their mobile devices through Wi-Fi, Bluetooth, or cellular networks using their location information. The main activity on these networks, known as "check-in," allows users to share their real and current geographical location automatically in their posts. Users can discover friends in the vicinity, pick out nearby restaurants, or select routes based on traffic information. As a complement to traditional web-based OSN platforms, PBSN enables more tangible face-to-face interactions among users.

As more people embraced the PBSN phenomenon, much emphasis has been directed towards the security and privacy of users. Securing the data and information of PBSN users is more crucial than in OSN because the real locations of users can be exposed and their movements can be followed via a map displaying all the places at which they have checked in (Kong et al. 2014). In addition to information about users' usual routes being leaked, the mode of transport used can also be predicted (Puttaswamy and Zhao 2010). On top of that, private and confidential location information, such as visits to hospitals or bars, can be exposed (Li et al. 2016). Hence, people are reluctant to share their location information due to the associated dangers, such as being stalked, robbed, or sexually assaulted (Liu 2009). Consequently, many PBSN frameworks have adopted privacy-preserving techniques in their system models, such as k-anonymity, obfuscation, spatial cloaking, and cryptography, to further encourage users to make the most of these services (Zhu and Cao 2011; Sun et al. 2016; Xue et al. 2016; Ravi et al. 2019; Song et al. 2015; Li et al. 2011). Vu et al. (2012) use a trusted server known as the anonymizer to obtain k-anonymous location privacy by removing the user's ID and adopting an anonymizing spatial region (ASR) that covers the user and at least k-1 other users in the vicinity. APPLAUS employs a location proof system in which an authorized verifier cross-checks the trustworthiness of the location (Zhu and Cao 2011); additionally, dynamic pseudonyms are used on each device as extra protection for the location data, thereby protecting the users' identity and location details. PrivCheck (Yang et al. 2016) is a customizable privacy-preserving framework in which user check-in data are obfuscated to minimize the leakage of private data under a given data distortion budget, preventing inference attacks. Song et al. (2015) proposed a cloaking system model called anonymity of motion vectors (AMV), which provides anonymity for spatial queries and thus prevents the real locations of mobile users from being revealed; user queries can be addressed and the cloaking region (CR) minimized by predicting user movements based on their motion. Werner (2016) makes use of strong cryptographic functions to enable a good trust relationship between users; the system relies on Locagrams, which act as the basic messaging primitive of the PBSN service.

Despite the success of these privacy-preserving solutions in protecting location information and the corresponding personal information, such as the identity of users, limited analyses have been carried out to assess how secure the solutions are and how well the privacy requirements are enforced. The existing literature either assesses security and privacy performance by addressing only part of the privacy requirements or demonstrates how a particular attack is prevented. For instance, Buchanan et al. (2013) analyzed privacy-preserving methods in recent works by assessing the techniques used to reduce the impact of location tracking. Jiang et al. (2021) outlined the potential risks associated with location-based services and analyzed existing Location Privacy-Preserving Mechanisms (LPPMs) in terms of their level of privacy and performance. On the other hand, Sun and Xue (2020) focused on assessing online Automated Privacy Policy Generators (APPGs) by analyzing the completeness of applications' privacy policies to determine the missing categories and items. Nevertheless, there is a research gap in evaluating privacy-preservation techniques in terms of privacy and security goals, location-related threats, and the risks associated with privacy-preserving solutions. This paucity is addressed in this study by introducing the PISA model and carrying out a comprehensive assessment of the protection features of PBSN frameworks. The model is useful for evaluating existing privacy-preserving PBSN systems and thoroughly analyzing the privacy-preserving solutions they propose. It can also help improve algorithms during their development by exposing potential loopholes or weaknesses in advance.

The objectives of this study are detailed as follows:

  1. Review the different privacy and security requirements of existing privacy-preserving algorithms of PBSN systems and categorize them into a series of protection goals.

  2. Carry out an in-depth analysis of the different threats associated with PBSN systems, including the information that can be accessed during an attack and the resources available to adversaries.

  3. Formulate a protection assessment (PISA) model for PBSN based on the quantification of the related protection goals using privacy metrics.

  4. Evaluate recent PBSN frameworks using the PISA model, highlighting their protection features and shortcomings based on the threat models.

The rest of the paper is organized as follows. Section 2 presents the methodology of the study and the PISA model is introduced in Sect. 3 outlining the protection goals, threat models, and evaluation metrics. An exhaustive assessment of recent PBSN frameworks is carried out in Sect. 4 based on the PISA model, and the paper is rounded off with some concluding remarks and future works in Sect. 5.

Methodology

The exploratory research methodology has been adopted in this study by conducting a thorough analysis of the privacy requirements and privacy threats of PBSN systems. The following steps of the evaluation process have been identified accordingly:

Step 1: Identification of protection goals

The privacy and security requirements of PBSN systems are identified based on the need to secure users' identities and their private locations. This step helps elucidate the goals of the assessment. The protection goals presented in this paper are derived from the privacy and security requirements of location-based systems. They cover most of the privacy and security prospects of PBSN systems and refer to data privacy, spatial privacy, unlinkability, trust, and security.

Step 2: Scrutiny of threat models

The location privacy of users can be threatened in different ways depending on the characteristics of the PBSN system and the services provided to the users (Lee et al. 2013). Before assessing privacy and security, it is important to outline the threat models related to location privacy and to identify the resources and information that may be available to the adversary.

Step 3: Definition of evaluation metrics

The most important step of an assessment model is to define appropriate metrics; this step is regarded as the core process of the evaluation (Shokri et al. 2010). These privacy metrics are used to quantify the protection goals. Additionally, the identified protection goals and threat models determine which kinds of metrics can be used and how they can be assessed.

Step 4: Evaluation and analysis

Once the protection goals and threat models have been identified, the evaluation of PBSN systems can start with respect to the privacy metrics, which are determined by the identified protection goals and threat models. Once the evaluation is done, the results are analyzed and critiqued.

Additionally, four research questions that address the different dimensions of the proposed model are identified. These research questions help define the fundamental steps of the assessment model.

RQ1: Which privacy and security requirements should be considered to define privacy-preserving algorithms in PBSN systems?

RQ2: What are the protection goals that substantiate the privacy and security requirements of PBSN systems?

RQ3: What are the most influential privacy threat models that apply to PBSN systems?

RQ4: Which evaluation metrics are needed to quantify the protection goals?

PISA model

In this section, the PISA model is proposed to carry out a comprehensive protection assessment of PBSN systems; it is inspired by the work of Zhou (2011) on template protection for biometric systems. The protection goals are defined to cover the essential privacy and security requirements that privacy-preserving algorithms aim to achieve. Threat models of PBSN systems are also outlined to define the privacy threats, risks, capabilities, and resources available to the adversary. The evaluation metrics associated with location privacy are proposed and used to quantify the protection goals.

Figure 1 illustrates the proposed evaluation PISA model, which addresses the challenge of full-scale security and privacy assessment of PBSN systems in practice and can be helpful during the development of any location privacy-preserving algorithms.

Fig. 1 The PISA model

The first step in the assessment of PBSN frameworks is to identify the privacy requirements of the privacy-preserving solution. From this diagnosis, the protection goals are described. The threat models of the frameworks are then analyzed and classified. The privacy metrics are defined to design the evaluation process, based on which the evaluation starts. The evaluation can be carried out in two different ways: practical evaluation and theoretical evaluation. The practical evaluation measures the different attacks of the adversary by taking into consideration his/her prior knowledge and resources, and gauges the efficiency of an attack by the adversary's success rate or recovery rate. The theoretical evaluation is independent of the adversary's attacks and refers mainly to information-theoretic measures such as entropy and mutual information. The results for the evaluation metrics are obtained, and an analysis is carried out. If more than one privacy metric is used, the evaluation is done for each, and the results are compared with each other.

An alternative way to assess the privacy of PBSN frameworks is to identify the privacy policies of the application. These privacy policies are analyzed and studied, the data practices are gathered from them, and these practices are thereafter assessed. Different criteria characterizing the privacy policies are identified, from which the privacy requirements may be derived. The policies will differ in terms of privacy level, location, and privacy threats. Another direction for privacy evaluation is to make use of different use cases, such as security use cases, location-tracking use cases, or misuse cases. A use-case-driven modeling method can be adopted to assess the security and privacy requirements in a more structured form.

These methods require further analysis and have not been covered in this study but can be considered as future works.

Protection goals

The preliminary step before starting an evaluation is to define the evaluation criteria, which correspond to the privacy aims of the assessment model, and to outline the threat models. Given the frequently changing context of PBSN users, it should be noted that omitting the privacy requirements of a PBSN application will affect the user's privacy and influence how the application is adopted and used (Thomas et al. 2014). Figure 2 gives an overview of the privacy and security requirements of PBSN applications. Considering these properties, the following evaluation criteria are proposed: data privacy, spatial privacy, unlinkability, trust, and security. The privacy and security assessment framework can be quantified with these protection goals, enabling empirical evaluation.

Fig. 2 Taxonomy of privacy and security requirements

Threat models

Privacy-preserving techniques prevent different types of attacks on personal and location information. To carry out a thorough privacy and security assessment, it is crucial to identify the various privacy threats that can be faced by PBSN users. Additionally, the information and resources available to the adversary should be taken into consideration. Based on the privacy threats proposed by recent surveys (Do et al. 2019; Babar et al. 2010; Solove 2005), a set of 16 threat models applicable to PBSN systems is presented in this paper. The classification of the selected privacy threats is inspired by the work of Solove (2015) and is illustrated in Fig. 3.

Fig. 3 Privacy threat models

Based on the classification of privacy violations proposed by Solove (2015) and taking into consideration the privacy threats relevant to location-based applications, the threat models are presented as follows:

Information collection

Information collection in PBSN systems refers to the process of data gathering among targeted users located in proximity (Raschke et al. 2014). Information collection is made possible by the data available through proximal access, surveillance, and snooping. Table 1 provides a detailed description of each privacy threat for information collection.

Table 1 Information collection

Information processing

Information Processing is the usage, storage, and manipulation of collected data and relates to different ways of connecting data together and linking the data to another set of information or persons (Mamonov and Benbunan-Fich 2018). Further details about the privacy threats of information processing are presented in Table 2.

Table 2 Information processing

Information dissemination

The dissemination of information is the act of spreading or transferring personal data, or the threat to do so (Lilien & Bhargava, 2006). It comprises the following privacy threats: disclosure, breach of confidentiality, exposure, distortion, and privacy leakage. Table 3 describes each threat and the associated privacy harms.

Table 3 Information dissemination

Invasion

Invasion refers to the deliberate intrusion on a user’s personal details or private activities and does not always involve information (Chamarajnagar and Ashok 2019). It relates to attacks conducted rather than activities involving data. The different privacy threats associated with invasion are described in Table 4 with their corresponding privacy harms.

Table 4 Invasion

Privacy metrics

The privacy metrics related to proximity mobile systems are presented in this section with a brief description of each. Also known as evaluation metrics, the privacy metrics are important for quantifying the protection goals. Wagner and Eckhoff (2018) described four characteristics of privacy metrics, namely adversary models, data sources, inputs, and output measures. These characteristics are helpful for classifying the privacy metrics, and similarly to that survey, the classification in this study is based on the output measures. As reported by the authors, metrics from different output categories can provide a more comprehensive estimate of privacy. However, only the metrics relevant to location-based systems are studied in this paper, outlining the related classifications. Figure 4 presents the output measures relevant to PBSN systems and the metrics associated with each.

Fig. 4 Privacy metrics classified by output measures

Uncertainty metrics

Uncertainty metrics measure the ambiguity of an adversary, that is, how uncertain he/she is about his/her estimate (Thuiller et al. 2019). In location-based systems, for example, uncertainty metrics measure how uncertain an adversary is when associating a user with his/her current location. Table 5 provides an insight into the different uncertainty metrics related to location-based systems.

Table 5 Uncertainty metrics
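To make the uncertainty metrics concrete, the following is a minimal sketch, assuming the adversary's knowledge is summarized as a posterior probability vector over candidate locations; the metric names follow Table 5, but the data are hypothetical.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum p*log2(p) of the adversary's posterior
    over candidate locations; higher entropy means more uncertainty
    and hence more location privacy."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    """Min-entropy -log2(max p): uncertainty under the adversary's
    single best guess, i.e., the worst-case view of privacy."""
    return -math.log2(max(probs))

# Uniform posterior over 8 candidate locations: maximal uncertainty.
uniform = [1 / 8] * 8
# Skewed posterior: the adversary strongly suspects one location.
skewed = [0.65] + [0.05] * 7

print(shannon_entropy(uniform), min_entropy(uniform))  # 3.0 3.0
print(shannon_entropy(skewed), min_entropy(skewed))    # ~1.92 ~0.62
```

The skewed posterior illustrates why min-entropy is the worst-case measure: Shannon entropy still reports about 1.92 bits of average uncertainty, while min-entropy drops to about 0.62 bits because one guess already dominates.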

Error metrics

Error-based metrics quantify the error an adversary makes while estimating the user’s identity or location (Al-Dhubhani et al. 2019). Table 6 describes the two error-based metrics identified for PBSN systems.

Table 6 Error metrics
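As an illustration of an error-based metric, the sketch below computes an expected distance error over a hypothetical posterior; the coordinates and units are invented for the example.

```python
import math

def expected_distance_error(posterior, true_loc):
    """Average distance between the adversary's weighted guesses and
    the user's true position; larger errors imply higher privacy."""
    return sum(p * math.dist(loc, true_loc) for p, loc in posterior)

# Hypothetical posterior over three candidate positions (in km).
posterior = [(0.5, (0.0, 0.0)), (0.3, (1.0, 0.0)), (0.2, (0.0, 2.0))]
true_loc = (0.0, 0.0)
print(expected_distance_error(posterior, true_loc))  # 0.5*0 + 0.3*1 + 0.2*2 = 0.7
```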

Data similarity metrics

The similarity metrics, as described in Table 7, measure the similarity between the estimate of the adversary and the real information (Kim et al. 2019).

Table 7 Data similarity metrics
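A minimal sketch of how two common data similarity checks could be computed directly over published check-in records; the record schema and generalization levels are hypothetical.

```python
from collections import Counter, defaultdict

def k_anonymity(records, quasi_ids):
    """Smallest equivalence-class size over the quasi-identifier
    columns; every user is hidden among at least k records."""
    classes = Counter(tuple(r[c] for c in quasi_ids) for r in records)
    return min(classes.values())

def l_diversity(records, quasi_ids, sensitive):
    """Minimum number of distinct sensitive values per equivalence
    class; guards against attribute disclosure within a class."""
    groups = defaultdict(set)
    for r in records:
        groups[tuple(r[c] for c in quasi_ids)].add(r[sensitive])
    return min(len(v) for v in groups.values())

# Hypothetical check-ins generalized to a district and an age band.
records = [
    {"district": "D1", "age_band": "20-30", "venue": "hospital"},
    {"district": "D1", "age_band": "20-30", "venue": "bar"},
    {"district": "D2", "age_band": "30-40", "venue": "cafe"},
    {"district": "D2", "age_band": "30-40", "venue": "gym"},
]
print(k_anonymity(records, ["district", "age_band"]))           # 2
print(l_diversity(records, ["district", "age_band"], "venue"))  # 2
```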

Information gain/loss metrics

This category of privacy metrics focuses on the amount of information acquired by the adversary. Less information gained by the adversary corresponds to higher privacy, while more information disclosed corresponds to privacy lost by users (Amar et al. 2018). Table 8 provides the description and application of these metrics in location privacy.

Table 8 Information gain/loss metrics
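The sketch below estimates mutual information between true and reported locations from joint samples, illustrating the average-leakage interpretation discussed later in this paper; the two toy "cloaks" are invented for the example.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) from joint samples of (true location, observed location);
    measures the average leakage of the obfuscation channel in bits."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# A perfect cloak maps both true cells to the same reported region,
# so the observation reveals nothing (I = 0 bits).
perfect_cloak = [("cellA", "region1"), ("cellB", "region1")] * 50
# A leaky cloak reports a different region per true cell (I = 1 bit).
leaky_cloak = [("cellA", "region1"), ("cellB", "region2")] * 50
print(mutual_information(perfect_cloak))  # 0.0
print(mutual_information(leaky_cloak))    # 1.0
```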

Adversary’s success probability metrics

The adversary’s success probability metrics measure the success of the adversary, that is, the number of nodes that have been identified correctly by the adversary (Wagner 2015). Table 9 provides more details about this type of metric.

Table 9 Adversary’s success probability metrics
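A minimal sketch of an adversary's success rate over a linking trial; the user IDs and guesses below are hypothetical.

```python
def adversary_success_rate(guesses, truths):
    """Fraction of users whose identity/location the adversary
    identifies correctly; a rate of zero over repeated trials is the
    evidence needed to back a 'no successful attack' claim."""
    correct = sum(g == t for g, t in zip(guesses, truths))
    return correct / len(truths)

# Hypothetical trial: the adversary links 3 of 5 check-ins correctly.
truths  = ["u1", "u2", "u3", "u4", "u5"]
guesses = ["u1", "u9", "u3", "u4", "u8"]
print(adversary_success_rate(guesses, truths))  # 0.6
```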

Evaluation and analysis

Table 10 provides an overview of the different metrics discussed above. The metrics quantify the protection goals based on the different threat models, and the measurability details show how the metrics are applicable in practice.

Table 10 Privacy metrics for assessment of privacy and security protection

Table 10 provides the basis for an empirical evaluation giving a substantive measurement via the privacy metrics. The measurability column provides insight into how applicable these metrics are in practice. The uncertainty metrics show that high privacy correlates with high uncertainty in the adversary's estimate. Most of the uncertainty metrics are built upon information-theoretic concepts such as entropy and min-entropy. It is observed that privacy is strongly related to security, which is in turn dependent on entropy: security improves as the entropy of the data increases and can be measured with entropy, conditional entropy, and min-entropy. Min-entropy provides the lowest estimate of security and privacy and is also known as the worst-case performance (Zhao and Wagner 2020). The error-based metrics, on the other hand, demonstrate how high correctness of the adversary's estimate and small errors relate to low privacy. For example, as mentioned by Shokri et al. (2011), correctness, rather than certainty or accuracy, is the metric that quantifies the privacy of a user. This metric evaluates the success of the attacker, that is, how close the adversary's estimate is to the real value.

Data similarity metrics, on the other hand, do not consider any adversary; the focus is on the properties of the observable or published data, and the privacy level is derived from the structure of the disclosed data. The information gain/loss metrics measure the information gained by adversaries or the privacy lost by users through the disclosure of information. From the analysis, it is observed that security and privacy assessment are strongly dependent on the threat models. For example, privacy leakage can cause linkability and increases as secrecy performance decreases; it can be measured with mutual information and entropy loss, where mutual information captures the average-case leakage and entropy loss captures the worst-case leakage. The adversary's success probability metrics depend on the adversary model and measure how success is attained. The adversary's success rate should also account for false positives and false negatives in addition to correct estimates: false positives are cases where the adversary identifies an incorrect user or location, and false negatives are cases where the identification of a user or location fails altogether. The adversary relies on surveillance and aggregation, in addition to physical threats such as stalking, to ensure a high success rate in identifying a user or his/her location.

Further to the evaluation, a mapping of the privacy threats with the protection goals is carried out as follows:

Table 11 helps determine which protection goals are needed to counteract the existing privacy threats. These privacy criteria help alleviate, if not eliminate, the harms associated with the privacy threats. The threat models presented in this study can be tackled by using one or more privacy criteria in a PBSN application.

Table 11 Mapping of privacy threats to protection goals

PBSN frameworks assessment

Based on the recent advances in PBSN together with the explosive growth in smartphone usage, many platforms have been created to ease the development of such applications. This section outlines some popular and recent PBSN frameworks and emphasizes their privacy and security provisions.

FINE framework

FINE is a fine-grained privacy-preserving location-based service framework designed for mobile devices (Shao et al. 2014). It follows the data-as-a-service (DaaS) model and consists of three main parties: the provider, a cloud server, and the users. The provider outsources its data to the cloud server, which acts as a third party and subsequently executes the queries of the users. FINE achieves several privacy properties, such as fine-grained access control, location privacy, confidentiality, and accurate query results, by making use of a ciphertext-policy anonymous attribute-based encryption (CP-AABE) technique. However, the cloud server is assumed to be honest but curious, meaning it can launch passive attacks to retrieve the maximum secret information available, for example, the location information of the mobile users. Additionally, even though the cloud server will not collude with the service provider, it can collude with malicious users to retrieve the location information of users and the data of the service provider.

PLAM framework

A privacy-preserving request aggregation protocol is applied in the PLAM framework to obtain k-anonymity and l-diversity (Lu et al. 2014). PLAM ensures identity privacy, past and future location privacy, and attack resistance. User preference privacy is achieved without the use of a trusted anonymizer server, and to protect the past and future locations of users, an unlinkable pseudo-ID technique is adopted, with users changing pseudo-IDs at different locations. Additionally, the protocols of PLAM are secure against adversary attacks, hence ensuring authentication, data integrity, and availability. The system model of the PLAM framework consists of users in a local area with a provider and a trusted authority. The latter, though fully trusted in the system, is honest but curious and can snoop into a user's privacy preferences to retrieve side information. The users are also privacy curious, in that they may try to disclose the privacy of other users from available information. Moreover, a user's pseudo-ID can be correlated to his/her real identity by strong adversaries if some side information at a specific location is available.
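To illustrate the general idea behind changing pseudo-IDs, the sketch below derives a fresh pseudonym per epoch and location cell from a device-held secret; this is a generic hash-based illustration, not PLAM's actual construction.

```python
import hashlib
import secrets

# Device-held secret; without it, pseudonyms from different check-ins
# cannot be linked to each other or to the real identity.
user_secret = secrets.token_bytes(32)

def pseudo_id(epoch, cell):
    """Fresh pseudonym per (epoch, location cell)."""
    digest = hashlib.sha256(user_secret + f"{epoch}:{cell}".encode())
    return digest.hexdigest()[:16]

print(pseudo_id(1, "cellA"))  # differs from...
print(pseudo_id(2, "cellB"))  # ...this one: the check-ins are unlinkable
```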

TTP-FREE privacy framework

The TTP-free privacy framework protects both the user's identity and location without the use of a trusted third party by making use of strong cryptographic mechanisms (Al-Badawy et al. 2018). Fake identities are generated on the users' mobile phones to ensure identity protection. A key agreement protocol is applied to establish secure communication channels between the users, and only authorized users can use the system, enforced through an authentication process. The locations of the users and the secure communication channel are encrypted using elliptic curve cryptography. However, server operators can be attackers and ultimately reveal the locations of users, their identities, and the mapping of users' pseudo-IDs to their real names.
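As a sketch of the kind of elliptic-curve key agreement such a framework could use for its secure channels (the library, curve, and key-derivation choices here are illustrative, not those of Al-Badawy et al.):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each user generates an ephemeral elliptic-curve key pair.
alice_priv = ec.generate_private_key(ec.SECP256R1())
bob_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key;
# both arrive at the same shared secret (ECDH).
alice_shared = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())
bob_shared = bob_priv.exchange(ec.ECDH(), alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a symmetric channel key from the shared secret.
channel_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"pbsn-channel").derive(alice_shared)
```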

SOCIOTAL EU framework

The SOCIOTAL EU framework is a privacy-preserving security framework based on the Architecture Reference Model (ARM) for IoT systems, ensuring content generation, publishing, and data sharing in a reliable, secure, and private manner (Bernabe et al. 2014). It consists of different security components, such as authentication, authorization, identity management, a group manager, and trust and reputation. The privacy-preserving identity management ensures anonymity, data minimization, and unlinkability. The access control component employs XACML to make authorization decisions based on access control policies that specify which actions a user or group of users is allowed to perform over a specific resource under certain conditions. The group manager uses an attribute-based encryption mechanism (CP-ABE) and allows information to be shared securely and privately, such that only users satisfying particular identity attributes can decrypt the data.
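A much-simplified sketch of the attribute-based authorization decision that XACML-style policies express; the policy structure and attribute names are hypothetical, and a real XACML engine evaluates a far richer rule language.

```python
def decide(policy, subject_attrs, resource, action):
    """Permit the request only if some rule for this resource/action
    is satisfied by all of the subject's attributes."""
    for rule in policy:
        if rule["resource"] == resource and rule["action"] == action:
            if all(subject_attrs.get(k) == v
                   for k, v in rule["subject"].items()):
                return "Permit"
    return "Deny"

# Hypothetical policy: only neighbourhood-watch members may read.
policy = [{"resource": "sensor42/readings", "action": "read",
           "subject": {"group": "neighbourhood-watch", "role": "member"}}]
attrs = {"group": "neighbourhood-watch", "role": "member"}
print(decide(policy, attrs, "sensor42/readings", "read"))  # Permit
```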

APPLET framework

APPLET is a secure framework for location-based recommender systems that protects user privacy information (Ma et al. 2017). In addition to locations, recommendation results are also protected, since user privacy can be leaked while generating recommendations. The system model of APPLET comprises a Service Provider (SP), whose role is to own attributes and collect historical ratings; a Cloud Platform (CP), which is responsible for storage and computation; a Trusted Authority (TA), which generates private keys; and the Recommendation Users (RUs). Figure 5 illustrates the APPLET framework. Paillier homomorphic encryption is used when the similarities of the venues are computed by the SP, and the encrypted ratings and attributes are sent to the CP as ciphertext. Other cryptographic techniques are used, such as comparable encryption, to protect the users' locations during a recommendation. In this process, the locations of venues are compared with the users' requested areas in the ciphertext domain; using comparable encryption, the venues found in the users' areas can be filtered without revealing the locations of the users. In addition, commutative encryption is used to prevent the leakage of venue attributes, namely the names of the venues and their corresponding locations, from the SP during the CP's response to a user's recommendation request. A security analysis carried out in the study shows that user information, including the historical ratings and similarities of venues, is kept private during the recommendation process and no information is leaked.

Fig. 5 APPLET framework
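The sketch below demonstrates the additive homomorphism of Paillier encryption, the property that lets a cloud platform aggregate encrypted ratings without decrypting them; the tiny primes are for illustration only and offer no security, and this is not APPLET's actual parameterization.

```python
import random
from math import gcd

# Toy Paillier keypair with tiny primes (illustration only, insecure).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)                           # valid because g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts,
# so encrypted ratings can be summed without being revealed.
c1, c2 = encrypt(42), encrypt(58)
print(decrypt((c1 * c2) % n2))  # 100
```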

A privacy-preserving framework for outsourcing location-based services to the cloud

Zhu et al. (2019) proposed a privacy-preserving framework for outsourcing Location-Based Services (LBS) to the cloud with multi-location queries and per-query privacy limits. In this solution, a query scheme is proposed in which users can specify their locations of interest with a minimum privacy degree. For each location, the cloud service returns an area containing the location, from which the latter cannot be inferred. In addition, the cloud service can perform searches while protecting the privacy of user queries and identities. The search by locations and location attributes is carried out using an auxiliary index structure over encrypted data: a hierarchical index reflecting the geographical hierarchy of locations is built, in which each node is represented by a Bloom filter. Furthermore, to protect the searched data and the access pattern of the Bloom filter, function-hiding inner product encryption (FHIPE) is employed to encrypt the Bloom filter. To search by location or location attributes, the cloud service matches the query vector with the index vector by comparing the number of matching bits for locations and attributes. In addition, a fine-grained access control scheme is integrated with the framework, which uses blind signatures to prevent the service provider from learning any information in the query during the authentication process. Key-policy attribute-based encryption (KP-ABE) is used to encrypt the data records in the database. The authors confirmed that data (locations and user identities) are kept confidential from the cloud and that no information is leaked when the cloud performs location queries and searches.
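To illustrate the matching-bits idea behind the Bloom-filter index, here is a plaintext sketch; in the actual framework the filters are encrypted with FHIPE so the cloud computes the match count without seeing the bits, and the hash scheme and location naming below are hypothetical.

```python
import hashlib

def bloom(items, m=64, k=3):
    """Tiny Bloom filter: set k hash-derived bit positions per item."""
    bits = 0
    for item in items:
        for i in range(k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            bits |= 1 << (int.from_bytes(h, "big") % m)
    return bits

def matching_bits(index_bits, query_bits):
    """Count query bits also set in the index node; the cloud compares
    this count against the query's bit count to decide whether the
    node's subtree can possibly match."""
    return bin(index_bits & query_bits).count("1")

# Hypothetical hierarchical node covering two venues, and one query.
node = bloom(["city:NY/venue:cafe", "city:NY/venue:museum"])
query = bloom(["city:NY/venue:cafe"])
print(matching_bits(node, query) == bin(query).count("1"))  # True: possible match
```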

Evaluation of PBSN frameworks with the PISA model

An evaluation of the PBSN frameworks discussed in the above section is carried out using the PISA model by taking into consideration the existing privacy threats of the frameworks and the protection goals that each framework provides. Based on this information, the frameworks are assessed with respect to the associated privacy metrics, as illustrated in Table 12.

Table 12 Evaluation of PBSN frameworks with the PISA model

The protection goals Data Privacy, Spatial Privacy, Unlinkability, Trust, and Security are abbreviated to DP, SP, U, T, and S, respectively, in the table for easier understanding.

Discussions

Based on the above evaluation of the PBSN frameworks, it is observed that at least three protection goals are met by each framework; e.g., the FINE framework ensures Data Privacy, Spatial Privacy, and Security, while the TTP-FREE framework guarantees Data Privacy, Spatial Privacy, Unlinkability, and Security, based on the privacy requirements identified in the frameworks. However, most of them are still prone to several privacy threats, and some of the frameworks rely on trusted authorities that are honest but curious. The PISA model is, hence, used to evaluate the frameworks based on their protection goals and privacy threats, from which the privacy metrics of each framework are deduced.

To start the evaluation of privacy-preserving PBSN frameworks, the privacy solutions as presented by the respective authors are analyzed, from which the privacy requirements are outlined and the protection goals of the PBSN frameworks are subsequently deduced based on Fig. 2. The threat attacks, information leakage, and knowledge of the adversary are also considered for the different frameworks; these privacy shortcomings are explored and associated with the privacy threat models presented in Fig. 3. With the protection goals and privacy threat models gathered as the evaluation criteria, the assessment of the PBSN frameworks can proceed, taking into consideration the related privacy metrics as illustrated in Table 10.

FINE, as depicted in Table 12, is subject to the privacy threat of intrusion, among others, where the cloud server launches attacks to retrieve information. Even though Shao et al. (2014) claim that any attack is stopped at the very beginning, no further details are given on how attacks are prevented. The adversary's success rate is the correct metric to validate this statement: if the adversary, in this case the cloud server, is never successful in finding any information, the rate will be zero, whereas any non-zero value indicates that some attacks succeeded. The success rate of the Location-Based Service (LBS) provider determines how well FINE is protected against information collection threats such as surveillance and evaluates the success of the LBS provider when trying to retrieve information from the communication between the cloud server and the users. Mutual information, in turn, is an important metric for assessing how much information the LBS provider retrieves from the communication by comparing the adversary's estimate with the true value. In addition, the cloud server may collude with malicious users to obtain some information; however, the authors insist that even under such collusion, only basic information, such as whether a ciphertext can be decrypted, is available. The leaked privacy value metric further evaluates this breach of confidentiality by checking how much information is disclosed. For a complete privacy assessment of FINE, the location entropy metric is important to ensure that privacy is protected even if the cloud server and LBS provider attempt attacks; this metric captures the additional information the adversary needs to identify the location of a user or find his/her position.

The degree of unlinkability is the main privacy metric for assessing the unlinkable pseudo-ID technique in the PLAM framework; it helps estimate whether a pseudo-ID can be correlated to the real identity of a user when side information is present. Additionally, asymmetric entropy measures the uncertainty of the adversary in associating a user's pseudonyms with his/her true identity. PLAM also uses a trusted authority that is honest but curious and tries to retrieve user information through surveillance. Relative entropy and the adversary's success rate are the two privacy metrics used to compare the distribution of the adversary's estimate with the true value.

The TTP-free privacy framework provides complete privacy protection, that is, identity and location protection in the absence of a trusted third party. To evaluate its identity protection, entropy is useful for measuring the remaining information that an adversary needs to identify a user or find any other attributes related to the user. Similarly, location entropy is used to assess the location protection ensured by the framework through the cryptographic mechanism used. Entropy is calculated at different points in time to estimate the position of the user; if users are located very close to each other, their exact positions can be exposed even at high-entropy locations. Moreover, although it is a TTP-free framework, the social server and location server are assumed to be dishonest and will try to retrieve user information. The framework is also susceptible to privacy threats such as disclosure, exclusion, and breach of confidentiality, since server operators can act as attackers and reveal user information. The positive information disclosure value provides an estimate of the breach of confidentiality in different scenarios, such as prior knowledge of the adversary, relative security, etc.

Even though the SOCIOTAL EU project framework provides privacy protection, security, and trust, it is noted that a minimal amount of information can still be disclosed in the identity management process. K-anonymity and positive information disclosure are the important privacy metrics for this framework, evaluating how users can be identified or how other private information can be deduced. The degree of unlinkability is equally important in this assessment to measure the unlinkability provided by the framework, as is l-diversity to measure the minimal disclosure of attributes.

Although it is stated that no privacy leakage happens during a recommendation in the APPLET framework, the service provider and cloud provider are curious about the recommendation results, and the cloud provider will try to learn the service provider's historical ratings and similarities. This privacy leakage is measured with mutual information and entropy loss, and the results obtained highlight the degree of privacy leakage of the APPLET framework; the leaked privacy value further confirms it. Moreover, even though adversaries may attempt to eavesdrop on all data transmission between the cloud provider, the service provider, and the users, the authors claimed that adversaries learn nothing about the data. The adversary's estimated error metric demonstrates the incorrectness of the adversary and confirms that adversaries do not retain any information.

Zhu et al. (2019) ensured protection goals such as data privacy, by protecting the privacy of user queries and identities; spatial privacy, by keeping private locations secure through the use of FHIPE and a minimum privacy degree; and security, by providing fine-grained access control and confidentiality. Location entropy measures the protection of the private locations and quantifies the uncertainty in determining the real location of a user among other proximate users. The framework is prone to privacy threats such as surveillance, aggregation, and power imbalance, since the cloud server and LBS provider are assumed to be honest but curious and will attempt to infer information. Additionally, the framework provides a leakage function that ensures complete preservation of data privacy, preventing any adversary from gaining information about the users: to access any user details, an adversary must be able to break one of the FHIPE algorithms, the blind signature mechanism, or the database encryption algorithm. The leaked privacy value and the adversary's estimated error are the metrics used to measure the leakage of information and the incorrectness of the adversary's estimates of user details.

The values obtained from the privacy metrics determine the level of privacy provided by the frameworks, and hence, the protection features can be further improved based on the results of the evaluation.

Table 13 gives an overview of the security implications associated with each of the most used location privacy-preserving techniques. It is observed that these techniques do not fully protect users' data, and various attempts to collect or manipulate available data remain possible. Adversaries can also eavesdrop on all traffic and ultimately deduce information about the users. It should be noted that many privacy-preserving solutions adopt two or more of these techniques to provide better privacy to the users.

Table 13 Security implications of location privacy-preserving techniques

Contributions of PISA

As discussed above, the PISA model allows a rigorous analysis of the different privacy and security provisions in PBSN frameworks. It facilitates the evaluation of PBSN frameworks in terms of privacy by following the approach below:

1. The privacy and security requirements of the PBSN framework are analyzed thoroughly by taking into consideration the privacy techniques used, the privacy protection methods, and the architecture of the framework.

2. Based on this analysis, the potential protection goals of the PBSN framework are identified using Fig. 2.

3. The PBSN framework is further analyzed in terms of privacy loopholes, and the existing threat models are pointed out by identifying the privacy threats that may persist even though privacy-preserving techniques are implemented.

4. Additionally, based on the perceived threat models, the resources and information that may be available to an adversary are identified, and the prospective knowledge of the adversary is considered.

5. Based on the information gathered, namely the protection goals and the threat models, the privacy metrics of the framework are derived from the PISA model as illustrated in Table 10.

6. Once the privacy metrics of the PBSN framework are obtained, its analysis and evaluation can be carried out. The evaluation metrics help quantify the protection goals and can be used to evaluate the different privacy algorithms or techniques used in the framework.

Each privacy metric provides a different level of assessment and evaluation. For instance, entropy metrics indicate the information that a variable may contain; e.g., location entropy measures the uncertainty with which an adversary can disclose the position of a user. Similarly, min-entropy and conditional entropy can be used in the security assessment of cryptographic algorithms, where min-entropy measures irreversibility and conditional entropy measures the number of attempts needed to retrieve the target data. On the other hand, error metrics such as the expected distance error measure how accurately an adversary can estimate a user's position, while information gain/loss metrics such as the leaked privacy value outline the amount of knowledge an adversary can learn.

By making use of the different privacy metrics, different types of assessment can be done for each framework to unveil the privacy and security aspects of the framework.

The PISA model can be useful to privacy-preserving algorithm designers so that different privacy aspects can be considered in advance. This enables a defensive perspective during the development of the algorithms and allows improvements that avoid flaws or loopholes. Additionally, the model can be used as an indispensable tool to endorse or popularize new privacy-preserving algorithms in PBSN systems by supporting an exhaustive analysis of their privacy and security provisions.

Future works and conclusion

In this paper, a privacy assessment framework called PISA is proposed to evaluate privacy-preserving PBSN frameworks and systems. It provides a thorough analysis of the privacy and security goals in PBSN systems by taking into consideration the possible location-related threats to privacy-preserving systems. A comprehensive review of privacy and security requirements for location-based systems is first carried out, based on which five protection goals of PBSN systems are proposed: data privacy, spatial privacy, unlinkability, trust, and security. A series of location privacy threats is investigated and classified to identify the resources and information available to adversaries. Privacy metrics associated with PBSN systems are defined and used to quantify the protection goals presented. The PISA framework allows the assessment and comparison of the different privacy features of PBSN frameworks based on several evaluation criteria. In this study, it was used to evaluate six recent PBSN frameworks in terms of their privacy and security requirements, protection goals, threat models, and privacy metrics. The results validate that the PISA framework supports an extensive analysis, evaluation, and comparison of different privacy-preserving solutions.

Future work on the current research includes extending the framework to consider different adversary models based on their assumptions, goals, and capabilities, and investigating other threat models involving locations. Additional privacy metrics can be considered to provide a more extensive evaluation. The measurability ratings of the assessment can be improved by providing appropriate weight scoring for the different evaluation metrics. To obtain better results on the assessment of privacy-preserving solutions, the proposed PISA model should extend the empirical evaluation on a large scale to other PBSN systems and privacy-preserving algorithms. Different methods of privacy evaluation can also be considered besides analyzing privacy requirements, such as analyzing privacy policies or adopting a use-case modeling method to assess the privacy and security criteria.