
1 Introduction

An effective trust management system should support both cloud service providers (CSP) and consumers. Open issues that still need to be investigated include trust assessment mechanisms, distrusted feedback, poor identification of feedback, privacy of participants, and lack of feedback integration [1].

In this sense, more comprehensive models are necessary, based on a set of representative criteria such as those inspired by Saaty and Ergu [2]. These models should consider various aspects, including reputation, performance, recommendation, policies, regulations, compliance with legislation and standards, accreditation by third party auditors, and mandatory disclosure of information security incidents. Thus, it is necessary to investigate new forms of communication and efficient disclosure of information, considering its relevance and meaningfulness for end users. Indicators can be defined to cover these aspects, pointing to trends based on quantitative and qualitative parameters. These indicators can serve as metrics of the results of CSP actions and processes [3].

In [4], the authors point out that “Trust is a mental state comprising: (1) expectancy—the consumer expects (hopes for) a specific behavior from the provider (such as providing valid information or effectively performing cooperative actions); (2) belief—the consumer believes that the expected behavior occurs, based on the evidence of the provider’s competence and goodwill; and (3) willingness to take risk—the consumer is willing to take risk for that belief.” Trust is thus a matter of calculating advantage and risk under given circumstances, which presupposes that experts will account for security incidents. There is a balance between trust and acceptable risk, guaranteed by the credibility of specialist systems, by expertise, and by contingent systems designed to mitigate the impacts of possible accidents [5].

Privacy is another emerging concern that is not fully addressed by existing models. It has a significant influence on users’ willingness to use cloud services [6]. Web services that violate users’ privacy expectations are penalized by a decline in confidence levels [7].

Contracts with CSP should be transparent, make security issues clear, and define the relevant responsibilities in the business relationship with customers [8]. Transparency relies on information and data provided by cloud providers. Monitoring is another key aspect of trust; it is often performed using metrics imposed by service providers, and decision making relies on systems that continuously collect and process such data. From the users’ perspective, decision making combines security transparency, confidence, and interpretation of the collected data. A comprehensive, relevant, and meaningful trust model should consider all these aspects [9].

We present a consumer-centric framework for trust assessment in cloud computing environments. The proposed indicators give consumers of cloud services a means to assess the trustworthiness of a CSP. Improving consumers’ confidence in cloud environments is a hard task; new criteria and indicators related to sensitive data, supported by proper metrics, are needed.

The remainder of the paper is organized as follows: Sect. 14.2 presents the literature review and related work; Sect. 14.3 describes the conceptual framework for trust assessment; Sect. 14.4 presents an application scenario; Sect. 14.5 discusses our proposal; and finally, Sect. 14.6 presents our final remarks and future work.

2 Literature Review and Related Work

This section summarizes our review and describes and compares related work. The review is based on guidelines for systematic mappings [10, 11]. Questions and keywords were chosen to collect relevant papers from scientific databases such as IEEE Xplore, ACM Digital Library, and SpringerLink. The following search string was used to select an initial set of papers from these databases: ((trust OR confidence) AND (“cloud computing”) AND (“security information” OR privacy)). The search was carried out on titles, covering the period from 2015 to 2019. Articles were first selected based on their relevance according to abstracts and conclusions.

Our literature review points out that few studies focus on trust and transparency of security from the cloud consumers’ point of view. Few articles deal with communication between users and CSP, such as how to give visibility to information security practices and how to enable consumers to understand these practices. Moreover, the reviewed papers do not discuss how to provide meaningful and relevant information to cloud stakeholders, cloud service providers, or decision makers. The articles indicate the need for a unified approach to the following problems: (1) difficulty in accessing security data of cloud systems; (2) proliferation of models and metrics to measure cloud confidence; (3) lack of information on management, resources, and infrastructure aspects; (4) lack of disclosure of information security incidents; and (5) consumers’ difficulty in identifying objective forms of relationship with providers. Table 14.1 summarizes our findings.

Table 14.1 Summary of the Related Work

SOFIC (Security Ontology For InterCloud) [12] is standards based and has been adapted to address the security requirements of different inter-cloud scenarios. A model named “Trust Model for Cloud Computing Environment”, which includes mutual audit management agreements, is proposed in [13]; it establishes a formal relationship involving the relevant legal responsibilities. To establish and control the appropriate contractual requirements, technologies must be adopted to collect the data needed to inform risk decisions, such as access usage, security controls, location, and other data related to the use of the service. Contracts with CSP should be more transparent and more specific, making security issues clear and defining the relevant responsibilities [14].

A taxonomy of trust models and classification of information sources for trust assessment is presented in [15], suggesting a new qualitative solution. A method for calculating security coverage for cloud services is proposed in [16]; it is based on the number and types of installed products and security tools. In [17] the authors propose a method to qualify the security status for cloud computing systems based on an approach with practical elements, techniques and attack graphs.

CloudArmor [18, 19] is a reputation-based trust management framework providing a set of capabilities to deliver Trust as a Service (TaaS). It includes: (1) a protocol to prove the credibility of feedback and preserve users’ privacy; (2) a credibility model to measure the credibility of feedback, protecting cloud services from malicious users and allowing the reliability of cloud services to be compared; and (3) a model to manage the availability of the decentralized implementation of the trust management service. In short, CloudArmor is an adaptive conceptual model for measuring the credibility of user feedback in order to protect cloud services from malicious users.

In [20] a framework is proposed to help cloud service users (CSU) choose a CSP by: (a) allowing CSU to state their security preferences for the desired cloud services; (b) providing a conceptual mechanism to validate the security controls and internal security policies of CSPs published in the CSA (Cloud Security Alliance) Security, Trust and Assurance Registry (STAR) database; and (c) maintaining a database of CSPs along with their responses to the Consensus Assessments Initiative Questionnaire (CAIQ) and the certificates issued by certificate authorities. In [21] the authors extend this work to incorporate third party auditing (TPA) for performing the CAIQ analysis and informing users.

A compliance-based multidimensional trust evaluation system (CMTES) is presented in [22]. It uses a variety of mathematical techniques to provide trust assessment results from the perspective of various stakeholders, such as cloud auditors, peers, and cloud brokers. The framework considers the customer’s perspective in terms of performance and reliability (SLA) of cloud services; thus, issues related to information security and privacy are not part of its assessment.

3 Conceptual Framework for Trust Assessment

A decision can be modeled as a 4-tuple comprising: (1) an understanding of the problem, to minimize doubts and uncertainties; (2) a complete structure representing the factors involved (criteria and alternatives); (3) measurement scales to represent judgments; and (4) a priority rank derived from the numerical judgments.

Next, we present our conceptual proposal—a framework for trust assessment in cloud computing environments. The framework is consumer-centric and deals with trust aspects from the consumer or end user perspective.

The assessment result is presented as numeric Indicators representing the current evaluation, the history of previous evaluations, and the trend of consumer confidence in the cloud service. The Indicators enable a consumer-centric assessment of trust and are adaptable and extensible to other contexts.

The foundations of the proposed framework come from three axioms about what increases users’ confidence in cloud services:

  1. Information about the system leads to trust. Trust increases when there is meaningful and relevant communication, ease of interpretation, ease of access, and credibility of information.

  2. Meeting consumer expectations increases confidence. Performance, protection of privacy and data security, and responsiveness to questions foster trust.

  3. Positive opinions increase confidence. Reputation, recommendation, certification, and audits influence trust.

These axioms serve as the basis for defining the domains that make up the framework.

We consider three domains: Transparency (TP), Security Information (SI), and Governance (GV). These domains support the comprehension and contribute to the achievement of meaningful and relevant results for consumers. Each domain is divided into criteria and sub-criteria.

This section contains five subsections: (a) Conceptualization; (b) Engineering Process; (c) Framework Architecture; (d) Assessment Criteria; and (e) Indicators Calculation.

3.1 Conceptualization

Here, we present the main concepts necessary to understand our framework. A lightweight ontology represents the relationships among these concepts; its hierarchy is shown in Fig. 14.1.

Fig. 14.1 Hierarchy of the proposed lightweight ontology

Governance (GV) is the comprehensive set of requirements that supports organizations in managing day-to-day processes and in assessing security, privacy, regulatory, and business imperatives; it helps organizations move forward with some degree of control, so as to obtain the customer’s confidence.

Security Information (SI) is the aggregation of people’s effort, processes, and technology that supports organizations in providing confidentiality, integrity, and availability for their information assets.

Transparency (TP) means “revealing sufficient information” to enable strategic decisions, while providing mechanisms that ensure the confidentiality needs of the CSP. Security transparency can be understood as the appropriate dissemination of the governance aspects of security controls, policies, and practices.

3.2 Engineering Process

We follow the six-step process proposed in [23] to develop our framework. Steps 1–4 are planning steps, Step 5 is the examination process, and Step 6 is the decision-making process. We address Steps 1–5 and do not discuss the decision-making step, which is highly complex and context dependent. We expect that the rigorous development of our evaluation model will deliver good Indicators for improving the decision-making process.

  • Step 1: Select the target of evaluation. This refers to the object under evaluation. We have chosen to evaluate cloud computing services from the consumer perspective, regardless of whether they are IaaS, PaaS, or SaaS.

  • Step 2: Identify assessment criteria. The literature often distinguishes between properties and attributes but, as argued in [24], we treat them as interchangeable and refer to both as criteria. Our proposal considers that the evaluation of criteria is carried out through questions. Criteria are described in Subsection (d), Assessment Criteria.

  • Step 3: Define the evaluation yardstick. A yardstick is a standard measure used to compare or judge a certain target. Choosing the appropriate scale is a hard task and depends on the person and the decision problem [24]. The numerical values used in the scale affect an individual’s preferences; we cannot assume that a given method of preference disclosure is entirely independent of the measurement scale. The use of verbal responses is intuitive but may introduce ambiguity into nontrivial comparisons.

Verbal statements can be represented by an ordered scale, which is a feasible alternative when the evaluator does not have a comprehensive understanding of the problem [25].

We propose the following scale, inspired by a five-point Likert ordinal scale [26]: 0 (Non-presence); 1 (Strongly Disagree or Minimal Confidence); 2 (Disagree or Acceptable Confidence); 3 (Agree or Good Confidence); and 4 (Strongly Agree or High Confidence). The scale has no neutral point, so the evaluator is required to express either a positive or a negative opinion.
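To make the scale concrete, it could be encoded as a simple mapping; this is a minimal sketch, and the dictionary name is our own choice, not part of the framework:

```python
# Illustrative encoding of the proposed five-point scale.
# The labels follow the text; the name CONFIDENCE_SCALE is ours.
CONFIDENCE_SCALE = {
    0: "Non-presence",
    1: "Strongly Disagree / Minimal Confidence",
    2: "Disagree / Acceptable Confidence",
    3: "Agree / Good Confidence",
    4: "Strongly Agree / High Confidence",
}
# Note the absence of a neutral midpoint: every score commits the
# evaluator to a positive or a negative judgment.
```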

  • Step 4: Select and develop data gathering techniques. This step comprises the data gathering techniques required to analyze each evaluation criterion. We have chosen document review, service monitoring tools, reputation or evaluation forms (checklists), third party auditing, and recommendation as the primary techniques used to collect data from a specific resource.

  • Step 5: Select and develop synthesis techniques. This refers to a set of well-defined steps and activities to synthesize all data and information (including the degree of importance of each criterion) and evaluate a target against the criteria. The synthesis techniques and equations are described in Subsection (e), Indicators Calculation.

  • Step 6: Decision-making process. This refers to a series of specific activities and tasks executed to solve a specific decision problem.

3.3 Framework Architecture

Cloud Service Evaluation Methods (CSEMs) based on Multi-Criteria Decision-Making (MCDM) have been developed for different purposes, such as classifying, selecting, composing, adopting, improving, and comparing cloud services. The results of our framework are intended to feed such MCDM-based decision making. Our aim is to meet the recommendations of Saaty and Ergu [2]. We apply the framework first in the cloud context because it is a well-established service platform that allows us to test and validate the proposal. Once restrictions or gaps in the framework are known, it can be adapted and extended to other platforms and contexts.

In Fig. 14.2, we present the layered functional architecture of the framework. Collecting gathers data from the provider and external sources. Processing performs the data processing (criteria and metrics). A database of metrics and indicators supports the framework. Monitoring tracks performance and revealed information. Decision Making provides data for decision making. Interface provides the visualization of indicators and allows the consumer to set parameters and input scores.

Fig. 14.2 Layered functional architecture of the framework
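As a rough illustration of how the layers in Fig. 14.2 could interact, consider the sketch below; all class and method names are hypothetical assumptions, since the text does not prescribe an API:

```python
# Hypothetical sketch of the layered architecture in Fig. 14.2.
# All names are illustrative assumptions, not a prescribed API.

class Collecting:
    """Gathers raw scores from the provider and external sources."""
    def collect(self) -> dict[str, list[float]]:
        return {"CGV1": [1.0], "CGV2": [2.0]}  # stubbed sub-criterion scores

class Processing:
    """Applies criteria and metrics to the collected data."""
    def process(self, raw: dict[str, list[float]]) -> dict[str, float]:
        return {cid: sum(vals) / len(vals) for cid, vals in raw.items()}

class Interface:
    """Visualizes indicators and accepts consumer parameters and scores."""
    def show(self, processed: dict[str, float]) -> None:
        for cid, score in processed.items():
            print(f"{cid}: {score:.2f}")

Interface().show(Processing().process(Collecting().collect()))
```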

3.4 Assessment Criteria

When we evaluate trust in cloud services, the information security facet is the first concern, but it is not enough: other factors such as privacy, performance, transparency, and communication carry relevant weight in the trust assessment of cloud providers [27]. All these factors must be evaluated through criteria, and the choice of criteria and the composition of the model must follow requirements that make the model accomplish the objectives it proposes.

The evaluation criteria follow three principles [23]: (1) Understandability: evaluation criteria are well defined, meaningful for decision makers, easy to understand, clear, and unambiguous; (2) Decomposability: evaluation criteria can be decomposed from the top of the hierarchy to its bottom, covering all important characteristics of the decision-making problem and simplifying the evaluation process; and (3) Reliability: evaluation criteria are formulated based on reliable sources and verified using a formal verification approach.

The criteria and sub-criteria were defined by a group of five experts in information security and cloud computing and were grouped into three domains (a representational sketch in code follows the list):

  • Governance (GV)—Security Design: Security infrastructure (CGV1); Countermeasures (CGV2). Recommendation: Third Party Auditing (CGV3); Experts Recommendation (CGV4). Reputation: Users Rating Average (CGV5). Privacy: Privacy Impact Assessment (CGV6); Anonymization Techniques (CGV7).

  • Transparency (TP)—Reveal Information: Security Information Disclosure (CTP1); Mandatory Disclosure (CTP2). Information Disclosure: Regulatory Requirements (CTP3); Security Incidents (CTP4); Customer Service (CTP5). Periodic Communication: Reports (CTP6); Warnings (CTP7).

  • Security Information (SI)—Resources: Human Resources (CSI1); Security Operations Center (CSI2); Governance Structure (CSI3); Technological Resources (CSI4). Certifications: Standards (CSI5). Contractual Guarantees: Insurance (CSI6); Penalty (CSI7); Reparation (CSI8). Monitoring: Performance (CSI9); Green Clause (CSI10).
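For illustration only, the hierarchy above could be represented as a nested mapping; the criterion IDs are those defined in the text:

```python
# Sketch: domains, criteria groups, and sub-criterion IDs from the text.
CRITERIA = {
    "GV": {"Security Design": ["CGV1", "CGV2"],
           "Recommendation": ["CGV3", "CGV4"],
           "Reputation": ["CGV5"],
           "Privacy": ["CGV6", "CGV7"]},
    "TP": {"Reveal Information": ["CTP1", "CTP2"],
           "Information Disclosure": ["CTP3", "CTP4", "CTP5"],
           "Periodic Communication": ["CTP6", "CTP7"]},
    "SI": {"Resources": ["CSI1", "CSI2", "CSI3", "CSI4"],
           "Certifications": ["CSI5"],
           "Contractual Guarantees": ["CSI6", "CSI7", "CSI8"],
           "Monitoring": ["CSI9", "CSI10"]},
}
```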

3.5 Indicators Calculation

A framework for evaluation should provide a complete mathematical and logical solution with its justifications. Therefore, we give a formal mathematical representation of the logic and reasoning behind the theory underlying the evaluation model. Metrics and indicators are proposed, along with a sequence of calculation steps.

Indicators are calculated for each domain (GV, TP, SI). If there is more than one evaluator per criterion, the geometric mean should be calculated for each sub-criterion, so that a single value per sub-criterion enters the calculations; it has been shown that the geometric mean, not the arithmetic mean, is the correct way to aggregate such judgments [28].
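A minimal sketch of this aggregation, assuming strictly positive scores (a zero score would drive the geometric mean to zero); the function name is ours:

```python
import math

def aggregate(scores: list[float]) -> float:
    """Geometric mean of several evaluators' scores for one sub-criterion [28]."""
    return math.prod(scores) ** (1.0 / len(scores))

# Example: three evaluators score CGV1 as 2, 3, and 2.
print(round(aggregate([2, 3, 2]), 2))  # -> 2.29
```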

Thus, for each domain, the Indicators are calculated through eight steps:

  1. Evaluate all sub-criteria Cxxi;

  2. Calculate, through Eq. (14.1), the arithmetic mean per domain based on the values of Step 1 (GVj, TPj, SIj);

  3. Calculate the difference between the mean of the current month and the mean of the immediately previous month (values obtained in Step 2), e.g. (GVj − GVj−1); the result represents the trend for the future;

  4. Calculate the average of the last 12 means obtained in Step 2; the result represents the history;

  5. Add the following weighted terms: k1 times the value of Step 2, k2 times the value of Step 3, and k3 times the value of Step 4. This weighted sum summarizes the current assessment, the trend for the future, and the history of the evaluations;

  6. Divide the result of Step 5 by 2^m, where m is the number of catastrophic or extremely damaging security incidents that occurred in the month, such as data breaches, leakage of customer information, or events requiring disaster recovery;

  7. Calculate the Indicator for the domain by multiplying the result of Step 6 by the relationship bonus RB, which adds from 1 to 10% according to the relationship time and is assigned by the consumer based on the experience in the last evaluation period (Eq. (14.2));

  8. Present the Indicators, which reflect the trust placed by the consumer in the cloud service under evaluation.

Equation (14.2) combines three terms: the first represents the current assessment (the proportional term), the second the trend, and the third the history. The terms are weighted by the parameters k1, k2, and k3, each ranging from 0.00 to 1.00, with k1 + k2 + k3 = 1.00. The relationship bonus RB and the incident count m reflect the dynamism of the cloud service: RB can increase trust by up to 10%, while serious security incidents divide trust by 2^m.

The example shown applies to the GV domain; the same formulas apply to the other two domains (TP and SI). CGVi represents the score (0–4) of the criterion under evaluation, defined by the evaluator, and IGVj represents the GV Indicator for month j. The assessment should be performed monthly, providing a follow-up on the behavior of the CSP.

$$ {GV}_j=\frac{\sum_{i=1}^{n}{CGV}_i}{n} $$
(14.1)
$$ {IGV}_j=\frac{\left(k_1\times {GV}_j+k_2\times \left({GV}_j-{GV}_{j-1}\right)+k_3\times \frac{\sum_{i=j-11}^{j}{GV}_i}{12}\right)\times RB}{2^m} $$
(14.2)
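The following sketch transcribes Eqs. (14.1) and (14.2) directly. The function names are ours; we assume RB is expressed as a multiplier (1.00 for no bonus, up to 1.10 for the maximum 10% bonus), and that when fewer than 12 monthly means exist the history is averaged over the available ones, as in the Appendix example:

```python
def domain_mean(scores: list[float]) -> float:
    """Eq. (14.1): arithmetic mean of one domain's sub-criterion scores."""
    return sum(scores) / len(scores)

def indicator(means: list[float], k1: float, k2: float, k3: float,
              rb: float = 1.0, m: int = 0) -> float:
    """Eq. (14.2): monthly Indicator for one domain.

    means -- monthly domain means, oldest first (means[-1] is the current GV_j)
    rb    -- relationship bonus, assumed to be a multiplier in [1.00, 1.10]
    m     -- number of catastrophic security incidents in the month
    """
    assert abs(k1 + k2 + k3 - 1.0) < 1e-9, "k1 + k2 + k3 must equal 1.00"
    current = means[-1]                                      # Step 2
    trend = current - means[-2] if len(means) > 1 else 0.0   # Step 3
    history = sum(means[-12:]) / min(len(means), 12)         # Step 4
    weighted = k1 * current + k2 * trend + k3 * history      # Step 5
    return weighted * rb / 2 ** m                            # Steps 6-7
```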

4 An Application Scenario

Consider an application scenario in which a DevOps team (a software house) needs to evaluate a cloud service (IaaS and PaaS) in order to choose a CSP, considering features, costs, etc. DevOps describes a set of practices for integration between the software development, operations (infrastructure), and support teams (e.g., quality control), together with the adoption of automated processes for fast and secure deployment of applications and services. It enables CI/CD (continuous integration/continuous deployment), i.e., agile application development.

Members of the team answer structured questions by means of an online form. Four security experts prepared the questions in advance as part of the proposed framework, setting it up according to the expectations and needs of the DevOps team. The form is part of a system that collects all answers and performs the calculations needed to produce the trust Indicators for the cloud service under assessment. In this way, an average consumer can easily use the framework and perform the assessment. The team should repeat the assessment periodically to get an overview of how confidence in the contracted service is evolving; with these results it is possible to decide on any changes that prove necessary.

The DevOps team is completely dependent on the CSP and its services to operate its business; hence the importance of the trust placed in the CSP.

The team is distributed around the world, with a central office where policies are defined and most management tasks are performed. Its priorities are the reliability and confidentiality of the service. The team applies the proposed framework to evaluate trust in the chosen service; an evaluation was carried out, and the outcomes of the initial assessment are presented in the Appendix.
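For illustration, substituting the Appendix values for the Governance domain (GVj = 2.14, k1 = 0.50, k2 = 0.25, k3 = 0.25, RB = 1, m = 0; in this first evaluation the trend term is zero and the 12-month history reduces to the current mean) into Eq. (14.2) gives:

$$ {IGV}_j=\frac{\left(0.50\times 2.14+0.25\times 0.00+0.25\times 2.14\right)\times 1}{2^0}=1.07+0.00+0.53\approx 1.60 $$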

5 Discussion

We designed the framework to be coherent with the definition of trust adopted in Sect. 14.1, with the axioms presented in Sect. 14.3, and with the end user’s decision-making perspective. The consumer relies on cooperation, goodwill, competence, explicit contractual guarantees, expert and consumer recommendations, and contingent systems that can mitigate negative impacts.

The proposed Indicators (IGV, ITP, ISI; Sect. 14.3.5) represent the consumer’s evaluation of the provider. These Indicators have internal validity because the bases employed in their construction are theoretically and contextually grounded and have shared meanings between the participants (consumer and provider). The intended external validity refers to the possibility of generating knowledge that contributes to the improvement of services and of the interaction among participants. The Indicators are used as outcome metrics for the processes and actions of the CSP.

The proposed architecture has operational characteristics adapted from [29]: (1) Appropriateness: the quality of being suitable for the problem at hand; (2) Ease of use: no expert is needed to supervise the usage process; (3) Reliability: evaluation criteria are formulated based on reliable and verified sources; and (4) Validity: justifications are used to validate its procedures and demonstrate its effectiveness with real-world examples.

The measurement scale, introduced to evaluate the performance of each alternative with respect to each criterion, can handle the classification of both tangible and intangible criteria. The values assigned to each criterion are synthesized by a merge function to obtain the outcomes (Sect. 14.3.5).

The framework is adaptable to different contexts via parameterization and the formulation of evaluation questions; for example, it can be extended to contexts such as IoT or edge computing. The month-to-month comparison shows the evolution of trust.

The GV Indicator represents how the CSP is structured, based on technological resources, third party evaluations, as well as opinions and audits. The TP Indicator represents security transparency and relationship with the consumer. The SI Indicator represents the contractual guarantees, performance monitoring and the socio-technical resources.

The framework is simple to use and provides the capability to build a comprehensive decision structure, with breadth, depth and merit. This is particularly relevant when the decision is complex and, in addition, involves Benefits, Opportunities, Costs and Risks (BOCR).

Therefore, the framework provides valid outcomes useful for different types of decisions.

5.1 Obstacles of Cloud Assessment Models

Assessments rely on data from the CSPs that are not always available, and it is often impossible to know which protocols were used to collect these data. There is a lack of information to provide security transparency. This circumstance is changing, as users demand their rights as consumers of services, which should be protected by consumer protection laws. More regulation of these services is also needed to overcome obstacles in communicating with providers, as well as to mandate the notification of events that are significant for the security and trust of the services. There is still much uncertainty as to the representativeness of the adopted criteria and parameterization; by using the proposed framework, it becomes possible to adjust these criteria and parameters.

      

Appendix

Initial assessment of the Governance domain. Parameters: k1 = 0.50; k2 = 0.25; k3 = 0.25; RB = 1; m = 0.

Domain | Criteria | ID | Sub-criteria | Criteria Score (0–4)
Governance | Security design | CGV1 | Security infrastructure | 1
Governance | Security design | CGV2 | Countermeasures installed | 2
Governance | Recommendation | CGV3 | Third party auditing | 3
Governance | Recommendation | CGV4 | Experts recommendation | 1
Governance | Reputation | CGV5 | Average users assessment | 3
Governance | Privacy | CGV6 | Privacy impact assessment | 4
(continued)

Score average GVj = 2.14. Computed values: k1 term = 1.07; k2 term = 0.00; k3 term = 0.53; partial results 1.62 and 1.60; IGVj = 1.60.

6 Conclusion

Cloud computing has received much attention in recent years. Customers’ confidence and trust in cloud services are impacted by the cost, responsibilities, quality, and assurance provided by Cloud Service Providers (CSP).

A consumer-centric framework for trust assessment in cloud computing environments is proposed; it provides metrics and indicators that allow consumers of cloud services to measure trust in a CSP. The calculation considers, for example, security events or incidents with great impact on trust, a relationship bonus, the history of evaluations, and trends. The framework is extensible and can be applied to other complex contexts, such as IoT, edge, and fog computing.

A conceptual formalization, expressed by means of a lightweight ontology, is proposed and described. It models the hierarchy of the main concepts of trust assessment in the cloud-computing context.

Our main contributions are: (1) a conceptual framework, composed of indicators and processes for trust assessment; (2) a lightweight ontology that includes the hierarchy of the main concepts on trust assessment; and (3) an application scenario that simulates the usage of the conceptual framework in a trust assessment of a CSP.

The contribution of the article is significant because it proposes a framework that meets the need to assess consumer confidence. Consumers do not need extensive knowledge of the operational aspects of cloud services to carry out the evaluation. The ease of evaluating and monitoring the evolution of trust in the relationship between the CSP and the contractor also contributes to the cloud computing research area. To the best of our knowledge, this is the only framework that uses Indicators to present the results, performing the trust assessment from the consumers’ point of view.

Our framework contributes to the improvement of confidence assessment models for complex environments (e.g. Systems-of-systems), by using a unified approach that considers tangible and intangible criteria and socio-technical aspects. It also contributes to overcoming the shortcomings presented in Sect. 14.2.

6.1 Future Research and Recommendations

Important aspects to be considered in future research relate to the transparency of security, metrics for measuring service levels, and the interpretation of the data used in decision making. Ease of use, ease of interpretation, and ease of access must also be considered. These aspects need to be addressed by a relevant, meaningful, and comprehensive framework. There is a wealth of underutilized data on the back end of providers that could be used to improve service quality for both providers and consumers. When evaluating complex systems, users must evaluate the security aspects of these environments; the complexity of such an assessment is so high that a team of experts is needed, and the evaluation becomes outdated in a short time.

A better approach is to assess the trust placed in this complex environment rather than the cyber security of the environment itself. The responsibility for cyber security rests with the provider, while consumers are responsible for their own environment. By using Indicators it is possible to communicate, in a simple and meaningful manner, how well the security, reliability, and other aspects of a cloud service are performing.

Further studies are also needed on which Indicators best represent the qualities and characteristics of the CSPs under evaluation. Approaches that use Indicators therefore seem more promising, as they reveal trends, incorporate several evaluation actors, and communicate in a simpler manner.