Abstract
While many maintainability metrics have been explicitly designed for service-based systems, tool-supported approaches to automatically collect these metrics are lacking. Especially in the context of microservices, decentralization and technological heterogeneity may pose challenges for static analysis. We therefore propose the modular and extensible RAMA approach (RESTful API Metric Analyzer) to calculate such metrics from machine-readable interface descriptions of RESTful services. We also provide prototypical tool support, the RAMA CLI, which currently parses the formats OpenAPI, RAML, and WADL and calculates 10 structural service-based metrics proposed in scientific literature. To make RAMA measurement results more actionable, we additionally designed a repeatable benchmark for quartile-based threshold ranges (green, yellow, orange, red). In an exemplary run, we derived thresholds for all RAMA CLI metrics from the interface descriptions of 1,737 publicly available RESTful APIs. Researchers and practitioners can use RAMA to evaluate the maintainability of RESTful services or to support the empirical evaluation of new service interface metrics.
Keywords
- RESTful services
- Microservices
- Maintainability
- Size
- Complexity
- Cohesion
- Metrics
- Static analysis
- API documentation
1 Introduction
Maintainability, i.e. the degree of effectiveness and efficiency with which a software system can be modified to correct, improve, extend, or adapt it [17], is an essential quality attribute for long-living software systems. To manage and control maintainability, quantitative evaluation with metrics [9] has long been an established practice. In systems based on service orientation [22], however, many source code metrics lose their importance due to the increased level of abstraction [4]. For microservices as a lightweight and fine-grained service-oriented variant [20], factors like the large number of small services, their decentralized nature, or their high degree of technological heterogeneity may hinder metric collection and the applicability of existing metrics, which has also been reported in the area of performance testing [11]. Several researchers have therefore focused on adapting existing metrics and defining new ones for service orientation (see e.g. our literature review [7] or the one by Daud and Kadir [10]).
However, approaches to automatically collect these metrics are lacking, and for the few existing ones, tool support is rarely publicly available (see Sect. 2). This significantly hinders empirical metric evaluation as well as industry adoption of service-based metrics. To circumvent the described challenges, we therefore propose a metric collection approach focused on machine-readable RESTful API descriptions. RESTful web services are resource-oriented services that employ the full HTTP protocol with methods like GET, POST, PUT, or DELETE as well as HTTP status codes to expose their functionality on the web [23]. For microservices, RESTful HTTP is one of the primary communication protocols [20]. Since this protocol is popular in industry [5, 26] and API documentation formats like WADL, OpenAPI, or RAML are widely used, such an approach should be broadly applicable to real-world RESTful services. First, relying on machine-readable RESTful documentation avoids having to implement tool support for several programming languages. Second, such documents are often created reasonably early in the development process if a design-first approach is used. And lastly, if such documents do not exist for a system, they can often be generated automatically, which is supported by popular RESTful frameworks like Spring Boot.
While formats like OpenAPI have been used in many analysis and reengineering approaches for service- and microservice-based systems [18, 19, 25], there is so far no broadly applicable and conveniently extensible approach to calculate structural service-based maintainability metrics from interface specifications of RESTful services. To fill this gap, we propose a new modular approach for the static analysis of RESTful API descriptions called RAMA (RESTful API Metric Analyzer), which we describe in Sect. 3. Our prototypical tool support to show the feasibility of this approach, the RAMA CLI, is able to parse the popular formats OpenAPI, RAML, and WADL and calculates a variety of service interface metrics related to maintainability. Lastly, we also conducted a benchmark-based threshold derivation study for all metrics implemented in the RAMA CLI to make measurements more actionable for practitioners (see Sect. 4).
2 Related Work
Because static analysis for service orientation is very challenging, most proposals so far have focused on programming-language-independent techniques. In the context of service-oriented architecture (SOA), Gebhart and Abeck [13] developed an approach that extracts metrics from the UML profile SoaML (Service-oriented architecture Modeling Language). The metrics used relate to the quality attributes unique categorization, loose coupling, discoverability, and autonomy.
For web services, several authors also used WSDL documents as the basis for maintainability evaluations. Basci and Misra [3] calculated complexity metrics from them, while Sneed [27] designed a tool-supported WSDL approach with metrics for quantity or complexity as well as maintainability design rules.
To identify linguistic antipatterns in RESTful interfaces, Palma et al. [21] developed an approach that relies on semantic text analysis and algorithmic rule cards. They do not use API descriptions like OpenAPI. Instead, their tool support invokes all methods of an API under study to document the necessary information for the rule cards.
Finally, Haupt et al. [14] published the most promising approach. They used an internal canonical data model to represent the REST API and converted both OpenAPI and RAML into this format via the epsilon transformation language (ETL). While this internal model is beneficial for extensibility, the chosen transformation relies on a complex model-driven approach. Moreover, the extensibility for metrics remains unclear and some of the implemented metrics simply count structural attributes like the number of resources or the number of POST requests. The model also does not take data types into account, which are part of many proposed service-based cohesion or complexity metrics. So, while the general approach from Haupt et al. is a sound foundation, we adjusted it in several areas and made our new implementation publicly available.
3 The RAMA Approach
In this section, we present the details of our static analysis approach called RAMA (RESTful API Metric Analyzer). To design RAMA, we first analyzed existing service-based metrics to understand which of them could be derived solely from service interface definitions and what data attributes would be necessary for this. This analysis relied mostly on the results of our previous literature review [7], but also took some newer or not covered publications into account. Additionally, we analyzed existing approaches for WSDL and OpenAPI (see Sect. 2). Based on this analysis, we then developed a data model, an architecture, and finally prototypical tool support.
Relying on a canonical data model to which each specification format has to be converted increases the independence and extensibility of our approach. RAMA’s internal data model (see Fig. 1) was constructed based on entities required to calculate a wide variety of complexity, size, and cohesion metrics. While we tried to avoid unnecessary properties, we still needed to include all metric-relevant attributes and also to find common ground between the most popular RESTful description languages.
The hierarchical model starts with a SpecificationFile entity that contains necessary metadata like a title, a version, or the specification format (e.g. OpenAPI or RAML). It also holds a single API wrapper entity consisting of a base path like e.g. /api/v1 and a list of Paths. These Paths are the actual REST resources of the API and each one of them holds a list of Methods. A Method represents an HTTP verb like GET or POST, i.e. in combination, a Path and a Method form a service operation, e.g. GET /customers/1/orders to fetch all orders from customer with ID 1. Additionally, a Method may have inputs, namely Parameters (e.g. path or query parameters) and RequestBodies, and outputs, namely Responses. Since RequestBodies and Responses are usually complex objects of ContentMediaTypes like JSON or XML, they are both represented by a potentially nested DataModel with Properties. Both Parameters and Properties contain the used data types, as this is important for cohesion and complexity metrics. This model represents the core of the RAMA approach.
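The described hierarchy can be sketched as plain Java classes. This is a hypothetical simplification (class and field names are our own): the actual model is generated from a protocol buffers schema and additionally covers Parameters, RequestBodies, Responses, and nested DataModels with data types.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified rendering of RAMA's canonical data model.
class Method {
    String httpVerb; // e.g. GET or POST
    Method(String httpVerb) { this.httpVerb = httpVerb; }
}

class Path {
    String route; // a REST resource, e.g. /customers/{id}/orders
    List<Method> methods = new ArrayList<>();
    Path(String route) { this.route = route; }
}

class Api {
    String basePath; // e.g. /api/v1
    List<Path> paths = new ArrayList<>();
    Api(String basePath) { this.basePath = basePath; }
}

public class SpecificationFile {
    String title, version, format; // metadata, e.g. format = "OpenAPI"
    Api api;

    // A Path combined with a Method forms one service operation.
    static int countOperations(Api api) {
        int ops = 0;
        for (Path p : api.paths) ops += p.methods.size();
        return ops;
    }

    public static void main(String[] args) {
        Api api = new Api("/api/v1");
        Path orders = new Path("/customers/{id}/orders");
        orders.methods.add(new Method("GET"));
        api.paths.add(orders);
        System.out.println(countOperations(api)); // 1 operation: GET /customers/{id}/orders
    }
}
```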
Based on the described data model, we designed the general architecture of RAMA as a simple command line interface (CLI) application that loosely follows the pipes and filters architectural style. One module type in this architecture is Parser. A Parser takes a specific REST description language like OpenAPI as input and produces our canonical data model from it. Metrics represent the second module type and are calculated from the produced data model. The entirety of calculated Metrics form a summarized results model, which is subsequently presented as the final output by different Exporters. This architecture is easily extensible and can also be embedded in other systems or a CI/CD pipeline.
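The three module types can be illustrated with Java interfaces. The module names (Parser, Metric, Exporter) come from the architecture described above, but the signatures and the toy implementations are our own assumptions; the real RAMA CLI additionally discovers metric modules dynamically via the Reflection API.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Stand-in for the canonical data model that every Parser produces.
class CanonicalApiModel {
    List<String> operations; // e.g. "GET /orders"
    CanonicalApiModel(List<String> operations) { this.operations = operations; }
}

interface Parser {
    // Converts one description format (OpenAPI, RAML, WADL) into the canonical model.
    CanonicalApiModel parse(String specificationContent);
}

interface Metric {
    String name();
    double calculate(CanonicalApiModel model);
}

interface Exporter {
    void export(Map<String, Double> results); // e.g. JSON, PDF, or terminal output
}

public class Pipeline {
    // Example metric: the number of operations in the parsed model.
    static class OperationCount implements Metric {
        public String name() { return "OperationCount"; }
        public double calculate(CanonicalApiModel m) { return m.operations.size(); }
    }

    // Pipes and filters: parse once, apply every metric, collect the results.
    static Map<String, Double> run(Parser parser, String spec, List<Metric> metrics) {
        Map<String, Double> results = new LinkedHashMap<>();
        CanonicalApiModel model = parser.parse(spec);
        for (Metric metric : metrics) results.put(metric.name(), metric.calculate(model));
        return results;
    }

    public static void main(String[] args) {
        // Toy parser: treats the "specification" as a comma-separated operation list.
        Parser toyParser = spec -> new CanonicalApiModel(List.of(spec.split(",")));
        Map<String, Double> results =
                run(toyParser, "GET /orders,POST /orders", List.of(new OperationCount()));
        System.out.println(results); // {OperationCount=2.0}
    }
}
```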
The prototypical implementation of this approach is the RAMA CLI. It is written in Java and uses Maven for dependency management. For metric modules, a plugin mechanism based on Java interfaces and the Java Reflection API enables the dynamic inclusion of newly developed metrics. We present an overview of the implemented modules in Fig. 2.
For our internal data model, we used the protocol buffers format developed by Google. Since it is language- and platform-neutral and easily serializable, it can be used in diverse languages and technologies. There is also a tooling ecosystem around it that allows conversion between protocol buffers and various RESTful API description formats. From this protobuf model, the necessary Java classes are automatically generated (Canonical REST API Model in Fig. 2).
With respect to input formats, we implemented Parsers for OpenAPI, RAML, and WADL, since these are among the most popular ones based on GitHub stars, Google search hits, and StackOverflow posts [15]. Moreover, most of them offer a convenient tool ecosystem that we can use in our Parser implementations. A promising fourth candidate was the Markdown-based API Blueprint, which seems to be rising in popularity. However, since there is so far no Java parser for this format, we did not include it in the first prototype.
The RAMA CLI currently implements 10 service-based maintainability Metrics proposed in five different scientific publications (see Table 1), namely seven complexity metrics, two cohesion metrics, and one size metric. We chose these metrics to cover a diverse set of structural REST API attributes, which should demonstrate the potential scope of the approach. We slightly adjusted some of the metrics for REST, e.g. the ones proposed for WSDL. For additional details on each metric, please refer to our documentation or the respective source.
Finally, we implemented two Exporters for the CLI, namely one for a PDF and one for a JSON file. Additionally, the CLI automatically outputs the results to the terminal. While this prototype already offers a fair amount of features and should be broadly applicable, the goal was also to ensure that it can be extended with little effort. In this sense, the module system and the usage of interfaces and the Reflection API make it easy to add new Parsers, Metrics, or Exporters so that the RAMA CLI can be of even more value to practitioners and researchers.
4 Threshold Benchmarking
Metric values on their own are often difficult to interpret. Some metrics have a lower or an upper bound (e.g. a percentage between 0 and 1) and a known direction, i.e. whether lower values are better or worse. However, that is often still not enough to derive implications from a specific measurement. To make metric values more actionable, thresholds can play a valuable role [28]. We therefore designed a simple, repeatable, and adjustable threshold derivation approach to ease the application of the metrics implemented within RAMA.
4.1 Research Design
Since it is very difficult to rigorously evaluate a single threshold value, the majority of proposed threshold derivation methods analyze the measurement distribution over a large number of real-world systems. These methods are called benchmark-based approaches [2] or portfolio-based approaches [8]. Since a large number of RESTful API descriptions are publicly available, we decided to implement a simple benchmark-based approach.
Inspired by Bräuer et al. [8], we formed our labels based on the quartile distribution. Therefore, we defined a total of four ranked bands into which a metric value could fall (see also Table 2), i.e. with the derived thresholds, a measurement could be in the top 25%, between 25% and the median, between the median and 75%, or in the bottom 25%. Depending on whether lower is better or worse for the metric, each band was associated with one of the colors green, yellow, orange, and red (ordered from best to worst). If a metric result is in the worst 25% (red) or between the median and the worst 25% (orange) of analyzed systems, it may be advisable to improve the related design property.
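The band assignment can be made concrete with a short sketch (our own illustrative code, not the actual benchmark scripts): the thresholds are the quartiles of a benchmark sample, and a new measurement is mapped to a color, here for a metric where lower values are better.

```java
import java.util.Arrays;

// Illustrative sketch of quartile-based threshold bands (green/yellow/orange/red).
public class ThresholdBands {
    // Simple quantile via linear interpolation over a sorted sample.
    static double quantile(double[] sorted, double q) {
        double pos = q * (sorted.length - 1);
        int lo = (int) Math.floor(pos), hi = (int) Math.ceil(pos);
        return sorted[lo] + (pos - lo) * (sorted[hi] - sorted[lo]);
    }

    // For a lower-is-better metric: best quartile is green, worst is red.
    static String band(double value, double[] benchmark) {
        double[] s = benchmark.clone();
        Arrays.sort(s);
        double q1 = quantile(s, 0.25), med = quantile(s, 0.5), q3 = quantile(s, 0.75);
        if (value <= q1) return "green";   // top 25%
        if (value <= med) return "yellow"; // between 25% and the median
        if (value <= q3) return "orange";  // between the median and 75%
        return "red";                      // bottom 25%
    }

    public static void main(String[] args) {
        double[] benchmark = {1, 2, 3, 4, 5, 6, 7, 8};
        System.out.println(band(2.0, benchmark)); // green
    }
}
```

For a higher-is-better metric, the color order would simply be reversed.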
To derive these thresholds per RAMA CLI metric, we designed an automated benchmark pipeline that operates on a large number of API description files. The benchmark consists of the four steps Search, Measure, Combine, and Aggregate (see Fig. 3). The first step was to search for publicly available descriptions of real-world APIs. For this, we used the keyword and file type search on GitHub. Additionally, we searched the API repository from APIs.guru, which provides a substantial number of OpenAPI files.
Once a sufficiently large collection of parsable files had been established, we collected the metrics from them via the RAMA CLI (Measure step). In the third step Combine, this collection of JSON files was then analyzed by a script that combined them into a single CSV file, where each analyzed API represented a row. Using this file with all measurements, another script executed the threshold analysis and aggregation (Aggregate step). Optionally, this script could filter out APIs, e.g. too small ones. As results, this yielded a JSON file with all descriptive statistics necessary for the metric thresholds as well as two diagram types to potentially analyze the metric distribution further, namely a histogram and a boxplot, both in PNG format.
To make the benchmark as transparent and repeatable as possible, we published all related artifacts such as scripts, the used API files, and documentation in a GitHub repository. Every step after Search is fully automatable, and we also provide a wrapper script to execute the complete benchmark with one command. Our goal is to provide a reusable and adaptable foundation for re-executing this benchmark with different input APIs that may be more relevant threshold indicators for a specific REST API under analysis.
4.2 Results
We initially collected 2,651 real-world API description files (2,619 OpenAPI, 18 WADL, and 14 RAML files). This sample was dominated by large cloud providers like Microsoft Azure (1,548 files), Google (305 files), or Amazon Web Services (205 files). Additionally, there were cases where we had several files of different versions for the same API.
A preliminary analysis of the collected APIs revealed that a large portion of them were very small, with only two or three operations. Since it seems reasonable to assume that several of the RAMA CLI metrics are correlated with size, we decided to exclude APIs with fewer than five operations (Weighted Service Interface Count < 5) to avoid skewing the thresholds in favor of very small APIs. Therefore, we did not include 914 APIs in the Aggregate step. Our exemplary execution of the described benchmark calculated the quartile-based thresholds from a total of 1,737 public APIs (1,708 OpenAPI, 16 WADL, and 13 RAML files). The median number of operations for these APIs was 15. Table 3 lists the thresholds for all 10 metrics of the RAMA CLI. Because of the sequential parsing of API files, the execution of the benchmark can take up to several hours on machines with low computing power. We therefore also provide all result artifacts of this exemplary run in our repository.
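The size filter applied before aggregation can be sketched as follows. This is our own illustrative code; for simplicity it approximates the Weighted Service Interface Count as a plain operation count, and the `ApiMeasurement` record is a hypothetical stand-in for one row of the combined CSV file.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the optional size filter in the Aggregate step: APIs below a
// minimum operation count are excluded from threshold derivation.
public class SizeFilter {
    // Hypothetical stand-in for one measured API (one CSV row).
    record ApiMeasurement(String name, int operationCount) {}

    static List<ApiMeasurement> filterSmallApis(List<ApiMeasurement> apis, int minOperations) {
        return apis.stream()
                .filter(a -> a.operationCount() >= minOperations)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<ApiMeasurement> apis = List.of(
                new ApiMeasurement("tiny", 2),
                new ApiMeasurement("medium", 15));
        // With the threshold of 5 operations used in our run, only "medium" remains.
        System.out.println(filterSmallApis(apis, 5).size()); // 1
    }
}
```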
5 Limitations and Threats to Validity
While we pointed out several advantages of the RAMA approach, there are also some limitations. First, RAMA only supports RESTful HTTP and therefore excludes asynchronous message-based communication. Even though REST is arguably still more popular for microservice-based systems, event-driven microservices based on messaging are receiving more and more attention. Similar documentation standards for messaging are slowly emerging (see e.g. AsyncAPI), but our current internal model and metric implementations are very REST-specific. While several metrics are undoubtedly valid in both communication paradigms, substantial effort would be necessary to fully support messaging in addition to REST. Second, the approach requires machine-readable RESTful API descriptions to work. While such specifications are popular in the RESTful world, not every service under analysis will have one. Third, relying on an API description file restricts the scope of the evaluation. Collected metrics are focused on the interface quality of a single service and cannot make any statement about the concrete service implementation. Therefore, RAMA cannot calculate system-wide metrics except for aggregates like the mean, which also excludes metrics for the coupling between services.
Our prototypical implementation, the RAMA CLI, may also suffer from potential limitations. While we tried to make it applicable to a wide range of RESTful services by supporting the three formats OpenAPI, RAML, and WADL, there are still other formats in use for which we currently do not have a parser, e.g. API Blueprint. Similarly, there are many more proposed service-based metrics we could have implemented in the RAMA CLI. The modular architecture of RAMA consciously supports possible future extensions in this regard. Lastly, we unfortunately cannot guarantee that the prototype is completely free of bugs and works reliably with every single specification file. While we were very diligent during the implementation, have a test coverage of ~75%, and successfully used the RAMA CLI with over 2,500 API specification files, it remains a research prototype. For transparency, the code is publicly available as open source and we welcome contributions like issues or pull requests.
Finally, we need to mention threats to validity concerning our empirical threshold derivation study. One issue is that the derived thresholds rely entirely on the quality and relevance of the used API description files. If the majority of files in the benchmark are of low quality, the derived thresholds will not be strict enough. Measurement values of an API may then all fall into the Q1 band, when, in reality, the service interface under analysis is still not well designed. By including a large number of APIs from trustworthy sources, this risk may be reduced. However, there may still be services from specific contexts that are so different that they need a custom benchmark to produce relevant thresholds. Examples could be benchmarks based only on a particular domain (e.g. cloud management), on a single API specification format (e.g. RAML), or on APIs of a specific size (e.g. small APIs with 10 or fewer operations). As an example, large cloud providers like Azure, Google, or AWS heavily influenced our benchmark run. Each of them uses a fairly homogeneous API design, which influenced some metric distributions and thresholds. We also eliminated a large number of very small services with fewer than five operations to avoid skewing metrics in this direction. So, while our provided thresholds may be useful for a quick initial quality comparison, it may be sensible to select the input APIs more strictly to create a more appropriate size- or domain-specific benchmark. To enable such replication, our benchmark focuses on repeatability and adaptability.
6 Conclusion
To support static analysis based on proposed service-based maintainability metrics in the context of microservices, we designed a tool-supported approach called RAMA (RESTful API Metric Analyzer). Service interface metrics are collected from machine-readable descriptions of RESTful APIs. Our prototypical tool, the RAMA CLI, currently supports the specification formats OpenAPI, RAML, and WADL as well as 10 metrics (seven for complexity, two for cohesion, and one for size). To aid with results interpretation, we also conducted an empirical benchmark that calculated quartile-based threshold ranges (green, yellow, orange, red) for all RAMA CLI metrics using 1,737 public RESTful APIs. Since the thresholds are very dependent on the quality and relevance of the used APIs, we designed the automated benchmark to be repeatable. Accordingly, we published the RAMA CLI as well as all results and artifacts of the threshold derivation study on GitHub.
RAMA can be used by researchers and practitioners to efficiently calculate suitable service interface metrics for size, cohesion, or complexity, both for early quality evaluation and within continuous quality assurance. Concerning possible future work, a straightforward option would be to extend the RAMA CLI with additional input formats and metrics to increase its applicability and utility. Additionally, our static approach could be combined with existing dynamic approaches [6, 12] to mitigate some of its described limitations. However, the most critical next step for this line of research is the empirical evaluation of proposed service-based maintainability metrics, as most authors did not provide such evidence. Due to the lack of automatic collection approaches, such evaluation studies were previously challenging to execute at scale. Our preliminary work can therefore serve as a valuable foundation for such endeavors.
References
Athanasopoulos, D., Zarras, A.V., Miskos, G., Issarny, V., Vassiliadis, P.: Cohesion-driven decomposition of service interfaces without access to source code. IEEE Trans. Serv. Comput. 8(4), 550–562 (2015). https://doi.org/10.1109/TSC.2014.2310195
Baggen, R., Correia, J.P., Schill, K., Visser, J.: Standardized code quality benchmarking for improving software maintainability. Software Qual. J. 20(2), 287–307 (2012). https://doi.org/10.1007/s11219-011-9144-9
Basci, D., Misra, S.: Data complexity metrics for XML web services. Adv. Electr. Comput. Eng. 9(2), 9–15 (2009). https://doi.org/10.4316/aece.2009.02002
Bogner, J., Fritzsch, J., Wagner, S., Zimmermann, A.: Assuring the evolvability of microservices: insights into industry practices and challenges. In: 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME), pp. 546–556. IEEE, Cleveland, Ohio, USA, September 2019. https://doi.org/10.1109/ICSME.2019.00089
Bogner, J., Fritzsch, J., Wagner, S., Zimmermann, A.: Microservices in industry: insights into technologies, characteristics, and software quality. In: 2019 IEEE International Conference on Software Architecture Companion (ICSA-C), pp. 187–195. IEEE, Hamburg, Germany, March 2019. https://doi.org/10.1109/ICSA-C.2019.00041
Bogner, J., Schlinger, S., Wagner, S., Zimmermann, A.: A modular approach to calculate service-based maintainability metrics from runtime data of microservices. In: Franch, X., Männistö, T., Martínez-Fernández, S. (eds.) PROFES 2019. LNCS, vol. 11915, pp. 489–496. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-35333-9_34
Bogner, J., Wagner, S., Zimmermann, A.: Automatically measuring the maintainability of service- and microservice-based systems: a literature review. In: Proceedings of the 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement on - IWSM Mensura 2017, pp. 107–115. ACM Press, New York (2017). https://doi.org/10.1145/3143434.3143443
Bräuer, J., Saft, M., Plösch, R., Körner, C.: Improving object-oriented design quality: a portfolio- and measurement-based approach. In: Proceedings of the 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement on - IWSM Mensura 2017, pp. 244–254. ACM Press, New York (2017). https://doi.org/10.1145/3143434.3143454
Coleman, D., Ash, D., Lowther, B., Oman, P.: Using metrics to evaluate software system maintainability. Computer 27(8), 44–49 (1994). https://doi.org/10.1109/2.303623
Daud, N.M.N., Kadir, W.M.N.W.: Static and dynamic classifications for SOA structural attributes metrics. In: 2014 8th. Malaysian Software Engineering Conference (MySEC), pp. 130–135. IEEE, Langkawi, September 2014. https://doi.org/10.1109/MySec.2014.6986002
Eismann, S., Bezemer, C.P., Shang, W., Okanović, D., van Hoorn, A.: Microservices: a performance tester’s dream or nightmare? In: Proceedings of the ACM/SPEC International Conference on Performance Engineering, pp. 138–149. ACM, New York, April 2020. https://doi.org/10.1145/3358960.3379124
Engel, T., Langermeier, M., Bauer, B., Hofmann, A.: Evaluation of microservice architectures: a metric and tool-based approach. In: Mendling, J., Mouratidis, H. (eds.) CAiSE 2018. LNBIP, vol. 317, pp. 74–89. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-92901-9_8
Gebhart, M., Abeck, S.: Metrics for evaluating service designs based on SoaML. Int. J. Adv. Software 4(1), 61–75 (2011)
Haupt, F., Leymann, F., Scherer, A., Vukojevic-Haupt, K.: A framework for the structural analysis of REST APIs. In: 2017 IEEE International Conference on Software Architecture (ICSA), pp. 55–58. IEEE, Gothenburg, April 2017. https://doi.org/10.1109/ICSA.2017.40
Haupt, F., Leymann, F., Vukojevic-Haupt, K.: API governance support through the structural analysis of REST APIs. Comput. Sci. Res. Dev. 33(3), 291–303 (2017). https://doi.org/10.1007/s00450-017-0384-1
Hirzalla, M., Cleland-Huang, J., Arsanjani, A.: A metrics suite for evaluating flexibility and complexity in service oriented architectures. In: Feuerlicht, G., Lamersdorf, W. (eds.) ICSOC 2008. LNCS, vol. 5472, pp. 41–52. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-01247-1_5
International Organization For Standardization: ISO/IEC 25010 - Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models (2011)
Mayer, B., Weinreich, R.: An approach to extract the architecture of microservice-based software systems. In: 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE), pp. 21–30. IEEE, Bamberg, March 2018. https://doi.org/10.1109/SOSE.2018.00012
Neumann, A., Laranjeiro, N., Bernardino, J.: An analysis of public REST web service APIs. IEEE Trans. Serv. Comput. PP(c), 1 (2018). https://doi.org/10.1109/TSC.2018.2847344
Newman, S.: Building Microservices: Designing Fine-Grained Systems, 1st edn. O’Reilly Media, Sebastopol, CA, USA (2015)
Palma, F., Gonzalez-Huerta, J., Moha, N., Guéhéneuc, Y.-G., Tremblay, G.: Are RESTful APIs well-designed? Detection of their linguistic (Anti)patterns. In: Barros, A., Grigori, D., Narendra, N.C., Dam, H.K. (eds.) ICSOC 2015. LNCS, vol. 9435, pp. 171–187. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-48616-0_11
Papazoglou, M.: Service-oriented computing: concepts, characteristics and directions. In: Proceedings of the 4th International Conference on Web Information Systems Engineering (WISE 2003), p. 10. IEEE Computer Society, Rome, Italy (2003). https://doi.org/10.1109/WISE.2003.1254461
Pautasso, C.: RESTful web services: principles, patterns, emerging technologies. In: Bouguettaya, A., Sheng, Q.Z., Daniel, F. (eds.) Web Services Foundations, pp. 31–51. Springer, New York (2014). https://doi.org/10.1007/978-1-4614-7518-7_2
Perepletchikov, M., Ryan, C., Frampton, K.: Cohesion metrics for predicting maintainability of service-oriented software. In: Seventh International Conference on Quality Software (QSIC 2007), pp. 328–335. IEEE, Portland (2007). https://doi.org/10.1109/QSIC.2007.4385516
Petrillo, F., Merle, P., Palma, F., Moha, N., Guéhéneuc, Y.-G.: A lexical and semantical analysis on REST cloud computing APIs. In: Ferguson, D., Muñoz, V.M., Cardoso, J., Helfert, M., Pahl, C. (eds.) CLOSER 2017. CCIS, vol. 864, pp. 308–332. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-94959-8_16
Schermann, G., Cito, J., Leitner, P.: All the services large and micro: revisiting industrial practice in services computing. In: Norta, A., Gaaloul, W., Gangadharan, G.R., Dam, H.K. (eds.) ICSOC 2015. LNCS, vol. 9586, pp. 36–47. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-50539-7_4
Sneed, H.M.: Measuring web service interfaces. In: 2010 12th IEEE International Symposium on Web Systems Evolution (WSE), pp. 111–115. IEEE, Timisoara, September 2010. https://doi.org/10.1109/WSE.2010.5623580
Vale, G., Fernandes, E., Figueiredo, E.: On the proposal and evaluation of a benchmark-based threshold derivation method. Software Qual. J. 27(1), 275–306 (2018). https://doi.org/10.1007/s11219-018-9405-y
Acknowledgments
We kindly thank Marvin Tiedtke, Kim Truong, and Matthias Winterstetter for their help with the threshold study execution and tool development. Similarly, we thank Kai Chen and Florian Grotepass for their implementation support. This research was partially funded by the Ministry of Science of Baden-Württemberg, Germany, for the doctoral program Services Computing (https://www.services-computing.de/?lang=en).
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Bogner, J., Wagner, S., Zimmermann, A. (2020). Collecting Service-Based Maintainability Metrics from RESTful API Descriptions: Static Analysis and Threshold Derivation. In: Muccini, H., et al. Software Architecture. ECSA 2020. Communications in Computer and Information Science, vol 1269. Springer, Cham. https://doi.org/10.1007/978-3-030-59155-7_16
DOI: https://doi.org/10.1007/978-3-030-59155-7_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-59154-0
Online ISBN: 978-3-030-59155-7