Abstract
Participatory research continues to expand as a way to connect science and society through engaging projects that use a multi-stakeholder strategy, including citizens. However, each participatory project follows its own evaluation format and strategy. This results in limited evidence on best practices, hindering the scaling up of Participatory Research. Through the H2020-funded InSPIRES project, an innovative, online-based evaluation strategy was developed that is valid for Participatory Research initiatives labelled as Science Shops or Citizen Science. This strategy challenges teams that want to undergo a self-reflection process during and after their project is active. An online tool gathers and automatically analyses data in a harmonized way across projects. The tool returns a set of visualizations that analyse each project’s process along five dimensions, selected and constructed after a careful review of the public engagement and impact evaluation criteria proposed by different projects and researchers. The dimensions evaluated by this online instrument are: (i) Knowledge Democracy, (ii) Citizen-led Research, (iii) Participatory Dynamics, (iv) Integrity, and (v) Transformative Change. Online self-evaluation questionnaires were designed and personalized according to the profile of the respondents and are sent out by email in four different stages to capture the momentum of the project as well as its short-term and mid-term impacts. The quantitative and qualitative evaluation instrument is featured within the InSPIRES Open Platform (OP), an open repository that allows comparison among participatory projects.
InSPIRES Consortium—Members listed at the end of the paper.
Keywords
- Impact evaluation
- Digitalized evaluation
- Automatic data analysis
- Participatory research
- Citizen Science
- Science Shops
1 Introduction
Science Shops are defined as knowledge intermediary structures that, jointly with civil society organizations (CSOs), co-create research questions, deploy participatory projects to respond to societal needs, include students in the work and support the translation of research results [1]. In addition, the term Citizen Science is increasingly used to refer to a growing number of Participatory Research activities. While covering a number of different practices, it broadly refers to a partnership between professional researchers and volunteers in which the volunteers are actively involved in scientific tasks that have traditionally been carried out by scientists [2, 3]. Increasingly adopted by government institutions and international organizations, and despite some challenges, Citizen Science is taking advantage of the rise of connectedness and communication technologies to quench the thirst for data and to improve the transparency and accessibility of science [4], and its projects often use visualization tools to make data easier to digest [5]. Both Science Shops and Citizen Science could eventually refocus which parts of the natural and social worlds are subject to scientific inquiry, thereby transforming what we know about the world [6].
While Science Shops have been running in some countries for more than 40 years, impact evaluation studies remain limited. The few available evaluation studies reported that Science Shops are a fairly unique way (i) to put science at the service of society to address real-life challenges, (ii) to provide all involved stakeholders with new skills, (iii) to offer new research ideas to established researchers, and (iv) to make research processes more transparent, democratic and inclusive by considering participation the main research driver [7, 8]. Albeit with a shorter tradition, Citizen Science practices share a very similar vision and approach.
The main techniques and tools considered for performing impact evaluation studies are interviews and paper-based questionnaires, which produce outputs that make comparison among projects very difficult. Furthermore, besides the fact that evaluation conclusions are rarely published and shared, each project follows its own format and strategy. This results in limited evidence on best practices, hindering the scaling up of the more open perspective that these two classes of Participatory Research are providing [9, 10].
Inspired by the Citizen Science approach, where digital tools and the internet have been widely developed for engaging non-academic participants, the H2020-funded InSPIRES project developed an innovative, online-based evaluation strategy that can be used by many Participatory Research initiatives that want to be evaluated or to undergo a self-reflection process during and after the project is active. The proposed online tool gathers and automatically analyses data in a harmonized way across projects. The gathered data is returned as a set of visualizations covering aspects common to most Participatory Research projects. Online self-evaluation questionnaires, designed and personalized according to the profile of the respondents (i.e. civil society members, researchers, students, project managers), are sent out by email and answered online in four different stages to capture the momentum of a project as well as its short-term and mid-term impacts. The dimensions evaluated by this online instrument are: (i) Knowledge Democracy (i.e. transdisciplinarity and relevance of topics), (ii) Citizen-led Research (alignment of project goals with community demands and efficacy of engagement techniques), (iii) Participatory Dynamics (degree and quality of engagement), (iv) Integrity (ethics, transparency of data management, gender, etc.), and (v) Transformative Change (individual learning, personal growth, sustainability, impact on policies, etc.).
This online tool is featured within the InSPIRES Open Platform (OP), accessible at https://app.inspiresproject.com, which is an open repository for Participatory Research projects and their promoting structures, currently Science Shops and Citizen Science research groups. This is a rather unique practical approach to carrying out impact evaluation studies and sharing them. The OP is completely free and open, and its goal is to act as a hub for the comparative evaluation of Science Shops and Citizen Science projects. Individuals contributing to the platform and participating in a particular project keep ownership and privacy of their own data and can download the data of their projects in order to perform more in-depth project-based analysis.
After a review of the state of the art regarding impact evaluation tools for Participatory Research projects, the methods section describes the OP, the evaluation and communication strategy, and the collective effort behind the tool, carried out by a multidisciplinary team composed of a mathematician, a statistician, a UX designer, and researchers from different fields, including experts in Science Shops, Citizen Science and other Participatory Research practices. The results section presents the platform, and the conclusion reflects on the takeaways of the project and future work to be done.
2 State of the Art
More and more funding agencies at national and European levels require the social impacts of funded projects to be taken into consideration in their evaluation [11]. The European Commission, for example, has put at the core of its research and innovation policies two concepts that seek to generate and evaluate social impacts: Responsible Research and Innovation (RRI) since 2004, and Open Science and Open Innovation since 2016. These two paradigms advocate the promotion of more ethical, open, inclusive, reflexive and participatory science [12, 13]. Participatory Research projects, such as Science Shops and Citizen Science, have the potential to integrate the RRI and Open Science characteristics and contribute to social impacts.
Going beyond the current international evaluation framework has become an important task in order to understand the impacts of Participatory Research projects along dimensions that go beyond classical bibliometric indicators in academia. In Participatory Research, evaluation can be performed at two different levels: first, with regard to the co-learning processes generated among all stakeholders during a project, and second, with regard to the quality and use of the results being collectively produced. Elements from these two dimensions are not captured by traditional research evaluation methods in academic contexts and/or by public funding agencies.
While many frameworks and guidebooks for evaluation are available, a majority of published research with a stakeholder partner engagement dimension does not include an evaluation component [9]. According to this study, most evaluations are qualitative and rely on self-report through focus groups, one-on-one semi-structured interviews, informal observations and/or written surveys with open-ended text responses, and the level of detail regarding evaluation design, strategy and results is very limited. In a review of research assessment models and methods, Milat et al. [14] discussed four different theoretical frameworks. They found that these frameworks all differ in terminology and approach, and that the typical methods reported for performing the evaluation are desk analysis, bibliometrics, panel assessments, interviews and case studies, relying mostly on principal investigator interviews and/or peer review; they rarely include other stakeholders such as policy-makers and other important end-users of research, creating a clear bias in evaluation studies. They report that multidimensional models are more valuable as they capture a broader range of potential benefits, including capacity building, policy, product and service development, as well as societal and economic impacts. Their review also suggests that the research sector should use broader indicators to better capture policy and practice outcomes, as bibliometric indices do not say much about the real-world benefits of research.
For Science Shop projects, an evaluation strategy was proposed by a former EU-funded project, PERARES [8]. The evaluation consisted of a series of quantitative paper-based questionnaires to be distributed to all stakeholders at different times of the project. After interviews with experts who participated in the PERARES project, findings from InSPIRES suggest that the evaluation task is not routinely integrated within Science Shop projects because of a lack of time and resources. As for Citizen Science, there are currently no commonly established indicators to evaluate these types of projects [10]. Kieslinger et al. recommend developing a framework and transforming it into a practical assessment tool for projects and initiatives, through a mix of quantitative and qualitative methods such as online surveys, statistics, in-depth interviews and focus groups.
The InSPIRES Open Platform aims to bring together civil society, practitioners and other stakeholders from across and beyond Europe to roll out innovative models for Science Shops and Citizen Science that systematically include an impact evaluation study. Science Shops, as mission-oriented intermediary units between the scientific sphere and civil society organisations, facilitate citizen-driven Open Science projects that respond to the needs of civil society organisations, most of the time including students in the work process. As in Citizen Science, the aim is to co-create and prioritize research questions together with civil society members and to develop projects in the most participatory manner possible. Like many other EU-funded Participatory Research projects, InSPIRES had as a core objective to develop an improved impact evaluation strategy and systematically introduce it within Participatory Research projects to capture process and result outcomes. After months of discussions among partners and external advisors, it was decided to develop an online impact evaluation tool using quantitative and qualitative indicators to capture the produced impacts. This approach seemed to be the right equilibrium between a broad evaluation framework and a viable application. Constructing an evaluation tool obviously requires a simplification of the object of study [15], and the limitation of time and resources constrains the scope of the evaluation. Still, it is the first of its kind to propose harmonized evaluation data collection and analysis, allowing for comparison among Participatory Research projects.
The InSPIRES Open Platform (OP) is, on the one hand, an open repository for Participatory Research projects and promoting structures, such as Science Shops and Citizen Science research groups, and on the other hand, a quantitative and qualitative evaluation instrument capturing data along five different dimensions. The uniqueness of the approach is that the impact evaluation tool is both an online questionnaire and a platform for automatic data analysis and visualization, thereby offering real-time project evaluation reports that compare the performance of one project with the other projects registered on the platform. To our knowledge, this is the first time that a platform featuring such characteristics has been developed and proposed to the Participatory Research community.
3 Methods
3.1 The Choice of Dimensions, Indicators and Questions
While providing a set of indicators that allows for within- and between-project monitoring and impact assessment, the online evaluation tool is primarily aimed at enhancing awareness and self-reflection among project members by giving them the opportunity to reflect upon the purpose and design of the research, both during and at the end of the project. In line with the principles of Action Research [16] and experiential learning [17], the online evaluation tool was therefore designed in the hope that the results fed back to the actors involved would change practice through critical reflection and facilitate the direct application of research findings in a practical context.
Several available tools set the basis for the definition and selection of relevant indicators. These include the measures adopted in the assessment of public engagement in research in the PERARES EU project [8], the metrics developed within the MoRRI EU project for RRI monitoring [18], and the scientific, individual and socio-ecological criteria proposed by Kieslinger and colleagues [10] for the evaluation of Citizen Science projects. After a careful review of these criteria, the final classificatory scheme of the online evaluation tool covers the following five core dimensions: (i) Knowledge Democracy, (ii) Citizen-led Research, (iii) Participatory Dynamics, (iv) Integrity, and (v) Transformative Change. Each core dimension encompasses different indicators (see Table 1 for further details) and includes from 12 to 23 items, positively worded on a 0 to 7 scale, plus an open-ended question to capture qualitative feedback.
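As a hypothetical sketch (the item counts and answers below are invented; only the five dimension names and the 0 to 7 scale come from the text), a respondent's dimension scores could be aggregated as follows:

```python
def dimension_score(items):
    """Average the positively worded 0-7 items answered for one dimension."""
    return sum(items) / len(items)

# Invented example answers; a real questionnaire has 12 to 23 items per dimension.
answers = {
    "Knowledge Democracy":    [6, 5, 7, 6],
    "Citizen-led Research":   [4, 5, 5, 6],
    "Participatory Dynamics": [7, 6, 6, 5],
    "Integrity":              [6, 6, 7, 7],
    "Transformative Change":  [5, 4, 6, 5],
}

scores = {dim: dimension_score(items) for dim, items in answers.items()}
```

The resulting per-dimension averages stay on the same 0 to 7 scale, which makes them directly comparable across respondents and projects.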
A defining attribute of Participatory Research projects is that they should value and weigh equally the contributions of the participants involved [19]. Evaluation strategies for Participatory Research projects should thus, by their very nature, assess the project's value and impact for the different actors involved through trust-related mechanisms and a continuing commitment to power-sharing [20]. In an attempt to include and accommodate multiple viewpoints and needs in the design and conduct of the evaluation process, the evaluation items were designed and personalized according to project member profiles, which include: project manager(s), professional scientist(s), student(s), and involved members of civil society.
Further, just as Participatory Research projects should be improved over time by the growing experience and reflection of involved project members, the same adaptive capacity and openness is requested of project evaluation [20]. Ideally, indeed, Participatory Research projects should be continuously reviewed and amended based on feedback from the community [21]. That is to say, evaluation should be all-inclusive but not rigid: rather, in the course of a participatory project the evaluation should allow for reflecting developments and contextual changes in the project [22]. Therefore, with the purpose of reconciling varied perspectives through interactive processes, the online evaluation tool is structured along four project stages to capture the momentum of the project as well as its short-term and mid-term impacts: (a) early-stage evaluation; (b) mid-point evaluation; (c) end-point evaluation; and (d) post-project evaluation, carried out approximately six months after the project is completed.
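The four questionnaire waves can be sketched as a simple schedule. The platform's actual trigger logic is not described in the text, so placing the mid-point halfway through the project and the post-project wave about 182 days after the end are assumptions for illustration:

```python
from datetime import date, timedelta

def evaluation_schedule(start, end):
    """Four questionnaire waves: early, mid-point, end-point and post-project
    (the last one roughly six months after the project ends)."""
    return {
        "early": start,
        "mid": start + timedelta(days=(end - start).days // 2),  # assumed: halfway
        "end": end,
        "post": end + timedelta(days=182),  # assumed: ~6 months after completion
    }

waves = evaluation_schedule(date(2019, 1, 1), date(2019, 7, 1))
```

Tying each wave to a concrete date is what allows the questionnaires to be sent out by email automatically at each stage.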
Yet the most important reason for developing a digital version of the evaluation tool is that it supplies open, shareable and, above all, comparable data. Since the idea of sharing knowledge is one of the most powerful aspects of collaborative and participatory practices such as Science Shops and Citizen Science, the platform is designed accordingly. The data of each project is visualized, allowing comparisons with the projects that have already gone through their own evaluation process and uploaded their data. While preserving privacy and anonymization through data aggregation, the visualization aims to trigger self-reflection using other projects as a reference point (Table 2).
3.2 Agile and Sprint
AGILE methodologies were adopted by the entire team during the OP development, in order to guarantee the alignment of all the stakeholders working jointly on the project and to ensure that a minimum viable product would be shipped within the very tight deadline of three months. AGILE and related techniques are commonly used in software development [23] to manage expectations and focus the attention of participants.
SCRUM [24] was the AGILE methodology used in the OP development. After an initial discovery of the requirements for the stated objectives, the team planned regular meetings at the end of each two-week interval, an interval known as a “Sprint”. Each meeting consisted of an open debate on the current state of the development and a product demo with a fully working and deployed version of the OP. This demo kept the entire team engaged and promoted participation, as the product became, from the first meeting, a real and tangible entity that could be commented on. Meetings provided the opportunity to raise concerns, re-evaluate past decisions and decide on top-priority items for the next Sprint.
Going into more detail, the Sprint plans for this project were as follows:
1. Sprints 1 and 2 were focused on building the database and application infrastructure, setting up user registration and login flows, and designing the data model needed to support the objectives proposed during the first meeting. Registering Structures and Projects was completed at the end of this period.
2. Sprint 3 was the mid-way OP development milestone, used to finish the features started in Sprints 1 and 2 and to prepare the system for the “Project Evaluation” set of objectives.
3. Sprints 4 and 5 were solely focused on developing the entry forms and visualization technologies for Evaluation. The data model was expanded with the, until then, unfinished requirements for the “Project Evaluation” features.
4. The last Sprint, completing the three months of work, was focused on refining the look and feel of the platform, adding quality-of-life features such as forgotten-password resolution and email alerts, and completing legal requirements such as data privacy disclaimers.
On this particular project, not all of the requirements were fully signed off from the start. Instead, as things became final and available, they were brought for discussion at the bi-weekly Sprint meetings and incorporated into the rolling development cycle. We believe this would not have been possible if we had chosen a more traditional software development methodology, such as “Waterfall Development”, in which requirements are set from the start and cannot be changed later.
3.3 The Technological and Design Pipeline
Our platform was deployed as a cloud application: the platform runs on a remote machine (server) and accepts connections from any kind of client device (i.e. mobile, tablet, laptop, etc.). Such applications typically have two main parts to be developed: the Frontend (display of and interaction with data) and the Backend (the provider of data and program logic).
Visualizations were developed with Altair, a Python wrapper of the Vega-Lite library. The visualizations of evaluation results are organized in three levels depending on information interests and disclosure policy:
(a) Public level (Fig. 1), prioritizing aesthetics and simplicity. This level is based on the project logo and associates each part of the hummingbird with an evaluation dimension. The wing is associated with Transformative Change, because it is the most active part of the hummingbird and the one that creates movement. The head is associated with Knowledge Democracy, following the natural relation between head, mind and knowledge. The heart (or lower body) is associated with Citizen-led Research, because citizenship is the heartbeat of Open Science. The tail is associated with Participatory Dynamics, because it acts as a helm. Finally, the upper body is associated with Integrity, because without it all other aspects would fall apart. This visualization offers maximum aggregation and simply gives the relative position of each project within the five evaluation criteria, through colours and arrows. It fulfils two needs: to give a quick overview of project excellence and to rapidly compare between projects. This visualization is open to everyone to comply with open data requirements as well as RRI principles such as transparency and openness.

(b) Participants level (Figs. 2 and 3), prioritizing usefulness while preserving privacy. This visualization gives far more detail than the public one and complements the dimensions' evaluation with indicators. For each indicator, a bullet chart shows the evaluation of the project relative to the other projects on the platform. Additionally, the project's overall position is shown, as well as the project's evolution through the different data collection phases. This visualization is open to all participants in the project.

(c) Project manager level (Fig. 4), prioritizing the view of the different roles in the project. The structure and presentation are very similar to the participants' visualization, but for every dimension the aggregation is done by role. It fulfils the project managers' need to review the perceptions of the research team and to investigate or compensate for deviations. Project managers also have access to the public and participant-level visualizations, and can thus review the project from different perspectives.
The selection of charts followed task and use-case scenario design: the public visualization aims for a “wow” effect; the bullet charts are a substitute for gauges and provide “a rich display of data in a small space” [25]; line charts are the most suitable for viewing evolution over time; and scatterplots relate two variables, acting in this implementation as a quadrant chart.
The OP’s backend was developed on top of Django, a powerful Python framework acting as the controller, with a PostgreSQL database storing the data model. Django allows rapid implementation based on an application skeleton that can be highly customized, making it a perfect platform for agile projects. For the calculations behind the evaluation visualizations, the backend leverages Pandas (a Python library) to execute most of the number crunching. Project data can be exported as CSV (comma-separated values), allowing easy extraction of data for further processing in other tools.
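A sketch of the kind of number crunching involved (the data and column names here are invented): Pandas can compute, per dimension and across all projects, the quartile statistics that the comparison charts display.

```python
import pandas as pd

# Invented aggregated scores: one row per (project, dimension).
scores = pd.DataFrame({
    "project":   ["A", "A", "B", "B", "C", "C"],
    "dimension": ["Integrity", "Knowledge Democracy"] * 3,
    "score":     [6.0, 5.0, 4.0, 6.0, 5.0, 7.0],
})

# First quartile, median and third quartile per dimension across all projects,
# i.e. the reference bands a project is compared against.
stats = scores.groupby("dimension")["score"].quantile([0.25, 0.5, 0.75]).unstack()
```

Because the computation runs over the pooled scores of all registered projects, each project only ever sees aggregates of the others, which is consistent with the anonymization-through-aggregation approach described above.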
4 Results
The platform was successfully deployed to a cloud provider and is currently online. The first users are testing the software with about 25 ongoing Participatory Research projects, mostly Science Shops and Citizen Science ones, and reporting any issues they find.
The figures below show the three levels of visualization of evaluation results: the public visualization, available to everybody (Fig. 1); two graphics from the second level, available only to participants of the project (Figs. 2 and 3); and the level accessible only to project managers (Fig. 4).
The General Evaluation (Fig. 1) is the public visualization. It displays the five principles and reflects the overall evaluation of the project. Each principle splits the evaluation into four quartiles. In the example, the yellow marker shows the project's quartile for each dimension.
The Project Overall Position chart (Fig. 2) condenses the evaluation scores of all projects into one indicator (evaluation) and calculates, for all projects, the difference in evaluation scores between the five principles (coherence). The current project is represented by the yellow dot.
The Project Evolution chart (Fig. 2) shows, for the current project, the aggregated score for each of the four phases of the evaluation process, represented by the yellow line. The median and the quartiles Q1 and Q3 of the evaluations of all projects are displayed for each phase.
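The two axes of the overall-position chart can be sketched as follows. Since the exact formulas are not given in the text, taking "evaluation" as the mean over the five principles and "coherence" as their spread is an assumption for illustration only:

```python
def overall_position(dimension_scores):
    """Condense per-principle scores into the two chart axes (assumed formulas):
    evaluation = mean over the five principles,
    coherence  = spread (max - min) between them."""
    vals = list(dimension_scores.values())
    return sum(vals) / len(vals), max(vals) - min(vals)

# Invented scores for one project.
project = {
    "Knowledge Democracy": 6.0,
    "Citizen-led Research": 5.0,
    "Participatory Dynamics": 6.0,
    "Integrity": 6.5,
    "Transformative Change": 4.5,
}
evaluation, coherence = overall_position(project)
```

A high evaluation with low coherence would flag a project that scores well on average but very unevenly across the five principles, which is exactly the tension the quadrant-style chart is meant to surface.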
A meter (bullet chart) (Figs. 3 and 4) is shown for every principle and for the different dimensions that constitute each principle. The yellow dot represents the evaluation of the current project. The shadows represent other projects' evaluations: a lighter shadow spans the projects with the lowest and highest evaluation scores, whereas a darker shadow spans the projects between the first and third quartiles of the evaluation scores.
The Project Manager visualization gives fine-grained detail of the self-reflection process while providing an internal view for management purposes.
5 Conclusions and Future Work
The InSPIRES Open Platform represents a step forward in Open Science evaluation and in the use of technology and visualization techniques that may influence future projects in the area. We offer it freely to Open Science structures and Participatory Research projects to increase their visibility and share good practices.
The platform is currently being tested by 25 projects, which are introducing actual data and going through all the phases of the evaluation process. This will verify the user experience of the participants across all interactions, as well as the adequacy of the questions to real and specific projects.
As future work, the authors will do more research on users' understanding of the visualizations and on the data gathered by the platform, in order to carry out more complex cross-comparisons and analyses among the projects in the InSPIRES Open Platform.
InSPIRES Consortium Members
Barcelona Institute for Global Health: David Rojas Rueda, Valeria Santoro. Environmental Social Science Research Group: Bálint Balázs, Janka Horvath. Fundació Privada Institut de Recerca de la Sida-Caixa: Rosina Malagrida, Marina Pino. Université de Lyon: Shailaja Baichoo, Florence Belaen. VU Institute for Research on Innovation and Communication in the Health and Life Sciences: Marjolein Zweekhorst, Eduardo Muniz Pereira Urias. Università degli Studi di Firenze: Giovanna Pacini. Institut Pasteur de Tunis: Hichem Ben Hassine, Sonia Maatoug. Ciencia y Estudios Aplicados para el Desarrollo en Salud y Medio Ambiente: Daniel Lozano, Claire Billot, Faustino Torrico.
References
Fischer, C., Leydesdorff, L., Schophaus, M.: Science shops in Europe: the public as stakeholder. Sci. Public Policy 31(3), 199–211 (2004)
Heigl, F., Kieslinger, B., Paul, K.T., Uhlik, J., Dörler, D.: Opinion: toward an international definition of citizen science. Proc. Natl. Acad. Sci. 116(17), 8089–8092 (2019). https://doi.org/10.1073/pnas.1903393116
Auerbach, J., et al.: The problem with delineating narrow criteria for citizen science. Proc. Natl. Acad. Sci. 116(31), 15336–15337 (2019). https://doi.org/10.1073/pnas.1909278116
Irwin, A.: No PhDs needed: how citizen science is transforming research. Nature 562, 480–482 (2018)
Newman, G., Wiggins, A., Crall, A., Graham, E., Newman, S., Crowston, K.: The future of citizen science: emerging technologies and shifting paradigms. Front. Ecol. Environ. 10(6), 298–304 (2012). https://doi.org/10.1890/110294
Strasser, B.J., Baudry, J., Mahr, D., Sanchez, G., Tancoigne, E.: “Citizen Science”? Rethinking science and public participation. Sci. Technol. Stud. 32, 52–76 (2019). https://doi.org/10.23987/sts.60425
Zaal, R., Leydesdorff, L.: Amsterdam science shop and its influence on university research: the effects of ten years of dealing with non-academic questions. Sci. Public Policy 14(6), 310–316 (1987). https://doi.org/10.1093/spp/14.6.310
PERARES Final report D9.2. Evaluating Projects of Public Engagement with Research and Research Engagement with Society (2014). https://www.livingknowledge.org/fileadmin/Dateien-Living-Knowledge/Library/Project_reports/PERARES_Evaluating_Projects_of_PER_Final_report__WP9_Monitoring_and_Evaluation_2014.pdf. Accessed 23 July 2019
Esmail, L., Moore, E., Rein, A.: Evaluating patient and stakeholder engagement in research: moving from theory to practice. J. Comp. Eff. Res. 4(2), 133–145 (2015). https://doi.org/10.2217/cer.14.79
Kieslinger, B., Schäfer, T., Heigl, F., Dörler, D., Richter, A., Bonn, A.: The Challenge of Evaluation: An Open Framework for Evaluating Citizen Science Activities (2018). https://doi.org/10.17605/OSF.IO/ENZC9
SARIS: “Avaluació Responsable, Avaluació per Millorar.” Agència de Qualitat i Avaluació Sanitàries de Catalunya, Departament de Salut, Generalitat de Catalunya (2018)
Von Schomberg, R.: A vision of responsible innovation. In: Owen, R., Heintz, M., Bessant, J. (eds.) Responsible Innovation, pp. 1–35 (2013). https://doi.org/10.1002/9781118551424.ch3
Fecher, B., Friesike, S.: Open science: one term, five schools of thought. In: Bartling, S., Friesike, S. (eds.) Opening Science, pp. 17–47. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-00026-8_2
Milat, A.J., Bauman, A.E., Redman, S.: A narrative review of research impact assessment models and methods. Health Res. Policy Syst. 13, 18 (2015). https://doi.org/10.1186/s12961-015-0003-1
Espeland, W., Sauder, M.: Rankings and reactivity: how public measures recreate social worlds. Am. J. Sociol. 113(1), 1–40 (2007). https://doi.org/10.1086/517897
Lewin, K.: Action research and minority problems. J. Soc. Issues 2(4), 34–46 (1946). https://doi.org/10.1111/j.1540-4560.1946.tb02295.x
Kolb, D.: Experiential Learning: Experience at the Source of Learning and Development. Kogan Page, London (1984)
MORRI Progress report D3.2. Metrics and indicators of Responsible Research and Innovation. Monitoring the Evolution and Benefits of Responsible Research and Innovation (2015). https://www.rri-tools.eu/documents/10184/47609/MORRI-D3.2/aa871252-6b2c-42ae-a8d8-a8c442d1d557. Accessed 23 July 2019
Whyte, W.F.: Participatory Action Research. Sage, Newbury Park (1991). https://dx.doi.org/10.4135/9781412985383
Chevalier, J.M., Buckles, D.J.: Participatory Action Research: Theory and Methods for Engaged Inquiry. Routledge, London (2019). https://doi.org/10.4324/9780203107386
Feuerstein, M.T.: Partners in Evaluation: Evaluating Development and Community Programmes with Participants. Macmillan Publishers, London (1986)
McAllister, K.: Understanding participation: monitoring and evaluating process, outputs and outcomes in rural poverty and environment. Working paper series, 2. International Development Research Centre, Ottawa (1999)
Cockburn, A.: Agile Software Development. Addison-Wesley, Boston (2002)
Rubin, K.S.: Essential Scrum: A Practical Guide to the Most Popular Agile Process. Addison-Wesley (2012). ISBN 978-0137043293
Few, S.: Bullet Chart Design Specification (2013). https://www.perceptualedge.com/articles/misc/Bullet_Graph_Design_Spec.pdf
Acknowledgements
We would like to thank all the members of the InSPIRES Consortium, and especially the VU and IRSICaixa Team, and all the external partners that have participated in the several iterations, whose suggestions helped in the creation and development of the tool. The InSPIRES consortium also acknowledges the support of the European Union grant number 741677.
Ethics declarations
We declare no competing interests.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Gresle, AS. et al. (2019). An Innovative Online Tool to Self-evaluate and Compare Participatory Research Projects Labelled as Science Shops or Citizen Science. In: El Yacoubi, S., Bagnoli, F., Pacini, G. (eds) Internet Science. INSCI 2019. Lecture Notes in Computer Science, vol. 11938. Springer, Cham. https://doi.org/10.1007/978-3-030-34770-3_5
Print ISBN: 978-3-030-34769-7
Online ISBN: 978-3-030-34770-3