
1 Introduction

The ongoing transition into the digital era is radically changing the service context in our societies. The digital era revolutionizes traditional hierarchical and sector-specific service provision. To ensure the usability of services, the systemic nature of innovation should be considered: renewing services involves the simultaneous development of organizations, technologies, services, and multiple partner relationships (Geels 2002, 2004; Kemp et al. 2009).

Public sector services aim at creating social value by improving the well-being of citizens (Kroeger and Weber 2014) and answering the target group’s service needs. Their ultimate aim is not to gain profit but to create value for citizens (Hartley 2005; Lévesque 2013). In digital services, the roles of citizens and employees may change radically. More skills and agency may be required from citizens, and the routine work of service employees may diminish (e.g., Berger et al. 2016). Digitalization can be a great opportunity for those who have the knowledge and devices to use digital interfaces but simultaneously a threat to disadvantaged citizens.

The perspective of social innovation improves understanding of the participatory and networked processes that are at the core of the creation, implementation, and diffusion of such innovations (Harrisson et al. 2010; Harrisson 2012; Mulgan 2007; Moulaert et al. 2013). Social innovation highlights that the active engagement of various actors and collaboration between them are essential to generate broad society- and population-level impacts. A central reason is that learning in broader partnership and network structures has become a leading idea for dealing with the complexities and uncertainties of development efforts (Bovaird 2007). A key mechanism is active engagement, which promotes the process of achieving agreed-upon goals and values. Such structures may reach beyond the organization, and, on both democratic and pragmatic grounds, they may involve citizens from the very start of design through to realization in practice (Flyvbjerg et al. 2003). As a consequence, forms of evaluation need to address not only summative effects but also the process-based, formative results that emerge as various actors engage in learning evaluation (Brulin and Svensson 2012).

Evaluations usually tend to focus on the single values of technological progress and cost efficiency. They ignore the broader societal effects and the phenomena that are meaningful from the perspective of human beings: their health, lives, competences, motivation, and other existential aspects. This is because these impacts are not easily measurable (Dahlberg 2018). However, the techno-economic approach is too narrow to describe the ubiquitous, multifaceted, and interactive phenomena of digitalization and its impact on people’s lives (Djellal and Gallouj 2013; Hyytinen 2017). Qualitative methods are needed to illuminate the multiple values in a nuanced way, e.g., impacts on people’s sense of well-being or on inequality. Furthermore, learning-oriented evaluation and dialogue between different perspectives, such as those of citizens and employees, are needed to make visible the consequences digitalization may have on people’s everyday lives (Dahlberg 2018).

As the main aim of this chapter, we suggest a human-centered co-evaluation method, which focuses on the multiple values of digital innovation and creates a process for mutual learning and capability building among the actors involved. The developed method consists of (1) a multi-criteria framework, which is used to evaluate the multiple impacts of innovation (Djellal and Gallouj 2010, 2013; Hyytinen 2017), and (2) a participatory evaluation process to support multivoiced evaluation and learning (Patton 2011; Saari and Kallio 2011). The multi-criteria evaluation tool unfolds the impacts of innovations on six dimensions. Specific emphasis is put on human and societal impacts, which are analyzed in parallel with the traditional techno-economic characteristics of innovations. The dimensions included are impacts on citizens, employees, and the population as well as impacts on reputation, integration of technology and services, and economy.

This chapter is structured in four sections. The second section, after this introduction, presents the theoretical principles of the human-centered co-evaluation method, including citizens’ and employees’ central role in the development of digital service innovation and the social and sustainable aspects of innovation. It also presents the methodological principles of the developed evaluation method, including the multi-criteria approach to evaluation and learning as a process of evaluation. The third section describes and operationalizes the new evaluation method. The fourth section discusses the novelty of the method and provides some practical implications for its implementation.

2 Theoretical and Methodological Principles of Human-Centered Co-evaluation Method

In this section, we discuss the theoretical and methodological approaches that form the starting point of the human-centered co-evaluation method. From the theoretical perspective, we start by discussing the need to strengthen the role of the human being in innovation and the social and sustainable aspects of innovation. Then, we explain how the multi-criteria framework and the learning-oriented evaluation approach contribute to each other.

2.1 Agency of the Human Being in the Evaluation of Digital Service Innovations

Frontline employees and citizens using existing services provide an important perspective for altering services in either an incremental or a radical way. When services or parts of them are digitalized, the relationship between the service provider and the customer changes. As service-dominant logic (S-D logic) (Vargo and Lusch 2004, 2008) has become the prevailing way of organizing offerings, customers are considered active co-creators of value who adapt services to their individual needs. Service suppliers are motivated to understand and improve customers’ mundane practices in order to create value for them. This means not only getting feedback from customers during a service but also gaining an understanding of where and how offerings fit customers’ overall activities. Co-creation opportunities are integrated into the service itself, in the supplier’s encounters with the customers. Mobile services demand a new kind of active agency from customers. For example, in healthcare services, the traditional role of the citizen as “a recipient of services” is expected to become increasingly active, not only in taking care of his/her well-being but also as a user of mobile applications connected to health records and services.

Although the promise of co-creation within a single service is enormous, there are still doubts about how the everyday needs of citizens guide service integration and the development of digital platforms. When thinking about new digital service innovations from the perspective of a citizen and potential user, it should be considered how digitalized services change their everyday life. Can citizens really influence offerings, or are they only trying to adapt themselves as users to ready-made offerings? Understanding citizens’ lives in a holistic way, not only as service users, may open a new perspective on the development of new services. Research on what kind of everyday life produces well-being for the citizen (Korvela and Tuomi-Gröhn 2014) may draft another kind of “big picture” of how and what services should be digitalized or what kinds of services should be integrated.

The frontline employee’s role is in transition in the complex and digitalized service environment. The routine part of service work may disappear because of digitalization. Automation usually targets certain tasks rather than whole occupations, and bundles of tasks that cannot be easily automated always exist. A task-based approach to automatability in 21 OECD countries estimates that 9% of jobs are potentially automatable (Arntz et al. 2016).

However, even as the face-to-face servant role of service employees seemingly fades away when the technological interface pushes them into back offices, these employees may have the opportunity and space to form new agencies and adopt new roles and relations. They may become innovators of new services based on their deep experience with clients; enablers, helping and training clients to use the technology; differentiators, giving a genuinely empathetic and personal face to the surface of the service; or coordinators, handling integration and building bridges between different offerings (Bowen 2016). Employee-driven perspectives on innovation consider employees as active agents in renewals (e.g., Høyrup et al. 2012). Case studies so far indicate that empowering and allowing employees to apply their customer know-how and ideas to service innovation strengthens the preconditions for development, improves services, and positively influences their well-being (Hasu et al. 2014; Honkaniemi et al. 2015).

In the implementation phase of digital services, service workers’ agency may depend on how quickly and smoothly customers are willing to adopt the role of co-producer of the service and be guided to increase their use of self-service through the IT system (Breit and Salomon 2015; Berger et al. 2016). Previous studies of e-government have observed increases in staff workload because the staff must simultaneously assist citizens in digital communication and guarantee face-to-face service to the most vulnerable citizens, who have neither the competence nor the possibility to use digital services (Berger et al. 2016).

This perspective emphasizes the role of employees and citizens in the evaluation of digital service innovations. Human beings, both as employees and as citizens, should be involved in evaluation as learners, and their changing roles in digitalization should be considered. Their participation would bring in a significant view of everyday life and provide broad understanding of phenomena that are meaningful from the perspective of their health, lives, competences, motivation, and other existential aspects.

2.2 Societal and Sustainable Aspects of Innovation

The concept of sustainability emerged in research, policy, and organizational strategies in the 1980s as an attempt to explore the relationship between economic development and environmental protection (Banerjee 2008; Pope et al. 2004). While there are a variety of definitions of sustainability, the most common is that of the Brundtland Commission (Banerjee 2008; Mickwitz et al. 2011). According to it, sustainable development is “a process of change in which the exploitation of resources, direction of investments, orientation of technological development, and institutional change are made consistent with future as well as present needs” (WCED 1987, p. 9). In the recent literature (Banerjee 2008; Gendron 2013; Komiyama and Takeuchi 2006), the definition has been broadened to cover the balance between the economy, society, and the environment.

Because of the complex nature of the sustainability challenge, a broad perspective has been called for in problem framing, and systemic views have gained ground in the recent literature. These include the analysis of innovations at the system level; more specifically, the transition toward more sustainable socio-technical systems has aroused increasing interest (Geels 2010; Elzen et al. 2004; Mickwitz et al. 2011; Kivimaa and Mickwitz 2011; Smits et al. 2010). The perspective of socio-technical systems acknowledges the difficulty of solving sustainability challenges through isolated technologies and services and provides a framework for their analysis in the context of societal changes. It points out strong interdependencies between the various elements of these systems, in which multiple network relationships are an essential characteristic. The composition of the networks needed for the promotion of sustainable development is versatile: they include public authorities, industrial firms, financial service providers, consultancies, universities, etc. (Mickwitz et al. 2011; Smith et al. 2010).

System innovations in the area of sustainability imply major changes along the entire production-consumption chain: its flows, its multilevel architecture, and its institutions and structures (Smith et al. 2010; Weber and Hemmelskamp 2005). In the markets, central issues are the integration of clean technologies into safety standards and market rules and the promotion of effective and prospective market demand. The institutional framework is essential in order to go beyond technical aspects and include the enabling environment, which covers social mobilization and acceptance, institutional arrangements (e.g., laws and stakeholder roles), and financial and operational requirements (Van de Klundert and Anschutz 2001). This highlights the role of policy-making and governance processes in sustainability efforts.

The perspective of social innovation has been applied to improve understanding of the participatory and networked processes that are at the core of the implementation, learning, and scaling up of innovations at the systemic level (Harrisson et al. 2010; Harrisson 2012; Mulgan 2007; Moulaert et al. 2013). Social innovations are characterized by two different aspects of the “social”: social by the ends and social by the means (Rubalcaba et al. 2013; Moulaert et al. 2013; Pol and Ville 2009). The first aspect refers to the societal challenges (e.g., social exclusion and an aging population) that innovations aim to solve. The second aspect refers to the importance of engagement and participation (Harrisson et al. 2010; Kahnert et al. 2012). The approach of social innovation highlights that collaboration between different actors and actor groups is essential: a prerequisite for the realization of broad society- and population-level effects is the active engagement of various actors in the development of innovation.

Dissemination is a challenging task due to two characteristics of social innovations: their local nature and their lack of codification. The contribution of social innovations is typically manifested as the density of local networks and as local vitality that may result in new jobs and market activities. Scaling up innovations from this limited context requires the strengthening of their systemic features. It also requires new types of R&D practices that can facilitate the codification of social innovations and the procedures applied (Harrisson et al. 2010; Moulaert et al. 2013; Pol and Ville 2009; Rubalcaba et al. 2013).

It is also worth noting that social innovation needs to be balanced with result orientation (in terms of effectiveness or economic growth). This balance is in line with the idea that efforts striving toward sustainable development must handle and balance different, sometimes conflicting needs that may be associated with a change process. A good example is the tension between needs related to effectiveness and innovation and needs related to creating good working conditions (Elg et al. 2015).

2.3 Multi-criteria Perspective to Evaluate Service Innovation

The evaluation of innovations is typically based on traditional science and technology (S-T) indicators, which are highly oriented toward the technological aspects and economic impacts of innovations. This narrow approach has been criticized in service studies, as it neglects novelties based on immaterial values and interaction (Rubalcaba et al. 2012; Toivonen 2010). In particular, researchers have pointed out that the traditional evaluation methods and measures are not able to capture the diversity of innovations and the multifaceted nature of performance in service sectors (Djellal and Gallouj 2013).

The increasing “servitization” of society has created pressure to develop more advanced approaches to evaluation. In some recent studies (Dyehouse et al. 2009; Williams and Imam 2007), a plurality of methods and starting points for new evaluation criteria have been suggested. According to them, impacts should be assessed on the basis of a multidimensional approach that takes into account the issues of quality, reputation, social innovation, and social value (Djellal and Gallouj 2013).

The reasoning is rooted in the “broad view on innovation” (Dosi 1988; Kline and Rosenberg 1986; Lundvall 1992; Nelson and Winter 1982), which highlights complexity, uncertainty, and interactivity in the development and implementation of innovations. In other words, it favors a systemic perspective. Recently, the systemic and network perspective has become topical – not only in terms of multiple actors but also concerning the novelty itself. It has become apparent that the most urgent problems in present-day society cannot be solved via individual technologies or services, as these problems form systemic wholes and require systemic solutions (Harrisson et al. 2010). This development puts additional pressure on the renewal of the evaluation of innovations.

The Djellal-Gallouj approach (2013) analyzes the diversity of innovations and the multifaceted nature of their performance by linking them to the idea of different “worlds of services.” The concept of “a world” is derived from the “economics of convention,” developed by Boltanski and Thévenot (1991), and refers to the different justificatory criteria used in society in the definition of different values. Djellal and Gallouj (2013) identify six different “worlds” that provide criteria for evaluation: the industrial and technological world, the market and financial world, the relational and domestic world, the civic world, the world of innovation, and the world of reputation. The outcomes of innovation can then be evaluated from the perspective of different target areas: besides the traditional technical and financial aspects of innovation, the complex societal challenges and the specific characteristics of services linked to quality and social value are taken into account (Djellal and Gallouj 2010, 2013; cf. Rubalcaba et al. 2012). In addition to the different target areas, the approach pays attention to the timescale in the generation of impacts through the division into direct, short-term outputs and indirect, long-term outcomes. Table 4.1 illustrates the different worlds and the specific justification criteria (Djellal and Gallouj 2013) in a slightly modified form. In comparison with the original framework, “the civic world” has been replaced with the concept of a “responsibility world,” which includes the original ethical issues linked to equal treatment and fairness but also emphasizes social innovation and sustainability (see Rubalcaba et al. 2012).

Table 4.1 A multi-criteria perspective to the evaluation of services (Djellal and Gallouj 2013, modified)

On the other hand, researchers are unanimous that the existing innovation and performance measures and indicators should not be abandoned. What is needed is a more diversified analysis framework that is able to take into account the multiplicity of innovations and the increase in their social and systemic nature (cf. den Hertog 2010; Rubalcaba 2006).

2.4 Learning Approach in Evaluation

A topical question in service research is how to develop innovations at the systemic level (Ostrom et al. 2015; Toivonen 2015). Learning between stakeholders has been proposed as a solution to these complex and multifaceted problems. Although scholars of service research have used the concept of learning (e.g., Lusch et al. 2010), they have mainly referred to firms’ ability to learn to serve customers or to become a vital and sustaining part of value networks. Learning at the level of an entire value network, emphasizing the active agency and intentionality of all the participants in relation to a societal problem, still needs to be elaborated.

There are difficulties in getting learning-oriented evaluation to carry out analysis from a sustainability perspective. As proposed by Brulin and Svensson (2012), learning in evaluation needs to become more critical and focus on capturing intended effects as well as those that are unexpected. In line with this idea, learning-oriented evaluations need to be ongoing (Brulin and Svensson 2012). The requirements of such an evaluation can thus be summarized under the following three points: (1) evaluate results and effects continuously, (2) contribute to learning in a development process and its continuous improvement, and (3) conduct contextual analyses and contribute to public value while also helping to ensure that the results become knowledge products that will be utilized.

The theory of expansive learning (Engeström 1999), derived from cultural-historical activity theory, provides an avenue for learning-oriented evaluation. The conditions for learning are a critical component in evaluative efforts. In Engeström’s theories, disturbances and questioning provide a foundation for learning opportunities. Expansive learning in a community begins when, during the course of activity, some individuals begin questioning prevailing goals, patterns, norms, or even the basic motives of the activity and searching for new practices. In some cases, this escalates into collaborative envisioning and a deliberate collective change effort at the grassroots level (Engeström 1999, 2001a, b), after which a new motive and expansive cycle follows. How is it possible to embed learning as a central mechanism in evaluation? Learning-oriented evaluation creates a central opportunity through broad stakeholder involvement, which contributes to more reflective ways of working.

In the evaluation process, the participants are offered a tool that enables them to understand the service innovation in a wider context and over a long-term horizon. In our case, the tool is based on multiple values and criteria, and it is theoretically grounded as described in the previous section. The combination of a reflexive tool and collective evaluation has previously been applied in developmental impact evaluation for innovation networks (Saari and Kallio 2011). In order to enhance dialogue between developers and potential distributors of the experiment, we need a method that ensures that the different perspectives are listened to equally. An aquarium method has been used in solving severe conflicts in work communities and also as an evaluation method (Aalto-Kallio and Hakulinen 2009). It is based on active listening: it instructs participants to listen, allows them to communicate, and guides them to create further actions.

3 Human-Centered Co-evaluation Method

The new method consists of (1) a multi-criteria framework that is used to evaluate the various dimensions and values of an innovation and (2) a developmental evaluation process to support multivoiced evaluation and learning. We have combined these two approaches into a practical evaluation method, which is operationalized and described in detail in this section.

3.1 Multi-criteria Framework

Our human-centered evaluation framework evaluates the impacts of service innovations from the perspective of six dimensions (Fig. 4.1). The dimensions included are impacts on citizens, employees, and the population as well as impacts on reputation, integration of technology and services, and economy. The first three dimensions are categorized as social indicators because they put emphasis on the human and social aspects of service innovation. The latter three dimensions emphasize the technical and economic characteristics of innovations; they are thus categorized as techno-economic indicators. In the figure, the horizontal axis illustrates the scale of analysis: the dimensions on the left-hand side of the framework analyze impacts and value from the perspective of individuals (including individual organizations) or a group of individuals. The dimensions on the right-hand side analyze broader impacts from the perspective of the wider population, society, and economy.

Fig. 4.1 Impact dimensions of the multi-value evaluation framework. Top row: social indicators (citizens, employees, population); bottom row: techno-economic indicators (reputation, integration, economy); horizontal axis: scale from individual to broader impacts.

In accordance with former evaluation approaches (cf. Djellal and Gallouj 2010, 2013; Hyytinen 2017), our framework analyzes societal impacts in parallel with the more traditional techno-economic characteristics of innovations. Thus, it aims to create a balanced and comprehensive picture of the impacts generated by a service innovation. The specificity and novelty of the new framework lie in its emphasis on human values, which means that the evaluation includes the analysis of impacts from the perspectives of citizens and employees. The human aspects in the evaluation make visible the value from the perspective of the various individual actors involved in service generation and utilization. Figure 4.1 crystallizes the main dimensions of the multi-criteria evaluation framework.

Each dimension in the framework includes a variety of aspects and possible areas of impacts. We have identified the potential impact areas and illustrated them with assisting questions. These questions help to analyze value from multiple perspectives. In the following, we present the potential impact areas included in each dimension. We also give examples of the assisting questions to concretize the application of the framework in a practical evaluation situation.

The impact on the citizen analyzes the value of a new service innovation from the viewpoint of an individual service user. The emphasis is on customer orientation and the significance of the service, which in a concrete evaluation situation can be asked about in the following way, for example: what kinds of customer needs has the new service resolved, and how has the everyday life of the citizen changed? The dimension also focuses on the service experience, including accessibility and quality from the viewpoint of the citizen. Moreover, the impacts on well-being, citizen empowerment, and the relationship with the employee in charge of the service are reflected upon. This dimension requires a qualitative approach, such as interviews with service users and observation of service events, as a starting point for understanding.

The impact on the employee focuses on changes in the content of work, including work roles, relations, know-how, and concrete tasks. In a concrete evaluation situation, the guiding questions are, for example: how has the new service affected the work role of the employee, and what have been the main changes? Moreover, the dimension pays attention to collaboration and means of interaction; a specific focus is on the relationship with citizens and other employees. Changes in employees’ well-being are also evaluated. This involves several aspects. Competence for conducting the task is generally conceived as a central criterion that provides a foundation for well-being. Another aspect is autonomy, which emanates from values and interests and a sense of choice and freedom in doing the work.

Through broad forms of partnership, a better understanding is created of how technical, social, or economic solutions will be embedded in beneficiaries’ lives. This is an essential element that increases the likelihood of sustainable solutions. Such partnerships link cooperation at the different levels that are relevant for innovation (national, regional, local). The involvement of customers and citizens facilitates practical knowledge-based development and innovation, potentially leading to better coordination and more sustainable development work. It is, however, important to separate different groups of beneficiaries from each other. The impacts on citizens and employees capture value from the perspective of an individual or a small group of individuals, whereas the impact on the population focuses on value from a wider perspective. This dimension analyzes citizens’ needs and service availability in the context of a specific geographical region, for instance. A concrete question in the evaluation situation may be how the new service meets citizens’ needs in the region or from the perspective of different citizen groups and how it affects the availability of services. In addition, the dimension includes aspects like social and ecological sustainability and equality and fairness in delivering the service.

As regards reputation, the focus is on the effects on brand image and on the visibility of the actors involved in service development. In a concrete evaluation, this can be asked about as follows, for instance: how has the new service – or participation in service development – affected the brand image of the involved actors? Moreover, the attractiveness and public image of the service are evaluated by asking, for example, the following questions: has the new service been discussed in public, what has been its public image, and how attractive is the new service from the citizen’s viewpoint?

Integration focuses on the value of service and technology integration and interaction. This dimension aims to provide understanding of the following questions: why are different services and technologies required for the new service development, how have services and technologies been integrated with each other and into the prevailing system, and what is the value of the comprehensive service solution? These aspects can be concretized as follows, for example: how have the different services been integrated to better serve customers’ needs, or what kinds of technologies have been integrated into the new service and how is the integration managed? Furthermore, this dimension evaluates the functionality between different services and technologies as well as the means of interaction.

The last dimension, economy, focuses on the economic effects of the service by considering them from the perspectives of both a single actor or actor group and the broader society. As regards single actors, evaluation focuses, for example, on new potential resources, savings, and cost-effectiveness. These aspects can be captured by asking evaluators to specify the economic effects that the new service has generated. Besides these topics, this dimension aims to identify new possibilities in business and export.

In the actual evaluation situation, the aim is to capture the changes in accordance with each dimension. In concrete terms, evaluators are asked to consider how the new service has generated value from the perspective of each dimension. To make visible the potential disadvantages or surprises, evaluators are asked to consider both positive and negative changes as well as anticipated and unanticipated effects.
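To concretize how the framework and its assisting questions could be taken into an evaluation situation, the sketch below outlines the six dimensions as a simple data structure. It is a minimal illustration only: the Python representation, the Dimension class, and the record_change helper are our own illustrative constructs, not part of the method itself, and the example questions paraphrase the text above rather than constituting a fixed question set.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Dimension:
    """One impact dimension of the multi-criteria framework (cf. Fig. 4.1)."""
    name: str
    indicator_type: str                      # "social" or "techno-economic"
    assisting_questions: List[str]
    observations: List[str] = field(default_factory=list)   # post-it-style notes

# The six dimensions, with example assisting questions paraphrased from the text.
FRAMEWORK: List[Dimension] = [
    Dimension("Impact on citizen", "social",
              ["What customer needs has the new service resolved?",
               "How has the everyday life of the citizen changed?"]),
    Dimension("Impact on employee", "social",
              ["How has the new service affected the work role of the employee?",
               "What have been the main changes in tasks, relations, and well-being?"]),
    Dimension("Impact on population", "social",
              ["How does the service meet the needs of different citizen groups?",
               "How does it affect availability, equality, and sustainability of services?"]),
    Dimension("Reputation", "techno-economic",
              ["How has the service affected the brand image of the involved actors?",
               "Has the service been discussed in public, and how attractive is it to citizens?"]),
    Dimension("Integration", "techno-economic",
              ["How have services and technologies been integrated with each other and into the prevailing system?",
               "What is the value of the comprehensive service solution?"]),
    Dimension("Economy", "techno-economic",
              ["What economic effects (resources, savings, cost-effectiveness) has the service generated?",
               "What new business or export possibilities does it open?"]),
]

def record_change(dimension: Dimension, note: str, positive: bool, anticipated: bool) -> None:
    """Attach one observed change, tagging it as positive/negative and anticipated/unanticipated."""
    tag = f"[{'positive' if positive else 'negative'}, {'anticipated' if anticipated else 'unanticipated'}]"
    dimension.observations.append(f"{tag} {note}")

# Example: documenting one observation from an evaluation session (hypothetical content).
record_change(FRAMEWORK[1], "Routine documentation work decreased, but guidance work increased.",
              positive=True, anticipated=False)
```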

The evaluation approach can be applied in the different phases of service development. To support development throughout the process, we suggest that evaluation is conducted in an early planning phase, in the middle phase, and in the final phase of the development. In these different phases, evaluation has a different purpose. In the early planning phase, it supports target setting and helps to identify multiple target areas and to foresee the potential impacts of the new service. In the middle phase, it helps to assess the changes against the original targets and thus to recognize the direction of change. It also provides information on whether the development is going in the desired direction or whether there is a need to make changes. Evaluation in the final phase concerns the generated impacts and provides an arena for planning the next steps for scaling up or re-innovating the service innovation for the future. In the following, we illustrate how evaluation could be conducted as a participatory process to support learning and reflection throughout the development process.

3.2 Evaluation Process: Learning Between Developers, Users, and Enhancers

The evaluation process between developers, employees, citizens, and the potential actors who may promote the innovation experiment provides an arena for learning and reflection throughout the development of an innovation. As mentioned earlier, the purpose of a common tool and multiple criteria in the evaluation process is to create insights that help the participants understand the potential value of the service innovation from multiple perspectives, in the wider societal context, and over a long-term horizon.

In a practical evaluation situation, developers, employees, and citizens – who are the users and enhancers of the innovation – are brought to the same table to learn what has been achieved and what should be accomplished and done in the near future. We utilize the idea of two-phase evaluation from the aquarium method (Aalto-Kallio and Hakulinen 2009). In practice, this means that the facilitator guides who should evaluate and who should listen, and thus ensures that the different perspectives are equally heard in the evaluation situation. What we bring in as a new element to the aquarium method is the multi-criteria framework, which is used as a formal evaluation tool and a source of inspiration in the discussion between the involved actors. Creating constructive interaction and dialogue is a challenge when actors with different premises and interests come together. Learning from each other’s viewpoints becomes possible only if the prevailing atmosphere is open and trustful. We suggest that active listening to each participant’s observations and judgments of each element should be guaranteed in the process. For this purpose, using the aquarium method as an inspiration, we created a process model that instructs participants to listen, encourages them to mutually reflect on and communicate about the topic, and guides them to create further actions.

In the model, which we call “co-evaluation,” participants are divided into two groups: an inner circle and an outer circle. By co-evaluation we refer to the collaborative evaluation dialogue that is conducted within each circle and between the inner circle and the outer circle. The inner circle consists of those who have been involved in developing the innovation, such as managers, supervisors, employees, ICT designers, and users of the service (citizens). The outer circle consists of those who have a possibility to promote the spreading of the innovation into wider use, such as directors, collaborators from other services, and funding agencies. Figure 4.2 represents the positions of the inner and outer circles in the evaluation situation.

Fig. 4.2 The inner and outer circles in co-evaluation. Left: the participants divided into an inner circle and an outer circle; right: a group standing in the inner and outer circles in front of a notice board.

The evaluation process also needs a facilitator, who provides a rhythm to the interaction. The pros and cons of each element become visible only if contradictory viewpoints are allowed to collide with each other. Before the interactive process, the basic information on each element and its qualitative and quantitative indicators should be collected as a basis for collective sensemaking and judgment. The aquarium method gives space first for the reflection of the developers and then for those who may give resources to the innovation.

The inner circle evaluates how the innovation has succeeded in each element. Its members discuss what the measures of each element mean and add their reflections to each element. In this phase, colliding perspectives are allowed and valued. The six-dimensional evaluation tool, as a printed poster, is placed either on the table or on the wall. The discussion is documented on post-it notes, which are then placed in the boxes of the tool.

While the inner circle conducts the evaluative dialogue, the outer circle is not allowed to speak; its task is to actively listen to the evaluation. Its members may take notes and observe which perspectives collide with each other; they also generate ideas for developing the innovation further.

Thereafter, the inner circle and the outer circle exchange their positions. Now the outer circle is allowed to discuss, and the inner circle only listens. The participants of the outer circle should discuss what they have heard and what they may conclude from the inner circle’s evaluation. They should sum up their discussion by writing their suggestions on a separate sheet and then presenting the lessons learnt, what should be done next, and how they may contribute to the implementation.

The inner circle then comments on how feasible the suggestions are. They may remove some of the suggestions and add their own. Finally, they decide who should take each action forward and when.
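As a summary of the steps described above, the following sketch lays out the flow of one co-evaluation session. It is only an illustrative outline under our own naming: the step list and the facilitate_session function are not part of the method’s formal specification, and the step texts paraphrase this section.

```python
# Illustrative outline of one co-evaluation session; the step texts paraphrase the chapter,
# and the structure below is our own sketch, not a prescribed protocol.

CO_EVALUATION_STEPS = [
    ("Preparation",
     "Collect basic qualitative and quantitative information on each element; "
     "place the six-dimension poster on the table or wall."),
    ("Inner circle evaluates",
     "Developers, employees, and service users discuss each element, document reflections "
     "on post-it notes, and let contradictory viewpoints collide."),
    ("Outer circle listens",
     "Directors, collaborators, and funders listen actively, take notes, and form ideas "
     "for developing the innovation further."),
    ("Circles exchange positions",
     "The outer circle discusses what it has heard, writes down suggestions, and presents "
     "lessons learnt, next steps, and its own possible contributions."),
    ("Inner circle responds",
     "The inner circle comments on the feasibility of the suggestions, removing some and "
     "adding its own."),
    ("Commitment",
     "Participants decide who takes each action forward and when."),
]

def facilitate_session(steps=CO_EVALUATION_STEPS) -> None:
    """Print a simple facilitation script for the session, step by step."""
    for number, (name, instruction) in enumerate(steps, start=1):
        print(f"Step {number} ({name}): {instruction}")

if __name__ == "__main__":
    facilitate_session()
```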

4 Conclusions and Discussion

In this chapter, we have introduced a new human-centered co-evaluation method for the evaluation of service innovations. The new evaluation method responds to a current evaluation challenge, which has been noted within both service innovation research and evaluation research. According to former studies (e.g., Djellal and Gallouj 2010, 2013; Patton 2011; Rubalcaba et al. 2012), the evaluation of service innovations tends to focus on the single values of technological progress and cost efficiency, which are too narrow to describe the multifaceted, interactive, and systemic nature of services.

The new method provides an alternative by emphasizing the systemic and collaborative nature of service innovation. It integrates a multi-criteria framework to evaluate the multiple impacts and values of innovation (Djellal and Gallouj 2010, 2013; Hyytinen 2017) and a developmental evaluation process (Patton 2011; Saari and Kallio 2011) to support multivoiced evaluation and continuous learning. The multi-criteria evaluation tool unfolds the impacts of innovations on six dimensions. Specific emphasis is put on human and societal impacts, which are analyzed in parallel with the traditional techno-economic characteristics of innovations. The dimensions included are impacts on citizens, employees, and the population as well as impacts on reputation, integration of technology and services, and economy.

We propose that the human-centered co-evaluation method could, by clarifying the multiple values of services, leverage the scaling up of new solutions and enhance the service organization’s ability to conduct and learn from evaluations. The new method, based on a reflexive evaluation approach, facilitates interaction between developers and potential supporters; thus, it provides a promising alternative for fostering continuous development and learning throughout the innovation process.

We argue that a balance between human-centeredness and result-oriented aspects (i.e., effectiveness and economic growth) in evaluation requires a sensitive, mixed-method approach in order to capture the impacts on people’s lives and on sustainable development. This means that both quantitative and qualitative data are utilized as a starting point for reflection in collective evaluation events. There is a need to involve the sensemaking of the different stakeholders in complex innovation processes. This is in line with recent discussion in the evaluation community: evaluation itself should be a caring and ethical practice, providing arenas for reflecting on and influencing phenomena significant for humanity, such as climate change, digitalization, the use of artificial intelligence, the future of work, and pollution (Visse and Abma 2018).

As a managerial implication, we suggest that evaluation capacity should be part of the know-how of every organization that develops and innovates services. However, learning-oriented evaluation processes do not take place spontaneously but require a facilitator who is trained in evaluation methods and who can devote his or her time and effort to designing and conducting collaborative evaluation processes (Ensminger et al. 2015). In such an evaluation process, learning from failures becomes possible. This may be called the evaluation capacity building of the organization. It may be the know-how of professionals, but it should also be used between organizations. Furthermore, the generation of new types of systemic indicators to describe the complex and collaborative processes in the generation of impacts would be both interesting and useful from the viewpoint of management and decision-making.

It is important to reveal and ponder what kinds of values guide the development of digital services and to create methods to intervene in it. Evaluation provides an opportunity to look for new practices, question changes, and improve them (Dahler-Larsen 2019). When the human-centered co-evaluation method is used, it may offer an arena for learning between stakeholders and thus lead to more ethical, inclusive, and human-centered digitalization.