5.1 Introduction

This chapter presents research on e-Government service efficiency performed as part of the eGovMon project. The project worked with government agencies and municipalities to find indicators and develop measurement techniques to assess the following aspects of e-Government web sites and services: accessibility, transparency, efficiency, and impact. To assess accessibility and transparency, the eGovMon project has produced tools for automatic and semiautomatic assessment. To assess efficiency and impact, the project has developed measurement techniques based on indicators. The goal was to improve benchmarking of web sites and services.

This chapter focuses on the efficiency of e-Government services.

5.1.1 Public Sector Efficiency

There is a vast amount of research on public sector efficiency (Sørensen 2009). This research has mostly been motivated by changing attitudes towards the public sector (Le Grand 2003): people have become more skeptical of the public sector, and at the same time citizens want more freedom of choice and more influence on public service production. Research has been conducted on themes such as models of ownership (Ramsdal and Skorstad 2004), different ways of governing and financing public sector activities (Johnsen 2007), and the impact of different leadership models (Schedler and Proeller 2010).

In Norway, the debate on public sector efficiency intensified significantly in 1989 with the establishment of a national steering committee to analyze the possible efficiency gains in the public sector. This steering committee initiated studies within a number of different areas, which were completed during the summer and autumn of 1990. Based on the results, the steering committee initiated a study of the total potential for efficiency gains and appointed an expert group headed by Prof. Victor D. Norman to undertake this task. The expert group submitted their report on April 7, 1991 (NOU 1991:28). The report focused on the efficiency of the entire public sector. Since then, efficiency improvements have been a goal of Norwegian governments regardless of political orientation.

5.1.2 e-Government Service Efficiency

While research on public sector efficiency is widespread, research on the efficiency of electronic services is almost nonexistent. There are some examples of research on efficiency related to electronic commerce (Watson et al. 2000), which have been adapted to e-Government services (Steyaert 2004). This research will be described in further detail in the next section.

Lu and Rao (2008) built a framework for assessing e-service export performance. Their paper looks at e-services as opportunities for export and categorizes e-services based on the degree of customization and the degree of tangibility. The development of the framework draws on resource-based theory (RBT) and formulates six propositions about factors that influence success: firm resources, management commitment, product adaptation, e-service type, firm size, and export experience. While not directly relevant to the development of eGovMon efficiency indicators, the paper contains valuable insights on how to build successful e-services.

Auer and Petrovic (2004) discussed performance of electronic services in general. They introduced the perspectives of the user and the provider. Their paper proposes a three-phase model for measuring e-service performance, shown in Table 5.1.

Table 5.1 Integrated e-service performance measurement methodology (Auer and Petrovic 2004)

While this research is not directly relevant to the development of efficiency indicators for e-Government services, the ideas presented have influenced the work, in particular the idea of including both the user (citizen or business) and provider (administration) perspectives.

5.1.3 The Aim of This Chapter

The aim of this chapter is to present a framework for measuring the efficiency of e-Government services. To measure efficiency, it is necessary to find a suitable set of indicators based on data collected in various ways.

It is important to look at service efficiency both from the user (citizen/business) perspective and the administration perspective.

Most studies discussing public sector efficiency have considered the administration perspective only, where efficiency gain can be seen as a reduction of cost (or labor) related to the provision of the service (e.g., NOU 1991:28; Kalb 2010). However, it may be even more important to look at efficiency from the user (citizen or business) perspective, since this user-centric approach has more impact on how citizens or businesses perceive the government. How does e-Government save time and effort for the user of the service? Previous work in the eGovMon project has focused on the interests of users (accessible and transparent public web sites). It was therefore natural to keep this user-centric approach when discussing efficiency as well.

Such a framework can also be used to decide which services to implement as e-Government services. Should a service be provided through a downloadable form, or should the form be interactive? If an interactive service is offered to citizens/businesses, how much effort should be invested in integration with back-office systems? How should one decide which of several candidate services to implement? These are some of the questions this chapter will help answer.

5.1.4 Guidance to the Reader

Section 5.2 discusses efficiency of e-Government services using a stakeholder approach. Section 5.3 discusses the design of an indicator set. The last section contains a discussion and provides directions for future work.

5.2 Measuring Efficiency of e-Government Services

According to Oxford English (2006), efficiency is

the state or quality of being efficient.

The adjective efficient means

working productively with minimum wasted effort or expense.

The Oxford Dictionary of Business and Management, fifth edition (2009) elaborates:

1 (technical efficiency) A measure of the ability of an organization to produce the maximum output of acceptable quality with the minimum of time, effort, and other inputs. One company is said to be more efficient than another if it can produce the same output as the other with less inputs, irrespective of the price factor. 2 (economic efficiency) A measure of the ability of an organization to produce and distribute its product at the lowest possible cost. A firm can have a high technical efficiency but a low economic efficiency because its prices are too high to meet competition.

Djellal and Gallouj (2008) created the following figure to show the relationship between concepts related to efficiency (Fig. 5.1).

Fig. 5.1 The interrelationship between concepts (Djellal and Gallouj 2008)

Effectiveness (or external performance) describes to what extent objectives are achieved, but does not take into account the costs of production. An organization is effective when it meets its targets. Efficiency (or internal performance) addresses the ratio between input and output. If you get more output from the same amount of resources, you are more efficient. Similarly, if you are able to produce the same output with fewer resources, you are more efficient.

Performance has a broader meaning, since performance also includes other important aspects as seen by both the users and the provider, e.g., quality.

5.2.1 Public Sector Efficiency

Efficiency studies have their origin in manufacturing, where it is relatively easy to establish the inputs (e.g., raw materials and labor) and the outputs (goods produced). The public sector provides services, which brings new challenges.

5.2.1.1 Methodologies for Measuring Public Sector Efficiency

Djellal and Gallouj (2008) discuss how to measure productivity in public services. Their book starts with a description of traditional techniques for performance measurement and then discusses the special problems of measuring the performance of services. Public services are seen as a further refinement of services in general.

They divide the methods used into two categories: index-based methods and frontier techniques. Index-based methods are based on indicators. Frontier techniques are used to compare similar production units. The production frontier is made up of the most efficient production units in a given sample. The efficiency of the other units is assessed relative to this empirical frontier.

Index-based techniques are common among bodies responsible for national and international statistics (e.g., OECD), while frontier techniques have successfully been used in research contexts.

5.2.1.2 What Is Analyzed?

In most cases, studies of public sector efficiency have targeted specific areas of service provision, e.g., culture, education, energy supply, health care, public facilities, security, transportation, and administrative units (e.g., local governments) (Kalb 2010).

Measuring the efficiency of services is not trivial. Djellal and Gallouj (2008) list the following reasons why the provision of services is more difficult to measure:

  • Output is fuzzy: “Services are generally characterized by a relatively vaguely defined, intangible and unstable output. The process of producing a service does not culminate in the creation of a tangible good. Rather what is produced is a ‘change of state’. The product is an action, a treatment protocol or a formula - in other words, a process and a way of organizing that process. In many cases, it is difficult to map the boundary of the service.”

  • Output makes its effects felt over time: “Any definition of services must take account of the temporal variable. After all, it is important to distinguish the immediate aspect of a service (the acts involved in providing it) from its effects in the medium and long term. Thus in the English-language literature a distinction is made between output and outcome (the long-term result).”

  • Output depends on value systems: The definition of output is often not objective, but rather subjective, based on the value systems of the users and the provider.

  • Output is interactive (or coproduced): Users often take part in the production. Such a simple thing as filling out an application form is in fact coproduction, since the user takes part in producing the result (e.g., filling a position or a place in a kindergarten).

  • Output is not stockable: Services are often consumed as they are produced. The consumers and the providers often have different views on the valuation of the services.

Coproduction is an essential feature of electronic services. The users do their part; the administration does its part. Due to value systems and different views on valuation, it is necessary to look at efficiency both from the user perspective and the administration perspective.

5.2.1.3 The Usefulness of Public Sector Performance Studies

There have been arguments over the usefulness of performance measurement in the public sector. Hans de Bruijn (2007) summarizes arguments from both sides as follows:

On the one hand, there is the view that performance measurement does not do any justice to the nature of the activities performed by professional organizations. Professional organizations are organizations that provide public services. These public services are multiple-value ones (i.e. they have to take several values into account) and are rendered in co-production (in cooperation with third parties),

and

The opposite view begins with the idea of accountability. The more complex the services that professional organizations must provide, the more necessary it is to grant these organizations autonomy in producing such services. While they are autonomous, they are also accountable, however: How do they spend public funds? Does society receive value for money? After all, granting autonomy to a professional organization may cause it to develop an internal orientation, to be insufficiently client oriented, to develop excessive bureaucracy and therefore to underperform.

He concludes that performance measurements are beneficial, but that it is necessary to be aware of their possible negative effects.

5.2.2 Internal and External Efficiency

Efficiency in the context of e-Government services is different from service efficiency in general.

  • e-Government services are, when properly implemented, obviously efficient for the administration (internal efficiency), which can utilize information systems to reduce time spent on processing.

  • e-Government services can also be efficient for the citizens/users and businesses by reducing the time spent on transactions with the municipality or agency (external efficiency).

Example: Downloadable forms are more efficient than paper-based forms. The ability to submit a form online is more efficient than downloading, printing, and mailing a form. If parts of the information in the form are filled out automatically based on existing knowledge about the citizen or business, or if the form is able to catch typographical errors or inconsistencies before submission, efficiency is improved even further.

This illustrates the need to use stakeholder perspectives on the efficiency of e-Government services.

5.2.3 Stakeholder Perspectives

Axelsson et al. (2012) discussed agency efficiency and citizen benefit based on a stakeholder-centered analysis of a specific case: a system used to handle anonymous grading of university exams.

Their approach was, for each stakeholder, to identify the need for the electronic service, the stakeholder's influence on the development of the e-service, how the e-service affects the activities the stakeholder performs, and the stakeholder's opinions of and reactions to the electronic service.

The main argument is that considering only two stakeholders (citizens and agency) may be insufficient for a good understanding; it is also necessary to bring in the context in which each stakeholder operates. But the authors also argue that the distinction between external and internal stakeholders is important.

For the purpose of analyzing efficiency of electronic services, we will focus on internal and external efficiency. It is still important to understand the context in which the services are used.

5.2.3.1 Administration Perspective (Internal Efficiency)

The common reason for implementing e-Government services is to reduce the administrative workload. Common goals are to establish “self-service” solutions and provide integration with back-end systems. The ultimate goal is to automate processes to minimize human intervention. Electronic processing is cheap; work done by humans is expensive.

5.2.3.2 Citizen/User/Business Perspective (External Efficiency)

e-Government can also be seen as more efficient from a citizen/user or business viewpoint. The possibility of accessing information online or filling out an interactive form can save time for a citizen or a business entity, but not necessarily. If the information is not made easily accessible through a good information structure or search engines, the user may perceive online services as a waste of time. If the user has to enter data into an interactive form instead of simply copying the document containing the original data, the interactive form becomes cumbersome. We therefore argue that the efficiency of an e-Government service needs to take into account how users experience the efficiency of the service. This is why the context of use becomes important.

5.2.3.3 Environmental Perspective

The provision of e-Government services may also be seen as efficient from an environmental viewpoint. By limiting the use of paper documents and physical distribution, e-Government services can contribute to protecting the environment.

5.2.4 Efficiency of e-Commerce

In their book “Electronic Commerce—The Strategic Perspective,” Watson et al. (2000) propose a set of five e-commerce performance indicators: awareness, attractability, contact, conversion, and retention.

These indicators are based on the set of variables shown in Table 5.2.

Table 5.2 Variables used to calculate performance of e-commerce sites (Watson et al. 2000)

The first indicator is awareness efficiency. This indicator expresses the ratio between those who know the site and the total number of people within the target audience that have Internet access:

$$ Awareness\; efficiency=\frac{ People\; aware\; of\; the\; site}{ People\; with\; Internet\; access}=\frac{Q_1}{Q_0} $$

Awareness can be influenced by marketing campaigns for the e-commerce site. The second indicator is attractability efficiency. This indicator shows the ratio between those hitting the site and those who know the site. Note that a hit is not the same as a visit. A hit means that a user lands on the site. A visit means that the user interacts more over a longer period, e.g., browsing the site for certain goods:

$$ Attractability\; efficiency=\frac{ Hits\; on\; the\; site}{ People\; aware\; of\; the\; site}=\frac{Q_2}{Q_1} $$

The third indicator is contact efficiency. This is the ratio between active visitors and those hitting the site:

$$ Contact\; efficiency=\frac{ Active\; visitors}{ Hits\; on\; the\; site}=\frac{Q_3}{Q_2} $$

The fourth indicator is conversion efficiency. This is the ratio between those making a purchase and active visitors:

$$ Conversion\; efficiency=\frac{ Purchases}{ Active\; visitors}=\frac{Q_4}{Q_3} $$

The fifth and final indicator is retention efficiency. This is the ratio between repurchases made by the same customer and purchases:

$$ Retention\; efficiency=\frac{ Repurchases}{ Purchases}=\frac{Q_5}{Q_4} $$

These five indicators are used to calculate an average web site efficiency index:

$$ Website\; efficiency=\frac{1}{5}\;{\displaystyle \sum}_{n=1}^5\frac{Q_n}{Q_{n-1}} $$

According to the authors, this calculation may be misleading, since the factors may not have the same importance for a given context. A more refined and appropriate measure might be a weighted average:

$$ Website\; efficiency=\frac{1}{5}\;{\displaystyle \sum}_{n=1}^5\frac{Q_n}{Q_{n-1}}\;{u}_n $$

In this case, the factor u n represents the weight of indicator n.
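
As an illustration, the following minimal Python sketch computes the five ratios and the (optionally weighted) web site efficiency index. The function name and the audience figures are illustrative assumptions and do not come from Watson et al. (2000):

```python
# Illustrative sketch of the web site efficiency index (Watson et al. 2000).
# Q0..Q5 follow the definitions above: people with Internet access, people
# aware of the site, hits, active visitors, purchases, and repurchases.

def website_efficiency(q, weights=None):
    """Return the five ratios Q_n/Q_{n-1} and their (weighted) average."""
    ratios = [q[n] / q[n - 1] for n in range(1, 6)]
    if weights is None:
        weights = [1.0] * 5          # unweighted case: plain average
    return ratios, sum(r * u for r, u in zip(ratios, weights)) / 5

# Hypothetical audience figures, for illustration only.
q = [100_000, 40_000, 10_000, 4_000, 800, 200]
ratios, index = website_efficiency(q)
print(ratios)           # [0.4, 0.25, 0.4, 0.2, 0.25]
print(round(index, 2))  # 0.3
```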

5.2.5 e-Government Service Performance

In her paper “Measuring the Performance of Electronic Government Services,” Steyaert (2004) adapted the framework of Watson et al. (2000) and used it to analyze six agencies and two federal and state government samples. She used the variables/indicators shown in Table 5.3.

Table 5.3 Indicators used by Steyaert (2004)

Some of the ideas from Watson et al. (2000) and Steyaert (2004), e.g., the ratio between users of the Internet service and the total number of users, are used in Sect. 5.3, which outlines a set of indicators for e-Government service efficiency. However, these two frameworks do not take into account the time saved by users and the administration. While some of the indicators they propose are clearly important, we argue that time saved is the most important indicator for measuring efficiency gain.

5.3 Indicators for e-Government Service Efficiency

Efficiency indicators aim to measure how efficient e-Government services are from the different stakeholder perspectives. The work by Watson et al. (2000) and Steyaert (2004) described in the previous section only looks at efficiency from the site owner (or administration) perspective. Auer and Petrovic (2004) introduced the idea of considering both the customer and the provider perspectives. In the context of e-Government, the customer is a citizen or business and the provider is the administration.

5.3.1 Individual Services

Efficiency can be seen as a property of a service. Efficiency gains occur when a service is replaced or improved, e.g., by an e-Government service. The first step is to compute the efficiency gain of each individual service. Examples of such services are kindergarten applications or applications for positions within a government agency or a municipality.

5.3.1.1 Citizen/User Perspective

What is the efficiency gain for the citizen/user using a downloadable form or an interactive form compared to an off-line service? The efficiency gain can be expressed as time saved for the citizen/user but may also include direct costs, e.g., postage to send a form through ordinary mail.

The efficiency gain is related to the maturity of the service. Figure 5.2 shows a maturity model for e-Government services inspired by Layne and Lee (2001). The y-axis shows the technical complexity of providing the service, while the x-axis shows the development over time. On the lowest level, there are no online forms. The citizen/user has to contact the municipality or agency to obtain the form and has to submit the filled-in form by mail or personal appearance. The next level is the provision of an online form that can be filled in and printed. The citizen/user still has to submit the form through ordinary mail or make a personal delivery. On the third level, the form is interactive. Information is filled in and submitted by clicking a button. The information is delivered electronically to the municipality or agency. On the fourth level, the interactive form reuses information either entered during previous use or already stored by the municipality or agency. A good form also validates the input.

Fig. 5.2 e-Government maturity model (inspired by Layne and Lee 2001)

Note: Some forms require the signature of the citizen/user, and this has been used as a rationale for municipalities/agencies to provide printable forms instead of interactive forms. However, the use of electronic signatures is now becoming widespread. In order to reach the next maturity level, it may be necessary to enhance the technological solution to incorporate electronic signatures.

5.3.1.2 Administration Perspective

The maturity model is somewhat different from the administration perspective. Figure 5.3 shows the maturity model from this perspective. If the user downloads a form, fills it out, and mails it, the efficiency gain for the administration is zero, except that the form was obtained through self-service. An interactive form may have no integration with back-office systems; in this case the content of the form is sent as a message through a message-handling system, normally e-mail, and the efficiency gain for the administration is rather low. On the next level, the data submitted through the interactive form is transferred directly into a back-office system.

Fig. 5.3 e-Government services maturity model (administration perspective)

Example: A typical case is the kindergarten application. The parents fill in the necessary information in the interactive form and submit it to the system that handles admission and allocation. This system keeps track of waiting lists for each kindergarten. The data is then reused to send monthly bills to the parents, monitor the progress of each individual child, allocate children to staff members, etc.

There are two types of integration: vertical integration, where the data is transferred into one back-office system, and horizontal integration, where the back-office system exchanges information with other relevant systems. The use of open standards and protocols for data interchange makes it possible to improve administrative processes.

5.3.2 Use of the Service

Building on the work of Watson et al. (2000), we consider the actual and potential use of each online service to be important indicators. Some of the most popular services provided through downloadable or interactive forms have an identifiable target group.

One example is online kindergarten applications. Here, the target group is all parents submitting applications, either on paper or online. It is common knowledge that one has to apply for a kindergarten place, but some parents may have missed the option of applying online.

Our interviews with municipal executives have shown that kindergarten applications and applications for vacant positions are the two most successful electronic form-based services, not only because of efficiency gains but also due to quality improvements of the processes. This includes the possibility to validate information before final submission. The usage factor can be computed as follows:

P 0 = Potential target group (total number of users)

P 1 = Users of the e-Government service

P 2 = Nonusers (P 0 − P 1)

$$ Usage\; factor\; for\; electronic\; service=\frac{P_1}{P_0} $$

Example: One municipality had a total of 311 kindergarten applications. Of these, 290 were submitted online; the rest were paper-based applications.

The usage factor is 290/311 = 0.93 (93 %).
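
As an illustration, the usage factor can be computed directly from the two counts. The following minimal Python sketch (the function name is our own) uses the figures from the example above:

```python
def usage_factor(p0, p1):
    """P0 = potential target group, P1 = users of the e-Government service."""
    return p1 / p0

print(round(usage_factor(311, 290), 2))   # 0.93, i.e., 93 %
```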

5.3.3 Efficiency Gain

The efficiency gain is the time or money saved by citizens/businesses and the administration due to the use of the e-Government service.

5.3.3.1 Efficiency Gain for the Individual Citizen/Business

U 0 = Time used by citizen/business to fill in and submit a paper-based form

U 1 = Time used by citizen/business to fill in and submit an interactive form

$$ Efficiency\; gain\; for\; citizen\; or\; business:1-\frac{U_1}{U_0} $$

Example: An average citizen/user uses 10 min to fill in an online application and 25 min to fill in and submit a paper-based application. The user of the interactive form uses only 40 % of the time spent by a user using the paper-based version. The efficiency gain for the user of the e-Government service is 1 − (10/25) = 0.6 (60 %).

5.3.3.2 Efficiency Gain for Administration (for Each Request)

A 0 = Time used by administration to process a paper-based form

A 1 = Time used by administration to process an interactive form

$$ Efficiency\; gain\; for\; administration:1-\frac{A_1}{A_0} $$

Example: The administration uses 3 min to process an online application and 20 min to handle a paper-based application. The administration uses only 15 % of the time to handle an interactive form compared to a paper-based form. For the administration the efficiency gain is 1 − (3/20) = 0.85 (85 %).
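
Both individual gains follow the same pattern. The following minimal Python sketch (the function name is our own) reproduces the time figures from the two examples above:

```python
def efficiency_gain(time_old, time_new):
    """Gain per request: 1 - time_new / time_old."""
    return 1 - time_new / time_old

print(round(efficiency_gain(25, 10), 2))  # 0.6  -> 60 % gain for the citizen/business
print(round(efficiency_gain(20, 3), 2))   # 0.85 -> 85 % gain for the administration
```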

5.3.4 Total Efficiency Gain for an Individual Service

To calculate the total efficiency gain for an individual service, it is necessary to include the ratio between users of the e-Government service and the size of the target group. The total efficiency gain will always be lower than the individual efficiency gain if the usage is below 100 %. The formulas below compare the total time spent under the current mix of channels with the time that would have been spent if everyone used the paper-based channel, which is equivalent to multiplying the usage factor by the individual efficiency gain:

$$ Total\; user\; efficiency\; gain=1-\frac{P_1\times \frac{U_1}{U_0}+{P}_2}{P_0} $$
$$ Total\; administration\; efficiency\; gain=1-\frac{P_1\times \frac{A_1}{A_0}+{P}_2}{P_0} $$

It is also possible to calculate the potential efficiency gain of transforming nonusers into users.

Note: It does not make sense to add the efficiency gains for the citizens/users and the administration together. These are separate measures. The time spent by the administration is often easily translated into costs, while the time saved by users says more about how user-centric the government agency/municipality is perceived to be by its users.
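
The following minimal Python sketch (the function name is our own) implements the total efficiency gain; it reproduces the figures of the kindergarten example below:

```python
def total_efficiency_gain(p0, p1, time_paper, time_eservice):
    """P0 = target group, P1 = e-service users; times are minutes per request."""
    p2 = p0 - p1                                        # nonusers
    time_spent = p1 * time_eservice + p2 * time_paper   # current mix of channels
    baseline = p0 * time_paper                          # everyone on paper
    return 1 - time_spent / baseline

print(round(total_efficiency_gain(311, 290, 25, 10), 2))  # 0.56 -> citizens/users
print(round(total_efficiency_gain(311, 290, 20, 3), 2))   # 0.79 -> administration
```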

Example: The following shows how to calculate efficiency gain for citizens and the administration. The example is based on real numbers from a Norwegian municipality.

Case: Kindergarten Applications

Citizens

U 0 = 25 min, U 1 = 10 min.

Efficiency gain for each citizen is (1 − 10/25) = 0.6 (60 %).

Users of interactive service: total users (P 0) = 311, e-service users (P 1) = 290, nonusers (P 2) = 21.

Time spent by e-service users is 290 × 25 × (1 − 0.6) = 2,900 min.

Time spent by noninteractive users is (311 − 290) × 25 = 525 min.

Total time spent for both groups: 2,900 + 525 = 3,425 min.

If everyone used the paper-based form: 311 × 25 = 7,775 min.

Efficiency gain is 1 − (3,425/7,775) = 0.56 (56 %).

Administration

A 0 = 20 min, A 1 = 3 min.

Efficiency gain for each submission = (1 − 3/20) = 0.85 (85 %).

Users of interactive service: total users (P 0) = 311, e-service users (P 1) = 290, nonusers (P 2) = 21.

Time spent by the administration on submissions from e-service users is 290 × 20 × (1 − 0.85) = 870 min.

Time spent by the administration on submissions from noninteractive users is (311 − 290) × 20 = 420 min.

Total time spent for both groups: 870 + 420 = 1,290 min.

If everyone used the paper-based form: 311 × 20 = 6,220 min.

Efficiency gain is 1 − (1,290/6,220) = 0.79 (79 %).

Note: For the individual citizen/user, the time saved by all citizens/users is normally of limited interest. But the number is important for decision makers when deciding which electronic services to implement.

5.3.5 Aggregation of Individual Services

The efficiency gain of individual services may be aggregated to show the total efficiency gain for all services. The number of available services varies from municipality to municipality. For benchmarking purposes, it seems reasonable to select a subset of common e-Government services.

5.3.6 A Simplified (Lightweight) System of Indicators

In many cases, it is not feasible to perform studies of the time spent by users and administration. Therefore, a simplified system is proposed based on easily observable characteristics of the service.

Based on the maturity model shown earlier, points could be awarded to each level in the following way:

User perspective:

1 = No e-Government service

2 = Downloadable form

3 = Interactive form

4 = Interactive form with prefilled content

For the user, a downloadable form is more efficient than no form at all. An interactive form is better, since physical delivery is avoided. An interactive form with prefilled content, based on what the government already knows, is even better.

Administration perspective:

1 = Downloadable form or no e-Government service

2 = Interactive form

3 = Interactive form with back-office integration

4 = Process improvement

For the administration, an interactive form reduces manual work. Back-office integration is even more efficient, since information does not need to be manually transferred. If the administrative processes get more efficient due to integration, it is even better.

Use:

1 = seldom used (0 % < use < 10 %)

2 = sporadically used (10 % ≤ use < 50 %)

3 = often used (50 % ≤ use < 90 %)

4 = heavily used (90 % ≤ use)

The use of e-Government services is often not easy to assess. The four categories were selected based on discussions with municipal representatives.

The points in each category are multiplied to give a relative value for the efficiency gain. The following examples are based on informal interviews with municipal representatives:

Example 1: Application for kindergarten

Interactive form where data is retained from the previous year (4 points). The form is integrated with a back-office application, but no evidence of process improvement is given (3 points). The solution is heavily used (4 points). Total points: 48.

Example 2: Complaint form

Interactive form (3 points). Form data is converted into an e-mail (2 points). The solution is sporadically used as most complaints are submitted by phone, e-mail, and personal appearance (2 points). Total points: 12.

Note: A low number of complaints may be positive, since it can indicate general satisfaction with the service provision.

Example 3: Applying for positions

Interactive form (3 points). Form data is converted into e-mail (2 points). The use is mandatory (4 points). Total points: 24.

Note: We do not know how many applicants are excluded from the application process by the mandatory use of interactive forms.

These three examples show how an assessment can be made by a short investigation of a specific e-Government service. The reason for using this lightweight approach is to reduce the time spent on assessment.
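
The lightweight index is simply the product of the three scores. The following minimal Python sketch (names are our own) reproduces the three example scores and ranks the services by their index:

```python
def lightweight_score(user_level, admin_level, use_level):
    """Each argument is a 1-4 score on the user, administration, and use scales."""
    return user_level * admin_level * use_level

services = {
    "Kindergarten application": (4, 3, 4),
    "Complaint form":           (3, 2, 2),
    "Applying for positions":   (3, 2, 4),
}
for name, levels in sorted(services.items(),
                           key=lambda item: lightweight_score(*item[1]),
                           reverse=True):
    print(name, lightweight_score(*levels))
# Kindergarten application 48
# Applying for positions 24
# Complaint form 12
```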

5.3.7 Other Possible Indicators

The following indicators are related to efficiency and may be included in the future set of indicators:

  • Easy to find/findability: How much time does a user spend getting hold of a paper-based form or finding the downloadable or interactive form? This time is currently included in the time spent to retrieve, fill in, and submit a form. However, the aspect is more related to information design, which could justify treating it separately.

  • Process intervention: Can a user cancel or modify a request for a service (a submitted form)? A cancellation or modification is an exception but may be very time-consuming for both the user and the administration. If the user can cancel or modify a request, this is more efficient than having to contact the administration to solve the problem.

  • Process transparency: Can a user follow the progress of the service request online? This could save time for both the user and the administration, since other contacts asking for status (mail, telephone calls) could be avoided.

5.4 Conclusion/Discussion

Most research on efficiency has not tried to quantify the actual efficiency gain that different stakeholder groups obtain by using an electronic service. This chapter examined earlier research and proposed a methodology and a set of indicators to calculate the efficiency gain both for the administration and for the citizens/users/businesses that use an electronic service. Since the methodology requires some observation or self-reporting by users, a lightweight approach was also introduced to make comparisons between electronic services easier.

The material in this chapter can be used to:

  • Benchmark efficiency of e-Government services

  • Help decide which electronic services to implement

Both methodologies have been developed in collaboration with eGovMon partner municipalities and agencies. The initial ideas for efficiency measurements were presented at a workshop held in Grimstad, Norway, on September 12, 2008. The ideas were refined in subsequent semiannual workshops and were finally tested on real-world examples at a workshop held in Tønsberg, Norway, on March 7 and 8, 2012. In this workshop we used numbers from municipal partners to show the efficiency gains obtained from two specific e-Government services: applying for positions and kindergarten applications. The participants confirmed that the methodology is useful for justifying investments in form-based e-Government services.

In the same workshop, the lightweight approach was used to prioritize what e-Government services to maintain and develop further. This approach does not report the actual efficiency gain but calculates an index showing the relative importance of each service. Participants confirmed that the lightweight approach required substantially less work but still provided information that could be used to rank services.