Introduction

Higher Education Institutions (HEIs) around the world are undergoing important changes. Experts in the field of higher education (HE) affirm that the twenty-first century will see the greatest growth in HE in the history of education, accompanied by qualitative changes in the system that will force HEIs to make important readjustments in order to fit with public sector financial management systems (Rodríguez Vargas 2005; Leydesdorff 2006; Bonaccorsi and Daraio 2007).

According to the OECD (1999), universities are developing new roles and missions that have serious implications for their structures. At the same time, universities are carrying out processes of cost rationalization due, among other things, to the decrease in public R&D funding and the increase in private funding. For example, in Germany, Spain and Portugal, between 1997 and 2005 public R&D funding decreased by 1.0, 0.5 and 10.6% respectively, while private financing of universities increased by 2.4, 5.6 and 13.7%, respectively (Eurostat 2007; INE 2007).

To cope with these changes, governments and HE agencies are implementing strategies to improve HE efficiency and ensure optimal utilization of resources. Spanish universities have undergone a complete legal and structural transformation over the last few decades, which has resulted in major reforms to their systems. Governments are establishing new forms of management for public institutions, the most important of which is greater autonomy, in exchange for which these organizations are required to demonstrate greater efficiency, efficacy and responsibility (LOU 2001, 2007). In this context, many theoreticians think it is vital that universities be evaluated (Keller 1999; Villarreal 1999; Pla and Villarreal 2001; García-Aracil et al. 2006).

Evaluation of universities is a relatively recent phenomenon in Spain compared to other western countries; North America can be taken as the reference case (Blank 1993; De Miguel 2007). HE assessment is a complex process that requires previously agreed, reliable and appropriate standards (Miguel Díaz 1999). Rather surprisingly, in a world where information plays an important role in the creation of new knowledge, we lack information about how to develop such indicators (Bonaccorsi and Daraio 2007). Thus, there has been an upsurge in studies on the evaluation of universities using different indicator systems (Douglas Williams 1995; García-Aracil et al. 2006; Aghion et al. 2007; García-Aracil and Villarreal 2009), which has resulted in a multiplicity of indicators in the literature addressed to teaching, research activities, the transfer of research results, or several of these factors simultaneously. There is also a lack of adequate disaggregated data. Therefore, it is necessary to systematize the existing indicators to facilitate the establishment of criteria for decision making and the classification of the factors related to evaluation (Oakes 1989; Consejo de Universidades 1999; Westerheijden 1999; García-Aracil 2007; MEC 2007).

In this paper we present a review of the indicators proposed by some OECD countries to evaluate HEIs, with special attention to those developed in Spain. The paper is organized as follows: “University institutional context and conceptualization of indicators” section presents the main characteristics of the institutional context of the university and defines the use of indicator systems for the evaluation of universities. “Indicators used to evaluate HEIs” section presents a review of the literature on indicators and “Conclusions” section provides comments and some final remarks.

University institutional context and conceptualization of indicators

The university is seen as the most important social space for the promotion of ideas and intellect. From their medieval origins to the beginning of the nineteenth century, European universities were considered ‘ivory towers’ where intellectuals produced and transmitted knowledge that was often disconnected from the practical concerns of everyday life (Etzkowitz et al. 2000; Martín 2000; Martin and Etzkowitz 2000).

At the beginning of the nineteenth century, German universities contributed to the rise of a second mission, research, which has become as important as teaching. At the end of the twentieth century, Wilhelm von Humboldt’s vision of an institution in which research and teaching are linked was adopted in many OECD countries (Geuna 1999).

At the same time, HE was moving from being an elite system to becoming a ‘mass’ system accessible to the whole of society. This transformation, which can be explained by the spread of democratic education and the influence of the market in society, has provoked important changes in the university system as a whole. Among the most significant changes, we highlight: (1) a change in government financing from a centralized model based on public subsidies to a diversified structure based on shared models of financing (OECD 1999); (2) a decrease in the role of government in the financing of research and development (R&D) due to the transfer of the management of HE facilities to regional governments (Bricall 2000); (3) increased industry funding for R&D (Clark 1997, 1998); (4) stronger relationships between academia and industry promoting more efficient innovation networks (Manley 2002); (5) the internationalization of university research (Davies 2001); and (6) recognition of the importance of universities in the knowledge-based economy (Etzkowitz and Leydesdorff 1997).

These changes have had two main effects. Universities have abandoned their ivory tower mentality, and there is increasing differentiation among institutions in their response to the demand for teaching and research (Scott 1998; Martin and Etzkowitz 2000). The increasing emphasis on the knowledge society, the globalization of services, the scientific-technical revolution and the interest in economic welfare in countries with competitive economies have all combined to promote the appearance of a new university model that includes the so-called ‘third mission’ in the name of entrepreneurialism, innovation and social commitment (Bricall 2000; Martin and Etzkowitz 2000; Commission of the European Communities 2006; Bueno Campos 2007; Gulbrandsen and Slipersaeter 2007).

In this socio-political context and knowledge-based society, HEIs have three interrelated and inseparable missions: teaching, research, and the new third mission of direct connection between university research activities and the external economic and social worlds (Gibbons 1999; Martin and Etzkowitz 2000; Molas-Gallart 2002; European Commission 2005; Laredo 2007). The challenge is to find an appropriate balance between these roles and responsibilities. This requires evaluation of universities’ resources, processes and results in order to: (1) improve efficiency (Bonaccorsi and Daraio 2007); (2) speed up and clarify the rendering of accounts (Lepori et al. 2007); (3) advance knowledge about the social impact of education and the economic value of investment in education (Hernández Armenteros 2003); (4) enable horizontal comparisons of universities in similar environments and vertical comparisons of the services being offered by individual universities (Tricio et al. 1999); and (5) analyze the impact of universities on society (El-Khawas et al. 1998; Pla and Villarreal 2001; Giménez García and Martínez Parra 2006).

In this sense, indicator systems are frequently utilized for the evaluation of HEIs (Mora 1991; Consejo de Universidades 1999; Aghion et al. 2007). In Europe, since the late 1970s there have been proposals for the construction of indicators to evaluate universities (Cave et al. 1988, 1997; Mora 1991; Molas-Gallart 2002). The indicators used for institutional evaluations can be based on quantitative or qualitative empirical data (Cuenin 1987; Cave et al. 1988, 1997), and are commonly applied to measure the degree of achievement of institutional missions and objectives. If systematically collected from primary and/or secondary sources of data, they enable the assessment of institutional productivity and/or efficiency (Lázaro 1992; Chacón Moscoso et al. 1999).

The evaluation of HE systems and the measurement of objectives achieved are complex. For this reason, many methods have been proposed, and opinion differs about which indicator systems are most appropriate. De Miguel (1989) suggests five groups of indicators based on: (1) results (outputs); (2) internal organizational processes; (3) mixed or integrative criteria; (4) organizational culture; and (5) capacity for change. García Ramos (1989), following De Miguel’s approach, proposes eight blocks of indicators: (1) results (outputs); (2) the link between resources and results (inputs-outputs); (3) internal organizational processes; (4) technical aspects of the organization; (5) cultural aspects of the organization; (6) capacity to change; (7) the relationship between the organization and human factors; and (8) integrative criteria.

Other authors have developed alternative classifications (for more detail see Clark et al. 1984; Murnane 1987; Wimpelberg et al. 1989; Blank 1993; De Miguel et al. 1994). In this paper, we adopt the classification proposed by Rodríguez Espinar (1999), which is based on a generic model for the evaluation of HEIs. Evaluation models generally fall into two categories: (1) those that emphasize the evaluation typology: (a) internal vs. external evaluation; (b) peer review vs. evaluation based on indicators; and (2) those that emphasize the purpose of the evaluation: (a) institutional vs. program; (b) inputs, processes and outputs; (c) quality, equity, effectiveness, efficiency and efficacy; (d) teaching, research and management; (e) third mission activities. In this paper we focus on the second of these categories (the purpose of the evaluation). In the next section we describe some of the most important indicators developed in various OECD countries, with special attention to Spain.

Indicators used to evaluate HEIs

Institutional vs. program evaluation

Public and private bodies are developing indicators to evaluate universities that take account of the context of the evaluation: the entire institution (Mora 1999; Pla and Villarreal 2001; García-Aracil and Daraio 2009) or the individual program (Guerra et al. 1999; González Fernández et al. 1999; Stassen et al. 2001).

At the institutional level, we examine some OECD and ENQA (European Network for Quality Assurance in HE) proposals. The OECD, through its INES (International Indicators of Education Systems) project, has developed a system of education indicators for cross-national comparisons and collects data from secondary sources on an annual basis. These indicators relate to the general educational context, including aspects such as economic and human resources (academic staff, technical and administrative staff, public expenditure on education, expenditure per student, etc.), educational processes (understood as instruments to enhance the performance of university activities, such as class size, faculty timetables, etc.) and the results achieved by the institution and their impact on society (measured by the literacy teaching index, participation in the labor market based on educational achievement, etc.) (OECD 2007).

ENQA disseminates information, experience and good practice in the field of quality assurance in HE, based on consensus among a panel of experts, in order to guarantee the external and internal quality of HEIs in the European HE Area (EHEA). Internal quality refers to intrinsic institutional operations and is mainly evaluated in house; its main purpose is to guarantee quality in areas such as student assessment processes, academic resources, and so on. External quality is the additional value gleaned from institutional best practice. It is judged by an external agency, and the results provide objective and independent information. External quality mainly takes account of the procedures utilized by institutions to evaluate their internal quality (ENQA 2007).

In the USA, the New England Association of Schools and Colleges (NEASC) has developed standards for evaluating all levels of education. These relate to the institutional mission, the planning and organization of the university, faculty (training and/or dedication to teaching, research or innovation activities), students and other resources (CIHE 2007). At the same time, the Southern Association of Colleges and Schools (SACS), through its Expert Commissions, tries to enhance educational quality throughout the southern states of the US and to improve the effectiveness of institutions by ensuring that they meet the standards established by their respective HE communities. These Expert Commissions provide the incentive for institutions to improve their programs and services within the boundaries of their resources and capacities, and to create an environment in which teaching, public service, research and learning occur, as appropriate to their individual missions (SACS 2008).

In the UK, the Quality Assurance Agency for HE (QAA) is one of the most important independent bodies carrying out HE evaluations, and it is the reference point for most other schemes. The QAA was established in 1997 and is funded by subscriptions from UK universities and colleges of HE and through contracts with the main UK funding bodies. Its mission is to safeguard the public interest in standards of HE qualifications and to inform and promote continuous improvement in the management of the quality of HE. To do this, it works together with HEIs to define academic standards and quality. The QAA assesses aspects such as the institutional mission, academic infrastructure, role of students, admissions policy, staff support, and so on (QAA 2006). These institutional audits are developed in partnership with the HE Funding Council for England (HEFCE). Moreover, HEFCE, through its working groups, promotes and funds high-quality, cost-effective teaching and research to meet the needs of students, the economy and society. HEFCE analyses academic aspects (student numbers, results, employment of graduates, etc.), research activities (research income, publication of research results, etc.) and wealth-generating activities (collaborative research with industry, commercialization of research results, licensing activities) (HEFCE 2008). There is also a set of indicators related to expenditure and to academic and non-academic staff based on a proposal by Cave et al. (1988, 1997).

In Spain, Mora (1991), building on Cuenin (1987), proposed a set of indicators to measure the internal management of resources, institutional planning, academic organization and links to external agencies. Researchers at the University of Valencia (UVEG) and the Spanish Council for Scientific Research (CSIC) developed, in 2008, a scheme for evaluating the regional impact of entrepreneurial universities (García-Aracil and Villarreal 2009). These indicators fall into nine categories: (1) changes in demand related to new knowledge areas, new specialties, etc. (e.g. numbers of enrolled and graduated students); (2) changes in the environment in terms of the influence of private initiatives (e.g. numbers of public vs. private institutions, student ratios by type of institution); (3) limitations and/or financial or regulatory restrictions (e.g. public vs. private budget); (4) administrative ability of the institution to fuse new managerial values with traditional academic values (e.g. existence of a strategic plan); (5) peripheral developments focusing on the relationship between the business and academic environments (spin-offs, etc.); (6) financial diversification based on source of income (changes in the financial structure); (7) academic stimulation or teaching function (enterprise activities); (8) integration of an entrepreneurial culture based on the business and innovator ‘ethos’ of the institution (e.g. programs to promote entrepreneurial activities); and (9) assimilation of an entrepreneurial culture, such as the integration of entrepreneurial promotion mechanisms (e.g. rewards for entrepreneurial activities). It is also important to underline the efforts of the Quality Assurance Agency for the University System in Catalonia (AQU). The purpose of AQU Catalonia is the assessment, accreditation and certification of quality in the field of universities and HEIs in Catalonia. This agency has defined indicators for supply, demand/enrolments, access to university, human resources and student results (AQU 2007).

For the evaluation of programs, UNESCO has produced some indicators for mission, objectives, resources, curriculum and teaching methods for HE systems in Eastern and Western Europe (UNESCO 2004).

In the USA, the Council for HE Accreditation (CHEA) analyses academic quality (student achievement), accountability (financial audit), promotion of change in terms of the development of new study programs, administrative capacity (whether the procedures used in the organizational structure are appropriate and democratic), continuous accreditation and availability of resources (facilities and equipment) (Eaton 2006). The US Department of Education (USDE) focuses its accreditation efforts on the quality of university programs; it has responsibility for the management and disbursement of public funds (Eaton 2006). ABET, through its Accreditation Policy and Procedure Manual, assesses the institutional organization, the studies offered, admissions policy, academic staff, material resources and the support services offered to students (ABET 2006).

In Spain, ANECA (the National Agency for Quality Assessment and Accreditation) is responsible for monitoring the performance of the public HE service based on objective procedures and transparent processes. Its objective is to improve the positioning of universities in the national and international environment (ANECA 2008). A group at the University of Valladolid (UVA) (Guerra et al. 1999) has proposed indicators to measure the profiles of university departments taking account of their structural parameters, academic achievements and research performance. These measurements are implemented through surveys.

Table 1 summarizes the evaluation methods described above.

Table 1 Review of indicators: Institutional evaluation vs. program evaluation

Indicator systems for HE evaluation are designed to provide information about how closely universities are meeting their objectives (Cave et al. 1988, 1997; Mora 1991; ENQA 2005; QAA 2006; CIHE 2007; OECD 2007; HEFCE 2008; SACS 2008). Most of the systems referred to above define the university mission and its organizational structure (see column (1), Table 1). Whether they are used in an institutional or in a program evaluation, by distinguishing between the purposes and strategies of universities we can see how the available resources are being used.

Column 2 in Table 1 shows that the proposals that include indicators for admissions policy, access procedures, registration, and so on, are more or less the same as those that include indicators related to the university mission (see column (1), Table 1).

These schemes also take account of teaching and research inputs and enable analysis of the opportunities for universities to develop their functions (see columns (3) and (4) in Table 1). Note that all these schemes, either directly or indirectly, make reference to resources (human, financial or equipment). However, not all of them include indicators for academic or research results (see columns (5) and (6) in Table 1); this is the case for the CIHE, SACS and UNESCO proposals, and the ANECA manual for Spain. Thus, these proposals cannot yield performance, effectiveness or efficiency indexes based on the ratios between inputs and outputs.
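As a purely illustrative sketch (the specific ratios below are our own and are not drawn from any of the schemes reviewed), such an index relates an output measure to the inputs used to produce it, for example:

\[
E_{\text{teaching}} = \frac{\text{graduates in year } t}{\text{academic staff (FTE) in year } t}, \qquad
E_{\text{research}} = \frac{\text{publications in year } t}{\text{R\&D expenditure in year } t}.
\]

A scheme that records only inputs (staff, budget, infrastructure) leaves the numerators of such ratios undefined, which is why the proposals noted above cannot produce these indexes.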

Table 1 shows that systems of indicators that include the assessment of third mission activities are less numerous; however, most make reference to the advice given to students (see columns (7) and (8), respectively, in Table 1).

Finally, we should underline the diffuse boundaries among proposals. Although this section has focused on the context in which the evaluation of institutions or programs is developed, it is interesting to see how the ENQA, QAA, CIHE and SACS systems, which are oriented to university accreditation, have introduced indicators for the review and control of programs. Likewise, proposals such as those of UNESCO and ABET, which are oriented to the evaluation of programs, include indicators that relate to the institutional framework (see column (9) in Table 1).

Evaluation of inputs, processes and outputs

There are indicator systems that focus on the object being evaluated within the university. They consider HE as an input–output transformation process. It is sometimes difficult to distinguish between inputs and outputs, because some indicators refer to both teaching and research. Process indicators are useful because they enable assessment of the institutional context, societal demand and the added value of social conditions.

At the international level, the Pan-Canadian Education Indicators Program (PCEIP), an initiative of the Canadian Education Statistics Council (CESC), provides information, collected through surveys and secondary data sources, on the supply of and demand for education, financing, student achievement, academic staff and labor market transitions (CESC 2005). The Association of Universities and Colleges of Canada (AUCC) publishes university indicators related to the supply of and demand for studies, infrastructures, financing and research resources (AUCC 2008). In Australia the HE Council, and in Germany the Federal Agency of Statistics, provide indicator-based information on the number of students enrolled, academic and non-academic staff, infrastructures and financial resources (UNESCO 2003).

In Spain, the National University Quality Evaluation Plan (PNECU) has as its main objectives to promote quality assurance systems for universities, to develop homogeneous methodologies to evaluate HEIs and to provide objective information about the academic activities, production functions and financial systems of HEIs (Consejo de Coordinación Universitaria 2002). The University of Oviedo (Miguel Díaz 1999) has constructed indicators related to the evaluation of teaching results (e.g. success rates, professional human resources, student satisfaction), the evaluation of teaching processes including the use of resources (e.g. teaching load, student/professor ratio), and the evaluation of quality maintenance systems (e.g. attendance and class participation rates, student support system).

Table 2 summarizes the systems described in this sub-section.

Table 2 Review of indicators: inputs, processes and outputs evaluation

In terms of inputs, all the systems referred to include indicators for human resources. Some focus on academic and non-academic staff; others focus on students. Only two of the six proposals in Table 2 include indicators for infrastructures.

Indicators related to processes provide information on how institutional activities are performed. They distinguish between general processes, where student characteristics carry the greatest weight (age, study preferences, time dedicated to study, etc.), and social processes, where the evaluation is focused on the student’s social context (parents’ educational level, household income, etc.).

In terms of outputs, the indicators provide data on academic results, but not all systems give information on research and third mission activities.

Evaluation of quality, equity, effectiveness, efficiency and efficacy

In terms of evaluation, proposals have been developed that include indicators relating to the quality, equity, effectiveness, efficiency and efficacy of the HE system. Quality refers to the resources available at universities, including the improvements needed; equity refers to the egalitarian distribution of resources within the university system; effectiveness refers to the degree to which the objectives of the university are achieved, based on the difference between actual and forecast results; efficiency refers to the best use of resources; while efficacy in this context refers to the price of the results obtained (Cave et al. 1988, 1997; Mora 1991; El-Khawas et al. 1998; OEI 1998; Consejo de Universidades 1999; Fernández 1999; De Pablos Escobar and Gil Izquierdo 2004).

Within this context, in the UK the PCFC Macro Performance Indicators proposal (Rodríguez Espinar 1999) suggests a set of indicators of efficiency (cost of producing a graduate), effectiveness (number of successful students), and quality (student satisfaction). In the Netherlands, the University of Maastricht (Joumady and Ris 2005) has been working on the reliability and validity of indicator systems and especially policies related to students and faculty, to quality control, to innovation and to the internationalization of universities.

In Spain, a research group at the Complutense University of Madrid (UCM) (De Pablos Escobar and Gil Izquierdo 2004), using secondary data sources, has developed a system to measure quality (number of places, class size), efficacy (graduated students vs. enrolled students) and equity (student scholarships, own funding).
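To make these concepts more concrete, the following formulas sketch how the measures cited above might be operationalized; the terminology varies somewhat across the proposals reviewed, and the notation is ours, offered purely as an illustration rather than as the definitions actually used by the PCFC or UCM schemes:

\[
\text{efficiency} \approx \frac{\text{total teaching expenditure}}{\text{number of graduates}} \;(\text{cost of producing a graduate}), \qquad
\text{efficacy} \approx \frac{\text{graduated students}}{\text{enrolled students}},
\]
\[
\text{effectiveness} \approx \text{actual results} - \text{forecast results}.
\]

Read in this way, an efficacy index close to one would indicate that almost all enrolled students graduate, while a rising cost per graduate would signal declining efficiency.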

Table 3 shows that the UK, Dutch and Spanish proposals all take account of quality and equity. Only two (the UK and the Dutch schemes) include indicators for effectiveness and efficiency, while the Spanish scheme includes indicators for efficacy. It should be noted that the PCFC proposal in the UK is oriented to justifying government funding, while the Dutch proposal is focused more on process improvements and the Spanish scheme focuses on university ranking.

Table 3 Review of indicators: quality, equity, effectiveness, efficiency and efficacy

Evaluation of teaching, research and management activities

Universities are responsible for several activities that generate different outputs (Villarreal 1999). Some institutions have proposed sets of indicators grouped as follows: teaching, research and management (Chacón Moscoso et al. 1999).

In France, the National Committee of Evaluation of Public Institutions (CNE) evaluates the country’s cultural, scientific and professional institutions through surveys. From a public service point of view, it pays attention to teaching activities, research, management and institutional governance (CNE 2003).

The University of Seville, based on information derived from surveys, has proposed indicators for teaching (programs, degrees, subjects, teaching methodology, academic results), research (general resources, funding sources, research results) and university management (admissions policy and human resources) (Chacón Moscoso et al. 1999). Work at the University of Burgos, also based on survey data, has proposed a system of indicators for teaching quality and educational research which emphasizes resources over results (Tricio et al. 1999).

Table 4 presents a synthesis of the above proposals. It can be seen that the indicators relating to the teaching function are classified into indicators that provide information on subjects, resources, educational methodology and academic results. In terms of research activities, the proposals developed in Spain suggest indicators for financial and personnel resources and research results, while the French proposal includes indicators for production results and scientific diffusion. Indicators relating to management activities refer chiefly to admissions policy, financial and human resources management, documentation services and the planning of the organizational structure.

Table 4 Review of indicators: teaching, research and management activities

Evaluation of third mission activities

The increased attention being given to the universities’ third mission is due to the changing relationships between science and society and to the growing social and economic role of knowledge production. However, there is no consensus on the definition of the third mission. Three definitions have been used in the literature: (1) additional sources of income; (2) technology commercialization activities; and (3) extension work and commitment to the community (Molas-Gallart and Castro-Martínez 2006). Although these concepts may appear similar, they refer to different objectives and political strategies (Molas-Gallart and Ordóñez 2006).

The OECD has compiled statistical data that can be used as R&D, technology and innovation indicators. These include the Frascati manual (OECD 2002), the Technology Balance of Payments (TBP) manual (OECD 1990), the Oslo manual (OECD 2005) and the Patents manual (OECD 1994). The latter three relate to the business context but can also be applied at the university level to evaluate third mission activities (European Commission 2003, 2005).

The Frascati manual focuses on the analysis of human resources (R&D personnel) and financial resources (income and funding sources) (OECD 2002).

The TBP manual indicators evaluate and analyze technology transfer processes (patents, licenses, know-how, trademarks, prototypes), services with technical and/or intellectual content (technical support, contracts or training), and technology diffusion (services with a high technological content) (OECD 1990).

The Oslo manual is a methodological guide to compiling statistical data on the resources for and results of innovative activities, which can be extrapolated to HE. These indicators are used to carry out comparisons between technical and general institutions, different knowledge areas and institutions of different sizes (OECD 2005).

The Patents manual analyses technological and scientific activities. The use of patents as indicators measures innovation activity outputs and the direction of technological change (OECD 1994).

In addition, the European Report on Science and Technology Indicators and the European Commission (EC) provide indicators that can be applied to the production, dissemination and absorption of knowledge (financial and human resources) and to scientific research outputs (publications, patents and scientific honors) (European Commission 2003).

In the US, the North Central Association of Colleges and Schools, through its Higher Learning Commission (HLC), evaluates and accredits the performance of education institutions through peer review based on five general criteria: (1) institutional mission; (2) future vision; (3) student learning and the capacity of faculty; (4) acquisition and application of knowledge; and (5) commitment and service to society (HLC 2003). SPRU (Science and Technology Policy Research) at the University of Sussex distinguishes between universities’ capacities (knowledge and infrastructure) and activities (teaching, research and communication). It considers 12 categories of third mission activities and proposes 34 indicators, including the number of patents, spin-offs, entrepreneurial activities, and contracts with non-academic organizations (Molas-Gallart 2002).

ProTon Europe, a European network of public research agencies and universities, has also been established to evaluate the efficiency of European Technology Transfer Offices (TTOs). The indicators proposed are based on innovation and organization theory and map onto three dimensions of knowledge transfer: context, results and processes (ProTon 2007). In Spain, TTOs, concerned about the need for information and management indicators, have set up a working group on indicators to gather information about universities as institutions and to analyze how universities collaborate with businesses in their regions on research (Castro-Martínez et al. 2005).

Table 5 presents the proposals for the evaluation of third mission activities. There are some parallels among the Frascati and Oslo manuals and the EC system, all of which propose statistics and, in the case of the EC scheme, indicators related to resources, although with some differences. The Frascati manual does not include statistics on the outputs of innovation activity, the Oslo manual proposes economically quantifiable outputs, and the EC scheme includes non-monetary outputs such as publications and scientific cooperation. The TBP and Patents manuals refer to transfer and technological diffusion activities, and to university-business relationships through technical or intellectual advisory services.

Table 5 Review of indicators: evaluation of ‘third mission’ activities

The SPRU, ProTon and Spanish TTO Network proposals are similar in that they all suggest indicators for the transfer of research results through patents, licenses, spin-offs, research contracts and consultancy activities. Furthermore, the US HLC proposal establishes generic criteria for how to respond to community needs and how to collaborate with business. The SPRU scheme includes indicators for the transfer capacities of teaching activities (employability and job satisfaction).

Finally, ProTon and the TTO network include indicators that provide general information about universities and public research agencies, as well as the results of TTO activities. Synergies among these proposals will enable comparisons at the European level. It should be emphasized that the above proposals do not only suggest indicators for the evaluation of results; they also introduce aspects relating to the university as an institution and to its resources.

Conclusions

This paper demonstrates the complexity involved in analyzing the indicator systems proposed by national and international agencies and major research groups for the activities of HEIs.

Organizations such as UNESCO, the OECD, the EC and other agencies have produced manuals, normative documents and guides aimed at achieving consensus on the indicators to be applied to the assessment of HEIs (UNESCO 2004; Commission of the European Communities 2006; OECD 2007). No consensus has been achieved to date.

Our attempts to classify some of these indicators show that the boundaries between some of the proposals are not clearly defined, for example in the differentiation between indicators to evaluate institutions or programs, or between indicators to evaluate resources and results. Moreover, there is an additional difficulty. Universities, which are responsible for three main functions, have limited human and economic resources, and many of the inputs required for their different activities are the same. For instance, it is often the same university staff that carry out teaching, research and third mission activities. The situation for outputs is similar: how can we say that a particular result stems from teaching rather than from research? Some evaluation schemes look at academic results but not at the results of research or technology transfer.

There is also a problem related to the definition of indicators: should they be quantitative or qualitative? Should data analysis be descriptive, inferential or multivariate? The degree of precision with which each proposed scheme defines its indicators is also significant: some proposals are concerned with establishing absolute or relative value indexes, while others are limited to formulating generic ‘reports’.

There are also differences in the categories used to define these indicators. For instance, in the case of infrastructure resources, some schemes compute the number of places (Chacón Moscoso et al. 1999), others consider the available area (De Pablos Escobar and Gil Izquierdo 2004), and yet others measure student places (Miguel Díaz 1999).

Taking into account the proposals for the evaluation of third mission activities, we can see that most assess the impact of research results but ignore the employability of graduates, graduates’ labor market returns, and so on, which would give us information about the social and labor market impact of HE.

Our study shows how difficult it is to establish criteria to classify the existing indicators, given the multiple objectives of HE and the variety of principals and stakeholders involved. Solving these problems is fundamental both to the rationale for policy and to the relevance and practical use of indicators. For that reason, it is useful to discuss which indicators are best, since they should command consensus among policy-makers and members of the university community. In this sense, it is expected that there will be a move towards greater coherence among quality systems in the coming decades.

With the aim of contributing to this, our future research will focus on proposing indicators that capture the university’s missions. This will be a comprehensive proposal for input and output indicators for each mission. Moreover, a measurement system is needed that adopts a holistic approach and takes account of the variety of relationships between HEIs and the rest of society. This would require the definition of a set of internal and external indicators: the internal indicators would refer to the characteristics of HEIs that define themselves as entrepreneurial, and the external indicators would refer to their regional impacts.

The response of HEIs to regional needs is relevant to the debates about the economic role of HE in Europe, and to discussion of the new socio-economic challenges of HE in educated societies. How a university can contribute to its region through education, research and community services is also relevant to individual and societal decision-making. Furthermore, it bears on the changes in organization and management that will be necessary, and policy makers should take account of the response of HEIs to regional needs in any rational analysis of educational investment, organization, finance and planning.