1 Introduction

The nature of public services has changed over the years, mainly induced by new government orientations and social, economic and technological changes. In many of these services, a new form of management, more concerned with the organization and coordination of services towards an increased efficiency in service delivery, has been implemented (Bleiklie et al. 2000; Mwita 2000; Pollitt 2003). In order to classify and explain the reforms that took place in many countries, some authors came up with concepts such as ‘managerialism’ (Aucoin 1990; Pollitt 1990) or ‘New Public Management (NPM)’ (Hood 1991).

Similarly to what happened in many public organizations, universities have also faced increasing pressure to change their ‘traditional’ nature (Amaral and Magalhães 2002). According to the existing literature, several exogenous forces have contributed to the pressure to reform these institutions. Among these are the following: first, the shift from being ‘Ivory Towers’, inhabited by scholars with the liberty to pursue knowledge in a rigorous and critical way, enjoying the independence of mind that came from autonomy and intellectual freedom (Barry et al. 2001; Czarniawska and Genell 2002), to being deliverers of mass higher education (Halsey 1995); second, the increasing difficulty of financing the institutions exclusively with public funds; third, European policies; and finally, the emergence of new approaches to public policy, such as NPM (Hood 1991; Shattock 1999; Chevaillier 2002; Salter and Tapper 2002).

Growing demands to become more efficient, effective and accountable led to an increased interest in introducing control mechanisms aimed at assessing organizational performance. As a result, Performance Management Systems (PMS) have been implemented in some universities, and many of these institutions have started to rethink their forms of organization, governance and management (Vilalta 2001).

Even though many universities claim that they have implemented PMS and that they are now more accountable to their stakeholders (Melo et al. 2010), it is neither clear how performance is being measured and managed nor what the real effect of these new managerial arrangements on the governance structures of these institutions has been, particularly concerning the roles of key actors. Therefore, the central focus of this chapter is, first, to understand the way performance information is being collected and used and, second, to understand the extent to which the roles and influences of the main actors in the governance structures of universities have been affected by PMS.

Data are presented from a Portuguese university that considers itself innovative and entrepreneurial and is, in fact, the only Portuguese university that belongs to the European Consortium of Innovative Universities (ECIU). As such, one would expect that such an institution would have implemented adequate systems to measure, report and manage performance.

The chapter is structured in the following way: first, a systems view of performance management in higher education is introduced; second, an analytical framework incorporating the main actors in the governance structures of universities is displayed and the research questions are outlined; third, the research design, methods and setting are introduced; fourth, findings are presented according to the framework developed; and finally, results are discussed and conclusions are drawn.

2 Theoretical Background

2.1 Performance Management in Higher Education: A Systems View

For the purpose of this chapter, performance management is defined as an integrated system where performance information is closely linked to strategic steering. It consists of three stages: the first is the measurement stage, which involves measuring the input, output, level of activity or outcome of organizations, people and programs, thereby gathering performance information (Radnor and Barnes 2007; Askim 2008); the second is the reporting stage, which entails communicating performance information to decision-makers; and the third is the management stage, which consists of using the information and acting upon it, aiming at improvements in behavior, motivation and processes (Bouckaert and van Dooren 2003; Radnor and Barnes 2007).

Given the importance of linking the measurement process with strategic planning and the need to look at several levels of performance, it is considered appropriate to use an ‘input–output model’ to look at the performance of universities (Fig. 1). This model, based on systems theory, provides tools for dynamic and systemic thinking, since it acknowledges the existence of a closed loop between measuring performance, taking corrective action and achieving an outcome response (Boland and Fowler 2000). It comprises four main components: inputs, processes, outputs and outcomes.

Fig. 1 A systems view of performance management in universities [Adapted from Bouckaert and Halligan (2008, p. 33) and Dochy et al. (1990, p. 145)]

According to this model, higher education is seen as the process of transforming inputs (e.g., students’ and academics’ time) into outputs, which can be broadly classified as relating to the three areas of every university’s mission: teaching, research and the third mission. Outcomes are the products of a university in the long run and include, for instance, building a well-educated society (Boland and Fowler 2000). This whole process is monitored and controlled. At the end, outputs and outcomes are measured against pre-established targets and, if they differ from those targets, corrective action is taken.
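To make the closed loop concrete, the sketch below encodes the three stages defined earlier (measurement, reporting, management) as a single cycle that compares measured outputs against pre-established targets and flags corrective action where a gap appears. It is a minimal illustration only: the indicator names, target values and threshold logic are our own assumptions, not data or procedures from this chapter.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One measured aspect of performance (an input, output or outcome)."""
    name: str
    target: float  # pre-established target
    actual: float  # measured value for this cycle (the 'measurement' stage)

def manage(indicators: list[Indicator]) -> list[str]:
    """The 'management' stage: act upon reported gaps between actual and target."""
    actions = []
    for ind in indicators:
        gap = ind.actual - ind.target
        if gap < 0:  # below target -> corrective action on the underlying process
            actions.append(f"review processes behind '{ind.name}' (gap {gap:+.1f})")
    return actions

# One measurement-reporting-management cycle, with invented figures.
cycle = [
    Indicator("graduates per year", target=2000, actual=1850),
    Indicator("indexed publications", target=450, actual=470),
]
for action in manage(cycle):
    print(action)  # only 'graduates per year' triggers corrective action
```

In a real system the loop would feed forward into the next planning cycle, as the ex post audit stage described below suggests.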

Since performance measurement should not be used as an end in itself, but to provide staff with feedback designed to enable them to develop and improve their practice, generating ownership and building trust is essential for a successful performance management system (Bouckaert and Halligan 2008). This means that professionals ought to be invited to say what constitutes a good service and what they want to be assessed on (De Bruijn 2007). In addition, there should be a clear identification of the functions of performance measurement, as well as of the intended forums for dealing with performance measurement results. In this way, managers and professionals can trust that any deviation from this agreement will require consultation.

In the last stage of the systems view there is an ex post audit and/or evaluation, comprising both an internal and an external dimension. In higher education these tasks can be performed by an accreditation agency. Ideally, this feeds forward into the next cycle.

Performance is usually assessed against a set of criteria. These normally relate to the three “e”s: economy, concerned with ensuring the lowest possible cost; efficiency, concerned with how much output is achieved for a given level of input at a specified level of volume and quality; and effectiveness, concerned with the extent to which services confer the benefits they are intended to confer (Holloway 1999). The three “e”s relate to the more rational, hard type of control mechanism in performance management. Another fundamental dimension is trust, which represents a softer type of control mechanism. Trust, when present, acts as a facilitator of the more rational dimension. Trust-based control systems rely on traditions, on professions and on standard practice. A key challenge is to keep these two systems in equilibrium and, indeed, to make them work in unison (Bouckaert and Halligan 2008).
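Read against the components of Fig. 1, efficiency and effectiveness can be glossed as simple ratios. The formalization below is our own schematic reading, not a formula taken from Holloway (1999); economy, by contrast, is about minimizing the cost of acquiring inputs at a given quality and so resists expression as a single ratio.

```latex
\[
\text{efficiency} = \frac{\text{outputs}}{\text{inputs}}
\quad \text{(at a specified volume and quality)},
\qquad
\text{effectiveness} = \frac{\text{benefits conferred}}{\text{benefits intended}}
\]
```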

If working well, a PMS should provide information on important matters, promote appropriate behavior, provide mechanisms for accountability and control, and create a mechanism for intervention and learning (Haas and Kleingeld 1998; Neely 1998); that is, it should be used for improvement purposes (Radnor and Barnes 2007). But that seldom happens. In fact, some authors (e.g., Radnor and McGuire 2004; Hood 2006; Laegreid et al. 2008) have argued that the focus for many public service organizations is on measurement, leading to an excessive amount of data being collected with little resulting action.

But how are these systems working inside universities? And how have they influenced key actors in the governance of universities?

In order to answer these questions it is essential to understand the way universities are governed.

2.2 Governance Structures in Higher Education

De Boer (2002, p. 44) regards governance structures as a ‘set of rules concerning authority and power related to the performance of a university’s activities directed towards a set of common goals’. Governance structures thus reflect the way an organization divides and integrates responsibility and authority.

As Fig. 2 illustrates, governance structures can be conceptualized as an ‘inner ring’ and an ‘outer ring’. The ‘inner ring’ represents the internal coordination mechanisms and is composed of the members of the university’s governing bodies—the ‘four Estates’: students, academics, non-academic staff and external representatives.

Fig. 2 Governance structures in higher education

The ‘outer ring’ embodies the external coordination mechanisms and is composed of the state, Europe (understood as European policies) and the market.

This model extends Clark’s (1983) ‘triangle of coordination’ to other internal stakeholders of the university, revisiting the concept of the university’s Estates proposed by Neave and Rhoades (1987). According to Neave (2009), the defining characteristic of an Estate is the central part played by prescribed and formal status, and there has been a move from what Clark (1983) called the ‘academic oligarchy’ to an extended constituency in which all three Estates—Academic, Student and Administrative—have their formal elected place. To these three Estates, this research adds a new one, the ‘External Representatives Estate’, since these members have become increasingly important in the governance and management of universities.

Therefore, Fig. 2 is proposed as the analytical framework that will help to understand how the introduction of the performance management system described above has affected governance structures and the roles, influences and accountabilities of the key actors (the ‘inner ring’).

3 The Case Study

3.1 Research Design and Methods

To illustrate what is happening, it was decided to study the case of a Portuguese university (PU) in depth. The Portuguese higher education system was chosen because it has recently gone through major reforms, which aim at progressively putting in place mechanisms of control in order to meet demands for increased accountability. PU was selected because it is an entrepreneurial organization, recognized for its good performance, and because it was one of the few that decided to become a public foundation subject to private law, which means that it has to raise at least 50 % of its own revenue. It would be expected that such a university would have implemented mechanisms to measure and manage performance.

Performance has been regarded as a system and analyzed along two main dimensions: the collection and the use of performance information. The key actors studied are the four Estates that compose the ‘inner ring’ (see Fig. 2): academics, non-academic staff, students and lay members.

Mixed methods were used to collect data, comprising documentary analysis and interviews. The documents analyzed included policy and strategic documents, minutes of meetings, the results of internal surveys, and statistical data collected from secondary sources. Semi-structured interviews were conducted with 39 members of the four groups that sit on the governing bodies of the university: academics (n = 24), non-academic staff (n = 9), students (n = 4) and external representatives (n = 2). The number of interviews per group reflected each group’s weight within the existing governance structures. The interviews were all recorded and transcribed, totalling 42 h of recordings, an average of just over 1 h per interview.

The interview schedule comprised three main sets of questions. The first set centered on performance measurement and management practices at all levels. To help the interviewees answer these questions, a prompt card was shown to them, listing the main activities that compose the mission of a university (teaching, research and third mission), the employees of a university (academic and non-academic staff), ‘customers’ (students), services and finance. The second set focused on the pressures (both internal and external) to measure and manage performance. The last set related mainly to the influences exerted on decision-making by each of the Estates, and to the impact that the introduction of measurement and management practices might have had on these groups.

All the quotations used were coded to ensure confidentiality: S refers to students, L to lay members, NA to non-academic staff and A to academics.

3.2 Research Setting

The chosen university, PU, was established in the early 1970s. It is divided into 17 departments and has around 13,500 students. It employs nearly 1,500 members of staff, of whom approximately 1,100 are academics.

At the time the interviews took place (between January and June 2009), this institution had three main decision-making bodies at the central level. First, there was the University Assembly, which was composed of a large number of members (approximately 110, comprising academics, students and non-academic staff). This body only held formal meetings on occasions such as the election of the Rector or the approval or amendment of the University’s Statutes. Second, there was the University Senate, with nearly 50 members, comprising academics, non-academic staff, students and lay members. This was the most important collective decision-making body, since it decided not only on academic matters, but also on the approval of the budget, annual plans and strategic plans. Finally, there was the Rector, who presided over the Senate. The Rector, who was elected, appointed high-level institutional officers (Vice-Rectors and Pro-Rectors).

3.3 Performance Measurement and Management Practices

3.3.1 How is Performance Measured?

Results show that the degree of measurement and the way performance information is used vary considerably according to the area.

Teaching and learning is mainly assessed through student feedback. At the end of each semester, the university asks students to fill in a questionnaire about the content of each course and about the academics teaching it. The Office of Information Management then summarizes the questionnaire data in graphical form, and a score from one to nine is given.

Professional bodies also used to visit the university periodically in order to assess the effectiveness of teaching and learning and to accredit degrees. Since its creation, however, the Agency for Assessment and Accreditation of Higher Education has suspended all the accreditation procedures performed by professional bodies.

Internally, there were two Vice-Rectors responsible for education, one for undergraduate degrees and the other for post-graduate degrees, and three institutes (IFIU, IFPG and IFSP) responsible for coordinating teaching and learning within the university. Some of the interviewees felt that sometimes there was an overlap between the activities developed by IFIU and the ones carried out by the Pedagogic Council. Additionally, the Office of Information Management also collects data on teaching and learning (e.g., success rates and retention rates).

Since 2005, there has been no evaluation of programs or degrees at a national level. There used to be a global coordinating body of the evaluation system, the National Council for the Evaluation of Higher Education, charged with assuring the credibility of the evaluation of higher education and with reviewing and reporting on quality assurance procedures. However, according to ENQA (2006), follow-up of the assessments was nonexistent and, in many cases, the reports failed to provide consistent, clear and sufficient information to stakeholders. Most significant was the general perception that the evaluation results had no consequences, since no plans of action were drawn up to overcome or attenuate weaknesses or to reinforce strengths.

Following the recommendations of the European Association for Quality Assurance in Higher Education (ENQA), the Ministry of Science, Technology and Higher Education created the Agency for Assessment and Accreditation of Higher Education (Decree-Law 369/07). The Agency has recently started a preliminary process of accreditation.

Research and scholarship is reviewed mainly through an external audit of research units performed by the Foundation for Science and Technology. To one academic, ‘this is the most evaluated area’ (A67). The evaluation system comprises a periodic evaluation by panels of international experts, which includes direct contact with the researchers through visits to all units. The process culminates with the panel attributing a qualitative grade, which determines funding. The parameters considered are the size of research contracts and the quality of research outputs, reflected in the number of publications, citations and the impact factor of journals.

Internally, there was one Vice-Rector responsible for research and a Research Institute responsible for coordinating research activities within the university. The Office of Information Management also collects statistics on research, but only on demand; to gather that information it usually uses secondary sources, such as citation indexes.

Regarding the third mission, there is a consensus that no measures are used apart from financial ones (e.g., income generated by services provided): ‘It is merely impressionistic’ (A42). In governance terms, there is one Vice-Rector responsible for this area.

In practice, academics’ performance is measured in terms of research, providing information on publications, supervision of postgraduate students, coordination of projects and research grants, and in terms of teaching, through the feedback obtained from students at the end of each semester. Historically, research has been the most important element, not only because it is the most easily measured, but also because it contributes the most to career progression. The Office of Information Management and the Office of Human Resources also collect statistics on academics, such as numbers, categories and salaries, sorted by department. Nevertheless, most interviewees agree that ‘the evaluation methodology is not adequate’ (L82).

Students are very closely monitored in terms of numbers, degree results, completion rates and retention rates, even though there is no follow-up of students after graduation. The Office of Information Management and the Academic Office gather that information, mainly because it has to be sent to the Ministry on a yearly basis.

Non-academic staff are assessed through a national system developed by the Government to evaluate public servants (the Integrated System for the Evaluation of Performance within Public Administration—SIADAP). Within the central administration, each member of staff discusses objectives, and ways to reach them, with the Director of his or her service; performance is then compared with these pre-established objectives. Within departments, non-academic staff agree their objectives with the head of department, who assesses them at the end of each year. The performance of each member of staff can be rated ‘Non-Adequate’ (1–1.999), ‘Adequate’ (2–3.999) or ‘Relevant’ (4–5). Only 25 % of workers can be awarded a grade between four and five, and a maximum of 5 % can be considered ‘Excellent’ (5). The Council responsible for coordinating the evaluation process within the university receives all the results and revises them. All the interviewees consider SIADAP an extremely bureaucratic system that is ‘too time consuming for what you get from it’ (A69). The Office of Information Management and the Office of Human Resources also collect statistics on non-academic staff, such as numbers, categories and salaries.
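As an illustration of the banding just described, the sketch below maps a final SIADAP score to its band and checks the two quotas (at most 25 % of workers in the 4–5 range, at most 5 % rated ‘Excellent’). The function names and the quota check are our own construction for clarity, not the official SIADAP implementation.

```python
def siadap_band(score: float) -> str:
    """Map a final SIADAP score (1-5) to its band, per the thresholds above."""
    if not 1 <= score <= 5:
        raise ValueError("SIADAP scores range from 1 to 5")
    if score < 2:
        return "Non-Adequate"   # 1-1.999
    if score < 4:
        return "Adequate"       # 2-3.999
    return "Relevant"           # 4-5

def quotas_respected(scores: list[float]) -> bool:
    """At most 25% of workers in the 4-5 range; at most 5% 'Excellent' (5)."""
    n = len(scores)
    return (sum(s >= 4 for s in scores) <= 0.25 * n
            and sum(s == 5 for s in scores) <= 0.05 * n)

print(siadap_band(3.2))  # -> Adequate
# With 10 workers, a single 'Excellent' already breaches the 5% cap:
print(quotas_respected([5.0, 4.2, 3.1, 3.9, 2.8, 2.0, 3.3, 1.5, 2.2, 3.0]))  # -> False
```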

Support services are assessed at a central level through the evaluation of the Director of each service by the Administrator. At the departmental level, the head of department assesses each service. In addition, two support services voluntarily launch satisfaction surveys on an annual basis.

In relation to the performance of support services, the university is starting to implement the Evaluation and Accountability Framework. This system, integrated in SIADAP, was developed by the Government to assess the performance of public services at a national level. The Office for Quality, Evaluation and Procedures is coordinating its development inside the university.

There is a consensus that finance is clearly measured at both university and departmental levels. The key performance indicators used by the Finance Office are developed by that office and agreed upon by the Administrator, and they have to comply with those defined by the law that regulates public spending. The consolidated results are presented to the Senate and ratified by this body after being certified by an external auditor. The Office of Information Management gathers all financial information. Departments have their own budgets and some autonomy in running them, within pre-established rules.

3.3.2 How is Performance Information Used? Who Uses that Information?

Data on research is closely analyzed, especially because of the need to prepare for external evaluation exercises. Results in this area were generally reviewed by the Research Institute and used by research units to improve their performance.

Although some data is collected on teaching and learning and on academic staff (through student feedback), it seems consensual that departments do very little with it. Even though some heads of department have taken action on the questionnaire data, such as changing the courses taught by a particular academic, this seldom happens. If academics have tenure, it is still not very clear what can be done, since this depends on a clearer definition of the Statutes that regulate the academic profession. In addition, there seems to be ‘a lack of legitimacy to act upon the data collected from the questionnaires, especially if heads of department are not full professors’ (A47). This is particularly problematic in relatively new departments, which have few full professors.

Moreover, the majority of the interviewees question the validity of the questionnaires, since the way they are administered does not guarantee the representativeness of the sample and response rates are quite low. According to a student, ‘there is not an evaluation culture in the university… [and] there has not been enough time to find the best way to explain to each student that there may be direct consequences from their participation’ (S46). In fact, many of the interviewees feel that students’ willingness to participate in academic life has decreased over the years. Several explanations have been advanced to account for this change: first, the shorter period students spend at the university after the implementation of the Bologna Process; second, the lack of information provided to incoming students concerning the mission, development plan and governance structures of the university; third, the increasing competition for jobs; and fourth, the pressure exerted by families for students to finish their degrees as soon as possible.

In addition, the majority of the interviewees regard the questionnaires as unfit for purpose. They are considered too long, discouraging students from filling them in, and some believe there should be complementary tools to assess the quality of teaching and learning. As one interviewee stated: ‘The questionnaires are not enough… a different system must be used to validate results’ (A39).

As a result, information on academics’ performance seems to be used only for career progression. Indeed, their curricula are thoroughly evaluated when they apply for a position, especially their research activities. Pedagogic and management activities count for very little in career progression, which may lead to a perverse effect: academics may feel tempted to focus on research and spend less time preparing their courses or performing management duties.

A lot of data is collected on students, even though not much is done with it.

For poor-performing non-academic staff, bad evaluations under SIADAP have a direct impact on promotion.

Data concerning support services and finance is analyzed and used for improvement purposes by the central administration.

The third mission is not subject to much measurement, and the scarce data that is collected seems to be neglected.

3.3.3 Pressures to Measure and Changes in the Roles, Influences and Accountabilities of Key Actors

The role of measurement in universities has changed considerably over recent years, especially with the creation of new laws demanding more efficiency, effectiveness and accountability from universities, and with the introduction of external audits.

Externally, pressures come from the state, Europe and the market (the ‘outer ring’ of Fig. 2). Internally, ‘there is a general perception in the university, mainly among those linked to the management of the institution, that the implementation of a performance management system is needed to facilitate decision-making’ (A38). In fact, this university voluntarily asked to be evaluated by the European University Association (EUA 2007).

3.3.3.1 External Pressures
3.3.3.1.1 State

Most interviewees feel that the main pressures to measure come from the Government and are imposed on universities by law: ‘Law imposed the big changes that are happening, otherwise no one would move!’ (NA50)

In fact, a great deal of legislation has been published since 2007, creating a new evaluation framework for Portuguese universities; the Agency for Assessment and Accreditation of Higher Education, believed to be the main source of pressure; a new juridical regime for universities; and a new Statute for Academic Careers. These laws led to vast reforms in the Portuguese higher education system.

3.3.3.1.2 Europe

The European Commission published a modernization agenda for universities, which was welcomed by the member states and the main stakeholders in higher education. The main fields of reform were: curricular reform (also promoted through the Bologna Process); governance reform, accomplished through more university autonomy, strategic partnerships (e.g., with businesses) and quality assurance; and funding reform, which means finding diversified sources of income better linked to performance. It became clear that the implementation of these reforms needed to be assessed, demanding increased measurement. As an interviewee put it: ‘There have been international pressures (…) mainly European [ones]’ (A47).

3.3.3.1.3 Market

Even though some interviewees stated that the market did not have a strong influence in Portugal and had not influenced what happened inside the university, almost all agreed that competition between universities had increased, and some even argued that this competition has ‘benefited universities and other sectors of the Portuguese economy’ (NA55). To the interviewees: ‘the number of students is decreasing and the students that exist are just those. (…) We are competing for the same universe’ (A67). Indeed, some departments struggled to attract students. That is why a number of interviewees mentioned that attracting students became an important issue and that universities felt pressure to be better, which meant ‘reinforcing their marketing’ (A43) and ‘image’ (A80).

Additionally, society started to demand more accountability from public institutions, especially educational ones: ‘We have to be accountable to taxpayers, to justify the money spent’ (S75).

But how have these pressures changed the roles of the key actors included in the ‘inner ring’ of governance?

3.3.3.2 Changes in the Roles of the Estates
3.3.3.2.1 Academic Estate

Several interviewees argued that the role of the academic has changed. They stated that academics are now held more accountable for their actions, meaning that students increasingly assess them. To one interviewee, this does not mean they have lost their autonomy:

I do not think they have lost their autonomy.… What they have now realized is that they cannot have the same future that academics had 20 or 30 years ago…. Today, an Assistant Professor does not know what chances he or she has to progress in his or her career. (A45)

This uncertainty has, to a number of interviewees, increased competition between academics.

Moreover, academics felt they were increasingly asked to perform bureaucratic tasks, including work related to performance management, and expected to perform other roles (e.g., management roles), which, according to them, diverted their attention from research and teaching, and for which they were not adequately rewarded in terms of career progression.

Although most interviewees agreed that academics were now assessed more (especially after the introduction of the questionnaire about the content of courses and about academics), there seemed to be, according to them, few consequences for poor performers, other than some infrequent ‘internal reengineering’ (e.g., assigning courses to other academics).

Additionally, academics also worried about having to ‘share’ their decision-making power with external members, whose role has grown inside the university’s governance structures. Nevertheless, academics were still believed to be the most powerful group: ‘It is obvious that at the end of the day, the power lies with academics’ (A49).

3.3.3.2.2 Administrative Estate

With the introduction of SIADAP, non-academic members of staff became subject to more assessment than before. According to some interviewees, the introduction of this system increased competition and created a poor working environment within services and departments. This group felt their efforts were not recognized, and felt disappointed and discouraged. Moreover, like academics, they felt their workload had increased enormously, and many of them highlighted the need to increase the number of non-academic staff in the university, especially compared to the number of academics (three times as many). Additionally, their representation in university governing bodies was reduced to one member.

3.3.3.2.3 Student Estate

Students’ roles and influences have changed over the years, especially since they started to be seen as ‘consumers’ of higher education. Consequently, in recent years, attracting students and keeping them satisfied has become a concern of every university.

In fact, even though students have always participated very actively in decision-making within this university, they became more influential in terms of assessment with the introduction of the questionnaire to assess courses and academics. Indeed, their opinions became the main tool used to assess teaching and learning (even though most interviewees questioned the validity of this tool, as explained before). Nevertheless, they were still not believed to be very influential in terms of strategic management.

3.3.3.2.4 External Representatives Estate

The number of lay members in governing bodies grew with the introduction of the new governance structures, and their role has been enhanced within the university. They now represent almost 30 % of the members that sit on the General Council, the ultimate decision-making body, and one of them chairs it.

They have been co-opted by the university, most of them coming from the region where the university is located. They were chosen mainly because of their prestige and connections.

They are considered important to the university by most interviewees, especially since they bring in completely different perspectives from insiders: ‘It is important for the university to be connected to the surrounding environment’ (S51).

The results obtained thus confirm the suitability of the governance framework proposed in this chapter for studying the governance arrangements of universities. In fact, after analyzing the external coordination mechanisms and the role and influence of the four Estates in decision-making, this university (PU) can be placed in that framework, as shown in Fig. 3.

Fig. 3 The positioning of PU in the framework of governance structures in higher education

The state and Europe are the main external coordination mechanisms, with the role of the market now starting to emerge.

The positioning of PU towards the Academic Estate’s corner shows that, even though the number of external members on the main governing body has increased, the Academic Estate is clearly the dominant one.

4 Discussion and Conclusion

Through a systems view of performance management and the presentation of a new governance framework for universities, this study contributes to the research on the way performance is measured and managed in universities and to the research on the effect performance management systems might have had on the roles and influences of key actors in the governance of universities.

Data analysis reinforces the findings of authors such as Vilalta (2001), who state that there has been a substantial increase in the measurement of performance in universities over the years. In fact, the data showed that more areas are now assessed (the third mission being a clear exception), although many interviewees agreed that better measures could be put in place in some of them.

The increased level of measurement was greatly influenced by the external environment, resulting mainly from European policy, namely the Bologna Declaration, and from the state, which has published a great deal of legislation in recent years. Additionally, the role of the market, though minimal, started to show as competition for students became tougher. Therefore, it can be argued that the main pressures to measure came from Europe and from the state, with little influence from the market (see Fig. 3). Internally, a new Contract-Program, which integrated some objectives, indicators and targets, led to a different attitude towards the need to measure performance. Moreover, interviewees also expected the external members of the General Council to push further towards the introduction of control mechanisms. Thus, similarly to what van Dooren (2006) found for public services, performance management in this university has become more systematic and institutionalized.

Concerning the management of performance, many of the interviewees mentioned the lack of use of performance data, especially regarding the individual performance of both academic and non-academic staff. Thus, the closed loop between measurement and corrective action, acknowledged by Boland and Fowler (2000), does not exist. The reasons presented were mainly related to the legal framework, which was considered very protective. These findings are consistent with some of the literature on the public sector (e.g., Radnor and McGuire 2004; Hood 2006), which suggests an excessive focus on measurement with little action, and can arguably be extrapolated to other universities. In fact, if individual performance information is little used in a university that is an ‘extreme case’ (Yin 2003), one might expect it to be used even less in universities that are less entrepreneurial.

In terms of the components of the performance management system (see Fig. 1), findings indicate a concern mainly with outputs, with several areas being measured (with the exception of third mission). Data shows little preoccupation with inputs and processes; outcomes are also not measured, given the difficulties in doing so.

Given these findings, it can be stated that performance is not managed in a systemic way in our case study, as presented in Fig. 1.

In relation to governance, it was apparent that the introduction of control mechanisms led to some changes in the governance of the university, following the general trend towards the centralization of authority in institution-level governing structures (the Rector, for example, now has more power than before). These results reinforce Vilalta’s (2001) findings, which state that, with the introduction of control mechanisms, many universities started to rethink their forms of organization, governance and management.

Even though it is still too early to understand the real impact of the new structure (imposed on universities by law in 2007 and implemented in 2009), it was generally regarded by the interviewees as more efficient, given the decrease in the number of committees and in their membership. Moreover, the leadership structure was considered clearer and the participation of the outside world greater. The lighter, more centralized structure, with greater external participation, was thought to enable more strategic decision-making and provide increased strategic coherence, both considered fundamental for the introduction and functioning of a performance management system.

Although there were considerable changes in the university’s structure, essentially driven by European and national interests, the introduction of measurement and management practices also led to changes in the roles played by the Estates involved in the governance and management of the university, with the exception of the Student Estate, which barely changed its role in terms of decision-making.

Concerning the Academic Estate, the bureaucratic work demanded from academics increased considerably, and academics were increasingly expected to perform other roles (e.g., management roles). In their view, this did not necessarily lead to an increase in the quality of their teaching and research, since it left them less time to focus on these tasks. Some academics also mentioned the possibility of a decline in the ‘academic voice’ in institutional decision-making. Nevertheless, it was noted that they still have the most active voice in the university and that the ‘collegial type’ of coordination persists at this institution (shown by the positioning of PU towards the Academic Estate’s corner in Fig. 3).

Concerning the Administrative Estate, although non-academic staff were never very powerful inside the university, they are now even less represented in governing bodies.

In relation to the External Representatives Estate, the presence of external members increased significantly, even though they felt they did not participate very much in strategic decision-making.

Although it is acknowledged that the governance reforms that took place in many higher education systems—more institutional autonomy from the state, increased centralization of decision-making inside institutions, stronger leadership at the top, increased accountability and wider participation of external members—are enablers of the implementation and good functioning of performance management systems, there are still other variables to take into consideration. Two important ones seem to be the level of communication and the level of stakeholder involvement. In fact, a good level of communication between bodies, units and individuals, and the involvement of different actors in the development of such a system, will arguably overcome resistance and build trust. Trust is the most difficult piece of the performance management framework (Fig. 1) to develop but, as discussed previously, arguably a crucial one. As Thomas (2004) argued, an ideal performance management system should be embedded in the organization, stable, and widely understood and supported.

The work presented contributes to a better understanding of performance management practices in universities. It was developed in the Portuguese context and complements previous work in a British context (Melo et al. 2010). However, it is based on a single, albeit in-depth, case study. As future work, we envisage a more extensive research project, using survey methods, covering a more representative sample of European universities.