Introduction

Performance indicators

The UK Further and Higher Education Act (1992) brought with it increasing concern about how universities perform and about the quality of the teaching and services they provide. As a result, and in response to the report of the National Committee of Inquiry into Higher Education (the Dearing report, 1997), key performance indicators (KPIs) and benchmarks for the Higher Education (HE) sector were developed.

In 1999, the first formally sanctioned group of KPIs for UK universities was established by the Higher Education Funding Council for England (HEFCE 1999). At this time, KPIs focussed on five broad aspects of institutional performance: participation of under-represented groups; student progression; learning outcomes; efficiency of learning and teaching; and research output. A sixth, an employment indicator, was added later. These KPIs reflected the political policy preoccupations of the time: social equity, value for money, economic impact and international standing.

Subsequently, there has been further specification of the performance indicators. The KPIs published annually by the Higher Education Statistics Agency (HESA) since 2002/2003 have been described as ‘the most elaborated yet’ (Yorke and Longden 2005, p. 4). HESA focusses upon indicators of: widening participation (from low participation neighbourhoods, poorer socio-economic backgrounds and state sector schools); module completion (including assessment); research output (including performance in the national exercises in research assessment—the RAE—and measures of research output relative to resources consumed, e.g., numbers of PhDs educated as a proportion of academic staff costs); and employment of graduates (indexed by a survey of occupation 6 months after graduation). HESA benchmarks and normalises data across discipline types, and its database on institutional performance is unique.

However, in 2006 the Committee of University Chairmen (CUC 2006) pointed out that the choice that an institution makes concerning the KPIs on which it wishes to be evaluated will depend on its mission and objectives. Not all universities in the UK have the same ambitions with respect to widening participation, engagement with employers, or research excellence. CUC argued, not unreasonably, that the choice of KPIs should map onto mission. Given the autonomous status of UK universities and their independence from government, the right to choose appropriate KPIs, even where they might not echo public policy priorities, is treated as sacrosanct. Nevertheless, most UK universities are heavily dependent upon national government funding. Of course, schedules of funding from government, determined by policy priorities, have acted in the first decade of the twenty-first century to incentivise universities to pursue relatively similar KPIs—even when their routes to achieving them are very different and their absolute levels of success are diverse.

The audience for feedback on performance against KPIs is not simply, or even primarily, government. Prospective students and their parents want to know, and the results influence their choice of institution and course. Employers want to know, and the results influence their graduate hiring strategies. Research sponsors want to know, and the results influence where they choose to invest. Potential academic staff want to know, and the results influence where they are willing to work. The consumption of KPI feedback is wide-ranging and important for the success of a university. With the advent of university league tables compiled by UK newspapers, KPI feedback is widely available—even if there is some uncertainty, because different newspapers use different clusters of KPIs to derive their league tables. For example, the Guardian uses no information on research performance, while the others do; the Sunday Times is alone in using head teacher ratings of institutions. There are also some league tables that purport to make international rankings (e.g. the Times Higher Education Supplement and the Shanghai Jiao Tong ratings). Again, they differ in the indicators that they use. Further details of the compilation of the league tables and their impact upon the HE sector can be found in HEFCE issues paper 2008/14.

Given the importance of position in the league tables for the attractiveness of an institution (whether to potential students, staff or research sponsors, nationally and internationally), the KPIs on which they are based have become an increasing preoccupation of the HE sector in the UK. They evolve over time. The advent in 2006 of the National Student Survey (which asks all final year undergraduates to rate their satisfaction with all facets of their experience as students) provided a further crucial set of KPIs. With government standards being set for a carbon reduction commitment for institutions, a further set of KPIs focussed on environmental sustainability has emerged in the last year. Some argue that institutional planning and strategy are now driven by the KPIs (HEFCE 2008/14). Heads of institutions (referred to here as VCs for convenience) are tasked by their governing bodies with achieving against these KPIs. The governing bodies, in turn, select as VCs people who they believe will be able to deliver on these KPIs (Breakwell and Tytherleigh 2008a).

Leader characteristics

Since VCs are chosen to deliver against KPIs, the question arises as to whether institutional performance can be shown to be related in any way to the characteristics of the VC. The characteristics that can be readily and reliably examined across a large sample and over a reasonably lengthy time period tend to be relatively limited. They are primarily socio-demographic characteristics: age, gender, ethnicity, and educational and employment background. The question about the relationship between these socio-demographic characteristics and institutional performance is now particularly live following Goodall’s (2006, 2009) argument that top research universities are led by top researchers. Her data show that the heads of major research universities internationally tend to have previously had highly successful careers as academic researchers. The existence of this relationship raises the issue of causation. Are leaders chosen because their characteristics match the profile of the university? Is the profile of the university significantly determined by the characteristics of the leader? Any empirical answer to these questions would require a very large sample of heads of institution and a large sample of institutions followed across time as they changed their leaders. Finding an answer is made more difficult, at least in the UK, because VCs differ little in their socio-demographic characteristics. Breakwell and Tytherleigh (2008b) studied the socio-demographic characteristics of UK university VCs appointed between 1997 and 2006. They found that the majority of VCs were white men, appointed in their mid-fifties having worked previously in academe, from a science background, and with either undergraduate or postgraduate experience of the University of Oxford or the University of Cambridge (Oxbridge). This pattern was identical to that found for earlier decades by Bargh et al. (2000). However, at the turn of the twenty-first century VCs differed from their predecessors in that there were more women and more social scientists in post, and they were appointed at a later age than in previous decades. The great homogeneity in socio-demographic characteristics across VCs suggests that these characteristics may not be statistically significantly related to variation in institutional performance, but this is a relationship that can be tested empirically, and this study seeks to do so.

The prime demographic factor whose influence has been examined in other sectors is age of appointment to the chief executive role. The work is inconclusive. While one study of senior managers in small and medium-sized enterprises in the UK and their strategy development found a significant relationship between age and organisational performance (Karami et al. 2005), others have reported superior performance associated with younger chief executives, with level of education found to be an important factor (e.g., Norburn and Birley 1988). This study will examine the institutional KPI correlates of age of appointment and length of appointment as VC.

Institutional characteristics

Any test of the relationship between VC characteristics and institutional performance should take into account the variations across UK universities in heritage and ambitions. While all universities, except one, in the UK are public institutions, they differ massively in size, age, history, discipline mix, student recruitment profile, research quality and quantity, endowment wealth, and so on. It is possible to cluster them into many different groupings dependent upon the criterion chosen. However, the biggest and most significant distinction is one created by the Further and Higher Education Act (1992), which disestablished the binary divide between universities and polytechnics. Those universities in existence prior to 1992 are still labelled “pre-92s”, and those designated universities in 1992 (mostly erstwhile polytechnics) are labelled “post-92s”. Subsequent legislation, particularly in 2003, has allowed more institutions to win the university title; these are also referred to here as post-92s. Compared to their pre-92 counterparts, with their academic reputations often dating back decades if not hundreds of years, the majority of the post-92s evolved from educational traditions with much less emphasis on research activity. The pre- and post-92 institutions have markedly different profiles on the standard HESA KPIs. Pre-92s do better on research and completion rates. Post-92s do better on widening participation. It might be expected that the nature of the relationship between VCs’ characteristics and performance would be different in the pre- and post-92 institution clusters. This is a proposition examined in this study.

The leadership-performance debate

Examining the relationship between VCs’ characteristics, institutional performance and institutional type falls into a tradition of research that explores the significance of leaders to performance. The leadership-performance debate, put crudely, queries whether it is ‘who’ leads, or ‘where’ they lead, that matters most. It is an old debate but an ongoing one. Some contend the leader is vital to organisational success (e.g. Hall 1977; Ashley and Patel 2003; Thomas 1988; Weiner and Mahoney 1981). Others argue that, while an organisation’s progress over time may be moulded by the traits of its leader, any impact of this is constrained by situational factors (Lieberson and O’Connor 1972; Pfeffer and Salancik 1978; Hambrick and Mason 1984; Samuelson et al. 1985) and that organisations, particularly large organisations, tend to run themselves. More recently, Hannan and Freeman (1989) have described how the culture of a company, the structure of its industry and its fixed assets can limit the ability of a chief executive to take actions of any impact. In fact, it has been suggested that, rather than “does leadership matter?”, we need to ask “when does leadership matter?” (Wasserman et al. 2001).

In the context of UK universities, VCs are selected initially because they are deemed capable of delivering against KPIs. The size, culture and history, and stage of development of the university, together with situational factors (e.g. policy and economic), will determine that choice. However, the same complex influences will constrain the performance modifications any new VC can achieve. The limits to the change that chief executives can achieve in other types of organisation have been explored. For instance, in a meta-analysis, Samuelson and colleagues (Samuelson et al. 1985) found that the impact of executive leadership on organisational performance was less than the effects of environmental and organisational factors, with executive change at the top of good-sized companies appearing less crucial, and chief executive turnover alone most often not sufficient to overcome organisational inertia. There is no clear relationship between length of tenure in post, or rate of turnover in chief executives, and organisational performance (Miller 1991; Guthrie and Datta 1998). An internal appointment to the chief executive post seems beneficial if the organisation does not want turbulence (Cannella and Lubatkin 1993; Zhang and Rajagopalan 2004). External appointments tend to be associated with attempts to tackle under-performance (Helmich 1976; Pfeffer 1981), yet external appointees rarely succeed in their efforts to improve firm performance in the longer term (e.g., Greiner et al. 2002; Zhang and Rajagopalan 2004; Warner et al. 1988). Whilst outsiders may bring in new competencies and skills (Finkelstein and Hambrick 1996), they can often be seen as disruptive and, thus, any ‘enhanced cognitive repertoire may not get translated into improved firm performance’ (Grusky 1963; Greiner et al. 2002; Zhang and Rajagopalan 2004, p. 484).

Given this body of evidence, there is little reason to suppose that the appointment of a new VC will have a transformational impact upon performance in a university. However, the studies summarised above were not conducted in universities. The few studies of heads of HE organisations have not focussed on institutional performance. They have looked at the characteristics of heads in relation to their personal success (McFarlin et al. 1999), or at management style in relation to organisational climate (Sala 2003). Interestingly, Sala found that it was organisational climate that predicted college and student outcomes. Against this background, it would seem sensible to hypothesise that a change in VC will not be associated with a significant change in achievements against KPIs. The present study examines whether a change in VC is discernibly associated with a change in KPI profile.

The current study

The overall objective of the current study, therefore, was to use data available in the public domain to explore the relationships between the characteristics of universities and their leaders, and how these impact upon university performance. In particular, it examined:

  • the relationships between UK universities’ performance on the different KPIs;

  • the relationship between UK university characteristics and KPIs;

  • evidence of any significant change in KPIs associated with a change of VC; and

  • the relationship between the socio-demographic characteristics of VCs and their institutions’ KPIs, controlling for the effect of institution characteristics.

Method

Sample

Relevant robust and comprehensive performance data on universities could only be obtained for certain years. Consequently, all of the analyses reported here were carried out using data for VCs in service for at least some part of the period 1999–2004 inclusive. Due to differences in the measures of certain KPIs in Scotland (e.g., entry qualifications), Scottish universities were not included in the analyses. The final sample, therefore, comprised data for 147 VCs: 89 VCs of pre-92s and 58 VCs of post-92s (including those appointed to post-2003 institutions).

Measures

Characteristics of VCs

All data on the individual socio-demographic characteristics of VCs were obtained from the public domain and coded as follows (an illustrative coding sketch is given after the list):

  • Gender: Female [0]; Male [1];

  • Age at appointment as VC. From these data, years of experience in a VC (or similar) role up to and including 2005 were calculated and used in our analysis;

  • Academic Background: initially classified as Arts & Humanities [1]; Business Admin [2]; Law & Accountancy [3]; Medicine [4]; Science [5]; Social Science [6]; and Technology and Engineering [7]. Due to the limited sample size (N = 147), these data were grouped into two main categories for the purpose of analysis: Science related [1] (including the categories for Medicine, Science and Social Science); and Other [2] (including the categories for the Arts, Law and the more industry-focussed degrees, e.g., Engineering);

  • Place of appointment prior to appointment as VC: Pre-92 UK university [1]; Post-92 UK university [2]; and All other types of institution [3]; this latter category included Overseas universities, as well as non-academic related institutions in the UK and Overseas;

  • Source of selection immediately prior to appointment as VC: Internal to academe, or related [1]; External, non-academe [2];

  • Experience of Oxford or Cambridge as either an undergraduate or postgraduate student: No [0]; Yes [1]; and

  • Holds a Fellowship: No [0]; Yes [1]. This did not reflect type or number of fellowships held.
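
Purely for concreteness, the sketch below shows how a coding scheme of this kind might be applied in Python; the column names and example values are illustrative stand-ins and are not drawn from the actual data set.

```python
# A minimal sketch of the coding scheme above, assuming a pandas DataFrame
# whose column names and example rows are illustrative, not the real data.
import pandas as pd

vcs = pd.DataFrame({
    "gender": ["Male", "Female", "Male"],
    "academic_background": ["Medicine", "Law & Accountancy", "Science"],
    "oxbridge": ["Yes", "No", "Yes"],
    "fellowship": ["No", "Yes", "Yes"],
})

# Gender: Female [0]; Male [1]
vcs["gender_code"] = (vcs["gender"] == "Male").astype(int)

# Academic background collapsed into two categories for analysis:
# Science related [1] (Medicine, Science, Social Science); Other [2]
science_related = {"Medicine", "Science", "Social Science"}
vcs["background_code"] = vcs["academic_background"].map(
    lambda d: 1 if d in science_related else 2
)

# Binary yes/no variables: No [0]; Yes [1]
for col in ("oxbridge", "fellowship"):
    vcs[col + "_code"] = (vcs[col] == "Yes").astype(int)
```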

Whilst previous research reports level of education as an important factor in the successful performance of leaders (Norburn and Birley 1988), all of the VCs in our sample were educated to at least first degree level; 99% were also educated to postgraduate level. As such, level of education was not treated as a separate variable.

A final personal variable, salary, was included in the data set for VCs. Salary can be regarded as a proxy for marketability or value, and was treated as such here.

Institutional characteristics

  • Type of university (pre-92 vs. post-92).

  • Size of university: this was measured using student numbers during 2003/2004. The mean number of students as at 2003/2004 in pre-92s was 15,378 (SD = 8,493; range = 595–35,170), compared to 19,080 (SD = 10,472; range = 155–51,455) in post-92s. The majority of both pre- and post-92s (88.5% and 79%, respectively) had student numbers within the 1–26,000 band. As an additional indicator of size, the average number of staff employed by each university during 2003/2004 was used;

  • Year achieved university status was treated as another aspect of institutional character;

  • Whether or not there had been a change of university leader during the period of performance. Our analysis showed that 93 changes of university leadership had taken place during the period 2000–2004 inclusive: 61 in pre-92s and 32 in post-92s.

Performance data (KPIs)

The KPIs used were nationally recorded, publicly available and applicable to all types of institution.

  • Mean total income (MTI) over a 3 year period (i.e., total income from funding council grants, tuition fees and education grants and contracts received for: 2002/2003; 2003/2004 and 2004/2005)—it is recognised that mean surplus might have been a better KPI, but this metric is not comprehensively publicly available, and growth in total income can be regarded as an indicator of institutional success;

  • Mean research income (MRI) over a 4 year period (i.e., a 3 year rolling figure for total income from grants and contracts received for: 2000/2001; 2001/2002; 2002/2003; 2003/2004);

  • RAE scores for 1996 and 2001 (i.e., Research Assessment Exercise scores awarded to universities overall). Note: not all institutions have RAE scores for both 1996 and 2001, due to changes in their status between the two exercises;

  • Mean entry standard (MES) over a 4 year period (i.e., average A and AS-level or Scottish Highers point scores on entry of first-year, first-degree students under 21, for: 2000/2001; 2001/2002; 2002/2003; and 2003/2004);

  • Mean completion rates (MCR) over a 3 year period (i.e., percentage of full-time first-degree starters projected to complete their degrees on time or to transfer to another institution in: 2000/2001; 2001/2002; and 2002/2003);

  • TQA 2005 scores (i.e., average of the teaching quality assessment subject review scores, excluding very early English scores and the corresponding subjects from Wales).

Having a time series for each measure (except TQA) enabled us to explore any changes in performance over time.

To avoid any potential skewing of our results, all ‘extreme’ performance values were excluded from our analyses; across the KPIs, this ranged from 2 to 10 values. Whilst this still left us with several outliers, these were retained in the analysis because of the impact deleting them would have had on our relatively limited sample size. Hence, this factor needs to be taken into consideration when interpreting our results.
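
The rule used to flag ‘extreme’ values is not specified above; purely as an illustration, the sketch below assumes a simple cut at three standard deviations from the mean.

```python
# Illustrative only: the criterion actually used to flag 'extreme' KPI
# values is not stated, so this sketch assumes a 3-standard-deviation cut.
import numpy as np

def drop_extremes(values, z_cut=3.0):
    """Return only the values within z_cut standard deviations of the mean."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std(ddof=1)
    return values[np.abs(z) <= z_cut]
```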

Analysis

Table 1 summarises VC characteristics by university type. There are similarities across pre- and post-92s. In particular, in both pre- and post-92s, similar numbers of VCs were appointed from within academe; at the time of our analyses, VCs had on average just over 8.5 years of cumulative experience as VC; most VCs had been appointed to their first leadership position in their early fifties; and most came from a Science (or related) academic background. There were, however, also some differences. Pre-92s were twice as likely as post-92s to have recruited from outside academe and, whereas the majority of pre-92 VCs had been appointed from pre-92 universities, the majority of post-92 VCs had been appointed from post-92s. Pre-92s had also appointed VCs at older ages, and more of their VCs had experience of Oxbridge.

Table 1 Personal characteristics of sample

The aim of the first stage of our analysis was to explore differences in the KPIs that could be explained by organisational factors. To investigate differences by type (pre- vs. post-92) and size of university, a series of Mann-Whitney U tests (for non-parametric data) was carried out. Only the significant results are summarised in Table 2. Apart from MTI, pre-92s were characterised by significantly higher levels of performance on each KPI. A difference according to university size (i.e. across two class intervals of student numbers [1–26,000 vs. 26,001–52,000]) was significant only for MTI. In this instance, and as could reasonably be predicted given the significance of student fees to income, those universities reporting the largest student numbers also reported the largest MTIs.
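
For readers wishing to reproduce this kind of comparison, a minimal sketch using scipy is given below; the input arrays (one vector of KPI values per group) are illustrative placeholders rather than our data.

```python
# Minimal sketch of a pre-92 vs. post-92 comparison on one KPI using the
# Mann-Whitney U test; the input arrays are placeholders, not our data.
from scipy.stats import mannwhitneyu

def compare_university_types(pre92_kpi, post92_kpi):
    """Two-sided Mann-Whitney U test between the two groups' KPI values."""
    stat, p = mannwhitneyu(pre92_kpi, post92_kpi, alternative="two-sided")
    return stat, p
```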

Table 2 Showing significant differences in university performance by type and size of university; and whether there had been a change of leader or not

A further analysis was conducted to determine whether a change of VC was associated with notable changes in those KPIs for which time series data were available. Using indices of change in each of these KPIs (MRI, MES, MTI and MCR) over the time series available, a comparison between institutions that had experienced a change of VC and those that had not showed no statistically significant differences on any index.
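
The construction of the change indices is not detailed here; one plausible construction, used purely for illustration in the sketch below, is the difference between the last and the first year of each KPI’s time series, compared across the two groups of institutions.

```python
# Hypothetical construction of a change index: last year of the KPI time
# series minus the first. The group comparison mirrors the test reported
# above; all function names and inputs are illustrative assumptions.
import numpy as np
from scipy.stats import mannwhitneyu

def change_index(series):
    """Change in a KPI over its available time series (assumed definition)."""
    series = np.asarray(series, dtype=float)
    return series[-1] - series[0]

def test_vc_change_effect(indices_changed_vc, indices_no_change):
    """Compare change indices between institutions with and without a VC change."""
    return mannwhitneyu(indices_changed_vc, indices_no_change,
                        alternative="two-sided")
```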

To explore further the relationship of KPIs to each other and to the institutions’ characteristics, a series of Spearman’s rho correlations (for non-parametric data) was carried out. Only the significant results are shown in Table 3. One of the most striking findings is the high level of inter-correlation among the KPIs. Indeed, apart from MTI and TQA05 scores, all KPIs were significantly related to each other. The highly significant relationship between the RAE96 and RAE01 results, while predictable, is also interesting. It suggests that the efforts institutions made to change their relative positions in the RAE rankings were not particularly effective.
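
A sketch of how such an inter-correlation matrix might be computed is given below; the KPI values shown are dummy numbers for four hypothetical institutions.

```python
# Sketch of the KPI inter-correlation analysis with Spearman's rho.
# The data frame holds one row per institution and one (dummy-valued)
# column per KPI; scipy returns matrices of rho and p-values.
import pandas as pd
from scipy.stats import spearmanr

kpis = pd.DataFrame({
    "MRI": [12.1, 3.4, 0.8, 7.6],
    "MES": [420.0, 310.0, 240.0, 365.0],
    "MCR": [92.0, 84.0, 77.0, 88.0],
})
rho, p = spearmanr(kpis, nan_policy="omit")  # both are 3x3 matrices here
```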

Table 3 Showing relationships between organisational factors and each measure of university performance

It is now a dominant rhetoric in the HE sector that each university will need to find a differentiated position in the new market place and that this desired position will determine which KPIs it emphasises. For example, in the near future the percentage of overseas students may become a prime KPI for some, yet for others it may be part-time student numbers. The strong inter-correlations among many of the KPIs suggest, however, that success in one will largely depend on success in others. This may make efforts at change less effective if they are mounted with single targets (KPIs) in mind. This, in turn, suggests that achieving successful differentiation across institutions will be harder than might be imagined.

Table 3 also reiterates the impact of university type (pre- vs. post-92) on KPIs. Given the history of the universities, the direction of these results (with pre-92s showing higher performance on the more research-related KPIs) is not surprising. The significant positive relationship between MTI and size of university could also have been predicted (having more students generates more income). What is interesting, however, is the lack of any significant relationship between university size (as measured by student numbers 2003/2004) and either MES (r = −.070; df = 112; p = .231) or MCR (r = −.121; df = 121; p = .094). This suggests that the size of the student population and quality (as indexed by entry qualifications or completion rates) are unrelated. However, we also looked at the relationship between each of the KPIs and the mean number of staff working in each university at that time. Numbers of staff were significantly positively related to all KPIs.

As the longevity of a university can also impact upon its performance levels, the relationship between the KPIs and the length of time that an institution had held university status was explored. In Table 3, “year received chartership” is used to refer to length of time with university status (of course, not all universities are chartered in the sense of having a Royal Charter). As shown in Table 3, length of time as a university was significantly and negatively related to all KPIs apart from MTI, with the older universities having better performance.

Moving now to the relationship between VC characteristics and KPI profile—the VC characteristics examined were:

  • Age of first appointment as VC/CE (including former equivalent roles);

  • Experience of Oxbridge;

  • Fellowship or Not;

  • Place of appointment prior to becoming VC (referred to as Former Place in the table)—whether the previous role was in a pre-92, a post-92, or another type of institution; this last category included overseas HEIs, as well as non-HE types of organisation; and

  • Academic discipline background.

Table 4 shows the statistically significant relationships only.

Table 4 Showing relationships between socio-demographic characteristics of VCs and university performance

Gender effects were not examined since the small number of female VCs in the sample made the statistical comparisons inappropriate.

There are no statistically significant relationships between the VCs’ academic background and the KPIs except with regard to MRI. A one-way analysis of variance shows an effect (F(5, 115) = 3.47, p = 0.006) that can be attributed to the difference between VCs with a social science background and those with a medicine background.
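
A minimal sketch of such an analysis of variance is given below; the grouping of MRI values by background category is an illustrative structure, not the actual data.

```python
# Sketch of a one-way ANOVA of MRI across academic background categories;
# f_oneway takes one array of MRI values per background group.
from scipy.stats import f_oneway

def anova_mri_by_background(mri_by_group):
    """mri_by_group: dict mapping background label -> array of MRI values."""
    f_stat, p = f_oneway(*mri_by_group.values())
    return f_stat, p
```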

As shown, the most significant relationships are those relating to “former place” (i.e. whether the VC was appointed from a former role in a pre-92, post-92, or ‘other’ type of institution). Table 5 shows that the majority of appointments made to pre-92s were of those previously working in pre-92s (67%). In contrast, the majority of appointments made to post-92s were of those previously working in post-92s (63%). It would seem that the apparent relationship between former employment and the KPI performance of the current institution is a product of the differential recruitment practices of the pre- and post-92 institutions. However, to test this, a further set of analyses was conducted. These explored whether VCs who moved from a pre-92 institution to a vice-chancellorship in a post-92 institution were associated with better KPI performance than those who moved from post-92 to post-92 universities. A series of independent t-tests was carried out using appointments of VCs made from pre- and post-92s only.

Table 5 Showing direction of appointments made to pre- and post-92 universities

For appointments made to pre-92s, MRI was significantly higher (t(17.023) = 5.292, p ≤ .0001), MES was significantly higher (t(39.73) = 13.05, p ≤ .0001), and MCR was significantly higher (t(48) = 4.01, p ≤ .0001) in universities led by VCs appointed from pre-92s. There was, however, no significant difference in MTI (t(54) = 1.097, p = .278). For appointments made to post-92s, there were no significant differences on any of the four KPIs between VCs from post-92 and pre-92 institutions. However, given the small number who switch from a post-92 to a pre-92 institution to become VC, the significant effects reported here for pre-92s may not be reliable and should be treated with caution.
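
The fractional degrees of freedom reported (e.g., t(17.023)) suggest that a Welch correction for unequal variances was applied; the sketch below shows the equivalent test in scipy, with placeholder inputs.

```python
# Welch's t-test (unequal variances), matching the fractional df reported.
# Inputs are placeholder arrays of one KPI's values, split by whether the
# institution's VC was appointed from a pre-92 or a post-92 university.
from scipy.stats import ttest_ind

def compare_by_former_place(kpi_vc_from_pre92, kpi_vc_from_post92):
    return ttest_ind(kpi_vc_from_pre92, kpi_vc_from_post92, equal_var=False)
```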

The socio-demographic characteristics of VCs used here do not seem to be strongly related to institutional performance—except in so far as they are mediated by university type; certain characteristics are associated with the VCs of pre-92s and of post-92s, and these are then moderately linked to institutional performance because pre- and post-92s do perform differently on the KPIs used in this study.

One final analysis was conducted to examine the relationship between VC salary and institutional performance. VCs’ salaries at 2005/2006 were highly significantly and positively correlated (at p < .0001) with all of the KPIs used in this study apart from TQA 2005 scores.

Discussion

The aim of this study was to examine whether HE institutional performance can be shown to be related to the characteristics of the VC. We explored the relationship of several socio-demographic characteristics, recently identified as being consistent amongst university leaders in the UK (see Breakwell and Tytherleigh 2008a), to several objective measures of university performance. In line with observations made by previous researchers (Thomas 1988), we also gave due consideration to the potential impact of non-leadership factors, including type and size of university, and other associated measures of performance. Despite identifying some significant relationships between experience of Oxbridge and MTI and MES, and between having held a fellowship and MRI, the evidence for the importance of VC characteristics for institutional performance was limited. Rather, our results go some way to support the ‘small man’ notion (Mintzberg 1979) that, whilst the performance of a university may be ‘moulded’ by the characteristics of its leader, most of the variability in university performance is explained by non-leadership factors. Relating this to Goodall’s (2006, 2009) observation that better research universities tend to be led by better researchers, and our own finding that pre-92 universities tend to appoint their VCs from pre-92 universities, perhaps where we do see a relationship between institutional performance and the characteristics of the VC we are actually seeing the effects of highly selective recruitment processes. Institutions seek out individuals to become their VC who match the aspirations and identity of the institution. Of course, it could also be argued that individuals are only attracted to apply to become VC at institutions that match their own aspirations and identity.

Institution type is most clearly associated with KPI profile. The clear relationship between length of time in existence as a university and the KPIs suggests that it takes time to achieve success on these criteria (e.g., minimally 7 years for the last RAE). The tenure of a VC, in most cases, will be far too short (the average is 8 years) for any significant change brought about by the VC to become clearly visible. Our analysis of the correlates of a change of VC suggests limited impact; on none of the KPIs was there a significant difference in change between institutions that had a change of VC and those that did not. This analysis was not capable of tracking levels of change within an institution and then tying them back to the point at which the VC changed. It could, consequently, be underestimating the real impact of VC-change. Further studies, tracking within-institution change over time, are needed to establish that impact.

Having earlier acknowledged that there are several limitations to our data, we would nevertheless emphasise that we did not expect the socio-demographic characteristics used in our analysis to be realistic predictors of university performance. Rather, our main interest was to identify any empirical justification for the socio-demographic characteristics which those responsible for searching for and appointing university leaders appear to have favoured over the past 10 years. As described by Lieberson and O’Connor (1972, p. 128), ‘since leaders are not selected randomly from the universe of all employees, the low leadership effects on [performance] could reflect a company’s uniformly consistent judgements in leader selection’. Whilst we found no empirical justification, in terms of prediction of performance on the KPIs in our study, for the consistently selected characteristics of VCs, we would continue to emphasise the importance of understanding the individual leadership characteristics which can help to ‘mould’ better performing universities. In this respect, therefore, one of the main areas requiring further investigation is the type of personal characteristics we should be focussing on (e.g. experience or leadership styles). It would also be useful to know which measures of university performance are most related to the successful performance of a university’s leader. Again, as described by Lieberson and O’Connor, ‘companies may have other performance variables more greatly affected by administration and virtually unaffected by environmental constraints’ (p. 128).

We were limited in the socio-demographic characteristics and performance measures we used by their availability in the public domain. We may not have chosen the KPIs best suited to illustrating the potential relationship with VCs’ characteristics. Also, whilst the KPIs in our analysis have been described as relevant to all types of institution (CUC 2006), different institutions may rate them differently. For example, in contrast to a more research-orientated university, a university with a strong focus on student equality of access would be more interested in KPIs relating its success to ‘securing engagement, retention and progression by students without traditional academic qualifications’ (CUC 2006, p. 4). While we used the pre- and post-92 distinction as a proxy for differences in mission, it is a limited reflection of the diversity that exists in the HE sector. It is necessary to acknowledge that very different values are attached to the KPIs used in this study across the sector. Not performing well on some of them would not necessarily be considered a problem for some institutions. The telling finding here, however, is the way the KPIs cohere and the way some institutions do well across the piece.

In conclusion, this study has shown that there is a limited relationship between the socio-demographic characteristics of the leaders of higher educational institutions and the performance of those institutions. This may be unsurprising given the considerable homogeneity in the socio-demographic profile of VCs. There is a prototype in terms of these characteristics which is sought when VCs are appointed. This means that, if the VC is making a difference to institutional performance, it is likely to be as a result of other types of characteristic—for instance, personality or interpersonal skills. The other key conclusion from the study must be that the key performance indicators used in higher education are highly correlated. This has very significant implications for those who would develop strategies to succeed in only one or two performance domains in order to differentiate themselves in the market from other institutions. Successful differentiation reliant upon a single dimension of high quality performance would seem unlikely to be achievable in the current policy environment. If the policy environment were to change, and universities were rewarded for clear differentiation, the pattern of KPI correlation might evolve.