Introduction

The process of generating and securing quality in a study programme is often the responsibility of the study programme leader. This is an important function – encompassing an integration of administrative and academic dimensions within higher education institutions – which is often overlooked. The function of the study programme leader sits at the intersection of organizational and pedagogical tasks in the programme and can thus be seen as a mediating one in much of the quality work going on in the programme. As quality in education is ultimately delivered at study programme level, there is a need for ‘quality work’ involving a range of activities at the local and departmental level: curriculum development, staff qualifications, and the organization of teaching and learning as well as resources and infrastructure (Bollaert 2014). There has also been an increasing emphasis on educational leadership as key to quality development (Gibbs et al. 2009).

During the last decades, all European countries have developed national external quality assurance (EQA) systems (Dill and Beerkens 2010). The systems vary in their relative emphasis on quality control versus quality development, in how they are organised, and in how they operate. In addition, most universities have established their own internal quality assessment systems in response to, or even at the demand of, the national systems (Pratasavitskaya and Stensaker 2010; Manatos et al. 2017). Most EQA systems have been developed with the aim of having an effect on teaching and learning. While there is general evidence that EQA may indeed have impacts within higher education institutions (Dill and Beerkens 2010; Stensaker et al. 2011), we still have limited knowledge of the relationship between different forms of EQA and how teaching and learning are designed and organised at study programme level. This is an important issue, as more knowledge on the impact of EQA could contribute to better designed EQA systems and, not least, improved quality of teaching and learning. It is also important because there are repeated accusations that poorly designed, or overly accountability-oriented, EQA systems may have severely damaging effects on the ‘quality work’ conducted at institutional level (Brennan and Shah 2000; Burnes et al. 2014). Hence, in the current chapter we analyse the relationship between national quality assessment systems and the leadership and organisation of quality work within universities.

To investigate this issue, there is a need for comparative research designs that allow us to control for national differences in EQA systems while keeping a range of other national characteristics as similar as possible. In the current chapter we draw on data from a survey directed at study programme leaders (programme level) in Danish and Norwegian universities. Denmark and Norway are two countries that historically have had quite similar university systems with respect to their governance, internal organization and academic culture. However, over the last two decades the two countries have developed quite different EQA systems (Kalpazidou Schmidt 2017), which creates a quasi-experimental setting in which our research interest can be explored.

The ‘Quality Work’ Conducted at Institutional Level and the Larger Environment

EQA is one of the most visible outcomes of the Bologna process, and a central tool in pursuing one of its key aims, i.e. harmonization. However, the European Standards and Guidelines specifying how EQA is to be conducted leave room for EQA systems designed in very different ways. Hence, as governmental tools, EQA may cater for different purposes and be an instrument for solving various issues (Dill and Beerkens 2010). EQA systems can, for example, aim at stimulating institutional autonomy by being designed to enhance and assess how higher education institutions take responsibility for developing their internal quality assurance systems (Pratasavitskaya and Stensaker 2010; Bollaert 2014). In this design, EQA is intended to have an indirect effect by developing the institutional governance and management systems, which in turn are expected to have a positive impact on educational delivery. But EQA systems can also be an instrument for more direct inspection of the quality of teaching and learning, for example by systematically scrutinizing and evaluating the educational delivery at programme level without paying much attention to whether the higher education institutions have well-functioning internal quality management systems (Brennan and Shah 2000). Hence, it can be imagined that ‘indirect’ and ‘direct’ EQA play out very differently at programme level within higher education institutions, and that they stimulate different ways of working with quality.

Indirect EQA will most likely focus on the institutional level and concentrate on the governance and management of whatever system institutions may have developed (Manatos et al. 2017). For institutions, what matters is then to organize this institutional responsibility effectively, to collect and analyse data so that educational delivery can be compared, and to have accountability systems in which information can flow seamlessly throughout the organization (Bryman 2007). An implication is that these systems contain several administrative elements emphasizing the existence of institutional systems, routines and reporting.

Direct EQA, on the other hand, can be expected to focus on the study programme level, and on how the educational delivery results from specified learning outcomes and subsequent teaching and learning activities that are linked to adequate evaluation and examination. Direct EQA scrutinizes study programmes and their internal coherence (Biggs and Tang 2011), as well as the leadership and routines associated with this activity. For institutions exposed to an EQA design of this type, what matters is not quality management systems as such, but the convincing delivery of teaching and learning (Mårtensson et al. 2014). A possible implication is that those having leadership responsibility at this level need to be practical problem solvers, balancing different expectations and interests (Stensaker et al. 2018). Furthermore, as direct EQA is concerned with the quality of the programme, one might expect the ‘quality work’ conducted to have a consistent academic focus emphasizing programme content and cohesion.

As EQA has matured in numerous countries, there is also the possibility that the differences between forms of EQA are blurring, not least as new elements are included in the EQA portfolio. For example, recent research has suggested that the mandatory introduction of learning outcomes in higher education may reduce the potential gap between indirect, more administratively oriented EQA designs and more direct, academically oriented ones (Aamodt et al. 2017), with the potential of reinventing collegiality and collective responsibilities (Burnes et al. 2014). Governmental traditions and reforms may also affect the ways in which EQA is translated into higher education institutions – both by enhancing and by hindering implementation (Møthe et al. 2015; Irving 2015).

Empirical Context, Data and Methods

Country Description and Case Selection

The current chapter uses data from a comparative survey directed at study programme leaders in the university sector in the two countries. As cases, Denmark and Norway are particularly well suited for investigating the potential relationship between EQA and the ‘quality work’ conducted at study programme level.

Denmark and Norway are very similar countries with respect to their higher education systems, and the Norwegian system historically developed as a direct result of the Danish-Norwegian Union in the early 1800s. As such, the university traditions and internal organization of higher education institutions share many values and norms, which have been reinforced over the years by the Scandinavian welfare tradition of tuition-free higher education and a relatively high level of public funding of the higher education sector. Hence, compared to most other countries, the university sectors in Norway and Denmark, and the ways higher education institutions in the two countries operate, are quite alike.

Still, some differences do exist between Denmark and Norway, not least with respect to EQA (Kalpazidou Schmidt 2017). Denmark was one of the innovators of EQA in Europe and launched a national system as early as the beginning of the 1990s. At that time, and for many years afterwards, direct EQA was the dominant approach: external assessments scrutinised all higher education programmes offered by Danish universities, with a strong focus on the relevance of the programmes for the Danish labour market (Thune 1996). Norway started with systematic EQA later than Denmark, and it was only in the early 2000s that a national accreditation system was in place. However, contrary to Denmark, the Norwegian approach to EQA was the indirect one, where quality assurance was seen as an instrument for stimulating institutions to manage and take responsibility for their increasing autonomy (Haakstad 2001). As such, in Norwegian universities no individual study programme had to undergo external assessment, as the EQA system only examined whether the institutional quality management systems existed and were well-functioning.

Since the initial start-up of EQA in Denmark and Norway, both higher education systems have undergone further reforms, including attempts to further increase institutional autonomy, the build-up of a more dominant hierarchical governance model within universities, and quasi-voluntary mergers, together with changes in the funding systems (Kalpazidou Schmidt 2012). In general, these reforms are quite similar to reforms in many other European countries. However, the reforms can be said to have been more radical in Denmark than in Norway. While the Danish universities were established as self-owning institutions with a contract-based relationship with the government, the Norwegian universities continued as state-owned institutions, although with special privileges. Norway has also kept more of the collegial steering system within institutions than is the case in Denmark.

As indicated above, both the Danish and the Norwegian higher education systems have in the last decade been exposed to several merger processes that have changed the institutional landscape fundamentally. In both countries, the main driver behind these mergers was to create larger, more robust institutions. These changes have so far affected the college sector in Denmark to a limited degree, while in Norway several former university colleges have been upgraded to university status. Hence, while the university sector is quite similar in the two countries, the college sector is not. For this reason, the current study only includes universities as cases.

Data and Methodology

The data are based on two surveys collected in Denmark and Norway among those leading study programmes, hereafter labelled study programme leaders. The Norwegian data was collected from December 2015 to March 2016, and the Danish data from September to October 2016. The questionnaires used in the two countries are practically identical, with a few adaptations due to differences in the names of positions and in the internal organisation of institutions in the two countries. A major challenge in the data collection in both countries was to identify the population, since study programme leader is not a formal position at all institutions. We asked the study directors or equivalent administrative units to submit names and e-mail addresses, and in addition study programme leaders were identified through the universities’ home pages.

In Norway, the target group consisted of 1010 people, of whom 551, or 54.6%, responded. In Denmark, 496 questionnaires were distributed, 24 were excluded since the recipients did not function as study programme leaders, and 220 of the remaining 472, or 46.6%, responded. The survey in both countries only covered study programme leaders recruited among the scientific staff, not those recruited among the administrative staff at the institution.

In Norway, data was collected at all types of public higher education institutions, and in Denmark at universities and university colleges. Our analyses are limited to the university sector, which is quite similar in the two countries, leaving out the Danish colleges and the Norwegian university colleges. The Norwegian data presented furthermore cover only the “old” universities (Oslo, Bergen, NTNU and Tromsø), while the “new” universities and the specialised universities are left out.

The survey was mainly explorative, aimed at uncovering the potential roles and responsibilities of study programme leaders. Below, we present findings on how the study programme leaders perceive and assess their own work, and how important they perceive a number of qualities, aspects and characteristics of the study programme they are in charge of to be. We have run t-tests to test for significant country differences.
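As an illustration of the kind of comparison involved, the sketch below shows how a country difference on a single survey item could be tested. It is a minimal sketch only: the variable names and response values are hypothetical, not drawn from the actual dataset, and the original analysis may have used different software and test settings.

```python
# Minimal sketch of a country comparison on one survey item, assuming
# responses are coded numerically (e.g. 1 = "not at all" ... 4 = "to a
# great extent"). All values below are hypothetical illustrations.
from scipy import stats

denmark = [4, 3, 4, 2, 3, 4, 3, 4]  # hypothetical Danish responses
norway = [3, 2, 3, 3, 2, 3, 2, 3]   # hypothetical Norwegian responses

# Independent-samples t-test for a mean difference between the two
# countries; Welch's variant (equal_var=False) does not assume equal
# group variances.
t_stat, p_value = stats.ttest_ind(denmark, norway, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # significant if p < 0.05
```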

Findings

In general, one could expect that formal titles and formalised roles related to managerial responsibilities at study programme level could be an effect of both indirect and direct forms of EQA, although such formalisation might be more expected in direct EQA systems, as these are more likely to be embedded in national standards regarding study programme organization and delivery. In indirect EQA systems, national standardisation is perhaps less likely, as such systems are often intended to stimulate institutional autonomy. A consequence is that institutions are then more likely to create internal quality management systems tailored to institutional traditions and strategic objectives.

As Table 3.1 indicates, there are distinct differences between Denmark and Norway regarding the formal titles of those having study programme management responsibilities. The most striking difference is that a much wider range of titles is used in Norway than in Denmark. While formal titles do not necessarily indicate real differences in roles and responsibilities, they do reflect the degree of standardisation of the formal organization at this level. As such, the Norwegian institutions demonstrate a striking lack of standardisation, with differences identified both between and within institutions. This finding is in line with what we might expect from indirect and direct EQA approaches, although it should also be underlined that the variety within Norwegian universities indicates a lack of distinct institution-wide quality assurance systems.

Table 3.1 Titles used for study programme leaders. Percent

Graversen et al. (2017) and Aamodt et al. (2016) also point to other country differences in the roles of the study programme leaders. The Danish study programme leaders state more often than their Norwegian counterparts that they have a specific work description, and specific tasks, responsibilities and reporting demands in their position. The differences are not very large but they are systematic, indicating that the role of study programme leader is more formalised in Denmark than in Norway. Furthermore, the Danish study programme leaders are slightly more experienced than their Norwegian colleagues (Graversen et al. 2017, fig. 4.1; Aamodt et al. 2016, fig. 2.1). This may be because the position of study programme leader was introduced earlier in Denmark, or because study programme leaders there keep the position for a longer time. Based on this longer experience, one may conclude that the study programme leader role is more mature in Denmark.

The role of study programme leader is conducted within a specific institutional and political context, and in interaction with several scientific bodies (boards or committees). Some of these bodies have an advisory role; others have decision-making functions regarding the establishment or modification of study programmes. Our main impression is that the role of the study programme leaders in both countries has a somewhat weak formal administrative anchoring, but this is considerably more visible in Norway than in Denmark.

The relatively weak anchoring of the study programme leader role in both countries should not necessarily be interpreted as meaning that the role is unimportant or dispensable. On the contrary: study programme leaders communicate and collaborate with several institutional bodies and persons, both among the scientific staff and in the administration (Graversen et al. 2017, table 6.6; Aamodt et al. 2016, table 4.7). It should also be added that the study programme leaders are usually very experienced and hold a high academic rank, primarily as professors or associate professors (Graversen et al. 2017, table 4.4; Aamodt et al. 2016, table 2.4).

Study programme leaders have a range of tasks, which imply contact with several different stakeholders (Table 3.2). The tasks reported most frequently in both countries were “having contact with the study administration” and “securing good quality in the study programme”. “Changing the composition of subjects” and “reporting of results” also occur frequently, but these are considerably more common in Denmark than in Norway. The Danish leaders also have contact with students more often. Study programme leaders seem to have a limited responsibility for changing the content of subjects; this is mainly the responsibility of the academic staff. In conclusion, the general picture is that Danish study programme leaders report having a broader set of responsibilities than their Norwegian colleagues.

Table 3.2 Tasks and responsibilities of the study programme leaders. Percent who responded “to a great extent”

While formal tasks and responsibilities are indications of more formal roles, we also asked the study programme leaders to report on their degree of autonomy to make decisions on various matters concerning the programme. Between 10 and 32% of the respondents answered “to a great extent”, which suggests that being “free to make decisions” is quite a strong expression. Therefore, the percentages displayed in Table 3.3 also include ‘to some extent’. Table 3.3 shows that there are still some differences between Denmark and Norway, with a slight tendency for the Danish study programme leaders – in general – to have somewhat more freedom to make decisions on teaching and assessment in the programme, while their Norwegian colleagues have slightly more influence on staffing decisions. These differences are not significant, but they show a coherent pattern. This pattern indicates that indirect EQA systems such as the Norwegian one might have a more administrative focus (subject and learning outcome descriptions, staffing, etc.), while direct EQA systems, of which the Danish one is an example, tilt towards more academic issues (such as types of teaching and types of assessment).

Table 3.3 To what extent are you free to make decisions on the following matters? (Percent responding “to a great extent” or “to some extent”)

The survey also included questions concerning quality assurance and how issues related to quality assurance compare to other pressing issues handled by study programme leaders. A key dimension here is what study programme leaders think is the most important input for quality development, and what they think are the most important aims and measures of quality development.

In Table 3.4, the responses by study programme leaders in Denmark and Norway show several similarities. In both countries, knowledge development in the field is perceived as one of the most important input factors for quality development of the study programmes, but student feedback and evaluations seem equally important. As student evaluations have become a mandatory and integrated part of quality assurance regardless of whether EQA is indirect or direct, this finding is perhaps not so surprising, although it may be more difficult to interpret: attention to student evaluation may, on the one hand, reflect an increasing emphasis on teaching quality, while on the other it can reflect a drift towards consumer orientation in higher education.

Table 3.4 Input for quality development

The responses from Denmark and Norway do, however, show two interesting and significant differences. Signals from the labour market are considered a much more important input in Denmark, which can be seen as a direct reflection of the quite long tradition of linking EQA and relevance issues in Denmark.

The institutional quality system is considered more important for quality development in Denmark, contrary to what we expected given the long tradition of evaluating single programmes in this country. This finding may relate to the fact that the EQA system in Denmark was at the start of a transformation when the survey was conducted – a transformation from a direct EQA system towards a more indirect one (more like the Norwegian system). As such, institutional quality management systems were in a development phase at the time, which may well have affected this particular response, giving it a higher rating than it would have received had the survey been run at a different point in time.

Turning to what measures study programme leaders perceive as most important for improving the quality of their study programme (Table 3.5), some of the previous differences between the two countries become visible once again. In general, the Danish study programme leaders put a significantly stronger emphasis on practically all the statements related to aims and measures for quality development than their Norwegian colleagues did. This difference should not be read as a stronger need for change due to more dissatisfaction with programme quality in Denmark, but rather as reflecting our initial expectation that direct EQA systems make leaders at programme level more accountable than a more indirect EQA approach would. The very high percentages of Danish study programme leaders who prioritise programme coherence and integration, focus on reducing drop-out rates, and are interested in strengthening the links to the labour market are all indications of ‘quality work’ in which the external political agenda, e.g. the funding and financing of education programmes, is important. Here, the direct EQA system that has operated in Denmark, with its strong focus on the efficiency and relevance of the programmes for the labour market, may have produced this focused attention and consciousness among Danish study programme leaders about what to prioritise.

Table 3.5 Aims and measures for quality development

According to the study programme leaders in both countries, the most important aim for quality development is to strengthen coherence in study programmes, even though they are quite satisfied with the coherence of their own programmes. This apparent contradiction may indicate that, in both countries, continued work to improve coherence in study programmes is regarded as an important quality development measure – regardless of the EQA system.

It is somewhat surprising that the emphasis on reducing dropout is stronger in Denmark than in Norway. Dropout could be said to be an equally important problem in Norway, at least according to OECD statistics (OECD 2016), and the issue is high on the political agenda there as well. It is, however, possible that these differences between the two countries do not primarily reflect the severity of the problem itself, but rather political attention and how this attention is passed on to the study programme leaders through the QA system, and through the funding system in particular. The fact that the funding system in Denmark is based on a ‘taximeter’ logic, making drop-out and completion rates important measures, may very well have influenced this strong focus on reducing drop-outs.

Discussion and Final Reflections

In this chapter we have argued that study programme leaders are important in the process of ‘doing quality work’ at the institution – not least as the ones responsible for linking together administrative and academic issues in programme delivery. We have further argued that the ways in which EQA systems are designed affect how ‘quality work’ is conducted at institutional level, i.e. the formalization of, and tasks associated with, study programme leadership, and how these leaders prioritize among the many issues that may positively or negatively affect quality in teaching and learning. Our main expectations were that indirect EQA approaches would result in more ‘administratively’ oriented study programme leaders emphasizing accountability and reporting, while direct EQA approaches would trigger more ‘academically’ oriented study programme leaders emphasizing programme content, coherence and educational delivery.

While the data certainly provide indications of ‘academically’ oriented study programme leaders in Denmark, this does not mean that these issues were not on the agenda of their Norwegian counterparts. However, the Danish study programme leaders have more standardized titles and job descriptions, as well as more autonomy to make changes within the study programmes – regarding both administrative and academic issues. This finding may also be related to the more established Danish EQA system, and the possibility that longer experience has triggered more professionalization at institutional level. Another indication of the possible impact of a direct EQA approach is the noticeable attention paid to issues such as drop-out and labour market needs in Denmark, although one could argue that political attention to these issues has increased in Norway as well. At the same time, a few of the findings are not fully in line with our expectations regarding the influence of direct versus indirect EQA systems, as the anticipated ‘administrative’ focus in Norway was less visible in some cases. As such, one could also argue that we should be careful not to exaggerate the differences between the two countries, as study programme leaders in both countries share many tasks and responsibilities, and different forms of coordination, of both an administrative and an academic nature, are central in both institutional settings.

We started this chapter by pointing out that EQA is a governmental tool which can be used in various ways, not least to strengthen institutional autonomy (indirect EQA) or to provide external accountability to society (direct EQA) (Dill and Beerkens 2010). The Danish approach clearly has a strong accountability function built into the system, as demonstrated by the emphasis on relevance (labour market focus) and efficiency (drop-out). The Norwegian approach shows fewer signs of the need for accountability, although the institutional autonomy dimension is also rather invisible as perceived by the study programme leaders. The fact that the formal titles of those having managerial responsibility for study programmes are highly diverse at the Norwegian universities is another indication of weakly developed ‘quality management’. Why is this so? One possible explanation is that the two EQA approaches are related to other reforms in Denmark and Norway as well. For example, the EQA system in Denmark has a much longer history and influence than the Norwegian one, and the radical governance reforms in Denmark were initiated in isolation from the existing EQA system. In Norway, one could argue that the EQA system was part of a changing relationship between the state and higher education institutions, where this dimension has become more important than the internal quality management systems. As such, different EQA systems may indeed affect how ‘quality work’ is perceived, introduced and conducted at the institutional level.

In the introduction, we also raised the possibility that the differences between the indirect and the direct approaches to EQA may blur over time as professionalization, specialization and experience with quality develop (Stensaker et al. 2011). Our empirical data also hint at this, for example in the weight given to student evaluation and student feedback as a key determinant of actions taken by study programme leaders in both Denmark and Norway. The fact that Danish study programme leaders reported paying much attention to the institutional quality assurance system is also an indication that the transformation in Denmark from a direct to an indirect EQA system, in which the institutions take more responsibility for QA, may have a strong influence at programme level within the Danish universities. The blurring between direct and indirect approaches to EQA may also be related to the European Standards and Guidelines (ESG) for quality assurance, and the possibility that national characteristics over time become more influenced by ‘European ideas’ concerning how this activity should be organized (cf. Bollaert 2014).

In the introduction to this book, quality work was described as negotiated and dynamic, with individuals functioning as local problem solvers in their effort to balance multiple expectations. This chapter has tried to explore some of these dynamics by relating the work conducted by leaders at programme level to larger system characteristics – external quality assurance. While our findings should not be interpreted as evidence of direct causal links between the different levels, the analysis does suggest that the larger environment may indeed matter for the perceived autonomy of these leaders. The different priorities in the work conducted by leaders at programme level in Danish and Norwegian institutions suggest that the links between ‘autonomy’ and ‘academic’ orientation should be further explored, and that further research should perhaps also look more closely at how supra-national ideas, not least the ESG, affect the work conducted at institutional level.