1 Introduction

Over the past two decades Romanian higher education has been the subject of numerous policy reforms aiming to increase the overall quality (Footnote 1) and performance of higher education institutions (HEIs): two comprehensive laws on education and several second-order legislative acts intended to transform the sphere of higher education into a modern system of teaching and research. Funding was one of the most important mechanisms used to steer universities toward improved performance. Throughout the article, elements of student equity and access which operate within the framework of quality assessment are highlighted, and the impact of this framework on the funding process is evaluated. The article focuses mainly on public HEIs because, unlike private institutions, which do not rely on state funding, public universities are particularly sensitive to shifts in funding policy.

Following a brief section outlining the theoretical framework used in the paper, two distinct but related subjects are discussed. First, the early efforts undertaken by the government in the nineties to implement a system of accreditation for all higher education institutions, and the subsequent transformation of the initial accreditation scheme into a broader process of quality assurance, are analysed. Second, the efforts of policymakers to define and integrate aspects of quality into the distribution of state funds to Romanian public universities are described, with reference to the recent implications of the national process of university classification and study programme ranking for issues of quality, funding and equity. This approach is meant to reflect the general process through which study programmes in Romanian public HEIs come to operate: first they must be accredited (a process which, inter alia, secures financial support from the state); second, they must meet further performance and quality requirements which determine the financial allocations they can secure for their subsequent activities. Throughout the article the evolution of accreditation, quality assurance and funding is deconstructed using the framework of principal-agent theory in order to illustrate specific problems typical of the governance of higher education.

2 Theoretical Considerations

As summarized by Moe (1984), the principal-agent model is a theoretical tool initially developed in the field of economics that postulates a contractual relationship between two parties: a principal who is interested in securing certain outcomes and an agent whom the principal entrusts with the operational tasks needed to achieve these desired outcomes. The model assumes that the parties are rational. Therefore, it must take into account the fact that the agent has his own interests (which may differ from the principal’s), and so he pursues the principal’s objectives only to the extent that the incentive structure imposed in their contract renders such behaviour advantageous. The principal’s chief dilemma is therefore that of defining the incentive structure in such a way that the agent is compelled to pursue the preferences of the principal, i.e. to provide the outcomes specified in the contract.

However, this problem is further compounded by a specific feature of the relationship between the principal and the agent, namely asymmetric information: agents possess information that the principal does not have (or could only acquire at great and unfeasible cost). Asymmetric information brings about two important problems (Lane and Ersson 2000): ex ante adverse selection of agents resulting from hidden information, and ex post moral hazard resulting from hidden actions taken by the agent without the knowledge of the principal. In the first case, the principal may decide to enter a contract with an agent he may only later find is not suited to accomplish his desired outcomes; in the second case, even if adverse selection has been avoided, the principal may find he is confronted with an agent that does not strictly adhere to the terms of the initial contract. The main concern of principal-agent theory is therefore to find solutions to both adverse selection and moral hazard (Footnote 2). Because it tends to view the behaviour of the agent as primarily opportunistic and self-interested, the theory identifies various instruments needed to counteract the agent’s potentially opportunistic behaviour. Three such common instruments are available to the principal: monitoring or surveillance, risk-sharing contracts, and retaliation (Lane 2008).

Two separate traditions in the application of principal-agent models can be discerned (Miller 2005; Lane and Kivistö 2008): on the one hand there are the canonical economic versions of principal-agent theory; on the other hand there is a distinct political science perspective which has relaxed some of the more rigid assumptions formulated by the economic version. According to the latter, the contract between the parties is implicit in nature; the analysis focuses on both agent and principal; all actors (i.e. all principals and agents) are motivated by economic utility as well as political power; and multiple and collective principals are acknowledged, as is the possibility that intermediary agents and principals can exist between a primary principal and agent. In addition, the political science-driven principal-agent model considers a social/political contract to be the principal’s primary mode of control; it also recognizes that the output of the contractual relationship is a public (rather than a private) good; and, lastly, it admits that shirking (the agent’s wilful neglect of his responsibilities toward the principal) need not only be a consequence of individual action, but may also result from structural considerations, especially in cases involving multiple principals and agents where information is not properly communicated.

In its most general form, the principal-agent model can be used in political science to represent the relationship between the population of a given country (the principal) and its government (the agent that has to provide specific public goods and services). However, the model has a much wider range of application. In particular, this article explores how adverse selection, moral hazard and information asymmetry have had direct implications for the operation of Romanian HEIs over the past two decades and how they have shaped governmental policies in the field of quality assurance and funding of the higher education system (Footnote 3). Throughout this paper accreditation is considered as a specific screening device that governments employ in order to select which universities they support from the state budget, while the education funding policy makes up the reward rules that frame the interactions of government and accredited HEIs and periodic quality assessment of universities acts as a monitoring device. In such a setting the government is the principal and HEIs are agents (Footnote 4) entrusted with specific outcomes (creation and dissemination of knowledge, preparation and training of skilled individuals for the labour market, etc.). Accreditation thus becomes an instrument in solving the problem of adverse selection, while differential (quality or performance-based) funding and monitoring through periodic assessment are solutions to the problem of moral hazard.

3 Approaches to Quality Assurance in Romanian Higher Education

Consistent with the overall pattern in Central and Eastern Europe where, at least initially, the “predominant approach to assuring quality in higher education has been accreditation by a state-established agency” (Kahoutek 2009), early conceptions of quality assurance processes in the Romanian higher education system seem to have been very narrowly identified with the process of accreditation. In general, accreditation has at least two crucially important financial implications for HEIs (Schwarz and Westerheijden 2007): first, it may function as a prerequisite for funding; second, it makes institutions and programmes that have accreditation status more attractive to students and can therefore indirectly increase institutional funding in systems where the funding depends on student numbers. Both provisions apply to Romania. Therefore, a discussion of accreditation is important, both in itself as well as for student equity and access.

As Scott points out, issues of quality assessment, accreditation and evaluation became common themes in Central and Eastern European higher education following the collapse of communism, with most quality systems in the region being adapted from West European or American models (Scott 2000). Consistent with this depiction, the issue of quality in Romanian higher education began to emerge as a pressing concern during the early nineties when the country embarked on the difficult transition from a centralized socialist system to a democratic society. Like most other Central and East European countries, following the economic and political liberalization brought about by the fall of communist rule Romania witnessed a rapid expansion of private higher education suppliers (Footnote 5), which eventually demanded a governmental response. The main problem in this turbulent period was that numerous corporations started declaring themselves as suppliers of higher education services in a volatile setting where “no criteria or standards existed for the coordination of private initiative in the field of higher education” (Korka 2009). From an agency perspective, the unchecked proliferation of private HEIs put early governments in a position where they were faced with a typical adverse selection problem in that they could not know which agents (HEIs and other new suppliers) offered quality educational services (Footnote 6). A corollary of this situation was that government had difficulties in deciding how to distribute state funds to support the newly-established higher education providers.

In order to solve the increasing problem of adverse selection, the government’s initial response, enacted through Law no 88/1993 regarding the accreditation of higher education institutions and the recognition of diplomas, sought to establish firm rules regarding what type of entities were officially sanctioned to provide higher education services. The law established a state-supported National Council for Academic Evaluation and Accreditation (CNEAA) which was invested with the task of temporarily authorizing and then accrediting institutions and study programmes which met certain minimum standards regarding teaching staff and other input criteria (Footnote 7).

The law, however, made no explicit reference to the notion of quality; it was strictly concerned with the process of accreditation and, to a lesser extent, with a distinct process labelled “periodic academic evaluation”, a process which was tantamount to (periodic) external quality assessment (Footnote 8). The law’s overall positive influence was its remarkable success in combating the chaos of the early post-communist higher education landscape (Miroiu 1998), but it nonetheless suffered from several shortcomings. Korka (2009) mentions three of them: the neglect of mechanisms of internal quality control; the apparent quality homogeneity of study programmes (Footnote 9); and the lack of any substantial difference between initial accreditation and subsequent periodic evaluation (Footnote 10). All of these were further compounded by the fact that quality evaluation was virtually neglected in practice because of the more pressing tasks of authorization and accreditation (Vlăsceanu 2005). Put differently, throughout the first decade of transition the government was more preoccupied with the problem of adverse selection (which it did solve with the aid of CNEAA) than with recurring issues of moral hazard.

Quality in this stage was defined strictly in terms of compliance with a set of minimum standards to be attained in order to secure entry into the market of higher education providers. As noted by other authors, Law no 88/1993 “appeared rather as a response to the market evolution than as part of higher education policy” (Nicolescu 2007). Subsumed under the narrow interpretation of accreditation, quality had no nuances and functioned solely on a dichotomous logic of approval (authorization/accreditation) and rejection: those institutions meeting the minimum requirements were accredited (and therefore considered to be of quality), while those that did not were excluded from the system.

The need to reconfigure the normative framework regarding quality assessment began to emerge as an important concern following Romania’s accession to the Bologna Process, given the specific objective of establishing a European dimension in quality assurance. Only a month after the Bologna Declaration was signed, Law no 88/1993 was amended by Law no 144/1999 which, although offering virtually no conceptual elaboration, introduced the notion of “quality assurance” as such into the legal framework governing higher education. Following the amendments made through Law no 144/1999, quality assurance came to be an objective of CNEAA, although evaluation and accreditation remained the Council’s main focus. It would take another 6 years, however, for a more substantial conception of quality assurance to be implemented by Romanian policymakers.

Following the European drive for convergence of higher education systems, alongside a number of other structural reforms (Footnote 11) meant to implement the Bologna objectives, a Cabinet Emergency Ordinance issued in 2005 (and subsequently endorsed by Parliament and enacted as Law no 87/2006) was specifically devoted to the issue of quality assurance in education. The new law marked, at least in formal terms, a visible turn in the process of quality assurance: it made a firmer distinction between accreditation and quality assurance (accreditation was now explicitly defined as “a component of quality assurance”); it differentiated between internal and external quality assurance following the Standards and Guidelines for Quality Assurance in the European Higher Education Area; it outlined a methodology for quality assurance and explicitly listed the domains and criteria encompassed by this methodology; it instituted the obligation of HEIs to create a commission responsible for internal evaluation and quality assurance; and it created the Romanian Agency for Quality Assurance in Higher Education (ARACIS), which was to supersede CNEAA (Footnote 12) and which would operate as an autonomous institution (Footnote 13). The law also contained a meta-accreditation provision, as it required ARACIS itself to submit to a periodic process of international evaluation.

From a structural point of view, the new methodology (Footnote 14) for external evaluation comprised three broad domains (institutional capacity, educational efficacy and quality management), each with distinct criteria to which standards and corresponding performance indicators were attached. A total of 43 distinct performance indicators were specified (10 for institutional capacity, 16 for educational efficacy and 17 for quality management). The methodology made a further distinction between minimum (obligatory) standards and reference (optimal) standards. In order to secure authorization or accreditation an institution had to meet the minimum level for all standards. Failure to comply with the minimum level for even a single performance indicator precluded authorization/accreditation.
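The conjunctive logic just described, where failing the minimum level on even a single indicator blocks authorization or accreditation, can be sketched as follows; the indicator names and threshold values are hypothetical, not drawn from the actual ARACIS methodology:

```python
# Illustrative sketch of the conjunctive accreditation rule:
# authorization/accreditation requires EVERY indicator to reach its
# minimum standard. Indicator names and thresholds are hypothetical.
def meets_minimum_standards(scores, minimums):
    """scores, minimums: {indicator_name: value}. Returns True only if
    every listed indicator reaches its minimum level."""
    return all(scores.get(ind, 0) >= level for ind, level in minimums.items())

# Hypothetical example with two indicators:
minimums = {"qualified_staff_ratio": 0.70, "scholarship_regulations": 1}
print(meets_minimum_standards({"qualified_staff_ratio": 0.80,
                               "scholarship_regulations": 1}, minimums))  # True
print(meets_minimum_standards({"qualified_staff_ratio": 0.65,
                               "scholarship_regulations": 1}, minimums))  # False
```

Reference (optimal) standards would sit above these thresholds but, under the rule described, play no role in the approve/reject decision itself.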

In the context of this paper it is worth noting that the methodology of ARACIS includes indicators specifically associated with elements that could be considered part of a broader concern with student equity and access, in that they specify general student facilities and various types of services which must be provided by HEIs. It should also be mentioned that these indicators are among the few explicit (albeit indirect) constraints imposed by law on HEIs with respect to equity and access issues: (1) the system of scholarship allocation and other forms of financial aid for students. As a minimum standard, this indicator requires the existence and consistent application of clear regulations for awarding scholarships; as a standard of reference, however, it outlines as desirable that at least 10–20 % of the institution’s resources be devoted explicitly to a scholarship fund. Another relevant indicator is (2) incentive and remediation programmes which, as a minimum, specifies that a university should have programmes that encourage students with high performance but, additionally, that it also have programmes to support those with learning difficulties (Footnote 15); as a desirable standard of reference, the methodology mentions the existence of supplementary tutorial programmes. A final indicator we wish to note is (3) student services which, as a minimum, states that universities are required to provide social, cultural and sport services; a particularly noteworthy fact is the explicit provision that the university must offer (again as a minimum) housing for at least 10 % of its students.

These indicators point towards the fact that within the context of quality assessment there are at least some elements of potential relevance to equity and access which universities must take into account. However, without comprehensive data regarding the individual universities’ actual attainment of specific values for the performance indicators (especially in terms of reference standards), no systemic judgement can be made as to whether or not universities provide sufficient services and support, in sufficient quantities, for all the students that require them.

Since it began its activities, ARACIS has analysed more than three thousand (bachelor and master) study programmes and has completed external institutional evaluations of more than ninety universities. Equally important, its annual activity reports indicate that it has also undertaken periodic evaluation (both of institutions and of their study programmes). This signals that, unlike its predecessor CNEAA, which was mostly concerned with the problem of adverse selection, ARACIS is also preoccupied with issues of moral hazard, which can arise when universities or their study programmes fail to continuously meet the initial standards which served as the basis for their accreditation. However, the efforts of ARACIS to instil a quality culture in Romanian HEIs seem to have met with limited success, as evidenced, for example, by the finding that the institutional commissions for internal evaluation and quality assurance have only a discontinuous, quasi-formal activity (Vlăsceanu et al. 2010), an element which points to shirking on the part of HEIs. Overall, despite the intentions of policymakers, compliance with the minimum standards specified in the methodology of ARACIS is still the prevalent form of quality assurance, which thus remains “preponderantly administrative, decoupled from (organic) processes of learning and teaching” (Păunescu et al. 2011).

4 Quality and Funding

However important for purposes of evaluation and accreditation, the methodology devised by ARACIS has not been the sole instrument for assessing (or indeed rewarding) quality in Romanian higher education. In order to present a more complete picture of quality assessment, special attention must also be paid to a second aspect: the way in which (public) universities have actually been financed by the government. In this context, our paper focuses on only one feature of the evolution of the Romanian funding mechanism, namely the quality components used by the National Higher Education Funding Council (CNFIS) to distribute basic funding to universities.

The term basic funding was introduced in 1999, alongside a separate notion, that of complementary funding (Footnote 16), through a policy that marked the transition from an approach whereby public universities received funding “according to principles more or less inherited from the socialist period” (Miroiu and Aligica 2003) to a new mechanism of formula-based funding. With the introduction of formula-based funding, the number of enrolled students became central to the funding scheme: the amount of funding received by a university became contingent on the number of physical students it enrolled, following a formula which attached different equivalence coefficients to each programme level (bachelor, PhD, etc.) and different cost indicators to each field of study (medical, technical, economic, etc.). The funding formula in effect translated the physical students a university enrolled first into equivalent students and then into unitary equivalent students, which could then be used to determine the funding of each university.
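The translation chain just described (physical students, weighted by level-specific equivalence coefficients and field-specific cost indicators, yield unitary equivalent students, which then drive proportional allocations) can be sketched as below. All coefficient and cost values are illustrative assumptions, not the actual CNFIS figures:

```python
# Hedged sketch of formula-based basic funding. The coefficient and
# cost values below are illustrative assumptions, not CNFIS figures.
LEVEL_COEFF = {"bachelor": 1.0, "master": 2.0, "phd": 3.0}          # hypothetical
FIELD_COST = {"economic": 1.0, "technical": 1.75, "medical": 2.25}  # hypothetical

def unitary_equivalent_students(enrolments):
    """enrolments: {(level, field): physical student count}.
    Physical students -> equivalent students (level coefficient)
    -> unitary equivalent students (field cost indicator)."""
    return sum(n * LEVEL_COEFF[level] * FIELD_COST[field]
               for (level, field), n in enrolments.items())

def allocate_basic_funding(total_budget, universities):
    """universities: {name: enrolments}. Funding is strictly
    proportional to each university's unitary equivalent students."""
    ues = {u: unitary_equivalent_students(e) for u, e in universities.items()}
    grand_total = sum(ues.values())
    return {u: total_budget * s / grand_total for u, s in ues.items()}
```

In this sketch a university with twice the unitary equivalent students of another receives exactly twice the funding, which is the strict proportionality discussed next.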

Although a remarkable break with previous practices, the initial formula for allocating funds took a strictly quantitative approach, relying on the single dimension of physical students, with the consequence that universities received funding in strict proportion to their number of (unitary equivalent) students (CNFIS 2007). The formula-based funding mechanism had two important consequences for universities (Vlăsceanu and Miroiu 2012): it allowed them to use their budgets autonomously and it stimulated them to reduce operating costs; however, most universities reduced costs by decreasing the amount and quality of the facilities offered to students and by increasing the student/staff ratio (instead of developing a more responsible scheme for cost control). In this context, the following potential access and equity paradox can be noted: since a university received funding in accordance with its number of students, it had direct and powerful incentives to enrol as many students as possible to ensure its survival; yet the more students it enrolled, the less it was able to provide them with adequate facilities and services.

Aware of this danger, policymakers began experimenting with ways to build quality measures directly into the funding formula: starting in 2003 the formula incorporated several quality indicators meant to stimulate differential funding based on measurable aspects of institutional performance. After their introduction in 2003, the number and complexity of the indicators grew continually, as did, more importantly, the amount of funding determined through their use. Between 2003 and 2011 the number of indicators increased from 4 to 17 (some having a complex structure determined by numerous sub-indicators). At the same time, the share of basic funding these quality indicators determined expanded from 12.7 % to 30 %.

Starting in 2003 the total amount of basic funding was thus divided into two distinct components: a quantitative component relying on the number of students and a qualitative component influenced by each university’s level of performance. The quality indicators were grouped into categories mainly dealing with the following issues: (1) teaching staff; (2) research; (3) material resources; and (4) academic and administrative management of the university. Table 1 below provides a detailed list of these indicators and their individual weights in the allocation of funding during three distinct years (2006, 2009 and 2011), a period when the overall structure of the methodology used by CNFIS stabilized and yearly revisions focused more on the individual weights attached to the indicators than on their content. Although an exhaustive description and treatment of these indicators is outside the scope of this paper, it is important to emphasize several aspects.

Table 1 Quality Indicators and per cent of total basic funding they determined in 2006, 2009 and 2011

To begin with, it is obvious from the development of the indicators and their growing significance in funding allocation that there is a clear trend toward increased quality assessment leading to greater competition between universities. This competition is not only the result of monetary rewards (which need not always be substantial) but may also appear due to added legitimacy associated with higher scores which can serve as a powerful motivator for universities to improve their performance (Miroiu and Andreescu 2010). From an agency theory perspective, however, incorporation of such performance-oriented funding is a direct expression of concern with moral hazard problems resulting from a stable setting in which public universities, once accredited, receive funding in accordance with their number of students and therefore have no stimulus to improve their performance. Thus, changes in the funding mechanism are actually equivalent to a restructuring of the incentive system devised by the principal in order to assure accountability of the agents and greater competition among them.

A second aspect that merits attention is the nature of the distribution implied by the funding formula once the qualitative indicators were introduced: funding partly became a zero-sum game in which the losses of a university with low scores on the quality indicators were gains for another with superior performance. However, because it was necessary to avoid treating universities as “a-dimensional entities” within the funding mechanism (Ţeca 2011), the number of students (the quantitative component, which already determined the better part of total basic funding) also had a powerful indirect influence on the qualitative side of the funding formula. In other words, within the framework of the zero-sum game determined by the quality indicators, the quantitative aspect still played an important role, in effect determining the size of the reward (or penalty) for each university (Footnote 17).
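A minimal sketch of this two-component, size-weighted allocation is given below, under the assumption that the quality score enters multiplicatively; the article does not reproduce the exact CNFIS formula, so this only illustrates the zero-sum, size-weighted character of the quality pot:

```python
# Hedged sketch of the 2003-2011 two-component allocation. The exact
# functional form of the CNFIS formula is not reproduced here; this
# illustrates only the size-weighted, zero-sum quality component.
def allocate(total_budget, quality_share, universities):
    """universities: {name: (unitary_equivalent_students, quality_score)}.
    quality_share: fraction of basic funding driven by quality indicators
    (e.g. 0.127 in 2003, up to 0.30 by 2011)."""
    quant_pot = total_budget * (1 - quality_share)
    qual_pot = total_budget * quality_share
    total_ues = sum(s for s, _ in universities.values())
    # Quality "mass" is weighted by size: the number of students determines
    # how large a reward (or penalty) the quality score translates into.
    qual_mass = {u: s * q for u, (s, q) in universities.items()}
    total_mass = sum(qual_mass.values())
    return {u: quant_pot * s / total_ues + qual_pot * qual_mass[u] / total_mass
            for u, (s, q) in universities.items()}
```

In this sketch a university scoring below the size-weighted average cedes part of the quality pot to better performers, which is the zero-sum redistribution described above.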

A final aspect worthy of mention is a certain shift in emphasis noticeable towards the end of the period during which quality indicators were used: although most indicators maintained a relatively constant weight throughout the entire period (see in particular quality of social and administrative student services, which determined 2 % of the total amount of basic funding and was mainly concerned with student dormitories), one indicator, the level of performance in scientific research, more than doubled in weight. It had a complex structure, being made up of numerous sub-indicators dealing with items such as the number of articles or books published by university staff, and, compared to all other indicators, it was responsible for the largest amount of funding distributed on the grounds of quality assessment.

Although research played an important role in the funding allocations, starting in 2012 it came to have an even more prominent role in the higher education landscape following the introduction of the new comprehensive law on education (Law no 1/2011). This law required all universities to be classified into three distinct categories and all study programmes to be ranked according to their performance. Following a thorough evaluation, a university could be classified as focused on teaching, as focused on teaching and research, or as a research intensive university. In addition, all individual study programmes of accredited HEIs were ranked into five distinct categories ranging from A (best performance) to E (lowest performance) (Footnote 18). The methodology (Footnote 19) used in the process of university classification and study programme ranking relied on more than 60 distinct indicators grouped into four main criteria: (1) research performance; (2) teaching; (3) relation to the external environment; and (4) institutional capacity. Research was particularly important as it had a global weight ranging from 40 % (in the case of arts and humanities and certain social sciences) to 60 % (for mathematics, engineering and biomedical sciences).

In accordance with the new law, a first (and, for the time being, only) comprehensive evaluation of universities and their study programmes was conducted in 2011 (Footnote 20). The general structure of public funding devoted to universities also changed: in addition to basic and complementary funding, a new category of supplementary funding was introduced (equivalent to 30.5 % of basic funding), together with a distinct institutional development fund. Supplementary funding was further divided among three major components: (1) supplementary funding for excellence, which accounted for 25 % of basic funding and which can be seen as a successor to the previous idea of distributing funds based on quality indicators; (2) preferential funding for master and PhD programmes in advanced science and technology, for programmes taught in foreign languages and for jointly supervised PhD programmes; and (3) a fund to support HEIs with an active local or regional role.

Since 2012 the quality indicators used between 2003 and 2011 have no longer been in operation; quality constraints are instead incorporated into the funding mechanism through the results of the national evaluation of universities and their study programmes (Footnote 21). This can be seen as “a recent preoccupation for unifying the different existing approaches to quality” (CNFIS 2013), because CNFIS replaced its own indicators with the results of the national evaluation. Operationalization of this idea entailed the use of certain excellence indices, which became multiplication factors in the allocation of supplementary funding for excellence. The excellence indices reflect the results of the national ranking of study programmes. For example, at the bachelor level, a study programme belonging to class A (best performance) translated into an excellence index of 3, but into 0 if the programme was ranked in class D or E (low performance). For master level studies, programmes ranked in class A received an excellence index of 4, those in class B an index of 1, and those in classes C, D and E received 0 (Footnote 22).
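At the master level, where the text gives the complete mapping of rank classes to excellence indices, the multiplication-factor logic can be sketched as follows; how the index combines with programme size in the actual allocation is an assumption made for illustration:

```python
# Excellence indices for master programmes as given in the text:
# class A -> 4, B -> 1, C/D/E -> 0. Combining the index with a size
# measure, as below, is an assumption for illustration only.
MASTER_INDEX = {"A": 4, "B": 1, "C": 0, "D": 0, "E": 0}

def excellence_allocation(pot, programmes):
    """programmes: {name: (rank_class, student_count)}. Returns each
    programme's share of the supplementary funding for excellence."""
    mass = {p: MASTER_INDEX[c] * n for p, (c, n) in programmes.items()}
    total = sum(mass.values())
    if total == 0:
        # No programme ranked A or B: nothing to distribute here.
        return {p: 0.0 for p in programmes}
    return {p: pot * m / total for p, m in mass.items()}
```

Under this sketch a class-A programme attracts four times the per-student share of a class-B programme, while programmes in classes C, D and E receive nothing from this component.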

Access and equity elements within the methodology used for university classification and study programme ranking included several indicators. Under relation to the external environment one can find the following three indicators: students from lower socio-economic groups, mature students (defined as aged 30 years or more), and students with disabilities. Under institutional capacity one can find several other indicators dealing with student cafeterias, dormitories, personnel responsible for medical services, infrastructure devoted to students with disabilities, and personnel specifically employed to support students with disabilities.

Although the methodology used in the process of university classification and study programme ranking thus seems to have more indicators dealing with equity and access issues, it remains doubtful whether these had any significant impact on the final results of classification and ranking and, following these processes, on the funding universities received in 2012. This claim may be supported by studying the methodology itself, the individual weights of the indicators and the aggregate weights of the criteria it used. To begin with, it should be noted that the methodology had several intermediate levels of aggregation: at the lowest level were individual indicators, which were then aggregated into composite (intermediary) indicators (Footnote 23) which, finally, were further aggregated into the four criteria listed above, namely research performance, teaching, relation to the external environment and institutional capacity. A natural consequence of such a hierarchical structure using multiple layers of indicators is that the overall impact of any one individual indicator tends to become diluted. With respect to the indicators dealing with equity and access elements this is particularly evident because, at the most general level of aggregation, both relation to the external environment and institutional capacity had, without exception, the smallest weights of the four criteria used by the Ministry (ranging between 5 and 20 %) but also had the largest number of individual indicators (more than 20 in each case).

However, the new methodology used by CNFIS starting in 2012 also included a different component that can account for access and equity. Based on the provisions of Law no. 1/2011, a special fund was created to stimulate universities that develop policies addressed to students from disadvantaged groups (i.e. the fund to support HEIs with an active local or regional role mentioned above). Disadvantaged groups can be ethnic minorities (e.g. Roma) or people living in certain areas (rural areas, small towns, etc.).Footnote 24 In 2012 the funding for this component represented 3 % of the total allocations for universities that were distributed by CNFIS. Funds were allocated by the Ministry of Education mainly to universities located in small towns and which had study programmes aimed at satisfying local needs (CNFIS 2013).

5 Conclusions

Over the past two decades quality assessment and quality constraints have become a central feature of policymaking for Romanian higher education. This article has illustrated how problems of adverse selection and moral hazard typical of principal-agent models have spurred Romanian governments to develop specific solutions in the form of normative constraints limiting the potentially opportunistic behaviour of universities. Prior to 2012 such quality constraints took two distinct shapes: one is given by the process of accreditation (together with its corollary, periodic academic evaluation), while the other is represented by specific indicators used to determine the level of funding for each public university. In both cases the complexity and number of indicators used for overall quality assessment increased over time. However, starting in 2012, quality indicators are no longer in use; quality is instead incorporated into the funding mechanism through a proxy measure, namely the excellence indices derived from the results of the national process of study programme ranking, which relied heavily on research aspects.

In terms of aspects that promote equity and access, all methodologies pertaining to quality assessment discussed in this article incorporate only a limited number of indicators devoted to such issues. In addition, rather than dealing with targeted measures for specific (potentially more vulnerable) groups of students, most of these indicators concern themselves only with material resources and minimal facilities and services for all students in general. The scope and importance of these indicators vary between the distinct methodologies under discussion: within the methodologies used by CNFIS between 2003 and 2011 such indicators generally accounted for 2 % of the basic funding allocated to universities and mainly dealt with student dormitories and general administrative services; within the methodology for accreditation used by ARACIS the three indicators we identified also deal with input aspects related to the universities' distribution of material resources and services provided to students. A more comprehensive list of indicators sensitive to equity and access issues can be found in the methodology used to assess universities and their study programmes in 2011 but, paradoxically, the effect of these indicators is diluted by the existence of dozens of other indicators and by the presence of intermediary levels of aggregation to which the indicators contribute only to a negligible degree.

Overall, based on these methodologies and their evolution we may conclude that general quality considerations play an increasingly important role for higher education institutions and their funding, but equity and access elements do not act as important factors within quality assessment processes themselves. This does not mean, however, that equity and access have no impact on funding itself. On the contrary, although such elements are limited within the various frameworks of quality and performance evaluation, they have also recently been included in the funding mechanism in a more direct manner, through the provision of a distinct component within the newly introduced supplementary funding. Therefore, the impact of equity and access elements on Romanian HEIs is now twofold: on one hand this impact is indirect (and limited), mediated by the processes of accreditation and performance assessment which have their own distinct leverage on funding; on the other hand, the impact also takes a more explicit form through the specification of a distinct component geared towards equity and access issues within the funding scheme. The inclusion of this distinct component may indicate a growing importance assigned by policymakers to equity and access in general but, because objective criteria for the distribution of these earmarked funds have yet to be clearly formulated, it remains to be seen what substantial consequences this policy will have for HEIs and their students.