Introduction

Evaluation of university performance has grown significantly in importance in recent years, particularly in emerging higher education systems. Such evaluation requires an inclusive and holistic assessment tool comprising appropriate criteria and indicators, which are difficult to establish given the complexity and varied backgrounds of universities, including their nature, strengths, administrative structures, missions and visions (Edgar & Geare, 2013; Shin & Toutkoushian, 2011). Although evaluating university performance remains a challenging task, several ongoing initiatives assess universities worldwide and ultimately rank them on the basis of the assessment tools they have developed. Among these, three relatively well-known world university ranking systems are the Academic Ranking of World Universities (ARWU), the Times Higher Education World University Rankings (THEWUR) and the QS World University Rankings (QSWUR).

While research performance (or research strength) is one of the major components in all three leading assessment systems (Soh, 2015; Selten et al., 2020), substantial differences exist in how each system measures research. The ARWU uses five research-related indicators, covering alumni and staff winning prestigious prizes and medals, publication in highly specialised journals, and inclusion in premium citation indices; thus, all of its weight is devoted to research indicators (Soh, 2015). In THEWUR, research is measured through three sub-indicators: an academic reputation survey, institutional income per academic, and the number of papers per academic. The QSWUR uses only an academic reputation survey and citation counts. While all three ranking systems use publication and citation counts, both THEWUR and QSWUR also incorporate reputation surveys in measuring research. The ARWU, by contrast, focuses squarely on research performance as measured by the volume of publications and the quality of research conducted at the highest level, and it does not include reputational data.

Assessment of research performance at the country level is also well established, whether through highly centralised, standardised and formal systems or devolved and relatively informal ones. The Research Excellence Framework (REF), undertaken by four UK higher education funding bodies, namely Research England, the Scottish Funding Council (SFC), the Higher Education Funding Council for Wales (HEFCW) and the Department for the Economy, Northern Ireland (DfE), assesses the research of British higher education institutions, including its impact. The UK was one of the first countries not only to institutionalise university research assessment but also to link it to financial allocations. New Zealand's Performance-Based Research Fund (PBRF) exercise is likewise a system for allocating funding among universities, departments and researchers according to the quality and quantity of research outputs over the preceding six years. The PBRF aims to incentivise research excellence and efficiency, and to enable government to invest research funds where the greatest returns are most likely. Hong Kong and Australia have similar systems for measuring research excellence. The Australian Research Council (ARC) administers Excellence in Research for Australia (ERA), a national research evaluation framework that identifies and promotes excellence across the full spectrum of research activity in higher education institutions. In Hong Kong, the Research Assessment Exercise (RAE) was set up as part of the University Grants Committee's (UGC) commitment to assessing the research performance of UGC-funded universities. Borrowed from the United Kingdom, the RAE aims to control spending on universities through an explicit and formalised assessment of research quality (Currie, 2008).

In Malaysia, in acknowledgement of the importance of university research performance, a system known as the Malaysian Research Assessment Instrument (MyRA) was established in 2006. The MyRA adopted predetermined, mostly output-oriented benchmarks (e.g., amount of research funds secured, number of publications in high-impact-factor journals), and universities that achieve the required scores are conferred the status of research universities (RUs). A research university is expected to fulfil the following functions: i) to develop as a centre of excellence for niche areas of research; ii) to play a leading role in the development of innovation at the national level; iii) to generate world-class academic output, especially the production of high-impact scholarly publications; iv) to attract high-quality graduates to assist in research; and v) to build an environment conducive to research and innovation (Ministry of Higher Education, 2004). Since 2014, all universities have been mandated to participate in the annual assessment exercise, which coincides with the opening up of Ministry of Higher Education (MoHE) research grants (MoHE, 2015). In 2012, the MyRA was also utilised to evaluate the research performance of relatively smaller entities, i.e., research institutes or Research Centres of Excellence (RCoEs) within Malaysian universities, and those RCoEs that fulfil the MyRA requirements are recognised as Higher Education Centres of Excellence (HICoE).

The MyRA appears to have been confirmed as an appropriate tool for evaluating each university's research performance, having gained acceptance by almost all public and private universities in Malaysia. However, when the MyRA was used to evaluate the research performance of RCoEs, smaller entities functioning as research centres within Malaysian universities, questions were raised about its suitability. This raises the general issue of assessment at different levels: the individual researcher, the organised research unit, and the institution. The concern was whether the unique research strengths of RCoEs could be clearly captured by the MyRA. Examining why the MyRA is less suitable for RCoEs also exposes weaknesses in its use for institutional evaluation: activities or types of research that it captures poorly become obvious when it is applied to RCoEs. Should the MyRA prove inappropriate for evaluating the research performance of RCoEs within universities, an alternative system would be required. Policy makers would be better served by a discussion of this distinction in terms of the different purposes of research, and of how research assessments at the different levels can be aligned. This provided the initial impetus for the present study.

This paper, reporting on part of a practice-based enquiry, aims to critically review the challenges of using the MyRA to measure the research performance of RCoEs within Malaysian universities. The first part of the paper introduces the contexts leading up to the development and adoption of the MyRA as a tool for assessing the academic performance of universities. This is followed by an analysis of the challenges and appropriateness of using the MyRA to evaluate the research performance of relatively smaller entities, namely RCoEs within Malaysian universities. Then, based on a participatory action research approach towards an alternative assessment system for RCoEs, a list of prioritised components and criteria deemed relevant for the newly proposed assessment tool is presented.

The Malaysian research assessment (MyRA) instrument

The MyRA, an instrument for evaluating the research performance of Malaysian universities, was developed by an ad hoc committee in 2004. The main reason for the development of the MyRA was to fulfil the objective of identifying universities to be conferred the status of Malaysian Research University (MRU). The aim to elevate the standing of Malaysian universities to attain world-class status is high on the agenda of both the National Higher Education Strategic Plan (NHESP) 2007-2020 and the Higher Education Blueprint (2015-2025). As the first official research performance assessment tool designed for Malaysian universities, the MyRA has been considered a very useful and comprehensive tool for evaluating research performance. The objectives of the MyRA are:

i) to promote activities related to research, development and commercialisation;

ii) to increase the number of post-graduate students and post-doctoral researchers;

iii) to increase the number of lecturers with PhDs;

iv) to increase the number of foreign students;

v) to strengthen centres of excellence; and

vi) to improve the rankings of institutions of higher education (Higher Education Institution Excellence Planning Unit, 2014).

The MyRA comprises seven key areas of measurement (see Table 1). A six-star rating system is used (with six stars representing the highest performance level and one star the lowest), and all participating higher education institutions (HEIs) have their documents and websites audited by a panel of trained auditors. Since 2007, after numerous rounds of MyRA assessments, five universities (out of 20 public and 33 private universities) have been deemed to have fulfilled the MyRA requirements (a six-star rating), thus earning recognition as RUs: Universiti Malaya, Universiti Sains Malaysia, Universiti Kebangsaan Malaysia, Universiti Putra Malaysia and Universiti Teknologi Malaysia (MoHE, 2012). The MyRA continues to be used to evaluate the performance of non-research universities, while a new, stricter version, the MyRA II, is used to evaluate the RUs. The MyRA II uses the same criteria but assigns higher weighting (percentage) to four of them (quantity and quality of research; innovation; professional services and gifts; and networking and linkages). The criteria and summary of indicators for both the MyRA I and the MyRA II are presented in Table 1.

Table 1 Assessment areas and summary of indicators for MyRA I & MyRA II
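To make the star banding concrete, the sketch below shows one way such a scheme could be computed. The score bands used here are purely hypothetical illustrations introduced for exposition; the official MyRA cut-offs are set against the benchmarks summarised in Table 1 and are not reproduced in this paper.

```python
# Minimal sketch of a six-star banding scheme of the kind the MyRA uses.
# The score bands below are hypothetical illustrations only; the official
# MyRA cut-offs are defined against the instrument's own benchmarks.

HYPOTHETICAL_BANDS = [  # (minimum composite score on a 0-100 scale, stars)
    (85, 6), (70, 5), (55, 4), (40, 3), (25, 2), (0, 1),
]

def star_rating(score: float) -> int:
    """Map an audited composite score to a one-to-six star band."""
    for minimum, stars in HYPOTHETICAL_BANDS:
        if score >= minimum:
            return stars
    return 1

print(star_rating(88.0))  # -> 6, the level required for RU recognition
```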

Since the establishment and application of the MyRA, the extent of quality research in Malaysian RUs has increased substantially across the system (Jabatan Pendidikan Tinggi and Clarivate Analytics, 2018). Output, in terms of journal articles published by the five RUs and the number of citations received, has increased steadily every year (Bakri et al., 2017). The performance of the five RUs is also acknowledged in the Malaysia Education Blueprint 2015-2025 (Higher Education): the number of research articles published by Malaysian universities more than tripled between 2007 and 2012, with 70% of these contributed by the five RUs (Ministry of Education, 2015). The level of government funding for research has been sustained over the last ten years. The majority of stakeholders have accepted, in principle, the need for a mechanism to assess the quality of research in universities, particularly when the research has been funded directly or indirectly with public money. However, while the MyRA seems to have been effective in assessing the performance of universities, and ultimately in increasing the quantity of research output, our concerns are about the form and fairness of the assessment mechanism and its suitability for evaluating the research performance of relatively smaller entities, i.e., research centres and institutes (RCoEs) within Malaysian universities.

Higher education research centres of excellence (HICoE) in Malaysia: from RCoE to HICoE

Before discussing the issues of using the MyRA to assess RCoEs and proposing an alternative assessment framework, it is necessary to define what research centres or institutes are and what their objectives are. An RCoE is a research centre or institute, commonly referred to in the literature as an organised research unit, dedicated to specific scientific research and innovation goals (Geiger, 1990; Hellström, 2011). In essence, the primary mission of an RCoE is the conduct of research. In recent years, research centres and institutes have played an increasingly important role in the conduct of research at major Malaysian research universities. Geiger (1990) argued that such units have been the decisive factor in the expansion of the university research system and have taken a leading role in developing big science projects. In Malaysia, RCoEs were established at universities in accordance with the statutes of the Colleges and Universities Act 1976. However, each university has its own stipulated set of criteria, expectations and standards for establishing an RCoE, with approval given by the university senate. Unfortunately, the overall number of RCoEs in Malaysian universities has not been formally documented.

In 2009, with the aim of taking Malaysia's research and development (R&D) and innovation to the next level, the MoHE established the Higher Education Centres of Excellence (HICoE) framework. The HICoE framework requires that more than 70% of an RCoE's activities be related to research and innovation, and less than 30% to service and consultation for industry and the community. The MyRA I was adopted to assess the research performance of all RCoEs before they can be granted HICoE status. While retaining the criteria and indicators used for assessing the RUs, the targets and passing scores of the MyRA were modified to accommodate the actual size and scale of an RCoE. Any RCoE that achieves a six-star rating in the MyRA I can be granted HICoE status. Since the first HICoE evaluation exercise in 2008, 142 institutes and research centres have submitted applications, but only 20 RCoEs in five public universities have met the stringent requirements to be designated HICoEs (Higher Education Institution Excellence Planning Unit, 2018).

Identifying RCoEs and elevating them to HICoE status is expected to encourage research institutes to work towards becoming global leaders in their niche research areas. HICoEs are supported and facilitated by the MoHE to become focused vehicles for driving the R&D and innovation agenda, particularly in fundamental, multidisciplinary and transdisciplinary research, and for contributing to human capital development. Each HICoE is provided with MoHE funds to conduct niche research programmes, develop new research talent, produce high-impact publications, generate innovation revenues and grow collaborative networks. The funds are also expected to help a HICoE improve the capability of its niche research laboratories towards obtaining ISO/IEC 17025 certification or accreditation (Higher Education Institution Excellence Planning Unit, 2014).

Challenges in using the MyRA for evaluating research performance of RCoEs

Research centres and institutes are part of complex ecosystems that vary greatly in the type of research conducted, organisational structures and expected outcomes (Geiger, 1990; Hellström, 2011). Each research institute has its specific mission and objectives. For example, the Institute of Oceanography at Universiti Malaysia Terengganu carries out research to protect marine ecosystem health, while the Institute for Research in Molecular Medicine at Universiti Sains Malaysia explores medicines that could improve human health. The Institute of the Malay World and Civilization (ATMA) at Universiti Kebangsaan Malaysia constructs knowledge about the Malay world and civilisation, aimed at nurturing a civilised society that is ethical, knowledgeable, and equipped with a global mindset. These research institutes are inherently applied and transdisciplinary, with explicit goals of contributing to solving real problems, a strong emphasis on context and social engagement, and a range of research goals, organisational forms, and outputs. Hence, the research performance assessment of research institutes should be case-specific yet flexible and multidimensional, in order to take into consideration the uniqueness of each research institute and its disciplines.

Firstly, an analysis of the criteria and indicators of the MyRA revealed that its assessment emphasises research outputs from STEM (Science, Technology, Engineering and Mathematics). For example, outputs typically associated with STEM, such as product development, patents and commercialisation, are crucial components of the MyRA assessment. This bias may reflect government funding mechanisms since the 8th Malaysia Plan (Malaysia, 2001), which placed greater emphasis on developing innovations to improve the commercial relevance of research and development projects. For instance, in the 11th Malaysia Plan (2015-2020), 42% of the research budget was allocated to the development of services for commercialisation, prioritising research in information and communication technology, precision technology and artificial intelligence, while another 25% was allocated to strategic research areas to enhance future competitiveness in emerging areas such as engineering technology and design, and software technology (Malaysia, 2015). Another 10% was reserved for pure science research. With commercialisation and technological competitiveness as driving forces, STEM naturally plays a relatively important role in research and development compared with the social sciences and humanities.

As a result, “excellence” in the MyRA tends to be associated with the monetary value of research achievements, such as the sale of a product or an innovation. Hence, the MyRA is considered neither suitable for nor capable of evaluating the diverse disciplines of RCoEs. Researchers in the social sciences and humanities encounter difficulties in patenting and commercialising their research findings; furthermore, their publications (largely books and monographs) are given relatively lower weighting than ISI-indexed journals in the MyRA system (Azman et al., 2017). Grappling with performance indicators predicated on the practices of science scholars, such as citation counts, the arts, the humanities and, to a certain extent, the social sciences not only appear under-represented in high-impact academic journals compared with the STEM disciplines but also fail to generate high citation rates, and hence fail to attract much research funding (Azman et al., 2017).

Secondly, the MyRA, being a quantitative evaluation tool based on predetermined numerical benchmarks, may not fully capture the unique strengths of RCoEs. For example, while an RCoE serving as a think tank for the government might not produce a great number of journal articles or PhD graduates, its influence on decision- and policy-making is clearly significant. Similarly, indigenous knowledge generated by researchers in such centres enhances their strength and reputation, but this can hardly be captured by a quantitative evaluation tool. Additional criteria are therefore needed to acknowledge the innovative approaches, the diversity of actors, outputs and outcomes, and the long-term policy and social impacts of RCoEs on the Malaysian and international community.

Thirdly, although the MyRA captures aspects of the process and output of research performance, the impact of the research remains unappraised. Influenced, directly or indirectly, by several existing international university ranking systems, the MyRA has placed greater emphasis on research outputs (e.g., number of journal publications, citation counts). While these indicators of research quality remain relevant, additional criteria are needed to consider the innovative approaches and the diversity of RCoEs across disciplines. Extensive evidence in the literature points to the inherent flaws in publication counts and other output measures as indicators of research productivity: differences in journal standards and requirements, difficulties in quantifying publication output (e.g., in weighting author contributions), and disciplinary differences in publication style (e.g., length and number of authors) (Åkerlind, 2008; Bazeley, 2010).

Thus, Malaysian universities need clear indicators or benchmarks to determine whether their RCoEs are progressing towards, or have attained, excellence status. Having considered the challenges of using the MyRA to evaluate the research performance of RCoEs within Malaysian universities, this paper proposes an alternative evaluation system, the Research Centres of Excellence Assessment (RCoE-A).

Methodology

The overall study, of which this article reports a foundational analysis, employed a Participatory Action Research (PAR) approach (Jacobs, 2018). PAR delineates a systematic approach for quick and efficient data gathering, and for thoughtful analysis by participants through a participatory process, in proposing an alternative tool for evaluating the research performance of RCoEs. The approach incorporates qualitative methodologies, including document analysis and group discussions in workshops.

The researchers conducted a systematic review of the literature on assessing university research performance elsewhere, namely in Japan, the United Kingdom, the European Union and Australia. Pertinent documents at the national level, specifically the National Higher Education Strategic Plans, the MyRA, the Research Universities and Higher Institution Centres of Excellence (HICoE) guidelines, and RCoE assessment guidelines, were also reviewed. The aim was to identify appropriate principles and criteria for defining and measuring research quality in a multi- and transdisciplinary context. Together with journal articles pertinent to research assessment, approximately 27 documents focusing on concepts, criteria and indicators of research assessment were analysed using content and thematic analysis.

To construct the analytical framework for identifying the criteria and indicators necessary for an alternative system, a stepwise approach was adopted:

i. Assessing the current evaluation system of MyRA;

ii. Conducting a systematic review of literature on existing evaluation systems implemented at other research universities/in other countries;

iii. Adopting the outcome of the systematic review as a benchmark against which the current evaluation system is assessed;

iv. Identifying gaps and shortcomings in the current evaluation system that warrant an alternative evaluation system;

v. Proposing salient criteria and indicators that should be incorporated into the alternative evaluation system.

Taking into account the findings from the literature review, the process of developing an alternative assessment system for RCoEs involved four phases, as described in Table 2. The four phases utilised a participatory process across the various research activities, including three workshops involving all relevant stakeholders, namely ten directors, ten research fellows, six administrators and three top management personnel from a public research university. The participants were selected via purposeful sampling.

Table 2 Description of participatory research activities

The participatory process sought to gauge the suitability and reliability of the criteria and indicators and to establish a mechanism for accommodating the diverse needs and perspectives of the research institute communities and the university management. The research activities, particularly the workshops, were designed as a forum for action research, in which learning occurs as part of the process (Jacobs, 2018). All participatory activities were documented by video recording and note-taking. A hybrid approach of inductive and deductive coding and theme development was used to interpret the raw data.

Definitions of each principle, criterion and rubric statement were developed and formulated based on the literature and on the stakeholders' experiences and negotiated needs. The framework then went through three rounds of testing, with revisions after each round to refine and improve it.

The RCoE-A

Features of the RCoE-A

The main objective of the study was to develop a framework for the RCoE-A that includes the concepts and the initial components and criteria for quality research. The RCoE-A framework is based on the premise that a balanced and comprehensive assessment of research quality requires consideration of elements beyond the research outputs, including important aspects of the context in which the research has been conducted and the manner in which it has been managed.

The components and criteria of this proposed RCoE-A, developed from the findings, are presented in Table 3. The concepts from which the components and criteria are developed and selected are discussed below.

Table 3 Selected criteria and measurement of input, process, output and impact in the RCoE-A

Evaluating diverse, multi- and transdisciplinary research

As pointed out in the previous section, the MyRA places greater emphasis on STEM than on the social sciences, arts and humanities. Hence, the newly proposed RCoE-A should be able to assess the research performance of research institutes across diverse disciplines. The new assessment also acknowledges the value of the MyRA; thus, part of the RCoE-A, particularly its quantitative aspect, retains most of the indicators used for evaluating RUs, inevitably causing some degree of overlap.

Essentially, the new framework acknowledges disciplinary differences in the medium and style of research performance and in the nature of the impact that research might have. As the RCoE-A was designed to evaluate research institutes from diverse disciplines, some being multidisciplinary or transdisciplinary in nature, it was necessary to group similar types of RCoEs into appropriate categories. Four categories were identified based on the Times Higher Education World University Rankings and the QS World University Rankings, and were determined by mutual agreement with the participants: i) science and technology; ii) social science and sustainability science; iii) engineering and technology; and iv) medicine and health. The science and technology category covers both the life sciences and the physical sciences, whereas the social science category incorporates disciplines such as law, politics and economics (note: the arts and humanities are not covered by the RCoE-A in this phase of development). Sustainability science, a newer field that conducts transdisciplinary research, is also included in the RCoE-A. The engineering and technology category covers areas such as civil and structural engineering, mechanical and materials engineering, and architecture, while the medicine and health category covers areas such as clinical, environmental and public health, and nutrition science.

After establishing the four categories of RCoEs, the weighting for each category was identified during a workshop. Each RCoE category has a different weighting; for example, the social science and sustainability science categories have a relatively lower weighting for research facilities than the other categories, as researchers in these disciplines do not normally require expensive equipment and apparatus to conduct research. The weighting for the different RCoE categories is shown in Table 3.
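As an illustration of how such category-specific weightings could be applied in practice, the sketch below aggregates component scores into a single weighted score. All component names and weight values are hypothetical stand-ins introduced here for exposition; the actual criteria and weightings are those listed in Table 3.

```python
# Minimal sketch of category-specific weighted scoring for the RCoE-A.
# All component names and weights below are hypothetical illustrations;
# the actual criteria and weightings are those listed in Table 3.

# Per-category weights over the four assessed components (each row sums to 1.0).
WEIGHTS = {
    "science_and_technology":        {"input": 0.25, "process": 0.25, "output": 0.35, "impact": 0.15},
    "social_science_sustainability": {"input": 0.15, "process": 0.30, "output": 0.35, "impact": 0.20},
    "engineering_and_technology":    {"input": 0.25, "process": 0.25, "output": 0.35, "impact": 0.15},
    "medicine_and_health":           {"input": 0.25, "process": 0.25, "output": 0.35, "impact": 0.15},
}

def composite_score(category: str, scores: dict[str, float]) -> float:
    """Aggregate component scores (each on a 0-100 scale) into a single
    weighted score using the weighting profile of the RCoE's category."""
    weights = WEIGHTS[category]
    return sum(weights[component] * scores[component] for component in weights)

# Example: a social science institute with modest facility (input) scores is
# not penalised as heavily as it would be under a uniform weighting.
print(composite_score(
    "social_science_sustainability",
    {"input": 55.0, "process": 80.0, "output": 75.0, "impact": 70.0},
))  # -> 72.5
```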

The flexibility built into the framework, through the potential to customise criteria and weightings according to discipline, is imperative for the reasons explained earlier. However, it is also recognised that this can complicate applications where strict standardisation of measures is required; complications are expected when aggregating and comparing ratings across different peer groups and disciplines.

Quantitative and qualitative assessment

In the RCoE-A, the traditional quantitative assessment is retained; however, it is no longer the sole assessment tool, as many have pointed out the flaws of quantified output as the main measure of productivity (Mattedi & Marco, 2017; Bazeley, 2010). The qualitative aspect of the assessment, i.e., the broader contribution of research outputs to knowledge and social well-being, is introduced in response to the participants' major concern over measures confined to research income and production quantity. Some of the criteria in the RCoE-A include assessment of the RCoE's talent management, such as succession plans to ensure its sustainability. The generation of indigenous knowledge by researchers, and the transfer and application of new knowledge in government, industry and community settings, are also captured and assessed in the RCoE-A system. The systematic integration and interpretation of qualitative and quantitative data is expected to yield clearer, more nuanced and hence more accurate and transparent assessments of the research quality dimensions.

Both the quantitative and qualitative assessment tools underwent pre-testing based on data collected over a five-year period. The pre-testing involved six research institutes representing the four categories of RCoE identified in this study, and these RCoEs provided input and comments to further enhance both tools.

Assessment of input, process, output and impact

The RCoE-A promotes a balance among the components of input, process and output, which are equally important in assessing research performance. Findings from the assessment can then be used to address weaknesses and gaps, and appropriate strategies and action plans can be formulated by RCoEs to enhance overall research performance. In addition, the RCoE-A assesses the impact of the research conducted by RCoEs by incorporating appropriate elements across the input, process and output components. A summary of selected criteria for input, process, output and impact in the new RCoE-A is presented in Table 3.

The input component is assessed against three criteria, namely human resource capacity, research facilities, and research grants. Human resources are a vital input for conducting research because, without outstanding and capable academics, research outputs might not be discussed and deliberated in depth. Research requires originality and a high level of interpretive and analytical capacity (Bazeley, 2010); indeed, academic rigour is the nucleus that establishes the strengths and uniqueness of an RCoE. In this regard, succession plans for researchers should also be systematically prepared to ensure the sustainability of the institution.

Research facilities refer to special equipment or tools, such as advanced laboratory instruments or comprehensive databases, that assist academics and facilitate frontier research. Research operation funds are also considered an integral part of research facilities because they support the smooth administration of the research process; research officers and technical assistants are important for the same reason. According to the participants, these criteria help take the institute to the next level through continuous improvement of its infrastructure.

Research grants are an essential input for embarking on research; for example, chemists need research grants to purchase chemicals and equipment for experiments, while social scientists use grants to conduct field studies or to purchase satellite images for topographic and land-use analysis. The magnitude and diversity of research grants reflect the strengths of an RCoE because they signify recognition of its reputation, built on the quality of the proposals that secured the grants.

A significant evaluation effort is also directed to the process component, which is not found in the MyRA. The process component comprises three criteria: management system, network and outreach, and recognition and credibility. The management system criteria are important as they focus on how the RCoE promotes or markets its research strengths, as well as the costs involved. They also consider the financial autonomy given by the university to the director to plan and promote the RCoE. The financial autonomy criterion emerged as very important for the RCoE community, who considered that a sense of empowerment and the ability to do their jobs with fewer restrictions could ultimately improve their efficiency and performance. In addition, the participants concurred on the importance of researchers being involved in out-bound attachments, as these provide opportunities to promote their work and the RCoE to potential collaborators and partners. Conversely, in-bound academics are also captured by the management system criteria through on-site promotion and marketing.

The network and outreach criteria focus on how the RCoE collaborates with partners and stakeholders through partnership and cooperation in research. Although the participants wrestled with Memoranda of Understanding (MoU) and Memoranda of Agreement (MoA) as means of measuring networking, they acknowledged that these are formal documents or obligations for exploring possibilities and opportunities for collaboration. Participants also believed that, in the application of science, multiple actors are involved in the knowledge production process, which has consequences for the kinds of knowledge produced and how knowledge is communicated. Thus, outreach is considered part of knowledge development and application, as well as an activity of capacity building. As such, outreach is measured as a process, while the impact of outreach activities on stakeholders such as policy makers and communities is subsumed under the reputation criterion.

All the participants expressed high regard for academic (research) leadership; hence, the recognition and credibility of academics were also chosen as one of the process criteria for the RCoE-A. Awards and recognition earned by researchers attract donors for research grants (as input) and are likely to accompany high-impact research findings (as output). Awards and recognition accorded to researchers at both national and international levels are taken into consideration, as are the leadership and membership roles of academics in related scientific and professional bodies at various levels.

The participants also decisively identified four output criteria: scientific output, human capital output, service output and development of knowledge systems. The scientific output criterion captures research findings published as articles in high-impact journals, articles in peer-reviewed national journals (including journals without an impact factor), books, and book chapters. Research findings that have been patented or commercialised are also considered scientific outputs in the RCoE-A framework.

Human capital output measures the number of PhD and Master's graduates from the RCoE, ensuring that knowledge and expertise have been successfully transferred to the younger generation. Accredited professional courses designed and delivered by RCoEs are also considered important human capital output because RCoEs are mandated to train future talent, i.e., professionals and experts in relevant fields.

The service output criterion captures specialised and unique services provided by the RCoE. Specialised services are those formulated through the ingenuity and expertise developed by an RCoE, while unique services are the niche services arising from that ingenuity and expertise; the latter are considered icons of research leadership. These aspects were missing from the MyRA and, as the participants argued, this criterion will emerge as an important and meaningful factor in the RCoE's systematic use of service measures.

The development of knowledge systems takes into consideration the indigenous knowledge developed by the RCoE. Local or indigenous knowledge refers to the understandings, skills and philosophies specific to a Malaysian or Asian culture or society. It is knowledge generated through a systematic process of observing local conditions, experimenting with solutions, and re-adapting previously identified solutions to local environmental, socio-economic and technological situations. It can be represented broadly by new knowledge, new perspectives and new approaches. According to the participants, knowledge system development is a crucial contribution of research: indigenous knowledge developed by RCoEs is pertinent to scaling up national development based on local capacities, capabilities and needs, making research and development more equitable and sustainable. Many research performance systems, including the MyRA, do not address this aspect in their evaluation.

The RCoE-A prioritises research impact through the criteria of management system, network and outreach, recognition and credibility, service output, and knowledge system development. Impact can be measured via the RCoE's reputation as perceived by stakeholders and via its contributions to those stakeholders. The centre's visibility measures its impact in disseminating research to the public and the real-world benefits stemming from its research (its relevance); it also ensures that research findings reach the public arena rather than being disseminated through publications alone. In addition, a reputational measure can be considered, to some degree, an impact of promotion activities and networking with stakeholders. These qualitative assessments rely on the expertise of relevant stakeholders in judging the legitimacy and relevance of the research undertaken by the RCoE.

An RCoE's reputation, based on well-recognised research strengths or recognition received at national and international levels, can create peer esteem, potentially lead to invitations and promotion, and influence the likelihood of securing further funding. It is also anticipated that researchers from other countries will seek short-term attachments or study at the RCoE, increasing the inward and outward mobility of researchers.

Conclusion

This paper has analysed the challenges of using the MyRA to evaluate the research performance of RCoEs within Malaysian universities. Based on the challenges identified and input from the stakeholders, an alternative assessment framework is proposed that will arguably serve the purpose better. Using both qualitative and quantitative assessments, the RCoE-A focuses on the quality of basic, applied and transdisciplinary research, and on broader input, process and output together with impact or use, rather than merely on quantitative measures of income or production volume. The criteria and indicators of the RCoE-A were developed through stakeholder participation and informed peer review, and were pre-tested on data collected over a five-year period from five RCoEs in a research university. Generally, the participants described the final criteria as adaptable, grounded in research assessment concepts, results-oriented, and encouraging of methodical evaluation of input, process, output and impact. Nevertheless, it must be borne in mind that assessing the research performance of multidisciplinary RCoEs requires evaluators to plan how to aggregate the performance measures of the various fields.

Arguably, the participatory process improved the credibility of the results and the general applicability of the RCoE-A (what is practicable to collect). The RCoE-A framework and its methodology can be replicated, and this paper has reflected on potential uses as well as ideas for further refinement. While the framework is specific to, and arguably suitable for, the Malaysian higher education and research context, it may also be relevant to research institutes in similar higher education systems, or in similar cultural and political contexts. Those interested may, however, need to apply improvement methodologies and test the usefulness and functionality of the framework across all research disciplines. What is particularly positive is that the framework can serve as a model for what can be accomplished with a stakeholder participatory method. In this regard, this study has addressed the question of how research performance should be measured by integrating the stakeholders' approach (bottom-up, grass-roots) with that of policy makers and management leaders (top-down, centralised) in the development and implementation of research assessment.

A variety of lessons were learnt from the PAR project. The crucial lessons relate to the stages of involvement in the implementation process: affected stakeholders need to be involved at every stage, from conception and development through implementation (testing) to evaluation. Flexibility is needed in the overall approach, and workshop activities need to be simple enough to keep participants engaged. Perhaps the most valuable lesson is that there are often conflicts between what stakeholders want and what management needs when determining assessment criteria. This calls for a moderating mechanism in the form of a credible senior researcher acting as a facilitator who can distil his or her experience into general observations and conclusions.

Finally, the RCoE-A is not intended to replace any existing evaluation system in Malaysia; rather, it serves as an alternative that the MoHE and universities can use for self-assessment and continuous monitoring of RCoE research performance. Our analysis suggests that more attention should be paid to the types of research that are favoured, and not favoured, by the national system, and that the purpose of an assessment is likely to differ depending on its use and the level at which the research is conducted. The RCoE-A discussed in this paper is a tested framework based on the input and comments of selected research institutes in one research university; hence, for the RCoE-A to be used as a nationwide assessment tool, further refinement is required by obtaining input from research institutes at other Malaysian universities. We expect that the new framework for evaluating research quality will be further improved by the practitioners who use it and through further scrutiny by scholars concerned with issues of research quality and use.