Introduction

Advancing therapeutic agents and medical devices from the laboratory to the bedside and patient care requires a well-trained clinical research workforce. The National Institutes of Health (NIH) recognized the importance of having a well-trained cadre of physician scientists and, some 20 years ago, began funding Clinical Research Curriculum Awards (K30) at over 50 academic health centers. Many of the K30-funded institutions were subsequently funded 10 years later with Clinical and Translational Science Awards (CTSAs). A dominant theme of the more than 60 funded CTSA hubs is the training of physician scientists to expedite the movement of advances in health care from 'bench to bedside'.

The efforts to train physician scientists to conduct clinical and translational research have had some success. Fordyce and colleagues found that 172,453 different investigators filed FDA 1572 forms to conduct clinical research between 1999 and 2015 [1]. However, just under half (49.6%) of the filers were what Fordyce and colleagues labeled "one-and-done", meaning that they conducted only a single study during the period, while only 65,231 (37.8%) were continuously involved in clinical research. The Tufts Center for the Study of Drug Development found in 2013 that the proportion of investigators from the USA and Canada had declined and that the rate of turnover among investigators had increased, especially among new investigators but also among those with more research experience [2]. The absence of a well-trained and experienced investigator workforce is a threat to the stability of the clinical research process and the validity of research findings. So too is the absence of a well-prepared workforce that functions in roles supporting the principal investigator in the execution of what is often a complicated research protocol.

The spider diagram in Fig. 1 depicts the clinical research process, beginning with knowledge of the biomedical, clinical and behavioral sciences; conceptualizing and operationalizing a research hypothesis; designing an observational or experimental study; and developing a research protocol that defines the population to be studied along with the intervention and outcome(s) of interest during a specific period of time (i.e., the Population, Intervention, Comparison, Outcome, Time (PICOT) model). Performing the study requires a different set of skills, including leadership that promotes teamwork as well as the skills necessary to manage budgets and deliverables. Collecting and making sense of the data requires still another set of skills and expertise, including bioinformatics, quantitative and qualitative data analysis, and hypothesis testing (e.g., biostatistics). Finally, a successful research project requires professionals with the ability to communicate the findings and implications of the research to both professional and lay audiences and, equally importantly, the ability to articulate the next research questions worthy of grant funding by government and/or industry. The spider diagram also shows that the skill level of members of the research team can range from what Benner described nearly 40 years ago as "from novice to expert" [3,4,5,6].

Figure 1. Benner's Novice to Expert Model for Clinical Research.

There are now over 350,000 trials being conducted in more than 200 countries and in all 50 states of the USA [7]. The increasing complexity of trials (for example, vaccine trials including challenge trials, smart trials, cluster trials, basket trials, pragmatic trials, adaptive trials, and platform and other multi-arm multi-stage designs with master protocols) requires increasing expertise among members of the research team. Many team members are trained at the doctoral (e.g., MD, OD, PhD) or Master's level and function as principal investigators (PIs) or Co-PIs, while others contribute expertise in a wide array of fields including genetics, molecular biology, combinatorial chemistry, pharmacoepidemiology, bioethics, bioinformatics and nursing. Still other team members are trained at the associate degree to Master's level and function in support roles, including research coordinators/monitors, patient recruiters, regulatory affairs specialists, database managers and research administrators who manage contracts and budgets, as well as science writers and communications directors.

PIs and human resource managers would benefit from a means of assessing the competence of individuals to perform their designated role on a research team. Mullikin and colleagues developed the Clinical Research Appraisal Inventory (CRAI) in 2007 to assess the competency of PIs to conduct clinical trials [8]. The CRAI has been modified several times and has become the 'gold standard' for assessing PI competence [9,10,11]. The CRAI measures self-perceived competence to perform the functions that define the role of PI.

These PI functions are not routinely or ordinarily performed by other members of the research team, who play what are essentially supporting or adjuvant roles in the research process. While much attention has been paid to the competence of the PI, much less attention has been paid to the importance of those working in the roles that support PIs and to how well prepared they are to carry out their respective roles. This suggested a need for a measure specifically designed to assess the competence of members of the research team who function in supporting roles. We developed two forms of such a measure to meet this need: the Competency Index for Clinical Research Professionals (CICRP-I) [12] and CICRP-II [13].

Methods

We used both forms of CICRP (I and II) to compare the performance of CRCs using existing data from 3 surveys of populations that differ in formal training, research settings and years of research experience. The first population was surveyed by the Joint Task Force on Clinical Trial Core Competencies (JTF) [14]. In 2015, the on-line JTF survey, approved by the Institutional Review Board of Eastern Michigan University, collected demographic data, educational attainment and years of experience in clinical research across a variety of research roles and settings, including private medical offices, private research firms, contract research organizations (CROs) and academic medical centers. The researchers used a snowball approach to disseminate the survey, resulting in 1584 respondents completing the competency assessments. Respondents worked in the USA, Canada, Latin America, Western Europe, Asia/Pacific, the Middle East and Africa. Some identified their role in the research process as a CRC, while others indicated that their role was in research administration or data management, or that they dealt with regulatory requirements or ethical and safety concerns in recruiting and enrolling research participants. To avoid language issues and possible differences in role definitions across countries, only respondents from the USA and Canada were selected for the sample population [12, 15].

The second population came from a survey conducted as part of the Development, Implementation and Assessment of Novel Training in Domain-Based Competencies (DIAMOND) project funded by NCATS in 2017. The DIAMOND investigators represented CTSA hubs at the University of Michigan, Ohio State University, Tufts University, and the University of Rochester. DIAMOND, in collaboration with the Consortium of Academic Programs in Clinical Research (CoAPCR) and with the approval of the Institutional Review Board of the University of Michigan, surveyed experienced clinical research professionals working at the 4 research-intensive CTSAs and their affiliated hospitals [13].

The third population consists of students and graduates of academic degree-granting programs at institutions that are members of the Consortium of Academic Programs in Clinical Research (CoAPCR). Study data were collected and managed using the REDCap electronic data capture tools hosted at the University of North Carolina Wilmington [16]. REDCap (Research Electronic Data Capture) is a secure, web-based software platform designed to support data capture for research studies, providing (1) an intuitive interface for validated data capture; (2) audit trails for tracking data manipulation and export procedures; (3) automated export procedures for seamless data downloads to common statistical packages; and (4) procedures for data integration and interoperability with external sources. A total of 256 students and graduates from 10 programs responded to the CoAPCR survey, which was reviewed by the Institutional Review Board of the University of North Carolina Wilmington. The average response rate across the 10 institutions was 26% (range 10 to 72%). Before entering a clinical research program, 172 (67%) of the respondents had a baccalaureate degree, 25 (10%) had a nursing degree and 11 (5.5%) had a doctoral degree (e.g., MD, PhD, PharmD) as their highest level of education. Moreover, 180 (70%) reported having ≥ 1 year of experience in the clinical research workforce, with 53 (20.7%) having earned ACRP and/or SoCRA certification. In the comparisons reported here, we limit respondents to graduates of the classes of 2017 and 2018 who have 0 to < 1 year of experience in the clinical research workforce and who can therefore reasonably be assumed, in Benner's terms, to be 'Novices' [16].

Measuring the Competency of Clinical Research Professionals: CICRP-I

CICRP-I was created from responses to the JTF survey, which inquired how competent individuals thought they were to perform 51 research functions. According to the JTF, the 51 research functions are core competencies that cluster into 8 research domains: Scientific Concepts and Research Design; Ethics and Participant Safety Considerations; Medicines Development and Regulation; Clinical Trial Operations; Study and Site Management; Data Management and Informatics; Leadership and Professionalism; and Communication and Teamwork.

CICRP-I was created by factor analyzing dichotomized responses about self-perceived competence to perform the core JTF competencies (i.e., Competent vs Not Competent) provided by 238 non-PI research professionals from the USA and Canada. The analysis identified 20 core competencies that formed 5 CICRP-I scales: (1) a General Index that measures self-perceived competence to perform general clinical research functions defined by 10 core competencies, and 4 sub-scales measuring more specific competence to perform research functions related to (2) Ethics and Participant Safety; (3) Drug and Device (Medicines) Development; (4) Data Management; and (5) Scientific Concepts. Each of the sub-scales involves 5 core competencies. Scores on each scale represent the number of competency items for which a respondent claims to be "Competent" and range from 0 to 10 on the 'General Index' and 0 to 5 on each of the 'Subscales'. Details of the factor analytic techniques and psychometric properties of CICRP-I have been published previously [12].
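To make this scoring concrete, the sketch below shows how CICRP-I scale scores could be computed from dichotomized responses. It is a minimal illustration, not the authors' code: the item identifiers and their grouping into scales are hypothetical placeholders, and the actual 20 core competencies and their scale membership are given in [12].

```python
# Illustrative sketch of CICRP-I scoring from dichotomized responses
# (1 = "Competent", 0 = "Not Competent"). Item IDs and scale groupings
# below are hypothetical placeholders; see [12] for the real items.
SCALES = {
    "General Index": [f"gen_{i}" for i in range(1, 11)],                 # 0-10
    "Ethics and Participant Safety": [f"eth_{i}" for i in range(1, 6)],  # 0-5
    "Medicines Development": [f"med_{i}" for i in range(1, 6)],          # 0-5
    "Data Management": [f"dat_{i}" for i in range(1, 6)],                # 0-5
    "Scientific Concepts": [f"sci_{i}" for i in range(1, 6)],            # 0-5
}

def score_cicrp_i(responses: dict[str, int]) -> dict[str, int]:
    """Count the items per scale endorsed as 'Competent' (coded 1)."""
    return {scale: sum(responses.get(item, 0) for item in items)
            for scale, items in SCALES.items()}

# Example: a respondent endorsing all General Index items and nothing else.
example = {item: 1 for item in SCALES["General Index"]}
print(score_cicrp_i(example))  # General Index: 10, each sub-scale: 0
```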

CICRP-I is a set of measures that assess the self-perceived competency of clinical research professionals with a range of expertise (i.e., from Novice to Expert) in the different roles and diverse settings that constitute the general clinical research enterprise. Important and statistically significant differences in scores on the CICRP-I General Index and the sub-scales were noted in the JTF USA & Canada population according to role in the research process, research setting and years of research experience [12].

Measuring the Competency of Experienced CRCs at Research-Intensive Sites: CICRP-II

The differences in CICRP-I scores by role, experience and setting observed in the JTF data prompted the DIAMOND investigators to re-calibrate CICRP with a specific focus on experienced CRCs working at the 4 CTSA hubs and their participating hospitals [13]. CRCs play a critical role in carrying out a research protocol, and the CRCs at these research-intensive sites would likely have the greatest experience coordinating the most complicated and the widest variety of research protocols. A factor analysis of responses from 95 experienced CRCs to the 20 CICRP-I core competencies produced 2 factors: a 'Routine Functions' factor defined by 10 core competencies routinely performed by CRCs (e.g., GCPs), and a second 'Advanced Functions' factor defined by 10 more specialized functions CRCs perform (e.g., regulatory and reporting requirements). CICRP-II was created from responses on an 11-point scale, with 0 indicating not competent and 10 indicating highly competent to perform the research function. Accordingly, in terms of Fig. 1, these CRCs are an 'elite' group ranging from "Competent" to "Expert", and based on their responses, CICRP-II can be thought of as the 'gold standard' for assessing the self-perceived competence of experienced CRCs in research-intensive settings. Further details of the DIAMOND survey, the analytic techniques and the excellent psychometric properties of CICRP-II are published and available [13].

Analysis

We first compare performance on CICRP-II by respondents in the three populations of interest: CoAPCR graduates with little to no experience (N = 17); CRCs working at the research-intensive CTSAs and their affiliated medical centers with 2 or more years of research experience (N = 61); and JTF CRCs with 2 or more years of experience, presumably in less research-intensive settings (N = 65). Years of experience can be thought of as a heuristic indicator of Benner's 'from Novice to Expert' continuum. For example, CRCs with 1 to 2 years of experience are "Beginners", those with 2–5 years of experience would be expected to be "Competent", those with 5–10 years of experience should be "Proficient", while those with ≥ 11 years of experience could be expected to be "Expert".
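As an illustration, this heuristic can be written as a simple mapping from years of experience to Benner stage. The function below is our sketch; the handling of boundary values (e.g., exactly 2 or 5 years, and the 10 to 11 year gap the text leaves open) is an assumption.

```python
def benner_stage(years: float) -> str:
    """Heuristic mapping of years of CRC experience to a Benner stage.

    Cut-points follow the text; exact boundary handling (and the
    10-11 year gap left open in the text) is our assumption.
    """
    if years < 1:
        return "Novice"      # 0 to < 1 year, e.g., recent graduates
    elif years < 2:
        return "Beginner"    # 1 to 2 years
    elif years < 5:
        return "Competent"   # 2 to 5 years
    elif years < 10:
        return "Proficient"  # 5 to 10 years
    else:
        return "Expert"      # >= 11 years in the text; >= 10 here
```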

The JTF data involved dichotomous responses to the competency items. As a result, all comparisons involving JTF CRCs required responses from the DIAMOND CRCs and the CoAPCR graduates to be similarly reduced to a dichotomy. We did this by coding responses of 0 to 5 as "Not Competent" and responses of 6 to 10 as "Competent" in both the DIAMOND and CoAPCR surveys. Comparisons involving only the DIAMOND CRCs and the CoAPCR graduates reflect mean scores attained with the 0 to 10 scoring option. Alternative cut-points can be used and may be more appropriate in certain circumstances, as determined by the CICRP-II user. Directions for multiple CICRP-I and II scoring options are available from the senior author. Scores in Figs. 2 and 3 indicate the mean number of "Competent" responses to the 10 items comprising the Routine Competency and Advanced Competency measures.
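A minimal sketch of this recoding and scoring follows. The item names are hypothetical placeholders for the 10 Routine and 10 Advanced CICRP-II competencies, and the cut-point is exposed as a parameter to reflect that alternative cut-points may be used.

```python
# Hypothetical placeholders for the 10 Routine and 10 Advanced items.
ROUTINE_ITEMS = [f"routine_{i}" for i in range(1, 11)]
ADVANCED_ITEMS = [f"advanced_{i}" for i in range(1, 11)]

def dichotomize(rating: int, cut: int = 6) -> int:
    """Recode a 0-10 self-rating: 1 = 'Competent' if rating >= cut, else 0."""
    return 1 if rating >= cut else 0

def cicrp_ii_counts(responses: dict[str, int], cut: int = 6) -> dict[str, int]:
    """Count 'Competent' responses (0-10 per measure) on CICRP-II."""
    return {
        "Routine": sum(dichotomize(responses.get(i, 0), cut) for i in ROUTINE_ITEMS),
        "Advanced": sum(dichotomize(responses.get(i, 0), cut) for i in ADVANCED_ITEMS),
    }
```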

Figure 2. Mean Scores for CICRP-II Routine Competencies. Note: CICRP Competency Index for Clinical Research Professionals; RC Routine Competencies; CoAPCR Consortium of Academic Programs in Clinical Research; CTSA Clinical and Translational Science Awards; JTF Joint Task Force; CRCs Clinical Research Coordinators; Yrs years; Exp. experience.

Figure 3. Mean Scores for CICRP-II Advanced Competencies. Note: CICRP Competency Index for Clinical Research Professionals; AC Advanced Competencies; CoAPCR Consortium of Academic Programs in Clinical Research; CTSA Clinical and Translational Science Awards; JTF Joint Task Force; CRCs Clinical Research Coordinators; Yrs years.

We do not present traditional tests of statistical significance because we believe they would not be helpful given the small sample sizes in each group and the large number of tests that would require adjustment for multiple comparisons. Instead, we present these comparisons as hypothesis-generating, in the hope that additional data will further the professional development of the clinical research workforce by documenting the relationships between formal education in clinical research and years of experience in various research roles and settings.

The self-perceived competence reported by inexperienced CoAPCR graduates (Novices) to perform the 10 routine core competencies of CICRP-II (Fig. 2) is equal to that reported by CTSA respondents regardless of their years of experience as a CRC. Further, while JTF CRCs report self-perceived competence that increases with experience, they consistently lag behind Novice CoAPCR graduates and behind CTSA respondents with as little as two years in the role of CRC (Beginners).

When it comes to competence to perform advanced research functions (Fig. 3), Novice CoAPCR graduates report much higher levels of self-perceived competence than CRCs regardless of research setting. These data also suggest that CTSA CRCs do not gain additional self-perceived competence after 2 years of experience, while for CRCs in less research-intensive settings additional years of experience are important.

Additional insight into the impact of formal education compared with experience in a research-intensive setting is illustrated in Fig. 4. Both the DIAMOND and CoAPCR surveys asked respondents to report self-perceived competence on the 20 core competencies using a 0 to 10 scale. This permits comparison of mean scores on the CICRP-I General Index (range 0 to 100) and the 4 sub-scales (range 0 to 50), allowing direct comparison of CoAPCR program graduates and CTSA CRCs.
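The sketch below illustrates this summed scoring under the same hypothetical item groupings used in the earlier CICRP-I sketch: summing 10 items rated 0 to 10 yields the 0 to 100 General Index, and summing the 5 items of each sub-scale yields the 0 to 50 sub-scale scores.

```python
# Hypothetical item groupings, as in the earlier CICRP-I sketch;
# the real item-to-scale assignments are given in [12].
SCALES = {
    "General Index": [f"gen_{i}" for i in range(1, 11)],                 # 0-100
    "Ethics and Participant Safety": [f"eth_{i}" for i in range(1, 6)],  # 0-50
    "Medicines Development": [f"med_{i}" for i in range(1, 6)],          # 0-50
    "Data Management": [f"dat_{i}" for i in range(1, 6)],                # 0-50
    "Scientific Concepts": [f"sci_{i}" for i in range(1, 6)],            # 0-50
}

def score_summed(responses: dict[str, int]) -> dict[str, int]:
    """Sum raw 0-10 self-ratings within each CICRP-I scale."""
    return {scale: sum(responses.get(item, 0) for item in items)
            for scale, items in SCALES.items()}
```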

Figure 4. Mean Competency Scores for CICRP-I. Note: CICRP Competency Index for Clinical Research Professionals; CTSA Clinical and Translational Science Awards; CRCs Clinical Research Coordinators; Med. Dev. Medicines Development; Mng. Management; SCI.CON Scientific Concepts; Yrs years.

Novice CoAPCR graduates score about 15 points higher on the CICRP-I General Index than "Beginner" CRCs at a CTSA. Even CTSA CRCs with 5 or more years of experience (i.e., "Proficient" and "Expert") do not appear to acquire the same degree of self-perceived competency as "Novice" graduates of CoAPCR degree-granting programs. This also appears to be the case for self-perceived competency to perform the more specialized functions, particularly the most esoteric functions related to reporting requirements and the regulatory issues governing Medicines Development, where "Novice" CoAPCR graduates again score 15 or more points higher than the experienced CTSA CRCs.

Conclusion

These data suggest that a formal academic degree-granting program equips a graduate with a greater degree of self-perceived competence to perform routine and especially advanced clinical research functions than is acquired by years of experience, even experience at a research-intensive site such as a CTSA. These data also suggest that both forms of the CICRP index can be of value in identifying differences between CRCs, according to their years and types of experience, in how competent they believe themselves to be to perform various research functions. This has potentially important implications for human resource directors and for PIs, who may use any of the CICRP measures to identify and recruit individuals to fill specific roles on a clinical research team executing a research protocol.

Human Resources departments may also find the CICRP measures useful for identifying staff needs for continuing education and training and for evaluating the quality and effectiveness of available education and training programs. Clinical research professionals may find their performance on the CICRP measures valuable in judging their readiness to sit for professional certification exams such as those offered by the Society of Clinical Research Associates (SoCRA), the Association of Clinical Research Professionals (ACRP) and the Regulatory Affairs Professionals Society (RAPS).

The ultimate value and validity of measures of self-perceived competency will depend upon their correlation with yet-to-be-developed objective measures of performance. Until then, the CRAI and CICRP are the best available tools to further advance the professionalization of the clinical research workforce.