Introduction

Research is an increasingly important part of undergraduate medical education. The Association of American Medical Colleges (AAMC) reported [1] that 84.4% of medical students graduating in 2023 participated in a research project with a faculty member during medical school, compared with 78.8% in 2018 [2]. Medical students’ production of research deliverables has risen alongside participation: 63.7% of 2023 graduates reported authorship of a peer-reviewed paper submitted for publication [1], compared with 50.5% in 2018 [2].

Multiple factors drive the increase in research opportunities during medical school. For one, the Liaison Committee on Medical Education (LCME) requires accredited medical schools in the USA and Canada to provide a foundation for scholarship and research. The LCME standard [3] that pertains to research calls for the creation of an environment conducive to scholarly inquiry, and the expectation is threaded through several other standards, such as those addressing faculty scholarly productivity, sufficiency of buildings and equipment, and inclusion of research principles in the curriculum.

Scholarly dissemination is also an important way for medical students to differentiate themselves as residency candidates in a match process where applicants often compete for a limited number of spots. For example, greater research productivity is associated with matching to higher-ranked orthopedic surgery residency programs [4], and research is emerging as a common topic of discussion during interviews [5]. As Step 1 of the United States Medical Licensing Examination (USMLE) has become pass/fail [6], eliminating a standardized metric historically used to distinguish applicants, students and faculty alike speculate [7] that research experiences, dedicated research time, and research productivity will become increasingly valued in the residency match process.

Given the escalating emphasis on research during undergraduate medical education, medical schools are undertaking a range of approaches to foster research opportunities for their students [8]. Although student perceptions of research programming have been studied [9,10,11,12], cohesive institutional wisdom surrounding the components and outcomes of medical student research training programs is lacking. This study aims to address this gap using information about in-house research programming collected from the medical schools ranked 1–50 in research by US News and World Report. Because the rankings are heavily influenced by research activity [13], these institutions have prominent levels of research productivity and are therefore likely to support medical students in doing research and contributing to institutional research goals. Commonalities in their programs and offerings may provide a helpful framework for medical student research training programs at any institution.

This study uses the insights obtained from the responding schools to generate a logic model conceptualizing the components that may be included in medical student research training programs. Logic models are a powerful tool for program planning, implementation, management, evaluation, and reporting. They consist of program inputs, defined as “resources dedicated to or consumed by the program”; outputs, defined as “the direct products of program activities”; and outcomes, defined as “benefits or changes for individuals or populations during or after participating in program activities” [14]. Though traditionally used to develop and evaluate human services programs [14], the logic model has also been applied to medical educator faculty development [15], high-fidelity simulation training [16], and clinical practice-based research [17].

Materials and Methods

Fifty-one medical schools were invited to participate in a survey emailed to each school’s Dean of Medical Education, to be completed by the dean or a designated representative (e.g., the director of a medical student research office or similar). Although the top 50 schools were the target sample, ties in the rankings resulted in 51 invitations. Two reminder emails were sent. Responses were collected between November 15, 2021, and February 28, 2022.

The survey was created to describe the function, structure, and outcomes of medical schools’ scholarly concentration programs from deans’ perspectives. We sought to capture a snapshot of current top-of-mind issues in the profession. In the absence of existing measures, two of the authors (MR, RK) created questions to capture program structure, staffing, curriculum, other scholarly offerings, and scholarly products. Unlike other published program descriptions, we sought to describe programs available to all medical students regardless of their specialty interests or enrollment in dual degree programs. The items were developed based on the authors’ experience and issues raised in current professional discussions. We consulted with survey research professionals within the institution and pretested the questions for comprehensibility. To minimize completion time, the original draft survey was scaled back to 14 questions.

The final survey, administered via SurveyMonkey, is available in the Supplemental Materials. Two questions (#2 and #6) assessed program inputs, including personnel and funding; six questions (#1, #3, #4, #5, #7, and #8) focused on outputs such as programming, curriculum, and research requirements; one question (#9) assessed program outcomes; and two questions (#11 and #12) requested free-text responses that could span inputs, outputs, and outcomes. The remaining questions assessed institutional characteristics. Eight questions had structured response options with opportunities to elaborate, and six were open-ended. Participants were instructed to answer about the programs, services, and offerings available to all of their medical students. This study was approved by the Icahn School of Medicine at Mount Sinai Institutional Review Board (protocol #21-01243).

Open-ended survey responses were coded for common themes by three of the authors (NF, TS, MR). Discrepancies were discussed, and only themes reaching consensus were included; items that were vague or lacked agreement were excluded from the manuscript. Quantitative (multiple-choice) survey responses were described as proportions. Because the sample comprised a specific subset of medical schools, we did not perform inferential statistical comparisons; instead, the results are presented as themes and best practices.
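As an illustration of this descriptive approach, the minimal Python sketch below tabulates multiple-choice responses as counts and proportions of responding schools. It is not the authors’ analysis code; the item names are hypothetical placeholders, and the counts are chosen only to mirror two of the results reported below.

from collections import Counter

# Hypothetical multiple-choice items, one answer per responding school (N = 37);
# item names are placeholders, and counts merely mirror results reported below.
responses = {
    "formal_research_office": ["yes"] * 36 + ["no"] * 1,
    "summer_stipend_offered": ["yes"] * 27 + ["no"] * 10,
}

for item, answers in responses.items():
    counts = Counter(answers)
    total = len(answers)
    for option, n in sorted(counts.items()):
        # Report each option as n (percent of responding schools)
        print(f"{item}: {option} n = {n} ({100 * n / total:.1f}%)")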

Development of the logic model included analysis of both quantitative and qualitative survey data. The authors used themes identified in the data, along with their own educational expertise, to agree on elements appropriate for inclusion in the model. These included the inputs, outputs, and outcomes reported by responding institutions; in addition, the data revealed areas of opportunity (i.e., components not commonly reported by programs but likely to be of benefit), which were also incorporated.

Results

Thirty-seven institutions (72.5%) responded to the survey. They varied by type (public vs. private), geographic location, enrollment, and requirement of research for graduation (Table 1).

Table 1 Characteristics of participating medical schools (N = 37)

Program inputs comprised many types of research support available to all medical students, including institutional structures and funding (Table 2). An office or program that formally supports medical student research was nearly universal (n = 36, 97.3%), with staffing most commonly including at least one director (n = 36, 97.3%) and at least one support staff member (n = 32, 86.5%). Faculty involvement was also key, as most schools (n = 35, 94.6%) allowed students to choose their own scholarly mentors rather than selecting from a prespecified list. Notably, however, funded support was uncommon for faculty members serving as research mentors (n = 1, 2.7%), advisors (n = 7, 18.9%), or teaching faculty (n = 11, 29.7%). The most common student funding opportunities were stipends for summer research (n = 27, 73.0%) and conference presentation subsidies (n = 26, 70.3%), with fewer institutions funding a research year (n = 19, 51.4%), research training courses (n = 13, 35.1%), scholarly software (n = 10, 27.0%), publication fees (n = 5, 13.5%), or research-specific costs (n = 3, 8.1%). Several respondents reported planning to expand student funding opportunities in upcoming years (Table 3).

Table 2 Types of institutional support offered by participating medical schools (N = 37)
Table 3 Themes of planned changes to the medical schools’ research programs (N = 25)

Program outputs included curricula, specific programs and services, and a variety of topics on which students could focus their research (Table 4). Most institutions offered some type of formal research curriculum, including epidemiology (n = 31, 83.8%), statistics (n = 23, 62.2%), and research ethics (n = 29, 78.4%); curricular changes, including implementation of research-related curricula, were commonly identified as upcoming changes to institutions’ offerings (Table 3). Specific programs or services to support medical student research included research or scholarly elective opportunities and a school-sponsored venue for presenting medical student research at most institutions (n = 34, 91.9%); other common offerings included a summer research program (n = 31, 83.8%), a dual degree program (n = 30, 81.1%), graduation awards recognizing outstanding scholarship (n = 28, 75.7%), and a full-time research or scholarly year (n = 27, 73.0%). Institutions typically offered multiple scholarly concentration areas in which students could conduct research, with 18 (48.6%) offering 6–10 unique areas and 8 (21.6%) offering 11 or more. The topic areas offered across the 37 responding institutions varied widely (Table 4), encompassing translational science, population health, the humanities, and more. Themes of expanded programming were common among respondents’ planned changes, such as creating additional degree programs, increasing support for research years, and diversifying the concentration areas offered (Table 3).

Table 4 Research programming, offerings, and outcomes tracked by participating medical schools (N = 37)

Participating schools tracked outcomes of their programming in several ways. While specific deliverables such as publications, presentations or posters, and student participation were the most commonly tracked, many institutions also followed qualitative measures such as student evaluations of their experiences, continued engagement in research, and creation of less traditional scholarly products such as patents or non-scholarly publications (Table 4). Reconsideration of the metrics used to evaluate student research was also a common theme among the planned changes respondents reported (Table 3).

Data are summarized in the logic model (Fig. 1) to demonstrate the resources available to programs (inputs), the components of the programs including activities and participants (outputs), and outcomes that were monitored (short term, medium term, and long term). The logic model includes the common components emphasized here, less common components specified by respondents, and components that would be beneficial to include based on areas of opportunity revealed by the data.

Fig. 1 Logic model describing research training programs by medical schools
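As a compact textual rendering of the model’s structure, the sketch below groups a few of the components reported above into the three logic-model categories, in the same illustrative Python used earlier. It is a simplified abbreviation under the authors’ input/output/outcome framing, not a reproduction of Fig. 1.

# Selected components from the Results, grouped into logic-model categories;
# an illustrative abbreviation of the model, not a reproduction of Fig. 1.
logic_model = {
    "inputs": [
        "office or program supporting student research",
        "director(s) and support staff",
        "faculty mentors chosen by students",
        "student funding (summer stipends, conference subsidies)",
    ],
    "outputs": [
        "research curricula (epidemiology, statistics, ethics)",
        "summer research programs and dedicated research years",
        "school-sponsored venues for presenting student research",
    ],
    "outcomes": [
        "publications, presentations, and posters",
        "student participation and evaluations of the experience",
        "continued engagement in research",
    ],
}

for category, components in logic_model.items():
    print(f"{category.upper()}:")
    for component in components:
        print(f"  - {component}")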

Discussion

The medical schools ranked in the top 50 for research by US News and World Report in 2021 offer robust support and opportunities for medical student research, in part because scholarship is a foundational value for these research-focused institutions. Commonalities across their offerings and planned changes provide insights into how these schools promote medical student research. The logic model offers a framework for the key inputs, outputs, and outcomes of a medical student research training program.

Most participating schools provided institutional support for medical student research. The existence of a specific office or program to support medical student research, complete with dedicated staffing, was nearly universal, suggesting that an institutional structure was important. Most students were allowed to choose their own research mentors from their institutions’ faculty, who therefore must be willing and able to mentor students. Frequent funding opportunities empowered students to spend summers or academic years focusing on research and to present or publish their work. Thus, key inputs for a medical student research training program included institutional structures such as administration, faculty and staff, and funding. Prior work [10] has identified that adequate time, quality supervision, and institutional support lead to development of students’ research skills and research output, underscoring the importance of such inputs.

From these inputs, schools can create research programming. Most participating schools offered formal curricula covering research-related topics, which provide the foundations students need to engage in scholarly work and which faculty have identified [18] as essential for successful research projects. Most respondents also offered specific programming such as elective opportunities and presentation forums, allowing for completion and dissemination of student work. Students had opportunities to engage in research during summers, as components of dual degrees, and during a scholarly year taken away from the medical school curriculum. At the overwhelming majority of institutions, students had the autonomy to choose a research topic from a wide array of concentration areas, which can help them see the experience as relevant and valuable [10], stimulating interest in future research. These key elements of programming constitute important outputs of a research program.

Programs must also evaluate their outcomes. Some respondents tracked immediate student outcomes such as the number of students participating in research. Most commonly, schools tracked traditional measures of scholarly productivity, such as publications, presentations, and posters. However, many also saw value in less traditional outcomes: not all scholarly pursuits yield publications, but they can instead lead to equally valuable quality improvement, health policy, community service, or educational projects. In acknowledgement of the increasing diversity of projects undertaken, several institutions reported planned changes to their evaluation criteria for medical student research. Institutions were also interested in less tangible outcomes, such as reports of student and mentor experiences and continued engagement in research later in students’ careers. Prior work [19] has identified similar diversity in measured research program outcomes and acknowledged that it is difficult to identify a “gold standard” set of outcomes given “the range of program goals and characteristics.” In training medical students to conduct, engage in, or evaluate research, programs should carefully consider their intended outcomes, as success can only be measured once a program has defined what success means.

The data additionally highlight areas of opportunity and concern for schools to consider in designing and assessing their programs. For example, students’ ability to choose their own research mentors allows them to explore areas of interest and to learn more about specialties they might pursue. However, it requires mentors to have the time, funding, and resources to provide guidance. This expectation may compete with pressures on faculty to see patients, perform procedures, or produce grant funding and publications, activities they may perceive as more valuable for revenue generation or academic promotion. Given that only one of the 37 responding institutions provided funding for research mentors, programs may consider increasing support and incentives for faculty involved in medical student research training. For example, they might strengthen inputs, such as mentor funding, or design program outputs that train or reward faculty for their mentoring efforts.

The desired long-term outcomes of a research training program also deserve close consideration. For example, if one long-term goal is to produce physicians who contribute high-quality evidence to the medical literature, this may be at odds with short-term outcomes such as the number of publications. Under increasing pressure to publish, students may seek faster-to-publish projects such as review articles [20] and may even be driven to misrepresent authorship or publication status [21, 22], feeling obligated to pad research output numbers rather than contribute to high-quality, clinically meaningful research. This may also contribute to the research reproducibility crisis described by the NIH in 2014 [23]; the proliferation of non-replicable data and publications of questionable value makes it significantly more challenging to keep up to date with scientific and medical advances. Programs may wish to address such concerns by creating activities such as research training workshops, online training modules, and curricular additions designed to help young researchers conduct stronger science, as suggested by the AAMC [24]. Programs may also reconsider their medium- and long-term outcomes, for example by measuring the quality of projects or their effects on students’ future careers rather than simply counting research products.

Another valuable long-term outcome is residency matching, though medical student research training programs should carefully evaluate any unintended consequences of this goal. For example, in one survey [25], 66% of students engaging in neurosurgery research reported anxiety about having enough research output, and many felt they had insufficient time to work on research, suggesting that research expectations add stress, obligations, and time burden to the already rigorous experience of medical school. The density of the medical school curriculum makes it difficult for students to immerse themselves in rigorous research without adding extra time, such as a dedicated research year, to an already long and expensive training path. This raises concerns about equity: because students who take a research gap year are more likely to match into competitive specialties such as plastic surgery and orthopedic surgery [26, 27], first-generation or disadvantaged students who cannot afford a year off to increase their research output may be unfairly disadvantaged in the residency match. Additional financial burdens that may inequitably affect certain students include the costs of conference presentations and publications [28, 29]. In our survey, not all schools reported offering funding for conferences and publications, and more schools offered the opportunity to take a research year than offered funding for one; both are areas of opportunity that deserve attention. In designing their programs, schools should weigh equity when determining program inputs such as funding, which contribute to key program outcomes such as residency matching.

This study was limited by its 72.5% response rate. In addition, collecting responses from a single school leader at each institution may not have captured the full range of research offerings. Additionally, because the survey targeted the schools ranked 1–50 by US News and World Report, the data may not generalize to the research landscape across all medical schools. Finally, this study does not include data on outcome indicators (e.g., number of publications, quality of research) for responding programs, so the relative impact of program components on outcomes could not be assessed.

Conclusion

As research and scholarship become increasingly important components of medical student education and of competitiveness for residency programs, it may be helpful to know what some schools offer so that others can proactively design or evaluate their own programming. In this study, we found that most medical schools ranked highly for research provided institutional support. Key inputs included institutional structures such as administration, faculty and staff, and funding; key outputs were programming, research curricula, advising, mentoring, and faculty recognition; and key immediate outcomes included the number of students participating in research, with longer-term outcomes including measures of scholarly productivity such as publications, presentations, and posters.

Although these tangible research products certainly have relevance, schools also offered other scholarly concentrations that may not lead to such traditional outcomes. Measures of productivity should go beyond counting publications to examine the quality of research, the impact of less traditional scholarly work such as quality improvement projects, and effects on students’ careers; programs should also consider ways to reward faculty for mentoring efforts. Further work should assess the outcomes, costs, and benefits of medical student research training programs and the relative contributions of various inputs and outputs to their success.