Introduction

Examinations to detect skin cancer are cited as among the most cost-effective and lowest-risk cancer screening interventions in modern medicine [1]. Among skin cancers, keratinocytic carcinomas (basal cell carcinoma and squamous cell carcinoma) are more common than melanoma, but melanoma carries a higher mortality rate. In the USA, an estimated 106,110 patients will be diagnosed with melanoma and 7180 patients will die from melanoma in 2021 [2], and by 2040 cutaneous melanoma is projected to be the second most common cancer [3]. Beyond the risks of morbidity and mortality, skin cancer represents a significant health care expenditure: in 2011, the US population spent approximately $8.1 billion on skin cancer–related procedures, of which $4.8 billion was attributed to keratinocytic carcinomas and $3.3 billion to melanoma [4]. Early melanoma detection, or secondary prevention, supports diagnosis at stages where straightforward therapeutic options are potentially curative. Melanoma diagnosed at advanced stages carries a significantly greater risk of therapeutic morbidity, financial toxicity [5, 6], and potentially irreversible adverse events from immune checkpoint inhibitors [7].

Early detection of skin cancers is an integral part of dermatologists’ practice, yet many regions of the USA have no dermatologists [8]. In these regions, primary care providers (PCPs) may provide essential skin cancer detection services, but significant training barriers impede early diagnosis. Eight to 25% of primary care encounters address a patient’s skin concern [9, 10], and the Accreditation Council for Graduate Medical Education (ACGME) recognizes the diagnosis of skin cancer as a core competency for family medicine [11]. Performance of skin cancer examinations remains variable: in 2010, 80% of respondents to the National Health Interview Survey (NHIS) reported never having received a total body skin examination [12]. While some PCPs regularly perform skin cancer diagnostic examinations and procedures, many cite the need for additional training as a key barrier to incorporating skin examinations into clinical care [13]. Telementoring, specifically Project ECHO (Extension for Community Healthcare Outcomes), supports PCP incorporation of dermoscopy into practice [14].

There is no consensus statement defining the specific skin cancer–related competencies appropriate for physician, physician assistant/associate, or nurse practitioner PCPs, and there is no standardized skin cancer educational curriculum. In the absence of both, most PCP graduate medical training programs offer no formal skin cancer education [15], representing a key cancer control training gap.

The current spectrum of educational curricula seeking to support skin cancer diagnosis by PCPs varies widely. One of the most rigorous, the INternet curriculum FOR Melanoma Early Detection (INFORMED), recently became unavailable when Adobe discontinued Flash Player support on December 31, 2020 [16]. Yet even this rigorously developed curriculum illustrates the need for a new educational approach: PCPs who completed the INFORMED curriculum demonstrated improved knowledge and confidence [17], but in post-completion focus groups most reported no intent to change practice, citing two specific barriers: (1) the need for more detailed instruction, and (2) the need for assistance with challenging cases encountered during patient care, that is, telementoring [18].

To overcome these practice change barriers, we designed a multi-faceted pilot intervention that includes a curriculum and telementoring [19] to support PCP performance of skin cancer examinations. We conducted a single-site pilot to test program feasibility.

Methods

The pilot intervention was conducted in collaboration with the Texas Tech University Health Sciences Center El Paso (TTUHSC-El Paso) Department of Family and Community Medicine and was approved by The University of Texas MD Anderson Cancer Center Institutional Review Board. Family medicine resident physicians reviewed a statement of consent and agreed to study participation. Participation in the intervention was voluntary, and the intervention was included as part of the scheduled graduate medical education didactics.

Our intervention provides training to two levels of proficiency (Table 1). Level 1, for providers and patients with access to dermatology consultation, trains providers to “triage and refer,” with recognition of melanoma risk factors and performance of an examination to identify clinical outliers and distinguish melanoma from benign pigmented lesions. Level 2, for providers and patients without access to dermatology consultation, trains providers to “diagnose and manage,” with the addition of algorithm-based dermoscopic image analysis, performing diagnostic biopsies, interpreting pathology results, and developing appropriate plans of care. The family medicine resident physicians at TTUHSC-El Paso were given the option to participate in one, both, or neither of the Level 1 and Level 2 interventions.

Table 1 Overview of Level 1 and Level 2 curriculum timeline, lecture topics, test delivery, and participant completion (n) per test. Tests included self-efficacy and knowledge components. Level 1 trains providers to “triage and refer” with an emphasis on clinical examination and triage. Level 2 trains providers to “diagnose and manage” with the addition of algorithm-based dermoscopic assessment of skin lesions, performing diagnostic biopsy, and developing appropriate plans of care. TADA, Triage Amalgamated Diagnostic Algorithm

The Level 1 curriculum includes didactic content from the Visual Perception Learning [20] and INFORMED [17] evidence-based interventions. The Visual Perception Learning intervention includes four educational modules addressing how to differentiate specific subtypes of melanoma from similar-appearing benign skin growths based on the clinical examination alone [20]. INFORMED is an approximately 3-h online evidence-based curriculum addressing the clinical diagnosis of the three most common skin cancers (melanoma, basal cell carcinoma, and squamous cell carcinoma) as well as benign lesions; it also addresses performance of the clinical examination and identification of melanoma risk factors. Sections representing similar content between the two evidence-based interventions were grouped and presented in the live (synchronous) Level 1 curriculum lecture.

The Level 2 curriculum provided instruction on algorithm-based dermoscopic image analysis through the Triage Amalgamated Diagnostic Algorithm (TADA) [21]. TADA uses a two-step process to interpret dermoscopic images and differentiate between common benign and malignant skin neoplasms. In the first step, providers determine whether the lesion in question matches the prototypical pattern of one of three common benign skin growths: angioma, dermatofibroma, or seborrheic keratosis. If the growth does not match one of those three benign diagnoses, it is then evaluated for architectural disorder and for six additional diagnostic criteria: starburst pattern, blue-black or gray color, shiny white structures, negative network, ulcer/erosion, and vessels. If the lesion exhibits any of these criteria, it should be biopsied or referred for further evaluation and management [21]. The Level 2 curriculum was divided into five 30-min live lectures delivered via Zoom, each followed by 15–30 min of case-based discussion using the Project ECHO telementoring framework [19].
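
The two-step decision flow can be summarized in pseudocode. The sketch below is a minimal illustration of the TADA triage logic as described above, not the authors’ implementation; the feature names and the tada_triage helper are hypothetical, and determining each feature remains a clinical judgment made during the dermoscopic examination.

```python
from typing import Optional, Set

# Minimal sketch of the two-step TADA decision logic described above.
# Feature names and this helper are illustrative; each input reflects
# a clinical judgment made during the dermoscopic examination.

BENIGN_PROTOTYPES = {"angioma", "dermatofibroma", "seborrheic keratosis"}

STEP_2_CRITERIA = {
    "architectural disorder",
    "starburst pattern",
    "blue-black or gray color",
    "shiny white structures",
    "negative network",
    "ulcer/erosion",
    "vessels",
}

def tada_triage(prototype_match: Optional[str], findings: Set[str]) -> str:
    """Return a triage decision for a dermoscopically examined lesion."""
    # Step 1: a confident match to one of the three benign prototypes
    # ends the evaluation.
    if prototype_match in BENIGN_PROTOTYPES:
        return "benign prototype: no biopsy indicated"
    # Step 2: architectural disorder or any additional criterion
    # triggers biopsy or referral.
    if findings & STEP_2_CRITERIA:
        return "biopsy or refer"
    return "no TADA criteria met: routine follow-up"

# Example: a lesion matching no benign prototype that shows a negative network.
print(tada_triage(None, {"negative network"}))  # -> biopsy or refer
```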

Pilot tests were developed to assess gains in knowledge (including interpretation of clinical images) and self-efficacy (confidence). The tests were developed through a collaborative interdisciplinary process and reflected the key learning constructs of the corresponding curricula. The pilot Level 1 test included 9 multiple-choice knowledge items and 12 self-efficacy items (rated on a 10-point Likert scale), delivered via Qualtrics [22]. The Level 2 pilot test included 32 knowledge and 23 self-efficacy items. Testing was integrated into the curriculum timeline. Level 1 tests were delivered at three time points: pre-intervention and immediate post-intervention (both month 0), and 7 months after the Level 1 lecture (Table 1). The 7-month post-lecture time point was selected as a matter of convenience to align with the end of the academic year. Level 2 tests were delivered at three time points: pre-intervention (month 1), intervention mid-point (month 3), and post-intervention (month 7). While the same cohort of family medicine resident physicians had the opportunity to participate in the intervention, participation was voluntary, and not all resident physicians completed all intervention curriculum lectures or tests (Table 1).
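
The manuscript does not specify the statistical methods, but pre/post score gains of this kind are commonly assessed with a paired comparison. The sketch below assumes a paired t-test on invented scores and is purely illustrative; it is not the analysis performed in this study.

```python
from scipy import stats

# Hypothetical pre/post knowledge scores for the same participants
# (paired by position); values are invented for illustration only.
pre_scores  = [4, 5, 3, 6, 4, 5, 7, 4]
post_scores = [7, 8, 6, 8, 7, 7, 9, 6]

# Paired t-test: tests whether the mean within-participant change is zero.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```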

Results

Physicians completing the Level 1 tests demonstrated statistically significant gains in average knowledge (p < 0.001, Fig. 1a) and self-efficacy (p < 0.001, Fig. 1b/c) from pre- to immediate post-educational intervention. While gains in self-efficacy remained statistically significant at the month 7 (m07) time point (p < 0.001), the change in knowledge at m07 was not statistically significant (p = 0.195).

Fig. 1

Level 1 knowledge and self-efficacy testing performance. Participants demonstrated significant gains in average knowledge from pre- to post-intervention (p < 0.001, a); however, knowledge at the m07 time point did not differ significantly from pre-intervention. Level 1 self-efficacy by time point is presented in histogram (b) and distribution box plot (c) views. Participants demonstrated significant gains in self-efficacy from pre- to post-educational intervention (p < 0.001), as well as from pre-intervention to 7 months (m07) after the educational intervention (p < 0.001)

Physicians completing the Level 2 tests demonstrated statistically significant gains in knowledge (p = 0.035, Fig. 2a) and self-efficacy (p < 0.001, Fig. 2b/c) from pre- to post-intervention time points, with significant gains in knowledge (p = 0.014) and self-efficacy (p < 0.001) already evident at the mid-point assessment.

Fig. 2

Level 2 knowledge and self-efficacy testing performance. Participants demonstrated statistically significant gains in knowledge from pre- to mid-intervention time points (p = 0.035), as well as from pre- to post-educational intervention (p = 0.014, a). Level 2 self-efficacy by time point is presented in histogram (b) and distribution box plot (c) views. Participants demonstrated statistically significant gains in self-efficacy from pre- to mid-intervention (p < 0.001), as well as from pre- to post-educational intervention (p < 0.001)

Discussion

Evaluation of existing interventions to support PCP performance of skin cancer examinations reveals three key gaps: (1) training PCPs only to “triage and refer” may generate diagnostic delays in regions without dermatologists; (2) educational intervention studies should ideally include intent to change practice as an outcome, as without practice change patients derive no benefit from the educational intervention; and (3) without standardized and validated knowledge and self-efficacy instruments, it is difficult to compare educational interventions and determine which works best. We considered these gaps and opportunities when designing our pilot intervention and tests.

The resident physician participants demonstrated measurable gains in knowledge and self-efficacy for both the Level 1 and Level 2 interventions. Of note, the m07 assessment of Level 1 performance demonstrated persistent gains in self-efficacy, while knowledge returned to pre-intervention levels. There are two potential explanations for this finding. First, it may reflect exposure to the Level 2 intervention, which emphasizes dermoscopic image interpretation of benign and malignant skin growths; this exposure may have heightened recognition of subtle skin cancer presentations and shifted the threshold for categorizing tumors as malignant. Second, it may reflect general attrition of learning with time since the educational intervention.

The strengths of our pilot include the multidisciplinary team development of learning constructs and tests, utilization of existing evidence-based interventions, and the creation of proficiency levels to help match the level of education with participants’ needs. Pilot study challenges include limited data analysis given the single-site educational cohort, the sequential delivery of the intervention content (which may have influenced the Level 1 m07 follow-up data), and the need to adapt the intervention from the graduate medical education environment to real-world PCP practice. Given the resident physician educational cohort, intent to change practice and actual practice change were not included in our pilot test outcomes, as resident physician practice is integrated within the faculty supervision model. In addition, the pilot tests require further analysis, development, and validation.

Future research includes developing a consensus statement regarding key learning constructs and appropriate evaluative instrument items for educational interventions supporting PCP performance of skin cancer examinations. We anticipate adaptations to the intervention, tests, and implementation methods based on the outcomes of the consensus process and the transition of the intervention from resident physician to practicing PCP target audiences. Outcomes reflecting an intent to change practice, practice change, and patient-based outcomes will need to be developed and validated in future studies. Finally, to support intervention scalability, we anticipate translation of the educational intervention from live lectures to interactive, web-based asynchronous modules complemented by geographically appropriate telementoring to address patient care challenges that may arise in practice.

Presentations

None.