INTRODUCTION

Virtual urgent care, also referred to as tele-urgent care and direct-to-consumer telehealth, has expanded rapidly in recent years.1,2,3,4 Virtual urgent care visits offer a convenient and timely care option for many patients and hold potential to divert an estimated 20% of emergency department (ED) visits.5,6,7 However, studies evaluating the impact of virtual urgent care programs on quality of care have been mixed to date. One study found that virtual urgent care likely improves access to care for some patient populations but could lead to increased utilization and healthcare expenditures.8 Another study found similar guideline-concordant antibiotic management between brick-and-mortar urgent care clinic visits and virtual urgent care visits, but a higher frequency of follow-up visits after virtual urgent care appointments.9 Prior work evaluating virtual urgent care programs found significant variation in quality among provider groups caring for acute illnesses.10 Despite this described variation, there is a lack of consensus on how to measure quality when evaluating these programs.11 Likewise, there is limited reporting of quality metrics in virtual urgent care by regulatory organizations and professional medical societies.12

Standardizing key quality measures for virtual urgent care is a critical step toward advancing patient outcomes and care model operations. Metrics can help benchmark high- and low-performing programs, inform effective resource utilization, and identify opportunities for quality improvement. However, existing quality measures relevant to virtual urgent care are limited. Although a natural starting point for identifying optimal quality metrics for the virtual urgent care setting might be the measures used in brick-and-mortar urgent care, that setting also lacks well-developed metrics from which to take direction.13 Moreover, the relevance of quality measures proposed by the Centers for Medicare and Medicaid Services (CMS) is limited for virtual urgent care encounters.14

To guide virtual urgent care programs and other applications of telehealth in quality metric development, the National Quality Forum (NQF) has developed quality frameworks that promote measuring impact in five key domains: access to care, financial impact/cost, experience, effectiveness, and equity.15,16 A prior evaluation of telehealth quality metric implementation revealed variation across programs in the domains in which metrics were developed.17 However, to our knowledge no studies have described the use of quality metrics across virtual urgent care programs. We sought to characterize the use of quality measures related to effectiveness and equity by virtual urgent care programs. We selected these domains given the wide variation in effectiveness measurement subdomains described in our prior work as well as the recent renewed interest in digital health equity.17,18 Additionally, focusing on two domains allowed adequate interview time to capture all relevant metrics. By describing the implementation of quality metrics within these domains, attention and resources may be dedicated more urgently to areas of quality that are “under-measured.” As a secondary aim, we sought to understand programs’ motivations for and barriers to quality measure development to better inform possible future strategies for implementing and benchmarking quality metrics for virtual urgent care.

METHODS

Study Design and Setting

This was a qualitative study of eight virtual urgent care programs. We defined virtual urgent care as care provided by a remote clinician to a patient to address acute, unscheduled care needs. Virtual urgent care offerings may provide one or more options for patients to connect with a clinician, including telephone, video conferencing, and asynchronous chat.19 To triage patients appropriately, most virtual urgent care offerings ask patients to select a chief complaint from a predefined list before scheduling a visit. Patients may then be asked to queue in a virtual waiting room for the next available provider or be offered an appointment time in the near future (i.e., within the next 1–4 hours). Increasingly, virtual urgent care programs are seeking to improve clinical efficiency by initiating visits with asynchronous text messages that help triage patient complaints.20 We conducted semi-structured interviews with telehealth medical directors and health system leaders of virtual urgent care programs from July to October 2022 (Table 1).

Table 1 Characteristics of Participants and Virtual Urgent Care Programs

The study team interviewers (DW, KL, EH, TJ, KZ) were five practicing emergency physicians with training in qualitative research and expertise in telehealth research. Three had experience providing emergency medicine telehealth, including one emergency medicine telehealth medical director. The study was reviewed and considered exempt by the Mass General Brigham Institutional Review Board.

Participant Selection

We used a convenience sample of virtual urgent care program representatives identified through a review of the virtual urgent care literature and through relationships with members of the study team. A research librarian helped the team develop a PubMed query to identify published literature on virtual urgent care programs. Articles and their references were reviewed to build a target list of programs. The study team then identified contact information from the published articles and online queries. Email invitations to participate were sent to potential participants, with follow-up invitations sent a maximum of two times. We ultimately interviewed eight representatives from eight unique virtual urgent care programs and did not pursue additional interviews because thematic saturation was achieved.

Interview Guide Development and Interviews

All team members participated in the development of the interview guide. We structured the questions to mirror the effectiveness and equity quality domains of the NQF Framework.15,16 The interview guide was then pilot tested with two former virtual urgent care leaders at two different institutions and revised to incorporate their feedback.

Each interview started with introductions of the study team and participant, followed by verbal study consent. We then queried participants about relevant quality metrics captured by their program. Motivations for and barriers to quality measurement were also explored. Interviews lasted 45 to 60 minutes and were conducted on Microsoft Teams by at least two members of the study team. Interview transcripts were created using the Microsoft Teams transcription function, and field notes were collected during the interviews. Transcripts were not returned to participants for comment, and participants did not provide feedback on the findings. Repeat interviews were not conducted.

Analysis

Quality measures reported by participating programs were identified by coding interview transcripts and interview notes. Measures were then grouped according to the NQF’s effectiveness domain and its subdomains (system, clinical, operational, and technical effectiveness) as well as the equity domain. We used a grounded theory approach for thematic analysis. Four members of the study team (DCW, KL, KSZ, TJ) reviewed interview transcripts, independently developed a codebook with one level of themes, and categorized transcript comments related to motivations for and barriers to quality measurement. Coding discrepancies were adjudicated as needed by additional team members.

RESULTS

We emailed study invitations to 13 representatives at 13 unique virtual urgent care programs, received responses from 11 representatives, and interviewed eight virtual urgent care leaders from eight unique programs (Table 1). Of the three representatives who responded but were not interviewed, one indicated that their institution no longer provided a virtual urgent care offering and two could not be scheduled for an interview. The programs represented were predominantly academic health systems (6/8) and situated in urban areas (5/8). All programs offered video visits with a clinician (8/8), and a majority offered audio-only visits (5/8). Most programs (6/8) delivered more than 1,000 total virtual urgent care visits per month.

Most of the virtual urgent care programs interviewed (7/8) reported measuring at least one quality metric within the NQF’s effectiveness domain (Table 2). Programs were most likely to measure a quality metric related to clinical effectiveness (7/8) (e.g., physician-level antibiotic prescribing rates for sinusitis or a repeat virtual urgent care visit within 7 days for the same chief complaint) as well as operational effectiveness (7/8) (e.g., Left Without Being Seen rate). Quality metrics related to technical effectiveness (e.g., video failure rate) were measured by half of the programs interviewed. Only one program reported measuring a quality metric related to equity of care, examining virtual urgent care use by zip code.

Table 2 Virtual Urgent Care Program Effectiveness and Equity Quality Measurement

Most study participants (6/8) reported a desire to ensure quality of care as a motivation for quality measurement (Table 3). For example, participant 3, from a large, private telehealth company offering virtual urgent care services, described an interest “to ensure providers are performing high quality care.” Additionally, a medical director of virtual health at an urban, academic health system mentioned “a desire to prove that we’re delivering the same quality of care via telemedicine that we are in person.” Demonstrating the value of the virtual urgent care offering was the second most cited motivation (4/8) for measuring quality. Most participants cited demonstrating value to internal stakeholders (e.g., the health system chief financial officer), with one participant noting that anticipated insurance reimbursement requirements prompted quality metric development.

Table 3 Motivations for and Barriers to Virtual Urgent Care Quality Measurement

Limited resources were the most commonly reported (6/8) barrier to quality measurement. Specifically, participants cited analytic resources for the development, implementation, and upkeep of quality metrics, in addition to limited bandwidth among existing virtual urgent care administrative staff. Participant 1, a medical director of an academic virtual urgent care program in the west, highlighted that “dedicated analytic resources to support virtual health initiatives is very difficult.” Moreover, a clinical innovation leader at an urban, academic program in the northeast reported that “limited bandwidth amongst staff makes quality measurement challenging.” A lack of standardization, including a lack of quality metric guidance and opaque electronic medical record (EMR) data definitions, was the second most cited (3/8) barrier to quality measurement.

DISCUSSION

In summary, most virtual urgent care programs represented in this sample capture quality metrics related to the NQF’s effectiveness domain, particularly within the subdomains of clinical and operational effectiveness. In contrast, only one of the eight programs in our sample currently uses a quality measure related to health equity. Programs expressed high levels of motivation to measure the quality of care delivery, with the most cited motivations being a desire to ensure quality of care and to demonstrate the value of their program. Limited resource availability, specifically a lack of data analytic support, was a significant barrier to quality measurement for many programs.

We found significant variation in the content of effectiveness-related quality measurement among the virtual urgent care programs in our study, similar to our prior work examining quality measurement across a broad spectrum of telehealth programs.17 Most programs reported measuring at least one aspect of clinical and operational effectiveness, but a minority reported quality measures within the subdomain of system effectiveness, which describes the ability of virtual urgent care to assist in coordinating care across settings and between clinicians. A lack of quality monitoring in the system effectiveness subdomain may contribute to the increased healthcare utilization after virtual urgent care visits described in prior studies.9 With the anticipated growth of virtual urgent care, including models that incorporate Mobile Integrated Health paramedics, there is an increasingly pressing need to understand how quality of care is currently evaluated in order to inform potential standardization of quality metrics and to allow for benchmarking and focused quality improvement interventions.

Despite the increasing recognition of the importance of equity in healthcare delivery, we found that only one of the virtual urgent care programs in our study reported tracking an equity-related quality measure, underscoring the need for standardization in quality reporting. If implemented well, virtual urgent care represents a significant opportunity to narrow disparities in access to care; at the same time, these programs could further exacerbate healthcare inequities given historical barriers to technology access, language accessibility, and internet access for people of lower socioeconomic status and people of color.21,22,23,24,25 Investment and regulatory changes have sought to improve equitable access to virtual urgent care services.26,27,28 Encouragingly, a recent study evaluating demographic shifts in virtual urgent care visits after recent COVID-19 telehealth expansion policies noted increasing proportions of vulnerable patient populations (e.g., elderly, uninsured, and rural populations) using virtual urgent care.29 However, future sunsetting of these policies with the end of the public health emergency puts these equity-related gains in virtual urgent care access at risk.30 Development and implementation of standard equity measures could provide much-needed feedback and benchmarking for institutions to inform the further maturation and adoption of local telehealth equity initiatives, such as investment in digital connectivity and digital literacy as well as multi-language offerings. A few health systems, which did not participate in our study, have published their own telehealth quality frameworks that emphasize equity; however, the degree to which, and how, institutions actively monitor the equity of their telehealth programs remains largely unknown.31
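
As a purely illustrative sketch of the zip code-based equity measure one program described, the following Python/pandas example (with invented zip codes, counts, and column names, not any program’s actual data or specification) shows how virtual urgent care visits might be normalized to a catchment-area denominator so that zip codes with low uptake become visible.

```python
# Hypothetical sketch: virtual urgent care visits per 1,000 residents by zip code.
# Zip codes, counts, and column names are invented for illustration only.
import pandas as pd

# One row per completed virtual urgent care visit (assumed EMR extract layout)
visits = pd.DataFrame({
    "visit_id": [1, 2, 3, 4, 5, 6],
    "patient_zip": ["02118", "02118", "02139", "02139", "02139", "02118"],
})

# Catchment-area denominator, e.g., census population by zip code
population = pd.DataFrame({
    "zip": ["02118", "02139", "01840"],
    "residents": [26000, 36000, 15000],
})

# Visits per 1,000 residents: a simple utilization-by-zip-code equity measure.
# The right join keeps catchment zip codes with zero visits in the report.
rates = (
    visits.groupby("patient_zip").size().rename("visit_count").reset_index()
    .merge(population, left_on="patient_zip", right_on="zip", how="right")
    .fillna({"visit_count": 0})
    .assign(visits_per_1000=lambda d: 1000 * d["visit_count"] / d["residents"])
    .loc[:, ["zip", "visit_count", "visits_per_1000"]]
)

print(rates.sort_values("visits_per_1000", ascending=False))
```

In practice such a measure would need careful catchment definitions and small-cell suppression, but even a simple rate comparison of this kind could flag areas with disproportionately low uptake for further review.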

While the virtual urgent care programs we interviewed are strongly motivated to develop and implement quality measurement, resource and technology limitations remain a significant barrier. This is unsurprising given prior work finding quality measurement initiatives to be costly; one study from 2016 estimated the cost at $40,069 per physician per year.32 A root cause of this high cost is likely the proliferation of quality metrics in recent decades, with relatively little attention paid to date by regulatory and quality institutions to the cost of implementing and reporting quality measures.33 Participants in our study cited a lack of standardization, in particular in how electronic medical record data are labeled and how quality metrics are defined, as a major barrier to quality metric development and use. Limited usability of EMR data has been widely reported in the literature, and efforts to structure and define data are underway.34,35 Improving healthcare data usability may represent an underutilized lever to drive down the cost of quality measurement for virtual urgent care programs, as poorly labeled and poorly structured data increase the workload of quality analytics teams. Additionally, the development of measures that can be readily extracted from EMR data without requiring “hands-on” abstraction will be key. To our knowledge, our study is the first to report on the challenges faced by virtual urgent care quality metric initiatives.
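
To make concrete what a measure “readily extracted” from structured EMR data might look like, below is a minimal, hypothetical sketch in Python/pandas of the 7-day repeat-visit measure mentioned in our results; the extract layout, column names, and example rows are assumptions for illustration, not a validated measure specification.

```python
# Hypothetical sketch: 7-day repeat virtual urgent care visit rate for the same chief complaint.
# The extract layout, column names, and example rows are assumed for illustration only.
import pandas as pd

visits = pd.DataFrame({
    "patient_id": [101, 101, 102, 103, 103],
    "visit_time": pd.to_datetime([
        "2022-07-01 09:00", "2022-07-05 14:00",  # repeat within 7 days, same complaint
        "2022-07-02 10:00",
        "2022-07-03 08:00", "2022-07-20 11:00",  # repeat, but outside the 7-day window
    ]),
    "chief_complaint": ["sinusitis", "sinusitis", "rash", "cough", "cough"],
})

# Flag visits occurring within 7 days of the same patient's prior visit
# for the same chief complaint
visits = visits.sort_values(["patient_id", "chief_complaint", "visit_time"])
gap = visits.groupby(["patient_id", "chief_complaint"])["visit_time"].diff()
visits["repeat_within_7d"] = gap.le(pd.Timedelta(days=7))

# Report the measure as the share of visits flagged as 7-day repeats
rate = visits["repeat_within_7d"].mean()
print(f"7-day same-complaint repeat visit rate: {rate:.1%}")
```

Because every field in this sketch comes from structured scheduling and registration data, no manual chart abstraction is required, which is what would keep the measurement cost low for a program with limited analytic resources.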

The findings from our study could inform several key next steps for virtual urgent care quality metric development. First, further work, research, and leadership should focus on developing, validating, and standardizing virtual urgent care quality metrics related to effectiveness and equity. Clearly defined metrics would help virtual urgent care program leaders monitor and benchmark performance. CMS could develop quality metrics for virtual urgent care and consider tying the most clinically relevant measures to reimbursement, especially given the Congressional Budget Office’s recent estimate that extending pandemic-related telehealth measures could increase Medicare costs by $25 billion over 10 years.36 As research identifies the most relevant quality metrics for implementation, medical professional societies such as the American College of Physicians and the American College of Emergency Physicians could further develop guidelines for virtual urgent care to reduce variation in care.37,38 As with all measure development processes, pilots should occur before mandated adoption to limit unintended consequences and to understand variation in performance and the cost of measurement.39 An initial focus on a few validated, actionable metrics, rather than a multitude, would allow programs to invest in quality improvement activities and limit unnecessary implementation and reporting costs, a major barrier to quality metric use identified in our study.40 At the same time, efforts to make it easy for programs to share quality measurement best practices, along with advocacy for improved EMR quality reporting capabilities that alleviate resource burdens, could unlock additional virtual urgent care quality improvement opportunities.

This study has multiple limitations. First, we used a convenience sampling method, identifying virtual urgent care programs from the published literature and through professional networks. The programs represented therefore tended to be established and relatively mature (87.5% operating > 2 years) with larger patient volumes, potentially allowing for greater investment in quality measurement infrastructure. The quality metric use and key themes identified may therefore not be representative of nascent virtual urgent care programs; our sample is likely biased toward more developed programs with more robust quality measurement processes. Additionally, most programs interviewed (~ 75%) were based at academic institutions, where the desire to evaluate and publish the impact of virtual urgent care on quality of care may have driven broader quality metric implementation than in a community health system or commercial setting. Furthermore, because the majority of participating programs were based in urban areas with widespread broadband penetration, they may be less motivated than rural-serving virtual urgent care programs to track equity measures related to connectivity. Lastly, some participants had pre-established relationships with the study team, which could have introduced social desirability bias into their responses.

In conclusion, our research furthers understanding of virtual urgent care programs’ use of quality metrics related to effectiveness and equity and identifies common motivations for, and barriers to, quality measurement. Given the potentially large role virtual urgent care could play in delivering equitable and effective care, more work should focus on developing, evaluating, and standardizing quality metrics. To ensure maximal impact and program sustainability, particular attention should be paid to the financial and human resource requirements of quality metric collection and reporting, as these were identified as a major barrier to current quality metric use.