Background

Over the last two decades, there has been increasing development of patient-facing behavior change interventions that incorporate digital technologies (e.g., Web-based multimedia, telephone/video conferencing, mobile and sensor technology, gaming, virtual reality, social media) across health domains to augment and/or fill the gaps of traditional prevention and treatment programs [1–4]. One reason behind this growth is the promise of effectiveness: eHealth interventions afford design elements not previously available or feasible with in-person, human-delivered interventions. Features such as personalization, privacy, variable workflow, timecasting, and integration into daily routines and environments are hypothesized to improve engagement and learning, thereby leading to better outcomes [2, 5–9]. More importantly, eHealth interventions offer the promise of reach: digital technologies not only allow interventions to circumvent geographic, social, and economic barriers to access [7] but also provide strong opportunities for cost-efficient scalability with fidelity [2, 7]. With capacity to improve both reach and effectiveness, eHealth interventions have tremendous potential for public health impact [10].

Despite continued investments in the development and testing of new eHealth programs [3, 4, 11–13], few interventions with demonstrated evidence of effectiveness have achieved widespread use [2–4, 14]. This gap between research and implementation is not unique to eHealth [15], but the model of establishing effectiveness in incrementally less-controlled settings creates challenges for eventual implementation that are more pronounced for technology-based interventions. First, research-based eHealth interventions are typically developed within highly controlled trials, often with a focus on the technology but limited input from end users, particularly future implementers [16•]. Upon moving into practice, an intervention may not be accepted by its intended targets or those delivering it because it does not fit their needs, contexts, or capabilities. Second, although the greatest cost of eHealth interventions is usually borne during development, building out components to make them pragmatic also incurs considerable resources that are rarely budgeted for at the end of a proof-of-concept study [15]. Third, the speed at which clinical innovations move through the research pipeline to the field—about 17 years by one estimate [17]—is far outpaced by the rate of technological advancements and changes in consumer expectations [6, 16•]. By the time an eHealth program’s effectiveness has been demonstrated, the technology has almost certainly become outdated [18]. Fourth, limited theoretical guidance exists on how to implement eHealth interventions. Most implementation theories and frameworks begin with the premise that the innovation is a discrete, stable product to be disseminated [19–23]. For eHealth, however, there is no option to “freeze” or even “stiffen” interventions in this way because failing to update software, hardware, form, and/or functionality would relegate them to increasing dysfunction and obsolescence [6]. Fifth, eHealth evidence-based interventions (EBIs) that are ready for implementation must compete in a market with home-grown and commercial programs that have not been rigorously evaluated, making it difficult for consumers to know which to use [20].

To maximize return on research investments in eHealth and realize their potential for greater public health impact, guidance on overcoming these technology-related implementation challenges is needed. Specifically, we believe researchers can prepare for and mitigate many of the aforementioned barriers to implementation via strategic choices during intervention development. Such planning can then be extended as intervention content evolves based on new research findings and technology adapts to changing user preferences [6, 24]. By prioritizing implementation, program developers can minimize stakeholder rejection, reduce build-out costs, keep pace with shifting social and technological (i.e., sociotechnical) factors during and after research trials, and build supports for sustained delivery of their interventions. The aim of this manuscript is thus to articulate a pragmatic approach and set of guiding questions to consider when designing or adapting eHealth behavioral interventions. To achieve this, we describe four rigorously evaluated eHealth HIV programs as exemplars of such implementation considerations during development and evaluation, and we examine challenges and successes experienced in each case.

eHealth in HIV

HIV prevention is a useful domain for studying eHealth implementation because of the substantial proliferation of diverse eHealth interventions [11–13, 25], limited resources for prevention, and a nationally coordinated infrastructure sensitive to dynamic changes in the field. Regarding end users, the epidemic in the US disproportionately affects young men who have sex with men (YMSM) [26–29], who, because of factors such as stigma against sexual minorities, are difficult to reach through traditional youth settings (e.g., schools, families) [30–32]. Targeted face-to-face HIV EBIs have had some success in high-density areas but are insufficient to meet goals for reducing HIV incidence due to economic and structural barriers to implementation [21, 23, 33–36]. Even in urban centers, only 28% of HIV-negative YMSM report participating in HIV prevention programs [37]. Simultaneously, YMSM tend to be early adopters and frequent users of digital technologies because they provide opportunities for learning, connection, and expression free from stigma [38–45]. Trials of eHealth HIV interventions have accordingly shown exceptionally high acceptability, interest, and actual use, particularly among subgroups at greater risk (e.g., YMSM of color) [46–48]. Available evidence also demonstrates comparability to in-person programs in effectiveness at changing HIV risk and protective behaviors [4, 13, 49–51].

Regarding implementers, the science of preventing HIV is rapidly and repeatedly transformed by biomedical, behavioral, and structural innovations [28, 52, 53], resulting in relationships among research institutions, government agencies, healthcare providers, community-based organizations, pharmaceutical companies, and YMSM communities that are synergistic, adaptive, and responsive to sociotechnical volatility [54]. As funding for HIV prevention, as distinct from treatment, becomes scarcer [55], stakeholders’ collective zeal to scale up and scale out [56••] eHealth HIV EBIs grows.

ACTS Model

Our analysis is informed by the Accelerated Creation-to-Sustainment (ACTS) model, a “framework for accelerating research and integrating design, evaluation, and sustainable implementation into a unified effort” [16•]. As noted, one of the challenges for eHealth implementation research is that most implementation theories focus on delivering and replicating a complete, “locked-down” product. The ACTS model rejects this characterization by explicitly separating eHealth interventions into a service component, representing what end users receive through the intervention from implementers, and a technology component, representing how technology supports delivery of the service. This reconceptualization of eHealth as technology-enabled services instead of human-supported technologies allows both components to evolve over time and incorporates the role of implementers into design and evaluation [57].

Building on a paradigm shift in implementation science toward recognizing ongoing change as important for long-term implementation (e.g., the dynamic sustainability framework [24]), the three-phase ACTS model proposes that eHealth interventions initially be designed only to the point of having service protocols, technology prototypes, and implementation plans for both service and technology that are safe and free of significant usability problems (Create phase). The protocol, prototype, and plans are then optimized and evaluated on effectiveness and implementation outcomes in a hybrid optimization–effectiveness–implementation trial, combining what would traditionally be multiple distinct studies (OEI Hybrid Trial phase). Finally, research support is removed, leaving in place a fully functioning, integrated, and independently sustained program (Sustainment phase). Across the phases, a process of iterative evaluation and redesign continually fits the intervention to real-world contexts to avoid overoptimization for non-pragmatic conditions, expedite use, and adapt to unpredicted sociotechnical disruptions.

Methods

We applied the ACTS model’s reconceptualization of eHealth interventions to four previously tested HIV prevention programs for YMSM—Keep It Up!, Harnessing Online Peer Education, Guy2Guy, and HealthMindr—to identify where implementation was or should have been considered and to draw comparisons across different types of digital technologies. Although the interventions were developed and evaluated using traditional trial methodology, the ACTS framework nonetheless provides a lens through which to examine how each program addressed the need for upkeep and responded to revolutions in the field, most notably the arrival of pre-exposure prophylaxis (PrEP), a medication that prevents HIV acquisition [28, 53]. Our goal was to use the lessons learned from these experiences to help accelerate the transition of other interventions from studies to scalable solutions.

Results

Table 1 presents the service, technology, and implementation plans of the eHealth interventions and summaries of the adaptations that have been made over time. The following narratives further describe the interventions along with challenges encountered during each program’s operation and subsequent considerations around intervention design.

Table 1 Four eHealth HIV prevention interventions for MSM, framed using the Accelerated Creation-to-Sustainment model

KIU!

Keep It Up! (KIU!) is an online HIV prevention program designed to get YMSM ages 18–29 who test HIV-negative to “keep it up,” or maintain their negative status, by reducing risk and enacting protections [47]. Through accounts registered to their emails, participants can access KIU! at their own convenience from any web browser (originally excluding mobile devices). They move linearly through seven modules (approximately 1 h total) across three sessions, with a forced day-long break between sessions. There are also two (originally one) booster sessions 3 and 6 months later. Based on the information–motivation–behavioral skills (IMB) model [58], each module focuses on a setting/situation relevant to YMSM (e.g., gay bars, dating) and uses diverse multimedia (e.g., soap opera videos, testimonials, animations, games, a testing/clinic locator) to address gaps in HIV knowledge, motivate safer behaviors, teach behavioral skills, and instill self-efficacy for preventive behaviors via active learning, role modeling, dramatic relief, goal setting, and self-reevaluation [59, 60].

Through two randomized controlled trials (RCTs), labeled 1.0 [47] and 2.0 [61, 62], KIU! was shown to be highly acceptable and efficacious in reducing condomless anal sex and the incidence of rectal/urethral STIs. KIU! has also been implemented as service projects (KIU! 1.5 and 2.5) by community-based organizations (CBOs) in Chicago, IL [48], and Jackson, MS. A national implementation RCT (KIU! 3.0) comparing two delivery approaches is in progress.

Service Challenges

Originally focused on condom use and testing, the educational components of KIU! have had to respond to changes in HIV prevention. The Food and Drug Administration approved PrEP for adults at high risk in 2012, posing a methodological issue for KIU! 2.0 because the intervention content was frozen in the context of the RCT. However, leaving this information out would have been unethical and would have immediately antiquated the intervention. PrEP content was therefore added to a booster session that no participants had yet completed, which allowed for rapid inclusion of information with consistent delivery to everyone in the trial. KIU! 2.5 and 3.0 have been further refreshed to include PrEP and other recent scientific advances (e.g., viral suppression [63]).

Anticipating more such changes, the developers have been deliberate about how new content integrates with technology. Media that are more costly to update (e.g., filmed video, interactive applications) are reserved for relatively stable information (e.g., communication skills), whereas more easily edited media (e.g., digital pamphlets) are used to deliver facts that could change over time (e.g., forms of biomedical prevention). Real and animated characters and scenes (e.g., clothing, hairstyles, music) are kept simple but diverse to maximize their shelf life.

Technology Challenges

The KIU! platform has also had to evolve within a changing technology landscape. KIU! 1.0 was developed for Web-based delivery on desktop and laptop computers. In preparation for implementation, the software was transferred in KIU! 1.5 to a university-supported platform that, though designed for collecting patient-reported outcomes rather than delivering interventions [64], was purported to be stable and scalable. However, a sociotechnical shift toward mobile devices meant that young people were increasingly accessing the Internet through smartphones [40]. Despite requests from KIU! 2.0 participants to view KIU! on their phones, the Adobe Flash-based platform was not compatible with all mobile devices, requiring another resource-intensive switch in KIU! 3.0 to stay current. The latest version is a mobile-responsive Web site built on an open-source system that has ongoing university support and investment. The developers kept the web platform instead of migrating to a smartphone application so that KIU! would be accessible across a range of devices and more easily updated [65].

The multimedia content has faced similar challenges, as exemplified by a virtual club simulation activity: created in KIU! 1.0 using Second Life and exported to Flash for easier access and usability, it suffered from the aforementioned mobile-compatibility issues as well as aging graphics and functionality. Updates were cost-prohibitive until KIU! 3.0, in which the simulation is being built anew; however, the developers recognize that it will likely require redevelopment within a few years.

Implementation Considerations

Weighing the substantial technical requirements of KIU! against the limited technical capacity of most CBOs, the developers designed the program to be centrally maintained by them but integrated into community-based HIV testing as an opportunity to reach diverse YMSM and supplement standard counseling [37, 66]. Across versions, the developers have continually acquired feedback from CBOs and other stakeholders to ensure KIU!’s appropriateness and acceptability upon deployment. One recurring theme has been localized tailoring: KIU! 2.0 was conducted in three metropolitan areas, so Module 1 videos were filmed in each city, with the idea that future versions could include the kinds of local adaptations generally favored by CBOs. Meanwhile, rating and feedback pages were incorporated within the program to provide ongoing monitoring of end-user acceptability.

In practice, there has been unanticipated variation in how implementers deliver KIU!. Contrary to the purported advantage of independence from in-person delivery, CBOs consistently describe value in using KIU! to engage YMSM with services. The KIU! 1.5 CBO flipped the order and used the program as an incentive to bring YMSM in for testing. The KIU! 2.5 CBO is delivering the program on-site, bundled with other services and group discussion. These deviations from the original implementation plan raise questions about how CBOs view eHealth in general. Furthermore, with the advent of low-cost at-home HIV testing, a new approach arose in KIU! 2.0: delivering the intervention directly to YMSM. This direct-to-consumer strategy is now being evaluated against the CBO approach in the KIU! 3.0 comparative implementation trial on outcomes that include reach, responsiveness, and cost-effectiveness. The design of this head-to-head implementation trial [67] provides for distinct strategies within each arm for engaging YMSM, creating sustainable delivery systems as recognized in the ACTS model.

HOPE

Harnessing Online Peer Education (HOPE) is a stigma reduction and behavior change intervention that uses social media (so far Facebook and HealthCheckins) to increase HIV testing among MSM, but it is also tailorable for different populations and areas of need. Following a modified community popular opinion leader model [68], influential members of the target communities are trained to communicate about HIV (in HIV testing studies) and motivate change in their peers’ behaviors over a 12-week period [69]. MSM participants are added to a closed (private) group and instructed to use their social media as they normally do. Peer leaders attempt to engage with their assigned participants around HIV prevention and testing knowledge and attitudes via direct messages, chats, and wall posts, though participants are not required to respond to peer leaders or engage with other participants.

HOPE has been shown to increase HIV self-testing behavior among primarily African American and Latino MSM in Los Angeles, CA [70], and in-person HIV testing among MSM in Lima, Peru [71]. Acceptability and retention in both RCTs were high [72]. A third trial in Los Angeles with modifications for delivery by CBOs is underway. HOPE has been adapted to increase HIV testing among women in jails, increase retention in care among minority MSM living with HIV, reduce addiction and overdose among chronic pain patients on opioid therapy [73, 74], and reduce substance use and underage drinking among youth [75].

Service Challenges

In the HIV trials, peer-led group education was paired with optional free HIV testing, which doubled as an outcome for the studies. The major service challenge has been incorporating updated HIV research findings and clinical tools (e.g., PrEP) into the intervention via peer leader training.

Technology Challenges

Facebook was initially selected as the platform for HOPE because of its high traffic and acceptability among MSM [69, 76]. After conducting the first two efficacious HIV trials using Facebook, a third trial was planned. Before the trial launched, though, Facebook altered its user interface, moving Facebook Groups from visibly front and center to a side menu and making the feature more difficult to find and use. Additionally, some MSM expressed resistance to having the intervention on Facebook. Anticipating implementation problems, the developers planned and created another platform (i.e., HealthCheckins) that could mimic the original functionality of Facebook and replace it. However, a university health system compliance officer ruled the new third-party software unacceptable because it could not be owned by the university or stored on its servers, so the technology reverted to the original platform despite the less-ideal Group functionality.

Implementation Considerations

Using a popular platform like Facebook for HOPE saves on development and maintenance costs and brings additional benefits, such as a way to verify participant identities via Facebook Connect, an integrated single sign-on feature. However, there are challenges to developing an implementation plan for technology one does not control: the intervention becomes subject to the behaviors of that particular platform, often with unanticipated consequences. For example, the order in which posts are displayed in Facebook Groups is determined by Facebook’s proprietary algorithm, which changed over the course of the three HIV trials; most recently, posts appeared stacked in reverse chronological order. Because MSM randomly assigned to the intervention arm were more actively engaged than those assigned to the control (as intended), the testing invitation posts within the intervention Groups during the third trial kept getting buried underneath other posts, whereas the testing posts in the control Groups remained at the top due to the limited number of other posts. Thus, the developers modified the research protocol to repost the testing invitation multiple times in the intervention Groups so that it could be seen as frequently as in the control, a rule sketched below.
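
As a rough illustration, the reposting rule amounts to a periodic check of the Group feed as displayed; the feed representation and `repost` hook below are hypothetical, since the trials worked through Facebook's own interface rather than a programmatic one.

```python
# Sketch of the reposting rule that kept the HIV testing invitation
# visible in intervention Groups. The feed representation and repost()
# hook are hypothetical; the trials used Facebook's own interface.

def needs_repost(feed_posts, invitation_id, top_k=3):
    """Return True if the testing invitation has been pushed below
    the first `top_k` posts of the Group feed as displayed."""
    visible = [post["id"] for post in feed_posts[:top_k]]
    return invitation_id not in visible

def monitor_group(feed_posts, invitation_id, repost):
    # Run on a fixed schedule (e.g., daily) by study staff or a script.
    if needs_repost(feed_posts, invitation_id):
        repost(invitation_id)  # re-share so the invitation surfaces again

# Example: peer-leader activity has buried the invitation in an
# intervention Group, so the check returns True and triggers a repost.
feed = [{"id": "post-17"}, {"id": "post-16"}, {"id": "post-15"},
        {"id": "invite-1"}]
print(needs_repost(feed, "invite-1"))  # True
```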

Another adaptation to the implementation plan occurred in the second trial. As an alternative to requesting an at-home HIV test, participants could visit a local CBO for testing, which 17% of the intervention arm and almost 7% of the control arm did. This was an encouraging finding for considering alternative dissemination approaches for HOPE. Familiarity with Facebook may also mean that CBOs could more easily adopt the intervention. So, although it is unfortunate that the developers’ attempt to better control the implementation of their technology-enabled service by switching to HealthCheckins met with unanticipated sociotechnical barriers (data security policies), the tradeoff could be a lower burden for eventual deployment. However, modifying HOPE for other researchers or service organizations who lack the resources to pay peer leaders has been an ongoing challenge.

G2G

Guy2Guy (G2G) was a comprehensive text-messaging-based HIV prevention program for adolescent men who have sex with men (AMSM) ages 14–18 [77]. Every day for 5 weeks, G2G delivered 6 to 8 short, standardized messages to participants’ phones. Booster text messages were sent approximately 6 weeks after intervention completion. Based on the IMB model [58], message content primarily focused on HIV knowledge, attitudes (e.g., reasons AMSM use condoms), and skills (e.g., correct condom use) and was tailored to participants’ sexual experience. Messages could also include other topics, such as healthy and unhealthy relationships, coming out to parents, and bullying; links to online resources; and interactive quiz questions to which participants could respond. Furthermore, AMSM could text questions to an automated intervention number, which would use text mining to select and return an answer from a library of scripted messages (a minimal sketch of this step follows). Finally, AMSM were paired based on demographics, sexual experience, and geographic location, and automated messages would encourage participants to text their partner to discuss HIV-related topics and provide peer support.
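
G2G’s actual text-mining method is not detailed here, but the automated question-answering step could work along the lines of the following sketch, which scores scripted replies by word overlap with the incoming question; the topics, keywords, and message content are illustrative assumptions.

```python
# Minimal sketch of selecting a scripted answer for an incoming question.
# G2G's actual text-mining method is not specified here; this uses a
# simple word-overlap (Jaccard) score as a stand-in. Content is illustrative.
import re

SCRIPTED_LIBRARY = {
    "condom_use": "Condoms work best when you check the expiration date ...",
    "testing": "You can find free, confidential HIV testing near you ...",
    "prep_info": "PrEP is a daily pill that lowers the chance of getting HIV ...",
}

KEYWORDS = {
    "condom_use": {"condom", "condoms", "latex", "lube"},
    "testing": {"test", "testing", "tested", "clinic", "results"},
    "prep_info": {"prep", "pill", "truvada", "prescription"},
}

def tokenize(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def select_answer(question):
    """Return the scripted message whose keyword set best overlaps
    the question, or a fallback referral if nothing matches."""
    tokens = tokenize(question)
    scores = {topic: len(tokens & kw) / len(tokens | kw)
              for topic, kw in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        return "Good question! A health educator will follow up with you."
    return SCRIPTED_LIBRARY[best]

print(select_answer("where can i get tested near me?"))  # testing message
```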

During an iterative development process, G2G was deemed acceptable by AMSM [77]. In the pilot RCT, G2G had a positive effect on rates of HIV testing among a national sample, but it did not significantly reduce condomless anal sex compared to the active control [78].

Service Challenges

No major changes to the service were made to G2G during the pilot study, but several adaptations would be required in future iterations. First, none of the original messaging contained information about PrEP because it was not indicated for adolescents at the time of the trial. As these recommendations have since changed [79], new content will need to be integrated into the library of scripted messages while outdated material is refreshed or purged. Second, the original library comprised messages that were tailored to the participants’ sexual experience (i.e., initiated sexual activity versus abstinent) but untailored to individual linguistic style and the linguistic context of the peer-to-peer conversation thread. Current work exploring the optimization of pairing AMSM based on linguistic compatibility suggests that G2G dyads who were closer in style were more engaged [80]. To strengthen the program’s effectiveness, the developers could incorporate this information into the dyad matching algorithm and deliver scripted messages from the library that would be sensitive to each pair’s style and/or context.
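
To make the style-matching idea concrete, the sketch below pairs participants by the similarity of hypothetical linguistic profiles (e.g., rates of pronoun and emoji use, average message length); the features and the greedy pairing rule are illustrative assumptions, not the algorithm reported in [80].

```python
# Sketch of pairing AMSM by linguistic style similarity. The profile
# features and greedy rule are illustrative; in practice, features would
# be standardized so no single dimension dominates the similarity score.
import math
from itertools import combinations

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def pair_by_style(profiles):
    """Greedily pair participants with the most similar style vectors.
    `profiles` maps participant id -> feature vector."""
    unpaired = set(profiles)
    pairs = []
    # Score every possible dyad, most similar first.
    scored = sorted(combinations(profiles, 2),
                    key=lambda d: cosine(profiles[d[0]], profiles[d[1]]),
                    reverse=True)
    for a, b in scored:
        if a in unpaired and b in unpaired:
            pairs.append((a, b))
            unpaired -= {a, b}
    return pairs

profiles = {
    "p1": [0.21, 0.05, 12.0],  # [pronoun rate, emoji rate, avg words/msg]
    "p2": [0.19, 0.06, 11.0],
    "p3": [0.02, 0.30, 4.0],
    "p4": [0.03, 0.28, 5.0],
}
print(pair_by_style(profiles))  # [('p1', 'p2'), ('p3', 'p4')]
```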

Technology Challenges

No major changes to the technology were needed during evaluation. In planning for the aforementioned service adaptations, the platform that pairs AMSM and coordinates messages would need to be updated with algorithms that automatically measure and respond to AMSM’s linguistic profiles.

Implementation Considerations

Because text messaging is an existing feature of phones, reliable and consistent across different mobile platforms, and widely adopted by youth [40], G2G is purportedly scalable without need for graphic or user interface redesign. However, the lack of multimedia content may have contributed to the limited behavioral outcomes, so additional features (e.g., language tailoring) may prove critical to improving effectiveness. Another consideration is the potential cost to participants: whereas the trial included only individuals with unlimited texting plans, this will not be the case for all AMSM.

G2G incorporated some automation into its implementation plan, including algorithms for detecting keywords and patterns that signal problems (e.g., personal contact information, suicidal ideation). However, some participants tried to circumvent the terms of service, which prohibited attempts to exchange contact information, so substantial manual monitoring of peer-to-peer conversations was also required. Future scale-out would need to plan how to maintain safety and confidentiality, especially because the users are minors. The sketch below illustrates the kind of automated screening involved.
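
This is a minimal sketch assuming simple regular-expression patterns for contact information and a short list of crisis phrases; both are illustrative, and a deployable safety system would be far more sophisticated and still paired with human review.

```python
# Sketch of automated safety screening for peer-to-peer messages.
# Patterns and phrase lists are illustrative; a real system (as in G2G)
# still requires human monitoring, especially with minor participants.
import re

CONTACT_PATTERNS = [
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),    # US phone number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),          # email address
    re.compile(r"(?i)\b(snapchat|instagram|kik)\b"),     # handle exchange
]
CRISIS_PHRASES = ["kill myself", "want to die", "end it all"]  # illustrative

def screen_message(text):
    """Return a list of flags; any flag routes the message to a human."""
    flags = []
    if any(p.search(text) for p in CONTACT_PATTERNS):
        flags.append("possible_contact_info")
    lowered = text.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        flags.append("possible_crisis")  # triggers urgent staff follow-up
    return flags

print(screen_message("text me at 312-555-0134"))  # ['possible_contact_info']
```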

HM

HealthMindr (HM) is a smartphone native application (i.e., it installs onto the phone) that provides a comprehensive package of HIV prevention services for MSM, including monthly HIV risk assessments with tailored feedback; PrEP and PEP eligibility screeners; testing decision support (e.g., recommended frequency based on risk behaviors); a customizable testing plan and results tracker; a testing/clinic locator; customizable reminders; answers to frequently asked questions; and a mechanism to order free condoms, lubricant, and at-home test kits [81]. Based on social cognitive theory [82], these features promote goal setting, self-efficacy, outcome expectations, and self-regulation around multiple protective health behaviors (e.g., testing, condom use, PrEP self-screening), but no components are required; MSM participants who download the app can use it however they see fit.

HM was developed after an extensive formative research process that included several cycles of input from potential app users, HIV prevention counselors, health department officials, and federal funders of HIV prevention programs [83, 84]. An uncontrolled pilot study of HM showed high (> 50%) utilization of planning for a regular HIV testing schedule, ordering of at-home HIV test kits, and ordering of condoms and condom-compatible lubricant for home delivery [81]. Additionally, 9% of PrEP-eligible men initiated PrEP during the 4-month evaluation period. HM was found to be both usable and acceptable by MSM. A multicity effectiveness RCT is underway.

Service Challenges

Over the period of HM development and evaluation, there were important changes in the HIV/STI prevention environment. Awareness of, willingness to use, and use of PrEP increased substantially [85]; state and local programs to offset the costs of PrEP became more common; and a national directory of PrEP providers was developed [86]. In response, content about available PrEP navigation programs and a geolocation-based locator of PrEP providers (via an automated interface to a live database, sketched below) were added to the app. Furthermore, mail-out kits for specimen self-collection for urethral, pharyngeal, and rectal STI testing were found to be very acceptable to MSM in research settings [87, 88], so the commodity ordering function in HM was modified to include mail-out STI kits.
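
Such a live-database interface might look something like the following; the endpoint URL, query parameters, and response fields are invented stand-ins, as the actual integration is not documented here.

```python
# Hypothetical sketch of HM's geolocation-based PrEP provider lookup.
# The endpoint URL, query parameters, and response schema are invented
# for illustration; the app's actual interface is not documented here.
import requests

DIRECTORY_URL = "https://example.org/api/prep-providers"  # placeholder

def find_prep_providers(lat, lon, radius_miles=25, timeout=10):
    """Query a live provider directory for PrEP clinics near a point."""
    resp = requests.get(
        DIRECTORY_URL,
        params={"lat": lat, "lon": lon, "radius": radius_miles},
        timeout=timeout,
    )
    resp.raise_for_status()
    providers = resp.json()  # assumed: list of {name, address, distance}
    # Show nearest providers first inside the app.
    return sorted(providers, key=lambda p: p["distance"])

# e.g., find_prep_providers(33.749, -84.388)  # downtown Atlanta
```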

Technology Challenges

Considerable changes in the sociotechnical landscape occurred during the same period, which impacted the transition of HM from evaluation to implementation. For example, as cellular bandwidth has increased, so have opportunities to include video content in mobile interventions, along with demand for such content from users, who may find static content less engaging over time. Consequently, HM was modified to present more video in place of text. In another example, public disclosures, data breaches, and hacking of social media platforms continue to legitimize concerns about data security and privacy in sexual health apps [89]. It was necessary to modify HM to incorporate additional security measures (drawing, for instance, on the advent of biometric identification tools in operating systems) and more explicit and visible mechanisms to secure the most sensitive health and sexual risk behavior data. More generally, the look and feel of smartphone apps has evolved, requiring maintenance to keep the app contemporary and functioning across different devices.
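
One concrete example of this class of safeguard, sketched under the assumption of a Python service using the widely available `cryptography` package, is encrypting the most sensitive responses at rest and gating decryption behind re-authentication; this illustrates the kind of measure described rather than HM’s actual implementation.

```python
# Sketch of encrypting sensitive risk-behavior responses at rest.
# Illustrative only: HM's actual security measures are not detailed here.
# Requires the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# In production, the key would come from a secrets manager or the OS
# keystore (possibly unlocked via biometric authentication), never code.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_sensitive(value: str) -> bytes:
    """Encrypt a sensitive field before writing it to local storage."""
    return cipher.encrypt(value.encode("utf-8"))

def load_sensitive(token: bytes) -> str:
    """Decrypt a stored field after the user re-authenticates."""
    return cipher.decrypt(token).decode("utf-8")

token = store_sensitive("last HIV test: 2019-06-14")
assert load_sensitive(token) == "last HIV test: 2019-06-14"
```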

Implementation Considerations

Regarding the service implementation plan, because of the increased volume of prevention supplies available for ordering, HM integrated ordering and shipping with Amazon.com to streamline the workflow for fulfilling test kit and condom orders and to provide a consistent level of service in delivery times. More broadly, however, two challenges that were not fully anticipated, despite extensive formative work to lay out the path for eventual implementation, have delayed the rollout of HM. First, most funding for HIV prevention programs in the US is provided by the Centers for Disease Control and Prevention (CDC) to health departments and CBOs, and only EBIs and recognized public health strategies can be supported by these funds. There is a question about whether HM is a standalone intervention, which requires more stringent evidence of efficacy, or a strategy to promote the efficient provision of prevention tools known to be impactful. If the latter, then distribution of HIV test kits and referrals to care through HM would be eligible for immediate uptake by CDC grantees, but the process to evaluate eHealth tools as public health strategies is not as well described as the process to evaluate interventions [90]. Second, most health departments and CBOs have minimal technical capacity to manage the deployment and operations of an eHealth intervention. To support widespread use of HM in community settings, the CDC must delineate a more explicit process for evaluating eHealth programs as public health strategies and establish capacity for technical assistance for eHealth.

Discussion

Examining our experiences developing KIU!, HOPE, G2G, and HM and addressing emergent challenges throughout their evaluations and implementations, we have identified several lessons that have practical implications for planning future eHealth programs for HIV and other health domains.

1. Continual adjustment to the ACTS model targets (service, technology, implementation plan) should be expected. Whereas the iterative optimization process outlined in the ACTS model suggests these adjustments are small and incremental, we encountered major and radical interruptions during implementation (e.g., the move toward smartphones, the introduction of home-based testing). We posit that such sociotechnical disruptions are common, especially in a field with frequent innovations. Therefore, contingencies including time, resources, and processes should be incorporated into the implementation plan and, if possible, built into the technology. This is particularly true of multimedia interventions (e.g., KIU!, HM), as some formats are more costly to update than others. Such planning may limit developers’ technology options, but ignoring these inevitabilities would likely shorten the life expectancy of the intervention. That is not to say, however, that changes should be viewed negatively. Dynamic models of implementation [24], including the ACTS model, reject the notion that deviations from the original intervention will inherently produce a suboptimal effect. On the contrary, adaptability to evolving contexts, whether in terms of technology (e.g., more video content in HM), service (e.g., new PrEP content in KIU!), or implementation (e.g., more testing options in HOPE), can potentially improve intervention quality and is key to maintaining long-term relevance and viability for an eHealth program.

2. Because adjustments for optimization and disruptions are expected, one cannot simply “set and forget” an eHealth intervention. Implementation requires both vigilance to catch issues and capacity to troubleshoot and fix them. Under traditional models of HIV EBI dissemination [91], few community settings would have the resources to manage a technology-heavy intervention on their own, constraining the program’s scalability. Thus, intervention design should consider the capabilities not only of the technology but also of its end users. Implementation plans that share the cost burden of eHealth intervention delivery, such as KIU!’s coordinating center and direct-to-consumer models, can help alleviate these challenges.

3. It is important to distinguish between minor updates (e.g., bug fixes, system upgrades, additional device support) and major content or functionality changes that alter or improve users’ experience and benefits. Though ongoing optimization is desirable, to what extent does adding features around STI testing, for example, make HM a different intervention than its prototype? To respond to this reality while maintaining scientific rigor in studying eHealth programs, we must move beyond traditional RCTs of frozen or locked-down products. One approach is to shift the paradigm to evaluating core intervention principles, which may take on different forms but retain the same fundamental function [6, 16•]. Operationally, this includes (a) identifying methods (e.g., peer norms) and instantiation strategies (e.g., role model stories) that underlie behavior change, which are part of the service component that is constant during a trial; (b) partnering with stakeholders to identify new technology features or scientific content to be considered for adoption; (c) matching proposed changes to the behavior change methods and strategies and deciding if the changes interfere with the core principles being tested; and (d) monitoring and testing usability and acceptability to ensure functionality is maintained.

When the content and/or delivery systems of an eHealth EBI evolve more substantially, researchers may choose to establish new evidence of efficacy. In cases where the implementation plan generally remains the same but the service and/or technology change, new participants can be randomized to older versus newer versions of the intervention. In cases where there is a change in implementation plan, researchers can pinpoint which system components need testing and which components can “borrow strength” from previous studies [56••]. Novel study designs, such as factorial experiments [92] and sequential multiple-assignment randomized trials [93], may be used to isolate the effects of specific combinations of components, whereas optimization frameworks, like the ACTS model and the multiphase optimization strategy [94], can guide that process.

4. The choice of platform for an eHealth intervention has far-reaching ramifications for its implementation needs, and different technologies have their own unique challenges. A complex intervention with multiple components built on a custom or proprietary platform requires high technical expertise to manage and update (e.g., KIU!, HM). Conversely, a platform that is maintained by a widely used third party delegates upkeep responsibilities but also relinquishes control over changes in function (e.g., HOPE). Other factors like adaptability, data and security issues, cost to participants (e.g., texting in G2G), and funding streams (e.g., prevention grants for HM) are tied to platform choice, all of which influence pragmatic scalability: platforms that are not widely used, do not have extensive and ongoing external support, or are esoteric to one institution are more likely to encounter challenges. Intervention developers should carefully weigh the pros and cons of selected technologies for scalability early in the design process.

5. Understanding all aspects of user engagement is critical. The ACTS model distinguishes three relationships in a sociotechnical system: between the participant and the technology, the implementer and the technology, and the participant and the implementer [16•]. Of the three, the first receives the most attention because acceptability and usability of an intervention by its target population are major factors in effectiveness and successful implementation [3, 95]. Within HIV prevention, eHealth interventions have thus far had high acceptability among YMSM in trials, but retention drops outside the research context [48]. Local adaptations; better technology, tailoring, interactivity, and graphics; and more human involvement have been shown to increase engagement and utilization, but the added complexity may come at the cost of greater resource demand and upkeep across the life of the program [2].

The implementer–technology interaction encompasses implementers’ attitudes about the intervention, an important implementation outcome [95], but it also represents behind-the-scenes tasks required to administer the program that are often neglected in non-pragmatic studies. For example, eHealth technology tends to be built or calibrated to meet research needs like efficiently tracking participants, monitoring fidelity, and delivering assessments. Though some features may be useful for future implementers as well, the software is not typically deployment-ready, requiring modifications later. These additional costs could be mitigated by constructing pragmatic interfaces for implementers from the start. Data management is another example. Tremendous amounts of data and meta-data, such as the number of clicks on a feature or the content of typed messages, can be captured from eHealth interventions but will overwhelm organizations that do not have the capacity to store, handle, analyze, and interpret those data [16•]. Data visualizations and dashboards with monitoring algorithms like in G2G can help implementers filter through large volumes of information to triage cases that need attention.
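
As a small illustration of what such a back end might do, the sketch below condenses raw event logs into a triage queue of participants needing staff attention; the event schema, thresholds, and flag types are invented for the example.

```python
# Sketch of a monitoring back end that condenses eHealth meta-data into
# a triage queue for implementers. Field names and thresholds are
# invented for illustration.
from collections import defaultdict
from datetime import datetime, timedelta

def build_triage_queue(events, now, inactivity_days=7):
    """events: iterable of {participant, timestamp, type} dicts.
    Returns participant ids needing staff attention, with reasons."""
    last_seen = {}
    flag_counts = defaultdict(int)
    for e in events:
        pid, ts = e["participant"], e["timestamp"]
        last_seen[pid] = max(last_seen.get(pid, ts), ts)
        if e["type"] == "message_flagged":  # e.g., from safety screening
            flag_counts[pid] += 1

    queue = []
    for pid, seen in last_seen.items():
        reasons = []
        if now - seen > timedelta(days=inactivity_days):
            reasons.append("inactive")      # candidate for re-engagement
        if flag_counts[pid] > 0:
            reasons.append(f"{flag_counts[pid]} flagged message(s)")
        if reasons:
            queue.append((pid, reasons))
    return queue

now = datetime(2019, 7, 1)
events = [
    {"participant": "p1", "timestamp": datetime(2019, 6, 1), "type": "login"},
    {"participant": "p2", "timestamp": datetime(2019, 6, 30),
     "type": "message_flagged"},
]
print(build_triage_queue(events, now))
# [('p1', ['inactive']), ('p2', ['1 flagged message(s)'])]
```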

Finally, the participant–implementer connection is often overlooked in eHealth, but reframing eHealth interventions as technology-enabled services identifies important human interactions necessary for implementation. Participant recruitment is often considered a function of research, so enrollment drops during real-world implementation [48, 96]. However, the ability to reach and retain participants is key to scaling up an intervention. Those processes must be planned as much as the technology itself, but who provides these critical roles of recruitment, support, and engagement in practice is an unanswered question.

Design Considerations for eHealth Intervention Implementation

From these lessons, we distilled a practical checklist of design decisions for eHealth intervention developers to think through in order to prevent or moderate issues that may hinder implementation of their programs (Table 2). Several guidelines for eHealth already exist in the literature, but they primarily focus on categorizing an intervention’s applications and functions (e.g., mHealth and ICT Framework [97]) or reporting on its core features and evaluation methods (e.g., CONSORT-EHEALTH [98], mHealth Evidence Reporting and Assessment [99•]). Although referring to these guides a priori may help developers plan out their study designs, with maybe a nod toward future adaptability and scalability, the guides are intended to apply after an intervention has been created and tested, as a way to comprehensively codify the research evidence. In contrast, our checklist is meant to be used during intervention development. Rather than describe a finished product, we hope that it stimulates thinking around how service, technology, and implementation plans are intricately linked and must be designed in tandem so that eHealth intervention developers, along with reviewers and funders evaluating the merits of their proposals, can prioritize the pragmatic scalability of their programs. The questions outlined therein are also not meant to be an exhaustive solution to all implementation challenges for eHealth interventions, but they do represent at minimum what must be asked and answered for all technology-enabled behavior change services. In the absence of a clear path forward for eHealth implementation, addressing these considerations early on will set up program developers to better traverse the gap between research trials and widespread use.

Table 2 Critical implementation questions for eHealth intervention design

As a caveat, if researchers attend only to implementation issues up front, they could succeed at implementation but accidentally fail at proof of concept. Our checklist should be applied alongside the ACTS model and established protocols for designing behavior change interventions [100]. It is also impossible to predict and optimize for all possible futures and sociotechnical disruptions, so developers and their technology teams must balance the tension between adaptability and parsimony. The tradeoffs are many, and there are no right answers; the only wrong answer is to not consider implementation issues before embarking on the costly endeavor of creating a new program.

Conclusions

As eHealth interventions continue to propagate in HIV and across other health domains, it is critical that we establish new paradigms in program design, evaluation, and implementation that keep pace with the rapidly shifting dynamics of modern sociotechnical landscapes. Just as we upgrade technology, so too must we employ new frameworks, like the ACTS model [16•], as well as new methods (e.g., trials of intervention principles [6], the multiphase optimization strategy [94]), paradigms (e.g., the dynamic sustainability framework [24]), and research agendas (e.g., disruptive innovations [20]). We propose our design considerations for eHealth behavioral intervention implementation to contribute to the burgeoning science in this area and to aid developers, evaluators, reviewers, and funders in achieving a future of eHealth intervention scalability.