Approximately 10 years ago, the Lancet Global Mental Health Group issued a call for action to scale up mental health services worldwide, particularly in low- and middle-income countries (LMICs; Lancet Global Mental Health Group et al. 2007), to respond to the glaring gap between those who needed mental health care and those who received it (Andrade et al. 2014; Kohn et al. 2004). In 2001, the World Health Organization (WHO) released the report titled “Mental Health: New Understanding, New Hope” (World Health Organization 2001), and a number of collaborative efforts to improve global mental health have been established since then (Abdulmalik and Thornicroft 2016).

Global mental health can be defined as a set of initiatives that promote the scale-up of evidence-based interventions to improve the quality of mental health service delivery, particularly in LMICs (Bayetti and Jain 2017; Jain and Orr 2016; White et al. 2017). The initiatives to expand global mental health care delivery are based on the moral and ethical assumption that everyone should receive attention and care, regardless of their social determinants of health (Kirmayer and Pedersen 2014; Mason et al. 2017; Patel 2014), and much progress has been made towards improving mental health care delivery globally (Patel et al. 2011). However, as the field of global mental health grows, it has also been met with a number of critiques and challenges (Bayetti and Jain 2017; Whitley 2015). Numerous scholars have expressed concern about the potential clash between scaling evidence-based interventions (EBIs) and the social injustice that may be inherent in such practice. Critics have been vocal about the inattention of proposed interventions to the social and cultural conditions that give rise to mental distress (Mills and White 2017; Whitley 2015) and to the many “cultural variations in the experience of illness” (Fernando 2011, p. 22). As Mason and colleagues state, “some of these efforts potentially obscure the social, economic, and political histories of the locations where projects are implemented, as well as the plurality of knowledge and values within and across communities” (Mason et al. 2017).

One could propose that, instead of perceiving the scale-up of EBIs as a “black or white” phenomenon, it is more of a palette full of colors where, depending on the pathway of implementation, the context, and how the collaboration is established, one can successfully balance shared knowledge between LMICs and high-income countries (HICs). This paper aims to share some of the lessons we have learned as we pursue implementation efforts of evidence-based parenting interventions (EBPIs) while maintaining social justice in our work. We focus on parenting interventions because this is our specific area of research, but we believe that the principles outlined here apply similarly to other preventive mental health interventions.

Before we outline our challenges, we clarify the terms used in this paper. Implementation of an intervention refers to the process of integrating an EBI into real-world settings. Implementation research aims to understand the factors and strategies that facilitate or hinder the adoption of these interventions in usual care (Proctor et al. 2011). Dissemination research, on the other hand, refers to the study of the targeted distribution of information and intervention materials to a clinical audience with the intent to spread and sustain knowledge associated with evidence-based interventions (Rabin and Brownson 2017). Scaling out is a recent term coined by Aarons and colleagues (Aarons et al. 2017) to refer to the process whereby EBIs are implemented with new populations, new delivery systems, or both. Scaling up refers to the expansion of the delivery of one EBI within the same or a very similar setting to that in which the intervention was originally tested. A helpful metaphor for distinguishing the two: scaling up is akin to watching a seed grow into a flower within one garden, whereas scaling out is planting seeds in different gardens. Such distinctions in terms are important, as they have consequences for the hypotheses, assumptions, and designs of studies. For more information about these terms, we refer readers to Aarons et al. (2017). As we implement EBIs in different settings, we conceptualize our work as “scaling out” EBPIs to different settings.

Experiences of Success and Innovative Approaches for the Scaling Out of Evidence-Based Parenting Interventions into LMICs

There are numerous implementation frameworks (Tabak et al. 2012), but very few explain the process of implementing evidence-based interventions with diverse populations (Yancey et al. 2006). Several scale-up frameworks also exist (Brownson et al. 2017), including one developed by our team that draws on components from the cultural adaptation and implementation science fields (Baumann et al. 2014). In general, scale-up frameworks involve feedback loops of at least three phases: (1) learning from HICs and adapting to LMICs leads to (2) strong partnerships and involvement of local communities for (3) the optimization of existing interventions. In this section, we describe our experiences of success implementing EBPIs in LMICs, organizing our work around these three phases.

Learning from High-Income Countries

When doing global work, there is a tension between how much to “take” from HICs to LMICs and how much to develop on the ground. A first step is to spend time gathering system-level information, including the resources (human, financial, and physical) needed to implement the EBPI. This phase corresponds to the “exploration” phase of our model (Baumann et al. 2014). In our experience, this first phase entails a somewhat top-down approach at the beginning, as the teams translate the manuals, examine which measures to use, and identify the potential stakeholders who will be trained to deliver the intervention. For example, our teams in Mexico City spent considerable time translating parenting practice measures from English to Spanish, back translating them, and examining their fit to the context in Mexico prior to using them in a randomized controlled trial (RCT). Similarly, because the available validated measures for child outcomes in Brazil are simply too expensive for our stakeholders, we are in the process of conducting cognitive interviews on free, translated measures. In other words, in this beginning phase, our stakeholders are often still “consuming” knowledge from HICs rather than “creating new knowledge” that could be disseminated to similar countries in the region.

In Mexico, there is a large interest in linking researchers with the clinics responsible for attending to the population. In Ensenada, Baja California, researchers are establishing collaborative networks among universities, governmental health institutions, and civil associations, as well as encouraging collaborative work with foreign researchers who are also interested in benefiting the Mexican population. The goal of such three-tiered relationships is to facilitate learning between academia and the clinicians who deliver care.

Adapting the Interventions

The field of cultural adaptation defines adaptation as “the systematic modification of an evidence-based intervention (EBI) to consider language, culture, and context in such a way that it is compatible with the client’s cultural patterns, meanings, and values” (Bernal et al. 1995), referring to adaptations based on clients’ cultural backgrounds. In this paper, we expand the concept and incorporate contextual adaptations that go beyond cultural elements at the client level, such as modifying interventions to fit provider characteristics, organizational contexts, and service settings (e.g., historical, political, and economic contexts; Baumann et al. 2017). This broader conceptualization allows us to specify the components that have been adapted to fit a broader context, as part of increasing the fit of the intervention to LMIC settings.

For example, a group of us translated and adapted GenerationPMTO to Spanish for Latino families in the USA and in Mexico (Baumann et al. 2014; Domenech Rodríguez et al. 2011). When scaling out the intervention to Mexico, our first iteration used the traditional GenerationPMTO training: a series of five workshops across 18 months (Baumann et al. 2014). This in-depth training, however, proved to be a challenge for low-resource settings, and we then adapted the training to use technology. A pilot study was conducted to test the feasibility of using blended learning, with a mix of online and in vivo strategies, to train therapists in GenerationPMTO (Baumann et al. 2017). In Panama, for example, before trialing the Triple P Positive Parenting Program, a series of studies was conducted with local communities to ensure the program was culturally relevant and acceptable to their needs (Mejia et al. 2015a, b). Our work has shown that adaptation is inherent—and perhaps crucial—to the implementation process of EBPIs in LMICs (Baumann et al. 2014, 2017; Cabassa and Baumann 2013; Mejia et al. 2017). However, we have also faced challenges in that researchers tend not to report their adaptation process or the justification for the adaptations made (Baumann et al. 2015), compromising the scientific integrity and replicability of adaptations and limiting our ability to test how to adapt our interventions in a cost-effective and sustainable way (Baumann et al. 2017).

Partnering and Involving Local Communities in Research Efforts

Partnering with and involving the community is a crucial component. Accordingly, the field of implementation science has increasingly embraced the principles of community-based participatory research (CBPR; Blachman-Demner et al. 2017; Holt and Chambers 2017). Community engagement is multilevel (Brown et al. 2014; Mazzucca et al. 2018) and involves stakeholders in many roles. Particularly in global health, the concept of community and stakeholders needs to be broadened, as we often face a lack of trained professionals able to deliver interventions (Belfer 2008). Because the intervention is often being delivered while its evidence is still being evaluated, we often need to train not only practitioners but also research staff to evaluate the work (Weiss et al. 2012).

Our team has adopted a train-the-trainer model, with the goal of having a full transfer of knowledge and skills to the new setting adopting the intervention (Baumann et al. 2014). In Mexico, the approach has been to train researchers (faculty, as well as graduate and undergraduate students), in addition to clinicians, on the EBI to be implemented. The goal of such a tiered approach is to support the sustainability of the interventions in the long term: the researchers will maintain the quality of research and evaluation, and the clinicians and therapists will support the work by delivering the intervention. Training clinicians in a train-the-trainer model also supports a social justice empowerment approach by diminishing the risk that, when the intervention is ready to be scaled up within the country, the process would rely solely on a top-down approach of foreign researchers training everyone. As such, our research stakeholders in the countries we work with make a point of maintaining close relationships with clinicians and therapists from different agencies to ensure that the work being done is relevant to them. In this way, scale-up is not taken as an order that descends hierarchically from the authorities (from top to bottom), but as a need that arises from the clinicians themselves. This makes it more likely for clinicians to understand and have a voice when implementing EBIs.

Involving multiple stakeholders from other countries is not an easy task, however, as we often face challenges in funding the training (Baumann et al. 2014). The level of interest from authorities and leaders in the countries we have worked with has varied, and in some places, the in-depth training our interventions require and the process evaluation of our work may be considered by some leaders more of a challenge than an opportunity. Part of our work, therefore, involves activism and lobbying to convince policy makers and donor agencies of the importance of collecting evidence that will improve the implementation process of their interventions. For this, collaborations among prevention researchers in LMICs and between those in HICs and LMICs are key.

The Feedback Loop: Giving Results to Local Policy Makers

Conducting studies in collaboration with local policy makers is key for successful implementation. In the case of Panama, a trial of the Strengthening Families Program 10-14 is currently under way. To ensure sustainability after the trial is over, the study was designed so that policy makers committed the time and resources of their health practitioners to delivering the program. If the program proves effective, the capacity will already be built within the Ministries of Health and Education, helping implementation take place smoothly. This kind of buy-in from local policy makers will support sustainability and effective implementation. Although testing the efficacy of interventions is very important to establish whether they have the potential to produce changes in children and their families, process evaluations give us information on whether the intervention is appropriate and sustainable in a particular context. There is a need for comprehensive process evaluations in LMICs, using implementation frameworks that allow discussion of the sustainability of interventions in a particular context (WHO and ExpandNet 2011). Sustainability is particularly relevant to LMICs, where governmental systems change often and corruption is a key factor affecting the provision of services (Parra-Cardona et al. 2018). Moreover, cost-effectiveness analyses are also important, as we need to ensure that EBPIs imported from HICs are economically sustainable in a different context (Duncan et al. 2017). The above factors highlight the acute need for the field of implementation research to support the work of global health researchers in preparing for the implementation of EBPIs in LMICs.

Lessons Learned: Recommendations for the Implementation of Evidence-Based Parenting Interventions in LMICs

To be able to discuss research on the scaling out of EBPIs, we first need to define an EBI. An intervention is considered evidence-based if (1) it is included in a federal registry of evidence-based interventions; or (2) it has produced positive effects on the primary targeted outcome, and these findings have been reported in a peer-reviewed journal; or (3) it has documented evidence of effectiveness, such as (a) documentation of a theory of change and (b) replication of findings (SAMHSA 2016). One important caveat to this definition, however, is that much of the research done to establish the effectiveness of these interventions has been conducted with relatively few disadvantaged persons in the trials, and with the majority of trials in the USA (Baumann et al. 2017; Yancey et al. 2017; Yancey et al. 2006). As such, implementing these interventions in LMICs presents challenges, in that the effect of EBPIs is not the same for everyone (Gardner et al. 2017; Leijten et al. 2018a, b; van Aar et al. 2017). Given such drawbacks, we caution against the assumption that if an intervention has evidence in HICs, it can be implemented in LMICs without previously examining the context. As we described above, a lot of work needs to be done, even at the measurement level, to be able to examine whether an intervention has evidence in an LMIC.
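The decision logic above can be made concrete. Below is a minimal sketch, in Python, of the SAMHSA-style criteria expressed as a simple eligibility check; the field names are hypothetical labels we introduce for illustration, not an official schema.

```python
# A minimal sketch of the SAMHSA (2016) criteria described above.
# The dataclass fields are hypothetical labels, not an official schema.
from dataclasses import dataclass

@dataclass
class InterventionEvidence:
    in_federal_registry: bool             # criterion 1
    positive_peer_reviewed_effects: bool  # criterion 2
    has_theory_of_change: bool            # criterion 3a
    findings_replicated: bool             # criterion 3b

def is_evidence_based(e: InterventionEvidence) -> bool:
    """True if any of the three SAMHSA-style criteria is met."""
    documented_effectiveness = e.has_theory_of_change and e.findings_replicated
    return (e.in_federal_registry
            or e.positive_peer_reviewed_effects
            or documented_effectiveness)

# Example: an intervention with a theory of change and replicated findings,
# but not yet in a registry, still qualifies under criterion 3.
print(is_evidence_based(InterventionEvidence(False, False, True, True)))  # True
```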

Choosing how to measure an intervention being implemented in an LMIC, however, is not a trivial task. Many interventions use self-report measures of parenting and child outcomes, such as the Alabama Parenting Questionnaire (APQ; Shelton et al. 1996) for parenting practices and the Child Behavior Checklist (CBCL; Achenbach 1991) for child outcomes. Some interventions combine observational tasks with self-reports of child behavior to triangulate outcomes (Patterson et al. 2010). However, some instruments are very time consuming to administer and/or are commercial and require payment per use, which makes them unsustainable for poorly funded projects in LMICs (Fernald et al. 2017). How can we then examine the evidence of an intervention with often burdensome measures? How can we detect trends in child development to inform policy and intervention implementation if the measures are not comparable across settings that do not have resources?

The World Bank Group provides a set of ideal characteristics of an assessment, which should, among other things, be appropriate, interpretable, easy to administer, and of low cost (Fernald et al. 2017). To address these issues, scholars have advocated for evidence-based assessment (EBA; American Psychological Association 2006; Hunsley and Mash 2007), in which both the process of conducting the assessment and the instrument used for evaluation are carefully selected through a systematic, empirically based, research-driven approach (Beidas et al. 2015). While much has been published on EBA, with recommendations of measures for youth and adult outcomes (Hunsley 2015; Hunsley and Mash 2005; Mash and Hunsley 2005; Roberts et al. 2017), much remains to be done on measures for low-resource settings (Beidas et al. 2015), particularly for international communities. Without good measures that are practical, free, valid, and translated into different languages, we will continue to struggle to provide client-centered, equitable services to our communities in LMICs.
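As a sketch of how such criteria might be operationalized when shortlisting instruments, the following snippet filters hypothetical candidate measures on cost, administration time, and local validation; the measures, fields, and thresholds are illustrative assumptions, not recommendations.

```python
# A hedged sketch of screening candidate measures against criteria like
# those summarized above (appropriate, easy to administer, low cost).
# The measure entries and thresholds are hypothetical.
candidate_measures = [
    {"name": "Measure A", "cost_per_use_usd": 0.0, "admin_minutes": 15, "validated_locally": True},
    {"name": "Measure B", "cost_per_use_usd": 4.5, "admin_minutes": 60, "validated_locally": True},
    {"name": "Measure C", "cost_per_use_usd": 0.0, "admin_minutes": 25, "validated_locally": False},
]

def feasible(m, max_cost=0.0, max_minutes=30):
    # Keep only measures a poorly funded project could sustain.
    return (m["cost_per_use_usd"] <= max_cost
            and m["admin_minutes"] <= max_minutes
            and m["validated_locally"])

print([m["name"] for m in candidate_measures if feasible(m)])  # ['Measure A']
```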

Second, when choosing and implementing an intervention in diverse settings, attention needs to be given to the assumptions made regarding the mechanisms of action. For parenting interventions specifically, the underlying assumption is that they will help reduce conduct problems in children (Weisz and Kazdin 2010). However, data have shown that a third of the families exposed to parenting interventions fail to show improvement on child outcomes (Leijten et al. 2018b). While much remains to be learned about which components of the interventions help whom, some scholars have hypothesized that variables such as contextual factors may play a role (Gardner et al. 2017). For example, time out may be received differently by White American parents used to this technique in the USA than by immigrant parents or parents in LMICs (Domenech Rodríguez et al. 2011; Furlong and McGilloway 2012; Leijten et al. 2018a, b). It may be that the source of the information matters: parents may see time out as an American way of teaching things, or it may be that the strategy is simply not the best for everyone. Much research needs to be done to disentangle the specific components of parenting interventions that are effective for different populations. As Leijten et al. (2018a) argue, questions around the clinical effectiveness of an intervention may be more fruitful if we move from asking “who benefits?” to asking “who benefits from what, when and how?” (Leijten et al. 2018a, b). In fact, we argue that we could go even further with this question by adding components of implementation science to our work as we scale out EBPIs in LMICs. We advocate two ways to answer these questions: through study designs and through the clear engagement of stakeholders in our work.
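To illustrate the “who benefits from what, when and how?” question in analytic terms, the sketch below simulates a moderation analysis in which a contextual factor changes the treatment effect; the variable names, the simulated data, and the choice of moderator are hypothetical, and this is only one of many ways such questions can be modeled.

```python
# A hedged illustration of moving from "who benefits?" to "who benefits
# from what, when and how?": testing whether a contextual moderator
# (here, hypothetical prior familiarity with time out) changes the
# treatment effect. Variable names and simulated data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),                # assigned to EBPI or not
    "familiar_with_timeout": rng.integers(0, 2, n),  # contextual moderator
})
# Simulate a world where the intervention only helps families already
# familiar with the technique.
df["child_problems"] = (
    10
    - 2.0 * df["treated"] * df["familiar_with_timeout"]
    + rng.normal(0, 2, n)
)

# The interaction term carries the "for whom" question.
model = smf.ols("child_problems ~ treated * familiar_with_timeout", data=df).fit()
print(model.summary().tables[1])
```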

Implementation of EBIs in LMICs: Designs

We can hypothesize that the well-cited finding that it takes 17 years to turn 14% of original research to the benefit of patient care (Balas and Boren 2000) may be even worse when considering the context of LMICs. Part of the challenge in this delay involves the designs used to test the efficacy of interventions. Traditionally, investigators would be expected to conduct RCTs to test the efficacy of an intervention prior to scaling it out to usual care (Curran et al. 2012). However, RCTs present a host of challenges for studies in low-resource settings and with minority populations: randomizing members of small communities raises ethical and practical concerns, and randomization withholds beneficial interventions from members of communities that are often in dire need of basic services (Dixon et al. 2016). Additional risks, such as historical or political events that can affect vulnerable and low-resourced communities, may also pose a challenge to conducting RCTs and to clearly defining the effects of the intervention (Baumann et al. 2011).

Because of such challenges, researchers have recently advocated for other types of designs, such as regression discontinuity, interrupted time series designs, and roll-out randomization designs (Henry et al. 2017). A design that has recently been advocated for parenting interventions is the multiphase optimization strategy (MOST), a framework that includes the evaluation of behavioral interventions while also optimizing the intervention before its evaluation (Collins 2018). The advantage of MOST designs is that they allow researchers to evaluate which components of a multicomponent EBPI contribute to the overall outcome, and which components produce an effect large enough to justify their cost of implementation (Collins 2014). Creative designs such as MOST and other adaptive designs may be a better option for answering which components of EBPIs can best be implemented in LMICs given the realistic constraints of low-resource settings, following the principle that “the best experimental design is the one that gathers the most, and most relevant, scientific information while making the most efficient use of available [resources]” (Collins 2014). We therefore encourage researchers to think about different designs that may be able to accommodate issues of internal and external validity (Landsverk et al. 2017), as well as the practical, pragmatic, and ethical considerations of RCTs, when implementing EBPIs in LMICs (Brown et al. 2017; Curran et al. 2012; Landsverk et al. 2017; Mazzucca et al. 2018). An overview of the different designs proposed by implementation scientists to accelerate the reach of EBPIs is beyond the scope of this manuscript but can be found in other reviews (Landsverk et al. 2017; Mazzucca et al. 2018).
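As a concrete illustration of the factorial experimentation that MOST builds on, the sketch below enumerates the conditions of a full factorial experiment crossing three candidate EBPI components; the component names are hypothetical examples, and an actual MOST trial would involve further design decisions (e.g., fractional factorials, cost constraints).

```python
# A minimal sketch of the optimization phase MOST builds on: a full
# factorial experiment crossing candidate EBPI components so that each
# component's contribution to the outcome can be estimated separately.
# Component names are hypothetical, not from a specific trial.
from itertools import product

components = ["praise_module", "time_out_module", "home_visit_booster"]

# 2^3 = 8 experimental conditions: each component is on (1) or off (0).
conditions = list(product([0, 1], repeat=len(components)))
for i, cond in enumerate(conditions, start=1):
    included = [c for c, on in zip(components, cond) if on] or ["core only"]
    print(f"condition {i}: {', '.join(included)}")
```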

Engaging communities is a key component of our work. While scale-up frameworks tend to include stakeholder engagement as a key component, there is limited empirical guidance on the key actions and best practices of stakeholder involvement (Goodman and Sanders Thompson 2017). To address this, Goodman and Sanders Thompson (2017) describe three categories of stakeholder engagement: (a) non-participation, (b) symbolic participation, and (c) engaged participation. Engaged participation, according to the authors, involves collaboration (i.e., both researchers and community members are actively involved in the design and implementation of the project), a patient-centered process (i.e., community stakeholders are the main decision makers in the design and implementation process, as well as in publications, whereas researchers support the process but do not lead it), and community-based participatory research (i.e., where there is trust between community stakeholders and researchers, an equitable partnership, and shared decision making). The authors state that different members of the community can have roles, but that true stakeholder involvement entails giving voice to those who traditionally have limited power and input in the research design and implementation process. The follow-up question, then, is how to evaluate the engagement of your stakeholders. Goodman et al. (2017) provide a survey measuring 11 principles of community involvement, from acknowledging the community (e.g., showing appreciation for the community’s time and effort) to disseminating findings and facilitating a collaborative and equitable partnership. While the survey still needs to be empirically tested in different settings as a potential predictor of community engagement, it could provide a useful guide for global researchers to support their community work and social justice as they scale out evidence-based interventions.
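As a sketch of how a team might summarize responses to such a survey in practice, the snippet below averages item ratings within each principle to flag where engagement is weakest; the principle labels shown and the 1-5 rating scale are illustrative assumptions, not the instrument’s actual scoring rules.

```python
# A hedged sketch of summarizing a stakeholder engagement survey like
# Goodman et al.'s (2017): average item ratings within each principle
# to flag where engagement is weakest. Labels and scale are illustrative.
from statistics import mean

responses = {
    "acknowledging the community": [5, 4, 5],
    "disseminating findings": [3, 2, 3],
    "collaborative, equitable partnership": [4, 4, 3],
    # remaining principles omitted for brevity
}

scores = {p: mean(items) for p, items in responses.items()}
# Lowest-scoring principles first, to prioritize improvement efforts.
for principle, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{principle}: {score:.1f}")
```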

Knowledge Sharing as a Two-Way Process

The advantage of positive partnership is, of course, related to the sustainability of any global work. In many ways, scale-up frameworks assume a one-way arrow of information from HICs to LMICs. A bi-directional relationship in global work is crucial to avoid colonialist practices that would only reinforce oppression if the work were done in a unidirectional way (Parker et al. 2017). To avoid colonialism in our global work, Parker et al. (2014) advocate for (1) clear agreement and shared goals between all parties; (2) equitable distribution of power, including opportunities to change the design and implementation of the intervention; (3) equal incorporation of local knowledge and perspectives when developing and recognizing skills and expertise; (4) ongoing communication based on honest exchanges and a willingness to raise concerns; and (5) trust.

Bi-directional communication between HIC and LMIC partners is not only beneficial for good research practice grounded in social justice, but also serves to “bring back” lessons learned from LMICs. The notion that the knowledge gained in LMICs is relevant to HICs is not new and is well documented in different areas of the literature (Harris et al. 2016). Different names have been used to label the process of “bringing back” the lessons learned to HICs, such as “reverse innovation” (Bhattacharyya et al. 2017; Immelt et al. 2009; Trimble and Govindarajan 2012), “innovation blowback” (Brown and Hagel 2005), and “social innovation” (Chambon et al. 1982). However, the field of global health still has a lot to learn, as the innovations and lessons learned from LMICs sometimes tend to be discounted and undervalued (Harris et al. 2016), including bias against publications and shared information from LMIC researchers (Harris et al. 2017).

Conclusions

The implementation of EBIs in LMICs is a complex process that calls for creativity and adaptability while maintaining scientific rigor grounded in social justice principles. It is through bi-directional communication and the sharing of knowledge among scholars engaged in global work that access to culturally and contextually relevant parenting programs will be expanded. It will take a committed community of scholars to make these needed programs more accessible and sustainable. We urge global health researchers to collaborate with implementation science researchers to draw on frameworks and creative designs that could test the efficacy of EBPIs in LMICs while also accelerating the uptake of these interventions (Betancourt and Chambers 2016). We also urge treatment developers to think about the transportability and sustainability of their interventions in different settings. Designing and testing interventions using methods such as user-centered design (Baumann et al. 2017; Lyon et al. 2014; Lyon and Koerner 2016) could be a beneficial way to balance scientific integrity with social justice in LMICs.