Introduction

The latter half of the twentieth century has brought major advances in the development of effective behavioral interventions for a wide range of health issues and conditions facing our nation [1]. Among the health contributions made by the behavioral and social sciences are advances in methods and interventions for reducing tobacco use [2]; advances in behavioral risk factor management that have contributed to reductions in cardiovascular disease and enhanced chronic disease management [3, 4]; successes in demonstrating that lifestyle intervention (i.e., regular physical activity and reasonable weight loss) can reduce the risk of developing type 2 diabetes in high-risk adults [5–7]; and continued contributions to the prevention and control of HIV/AIDS [8].

The advent of such advances has brought an increased awareness that, as in other health fields, ‘one size does not fit all’ [9]. In fact, ‘individualized’ or targeted medicine approaches to intervention have emerged in a number of fields, spearheaded by the latest discoveries in areas such as pharmacogenomics that have begun to identify patient subgroups with particular genotypes that may predispose them to respond favorably or poorly to pharmacological interventions [10]. The strengths of a targeted perspective, whereby subgroups of people are assigned to specific interventions based on certain characteristics [10], include the potential for enhanced intervention efficacy as well as cost savings accrued through matching intervention-relevant resources to participant requirements [11]. Given these strengths, identification of ways to enhance program targeting is currently recognized as a key public health goal [12–16]. The targeted medicine perspective, in essence, modifies the question typically asked in randomized clinical trials (RCTs) and similar intervention studies from ‘does this intervention work?’ to ‘does this intervention work, and for whom and under what conditions?’ [16–18]. The targeted medicine perspective also aids decision-making related to which participants should not receive an intervention (e.g., because of lack of benefit, side effects, or other types of adverse events) [16, 18, 19]. Similar to the long-standing scientific concern about overgeneralizing RCT results to populations excluded from the RCT [20], the targeting perspective aims to better understand the extent to which the results from an RCT may (or may not) apply to each subpopulation included in the RCT [21].

The purpose of this paper is to:

  • present one potentially useful definition of targeting within the context of intervention development, adaptation, and delivery;

  • describe potential targeting domains that are of particular relevance to behavioral medicine;

  • briefly discuss the use of different statistical approaches and methods to aid the targeted intervention development process; and

  • discuss the challenges and opportunities accompanying the incorporation of targeted intervention development methods into behavioral RCT research.

We acknowledge that what we are describing is one of a range of perspectives related to targeting and applications of exploratory investigation in the intervention development field. Our hope is that the perspectives represented in this paper may stimulate further discussion of these issues as they relate to the behavioral medicine arena.

The current article focuses specifically on RCT research. Dismantling interventions already shown to be effective, in order to better understand their ‘mechanisms of action,’ is seen as an appropriate next step after a successful RCT for optimizing the cost-effectiveness of such treatments.

Definition of Terms

Whereas the term ‘targeting’ has been used in a variety of ways [22], for purposes of this paper we define targeting as the systematic matching of subpopulations of individuals to specific interventions based on the characteristics of the subpopulation (i.e., moderators) (Table 1). Table 2 includes examples of person-related targeting domains as well as the types of intervention-related contextual variables (i.e., the conditions surrounding intervention delivery) that may be applicable in optimizing intervention success for different subpopulations. Our perspectives build on the lessons learned from previous efforts to enhance intervention efficacy through client targeting [23], including the importance of choosing targeting characteristics that have a specific theoretical and empirical basis; pursuing targeting research in a defined population group; exploring empirically derived combinations of stable, easy-to-measure participant characteristics (as opposed to single characteristics); and evaluating the robustness of the targeting variables using several interventions that are distinct with respect to potentially important intervention matching dimensions (e.g., intervention content, delivery source, channel, location) [11, 15, 23, 24]. In addition, we recognize that, because of budgetary constraints, many experimental studies are not adequately powered to evaluate statistically significant moderator (i.e., subgroup × treatment) effects. Approaches to exploring subgroup effects within such constraints are discussed in the statistical methods section below. Our definition of ‘targeting’ is directed toward choosing which of two or more interventions is best for a particular subgroup of individuals [17]; by way of analogy, it is akin to selecting which jacket best suits a customer’s needs. This paper is focused primarily on the targeting endeavor.

Table 1 Definitions of the terms ‘targeting,’ ‘tailoring,’ and ‘prediction’ based on the MacArthur group recommendations [17, 42]
Table 2 Examples of potential (person-related) targeting domains and the types of intervention-relevant contextual factors with which they may interact (i.e., intervention matching factors)

We consider the above targeting endeavor (i.e., selecting to whom to give which intervention using baseline information about participants, also referred to by some researchers as ‘matching’) to differ from the more dynamic individual ‘tailoring’ activities occurring within a particular intervention [25]. Although, as with targeting, different definitions of tailoring have been used in the scientific literature, for purposes of the present discussion we apply the MacArthur group’s definition of ‘tailoring’ as adapting the particular intervention an individual is receiving to that individual’s specific needs [26], i.e., akin to altering the selected jacket to the customer’s specific measurements. Within this context, an example of tailoring is adapting the problem-solving activities found in many cognitive–behavioral interventions to the specific barriers experienced by individual participants during treatment.

Both ‘targeting’ and ‘tailoring’ of interventions to individuals are vital to consumer health and safety as well as to reducing unnecessary societal health costs. In randomized clinical trials, however, both issues are often ignored, and the two terms are frequently confused with one another and with activities focused on the prediction of intervention response. To encourage more uniform use of these terms in the health and mental health fields, the MacArthur group has defined ‘prediction’ as an activity that does not necessarily involve comparison of two or more interventions, as does ‘targeting,’ nor consideration of two or more versions of a particular intervention, as does ‘tailoring.’ Rather, they define ‘prediction’ as using the information available at a given time to identify those individuals who may do better or worse with a particular intervention. Whether the participant would have fared as well or more poorly with other interventions (or other versions of an intervention) is not considered. Thus, one might choose to deliver a specific intervention to certain types of individuals defined by predictors, ignoring the fact that those individuals might have done as well or better with no intervention at all or with another intervention; that is, the comparison with other types of interventions is lost.

Whereas exploring predictors of success (or failure) within a particular intervention does not provide specific information related to targeting subgroups for one type of intervention versus another, it can provide other types of useful information related to intervention refinement. For example, predictor analysis has been used to identify persons at risk for long-term relapse after smoking cessation interventions [27], participant subgroups that may or may not respond to community cardiovascular disease interventions [28], and subgroups that may respond well or poorly to automated health promotion interventions [29]. Exploratory predictor analysis can also be used as part of post-RCT intervention dissemination research (which is typically limited to pre–post designs) to identify subgroups for which intervention refinements may be particularly indicated. For instance, in an exploratory analysis of predictors of success in the Robert Wood Johnson Foundation’s Active for Life Physical Activity Dissemination Initiative [30], participant subgroups reporting living in neighborhoods with heightened traffic and crime levels were less successful in increasing physical activity via an evidence-based telephone-assisted intervention than subgroups reporting living in neighborhoods more supportive of walking [31]. Such results set the stage for experimental studies evaluating further refinements of the intervention for these less successful subgroups.

The above issues notwithstanding, clinical (or public health) decisions always involve choices between two or more courses of action, which is why randomized clinical trial (RCT) methodology requires a control or comparison group against which the efficacy or effectiveness of an intervention is assessed. As is widely recognized in the scientific community, pre–post changes within one intervention often reach statistical significance, yet such changes may reflect statistical artifacts rather than clinically meaningful changes in the condition or behavior of the participant. For such reasons, it is considered prudent to base targeting decisions on variations in the effect size of an intervention relative to the comparison condition that are associated with information known about the participant before intervention onset, i.e., on the moderators of intervention response. Targeted intervention development may also involve adapting interventions to accommodate the needs of future target groups.

Whereas tailoring and prediction activities represent important aspects of the intervention endeavor [32, 33], the focus of this article is on the choice of which intervention to deliver to specific participant subpopulations based on initial or baseline information about those participants (i.e., targeting). In the case of targeting activities of this type, the goal is to assign a particular type of intervention to a specific subpopulation of individuals who are homogeneous on one or more (ideally) easy-to-measure characteristics (i.e., moderators) [21].

Exploratory Activities to Aid Intervention Targeting—Background and Rationale

Exploratory data analysis has long been recognized as a key component of scientific inquiry and has been supported for a number of decades by well-respected statisticians and methodologists as well as national scientific organizations [34–37]. The aim of exploratory data analysis is primarily hypothesis generation, as opposed to hypothesis testing. A well-known example of such exploratory analysis is represented in the investigative activities occurring as part of the Human Genome Project. As such, exploratory data analyses are guided by a set of principles and procedures that generally differ from the principles that apply to hypothesis-testing investigations, given the different goals associated with these two avenues of gaining knowledge. Such differences can include relaxing or suspending formal determinations of power and statistical adjustments related to type I error [34–36]. The caveat that, in the hypothesis-generating context, p values serve as simple ‘sign posts’ indicating that further scientific attention may be warranted, rather than as tests of formal hypotheses, needs to be made explicit [34–36]. Perhaps because of the growing familiarity with and use of RCT methods in the behavioral medicine field as well as other scientific arenas, however, scientists often express discomfort when faced with exploratory analyses that do not apply hypothesis-testing statistical methods, although insisting on such methods can severely limit the utility of exploratory (hypothesis-generating) activities [34–37]. We believe that further discussion of the principles and procedures accompanying exploratory research, as discussed by Tukey, Behrens, and other luminaries in the statistical field, can enhance behavioral medicine inquiry.

Recent discussions of the utility and methods underlying moderator and related forms of subgroup analysis occurring within the context of RCTs have been published in major medical journals [21, 38]. As part of these discussions, it is useful to distinguish between subgroup analyses that divide the total sample into smaller distinct subgroups and test for treatment effects within each subgroup, and moderator analyses (the focus of this article) that use the entire sample and test for differential treatment effects associated with baseline variables.
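For readers less familiar with this distinction, the moderator approach can be expressed as a single model fit to the entire sample. The notation below is a generic sketch of that idea (our illustration, not a formula taken from the cited discussions):

$$Y = \beta_0 + \beta_1 T + \beta_2 X + \beta_3 (T \times X) + \varepsilon$$

Here Y is the outcome, T the intervention assignment, and X the baseline variable; a nonzero $\beta_3$ indicates that the intervention effect differs across levels of X. Subgroup analysis, by contrast, estimates and tests the intervention effect separately within each level of X.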

Some methodologists have recommended formal testing and reporting of a priori moderator effects with appropriate adjustments for multiple testing (including the number of interaction effects one would expect to reach statistical significance through chance alone) in the primary publication of a clinical trial [38]. This is a reasonable and prudent approach when likely moderator effects can be appropriately specified and powered for in the RCT design. There is often, however, insufficient evidence to support a priori hypotheses related to moderator effects in many intervention areas. Given this situation and the potential clinical and public health value of throwing a wider ‘net’ in attempting to better understand combinations of variables that may be linked to intervention success or failure, alternative methods deserve consideration. Hypothesis-generating exploratory research activities, undertaken as part of secondary data articles generated from RCT investigations, can help identify potential moderators of intervention for future hypothesis testing. The results from such hypothesis-generating activities can thus serve as the pilot work to inform the next generation of RCTs aimed at formally testing such moderator effects [21].
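As a simple arithmetic illustration of the chance-findings point above (our example, not drawn from the cited recommendations): with m independent interaction tests each conducted at significance level α, the expected number of ‘significant’ results when no true moderator effects exist is m × α; for instance, 20 exploratory interaction tests at α = 0.05 would be expected to yield about one significant result by chance alone.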

The ‘bottom line’ with such exploratory lines of inquiry is that their findings require subsequent formal testing using standard hypothesis-testing methods before one can decide whether they are truly important in advancing interventions or simply ‘noise.’ Yet, by forgoing such exploratory data analysis as a standard part of secondary clinical trial analyses, the risk of missing serendipitous findings or unexpected associations that could be used in the subsequent development and testing of refined interventions may be increased. Unfortunately, the testing of ambiguous interventions that were not substantively informed by empirical observations or evidence before being formally evaluated in an RCT design has presented problems in a variety of scientific fields, including behavioral ones [23].

It is unfortunate that hypothesis-generating activities have received the dubious label in many scientific quarters of being ‘fishing expeditions.’ Such negative labels are likely a response to published exploratory analyses that have drawn specific scientific conclusions from the results (i.e., turning hypothesis-generating activities into hypothesis-testing activities). Drawing such conclusions from exploratory analyses is clearly inappropriate. As noted in the guidelines summarized by the Task Force on Statistical Inference of the APA Board of Scientific Affairs, “Each form of research [including exploratory research] has its own strengths, weaknesses, and standards of practice” ([34–37], p. 594).

The prime utility of exploration is to generate hypotheses concerning how to advance or improve interventions that can subsequently be tested using experimental designs. Whereas it is certainly possible that exploratory analyses may yield little constructive information, it is, in our experience, more likely that such analyses will provide useful observations that can be applied subsequently in experimental testing of refined versions of an intervention in the population subgroups that may most benefit from them (i.e., intervention targeting) [39]. An excellent example of exploratory research aimed at better understanding potential subgroup effects in a multisite RCT is the work published by Schneiderman et al. on the Enhancing Recovery in Coronary Heart Disease (ENRICHD) trial: whereas the overall trial showed no differences between treatment and control arms, a moderator analysis indicated that white men, although not other subgroups, may have benefited from the ENRICHD intervention with respect to medical endpoints [40]. Such exploratory work can set the stage for the development of more powerful interventions for the other subgroups (e.g., women, ethnic minority men) that were found to receive less benefit from the original intervention [40].

Targeting Dimensions: A Myriad of Choices

Growing awareness of the multidimensional influences impacting health behaviors has led to greater acceptance of more complex conceptual models for understanding health behavior change [1, 41]. A similar understanding has developed concerning the variety of dimensions of potential relevance for intervention targeting. Targeting variables, also known as moderators, have been defined by the MacArthur group as baseline variables that are uncorrelated with intervention assignment (a consequence of randomization) and that delineate subgroups that may be more or less successful with the intervention to which they have been assigned relative to another intervention [17, 21, 33, 42]. Such variables can be used in subsequently matching target groups to appropriate interventions as well as in adapting interventions to accommodate the needs of a target group.

As shown in Table 2, targeting variables generally consist of person-related variables. Whereas the most typical person-related variables used in intervention matching have been demographic variables, a number of other domains (e.g., psychosocial [including patient preferences], behavioral, cultural, biological, genetic) can also be used in matching persons with interventions that may promote enhanced intervention success [39, 43, 44]. For instance, recent advances in diagnostic methods related to breast cancer have added potential moderators (e.g., estrogen receptor status) to the cancer treatment field [45]. Similarly, potential behavioral moderators (e.g., number of cigarettes smoked per day) of successful smoking cessation intervention have been reported [46], as well as potential biologic moderators (e.g., insulin resistance) of weight loss dieting success in overweight individuals [47–49].

Table 2 additionally summarizes the types of intervention-relevant contextual domains to be considered in modifying interventions to optimize subgroup success. Intervention-related dimensions include delivery source (e.g., health educator, automated delivery agents), channel (e.g., face-to-face, print, internet, electronic devices such as cell phones), content (e.g., the health behaviors being targeted, instructional orientation), timing (e.g., daily, weekly, during ‘critical periods’ of development), dose (e.g., contact duration, frequency), social context (e.g., individual, group-based, internet chat room), and location (e.g., health setting, home, neighborhood, school, workplace). For instance, in the case of the insulin-resistant persons participating in weight loss interventions referred to earlier [47–49], some studies have suggested the potential importance of dietary content (i.e., low-glycemic load foods) in enhancing weight loss success. Certain intervention-relevant domains (e.g., delivery source, channel, dose) may also hold promise for curbing costs and extending intervention reach into diverse population segments [39, 43, 44].

Finally, with the increased appreciation of the utility of ecological frameworks in influencing health behaviors and other aspects of health [50], physical environment variables (both objective and perceived) could also conceivably be added as a potential targeting domain in future research (for example, persons living in particular neighborhood environments could be treated as a subgroup for intervention targeting purposes relative to persons living in different environments).

Innovative RCT designs have been increasingly applied in some health fields to advance knowledge related to treatment matching and subsequent intervention success [51, 52]. For example, in the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial, the goal was to more accurately mimic actual clinical practice by offering patients who did not achieve initial citalopram-related remission of their depression, or who experienced too many side effects, a choice regarding their subsequent treatment [53, 54]. In this manner, different subgroups of patients could be identified with respect to future matching of specific patients and interventions [55].

In addition to such innovative types of designs, RCTs and other experimental research present an excellent opportunity to collect subgroup information that can inform subsequent targeting activities. A critical time point for collecting such information is often the baseline assessment. Planned exploratory analysis using baseline variables, conducted after the intent-to-treat analysis (i.e., primary outcomes paper) is completed, can in turn suggest hypotheses to be examined in follow-up experimental research designed specifically to test them [46, 56]. To broaden the range of potential moderators of intervention effects suggested by these secondary exploratory analyses, health areas can benefit from the application of potentially relevant theories that have received less systematic attention in the target area, in addition to including variables identified as potentially important in the empirical literature. An example of this type of cross-disciplinary work is the increasing exploration of self-determination theory—developed and applied originally in nonhealth arenas—in the physical activity [57, 58] and smoking [59] intervention fields. To enhance eventual intervention dissemination, it may be particularly useful to focus on those variables that could be easily assessed in nonresearch settings.

Statistical Approaches to Aid the Targeted Intervention Development Process

There are several readily available statistical methods for exploring moderators of intervention. The most common involves linear modeling. With a continuous outcome measure, an appropriate example is linear regression analysis. With an analysis focused on time to an event (e.g., time to remission), an example is the Cox proportional hazards model. With a binary outcome, an example is logistic regression (which is mathematically related to linear discriminant analysis) [60]. In all of these regression analyses, a ‘risk’ score is developed that can be applied to each individual, with higher values associated with less-preferred outcomes [61]. Decisions concerning how many or what types of baseline predictor variables to enter into the regression analyses, including which variable interactions are specified and what type of variable entry method is applied (e.g., simultaneous, stepwise), are typically based on investigator judgment in combination with sample size and variable collinearity considerations [62].
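As a concrete, purely hypothetical illustration of the risk-score idea, the sketch below fits a logistic regression to a few baseline variables and assigns each participant a predicted risk of a less-preferred outcome. The data file and variable names (age, bmi, stress, success) are assumptions made for the example rather than variables from any cited trial.

```python
# Hypothetical sketch: deriving a 'risk' score from baseline variables via
# logistic regression. File and variable names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_baseline.csv")  # assumed columns: success (0/1), age, bmi, stress

# Simultaneous entry of investigator-selected baseline predictors
model = smf.logit("success ~ age + bmi + stress", data=df).fit()
print(model.summary())

# Higher values of the risk score correspond to less-preferred outcomes
# (here, a lower predicted probability of success).
df["risk_score"] = 1 - model.predict(df)
```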

In regression analysis, the researcher can test for moderation by including as independent variables intervention assignment, the baseline variable of interest, and the interaction between intervention assignment and the baseline variable [17, 21]. A significant interaction indicates a possible moderator effect (to be evaluated further through additional consideration of clinical significance or relevance). An example of this type of moderator analysis occurred in a recently published examination of perceived environmental factors as potential moderators of physical activity intervention effects across three RCT data sets collected as part of the NIH Behavior Change Consortium [63]. Moderator analysis of the three RCTs (located in Atlanta, GA; the San Francisco peninsula region; and Eugene, OR) showed that perceived neighborhood traffic safety issues (e.g., absence of pedestrian crosswalks, speeding drivers) were significant moderators of the intervention–physical activity relationship across all three RCTs. In each RCT, intervention participants endorsing negative neighborhood traffic conditions showed significantly smaller increases in physical activity than intervention participants without these negative traffic conditions (differences >90 min of physical activity per week), whereas control participants showed no such impact on their physical activity levels (i.e., negative traffic conditions appeared to moderate the impact of the physical activity intervention in all three RCTs) [63].
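A minimal sketch of this kind of moderator test, assuming a continuous outcome and illustrative variable names (pa_change, treat, traffic_safety) rather than the actual Behavior Change Consortium variables, might look as follows:

```python
# Hypothetical sketch of a moderator (treatment x baseline) analysis fit to
# the full sample; file and variable names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rct_data.csv")  # treat coded 0 = control, 1 = intervention

# 'treat * traffic_safety' expands to both main effects plus their interaction;
# the interaction coefficient carries the candidate moderator effect.
model = smf.ols("pa_change ~ treat * traffic_safety", data=df).fit()
print(model.summary())

# A small p value here is a 'sign post' for a possible moderator, to be
# weighed alongside clinical relevance and confirmed in later research.
print(model.pvalues["treat:traffic_safety"])
```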

Under the assumptions of the above types of linear models, the effect sizes used are most frequently variations of Cohen’s d (the standardized mean difference between groups). Recently, univariate nonparametric methods have also been developed [64] based on the ‘number needed to treat’ (NNT), ‘area under the curve’ (AUC), or ‘success rate difference’ (SRD), effect sizes increasingly recommended [65, 66] for their clinical interpretability. Multivariate nonparametric approaches include various recursive partitioning models [28, 62, 67, 68], such as classification and regression tree analysis (C&RT) [69] and signal detection methods [70]. Such recursive partitioning models have been increasingly used in behavioral medicine RCT research to identify subgroups, defined by a combination of baseline variables, that were more or less successful with one type of intervention versus another. For example, exploratory signal detection moderator analysis of a 2-year physical activity RCT in initially sedentary, healthy 50- to 65-year-old adults indicated that the least successful subgroup (success being defined as achieving at least 66% adherence to the physical activity intervention at 2 years) consisted of those participants who had been randomized to the community-based exercise class intervention and who had baseline body mass index (BMI) values >27 [39]. Only 8% of this subgroup achieved the above level of intervention success. The most successful subgroup identified via this method consisted of participants who at baseline had average or below-average ratings of stress, below-average fitness levels measured via treadmill exercise testing, and ≤12 years of education, and who had been randomized to the telephone-assisted home-based exercise intervention (69% of them met the success criterion at 2 years) [39]. Similar recursive partitioning methods have been used to explore moderators of intervention success in other behavioral medicine areas such as weight loss [44].
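To illustrate how an exploratory recursive partitioning analysis of this kind might be set up, the sketch below grows a shallow classification tree over baseline variables plus treatment arm. The feature names, data file, and tuning choices are our assumptions, and this is not the signal detection software used in the cited studies.

```python
# Hypothetical sketch: exploratory recursive partitioning (C&RT-style) to
# suggest baseline-by-treatment subgroups differing in intervention success.
# Feature names and parameter settings are illustrative only.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("rct_data.csv")  # assumed numeric columns listed below
features = ["arm", "bmi", "stress", "fitness", "education_years"]

# A shallow tree with a minimum leaf size keeps the partitions interpretable
# and limits (but does not eliminate) overfitting in this hypothesis-generating step.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
tree.fit(df[features], df["success"])

# Leaves whose splitting path includes 'arm' point to candidate subgroup-by-
# treatment combinations for later confirmatory (hypothesis-testing) research.
print(export_text(tree, feature_names=features))
```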

How Examination of Targeting Variables Fits into the Intervention Development Continuum

After suitable initial intervention development and pilot work, the continuum of research aimed at successful intervention evaluation typically consists of efficacy trials, which optimize internal validity; subsequent dismantling of effective interventions to better understand their ‘mechanisms of action’; effectiveness trials aimed at broadening the operational aspects of the intervention to increase suitability to ‘real-world’ settings; and translation/dissemination efforts and activities to increase uptake of the intervention across relevant segments of the population [71]. RCTs remain the most powerful design for establishing the causal link between an intervention and desirable health outcomes [72, 73]. Given, however, the relatively high costs of RCTs, it has become increasingly important for scientists to obtain the maximum amount of information available from well-run RCTs and similar experimental research to inform subsequent scientific and public health activities, particularly during restricted funding climates [18, 21]. In addition to the exploratory moderator activities described in this article, increased attention in the behavioral medicine field has been focused on including information on intervention reach (i.e., the percentage of eligible individuals in the target population who participate) and other aspects reflecting the external validity (generalizability) of RCT research [74]. Collecting both types of information as part of the experimental endeavor can serve to expedite the translation and dissemination efforts that are at the core of RCT research aimed at serving the public health [71].

Discussion

Identifying which behavioral interventions work best for whom and under what conditions (i.e., which programs for which subgroups under which environmental contexts to achieve which outcomes) remains a critical challenge for researchers and public health policymakers. This paper discusses some key conceptual and methodological issues in developing targeted behavioral interventions. Measuring a diverse set of theoretically and empirically relevant baseline variables provides information for planned exploratory analyses that can, in turn, set the stage for subsequent systematic, hypothesis-testing investigations. If moderators are ‘lurking’ in our data and we do not evaluate them, we may end up with attenuated effect sizes and nonsignificant results. By the same token, there are challenges attendant in discovering many moderators, as this can mean larger numbers of stratification variables and increased study complexity down the road. However, identification of a large number of moderators is rare in our experience [44, 63].

Once the above types of preliminary results have been obtained, the next phase in this development process involves conducting experimental investigations to verify the causal relationships between potentially relevant subgroups at increased risk for intervention success or failure and appropriately matched interventions. One method of doing this would be to randomize individuals representing the specifically targeted subgroup to either a ‘matched’ or ‘mismatched’ intervention. For example, as suggested in the Behavior Change Consortium moderator analysis of environmental factors and physical activity change [63], one could randomize the subpopulation of individuals residing in traffic-congested neighborhoods to receive either a standard physical activity intervention or the intervention coupled with additional instructional or neighborhood-based activities related to overcoming traffic-related physical activity barriers. If, in fact, the differential physical activity rates suggested from the post hoc exploratory analyses were verified using the experimental design, confidence in the subsequent matching of this specific subpopulation to the environmentally enhanced physical activity intervention would be increased.

It is important to underscore that, with all such exploratory investigations, the utility of the data generated is influenced by the level of theory applied as well as the adequacy of the psychometric properties of the measures being used. This perspective emphasizes the observation that every good study is both theory- and data-driven (i.e., the relationships between theory and data are transactional and iterative in nature).

Challenges of Incorporating Targeted Intervention Development Methods into RCT Research

Among the challenges attendant with the above systematic approach to targeted intervention development are:

  • Population specificity issues. It is critical that the population under study be defined clearly enough that subsequent research can be undertaken with the same population. At the same time, the sample needs to be sufficiently heterogeneous to allow for the detection of moderators.

  • Planning issues. The exploratory, hypothesis-generating approach being described will be most useful if researchers carefully consider, up front, a broad range of theoretically and empirically derived variables that may serve as intervention moderators. This requires care, forethought, and a willingness to engage researchers from other disciplines and perspectives as a means of broadening the types of moderator variables under consideration.

  • Complexity issues. It is becoming increasingly clear that the moderator field will advance most rapidly if the evaluation of higher-order interactions among baseline variables is considered and evaluated from the start. The increasing familiarity with and acceptance of nonparametric risk classification statistical methods as a means of identifying such higher-order interactions in the medical and behavioral sciences fields represents an important methodological advance of which researchers should take advantage [69]. Given the exploratory nature of such approaches, it remains important to replicate and verify the robustness of the identified higher-order interactions before full implementation of a behavioral intervention based on such targeting information [75]. It also bears repeating that the moderator analyses that we are proposing, which directly compare two or more interventions ‘head to head’ in identifying any subgroup effects, are distinct from forms of subgroup analysis that are undertaken within a particular intervention [38].

  • Relevance issues. It is important that baseline variables be chosen based not only on potential theoretical and empirical relevance, but also with an eye to feasibility of use in ‘real-world’ settings. Baseline variables consisting of short paper-and-pencil measures that are readily scored and feasible for community settings have distinct advantages in this regard.

  • Resource issues. Although reasonably inexpensive paper-and-pencil measures can be identified for demographic and many behavioral and psychosocial constructs of potential relevance to intervention matching, measures of biological and genetic variables invariably involve additional costs that may pose a problem for researchers and, by extension, community providers who constitute the ultimate intervention delivery source. Continued efforts to measure such biological variables in a manner that is reasonably inexpensive and convenient will aid future intervention translation efforts. Such advances have occurred over the past 20 years, including automated blood pressure units [76], simple finger-stick methods for collecting and analyzing blood samples [77], and portable devices for continuous glucose monitoring and ambulatory skin impedance/body temperature monitoring [78].

Finally, a separate resource issue involves the development of increasingly low-cost and wider-reach intervention approaches. The recent advances being witnessed in the communication technology field provide a means for advancing intervention delivery and reach through use of electronic and other mediated interfaces [79]. Such advances offer a potential expansion of interventions into population segments that heretofore have rarely sought out health interventions. Identifying which types of intervention delivery channels, sources, and messages will most appropriately serve the needs of such subgroups remains a major public health challenge that the targeted intervention field will hopefully be able to ultimately address.

In summary, whereas RCTs remain the bedrock of systematic scientific advances, the exploratory work discussed in this paper can help to make the most efficient use of RCTs to identify the best paths for subsequent RCT development. This approach can help to facilitate the advancement of the behavioral medicine field in a resource-constrained era. As noted by the late Michael J. Mahoney in “Suffering, Philosophy, and Psychotherapy” (2005):

Science is about questions, about quests. The best questions are those that explore the edges or center of our understanding. The best answers are those that lead to better questions and future quests (pp. 343, 345).