Introduction

During the past 50 years, the developed world has experienced a major shift in the leading causes of illness and death. Chronic illnesses now account for seven in ten deaths in the USA, with heart disease, obesity, cancer, and type 2 diabetes numbering among the most common. As our population ages and as scientific advances continue to transform terminal conditions into ones that people can live with (albeit often uncomfortably), it is likely that these numbers will continue to grow (WHO Global Status Report on Noncommunicable Diseases, 2014).

Thankfully, these diseases can be successfully managed or prevented in part by engaging in lifestyle behaviors such as maintaining a healthy diet, abstaining from tobacco use, drinking less, exercising regularly, and when necessary taking medication as prescribed. However, despite proven, widely known methods to alleviate some of the most deadly, burdensome, and costly chronic conditions of our time, millions of people struggle every day to do what is objectively good for them. Why is this?

Initiating and maintaining healthy behavioral change is a challenging endeavor—both for the individuals who are attempting to make changes and for the myriad practitioners providing support and guidance along their journey. Health behaviors and behavior change processes are complex, involving a web of personal, interpersonal, and environmental factors that influence our decisions and abilities to behave in certain ways. Changing behavior requires juggling multiple and often competing motives. It may require developing new skills and making fundamental shifts in how one orients to the surrounding social and physical environment. Complicating things further, problematic health behaviors tend to co-occur (e.g., people who smoke or have poor dietary habits tend to be less physically active), and certain conditions may require changing multiple behavioral patterns (e.g., weight loss efforts often focus on level of physical activity as well as dietary intake, both the amount and kinds of food eaten).

Finally, it is clear that most attempts to facilitate behavior change at individual, organizational, community, or population levels are executed via implicit common-sense models of behavior and behavior change rather than through a systematic application of theory, evidence, and technique. Effect sizes from such common-sense interventions trend toward minimal at best, particularly when delivered through digital means such as websites and native mobile applications.

Guidance from the Medical Research Council (MRC) Population Health Sciences Research Network (PHSRN) states that “best practice is to develop interventions systematically, using the best available evidence and appropriate theory” (Craig et al., 2008), but intervention designers and researchers need practical frameworks and methods to effectively bring theory and evidence into the fold. This chapter will outline one such process and highlight frameworks to strengthen the design, implementation, and evaluation of behavior change interventions.

Toward a Systematic Process of Behavior Change Design

Changing something requires that you first understand it. In the case of behavior change, we need to understand the nature of change at both broad and granular levels regarding specific target behaviors, populations, and the contexts toward which interventions may be applied. For behavior change interventions to be meaningful, they must target behaviors that are clinically significant, address the right determinants that predict target behaviors, and be delivered in a way that fits with the characteristics of the intended recipients, culture, and context.

Maximizing our ability to effect change requires an iterative, systematic process that integrates theory and evidence at every step from problem identification and framing through to solution design, implementation, and evaluation. Ideally, behavior change design methods should form part of a “virtuous spiral” in which empirical evidence is used to create an ever-improving design methodology that is applied to improve human well-being and whereby rigorous implementation insights feed back into the advancement of behavior change science.

What Is Behavior and How Does It Change?

“Behavior” can be defined as “anything a person does in response to internal or external events” (Davis, Campbell, Hildon, Hobbs, & Michie, 2015). Many individual behaviors are recurring and can be described as “behavior patterns” and characterized in terms of their frequency, intensity, and duration over a period of time. Smoking, overeating, physical inactivity, and staying up late are all examples of health-related behavior patterns. Behaviors are part of an integrated system such that any one behavior can be influenced by other behaviors of the same or other individuals as well as environmental affordances. These influences are dynamic and interact both positively and negatively with each other, and their relationships can change over time.

Behavior can be said to have changed when (1) activities in a particular context are undertaken differently from how they would normally have been performed; and (2) the incidence of one or more activities undertaken by individuals, groups, or populations differs from what it had been previously. In either case, the change may be maintained over a period of time or the behavior may revert to its original pattern. In most cases, for behavior change to translate meaningfully into improved population health, it must be sustained over the long term. It is important to note that the underlying factors influencing initiation and maintenance of behavior change may be different, and our strategies to facilitate change may need to be tailored accordingly.

Frameworks for Understanding Behavior and Behavior Change

Behavioral science is advancing rapidly, and there are many theories of behavior and behavior change that aim to explain and predict when, why, and how behavior change occurs (or does not occur). Designing or selecting effective strategies for behavior change needs to be based on a clear understanding of which behaviors are likely to be the easiest to change and deliver the greatest impact, as well as what the underlying individual, interpersonal, and environmental barriers and facilitators to the selected target behaviors may be.

Gathering evidence for a “behavioral diagnosis” is often conducted through systematic reviews of the scientific literature, in-depth interviews or surveys with domain or subject matter experts, target population groups, and other stakeholders, or less formally through collaborative workshop activities with the above groups. A critical step between understanding behavior in context and linking it to theoretically grounded behavior change techniques is to identify precisely what needs to change in the person or the environment in order for the desired change in behavior to occur. The more accurate our analysis of the identified target behaviors and their underlying determinants, the more likely our intervention will be to change behavior in the desired direction.

To do this successfully, we can leverage the COM-B model of behavior, developed by Susan Michie and colleagues at University College London’s Centre for Behaviour Change. COM-B stands for “Capability,” “Opportunity,” “Motivation,” and “Behavior” and is a composite behavioral model built from a synthesis of overlapping behavioral determinants found across 93 different theories of behavior and 19 frameworks for behavior change (Michie et al., 2013; see Fig. 9.1).

Fig. 9.1

The Capability Opportunity Motivation Behavior Model (COM-B). Source: Original graphic by author, based on Michie et al. (2013). Used with permission

The COM-B model of behavior posits that for any given behavior to occur, a person must have the capability and opportunity to execute the behavior, and that the motivation to engage in that behavior must be greater than the motivation to engage in any potentially competing behavior/s. For example, at the moment you had planned to go for an after-work run, your co-workers invite you to the pub across the street for hot wings and pints.

Each of the model’s C, O, and M components can be divided into two types:

  • Capability includes both “physical” and “psychological” capability. Physical capability consists of having sufficient strength, stamina, dexterity, or physical skills needed to enact a behavior. Psychological capability refers to knowledge and cognitive skills as well as our perception, attention, memory, decision processes, and abilities to regulate our behavior.

  • Opportunity consists of the surrounding environmental factors that restrict or enable a behavior. These may be “physical,” in terms of time, triggers, resources, and physical location barriers, or “social,” including cultural norms, interpersonal influences, and social cues.

  • Motivation refers to all the mental processes that energize and direct behavior. This includes conscious, “reflective” processes such as goals, intentions, plans, values, and beliefs as well as “automatic” processes involving our emotional and habitual responses, desires, attitudes, and impulses.

The COM-B model also reflects the interactions between the different components, with motivation playing a central role. Increased motivation can energize people to engage in activities that will increase their capability (e.g., practicing new skills) or opportunity (e.g., we respond to more cues when we are strongly motivated), thereby facilitating behavior change. In addition, increasing opportunity or capability can increase motivation (e.g., we like to do things we are good at and have the opportunity to do).

We can think of these interactions in terms of riding a bicycle (as a target behavior). If we own a bicycle (opportunity) and are able to ride it (capability), our motivation to ride may increase, but motivation alone will not improve our riding skills or provide access to a bicycle unless we act (behavior) on that motivation and buy a bike and/or practice riding.

Changing behavior therefore requires change in one or more of capability, motivation, and opportunity, and these factors serve as targets for behavior change techniques and interventions overall (Abraham, Kelly, West, & Michie, 2009).

Behavior Change Design Process

Behavior change design is a systematic approach to design, integrating methods and principles from behavioral science, motivational psychology, and human-centered design. The process is iterative and sequential, combining the rigor of behavioral science with the creative ingenuity of human-centered design. At its core, designing for change is the process of defining a real-world problem, understanding the needs, contexts, and change targets of affected and at-risk populations, creating the elements of an intervention to shift those targets, and refining those elements through a series of studies (Fig. 9.2).

Fig. 9.2

Behavior change design methodology. Source: Mad*Pow. Used with permission

We broadly describe this process as a series of four phases:

  1. Diagnosis—where we seek to understand and define a problem, a target population, and targets for change;

  2. Prescription—where we detail the precise mechanics for how the intervention will function, what techniques will be used to change which behaviors through which mechanisms of action (mediators), and how those techniques will be delivered (e.g., digitally, face-to-face, or through environmental change);

  3. Execution—where we translate the intervention strategy into content, artifacts, interfaces, and interactions; and

  4. Evaluation—where we perform a series of studies throughout the design process to assess the intervention for conceptual clarity, usability, utility, acceptability, feasibility, efficacy, and effectiveness.

It should be noted that the process is not strictly linear, with evaluation activities occurring throughout and earlier phases being returned to as needed.

Phase 1: Diagnosis

Understand and Define the Problem

As we have said, changing something requires that you first understand it. At the start of any project, we seek to understand the individual, interpersonal, and environmental factors that give rise to (or sustain) a problem over time, who is affected (or at risk of being affected) by the problem and how risk or protective factors and experiential contexts may vary across populations.

This involves conducting a variety of qualitative and quantitative research activities: analyzing available data sets, conducting systematic evidence-based literature reviews of interventions in a given space, using survey instruments, and conducting in-depth interviews with our target audience, stakeholders, and relevant subject matter experts. Taking a mixed-methods approach to diagnosing a problem allows us to unify several sources of information into hard-nosed, empirical data about a problem space: the interventions that have been deployed to effect change, their underlying theoretical basis, evidence about what has worked (and not worked) for whom and in what contexts, and first-hand accounts of the stated needs, mindsets, and lived experiences of our intended intervention beneficiaries.

Once team members have a solid understanding of the shape, complexities, and root causes of a problem, decisions can then be made on where to intervene to bring about change. Often, graphic representations of a problem illustrating relationships and causal pathways are used to help inform decision-making. These diagrams can take the shape of path charts, logic models, or structural equation models linking the problem statement to macro- and micro-level factors that contribute to the problem and desired distal and proximal outcomes. These outcomes must be measurable, and benchmarks are often laid out as thresholds for success.
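As a minimal illustration (the factors, outcomes, and benchmarks below are hypothetical), a simple logic model can be represented as a directed structure linking contributing factors to behaviors and to proximal and distal outcomes:

```python
# Minimal sketch of a logic model as a directed graph (all labels hypothetical).
# Each edge reads "contributes to"; measurable outcomes carry a benchmark.

logic_model = {
    "limited access to healthy food": ["high fat/sugar diet"],
    "low self-efficacy for cooking":  ["high fat/sugar diet"],
    "high fat/sugar diet":            ["excess caloric intake"],
    "excess caloric intake":          ["weight gain (proximal)"],
    "weight gain (proximal)":         ["obesity prevalence (distal)"],
}

benchmarks = {
    "weight gain (proximal)":      "mean weight stable or reduced by 2% at 6 months",
    "obesity prevalence (distal)": "reduced by 1 percentage point at 3 years",
}

def downstream(factor, model):
    """Trace every outcome reachable from a contributing factor."""
    seen, stack = [], [factor]
    while stack:
        node = stack.pop()
        for nxt in model.get(node, []):
            if nxt not in seen:
                seen.append(nxt)
                stack.append(nxt)
    return seen

print(downstream("limited access to healthy food", logic_model))
```

Even a sketch this small makes the assumed causal pathway explicit and shows where measurable benchmarks attach to outcomes.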

Specify the Target Behavior/s

With a model of the problem and desired long- and short-term outcomes constructed, the next step in designing a behavior change intervention is to identify which behaviors are likely to deliver the greatest impact and can be most easily changed. This is achieved through conducting a “behavioral diagnosis,” which describes as precisely as possible who needs to do what differently, when, where, how, and with whom (if applicable). The more precise you can be about the behavior, the better the diagnosis is likely to be.

For example, if addressing obesity, one might suggest that overweight individuals reduce fat and sugar consumption by packing vegetable snacks in their lunches rather than cookies or sweets, that sugary beverages be substituted with water or (sugar-free) teas, or that meals be planned in advance, grocery lists made, and nutrition labels read before making food purchases. These categories of behavior—shopping, meal preparation, and eating—could be performed by different people (e.g., a family member or housemate could be responsible for the shopping and meal preparation rather than the overweight individual and would need to be engaged in the intervention).

Additionally, we could decide to target supermarket managers to change product placement on shelves (e.g., placing lower fat or sugar products at eye level and/or making high fat/sugar foods harder to find, or running promotions on healthy foods) or even government policy makers to change the way nutritional information is displayed on food labels, or to introduce regulations on advertising or taxes or size limits on sugary beverages.

Our objective here is to ensure that our behavioral diagnosis is sufficiently detailed and useful (e.g., “eating less” is less likely to be useful than “overweight individuals substitute veggie snacks for sugary snacks in their lunchboxes”) and to define the target population (who is to take action), the nature of the behavior (what they will do), the context of the behavior (how and when they will do it), and the setting of the behavior (where it will be performed).
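A behavioral diagnosis at this level of precision can be captured as a simple structured specification; a minimal sketch follows, with field names and example values that are ours (drawn loosely from the snack-substitution example above), not a standardized instrument:

```python
from dataclasses import dataclass

@dataclass
class TargetBehaviorSpec:
    """Structured statement of a target behavior (fields mirror who/what/how often/when/where)."""
    who: str        # target population taking action
    what: str       # the behavior, stated precisely
    how_often: str  # frequency / intensity / duration
    context: str    # how and when it will be done
    setting: str    # where it will be performed

snack_swap = TargetBehaviorSpec(
    who="overweight adults who pack a daily lunch",
    what="substitute vegetable snacks for sugary snacks in the lunchbox",
    how_often="every workday",
    context="while preparing lunch the night before",
    setting="home kitchen",
)

print(snack_swap)
```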

Only when these details are pinned down, can we then analyze barriers and facilitators to performing the target behavior/s and what exactly needs to change in people and/or the environment to bring about behavior change (Francis, O’Connor, & Curran, 2012; Michie, Atkins, & West, 2014; Michie, van Stralen, & West, 2011). We do this through a “COM-B Analysis.”

Identify What Needs to Be Changed

As described earlier, changing behaviors requires changing the individual, interpersonal, and environmental determinants that underpin selected target behaviors. To accomplish this change, we analyze and map these determinants to individual capability (psychological and physical), motivation (reflective and automatic), and environmental opportunity (social and physical), as outlined in the COM-B model in Fig. 9.1 and Table 9.1.

Table 9.1 COM-B component definitions and examples

An effective COM-B analysis draws from different sources: COM-B questionnaires created to uncover target audience, subject matter expert, and stakeholder perspectives on barriers and facilitators underpinning the selected target behaviors, and coding and quantifying of the evidence gathered from the scientific literature review. By mapping underlying determinants to each target behavior, teams can identify prominent barriers that need to be addressed through intervention design, specifically by focusing on which COM-B factors might be malleable through design and targeting them with the behavior change techniques most likely to shift an individual’s capability, motivation, or opportunity into a new equilibrium.
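As a minimal sketch of how such an analysis might be quantified (the barrier labels, COM-B tags, and counts are hypothetical), coded excerpts from interviews, surveys, or the literature can be tallied by COM-B component to surface the most prominent barriers:

```python
from collections import Counter

# Each coded excerpt is tagged with the COM-B component it evidences (hypothetical data).
coded_barriers = [
    ("forgets evening dose",           "psychological capability"),
    ("doubts medication is necessary", "reflective motivation"),
    ("worries about side-effects",     "reflective motivation"),
    ("no routine cue at bedtime",      "automatic motivation"),
    ("pharmacy far from home",         "physical opportunity"),
    ("family skeptical of medication", "social opportunity"),
    ("doubts medication is necessary", "reflective motivation"),
]

component_counts = Counter(component for _, component in coded_barriers)
for component, n in component_counts.most_common():
    print(f"{component}: {n}")
# Components with the highest counts become candidate targets for intervention design.
```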

At the end of the diagnosis phase, intervention design teams should have a robust conceptualization of the causal factors that produce and sustain a problem, the desired behavioral, proximal, and distal outcomes, and the modifiable determinants that mediate behavior change for differing populations. The importance of devoting sufficient time and resources to the diagnosis phase of a project cannot be overstated. If the diagnosis is not thorough, the formulation of the problem and identification of effective change targets is less likely to be accurate, and the intervention is much less likely to be effective.

Phase 2: Prescription

Having completed a thorough diagnosis, design teams can now consider what intervention strategies are most likely to be effective in altering the relevant mechanisms of change.

Currently, “The Behaviour Change Wheel” (Michie et al., 2011; Michie et al., 2014) outlines nine broad strategies (or “functions”) by which an intervention can change behavior and links them to COM-B components. We’ve added a 10th (“Needs Satisfaction”) to draw more deeply upon motivational change mechanisms outlined in Self-Determination Theory (Ryan & Deci, 2011).

The ten intervention functions are:

  1. Education (i.e., increasing awareness, knowledge, or understanding),

  2. Training (i.e., developing mental or physical skills),

  3. Persuasion (i.e., using communication or design tactics to change attitudes or beliefs toward a target behavior, induce positive or negative emotions, or stimulate action),

  4. Incentivization (i.e., setting the expectation of financial or other rewards),

  5. Coercion (i.e., setting the expectation of punishment, cost, or personal loss),

  6. Needs satisfaction (i.e., creating experiences that satisfy inherent basic psychological needs for autonomy, competence, and relatedness),

  7. Restriction (i.e., using rules to reduce the opportunity to engage in a target behavior),

  8. Environmental restructuring (i.e., changing the physical or social environment),

  9. Modeling (i.e., providing a visible example for people to imitate or aspire to), and

  10. Enablement (i.e., increasing means/reducing barriers to capability beyond education and training, or increasing opportunity beyond environmental restructuring).

For example, if our goal were to increase medication adherence in individuals with hypertension, our COM-B analysis might highlight reflective motivation (e.g., beliefs about the necessity of medication, beliefs about the effects/side-effects of medication, lack of intention to medicate as prescribed) as an important factor to be targeted via intervention. We can then craft our strategy around a number of relevant functions to change motivation, such as needs satisfaction, persuasion, incentivization, education, and/or modeling.
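The Behaviour Change Wheel publishes a full matrix linking COM-B components to intervention functions; the sketch below hard-codes only an illustrative subset of such links (plus our added “needs satisfaction” function) to show how a COM-B diagnosis can be turned into a shortlist of candidate functions. The mapping and helper are ours, not an authoritative reproduction of the BCW matrix:

```python
# Illustrative subset of links between COM-B components and intervention functions,
# loosely based on the Behaviour Change Wheel (not the full, authoritative mapping).
CANDIDATE_FUNCTIONS = {
    "psychological capability": ["education", "training", "enablement"],
    "reflective motivation":    ["education", "persuasion", "incentivization",
                                 "coercion", "needs satisfaction"],  # our added 10th function
    "physical opportunity":     ["restriction", "environmental restructuring", "enablement"],
}

def shortlist(diagnosed_components):
    """Collect candidate intervention functions for components flagged in a COM-B analysis."""
    functions = []
    for component in diagnosed_components:
        for fn in CANDIDATE_FUNCTIONS.get(component, []):
            if fn not in functions:
                functions.append(fn)
    return functions

# Hypertension adherence example from the text: reflective motivation is the flagged component.
print(shortlist(["reflective motivation"]))
```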

Intervention functions can be delivered by a number of “behavior change techniques” (BCTs). BCTs are “the smallest active ingredients of an intervention, hypothesized to change behavior” (Michie et al., 2013, 2015). Ninety-three techniques have been identified and organized into a taxonomy—the “Behavior Change Techniques Taxonomy v1 (BCTTv1)”—allowing for a systematic method to identify what are likely to be the most appropriate techniques for a given target behavior, set of barriers, population segment, and setting.

While BCTs have been reliably linked to intervention functions, evidence continues to accumulate regarding the effectiveness of different behavior change techniques as applied to different target behaviors, populations, and settings, such as increasing physical activity among healthy versus overweight, obese, and older adults (Olander et al., 2013; Olander, Berg, McCourt, Carlstroem, & Dencker, 2015), or techniques delivered via different modalities such as face-to-face, telephonic, text message, or in-app content. Further research has suggested that interventions that use more behavior change techniques (a mean of 8.57 techniques) are more effective than those that use fewer (a mean of less than 4) (Gardner, Smith, Lorencatto, Hamer, & Biddle, 2016; Webb, Joseph, Yardley, & Michie, 2010).

Finally, some evidence suggests that behavior change techniques may produce greater effects if they are delivered in theoretically informed groups rather than in isolation. A common and effective pattern, found in countless digital applications, stems from Control Theory (Carver & Scheier, 1982) and pairs goal-setting, action planning, self-monitoring, and feedback (Dombrowski et al., 2012; Michie, Abraham, Whittington, McAteer, & Gupta, 2009).
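As a minimal sketch of this Control Theory pattern (the goal, logged data, and feedback messages are all hypothetical), a digital intervention might compare self-monitored behavior against a goal and return discrepancy-based feedback with a suggested action plan:

```python
def control_loop_feedback(goal_steps, logged_steps):
    """Minimal goal-setting / self-monitoring / feedback loop (Control Theory pattern).

    goal_steps: daily step goal set by the user.
    logged_steps: self-monitored daily step counts for the week.
    """
    weekly_goal = goal_steps * len(logged_steps)
    total = sum(logged_steps)
    discrepancy = weekly_goal - total

    if discrepancy <= 0:
        return "Goal met - consider setting a slightly more challenging goal next week."
    shortfall_per_day = discrepancy / len(logged_steps)
    return (f"You were {discrepancy} steps short of your weekly goal. "
            f"Action plan: add a {round(shortfall_per_day)}-step walk after lunch each day.")

print(control_loop_feedback(goal_steps=8000,
                            logged_steps=[7500, 9000, 6000, 8200, 5000, 7000, 7600]))
```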

Engagement: The Other “E”

Beyond prescribing the “active ingredients” that mediate change, we also want to focus our efforts on strategies that have been shown or hypothesized to support engagement with digital applications. We see engagement as the other critical consideration in designing for behavior change.

Much of the rationale behind engagement design decisions comes from principles embedded within Self-Determination Theory (Ryan & Deci, 2011). According to SDT research, humans have basic (universal) psychological needs that must be fulfilled in order to thrive, and we seek out and continue to engage in experiences that satisfy these needs. These needs are:

  • Competence is our need to feel effective and capable of doing things well. It’s the feeling we get after attaining a challenging goal and the experience of mastery when our competence for a particular task or goal is supported consistently.

  • Autonomy is our need to experience our actions as our own. To wholeheartedly endorse what we’re doing at the time we’re doing it. It’s the feeling we get when we act with a sense of purpose and choice.

  • Relatedness is our need to feel cared about by the people we care about. It’s the feeling of belonging, like we’re understood and can be ourselves around people who “get” us.

These needs can be fulfilled via digital technologies through the way we communicate with end-users and the interactions we provide. Unlike finding just the right context-specific and often individually tailored pairings of BCTs to shift behavioral determinants, SDT techniques for satisfying basic needs and facilitating engagement are applied in all designs, for all users, in all contexts. Our goal is to make every user feel competent, in control, and cared for.

Supporting Basic Psychological Needs

SDT and its concept of needs satisfaction provide an excellent framework for designing any interaction—so much so that it’s surprising it isn’t used more in the design world. Here, we’ll look at a few principles that are universally applied in our version of behavior change design.

Supporting Competence

The cornerstones of supporting competence are built from (1) meeting people where they are in terms of their mental and physical skills and abilities; and (2) providing structure (e.g., actions, tools, and resources) and informationally rich, supportive feedback on performance and progress to help them hone the skills they need to address challenges.

It is now overwhelmingly clear that one-size-fits-all approaches to intervention design are far from ideal. People start with very different knowledge and skills for carrying out behaviors. They have different strengths to capitalize upon and different challenges to overcome. Enabling individuals to select sufficiently specific and challenging proximal goals, and a reasonable way to achieve them, helps them stretch and develop their skills without feeling completely overwhelmed. These “optimal challenges” lead to the experiences of mastery and sustained engagement critical in behavior change pursuits.

Supporting Autonomy

Part of autonomy support is helping individuals develop personally relevant and meaningful behavioral goals, as people will have the most energy and interest for activities they like to do and that they personally value. Allowing individuals to explore bigger-picture life goals and how behavioral goals fit into their higher-order motives helps to energize and sustain motivation toward behavior change goals and desired outcomes.

For many, if not most, health outcome goals, there may be different ways by which individuals can strive to achieve them. For example, controlling high blood pressure may be achieved through medication and/or lifestyle changes such as increasing physical activity and changing one’s diet. Within each method there could be additional choices offered, such as type of medication, activity (e.g., jogging, swimming, etc.), or dietary changes (increased potassium, salt reduction, DASH diet, etc.).

Providing options, when possible, for what goals to pursue and how to pursue them strengthens our sense of choice and endorsement. Finally, when choice is constrained or not possible, providing a meaningful rationale for why that is helps individuals accept the limitations without sacrificing their autonomy.

Supporting Relatedness

Attempting to assist any individual with their own behavior change requires that they trust you, that they feel you have their best interests at heart, and that you will be there for them when needed regardless of their abilities, decisions, progress, or lack thereof. Meeting this requirement starts by offering an environment of warmth, respect, empathy, and compassion.

Understand that individuals may have different reasons for making changes, and they might also have different feelings about those reasons, including negative or ambivalent ones. Instead of assuming every user is super gung-ho and always ready for action, acknowledge that they might get annoyed or frustrated on their journey and that this is a normal and acceptable part of the process. And speaking of being annoyed, when presenting (options of) actions a user might take, make it a request, not a demand. Steering clear of “musts,” “have to’s,” and “shoulds” is not only more polite, it’s more motivating in the long run.

Additionally, we should think about relatedness and relationships outside of the immediate context of the intervention. Sometimes, even with the best digital support, people making changes need some real-world human support as well. Designing in opportunities to connect users with their real-world support teams, and potentially coaching them on how to seek support when needed, can strengthen a person’s sense of relatedness.

Finally, creating safe digital spaces for people making similar changes to connect, learn from each other, and provide each other support and encouragement can be a powerful engagement mechanism for digital interventions.

Whether an intervention is to be specified at the individual, organization, community, or policy level, the processes outlined in the diagnosis and prescription phases provide a methodology for adequately defining a problem space, identifying the malleable determinants that lead to change, and articulating the logic of the intervention. With an emergent strategy for engagement and effectiveness, teams can begin to translate the prescription into intervention artifacts such as content, activities, interfaces, and interactions, which is the focus of the next phase.

Phase 3: Execution

While we previously noted that the design process as a whole is iterative, the execution phase is perhaps by nature the most iterative. It is here that broader design teams—interaction design, content/copywriting, visual design and branding, coding, and UX research—come together with intervention designers to visualize and evaluate the evidence-based strategy developed in phases 1 and 2. This collaborative process typically involves multiple rounds of ever-increasing fidelity, depth, and precision, from initial concept development, through prototype revisions, to a minimum credible pilot intervention and an implementation-ready intervention.

In line with modern human-centered design approaches, it’s important that the “end-user” is involved during the creation process. This can take shape through co-design workshop sessions, often held as part of pre-concept and concept development workstreams, and through evaluation sessions where feedback on the intervention is sought from our intended beneficiaries. As described in the diagnosis phase, designing for behavior change involves a balanced integration of theory, evidence, and the perspectives of the people who will use the intervention.

Our end-goal in behavior change design is that we’ve been effective and our efforts meaningfully change behavior. Getting there requires that the intervention also be useful, usable, attractive, engaging, trustworthy, valuable, and not overly burdensome to our users. It’s critical that the tone, features, and functionality fit users’ needs, understanding, and preferences, and that we avoid or modify as much as possible any elements that are not easily understood, are disliked, or are seen as impractical or intrusive.

We believe that designing with people and including their perspectives—rather than deploying interventions that seek to capitalize on perceived human shortcomings, or to manipulate or otherwise trick people into behaving in certain ways, even objectively beneficial ones—reduces the potentially inherent paternalism of designing for other people’s change, preserves their autonomy, and ultimately delivers a better product, service, or intervention.

Phase 4: Evaluation

As a workstream, evaluation runs in parallel with execution-phase activities. The focus of early tests is on seeking direction and on refining and confirming design iterations (in order to produce more acceptable, useful, and effective interventions). Ultimately, evidence will need to be gathered to assess whether the intervention is producing the kinds of effects it was intended to and whether there are any side-effects or unintended consequences.

Typically, this is done through a sequence of tests: beginning with a small pilot test of a minimum credible intervention to revise and scale the intervention, then a higher-fidelity, larger-scale efficacy test under tightly controlled conditions, and finally an effectiveness test of an appropriately scaled intervention “in the wild.”

The Logic of Experimentation

During intervention research, we often decide between two basic types of research designs: experimental and quasi-experimental designs. Experimental designs such as A/B tests and Randomized Controlled Trials (RCTs) use random assignment to create intervention and control groups, meaning one sample of your population is exposed to the intervention you want to test while the other receives treatment as usual, a different intervention, no treatment, or a waitlist.

When randomization is used with a large enough sample size, post-intervention differences between groups can be considered causally related to the intervention (assuming no contamination or spillover occurs). No other method of group assignment or statistical adjustment provides the same basis for causal inference.
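A minimal sketch of this logic using simulated data (the group sizes, outcome measure, and the invented 15 minutes/week effect are illustrative only):

```python
import random
from statistics import mean

random.seed(42)

# Simulate a pool of participants and randomly assign them to intervention or control.
participants = list(range(200))
random.shuffle(participants)
intervention, control = participants[:100], participants[100:]

# Simulated post-intervention outcome (e.g., weekly active minutes); the +15 effect is made up.
outcome = {pid: random.gauss(150, 30) + (15 if pid in intervention else 0)
           for pid in participants}

diff = mean(outcome[p] for p in intervention) - mean(outcome[p] for p in control)
print(f"Mean difference (intervention - control): {diff:.1f} minutes/week")
# With random assignment and an adequate sample, this difference can be attributed to the
# intervention; in practice a t-test or regression adds a formal significance test.
```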

Quasi-experimental designs have the same goals and structural features as experimental designs except that, instead of random assignment, participants are allocated into groups (if there is more than one) by nonrandom means such as self-selection/enrollment or researcher assignment/enrollment. By using nonrandom assignment, quasi-experimental designs are exposed to a variety of potential biases or “selection effects.”

For example, participants who self-select to receive an intervention may be more motivated to change behavior than participants who do not volunteer, skewing effects. It then becomes the researcher’s job to rule out potential alternative explanations for effects. This can be done effectively by taking multiple measurements over a period of time, called an interrupted time-series design, as opposed to the more common pre-post measurement designs.
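As a minimal sketch of an interrupted time-series (segmented regression) analysis, with simulated monthly measurements and an invented intervention effect:

```python
import numpy as np

# 24 monthly measurements; the intervention is introduced at month 12 (simulated data).
months = np.arange(24)
post = (months >= 12).astype(float)                 # 1 after the interruption
time_since = np.where(post == 1, months - 12, 0)    # months elapsed since the interruption
rng = np.random.default_rng(0)
y = 50 + 0.2 * months + 5 * post + 0.8 * time_since + rng.normal(0, 1, 24)

# Segmented regression: outcome ~ intercept + pre-trend + level change + slope change.
X = np.column_stack([np.ones(24), months, post, time_since])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, pre_trend, level_change, slope_change = coef
print(f"Level change at interruption: {level_change:.2f}; slope change: {slope_change:.2f}")
```

A clear level or slope change at the interruption, against a stable pre-intervention trend, is what lets the researcher argue against many alternative explanations.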

Pilot Testing

As stated above, intervention design is a systematic and iterative process that begins with identifying and understanding a real-world problem to inform the design of an intervention, progresses through pilot evaluation to testing impact, and may include optimization or adaptation efforts after release.

Pilot testing is typically performed after concept and prototyping phases, when an implementable minimum credible intervention is developed. The goals of pilot testing are to (1) refine the intervention based on its usage and performance by its intended audience, in its intended setting, and (2) to collect preliminary evidence of change in mediators (COM-B factors), target behaviors, and proximal outcomes. The design of pilot tests is nearly always quasi-experimental. It generally involves a single group of participants who are aware that they are part of a pilot test (and may be asked to provide feedback on the intervention as part of the study).

As such, pilot testing requires both quantitative and qualitative measurement. Data collection and analyses focus on understanding participant experiences and responses to the mechanics, materials, and content of the intervention, including their level of engagement and satisfaction and whether the intervention seems to produce change in mediators.

Efficacy Testing

After an intervention has been designed and pilot tested, we want to know whether it works. Specifically, based on the intervention design strategy, does the intervention produce change in the mediating variables, and do the changes in the mediators appear to produce changes in behavior and proximal outcomes?

Efficacy tests involve random assignment of participants to intervention conditions and control groups, or they may utilize more rigorous quasi-experimental methods such as “regression discontinuity designs,” where participants are measured on a key indicator and the intervention is only offered to those participants who reach a certain threshold level on the measure. The difference in regression lines (intercepts and slopes) between the two groups can provide evidence for intervention effects.
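A minimal sketch of the regression discontinuity logic with simulated data (the cutoff, effect size, and outcome measure are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
score = rng.uniform(0, 100, 500)        # screening measure (the key indicator)
treated = score >= 60                   # intervention offered only above the cutoff
outcome = 20 + 0.3 * score + 8 * treated + rng.normal(0, 3, 500)  # simulated +8 effect

def fit_line(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

s_below, i_below = fit_line(score[~treated], outcome[~treated])  # regression below the cutoff
s_above, i_above = fit_line(score[treated], outcome[treated])    # regression above the cutoff

# The gap between the two fitted lines at the cutoff estimates the intervention effect.
effect_at_cutoff = (s_above * 60 + i_above) - (s_below * 60 + i_below)
print(f"Estimated effect at the cutoff: {effect_at_cutoff:.1f}")
```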

While the intervention can be revised based on findings from the efficacy tests, a complete, high-fidelity, stable release of the intervention is important for an efficacy test. Technical difficulties, incomplete content, or low-quality execution will all confound the results.

Finally, inclusion and exclusion criteria are often used to screen participants to ensure they represent the target audience.

Effectiveness Testing

In public health and social work, effectiveness is required for program and intervention adoption. In the commercial world, digital products, technologies, and interventions are much less likely to be rigorously evaluated for impact before (or even after) wide-scale release. In our experience, the measures of greater concern include speed to market, reach, adoption, engagement, conversion, and revenue.

That said, as digital tools for assessing, monitoring, and managing our health continue to proliferate in an already saturated market, we believe the fight for consumer dollars will increasingly be based on effectiveness. A key differentiator in purchase and use will be whether the solution delivers its intended (or claimed) effects. Organizations that build a practice of rigorous research, design, and evaluation methods now will be ahead of the game as the shift toward effectiveness comes to fruition.

Whether for necessity, differentiation, or contributions to science, the goal of effectiveness testing is to estimate the impact of an intervention under real-world conditions, compared to “status quo” treatment or another active intervention. In other words, a new intervention that has been shown to achieve the desired outcomes under the ideal conditions of an efficacy test is now exposed to settings that represent the variability of conditions for which the intervention was intended. Unlike in efficacy tests, the tightly controlled conditions are relaxed in an effectiveness test, and the usage (uptake and engagement) of the intervention is subject to natural variation.

When introducing more relaxed controls, trade-offs will likely have to be made between our confidence that a detectable effect can be attributed to the intervention rather than to extraneous factors (internal validity) and the extent to which the findings can be generalized to other populations or settings of interest (external validity). Best practice here again includes combining experimental and observational methods to reach more confident conclusions.

Depending on the results of effectiveness testing, the intervention may be further rolled out, revised, optimized, or adapted for new settings or populations. It should be noted that interventions rolled out at any scale should be routinely monitored and optimized over time (Fig. 9.3).

Fig. 9.3

Common design activities undertaken throughout an intervention design process. Source: Mad*Pow. Used by permission

Future Directions

The science of behavior change is rapidly advancing and evolving its knowledge base and methods while capitalizing on emerging technologies and advances in other fields such as computer and data science. New opportunities are becoming available to amplify our potential to deliver effective interventions.

Up-to-the-minute knowledge about what works for whom, in what contexts, when, and why is being disseminated to intervention researchers and designers. Advances in computing technology such as contextual sensors, streaming data, and machine learning algorithms are being used to predict behaviors and dynamically deliver tailored “just in time” content and techniques. New methods for rapidly evaluating interventions and disseminating findings are being developed in conjunction with technology-based “rapid innovation” methods. New computational models of behavior, more in line with our ability to capture and sense moment-to-moment behavior, are being developed to update 50-year-old, “snapshot”-style social cognitive models. These are just some of the future directions in which intervention research and design are headed.

Conclusions

Designing engaging and effective interventions presents both unique challenges and great opportunities. While the promise and anticipation of revolutionary public health impact continue to grow, the industry still remains more in the land of promise than revolution. In order to meaningfully improve the reach, engagement, and effectiveness of digital and offline health interventions, a more rigorous approach to design and evaluation is needed. We argue that merging behavioral science and human-centered design methods (and practitioners) with emerging technological advances, as outlined in this chapter, amplifies our ability to deliver on the promise of implementing effective and engaging interventions, and ultimately to impact population health and well-being.