The implementation of effective HIV prevention programs for adolescents is an urgent public health goal. Because the average duration from HIV infection to the development of AIDS is 10 years, most adults living with AIDS today were likely infected as adolescents or young adults. The proportion of adolescents and young adults in the U.S. with an AIDS diagnosis has grown to over 4% (National Institute of Allergy and Infectious Diseases [NIAID] 2006). Adolescents and young adults account for 50% of new HIV infections worldwide, and fewer than 50% of this age group are able to identify ways of preventing HIV (UNAIDS 2006).

Fortunately, there are many available evidence-based (EB) prevention programs designed specifically for adolescents: Advocates for Youth (2003, 2006) identified 24 EB prevention “Programs that Work.” The availability of effective programs, however, does not guarantee penetration of the interventions into the population at risk (Rotheram-Borus et al. 2000). The current challenge for HIV prevention is not the design of new effective programs, but rather the creation of strategies to assure that the benefits of existing effective HIV prevention programs reach every single adolescent.

To achieve the goal of disseminating EB HIV prevention programs, the Centers for Disease Control and Prevention (CDC) has created several integrated projects: (a) Prevention Research Synthesis project to identify “best-evidence” programs (PRS; see http://www.cdc.gov/hiv/topics/research/prs/index.htm; Lyles et al. 2007); (b) Replicating Effective Programs to package programs (REP; see http://www.cdc.gov/hiv/projects/rep/default.htm); (c) Diffusion of Effective Behavioral Interventions to provide training for diffusing programs (DEBI; see http://effectiveinterventions.org/); and (d) Technical Assistance Resources, Guidance, Education and Training Center to provide ongoing training and consultation (TARGET; see http://careacttarget.org/). Despite the best efforts of these projects, there is still a gap between the availability of a program and community implementation. For example, of the 162 agencies that sent a staff member to be trained to implement a single-session, 45-minute HIV prevention intervention, only 38% implemented the program within the expected time frame (Harshbarger et al. 2006).

Broad implementation of EB programs is a difficult challenge even when encouraged by administrative policies and funding bodies (Rotheram-Borus et al. 2004). Program protocols written for a research project are designed for fidelity to specific theoretical models (Green and Glasgow 2006; Schoenwald and Hoagwood 2001), not for the attractiveness, acceptability, convenience, and needs of community providers (Kelly and Kalichman 2002; Rotheram-Borus and Duan 2003; Rotheram-Borus et al. 2004). Kraft et al. (2000), taking the perspective of providers, identified barriers to program adoption such as the high cost of programs, staff constraints, the infrastructure and organizational character of the agency, the lack of practical information in research reports, and a lack of interest or support from the target population. Community agencies also have difficulty knowing which EB program to select given their specific population and cost constraints (Collins et al. 2006; Kelly et al. 2000b; Lyles et al. 2006). To remedy this problem, the CDC created guidelines for a step-by-step selection process (McKleroy et al. 2006). However, the 23 action steps required to assess available programs may be daunting rather than helpful for community decision makers.

When communities do adopt an EB program for implementation, they invariably modify the program (McKleroy et al. 2006; Rebchook et al. 2006; Rohrbach et al. 2006; Stanton et al. 2005), believing that they are appropriately customizing the program to the needs of their clients. In contrast to academic researchers, community interventionists believe that flexibility is more important than “replication with fidelity” (Kelly et al. 2000a; Kraft et al. 2000; Solomon et al. 2006). Community providers may prefer using their “homegrown” program to an “intervention in a box” (Collins et al. 2006), and may resist implementing a program of which they have no ownership. Community agencies brand themselves by providing unique services, and compete for funding from private foundations by demonstrating the distinctiveness of their programs. Thus, providing replications of someone else’s program limits their ability to attract funding from any source except state and federal governments.

To promote effective adaptation of EB programs, it is necessary to distinguish crucial, robust program elements that must be retained from those that can be modified or eliminated without detracting from the program’s efficacy (Kelly et al. 2000a; Neumann and Sogolow 2000; Rohrbach et al. 2006; Rotheram-Borus and Duan, 2003; Sogolow et al. 2000; Solomon et al. 2006). An analogy is frequently drawn between prevention programs and medication (e.g., Jensen 2003). We need to distinguish the intervention’s core efficacious elements (similar to the molecules of chemicals in a drug), which need to be replicated with fidelity, from its delivery vehicles (analogous to pill, liquid, injection, patch, or suppository), which can be modified or eliminated. In behavior change interventions, however, the delivery vehicles (the procedures and processes, as distinct from the content) have an important impact on whether participants stay or drop out, leave with beneficial behavior change, and maintain this change over time. Therefore, there are two types of core efficacious elements—essential content and necessary processes—to be contrasted with optional tailoring adaptations.

The methods for defining core elements fall into two categories: using the same principles that guided the design of the original EB intervention (e.g., theoretical bases and internal logic of the intervention (Eke et al. 2006)) or conducting independent research. Research approaches include a componential analysis (e.g., Rotheram-Borus et al. 1998), using feedback from participants and facilitators (e.g., Kelly et al. 2000a), and process evaluation methods applied to successful programs (e.g., Janz et al. 1996).

Those different strategies resulted in diversity rather than consensus in the lists of core elements. The REP project identified a list of 3–4 short-term goals or activities as the core elements for each of the programs it certified (Solomon et al. 2006). For a one-session intervention (VOICES/VOCES), a “culturally specific” video was included as a core element (Harshbarger et al. 2006). The creators of the Mpowerment Project identified nine core elements, including informal outreach, a particular dedicated meeting space, and peer-led groups with specific activities (Rebchook et al. 2006). In their quantitative and qualitative study of 37 AIDS prevention programs, Janz and colleagues (1996) found the three most important elements to be small group discussions, outreach, and trained community peers. There is a need for further research to develop a systematic model of core elements that would apply to all EB programs and provide usable guidance to community practitioners.

The present study joined the search for core elements by using qualitative research methods to examine what actually occurs in a set of successful EB HIV prevention programs for adolescents. Our research program had two separate goals: (1) to develop a comprehensive lexicon of processes and procedures in order to distinguish those that are essential for all programs (core elements) from those that are optional and discretionary, and (2) to identify “core principles” and “content” embedded in successful programs. The study of core principles is reported elsewhere. In this paper we first describe our method for developing a classification system of processes and then report our comparison of five programs. For the remainder of this paper, we set aside the content of the programs, solely for clarity and because that topic is discussed elsewhere (Ingram et al. 2007). It must be stressed that practitioners must first be knowledgeable about the content of EB prevention programs before tailoring the processes to the needs of the target population.

Method

A four-step procedure was followed to identify common process elements in adolescent HIV prevention programs: (1) selecting successful evidence-based prevention programs for adolescents; (2) developing categories through an iterative open coding approach; (3) having trained raters code each manual; and (4) comparing programs across the identified process elements.

Evidence-Based Program Selection

The following criteria were used to identify eligible programs: (1) the CDC had identified the program as efficacious, or a program that works, based on published empirical studies; (2) the program had been used for over five years; and (3) the program’s manual clearly presented program parameters, strategies and procedures. Five programs were selected based on these criteria and their key characteristics are summarized in Table 1: (1) Be Proud! Be Responsible! (Jemmott et al. 1992); (2) Becoming A Responsible Teen (St. Lawrence et al. 1995); (3) Focus on Kids (Stanton et al. 1996); (4) Safer Choices (Coyle et al. 2001); and (5) Street Smart (Rotheram-Borus et al. 1991; Rotheram-Borus et al. 2003).

Table 1 Key characteristics of five adolescent, evidence-based HIV prevention interventions

As shown in Table 1, the programs were similar in a number of ways. Each program: (1) targeted adolescents or preadolescents; (2) focused on reduction of sexual risk behavior; (3) was delivered in small-group formats of 5–30 adolescents; (4) was delivered with ethnic minority populations; and (5) included multiple sessions (7–20 sessions). All programs were skill-focused and utilized cognitive-behavioral principles. Because we wanted to include a highly regarded school-based program, we selected one (Safer Choices) in which HIV prevention is secondary to pregnancy prevention in emphasis.

Category Development

The first stage in category development was the process of “open coding” as described by Strauss and Corbin (1990). We examined the manuals of the five efficacious HIV prevention programs to identify the processes that would actually occur during program implementation, without biasing our observations by our knowledge of theory, research articles, or the introductory theoretical sections of the intervention manuals. One of the authors and a psychology graduate student independently reviewed the activities, scripts, time allocations, and instructions for two prevention programs to develop a preliminary list of concepts.

For the next stage of category development, we introduced concepts from our expertise in cognitive-behavioral interventions, adolescent development, group dynamics and group therapy, and pedagogy, while staying close to the manual data assembled in the first round of coding. We then compared our categories, revised the wording, and agreed on a list of 21 processes. We selected examples that clearly reflected the meaning of each category and devised a 4-point rating scale: 0 (absent), 1 (possibly present), 2 (explicitly present, minor), and 3 (major emphasis in session). For instance, the process label “fun” would be used for games, role plays that introduced humor, and refreshments. The process label “teacher role” would be applied when the leader lectured on facts or myths, used charts and statistics, assigned homework, or gave a quiz.
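As a concrete illustration, the rating scale and two of the process labels could be represented as in the minimal sketch below; the scale values and example labels (“fun,” “teacher role”) come from the description above, but the data structure itself is hypothetical, not the authors’ actual coding materials.

```python
# Hypothetical sketch of the 4-point rating scale and two of the 21 process
# labels described in the text; illustrative only.
RATING_SCALE = {
    0: "absent",
    1: "possibly present",
    2: "explicitly present, minor",
    3: "major emphasis in session",
}

PROCESS_EXAMPLES = {
    "fun": ["games", "role plays that introduce humor", "refreshments"],
    "teacher role": ["lecturing on facts or myths", "using charts and statistics",
                     "assigning homework", "giving a quiz"],
}
```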

Two master’s-level psychology students were trained in the category system and were tested by being asked to match codes with 21 examples. They reached over 90% agreement on rating codes. Through discussion, the students and one author resolved the disagreements over some items by refining definitions. For example, the process label “focus on future” initially referred to mobilizing dreams of a positive adulthood but was expanded to include more immediate short-term goals. By combining absent and possibly present into a single category, labeled none/low, we reduced the major source of disagreement. We also noticed redundancy in two pairs of categories and combined each pair, reducing the number of categories to 19. For instance, an original category, “democratic interactions,” was recognized as a subset of “active engagement.”

Coding the Manuals

The two raters divided the five programs between them and then completed a rating sheet for each session of each program. Next to each process label the rater entered a rating score and, if the process was present, provided at least one concrete example of instructions in the manual that illustrated that process. Because the categories were not mutually exclusive, a single group activity could receive multiple codes and, therefore, be used as an example of multiple process elements. For instance, the use of behavioral rehearsal for learning to insist on condom use would be an example of active engagement, behavioral skill building, and set social rules.
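To make the rating sheet concrete, a hedged sketch of how one session’s ratings might be recorded is shown below. The multi-coded behavioral-rehearsal example follows the text above, but the field names, session number, and scores are invented for illustration and do not reproduce the authors’ actual instrument.

```python
# Illustrative rating-sheet record for a single session (field names and
# scores are hypothetical). Because categories are not mutually exclusive,
# one activity can appear as an example under several process labels.
session_rating_sheet = {
    "program": "Street Smart",   # one of the five programs in Table 1
    "session": 1,                # invented session number
    "ratings": {
        "active engagement": {
            "score": 3,
            "examples": ["behavioral rehearsal: insisting on condom use"],
        },
        "behavioral skill building": {
            "score": 3,
            "examples": ["behavioral rehearsal: insisting on condom use"],
        },
        "set social rules": {
            "score": 2,
            "examples": ["behavioral rehearsal: insisting on condom use"],
        },
    },
}
```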

Comparing Programs Across Process Elements

We examined the examples and, through discussion, developed a set of four domains into which we sorted the 19 processes: (1) Structural Features; (2) Group Management Strategies; (3) Competence Building; and (4) Addressing Developmental Challenges. The numerical rating data for the programs were entered into Excel spreadsheets. The next step was to develop a summary statistic for each process to answer the question: How strongly was this process emphasized in the entire program? We decided that only the highest category—major emphasis—would signify a core efficacious element. Had we combined the top two out of three categories on the rating scale, the differences between programs might have been obscured. Examining the spreadsheet, we counted the number of sessions in which a process received the highest rating, and then computed the percentage. The summary percentages allowed us to compare the relative emphasis of the 19 processes across the five programs.
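A minimal sketch of this summary calculation is shown below, assuming per-session ratings on the 0–3 scale are available for each process; the function name and the example ratings are invented for illustration.

```python
# Hypothetical sketch of the summary statistic described above: for each
# process, the percentage of sessions in which it received the highest
# rating (3 = major emphasis).
def percent_major_emphasis(session_ratings, process):
    """session_ratings: list of dicts mapping process label -> rating (0-3)."""
    n_sessions = len(session_ratings)
    n_major = sum(1 for s in session_ratings if s.get(process, 0) == 3)
    return 100.0 * n_major / n_sessions

# Invented example: an 8-session program.
ratings = [
    {"active engagement": 3, "support": 2},
    {"active engagement": 3, "support": 3},
    {"active engagement": 3, "support": 1},
    {"active engagement": 2, "support": 0},
    {"active engagement": 3, "support": 0},
    {"active engagement": 3, "support": 2},
    {"active engagement": 3, "support": 0},
    {"active engagement": 3, "support": 1},
]
print(percent_major_emphasis(ratings, "active engagement"))  # 87.5
print(percent_major_emphasis(ratings, "support"))            # 12.5
```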

Results

Categories of Process Elements

The 19 process elements sorted into four categories are presented in Table 2 with definitions and examples.

1. Structural features. Three process elements were included in this group: Goal-setting, Agenda-setting, and the Teacher role.

2. Group management strategies. This category comprises seven process elements. Three relate to the creation of positive emotional experiences in the group: Support, Cohesiveness, and Fun. Two refer to styles of participation and involvement in the group: Self-disclosure and Active engagement. The final two elements are Cultural sensitivity and Application of behavioral management strategies.

3. Competence-building. This category includes behavioral, cognitive, and affective components. The names of processes in this category are: Behavioral skill building, Cognitive change strategies, Use of positive affect, and Use of negative affect.

4. Addressing developmental challenges. HIV prevention depends on the capacities of an individual to resist powerful internal impulses and emotions and intense external peer pressure. Those complex capacities evolve during adolescence and include the exercise of self-control and delay of gratification, development of an internal moral code, creation of a vision of the desired future, and a greater sense of responsibility in social relationships. The names of the process elements in this group reflect a higher level of abstraction: Social identity, Sense-of-self, Promote moral values, Set social rules, and Focus on future.

Table 2 Description of 19 processes

Comparison of Programs

Table 3 presents the five programs with a summary statistic for each process element. For each program, the percentage of sessions in which a specific process received a “major emphasis” is indicated. A figure of “0%” does not mean that the process was absent; it could have been present with minor emphasis in every session.

Table 3 Descriptive summary of 5 adolescent HIV prevention programsa

Structural Features

In all five interventions, 100% of the sessions stated clear goals for the entire program, for each segment of the session, and for the period following termination of the group. Likewise, every program set a clear agenda for each session, providing a structured framework. These findings are not surprising because these are manualized programs, designed to be replicated faithfully. There was considerable variability in how much emphasis was given to the use of a traditional teacher role, ranging from 25% of sessions with a major emphasis (Focus on Kids) to 100% of sessions (Be Proud! Be Responsible!).

Group Management Strategies

For the seven process elements in this category, only Active Engagement received major emphasis in a high percentage of sessions, ranging from 83% to 100%. Methods to engage the participants included using small groups for role plays and discussion, offering competitive games as a way to assure mastery of facts, and having participants put condoms on penile models. Only one of the interventions (Becoming a Responsible Teen) gave Cultural Sensitivity a major emphasis in a majority of sessions: 62.5%, compared to a maximum of 17% in the other programs. However, other programs showed high awareness of cultural issues by developing their programs with specific subpopulations and by requiring cultural competence when selecting trainers (e.g., Be Proud! Be Responsible!), information that was not available for coding.

Every program instructed leaders to create a safe, supportive group environment; however, the degree to which a major emphasis on support appeared in the procedural guide for each session varied greatly, from 5% to 100% of sessions. In all programs, clear norms were set at the beginning of the program to assure that members could speak up and participate in activities without fear of overt ridicule. The programs differed in whether the manual reminded the group leader to emphasize these norms and rules in each session. Street Smart demonstrated the highest emphasis on support and cohesiveness by implementing a specific behavioral management strategy in every session: the use of token exchange by all members to “catch each other doing something well.” Safer Choices, the only one of the five programs implemented within the school curriculum, gave high emphasis to support in only 5% of sessions. Only one program gave a high emphasis to self-disclosure: Becoming a Responsible Teen highly emphasized this style of interaction in 50% of its sessions, whereas the other programs had ratings of 5% or lower.

Competence-Building

All five programs gave higher emphasis to cognitive competence than to behavioral skills. For four of the five programs, Cognitive Change Strategies received a major emphasis in every session; in the fifth program, it received a major emphasis in 80% of sessions. The role of affect in the learning process varied greatly among the programs. One program (Focus on Kids) did not highly emphasize emotion in any session. Only one program (Be Proud! Be Responsible!) placed greater emphasis on negative than on positive affect, stirring feelings of fear and vulnerability to strengthen motivation toward self-protection.

Addressing Developmental Challenges

This category relates to the special challenges of adolescence, when individuals are searching for identity, defining their sense of self, struggling with values and morals, trying to figure out which rules to follow, and living very much in the present. The program that was designed for a specific cultural group (Becoming a Responsible Teen) had the highest emphasis on Social Identity (62.5% of sessions) and Promote Moral Values (37.2% of sessions). Every program gave a major emphasis to Sense-of-Self and Social Identity in at least one session. Only two programs made Set Social Rules a major emphasis in some of their sessions (Street Smart in 50% of sessions and Safer Choices in 35% of sessions); the other three programs had scores of 0% for this process.

Discussion

Community providers need guidance to distinguish core elements from optional elements so that when they modify EB programs to meet the needs of their target population, they do not undermine the program’s successful outcomes. Our cross-intervention analysis of successful adolescent HIV prevention programs found common core processes that are highly emphasized in all successful programs, as well as factors that varied greatly from program to program. The variations represent adaptations to specific contexts and populations, as well as preferences of the program designers.

This discussion focuses on those principles, practices, and discoveries that stand out as being the most useful for our goal—helping real-world interventionists develop programs that draw from, yet do not faithfully replicate, empirically-based programs.

1. Clear goals are essential for prevention programs. The participants must set and attain goals to reduce their risky behavior; the interventionists must set and meet goals for delivering an effective program.

2. Active engagement should be viewed as a mandatory process. This process was emphasized in every program, even those that emphasized the use of the traditional teacher role for leaders. Active engagement of members may or may not involve personal self-disclosure.

3. A direct focus on cognitive change is an essential ingredient. Every program gave major emphasis to the cognitive domain. Examples of cognitive change processes included developing self-efficacy, increasing perceived vulnerability, encouraging values around self-protection, identifying triggers for risk situations, building cognitive control over strong passions, teaching facts about the transmission and outcome of infection, and building problem solving skills for managing specific risk situations. The strategies and techniques from social cognitive theories, therefore, appear to be robust elements of all prevention programs and should be incorporated in future interventions.

4. Efficacious interventions are more comprehensive than their underlying theories would lead us to expect. All designers of successful HIV prevention interventions articulate the theoretical foundations of their programs, yet we found that there are processes that contribute to program effectiveness that were not addressed by the underlying theory. We discovered that cognitive skill building was a key element in all programs, although the theoretical basis for Focus on Kids (Rogers 1983) never mentions that process. Be Proud! Be Responsible! is grounded in the Theory of Planned Behavior (Ajzen 1991), a social cognitive model. However, many of the activities in that program are aimed at building social identity and sense-of-self. The developers of Street Smart describe their theoretical foundation as social learning theory (Bandura 1977), yet in 50% of the sessions there is a major emphasis on setting social rules, consistent with the views of Rotheram-Borus and Phinney (1987) on the need for interventionists to focus on multiple levels to achieve consolidation of behavior change (e.g., behaviors, social rules, social roles, identities). The differences among the programs did not match differences in theoretical models, and no single program is a pure distillation of the ideas of the developer’s cited theory.

5. The potency of group cohesiveness as a facilitator of change is not reflected in program manuals. Cohesiveness was not explicitly emphasized at all in two programs; two other programs emphasized it in only one-eighth of the sessions; and only one program emphasized it explicitly in every session. Yet professionals who work with groups understand the power of a cohesive group to influence its members and counteract negative group pressures from the outside environment. A leader without expertise in group processes who dutifully followed the manual would neglect opportunities to build cohesiveness. Skilled leaders no doubt foster group morale and cohesiveness in ways that are not described in the manual. One program, Focus on Kids, requires that participants be members of a pre-existing friendship group, thereby assuring that cohesiveness is present without the manual having to encourage it frequently.

There are several limitations to our methodology in these analyses. Educators know that a single session may be sufficient to achieve a learning objective; in our analyses, the impact on the learner was not assessed, only the number of sessions that highly emphasized a process. Because the programs differ in length, the numerical summary ratings have different meanings: 10% represents 1 session in a 10-session program but 2 sessions in a 20-session program. Thus, it is important to emphasize that we are using the percentage statistic to illustrate variability and similarity, not for quantitative analysis.

Our analysis produced some results that were surprising and counterintuitive: How could a behaviorally-based program not rate high in application of behavioral management strategies? Only one program (Street Smart) used tokens for positive reinforcement, yet we know that positive reinforcement is present in all learning contexts. The presence of anomalous findings strengthens the argument that the words in a manual do not describe all of the change processes occurring in an intervention.

Developers of the five programs may feel that their intentions were not accurately represented in Table 3, and we invite their reactions to this discovery: Not only do their manuals include ideas and methods that go beyond the principles of their specified theoretical model, but the efficacious components of their intervention programs go beyond what is explicitly stated in the manual.

Conclusions

The current analyses challenge existing notions of the diffusion process of EB programs. Currently, one theoretical model typically underlies each EB prevention program; this theory leads to the design of an intervention and the production of a manual. Then practitioners are asked to replicate the program with fidelity in their local sites, following the manuals. The present study suggests that there are processes and elements that are not articulated in the theoretical model. Planners, providers, and interventionists can benefit from understanding the structural features, group management strategies, approaches to building competence, and methods for addressing developmental challenges that are used in empirically-validated HIV prevention programs for adolescents.

The search for core elements in prevention interventions parallels the identification of “common factors” in a large body of psychotherapy research (Lambert and Ogles 2004) and other mental health interventions (e.g., Arnold et al. 2008), and is consistent with the study of levels of change by a National Institute of Mental Health taskforce (NIMH Intervention Workgroup 2001). By focusing on the common processes that occur in successful interventions, rather than on their theoretical explanations or idiosyncratic packaging, we are making an empirically-validated knowledge base more accessible to community interventionists.