Introduction

Over 15 years ago the Journal of Behavioral Education began as an outlet for behavioral research conducted in school-based settings. During this time the Journal was led by three editorial teams (Singh, Wolery, Belfiore/Skinner) and supported by numerous associate editors, reviewers, and other contributors. As with any scientific endeavor, it is often good to look back at the past in order to better chart the future.

Much has happened in the field of applied behavior analysis in 15 years. Perhaps the most important development has been the increased use of assessment-based approaches (i.e., functional assessment) in developing behavioral interventions. This approach allows practitioners to focus on the variables that evoke and maintain problem behaviors and to develop interventions that address the function of behavior. Another change in the field is the availability of behaviorally-based services for individuals of all ages. Fifteen years ago there was no Behavior Analyst Certification Board to establish quality standards for behavior analysis. Now there are just over 100 university-based ABA training programs and approximately 5,000 certified behavior analysts around the world (Behavior Analyst Certification Board 2007).

The increase in the number of service providers is commensurate with the increase in the number of students identified as having academic and behavioral difficulties in our schools (U.S. Department of Education 2005). Fortunately, the science of behavior provides an array of empirically validated interventions to help address these problems. This large body of peer-reviewed research is particularly important given the more recent focus on evidence-based practice for academic and behavioral interventions for students in P-12 education.

The state of our science is reflected by the research that is published. By examining this body of knowledge we can better identify gaps in the literature that should be addressed. This examination may also reveal strengths, which can be capitalized upon in developing interventions that make education effective and accessible to all people. The purpose of this paper is to examine publication trends in the Journal of Behavioral Education over the past decade and a half. More specifically, we describe (a) the types of studies published, (b) the research designs employed, and (c) the characteristics of students who participated in the studies from 1991 to 2005.

Methods

All issues from 1991 to 2005 of the Journal of Behavioral Education were examined in this descriptive analysis. Each study was reviewed using the following coding categories:

  1. Type of article (book/software review, data-based, introduction to series, responses, review/theory, tutorial, other).

  2. Participant classification (autism, behavior disorder, learning disabilities, mental retardation, multiple disabilities, no disability, and other/not specified).

  3. Participant age and/or grade.

  4. Gender of participant (female, male).

  5. Setting (college, community, family home, general education classroom, group home, institution/residential, segregated classroom/school, multiple settings, and other/not specified).

  6. Format of intervention (one-to-one, small group [2–5 persons], medium group [6–15 persons], large group [over 15 persons], and not specified).

  7. Interventionist (experimenter, general education teacher, group home staff, multiple interventionists, parents, peers, special education teacher, and other/not specified).

  8. Dependent variables assessed (academic, behavior, social skills, other, multiple).

  9. Generalization data present (yes, no, not applicable).

  10. Maintenance data present (yes, no, not applicable).

  11. Procedural integrity data present (yes, no, not applicable).

  12. Experimental design (AB, alternating treatments, combined single-subject, descriptive design, group design, multi-treatment, multiple-baseline, reversal, other).

Interrater agreement

Interrater agreement for this review was established by having a second coder independently code 27 (8%) randomly selected studies. Agreement was computed on a category-by-category basis by dividing the number of agreements by the sum of agreements and disagreements and multiplying by 100. The mean interrater agreement was 97%.
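The percent-agreement calculation described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual analysis code; the function name and the example category codes are invented for the example.

```python
def interrater_agreement(coder_a, coder_b):
    """Percent agreement: agreements / (agreements + disagreements) * 100.

    coder_a and coder_b are lists of codes assigned by two independent
    coders to the same sequence of studies for a single coding category.
    """
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must rate the same set of studies")
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return agreements / len(coder_a) * 100

# Hypothetical example: two coders classify five studies by type of article.
coder_a = ["data-based", "review", "data-based", "tutorial", "data-based"]
coder_b = ["data-based", "review", "data-based", "review", "data-based"]
print(interrater_agreement(coder_a, coder_b))  # 4 of 5 agree -> 80.0
```

In the review itself this computation was repeated for each of the 12 coding categories and averaged, yielding the 97% mean reported above.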

Results

Types of studies

From 1991 to 2005, 375 articles were published in 14 volumes of the Journal of Behavioral Education, with 623 unique contributors during this time period. The majority of these articles were data-based (55%) (see Table 1). Of the data-based articles, 91% were experimental in nature, while 9% were descriptive. Of the other works published in JoBE, just over 53% were reviews of the literature or software/books. Only two tutorials were published in JoBE during the years examined in this paper.

Table 1 Types of articles published in JoBE from the years 1991 to 2005

Experimental studies

Of the 210 experiments (185 articles) published in JoBE, 73% used single-subject methodology, 22% used group designs, and 5% were classified as other. The most frequently used single-subject design was the multiple-baseline design (42% of cases where single-subject designs were utilized), followed by alternating treatments designs (26%) (see Fig. 1).

Fig. 1 Percent of single-subject designs used in experimental studies in JoBE from 1991 to 2005

The majority of experiments (63%) were conducted in P-12 school settings (e.g., segregated special education classrooms or schools) (see Table 2). Approximately 15% of studies were conducted in higher education settings. Only 2.5% of experiments were conducted in community-based settings (i.e., home, community, group home).

Table 2 Number and percent of experiments across settings

Studies in P-12 academic settings

The focus of interventions in P-12 academic settings was most often academic in nature (60% of experiments). Behavioral measures such as disruptions (e.g., Killu et al. 1998) were documented in 22% of studies and social skills in 14%. Other measures comprised 23% of dependent variables and included documenting three-term contingency trials (e.g., Albers and Greer 1991) and the frequency of time-outs administered (Grskovic et al. 2004). Most interventions were implemented in a one-on-one format with the interventionist/researcher (48%), followed by small groups (20%) and groups of unspecified size (10%). Experimenters intervened most often (40%), followed by special education teachers (24%), general education teachers (11%), and peers (6%). A relatively small number of studies included information regarding generalization (25%) and/or maintenance (34%) of behavior change. Procedural integrity data were collected for 56% of the experiments, and social validity data were collected in only 17% of experiments.

A total of 1,426 students participated in P-12 studies published in JoBE over the time period examined by this review. For participants whose gender data could be disaggregated (n = 1,101), the percentage of male participants (54%) was slightly greater than that of female participants (46%). Information about grade level or age was available for 90% of participants. Researchers reported the chronological age for 814 participants; the mean chronological age was 128 months (SD = 49.34), or about 10.7 years of age. Grade levels were included for 754 participants. Age and grade information was grouped into the following categories: (a) Preschool/Early Intervention (birth to age 5), (b) Elementary Age (ages 5–10), (c) Middle-School Age (ages 11–13), and (d) High-School Age (ages 14–18) (see Table 3).

Table 3 Grade levels of participants for experimental studies
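The grouping of ages into grade-level categories can be expressed as a simple lookup. This is a hypothetical sketch: the paper does not specify how boundary ages were handled (e.g., whether an exactly 5-year-old counts as Preschool or Elementary), so the cutoffs below are assumptions.

```python
def age_group(age_years):
    """Map a chronological age in years to the paper's grade-level groups.

    Boundary handling is assumed: each upper bound is inclusive,
    so an exactly 10-year-old falls in Elementary Age.
    """
    if age_years < 5:
        return "Preschool/Early Intervention"
    elif age_years <= 10:
        return "Elementary Age"
    elif age_years <= 13:
        return "Middle-School Age"
    elif age_years <= 18:
        return "High-School Age"
    return "Other"

# Ages reported in months can be converted before categorizing:
mean_age_months = 128
print(age_group(mean_age_months / 12))  # 128 months is about 10.7 years
```

Under these assumed cutoffs the reported mean of 128 months falls just past the Elementary upper bound, which illustrates why explicit boundary rules matter when aggregating participant ages.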

Of the 1,426 participants, 839 were served in general education settings and 587 in special education settings. The majority of participants in special education settings (63%) were identified with a specific disability (see Table 4). Of those with a specific disability, 54% were labeled as mentally retarded and 25% were labeled with a specific learning disability. A large group of students (37%) was given a generic label (e.g., at risk) instead of a special education label. A clear majority of students in general education settings did not have an identified disability (86%). Students with mental retardation comprised the largest group of students with a disability served in general education settings (n = 14).

Table 4 Percent and number of categories of participants across settings for experimental studies in JoBE from 1991 to 2005

Studies conducted in college settings

The foci of interventions in college settings were almost exclusively academic in nature (87%). Most interventions were implemented in a large-group format (55%), followed by one-on-one (29%), and medium groups (10%). Experimenters intervened most often (58%) followed by other personnel (e.g., graduate assistants) (35%). A relatively small number of studies included information regarding the generalization (3%) or maintenance (10%) of behavior change. Procedural integrity data were only collected for 19% of the experiments. Similarly, social validity data were collected in 13% of experiments.

A total of 2,128 individuals participated in studies in higher education settings. The mean chronological age of participants was 230 months, or about 19 years of age (age data were available for 816 participants). For participants whose gender data could be disaggregated (n = 1,151), the percentage of male participants (27%) was less than that of female participants (73%).

Descriptive studies

There were 20 descriptive studies published in JoBE during the years covered by this review. Examples of studies that fell under the descriptive category include examinations of verbal responding directed toward students with and without disabilities (Ormsby and Deitz 1994), surveys on curricular options for students with disabilities (Lovitt 1995), and instrument reliability studies (e.g., Reed et al. 1997). Research designs were most often correlational in nature (70%). Sixty percent of descriptive studies were conducted in school-based settings (general/special education), and fifteen percent were conducted across multiple settings.

There were 11,097 participants in descriptive studies (28% female, 72% male). A large percentage of participants in the descriptive studies (30%) did not have an identified disability, and only 0.5% had a specified disability; disability status was not explicitly reported for the majority of participants (69%). As with the participants from the experimental studies, participants from descriptive studies were broken into age groups that corresponded with school grade levels. Most of the individuals who participated in descriptive studies were of middle-school age (81%), followed by elementary (2%), college (1.5%), high-school (0.07%), and preschool (0.03%) ages. Grade-level information was unavailable for 15% of participants. The mean age of participants in the descriptive studies was 137.08 months (SD = 42.03), or about 11.5 years of age.

Discussion

The philosopher George Santayana once wrote, “Those who cannot remember the past are condemned to repeat it.” In behavioral research, repeating the past, or replication, is not necessarily a bad idea. Indeed, single-subject research designs establish validity through direct and indirect replications, or repeating the past. However, if the science of behavior is to meet new challenges we must systematically forge ahead. Part of making progress entails examining questions related to our collective past.

There were many areas of strength associated with the articles analyzed for this paper. Most notable was the diversity of age, settings, and disability status of the participants in the studies published in JoBE. This diversity certainly makes it more likely that the results of these studies will apply to an increasingly diverse school population. A second strength noted was the variety of applied problems examined by researchers in the Journal. The eclectic mix of behaviorally-based interventions has and will continue to provide practitioners with a variety of interventions to address academic, social, and behavioral difficulties. Finally, the number of different individuals who contributed to JoBE during the time period covered by this review (n = 623) provides hope that our science will continue to prosper well into the future.

As with any scientific endeavor there are strengths, and there are areas in which to improve. The generality of instructional procedures and the generalization of behavior changes are two broad issues that should continue to be addressed by researchers in the future. If we are to realize our goal of providing effective educational and behavioral strategies for all students, we must begin to examine interventions in contexts similar to those most often encountered in schools. For example, most instruction takes place in large groups and is delivered by a general educator. However, in the present analysis general educators delivered interventions in only 8% of studies, and interventions for large groups of students (i.e., class-wide) were investigated in only 10% of experimental studies. Future researchers may wish to address this potential limitation by conducting more research in the general education environment, with interventions implemented by individuals typical of that setting. It is through these linkages with general education that preventive efforts for academic and behavioral issues may continue to take shape.

A second potential threat to the generality of interventions is uncertainty about whether researchers implemented procedures exactly as stated in the study. Although the percentage of experiments where treatment integrity was documented is greater here (56% P-12 and 19% college) than in previous studies (e.g., 16% of studies with children as participants in the Journal of Applied Behavior Analysis between 1980 and 1990; Gresham et al. 1993), any number less than 100% makes generality of instructional procedures less likely because of the potential for variability in implementation of those procedures. Relatedly, practitioners implement interventions that are practical and effective, and one measure of the value of an intervention is social validity. Unfortunately, the number of studies that contained information about this important variable is quite low. Fortunately, these problems can be readily addressed by collecting and reporting treatment integrity and social validity data.

Documenting generalization of intervention effects to other interventionists, settings, and behaviors is also a key issue. Opponents often cite problems with generalization and maintenance as potential disadvantages of behavioral programming. In the present analysis, generalization and maintenance data were collected in a clear minority of studies. This lack of documentation is not unique to JoBE; it has been found in other literatures as well (see Schlosser and Lee 2000). However, generalization continues to be a key concern and must be actively addressed in behavior change programs (Stokes and Baer 1977) if we are to adequately defend the utility of behaviorally-based programming.

As stated in the mission statement, the Journal of Behavioral Education is a “...single-source forum for the publication of research on the application of behavioral principles and technology to education.” Indeed the Journal has served as an outlet for researchers who may not otherwise have had an opportunity to disseminate findings, yielded primarily through single-subject research designs, in other education or psychology journals. We hope the results of this review will serve as both a “pat on the back” and as a reminder that helps guide future efforts on behaviorally-based programming in school-based settings.