Functional analysis (FA) has become the de facto standard assessment for discovering environmental variables that maintain and evoke challenging behavior (Roane, Fisher, Kelley, Mevers, & Bouxsein, 2013). Although FA has many benefits, several areas for improvement remain. For example, FAs rely on visual analysis to determine behavioral function. Because visually displayed data are interpreted subjectively, reliability and agreement regarding function vary (Hagopian et al., 1997).

Since the development of modern FA practices, many variations of the procedure have become available, and with these modified FAs, visual displays have changed. For example, trial-based FA often uses bar graphs to compare conditions (e.g., Austin, Groves, Reynish, & Francis, 2015). Other variations rely on equal-interval line graphs, which can create issues with agreement on results (Diller, Barry, & Gelino, 2016). These variations, paired with potential inconsistencies, have led some researchers to develop alternative FA techniques (e.g., trial-based FA, the interview-informed synthesized contingency analysis).

Other issues pertinent to FA variations include limited control over environmental conditions, staff training, initial case selection, and data collection and interpretation (Iwata & Dozier, 2008). For instance, insufficient control over environmental conditions makes it difficult to determine the effects of confounding variables during assessment. Further, less sensitive behavioral units (i.e., dimensionless quantities such as a percentage) and certain discontinuous measures may limit clinicians’ ability to detect change and can lead to erroneous conclusions. Measurement error can also create issues with data analysis by over- or underestimating the level of behavior (Johnston & Pennypacker, 2009). The previously described issues point to areas for growth, including the need for a standard approach to data analysis.

One solution advanced by several experimenters for enhanced data analysis involves the use of structured criteria to aid visual analysis. The structured criteria use a quotient that yields an upper and lower criterion line (Hagopian, Rooker, & Zarcone, 2015). The criterion lines, along with the selected data points, then follow predetermined rules to differentiate function (Hagopian et al., 1997). Quantifying data via structured criteria has facilitated decision making and improved agreement (Hagopian et al., 1997). Use of the criterion lines increased agreement when conditions contained 10 data points each. A replication of Hagopian et al. (1997) using fewer than 10 data points per condition demonstrated similar results (Roane et al., 2013).
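
The exact rules in Hagopian et al. (1997) are not reproduced here; the sketch below is a loose, hypothetical rendering of the general idea only, assuming criterion lines drawn one standard deviation above and below the control-condition mean and a simple proportion-of-points rule. The condition names, data values, and thresholds are illustrative assumptions, not the published criteria.

```python
# Hypothetical sketch of structured-criteria logic (NOT the exact
# Hagopian et al., 1997 rules): criterion lines are assumed to sit one
# standard deviation above and below the control-condition mean, and a
# test condition is flagged when at least half of its points exceed the
# upper criterion line.
from statistics import mean, stdev

def structured_criteria(control, test, proportion=0.5):
    upper = mean(control) + stdev(control)   # assumed upper criterion line
    lower = mean(control) - stdev(control)   # assumed lower criterion line
    above = sum(1 for x in test if x > upper)
    return above / len(test) >= proportion, (lower, upper)

control = [0.2, 0.4, 0.3, 0.5, 0.3]          # e.g., play condition (hypothetical)
attention = [1.1, 0.9, 1.4, 1.2, 1.0]        # hypothetical test data
differentiated, lines = structured_criteria(control, attention)
print(differentiated, lines)                 # True, plus the criterion lines
```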

Aside from structured criteria to increase agreement, the field of behavior analysis has not provided solutions to assist with the interpretation of FA data. For example, one study sought to understand agreement with FA data by asking participants to determine function (Diller et al., 2016). The researchers found lower agreement than in previous research, even though their procedures mimicked an earlier study, with the exception of using a multielement design (Diller et al., 2016).

Structured criteria arose out of the need for consistent interpretation of visually displayed data across behavior analysts. Supplementing traditional visual analysis with structured criteria holds promise. Another line of thought beyond supplementing analyses explores the actual visual display used in the analysis. In other words, could an alternate graphic display provide a more consistent and reliable method for detecting patterns and making intervention decisions?

The functional analysis celeration chart (FACC) meets the criterion of a standard, nonchanging display (cf. Hagopian et al., 1997). A standard visual display provides graph readers with several advantages. First, efficiency in detecting effects increases due to familiarity with scaling and construction features. Second, comparisons across data sets occur in a streamlined fashion due to the uniform and consistent data display. Third, ratio charts provide a proportional view that illustrates subtle change patterns hidden by linear graphs. The previously described assets may enhance visual analysis.

As a new member of the celeration family of charts, the FACC shares similarities with the timings chart, as opposed to the daily, weekly, monthly, and yearly standard celeration charts (SCCs). The FACC has a tailored use for FAs and relies on level as the analytic tactic; celeration does not indicate function, because a comparison of conditions is required to determine maintaining variables. The left and right vertical axes share the same features as a typical daily per-minute SCC: the left vertical axis displays count per minute (i.e., the scale starts at 0.001 and ends at 1,000), whereas the right vertical axis shows common counting times (i.e., the scale covers 1,000 min to 10 s). The horizontal axis, however, has successive and nonsuccessive timed measurements (e.g., Fig. 1), as opposed to the calendar time found on SCCs, because a typical FA incorporates multiple, brief sessions run per day (Betz & Fisher, 2011). The data can also be grouped, with an option to label the horizontal axis with nonsuccessive timed measurements (as in Fig. 1).

Fig. 1. A cross-section example of the functional analysis celeration chart
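
As a rough illustration of the axis features described above (not the published chart template), the following sketch sets up a ratio-scaled vertical axis spanning 0.001 to 1,000 counts per minute with session-based measurements on the horizontal axis; the plotted values and styling are hypothetical assumptions for demonstration.

```python
# Rough sketch of the FACC's multiply (ratio) vertical scale, not the
# official chart template: count per minute from 0.001 to 1,000 on a
# logarithmic axis, with successive FA sessions on the horizontal axis.
import matplotlib.pyplot as plt

sessions = [1, 2, 3, 4, 5, 6, 7, 8]                  # hypothetical sessions
play     = [1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 0.9, 1.0]  # hypothetical control data
escape   = [4.5, 5.0, 4.8, 5.2, 4.9, 5.1, 4.7, 5.0]  # hypothetical test data

fig, ax = plt.subplots()
ax.plot(sessions, play, "o-", label="Play (control)")
ax.plot(sessions, escape, "h-", label="Escape (test)")  # hexagon markers
ax.set_yscale("log")                                  # ratio/multiply scale
ax.set_ylim(0.001, 1000)                              # left axis: count per minute
ax.set_xlabel("Successive timed measurements (sessions)")
ax.set_ylabel("Count per minute")
ax.legend()
plt.show()
```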

The purpose of the current brief practice was to examine whether the FACC might enhance visual analysis for identifying function. Precision teachers have long benefited from having a standard visual display geared toward producing recognizable patterns of behavior, and it stands to reason that a standard display may also benefit behavior analysts conducting FAs. This brief practice addressed the following research questions:

1. What level values occur for stated functions on linear graphs when replotted on a standard ratio chart?

2. How would the new level analysis on the FACC relate to previously determined functions?

Method

Journal Selection

The Journal of Applied Behavior Analysis (JABA) served as the exclusive source for FA articles. JABA has an extensive track record of publishing FAs; research shows nearly half of FA research appears in JABA (Beavers, Iwata, & Lerman, 2013). The current brief report examined a 7-year subset of FA articles from JABA, from 2010 to 2016 (see Falcomata, Wacker, Ringdahl, Vinquist, & Dutt, 2013; Fisher, Greer, Fuhrman, & Querim, 2015; Gabor, Fritz, Roath, Rothe, & Gourley, 2016; Hammond, Iwata, Fritz, & Dempsey, 2011; Marsteller & St. Peter, 2014; Rodriguez, Thompson, Schlichenmeyer, & Stocco, 2012; and Travis & Sturmey, 2010, for the specific studies reviewed). Qualifying articles had to include continuous measures (e.g., rate per minute) and a visual display of data depicted on a line graph. The researchers selected an article randomly from each volume.

Procedure

After selecting the FA articles, the focus turned to identifying graphs with FA data. A screen capture of each FA graph from the qualifying articles went into WebPlotDigitizer (Rohatgi, 2017). WebPlotDigitizer then provided the coordinates of each datum, thereby yielding the original data set from each article. The researcher then entered the extracted data into a software platform (CentralReach PrecisionX, 2019) that contained the FACC. (Free PDFs of the FACC are available by e-mailing the lead author.)
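
WebPlotDigitizer can export the extracted coordinates for further analysis; the sketch below shows one way the extracted x–y pairs might be organized by condition. The file name and the condition/x/y column layout are hypothetical assumptions, not the study's actual export format.

```python
# Hypothetical sketch of organizing WebPlotDigitizer output; the file
# name and the "condition,x,y" column layout are assumptions, not the
# procedure's actual export format.
import csv
from collections import defaultdict

rates_by_condition = defaultdict(list)
with open("fa_graph_extracted.csv", newline="") as f:
    for row in csv.DictReader(f):                  # columns: condition, x, y
        rates_by_condition[row["condition"]].append(float(row["y"]))

for condition, rates in rates_by_condition.items():
    print(condition, rates)                        # rate per minute per session
```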

Each data set, now replotted on the FACC, presented a visual display of the level. Each level also had a numerical value calculated using the geometric mean. The geometric mean has three advantages over the arithmetic mean and median: (a) very high and low values, namely outliers, do not skew the result; (b) the calculation uses all of the data points; and (c) the number of data points collected does not negatively affect the resulting value (Clark-Carter, 2005). A calculation then followed in which each test condition was compared against the control condition. For example, a control condition (e.g., free play) with a level value of 1 compared to a test condition (e.g., attention) with a level of 2 would produce a ×2 (i.e., 2 ÷ 1 = 2; the multiplication symbol is added because moving from 1 to 2 on the FACC depicts multiplicative change). The level multiplier refers to the multiplicative or divisional change, expressed as a multiplier or a divider, from one level line to another (Kubina, 2019).
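
To make the calculation concrete, the following is a minimal sketch of the level (geometric mean) and level multiplier computation just described; the condition names and rates are hypothetical, and the PrecisionX software's actual algorithm may differ in details such as the handling of zero counts.

```python
# Minimal sketch of the level (geometric mean) and level multiplier
# calculations described above; the condition names and rates are
# hypothetical, and zero counts would need special handling before
# taking a geometric mean.
from math import prod

def level(rates):
    """Geometric mean of a condition's rates (count per minute)."""
    return prod(rates) ** (1 / len(rates))

def level_multiplier(test_level, control_level):
    """Express the change from control to test as a multiplier or divider."""
    if test_level >= control_level:
        return f"×{test_level / control_level:.2f}"
    return f"÷{control_level / test_level:.2f}"

control_level = level([1.0, 1.1, 0.9, 1.0])    # e.g., free play
attention_level = level([2.1, 1.9, 2.0, 2.2])  # e.g., attention condition
print(level_multiplier(attention_level, control_level))  # ×2.05
```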

The researchers then submitted all of the replotted data for analysis in the FACC. All graphs had a set of level multiplier values for each respective condition. The researchers matched the functions reported by the JABA authors with the level multiplier values calculated in the current analysis.

Accuracy

The extent to which observed values estimate events that took place in an experiment defines accuracy (Johnston & Pennypacker, 2009). In the present research, the software PrecisionX calculated levels with the geometric mean through a specific algorithm. The researchers checked 30% of the PrecisionX values by hand, recalculating the levels and level multipliers. The correspondence between the observed values and the true values (i.e., accuracy) came to 100%.
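
A spot check of that kind could be sketched as follows; the software value, data, and tolerance shown are hypothetical illustrations, not the values actually verified in the study.

```python
# Hypothetical spot check: compare a hand-calculated level against the
# value reported by the charting software; data and tolerance are
# illustrative only.
from math import prod, isclose

def level(rates):
    return prod(rates) ** (1 / len(rates))

software_level = 2.05                         # value read from the software (hypothetical)
hand_level = level([2.1, 1.9, 2.0, 2.2])      # hand calculation
print(isclose(software_level, hand_level, rel_tol=0.01))  # True if they agree
```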

Results

Table 1 shows the results of the function-matched level values. The table displays the year of each study, how many data sets the article contained, the function determined in the original article, and the level multiplier values. The Results column lists the function determined by the authors and the corresponding level multiplier determined with the FACC. A multiplication symbol indicates the test condition occurred at higher levels than the control, and a division symbol indicates the test condition had lower rates of responding than the control condition. The resulting values spanned ÷1.93 to ×545. The ÷1.93 level multiplier illustrated the difference between the test and control conditions for one data set: the authors rendered a judgment of automatic reinforcement due to elevated levels across conditions with no discernible difference between test and control (Rodriguez et al., 2012). The FACC visually supported the decision of automatic reinforcement, and the level multiplier quantitatively confirmed the lack of apparent differentiation.

Table 1. Authors’ Determination of Function and FACC Results

Only one article reported an automatic function (Rodriguez et al., 2012); the remainder contained a single or multiply maintained function. Figure 1 displays a hypothetical example of a single function. The data points in the play and tangible conditions all rest on the level line of 1 per minute. A very slight elevation occurs for attention at 1.3 per minute. The hexagon symbol representing the escape condition shows an elevated number of responses, 4.9 per minute. As with a linear graph, visual analysis would suggest an escape function. The level line on the FACC, however, permits a standard and proportional view of the data, and the FACC additionally offers quantification of the level multiplier. Both the standard view and the mathematical analysis of function offer several advantages to behavior analysts.
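
Using the hypothetical Fig. 1 values just described, the corresponding level multipliers can be verified with a quick calculation (a sketch assuming the play condition's level of 1 per minute as the control).

```python
# Level multipliers for the hypothetical Fig. 1 values: play and
# tangible at 1 per minute (control), attention at 1.3, escape at 4.9.
test_levels = {"tangible": 1.0, "attention": 1.3, "escape": 4.9}
control_level = 1.0  # play condition
for condition, lvl in test_levels.items():
    print(condition, f"×{lvl / control_level:.1f}")
# tangible ×1.0, attention ×1.3, escape ×4.9
```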

In each data set, the condition the authors identified as the function showed a level at least ×2 above the control condition. The results suggest that a minimum level multiplier may visually and quantitatively identify the function of challenging behavior, and the ×2 value provides behavior analysts with an additional tactic for decision making. Further, multiply maintained results had different level multipliers for each condition, indicating that one source of reinforcement may hold more value than another (e.g., Marsteller & St. Peter, 2014).

Discussion

First, the standard view of data means the physical dimensions of all visual displays remain constant. Every linear graph in the reviewed articles had different space allocations for the data. For instance, one graph used 0.015 in. of space for the scale of 0 to 1 (Rodriguez et al., 2012), whereas another graph allocated 0.36 in. for the same scale (Fisher et al., 2015), and a third graph had 1.56 in. of distance to visually represent 0 to 1 (Hammond et al., 2011). With such extreme variability in the space assigned to the same scaling value, the resulting visual pattern of the data differs substantially, thereby affecting visual analysis.

Second, level analysis on linear graphs and level analysis on the FACC constitute two distinct methods. With linear graphs, two methods exist for comparing adjacent levels (Gast & Ledford, 2014): (a) the comparison of the last data point in a condition to the first data point in the next condition and (b) the comparison of the median of the last half of data points in a condition to the median of the first half of data points in the following condition. Both options involve subtracting the smaller value (the subtrahend) from the larger value (the minuend), with the resulting value representing the level difference. As an example, if one condition had a level value of 10 and the second condition had a level value of 14, the level difference would come to 4.
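
A brief sketch of the two linear-graph options, using hypothetical condition data built around the 10-versus-14 example:

```python
# Sketch of the two level-change options for adjacent conditions on a
# linear graph (hypothetical data): (a) last datum vs. first datum, and
# (b) median of the last half vs. median of the first half.
from statistics import median

def level_change_last_first(cond_a, cond_b):
    low, high = sorted([cond_a[-1], cond_b[0]])
    return high - low                       # larger minus smaller

def level_change_median_halves(cond_a, cond_b):
    a = median(cond_a[len(cond_a) // 2:])   # last half of first condition
    b = median(cond_b[:len(cond_b) // 2])   # first half of next condition
    low, high = sorted([a, b])
    return high - low

cond_a = [8, 9, 10, 10]                     # level value near 10
cond_b = [14, 14, 15, 13]                   # level value near 14
print(level_change_last_first(cond_a, cond_b))     # 4
print(level_change_median_halves(cond_a, cond_b))  # 4
```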

For multielement designs, the control acts as the comparison condition for the other conditions. The experimenter would conduct a level analysis by contrasting the level of each test condition with the control. The calculation calls for subtracting the smaller value from the larger value. A control condition (e.g., play) with a level value of 3 and a test condition (e.g., demand) with a level value of 9 would yield a difference of 6. Stated differently, responding in the demand condition occurs six more than in the play condition; six more refers to the difference resulting from the subtraction (i.e., the absolute amount of change). A per-minute difference of six would appear as a visually distinct separation. However, a difference of six more on a graph with percentage scaled vertically (i.e., 1–100) would not suggest a function.

The FACC enables a different type of level analysis called the level multiplier, which provides a relative amount of change. In the previous paragraph, the level analysis conducted on the linear graph produced an absolute change of six more. When analyzed on the FACC, the larger value divided by the smaller value yields the level multiplier (e.g., 9 ÷ 3 = 3); therefore, demand occurs three times more than play. The value of the level multiplier shows itself with relative differences between levels. With a control condition level of 20 and a test condition level of 30, the level multiplier would come to ×1.5, whereas the level difference on a linear graph would come to 10. Therefore, when using a level-difference analysis, 3 to 9 equals a difference of 6, whereas 20 to 30 equals a difference of 10, a numerical and visual representation suggesting the larger change favors 10. Yet when using the level multiplier, 3 to 9 is ×3 and 20 to 30 is ×1.5, indicating ×3 as the larger difference in relative terms.
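
The contrast between the absolute level difference and the relative level multiplier can be sketched with the values from this paragraph:

```python
# Contrast of absolute level difference (linear graph) and relative
# level multiplier (FACC) using the values discussed above.
def level_difference(control, test):
    return abs(test - control)                       # absolute amount of change

def level_multiplier(control, test):
    return max(test, control) / min(test, control)   # relative amount of change

for control, test in [(3, 9), (20, 30)]:
    print(control, test,
          "difference:", level_difference(control, test),
          "multiplier: ×%.1f" % level_multiplier(control, test))
# 3 vs 9   -> difference 6,  multiplier ×3.0
# 20 vs 30 -> difference 10, multiplier ×1.5
```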

The stark contrast between the level difference and the level multiplier has the potential to appreciably help behavior analysts discover functions. When the levels of the test conditions appear discernibly higher than the control, a functional relation likely exists (Betz & Fisher, 2011). With the level multiplier, the reviewed data all indicate a relative difference of ×2.0 or greater, which suggests a functional relation. Although the present review examined only seven studies, should the ×2 level multiplier remain predictive and valid, behavior analysts would have an objective measure to determine function.
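
If the ×2 criterion were treated as a screening heuristic (a tentative rule based on only seven studies, not an established clinical standard), it could be expressed as simply as the following sketch:

```python
# Tentative screening heuristic only: flag a test condition as a
# possible function when its level multiplier reaches ×2 relative to
# the control condition. Not an established clinical standard.
def suggests_function(control_level, test_level, criterion=2.0):
    return test_level / control_level >= criterion

print(suggests_function(control_level=1.0, test_level=4.9))  # True (escape example)
print(suggests_function(control_level=1.0, test_level=1.3))  # False (attention example)
```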

Future Research and Limitations

The current brief practice builds upon the ideas put forward by Hagopian et al. (1997). The need for, and subsequent efforts to provide, structured criteria to augment analysis and decision making enhance outcomes for clients. The present study offers an alternative idea by placing data on the FACC, a ratio chart derived from precision teaching and engineered to show relative difference. Subsequent research should expand the number of studies examined to further evaluate the ×2 level multiplier difference; the present study requires replication beyond its limited sample. Furthermore, the present study evaluated existing data. The true value of the method would emerge from behavior analysts implementing the FACC with clients, using quantification and visual analysis to determine the function of problem behavior more efficiently.