Motivating operations (MOs) are a class of environmental events that alter the reinforcing or punishing effectiveness of other events. For instance, prolonged exposure to the sun can increase the punishing effectiveness of putting something in contact with the affected skin and the reinforcing effectiveness of removing anything in contact with the affected area. Changes in stimulus control characteristically mediate such effects (Edwards, Lotfizadeh, & Poling, 2019a, 2019b; Poling, Lotfizadeh, & Edwards, 2020). For example, you fall asleep on your belly by your pool, awaken with a severe sunburn, and go inside. A boisterous friend, who often affectionately slaps your back in greeting, rings your doorbell. You open the door, then step away from your friend and immediately tell her about your sunburned back. Stepping away and describing your sunburn are avoidance responses that occur because 1) your unprotected skin was exposed to sunlight for a sufficient period to be burned (i.e., an MO was in effect), and 2) your friend (a historically relevant stimulus) was present. Because such joint control of behavior by MOs and antecedent stimuli is ubiquitous, we have suggested that MOs should be defined as “operations that modulate the reinforcing or punishing effectiveness of particular kinds of events and the control of behavior by antecedent stimuli historically relevant to those events” (Edwards et al., 2019b, p. 57), which differs substantially from the definition proposed by Michael and his colleagues (e.g., Laraway, Snycerski, Michael, & Poling, 2003) in what we herein term the “current” conceptualization of MOs. In both the current conceptualization and ours, “establishing operations” (EOs) increase the effectiveness of reinforcers and “abolishing operations” (AOs) decrease it.

In addition to redefining MOs, we proposed that the subtypes of conditioned MOs delineated by Michael and his associates are problematic in several regards. We will not repeat the basis for that contention, nor our response to critiques of our position (Edwards et al., 2019b). Rather, in this article we consider the adequacy of the current conception of MOs as it relates to negative reinforcement. While dissecting the MO concept for our recent reconceptualization (Edwards et al., 2019a), we became aware of serious issues associated with its handling of negative reinforcement. Further investigation of these issues and writing of the present article were prompted by Petursdottir’s (2019) comments on our proposed reconceptualization. In the process of examining these issues, we came to realize that they have a major influence on how behavior analysts conduct functional analyses and address negatively reinforced behavior in applied settings. We also reached the unavoidable conclusion that Michael’s handling of them is fundamentally inadequate, despite the well-recognized value of his work and the related work of other behavior analysts who support the current conceptualization of MOs (see, e.g., Laraway, Snycerski, Olson, Becker, & Poling, 2014; Michael & Miguel, 2019). Our goal is not to disparage his seminal work, but rather to call attention to aspects of it that can, and should, be improved.

On Terms

Michael (1975) and others (e.g., Baron & Galizio, 2005; Iwata, 2006) have argued that the distinction between positive and negative reinforcement (and, by extension, between positive and negative punishment) is not always clear, a point with which we agree. They also argue that it is not useful, and here we disagree. Distinguishing between positive and negative reinforcement focuses on the kind of stimulus change (adding something to the environment or taking something away) that strengthens behavior. If nothing else, the distinction is a useful heuristic that helps students understand that different kinds of stimulus changes can serve as reinforcers (or punishers).

Behavior analysts have used various terms to refer to the stimuli involved in negative reinforcement, often in inconsistent and potentially confusing fashion. Skinner (1953) provided the following definition of the negative reinforcer: “we define the negative reinforcer (an aversive stimulus) as any stimulus the withdrawal of which strengthens behavior” (p. 185, emphasis in original). This definition is straightforward and unambiguous. But he also used the term “negative reinforcer” to refer to stimuli whose presentation weakens behavior. Many contemporary behavior analysts would refer to such stimuli as positive punishers, maintaining the convention that reinforcers strengthen and punishers weaken the responses that produce them. Poling, Carr, and LeBlanc (2002) define positive punishment as “a procedure (or process) in which the presentation of a stimulus after a behavior weakens (e.g., decreases the likelihood of) that behavior in the future” (p. 271). They define negative reinforcement as “a procedure (or process) in which the removal or postponement of a stimulus after a behavior strengthens (e.g., increases the likelihood of) that behavior in the future” (p. 271). Each definition specifies a response-produced change in the environment and the subsequent behavioral effect of that change. Moreover, by including stimulus postponement as a change in the environment, the definition of negative reinforcement incorporates unsignaled avoidance. Consistent with these definitions, rather than applying the term “negative reinforcer” to the stimulus that is removed in an instance of negative reinforcement, we herein apply the term to the removal of the stimulus (i.e., to the stimulus change rather than to the stimulus itself).
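For readers who find a computational rendering helpful, the following minimal Python sketch (our own illustration; all names in it are ours) encodes the fourfold taxonomy implied by these definitions: the kind of response-produced stimulus change crossed with its subsequent effect on behavior. Note that the classification depends on the observed effect, never on any fixed property of the stimulus.

```python
from enum import Enum

# A sketch of the taxonomy implied by the definitions above: the kind of
# response-produced stimulus change crossed with its effect on behavior.

class StimulusChange(Enum):
    PRESENTATION = "presentation"
    REMOVAL_OR_POSTPONEMENT = "removal or postponement"

class BehavioralEffect(Enum):
    STRENGTHENS = "strengthens"
    WEAKENS = "weakens"

def classify(change: StimulusChange, effect: BehavioralEffect) -> str:
    """Name the operant process defined by a stimulus change and its effect."""
    table = {
        (StimulusChange.PRESENTATION, BehavioralEffect.STRENGTHENS): "positive reinforcement",
        (StimulusChange.PRESENTATION, BehavioralEffect.WEAKENS): "positive punishment",
        (StimulusChange.REMOVAL_OR_POSTPONEMENT, BehavioralEffect.STRENGTHENS): "negative reinforcement",
        (StimulusChange.REMOVAL_OR_POSTPONEMENT, BehavioralEffect.WEAKENS): "negative punishment",
    }
    return table[(change, effect)]

# The same stimulus change can fall in different cells under different
# circumstances; only the observed effect determines the classification.
print(classify(StimulusChange.REMOVAL_OR_POSTPONEMENT, BehavioralEffect.STRENGTHENS))
```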

In many cases, if the offset of a stimulus functions as a negative reinforcer, the onset of that same stimulus will function as a positive punisher, and vice versa. Likewise, if the onset of a stimulus functions as a positive reinforcer, its offset will often function as a negative punisher, and vice versa. But such symmetrical functions do not always occur. It is important to ascertain the behavioral functions of stimuli in the situation of concern and to describe those functions unambiguously. Saying, for example, that removal of a punisher strengthens behavior invites confusion; clarity is served by restricting the term “punisher” to stimulus changes that weaken behavior and “reinforcer” to stimulus changes that strengthen it.

There is a long tradition in behavior analysis of using the term “aversive stimuli” to refer to stimuli that are avoided or terminated in avoidance and escape procedures, respectively, and stimuli that are presented dependent upon a response in punishment procedures (e.g., Azrin & Holz, 1966; Hineline, 1984; Skinner, 1953). There is no major problem if “aversive stimulus” is used in a strictly functional sense: the onset of an aversive stimulus functions as a positive punisher and the offset of an aversive stimulus functions as a negative reinforcer. Although these effects are typically symmetrical, as noted previously, whether either effect is evident depends on a variety of factors, and one cannot safely assume that a stimulus whose offset serves as a negative reinforcer in one situation will serve as a positive punisher when presented in another. Terms like “aversive stimulus” suggest that the behavioral effects of stimuli are fixed and invariant, which is clearly not the case. These shortcomings notwithstanding, “aversive stimulus” is a well-established term that is unlikely to fall from favor.

Michael’s Early Analysis of MOs and Negative Reinforcement

In Michael’s (1982) original reintroduction of the MO (then termed “establishing operation” [EO]) concept, he focused on distinguishing between discriminative and reinforcer-effectiveness-altering (i.e., motivational) functions of stimuli. Michael argued that a stimulus whose offset would function as a reinforcer (e.g., shock offset) should not be classified as a discriminative stimulus because discriminative stimuli must be correlated with the differential availability of reinforcement. He contended that, because termination of the stimulus would not function as a reinforcer if the stimulus were not present, it fails to meet the definition of a discriminative stimulus. He concluded that it is better to classify such a stimulus as an MO because the offset of the stimulus only functions as a reinforcer when the stimulus is present.

We suspect that Michael was attempting to square his analysis of motivation with Skinner’s analysis. For example, Skinner (1953) wrote:

When we present an aversive stimulus, any behavior which has previously been conditioned by the withdrawal of the stimulus immediately follows, and the possibility of conditioning other behavior is immediately provided. The presentation of the aversive stimulus therefore resembles a sudden increase in deprivation (Chapter IX), but since deprivation and satiation differ in many respects from the presentation or removal of an aversive stimulus, it is advisable to consider the two kinds of operations separately. We study aversive behavior in accordance with our definition: by presenting an aversive stimulus, we create the possibility of reinforcing a response by withdrawing the stimulus. (p. 172)

Michael’s conception of MOs as they relate to stimulus changes that sustain escape responding is consistent with Skinner’s analysis, as summarized in the foregoing quote. Nonetheless, the conceptualization is fundamentally flawed, as we explain.

MOs and Negative Reinforcement: Logical Issues

For something to be terminated, it must first be presented. Inherent in Michael’s (1982) argument that MOs include presentations of stimuli whose termination functions as negative reinforcement is the assumption that it is meaningful to talk about terminating something that is not present. In a relevant example, Michael stated, “. . . unless the shock is on, shock termination is not behaviorally functional as a form of reinforcement” (p. 151). This is analogous to a structural engineer arguing that for a building to fall down, it must first be built, and that some special terminology is required to describe the relationship between these two events so that we do not get confused about buildings that were never built falling down. This is clearly foolishness.

If one’s goal is predicting and (potentially) controlling behavior, the real issue is: “How can we know whether the offset of a given stimulus will increase the probability of (or otherwise strengthen) responses that produce this effect?” If this behavioral function were a fixed characteristic of the stimuli of interest, then a simple cataloging of events that had this function could provide a basis for answering this question. We know, for example, that extremely high-intensity stimuli of many kinds, such as loud sounds, bright lights, and contact with very hot and very cold surfaces, typically do so. But, depending on the current state of the organism, termination of such stimuli can serve as a negative reinforcer, a negative punisher, or neither. Such diversity in function is even more pronounced with lower-intensity stimuli. For example, if an audio device is programmed to play Wagner’s Götterdämmerung, in some situations one of us would respond to turn the device on and the other would respond to turn it off. Sometimes, neither of us would do either. Saying that the onset of the music is an MO in one of these situations tells us nothing about the necessary and sufficient conditions for it to sustain escape responding, nor does it allow us to predict which of us it would affect in this way.

Furthermore, as discussed by Edwards et al. (2019b), the termination of the relevant stimulus (e.g., music) will only ever occur when the stimulus is present and, therefore, it could be argued that this stimulus meets the definition of a discriminative stimulus (i.e., the response of interest is more likely to occur in the presence of that stimulus than in its absence because, historically, the reinforcing consequence was available only in the presence of that stimulus). We would not argue that conceptualizing such a stimulus as a discriminative stimulus is of theoretical or practical value. But the simple fact that it can be so conceptualized helps to demonstrate the logical issues associated with Michael’s (1982) original conceptual work. If characterizing the presentation of negative reinforcers as MOs or as SDs actually clarified their function, perhaps either or both characterizations could be justified, but we gain nothing new from either analysis, aside from a reminder that, for a stimulus to be terminated, it must be presented.

In response to Michael’s (1993a) detailed exposition of the MO, in which he repeated his prior (i.e., Michael, 1982) ideas about the onset of a stimulus (which he termed “painful stimulation,” p. 195) serving as an EO that increases the reinforcing value of its offset, Hesse (1993) remarked that such a characterization was “an apparent redundancy in terms” (p. 216). Michael (1993b) replied:

A small point, but one that I am not satisfied with, is my statement that some punishing stimuli [i.e., stimuli that sustain escape responding] function as their own EOs. Hesse refers to this as “an apparent redundancy in terms,” and I agree. I'm not sure how to deal with this issue, and have simply postponed a more careful treatment. (p. 229)

It does not appear that Michael or other theorists returned to this issue and, aside from Hesse’s comment, Michael’s (1982) original analysis has gone unchallenged.

If we continue to adopt Michael’s (1982) approach to describing the relationship between MOs and negative reinforcement, we are also obliged to consider a parallel relationship between MOs and negative punishment. That is, according to Michael’s reasoning, the presentation of a reinforcer is an EO that establishes its own offset as a punisher. On this point, Hesse (1993) pointed out that, “For stimulus removal or withdrawal, we need specify only the reinforcement EO because withdrawal or removal of that reinforcer produces the behavior decrease we call punishment, and it is that EO that ‘establishes’ withdrawal or removal as an effective consequence” (p. 216). We agree with Hesse’s reasoning and argue that the MO is relevant to negative punishment in the same way that it is relevant to negative reinforcement; the concept is useful in helping us determine the conditions under which termination of a stimulus will function as negative punishment. Although we focus in particular on negative reinforcement in the present analysis, most points that we make are also directly relevant to the analysis of negative punishment.

Returning to the issue of negative reinforcement, following Hesse’s reasoning, the EO responsible for establishing the termination of a stimulus as a negative reinforcer is the only MO that we need to specify. The same EOs that increase the punishing effectiveness of the presentation of a stimulus will often (but not inevitably) increase the reinforcing effectiveness of the offset of the same stimulus. For example, sleep deprivation may increase the punishing effectiveness of the illumination of a bright light and increase the reinforcing effectiveness of extinguishing the same bright light. Construing the presentation of the stimulus (illumination of the light) as the EO distracts us from the real EO (sleep deprivation).

MOs and Negative Reinforcers: Effects of Misclassification

The (mis)application of MOs to negative reinforcers is present in most detailed descriptions of the MO concept. In a recently revised book that serves as a reference for many applied behavior analysts, the following description appears: “Painful stimulation is an EO that increases the effectiveness of pain reduction as a reinforcer and evokes all behavior that has been reinforced by pain reduction” (Michael & Miguel, 2019, p. 374). In addition to perpetuating the notion that MOs include stimuli whose offsets function as negative reinforcers, this quotation includes the term “painful,” which Michael (e.g., 1993a) and others (e.g., Carbone, Morgenstern, Zecchin-Tirri, & Kolberg, 2010) have used when discussing this issue. This term is problematic because “painful” is not functionally defined (or defined at all). Moreover, explaining an instance of escape responding by referring to pain is circular reasoning.

The MO concept is a valuable addition to our analytical toolbox because it provides a framework for discussing environmental events that alter the reinforcing and punishing effectiveness of other events. We most commonly speak of MOs that increase the reinforcing effectiveness of stimulus presentations (i.e., positive reinforcement). For example, we speak of exercise and exposure to high temperature increasing the reinforcing effectiveness of the stimulus change from water-absent to water-present. These same MOs can also increase the punishing effectiveness of the removal of the same stimuli (i.e., negative punishment). For example, if you have been working on fences in the hot sun all afternoon (the MO), a successful request for water from a nearby farmhouse would be positively reinforced, and spilling the only canteen of water that you brought with you would be negatively punished.

MOs can also increase the punishing effectiveness of stimulus presentations (i.e., positive punishment). For example, getting a sinus infection (which causes a headache, as indicated by self-report and collateral behaviors) can increase the punishing effectiveness of loud music. In addition, the same EO that increases the punishing effectiveness of a stimulus’s presentation will in general increase the reinforcing effectiveness of the removal of the same stimulus (i.e., negative reinforcement). After getting the headache, locating the volume knob on your friend’s stereo, which is loudly playing your favorite piece of music, “Ride of the Valkyries,” is reinforced by the removal of the music. This final point is key to the present discussion. The same stimulus change, from music-present to music-absent, functions as a negative punisher under some circumstances (e.g., prior to getting the headache) and a negative reinforcer under other circumstances (e.g., after getting the headache). We do not need the MO concept to tell us that the volume must be up before it can be turned down (this is a logical requirement), but we do need the MO concept to help us to understand how the termination of music can have different functions under different circumstances (i.e., when the organism is in different states).
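Readers who find a computational analogy useful might picture the state dependence just described as follows. This is a minimal sketch; the “headache” state and the mapping are hypothetical labels of our own choosing, not a model of any actual assessment.

```python
# A sketch of the state-dependent function described above: the same
# stimulus change (music-present to music-absent) is classified
# differently depending on the current state of the organism (the MO).

def function_of_music_offset(has_headache: bool) -> str:
    """Classify the offset of loud, preferred music given the listener's state."""
    if has_headache:
        # The headache (an EO) establishes music offset as a reinforcer.
        return "negative reinforcer"
    # Absent that EO, removal of preferred music weakens the behavior
    # that produces it.
    return "negative punisher"

print(function_of_music_offset(has_headache=False))  # negative punisher
print(function_of_music_offset(has_headache=True))   # negative reinforcer
```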

Negative reinforcers are a dynamic class of stimuli, or to be more precise, stimulus changes. The members of this class for a given individual will change across time. MOs are events that can move stimuli across the class boundary and/or increase or decrease the reinforcing effectiveness of the stimuli within the class. That is, as a result of MOs, stimulus changes can become more or less effective negative reinforcers or cease to function as negative reinforcers altogether. The MO concept is valuable because it prompts us to take these events and their effects into account when attempting to predict and change behavior. It is unfortunate that the MO concept, as currently applied to negative reinforcement, draws our attention away from the part of the analysis that is incomplete without the MO concept.

A hypothetical example may serve to further illustrate how the misapplication of MOs to negative reinforcement can turn attention away from changes in the environment that can alter reinforcer effectiveness. You and a friend are fishing and sharing a bottle of scotch. The fish aren’t biting and you end up drinking most of the bottle before deciding to stagger back to the lake house. When packing up your gear, you get a fishhook in your finger but don’t notice it until your friend tells you that you are bleeding. The entry of the fishhook into your finger did not function as a punisher for whatever behavior brought it about and, until your friend brought it to your attention, its removal would not have functioned as a negative reinforcer. In this case, the onset of the stimulus (hook entering finger) did not lead to the occurrence of responses that historically had removed hooks (or produced similar effects), hence it would be tempting to assert that neither negative reinforcement nor MOs were at play. In actuality, imbibing large quantities of scotch was an AO that decreased the punishing effectiveness of the fishhook entering the finger and decreased the reinforcing effectiveness of its removal.

The main points here are these: applying an MO analysis where it is not needed draws our attention away from the part of the analysis that is incomplete without the MO concept (What events alter the functions of putative negative reinforcers?), and the current conceptualization invites us to make the grave error of assuming, rather than analyzing, the functions of these stimuli and the environmental events that modulate those functions. This misapplication of the MO concept has serious implications for how we conceptualize negative reinforcement in applied settings.

MOs, Negative Reinforcement, and Applied Behavior Analysis

A search for “escape-maintained behavior” produces more than 600 results in Google Scholar. Researchers and practitioners often identify escape (i.e., negative reinforcement) as one of the functions of problem behavior and implement a variety of interventions, including escape extinction combined with differential reinforcement of alternative behavior, to reduce the problem behavior. In their review of functional analysis procedures and outcomes, Hanley, Iwata, and McCord (2003) found that 89.2% of the studies meeting inclusion criteria included a negative reinforcement condition (commonly referred to as an “escape” or “demand” condition) in the functional analysis. Of all studies, 34.2% produced results suggesting that the targeted problem behavior was escape-maintained.

The current MO conceptualization as it relates to negative reinforcement has directly informed the language and, we fear, the procedures associated with functional analyses of problem behavior. For example, in the “demand” condition, the presentation of a task of some sort is conceptualized as an EO (which establishes task removal as a reinforcer). We hope that, in light of the points we have raised, the reader is able to identify why we are concerned about this conceptualization as it is applied to functional analyses, but we will take this opportunity to restate our main points in this context.

The MO concept, applied in this way, takes attention away from operations that might influence the punishing effectiveness of the presentation of the stimuli associated with the specific “demand” and the reinforcing effectiveness of their removal, which is where the MO concept has unique analytical value. In addition, there is an inherent assumption in the current analysis that the removal of task demands functions as a negative reinforcer. Otherwise, their presentation could not possibly establish their removal as a reinforcer. This is not a safe assumption and we fear that, as a result of the current MO conceptualization, the demand condition in functional analyses has been misconstrued as a true test of the “escape” function without adequate evidence that the termination of the relevant stimulus functions as a negative reinforcer.

We are not the first to call attention to this issue. Roscoe, Rooker, Pence, and Longworth (2009) noted that, “altering antecedent variables associated with the demand context can alter the motivating operation for escape-maintained problem behavior” (p. 819). It is unfortunate that the misapplication of the MO concept to negative reinforcement has given Roscoe et al. and other behavior analysts a serious challenge when it comes to dissecting this issue and has impeded clear analysis of the demand condition in functional analysis procedures in general. “Antecedent variables” do not alter MOs, as suggested in this quotation. Instead, the antecedent variables may include MOs (among other manipulations) that influence the reinforcing effectiveness of demand removal.

Langthorne, McGill, and Oliver (2014) reviewed the literature relevant to negatively reinforced problem behavior with the intention of clarifying the importance of MOs in such behaviors. They explored many factors that are relevant to negatively reinforced behavior, including sleep deprivation and menses, but these factors were described as “predisposing the person’s behavior to be influenced by specific motivational operations” (p. 107), rather than as MOs that alter the effectiveness of negative reinforcers. In addition, consistent with the currently accepted conceptualization, stimuli whose offset would presumably function as a negative reinforcer were referred to as MOs throughout the analysis, drawing attention away from operations that might change the function of such stimulus changes (i.e., actual MOs) and implying that the behavioral function of these stimulus changes is fixed.

Smith, Iwata, Goh, and Shore (1995) explored the influences of some antecedent variables on escape-maintained behavior and encountered similar conceptual difficulties when doing so. The antecedent variables they considered included task novelty, duration of instructional sessions, and rate of task presentation. Each of these manipulations was construed as an MO manipulation, with explicit reference to Michael’s (1982, 1993a, 1993b) analysis. None of the manipulations, however, are appropriately classified as such. A novel task is a different set of stimuli rather than a change in the function of the stimuli initially escaped (recall that an MO must change the function of stimulus onset or offset).

This misclassification is the same as calling a change in the type of reinforcer (e.g., candy instead of fruit) an MO. Alterations in duration and rate are also changes in stimulus presentation but not function. Calling these manipulations MOs is the same as calling longer durations of access to food or a greater number of food deliveries MOs. Manipulations of the rate and duration of stimulus delivery might be construed as MOs in the same sense that fixed- or variable-time delivery of a reinforcer can be construed as an MO—additional exposure to a stimulus can change its function (e.g., Wilder & Carr, 1998). But Smith et al. (1995) did not explore such an analysis.

These issues are only a small sample of those that we have encountered in the relevant literature, a thorough review of which is beyond the scope of the present article. The misapplication of the MO concept to negative reinforcement has significantly hobbled our analysis and treatment of negatively reinforced behavior in applied settings. This misapplication at the fundamental level is clearly evident with respect to “conditioned motivating operations,” where it has caused additional confusion.

Conditioned Motivating Operations

Michael and Miguel (2019) defined conditioned motivating operations (CMOs) as “motivating variables that alter the reinforcing effectiveness of other stimuli, objects, or events as a result of the organism’s learning history” (p. 383). The CMO subtype most relevant to the current analysis is the reflexive CMO (CMO-R), which Laraway et al. (2014) defined as a CMO that “makes its own offset a reinforcer or punisher” (p. 606). This definition is essentially a restatement of the MO as applied to negative reinforcement under the current conceptualization, the difference being that the offset of the stimulus is a conditioned, rather than unconditioned, reinforcer or punisher.

For example, in a signaled avoidance procedure, a brief tone might reliably precede shock delivery, such that after several pairings a lab animal would respond to terminate the tone (and avoid the shock), although it would not respond to terminate the tone prior to this conditioning history. The current conceptualization of MOs would consider the tone, after conditioning, as a CMO-R. That is, because of the organism’s learning history, the onset of the tone establishes the tone’s offset as a reinforcer. The CMO-R concept suffers from the same logical and practical failings as the current MO concept’s handling of unconditioned negative reinforcement.
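For readers who find a procedural sketch helpful, the following minimal Python fragment simulates the signaled-avoidance contingency just described. All parameter values (tone duration, per-step response probabilities) are hypothetical and serve only to illustrate the contingency, not the conditioning process that produces the change in responding.

```python
import random

# A sketch of one signaled-avoidance trial: a tone precedes shock, and a
# response during the tone terminates the tone and cancels the shock.

TONE_DURATION = 5          # time steps the tone plays before shock onset
P_RESPONSE_NAIVE = 0.05    # per-step response probability before conditioning
P_RESPONSE_TRAINED = 0.60  # per-step response probability after tone-shock pairings

def run_trial(p_response: float) -> str:
    """Run one signaled-avoidance trial and report its outcome."""
    for _step in range(TONE_DURATION):
        if random.random() < p_response:
            # Responding terminates the tone and avoids the scheduled shock.
            return "tone terminated, shock avoided"
    # No response during the tone: the shock is delivered at tone offset.
    return "shock delivered"

# Before conditioning, tone offset is not yet a conditioned reinforcer, so
# responses that terminate the tone are rare; after repeated tone-shock
# pairings, such responding becomes likely.
for label, p in [("naive", P_RESPONSE_NAIVE), ("trained", P_RESPONSE_TRAINED)]:
    outcomes = [run_trial(p) for _ in range(1000)]
    avoided = outcomes.count("tone terminated, shock avoided")
    print(f"{label}: avoided shock on {avoided / 10:.1f}% of trials")
```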

The organism’s history—in particular, respondent pairing of the tone and shock (i.e., making the tone predictive of the shock)—established the tone offset as a conditioned negative reinforcer. If we consider respondent conditioning procedures to be MOs, then this conditioning, and not the onset of the tone, is the MO. It may not be beneficial, however, to classify respondent conditioning procedures as MOs because changes in stimulus function as a result of such procedures are best understood with direct reference to principles of respondent conditioning. Moreover, and more important, researchers and theoreticians have long and profitably explored the necessary and sufficient conditions for establishing conditioned reinforcers, and no one has construed them as MOs (e.g., Fantino, 1981; Kelleher & Gollub, 1962; Shahan & Cunningham, 2015). Nothing of value would be gained by so doing, but it is important to recognize that MOs can influence how readily conditioned reinforcers are established, as well as their relative effectiveness as reinforcers once established. A few examples of MOs that can influence the effectiveness of respondent conditioning procedures and operant behavior established or maintained by resulting conditioned reinforcers in escape and avoidance scenarios include drug administration (e.g., Babbini, Gaiardi, & Bartoletti, 1980), sleep deprivation (e.g., Kennedy, Meyer, Werts, & Cushing, 2000), and food deprivation (e.g., Leander, 1973).

With respect to unsignaled avoidance (i.e., Sidman avoidance), the CMO-R concept also fails to make any meaningful contribution. The mechanisms underlying avoidance in the absence of explicit “warning stimuli” are still contested and poorly understood (Krypotos, Effting, Kindt, & Beckers, 2015). Although we have not seen a description of the relevance of the CMO-R to unsignaled avoidance, this term would presumably take the place of the implicit “conditioned aversive stimulus” (e.g., stimuli correlated with the passage of time and the absence of an avoidance response) in the two-factor account of unsignaled avoidance (Dinsmoor, 2001). Therefore, the CMO-R would add nothing new to the analysis. Instead, this would represent another misapplication of the MO concept and a missed opportunity to account for factors that are not currently (formally) accounted for by behavior analysts. The MO concept, appropriately applied, prompts us to consider, for example, what happens to avoidance responding when an MO is in place that reduces the punishing effectiveness of the avoided stimulus. Dickinson and Balleine (1994) have observed that, as a general rule, the organism must have some experience with the function-altered outcome (in the state induced by the MO) in order for the MO to change behavior. Following this rule, we would not expect that administering an analgesic would result in a rat immediately reducing its responding to terminate a shock-predictive tone unless it had previous experience receiving the shock in the analgesic state.
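For reference, the scheduling logic of the unsignaled (Sidman) avoidance procedure discussed above can be sketched as follows, with hypothetical interval values of our own choosing. Note that no exteroceptive warning stimulus appears anywhere in the contingency, which is precisely why a CMO-R account must appeal to implicit stimuli.

```python
# A sketch of the free-operant (Sidman) avoidance contingency: shocks recur
# at the shock-shock (S-S) interval unless a response occurs; each response
# postpones the next shock by the response-shock (R-S) interval. No warning
# stimulus is programmed. Interval values below are hypothetical.

S_S_INTERVAL = 5.0   # seconds between shocks in the absence of responding
R_S_INTERVAL = 20.0  # seconds by which each response postpones the next shock

def next_shock_time(now: float, responded: bool, scheduled: float) -> float:
    """Return when the next shock will occur, given the current scheduling."""
    if responded:
        # A response resets the clock: the next shock is R-S seconds away.
        return now + R_S_INTERVAL
    return scheduled

# Example: at t = 3.0 s, a response postpones a shock scheduled for t = 5.0 s.
print(next_shock_time(now=3.0, responded=True, scheduled=5.0))   # -> 23.0
print(next_shock_time(now=3.0, responded=False, scheduled=5.0))  # -> 5.0
```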

We know little about the role of MOs in respondent-operant interactions and even less about their influence on these interactions in applied settings. The MO, properly applied, highlights this knowledge deficit and prompts us to consider these variables. Many histories are possible and, if one wants to predict whether an organism will respond to terminate a given stimulus, one needs to know the specific kind of history that will produce that outcome. Simply saying that the onset of a stimulus is an MO that establishes its own offset as a reinforcer in some cases, but not in others, is of no value and preempts more precise analyses.

As a case in point, consider an article by Carbone et al. (2010) that provides a detailed review of the variables that affect escape-maintained responding in children with autism exposed to discrete-trials procedures. The authors adeptly summarized the relevant literature, but their conceptual analysis was impeded rather than enhanced by the CMO-R concept, which they promoted as a valuable conceptual tool. In the article, the authors discussed “treatments designed to abolish the CMO-R” (p. 116; note, it is not meaningful to talk about abolishing an operation) rather than talking about altering tasks such that their termination ceases to function as a negative reinforcer or selecting alternative tasks whose termination does not function as a negative reinforcer. Examples of such treatments discussed by Carbone et al. (2010) include offering a choice of tasks, varying tasks, changing the pace of instruction, and changing task difficulty. They also discussed the strategy of embedding positive reinforcers into the instructional session but, rather than using well-understood principles associated with operant and respondent conditioning (e.g., counterconditioning) to explain why this might be beneficial, such procedures are described as “weakening the value of the CMO-R” (p. 117). In actuality, all of these manipulations changed the stimuli to which participants were exposed, and could respond to escape, not the reinforcing effectiveness of escaping from the stimuli initially present. Therefore, by definition, they cannot serve as MOs.

In their general analysis, the authors indicated with a diagram (Carbone et al., 2010, Figure 2, p. 114) that children who emit escape responses in “demand” settings are exposed to a “worsening set of conditions” that can involve, for example, a “session beginning with removal of positive reinforcement.” The diagram goes on to specify that “termination of worsening conditions is a reinforcer.” With repeated exposures, previously neutral stimuli (“instructional demands, instructional materials, and presence of teacher”) become “warning stimuli” (CMO-Rs), the onsets of which establish their own offsets as reinforcers. This analysis parallels the analysis associated with the signaled avoidance procedure; both are based on respondent conditioning. The CMO-R terminology, however, is superfluous, adding complexity without enhancing precision and promoting the incorrect assumption that the functions of relevant stimulus presentations are fixed.

Understanding how conditioned negative reinforcement works is critical to our treatment of behavioral problems in applied settings. This requires a solid grasp of many behavioral principles, including those related to respondent conditioning. If we apply the existing principle of conditioned negative reinforcement to our analysis instead of the redundant CMO-R concept, we are reminded that a stimulus whose termination functions as a conditioned negative reinforcer will also likely function as a conditioned positive punisher when presented. If the reader finds it helpful, they might use the term “conditioned aversive stimulus” to describe this stimulus, as long as they are aware of the issues with the term “aversive stimulus” that we pointed out earlier. If we have established that termination of a specific task functions as a conditioned negative reinforcer (i.e., the task is a “conditioned aversive stimulus”), we should also anticipate that its presentation will function as a conditioned positive punisher, reducing the strength of behavior that was occurring prior to its introduction. The CMO-R concept obscures rather than clarifies these and other relevant processes.

As Carbone et al. (2010) use the term, the “CMO-R” functions as an intervening variable (MacCorquodale & Meehl, 1948). An intervening variable is an inferred (i.e., not directly observed) variable assumed to be responsible for the relation between independent variables and dependent variables. Unlike a hypothetical construct, it has no properties beyond those evident in the functional relations between independent and dependent variables. Intervening variables can be useful heuristics, but they are not explanatory concepts. Carbone et al. (2010) describe a variety of operations (changes in the environment arranged as independent variables) that affect the rate or frequency of escape responding (the dependent variable) and argue that the changes in the environment alter the evocative effectiveness of a CMO-R, leading to a change in escape responding. The studies they review never isolated the specific stimulus that served as the CMO-R, nor did they demonstrate that terminating that stimulus in and of itself served as a reinforcer. Rather, they showed that changing some aspect(s) of a complex environment increased the likelihood of responses that terminated contact with some or, rarely, all aspects of that environment. The CMO-R is inferred, not empirically demonstrated. Moreover, the possibility that stimuli predictive of the “worsening of conditions,” such as the presence of a teacher, might serve as discriminative stimuli for escape responding is not considered. As noted previously, focusing attention on the CMO-R directs attention away from meaningfully analyzing the behavioral principles involved in the behavior change.

A major problem with the analysis proposed by Carbone et al. (2010) is that not all children exposed to the same demand conditions responded to escape. Thus, the demand conditions must have constituted a “worsening of conditions” for some of them, those who engaged in escape responding, but not for the remainder. (It would seemingly constitute a “bettering of conditions” should a child respond to enter the demand condition, which at least a few will do.) Contrast that with electric shocks, which they consider to represent a “worsening of conditions” in their example of the development of the CMO-R in the laboratory. No learning history is required for delivery of electric shocks of sufficient intensity to weaken, and termination of such shocks to strengthen, a variety of response topographies emitted by all, or nearly all, members of a given species of laboratory animal (e.g., rats). Therefore, it is appropriate to consider the onset of such shocks as unconditioned positive punishers and their offset as unconditioned negative reinforcers.

This situation is very different from the “demand” condition arranged in functional analysis, where termination of exposure to a relatively complex set of stimuli (not a single stimulus, like shock) serves as a negative reinforcer for the challenging target responses (which vary in topography) of some, but not most, of the individuals tested. Recall that Hanley et al. (2003) found that 89.2% of the functional analysis studies they reviewed included an “escape” or “demand” condition, but only 34.2% produced results suggesting that the targeted problem behavior was escape-maintained. “Demand” conditions clearly do not involve presenting UMOs for escape responding, or such responding would be consistently engendered. The truth of this assertion is clearly evident in the seminal article that introduced the functional analysis procedures that are now widely used (Iwata, Dorsey, Slifer, Bauman, & Richman, 1994).

Iwata et al. (1994) examined the self-injurious behavior of nine children under several brief experimental conditions. During their “academic demand” condition, the experimenter presented individualized learning trials using a graduated three-prompt procedure. Social praise was presented when the participant responded appropriately. If self-injury occurred, “the experimenter immediately terminated the trial and turned away from the subject for 30 s, with an additional 30-s change-over delay for repeated self-injury” (p. 102). This arrangement is similar to that typically arranged in the “demand” condition of other functional analyses. Iwata et al. found that two of their nine participants exhibited substantial levels of self-injury in the demand condition. It is clear that this condition, which apparently differed in some unspecified ways across participants (e.g., in the specific educational activities involved), did not constitute a UMO. Moreover, it is impossible to discern which aspect of the demand condition actually served to negatively reinforce the self-injury (was it termination of a prompt? cessation of verbal requests? the experimenter’s turning away?) or the specific stimuli that evoked escape responding (i.e., that served as UMOs or CMO-Rs, under the current conceptualization). One may liken escape-maintained problem behavior in people with developmental disabilities to signaled avoidance responding in rats, as do Carbone et al. (2010), and assert that the analogy is informative. But that is untrue. There is neither a clear UMO (i.e., analogue to shock) nor an obvious CMO-R (i.e., analogue to a tone predictive of shock) in the former case.
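For concreteness, the escape contingency in the “academic demand” condition quoted above can be sketched as follows. The 30-s values come from the quoted description, but the control flow is our illustrative reading of that description, not the authors’ procedure code.

```python
# A sketch of the escape contingency in the "academic demand" condition of
# Iwata et al. (1994): self-injury produces 30 s of trial termination, with
# a 30-s changeover delay for repeated self-injury (our reading of the
# quoted description).

ESCAPE_S = 30            # seconds of trial termination after self-injury
CHANGEOVER_DELAY_S = 30  # added for self-injury during the escape interval

def handle_self_injury(time_remaining: float) -> float:
    """Return seconds of demand removal owed after an instance of self-injury."""
    if time_remaining > 0:
        # Self-injury during an ongoing escape interval extends it.
        return time_remaining + CHANGEOVER_DELAY_S
    # Self-injury during a learning trial terminates the trial for 30 s.
    return ESCAPE_S

print(handle_self_injury(time_remaining=0))     # 30: trial terminated for 30 s
print(handle_self_injury(time_remaining=12.0))  # 42.0: delay extends the escape
```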

It might be possible to quantify the specific environmental changes that different children exposed to the same demand condition experience (e.g., rates of praise statements and rates of reprimands), and to determine whether these changes constitute a “worsening of conditions,” and hence should support escape responding, but no one makes such determinations. Rather, Carbone et al. (2010) and other advocates of the current conceptualization of MOs appear to assume that “conditions have worsened” in the demand condition if it engenders escape responding. Such an analysis is circular and of no predictive value.

Of course, the same criticism can be applied to attempts to explain escape responding in terms of negative reinforcement. Why does an organism respond to terminate a stimulus? Because that stimulus is a negative reinforcer. How do you know that it is a negative reinforcer? Because the organism responds to terminate it. Meehl (1950) proposed long ago that it is possible to isolate stimulus changes that serve as reinforcers in a variety of situations. Therefore, it is possible to predict their effects a priori, which is requisite if the concept of reinforcement is to be of explanatory or predictive value. We have argued (Poling, Lotfizadeh, & Edwards, 2017), and continue to believe, that knowledge of operations (environmental changes) that serve as actual MOs, i.e., that modulate the reinforcing effectiveness of particular kinds of events and the control of behavior by antecedent stimuli historically relevant to those events, is invaluable in predicting whether a given event will serve as a reinforcer.

Are MOs Relevant to Negative Reinforcement?

The first step in identifying potential MOs is confirming that we are examining the influence of an event on the reinforcing or punishing effectiveness of the presentation or termination of a stimulus. This MO “test” disqualifies any manipulations that involve making changes directly to the stimulus itself (e.g., presenting and removing simple rather than complex math problems). This “test” also disqualifies manipulations that simply make a stimulus change possible, which is how the MO is currently applied to negative reinforcers. So, what operations pass this test? From our critical analysis of the MO as it is currently applied to negative reinforcement, the reader might conclude that few, if any, MOs are relevant to negative reinforcement, but this is not the case. There are many MOs with potential relevance to negative reinforcement that have been identified in the literature and many more that are likely to be relevant but have not yet been investigated.
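The “test” described above amounts to a simple decision rule. The following sketch, with argument names of our own invention, is only a mnemonic for the disqualifying conditions just listed, not a formal procedure.

```python
# A sketch of the MO "test" described above: a manipulation qualifies as an
# MO only if it alters the reinforcing/punishing effectiveness of a stimulus
# change, without changing the stimulus itself or merely making the change
# possible. Argument names are our own illustrative labels.

def passes_mo_test(alters_effectiveness: bool,
                   changes_stimulus_itself: bool,
                   merely_enables_change: bool) -> bool:
    """Return True if a manipulation qualifies as an MO under the test above."""
    if changes_stimulus_itself or merely_enables_change:
        return False
    return alters_effectiveness

# Swapping complex math problems for simple ones changes the stimulus itself.
print(passes_mo_test(True, True, False))   # False
# Presenting a task merely makes task removal possible.
print(passes_mo_test(False, False, True))  # False
# Sleep deprivation can alter the effectiveness of task removal.
print(passes_mo_test(True, False, False))  # True
```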

For example, May and Kennedy (2010) identified a series of health conditions that have the potential to alter the reinforcing effectiveness of stimulus removal (described as “establishing events as noxious” in their review), including the following: allergies increased the reinforcing effectiveness of task removal (Kennedy & Meyer, 1996); otitis media (a middle-ear infection) increased the reinforcing effectiveness of the removal of loud noise (O’Reilly, 1997); menses (and associated dysmenorrhea) appeared to increase the reinforcing effectiveness of demand removal (Carr, Smith, Giacin, Whelan, & Pancari, 2003); sleep deprivation was associated with higher rates of negatively reinforced problem behavior in several studies, presumably because it increased the reinforcing effectiveness of termination of the relevant stimuli (Kennedy & Meyer, 1996; O’Reilly, 1995; O’Reilly & Lancioni, 2000); and, likewise, constipation was associated with higher rates of negatively reinforced problem behavior (Christensen et al., 2009; Janowsky, Kraus, Barnhill, Elami, & Davis, 2003; Kozma & Mason, 2003).

It is our ethical obligation to consider health conditions before pursuing other explanations for problem behavior. An understanding of some common health conditions and their potential to serve as MOs (i.e., to increase or decrease the reinforcing effectiveness of the termination of specific stimuli) should help us to identify these MOs when they are in effect and prompt us to consider others that have not yet been documented in the relatively small body of relevant literature.

As a second example, the administration of certain drugs can alter the reinforcing effectiveness of the removal of specific stimuli and, therefore, should be considered a potential MO relevant to negative reinforcement. The relevance of certain drugs to negative reinforcement is evident in a study by Northup, Fusilier, Swanson, Roane, and Borrero (1997), who evaluated the influence of methylphenidate on the results of reinforcer assessments with common classroom reinforcers and found that the drug increased the reinforcing effectiveness of tokens that could be used to escape from (avoid) future academic tasks.

Another drug was examined by Kelley, Fisher, Lomas, and Sanders (2006), who evaluated the effects of amphetamine on destructive behavior emitted by an 11-year-old boy. The behavior appeared to be maintained by escape from academic task demands and access to tangible items. Destructive behavior was consistently less frequent, and compliance was consistently more frequent, when amphetamine was administered. This study was not designed to evaluate the reinforcing effectiveness of escape from demands, and the authors framed the drug effect as shifting response allocation to the appropriate response, compliance. Nonetheless, the mechanism underlying this shift in response allocation could be a reduction in the reinforcing effectiveness of task termination, in which case it would be proper to label amphetamine administration as an MO. Few studies have examined the effects of drugs on escape-maintained, challenging behaviors, and there appears to be a need for further research in this area. The MO concept, properly applied, may be of value in generating useful research questions and clarifying research findings.

A third general class of MOs relevant to negative reinforcement is exposure to the condition from which the individual might work to escape. For example, exposure to a task involving completion of math problems may increase the reinforcing effectiveness of escape from doing math problems, which corresponds to some extent with the lay concepts of “boredom” and “fatigue.” On the other hand, reducing exposure to a task might reduce the reinforcing effectiveness of escape from that task. Several researchers have demonstrated that fixed-time escape from tasks can result in a decrease in escape-maintained problem behavior (Allen & Wallace, 2013; Kodak, Miltenberger, & Romaniuk, 2003; Waller & Higbee, 2010). In all of these studies, the amount of exposure to the conditions from which the individual initially escaped appears to have been reduced by the fixed-time escape procedure (i.e., in the experimental condition) because the procedure involved a regular period without task demands. For example, Waller and Higbee gave their participants regular breaks from the task that lasted 30 s or 1 min, thereby reducing the total duration of exposure to the task conditions.
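As a rough illustration of the exposure-reduction point, the following sketch computes total task exposure under a fixed-time break schedule. The session, work, and break values are hypothetical, chosen only to show how fixed-time breaks reduce total task exposure independent of the individual’s behavior; they are not taken from the cited studies.

```python
# A sketch contrasting total task exposure with and without a fixed-time
# (FT) break schedule. Parameter values below are hypothetical.

SESSION_S = 600   # session length in seconds
FT_WORK_S = 60    # work period between scheduled breaks
FT_BREAK_S = 30   # duration of each scheduled break

def task_exposure_under_ft(session_s: int, work_s: int, break_s: int) -> int:
    """Total seconds of task exposure when breaks follow a fixed-time schedule."""
    cycle = work_s + break_s
    full_cycles, remainder = divmod(session_s, cycle)
    # Any leftover time at the end of the session is work time, capped at
    # one full work period.
    return full_cycles * work_s + min(remainder, work_s)

# Without scheduled breaks, exposure is the full 600 s; with FT breaks it
# drops to 420 s, regardless of whether problem behavior occurs.
print(task_exposure_under_ft(SESSION_S, FT_WORK_S, FT_BREAK_S))  # -> 420
```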

There are at least two plausible explanations for the effectiveness of this type of intervention in reducing escape-maintained behavior. First, delivery of the reinforcer independent of the problem behavior weakens the correlation between these two events and serves as an alternative form of extinction (Katz & Catania, 2005). Second, just as exposure to positive reinforcement (e.g., food) in general reduces the reinforcing effectiveness of additional exposure to that form of positive reinforcement, exposure to negative reinforcement (i.e., escape) reduces the amount of exposure to the conditions from which the individual is escaping and may thereby reduce the reinforcing effectiveness of escape from those conditions, an MO mechanism.

It is interesting that exposure to one condition (e.g., one that involves engagement in dissimilar tasks, such as physical exercise) may reduce the reinforcing effectiveness of escape from another condition (e.g., math problem completion). For instance, Cannella-Malone, Tullis, and Kazee (2011) found that two 20-min and six 5-min periods of physical exercise each day significantly reduced the frequency of escape-maintained problem behavior in the three boys in their study. Although additional research is required to clarify the specific mechanisms that are responsible for this effect, a plausible explanation is that the physical exercise functioned as an AO, reducing the reinforcing effectiveness of academic task removal.

This brief overview of MOs that are, or may be, relevant to negative reinforcement is not an exhaustive review of the relevant literature. Our aims in providing these examples are to demonstrate that there are many MOs relevant to escape-maintained challenging behavior, to clarify the type of events that may appropriately be classified as MOs, and to draw attention to the important role of such events in applied settings.

Conclusion

The MO concept as it is currently applied to negative reinforcement is superfluous, because it refers only to a logical necessity, and draws attention away from real MOs that are relevant to negative reinforcement. This application of the concept is of no value in predicting and controlling behavior. Moreover, it has led to an oversimplified, and misleading, analysis of the demand condition of functional analyses, hence, of escape-maintained challenging behavior. We implore behavior analysts to consider these points carefully when describing the MO and applying this concept to the analysis and treatment of negatively reinforced behavior.