Previous staff-training research has primarily relied on trainer-to-trainee face-to-face interactions (Roscoe, Fisher, Glover, & Volkert, 2006). Even when conducted remotely, telehealth research has often incorporated some form of on-site assistance (Wacker et al., 2013). Higgins, Luczynski, Carroll, Fisher, and Mudford (2017) used behavioral skills training (BST) to remotely train staff to conduct brief multiple stimulus without replacement (MSWO) preference assessments without any additional on-site assistance. Although effective, using all components of BST required up to 6 h of trainer time per trainee. Delivering remote real-time feedback alone offers the advantage of removing the need for on-site assistance and has been found to be time efficient (Machalicek et al., 2010). Efficient and economical alternatives are needed to sustain the practitioners necessary for service delivery (Baer, Wolf, & Risley, 1987). Therefore, the purposes of the current study were (a) to evaluate the effectiveness of remote real-time feedback for training staff to conduct brief MSWO preference assessments with a simulated consumer, (b) to assess the generalization and maintenance of these skills, and (c) to measure the social validity of this staff-training procedure.

Method

Participants

Four female participants were evaluated in this study (hereafter referred to as trainees). Trainees recruited were newly hired clinical staff who reported no previous experience implementing or learning about preference assessments. Abby (23 years old), Kiley (22 years old), Lucy (23 years old), and Maggie (19 years old) were all working on obtaining a bachelor’s degree at the time of this evaluation. Written consent was obtained prior to participation.

One child (4 years old, female) diagnosed with autism spectrum disorder (ASD) participated. The child participant was able to independently scan multi-item arrays, make selections when instructed to pick an item, and use one- to two-word mand utterances. Informed consent was obtained from caregivers prior to participation.

Three registered behavior technicians served as confederates. Confederates were used to minimize prolonged consumer exposure to assessments implemented with low treatment integrity. Six confederate scripts, detailing the confederates’ responses on each trial, were randomly rotated across sessions and phases. Each script contained the same confederate responses (i.e., correct response, simultaneous selection, consecutive selection, no choice, and engagement in challenging behavior), but the order of the responses varied across scripts. Confederates were trained on scripted responses via telehealth prior to the evaluation. Confederates provided scripted responses and did not provide any feedback to the trainee. Scripts were kept out of view of the trainee but visible to the confederate. Procedural integrity data were collected on the confederates’ implementation of scripted responses, across training phases and trainees, during an average of 41% (range 40%–44%) of sessions. Mean procedural integrity was 95% (range 86%–100%) for Confederate 1, 98% (range 89%–100%) for Confederate 2, and 98% (range 94%–100%) for Confederate 3.

The lead author conducted all training sessions. She held a master’s degree in applied behavior analysis (ABA), was a Board Certified Behavior Analyst, and was enrolled in a PhD program in ABA. Following consent, the trainees and trainer never had face-to-face contact.

Setting and Materials

Training sessions were conducted remotely across two settings: trainees were located in a private office in an early intervention clinic, and the trainer connected from an office in a different state. Videoconferencing via VidyoDesktop (2018) provided a live audio and visual connection between the trainee and trainer. Remote sessions were conducted using a Dell Latitude E7470 laptop with a Logitech c920 HD Pro Webcam (1080 progressive scan) and a Surface Pro tablet with a built-in camera (1080 progressive scan) at the trainee site. All sessions were videotaped and scored at a later time. Additional materials included desks, chairs, preference assessment stimuli (14 tangibles and 14 edibles), timers, a calculator, writing utensils, and data sheets.

Design, Dependent Variable, Interobserver Agreement, and Treatment Integrity

A nonconcurrent, multiple-baseline across-participants design was used. The dependent variable was the percentage of brief MSWO skills (Carr, Nicolson, & Higbee, 2000) implemented correctly. MSWO component skills included (a) item selection, (b) presession exposure, (c) presentation of items, (d) presentation of instruction, (e) delivery and removal of items, (f) a reinforcement interval, (g) data recording, (h) presentation of trials, (i) response to idiosyncratic responses, (j) data calculation, and (k) identification of stimuli to use for teaching new skills (a list of operational definitions is available from the first author). Data were summarized as the percentage of component skills implemented correctly, by dividing the number of component skills implemented correctly by the total number of opportunities to implement each component skill and converting to a percentage.
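
As an illustration of this scoring rule, a minimal sketch follows; the counts shown are hypothetical and do not come from the study’s data.

```python
# Minimal sketch of the scoring described above (hypothetical counts, not study data):
# percentage correct = (component skills implemented correctly / opportunities) * 100

def percent_components_correct(correct: int, opportunities: int) -> float:
    """Percentage of MSWO component skills implemented correctly."""
    if opportunities == 0:
        raise ValueError("No scored opportunities.")
    return 100 * correct / opportunities

# Example: 47 of 54 scored component opportunities implemented correctly.
print(round(percent_components_correct(47, 54), 1))  # 87.0
```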

A second observer independently scored an average of 34% (range 33%–35%) of sessions. Trial-by-trial interobserver agreement (IOA) was calculated by dividing the number of agreements by the total number of agreements plus disagreements and converting to a percentage. Mean IOA percentages were 93% (range 91%–97%) for Abby, 96% (range 90%–100%) for Kiley, 97% (range 94%–100%) for Lucy, and 94% (range 85%–100%) for Maggie.
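
For readers replicating the agreement scoring, the trial-by-trial IOA formula above can be sketched as follows; the observer records are hypothetical.

```python
# Trial-by-trial IOA sketch (hypothetical observer records, not study data):
# IOA = agreements / (agreements + disagreements) * 100

def trial_by_trial_ioa(observer_1, observer_2):
    agreements = sum(a == b for a, b in zip(observer_1, observer_2))
    disagreements = len(observer_1) - agreements
    return 100 * agreements / (agreements + disagreements)

obs_1 = [1, 1, 0, 1, 1, 1]  # 1 = trial scored as implemented correctly, 0 = error
obs_2 = [1, 1, 0, 1, 0, 1]
print(round(trial_by_trial_ioa(obs_1, obs_2), 1))  # 83.3
```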

Treatment integrity data were collected on the trainer’s delivery of real-time feedback on an average of 63% (range 50%–67%) of sessions. Data were collected on the trainer’s use of positive feedback, constructive feedback, or omission feedback after every trial. Positive feedback included general and behavior-specific praise for trials implemented correctly. Constructive feedback included a brief description of how the skill should be implemented when an error was observed in the trial. Omission feedback included instances in which the trainer did not provide feedback following a trial. An outside observer calculated treatment integrity by dividing the number of correct responses by the total number of correct and incorrect responses and converting to a percentage. Mean treatment integrity scores were 87% (range 81%–92%), 94%, 100%, and 97% (range 94%–100%) for Abby, Kiley, Lucy, and Maggie, respectively.

Procedure

Prior to the start of each session, the trainer e-mailed trainees a link to join a videoconference meeting and step-by-step instructions for how to connect. Once connected, the trainer provided feedback as needed to ensure the tablet was positioned so that all relevant session events would be captured.

To begin each session, trainees opened a bin provided during the consent process containing all relevant training materials and a blank MSWO data sheet. The trainer then shared her computer screen and provided the trainee with a brief scenario describing potential preferences for a consumer. Scenarios were short paragraphs describing potential preferred and nonpreferred stimuli for the consumer; they mimicked caregiver-nominated stimuli and served as input for trainees during the stimulus-selection component of the preference assessment (Fisher, Piazza, Bowman, & Amari, 1996). Six confederate scenarios were randomly rotated across sessions. Potential preferred and nonpreferred stimuli were discussed with the child’s caregiver during the consent process to create one scenario for sessions conducted with the child participant.

Each session consisted of the trainee conducting three iterations of a brief six-item MSWO preference assessment for a total of 18 trials. Following 18 trials, trainees were asked to hold up their data sheets to the computer screen for the trainer to screenshot the data sheets. The trainees were then asked to calculate the average percentage each stimulus was selected based on the number of trials in which the stimulus was present during the preference assessments (DeLeon & Iwata, 1996). Next, trainees selected a stimulus they would use for teaching a new skill based on the results of the brief MSWO preference assessment (Deliperi, Vladescu, Reeve, Reeve, & DeBar, 2015). If trainees were unable to calculate percentages, hypothetical data were provided, and trainees were asked which stimulus they would use for teaching a new skill based on the data provided.
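
The data-calculation step can be sketched as follows; the item names and tallies are hypothetical and assume each stimulus was selected once per iteration across three completed iterations.

```python
# Sketch of the brief MSWO scoring step (DeLeon & Iwata, 1996):
# selection % = (trials on which a stimulus was selected /
#                trials on which it was presented) * 100
# Item names and tallies are hypothetical, not data from the study.

times_selected = {"bubbles": 3, "blocks": 3, "puzzle": 3, "car": 3, "book": 3, "ball": 3}
times_presented = {"bubbles": 3, "blocks": 6, "puzzle": 9, "car": 12, "book": 15, "ball": 18}

selection_pct = {item: 100 * times_selected[item] / times_presented[item]
                 for item in times_selected}

# The stimulus with the highest selection percentage would be used to teach a new skill.
top_item = max(selection_pct, key=selection_pct.get)
print({item: round(pct, 1) for item, pct in selection_pct.items()})
print("Stimulus for teaching:", top_item)
```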

Baseline

During baseline, trainees were allowed to review a blank MSWO data sheet and scenario for up to 15 min. Following 15 min of review, or when the trainee said she was ready, the trainee was instructed to “conduct a brief tangible MSWO preference assessment to the best of your ability.” No feedback was provided and no questions were answered.

Real-Time Feedback

During real-time feedback, the trainer provided feedback contingent on the delivery or absence of MSWO component skills moments after the trainee conducted each trial. Positive feedback was provided for trials implemented correctly and consisted of general praise when the trainee adhered to all components (e.g., “Great! You implemented the MSWO trial perfectly.”) and behavior-specific praise when the trainee correctly implemented components previously implemented with errors (e.g., “Excellent job rotating the items from the previous trial.”). Constructive feedback was delivered for incorrectly implemented trials (e.g., “Remember to remove unselected items after the consumer selects an item.”). Following feedback, trainees were not allowed to rehearse the skills prior to the next trial or session. Training was discontinued after trainees reached mastery (i.e., two consecutive sessions at or above 90% accuracy).

Posttraining Probes

Posttraining probes were conducted at least 2 days following exposure to real-time feedback using procedures identical to baseline.

Generalization and Follow-Up

Generalization probes were conducted with an actual child and with different preference assessment stimuli (e.g., edibles) across baseline, posttraining, and follow-up phases.

Follow-up data were collected 2 weeks following exposure to real-time feedback. Maintenance data were collected at this interval to mimic typical supervision periods behavior technicians are exposed to (Behavior Analyst Certification Board, 2014). No feedback was provided and no questions were answered during generalization probes and follow-up.

Social Validity

Following training, trainees completed an electronic social validity questionnaire. Trainees responded to five statements on a 6-point Likert scale; ratings closer to 6 indicated greater social acceptability.

Results

Figure 1 displays the percentage of brief MSWO skills implemented correctly during baseline, training, posttraining, maintenance, and generalization across four trainees. During baseline, all trainees implemented preference assessments with low to moderate procedural integrity. Following exposure to real-time feedback, an increase in trainees’ implementation of the brief MSWO skills was observed within a few sessions. Abby met mastery criteria within three training sessions (M = 87%, range 80%–91%), Kiley met mastery criteria within two sessions (M = 94%, range 90%–98%), Lucy met mastery criteria within three sessions (M = 94%, range 89%–98%), and Maggie met mastery criteria within three sessions (M = 91%, range 81%–97%). The total duration of real-time feedback delivery during training was 11.9 min, 8.3 min, 13.3 min, and 13.2 min for Abby, Kiley, Lucy, and Maggie, respectively. Total duration of training sessions, including preference assessment implementation and real-time feedback delivery, was 39.9 min, 31.1 min, 46.0 min, and 45.0 min for Abby, Kiley, Lucy, and Maggie, respectively. Following real-time feedback, posttraining probes were conducted with the confederate and child diagnosed with ASD. All trainees conducted posttraining and generalization probes with high procedural integrity.

Fig. 1 Percentage of MSWO component skills implemented correctly. The bottom panel for each participant depicts a box plot showing MSWO component skills implemented at or above the 90% mastery criterion (gray boxes) and those implemented below 90% accuracy (white boxes). Absent boxes indicate that the trainee had no opportunity to perform the skill.

At the 2-week follow-up, all participants implemented the brief tangible MSWO preference assessment with the confederate with high procedural integrity. Three of the four participants implemented all generalization probes with above 90% accuracy. Abby fell below 90% accuracy implementing the brief tangible MSWO preference assessment conducted with the child. However, Abby was able to implement the brief edible MSWO preference assessments with above 90% accuracy.

Trainees’ responses to the social validity questionnaire indicated that the training procedure was effective (M = 6), that real-time feedback was acceptable (M = 6), that telehealth service delivery was acceptable (M = 5.8, range 5–6), that they were satisfied with the technology setup (M = 5.5, range 4–6), and that they would recommend this training procedure to others (M = 5.8, range 5–6).

Discussion

The current study trained four participants to conduct brief MSWO preference assessments using remote real-time feedback. Minimal training time and few sessions were required for participants to master the skills. This brief but effective procedure may be especially appealing for practitioners facing staffing barriers such as high staff turnover, a lack of trained providers, inaccessibility of trainers, and limited training time. An efficient staff-training procedure such as real-time feedback may give trainers more time to train other staff, to teach additional assessments and skills, or to complete other supervisory responsibilities. The trained skills maintained and generalized both to a child diagnosed with ASD and to edible stimuli. Following the completion of the study, all trainees provided favorable social validity ratings for the use of remote real-time feedback.

The current evaluation supports previous research on delivering staff training via telehealth without in-person or on-site trainer assistance while keeping training time manageable. Research on remotely training staff to conduct an MSWO preference assessment using all components of BST took upward of 6 h (Higgins et al., 2017). In this study, using only real-time feedback to train staff to conduct brief MSWO preference assessments took a maximum of 46 min. Participants’ implementation of some MSWO component skills improved following only one exposure to trainer feedback. The repetitive nature of MSWO trials and the delivery of feedback after every trial may have facilitated the brief training time.

In addition to a line graph of treatment integrity data, a box plot was used to collect and depict data on individual component skills. The box plot provided an ongoing visual depiction of the component skills implemented at or above versus below the mastery criterion during each session. Collecting and depicting data in this manner aided the delivery of feedback by identifying errors on specific components within and across trials and sessions.
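
One way to approximate such a display is sketched below; the component labels and accuracy values are hypothetical, and the authors’ actual data display may have differed.

```python
# Illustrative sketch of a per-component mastery display (hypothetical values);
# components at or above the 90% criterion are shaded gray, others left white.
import matplotlib.pyplot as plt

components = ["Item selection", "Presession exposure", "Item presentation",
              "Instruction", "Delivery/removal", "Reinforcement interval",
              "Data recording", "Trial presentation", "Idiosyncratic responses",
              "Data calculation", "Stimulus identification"]
accuracy = [100, 100, 92, 100, 83, 100, 94, 89, 100, 100, 100]  # hypothetical

fig, ax = plt.subplots(figsize=(8, 2))
for i, pct in enumerate(accuracy):
    ax.bar(i, 1, color="0.7" if pct >= 90 else "white", edgecolor="black")
ax.set_xticks(range(len(components)))
ax.set_xticklabels(components, rotation=45, ha="right", fontsize=7)
ax.set_yticks([])
ax.set_title("Component skills meeting the 90% criterion (hypothetical session)")
plt.tight_layout()
plt.show()
```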

There are some limitations that warrant mentioning. First, maintenance probes were only conducted at a 2-week follow-up. Longer maintenance periods may have allowed for a better understanding of the long-term effects of this staff-training procedure. Second, generalization probes were only conducted with one child diagnosed with ASD. Although the child participant exhibited a variety of idiosyncratic responses (e.g., no choice, simultaneous selection, challenging behavior), the child mostly engaged in correct responses. Future research should consider including additional child participants to evaluate whether the skills learned would generalize to different behavioral repertoires. Additionally, the child participant was sometimes observed to continue playing with an item after being prompted to return it. Future research should consider including this idiosyncratic response during training. Third, some technology difficulties occurred during this evaluation. Periods of weak Internet connection ended calls or froze the computer screen during four sessions. No data were lost, but connections had to be restarted.

The outcomes of the current study suggest several areas for future research. During this evaluation, real-time feedback was provided after every trial. Future research should consider the optimal schedule of real-time feedback during training sessions. Additionally, future research should evaluate the effectiveness and feasibility of using remote real-time feedback when conducting other behavioral assessments and clinical service procedures. Furthermore, newly hired staff served as trainees within this evaluation. Although trainees were told their participation would not impact their employment, their employment status may have impacted their motivation to acquire these skills. Future research should evaluate the effectiveness of using remote real-time feedback to train other professionals or caregivers.

Using web-based technologies to provide staff training has the potential to extend the reach of training providers and to ease training around the world. Refining the methods used to provide staff training through telehealth, such as using real-time feedback alone, may reduce the time and resources required to train skills.

Implications for Practice

  • Remote real-time feedback is an efficient procedure to train staff.

  • Exposure to one instance of feedback can improve performance.

  • A box plot display allows for analysis of trainee progress on specific components.

  • Trained skills can generalize to actual consumers and preference assessments with edibles.