Introduction

Remote delivery has recently gained popularity as a viable mode of delivery for parent-training programs, particularly those based on behavioral principles (see Bearss et al. 2018; Ingersoll et al. 2016, 2017; Vismara et al. 2016). Receiving training via distance technology has many benefits for parents, including wider access to trained professionals and the ability to receive training from home (Ingersoll et al. 2017). When parents can access training in their home, travel time and the need for transportation are eliminated, increasing the number of people who are able to benefit from training, especially those in geographically isolated areas. Additionally, parents have reported high levels of satisfaction with behavioral training delivered via distance technology (Bearss et al. 2018; Vismara et al. 2016). Despite these benefits, training parents via remote delivery can be resource and time intensive if instructors must deliver intensive one-on-one coaching for parents to reach fidelity. As such, a training program that incorporates more self-directed learning may be a more desirable option.

Self-management may be a viable method for training parents because parents are taught to manage their own behavior rather than requiring intensive one-on-one support from an external coach for the duration of the intervention. Broadly defined as personally applying behavior change tactics to change behavior (Cooper et al. 2007), self-management has decades of research supporting its use, particularly among children (Briesch and Chafouleas 2009). When applied to adults, self-management has demonstrated efficacy as a training method for teachers (Molías et al. 2017) and has been used effectively to train parents; however, the research with parents is scarce and dated (see Sanders and Glynn 1981; Sanders 1982). Additionally, prior studies trained parents in self-management procedures in person, raising the question of whether parents can be taught to use self-management effectively from a distance.

Despite this broad definition, research on self-management typically includes one or more of the following procedures: self-monitoring, self-instruction, self-evaluation and recording, goal setting, self-reinforcement, and self-charting and graphing (Briesch and Chafouleas 2009). Self-monitoring, the most commonly used self-management procedure (Davis et al. 2016), involves the systematic observation and recording of one’s own behavior (Cooper et al. 2007). In educational settings, positive impacts of self-monitoring systems have been identified across all grade levels, regardless of whether students have a disability (Hager 2012; Ganz and Sigafoos 2005; Rankin and Reid 1995). Several meta-analytic reviews have supported the use of self-monitoring systems to enhance students’ academic achievement, on-task behavior, and independent social skills and to decrease their disruptive behavior (Davis et al. 2016; Hattie et al. 1996; Reid et al. 2005), as well as to increase instructional skills among educators (Morin et al. 2019); however, application of these systems in home environments with parents has yet to be explored in the literature.

The positive effects seen when using self-monitoring can often be enhanced when it is combined with video self-reflection, commonly termed video analysis (Nagro and Cornelius 2013; Morin et al. 2019). Video analysis involves recording videos of oneself delivering instruction, reflecting on and analyzing one’s behavior in the video, and making changes to instruction based on that self-reflection and analysis (Nagro and Cornelius 2013; Morin et al. 2019). Although not inherent in the definition of video analysis, self-monitoring procedures, such as self-identifying a behavior to improve, goal setting, data collection, and self-graphing, are often combined with self-reflection and self-analysis to intensify the effects of the intervention (see Alexander et al. 2012; Hager 2012). Using video analysis to change behavior has many benefits, including allowing the person viewing the video to (a) focus their analysis on specific behaviors, (b) see their instruction from a different perspective, (c) feel accountable to change their behavior, (d) remember to make changes to their instruction, and (e) see the progress they are making (Tripp and Rich 2012). When used with educators, research on video analysis procedures has yielded positive results (Morin et al. 2019), targeting behaviors such as specific praise statements (Alexander et al. 2012), praise variety (Hager 2012), and opportunities to respond (Alexander et al. 2012; Hager 2012). Although the literature on the use of video analysis with teachers supports its use among this population, the lack of research using this intervention with parents raises questions about its generalizability and usability across persons and settings.

The purpose of this study was to determine the effects of video analysis procedures (i.e., instructional feedback, goal setting, self-monitoring through video, self-graphing, and reflection) on the instructional practices of parents in a home setting. This study extends prior research by using distance technology to train parents, as well as evaluating whether parents’ use of the intervention techniques replicates with a different instructional practice and maintains in the absence of intervention. The specific research questions investigated in this study include the following:

1. What are the effects of video analysis procedures on two unique, self-identified instructional practices of parents?

2. Will the effects of video analysis on the instructional practices of parents maintain in the absence of intervention?

3. Do parents find the use of video analysis acceptable and feasible, and do these views change over time?

Method

Participant Description

Seven parents and their children participated in this study. All parents were White females who conducted instructional sessions with their child. Although demographic data are presented for the children to provide context (see Table 1), children were not directly targeted with the intervention; therefore, no data were collected on child outcomes. Table 1 provides additional information regarding the age, education level, and role of the parents, as well as the skills taught. To record the sessions, parents used either a laptop computer with a built-in camera (Madison, Lisa, Alana, and Tonya), a tablet-based computer (Karen and Jailyn), or a video camera (Teresa). Parents were located in the southern USA (Teresa, Alana, Madison, Lisa, and Jailyn), Mexico (Karen), and Russia (Tonya). All phases of the study were conducted via distance.

Table 1 Parent and child descriptions

All parents were enrolled in a Master of Education program with an emphasis in applied behavior analysis and were taking a course in single-case design at the time of the study. Parents were selected based on their enrollment in the course, and the investigators of this study (first and second authors) were co-instructors of the course. All group training sessions were conducted during synchronous class sessions (see “Independent Variable” section). Informed consent was obtained from all participants included in the study.

Setting and Instructional Context

All sessions were conducted in various rooms of the parents’ homes, depending on the skill being taught (e.g., Karen’s sessions occurred in the kitchen as she was teaching her son cooking skills), and parents determined the instructional context of the sessions based on the academic, social, or behavioral needs of their child. No instruction was provided to the parents on how to conduct the instructional sessions other than to keep the content of the sessions consistent (e.g., do not teach math one day and toilet training the next) and to ensure that there was interaction between the parent and the child (e.g., having a parent supervise a child completing a worksheet independently and with no feedback from the parent would not be appropriate). See Table 1 for a description of the instructional content that was taught by each parent. This content was kept standardized across all sessions and phases of the study.

Research Design

Parents were assigned to either a changing criterion or non-concurrent multiple-baseline design across participants (Kazdin 2011; Kennedy 2005) in order to investigate the effects of using video analysis procedures on their use of instructional practices. An additional replication phase was added to determine whether the effects of video analysis would replicate to a second target behavior. Further, maintenance data were collected to determine whether effects would maintain over time. For the parent assigned to the changing criterion design (i.e., Karen), the decision to transition between goals was made by the investigators.

In addition to these single-case data, quantitative and qualitative data were also collected using a social validity survey and reflection questionnaire. This design, where both qualitative and quantitative data are collected simultaneously, allows for multiple perspectives to inform the same set of research questions (Creswell et al. 2003) and has the added benefit of presenting quantifiable participant behaviors across phases while adding critical context about what was occurring during each phase based on the unique experiences and perspectives of individual participants (Hitchcock et al. 2010). Additionally, cross-referencing qualitative data with single-case experimental design data creates an opportunity to triangulate findings across methods (Hitchcock et al. 2010).

Independent Variable

The independent variable in this study was a video analysis intervention that consisted of instructional feedback, goal setting, self-monitoring through video, self-graphing, and reflection. Additionally, group synchronous instructional sessions were used to train parents on how to implement video analysis. After baseline, the investigators guided parents in the selection of an instructional practice to target and trained parents on how to (a) operationally define the behavior and (b) collect, graph, and visually analyze data on their target behavior (see “Procedures” and “Appendix 1” sections). In order to ensure the reliability of data that parents collected, interobserver agreement was collected on a minimum of 20% of sessions across every phase, dependent variable, and participant (see “Interobserver Agreement” section).

All training sessions were delivered in a group format using a secure video conferencing platform (i.e., Blackboard Collaborate) as part of the weekly synchronous class sessions in which the parents were enrolled, with individualized feedback delivered via email correspondence as needed. Training on how to operationally define a behavior, how to collect and graph data, and how to visually analyze data was delivered in three separate group training sessions lasting approximately 2 h each (weeks 3–5 of the study; see “Appendix 1” section). The content of these sessions consisted of (a) explicit instruction, (b) modeling how to complete each step, and (c) opportunities for parents to practice each step, with feedback from the investigators, on data and videos unrelated to the study. Parents also received nine additional weekly training sessions, lasting approximately 2 h each, on topics related to single-case research (e.g., effect sizes, generalization, interobserver agreement; see “Appendix 1” section) and ongoing feedback on study procedures and the visual analysis of their own data.

Although the study lasted a total of 14 weeks, no training occurred during Week 6 (this week was dedicated to optional small group meetings; however, no parents took advantage of the opportunity to meet with the investigators during this time) or during Week 9 (spring break). All training sessions occurred concurrently with the study, and each training session focused on content that the parents would need to know to implement that phase of the study (e.g., training regarding the purpose and process of collecting maintenance data occurred during Week 13, immediately prior to parents entering the maintenance phase). The feedback provided to parents during these synchronous group training sessions primarily focused on the parents’ ability to visually analyze their data (e.g., “excellent job describing the trend in your data”) and on when to switch conditions. If parents stated that they met their goal during an instructional session, general praise was provided (e.g., “great work”); however, the content of the feedback focused more on clarifying study procedures and visually analyzing data.

Dependent Variables

To increase the social validity of the intervention and the likelihood that it would be sustained after the conclusion of the study, parents self-selected the dependent variables they wanted to change based on their performance during baseline (see “Procedures” section). The investigators provided parents with a list of possible practices with substantial evidence to warrant their use (e.g., specific praise, wait time), but parents were not required to select a practice from the list. Parents were encouraged to select a behavior that occurred at low levels during baseline if they wanted to increase the behavior or at high levels during baseline if they wanted to decrease the behavior; however, to increase the social validity of the intervention, the decision on which behavior to target was ultimately made by the parents. All parents except Karen chose specific praise as their primary target behavior (Karen chose specific praise as her secondary target behavior), as they noticed that this behavior from the list of suggested practices occurred at very low levels or not at all during baseline. Many parents were quite surprised to find that they did not provide specific praise, as they thought this was something they did well prior to watching their videos. As one parent mentioned, “I never thought I was a ‘good job’ parent until I watched my videos.” See Table 2 for parent-generated operational definitions of the primary and secondary target behaviors selected by each participant.

Table 2 Primary and secondary target behaviors and operational definitions for each participant

Data Collection and Analysis

Parents submitted their videos to the investigators by uploading them to Google Drive, graphs were uploaded to a learning management system (i.e., eCampus), and answers to the reflection questionnaire and social validity survey were submitted via Google Forms. All accounts used to transfer data were secure and maintained by the university. Storage of all participant information was in compliance with the guidelines set by the university’s institutional review board.

Parents collected and calculated all data presented in the graphs in Figs. 1, 2, and 3. Parents measured specific praise, opportunities to respond, varied praise, higher-order questions, and negative statements by recording the frequency with which these behaviors occurred during the session, and they measured dense schedule of reinforcement, increased breaks, and wait time on a trial-by-trial basis. For dense schedule of reinforcement (Alana) and increased breaks (Lisa), a trial began when a demand was given by the parent and ended when the child responded to the demand. If the parent provided reinforcement prior to the child emitting five correct responses (Alana) or provided a break prior to the child emitting five responses (Lisa), then a plus sign (+) was recorded for that trial. A minus sign (−) was recorded for that trial if these behaviors did not occur. For wait time (Jailyn), a trial began when the participant delivered a verbal demand and ended when (a) the participant repeated or reworded the verbal demand, (b) the participant provided a verbal or physical prompt, or (c) the child correctly responded to the verbal demand. For each trial, a plus sign (+) was recorded if the participant provided sufficient wait time according to the operational definition and a minus sign (−) was recorded if she did not. An “n/a” was recorded for instances in which the participant did not have the opportunity to provide a second verbal demand or a physical or verbal prompt because the child responded correctly before the minimum number of seconds elapsed. For each of these target behaviors, the number of plus signs (+) was divided by the total number of plus (+) and minus (−) signs and multiplied by 100 to obtain a percentage for each session.
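To illustrate the trial-based calculation, the following minimal Python sketch implements the percentage computation described above; the function name and the example trial codes are illustrative and were not part of the study materials.

```python
def percent_of_opportunities(trials):
    """Percentage of scored trials coded "+"; "n/a" trials are excluded.

    trials: list of "+", "-", or "n/a" codes for one session.
    """
    scored = [t for t in trials if t in ("+", "-")]
    return 100 * scored.count("+") / len(scored)

# Hypothetical wait-time session: one "n/a" trial (the child responded
# correctly before the minimum wait elapsed) and six scored trials.
session = ["+", "+", "-", "n/a", "+", "+", "-"]
print(round(percent_of_opportunities(session)))  # 67
```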

Fig. 1 Frequency of opportunities to respond (OTR) and specific praise for Karen

Fig. 2 Frequency of specific praise, varied praise, and higher-order thinking questions and percent of opportunities for increasing breaks for Teresa, Madison, and Lisa

Fig. 3 Frequency of specific praise for all participants and percent of opportunities to provide a dense schedule of reinforcement for Alana, percent of opportunities to provide wait time for Jailyn, and frequency of negative statements for Tonya

Data from the changing criterion and multiple-baseline designs were visually analyzed by examining trend, level, overlap, and variability (Horner et al. 2005; Kazdin 2011). The results of the social validity survey were analyzed using descriptive statistics to summarize the Likert scale items (see Table 3) and inductive content analysis to identify themes in the open-ended responses (Thomas 2006; see Table 4).

Table 3 Social validity responses
Table 4 Open-ended response themes and frequency of occurrence

Interobserver Agreement

Parents served as the primary data coders for the study, and interobserver agreement (IOA) data were collected by a second, independent data coder (i.e., a peer from the synchronous class training sessions) for 20–40% of sessions in each phase, for each dependent variable, and for each participant. IOA on data measured by frequency was calculated using total count IOA (Cooper et al. 2007), meaning the number of instances of behavior recorded by one observer was compared to the number of instances of behavior recorded by the second observer and the smaller count was divided by the larger count and multiplied by 100 to obtain a percentage. For data measured by trials (i.e., reinforcement schedule, breaks, and wait time), IOA data were calculated by comparing the pluses (+) and minuses (−) for each trial to determine whether there was agreement (see “Data Collection and Analysis” section). The number of trials where there was agreement was divided by the total number of trials and multiplied by 100 to obtain a percentage. The mean IOA score across all participants, phases, and dependent variables was 86% (range 0–100). More specifically, for the primary target behavior the mean IOA was 82% (range 0–100) in baseline, 87% (range 50–100) in intervention, 89% (range 70–100) in the replication phase, and 94% (range 86–100) in maintenance. For the secondary target behavior, the mean IOA was 74% (range 0–100) in baseline, 85% (range 50–100) in the replication phase, and 91% (range 63–100) in the maintenance phase. For detailed IOA data for each participant, please see “Appendix 2” section.
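As an illustration, the following is a minimal Python sketch of the two agreement calculations described above; the counts and trial codes in the example are hypothetical.

```python
def total_count_ioa(count_a, count_b):
    """Total count IOA: smaller count divided by larger count, times 100."""
    if count_a == count_b:  # covers the 0-vs-0 case
        return 100.0
    return 100 * min(count_a, count_b) / max(count_a, count_b)

def trial_by_trial_ioa(codes_a, codes_b):
    """Trial IOA: trials with matching +/- codes divided by total trials.

    Assumes both coders scored the same set of trials.
    """
    agreements = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * agreements / len(codes_a)

# Frequency example: two observers record 7 and 8 praise statements.
print(total_count_ioa(7, 8))                                 # 87.5
# Trial example: coders agree on two of three wait-time trials.
print(trial_by_trial_ioa(["+", "-", "+"], ["+", "-", "-"]))  # 66.66...
```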

Because of the way in which IOA was calculated, IOA was sometimes low for behaviors measured using total count IOA, particularly in baseline when the behavior for which data were collected occurred infrequently. For example, Madison and Alana had an IOA score of 0% in baseline because one observer coded one instance of the target behavior and the second observer coded zero instances (see “Appendix 2” section). Had the 8-min session been broken into sixteen 30-s intervals (interval-by-interval IOA; Cooper et al. 2007), the IOA score for that session would have been 94% instead of 0% (i.e., 15/16 × 100). Similarly, a small number of disagreements were magnified in the IOA results for varied praise in baseline for Teresa and for decreasing negative statements in the replication phase for Tonya because of the low frequency with which these behaviors occurred. Given that parents were the primary data coders and the low IOA scores occurred infrequently, we chose to retain the original data rather than asking parents to recode all of their data using the interval-by-interval method, which we felt would have been burdensome.
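For comparison, a sketch of the interval-by-interval alternative; the interval counts below reconstruct the Madison/Alana example under the assumption that the single disputed instance falls within one 30-s interval.

```python
def interval_by_interval_ioa(intervals_a, intervals_b):
    """Percentage of intervals on which the two observers' counts match."""
    agreements = sum(a == b for a, b in zip(intervals_a, intervals_b))
    return 100 * agreements / len(intervals_a)

# Sixteen 30-s intervals: one observer scores a single instance in one
# interval (position arbitrary); the other scores zero everywhere.
obs_a = [0] * 16
obs_a[4] = 1
obs_b = [0] * 16
print(interval_by_interval_ioa(obs_a, obs_b))  # 93.75, i.e., 15/16 * 100
```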

Procedures

All procedures performed in this study were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. For detailed information about study procedures across each week of the study, see “Appendix 1” section.

Baseline

Parents were kept blind to the purpose of the study during baseline. The investigators instructed parents to record five, seven, or nine 8-min videos of instructional sessions with their child, depending on the experimental design, but no other instructions were provided. The instructional context of each session was determined by the parents (see Table 1) and kept constant throughout all phases of the study. Baseline was completed in an average of 1 week across participants, and parents uploaded an average of six videos per week during this condition.

Goal Selection

After all baseline videos were recorded, parents watched the videos to select an instructional practice to improve (see “Dependent Variables” section) and participated in training during synchronous group class sessions on how to (a) operationally define the behavior and (b) collect, graph, and visually analyze data. See the “Independent Variable” section and “Appendix 1” section for information regarding parent-training procedures.

Parents used the operational definition they generated to collect and graph data on their baseline videos according to the measurement procedures described previously. After visually analyzing their baseline data, parents selected a goal for their primary target behavior that they wanted to reach in intervention. Karen selected multiple goals (i.e., 18, 20, 22, and 25 opportunities to respond per session) to correspond with the multiple subphases in intervention. Teresa, Madison, Lisa, Alana, Jailyn, and Tonya’s goals were 15, 8, 8, 8, 4, and 5 instances of specific praise per session, respectively.

Intervention

During intervention, parents continued recording 8-min videos 5 days per week, but now they watched each video within 24 h of recording it and engaged in the following behaviors: (a) collected data on their primary target behavior according to the measurement procedures described previously, (b) graphed their data in Excel, and (c) completed a reflection questionnaire. The reflection questionnaire asked parents to identify (a) something they did well, (b) any challenges or areas in need of change, (c) whether they met or exceeded their goal, and (d) what could be done differently during the next session to help them meet their goal (if applicable). Intervention was completed in 4 weeks, and participants uploaded an average of four videos per week during this phase.

Replication

After the conclusion of intervention, parents viewed their baseline videos again to identify a second instructional practice to target for improvement. Once a second target behavior was identified and operationally defined, parents randomly selected five of the original baseline videos and collected data on their secondary target behavior using the procedures described previously. Parents graphed and visually analyzed these data in order to choose a goal for their secondary target behavior, allowing the investigators to determine whether the effects of using video analysis procedures would replicate with a second instructional practice. Goals for the secondary target behavior were as follows (per session): seven specific praise statements (Karen), six different praise statements (Teresa), five higher-order questions (Madison), providing a break after a maximum of five responses in 90% of opportunities (Lisa), providing reinforcement after a maximum of five correct responses in 90% of opportunities (Alana), engaging in sufficient wait time in 80% of opportunities (Jailyn), and delivering zero negative statements (Tonya). After goal selection, parents engaged in procedures identical to those of the intervention phase, with the exception that they now collected and graphed data on both their primary and secondary target behaviors. The replication phase was completed over 7–10 days, and participants uploaded approximately five videos per week during this condition.

Maintenance

Between 1 and 2 weeks following the conclusion of the replication phase, parents recorded three maintenance videos. Maintenance procedures were identical to baseline in that parents did not view the videos until all three had been recorded. After all videos were recorded, parents collected and graphed data on both their primary and secondary target behaviors following procedures described previously, as well as completed reflection questionnaires for each video. The maintenance phase was completed over 1–3 days, and participants uploaded an average of one video per day during this condition.

Social Validity Survey

In order to investigate whether parents’ views on the feasibility and acceptability of using video analysis changed throughout the study, a social validity survey was administered at five points in time: immediately after participants watched their baseline videos, at the beginning of intervention, at the conclusion of intervention, after the replication phase, and after maintenance. The survey included 10 items that required participants to respond on a 5-point Likert scale to indicate the degree to which they agreed with each statement (5 = strongly agree, 4 = agree, 3 = unsure/neutral, 2 = disagree, 1 = strongly disagree) and four open-ended questions that asked parents to provide information on the following: (a) procedures they would change or keep the same, (b) the feasibility of using video analysis in their setting, (c) the advantages and disadvantages of using video analysis to change behavior, and (d) any additional comments they would like to add. Table 3 provides information on the 10 Likert scale items included in the survey.

Fidelity of Implementation

The first and fourth authors collected fidelity of implementation data on parents’ adherence to the study procedures using a checklist. Data were calculated by dividing the number of steps completed by the total number of steps and multiplying by 100 to obtain a percentage. Baseline and maintenance procedures included the following: (a) submission of the required number of videos, (b) videos were a minimum of 8 min in length, (c) baseline and maintenance data collection and completion of the reflection questionnaires did not occur until all baseline and maintenance videos had been recorded, (d) all data were graphed, and (e) graphs were submitted for review by the investigators by the dates requested. Intervention and replication phase procedures were identical to the baseline and maintenance procedures with the following exceptions: (a) data collection and reflection questionnaires were completed within 24 h of recording and uploading each video rather than after all videos had been recorded, and (b) only one video was recorded per day.

Because all data, videos, graphs, and responses to questionnaires and surveys were submitted electronically, the submissions were date and time stamped. Thus, it was possible to determine whether parents followed the procedures in the implementation fidelity checklist. Treatment fidelity data were taken on 100% of the procedures, and the overall score across all participants, phases, and dependent variables was 94% (range 73–100). More specifically, the mean treatment fidelity score was 100% in baseline, 92% (range 85–98) in the intervention, 93% (range 78–97) in the replication phase, and 90% (range 73–100) in maintenance. Lower scores were primarily due to parents either (a) submitting their graphs after the date requested by the investigators (usually within a few days) or (b) uploading more than one video in a 24-h period. As indicated by the average procedural fidelity scores, these instances occurred infrequently and they did not adversely affect the results.
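To illustrate the checklist calculation, a short sketch follows; the step names paraphrase the baseline criteria listed above, and the adherence values are hypothetical.

```python
# Hypothetical adherence record for one parent's baseline phase.
baseline_checklist = {
    "required number of videos submitted": True,
    "every video at least 8 min long": True,
    "data collection/reflections delayed until all videos recorded": True,
    "all data graphed": True,
    "graphs submitted by the requested date": False,  # a few days late
}
steps_completed = sum(baseline_checklist.values())
fidelity = 100 * steps_completed / len(baseline_checklist)
print(f"Fidelity: {fidelity:.0f}%")  # Fidelity: 80%
```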

Results

Overall, video analysis was effective for changing parents’ instructional practices. All seven parents changed their behavior in the desired direction during intervention for both target behaviors, and these effects maintained in the absence of intervention (see Figs. 1, 2, 3). When considering parents’ responses on the social validity survey, all parents rated the video analysis intervention favorably and indicated that they planned to continue using the intervention at the conclusion of the study (see Table 3). As hypothesized, parents viewed the intervention more favorably as the study progressed. Although all parents had changes in level from baseline to intervention, four parents (i.e., Karen, Madison, Alana, and Jailyn) had a decreasing trend in their data during the replication and/or maintenance phases for at least one target behavior; in spite of this decreasing trend, however, these parents’ data remained elevated above baseline levels. Teresa, Madison, Alana, and Jailyn met their goals for both target behaviors during the majority of sessions (see Figs. 1, 2), but the remaining participants, despite making progress from baseline, did not achieve the same positive results. To provide possible explanations for why Karen, Lisa, and Tonya were unable to meet their goals for one or both target behaviors, we discuss these three participants further and contextualize their experiences based on their responses to the reflection questionnaire.

Karen’s level of responding for specific praise did increase over baseline during the replication and maintenance phases; however, she did not meet her goal during any of the sessions. There were also several times when Karen did not meet her goal for opportunities to respond, especially during later stages of the study when her goal was very high. Analyzing Karen’s reflections provides context regarding her performance toward her goals. Karen explained that there comes a point when opportunities to provide specific praise hit a ceiling and any additional praise statements beyond this point become contrived and non-contingent on her son’s behavior. Karen reiterated that she felt she should have set a lower goal for specific praise. Karen also stated that one reason she did not meet her goal for her target behavior was that many of the opportunities to respond she provided for her son were in the form of a physical response rather than a verbal response, and the time it took for him to engage in the physical response limited future opportunities to respond. For example, she mentioned that during one session she gave her son the opportunity to crack the eggs, which did make him more receptive to cooking, but it also took up so much time that it limited the number of opportunities she gave him to respond during the rest of the session. Despite not meeting her goals, Karen’s responses on the social validity survey indicated that she felt the intervention was valuable and that she planned to continue using it at the conclusion of the study.

Although Lisa met her goal for specific praise during the majority of sessions, she met her goal for increasing breaks only once during the replication phase and not at all during maintenance; however, when her data are visually analyzed in comparison with baseline, positive effects are evident, as her levels show a substantial increase from baseline to the replication phase. Despite levels for both target behaviors decreasing slightly during maintenance, both remained well above baseline levels. Analyzing Lisa’s reflections provides context regarding her performance and lack of improvement toward her secondary goal. Lisa cited difficulty keeping track of how many specific praise statements and breaks she was delivering in a session as the primary reason she did not always meet her goal. She particularly struggled with keeping track of how many demands she delivered before providing a break: “Sometimes I get on a roll and forget to take a break before I get to my 5 demands max. I get into a groove with the lesson and if my son is doing well, I just forget about what I am actually doing!” In order to help her meet her goal in the future, Lisa mentioned that it would have been helpful to use a counter to keep track of how many specific praise statements she was delivering in a session and to plan her lessons to include natural breaks throughout.

Tonya also had trouble meeting her goal for her second target behavior, despite achieving her goal for her primary target behavior. Although Tonya only met her goal of decreasing negative statements to zero for one session in the replication phase, her levels for this target behavior decreased from baseline to the replication phase and remained low in maintenance, indicating that maintenance was achieved. Tonya’s comments on the reflection questionnaire indicated she was satisfied with her progress, despite not meeting her goal for her secondary target behavior during the majority of sessions: “Overall, I am very pleased with my numbers…my praise statements are increasing, and my negative statements are decreasing, so both are moving in the right direction.” Tonya also commented that using video analysis made her aware of times when she unintentionally used negative statements out of habit and that she believed with more time she could completely eliminate this behavior. Additional comments on Tonya’s reflection questionnaires indicated that she was proud of the progress she was making with her first target behavior: “Seven specific praise statements is almost one a minute and even though it might be considered low for some people, that is a big number for me. I did not have a single praise statement when this project first started.”

Social Validity

Overall, parents agreed or strongly agreed with 9 of the 10 items. The only item that parents rated lower than 4.0 related to their comfort with watching themselves on video; for this item, the overall average score was 3.7, indicating that parents were unsure or neutral. The items that parents rated the highest related to the benefits of the intervention (i.e., watching myself on video helped me see things I would not have noticed otherwise) and the broad applicability of the intervention (i.e., video analysis can be used to improve many different teaching skills). When considering parents’ responses over time, parents responded more positively to 9 of the 10 items from the first administration of the survey to the last. The only item on which parents did not change their response related to the feasibility of implementing the intervention; parents consistently rated this item 4.6, indicating that they agreed or strongly agreed with the statement. Parents’ responses increased the most on items related to the time and effort required to implement the intervention (0.9 increase) and plans to continue implementing the intervention at the conclusion of the study (0.7 increase). Overall, parents viewed the intervention favorably and rated it as an effective method for improving their skills.

Analyzing participant responses to the three open-ended questions also provided valuable information regarding parents’ views (see Table 4). When asked what they would change, the majority of parents said they would not change anything, indicating that they were satisfied with the intervention. All parents stated that the intervention was very feasible and easy to implement. When asked about the advantages of using video analysis, all parents stated that the ability to see themselves on video was an advantage. Tonya stated that she had a lot of “wow…. that needs to be changed” or “do I really do that?” moments throughout the study, and both Karen and Jailyn stated that using video analysis made them more self-aware. Cost-effectiveness was also a theme that arose frequently when parents discussed the advantages of the intervention. Karen, Teresa, Madison, Lisa, and Tonya repeatedly stated that the affordability of the intervention was a big advantage. As Tonya mentioned, “Every device these days has the capability to record. You do not necessarily need certain software programs.” All parents in this study used technology already available to them to record videos, so there was no need to purchase additional equipment. Another theme that emerged in parents’ responses was the ability to self-evaluate their behavior rather than having someone else critique them. As Alana mentioned, “Critiques make me self-conscious, so being able to see and correct my own mistakes was helpful.” Finally, Lisa stated that the ability to watch the videos on her own time was an advantage. These comments highlight the benefits of video analysis, such as the flexibility of the intervention and the freedom to self-select behaviors for change.

When asked about the disadvantages of using video analysis, four distinct themes arose from parents’ responses. The two most common were the time it takes to implement the intervention and whether their child was cooperative on recording days. Jailyn, Alana, Madison, and Lisa all cited the time it takes to edit and upload videos as a disadvantage of the intervention. Regarding child cooperation, parents noted the difficulty of recording when their child was not willing or available: Jailyn stated that some days her child needed extra sleep, Karen’s son was distracted by the camera, and Teresa’s son had medical issues, including epileptic seizures. Parents also raised concerns about confidentiality, particularly regarding the storage or disposal of raw video. Less common were comments about the discomfort of watching oneself on video; for example, Lisa mentioned, “It is not always easy to see the bad side to your teaching.” Despite these potential disadvantages, parents agreed that the advantages of the intervention far outweighed the disadvantages.

Additional comments focused on the generalizability of the intervention and the impact it had on the children’s behavior. As Karen mentioned, “While I’m not specifically working on general praise…I notice when I praise [my son] in any way. So, there has been an increase in general praise statements [as a result of the intervention].” Additionally, Karen showed an interest in using the intervention in a different setting: “I would actually like [to] use video analysis at my workplace because we have an employee with very challenging behaviors and I would love to see how I behave around her and how I can change my behavior to help change hers.” Although data were not taken on child outcomes, parents indicated that they noticed the intervention was having a positive effect on their child’s behavior. Tonya stated, “Even though this project was targeting my behaviors, I have been able to identify behaviors in my child that I could alter a bit. Her behaviors are of course stimulated by my actions so it is helping us both….there are far less meltdowns, less resistance to instruction, and there is an overall willingness to learn.” Parents’ comments on the social validity survey corroborate the positive effects found when visually analyzing their data.

Discussion

Overall, video analysis was effective for changing parents’ behavior, and parents perceived the experience as worthwhile. One interesting finding from this study was that although all parents met their goal for their primary target behavior for the majority of sessions, not all parents met their goal for their secondary target behavior. Additionally, responding for the primary target behavior decreased for several participants when the secondary target behavior was introduced. When analyzing parents’ responses on the reflection questionnaires, several parents acknowledged difficulty trying to improve multiple behaviors at once. This finding is consistent with prior research, which found that behavior change as a result of self-monitoring is reduced when multiple behaviors are tracked simultaneously (Hayes and Cavior 1977). Although some parents met their goals more often than others, all parents showed improvement over baseline for both target behaviors.

Parent reflections offered important context explaining why parents viewed video analysis as a positive approach to improving instructional skills and self-efficacy. Even when goals were not met, parents identified benefits of setting goals and tracking performance through video analysis sessions. Parents reflected that practicing the target behaviors so frequently helped add these behaviors to their instructional repertoire. The current study demonstrates the potential of video analysis as a conduit for self-evaluation and self-reflection, with the goals of improving how parents think about instructional decision making and improving their implementation of research-supported instructional practices.

Broadly, this investigation supports previous research that classifies video analysis as a promising practice for transforming existing beliefs and practices (see Nagro and Cornelius 2013), and it also expands the literature base by demonstrating the feasibility and replicability of video analysis practices across contexts. Specifically, support for video analysis practices is typically contextualized within formal education settings, such as K-12 classrooms, with educators as implementers (Morin et al. 2019); however, this study demonstrates the generalizability of video analysis practices as useful in more informal education settings (i.e., participant homes) and extends the scope of the body of research on video analysis to include parents. Furthermore, this research extends prior work by demonstrating the broad application and replicability of video analysis practices to target instructional and behavioral skills not previously explored in the literature, such as reducing instances of negative statements and increasing breaks and reinforcement schedules.

One of the key strengths of this study was that parents could access training on video analysis from a distance, allowing for greater accessibility. Prior work has indicated that parent-training programs are effective for changing behavior (O’Connor et al. 2013); however, previous studies have overwhelmingly delivered parent-training interventions in person. This study expands previous research by demonstrating that parents can be trained to implement an intervention entirely via distance technology and that the time and effort it takes to implement the intervention are acceptable to parents. The positive results on the social validity survey complement prior work delivering parent-training programs remotely (e.g., Bearss et al. 2018; Vismara et al. 2016), which has demonstrated that this mode of delivery is satisfactory to parents. The results of this study are encouraging, as they offer a concrete approach for practitioners to reach larger numbers of parents in remote locations by taking advantage of the distance-based coaching procedures outlined in this study. Practitioners are often limited in the number of intervention hours they can deliver to children due to funding restrictions, thus potentially restricting the success of the intervention. However, if practitioners can effectively teach parents to implement the intervention outside of the child’s scheduled instructional session with a therapist or teacher, then the impact of the intervention can be maximized. Additionally, this study demonstrates that parents can successfully be trained to use video analysis in group settings and that resource-intensive, one-to-one, in-person sessions may not be needed. The total time spent training parents in this study is comparable to that of other parent-training interventions delivered in a group format (e.g., Kjobli et al. 2013) and requires less time overall from a therapist or interventionist who would otherwise have to deliver the intervention multiple times to individual parents.

Another unique contribution of this study is the collection of social validity data at multiple points in time to determine whether parents’ views regarding the acceptability and feasibility of video analysis changed over time. Although prior work has collected social validity data on video analysis procedures with educators (e.g., Capizzi et al. 2010), this is the first study, to the authors’ knowledge, that has collected these data with parents. Taking data at multiple time points revealed that parents’ views of video analysis do change over time; more specifically, parents viewed the intervention more favorably as the study progressed. One likely reason for this finding is that parents became habituated to seeing themselves on video and the act of viewing their own behavior became less aversive. Another possible reason for the increase in parents’ social validity scores is that parents became more proficient in implementing the video analysis procedures, thus increasing their self-efficacy and the acceptability of the intervention. It is also likely that as parents began to see positive effects from video analysis, their results served as automatic reinforcement for the implementation of the intervention. These findings are encouraging as they provide evidence that parents who initially demonstrate reluctance when asked to implement video analysis procedures may change their perspective as the intervention progresses.

Implications for Practice

One point to consider when teaching parents to use the methods described in this study is that practitioners may need to offer more guidance on goal setting for some parents to ensure that the goals they select are reasonable and attainable. The majority of parents met or exceeded their goals throughout the study; however, some parents did not meet their goals despite making progress toward them. An analysis of their data and their responses on the reflection questionnaire indicated that one reason parents were unable to meet their goals was that the goals were set too high or too low. Although parents should choose their own goals to increase the social validity of the intervention, practitioners can offer guidance by encouraging parents to select goals based on their baseline performance and to consider the reasonableness of each goal in light of the session length and the behavior they wish to change. Another reason that parents did not meet their goals is that it was difficult for some parents to focus on multiple behaviors at once. Practitioners should be aware that the effects of video analysis with self-monitoring may be reduced when parents are expected to change multiple behaviors concurrently.

Although all parents stated that the intervention was very feasible and easy to implement, there were comments expressing concern about technical aspects of the video analysis process, including the need to improve video upload times and determine ideal camera setup. To address upload speed concerns, practitioners can encourage parents to connect directly to the Internet when uploading videos rather than using Wi-Fi, teach parents to “zip” the file or to reduce the resolution of the videos, or consider shortening video episodes. Parents in this study recorded 8-min videos in order to standardize the procedures across all participants; however, shorter videos would likely have been sufficient for some parents. To address concerns regarding capturing critical aspects of instruction on video, practitioners can provide parents with a reference sheet of important points to remember when recording videos, including how to set the camera up to maximize the quality of the recording. Published resources about do’s and don’ts for recording instruction can be shared with parents as part of their training (e.g., Nagro 2016; Harvard College 2015).

Limitations

This study has several limitations that should be considered when interpreting the results. First, data were not taken on child outcomes; therefore, it is unknown whether the positive effects found for parents can be extrapolated to their children. Second, the maintenance condition was implemented a maximum of 2 weeks post-intervention due to limitations on the length of the study; as such, questions remain regarding whether the effects of video analysis would maintain over longer periods. These questions are particularly relevant given that four participants (i.e., Karen, Madison, Alana, and Jailyn) had a decreasing trend in their data across the replication and/or maintenance phases for at least one target behavior. Although their levels of responding were still elevated over baseline, it is unknown whether their data would have continued to decrease had a longer maintenance period been implemented. Third, parent participants were all graduate students studying applied behavior analysis; given that the dependent variables were all behavior analytic in nature, it is unknown whether parents without experience in behavior analysis would have similarly positive results. Fourth, although mean IOA scores were generally high across phases and participants (see “Appendix 2” section), IOA did drop below 80% at times, particularly when the behavior occurred at low levels (e.g., during baseline); therefore, readers should take these data into consideration when interpreting the results. Lastly, one of the questions on the reflection questionnaire that parents completed after each intervention session asked whether they met or exceeded their goal. As such, many parents tried to exceed their original goals. Although this was not an issue for the majority of participants, given that they were in a multiple-baseline design, it did lead to Karen having variable data that were not tightly clustered around the criteria for each subphase; thus, the internal validity of the changing criterion design in this study is weakened (Hartmann and Hall 1976).

Future Research

Several areas for future research were revealed during this study. First, although parents anecdotally noted that video analysis was having a positive effect on their child’s behavior, it is important for future research to investigate these effects empirically. Second, to determine whether the positive results found for video analysis will maintain over longer periods of time, future studies should include a longer maintenance period than was included in this study. Third, future research should investigate whether video analysis is effective for teaching parents to engage in more complex behavioral procedures, such as conducting a functional behavior assessment or functional analysis or using extinction. Research in these areas is limited, and it would be important to determine whether video analysis is effective for teaching multi-step procedures. A final area for future research involves conducting parametric and component analyses to determine the optimal dosage of the intervention and which components are most effective (Dallery and Raiff 2014). The video analysis intervention in this study was conducted 5 days per week with four different components (i.e., goal setting, self-monitoring, self-graphing, and reflection). Although effective, the intensity of the intervention was time-consuming and a disadvantage noted by some parents. If the same results can be obtained with a lower dosage and fewer components, the social validity of the intervention may increase further.