Introduction

The concept of social validity, or the social importance of an intervention, has been discussed and studied in the field of special education since it was first introduced by Kazdin (1977) and Wolf (1978) more than 40 years ago. Conducting research that is “socially important” is essential to addressing the persistent gap between research and practice in the special education field (Callahan et al. 2017; Hanley 2010; Ledford et al. 2016; Reichow et al. 2008; Spear et al. 2013). Even when a practice or strategy has been recognized as evidence-based, it may not be implemented in natural or authentic environments, such as schools and homes, or by natural change agents such as caregivers and professionals. One possible reason evidence-based strategies are not always implemented in the natural environment by natural change agents is their limited or absent social validity (e.g., they require too much time and effort, are not considered cost-effective, or do not produce meaningful changes to quality of life; Hanley 2010; Spear et al. 2013). To increase the likelihood that an intervention will be implemented and maintained, it is important to evaluate its social validity.

Wolf (1978) suggested assessing three different aspects of social validity: goals (i.e., if the target of the intervention is socially important), procedures (i.e., if the intervention procedures are acceptable and feasible), and outcomes of the intervention (i.e., if the intervention is effective and the results are socially important). Additionally, Horner et al. (2005) discussed criteria that can enhance the social validity of an intervention, including (a) evidence that the natural change agents (i.e., those who work directly with target recipients, such as teachers or caregivers) implement the intervention with fidelity in natural settings; (b) reports from natural change agents that the intervention procedures are effective and feasible; and (c) evidence that the intervention implementation and/or impact continues after the study has ended.

Several researchers have conducted reviews of the literature on evidence for social validity assessments in the special education field (Barton et al. 2018). Ledford et al. (2016) reviewed 54 single-case research articles that included a total of 109 studies. Within those 109 studies, 44% reported social validity data. Most studies that assessed social validity used interviews or questionnaires (96%), and relatively few used behavioral observations (19%). The evidence of social validity in these studies focused primarily on the acceptability of intervention procedures or satisfaction with outcomes but provided limited assessment of the feasibility of program procedures or the acceptability of program goals. Similar findings related to social validity assessments in single-case research and their limitations were reported in a literature review conducted by Snodgrass et al. (2018), who reviewed single-case studies published in six top-ranked special education journals from 2005 to 2016. Of those articles, 27% (115 of 429) reported any social validity assessment, and only 6.5% (n = 28) reported on all aspects (i.e., goals, procedures, and outcomes) of social validity. A majority of those 28 studies used a single measure, largely interviews or questionnaires completed by stakeholders, and a relatively small percentage used behavioral observation (11%, n = 3).

Given the results of Snodgrass et al.’s (2018) literature review, there remains a need for improvement in assessment practices to represent all three aspects of social validity (i.e., goals, procedures, and outcomes). Careful assessment of social validity can be accomplished using a variety of methods and methodologies. Social validity assessment results can be used to predict and/or increase the likelihood of the acceptance of a program when it is disseminated to the community (Baer et al. 1987). Results can also be used to improve future replications/applications of an intervention (Finn and Sladeczek 2001). If a social validity assessment is conducted during an intervention, it might also be used to improve implementation procedures within a study (Harrison et al. 2016; Page and Thelwell 2013). Thus, it is critical to consider evidence of rigorously assessed social validity when expecting dissemination of evidence-based practices (EBPs) to reduce the gap between research and practice.

Caregiver-Implemented Communication Intervention

Communication skills are an essential area of development for young children receiving early intervention (EI) services. The development of communication skills is closely related to physical, cognitive, and behavioral development (Toth et al. 2006). Thus, many researchers have studied EBPs to support the communication development of young children with developmental disabilities or delays (e.g., Drew et al. 2002; Green et al. 2010; Kaiser et al. 2000; Kaiser and Roberts 2013; McDuffie et al. 2013). Although researchers have identified EBPs, information is limited regarding how these practices are implemented in natural environments or how natural change agents, such as EI service providers and caregivers, perceive the goals, procedures, and outcomes of EBPs when used in the context of their lives. The gap between research and practice is a consistent issue in the EI and early childhood special education (ECSE) fields because transferring EBPs to natural settings has been a persistent challenge (Cook and Odom 2013). To address this gap, researchers are exploring different ways to effectively disseminate EBPs (e.g., Wong et al. 2013) and to understand how the social validity of an EBP influences its uptake in practice (e.g., Strain et al. 2012).

One way to enhance the dissemination of EBPs is to support caregivers’ use of EBPs with their children who have communication delays or disorders. Caregivers are well situated to embed EBPs into natural settings and contexts (i.e., ecological validity, Carr et al. 2002; and social validity, Wolf 1978). In addition, both researchers (Barton and Fettig 2013; Peterson et al. 2007) and federal legislation (IDEA 2004) encourage the involvement of caregivers in their children’s education. Caregivers can learn to implement communication strategies effectively and use them within their natural routines (e.g., bedtime routine, play time). Furthermore, researchers have demonstrated the efficacy of training and coaching programs for caregivers to implement evidence-based communication strategies (e.g., Kaiser and Roberts 2013; McDuffie et al. 2013; Meadan et al. 2016).

Telepractice as a Service Delivery Model

Telepractice refers to the use of technology (e.g., videoconferencing) to deliver services (e.g., consultation, assessment, or intervention) from a distance. Services can be delivered through real-time audio and/or video interaction (synchronous) or as self-paced learning activities (asynchronous; e.g., prerecorded videos and modules; ASHA, n.d.; Vismara et al. 2013). Researchers have used telepractice to deliver services and reach caregivers who have limited access to EBPs (e.g., Ferguson et al. 2019; Heitzman-Powell et al. 2014; Machalicek et al. 2016; Meadan et al. 2016; Snodgrass et al. 2017; Suess et al. 2014), and providing services, training, and coaching via telepractice has been found to be feasible and effective (e.g., Little et al. 2018). The use of telepractice can address many of the barriers to providing EI services in families’ homes (e.g., personnel shortages, travel time, cost; Hallam et al. 2009; Staerkel and Spieker 2006). In addition, the infrastructure for and access to high-speed internet are increasing across the United States (US; U.S. Department of Commerce 2019), and more individuals than ever own smartphones. The benefits of using telepractice, along with increased access to technology and improved infrastructure, could enhance how EI service providers deliver high-quality services and supports to many children and families across the US.

Social Validity in Caregiver-Implemented Communication Intervention

Although researchers have demonstrated the efficacy of caregiver-implemented communication intervention programs with or without telepractice, information about whether the goals, procedures, and outcomes of these programs are socially valid is limited (Ledford et al. 2016; Schlosser 1999). Researchers have attempted to evaluate the social validity of caregiver-implemented intervention programs in various ways. For example, Meadan et al. (2014) evaluated the social validity of the outcomes of their intervention using blind raters. The researchers selected random video clips of participants from pre- and postintervention phases. They showed the videos to experts (i.e., other caregivers, early childhood special educators, speech-language pathologists), asking them to rate caregiver and child behaviors related to communication skills in an effort to evaluate whether their ratings corroborated the measured intervention outcomes (i.e., coded behavioral data). Multiple research teams have also used standardized scales (e.g., Behavior Intervention Rating Scale [BIRS]; Von Brock and Elliott 1987) or questionnaires, including Likert-type scales and open-ended questions, to explore the perspectives of caregivers related to the social validity of intervention programs (Justice et al. 2011; Olive and Liu 2005; Rivard et al. 2017). Researchers have also considered the attrition rate of an intervention as an indicator of social validity (i.e., feasibility; Justice et al. 2011). Ogilvie and McCrudden (2017) assessed social validity by conducting a mixed-method study in which they asked four caregivers who participated in a naturalistic behavioral intervention (i.e., the Early Start Denver Model) to complete a standardized social validity scale (Treatment Acceptability Rating Form-Revised questionnaire [TARF-R]; Reimers et al. 1992) and participate in semi-structured interviews at the end of the intervention. After independently analyzing the two data sources, they integrated the quantitative results (i.e., TARF-R scores) and qualitative themes (i.e., from interviews) to interpret the social validity of the intervention program.

Although many researchers have attempted to capture the social validity of caregiver-implemented interventions using single, multiple, or mixed methods, most have not assessed all three aspects of social validity (i.e., goals, procedures, and outcomes; Snodgrass et al. 2018). In addition, very limited information is available on the social validity of interventions in which services were delivered via telepractice. Given the importance of developing and implementing socially valid interventions, the limited and incomplete evaluations of social validity as it relates to caregiver-implemented communication interventions, and the lack of evaluation of such interventions delivered via telepractice, it is important to further explore this topic.

Study Background

The Internet-based Parent-Implemented Communication Strategies (i-PiCS) program, which was developed by Meadan et al. (2016), was adapted and modified from the Parent-Implemented Communication Strategies (PiCS) program (Meadan et al. 2014; Stoner et al. 2012). The PiCS program focused on teaching caregivers to use four naturalistic communication strategies (i.e., environmental arrangement, modeling, mand-model, and time delay) with their young children with communication delays during home visits. The i-PiCS program taught the same content but used a telepractice service delivery model in which researchers provided training and coaching to caregivers and collected data via videoconferencing instead of during in-home visits (Meadan et al. 2016). The i-PiCS program was found to be promising; caregivers implemented the strategies with high fidelity, the children’s communication skills improved, and all participating caregivers reported their satisfaction with the goals, procedures, and outcomes of the program (Chung et al. 2016; Meadan et al. 2016).

The purpose of the current study was to explore, via multi-source multi-method assessment, whether the i-PiCS program was socially valid when implemented by natural change agents (i.e., EI service providers) and embedded in a natural service delivery system (i.e., EI). Specifically, our aim was to assess whether both an EI service provider and caregiver perceived the program as socially valid while an EI service provider provided coaching to a caregiver via telepractice and the caregiver learned to implement the communication strategies with fidelity with their child. The research questions that guided this study were:

  1. In what ways and to what extent does an EI service provider perceive the i-PiCS program’s goals, procedures, and outcomes as socially valid?

  2. In what ways and to what extent does a caregiver of a young child with a disability perceive the i-PiCS program’s goals, procedures, and outcomes to be socially valid?

Method

We explored whether the i-PiCS program was socially valid for both the EI service provider and the caregiver by using multiple data sources to evaluate the program’s goals, procedures, and outcomes. This multi-method approach was selected to address some of the limitations of previous social validity assessments, described above, and to pursue a more robust and comprehensive assessment of the extent to which the program was socially valid.

Study Design Overview

To conduct a comprehensive social validity assessment of the i-PiCS program, we collected and analyzed multiple data sources while implementing the program. Using multiple data sources could lead to a better understanding of the social validity of the program or identify new ideas for its future replication (Kramer 2011). To assess social validity, we first identified detailed sub-questions within our research questions related to the goals, procedures, and outcomes of the program (Wolf 1978). We then identified the relevant data sources for each sub-question (see Table 1). Lastly, we analyzed the data from these multiple sources to answer the research questions.

Table 1 Data sources used to evaluate the social validity of the program

The i-PiCS program was conducted over 8 months with one EI service provider and a caregiver and child on her caseload. This was the first time the program was delivered by an EI service provider to a caregiver rather than by researchers to caregivers. The participating EI service provider completed online training, coached the caregiver via telepractice, and analyzed the caregiver’s strategy use to make data-based decisions. The EI service provider used elements of a single-case multiple-baseline design to structure her decision making, not to conduct an experimental analysis of the program’s effects. In addition, before and after i-PiCS implementation, both the EI service provider and the caregiver independently participated in interviews and completed a questionnaire.

Participants

EI Service Provider

We recruited an EI service provider through a state EI program. To participate, the provider had to meet the following criteria: (a) be willing to deliver EI services to a family on their caseload via telepractice; (b) have a potential family who could benefit from the i-PiCS program (i.e., a caregiver of a child who required supports for communication); and (c) be able to bill for telepractice services and count those services toward the legally required supports for the participating family. Joan contacted the research team to indicate her interest in participating; she had a potential caregiver and child in mind for the program. She was a Caucasian female with 11 years of experience providing EI services. She had a bachelor’s degree in special education and had worked for 7 years in a special education classroom in a public school prior to working in EI. At the time of the study, she was a lead special educator in a local EI program for children who were at risk for autism spectrum disorder (ASD). She worked closely with speech-language pathologists to enhance young children’s social communication skills. Prior to the start of the study, Joan reported that she had knowledge of communication teaching strategies and was confident in using those strategies in EI settings.

Caregiver and Child

Ashley, recruited by Joan, was a married mother of three children (two sons and one daughter). At the time of the study, she and her husband, both Caucasian, lived in a small town near a large city where she worked as a paraprofessional at a public school. Her oldest son received special education services at the public school, and her daughter was 2 months old. Hayden, her second son and the child participant, was 2 years and 1 month old at the start of the study. He had communication delays and was considered to be at risk for ASD, based on EI records. Hayden received EI services for 2 h each week at home, and 2 h every other week at the local EI program center. Hayden’s baseline score on the Ages and Stages Questionnaire: Social-Emotional screening (ASQ: SE; Squires et al. 2002) was 75 (cutoff score = 65). His score fell within the category of “at risk.” According to the MacArthur Bates Communication Development Inventory: Words and Sentences (MCDI; Fenson et al. 2007) completed by his mother, Hayden’s word production was at the 22nd percentile at the start of the study. Hayden typically used one- or two-word sentences to communicate and his primary modes of communication were speech and gestures.

Settings and Materials

Sixteen parent–child observations were conducted at home during Ashley’s typical interactions with Hayden, and four observation sessions were conducted in the local EI program center. All training sessions were completed asynchronously online, and all interviews and most of the coaching sessions were conducted via telepractice. For the online training, we used Compass 2g, an online learning platform provided by the researchers’ university. For each synchronous telepractice session, we used Polycom RealPresence® videoconferencing software, which is compliant with the Health Insurance Portability and Accountability Act (HIPAA). Ashley used her personal smartphone and Joan used her work-provided tablet for the telepractice sessions. Observations of parent–child interactions were recorded either through the internal RealPresence® recording feature or with the smartphone’s built-in camera and then uploaded to Box® (i.e., a secure, cloud-based file-sharing service). Google Forms was used to collect behavioral and coaching fidelity data.

The i-PiCS Program Training Procedures

The i-PiCS program was divided into six phases: preintervention, baseline, training, posttraining, coaching, and maintenance. First, a researcher trained Joan, the EI service provider, on the technology used in the program and then on (a) the targeted communication strategies (i.e., environmental arrangement, modeling, mand-model, and time delay), (b) how to analyze the caregiver’s use of those strategies (i.e., data-based decision making), and (c) best practices for caregiver coaching. All training provided by the researcher to the EI service provider was done via telepractice. Once each training was completed, a graduate student who did not directly interact with the participants reviewed the recorded session and checked procedural fidelity.

Ashley, the participating caregiver, completed online training on the technology and targeted naturalistic communication strategies. Fidelity was assessed through the embedded release rules within the modules, which did not allow Ashley to proceed until the current module was completed.

After both Joan and Ashley had completed training, Ashley was coached by Joan on how to use the targeted communication strategies with her son, Hayden, one at a time. Once Ashley had mastered all targeted strategies, they moved to the maintenance phase, during which Joan withdrew coaching and, as in the baseline phase, only observed and coded Ashley’s strategy use with Hayden. The coaching and data-collection procedures followed those reported for the i-PiCS program in Meadan et al. (2016). An overview of the procedures related to the EI service provider’s implementation of the i-PiCS program is shown in Fig. 1.

Fig. 1 i-PiCS program implementation procedures

Technology Training

The research team conducted individual technology training via telepractice to show Joan and Ashley how to record videos, upload them into Box®, and how to use Polycom RealPresence® for secure videoconferencing. The procedural fidelity rate of the technology training was evaluated at 100% using a checklist.

Communication Strategies Training

Both Joan and Ashley learned about the targeted communication strategies by completing five online training modules (i.e., introduction to the program, environmental arrangement, modeling, mand-model, and time delay) hosted on Compass 2g. Each module consisted of a short (9–11 min) video about a targeted strategy, which included example clips of other parents using the strategy with their young children, and a flowchart of the strategy steps.

Coaching and Decision-Making Training for Provider

Prior to coaching Ashley, Joan completed online training on assessing the fidelity of the caregiver’s use of the targeted communication strategies. The training was hosted on the i-PiCS website and included handouts explaining the coding rules and example video clips annotated with the appropriate codes. Joan then met with a research team member via telepractice to discuss the coding and decision-making process and to ask any questions she had about coding. Next, the research team conducted a coaching session with Joan, via telepractice, using the i-PiCS coaching procedures to support her as she worked directly with Hayden without Ashley present. The purpose of this session was to model the coaching procedures Joan would later use with Ashley and to check Joan’s fidelity of communication strategy use. The procedural fidelity rate of the coaching session was 100%.

Coaching the Caregiver

Coaching procedures used within the study were based on those used by Meadan et al. (2016): (a) a preobservation conference (i.e., joint planning); (b) uninterrupted observation of parent–child interaction; and (c) a postobservation conference (i.e., reflection and feedback). Joan used a coaching fidelity checklist as her guide. Once Ashley completed the online training, Joan coached her on environmental arrangement paired with one additional strategy at a time, starting with modeling. Once it was determined that Ashley had achieved mastery of one strategy (e.g., modeling), Joan began coaching on the next strategy (i.e., mand-model, then time delay). Joan decided when to move to the next strategy based on the following caregiver performance criteria: (a) Ashley used the current strategy with high fidelity in at least 80% of opportunities across two consecutive sessions (per Joan’s coded data); and (b) Ashley reported to Joan (as part of the coaching procedures) that she felt confident in using the current strategy with Hayden.
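To make this advancement rule concrete, the sketch below expresses the two criteria in code. It is a minimal illustration only: the study describes the rule in prose, and the function name, variable names, and example values here are hypothetical.

```python
def met_mastery_criteria(high_fidelity_pcts, caregiver_feels_confident, threshold=80.0):
    """Check the two advancement criteria described above:
    (a) high-fidelity use of the current strategy at or above the threshold
        in the two most recent, consecutive sessions, and
    (b) the caregiver's self-reported confidence with the strategy.
    """
    if len(high_fidelity_pcts) < 2:
        return False  # need at least two coached sessions to judge
    last_two_ok = all(pct >= threshold for pct in high_fidelity_pcts[-2:])
    return last_two_ok and caregiver_feels_confident

# Example: 82% and 86% high fidelity in the last two sessions, plus
# reported confidence -> move to the next strategy.
print(met_mastery_criteria([55.0, 82.0, 86.0], caregiver_feels_confident=True))  # True
```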

Data Collection and Analysis

Five different data sources were used to assess the social validity of the i-PiCS program implemented by the EI service provider and are identified in Table 1.

Interviews

Before and after the intervention, graduate students who did not otherwise directly interact with the participants conducted semi-structured interviews with both Joan and Ashley to investigate their opinions regarding the goals, procedures, and outcomes of the program. All four interviews were conducted via telepractice or phone call, and each lasted approximately 20 min. All interviews were video- or audio-recorded and transcribed by members of the research team. The data analysis process included the development of codes and the grouping of codes into Wolf’s (1978) framework (i.e., goals, procedures, and outcomes; Miles et al. 2014) by the second and fifth authors. The coders independently read the interview transcripts several times and conducted open coding of the transcripts line by line, identifying potential codes (Strauss and Corbin 1988). The coders then met multiple times to discuss the potential codes and came to an agreement on the code list. They then recoded the interviews with the finalized code list, compared their code applications, and, when they differed, reached consensus about final code applications. After all the interview data were coded, we categorized the codes under Wolf’s (1978) framework.

Questionnaires

The EI service provider and the caregiver completed pre- and postintervention questionnaires via Google Forms. The preintervention questionnaire included questions related to their demographic information and their self-reported knowledge of and experience with the targeted communication strategies. The postintervention questionnaire included 20 multiple-choice questions and two four-point Likert-type scale items that focused on their confidence in using the target communication strategies and their satisfaction with the program components (e.g., online training, coaching) and outcomes (questionnaires available from the first author).

Quiz

Before and after the online training, Ashley completed quizzes to check her understanding of the targeted communication strategies (i.e., environmental arrangement, modeling, mand-model, and time delay). The quiz included 34 multiple-choice items. The total number of correct items was calculated, and pre- and postscores were compared.

Observational Behavior Data

Employing data-collection processes used in single-case research (Kazdin 2011), we operationally defined behaviors of the service provider and caregiver and systematically coded these behaviors in video recordings. In addition, Joan collected behavioral data on Ashley and we included these data in our analysis.

EI Service Provider-Coded Data

Joan observed Ashley as she interacted with Hayden and coded Ashley’s fidelity in implementing the communication strategies. Fidelity was scored dichotomously as high versus low. For instance, to earn a high-fidelity score on modeling, Ashley needed to (a) establish joint attention with Hayden, (b) model the word and/or gesture, (c) wait 2–3 s for Hayden’s response, and (d) provide appropriate verbal feedback or repeat the model depending on his response. If Ashley missed one or more steps, the strategy use was coded as low fidelity. More detailed operational definitions and examples of each strategy can be found in Meadan et al. (2016). For the baseline, posttraining, and maintenance phases, Joan reviewed recorded videos of parent–child interactions and coded all occurrences and the fidelity of modeling, mand-model, and time delay strategy use. During the coaching phase, Joan coded live while observing parent–child interactions and coded only the occurrences and fidelity of the targeted strategy discussed in that session (e.g., during coaching on modeling, only modeling was coded). The coded occurrences were counted and divided by the duration of the observation to determine the rate of strategy use; the percentage of high-fidelity strategy use was also calculated. Joan’s observational coded data were graphed for her use during decision making.
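As a concrete illustration of these two calculations, the sketch below computes the rate of strategy use and the percentage of high-fidelity use for one observation session. The function and variable names are ours, not the study’s, and the example values are hypothetical.

```python
def strategy_use_metrics(high_fidelity_count, low_fidelity_count, duration_min):
    """Rate of strategy use per minute and percentage of high-fidelity use."""
    total = high_fidelity_count + low_fidelity_count
    rate_per_min = total / duration_min
    pct_high_fidelity = (high_fidelity_count / total * 100) if total else 0.0
    return rate_per_min, pct_high_fidelity

# Example: 12 high-fidelity and 4 low-fidelity uses in a 10-min observation
# -> 1.6 uses per minute, 75% of uses at high fidelity.
print(strategy_use_metrics(12, 4, 10))  # (1.6, 75.0)
```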

Researcher-Coded Data

All recorded sessions with Joan, Ashley, and Hayden across all study phases (i.e., baseline, posttraining, coaching, and maintenance) were coded by the research team.

Strategy Use

The fourth author, who did not directly interact with the participants, acted as the primary observer. She coded occurrences, strategy type (i.e., modeling, mand-model, time delay), and the fidelity of strategy use in all recorded videos. The first author acted as a secondary (i.e., reliability) observer and coded 45% of the videos to calculate interobserver agreement (IOA; at least 30% of each phase). The observers followed the same procedures outlined in Meadan et al. (2016). The IOA for the type of strategy was 95% (range 88–100%), and the IOA for the fidelity of strategy use was 86% (range 59–100%).

Coaching Fidelity

The fidelity of Joan’s coaching practices was coded during each coaching session using a checklist developed by the researchers that included the 10 steps of the i-PiCS coaching process (fidelity checklist available from the first author). Coaching fidelity was calculated as the number of steps Joan completed, divided by 10 (i.e., the total number of steps), and multiplied by 100. Two special education graduate students who did not directly interact with participants acted as observers. The primary observer coded all sessions during the coaching phase, and the secondary observer coded 50% of randomly selected coaching sessions. Codes were considered to be in agreement when both observers coded a specific step in the same way. IOA was calculated as the number of agreements divided by the number of agreements plus disagreements, multiplied by 100. The IOA for Joan’s coaching was 98% (range 90–100%).
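The two percentages described in this section follow simple formulas; the sketch below shows both. The function names are hypothetical, and the example values were chosen to mirror the kinds of scores reported here.

```python
def coaching_fidelity(steps_completed, total_steps=10):
    """Percentage of the 10 i-PiCS coaching steps completed in a session."""
    return steps_completed / total_steps * 100

def interobserver_agreement(agreements, disagreements):
    """Point-by-point IOA: agreements / (agreements + disagreements) * 100."""
    return agreements / (agreements + disagreements) * 100

# Example: 9 of 10 coaching steps completed -> 90% fidelity;
# observers agreeing on 49 of 50 coded steps -> 98% IOA.
print(coaching_fidelity(9))            # 90.0
print(interobserver_agreement(49, 1))  # 98.0
```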

Video Observations Notes

In addition to coding coaching fidelity, two research team members, the first and fifth authors, watched each coaching session and independently wrote observation notes about Joan and Ashley’s conversations. Prior to starting observations, the two observers discussed the components to observe and decided to focus on Joan and Ashley’s communication and interactions related to (a) the overall project, (b) coaching procedures, (c) communication strategy use, and (d) telepractice. To promote credibility and trustworthiness, the two observers met to compare their notes, check agreement, and discuss disagreements to establish consensus for their observations. They then organized the consensus observation notes into Wolf’s (1978) framework.

Results

We evaluated the social validity of the goals, procedures, and outcomes of the i-PiCS program delivered by an EI service provider via telepractice. We considered multiple data sources to answer the questions related to each aspect of social validity (see Table 1). Table 2 summarizes the results by sub-question.

Table 2 Summary of the social validity assessment results by sub-questions

Social Validity: Goals

Does the EI Service Provider Perceive Training and Coaching the Caregiver, via Telepractice, as Needed and Important?

During her preintervention interview, Joan reported that she did not have experience providing services to families via telepractice; prior to the study, she had only provided services in person. She usually met each family at least four times a month, twice in the family’s home and twice in the EI center, and typically drove 50 to 100 miles per week to provide these services. However, Joan stated she was eager to learn to train and coach caregivers via telepractice, and she reported that she felt comfortable using the technology related to telepractice prior to her participation in this study. In her preintervention interview, Joan emphasized that she believed her role as an EI service provider was to support parents to work with their children and to make them feel successful and empowered. Joan described that, prior to the study, she modeled for parents how to interact with their children and provided feedback when the parents interacted with their children.

Does the Caregiver Perceive Using EBPs with Her Child as Needed and Important?

Ashley reported on the preintervention questionnaire that her knowledge of communication strategies was somewhat high (4 on a 5-point scale) and that she was confident (3 on a 5-point scale) in using communication strategies with her child. During the preintervention interview, she reported that she was struggling to communicate with Hayden and had difficulty understanding what he wanted. Additionally, she shared that this difficulty understanding Hayden often led him to exhibit challenging behaviors (e.g., screaming, crying). Ashley reported that she hoped to learn strategies that could help Hayden express his needs and lessen his frustration.

We also considered the need for Ashley’s use of the i-PiCS strategies by examining graphed behavioral data collected from video-recorded interactions between her and Hayden during baseline. We represent Joan’s and the researchers’ coded data in graphs that follow the conventions of a single-case multiple-baseline-across-strategies design (see bar graphs in Fig. 2). Each bar shows the total rate and the proportion of high- versus low-fidelity caregiver strategy use in each session. According to Joan’s coded baseline data (see Fig. 2, left graph), the rate of Ashley’s use of all three strategies was low with minimal variability (i.e., fewer than two times per min) and without any high-fidelity strategy use. Based on the researcher-coded data (see Fig. 2, right graph), the levels of modeling and mand-model use were slightly higher with some variability, and time delay rarely occurred. The proportion of strategies used with high fidelity was zero or very low. These baseline data aligned with what Ashley reported in her preintervention interview, in which she explained that she provided choices to Hayden and waited for him to respond (i.e., mand-model) but did not mention any use of modeling or time delay.

Fig. 2 Ashley’s performance from Joan-coded (left) and researcher-coded (right) data. Each bar indicates the number of strategies used per min in one session. Within each bar, the black area indicates high-fidelity strategy use and the white area indicates low-fidelity strategy use. The line graph (right) indicates Joan’s coaching fidelity score. During coaching, Joan collected data only on the targeted communication strategies; Joan’s coaching fidelity was coded only during the coaching sessions

Social Validity: Procedures

Do the EI Service Provider and the Caregiver Perceive Training and Coaching Procedures via Telepractice as Feasible?

Telepractice Service Delivery

For the online training, both Joan and Ashley had access to Compass 2g to complete the five online training modules on the i-PiCS communication strategies. Ashley completed these modules over a long period of time (the posttraining phase lasted approximately 12 weeks), although the research team had originally designed them to be completed within 3–4 h over 2 weeks. In the postintervention interview, Ashley reported that the training took more time than she had originally expected and that there were many requirements that delayed her from moving on to the next step (e.g., watching the online module and recording posttraining videos with Hayden). Joan expressed the same concern in her postintervention interview, noting Ashley’s delay in completing the online training. Joan reported that it would have been faster if she could have controlled or monitored the pace for Ashley. For example, she felt it would have been beneficial if she could have set up a videoconference meeting with Ashley to watch the modules together instead of having them be self-paced.

During Joan’s postintervention interview, she reported that other aspects of the telepractice service delivery procedure were acceptable, including the use of Box® and the program’s website. However, Joan also mentioned that, at the beginning, learning how to use Polycom for videoconferencing and video recording was challenging for her and Ashley. She expressed how the software’s unexpected technical errors (e.g., difficulties getting into a virtual meeting room or recording a session) caused Polycom to be a “downfall” of the program. Joan’s comments during her postintervention interview aligned with the researchers’ video observation notes. For example, in notes dated 5/3 and 8/25, Joan expressed confusion regarding stopping the recording of the session. At the end of the session on 5/17, while Joan was finishing feedback, she unintentionally disconnected from Ashley. Also, on 6/5, during the video observation, the videoconferencing screen froze for a while due to an unstable internet connection.

i-PiCS Training and Coaching Components

Both Joan and Ashley reported that the training procedures were feasible and easy to follow, and Joan reported that the coaching procedures were feasible to deliver. On her postintervention questionnaire, Joan indicated only some confidence in implementing the modeling and time delay strategies following the online training modules but felt confident in all four strategies following coaching from a member of the research team. Joan also reported that she felt more confident in coaching a family to use all target strategies after receiving the training and coaching from the research team. Joan’s report aligns with the coaching fidelity scores across her coaching sessions: her score was 93.8% on average (range 83–100%), which means she implemented the coaching procedures with high fidelity (see Fig. 2, right graph, line). In the postintervention interview, Joan indicated that the coaching procedures related to collaborative goal setting and the strategy resources (i.e., flowcharts and step-by-step breakdowns of the strategies) were the most helpful parts of the i-PiCS program. Joan described that the coaching provided more opportunities for Ashley to use the strategies and to successfully support her son’s communication development on her own, as well as more frequent opportunities for Joan to provide feedback and encouragement to Ashley. She also stated that the program improved her communication with Ashley because it provided each of them with common terminology related to the strategies they used with Hayden.

Does the Caregiver Perceive Strategies as Easy to Learn and Feasible to Implement in the Natural Environment?

In her postintervention questionnaire, Ashley indicated that, overall, she was satisfied with learning the naturalistic communication strategies. She indicated that she was proficient in implementing the EBPs and that she enjoyed using them. Ashley also indicated that it was easy to incorporate the EBPs into her daily home routines. These questionnaire responses aligned with Ashley’s postintervention interview comments, in which she explained that the naturalistic communication strategies were easy techniques to use. Ashley did mention that, at first, she felt that using the strategies (e.g., mand-model) was an unnatural way to communicate with Hayden and that she had some difficulties implementing them. However, with Joan’s encouragement and guidance during coaching, Ashley said that she could now use the strategies correctly and had discovered that the strategies improved the communicative interactions between her and Hayden (e.g., “…he finally had some language and understood how powerful his using language was to get him things that he needs and wants.”). Her challenges learning how to use the strategies were also reflected in the video observation notes. She expressed her discomfort in using mand-model during her mand-model coaching sessions (video observation notes from 6/5 and 7/6), and, because of this, Joan provided an additional mand-model coaching session even though Ashley had met the performance criteria. According to the video observation notes from the mand-model coaching sessions, Ashley rarely used mand-model with Hayden in their daily routine, which made her feel initial discomfort with this strategy compared to the others (video observation note from 7/6). As Joan provided both supportive and corrective feedback on Ashley’s use of mand-model, Ashley eventually felt that the strategy became more natural to use and improved her interactions with Hayden (video observation note from 8/18). In the postintervention questionnaire, Ashley rated the information from the online training as useful and the information provided during coaching with Joan as very useful (both on a scale from not useful to very useful). These responses aligned with her comments during the postintervention interview, in which Ashley stated that the content of the training was helpful but the most effective component of the program was the coaching from Joan. She described receiving feedback and encouragement as very effective aspects of the program.

Social Validity: Outcomes

Does the EI Service Provider Successfully Coach the Caregiver via Telepractice?

EI Service Provider’s Fidelity of Coaching

During coaching sessions, Joan implemented the coaching procedures with 83% fidelity or greater, which was considered high-fidelity implementation (see line graph on right in Fig. 2). However, because we did not collect coaching fidelity data prior to the intervention phase, we cannot attribute this high fidelity solely and directly to the i-PiCS program.

EI Service Provider’s Decision Making

Joan made appropriate decisions during coaching, moving on to the next strategy (i.e., modeling to mand-model; mand-model to time delay) only after evidence of Ashley’s mastery of the current one. Based on Joan’s coded data, Ashley exceeded 80% high-fidelity strategy use in her second coaching session on modeling and again in her third modeling coaching session. Additionally, Ashley stated that she felt confident using the modeling strategy during her third coaching session (video observation note from 5/25). Because Ashley said that using mand-model seemed unnatural to her (video observation notes from 6/5 and 7/6), Joan appropriately provided a fourth coaching session on mand-model before transitioning to time delay, even though Ashley had used the strategy with high fidelity on over 80% of her attempts for two consecutive sessions; this ensured that Ashley felt comfortable using the mand-model strategy. Once Ashley used the time delay strategy with high fidelity on over 80% of attempts in two consecutive sessions and expressed comfort using the strategy (video observation note from 8/25), Joan moved on to the maintenance phase. Thus, all of Joan’s decisions were aligned with the recommendations given in the i-PiCS program.

Does the Caregiver Learn EBPs and Use Them Correctly with Her Child in the Natural Environment?

Overall, Ashley’s strategy use improved from the baseline to maintenance phases, increasing in both rate and high-fidelity strategy use. She also reported that she felt more confident using the strategies compared to the beginning of the study. In addition, her quiz score improved from 71 to 91 (out of 100) after online training. However, the observational behavioral data coded by Joan and by the researchers diverged in some ways. Based on the researcher-coded data, Ashley’s skills required additional practice to be considered at mastery level (see bar graph on right in Fig. 2). Ashley’s high-fidelity strategy use during the coaching phase, as coded by the researchers (from video recordings of the sessions; Joan recorded data live during the sessions), was not consistent enough to represent mastery because it varied from 53 to 84% rather than consistently reaching 80% or more. High-fidelity strategy use for the modeling and mand-model strategies also rarely exceeded 80% after Joan completed coaching and moved on to the next strategy. However, compared to the baseline and posttraining phases, the rate of strategy use during the coaching phase increased either gradually or immediately, and, when vertically analyzing the data, the increase in the rate of strategy use occurred only after coaching began for each strategy. In addition, compared to baseline, the level (i.e., proportion) of Ashley’s high-fidelity strategy use did increase during the maintenance phase based on the researcher-coded data. Based on the researchers’ assessment of Ashley’s rate and fidelity together, she tended to demonstrate higher proportions of high-fidelity strategy use when using the strategies less frequently (i.e., in the maintenance phase); in other words, she tended to use strategies with lower fidelity when she used them more often during a session.

Are the EI Service Provider and Caregiver Satisfied with the Outcomes of the Program?

In the postintervention interviews, Joan and Ashley both indicated that the i-PiCS program resulted in the achievement of their goals for Hayden and in increased knowledge of communication strategies for themselves. These comments aligned with their responses on the postintervention questionnaire, where Joan reported that she would participate in the program again and would recommend it to other service providers. Ashley indicated that the naturalistic communication strategies she learned were very useful (on a scale from not useful to very useful) in helping meet her child’s goals and that she was extremely satisfied (on a scale from not at all satisfied to extremely satisfied) with the project’s overall outcomes for her child. During the postintervention interviews, Ashley and Joan both mentioned that they felt Hayden’s expressive communication skills had improved, that he knew how to get what he wanted, and that he was broadening the ways in which he did so. They felt the strategies were very effective in developing Hayden’s communication skills. Ashley also described the i-PiCS program as empowering, stating that it built her confidence in communicating effectively with her child:

I had felt a lot more confident with Hayden and I still do. Now, knowing like, if he and I are understanding each other, or if he’s not able to communicate, I can see what the gap is usually now, and I can figure out what I need to do. So that makes me a lot more confident.

She stated that sessions during the program were different from past therapy sessions, in which therapy occurred only when the therapist was physically present in her home. She stated that during the i-PiCS program, the training and coaching made her more inclined to use the strategies at other times, and that the strategies felt more natural to use in their daily routines. Finally, she reported that she had to invest a lot of time in the program at first and that the training was initially overwhelming but that, in the end, participating in the i-PiCS program was “absolutely worth the amount of time.”

Discussion

The primary aim of this study was to assess, using multiple sources to answer specific sub-questions about all aspects of social validity (see Table 1), whether a training and coaching program that was provided to a caregiver by an EI service provider via telepractice was socially valid. The service provider received support from the research team via telepractice; the caregiver completed online self-paced, self-directed training modules, was coached by the EI service provider via telepractice, and then, in turn, implemented the EBPs with her son. In most aspects, both the EI service provider, Joan, and the caregiver, Ashley, reported that the program’s goals, procedures, and outcomes were socially important. In addition, the data from the observations and pre-/posttraining quizzes demonstrated an increase in the caregiver’s knowledge and accurate use (rate and fidelity) of the targeted strategies. The service provider successfully used telepractice to support the caregiver from a distance, and the caregiver successfully learned new strategies and used them with fidelity with her son. These findings are similar to those of other researchers who reported on the observed effects of parent-implemented interventions (e.g., Kaiser and Roberts 2013; McDuffie et al. 2013; Meadan et al. 2016) and on the use of telepractice to provide services from a distance (e.g., Ferguson et al. 2019; Meadan et al. 2019). The current study extends the literature by (a) using multiple sources of data to assess all aspects (goals, procedures, and outcomes) of the social validity of an intervention, (b) implemented by natural change agents in the natural environment, (c) via telepractice.

The cascading model used in this program, in which researchers train and coach professionals who, in turn, train and coach caregivers to use EBPs with their children, is a promising model (Biggs and Meadan 2018; Meadan et al. 2019). In this study, the EI service provider, the natural change agent for supporting caregivers, was trained and coached on (a) how to coach a caregiver, the natural change agent for supporting a child, via telepractice to implement EBPs and (b) how to collect data and make decisions based on the fidelity with which the caregiver used the target strategies. Although there were differences between the data coded by the provider and by the researchers, which could be considered a limitation of the study, there was a clear increase during coaching in the caregiver’s rate and fidelity of using the target strategies, and the EI service provider successfully supported the caregiver and used the coaching practices with high fidelity.

One key consideration for closing the research-to-practice gap is identifying for whom researchers are the natural change agents. Researchers are rarely, if ever, the natural change agents for children or their families, but researchers readily serve as the natural change agents for professionals. Researchers already play this role through teacher/service provider preparation programs and graduate programs at their universities. When researchers direct their efforts toward empowering EI service providers to teach EBPs to caregivers on their caseload and to make data-based decisions about the services they provide to families, we can expect an increase in the dissemination of EBPs to families. This study provided initial evidence that a cascading model of disseminating research can be socially valid, a promising finding given the existing evidence that similar programs are often effective (Meadan et al. 2016, 2019).

Although, overall, both the EI service provider and caregiver perceived the program as socially valid, they reported some difficulties and had suggestions for improvement of some features of the program. The use of multiple sources for the in-depth social validity assessment allowed for the identification and exploration of these difficulties. For example, the self-paced, self-directed training modules took the caregiver much longer to complete than expected and the EI service provider would have liked to have had more control over the pace at which the caregiver completed these modules. In addition, both the EI service provider and the caregiver had some challenges with the technology used in the project and suggested having more training available on the use of the technology and/or considering other technologies that are more user friendly and familiar (e.g., using a different platform for videoconferencing).

One of the purposes of assessing the social validity of a program is to determine the potential for acceptance and maintenance after the completion of the intervention (Wolf 1978; Kennedy 2002). If participants think that a program can easily be integrated into their daily routines (i.e., feasibility), that it is acceptable, and that it benefits them, they are more likely to continue implementing the skills or strategies they learned over time, for as long as those strategies prove useful. In addition, if the social validity assessment reveals that elements of the program are not socially valid, researchers have the opportunity to modify or improve the program (Schwartz and Baer 1991). Moreover, assessing social validity can help predict the possibility of scaling up the program to a larger population (Ledford et al. 2016). From this multiple-source social validity assessment, we obtained promising data that the EI service provider successfully integrated both telepractice and caregiver training and coaching into her regular services and that the caregiver successfully learned the targeted EBPs. We also found that some aspects of the program’s procedures had issues (e.g., technology barriers and the time required to complete online training) that can be modified and improved for future applications. We expect these findings and considerations will help researchers improve the i-PiCS program, making it more acceptable to the relevant stakeholders in EI and increasing its potential to scale up beyond a single provider–caregiver–child triad. In addition, we hope that other researchers will consider using multiple sources to assess all aspects of the social validity of other interventions to obtain a deeper understanding of the social importance of their goals, procedures, and outcomes.

Limitations and Implications

There are several limitations to this study that should be considered when interpreting the findings. First, the study included only one EI service provider and one family; therefore, these findings are most usefully considered through the lens of transferability, or the ways in which what was learned (e.g., the social validity of telepractice) and how it was learned (e.g., a multiple-source social validity assessment) might transfer to or inform other applications of the same or similar programs (Miles et al. 2014). In addition, because the EI service provider made decisions about coaching based on data she collected live during coaching sessions, we did not attempt to establish experimental control to determine whether a functional relation existed between the i-PiCS program implemented by the EI service provider and improvement in the caregiver’s use of evidence-based communication strategies. Because the goal was to evaluate the social validity of the program when implemented by the natural change agent via telepractice, the findings related to the social validity of the outcomes reported here are based on adults’ observed behavior and reported perceptions only, not on an experimental demonstration of effects. In addition, no data are reported on the child’s behavior. Finally, although this study represents an in-depth examination of social validity, there are limits to the scientific rigor of our design (e.g., we used multiple data sources accessed via various methods rather than a formal mixed-methods design). Future social validity assessments should apply existing methods to answer questions about the social validity of programs (Snodgrass et al. 2018).

In the future, researchers might consider recruiting participants from diverse backgrounds and using a mixed methodology that includes both a rigorous application of a single-case experimental design, to assess the effects of the program at each level of the cascading model, and one or more of many possible quantitative and/or qualitative methods, to assess participants’ perceptions of the social validity of the program’s goals, procedures, and outcomes.

Conclusion

Socially valid interventions are essential to closing the research-to-practice gap in EI, as is working directly with natural change agents. In this study, we used an in-depth, multiple-source social validity assessment to explore the social validity of (a) researchers helping an EI service provider integrate both telepractice and parent training and coaching in EBPs into her service delivery using the i-PiCS program; (b) the EI service provider then serving as the natural change agent, via telepractice, for helping a caregiver learn to use EBPs with her young child; and (c) the caregiver then serving as the natural change agent for helping her young child develop stronger communication skills. We found that this program was socially valid in many ways, particularly as it relates to the program’s goals and outcomes for both the EI service provider and the caregiver. We also found that adjustments to the program’s procedures are likely to further improve its social validity. Future researchers should (a) assess the social validity of the telepractice program after these adjustments have been made; (b) assess the social validity of the program with more EI service providers, caregivers, and children; and (c) apply existing research methods to answer questions about social validity.