In the USA, the reauthorization of the Individuals with Disabilities Education Act (IDEA, 2004) added language allowing states to adopt response to intervention (RTI). This new language specified that the purpose of RTI was to identify struggling students early, to provide them with evidence-based interventions, to closely monitor their progress, and to adapt or intensify interventions based on progress monitoring data (Fuchs, Fuchs, & Stecker, 2010). Providing intensive early intervention was intended to prevent many reading difficulties. Another intention was that RTI could replace the IQ achievement discrepancy approach to identifying students with reading disabilities such as dyslexia; that approach had become known as a wait-to-fail approach (Fuchs et al., 2010).

RTI is currently in use in every state within the USA (Zirkel & Thomas, 2010). The Institute of Education Sciences reviewed the existing research base for RTI and issued a practice guide (Gersten et al., 2008) detailing recommendations for implementation to support struggling readers, including students with dyslexia. These evidence-based recommendations include universal screening of all students, provision of explicit and systematic tier 1 reading instruction, and additional tiers of increasing intensity for students demonstrating inadequate response to instruction. Tier 2 supports should include intensive and systematic instruction delivered regularly in small groups, along with frequent progress monitoring. Tier 3 should include more intensive and individualized instruction for students who do not respond to tier 2 supports.

However, these evidence-based recommendations have not been consistently incorporated into procedural guidance for RTI implementation, which varies from state to state and often among districts within the same state (Jenkins, Schiller, Blackorby, Kalb Thayer, & Tilly, 2013; Mellard, McKnight, & Jordan, 2010; Mellard, McKnight, & Woods, 2009). Furthermore, recent legislation has introduced more variability; the Every Student Succeeds Act (ESSA, 2015) introduced the term “multi-tiered systems of support (MTSS),” which represents a broader constellation of supports than RTI. MTSS incorporates not only academic outcomes but also behavior and social-emotional learning. ESSA provided little additional procedural guidance or clarification for RTI or MTSS implementation, and emphasized that states and local education agencies may develop their own models.

Variable RTI implementation may be related to the mixed findings about the impact of RTI when implemented within typical school practice, despite converging findings of positive effects from researcher-implemented RTI. For example, positive findings were reported in one large-scale study involving 318 elementary schools in the state of Florida (Torgesen, 2007). The Florida RTI model followed evidence-based practices, and it included explicit and systematic tier 1 reading instruction, universal reading screening, evidence-based interventions, and ongoing progress monitoring. Torgesen reported significantly fewer students were identified with reading disabilities after schools began implementing RTI. By contrast, less encouraging findings about RTI implementation were reported in another large-scale study examining data from 146 schools across the USA (Balu et al., 2015). The researchers collected data from 20,450 first- through third-grade students and used a regression discontinuity design, which examined whether students who qualified for tier 2 or tier 3 intervention (by testing just below a cut score on a benchmark screener) scored higher than students performing just above the cut score (who therefore did not qualify for interventions). Findings indicated that second- and third-grade students who qualified for tier 2 and tier 3 interventions showed outcomes that were not statistically different from those of students who did not qualify. At first grade, students who qualified for interventions performed statistically worse than their peers. However, researchers have cautioned against interpreting Balu et al.’s (2015) findings to mean that RTI does not work, because many schools did not consistently implement RTI using evidence-based practices (e.g., Fuchs & Fuchs, 2017; Gersten, Jayanthi, & Dimino, 2017).

The success of RTI is largely dependent on elementary teachers’ knowledge about RTI implementation, because these teachers are the first line of defense against reading difficulties. For students who are struggling, or who have reading disabilities, including dyslexia, teachers need to know (1) how to identify who needs help, (2) what help to provide, and (3) how to access the resources within their school and district. A primary aim of our study is to describe preliminary findings from a survey of elementary teachers’ knowledge about RTI implementation within their own school settings, along with an analysis of the factors derived from the results. First, we describe the conceptual and empirical framework for our study, based on the existing research that has surveyed teacher knowledge of RTI; given the limitations of this research base, we also describe surveys of teacher knowledge about reading instruction. Then, we highlight the limited psychometric work that has been conducted to date.

Surveying teachers’ knowledge about RTI

In order for RTI to be successful for students with reading disabilities and dyslexia, teachers need knowledge about how to use data to identify students’ level of performance relative to either peers or benchmark assessments and how to develop instructional plans related to their relative strengths and weaknesses. To date, a fairly limited number of studies have surveyed various aspects of teachers’ knowledge about RTI (i.e., how to identify who needs help, what interventions are needed, and what resources are within the school). However, findings across previous research have been very consistent. Studies have revealed that teachers often report that they understand broadly what RTI is, how to administer assessments, and how to locate data (Barnes & Burchard, 2011; Spear-Swerling & Cheesman, 2012; Wilcox, Murakami-Ramalho, & Urick, 2013). However, teachers also report having little knowledge of what to do with that information to make instructional decisions to help their students, particularly those who have or are at risk for developing reading disabilities (Means, Padilla, DeBarger, & Bakia, 2009; Spear-Swerling & Cheesman, 2012; Vujnovic et al., 2014; Wilcox et al., 2013). For example, the National Study of Education Data Systems and Decision Making (Means et al., 2009), which surveyed 2509 teachers about their use of and ability to access data, found that 81% of teachers knew where to access student data, but fewer than half could interpret the data to change instruction. A few years later, Spear-Swerling and Cheesman (2012) developed a 66-item Teacher Knowledge Survey for RTI to assess teacher familiarity with and knowledge of foundational reading skills, identification and use of assessments, and knowledge about RTI. A total of 142 preservice and in-service teachers participated; the authors reported an adequate Cronbach’s alpha of 0.88. Findings revealed that 85% of the teachers had basic knowledge of RTI, but fewer than one third reported having the knowledge to use the data from assessments to adjust instruction. Moreover, a majority could not identify tier 2 and tier 3 reading programs, and they had little general knowledge about evidence-based reading practices such as the findings of the National Reading Panel (2000). Similarly, Vujnovic et al. (2014) developed and tested the 30-item Ready for Response to Intervention Questionnaire, which they administered to 308 general educators, special educators, and school psychologists. The items addressed understanding of and ability to apply principles of RTI. Reliability data were reported only for questions related to how teachers rated their knowledge, not for how they applied that knowledge; internal consistency for that section was reported as high, with an alpha of 0.97. The researchers found that less than half of the respondents could identify which tier of instruction would be most beneficial for students given a range of scenarios of students with varied reading abilities.

Findings from a qualitative study described teachers’ mixed perceptions about their self-efficacy for implementing RTI (Wilcox et al., 2013). The researchers conducted focus groups, interviews, and questionnaires with 81 elementary teachers across two states. No reliability data were available for the instruments. Findings suggested that teachers consistently believed themselves to be capable of assessing and identifying students in need of support; however, teachers reported less confidence in their knowledge about providing intensive interventions for these students.

To summarize, within this emerging database, converging findings indicate that teachers need additional support and professional development. It is important to interpret these findings in light of the variability in RTI implementation and the lack of procedural guidance about RTI. Further, some researchers provided evidence that their survey instruments had adequate reliability (Spear-Swerling & Cheesman, 2012; Vujnovic et al., 2014), while others did not provide reliability information (Wilcox et al., 2013).

Surveying knowledge about teaching reading

A relatively larger research base involves surveys to describe teacher knowledge about reading instruction. To provide support within an RTI model, teachers also need knowledge about how to deliver early, targeted, and systematic instruction in the foundational reading areas of vocabulary, phonics, phonemic awareness, fluency, and reading comprehension, which have been shown to improve the literacy skills of struggling readers (National Reading Panel, 2000). However, converging findings from research surveying teachers have shown that teachers often lack basic knowledge of how to teach foundational reading skills (e.g., Bos, Mather, Dickson, Podhajski, & Chard, 2001; Cunningham, Perry, Stanovich, & Stanovich, 2004; Moats & Foorman, 2003; Piasta, Connor, Fishman, & Morrison, 2009; Binks-Cantrell, Joshi, & Washburn, 2012). An early survey, the Informal Survey of Linguistic Knowledge developed by Moats (1994), has been foundational to many of the subsequent teacher knowledge surveys. This 65-item survey focused mainly on teachers’ skills in spelling, terminology, phonology, and morphology. Results from her study, involving 89 participants, revealed that on average, novice and experienced teachers had poor knowledge about these key foundational reading and writing concepts (Moats, 1994). However, reliability data were not reported.

Since then, several studies have been conducted showing converging results about the knowledge teachers have about teaching reading, with particular relevance to struggling readers and students with dyslexia (e.g., Bos et al., 2001; Moats & Foorman, 2003; Piasta et al., 2009; Carlisle, Kelcey, Rowan, & Phelps, 2011; Binks-Cantrell et al., 2012). For example, following publication of the National Reading Panel Report (2000), Bos et al. (2001) conducted a study to learn more about teachers’ knowledge about explicit teaching of reading. Bos and colleagues developed two measures: a 15-item survey, the Teacher Attitudes of Early Reading and Spelling (α ranging from 0.50 to 0.70), and the 20-item Teacher Knowledge assessment (α = 0.60) with components modeled after the Moats (1994) survey. In the study, a total of 538 general and special education preservice and in-service teachers answered Likert-scaled questions about their perception of important reading skills and knowledge of foundational reading concepts. The researchers found that while special educators generally had higher knowledge than general educators, overall, all teachers incorrectly responded to nearly half of the questions regarding language structure.

A decade later, during Reading First, Carlisle et al. (2011) developed the Teacher Knowledge of Reading and Reading Practices survey, which included some direct assessment of teachers’ knowledge for teaching not only foundational linguistic skills but also reading comprehension. The survey was administered to over 1100 teachers who had received professional development during Reading First initiatives, and the data analysis explored the association between teachers’ level of knowledge and their students’ reading achievement. Carlisle and colleagues reported a coefficient alpha of 0.76 and an IRT reliability of 0.76. They found that first-grade students whose teachers were more knowledgeable had higher reading achievement. The researchers also found that the knowledge needed for teaching reading comprehension correlated with knowledge about foundational linguistic skills.

Across these studies, researchers have used surveys to assess teachers’ knowledge of the important elements of reading instruction (e.g., NRP, 2000). Findings consistently have shown that teachers lack necessary knowledge in foundational reading skills and the pedagogy to be able to teach reading effectively to prevent reading problems and to provide structured literacy interventions for students with reading disabilities, including dyslexia. Fortunately, several studies have demonstrated that improving teacher knowledge led to significantly stronger reading outcomes for students (Bos, Mather, Narr, & Babur, 1999; Foorman & Moats, 2004; McCutchen, Abbott, et al., 2002; McCutchen, Harry, et al., 2002; Moats & Foorman, 2003; Spear-Swerling & Brucker, 2003, 2004). Bos et al. (1999) found that students whose teachers received specific professional development in early literacy improved their reading skills more than students whose teachers did not receive the professional development. Similarly, McCutchen, Harry, et al. (2002) found in a randomized control trial that students whose teachers were provided intensive training in foundational reading skills demonstrated higher reading gains at the end of the year than students of teachers in the control group.

Factor analytic work for teacher surveys

Further research is needed on the psychometric properties of surveys of RTI, particularly regarding factor structure, as this would increase our understanding of the constructs related to teacher knowledge. Overall, researchers have modified existing survey instruments by adding questions or have developed original survey tools to measure teacher knowledge constructs of interest; however, there has been limited reporting of the psychometric features of the surveys or of the factors related to the constructs of interest. Some notable exceptions include two studies describing knowledge of reading instruction (Binks-Cantrell, Joshi, & Washburn, 2012; Phelps & Schilling, 2004) and one focusing on RTI knowledge (Barnes & Burchard, 2011) in which researchers detailed how each question type loaded onto specific factors identified as fundamental to the intended teacher knowledge construct.

In one study, Binks-Cantrell, Joshi, & Washburn (2011) developed a 46-item multiple-choice survey aimed at assessing teachers’ knowledge and skill in various language constructs important for students with dyslexia, including morphology, phonics, phonemic awareness, and phonological awareness, and tested its reliability with 286 participants. They found that the survey had high reliability in estimating where teachers lacked knowledge of language concepts (α = 0.90). Similarly, Phelps and Schilling (2004) developed the Content Knowledge for Teaching Reading and administered it to over 1500 teachers. Through factor analysis, the researchers determined that content knowledge for teaching reading included multiple dimensions (α ranged from 0.67 to 0.82). In a third study, Barnes and Burchard (2011) reported the results of their 58-item Multi-Tiered Instruction Self-Efficacy Scale, in which they asked 184 teachers and educational leaders about needs in implementing various components of RTI. The researchers determined the survey to be reliable (α = 0.98) in measuring educational professionals’ needs in broad concepts of RTI/MTSS.

Although these three studies (Barnes & Burchard, 2011; Binks-Cantrell, Joshi, & Washburn, 2011; Phelps & Schilling, 2004) contribute to confidence that teacher knowledge surveys related to RTI and reading can be reliable, additional psychometric work, including evaluation of model fit, is needed. For example, these studies used Cronbach’s alpha; however, many researchers have argued that factor determinacy scores are a stronger statistic for measuring internal consistency (Lawley & Maxwell, 1963; Mulaik, 1972; Raykov & Marcoulides, 2012; Sijtsma, 2009). Accordingly, we used factor determinacy scores in the current study. In summary, teachers play an important role within RTI implementation, which affects reading outcomes for students with reading disabilities, including dyslexia. However, the existing research highlights that teachers need more support and professional development, particularly with regard to understanding how to use data to make decisions about appropriate evidence-based interventions and general knowledge of evidence-based practices in foundational early literacy instruction.

Purpose and research questions

The purpose of our exploratory study was to describe teachers’ knowledge about RTI with regard to their own ability to implement RTI and their understanding of how RTI is implemented within their schools. One aim was to measure teacher knowledge about RTI implementation reliably and validly. A related aim was to determine the primary factors within this knowledge base that could be targeted for professional development. Both aims relate to the context of a larger study we are conducting, funded by the Institute of Education Sciences, that will explore the relations among teachers’ knowledge of RTI, school interview data about RTI implementation, and students’ reading scores (blinded author grant funding no.). The larger project will not only describe these relations generally but will also focus on malleable factors to improve reading performance for struggling students.

In the current study, we addressed three main research questions: (1) What was the factor analytic structure of the survey? (2) What was the internal consistency of the factors identified through the exploratory factor analysis of the survey? (3) What was the range of teacher scores on these factors?

Method

Participants and settings

Nine elementary schools were recruited during the second year of a 4-year larger study examining the relationships among RTI knowledge, practice, and reading outcomes. The purpose of the larger project is to identify effective RTI practices. These schools were located in four states near universities in the southwest, midwest, and northeast. The student population at the participating schools varied; at four schools, a majority of students received free and reduced price lunch (FRPL), which is a proxy for the socioeconomic status of their families. In these four schools, the percentage of students receiving FRPL ranged from 72.4 to 94.1%, and a majority of students were Hispanic. In another four schools, the percentage of students receiving FRPL was lower, ranging from 0.0 to 20%, and the majority of students were White. In the remaining school, the percentage of students receiving FRPL was 37.8%, and the students were more diverse (36% African American, 5.4% Hispanic, and the remainder White).

We provided principals at each participating school the link to an online survey; principals provided the link to all teachers and staff at the school. Based upon data from school websites, we estimate that approximately 283 teachers and staff would have received this link. Of those 283 possible respondents, 139 teachers and staff consented and responded anonymously, for a response rate of 49.1%. One teacher did not finish the survey and was omitted from the data analysis. Teachers reported their years of experience (ranging from less than 1 year to more than 35 years, with 38% of teachers reporting having taught 5 years or less) and the grade level at which they were currently teaching (ranging from kindergarten to fifth grade). Respondents were distributed evenly across grade levels.

Survey and procedures

We developed our survey of teachers’ knowledge about RTI implementation by first adapting and extending items from surveys developed by Roberts and his colleagues, which, in turn, built upon existing survey items (Roberts, G., personal communication, June 2014; Bos et al., 2001; Edmonds, Roberts, & Vaughn, 2003; Oregon Reading First: Pilot surveys, 2003; Texas Reading First Baseline Surveys, 2004; Washington (RTI), 2011). We also incorporated some items from the ECLS-K teacher surveys (cf. Tourangeau, Lê, Nord, & Sorongon, 2009). We focused on items designed to capture teachers’ knowledge about RTI and their understanding of how RTI was implemented within their individual schools. The survey includes 52 items that teachers rate on a Likert scale (ranging from 1 = strongly disagree to 4 = strongly agree). For the convenience of our participants, the survey was administered using the online platform Survey Monkey. See Table 1 for examples and Appendix Table 7 for all survey items.

Table 1 Factors and examples

Data analytic methods

To address our research questions, we conducted an exploratory factor analysis (EFA) on the responses from the 138 teachers using Mplus (Muthén & Muthén, 1998). The goal of this analysis was to learn how many distinct factors emerged within our sample of participants and to evaluate the reliability and internal consistency of the items. According to Zwick and Velicer (1986), there are several procedures to determine the number of factors to retain. These include the eigenvalue > 1 rule (Kaiser, 1960), the scree test (Cattell, 1966), the minimum average partial correlation (Velicer, 1976), Bartlett’s chi square test (Bartlett, 1950), and parallel analysis (Horn, 1965). Of these methods, the most widely used is the eigenvalue > 1 rule; however, this method may overestimate the number of retainable factors (Cattell & Jaspers, 1967; Linn, 1968; Zwick & Velicer, 1986). The scree test is more accurate but may still overestimate the number of factors (Thompson & Daniel, 1996). Bartlett’s chi square test has proven the least consistent method (Thompson & Daniel, 1996), whereas parallel analysis appears to be the best method for estimating the number of factors, followed closely by the minimum average partial correlation (Zwick & Velicer, 1986).

We used parallel analysis to determine the number of factors in this survey. This method involved the following steps: (1) eigenvalues were extracted from the original data set; (2) random data sets parallel in structure to the original data set were simulated; (3) eigenvalues were extracted from the parallel data sets; (4) the 95th percentile of the distribution of eigenvalues from the parallel data sets was determined; (5) the observed eigenvalues were compared with the 95th percentile eigenvalues; and (6) a factor was retained if the observed eigenvalue from the original data set was greater than the 95th percentile eigenvalue from the parallel data sets (Horn, 1965; Glorfeld, 1995).

After determining the number of factors contained within the survey, we performed an EFA and examined the factor loadings in order to assign items to a corresponding factor. Researchers generally agree that a factor loading of 0.3 is the minimum used to determine whether an item belongs to a specific factor (e.g., Hair, Anderson, Tatham, & Black, 1998), so we used 0.3 as our threshold. From the EFA, we also considered estimates of model fit, including χ2, the comparative fit index (CFI) (Bentler, 1990), the standardized root mean square residual (SRMR), the root mean square error of approximation (RMSEA), and the Tucker–Lewis index (TLI) (Bollen, 1989; Jöreskog, 1993). According to Hu and Bentler (1999), acceptable model fit is supported by a CFI value greater than or equal to 0.90, an SRMR value less than or equal to 0.08, an RMSEA value less than or equal to 0.05, and a TLI value greater than or equal to 0.95.

Finally, instead of using the traditional Cronbach’s alpha, we calculated factor determinacy coefficients as a measure of internal consistency. The possible values range from 0 to 1, with larger values indicating better measurement of the factor by the observed items. A factor determinacy coefficient of ≥ 0.80 suggests that the observed items measure their respective factor well (i.e., the estimated factor scores correlate strongly with the factor), denoting high internal consistency (Lawley & Maxwell, 1963; Mulaik, 1972). We chose this method because factor determinacy scores are considered to be more accurate than Cronbach’s alpha, since Cronbach’s alpha is derived from the variances and covariances among the items rather than from the parameters of the factor model (Raykov & Marcoulides, 2012; Sijtsma, 2009).

Results

Research question 1: what was the factor analytic structure of the RTI survey?

When conducting the factor analysis, we performed a parallel analysis to determine the number of factors contained within the RTI Survey of Teacher Knowledge. According to the parallel analysis, the strongest solution initially indicated four distinct factors as seen in Table 2.

Table 2 Eigenvalues from parallel analysis

Following the guidance for factor analysis described in the data analytic methods section, four factors emerged. When examining factor 4, the eigenvalue from the original correlation matrix was larger than the eigenvalues from the parallel analysis; when examining factor 5, both the average and 95th percentile eigenvalues from the parallel analysis were larger than the eigenvalue from the original correlation matrix. Based on these results, we conducted a four-factor exploratory analysis to examine the individual items. The rotated factor loadings are shown in Table 3.

Table 3 Rotated factor loadings, four-factor EFA

Using the 0.3 threshold for minimum loading, we found 20 items for factor 1, 16 items for factor 2, 4 items for factor 3, and 12 items for factor 4. To label the constructs, we then examined the individual items through a theoretical lens, rank ordering the items by the strength of their relation to the factors and naming the factors accordingly. We noted that the four items within factor 3 appeared to relate to other factors but did not theoretically relate to each other: three items (questions 18, 19, and 37) appeared related to factor 1, and one item (question 38) appeared related to factor 4. Question 1 remained without a true factor; however, it loaded most strongly onto factor 1 (Teacher Knowledge about Tier 1 Implementation). We named the factors Teacher Knowledge about Tier 1 Implementation (factor 1, 23 items), Teacher Knowledge about Leadership and School Systems (factor 2, 16 items), and Teacher Knowledge about Data-Based Decision Making (factor 3, 13 items). We then conducted a three-factor EFA; the rotated factor loadings are shown in Table 4.

Table 4 Rotated factor loadings, three-factor EFA

Next, we examined the model fit for both the three- and four-factor EFAs in Table 5.

Table 5 Model fit indices for EFAs

The four-factor EFA fit somewhat better, with lower AIC, BIC, RMSEA, and SRMR and higher CFI and TLI; however, it still did not meet the optimal standards described in the data analytic methods section. The three-factor EFA was more consistent with our theoretical framework, and it was also more parsimonious.

Research question 2: what was the internal consistency of the EFA?

We used factor determinacy scores to analyze the internal consistency of the items within each factor, as shown in Table 6. All factor determinacy scores were above the 0.80 cutoff, which indicated good reliability.

Table 6 Factor determinacy scores for three-factor EFA

Research question 3: what was the range of teachers’ scores on these factors?

Possible responses for each item on the teacher survey ranged from 0 (the minimum, strongly disagree) to 4 (the maximum, strongly agree). We calculated composite scores for each of the three factors to examine the level of teachers’ knowledge on those specific factors. For factor 1 (Teacher Knowledge about Tier 1 Implementation), factor 2 (Teacher Knowledge about Leadership and School Systems), and factor 3 (Teacher Knowledge about Data-Based Decision Making), the composite scores were 3.08, 2.71, and 2.48, with standard deviations of 0.61, 0.86, and 0.91, respectively. These composite scores suggest that, on average, teachers agreed that they are knowledgeable about implementing tier 1 of RTI and that they are relatively knowledgeable about leadership and school systems, although with slightly more variability on the latter. On average, teachers reported being least knowledgeable about data-based decision making, and again, there was considerable variability in their answers. The full survey individual item means and standard deviations can be found in Appendix Table 8.

Discussion

Summary of findings and major implications

Our initial research question addressed the structure of the Survey of Teachers’ Knowledge about RTI Implementation. We addressed teachers’ knowledge about how to implement RTI and their understanding of how RTI was implemented within their schools. The three final factors from this factor analytic work were (1) Teacher Knowledge about Tier 1 Implementation, (2) Teacher Knowledge about Leadership and School Systems, and (3) Teacher Knowledge about Data-Based Decision Making. Items that loaded on the first factor, Teacher Knowledge about Tier 1 Implementation, included, for example, “This school has explicit goals for what we want students to achieve (learning outcomes) at each grade level” (q2) and “At this school, we know what the fidelity criteria are when instruction is observed” (q5). Items related to the second factor, Teacher Knowledge about Leadership and School Systems, included “Since implementing RTI, the number of referrals to special education has decreased” (q46) and “At this school, professional development related to RTI is coordinated” (q34). Items for the third factor, Teacher Knowledge about Data-Based Decision Making, included “Since implementing RTI, I have increased my expectations for my students’ academic performance” (q23) and “Since implementing RTI, at-risk students are identified early” (q29).

These findings add to the current limited research about the factor structure of surveys to rate teachers’ knowledge generally, or specifically about RTI and reading (Binks-Cantrell, Joshi, & Washburn, 2011; Phelps & Schilling, 2004; Barnes & Burchard, 2011). We modified items from two existing surveys, which Roberts and colleagues had shown were feasible for teachers to complete. To meet our research interests and further enhance the usability of the survey, we modified it by selecting the most relevant items, by adding additional items, and by creating an online version. An important implication is that it was feasible to use the online version, which can support collecting larger samples in future research related to training teachers to serve struggling readers, including students with dyslexia. An online version also increases the usability of the measure by school systems to assist them in making decisions about professional development.

In addition, to answer our second research question, we examined the internal consistency of the items when placed into the factors identified by the EFA. The analysis showed high internal consistency, with determinacy scores above 0.9 for each factor. Thus, we had no reason to remove any items at the present time. Practically, in terms of reliability and validity, it is important that our factors have high internal consistency. Strong internal consistency supports the coherence of the specific factors, which makes the results of the survey easier to interpret for teachers and other educators using the survey. An implication is that the survey can reliably measure teacher knowledge, which is foundational for future research to identify areas for professional development to increase teachers’ knowledge about RTI, and ultimately to provide stronger intervention for students with dyslexia and other reading disabilities.

Our third research question explored respondents’ knowledge on the three factors. We learned that there was considerable variability in teachers’ ratings of their knowledge; although the possible range was from 0 to 4, the standard deviations on the individual items ranged from 0.70 to 1.53, as seen in Appendix Table 8. Findings indicate that teachers agreed that they were knowledgeable about tier 1 implementation (M = 3.08; SD = 0.61) and that they were also relatively knowledgeable about leadership and school systems (M = 2.71; SD = 0.86), with slightly more variability. On average, they reported being least knowledgeable about data-based decision making (M = 2.48; SD = 0.91), and again, there was considerable variability in their answers. Notably, the composite scores showed less variability than did the individual item scores.

These findings add to and converge with findings from prior surveys indicating that teachers report broad knowledge about RTI but feel less prepared to make data-based instructional decisions (e.g., Barnes & Burchard, 2011; Spear-Swerling & Cheesman, 2012; Vujnovic et al., 2014; Wilcox et al., 2013). This finding is particularly problematic for students who have or are at risk for reading disabilities, and an implication is the need for further professional development. It is important to interpret these findings in light of the variability in RTI implementation and the lack of procedural guidance about RTI.

Confirmation of the three individual factors and teachers’ reports of their own knowledge can guide future professional development to target specific weaknesses. We hope to inform the efforts of others who are working to increase teacher knowledge about RTI in their schools. Because teachers often have little knowledge of what to do with information about RTI to make instructional decisions (USDOE, 2009; Spear-Swerling & Cheesman, 2012), the individual items and factors can provide a preliminary framework to help guide teachers and other instructional staff in making strong data-based decisions. It will also be easier for teachers to identify professional development opportunities to learn how to provide instruction to meet the needs of students with disabilities.

Limitations and directions for future research

The initial factor analysis from this pilot study shows promise; however, we highlight several limitations. First, some of the specific fit indices from the factor analysis (CFI, TLI, etc.) were not optimal. There are a number of potential explanations for this, but we believe a large contributor to these less than optimal fit indices was our relatively small sample. In addition, some items reflect teachers’ own knowledge about RTI whereas others reflect their knowledge about RTI implementation within their schools. Small sample sizes have been shown to result in poor model fit; future research is underway to replicate this preliminary analysis, using a confirmatory factor analysis (CFA), with a larger sample we have collected across a broader geographic area. We hypothesize that the factor loadings of the items will continue to show that the survey is a good fit for understanding teacher knowledge and that, with this larger sample size, model fit will improve. The CFA will also identify items that are redundant or not representative of a specific factor, which may allow us to shorten the survey to make it easier for teachers to complete in a timely manner.

Another limitation is inherent to survey research. We are reporting what teachers self-report; we have not measured their ability to use data or their ability to accurately identify or intervene with students with reading disabilities, including dyslexia. Additionally, we were unable to report the teachers’ demographic information; the survey has since been modified to include more demographic information about the teachers while still maintaining their anonymity. This information will include years of experience, specific dyslexia training, and training in special education, which will allow us to help schools identify sources of strength and needs for further training. Individual teachers could compare their own knowledge against a larger sample of teachers, which could help them identify professional development goals to improve their knowledge to support students with or at risk for dyslexia and other reading disabilities.

In addition, steps are already underway in a larger study to examine the correlations among these teacher knowledge factors, school-level RTI implementation factors, and student reading growth. We are compiling a national database of students and their achievement scores, so it will be informative to link teachers’ knowledge about RTI to how their students are performing. We hypothesize that there will be a strong positive correlation between teacher knowledge and students’ achievement scores. This examination will not only focus on students in tier 3, students receiving dyslexia services, and students receiving special education but will also examine the relations across a large and diverse sample.