The impact of a quality teacher on student achievement is well-documented (Morrison et al., 2005; Rupley, 2011). Indeed, a teacher’s central role in students’ achievement is generally accepted and has been studied for decades (Darling-Hammond & Young, 2002). For example, Jeanne Chall (1967) found that teachers had a greater influence on students’ attitudes toward reading than the instructional materials used to teach reading. The National Research Council also recognized the importance of a teacher in preventing reading difficulties (Snow et al., 1998). However, what makes a teacher highly qualified has been the subject of many educational debates (Rupley, 2011).

To help specify candidate indicators of teacher quality, Hill and her colleagues (2019) developed three broad categories: teacher preparation and experiences, teacher mindset and habits, and teacher knowledge. Teacher preparation and experiences are experiences that build skills in classroom practice and are commonly explored with measures such as years of teaching experience, degrees held, and licensure exam scores. Teacher mindset and habits consist of teacher attitudes, perceptions, and beliefs about instruction. Measures of these aspects of teachers are used as teacher-level variables in studies examining teachers’ practices and their influence on student performance (Fischer et al., 2018). The third category, teacher knowledge, is the knowledge teachers acquire through coursework or in-service learning that informs their instruction and includes measures of content competency and pedagogical knowledge. Yet to what extent do these commonly used indicators of teacher quality, and teacher knowledge in particular, impact student reading outcomes? The current study explored the relative contribution of teacher knowledge to student reading outcomes in kindergarten and first grade while controlling for some aspects of teacher experience and preparation and for student characteristics. Such research is especially important within the current educational landscape as more U.S. states continue to pass legislation establishing every child’s right to learn to read. Many of these laws highlight the importance of teacher knowledge of what is known as the science of reading and outline requirements for teacher preparation programs and ongoing professional development to support the development of that knowledge (Schwartz, 2022).

Teacher knowledge

An important area of exploration in teacher quality has been teacher knowledge. Shulman (1987) stated, “teaching necessarily begins with a teacher’s understanding of what is to be learned and how it is to be taught” (p. 7). The content base of teacher knowledge comprises the knowledge, skills, and understandings that students should possess and is a central component of teaching (Phelps & Schilling, 2004). For students to become successful readers, they need instruction in foundational literacy skills, including phonological awareness, decoding, fluency, vocabulary, comprehension, spelling, and writing (Connor et al., 2014; Foorman et al., 2016). As the research base supporting these aspects of literacy grew, a perspective emerged that teachers require a deep understanding of spoken and written language to teach reading effectively (Moats, 1994).

This specialized professional knowledge for teaching reading was thought to be different from an individual’s ability to read and their implicitly held knowledge of the language. Instead, it was defined as declarative knowledge of the language that is not acquired merely through exposure to spoken or written language (Moats, 1994; Phelps, 2009). The notion that content knowledge matters follows the logic that classroom teachers’ explicit knowledge of the language they are striving to teach supports their ability to deliver empirically validated instruction and intervention (Binks-Cantrell et al., 2012; Carlisle et al., 2011; Spear-Swerling et al., 2005; Stark et al., 2016). As in other content areas, such as math and science, delivering quality reading instruction is presumed to require teachers to have knowledge of reading, not just an ability to read.

Assessing teacher knowledge

Since the 1980s, researchers have investigated teachers’ knowledge of literacy. For example, Rupley and Logan (1985) attempted to measure teachers’ knowledge of basic reading content, their beliefs about reading, and their understanding of reading outcomes using three different measures, one of which was the Knowledge Test for Reading for Elementary Teachers from Rude (1981). Teachers’ knowledge of basal readers and other approaches to reading instruction best predicted teachers’ ability to identify important reading outcomes. However, their knowledge measure consisted of questions about popular reading methods (e.g., basal readers, language experience approach) rather than components of language and literacy (e.g., phonemic awareness, phonics).

In contrast, Moats (1994) proposed that teachers’ knowledge of the language they were striving to teach children to read was essential for supporting struggling students, responding to errors, choosing examples, organizing and sequencing instruction, using morphology to explain spelling, and integrating components of literacy instruction. She assessed teachers’ depth of linguistic knowledge using the Informal Survey of Linguistic Knowledge (Moats, 1994). This measure required teachers to define terms, identify speech sounds in words, analyze word parts, and give examples of phonic, syllabic, and morphological units through a series of multiple-choice, fill-in-the-blank, and listing items.

The teachers in Moats’ study performed poorly on these questions across all aspects of language she assessed. Moreover, she found teachers conceptualized words in their written forms rather than in their spoken forms. For example, teachers would incorrectly say that ‘fox’ has three sounds because it has three letters, when in fact it has four sounds (/f/ /o/ /k/ /s/). She viewed this type of error as problematic. She held this knowledge of the English language to be essential for providing reading instruction to all students and indispensable when engaging students with the highest levels of instructional need. A lack of declarative knowledge and awareness of the sound structure of the language could serve as a barrier to educators striving to provide direct instruction in mapping this sound structure to letters and larger orthographic units.

Connecting teacher knowledge to student outcomes

Since the early work by Moats (1994), additional research has demonstrated that educators lack knowledge of language and literacy (Foorman & Moats, 2004; Lyon & Weiser, 2009; McCutchen et al., 2002; Piasta et al., 2020). Furthermore, teachers may be unaware of their lack of knowledge (Cunningham et al., 2004; Joshi et al., 2009; Washburn et al., 2016). Yet, the focus on teacher knowledge of language and literacy is based on the idea that a knowledgeable teacher is needed to support students’ reading development. However, the few studies that have attempted to establish a direct link between teacher knowledge of language and literacy and students’ reading outcomes have provided mixed results (Hudson et al., 2021).

For example, Piasta et al. (2009) hypothesized that teacher knowledge indirectly impacts student outcomes via instruction. To examine teacher knowledge, instructional practice, and student outcomes, they assessed 49 teachers serving 616 students in 10 different schools using the Teacher Knowledge Assessment. As part of the study, the authors directly modeled the teachers’ scores on the Teacher Knowledge Assessment against students’ spring scores on the Letter-Word Identification subtest of the Woodcock-Johnson (WJ) assessment. Teacher Knowledge Assessment scores did not significantly predict change in students’ letter-word identification scores after controlling for fall scores, treatment status, and school-level free or reduced-price lunch rates. However, these findings could be due to the small sample size of teachers and the small number of schools from which these teachers were sampled.

In a larger study of Reading First schools in Michigan, Carlisle and her colleagues (2009) overcame the concern of statistical power by using an analytical sample that included 747 first, second, and third-grade teachers and at least 2,700 students in each grade level. Using the Language and Reading Concepts (LRC) measure of teacher knowledge, the authors categorized the levels of teacher knowledge into low, medium, and high, and then analyzed the influence of the categorical characterization of teacher knowledge on students’ performance on the Iowa Test of Basic Skills (ITBS) subtests of Word Analysis and Reading Comprehension. Students’ prior reading achievement was based on a subtest of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) for grades 1–3 and prior ITBS reading scores from the previous spring for grades 2–3. After adjusting for student- and teacher-level covariates, their results revealed no statistically significant effects of teacher knowledge on students’ performance in word reading or comprehension. However, they noted that only 7–11% of the variance was observed between classrooms within schools. The limited variability observed at this level of analysis could have hindered their ability to explore the association between teacher knowledge and student learning outcomes.

In 2011, Carlisle and colleagues created a new measure of teacher knowledge, the Teachers’ Knowledge of Reading and Reading Practices (TKRRP), that included 13 scenario-based items plus additional multiple-choice items for a total of 22 items. They used the TKRRP with a new cohort of first-, second-, and third-grade teachers in the Reading First schools in the same state as the previous study. The new scenario-based measure focused on activities typically used to teach word reading and reading comprehension. Their unconditional model indicated that 8–15% of the variance in student achievement was observed between classrooms (i.e., teachers), meaning that only a small portion of the differences in student outcomes could be attributed to differences in their classrooms. First-grade teachers’ scores on the TKRRP predicted their students’ reading comprehension scores. In contrast, teacher knowledge did not reliably predict word analysis at any grade level, nor did it predict reading comprehension in second or third grade. Kelcey (2011) reported similar findings with the same data set. However, the analytic sample was obtained from Reading First schools identified for extra support due to multiple years of student underachievement, which limited the range of school contexts.

Present study

Collectively, research has established that teachers have limited declarative academic knowledge of language and literacy (Bos et al., 2001; McCutchen et al., 1999; Spear-Swerling & Brucker, 2004). Yet demonstrating a link between teacher knowledge and student outcomes has been elusive, which is concerning because the development of teacher knowledge is emphasized within recent state-wide initiatives geared toward supporting student reading outcomes (Schwartz, 2022). That said, previous studies have been limited by small sample sizes that resulted in underpowered analyses, analytic samples obtained from schools performing at similar levels, and teacher knowledge measures with low reliability. The present study attempted to address these limitations by modeling data collected from a large sample of educators who taught in classrooms at schools with a broader range of performance than in previous research. We also selected an established teacher knowledge measure that assessed well-specified areas of educators’ content knowledge of the English language. These efforts allowed us to explore the following research questions:

  1. After controlling for student and teacher characteristics, does teacher knowledge of reading predict student reading performance in foundational skills at the end of kindergarten and first grade?

  2. After controlling for student and teacher characteristics, does teacher knowledge of reading predict student reading performance in reading comprehension at the end of kindergarten and first grade?

Method

Research sample

The analytic sample for this study consisted of 9,640 students in 512 kindergarten and first-grade classrooms in 112 public schools located throughout Arkansas. Based on the state’s school performance rating criteria that grade each school’s academic performance and growth, 28 schools (25%) received an A rating, 38 schools (33.9%) received a B rating, 22 schools (19.6%) received a C rating, and 24 schools (21.5%) received a D or F. The state’s rating criteria used a multiple-measure approach based on each school’s academic achievement to create value-added growth scores. These school value-added growth scores include English Learner progress, adjusted cohort graduation rates, and indicators of school quality and student success (e.g., on-grade level reading, ACT scores, attendance, proficiency on state tests, GPA, and on-time credits) (Arkansas Department of Education, 2018). These ratings indicate that the schools in the analytic sample had varying levels of student academic achievement and growth based on the state’s evaluation criteria. The percentage of certified teachers in this sample of schools ranged from 27.7 to 100% (M = 98.37, SD = 8.13).

The students (N = 9,640) in the analytic sample were in either kindergarten or first grade during the 2018–2019 school year. On average, students were 5.84 years old (SD = 0.70). Table 1 presents demographic information on the students taught by the teachers included in the research sample. Students were primarily Caucasian (58.2%), followed by African American (18.7%), Hispanic (14.7%), and additional ethnicities (8.4%). The additional ethnicity category, consisting of Asian Americans, Pacific Islanders, Native Americans, Alaskan Natives, and students who identify as two or more races, was created due to the low occurrence of each of these groups. Further, 51.0% of the students in the analytic sample were male, and 55.6% qualified for free or reduced-price lunch (FRL). Most of the 14.8% of students identified as Limited English Proficient (LEP) were also Hispanic (n = 1,062), and few LEP students were African American (n = 10), which limited our ability to model both ethnicity and LEP as separate vectors within our linear models.

Table 1 Sociodemographic Characteristics of Students

The classroom teachers (N = 512) taught either kindergarten or first grade and were primarily Caucasian females (83.98%). Table 2 presents demographic information on the teachers in the analytic sample. Over half of the teachers had a Bachelor’s degree (54.3%), and 45.7% held a Master’s degree or higher. Overall, the teachers had an average of 13.53 years (SD = 10.50) of teaching experience (Kindergarten: M = 14.45, SD = 10.57; First-grade: M = 12.65, SD = 10.37).

Table 2 Sociodemographic Characteristics of Teachers

Data sources

Students’ reading outcomes

As part of the state-mandated universal screening procedures, students’ reading and literacy skills are assessed three times during the school year. The Northwest Evaluation Association Measures of Academic Progress for Primary Grades (MPG) Growth assessment can be used in the universal screening process. This computerized adaptive measure was administered three times throughout the year, and individual scores are reported as Rasch Unit (RIT) scores on an equal-interval scale that is continuous across grades. RIT scores range from 130 to 201 based on the norming study conducted by the test developers (Thum & Hauser, 2015). Test items are accompanied by an audio presentation and primarily use a multiple-choice format. Test items align with the instructional areas of foundational skills, vocabulary, and comprehension.

The Foundational Skills and Literature and Informational Text domains were used as student reading measures for this study. The Foundational Skills domain represents a cluster of concepts that include basic features of print, phonological and phonemic awareness skills, and the ability to apply grade-level phonics and word analysis skills in decoding words. The Literature and Informational Text domain represents comprehension skills such as understanding a text read aloud, making inferences, determining central ideas, and identifying and using text features to clarify the meaning of unknown words.

For our analytic sample, fall RIT scores in foundational skills ranged from 100 to 194 (M = 138.33, SD = 12.50) for kindergarten students and 108 to 237 (M = 163.38, SD = 16.45) for first-grade students, which would put the sample means at the 33rd and 67th percentiles, respectively, based on the national norms (Thum & Hauser, 2015). In contrast, the spring foundational skills scores ranged from 101 to 226 (M = 158.75, SD = 15.02) for kindergarten students and 110 to 227 (M = 176.93, SD = 15.84) for first-grade students, which would put the sample means at the 54th and 47th percentiles, respectively, based on the national norms. For literature and informational text, fall scores ranged from 107 to 199 (M = 144.11, SD = 11.18) for kindergarten students and 114 to 220 (M = 163.30, SD = 14.60) for first-grade students, which would put the sample means at the 69th and 67th percentiles, respectively. The spring scores for literature and informational text ranged from 107 to 215 (M = 157.99, SD = 13.74) for kindergarten students and 123 to 237 (M = 176.93, SD = 15.84) for first-grade students, which would put the sample means at the 49th and 47th percentiles, respectively. These scores show the range of student performance across our analytic sample, which includes students who struggle and students who do well on the universal screening measures.

Teacher knowledge measure

The teacher knowledge measure, developed by McMahan and her colleagues (2019), is based on prior surveys of teacher knowledge (e.g., Binks-Cantrell et al., 2012; Bos et al., 2001; Moats, 1994). The knowledge test contains 50 items to assess five domains of basic knowledge of the English language: phonological sensitivity, phonemic awareness, decoding, encoding, and morphology. Phonological sensitivity is the awareness of parts of words larger than a phoneme, such as rhyme and syllable. Phonemic awareness is the knowledge of and ability to manipulate the smallest unit of sounds in speech. Decoding is the ability to match letters with sounds. Encoding is the ability to link sounds to letters in writing or spelling. Morphology refers to the study of the smallest units of meaning in language. Within each domain of knowledge, there were 10 multiple-choice questions, and they all focused on academic content knowledge (for examples of test items see McMahan et al., 2019 and McMahan 2019). Some questions required participants to define terms (e.g., What is the smallest speech sound that, if changed, changes the word? The correct answer is a phoneme.), or identify instructional activities (e.g., A teacher says, “Say tool. Now have the /t/ and /l/ change places. What word do you have?” This is an example of ____. The correct answer is phoneme transposition.). Other test items required participants to demonstrate their ability to perform a task, such as to count the number of phonemes in a word or identify where a word would be divided into syllables (e.g., How many morphemes are in the word incredible? The correct answer is three.). The reported Cronbach’s alpha for the 50-item survey was 0.86 (McMahan et al., 2019), and it was 0.72 for the current analytic sample.

The teacher knowledge data were collected at the beginning of the school year through an online learning management system maintained by the state’s department of education. Form A of the knowledge test was administered before the implementation of training modules that were part of a state-wide reading initiative focused on increasing teachers’ knowledge of literacy. There was no time limit to complete the 50-item knowledge test. Educators also completed background items about their education and current role in schools as part of the data collection process. Within the analytic sample, scores on the teacher knowledge measure ranged from a total proportion correct of 0.18 to 0.86 (M = 0.58, SD = 0.11) across participating teachers. Further, scores obtained from the kindergarten (M = 0.58, SD = 0.11) and first-grade teachers (M = 0.58, SD = 0.11) were not significantly different from each other, t(510) = 0.03, p = .977. Teachers were linked to their students’ assessment data for analysis.

Analytic strategy

A multi-level mixed effects analytic approach was adopted to address the research questions, and this modeling was undertaken using the lme4 and ggeffects packages within R (Bates et al., 2015; Ludecke, 2018; R Core Team, 2020). To examine the student-level and teacher-level predictors of students’ spring reading scores, multi-level linear models were built that included random effects for the nesting of students in classrooms (i.e., teachers) and schools and fixed effects for the student-level and teacher-level predictors. For each outcome variable (i.e., spring foundational skills RIT scores or spring reading comprehension RIT scores), we ran a null model that contained only the random intercepts at the school and teacher levels and a full model that added both student- and teacher-level fixed effects. The intraclass correlation coefficients (ICCs) in the null models revealed that between-school differences accounted for 16% of the variance in students’ spring reading scores for both foundational skills and reading comprehension. The ICCs also indicated that between-teacher differences accounted for 26% of the variance in students’ spring foundational skills scores and 25% of the variance in students’ spring reading comprehension scores. Fixed effects were added to create the full model and examine their association with the dependent variable for each model (i.e., spring scores for foundational skills or reading comprehension). The full model contained both student- and teacher-level predictors. As described below, the full model calculations, degrees of freedom, and output represent each categorical variable with one fewer parameter than its number of categories, reflecting each category’s comparison to the reference group coded as 0.
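The null models and ICC computations described above can be sketched with lme4 as follows; the data frame and column names (dat, spring_score, teacher_id, school_id) are hypothetical placeholders, not the study’s actual variable names.

```r
library(lme4)

# Null (unconditional) model: random intercepts only, with students
# nested in teachers (classrooms) nested in schools.
m_null <- lmer(spring_score ~ 1 + (1 | school_id / teacher_id), data = dat)

# Extract the variance components and compute the ICC at each level.
vc <- as.data.frame(VarCorr(m_null))
v_teacher <- vc$vcov[vc$grp == "teacher_id:school_id"]
v_school  <- vc$vcov[vc$grp == "school_id"]
v_resid   <- vc$vcov[vc$grp == "Residual"]

icc_school  <- v_school  / (v_school + v_teacher + v_resid)  # between-school share
icc_teacher <- v_teacher / (v_school + v_teacher + v_resid)  # between-teacher share
```

A model of this form would be fit separately for each outcome (spring foundational skills and spring reading comprehension RIT scores).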

There were five student-level predictors. Socioeconomic status was represented by FRL status, a dichotomous variable coded 1 if a child qualified for FRL and 0 if not. The second predictor was student ethnicity, a categorical variable coded 0 for Caucasian, 1 for African American, 2 for Hispanic, and 3 for the group composed of students identifying as part of additional ethnicity groups (e.g., Asian American) or multiple ethnicities. Caucasian students were used as the reference group because they were the largest group. The third student-level predictor was student gender, coded 1 for males and 0 for females. The fourth student-level predictor, the students’ fall reading scores for the skill examined in each model (i.e., foundational skills or reading comprehension), was entered as a continuous fixed effect. Finally, student grade was a dichotomous variable coded 1 for first grade and 0 for kindergarten.

The full model also added three teacher-level fixed effects. The proportion of correct responses on the teacher knowledge measure for each teacher was included as a fixed effect. Teaching experience and the highest degree held are common proxies of teacher effectiveness and are often included in analyses exploring teacher quality (Goldhaber et al., 2018; Phelps, 2009). The number of years since a teacher was certified was entered as a continuous fixed effect. A dichotomous variable for teacher degree was added with 1 for advanced degrees and 0 for bachelor’s degrees.

The following equation uses combined notation to represent the fully specified final model that includes all fixed and random effects used in the analysis (Heck et al., 2013; Raudenbush & Bryk, 2002; Snijders & Bosker, 2012). Separate models used the student’s spring foundational skills or reading comprehension scores as the outcome variable (Yijk). Other than the fall score entered as the student-level fixed effect and the spring score used as the outcome variable, the models examining foundational skills and reading comprehension in research questions one and two, respectively, were the same. In the model, Yijk is the spring reading score for student i corresponding to teacher j in school k. Additionally, V00k and U0jk represent the random variation of the intercept across schools and across teachers within schools, respectively, and εijk represents the residual variation from student to student.

Yijk = γ000 + γ100FRLijk + γ200African Americanijk + γ300Hispanicijk + γ400Additional Ethnicityijk + γ500Maleijk + γ600Fall Scoreijk + γ700First Gradeijk + γ010Teacher Knowledgejk + γ020Years Certifiedjk + γ030Advanced Degreejk + V00k + U0jk + εijk
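In lme4 syntax, the full model corresponds to a call along these lines; the data frame and column names are hypothetical stand-ins for the study’s variables, with the categorical predictors assumed to be factors or dummies coded as described in the text.

```r
library(lme4)

# Full model: student- and teacher-level fixed effects plus random
# intercepts for schools and for teachers nested within schools.
# With ethnicity as a four-level factor, lmer estimates three dummy
# coefficients, i.e., categories minus one, as noted in the text.
m_full <- lmer(
  spring_score ~ frl + ethnicity + male + fall_score + first_grade +
    teacher_knowledge + years_certified + advanced_degree +
    (1 | school_id / teacher_id),
  data = dat
)

summary(m_full)  # fixed-effect estimates and variance components
```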

Results

The first research question examined whether teacher knowledge of reading predicted student reading performance in foundational skills at the end of kindergarten and first grade. Table 3 reports the parameter estimates for the models. As indicated by the ICCs reported above, substantial variability occurred between teachers and between schools. All of the student-level fixed effects were significant. The predicted estimates provided below control for the other predictors in the model by setting the student- and teacher-level predictors at the sample mean or their respective reference categories. These predicted estimates reveal that students who qualified for FRL (166.58, 95% CI [165.59, 167.58]) had lower scores than students who did not qualify for FRL (169.95, 95% CI [168.98, 170.92]). African American (167.11, 95% CI [165.92, 168.30]) and Hispanic (167.79, 95% CI [166.58, 168.99]) students had lower spring RIT scores than students who identified as Caucasian (169.95, 95% CI [168.98, 170.92]) or as part of the additional ethnicity group (169.78, 95% CI [168.52, 171.04]), whose predicted spring foundational skills scores were equivalent to each other. Further, males (169.15, 95% CI [168.18, 170.12]) had lower spring scores than females (169.95, 95% CI [168.98, 170.92]). A student’s fall foundational skills score and grade were statistically reliable positive predictors of the student’s spring foundational skills RIT score. Higher fall scores were associated with higher spring scores, and students in first grade were predicted to have higher scores than those in kindergarten, as expected.
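Predicted estimates of this kind can be obtained with the ggeffects package cited in the analytic strategy; the fitted model object (m_full) and the term name (frl) below are hypothetical placeholders.

```r
library(ggeffects)

# Predicted spring RIT scores by FRL status, holding the remaining
# predictors at their means (continuous) or reference levels (factors),
# with 95% confidence intervals.
ggpredict(m_full, terms = "frl")
```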

Table 3 Model Parameters Predicting Student Outcomes in Spring

At the teacher level, years since certification and having advanced degrees did not predict students’ spring foundational skills scores. In contrast, teachers’ performance on the knowledge test reliably predicted students’ spring foundational skills RIT scores. Educators who scored better on this measure of teacher knowledge had students who performed better on the spring benchmark assessment of foundational skills.

The second research question examined to what extent teachers’ performance on the knowledge assessment predicted student performance in reading comprehension at the end of kindergarten and first grade after controlling for student and teacher characteristics (see the right-hand side of Table 3). Similar to the model predicting students’ foundational skills scores, all student-level predictors were statistically significant, and their parameter estimates had the same sign as before, indicating a similar pattern across models. Similarly, neither teacher years since certification nor possession of an advanced degree was significant. In contrast to the foundational skills model, teacher knowledge was not a statistically significant predictor of students’ spring reading comprehension scores.

Discussion

In this study, we sought to add to the literature by exploring the link between teacher knowledge of language and literacy and student reading outcomes in foundational skills and reading comprehension after controlling for student- and teacher-level variables such as gender, ethnicity, prior performance, and highest degree obtained. Prior research has provided mixed results when attempting to link an educator’s knowledge directly to student reading outcomes. This study adds to the existing body of research by showing a significant link between teacher knowledge and student reading outcomes in foundational skills, but not in reading comprehension.

Our first model indicated that a teacher’s knowledge has a positive effect on students’ spring foundational skills scores after controlling for student-level and teacher-level characteristics shown to impact student growth and achievement. As in previous research, gender and FRL status were significant predictors of students’ spring performance. Unlike previous research, our analysis revealed a significant link between teacher knowledge and students’ foundational skills scores (Piasta et al., 2009; Carlisle et al., 2011). This finding provides empirical evidence to support the idea that teachers with a deep understanding of the content students are expected to learn positively influence their students’ reading outcomes. Like the measure used by Piasta et al. (2009), our teacher knowledge measure was designed to assess teachers’ academic content knowledge. However, our analytic sample was larger and sufficiently powered to identify a statistically significant effect. This academic content knowledge is necessary for teachers to make sense of curriculum materials, understand student work, and present tasks and materials in ways that students can understand and learn (Phelps, 2009). Although connecting teacher knowledge to instructional practices was not the focus of this study, research also supports the link between a knowledgeable teacher and effective teaching practices (Hill et al., 2012; Phelps, 2009).

The lack of a significant link between teacher knowledge and reading comprehension is not surprising. Our measure of teacher knowledge focused on knowledge of foundational reading skills, not specific knowledge of reading comprehension. Previous studies found a modest effect of teacher knowledge on reading comprehension (Carlisle et al., 2011). However, Carlisle’s work used a teacher knowledge measure that included scenarios based on reading comprehension instruction, whereas our measure focused on foundational reading skills. Models of proficient reading such as the Simple View of Reading and Scarborough’s Reading Rope illustrate the role foundational skills play in building reading comprehension (Gough & Tunmer, 1986; Scarborough, 2001). Yet knowledge of foundational skills alone is not sufficient for producing proficient readers. Effective reading comprehension requires knowledge of text structure, vocabulary, and syntax, as well as skills such as inferencing, predicting, and summarizing (Pressley & Gaskins, 2006). Although early reading skills are predictive of later reading comprehension, teaching comprehension effectively may require specialized knowledge beyond foundational skills (Peng et al., 2019).

Limitations and future research

We note the following study limitations. The analytic sample for this study was gathered from a single state, with rural schools disproportionately represented given the locations of the participating schools. Despite the range of student performance across the schools, as indicated by the schools’ state-assigned performance ratings and the range of students’ scores in the analytic sample itself, caution should be taken when generalizing these results without considering the characteristics of the schools, teachers, and students in our analytic sample.

The lack of teacher knowledge items about reading comprehension is another limitation of this study. The student outcome measure used in this study did provide a measure of students’ language comprehension. However, without a measure of teacher knowledge of reading comprehension, it is not surprising that we did not observe a significant relationship between teacher knowledge and their students’ comprehension. Future research should seek to measure teachers’ academic and pedagogical knowledge of reading comprehension and link that knowledge to students’ reading comprehension outcomes.

Another limitation of the present study is the lack of an instructional practice measure in the classrooms. A few studies have attempted to model whether the impact of teacher content knowledge of reading on student reading outcomes is mediated by instructional practices by including classroom observational data or measures that focus on teachers’ knowledge of effective reading practices (Garet et al., 2008; Piasta, 2020). Future research should further explore the link between teacher content knowledge, instructional practice, and student reading outcomes.

Although observational data or a pedagogical content measure would add to the study, we believe demonstrating a direct relationship between teacher knowledge and student outcomes underscores how important knowledgeable teachers are for all students. These findings provide an empirical basis for the growing number of legislative mandates focused on bolstering the training provided to teachers, which in turn enables all students to learn to read. Yet additional research should continue to clarify the domains of knowledge that are essential for effective early reading instruction. Questions remain as to what threshold of knowledge is sufficient to teach reading effectively, and these should be investigated in future studies. Future research should also seek to create a valid and reliable measure that can differentiate teachers with adequate levels of knowledge from teachers with low levels of knowledge. Such a measure would allow teacher preparation programs and schools to assess what teachers know and to design appropriate courses or professional development opportunities to address knowledge gaps.

Teaching reading effectively is a complex endeavor that requires a multi-faceted approach to understand. Teacher content knowledge is just one component of ensuring highly qualified teachers are prepared to meet the needs of all students in their classrooms. The results of this study demonstrate that teacher content knowledge can be linked to students’ growth in reading performance. Policies and programs targeting the development of highly qualified teachers should focus on ensuring teachers have the specialized knowledge of basic reading skills that is essential for helping all students grow into proficient readers.