Introduction

The improvement of student learning outcomes in mathematics continues to be a central focus of efforts in many countries (European Mathematical Society 2012; Hiebert et al. 2005; Jaworski 2006). Student performance at the elementary school level is of particular concern because it is at this education level that children develop understandings that are fundamental to future learning and dispositions toward the subject matter that become central to their identities as mathematics learners in later grades (Hiebert et al. 1997). At the same time, research on mathematics teaching and teacher education has highlighted the complexity of the work of teaching and of the knowledge required to teach effectively (Ball et al. 2008).

In the USA, the Common Core State Standards for Mathematics (Common Core State Standards Initiative 2010) have raised the demands on teachers and what is required of them to be effective. These standards aim to develop coherence across grade levels and emphasize conceptual understanding and engaging students in mathematical practices such as making conjectures about the form and meaning of mathematical solutions, constructing viable arguments, and critiquing the reasoning of others. The field has made great progress in understanding the knowledge that is necessary for standards-aligned teaching (Hill 2010). Nonetheless, questions remain about the relationship between knowledge and instructional quality and the processes through which teachers activate their knowledge during teaching.

The purpose of this study is to examine the relationship between teachers’ mathematical knowledge for teaching and the mathematical quality of their instruction in a sample of elementary school teachers. This work both replicates and extends a prior study (Hill et al. 2008) by involving novice teachers in their first year in the profession and by examining closely the practices of three kindergarten case-study teachers. New teachers are on average less effective than teachers with some experience (Harris and Sass 2011; Kane et al. 2008); thus, understanding how teachers draw on their knowledge to inform instructional decisions at the beginning of their careers is important. It has implications for the design of systems of support during both teacher preparation and induction that might affect instructional quality as well as job satisfaction and retention at the beginning of teachers’ careers.

Below we summarize research on mathematical knowledge for teaching, instructional quality, and their relationships before we introduce the study context and research questions.

Literature review

Mathematical knowledge for teaching

During the last 30 years, the field has made much progress in conceptualizing the knowledge bases teachers must draw on to be effective in the classroom. Teaching is now understood as a complex activity occurring in real time and involving interactions with students that require not only subject matter knowledge but also pedagogical content knowledge (Shulman 1987). Pedagogical content knowledge encompasses “the ways of representing and formulating the subject that make it comprehensible to others” (Shulman 1986, p. 9).

Recent research has contributed to our understanding of different facets of knowledge (Ball et al. 2008) as well as the nature of the knowledge that is most relevant to teachers’ work in the classroom. For example, Kersting and colleagues (2012) have proposed the notion of usable knowledge to highlight knowledge that teachers access and apply during teaching. The Teacher Education and Development Study in Mathematics (TEDS-M) has distinguished pre-active aspects of knowledge (i.e., knowledge of planning for mathematics teaching and learning) from interactive aspects (i.e., enacted mathematics knowledge for teaching and learning) (Tatto et al. 2008). The Knowledge Quartet project has developed a tool for teacher educators to identify mathematical content knowledge in the act of teaching that distinguishes foundational knowledge (e.g., subject matter knowledge), transformational knowledge (e.g., making subject matter knowledge available to learners), connection (e.g., between procedures and concepts), and contingency (e.g., responding to children’s ideas) (Rowland 2008). The COACTIV project has embedded knowledge in a comprehensive conceptualization of teacher competence that also includes beliefs and psychological functioning (Baumert et al. 2010). Similarly, proponents of the Mathematics Teacher’s Specialized Knowledge (MTSK) model place teacher beliefs about mathematics and about mathematics teaching and learning at the center of their conceptualization of mathematical knowledge and pedagogical content knowledge, arguing that these beliefs deeply influence teachers’ classroom practices (Carrillo-Yañez et al. 2018).

This study joins these efforts in that it draws on a conceptualization of knowledge derived from careful analysis of the mathematical work that teachers conduct in their classrooms. It is designed as a replication study and builds on the research conducted by Ball and colleagues (2008) in the context of the Learning Mathematics for Teaching (LMT) project. The LMT project investigated the mathematical knowledge teachers need to carry out various teaching tasks during instruction, such as assessing student work, representing numbers and operations, and explaining common mathematical rules or procedures. The resulting conceptualization of mathematical knowledge for teaching (MKT) has been utilized widely, including outside the USA, in studies of teacher learning and instructional quality, and it highlights the highly professional nature of teachers’ work (Santagata et al. 2011; Delaney et al. 2008).

MKT is composed of several dimensions that are grouped in two main constructs, subject matter knowledge and pedagogical content knowledge. Subject matter knowledge includes common content knowledge, specialized content knowledge, and horizon content knowledge. Common content knowledge is “content knowledge that is used in the work of teaching in ways in common with how it is used in many other professions or occupations that also use mathematics” (Hill et al. 2008, p. 436). Specialized content knowledge refers to “content knowledge that is tailored in particular for the specialized uses that come up in the work of teaching, and is thus not commonly used in those ways by most other professions or occupations” (Hill et al. 2008, p. 436), whereas horizon content knowledge is “an awareness of how mathematical topics are related over the span of mathematics included in the curriculum” (Ball et al. 2008, p. 403). For pedagogical content knowledge, the domains include knowledge of content and students, knowledge of content and teaching, and knowledge of the curriculum (Ball et al. 2008).

MKT has been examined through a measure that LMT researchers pilot tested with over 2000 teachers between 2002 and 2010 (http://www.umich.edu/~lmtweb/). The instrument items differ from those included in more traditional certification or subject matter assessments. Knowledge is assessed in the context of common problems that arise in the course of teaching mathematics to students. Examples include providing an explanation for a mathematical rule or procedure, examining an unusual method for solving a problem, or deciding which of several definitions is accurate and usable with students at a certain grade level (see “Appendix A” for sample items). Research conducted by LMT researchers found that teachers’ performance on the MKT instrument was linked to their students’ achievement gains (Hill et al. 2005). More details about the instrument are provided in the “Method” section.

Instructional quality

Over the past decades, researchers have developed various constructs and instruments to capture instructional quality of mathematics lessons. Charalambous and Praetorius (2018) recently reviewed 12 frameworks and discussed their characteristics. Approaches differ based on the purpose that drove researchers’ work (research, evaluation, professional development or a combination of these), their top-down or bottom-up development (based on existing literature and expert judgment; deriving from close observation of instructional practices; or both processes), and the body of literature on which they are based (generic educational effectiveness and learning and motivational theories; mathematics-specific, such as research on mathematics education and mathematics teacher knowledge; or hybrid approaches that include a combination of both). In addition, there are some differences in how conceptualizations are operationalized into coding dimensions and categories, how researchers capture frequency and quality of particular instructional moves, and whether they focus on the teacher or the students. Finally, measurement decisions also differ in terms of units of analysis, scoring scales, and strategies for assuring rater quality. These differences raise important questions about conclusions we draw from studies on the impact of teacher professional development, the effects of teaching on student learning, or, as in this case, the associations between teacher knowledge and quality of instruction.

This study draws on a framework and instrument for describing and measuring instructional quality developed by the LMT research group (Learning Mathematics for Teaching (LMT) 2011): the Mathematical Quality of Instruction (MQI). This conception of instructional quality centers on identifying and analyzing the mathematical features of classroom work, as opposed to general pedagogical features. A focus on the mathematical quality of instruction was important in our study given our interest in mathematical knowledge for teaching rather than general pedagogical knowledge.

Other researchers have provided insights into features of classroom teaching in which teachers’ mathematical knowledge becomes visible (Borko et al. 1992; Rowland et al. 2005; Rowland 2008), and some research groups have developed measures that capture the mathematical nature of teaching (Gearhart et al. 1999; Sawada and Piburn 2000; Horizon Research 2000). However, LMT researchers argue that what distinguishes their work is a focus on quantifying the mathematical quality of teachers’ work separately from other features of instruction (LMT 2011). They define MQI as “only the nature of the mathematical content available to students during instruction” (p. 30). The MQI framework and instrument include several constructs and corresponding scales. Broadly speaking, the instrument attends to the richness of the mathematics in a lesson, to the mathematical accuracy of the instruction, and to how the teacher builds on students’ mathematical ideas and addresses mathematical difficulties. Additional details about the instrument are discussed in the “Method” section.

Relationships between mathematical knowledge for teaching and mathematical quality of instruction

Building on the LMT group’s research on MKT and MQI, this manuscript examines the relationship between teacher knowledge and instructional quality in a sample of first-year elementary school teachers. The motivation and inspiration for this study came from an article authored by Hill and colleagues (2008) that examined the association between MKT and MQI for ten teachers and investigated teacher practices as they related to knowledge in five case-study teachers whose years of teaching experience ranged from 5 to 15. The researchers found associations between teachers’ performance on the MKT instrument and the quality of their instruction as measured by several dimensions of the MQI instrument. These Spearman correlations ranged between .3 and .83. The correlations for the dimension of responding to students appropriately (i.e., the degree to which the teacher can correctly interpret student mathematical utterances and address student misunderstandings) and for the absence of mathematical errors in instruction were statistically significant even with a small sample size of 10 teachers (see Table 3). The authors also found that the association between knowledge and instructional quality was either supported or hindered by several factors, including teacher beliefs about how mathematics should be learned and how to make it enjoyable for students; teacher beliefs about curriculum materials and how they should be used; and the availability of curriculum materials to teachers. In a subsequent publication, Hill and colleagues (2012) reported a moderate correspondence (r = .44) between MKT and MQI in a larger sample of 291 teachers with varying degrees of expertise drawn from two separate projects.

In this study, we replicate Hill and colleagues’ approach (summarized in their 2008 publication) to examine whether the MKT–MQI associations they found hold true in a sample of beginning teachers. We also further explore factors that might explain whether and how teachers’ mathematical knowledge for teaching is manifested in instruction, focusing on three case-study teachers in kindergarten classes (i.e., the first year of elementary school in the USA). Although replication studies in education are rare—comprising 0.13% of education publications, according to a recent study that examined education journals with the top 100 5-year impact factors—“conducting replications on important findings is essential to moving toward a more reliable and trustworthy understanding of educational environments” (Makel and Plucker 2014, p. 313).

The question on which we focus is also relevant more broadly. Internationally, several studies have documented positive associations between teacher knowledge, instructional decisions, and student learning (Baumert et al. 2010; Blömeke et al. 2016; Copur-Gencturk 2015; Hill et al. 2011; Kersting et al. 2012). Research has highlighted how teachers with more subject matter and pedagogical content knowledge are able to provide richer opportunities for students to learn mathematics, including using appropriate representations, making explicit conceptual connections, unpacking student errors, and building on student ideas through productive classroom discourse (Ball 1993; Fennema and Franke 1992; Ma 1999; Putnam et al. 1992).

Examining the relationship between knowledge and instructional quality at the beginning of teachers’ careers is important for several reasons. From a policy perspective, understanding what aspects of knowledge are most related to high-quality instruction for novice teachers can inform existing policies at the national and state level that require teachers to acquire knowledge before they enter the profession (U.S. Department of Education 2013). From a practice perspective, understanding how knowledge impacts teaching is important both for making decisions about certification examinations (Santagata and Sandholtz 2019) and for designing teacher preparation and induction programs that support novice teachers.

Existing research involves teachers with various degrees of experience, while studies focusing on novice teachers are rare. In our review of the literature, we identified only one study that centered explicitly on novice teachers. In this study, Desimone and colleagues found weak links between knowledge and instructional quality in a sample of middle-school teachers during their first and second years of teaching (Desimone et al. 2016). Their study utilized the MKT survey to measure teacher knowledge (Hill et al. 2008) and a different instrument to measure instructional quality (Boston and Wolf 2006). Specifically, the researchers assessed the quality of instruction along two dimensions: (1) the rigor of lesson activities and the ensuing class discussion (i.e., the cognitive demand of mathematical tasks) and (2) the quality of class discussion (i.e., teacher–student interactions). The authors suggested that the weak links between knowledge and instructional quality in novice teachers might be due to their difficulty in accessing knowledge related to the dimensions of instructional quality they measured.

Research questions and study context

The study is structured into a quantitative phase that examines the relationship between MKT and MQI and a qualitative phase centered on three case-study teachers. Together these phases answer the following two research questions: (1) What is the relationship between mathematical knowledge for teaching and mathematical quality of instruction for novice elementary school teachers during their first year of teaching? And, (2) To what extent and how is teachers’ level of mathematical knowledge for teaching reflected in different aspects of their instruction? What other factors might account for teachers’ instructional decisions?

This study is part of a larger project that involved 112 participants. Pre-service teachers were enrolled in a post-bachelor elementary teacher education program at a large university in the USA. Participants completed the MKT measure. The present study includes a sub-sample of these participants.

Method

Examining associations between MKT and MQI

Selection criteria and participants’ background

From the larger sample of teachers, ten participants were selected to participate in the longitudinal phase of the project. These teachers had found jobs within a reasonable distance from the university, allowing for classroom visits three times per year. They taught grade levels ranging from kindergarten to fourth grade: kindergarten (4 teachers), first grade (2 teachers), second grade (2 teachers), third grade (1 teacher), and fourth grade (1 teacher). Their schools represented a variety of contexts with students from different socioeconomic and ethnic backgrounds. As a group, they also represented various levels of knowledge as measured at the end of teacher preparation. In this study, we examine the relationship between their knowledge and the quality of their instruction during their first year in the profession.

Data sources and measures

The main sources of data for this study were the Mathematical Knowledge for Teaching (MKT) survey, three videotaped lessons for each teacher, and their verbatim transcripts. Teacher interviews were used to supplement information about teachers’ instructional decisions.

Mathematical knowledge for teaching (MKT) The MKT survey was used to measure teachers’ mathematical knowledge for teaching (Ball et al. 2008). All teachers completed the paper-and-pencil version of the survey at the end of their teacher education program. The 2008 Form B was used, which includes 16 multiple-choice problem situations centered on numbers and operations. Its established inter-item reliability is .84 (Hill 2010). The survey measures common content knowledge, specialized content knowledge, and knowledge of content and students, with some items capturing more than one dimension.

Every item was coded as correct (score of 1) or incorrect (score of 0). Each participant received a raw score equal to the sum of correct responses and an IRT (Item Response Theory) score generated using the conversion scale provided by the instrument developers. The scaled scores ranged from −2 to +2, with 0 corresponding to the national-norm average and negative and positive scores indicating below- and above-average knowledge, respectively.
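To make this scoring procedure concrete, the following sketch (in Python) illustrates how a raw score and a scaled score might be computed for one participant. The conversion table shown is hypothetical; the actual raw-to-IRT conversion scale is distributed with the instrument by the LMT developers and is not reproduced here.

```python
# A minimal sketch of the MKT scoring described above, assuming a HYPOTHETICAL
# raw-to-IRT conversion table; the actual conversion scale is supplied by the
# instrument developers.

HYPOTHETICAL_CONVERSION = {10: 0.00, 12: 0.45, 14: 0.95, 16: 1.56}

def raw_score(item_responses):
    """Each of the 16 items is coded 1 (correct) or 0 (incorrect)."""
    return sum(item_responses)

def scaled_score(raw, table=HYPOTHETICAL_CONVERSION):
    """Scaled (IRT) score: roughly -2 to +2, with 0 at the national average."""
    return table.get(raw)

responses = [1] * 14 + [0] * 2               # one teacher's coded answers
print(raw_score(responses))                   # 14
print(scaled_score(raw_score(responses)))     # 0.95 (illustrative value only)
```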

Mathematical quality of instruction (MQI) Research suggests that two or more observations are necessary to capture teacher instructional quality (Ho and Kane 2013). In this study, three full-length mathematics lessons were videotaped for each teacher during the first year of instruction. In order to give teachers some time to adjust to their classroom and students, the first lesson was filmed 4 months into the school year (in January). The subsequent lessons were videotaped roughly 2 months apart, in March and May.

The Mathematical Quality of Instruction instrument (MQI; LMT 2011) was used to code the lessons. The MQI aims to capture three types of interactions that occur in the classroom—teacher-and-student, teacher-and-content, and student-and-content interactions—and the presence of certain features of quality teaching and their degree of sophistication. The MQI has gone through an extensive instrument development process that has led to different versions and different ways to calculate scores of instructional quality. We utilized the 2014 version of the instrument (Center for Education Policy Research, n.d.a). Below we describe the scoring procedure.

Scoring consists of three phases. First, each videotaped lesson is divided into 7-min segments, and each segment is scored along several dimensions of instructional quality on a scale from 0 (not present) to 3 (high). In the second phase, raters refer back to these segment-level codes to assign whole-lesson scores along nine dimensions; each score ranges from 1 (not at all true of this lesson) to 5 (very true of this lesson), with 3 as the default. Lastly, the rater decides on an overall MQI score for the entire lesson, ranging from 1 (low) to 5 (high) with 3 as the midpoint, by taking into account all of the coding work that has been completed. This lesson score captures the rater’s overall evaluation of the teacher’s mathematical knowledge as manifested in the lesson (Center for Education Policy Research, n.d.b).
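As a minimal illustration of how these three layers of codes might be recorded, the sketch below encodes the score ranges described above; the dimension names and the data structure itself are our own illustration, not part of the MQI instrument.

```python
# A minimal sketch of a record for the three layers of MQI scores described
# above. Dimension names are abbreviated and illustrative only.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MQILessonRecord:
    # Phase 1: each 7-minute segment scored 0 (not present) to 3 (high)
    segment_scores: List[Dict[str, int]] = field(default_factory=list)
    # Phase 2: nine whole-lesson dimensions scored 1 (not at all true) to 5 (very true)
    whole_lesson_scores: Dict[str, int] = field(default_factory=dict)
    # Phase 3: a single overall MQI score from 1 (low) to 5 (high)
    overall_score: int = 0

    def add_segment(self, codes: Dict[str, int]) -> None:
        assert all(0 <= v <= 3 for v in codes.values()), "segment codes range 0-3"
        self.segment_scores.append(codes)

    def set_whole_lesson(self, dimension: str, score: int) -> None:
        assert 1 <= score <= 5, "whole-lesson scores range 1-5"
        self.whole_lesson_scores[dimension] = score

lesson = MQILessonRecord()
lesson.add_segment({"richness": 2, "errors": 0})
lesson.set_whole_lesson("Mathematics is Clear and not Distorted", 4)
lesson.overall_score = 3
```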

Although the whole-lesson scores were not included in the version of the MQI utilized by Hill and colleagues in their 2008 publication, we decided to report these scores instead of the segment-level overall scores they reported. Whole-lesson scores have been found to be the most reliable when scoring instructional quality in lower-elementary classrooms (Mantzicopoulos et al. 2018). In our view, these scores capture dimensions of instructional quality similar to those reported by Hill et al. (2008), and they thus allow for comparisons of correlations between teacher knowledge and instructional quality across studies.

Teachers’ whole-lesson and overall lesson scores were averaged across their three videos to reflect the instructional quality of their first year of teaching. These averaged scores were used to examine the association between MKT and MQI through Spearman’s rank-order correlation analyses. Table 1 reports descriptions of whole-lesson and overall lesson scoring.
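The sketch below illustrates this analytic step: each teacher’s overall lesson scores are averaged across the three videotaped lessons and then correlated with the MKT scaled scores using Spearman’s rank-order correlation. The numbers are invented for illustration and are not the study’s data.

```python
# A minimal sketch of the correlation analysis described above, using invented
# scores for ten hypothetical teachers.

from statistics import mean
from scipy.stats import spearmanr

mkt_scores = [-0.51, 0.10, 0.45, 0.67, 0.80, 0.95, 1.10, 1.25, 1.40, 1.56]
mqi_by_lesson = [            # overall lesson scores for each teacher's three lessons
    [2, 2, 3], [2, 3, 3], [3, 3, 3], [2, 2, 2], [3, 4, 3],
    [3, 3, 4], [4, 4, 3], [2, 3, 2], [4, 4, 5], [4, 5, 5],
]

mqi_averages = [mean(scores) for scores in mqi_by_lesson]
rho, p_value = spearmanr(mkt_scores, mqi_averages)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
```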

Table 1 MQI whole-lesson codes and overall lesson score

All lesson videos were scored by two independent raters. The second author is an MQI-certified rater and was a member of the master-scoring team led by the measure developers at the Center for Education Policy Research at Harvard University. This rater has extensive prior experience with the MQI instrument, including the scoring of 20 whole lessons and at least 110 short clips after formal training. The second rater is an MQI-certified research assistant who received the measure developers’ formal training, which consisted of completing an online program that involves watching 56 clips for each code and dimension. The training process lasted approximately 20 h, and the certification examination consisted of scoring four segments from four different classrooms.

The inter-rater reliability, calculated as exact agreement, ranged from .4 to .53 across the whole-lesson codes. Following approaches used by other scholars in published studies of instructional quality (see, for example, Schlesinger and Jentsch 2016), scoring decisions were finalized through discussion and consensus. Most lessons were taught in early elementary classrooms, while the MQI instrument and the lessons on which the rater training is conducted come mostly from upper elementary and secondary classrooms; thus, we decided that independent scoring followed by discussion was the most robust way to obtain scores of instructional quality.
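As a minimal illustration, exact agreement can be computed as the proportion of paired scores on which the two raters assigned identical values; the rater scores below are invented.

```python
# A minimal sketch of inter-rater reliability computed as exact agreement.

def exact_agreement(rater_a, rater_b):
    """Proportion of paired scores on which the two raters agree exactly."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

rater_a = [3, 4, 2, 3, 5, 2, 3, 4, 2, 3]   # one whole-lesson code across ten lessons
rater_b = [3, 3, 2, 4, 5, 2, 2, 4, 3, 3]
print(f"Exact agreement = {exact_agreement(rater_a, rater_b):.2f}")   # 0.60
```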

Supplementary data A total of four semi-structured interviews were conducted with every teacher: three post-lesson interviews immediately after each video-recording session and an end-of-year interview after the third post-lesson interview. The same protocol was used to interview all teachers. Questions asked teachers to reflect on the effectiveness of the lesson we had videotaped and on how they could improve it. End-of-year interviews asked teachers to discuss their professional experiences, including opportunities to collaborate with others and to grow professionally.

Case studies

Case-study selection

Based on the results of the correlation analyses, we selected three teachers to illustrate how their different levels of knowledge are related to their practice. Similarly to Hill and colleagues (2008), we followed Yin’s (2014) case selection procedure: “each case [was] carefully selected so that it either (a) predicts similar results (a literal replication) or (b) produces contrasting results but for predictable reasons (a theoretical replication)” (p. 46). We chose two teachers whose MKT and MQI scores converged and one whose scores did not converge. The first two cases illuminate how MKT contributes to instructional quality and how a lack of MKT constrains instruction. The third case, a teacher whose knowledge does not translate into instructional quality, problematizes this relationship and highlights factors that mediate the expression of MKT in instruction. All three case-study teachers teach 5- to 6-year-old children in the first year of elementary school (i.e., kindergarten). We chose to present cases at this level of schooling because the role of MKT at this grade level might be less intuitive. Our findings suggest that knowledge matters at this grade level as well. In addition, other considerations are important to understand how teachers utilize their knowledge to create opportunities for students to learn. Table 2 includes case-study teachers’ performance on the MKT and MQI and information about their school context.

Table 2 Teacher MKT and MQI and school context

Case-study analyses

After selecting the case-study teachers, we reviewed their lesson videos, transcripts, and interviews. Specifically, we examined whether and how instruction reflected mathematical knowledge for teaching. To identify evidence of knowledge, we built on prior work by Ball and colleagues (2008), who discussed how different facets of knowledge inform the work of mathematics teaching. We used the MKT framework to examine lesson videos and to generate annotated memos for each lesson video that documented how different facets and levels of knowledge were manifested in the lesson. We followed Hill et al.’s (2008) approach and conducted “pattern matching” (Yin 2014) to note where MKT seemed to matter for specific dimensions of instructional quality and where it did not. We also noted factors other than MKT that seemed to inform teachers’ instructional decisions. To do this, we reviewed teacher interview transcripts and highlighted sections where teachers discussed reasons for specific instructional decisions. We noted these reasons in each lesson’s annotated memo.

We then summarized evidence across the three videos and interviews for each teacher and selected episodes that best illustrated each teacher’s practices across the three lessons. We centered the presentation of findings around these episodes to provide examples of instances in which mathematical knowledge for teaching is clearly visible in instruction (first case), in which developing knowledge limits instructional quality (second case), and in which opportunities for student learning are limited despite a teacher’s knowledge (third case).

Findings

Relationship between mathematical knowledge for teaching and mathematical quality of instruction

Participants’ MKT scale scores ranged from −0.51 to 1.56. The average MKT score was .67, which indicates that the knowledge of this sample of teachers is above the national average. The MQI overall lesson scores ranged from 1 to 5. Similarly to Hill and colleagues’ (2008, 2012) findings, we found a positive association between teachers’ MKT and their MQI overall lesson scores. The scatterplot reported in Fig. 1 shows a positive, linear relationship of moderate strength between teachers’ MQI and MKT scores (r = .33). The scatterplot also reveals cases in which higher levels of knowledge do not correspond to higher levels of instructional quality (i.e., bottom right quadrant), while there are no clear cases of lower levels of knowledge and higher levels of instructional quality (i.e., top left quadrant).

Fig. 1 Scatterplot of first-year teachers’ MKT and overall lesson MQI scores

Table 3 reports correlations between teachers’ MKT and the dimensions of instructional quality captured by the whole-lesson codes. It also compares our findings with those reported by Hill and colleagues in their 2008 and 2012 publications. As mentioned above, different versions of the instrument vary in how instructional quality scores are computed. We compared findings based on scoring dimensions that we deemed to capture the same features of instruction.

Table 3 Correlations between MQI and MKT Scores (Spearman’s rho)

Our analyses confirm findings from Hill et al. (2008, 2012) and reveal a strong, positive, and statistically significant association between teacher knowledge and the Mathematics is Clear and not Distorted dimension of instructional quality (r = .77; p < .001). Positive associations of moderate strength, albeit not statistically significant, are also observed between teacher knowledge and two dimensions of instructional quality that are directly linked to the mathematics taught in the lesson: Tasks and Activities Develop Mathematics (r = .29; p = .422) and Lesson is Mathematically Dense (r = .27; p = .446). In addition, there are positive associations between teacher knowledge and Efficient Use of Lesson Time, which captures the extent to which students are continuously engaged with the mathematics and transitions between tasks are smooth and quick (r = .32; p = .363), and between teacher knowledge and the Students are Engaged dimension, which captures whether students are eager to participate and are not off task or disengaged with the lesson (r = .25; p = .500). All other relationships are weaker. Specifically, contrary to Hill et al.’s studies, which found strong and significant correlations between MKT and similar dimensions, in our sample of novice teachers Lesson Contains Rich Mathematics, Attends to and Responds to Student Difficulty, and Lesson Contains Common Core-Aligned Student Practices are not associated with MKT.

Table 4 reports teachers’ individual scores on the MKT measure and their overall lesson and whole-lesson MQI scores. These scores shed light on the aspects of instructional quality that vary the most among teachers independently of their ranking on the MKT, indicating that other factors might explain the quality of their instruction. There are also dimensions on which teachers’ scores tend to be lower overall. Sixty percent of teachers obtained a score below 3 in Lesson is Mathematically Dense, perhaps in part a consequence of the fact that most teachers taught in lower-elementary classrooms where classroom management and getting students ready to work on mathematics might take time. Half of the teachers obtained low scores on Common Core-Aligned Student Practices. This is the dimension on which teachers across the sample differ the most, with two of the teachers with higher MKT obtaining a score of 1.3. Lessons were videotaped when the Common Core standards had just been released, and schools differed in the timing of the transition and the support provided to teachers to adopt the new standards, with some schools relying on individual teachers to find online resources to support their instruction. We further discuss these findings below.

Table 4 Teachers’ MKT and MQI Scores

Case studies

In this section, we attend to the second research question: To what extent and how is teachers’ level of mathematical knowledge for teaching reflected in different aspects of their instruction? What other factors might account for teachers’ instructional decisions? In what follows, each case is presented by referring to the scores teachers received in the MQI (see Table 4) and by illustrating teachers’ practices through an episode from one of their lessons.

A case of high MKT and high MQI: Ren

Ren’s consistently high scores on all dimensions of the MQI capture several features of her instruction. Her knowledge, which places her at the 90th percentile within the larger sample, is manifested in the way she treats the mathematics at the core of her lessons, with clarity and precision (Mathematics is Clear and not Distorted score of 4.7). Her students are visibly engaged (i.e., the majority of students frequently raise their hands) in the mathematics tasks she planned for them (Students are Engaged score of 4.3), and most of her tasks are designed to build mathematical understanding (Tasks and Activities Develop Mathematics score of 3.7). Students frequently contribute their reasoning to the class discussion and are observed solving math problems collaboratively. Ren’s pedagogical content knowledge is evident in the ways she consistently builds on students’ ideas during her lessons (Teacher Uses Student Ideas score of 4.3). She unpacks students’ explanations of their mathematical contributions and formalizes students’ emergent ideas to move all students’ learning forward. For the most part, she also attends to and remediates students’ difficulties (score of 3.3). Her interviews show her attention to mathematical representations, such as the number line and the ten frames, and her consideration of how best to use them to develop students’ understanding.

Reflecting on her first year, Ren states that her pre-service program prepared her adequately for teaching kindergarten mathematics and for reflecting on her practice in ways that encourage continuous improvement. Frequent collaboration with fellow kindergarten teachers and professional development workshops provided her with further support. Although other kindergarten teachers use the district-suggested textbook to teach mathematics, Ren shares that she uses it only for homework and designs lessons herself based on the kindergarten curriculum standards.

Below, we discuss one of her lessons to argue for the role of knowledge in her instructional decisions. Students are working on a word problem and use a math board that includes a blank number line, a blank double ten-frame chart, a hundreds chart, and a blank space to write. Ren asks students to represent the number four in the ten-frame chart by marking four consecutive boxes with blue dots. She then poses the question, “If I have four in my ten frames, how many more do I need to fill in to get ten?” She tells students to quietly think in their heads first and then write their answers on top of their ten-frame chart. After giving students some time to think, she asks the question again.

Jerry::

It’s 6. 6! I think it’s 6

Ren::

Jerry, why do you think it’s 6?

Jerry::

Because… um… because… uh…. When it’s 4… after 4, it’s 5, 6, 7, 8, then 9, then 10, [waving his marker after each count] and it’s six more

Ren::

Oh, can you say that one more time and then show me? Say one more time

Jerry::

It’s like, you have 4, [holds up 4 fingers with his thumb folded in], then you add 6, [unfolds his thumb and displays 5 fingers with his other hand]. So, it makes 10 [holds up all 10 fingers stretched out]. You have this and but you have 6 so it makes, um, so it makes 10

Ren::

Makes 10. Alright. Tom

Tom::

I think it’s 6

Ren::

Why do you think it’s 6?

Tom::

Because if it’s 5, if it’s 5, you will have 5 plus 5—4. And if there is no 5 and then no more this [folds thumb in] this 5, and then change 4 and it will be 6

Ren::

Yeah, okay. So, let’s look at what Jerry was saying first. He said 4, right? And then he said like this, he went like this, he went, [using an orange marker, draws 6 more X’s in the 10 frame chart, one by one, counting on] 4, 5, 6, 7, 8, 9, 10. So how many X’s are in there?

Ss::

6

Ren::

6 of them. Like the way that you said, right? There’s 6 more. You go, 4, 5, 6, 7, 8, 9, 10. So 6 of them, 1, 2, 3, 4, 5, 6 [as she points to the X’s from the fifth box to the tenth]. And then, Tom, what were you saying?

Tom::

If 5 here [points to his ten frame]. And then [inaudible] 5, it will make—but if it’s 4, if it’s 4 here, it will change… It will, um… And then the 5 will [folds thumb in], and then it will be 4 and then it will be 6. Minus 1—it’s 6

Ren::

Right. Because you know that 5 and 5 make how many?

Tom::

10

Ren::

10, okay [writes in 10]. But we only have how many [points to the 4 blue dots]?

Tom::

4

Ren::

4 [writes 4]. So, 4 [points to the blank fifth box]

Tom::

Plus 6 makes 10 [Teacher writes out the number sentence as student says it aloud]

Ren::

Makes 10

In this excerpt, two different approaches to the task are shared. Jerry solves the problem by counting on from 4. Tom solves the problem using the additive compensation method. Tom starts with 5 and 5 and subtracts 1 from one of the 5s and adds the 1 to the other 5. This allows him to create another number sentence with a sum of 10. We also see that Ren provides students with adequate time to formulate an explanation by patiently waiting for them to complete their thoughts without interrupting or speaking for them. As kindergarteners, her students struggle to clearly convey their ideas with proper mathematical language, yet their explanations are rich and they use their fingers and math boards to aid them in explaining their mathematical thinking. In addition to these two distinct solution strategies, two more were shared during the lesson. This lesson structure is common across Ren’s three lessons.

Ren’s MKT is displayed in this excerpt in several ways. Drawing on her knowledge of content and students, Ren makes deliberate efforts to make student thinking visible, allowing children to take the time necessary to communicate their emerging mathematical ideas, and she makes sense of students’ incomplete explanations for the benefit of the whole group. In addition, she displays specialized content knowledge by recognizing the two students’ explanations as two distinct solution methods. In responding to Jerry’s method, Ren shares a ten-frame chart with the whole group and marks the remaining six blank boxes with the letter X as she counts on aloud from 5 to 10. She then goes back, counts aloud how many X’s she marked, and concludes that six more are needed to make 10. In responding to Tom’s method, Ren asks the student to reiterate his explanation and helps clarify what he is trying to convey. She writes out number sentences for the whole group that represent his reasoning and makes explicit the connection between 5 + 5 and 4 + 6 by pointing to the ten-frame representation and linking it back to the symbolic representation in the number sentences.

Finally, Ren’s knowledge of content and teaching is evidenced by her deliberate probing of student thinking, which “requires coordination between mathematics at stake and the instructional options and purposes at play” (Ball et al. 2008, p. 401). At key moments during her lessons, she pauses and guides the whole group’s attention to different strategies for solving a mathematical task, and in doing so she develops mathematical ideas. We observe these moves in the excerpt above and in the few minutes that come after when she asks two additional students to share their solutions. She probes their thinking and asks for further clarification; then she makes the strategies explicit to the whole group.

A case of low MKT and low MQI: Nina

Nina exemplifies the case of a teacher whose developing knowledge limits her instructional decisions in several ways (she places at the 24th percentile within the larger sample for MKT and received an overall lesson score of 2 on the MQI). Her scores on the whole-lesson dimensions of the MQI are mostly in the 2 range, with 3s in Students are Engaged and Mathematics is Clear and Not Distorted. A score of 2 on the whole-lesson dimensions indicates that a teacher is observed to be enacting practices in ways that hinder or obscure mathematics learning; a score of 3 indicates that the teacher enacts the practices captured by the scoring dimension without particularly problematic features, but not at a high-quality level. The tasks in which students are engaged in Nina’s lessons are not mathematically rich (Lesson Contains Rich Mathematics score of 2), and she shows minimal attention to student thinking (Teacher Uses Student Ideas score of 2 and Teacher Attends to and Remediates Student Thinking score of 2.3).

All three mathematics lessons we observed begin in a whole-class setting with students seated on a spacious rug, facing the whiteboard, with Nina standing in front. She reviews counting numbers from 1 to 30 by singing along to a number song. She then divides the whole class into two large groups and teaches the lesson to one group at a time (while a teacher assistant guides the other half through other activities). She begins her lessons by demonstrating the day’s task on the whiteboard easel placed in front of two large rectangular tables with seven students seated at each table. The task involves completing a worksheet. Students then work independently as she walks around to check their progress.

Nina’s developing knowledge is evident in the limited opportunities she creates for students to engage with the mathematics in rich ways and in her focus on the correctness of student work rather than on student thinking. Although students sit in groups and are reminded to talk to their partners to share their ideas, Nina frames learning mostly as an individual endeavor. When students are working independently and she notices a student struggling, she directs that student’s attention to the whiteboard and demonstrates how to do the problem correctly. She repeats this process when she notices another student struggling. Instructional moves that would indicate specialized content knowledge or knowledge of content and teaching are not observed in her instruction. For example, she does not look for common errors among the students who show signs of struggle, nor does she utilize students’ work to make a mathematical point (Ball et al. 2008).

The excerpt below illustrates her practice. In this lesson, Nina has clipped a large number chart with natural (counting) numbers up to 100 to an easel. The first row begins with 1 and ends with 10, the second row begins with 11 and ends with 20, and so on up to 100. She reviews the number chart with her students to practice number recognition. All students have smaller individual number charts that look identical to Nina’s. One strategy she introduces to her students is how to find the rows for the 10s, 20s, 30s, … 90s. In the excerpt below, she begins with the 50s row. S indicates that a student is speaking. When a new student speaks, s/he is labeled with a consecutive number (S2, S3, etc.). Ss indicates that more than one student is speaking at once.

Nina::

Let’s see. I want to find the 50s row. 10 [pointing to 11], 20 [pointing to 21], 30 [pointing to 31], 40 [pointing to 41], 50 [pointing to 51]. But I want to double check. I want to see if there’s a 5 in the front, so […] so, let’s look across. [underlines the first digit (5) of all numbers in the “50s row” except for the last number, which is 60]

S::

Do we do that too?

Nina::

No you’re just looking. Does it have a 5 in front?

Ss::

Yeah

Nina::

Find the 50s row in yours [stands up and begins to circulate around the room]. Circle the first number in the 50s row. In the 50s row. With the 5 in the front. The 50s. Circle the first number

S2::

Do I circle it?

Nina::

Yes you can circle it

Nina::

Okay, now [sits back down], let’s see, I think Susan is going to be ready to answer it. So, we went down, we found the row and it had the 5s in the front. That’s the 50s. Now, find –I want you to work with your partner–find–go across [motions finger across the row]–find 56, go across, find 56 [stands up to circulate room]. Go across the 50s row. Find 56

S4::

I did it!

S5::

I did it too!

[Teacher sits back down and shuffles through her equity sticks and picks one out]

Nina::

Ok. Now, I want to see—oh, you’re doing so well—Diana, can you come show us how you found 56

Diana::

[slides finger across the “50s row” and stops at 56]

Nina::

Is that 56? [gives a marker to Diana] Yes it is

Diana::

[uses marker to circle 56]

Nina::

Now, Diana, I have a challenge for you. Remember we learned numbers before and after? Could you circle, the number that comes before 56. Before

Diana::

[circles 55]

Nina::

[nods in approval]

This segment depicts the kinds of interactions we observed across Nina’s three lessons. We observed Nina focusing on procedures for correctly completing mathematical tasks. Her instructional moves can help students develop automaticity in recognizing numbers, but the procedural approach limits the development of number sense. For example, in this lesson she uses the number chart to show students that there is a designated row for each decade (the 10s, 20s, 30s, and so on). However, the last number in each row shares its first digit with the numbers in the following row. Specifically, in the second row, the last number is 20, and the third row begins with 21 and ends with 30. Given that Nina wants students to see each row as having a specific digit in the tens place, the number 20 should be grouped with the numbers 21 through 29. Although she points to the tens digit of the first number in each row, the number she reads aloud conflicts with the number she is pointing at. For example, she points to 21 and says “20.” She then points to 31 and says “30.”

In her interview, when asked if there was anything she would change about her lesson, she reflects on the chart being confusing. However, based on the presence of the number chart in a previous videotaped lesson, this is not the first time she has used it. In this episode, her inability to anticipate beforehand what students will find confusing exhibits her developing knowledge of content and students (Ball et al. 2008). Her developing knowledge is also evident in the limited opportunities children have to engage in rich mathematics. Nina explains in her interview that she had students identify numbers beyond the standard objective of 30 in order to challenge them. However, the task of identifying numbers without a focus on number sense lowers the cognitive demand. Many of her questions are closed-ended and can simply be answered with a yes or no or by showing the answer. Stronger knowledge of content and teaching could have supported her in moving beyond a routinized method and asking questions that are more appropriate to her students’ demonstrated level of understanding and that support higher-order thinking.

In the remaining part of the lesson, students play a game in pairs. One student chooses a number, and the partner uses the “going down a column, across the row” strategy to find the number. This task again engages children in limited mathematical reasoning. Knowledge of content and students and knowledge of content and teaching could have supported Nina in anticipating what her students might find confusing when learning new 2- and 3-digit numbers and in sequencing different tasks that build her students’ number sense and reasoning skills instead of repeating the same activity.

A case of high MKT and low MQI: Hope

Hope’s performance on the MKT survey places her at the 81st percentile among the larger sample of novice teachers. Her overall MQI lesson score of 2, by contrast, is among the three lowest scores in the group of ten teachers in this study. Similar to Ren’s, Hope’s instruction is mostly clear and free of mathematical errors (her Mathematics is Clear and Not Distorted score is 4.3); however, her lessons are characterized by low levels of mathematical density (Lesson is Mathematically Dense score of 2) and by tasks that do not engage students in meaningful mathematical explorations (Tasks and Activities Develop Mathematics score of 2.7). Opportunities for students to engage in cognitively demanding and rich mathematics are limited (Lesson Contains Rich Mathematics score of 2), and Hope rarely elicits and builds on student thinking (Teacher Uses Student Ideas score of 2).

Hope’s lessons typically begin with students seated in a designated spot on a large rug and facing Hope, who is seated in a child chair next to a small whiteboard easel. She introduces students to mathematical ideas in a clear, procedural manner. She demonstrates the activity for the lesson. Students then are excused to work at their desks, which are grouped into fours. In two of the lessons we observed, students are at their desks, quietly working independently; in the third lesson, students spend the entire lesson on the rug playing a number game.

The excerpt below provides a glimpse into Hope’s math lessons. It is drawn from a segment that features a whole-class discussion on addition and subtraction signs.

Hope::

Friends, when you see a sign like this, Jimmy. [Draws an addition sign] When you see a sign like this, Teresa, what is it telling us to do?

Teresa::

Plus

Hope::

Oh someone raise your hand and tell me what is this sign telling us to do? Tim?

Tim::

Plus

Hope::

It’s a plus sign. But what do we do if there’s a number over here [writes 1] number one and there’s a number over here [writes 2]. It’s a plus sign but what does it tell us to do? What are we doing?

Alex::

Switch!

Hope::

What are we doing to those two numbers?

S::

Put them together!

Alex::

Switching numbers

Hope::

Karen, raise your hand and say it

Taylor::

Switching numbers

Hope::

Oh raise your hand, friends. Jimmy, can you tell us what are we doing to the numbers?

Jimmy::

Putting them together to make a big number

Hope::

We’re putting them together to make a bigger number. So we can call this a plus sign. Can we say plus sign?

Ss::

Plus sign

Hope::

Or we can call it an addition sign [points to the addition sign]. Can we say addition sign?

Ss::

Addition sign

Hope::

It’s called addition because we add two numbers together. We’re adding this number over here and we’re now adding this number over here together. So to help us remember, I’m gonna have you use your hands. [Teacher puts her hands up] Ok, get your hands ready. When you have an addition sign, [Teacher forms an addition sign with her arms] you add two numbers together. Ok, Kelly can I see your hands? Let’s make the plus sign. Riley, let’s make the plus sign.

[Students are crossing their arms to form addition sign]

Hope::

Riley, let’s make the plus sign. Ok, one arm goes straight like this

Riley::

Like this. [student crosses his arms]

Hope::

Riley, make one arm go straight like this and the other arm crosses over just like that. So we are making an addition sign, Sydney

Although Hope draws on her common content knowledge in this episode by attending to the meaning of and symbolic notation for the addition sign, we observe her looking only for correct answers when interacting with students. For example, in the excerpt above, Alex raised an interesting idea—that “1 + 2” means to “switch” the numbers. Taylor followed his lead and added, “Switching numbers.” It is difficult to infer student thinking from these short utterances. However, this would have been an opportunity for Hope to explore students’ mathematical thinking and engage the class in rich discussion of both correct and incorrect ideas. Hope’s instructional moves in this and other similar episodes, as captured by the MQI dimension Teacher Uses Student Ideas, are indicative of low instructional quality. To receive a high score on this dimension, teachers must respond appropriately to students and build on their ideas during instruction. Hope’s knowledge is not visible in these situations.

Later on in the lesson, Hope prepares an activity that involves using colored tiles to represent an equation. She models the problem “4 + 1” by first stating the problem verbally, then placing four orange tiles and one yellow tile on the board. She asks her students, “What is four plus one?” to which all students reply, “Five.” She then asks student volunteers to come up to represent additional equations that she verbally states with the colored tiles.

Hope::

Here’s your equation. Ok? We’re gonna do 3 plus 2. And friends you have to count with him and make sure he’s doing a good job. We’re gonna help him out, ok? 3 plus 2. [whispers to Sean] And you have to pick different colors.

[Sean places 3 green and 2 blue tiles on the whiteboard]

Sean::

1, 2, 3, 4, 5

Hope::

Alright. Come over here, Sean. Friends, does this look like three plus two? [pointing to the tiles]

S::

Yeah

Hope::

Does it look like 3 plus 2?

Ss::

Yes

Hope::

Taylor, do you agree? Yeah? Alright friends what does three plus two equal?

Ss::

5

Hope::

5. Nice job. Thank you, Sean. Nice job

Hope then asks two more students to come up to represent “2 + 2” and “0 + 5” on the board. The students follow the same procedure of placing the tiles correctly, and Hope asks the class whether the tile representation is correct. In this exchange, we can see that the structure of her interaction with students follows the Initiate-Response-Evaluate (IRE) pattern of talk (Cazden 2001). Following this episode, students complete a worksheet with 15 addition problems independently at their desks. Students are given individual sets of number tiles that have numbers written on one side and an array of dots on the back. Hope instructs students to use the side with dots to help them solve the problems. The solutions to the problems range between 2 and 5.

In her interview, Hope mentions that, with the exception of two students, all students found the problems to be “way too easy.” Even though she provided number tiles for students to use, “they didn’t even bother using those tiles.” Although Hope exhibits her knowledge of content and students in her assessment of the problems’ difficulty and her specialized content knowledge in how she explicitly links the tiles to the verbal statement of each equation, she does not act on this knowledge during the lesson. Similar observations can be made for her other lessons. Her selection of mathematical tasks—mostly too easy and closed-ended, with only one possible solution method—limits students’ opportunities to engage deeply with the mathematics. Her focus on correct answers and procedural fluency does not open up opportunities for rich mathematical discussions. This leads to her low scores on both the Common Core-Aligned Student Practices and Richness of Mathematics dimensions of the MQI.

Hope’s interviews include several comments that shed light on her instructional decisions and help explain the limited opportunities her lessons offer for student mathematics learning, despite her MKT. Hope chose to structure the videotaped lessons around activities that require students to use manipulatives without the support of a worksheet, perhaps hoping to please the interviewer, who comes from the university where she completed her credential program. At the same time, in her interviews she expresses her uneasiness with tasks that are not based on a worksheet. She explains that her math instruction is mostly worksheet-based, in part because she plans lessons in close collaboration with her grade-level colleagues and in response to a rather strict pacing plan that her school leadership enforces. She describes her planning group as both supportive and constraining: other teachers are kind and always available to share resources and suggestions, but their teaching approach is traditional and does not include the kind of conceptually oriented tasks she learned about during teacher preparation. In addition, Hope explains that in her planning group math tasks are often chosen for the extent to which kids enjoy them, rather than for the affordances they provide for mathematics learning. In her interviews, she describes her instructional decisions as driven by two children in her classroom who tend to fall behind and need one-on-one attention, while she is satisfied with the rest of her class as long as they are attentive and interested in the activities she proposes. Finally, she laments that, as a newcomer to her school, she did not feel comfortable sharing these and other challenges with her colleagues, fearing that they would think she was not a good teacher.

Discussion

This study examined the relationship between mathematical knowledge for teaching and mathematical quality of instruction in a sample of ten first-year elementary school teachers. Findings provide initial evidence that teacher knowledge matters for quality teaching from the very beginning of teachers’ careers. The positive association between teachers’ MKT and their instructional quality as captured by the overall lesson scores confirms findings by Hill and colleagues (2012). The strong and statistically significant correlation between MKT and scores on the Mathematics is Clear and not Distorted MQI dimension also confirms prior studies that found similarly high and significant correlations (Hill et al. 2008, 2012). Most teachers in this study taught early elementary grades (kindergarten through second grade); therefore, this finding is noteworthy and underscores the role of MKT in the early years, when teachers are responsible for developing a solid mathematics foundation in young children.

Hill and colleagues (2008) also stated that “not only do high-knowledge teachers avoid mathematical errors and missteps, they appear able to deploy their mathematical knowledge to support more rigorous explanations and reasoning, better analysis and use of student mathematical ideas, and simply more mathematics overall” (p. 457). In our study, the correlations of moderate size between MKT and other dimensions of instructional quality that capture the mathematical nature of instruction (Tasks and Activities Develop Mathematics and Lesson is Mathematically Dense) support Hill et al.’s conclusions. With regard to dimensions of instructional quality that capture teachers’ abilities to build on students’ contributions (Teacher Attends and Builds on Student Ideas; Teacher Responds to Student Difficulty; and Lesson Contains Common Core-Aligned Student Practices) and the richness of the mathematics available to students (Lesson Contains Rich Mathematics), findings are instead not as clear. The correlations of these dimensions with MKT were weak in our sample of teachers and raise further questions.

The case studies provide possible interpretations of these findings. We observed that how teachers structure their lessons, the tasks they choose, and the roles participants take in the classroom have an impact on instructional quality. The cases of Nina and Hope illustrate this point. They both structure their lessons according to what Stigler and Hiebert (1999) have described as a typical US script for a mathematics lesson. They begin their lessons by introducing the new topic and demonstrating how to solve mathematical tasks with a focus on procedural accuracy. They assign students individual seatwork, circulating around the room to check the correctness of student work. Classroom discourse is teacher-directed and students’ contributions are minimal and mostly limited to short utterances, following an IRE format (Cazden 2001). Although students sit in groups or pairs, they are not asked to collaborate. Instead, they work quietly and independently.

In the case of Nina, this lesson structure, combined with her emerging MKT, may explain the many missed opportunities to develop student mathematical understanding we observed in her lessons. The lack of a curriculum and a textbook on which to rely during the transition to new standards further complicated Nina’s instructional decisions. As Hill et al. (2008) note, textbook materials may provide teachers with support in planning lessons and utilizing correct mathematical representations in their teaching.

The case of Hope provides insights into additional contextual factors that may play a role in the quality of teacher instruction. While Hope’s knowledge is visible in the lack of mathematical inaccuracies and errors, the tasks she chooses do not offer opportunities for her to utilize her knowledge. The differences in instructional quality between Hope and Ren are striking given their similar level of knowledge and the similar context in which they teach.

Researchers have discussed the affordances and constraints of lesson structures and task design before. Stigler, Hiebert, and colleagues, for example, have highlighted the limitations of the typical US mathematics lesson script and the low level of cognitive demand characterizing US school mathematics (Hiebert et al. 2005). The reasons teachers choose particular lesson and participation structures may vary. Hill et al. (2008) argue that instructional decisions are informed by teachers’ conceptions of mathematics teaching and learning and beliefs about effective instruction. The case of Hope offers insights into how district pacing plans, local school cultures, and teachers’ communities may limit beginning teachers’ choices in significant ways. A seemingly supportive environment constrained Hope’s practices. The pressure to conform to approaches adopted by her more senior peers and the lack of outside support to turn to when addressing difficulties left her little choice but to impoverish her lessons.

In addition, Hope’s preoccupation with two struggling children absorbed most of her attention and detracted from reflecting on and making instructional decisions that could benefit the other twenty students in her class. Novice teachers’ inexperience with addressing multiple demands may limit their ability to act on their knowledge during instruction (Desimone et al. 2016). It is possible that with more experience and adequate support, these challenges could be overcome and teachers could draw more easily on their knowledge. This may in part explain the differences between our findings and those obtained in studies with more experienced teachers (Hill et al. 2008, 2012).

Finally, it is also possible that Hope’s lessons would have scored higher on the MQI had she based them on a published mathematics curriculum. All three case-study teachers designed their own lessons and activities for the videotaped lessons. Although Ren was able to maintain a high level of instructional quality under these conditions, for both Nina and Hope this choice might have contributed to their low scores. As mentioned above, Hill et al. (2008) discuss the availability of curriculum materials as a factor that might contribute to instructional quality. The transition to new curriculum standards might have exacerbated this issue in our study.

Conclusions

In sum, although exploratory in nature, this study extends prior literature on the association between teacher knowledge and instructional quality to a sample of first-year teachers. It suggests that mathematical knowledge for teaching matters at the beginning of teachers’ careers, while also highlighting additional factors that may contribute to instructional quality. These findings have implications for both policy and practice.

From a policy standpoint, they call attention to the ways we assess elementary teachers’ preparedness for credentialing purposes and, specifically, to how we evaluate their mathematical knowledge for teaching. From a practice standpoint, they offer several suggestions for teacher preparation and induction. Programs should tailor learning experiences to individual teacher candidates’ knowledge; they should be explicit about typical US lesson scripts and how these may limit students’ opportunities to learn; they should offer ample opportunities to observe and practice teaching approaches that build on and respond to student mathematical ideas; and they should include discussions of how best to navigate the school community so that it does not constrain one’s work. In addition, novice teachers should receive continuous support as they enter the profession. Induction programs should provide the guidance necessary to overcome the initial obstacles and challenges that prevent knowledge from being utilized in the classroom in the service of student learning.

We conclude with words of caution due to the several limitations of this study. We did not recruit a random sample of participants; thus, the findings presented may not generalize to a larger population of teachers. Teachers in this study graduated from a selective program, and on average their MKT was higher than we might expect in a US national sample. In addition, the number of participants is small, and the knowledge-instructional quality association we found may have occurred by chance. We utilized multiple sources of evidence to conduct our analyses, yet a larger sample is warranted to explore further our findings and conclusions. Notwithstanding these limitations, it is our hope that this study will encourage other researchers to explore further the relationship between knowledge and instructional quality in novice teachers.