The mathematics education field recognizes the importance of noticing children’s mathematical thinking as a means for supporting children in learning mathematics (Sherin et al. 2010). As such, extensive research has investigated how teachers learn to notice children’s mathematical thinking. For instance, researchers have decomposed professional noticing of children’s mathematical thinking into interrelated skills—attending, interpreting, and deciding how to respond (Jacobs et al. 2010; van Es and Sherin 2002), found relationships among the skills (Barnhart and van Es 2015), determined that teachers’ noticing skills can develop over time (Jacobs et al. 2010; van Es and Sherin 2008), and recognized that teachers develop these skills in a variety of ways (van Es and Sherin 2008). Further, researchers have posited that teachers’ noticing skills can be developed with supports such as the use of video (van Es and Sherin 2010), online discussions (Fernandez et al. 2012), learning trajectories (Wilson et al. 2013), and curriculum materials (Empson and Junk 2004).

Of the three noticing skills (attending, interpreting, and responding), responding seems to be the most difficult to develop (Barnhart and van Es 2015; Jacobs et al. 2010; Tyminski et al. 2014; Vacc and Bright 1999). In other words, teachers demonstrate more facility with attending to and interpreting children’s mathematical thinking than with responding. Additionally, the teacher response is fundamental to both studying and enacting pedagogical approaches informed by children’s mathematical thinking, as it results in a visible action that depends on the internal processes of attending and interpreting (Barnhart and van Es 2015). After considering this prior research, we elected to narrow our focus to specifically address the responding skill. Our aim was to leverage the existing research literature on responding to develop a tool (the Responding Rubric) with which we could evaluate and investigate teachers’ responding skills. While responding to children’s thinking can take a variety of forms such as asking questions designed to support or extend student thinking (Jacobs and Ambrose 2008), redirecting (Lineback 2015), or posing “next problems” (Jacobs et al. 2010), our tool measures responding through problem posing, because responsive problem posing is a repeated activity that elementary mathematics teachers engage in and one that results in a tangible artifact. Our research question was, “What does the Responding Rubric, applied to teachers’ responsive posing of problems, illustrate about teachers’ responding skills?”

We begin with a literature review that highlights prior research on teacher knowledge and learning about children’s mathematical thinking. In the Methods section, we explain data collection related to teachers’ responses to children’s mathematical thinking tasks, and our development of a measurement tool—the Responding Rubric—which provides a means to measure four aspects of responding to children’s mathematical thinking through problem posing: (1) group or individual responses, (2) consideration of children’s existing strategies, (3) anticipation of children’s future strategies, and (4) responsive problem posing. Finally, we discuss how our results further our understanding of teachers’ noticing skills and provide directions for future research.

Literature review

Erickson wrote, “human noticing is active rather than passive. We ‘direct’ our attention and we ‘pay’ it” (2010, p. 17). In directing our attention, we notice some things and not others, and we pay costs according to what we pay attention to (Erickson 2010). By studying teachers, Erickson recognized that, “noticing was highly variable across individual teachers—this implies that differing teachers do not inhabit identical subjective worlds as they are engaged in the real-time conduct of noticing while they teach” (2010, p. 20).

Research on noticing children’s mathematical thinking began with Cognitively Guided Instruction (CGI) and other similar professional development approaches. These approaches gave teachers support for developing children’s mathematical thinking in number and operations through posing word problems (Carpenter et al. 1999, 2014). These approaches have demonstrated that focusing instruction on children’s mathematical thinking can increase teachers’ knowledge bases and support children in learning number concepts and operations with understanding, leading to more long-term learning (Fennema et al. 1993, 1996a, b; Peterson et al. 1989). These points speak to the importance of examining teachers’ responsive problem posing by focusing on specific aspects of this responding skill and applying a measure of problem posing effectiveness.

In the context of a video club, van Es and Sherin (2008) examined how teachers learned to notice children’s mathematical thinking. These researchers found that teachers shifted in what they noticed in videotaped class sessions across five dimensions. Dimensions included the actors (student, teacher, or other), what teachers noticed (mathematical thinking, pedagogy, climate, management, or other), how teachers analyzed practice (describe, interpret, or evaluate), the level of specificity used to discuss events (general or specific), and whether comments were video based or not (van Es and Sherin 2008, p. 250). Over time, more attention was given to the students and mathematical thinking from more of an interpretive stance that became more specific and video based. Additionally, teachers “followed different paths as they learned to notice in new ways” (2008, p. 253). van Es and Sherin (2008) identified three different paths: (1) the direct path in which teachers made a “single qualitative shift in noticing” (p. 257); (2) the cyclical path in which teachers cycled between broad and narrow perspectives along the dimensions; and (3) the incremental path in which teachers developed “gradually in their noticing” (p. 260). Thus, we can surmise that teachers make shifts in their abilities to notice as they engage in sustained professional development and that teachers develop noticing skills in different ways.

More recent work on the pedagogical use of children’s mathematical thinking has focused on unpacking the in-the-moment decision-making done by teachers as they engage in instruction (Jacobs et al. 2010). To unpack teacher decision-making, researchers had teachers observe children solving problems via video or written work away from the classroom, posed a series of prompts, and analyzed teachers’ responses to those prompts as a way to capture decision-making. Through this analysis, Jacobs and colleagues (2010) introduced the construct of professional noticing of children’s mathematical thinking. Three interrelated skills comprise the construct: attending to children’s strategies, interpreting children’s understandings, and deciding how to respond on the basis of children’s understandings. To date, responding has received less attention than the other aspects of noticing, and earlier conceptualizations of noticing did not include the responding component. What we know about responding is discussed next.

Responding to children’s mathematical thinking

Jacobs and her colleagues (2010) found that 14% of prospective teachers, 26% of practicing K-3 teachers with no professional development, 54% of teachers with 2 years of professional development, and 82% of teachers who had four or more years of professional development considered, in varying degrees, children’s mathematical thinking in their problem-posing responses. Based on these results, it appears responding to students through problem posing is a skill that teachers can develop over time (Jacobs et al. 2010). As evidenced by participants’ scores related to all three noticing skills, however, responding also seems to be the most difficult skill to develop. For all four participation groups, mean scores were the lowest for responding.

Jacobs and colleagues (2010) also provided a list of growth indicators that described shifts that occur as teachers develop their noticing of children’s mathematical thinking. Of particular importance to us are the three growth indicators related to responding:

  1. A shift from considering children only as a group to considering individual children, both in terms of their understandings and what follow-up problems will extend those understandings.

  2. A shift from reasoning about next steps in the abstract (e.g., considering what might come next in the curriculum) to reasoning that includes considerations of children’s existing understandings and anticipation of their future strategies.

  3. A shift from providing suggestions for next problems that are general (e.g., practice problems or harder problems) to specific problems with careful attention to number selection (Jacobs et al. 2010, p. 196).

According to these indicators, growth in the responding skill is evidenced by focusing on individual children, considering children’s existing understandings, anticipating future strategies, and providing specific problems with attention to number choice.

Two other studies involving pre-service teachers (PSTs) provided evidence that responding was more difficult to develop than the other two noticing skills. Vacc and Bright (1999) found that PSTs could recognize aspects of children’s mathematical thinking after a 2-year sequence of coursework, but they struggled with using their knowledge of children’s mathematical thinking in instruction. With the enactment of three course activities asking PSTs to attend to, interpret, and respond to children’s mathematical thinking, Tyminski and colleagues (2014) found that “73% of PSTs demonstrated evidence of attending to children’s strategies, 63% demonstrated evidence of interpreting children’s thinking, and around 20% demonstrated evidence of utilizing children’s thinking in posing their next problem” (p. 214).

Results from the work of Barnhart and van Es (2015) may explain why developing the responding skill may be more difficult. The researchers asked PSTs in a science education class to respond to prompts in a video analysis task and scored the responses as low, medium, or high sophistication for each of the noticing skills. By using a mixed-methods analysis approach, the researchers were able to identify connections among the noticing skills. Specifically, they found, “sophisticated analyses and responses to student ideas require high sophistication in attending to student ideas. However, high sophistication in attending to student ideas does not guarantee more sophisticated analyses or responses” (Barnhart and van Es 2015, p. 83). In other words, it seems that high levels of attention (and perhaps interpretation) are necessary, but not sufficient, conditions for high levels of responding.

Taken together, these studies suggest two main ideas. First, the three noticing skills are interrelated and dependent on each other. We can then infer that sophistication in teachers’ responses implies that the teachers are capable of high levels of attending and interpreting, although the opposite relation does not hold true. Second, all three noticing skills can be developed. Responding, however, is the most difficult to develop and involves several elements. Thus, we see our work as extending the work by Jacobs and colleagues (2010) by focusing in greater depth on the development of teachers’ responding skills and investigating the growth indicators, organized into a rubric, within and across participants and across multiple sets of children’s work.

Methods

Participants

Participants included 20 female K-5 teachers from four schools within the Forks School District. We selected participants with a range of experience (1–6 years) participating in CGI professional development. Teachers also varied in terms of when they received their CGI training: some had participated recently, while others had completed their participation 3 years prior. In this professional development program, participants were introduced to and engaged in exploration around the problem-type and children’s mathematical thinking frameworks. Participants also analyzed and sorted multiple examples of children’s work. Three teachers had received further training as CGI trainers. This selection process not only allowed us to build on the results of Jacobs et al. (2010), but also provided a variety of responses that could inform our understanding of teachers’ noticing skills. Participants were given pseudonyms beginning with the letters A–V, excluding O (Amy, Barb, Carrie, and so on).

Data collection

Data included participant responses to children’s mathematical thinking from two different contextualized word problems. We chose to use children’s work from two different problems in order to investigate teachers’ noticing across content without fatiguing the teachers, given the extensive responses required. The first problem was the M&M Problem completed by three children, which was also used by Jacobs and colleagues (2010). It is a multiplication problem—or an Unknown Product problem according to the Common Core State Standards for Mathematics (CCSS-M) framework, which reads: “Todd has 6 bags of M&Ms. Each bag has 43 M&Ms. How many M&Ms does Todd have?” The three students solved the problem in a variety of ways, varying in strategy, sophistication, and efficiency (student work can be found in Jacobs et al. 2010). We briefly describe the student strategies here. Cassandra wrote out 6 groups of 43 and added each pair of 43s by place value, showing 80 and 6 before combining each pair into a group of 86. She then added 86 + 86 by place value, getting 172. She solved 172 + 86 by taking 20 from 70 in 172 and combining it with 80 from 86 and 100 from 172 to get 200 + 52. She arrived at an incorrect answer of 252 as she neglected to add in the 6 from the last group of 86. Josie also wrote 6 groups of 43, but represented each 43 using 10s and tally marks (40 & | | |). She skip-counted by 40s six times for a total of 240 and then counted on using the tally marks (it is unclear whether she counted by 3s or 1s) to arrive at her final answer of 258. Alexis represented 6 groups of 43 in tally marks. Each group of 43 is represented by 8 circled groups of 5, labeled as 40, and 3 more tally marks. Her final answer was 258. It is unclear how she counted the groups of 5s and the single tally marks.
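
For readers without the original written work at hand, the arithmetic in these descriptions can be condensed as follows. This is simply a restatement of the strategy details described above, not additional data.

```latex
% Condensed restatement of the three strategies on 6 x 43 described above
\begin{align*}
\text{Cassandra:}\ & 43 + 43 = (40 + 40) + (3 + 3) = 86 \quad\text{(for each of the three pairs)}\\
& 86 + 86 = 172;\quad 172 + 86 = (100 + 20 + 80) + (50 + 2) = 252 \quad\text{(final 6 omitted; 258 intended)}\\
\text{Josie:}\ & 40 + 40 + 40 + 40 + 40 + 40 = 240;\quad 240 + 18 = 258 \quad\text{(counted on from the tally marks)}\\
\text{Alexis:}\ & 6 \times (8 \times 5 + 3) = 6 \times 43 = 258 \quad\text{(all tally marks counted)}
\end{align*}
```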

The second problem was the Newspaper Problem. This problem was a separate, result unknown—or takeaway, result unknown (CCSS-M)—problem: “Daniel is delivering newspapers. On his entire route, he has 255 newspapers to deliver. He has already delivered 165 newspapers. How many does he have left to deliver?” The first author created the student work for the Newspaper Problem based on strategies she had seen in a third-grade classroom (see Appendix A). We chose the Newspaper Problem because takeaway result unknown is a prominent problem type within the CCSS-M, and the children’s solutions were varied. Damian used an incrementing strategy. He added 40 to 165 to get 205 and then added 50 to 205 to reach 255. To find his answer, he added 50 + 40 = 90. Leticia solved the task by taking away in parts and then adding back at the end. She began with 200 (of 255) and subtracted 100 (of 165), resulting in 100. From this, she subtracted 60 (from 165) and then subtracted 5 (from 165), arriving at 35. She then added back 55 (from the original 255) to arrive at her answer of 90. Maxine subtracted by place value. She solved 200 − 100 = 100; 50 − 60 = −10; and 5 − 5 = 0. To find her final answer, she wrote “100 − 10 = 90.”
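
Again, purely as a condensed restatement of the strategy details just described (all three children working on 255 − 165):

```latex
% Condensed restatement of the three strategies on 255 - 165 described above
\begin{align*}
\text{Damian (incrementing):}\ & 165 + 40 = 205;\quad 205 + 50 = 255;\quad 40 + 50 = 90\\
\text{Leticia (taking away in parts):}\ & 200 - 100 = 100;\quad 100 - 60 = 40;\quad 40 - 5 = 35;\quad 35 + 55 = 90\\
\text{Maxine (subtracting by place):}\ & 200 - 100 = 100;\quad 50 - 60 = -10;\quad 5 - 5 = 0;\quad 100 - 10 = 90
\end{align*}
```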

Teachers were asked to describe, interpret, and respond to both sets of work through prompts in a structured interview conducted at each participant’s school. For the purposes of this paper, we analyzed replies to the responding prompt only. The responding prompt was as follows: “Pretend that you are the teacher of these children. What problem or problems might you pose next? Why?” There were no additional prompts after this question was posed. Teachers’ responses were audiotaped and transcribed.

Data analysis

Data analysis began with focused coding of the transcripts of teachers’ responses (Coffey and Atkinson 1996). We began our analysis of the data using the three growth indicators given in the Jacobs et al. (2010) study. In our first pass through the data, we found that participants sometimes engaged in either considering or anticipating, but not necessarily both. This prompted us to separate the second growth indicator, considerations of children’s existing strategies and anticipation of their future strategies, into two categories. Discussion of this initial pass also revealed that some participants engaged in considering and anticipating at different levels, motivating the need for criteria that allowed us to further differentiate these two activities. Based on these discussions, we developed the Responding Rubric (Table 1) to describe four aspects of participants’ responses: (1) group or individual responses; (2) consideration of children’s existing strategies; (3) anticipation of children’s future strategies; and (4) responsive problem posing. We established and refined operationalized definitions for each of these aspects through several more passes through the data. We first viewed the rubric as a lens useful for our work decomposing and making sense of the multiple aspects of teacher responding. Upon further reflection, we decided it described teacher responses in general enough terms to also be potentially useful to others either in its current form, or with slight modification.

Table 1 The responding rubric

To answer our research question, “What does the Responding Rubric, applied to teachers’ responsive posing of problems, illustrate about teachers’ responding skills?” we started with two researchers independently scoring each participant’s response for each problem set. In scoring responses, we first identified whether the participants responded to the children together or as individuals. If a participant responded to individual children, their response to each child was scored separately for each of the other three categories. Therefore, a complete response could receive a minimum of two scores for each of the three rubric categories if the teacher responded to the three children as a group in both problems. If the teacher responded to each child individually in both problems, their response could receive a maximum of six scores for each of the other three categories. In some cases, teachers responded to two children in a problem or grouped two children together for a single response. Inter-rater reliability was calculated for each problem set for all interview prompts and categories and was established at 83% for the M&M Problem and 86% for the Newspaper Problem. All disagreements were resolved through discussion.
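
The reliability figure above is reported as a simple percentage of agreement. Assuming plain percent agreement (rather than a chance-corrected statistic), the calculation can be sketched as follows; the score lists are hypothetical placeholders, not the study data.

```python
# Sketch of a percent-agreement calculation between two raters (hypothetical scores).

def percent_agreement(rater_a, rater_b):
    """Proportion of items on which two raters assigned identical rubric scores."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same items.")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical rubric scores (1-3) for one problem set
rater_1 = [3, 2, 1, 3, 2, 2]
rater_2 = [3, 2, 1, 2, 2, 2]
print(f"Agreement: {percent_agreement(rater_1, rater_2):.0%}")  # Agreement: 83%
```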

Findings

In this section, we present findings from our focused coding. Each section reports findings for one of the four aspects of responding. Tables provide all participant scores followed by examples to further describe and illustrate participants’ responding skills.

Responses to individual and groups of children

In Table 2, we provide data related to whether participants responded to individual children or a group. An “I” indicates a response to individual children, and a “G” indicates a response to a group of children. In some cases, a teacher responded to two of the three students (coded as “2”).

Table 2 Participant scores related to responding to a group (G) or individual (I) children in the M&M and Newspaper Problems (NP)

Each of the 20 teachers was given two opportunities to respond to children’s mathematical thinking, for a total of 40 instances. Of those 40 instances, 19 were individual responses, 16 were group responses, 4 were responses to two of the three children individually, and 1 addressed two children together. If we consider two categories, individual responses and non-individual responses, teachers were about equally likely (19 vs. 21) to respond to individuals as not. Additionally, 11 of the 20 teachers were consistent in whom they responded to (individuals or the group) for both the M&M and Newspaper Problems. For example, Audrey responded to individuals and Donna to the group for both problems. The other 9 teachers varied in whom (individual or group) they responded to. For example, Abby responded to individual children for the M&M Problem, but to the group for the Newspaper Problem. We provide examples of both a group and an individual response below.

Group Response from Geneva for the M&M Problem

With Alexis, I would definitely see if we can’t get Alexis to move away from the direct modeling into using just the numbers, instead of having to tally out everything, in order to get a more accurate answer… Honestly, if these were my students, I would probably back down to something a little easier in all three cases, and try to move them to a harder strategy. And then move them back up to the larger numbers, in order to see if a different strategy and a more effective strategy might be something that would take.

Individual Response from Kacey for the M&M Problem

Possibly giving her (Cassandra) maybe a little higher number and seeing if they were able to break that apart in the same way—maybe keeping the bags of M&Ms the same, but modifying, making this number, the 43, larger.

Josie, I guess I would say the same thing. Keep the number of bags the same, but raising the number that each bag [has].

Alexis, based on if she really was just individually counting, I might keep her numbers the same, or close to the same, if not a little smaller, and see if she uses the same strategy.

Geneva began by giving Alexis individual attention in that she wanted Alexis to move away from direct modeling, but then she stopped and decided to provide a group response by giving all three students “something easier” (We are assuming easier numbers.) and encouraging students to use a “harder strategy” (We are assuming a more efficient strategy.) followed by “larger numbers”. The specifics of harder strategies and larger numbers were not given. Kacey, on the other hand, named each child individually and stated the action she would take with each child. For Cassandra (because she used a break apart by place strategy) and Josie, the number of bags would stay the same, but the number in each bag would increase. For Alexis, the numbers for both the bags and the amount in them would stay the same or be a little smaller.

Considerations of children’s existing strategies

In Table 3, we present the rubric scores related to consideration of children’s existing strategies, defined as the degree to which responses suggested explicit and specific awareness and understanding of the children’s strategies on the M&M and Newspaper Problems.

Table 3 Scores related to consideration of children’s existing strategies

There were several ways a teacher could score for this criterion on either the M&M or Newspaper Problems: a singular group score of 1, 2, or 3; a consistent score (e.g., 2, 2 or 3, 3, 3) across two or three individual children; or variable scores (e.g., 1, 1, 2 or 2, 1, 3 or 1, 3, 3) across two or three individual children. A range of scores can be seen across the 20 participants. For example, Evelyn had a group score of 1 for both the M&M and Newspaper Problem. Miranda had variable individual scores of 1, 2, and 3 across both problems. Jo had consistent individual scores of 3 for the M&M Problem, but a group score of 2 for the Newspaper Problem.

Next, we provide examples with scores to further illustrate teachers’ responding skills. We are sharing Kacey’s and Noel’s responses in particular, because together, they provide examples for each score (1–3). Returning to Kacey’s response to individual students on the M&M Problem (above), she scored the following for each child in considering their existing strategies: Cassandra—2, Josie—1, and Alexis—2. For both Cassandra and Alexis, there is evidence that Kacey is aware of their strategies. Kacey stated that Cassandra broke apart by place (“break that apart”) and Alexis counted, possibly individually. For Josie, however, there was no evidence that Kacey considered Josie’s strategy in the response, as she did not name any of the actions or thinking that Josie might have used in solving the problem.

Noel’s response to the Newspaper Problem further illustrates the ways in which responses provided evidence of participants’ consideration of children’s existing strategies. Text related to consideration of children’s existing strategy is bolded.

Individual Response from Noel for the Newspaper Problem

Instead of using 5s in the one place, I would use different numbers. Maybe 256 minus 162. So, you’ve got a harder number in the 1s place for him (Damian). So, I would try that. And then I could even propose after that, depending on the progress, on something similar to the style that he would have to use negative numbers. So, if I have 253 minus 168, so knowing that you’re not going to get an even 10 or an even 0 or an even 5.

For Leticia, she knows how to break numbers apart by 100s. And I would say her knowledge is more advanced than Damian’s, the front one. Because she had the knowledge of you can continually subtract a number by breaking it apart. So, she was breaking apart all of the 165 into 100s, 10s and 5s, and then added that up. So, she subtracted to get to [2]35 and then added it back on when she broke apart the 255.

And for Leticia, again I would probably propose different numbers in the one places. I think that would seem like maybe a good starting point for these problems. And then we could even change eventually the 10s for her and see how she could break that apart. So maybe 263 minus 165, and see what happens there.

Maxine again has a comfortable knowledge of breaking apart numbers. So, takes the 100s, subtracts them. Takes the 10s, subtracts them. Takes the 5 and subtracts them. And knowing what she got for the difference for those answers, she knew that she could add those up to find the difference.

And for Maxine, she does have a knowledge of negative numbers, so maybe some more numbers that could give her negative number results. But not so comfortable, as far as to say that she could get like a 10, to say maybe she could get like a − 8 or a − 3 in her problem, instead of a − 10, to make it a little more challenging.

Noel scored the following for each child: Damian—1, Leticia—3, and Maxine—3. For Damian, we did not consider the comparison of Damian’s strategy to Leticia’s strategy as awareness of Damian’s strategy, because there was no statement about Damian’s strategy that described it in any way. For Leticia and Maxine, however, Noel provided strategy details—Leticia broke numbers apart, continually subtracted by breaking 165 apart, and added 55 back on at the end. Maxine broke apart both numbers by place—“Takes the 100s, subtracts them. Takes the 10s, subtracts them….” Also, Maxine added the subtraction results to find the difference and has knowledge of negative numbers. Because Noel provided evidence, in the form of strategy details, we scored this aspect of responding at a 3.

Anticipation of future strategies

In Table 4, we present the rubric scores related to anticipation of children’s future strategies—the ways in which teachers described and named the strategies they thought children might use to solve a subsequent problem.

Table 4 Scores related to anticipation of future strategies

Like the previous criterion, there were several ways a teacher could score, and there was a range of participant responses. We are sharing examples from Faith and Leigh because they both have individual responses with scores of 2 and 3. As before, the bolded text relates to anticipation of future strategies.

Individual Student Response from Faith for the M&M Problem

For Cassandra, I might give another one similar—maybe with another bag. Or instead of just 43, like 48 or 49 and see if she would, the way she’s able to group, see if she would actually do the same thing and maybe just round it [the number in each bag] up to 50 and then do the subtraction at the end to see if she would use that [strategy] again. Maybe like 8 bags of 49, just so it’s a different amount of bags, but then also to see if she would catch on to that, 48 or 49.

For Alexis, I would do something pretty similar. Maybe still 6 bags, but then maybe like 33 or something, just to see if I could work with her on finding a more efficient way. And then instead of doing, I would talk to her a little bit ahead of time, just thinking about instead of doing all these by 5s, to see if she could do it by 10s. Even if she still wanted to direct model, but then if she could see something a little more similar to that. So, she could see that, instead of having to break it all the way out by 5s, if she could do it by 10s even. To get it a little more efficient, but let her still see the model and kind of move her along a little bit.

With Josie, I really like the way she did. I think that’s really good. I might want to see if she could do this one again too. This one, since it was a smaller, she has a real good plan, but I might want to do another one similar to this. But then again, with just a higher number. Because skip counting by 3s is still pretty easy. So maybe instead of a 43, do a 38 or 58. Just seeing if once the numbers get a little bigger, if it [the strategy] was still strategic. If this would still work for her or if she would come up with a different one [strategy]. Instead of just keeping all the 3s separate, if she would pull those out and do something different with them.

Faith’s response was scored the following for each student: Cassandra—3, Alexis—2, and Josie—2. For Cassandra, Faith created an opportunity for Cassandra to use a particular strategy through number choice. Specifically, Faith thought about compensation when she stated, “round it [the number in each bag] up to 50 and do the subtraction at the end.” Faith suggested 8 (original 6 bags of M&Ms with another bag) × 48 or 8 × 49 to prompt this compensation strategy.

Faith anticipated that Alexis could direct model more efficiently by tens, instead of fives, after Faith had conferenced with Alexis. Faith did not offer any possible number choices for Alexis, however, resulting in a score of 2. For Josie, Faith offered number choices (38 or 58 in each bag instead of 43), but did not anticipate a particular strategy she might see. Instead, Faith wanted to investigate what strategy Josie would use when the value in the ones place was not so easy to work with.

Next, we present Leigh’s responses to provide further examples of responses scored as 3 on this criterion.

Individual Response from Leigh for the Newspaper Problem

With this one (Leticia), I would do something where she couldn’t move the 255 down to 200. So, 255 or another number minus 240. Would she still move the number, would she not? So, she moved it to a benchmark that she knew, 200, to make it easy for her to break down with those zeros. If I kept it at 255 and I had her subtract, let’s say 237, would she still move it and if she did, would she move it up to 300 and then add on 45? What would she do about that? Because I think she has a solid foundation right now, so I want to see that challenge, is she can still understand what to do. There’s nothing missing here that I would have to continue to question her on, is what I would say.

This one (Maxine), I’m so surprised by the negative. I think I want to emphasize that somehow. I would do another problem, maybe 317 minus 200… But this time I would almost do two negatives. Would she be able to understand it again, minus 269? You’d have 300 minus 200, which would be 100. And then you have 10 minus 60. What am I going to do? − 50. Okay, hopefully. And then 7 minus 9. Okay, that’d be a negative 2. And then would they still be able, two negatives, is she going to get confused or what? I think she has a great understanding as well.

This little one, I like this one. Damian. I would change the 5. See how that, instead of 5s at the end, see how that would impact him. Because it was very easy. What if it was 172 and then whatever would equal 251? How would that impact him? You’ve got 5s and 5s, so it’s very easy to match up. When you change the place of the 1s digit, you’re going to have to add up a different number… Is he still going to be able to understand?

They all understood it by actually adding. Well this guy [Damian] understood by adding, instead of subtracting. So, I would also change the wording of the problem, to see if he still understood. He did a subtraction problem as addition, which was great.

Leigh scored 3 for each of the children related to anticipation of future strategies. In each instance, she provided number choices in anticipation of particular strategies. For example, Maxine would be given 317–269, because Leigh wanted to know whether Maxine could understand a case where subtracting by place would result in negative numbers in both the tens and ones places.
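
Working out Leigh’s proposed numbers makes the intent visible: if Maxine again subtracts by place, 317 − 269 now yields negative partial differences in both the tens and ones places. The arithmetic below simply spells out the computation Leigh describes in the quote above.

```latex
% Subtracting 317 - 269 by place, as Leigh anticipates Maxine might
\begin{align*}
300 - 200 &= 100\\
10 - 60 &= -50\\
7 - 9 &= -2\\
100 - 50 - 2 &= 48
\end{align*}
```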

Responsive problem posing

In Table 5, we present the rubric scores related to a problem posed in response to children’s mathematical thinking (as opposed to an initial problem).

Table 5 Scores related to responsive problem posing

Like the previous criteria, there were several ways a teacher could score, and there was a range of scores. Our first example is Carrie’s individual responses to students for the Newspaper Problem. Her response is provided with text bolded where it pertains to a responsive problem.

Individual Response from Carrie for the Newspaper Problem

To maybe push Leticia to even a more sophisticated strategy, I might have other kids come up and share their ideas. If a student, perhaps, did 255 minus 100 and took bigger steps, goes to 155, sometimes students will say, well − 5, because that will take me to 150. Then I’ll take 60 away, instead of taking 65 away altogether. So, I might call some other students up during our sharing time and make sure that Leticia, bring this to Leticia’s attention, to all the other kids, as well, that we can even take shorter steps. I also might, for Leticia, call a student who counted up, who started with 165 and made that addition, subtraction connection and counted up, 165 maybe plus 100 would be 265, minus 10, would be 90. That would be a really big step, if a student did that, counting up. I think these number choices are good. I would pose similar number choices.

I would have Maxine come up and show the class her strategy and see if other kids catch on to this strategy. I think she pretty much has it. I might pose bigger numbers, like numbers in the thousands, to Maxine and see if she does it the same way.

Damian, I might have Maxine show Damian her way and also for Damian, I might have someone else come up who also counted up, maybe someone who did 165 and took a big jump and said plus 100 is 265 and then take 10 off, which gives me 255. So, 100 minus 10 would be 90.

In her plan for the problem, Carrie decided that having students share strategies would be appropriate to push Leticia to use a “more sophisticated strategy.” Included in that plan are “similar number choices.” For Maxine, “bigger numbers, like numbers in the thousands” would be posed, and for Damian, Carrie thought he would benefit from seeing Maxine’s strategy. Because the plan for a subsequent problem was general, Carrie scored a 1 on the rubric for all three children. Next, we present Barb’s response to students on the M&M problem, a group response with a score of 3 for the responsive problem.

Group Response from Barb for the M&M Problem

I would probably do a multiplication problem. But I would probably do a series of number choices that would build on each other, because looking at a couple of these… Like, I would probably do like 8 bags, and I would do the first number would be something like a 30, 40 or 50. Just to see this beginning piece. What kind of accounting structures they would use for that, would it be stuff that they would hold in their head to count, or would it be one that they would feel like they would direct model? So, it would be interesting, again, to see if it was just this piece, how would they tune into it.

The second number choice, I would keep the first number the same. And then if I chose like a 30, my second number choice would be a 34. So, then I would look just to see if they already solved the problem with the 10s, how would they handle the 1s.

And then the second number choice, I could probably keep, again, probably the first number the same and the second number a different two-digit number. Not in decade. So, an example could be like, I could do, probably for them would be a 6 and 43, but I would do maybe a 4 and a 40. And then a 4 and a 46. And then a 4 and a 62. So, it’d be something like that.

Barb scored a 3 for the responsive problem, because she offered a specific problem type (multiplication), number choices, and a rationale. It seemed like Barb was still contemplating number choices, but we argue that she thought about the following number choices—8 × 30, 8 × 34, 4 × 40, 4 × 46, 4 × 62—and seemed to plan to present all the number choices in a multiple number choice structure (Land et al. 2015). Barb’s rationale was that she wanted to investigate what strategies students would use when multiplying with decade numbers (e.g., 8 × 30), and then how students would handle the ones in something like 8 × 34.

The differences between Carrie’s and Barb’s responses were striking to us. Carrie provided individual responses to the three children, which prior research suggests is a sophisticated way to respond to children’s mathematical thinking (Jacobs et al. 2010), yet the responsive problem was general. Barb, on the other hand, had a group response (less sophisticated according to prior research), but a strong responsive problem. Thus, we wanted to investigate whether there were patterns in the relationships between the scores of responsive problems and participants’ use of individual or group responses.

Of the 65 individual responses, 21 (32%) were scored as 1; 20 (31%) were scored as 2; and 24 (37%) were scored as 3 for the subsequent problem. Of the 17 group responses, 6 (35%) were scored as 1; 5 (30%) were scored as 2; and 6 (35%) were scored as 3. In other words, both the individual and group responses were distributed roughly evenly across scores of 1, 2, and 3. Thus, responding to individual children did not, by itself, tend to lead to strong responsive problems, and neither did responding to the group.
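
The comparison above amounts to a cross-tabulation of responsive-problem scores by response type. A minimal sketch of that tabulation is shown below; the records are illustrative placeholders rather than the coded dataset.

```python
# Sketch of cross-tabulating responsive-problem scores (1-3) by response type.
# The records are illustrative placeholders, not the coded study data.
from collections import Counter

scored_responses = [
    {"type": "individual", "problem_score": 3},
    {"type": "individual", "problem_score": 1},
    {"type": "group", "problem_score": 2},
    # ... one record per scored response
]

tallies = {}
for response in scored_responses:
    tallies.setdefault(response["type"], Counter())[response["problem_score"]] += 1

for response_type, counts in tallies.items():
    total = sum(counts.values())
    distribution = {score: f"{counts[score] / total:.0%}" for score in (1, 2, 3)}
    print(response_type, distribution)
```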

Looking across criteria: two cases

We have presented results from looking at each element of the Responding Rubric separately. Here, we present two cases that illustrate all aspects of responding. In both cases, the teachers provided responses to individual students for their respective problem, but their scores differed considerably across the responding areas. These cases were chosen so that we could provide a comparative analysis of two teachers who scored very differently across the criteria. The first case comes from Carrie. At the start of the study, Carrie was a third-grade teacher with over 30 years of teaching experience. Carrie had completed 3 years of CGI training that started 4 years prior to the study; as such, she was 1 year removed from CGI training. In Carrie’s responses to the M&M Problem below, text related to consideration of existing strategies is underlined, text related to anticipation of future strategies is italicized, and text related to the responsive problem is bolded.

The Case of Carrie for the M&M Problem

So, I think that I would pose more problems to her (Cassandra) like this. Maybe some simpler numbers. Perhaps 6 times 20, 6 times 25 even. Because some students might say, I know four 25s are 100. So, that she can develop some other strategies. And then during sharing time, I would have Cassandra share the first part of her problem and maybe she could determine what happened at the end. Then I might have another student come up who used maybe a different strategy, like 40 times 6 is 240, 3 times 6 is 18, and then add that together. But I would definitely give her more experiences and pose more problems to her. And perhaps, like smaller numbers, like 6 times 20, 6 times 25 and so forth.

So, I might give her (Josie)… more experiences and perhaps use maybe even turn it around, 20 bags with 6 M&Ms in each. I don’t think she’d want to make 20 bags with 6 in each. Or maybe 80 bags with 6 in each. She might find a shorter strategy. I don’t think she’d want to take all the trouble to make all those bags. So, maybe that would push her to think a little differently. And then also during sharing time, I would make sure Josie can see how the other kids shared in their thinking, as well, some more sophisticated thinking, perhaps 40 times 6. Well I know that 4 times 6 is 24, so 40 times 6 would be 240. Perhaps like that. A more sophisticated strategy would be cut the 6 in half and make it a three. What are 3 43s. And some students will do that, 3 43s and then double it to get 6 43s, as well. So, I’d like to have her use more of those sophisticated strategies, because she has a lot on the ball here.

Alexis…This looks very painful. She’s accurate in her answer. She used tallies to figure this multiplication problem out. I would probably give her lower numbers in the future. Perhaps 2 bags, 22 in each. Smaller groups, smaller amount in each bag, and see if she comes up with the more sophisticated strategy. I would definitely, during sharing time, have her observe some kids at a higher level, like the previous papers that we talked about.

Carrie scored a 1 (Cassandra), 1 (Josie), and 2 (Alexis) for consideration of children’s existing strategies. Although Carrie may well have been aware of Cassandra’s and Josie’s strategies, she did not mention anything about either strategy in her response. For Alexis, Carrie scored a 2, because she made a general statement (“she used tallies”) but did not provide strategy details, such as that the tallies were in groups of 5 with 43 tallies in each bag. For anticipating children’s future strategies, Carrie scored a 1 for Cassandra, because she did not anticipate a future strategy for Cassandra; a 3 for Josie, because she offered number choices for specific strategies (4 times 6 is 24, so 40 times 6 would be 240; and cut the 6 in half and make it a three); and a 2 for Alexis, because she thought Alexis might use a more sophisticated strategy. For the responsive problem, Carrie scored 2 for all three students. The responsive problem of 6 × 20 or 6 × 25 for Cassandra seemed disconnected from Cassandra’s mathematical thinking, because it was not clear from Carrie’s description how smaller numbers would be productive for Cassandra, who broke apart by place. The problem of 20 × 6 or 80 × 6 for Josie also seemed disconnected. We concede that more bags could prompt Josie to use a more sophisticated strategy, but 20 × 6 or 80 × 6 do not seem like strategic choices for that purpose. Since Alexis was successful using tallies for 6 × 43, 2 × 22 did not seem likely to promote use of a more sophisticated strategy without more information from Carrie. Thus, Carrie scored 2 in responsive problem posing for Alexis.

For the second case, we present Amy’s response for the Newspaper Problem. Amy was a fourth-/fifth-grade teacher with 19 years of teaching experience and had completed 3 years of CGI training starting 7 years prior to the study. Amy had also received instruction in how to facilitate CGI training and had 2 years of experience in doing so.

The Case of Amy for the Newspaper Problem

Let me do some thinking…I think there’s many different ways you could go with this, but I would like to see them (the students) do a separate result unknown problem. I want to focus more on the subtraction, since two out of three of them demonstrated subtraction, I’m curious as to whether Damian would use addition on just an equation like 300 minus 149, or would he go back to addition? I’m just curious on what he would do on a straight, separating problem. So, that’s kind of why I wanted to see that for him.

Then thinking about Leticia and Maxine, I think they both demonstrated going to that friendly number of 200 first. So, I would like them to see if I gave started with just with, the problem did, but starting either with a not-friendly number and seeing then, would they still always go to that as their starting spot? And how comfortable then they were with like, let’s say a 324 take away 100. Would they do a 300 minus 100 is 200 and add the 24 back on? Or would they see that as just changing in the hundreds? So, some number choices I would give is like a 324 take away 100, to see what they would do. And I would probably do a couple of those, like maybe an 856 and a 300. I’m curious if they are going to break the 856 into 800 minus 300 is 500, and then add the 56 back on.

Like 324 minus 100, I would like them just to see it as 224, without having to take off…because in your brain, what your brain is doing is taking off and then adding it back on. So, it’s doing a couple extra steps. If they could it without those couple extra steps, is what I’m kind of curious about.

And then like for Damian, I’m just curious if addition is his go-to. I’m looking to see if addition is his go-to strategy for any subtraction problem. For Damian, it (number choice) does matter. If he would see it as addition, I think my first number choice might like a 300 and 149. If he is going to do it as addition, to get to that friendly number, 149 to 150 to 200 to 300, or just three easy jumps, is what I consider it. But then I would change then, for his second number, I would do something like a 316 minus 178, something that’s not…, and see what would happen. How that would affect it? If I changed it up to be not as friendly for him.

Amy scored a 2 for considering Damian’s existing strategy and a 2 for considering Leticia’s and Maxine’s existing strategies together, because there was evidence that Amy was aware of the children’s strategies, even though she did not provide strategy details. Additionally, Amy scored a 3 for anticipating all three children’s future strategies in an investigative manner. That is, Amy wanted to gather more information about how the children would solve the same kind of problem type (separate, result unknown) with other number choices, and she provided a specific strategy opportunity through number choice for all of them. For Damian, Amy wanted to see whether he would count up by place (149 to 150 to 200 to 300) with 300 − 149 and then see what Damian would do with the less friendly number choice of 316 − 178. For Leticia and Maxine, Amy wanted to see how comfortable they were with a multiple of 100 and another 3-digit number, such as 324 − 100 and 856 − 300. Would Leticia and Maxine break the non-multiple of 100 by place or would they just subtract the hundreds without breaking apart the entire number? Amy also scored a 3 for the responsive problem, because she provided a problem type, number choices, and a rationale. Amy’s rationale was directly connected to considering children’s existing strategies and anticipating future strategies.

Comparing the cases of Carrie and Amy, our analysis demonstrated that, despite the fact that both teachers had significant experience in teaching and CGI preparation and both responded to individual students, the quality of Amy’s response stood out in comparison with Carrie’s. We posited that the difference in their responses might be attributed to differences in the way each teacher considered and anticipated students’ strategies. Amy’s scores in these two areas signal to us the potential for a higher level of intentionality in her problem. Not only is Amy aware of how students are currently operating, she anticipates how changing the task or the number choices might encourage a change in the children’s thinking. Both of these pieces of information can be taken into account in her responsive problem posing, and the specificity of the information Amy draws on allows her posed problems to be equally specific.

In order to assess this hypothesis, we examined the data for teachers whose problem responses were scored high. We defined a high score as having the majority of scores for responses to individual students scored as 3 (all 3s, or 3 3 2 in some order) or scoring 3 on a response to a group of students. There were 14 such examples across the two problems (5 from the M&M Problem, 9 from the Newspaper Problem). We posited that high considering and anticipating scores were necessary in order to create a high-quality responsive problem, defining “high scores” for those criteria in the same way. Of the 14 high problem responses examined, 4 had high considering scores and 7 had high anticipating scores. Two responses had high scores in both considering and anticipating: one from each problem. The connection between anticipating and responsive problem posing was the predominant relationship, but we concede that it did not hold in the 5 other examples where teachers had high responsive scores but not high scores in considering and/or anticipating. Further, we noted that for considering children’s current strategies, there were 8 examples of high scores (a group score of 3, or individual scores of 2 3 3 or 3 3 3). Of those, only 4 resulted in high response scores. For anticipating future strategies, there were 7 examples of high scores. All 7 of those resulted in high response scores.
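
The “high score” rule used in this comparison can also be stated procedurally. The sketch below reflects our reading of the definition above (a group response scored 3, or a set of individual-response scores in which 3s form the majority, e.g., 3 3 3 or 3 3 2 in any order); it is illustrative rather than the exact coding procedure.

```python
# Sketch of the "high score" rule described above.

def is_high(scores, group_response):
    """scores: rubric scores (1-3) for one criterion on one problem.

    A group response has a single score; it is high if that score is 3.
    An individual-response set is high if 3s form a majority of its scores
    (e.g., 3 3 3, or 3 3 2 in any order).
    """
    if group_response:
        return scores[0] == 3
    return sum(s == 3 for s in scores) > len(scores) / 2

print(is_high([3], group_response=True))         # True
print(is_high([3, 2, 3], group_response=False))  # True
print(is_high([3, 1, 2], group_response=False))  # False
```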

While these data are certainly limited, they suggest that anticipation of students’ future strategies may be an influential element of effective responding. See Table 6 for the relationships across considering, anticipating, and responsive problems for each of the 20 participating teachers.

Table 6 All teacher scores

Discussion and implications

Using the existing research literature and teacher responses, we were able to develop the Responding Rubric that described aspects of teachers’ responses to children’s mathematical thinking in the form of responsive problem posing. Jacobs, Lamb, and Philipp (2010) had identified growth indicators, but in utilizing our data, we were able to build on their work to define various levels of each growth indicator. Conducting this work allowed us to identify some interesting patterns across the dataset. For instance, we found that about half of the teachers responded either to individuals or to groups for both problems, while the other half responded to individuals for one problem and to groups for the other. We also found that the choice to respond to individuals or to groups was not related in any consistent way to the quality of responsive problem posing.

A second result concerns the constructs of considering children’s existing understandings (considering) and anticipation of their future strategies (anticipating). Existing research had grouped these two constructs together as one indicator of growth in responding (Jacobs et al. 2010). Close examination of our data suggested that some teachers were more likely to engage in considering but not anticipating, or vice versa. As such, we designed the Responding Rubric to differentiate between these two constructs. As a result of our case study comparison, we delved deeper into the relationship between high scores in considering, anticipating and responsive problem posing, finding preliminary evidence suggesting anticipation of children’s future strategies as an indicator of effective responsive problem posing.

Like Erickson (2010), we found that teacher responses were varied, and some interesting patterns began to emerge. However, we conjecture that given a larger and/or longitudinal dataset, clearer patterns could be identified that would contribute to the knowledge base around how teachers develop skill in responding. For instance, van Es and Sherin (2008) identified three paths along which teachers learned to notice children’s mathematical thinking: Direct, Cyclical, and Incremental. Given that developing the responding skill can take many paths or forms, research with a larger dataset may be able to identify these paths and how teacher attention to the various aspects of responding to children’s mathematical thinking contributes to teacher development along those various paths. Conversely, the three paths might help explain teachers’ scores on the Responding Rubric over time. Given a larger dataset, we also might find connections between noticing skills similar, or not, to those reported by Barnhart and van Es (2015), who found that sophisticated responses to children’s mathematical thinking required sophisticated analyses, but that sophisticated analyses did not always lead to sophisticated responses.

Erickson (2010) also identified two characteristics of noticing: (1) “experts are often distinguished as much by what they do not notice as by what they do” (p. 52) and (2) noticing is not always a conscious or explicit process. We are continuing to identify what is and is not noticed as part of responding, but at the same time, we are also relying on teachers providing explicit evidence for what is often an implicit process. The evidence suggests that, as part of her responding, Amy did less considering of children’s existing strategies and more anticipating of future strategies and generating of subsequent problems. Or could it be the case that Amy’s consideration of children’s existing strategies has become more of an implicit process for her? Alternatively, our data collection process may not have captured everything these teachers know about children’s mathematical thinking.

Finally, we also conjecture that variables such as problem type, number choice, and children’s strategy likely affected teachers’ responses. This could be the case with Leigh. For the M&M Problem, Leigh had a group response with scores of 2 across all aspects of responding, but for the Newspaper Problem, Leigh had individual responses with scores of 3 for all children across all aspects of responding. Leigh’s responses and our full results indicate considerable variability across the two problems and sets of children’s thinking we provided. Because there are many different problem types (Carpenter et al. 1999) with a variety of numbers that could be posed along with varied children’s solution strategies, there are countless combinations of problems, numbers, and strategies. Teachers are going to be most familiar with the problems, numbers, and strategies that they pose and see in their classrooms. Our data collection process may not have fully captured some teachers’ abilities to respond to children. Teachers’ responses could differ when presented with different problems, number choices, and strategies. To more accurately capture a teacher’s ability to respond, problems related to the content they teach may need to be presented.

Implications include a practical use for the Responding Rubric. Researchers or mathematics coaches could use the rubric to categorize and measure teachers’ development of noticing skills. Areas for improvement could be identified and interventions designed to promote growth in those areas. For instance, we recognized a general weakness in our participating teachers’ responsive problems in that they tended not to be specific with problem type, number choices, and a rationale. Our results suggest the anticipation of children’s future strategies may be an idea we could leverage to improve upon this weakness. One way to support development in this area may be through explicit professional development on anticipating children’s strategies, along with attention to number choice and the ways it can be used to support the development of children’s mathematical strategies (Land et al. 2015).

Conclusion

With this study, our intent was to understand aspects of teachers’ responses to children’s mathematical thinking in their responsive problem posing and inform our emerging understanding of teachers developing the responding skill. The results further our understanding of teacher development; however, more work needs to be conducted to identify experiences that contribute to teachers’ development of the responding skill and how to support individual needs. Because teachers’ scores were considerably varied, we conjecture that there may be other aspects of teachers’ work (e.g., school setting, curriculum, previous experiences, and beliefs) that contribute to their responding abilities. Given that we know teachers have individual needs, how do we support their growth? What types of intentional tasks would give us the information we need to support individualized growth?

Finally, further research is needed to determine the implications for the ways in which teachers learn how to respond. Our research, along with that of van Es and Sherin (2008), implies that teachers learn how to respond in different ways. What does that mean for teachers’ practices and continued growth? From the research of van Es and Sherin (2008), we know that teachers learn how to notice children’s mathematical thinking along Direct, Cyclical, or Incremental paths. However, we do not know how that learning played out in the teachers’ classrooms or how it affected teachers’ continued growth and learning.