Introduction

Although elementary school teachers commonly devote much of their time to literacy instruction, teachers in the US have had to adapt to several major initiatives intended to influence that instruction in profound ways. Two of these are the Common Core State Standards (CCSS) and new teacher and school administrator evaluation systems that are based in part on student performance on standardized tests. These two initiatives are interdependent components of the national reform effort called Race to the Top (RttT) (Race to the Top, 2011).

In this study we provide an account of the approaches to writing instruction that elementary school teachers in US schools with above-predicted and typical performance on CCSS English Language Arts (ELA) assessments are taking in response to these initiatives. We begin with a brief overview of the RttT context and provide a review of literature focusing on relationships among writing standards, assessments, and instructional practices. Next, we explain why the schools were chosen and how the study was conducted. We then discuss findings regarding characteristics of writing instruction and teachers’ perceptions of aligning instruction to the CCSS, and conclude with implications for practice and future research.

The Race to the Top context

RttT, which followed on the heels of the No Child Left Behind (NCLB) legislation enacted under the George W. Bush presidency, was championed at the federal level under the Obama administration. Both RttT and NCLB are part of a wave of test-based accountability reforms purported to have been implemented to close enduring achievement gaps between white native English speakers and students from other ethnic and linguistic backgrounds, as well as children growing up in poverty (Darling-Hammond, 2015). These gaps have been associated with significant differences in students’ completion of secondary school and success at the post-secondary level (Suárez-Orozco et al., 2010) and hence have been directly connected to the goal of ensuring college and career readiness for all students (CCSS, 2015).

The CCSS for ELA require some major shifts in the focus of K–12 teaching. For example, the CCSS in New York State, the site of the current study, propose six instructional shifts for ELA instruction: (1) reading a balance of narrative and informational texts, (2) building knowledge through texts, (3) reading grade-appropriate texts, (4) engaging in evidence-based discussions around texts, (5) using evidence from sources to enhance argumentative and informative writing, and (6) building academic vocabulary (Engage NY, 2015). As illustrated in the fifth shift, the CCSS place a stronger emphasis on argumentative and informative writing, as opposed to narrative, imaginative, or personal writing, than prior state standards did. This emphasis increases as students progress from elementary to secondary school, culminating in the expectation that by 12th grade 80% of students’ writing should be directed toward the purposes of explaining or persuading, compared to 20% for conveying experience (CCSS, 2015). Also notable, and highlighted in the sixth shift, the CCSS emphasize the development of academic vocabulary. In general, the CCSS pay closer attention to the development of discipline-specific literacies than prior ELA standards, as evidenced in their framing as standards for ELA and Literacy in History/Social Studies, Science, and Technical Subjects (National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010).

The degree to which teachers succeed in helping students achieve the expectations for writing presented in the CCSS is then assessed through a performance review system, which in New York is called the Annual Professional Performance Review (APPR). APPR evaluations in New York, as in other RttT states, are informed by classroom observations as well as by student performance on large-scale CCSS assessments that will be discussed in more detail below. Teacher observation protocols vary from state to state and from school to school, but as the Danielson teacher evaluation tool, one of many used in New York State schools, illustrates, to receive a “proficient” or “distinguished” evaluation a teacher would exhibit such practices as (1) tasking students with writing to different audiences, (2) asking students to evaluate their writing using rubrics, and (3) differentiating writing tasks and expectations for correctness in usage and mechanics for English language learners and students with learning disabilities (The Danielson Group, 2013). In addition to requiring school leaders to use such observation protocols, the state also made available instructional modules that are meant to provide models of CCSS-aligned practices, including unit and lesson plans.

How the CCSS, APPR, and such instructional models might be influencing writing instruction is an important question to ask at this juncture. To respond to this question, the current study is the first to examine how teachers in elementary schools with higher-than-predicted and predicted student performance on the CCSS ELA assessments are approaching writing.

Related literature

The promises and perils of the CCSS for writing

This study is framed broadly by sociocultural theory that posits that writing development is highly contingent upon the historical and cultural contexts in which it occurs; it is not merely a mechanical or cognitive endeavor, but rather a complex individual and social one (MacArthur et al., 2006; Wertsch, 1991). As discussed above, the teachers participating in this study were working within the broader context of RttT which incentivized states to adopt the CCSS and performance evaluation systems aligned to them. The CCSS reflect values for what counts as effective writing as well as assumptions regarding weaknesses in literacy instruction in US schools, which in turn give rise to the shifts in emphases to which teachers are required to respond. It is important, then, to understand how those values and assumptions are influencing the ways that writing instruction is approached in CCSS-accountable schools.

A sociocultural understanding of writing emphasizes the development of flexible, context dependent strategies that facilitate the ability to write for different social purposes and audiences rather than of mechanical skills that reflect monolithic conceptions of writing competence. That is, the language features that give rise to qualities perceived as “coherence,” for example, may vary depending on the background knowledge of the intended audience for a written composition (McNamara, 2001; McNamara, Crossley, & McCarthy, 2010). An effective writer would then need to develop an array of strategies for achieving coherence in written work targeted to different audiences. To help students develop these competencies, teachers would not only teach specific strategies for achieving coherence but also provide students with opportunities to consider how to use these strategies flexibly depending on the different social contexts within which they are writing.

Sociocultural theory further postulates that an individual learner’s experiences and cultural and linguistic background interact with the wider sociocultural context to influence how instruction is taken up. It follows that effective teachers adapt learning environments and instructional approaches (e.g., the use of writing rubrics) to build on their students’ already-developed writing competencies and motivations to write. In this view, when planning instruction effective teachers would consider student characteristics to make literacy tasks such as using evidence from texts to advance written arguments meaningful and within reach, both affectively and cognitively (Vygotsky, 1978; Wilcox & Jeffery, 2015). This socioculturally informed theory of writing development assumes that effective implementation of the CCSS would necessitate such considerations.

As Applebee (2013) expressed in a discussion of the CCSS for writing, the standards reveal particular beliefs about what might be perceived as valuable in students’ writing and how and why writing should be taught in one way or another. He asserted that the CCSS present both “promises and perils” for writing instruction in US schools (Applebee, 2013, p. 25). The promises include raised expectations for writing across disciplines, particularly argumentative and informative writing. The perils include: (1) the potential overemphasis on foundational skills that take shape in decontextualized language exercises focused on grammar, spelling, and vocabulary; (2) the potential for approaching recursively-developing literacy competencies in a piecemeal and linear way based on trivial grade-by-grade distinctions; (3) the disregard for a developmental model that emphasizes the use of a “flexible array of strategies” rather than a formulaic approach to writing (Applebee, 2013, p. 29); and (4) the overemphasis on “one and done” high-stakes assessments of writing that can strongly shape instruction.

Studies examining alignment between the CCSS and evidence-based practices (EBPs) for writing instruction provide some support for Applebee’s concerns regarding potential perils. For example, Graham and Harris (2013), in an analysis of the potential benefits and challenges of the standards for students with learning disabilities, described CCSS benchmarks for writing development as “simply educated guesses as to what students should be able to do at particular grades” (p. 31). Other studies investigating the extent to which EBPs are incorporated into the writing standards found that the CCSS failed to address important aspects of writing development, including those related to purposeful composing processes (Mo, Kopke, Hawkins, Troia, & Olinghouse, 2014; Troia, 2014; Troia & Olinghouse, 2013). Mo et al. (2014), for example, expressed concern that the writing standards “do not address the writing process as a reciprocal and iterative whole” (p. 448). Furthermore, Troia and Olinghouse (2013) voiced concern that the standards “do not address writing motivation at all” (p. 347). Further, Aull (2015) has questioned the separation of language and writing standards in the CCSS because this approach suggests that the two can be taught separately rather than integrated in instruction focused on how language structures give rise to effective writing in academic genres.

Troia and Olinghouse (2013) analyzed the extent to which the CCSS for writing were aligned with EBPs for writing instruction as presented in meta-analyses of experimental, quasi-experimental, and single subject writing intervention studies (e.g., Bangert-Drowns, Hurley, & Wilkinson, 2004; Graham, McKeown, Kiuhara, & Harris, 2012; Graham & Hebert, 2011; Graham & Perin, 2007; Graham & Sandmel, 2011; Hebert, Simpson, & Graham, 2013; Rogers & Graham, 2008). They note that while several of the CCSS for writing across K–5 are supported by a strong research base, a number of EBPs are not represented in the standards. The EBPs that are represented include: prewriting/planning/drafting, text structure instruction, word processing, handwriting/typing skills, sentence combining, decreasing spelling errors, decreasing grammar errors, writing responses to text, and collaborating with peers (Troia & Olinghouse, 2013). However, 12 practices recommended in the research are absent from the K–5 standards: freewriting, process writing, comprehensive writing instruction, strategy instruction, assistive technology, summarizing, writing to learn, self-regulation/meta-cognition, goal setting, using rubrics, evaluations, and presentation instruction (e.g., legibility). Additionally, providing extra time for writing, writing with creative imagery, and taking notes are missing from the kindergarten, first, and second grade standards but are included in the third, fourth, and fifth grade standards. Likewise, using text models and providing feedback are lacking in the third, fourth, and fifth grade standards but are included in the kindergarten, first, and second grade standards. Such variation recalls Applebee’s concern about arbitrary grade-level distinctions.

Analyses of CCSS alignment with EBPs suggest that learners might experience gaps in recommended writing practices if teachers follow only the CCSS, even with high fidelity. Taken together, this scholarship suggests that although the CCSS are in many ways preferable to the scenario prior to their implementation, when there was little consistency from state to state, translating the CCSS into theoretically- and empirically-grounded practice will be challenging.

Classroom-based observational research regarding how the CCSS are affecting instruction (e.g., Glaus, 2014; Montgomery, 2012; Strahan, Hedt, & Melville, 2014) is only beginning to emerge, as in most states implementation did not begin until the 2012–2013 school year. However, some scholars have focused on the materials surrounding the standards, including CCSS-associated websites, video-recorded and widely distributed talks presented by designers of the CCSS literacy standards, and other supporting documents such as publication guidelines. For example, Hodge and Benko (2013) analyzed documents and speeches by CCSS ELA standards authors David Coleman and Susan Pimental. They found that Coleman and Pimental “can appear to contradict the standards’ claims of leaving decisions about instruction to teachers” (p. 175), and that the stances these two key CCSS architects have taken toward implementation are not always aligned with research. Though reading has often been the focus of this CCSS ELA scholarship (e.g., Botzakis, Burns, & Hall, 2014; Johnson, 2014; Snow & O’Connor, 2013; Zancanella & Moore, 2014), given the CCSS integrated literacy model that emphasizes “text-dependent” writing (i.e., writing about academic texts), one would expect similar challenges when working to implement the CCSS for writing, particularly since such an integrated model was not prevalent in state standards prior to the CCSS.

Accordingly, researchers have called for more pre-service and in-service teacher professional development to integrate EBPs for writing in alignment with the CCSS (Graham & Harris, 2013; Troia, 2014). However, survey research indicates that many teachers receive little or no pedagogical guidance for teaching writing during their pre-service training, and with the added challenges of the CCSS for writing, many can be expected to need considerable support in adopting and implementing CCSS-aligned writing practices (Gillespie, Graham, Kiuhara, & Hebert, 2014; Graham, Capizzi, Harris, Hebert, & Morphy, 2014). Even with the aforementioned instructional modules that are meant to provide models of CCSS-aligned practices, teachers may resort to a “teach to the test” (Hillocks, 2002) or a “lift and deliver” (Wilcox, 2015) approach, whereby they lift ideas from the instructional modules and deliver them with little or no adaptation of strategies and activities for their particular students.

Annual professional performance reviews and assessments of writing

As discussed earlier, the CCSS and APPR are meant to work in tandem as part of the RttT reform agenda. The APPR system should, in principle, help assess how well teachers prepare students to meet the CCSS and how well school administrators help teachers align their practice to this aim. In order to gather comparable data with which to assess teachers, a large-scale assessment program was perceived as an essential component of the system.

New York’s APPR system is therefore in part reliant on students’ scores on grades 3–5 assessments in ELA and mathematics. As in other states that have adopted the CCSS, the ELA exams for these grades include items for both reading and writing, with the language and writing standards accounting for around 45% of the total points (New York State Education Department, n.d.). On this exam, children must respond to short-response and extended-response tasks. On the short-response questions they are prompted to use textual evidence to support their own answers to inferential questions. These questions ask learners to make an inference (a claim, position, or conclusion) based on their analysis of a reading passage and then provide two pieces of text-based evidence to support the answer. Responses on the third grade test, as an example, are scored using a rubric that evaluates written responses on the validity of the inferences or claims, evidence of analysis of the text, relevance of details, sufficiency of details, and readability. Each response is expected to be no more than three complete sentences and is scored on a two-point scale. Extended-response questions are designed to assess competencies in writing from sources and are scored on a four-point scale based on four criteria: content and analysis; command of evidence; coherence, organization, and style; and control of conventions.

Regardless of how well the actual test items are constructed, a substantial body of research conducted prior to the implementation of the CCSS raises concerns regarding a narrowing of the curriculum that often results from an excessive focus on high-stakes literacy assessments (Behizadeh, 2014; Hillocks, 2002; Nichols & Berliner, 2007). In the context of the CCSS and APPR that are intended to work in tandem to influence teachers’ instruction, it is then important to investigate how teachers are responding in school contexts where student outcomes are on target.

Hence, in this study we selected schools that represent relatively better-case scenarios for student outcomes guided by this overarching question: How do teachers in elementary schools with above-predicted (“odds-beating”) and predicted (“typically performing”) outcomes on CCSS ELA assessments approach writing instruction? Subquestions include: (1) What are teachers’ instructional practices in odds-beating and typically performing schools? and (2) What perspectives do teachers in odds-beating and typically performing schools hold regarding aligning their instruction to the CCSS for writing?

Methods

This study is part of a larger mixed-method multiple case study that investigated instructional practices in schools with above-predicted and predicted outcomes on Common Core ELA assessments. A quantitative method, specifically regression analysis, was used to identify the sample, and qualitative methods were used to collect data on teacher practices and perspectives through semi-structured interviews and focus groups as well as the collection of documentary evidence.

Since the funder for this study was particularly interested in schools that had exceeded predicted performance and that served a significant proportion of students from economically disadvantaged homes (a poverty indicator), we used regression analyses to identify our sample (Levine, Stephan, & Szabat, 2013). We ran regressions using ELA assessment data from grades 3, 4, and 5, as well as the percentages of economically disadvantaged students and English language learners. All of the schools identified as “odds-beating” fell at least one standard deviation above the state average for performance on the 2012–2013 CCSS ELA assessments in comparison to schools serving similar populations, and they consistently performed above average on ELA assessments prior to the Common Core (a sketch of this identification logic follows below). Because we sought to identify exemplary practice we oversampled odds-beating schools, selecting six odds-beating and three typically-performing elementary schools that could be used for comparison purposes. These typically-performing schools were selected using the same set of assessments as those used to identify the odds-beating schools, yet they performed at predicted levels. Lower-performing schools were not sampled, as they were undergoing a variety of state-led reviews and interventions that would have made participation in research an undue burden. In addition, including these schools was beyond the scope of the current study. The sample schools’ demographic details, as well as their performance on the 2012–2013 CCSS ELA assessments represented as z scores, are displayed in Table 1.
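The article does not specify the exact regression model, so the following is only a minimal sketch of the residual-based classification described above, assuming an ordinary least squares model that predicts a school’s mean ELA score from its percentages of economically disadvantaged students and English language learners. The file name, column names, and thresholds are illustrative assumptions rather than the study’s actual specification.

```python
# Minimal sketch: flag "odds-beating" schools whose actual ELA performance
# exceeds the performance predicted from demographic characteristics.
# File name, column names, and cutoffs are assumptions for illustration only.
import pandas as pd
import statsmodels.api as sm

schools = pd.read_csv("school_ela_2012_2013.csv")  # hypothetical input file

# Predictors: poverty and English language learner percentages
X = sm.add_constant(schools[["pct_econ_disadvantaged", "pct_ell"]])
y = schools["mean_ela_scale_score"]

model = sm.OLS(y, X).fit()

# Residual = actual minus predicted performance; standardize to z scores
schools["residual"] = y - model.predict(X)
schools["residual_z"] = (
    (schools["residual"] - schools["residual"].mean()) / schools["residual"].std()
)

# Schools at least one standard deviation above prediction are candidates for
# the "odds-beating" group; those near zero are "typically performing".
schools["category"] = "other"
schools.loc[schools["residual_z"] >= 1.0, "category"] = "odds-beating"
schools.loc[schools["residual_z"].abs() < 0.5, "category"] = "typically performing"

print(schools[["school", "residual_z", "category"]].sort_values("residual_z"))
```

In practice the study also screened candidates on prior (pre-CCSS) performance and demographic characteristics before final selection; the sketch covers only the residual step.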

Table 1 Elementary school sample

The sample included rural, suburban, and urban schools. All but one of the odds-beating schools (i.e., Starling Springs) had rates of economic disadvantage above the state average; this was purposeful, as we sought a majority of schools facing demographic challenges such as poverty. Bay City and Goliad, the two urban odds-beating schools, are also more ethnically and linguistically diverse than the state average, another characteristic we sought in the odds-beating sample.

Data collection

To examine teachers’ practices in odds-beating and typically-performing schools (research question 1), we collected observation, interview, and documentary data from each participating school, including samples of instructional scaffolds (e.g., writing rubrics) used during observed classes. The observation protocol was designed to yield thick descriptions of instructional practices, particularly as they related to the CCSS shifts. It included three parts: field notes, summaries of practice, and a debriefing section wherein teachers were prompted to share reflections after their lesson (Adler & Adler, 1988). Additionally, photos of classrooms, lesson plans, and other instructional materials were collected during site visits as well as from school and district websites. To investigate teachers’ perceptions regarding their aligning of writing instruction with the CCSS (research question 2), we collected individual and focus group interview data; the debriefings after observed lessons were also helpful here. The semi-structured interview and focus group protocols included questions regarding teachers’ perspectives on the implementation of the CCSS and APPR, curriculum revision processes and outcomes, and instructional practices. In total, 30 interview and focus group transcripts, 24 observations, and a variety of documentary data, some in hard copy and others in digital form, were collected.

A research team member recruited the selected schools, contacting both the district superintendent and the school principal to obtain consent to participate in the research. This team member promised a modest stipend for contributions to the study and provided a sample schedule. A field research team consisting of three to four members (university researchers/professors and doctoral students), all certified in human subjects research by the university’s Institutional Review Board, was assigned to complete the data collection. Each team had a designated lead (i.e., the person responsible for organizing the team’s activities and conducting key informant interviews, focus groups, and observations) and a co-lead who shared responsibilities for data collection, transcript preparation, interpretive memoing, and the writing of a summary report and case study. They were typically accompanied by two other researchers whose primary responsibilities were to organize document collection and transcribe the data.

Each team member participated in a half-day orientation to all aspects of the study, including consent procedures and use of the observation, semi-structured interview, and focus group protocols. During this time the principal investigator modeled data collection strategies and offered examples of audiotaped interviews and focus groups from prior studies, and the team members practiced using the protocols. Before going into the field, each team leader accompanied the principal investigator as an observer on at least one site visit, during which data collection strategies were modeled. As is typical in qualitative field research, some variability in data collection was considered appropriate so as to maintain an empathetic stance toward respondents. However, essential questions were bolded in all protocols to ensure that evidence for cross-case comparisons would be available (Josselson, 2013; Maxwell, 2012).

Data analysis

Interpretive memoing (i.e., recording interpretations of data throughout data collection and analysis), member checking (i.e., confirming the accuracy of evidence and interpretations with participants), and source triangulation (i.e., examining multiple data sources), methods recommended for multiple case studies (see Patton, 2001), were employed to ensure the credibility of intra-case and cross-case findings (Creswell, 2014; Yin, 2013). Analyses began onsite as each team member contributed to interpretive memos during and immediately after data collection. Next, all interview and focus group data were transcribed and, along with observation field notes, loaded into the qualitative software program NVivo, at which time analysts trained in the use of a priori codes informed by the literature coded the data. After all data from the larger study were coded and the summary reports and case studies crafted, the reports were sent to superintendents and principals, who were asked to check them for accuracy. Upon review of feedback, which in most cases included only minor adjustments to such things as acronyms, the reports were finalized.

Next, for this embedded study, both a priori and open coding were conducted. Two analysts used a priori codes derived from the literature regarding EBPs and types of writing to categorize the data in response to our first research question, achieving 93% interrater reliability (a sketch of this agreement calculation follows below). To address our second research question, this deductive procedure was complemented by open coding for perspectives toward the CCSS and APPR falling outside the a priori codes, including teachers’ perspectives on assessing writing, materials used to prompt writing, and supports for writing, among others. In the end the codebook included 55 codes (see “Appendix 1”). Through querying the data in NVivo (i.e., extracting codes by individual case study school, by the odds-beating and typically-performing school categories, and by categories such as EBPs), we then developed and used matrices and displays to identify patterns (see “Appendix 2”). Such pattern mapping is recommended to aid in identifying relationships and making explanations during multiple case study analysis (Miles, Huberman, & Saldaña, 2014).
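The article reports 93% interrater reliability without stating the formula used. The sketch below assumes simple percent agreement on segments double-coded by both analysts and illustrates the kind of code-by-school-category matrix used for pattern mapping; the file names, column names, and code labels are hypothetical.

```python
# Minimal sketch, assuming interrater reliability was computed as simple
# percent agreement on double-coded segments, and that pattern matrices
# cross-tabulate codes by school category. All file and column names are
# hypothetical illustrations, not the study's actual data files.
import pandas as pd

# Each row: one transcript segment coded independently by two analysts
double_coded = pd.read_csv("double_coded_segments.csv")  # hypothetical file
# expected columns: segment_id, coder_a_code, coder_b_code

agreement = (double_coded["coder_a_code"] == double_coded["coder_b_code"]).mean()
print(f"Percent agreement: {agreement:.0%}")  # e.g., 93%

# Pattern mapping: count coded segments per code within each school category
coded_segments = pd.read_csv("all_coded_segments.csv")  # hypothetical file
# expected columns: school, category ("odds-beating"/"typically performing"), code

pattern_matrix = pd.crosstab(coded_segments["code"], coded_segments["category"])
print(pattern_matrix.sort_values("odds-beating", ascending=False))
```

Percent agreement is only one of several possible reliability indices (others, such as Cohen’s kappa, correct for chance agreement); the study does not specify which was used.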

Findings

Here we describe our findings in turn, beginning with illustrations of observed teacher practices followed by descriptions of their perspectives on aligning their practices to the CCSS for writing. These findings are situated within the literature and critiques of the CCSS for writing framed as “promises and perils” discussed earlier. We draw attention to the issues of aligning instruction not only with CCSS, but also with evidence-based practices highlighted in the literature.

Teachers’ practices

In response to our first research question (What are teachers’ instructional practices in odds-beating and typically-performing schools?), we focus this section on the evidence-based practices identified in the classroom observations (e.g., comprehensive writing instruction, creative imagery instruction), followed by a description of the types of writing tasks teachers assigned.

Evidence-based practices

First, we found that teachers in both odds-beating and typically-performing schools shared many EBPs, including the use of creative imagery instruction, peer collaboration, presentation instruction, prewriting/planning/drafting, self-regulation/metacognitive instruction, strategy instruction, text structure instruction, using rubrics, writing responses to texts, and writing to learn. The EBPs not observed in the schools investigated included the use of assistive technology beyond typical word processing programs such as Microsoft Word.

The most commonly observed EBPs across all classrooms involved the use of peer collaboration, prewriting/planning/drafting, using rubrics, and writing to learn. These practices were observed in five of the nine schools (see “Appendix 2”). The next most commonly evidenced EBPs were creative imagery instruction, text structure instruction, and transcription skill instruction. We observed no evidence of comprehensive writing instruction, freewriting, goal setting (beyond statements of learning objectives taken verbatim from instructional modules), process writing, transcription skill instruction, or word processing/using the computer for writing in the typically-performing school classrooms. Meanwhile, in the odds-beating school classrooms we observed no evidence of sentence structure instruction or note taking instruction. With regard to results by school type, which because of the small sample size need to be interpreted cautiously, urban classrooms showed evidence of more of the EBPs overall than suburban and rural classrooms, in that order.

The use of benchmark tasks and rubrics, as well as prewriting/planning/drafting with peers, represented the most commonly observed EBPs and also the highest-contrast practices between odds-beating and typically-performing schools. We therefore provide some illustrative examples of these practices below.

Benchmark tasks and rubrics

In order to prepare students for the increased expectations for writing on the state’s exams, teachers in odds-beating Bay City, Goliad, Yellow Valley, and Starling Springs and in typically performing Paige City reported providing students with benchmark writing tasks along with the state’s exam rubric for scoring their work. This aspect of their instruction takes up recommendations from the research regarding the use of model texts and rubrics (Troia, 2014) and aligns with the guidance provided in the Danielson teacher evaluation tool, which emphasizes the use of rubrics in assessing students’ work.

One of the master teachers (also known as an instructional coach) from odds-beating Yellow Valley, for example, explained what is done in her school with regard to using benchmark writing assessments and rubrics: “The students are given a writing prompt at the beginning of the year, and then we assess it. At the end of the year, we give them another one and we assess it. Of course, we’re looking for growth.” This focus on growth using benchmarks and rubrics throughout the school year, rather than “one and done” exams used for high-stakes evaluation only, was an aspect of their practice that teachers described as important to improving children’s writing.

In addition to using a variety of rubrics provided to them, teachers also crafted their own rubrics for different kinds of writing assignments. As they used these, they guided students to focus on different aspects of their writing based on their past performance. For example, an instructional coach/teacher at odds-beating Starling Springs explained how writing tasks and rubrics were being differentiated in her school:

We have writing rubrics that are done by some of our coaching staff and so even if the kids are doing journal writing and things like that we are using the rubric and trying to find and develop small groups. Like these kids may need to work more on main idea or this group might really need some work with lead sentences. And there are rubrics that we have that we can use in all areas.

In this way, teachers exhibited attempts at aligning their practices to the CCSS using the instructional modules provided by the state complemented with practices that were adapted to different students’ needs.

Prewriting, planning, and drafting with peers

Another of the most commonly observed practices was prewriting, planning, and drafting with peers. In an odds-beating school fourth grade classroom, for example, a teacher engaged students in a “science talk” with guidance for how to proceed with the activity, which included journaling and note taking as the main writing activities, followed by peer review of evidence, individual reflection, and finally peer responses to final drafts. This lesson encompassed many of the EBPs discussed in the literature (Graham et al., 2012; Troia, 2014). In another odds-beating school, a third grade teacher identified peer collaboration and feedback as important qualities of her instruction and noted the importance of such practices in mitigating negative self-talk among students and acknowledging students’ already-developed competencies as writers. She explained in the debrief meeting after her lesson,

Like you saw, all morning the students were in groups and editing. I told them ‘Now, go ahead and read this to a partner’ instead of taking their notebook and saying, ‘Well, this is wrong and this is wrong.’ They are learning from each other. ‘Well, I have this and someone else might have something different.’ and just lending themselves to one another and as I circulate, I see who needs a little bit more support, who I might need to sit with the next day.

The example above also provides an illustration of a concern for students’ motivation to write as well as activities intended to use writing to learn content, another EBP evidenced in the majority of the schools in this study and discussed in more detail below.

Types of writing

With regard to the types of writing students were doing in the classrooms observed, we noted a preponderance of lessons where teachers tasked students with comparing and contrasting texts (particularly non-fiction texts and documentary video) and writing based on research (e.g., article or book reports). This focus on informational tasks and the emphases on source-based evidence align with the fifth CCSS shift.

Comparison/contrast writing was evident in observed classrooms at four odds-beating schools (Goliad, Yellow Valley, Starling Springs, and Spring Creek) and in none of the typically performing schools. Writing based on research was evident in these same odds-beating schools as well as odds-beating Bay City and typically performing Wolf Creek. Other types of writing observed were creative writing in Goliad, poem writing in Yellow Valley, summary writing in Bay City and Goliad as well as typically performing Wolf Creek, and personal stories/narratives in Bay City and Starling Springs.

As with the EBP analysis, more variety in types of writing was observed in the urban schools than in the suburban and rural schools, in that order. Since comparison/contrast writing and writing based on research were the most commonly observed types of writing and also the highest contrast between odds-beating and typically-performing schools, we provide some illustrative examples next.

Comparing/contrasting and writing based on research

In the following example from a third grade classroom in an odds-beating school (Goliad), students had researched bullfrogs before writing drafts of paragraphs that compared bullfrogs with their own fictional “freaky frog.” These writing activities were taken directly from the CCSS instructional module made available by the state on the Internet, described earlier. The “learning target” was defined as “I can compare characteristics of the bull frog to that of the freaky frog,” and the instructional module indicated that the unit is intended to meet multiple CCSS standards (e.g., write informative/explanatory texts to examine a topic and convey ideas and information clearly; write narratives to develop real or imagined experiences or events using effective technique, descriptive details, and clear event sequences). In this example, teachers used several of the EBPs discussed in the literature, including text structure instruction, prewriting/planning/drafting, and writing to learn. However, the strategies they were encouraged to use in the module could not be characterized as “flexible” (per the previous discussion of “perils”). Rather, students were provided with a specific structure for their writing that resembled a fill-in-the-blank exercise. The following was displayed on the chalkboard: “Both the Bullfrog and the _________ are very similar because they have special adaptations that help them survive. One of the Bullfrog’s adaptations is _____ (explain).”

In another odds-beating school (Bay City) classroom, a fifth grade teacher, also using one of the state’s instructional modules, questioned students about why researchers used a camera to study wildlife as the class prepared to engage in a close analysis of a documentary video in preparation for writing an essay. Soon, the students were busy writing and sharing brief summaries of main ideas as they watched and listened to seven one-minute segments of the video. The teacher commended students on their efforts: “You’re focused and able to identify key ideas from a video!” During the debriefing that followed the lesson, the teacher noted that students responded well to the minor adaptations she had made to the scripted lesson plan in the instructional module and concluded that “using informational texts is a big shift” in her instruction. In this example, the teacher provided instruction on how to summarize main ideas in accordance with the module lesson plan, yet also modified the plan to include more scaffolding through oral and written prompts and more opportunities for children to share their draft summaries.

While the modules themselves incorporate several of the EBPs, as demonstrated above, and some teachers made modifications to the modules to provide appropriate scaffolding when necessary, we also noted lessons dominated by independent writing without many opportunities for peer collaboration, the use of worksheets to gather evidence or academic vocabulary from texts that were neither engaging nor cognitively demanding, and formulas for constructing texts that allowed little room for students to flexibly craft their texts, as in the bullfrog paragraph example. Field notes, for instance, provide the following summaries of observed practices: “Students were asked to individually write about the water cycle in their own words” (Wolf Creek); “Writing on worksheets to describe context clues, synonyms, sentence examples” (Paige City); “No writing integrated into instruction; students drew pictures as the teacher read” (Sun Hollow).

Teachers’ perspectives

In response to our second research question (What perspectives do teachers in odds-beating and typically performing schools hold regarding aligning their instruction to the CCSS for writing?), we found that most teachers expressed a generally positive view of the CCSS overall and viewed the use of instructional modules as a way to ensure that they were translating the standards into CCSS-aligned practice. They expressed confidence that identifying evidence in a text and using research to prompt writing are important things to expect elementary students to learn to do, and felt that the instructional modules provided models to help them achieve such outcomes. As an example, one teacher explained, “What I really like about the modules is that by doing the modules you’re covering the standards.” This statement captures the general message teachers expressed across both odds-beating and typically-performing schools about aligning practice to the CCSS.

Some of the “promises” of the CCSS mentioned in the introduction were evidenced in teachers’ statements, including teachers expecting higher quality writing as will be described next. However, some of the “perils” were also evident in complaints regarding overly scripted instructional modules that did not allow for enough scaffolding. In addition, teachers described feeling pressure to push students to produce writing that would meet the criteria for a high score on a rubric while sacrificing more joyful writing experiences including those that involved imaginative and narrative writing.

Quality and detail

Some teachers described the writing they were requiring students to produce in alignment with the CCSS as more challenging. Teachers explained that they look for improvement in the detail students provide to support claims in their writing as well as overall improvements in the quality as defined by the state’s rubric. A teacher from a typically-performing school explained, “Usually I have to explain what it means to use details. Putting details in their writing is very difficult for them [students]”. A teacher from an odds-beating school echoed a similar sentiment:

Writing is a little harder. We’re looking for improvement in the quality and in the details. I think most of us follow the state ELA rubric for the writing projects and four [the highest possible score on the rubric] would always be the goal.

These examples provide illustrations of the finding that while the CCSS overall are viewed by teachers in these schools as reasonable and even improvements over past standards for writing, they also pose challenges to students and teachers alike with regard to higher expectations for quality and detail in writing.

Independence, motivation, and metacognition

One of the major contrasts between teachers’ perceptions in odds-beating and typically-performing schools concerned gains and losses in what they described as independent/creative thinking, motivation, and metacognition. For example, teachers in typically-performing schools reported that their students are having more difficulty with imaginative writing since, in their view, the focus of the Common Core has shifted their attention to non-fiction reading and writing and a stronger emphasis on the use of text-based evidence in writing. In addition, throughout the typically-performing school interviews and focus groups, teachers expressed a belief that CCSS-aligned practices do not emphasize teaching students to be independent thinkers. A typically-performing school (Sun Hollow) teacher, for instance, described students not knowing what to do in response to a module writing task unless they were told explicitly how to write.

You can’t give them [students] a task. They’re like, ‘But what do I have to do?’ Sometimes I give them a like a topic to write about and it’s, you know, creative writing— to draw some analogies. Some of them are like, ‘I can’t.’ And then they can’t go beyond because we’re teaching them how to do it exactly. Everything’s got to have a topic sentence. There’s a certain way to do it. It’s like a formula. They’re afraid of doing it wrong.

Another Sun Hollow teacher commented on the place of creative writing in the curriculum and how that is impacting her students:

Even like creative writing – there is NO [emphasis by participant] creative writing anymore and they [presumably the designers of the CCSS and/or instructional modules] really discourage a lot of fictional reading. So you wonder, “Where our future authors are ever going to come from?” We had an Artfest Day and they [students] did a creative writing workshop in the past and it’s been successful because kids were so used to creative writing. It was fun and my kids loved it. This year they hadn’t done anything; they hadn’t gotten to write anything that was just fun writing. Everything has to have a purpose.

While teachers in the odds-beating schools also noted a reduction in how much narrative writing they were assigning, many reported ramping up the amount of writing they asked students to do as well as increasing the emphasis they placed on engaging students in metacognitive activities and critical evaluations of texts and of their own arguments. For example, in a focus group, a teacher from an odds-beating school expressed her views about how her instruction had changed in alignment with this goal since the implementation of the Common Core.

I feel like we’re doing a lot more writing. I am and I don’t know about you [turning to another teacher], you have always done a lot. I feel like we’re doing a lot more writing. Less creative writing, more essay writing, opinion writing, research, note taking, reading text for evidence, using more of a close reading model so that kids are thinking much more critically as opposed to just, ‘Ok read this and answer this question’. They have to really be thinking about what they’re reading and make more inferences: it’s not just there in the text. They really have to be thinking a lot more deeply.

In sum, teachers in typically-performing schools expressed more concern than their odds-beating school peers about losses with regard to tapping into students’ creativity and developing independence and motivation for writing. Odds-beating school teachers, while acknowledging the decreased attention being paid to creative writing, noted gains: increased expectations for the quality of students’ writing as well as the development of their metacognitive and critical abilities.

Discussion

Since writing development is highly contingent upon the historical and cultural contexts in which it occurs, in this study framed by sociocultural theory we conducted interviews and focus groups, made classroom observations, and gathered documentary evidence in different school settings. We sought to investigate how elementary teachers in schools with above-predicted and predicted outcomes on Common Core ELA and pre-CCSS ELA assessments approach writing. As we discussed earlier, the CCSS as part of the broader RttT context reflect particular assumptions about what should be valued in teaching and learning writing.

We found that teachers’ instructional practices in odds-beating and typically-performing schools were fairly similar and were aligned to the CCSS in some common ways likely influenced by a combination of the CCSS instructional modules made available to them from the state, APPR teacher evaluation protocols that emphasize such things as the use of rubrics to evaluate writing, and the high-stakes assessments all schools must use to assess students’ performance on writing tasks. Specifically, we found that teachers in the majority of schools in our study shared use of the following EBPs: peer collaboration, prewriting/planning/drafting, using rubrics, and writing to learn. Other EBPs were also in evidence as described earlier, but these were less pervasive. In terms of types of writing assigned, teachers in the schools we studied showed evidence of focusing on comparison/contrast and research-based writing tasks requiring source-based evidence. Many of the teachers in this study were drawing these tasks directly from the instructional modules offered by the state with varying degrees of adaptation.

Our second research question focused on teachers’ perspectives toward aligning their writing instruction practices to the CCSS and here we found greater differences between teachers in odds-beating and typically-performing schools. While most teachers expressed a positive view of the CCSS overall and felt that they had raised their expectations for the quality of students’ writing as a result, this general perception was tempered by typically-performing school teachers’ more negative view. These teachers reported dismay at sacrificing reading and writing about fiction that they felt was more engaging for elementary school students than non-fiction texts. They also expressed concern about increasing the emphasis on informative and argumentative writing in alignment with the CCSS and CCSS assessments. They connected this shift in the emphasis of their practice to students’ declining interest in writing in general and competence in what they characterized as “creative” writing and “independent” thinking more generally.

Before concluding, it is important to note that the schools selected for this study are not meant to represent all elementary schools where the CCSS have been implemented in the United States; rather, they were selected to highlight the kinds of practices related to better-than-predicted and predicted performance outcomes on Common Core ELA assessments in one particular RttT state (New York) and thus represent relatively better-case scenarios for more economically disadvantaged school contexts. Our findings are based on case studies of nine schools, with observations, interviews, and focus groups spanning only two days in each setting. Therefore, many practices are not accounted for here, and what is presented should be understood as snapshots of instructional practice. Finally, as the larger study in which this one is embedded did not focus specifically on writing, more detailed descriptions of writing practice are not available and will require additional research.

Conclusion

In the US RttT context, the CCSS and APPR are purportedly intended to close achievement gaps by holding teachers accountable for students achieving college and career readiness standards as measured through performance evaluations tied to CCSS-aligned test scores. From a sociocultural perspective, the potential perils in these standards and their associated shifts rest on their assumptions about typical literacy teaching practices and which of these, or what combination of them, are in need of change.

One major assumption is that prior to the CCSS students lacked sufficient opportunity to engage with literacy tasks independently. In fact, the CCSS place a good deal of emphasis on student independence: variations of “independent” appear 26 times in a 43-page document detailing the “Research Supporting Key Elements of the Standards” (CCSS, n.d.). In this document, the problem of poor literacy outcomes is, in part, attributed to excessive scaffolding. CCSS designers presume that in contrast to the “independent reading of complex texts so crucial for college and career readiness, particularly in the case of information texts,” students in US elementary and secondary classrooms have been “given considerable scaffolding—assistance from teachers, class discussions, and the texts themselves (in such forms as summaries, glossaries, and other text features)” (CCSS, n.d., p. 3). The document’s glossary defines “independent” as “a student performance done without scaffolding from a teacher, other adult, or peer; in the Standards, often paired with proficient(ly) to suggest a successful student performance is done without scaffolding” (CCSS, n.d., p. 42). Yet this emphasis on independence runs counter to the strong evidence from prior research that peer collaboration and feedback in writing activities that include prewriting, planning, and drafting are correlated with better writing performance (Graham et al., 2012; Troia, 2014; Troia & Olinghouse, 2013).

Notably, a lack of student independence, or what some teachers referred to as opportunities to be “creative” in their writing, was a concern identified in this study. This issue was borne out in what was observed in some classrooms: heavily scripted lessons accompanied by formulaic writing activities resembling fill-in-the-blank exercises. This concern was also evident in teachers’ reports that they had reduced the amount of imaginative and narrative writing in their classrooms, as well as the reading of fictional texts, to focus more closely on text-dependent informational writing and the reading of non-fiction texts. Concern regarding students’ autonomy to choose what kinds of texts to write and read, and their opportunities to craft personal narratives and other more creative writing, was especially salient in the typically-performing schools. This finding suggests that, at least in some educational settings, attempts to align to the CCSS may ultimately work against recommended practices identified in the research, such as the use of creativity/imagery to prompt writing and self-regulation and metacognitive reflection, as teachers focus on the use of rubrics to align students’ writing to the CCSS tests. In this study, teachers who used a “teach to the test” or “lift and deliver” approach tended to voice more negative feelings about the influence of the CCSS on their writing instruction and on their students’ motivation for writing.

One implication of this finding is that teachers, especially in schools that have not enjoyed a history of exemplary ELA performance, will need carefully crafted, collaborative, and ongoing professional development opportunities in how to integrate EBPs and a variety of writing tasks into their lessons. Such opportunities would need to focus on filling in gaps in CCSS materials and models within and across grades to address the perils discussed at the outset of this article. Teachers might, for example, analyze instructional modules, identifying the modules’ expectations for writing in relation to their students’ specific strengths, interests, and needs. This work would be done in light of EBPs for writing recommended in the literature. Teachers might then collaboratively plan lessons in which they thoughtfully adapt modules for their teaching contexts, focusing on how and why different writing tasks and activities related to those tasks might be integrated.

Another important area of focus for professional development involves close examination of what scaffolding of CCSS-aligned writing instruction might look like and of which scaffolds are appropriate when and for whom. As noted by Benko (2013), the term scaffolding has been widely misunderstood and misapplied in literacy instruction. The ultimate goal of a writing scaffold, consistent with CCSS aims, is for students to respond to increasingly complex and challenging tasks independently. Scaffolds are “temporary assistance provided to students as they complete the task.” Yet, particularly in the era of high-stakes assessments, potentially helpful scaffolds such as the paragraph structure provided to students in the bullfrog example discussed earlier have become “the goal upon which students are evaluated” (Benko, 2013, p. 291) rather than a structure that supports the development of more complex writing competencies. Ideally, scaffolds would be removed as students internalize their features and goals and gain greater independence in their ability to respond flexibly to a wider range of writing demands.

Further, professional development regarding the effective use of writing scaffolds would focus on differentiated use. For example, though teachers often use scaffolds to teach writing structures, students’ interest and motivation to engage in challenging writing tasks, as well as their stage of development, can and should also be assessed when determining scaffolding. Professional development regarding scaffolding would ideally focus on the use of scaffolds as means for students to develop the abilities to (1) use peer collaboration variably and in combination with working independently on increasingly complex writing tasks, and (2) apply an ever-growing set of writing strategies flexibly, rather than treating scaffolds as end goals.

Finally, while some of the promises of the CCSS appeared to be fulfilled in raised expectations for informational writing, the findings also encourage us to revisit the perils of how any standards for writing might be translated into effective practices that meet diverse young learners’ needs in a high-stakes accountability context. In this regard, we note the dangers of misalignments among standards for writing, materials and models, high-stakes assessments, and evidence-based practices recommended in the literature. We also note the importance of not losing sight of the value of providing variability in kinds of writing tasks as well as of paying close attention to young learners’ diverse motivations to write and needs for different levels of scaffolding. Future research addressing how the CCSS and APPR are impacting practice in larger samples of schools and with greater diversity of participants is needed as efforts to revamp standards for writing are implemented not only in US RttT schools but in locations around the world.