In Chapter 4 entitled “Knowledge”, we spent some time discussing what is meant by knowledge. Here are a few things we might ask students to do:

  • Recite the Pledge of Allegiance from memory.

  • Use your own words to explain the statement, “The state at the center of the earth is igneous fusion.”

  • Find the product of 77 times 88.

  • Find the pressure of water vapor inside an initially evacuated 5.0-L vessel, maintained at 25°C, into which a 2.0-mL sample of liquid water is injected.

  • Write a paper describing the emergence and use of wind turbine power generation in the EU.

  • Compare and contrast the Japanese and Swiss health care systems with the intent of recommending the adoption of one or the other for the State of California.

These happen to follow a hierarchy according to Bloom’s Taxonomy of Educational Objectives: knowledge, comprehension, application, analysis, synthesis, and evaluation.1 Although an increase in difficulty is implied, difficulty is not really a part of the taxonomy. Reciting the Koran from memory is difficult even though it is a straight knowledge activity; stating how you liked your dinner at the local diner last night is easy even though it is an evaluation.

In classrooms, we ask students to do things. Sometimes we ask them to do things that are tricky. A trick question one of us often uses is to ask about the water vapor pressure that exists after a small amount of water is injected into a large container. The trick is that not all of the water evaporates. Students usually just pull out an equation and proceed to plug-and-chug away without realizing that the physical reality of the situation is quite different from what the equation is meant for. What one needs to do is to look up the pressure in a table. Sometimes we ask them to do things where there is no right answer – like writing a paper or recommending a policy. Regardless of what we ask students to do to demonstrate their knowledge, assessment and specific feedback are critical. From the ULM perspective, any discussion of feedback and assessment should be thought of in terms of the five rules of the ULM outlined earlier. Before discussing these issues, however, we begin with a brief overview of how we define and think about assessment and feedback.
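For the curious, a quick back-of-the-envelope check shows why not all of the water evaporates. This is a sketch only, using the ideal gas law and a tabulated value for the vapor pressure of water at 25°C (about 23.8 torr); the variable names are our own.

```python
# Can all 2.0 mL of injected water exist as vapor in the 5.0-L vessel?
R = 0.08206           # L·atm/(mol·K), ideal gas constant
T = 298.15            # K (25 °C)
V = 5.0               # L, vessel volume
P_vap = 23.8 / 760    # atm, tabulated vapor pressure of water at 25 °C

# Maximum moles of water vapor the vessel can hold at saturation
n_max = P_vap * V / (R * T)
mass_max = n_max * 18.02        # g, using the molar mass of water

mass_injected = 2.0 * 1.00      # g, 2.0 mL at roughly 1.00 g/mL

print(f"max vapor: {mass_max:.3f} g, injected: {mass_injected:.1f} g")
# Only about 0.12 g can exist as vapor, so most of the 2.0 g remains
# liquid, and the equilibrium pressure is simply the tabulated vapor
# pressure: 23.8 torr. Plug-and-chug with PV = nRT on the full 2.0 g
# would give a pressure far above what is physically possible.
```

The point of the trick is visible in the numbers: the equation answers a question the situation never asks.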

Assessment

Classroom teachers assess students all the time. We give tests, grade homework, and observe students working on group projects. From this perspective, assessment is simply a tool we use to determine how much learning has taken place. From the ULM perspective, assessment is a tool that we should use to determine how close students are to reaching their goal. We can do this in a number of ways. What may be the most important issue, however, is not what kinds of assessments we use, but rather what we are doing with them.

The ULM argues that the goal of most learning situations should be mastery, long-term retention, and the ability to transfer information to other contexts or situations. As we saw in Chapter 4, if we want to promote mastery, then we need to provide opportunities for practice in varied contexts. Another way to say this is that if we want students to pursue mastery, then they need (1) a goal, (2) opportunities for practice with assessment, and (3) significant feedback on their progress toward that goal. If we also want to promote long-term retention, then students should do things that require long-term retention. From the ULM perspective, the most appropriate way to accomplish both mastery and long-term retention is to provide opportunities for regular “formative” assessment and a more formal cumulative or “summative” assessment at the end. Formative assessment is used during learning. It includes any number of activities in which students can receive feedback on their progress. This may include (but is not limited to) questions posed by the teacher or classmates during instruction, in-class activities, homework, informal (and formal) teacher observations of student progress, student reflections, quizzes and exams during the semester, and class projects. The key feature of formative assessment is that it provides teachers with an opportunity to assess student progress toward the goal, together with an opportunity to provide students with feedback about their progress, including recommendations for improvement.

In contrast to formative assessment, a cumulative or “summative” assessment should be designed to allow teachers to assess – at the end of instruction – the extent to which a student achieved mastery and long-term retention. Additionally, summative assessment should provide students with an analysis of whether or not they have reached their instructional goals. Summative assessments are usually more “formal” than formative assessments. In many cases, a cumulative assessment of long-term retention is built directly into the content students are learning. In a typical math class, for example, what you learn in Chapter 4 will likely be used to complete more complex tasks in Chapter 6, so long-term retention is a necessary prerequisite for future success. This may not be as obvious in other areas. For example, in a psychology course, what you learn about Freud may or may not be a prerequisite to understanding Piaget’s theory. Whether or not the subject has some cumulative long-term retention requirements, the question is whether or not students learn in that way. From the ULM perspective, a well-designed summative assessment encourages long-term retention.

Feedback

No matter how we assess students, the information we provide them about their performance on that assessment – the feedback – may be what matters most. Feedback is information provided to a student by a teacher about that student’s performance, progress, or understanding. We know that some types of feedback affect learners more than others. In general, feedback about the process students are using tends to be more effective than feedback about the product. Therefore, simply grading homework with a percentage score (e.g., 80%) or a summative comment (e.g., “great work”) may not be sufficient feedback for the most effective learning. Rather, substantive feedback about the process students are engaging in seems to be the most effective form of feedback. We can say three things about feedback. First, effective feedback is a required part of the learning process. Second, effective feedback occurs at multiple points during learning. Third, and perhaps most important, effective feedback prompts students to generate their own feedback. When a teacher provides feedback by asking a student to think about how he reached a particular answer or response, the student must think about what he did and thus generate a personal assessment of his progress. In short, from the ULM perspective, effective feedback prompts students to allocate working memory space to self-monitoring or self-regulatory activities. This is not easy. Developing effective feedback can be very time consuming and requires effort on the part of the teacher. Our intent here is to use the ULM to guide the way we provide feedback. We begin by looking at how assessment and feedback fit within the ULM rules.

New Learning Requires Attention

The first ULM rule is that teaching is about getting students to “attend” to things. Perhaps the most basic function assessment performs is telling students what to pay attention to. How often have we heard students ask, “Is it going to be on the test?” or say, “If it isn’t on the test, I am going to skip it”? For better or worse, students base what they will attend to, and what they will put forth effort to learn, on what they think will be tested. While this is often viewed as bad, and “teaching to the test” is seen as lowering the quality of learning, assessment and testing are not inherently good or bad. Good assessment takes advantage of the privileged role that testing has in directing attention by assessing what is important. If assessment is in line with the goals of instruction, then the assessments will reinforce the learning goals of the class. It is only when assessments are not aligned with learning goals and test peripheral or unimportant information that they become problematic and dysfunctional for learning. Similarly, feedback performs a necessary function by helping students direct their attention to the proper instructional materials or learning activities and by reinforcing what is and isn’t important.

The attention-directing roles of assessment and feedback mean that careful attention must be paid to creating high-quality tests and other assessments that are clearly and tightly aligned with the learning goals of the class. A classic way of doing this is through a table of specifications, which is a grid that crosses learning objectives/goals with assessment items and activities. Excellent guidance and examples of these can be found elsewhere.2 Feedback, too, must be well thought out and focused on the specific information that needs to be attended to.
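As a small illustration of the idea, a table of specifications can be thought of as nothing more than item counts keyed by (objective, taxonomic level). The objectives and counts below are invented for illustration, not a recommendation for any particular class.

```python
# Hypothetical table of specifications: each cell counts the planned
# test items for one learning objective at one taxonomic level.
table = {
    ("vapor pressure", "knowledge"): 2,
    ("vapor pressure", "application"): 3,
    ("gas laws", "comprehension"): 2,
    ("gas laws", "application"): 3,
}

def items_for(objective):
    """Total planned items assessing one objective, across all levels."""
    return sum(n for (obj, _level), n in table.items() if obj == objective)

def items_at(level):
    """Total planned items at one taxonomic level, across all objectives."""
    return sum(n for (_obj, lvl), n in table.items() if lvl == level)

print(items_for("vapor pressure"), items_at("application"))
```

Row and column totals like these make misalignment visible at a glance: an objective with zero items, or a test dominated by one taxonomic level, shows up immediately.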

Learning Requires Repetition

The second ULM rule is that learning requires repetition. Repetition is accomplished by retrieval of knowledge from long-term memory into working memory. Retrieval means bringing something out. Assessments require demonstration of knowledge and hence require that knowledge to be retrieved. The act of retrieving knowledge for an assessment not only demonstrates that it was learned, but also increases the strength of that knowledge in memory. A recent and very interesting study under controlled conditions provided a rather remarkable outcome that illustrates this point.3 A large group of students studying Swahili vocabulary was divided into four groups. One group studied and tested only those words they had previously not learned. A second group studied and tested all words. A third group studied only words they felt unfamiliar with, but tested all. A fourth group studied all but tested only those found to be unfamiliar. “Repeated studying after learning had no effect on delayed recall, but repeated testing produced a large positive effect. In addition, students’ predictions of their performance were uncorrelated with actual performance. The results demonstrate the critical role of retrieval practice in consolidating learning and show that even university students seem unaware of this fact.”

Assessment performs an important role in strengthening what has been learned. Assessment provides opportunity for practice, and success comes to those who practice, practice, practice. We may sometimes hear a student say, “I studied and knew the material, but I just couldn’t bring it out on the test.” What this likely means in ULM terms is that the student hasn’t had enough repetitions of the knowledge to make the chunk strong enough to retrieve. Assessment can provide that repetition. Assessments also provide opportunity for corrective feedback to help refine knowledge in memory chunks. So, more specifically, we believe that success comes to those who practice, assess, get feedback, practice, assess, get feedback, and so on. Without assessment, it is impossible to guide the repetition necessary for accurate chunk formation.

Learning Is About Connections

Knowledge is about creating a connected pattern of neurons. Higher, more complicated forms of learning, like concept formation or skill development, are essentially about forming a series of complex connections among discrete ideas. Students make connections between the concepts of shapes and sides to understand the Pythagorean Theorem or among historical issues, a belief in states’ rights and the Emancipation Proclamation as they generate an understanding of issues that led to the Civil War.

Assessment fosters the creation of these connected patterns by requiring students to connect and combine the information and skills they are learning. Open-ended assessments that ask students to compare and contrast discrete concepts, solve problems, transfer knowledge to new contexts or situations, write essays, and demonstrate skills allow students to engage in the problem solving and critical thinking that underlie the construction of larger knowledge structures in memory and the creation of new connections. Assessments that require making connections will foster knowledge connections. These assessments also provide opportunities to observe the constructions and connections students are making, so that effective feedback can be provided to foster productive constructions and avoid misconceptions. Open-ended assessments provide rich opportunities for mentoring, guidance, and more indirect instructional feedback methods like Socratic dialog.

Teaching to the Test

Much talk centers on the phrase “teaching to the test.” This has been especially true since enactment of the No Child Left Behind legislation. Many, if not most, people seem to react negatively to this sound bite. It connotes very restricted learning. There was an advertisement on local television in 2008 for learning centers intended to help struggling students. The ad showed a student who, one assumes, had learning issues and was helped by the center. She recited the last names of the presidents: “Washington, Adams, …, Bush, Clinton, Bush.” Obviously this was impressive. To accomplish this required effort and, because of that, required motivation. It’s a task that was doable. Success with a task like this might increase self-efficacy for learning, especially if parents were to ask the child to demonstrate this skill to friends. (OK, did you remember that Fillmore was a president?) On the other hand, how useful was it? It’s not that knowing this is either bad or wasted; it’s that this knowledge alone is not particularly useful. Moreover, given just a list, the learning strategy involved is straightforward: rehearse, rehearse, rehearse. If the test consists of recalling lists of presidents’ names, then perhaps what is being tested is not in the overall best interests of national security.

Nevertheless, we recommend that teachers re-contextualize their use of the phrase, “teaching to the test.” What we would like to do, under the best of teaching circumstances, is to provide opportunity for transfer to the contexts most like those in which we believe that the knowledge being learned is going to be used. If this is the case, then the “tests” probably need to be changed to better reflect the ultimate intended contexts. While this is quite difficult to achieve, it should remain a goal.

Many have embraced Wiggins and McTighe’s Understanding by Design.4 This approach logically advocates deciding what you want known, then deciding how you will know it is known, and then creating instructional materials that will lead to the appropriate learning. At the end of the day, this is really just a repackaging of teaching to the test. Think of it this way: If it isn’t something you wanted learned, why was it on the test? If it is something you want learned, why isn’t it on the test? We emphasized the use of tests to reinforce classroom learning goals in our discussion of how assessments focus attention.

In the United States, as in most other countries, the purpose of compulsory education is to “centralize” those aspects of knowledge and intellectual skill thought to apply to all or most vocations. As a result, reading and arithmetic are taught in schools. In contrast, surgery is taught in surgical rotations in medical or veterinary school, heat transfer in advanced engineering, and steamfitting in apprenticeship programs. The reading and arithmetic skills used by surgeons, engineers, and steamfitters all differ. However difficult and impractical it may be, contextualizing reading and arithmetic at those levels would lead to better learning outcomes. That is, if we could say: “This is what a steamfitter needs to know about geometry.” On the other hand, a great strength of education is that, if you are a steamfitter and want to become a heat-transfer engineer, you don’t have to repeat your K-12 education to make the change.

High-Stakes Testing Versus Feedback

One certain impact of the U.S. No Child Left Behind legislation has been an emphasis on testing, especially what one might call high-stakes testing. High-stakes testing is not likely to go away any time soon, nor should it. The stakes in tests like the SAT, GMAT, or MCAT are high. Even these pale in comparison with, say, the board exams taken by those seeking certification and/or licensure in medical specialties, the “bar” examinations taken by those who would practice law, or the examinations required of commercial airline pilots. There are times when assessment for the purpose of measuring and documenting knowledge is appropriate.

There are several problems with high-stakes testing. These tests are expensive and complex to develop, organize, and administer. For this reason, they often are not available “on demand.” Emotions run high for these tests, and adverse effects on performance can result. Students often “choke under pressure.”5 Perhaps most importantly, high-stakes tests, as currently employed, often do not facilitate a focus on either mastery or deep and meaningful learning. Rather, high-stakes tests seem to encourage learning “just enough” to perform well on the test. From the perspective of the ULM, high-stakes tests can create goals for performance or task completion rather than for learning. This leads us to the question: what is the alternative?

As we have discussed, providing performance-related feedback is one alternative and an effective way of bringing about learning. As students engage in a learning task, teachers can monitor their progress and provide feedback not only about what they are doing well and not so well, but also about how they are going about doing it. This ongoing feedback is helpful because it gives students information about what they are doing before they have automated it. At the same time, it gives teachers very powerful ways to gauge a learner’s knowledge.

Modern computer-based homework systems and other electronic practice systems offer excellent means of providing “formative” assessments of learning. In these systems, students have an opportunity to submit their homework electronically and to receive feedback on their work as they are working on it. At a minimum, these systems can provide feedback to students before they have automated a new task. Remember, it is much easier to correct someone before they have automated a task. If a teacher waits until his student has automated a task, it becomes much more difficult to correct errors. Earlier on p. 48, we discussed how hard it was for Tiger Woods to change his swing.6

Praise Versus Encouragement

For two decades, some in the literature have suggested distinguishing praise and encouragement in the classroom.7 Praise involves saying nice, pleasant things without indicating why that compliment is being given. Encouragement spells out rather explicitly why the praise is being offered. From the perspective of the ULM, it is a matter of explicitly connecting the praise to whatever action is being praised. These connections permit a teacher to encourage mastery rather than performance. Consider Table 10.1.

Table 10.1 Examples of feedback

Scaffold Learning by Responding to Outputs

Essentially all teachers advocate some form of active learning – in which the learner is engaged in his or her own learning processes. Active learning is difficult to define or describe. In the ULM, active learning is motivated working memory allocation. Of course, this isn’t really observable, so it may be difficult for someone like a teacher to determine whether learners are actively engaged with new content. One fairly certain way to detect active learning, however, is to interact with learners by providing performance-related feedback. During instruction, an emphasis on eliciting outputs from students and responding to those outputs with suggestions for improvement is likely the most efficient way to keep students learning “actively” and ultimately bring about new learning. Perhaps you can see why feedback must be focused on the processes students used, rather than simply the outcome.

Learning is not just a “garbage in, garbage out” process. While much of what we teach is explicit and can be learned as declarative knowledge, success in life nearly always depends to some degree on developing skills for using knowledge – creating procedures. If teachers want that to happen, then ways of using knowledge and creating skilled behaviors are what should be practiced and tested.

Responding with feedback to student outputs is especially critical for proceduralization. We discussed how procedural knowledge is created in Chapter 4. Recall that procedural knowledge is based on the results of action: success in achieving a goal. How do students know whether their results are correct? How do they know whether they are building optimal procedures? The only way they can know is through feedback.

Declarative knowledge is typically right or wrong; you either know it and have strengthened it enough through repetition to recall it on a test or in another setting, or you don’t know it. Procedural knowledge, on the other hand, is often not clearly right or wrong. There can be many ways to do something, many approaches to solving the math problem, or many options for conducting a scientific inquiry. Helping students sort out these possibilities and strengthen the best approaches is what teachers can do with feedback. This is what scaffolding student learning is about. This is also the heart of what Ericsson called deliberate practice with a mentor.8

Assessments should allow students to use knowledge in ways that provide observable outputs. From those outputs, teachers can give feedback that effectively scaffolds for the student. Use of knowledge requires assessments that go beyond just statement of the knowledge. We might want to teach a rule such as: In the periodic table of the chemical elements, atom size increases going down a group. This could be tested with something like: state the trend in size for groups in the chemical periodic table. This would be a direct test of declarative knowledge. The same rule might be tested as follows – select the element with the largest atoms from among: lithium, sodium, and potassium. In this case, the learner must decide that all of these elements are in the same chemical group, that potassium is the lowest in that group, and that potassium would have the largest atoms. This learning involves acquiring a procedure that is a decision rule for determining the largest atom in a group, given the knowledge that atom size increases going down a group. This procedure would apply to any combination of atoms in the periodic table. Besides allowing a teacher to know whether the student knows the rule, this kind of assessment allows feedback for developing a procedural application of the rule.
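The decision rule just described can be sketched in a few lines. The period numbers are standard periodic-table facts; the function name and data structure are our own illustration, not part of any assessment instrument.

```python
# Illustrative data: period number for three group-1 (alkali metal)
# elements. Within a group, atom size increases going down the table,
# i.e., with increasing period number.
PERIOD = {"lithium": 2, "sodium": 3, "potassium": 4}

def largest_atom(elements):
    """Apply the decision rule: assuming all the given elements share
    one group, the element lowest in the group (highest period number)
    has the largest atoms."""
    return max(elements, key=lambda e: PERIOD[e])

print(largest_atom(["lithium", "sodium", "potassium"]))  # potassium
```

The point of the assessment item is exactly this chain of decisions: recognize a shared group, identify the lowest member, and apply the size trend. A student who can only restate the rule has the data; a student who answers the item has the procedure.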

Notes

  1.

    Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. New York: D. McKay. This has been revised slightly to remember, understand, apply, analyze, evaluate, and create, as found in: Anderson, L., & Krathwohl, D. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives (complete edition). Columbus: Merrill. The net effect of this would be to interchange the fifth and sixth items of our example. In this example, we believe the last item is the most challenging of those listed. A common confusion about the taxonomy is that it corresponds to difficulty. Naming the presidents or reciting the Koran from memory are both at the “lowest” taxonomic level. Responding to “How was dinner last night?” is an evaluation.

  2.

    Kubiszyn, T., & Borich, G. (2007). Educational testing and measurement: Classroom application and practice (8th edn.). Hoboken, NJ: Wiley/Jossey-Bass Education.

  3.

    Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319, 966–968.

  4.

    http://en.wikipedia.org/wiki/Understanding_by_Design (Accessed March 23, 2009); Wiggins, G. P., & McTighe, J. (2005). Understanding by design (2nd edn.). Alexandria, VA: Association for Supervision and Curriculum Development.

  5.

    Beilock, S. L., & Carr, T. H. (2005). When high-powered people fail: Working memory and “choking under pressure” in math. Psychological Science, 16(2), 101–105.

  6.

    http://www.oneplanegolfswing.com/oneplanemembers/Tour_Pros/Tiger-Woods/index.jsp (Accessed March 23, 2009).

  7.

    Hitz, R., & Driscoll, A. (1988). Praise or encouragement? New insights into praise: Implications for early childhood teachers. Young Children, 43(5), 6–13.

  8.

    Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363–406.