
Part One: Feedback Origins, Purposes and Application

Introduction

Feedback is seen as a key process in learning, providing information on actual performance in relation to the goal of performance. There is a large body of literature arguing for the importance of feedback in learning, yet there is an accruing body of evidence pointing to an inability of feedback to perform its function in practice. In particular, learner surveys have indicated that feedback is one of the most problematic aspects of the student experience (Carless et al., 2010). Ironically, but not surprisingly, educators typically believe that their feedback is more useful than their students believe it to be (Shute, 2008). Educators’ inflated perceptions of their own performance point to a key issue that lies at the heart of the “feedback problem”: that educators, like all learners, need feedback on their (feedback giving) skills in order to recalibrate and improve their practices.

There is mounting survey data to suggest that students are dissatisfied with feedback. The Course Experience Questionnaire (Krause, Hartley, James, & McInnis, 2005) and the National Student Survey (Higher Education Funding Council for England, 2011) consistently report that graduates are more dissatisfied with feedback than with any other facet of their programs. Even with this incoming data, educators seem to rationalise the reported dissatisfaction with factors inherent in learners. One rationalisation in the discourse is that learners do not understand what is meant by feedback (Hattie & Timperley, 2007; Shute, 2008) and therefore do not recognise “feedback” when it is provided. Another proposition is that learners are thirsty vessels for performance information and will not be satisfied regardless of the amount of attention given to them (Henderson, Ferguson-Smith, & Johnson, 2005). In both arguments, the “fault” is seen to reside with the learner, rather than stem from the skill of the educator, the appropriateness of the learning activity or the nature of the learning environment. This tendency for “deflection” happens frequently when there is a discrepancy between learners’ internal perceptions (self-evaluation) and external teacher perceptions (feedback). Chinn and Brewer’s (1993) work suggests that when such a discrepancy arises, the receiver will reinterpret external feedback to make it conform to their own hope, intention or interpretation of their own practice. In the case above, educators may argue that there is nothing wrong with their actual feedback practice, but rather, the problem stems from learners’ inaccurate interpretation of it.

This chapter critiques literature on feedback from a range of fields, including higher education and professional education, and focuses on untangling why feedback is seen as problematic. Part one will explore what is done in feedback in education, and part two will focus on how it might be done better. Our suggestions for improving feedback are not based on better dissemination of the clear and already established messages on how to “do feedback”; rather, we call for a reconceptualisation of feedback that may be more effective and more conducive to uptake in practice. In presenting this alternative framework, we argue for less preoccupation with what educators “do” in giving feedback, such as how much information to give and at what time, and instead anticipate a shift towards a better understanding of how students seek, interpret and use data related to their learning, and how programs are designed to foster this. It is hoped that an alternative framework, built on constructivist learning principles, can encourage learners and educators to view feedback as a co-produced system of learning, rather than as discrete, unconnected episodes of unidirectional “telling”. Challenging traditional “feedback rituals” requires commitment to curricular redesign with purposeful and supported opportunities for learners to engage in feedback “episodes”, to implement changes triggered by feedback, and to reassess their performance in relation to the target. Such a system-orientated view of feedback design dispels assumptions that “feedback is done to learners” and that “feedback ends in telling”. The shift in conceptual framework and associated practices acknowledges that learning is co-produced by both learner and teacher, and is influenced by context and relationships. This shift in feedback ideology should translate to changes in learner and teacher approaches to feedback, and positions feedback as a process to build sustainable learning practices, rather than simply as a catalyst for immediate, episodic behaviour change.

The Definition of Feedback

Feedback was discussed as a concept in the 1940s in the field of rocket engineering (Ende, 1983) and was defined as information that a system uses to make adjustments to reach a target or goal. Norbert Wiener, a researcher who helped create the science of cybernetics, was one of the first to extend the concept to the social sciences. He stated that “Feedback is the control of a system by reinserting into the system the results of its performance. If these results are merely used as numerical data for criticism of the system and its regulation, we have the simple feedback of the control engineer. If, however, the information which proceeds backwards from the performance is able to change the general method and pattern of the performance, we have a process which may very well be called learning” (Wiener, 1954, p. 71).

Since this early conceptual declaration, feedback as a concept has had wide application in education, organisational psychology and business. Its purpose as a learning tool is to highlight discrepancies between actual performance and intended performance, with a motive to produce behaviour change. The premise behind the need for feedback is that novices, across any spectrum of knowledge or profession, have difficulty in understanding the performance target, and have difficulty in evaluating how their own performance matches up to the target. Feedback acts like a mirror, reflecting back to the learner “what their performance looks like”. For some people, the external provision of feedback matches their own self-evaluation of performance. That is, there is good approximation between self-assessment of competence and the actual performed or displayed activity. Others rely on external feedback as a reference point to build the accuracy of their own self-analysis. External feedback can be seen as a tool to encourage accurate self-analysis. With this form of “data collection and comparison” over time, individuals can hone their self-evaluation skills to approximate external judgements. In other words, external feedback can help us to better judge the quality of our knowledge and work.

Interestingly, early experimental studies looking at the effect of feedback on performance attempted to eliminate the role of the internal, or self-evaluative, function in feedback (Butler & Winne, 1995). Researchers focused on the effect of external provision of information on observable performance. In line with this behaviourist philosophy, psychologists have commonly employed a methodology focused on looking for relationships between treatments (stimulus) and behaviours (response), and hypothesised cognitive mechanisms behind these correlations. Harré and Van Langenhove (1999) argued that behaviourist psychology is not unlike chemistry methodology, where chemical reactions are observed and explanations are then sought in unobserved molecular processes. “The concept of person is secondary if it is invoked at all” (Harré & Van Langenhove, 1999, p. 43).

With more recent theoretical perspectives on learning, including constructivist ones (Mory, 2004; Price, Handley, Millar, & O’Donovan, 2010) that acknowledge the active role of the learner in co-producing knowledge, it appears that this behaviourist approach to studying and understanding feedback is severely limited, as it does not recognise the agency of learners. Despite the acknowledgement that these alternative and more recent theories better represent understandings about how people learn, much research in feedback, and many of the practice recommendations, continue to lean on a behaviourist view of feedback as external transmission of information. That is, the dominant view of feedback is that a more experienced person tells a less experienced person about their interpretation of what they did, and how to do things better (Butler & Winne, 1995). With this conception, it is not surprising that much of the feedback literature focuses on enhancing the teacher’s capacity to deliver high quality information at appropriate junctures (Nicol & Macfarlane-Dick, 2006), rather than focusing on the role of the student in feedback.

Typically, as highlighted by Butler and Winne (1995), learners have rarely had explicit instruction or support in how to seek or use feedback, particularly when it might contradict or challenge their own internal view of how they see their performance. This observation leads us to think that in order to improve the effectiveness of feedback, we need to focus not only on improving the quality of the externally provided message but also on strengthening the self-evaluative capacity of learners (Boud, 2000; Nicol & Macfarlane-Dick, 2006; Yorke, 2003). This message about the need to shift focus to the role of the learner in engaging with and using feedback, rather than focusing on the mechanics of the “sender’s delivery” of feedback, forms the central premise of part two of this chapter.

Models to Explain How Feedback Works

There are a number of explanatory models available to aid understanding about how feedback works in learning. Some are linear and behaviourist in sentiment, some are circular to imply an iterative process, some ignore the internal capacities of the learner, and others represent the interplay between internal and external performance information and how this affects response or output.

Despite the variability in models, there seems to be consensus in the literature about three key components that constitute feedback in learning. That is, the prerequisite properties of feedback include: (1) information on the goal of performance, (2) information about how performance meets the goal, commonly referred to as the “gap”, and (3) strategies to address the gap (Sadler, 1989). Similarly, Hattie and Timperley (2007) describe the three components of the process as the feed up (where am I going?), the feedback (how am I going?) and the feedforward (where to next?).

A Mechanical Model of Feedback

The key premise of a mechanical or technological model of feedback, as applied to rocket engineering or the powering of steam engines, is that information relating to the current task/work is given to learners in order to change the quality of the subsequent task/work. This model implies that there needs to be a detectable change or influence in subsequent behaviour as a result of the information exchange. It also implies that the teacher needs to provide whatever input is required to shift student performance in the desired direction. In this model, the type of information that is most important is not that which relates to any aspect of the task itself, but rather, information that impacts on the conduct of subsequent tasks. Interestingly, studies that have examined feedback practices in situ, particularly in workplace learning, have indicated that only a small percentage of feedback content is dedicated to discussion of strategies for improvement in performance (Fernando et al., 2008; Molloy, 2009).

In a mechanical model, feedback involves information that is used, rather than information that is merely transmitted. Ramaprasad (1983) aptly summarised this function: “the information on the gap between the actual level and the reference level is feedback only when it is used to alter the gap” (p. 6). The most obvious downfall of this model lies in its mechanistic roots: it assumes that the learner needs a teacher to provide the information that they need to learn, and it assumes that the learner will respond to the “feedback intervention” in a predictable way. In the “messy” real-life context of education, where learners have the capacity to construct their own learning, and engage in activities with varying intention, the mechanical model of feedback does not hold up.
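
To make the cybernetic origin of the concept concrete, the sketch below illustrates Ramaprasad’s point in the simplest possible control loop: information on the gap between the reference level and the actual level only functions as feedback once it is fed back in to alter subsequent output. This is a minimal illustration, not an implementation drawn from the chapter; the function name, the gain value and the numbers are assumptions made for the example.

    # A minimal sketch of a mechanical (cybernetic) feedback loop.
    # All names and values are illustrative assumptions, not taken from the chapter.
    def run_feedback_loop(actual, reference, gain=0.5, steps=5):
        """Repeatedly compare actual output with the reference level and feed the
        gap back into the system so that subsequent output moves towards the target."""
        for step in range(1, steps + 1):
            gap = reference - actual   # information on the gap between actual and reference levels...
            actual += gain * gap       # ...counts as feedback only when it is used to alter the gap
            print(f"step {step}: output = {actual:.1f}, remaining gap = {reference - actual:.1f}")
        return actual

    # Example: a system starting at 40 adjusts towards a reference level of 100.
    run_feedback_loop(actual=40.0, reference=100.0)

If the gap were only reported and never used to adjust the output, the loop above would produce what Sadler (1989) later calls “dangling data” rather than feedback.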

A Constructivist Model of Feedback

If learners are viewed as active players in constructing their own understanding, a constructivist model of feedback is more appropriate to represent the practice of seeking, giving, receiving and acting on feedback. The model acknowledges that feedback not only acts to improve subsequent performance of the task, but that the very process helps the learner to self-regulate. Under the constructivist framework, feedback is repositioned away from an episodic tool with a short-term impact, towards a process that builds skills over time. Boud (2000) wrote about the concept of sustainable assessment, and Hounsell (2007) extended this concept into “sustainable feedback”, where feedback helps to promote student capacities in monitoring their own learning.

A model to explain the complex, multi-factorial workings of feedback, in which the student is central to the process (and not the educator’s skill in collection and delivery of performance information), is provided by Butler and Winne (1995) (Fig. 33.1). The standout feature of this model is that feedback is conceptualised as intrinsic to self-regulation.

Fig. 33.1 A model of feedback as self-regulated learning. From Butler and Winne (1995), with permission

This conceptual model places the learner at the centre of the feedback process and explicitly acknowledges that the learner is actively making links between their goals in learning, the strategies or approaches they use to achieve this target, and the performance outcomes. This comparative process may cause the student to change their understanding of the goal, or may cause them to tweak or refine the strategies they choose in attempting to reach the goal. The educator (or external party, which may be a peer, practitioner or client) then provides additional external information that helps to further inform the “adjustment process”. The internal and external feedback loops enable the learner to interpret a task’s properties, and to design strategies or tactics to reach the desired goal. The model also acknowledges the impact of motivation on learning and performance.

Kulhavy and Stock (1989) examined the complexities of how external feedback may confirm, complement or contradict the internal feedback (or self-evaluation) of the learner. The researchers devised a “response certitude model” to explain how learners cope with a discrepancy between self-evaluation and external feedback. Chinn and Brewer (1993) and Butler and Winne (1995) also focused on how learners collect and make sense of internal and external information relating to performance. It is notable that these researchers focused on the role of the learner in seeking, interpreting and acting on feedback, rather than on the design or delivery mechanics of externally provided feedback. The “sustainable feedback” model respects students’ agency and emphasises the development of students’ dispositions for evaluative judgement that extend beyond the “formal education” period.

Butler and Winne (1995) identified six key ways in which learners can interact with external feedback so as to render it ineffective. These “maladaptive responses to feedback” were observed and classified in the following ways: the learner can ignore the external feedback, reject the external feedback, view the feedback as irrelevant, perceive that there is no connection between the internal and external feedback, reinterpret the external feedback to make it align with the internal judgement (i.e. hear what they want to hear), and finally, act on the feedback in a superficial way to satisfy the assessor/feedback sender rather than making legitimate shifts in knowledge or practice on the basis of external feedback. In all six instances, the influence of external feedback on behaviour change is likely to be minimal.

Students use internal and external feedback to assess the strengths and deficits in their performance, so that high quality characteristics or behaviours can be reinforced, and less than optimal characteristics can be modified. Again, dominant conceptions of feedback emphasise that feedback is a tool for the learner’s benefit. Sadler (1989) and Nicol and Macfarlane-Dick (2006), however, emphasise that feedback, as a system, also informs the educator about aspects of their teaching effectiveness. This less visible and less discussed function of feedback is highlighted later in the chapter.

Effects of Feedback on Learner Performance and Motivation

Feedback is widely viewed as an intervention to improve learner performance. As reported by Pritchard, Jones, Roth, Stuebing, and Ekeberg (1988), “the positive effect of feedback on performance has become one of the most accepted principles in psychology” (p. 338). This was the accepted wisdom until the mid-1990s, when a large scale meta-analysis on feedback was published by Kluger and DeNisi (1996) in Psychological Bulletin. In their analysis, the authors found that while, on average, feedback improved task performance by 0.4 of a standard deviation, feedback in fact reduced performance in over one third of cases. This finding led the researchers to explore the conditions or variables that rendered feedback either helpful or detrimental to performance. Hattie and Timperley’s (2007) meta-analysis of feedback interventions also showed considerable effect size variability, supporting Kluger and DeNisi’s claim that the approach used in feedback has a significant bearing on whether or not it is useful. A key proposition to emerge from the research is that feedback can have a debilitating effect on performance if it is delivered in a way that is perceived to threaten learners’ “self” (Kluger & DeNisi, 1996).

This potential for feedback to debilitate rather than facilitate performance improvement is also a key thesis in papers by Shute (2008) and Nicol and Macfarlane-Dick (2006). Dweck (1999) explained the detrimental effect of feedback on motivation and performance in terms of the characteristics and world-view of the individual learner. Students who responded poorly to feedback were seen as holding a “fixed” or “entity” view, in which they saw their ability as finite and capped. In contrast, those learners who responded to feedback with subsequent positive behaviour/performance change were characterised as possessing an “incremental view”, in which they viewed their capacity as malleable and contingent on effort and motivation. Those learners with a fixed view of their own capacity had a tendency to interpret feedback relating to failure at a task as failure of the self, and this response served to demotivate action.

Like all issues relating to feedback design, delivery and uptake, two parties are involved in the “feedback dance”, and it is too simple to claim that a learner’s disposition alone creates the predicted response above. The motivational beliefs of learners can be generated and/or influenced by the way educators provide feedback. For example, a common “feedback guideline” for educators is to phrase feedback in a way that emphasises behaviours related to the task, rather than overarching or personalised characteristics such as overall ability, likeability or intelligence (Ende, 1983; Nicol & Macfarlane-Dick, 2006). Feedback that deviates from the task and focuses on fixed qualities of “self” is likely to have a negative effect on motivation and performance (Butler, 1987; Narciss, 2008; Shute, 2008).

Kluger and Van Dijk (2010) have ventured further into the “feedback puzzle” in an attempt to understand the variable capacity of feedback for both good and harm. Rather than focusing on the self-efficacy of the learner, the researchers investigated how the nature of the task itself can interact with the utility of external feedback. The authors postulate that people approach tasks or performances with one of two mindsets: either a promotion focus or a prevention focus. This regulatory focus of the learner determines whether positive (affirming) or negative (corrective) feedback is going to be more effective in soliciting behaviour change. In simple terms, a promotion focus applies to things “people want to do”, and a prevention focus applies to tasks “people have to do”. A promotion-focused task is often based on problem solving and searching for new understandings, and a prevention-focused task is typified by vigilance and adherence to rules in order to avoid failure.

In their experimental study, Kluger and Van Dijk (2010) found that under a promotion focus, people are more responsive to positive feedback, whereas negative feedback tends to be more effective for people under a prevention focus. This research suggests that a one-size-fits-all model of “how to give feedback” is not appropriate. It takes skill for the educator to judge the regulatory focus of the learner, and therefore, the type of feedback that will support the desired change. The findings also challenge educators to examine the properties of their own teaching and learning environment (Molloy, 2010). For example, in practical placements in medical education, error avoidance is important in protecting and optimising the patient’s health: the learner (novice doctor) is operating in a high stakes environment, where their actions have potentially “life and death consequences”. There are other professional cultures that value and thrive on creativity and innovation, and within these learning cultures, a promotion focus may reign over prevention. This research highlights the complexity of feedback in learning, and the centrality of context in influencing effective feedback practice. The learner’s history, cognition and self-efficacy, along with the nature of the task in question, appear to influence the impact of the message. Such research prompts us to question the value in rolling out generic best practice feedback frameworks, which seem destined to collapse under loading in authentic practice.

Effects of Feedback on the Educator

Typically, feedback is viewed as a tool to help the learner. The less discussed function of feedback is as a mechanism to help the educator. Yorke (2003) reported that “the act of assessing has an effect on the assessor as well as the student. Assessors learn about the extent to which they [the students] have developed expertise and can tailor their teaching accordingly” (p. 482). An example of such feedback arises in collating written test results. If a large number of students fail to answer a particular question correctly, the teacher may use this information as a surrogate indicator of the quality of their teaching of that content knowledge.

Another example of how feedback can benefit the educator is when the learner receives feedback on their performance and is then provided with an opportunity to make the suggested changes. This subsequent performance loop can be analysed to assess the extent to which the advice is translated into a change in behaviour. The educator needs to structure a subsequent “practice opportunity” post-feedback to allow the student to exercise any new knowledge gains. As an example, if a teacher observes a student-teacher in action with a class full of children and notes that the student-teacher has difficulty in controlling the children’s behaviour, they may provide feedback such as “… one thing that helps me in this situation is to do A, B and C …”. It is important that the supervisor observes a subsequent class to see whether this strategy has indeed been effective in changing the class dynamic. If there is no change in dynamic, the supervisor is challenged to evaluate their own advice, and collectively the learner and educator need to generate alternative ideas or strategies to help the learner achieve the goal. In summary, the learner’s post-feedback response provides the educator with “data” to evaluate the appropriateness or effectiveness of their own feedback and advice on performance improvement. It could be argued that without knowledge of the effect of any inputs on actual learning, as revealed through performance on subsequent tasks, no feedback has occurred, merely the provision of information that the teacher believes would be valuable.

Factors Impacting on Feedback Quality

Content

Sadler’s (1989) seminal paper identified three essential properties that must be present for students to experience benefit from feedback. Students need to (1) have an understanding of the goal of performance, or reference point, (2) engage in an act of comparison between the goal of performance and the actual performance, and (3) attempt to close the gap between desired and actual performance through action or strategy. Much of the observational research on feedback highlights the lack of time that educators spend on explicating performance targets and providing strategies to address the performance gap (Molloy, 2009; Nicol & Macfarlane-Dick, 2006). That is, students often do not understand the objectives of learning/performance, and educators often do not spend time discussing tangible strategies for improvement (Hattie, Biggs, & Purdie, 1996). As Sadler (1989) eloquently reported, if educators do not provide information on the gap between the actual and reference level, and do not help devise strategies to alter the gap, we simply have a construct called “dangling data” (p. 121). It could well be that the dissatisfaction surrounding feedback reflects the dangling data that students cannot use.

There are ample guidelines published on how educators should structure feedback messages, particularly in relation to how much time should be devoted to affirmation of performance and to criticism of performance. Kluger and DeNisi’s (1996) research on the interaction of “feedback sign” (positive versus negative) with the regulatory focus of the learner is an exception within the ocean of guidelines crafted with the claimed aim of protecting the self-esteem of the learner. A prime example of a model that is frequently advocated in educator training on feedback is the “Feedback Sandwich” (Henderson et al., 2005). In such a model, the educator is assigned the task of softening the blow when providing constructive feedback on performance, so that the information on deficits in performance becomes the meat in the sandwich, wedged between two slices of carbohydrate flattery. The ensuing conversation takes a predictable path that both educators and learners learn to navigate. Rather than a useful framework, this model can be seen as reductionist, tokenistic and paternalistic (Molloy, 2009). The learner anticipates the “important message” in the middle, and learns to disregard the compliments on performance as part of a mandated linguistic ritual.

Many authors on feedback have focused on the impact of feedback on self-concept formation, and the tendency for learners to react defensively to feedback. The speculation regarding the “damaging impact” of feedback has led to the formulation and dissemination of simplistic models that in fact deviate from the original purpose of feedback, as conceptualised in cybernetics. That is, rather than feedback acting as a mirror to reveal performance, the gap in performance, and strategies to bridge the gap between desired and actual task performance, it becomes a social convention of apparent honesty wrapped up in nicety that the learner has to negotiate and decode over time.

The tension for educators in giving feedback lies between acting with sensitivity and delivering honesty. This presents a challenge to educators across all sectors of higher education. Ende (1983, 1995) studied how doctors/supervisors gave feedback to students in medical education and observed that these supervisors went to great lengths to avoid upsetting learners. Ende coined this observed phenomenon “vanishing feedback”, where, in an attempt to avoid a negative emotive reaction, educators disguised or avoided the constructive or corrective information, so that the learner was not privy to the important message and the consequent potential for performance improvement. As reported by Higgs et al. (2004), “Giving feedback that preserves dignity and facilitates ongoing communication between the communication partners, but that also leads to behavioural change, is a challenge” (p. 248).

Feedback characterised by “disguised corrective strategies” is fraught with danger. Students may not pick up on errors in their learning or practice, and may leave the learning encounter with an inflated sense of mastery (Ende, Pomerantz, & Erickson, 1995; Ilgen & Davis, 2000). This has implications not only for their immediate skill base, for example essay writing ability or competence in a technical “hands on” task, but also for their self-evaluative capacity, as it is through the provision of external feedback that learners calibrate their own internal judgements.

Rather than engaging in models of feedback designed to soften messages, what would happen if we stopped underestimating learners’ ability to process and act on truthful feedback? What if we took an alternative route and instead channelled energies into better orientating learners to the purpose of feedback, and provided them with frequent opportunities to seek, listen to and respond to honest feedback, and to align this with their own self-evaluation, throughout their programs?

Other guidelines for educators on the provision of effective feedback include focusing on behaviours and specific performances, not generalisations (Shute, 2008), and highlighting observable decisions and actions, rather than educators’ own hypotheses about the student motivations or intentions behind performance approaches (Ende, 1983). Assuming a learner’s intentions, without asking them for an explanation of their chosen approach to the task, is one way of devaluing their agency as a learner and depriving them of the opportunity to self-evaluate and reflect. This practice positions the educator as the expert, and the learner as the passive recipient of information. A descriptive study by Latting (1992) suggested that educators from a psychology or health background have a tendency to adopt a diagnostic role (hypothesising causes of under-performance) in feedback as a “hangover” from their clinical knowledge paradigm. “Clinically trained clinical educators who have developed skills in assessing the underlying causes of behaviour may be especially prone to offer their interpretations of a subordinate’s behaviour” (Latting, 1992, p. 426). Good educators, like good learners, are those who engage in critical reflection and examination of their patterns of engagement in feedback, looking for historical, social, cultural and pedagogical influences that might shape their habits.

Timing

The majority of generic feedback models available to teachers advocate that feedback is most effective when delivered immediately after task engagement (Ende et al., 1995; Hattie & Timperley, 2007). However, delving into the feedback research reveals a more complex picture in relation to timing (Kulik & Kulik, 1988; Shute, 2008). Clariana, Wagner, and Roher Murphy (2000) found that there is merit in delaying feedback on complex tasks that involve greater degrees of processing. In such cases, delaying feedback can provide the learner with reflective space to evaluate performance and consider alternative ways to approach similar subsequent tasks. The immediate atmosphere of the learning environment and the emotional state of the learner may also determine the optimal time to engage in feedback encounters. For example, in workplace learning scenarios such as a classroom or a hospital, it may not be productive or appropriate to provide the “learner” with immediate feedback on their performance if pupils, patients or colleagues are present. Capacity for receptivity to external feedback is also diminished if the learner is highly emotional due to the nature of the task engagement (i.e. working with an unwell or dying patient) or is disappointed with their performance on the task (Molloy, 2009).

Qualities (and Perceived Qualities) of the “Teacher”

The perceived status of the feedback provider lends significant weight to the feedback message. Novices value feedback from their superiors because of their perceived expertise (Asghar, 2009; Liu & Carless, 2006; Molloy & Clarke, 2005; Poulos & Mahony, 2008). The perceived ability and experience of teachers builds a case for trust and credibility, and therefore learners are more likely to “listen to” and “act on” the feedback messages.

This interpretation of the status of the sender may impact on the use of peers in providing meaningful feedback to learners. In principle, a learner’s peers are in a prime position to give meaningful performance information. Research in both university and workplace learning settings has indicated that student peers can serve as the most accessible, and often the most invested, parties in the learning experience (Falchikov, 2002; Fantuzzo & Riggio, 1989; Ladyshewsky, 2010). It is for these reasons that they offer great potential to provide feedback to each other. The benefits of receiving feedback from different and additional sources are often written about; less so is the benefit that students gain from the act of giving feedback to others as a result of the peer interaction (Ladyshewsky, 2010). This benefit occurs because the “peer tutor” must observe the tutee’s task/performance, think about how this relates to the goal of the task/performance (and therefore engage with task/performance expectations), and reorganise and explain the material in accessible terms to the “peer tutee” (Fantuzzo & Riggio, 1989).

Peers are free from the constraints inherent in evaluation or summative assessment, and therefore there is potential for disclosing honest information relating to deficits in learning, knowledge and performance. They often tend to be more available than teachers and may frame observations, gaps in performance and recommendations in language that is more accessible and meaningful (Ladyshewsky, 2010). However, despite these advantages, peers are commonly viewed as lacking expertise, and therefore their feedback, however sophisticated and accurate, may not have the same reach as an equivalent message delivered by an expert in the field (Falchikov, 2002). This observation points to the potential value of peer feedback in areas in which peers manifestly have expertise. That is, they can have particular value in revealing whether the learner has communicated clearly to them.

Often conflated with the concept of expertise, but not a direct result of content/context expertise, is the use of an authoritative or judgemental voice in feedback. This mode of delivery of performance information implies that the viewpoint cannot be contested; that is, the feedback is stated as fact, rather than positioned as a subjective construct that can be negotiated with the learner. The danger of feedback delivered in such a tone is that it can discourage the learner from self-evaluating or exploring an alternative view on the episode or performance in question. Carless et al. (2010) discuss the “terseness” or “finality” of one-way written or verbal comments that do not invite any addition, modification or contesting by the learner. This mode of feedback delivery does not provide the learner with a sense of agency in their learning. The use of final vocabulary (Rorty, 1989) leaves the learner no room for manoeuvre: it closes options, whether offered in positive or negative form, and discourages self-regulation (Boud, 1995).

Part Two: Creating a Learner Disposition to Seek and Use Feedback

Disparate Educator and Learner Perspectives on How Feedback Is Given and Used

As highlighted in the introduction, educators typically rate the quality of the feedback they provide higher than learners rate it. In particular, learner surveys have indicated that feedback is one of the most troublesome aspects of the student experience (Carless et al., 2010; Krause, Hartley, James, & McInnis, 2005). Students report deficits in both the amount and the quality of feedback provided. Observational studies in higher education seem to confirm students’ self-reported dissatisfaction with the delivery of feedback, in that students often do not act on feedback to improve the quality of their work. A review by MacDonald (1991) concluded that many students do not read written feedback provided by educators, and those who do are not guaranteed to act on the messages. This finding was supported in a later study by Sinclair and Cleland (2007), which revealed that fewer than half the students in the study collected the formative information made available. These results point to two key messages: (1) educators need to start responding to feedback about their feedback practices, and (2) the focus in the feedback research and discourse is inappropriately centred on the role of the educator in “transmitting feedback” rather than on how students seek and use it.

Another finding from the research on feedback is that educators and students may have a shared conception of what “good quality” feedback should look like; however, what feedback actually looks like in practice is a different proposition. Molloy’s (2009) study of learners and supervisors in feedback in clinical education revealed this disjunction. In phase 1 of the study, both parties emphasised the importance of a dialogue, as opposed to an educator-led monologue, and the provision of invitations or opportunities for student self-evaluation. In phase 2 of the study, analysis of 18 feedback sessions between student and educator in clinical education showed that there was minimal input from students in the sessions. On average, the feedback interactions lasted for 21 minutes, and the students’ contribution accounted for less than 2 minutes of the “conversation”. In the post-feedback session interviews, educators acknowledged the unidirectional nature of their feedback, despite “good intentions”, and attributed this monologic tendency to time constraints, a lack of trust in students’ insight to formulate accurate self-evaluation, and compliance with students’ expectations of a transmissive exchange of knowledge from expert to novice. The findings suggest that educators may be focused on the short-term benefits of feedback (i.e. the effect of the message on immediate performance) rather than the long-term benefits of increasing students’ capacity to self-evaluate and self-correct.

A Relational View of Feedback

Educators and learners may be able to parrot with accuracy the “principles of effective feedback”, yet researchers are accruing data to suggest that feedback is not carried out in accordance with these principles (Fernando et al., 2008; Krause, Hartley, James, & McInnis, 2005; Shute, 2008). One hypothesis for this lack of translation into practice is that the models or guidelines are not fit for practice. That is, they cannot be readily taken up by those involved.

The evidence supporting the lack of uptake in practice does not necessarily forecast doom and gloom in the landscape of feedback in higher education. Like any “feedback”, this gap or incongruence between idealised practice and actual practice can provide an impetus to improve what is done. The incongruence can be seen as an avenue for re-examining what we think constitutes good feedback for learning. The remedies for poor feedback practice are not as simple as “spreading the word” to educators, “saying the same message, but saying it louder”, or refining the mechanics of the process. As Carless et al. (2010) state, “tinkering with feedback elements such as timing and detail, is likely to be insufficient. What is required is a more fundamental reconceptualization of the feedback process” (p. 2).

To summarise, empirical evidence suggests that feedback is complex and that it can have both positive and negative effects on performance, depending on characteristics of the learner, the task and the learning setting. The interrelationship between the learner, the educator, the environment, the practice/knowledge culture and the specific task means that a one-size-fits-all model of “how to do feedback” is likely to fall down on many levels in application. Not only do the results point to an over-simplification of conceptions of feedback practice, but they also suggest that current feedback conceptions and practices may be overly informed by a unilateral and behaviourist view of education (Biggs, 1993). The observations of feedback in situ, and the collection of learners’ and educators’ perceptions on intention and action, indicate that feedback is commonly seen as a tool for the student, delivered by the educator, for the purpose of improving the student’s immediate performance on an equivalent or directly related task (Nicol & Macfarlane-Dick, 2006). Observational studies of verbal feedback reveal didactic provision of information from educator to learner. This model of practice positions the educator as the expert and the learner as the dependent and passive recipient of information who must take whatever is given.

Most guidelines on feedback imply that we know what to do to improve the effectiveness of feedback, and that improvement (and consequent improvement in student satisfaction ratings) will result from urging teachers to be more prompt in providing comments to students, and to provide this information more frequently. The most common institutional response is simply to mandate the frequency of verbal feedback delivery (e.g. once per day or once per week in the workplace setting) or to make rules about the speed of return of comments on written submissions of work. Such a response again appears to lean on behaviourist principles of learning, and ignores the role of the student in feedback episodes.

The importance of learners developing self-evaluative capacities through feedback is starting to gather momentum within the higher education literature (Boud, 2000; Boud & Falchikov, 2007; Carless et al., 2010; Hounsell, 2007; Nicol, 2009). This movement in feedback, as seen through a constructivist learning lens, pivots off Boud’s (2000) notion of “sustainable assessment”, where learners and educators work together to produce practices that meet immediate assessment requirements without compromising the knowledge and skills important for ongoing and independent learning. Carless et al. (2010) furthered this concept in the context of feedback research, referring to “dialogic processes and activities which can support and inform the student on the current task, whilst also developing the ability to self-regulate performance on future tasks” (p. 3).

Carless et al.’s (2010) view of a better way to do feedback, underpinned by theories of constructivist learning (Price, Handley, Millar, & O’Donovan, 2010), (1) puts the student at the centre of the feedback experience, and (2) frames feedback as an iterative, continuous part of learning that helps the learner to develop independent skills in self-monitoring and self-regulation. Through providing external information on how performance matches up to the goals of performance, educators model the critical reflection skills that help learners to calibrate their capacity for internal appraisal. The learner’s continuing comparison between internal and external information, and heightened trust in self-evaluation over time, is strengthened through regular opportunities for learners to self-evaluate. As Riordan and Loacker (2009) comment, “the most effective teaching eventually makes the teacher unnecessary” (p. no). Sadler (1989) also commented on the value of actively engaging learners in self-assessment and thereby developing sustainable learning practices.

How Did We Get from Cybernetics to Sandwich Making?

One of the questions that begs to be answered is how the original concept of feedback, as first discussed in cybernetics (Wiener, 1954), has evolved into the dominant practice we see in contemporary higher education. On a conceptual level, it is easy to see the advantages of controlling a system by reinserting into the system the results of its performance. The situated and social nature of learning (Harré & Van Langenhove, 1999), however, means that simple information provision to humans about performance can have an impact beyond its intent. Research in organisational psychology has demonstrated the multiple factors that can influence learners’ receptivity to, and use of, feedback, including both their own self-concept and the regulatory focus of the specific task. Awareness of these sensitivities has manifested in “rules” about how to conduct fair and balanced feedback (Molloy, 2010). These rules of engagement may help create better learners, or may in fact generate a teaching and learning encounter that departs from the original purpose for which it was designed. For example, there are times when students’ performances do not warrant affirmation, and yet some models of feedback advocate that praise feature at both the start and the end of the feedback communication. Another example of potential deviation from purpose is the idea that feedback should relate only to the episode observed, and should not relate to past performance. This convention stems from principles of fairness and protecting the student from cognitive bias in assessment. This preservation of fairness is good in theory but, in practice, it detaches feedback from a continual and iterative process that loops between performance standards, performance, advice/remediation and subsequent task performance. In giving the student “a clean slate”, feedback has morphed into a catalyst for immediate behaviour commentary and change, rather than a process to build sustainable learning habits.

Implications for Program Design

Given that a one-size-fits-all model such as the “feedback sandwich” fails in practice, we are loath to present a list of instructions or prescriptive guidelines on how to do feedback under a constructivist framework, particularly when such claims have not been substantiated through multiple research studies. There are, however, key overarching principles that might help generate healthy educational habits in both learners and teachers, and strategies to incorporate within the curriculum to support these ideals.

  1. Creating a learner disposition for seeking feedback

    If students are made aware of the advantages of feedback through suitable task design and sequencing, and have frequent opportunities to engage in productive, dialogic exchanges with multiple others, they are more likely to see feedback as a tool for “them” rather than as a destabilising or debilitating act “done to them” by those in authority. Generating this disposition is largely about providing regular opportunities to seek, listen to and act on feedback, and providing “sanctioned space” both to reflect on performance criteria and to reflect on how internally and externally generated feedback support or contradict each other. Henderson et al. (2005) commented that this provision of regular opportunities to practise feedback would mean that students start to see engagement in feedback as a habit, rather than as “an act of bravery”. Another important strategy for reducing the emphasis on feedback as a one-way transmission from teacher to student is to involve peers and/or consumers in feedback provision (Ladyshewsky, 2010). Reaching for feedback sources outside the traditional teacher-learner relationship affirms the status of the learner as one with “agency”, who makes knowledge rather than receives it.

  2. Orientation to the purpose of feedback in learning

    Both students and educators need to see feedback as a system for promoting learning through fostering active learners, not as individual acts of information provision and reception. That is, feedback is not viewed as “telling” and does not “end in telling”. Equally, it is not a process that is done to students by educators. All stakeholders in the environment need to be explicitly orientated to the purpose of feedback, and to view it as a means to increase skill in self-monitoring and self-regulation.

  3. Explicit, nested, iterative tasks

    Students and those providing feedback need reminders that feedback is necessarily an ongoing loop linking (1) performance targets, (2) actual performance, (3) strategies for improvement to bridge the gap and (4) opportunities to observe subsequent change in performance. Students report that they do not have a clear understanding of assessment goals or criteria, and educators need to work hard to explicate the standard or reference point (Rust et al., 2003). Sadler (1983) promotes the use of student exemplars in order to develop an improved personal knowledge of what constitutes “quality work”. Likewise, more professions are using videotaped exemplars of “best practice” in technical or practical skill execution, so that students have a readily accessible bank of performance targets against which to compare their own performance. Formative assessment tasks need to be positioned within the curriculum so that students have subsequent opportunities to enact the changes stimulated by feedback. For example, formative feedback may be provided on tasks throughout a semester. Feedback at the end of a semester is less likely to be formative, as learners are much less likely to have an opportunity to utilise the information in their immediate work. Without this subsequent practice opportunity loop, students are not able to see the benefits of feedback as a tool that changes practice, and educators are not able to judge the effectiveness of their interventions.

  4. Practising judgement

    Early in the curriculum, students should have opportunities to judge their own performance, and to see how this appraisal “stacks up” against external appraisal. This may constitute regular activities to assess students’ content knowledge, or it may take the form of criterion-referenced assessment processes that learners engage in following written or practical skill performance. In the case of verbal feedback exchanges (for example, post-oral presentation or post-workplace learning placement), educators can scaffold students’ self-monitoring capacity by asking questions about the student’s own account of the performance. Clarifying or exploratory questions posed by the educator can encourage learners to think further about their learning, and help the learner to “own” their insights, rather than being told. Questions such as “how do you think you went?”, “is this feasible?” and “can you explain what you mean by that?” serve as prompts for students to exercise their judgements. The subsequent provision of educator opinion may then validate, contest or calibrate the learner’s internal evaluation, strengthening knowledge about the relationship between task goal and execution.

    These four pillars of program design are likely to afford conditions favourable to effective feedback provision and uptake. The propositions include, but extend beyond, the mechanics of feedback content and delivery, and are directed at higher levels of curricular design and implementation. The innovations designed to improve feedback processes in higher education need to be shared and robustly evaluated for their effect on both learners and educators. These instances of “program level” changes need to be the focus of the next wave of feedback research. We already have plenty of data to reveal the widespread discontent with current processes.

Conclusion: Feedback and Self-Evaluation as Habits for Sustainable Learning

This chapter has outlined key research into feedback in an attempt to distil the properties that render it useful for learning. Students consistently rate feedback provision as problematic, and educators are starting to acknowledge that what they think they should do in feedback differs from what they enact in practice. The didactic nature of feedback exchanges, and the lack of engagement of students with the messages, point to a need to reorientate thinking on feedback for learning. A revolution, sparked by the observations and ideas of Boud, Price, Nicol and Carless, is starting to hit higher education. The challenge for educators is to embody these ideas, to depart from the traditional role as “director” of feedback, and to focus on how to create a student disposition that seeks and uses multiple forms of feedback. The drive towards sustainable feedback practices requires commitment and skill from both learners and educators, and a progressive withdrawal of didactic performance information from the educator as students demonstrate skill and confidence in self-monitoring. Generating a discourse based on constructivist learning principles, and sharing program design “wins and failures”, should help align the goals of, and practices in, feedback. The innovations designed to generate these sustainable learning habits, and the accompanying evaluation data, need to be the focus of the next iteration of this chapter. That is the feedforward.