Introduction

Feedback Is Important But There Is a Gap

Feedback is critical for effectively promoting learning. Without feedback, learners are limited in how they can make judgements about their progress, and how they can change their future performance. Feedback is the lynchpin of learners' effective decision making, and the basis of improved learning outcomes. The value of feedback is tied to its assumed connection to an improved future condition, in other words—impact. However, while there is a growing body of research regarding feedback design, such as the potential of diverse sources (e.g. peers, automated systems), modes (e.g. written, audio, video, rubrics), agency (e.g. learners seeking specific feedback), sequencing, and the influence of context, there has not been a similar focus on the impact of feedback.

The Structure of This Chapter

This chapter has two sections. The first section outlines how we conceive feedback. We explain how feedback is a learner-centred process in which impact is a core feature. In doing so, we aim to reveal some of the complexities of feedback processes, which then provides a context for the remainder of the chapter. In the second section of this chapter we explore the reasons why identifying, let alone measuring, impact is problematic. We briefly revisit the contingent nature of educational research into cause and effect and question the implications for feedback processes that are likely to be experienced by individuals in different ways, with different effects, over different timescales. It is here that we discuss some of the ways we conceive the various forms of feedback effect. The chapter concludes with a reminder that in the current climate of evidence-based policy and practice, there is an urgent need for research to inform students, educators, higher education institutions and industry partners about how they might identify impact and understand it in connection with feedback processes as a whole.

Feedback Must Have Impact

What Do We Mean by Feedback?

Our starting point is that the common conception of feedback—that it is something done by educators and given to learners, an act commonly described as "giving comments"—is in fact a misconception. Both academics and learners often assume that feedback is a one-way flow of information, which happens after assessment submission and is isolated from any other event, and worse, that the role of feedback is not so much to improve future performance as to justify the grade. In contrast, leading researchers in the field argue that feedback is not a simple input. Building on the work of Boud and Molloy (2013) and Carless (2015), we argue that feedback is usefully defined as the processes in which the learner makes sense of performance-relevant information to promote their learning. Here, we purposely position feedback as a process, or a series of processes, and not simply an event involving the transmission of information or input. In addition, learners are understood to be active participants in this process, which does not necessarily involve academics at all. In this definition, we talk about making sense of information; however, we do not presuppose that this is necessarily a rational or conscious process. Indeed, we conceive the possibilities of the sense-making process from a variety of frames, including social constructivist and sociocultural perspectives on learning. For example, the notion of entanglement between the individual and their environment within a sociocultural frame offers further challenges to understanding sense-making and impact.

A further critical element of this definition is that we explicitly tie impact or effect to the feedback process. The purpose of assessment feedback is to result in improved learning strategies or performance; the improved outcome is an impact of feedback. However, we purposefully conceive impact in a broad way. We use the terms impact and effect interchangeably, by which we mean that the learner's condition is somehow changed as part of the feedback process. Therefore, we seek to explore the effects that occur in the feedback process and how they support or hinder improved learning strategies or performance.

We Are Focused on Assessment Feedback

Feedback is a term used across many fields and in a variety of ways. However, here we are focused on feedback processes surrounding assessment, particularly in the context of higher education. This includes any systematically organised or structured approach to collecting evidence of performance, whether it is diagnostic, formative or summative in nature. There are many occasions of informal or casual feedback processes, but these are beyond our present consideration. As is demonstrated throughout this book, the enactments of assessment feedback can surface in many different ways, from the all-too-familiar comments at the end of an essay, to peer feedback prior to submission of work, to face-to-face performance conversations in work integrated learning contexts. However, while assessment designs are important, we argue that they are only one part of the complex context in which feedback processes occur.

Impact Is a Necessary Characteristic of Feedback

In contexts other than education, such as engineering or biology, feedback is not understood as an input but rather a process within a system. For instance, if a blood vessel is damaged, platelets cling to the injured site and release chemicals that attract more platelets, eventually forming a clot. In this system, feedback regulates or optimises the output. In this example, we can see that impact is a necessary component of the feedback loop. Applying this metaphor to education, feedback can be usefully understood as a process in which information about a learner’s performance somehow influences their future capabilities or actions. With this in mind, any information without effect is not feedback, just information.
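To make the cybernetic sense of feedback concrete, the following minimal sketch (our illustration, not drawn from the feedback literature; the thermostat scenario, function name and gain value are assumptions for the example) models a simple closed-loop regulator in Python. The reading of the current state only counts as feedback because it changes the next adjustment; a reading that changed nothing would be, in the terms above, just information.

```python
# A minimal closed-loop regulator: performance information (the gap between
# target and current state) is fed back to shape the next action.

def regulate(target: float, current: float, gain: float = 0.5) -> float:
    """Return an adjustment proportional to the gap between target and current state."""
    error = target - current  # performance information
    return gain * error       # the information becomes feedback only when it changes the next action

temperature = 15.0
for step in range(5):
    temperature += regulate(target=20.0, current=temperature)
    print(f"step {step}: temperature = {temperature:.2f}")  # converges towards the target
```

If the returned adjustment were discarded rather than applied, the loop would be open and the reading would have no effect: information without impact, and therefore not feedback.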

This understanding of feedback is not new. An early reference to feedback as a mechanism of learning can be found in Wiener's treatment of cybernetics in his 1950 treatise The Human Use of Human Beings. He draws on a range of examples from engineering, computing and biology to make the point that feedback is a process "of being able to adjust future conduct by past performance" (Wiener, 1989, p. 33) and that "effective behavior must be informed by some sort of feedback process" (p. 58). Wiener's conception of feedback inherently involves a dialogue between action and effect. He takes great pains to be clear that this includes the regulation of simple behaviour but is also the basis of what he calls "policy-feedback", which can inform new courses of action:

Feedback may be as simple as that of the common reflex, or it may be a higher order feedback, in which past experience is used not only to regulate specific movements, but also whole policies of behavior. Such a policy-feedback may, and often does, appear to be what we know under one aspect as a conditioned reflex, and under another as learning. (p. 33)

The point is that feedback is not just information about performance but a process in which that information is somehow used to influence subsequent performance. This is true whether we are looking at simple engineering models of feedback or more complex processes of learning. Nevertheless, there is a danger in assuming overly deterministic and simple direct connections between the performance information and impact. For example, the same performance information can result in different outcomes for different learners, which highlights, among other things, the complexity of learner individuality, their sense-making processes and the contexts in which they operate.

This raises a serious challenge for educators and educational institutions who are committed to effective feedback. They can no longer simply provide information and “hope for the best” but instead need to (a) design it in anticipation of its impact on future performance and (b) find ways to understand or measure that impact to optimise learner outcomes. If learners are not benefiting from engagement in feedback processes, the conditions can be reconfigured until such effects are observed. However, there is a danger here of focussing too much on the teacher or institution. Feedback need not be instigated or managed by the educator. Indeed, as described in the next section, we conceive the learner to have agency in the process, including the potential to identify their own goals, criteria and even generate their own evaluative information to inform their future constructions and actions. It is possible that the educator may not be directly involved at all.

Feedback Is a Learner-Centred Process

In higher education, policy and practices surrounding feedback tend to be framed as teacher-centred—that is, something that an educator does to learners. Worse, the construct of feedback is often unshackled from that of teaching and understood to be a labour of accountability or an act of beneficence on the part of a lecturer for which learners should be grateful. Teacher-centred perspectives emphasise the role of academics in "giving feedback" while failing to adequately recognise learners' active role in the process of their own learning. It is an indictment of the higher education system that, for many learners, the experience of teacher-led feedback is underwhelming or negative. In many cases, learners are effectively left to their own devices, ill-equipped to seek, interpret or act upon evaluative information.

The shift from a teacher-centred perspective to one focused on learning provides a valuable opportunity to reposition the teacher as just one possible source of evaluative information. Feedback processes can, and often do, involve a variety of agents, particularly in the generation of performance information. These may include family members, friends, automated systems (e.g. simple spell checkers, writing assistants such as Grammarly, code compilers), social networks, peers, educators, clients/patients/consumers (in the case of work-based learning) and, of course, the learner. These various sources of feedback information may be engaged at the initiative of the learner themselves, or through the careful design of the educator. While we argue that feedback needs to be conceived as student-centred, the educator can, and usually does, have a significant role in shaping feedback processes. The educator may have designed the assessment that elicits and shapes the performance, but also can—and we argue should—orchestrate opportunities for learners to engage with a variety of sources and types of information designed to develop their evaluative judgement, that is, decisions about the quality of their work (Tai, Ajjawi, Boud, Dawson, & Panadero, 2018). Finally, the educator also has an important role in designing ways for the learner to evaluate what they have learned through a subsequent performance. Regardless of whether it is by teacher design or through learner agency, it is likely that several agents have been involved in any one assessment event, which includes researching, building/drafting and submission or performance. While there is a strong body of literature that deals with peer feedback designs, the role and impact of broader feedback networks are less clear (Dawson et al., 2018).

Arguably, learning is a process over which educators have no direct control. It is not a simple process of transmission. The learner needs to be actively involved in attending to the stimulus, relationship or environment and engaging in a complex process of meaning-making. In many conceptions of learning, the social environment is understood to have an important role in shaping the experiences of learners and simultaneously providing a mechanism to test and evaluate new ideas, which may then result in modified or new conceptions of the world. It is worth noting that these conceptions may be incomplete, inaccurate or contradictory and are often fluid, changing and influenced by other active schema or changes in context. From this perspective, learning is a process of knowledge construction rather than reproduction, and it emphasises the central role of the learner and context.

Sense-Making and Feedback Literacy Are Important for Feedback to Make a Difference

This brings into focus the issue of learner capacity to seek or generate, make sense of, and use performance information. While the quality of the feedback information cannot be forgotten, unless learners can identify, interpret and act upon it, the feedback process is thwarted. The presence of evaluative information in any format does not necessarily mean that feedback has occurred. Nicol (2010) notes that, from a social constructivist perspective, if learners are to consciously influence future actions, performance information needs to stimulate an "inner dialogue" in which learners are "actively decoding feedback information, internalising it, comparing it against their own work, to make judgements about its quality and ultimately to make improvements in future work" (p. 503). Ultimately, learners need to make sense of the information in order to act upon it. However, we suggest that the act of sense-making is much more than simply being able to comprehend the feedback information.

A prerequisite for students taking an active role in feedback, as Nicol suggests, is that they have an appreciation of the purpose of feedback processes and of how these can operate to their benefit. The taken-for-granted assumption in student evaluation surveys that feedback consists of what teachers do undermines the development of a stronger student role in the process. Carless and Boud (2018) have called for the development of what they have termed "feedback literacy". That is, "an understanding of what feedback is and how it can be managed effectively; capacities and dispositions to make productive use of feedback; and appreciation of the roles of teachers and themselves in these processes" (p. 1316). They take the view that students need a deeper appreciation of how feedback can work if they are to make the most of the opportunities their courses provide, and that this appreciation needs to be scaffolded throughout courses, starting in first-year units.

Identifying Impact Is Not Easy

We have described feedback as processes in which learners make sense of information about their work in order to improve learning strategies and future performance. We have purposely made explicit the notion of impact as a critical and necessary component of feedback. However, the effects that occur in the feedback process are largely internal to the learner and may not directly manifest in subsequent actions. This creates a dilemma for anyone trying to understand the effectiveness of feedback: How can we know that feedback has occurred unless we can see some evidence of its effects? Designing feedback for impact, and in particular, evidencing that impact, is problematic. It is perhaps a symptom of this difficulty that feedback initiatives and research often focus on front-end design, but either assume impact or simply omit any concerted effort to define or evidence it.

The Problem of Cause and Effect in Educational Research, Practice and Policy

At the most simple level, feedback relating to a task should result in a learner being able to perform that task more effectively in the future. It is conceivable that when the tasks involve simple skill or knowledge acquisition we may be able to track the influence of the evaluative information on future performance. However, higher education and professional learning contexts normally involve more complex learning tasks. Identifying the nature of impact is problematic, not least because subsequent performances that allow learners to enact their understanding of the feedback information are usually delayed (if not absent), are in response to different conditions and are measured by different criteria.

In addition, at a more fundamental level, ascribing clear causality of feedback impact, outside of experimental conditions, is near impossible. The problem of causality in education research has been well documented. In his book on Causation in Educational Research, Morrison (2009) points out that "one can soon become stuck in a quagmire of uncertainty, multiplicity of considerations, and unsureness of the relations between causes and effects" (p. 4). There are many variables to take into account, for example the curriculum design, the assessment design, pedagogy, context, and learner agency. Indeed, individual conditions, including beliefs, motivation, prior experience and emotion, are well-recognised modifiers of causation (Maxwell, 2004). Regardless of the cause–effect model being applied or the methods used to measure it, Gorard (2002) concludes that we are unable to detect cause–effect directly. It is in this context that we need to be cautious in our search for impact and in attributing change to the feedback process. Nevertheless, this is not to suggest that we should not try to identify or measure impact; simply that some caution is needed about the strength of causal claims.

The Problem of Sequence and Its Implication for Locating Effect

Fundamental to most conceptions of cause–effect is that it is temporal, that is, cause is followed by effect. In the case of feedback, we might assume there is (1) a performance by the learner (e.g. an essay), followed by (2) the generation of performance information from different sources (either by the learner or others), then (3) sense-making, including forming evaluative judgements, which may finally result in (4) some form of effect or change. Given that the purpose of feedback is to result in improved performance, it is also reasonable to argue that the learner should engage in a subsequent performance to enact and test their new understanding, thereby beginning the cycle again.

This description of a feedback process is seductively simple, but also highly problematic. There is a danger of assuming that the elements of performance, generation, sense-making and effect are linear and that each occurs only once within a single cycle. In contrast, each element is likely to be more complex and fluid in how it is experienced.

Each stage does not necessarily occur in a specific order, and it is possible for learners to move back and forth between stages. For example, from a single performance, learners may engage in several iterations of generation and sense-making. A learner may produce a draft of an assignment from which the educator generates evaluative information for the learner. After attempting to make sense of that information, the learner might then seek other sources of evaluative information using the same draft assignment. In this example, the learner has moved back and forth between the stages of generation and sense-making. This is likely not an uncommon experience for students—often in any one assessment there is a history of evaluative information being generated from several sources (e.g. educator, family, peers, automated systems, self), each of which may represent different values or understandings of the success criteria. These multiple instances of evaluative information can interact, adding to, or even confounding, the sense-making process. This causes a problem for us in trying to identify and understand effect, which, in this example, is no longer a simple product of a linear sequence.

A further complication is that the movement between stages may not be apparent, and indeed, the very distinction between stages may need to be questioned. For instance, the internal process of effect is likely to be fluid—shifting whenever the learner engages in sense-making but also when they engage in performance. As an example, a learner can engage in a subsequent performance, drawing on their new understanding. However, in enacting that performance the learner will be actively creating new meanings. This is particularly true for learners who have well developed evaluative judgement and are constantly monitoring and regulating their ongoing performance.

Subsequent Performance May Not Represent an Effect of Feedback

A key problem in determining feedback impact is that sense-making and effects such as emotion and identity are largely internal processes and therefore particularly hard for us to measure or observe directly. A simple response to this dilemma has been to suggest that learners need to engage in a subsequent performance utilising the same knowledge or skills. This can aid the learner to test out their new understanding and continue their learning journey. A subsequent performance can also provide the educator or "other" with a better understanding of the effect of their feedback information or design.

However, subsequent performance may not actually represent the particular effect(s) of the feedback process that it is meant to be evidencing. For example, subsequent performances are often conducted at a later time and in response to different task requirements, and are thereby likely to draw on more than just the understanding developed from the first performance. In addition, there may be a variety of effects arising from the initial feedback process that are not evident in the subsequent performance. This may be simply because it is not called for by the assigned task, or it may be because the effect is harder to observe, such as emotional, motivational, relational or other changes.

A further complication is that in the process of preparing the assessment (such as researching, drafting, editing, discussing ideas and writing), the learner constructs a history of actions that constitutes their experience of the performance as a whole. In other words, the assessment process may involve a number of feedback loops, interacting with each other and impacting on the learner. Treating the final assessment submission (essay, test, oral performance, action, etc.) as the whole of the performance may therefore result in the generation of evaluative information that is partial and less relevant than we might otherwise assume.

Clearly, we need to be circumspect in treating the subsequent performance as a manifestation of effect. While we argue that a logical and valuable feedback design is to ensure there is a subsequent opportunity for learners to enact their new understanding, we need to be wary of assuming that the subsequent performance is connected to any particular prior event. This creates an interesting challenge for evidence-based policy and practice, as well as for feedback research attempting to identify effect.

The Different Forms of Feedback Impact

We conceive impact as essentially any changed state within the learner as a result of the feedback process. The nature of that change could relate to their thinking processes, emotions, relationships, work strategies, identity and more. In addition, a single feedback loop may result in more than one effect, and these effects may in turn interact and together influence future performance. The forms of impact, and the potential ways in which they may combine, are multifarious. However, the better we understand impact within a feedback loop, the better placed we are to improve feedback outcomes. Here, we describe a number of ways in which we conceive feedback impact.

Impact is not just a learning outcome.

We have already pointed out that subsequent performance may not be an accurate or complete representation of feedback impact. A student may be able to meet the learning outcomes of a unit without this being attributable to feedback. Grades or formal measures of achievement are poor representations of a particular performance and are very unlikely to represent the whole of a feedback effect. While it is desirable that an assessment feedback process should result in learners being able to perform better, there may be other effects involved, and desired, that are not reflected by the grade. The challenge here is to look beyond simple or familiar forms of measurement, such as grades and student satisfaction, and to think about how we may usefully evidence the impact of feedback in more nuanced ways.

Impact may be cognitive.

Learners may now have a better understanding of a concept or skill; their knowledge may have been impacted. The feedback process may result in new schemas, the reframing of a problem, or the connecting of ideas that were otherwise not associated, or associated in partial or incorrect ways. However, the effect may also be on their cognition itself, that is, their thinking processes. This may include the way in which they attend to details, process information, form concepts, and store and retrieve memories. Therefore, in engaging with a feedback process, it is possible that learners may shift not only in what they think, but also in how they think. This can be difficult to detect.

A particularly desirable form of cognitive effect would be in the area of metacognition, that is, the way we think about thinking. It includes both an awareness of one's thinking processes and an ability to regulate or influence those processes, for example, knowing when and how to use particular strategies for generating performance information, making sense of that information, or utilising newly acquired understandings in subsequent performances. It is arguably one of the greatest potential impacts of feedback—to improve learners' capacity to effectively engage in the feedback process itself and to be able to judge what they can and cannot do, whether this is in relation to developing students' self-regulation of learning (development of goals, monitoring and action planning; Nicol, 2010) or in terms of the development of evaluative judgement (Boud, Ajjawi, Dawson, & Tai, 2018; Sadler, 1989).

Evaluative judgement is defined as "the capability to make decisions about the quality of work of oneself and others" (Tai et al., 2018); as such, it involves metacognitive processes that need to be refined through the inputs of others, not just on the quality of a learner's work but on their ability to make such judgements for themselves. Feedback is required to develop these capacities, and feedback processes of this kind should be judged in terms of their effects on learners' self-judgements, not just on improved work. How evaluative judgement is manifested around assessment activities, and the role of feedback in this development process, is not well understood.

Impact may be affective or motivational.

The feedback process is often intimately connected with issues of motivation and affect (including emotion). An example may be how motivation and emotional resilience influence how a learner engages with a task. However, the impact of the feedback process can also surface in changes to the affective and motivational states of learners. For example, when learners perceive comments from educators as being critical it can lead to negative emotional reactions (Ryan & Henderson, 2017). When learners perceive feedback comments to be negative or upsetting, it can have a detrimental effect on their self-esteem and perceived self-efficacy (Rowe, 2017; Sargeant, Mann, Sinclair, Van der Vleuten, & Metsemakers, 2008), and they can also become demotivated and less likely to use those comments to improve (e.g. see Poulos & Mahony, 2008). It has also been found that in some cases, negative emotional responses can have long-term effects—hindering subsequent learning and potentially influencing career decisions (Crossman, 2007; Falchikov & Boud, 2007; Molloy, Borrell-Carrió, & Epstein, 2013). While we understand that emotion and motivation can be involved, the mechanisms that cause such effects, and their consequences, need to be further researched, as it is likely that this relationship is more complex than a simple positive/negative emotional valence.

Impact may be relational.

The relationship between the educator and the learner can influence the feedback experience. For example, Telio, Regehr, and Ajjawi (2016) found in their study that the credibility of the educator "not only affects a learner's engagement with a particular piece of feedback at the moment of delivery, but also has consequences for future engagement with (or avoidance of) further learning interactions with the supervisor" (p. 933). Therefore, learners' perceptions of the strength of the educational alliance (based on shared goals, activities and bond) influence immediate and subsequent feedback behaviours (Farrell, Bourgeois-Law, Ajjawi, & Regehr, 2017; Telio et al., 2016). In addition to credibility, issues of trust (Carless, 2009, 2013; Molloy et al., 2013) and perceived safety/threat (Orsmond, Merry, & Reiling, 2005) have been shown to influence the way in which learners engage with the feedback process, including how they seek out, interpret and act on feedback comments. The relationship thus serves both as a mechanism for engagement with feedback and as a potential impact in its own right, in the sense of a strengthened educational bond; the two are mutually constitutive.

Impact may change values, beliefs and identity.

Different fields of inquiry define identity in different ways. From a social theory of learning, identity can be understood as both how we perceive ourselves and how we are perceived, in relation to our competence and values within a community of practice (Wenger, 1998). Identity in this sense is defined socially; that is, it is produced through participation in a community and in relation to another. It is both internal and external to the individual. In feedback processes, the assessment performance is a form of practice, as is the way in which the learner engages with and reacts to feedback information. Through this practice, the learner both negotiates and reifies their identity, which in turn influences future participation. Sutton and Gill (2010) note that "active participation in feedback discourse opens up the possibility of students acquiring a different voice, and provides opportunities for the construction, deconstruction and reconstruction of students' academic self-identities" (p. 11). Research highlights that feedback processes can have an impact on professional socialisation (Molloy, 2009; Ajjawi & Boud, 2018), and this needs to be explored in more depth.

Impact may be intentional or unintentional.

This is perhaps obvious: the impact on the learner may be by design, but it could also be unintended, unexpected and thereby potentially unnoticed. In the context of medical education and multi-source feedback, Sargeant et al. (2007) describe how feedback can have "low consequential validity", that is, feedback with unintended or even detrimental consequences, such as decreased motivation, emotional distress and deteriorated performance. In Hattie's (2009) meta-analysis, a third of feedback studies were found to have a detrimental effect on learning attainment. Why this is so, and for whom and under which circumstances, is less well known.

Impact may be delayed or have ripples.

Neither the effect nor a subsequent performance necessarily occurs in quick succession. The effect may evolve over time as the learner processes the information and comes to understand it in different ways. Similarly, subsequent performances, in other words the external manifestation of effect, may occur months later in different contexts. All of this poses serious difficulties for policy, practice and research approaches to locating effect.

Impact may be plural.

We have already indicated that there is a potential, if not a likelihood, of multiple effects occurring in (short and/or long) feedback loops. For example, a learner may improve in their knowledge of a concept while also strengthening their perceived self-efficacy. It is also logical to assume that these multiple effects may interact and influence future performance. This raises an interesting problem for research that tries to identify and measure a particular form of effect and, in so doing, may not account for other effects, including interactive effects, that may influence future performance.

Different Forms of Impact for Different Parties and Purposes

Different parties have stakes in different effects and therefore look for particular outcomes in feedback processes. The discussion above has focused on impact on learners, in terms of improving learning strategies and future performance. However, feedback processes in education are not necessarily benign. They can be value-laden, simultaneously serving different purposes, and potentially compromised.

For example, institutions are situated within complicated governance environments, including needing to demonstrate that they meet the standards set by quality assurance agencies. In this context, there is often a focus on student satisfaction with the quality of their educational experience, including feedback. This has shaped the design of student satisfaction surveys, which have become not only a reporting mechanism to quality assurance agencies but also a means of sustaining university ranking systems, university marketing and even academic promotion. At the same time, the way in which feedback is referred to in these contexts is typically teacher-centred, where feedback is an input rather than a process with an improved outcome. With this in mind, the high stakes and high visibility of student satisfaction surveys reinforce a particular understanding of the teaching and learning process in higher education which is not conducive to stimulating effective feedback designs.

Another example of how university structures compromise feedback is the way in which many universities break subjects into relatively short sequences (e.g. semester, term and carousel models). This modularisation of units has been known to encourage the bunching of assessment tasks at the end of the teaching period, which makes it difficult for educators to provide feedback comments that connect to future assessments, particularly across programmes of study (Timmerman & Dijkstra, 2017; van der Vleuten et al., 2012).

Educators and educational/instructional designers also bring value-laden, contingent and compromised approaches to their feedback designs. Assessment feedback can provide valuable information to educators and designers regarding the effectiveness of their planning and enactment, facilitating the continual improvement of their teaching and designs. This is desirable. It is a key concern of this book to further explore the ways in which we might understand impact so as to improve not only the learner journey but also the teaching and designs of educators and designers. However, it also needs to be noted that feedback designs are often compromised. Educators often complain of not having enough time to do feedback well and of not having enough control over the ways in which feedback can be enacted. These perceived constraints have in turn been used to justify efficient but arguably ineffectual methods such as rubrics. A further example of educator compromise occurs when educators shy away from providing comments that may provoke negative student reactions. Sensitivity to students' needs, such as those of emotion and motivation, is valuable. However, it can also result in strategies, such as the feedback sandwich, that have been characterised as mealy-mouthed.

Conclusion

Feedback is a set of processes in which learners make sense of evaluative information about their work to improve future performance. Impact is a necessary characteristic of effective feedback. We have framed impact as essentially any changed state within learners as a result of feedback processes. As we have discussed, identifying the processes that influence impact and its connection to subsequent performance continues to require research. Indeed, the nature of impact itself needs to be better understood. For instance, we have suggested that impact may be intentional/unintentional, immediate/delayed, cognitive, affective, motivational, relational, social (identity) and plural and intersecting. As a consequence, we argue that it is important to understand how, when and why feedback processes result in various forms of effect, and how those effects may then influence future performance.