Defining the Learner Feedback Experience in Learner-Centered Instruction

The Challenge

Quality assurance in higher education is a monumental task. Historically, employers and learners have entrusted the quality of their learning experiences to expert faculty and administrators who attempt to design, develop, implement, evaluate, innovate, and continuously improve postsecondary education. In recent decades, however, as the educational paradigm shifts away from the industrial age, new models and educational technologies have emerged from all corners of the globe and the marketplace (Carey 2016; Engle 2016; Gallagher 2016; McGee 2015; Selingo 2013). Adult learners and instructional designers have an increasing range of learning options that previously did not exist, including micro-credentials, bootcamps, digital badges, mobile-ready skill development apps, adaptive learning tools, massive open online courses (MOOCs), open and free college courses, simulations, virtual or augmented reality, learning analytics, competency-based education, and open educational resources. This proliferation of options creates a need for new ways of measuring the impact of these learner-centered instructional experiences. Measurement depends on the identification of the quantifiable, distinct dimensions of a construct. The analysis that follows intends to define the quantifiable and distinct dimensions of feedback so that the measurement of this critical construct can inform research and practice in learner-centered instructional design.

An Emerging Solution

We propose a definition of the distinct dimensions of feedback that work in concert to provide effective learner-centered instruction. A shared framework for the design of feedback would be useful to faculty, instructional designers, educational technology companies, and learners. Feedback is a construct that is central to the learning process, relevant across cultures, and necessary in every discipline. Because of this centrality, it is potentially a better indicator of quality than other constructs. By articulating six dimensions of the construct of learner feedback in a quantifiable way, we ask whether, eventually, the learner feedback experience could serve as a proxy for measuring quality in formal online educational experiences, where much of the necessary data could now be collected and analyzed using learning analytics.

The Rationale Behind a Definition of Feedback

Defining the dimensions of feedback is the first step toward measuring this multi-dimensional construct holistically in the context of any instructional unit--from an eLearning module to an entire program. Quantitatively defined dimensions of feedback enable the evaluation of a learner’s experience (Dirlam 2017), which serves to benefit future instructional design scholars, consumers of educational technology, and policymakers, among others. We define six dimensions of feedback, but future research will be needed to determine whether feedback could become an effective proxy for measuring quality in online learning.

The Benefits for Consumers

Over 2.5 billion dollars were invested in educational technology companies in just the first six months of 2015 (Straumsheim 2015). These investments represent technology development and implementation in over 118 countries. From 2002 to 2009, the bulk of investment funds were funneled toward learning management systems or courseware resources, but a shift is occurring. From 2013 to 2015, the educational technology industry saw investment increases of 268% and most of those funds were used for educational technology that is marketed directly to the learner-consumer (Adkins 2016).

In addition to venture capital funding, in 2014, the Bill and Melinda Gates Foundation sponsored a $20 million competition related to the development of digital courseware in higher education to offer more personalized and adaptive learning experiences (Schaffhauser 2014). Seven finalists were selected for this 36-month competition targeted toward low-income postsecondary students. They were challenged not only to design and develop exemplary digital courseware but also to think creatively about its distribution, adoption, implementation, and delivery. Educational technology companies and investors are clearly looking to please a new audience--the individual--as opposed to the provider institution or organization (Adkins 2016).

Marketing campaigns of these educational technology companies often tout artificial intelligence and adaptive forms of learning and instruction, big data and associated learning analytics, and various aspects of learning personalization. Unfortunately, they rarely address the instructional design issues, challenges, and opportunities that should wrap around the technology to increase the probability of a quality learning experience. Better articulating the dimensions of the feedback construct could help consumers choose appropriately to meet an instructional need.

The Benefits for Policymakers

Simultaneous to this unprecedented rise in educational technology funding, federal government policymakers struggle to satisfy the divergent needs of institutions and employers in the postsecondary market. Without a doubt, quality assurance is increasingly needed to keep bad actors and inferior or ill-designed products out of the higher education marketplace. At the same time, employers are voicing concerns about an ever-widening skills gap and the need for a skilled workforce (Jaschik 2015), thereby opening the door to an increasing array of emerging technology systems and solutions. However, financial aid for distance education is often available only to institutions that can demonstrate regular and substantive contact with students. In effect, these contact requirements mean that students must have regular access to, and consistent interaction with, a qualified faculty member (qualified according to the regional or specialized accrediting body) (Harris 2002). Furthermore, contact must be initiated by the faculty member, and the exact frequency is not defined beyond “regular” (Laitinen 2012).

When the rules for regular and substantive interaction were written, multi-million dollar cognitive tutoring educational technology companies did not exist (Laitinen 2012). Pressures to lower the cost of higher education and fill workforce skills gaps have produced innovative technology to meet some of a student’s feedback needs. According to some educators (Prensky 2016; Reigeluth et al. 2017), leveraging technology could reduce online course and program costs without sacrificing quality. Advances in the fields of educational technology and learning science signal that we should let technology do what it does best and reserve human interaction for the things that humans do best (Prensky 2016).

This premise, however, conflicts with federal policy (Harris 2002). In higher education in the United States, federal distance education policies related to regular and substantive interaction limit the degree to which innovations can impact the design of learner-centered instruction (Laitinen 2012).

New methods of measuring the quality of a learner’s higher education experience are needed. Policymakers have relied on contact hours and units of time as proxies for measures of learning for too many years (Laitinen 2012). As long as the regular and substantive rules remain in force, and as long as the Office of the Inspector General continues to recommend fines for institutions like Western Governors University for a lack of regular and substantive interaction, innovations that include AI, adaptive learning technologies, intelligent tutoring, and whatever else emerges next will remain in their experimental and nascent stages for years, if not decades, to come (United States Department of Education 2017).

To review, the rationale for defining the dimensions of feedback is twofold. First, defining the dimensions of the learner feedback experience provides instructional designers with methods for communicating the distinct elements of their products, while also enhancing effective implementation for consumers. Second, measuring the dimensions of a learner’s feedback experience could lead to better methods for measuring quality in innovative online, postsecondary education.

Feedback as a Central Construct

Feedback is a construct that is central to the learning process. Feedback “describe[s] any of the numerous procedures that are used to tell a learner if an instructional response is right or wrong” (Kulhavy 1977, p. 211). Feedback also provides “information about the correctness of the response” and extends or expands a learner’s knowledge state (Jaehnig and Miller 2007, p. 220). Feedback can be a pre-programmed, automated response delivered by an adaptive learning platform, or it can be authored by an individual. The positive effects of timely, relevant feedback have been reported in multiple K-12 and postsecondary studies from the past 25 years (Black and Wiliam 1998; Fraser et al. 1987; Pennebaker et al. 2013) as well as several comprehensive literature reviews (Fraser et al. 1987; Gibbs and Simpson 2004; Hattie 2015; Jaehnig and Miller 2007; Kulhavy 1977). Pedagogically, feedback is a critical component of all learning theories and instructional theories and models (Smith and Dillon 1999).

Research in both pedagogy and psychology has well documented the necessity of feedback for learning (Pennebaker et al. 2013; Kulik et al. 1990; Smith and Dillon 1999). In psychology, the Dunning-Kruger effect is the well-publicized principle that people need an incompetency exposed before they can recognize it (Kruger and Dunning 1999). This exposure occurs through feedback that illuminates our misconceptions or incompetence. Psychologically, people need feedback for change to occur. In effect, they need some sense of cognitive dissonance (Festinger 1957) from their various feedback mechanisms. As noted in the remaining sections of this paper, the timing, intensity, and amount of feedback, as well as one’s prior experiences or background knowledge related to that feedback, can each factor into the degree to which such cognitive dissonance is acted upon.

Feedback Experience

Although these findings all point to the value of feedback, none of them attempts to define the ideal feedback experience for a learner holistically. Four decades ago, Kulhavy (1977) found that “elaborated feedback” was more effective than “knowledge of response” feedback. Interestingly, Gibbs and Simpson (2004) concluded in their review that when studies showed negative or no results related to learner feedback, there were flaws in the study designs. Despite these flaws, some reviews of feedback have resulted in practical implications for the field of instructional design and learning technology. For instance, Hattie’s (2015) recent review resulted in learner-centered instructional design recommendations for educators. Unfortunately, however, none of these studies has, as its aim, the description of the learner’s overall feedback experience within the design of a unit or module of instruction.

Dimensions of Feedback

To determine a preliminary set of dimensions, we reviewed eight learner-centered instructional models from Volume IV of Instructional-Design Theories and Models: The Learner-Centered Paradigm of Education, by Reigeluth et al. (2017). While not a comprehensive source of learner-centered instructional models, this volume provides a sufficiently current and scholarly collection of instructional design models. Models reviewed included: learner-centered paradigms (Reigeluth et al. 2017); principles for competency-based education (Voorhees and Voorhees 2017); principles for task-centered instruction (Francom 2017); principles for personalized instruction (Watson and Watson 2017); a new paradigm of curriculum (Prensky 2017); designing maker-based instruction (McKay and Glazewski 2017); designing collaborative production of digital media (Kalaitzidis et al. 2017); and, designing games for learning (Myers and Reigeluth 2017).

A qualitative analysis of the chapters revealed over one hundred references to the concept of feedback. Each reference was listed and coded with keywords to indicate the function and features of the feedback as described by its authors. Since the purpose of the analysis was to identify observable or quantifiable dimensions of feedback, references that indicate the results of feedback (e.g., feedback increases self-awareness) were categorized as “results” as opposed to “dimensions.” As listed in Table 1, these references were then grouped into the six dimensions of feedback by combining similar function or feature codes. For example, references to automatic and timely feedback were combined into the category--timeliness. References to personalized, unique, and individualized feedback were grouped into the category--individualization. While a broader investigation is certainly needed, there is compelling evidence to warrant the dimensions listed in Table 1 as essential considerations for effective learner-centered feedback experiences.
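To make the grouping step concrete, the sketch below tallies coded feedback references into candidate dimensions. It is a minimal illustration in Python: the keyword codes, the code-to-dimension mapping, and the sample excerpts are hypothetical stand-ins, since the full code list from the chapter analysis is not reproduced in this article.

```python
from collections import Counter

# Hypothetical keyword codes mapped to the six dimensions; the actual
# codes from the chapter analysis are not published in this section.
CODE_TO_DIMENSION = {
    "automatic": "timeliness",
    "timely": "timeliness",
    "continuous": "frequency",
    "faded": "frequency",
    "distributed": "distribution",
    "peer": "source",
    "mentor": "source",
    "personalized": "individualization",
    "unique": "individualization",
    "individualized": "individualization",
    "verification": "content",
    "elaborated": "content",
}

def group_references(coded_references):
    """Tally coded feedback references into dimensions.

    coded_references: list of (excerpt, [keyword codes]) tuples. References
    coded as results of feedback (e.g., "increases self-awareness") are
    assumed to have been filtered out beforehand.
    """
    counts = Counter()
    for _excerpt, codes in coded_references:
        for code in codes:
            if code in CODE_TO_DIMENSION:
                counts[CODE_TO_DIMENSION[code]] += 1
    return counts

refs = [
    ("feedback should be automatic and timely", ["automatic", "timely"]),
    ("personalized feedback from a mentor", ["personalized", "mentor"]),
]
print(group_references(refs))
# Counter({'timeliness': 2, 'individualization': 1, 'source': 1})
```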

Table 1 The six dimensions of feedback

Timeliness: how promptly feedback follows a learner’s response
Frequency: how often feedback occurs, including how it fades as learners become more skilled
Distribution: how feedback instances are spread across a unit of instruction (universal, triggered, or learner-requested)
Source: who or what delivers the feedback and the degree to which the learner trusts that source
Individualization: how feedback is customized to each learner’s skills, interests, goals, and prior outcomes
Content: what the feedback communicates (e.g., knowledge of results, verification, or elaboration)

Before applying these dimensions, it is vital to provide theoretical and research-related grounding for each one. For each dimension, there is a continuum of possibilities for the instructional designer and ultimately the learner. In keeping with sufficiency theory, the sections that follow provide a sufficient foundation for the distinct qualities inherent in each dimension (Kozma 1994).

Dimension 1: Timeliness

Perhaps the earliest dimension of feedback to be empirically researched is timeliness. Several literature reviews have concluded that there is great value in timely feedback (Gibbs and Simpson 2004; Hattie 2015; Jaehnig and Miller 2007). Behaviorists concluded that timeliness was effective because it acted as a positive reinforcer of the student’s response behavior. However, this rationale for the importance of timeliness has since been refuted (Kulhavy 1977; Kulhavy and Anderson 1972). Instead, many propose that timeliness is important because cognitive pathways are still malleable directly after a response. Timely feedback solidifies the learner’s cognition or addresses a misconception before the learner’s thoughts stray too far to other topics (Brown et al. 1989; Merrill 2013).

Other research on timeliness reveals that Bloom’s two-sigma effect is realized when time is variable and feedback is used to continue to improve performance and responsiveness for as many iterations as are needed (as cited in Reigeluth et al. 2017). Along these same lines, Merrill (2013) points out that just-in-time tutoring is recognized as a highly effective instructional practice. Merrill also notes that timely feedback is a key aspect of problem- and project-based learning, since it is necessary when students are stuck on a problem and cannot proceed (Merrill 2013). Learners need timely responses (scaffolds) to continue to move forward (Watson and Watson 2017). Finally, timely formative feedback allows learners to satisfy their own timelines for creative production instead of working toward an imposed deadline (Kalaitzidis et al. 2017). Clearly, there are many ways to interpret and implement the dimension of timely feedback.

Dimension 2: Frequency

The second dimension of the model proposed here is that feedback should be continuous and integrated (Reigeluth et al. 2017). Naturally, feedback should be frequent enough to inform both the instructor and the student of the student’s knowledge state compared with the outcome to be achieved (Voorhees and Voorhees 2017). As detailed by Francom (2017), the frequency of such feedback should be faded over time as learners become more skilled. Studies find that time for the provision and use of feedback is “severely limited” and that learning designs could often be improved by moving some modeling or demonstration activities to independent video, as in “flipped classroom” approaches (Zainuddin and Halili 2016). Such approaches free in-class time for feedback and application (Francom 2017). As shown in this dimension, frequent feedback gives the learner knowledge of her individual progress.

Dimension 3: Distribution

According to Reigeluth et al. (2017), the distribution of feedback instances across a unit of instruction should allow for goal setting and self-regulation. While providing a thorough and even distribution of feedback instances is challenging for human instructors, technology provides options for immediate feedback delivered on an intentionally distributed timeline (Reigeluth et al. 2017). The design of that distribution can be universal (all students receive the same distribution), triggered (students who do x receive y), or requested by the student (Reigeluth et al. 2017). In task-centered instruction, coaching leads to better task completion and transfer, especially with “whole-task” integrative learning. By definition, effective coaching in such forms of instruction involves distributed feedback that continues to motivate and guide a learner (Francom 2017). Of course, providing the right distribution of feedback for each learner is a complex task. Prensky (2017) recommends leveraging technology to do what it does best (automated feedback) and developing human capacity to interact in more complex dimensions.
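As a minimal sketch of the three distribution designs named above, the Python below combines universal, triggered, and learner-requested feedback in one plan. The class, event names, and rules are illustrative assumptions, not constructs from the cited models.

```python
# A minimal sketch of the three distribution designs: universal (all
# students receive the same distribution), triggered (students who do x
# receive y), and learner-requested. All names and rules are illustrative.

class FeedbackDistribution:
    def __init__(self, universal_events, triggered_rules, request_template):
        self.universal_events = set(universal_events)  # same checkpoints for everyone
        self.triggered_rules = triggered_rules         # event -> feedback message
        self.request_template = request_template       # learner-initiated feedback

    def feedback_for(self, event, requested=False):
        """Return the feedback messages a learner receives at this event."""
        messages = []
        if event in self.universal_events:
            messages.append(f"Checkpoint feedback at {event}.")
        if event in self.triggered_rules:
            messages.append(self.triggered_rules[event])
        if requested:
            messages.append(self.request_template.format(event=event))
        return messages

plan = FeedbackDistribution(
    universal_events=["module_1_complete"],
    triggered_rules={"quiz_failed_twice": "Review the worked example before retrying."},
    request_template="Here is a hint for {event}.",
)
print(plan.feedback_for("quiz_failed_twice"))                  # triggered only
print(plan.feedback_for("module_1_complete", requested=True))  # universal + requested
```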

Dimension 4: Source

From the learner’s perspective, the source of feedback could be the instructor, a teaching aide, an outside expert or practitioner, the general public, a peer, or canned responses via the Web. Each source has value. On the surface, none of these sources is inherently better than another. The dimension to be measured is the degree to which the learner trusts the source (i.e., learner perception). Trust can perhaps be measured simply by asking the learner, “Did you trust the source of this feedback?” Trusted sources of feedback create positive emotions that are essential for effective learner-centered instruction. Furthermore, it is possible that flexible, diverse settings and sources of feedback are more effective because learners gain a greater variety of perspectives (Reigeluth et al. 2017).

What is clear is that learners need to have respected relationships with those who grasp their unique talents (Reigeluth et al. 2017) and who find instructionally effective ways to tap into and extend them. Such relationships are vital since a trusted source can help produce a love of learning and appreciation for peers. Principle Four of Reigeluth et al.’s (2017) learner-centered paradigm is that feedback can be delivered by a mentor, coach, or outside expert as well as an expert faculty member or instructional assistant. In effect, definitions concerning sources of feedback often mention coaches, mentors, teachers, peers, other learners, virtual participants, or some type of software system or computer-enabled environment. Importantly, Principle 2.2 of the learner-centered paradigm argues that virtual coaching is a justifiable expense if the number of learners can offset the budget needed for such human resources.

Other research cautions designers to maintain a balance of locally developed (faculty or instructional designer) and commercially developed (e.g., courseware, adaptive learning tools, intelligent tutoring) sources of feedback (Voorhees and Voorhees 2017). In task-centered instructional design models, peers have been studied and found to be trusted sources of feedback (Francom 2017). The real world can also be a trusted source of feedback if the assignment calls for responses from an authentic, real-world audience (Kalaitzidis et al. 2017). In summary, the ideal feedback experience contains diverse sources of feedback that are trusted by the learners.

Dimension 5: Individualization

In an effective feedback experience, success is unique to every learner (Reigeluth et al. 2017). Feedback is customized to each learner in some way--skill development, interest, specific goals, prior outcomes, etc.--or learners may be offered a choice of their preferred method of assessment and feedback (Reigeluth et al. 2017). It is important to note that Principle Three of the learner-centered paradigm from Reigeluth et al. (2017) relates to personalization. Of course, the provision of feedback to individuals, by an instructor, is one way to realize the personalization aspect of learner-centered instructional design.

Feedback can also come from the learner herself when instructional resources are introduced as a fixed point of comparison for the learner. In their research on competency-based education, for example, Voorhees and Voorhees (2017) find that learners are more successful when they use a rubric to self-assess their work rather than when the rubric is a tool used solely by the instructor. After drafting a product, if a learner moves systematically through an analytic rubric, articulating the comparison of her work to the rubric criteria, her own self-assessment becomes individualized feedback for herself. As this occurs, she makes her justifications visible so that a trusted peer or instructor can identify misconceptions and offer additional individualized feedback that fits the learner’s pre-existing rationale for her choices.

Similarly, personalized reflection can be an effective instructional strategy and can create unique opportunities for individualized formative feedback from others (Watson and Watson 2017). Advances in personalized learning meet the need for students to receive individualized feedback generated by an artificially intelligent program (Jarrett 2013). Finally, maker-based research indicates that learning is more effective when learners are shown the value of the learning to the outside world and when they are made aware of the necessity for change (McKay and Glazewski 2017). Connecting each learner to the world and connecting learning to each individual will be different for every learner, thereby pointing to the need for individualization in a learner’s overall feedback experience.

Dimension 6: Content of the Feedback

Finally, the content of learner feedback matters. Content can move the learner forward and solidify accurate understandings, or it can be motivational. Feedback research during the 1970s identified three kinds of content, as detailed below.

1. Knowledge of results, which is mostly motivational feedback (e.g., “Good job!” or “You did excellent work once again.”).

2. Verification feedback (e.g., “You selected B but C is the best answer.”).

3. Elaborated feedback (e.g., “You should review Chapter 10 and consider the laws of motion.”) (Kulhavy 1977).

Kulhavy (1977) found that elaborated feedback is the most effective of the three, whereas knowledge of results feedback produces almost no effect. Beyond these broad categories, what follows are descriptions of more recent research that provide more granular detail into the content dimension of feedback.

First, the content of high-quality feedback is connected to the learner’s existing knowledge or skill (their knowledge state). In keeping with cognitivism, rather than the behaviorist tendency to treat feedback as reinforcement, the content of feedback should help learners process and organize their own thinking; in effect, it helps learners think about their thinking (Reigeluth et al. 2017). Second, the content of feedback should address emotional, social, and character development as well as the cognitive and physical knowledge or skill to be mastered (Reigeluth et al. 2017). Third, there are models for the content of feedback that can be followed to improve learner success. In one example from Social Serious Game design, the content of feedback delivered to players is categorized as “question, information, hint or solution” and then delivered to the player/learner when certain conditions are met (Konert et al. 2012).
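The sketch below illustrates, in Python, how conditional delivery of those four content types might work. The escalation rule and its thresholds are assumptions for illustration; they are not taken from Konert et al. (2012).

```python
# A sketch of conditional selection among the four content types quoted
# above. The escalation rule and thresholds are illustrative assumptions.

def select_content_type(failed_attempts, minutes_stuck):
    """Escalate from least to most directive content as the learner struggles."""
    if failed_attempts == 0:
        return "question"     # prompt reflection before giving anything away
    if failed_attempts == 1:
        return "information"  # restate the relevant concept
    if failed_attempts == 2 or minutes_stuck >= 10:
        return "hint"         # point toward the answer
    return "solution"         # resolve the impasse so learning can continue

print(select_content_type(failed_attempts=2, minutes_stuck=3))  # hint
```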

Other considerations related to the content of feedback include the finding that granular feedback is better suited for formative assessments, whereas broad feedback is more effective for summative competencies (Voorhees and Voorhees 2017). According to Francom (2017), feedback related to the actual task accomplished by the learners, rather than the topic of instruction, tends to have greater relevance and effectiveness. He also notes that the content of feedback instances should range from simple to complex and then fade as learners gain independence (Francom 2017). Among other relevant findings, Watson and Watson (2017) suggest that mentoring is a method of identifying the strengths and interests of the learner so that the content of the feedback can be authentically connected. Moreover, in the maker-based instructional model, the content of feedback should help learners articulate a question that will guide their learning and prompt them to reflect on and consider their own design thinking (McKay and Glazewski 2017). Finally, the content of feedback in a learner’s overall feedback experience should clearly reflect the purpose of the learning.

The Impact of a Diverse Feedback Experience

The online learning market for adult learners is the broadest educational market on the globe. From micro-credentials to competency-based education to open universities, a learner’s options are continually expanding. Before selecting a learning opportunity, how can a learner know more about what she will experience? How can consumers and philanthropists know what to fund? How could policymakers better protect the learner from bad actors in the market? Clearly, everyone could benefit from measuring the aspects of learning that matter most. Feedback is one of those central constructs, and defining the dimensions of feedback, as we have begun with this analysis, is merely the first step toward evaluating quality. Further studies are needed to use rubrics or analytics, like the example presented in Appendix 1 Table 2, to quantify the learner feedback experience and compare it with other indicators of learner success.

If all stakeholders had a better awareness of the feedback experience learners could expect with a given educational technology, those tasked with purchasing decisions and funding options could reward instructional offerings that are explicit about the overall quality and components of the learner’s feedback experience. Learners are negatively impacted when one dimension of feedback (e.g., timeliness: automated responses from adaptive learning software) eclipses other equally important dimensions (e.g., source: whether the learner trusts the source and whether the feedback includes a diversity of perspectives). Evaluation of the whole feedback experience, using a rubric like the one proposed (Appendix 1 Table 2), could help designers balance a learner’s experience, thereby increasing the quality of the feedback experience.
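As a hypothetical sketch of such a holistic evaluation, the Python below scores each dimension and flags imbalance among dimensions. The 0-3 scale, equal weighting, and imbalance threshold are assumptions for illustration; they do not reproduce the rubric in Appendix 1 Table 2.

```python
# A hypothetical holistic scoring pass over the six dimensions. The scale
# (0-3 per dimension), equal weights, and imbalance rule are assumptions.

DIMENSIONS = ["timeliness", "frequency", "distribution",
              "source", "individualization", "content"]

def score_feedback_experience(ratings):
    """ratings: dict mapping each dimension to a 0-3 rubric score."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"Unrated dimensions: {missing}")
    total = sum(ratings[d] for d in DIMENSIONS)
    # Flag imbalance: one strong dimension should not eclipse weak ones.
    imbalanced = max(ratings.values()) - min(ratings.values()) >= 2
    return {"total": total, "max": 3 * len(DIMENSIONS), "imbalanced": imbalanced}

print(score_feedback_experience({
    "timeliness": 3, "frequency": 2, "distribution": 2,
    "source": 1, "individualization": 1, "content": 2,
}))
# {'total': 11, 'max': 18, 'imbalanced': True}
```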

Intentionally designed feedback experiences that attend to the six dimensions mitigate common instructional challenges. For example, when distributed practice opportunities are lacking across a learning experience, student performance is weakened. Design that attends to distribution could mitigate this challenge. Additionally, learners increasingly report a lack of personal feedback in college courses (Boud and Molloy 2013). Design that incorporates individualization could address the lack of learner engagement or personal connection as well as the lack of ownership in the learning process.

It is vital to ask who can help in this regard. First of all, policymakers and accreditors need to validate effective implementation of educational tools, systems, and feedback mechanisms. Second, educational technology providers need to differentiate themselves among competitors by designing feedback components that are in line with the six dimensions of feedback (i.e., timeliness, frequency, distribution, source, individualization, and content) presented here. Third, instructors and instructional designers need to become more aware of the importance of these six dimensions through professional training as well as implementation.

Leveraging learning analytics to track the first three dimensions of learner feedback could lead to reliable measurements of timely, frequent, and distributed feedback. The dimensions described in this article work in concert with one another to produce learner-centered instructional experiences suitable for academically diverse groups of learners who display a variety of interests, style preferences, and levels of motivation. Such diversity and individual differences effectively describe the wide gamut of learning settings in the adult learning world, from higher education to corporate training environments to casual informal learning in one’s home setting.
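To ground that claim, here is a minimal sketch of how event-log analytics could quantify the first three dimensions. The log fields, units (day offsets), and metric definitions are assumptions, not an established analytics schema.

```python
from statistics import mean, pstdev

# A minimal sketch of clickstream-style metrics for timeliness, frequency,
# and distribution. Event fields and units are illustrative assumptions.

def analytics_for(events, course_days):
    """events: list of dicts with 'response_at' and 'feedback_at' day offsets."""
    latencies = [e["feedback_at"] - e["response_at"] for e in events]
    feedback_days = [e["feedback_at"] for e in events]
    return {
        # Timeliness: average delay between a response and its feedback.
        "mean_latency_days": mean(latencies),
        # Frequency: feedback instances per course day.
        "instances_per_day": len(events) / course_days,
        # Distribution: low spread means feedback clusters in one stretch;
        # high spread means coverage across the unit.
        "spread_of_feedback_days": pstdev(feedback_days),
    }

log = [{"response_at": 1, "feedback_at": 1},
       {"response_at": 5, "feedback_at": 6},
       {"response_at": 20, "feedback_at": 21}]
print(analytics_for(log, course_days=30))
```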

As learner-consumers, humans living in the twenty-first century enter a learning experience expecting to receive regular, individualized feedback from a qualified expert. In response, some organizations and institutions are increasingly willing to invest time and money in learning experiences that offer specific, detailed, and timely feedback on demand. On the other hand, if someone seeks to brush up on a previously mastered but fast-fading skill (e.g., Microsoft Excel formulas), and does not need or want extensive or individualized feedback, free online videos and associated transcripts may suffice. Defining and communicating the dimensions of feedback means that developers can provide learners with an accurate picture of the feedback experience that they can expect as well as need. Learners are then empowered as informed consumers of their own learning experiences.

Closing Comments

Exploring feedback components in many of the chapters of Reigeluth et al.’s (2017) volume on instructional design theories and models helped reveal several key aspects of feedback in learner-centered instruction. It is now important to ask whether an analysis of the learner’s feedback experience is a better proxy for measuring quality in postsecondary online learning than other mechanisms currently in use (i.e., grades, learner satisfaction, or regular and substantive contact). We argue that the ideal learner feedback experience in most postsecondary settings would consist of effective implementation in each feedback dimension. A potential rubric for measuring the learner’s feedback experience is presented in Appendix 1 Table 2. Defining the dimensions of the learner’s feedback experience in learner-centered instructional design, as we have presented here, is merely the first step. Next steps might include evaluating emerging educational technology tools and products for these dimensions of feedback, as well as analyzing the relationships between student feedback experiences and other measures of student success.

The forms of learning design and delivery are expanding at a rapid pace. As such expansion occurs, there are mounting needs to better grasp the functions of all aspects of the learning experience and environment. Learner feedback is a key component. Each element described in this paper—timeliness, frequency, distribution, source, individualization, and content—is vital to the design of high quality online learning in higher education settings. As such, this six-part model is intended to provide a mechanism for the design, delivery, and evaluation of effective online learning environments. Key aspects of human existence in the twenty-first century may, in fact, depend on it.