Introduction

Case-based instruction (CBI) has a long-standing history within medical, legal, and business education (Williams 1992), with more recent implementations in the field of instructional design (ID; Ertmer et al. 2014). In general, CBI presents a realistic problem situation, which students then analyze and resolve through reflection and discussion (Ertmer and Stepich 2002). Typically, after independently analyzing the case problem, students work collaboratively to clarify and extend individual interpretations (Flynn and Klein 2001; Levin 1995) and, subsequently, to reach consensus regarding proposed solutions (Jonassen and Hernandez-Serrano 2002; Stepich et al. 2001). As these collaborative experiences are designed to encourage deeper understanding of and connections among the presented case problems and discipline-based concepts, discussions during CBI are considered to be one of the most important elements of the case learning process (Ertmer and Koehler 2014; Ertmer and Stepich 2002; Levin 1995; Mitchem et al. 2008; Yew and Schmidt 2012).

Case problems, like the instructional anchors in other problem-centered methods, are distinguished by their ambiguity and openness to multiple interpretations (Barrows 1999; Jonassen 2011a). According to Jonassen (2011b), “cases are the building blocks of [all] problem-solving learning environments (PSLEs). … Various forms of problem-based learning, problem-centered instruction, case studies, case-based teaching, case-based instruction, and case-based learning … have been developed to engage or support learning how to solve different kinds of problems” (p. 149). In this paper, we use ‘problem-centered instruction’ as the umbrella term, which encompasses all forms of PSLEs. CBI, as a specific form of problem-centered instruction, is defined as instruction that is “anchored in an authentic problem that is relevant to the learner. … The problem to be solved is represented as a case, and cases are used in various ways as instructional support” (Jonassen 2011b, pp. 150–151).

As an instructional strategy, discussion works well within CBI, as it engages students in the development of solutions to real, complex problems (Moore 1997); prompts students to develop both cognitive and problem-solving skills (Wilen 2004); and encourages creativity, peer and facilitator interaction, and reflective and knowledge-seeking behaviors (Ngeow and Kong 2003). By prompting understanding, reflection, elaboration, and clarification, discussions during CBI have been observed to enrich the case learning experience (Ertmer and Stepich 2002; Levin 1995; Mitchem et al. 2008; Yew and Schmidt 2012). For novice instructional designers, collaborating on and discussing key case components represents an important aspect of the problem-solving process (Dabbagh et al. 2000). As many potential solutions exist for each instructional design case, hearing the ideas and experiences of others can provide an effective means for proposing more informed solutions (Ertmer and Koehler 2014).

Role of facilitator in CBI

It is generally agreed that CBI’s effectiveness is dependent on the facilitator’s skill in initiating, leading, and closing the case discussion (Ertmer and Koehler 2014; Levin 1995; Rangan 1996; Rico and Ertmer in press). As described by Andersen and Schiano (2014), “The core of case teaching—and most of the art of it—lies in managing the students’ discussion” (p. 66). First, during CBI, a facilitator must structure/establish the initial direction of the discussion. Research indicates that the prompts used to activate a discussion can impact both the instructional process and the quality of learning that results from the subsequent discourse (Ertmer and Stepich 2002; Kanuka 2011; Wegerif and Mercer 1996). This finding also appears to be true for online discussions (Ertmer et al. 2011; Kanuka 2011; McLoughlin and Mynard 2009). For example, Richardson and Ice (2010) reported that using different instructional strategies to initiate asynchronous discussions prompted varying levels of critical thinking among students. That is, discussions initiated with open-ended or topical prompts (e.g., “Based on the readings, what implications follow?”) generally resulted in lower levels of critical thinking than other types of prompts, such as questions related to specific case or problem scenarios.

Second, after prompting and structuring the initial discussion, the facilitator is responsible for directing and maintaining the collaboration and interaction among students (Chng et al. 2011; Yew and Yong 2014). Focusing on case and course goals, expert facilitators have been observed to utilize a variety of techniques to help students fully explore the problem at hand (Heckman and Annabi 2006; Hmelo-Silver and Barrows 2006). Although students drive the discourse, the facilitator’s role is to promote sense-making by validating students’ ideas, summarizing discussed points, pushing for consensus, and using probing questions to prompt deeper analysis (Ertmer and Koehler 2014; Heckman and Annabi 2006; Hmelo-Silver and Barrows 2006; Yew and Yong 2014). Rangan (1996) refers to this process of coordinating various instructional elements during a case discussion as “choreographing” the discussion, as the instructor must be prepared to flexibly transition among topics and strategies.

Finally, at the end of a case discussion, facilitators must bring closure to the case learning experience (Ertmer and Stepich 2002; Rico and Ertmer in press). Palincsar (1999) describes facilitating closure as a culminating event: Throughout the discussion, a facilitator validates participants’ ideas while striving for “consensus regarding the problem and the solution process” (p. 168). Others (Choi and Lee 2009; Ertmer and Stepich 2002) have described specific strategies instructors can use to close the case discussion, including asking students to (1) summarize the issues left to be resolved, (2) describe insights gained during the discussion, (3) review unexpected developments or findings, (4) list “best” ideas that emerged during the discussion, and (5) reflect on lessons learned from the case story itself, as well as the subsequent discussion (Ertmer and Stepich 2002).

Regardless of which strategy is used, at the end of the discussion, facilitators are responsible for helping students extract, reflect on, and index the case lessons (Kolodner and Guzdial 2000; Schank and Cleary 1995; Stepich and Ertmer 2009) so they can be recalled and applied more readily in the future. Kolodner (1997) indicated that educators should help students articulate lessons learned in a variety of rich ways, as well as help them predict the circumstances under which a lesson might be applied in the future. It is important to note that this step in the facilitation process is not just an “extra” or an “add-on,” as it plays a critical role in prompting learners to make connections among specific case elements and their own experiences.

Although the general recommendation in the literature is for the instructor to assume the role of a facilitator during CBI (Heckman and Annabi 2006; Rico and Ertmer in press), there is limited research examining differences in students’ learning as the result of instructor facilitation. For the purposes of this research, we define a CBI facilitator as one who, as an active participant in the case discussion: (1) creates opportunities for productive discourse (Hmelo-Silver and Barrows 2006), (2) promotes sense making and deeper understanding of case content through the use of strong questioning skills (Ertmer and Koehler 2014; Gilbert and Dabbagh 2005; Stepich et al. 2001), and (3) is ready to flexibly respond to, or scaffold, student thinking (Chng et al. 2011). Given the general expectation that a CBI instructor acts as a facilitator during case discussions (Rangan 1996; Savin-Baden 2003), we use the words “instructor” and “facilitator” interchangeably in this paper.

Existing research supports the importance of the facilitator’s role in problem-centered methods, of which CBI is one specific type. For example, Budé et al. (2011) compared the conceptual understanding of undergraduate students in a statistics course who participated in a directive discussion (i.e., tutors actively guided student discussion via directive questions) to that of students who participated in a general discussion (i.e., tutors asked general, open-ended questions). The authors found that students who received directive facilitation developed a better conceptual understanding of the statistics content than students in the general discussion group. Examining the effects of facilitation methods in online formats reveals similar findings: Active facilitator participation has been demonstrated to support higher levels of cognitive presence (Bangert 2008; Lu and Jeng 2006) and can provide necessary direction for novice problem solvers (Nandi et al. 2012; Ng and Tan 2006).

Measuring impact of facilitation

Determining how facilitation influences student learning is complicated by the inherent difficulty in measuring CBI learning outcomes (Lundeberg and Yadav 2006; Saleewong et al. 2012; Yew and Yong 2014). Recently, Hmelo-Silver (2013) proposed “problem-space coverage” (i.e., features, knowledge, and goals needed to solve a problem; Teasley and Roschelle 1993) as a means to quantify what is learned during a case discussion. By first mapping the afforded problem space using textbooks and expert opinion and then analyzing students’ coverage of that space, Hmelo-Silver was able to determine how students engaged with a case problem and the extent to which they covered targeted and related concepts; in other words, what they learned from the case discussion.

Others (e.g., Dolmans and Schmidt 2000; Yew and Schmidt 2012) have also highlighted the importance of considering the content covered during a case discussion, claiming that it provides direction and validation for students’ individual learning and is related to student achievement. Although problem space coverage does not provide a direct or complete measure of the impact of facilitation on student learning (i.e., other factors can also impact coverage), it offers a useful means for examining differences across facilitated and non-facilitated case discussions (i.e., How does problem space coverage differ when an instructor is engaged in the discussion versus when the students are left to discuss a case problem on their own?).

Purpose

In this research we used the concept of “problem-space coverage” (Hmelo-Silver 2013) to measure learning outcomes from two facilitated and two non-facilitated case discussions, including students’ reflective postings of lessons learned (in a course wiki) at the close of each discussion. Furthermore, we examined the quality of those discussions by considering the specificity and depth of content coverage. Examining both the extent and quality of problem-space coverage afforded a meaningful method for comparing outcomes of facilitated and non-facilitated discussions. The following research questions guided both data collection and analysis throughout this study:

  • What are the differences in the extent of problem-space coverage between facilitated and non-facilitated online case discussions?

  • What are the differences in the quality of problem-space coverage between facilitated and non-facilitated online case discussions?

Method

Research design

We used an exploratory descriptive research design, with purposeful sampling (Patton 1990), to examine differences in problem-space coverage between two facilitated (F) and two non-facilitated (NF) online case discussions. Qualitative data, in the form of students’ and instructors’ discussion postings, as well as students’ reflections on lessons learned, were collected from four sections of an advanced instructional design (ID) course, taught in either fall 2012 or fall 2013. After mapping the problem space for the targeted case study, described in more detail later, we conducted a content analysis of all discussion (n = 512) and wiki (n = 65) postings.

The afforded problem space was divided and mapped into two primary areas: (1) problem-finding (e.g., identifying stakeholder perspectives, articulating design and non-design challenges) and (2) problem-solving (e.g., proposing solutions that meet stakeholders’ needs and that address design and non-design issues) (Ertmer and Stepich 2005). Table 1 includes a sample of the case map, highlighting major categories. Tables 2 and 3 include samples of sub-categories for both problem finding and problem solving, respectively (additional details are provided in Ertmer and Koehler 2014). Extent of coverage was determined by examining how often different aspects of the problem space were addressed; quality was examined in terms of specificity and depth of postings. In addition, we examined general differences in student engagement in the discussion (e.g., number of posts/student; number of posts/thread) to substantiate quantitative differences in students’ interactions in facilitated versus non-facilitated discussions.

Table 1 Mapping the problem space afforded by an ID Case Study (Craig Gregersen Case from The ID CaseBook, 2014)
Table 2 Sub-categories related to problem finding: identifies relevant non-design challenges (project constraints)
Table 3 Sub-categories related to problem solving: solutions address design and non-design challenges

Participants

Participants included 54 graduate students (20 males, 34 females) enrolled in four sections of a required 8-week course (F1 = 16; F2 = 13; NF1 = 12; NF2 = 13) in an online master’s program in Learning Design and Technology (LDT). The facilitated discussions occurred in sections taught by the first author; the NF sections were taught by two additional faculty members. All instructors had previous experience teaching the course: the instructor of the facilitated sections had taught the course multiple times, both face-to-face and online, while the other two instructors had one year (NF2) or two years (NF1) of previous experience teaching the course.

Description of the setting/course

Advanced Practices in Learning Systems Design is an advanced graduate course that uses a case-based approach to engage students in the application of previously learned ID skills. The course is designed to serve as a capstone experience in the LDT online master’s program and occurs during the last 8 weeks of each fall semester. In the course, students participate in three instructor-facilitated case studies at the beginning of the term, followed by participation in and/or facilitation of three student-led case discussions. Prior to participating in the case discussions, students complete individual case analyses in which they reflect on and respond to a number of specific prompts. That is, for each individual case analysis, students are asked to (1) identify the key stakeholders in the case and describe their primary concerns; (2) outline the key design challenges in the case, as well as the specific situational constraints (i.e., non-design challenges such as budget, time, etc.); (3) propose at least two reasonable solutions for the designer in the case; and (4) discuss the pros and cons of each solution/recommendation. These prompts are designed to ensure that students give each of these issues careful consideration before participating in the whole-class discussions. Previous research (Ertmer et al. 2009; Tawfik and Jonassen 2013) suggests that these types of questions can scaffold students’ case analysis efforts and push them toward responding in a more expert manner than they would without such prompts.

For this research, we examined students’ discussion of the Craig Gregersen case (Dundis 2014), the third instructor-facilitated case discussion in the course. The Craig Gregersen case study presents a situation in which the ID consultant must create a training program that satisfies multiple stakeholders (e.g., engineers, trainers, lawyers) who have conflicting interests and needs. The main objectives of the case include identifying: (1) organizational issues that affect the success of ID projects, (2) strategies for getting buy-in from a diverse set of stakeholders, (3) strategies for dealing with resource and time limitations, (4) ways to mitigate potential problems via initial contract negotiations, and (5) strategies for reconciling ethical/professional concerns with the client’s desires. Students were assigned two supplementary readings, both related to professional ethics, to help them prepare for their individual case analyses and the subsequent case discussion.

Description of facilitated versus non-facilitated discussions

To examine the impact of facilitation on problem-space coverage, we purposefully selected the same case discussion from four sections of the course in which the instructors used contrasting approaches when facilitating the weekly case discussions. Two case discussions (F1, F2) were facilitated by the first author, who intentionally participated in the discussion throughout the week and who implemented the strategies of an engaged facilitator as described earlier. Facilitation strategies included clarifying misperceptions, asking probing questions, and prompting students to go beyond surface interpretations and simple solutions (more specific examples are included in Table 6). The other two discussions (NF1, NF2) occurred in sections of the course in which the instructors chose not to actively engage in the ongoing case discussion, although they did post a set of pre-determined prompts, as described next.

Instructors for each section posted the same initial (n = 3), mid-week (n = 1), and final (n = 1) prompts to structure and debrief the weekly discussion. These pre-determined, directive prompts were designed to build on students’ individual case analyses, described previously. For example, at the beginning of the week, students were assigned to one of the three main stakeholder roles and asked to discuss, first in same-role groups, their perceptions of what the proposed training should look like, from their assigned perspectives. Following these small group discussions, students presented their ideas to the other stakeholders, which allowed them to see, fairly quickly, that their perspectives/needs were in conflict. This initial prompt was structured specifically to focus students on problem finding.

The mid-week prompt switched students’ focus to problem solving by asking them to assume the role of the ID consultant to determine how to balance the conflicting needs of the different stakeholders. More specifically, they were asked to share their thoughts about what the proposed training solution should look like and to discuss how their solutions would meet the various stakeholder needs identified in the first part of the week. The final instructor post served as a debriefing/summary and was posted at the end of the week.

The main difference between the facilitated and non-facilitated sections of the course was that in the NF sections there were no additional postings by the instructors in the discussion threads during the week. That is, after posting the initial and mid-week prompts, the instructors of the NF sections did not engage, in any observable way, in the ongoing discussions among the students. In contrast, in the F1 and F2 sections, the instructor posted an additional 25 and 30 times, respectively, throughout the week, responding to students’ postings in the discussion. Instructor responses were used primarily to (1) provide or ask for clarification of case details (e.g., “Remember that legal said it’d be ok for Craig to ‘jazz up’ the training.”), (2) prompt students to carefully consider the role of the ID consultant in the case (“How far can Craig really go to get his ideas through?”), and (3) emphasize or clarify the roles of the key stakeholders in the case (“Remember that they [legal] are the subject matter experts [SMEs].”).

Data collection

Our primary data source consisted of the four online case discussions, which were captured within Blackboard Learn. A waiver of consent was granted by the IRB to use these previously collected data. For this research we analyzed the third case discussion in the course, which included total student posts of 167 (F1), 128 (F2), 77 (NF1), and 140 (NF2). Instructors’ posts from the two facilitated sections served as an additional data source.

Secondary data included students’ postings to a “Lessons Learned wiki,” at the end of the case discussion. Although all students were expected to post to the wiki, the number of wiki entries varied across sections. In the facilitated sections, each student posted two lessons learned (F1 = 32 entries; F2 = 26 entries); in the NF sections, only 4 students posted in NF2 (7 entries), while no students posted in the NF1 wiki.

The problem space afforded by the targeted case study was mapped into the problem-finding and problem-solving spaces as described in Ertmer and Koehler (2014). Building on our previous work, in which we conducted an in-depth analysis of one facilitated case discussion, we analyzed three additional discussions of the Craig Gregersen case study (Dundis 2014), which occurred during week 3 of the 8-week semester.

Data analysis

To provide a quantitative measure of students’ interactions in the discussions, we calculated: (1) the average number of responses/student in the discussion forum as a whole (how much students are talking), and (2) the number of threads and average number of posts within each thread (how much students are talking to each other), beginning with Wednesday’s prompt. Because the earlier threads (Monday–Tuesday) were specifically devoted to small group discussions of 4–6 students, it was not expected that the quantity or depth of these threads would be particularly meaningful. In contrast, the Wednesday prompt was designed to bring students together to focus on ways to move forward and solve the issues in the case. In addition to this pre-determined Wednesday prompt, the instructor in the facilitated discussions posted an additional prompt on Thursday, directed toward specific aspects of the ongoing conversations in which the students were engaged. In F1, the prompt heading asked, “What should Craig DO?” and occurred within the ongoing discussion that began on Wednesday, in response to a student’s post. In F2, the prompt heading was labeled, “Moving Forward” and was posted as a new thread. Both prompts were designed to push the students to reach consensus regarding the best way to meet the needs of the various stakeholders in the case. It should be noted that these prompts were not pre-determined but rather posted in direct response to students’ ongoing, generally naïve, ideas about solving the case, which often tended to ignore the needs of at least one of the key stakeholders.
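To make these interaction measures concrete, the sketch below shows how they might be computed from an exported discussion transcript. This is a minimal illustration under assumed data structures (a list of post records with hypothetical `author` and `thread_id` fields), not the tooling actually used in the study.

```python
from collections import Counter

def engagement_metrics(posts):
    """Compute simple interaction indicators for a discussion forum.

    `posts` is a list of dicts with hypothetical keys:
      'author'    -- identifier of the posting student
      'thread_id' -- identifier of the thread containing the post
    """
    authors = {p["author"] for p in posts}
    thread_sizes = Counter(p["thread_id"] for p in posts)
    return {
        "posts_per_student": round(len(posts) / len(authors), 1),
        "n_threads": len(thread_sizes),
        "posts_per_thread": round(len(posts) / len(thread_sizes), 1),
        "deepest_thread": max(thread_sizes.values()),
    }

# For example, 40 posts spread across 12 threads (the NF1 mid-week
# pattern reported later) yield ~3.3 posts/thread, whereas 40 posts
# across 5 threads (F1) yield 8.0 posts/thread.
```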

We used both a deductive and inductive approach to analyze and code each case discussion (Miles and Huberman 1994). That is, we began, deductively, by coding students’ posts using the case map previously generated (Ertmer and Koehler 2014). Additionally, during the coding process we inductively identified new categories or sub-categories that were not included on the previous case map. New categories were cross-checked with earlier codes and then combined when meanings were similar. For example, the initial case map included categories related to project management, company culture, and the 1-day training format, among others. Additional sub-categories emerged during the coding process, including two that were related to company leadership and project scope. Given similarities in meaning, and in order to reduce redundancy, project scope was combined with project management and company leadership with company culture. After the coding categories were finalized, the two researchers independently coded the four discussions again; any remaining divergent interpretations were clarified through extensive dialog. After multiple reviews of each discussion, consensus was reached and frequencies were calculated for each category and sub-category of the case map. Typically, posts included more than one code as illustrated by the sample in Fig. 1.

Fig. 1 Sample student post, with associated codes

Postings to the Lessons Learned wiki were coded using the same case map/categories. As an example, one student in F1 posted the following lessons learned: “(1) Establish clear expectations before a project starts, and (2) Consultants beware: Be sure you understand the ID constraints established by the client.” We noted that the first lesson corresponded to the category, “Recognizes the role of the contract,” and to the sub-category, “Sets boundaries at the start of the project.” The second lesson corresponded to the category, “Recognizes the role of the consultant,” and to the sub-category, “Recognizes the limited power of the consultant.” Although we initially expected to see new codes or categories emerge from the wiki, this was not the case. In fact, all of the posted lessons learned tied back to specific aspects of the problem space students had found most meaningful during the case discussion. After coding all wiki postings related to this case, frequencies were totaled and then added to the discussion frequencies.
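As an illustration of how the frequency counts were accumulated, the following sketch tallies coded segments by case-map category, with wiki tallies added to the discussion tallies. The data structures and code labels are assumptions for illustration only; the actual coding was performed manually by the two researchers.

```python
from collections import Counter

def tally_codes(coded_units):
    """Tally case-map codes across coded units (posts or wiki entries).

    `coded_units` is a list of code lists, one per unit; a unit may
    carry several codes (see Fig. 1). Structure is illustrative only.
    """
    freq = Counter()
    for codes in coded_units:
        freq.update(codes)
    return freq

# Discussion and wiki frequencies are totaled separately, then combined
# (hypothetical code labels):
discussion_freq = tally_codes([["design-challenge", "stakeholder"], ["contract"]])
wiki_freq = tally_codes([["contract", "consultant-role"]])
combined = discussion_freq + wiki_freq  # Counter addition merges the tallies
```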

Issues of reliability and validity

Lincoln and Guba (1985) recommended that qualitative results be evaluated using the standard of “trustworthiness,” as established by credibility and confirmability. In this study, credibility was gained by examining instructor and student postings from four online discussions, facilitated by three instructors, across four different sections of the same course, thus providing triangulation of data sources. Additional triangulation was provided by coding and comparing students’ postings in the online Lessons Learned wiki. Confirmability was established through the use of two researchers: each examined the data individually and then collaboratively, as a means of developing consensus on the coding for each discussion post as well as students’ postings in the wiki.

Findings

Extent of problem-space coverage

As noted by Andrews (1980), the average number of responses/student is one of several “mileage” indicators for student discussions, serving as an early sign of a productive discussion. Results from this study suggest that students were nearly equally active in three of the four discussions, averaging about 10 posts/student, with the remaining discussion (NF1) showing relatively fewer posts/student (see Table 4). The number of threads per prompt, as well as the average number of posts/thread, is also included in Table 4 but is discussed in more detail later as part of our discussion of the quality of coverage. Table 5 illustrates quantitative differences in problem space coverage for the four discussions. Although frequency counts do not capture the quality of the discussions, they enable us to identify areas that received marginal versus extensive coverage; in other words, the extent of coverage.

Table 4 Differences in student posts: facilitated versus non-facilitated case discussions
Table 5 Extent of problem space coverage for facilitated and non-facilitated case discussion

First, in four of the five problem-finding categories (see Table 5, discussion [D] columns), the facilitated discussions had the greatest number of coded segments, with F1 having the most in three of the five categories. However, the NF2 discussion had the highest number of coded segments in the problem-finding category of “non-design challenges,” and equaled the number of segments in the F2 discussion in one other category (i.e., relationships among challenges). This suggests the possibility that students could address the afforded problem space even without instructor facilitation. However, the variation between the two NF discussions suggests that problem space coverage is not guaranteed, perhaps depending more on the specific makeup of the participating students. Furthermore, it is important to remember that a large amount of discussion, especially if concentrated in one category, does not necessarily equal a productive discussion. In fact, in this study, this result seems to suggest that students in the NF2 discussion may have been overly concerned about the project constraints and perhaps unable to move forward to consider potential solutions. Findings from previous research (Fitzgerald et al. 2011; Kim and Hannafin 2008) suggest that while experts are able to quickly filter through the details of a situation to narrow the problem space and determine key elements (Ertmer et al. 2008), novices tend not to do this on their own. As noted by Fitzgerald et al. (2011), “novices expend significant time and effort in interpreting situations, tend to focus on irrelevant information, and fail to identify problems adequately or to develop solutions” (p. 3). However, Ertmer et al. (2009) demonstrated how scaffolds, introduced by the instructor, can enable novice instructional designers to shift their focus from case details/constraints to the more critical elements of the problem situation (i.e., the design challenges and solutions). In an online discussion, scaffolding from the instructor, in the form of probing questions, may serve this same purpose.

Second, the problem-solving space generally received greater coverage in the facilitated discussions, particularly in terms of generating training solutions, including those that addressed both the specific design and non-design challenges in the case. In one category, consideration of the potential consequences of proposed solutions, NF2 showed slightly more discussion. Overall, the NF1 discussion showed the least amount of problem-space coverage in every category. Because students in the NF1 discussion were relatively less active than students in the other discussions, it follows that each aspect of the problem space also received relatively less coverage.

Third, one fairly clear difference between the F and NF discussions is the extent to which students addressed the design challenges, as well as the extent to which they proposed solutions that addressed those challenges. On average, students in the F sections articulated the design challenges 32 times, while those in the NF sections addressed them 15.5 times. Solutions to address the design challenges were proposed, on average, 41 times in the facilitated discussions, but only 21 times in the NF discussions. Given the importance of this task to the entire case analysis process (Jonassen 2011a), the opportunities to learn how to analyze and solve specific ID challenges appeared much more limited for students who participated in the NF case discussions.

As noted earlier, active participation by the facilitator can provide important direction for novice problem solvers (Bangert 2008; Ng and Tan 2006). Although students in the NF discussions might be as active as those in facilitated discussions, there is no guarantee that their comments and postings will be relevant to the targeted ID issues or solutions (Hmelo-Silver 2013; Hmelo-Silver and Barrows 2006), especially without a facilitator to prompt or guide them. This interpretation is supported by the relatively greater number of postings in the NF discussions that related to the people or stakeholders in the case, as well as to the non-design challenges, or situational constraints (see Table 5). In other words, students in the NF discussions appeared to spend less time on the relevant ID issues and more time on those things that were more or less out of the control of the instructional designer in the case (e.g., stakeholders’ personalities, timeline, budget, etc.).

As noted by Ertmer and Stepich (2005), this emphasis is typical of ID novices who tend to focus on the concrete surface features of a case problem and who often describe a case problem in terms of what specific people did right or wrong, as opposed to describing the issues in terms of the underlying principles or forces at play. This is not to suggest that students in the facilitated sections did not also act like novices, only that the facilitator may have been able to scaffold their efforts to think about the case in more productive terms. Earlier work (Stepich et al. 2001) demonstrated that instructors can prompt students during problem analysis to think about things that they might not think about on their own. Similarly, studies have shown that students’ learning can be positively impacted by the presence of a facilitator in an online discussion, especially one who uses strong questioning skills to advance their thinking (Gilbert and Dabbagh 2005; Richardson and Ice 2010).

According to Saye and Brush (2002), these types of supports/questions comprise a form of “soft” scaffold, which requires teachers to “continuously diagnose the understandings of learners and provide timely support based on student responses” (p. 82). Table 6 presents examples of how initial novice responses from students in three sections were either redirected by the facilitator (F2), or, in the absence of a facilitator (NF1, NF2), erroneously supported and reinforced by other students in the discussion. Given these examples, the question arises: How can we guarantee that students, in the absence of a facilitator, will index appropriate lessons for transfer to, and application in, their future work?

Table 6 Students’ interpretations of the case issues: comparison between facilitated and NF discussions

Finally, students in the facilitated sections appeared to take advantage of the opportunity to reflect on what they learned from the case discussion by adding to the Lessons Learned wiki, submitting nearly 50 additional instances of problem finding, compared to only six new instances from students in the NF discussions (see Table 5, [W] columns). This is an important finding, as novices have typically been observed to pay less attention to problem finding than problem solving (Hmelo-Silver et al. 2002; Ng and Tan 2006; Perez and Emery 1995; Rowland 1992). Apparently, the Lessons Learned activity provided students with an opportunity to view the case through a problem-finding lens and to bolster their understanding of those aspects that were initially overlooked or under-considered. Additionally, 25 % (21/80) of the coded segments from the lessons learned related to specific ideas and considerations about the designer’s role in the case, suggesting that students in the facilitated sections took away important lessons about how to “be” an instructional designer. Given that one of the main goals of a case approach is to initiate students into professional practice, this evidence is particularly valuable in understanding how a case discussion might meet that goal.

As noted in Table 5, the wiki postings of students in the facilitated sections touched on nearly every category in the problem-finding and problem-solving space. In contrast, in the NF2 discussion, lessons learned seemed primarily to reinforce students’ concerns about the non-design challenges, or project constraints, in the case situation. Furthermore, NF2 participants extracted only one lesson related to potential solutions in the case, providing further evidence for our interpretation that students were “stuck” considering constraints in the case without offering productive ideas for how to work around, or within, those constraints to propose reasonable solutions. This further underscores a potential outcome of a non-facilitated case discussion: without proper guidance, the discussion may lack accuracy and completeness, as noted by previous researchers as well (Nandi et al. 2012; Ng and Tan 2006).

Quality of problem space coverage

It is important to remember that frequency counts related to extent of coverage provide only general information about how often a topic was discussed, but not how well. As a preliminary measure of quality/depth of coverage, we examined students’ attention to the sub-categories of the problem space. That is, by delineating sub-categories of the problem space, we were able to code students’ postings at a finer level of detail. More specifically, while there were only 10 major problem-space categories (5 each for problem finding and problem solving—see Table 1), there were 34 sub-categories related to problem finding and 39 related to problem solving (see Tables 2, 3).

Our analysis of the coverage of these sub-categories (see Table 7) indicated that students in the facilitated discussions addressed, on average, 80.5 % of the problem-finding sub-categories (F1 = 78 %; F2 = 83 %) and 91 % of the problem-solving sub-categories (F1 = 97 %; F2 = 86 %). In contrast, students in the NF discussions addressed, on average, 67.5 % of the problem-finding sub-categories (NF1 = 57 %; NF2 = 78 %) and 65.5 % of the problem-solving sub-categories (NF1 = 55 %; NF2 = 76 %). These percentages hint at differences in the level of detail provided in students’ posts. Furthermore, the differences between the two NF discussions appear greater than those observed between the two facilitated sections. This suggests that although students might potentially address the afforded problem space in a NF discussion, actual coverage may be more dependent on the specific makeup of the participating students. Steele et al. (2000) reached a similar conclusion in their research: Students who participated in student-led discussions showed more variation, across groups, in their discussion of key topics, compared to students participating in instructor-led discussions.

Table 7 Comparison of coverage of problem space: facilitated versus non-facilitated discussions
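The coverage percentages above reduce to a simple proportion: the share of case-map sub-categories touched at least once in a discussion. A minimal sketch, with the sub-category totals taken from the case map (34 for problem finding, 39 for problem solving); the function name and inputs are illustrative assumptions:

```python
def subcategory_coverage(addressed_codes, total_subcategories):
    """Percentage of case-map sub-categories addressed at least once.

    `addressed_codes` is the collection of distinct sub-category codes
    observed in a discussion; `total_subcategories` is 34 for problem
    finding or 39 for problem solving (see Tables 2 and 3).
    """
    return 100 * len(set(addressed_codes)) / total_subcategories

# Example: a discussion touching 26 of the 34 problem-finding
# sub-categories covers about 76 % of that space.
```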

Finally, recognizing that these reported results do not illustrate the direct influence of the instructors’ facilitation, we examined students’ responses to each facilitator post. When we contrast what happened in the last 3 days of the discussion (W–F), when students in all sections were asked to focus on solutions, an important difference becomes apparent (see Table 4). For example, in the NF1 section, the mid-week prompt generated 40 posts in 12 different threads (approximately 1 thread/student), with the deepest thread containing 6 posts, and with no attempt by the facilitator to bring these threads together. As noted by Ertmer et al. (2011), there is an inverse relationship between the number of threads in a discussion and the amount of interaction occurring among students. That is, the more threads observed in an online discussion, the less interaction occurring among students, as students are more likely to be posting isolated responses in order to “answer” the question, as opposed to responding to each other in order to reach a shared understanding. An examination of the content of the threads in NF1 supports this interpretation. The majority of threads included an initial comment from a student, followed by, at most, one or two comments from peers, and then a reply from the original poster. Given this pattern, it is quite possible that students in NF1 were not reading all of the posts in the discussion forum, but simply responding to a few peers in order to meet specific course requirements.

In contrast, in the F1 discussion, the same mid-week prompt generated 91 posts in 5 main threads, with the deepest thread containing 51 posts. As noted in Table 4, even without this final thread, there were 40 responses to the mid-week prompt, spread across 5 threads. Although this number is equivalent to the number of responses posted in NF1, the number of threads is considerably lower in F1, suggesting that there was more interaction among students in the F1 discussion. The final sub-thread in F1, which included 51 additional posts, began when the instructor responded to a student’s post directly within the thread, prompting him to consider whether his proposed solution met all the stakeholders’ needs. Nearly every student (n = 15/16) posted in this final sub-thread, and by the end of the week, students were in general agreement regarding the “best” solution. In this instance, the instructor’s prompt moved students toward a more viable solution than originally proposed, and pushed them to achieve consensus.

As a final contrast, despite a large number of posts in the NF2 mid-week discussion (n = 105 in 18 threads), which occurred from Wednesday to Friday, the deepest thread (n = 20 posts) involved only 7 students in a side conversation regarding how to interpret the ethics issue in the case. Although this could have been a very fruitful discussion, especially with some input from the instructor, the students were left to come to their own conclusions. Furthermore, 30/105 posts in this discussion actually identified a solution that would not work in the given case context (e.g., go above the boss’ head; ignore the concerns of the legal department). In general, these solutions were left unchallenged and thus, became part of students’ understanding regarding how to solve the identified challenges. As noted by Nandi et al. (2012), “handing students the responsibility to direct discussion is not always the best option…Instructors [should] play an active role in initiating and carrying the discussion forward” (p. 23).

Discussion

The results of this research suggest that facilitation of a case discussion does not necessarily lead to a greater quantity of student posts; students in three of the four discussions participated at a fairly equal level. In other words, we did not find a direct relationship between facilitation and the quantity of student posts. Given that the goal of facilitation is generally geared not toward increasing posting quantity, but rather quality, our results can be interpreted positively. However, this contrasts, at least partially, with results reported by Mazzolini and Maddison (2007), who found a significant negative correlation between the percentage of instructor postings and the length of the discussion. Others (e.g., Silver and Wilkerson, as cited in Dolmans et al. 2002) have also found differences in discussions led by content experts versus non-content experts, with content experts tending to take a more active role, which often leads to less interaction among students. In our research, instructor involvement in the discussion did not appear to have a negative impact and, in fact, seemed to lead to equal or greater engagement of the participants. However, additional research is needed to verify this finding, as our study was not designed to demonstrate a cause–effect relationship.

In terms of discussion quality, our results suggest that the problem space of a case study is addressed at a deeper level when the discussion is facilitated. That is, students in this study tended to discuss the different aspects of the problem space in more detail, addressing more of its specific nuances, when the discussion was facilitated. This is similar to what Ng and Tan (2006) reported: students in an unmoderated online discussion completed only a shallow analysis of the problem situation. Similarly, Lu and Jeng (2006) noted that when the instructor participated in a discussion as both a facilitator and co-participant, students tended to post more high-level responses. Although it is encouraging to note that students in the NF2 discussion contributed nearly the same number of posts as students in the F2 discussion, this is not something that instructors could depend on happening. Given the unique makeup of each group of participants, it may be just as likely that a very brief, and shallow, discussion would occur, as evidenced by the NF1 discussion. Furthermore, despite equal amounts of discussion, the content of the discussion was not equal across the sections, as indicated by differences in overall coverage of the problem-finding and problem-solving space, as well as differences in those aspects of the case that students spent most of their time discussing (design solutions vs. project constraints). As noted earlier, novices tend to spend significant time and effort interpreting problem situations/constraints (Fitzgerald et al. 2011; Hmelo-Silver et al. 2002; Ng and Tan 2006; Perez and Emery 1995; Rowland 1992), yet they often fail to define the problem adequately and may fail to develop workable solutions. Without relevant prompting from an instructor, there may be little help available to move them forward.

Our results further demonstrate that we cannot rely solely on quantitative indicators (Andrews 1980) to determine whether a case discussion is productive, especially in terms of problem space, or content, coverage (Hmelo-Silver 2013). As illustrated by our results, students in the NF2 discussion were fairly talkative, posting a large number of responses to the instructor’s initial and mid-week prompts. However, a closer look at the content of their discussion reveals relatively little time spent on the design challenges or solutions in the case, compared to the amount of time spent discussing the case constraints. This helps us understand, at a more detailed level, the differences between a high-mileage discussion and a high-quality discussion. Given the goal of a case discussion, “to understand the problem, identify solutions, and choose among them” (Andersen and Schiano 2014, p. 3), it appears unlikely that students in the NF discussions actually achieved this outcome, or at least unclear whether they did. Without explicit efforts by the instructor to choreograph the discussion (Rangan 1996), the potential for students to fully benefit from the case study approach appears limited (Heckman and Annabi 2006).

Another interesting finding of this study relates to the demonstrated benefits gained by asking students to reflect on, and share, their lessons learned from the case discussion, as recommended by Kolodner and Guzdial (2000). Surprisingly, few students in the NF sections actually completed this task. Although we verified that every student was asked to do this, there appeared to be no consequences for not completing the task in the NF sections. In the facilitated sections, the instructor not only reminded students of this task after every case discussion, but also added an incentive in the last reminder, telling students: “This needs to be complete before I assign grades.” Without a strong rationale for the benefits to be gained by reflecting on their individual lessons learned, and in the absence of a penalty for not doing so, the majority of students simply chose not to follow through. This has important implications for future uses of this strategy in a case-based course. As suggested by Stepich and Ertmer (2009), asking students to extract the lessons learned from a new experience, including those experienced vicariously through case studies, can help students develop a rich library of case experiences from which they can draw in the future. However, it appears important to help students understand the value of this reflective activity, as they do not necessarily recognize it on their own.

Based on our analysis of the content of the lessons learned that were posted, students in the facilitated discussions reflected on nearly all of the targeted problem space, thus adding significantly to the overall coverage of the case issues. As described by Kolodner and Guzdial (2000), by indexing their key take-aways from the case, students in the facilitated sections were able to extend their learning. In this study, that reflection led to the inclusion of 41 (F1) or 39 (F2) additional instances of problem space coverage. Although this additional coverage was not directly related to the presence of the facilitator in the case discussions, the fact that it occurred primarily in the facilitated sections suggests that facilitators can play a role in assuring that this reflection occurs in a meaningful way.

Limitations and suggestions for future research

Our ability to generalize the results of this study is limited by the relatively small number of facilitated and non-facilitated discussions examined. As might be expected, every case discussion provides just a small window into the case discussion process. Given different participants, different instructors, and even different case studies, discussion patterns are likely to vary. However, by examining four different discussions based on the same case study, we can begin to discern patterns in problem-space coverage. Additional research is needed to verify whether the examples in this study are representative of other facilitated and non-facilitated case discussions. In other words, would these patterns hold across different contexts? Furthermore, future research is needed to examine how these patterns might differ in a face-to-face case discussion. Would problem-space coverage be equivalent, or more or less extensive, than that obtained in a facilitated online discussion? How might the facilitators’ actions be similar to, or different from, those used in an online discussion? These are interesting questions that warrant additional investigation.

Implications and conclusions

The results of this study have implications for the use of discussions in online case learning, as well as for the education of instructional designers through the use of a case-study method. First, the results of this study confirm earlier work that suggests that the discussion that occurs during CBI contributes to student learning (Andersen and Schiano 2014; Flynn and Klein 2001; Moore 1997; Wilen 2004). As noted earlier, the case discussion provides a vehicle for sense making during case learning (Heckman and Annabi 2006; Hmelo-Silver 2013), enables students to see connections among case details, and promotes a deeper understanding of the case content (Choi and Lee 2009). As such, structuring the discussion in a way that addresses both course and case goals is critical.

Second, facilitated discussions appear to promote student learning of case content more readily than non-facilitated discussions. In this study, this was evident in the differences in problem-space coverage among the different sections. That is to say, students in the facilitated discussions covered more of the potential problem space at a more detailed level than those in the non-facilitated discussions. While the opening prompts were helpful in initiating the discussion across all sections, and appeared effective in focusing students’ initial efforts (Ertmer and Stepich 2002), they were not sufficient to guide the entire discussion, especially toward the end of the week when students were tasked with crafting a reasonable solution. Students in the non-facilitated sections appeared to be stuck on project constraints and unable to find a productive, agreed-upon solution. In contrast, students in the facilitated sections, in response to the instructor’s additional prompts, worked together to find connections among ideas, which served to keep the discussion focused and moving toward a workable solution (Andersen and Schiano 2014).

Finally, the facilitator plays an important role in directing coverage of case content, as noted by Chng et al. (2011) and Andersen and Schiano (2014), and supported by the results of this study. Facilitators must look for ways to extend students’ thinking (Rangan 1996), which requires someone who is knowledgeable of the case objectives, skilled at prompting students to consider case aspects more deeply (Hmelo-Silver 2013), and able to identify ways to transition students into relevant and productive areas. Furthermore, facilitators can, and should, provide opportunities for students to index the key take-aways from the case discussion, as recommended by Kolodner and Guzdial (2000). As evidenced by the results from this study, this indexing activity enabled students to revisit nearly all of the targeted problem space and to home in on some of the key aspects of a designer’s role in solving a specific training need. Adding a final activity, such as the Lessons Learned wiki, to a case discussion appears to be a relatively easy, but particularly beneficial, practice to incorporate into the case method.

Although CBI has great potential to engage our students in problems of practice, we cannot expect cases to “teach themselves” (Alder et al. 2004; Levin 1995). Rather, the instructor plays a key role both in creating opportunities for productive discourse (Hmelo-Silver and Barrows 2006) and in prompting students to address the afforded problem space at a level that goes beyond superficial coverage (Jones 2006). As noted by Chng et al. (2011), facilitators/tutors must closely follow the discussion generated by the students and understand when and how to best contribute. Typically, this means questioning, probing, suggesting, and challenging ideas that are raised during discussion. The results of this study suggest that if our ID students are to truly reap the benefits of a case approach, instructors should give careful consideration to how they initiate, maintain, and close each case discussion. As noted by Rangan (1996): “A good case (discussion) is one where, with the instructor’s help, students learn and build, and learn how to learn” (p. 6). Furthermore, given the importance of the instructor’s role in facilitating students’ learning from CBI, future research is needed to help us understand how best to prepare and support facilitators so that their students gain maximum benefit from this approach.