Introduction

In student-centered learning environments, students are expected to co-construct knowledge by engaging in productive disciplinary discourse with their instructors and peers. As such, class discussions, whether online or face-to-face, afford a key strategy for achieving this goal. A number of researchers have examined the role of discussions in general, and facilitation strategies more specifically, in student learning in face-to-face and asynchronous learning environments (Heckman and Annabi 2006; Clarke and Bartholomew 2014). However, there is little research that explores how discussion strategies and the resulting learning outcomes compare across these settings.

This study examined how two experienced instructors facilitated the same case study discussion in face-to-face and online environments. Our primary research goal was to compare the specific strategies used by the co-instructors to meet the same discussion goals in the two contexts. Additionally, we examined how well the students in these two contexts addressed the targeted content. Our overarching research question was: How do facilitation strategies and problem space (i.e., targeted content) coverage compare across face-to-face and online discussions?

We begin this paper by exploring the critical role of discussions in student-centered learning environments and then consider the importance of facilitation strategies to the success of a discussion approach. We describe how our research design enabled us to examine similarities and differences among facilitation strategies applied in the two contexts as well as to compare learning outcomes, as measured by problem-space coverage, across contexts. Both quantitative and qualitative results are presented that have general implications for the use of discussions in both face-to-face and online settings as well as specific implications for the application of relevant facilitation strategies in each context.

Literature review

Social constructivism comprises the foundation for many of the student-centered teaching approaches in vogue today including experiential learning, problem-based learning, and case-based instruction (CBI), to name a few (Ertmer and Newby 2016). CBI, as a specific form of student-centered instruction, is defined as instruction that is “anchored in an authentic problem that is relevant to the learner …” (Jonassen 2011, pp. 150–151). In general, CBI presents a realistic problem situation that students analyze and resolve through reflection and discussion (Ertmer and Stepich 2002). Key to this approach, as well as any problem-centered approach, is the expectation that students will co-construct knowledge through interactions with their teachers and peers while collaboratively engaged in authentic disciplinary work (Dennis et al. 2008).

Regardless of whether a student-centered approach is implemented in a face-to-face or online environment, class discussions afford a key strategy for immersing students in a learning community that is engaged in productive disciplinary discourse (Clarke and Bartholomew 2014; Engle and Conant 2002). By exchanging ideas and considering others’ perspectives, especially contradictory views, learners are motivated to reflect on their existing ideas, thus enabling new understandings to emerge (Koschmann et al. 1996). This assumption has been supported by findings from numerous research studies. For example, Levin (1995) demonstrated how case discussions helped less experienced teachers clarify and elaborate on their ideas about the issues in a case study. Yew and Schmidt (2012) confirmed, using a path analysis model, that collaborative learning was more predictive of achievement on a concept recognition test in a problem-based learning context than students’ self-directed study.

Current reform efforts, aimed at improving both math (National Council of Teachers of Mathematics 2000) and science education (Lee and Kinzie 2012; National Research Council 2012), emphasize the importance of productive disciplinary discourse to the development of students’ higher-order reasoning and understanding. Yet, research examining current K-12 classroom practices suggests that this type of discourse is the exception rather than the norm (McNeill and Pimentel 2010; Watters and Diezmann 2016). Similarly, in a face-to-face professional development context, Zhang et al. (2010) lamented that effective discussion rarely occurs spontaneously in problem-centered group work. In the online context, McLoughlin and Mynard (2009) noted that simply giving post-secondary students the opportunity to discuss course content in an online forum does not automatically lead to higher levels of thinking.

What seems important, across these different contexts, is the manner in which the discussion is structured and moderated (Ertmer and Koehler 2014, 2015; Ellis et al. 2004), the questions the facilitator asks (Salinitri et al. 2015; Wang and Chen 2008; Zhang et al. 2010), as well as the responses she provides to students’ questions and/or comments (Lachner et al. 2016; Lee and Kinzie 2012; McNeill and Pimentel 2010). As noted by Zhu (2006), although class discussion has the potential to significantly benefit student learning, it “needs to be nurtured carefully in accordance with course goals and learning objectives” (p. 475). It is unrealistic to assume that all interactions in a discussion will be productive as even small differences in course design and facilitation approaches have been observed to impact the subsequent discourse (Zhu 2006).

Specifically, in a case-based context, discussion is viewed as the primary vehicle for student learning (Ertmer and Koehler 2015; Levin 1995; Yew and Schmidt 2012), typically taking the place of lectures or other teacher-directed strategies. As such, a well-designed case discussion can (and should) facilitate students’ coverage of the targeted content, or afforded problem space (Ertmer and Koehler 2014, 2015; Hmelo-Silver 2013). As noted by Ertmer and Koehler (2014), a case discussion can stimulate students’ critical thinking skills by engaging them in productive discourse related to both case and course content.

However, instructors and/or facilitators play a key role in prompting students to address the problem space afforded by each case study (Balaji and Chakrabarti 2010; Ertmer and Koehler 2015). (Note: given that the course instructors served as the discussion facilitators in both settings, we use the words “instructor” and “facilitator” interchangeably.) “Problem space” refers to the features, knowledge, and goals needed to solve a given problem (Teasley and Roschelle 1993), and, in this study, provided a means to quantify what was covered and/or learned during a case discussion. Further, in CBI, the problem is typically presented via a case study narrative, defined as an accurate representation of the “complexity and ambiguity of real-life problems,” as experienced by practitioners (Stepich et al. 2001, p. 54).

According to Arend (2009), the facilitator’s main impact on a discussion relates to the general facilitation strategies he/she uses to promote critical thinking: that is, the quality of responses is more important than their quantity. Other research findings support this conclusion (Stepich et al. 2001; Darabi et al. 2011; Scott 2014; Zhang et al. 2010). Regardless of context, it is generally agreed that the facilitator’s role comprises finding a balance between stimulating students’ deep engagement with the content and creating a learning environment that encourages an open exchange of ideas (Stepich et al. 2001). Schmidt and Moust (2000) refer to these key facilitation characteristics/strategies as social congruence (e.g., interacting with students in a personal and authentic way), cognitive congruence (e.g., using language and terms that are easily understood by the students), and content expertise (e.g., possessing an appropriate level of relevant content knowledge). These categories are similar to the three “presences” proposed by the community of inquiry (COI) framework: social presence, teaching presence, and cognitive presence, respectively (Garrison et al. 2001). Although researchers have examined the relative importance and/or impact of each of these characteristics on students’ learning experiences (Chng et al. 2011; Yew and Yong 2014), the general consensus is that each strategy/presence is important to effective facilitation as well as to subsequent student learning (Kang et al. 2009; Nandi et al. 2012; Richardson et al. 2015; Schmidt and Moust 2000). Furthermore, based on their analysis of five instructors who facilitated asynchronous discussions in different sections of the same course, Clarke and Bartholomew (2014) found that students generally favored instructors who balanced their interactions across the three types of presence.

An extensive research base has developed regarding the facilitation of discussions in problem-centered contexts such as case- and problem-based learning environments in higher education (Heckman and Annabi 2006; Hmelo-Silver 2013; Zhang et al. 2010), as well as in K-12 inquiry-based STEM classrooms (Chin 2007; Engle and Conant 2002; McNeill and Pimentel 2010). Similarly, a number of researchers have examined the role of discussions in general, and facilitation strategies more specifically, in student learning in asynchronous learning environments (Bernard et al. 2009; Clarke and Bartholomew 2014; Kanuka 2011; Wang and Chen 2008). However, there is little research that “explicitly and rigorously explores the similarities and differences between the learning processes that occur in ALNs (asynchronous learning networks) and FTF (face-to-face) activities” (Heckman and Annabi 2005, p. 1), especially during problem-centered discussions. Further, little is known regarding whether, and/or how, instructors modify their expectations and facilitation strategies across these two contexts (Redmond et al. 2014).

Although some researchers (e.g., Darabi et al. 2011) have cautioned that the cognitive and social benefits of face-to-face discussions may not transfer to the online environment, others (e.g., Heckman and Annabi 2006; Slagter van Tryon and Bishop 2009) have suggested that the strategies used to facilitate discussions in the face-to-face environment must simply be modified in order to take advantage of the unique affordances of the online environment. That is, although the strategies used to meet course or discussion goals may differ across contexts, it is still possible to accomplish the same goals via a discussion approach. This is not to suggest that the process of transferring an effective face-to-face discussion into an online environment is a simple or straightforward one (Redmond et al. 2014), only that similar outcomes can be achieved by being aware of, and capitalizing on, the unique affordances of each context.

Research purpose and questions

This study was designed to examine how co-instructors facilitated the same case study discussion in face-to-face and online environments. Our primary research goal was to examine the similarities and differences among the specific strategies used by the co-instructors to meet the same discussion goals in the two contexts. Additionally, we wanted to examine how well the students in these two contexts addressed the targeted problem space. Our overarching research question was: How do facilitation strategies and problem space coverage compare across face-to-face and online discussions? Sub-questions included:

  • How do students’ and instructors’ participation and discourse patterns compare across face-to-face and online environments?

  • How do instructors’ discussion facilitation strategies compare across face-to-face and online environments?

  • Given differences in context and facilitation strategies, how does problem space coverage compare across settings?

Methods

Research design

To compare facilitation strategies across face-to-face and online settings, we used an exploratory, descriptive case study research design. In general, case study designs are useful when examining unique phenomena or situations for which researchers could benefit from greater understanding (Yin 2014). In this study, the phenomenon/case consisted of a case-based discussion, centered on the same case narrative and facilitated by the same co-instructors, but occurring in different contexts.

To compare similarities and differences across the two contexts, we conducted a content analysis of instructor and student discussion posts from the two discussions in order to analyze, code, and transform qualitative data into quantitative data (e.g., counts, averages). These data, then, enabled us to describe different patterns of facilitation across the two contexts and to examine differences, if any, in problem-space coverage (Darabi et al. 2011). Because it is difficult to determine the effect of various facilitation strategies on learning (Lundeberg and Yadav 2006; Saleewong et al. 2012; Yew and Yong 2014), measuring problem-space coverage provides a valuable means for quantifying learning during a case discussion (Hmelo-Silver 2013). In this research, the afforded problem space was determined by identifying the instructional design (ID) content targeted by the case study under discussion. We then analyzed students’ discussion posts/comments to determine how much of the problem space was addressed and to examine how the instructors participated in, and facilitated, the problem-solving process.

Description of the research setting

The data for this study were collected from the discussions that took place as part of an advanced ID graduate course, Advanced Practices in Learning Systems Design, a required course in the M.S. and Ph.D. programs in learning design and technology (LDT) at a large Midwestern university. The course utilizes a case-based instructional approach in order to prompt learners to apply existing ID knowledge to solve real-world professional problems and is offered in both a face-to-face and an online format. Although the goals, objectives, and instructional activities were similar for both formats of the course, differences in course structure existed. For example, the face-to-face course lasted 16 weeks, met once a week for 2 h and 50 min, and included ten case discussions (six instructor facilitated, four student facilitated). In the online format, the course lasted 8 weeks, was housed within the Blackboard learning management system, and included six case discussions (three instructor facilitated, three student facilitated). While both Ph.D. and M.S. students could enroll in either course format, the online offering was specifically created to serve students enrolled in the online LDT M.S. program. Due to the size of the online program, several sections of the course were offered each fall semester to accommodate program needs.

In both formats of the course, students were expected to participate in weekly class discussions, focused on the assigned case study for that week. Prior to each discussion, students completed individual case analyses in which they (1) outlined stakeholders’ roles and perspectives, (2) articulated key design and non-design challenges, and (3) recommended plausible solutions. By focusing students’ attention on key case elements and requiring completion of individual case analyses prior to the discussion, it was expected that students would be more prepared to discuss the case, increasing the likelihood of achieving deep understanding of case concepts (Ertmer et al. 2009; Tawfik and Jonassen 2013). In the online format, the discussions were asynchronous and completed in a discussion-board forum.

For this research, we analyzed one online (fall 2012) and one face-to-face (fall 2013) discussion of the Craig Gregersen case (Dundis 2014). In both courses, this case was the third instructor-facilitated case study. Briefly stated, in the Craig Gregersen case, an outside ID consultant is hired to develop a training solution that addresses the conflicting needs of multiple stakeholder groups (engineers, lawyers, and the training department). Through the case discussion, students were prompted to consider organizational factors impacting ID decisions and to articulate solutions that worked within the specific case constraints (e.g., limited development time, diverse audiences, pre-determined 1-day training format).

Participants

Participants included 26 graduate students enrolled in two sections of an advanced ID course. During fall 2012, 16 students were enrolled in one section of the online course (10 males, 6 females; 15 M.S. students, 1 Ph.D. student), and during fall 2013, 10 students were enrolled in the face-to-face course (6 males, 4 females; 4 M.S. students, 6 Ph.D. students). Among the face-to-face students, three individuals had several years of professional experience in ID or K-12 education, while three students had previously worked or were working in other industries (e.g., food science, construction). Four individuals had limited professional experience. In the online course, all 16 students had prior experience working in education-related settings (e.g., training manager, educational software consultant, K-12 technology specialist). Despite differences in student demographics, the instructors maintained the same discussion goals for each group. Rather than posing a threat to validity, these differences provided a deeper understanding of how instructors modify their discussion strategies to enable different types of student populations to successfully analyze the case study under consideration. That is, as an exploratory, descriptive study, this research was specifically designed to examine how instructors modify their facilitation strategies, given differences in context, population, and other uncontrolled variables, in order to meet the same discussion/course goals.

Both the online and face-to-face sections of the course were co-instructed by the same faculty member and advanced graduate student. Over the last several years, the faculty member had taught the course in both formats, while the graduate student had previously completed the face-to-face course. Because the goal of this research was not to determine how each individual instructor influenced the learning process, but rather to examine the case facilitation process as a whole, the facilitation strategies of both instructors were combined and coded together.

Data collection

We analyzed and then compared students’ and instructors’ contributions to the two case discussions. Online discussion postings were collected from the course learning management site, while the face-to-face discussion was video recorded and transcribed verbatim. Although every case discussion during the semester was video-recorded in its entirety (with the exception of small group work), only the recording related to the Craig Gregersen case study was transcribed. In addition, as part of that transcription, we noted the time at which the discussion focus changed from problem finding to problem solving. Further, as part of the problem-solving discussion in the online section, students posted their key takeaways in a “Lessons Learned” wiki at the end of the discussion. In the face-to-face section, a similar strategy was used to conclude the discussion—students were asked to verbally share their individual lessons learned. As such, lessons-learned responses, in both formats, were included as part of the discussion dataset.

Data analysis

To compare student–instructor interaction patterns across course formats, several quantitative measures were calculated, as recommended by Heckman and Annabi (2006): (1) number of words spoken/written by students and instructors, (2) the number of turns completed by students and instructors, (3) average and range of words per turn for students and instructors, (4) average and range of words per student, and (5) average and range of turns per student. Additionally, both student and instructor discussion posts/comments were qualitatively coded, tallied, and analyzed for patterns, providing a deeper comparison across formats. More details of the coding process are provided next.
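To illustrate how such measures might be computed, the sketch below derives them from a simplified transcript representation. The `turns` structure, role labels, and function names are our own illustration of the arithmetic involved, not the authors' actual tooling.

```python
from statistics import mean
from collections import defaultdict

# Illustrative transcript: one record per turn (face-to-face) or post (online).
turns = [
    {"speaker": "S1", "role": "student", "text": "I think the engineers want more technical depth."},
    {"speaker": "INST", "role": "instructor", "text": "Okay. What would legal say about that?"},
    {"speaker": "S2", "role": "student", "text": "Legal mainly needs the compliance content covered."},
]

def role_metrics(turns, role):
    """Total words, total turns, and average/range of words per turn for one role."""
    counts = [len(t["text"].split()) for t in turns if t["role"] == role]
    return {"total_words": sum(counts),
            "total_turns": len(counts),
            "avg_words_per_turn": round(mean(counts), 1),
            "range_words_per_turn": (min(counts), max(counts))}

def per_student_metrics(turns):
    """Average and range of words and turns per individual student."""
    words = defaultdict(int)
    n_turns = defaultdict(int)
    for t in turns:
        if t["role"] == "student":
            words[t["speaker"]] += len(t["text"].split())
            n_turns[t["speaker"]] += 1
    return {"avg_words_per_student": round(mean(words.values()), 1),
            "range_words_per_student": (min(words.values()), max(words.values())),
            "avg_turns_per_student": round(mean(n_turns.values()), 1),
            "range_turns_per_student": (min(n_turns.values()), max(n_turns.values()))}

print(role_metrics(turns, "student"), per_student_metrics(turns))
```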

Instructor facilitation

To identify and compare instructors’ facilitation and participation patterns across the two discussions, we coded both deductively and inductively. First, using coding categories established by previous researchers who had analyzed facilitation strategies implemented in online and face-to-face discussions (Heckman and Annabi 2005, 2006; Hmelo-Silver and Barrows 2006; Richardson et al. 2015; Watson et al. 2017; Yew and Yong 2014), a tentative coding schema was outlined. Next, each researcher used the tentative schema on a sample of instructor contributions. Following this initial round of coding, the researchers discussed their analyses, and revised the coding schema by deleting, combining, and adding codes in order to accurately capture the strategies used by the instructors while facilitating the discussion. For example, initially, three codes were identified to capture the instructors’ efforts to stimulate participation (e.g., invites participation, encourages student participation, and draws students in). After preliminary coding was completed, these were combined into a single category, “invites participation.” In addition, new codes were added to the schema following the first round of coding. For example, the instructors would often express agreement with students’ ideas, a technique that went beyond just acknowledging what was said. To capture this strategy, “agrees” was added to the coding schema.

Subsequently, codes were grouped into broader categories. Two main categories emerged that appeared to capture the instructors’ actions as they facilitated the case discussion: (1) setting the climate for learning (e.g., being personable, establishing a positive environment for learning) to create social cohesion and (2) using expertise (e.g., providing feedback, directing student attention, making connections) to facilitate students’ understanding. For each of the main categories, subcategories and codes were identified. For example, under the first main category, “setting the climate for learning,” the specific facilitation strategies were coded as either (1) a form of acknowledgement (e.g., restating or revoicing student ideas or using students’ names) or (2) a means to be personable and/or create a sense of community (e.g., inviting participation, using humor/emotion, agreeing, offering approval or encouragement). The second main category, “using expertise,” included three subcategories: tempering expertise (e.g., using communication techniques to share expertise in a nonthreatening way), sense making (e.g., providing formative feedback, clarifying ideas, making connections among case concepts), and questioning (e.g., asking for clarification, encouraging articulation of a solution, prompting learners to consider their ideas further).
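For illustration only, the two-level structure of the schema could be represented as a nested mapping like the one below. The labels are abbreviated from the categories described above; the authoritative code list appears in Appendix 1.

```python
# Illustrative (abbreviated) rendering of the two-level coding schema;
# the authoritative version appears in Appendix 1.
FACILITATION_SCHEMA = {
    "sets_climate_for_learning": {
        "acknowledgement": ["restates/revoices student ideas", "uses student names"],
        "personable_and_community": ["invites participation", "uses humor/emotion",
                                     "agrees", "offers approval or encouragement"],
    },
    "uses_expertise": {
        "tempering_expertise": ["shares expertise in a nonthreatening way"],
        "sense_making": ["provides formative feedback", "clarifies ideas",
                         "makes connections among case concepts"],
        "questioning": ["asks for clarification", "encourages articulation of a solution",
                        "prompts further consideration of ideas"],
    },
}
```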

After additional coding and extensive dialog, the coding schema was finalized and used to code every instructor comment in both sections of the course (see Appendix 1). Typically, multiple facilitation strategies were used in a single posting or turn. Figure 1 shows an example of a coded turn during the face-to-face discussion. In total, there were 224 coded segments in the online section and 816 in the face-to-face section.

Fig. 1 Coded instructor comment

After coding was complete, frequency counts were calculated for each category and sub-category of the coding schema, and patterns within the codes were noted both within each course and across course formats. Additionally, a percentage was calculated to represent the proportion of a single code to the total number of coded segments in that context. Finally, comparisons of percentages were made between the facilitation strategies used in the online and face-to-face sections (Saldana 2009). Unfortunately, because the total number of observations was not fixed, we could not conduct a chi-square analysis to determine whether the differences in frequencies/percentages were statistically significant.
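As a minimal sketch of this tallying step (the code labels and counts below are hypothetical), each code's frequency can be converted to its share of all coded segments in a given context:

```python
from collections import Counter

# Hypothetical stream of code labels, one per coded segment in one context.
coded_segments = ["clarifies", "directs_attention", "agrees",
                  "clarifies", "invites_participation", "restates"]

def code_percentages(coded_segments):
    """Frequency of each code as a percentage of all coded segments."""
    counts = Counter(coded_segments)
    total = sum(counts.values())
    return {code: round(100 * n / total, 1) for code, n in counts.most_common()}

# Percentages computed separately for each context (e.g., 816 face-to-face
# vs. 224 online segments) can then be compared side by side.
print(code_percentages(coded_segments))
```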

Student discussion participation

To identify the content students addressed in the case discussion, participation was coded in terms of those aspects of the afforded problem space that were covered. The afforded problem space was determined by identifying the concepts, ideas, and knowledge necessary to solve the Craig Gregersen case problem. The afforded problem space was further divided into two main categories, which directly relate to the two main functions of the ID problem-solving process: problem finding (i.e., articulating a solid understanding of the problem) and problem solving (i.e., proposing solutions that address problem issues) (Ertmer and Stepich 2005). For each of these areas, key concepts were articulated that comprised the specific problem-finding and problem-solving space of the Craig Gregersen case. A detailed explanation of the case mapping process is available in previous research (see Ertmer and Koehler 2014). Tables 1, 2, and 3 provide examples of the mapped problem space for the Craig Gregersen case study.

Table 1 Mapping the problem space afforded by an ID case study (Craig Gregersen Case from The ID CaseBook, 2014)
Table 2 Sub-categories related to problem finding: identifies relevant non-design challenges (project constraints)
Table 3 Sub-categories related to problem solving: solutions address design and non-design challenges

Using the case map deductively, students’ comments in each case discussion were coded at the idea level. That is, for each turn or post, students’ ideas were coded according to the categories and/or subcategories of the problem space addressed. After all student posts/comments were coded, frequency counts were calculated for each category and sub-category. In addition, a word count was tallied for each coded segment. Given the different numbers of students enrolled in each section (10 face-to-face vs. 16 online), an average word count/student was also calculated for each subcategory of the problem space. Patterns within each course and across courses were noted, and similarities and differences were identified (Saldana 2009). Finally, for the online section, students’ postings in the “Lessons Learned” wiki were coded using the same method and added to the previously calculated frequencies. As noted earlier, multiple ideas were typically represented in a single posting or turn. In the online course, there were 288 coded segments; in the face-to-face course, 407 segments were coded. Figure 2 shows a coded example of a face-to-face student turn.

Fig. 2 Coded student comment
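To make the coverage tally concrete, a minimal sketch follows, assuming each coded segment records the problem-space subcategory addressed and its word count (field names and values are illustrative, not the study's actual data):

```python
from collections import defaultdict

# Illustrative coded student segments from one discussion.
segments = [
    {"subcategory": "design_challenges", "words": 42},
    {"subcategory": "stakeholder_roles", "words": 18},
    {"subcategory": "design_challenges", "words": 27},
]

def problem_space_coverage(segments, n_students):
    """Frequency, total words, and average words/student per subcategory."""
    freq = defaultdict(int)
    words = defaultdict(int)
    for seg in segments:
        freq[seg["subcategory"]] += 1
        words[seg["subcategory"]] += seg["words"]
    return {sub: {"frequency": freq[sub],
                  "total_words": words[sub],
                  "avg_words_per_student": round(words[sub] / n_students, 1)}
            for sub in freq}

# Normalizing by enrollment (10 face-to-face vs. 16 online) allows
# comparison across sections of different sizes.
print(problem_space_coverage(segments, n_students=10))
```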

Reliability and validity

Several steps were taken to ensure the research methods were valid and reliable. First, when developing the coding schema, previous research was used to guide this process (Creswell 2009; Creswell and Plano Clark 2011) and records of the evolving schema were saved and revisited to prompt thoughtfulness in the schema’s creation (Miles et al. 2014). Additionally, to establish triangulation in the coding process, the two authors/researchers independently coded all discussion comments and compared results (Creswell 2014). Although there were initial differences in researchers’ interpretations, these coding differences were resolved through extensive dialog, as recommended in the literature (Armstrong et al. 1997).

Results and interpretation

Instructor–student participation and interaction patterns

Results indicate that participation and interaction patterns in the face-to-face and online discussions were noticeably different, with both the instructors and students taking fewer turns, but averaging more words/turn, in the online environment than in the face-to-face context (see Table 4). More specifically, the instructors posted 31 times in the online discussion, compared to speaking 342 times in the face-to-face discussion. However, the average number of words/turn was 73 in the online environment versus 22 in the face-to-face discussion, suggesting a more efficient posting pattern in the online environment (i.e., making each post more substantive). Further examination of the range of words/turn supports this interpretation, with the shortest turn having just one word in the face-to-face discussion, but 15 words in the online context. In fact, there were 26 turns with only one word in the face-to-face context (e.g., “Sure,” “Okay,” “Absolutely,” “What?”), with an additional 170 turns having 2–14 total words, a response length not used by the instructors in the online context at all. Given the general recommendation in the literature that instructors not dominate online discussions (An et al. 2009; Arend 2009; Clarke and Bartholomew 2014), it appears as though relatively short comments were eliminated or perhaps combined into longer posts that accomplished multiple goals.

Table 4 Instructor and student participation patterns in face-to-face and online discussions

In the online discussion, students posted a total of 182 times (average/student = 11; range = 15 [from 6 to 21]); in the face-to-face discussion, students took a total of 511 turns (average/student = 51; range = 97 [from 6 to 103]). As was true for the instructors, students’ posts were longer in the online forum than in the face-to-face discussion, averaging 92 words/turn as opposed to an average of 19 words/turn in the face-to-face discussion. The shortest posts in the online discussion consisted of 3 words (which occurred only 2 times), compared to 99 posts of 3 words or less in the face-to-face discussion. Ainley and Armatas (2006) speculated that the electronic environment elicits more informative comments from students than a face-to-face discussion, which may feel more “test-like” to students. In our study, a more likely interpretation is that students, like the instructors, were inclined to make their posts more substantial in the online setting, especially given that syllabus guidelines discouraged a large number of short posts (e.g., “Class participation points will be based more on quality than quantity;” emphasis in original).

Another possible interpretation for the differences in length of response across contexts may relate to the amount of time students had to reflect on peers’ comments before adding their own ideas to the conversation. As noted by previous researchers (Hrastinski 2008; Paloff and Pratt 1999), online students have more time to compose and revise their messages before posting in a discussion forum. In contrast, students in a face-to-face discussion tend to have to jump in quickly if they wish to respond to a peer’s specific comment. As such, face-to-face students do not tend to make lengthy, polished comments but rather tend to add spontaneous ideas as the discussion progresses (Kock 2005). According to Kock, an exchange of 600 words would span about six minutes in a face-to-face context, compared to about an hour when the same number of words is exchanged by email. This estimated difference illustrates how the more rapid pace of a face-to-face discussion might lead to relatively quicker exchanges among students and instructors than would occur in an asynchronous context.

The total number of words written or spoken by the instructors was considerably smaller in the online discussion (2267) than in the face-to-face discussion (7630). However, this pattern did not hold for the student data; the total number of words used by the students was greater in the online environment (16,765) than face-to-face (9893). This difference may be related to the fact that there were 16 students in the online course and only 10 in the face-to-face course. In fact, the average number of words/student was fairly similar across contexts—1048 online and 989 face-to-face (see Table 4). However, the range in average number of words/student was greater in the online discussion (2044) than face-to-face (1496), suggesting slightly more equal participation among students in the face-to-face discussion.

In general, students contributed more of the discourse in the online discussion than in the face-to-face discussion. More specifically, students contributed 88% of the total words and 85% of the total posts in the online environment, compared to 56.5% of the total words and 60% of the total turns in the face-to-face discussion. Despite these differences, both classrooms would be considered student-centered based on criteria proposed by McNeill and Pimentel (2010), who noted that “active” student classrooms are those in which students contribute 60% of the turns. Still, the smaller percentage of contributions made by students in the face-to-face environment suggests more involvement of the instructor in that context. These findings are supported by the results of Heckman and Annabi (2006), who compared instructors’ participation in face-to-face and online case discussions. Similar to the results observed in our study, Heckman and Annabi reported, “the presence of the teacher was more pervasive in the FTF discussions” and that “students carried a much greater share of the discourse [in online discussions]” (p. 144). Again, this might be related to the greater number of students in the online course; additional research is needed to determine how these percentages change when there are more, or fewer, students participating in the discussion.

Similarities and differences in facilitation strategies

To examine similarities and differences in facilitation strategies used by the instructors in the two settings, we coded every instructor comment in each setting, using the coding scheme described earlier (see Table 5). After frequencies were calculated for each code, we converted these to percentages, based on the total number of segments coded in each setting—816 in the face-to-face discussion, 224 in the online discussion. Given the differences in the number of instructor turns/posts in each setting, comparing percentages of total strategy use provided a more meaningful comparison. Table 6 presents the frequencies and percentages of occurrence of each code in the two settings.

Table 5 Facilitation codes
Table 6 Comparison of facilitation strategies used in face-to-face and online discussions

Table 7 presents the top 11 strategies used by the instructors in each context, in order of frequency. (Note: due to multiple ties, we chose to include the top 11, as opposed to the top 10.) Of the top strategies used in the different contexts, only three appear on both lists: (1) refers to students as a collective group (“you”), (2) clarifies, and (3) directs student attention. The first strategy falls within the “Sets Climate for Learning” category and occurred in 7% of the coded segments in the face-to-face discussion and in 5% of the coded segments in the online discussion (see Table 7). The other two strategies common to both contexts fell under the “Uses Expertise” category, relating specifically to sense making. “Clarifies” was observed in approximately 4.5% of the coded segments in the face-to-face discussion and in 5% of the coded segments in the online discussion.

Table 7 Most frequent facilitation strategies used in each context

“Directs Student Attention” also occurred in approximately 4.5% of the coded segments in the face-to-face discussion but was the most commonly observed strategy in the online discussion, accounting for 8.5% of the coded segments. This difference is not surprising given the challenge of keeping online students focused on the topic under consideration. Hmelo-Silver and Barrows (2006) noted that an online facilitator must emphasize specific strategies that keep all members engaged in the discussion, especially given the lack of verbal cues that can be used in a face-to-face context (e.g., a raised eyebrow, an encouraging nod, etc.). Drawing students’ attention to an important issue, perspective, or interesting new idea is one way to keep students focused. For example, in the online discussion, the instructors in this study typically responded to students by quoting segments of a previous comment as a way to draw everyone’s attention to an idea that warranted further consideration. This strategy is unlikely to be needed in a face-to-face discussion, as it is relatively easy to determine if everyone has heard the previous comments (especially with a relatively small class). Unfortunately, in the online environment, it is impossible to guarantee that every post is actually read (Ertmer and Koehler 2014), thus suggesting the need for additional “focusing” strategies, such as those used by the instructors in this study to direct students’ attention.

The strategy used most often in the face-to-face context was that of “Recognizes/Replies,” occurring in 8% of the coded segments. This finding relates to the number of short responses made by the instructors in this setting, as discussed earlier. Most of these short utterances comprised a simple acknowledgement of students’ contributions (“Sure,” “Okay,” “Alright”), without specifically agreeing or disagreeing with their comments. Again, these types of short comments were not observed in the online environment. This aligns with recommendations in the literature (Arend 2009), which emphasize the importance of purpose over frequency of instructors’ posts, noting that when instructors post with more intention (e.g., responding to the content of the discussion; questioning, clarifying, or extending students’ comments; Lewandowski et al. 2016), students report engaging in higher levels of critical thinking (Nandi et al. 2012).

Frequent acknowledgement responses in a face-to-face discussion, especially those that are non-evaluative, would not be expected to negatively impact student participation (Arend 2009). As noted in the literature (Estepp et al. 2013), these types of behaviors, collectively referred to as teacher immediacy behaviors (e.g., smiling; gesturing while teaching; calling students by names; praising students’ work, actions, or comments; and moving around the room while teaching), comprise effective teaching practices (Bailie 2012). In this study, acknowledgement comments seemed to encourage continued discussion, as they signaled that the instructors were listening closely to the students’ ideas.

Focus on context: establishing a positive climate for learning

In general, social-oriented facilitation strategies (i.e., sets climate for learning) comprised six of the top 11 strategies in both discussions. Across all coded segments, social strategies comprised 51% of the total segments in the face-to-face discussion and 49% in the online discussion (see Table 6). Thus, although only one of these strategies (i.e., referring to students collectively) appears as a top strategy in both environments, the instructors used a fairly equal percentage of climate-setting strategies, although they emphasized different strategies in the different contexts. For example, while the instructors verbalized more acknowledgement and agreement in the face-to-face discussion (due to the rapid interchanges that readily occur in that context), they restated students’ ideas more frequently in the online environment (due to the ease with which one can quote, or copy and paste, students’ comments as a prelude to making a response). In general, these acknowledgement (face-to-face) and restating (online) strategies appeared to serve the same overarching purpose—to establish and maintain a positive environment for learning—while taking advantage of the affordances of the particular environment in which they were used. That is, as noted earlier, both strategies comprise effective teacher immediacy behaviors (Estepp et al. 2013), but with one being relatively easier to accomplish than the other in a specific environment. As such, by simply modifying the specific strategies used in each setting, the instructors were able to accomplish one of their central goals—establishing a positive climate for learning—in that specific context.

Focus on content: using expertise to move the discussion forward

The instructors also used a fairly equal percentage of expertise strategies in the two environments: 49% of total coded segments in the face-to-face discussion and 51% in the online discussion. However, as was observed in the instructors’ use of climate-setting strategies, differences were noted across settings in terms of which expertise strategies were used most frequently. In the face-to-face context, the most frequently used expertise strategies included extending students’ ideas (5% of coded segments); providing clarifying information, directing student attention, and injecting knowledge (each accounting for approximately 4.5% of coded segments); and using the reflective toss questioning strategy (4%; Hmelo-Silver et al. 2002; Zhang et al. 2010). In the online environment, the most frequently used expertise strategies included directing student attention (8.5%), linking constraints and solutions (5.4%), providing clarifying information (5%), confirming understanding (4%), and tempering expertise (4%). In the next few paragraphs, we discuss some of the more notable differences (> 3%) in expertise strategy use across contexts: (1) links constraints and solutions, (2) tempers expertise, and (3) injects knowledge. (Note: directing student attention was discussed earlier.)

Differences in the use of the expertise strategy, “links constraints and solutions” (with greater use online), may relate to the instructors’ relative uncertainty that students in the online discussions were making explicit links between these aspects of the case. As such, the instructors made a concerted effort to continually ask students to connect constraints and solutions (e.g., “How does your solution address the various stakeholders’ needs described in the case?”). Although strategy frequency was nearly identical across settings (n = 14 and 12 in the face-to-face and online environments, respectively), given the greater number of coded segments in the face-to-face discussion, the relative frequency was considerably less than that recorded in the online environment. Still, in the face-to-face environment, the instructors could be more certain that every student actually attended to each instance the strategy was invoked.

In both course formats, students were asked to devise a solution that would make key stakeholders happy, which, at least indirectly, prompted them to link constraints and solutions. However, this was done early in the face-to-face discussion as opposed to mid-week in the online discussion. We conjecture that online students may have experienced some difficulty switching perspectives mid-week, as they had just spent the first part of the week viewing the case from a single stakeholder’s perspective. Additionally, in the online discussion, early conversations/threads tended to bleed over into the timeframe allotted for mid- and end-of-week discussions. Given the asynchronous nature of the online discussion, it is difficult to turn everyone’s attention, simultaneously, to a new prompt (Mazzolini and Maddison 2007). As such, the instructors may have felt the need to help students switch their attention by asking follow-up questions that explicitly required them to explain the links between new proposed solutions and existing constraints.

“Tempering expertise” refers to the tendency of the instructor to “tone down” disagreements or to couch them in such a way that students do not feel attacked. Given the lack of verbal cues in the online environment, this strategy was used, in part, to maintain a positive learning climate and to keep students engaged in an open exchange of ideas. As noted by Clarke and Bartholomew (2014), “When we put things in writing, we have to be more aware of the tone as we don’t want to put someone off right away. In a face to face discussion, we can rely on things like body language and eye contact to make our message come off in a certain way” (p. 18). In this study, 4% of the coded segments in the online environment were used for this purpose, whereas in the face-to-face environment, tempering expertise accounted for less than 1% of the coded segments.

Finally, “Injects knowledge” was observed more frequently in the face-to-face than the online discussion (37 vs. 2 times). In the online environment, the instructors tended to “stick to the script” and to refrain from sharing what they personally knew about the case. However, in the face-to-face discussion, especially after the students had proposed their best solutions, the instructors felt comfortable divulging a few more insights into the case, even sharing a few of the author’s take-home lessons. In the online environment, only a few of these ideas were shared in the final debriefing. Because the instructors knew this case would be discussed in future offerings of the online course, they were careful not to give away details that might influence future students’ analysis efforts. (Note: online students tended to openly share these types of details in their “student-only” Facebook group.)

Summary of facilitation strategies

These results provide a nuanced view of how face-to-face facilitation strategies might be translated into the online environment and add to the current discourse regarding the instructors’ role in facilitating online discussions. Interestingly, despite noticeable differences in the specific facilitation strategies used by the instructors in each setting, overall use of both social and expertise strategies was relatively equal across settings. Furthermore, in each setting, the instructors used a fairly equal number of strategies that were designed to (1) establish a positive climate for learning and (2) to push the discussion forward using relevant expertise (49–51% in each). This finding is similar to the results reported by Watson et al. (2017), who conducted a detailed analysis of the strategies used by an experienced facilitator in six different online discussions. In their study, social codes accounted for approximately half of the coded segments (50.5%), with expertise and cognitive codes making up the other half. Dolmans et al. (2002) and others (e.g., Kang et al. 2009) have also stressed that it is the combination of social facilitation skills and expertise that leads to the most powerful facilitation approach.

However, these results contrast with those reported by Clarke and Bartholomew (2014) who noted that the five instructors in their study, “relied heavily on social codes and were less likely to employ cognitive [or expertise] codes” (p. 17). Differences may relate to previous experiences of the instructors, both with the specific course and with the online environment, which may have led to differences in their levels of content knowledge. As noted by Lachner and Nuckles (2016), instructors with deep content knowledge generate more informative explanations than instructors with “surface” knowledge. In the Clarke and Bartholomew study, instructors had between 5 and 10 years of experience teaching online and had taught the course “numerous” times. In our study, the primary instructor had been teaching online for nearly 20 years and had developed and taught this specific course for approximately 25 years. Additional research is needed to determine the relationship between a facilitator’s content expertise and teaching experience and his/her use of social and cognitive/expertise strategies in an online or face-to-face discussion.

Comparisons of problem-space coverage

To compare students’ coverage of the problem-finding and problem-solving space during their case discussions, we coded every student comment in the two environments. As noted earlier, problem-finding categories related to identifying the stakeholders in the case, delineating the key design and non-design challenges, as well as describing the relationship among these challenges. Codes related to problem solving included proposing specific recommendations that worked within the given constraints and linking solutions to the identified challenges (see Table 1). Frequencies were calculated for each code and then compared across settings (see Table 8). Because the amount of coverage is not directly related to the number of students discussing a case (i.e., a small group of students could cover the same amount, or even more, of the afforded problem space than a larger group of students), we did not convert frequencies to percentages for these comparisons. However, to provide a more nuanced look at the amount of discourse devoted to each aspect of the problem space, we calculated both total word counts and an average word count/student for each aspect.

Table 8 Differences in coverage of problem-finding and problem-solving space

Problem finding

Results showed that frequencies in each sub-category of problem finding were higher in the face-to-face course than in the online course, suggesting more extensive discussion of the issues in the case (i.e., each sub-category was addressed more frequently in the face-to-face context). Despite the smaller number of students in the face-to-face class, their discussion of the stakeholder and consultant roles, and of the relationships among the key challenges, was more than double that of the students in the online course. In addition, discussion of the key design and non-design challenges was approximately 1.5 times that observed in the online setting. Not only did face-to-face students address these problem-finding topics more frequently, they also averaged from 1.5 to 6 times as many words/student while discussing them. Overall, face-to-face students averaged 552 words/student discussing the problem-finding topics, while the online students averaged 229 words/student discussing the same aspects of the problem-finding space (see Table 8).

In both settings, students were divided into three groups and asked to consider the case issues from the perspective of one of the key stakeholders. In the online context, this small group discussion occurred during the first 2 days of the week (Monday–Tuesday), with a summary posted by each small group on the morning of the third day (Wednesday). In the face-to-face context, students worked in stakeholder groups for about 30 min, and then shared their perspectives with each other for an additional hour of class time. Perhaps because the face-to-face discussion occurred within a more concentrated time period, students were quick to adopt their assigned perspectives and to hold onto these viewpoints throughout the subsequent whole-class discussion. For example, 50 min into the whole-class discussion, one student, who had been a member of the “engineering” group, continued to use his assigned role to discuss perceived issues, “…they (legal stakeholders) would not always solve the problems we have… we have issues to deal with and they (legal) have not been helpful.”

In the online discussion, students appeared to have a harder time coalescing around their assigned perspectives, perhaps due to the asynchronous nature of the discussion. As a general rule, students tended not to check into the discussion early in the week (e.g., only one of the three groups had contributions from every member by the end of the first day), thus making it more difficult for small groups to come to consensus prior to posting a summary on Wednesday morning. Requiring this short turn-around time may have limited students’ ability to discuss the issues to the same extent as the face-to-face students. Kienle and Ritterskamp (2007) noted that although the use of deadlines in an online discussion may increase participation initially, a limited timeframe makes it difficult for students to come to agreement on assigned tasks.

Problem solving

Frequency counts for the problem-solving sub-categories tended to be greater in the online course, suggesting more extensive discussion of solutions, especially related to the first four sub-categories (see Table 8). However, a closer look at the average number of words/student suggests that face-to-face students equaled or surpassed (at least at an individual level) the amount of discourse that was observed in the online context in four of the six subcategories. The two sub-categories in which online students averaged more words/student included (1) proposes solution for bringing stakeholders together and (2) solution addresses relevant non-design challenges. We discuss these results in more detail next.

Students in the online course addressed how to bring the stakeholders together more than twice as often as the face-to-face students, using nearly four times as many words and averaging nearly three times as many words/student. In the online course, as students turned their attention to solutions mid-week, they were explicitly prompted to consider how the instructional designer could meet everyone’s needs in a one-day training session. One of the first students to respond to this prompt mentioned the need to find “common ground” among the stakeholders, to which others responded with similar ideas about consensus building and compromise. This, then, became a focal point for much of the students’ subsequent discussion, which may account for the greater frequency of discussion around solutions designed to bring stakeholders together. Still, this “solution” was not a design solution, per se, but rather one that paved the way for a design (training) solution to work. Students in the face-to-face and online discussions addressed specific solutions for the training course approximately the same amount (59 vs. 63 coded segments, respectively; averaging 99 words/student in each context), but online students spent considerably more time discussing how to address the non-design challenges in the case (e.g., budget, limited time, etc.) than the face-to-face students (625 vs. 205 words, respectively; 39 vs. 29 words/student). As noted in previous research (Ertmer and Stepich 2002; Stepich et al. 2001), students who are new to case-based learning are often unable to sort through the noise of a case to identify the relevant design issues. Although we attempted to counteract this tendency by continually reminding students to focus on solving the specific design issues in the case, this was less controllable in the online context.

In contrast, the frequency with which the face-to-face students discussed the relationships among solutions, as well as the consequences of their proposed solutions, was more than twice that observed among the online students. This result is supported by noted differences in the amount of discourse devoted to these aspects. Face-to-face students used 157 words, averaging 16 words/student, to discuss relationships among solutions, while online students used 38 words, averaging 2 words/student; face-to-face students used 723 words, averaging 72 words/student, to discuss consequences of solutions, while online students used 348 words, averaging 22 words/student. Although this illustrates one of the limitations inherent in trying to compare words/student across students who are engaged in different types of discussion (i.e., as noted earlier, students engage in these discussions in dramatically different ways), these quantitative measures provide at least partial comparisons regarding how the discussions unfolded in the two contexts. That is, despite the smaller number of students in the face-to-face setting, their coverage of the targeted problem space did not appear to be negatively impacted.

Differences in students’ attention to these aspects of the problem space are likely due to differences in the instructors’ ability to monitor and intervene in the two discussions. Because ID novices often propose solutions without considering the potential consequences of those solutions (Ertmer and Stepich 2005), instructors must be prepared to query students’ recommendations in order to elicit deeper thinking about the positive and negative implications of their suggestions. In the face-to-face discussion, the instructors accomplished this by continually asking students to consider the consequences of any new idea proposed (“How does Louise [the training manager] feel about that? Does she agree with that?” “What would legal say about that?” “What would the challenges be with that [solution]?” “Let’s talk about the ramifications of that.”). In the online environment, similar questions were posed (“What might be the challenges of bringing everyone together? What happens if Craig doesn’t get this consensus?”), but inevitably there was some delay between the posting of a possible solution and the instructor’s response to, or query about, that idea, resulting in less time to discuss the potential consequences.

Ng and Tan (2006) and others (Perez and Emery 1995; Hmelo-Silver et al. 2002) have suggested that novices’ problem-solving approaches tend to be fairly limited; in particular, novices tend to propose solutions before fully analyzing the problem at hand. In both contexts in this study, the instructors addressed this tendency by asking (and reminding) students not to consider solutions until the issues had been thoroughly analyzed and discussed. However, this was easier to control in the face-to-face context than online. If students began to discuss solutions prematurely in the online context, there was little the instructors could do once the post had been made. In contrast, in the face-to-face environment, the instructors could quickly interrupt or redirect students who began to discuss solutions before completing their analysis of the issues (Hrastinski 2008).

Another possible explanation relates simply to the amount of time devoted to the discussion of solutions. In the face-to-face course, the class turned its attention to solutions during the last 40–60 min of the class (2/5 of the allotted class time), while the online students discussed solutions from Wednesday through Friday (3/5 of the allotted time). It is important to remember that comparing these different time frames is somewhat like comparing apples and oranges, as time in a face-to-face context is much more concentrated and focused than time in the online context. Still, online students tended to increase their participation toward the end of the week in order to ensure they contributed the required number of posts before the discussion ended. Because the last few days of the discussion were specifically devoted to finding a reasonable solution, greater coverage of the problem-solving space resulted.

Summary of differences in problem-space coverage

Although face-to-face and online students did not address each aspect of the case to the same extent, coverage appeared adequate (i.e., more than 5 comments/aspect), if not extensive (i.e., more than 10 comments/aspect), in both contexts. That is, in both discussions in this study, nearly every aspect of the problem-finding and problem-solving space was addressed at least 14 times, suggesting that the instructors were successful in meeting this discussion goal in both contexts. Only one sub-category showed minimal coverage (i.e., 5 or fewer coded segments) in the face-to-face context (relationships among solutions), while two categories showed minimal coverage in the online context (relationships among challenges and relationships among solutions). However, what remains unknown is whether online students benefited from “hearing” the entire discussion: that is, to what extent did they read, and attend to, their peers’ posts? Given the difficulty of gauging students’ attention in the online environment, more coverage may be needed to have the same impact on students’ learning. Future research is needed to investigate students’ participation strategies in the online context (e.g., to what extent do they read every post?) and to consider how these participation patterns impact students’ learning.
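For readers who wish to apply the same coverage thresholds to their own discussion transcripts, the rule can be expressed as a short classification function. This is an illustrative sketch only; the thresholds come from this section, while the function and variable names are hypothetical:

```python
def coverage_level(coded_segments: int) -> str:
    """Classify problem-space coverage for one aspect of a case,
    using the thresholds reported above: 5 or fewer coded segments
    is minimal, more than 10 is extensive, and anything in between
    is adequate."""
    if coded_segments <= 5:
        return "minimal"
    if coded_segments <= 10:
        return "adequate"
    return "extensive"

# An aspect addressed 14 times, as in this study, counts as extensive.
print(coverage_level(14))  # -> extensive
```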

It is difficult to determine causes for the observed differences in problem-space coverage by students in the two contexts. That is, due to the qualitative nature of this research, we cannot directly relate what the instructor did in each context (i.e., the facilitation strategies used) to what the students discussed or to the extent of the problem space covered. In fact, differences in coverage may have had more to do with differences in student demographics, or with the time allotted to each aspect of the case, as noted earlier, than with the specific facilitation strategies used. Depending on the instructors’ specific discussion goals (e.g., helping students sort through a variety of conflicting stakeholder perspectives, scaffolding students’ efforts to propose specific training interventions), it may be important to emphasize one aspect of the case over another (problem finding vs. problem solving). For example, in this study, the relationships among issues, and between identified challenges and proposed solutions, were discussed relatively less frequently than the other aspects of the case. If this were a persistent pattern across multiple case discussions, the instructors might consider implementing strategies that emphasize these relationships in order to increase students’ attention to these types of details in future cases. Furthermore, while formulating these new strategies, instructors would do well to pay close attention to the specific environmental affordances present in their contexts, in order to select the most effective strategies.

Implications

The results of this study have implications for the use of case-based discussions in both face-to-face and online settings. First, this work provides a detailed comparison of participation and discourse patterns across the two settings. Although instructor and student participation patterns differed across contexts (e.g., fewer turns but more words/turn online for both instructors and students), the average number of words per student was fairly similar. However, similar to the pattern described by Heckman and Annabi (2006), students in the online setting contributed more of the total discourse than students in the face-to-face setting (88% vs. 56.5% of total words, respectively). While this difference may simply relate to the greater number of students in the online course, it is also likely that the asynchronous nature of the discussion provided online students with greater opportunity to make lengthier comments (Hara et al. 2000). In addition, as recommended in the literature (e.g., Arend 2009; Clarke and Bartholomew 2014), online instructors are typically advised not to dominate the discussion, allowing students to “own” the conversation as much as possible (Wang and Chen 2008). As such, it appears that many of the instructors’ short comments, which occurred frequently in the face-to-face context in this study, were eliminated or combined into longer posts in the online context so as not to clutter the discussion with a large number of short posts. In general, previous research supports this type of “efficiency” approach. For example, Mazzolini and Maddison (2003) found that frequent instructor postings led to fewer student postings as well as a shorter discussion overall.

These differences in discourse patterns suggest that, despite using similar, if not identical, prompts to initiate both case discussions, the discussion experience was not identical across settings. While the initial prompts can “set the standard and quality for later postings” (Wang and Chen 2008, p. 172), it is impossible to fully anticipate how students will respond or the direction they will take, especially during an open-ended exchange such as a case-based discussion. This points to the need for instructors to be flexible—to respond to students’ unique misconceptions and questions, as well as their individual insights and understandings. This flexibility, however, must be rooted in a deep understanding of the goals for the discussion (Zhu 2006) and guided by ongoing observation of students’ evolving understandings of the issues and content under discussion. Although the facilitation itself may look different across settings (e.g., short, quick responses in the face-to-face setting; longer, more multi-purposed comments in the online setting), the goal is the same—to engage students in a purposeful case-based discussion that facilitates coverage of the targeted problem space (Hmelo-Silver 2013; Yew and Schmidt 2012). As such, capitalizing on the specific affordances of each context can enable instructors to engage students more readily in a meaningful case-based discussion.

Second, the results of this study provide a detailed picture of how facilitation strategies might be adapted, using the unique affordances of the online and face-to-face environments, to achieve the overarching goals of social cohesion and sense making in each setting. Although others may have different goals than those pursued by the instructors in this study, our results suggest that general facilitation strategies can transfer across settings, although their specific applications tend to differ based on the affordances of the environment at hand (Heckman and Annabi 2006; Slagter van Tryon and Bishop 2009). For example, the general strategy of acknowledging students’ ideas is important in both environments. In the online setting, this might occur by quoting part of a previous post; in the face-to-face setting, it might entail a simple non-verbal acknowledgement (head nod, smile, etc.). By focusing first on the important general strategies (e.g., questioning, sharing expertise), instructors might more readily adapt existing strategies to new contexts. Ultimately, instructors need to be aware of what they are trying to achieve via the discussion as well as the strategies best suited to accomplishing those goals in that setting.

In this study, the instructors used roughly equal percentages of climate-setting and expertise strategies in both contexts, suggesting they valued both goals equally. Others have also noted the importance of using a combination of facilitation strategies, particularly in the online setting (Lewandowski et al. 2016; Watson et al. 2017). For example, Clarke and Bartholomew (2014) reported that the students enrolled in the courses they studied tended to favor instructors who balanced their interactions across the three types of presence that comprise the Community of Inquiry framework (i.e., social, cognitive, and teaching; Garrison et al. 2001). In the face-to-face setting, work by Schmidt and Moust (2000) and Yew and Yong (2014) supports the same conclusion. Specifically, these researchers found that student learning in problem-centered contexts was related to the effective use of three general facilitation strategies: (1) social congruence (interacting with students in a personal manner), (2) cognitive congruence (using language that is readily understood by the students), and (3) content expertise (possessing an appropriate level of relevant content knowledge). In this study, the instructors appeared to balance their use of these strategies in both contexts, modifying only the specific application/form of each. This modification process is supported by Slagter van Tryon and Bishop (2009), who noted that face-to-face facilitation strategies can transfer successfully across contexts when instructors modify existing strategies to capitalize on the unique affordances of the online environment.

Finally, the results of this study suggest that, despite differences in discourse patterns and specific facilitation strategies across contexts, problem-space coverage of the targeted case content was adequate, if not extensive, in both sections, with nearly every aspect of the problem space being mentioned at least 14 times. This implies that one context is not inherently better than the other—students in both contexts were able to achieve problem-space coverage, which was the overall goal for the discussion. Still, differences were noted in the amount of coverage: face-to-face students addressed the problem-finding space more often, averaging nearly twice as many words/student overall (552 vs. 229), whereas online students addressed the problem-solving space more frequently, particularly in terms of proposing non-design solutions (i.e., solutions for bringing stakeholders together and solutions that address non-design challenges). However, despite addressing the problem-solving space more frequently, online students averaged slightly fewer words/student than face-to-face students in the problem-solving category (262 vs. 299). As noted earlier, this result may have been a function of the amount of time allotted to each aspect of the problem-solving process and/or the features of the specific contexts. For example, in a face-to-face setting, it is relatively easy to determine whether students heard comments made by their peers, whereas in the online setting, more repetition is likely needed to ensure that students “heard” what was said. This may have led the instructors to repeat ideas/comments or to explicitly prompt multiple students to address the same question/issue.

Limitations and suggestions for future research

While this research methodology provides a foundation for comparing problem-centered discussions in face-to-face and online instructional contexts, the results of this investigation are limited by the exploratory nature of the research design, the small data set, the lack of random assignment of students to each course, and the specific type of discussion being facilitated. That is, only one case-based discussion from each educational context was examined. Future research should analyze additional discussion formats in diverse content areas in order to verify and expand upon these results. In addition, the results are limited by the unequal number of participants in each course, although percentages were used as much as possible to make comparisons more meaningful.

While the instructors in this study had substantial prior experience facilitating case-based discussions, it is unclear whether novice facilitators would achieve similar results across the two discussion contexts. Furthermore, focusing on learning goals beyond those investigated in this study might yield different results. Finally, while the comparison of the two contexts made in this study drew on diverse data sources (facilitation strategies, problem-space coverage), collecting and analyzing additional data sources (e.g., student performance on individual case analyses) and perspectives (e.g., student/instructor preferences and beliefs) would provide a deeper understanding of the similarities and differences across the two instructional contexts and of the impact of facilitation decisions on the overall case learning experience.

Conclusion

In student-centered methodologies, class discussions provide important social and cognitive opportunities for students to construct knowledge (Mazzolini and Maddison 2007). With the increasing number of courses being offered in online formats (Allen and Seaman 2013), today’s instructors must be well versed in facilitating discussions in both face-to-face and online contexts. As noted in this study, each educational environment affords unique features that instructors can use to implement problem-centered pedagogies. However, this is no easy task: instructors must understand not only how to use the affordances of each context, but also why particular strategies work where they do. A deep understanding of how these strategies compare across face-to-face and online educational environments has yet to be fully realized, although the results of this study provide a promising start.

This research gives us a rare glimpse into how experienced instructors modify their facilitation strategies to accomplish the same course/case goals in different contexts. In general, it is hard to control external variables when conducting classroom research; in this study, however, two important variables were held constant: the instructors of the course and the content under discussion. By keeping these variables constant, we were able to gain a clearer picture of how the affordances of different contexts impact the manner in which discussions are facilitated. Results suggest that experienced instructors are able to help students increase their understanding of the case/topic under consideration by adjusting their specific use of facilitation strategies. Given the importance of discussion to student-centered instructional approaches, both online and face-to-face, researchers, instructional designers, and course instructors can benefit from understanding how to design, facilitate, and manage class discussions to elicit specific learning outcomes.

Appendix 1

Facilitation codes, definitions, and examples

Each code below is followed by its definition and, where applicable, an example.

I. Sets climate for learning: Creates a positive learning environment

 A. Acknowledgement: Responding/reacting to students’ comments
  Recognizes/replies: Recognizing student ideas/contributions to the discussion. Example: “So, that’s an interesting solution …”
  Restates: Repeating what the student says. Example: “Someone said you can’t make this a win–win–win. You are not going to be able to please everyone”
  Revoices: Repeating what the student says, but in a way that clarifies the student’s ideas. Example: “So, your solution is kind of trying to find a way to make everyone at least a little happy”
  Name: Using students’ names. Example: “I think Wes starts to get at this above”

 B. Social/cohesion: Drawing students into the conversation; being personable and inclusive
  Being personal/conversational: Injecting personality into postings; adding informal comments to make the conversation more personal. Example: “… As an aside, as I’m reading the postings this morning …”
  Agrees: Expressing agreement with a student’s idea. Example: “It does seem odd that legal created the training…”
  Approval: Responding positively to an idea and/or giving praise. Example: “I think you captured well what they want”
  Emotion: Expressing likes or dislikes, frustration, sadness, etc. Example: “Wow—poor Louise!”
  Emphasis: Highlighting or raising awareness for an idea. Example: “Anything he does HAS to be signed off by legal”
  Encourage: Offering encouragement. Example: “But good ideas, nonetheless!”
  Enthusiasm: Expressing enthusiasm or excitement for the content or an idea. Example: “Nuggets of awesomeness! Well-put!”
  Group: Promoting whole-class unity
   Collective reference: Addressing the class as one group (“You”). Example: “How much power do you think Louise has?”
   Peer (including self with students): Placing self at student level (“We,” “Us”). Example: “As much as we might want to change some of these ‘givens’ we really can’t—and neither could Craig”
  Humor: Teasing or joking with the students; sharing a laugh with students. Example: “Or is that your evil plan—to have Craig fail so that you look better by comparison?”
  Invites participation: Stimulating participation; asking students, either individually or as a group, to respond to a question or comment. Example: “Can anyone speak to how ‘mandatory’ training (think OSHA type requirements) typically works in an organization?”
  Self-disclosure: Providing personal information or ideas. Example: “I like this case because it’s a situation where you’re not going to please everybody, and you have to find common ground…”

II. Uses expertise: Providing feedback, directing student attention, tempering expertise, making connections

 A. Tempering expertise: Using communication techniques to share expertise in a nonthreatening way
  Softens disagreements: Disagreeing in a non-threatening way. Example: “Not to be harsh …”

 B. Sense making: Helping students make sense of course concepts and ideas through various techniques
  Alternative viewpoint: Providing a different perspective to prompt students to further consider an idea. Example: “I think this is a fair assumption, but that doesn’t mean that they can’t all walk away feeling ‘partially’ happy”
   Disagrees: Expressing disagreement with a student’s idea. Example: “I don’t think legal is going to do that”
  Clarifies: Providing a deeper explanation of topics, issues, or ideas that seem to be misunderstood or not fully understood. Example: “So it’s Stan communicating it. I don’t know where—the information is not coming directly—he’s reporting something else—but it’s all filtered through Stan”
  Directs student attention: Providing important cues for students as to where they should focus. Example: “What’s your ideal solution? In the next discussion, you can try to find some common ground, so that Craig can propose a workable solution”
  Formative feedback: Offering feedback on student ideas. Example: “I like this idea for two reasons: (1) Craig does his best with what he’s got and (2) he’s honest in saying, but more could be done if the company wanted to go that route”
   Confirms understanding: Affirming that a student’s thinking is on the right track. Example: “I think this accurately captures Louise’s mindset about this whole thing”
   Diagnoses misconception: Diagnosing students’ misconceptions of case/course topics. Example: “If so, while this does sound proactive, this sounds kind of more like restructuring and less like training…”
  Injects knowledge: Adding new information. Example: “…because that’s where Craig can add some value”
  Makes connections: Helping students see the links between earlier points or between solutions and constraints. Example: “So again, to me, that’s more evidence that she’s his boss. Because he goes to her to ask for more time or maybe to make adjustments to the project”
   Example: Providing an example to support an idea. Example: “Another thing you can put in your contract is kind of the potential for scope creep… You know, things kept getting bigger and bigger…”
   Extends students’ ideas: Building on ideas proposed by students. Example: “Assessment is usually pretty simple (e.g., employee attendance or some low level type of assessment). All the company has to prove to the oversight committee is that everyone participated”
  Maps between constraints and solutions: Articulating the relationship between case issues and possible solutions, or asking students to describe specifically how their solutions address a specific constraint. Example: “And it just kind of, as he says, ‘The content keeps exploding and the size of the box stays the same size.’ And that is all that Louise—Louise isn’t going to let him make that any bigger”
  Seeks consensus: Working to establish shared understanding. Example: “Are you all in agreement about that?”
  Summarizes: Summarizing ideas shared in the discussion. Example: “So we’ve got some people with different desires from this training. We [engineers] want a communications plan and you [training] want some one day package of something and you [legal] want it really abstract”

 C. Questioning: Using various forms of questions to engage students in content under discussion
  Asks for clarification: Asking a student for more details. Example: “What does the case say about legal?”
   Pushes for explanation: Prompting students to articulate a reason for a particular idea. Example: “Tell me a little bit more about how Craig does that?”
  Direct question: Using an explicit, straightforward question to stimulate deeper thinking. Example: “So does this cause an ethical dilemma for Craig?”
  Problematizing: Helping students focus on aspects of the problem that are most relevant; drawing attention to parts of the problem that may have been overlooked. Example: “I think that’s what you said. But if you ask that question of Louise what is she going to say?”
  Encourages articulation of solution: Encouraging students to share specific details of a solution. Example: “So we need to think about what the training might look like if he does please the training dept, Louise; if he does please Stan, or if he does please legal. Because those would probably be three different solutions, as a starting point”
  Reflective toss: Responding to a question with a question; throwing the question back to the students. Example: “What might be the challenges of bringing everyone together?”