In this section, we discuss five main strategy dilemmas that educators might encounter: (a) use of grades or marks, (b) use of posting quantity guidelines and posting deadlines, (c) use of message labels or sentence openers (online scaffolds), (d) extending the duration of the online discussion, and (e) use of instructor facilitation. As mentioned earlier, strategy dilemmas are strategies for which previous empirical research has shown mixed results. Acknowledging these dilemmas is essential for educators and researchers to make informed decisions about the discussion strategies they are considering or implementing in the future.

4.1 Use of Grades or Marks

Bullen (1998) found that the grades or marks associated with participation did not necessarily result in more participation for some students. These students used the marks as part of a cost-benefit analysis to determine how to apportion their time. Students responded to the marks, but not necessarily with enthusiasm. Their contributions were rarely original or insightful, but often rehashes of what others had said, made simply to earn the marks.

This was echoed by Oliver and Shaw (2003), who found that students were merely “playing the game” of assessment (p. 64). Students simply made postings to earn marks, but rarely contributed otherwise. Yeh and Buskirk (2005) similarly found that although grading the discussion was the best intervention to enhance student posting, the majority of students did not further interact with their peers. In other words, students were not so much interested in exchanging ideas with their course mates as in showing the instructor that they had posted their messages, so that they would not get a bad grade.

This was confirmed by Palmer et al. (2008), who reported that the frequency of postings was generally kept to the required minimum that allowed students to be awarded the assignment mark. Students tended merely to fulfill the minimum (e.g., one new post and one reply per week) to qualify for the assignment marks offered (i.e., 10 % of the course marks). Cheung and Hew (2005) found that while the awarding of marks served the purpose of encouraging students to contribute to the discussion, some students felt pressurized to make themselves heard. As a result, their messages ended up sounding very similar to one another. Brewer and Klein (2006) found that groups of students who were given specific incentives or rewards (e.g., bonus points for the week’s assignment) exhibited more off-task behaviors (i.e., making statements about topics not related to the course) than groups that did not have incentives or rewards.

Given this dilemma, the mere awarding of marks to increase student contribution may not be the best strategy. In view of this, several researchers (Baron and Keller 2003; Jackson 2010; McNamara and Burton 2010) have proposed the use of rubrics that clearly state the allocation of marks for the different categories of contribution. Well-constructed rubrics that describe the specific desired outcomes of contributions to online discussions ought, on the face of it, to set students’ expectations and steer their contributions in deeper, more meaningful directions (Jackson 2010). Studies such as that of Bai (2009) suggest that the use of rubrics is worthwhile. However, more evidence is still needed to show that the use of rubrics has a direct relationship with higher frequency and quality of participation (Jackson 2010).

An alternative option is to ask students to peer evaluate one another’s contributions in the online discussions. For example, at the end of each discussion assignment, individual students could fill out a peer evaluation form to identify those who have been active, those whose contributions have been useful (and explain why), as well as those who did not participate or who merely played the game of assessment. In order for the peer evaluation process to work fairly, Lewis (2006) suggested the use of anonymity. For tracking and accountability purposes, the evaluator’s name is required on the form, but the results can be seen only by the instructor, not by other students. In addition, Lewis (2006) suggested that the instructor retain the flexibility to adjust an individual student’s participation marks based on his or her own observations of the online discussions.
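
To illustrate how such a form could be operationalized in a course management tool, the following sketch shows one possible structure for an anonymous peer evaluation record; all class and field names here are hypothetical rather than drawn from Lewis (2006).

```python
# Hypothetical sketch of a peer evaluation record for an online discussion,
# following Lewis's (2006) suggestions: the evaluator is named for
# accountability, but results are visible only to the instructor.
from dataclasses import dataclass, field


@dataclass
class PeerEvaluation:
    evaluator: str                    # required for tracking/accountability
    most_active_peers: list = field(default_factory=list)
    useful_contributors: dict = field(default_factory=dict)  # name -> why useful
    non_participants: list = field(default_factory=list)     # incl. "game players"
    visible_to: str = "instructor"    # never shown to other students


evaluation = PeerEvaluation(
    evaluator="Student A",
    most_active_peers=["Student B"],
    useful_contributors={"Student C": "Summarized the debate and cited readings"},
    non_participants=["Student D"],
)
print(evaluation.visible_to)  # results are released only to the instructor
```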

4.2 Use of Posting Quantity Guidelines and Posting Deadlines

Although Dennen (2005) found that students needed to know how many messages they were expected to post (i.e., a posting quantity guideline) in order to be motivated to contribute to the discussion, other researchers disagree about the efficacy of such an approach. Researchers (e.g., Pena-Shaff and Nicholls 2004) have found that giving specific guidelines on the number of postings per week reduced students’ contribution and interaction with their peers. This is because students tended to cease contributing for the week once they achieved the required number of postings stipulated by the instructor.

Other researchers found that the quality of the discussion could suffer too. For example, Murphy and Coleman (2004) found that the quality of the discussion declined when students were compelled by a course requirement to post a stipulated number of messages. Students found such perfunctory posts extremely dull or superficial (e.g., very general comments and “me too” additions), unlike posts in other forums that had no requirement on the number of messages to be posted.

With regard to the use of posting deadlines, Bullen (1998) found that they were only partly successful and might have some unintended impacts on participation. Students felt that the discussion was limited, because some students procrastinated and only posted when the deadline was fast approaching, which then left no time for follow-up responses. Dennen (2005) found that discussion deadlines served as both a participation motivator and a discussion inhibitor. In classes that did not require consistent and ongoing dialog, discussion deadlines dictated the timing of much student participation, with clusters of messages posted within the few days leading up to a deadline. While discussion deadlines were effective at generating participation, they often stifled the development of actual dialog because students were merely racing to post messages by the due date, rather than reading and responding to each other’s messages. The latter activity would require contribution at multiple points in time, which, generally, was not feasible when all students were contributing at the last minute.

So, how then should deadlines be structured? As of now, no conclusive finding has been reported in the literature. Bullen (1998) suggested that deadlines should perhaps be established near the midpoint of the discussion, so that adequate time is allowed for follow-up comments. Instructors may also consider establishing two deadlines, one for the initial contribution and a second for a follow-up comment (Bullen 1998). Instructors should also perhaps encourage students to respond to one another within 24–48 h. Such a strategy might be a more viable option than simply imposing two or three major deadlines throughout the duration of the online discussion. However, we are not sure if these suggestions would work because students may still choose to wait until the last moment to post. We urge future research to examine this further.

An alternative option would be to use other forms of incentives, such as a rewards program that combines quantitative and qualitative measures to motivate student contribution, instead of relying on quantitative measures alone. Such a reward program is similar to the frequent flyer programs that airline companies have adopted. For example, in Hummel et al.’s (2005a) study, the reward mechanism allowed individual students to gain personal access to additional course-related information that was useful and relevant to their studies, through the accumulation of points earned by making postings to discussion forums. This additional course information was not available elsewhere.

The reward system, which included both quantitative and qualitative components, awarded participants points for activities such as making contributions to the discussion (20 points per post), replying to posts (10 points each), and rating a post (3 points each). On the qualitative side, contributors received points each time their contributions prompted a reply (5 points per reply received) and each time their posting was rated by their peers (3 points × rating value), where ratings ranged from 1 (very poor) to 5 (very good) (Hummel et al. 2005a). The qualitative measure was included to encourage participants to provide contributions that would benefit the online discussion. A threshold of 33 points, attainable with just one post, one reply, and one rating, was needed to gain access to the extra course-related materials. Results showed that the level of contribution indeed increased with the introduction of the incentive system. Interestingly, participants continued contributing even after the reward was withdrawn.
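
To make the mechanics of this scheme concrete, here is a minimal sketch in Python using the point values reported by Hummel et al. (2005a); the function and constant names are our own illustration, not the study’s implementation.

```python
# A minimal sketch of the Hummel et al. (2005a) point scheme.
# Names are illustrative, not from the original study.

POINTS_PER_POST = 20            # making a new contribution
POINTS_PER_REPLY = 10           # replying to someone else's post
POINTS_PER_RATING = 3           # rating a peer's post
POINTS_PER_REPLY_RECEIVED = 5   # each reply one's post attracts
POINTS_PER_RATING_RECEIVED = 3  # multiplied by the rating value (1-5)
ACCESS_THRESHOLD = 33           # points needed to unlock the extra materials


def participation_points(posts, replies, ratings_given,
                         replies_received, ratings_received):
    """Compute a student's points.

    ratings_received is a list of rating values (1 = very poor .. 5 = very
    good) that peers assigned to the student's posts.
    """
    quantitative = (posts * POINTS_PER_POST
                    + replies * POINTS_PER_REPLY
                    + ratings_given * POINTS_PER_RATING)
    qualitative = (replies_received * POINTS_PER_REPLY_RECEIVED
                   + sum(POINTS_PER_RATING_RECEIVED * r for r in ratings_received))
    return quantitative + qualitative


# One post, one reply, and one rating already meet the threshold:
score = participation_points(posts=1, replies=1, ratings_given=1,
                             replies_received=0, ratings_received=[])
assert score == 33  # 20 + 10 + 3
print("access granted" if score >= ACCESS_THRESHOLD else "keep contributing")
```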

Also, delineating expectations of what the postings should entail, such as requiring students to provide reasons or explanations for their “I agree” statements, might be a better alternative than merely providing a guideline for the number of required postings.

4.3 Use of Sentence Openers or Message Labels

Not all scholars agree that the use of sentence openers or message labels positively impacts student contribution in online discussions. Typically, the types of online messages that participants can post are constrained to a predefined set of sentence openers or message labels embedded within the discussion forum. This can lead to some unexpected consequences. The use of these sentence openers or message labels could disrupt the online discussion, as they force participants to interact in an unnatural way (Beers et al. 2005; Dillenbourg 2002). Participants, for example, could not raise a point at the moment they wished to raise it, because a prior contribution had to be closed before a new one could be made (Beers et al. 2005). This could disrupt students’ thoughts, and subsequently stunt the flow of the discussion.

Jeong and Joung (2007) examined the impact of message labels on collaborative argumentation in asynchronous online discussions. In one group, students posted messages using a prescribed set of message categories such as argument, evidence, critique, and explanation. Another group was told to explicitly label their online messages with these message categories. A control group used none of the categories and labels. The researchers found that the message labels inhibited the thinking processes needed to produce critical argumentation in an online discussion. Results suggested that students who used message labels were two to three times less likely to critique postings by other students, and two to three times less likely to respond to their peers’ critiques to defend their own previous claims. The label used to identify critiques might have discouraged students from posting critiques. For example, the label “CRIT” carried negative connotations that could have made students perceive the posting of critiques as overly confrontational. Perhaps a less confrontational label could be considered for future use, so as to encourage participants to critique one another’s postings.

In another study, Ng et al. (2010) found no significant difference between students who had access to sentence openers and message labels and those who did not, in terms of the mean number of messages posted. The researchers found that as much as 52.6 % of the online posts were wrongly labeled. In other words, participants may use the message labels in ways that are not necessarily consistent with the meanings intended by the designers (Baker et al. 1999). Further analysis of the Ng et al. (2010) study suggested that the participants were not clear about the distinctions among message labels, in particular the “Identify problems” and “Discuss problems” labels. One possible solution to this problem would be to explain each message label clearly to the participants, and to provide them with examples of message postings that fall under each label category. In addition, if students are reluctant to openly disagree with others, they need to be encouraged to do so. Perhaps one viable way to invite argumentation would be to include sentence openers that incorporate the Socratic questioning approach (Ng et al. 2010). Socratic questioning can prompt students to exchange viewpoints, explore various solutions to problems, and consider the implications of solutions (Ng et al. 2010). However, this suggestion currently remains a conjecture. Future research should be conducted to verify this claim.
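
As a rough sketch of the suggested remedy, explaining each label and giving example postings, a forum could store the label definitions as data, as shown below. Apart from “Identify problems” and “Discuss problems”, which Ng et al. (2010) mention, the labels, descriptions, and example openers here are hypothetical.

```python
# Illustrative sketch of how a discussion forum might define message labels
# with explicit descriptions and example openers, so participants can see
# the distinctions between categories. Only "Identify problems" and
# "Discuss problems" appear in Ng et al. (2010); the rest is hypothetical.

MESSAGE_LABELS = {
    "Identify problems": {
        "description": "Name a problem or issue without yet proposing solutions.",
        "example_openers": ["A problem I see here is ...",
                            "One issue we have not considered is ..."],
    },
    "Discuss problems": {
        "description": "Analyze a previously identified problem: its causes, "
                       "trade-offs, or possible solutions.",
        "example_openers": ["Building on the problem raised above, ...",
                            "One way to address this problem is ..."],
    },
    "Socratic question": {  # hypothetical label to invite argumentation
        "description": "Probe a peer's viewpoint, its assumptions, or the "
                       "implications of a proposed solution.",
        "example_openers": ["What evidence supports ...?",
                            "What would happen if ...?"],
    },
}


def label_help(label: str) -> str:
    """Return the description and example openers shown to students."""
    info = MESSAGE_LABELS[label]
    openers = "; ".join(info["example_openers"])
    return f"{label}: {info['description']} Examples: {openers}"


for name in MESSAGE_LABELS:
    print(label_help(name))
```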

Overall, we find that the evidence on the use of sentence openers or message labels is inconclusive at this stage, and further research is needed. Dillenbourg (2002) argued that the challenge is not to formulate a golden script, but rather to understand why some scripts are effective and others are not. In other words, it would be useful if future studies examined the specific conditions under which scripts are most effective, as well as the conditions under which they do not function. This would enable scholars to chart a road map or guidelines for educators to use.

4.4 Extending the Duration of the Online Discussion

Some scholars attribute students’ limited contribution to a lack of time: involvement in online discussions typically takes more time than participation in a traditional, face-to-face class. Based on this claim, some researchers have suggested extending the duration of online discussions to give students more time to think and contribute to the discussion (e.g., Jeong and Frazier 2008; Yeh and Lahman 2007).

However, in a series of studies that we conducted (Hew and Cheung 2009, 2010a, b, 2011a), we found no evidence for such a claim or suggestion. For example, we found no supporting evidence for the hypothesis that discussion forums with more messages posted enjoyed a longer duration of online discussion than forums where users posted less frequently (Hew and Cheung 2009, 2010a). Neither was there a correlation between the duration of the discussion and the frequency of higher-level knowledge construction (Hew and Cheung 2010b, 2011a).

Moreover, Brown and Green (2009) found that the average student spent about 1 h per week reading the messages posted in an online discussion. Based on the assumption that it takes less than 2 h to compose initial messages and responses to the discussion prompt, the time commitment required for an online discussion was found to be similar to that of traditional, face-to-face courses. Overall, the results of the study suggested that the asynchronous discussion activities used in online learning are comparable to face-to-face classes in terms of the time needed for weekly participation.

Therefore, we suggest that the issue at hand may actually be a matter of priorities or personal preference, rather than a lack of time per se. How students prioritize one activity over another determines how much time they are willing to spend on it. So, the problem of not having enough time for the discussion may not be a problem in itself, but rather indicative of a failure to prioritize. For example, many students in Gerbic’s (2006) study tended to view work and family commitments as more urgent or more important than contributing to online discussions; hence, the discussions were given a lower priority and were subsequently overlooked. Similarly, many participants in Hammond’s (1999) study felt that they had too many demands made on them at work and at home, and that allocating time to participate in online discussions meant restructuring their priorities, which some were reluctant to do.

The prioritization problem may be compounded by the fact that students do not see one another face-to-face in online discussions, as well as by the time-independent nature of the forums. This is perhaps best captured by the adage: out of sight, out of mind. Students feel less pressurized to participate in the online discussions, as they can delay participation, or even get out of the habit of participating altogether (Hammond 1999). In contrast, there is peer pressure to take part in face-to-face discussions simply because everyone is physically present and the activities are clearly timetabled (Hammond 1999).

Perhaps the only way to manage the prioritization problem is to create a sense of great need or urgency for the online discussions. If students view the online discussions as important, they will adjust their schedules so that they can find time to contribute to the discussions. On the other hand, if the discussions are not seen as important, students will tend to procrastinate. Students need to see the value of participating in online discussions. Using the online discussions to complement the course or add value to it in a way that other activities cannot (e.g., through the strategies that we described earlier from the findings of Hummel et al. 2005a, b and Guzdial and Turns 2000) may be helpful.

4.5 Use of Instructor Facilitation

The instructor, traditionally, takes on the role of online facilitator. The responsibilities of a facilitator may be classified into four types: organizational, social, intellectual, and technical (Berge 1995; Paulsen 1995). Based on Paulsen’s (1995) framework as well as other researchers’ work (e.g., Berge 1995; Cheung and Hew 2005; Goodyear et al. 2001; Klemm 1998; O’Grady 2001; Salmon 2004; Salter 2000; Winiecki and Chyung 1998), the role of facilitation is summarized in Table 4.1.

Table 4.1 Description of activities related to the organizational, social, and intellectual facilitation types

Some researchers recommend that an instructor monitor the discussion in order to keep the discussion on track, engage the students, help participants overcome technical problems, establish the rules, and set common expectations for the discussion (e.g., no personal attacks), among other reasons (Beaudin 1999; Cifuentes et al. 1997; Lang 2000; Yeh and Lahman 2007). It is important to note that, although instructors play an important role in facilitating online asynchronous discussions, not all researchers agree that an instructor is the best choice of facilitator.

Firstly, facilitating an online discussion can be time-consuming. Hiltz (1988) described it as being a parent: “You are on duty all the time, and there seems to be no end to the demands on your time and energy” (p. 441). Having the instructor take on the role of facilitator may not be the best choice, because not all instructors are able to dedicate the amount of time and energy needed to facilitate the discussions (Correia and Baran 2010; Seo 2007).

Secondly, findings from previous research suggest that instructor facilitation may result in instructor-centered discussion (Light et al. 2000), and inhibit students’ participation and voice (Pearson 1999; Zhao and McDougall 2005). Students typically see the instructor as an expert, and so the instructor’s postings may deter students from contributing their thoughts and comments, because students consider the instructor’s comment to be the final, authoritative one (Zhao and McDougall 2005). Dysthe (2002) differentiated between asymmetrical and symmetrical discussions. In asymmetrical discussions, communication lines tend to center on the instructor, whose authority rests on status, power, and knowledge, while in symmetrical discussions, communication lines tend to focus on the students (Dysthe 2002). Dysthe argued that by staying out of a discussion, the instructor could stimulate symmetrical discussions among the students and give each student’s voice more authority.

An et al. (2009) found that when the instructor’s facilitation was kept to a minimum, students tended to express their thoughts and opinions more freely. Arend (2009) reported that in forums that exhibited lower level critical thinking, the instructors were very active in the online discussions, sometimes responding to nearly every student post. Correia and Baran (2010) found that many students treated the discussion questions as short answer essay questions instead of interactive discussions when the discussion was facilitated by the instructor. Dennen (2005) found that when the instructor was involved in the online discussion, students responded to the instructor’s comments instead of one another’s. As Rourke and Anderson (2002, p. 4) warned, “Ultimately, the concern is that instructor-led discussions can easily revert to the recitation structure, or initiate-respond-evaluate structure, of a traditional lecture.”

Furthermore, in a large study that examined over 40,000 postings from a total of 375 discussion forums, Mazzolini and Maddison (2007) found that the percentage of discussion threads started by instructors showed significant negative correlations with both the length of discussion threads and the student posting rate. The percentage of instructor postings within a forum likewise showed significant negative correlations with both the length of discussion threads and the student posting rate. In summary, Mazzolini and Maddison (2007) concluded that the more the instructors posted, the less frequently students posted, and the shorter the discussion threads.
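
As an aside, the kind of per-forum correlation analysis behind these findings can be sketched in a few lines. The numbers below are invented purely for illustration; they are not Mazzolini and Maddison’s (2007) data.

```python
# Minimal sketch of the kind of correlation analysis Mazzolini and Maddison
# (2007) report, using made-up per-forum measures (requires Python 3.10+).
from statistics import correlation

# Hypothetical per-forum measures:
instructor_post_pct = [5, 10, 20, 35, 50]        # % of posts by the instructor
mean_thread_length = [6.1, 5.2, 4.0, 3.1, 2.4]   # posts per thread
student_post_rate = [4.5, 4.0, 3.2, 2.1, 1.5]    # posts per student per week

# Negative coefficients would echo the reported finding: the more
# instructors posted, the shorter the threads and the less students posted.
print(correlation(instructor_post_pct, mean_thread_length))
print(correlation(instructor_post_pct, student_post_rate))
```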

4.6 The Case for Peer Facilitation

One possible strategy to circumvent these concerns is to have students facilitate the online discussions. There exist two variants of student facilitation. The first variant is called same-age (after Smet et al. 2008) or peer facilitation, which refers to students from the same course facilitating the online discussion. The second variant is called cross-age facilitation (after Smet et al. 2008), which refers to older students facilitating the discussion of younger students, such as graduate students or research assistants facilitating undergraduate students’ online discussions (e.g., Murphy et al. 1996). Although in both cases the instructor is not involved in the online discussion, we feel that cross-age student facilitation is akin to instructor facilitation. After all, younger students tend to rely on older students for guidance and input. It is, therefore, not unreasonable to assume that younger students perceive the graduate students as instructors who provide explanations for the issues or questions raised, or suggestions for the problems discussed.

Less research has been done on same-age or peer facilitation than on instructor or cross-age facilitation (Baran and Correia 2009; Hew and Cheung 2008; Hew et al. 2010b; Ikpeze 2007). Results of previous research on peer facilitation suggest that students feel more comfortable vocalizing their views, brainstorming ideas, and challenging one another’s ideas in a peer-facilitated discussion environment (e.g., Correia and Davis 2007; Hew et al. 2010a, b; Rourke and Anderson 2002). For example, Rourke and Anderson (2002) reported that a majority of students expressed a preference for peer-facilitated discussions over instructor-facilitated ones, explaining that peer-facilitated discussions invited more responses (i.e., more messages posted per week). Tagg (1994) selected two students from his class to act as peer facilitators. One of them was assigned to set the agenda for the discussion and post initial contributions, while the other helped summarize the discussions. Tagg (1994) found that the involvement of the peer facilitators increased student participation rates, as well as students’ understanding of the content.

Students’ posts were also found to be significantly longer in weeks when they were in charge of facilitating the discussion (Poole 2000). A peer-facilitated online discussion forum was also found to contain significantly more posts responding to previous comments, as well as more substantive responses, than a non-peer-facilitated one (Seo 2007). A message was deemed substantive if the student offered an appropriate interpretation, inference, or justification in explaining his or her views, while a message that did not add such an element was treated as a nonsubstantive response (Seo 2007). Baran and Correia (2009) identified three peer facilitation strategies that generated innovative ideas, motivated students to participate, and created a relaxed environment for online discussion: (1) inspirational (i.e., asking participants to imagine idealistic scenarios, search for inner goals, and discuss ways to achieve them), (2) practice oriented (i.e., encouraging participants to reflect on real-life situations and their actual teaching and learning contexts with constant connections to the readings), and (3) highly structured (i.e., organizing the discussion around the questions of what the participants already knew, wanted to know, and learned before and after reading the assigned chapter for the week).

Thanks to the aforementioned research, we have a better understanding of peer facilitation. However, despite the potential of peer facilitation in asynchronous discussions, Correia and Baran (2010) argue that more research needs to be done in order to better understand its use. Some extant research on peer facilitation is limited because the actual types of peer facilitation techniques were not clearly explained. For example, in Gilbert and Dabbagh’s (2005) study, peer facilitators were provided with facilitation guidelines that included an article entitled “The role of the online instructor/facilitator”, a web-based resource explaining the various facilitator roles in an online discussion. What exactly these roles entailed was not elaborated.

Some fundamental questions or issues are still not fully addressed. For example, what exactly motivates participants to contribute in a peer-facilitated discussion environment? How can participants’ discussion be sustained in a peer-facilitated environment? Also, how can higher or advanced levels of knowledge construction be fostered in peer-facilitated online discussions?

In Chaps. 5, 6 and 7, we report ten empirical studies conducted to examine peer facilitation and how it could (1) increase students’ online contribution rate, (2) sustain students’ online discussion, and (3) foster higher levels of knowledge construction. However, it is important to note that peer facilitation should not be viewed as a “cure-all” or panacea for all online discussion issues or challenges. Hence, in Chap. 8, we discuss the conditions or situations that may best be addressed using peer or instructor facilitation, and report a study that attempts to answer this question.