Introduction

This chapter presents critical reflections on the multivocal analyses presented by Chris Teplovs and Nobuko Fujita (2014, Chap. 21), Nancy Law and On-Wing Wong (Chap. 22), and Ming Ming Chiu (Chap. 23) of asynchronous online discussion data that were collected in an online graduate education course using Knowledge Forum (Fujita, 2014, Chap. 20). These analyses work towards identifying and exploring collaborative interactions and “pivotal moments” in the dynamic group processes that supported progress towards knowledge building over the 13 weeks of the course.

These data come from the second iteration of a larger design-based research study (Fujita, 2009) and differ from the other datasets in this book in featuring asynchronous, text-based discourse that unfolded in a higher education online learning context. The dataset is also large, comprising 1,330 notes contributed by 17 graduate student participants. A tenure-stream faculty instructor and a researcher collaborated closely to design instructional interventions and participated in the forum to foster progressive discourse for knowledge building.

A common teaching problem in online courses is moving students beyond expressions of social connection and opinion exchange. Earlier research indicates that desirable educational outcomes such as critical thinking, knowledge construction, and critical discourse rarely occur in these settings (Garrison, Anderson, & Archer, 2001; Gunawardena, Lowe, & Anderson, 1997; Rourke & Kanuka, 2007). My study departed from previous work by setting higher goals for collaborative interaction. It refined designs of instructional interventions that support group processes towards knowledge building in online graduate education courses and offered a unique perspective on identifying the characteristics of the resulting high-quality online discourse. I combined multiple levels of analysis, including an in-depth analysis of group discourse, to explore ways to assess individual- and group-level learning and knowledge building in online courses.

Findings from my various analyses converged to suggest that peer scaffolding that made norms for progressive discourse for knowledge building explicit was most effective at the beginning of the course for newer online learners and newer graduate students, and least effective for students who were practicing K-12 teachers. A significant barrier to knowledge building discourse was the tendency for teachers to reject these norms and revert to “belief-mode thinking” (Bereiter & Scardamalia, 2003) and the “devotional discourse” typical of traditional schooling (Woodruff & Brett, 1999). Additionally, findings suggested that software-based scaffolding (as found in Knowledge Forum’s scaffold support feature) is a most promising avenue for future innovations to promote knowledge building discourse.

However, identifying and describing patterns of collaborative interaction in large textual data sets is a daunting task for even seasoned researchers. The quantitative and qualitative analyses yielded deep insights into online learning and teaching processes, but were time-consuming and laborious. A significant issue arising from the study was the need for future research to draw a more complete picture of the complex learning unfolding online in meaningful, timely, and actionable ways. I sought to extend the suggestive findings about the characteristics of knowledge building discourse with visualizations and advanced quantitative analyses.

Therefore, to gain new perspectives on collaborative learning through alternate analytic approaches, this dataset was contributed to the 2009 Alpine Rendez-Vous, along with the Shirouzu dataset. In that workshop, analyses of these data were presented by Teplovs and Fujita (2009) and Tscholl and Dowell (2009). Discussion of the analyses was provided by Rosé; meta-discussion was given by Law. Teplovs and Fujita used latent semantic analysis (LSA; Landauer, Laham, & Derr, 2004) and visualizations generated by the Knowledge Space Visualizer (Teplovs, 2008) to pinpoint moments in the online discussion that showed promise of being a “rise above” or synthesis moment. While Rosé pointed to the difficulty of interpreting the visualizations, the limitations of LSA, and the possibility of using alternative approaches such as latent Dirichlet allocation (Blei, Ng, & Jordan, 2003), this automated analysis quickly and accurately identified a few interesting, possibly “pivotal” moments in the data. It thus offered the potential to make analyses of large textual corpora more practicable during design-based research iterations and to inform teachers’ pedagogical decision making while the course was still in session to enhance the quality of student learning.

In contrast, Tscholl and Dowell (2009) took a more traditional qualitative analysis approach. Their analysis aimed to trace “individualistic appropriations of words, propositions and objects (e.g. symbols) in a collaborative learning situation, and to show that often these constitute pivotal moments in collaboration.” Beginning with the notion of “uptake” (Suthers, Dwyer, Medina, & Vatrapu, 2010), they identified myriad instances of uptakes and shifting problem frames in online exchanges. Yet instances of knowledge building, while highly desirable, are very difficult to foster in online discourse and happen infrequently even in graduate education. The meta-discussion by Law and comments from other workshop participants concurred with some reservations I had about the findings from Tscholl and Dowell’s analysis. In addition, Law’s meta-discussion presented another promising automated method that incorporates participation patterns and discourse markers to provide an overview of the nature and depth of students’ engagement with course concepts. Later, Law’s meta-discussion evolved into an analysis presented at the 2011 Alpine Rendez-Vous along with analyses by Teplovs and Fujita, and by Chiu.

In the sections that follow, I critically reflect on the analyses presented by Teplovs and Fujita, Law and Wong, and Chiu that were introduced at the 2011 Alpine Rendez-Vous workshop along the five dimensions for productive multivocality. Then, I turn to discuss the implications of multivocal analysis for design-based research with recommendations for future research.

Five Dimensions for Reflecting on Productive Multivocality

This section delves into the multivocal analyses by Teplovs and Fujita, Law and Wong, and Chiu along the following five dimensions articulated at the Alpine Rendez-Vous workshops: theoretical assumptions, purpose of analysis, unit of analysis/unit of interaction, data representations, and manipulations on data representations.

Theoretical Assumptions

The theoretical assumptions of knowledge building underpin the original study for which the data were collected (Fujita, 2009). These assumptions also drive the analyses by Teplovs and Fujita (Chap. 21) and Law and Wong (2014, Chap. 22). In contrast, Chiu’s analysis (Chap. 23) explicates a method that has relatively few theoretical assumptions and may be compatible with diverse theoretical lenses. Knowledge building is defined as the production and continual improvement of ideas of value to a community, through means that increase the likelihood that what the community accomplishes will be greater than the sum of individual contributions (Scardamalia & Bereiter, 2003, p. 1370).

Knowledge building may be considered a theory, pedagogy, and technology (Scardamalia & Bereiter, 2006). As a theory, it places an overt focus on improving ideas. In knowledge building, ideas are considered “conceptual artifacts” and knowledge work is defined as “work that creates or adds value to conceptual artifacts” (Bereiter, 2002b, p. 69). Broadly, conceptual artifacts belong to the class of cultural artifacts, as in communities of practice, but they are abstract rather than concrete: conceptual artifacts are abstract cultural artifacts (theories, abstract models) that “can be distinguished by the logical relations that exist between them” (Bereiter, 2002b, p. 76).

As a pedagogy, it is an attempt to reform education in a fundamental way to enculturate students into the culture of knowledge creation (Scardamalia & Bereiter, 2006). From this view, advancing the state of community knowledge is not a social process exclusive to experts, but rather one in which students can and should engage if they are to progress along a developmental trajectory from childhood inquisitiveness to mature, disciplined creativity. Knowledge building differs from other learning community models by putting ideas at the center and focusing on idea improvement rather than on collaborative learning activities such as in “communities of learners” (Brown & Campione, 1990, 1994). It is also guided by a set of 12 principles (Scardamalia, 2002) that characterize the complex socio-cognitive and technological dynamics it involves. Although “collective cognitive responsibility” (Scardamalia, 2002) seems to be an overarching principle for knowledge building, all of the 12 principles work in concert with each other, not separately, to drive the knowledge building process.

Knowledge Forum, the second generation of the CSILE (Computer Supported Intentional Learning Environment) software, is a technology designed especially to support knowledge building. Students work in virtual spaces or “views” to develop their ideas, represented as “notes.” Knowledge Forum offers sophisticated features not available in other conferencing technologies, including “scaffold supports” (labels of thinking types), “rise above” (summary note), and a capacity to connect ideas through links between notes in different views. These features provide means to overcome the chronological sequence of threaded discussion, in which important ideas may be lost. In addition, Knowledge Forum facilitates the collection of data that are amenable to analysis with a variety of assessment tools. These include behavioral and interaction analyses (Burtis, 1998), traces of vocabulary development (Hewitt, 1999), social network analysis (SNA; Teplovs, Donoahue, Scardamalia, & Philip, 2007), and semantic analysis (Fujita & Teplovs, 2009).

In addition to the theoretical assumptions of knowledge building, Teplovs and Fujita’s current analysis is informed by those of networked learning and computer-supported collaborative learning (de Laat, Lally, Lipponen, & Simons, 2007). Fundamental to all these approaches is the notion of community dynamics, in which patterns of interaction develop over time and may be investigated using social or semantic network analyses. Assessment tools built into an online environment can provide participants with formative feedback on progress towards advancing the community’s emergent discourse, embodying the knowledge building principle of concurrent, embedded assessment. Further, their approach assumes that a semantic model based on student contributions (you are what you write) is a reasonable one, and that LSA is valid on such short texts.

Likewise, Law and Wong’s analysis is strongly committed to knowledge building. They seek to identify the trajectory of knowledge building from the two perspectives they deem most meaningful: (1) the extent to which a group of students exhibits characteristics of the 12 principles; and (2) the advances in the emergent understanding of the key ideas. Law and Wong suggest that pivotal moments indicate movement from one stage of the trajectory to another, which may be considered from the perspective of both the community and the individual. Following Bereiter (2002b), they accept that the improved ideas or emerging insights from the community cannot be attributed to any one individual or subgroup of individuals. In addition, they assume that an individual would not be able to advance to another stage of the trajectory unless the community as a whole has been able to do so. Furthermore, they acknowledge that not all individuals within the community may achieve the understanding attained by the collective.

Chiu’s chapter showcases Statistical Discourse Analysis (SDA). Chiu explains that SDA has few theoretical assumptions or commitments and suggests that it is possible to use SDA with many theoretical frameworks. Nonetheless, Chiu points to at least three assumptions underpinning his analysis of this particular dataset. First, the analysis assumes that participant-selected scaffolds in notes (participants labeled the characteristics of their own notes by inserting a Knowledge Forum scaffold support when they composed a note), individual differences in participants, and time period (weekly discussions were organized around a topic and led by student discussion leaders) are sufficiently similar to be treated as equivalent for the purpose of this analysis. Second, it takes as given that notes containing scaffolds, participating individuals, and time together constitute a “micro-context” in which future notes emerge. Lastly, the analysis supposes that the characteristics of recent notes, their authors, and time period can influence the characteristics of later notes.

In addition, theoretical assumptions of social metacognition (Chiu & Kuo, 2009; Chiu & Pawlikowski, 2013) play a key role in Chiu’s analysis. Social metacognition goes beyond individual metacognition, which involves monitoring and control of one’s own knowledge, emotions, and actions (Hacker & Bol, 2004), to consider group members’ monitoring and control of one another’s knowledge, emotions, and actions (Chiu & Kuo, 2009). Social metacognition can enhance “micro-creativity,” or the creation of new, useful ideas (Chiu & Pawlikowski, 2013). In Chiu’s analysis, social metacognition refers to how students monitor and control one another’s ideas and actions through questions, evaluations (agree vs. disagree), and summaries. Social metacognition may be likened to the knowledge building principle of collective cognitive responsibility, in which “the responsibility for the success of a group effort is distributed across all the members rather than being concentrated in the leader” (Scardamalia, 2002, p. 68). However, these differ because social metacognition attends to the emotions, public self-image (face), and social rapport building in thinking, whereas collective cognitive responsibility emphasizes the “cognitive” dimension over other aspects. This is not to say that knowledge building pedagogical designs do not pay heed to social aspects of student and teacher interactions. On the contrary, knowledge building teachers take great pains to establish a culture of safety in the database to enable students to take risks in voicing nascent ideas (Zhang, Scardamalia, Reeve, & Messina, 2009). In the original study (Fujita, 2009), students were encouraged to use specially designed materials (Discourse for Inquiry cards) to help them structure their discourse for problem solving in polite and supportive ways. Additionally, opportunities to engage in metacognitive reflection have been found to enhance knowledge building in online courses (Brett, Forrester, & Fujita, 2009; Cacciamani, Cesareni, Martini, Ferrini, & Fujita, 2012).

In all three of the analyses, the theoretical assumptions drive the methodology to go beyond the analysis of an individual student’s behavior or the content of a single note. All three trace the development of collaborative interactions involving more than one student over time. Although there are some similarities between the theoretical underpinnings of Chiu’s work and those of knowledge building, it is likely that researchers from the knowledge building community will point to the fundamental incompatibility of the diverse theoretical assumptions in this application of SDA. They may experience tension over how the theoretical assumptions influenced the resultant methodological choices.

Purpose of Analysis

The purpose of Teplovs and Fujita’s analysis is to examine the relationship between social interactions and the semantics of the written contributions of students participating in an online graduate course. To do so, they introduce a framework and software for learner modeling, the Knowledge, Interaction, and Semantic Student Model Explorer (KISSME), that interweaves social network analysis and latent semantic analysis of online discourse. KISSME uses highly interactive visualizations of semantic and social interactions among learners. It enables researchers to examine the interplay of students’ social interactions and the latent semantic models of those students. They attempted to test the hypothesis that uptake (Suthers et al., 2010) is most likely to occur when the semantic relatedness of the corresponding student models is neither too high nor too low, but at an intermediate level optimal for collaboration.

Law and Wong’s analysis is driven by a strong pedagogical motivation: to investigate the possibility of designing a dashboard of indicators, derived from automated analysis, that helps teachers identify the state of students’ progress, the key problems of understanding students are exploring, and any “at-risk” students. Thus, the authors seek to establish a form of learning analytics through which teachers can access information “on the fly” to understand students’ overall engagement and conceptual advancement in knowledge building.

The purpose of Chiu’s analysis is to use SDA to (1) identify pivotal moments along specific dimensions that divide the data into distinct time periods; and (2) examine variables that significantly increase or decrease the likelihoods of dependent variables of interest. The dependent variables of interest were as follows:

  • H-1. Online discussions have proportionately more ideas, facts and explanations than face-to-face discussions

  • H-2. New fact

  • H-3. Ask for explanation

  • H-4. Theorize

  • H-5. Summarize

All three analyses seek to identify and explore “pivotal moments” in relation to a broader analysis of the dynamics of group processes that support knowledge building. Two of these analyses (Teplovs & Fujita; Law & Wong) investigate the potential of automated analyses for use by students, teachers, and researchers. The third analysis (Chiu) extends SDA to understand the probabilities of one kind of note following another in a sequence in asynchronous online discussions, which would yield useful insights for researchers.

Unit of Analysis/Unit of Interaction

The units of analysis in Teplovs and Fujita’s analysis are documents—online discussion messages called “notes.” It follows that the smallest unit of interaction is two notes. This unit focuses on the interpersonal system and the patterns of interaction between students mediated by notes, going beyond individual contributions to knowledge building. They attempted to show points across the 109 days of the course at which there was progression in (1) the latent semantic learner model (LSLM) networks; and (2) the social interaction network determined through the intensity of shared reading events.

Law and Wong’s analysis combines quantitative and qualitative methods and several units of analysis. First, to identify pivotal weeks, they compute median and dispersion statistics of individual students’ writing and reading behavior on a week-to-week basis using the note as the unit of analysis. Second, Law and Wong use threads (discussion threads, or tree structures of at least two notes) to examine pivotal weeks. Third, they utilize keywords (nouns, or noun phrases collocated as one word) as units of analysis to examine students’ engagement with core concepts on a week-by-week basis. The keywords they used were based on a chapter from Bereiter (2002b): idea, knowledge building (collocated as one noun phrase; KB), discourse, conceptual artifact (CA), belief mode, design mode, and world (as in Popper’s (1972) world 1, world 2, and world 3). Fourth, Law and Wong used machine identification of discourse markers to track the presence of question markers that might indicate factual, explanatory, and elaboration questions. This analysis, like the thread-level indicators, was used to delve into the establishment of a progressive inquiry orientation. The discourse markers comprise words and phrases that identify various question types. Fifth, Law and Wong trace advances in students’ conceptual understanding by conducting qualitative content analysis of a subset of data using the sentence as the unit of analysis. Finally, although Law and Wong concede that this “crude selection and analysis process” does not indicate whether any of these ideas were developed outside of this subset of data, they found a prominent theme around the concept of idea improvement emerging through qualitative analysis of 10 of the 48 sentences.
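
To make the keyword and question-marker counting concrete, the following Python sketch tallies occurrences per week using simple substring matching; the lexicons and notes are invented stand-ins, not Law and Wong’s actual lists:

```python
# A minimal sketch of week-by-week keyword and question-marker counting.
# The lexicons and notes below are illustrative assumptions, not study data.
keywords = ["idea", "knowledge building", "discourse", "conceptual artifact"]
question_markers = ["why", "how", "what if", "can you explain"]

notes_by_week = {
    3: ["Why do we treat an idea as a conceptual artifact?"],
    9: ["Can you explain how knowledge building discourse improves ideas?",
        "What if we summarize the discourse so far?"],
}

for week, notes in sorted(notes_by_week.items()):
    text = " ".join(notes).lower()
    keyword_counts = {k: text.count(k) for k in keywords}
    marker_counts = {m: text.count(m) for m in question_markers}
    print(f"Week {week}: keywords={keyword_counts} markers={marker_counts}")
```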

Chiu’s unit of analysis is the sequence of one type of note following another and how this sequence affects note content (Chiu, 2000a). At a minimum, this involves two notes. His analysis examines sequences within a subset of the data (306 student notes) that contain scaffold supports. Students can label a particular “thinking type” by inserting one or more Knowledge Forum scaffold supports while composing a note. Chiu’s SDA models the probabilities of these sequences.
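
The intuition behind modeling note-to-note sequences can be illustrated with a first-order transition table, a far cruder device than Chiu’s SDA; the scaffold sequence below is invented:

```python
from collections import Counter, defaultdict

# Hypothetical sequence of scaffold labels on consecutive notes in one thread.
sequence = ["My Theory", "New Information", "My Theory", "I need to understand",
            "My Theory", "Putting our knowledge together"]

# Count transitions from each scaffold type to the next one in the sequence.
transitions = defaultdict(Counter)
for prev, nxt in zip(sequence, sequence[1:]):
    transitions[prev][nxt] += 1

# Convert counts into conditional probabilities P(next type | previous type).
for prev, counts in transitions.items():
    total = sum(counts.values())
    print(prev, "->", {nxt: round(c / total, 2) for nxt, c in counts.items()})
```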

In short, the three analyses examine manifold units of analysis or interaction over time. The analyses by Teplovs and Fujita and by Chiu concentrate on interactions, or relationships among students, rather than on the properties of individual notes. In Teplovs and Fujita’s analysis, the relationship assessed is between documents or notes written by students, where they assume each student author is represented by the notes that he or she writes. In Chiu’s analysis, the relationship examined is the sequence of at least two notes containing particular scaffolds labeled with a thinking type or discourse process. One vulnerability in Chiu’s analysis is its assumption that the scaffold supports accurately reflect the discourse processes in the text; it is susceptible to critique unless a neutral observer can predict the scaffold supports that the students used to label, or self-code, their own notes in the database. In Fujita’s (2009) study, however, this problem was addressed by randomly selecting 56 segments of student discourse containing a scaffold support from the sample (scaffold supports either bracketed or preceded segments of text, setting them apart from the rest of the note). The scaffold supports that the student participants actually used were then omitted from the text, and another graduate student was asked to supply the appropriate scaffold based on the discourse processes reflected in the text. The resulting percentage agreement showed that 79 % of the time a graduate student could predict the scaffold support that another graduate student had used.
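
Percentage agreement here is simply the share of matching labels; a toy Python sketch with invented labels (not the study’s data):

```python
# Percentage agreement between students' own scaffold labels and a rater's
# blind guesses. Both lists are invented stand-ins, not the study's data.
student_labels = ["My Theory", "New Information", "My Theory",
                  "Putting our knowledge together"]
rater_guesses = ["My Theory", "New Information", "Opinion",
                 "Putting our knowledge together"]

matches = sum(s == r for s, r in zip(student_labels, rater_guesses))
agreement = matches / len(student_labels)
print(f"Percentage agreement: {agreement:.0%}")  # 75% for this toy sample
```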

Law and Wong’s analysis employs several units of analysis at varying levels of granularity: a thread, a note, a sentence, and discourse markers (a word or phrase). The analytic toolkit (ATK; Burtis, 1998) built into Knowledge Forum facilitates some of these analyses of knowledge building dynamics. Previous researchers have reported findings correlating such quantitative indicators of participation with portfolio scores and conceptual understanding (e.g., Lee, Chan, & van Aalst, 2006). Relationships between knowledge building and extensive writing, reading, and the use of features such as build-on notes, rise-above notes (summaries and higher-order syntheses), referencing, and scaffolds have also been identified (Zhang et al., 2009). Law and Wong’s week-by-week analysis differs from the summary analysis that researchers often compute to get an overview of a Knowledge Forum database. Their week-by-week approach is likely to be useful for researchers and teachers in identifying and describing changes in participation patterns and engagement in knowledge building. The authors’ strong background in knowledge building gives purchase to their final qualitative analysis, particularly as they can discern the relevant and irrelevant concepts that students discuss in the data. For example, in tracing the development of concepts, they eliminated sentences in which keywords were not used as unique conceptual terms, such as “world” in commonplace usages like “virtual world” rather than in reference to Popper’s (1972) theory of the three worlds (cf. Tscholl & Dowell, 2009).

Data Representations

While much attention has been paid to the development of graphical representations of quantitative data, less attention has been paid to the graphical displays of qualitative and mixed methods data (Onwuegbuzie & Dickinson, 2008). Visual techniques can assist with data reduction and conclusion drawing/verification in qualitative and mixed methods research (Leech & Onwuegbuzie, 2007; Miles & Huberman, 1994). Thus, information visualizations may reveal “pivotal” moments unfolding online; concurrently, attention must be paid to the crucial information they may conceal.

Visualizations play a central role in Teplovs and Fujita’s analysis. First, Teplovs and Fujita generated high-dimensional vector representations of the content of notes via LSA. That is, the words in the notes are turned into a vector of numbers. Examining the co-occurrence of words in a term-by-document matrix, followed by singular value decomposition of that matrix, reveals the semantic similarity between notes. These semantic relationships complement the structural (build-on or reply) and the social (note reading) relationships that may exist among students over time. Second, network and adjacency matrix representations of productive collaborative interactions among students were explored to test predictive models of similar semantic contributions and shared reading behavior over the 109 days of the course data. One bias in their approach is the assumption that students’ cumulative written notes or artifacts in Knowledge Forum can be considered learner models. Another may be that they apply the Vygotskian notion of scaffolding in the zone of proximal development to predict that the individuals most likely to benefit from collaborative learning situations are those who are semantically not too close and not too far, but just right (cf. Zampa & Lemaire, 2002). These assumptions conceal cognition and metacognition not written in Knowledge Forum notes but perhaps communicated among students via other modes available in the course (synchronous chats, telephone conversations, videoconferences, and online learning journals/blogs). The interpretability of the network diagrams and adjacency matrices is also a concern, but consistent with Tufte’s (2001) six principles of analytical design, they attempt to compare and explain the evidence from social interaction and semantic content recorded in the database. Their approach is promising in that it advances current visual techniques for quantitative representation of online discussion data and identifies compatible students based on LSLMs (for instance, students who coauthored notes or led discussions together).
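
To make the LSA step concrete, here is a minimal Python sketch using scikit-learn; the notes, the TF-IDF weighting, and the number of latent dimensions are illustrative assumptions, not the settings Teplovs and Fujita report:

```python
# A minimal LSA sketch: term-by-document matrix -> truncated SVD -> similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

notes = [
    "My theory is that ideas are conceptual artifacts we can improve.",
    "New information: Bereiter distinguishes belief mode from design mode.",
    "Putting our knowledge together: idea improvement drives knowledge building.",
]

# Term-by-document matrix (here weighted by TF-IDF rather than raw counts).
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(notes)

# Singular value decomposition onto a small number of latent dimensions.
svd = TruncatedSVD(n_components=2, random_state=0)
vectors = svd.fit_transform(X)  # one dense vector per note

# Cosine similarity between note vectors approximates semantic relatedness.
similarity = cosine_similarity(vectors)
print(similarity.round(2))
```

Each row of `vectors` is a learner-model-style representation of one note, and high off-diagonal values in `similarity` flag semantically related notes.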

Law and Wong’s analysis aims to use “simple, easy to understand graphical displays accessible to teachers.” They propose different representations to reveal different layers of insight from both quantitative analyses (participation statistics, week-by-week questions and keywords) and qualitative analyses (content analysis at the note and sentence level). For example, they utilize a boxplot to represent the number of notes created and the percentage of notes read by students each week. This representation differs from the tabular format of these metrics that the ATK (Burtis, 1998) generates. Alternatively, they employ line graphs to show the frequencies of discourse markers for various communication functions. These sorts of data representations offer learning analytics or “teaching analytics” (Vatrapu, Teplovs, Fujita, & Bull, 2011) that may be meaningful and actionable to those teachers who are able to decipher them. However, Bachelor of Education programs do not prepare teachers to be researchers (Donald, 2002; Labaree, 2003), and teachers’ comprehension of statistical data displays is limited by the lack of exposure in teacher preparation programs (Jacobbe & Horton, 2010). Thus, professional development is necessary to enable teachers to make sense of the graphical displays so that they can understand their students’ engagement in knowledge building and enact timely decisions to foster epistemological growth.
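
A boxplot of weekly participation can be produced in a few lines; in this sketch the per-student counts are fabricated for illustration:

```python
import matplotlib.pyplot as plt

# Hypothetical notes-written counts per student for three course weeks.
notes_written = {
    "Week 3": [2, 5, 7, 4, 9, 3, 6],
    "Week 9": [4, 5, 5, 6, 4, 5, 5],   # low dispersion, as in a "pivotal" week
    "Week 10": [3, 8, 6, 10, 2, 7, 5],
}

fig, ax = plt.subplots()
positions = list(range(1, len(notes_written) + 1))
ax.boxplot(list(notes_written.values()), positions=positions)
ax.set_xticks(positions)
ax.set_xticklabels(notes_written.keys())
ax.set_ylabel("Notes written per student")
ax.set_title("Week-by-week participation (illustrative data)")
plt.show()
```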

Chiu’s analysis employs standard representations of quantitative information such as a database table, a summary statistics table, a breakpoints table, a time series graph, and a path diagram to convey the nonlinear sequence of notes with scaffolds (self-coded notes) and the probabilities of group problem solving outcomes. These tables and figures are conventional ones following the APA style guide (American Psychological Association, 2009). They offer clear ways to make the complexity of the SDA more accessible to the reader by organizing the sequences of words (cognition and social metacognition) and numbers (probabilities) together in a diagram.

Analytic Manipulations on Data Representations

Teplovs and Fujita’s analysis first generated network diagrams, which are dynamic and illustrate changes in interaction patterns over time. To increase interpretability, they manipulated the data representation into an adjacency matrix based on LSLMs, in which intermediate similarity values are indicated by intensity of color; this matrix predicts pairs of students who should interact productively. Next, they generated an adjacency matrix based on the intensity of shared reading events among students who actually interacted by reading each other’s notes in Iteration 1. Comparing the pairs of students who should have interacted based on semantic similarity with the actual reading interactions, they found that such automated analyses can accurately identify productive collaborative relationships (e.g., pairs of students who led weekly discussions, authors of cowritten notes). However, as these analyses were conducted post hoc as a summary analysis, it was not possible for the researcher or teacher to explore the effect that formative feedback might have had in cases where pairs of students with intermediate levels of semantic similarity, who should therefore have interacted productively, did not in fact interact through reading as predicted.
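
The predicted-versus-actual comparison can be sketched as follows; the similarity band, reading threshold, and matrices are invented assumptions rather than KISSME’s actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # hypothetical number of students

# Predicted adjacency: pairs whose learner-model similarity falls in an
# intermediate band (the 0.4-0.7 thresholds are illustrative assumptions).
similarity = rng.uniform(0, 1, (n, n))
similarity = (similarity + similarity.T) / 2  # make it symmetric
predicted = (similarity > 0.4) & (similarity < 0.7)

# Actual adjacency: pairs with intense shared reading (threshold illustrative).
reads = rng.integers(0, 30, (n, n))
actual = (reads + reads.T) > 25

# Compare off-diagonal entries: how often does intermediate semantic
# similarity coincide with actual reading interaction?
off_diagonal = ~np.eye(n, dtype=bool)
agreement = (predicted == actual)[off_diagonal].mean()
print(f"Predicted/actual agreement: {agreement:.2f}")
```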

Law and Wong’s analysis currently offers only static displays of quantitative and qualitative information. The authors would like to work towards more dynamic, open displays that enable users to access different layers of the analyses.

Chiu’s SDA incorporates sophisticated and comprehensive analytic manipulations to address the analytic difficulties in modeling (1) sequences of notes within a tree structure that differs across topics; (2) four infrequent, dependent variables; (3) many explanatory variables that might yield mediation effects or false positives; and (4) general issues of missing data and robustness procedures (see Table 2 in Chiu’s chapter).
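
SDA itself uses multilevel models with the robustness procedures listed above; the following drastically simplified Python sketch conveys only its core move of regressing a note’s property on properties of the preceding note. All variables and data are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200  # hypothetical number of notes, in chronological order

# Invented indicators: does each note contain a new fact? a summary?
new_fact = rng.integers(0, 2, n)
follows_fact = np.roll(new_fact, 1)  # previous note's indicator
summary = ((follows_fact == 1) & (rng.random(n) < 0.6)).astype(int)

# Regress the odds of a summary on whether the previous note had a new fact,
# dropping the first note, which has no predecessor.
X = follows_fact[1:].reshape(-1, 1)
y = summary[1:]
model = LogisticRegression().fit(X, y)

odds_ratio = np.exp(model.coef_[0][0])
print(f"Odds ratio for a summary after a new fact: {odds_ratio:.2f}")
```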

Pivotal Moments

Teplovs and Fujita defined a pivotal moment as “a rise-above or synthesis moment” at the Alpine Rendez-Vous 2009. In the current paper, they consider a pivotal moment to be “a point in time at which the semantic structure of the community changed in an important way.” If we take intermediate semantic similarity and intensity of reading behavior between pairs of students as the measure of optimal collaborative interactions, the points in time when such interactions happened could be considered pivotal moments. The primary goal of Teplovs and Fujita’s analysis, however, was not to pinpoint pivotal moments, but rather to examine the relationships between social interactions and the semantic relationships between notes in an online learning environment. They attempted to extend the state of the art in graphical representation of such collaborative interactions in asynchronous discussion data. Accordingly, they present a summary analysis from the 100th day of the course (no notes were posted on the final 9 days of the course), showing pairs of students who collaborated or have the potential to collaborate effectively. Generating visualizations at earlier points in time would be helpful for tracing the development of both concepts and collaborative relationships over the course.

Law and Wong adopt two different definitions for the notion of pivotal. The first refers to time periods—pivotal weeks—that may be particularly productive. This matches the curriculum design of the online course, in which discussions were organized around weekly themes. To this end, weekly statistics on participation (medians and dispersions in the number of notes written and read) and on discourse structure (thread size, thread depth, number of references used) were calculated using the ATK; the thread-structure indicators can be derived directly from the build-on relations among notes, as sketched below. The second definition refers to breakthroughs in students’ understanding of key concepts—pivotal moments—found through qualitative content analysis of the students’ discourse. The analyses dovetailed to identify week 9 as a pivotal week in terms of: (1) group dynamics conducive to knowledge building (smallest dispersion in writing and reading, but highest percentage of notes read); (2) establishment of a progressive inquiry orientation (smallest number of notes but high on indicators related to collective cognitive responsibility such as question markers, scaffolds, and references); and (3) engagement with targeted curricular concepts (high frequency of eight keywords). Tracing advances in conceptual understanding through qualitative analysis, Law and Wong found that week 10 was pivotal in terms of new ideas introduced. They suggest that the previous week may have had a positive effect, since week 9 was also productive. In week 9, four of the nine new ideas introduced came from the “teachers” (instructor and researcher) rather than from the students. Perhaps modeling in week 9 helped students to exercise higher levels of agency to build knowledge in week 10.
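
A small sketch of deriving thread size and depth from build-on (reply) links; the note table is invented:

```python
from collections import defaultdict

# Each entry: (note_id, parent_id), with parent_id None for thread starters.
notes = [(1, None), (2, 1), (3, 2), (4, 1), (5, None), (6, 5)]
parent = {note_id: parent_id for note_id, parent_id in notes}

def root_and_depth(note_id):
    """Walk up the build-on links to find the thread starter and the depth."""
    depth = 1
    while parent[note_id] is not None:
        note_id = parent[note_id]
        depth += 1
    return note_id, depth

threads = defaultdict(list)
for note_id, _ in notes:
    root, depth = root_and_depth(note_id)
    threads[root].append(depth)

for root, depths in threads.items():
    print(f"Thread {root}: size={len(depths)}, depth={max(depths)}")
```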

In Chiu’s chapter, summaries are seen as “pivotal messages that radically change the interaction” (Wise & Chiu, 2011). Summaries of discussion have been shown to enhance knowledge construction both in the note itself and in subsequent notes (Wise & Chiu, 2011). Chiu’s analysis did not find pivotal moments thus defined among sequences of notes containing certain scaffold supports (My Theory, New Information). However, Chiu’s analysis did identify six notes containing scaffold supports recoded into the “summarize” (e.g., Putting our knowledge together) and “ask for explanation” (e.g., I need to understand) variables. Chiu asserts that certain individual characteristics, such as gender and occupation, as well as the presence of new information, predict summaries. Omitting one moment that occurred in a private view, the notes containing summaries that might be pivotal occurred in course weeks 6, 8, 9, and 10. These correspond with some of the same weeks (weeks 6, 9, 10) that Law and Wong also identify as being unusual in some way.

In summary, all three analysts may have found pivotal moments in the data, but their definitions of pivotal moments are diverse and their findings appear to have few commonalities. Teplovs and Fujita and Law and Wong suggest that there are pivotal moments that changed discussion on a larger scale, whereas Chiu does not. Chiu’s pivotal moments are found on a finer time scale, or “micro-time contexts.” The lack of shared pivotal moments may be influenced by the disparate theoretical and methodological assumptions undertaken. Both Teplovs and Fujita’s and Law and Wong’s analyses are faithful to knowledge building theory and empirically driven, while Chiu’s analysis employs SDA to analyze knowledge creation in asynchronous discussion data and is methodologically driven.

Teplovs and Fujita’s exploratory analysis attempted to identify points in time at which the semantic structure of the community changed in an important way, but presents a summary analysis from the end of the course. It introduces a vision for a leading-edge software system, KISSME, that can be used in future intervention studies, but more analysis is needed to realize its potential and guide further development.

Law and Wong’s analysis identifies multiple pivotal weeks and moments through an array of quantitative and qualitative methods that is somewhat messy but nonetheless compelling. In my original study, I too chose Week 9 to begin qualitative discourse analysis because it appeared most promising for discovering instantiations of progressive discourse. I later abandoned it because it featured a relatively small number of notes, written mostly by four doctoral students, the Instructor, and the Researcher. Additionally, the use of scaffold supports, especially the “Idea Improvement” scaffolds introduced for this week, was made mandatory by the student discussion leaders. Although “disciplined creativity” is characteristic of knowledge building (Scardamalia & Bereiter, 2003), students vehemently complained of the structure the scaffold supports imposed on their thinking. To compare the discourse from the beginning and end of the course, I eventually selected Week 3 and Week 10 for in-depth analysis. Yet Law and Wong’s analysis has renewed my interest in revisiting Week 9 as a pivotal week for knowledge building in this dataset. Refinement of Law and Wong’s methodological design and the further development of Knowledge Forum assessment tools offer much promise.

Even relatively expert students, such as graduate students in education, rarely engage in convergent processes such as writing synthesis or summary notes without considerable direction from the instructor (Hewitt, 2001, 2005). Chiu’s analysis showed that demographics and occupation can account for differences in discussion behaviors. The small number of participants may limit the generalizability of the statistical inferences made here, but the large number of notes in the dataset lends credibility. Interestingly, Chiu found that informal cognition in the form of opinions, anecdotes, elaboration, and facts increased the likelihood of subsequent formal cognition in the form of more facts, theories, and summaries. Chiu defines pivotal moments as summaries yet found only one instance of such a note, in Week 6. One explanation may be that Chiu’s analysis is restricted to the subset of note sequences that contain scaffold supports. Aside from Week 9, scaffold use was optional and idiosyncratic: a few students used them prolifically, others just once. It is possible to compose summary notes without inserting a scaffold support, but Chiu’s current analysis would conceal such pivotal moments.

Chiu reveals that social metacognition, in the form of questions and differing opinions, affected the likelihood of new facts, explanation requests, and theories. This is consistent with students posing wonderment questions to investigate a problem of understanding through knowledge building discourse (Scardamalia & Bereiter, 1991, 1994), but a knowledge building researcher would not consider questioning to be social metacognition. Chiu found that a new fact has the largest effect on a subsequent summary. Knowledge building involves an emergent process of explanatory coherence (Thagard, 1989), whereby groups of students contribute ideas and advance theories that best explain the facts. Students can use a New Information scaffold to contribute new facts gleaned from authoritative sources and a Putting our Knowledge Together scaffold to label a better theory, but they may not. Further SDA using micro-context codes applied by researchers, rather than by students, would increase methodological rigor and perhaps convince knowledge building researchers who are familiar with the particular difficulties of analyzing group processes supporting knowledge building in large textual corpora within the Knowledge Forum platform.

Implications for Design-Based Research

I was given the particular distinction of serving as a discussant for multivocal analyses on data that I collected for a larger design-based research study (Fujita, 2009). In the original study, I investigated how instructors could foster progressive discourse for knowledge building in three online graduate education courses. Productive multivocality is more than just data sharing. I shared the data from an online graduate education course using Knowledge Forum (Fujita, 2014) to seek new perspectives from alternate analyses and the effect that this would have on my own insights. Seeing, reading, and being exposed to other researchers’ analyses of my data influenced my research by encouraging me to collaborate with colleagues to examine: (1) leading-edge automated approaches such as visualizations and network analyses that make assessments for learning more applicable to educational practice (Teplovs, Fujita, & Vatrapu, 2011; Vatrapu et al., 2011); and (2) sophisticated statistical modeling methods for investigating knowledge creation processes in education (Chiu & Fujita, accepted).

These forays into automated and quantitative approaches to analyzing asynchronous discourse data urged me to reexamine my epistemological beliefs and practices as a design-based researcher. In the sections that follow, I outline in brief characteristics of design-based research relevant for this discussion, note advantages and challenges of the multivocal analyses for design-based research, and summarize the implications for future design-based research studies.

Characteristics of Design-Based Research

Responding to major changes in the focus of learning theory from the study of individual behavior and cognition to larger interactive systems, Ann Brown (1992) introduced the term “design experiments” to label a new methodology for carrying out studies of educational interventions (Collins, Joseph, & Bielaczyc, 2004). Design experiments are iterative, situated, and theory-based attempts to understand and improve educational processes (Brown, 1992; Cobb, Confrey, diSessa, Lehrer, & Schauble, 2003; Collins, 1992, 1999; diSessa & Cobb, 2004; Edelson, 2002). Design-based research allows researchers to study complex learning where it is difficult to test the causal impact of particular variables with experimental designs (Barab, 2006). It deals with complexity by iteratively changing the design of the environment over time, collecting evidence of its effects, and recursively refining successive designs. Quantitative and qualitative methods may be combined but, as in qualitative research, rigor is ensured through criteria such as trustworthiness and credibility (akin to reliability and validity) and usefulness (analogous to generalizability or external validity) (Barab & Squire, 2004). Like participatory action research, design-based research also involves participants who bring their different expertise into producing and analyzing designs. However, design-based research is distinguished by its goal to advance new theories and practices that can be generalized to other educational settings. Following Hoadley (2002), I use the term “design-based research” rather than “design experiments” or “design research” to avoid mistaken identification with experimental design, studies of designers, or trial teaching methods.

Design-based research and traditional psychological experiments differ on paradigmatic issues such as ontology, epistemology, methodology, and axiology. Design-based researchers assume a participative reality instead of positing that the knower has an existence independent of the known (Barab & Kirschner, 2001). Design-based researchers’ epistemological stances also vary along a continuum (see Fig. 24.1):

Fig. 24.1 Epistemological stances among design-based researchers, adapted from Dede (2004)

As Dede (2004) notes, some researchers (e.g., diSessa & Cobb, 2004) are on the objectivist end of the epistemological continuum, but he suggests that most are in the middle, with cognitivists closer to the objectivist stance and situated learning theorists on the subjectivist side. In terms of methodology, design-based researchers typically use mixed methods to describe complex phenomena over time. For example, traditional pretest and posttest data may be combined with a few in-depth analyses of some students (A. L. Brown, 1992). Additionally, values play a large role in interpreting results. Bereiter (2002a) argues that design-based research is defined not by its methods but by its goal of sustained innovation in education. Likewise, diSessa and Cobb (2004) suggest that the goal of design-based research should be ontological innovations. Finally, design-based research shares the philosophical characteristics of pragmatism with mixed-methods research (Tashakkori & Teddlie, 2003), but differs in that one of its goals is to advance theory (Barab & Squire, 2004; Cobb et al., 2003; diSessa & Cobb, 2004).

The multivocal analyses by Teplovs and Fujita, Law and Wong, and Chiu seek to identify and explore “pivotal moments” in relation to the broader analysis of group processes that support knowledge building. As I participated in the data collection and the analyses, I accept a participative reality. Previously, my epistemological stance leaned towards the situated learning end, but after embracing multivocality, I seem to have moved a little closer to the objectivist side of the epistemological continuum. Being a pragmatist, I have always deployed mixed methods. As a member of the knowledge building community, I aimed to advance understanding of how knowledge building discourse can be fostered among students in online graduate courses.

Advantages and Challenges of the Multivocal Analyses for Design-Based Research

Multivocal analyses offer new perspectives for design-based researchers open to critically reflecting on their own theoretical and methodological contributions. The divergent voices along the five facets of multivocality offer advantages as well as disadvantages for reconsidering the findings from analyses of the shared data, which came from the second of three iterations of a larger design-based research study.

One significant advantage of multivocal analyses is the sharing of data. Design-based research projects collect large amounts of data over several iterations. Inevitably, some of this data is left unanalyzed. Collins et al. (2004) recommended that the design-based research community “establish an infrastructure that would allow researchers at other institutions to analyze the data collected in design studies, in order to address their own questions about learning and teaching” (p. 40). A recent analysis by Anderson and Shattuck (2012) of the five most-cited design-based research articles from each year in the past decade revealed that there was no evidence of data sharing among diverse research teams in their sample of 47 articles. The multivocal analyses in this section embrace data sharing among researchers at three different geographic locations (Canada, United States, and China) and encourage a complex social construction of meaning around the asynchronous online discussion data.

Another advantage is that the automated and quantitative analyses presented in this section offer researchers the possibility of conducting just-in-time assessments of student learning within and between iterations of a design-based research study. Modeling student learning during the redesign cycles defines this iterative research approach (Kelly, Baek, Lesh, & Bannan-Ritland, 2008). Increasingly, it is possible to model student learning and more complex twenty-first century skills such as collaboration, problem solving, and learning to learn in real time and to generate usable visualizations of this activity (Johnson, Bull, Reimann, & Fujita, 2011). Cutting-edge assessment tools may enable design-based researchers to collaborate with teachers to modify and improve interventions, whether instructional or technological, in real time instead of in retrospect. This may also open the possibility for other stakeholders such as students, parents, and policy makers to participate in the design process and voice their needs in their local contexts of use. While the particularity of the intervention means that the impact of design-based research on practice is on a small scale, the design principles that can emerge out of these rich conversations hold great promise.

Finally, multivocal analyses may resolve some concerns about the question of causality in design-based research, which often employs mixed methods. As Reimann (2011) argues, causality in design-based research is a “particular causation” (Miles & Huberman, 1994, p. 147) or “action causality” (Abell, 2004) that pertains to the local needs of the particular participants involved, similar to qualitative research (Maxwell, 2004). As design-based research is distinguished by its goal to advance new theories, however, explaining how theoretical conjectures will function in the designed features of the environment, mediate learning and produce intended outcomes is an important concern (Sandoval, 2013).

Multivocal analyses were instrumental for me in reflecting on the “conjecture map” or argumentative grammar of my design-based research study and in articulating its “mediating processes” or “design conjectures” (Sandoval, 2013). From Knowledge Building theory, the intended outcomes of the study were to foster students’ understanding of the commitments to progressive discourse (to seek common understanding and to expand the base of accepted facts) and to produce high-quality discourse among online graduate students. To do so, the “embodiment” included three intervention designs (tools and materials, activity structures, discursive practices): a reading by Bereiter (2002b), Discourse for Inquiry cards, and Knowledge Forum scaffold supports. The mediating processes, theorizing (explanations supported by authoritative sources or new information) and summaries (rise-above or convergent processes), became more explicit for me through the multivocal analyses as I collaborated with the analysts. While the process of explanatory coherence (Thagard, 1989) is crucial to Knowledge Building and theorizing, it was not made so explicit in my own work at the beginning of the study. Moreover, since previous related studies had examined younger students, the emphasis had been on encouraging students to provide explanations rather than facts, but my study found that the graduate student participants actually provided little evidence to back their claims in online discussions. Students did contribute summaries, but it would have been useful for instruction if summaries had been framed as potential pivotal moments for shared conceptual understanding within and across weekly views in the Knowledge Forum database. Future research would benefit from such reflections on the mediating processes for supporting progressive discourse for Knowledge Building.

Yet challenges remain for productive multivocality in design-based research. Chief among them is the lack of convergence on the definition and findings of pivotal moments, perhaps as a result of the divergent theoretical assumptions underpinning the analyses. For example, while Chiu claims his SDA method has few theoretical assumptions or commitments and may be compatible with many theoretical frameworks, researchers in the knowledge building community will have difficulty reconciling the modeling of social metacognition with knowledge building. Even when the theoretical assumptions underlying the analyses are the same, as in Teplovs and Fujita’s and Law and Wong’s analyses, inconsistencies in the units of analysis make the accumulated findings difficult to interpret and apply in practice. Future research is needed to refine the design of the methodologies and assessment tools presented. For example, Chiu’s analysis would benefit from micro-context coding of the larger dataset to showcase his SDA on the asynchronous discussion dataset, which might reveal more pivotal moments. Teplovs and Fujita’s analysis would capture pivotal moments more effectively if the analyses could be conducted in earlier weeks of the course, perhaps as a pretest and posttest to assess the effectiveness of a particular intervention (e.g., the introduction of new scaffold supports), or for formative rather than summative assessment. It would also be useful to see what happens between iterations of the larger study. Finally, Law and Wong’s analysis would benefit from streamlining to harness the most important aspects of its multifaceted analyses. This would also make the assessments more usable for the researchers and teachers who must collaborate closely in design-based research to optimize learning outcomes for students.