The key idea of this volume is that scientific and practical advances in an area of study can be obtained if researchers working in multiple traditions—including traditions that have been assumed to be mutually incompatible—make a concerted effort to engage in dialogue with each other, comparing and contrasting their understandings of a given phenomenon and how these different understandings can either complement or mutually elaborate each other. Incompatibilities may remain but at least are reduced to essential and possibly testable differences once the noise of nonessential differences has been reduced. This key idea potentially applies to many fields, particularly in the social and behavioral sciences in which no single tradition has established primacy. The present volume offers case studies and insights of interest to anyone concerned with understanding the coordinated use of multiple methods but goes beyond mixed methods to address the coordinated joint work of diverse methodologists or the discourse within a diverse or “multivocal” discipline.

The researchers involved as editors and authors in the present volume work in the areas of collaborative learning, technology-enhanced learning, and cooperative work. We share an interest in understanding group interactions, including interactions mediated by various technologies ranging from paper and pencil to online environments. We approach this topic from a variety of traditional disciplinary homes and theoretical and methodological traditions that converge in a “field” known as computer-supported collaborative learning (CSCL) (Koschmann, Hall, & Miyake, 2001), the study of how interaction leads to learning with the support of designed artifacts. CSCL is situated more generally in the learning sciences (Sawyer, 2006), the interdisciplinary study of human learning and of the design and implementation of innovations and methods in support of learning and instruction. In addition to the methodological project behind the key idea, this volume also offers research contributions within CSCL and the learning sciences.

The diversity of CSCL is salient to anyone involved in the conference series or journal that bears this name. The CSCL community is an international community (Kienle & Wessner, 2006) consisting of researchers, designers, and practitioners from computer science, education, educational psychology, human–computer interaction, and psychology as well as linguistics and other educational, information, learning, and social sciences (Wessner & Kienle, 2007). Hence numerous theoretical frameworks and methodological traditions drive work in this community to the extent that one can question whether it can be called a single field of study.

We take the term multivocal from Bakhtin (Bakhtin, 1981; Koschmann, 1999), who used it to describe the presence of multiple “voices” that can be discerned in texts. Here the “text” is the collective discourse of those who identify with the CSCL community and its core values. This multivocality is a strength only to the extent that there is sufficient commonality to support dialogue between the voices and reach some degree of coherence in the discourse of CSCL (Suthers, 2006). The learning sciences and CSCL are too diverse (theoretically and methodologically) for unification to be possible. Moreover, unification is not at present even desirable—diversity is our strength in exploring alternate approaches to understanding learning in interaction. However, we would benefit from boundary objects (Star & Griesemer, 1989) that form the basis for dialogue between theoretical and methodological traditions applied to the analysis of learning in and through interaction. The question at hand is what constitutes effective boundary objects and how they may be leveraged.

Motivated by these considerations, the authors of this volume and other colleagues collaborated over a period of 5 years through a series of workshops and online interaction, seeking appropriate boundary objects and strategies for supporting productive multivocality between multiple analytic traditions in CSCL. This collaboration has become known as the “productive multivocality project.” With this book we offer to colleagues in our own and other fields the insights of our activities. This chapter provides an overview of the project and summarizes its lessons. After a brief history of the project, the chapter summarizes dimensions for describing analytic approaches (discussed further in Chap. 2, Lund & Suthers, 2014), the composition of our data corpus, and strategies for productive multivocality (see also Chaps. 32–34: Dyke, Lund, Suthers, & Teplovs, 2014; Lund, Rosé, Suthers, & Baker, 2014; Rosé & Lund, 2014). Readers interested primarily in an executive summary of our insights are encouraged to read the present chapter with Chap. 31 (Suthers, Lund, Rosé, & Teplovs, 2014), which provides a more comprehensive post hoc summary of what we have learned. But the accounts in these summary chapters are given in the abstract: the case studies through which our work was conducted provide concrete examples. The body of this volume consists of five sections, each using a case study to investigate specific barriers to multivocal analyses, strategies to overcome these barriers, and benefits that may accrue from leveraging theoretical and methodological diversity. These case studies also offer other potential value to readers beyond the productive multivocality objectives. They serve as examples to students learning about new methods (see also Chap. 32), provide examples of how multiple methods may be combined in approaching one’s own data (complementing volumes such as Tashakkori & Teddlie, 2003), and yield research results that may be of interest to researchers studying the specific settings and phenomena we analyzed. The reader is referred to Chap. 3 (Suthers, Rosé, Lund, & Teplovs, 2014) for a guide to selective reading of the rest of the volume under these various reading objectives. The final section of the book discusses various issues encountered and lessons learned, offering implications for research programs and fields of study. Let us now begin our story.

Origins and Development of the Productive Multivocality Project

This project received inspiration from and emerged out of various earlier efforts, including a video analysis workshop at CSCL 1999 (Suthers, Christie, Goldman, & Hmelo-Silver, 1999), Tim Koschmann’s “data fest” workshops at several CSCL and Winter Text Conferences, and various workshops and collaborations organized by Gerry Stahl around the Virtual Math Teams data (culminating in Stahl, 2009). The present Productive Multivocality project developed through a series of workshops at the International Conference on the Learning Sciences (ICLS) in 2008 and 2010, the CSCL conference in 2009, and the STELLAR Alpine Rendez-Vous (ARV) in 2009 and 2011. An interim report was also presented at a CSCL 2011 symposium (Suthers et al., 2011). Below we describe the motivations for each workshop and how major lessons learned led to changes in our strategy in each subsequent workshop. This historical account is relevant because it explains how our findings are based on what went wrong or was found to be insufficient as well as on what worked.

A Common Framework for CSCL Interaction Analysis (ICLS 2008)

A premise of our first workshop was that common conceptions, representations, and tools are needed to support and bridge between multiple theoretical perspectives as well as to facilitate the application of different analytical methods and tools to complex data sets. Progress in any scientific discipline requires that practitioners share common objects such as instrumentation, data sources, and analytic methods that enable researchers to replicate or challenge results. Shared instruments and representations mediate the daily work of scientific discourse (e.g., Latour, 1990; Roth, 2003), and advances in other scientific disciplines have been accompanied by representational advances. Similarly, we reasoned that researchers studying learning in distributed and networked environments need shared ways of conceptualizing and representing what takes place in these environments to serve as the common foundation for our scientific and design discourse.

The goal of our first workshop (organized by volume editors Suthers, Law, and Rosé, with Nathan Dwyer) was to establish requirements for a common conceptual and representational framework to support collaborative learning process analysis, by (a) demonstrating our analytic tools to one another in the context of analyses we had conducted, (b) identifying commonalities among these tools and analyses along four dimensions, and (c) generating requirements for a common conceptual model and abstract transcript that might also form the bases for shared analytic software. The dimensions are as follows:

  • Purpose of analysis. What is the analyst trying to find out about interaction? (In our context, some aspect of learning or meaning-making in interaction is usually a focus.)

  • Units of action, interaction, and analysis. In terms of what fundamental relationships between actions do we conceive of interaction? What is the relationship of these units to the unit of analysis? The unit of interaction should not be confused with the unit of action or unit of analysis: units of action (e.g., chat messages or discussion postings) are put into relation to each other by units of interaction (e.g., uptake of others’ contributions) in a manner that constructs a model of interaction informative for the desired unit of analysis (a minimal code sketch of this distinction follows this list).

  • Representations of data and analytic interpretations. What representations of data and representations of analytic constructs and interpretations capture these units in a manner consistent with the purposes and theoretical assumptions? Specifically, what requirements does the analytic method place on the representation of the original trace of activity? How are units of action and interaction represented in terms of this trace representation (if they are)? What subsequent interpretations are layered on top of these representations, and how are they in turn expressed?

  • Analytic manipulations taken on those representations. What are the analytic moves that transform a data representation into successive representations of interaction and interpretations of this interaction? How do these transformations lead to insights concerning the purpose of analysis?
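
To make the distinction among these units concrete, the following sketch models units of action as events and units of interaction as uptake relations between them. This is a hedged illustration in Python; all names are our own inventions, not the representation used by any tool or chapter in this volume.

    # Illustrative only: units of action related by units of interaction (uptake).
    from dataclasses import dataclass, field

    @dataclass
    class Action:
        """A unit of action, e.g., one chat message or discussion posting."""
        actor: str
        timestamp: float  # seconds from session start
        content: str

    @dataclass
    class Uptake:
        """A unit of interaction: one action taking up an earlier contribution."""
        source: Action    # the contribution being taken up
        target: Action    # the later action that takes it up

    @dataclass
    class InteractionModel:
        """Actions related by uptake. How this graph is segmented or
        aggregated (by episode, group, or session) reflects the unit of
        analysis."""
        actions: list = field(default_factory=list)
        uptakes: list = field(default_factory=list)

        def uptakes_of(self, action):
            return [u for u in self.uptakes if u.source is action]

    # Usage: two (invented) chat messages, the second taking up the first.
    a1 = Action("Hana", 12.0, "Is 3/4 bigger than 2/3?")
    a2 = Action("Ken", 19.5, "Yes: 9/12 versus 8/12.")
    model = InteractionModel([a1, a2], [Uptake(source=a1, target=a2)])
    print(len(model.uptakes_of(a1)))  # -> 1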

These dimensions are described further in Chap. 2. At the workshop, we found that the dimensions were helpful for characterizing diversity (i.e., they described ways in which our approaches differed from each other), but we realized that our multivocality presented challenges in identifying a single common conceptual and representational framework for analysis. Yet, we felt that we were gaining some understanding from looking at each other’s analyses. A software “tool fair” also generated considerable interest, and we noted the need to make our theoretical assumptions explicit.

Common Objects for Productive Multivocality in Analysis (CSCL 2009)

In our second workshop (organized by editors Suthers, Law, Lund, Rosé, and Teplovs), we decided to tackle multivocality head on by having analysts from different traditions assigned to analyze the same data set, a strategy that many others have tried (e.g., Koschmann, 2011). Two corpora were used, from the Virtual Math Teams (Stahl, 2009) and Knowledge Forum (Scardamalia, 2004) projects. We continued to use the four dimensions to characterize different analyses and added the following dimension.

  • Theoretical assumptions underlying the analysis. What ontological and epistemological assumptions are made about phenomena worth studying, and how can we come to know about them? (Here we assume that such phenomena broadly include interaction.)

This dimension was needed to warrant the decisions expressed in the first four dimensions. Theoretical assumptions permeate the other methodological dimensions. For example, representations of data embody implicit theoretical commitments (Ochs, 1979).

As the analyses were presented, we tried to use our dimensions to discover commonalities (“common objects”) that could support productive multivocality. We also sought to determine whether the analytic differences were complementary (potential sources of richer understanding) or incompatible (potential barriers to a common discipline). Again, we found that the dimensions highlighted how the analyses differed rather than their commonalities. Asking ourselves what we did have in common, we agreed that we shared (a) learning through collaborative interaction as our topic of study and (b) the desire and willingness to engage in this activity together. These are key prerequisites for productive multivocality. Although we had hoped that multiple analyses of shared data corpora would provide a basis for dialogue, the analyses presented were disconnected, in part because the analysts were approaching these corpora with entirely different questions: they were “talking past” each other. This observation led to the objective of identifying “pivotal moments” in the next workshop.

Pinpointing Pivotal Moments in Collaboration (ARV 2009)

Our third workshop (organized by Lund, Law, Rosé, Suthers, and Teplovs) continued the prior strategy of having researchers from different theoretical and methodological traditions analyze shared data corpora. We used a different Knowledge Forum corpus (the basis of the case study in Chaps. 20–24 of this volume) and a Japanese primary school mathematics class (Chaps. 4–8 of this volume). As before, we assigned analysts to data, deliberately pairing up analysts from different methodological traditions. We also assigned an analyst to data from a setting he did not normally study (the textual analysis of Bakhtin being applied to multimodal data) and grappled with the question of how data-hungry quantitative methods can inform microanalysis. We addressed the prior mismatch in analytic objectives by asking analysts to identify the pivotal moments in the interactions recorded in the data. The definition of pivotal moments was purposefully left unspecified, providing a projective stimulus that drew out different researchers’ assumptions and insights and led to exciting comparative and integrative discussion.

As expected, analysts differed in their conception and identification of pivotal moments, but these differences (as well as some congruencies) generated productive discussion of how learning arises from interaction. In this workshop we first articulated our core strategy for multivocality: assign diverse analysts to shared corpora and charge them with analytic objectives that are deliberately open to interpretation (e.g., “pivotal moments”). During this and the prior workshop, our own objectives shifted: we talked less about sharing the same concepts or representations and more about boundary objects (such as the corpora and pivotal moments) supporting dialogue between different traditions. Boundary objects “have different meanings in different worlds but their structure is common enough to more than one world to make them recognizable, a means of translation” (Star & Griesemer, 1989, p. 393). We found that it is useful to align analytic results (e.g., to find overlaps and differences in pivotal moments identified) and so wanted to explore further how shared analytic frameworks (e.g., Howley, Mayfield, & Rosé, 2013; Suthers, Dwyer, Medina, & Vatrapu, 2010) and shared analytic software tools (e.g., Tatiana; Dyke, Lund, & Girardot, 2009) could serve as or produce appropriate boundary objects.

Productive Multivocality in the Analysis of Collaborative Learning (ICLS 2010)

In our fourth workshop (organized by Lund, Suthers, Law, Rosé, and Teplovs), we sought to build on the success of the third workshop, replicating the strategy of having deliberately diverse analysts identify pivotal moments in shared corpora. There were two novelties. First, we brought in new data corpora and new analysts. Corpora included a Group Scribbles mathematics classroom in Singapore (subsequently replaced) and university-level chemistry study groups in the USA (Chaps. 9–13 of this volume). Second, we wanted to revisit the possibility that a shared software tool and its data and analytic representations would help support more detailed comparisons between analyses, by providing all the data and analyses within the common tool. This latter effort enabled analyses to be shared ahead of the workshop and is reported in Dyke et al. (2011).

The primary strategy again proved productive, surfacing issues and yielding insights that the case studies exemplify. In the chemistry case, analysts discovered that they had different conceptions of “leadership,” leading to refinement of this concept and its analytic manifestations. With the exception of one analyst who emphasized implicit interaction via nonverbal means, most analysts concluded that there was not much collaborative learning taking place in the Group Scribbles mathematics corpus. Although we recognized that educators must deal with failed collaboration all the time and that research could therefore examine these missed opportunities, we decided that analysts and (subsequently) readers of this volume would not be very motivated to put time into an “uninteresting” case (in fact, one analyst on this corpus dropped out of the project). However, many other interesting examples were available from the Singapore Group Scribbles setting.

Leveraging Researcher Multivocality for Insights on Collaborative Learning (ARV 2011)

The final formal workshop of this collaboration (organized by editors Rosé, Lund, Suthers, Law, and Teplovs, with Gregory Dyke) brought in two more data corpora that are represented in the present volume. At our request, our Singapore colleagues replaced the mathematics corpus with another Group Scribbles corpus, this one on learning about electric circuits. This corpus has features not found in the prior corpora, including use of technology to support face-to-face interaction, use of physical manipulatives (batteries, wires, and light bulbs), and the multimodality that results from this combination. It forms the basis of Chaps. 14–19. A final corpus along with three new analysts was introduced, involving the use of a software agent in discovery learning of 9th-grade biology (Dyke, Adamson, Howley, & Rosé, 2013). This corpus is unique in two ways: the use of agents in support of collaborative learning and the role that the analyses are playing in iterative design and improvement of this software environment. It forms the case study of Chaps. 25–30 of the present volume. The end of the 2-day workshop was structured to identify themes common across the case studies and thus surface practical, methodological, and theoretical issues and strategies for productive multivocality that are highlighted in the present volume (especially in Chaps. 31–34).

Subsequent collaborations continued beyond ARV 2011 with numerous individual and small group meetings at conferences and each other’s institutions, resulting in a number of papers (e.g., Chiu & Fujita, 2014; Dyke, Howley, Adamson, & Rosé, 2012; Dyke, Kumar, Ai, & Rosé, 2012; Dyke et al., 2011; Dyke et al., 2013; Howley et al., 2013; Jeong, Chen, & Looi, 2011; Medina & Suthers, 2013; Oshima, Matsuzawa, Oshima, Chan, & van Aalst, 2012; Oshima, Oshima, & Matsuzawa, 2012; Oshima, Oshima, Matsuzawa, van Aalst, & Chan, 2011; Reynolds & Chiu, 2012; Schwarz et al., 2010; Suthers et al., 2011; Wise & Chiu, 2011a, 2011b). The remainder of the chapter discusses the diversity of our data and methods and summarizes issues and strategies that will be revisited throughout the book and discussed further from Chap. 31 onwards.

The Corpora and Analytic Traditions

In selecting the data corpora (case studies) and analysts for this project, we were cognizant of the need to bring multiple theoretical and methodological traditions to bear on a diversity of interactional settings. Diversity of data and traditions helps ensure that we encounter the range of issues present in a multivocal research community and helps make a more convincing case for the generality of our conclusions. Of course, we also worked within the constraints of the available data and analysts and had to consider the motivations of our project participants.

Data Corpora for Case Studies

Data corpora for case studies were subject to two individual criteria (i.e., criteria applied independently of what other data corpora were under consideration): the data must have the potential to show learning through interaction, and it must be compelling, as evidenced by the desire and willingness of multiple analysts to spend time analyzing it. Corpora were also subject to the collective criterion of achieving diversity: together, they should deliberately sample various interactional and learning settings of interest. We wanted to achieve diversity of age levels, diversity of settings (formal and informal learning in schools, workplaces, and elsewhere), diversity of interactional media (face-to-face, synchronous, and asynchronous computer-mediated communication), and diversity of domains or topics of study.

In the end, we were able to obtain and perform multiple analyses of the corpora shown in Table 1.1, listed by domain, population and setting, and interactional media. As one can see from Table 1.1, we were successful in obtaining various topics, age groups, and interactional media within formal educational settings. The emphasis is on science and mathematics, and we are missing case studies in informal settings or workplaces.

Table 1.1 Summary of data corpora

Analytic Traditions

A project on productive multivocality requires sufficient diversity of theoretical and methodological traditions. There is a “sampling bias” in this project in that the traditions represented are those brought by persons who were willing to commit the effort to either share their data or analyze others’ data and participate in the workshops. The persons we were able to recruit use methods as diverse as various forms of content analysis, conversation analysis, polyphonic analysis, semiotic and multimodal analysis, social network analysis, statistical discourse analysis, computational linguistics, and uptake analysis. Theoretical traditions include cognitivism, constructivism, dialogism, ethnomethodology, group cognition or intersubjective meaning-making, knowledge building, progressive inquiry, semiotics, and systemic functional linguistics.

Reflecting on the corpora and traditions represented, there are clearly gaps. We particularly would have liked to include data from outside of formal schooling, such as a workplace setting, and in conjunction with this to have included sociocultural traditions of analysis (attempts were made to recruit relevant data and participants but were unsuccessful). Also, our case studies are biased towards small group interaction and hence microanalysis rather than large-scale networks of learners. Yet, we believe that we have sufficient diversity to have encountered and grappled with major issues in achieving productive multivocality in the analysis of interaction. Our attempts to bring the analytic traditions listed above into conversation with respect to the various corpora encountered difficulties that we overcame with the strategies discussed in the next section.

Issues and Strategies for Productive Multivocality

As suggested in the preceding account of the historical development of the project, our series of workshops was an iterative process in which we refined our shared objectives, encountered issues and problems, and developed strategies for meeting these objectives. Our objectives shifted from one of identifying common representations and practices that would enable the specification of requirements for shared data and tools to one of enabling productive dialogue between multiple traditions through whatever boundary objects served this purpose. Following is a preview of some of the strategies we developed for making our dialogue productive. These strategies, along with the issues they are intended to address, are discussed in greater detail in Part VII of this volume, with a summary in Chap. 31 and more detailed discussion of methods for achieving productive multivocality in Chaps. 32–34.

Use Standards, Metadata, and Repositories to Share Data and Tools

There is great redundancy in the software efforts behind analysis. Many research groups develop their own tools, and there are technical barriers to applying these tools to data gathered in multiple settings. The first workshop began with the objective of developing standards that would enable a suite of software tools developed at different labs to interoperate on common data and analytic representations. Such solutions have been the focus of a number of other efforts. For example, Harrer, Monés, and Dimitracopoulou (2009) have developed standards for representing data and analyses, and Reffay, Betbeder, and Chanier (2012) have proposed standards for a data repository. Ontologies have long been a focus in the Artificial Intelligence and Education community (e.g., Mizoguchi, Ikeda, & Sinista, 1997).
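
By way of illustration only, metadata sufficient to make a shared corpus findable and interpretable might look like the following record (the field names are hypothetical and are not drawn from the standards cited above):

    # Hypothetical corpus metadata record; all field names are illustrative.
    corpus_metadata = {
        "title": "Primary school fractions lesson",
        "domain": "mathematics",
        "population": "primary school students, small groups and whole class",
        "setting": "formal classroom",
        "media": ["face-to-face video", "transcript"],
        "language": "Japanese (English translation available)",
        "formats": {"video": "mp4", "transcript": "csv"},
        "access": "research use, subject to consent restrictions",
    }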

Although our project did not culminate in the development or adoption of standards, methods of sharing data and tools were critical to each case study. Most notably, the Tatiana analytic tool (Dyke et al., 2009) served as a common tool in several of the case studies. Tatiana provided a medium within which to share synchronized replayable data traces (e.g., video, transcripts, and log files) and to construct analytic representations (e.g., coded segments) on top of these traces that are also synchronized with them. The case studies in Part II (Case Study 1, Fractions), Part IV (Case Study 3, Electric Circuits), and Part VI (Case Study 5) in particular made use of Tatiana for sharing data and/or comparing analytic results.
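
The following sketch suggests in simplified form what such synchronization involves; it is our own illustration, not Tatiana’s actual data model or API. Heterogeneous trace events and analysts’ coded segments are indexed by a shared session timeline, so that an analytic layer can be replayed against the raw events it covers:

    # Hedged sketch of time-synchronized traces with an analytic layer on top.
    from dataclasses import dataclass

    @dataclass
    class TraceEvent:
        start: float   # seconds on the shared session timeline
        end: float
        source: str    # e.g., "video", "transcript", "log"
        payload: str

    @dataclass
    class CodedSegment:
        start: float
        end: float
        code: str      # analytic category assigned by the analyst

    def events_under(segment, trace):
        """Return the raw trace events a coded segment is synchronized with."""
        return [e for e in trace
                if e.start < segment.end and e.end > segment.start]

    # Usage with invented events: the coded segment spans both traces.
    trace = [
        TraceEvent(0.0, 4.2, "transcript", "T: what do you notice here?"),
        TraceEvent(3.0, 8.0, "log", "S2 drags an object on the shared canvas"),
    ]
    segment = CodedSegment(2.5, 7.0, "teacher-prompted revision")
    for e in events_under(segment, trace):
        print(e.source, e.payload)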

Technical solutions that enable researchers in different settings to reuse the software developed and data gathered elsewhere are useful but not sufficient: to bring multiple traditions into productive dialogue they must share an object of study.

Analyze the Same Data

An obvious and well-known strategy for engaging researchers in dialogue is to have them analyze the same data and discuss their results so that different perspectives on and results obtained concerning the same object of study may be compared. This strategy has been found to be useful within single traditions. For example, in quantitative content coding, multiple coders are used to achieve reliability; similarly, collaborative interaction analysis reaches a richer understanding of interaction through group review of video data (Jordan & Henderson, 1995). Work within education and CSCL has taken this strategy: recent examples are Koschmann (2011) and Stahl (2009).

This strategy was introduced in our second workshop and continued throughout the project. Some of the multivocal dialogue that takes place actually precedes the analysis of the data, as participants need to agree on what data is worth considering and how it should be selected and represented. Data selection and preparation will expose assumptions. We found that this strategy can productively be augmented with an auxiliary strategy of a shared analytic objective, considered shortly.

Pair Up Diverse Traditions

If the analysts assigned to a data corpus work in similar theoretical or methodological traditions, it will be easier for them to talk to each other. They will share basic assumptions and will be able to focus on the nuances of their results and fine-tune their analytic practice. Such work is valuable but does not address the objective of fostering dialogue between representatives of theoretically and methodologically diverse communities who are working within a given area of study (such as learning through social interaction).

We have found that it is useful to pair up analysts from quite distinct traditions. This approach surfaces otherwise implicit assumptions concerning what data is suitable for study and what questions are worth asking, and once these questions are resolved (and with the application of a strategy described below), comparisons of results can lead to productive dialogue concerning analytic concepts and results. For example, in Part II (Case Study 1), analysts from three traditions compared the points of interaction that they considered to be the most significant, finding agreement on some and divergence on others. This discrepancy led one analyst to reconsider how he was defining these “pivotal moments.” In Part III (Case Study 2), the concept of “leadership” was refined through juxtaposition of linguistic and conceptual coding methods. In Part VI (Case Study 5), analysts from several traditions problematized a core design assumption behind the data provider’s software.

Push Methods Outside of Their Comfort Zone

The next strategy is related to (and perhaps inevitable given) the strategy of pairing up diverse analytic traditions, as in any deliberately diverse pair one analyst may feel closer to the data than the other. We found it useful to give an analyst data that is not of the type they normally analyze. This must of course be done with care, as too great a mismatch would not be productive. The objective from a research community perspective is not merely to challenge individual researchers but rather to explore how analytic traditions might be applicable beyond the scope of data to which they have usually been applied. The benefits for the community are that analytic traditions are brought out of their isolation and into contact with each other, and that we discover unanticipated ways in which they might contribute to understanding new phenomena.

In our project, a clear example of the success of this strategy was when we asked an analyst who had been doing conversation analysis of texts (written conversation) informed by Bakhtin to analyze video data that included gestures and manipulation of paper and blackboard diagrams (see Part II, Case Study 1). A potential issue is whether the analytic method is also pushed outside of its zone of validity. For example, in the same case study a statistical breakpoint analysis was applied to a sample that might be considered too small for this method. Yet the exercise has utility as long as it is understood that a different scientific game is being played: rather than generalizing to a population from a sample, statistical analysis was used to expose features of the data that other analysts might consider from their standpoints.

Address a Shared Analytic Objective

As we found in our second workshop, it is not sufficient to have diverse analysts take on the same data. There is no guarantee that their analyses (or even how they construe the object of study) will be comparable, and given that they come from different traditions they are likely to “talk past” each other. Identification of this problem led to our most crucial strategy: to ask analysts to approach the data with a shared analytic objective so that the different analyses can be compared and hence the traditions brought into dialogue with respect to this shared objective. In our case, we asked analysts to identify the “pivotal moments” of the interaction found in the data: What events were most crucial for the collaboration?

The concept of a “pivotal moment” is deliberately vague. Vagueness can be understood as advantageous if we consider the concept of a “shared analytic objective” with respect to the objectives of our project. We cannot ask analysts to address the same research question at the usual level of specificity found within a given analytic tradition, because a research question that is well specified within one tradition may not be interesting to or make sense within another tradition or may even violate its assumptions. We need to offer analytic objectives that are interpretable by each tradition involved so that the traditions can be brought into dialogue with each other around this object. An analytic objective that only makes sense within one tradition is not “shared.” An analytic objective that is sharable across traditions acts as a boundary object (Star & Griesemer, 1989)—one that is interpretable by all traditions involved, perhaps differently, but this is what makes the exercise interesting! To draw an analogy in which analytic traditions are psychodynamic persons, the objective of finding pivotal moments serves as a “projective stimulus” upon which each tradition projects what it sees as important in the given data. This strategy is exemplified well in Part II (Case Study 1).

Eliminate Gratuitous Differences in Data Considered

In some cases, we found that analysts came to different conclusions merely because they looked at different aspects of the data. This was the case in our first Group Scribbles study, discussed in Chap. 19, in which it was found that analysts differed on whether they analyzed private (as well as public) activity and whether verbal acts, nonverbal acts, and the states of artifacts that resulted were considered. Once gratuitous differences are eliminated, the differences in results and interpretations that remain are more likely to be essential to the dialogue needed between traditions.

An issue discussed previously arises again: analytic traditions may differ in the data considered because they differ in what is considered relevant or in the “amount” of data needed to meet validity requirements for the tradition (e.g., inferential statistics vs. conversation analysis). This problem has been dealt with in the author’s laboratory through an overlapping technique: a focal session is chosen for microanalysis, but analysts who have larger data requirements (e.g., to study role development or relationship formation over time) analyze the data they require and report the implications of the results for understanding the results of session microanalysis.

Align Analytic Representations

Having eliminated (to the extent possible) gratuitous differences in the scope of data considered, we have found it extremely helpful to represent analytic results in forms that can be brought into alignment with each other for comparison. The most obvious basis for such an alignment is time: different interpretations of the same sequence of events are given a visual representation along a common timeline. Such representations highlight congruences and discrepancies and serve as excellent prompts and resources for conversation between analysts. Chapter 33 discusses the role of representations and tools for achieving productive multivocality in greater detail.
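
As a minimal illustration (assuming each analyst’s pivotal moments have been reduced to time intervals on the shared timeline; no chapter’s actual procedure is reproduced here), congruences and discrepancies can then be computed directly:

    # Sketch: align two analysts' pivotal-moment intervals on a shared timeline.
    def overlaps(a, b):
        """True if time intervals a and b, given as (start, end), overlap."""
        return a[0] < b[1] and b[0] < a[1]

    def align(analyst1, analyst2):
        """Partition moments into congruent (overlapping) and discrepant ones."""
        congruent = [(a, b) for a in analyst1 for b in analyst2 if overlaps(a, b)]
        only1 = [a for a in analyst1 if not any(overlaps(a, b) for b in analyst2)]
        only2 = [b for b in analyst2 if not any(overlaps(a, b) for a in analyst1)]
        return congruent, only1, only2

    # Usage: minutes 10-12 flagged by both analysts; each also flags one alone.
    congruent, only1, only2 = align([(10, 12), (30, 31)], [(11, 13), (45, 47)])
    print(congruent)        # [((10, 12), (11, 13))]
    print(only1, only2)     # [(30, 31)] [(45, 47)]

The discrepant intervals are precisely the prompts for conversation between analysts that this strategy aims to generate.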

Iterate

The above strategies imply that iteration is required. For example, even if analysts have agreed on a data corpus and a shared analytic objective, in the first meeting they may discover that they have examined different aspects or scopes of the data. Inconsequential differences should be eliminated and the analyses repeated to focus on essential (e.g., conceptual and epistemological) differences and convergences.

Step Back from Methods

None of the above strategies will help if participants remain within their methodological boxes. Ultimately we want to bring theoretical ideas into dialogue, but this can be prevented if the methods in which one is trained are taken as fundamental to how the phenomenon is viewed. The researchers who will be most successful in achieving productive multivocality in a community are those who can take off their methodological eyeglasses and dialogue about methods as object-constituting, evidence-producing, and argument-sustaining tools. This dialogue requires careful consideration of what methods as inscriptions and means of operating on inscriptions bring with them intrinsically as well as what teleological and theoretical commitments are made in the practices of applying these tools to a domain.

Conclusions

Sharing analyses has benefits both for the individual analysts and the community. Analysts are confronted with aspects of the data highlighted by others that they might not have themselves considered; epistemological assumptions are challenged; analytic concepts are fine-tuned; and a multidimensional understanding of the phenomenon being investigated and analytic constructs used to approach it is gained. The process leads to greater dialogue and mutual understanding in our community. Yet, these benefits do not accrue merely by putting analysts together in the same room or even by having them analyze the same data. Productive multivocality is facilitated by strategies such as eliminating gratuitous differences in the scope and representation of data considered and deliberately pairing diverse analysts charged with a common yet flexible analytic objective.

The collaboration constituting this project is, we believe, unprecedented and significant in our field. Many volumes result from one-shot workshops, but sustained collaboration over a period of years is rare, particularly in the face of academic incentive structures that provide greater rewards to solo efforts and self-promotion. The researchers we worked with on this project are large in number and represent diverse disciplines and analytic traditions, yet all shared a commitment to the project and were congenial colleagues to work with. This volume is a testament to their dedication to finding ways to bring the individual and collective needs of research in CSCL and the learning sciences into congruence with each other.