Introduction

There has been increasing interest in the adoption of social constructivist approaches (Bransford, Brown, & Cocking, 1999; Sawyer, 2006) to organizing teaching and learning in formal educational settings. While the study of interactions has long been an important area of research (Kumpulainen & Wray, 2002), interactions among students in collaborative learning contexts are by nature very different from those between teachers and students. In contrast to traditional instructional approaches, collaborative learning generally values interactions among students as an important input to the learning process and gives students more scope to take responsibility and make decisions in the learning process. Whereas IRE (Initiate-Response-Evaluate; Cazden, 1988) has been identified as the most prevalent form of interaction in traditional, teacher-centered classrooms, the nature and impact of peer-to-peer interactions are much more varied and less well understood. Research on peer interactions in collaborative learning contexts is becoming an important field in education research, but few metrics or tools have been offered to help teachers make sense of students’ collaborative process and decide whether and what interventions should be made to advance the learning of groups and individuals.

With the increasing accessibility of the Internet and the popularity of Web 2.0 applications, many teachers have also introduced collaborative learning mediated through synchronous and/or asynchronous information and communication technologies (i.e., computer-supported collaborative learning, CSCL for short) into their day-to-day pedagogical repertoire. CSCL brings rich, diverse possibilities to collaborative learning that would not have been possible otherwise, as illustrated by the three datasets generated in CSCL settings in the present volume. The Biology dataset introduces conversational agent technology into the CSCL interactions to enact facilitation for an approach called Academically Productive Talk. The Group Scribbles environment associated with the Electricity dataset allows students to construct their own digital artifacts in the form of drawings and text, and to share these with others within and outside their own small groups. For just one small group of four students, the collaboration data collected in the Electricity dataset comprised five videos of the tablet screens (one for each student’s individual screen and one for the shared screen), in addition to a video of the group’s physical activities and an audio recording of their verbal discourse. The Knowledge Forum® Education dataset contains discourse data for a master’s-level 13-week course conducted entirely online through an asynchronous forum; the accumulated data amounted to more than 200,000 words. Such advances in learning technology provide unprecedented opportunities for rich peer collaborative learning, but they also pose serious challenges for the teacher as facilitator: how to even be aware of what the students are discussing, let alone whether they are making progress or facing difficulties with the subject matter or the socio-dynamics.

While the focus of this book is generally to explore whether multivocal analysis of the same dataset can lead to productive interactions among researchers, and the possible theoretical and/or methodological developments that this may bring about (see Chaps. 1 and 31–34), this chapter explores whether such multivocality has meaningful implications for practice. Do the different analyses reported in this volume offer insight into whether and how such analytical methods and tools may help teachers understand how students respond to different collaborative learning designs, pedagogical settings, and strategies, even though these methods are underpinned by disparate theoretical assumptions and analytical approaches?

The five datasets forming the basis for the multivocal analyses in this book were all collected in formal educational settings, spanning primary, undergraduate, and postgraduate education. The modes of interaction also vary, ranging from entirely online synchronous and entirely online asynchronous interaction, through face-to-face interaction augmented with an online platform for individual and shared artifact construction, to entirely face-to-face interaction. Further, the pedagogical principles underpinning the learning designs in the five settings differ greatly. This provides welcome diversity for the initial exploratory study reported here.

In this chapter, we will be addressing the following questions:

  • Do pivotal moments identified by researchers have implications for pedagogical practice?

  • Would teachers be interested in the multivocality or the pivotal moments identified by researchers?

  • Would teachers’ understanding of their own practice and of students’ learning be enriched by the multivocality of the researchers?

  • Is it likely that such analytical processes would be considered for productive adoption on a routine basis?

We invite the reader to inquire with us into (1) the value of such analysis as a diagnostic tool for the teacher, as opposed to analyses that help answer specific research questions, and (2) the feasibility of such analysis being conducted by the teacher, in terms of ease of administration.

Background

Classroom interaction and talk have been studied for several decades (Fisher, 1993; Flanders, 1970; Kumpulainen, 1996; Lemke, 1990; Mercer, 1996; Resnick, Levine, & Teasley, 1991). A major thrust of such studies has been to make teaching more information-based (or data-driven) through classroom observation schedules. However, researchers’ analytical methods, results, and tools do not easily find their way into teachers’ regular practice. One exception has been Moreno’s (1934) sociogram, which was and remains a popular tool for teachers, although much of the complexity of sociometry has been stripped away in its classroom use. The same holds for Gardner’s theory of multiple intelligences, which has had a broad impact on teachers; the related educational resources and professional development workshops simplified Gardner’s contribution. Large-scale dissemination often comes at the expense of complexity, and rich analytical methods and tools end up being applied in a rigid, technical manner.

To counter technical rationality in the education of professionals, Schön (1983) put forward the reflective practitioner approach, which has become dominant in professional education. Within this paradigm, it is common to collect data on one’s own teaching (videos, interviews, students’ work, etc.) to act as foci for analysis and dialogue. In the early days of sociometry, teacher educators who engaged pre- and in-service teachers in its derived techniques did not have such a purpose. Neither did they have the technology available today to provide practitioners with rich descriptions, through visualizations, that help develop deeper understanding of specific learning environments.

UNESCO’s (2011) three ICT competencies for teachers, especially deep understanding and knowledge creation, invite teachers to such an exercise. It is our working hypothesis that the data–information–knowledge trio can be achieved in teacher education and professional development. For this to happen, researchers are providing tools (conceptual, methodological, and technological) to transform data into information for evidence-based knowledge in support of pedagogical decision-making. In writing this chapter, which focuses on peer interaction and talk, we have in mind teachers, including preservice teachers and teacher educators, whose circumstances could allow them to analyze their own learning environments for improvement purposes.

Given that the analyses in this book concern human interactions for learning purposes in which peer interactions are important, the assumption is that the methods or tools can only be of relevance to pedagogical practices that are at least broadly social constructivist in nature. We are also aware that there is great diversity across social learning theories, and hence researchers interested in different theories adopt different analytical foci. Teachers, too, have been exposed to a variety of theoretical perspectives and pedagogical methods. They hold different sets of values and beliefs but are open to new tools that have resonance for their practice.

Methodology

We developed an analytical grid for exploring the chapters in terms of their pedagogical relevance for practice. Four themes were identified as follows:

  • Analytical focus or pivotal moments.

  • Relevance for pedagogical practice.

  • Mechanisms/tools for detection.

  • Potential for automatic detection to interactively inform pedagogical decision-making.

First, we looked at the analytical focus of each chapter, including the pivotal moments identified, if applicable. The special attention given to pivotal moments aligns with the fact that they are considered boundary objects for the researchers participating in the writing of this book (as described in Chaps. 1 and 31). Some chapters pinpointed pivotal moments in their analysis of the same set of data (e.g., Oshima, Chap. 12; Lund and Bécu-Robinault, Chap. 17; Shirouzu, Chap. 5; and Trausan-Matu, Chap. 6) and others did not. Some researchers found more pivotal moments than others (e.g., Trausan-Matu, Chap. 6 and Chiu, Chap. 7). This is no surprise, as the definition of a pivotal moment had been left open on purpose. Thus, some researchers identified pivotal moments, while others presented a different interpretation of the same dataset (e.g., the Electricity dataset). Looi, Song, Wen, and Chen (Chap. 15) identified seven pivotal contributions, one of them being the following: “The teacher’s intervention (T12) to ask the group to draw their electrical circuit of lighting one bulb using two batteries in GS was considered pivotal to shape the students’ inquiry to a higher level for conceptualization”. Medina (Chap. 16) interpreted pivotal moments differently, pointing to “a pivotal sequence of interaction occurring in the later half of the activity in which one member of the group proposes an innovation for illuminating two light bulbs in a single circuit”. Other researchers distanced themselves from pivotal moments as boundary objects; their analytical foci did not point directly to pivotal moments (e.g., Stahl, Chap. 28; Teplovs and Fujita, Chap. 21).

Second, we read the chapters with an eye to their relevance for pedagogical practice. Were the results (content or process) relevant to teachers? Two kinds of relevance were distinguished: results of general relevance (generalities about collaborative learning) and results of more specific relevance pertaining to particular learning contexts.

Third, we gave attention to the unit of interaction and the mechanisms/tools for identifying pivotal moments or other points of analytical focus. We reckon that the complexity (or simplicity) with which the unit of interaction is defined and operationalized plays a role in how easily teachers understand the focus of the analysis, and hence in their readiness to make use of the analytical results. Furthermore, some analyses are very refined, and the mechanisms used for conducting them may not be easily understood conceptually by practitioners, which may influence a teacher’s readiness to incorporate such analyses into pedagogical decision-making, even when the analyses are done for them.

Fourth, in our analysis, we also wished to identify analyses with strong possibilities for using technological support to inform teachers’ pedagogical decision-making in interactive ways. This is important because teachers have less time than researchers to devote to interaction analysis. Technology support for analyzing online talk would be most welcome, particularly support that surfaces interactions indicative of particular states or transitions in the collaboration process.

The limited availability of timesaving analysis mechanisms and detection tools restricts how far practitioners can take account of the complexity of classroom talk (contexts, affects, and the like). Complexity remains a most challenging issue, so automatic detection would be considered a plus. The challenge of uncovering pivotal moments with the support of automatic detection remained at the center of our exploration. We know this is complex but are confident that advances can be made.

Results

Based on the methodology described above, we reviewed and analyzed all the analysis chapters in the five data sections of this volume according to the four themes identified. Table 35.1 presents a brief overview of the findings for 13 of the data analysis chapters, in which we see strong promise that the analyses described have pedagogical relevance. In the remainder of this section, we present our key findings, explicating the contents of the table in the process.

Table 35.1 A brief overview of the findings for a sample of the data analysis chapters

Focus and Purpose of Analyses

The focus and purpose of the analyses turn out to be of paramount importance to whether an analysis has pedagogical relevance. The contributors of the five datasets each contributed one analysis chapter. These data contributors also played a major role in the pedagogical design of the collaborative learning settings. It is hence not surprising that the purposes and foci of their analysis chapters have a clear pedagogical connection with the respective data collection contexts, and have potential pedagogical relevance to teachers from that perspective. For example, Shirouzu’s analysis of his own Origami-fractions data focused on identifying students’ collaborative advancement as well as their individual progression, which was the purpose of the study for which the data were collected. Likewise, Sawyer’s analysis of his own Chemistry discussion data sought to identify which kind of peer leader role would be more conducive to knowledge building emerging through group discussions in peer-led team learning contexts. Looi’s analysis of the Electricity data looked for pivotal contributions that shifted the group’s focus and subsequent action/understanding in the students’ attempts to connect bulbs and batteries. The purpose of Teplovs and Fujita’s analysis of the Knowledge Forum® discourse data was to explore the usefulness of the KISSME tool in generating and testing predictive models of learner interactions to optimize learning. Howley et al.’s analysis of the Biology chat data was to identify what type of online tutor agent interaction would encourage students to articulate their reasoning and to listen and respond to the reasoning of others. It is also worth noting that, of these five analyses, Sawyer’s and Teplovs and Fujita’s are primarily concerned with developing a generalizable model of particular aspects of collaborative learning, while the other three also have specific relevance to their particular learning contexts.

While the five datasets were all collected by researchers with pedagogically related analytical goals, the analyses by researchers other than the data provider may be motivated by very different research goals. We find that, irrespective of the analysts’ theoretical or methodological constructs, whether the work has pedagogical relevance depends largely on the purpose and focus of the analysis. This can be illustrated using the analyses reported in Part 4 on the Electricity data. Lund et al.’s analysis was grounded in the science education literature, and its goal was to look for instances of conceptual change, which differs greatly, from a theoretical standpoint, from Looi’s identification of uptake grounded in the theory of intersubjectivity. These differences between analyst and data provider will not, however, stop science teachers from finding meaning in Lund et al.’s analysis, which tracks the students’ transformations of conceptual content from the physics domain perspective as they communicate through talk, gestures, and drawings in the GS interface and through manipulations of the experimental apparatus. On the other hand, the purpose of Medina’s analysis, which was also grounded in the identification of uptake based on the theory of intersubjectivity, was an academic one: to explore how sequential structures of multimodal interactions, including the availability of persistent artifacts generated on inscription devices, influence joint meaning-making processes. While the findings from such research may have implications for understanding collaborative learning involving multimodal interactions, they are rather more distant from the immediate concerns of the practitioner faced with achieving the set curriculum goals through designing and facilitating collaborative learning.

Pivotal Moments

While all the data analysts were asked to identify pivotal moments (as defined by the analyst concerned), to be used as boundary objects for scaffolding productive multivocality, not all analysis chapters identified pivotal moments. Where pivotal moments have been explicitly defined in studies with pedagogically relevant analytical purposes, they may serve as important conceptual artifacts and scaffolds for understanding that are very helpful to practice. For example, in Part 5, while the chapters of Teplovs and Fujita and of Law and Wong were similarly grounded in the theory of knowledge building and shared the purpose of providing pedagogically relevant analytical visualizations of learner interactions, they differ in that the former did not identify pivotal moments while the latter did. The KSV tool used by Teplovs and Fujita is a highly innovative one, integrating latent semantic analysis and social network analysis and providing very flexible graphical visualizations to the user. They used the tool to identify students who shared similar latent semantic learner models, found that many of these students did not have high-level interactions, and went on to hypothesize that the tool may be of value to teachers as a basis for purposefully promoting interactions among these students. The validity of this hypothesis is yet to be substantiated. The Law and Wong chapter focused on two types of pivotal moments. The first consisted of “pivotal weeks” during which the statistical interaction parameters indicate that a social dynamic condition illustrative of some of the knowledge building principles has been reached. The second type consisted of breakthroughs in students’ understanding of key concepts, based on semantic analysis of the note contents. These pivotal moments link directly with the concerns of the teacher.
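To make the KSV idea concrete, the following is a minimal sketch, under our own assumptions, of the kind of computation involved: learners’ notes are projected into a latent semantic space, and pairs of learners whose models are similar but who have no recorded interaction are flagged for the teacher’s attention. This is not the KSV or KISSME implementation; all data structures, thresholds, and names here are hypothetical.

```python
# A minimal sketch (NOT the KSV/KISSME implementation) of flagging
# semantically similar learners who have not interacted.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

notes_by_student = {  # hypothetical: each student's notes, concatenated
    "s1": "light bulbs glow when the circuit is closed and current flows",
    "s2": "the bulb lights up only if the circuit is complete",
    "s3": "origami lets us fold paper to compare fractions like 3/4 of 2/3",
}
interactions = {("s1", "s3")}  # hypothetical observed reply/build-on pairs

students = list(notes_by_student)
tfidf = TfidfVectorizer().fit_transform(notes_by_student[s] for s in students)
latent = TruncatedSVD(n_components=2).fit_transform(tfidf)  # learner models
sim = cosine_similarity(latent)

# Semantically similar students with no recorded interaction are candidates
# for the teacher to purposefully connect.
for i, a in enumerate(students):
    for j in range(i + 1, len(students)):
        b = students[j]
        if sim[i, j] > 0.8 and not ({(a, b), (b, a)} & interactions):
            print(f"{a} and {b} hold similar ideas but have not interacted")
```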

Pivotal moments that are directly linked to the subject matter domain being studied are likely to be readily appreciated by teachers as relevant to their practice. For example, the analysis of the Electricity data by Lund and Bécu-Robinault to identify instances of action/concept reformulation as a specific group of students engages in collaborative learning would be enlightening to, and much appreciated by, science teachers.

However, not all pivotal moments have direct relevance to pedagogical practice. For example, in Chiu’s analysis of the Origami-fractions data, a pivotal moment is a “conversation turn that separates a portion of the conversation into two distinct time periods (before and after) with substantially different likelihoods of the focal variable (e.g., correct ideas)”. This formulation of a pivotal moment does not link directly to the teacher’s day-to-day practice concerns.
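To make this formulation concrete, here is a minimal sketch of the underlying idea only. Chiu’s actual statistical discourse analysis involves formal statistical modeling rather than the naive search shown here, and the 0/1 coding of “correct ideas” per turn is our hypothetical example.

```python
# A minimal sketch of the idea behind the definition quoted above: find the
# conversation turn that best separates the talk into two periods with
# substantially different likelihoods of a focal variable.
def best_breakpoint(coded_turns):
    """Return (turn_index, gap): the split maximizing the difference in the
    proportion of positively coded turns before vs. after."""
    best_turn, best_gap = None, 0.0
    for t in range(1, len(coded_turns)):
        before, after = coded_turns[:t], coded_turns[t:]
        gap = abs(sum(before) / len(before) - sum(after) / len(after))
        if gap > best_gap:
            best_turn, best_gap = t, gap
    return best_turn, best_gap

coded = [0, 0, 1, 0, 0, 1, 1, 1, 1]  # hypothetical coding: correct idea?
print(best_breakpoint(coded))       # -> (5, 0.8): likelihood jumps at turn 5
```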

Unit of Analysis and Mechanism of Pivotal Moment/Event Detection

As discussed in the Methodology section above, analyses whose units of analysis or mechanisms of identification are complex to understand and/or operationalize face more challenges in convincing teachers of their pedagogical relevance. For example, Shirouzu’s analysis of the Origami-fractions data (Chap. 5) identified two units of analysis, the group and the individual. For the former, the focus of analysis was to look for collaborative utterances indicative of constructive interaction, while the latter looked for changes in personal focus. These parallel teachers’ interest in knowing about students’ individual gains in understanding, as well as how the collaborative process might have contributed to those advances. Chiu’s analysis of the same dataset (Chap. 7) defined the unit of analysis as a conversation turn, which is simple to understand, but the unit of interaction was defined as a sequence of one type of action following another, with the actions being instances of “microcreativity” identified through coding of the argumentative attributes of each conversation turn, and the units of interaction identified through statistical discourse analysis. While some of the pivotal moments identified by these two analyses coincide, Shirouzu’s analysis would be more accessible to teachers, and hence this type of analysis is more likely to have an impact on practitioners.

We suspect that the complexity of the units of analysis/interaction, as well as of the mechanisms for identifying the analytical point of interest, affects the uptake of the related analysis not only by practitioners but by other researchers as well. For example, constructing the uptake graphs in Looi et al.’s analysis of the Electricity dataset (Fig. 15.1) requires such detailed analysis and meticulous construction of the visualization that it is doubtful whether members of the research team concerned will repeatedly conduct the same analysis once the targeted research advance has been achieved. On the other hand, the visualization in Jeong’s analysis of the emergence of group understanding of circuits, as demonstrated through the physical and digital artifacts students constructed (Fig. 18.4), combines ease of construction with clarity of communication in a way that other interaction analysts may wish to learn from.

Potential for Automated Analysis to Interactively Inform Pedagogical Decision-Making

Automatic capturing, processing, and analysis of interaction data is not one of the common themes of the present volume on productive multivocality in interaction analysis. However, we would argue that the exploration and sharing of automated tools to facilitate digital processing, analysis, and visualization of interaction data is one valuable potential outcome of productive multivocality in a project of this nature. Automated, or even semi-automated, analysis of interactions would be particularly relevant in pedagogical situations where the analysis provides information on the behavior or performance patterns of a specific group or individuals, as such information can scaffold further pedagogical decisions.

Of all the analyses reported in this volume, only Teplovs and Fujita (Chap. 21) conducted their entire analysis through automated tools, the KSV and KISSME. Law and Wong (Chap. 22) identified the pivotal weeks using the ATK tool built into Knowledge Forum®, while their identification of pivotal breakthroughs in students’ understanding of key concepts was achieved through automatic selection of sentences based on keyword search, followed by critical reading and qualitative analysis of the selected sentences to identify the critical advances. Howley et al. (Chap. 11) mentioned explicitly that the analysis of social positioning and transactive interactions had been successfully automated in their team’s earlier work, though the specific analysis reported was done manually. The other chapters make no explicit mention of analysis automation.
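The keyword-based selection step that Law and Wong describe is the kind of semi-automation within easy reach of practitioners. The following is a minimal sketch under our own assumptions, not the authors’ actual tooling; the keywords, note identifiers, and note texts are hypothetical, and the critical reading of the selected sentences remains a human task.

```python
# A minimal sketch of semi-automated sentence selection by keyword:
# automatically pull out candidate sentences, leave interpretation to the
# teacher/researcher.
import re

KEYWORDS = {"circuit", "current", "battery"}  # hypothetical key concepts

def select_sentences(notes, keywords):
    """Yield (note_id, sentence) pairs whose sentence contains a keyword."""
    for note_id, text in notes.items():
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            words = set(re.findall(r"[a-z]+", sentence.lower()))
            if words & keywords:
                yield note_id, sentence

notes = {  # hypothetical note contents
    "note42": "We built a circuit. The bulb was dim! Maybe add a battery.",
    "note43": "I agree with the drawing. The current splits at the junction.",
}
for note_id, sentence in select_sentences(notes, KEYWORDS):
    print(note_id, "->", sentence)
```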

From the perspective of providing just-in-time analytical results to support teachers’ pedagogical decision-making, the format of the interaction data also matters. Cases where the entire dataset can be captured digitally, ready for processing and analysis (as with the asynchronous discussion data on Knowledge Forum®, which consists entirely of text, or the synchronous chat log in the Part 6 Biology dataset), offer a relatively low threshold for automation. Participation statistics and easily computed interaction patterns, such as those from social network analysis, still offer valuable insight to teachers, despite their limitations. With advances in speech-to-text technology, it may be possible in the near future for the kind of network analysis of words reported by Oshima et al. (Chap. 12) to be carried out relatively easily and in a timely fashion to inform teachers of the progress in students’ collective knowledge advancement. With advances in natural language processing, it is anticipated that some of the analyses primarily grounded in linguistic features/patterns in discourse would also be candidates for automation, such as the identification of convergence and divergence through detecting changes in repetition in Trausan-Matu’s analysis of the Origami-fractions data (Chap. 6) and Howley et al.’s Souflé-based conversation analysis to identify students’ reasoning behavior in their analysis of the Biology data (Chap. 26).
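As an illustration of this “low threshold,” here is a minimal sketch of participation statistics and a simple social network measure computed from a digitally captured discussion log. The (author, reply_to, word_count) record format is our hypothetical stand-in, not the log format of any specific platform discussed in this volume.

```python
# A minimal sketch: participation statistics and a who-replies-to-whom
# network from a hypothetical discussion log.
from collections import Counter
import networkx as nx

log = [  # hypothetical forum/chat log records
    ("ann", None, 52), ("ben", "ann", 31), ("cal", "ann", 18),
    ("ann", "ben", 40), ("ben", "cal", 25),
]

# Participation statistics: messages and words contributed per student.
messages = Counter(author for author, _, _ in log)
words = Counter()
for author, _, count in log:
    words[author] += count
print("messages:", dict(messages))
print("words:", dict(words))

# Reply network; degree centrality can flag peripheral students at a glance.
graph = nx.DiGraph((author, parent) for author, parent, _ in log if parent)
print("degree centrality:", nx.degree_centrality(graph))
```

Even such crude indicators address the haunting questions noted earlier, such as whether some students are working and others are not.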

Further Observations

We are heartened by the findings of this preliminary study, as they provide substantial evidence that multivocality in interaction analysis can be productive in providing valuable insight and pedagogical support to teachers interested in implementing collaborative learning in their everyday practice. The five datasets analyzed in this volume are very diverse in the contexts from which they were collected, not only in the level and subject domain of study but also in the use (or not) of technology in the collaborative learning process. Our analysis demonstrates the presence of collaborative learning across these settings and the feasibility of at least some of the analyses being productive in identifying important issues for practice, despite the fact that some of the cases involved multimodal, multimedia data (e.g., the Electricity dataset), while others involved solely textual data. Our findings also illustrate the potential of interaction analysis as a productive method for teachers’ evaluation of the suitability of specific CSCL environments when they wish to select one to realize particular collaborative processes, as illustrated by the analysis of the Biology data.

Our analysis also demonstrates that analyses meaningful from the practice perspective can be made by researchers who did not themselves generate the data, using analytical methods grounded in theoretical frameworks different from those underpinning the pedagogical practice contexts from which the data were collected. In fact, relevance to practice appears to depend largely on the goal and focus of the analyst.

It is interesting to note that there is some consistency in the triangulation of the analysis results. For example, although the conceptualizations of pivotal moments in the three analyses of the Origami-fractions data differed, some of the pivotal moments they identified referred to the same moments. Whether such pivotal moments are particularly significant ones in the students’ collaborative learning process has to be confirmed on a case-by-case basis. However, when the results from one analysis reinforce those of another, the multivocality is productive in providing a complementary perspective for validation, and it offers clear targets for teachers to consider as a first priority.

Examining analysis results on the same dataset that do not provide triangulated validation could also be productive from a practitioner perspective, as the differences can promote reflection from multiple perspectives. For example, Howley et al.’s analysis of the Biology data (Chap. 26) was grounded in the assumption that heteroglossic conditions would be more conducive to students’ adoption of accountable talk. When their experimental hypothesis that the Indirect Agent condition would be more conducive to accountable talk behavior was not supported by their analysis of the aggregate data collected from all the groups, they examined some of the interaction segments between the tutor agents and the students, and concluded that a crucial problem was the lack of coordination between the condition-specific prompts and the timed task prompts from the agents, and that the Indirect Agent should be improved with respect to timing and coordination. Stahl chose to analyze in detail the data of only one of the groups from the same dataset (Chap. 28) and constructed a visualization of the group’s response structure to show the threading of responses, the mediation of accountable talk, and content uptake. This enabled him to home in on three instances of the Indirect Agent successfully mediating accountable talk. The in-depth analysis of these “pivotal moments” led him to rather different conclusions. On the basis of the holistic trace of an entire chat log, Stahl identified issues of lesson design, over-scripting of the agents, and masking of the social identity of students. He proposes that accountable talk is a sophisticated level of discourse, one that requires the skills of a teacher with mastery of this pedagogical approach rather than simply canned interventions from automated agents. Such fundamental conclusions arising from multivocality in analysis will also help teachers in their evaluation and selection of CSCL environments.

Discussion

The Potential of Interaction Analyses and Analytical Tools to Contribute to Pedagogical Practice

Classroom interaction analysis is something all teachers practice informally whenever direct interaction with students is involved. The focus and level of sophistication of the analysis, however, vary greatly according to the time devoted to it, and to its objective, means, and context. For those willing to go a step beyond the free flow of thoughts about what is going on, during or after a class, observation/analysis tools that harmonize with the acquisition metaphor (AM; Sfard, 1998) are available. However, they are rarely used by teachers and remain in the toolkit of supervisors. The social psychology of power relationships and the critical theory of education argue for giving space to, and encouraging, participation. But teachers moved by the participation metaphor (PM) have less direct access to what students are talking about or doing than when the AM applies. Questions as simple as “Are some students working and others not? Are students simply adding up individual contributions without much discussion?” are haunting ones for teachers. Therefore, reflective practitioners who engage students in collaborative learning may be especially appreciative of ready-to-use analytical tools likely to inform them about what is going on. The analytical tools provided in this book are a most significant contribution to the PM repertoire.

Written electronic conversations have the great advantage of providing readily accessible traces of student participation. An online tool usually offers a teacher the possibility of glancing through what is going on and attending to author names and turns, the time and length of messages, and the like. Teachers can monitor, scaffold, and evaluate. Depending on the functionalities of the online tool and related analytical tools (e.g., chat, Group Scribbles, Knowledge Forum®), the teacher as reflective practitioner can proceed to further analyses; see, for instance, the analysis of keywords in Law and Wong’s chapter. For analysis of the epistemological/conceptual foci of knowledge building (KB) discourse, KSV and KISSME provide visualizations of group or classroom talk. To interpret the results, however, a teacher has to be acquainted with Scardamalia and Bereiter’s (2003) perspective on knowledge building. In other cases, as with the use of TATIANA, knowing a theoretical perspective such as Michaels, O’Connor, and Resnick’s (2008) Accountable Talk brings meaning to a tool that would otherwise remain opaque to a teacher.

Fully interpreting the results, however, would require an understanding of the underlying metrics, and few teachers are likely to be interested in that. Some teachers may instead be inclined to trust the metrics and engage students in reflecting upon the automated analytical results. (This means that automated data analysis could have implications for student learning: teachers could use the tools to support students’ self-reflection, peer collaboration on reflection, and whole-class reflection, on their own performance and on the collaborative process, thereby building students’ capacity as self-directed learners or autonomous inquiry groups.) The community of practice theory, which argues for giving more space/permission for participation, and power relationship theory may be instrumental as a theoretical basis for creating this opportunity in the classroom. Here, visualizations that highlight pertinent aspects of the interactions may prove especially attractive to groups and classrooms. Students’ identification of pivotal moments in onsite/online conversations would be an important metacognitive act for them to perform.

Therefore, the very introduction of the pivotal moment concept, and its spread in analytical activity, may be one of the most important outcomes of this book. It is an advance in the analysis of human interactions in the classroom. Most teachers are likely to have the conceptual understanding (conceptual tools) to see the pedagogical relevance of identifying pivotal moments during onsite/online group/classroom talk. The writers of this book make a major contribution by showing that collaborative learning does take place in the classroom, and that it leads to deeper understanding and knowledge building.

At a time when UNESCO (2011) is fostering deep understanding and knowledge creation as key teacher competencies in the digital age, the pedagogical implications of the preceding chapters’ results extend to teacher educators. They are the ones most likely to grasp the value of pivotal moments during classroom talk and to consider them an important, student-focused feature of classroom/group talk. When planning learning experiences for pre- and in-service teachers to help them develop this kind of understanding, teacher educators may want to begin with pivotal moments that point to changes in knowledge and understanding and to patterns of interaction, as these are of general concern to teachers. At an experiential level, we suggest combining synchronous (onsite/online) and asynchronous (online) talk on questions that matter to teachers. The exercise of identifying pivotal moments in their own conversations would be valuable for teachers. Moreover, it could become a worthwhile addition to the practice of group reflective analysis, an innovation in teacher professional development going back to the eighties and nineties (Schön, 1983) and one that stresses the importance of multiple perspectives (Valli, 1992).

Although in this chapter we see opportunities for pedagogical practice and argue that multivocal analyses of collaborative learning (CL) interactions are relevant for practice, whether this is really the case remains to be empirically explored, and this would best be done as an interdisciplinary collaboration among learning analytics researchers, education/pedagogy researchers, and teachers. It is unlikely that even those who understand and value multivocality will apply it on a regular basis, at least not in the near future.

Suggestions for Learning Analytics Researchers with an Interest in Supporting/Influencing Practice

Learning analytics researchers interested in classroom interaction analysis may want to engage teachers in discussing the analytical results from their practice, thus uncovering whether teachers find these results helpful in understanding students’ learning and in orchestrating/facilitating learning, and, if so, which formats or kinds of visualizations would be most helpful. They may have already done so with local teachers while developing or testing their metrics.

Researchers may also want to consult teachers and/or learners about whether providing such results to learners would be meaningful and helpful to the learning process. Teachers could share with researchers (1) ways in which they would introduce learning analytics results to students, and (2) ways in which they have attempted to share such results with students. It would help researchers to know the circumstances within which teachers may find multivocality results informative for immediate pedagogical action.

Researchers may use a graduate seminar to give special attention, together with their students, to pivotal moments during classroom talk, thus bridging their research and teaching activities. Pivotal moments could serve as scaffolds for teachers and learners in understanding and steering learning in a self-directed way. In the context of supporting assessment for learning, and in particular assessment for collaborative learning, it is important that analytical results not only comprise discrete learning outcomes but also provide a more nuanced understanding of how these come about as an integral part of the interactional process, which would give teachers and learners greater sociometacognitive agency in CL. For instance, as an integral part of a graduate seminar, circumstances of use could be grasped and results could be interpreted with a sense of the “whole”. The closest illustration of such a process is Sawyer’s attempt (Chap. 10) to better understand the peer-led team learning discourse practices, used by peer leaders and students and among the students themselves, that give rise to an enhanced understanding of the chemistry content.

The Potential of Multivocality in Interaction Analysis to Contribute to Pedagogical Practice

The results presented in Table 35.1 indicate that the goal of using technology to support teachers in near real-time pedagogical decision-making is not feasible in most cases. Such a “pessimistic” conclusion probably reflects not only limitations in the current state of analytical and visualization technology but also the complexity of many of the analyses and our own limited expertise in this area. On the other hand, the multivocal analytical methods and tools presented in this volume may well become a pathway or scaffold toward using technology tools to support teachers in assessment for learning, which could have significant implications for practice.

We also see good potential for some of these analyses and tools to be used for professional development and teacher learning (teacher education), especially in supporting teacher reflection on the impact of different pedagogical designs and facilitation on the processes and outcomes of collaborative learning. This offers a pathway for teachers (pre- and in-service) to engage with, and hence learn about, social constructivist models of learning. The kind of data and analysis made available in this volume is an attractive and appropriate resource for supporting more open, exploratory modes of learning in teacher education, as the multivocality contained therein ensures that the analyses would not be used as “ideal” or “authoritative” but as stimuli for further discussion.

There is also potential for interaction analysis to illuminate which kinds of learning technology are likely to support more open, collaborative, and knowledge-building-oriented approaches to learning and teaching, and which are likely to restrict them (e.g., the analyses of the Biology data). The following kinds of analysis could help practitioners understand and assess the appropriateness of a learning technology for CSCL:

  • Analyses that reveal whether the interactions between technology and learners resemble teacher-/instructor-centered interactions, which tend to obstruct students’ knowledge building, or instead encourage more open interaction and exploration among learners.

  • Analyses that reveal and encourage students’ agency in learning, such as the generation of good inquiry questions and sustained efforts to improve understanding, which are important if knowledge building is to be achieved through CSCL. Such analyses would also help teachers differentiate software platforms that provide external agents to direct student learning from those that foster students’ agency to take responsibility for, and to monitor, their own learning progress.

As mentioned in the introduction, multivocal interaction analysis can contribute two types of relevance to practice: analyses that can inform more immediate pedagogical decision-making, and analyses that provide more general insight into the processes and outcomes of learning and knowledge building in collaborative contexts. The literature on interaction analysis has provided a scientific basis for a more nuanced understanding of collaborative learning, which clearly has pedagogical implications. However, our analysis reveals that research offering greater alignment between its analytical goals and the learning outcomes or processes that matter in the daily milieu of a teacher’s practice is more likely to contribute to advances in CSCL practice in educational settings.