
1 Introduction

Adaptive collaborative learning support (ACLS) provides intelligent support to enhance collaborating students’ learning outcomes [10]. Many ACLS systems have been applied in different contexts, from face-to-face classroom environments [1, 8, 19] to online learning [6]. For example, [6] designed adaptive support in the form of strategic prompts (e.g., request an explanation, offer assistance, encourage collaboration) to promote structured and extended discussion in an online collaborative learning environment. [19] developed a system to support help-giving in a classroom peer tutoring context by providing timely and appropriate help. While these technologies show promise, they focus on supporting students within a single activity in a given context and do not take into account that students often collaborate across multiple educational platforms. We build on this prior work to design a cross-platform ACLS that supports student collaborative activity across multiple platforms and improves learning.

Designing ACLS for multiple platforms is important because of the need to understand how students’ interactions and skills transfer across different contexts. First, students might behave differently on different platforms, and modeling interaction within a single context limits the potential effectiveness of the ACLS. Student behavior in a synchronous collaborative learning environment (e.g., text-based chat) might differ from behavior in an asynchronous collaborative learning environment (e.g., online threaded discussion). For example, [4] assessed graduate student participation in synchronous chat and asynchronous threaded discussion environments, and reported more responding and reacting statements in the synchronous environment than in the asynchronous one. [12] explored the social and cognitive presence of graduate students in a synchronous and an asynchronous tool within the same online environment. In this work, we are interested specifically in the quality of collaborative support within a learning activity distributed across multiple platforms. Second, as students learn how to collaboratively construct knowledge on one platform, they will hopefully transfer these skills to a second platform; their collaborative activity on one platform could inform their activity on the other. Thus, facilitating the transfer of skills using ACLS might ultimately enhance students’ collaborative learning abilities beyond a single context. However, one of the challenges in supporting collaboration is modeling the collaborative behaviors of students [19]. For multiple platforms, our first research question is: How do individual students’ collaborative interactions vary across different learning platforms?

In this paper, we examine student help-giving behavior in a mathematics classroom. Help-giving is defined as an activity where students interact with their peers, give explanations to one another, and provide feedback and examples [20]. While doing this, students clarify, elaborate, articulate their own understanding, justify their reasoning, and organize concepts to explain their ideas [16, 21]. These behaviors contribute to the co-construction of knowledge, which helps students learn and transfer their knowledge across multiple contexts. To support student collaboration across multiple contexts, we need to understand students’ patterns of collaborative interactions. As students collaborate with each other, their motivation for help-giving can affect their participation during collaboration. Thus, our second research question is: How does student collaborative behavior across different platforms predict learning and motivation? Here, we explore motivation in the context of expectancy-value theory [22]. Both expectancy (whether an individual expects to be able to perform a task) and value (how worthwhile the individual finds the task) are essential motivational factors that might help us to understand student help-giving behaviors and design ACLS systems for multiple platforms.

In addition to the influence of different learning environments, students’ collaborative help-giving behaviors can also be influenced by their motivations. Much work in CSCL has analyzed the influence of motivation on students’ contributions to online discussions, learning activities, and knowledge acquisition during classroom collaboration [15, 17, 24]. We considered both student attitude towards mathematics (ATM) and self-efficacy (SE) as motivational factors in this paper. Self-efficacy, a concept developed by Bandura [2], refers to a student’s beliefs about their capacity to accomplish certain tasks, which affect motivation, effort, persistence, and achievement. Attitude refers to student beliefs about whether a “task is important, enjoyable, or difficult” [9]. Both ATM and self-efficacy play an important role in how students learn mathematics. To support a student with a cross-platform ACLS, we need to explore how these individual motivational differences influence student interaction across different educational platforms. Our third research question is: How do individual differences predict student interactions across multiple platforms?

The purpose of this study is to develop an understanding of students’ collaborative interactions in a middle school mathematics classroom where students have different collaborators and use different learning environments. To explore the concept of a cross-platform ACLS, we chose two platforms: (1) Modelbook, an interactive digital textbook, and (2) Khan Academy, an online question-answering platform. Modelbook allows synchronous communication through different tools, promoting collaboration mainly in the form of discussion and text-based chat. In Khan Academy, on the other hand, students participate in a collaborative activity by answering questions. We investigate our research questions within the context of these two platforms.

2 System Description

Our learning environment leverages a 5-day curriculum that we co-designed with an expert consultant fluent in Modeling pedagogy, a type of instructional pedagogy in which students collaborate in small groups to solve a problem [7]. The curriculum focused on a ratio and proportions unit in middle school mathematics, including the topics of proportional relationships, lines, and linear equations. On Day 1, students were asked to devise a method for mixing blue and red paint in the perfect ratio to create purple. On Day 2, students iterated on their models and discussed the definition of a proportion, and on Day 3 students looked at other examples of proportions. On Days 4 and 5, students received hands-on experience applying their understanding of ratios and proportions to modeling the speed of a moving car. Following Modeling pedagogy, on each day students alternated between individual problem-solving, small-group activities, and whole-class discussions, creating multiple opportunities for collaboration.

Modelbook incorporates several components to facilitate student interaction. For example, once students had completed their small-group activities, the teacher asked them to upload photos of their work to a gallery. The students then discussed each image, providing feedback to one another (see Fig. 1, left). Modelbook also has a chat feature where students can engage in general discussion. With the guidance of the teacher, students were encouraged to perform help-giving interactions in the gallery and the general chat.

Fig. 1. Left: Gallery image thread with student discussion. Right: Khan Academy discussion thread.

Modelbook was built using the Django web framework. The front end of the application is implemented using HTML, jQuery, and CSS. Templates within Django contain the static parts of the desired HTML output as well as provisions for inserting dynamic content. For each tool in Modelbook, there is an icon on the left-hand side of the application which, when clicked, triggers a jQuery event to dynamically load the related interface on the right-hand side. All discussion threads were implemented using the “Pusher” service, a hosted API for quickly and securely adding real-time bi-directional connections. We used the default SQLite database that accompanies the Django framework; all user activity (e.g., uploaded images, messages) is stored in this database.
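To make this architecture concrete, the following is a minimal sketch (not the authors’ actual code) of how a discussion message might be persisted and then broadcast. The model, channel, and credential names are hypothetical; it assumes Django and the official `pusher` Python client are installed.

```python
# Minimal sketch (not the authors' actual code) of storing and broadcasting
# a Modelbook discussion message. Model, channel, and credential names are
# hypothetical; assumes Django and the official `pusher` Python client.
from django.db import models
import pusher

class ChatMessage(models.Model):
    """One utterance in the general chat or in a gallery image thread."""
    author = models.ForeignKey('auth.User', on_delete=models.CASCADE)
    thread = models.CharField(max_length=64)  # e.g., "general" or a gallery image id
    text = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)

# Hosted real-time service; the credentials here are placeholders.
pusher_client = pusher.Pusher(app_id='APP_ID', key='KEY',
                              secret='SECRET', cluster='us2')

def post_message(user, thread, text):
    """Persist the message, then push it to all clients subscribed to the thread."""
    msg = ChatMessage.objects.create(author=user, thread=thread, text=text)
    pusher_client.trigger(thread, 'new-message',
                          {'author': user.username, 'text': msg.text})
    return msg
```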

The other collaborative platform used is Khan Academy. While best known for its instructional videos, Khan Academy allows asynchronous collaboration with geographically distributed learners through a question-and-answer thread under each video (www.khanacademy.org). For our curriculum, students posted responses to Khan Academy questions four times over the 5-day period. To facilitate this activity, students were given a homework sheet that instructed them to watch a related Khan Academy video, find two questions posted by other people, and provide a response to each. Students’ responses were then discussed in class the following day, and students were encouraged to post in class if they had not done their homework. An example of student participation in Khan Academy is shown in Fig. 1 (right).

3 Method

We conducted a five-day design study to explore how student interactions differed between Modelbook and Khan Academy, whether individual characteristics predicted how students interacted, and whether their interactions on the two platforms predicted their learning outcomes.

3.1 Participants

We conducted the study at a middle school in the southwestern United States with a minority enrollment of 47%. The study was conducted as part of regular classroom practice in an eighth-grade mathematics class of 28 students. We received parental permission for 20 students and thus excluded the other 8 from the analysis. Student ages ranged from 12 to 14; there were 11 boys, 8 girls, and 1 student who selected “Other” on the gender question. Participants’ self-reported ethnicities were as follows: Hispanic (10), White (3), African American (2), Native American (1), Asian (1), and Other (3).

Of the 20 students who consented to participate in the study, only 16 students participated in all elements of the study: two students incorrectly filled out the motivation questionnaire (i.e., selected multiple items), one student was absent for the pre-survey, and one student did not post on Khan Academy. Thus, the final data analysis was done with 16 students.

3.2 Procedure

Domain pretests and a motivation survey were given to students on the Friday before the intervention week. Over the five days of the intervention, students followed the curriculum and engaged in multiple types of activities and interactions: receiving direct instruction from the facilitator, working in small groups of two or three, participating in classroom discussions, completing Modelbook activities, and answering questions on Khan Academy (both in class and for homework). While the classroom teacher was present on each day of the study, the activities were facilitated by one of the authors, a former teacher who also served as our expert consultant on the Modeling curriculum. On the Wednesday following the five intervention days, students took a domain posttest and a motivation post-survey.

3.3 Measures

Domain Assessment. The pretest and posttest consisted of two isomorphic forms designed to assess students’ ability to solve proportion and ratio problems and their knowledge of relevant definitions of proportion. The assessment was based on district benchmarks and co-designed with the expert consultant. Each test form included 12 items: 11 assessing students’ mastery of the domain concepts and 1 asking students to provide an explanation. Forms were counterbalanced across participants (i.e., half the participants received form A for the pretest and form B for the posttest, and half received form B for the pretest and form A for the posttest). After administering the tests, we noticed that a multi-part question (consisting of 3 items) on form B was unclear to students and resulted in a disproportionate number of incorrect responses. We excluded that question from the test analysis, along with the corresponding question on form A. Thus, a total of 8 items were summed to assess student domain learning, with 1 item used to assess student explanatory skill.

Motivation Pre-measure. We surveyed students about their attitudes towards math and mathematical self-efficacy. The instrument consisted of 22 five-level Likert-type items. Value and enjoyment of mathematics were assessed using a portion of the Attitudes Towards Math Scale [18], modified by reversing some items to balance positive and negative statements. To examine students’ mathematics self-efficacy, we adapted items from the Motivated Strategies for Learning Questionnaire (MSLQ) [5, 14]. The MSLQ scale is generic, so we modified the items to be specific to mathematics. An example item is, “I believe I will receive an excellent grade in math class.”

Motivation Post-measure. The post-intervention motivation scale consisted of 15 questions based on expectancy-value theory, with 5 equivalent questions for each context (Modelbook, Khan Academy, and face-to-face interaction). We wanted to assess whether students’ perceptions of the tasks differed between platforms and varied based on their experiences during the intervention. The scale was modified from [3] to reflect students’ motivation towards help-giving in math. Two example items are: “I’m certain I can make others understand the most difficult material presented in the question” (expectancy) and “I enjoy helping others with their math questions” (value).

Coding of Interactions. We coded the digital interaction data using a coding scheme based on [21] with the following dimensions: (1) Level of Relevance to the content (LOR), (2) Level of Elaboration (LOE), and (3) Social factors (S). LOR was coded using three categories: General (information on the content, but not enough to call it an explanation; e.g., “I agree because my board also was not an exact pattern.”), Specific (information specific to the content; e.g., “I think the unit rate is not 2/3 but it is 2:3”), and Off-topic (irrelevant to the domain content). LOE, coded for on-topic (General and Specific) utterances, has two categories: Non-Elaborated (an answer without an example or explanation; e.g., “I agree our car also did not go in a straight line.”) and Elaborated (an answer with an example or a proper explanation with reasoning and justification; e.g., “if we have 2 cups+3 cups that would = five but we need 20 cups”). Finally, we classified an utterance as Social if it had at least one of the following four factors: praise (“the graph is good”), apology (“No offense but this makes no sense to me, sorry.”), politeness (“Thank you”), and encouragement (“Just do your best”). A second rater independently coded 17% of the dialogues for LOE (kappa = .805), LOR (kappa = .954), and Social (kappa = 1.0). Disagreements were resolved through discussion.
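For reference, inter-rater agreement of this kind can be computed with scikit-learn’s implementation of Cohen’s kappa; the sketch below uses made-up labels rather than our actual coded data.

```python
# Sketch of the reliability check: Cohen's kappa between two raters'
# codes for the same utterances. The labels below are made up, not our data.
from sklearn.metrics import cohen_kappa_score

rater1_loe = ["Elaborated", "Non-Elaborated", "Elaborated", "Non-Elaborated"]
rater2_loe = ["Elaborated", "Non-Elaborated", "Non-Elaborated", "Non-Elaborated"]

kappa = cohen_kappa_score(rater1_loe, rater2_loe)
print(f"LOE kappa = {kappa:.3f}")  # values above .80 are conventionally "strong"
```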

4 Results

For the analysis, we computed both the total counts for each code dimension and student-level percentages with respect to each student’s total utterances in that dimension. Table 1 shows the means and standard deviations (N = 16) for Modelbook (MB) and Khan Academy (KA).

Table 1. M and SD for each coding category
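The student-level percentage computation described above can be expressed as a short pandas sketch; the data and column names here are hypothetical.

```python
# Minimal pandas sketch (hypothetical data and column names) of the
# student-level percentage computation: each student's count of a given
# code divided by their total utterances in that dimension, per platform.
import pandas as pd

# One row per coded utterance.
utterances = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s2"],
    "platform": ["MB", "MB", "KA", "MB", "KA"],
    "loe": ["Elaborated", "Non-Elaborated", "Elaborated",
            "Non-Elaborated", "Elaborated"],
})

counts = (utterances.groupby(["student", "platform"])["loe"]
          .value_counts()
          .unstack(fill_value=0))
pct_elaborated = 100 * counts["Elaborated"] / counts.sum(axis=1)
print(pct_elaborated)
```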

1. How does student interaction differ between Modelbook and Khan Academy?

Table 2 shows the mean percentages and standard deviations of elaborated, specific, and social utterances for both Modelbook and Khan Academy, computed with respect to the total utterances in each dimension (i.e., LOE, LOR, and S).

Table 2. M and SD for distinct types of utterances

To investigate differences in interaction between platforms, a repeated measures MANOVA was conducted with percent elaborated, percent specific, and percent social as dependent variables and platform (Modelbook or Khan Academy) as the independent variable. The overall model was significant, \(F(3, 13) = 32.136\), \(p < .001\). Univariate tests revealed that while percent elaborated did not differ significantly between platforms [\(F(1, 15) = 2.480\), \(p = .136\)], percent specific did [\(F(1, 15) = 45.226\), \(p < .001\)], as did percent social [\(F(1, 15) = 23.122\), \(p < .001\)]. It should be noted that interaction on Khan Academy followed a fairly uniform pattern, with nearly all on-topic utterances being specific and no utterances being social.
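Because the platform factor has only two within-subject levels, this repeated measures MANOVA is equivalent to a one-sample Hotelling’s \(T^2\) test on the per-student Modelbook-minus-Khan-Academy difference scores. The sketch below illustrates that computation on simulated data; it is an illustration, not our analysis script.

```python
# Sketch of the two-platform comparison as a one-sample Hotelling's T^2 test
# on paired (Modelbook minus Khan Academy) difference scores, equivalent to
# a two-level repeated measures MANOVA. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 16, 3                      # 16 students, 3 dependent variables
diffs = rng.normal(size=(n, p))   # %elaborated, %specific, %social differences

mean_d = diffs.mean(axis=0)
cov_d = np.cov(diffs, rowvar=False)
t2 = n * mean_d @ np.linalg.solve(cov_d, mean_d)

# Convert T^2 to an F statistic with (p, n - p) degrees of freedom.
f_stat = (n - p) / (p * (n - 1)) * t2
p_value = stats.f.sf(f_stat, p, n - p)
print(f"F({p}, {n - p}) = {f_stat:.3f}, p = {p_value:.4f}")
```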

As students gave both elaborated and specific help in Modelbook and Khan Academy, we computed correlations between elaborated help on the two platforms and between specific help on the two platforms. Elaborated help in Modelbook was not significantly correlated with elaborated help in Khan Academy [r(14) = .433, p = .094], and specific help in Modelbook was not significantly correlated with specific help in Khan Academy [r(14) = .261, p = .328]. Interestingly, specific help in Modelbook was correlated with elaborated help in Khan Academy [r(14) = .746, p = .001]. This analysis demonstrates not only that interaction differed in general across the two platforms, but also that, for individual students, interaction on one platform did not predict interaction on the other.
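Each of these is a Pearson correlation over N = 16 students, so df = N − 2 = 14; a minimal scipy sketch on placeholder data:

```python
# Sketch of one cross-platform correlation with scipy on placeholder data;
# with N = 16 students, the correlation has df = N - 2 = 14.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
elaborated_mb = rng.uniform(0, 100, size=16)  # % elaborated help, Modelbook
elaborated_ka = rng.uniform(0, 100, size=16)  # % elaborated help, Khan Academy

r, p = stats.pearsonr(elaborated_mb, elaborated_ka)
print(f"r(14) = {r:.3f}, p = {p:.3f}")
```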

Table 3. M and SD for post-motivational measure on help-giving behavior

While behaviors differed across platforms, students’ perceptions of their own interactions on the platforms did not. A repeated measures MANOVA was conducted with each of the motivational post-measures (self-efficacy, importance, interest, utility, and cost) as dependent variables and platform (Modelbook or Khan Academy) as the independent variable. The overall model was not significant [F(5, 11) = 1.082, p = .422] and there were no significant univariate effects. Table 3 summarizes these results.

2. How does help-giving behavior predict learning and motivation?

Table 4 shows the means and standard deviations of the pretest and posttest scores. We conducted a repeated-measures ANOVA and found that scores did not differ significantly from pretest to posttest. Despite the overall lack of learning gains, we still examined predictors that may contribute to learning for individual students.

Table 4. M and SD for domain assessment and pre-motivational measures

We conducted a stepwise multiple regression analysis with percent elaborated in Modelbook and in Khan Academy, percent specific in Modelbook and in Khan Academy, percent social in Modelbook, pre self-efficacy, attitude towards math score, and pretest score as predictor variables, and posttest score as the dependent variable. The model that emerged from the stepwise analysis contained only percent elaborated in Modelbook (\(\beta = .584\), \(p = .003\)) and pretest score (\(\beta = .488\), \(p = .010\)) as significant predictors, together explaining 67% of the total variance (adjusted \(R^2 = .619\); \(F(2, 13) = 13.181\), \(p = .001\)). Thus, the only behavioral variable that predicted posttest score was the level of elaborated help in Modelbook.
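For illustration, a p-value-based forward stepwise procedure of the kind described above can be sketched with statsmodels. The function and variable names are our own, and the predictors and outcome should be standardized beforehand if standardized betas are desired.

```python
# Illustrative forward stepwise selection (p-value entry criterion).
# Function and variable names are our own, not the authors' analysis code.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Greedily add the predictor with the smallest p-value below alpha."""
    selected = []
    while True:
        remaining = [c for c in X.columns if c not in selected]
        if not remaining:
            break
        # p-value of each candidate when added to the current model
        pvals = {}
        for c in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [c]])).fit()
            pvals[c] = fit.pvalues[c]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit()

# Usage sketch: forward_stepwise(predictors_df, posttest_scores).summary()
```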

3. How does motivation and prior domain knowledge predict student help-giving behavior across the two platforms?

Table 4 shows the means and standard deviations of the pre-motivational measures: math self-efficacy, value, and enjoyment. To determine how motivation and prior knowledge predict student help-giving behaviors, we conducted two multivariate regressions. The first analysis was done for Modelbook behaviors, with percent elaborated, percent specific, and percent social as dependent variables and pretest score, average self-efficacy, and average attitude towards math score as predictors. No significant model emerged. Univariate tests also showed no significant results: F(3, 10) = .471, p = .709 for pretest score; F(3, 10) = 1.046, p = .414 for average pre self-efficacy; and F(3, 10) = 1.007, p = .430 for average attitude towards math score. A multivariate analysis for Khan Academy behaviors, with percent elaborated and percent specific as dependent variables and the same predictors, demonstrated a similar result. Univariate tests showed no significant results: F(2, 11) = .618, p = .557 for pretest score; F(2, 11) = .596, p = .568 for average pre self-efficacy; and F(2, 11) = .286, p = .756 for average attitude towards math score. In short, students’ motivation prior to the intervention did not have an effect on their behaviors during the intervention.

5 Discussion and Conclusion

To design adaptive support for collaboration, both student activity history in the collaboration contexts and current engagement in collaborative activities are essential [13]. In this paper, we examined whether student interactions differed across technological platforms, how their interactions predicted learning and motivation, and how their interactions were informed by their individual characteristics. We found that students displayed higher-quality help-giving behavior in Khan Academy than in Modelbook, but only help-giving behaviors in Modelbook predicted student learning. Individual characteristics such as prior knowledge and math motivation did not predict how students gave help.

One interesting finding from this work was that while students gave more high-quality help in Khan Academy than in Modelbook, only the elaborated help in Modelbook was predictive of student posttest scores (controlling for pretest). The affordances of Khan Academy (asynchronous communication with an external community) may have led students to take more time to formulate their responses [23], producing more specific and more elaborated help. In contrast, Modelbook afforded synchronous, informal communication with peers, leading to less high-quality help overall but more social behaviors (which other work has shown to be beneficial for learning [11]). In Khan Academy, because of the increased pressure of asynchronous public posts, students may have engaged in knowledge-telling behaviors [16], where they gave help on concepts they had already mastered. This may have led to less learning than their more off-the-cuff interactions in Modelbook, which may have represented knowledge-building, where students construct their knowledge as they construct their explanations. One implication of this finding for the design of adaptive support is that to improve outcomes from help-giving, it may be sufficient to encourage more elaborated help in Modelbook. In Khan Academy, however, it may be necessary to directly scaffold students in constructing elaborated help so that they engage in reflective knowledge-building behaviors.

Another critical element of our results is that while context dictated how students gave help, individual differences did not. Students’ help-giving behavior was more elaborated and specific in Khan Academy than in Modelbook, and for individual students these behaviors were not correlated with each other. This indicates that a student’s behavior on one platform does not inform how they will behave on another platform; rather, the different platforms influenced how students helped each other. Additional support for this finding is provided by the fact that neither prior knowledge, math self-efficacy, nor attitude towards math predicted how students gave help on either platform. This finding implies that a model of student help-giving on one platform is unlikely to generalize to the same student’s help-giving behaviors on a different platform, and context thus needs to be part of any knowledge-tracing model of help-giving.

This study has a number of limitations. First, the sample size was small. Second, the number of interactions was greater in Modelbook than in Khan Academy due to the design of the curriculum; to adapt to this limitation, we used student-level percentages rather than absolute counts of student interactions. Third, students as a whole did not improve from pretest to posttest, possibly because the intervention time was too short or because our assessment was not sensitive enough to detect changes in student knowledge.

Nevertheless, the present research has important implications for computer-supported collaborative learning and ACLS. Students’ interactions in different platforms can be used to design individualized support that facilitates productive communication across collaborative learning environments. This goal will require a cross-platform student interaction model along with a domain knowledge model and motivation model for each student. Investigation is required to understand how to make predictions about student behavior within a single platform using this cross-platform interaction model, whether and how to encourage students to participate in platforms they are less comfortable with, whether and how to encourage students to transfer their skills from one platform to a different platform, and whether and how the same student should be given different kinds of support on different platforms.

In this paper, we examined students’ help-giving behavior across Modelbook and Khan Academy. This paper takes a step towards establishing the need for understanding cross-platform collaborative behavior, and based on our findings, we are currently building a cross-platform help-giving model. We believe this approach will ultimately enhance peer collaboration as students move between platforms of interaction.