Introduction

The processes that enhance and control group learning are an emerging topic in research on computer-supported collaborative learning (CSCL) (Kyprianidou et al. 2012; Zion et al. 2015). Collaborative learning facilitates the exchange of information, ideas and materials, and supports peer review and continuous feedback during virtual activities (Kim and Ryu 2013). Collaboration requires coordination and regulation of activities. In CSCL, participants interact constantly with other team members, and the group must reach a significant level of coordination to achieve high-level learning goals (Järvelä et al. 2015). Effective teamwork involves the use of strategies for controlling the progress of the activities and regulating the processes adopted by the group (Valcke et al. 2009). Participants must be able to evaluate the strengths and weaknesses of their collaborative work (Biasutti 2011) and to assess the skills and competencies of their teammates. Effective CSCL requires abilities such as reflecting on the actions performed by the group and developing awareness of the group’s cognitive potential (Valcke et al. 2009). These processes involve metacognitive skills, which are of great importance in controlling the cognitive dimension during task performance (Vrugt and Oort 2008).

Research has shown that metacognition can be experienced in learning settings at an individual level, and several of its aspects have been identified (De Backer et al. 2012; Khosa and Volet 2014). Metacognition has been studied mainly as an individual process, while the role of socially regulated behavior during collaborative activities has been neglected (Zion et al. 2015). The mechanisms of individual metacognition are well known, and there is discussion about shifting the research focus from the individual to the group dimension (Hadwin and Oshige 2011; Janssen et al. 2011). Collective knowledge management processes, and how the group creates, manages and controls information during knowledge building, should be analyzed (Järvelä et al. 2015). Although the development of regulated learning skills in CSCL activities has been considered (Järvelä et al. 2015), there are few contributions and tools addressing the metacognition of group dynamics and group awareness (Janssen et al. 2011).

In this paper, the discussion focuses on the idea that metacognition can be applied to collaborative learning activities, with a specific focus on reflection on the group’s cognitive potential. The effects of metacognition of the group’s skillset could be similar to the effects of metacognition on individuals, but with an additional impact on teamwork. Specifically, this research examines the validation process of a tool for measuring the metacognition of group processes in CSCL environments.

Theoretical background

The theoretical background of this study considers previous metacognitive models and studies on metacognition in CSCL environments and during online collaborative activities. In addition, research on the building and validation of tools for measuring metacognitive skills is analyzed.

Metacognitive models

In education, there is ample research on metacognition at the individual level, and several models and classifications have been developed. Metacognition is a complex construct that has been simply defined as cognition of cognition or thinking about thinking (Flavell 1979). Metacognition refers to the ability of a person to understand and develop consciousness about his/her own cognitive processes and to control these processes to get the most out of learning activities. Metacognition is considered an important process for improving awareness, controlling the cognitive elaboration of information and developing regulated strategies during learning.

Several authors have highlighted that metacognition consists of two components: knowledge of cognition and regulation of cognition (Schraw and Moshman 1995). Knowledge of cognition is the information that people have about their own cognitive processes, while regulation of cognition consists of the resources used to regulate and control learning processes. Regulation of cognition includes the following activities: planning (predicting products and results, defining methods and arranging strategies), monitoring before and during learning (controlling, testing, revising and changing learning strategies and approaches), and evaluating the activities (making judgments about results and ways of performing the tasks). These metacognitive activities are linked and influence each other reciprocally. This model has been validated and used in many studies (De Backer et al. 2012; Khosa and Volet 2014; Zion et al. 2015), demonstrating that the skills of planning, monitoring and evaluating are those most used during learning. Metacognitive research has taken several directions, considering aspects such as the executive control involved in models that analyze processes such as planning, evaluating, monitoring and revising. In addition, the concepts of regulation (the mechanisms for transferring control from other to self) and self-regulation (the processes used by students for continuously directing and refining their actions) have been considered extensively. More recently, research has also begun to consider the metacognition of group processes.

Metacognition of group processes in CSCL environments

Several studies are linked to the metacognition of group processes, providing premises, concepts and a conceptual framework for the development of this construct. The metacognition of group processes can be defined as the ability to reflect on the cognitive skills of the group during collaborative activities. It refers to thinking about the cognitive characteristics and potential of the group: how conscious participants are of their skills in selecting and organizing information, as well as of their abilities to plan the activities; to distribute, modify and improve the work; and to evaluate aspects and processes of the collective work and of the virtual learning environment. The metacognition of group processes could be considered an emergent global feature rather than the sum of individual aspects.

The relevance of common cognitive characteristics during collaborative learning has been explored in many research fields but has rarely been extensively applied in metacognitive research. Several terms, such as team cognition, shared cognition, group awareness and transactive memory, have often been used to highlight the importance of team knowledge and collective mental constructs. In team cognition, shared knowledge is generated efficiently when members are able to collectively process information and when teams follow communication strategies that promote equal rates of information sharing among members (Grand et al. 2016). Mathieu et al. (2000, p. 280) discussed the role of shared cognition in team effectiveness, arguing that “the shared-mental-model construct suggests that it is not only the overlap of knowledge among team members that is predictive of team outcomes but also the synergy of the knowledge organizations. (…) The sharedness of teammates’ mental models predicts subsequent team-processes performance.” Group awareness is another key factor for team effectiveness in collaborative learning environments, and several conceptualizations and techniques have been proposed, including behavioral, social and cognitive awareness (Bodemer and Dehler 2011). Behavioral awareness refers to the activities of the students; social awareness refers to aspects such as consciousness of the presence of others and the participation of group members; cognitive awareness, or knowledge awareness, is connected to the knowledge of the members of the group. A further construct linked to group processes is transactive memory, a shared system for encoding, storing, and retrieving information. While it occurs in a group, a transactive memory system involves both the memory systems of the individuals and the processes of communication within the group (Wegner 1986).

The group processes have also been considered in flow and optimal experience research. Flow involves intrinsic motivation, introspection, and concentration on the self. In group flow, all the members focus on achieving the same result, sharing goals and objectives, and synchronizing with one another mentally to obtain group strength (Biasutti 2017). Research has demonstrated that the effect of flow state on team performance is similar to the effect of flow state on individuals but with an additional impact on team processes (Heyne et al. 2011).

Another concept linked to metacognition is regulated learning, which implies controlling and evaluating one’s own and the group’s learning. Hadwin and Oshige (2011) considered the following three types of regulation in collaborative tasks: self-regulated learning, co-regulated learning and socially shared regulation of learning. In self-regulated learning, group members take control of their own cognition, motivation, and emotion. In co-regulated learning, engagement in self-regulatory processes is supported by group members, technologies and contextual characteristics. In socially shared regulation of learning, group members work together to regulate their collective cognition. Regulated learning is guided by metacognition and involves the definition of strategic actions such as planning, monitoring, assessing, and motivation to learn. Studies on regulated learning in CSCL environments have shown that it is crucial to stimulate students’ metacognitive awareness and the adaptation of their actions to their situated cognitive, motivational, and emotional challenges (Järvelä et al. 2015). During CSCL, group members have to plan together, discuss collaboration strategies, monitor how the group is performing, assess the outputs, and regulate and change actions according to the achieved results. In CSCL environments, metacognitive processes are considered relevant for strengthening group coordination and developing effective learning (Järvelä et al. 2015). Metacognitive activities based on tasks such as designing plans, monitoring task progress, and evaluating ideas and strategies are decisive in enhancing performance during collaboration (Janssen et al. 2011).

Through metacognitive practices, participants become aware of their potentialities as learners and are able to evaluate and control their cognitive processes. Several metacognitive practices have been developed and compared in previous research, such as individual and social metacognitive support. It was found that individual metacognitive support significantly affects pupils’ virtual metacognitive performances, while social metacognitive support improves students’ participation in their peers’ learning processes and assists them in collaborating via the virtual environment (Zion et al. 2015). Sharing cognitive experiences facilitates the control and evaluation of one another’s behaviors, cognitive processes, and feelings (Kwon et al. 2013). These aspects emerge when participants interact in CSCL, as shown by Biasutti (2015), who analyzed engagement in collaborative creative activities. A qualitative analysis provided evidence of metacognitive processes such as knowledge of cognition, monitoring of cognition, and regulation of cognition. Participants expressed reflections such as: “Yes, but we are adding ideas on ideas”; “… but we have to arrange it in detail” and “The work goes on, and we are now mastering the technological resources; therefore, we focus on creativity.” These quotes highlight the use of “we”, demonstrating a focus on group processes rather than on the individual elaboration of information. Participants were thinking about the activities and reported thoughts, ideas and contributions about the collective dimension of the work. They reflected on the group knowledge, evaluated the group’s progress and regulated their cognitive resources. The virtual collaborative activities encouraged the development of higher-order abilities, indicating the relevance of reflective practice on the cognitive dimensions of group work.

Questionnaires for measuring metacognitive skills

A number of questionnaires have been developed to assess metacognitive skills during instruction, although this evaluation is challenging. Several aspects affect the evaluation of metacognition; for example, since it is not directly observable, it might be confounded with working memory capacity and verbal skills. In addition, some participants have demonstrated a propensity to give socially desirable responses and to underestimate or overestimate processes (Veenman 2011). Moreover, some tools consider specific aspects that have limited connections with learning activities in schools. One of the most common tools is the Metacognitive Awareness Inventory by Schraw and Dennison (1994), which comprises 52 items analyzing the knowledge and the regulation of cognition.

Several metacognitive tools have limitations, such as fragmented item division, failure to verify the theoretical model and poor psychometric properties. Some questionnaires, such as the Study Process Questionnaire (Biggs 1987), the Learning and Study Strategies Inventory (Weinstein et al. 1987) and the Motivated Strategies for Learning Questionnaire (Pintrich et al. 1993), dedicate more attention to general processes than to specialized ones, offering few items or scales on genuinely metacognitive processes. Other instruments present issues such as failure to verify the theoretical model and poor psychometric properties (Garrison and Akyol 2013).

Regarding CSCL, a number of questionnaires on awareness issues have been developed. A four-item scale for assessing students’ awareness of the participation of their group members has been proposed by Janssen et al. (2011) and consists of statements such as “I knew how much my group members contributed to the collaboration.” Janssen et al. (2007) developed the following measures: group-norm perception and perception of online collaboration and communication. The group-norm perception questionnaire consists of the following three scales: critical group-norms (“Our group is critical”), consensual group-norms (“In this group people generally adapt to each other”) and exploratory group-norms (“During discussions, criticism and counterarguments were accepted”). The perception of online collaboration and communication measure consists of the following three scales: positive group behavior (seven items, e.g., “We helped each other during collaboration”), negative group behavior (five items, e.g., “There were conflicts in our group”) and students’ perceived effectiveness of their group’s task strategies (eight items, e.g., “We planned our group work effectively”).

Garrison and Akyol (2013) have developed a questionnaire for assessing metacognition comprising the following three factors: knowledge of cognition, monitoring of cognition and regulation of cognition. The items in the questionnaire focus on reflection on the individual’s elaboration of information. Study results show that only the regulation of cognition factor was statistically confirmed.

The previously mentioned questionnaires mainly ask about processes at an individual metacognitive level rather than at the group level. Aspects such as the group’s ability to elaborate information and to control group processes are considered in only a few measures. In addition, several tools were tested with a limited number of participants and were presented briefly, without reporting data such as exploratory or confirmatory factor analyses. These studies provide an overview of the issues and are attempts to develop tools for measuring metacognition in CSCL. However, more work should be performed to test and validate the tools. What is missing in this scenario is a valid tool to measure the group’s ability to evaluate the collective elaboration of the work.

Synthesis of the background analysis

The literature review has shown that several models have been developed for individuals, while metacognition for group processes is underexplored. One of the most researched metacognition models is based on knowledge of cognition, planning, monitoring and evaluating, and it has been validated and used in many studies (De Backer et al. 2012; Khosa and Volet 2014; Zion et al. 2015). However, this research refers only to an individual process, not a collective one, leaving the group aspects unexplored. Regarding tools, several scales have been developed in the framework of CSCL without presenting detailed data on the validation process. There is currently a lack of validated instruments to assess collective metacognitive processes in online collaborative learning, and a shift from the individual to the group dimension in metacognitive research is supported. Studying the group dimension in depth could provide inputs for understanding the collaborative processes that occur during CSCL and socially shared metacognitive regulation. This study intends to fill this gap by creating a statistically validated instrument for measuring group metacognition during CSCL activities. Individual respondents are asked to focus on an evaluation of group regulation processes.

Aims and research questions

The aim of the current research is to develop and validate a quantitative scale, the group metacognition scale (GMS), to analyze group metacognitive skills during CSCL activities. The scale is based on the following four dimensions: knowledge of cognition, planning, monitoring and evaluating, and it could be used for evaluating the level of reasoning in group processes during CSCL.

The theoretical framework is connected to Schraw and Moshman’s (1995) study, aiming to verify whether the structure of their metacognitive model is also valid for group processes. With respect to previous studies, the items of the scale were revised, introducing a collective dimension to the measurement. The following research questions were considered.

1. Are the dimensions of the GMS supported by the exploratory and confirmatory factor analysis?

2. Is the GMS found to be sufficiently reliable and stable?

3. Can the GMS ascertain differences in metacognitive processes among university students attending different courses?

Methods

Participants

The participants comprised 362 students (mean age = 37.52, SD = 9.13) who attended online collaborative activities at a university located in Europe. In total, 41.2% (n = 149) of the participants were male, while 58.8% (n = 213) were female. Participants were enrolled in degree programs such as teacher training for primary and secondary schools, psychology and education.

Procedure

The scale was administered at the end of the online courses. The data were divided into two groups of 257 (first group) and 105 (second group) participants. The first group was used for the exploratory factor analysis, while the second group was used for the confirmatory factor analysis. Both groups were used for the multi-group invariance testing. The courses were delivered in asynchronous e-learning environments and included collaborative learning activities. Participants interacted in small groups while designing projects (such as creating a didactic unit or preparing a presentation about a topic), sharing ideas on colleagues’ proposals, and discussing the material. The online tools used were wikis and forums.

Research design

The methodological directions by DeVellis (2003) were followed during the development of the GMS. As a first step, a literature review was carried out to identify the models, potential and limitations of existing instruments. In addition, the goals of the measurement activity, the main features of the constructs to be measured and the factors of the scale were defined. The theoretical framework proposed by Schraw and Moshman (1995) was adopted, using the following four dimensions, as previously described, as the main factors: knowledge of cognition, planning, monitoring, and evaluating. The dimensions and the target skills considered for the development of the scale are reported in Table 1.

Table 1 Dimensions, definitions and target skills for the GMS

The items were developed using the specified dimensions as a reference. During this phase, several questionnaires related to metacognition assessment were examined, such as the Metacognitive Awareness Inventory by Schraw and Dennison (1994), the Metacognition Questionnaire by Garrison and Akyol (2013), and the State Metacognitive Inventory (SMI) by O’Neil and Abedi (1996). Some items were adapted from these questionnaires, changing the individual to a collective dimension, in addition to other adjustments. Participants were asked to evaluate the group aspects of the knowledge construction processes using language such as “We know” instead of “I know”. The adaptation process of the items is reported in Table 2.

Table 2 References, the original items of questionnaires and the adaptation of the GMS items

To provide content validity, the scale was submitted to two external experts in the field of metacognition and online collaborative learning. The experts were asked to check for ambiguous statements and for correspondence between the conceptual framework and the formulation of the items. The suggestions of the experts were considered when revising the scale, and changes were made accordingly. The scale was also administered in a pilot study with five participants, who were asked to complete the scale and to provide comments regarding the comprehensibility, fairness and appropriateness of the assignments and questions. The paper focused on content and construct validity based on expert opinion rather than on the predictive validity of the instrument.

Material

The final instrument was a 20-item self-report scale measuring students’ group metacognitive skills; the items addressed what generally happened in their group during online collaborative activities. Participants were asked to express their level of agreement on a five-point Likert scale with the following anchors: “strongly disagree”, “disagree”, “neutral”, “agree”, and “strongly agree”. The scale is reported in Appendix 1.

Data analysis and results

IBM SPSS Statistics 20 and Lisrel 8.80 were used to analyze the validity and reliability of the scale. Descriptive statistics, exploratory factor analysis, the KMO and Bartlett tests, Cronbach’s alpha, a confirmatory factor analysis and multi-group invariance testing were computed. In addition, a group comparison was performed with an ANOVA to compare the students’ uses of different group metacognitive skills. The missing data of each scale were handled using the listwise exclusion method; only cases with valid values for all the variables of the scale were analyzed.
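The analyses described above were run in IBM SPSS Statistics 20 and Lisrel 8.80; no analysis scripts are part of the study. Purely as an illustrative sketch, the data-preparation step (listwise exclusion and the split into the two subsamples) could be reproduced in Python with pandas as follows; the file name and item column names are hypothetical.

```python
import pandas as pd

# Hypothetical data file with 20 item columns named item01..item20
df = pd.read_csv("gms_responses.csv")
item_cols = [f"item{i:02d}" for i in range(1, 21)]

# Listwise exclusion: keep only cases with valid values on all 20 items
complete = df.dropna(subset=item_cols)

# Split into the two subsamples used for the EFA (n = 257) and the CFA (n = 105)
efa_sample = complete.sample(n=257, random_state=42)
cfa_sample = complete.drop(efa_sample.index)
```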

Research question one

Psychometric properties and factorial structure of the scale

To respond to the first research question, the protocols of Worthington and Whittaker (2006) were followed. To assess the factorability of the data, the KMO and Bartlett tests were employed. A KMO value of .60 or higher is considered good (Worthington and Whittaker 2006). For factorability, the Bartlett test should be significant at the .05 level, indicating rejection of the null hypothesis (Snedecor and Cochran 1989). The findings were: KMO = .892; Bartlett test: χ2 = 2407.97, df = 190 (p = .0001), which indicates the factorability of the GMS.
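As an illustration only (the study used SPSS for these checks), the factorability tests could be computed with the Python factor_analyzer package, reusing efa_sample and item_cols from the preparation sketch above:

```python
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Bartlett's test of sphericity: chi-square statistic and p-value
chi_square, p_value = calculate_bartlett_sphericity(efa_sample[item_cols])

# KMO measure of sampling adequacy: per-item values and overall value
kmo_per_item, kmo_total = calculate_kmo(efa_sample[item_cols])

print(f"Bartlett chi2 = {chi_square:.2f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_total:.3f}")  # values of .60 and higher are considered good
```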

An exploratory factor analysis (EFA) was computed on the first group using the principal component analysis extraction method with Varimax rotation to establish the links between the observed variables and the underlying factors (Byrne 1998). The number of factors was determined using the Kaiser criterion and the scree test; only factors with eigenvalues equal to or greater than one were considered (Biasutti and Frate 2017). The EFA revealed a structure of four factors, with five items per factor. The rotated component matrix indicated values ranging between .461 and .828, as shown in Table 3. The rotation was unconstrained, and items with factor loadings lower than .42 are not reported. When an item loaded on two factors, the higher loading was considered. The total variance explained by the factors was 61.59%, as reported in Table 4. Descriptive statistics, eigenvalues, percentages of variance and Cronbach’s alphas are indicated in Table 4.

Table 3 Mean (M), standard deviation (SD), and rotated factor matrix (exploratory factor analysis) for the GMS
Table 4 Descriptive statistics mean (M) and standard deviation (SD), eigenvalue, percentage of variance, Cronbach’s alpha (reliability) for the GMS
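A comparable EFA can be sketched with the same factor_analyzer package (again an assumption, not the SPSS routine actually used): principal component extraction with Varimax rotation, the Kaiser criterion for the number of factors, and suppression of loadings below .42.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# efa_sample and item_cols come from the preparation sketch above

# Unrotated solution to obtain the eigenvalues for the Kaiser criterion
fa_unrotated = FactorAnalyzer(rotation=None, method="principal")
fa_unrotated.fit(efa_sample[item_cols])
eigenvalues, _ = fa_unrotated.get_eigenvalues()
n_factors = int((eigenvalues >= 1).sum())  # retain factors with eigenvalues >= 1

# Principal component extraction with Varimax rotation
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(efa_sample[item_cols])

loadings = pd.DataFrame(fa.loadings_, index=item_cols)
print(loadings.where(loadings.abs() >= .42))  # hide loadings below .42
```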

The second statistical examination for validating the scale was the confirmatory factor analysis (CFA), which enables the investigation of latent variables by modeling the observed items with a smaller number of underlying factors.

The CFA was performed using the robust maximum likelihood method with data from the second group of 105 participants. The fit indices are presented in Table 5. The values suggest an acceptable fit for the RMSEA (values ≤ .05 indicate a good fit and values as high as .08 a reasonable fit) and the S-RMR (values ≤ .08 are acceptable), and a good fit for the CFI, IFI and NNFI (values > .95 are good) (Byrne 1998). The path diagram of the model is shown in Fig. 1. Although the GFI and AGFI were slightly below the recommended values, they were close to 1, which indicates an acceptable fit (Byrne 1998). The CFA confirmed the four-factor model.

Table 5 Goodness of fit of CFA of GMS (N = 105) and multi-group invariance (MGI), configural, metric and scalar, for the first group (N = 257) and second group (N = 105)
Fig. 1 Confirmatory factor analysis model of the GMS (N = 105)
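The CFA itself was run in Lisrel 8.80. For readers who want to replicate this step with open-source tools, a minimal sketch assuming the Python semopy package is given below; the assignment of items to factors is hypothetical (the actual item-factor mapping is reported in Table 3 and Appendix 1).

```python
import semopy

# Hypothetical measurement model: five items per factor, mirroring the EFA structure
model_desc = """
knowledge  =~ item01 + item02 + item03 + item04 + item05
planning   =~ item06 + item07 + item08 + item09 + item10
monitoring =~ item11 + item12 + item13 + item14 + item15
evaluating =~ item16 + item17 + item18 + item19 + item20
"""

cfa = semopy.Model(model_desc)
cfa.fit(cfa_sample[item_cols])  # cfa_sample from the preparation sketch above

# Fit indices (chi-square, RMSEA, CFI, GFI, AGFI, NFI, TLI, ...) for the four-factor model
print(semopy.calc_stats(cfa).T)
```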

Research question two

Reliability and stability of the scale

The second analysis concerned the reliability and stability of the scale. Cronbach’s alpha, the reliability coefficient, was computed to assess the scale’s reliability and internal consistency. The values for each factor ranged from .80 to .86, while the value for the whole scale was .91. The results are presented in Table 4 and indicate very good internal consistency.
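Cronbach’s alpha can be computed directly from its definition, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below illustrates the calculation per factor and for the whole scale; the item-to-factor assignment is hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# complete and item_cols come from the preparation sketch above;
# the factor/item assignment below is hypothetical
factors = {
    "knowledge":  item_cols[0:5],
    "planning":   item_cols[5:10],
    "monitoring": item_cols[10:15],
    "evaluating": item_cols[15:20],
}
for name, cols in factors.items():
    print(name, round(cronbach_alpha(complete[cols]), 2))
print("whole scale", round(cronbach_alpha(complete[item_cols]), 2))
```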

Multi-group invariance testing was applied to the first group (N = 257) and the second group of 105 participants to assess the stability of the scale; multi-group configural, metric and scalar invariance tests were computed, as recommended by Chen (2007). The factor structure and factor-loading patterns were then compared to provide statistics for a good-fitting model (Byrne 1998). Configural invariance was calculated with the factor loadings left free. The findings are reported in Table 5 and suggest that, for the configural invariance test, the RMSEA and NFI are acceptable, and the CFI, IFI and NNFI indicate a good fit (Byrne 1998).

Regarding the multi-group metric test, the relationships between the items and the factors (the factor loadings) were constrained to be equal across the two samples. The findings of this test demonstrated that the structure of the GMS was the same in the two samples (the RMSEA, CFI, IFI and NNFI were acceptable). For the multi-group scalar test, measurement invariance was computed by constraining the intercepts to be equal across the two samples. The values were acceptable for the RMSEA, and the NFI, CFI, IFI and NNFI showed a good fit (Byrne 1998). These findings confirmed the stability of the scale. Chi-square difference tests were computed between the configural and metric multigroup models (adjusted chi-square difference = 120.033; df = 46; p = .0001), the configural and scalar multigroup models (adjusted chi-square difference = 115.131; df = 62; p = .0001), and the metric and scalar multigroup models (adjusted chi-square difference = 4.902; df = 16; p = .099). These results indicate that the configural multigroup model showed the best invariance between the two groups of participants.
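As a clarifying note, the sketch below shows a standard (unadjusted) chi-square difference test between two nested multigroup models. The study reports Satorra-Bentler adjusted differences, which require an additional scaling correction, so the values below are hypothetical illustrations rather than the study’s results.

```python
from scipy.stats import chi2

def chi_square_difference_p(chi2_constrained: float, chi2_free: float,
                            df_constrained: int, df_free: int) -> float:
    """p-value of the chi-square difference between a constrained model and a freer nested model."""
    diff = chi2_constrained - chi2_free
    df_diff = df_constrained - df_free
    return chi2.sf(diff, df_diff)

# Illustrative call with hypothetical model chi-square values and degrees of freedom
p = chi_square_difference_p(chi2_constrained=450.0, chi2_free=330.0,
                            df_constrained=214, df_free=168)
print(f"p = {p:.4f}")
```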

Research question three

Group comparison

The third research question concerned differences in group metacognitive perceptions among university students attending different courses. A group comparison was performed with a one-way ANOVA (with Cohen’s d as the effect size index) comparing 75 students: 42 pursuing a degree in primary education teaching and 33 studying psychology. In addition, age differences among all participants, grouped into 19–25 years, 26–35 years, and 36–60 years, were considered.

Trainee primary school teachers had stronger opinions regarding planning, F(1,73) = 7.000, p < .05, d = .62, monitoring, F(1,73) = 6.490, p < .05, d = .60, and evaluating, F(1,73) = 6.447, p < .05, d = .59, than did psychology students, demonstrating a stronger orientation toward group metacognition and working collaboratively. The mean values for trainee primary school teachers and psychology students were, respectively, M = 3.714, SD = .770 and M = 3.200, SD = .912 for the factor planning; M = 3.876, SD = .679 and M = 3.430, SD = .803 for the factor monitoring; and M = 3.709, SD = .753 and M = 3.272, SD = .760 for the factor evaluating. Regarding age, no differences were found for any of the factors: F(2,360) = .109, p = .897; F(2,360) = .789, p = .455; F(2,360) = 1.251, p = .288; F(2,360) = 1.769, p = .172.
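The group comparison was computed in SPSS; as an illustration, a one-way ANOVA with Cohen’s d as the effect size could be reproduced as follows. The "degree" column and the use of item means as factor scores are assumptions made for the sketch, not details reported in the study.

```python
import numpy as np
from scipy.stats import f_oneway

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation of the two groups."""
    n_a, n_b = len(a), len(b)
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# complete and factors come from the sketches above; "degree" is a hypothetical column
teachers = complete.loc[complete["degree"] == "primary_teaching", factors["planning"]].mean(axis=1)
psychology = complete.loc[complete["degree"] == "psychology", factors["planning"]].mean(axis=1)

f_stat, p_value = f_oneway(teachers, psychology)
d = cohens_d(teachers.to_numpy(), psychology.to_numpy())
print(f"Planning: F = {f_stat:.3f}, p = {p_value:.3f}, d = {d:.2f}")
```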

Discussion and conclusions

The current study presented the development and validation of a quantitative 20-item scale for measuring the skills involved in group metacognition, based on the following factors: knowledge of cognition, planning, monitoring and evaluating. This study is connected to the literature on models of metacognitive skills (Schraw and Moshman 1995) and the evaluation of metacognitive skills (Garrison and Akyol 2013; O’Neil and Abedi 1996; Pintrich et al. 1993; Schraw and Dennison 1994; Weinstein et al. 1987). Previous instruments considered metacognition as an individual process, while the GMS focuses on group metacognitive skills during CSCL. The statistical analyses have shown that the instrument is sufficiently reliable and stable. The exploratory factor analysis, confirmatory factor analysis and multi-group invariance testing sustain the validity of the structure of the scale and support the model of metacognition by Schraw and Moshman (1995). Furthermore, the reliability and stability analyses indicate that the GMS is appropriate for measuring metacognitive skills in university students. Because the instrument is valid, the model of group metacognition processes is supported, providing evidence of the relevance of this construct. These findings shed light on group consciousness and its relevance during CSCL. In CSCL, it would be interesting to discuss the role of group metacognition in improving online collaborative learning. Several studies (Janssen et al. 2011; Kwon et al. 2013; Zion et al. 2015) provided evidence of the importance of metacognition during CSCL, and the group metacognition processes model could be applied for analyzing the strengths and weaknesses of the collaborations developed during online activities.

To demonstrate its utility, the GMS was used to observe differences in the processes among university students enrolled in different degree programs. This analysis gave an idea of the possible applications of the scale in higher education by comparing trainee primary school teachers with psychology students. The findings highlighted a different trend related to the students’ degrees: the trainee teachers had stronger opinions on planning, monitoring and evaluating than did the psychology students. These findings revealed that trainee primary teachers had a greater group metacognitive attitude than the psychology students. This result could be because trainee primary teachers are more accustomed to working collaboratively and do more group workshop activities during their training than psychology students do. In addition, teachers know the importance of metacognitive strategies in general because they are immersed in learning environments, and metacognition is included in their study subjects. These findings are connected with previous research about trainee teachers’ beliefs about teamwork and working collaboratively (Biasutti 2012).

This study has limitations, such as the restricted number of students contributing to the research. In addition, the participants all came from the same university. These aspects limit the generalizability of the results to university students from other regions and countries. It would be relevant to perform additional research to validate the GMS with a greater number of university students from several countries pursuing different degrees. In addition, the GMS could be administered to groups with specific characteristics to verify whether students with different attitudes toward group metacognitive behaviors obtain different results on the GMS. The relevance of the GMS could also be verified when assessing programs and detecting enhancements in group metacognitive skills. These findings offer a foundation for designing future research on the metacognition of group processes.

It would be interesting to consider how the scale could be applied in further research. The GMS could be useful for assessing changes in the level of development of students’ group metacognitive skills and for evaluating correlations with other constructs. A pre-post research design could be used to verify whether students’ results vary after attending CSCL activities or other programs for fostering their awareness of group potentialities. The GMS could also be useful for examining how group metacognitive skills contribute to achieving positive outcomes during CSCL. The direct application involves a didactic approach oriented to processes rather than products. Processes such as knowledge of cognition, planning, monitoring and evaluating could be promoted through the design of specific reflective activities regarding these skills. The aim of these activities is to induce a shift from the implicit to the explicit level of awareness about the potentialities of the group, through which students intentionally express their understanding of self and others (Bodemer and Dehler 2011; Buder 2011).

The use of the GMS can also be extended to several contexts and conditions, such as comparing students working collaboratively in face-to-face situations with those working collaboratively in online settings. In addition, the GMS can be used to examine the connections between group metacognitive attitudes and other constructs, such as regulated learning processes and self-efficacy in virtual learning environments. However, more research is needed to further verify the GMS in relation to these variables.