
1 Introduction

Education ecosystems are rapidly developing, facilitated by emerging technological trends that open diverse opportunities for innovative and more interactive learning approaches. These trends blur the lines between digital and physical arenas, and this constructive movement is expected to proliferate worldwide. In this context, Mass Collaborative Learning (MCL), as an example of such a novel opportunity, focuses on lifelong learning and enables even the general public to engage in collaborative learning practice outside conventional organizational structures and the traditional education system. In other words, interested learners from around the world can jointly and continually create, share, and acquire new knowledge by transforming and exchanging their experiences and information through ICT platforms in an informal way. As opposed to traditional learning methods and bureaucracies, i.e., formal learning that is hierarchically structured, explicitly controlled, chronologically graded, and delivered by an instructor in a systematic, intentional way, MCL defies many hierarchical practices, operates outside a traditional formal learning environment, and is not necessarily intentional [1, 2, 3, 4].

MCL communities are mostly self-directed, non-hierarchical, and heterogeneous, and they support dynamic initiatives that bring together diverse sets of autonomous learners (with various levels of expertise and performance) aiming to reap the power of collaboration and the advantages of diverse minds in learning about subjects of interest. The potential and substantial benefits of this promising complementary approach to learning include, but are not limited to: creating a dynamic environment of involved, simplified, and exploratory learning; increasing access to general and specific knowledge and information; solving complex problems by using collective intelligence and crowdsourcing; enhancing global interactions with peers and experts; fostering self-management and self-learning skills; stimulating critical thinking and helping learners develop ideas through community discussion; promoting alternative content assessment; and encouraging an atmosphere of positive collaboration. Finally, the synchronous or near real-time communication provided by ICT platforms enables MCL communities to maneuver with great agility and speed [1, 2].

Despite the positive and promising features of MCL, and the opportunities it can open for societies, communities, and learners, MCL (as a type of social network) faces a large number of problems, challenges, and limitations. For example, MCL must deal with the challenge of determining and controlling the quality and reliability of the materials shared within the community. In fact, mass collaboration is regarded as a double-edged sword when it comes to learning. On one side, it is relatively low cost and accessible, and it can potentially facilitate knowledge sharing and increase public awareness. On the other side, the size (big) and environment (online) of the community can put it at risk of encountering, spreading, and being damaged by unreliable knowledge or information. On top of that, the anonymity of community participants is likely to intensify the problem. Since MCL is typically supported by a public platform, any participant can post anything, with varying degrees of truthfulness. The dissemination of such unhealthy contents throughout the community can, without doubt, negatively influence its members (e.g., leaving them misinformed or misled) [1, 5, 6]. It is thus left to community members to recognize whether content is true or not; unfortunately, this is the dark side of MCL.

This raises the MCL community's concerns not only about the accuracy of the shared knowledge or information, but also about how the community can properly gauge the quality of those materials. To contribute to solving this problem, this study proposes a potential mechanism that can assist MCL community members in assessing the reliability and trustworthiness of shared knowledge or information in a systematic way. The main objective of this work is therefore to raise our level of consciousness about the impact of sharing unreliable contents (knowledge and/or information) in an MCL community and, more concretely, to propose a mixed method that can help MCL community members assess the reliability and quality of shared contents through a multi-user and multilevel evaluation approach. Thus, the key research question that emerges is:

What could be a suitable assessment method to help minimize the problems related to the reliability of shared contents through mass collaboration within a community?

The following hypothesis is then set for this research question:

The problems related to the reliability of contents shared through mass collaboration could be minimized if the community benefits not only from the combination and application of a set of appraisal rules, criteria, and methods, but also from having the content materials critically assessed through a collective effort.

It is noteworthy that, despite great achievements in this specific area, there are still several ambiguities about mass collaboration and the learning process. Furthermore, the related concepts and associated mechanisms are still evolving. In particular, the process of detecting and evaluating unreliable contents is not easy to accomplish, since it certainly requires adapting and deploying advanced technologies and comprehensive methodologies [1].

It should also be pointed out that in the early stage of our research we first conducted a systematic literature review to provide an exhaustive summary of MCL and to gain a better understanding of the main concepts, opportunities, challenges, and influential factors in this context [1, 2]. In the next stage of research, we reviewed and analyzed the organizational structures of 14 real examples of mass collaboration and their most suitable features, in order to propose a general organizational structure for mass collaborative learning purposes [3]. Afterwards, a reference model for MCL was developed, aiming to highlight the main internal and external components of the community [4]. To complete our research and develop the proposed organizational and governance structure, the current study concentrates on specific management issues associated with content quality.

The remainder of this paper is organized as follows: Sect. 2 presents the relationship of this study to technological innovation for applied artificial intelligence systems. In Sect. 3, the literature and related work are reviewed. Section 4 explains the proposed mixed method for assessing the reliability of shared knowledge or information. Section 5 discusses the importance of the proposed mixed method in the assessment of shared materials in an MCL community. Finally, the conclusion is given in Sect. 6.

2 Relationship to Technological Innovation for Applied Artificial Intelligence Systems

Assessment is an integral part of learning, and technology has long been one of the main contributing instruments in the assessment process. The most common assessment technologies in use nowadays are, in many ways, tools that were once innovative, such as automated scoring. Indeed, information and communication technologies have made it possible to evaluate what learners are studying, learning, and sharing at very fine levels of detail (and from distant locations) by means of vivid simulations of real-world situations. Such technologies have extensive potential not only to provoke radical changes in learning ecosystems at all levels, but also to bring greater sophistication, timeliness, and efficiency to different aspects of assessment method design and implementation. “ThinkerTools, for example, is a computer-enhanced middle school science curriculum that promotes metacognitive skills by encouraging students to evaluate their own and each other’s work using a set of well-considered criteria” [7].

Artificial Intelligence (AI) technologies have shown the potential to be adopted in teaching and learning. That is, AI can open a wide range of opportunities for learning by providing a deep and fine-grained understanding of how and when learning really occurs. Various types of AI techniques (e.g., semantic analysis, speech recognition, natural language processing) can each be used to assess learning processes [8]. More specifically, AI can: (a) evaluate massive amounts of contents (knowledge, information, and data), (b) enable learners to conduct objective and consistent assessments of contents at a much earlier stage, (c) aid MCL communities in eliminating conscious and unconscious bias, (d) make the assessment process more transparent, open to challenge, and legally defensible compared with the practices of human raters, and (e) potentially accelerate, optimize, and promote the assessment process. In addition, AI scoring and grading systems can create multiple-choice test questions (for different purposes, such as voting) and then truly leverage adaptive scoring [9]. Even though AI can simultaneously perform several types of evaluation (far more than any human), we believe that, in the context of MCL, the assessment of shared knowledge or information requires interaction between human and computer and their effective contributions. In this paper, however, the focus is placed more on the human part, highlighting the power of “mass collaboration”.

3 Related Work

In the era of the knowledge economy, it is highly important to form strategic alliances by properly sharing reliable and valid knowledge or information across collaborative networks. Such community networks can open new doors for participants to easily and freely access, use, share, create, and develop a variety of knowledge or information around different topics, and then leverage their knowledge and expertise to create value. Furthermore, they can, and should, tackle the spread and impact of unhealthy contents, enhance the security and reliability of the shared knowledge or information, and build and promote community resilience to disinformation. In this respect, the identification of healthy knowledge or information (standing on a proper evaluation method) for reuse and exploitation in different contexts is one of the most crucial parts of knowledge sharing and enrichment in MCL. As such, it is considered a hallmark of high-performing organizations [1, 10, 11].

Unreliable knowledge and inaccurate information are, in this study, considered verifiably false contents that might be created and/or disseminated by one or more MCL community participants who most probably intend to inform but are unaware that the materials are false or incorrect (e.g., a wrong interpretation, a false connection, a misconception), or who in rare cases knowingly intend to deceive and mislead other participants (e.g., misleading information, fabricated or manipulated contents), either to cause harm or for political, personal, or financial gain [12, 13]. Such deleterious materials are dangerous and can act like a destructive weapon, with damaging effects on both the community and its members. This can sometimes have far-reaching consequences, including but not limited to destroying the reputation of the community and its members, driving participants to cut off their collaborative and social relationships with other peers or even drop out of the community, and simply leading them to create and/or consume less (reliable) knowledge and information overall [1, 2].

To combat the creation and distribution of unreliable contents, build resilience in the learning ecosystem, reduce community members' vulnerability, improve accuracy in judging suspicious contents, and consequently develop proper collaborative learning, the literature recommends adopting different strategies (e.g., providing appropriate training, guiding information, or tips for learners) [1, 14], approaches (e.g., deep learning [15]), or technologies (e.g., smart systems and software) [2, 16, 17]. In this context, several research works have also attempted to introduce or develop useful learning procedures and evaluation methods. In [18], for example, the authors introduce a multilevel method for building organizational learning on three integrated levels, namely micro (individual), meso (network), and macro (systems). The study in [19] also focuses on a method of multilevel organizational learning as a systems approach connecting the individual, team, and organizational levels. In another study [20], the validity of content is assessed using a mixed method approach: the authors use a Table of Specifications (ToS) for estimating content validity, where the ToS rests on triangulating multiple data sources, expert debriefing, and peer review.

Even though there are several references in the literature for evaluating the validity of shared knowledge [21, 22], there is relatively little comprehensive discussion about knowledge sharing in MCL [1, 2]. The existing literature, furthermore, lacks sufficient evidence and results on how learners can assess the reliability and quality of shared knowledge in MCL communities. To fill part of these gaps, this study proposes a potential complementary assessment approach to equip MCL community participants with the basic information, needed analysis skills (e.g., the ability to assess both content and sources), and evaluation techniques (e.g., thinking critically, logical reasoning, investigation, evidence-based reasoning, sharing views, community discussion, a checklist approach, and voting) to determine the validity of what other contributing members deliver. Thereupon, a mixed method (using the above-mentioned qualitative and quantitative techniques) is introduced which, on one side, helps make the results of knowledge assessment broader, deeper, and/or more precise. On the other side, the application of this mixed method at different levels of the MCL community (individual and network) helps reap the advantages of including diverse community members. Content evaluation remains, however, an extremely complicated process and one of the major concerns in this realm; in fact, there are still no universally agreed ways to assess the reliability of contents, nor an agreed set of corresponding measurement technologies.

To successfully deal with the issue of content unreliability, one possible alternative for an MCL community is to focus on prevention, reducing the risk of creating and sharing unreliable contents within the community before becoming involved with reliability assessment. In this respect, MCL could, for example, adapt Failure Mode and Effects Analysis (FMEA). FMEA is a structured approach that helps identify potential unreliable contents and their major causes and effects on the community. The major causes could relate to human errors, procedural problems, management oversight, training deficiencies, etc. In this process, the community can take the actions needed to reduce the chance of potential unreliability occurring. Unreliable contents can also be prioritized according to how easily they can be detected, how serious their consequences are, and how frequently they occur. Furthermore, a list of recommendations for reliability improvement can then be provided. The aim of applying FMEA is to take actions that eliminate or reduce unreliable contents. In case a piece of unreliable content is created and shared despite utilizing FMEA, the community can then proceed to utilizing MAM-MCL (introduced in the next section).
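To make the prioritization idea concrete, the following is a minimal sketch of how an FMEA-style ranking could be adapted to unreliable-content risks. The listed failure modes and the 1-10 rating scales are illustrative assumptions, not part of the cited method or of MAM-MCL itself; only the Risk Priority Number formula (RPN = severity x occurrence x detection difficulty) is standard FMEA practice.

```python
# Illustrative failure modes for unreliable-content creation/sharing,
# each rated 1-10 for severity (S), occurrence (O), and detection difficulty (D).
failure_modes = [
    ("wrong interpretation by a contributor",    4, 7, 3),
    ("fabricated content shared deliberately",   9, 2, 8),
    ("outdated information kept in circulation", 5, 6, 4),
]

# Classic FMEA ranks risks by the Risk Priority Number: RPN = S * O * D.
ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)

for description, s, o, d in ranked:
    print(f"RPN={s * o * d:3d}  {description}")
```

Under these assumed ratings, deliberately fabricated content ranks highest despite being rare, because its severity and detection difficulty dominate; the community would then direct its preventive actions accordingly.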

4 Proposed Mixed Method for Assessing the Reliability of Shared Knowledge or Information (MAM-MCL)

In MCL, a huge amount of knowledge and/or information can be shared within the community. It is worth noting that not all of it needs quality and reliability assessment, because a portion can be readily, logically, and/or reasonably recognized as true content (e.g., facts, verified contents). However, there may be contents that give the impression of being untrue or suspicious (e.g., completely false contents, fabricated contents, manipulated contents, misleading contents). In such cases, the MCL participants can use the proposed MAM-MCL to assess and gauge the reliability and quality of the shared knowledge or information. In this framework, MAM-MCL can benefit from different innovative technologies, such as digital fingerprints, to, for example, facilitate the identification of community participants, reduce their anonymity, and better track their contributions, particularly in cases where they might share unreliable contents. MAM-MCL comprises five main steps (see Fig. 1).
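As a minimal illustration of the digital-fingerprint idea mentioned above, the sketch below binds a contribution to its author with a hash digest. The hashing scheme and the record fields are assumptions made for illustration, not a design prescribed by MAM-MCL.

```python
import hashlib
import json
import time

def contribution_fingerprint(participant_id: str, content: str) -> str:
    """Bind a contribution to its author with a SHA-256 digest (illustrative scheme).

    The record fields (participant id, content, timestamp) are assumptions;
    the resulting digest can be stored alongside the post for later traceability.
    """
    record = json.dumps(
        {"participant": participant_id, "content": content, "ts": time.time()},
        sort_keys=True,
    )
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

print(contribution_fingerprint("user-42", "a shared claim about topic X"))
```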

Fig. 1. Proposed MAM-MCL for assessing the reliability of shared contents in MCL community

These steps are briefly explained in the following.

  • Step 0 – preliminary assessment (by technology): in this step the shared knowledge or information (marked for assessment by community contributors) is initially checked and then filtered by means of technology (e.g., an AI software filtering system). The contents that are not deleted or rejected in this step (those that the technology marks for further evaluation) are referred to the next step.

  • Step 1 – assessment by moderators and content controllers: in this step the contents received from step 0 (e.g., suspicious contents, controversial cases) are first checked by moderator(s), who prioritize them (based on predefined importance) for reliability assessment. The checked contents are then classified according to the preset fields, classes, and topics, and codified based on predetermined codes (e.g., controversial, not controversial, scientific and professional, not scientific and professional). Afterwards, they are sent to the respective assessment level (Step 1.1). If the checked contents “are not controversial”, they are sent to the individual level. If they “are controversial” (e.g., ethical, cultural, or critical issues), they are sent to the community level. The assumption here is that, compared with individuals, the community can better assess and make decisions about controversial contents.

  • Step 1.1 – referring to individual or community level: in this step the checked contents are referred by moderator(s) either to the individual or the community level (a minimal code sketch of this routing appears after this list):

    •  > Individual level: indicates that the reliability and quality of “not controversial contents” should be evaluated individually. If the considered contents “are scientific and professional”, they will be sent to expert participants (at individual level). If the considered contents “are not scientific and professional”, they will be sent to ordinary participants (at individual level).

    •  > Community level: shows that the reliability and quality of “controversial contents” should be evaluated collaboratively. If the considered contents “are scientific and professional”, they will be sent to expert participants (at community level). If the considered contents “are not scientific and professional”, they will be sent to ordinary participants (at community level).

  • Step 2 – assessment by ordinary and expert participants (at individual and/or community level): the last rating stage. In this step the considered contents are assessed either by ordinary or expert participants, at both the individual and the community level:

    •  > Ordinary participants (at individual level): they assess the knowledge or information that “is not controversial, scientific, or professional”. There are 5 defined questions (addressed in Table 1), each of which the ordinary participants rate on a 5-point Likert scale to determine the reliability and quality of the considered contents. In this personal evaluation, if the combined rate given to the 5 questions falls below the mid-point (2.5 points), the evaluator notes the contents as “rejected” or “unreliable”; if it is above the mid-point, the participant notes the contents as “accepted” or “reliable”. In cases where the contents seem to need further/deeper evaluation, the evaluator marks them. When the number of marked contents reaches a certain percentage, the moderator(s) send them to “ordinary participants at the community level”.

    •  > Expert participants (at individual level): they assess the knowledge or information that “is not controversial but is scientific and professional”. Similarly, by responding to the 5 questions on a 5-point Likert scale, they individually and professionally determine the reliability and quality of the considered contents. In this personal evaluation, if the combined rate given to the 5 questions falls below the mid-point (2.5 points), the evaluator notes the contents as “rejected” or “unreliable”; if it is above the mid-point, as “accepted” or “reliable”. In cases where the contents seem to need further/deeper evaluation, the expert participant marks them. When the number of marked contents reaches a certain percentage, the moderators send them to “expert participants at the community level”.

    •  > Ordinary participants (at community level): the assessment and rating process at the community level follows the same steps as for ordinary participants at the individual level. However, at this level the ordinary participants first exchange their opinions and findings about the considered contents, which “are controversial but not scientific and professional”, through collaborative effort (e.g., group discussion, sharing ideas and viewpoints). Considering the points raised in the community, each ordinary participant then proceeds to a personal rating.

    •  > Expert participants (at community level): the assessment and rating process for expert participants at the community level is similar to that at the individual level. However, at this level the expert participants first exchange their opinions and findings about the considered contents, which “are controversial, scientific, and professional”, through collaborative effort (e.g., group discussion, sharing ideas and viewpoints, peer review, Delphi method). Taking into account the points raised in the community, each expert participant then proceeds to a personal rating.

  • Step 3 – publishing the results of assessment by moderators: having received the results of assessment from step 2, the moderators proceed to the last phase of assessment: they first analyze the received results and then publish their final decisions about the reliability of the shared contents.
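As referenced in Step 1.1, the following is a minimal sketch of how the codification and routing logic of Steps 1 and 1.1 could be implemented. The class and field names are illustrative assumptions, not part of the method's specification; only the two routing rules (controversial contents go to the community level, scientific/professional contents go to expert participants) come from the text above.

```python
from dataclasses import dataclass

@dataclass
class Content:
    """A shared item after Step 1 codification (field names are illustrative)."""
    text: str
    controversial: bool
    scientific_professional: bool

def route(item: Content) -> tuple[str, str]:
    """Return (assessment level, evaluator group) following Steps 1 and 1.1."""
    # Controversial contents go to the community level; others to the individual level.
    level = "community" if item.controversial else "individual"
    # Scientific/professional contents go to expert participants; others to ordinary ones.
    group = "expert participants" if item.scientific_professional else "ordinary participants"
    return level, group

# Example: a controversial, non-scientific claim is routed to
# ordinary participants at the community level.
print(route(Content("claim X", controversial=True, scientific_professional=False)))
```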

The frequently updated results of assessment from step 2 are visible to all community participants, who can nevertheless individually analyze and judge the reliability and quality of the assessed knowledge or information.

The 5 defined questions, the 5-point Likert scale, and the proposed formula for calculating the given rates (at both levels) are presented in Table 1 and Table 2. It is worth noting that every Likert question addressed in Table 1 corresponds to one quality indicator, but the same weight (1) is adopted for all 5 questions, because they are closely correlated and considered equally important in the assessment.

Table 1. Examples of questions and 5-point Likert Scale for assessing the reliability and quality of shared knowledge or information in MCL community.
Table 2. Proposed formula for calculation of given rates at individual and community levels.
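To make the Step 2 rating rule concrete, the sketch below implements one plausible reading of the procedure and the equal-weight formula described above. Since Tables 1 and 2 are not reproduced here, several points are assumptions: the placeholder question names, the interpretation of the 2.5 mid-point as a threshold on the equally weighted mean rating, the treatment of a score exactly at the mid-point (unspecified in the text, treated here as rejected), and the community-level aggregation by simple averaging of personal scores.

```python
from statistics import mean

# Placeholder names standing in for the 5 Likert questions of Table 1.
QUESTIONS = ["q1", "q2", "q3", "q4", "q5"]
WEIGHTS = [1, 1, 1, 1, 1]  # equal weights, as stated in the text
MID_POINT = 2.5            # decision threshold used in Step 2

def individual_verdict(rates: list[int]) -> str:
    """Classify one evaluator's five Likert ratings (assumed: weighted mean vs. mid-point)."""
    assert len(rates) == len(QUESTIONS)
    score = sum(w * r for w, r in zip(WEIGHTS, rates)) / sum(WEIGHTS)
    # The text leaves the exact mid-point case unspecified; treated here as rejected.
    return "accepted/reliable" if score > MID_POINT else "rejected/unreliable"

def community_verdict(all_rates: list[list[int]]) -> str:
    """Aggregate several evaluators' personal ratings (assumed: mean of their scores)."""
    scores = [sum(r) / len(r) for r in all_rates]
    return "accepted/reliable" if mean(scores) > MID_POINT else "rejected/unreliable"

print(individual_verdict([4, 3, 5, 2, 4]))                    # -> accepted/reliable
print(community_verdict([[2, 1, 3, 2, 2], [1, 2, 2, 3, 1]]))  # -> rejected/unreliable
```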

5 Discussion

MCL is an emerging learning mechanism relying on a ubiquitous source of jointly created, shared, and developed knowledge and information for interested learners. However, the quality of the contents shared in an MCL community is highly variable. As MCL is indeed, by nature, a self-governed community, it is important that all community participants spontaneously and responsibly take part in the process of content reliability assessment, although the managerial group members (e.g., moderators and content controllers) have active roles in managing this process [3].

Determining the reliability and quality of contents is an extremely complicated process. This complication might arise for several reasons, for example:

  a) When the number of contents (to be assessed in a short time frame) grows large. This case requires more human-computer interaction, and particularly applications of AI, to facilitate and speed up the assessment process.

  b) When the knowledge (to be assessed) is complex in terms of, for example, structure or grammar. In this case, strong human contribution and intervention is indispensable. The idea is thus that, by leveraging the multiple and diverse minds, expertise, and experiences in the MCL community, the complexity could be minimized.

There is a variety of proposed solutions for evaluating the reliability of shared materials in virtual learning environments [23], and such evaluation often involves sophisticated technologies, experimental investigation, and extensive background knowledge. A review of content quality assessment methods under diverse collaborative learning scenarios can provide a basic understanding of the state of the art in false content detection methods.

It should be added that the assessment of content reliability and quality in MCL is still at an early stage of development, and there are several unsolved and challenging issues that require further consideration, clarification, and research contributions. Furthermore, MAM-MCL is introduced here for the first time and is, at this stage, proposed theoretically (although it takes inspiration from some successful examples of mass collaboration, such as Wikipedia, which utilizes a fact-checking approach). Without doubt, MAM-MCL needs to be applied to a wider range of real case studies (in future work), not only to confront its possible challenges (e.g., how to encourage a large number of participants to take part in the task of content assessment, and how to appropriately steer the series of steps addressed above), but also to provide more practical solutions. From this perspective, this study attempts to (a) raise concern about the dissemination of unhealthy contents in learning environments like MCL, (b) underline the importance of developing the needed skills at the community level and providing supportive technologies for content assessment, (c) highlight the role of social involvement and the power of collaboration, and (d) increase awareness about potential solutions for dealing with such cases in the real world. It is therefore expected that the proposed MAM-MCL, by taking advantage of both human and technological inclusion, can assist learners in minimizing the effect of false and unreliable contents in MCL communities.

In a previous stage of our research, we focused on the organizational structure and governance principles of MCL [4], towards the elaboration of a reference model for such communities. The current work assumes the existence of such an MCL organizational structure and focuses on specific management issues related to handling false and unreliable contents. MAM-MCL, which is still an ongoing work, will therefore contribute to a more robust governance of such collaborative communities.

In the current stage of development, the focus is on the validation of the proposed MAM-MCL, a process that will comprise several steps:

  1. Define the purpose and scope of the proposed method (as presented in previous sections);

  2. Conduct a survey to gather feedback from some experts (focus group) regarding MAM-MCL (ongoing);

  3. Analyze the collected feedback and make adjustments to the method (if needed);

  4. Consolidate the method towards optimization;

  5. Integrate the MAM-MCL in the MCL governance model.

This process is summarized in Fig. 2.

Fig. 2. Plan for the development and validation of MAM-MCL

6 Conclusion

The rapid development of collaborative activities and informal lifelong learning, coupled with technological advances, has led to the emergence and development of innovative and useful approaches to social learning. MCL, as an example, brings together diverse contributions from a pool of minds, talents, experiences, and skill sets to produce productive learning practices. However, the quality of the knowledge or information exchanged by contributing members in an MCL community is extremely variable, so an appropriate assessment system needs to be applied. Given that, this study, by proposing MAM-MCL, attempts to introduce an assessment method to help evaluate the reliability of shared contents by simultaneously bringing into service (a) various technological support tools (e.g., ICT platforms, AI, and digital fingerprints), (b) community involvement and collaborative evaluation, and (c) a combination of different evaluation strategies and techniques (e.g., brainstorming, critical thinking, group discussions, debates, peer evaluations, inquiry-based learning, rating, and expert consultation). Validation of MAM-MCL is the ongoing step of this development.