
1 Introduction

Experiential learning is quite common in Computer Science degrees in the form of capstone projects [1, 2] related to programming or data science. In this paper, we present an activity proposed as part of an introductory bachelor course on Computer Science Education, named ICDD (Computer Science for Creativity, Teaching, and Dissemination). The learning outcomes of the course include the skills needed to design and conduct hands-on lab activities for introducing beginners to coding and programming. These hands-on activities are targeted at primary and middle school students, and their content is based on the national directions for teaching Computer Science in schools proposed by the Italian Inter-university Consortium for Computer Science (CINI) [3, 4]. In pre-pandemic times, our students acquired such skills through the design and delivery of in-person laboratories mainly based on visual languages, such as Scratch [5] and Pocket Code [6]. The need to switch to an online setting gave us the opportunity to introduce an additional gamification element [7, 8] by involving our students in the design and realization of an online coding challenge among middle school classes, named Italian Coding League (ICL). ICL, jointly organized by the Digital School Interest Group of the University of Genova and by Edutainment Formula, took place in March 2022 and involved 609 students from 29 classes in 9 Italian regions. (The first edition of the ICL [9] was organized in 2021 with limited student involvement.) The competition was supported by the Smart O.C.A. (Smart Online Challenge Activity) online game platform [10] and was managed by the authors, assisted by 15 tutors, for a total of 112 h of training and competitions among the classes. The involvement of the ICDD students in the ICL organization covered several different aspects, including the design of the format, the selection of the set of questions proposed during the competition, and the online delivery of the activity during the different phases of the event.

Plan of the Paper.

In Sect. 2, we present the format adopted for the ICL competition. In Sect. 3, we describe the experiential learning process adopted for the university students involved in the initiative. In Sect. 4, we present a summary of the outcome of the entire activity, using data collected during the 2022 edition. In Sect. 5, we focus on the reflection phase for university students. Finally, in Sect. 6 we address current and future work directions.

2 An Overview of the Italian Coding League 2022 (ICL 22)

The second edition of the Italian Coding League was proposed to Italian schools by the University of Genoa. In line with the European Commission's Digital Education Action Plan 2021–2027, which stresses the need for basic education in computer science at all school levels, the Italian Coding League offered teachers and students a competition built on top of the Proposal for National Directions for Teaching Computer Science issued by the CINI Computer Science and School Laboratory. In the organization of the event, special attention was paid to the following topics: algorithms, programming, data, and information. The teaching model designed for the competition was made explicit to teachers and related to the syllabus of the Pedagogical Certification on the Use of Digital Technologies run by the University of Genoa in the context of the EPICT (European Pedagogical ICT Licence) Certification [11].

Gamification plays a central role in the Italian Coding League format, implemented via a multiplayer responsive game platform called Smart O.C.A. [10]. Game instances created in Smart O.C.A. can be played on students’ devices and on interactive boards. Each player (individual or team) is represented on the game board by a virtual marker. The player rolls a virtual die to proceed to the next cell. Each cell presents a quiz, possibly accompanied by multimedia elements (videos and pictures).

The 2022 edition took place in March 2022, with the first phase (selection) spanning from March 8th to 25th. It involved 609 students from 29 classes in 9 Italian regions. The activity was managed by the organizers, assisted by 15 tutors, for a total of 112 h of training and competitions with the classes. Students, logged in from their classrooms, took on the challenge, consisting of a series of questions on computational thinking and coding, with the support of a university tutor. As mentioned at the beginning of the section, the questions were designed according to the Proposal for National Directions for Teaching Computer Science in School. Each class received feedback on its performance during the competition, highlighting the areas in which students obtained the best and worst results. In the first phase of the competition, each class participated in a challenge consisting of 15 questions. The resulting ranking was based on the number of correct answers, the time required to complete the challenge, and the interest and enthusiasm shown during the activity. To complete the activity, in the following days the same questions were proposed to the students of all classes involved in the first phase as an individual challenge (to compare team and individual performance). The average score obtained by all classes in the first phase was then used as the threshold for admission to the final. The final took place on March 30th and involved 13 classes. The finalists had to tackle a challenge consisting of 17 new questions. The winner received a ticket to the 2022 edition of the Festival of Science of Genoa.

The Smart O.C.A. platform engaged students in a gamified competition capable of motivating them and focusing their attention on the learning task. Students participated in the game first as a single team and then as individual players. Repeating the game in the two different modalities was used to derive a proficiency measure for classes and individuals. Thanks to the teachers’ skills, groups of experts formed in the classrooms: some at the blackboard solving problems under the direction of their classmates, some searching for information on their tablets to share, and some working in small groups, solving problems and sharing them with the class. A large “cognitive workshop” was activated in all classes, with great internal cohesion and the stimulus of competition with the other classes.

3 Behind the Scenes of ICL 2022

The ICL initiative turned out to be a successful experiential learning activity for the university students attending the third-year elective course “Computer Science for Creativity, Teaching and Dissemination” of the Bachelor of Computer Science at the University of Genoa in the academic year 2021–22 (second semester). The learning outcomes of the course include: “design and conduct hands-on lab activities for introducing beginners to computational thinking and coding”. In this section, we take the point of view of the course and discuss the activities behind the preparation and delivery of the ICL event.

Question Preparation.

The first activity in which university students were involved consisted of preparing the questions to be used in the challenge. After an introduction to computational thinking concepts, visual coding languages (Scratch), and the CINI syllabus, the university students were required to prepare a set of questions/tests on Algorithms, Programming, and Data consistent with the level and goals of grade III of secondary school. A further requirement was to formulate the questions according to the computational thinking guidelines, that is, to take inspiration from everyday algorithms (e.g., recipes, regulations, instructions), to distinguish algorithms from ambiguous, incomplete, or non-terminating procedures, and to focus on basic computational concepts such as variables, iteration, if-then-else, etc. Questions related to coding were based on programs written in visual languages like Scratch. Starting from the analysis of the questions of the first ICL edition, an engaging collaborative effort involving students and instructors produced 32 questions for the two games (15 questions for the selection and 17 questions for the final). Collaborative tools such as Google Sheets were used to organize the set of questions according to the different learning objectives of the CINI syllabus. The most voted questions were discussed in face-to-face meetings, in which students and instructors analyzed them in depth according to different criteria (clarity, conciseness, appropriateness for the age target, diversity, etc.). Students were asked to review the proposed questions with respect to clarity and possible misinterpretations and ambiguities. Several rounds of reformulation of each single quiz were needed to reach the final version. For the second game, a further difficulty was the requirement to link each question to one of the seventeen Go Goals of the Agenda 2030 (a further dimension in the collaborative sheet used during the game preparation). The collaborative work from which the final questions emerged spanned three weeks and a total of 16 h of face-to-face meetings.

The questions used in the two games are publicly available, in Italian, on the Genially web portal (final [Footnote 1], selection [Footnote 2]). Some examples from the final are described below.

Fig. 1. Question 1: Defeating Poverty

Question 1 (“Defeating Poverty”, the first Go Goal of the Agenda 2030), shown in Fig. 1, was about understanding natural language and basic concepts of logic: From the assertion “If all people have a home or money then there is no poverty” it follows that: (A) if there is poverty no one has a home; (B) if there is no poverty everyone has some money; (C) there is no poverty or someone is homeless and has no money; (D) there is no poverty or someone has no home.
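As a sketch of the logical rewriting behind this question (our own formalization, introduced here only for illustration): writing $H$ for “all people have a home or money” and $P$ for “there is poverty”, the assertion reads $H \rightarrow \neg P$, which by material implication is equivalent to $\neg H \lor \neg P$. Negating “all people have a home or money” yields “someone has neither a home nor money”, so the assertion is equivalent to “there is no poverty or someone is homeless and has no money”, i.e., option (C).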

Fig. 2. Question 13: Climate Action

Figure 2 shows Question 13 (“Climate Action”), which required the comprehension of a program written using block instructions. A variable V is used to maintain the maximum value in a loop that monitors the current sea temperature T (e.g., coming from a sensor). The possible answers were: Variable V is: (A) used to maintain the average value of the temperature; (B) used to maintain the maximum value of the temperature; (C) used to maintain the minimum value of the temperature; (D) always equal to the first measurement.
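A minimal Python sketch of the logic implemented by the block program is shown below (this is not the original Scratch code; read_temperature is a hypothetical stand-in for the sensor-reading block):

    import random

    def read_temperature():
        # Hypothetical stand-in for the sensor block: returns a sea temperature in degrees Celsius.
        return random.uniform(10.0, 25.0)

    V = None                    # V holds the maximum temperature observed so far
    for _ in range(10):         # e.g., ten readings
        T = read_temperature()
        if V is None or T > V:
            V = T               # keep the largest value seen, matching option (B)
    print("Maximum observed temperature:", V)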

Fig. 3. Question 15: Life on Earth

Question 15 (“Life on Earth”), shown in Fig. 3, required calculating the effect, in this case with an exponential trend, of parasite reproduction after a certain number of months: The parasite population of trees in a certain area counts 10 units. If every parasite reproduces 5 times every month and 20 units can kill one tree, how much time is needed to kill 15000 trees? (A) 20 years; (B) 2 years; (C) 20 months; (D) 6 months.
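As a worked calculation (under the reading, assumed here, that each parasite produces 5 offspring per month, so the population is multiplied by 6 each month): killing 15000 trees requires $15000 \times 20 = 300{,}000$ parasites; the population after $n$ months is $10 \cdot 6^{n}$, and since $10 \cdot 6^{5} = 77{,}760 < 300{,}000 \le 466{,}560 = 10 \cdot 6^{6}$, the threshold is first exceeded after 6 months, i.e., option (D). Under a different reading of “reproduces 5 times” the numbers change, so this is only an illustration of the exponential reasoning involved.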

Fig. 4. Question 17: Partnership for the Goals

Finally, Fig. 4 shows Question 17 (“Partnership for the Goals”), dealing with an example of network analysis and the concept of the centrality of a node in a graph: The graph with nodes 1–6 represents existing collaborations among international organizations (1 is linked to 5, etc.); which organization has the smallest distance from all the other ones? (A) 2 or 5; (B) 1; (C) 2; (D) 4 or 5.
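Since the full edge list of the question is not reproduced above, the following Python sketch uses a hypothetical 6-node collaboration graph to illustrate the underlying computation: a breadth-first search gives the shortest-path distances from each node, and the most central organization is the one minimizing the total distance to all the others.

    from collections import deque

    # Hypothetical 6-node collaboration graph (not the one used in the question).
    graph = {
        1: [2, 5],
        2: [1, 3, 4],
        3: [2, 4],
        4: [2, 3, 5],
        5: [1, 4, 6],
        6: [5],
    }

    def total_distance(source):
        # Sum of shortest-path (BFS) distances from `source` to every other node.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return sum(dist.values())

    # The most central organization minimizes the total distance to all the others.
    best = min(graph, key=total_distance)
    print(best, total_distance(best))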

The questions were associated with the objectives identified by the CINI syllabus: in the selection phase, the domains Algorithms, Programming, Data, and Information were considered; in the final phase, a question related to the domain Digital Awareness was added. Table 1 shows the classification of the questions into the corresponding topics of the CINI syllabus.

Game Board and Graphical Design.

As a second step, university students worked on the presentation of the challenge, creating a game board and a graphical layout for the different questions via the Genially editor. As shown in the quiz examples, the graphical layout for the seventeen questions used in the final was inspired by the seventeen Go Goals of the Agenda 2030. The questions were then linked to the Smart O.C.A. game instance to provide a multiplayer game experience.

Game Delivery.

After several internal tests, our students brought their game instance to the field, guiding 609 participants through both the selection phase and the final match. The tutors guided the different classes in parallel sessions run via Google Meet. Responses from individual students in a class (or from small groups) were aggregated using Wooclap, an online survey web app. The answer most voted for by the class was then entered by the tutor on the Smart O.C.A. game board to continue the game.
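As an illustration of this aggregation step (with made-up answers, not real class data), the per-class reduction of the Wooclap responses to a single option amounts to a simple majority count, for example in Python:

    from collections import Counter

    # Illustrative data only: the answers given by one class to a single question.
    class_answers = ["C", "C", "B", "C", "D", "C", "B"]
    most_voted, votes = Counter(class_answers).most_common(1)[0]
    print(most_voted, votes)   # -> C 4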

Table 1. Scope and Objectives of the CINI Syllabus. The third column reports the questions for each objective (in the Selection and the Final)
Table 2. Responses to the first-phase questions, referenced to the CINI syllabus

4 Assessment Using Data Collected During the Competition

Since the questions of the challenge were based on the CINI framework, we were able to obtain a global picture of the skills and competences demonstrated by the participants, both overall and at the class level. Table 2 shows the statistics obtained from the analysis of the 435 responses to the 15 selection questions sent by the 29 classes. The questions related to algorithms, e.g., everyday algorithms, properties of algorithms, distinguishing algorithms from ambiguous, incomplete, or non-terminating procedures, etc., were found to be the most difficult to tackle. Many of the programming questions were related to examples of scripts in Scratch, a language already used by all the classes, and were thus the most familiar. As for the final, the 17 questions (with 221 total responses accepted) mainly covered the areas of algorithms and programming, plus one question on digital skills, as described in Table 3.

Table 3 reports the statistics emerging from the analysis of the responses given in the final. Since the selection of the classes admitted to the final was based on the average score obtained in the selection match, we expected an increase in the success rate. This improvement was indeed observed: the rate improved significantly in all categories: Algorithms (from 51% to 71%), Programming (from 67% to 78%), and Data (from 55% to 73%). In the final as well, the questions on algorithms represented the greatest difficulty.

Table 3. Responses to the final-phase questions, referenced to the CINI syllabus

Table 4 compares the data collected during the individual challenge, in which students attempted to answer the questions on their own, with the data collected from Wooclap during the selection phase (each class playing as a single player). Each record in the student-level data identifies a student (or a pair/small group of students per tablet) rather than the entire class. Table 4 shows the response success rate for each skill area in the selection phase (with class- and student-level data) and in the individual phase (with student-level data). In the individual challenge, the percentage of correct answers improved significantly for both the data and algorithms domains. In contrast, there seems to be no learning effect on programming. This could be attributed to the lack of data from some classes; in fact, not all classes participated in the individual challenge. Finally, a comparison between the class-level and the student-level (Wooclap) data for the selection phase shows greater variability in the responses for the data topic, which increases further in the individual phase.

Table 4. Correct response rate by topic area for the selection and individual stages.

To make the ICL a learning opportunity for all participating classes, we provided each class with feedback about its own results. The feedback gave the class an analysis of the level of proficiency achieved in each computer science area. Specifically, the document offered a visual analysis of the data with radar charts at both the scope and the individual-question level. Finally, it contained solutions and explanations of the exercises.
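As an illustration of how such a per-scope radar chart can be produced (a minimal matplotlib sketch with illustrative percentages, not real class data):

    import numpy as np
    import matplotlib.pyplot as plt

    scopes = ["Algorithms", "Programming", "Data", "Information"]
    rates = [0.55, 0.70, 0.60, 0.65]      # illustrative values only

    angles = np.linspace(0, 2 * np.pi, len(scopes), endpoint=False).tolist()
    angles += angles[:1]                   # repeat the first angle to close the polygon
    values = rates + rates[:1]

    ax = plt.subplot(polar=True)
    ax.plot(angles, values, linewidth=2)
    ax.fill(angles, values, alpha=0.25)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(scopes)
    ax.set_ylim(0, 1)
    ax.set_title("Correct-answer rate by scope (illustrative)")
    plt.show()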

5 Reflection Phase for University Students

The experiential learning activity for the university students concluded with a metacognitive [12] activity in which they were guided in reflecting on the experience. Specifically, after the conclusion of the experience, they were asked, through an anonymous Wooclap survey, to answer some open questions, e.g., to describe the hardest aspect, the most useful aspect, and the learning outcomes of the experience.

In addition, they were asked to evaluate on a Likert scale (1–4): the challenge level of the tasks they were involved in; the level of support offered by teachers and classmates; and the quality of the learning resources used to complete the tasks. The experience was perceived as challenging (all answers were 3 or 4), but students felt they had been supported enough (87.5% of answers were 3 or 4) during the entire activity. The answers to the open questions were discussed in a final meeting, clustering the individual answers and classifying them as disciplinary, pedagogical, or soft skills. According to the answers obtained, the main skills that students recognized as having been acquired or strengthened by the experience are:

  • Disciplinary skills:

    • Creating content aimed at middle school students.

    • Matching CS knowledge to the topics required by the syllabus.

  • Pedagogical skills:

    • Focusing on the specific skill/knowledge assessed by a question.

    • Formulating questions adequate to the target.

  • Soft skills:

    • Teamwork and project management.

    • Inventiveness and creativity.

In the same final metacognitive feedback, the main difficulties related to the experience emerged as formulating clear and unambiguous questions and linking the questions to the Agenda 2030 objectives. Indeed, although the added value of an interdisciplinary approach was well recognized, the need to involve domain experts as advisors and/or reviewers emerged. Although the online management of the challenge was not simple, it was not perceived as a difficulty by the students. Being part of a bachelor’s degree course, the proposed activity was a great opportunity for students to discuss their experiences and to propose and test their work with a real audience, thereby strengthening their soft skills, and for instructors to establish a sense of trust and openness with students, in line with the experience-based learning criteria proposed in [13], specifically concrete experience, active experimentation, and reflective observation.

6 Conclusions

In this paper, we have described the results of a successful collaboration between university instructors and students in the organization of a distance laboratory on computational thinking. The ICL event enabled multiple levels of learning for all the actors involved: university students and instructors, and school classes and teachers. University students had the opportunity to experiment with non-standard educational methodologies learned in the ICDD course in a real scenario and to acquire knowledge of the guidelines for teaching computer science in schools by formulating questions and mapping them to the CINI syllabus. University instructors collected useful feedback on the different areas covered by the syllabus and on the format adopted for this kind of orientation event. For school students and teachers, it turned out to be a good opportunity to evaluate computer science competences in a less traditional context. A similar activity was offered as a remote laboratory with four classes at the Pisa Internet Festival in October 2022. The ICL experience was also a turning point for the organization of the contents of the ICDD course, which is now based on a series of lectures and activities centered around different types of learning methodologies in which university students must take an active role.

Among future research goals, we plan to explore other non-standard formats combining computational thinking and digital competences in engaging activities, and to refine the evaluation method used to measure the learning outcomes of the university students involved in this kind of activity.