Introduction

Assessment practices developed with Information and Communication Technologies (ICT) are playing an increasingly important role in the transformation of tertiary education (Crossouard 2010; Whitelock 2010). While some authors focus on the potential of e-assessment to automate processes such as scoring (Chiou et al. 2009; Noorbehbahani and Kardan 2011; Stödberg 2012), others focus on its potential for developing formative assessment activities that optimize the monitoring and support of the student’s learning (Gaytan and McEwen 2007; Lafuente et al. 2014).

In this context, some studies have highlighted the potential of e-assessment to increase transparency in assessment processes. However, the term ‘transparency’ can be ambiguous, as it is polysemic. Some studies use the term to mean that correction and grading processes are visible and comprehensible; studies developed around this concept usually point out the importance of criteria-based assessment practices that inform students about what will be assessed and how (McCracken et al. 2012; McNamara and Burton 2010). Other studies use the term to mean that the student’s learning process is accessible and visible; this approach highlights the need for the instructor to access the process followed by the student in order to better apprehend and support the student’s learning (Jones and Cooke 2006; Macdonald 2003). In this study we focus on the latter meaning.

Background

Jones and Cooke (2006) use the concept of ‘a window into learning’ to describe how online environments provide insight into students’ thoughts and problem-solving processes by keeping records of their communication. In a similar fashion, Macdonald (2003) states that online collaborative work can be more transparent than face-to-face collaboration, since a record of the students’ messages can be used to assess their collaborative work. Although some authors have argued that increasing transparency over the student’s learning process might create a ‘Big Brother’ scenario that diminishes students’ privacy (Lemanski 2011), benefits have also been pointed out. For instance, the instructor is in a better position to assess each student’s engagement in the collective process fairly. Caple and Bogle (2013) emphasize the same benefit when studying the use of a wiki tool for grading collaborative activities: the instructor may draw on the technological tools to detect ‘free-riders’ and grade everyone according to their individual contributions.

However, using transparency to enhance learning could be of greater importance than optimizing grading processes. The more the instructor knows about the student’s learning process, the better he or she will be able to support it, for instance by providing more valuable and tailored feedback (Nicol and Macfarlane-Dick 2006). According to Nicol and Macfarlane-Dick (2006), the term ‘feedback’ refers to information offered to the student about the gap between his or her present state and the learning goals and standards. It is impossible to give feedback on the learning process without sufficient and relevant evidence about it. This issue affects the degree of validity and reliability ultimately achieved by assessment, especially e-assessment (García et al. 2014; Gikandi et al. 2011). We use the term e-assessment in this article to refer to any process conducted through ICT—including the Internet, Learning Management Systems (LMS), communication tools like forums or chats, etc.—where information on the student’s learning is gathered and analyzed in relation to the achievement of previously defined learning outcomes (Gikandi et al. 2011). The reliability of online formative assessment relates to providing opportunities for documenting and monitoring evidence of learning: instructors must be able to monitor the learning process and identify individual learners’ progress, strengths and weaknesses in order to provide adequate feedback (Beaumont et al. 2011; Gibbs and Dunbar-Goddet 2007).

The information provided by the process is more detailed than the evidence provided by the end-product when it comes to assessing and supporting students (Jones and Cooke 2006). Traditional approaches to assessment have radically separated process and product, and have prioritized the assessment of the final product or result achieved by the student. However, process and product are intrinsically associated (Akyol and Garrison 2011); ‘what is learned (the outcome or the result) and how it is learned (the act or the process) are two inseparable aspects of learning’ (Marton 1988, p. 53). In addition, following the process, rather than just assessing the product, gives the instructor the opportunity to intervene in a timely manner by giving in-task guidance (Beaumont et al. 2011). This procedure enables the regulation of future development from a ‘feedforward’ perspective, rather than focusing the students’ attention solely on past events (Price et al. 2011). Additionally, feedback that addresses subsequent performance may improve ensuing learning by engaging students in a dialogic-guidance process (Hyatt 2005), described as a process where information regarding the adequacy of learning flows in both directions. Despite the growing importance of communication media based on video and audio, many online instructors continue to use text-based technologies for establishing bidirectional communication with the student, arguing that this form of communication facilitates higher levels of learning (Kanuka 2011). The dialogic approach to feedback is conceived as a social, interactive and ongoing process, as opposed to the one-off product approach where instructors deliver feedback as a unidirectional action at the very end of the learning process (Price et al. 2011; Tuck 2012). The dialogic approach to feedback is usually carried out in formative and alternative forms of assessment. Although alternative forms of assessment based on formative and continuous approaches are increasingly important in higher education, traditional approaches linked to summative and end-product assessment are still considered more authoritative (Black and McCormick 2010; Cross and O’Loughlin 2013).

Aim of the study

In this study, we explore the phenomenon of transparency in more depth. We use the term transparency to refer to the potential of the instructional design and of the learning and communication media to offer access to evidence about the learning process (Jones and Cooke 2006). We understand the term ‘learning process’ in a broad sense, as any process leading to knowledge acquisition; this encompasses both the student’s individual, mental processes and the student’s performances (including communicational and social interactions) leading to learning.

Previous studies have analyzed this phenomenon within the context of a single activity, always a collaborative one (Caple and Bogle 2013; Jones and Cooke 2006; Macdonald 2003); we would like to broaden the analysis by comparing the transparency afforded by all the assessment activities of a course, including both collaborative and individual activities. Hence, our study puts the following question forward: (1) are collaborative activities that track students’ contributions by means of ICT the ones that yield higher transparency of learning? With the aim of exploring this phenomenon in more depth, we additionally consider the following question: (2) what other elements, related to the pedagogical design of assessment activities or to technological properties, may contribute to achieving high transparency of learning? Finally, with the intention of studying the impact of transparency on ensuing feedback processes, we pose this question: (3) do higher levels of transparency in assessment activities lead to better-adjusted and more timely feedback from instructors?

Methods

We carried out a qualitative case study. A case study approach is useful when researchers seek a deep understanding of an instructional context, answering not only descriptive but also explanatory questions (Yin 2003). This approach highlights the importance of ecological validity, that is, the study must reflect the real context under examination (Flick 2009); thus, the instructional context cannot be manipulated by the researcher (e.g., the sample of students and instructors, or the activities). We analyzed two different cases that took place at two tertiary institutions in Spain. Participants took part in the research voluntarily, and we followed the ethical standards of each institution at all times. Data were treated anonymously and strictly for the purposes of this study.

We used two criteria to select the cases: first, both cases should include a significant amount of formative assessment practice mediated by ICT; second, one case should take place in fully virtual conditions, while the other should be a blended course. We chose two contrasting scenarios under the assumption that this would help us find diverse patterns of achieving transparency over the student’s learning. While some studies have minimized the differences between the online and blended modes regarding learning processes (Brooks and Bippus 2012), other authors have highlighted the differences between the two formats (Mansour and Mupinga 2007), and some have even asserted the superiority of virtual environments over blended settings (Reasons et al. 2005).

The blended case was a course on Educational Psychology. The course, a compulsory subject in a PhD program at that university, combined face-to-face activities with activities developed on the open-source LMS Moodle. The participants were three male instructors in their 50s collaboratively teaching the same group of 38 students (32 females and 6 males) aged between 23 and 38. The participants communicated in Spanish. All the instructors were Spaniards, whereas the students represented a range of nationalities: 20 Spaniards, 4 Colombians, 4 Brazilians, 4 Chileans, 3 Argentineans, and 3 Mexicans. Although the instructors had more than 20 years of experience teaching and researching in the field of Educational Psychology, this was their first teaching experience in a blended setting; the students came from diverse educational backgrounds (mostly Psychology and Teaching postgraduates), and their technological experience was also quite diverse. The virtual case was a course on Instructional Psychology. This compulsory course took place on the campus of a virtual university by means of a closed-source LMS (based on Sun Microsystems software). There was one female instructor in her 30s and 35 students (22 females and 13 males) aged between 20 and 48. All the participants were Spaniards and, as members of a virtual university, they all had extensive experience teaching and learning on virtual platforms.

In both cases, prior to the beginning of the course, we collected available documents and materials related to the techno-pedagogical design of the course. These documents included the course syllabus, documents with specific instructions for activities and units, and course materials such as readings. These documents helped us appraise the techno-pedagogical design of the course, that is, the instructional decisions revolving around the use of technological devices for teaching and learning (Ramírez et al. 2012). In addition, to study the development of the assessment process, we selected three instructional units from each case to gain a more in-depth understanding of the assessment activities: the first unit of the course, one in the middle (unit 4 in both cases), and the final one. We collected all the communicative exchanges between instructors and students in the assessment activities of those units; this included gathering messages posted in public spaces (such as a forum), messages posted in student teams’ private spaces, and private e-mail messages that the participants forwarded to us. We copied all those messages and stored them in a spreadsheet, coding them according to several variables (author, activity, date, unit, and communication space). In the sampled units, a total of 235 messages were registered in the blended case (171 by the students and 64 by the instructors) and 715 in the virtual case (609 by the students and 106 by the instructor). We also stored all the draft versions elaborated by the students during the collaborative activities, in order to have additional evidence of the process. Additionally, in the blended case, we made systematic observations of face-to-face sessions, writing a narrative register focused on the participants’ performances in the assessment activities.
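To make the coding scheme concrete, here is a minimal sketch of one such record, assuming a simple Python representation; the field names mirror the coding variables listed above, while the class name and the types are our own illustrative choices:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CodedMessage:
    # Field names follow the coding variables described above;
    # the class itself and the types are illustrative assumptions.
    author: str       # participant who wrote the message
    activity: str     # assessment activity the message belongs to
    posted_on: date   # date of posting
    unit: int         # instructional unit sampled (first, middle, or final)
    space: str        # e.g. "public forum", "team private space", "e-mail"
```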

To enrich the analysis, we collected the participants’ opinions on the assessment process: three semi-structured individual interviews were conducted with each instructor, as well as with seven volunteer students (all female) from the blended case and three (two females and one male) from the virtual case. The first interview was conducted at the beginning of the course, in order to capture their expectations of the assessment process and their previous experience in e-learning settings. Two more interviews followed: one in the middle of the course and the last one at the end. In these interviews, the participants elaborated on their views and experiences of the assessment process.

To study the feedback processes in detail, we targeted the communicative exchanges involving these volunteer students in order to identify prototypical forms of the feedback they received.

Data analysis

We performed qualitative content analysis on the whole variety of data gathered for both cases (Flick 2009). First, from a technological perspective, we described, for all the tools involved in the assessment activities, the technological properties most relevant to the phenomenon of transparency. We took into account (Chou 2003): (1) properties for communication (synchronous/asynchronous; unidirectional/bidirectional; one-to-one/one-to-many/many-to-many); (2) properties for representation of contents (text/audio/visual; linear/non-linear representation; permanent/temporary representation of contents); (3) properties for accessibility to contents (everyone can access the contents/only some participants can access the contents). Second, we analyzed the level of transparency that each assessment activity afforded the instructor. Since the focus of this study is e-assessment, we focused the analysis on assessment activities mediated by ICT rather than on face-to-face activities that did not use ICT.
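As an illustration of how a tool can be characterized along these three dimensions, consider the following sketch; the class, the field names and the example values are our assumptions for illustration, not part of Chou’s (2003) framework itself:

```python
from dataclasses import dataclass

@dataclass
class ToolProperties:
    # (1) properties for communication
    synchronicity: str   # "synchronous" or "asynchronous"
    directionality: str  # "unidirectional" or "bidirectional"
    addressing: str      # "one-to-one", "one-to-many" or "many-to-many"
    # (2) properties for representation of contents
    modality: str        # "text", "audio" or "visual"
    linearity: str       # "linear" or "non-linear"
    persistence: str     # "permanent" or "temporary"
    # (3) properties for accessibility to contents
    access: str          # "everyone" or "only some participants"

# Example: an asynchronous, text-based e-forum with permanently
# recorded messages, open to all course participants.
forum = ToolProperties("asynchronous", "bidirectional", "many-to-many",
                       "text", "linear", "permanent", "everyone")
```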

We created a rubric to evaluate the transparency of the assessment activities. The rubric was the result of an inductive–deductive design procedure (Moskal and Leydens 2000). First, we defined operational criteria for every evaluation category, based on theoretical principles derived from the literature review. Then, the researchers applied the rubric in an iterative process that involved modifying some categories. Later, to determine the reliability of the coding system, the rubric was applied by a set of three external judges. The reliability of the category system was tested by calculating the Kappa coefficient (Banerjee et al. 1999), which corrects for the possibility of agreement due to chance. Since the three external judges coded 11 activities into 4 categories, a multirater Kappa coefficient was used (Randolph 2005). The overall agreement reached by the three judges was 0.88, and a satisfactory free-marginal Kappa of 0.84 was obtained.
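For reference, a worked sketch of the free-marginal computation, assuming the standard formula in Randolph (2005) with $k = 4$ categories (so chance agreement is $1/k = 0.25$):

$$\kappa_{\text{free}} = \frac{\bar{P}_o - \frac{1}{k}}{1 - \frac{1}{k}} = \frac{0.88 - 0.25}{1 - 0.25} = \frac{0.63}{0.75} = 0.84$$

where $\bar{P}_o$ is the overall observed agreement among the three judges.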

The final coding system for analyzing the transparency of the activities considers the instructor’s access to the communicative exchanges among students, to the products elaborated during the development of the activity, and to features likely to reveal the process followed by the students, such as their lines of reasoning for selected choices, self-evaluation activities, etc. We defined four levels of transparency (see Table 3 in “Appendix” for further details on this rubric, and the encoding sketch after this list):

  • Very high transparency: the instructor may monitor the activity continuously and globally;

  • High transparency: the instructor may monitor the development and the closure of the activity substantially;

  • Moderate transparency: the instructor may monitor the beginning and the closure of the activity;

  • Low transparency: the instructor has access only to the final product delivered by the student.
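As a compact summary, the four levels can be encoded ordinally as follows; this sketch is ours, not part of the rubric itself (whose full criteria appear in Table 3):

```python
from enum import IntEnum

class TransparencyLevel(IntEnum):
    # Illustrative ordinal encoding of the rubric levels above;
    # the ordering is implied by the level names, the numbers are ours.
    LOW = 1        # access only to the final product
    MODERATE = 2   # beginning and closure of the activity observable
    HIGH = 3       # development and closure substantially observable
    VERY_HIGH = 4  # continuous and global monitoring possible

# The ordinal encoding allows direct comparisons between activities:
assert TransparencyLevel.VERY_HIGH > TransparencyLevel.MODERATE
```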

Results

Transparency in the blended case

The blended case included the following assessment activities:

  • Collaborative elaboration of documents: students wrote collaborative summaries for each of the six topics of the course.

  • Reading, attending and participating in fortnightly face-to-face sessions: prior to every session, the students had to read a text, and some of them presented their collaborative summary to their classmates; instructors and students held a discussion around these summaries.

  • Peer-assessment of presentations: all students, except the authors, had to complete a form to assess those summaries.

  • Participation in discussion forums: three different discussion topics were raised in three forums on the platform. The students had to contribute at least once a month throughout the course.

  • Individual synthesis: the students had to elaborate a personal synthesis of all the learned contents.

The transparency analysis revealed that only one activity, the discussion forum, reached a high level of transparency. In this activity, the instructors had access to the students’ postings from the very beginning, so that their communicative exchanges could reveal their lines of reasoning. The tool (an e-forum) allowed the students to communicate asynchronously through text-based messages expressed in a classical linear fashion. Those messages, permanently recorded by the tool, allowed the instructors to go back and review each student’s participation at any given time during the course. Second, the elaboration of the collaborative documents achieved moderate transparency. In this activity, students worked on the document autonomously; however, they sent a draft version of their presentation to the teacher by e-mail and then had a face-to-face interview where the instructor gave guidance before the presentation of the document in the classroom. Nevertheless, the most relevant result was probably the low transparency achieved by the rest of the activities. Table 1 shows these results.

Table 1 Transparency of assessment activities (blended case)

Development of the assessment process in the blended case and participants’ perceptions

As one of the instructors explains in the initial interview, the forum discussion was designed to ‘gather indicators of learning throughout the course, enabling the detection of any evolution in the students’ contributions’. However, during his last interview he stated:

‘I admit that the forum is a powerful tool in terms of giving personalized and adjusted feedback; however, it’s highly time-consuming. Actually… we ended up not having the time to analyze their messages, so our assessment was limited to counting their posts and seeing if they had contributed more or less to the discussion’.

Indeed, we observed no moderation by the instructors during the discussion held on the forum; something similar occurred during the peer-assessment activity. This had a significant impact on the students’ participation. As one student said:

‘The forum discussion is too open, there’s no debate moderation. At first my motivation to post messages was low, but now it’s almost zero. You say to yourself: okay, I will look around for an opinion I can submit. But then I don’t have the time to read everything that has been said, nor to analyze it… For me, the discussion is nonsense. Everyone is posting just because it’s mandatory’.

Instructors saved most of their feedback for the face-to-face settings. The collaborative elaboration of documents and the face-to-face sessions ended up being the contexts where the instructors offered the most support to the students.

To grade the students, the instructors focused on the contents of the synthesis activity; according to them, the rest of the activities played a ‘modulatory’ role in that process. They sent a personal e-mail to every student informing them of their final grade, along with a general comment on their achievement.

Transparency in the virtual case

In the virtual case, the following activities and tools were considered to assess the students:

  • Screening activity: at the beginning of the course, students had to answer some questions about their knowledge of and expectations for the course.

  • Collaborative discussion: a team of four students had to reach an agreement on the three main ideas of each unit. This discussion was held in an environment called the ‘Team Working Space’ (TWS). Every team had its own private TWS, which included a forum tool, a file repository, and a bulletin board. The instructor had access to the TWS of each team.

  • Case study: the students had to work collaboratively to solve one case in every unit. This activity took place in the TWS. The instructor eventually allowed some of the students to do this activity individually.

  • Monograph reading: each student had to choose a book out of a choice of six, answer a reading guide, and hand it in to the instructor at the end of the course.

  • Individual synthesis: at the end of the course, the students had to elaborate an individual synthesis summing up all the learned contents.

  • Authentication test: this was the only activity that took place in a face-to-face setting; each student had to answer four questions on the previous activities to verify their authorship.

As for the transparency of the activities, we observed that the instructor had exceptional access to the students’ learning process, particularly in two activities: the collaborative discussion and the case study (when developed collaboratively). Hence, we conclude that the instructor could reach a very high level of transparency in these two activities. These were, precisely, the only activities developed in every unit and the only ones that required the students’ collaboration. For these activities, every team worked in their TWS: its e-forum allowed the students and the instructor to communicate asynchronously and bidirectionally through text-based messages, and its file repository allowed the students to share the progressive versions of the product. Both the messages and the files were recorded permanently by the technological devices, a property that contributed to the very high transparency of the process followed by the students. In addition, we would highlight the strong contrast between the very high transparency afforded by those two activities and the low transparency enabled by the rest of the activities. Table 2 shows these results.

Table 2 Transparency of assessment activities (virtual case)

Development of the assessment process in the virtual case and participants’ perceptions

The strong contrast between the transparency of collaborative and individual activities was captured in the case study activity, where students had the choice to do it either way; as the instructor commented in one interview:

‘I couldn’t oversee what students were doing, if they chose to do it individually. They sent me the product and that’s it. However, those who did it collaboratively… yes, I went into their working space and supervised their drafts and their messages, even if they didn’t ask for my help. And I offered support, especially if I saw that it was a mess…’

In the two collaborative activities the teacher was able to look over the collective process, diagnose the students’ specific needs and offer tailored support. This was expressed through the following comments on the collaborative discussion activity:

‘There are teams that need more support. Some of them have a hard time elaborating the main ideas of the unit, some have problems deciding which idea is more important; in those cases, I give more direction. Also… I have noticed that some groups tend to negotiate merely the form or the structure of the writing, rather than its contents. So I have to refocus the discussion of those groups…’

Sometimes, feedback involved giving support when the instructor detected that the students had difficulties organizing and coordinating their performances. For instance, she posted the following in the collaborative discussion activity:

‘Why don’t you post all your ideas in a single file and then one of you makes a proposal as to which ideas are more relevant? Maybe it’ll be easier […]’

The support offered by the instructor was greatly appreciated by the students, as captured in the following comments by a student in the final interview:

‘Having the teacher clarifying issues, saying: “this is wrong”, or “you could add other ideas”… That’s when you realize that you’re moving forward. There’s a dialogue between you and the teacher and… this makes you feel more secure. Otherwise, it’s just reading and discussing with your team-mates, and then you wonder: am I really learning everything I have to learn?’

In the individual activities, the instructor returned the students’ final products with some notes on the document itself, pointing out occasional mistakes and/or appropriate answers. The students’ final grade was calculated by weighting the collaborative discussion and the individual synthesis at 30% each, and the case study and the monograph reading at 20% each.
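Since the weights sum to 100%, the grading scheme reduces to a weighted average. The following minimal sketch reproduces it; only the weights come from the course design, while the function, the variable names and the 0–10 scale are illustrative assumptions:

```python
# Weights taken from the course design; everything else is illustrative.
WEIGHTS = {
    "collaborative_discussion": 0.30,
    "individual_synthesis": 0.30,
    "case_study": 0.20,
    "monograph_reading": 0.20,
}

def final_grade(scores: dict) -> float:
    """Weighted average of activity scores, keyed like WEIGHTS."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights cover 100%
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Example on a hypothetical 0-10 scale:
# 0.3*8 + 0.3*7 + 0.2*9 + 0.2*6 = 7.5
print(final_grade({
    "collaborative_discussion": 8,
    "individual_synthesis": 7,
    "case_study": 9,
    "monograph_reading": 6,
}))
```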

Discussion

We would like to discuss our results following the questions previously presented.

Are collaborative activities that track students’ contributions by means of ICT the ones that yield higher transparency of learning?

Collaborative activities that record students’ contributions by means of ICT provide an ideal setting for increasing transparency of learning, whereas individual activities yield the lowest level of transparency: in the latter, instructors assessed only the final product, with little or no access to communicative exchanges during the process.

Of all the activities analyzed, the two that reached the highest levels of transparency were the ones with a collaborative structure. We must point out the importance of the TWS of the virtual case in this phenomenon, since it afforded the creation of a private environment in which all the students’ actions were registered. Furthermore, the collaborative structure compelled students to expose evidence of their learning process; this exposure took place through posting doubts or requests for support, justifying selected choices, or publishing successive drafts of the product. These two features increased the potential of these activities to be used as monitoring devices in the assessment process.

However, we must specify that in the blended case one activity’s design also yielded a high level of transparency: the discussion forum. A public debate is not, strictly speaking, a collaborative activity, since it only requires the students to post their opinions (rather than reach a shared product or goal in interdependence). Hence, while our study is consistent with the conclusions of previous research (Jones and Cooke 2006; Macdonald 2003), we must specify that it is not the collaborative nature of the task per se that increased the transparency of the learning process (as those studies suggest), but the communicative exchanges between students. Therefore, we conclude that activities that promote peer-to-peer communication are the ones that yield higher levels of transparency. This educational setting is ideal for assessment, since it increases the likelihood of the teacher diagnosing the learners’ strengths and weaknesses.

What other elements related to the pedagogical design of assessment activities or to technological properties may contribute to achieve a high transparency of learning?

From a technological perspective, we found that high levels of transparency were achieved with tools that afforded asynchronous and bidirectional communication through text-based messages; according to previous research, these affordances are widely used in current online settings (Kanuka 2011). In our study, the capacity to register the students’ actions and performances throughout the activity proved to be a key technological feature. In the virtual case, which contained the two activities with the highest levels of transparency, we observed the use of clusters of different tools in collaborative working spaces (a forum, a bulletin board and a file repository). Thus, combining a range of technological devices appears to have strong educational potential.

However, transparency of learning relies on pedagogical rather than technological aspects; indeed, we observed activities that used similar tools yet achieved very different levels of transparency. Therefore, we conclude that technological tools and their properties do not have a predetermined impact on learning transparency; rather, learning transparency relies on the pedagogical aspects described under the previous question.

In both cases, the instructors seemed to ensure means of monitoring the student’s learning process through the design of some suitable assessment activities. However, in both cases the instructors complained about the workload associated with assessing the contributions of so many students in so many activities, something also reflected in other studies (Jones and Cooke 2006; Cross and O’Loughlin 2013). Indeed, as asserted by previous studies (McCarthy et al. 2010), online discussion boards may lead to an overwhelming experience, especially if they are not carefully planned and implemented by instructors. In our study, the use of e-assessment was very different in each case: whereas in the blended case the instructors focused on the assessment of the more individual and summative activities, in the virtual case the instructor used the more collaborative and formative activities to monitor students’ progress. Such collaborative and formative activities have been documented as highly effective assessment procedures in online environments (Gaytan and McEwen 2007).

Do higher levels of transparency in assessment activities provide better adjusted and timely feedback by instructors?

Our study contradicts the assertion that high levels of transparency will necessarily improve the quality of the instructor’s feedback: while in the virtual case the instructor made the most of the high transparency to improve her feedback, in the blended case transparency was largely wasted.

In the virtual case, we found that the instructor had better chances of monitoring the students’ learning process, which allowed her to better diagnose their needs, such as a tendency to focus the discussion on superficial issues, or difficulties in elaborating contents or choosing key ideas. The high transparency of the students’ learning process allowed her to adjust her messages to those needs; in addition, that support was given while the task was still in progress, resulting in ‘in-task feedback’ according to the model of Beaumont et al. (2011). In those activities, feedback-giving reflected the dialogic and socially-embedded approach highlighted by previous research (Hyatt 2005; Price et al. 2011; Tuck 2012). The feedback process was positively valued by the students; they acknowledged the effort the instructor invested in monitoring the whole learning process, and the opportunity to engage in a more interactive and ongoing feedback process.

In the blended case, on the other hand, we observed that the high transparency created by means of ICT was not used by the instructors to diagnose the evolution of the students’ learning process, nor to give them appropriate feedback on those tasks. Indeed, the absence of feedback during the development of activities that could have afforded a rich discussion between students and instructors (such as the forum activity) was perceived by the students as a lack of support; this perception affected their participation and engagement in those activities. Feedback in the blended case was mainly delivered in face-to-face contexts and at the end of the activities. Hence, in the blended case, feedback-giving reflected the one-off product approach described in previous studies (Hyatt 2005; Price et al. 2011), where instructors deliver feedback as a one-way action. We can speculate that the instructors’ lack of experience in this type of blended setting may explain, to some extent, the poor exploitation of the technological platform in that case.

Ultimately, we cannot state as a rule that richer and more relevant information provided via the LMS automatically guarantees better feedback from the instructor. Having relevant information may be regarded as a condition for providing relevant feedback (Nicol and Macfarlane-Dick 2006); however, using this information to provide feedback and optimize learning requires something more on the part of the instructor. According to our study, while in the virtual case ICT was conceived both as a medium to gather indicators of learning and as a medium to optimize learning, in the blended case it was mainly used to collect quantitative indicators of that learning.

Conclusions

This study points out the importance of achieving transparency over the students’ learning process. High transparency does not automatically improve the instructors’ feedback; however, the results of the virtual case show that it can be used to improve the diagnosis of the students’ needs and to adjust feedback accordingly, encouraging further learning (Nicol and Macfarlane-Dick 2006). Hence, high transparency of the students’ learning process can actually improve the degree of validity and reliability ultimately achieved by e-assessment (Gikandi et al. 2011). Information gathered by instructors from activities with high transparency of learning may facilitate feedback-giving under a dialogic-guidance approach (Price et al. 2011; Tuck 2012).

Gaining insight into the student’s learning requires arranging both pedagogical and technological elements of instructional settings. Pedagogical components of the design seem to play the most important role in that phenomenon. That being said, we must also point out that having a wide range of technological devices in a virtual environment, where students leave a trace of their communicative exchanges, is of utmost importance for reaching high levels of transparency. Certainly, this form of transparency is very difficult to reach in traditional face-to-face settings that do not involve the use of ICT (Jones and Cooke 2006; Macdonald 2003).

A small-scale study like ours does not intend to provide universal conclusions ready to be generalized to any instructional context. Indeed, in our study the format of the course was just one variable that differentiated the cases, along with other important factors such as the educational degree, the technological platform, and the participants’ experience in online environments. This is a clear limitation of our study, as it hinders the comparison between the two cases. However, there may be insights to be gained from looking at the individual case that have wider implications (Denscombe 2003). Based on our findings, we put forward some criteria for instructional designers and instructors willing to use ICT to increase the transparency of the student’s learning:

  • Promote peer-to-peer communication that can be recorded throughout the activity by a wide range of technological tools.

  • Asynchronous text-based communication is still a highly effective device for enabling high learning transparency.

  • Consider formative assessment activities as a means for gathering information to improve feedback, and not only to control and grade your students; in those activities, engage your students in dialogic-guidance feedback formats.

  • Live up to the expectations you have created: in formative activities where students expect support from instructors, they must receive it; otherwise, they may feel “abandoned” and their participation may not be authentic.

  • In case of work overload, focus on monitoring collaborative activities, as they provide an open ‘window’ into the students’ learning process.

The results of this study must be contrasted, extended and enriched through the analysis of assessment practices different from those we studied. We must also urge the study of new generations of ICT, since new technological tools might affect the instructor’s access to the learning process.