Information and communication technology (ICT) literacy has been seen as one of the key educational goals for the twenty-first century. For example, consider this joint statement from three information technology companies:

The economy of leading countries is now based more on the manufacture and delivery of information products and services than on the manufacture of material goods. Even many aspects of the manufacturing of material goods are strongly dependent on innovative uses of technologies. The start of the 21st century also has witnessed significant social trends in which people access, use, and create information and knowledge very differently than they did in previous decades, again due in many ways to the ubiquitous availability of ICT. (CIM 2008, p. 1)

One can assume that this broad change will have a large influence on the personal and working lives of many people, and thus will also have large effects on the educational systems that prepare people for their lives and careers. This will include the characteristics and labels of the subjects that are taught in schools, the instruction for those new subjects (and the traditional subjects), and how education is structured. Current changes in educational policies, such as the U.S. Common Core Standards (e.g., CCSSI 2010) and the Next Generation Science Standards (NGSS Lead States 2013), are examples of efforts to cope with these broad changes. We see the movement towards twenty-first century skills (Binkley et al. 2012), in general, and towards new forms of ICT literacy in particular, as further examples of the same trend. We begin our discussion with this initial broad definition of information and communication technology literacy: information and communication technology literacy is a set of skills associated with the use of contemporary technologies for information processing and communications. The definition is deliberately open with respect to technological developments over time – this will involve changes both in the technologies themselves (both hardware and software), and also in the range of human activities that are facilitated by those technologies. In fact, it reaches back in time, and hence can be seen to include the use of Morse code on telegraphs, signal flags on sailing ships, handwritten letters, and even glyphs carved in stone.

The current conceptualization of educational assessment is out of date in some respects. First, in business, knowledge is applied across disciplinary boundaries in the process of dealing with real problems, but in schools the subjects are based on traditional disciplines. Second, in business, people work both alone and in groups to share complementary knowledge and skills and attain common goals – this is in contrast with the situation in schools and assessments, where students are required to work on projects and take tests individually. Third, in business, workers have access to large amounts of information and to technological tools, where the task is to craft an efficient and satisfying solution, which differs strongly from the typical practice of “closed book” standardized assessment. Fourth, in business, problems are contextualized in particular situations, which are not structured to be addressed by simply recalling knowledge or working through simple algorithms – this again differs from a great deal of education in schools, and most strongly from the context of standardized testing (CIM 2008). We observe that these changes in the nature of work have led to changes in the concept of ICT literacy.

The efforts described in this chapter were grounded in the Assessment and Teaching of Twenty-first Century Skills project (ATC21S – Griffin et al. 2012; Care 2017), which was launched in 2009 as a response to these transitions in the world economy due to developments in information and communication technologies.

In conceiving ICT literacy itself as a twenty-first century skill, we view it as encompassing a range of subtopics, including learning in networks, information literacy, digital competence and technological awareness, all of which contribute to learning to learn through the development of enabling skills. In the global economy, learning through digital networks, and the use of digital media, is becoming increasingly important in private life, in learning, and in professional life. We predict that this aspect of learning will become even more important in the future. We see this as being true at the individual level and local or regional levels as well as at international levels. Thus, we focus the concept of learning to learn onto the digital domain, and arrive at the idea of learning to learn in the context of digital networks.

We provide a brief review of developments in the concept of ICT literacy over the last 25 years or so. We see that the concept of ICT literacy has changed a great deal during these years: from a conceptualization as a specific domain of knowledge about computers to an understanding of it as a domain-general or transversal twenty-first century skill. (A full account of this was originally published in Wilson et al. 2015.) The second half of the chapter gives a brief account of the ICT Literacy project itself (for more details on this see Wilson et al. 2015), and then examines selected results from the empirical study, focusing on (a) the multidimensional model of ICT Literacy, and (b) the Wright Maps for its subdimensions. We conclude with a summary and discussion of broader implications.

ICT Literacy: A History of the Concept

The concept of twenty-first century skills is one that has drawn broad support in recent years. For the ATC21S project, sets of twenty-first century skills were identified based on an analysis of 12 relevant prior twenty-first century skill frameworks drawn from a number of countries and international organizations (Binkley et al. 2012). These included the OECD and countries in Europe, North America, and Asia/Oceania. In the new framework, called “KSAVE,” the ten components encompass not only skills, but as the acronym implies, knowledge (K), skills (S), attitudes (A), values (V), and ethics (E). KSAVE organizes the ten components into four conceptual groupings: ways of thinking, ways of working, tools for working, and living in the world. ICT Literacy was chosen as one of these twenty-first century skills to be examined in more detail, and exemplified in the form of online assessments. In the paragraphs that follow, we trace the conceptual changes in the idea of ICT literacy in four main steps.

  1. First, it was seen as a concentration of core knowledge and skills about computers and their use, coalescing into the concept of ICT literacy in the early years of the field.

  2. Second, this idea transitioned to a view of ICT literacy as a broad set of skills with links to many traditional and non-traditional school subjects, accompanying the move to technology integration in education.

  3. Third, in a second transition, ICT Literacy was expressed in terms of progress variables, which are essential tools for the design of curriculum and assessments. The “progress” view reflects the need to understand the initial ICT knowledge likely to emerge, followed by a developing picture of mastery.

  4. Fourth, we consider a new view of ICT literacy that emerged from the impact of the “network” perspective: the critical need for building the power of virtual skills through proficiency with networks of people, information, tools, and resources. Here we offer a new framework for assessing student ICT learning, based on a learning progression point of view.

A Set of Core Skills

What we now call ICT Literacy was first seen as a concentration of core knowledge and skills about computers and their use, coalescing into the concept of ICT literacy in the early years of the field. Attempts to measure ICT literacy in schools go back at least 25 years. The 1989 and 1992 Computers in Education Studies (COMPED), carried out by the International Association for the Evaluation of Educational Achievement (IEA), evaluated computer use in schools and its impact on students. These IEA studies found that, at that time, in most countries, there was a consistent increase in school computer equipment being made available, as well as more teachers using computers in their lessons. However, very few educators were as yet participating in this trend (IEA 2014a; Pelgrum and Plomp 1991). At around the same time, research synthesized in meta-analytic studies was raising awareness about the growing importance of computers in education, and of the role that digital literacy would play in student proficiencies (Kulik 1994).

An example of a traditional framework of this kind is shown in Fig. 11.1, which is from Dallas County Community College District. The framework for their Computer Skills Placement Test (Dallas County Community College District 2014) consists of six parts: Basic Concepts, File Management, Information and Communication, Spreadsheets – Excel, Presentations – PowerPoint, and Word Processing – Word. The topics are quite tightly focused on specific ICT concepts and skills, and even specific computer software products. We chose the third topic, “Information and Communication,” to focus on, as this seems more broadly based than the others (see Fig. 11.2). This section is split into two portions, Internet and Email, and each of these has a number of subtopics under it: Internet has 8 subtopics, such as “Open (and close) a Web browsing application,” “Bookmark a Web page,” and “Knows how to prevent unauthorized access to a PC;” while Email has 13 subtopics, such as “Open one, several mail messages,” “Use a spell-checking tool to make changes,” and “Choose print number of copies.” Again, the topics are quite tightly focused on specific ICT concepts and skills. We show two of the sample items from the test in Fig. 11.3. These demonstrate the emphasis on vocabulary, knowledge and skills that are the core of traditional definitions of ICT literacy.

Fig. 11.1 Framework for the computer skills placement test questions (Dallas County Community College District)

Fig. 11.2 Detail from the framework for the computer skills placement test questions (Dallas County Community College District)

Fig. 11.3 Sample items from the computer skills placement test questions (Dallas County Community College District)

In 2002 the Organization for Economic Co-operation and Development (OECD) also engaged in international work on ICT literacy. A key question of the OECD study concerned the need for a measure of ICT literacy: was there a need to know what had been mastered by students? Or, more specifically, was there a need to understand what students know and can do, rather than simply record frequency or time-duration counts of use, the array of tools employed, and so forth? The resulting IILP Framework (IILP 2002) made major advances by expanding definitions of ICT competencies. Not only were the usual digital and communications tools to be included, but also the concept of “networks,” or means by which students were to access, manage, integrate, evaluate and create information: these became the five components of the framework (Fig. 11.4). The overarching goal was defined as ‘being able to successfully function in a “knowledge” society.’

Fig. 11.4 The five components of the IILP framework

Technical skills were required at each phase, but so also were cognitive and communication skills. Not only did this include the various daily-life activities employing ICT that had been described by previous usage studies; the framework also advocated a broader understanding of the critical components of ICT literacy. This would stimulate a deeper transformation in the skills and knowledge that must be acquired. Furthermore, the framework was based on the assumption that ICT literacy is best achieved through integrated learning. Numerous researchers were beginning to agree: Kozma (2003), Jewitt (2003), and others (e.g., Ridgway and McCusker 2003; Quellmalz and Kozma 2003) called for rethinking both what digital literacy entails and how technology could better contribute to its assessment. In other words, stand-alone, ICT-focused curricula in information technology courses were not advocated. Rather, ICT literacy skills were described as needing to be integrated appropriately into curricula in subject-matter areas. At the same time, the instructors in these courses would need to be able to address IT and technical skills, or have support materials available, in order to ensure improved ICT literacy.

Transition to ICT as a Key Educational Practice

The computer or other digital device is one important means by which students engage in key educational practices to form, consolidate, elaborate and communicate domain knowledge, whether during instruction or during assessment (Ainley et al. 2014; Fraillon 2014). In this way, information and communication technologies have rapidly become for schools not so much an independent skill as an embedded one – a vehicle used by students to express and engage their discipline-specific knowledge, understanding, and skills in many schooling areas.

One way of approaching school use of digital literacy is to consider it a practice, or a way of working through new tools. Friedman (2007) described such practices as a major shift toward technology that educators need to address. He discussed how it may be counter-productive to ask students to power down when they enter the school doors (as is the case in many schools, where technologies such as cell phones are seen primarily as distractions from the “real” work). Rather, students should actively engage in digital literacy practices in formal learning, including using the tools, networks and bodies of expertise available to them virtually. This both underscores developing ICT knowledge and skills as an important practice in schools, and allows educators to teach and model appropriate use while supporting subject-matter learning.

One key assessment development marking this evolution was the emergence of subject-matter-specific technology-enhanced assessments (TEAs), more commonly referred to at the time as “computerized tests”. Of course, migration from paper-and-pencil to the computer environment was expected, given the greater ease of assessment delivery and data collection (Scalise and Gifford 2006; Wilson et al. 2012; Wilson and Scalise 2011). However, this movement also acknowledged the expectation of at least some familiarity with digital literacy practices in subject-matter areas. This development both enhances the importance of ICT literacy and promotes the idea of ICT literacy as a broad set of skills that underlie success in other substantive areas such as traditional school subjects and (of course) other twenty-first century skills. Some examples of prominent efforts along these lines are the OECD PISA programs of Digital Reading assessments (OECD 2011, 2013a) and Collaborative Problem Solving assessments (OECD 2013b), the IEA Progress in International Reading Literacy Study (PIRLS) in 2016 (Mullis and Martin 2013), and the U.S. National Assessment of Educational Progress technology-based startup pilot administrations in mathematics, reading and science (US Department of Education 2010).

Of course, as the integration of ICT into subject-matter areas continued, along with the move to extend the ICT frontier, the need to think of student knowledge as encompassing a developing span of skills grew. As foreshadowed in the IILP framework, it was no longer enough to think of digital skills simply as use, or even as present or absent. The view of ICT literacy as a laundry list of traits that could be checked off as mastered or not, used or not used, was showing serious shortcomings. Both the degree and the type of knowledge present or absent in any given area of ICT were growing more important to understand. Research revealed that students might know how to log in to an online site but not how to navigate effectively or make strategic and creative use of the resources present. It became clear that it mattered significantly whether students had advanced skills, or only novice skills with some school-specific tools, such as spreadsheets and simulators. In particular, degree and type of knowledge tended to interact with the specific context.

Thus, the move to technology integration and the need for subject-matter specificity of skills brought the need for a developmental conception of student understanding to the fore. What does it look like to be proficient and to grow more proficient? What are detailed markers of proficiency, and how do these markers change as students grow in their skills? A key idea to emerge in these new efforts was collaboration as an aspect of digital literacy. The benefits of such collaboration included the contributions of student peer-to-peer engagement (Erstad 2006; Loader 2007) as well as expectations of being able to collaborate digitally.

Introduction of the Explicit Progress Variable

The next transition for ICT Literacy was expressed as a movement towards progress variables, which are essential tools for the design of curriculum and assessments. The “progress” view reflects the need to understand the initial ICT knowledge likely to emerge, followed by a developing picture of mastery. Students are acknowledged as not one-size-fits-all in ICT literacy, but as moving toward increasing competency in their virtual skills, knowledge, awareness, and use.

An ICT Literacy framework was developed by the Australian Council for Educational Research (ACER) and released in a national report (MCEETYA 2005). In this framework, ICT Literacy was defined as:

The ability of individuals to use ICT appropriately to access, manage, integrate and evaluate information, develop new understandings, and communicate with others in order to participate effectively in society. (MCEETYA 2005, p. xiii)

The MCEETYA/ACER Framework includes six processes, which are similar to the five components of the IILP Framework (Fig. 11.4) – three remain essentially the same, ‘Integrate’ and ‘Create’ are subsumed into a new process called ‘Creating,’ and two new ones are added, ‘Communicating’ and ‘Using ICT Appropriately.’ These processes were then examined both substantively and empirically to ascertain a deeper dimensional structure, with the result that they were combined into three strands: Working with Information, Creating and Sharing Information, and Using ICT Responsibly. Moreover, these strands were each seen as instances of the concept of a “progress variable”, explicitly acknowledging the importance of the developing view of proficiency and according it measurable characteristics (Masters et al. 1990):

Any assessment is underpinned by a conception of progress in the area being assessed. This assessment of ICT literacy was based on a hierarchy of what students typically know and can do. It was articulated in a progress map described in terms of levels of increasing complexity and sophistication in using ICT. For convenience, students’ skills and understandings were described in bands of proficiency. Each band described skills and understandings that are progressively more demanding. The progress map is a generalised developmental sequence that enables information on the full range of student performance to be collected and reported. (ACARA 2012, p. 8–9)

Although the progress variables for these three strands were also developed, the eventual use of the progress variable concept in the national tests was as a single progress variable, as shown in Fig. 11.5, which presents the levels of the MCEETYA/ACER Framework. Note that the “Levels” in Fig. 11.5 correspond to the “levels” and “bands” in the quotation above.

Fig. 11.5 The levels of the MCEETYA/ACER framework

Social-Networking Learning Progression Perspective

In this subsection, we discuss the impact of a “social-networking” perspective on ICT – the critical need for building the power of virtual skills through proficiency with networks of people, information, tools, and resources. Here, we offer a new framework for assessing student ICT learning, based on a learning progression point of view.

As mentioned above, the ATC21S project was initiated to develop new assessments in the area of twenty-first century skills, based on the idea that new assessments could lead the way to these new subjects. Using the BEAR Assessment System approach (Wilson 2004; Wilson et al. 2012), the project developed a demonstration ICT assessment. The ATC21S effort yielded a synergy of both schools of thought: collaboration and strategic solution, creation and effective application. The ATC21S methodology group noted that, in order to achieve a working hypothesis of such a complex domain, one approach is to describe “dimensions of progression,” or theoretical maps of intended constructs, in terms of depth, breadth, and how the skills change as students mature (Wilson et al. 2012). For this, the ATC21S project set up a panel of ICT experts, who turned to the research literature to inform expanded definitions of digital literacy.

Studies and research findings tapped into by the ATC21S panel of ICT experts included the areas of augmented social cognition (Chi et al. 2008), applied cognition (Rogers et al. 2007), team cognition (Cooke et al. 2007), social participation (Dhar and Olson 1989), cognitive models of human-information interaction (Pirolli 2007), technological support for work group collaboration (Lampe et al. 2010; Pirolli et al. 2010), theories of measurement for modeling individual and collective cognition in social systems (Pirolli and Wilson 1998), and topics in semantic representation (Griffiths and Steyvers 2007).

For instance, research in augmented social cognition (Chi et al. 2008) describes how the ability of a group of people to remember, think and reason together emerges. It explores how people augment their speed and capacity to acquire, produce, communicate and use knowledge, and to advance collective and individual intelligence in socially mediated environments. It is expected that augmented digital and virtual settings will be increasingly common for students to navigate in twenty-first century skills learning.

The ATC21S panel of experts then developed definitions in these areas. Here, the goal was formulation of hypotheses concerning the nature and characteristics of the developmental learning continua associated with relevant skills. Such developmental progressions, if validated, could help define the skills in such a way that they could effectively be measured, and ultimately mapped to curriculum and instruction.

Consistent with the thinking that some beliefs about the current practice of schooling are outmoded in the global working environment, the expert panel described how definitions of ICT literacy are changing. Recent workshops on a National Initiative for Social Participation (NISP), funded by the U.S. National Science Foundation (Pirolli et al. 2010), identified the need for an educational focus on learning in networks (or technology-mediated social participation). A report by the subgroup on educational priorities (Lampe et al. 2010) recognized that learners fall into multiple categories and suggested curricular goals for K-12 and higher education. These goals included skills for information access and literacy (as a consumer), increasingly sophisticated participation (games, forums), increasingly sophisticated ability to develop social capital (e.g., finding opportunities, gaining support, organizing others), and “computational thinking” about social-computational functions and services, such as using bots to match people and tasks or using crowdsourcing to solve problems.

These reports from the NISP, along with research from other agencies and operational examples around the world, informed the ATC21S framework development efforts. These included the National Assessment Program Information and Communications Technologies in Australia (ACARA 2012) mentioned above, Singapore’s ICT Master Plans (Park 2011), the ISTE standards from the U.S. (http://www.iste.org/standards), international efforts of OECD with the Programme for the International Assessment of Adult Competencies (PIAAC, http://www.oecd.org/site/piaac/), and early planning on the IEA International Computer and Information Literacy Study (ICILS, http://www.iea.nl/icils).

These efforts identified many important and worthy ICT literacy goals for students. Clearly established in frameworks worldwide were individual consumer skills – using information and tools available through technology, often on a Web 1.0 model of repositories that could be accessed over the Internet by students. Emerging trends were additionally seen around a variety of producer skills, in which students needed to craft, create, express, post and manage digital assets in new ways, due to the emergence of Web 2.0 technologies. Finally, it was noted that, as described in the NISP documents (e.g., Pirolli et al. 2010), the field was beginning to recognize and acknowledge the importance to education of networks, requiring both social capital skills of students and the ability to draw on the intellectual capital of groups and teams. This included Web 3.0 skills of “semantics,” or meaning-making through technology, with such tools as analytics, effective use and evaluation of ratings, crowdsourcing, peer evaluation, tagging, and the ability to judge credibility and viability of sources.

ATC21S “Learning in Digital Networks” ICT Literacy Framework and Assessments

To make progress on this goal, the expert panel challenged itself to define, for each of the four competencies described below, what having “more” and “less” of the competency would look like, for students aged 11, 13 and 15. In other words, as one expert noted, “When someone gets better at it, what are they getting better at?” This might also be described as what students will know and be able to do, as well as how the field of education will recognize the ranges of skills and abilities likely to be seen when the competencies are assessed and taught (Wilson and Scalise 2013).

For ATC21S the focus of ICT Literacy was on learning in networks, seen as being made up of four strands:

  • Functioning as a consumer in networks

  • Functioning as a producer in networks

  • Participating in the development of social capital through networks

  • Participating in intellectual capital (i.e., collective intelligence) in networks.

The four strands are seen as interacting, as parallel developments that are interconnected.

First, functioning as a Consumer in Networks (CiN) involves obtaining, managing and utilizing information and knowledge from shared digital resources and experts in order to benefit private and professional lives. It involves questions such as:

  • Will a user be able to ascertain how to perform tasks (e.g., by exploration of the interface) without explicit instruction?

  • How efficiently does an experienced user use a PDA or other mobile device to find answers to a question?

  • What arrangement of information on a display yields more effective visual search?

  • How difficult will it be for a user to find information on a website?

Second, functioning as a Producer in Networks (PiN) involves creating, developing, organizing and re-organizing information/knowledge in order to contribute to shared digital resources.

Third, developing and sustaining Social Capital through Networks (SCN) involves using, developing, moderating, leading and brokering the connectivities within and between individuals and social groups in order to marshal collaborative action, build communities, maintain an awareness of opportunities and integrate diverse perspectives at community, societal and global levels.

Fourth, developing and sustaining Intellectual Capital through Networks (ICN) involves understanding how tools, media and social networks operate and using appropriate techniques through these resources to build collective intelligence and integrate new insights into personal understandings.

In Tables 11.1, 11.2, 11.3, and 11.4, levels of these four strands are described as hypothesized construct maps showing an ordering of skills or competencies involved in each. At the lowest levels of each are the competencies that one would expect to see exhibited by a novice or beginner. At the top of each table are the competencies that one would expect to see exhibited by an experienced person – someone who would be considered very highly literate in ICT.

Table 11.1 Functioning as a consumer in networks (CiN)
Table 11.2 Functioning as a producer in networks (PiN)
Table 11.3 Developing social capital through networks (SCN)
Table 11.4 Developing intellectual capital through networks (ICN)

These construct maps are hierarchical in the sense that a person who would normally exhibit competencies at a higher level would also be expected to be able to exhibit the competencies at lower levels of the hierarchy. The maps are also probabilistic in the sense that they represent different probabilities that a given competence would be expected to be exhibited in a particular context, rather than certainties that the competence would always be exhibited.

The levels and assessments in the ATC21S Learning in Digital Networks framework were developed using the BEAR Assessment System (BAS) approach (Wilson 2004), which takes as its first step the delineation of qualitatively different levels of performance, just as for the progress variable described above. The levels in each strand follow a similar valuing of:

  • Awareness and basic use of tools

  • Followed by more complex application directly relevant to teaching and learning

  • With evaluative and judgmental skills emerging as experience and knowledge are gained

  • Moving to leadership and ability to manage and create new approaches.

These levels within the strands may be seen as “staggered” (Fig. 11.6) in that they have not been positioned on the same fixed scale for each strand. We see them as strands of the same broad construct – ICT Literacy – but the lower levels of one strand may be equivalent to the middle or even higher levels of other strands. It should also be noted that these construct maps were developed to encompass the full range of competencies within each strand, rather than only the range one might expect to be exhibited by school students at middle and secondary levels. Targeting assessments to match what students can do is an empirical question, to be resolved through consultations with teachers, cognitive laboratories with students, and the results of pilot and field studies.

Fig. 11.6 The four strands of ICT Literacy, represented as a profile of four staggered progress variables

The BEAR Center at UC Berkeley developed three scenarios in which to place tasks and questions that could be used as items to indicate where a student might be placed along each of the four strands. Each scenario was designed to address more than one strand, but the strand areas were represented with different emphases across the scenarios. Where possible, the scenarios took advantage of existing web-based tools for instructional development. Just one of these is briefly described below (Wilson and Scalise 2015 provide scenario information, including the tasks and scoring associated with the scenarios).

In the Webspiration demonstration task, framed as part of a poetry work unit, students of ages 11–15 read and analyze well-known poems. Figure 11.7 shows a screen from the computer module, to give a feel for how the scenario “looks” onscreen. In a typical school context, we might imagine that the teacher has noticed that his or her students are having difficulty articulating the moods and meanings of some of the poems – in traditional teacher-centered instruction regarding literature, the student role tends to be passive. Often, teachers find that students are not spontaneous in their responses to the poems, but may tend to wait to hear what the teacher has to say and then agree with what is said. To encourage students to formulate their own ideas on the poems, the ATC21S demonstration task uses a collaborative graphic organizer through the Webspiration online tool. The teacher directs the students to use Webspiration to create an idea map collaboratively using the graphic organizer tools, and to analyze each poem they read. Students submit their own ideas and/or build on classmates’ thoughts. For example, a fragment of the students’ chat while they were working on the graphic organizer was:

Student 1: “it was my idea”

Student 2: “where is?”

Student 1: “in the middle.”

Such use of networking tools and styles of communication by students is also hypothesized to occur at the PiN2 level (“Functional builder”) of the Producer in Networks strand shown in Table 11.2. Here students can be seen, as stated in the scoring rubric, “using networking tools and styles for communication among people,” in the process of developing creative and expressive content artifacts – in this case, a collaborative graphic organizer for the analysis of a selected piece of literature (a poem). As the next step, students are asked to upload their work and the chat log.

Fig. 11.7 A sample page from the Webspiration scenario

Within the Webspiration task, students were asked to create a one-minute audio commentary for the poem they have found online, and also to explain how they created the audio. Note that students who are successful on this task are hypothesized to be at the highest (“Creative Producer”) level of the Producer in Social Networks strand. The screen for this task is shown in Fig. 11.8.

Fig. 11.8 Webspiration tool: “Time to Create!” task

Empirical Study

We selected two of the three scenarios for validation studies with middle-school students. These were (a) the science/math Arctic Trek collaboration contest and (b) the Webspiration shared literature analysis task. These were identified by participating countries as the most appropriate to study at the time, because they were more closely aligned with the school curricula in the participating countries, which sometimes employed mathematics simulations and online scientific quests, as well as graphical and drawing tools for student use, but which infrequently used anything like cross-country chat tools. The third task (the Second Language Chat) was seen by participating countries, teachers and schools as a forward-looking and interesting scenario, but more remote from the adoption curve of school curricula.

Each of the two scenarios was presented in three forms, for 11, 13 and 15 year-olds respectively, with a subset of common items across the three forms. For a sample of 103 students in our first field test (i.e., those for whom we were able to match their login IDs for two scenarios), assessment results from the two scenarios within the overall ICT literacy domain were analyzed using a four-dimensional item response model with age groups as manifest regressors. This is a small sample for multidimensional analysis, but we see it as worthwhile to report, given the novel conceptualization of the assessments. Note that the collaboration took place within pairs or teams of students and did not use computer avatars or other pre-programmed forms of collaboration.

Before beginning the 45-min tasks, students were provided with about five minutes of directions by the assessment administrator. Students were told that they would engage in collaborative activities online and would be provided with partners. They were told that the activities were for a math/science task, for which teams were composed of four students, or for an English language literacy task, for which pairs (dyads) were assigned. In both cases, students were informed that the activities were intended to provide information on how students work with the technology tools, processes, and partners provided. Students were informed they would have 45 min for one task, and that timing information would appear on the screen. Furthermore, students were encouraged to tap into assistance from their partners through the chat tools provided, but were told that they could not collaborate face-to-face. They were also required to restrict all collaboration and conversation to the online tools provided within the assessment environment, which shares some screen information among team members in various tools in the collaborative suite, but does not include full screen sharing. The assessment administrator remained in the room during the assessment, to answer questions and to monitor that instructions were followed. Team members and pairs were randomly assigned within classrooms, and students were not informed in advance of the identities of their partner(s); once the task began, however, students could communicate their identity and any other information they wished to share, through tools such as chat windows.

An initial research question was whether the planned four dimensions of ICT literacy would be displayed in the empirical results. That is, one can ask whether these four constructs have been successfully distinguished by the items. A typical way to test this is to ask whether a single composite construct (i.e., a unidimensional construct) could explain the results just as well as a four-dimensional construct. We fitted a single composite item response model (see Adams et al. (1997) for model equations, estimation algorithms, etc.) using ConQuest software (see Adams et al. (2012) for computational considerations, etc.) and compared the results to those for the hypothesized four-dimensional model. The overall fit statistics for the models are presented in Table 11.5.

Table 11.5 Deviance and number of parameters for the two models

Since the unidimensional model is nested within the multidimensional model, it is appropriate to compare model fit using the difference in deviance (G2), which is approximately distributed as a chi-square statistic with the difference in the number of estimated parameters as degrees of freedom. The difference in deviance between the two models is 51 (3,419 − 3,368) with 19 degrees of freedom (22 − 3); thus the multidimensional model fits this dataset significantly better than a unidimensional composite model, at the α = 0.001 statistical significance level.
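The arithmetic of this likelihood-ratio test is easy to verify; the following minimal sketch uses only the deviances and parameter counts reported in Table 11.5:

```python
from scipy.stats import chi2

# Likelihood-ratio test of the nested unidimensional model against the
# four-dimensional model, using the deviances and parameter counts
# reported in Table 11.5.
g2 = 3419 - 3368           # difference in deviance: 51
df = 22 - 3                # difference in number of estimated parameters: 19
p_value = chi2.sf(g2, df)  # upper-tail chi-square probability
print(g2, df, p_value)     # p is well below 0.001, favouring the 4-D model
```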

Table 11.6 shows the variances and disattenuated correlations obtained from the multidimensional model. The highest correlation is between the CiN and ICN dimensions (0.97) and the lowest is between the PiN and SCN dimensions (0.90). These are high correlations, and bring into question the need for a multidimensional psychometric model at this point. However, high correlations among achievement dimensions are common – compare, for instance, the correlation of 0.86 between Science and Reading for the 2000 PISA tests (Kirsch et al. 2002) – and few educators or educational researchers would consider those two dimensions to be substantively the same. Hence we continue to use the multidimensional results, as we see that there are educational differences among the dimensions, even though students perform similarly across all four.
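As a note on terminology: the correlations in Table 11.6 come directly from the latent covariance matrix of the multidimensional model, but the classical Spearman correction for attenuation illustrates why such latent correlations run higher than raw correlations between strand scores. A minimal sketch, with all numbers invented for illustration:

```python
import math

# Classical disattenuation (Spearman): an observed correlation between two
# strand scores is attenuated by measurement error; dividing by the square
# root of the product of the two reliabilities estimates the latent
# correlation. The numbers below are invented, not taken from the study.
def disattenuate(r_obs: float, rel_x: float, rel_y: float) -> float:
    return r_obs / math.sqrt(rel_x * rel_y)

# An observed correlation of 0.78 between two strands, each with
# reliability near 0.80 (cf. Table 11.7), implies a latent correlation
# in the region of the values reported in Table 11.6.
print(round(disattenuate(0.78, 0.81, 0.83), 2))  # -> 0.95
```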

Table 11.6 Variances and correlations from the multidimensional model

One way of checking whether the assumptions and requirements of the model are met is to examine the weighted mean square fit statistic estimated for each item. Item fit can be seen as a measure of the discrepancy between the observed and the theoretical item characteristic curves (Wu and Adams 2013). ConQuest estimates the residual-based weighted fit statistic, also called infit, by taking the ratio of the observed to the expected residual variance, weighting down respondents whose ability estimates are far from the item's location. Ideally, infit values are expected to be close to 1.0. Values of less than one imply that the observed variance is less than the expected variance, while values of more than one imply that the observed variance is more than the expected variance. It is a common convention to use 3/4 (0.75) and 4/3 (1.33) as acceptable lower and upper bounds (Adams and Khoo 1996). Three of the 44 items fell outside this range, all below 0.75 (at 0.68, 0.69, and 0.70). This is close to what might be expected by chance.
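For readers unfamiliar with the statistic, a minimal sketch of the infit computation for dichotomous Rasch items follows; the abilities, difficulties, and responses are simulated rather than taken from the study:

```python
import numpy as np

# Sketch of the weighted mean-square (infit) statistic for dichotomous
# Rasch items. All values are simulated; this is not the study's data.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=200)        # person abilities
delta = np.linspace(-2.0, 2.0, 10)            # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :])))
x = (rng.random(p.shape) < p).astype(float)   # simulated 0/1 responses

# Infit: squared residuals weighted by item information p(1 - p),
# aggregated over persons for each item.
w = p * (1.0 - p)
infit = ((x - p) ** 2).sum(axis=0) / w.sum(axis=0)
flagged = (infit < 0.75) | (infit > 1.33)     # conventional bounds
print(np.round(infit, 2))
print(flagged)
```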

As shown in Table 11.7, the reliability estimates from the multidimensional approach using responses to both scenarios are all higher than 0.80.

Table 11.7 Reliabilities (EAP) from the multidimensional approach
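The EAP (expected a posteriori) reliabilities in Table 11.7 compare the spread of the posterior mean estimates with the average posterior uncertainty. A minimal sketch of that computation, with invented stand-ins for the person estimates that software such as ConQuest reports:

```python
import numpy as np

# EAP reliability: variance of the posterior means (EAP estimates) divided
# by that variance plus the mean posterior variance. The arrays below are
# invented stand-ins for one dimension's person estimates.
rng = np.random.default_rng(1)
eap = rng.normal(0.0, 0.9, size=103)   # posterior mean per student
post_var = np.full(103, 0.18)          # posterior variance per student

reliability = eap.var() / (eap.var() + post_var.mean())
print(round(reliability, 2))           # around 0.8 with these stand-ins
```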

Figure 11.9 shows one of the “Wright maps” (Wilson 2004) obtained from the four-dimensional model, specifically for Consumer in Social Networks. Items are vertically ordered with respect to their difficulties, and persons (cases) are vertically ordered with respect to their abilities. Each “X” on the left-hand side represents a small number of students, and the items are shown on the right-hand side using their item numbers. The locations are interpreted as follows, for dichotomous items:

  (a) When a student's X matches an item location, the probability of that student succeeding on that item is expected to be 0.50;

  (b) when a student's X is above the item, then the probability is above 0.50 (and vice-versa); and

  (c) these probabilities are governed by a logistic distribution (see Wilson 2004 for a discussion, and the sketch below).
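These statements all follow from the logistic form of the Rasch model; a minimal sketch (the person and item locations are invented):

```python
import math

# The logistic (Rasch) response function behind the Wright-map reading
# rules above: success probability depends only on the gap between the
# person location theta and the item location delta, on the logit scale.
def p_success(theta: float, delta: float) -> float:
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

print(p_success(0.0, 0.0))   # matched locations -> 0.50
print(p_success(1.0, 0.0))   # person one logit above the item -> ~0.73
print(p_success(-1.0, 0.0))  # person one logit below the item -> ~0.27
```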

Fig. 11.9 Wright map for the consumer in social networks strand with bands representing each level

Where the items are polytomous, the labeling is more complex; for example, in Fig. 11.9 note that Item 5 is represented by two labels: 5.1 and 5.2. The former is used to indicate the threshold between category 0 and categories 1 and 2 (combined); the latter is used to represent the threshold between categories 0 and 1 (combined) and category 2. The interpretation of the probability is equivalent to that for a dichotomous item: that is, when a student’s X matches the 5.1 location, the probability of that student succeeding at levels 1 or 2 on that item is expected to be 0.50; and similarly, when a student’s X matches the 5.2 location, the probability of that student succeeding at only level 2 on that item is expected to be 0.50.
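A sketch of how such thresholds behave under a partial credit model may help; the step difficulties below are invented, not the estimates for Item 5:

```python
import numpy as np
from scipy.optimize import brentq

# Partial credit model for a three-category item (scores 0, 1, 2), with
# invented step difficulties d1 and d2 on the logit scale.
d1, d2 = -0.5, 0.8

def category_probs(theta: float) -> np.ndarray:
    """P(X = 0), P(X = 1), P(X = 2) at ability theta."""
    logits = np.array([0.0, theta - d1, 2.0 * theta - d1 - d2])
    expd = np.exp(logits - logits.max())      # stabilised exponentials
    return expd / expd.sum()

def p_at_least(theta: float, k: int) -> float:
    """P(X >= k): the curve whose 0.5 crossing defines threshold k."""
    return float(category_probs(theta)[k:].sum())

# A "5.1"-style threshold: where P(X >= 1) = 0.5;
# a "5.2"-style threshold: where P(X = 2) = 0.5.
t1 = brentq(lambda th: p_at_least(th, 1) - 0.5, -8.0, 8.0)
t2 = brentq(lambda th: p_at_least(th, 2) - 0.5, -8.0, 8.0)
print(round(t1, 2), round(t2, 2))   # t1 < t2: cumulative thresholds are ordered
```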

This map shows whether the items provide good coverage of the range of abilities. Ideally, given a sufficient number of items for each strand, the range of item difficulties would approximately match the range of person abilities. This would mean that there are items approximately matching every level of person ability, as is the case for the Consumer in Social Networks strand. Figure 11.9 also shows the “banding” of the levels for this strand (indicated by the alternating grey and white regions on the graph) – that is, we have carried out a judgmental exercise to locate the approximate transitions among the levels. This is accomplished by analyzing the skills needed for each item (or, for polytomous items, each level of the item), and mapping them back to the levels of the four strands. Note that not all items were useful in setting these bands – it is a continuing exercise to determine which items are best for this banding exercise. From Fig. 11.9, we can see that students in this sample have a range of abilities on this strand that spans all three hypothesized levels, from Emerging to Discriminating Consumer, although there are relatively more students in the lower levels than in the higher levels (an observation that will be repeated for the other strands).

Figure 11.10 shows the banded Wright map for the Producer in Social Networks strand. For this map, we see that the highest level, Discriminating Producer, was not displayed by the items that remained after the item piloting. Hence there are only two levels that remained for inclusion in the Wright map. Also, not many items representing the lowest level – Emerging Producer – survived the piloting process. Thus, an effort needs to be made to develop new items for these two levels.

Fig. 11.10 Wright map for the producer in social networks strand with bands representing each level

We do not display a banded Wright map for the Developer of Social Capital strand, as there were only two items remaining in this strand after items were deleted during development, both of which were aimed at measuring at the third level (Proficient connector) of the construct. Thus, an effort needs to be made to develop new items for all levels of this strand.

Figure 11.11 shows the Wright map for the Participator in Intellectual Capital strand. Although all three levels are represented in the bands for this strand, both the highest and lowest are sparsely represented. Hence an effort should be made to add items at each of these levels. Again, the students are concentrated in the bottom two levels, although some are indeed at the higher levels.

Fig. 11.11 Wright map for the participator in intellectual capital strand with bands representing each level

Summary and Conclusion

Four steps have been used here to illustrate the trajectory of ICT literacy over approximately the last two decades. Simple early measures of computer use have morphed to include the integration of technology across educational areas and brought on a sweeping need for understanding ICT literacy as a developmental progression in student skills and thinking. We have illustrated this new understanding in the shape of (a) a four-dimensional developmental framework, including levels of sophistication within each dimension, (b) examples of online scenarios and items that can be used to assess those dimensions, and (c) empirical results that illustrate how these concepts and materials function in the real world.

Networks are groups or systems of interconnected people and resources. The ability to connect with and strategically access a vast array of people, information, tools and resources has significant and broad impact for learning and student competency. Web development has progressed through important stages, from information repository to social media to semantic environment, carrying the needs of the learner along with it.

Following this thinking, we conclude that the current conceptualization of education is becoming dated, at least in some respects. Schools find their students must apply knowledge across disciplinary barriers, work effectively both alone and together, access and interact with large amounts of information, and make strategic and contextualized use of tools. Yet assessments and their associated frameworks often miss these new trends. We still tend to measure isolated skills, assessing individuals working solo, stripped of augmentations such as information-rich and tool-rich access. How this compares to what we really want to teach and know is not well understood.

A new type of twenty-first century ICT literacy, which we described here, can be focused at least in part on learning and achieving goals in networks. While performance on any one of the four strands is not yet fully explicated, such a framework does offer new conceptualizations of the ICT literacy cognitive space. Initial efforts to address this have been reported above. We have had considerable success, with all four strands showing reliabilities higher than 0.80, and the likelihood-ratio test showing that the four strands are statistically distinguishable. We have also found that three of the strands display content validity according to our criterion of item banding based on our construct maps (although one shows only two levels rather than three, due to a lack of higher-difficulty items surviving the stress-test of the trial), while the fourth, Developer of Social Capital, had too few surviving items to be considered for content-validity testing. The correlations among the four strands are quite high, though no higher than correlations between, say, Mathematics and Science in large-scale testing. Clearly there will need to be more item development undertaken regarding these four strands and the novel item formats that are involved in this assessment. What are the design principles for these new assessment tasks? What are the critical implementation steps that need to be undertaken to ensure success?

Learners who are able to activate modern ICT literacy skills will need to know how to strategically and creatively interact with real people, authentic online tools, complex information, and complex virtual networks. Thus, learning in digital communities calls for also assessing in digital communities: that way we can help all learners thrive among the networks in which they must one day function as full and mature participants.

Educational training for this new approach to ICT literacy will involve ideas and methods as new as those described here. There are well-established approaches for training in ICT literacy, used with current conceptions of ICT literacy and accompanying textbooks, such as Roblyer (2004), that are very much “big-sellers” in the textbook market. Just as the technology changes described above have prompted changes in assessments, we can expect these changes to flow into the training field as well, speeded by changes in the assessments (as was the intention of the technology companies mentioned in the introduction).

These developments have important consequences for assessment in other domains. In both traditional subject-matter areas and other twenty-first century skills, assessments are trending towards TEAs, and this development will make ICT literacy an increasingly important area for students (and all learners, in almost any environment). The traditional importance of reading and writing (and speaking and listening) will be matched by the importance of ICT literacy. Although this implies the need for specifically-focused ICT literacy instruction and assessment as a foundation for all students, it also means that more advanced instruction and assessment in those traditional subject-matter areas, and in twenty-first century skills, will need to include aspects of ICT literacy as integral components of the instructional design and the assessments.

The challenge for ICT literacy assessment is adapting to frequent and deep changes in the technology-enhanced world in which we live, as the world of work (and hence, the world of schooling) is inevitably enveloped by this technological environment. This is not just a challenge in terms of instructional materials and assessment materials, but also a challenge to the relevant professionals involved – teachers, instructional developers and assessment developers. And, in fact, it is a strong challenge also to the researchers, who, although they can stand aside from the tumult of technological change, will need to become experts in each new wave of technology, as those technologies change the educational environment.