Introduction

Computer literacy has been recognized for some time as important for life in modern society (Kozma 2003), and many countries have officially recognized the importance of developing computer literacy through schools. During the first decade of the twenty-first century, policy statements from many national education authorities asserted the importance of school students developing capabilities in information and communication technology (ICT) to support learning in other domains and to provide a foundation for their productive future participation in work and society (Qualifications and Curriculum Authority 2007; E-learning Nordic 2006; Hinostroza et al. 2008; MCEETYA 2008; Sanchez and Salinas 2008; Office of Educational Technology, US Department of Education 2010).

Computer literacy also features as a focus of cross-national organizations. Interest in capabilities related to digital technologies is evident in the assertion by the European Commission that digital literacy is “increasingly becoming an essential life competence and the inability to access or use ICT has effectively become a barrier to social integration and personal development” (European Commission 2008, p. 4). The United Nations Educational, Scientific and Cultural Organization (UNESCO) has canvassed the possibility of a global measure of digital and ICT literacy skills in a paper commissioned for the 2016 Global Education Monitoring Report (ACER 2016). Projects such as the IEA International Computer and Information Literacy Study (Fraillon et al. 2014) and the Assessment and Teaching of 21st Century Skills (Griffin et al. 2012) indicate a quickened interest in defining and assessing appropriate capabilities for modern societies. An assessment of digital reading was included as an international option implemented by 19 countries in the PISA 2009 cycle (OECD 2011).

One view of student use of digital technologies was that students in the modern age were “digital natives” (Prensky 2001) who had grown up to be expert users of these technologies and who had developed “sophisticated knowledge of and skills with information technologies” and even different learning styles from older generations (Bennett et al. 2008, p. 777). Others have challenged this view, pointing to wide variations among young people in their use of ICT, the lack of a clear distinction between young users and older users, and the predominance of relatively mundane usage (Helsper and Eynon 2010; Koutropoulos 2011; Selwyn 2009). Selwyn (2009) characterizes young people’s Internet use as “passive consumption of knowledge rather than the active creation of content” (p. 372). For these reasons, many school systems teach computer literacy explicitly to develop students’ capabilities in that domain (Sturman and Sizmur 2011), sometimes outlining a learning progression (e.g., ACARA 2017), and the assessment of computer literacy has become a component of monitoring student achievement. In many countries, and internationally, assessment programs have been designed to determine the extent to which students are developing appropriate levels of computer or ICT literacy. This chapter draws on the evidence that has been assembled by those assessment programs to address the question: To what extent are students computer literate?

Conceptualizing Computer Literacy

The terms computer literacy, ICT literacy, and digital literacy have slightly different meanings but in practice are used interchangeably (Markauskaite 2006; Siddiq et al. 2016). This chapter refers to computer literacy but recognizes that the concept overlaps with these other terms. Conceptualizations of computer and ICT literacy have been greatly influenced by the work of an international panel convened by the Educational Testing Service (ETS) to develop a framework for ICT literacy (ETS 2002). The panel proposed that “ICT literacy is using digital technology, communications tools, and/or networks to access, manage, integrate, evaluate, and create information in order to function in a knowledge society” (ETS 2002). Since that initial work, conceptualizations have combined aspects of technological expertise with information literacy and extended to include ways in which digital information can be transformed and used to communicate ideas (Catts and Lau 2008). The conceptualizations of computer literacy that have emerged involve both the use of digital tools and the ideas of information literacy. Studies of computer literacy assume that there is an underlying construct that can be measured in different contexts. This assumption is consistent with the view that computer literacy is more than operating hardware and software and includes the information literacy skills associated with accessing and evaluating information as well as the creative competencies of transforming and creating new digital information.

Binkley et al. (2012) synthesized the operational definitions of ICT literacy that had been developed around the knowledge, skills, and attitudes concerned with being able to access and evaluate information (e.g., search, collect, and process), use and manage information (e.g., process information accurately and creatively, manage the flow of information from various sources, understand the reliability and validity of information), and apply technology effectively (e.g., applications and devices). Similarly, the European Commission proposed a digital competence framework (DIGCOMP) based on a review of ICT literacy frameworks (European Commission 2010). It identifies five main competence areas: information management, collaboration, communication and sharing, creation of content, and problem-solving (European Commission Joint Research Center-IPTS 2013; Ferrari 2012). At a more detailed level, DIGCOMP specifies the particular skills that make up these five competence areas and proficiency levels for each competence area.

ICILS adopted the definition of computer and information literacy as referring to “an individual’s ability to use computers to investigate, create and communicate in order to participate effectively at home, at school, in the workplace and in society” (Fraillon et al. 2013, p. 17). It envisaged two strands. The first was a receptive strand involving collecting, managing, and processing information. It incorporated knowing about and understanding computer use (the generic characteristics and functions of computers and the basic technical knowledge and skills that underpin the use of computers), accessing and evaluating information (the processes that enable a person to find, retrieve, and make judgments about the relevance, integrity, and usefulness of computer-based information), and managing information (the capacity to work with computer-based information and to adopt and adapt information-classification and information-organization schemes in order to arrange and store information so that it can be used or reused efficiently). The second strand focused on producing and exchanging information and using computers as tools for thinking, creating, and communicating. It included transforming information (changing how information is presented so that it is clearer for specified purposes), creating information (using computers to design and generate information products), sharing information (understanding how to use computers to communicate and exchange information with others), and using information safely and securely (understanding the legal and ethical issues of computer-based communication).

The definition of ICT literacy adopted in Australia for use in its National Assessment Program was similar to, and preceded, that used in ICILS. It was elaborated through a set of six key processes and a broad description of progress according to three strands (MCEETYA 2005). The six key processes were accessing information, managing information (organizing and storing information), evaluating and making judgments, developing new understandings (e.g., creating information and knowledge), communicating (i.e., exchanging information and creating information products), and using ICT appropriately. The three strands of progress were working with information, creating and sharing information, and using ICT responsibly.

In 2014, the US National Assessment of Educational Progress included an assessment of technology and engineering literacy (NAEP-TEL) in which Information and Communication Technology was one of three content areas, the other two being Technology and Society and Design and Systems (Institute of Education Sciences, National Center for Education Statistics 2012). The Information and Communication Technology area covered “computers and software learning tools, networking systems and protocols, hand-held digital devices, and other technologies for accessing, creating, and communicating information and for facilitating creative expression” (NCES 2016a, p. 8). It identified five subareas of competence: construction and exchange of ideas and solutions, information research, investigation of problems, acknowledgement of ideas and information, and selection and use of digital tools (NCES 2016b).

In Chile, the ICT Skills for Learning Test defines ICT literacy as “the capacity to solve problems of information, communication and knowledge in digital environments” (Claro et al. 2012, p. 1043). Claro et al. (2012) elaborate that this requires the mastery of ICT applications in order to “solve cognitive tasks in a digital environment” but that the skills are not themselves technology-driven in that they reference “information, communication and knowledge tasks in an ICT context” rather than particular software. They also incorporate higher-order thinking processes and relate to continuous learning. ICT literacy is envisaged as involving the use of ICT with regard to information skills (sourcing and processing information), communication skills (effective communication, collaboration, and virtual interactions), and appreciation of ethical issues and social impact (evaluating responsible use and social impact) (Claro et al. 2012).

Assessing Computer Literacy

Over the past two decades, there have been a number of approaches to the assessment of computer literacy, including self-reports, traditional assessments based on multiple-choice and constructed-response items, and performance assessments. Each of these provides a different perspective.

Self-Reports of Computer Literacy

Siddiq et al. (2016) note that there have been numerous examples of self-reports as measures of students’ ICT literacy. They caution that, although these measure self-confidence or self-efficacy, they do not provide sound measures of ICT literacy, and they therefore excluded studies based on self-reports from their review of instruments measuring ICT literacy, noting the low correlations of self-reports with measured competence. Rohatgi et al. (2016), based on analyses of Norwegian data, note that ICT self-efficacy may not be a unidimensional construct and distinguish basic ICT self-efficacy from advanced ICT self-efficacy. Basic ICT self-efficacy was positively related to computer literacy (r = 0.22), whereas advanced ICT self-efficacy was negatively but minimally associated with computer literacy (r = −0.06) (Rohatgi et al. 2016). A similar correlation between ICT self-efficacy and measured computer literacy was reported among students in Years 6 and 10 in Australia (ACARA 2017). There is also evidence from Australia that measures of ICT self-efficacy may not be equivalent for boys and girls: boys were more confident than girls about using ICT, but this confidence was not matched by higher measured computer literacy (ACARA 2017). It has been widely concluded that self-report measures do not provide a basis for inferring the extent to which students are computer literate, even though these measures may be important for understanding student learning in computer-based learning environments (Moos and Azevedo 2009).

Assessments Based on Traditional Item Formats

Siddiq et al. (2016) have provided a comprehensive review of the instruments used to assess students’ ICT literacy. They observe that many of the instruments are directed to lower secondary-school students, involve multiple-choice items, and measure aspects such as searching, retrieving, and evaluating information as well as technical skills. Siddiq et al. (2016, p. 63) identify four item categories in assessments of ICT literacy: multiple-choice items based on static information, multiple-choice items based on dynamic information, items that require the respondent to interact with the test material, and tasks that require responses in an authentic situation. Some assessments of computer literacy even use print formats (Senkbeil et al. 2013). Critiques of approaches using standard item formats have argued that they do not reflect the ways in which people use these technologies in practice and that they do not capture the full range of computer literacy competencies (Siddiq et al. 2016).

Most assessments of computer literacy and related constructs have been computer-based (Siddiq et al. 2016). Computer-based assessments typically take advantage of the medium to provide richer stimulus material (e.g., video and other dynamic graphic material) on which to base items and to enable better targeting of item difficulty to respondent ability (e.g., assigning items based on responses to previous items). However, an essential consideration for computer-based assessments, which is less of an issue for print-based assessments, is ensuring a uniform assessment environment. Computer-based assessments delivered through software and resources loaded on devices such as laptop computers can achieve this, but when materials are located on removable media such as USB drives and accessed by local computers, or accessed through the Internet, uniformity is harder to achieve.
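
To make the idea of targeting item difficulty concrete, the following sketch illustrates, in Python, a very simple adaptive item-selection loop under a Rasch model: each next item is the unused item whose difficulty is closest to a provisional ability estimate. It is a generic illustration with hypothetical items and a deliberately crude ability update, not the algorithm used by any of the assessment programs discussed in this chapter.

```python
import math
import random

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, item_bank, used):
    """Pick the unused item whose difficulty is closest to the current ability estimate."""
    candidates = [i for i in item_bank if i not in used]
    return min(candidates, key=lambda i: abs(item_bank[i] - theta))

def update_theta(theta, correct, step=0.5):
    """Crude provisional update: move the estimate up after a correct response, down otherwise."""
    return theta + step if correct else theta - step

# Hypothetical item bank: item id -> difficulty on the logit scale.
item_bank = {"q1": -1.5, "q2": -0.5, "q3": 0.0, "q4": 0.7, "q5": 1.4}

theta, used = 0.0, set()
true_theta = 0.8  # simulated respondent ability

for _ in range(4):
    item = next_item(theta, item_bank, used)
    used.add(item)
    correct = random.random() < rasch_p(true_theta, item_bank[item])
    theta = update_theta(theta, correct)
    print(f"administered {item} (b={item_bank[item]:+.1f}), "
          f"correct={correct}, provisional theta={theta:+.1f}")
```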

Performance Assessment

Many approaches to assessing computer literacy incorporate performance assessment in which students are required to perform tasks on a computer rather than simply answer questions about computers and computing (Aesaert et al. 2014; Ainley et al. 2016; Claro et al. 2012; NCES 2016b; Van Deursen and Van Dijk 2016). Performance assessments provide opportunities to collect different types of responses (e.g., information products) and enable interaction between the respondent and an assessment system (e.g., dialogue between a respondent and the system). Performance assessments have been used in a number of learning domains and require a demonstration of mastery by performing a task or producing an information product that is subsequently evaluated using an assessment rubric (Madaus and O’Dwyer 1999). There are also examples of recording direct observations of people completing a task (Katz 2007).

Aesaert et al. (2014) developed a performance-based direct measure of ICT competence with a focus on primary-school students’ competence in digital information processing and communication. They developed and trialed 56 items around a matrix of 19 higher-order competences and 15 technical skills, and the data from the trial were examined to investigate dimensionality, fit, local item dependence, and monotonicity. The items were “simulation-based assessment tasks” which reflected “real-life information searching, processing and communication activities” (p. 171). The tasks were embedded in a narrative theme of “a journey through time” (p. 171). Twenty-seven items that required retrieving and processing digital information as well as communication with a computer were included in the final instrument.
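
To illustrate one of the psychometric checks mentioned above, the sketch below performs a simple empirical monotonicity check on simulated dichotomous item data: the proportion of correct responses to an item should not decrease across groups formed on the rest-score (the total score excluding that item). This is an illustrative sketch only and is not the analysis conducted by Aesaert et al. (2014).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate dichotomous responses of 500 test-takers to 10 items under a Rasch model.
n_persons, n_items = 500, 10
theta = rng.normal(0, 1, n_persons)            # person abilities
b = np.linspace(-1.5, 1.5, n_items)            # item difficulties
p = 1 / (1 + np.exp(-(theta[:, None] - b)))    # response probabilities
X = (rng.random((n_persons, n_items)) < p).astype(int)

def monotonicity_check(X, item, n_groups=4):
    """Proportion correct on `item` within rest-score groups (should be non-decreasing)."""
    rest = X.sum(axis=1) - X[:, item]
    order = np.argsort(rest)                   # sort persons by rest-score
    groups = np.array_split(order, n_groups)   # roughly equal-sized score groups
    return [X[g, item].mean() for g in groups]

for item in range(3):
    props = monotonicity_check(X, item)
    print(f"item {item}: proportion correct by rest-score group "
          + ", ".join(f"{v:.2f}" for v in props))
```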

Van Deursen and Van Dijk (2016) used a measure of Internet skills based on the performance of 21 discrete tasks in a survey of just over 1000 adults. The tasks were administered as part of an online survey that also included measures of traditional literacy and Internet usage. The reliability of the measures of formal, information, and strategic Internet skills was high (α > 0.80), but that of operational Internet skills was a little lower than ideal (α = 0.72). The authors concluded that the level of operational skills, as well as traditional literacy, had a positive effect on formal, information, and strategic Internet skills. However, traditional literacy was not related to operational Internet skills.
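
Reliability coefficients of this kind (Cronbach's α) summarize the internal consistency of a set of task scores. A minimal sketch of the computation, using simulated data rather than the authors' data, is shown below.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_persons x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated scores for 200 respondents on 21 dichotomous tasks driven by a common ability.
rng = np.random.default_rng(1)
ability = rng.normal(size=(200, 1))
scores = (rng.normal(size=(200, 21)) + ability > 0).astype(int)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```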

ICILS employed four assessment modules, of which each student was randomly assigned two. Each module involved a number of discrete tasks and one large task (Fraillon et al. 2014, p. 18). One of the modules required students to create an advertising poster for an after-school exercise program (available for viewing at http://www.iea.nl/icils-past-cycles). To do this, they needed to set up an online collaborative workspace through which to share information, which was then selected and adapted for the purpose. A second module required the creation of a webpage about a school band competition. Students needed to plan the webpage and use a simple website builder. The third module asked students to prepare a presentation about breathing for 8- or 9-year-old students. It involved file management and the collection and evaluation of relevant information. The fourth module required students to produce an information sheet about a school trip for their peers. They needed to use online database tools to assemble relevant information, and they could create a map using an online mapping tool. Each large task resulted in an information product that was assessed by scorers against a set of criteria (each criterion with its own set of scores) relating to the properties of the task. In ICILS 2013, large tasks constituted around 60% of the assessment.

In Chile, the ICT Skills for Learning Test also made use of purpose-designed software that simulated ICT applications, with tasks embedded in realistic contexts (school and work) (Claro et al. 2012). The software included commonly used applications (email, Internet browser, text processor, spreadsheet, and presentation software). In addition, there was a chat window through which a virtual conversation among three participants was simulated. Each student was assigned tasks to perform. The tasks were incorporated in a story about ecology, and in each task the student was faced with different situations designed to test ICT skills. The first task required the student to participate in a campaign for the protection of species under threat of extinction. The second task required the student to prepare a working document about global warming. The third task required students to recognize risky behaviors in virtual environments and to evaluate the impact of the Internet on different dimensions of life. Students were expected to use scientific evidence presented in graphs, text, and images and to analyze and contextualize that evidence (Claro et al. 2012, p. 1045).

The NAEP-TEL assessment in the United States was also computer-based and included interactive, “multimedia scenario-based tasks (SBTs)” (NCES 2016b). Students were required to solve technology and engineering problems in real-world contexts that were designed to allow students to demonstrate the range of knowledge and skills detailed in the three content areas (Technology and Society, Design and Systems, and Information and Communication Technology) across three practices (Understanding Technological Principles, Developing Solutions and Achieving Goals, and Communicating and Collaborating). Some tasks related to one content area and practice, while others were concerned with more than one content area or practice. In one SBT, a local fruit juice company had offered to build a new recreation center for teenagers. Students were required to create content for a website promoting the benefits of building the recreation center (NCES 2016a). Another involved a group project examining the advantages and disadvantages of a cold medication. Students searched a relevant website and used evidence from that site to develop an evaluation. A third involved planning the building of a tree house using information from a website about tree houses and searching for information about the strength of different types of tree. The complete TEL assessment included 20 SBTs and 97 discrete questions within a rotated design so that individual students responded to only a portion of the entire assessment.

Computer Literacy of Students

In order to estimate the extent to which students are computer literate, it is necessary to focus on those studies that involve representative samples of defined student populations in one or more education systems. In practice, most of these studies focus on lower secondary schools (Years 7–10).

Perspectives from ICILS

In 2013, ICILS investigated the computer literacy of Year 8 students and established four described levels of its computer and information literacy (CIL) scale. In addition, there were a small number of items that had difficulties below Level 1 which represented the most basic skills and did not warrant description (Fraillon et al. 2014). The levels are outlined in Table 1 together with the percentages of students in the ICILS countries whose computer literacy score would place them in each of the levels.

Table 1 Percentages of Year 8 students at each proficiency level in ICILS 2013

In addition to recording the percentage distribution across levels for all 14 countries, Table 1 also shows the distribution for 12 countries, with Thailand and Turkey omitted because their distributions were very different from those of the other 12 countries. The discussion focuses on the 12-country distribution.

The ICILS scale is hierarchical in the sense that computer literacy becomes more sophisticated as student achievement progresses up the scale. Table 1 records the percentages of students “in each level” as well as the percentages “up to and in each level” (in other words, the cumulative percentage). Students working below Level 1 are unlikely to be able to create digital information products.

At Level 1, students show familiarity with basic software commands enabling them to access files, complete routine text and layout editing, recognize basic software conventions, and recognize the potential for misuse of computers by unauthorized users. At Level 2 (which could be considered computer literate – see later) and beyond, students are familiar with and can use a broader range of software commands. They can format text and images in information products. At Level 3, they can edit and create information products with attention to layout and design, and at Level 4 they show an awareness of audience and purpose when formatting information products that they create including using layout and formatting features appropriately. At Level 2, students can use computers as information resources to locate information in digital sources as well as select and add content to information products. At Level 3, students independently search for information and select relevant information from digital resources being aware that information may be biased, inaccurate, or unreliable. Key factors differentiating Level 4 from Level 3 are the extent to which students can work autonomously with a critical perspective when accessing information and the precision with which they search for and locate information.

The data in Table 1 show that across the 12 countries, just over two thirds (69%) of Year 8 students recorded scores in Levels 2, 3, or 4. There was some variation across these countries with the eight middle countries in the distribution having percentages ranging from 64% to 75%. Across the 12 countries, just over one quarter (26%) of Year 8 students recorded scores that corresponded to a Level 3 or Level 4 attainment.

Links to the Australian National Assessment of ICT Literacy

ICILS 2013 was administered in Australia to a sample of 5326 students from 320 schools (de Bortoli et al. 2014). In addition to the ICILS assessment, students also completed a module from the National Assessment Program Information and Communication Technology Literacy (NAP-ICTL) assessment (ACARA 2012). The purpose was to benchmark the NAP-ICTL scale against the international ICILS scale (ACARA 2012). The benchmarking found that ICILS Level 1 corresponded to NAP-ICTL Level 2, ICILS Level 2 to NAP-ICTL Level 3, ICILS Level 3 to NAP-ICTL Level 4, and ICILS Level 4 to NAP-ICTL Levels 5 and 6.
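
The benchmarking procedure itself is not detailed here; as a hedged illustration of one common way of linking two score scales, the sketch below applies mean-sigma linear equating to hypothetical scores of the same students on both instruments. The values and the method are illustrative assumptions, not the actual ICILS/NAP-ICTL linking.

```python
import numpy as np

def mean_sigma_link(scores_a, scores_b):
    """Linear transformation mapping scale A onto scale B (mean-sigma method)."""
    slope = np.std(scores_b, ddof=1) / np.std(scores_a, ddof=1)
    intercept = np.mean(scores_b) - slope * np.mean(scores_a)
    return lambda x: slope * np.asarray(x) + intercept

# Hypothetical scores of the same students on the two assessments.
rng = np.random.default_rng(2)
nap_ictl = rng.normal(400, 100, 1000)
icils = 0.9 * nap_ictl + 80 + rng.normal(0, 30, 1000)

to_icils = mean_sigma_link(nap_ictl, icils)
print("a NAP-ICTL score of 400 maps to about", round(float(to_icils(400))), "on the ICILS scale")
```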

The NAP-ICTL surveys involved a standards-setting exercise (MCEETYA 2007). Consultations were held with panels of practicing teachers, ICT curriculum experts, and educational assessment experts, who examined test items and results from the assessments to agree on proficient standards for each year level (MCEETYA 2007, pp. 46–47). The proficient standard for Year 6 was defined as the boundary between NAP-ICTL Levels 2 and 3 (ICILS Levels 1 and 2), and the proficient standard for Year 10 was defined as the boundary between NAP-ICTL Levels 3 and 4 (ICILS Levels 2 and 3). Based on this standard setting, it can be concluded that, across 12 developed countries, two thirds (68%) of Year 8 students had attained the standard of proficiency that would identify a Year 6 student as computer literate, and one quarter (26%) had attained the standard of proficiency expected of a Year 10 student.

Perspectives from the NAEP-TEL Assessment

In the US Technology and Engineering Literacy (TEL) assessment, results were reported in terms of the percentages of Year 8 students in three performance bands and a fourth band below basic (NCES 2016b). The descriptors of these bands incorporate the related content areas of Technology and Society, Design and Systems, and Information and Communication Technology. Table 2 records the descriptors that refer to the Information and Communication Technology content area.

Table 2 Descriptors of performance bands in the US Technological and Engineering Literacy Assessment with percentages of Year 8 students in each band: 2014

Perspectives from the ICT Skills for Learning Test in Chile

Results from the ICT Skills for Learning Test in Chile referred to the percentages of 15-year-old students who would be expected to successfully complete eight major tasks or levels. The analyses resulted in a set of tasks empirically ordered from most difficult to least difficult. The tasks are listed in Table 3 along with the percentages in each level and at or above each level (Claro et al. 2012).

Table 3 Percentages of 15-year-old students able to successfully complete each type of task in the ICT Skills for Learning Test in Chile: 2009

Claro and colleagues (2012, p. 1050) concluded that approximately three quarters of Chilean students were able to search for information and half were also able to organize and manage digital information, but only one third were able to develop their own ideas in a digital environment, and less than one fifth could refine digital information and create a representation in a digital environment.

Perspectives from the ICT Literacy Assessment in Korea

Kim and Lee (2013) report the results of an Internet-based test of ICT literacy that incorporated 36 multiple-choice and performance items and was administered to a representative sample of 15,558 middle-school students in Korea in 2011. A set of performance levels was defined through a standard-setting exercise with expert judges. According to Kim and Lee (2013, p. 85), “excellent” represented an ability “to use or create information using existing ICT most effectively for solving problems”; “average” indicated “an understanding of the operating principles of computer and IT equipment” as well as “essential concepts and principles of computer science” and being able to use ICT for solving problems somewhat effectively; and “basic” signified the ability to use IT skills “without having an understanding of the principles behind the skills.” The percentage distribution of scores for students in the third year of middle school was categorized as excellent (8%), average (34%), basic (53%), and below basic (4%).

Factors Associated with Computer Literacy

From a number of studies, it is evident that student computer literacy varies across countries, across schools within countries, and across students within schools. Across countries, average computer literacy varies greatly and is positively associated with the ICT Development Index and negatively associated with the average ratio of students to computers in schools (Fraillon et al. 2014). Aesaert et al. (2015) point out that national ICT policies influence school and classroom practice through the curriculum, professional development for teachers, and the provision of resources. In terms of variation within countries, Aesaert et al. (2015) proposed a multilevel model of student-, classroom-, and school-level factors to explain differences in computer literacy.
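
As an illustration of the kind of multilevel (students-within-schools) model referred to here, the following sketch fits a random-intercept model to simulated data using the statsmodels mixed-effects interface. The variable names and effect sizes are hypothetical, and the specification is illustrative rather than a reproduction of any of the cited analyses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Simulate 50 schools with 20 students each.
n_schools, n_per_school = 50, 20
school = np.repeat(np.arange(n_schools), n_per_school)
school_effect = rng.normal(0, 20, n_schools)[school]    # between-school variation
ses = rng.normal(0, 1, n_schools * n_per_school)        # student socioeconomic status
ict_home = rng.poisson(2, n_schools * n_per_school)     # ICT resources at home
cil = 500 + 25 * ses + 8 * ict_home + school_effect + rng.normal(0, 60, len(ses))

data = pd.DataFrame({"cil": cil, "ses": ses, "ict_home": ict_home, "school": school})

# Random-intercept model: student-level predictors, school as the grouping level.
model = smf.mixedlm("cil ~ ses + ict_home", data, groups=data["school"])
result = model.fit()
print(result.summary())
```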

Individual Influences

Student-level influences on the development of computer literacy can be considered in terms of attitudinal or dispositional characteristics, familiarity with ICT, and background characteristics. In terms of attitudinal factors, Hatlevik et al. (2015) report that higher levels of mastery orientation and self-efficacy were associated with the development of digital competence in Year 7 students in Norway. ACARA (2015) reports that ICT literacy and ICT self-efficacy were both associated with interest and enjoyment in using computer technology. Greater interest in and enjoyment of ICT use were associated with higher CIL scores in 9 of the 14 countries that met the ICILS sampling requirements (Fraillon et al. 2014). The direction of the relationship between ICT or computer literacy and ICT self-efficacy is not clear from these studies and may involve reciprocal links. It also appears that measured computer literacy is positively associated with basic ICT self-efficacy but not with advanced ICT self-efficacy (Fraillon et al. 2014; Rohatgi et al. 2016). This is consistent with measures of computer literacy being focused on information and communication rather than advanced computer skills such as programming or database management.

Familiarity with computer technology is taken to include frequency of use and experience of using the technology. These factors appear to be consistently associated with computer literacy and self-efficacy in many countries (ACARA 2015; Fraillon et al. 2014; Kim et al. 2014; Rohatgi et al. 2016). Interestingly, it appears that ICT use for recreation may be a stronger predictor of computer literacy than ICT use for study or school purposes (Rohatgi et al. 2016). Among background characteristics, greater access to ICT resources (e.g., access to a home Internet connection and the number of computers at home) and higher socioeconomic status in general (e.g., as measured by parental occupation and educational attainment, the number of books in the home, or access to subsidized meals) are consistently associated with higher levels of computer literacy (ACARA 2015; Fraillon et al. 2014; Claro et al. 2012; Kim et al. 2014; Hatlevik et al. 2015). Aesaert et al. (2015) argue that, in addition to the socioeconomic characteristics of homes, consideration needs to be given to parental attitudes to, and practices with, information technology.

A number of studies report that girls have higher levels of computer literacy than boys (ACARA 2015; Fraillon et al. 2014; Kim et al. 2014), but this is not universally the case, with some studies reporting the reverse and others finding no difference (Rohatgi et al. 2016). One possible explanation for this mixed pattern lies in the nature of the assessments, with those that emphasize information literacy favoring girls and those that emphasize technical aspects favoring boys.

There is also evidence that computer literacy is higher for students in more advanced school years than for those in earlier school years. The Australian National Assessment of ICT Literacy used a linked scale covering Year 6 and Year 10 so that it was possible to compare the performance of students four years apart (ACARA 2015). It found that students in Year 10 exhibited higher levels of computer literacy than students in Year 6. The differences were between one and one and a half standard deviations but became smaller over time from 2005 to 2014. Kim and Lee (2013, p. 87) also report higher levels of computer literacy among students in the third year of secondary school than among students in the first year of secondary school.
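
Differences of this kind are conventionally expressed as standardized mean differences; a brief sketch with hypothetical means and standard deviations (not the published figures) shows the arithmetic.

```python
import math

# Hypothetical scale scores: Year 6 and Year 10 cohorts on a common ICT literacy scale.
mean_y6, sd_y6, n_y6 = 410.0, 95.0, 5000
mean_y10, sd_y10, n_y10 = 540.0, 90.0, 5000

# Pooled standard deviation and standardized difference (Cohen's d).
pooled_sd = math.sqrt(((n_y6 - 1) * sd_y6**2 + (n_y10 - 1) * sd_y10**2) / (n_y6 + n_y10 - 2))
d = (mean_y10 - mean_y6) / pooled_sd
print(f"standardized difference d = {d:.2f}")  # about 1.4 standard deviations here
```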

Classroom and Teacher Influences

The model proposed by Aesaert et al. (2015) includes, among potential classroom- and teacher-level influences, teachers’ ICT competence, attitudes to ICT, professional development, logistic appropriateness, ICT use, and ICT experience. However, there are few reported studies that relate teacher attributes and classroom practices using ICT to the development of student computer literacy. There is evidence that the emphasis on developing computer literacy is related to teachers’ ICT self-efficacy and positive views of ICT in education (Fraillon et al. 2014, p. 218). It also appears that the emphasis placed on developing computer literacy varies across learning areas, being much higher in computer studies than in other learning areas and relatively higher in science and the human sciences or humanities than in mathematics (Fraillon et al. 2014, p. 220). Kim et al. (2014) suggest that the satisfaction of students in classes using ICT was related to their ICT literacy levels. Again, the direction of causality is not clear.

School Influences

There are several studies that explore school-level influences on student computer literacy. It does generally appear, across most countries, that the average level of socioeconomic background of the students attending a school is associated with student computer literacy (Fraillon et al. 2014). Hatlevik et al. (2015) found that commitment among teachers in a school toward professional development was associated with students’ computer literacy. Dexter (2008) argued that managing teachers’ professional development in the use of ICT for teaching and learning was an important school-level influence on the use of ICT in schools. Fraillon et al. (2014) found that collaboration among teachers about the use of ICT was positively associated with the emphasis placed on developing computer literacy. ICILS also reported that in several countries student reports of having learned about ICT at school were associated with higher levels of computer literacy (after controlling for other measured influences).

Emerging Issues

The Challenge of Change

Changes in hardware and software technologies have meant that the contexts in which computer literacy can be demonstrated have also changed. An important question is therefore whether the nature of the computer literacy construct has remained consistent despite these changes in context. Studies that have attempted to measure changes in computer literacy over time have used anchor tasks between adjacent cycles together with new tasks that accommodate developments in software and hardware (ACARA 2015). Establishing that the construct has remained the same is essential if the proportions of students who are computer literate are to be compared over time.

Computer Science and Computational Thinking

A challenge has emerged to the idea that computer literacy as currently conceptualized is an adequate basis for understanding computing and computers. This challenge proposes that learning to use and apply computer technology is not a sufficient representation of computer literacy and that understanding some principles of computing has educational benefits (Peyton Jones 2011). In other words, this view sees the sort of thinking used when programming a computer as being part of computer literacy (Grover and Pea 2013; Lye and Koh 2014). Computational thinking includes activities such as formulating, representing, analyzing, and executing a solution, which can involve developing or implementing an algorithm as computer code. The principles of teaching computational thinking resonate with work from five decades ago using Logo and turtle graphics (McDougall et al. 2014). A number of countries now have school computer science courses that include computational thinking. The take-up of the ideas of computational thinking represents a challenge to the ways in which computer literacy has been envisaged since 2002. There remains debate about whether computational thinking provides a basis for general education (Barr et al. 2011) and about how it relates empirically to computer literacy as currently conceptualized.
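
As a concrete illustration of the kind of algorithmic thinking described above, the short sketch below expresses a drawing procedure as an algorithm using Python's standard-library turtle module, in the spirit of the Logo turtle-graphics tradition; it is a generic example rather than one drawn from any particular curriculum.

```python
import turtle

def polygon(t, sides, length):
    """Draw a regular polygon by repeating a simple rule: move forward, then turn."""
    for _ in range(sides):
        t.forward(length)
        t.left(360 / sides)

def rosette(t, sides, length, copies):
    """Repeat the polygon, rotating between copies, to compose a larger pattern."""
    for _ in range(copies):
        polygon(t, sides, length)
        t.left(360 / copies)

t = turtle.Turtle()
t.speed(0)                               # draw as fast as possible
rosette(t, sides=6, length=80, copies=12)
turtle.done()                            # keep the window open until closed
```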

Conclusion

Since 2002, computer literacy has been accepted as referring to the capacity to access, evaluate, use, manage, process, and communicate information with digital technology. On the basis of international and national studies of computer literacy, it appears that there are substantial variations in levels of computer literacy among students in the lower and middle grades of secondary school. There is therefore no simple, unambiguous answer to the question of whether students are computer literate. In most countries, there is a group of students whose computer literacy is very limited. In most technologically developed countries, this appears to be fewer than 10% of students, but it is more than that in many other countries. In general, approximately one half of lower secondary students demonstrate proficiency, or advanced proficiency, in computer literacy, and approximately one quarter demonstrate advanced proficiency. However, the field could benefit from investigations of how the various measures of computer literacy relate to each other and from standards-setting exercises that could inform judgments about which scores on these scales represent computer literacy.

Computer literacy is higher in countries with higher levels of ICT development and within countries is higher among students whose families are socioeconomically advantaged, who are more familiar and experienced with computer technology, and who have greater confidence in using computer technology. Girls tend to have slightly higher levels of computer literacy than boys. It is sometimes assumed that young people who have grown up with computers as a ubiquitous part of their lives will have high levels of computer literacy. However, there remain wide variations in computer literacy, and there is a challenge to ensure all young people are capable of productively using a range of digital technologies. Part of that challenge is to build a larger base of empirical studies of school and classroom influences on the development of computer literacy. Longitudinal studies of these influences on changes in computer literacy would be especially informative.

If we are to ensure that the widest possible range of young people becomes computer literate, greater attention needs to be given to the inclusion of education about computer technologies in the school curriculum. Designated courses about computer technology currently appear to provide the greatest emphasis on developing computer literacy, and the prospect for developing computational thinking, but what happens in other subjects is also important. Computer literacy is best seen as a cross-curricular capability and needs to be underpinned by progression maps (such as those reviewed in this chapter) that are linked to curricula in other learning areas. A progression map could provide a focus for collaboration among teachers and for professional learning activities for teachers.