Research assignments using the Internet are now typical in university classrooms. Yet the ease and speed of accessing information online often belie the challenges students face in filtering and making sense of the information they encounter. Consider a scenario in which a student is assigned to argue for or against requiring laptops in college classrooms:

You are a student representative on a committee that is considering whether your university should require students to have laptops for use in the classroom. The committee is composed of students, faculty, and university administrators who must first decide if having laptops in the classroom is a good idea. As the student representative to this committee, they have asked you to prepare a brief report on this topic and provide your recommendations. Prepare a summary (1–2 pages) of recent reports and make recommendations on the advantages and disadvantages of requiring laptops for use in classrooms.

In essence, the student is asked to research different perspectives, take a position, and write a brief essay. Which types of knowledge are necessary for successful completion of this assignment? There is general consensus in the literature that digital literacy refers to the ability to read and write using online sources, and includes proficiency in selecting sources relevant to the task, synthesizing information into a coherent message, and communicating that message to an audience (Barton, 2001; Glister, 2000). To explore the concept of digital literacy, we asked students to engage in the online research task described above. We then assessed the quality of the resulting essay on a five-point scale, which serves as our measure of digital literacy proficiency.

Objectives

Two research questions guided this work. First, which kinds of knowledge are required to be successful in academic digital literacy tasks? Conventional wisdom might suggest that experience in using computers in general, and the Internet in particular, is what is needed in today's digital age. In the present study, we examine the roles of three kinds of knowledge in working online to create a high-quality essay: academic experience (in which we compare first-year college students and graduate students), domain knowledge (assessed by questionnaire responses concerning knowledge of educational technology issues), and technical knowledge (assessed by questionnaire responses concerning experience in using digital media). We are particularly interested in whether digital literacy proficiency depends on the same kind of knowledge as traditional forms of scholarship, or whether technical knowledge plays a major role.

Our second question asks which processes during the online writing task are required for success. Conventional wisdom might suggest that students will be more successful to the extent that they take advantage of access to great amounts of information on the Internet, as indicated by the number of web pages visited or search terms entered. In the present study, we examine the roles of search processes (such as total web pages visited, unique websites visited, and search terms entered) and of integration processes (measured by inclusion of authors' names, source titles, specific supporting details, and the number of unique sources in the resulting essay). We are particularly interested in whether digital literacy proficiency depends on the same writing processes as traditional forms of scholarship (i.e., integration processes), or whether skills in accessing information on the Internet play a major role.

Many strong claims are made for the unique demands of digital literacy (Coiro et al., 2008). According to a technology-centered view of digital literacy, proficiency depends on the user’s knowledge of technology and on the amount of information accessed during learning (Oblinger & Oblinger, 2005). In short, the most important aspect of digital literacy involves skill at accessing vast amounts of information. In contrast, according to the learner-centered view of digital literacy, proficiency depends on the learner’s academic knowledge and the learner’s strategies for effectively selecting and integrating the available information for the task at hand (Wiley et al., 2009; Rouet, 2006). In short, the most important aspect of digital literacy involves skill at using information. The goal of the present study is to subject these visions of digital literacy to an empirical test.

Technology-centered versus learner-centered approaches to digital literacy

As use of the Internet among young adults grew, reaching 73% by 2000 (Lenhart, Lewis, & Rainie, 2001), so did theories about the interaction between technology and young users. Prensky (2001) labeled those born after 1980 as digital natives, claiming their greater fluency with technologies would lead to more fluid, advanced use. Tapscott (1997), Jenkins (2006), and Palfrey and Gasser (2008) argued that current methods of education would be insufficient for digital natives, who, because of their technical capabilities, would expect greater interactivity and seek collective approaches to developing expertise. Indeed, gaming (Gee, 2003; Salen, 2007), blogging (Jenkins, 2006), and other informal digital environments (Oblinger & Oblinger, 2005) were presented as complementing and possibly supplanting formal education. Evidence for these claims was often anecdotal, based on limited, non-systematic observation and small samples (Bennett, Maton, & Kervin, 2008). An additional weakness of these technology-centered arguments was their focus on proficiency within the digital environment without considering the contribution of prior knowledge and skills. Assumptions of homogeneous technical aptitude have been criticized for not accounting for variation by age (Livingstone & Helsper, 2007), access (Nasah, DaCosta, Kinsell, & Seok, 2010), or skills (Hargittai, Fullerton, Menchen-Trevino, & Thomas, 2010; Metzger, 2007). Despite recent empirical studies challenging the digital native theory, beliefs persist that access to and experience with technology significantly change learners' expectations and enhance their academic performance.

Yet discussions of technology use can provide a starting point for deeper investigations into how learners engage with online resources. Rouet (2003, 2006) showed that literacy skills associated with a functional understanding of how documents work (e.g., a textbook usually labels sections with bold headings, and a telephone book contains alphabetized listings) aid in managing multiple information sources. When examining students' expectations of document contents both online and offline, Rouet (2003, 2006) found that, regardless of medium, accurately envisioning a document's layout improves learners' ability to make sense of the material because they can target specific areas of the text. In considering the cognitive demands of multimedia learning environments, Mayer (2009) identified a complex sense-making process in which learners select words and images, organize each into a coherent mental model, and integrate them to form an understanding. These studies identify the cognitive strategies learners leverage to maximize understanding of potentially overwhelming material.

Additionally, learner-centered research considers the skills and processes evident in proficient practice. Lazonder (2000) compared performance on an online academic search task between high school students with high and low technical expertise. Productive searches involved both procedural knowledge (of the browser) and a degree of domain knowledge. Lazonder (2000) found that students who demonstrated a combination of web expertise and high domain knowledge were more likely to select relevant information, suggesting that facility with technology was not the lone predictor of success. In a comparison of graduate and undergraduate students' problem-solving skills, Brand-Gruwel, Wopereis, and Vermetten (2005) found that while the groups did not differ in their number of search actions, graduates devoted significantly more time to evaluation and demonstrated stronger comprehension of the source material. This finding is consistent with Wineburg's (1991) offline comparison of historians and students. When asked to evaluate the trustworthiness of selected historical materials, the groups did not differ in their number of evaluative statements; however, historians devoted considerably more space to assessing the source and context, and to comparing reports across other documents. Evaluation strategies also informed the choices observed by Bråten, Strømsø, and Salmerón (2011), who found that undergraduates with low domain knowledge were more likely to trust biased sources and fall prey to illusions of trustworthiness. These studies demonstrate that prior knowledge must be considered in addition to technical aptitude when assessing digital literacy.

Theory and predictions

The left column of Table 1 lists the three major research questions in this study, the second column lists the predictions of the technology-centered view, and the third column lists the predictions of the learner-centered view. The first question concerns what proficient students know about how to prepare a report based on online sources, and is investigated by examining which kind of knowledge (e.g., technical knowledge versus academic knowledge) best predicts digital literacy proficiency (as measured by essay quality rating). According to the technology-centered view, technical knowledge should be the best predictor of essay quality; whereas according to the learner-centered view, academic knowledge should be the best predictor of essay quality.

Table 1 Three research questions

The second question concerns what proficient students do as they prepare a report based on online sources, and is investigated by examining which kind of online activities (i.e., accessing information such as number of web pages visited versus using information such as number of copy/paste operations) best predicts digital literacy proficiency (i.e., essay quality rating). According to the technology-centered view, measures of accessing information should be the best predictors of essay quality; whereas according to the learner-centered view, measures of using information should be the best predictors of essay quality.

The third question concerns what knowledgeable students do as they prepare a report based on online sources, and is investigated by examining which kind of online activities best predicts digital literacy proficiency. According to the technology-centered view, high knowledge students (i.e., graduate students in education) should score higher than low knowledge students (i.e., first-year college students) on measures of accessing information such as number of web pages visited; whereas according to the learner-centered view, high knowledge students should score higher than low knowledge students on measures of using information such as number of copy/paste operations.

Method

Participants and design

Participants were 62 graduate students in the Graduate School of Education who were in their fifth year of post-secondary education or higher (high academic knowledge group) and 88 first-year undergraduate students in the College of Arts and Science (low academic knowledge group) recruited from intact classes at the University of California, Santa Barbara. The graduate students were recruited from a required course on “Technology for Teachers” whereas the undergraduates were recruited from a required first-year writing composition course, “Writing 2.” For the graduate students, there were 51 women and 11 men, with a mean age of 23.7 years (SD = 3.0). For undergraduate students, there were 31 women and 57 men, with a mean age of 18.1 years (SD = .6). For both groups, instructors received an invitation to participate that included a description of the study. All students enrolled in the courses were given the option to participate. To motivate participants’ performance, both the graduate and undergraduate courses incorporated the research task into the curriculum. Students who opted out of the study (n = 17) completed a different, but equivalent activity.

Materials and apparatus

For each participant, the paper-based materials consisted of a pre-questionnaire, task assignment sheet, and post-questionnaire. The pre-questionnaire contained 13 questions that solicited demographic information, frequency of technology use, years of academic experience, and knowledge of the field of Education. The task assignment sheet instructed students to write a 1–2 page essay using recent information available on the Internet to recommend whether students on campus should be required to bring laptops to classrooms, using the following prompt:

You are a student representative on a committee that is considering whether UCSB should require students to have laptops for use in the classroom. The committee is composed of students, faculty, and university administrators who must first decide if having laptops in the classroom is a good idea. As the student representative to this committee, they have asked you to prepare a brief report on this topic and provide your recommendations. Prepare a summary (1–2 pages) of recent reports and make recommendations on the advantages and disadvantages of requiring laptops for use in classrooms.

The post-questionnaire asked 15 questions about learning practices, training in credibility evaluation, practice of credibility evaluation, and participation in research and teaching in the field of education.

The apparatus consisted of 25 Dell Pentium IV computers, which were identically configured to include Internet access, Microsoft Office, Adobe Reader, and access to the University Library's databases. The computers were housed in a university computer classroom and were arranged in five vertical rows, with a printer located at the front of the room accessible to all students. Monitoring software installed on each computer recorded keystroke activities, active applications, and URL visits (Jmerik, 2004). Each action was stored as a record in a log file, with a date and time stamp.

Procedure

Depending on pre-existing class size, participants were tested in groups of 10–25 during a single 70-min class session. Participants in each session were enrolled in the same course. All participants were seated in the computer classroom containing 25 computer stations as part of a regular class session, and each participant selected a computer upon entering the room. Once the entire group was seated at their computer stations, the pre-questionnaire was distributed and participants were given 10 min to complete it. After 10 min, the researcher collected the pre-questionnaires, handed out the task assignment sheet, and presented oral instructions explaining that students would be given 50 min to write a 1–2 page report about whether laptop use should be required in college classrooms. As part of the oral instructions for the research task, participants were asked to use Microsoft Word for their writing and to save and print their work when they were finished. The researcher announced when 20 and 40 min had elapsed. At the end of the 50-min research task, the printed essays were collected. Then the post-questionnaire was distributed and participants were given 10 min to complete it. After participants left the room, log files were remotely extracted from each computer, along with digital files of each participant's essay.

Measures

Table 2 lists the major measures used in the study, along with their source and a brief description. The major dependent variable was digital literacy proficiency, measured by a holistic score from 1 "poor" to 5 "excellent" given to each essay that participants completed as part of the research task. Coders used a five-level scoring rubric in which each level evaluated demonstrated comprehension, coherence, and synthesis (see Table 3 for detailed descriptions). Two coders experienced in university-level writing assessment scored each essay. Applying the training guidelines used for Subject A and Writing 1 Common Final scoring in the state-wide university system, the coders used four sample essays for training and then scored a common set of essays (n = 8) until they reached consensus on consecutive essays. In cases of disagreement, scores were discussed to determine whether a consensus could be reached. When a consensus could not be reached (n = 4), scores were averaged to create a final score. Agreement between the two coders was κ = .93 using Cohen's kappa and α = .93 using Krippendorff's alpha; both indices indicate a high level of inter-coder agreement according to established interpretation criteria (Lombard et al., 2002).

Table 2 Measures used in the study
Table 3 Digital literacy proficiency holistic scoring measures
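
For readers who wish to see how this kind of agreement check can be carried out, the sketch below computes Cohen's kappa for two coders' holistic scores using scikit-learn; the ratings shown are invented for illustration (they are not the study's data), and Krippendorff's alpha would require a separate package (e.g., the third-party krippendorff library).

```python
# Illustrative inter-coder agreement check (hypothetical ratings, not the study's data).
from sklearn.metrics import cohen_kappa_score

coder_a = [3, 4, 2, 5, 1, 3, 4, 2, 3, 5]  # holistic scores (1-5) from coder A
coder_b = [3, 4, 2, 5, 1, 3, 4, 3, 3, 5]  # holistic scores (1-5) from coder B

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")

# Where the coders could not reach consensus, the final score was the average:
final_score = (4 + 3) / 2  # e.g., coder A gave 4, coder B gave 3 -> 3.5
```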

Measurement of academic knowledge was based on questions asking students to indicate their class standing as “freshman,” “sophomore,” “junior,” “senior,” or “graduate student,” with undergraduate standing scored as “1” and graduate standing scored as “2.” Measurement of technical knowledge was based on asking students to select activities that applied to them from a list of 15 technology-related items such as “maintain a blog,” “have a profile on a social networking site,” “play video games,” “post comments/entries to sites such as Wikipedia,” “use social tagging sites,” “know a computer programming language,” and “am often asked by family, colleagues, or friends to help them fix their computers.” Each checked item received a score of 1, yielding a total possible score of 15. Measurement of domain knowledge was based on questions in which students rated their familiarity from “low” (scored as 1 point) to “high” (scored as 5 points) on six broad educational topics, including “No Child Left Behind,” “laptops in the classroom,” and “methods of educational research;” and to indicate whether or not they had “published an article in an academic journal,” “attended a conference in the field of education,” “taught a course at the primary, secondary, or college level,” “completed a course in the field of education,” or “conducted research in the field of education,” with 2 points given for “yes” and 1 point for “no” for each item.
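
As a concrete illustration, the sketch below implements the scoring rules just described for the three knowledge measures; the function names, data structures, and example responses are hypothetical, not the study's actual instruments or data.

```python
# Illustrative scoring of the three knowledge measures (hypothetical responses).

def score_academic(class_standing: str) -> int:
    """Undergraduate standing scored as 1, graduate standing scored as 2."""
    return 2 if class_standing == "graduate student" else 1

def score_technical(checked_items: list[bool]) -> int:
    """1 point per checked technology item; 15 items, so the maximum score is 15."""
    return sum(1 for checked in checked_items if checked)

def score_domain(familiarity_ratings: list[int], yes_no_items: list[bool]) -> int:
    """Familiarity rated 1 (low) to 5 (high) per topic, plus 2 points for each
    'yes' and 1 point for each 'no' on the experience items."""
    return sum(familiarity_ratings) + sum(2 if yes else 1 for yes in yes_no_items)

print(score_academic("sophomore"))                                           # -> 1
print(score_technical([True] * 6 + [False] * 9))                             # -> 6
print(score_domain([3, 2, 4, 1, 2, 3], [False, True, False, True, False]))   # -> 22
```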

Measurement of accessing information was based on five types of user actions collected in the log files: (a) total number of web pages loaded, (b) total number of unique URLs loaded, (c) total number of times the same URL was revisited, (d) total number of clicks on links, and (e) total number of search terms entered. Each action counted as one point, and totals ranged from 0 to 114. All keystroke and mouse-click actions were summed to create a "total actions" measure, which ranged from 9 to 194.

Measurement of information use was based on log file analysis and the student essays. Information use counts from the log files included each occurrence of the copy or paste command (in this case, text copied from an Internet source and pasted into the Microsoft Word document) and each time a .pdf file was opened. Information use within the essays was scored by awarding 1 point for each occurrence of a supporting detail (e.g., a fact, direct quote, or statistic) and each citation element (e.g., author name, date, title, parenthetical citation, or reference list entry). In addition, each unique reference (as opposed to the same source referred to multiple times) was assigned 1 point. Word counts of the essays ranged from 170 to 1,011.
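
Because the exact log-file format produced by the monitoring software is not described here, the following sketch only illustrates the tallying logic for the log-based measures, assuming the log has already been parsed into a list of event dictionaries with hypothetical event labels.

```python
# Illustrative tallies of accessing and using actions from a parsed log
# (hypothetical event labels; the real log format may differ).
from collections import Counter

ACCESS_KINDS = ("page_load", "unique_url", "url_revisit", "link_click", "search_term")
USE_KINDS = ("copy", "paste", "pdf_open")

def access_score(events: list[dict]) -> int:
    """Sum the five accessing-information action counts (1 point per action)."""
    counts = Counter(e["kind"] for e in events)
    return sum(counts[k] for k in ACCESS_KINDS)

def use_score(events: list[dict]) -> int:
    """Count copy/paste commands and .pdf openings (1 point per occurrence)."""
    return sum(1 for e in events if e["kind"] in USE_KINDS)

events = [{"kind": "page_load"}, {"kind": "link_click"}, {"kind": "search_term"},
          {"kind": "copy"}, {"kind": "paste"}]
print(access_score(events), use_score(events))  # -> 3 2
```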

Results

What do students need to know for proficiency in digital literacy?

A major question addressed in this study concerns which kind(s) of knowledge are related to the quality of students’ essays in an online writing task. According to the technology-centered view, proficiency in digital literacy depends largely on the user’s technical knowledge; whereas according to the learner-centered view, proficiency in digital literacy depends largely on the learner’s academic knowledge gleaned from years in college. In order to address this issue, a 2 × 2 × 2 analysis of variance was conducted with academic knowledge (undergraduate versus graduate), domain knowledge (low versus high based on a median split), and technical knowledge (low versus high based on a median split) as the factors and digital literacy proficiency as the dependent measure.
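
A minimal sketch of this 2 × 2 × 2 factorial analysis is shown below, assuming a participant-level data frame with hypothetical file and column names; the original analysis may well have been run in a different statistics package, and the median splits are formed exactly as described above.

```python
# Sketch of the 2 x 2 x 2 ANOVA on essay quality (hypothetical file and columns).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("participants.csv")  # hypothetical data file, one row per participant

# Academic knowledge is already binary (1 = undergraduate, 2 = graduate);
# domain and technical knowledge are split at the median into low/high groups.
df["domain_hl"] = (df["domain_knowledge"] > df["domain_knowledge"].median()).astype(int)
df["tech_hl"] = (df["technical_knowledge"] > df["technical_knowledge"].median()).astype(int)

model = ols("essay_score ~ C(academic) * C(domain_hl) * C(tech_hl)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interactions
```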

There were two significant main effects. Academic expertise had a significant impact on digital literacy proficiency, with graduate students writing higher quality essays (M = 3.02, SD = 1.27) than undergraduate students (M = 2.31, SD = 0.96), F(1, 142) = 10.96, MSE = 13.51, p < .001, d = 0.63; and technical expertise had a significant impact on digital literacy proficiency, with high technical expertise students writing higher quality essays (M = 2.80, SD = 1.27) than low technical expertise students (M = 2.47, SD = 1.16), F(1, 142) = 6.40, MSE = 7.88, p = .013, d = 0.27. Table 4 shows the mean digital literacy proficiency score for undergraduate and graduate students by level of technical expertise. As shown in Table 4, there was a significant interaction in which high technical expertise greatly improved performance for graduate students (M = 3.71, SD = 1.05) but not as much for undergraduate students (M = 2.44, SD = 1.17), F(1, 142) = 7.07, MSE = 8.71, p = .009, d = 1.14. Thus, there is evidence that both academic and technical expertise affect digital literacy proficiency, with students who score high on both performing particularly well on the online research task.

Table 4 Mean digital literacy proficiency scores (and SD) for low and high academic knowledge students by technical knowledge level

Concerning domain knowledge, there was no significant main effect, F(1, 142) = 0.88, MSE = 1.08, p = .350, which may reflect the high correlation (r = .53) between domain knowledge and academic knowledge. Table 5 shows the mean digital literacy proficiency scores by domain knowledge and technical knowledge. As shown in Table 5, there was a significant interaction between domain knowledge and technical knowledge, in which high technical knowledge helped low domain expertise students (d = 0.72) but not high domain expertise students (d = 0.00), F(1, 142) = 5.25, MSE = 6.47, p = .023. There was no significant three-way interaction among academic, domain, and technical knowledge groups, F(1, 142) = 0.28, MSE = 0.34, p = .599. Overall, technical expertise compensates for lack of domain expertise and domain knowledge compensates for lack of technical expertise, with students who score low in both performing particularly poorly on the online writing task.

Table 5 Mean digital literacy proficiency scores (and SD) for low and high domain knowledge students by technical knowledge level

The results of the analysis of variance indicate that all three kinds of expertise play a role in digital literacy. However, a related issue concerns which type of knowledge is most important in predicting digital literacy proficiency. To address this issue, a stepwise linear regression was conducted with each of the three types of knowledge as the predictors and digital literacy proficiency score as the dependent measure. Table 6 shows the correlations among these four variables, indicating that both academic expertise and domain expertise are significantly related to digital literacy proficiency score and they are highly related to one another. Interestingly, technical knowledge is negatively correlated with academic knowledge, indicating that students who have spent more years in higher education tend to have less experience in using popular technologies.

Table 6 Correlation matrix of digital literacy proficiency score with three kinds of knowledge

As shown in Table 7, the stepwise procedure retained academic knowledge (standardized coefficient = .34) and technical knowledge (standardized coefficient = .21), together explaining 13% of the variance in digital literacy proficiency scores (R² = .13). As with the ANOVA and correlational analyses, the regression demonstrates that multiple forms of knowledge contribute to performance on digital literacy tasks, with academic knowledge having the strongest influence. Domain knowledge did not add further predictive power, likely because it is highly correlated with academic knowledge, which was already included in the model.

Table 7 Summary of stepwise regression analysis for expertise variables predicting digital literacy proficiency
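
The stepwise regression could be approximated with a simple forward-selection loop, as sketched below; the column names are hypothetical, the entry criterion is an assumption (the paper does not report one), and standardized coefficients would additionally require z-scoring the predictors.

```python
# Forward-selection sketch approximating the stepwise regression (illustrative only).
import pandas as pd
import statsmodels.api as sm

def forward_select(df: pd.DataFrame, dv: str, candidates: list[str], alpha: float = 0.05):
    """Add, one at a time, the remaining predictor with the smallest p-value below alpha."""
    candidates = list(candidates)
    chosen: list[str] = []
    while candidates:
        pvals = {}
        for c in candidates:
            X = sm.add_constant(df[chosen + [c]])
            pvals[c] = sm.OLS(df[dv], X).fit().pvalues[c]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        chosen.append(best)
        candidates.remove(best)
    return sm.OLS(df[dv], sm.add_constant(df[chosen])).fit()

# Hypothetical usage:
# df = pd.read_csv("participants.csv")
# model = forward_select(df, "essay_score",
#                        ["academic", "technical_knowledge", "domain_knowledge"])
# print(model.summary())
```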

Overall, there is strong evidence that making information available to students through the Internet is not enough to enable students to engage in high quality academic writing from online sources. For example, in Table 4, students who are high in academic and technical knowledge score more than 1.5 SDs (d = 1.53) better on the digital literacy writing task than do students who score low in both. Similarly, in Table 5, students who are high in domain and technical knowledge score more than 0.7 SDs (d = 0.70) better on the writing task than do students who score low in both. These analyses point to the significant role of the knowledge that the student brings to the online research task, particularly academic and technical expertise. These results provide evidence against a strong form of the technology-centered view, because technical knowledge is not the only or even the strongest predictor of digital literacy proficiency. Consistent with the learner-centered view, the strongest predictor of proficiency on the online task was academic knowledge.

How do students demonstrate proficiency in digital literacy?

A second major question addressed in this study concerns which kinds of actions during the online research task are related to the quality of students’ essays. For purposes of this analysis, we focused on two kinds of actions–accessing information (indicated in the top portion of Table 8) and using information (indicated in the bottom portion of Table 8). According to the technology-centered view, the unique advantage of online resources is that they provide access to great quantities of information, so the quality of the essay should be related to how well the user accesses great quantities of information (such as indicated by the number of pages or websites visited). According to the learner-centered view, the learner’s challenge is to figure out how to use the available information effectively within the essay, so essay quality should be related to how the learner integrates the information in the essay (such as indicated by the number of authors mentioned, sources cited, or specific details included).

Table 8 Correlations between essay quality scores and measures of accessing information and using information

Table 8 shows the correlation between essay quality scores and measures of accessing information and using information during the writing task. None of the measures of accessing information correlated significantly with essay quality, suggesting that amount of access to information (such as number of pages or websites visited) does not predict proficiency in digital literacy. In contrast, many of the measures of using information correlated significantly with essay quality, suggesting that the way that learners incorporated information in their essays (e.g., such as number of supporting details, authors cited, or sources used) predicted proficiency in digital literacy. Interestingly, number of copy/paste actions significantly correlated with digital literacy proficiency, suggesting that using selected information found on websites contributed to essay quality (but simply visiting a lot of websites did not).
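
The correlational pattern in Table 8 could be reproduced with a loop like the one below, assuming the same kind of participant-level data frame used in the earlier sketches; the column names are placeholders rather than the study's actual variable names.

```python
# Sketch of the process-measure correlations with essay quality (hypothetical columns).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("participants.csv")  # hypothetical data file

accessing = ["pages_loaded", "unique_urls", "url_revisits", "link_clicks", "search_terms"]
using = ["copy_paste", "pdf_opens", "supporting_details", "citations", "unique_sources"]

for col in accessing + using:
    r, p = pearsonr(df[col], df["essay_score"])
    print(f"{col:>20}: r = {r:.2f}, p = {p:.3f}")
```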

How do high knowledge and low knowledge students differ on a digital literacy task?

As a follow-up analysis, we compared the actions taken by graduate students and undergraduates during the online writing task. Our goal was to determine whether students with more academic knowledge tend to access more information or to use more of the accessed information as compared to students with less academic knowledge. The top portion of Table 9 shows the mean scores (and SD) on six measures of the process of accessing information for undergraduates and graduates. As can be seen, the groups did not differ significantly on total actions, including total web pages viewed, unique websites viewed, return to source, links followed, or modification of search terms.

Table 9 Means (and SD) for undergraduate and graduate information access and use

The bottom portion of Table 9 shows the mean scores (and SDs) on eight measures of using the accessed information within the essay. As shown in the table, the groups differed significantly in their use of examples, quotes, total supporting details, author, title, date, total citations, and unique sources. In addition, graduates used the copy/paste function more frequently (M = 3.00, SD = 5.37) than undergraduates (M = 1.40, SD = 2.23), and this difference was significant, t(148) = −2.514, p = .013, d = −.413. Students who performed the copy/paste command more often also demonstrated higher digital literacy proficiency (r = .238, p = .003). This small but significant relationship between pasting and essay score suggests a highly effective use of a simple technology to both organize and integrate information. Use of Adobe Reader to open .pdf files also differed significantly between the groups, t(148) = −2.60, p = .013, with graduate students (M = 3.02, SD = 5.36) opening more .pdf files than undergraduates (M = 1.40, SD = 2.23). This factor, however, did not correlate significantly with digital literacy proficiency (r = −.011, p = .791).
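
The group comparisons reported above are independent-samples t tests accompanied by Cohen's d; a minimal sketch of one such comparison, with hypothetical column names, is shown below.

```python
# Sketch of one group comparison: copy/paste frequency by academic level (hypothetical columns).
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("participants.csv")  # hypothetical data file
grad = df.loc[df["academic"] == "graduate", "copy_paste"]
ugrad = df.loc[df["academic"] == "undergraduate", "copy_paste"]

t, p = ttest_ind(ugrad, grad)  # ordering determines the sign of t
pooled_sd = np.sqrt(((len(grad) - 1) * grad.std(ddof=1) ** 2 +
                     (len(ugrad) - 1) * ugrad.std(ddof=1) ** 2) /
                    (len(grad) + len(ugrad) - 2))
d = (ugrad.mean() - grad.mean()) / pooled_sd  # negative when graduates score higher
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```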

The groups differed in the quality of their writing, with essays written by graduate students receiving a higher mean score (M = 3.02, SD = 1.27) than essays written by undergraduates (M = 2.31, SD = 1.08), t(148) = 3.69, p < .001, d = .61; however, they did not differ in the quantity of writing, with the number of words per essay equivalent for graduates (M = 515.47, SD = 142.59) and undergraduates (M = 479.86, SD = 147.10), t(148) = 1.48, p = .141, d = .24.

Overall, this analysis shows that the research and writing practice afforded by years in school appears to significantly affect the way students use information, but not the amount of information they access. In short, the results once again refute the technology-centered view of digital literacy, which focuses on access to large amounts of information, and support the learner-centered view of digital literacy, which focuses on effectively using information.

Discussion

Empirical contributions

As shown in the right column of Table 1, this study provides answers to three research questions. First, concerning what proficient students know, the ANOVA showed that the largest effect size for differences in digital literacy proficiency was for high versus low academic knowledge, although a smaller but significant effect was also found for high versus low technical knowledge. Similarly, in a stepwise regression, academic knowledge was the best predictor of digital literacy proficiency, although technical knowledge contributed additional predictive power. Domain knowledge was so highly correlated with academic knowledge that it did not add predictive power. Second, concerning what proficient students do, this study found that digital literacy proficiency correlated with measures of using information (such as the number of copy/paste actions) but not with measures of accessing information (such as the number of websites visited). Third, concerning what knowledgeable students do, high academic knowledge students scored higher than low academic knowledge students on measures of using information but not on measures of accessing information.

Theoretical contributions

Overall, as summarized in Table 1, the results are not consistent with the technology-centered view that the primary underpinnings of digital literacy concern the learner's technical knowledge about digital media and are manifested in the ability to access large amounts of information. Instead, the results are most consistent with the learner-centered view that the primary underpinnings of digital literacy are the same kinds of academic knowledge involved in traditional forms of scholarship and are manifested in the ability to use information effectively by selecting and integrating relevant information from multiple sources. It is not necessarily students' knowledge of technology that propels their digital literacy proficiency; rather, it is their basic academic knowledge of how to use sources of information. Similarly, it is not how many web pages they visit that determines students' digital literacy proficiency; it is how they use the information on the pages they do visit. In summary, this study suggests that emerging theories of new media literacy should not ignore the role of traditional academic knowledge and integrative cognitive processing in the development of digital literacy proficiency.

Practical contributions

This study suggests that just knowing how to search the Internet does not ensure digital literacy. Students in the digital age still need classic scholarship skills, particularly how to select and integrate information from multiple sources. The inclusion of digital literacy in the academic curriculum does not mean that classic scholarship skills are no longer needed.

Methodological contributions

There is much speculation in the literature on new literacies, but not a correspondingly large body of empirical evidence from which to draw reasoned conclusions. This study demonstrates a methodology for how to study the foundations of digital literacy, and thereby contributes to an empirical research base. It provides a preliminary way of measuring some basic constructs such as digital literacy proficiency, various kinds of knowledge (e.g., academic, technical, and domain knowledge), and various kinds of processing (e.g., processes for accessing information and processes for using information).

Limitations and future directions

Although this study advances the field by providing preliminary ways of measuring key constructs, more work is needed in conceptualizing and measuring them, including better measures of cognitive processing and a sharper distinction between academic knowledge and domain knowledge. Although this study took place in an authentic academic setting, it consisted of a single short episode, so future work with a longer-term focus is needed. Finally, as improvement in digital literacy proficiency is our ultimate goal, future work is needed to test the effectiveness of training and online aids for reading and writing in digital environments.