Abstract
Competent use of the Internet to locate information is an important skill for today’s youth. Yet, many lack the knowledge and dispositions needed to effectively and efficiently find information on the Internet. As a result, various countries have incorporated references to the processes of online inquiry within their educational standards. Despite similarities in these standards, however, international comparisons are rare and have not produced insights into broader themes and patterns regarding how cognitive, metacognitive, and affective variables interact to influence outcomes on related measures of success, e.g., international assessments. The purpose of this research was two-fold: to examine the measurement invariance of a German-language version of the Survey of Online Reading Attitudes and Behaviors (SORAB) across a sample of participants from Germany and to compare the results with students from the United States who completed the English version of the SORAB. The results justified comparisons across the samples with respect to the latent factor variables, and the comparisons yielded differences associated with cognitive and behavioral engagement, value/interest, and anxiety. No differences were noted with regard to self-regulation and efficacy for online reading. Implications are framed within broader contextual variables that may have been influential in producing the differences between the samples.
1 Introduction
Information and communication technologies (ICTs) play a prominent role in many people’s daily lives, providing access to information unlike any other time in history. As a result, effectively using ICTs, especially the Internet, for information-seeking purposes has gained increasing prominence world-wide to ensure citizens master the competencies necessary for occupational success and lifelong learning (European Commission 2014; National Governors Association and Council of Chief State School Officers 2010). The challenge, however, is that proficiency in this process requires users to understand the unique strategies and display the dispositional attributes necessary to engage in the cognitively demanding processes associated with searching for and locating information on the Internet.
As a result, educational systems around the world have begun to focus specifically on preparing students to be successful in these processes, creating policies to develop students’ “use of ICT as a learning tool… [and] ICT-based skills in critical thinking, collaboration, and communication” (Fraillon et al. 2014, p. 56). A growing list of countries, including the United States (National Governors Association and Council of Chief State School Officers 2010), Germany (Ministerium für Kultus, Jugend und Sport Baden-Württemberg 2016a, 2016b), Finland (Finnish National Board of Education 2014), and France (French Ministry of Education 2015), among others, have identified specific outcomes associated with the use of ICTs to efficiently and effectively locate information within their educational standards. Commonalities among the standards include the development of students’ capacity to enact processes associated with being able to find/locate, analyze/evaluate, select, use/process, and synthesize information from digital sources. Furthermore, assessments, including the Digital Reading Assessment of the Program for International Student Assessment [PISA] (Organization for Economic Cooperation and Development [OECD] 2011), the International Association for the Evaluation of Educational Achievement’s [IEA] International Computer and Information Literacy Study [ICILS] (Fraillon et al. 2014), and IEA’s ePIRLS 2016 (Mullis et al. 2017) include items designed to measure students’ proficiency in accessing, evaluating, and interpreting information from digital sources, yielding the potential for cross-cultural comparisons of student performance among countries who participate in these assessments (see Naumann 2015; Paul et al. 2017).
The aforementioned steps are important; yet, they are insufficient for developing a comprehensive understanding of all facets of online inquiry. In particular, there is also a need to examine how dispositional attributes, such as self-efficacy, motivation, and anxiety, may impact these processes. There is currently a lack of valid means to do so, especially within cross-cultural examinations. Extending the authors’ previous research (see Putman 2014; Putman et al. 2015) on middle school students’ strategies and self-regulation in using the Internet, this investigation was undertaken to further explore affective attributes, including self-efficacy and anxiety, associated with information-seeking activities online. Through the validation of a German version of the Survey of Online Reading Attitudes and Behaviors (SORAB), which measures cognitive strategies and affective attributes, an exploratory comparison of students from Germany and the United States was conducted.
2 Literature review
Few would argue that school-age youth spend significant amounts of time using the Internet for entertainment, to communicate, and to locate information (Feierabend et al. 2016; Lenhart 2015; Organization for Economic Cooperation and Development [OECD] 2015). While use of the Internet has taken a larger role in classrooms as students are required to search for and locate information online as part of schoolwork, many of these pursuits are engaged in as part of leisure and social activities that occur more frequently outside of school settings (OECD 2015). This is important, yet problematic, since despite spending extended periods of time using the Internet, the reality is that many students are underprepared for the rigors and intricacies presented by the online environment (Greene et al. 2018; Hinostroza et al. 2018; Salmerón et al. 2018). As a result, various researchers (see Putman 2014; Kanniainen et al. 2019) continue to cite the need to understand better the key skills, strategies, and dispositions necessary for students to engage in information-seeking activities on the Internet.
Various models and frameworks have been proposed to capture the processes associated with information-seeking activities on the Internet, also referred to as online inquiry (see Putman 2014). Consistent among them is the conclusion that those engaged in information seeking must employ a strategic approach. In general, Internet search processes include: (1) defining the problem; (2) developing guiding questions and choosing search terms; (3) searching for and locating information; (4) scanning and evaluating the list of results; (5) evaluating the suitability of located information, e.g., trustworthiness and reliability; and (6) synthesizing the information from multiple sources (see Brand-Gruwel et al. 2009; Frerejean et al. 2019; Leu et al. 2013). Within these processes, attention to specific cognitive and metacognitive strategies, as well as dispositional attributes such as motivation and efficacy, maximizes the opportunity for success (Goldman et al. 2012; Greene et al. 2018; O’Byrne and McVerry 2009).
2.1 Cognitive strategies for online inquiry
Searching for and locating information on the Internet requires a problem-solving approach, which is augmented through the use of effective navigational strategies (Hahnel et al. 2015; Kiili et al. 2018; Naumann and Salmerón 2016). Specific cognitive processes are necessary, including: a) demonstration of strategic, goal-directed thinking before and during inquiry (Cho and Afflerbach 2017; Leu et al. 2013); b) active monitoring while using strategies during inquiry to locate relevant content and evaluate its suitability (Barzilai et al. 2018; Greene et al. 2018); and c) regular synthesis of information from diverse sources (Goldman et al. 2012; Leu et al. 2013). Students must also engage in ongoing reflection as the search process proceeds (Frerejean et al. 2019; Winne and Hadwin 2008). Multiple studies have shown that individuals were more successful in locating and evaluating information when they engaged in these behaviors in online environments (Brand-Gruwel et al. 2017; Cho and Afflerbach 2017; Coiro et al. 2018). Furthermore, within these broad categorizations there is a related set of subskills that increase the likelihood of accessing task- and information-relevant websites, thereby maximizing comprehension of material. We examine these subskills within the broader principles of self-regulation and cognitive and behavioral engagement for online inquiry.
Self-regulation
Self-regulated learning (SRL) is described as an “active, constructive process whereby learners set goals for their learning and then attempt to monitor, regulate and control their cognition, intentions and behavior, guided and constrained by their goals and the contextual features of the environment” (Pintrich 2000, p. 453). The cognitive strategies associated with online inquiry are highly aligned with self-regulation, as SRL proceeds in a predictable sequence of steps or phases that begins as learners form a perception of a task based upon their prior knowledge or memories of similar situations experienced in the past (Barzilai et al. 2018; Greene et al. 2018; Winne and Hadwin 2008). Subsequently, they develop a plan related to task-specific goals, execute strategies they hypothesize as necessary to meet the goals, and use monitoring to determine if goals or strategies require adaptation (Winne and Hadwin 2008). Upon task completion, learners reflect upon their performance, identifying changes that would enable them to complete a similar task more effectively (Winne and Hadwin 2008).
Proficient online readers have been shown to be adept at self-monitoring, collecting and analyzing evidence and using this information to adapt and revise processes and strategies while engaging with multiple Internet sources (Brand-Gruwel et al. 2017; Cho et al. 2017; Naumann 2015). According to Cho et al. (2017), strategic online readers reflect upon more than “what to find” and “how to access it” (p. 716); they also engage in inferential planning, enabling them to find information more aligned with their questions and goals as they predict where links may lead (Coiro et al. 2018; Hahnel et al. 2015; Kiili et al. 2018; Naumann and Salmerón 2016). This active monitoring has also been associated with refined navigation strategies: students were more selective in accessing links and in adapting their search process, positively impacting their comprehension of content and the construction of meaning as information was synthesized across multiple texts (Goldman et al. 2012; Greene and Azevedo 2007).
Cognitive and behavioral engagement
Guthrie et al. (2012) characterized engagement as “multidimensional” (p. 602) as it is comprised of both behavioral and cognitive attributes. Behavioral engagement involves active participation in an activity, as demonstrated through effort and persistence, while cognitive engagement is associated with the mental efforts necessary to accomplish a task, including the use of self-regulatory strategies. Each dimension of engagement demonstrates notable linkages to several facets associated with successful searches for information. For example, behavioral engagement has been linked to increased time spent selecting sources to examine, total number of sources read, and overall effort (List et al. 2018). Cognitive engagement, on the other hand, is highly associated with comparing, contrasting, and corroborating information found within the search process, which maximizes success when engaging with multiple documents (Anmarkrud et al. 2014; Cho 2014; Goldman et al. 2012). In sum, various researchers (Brante and Strømsø 2018; Naumann 2015) have confirmed the importance of high levels of behavioral and cognitive engagement in producing frequent visits to task-relevant pages and a greater willingness to adapt processes and behaviors based on search results, thereby maximizing the likelihood of successfully locating and accessing relevant information.
2.2 Affective and dispositional attributes associated with online inquiry
While there is clear evidence regarding the important role of cognitive processes and strategies within online inquiry, researchers have also highlighted the need to explore further the relationships of dispositional attributes, including self-efficacy and motivation, within online inquiry (Naumann 2015; Zylka et al. 2015).
Self-efficacy
Research has emerged showing self-efficacy to be positively associated with strategy use and self-regulation within online reading tasks (Tsai and Lin 2004; Hofman et al. 2003; Senkbeil and Ihme 2017; Winne and Hadwin 2008). For example, positive self-efficacy was associated with increased self-monitoring within environments that include hypermedia (Greene et al. 2018; Moos and Azevedo 2009; Senkbeil and Ihme 2017), as would be found on the Internet. Notably, self-monitoring helps an individual to be more responsive to the ongoing results of the search for information, adapting and changing strategies as necessary to maximize success, which subsequently improves self-efficacy. Salmerón et al. (2018) found that students who were more confident, i.e., self-efficacious, in using the Internet for informational purposes were also more persistent in their searches. On the other hand, poor self-efficacy limited the use of the cognitive and metacognitive strategies inherently necessary within online tasks (Moos and Azevedo 2009; Naumann 2015). Students with poor self-efficacy were more likely to disengage from the inquiry process, especially when success was not immediately forthcoming, and this disengagement decreased subsequent motivation to engage with online resources (Naumann 2015).
Value and interest
Motivation is often a determinant of behavior, with value and interest representing important components associated with motivation. For example, when an individual has a personal interest in a task or topic, the person is more likely to value opportunities to engage with related information, and subsequently will experience motivation. Moos (2014) found students who were highly interested in a topic or driven by internal goals were more active in self-monitoring or evaluating content found online, which facilitated later synthesis of information as the students gained greater conceptual knowledge. Interest also created the conditions for students to be more thorough in their browsing, and they were more likely to critically evaluate content (Hofman et al. 2003; Moos 2014). This was especially true when Internet users were seeking information relative to self-selected questions associated with authentic problem-based tasks or given choice and flexibility within the search process. Value and interest have also been associated with individual persistence in situations or tasks that were cognitively demanding, such as those associated with searching for information online. In O’Byrne and McVerry’s (2009) research, persistence was identified as one of four attributes important for successful inquiry. Notably, there is evidence of positive correlations between motivation and persistence, which increases the likelihood of successful searches for information (Moos and Azevedo 2009; Moos 2014; Paul et al. 2017; Salmerón et al. 2018).
Anxiety
Conversely, anxiety demonstrates a negative correlation with self-efficacy and motivation (Naumann and Sälzer 2017; Zylka et al. 2015). Therefore, anxiety represents a barrier to the students as they search for information online. Investigations have revealed detrimental effects of anxiety, including the interruption of cognitive processing (Derakshan and Eysenck 2009) and a lack of persistence in pursuing searches (Brosnan et al. 2012). However, conclusions have been formed within examinations of computer anxiety or Internet anxiety, in general (see Paul and Glassman 2017). There is a lack of current research that has specifically examined the impact of anxiety on tasks that involve locating information in online environments, i.e., online inquiry.
In sum, it is clear that seeking relevant information online introduces new challenges for the reader given the advanced cognitive demands as well as the dispositional and affective attributes required for continuous motivation and engagement. As a result, there is a need to investigate and understand factors impacting students’ information-seeking activities on the Internet, especially within academic activities.
2.3 International comparisons: Germany and the United States
Germany and the United States, the two countries from which the participants in this research were drawn, have not been compared in aspects associated with online information-seeking activities. This is largely due to variance in participation in PISA, ICILS, and, recently, ePIRLS. For example, the U.S. did not complete the optional ICT portion of PISA 2012 and began participating in ICILS in 2018, while Germany was not among the participants reported by Mullis et al. (2017) in the ePIRLS report. Yet, the two countries share multiple characteristics that would seemingly yield cross-cultural insights. As noted previously, both countries have identified conceptually similar concepts and processes related to inquiry within their educational standards (see Ministerium für Kultus, Jugend und Sport Baden-Württemberg 2016a, 2016b; National Governors Association and Council of Chief State School Officers 2010). In addition, each country has high rates of Internet use for school-related purposes. For example, in Germany, 70% of 9- to 13-year-olds report using search engines at least once a week, and it has been estimated that 50–60% of students use the Internet for out-of-school learning activities, spending about 39 min per day using the computer and the Internet (Feierabend et al. 2016; Medienpädagogischer Forschungsverbund Südwest 2014). In the United States, it has been estimated that more than 90% of school-age children are online using the Internet as a tool for gathering information, with more than 30% using it daily for this purpose (Madden et al. 2013).
There also appear to be differences in the preparation of students to engage in online inquiry activities, despite similarities in educational standards. Prior research has shown that many German teachers do not emphasize methods to access online information or teach students to examine information found online for reliability and trustworthiness (Eickelmann et al. 2014; Fraillon et al. 2014). Teachers in the U.S., on the other hand, appear more likely to require students to engage in information searching within a teacher-directed approach, as opposed to searching for authentic purposes (Hutchison and Reinking 2011). These contrasts in instruction may manifest themselves differently with respect to the strategies and attitudes students demonstrate as they seek information online. Previous research by the authors (see Putman 2014; Putman et al. 2015) revealed similar tendencies with regard to student use of the Internet; yet, there were also differences in the capacity to self-regulate and in the levels of persistence and motivation displayed by the students.
2.4 Purpose of the study
Given this context and due to a long-term research collaboration between the authors’ universities within the two countries, this investigation sought to answer questions specifically comparing learners in Germany and America. Acknowledging that cross-cultural research is challenging given the necessity of assuming the equivalency of context and that cognitive processes and affective variables have proven to be sensitive to cultural differences (Heine and Buchtel 2009), the authors specifically sought to compare German and American students’ strategies and affective (dispositional) attributes related to searching for and locating information online. Given the exploratory nature of this research, it is important to note the sample was not intended to be representative of all students in either country. As such, comparisons of students from the two different countries in this sample may be used to identify preliminary similarities and differences, providing impetus for future investigations. In this study we sought to examine the following research questions:
-
Research question 1 (RQ1): What is the measurement invariance of the SORAB across samples from Germany and the United States?
-
Research question 2 (RQ2): What are the differences between German and American students’ cognitive strategies and affective attributes for online inquiry?
-
Research question 3 (RQ3): How do German and American students compare regarding Internet experience and usage?
3 Method
3.1 Participants
The participants for the investigation were 784 5th and 6th grade students in the United States and Germany. The sample of 426 students from the U.S. was drawn from four schools in the Midwest and consisted of 230 5th graders and 196 6th graders. Gender was equally distributed, with males and females each represented by 213 participants. The sample of 358 participants from the German state of Baden-Württemberg comprised 107 5th graders and 251 6th graders from three schools; 159 were male and 188 were female (one participant did not indicate gender).
3.2 Instrument
SORAB was developed to measure students’ behavioral and dispositional attributes relative to online inquiry tasks. Two versions of SORAB were used in this research: the original English version was used with the American participants (Putman 2014), and a version translated into German was used with the German participants. Each version included 53 items (see Appendix A), measuring:
Cognitive Strategies:

1. Self-regulatory behavior (SRL) (14 items)
2. Cognitive and behavioral engagement (CBE) (11 items)

Affective Dispositions:

1. Efficacy for online reading (EOR) (6 items)
2. Value/interest (VI) (13 items)
3. Anxiety (AXT) (8 items)
With the exception of the subscale measuring self-regulatory behavior, items were written on a 4-point response scale ranging from strongly disagree (1) to strongly agree (4). The self-regulatory behavior subscale was also written on a 4-point scale, with response options of “never”, “sometimes”, “often”, and “all the time.” Each subscale, with the exception of Anxiety, was scored with the most positive response receiving the highest point value (4). Items on the Anxiety subscale were reverse-coded so that a higher score reflects a more positive disposition, i.e., lower anxiety. The total score on the instrument is the sum of the subscale scores; higher scores are theorized to indicate greater cognitive strategy use as well as more positive dispositional attributes.
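This scoring scheme can be sketched in a few lines of code. The item-to-subscale mapping and column names below are hypothetical placeholders, not the actual SORAB items:

```python
import pandas as pd

# Hypothetical item columns per subscale (placeholder names, not actual SORAB items)
SUBSCALES = {
    "SRL": [f"srl_{i}" for i in range(1, 15)],  # 14 self-regulation items
    "CBE": [f"cbe_{i}" for i in range(1, 12)],  # 11 engagement items
    "EOR": [f"eor_{i}" for i in range(1, 7)],   # 6 efficacy items
    "VI":  [f"vi_{i}" for i in range(1, 14)],   # 13 value/interest items
    "AXT": [f"axt_{i}" for i in range(1, 9)],   # 8 anxiety items
}

def score_sorab(responses: pd.DataFrame) -> pd.DataFrame:
    """Score 4-point item responses; anxiety items are reverse-coded (5 - x)."""
    scored = responses.copy()
    scored[SUBSCALES["AXT"]] = 5 - scored[SUBSCALES["AXT"]]
    subscale_scores = pd.DataFrame(
        {name: scored[items].sum(axis=1) for name, items in SUBSCALES.items()}
    )
    # The total score is the sum of the five subscale scores
    subscale_scores["total"] = subscale_scores.sum(axis=1)
    return subscale_scores
```

Under this scheme, a respondent who answers strongly agree (4) on every item receives the minimum anxiety contribution (8 points after reverse-coding, i.e., low reported anxiety scored positively) and the maximum on the remaining subscales.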
A translated version of SORAB (SORAB-G) was created using the protocol established by Peña (2007). The process consisted of the following steps:
-
Co-author translated SORAB into German;
-
Translated instrument was examined by two individuals with dual language proficiency;
-
The two SORAB versions were compared, and, where applicable, differences were noted and discussed. Modifications were proposed based on examinations and comparisons of the two forms to ensure maximum coherence and consistency of meaning of individual items across the two forms;
-
A pilot test was conducted at two schools. Modifications to the instrument were proposed based on their responses in the pilot.
-
Back translation was conducted by an individual fluent in English and German;
-
Final examination of the instrument was conducted by a faculty member from the German co-authors’ institution who was fluent in English and German.
While not used within the computation of a score on SORAB, five additional questions (see Appendix A) were included within the administration of the instrument to examine factors specific to Internet use that may have a related impact on strategies, attitudes, and beliefs. These included comfort level for Internet use, overall experience (time) using the Internet, frequency of Internet use at school and at home, and average time spent using the Internet per week.
3.3 Data collection
In the United States, SORAB was administered to all participants in their respective school classroom or computer lab by a trained research assistant or the classroom teacher, who was provided with directions for administration. Administration was primarily completed using computers or tablets; however, several teachers expressed a preference for paper copies of the instrument, thus a portion of the sample completed a paper copy of the instrument that was identical to the electronic copy.
Participants in Germany completed SORAB-G in their respective school classrooms. A paper copy of the instrument was administered by a co-author or a trained research assistant. Prior to administration, proctors presented a list of pertinent vocabulary to ensure participants understood all terms included in SORAB-G. A trained research assistant collected and compiled all survey responses in SPSS for subsequent data analysis.
3.4 Data analysis
Relative to RQ1, Cronbach’s alpha was used to assess the internal consistency of responses to the survey items. Evidence for the structural aspect of validity (Messick 1995) was provided through confirmatory factor analysis (CFA). The goodness-of-fit indices included the standardized root mean square residual (SRMR), root mean square error of approximation (RMSEA), comparative fit index (CFI), incremental fit index (IFI), goodness-of-fit index (GFI), adjusted goodness-of-fit index (AGFI), and the 90% confidence intervals of RMSEA. Some studies have questioned the validity of Hu and Bentler’s (1999) two-index strategy for model fit assessment (Fan and Sivo 2005), suggesting that it rests on very restrictive assumptions and tends to reject adequately fitting models (Marsh et al. 2004). Therefore, this study placed more emphasis on combinations of multiple goodness-of-fit indices. The suggestions provided by MPlus to add paths from observable variables to latent variables were not followed because doing so could mechanically improve model fit in ways not suggested by theory (MacCallum et al. 1992). Error covariances between observable variables within each latent construct were not added because the models were acceptable without them (Figs. 1 and 2).
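For reference, Cronbach’s alpha for a subscale can be computed directly from the item-score matrix. The sketch below is illustrative only, not the software used in the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Alpha = k/(k-1) * (1 - sum of item variances / variance of total score).

    items: (n_respondents, k_items) matrix of item scores.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variances
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

Perfectly correlated items yield an alpha of 1; values at or above the conventional .70 criterion (Nunnally 1978) are typically treated as acceptable.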
As the sample consisted of students from the United States and Germany, it was first necessary to test whether the factor loadings, intercepts, residual variances, and factor variances/covariances were the same between participants from each country in the measurement model. Dimitrov (2010) suggested three steps and five models to test factorial invariance across groups. The three steps are: (a) configural invariance; (b) measurement invariance; and (c) structural invariance. Model 0 tests configural invariance by fitting a baseline model for each group separately without any constraints. Models 1–3 test measurement invariance by constraining: (a) the factor loadings to be the same across the groups (Model 1, weak measurement invariance); (b) both factor loadings and item intercepts to be the same across the groups (Model 2, strong measurement invariance); and (c) factor loadings, item intercepts, and residual variances to be the same across the groups (Model 3, strict measurement invariance). Finally, structural invariance was tested by constraining factor loadings, item intercepts, and factor variances/covariances to be the same across the groups (Model 4). Statistically significant changes in chi-square values relative to the changes in degrees of freedom, or decreases in CFI values of more than .01, were used to flag significant differences when testing the models (Cheung and Rensvold 2002). After the establishment of invariant factor loadings and partial invariance of intercepts, Pearson correlations and multivariate analysis of variance (MANOVA) were employed to compare the responses from the German and the U.S. students. Finally, percentage frequencies were computed for the Internet experience and usage questions that were incorporated into SORAB.
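The decision rule for comparing nested invariance models can be expressed as a small helper: a constrained model is flagged as non-invariant if the chi-square difference test is significant or if the CFI drops by more than .01 (Cheung and Rensvold 2002). This is an illustrative sketch, not the MPlus procedure used in the study:

```python
from scipy.stats import chi2

def noninvariance_flagged(delta_chisq: float, delta_df: int, delta_cfi: float,
                          alpha: float = 0.05, cfi_drop: float = 0.01) -> bool:
    """Flag a constrained model whose fit is meaningfully worse than the freer model.

    delta_chisq / delta_df: change in chi-square and degrees of freedom.
    delta_cfi: change in CFI (negative values mean fit worsened).
    """
    p = chi2.sf(delta_chisq, delta_df)  # chi-square difference test p-value
    return bool(p < alpha or delta_cfi < -cfi_drop)
```

For example, a large chi-square change over few degrees of freedom combined with a CFI drop of .05 would be flagged, while a small, non-significant chi-square change with a CFI drop of no more than .01 would not.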
4 Results
The primary results are presented relative to the respective research questions.
4.1 Measurement invariance of SORAB (RQ1)
Cronbach’s alpha values are reported in Table 1 for each construct and for the students from each country. Although the values were relatively low for the German students’ responses to items measuring EOR and CBE, and for the U.S. students’ responses to items measuring EOR, these values were acceptable according to Nunnally’s (1978) suggested criterion of .70.
Results from the confirmatory factor analyses of the data for each sample are presented in Table 2.
The combination of goodness-of-fit indices in Table 2 suggests that the U.S. and German sample data fit our theoretical model, and the model from the exploratory factor analyses in Putman's (2014) previous study, very well, with the exception of one item (Item 7 for VI). The item reads, “I can relax better if I read something in a book or magazine rather than on the Internet.” In comparison to the other items in this construct, which focus on articulating a preference to read on the Internet, this item presents a reversed focus. Prior research (see Sonderen et al. 2013) has revealed that items incorporating a contrary focus can cause misunderstandings, especially after translation. Our data screening showed that the variance of this item was larger than that of all other items, indicating that some respondents may have been confused by its meaning.
Results of the factorial invariance tests between the U.S. and German samples are presented in Table 3. We present the results in the sequence of the three steps described in our methodology.
4.1.1 Configural invariance
Model 0 tested the configural invariance of SORAB by fitting a baseline model for the U.S. group and the German group separately without any constraints. The results indicated that the data fit the model, CFI = .84, RMSEA = .05, and SRMR = .04.
4.1.2 Measurement invariance
Models 1–3 tested measurement invariance. In Model 1, the factor loadings were constrained to be the same across the U.S. and German groups. Our data fit the model well, with a change in CFI of −.01 and a change in chi-square values that was not statistically significant (Δχ2 = 45.76, Δdf = 47, p > .05).
In Model 2, both factor loadings and item intercepts were constrained to be the same across the U.S. and German students. Our data suggest a misfit, with a change in CFI of −.05 and a statistically significant change in chi-square values (Δχ2 = 798.01, Δdf = 52, p < .01). The modification indices provided by the MPlus program suggested that the chi-square value for the goodness-of-fit of the model would drop greatly (>20) if the intercepts of 10 items were not constrained to be equal across the two groups. As a result, the intercepts of the following items were freed: Y7, Y14, Y20, Y23, Y27, Y32, Y38, Y45, Y52, and Y53, as well as the latent intercepts of CBE and AXT.
Freeing the intercepts of these 10 items and of the two latent constructs (CBE and AXT) was necessary to render the change in chi-square relative to the weak invariance model (M1) statistically non-significant (Table 3). This result suggests the presence of item bias (or differential item functioning) between the samples for the 10 items described above; that is, U.S. and German students responded to these 10 items differently. Moreover, the residual variances of nine items (Y2, Y10, Y13, Y15, Y28, Y29, Y47, Y52, and Y53) were found to be non-invariant between the two samples (Model 3). Thus, only evidence for partial measurement invariance between the U.S. and German samples was found in our study.
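As a rough check on the model comparisons above, the chi-square difference tests can be reproduced from the reported statistics. The sketch below uses the Wilson-Hilferty normal approximation to the chi-square upper tail; this stdlib-only helper is our own illustration, not part of the study's MPlus analyses.

```python
from math import erfc, sqrt

def chi2_sf(x, df):
    """Approximate upper-tail p-value of a chi-square statistic via the
    Wilson-Hilferty normal approximation (accurate enough for df > ~10)."""
    z = ((x / df) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * df))) / sqrt(2.0 / (9.0 * df))
    return 0.5 * erfc(z / sqrt(2.0))

# Model 1 vs. Model 0: metric invariance holds (p > .05)
p_metric = chi2_sf(45.76, 47)

# Model 2 vs. Model 1: scalar invariance rejected (p < .01)
p_scalar = chi2_sf(798.01, 52)

print(f"metric: p = {p_metric:.3f}; scalar: p = {p_scalar:.3g}")
```

A Δχ² near its degrees of freedom (45.76 on 47 df) is unremarkable, while 798.01 on 52 df is far in the tail, matching the reported decisions.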
4.1.3 Structural invariance
The structural invariance was tested by constraining the factor variances and covariances to be the same (Model 4). Our data fit the model very well and support partial invariance of the measurement model between the U.S. and German samples (M4).
4.2 Comparisons between U.S. and German students’ cognitive strategies and affective attributes (RQ2)
Data revealed that the measurement models for the U.S. and German samples have invariant factor loadings (M1), partially invariant intercepts (M2P), and partially invariant residual variances. These results justify comparisons across the samples with respect to the relations between latent factor variables (Dimitrov 2010). Therefore, we further examined the convergent validity (the external aspect of validity) by correlating the factors with each other and by comparing means between the two samples.
The results showed substantial similarity between the U.S. and German samples (Table 4). These results were consistent with the CFA models presented in Figures 1 and 2 and with Model 1 in the invariance tests (invariant factor loadings).
Descriptive statistics for SORAB scores are presented in Table 5, and MANOVA results indicated a statistically significant difference in the linear combination of the five latent constructs, Wilks’ λ = 0.91, F (5, 778) = 15.93, p < .001, partial η2 = .09. According to Cohen (1992), this is a medium effect size. Tests of between-subjects effects indicated that, in comparison to German students, U.S. students reported (a) higher levels of CBE, F (1, 782) = 16.93, p < .001, partial η2 = .02 (small effect size); (b) higher levels of VI, F (1, 782) = 6.67, p < .001, partial η2 = .01 (small effect size); and (c) lower levels of AXT, F (1, 782) = 23.96, p < .001, partial η2 = .03 (small effect size). Differences between the U.S. and German students on EOR and SRL were not statistically significant.
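The reported effect sizes can likewise be recovered from the F statistics and their degrees of freedom using the standard identity η²p = F·df1 / (F·df1 + df2). The helper below is a small illustration of that arithmetic, not the authors' analysis code.

```python
def partial_eta_squared(F, df_effect, df_error):
    """Partial eta-squared recovered from an F statistic and its
    degrees of freedom: eta^2_p = (F * df1) / (F * df1 + df2)."""
    return (F * df_effect) / (F * df_effect + df_error)

# Multivariate test, F(5, 778) = 15.93 -> rounds to .09 (medium per Cohen)
print(round(partial_eta_squared(15.93, 5, 778), 2))

# Between-subjects effects, each F(1, 782); all round to the reported values
for label, F in [("CBE", 16.93), ("VI", 6.67), ("AXT", 23.96)]:
    print(label, round(partial_eta_squared(F, 1, 782), 2))
```

Each recovered value matches the partial η² reported in the text (.09, .02, .01, and .03, respectively), and the multivariate value is also consistent with 1 − λ = 1 − 0.91 for a two-group design.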
4.3 Comparisons between U.S. and German students’ Internet usage and experience (RQ3)
The percentage frequency distributions for the Internet usage and experience questions are presented in Table 6. Results indicated that most of the American students had used the Internet for more than three years (85.8%), while German students were most likely to have between six months and two years of experience. Perhaps relatedly, a larger percentage of German students (14.5%) were uncomfortable using the Internet in comparison to their American counterparts (2.6%). With regard to frequency of use, students from the U.S. were more likely to use the Internet at school at least once per week (85.9% vs. 56.0%), while more than 90% of students in each group used the Internet at home at least once per week. Finally, more than 50% of German students indicated they used the Internet for more than five hours each week, compared to slightly more than 35% of the American students.
5 Discussion
This research was directed towards the cross-cultural validation of SORAB and provided an initial examination of the differences in strategies and affective attributes for online inquiry between samples of students from the United States and Germany. As such, it provides insights about students in two countries that have high rates of Internet use as well as educational standards focused on preparing students to engage in information-seeking activities online. Results from this study suggest weak measurement invariance (invariant factor loadings, partially invariant intercepts, and partially invariant residual variances) between the U.S. and German students’ responses to SORAB. Although only 10 out of 52 items were noted to have different intercepts, researchers should be cautious when using SORAB to compare German and U.S. students. The invariant factor loadings, however, suggest that comparisons between the two samples with respect to relationships among the constructs are warranted.
5.1 Differences in cognitive and behavioral engagement
Comparisons of the SORAB results indicated that students from the United States demonstrated higher levels of cognitive and behavioral engagement within online inquiry than the German students. The cognitive and behavioral engagement subscale focuses on the strategic, goal-directed behaviors that occur before and during inquiry, each of which contributes to successfully locating relevant information (Cho and Afflerbach 2017; Frerejean et al. 2019). Included, for example, are items that address formulating a question before searching, making inferential judgments or predictions when selecting links, and determining whether information on a website is reliable or trustworthy. Importantly, though, the items assess respondents’ beliefs, i.e., confidence, in their capacity to engage in these behaviors, as opposed to the actual enactment of the strategies within the process of inquiry. Stated another way, the findings did not demonstrate that the U.S. students were more proficient in the strategies that have been shown to be important for effective inquiry (Brante and Strømsø 2018; Naumann 2015), but they appeared more confident in their ability to engage in them.
One potential explanation for the difference is the longer period of time the American students had been using the Internet (see Table 6). Notably, familiarity has been shown to impact engagement positively (Naumann 2015). German students tend to start using the Internet later (OECD 2015), and over 80% of the German participants in this investigation had fewer than two years of experience, while more than 85% of the students from the U.S. had more than three years. Furthermore, the frequency of Internet use in schools by American students was much higher. Despite high rates of Internet use overall, multiple studies have shown that German students are less likely to use the Internet in school (see Eickelmann et al. 2014; Paul et al. 2017); yet, the use of the Internet in school contexts has proven impactful on strategic use and engagement (Mullis et al. 2017; OECD 2015). Experience and familiarity, in combination, could contribute to positive beliefs. Finally, given that students from the United States have been shown to overestimate their abilities, a portion of the difference may be attributable to cultural variation in response styles (Putman et al. 2015).
5.2 Differences in value and interest
The value/interest subscale measured participants’ views of the Internet as a valuable learning tool, including a preference to use it over traditional means, i.e., books. Within the results, the American students demonstrated a more positive attitude toward using the Internet as opposed to traditional print materials. Examining Table 6, more than 50% of the German students used the Internet for more than five hours per week, compared to 35% of the Americans, which would seemingly present contradictory evidence. However, several studies (see Naumann 2015; Salmerón et al. 2018) have shown a preponderance of Internet use for social activities rather than information-seeking purposes by German youth, which may be a contributing factor to the discrepancy. Not all activities that use ICT enhance knowledge and skills, and use of the Internet for social activities does not develop the skills necessary to engage in information-seeking activities, including task-specific navigation.
5.3 Differences in anxiety
Prior research has shown anxiety to be negatively correlated with ICT engagement and motivation to engage with digital texts (Naumann and Sälzer 2017; Zylka et al. 2015). In light of this research as well as the findings relative to the value/interest subscale, the higher levels of anxiety expressed by the German students were consistent with prior research. Examining usage and experience data, over 95% of the students from the United States indicated they were comfortable or very comfortable using the Internet, compared to 84% of the students from Germany. What is noteworthy is that more than 14% of the German students indicated they were uncomfortable using the Internet. Thus, it could be inferred that a small portion of the German sample may be driving the significant differences related to anxiety.
5.4 No significant differences in self-regulation and self-efficacy
Results revealed no differences between the samples in comparisons of efficacy for online reading (EOR) and self-regulation within online inquiry (SRL). Senkbeil and Ihme (2017) have noted the positive correlation of ICT-related self-efficacy with ICT-related self-regulation; thus, this result is aligned with previous findings. Re-examination of the items associated with the EOR subscale, however, revealed that they were focused more broadly on skills associated with browser use, e.g., “I feel confident that I can use a browser to navigate the Internet.” Thus, generalized experiences of simply using the Internet should contribute to these feelings of confidence. The items on the SRL subscale were focused on the cognitive, affective, metacognitive, and motivational processes before, during, and after inquiry. Notably, mean scores for each group were very similar, falling between “sometimes” and “often” engaging in a specific behavior. Each of the self-regulatory processes addressed within SORAB, e.g., planning, establishing goals, and monitoring progress towards those goals, represents an important skill that adolescents do not appear to consistently engage in while conducting online inquiry (Brand-Gruwel et al. 2009). Thus, the results appear to confirm prior research.
Additionally, the presence of a quantifier within each item that required respondents to reflect on their actions in specific situations, e.g., “When I am conducting research on the Internet, I stop and think about how well I am doing and change strategies if necessary,” may have influenced the results. Given the contextually specific nature of tasks that require ongoing self-regulation (see Greene and Azevedo 2007; Greene et al. 2018), the lack of difference may be attributable to the need for greater contextualization of the prompts to facilitate understanding and to differentiate the degree to which an analysis of needs related to the online search for information could occur. The lack of difference may also reflect a social desirability response bias: the participants understood the necessity of performing the activity and answered based on that understanding.
6 Implications
This investigation represents the first attempt to directly compare German and American students’ cognitive strategies and affective attributes, and the results indicated it is appropriate to use SORAB for this purpose. It is also clear there are some differences between the two groups. Noting the exploratory nature of the research, additional investigations are warranted to further explore the differences identified between the samples.
Before addressing future research, it is important to acknowledge that cross-cultural research necessitates the assumption of the equivalency of context. Thus, direct efforts must be made to address the additional contextual features present in each country/region, including access, impetus for use, and predominant device, which may contribute to the reported differences as a result of systematic errors. The presence of these errors must be accounted for and limited in future research through direct efforts to more effectively triangulate data. To limit potential systematic bias, this may include the incorporation of methods and processes to examine the activities for which the Internet is being used as well as the devices being used, both in and outside of school, by the students in each of the respective countries. Within the classroom, concurrently collecting observations of teachers from each country implementing activities that incorporate conceptually-aligned educational standards would also address the support and guidance accessible to students while using the Internet for information-seeking purposes. If students are spending large amounts of time online without guidance, they are not likely to engage in the metacognitive strategies and navigation skills necessary for effective searching, as evidenced in previous studies (see Naumann and Salmerón 2016). Furthermore, additional questions may be incorporated into SORAB that are directed toward assessing student behaviors and device use.
Perhaps as importantly, there is a need to combine information collected on the attitudes and behaviors measured through SORAB with actual performance in digital reading tasks. This could include observations of students engaged in searching the Internet for specific information and the use of think-aloud protocols to examine understanding as it relates to decision-making processes within these searches. Doing so will allow researchers to extend the present findings related to cognitive and behavioral engagement and its influence in the inquiry process. Such research could also be used to examine whether the influence of a positive self-efficacy for online reading is mediated by value/interest as it relates to inquiry tasks (see Christoph et al. 2015). Noting the focus of large-scale assessments, i.e., ePIRLS, on using more authentic, school-like assignments to measure performance, there may be additional opportunities to examine the related impact of these attributes.
Lastly, researchers would be advised to employ Item Response Theory (IRT) or Differential Item Functioning (DIF) analyses to continue examining the U.S. and German students’ response patterns to SORAB. Doing so will allow more nuanced examinations that address the differences within the subscales as well as further delineate item-level differences that may occur within the subscales that demonstrated a significant difference. Such analyses may provide information regarding the relative emphasis of these factors, in combination with contextual variables (see Naumann 2015; Salmerón et al. 2018), within the inquiry process.
6.1 Limitations
The present study contributes to describing cross-cultural differences in the behaviors and attitudes associated with finding information in an online environment, yet there are several limitations within this research that must be acknowledged. Foremost among them, the participants were recruited through convenience sampling in each country and represent students from small geographic areas; thus, the results may not generalize to samples more representative of the general student population in each country. Additional research should incorporate more randomized sampling techniques to ensure broader representation of various populations. This could include expanding the age span to incorporate older, high school-age students or university students who may engage in higher levels of Internet use. In addition, as with any self-report measure, participant responses may have been influenced by a social desirability bias as students sought to answer each question based on what they perceived to be the sought-after or anticipated response. This social desirability bias could also challenge the assumption of normally distributed responses to items, which could invalidate the MANOVA results. Fortunately, the assumption checks for multivariate outliers, normality, homoscedasticity, homogeneity of variance, and homogeneity of variance-covariance matrices were all met with the data collected in this study. This does not mean that all the measurements in this study are error-free; errors could also be non-systematic. Therefore, future researchers are strongly recommended to check the model assumptions before interpreting results.
7 Conclusion
Given the ubiquity of Internet access and the knowledge that the majority of research examining various constructs within online inquiry has been conducted on culturally homogeneous samples, it is important to begin to consider how these variables manifest themselves in various international contexts. Leu and his colleagues (Leu et al. 2013) wrote, “Individuals, groups, and societies who can identify the most important problems, locate useful information the fastest, critically evaluate information most effectively, [and] synthesize information most appropriately” (p. 5) will have a distinct advantage in global societies. Thus, cross-cultural comparisons will be important to meaningfully understand how students in countries with a focus on preparation in this area engage in information-seeking activities. This research represents a preliminary step and may further enable comparisons between German and American students that can be utilized to facilitate effective teaching and learning for students across the respective countries.
References
Anmarkrud, O., Bråten, I., & Strømsø, H. I. (2014). Multiple-documents literacy: Strategic processing, source awareness, and argumentation when reading multiple conflicting documents. Learning and Individual Differences, 30, 64–76.
Barzilai, S., Zohar, A. R., & Mor-Hagani, S. (2018). Promoting integration of multiple texts: A review of instructional approaches and practices. Educational Psychology Review, 30, 973–999. https://doi.org/10.1007/s10648-018-9436-8.
Brand-Gruwel, S., Wopereis, I., & Walraven, A. (2009). A descriptive model of information problem solving while using internet. Computers & Education, 53, 1207–1217. https://doi.org/10.1016/j.compedu.2009.06.004.
Brand-Gruwel, S., Kammerer, Y., van Meeuwen, L., & van Gog, T. (2017). Source evaluation of domain experts and novices during Web search. Journal of Computer Assisted Learning, 33, 234–251. https://doi.org/10.1111/jcal.12162.
Brante, E. W., & Strømsø, H. I. (2018). Sourcing in text comprehension: A review of interventions targeting sourcing skills. Educational Psychology Review, 30, 773–799. https://doi.org/10.1007/s10648-017-9421-7.
Brosnan, M., Joiner, R., Gavin, J., Crook, C., Maras, P., Guiller, J., & Scott, A. J. (2012). The impact of pathological levels of internet-related anxiety on internet usage. Journal of Educational Computing Research, 46(4), 341–356. https://doi.org/10.2190/EC.46.4.b.
Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9, 233–255. https://doi.org/10.1207/S15328007SEM0902_5.
Cho, B.-Y. (2014). Competent adolescent readers’ use of Internet reading strategies: A think-aloud study. Cognition and Instruction, 32, 253–289. https://doi.org/10.1080/07370008.2014.918133.
Cho, B.-Y., & Afflerbach, P. (2017). An evolving perspective of constructively responsive reading comprehension strategies in multilayered digital text environments. In S. E. Israel (Ed.), Handbook of research on reading comprehension (2nd ed., pp. 109–134). New York: Guilford Press.
Cho, B.-Y., Woodward, L., & Li, D. (2017). Examining adolescents’ strategic processing during online reading with a question-generating task. American Educational Research Journal, 54, 691–724. https://doi.org/10.3102/0002831217701694.
Christoph, G., Goldhammer, F., Zylka, J., & Hartig, J. (2015). Adolescents’ computer performance: The role of self-concept and motivational aspects. Computers & Education, 81, 1–12. https://doi.org/10.1016/j.compedu.2014.09.004.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
Coiro, J., Sparks, J. R., & Kulikowich, J. M. (2018). Assessing online collaborative inquiry and social deliberation skills as learners navigate multiple sources and perspectives. In J. L. G. Braasch, I. Braten, & M. T. McCrudden (Eds.), Handbook of multiple source use (pp. 34–54). New York: Routledge.
Derakshan, N., & Eysenck, M. W. (2009). Anxiety, processing efficiency, and cognitive performance: New developments from attentional control theory. European Psychologist, 14(2), 168–176. https://doi.org/10.1027/1016-9040.14.2.168.
Dimitrov, D. M. (2010). Testing for factorial invariance in the context of construct validation. Measurement and Evaluation in Counseling and Development, 43, 121–149.
Eickelmann, B., Schaumburg, H., Drossel, K., & Lorenz, R. (2014). Schulische Nutzung von neuen Technologien in Deutschland im internationalen Vergleich. In W. Bos, B. Eickelmann, J. Gerick, F. Goldhammer, H. Schaumburg, K. Schwippert, M. Senkbeil, R. Schulz-Zander, & H. Wendt (Eds.), ICILS 2013. Computer- und informationsbezogene Kompetenzen von Schülerinnen und Schülern der 8. Jahrgangsstufe im internationalen Vergleich (pp. 197–230). Münster: Waxmann.
European Commission (2014). The International Computer and Information Literacy Study (ICILS). Main findings and implications for education policies in Europe. Brussels: European Commission.
Fan, X., & Sivo, S. A. (2005). Sensitivity of fit indices to misspecified structural or measurement model components: Rationale of two-index strategy revisited. Structural Equation Modeling, 12, 343–367. https://doi.org/10.1207/s15328007sem1203_1.
Feierabend, S., Plankenhorn, T., & Rathgeb, T. (2016). JIM-Studie 2016: Jugend, Information, (Multi-)Media. [JIM-Study 2016: Youth, Information, (Multi-Media)]. Stuttgart, Germany: Medienpädagogischer Forschungsverbund Südwest.
Finnish National Board of Education. (2014). Description of seven different transversal competence areas. Retrieved from https://www.oph.fi/download/190839_aiming_for_transversal_competences.pdf
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Gebhardt, E. (2014). Preparing for life in a digital age: The IEA International Computer and Information Literacy Study International Report. Heidelberg, Germany: Springer International Publishing. https://doi.org/10.1007/978-3-319-14222-7.
French Ministry of Education. (2015). Socle commun de connaissances, de competences et de culture. Retrieved from http://cache.media.education.gouv.fr/file/17/45/6/Socle_commun_de_connaissances,_de_competences_et_de_culture_415456.pdf.
Frerejean, J., Velthorst, G. J., van Strien, J. L. H., Kirschner, P. A., & Brand-Gruwel, S. (2019). Embedded instruction to learn information problem solving: Effects of a whole task approach. Computers in Human Behavior, 90, 117–130. https://doi.org/10.1016/j.chb.2018.08.043.
Goldman, S. R., Braasch, J. L. G., Wiley, J., Graesser, A. C., & Brodowinska, K. M. (2012). Comprehending and learning from Internet sources: Processing patterns of better and poorer learners. Reading Research Quarterly, 47, 356–381. https://doi.org/10.1002/RRQ.027.
Greene, J. A., & Azevedo, R. (2007). Adolescents’ use of self-regulatory processes and their relation to qualitative mental model shifts while using hypermedia. Journal of Educational Computing Research, 36, 125–148.
Greene, J. A., Copeland, D. Z., Deekens, V. M., & Yu, S. B. (2018). Beyond knowledge: Examining digital literacy’s role in acquisition of understanding in science. Computers & Education, 117, 141–159. https://doi.org/10.1016/j.compedu.2017.10.003.
Guthrie, J. T., Wigfield, A., & You, W. (2012). Instructional contexts for engagement and achievement in reading. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement (pp. 601–634). New York: Springer-Verlag.
Hahnel, C., Goldhammer, F., Naumann, J., & Kröhne, U. (2015). Effects of linear reading, basic computer skills, evaluating online information, and navigation on reading digital text. Computers in Human Behavior, 55, 486–500. https://doi.org/10.1016/j.chb.2015.09.042.
Heine, S. J., & Buchtel, E. E. (2009). Personality: The universal and the culturally specific. Annual Review of Psychology, 60, 369–394. https://doi.org/10.1146/annurev.psych.60.110707.163655.
Hinostroza, J. E., Ibieta, A., Labbe, C., & Soto, M. T. (2018). Browsing the internet to solve information problems: A study of students’ search actions and behaviors using a ‘think-aloud’ protocol. Education and Information Technologies, 23, 1933–1953. https://doi.org/10.1007/s10639-018-9698-2.
Hofman, J. L., Wu, H., Krajcik, J. S., & Soloway, E. (2003). The nature of middle school learners' science content understandings with the use of on-line resources. Journal of Research in Science Teaching, 40, 323–346.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55. https://doi.org/10.1080/10705519909540118.
Hutchison, A., & Reinking, D. (2011). Teachers’ perceptions of integrating information and communication technologies into literacy instruction: A national survey in the United States. Reading Research Quarterly, 46, 312–333.
Kanniainen, L., Kiili, C., Tolvanen, A., Aro, M., & Leppänen, P. H. T. (2019). Literacy skills and online research and comprehension: Struggling readers face difficulties online. Reading and Writing. https://doi.org/10.1007/s11145-019-09944-9.
Kiili, C., Leu, D. J., Utriainen, J., Coiro, J., Kanniainen, L., Tolvanen, A., et al. (2018). Reading to learn from online information: Modeling the factor structure. Journal of Literacy Research, 50, 304–334.
Lenhart, A. (2015). Teens, social media, & technology overview 2015. Washington, D.C.: Pew Research Center. Retrieved from http://www.pewinternet.org/files/2015/04/PI_TeensandTech_Update2015_0409151.pdf
Leu, D. J., McVerry, J. G., O’Byrne, W. I., Kiili, C., Zawilinski, L., Everett-Cacopardo, H., et al. (2011). The new literacies of online reading comprehension: Expanding the literacy and learning curriculum. Journal of Adolescent & Adult Literacy, 55, 5–14.
List, A., & Alexander, P. A. (2018). Cold and warm perspectives on the cognitive affective engagement model of multiple source use. In J. L. G. Braasch, I. Braten, & M. T. McCrudden (Eds.), Handbook of multiple source use (pp. 34–54). New York: Routledge.
MacCallum, R. C., Roznowski, M., & Necowitz, L. B. (1992). Model modifications in covariance structure analysis: The problem of capitalization on chance. Psychological Bulletin, 111, 490–504. https://doi.org/10.1037/0033-2909.111.3.490.
Madden, M., Lenhart, A., Duggan, M., Cortesi, S., & Gasser, U. (2013). Teens and technology 2013. Washington, DC: Pew Research Center’s Internet & American Life Project. Retrieved from http://www.pewinternet.org/~/media/Files/Reports/2013/PIP_TeensandTechnology2013.pdf.
Marsh, H. W., Hau, K. T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Structural Equation Modeling, 11, 320–341. https://doi.org/10.1207/s15328007sem1103_2.
Medienpädagogischer Forschungsverbund Südwest [Media Educational Research Network Southwest]. (2014). KIMStudie 2014. Kinder + Medien, Computer + Internet. Basisuntersuchung zum Medienumgang 6–13-Jähriger in Deutschland. Retrieved from http://www.mpfs.de/studien/kim-studie/2014/
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from person’s responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.
Ministerium für Kultus, Jugend und Sport Baden-Württemberg. (2016a). Wahlfach Informatik an der Hauptschule, Werkrealschule und Realschule. Villingen-Schwenningen: Neckar-Verlag GmbH.
Ministerium für Kultus, Jugend und Sport Baden-Württemberg. (2016b). Gymnasium - Basiskurs Medienbildung. Villingen-Schwenningen: Neckar-Verlag GmbH.
Moos, D. C. (2014). Setting the stage for metacognition during hypermedia learning: What motivation constructs matter? Computers & Education, 70, 128–137.
Moos, D. C., & Azevedo, R. (2009). Self-efficacy and prior domain knowledge: To what extent does monitoring mediate their relationship with hypermedia? Metacognition and Learning, 4, 197–216.
Mullis, I. V. S., Martin, M. O., Foy, P., & Hooper, M. (2017). ePIRLS 2016 international results on online informational reading. Chestnut Hill: TIMMS & PIRLS International Study Center.
National Governors Association Center for Best Practices & Council of Chief State School Officers. (2010). Common core state standards for English language arts & literacy in history/social studies, science, and technical subjects. Washington, DC: Authors. Retrieved from http://corestandards.org/assets/CCSI_ELA%20 Standards.pdf.
Naumann, J. (2015). A model of online reading engagement: Linking engagement, navigation, and performance in digital reading. Computers in Human Behavior, 53, 263–277. https://doi.org/10.1016/j.chb.2015.06.051.
Naumann, J., & Salmerón, L. (2016). Does navigation always predict performance? Effects of navigation on digital reading are moderated by comprehension skills. International Review of Research in Open and Distributed Learning, 17(1). Retrieved from: http://www.irrodl.org/index.php/irrodl/article/view/2113/3586
Naumann, J., & Sälzer, C. (2017). Digital reading proficiency in German 15-year olds: Evidence from PISA 2012. Zeitschrift für Erziehungswissenschaft, 20, 585–603. https://doi.org/10.1007/s11618-017-0758-y.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.
O’Byrne, W. I., & McVerry, J. G. (2009). Measuring the dispositions of online reading comprehension: A preliminary validation study. In J. Worthy, B. Maloch, J. V. Hoffman, D. L. Schallert, & C. M. Fairbanks (Eds.), 57th Yearbook of the National Reading Conference (pp. 362–375). Oak Creek: National Reading Conference, Inc.
Organization for Economic Cooperation and Development. (2011). PISA 2009 Results: Students on Line: Digital Technologies and Performance (Volume VI). https://doi.org/10.1787/9789264112995-en
Organization for Economic Cooperation and Development. (2015). Students, computers, and learning: Making the Connection. https://doi.org/10.1787/9789264239555-en
Paul, N., & Glassman, M. (2017). Relationship between internet self-efficacy and internet anxiety: A nuanced approach to understanding the connection. Australasian Journal of Educational Technology, 33, 147–165.
Paul, J., Macedo-Rouet, M., Rouet, J., & Stadtler, M. (2017). Why attend to source information when reading online? The perspective of ninth grade students from two different countries. Computers & Education, 113, 339–354.
Peña, E. D. (2007). Lost in translation: Methodological considerations in cross-cultural research. Child Development, 78, 1255–1264.
Pintrich, P. R. (2000). The role of goal orientation in self-regulated learning. In Handbook of self-regulation (pp. 451–502). New York: Academic Press.
Putman, S. M. (2014). Exploring dispositions towards online reading: Analyzing the Survey of Online Reading Attitudes and Behaviors. Reading Psychology, 35, 1-31.
Putman, S. M., Wang, C., & Ki, S. (2015). Assessing the validity of the cross-cultural Survey of Online Reading Attitudes and Behaviors with American and South Korean fifth- and sixth-grade students. Journal of Psychoeducational Assessment, 33, 403-418. https://doi.org/10.1177/0734282914564038
Salmerón, L., Garcia, A., & Vidal-Abarca, E. (2018). The development of adolescents’ comprehension-based Internet reading activities. Learning and Individual Differences, 61, 31–39. https://doi.org/10.1016/j.lindif.2017.11.006.
Senkbeil, M., & Ihme, J. M. (2017). Motivational factors predicting ICT literacy: First evidence on the structure of an ICT motivation inventory. Computers & Education, 108, 145–158.
Sonderen, E., Sanderman, R., & Coyne, J. C. (2013). Ineffectiveness of reverse wording of questionnaire items: Let’s learn from cows in the rain. PLoS ONE, 8(7). https://doi.org/10.1371/journal.pone.0068967
Tsai, C. C., & Lin, C. C. (2004). Taiwanese adolescents’ perceptions and attitudes regarding the Internet: Exploring gender differences. Adolescence, 39, 725–734.
Winne, P., & Hadwin, A. (2008). The weave of motivation and self-regulated learning. In D. Schunk & B. Zimmerman (Eds.), Motivation and self-regulated learning: Theory, research, and applications (pp. 297–314). Mahwah: Erlbaum.
Zylka, J., Christoph, G., Kroehne, U., Hartig, J., & Goldhammer, F. (2015). Moving beyond cognitive elements of ICT literacy: First evidence on the structure of ICT engagement. Computers in Human Behavior, 53, 149–160. https://doi.org/10.1016/j.chb.2015.07.008.
Appendix 1 Survey of Online Reading Attitudes and Beliefs [name redacted for blind review]
Efficacy for Online Reading
1. I feel confident that I can use a browser (like Safari, Explorer, or Firefox) to navigate the Internet.
2. I feel confident that I can open a web address directly by typing in the address.
3. I feel confident I can use the “back” and “forward” buttons to move between web pages.
4. I feel confident that I can use a search engine (like Google) to locate material during research.
5. I feel confident understanding terms/words related to the Internet.
6. I feel confident troubleshooting Internet problems.
Cognitive & Behavioral Engagement
1. I am confident that I can think of a question to ask about content before reading/searching on the Internet.
2. I am confident I can skim the results of an Internet search page to see what link might be best.
3. I am confident that I can read the search summaries of websites carefully to understand the meaning of information on the website.
4. I am confident that I can skim a website to decide whether or not the information is useful for my question.
5. I can stay focused on the information I need from a website rather than getting distracted by things I do not need.
6. I am confident that I can make a prediction about where a website link might lead if I click on it.
7. I am confident I can use knowledge of how a webpage is set up to help locate information on it.
8. I am confident I can use the search engine located within a website to find information on the site.
9. I am confident that I can combine information from more than one website in a way that makes sense to other people.
10. I am confident that I can determine if information on a website is reliable and trustworthy.
11. I am more careful in my research using the Internet when I know that I am going to be graded.
Value/Interest
1. I feel confident that I can find information on the Internet much faster than I can when I use a book to search.
2. When I search for information on the Internet, I remember it better.
3. I prefer to use the Internet for research because it helps my grades.
4. Once I start researching information on the Internet, I cannot stop because I want to find the answers.
5. I would rather complete research on the Internet than using a book or magazine.
6. I would rather read on the Internet than read a book during free time.
7. Reading a book or magazine is more relaxing than reading on the Internet.*
8. I think kids who do not use the Internet miss out on a lot of important information.
9. I think kids who are really good at using the Internet get better grades in school.
10. Everyone should know how to use the Internet.
11. Being able to use the Internet is important to me.
12. I believe using the Internet for research and reading has made learning more interesting.
13. Using the Internet for research is beneficial because it saves people time.
14. I believe the Internet makes it easier to get useful information.
Self-Regulation
1. When I have trouble understanding something on the Internet, I re-read the task.
2. When I have trouble understanding something on the Internet, I go ask a friend or classmate for help.
3. While I am conducting research on the Internet, I stop and think about how well I am doing and change strategies if necessary.
4. When I become confused about something I am reading on the Internet, I scroll back to previous screens.
5. Before I begin to research on the Internet, I look to see if I can break the task into smaller pieces to make it easier.
6. If I am researching something on the Internet, I can motivate myself even if the topic is boring.
7. When I have completed an Internet project, I think about how well it went and what I could change.
8. I always think about the information I am reading on the Internet to help me understand if it matches the required information I am looking for.
9. When I encounter difficulties on the Internet, I work through them by telling myself that I can complete the task.
10. Before I start a task on the Internet, I organize myself and think about how I will accomplish the task.
11. Before using information from a website to answer my question, I check to see if the author is reputable.
12. Before beginning an Internet search about a topic, I think about what I know about that topic.
13. When I navigate to a website on the Internet, I tend to read the whole page before clicking on any hypertext (links).
14. Before beginning an Internet search about a topic, I think about whether I know how to find information on it.
Anxiety
1. Researching information on the Internet intimidates me.
2. Researching information on the Internet makes me feel tense.
3. I feel helpless when asked to research information on the Internet.
4. I cannot relax when I am reading/researching on the Internet.
5. I believe it is easy to get lost when I am using the Internet for research.
6. Sometimes I worry that other kids do not think I can read on the Internet as well as they can.
7. I go out of my way to avoid using the Internet.
8. I feel anxious about using the Internet.
Additional Questions
1. How comfortable do you feel using the Internet? (Very comfortable, Comfortable, Uncomfortable, Very Uncomfortable)
2. How long have you been accessing the Internet? (Less than 6 months, 6 to 12 months, 1 to 2 years, 3 to 5 years, More than 5 years)
3. How frequently do you use the Internet at school? (Every day, 2–3 times per week, Once a week, Once a month, Less than once a month)
4. How frequently do you use the Internet outside of school? (Every day, 2–3 times per week, Once a week, Once a month, Less than once a month)
5. On average, how many hours a week do you spend on the Internet? (0–1 h, 2–4 h, 5–7 h, 8–10 h, More than 10 h per week)
Cite this article
Putman, S.M., Wang, C., Rickelman, B. et al. Comparing German and American students’ cognitive strategies and affective attributes toward online inquiry. Educ Inf Technol 25, 3357–3382 (2020). https://doi.org/10.1007/s10639-019-10066-6