1 Introduction

Although many assume that students have mastered literacy skills by the time they leave school, scholars have recently emphasized that a suite of skills, i.e., academic literacy skills, continues to develop through the university years. Academic literacy skills become particularly important to the success of students continuing into graduate programs, yet few studies attend specifically to the need to explore, teach, or nurture them. Most studies within the realm of academic literacy have limited their scope to writing (e.g., Lea & Street, 1998; Lillis & Scott, 2007). Reading, despite its importance, has received less attention (see Abbott, 2013; Afdal et al., 2022; Baker et al., 2019; Mann, 2000). Reading, which precedes writing, is crucial in graduate programs in that students need to deal with highly technical genres. Moreover, after finishing their programs, MA graduates of applied linguistics will be involved in designing, developing, or teaching reading courses and/or evaluating the outcomes of reading courses at university level. In fact, the ability to formulate a well-written abstract capable of delivering an accurate summary of an article with a high level of clarity and coherence is closely linked to one's reading ability, where analysis and synthesis, along with a correct understanding and well-chosen structure, lead to the best outcome (Delgadova, 2015).

In order to help students with academic reading, we need to develop a valid instrument that reflects the different components of academic reading and pinpoints the areas in which students need help (Green & Agosta, 2011). To the best of the researchers' knowledge, there is no instrument probing different aspects of academic reading at the graduate level of applied linguistics. Moreover, the investigation reported in this article responds to calls in the literature for research on the constituents of academic literacy (see Sengupta, 2005, in the special issue of the Journal of English for Academic Purposes). Accordingly, this study represents an initial attempt to explore the components of academic reading and then develop a validated instrument that can be used for studying and teaching academic reading at the graduate level.

1.1 Academic literacy and academic reading

Braine (2002) cogently argues that the academic literacy needed for success in MA programs is more than the ability to read and write. He argues that students need to work on building relationships with their teachers and peers, developing efficient writing skills and research strategies, and getting used to the cultural, linguistic, and social environments of their host institutions and departments. Following Braine's (2002) definition of academic literacy and extending it to academic reading, we maintain that academic reading is a multi-dimensional skill that cannot be reduced to proficiency, the ability to read different text types, or familiarity with the generic features of different texts. Academic reading includes personal, sociocultural, and political aspects and ranges from reading strategies to familiarity with academic language and genres, interaction with teachers and peers, and information literacy (Nejadghanbar et al., 2022). This study sheds light on these different components of academic reading.

Academic reading, as Shaw and Pecorari (2013) argue, is "an index of successful academic achievement for students." Ellison (2001) argues that articles published in journals nowadays are longer and more difficult than those published 30 to 40 years ago. MacMillan and Mackenzie (2012) note that new journals addressing large audiences with specialized interests and expertise are being published with a highly elevated discourse. They claim that students' unpreparedness to read scholarly articles and the increase in the specificity and difficulty of the articles pose a double problem. They go on to quote French (2005, pp. 18–19), who argues that "the content of these articles pose many problems … for those not already postdoctoral scholars in the field." In some contexts, e.g., China, lower language proficiency (mostly vocabulary and grammar) has been cited as the main reason for difficulty in academic reading (Wen, 2003). Nevertheless, Hyland (2003) argues that students' problems do not necessarily revolve around language proficiency, but rather around the literacy practices leading to academic texts. Similarly, Murray (2010) and Chen (2017) rightly posit that many students who study academic sources in English may suffer from low English language proficiency as well as low familiarity with the literacies required for success. Viewing academic reading as an issue deriving primarily from the limited English language proficiency of English as an additional language (EAL) students would overlook the challenge in contexts such as Australia, where low academic literacy skills "are a problem for a surprisingly large proportion of locally-born Australian students" (Botha, 2022, p. 1). The questionnaire developed in this study can be adapted and used with both international and local students to evaluate their academic reading literacy and determine how best to meet their needs.

In a scoping review of academic reading studies in higher education, Baker et al. (2019) identified four main conceptualizations of academic reading. First, the cognitive conception views academic reading as a decontextualized skill and emphasizes functional skills such as decoding symbols and activating syntactic and semantic knowledge to elicit meaning from the text. Second, the compliance conception explores how students deal with their assigned readings, in other words their textual practices, and how their assumptions compare with those of their teachers. Third, the contextualized conception identifies cognitive, affective, and psychological factors affecting reading. Finally, the sociopolitical perspective views academic reading practices as "fluid, highly complex, context-dependent, and based on the reader's engagement with background knowledge, schematic understandings, and ideological perspectives" (p. 150). Overall, focusing on two countries, South Africa and Australia, Baker et al. (2019) argued for more studies of academic reading from an academic literacies perspective.

The results of previous studies addressing academic literacy show that interaction between teachers and students (Bhatt & Samanhudi, 2022; Green & Agosta, 2011) and explicit teaching of academic genres (Hedgcock & Lee, 2017; Roald et al., 2021) enhance students' learning of the literacies required to succeed in academia. The extant literature also indicates that academic reading needs to be understood as a personal and socially situated skill (Atai et al., 2018; Kuzborska, 2015; Mann, 2000). Students' conceptions of academic reading change as they transition from high school to undergraduate and then to graduate programs (Atai et al., 2018; Ohata & Fukao, 2014). Unfamiliarity with genre, and with the distinct set of reading strategies each text type requires, can hinder students' understanding of different discursive texts (Dhieb-Henia, 2003; Negretti & Kuteeva, 2011). Previous studies have investigated students' perceptions of academic reading and their challenges (Mann, 2000; Ohata & Fukao, 2014), the academic reading levels of university students (Delgadova, 2015), and students' perspective taking while doing their academic readings (Kuzborska, 2015).

Traditional measures of college readiness use students' general English proficiency as a proxy for reading literacy (Talwar et al., 2023). Some existing tests of academic literacy are general in nature and resemble general English proficiency tests (e.g., Butler, 2010; Weideman & van Dyk, 2014). The existing (writing) scales and tests do not cover the specific literacies and skills, such as research article genre and structure, that advanced graduate students require (Rakedzon & Baram-Tsabari, 2017). Our review of the literature indicated that no previous study has attempted to collect data on the different aspects of academic reading both at the graduate level and in the field of applied linguistics.

Finding out what the different components of academic reading are can throw light on the reasons for success and failure and provide a platform for teaching the full range of academic reading skills. An instrument that can be used to investigate students’ academic reading and provide a picture of different aspects of academic reading can assist teachers and supervisors in identifying and addressing the problems most likely to impede student progress. Accordingly, this study addressed the following research question.

1) What are the components of academic reading skill for Iranian MA students of applied linguistics?

2 Method

In developing an instrument for academic reading skill, we followed these steps: reading the literature and interviewing university professors to extract the initial themes, developing the first draft of the instrument, piloting and revising the instrument, and finally validating it.

2.1 Context

This study was conducted in Iran among MA students of applied linguistics. In the two-year MA program in applied linguistics in Iran, students must complete 32 credits, all taught in English, covering courses on English language teaching and testing. Almost all of these students have learned English as a foreign language and have studied English-related majors such as English language teaching, English translation studies, English literature, or English linguistics in their BA programs. Their language proficiency is normally at an advanced level, and many of them work part-time as English language teachers. During their MA programs, students receive some writing support that focuses mainly on general essay writing skills; it covers neither the (discipline-specific) generic features of different text types nor academic literacy skills such as reading strategies, interaction with teachers and peers, information literacy, and critical reading. As in many other international programs (see Rhead & Little, 2020 for a detailed discussion), there is no reading course and no academic reading support; students are left to their own devices in doing their academic readings (see Nejadghanbar et al., 2022).

2.2 Participants

2.2.1 University professor participants

The first group of participants included 15 university professors of applied linguistics (10 males and 5 females) selected through purposive sampling. They taught at graduate and postgraduate levels and were involved in designing and implementing the applied linguistics curriculum. Moreover, they were engaged in research on academic reading and had each published at least one academic paper on academic reading. To ensure the inclusion of various viewpoints, we included university teachers with different academic ranks, i.e., assistant professor (n = 6), associate professor (n = 6), and full professor (n = 6), and with teaching experience ranging from one to over thirty years.

2.2.2 Participants at the piloting phase and the exploratory and confirmatory factor analysis phases

The piloting phase involved 52 MA students of applied linguistics, 17 (32.7%) males and 35 (67.3%) females, selected through availability sampling.

In the exploratory factor analysis (EFA) phase of the study, 345 MA students of applied linguistics, 142 (41.1%) males and 203 (58.9%) females, were selected and asked to complete the instrument.

For the confirmatory factor analysis (CFA) phase, the questionnaire was distributed among 207 MA students of applied linguistics, 83 (40.1%) males and 124 (59.9%) females. The participants for the EFA and CFA phases were also chosen through availability sampling.

2.3 Procedure

2.3.1 Extracting the themes from the literature and conducting interviews with 15 university professors

After reading the available literature on academic reading (e.g., Abbott, 2013; Baker et al., 2019; Braine, 2002; Kuzborska, 2015; Mann, 2000; McGrath et al., 2016; Ohata & Fukao, 2014), the following initial themes were extracted: (1) the reading strategies of skimming, scanning, summary, analysis, and synthesis; (2) language proficiency; (3) research literacy; (4) visual or graph literacy; (5) genre awareness; (6) information literacy; (7) reading as an interactive activity; (8) the ability to distinguish and evaluate; and (9) critical reading.

Having identified the initial themes, we conducted interviews with 15 university professors to seek their perspectives on academic reading at the graduate level of applied linguistics and to investigate which of the extracted initial themes they considered key to academic reading. The interviews lasted from 40 min to 1 h and 30 min.

The interviews were transcribed verbatim for content analysis. The first researcher read the transcripts carefully several times and noted as many themes as he could. Later readings helped merge some themes and form more overarching ones. For example, in response to the question "What do you think the different aspects of academic reading are?", a university professor said: "Let me start with the online searching. To me a very important aspect of academic reading is 'online searching ability'. Many of my students do not know how to search online for what they are looking for. For example, they enter a general term such as 'reading comprehension' into Google and face many different sources and they get lost. In addition to their unfamiliarity with the online searching techniques, an important reason for their failure is that they lack enough content schemata… they also need to be familiar with genre and how different genres are organized." The themes extracted from this excerpt were "online searching ability," "content schemata," and "genre awareness." In further analysis of the data, subcategories were developed for the main themes. For example, the theme "online searching ability," later renamed information literacy, was elaborated through subcategories such as the ability "to distinguish academic websites/databases from popular ones, to evaluate the credibility, accuracy and relevance of online sources, to know credible databases of the discipline, to read and look for basic characteristics of sources published in academia, to know the right techniques for finding relevant articles online." Each of these subcategories was later turned into an item supporting the component.

The emergent themes were very similar to those found in the literature review, and we decided to keep all the common themes. In a few cases, the interviews contained themes that were hardly highlighted in the literature as components of academic reading. These were added to the framework if a theme was repeated at least three times in the interviews. For example, "content knowledge" was repeated more than three times, so we added it to the framework as one component of academic reading. However, themes such as "behavioral literacy" and "ethical issues" were repeated fewer than three times in the interviews and were excluded. Table 1 shows the initial themes and the items capturing each theme of the framework.

Table 1 The initial themes and items of the questionnaire

Having developed the first draft of the questionnaire, the researchers sent it to 7 Iranian and 7 non-Iranian experts and asked them to read the items and comment on the organization, wording, and content of the questionnaire. After receiving their comments, some changes were made to the wording of the items. For instance, some technical terms were replaced with more general ones (e.g., the phrase "content schemata" was replaced with "background knowledge"). In addition, some double-barreled items were pointed out and revised accordingly. The experts also pointed out items that appeared to be repetitive in measuring the same construct. For example, the first draft of the questionnaire contained two items on scanning texts: "I conduct a goal-focused scan of different parts of the text" and "I do not always read the whole text. Oftentimes, I scan the text, choose the parts which are in line with my goals and skip the rest." These two items were reduced to one in the new draft: "I scan different parts of the text to find specific pieces of information." After applying all the comments shared by the experts, the questionnaire was reduced to 50 items.

To make sure that each item was understandable to students, we contacted 15 MA students of applied linguistics and asked them to read each item and tell us what they thought it meant (i.e., a think-aloud procedure). As a result, examples were added to some items to make sure students understood them. For example, the item "I read additional sources to expand and deepen my background knowledge" was changed to "I read additional sources (e.g., books and articles) to expand and deepen my background knowledge (i.e., the specialized concepts, theories, and disciplinary knowledge relevant to my major)."

2.3.2 Exploratory factor analysis and confirmatory factor analysis

The purpose of these phases was to ensure the construct validity of the newly developed questionnaire. In the EFA phase of the study, the semi-finalized 50-item questionnaire was completed by 345 students of applied linguistics, who were contacted either in person or electronically. The data from the 345 questionnaires were then entered into SPSS 22 (Statistical Package for the Social Sciences) for EFA.

After running the EFA, the questionnaire was distributed among 207 participants, and the data were subjected to CFA using Analysis of Moment Structures (AMOS). The final version of the questionnaire is provided in the Appendix.

3 Results

3.1 Reliability of the academic reading instrument

Before evaluating the reliability of the instrument and conducting factor analysis, items on the "never" to "always" scale were reverse coded. In addition, skewness and kurtosis were checked to ensure the normal distribution of the data.
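For readers who wish to replicate this screening step outside SPSS, the following minimal sketch shows one way to do it in Python; the file name, item labels, and the list of reverse-keyed items are illustrative assumptions, not the actual study materials.

```python
import pandas as pd
from scipy.stats import skew, kurtosis

# Hypothetical data file: one row per respondent, one 5-point Likert column per item
df = pd.read_csv("responses.csv")

# Reverse-code items keyed in the opposite direction (illustrative item names);
# on a 1-5 scale, 6 - x maps 1<->5 and 2<->4
reverse_keyed = ["item9", "item12"]
df[reverse_keyed] = 6 - df[reverse_keyed]

# Check skewness and kurtosis per item; values near 0 suggest approximate normality
for col in df.columns:
    print(f"{col}: skew={skew(df[col]):.2f}, kurtosis={kurtosis(df[col]):.2f}")
```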

To measure the internal consistency of the newly developed questionnaire, the Cronbach's alpha coefficient was computed (Dörnyei, 2003). The reliability index of the instrument was 0.792, which suggests that the scale is reliable (Cohen et al., 2007; Dörnyei, 2003). The item-total statistics were also inspected to see whether removing any items would noticeably enhance the reliability of the whole scale (Cohen et al., 2007). Deleting item 9 (i.e., "I highlight, underline, or number key ideas and sentences.") increased the reliability of the questionnaire to 0.807, a highly reliable index (Cohen et al., 2007). Thus, item 9 was deleted.
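The same reliability check can be sketched with the pingouin library in Python; this is an illustration under an assumed data file, not the SPSS procedure actually used in the study.

```python
import pandas as pd
import pingouin as pg

df = pd.read_csv("responses.csv")  # hypothetical item-response matrix

# Overall internal consistency of the scale
alpha, ci = pg.cronbach_alpha(data=df)
print(f"Cronbach's alpha = {alpha:.3f}")

# "Alpha if item deleted": recompute alpha leaving out each item in turn;
# an item whose removal raises alpha noticeably is a deletion candidate
for col in df.columns:
    a, _ = pg.cronbach_alpha(data=df.drop(columns=col))
    print(f"without {col}: alpha = {a:.3f}")
```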

3.2 Exploratory factor analysis

To check the construct validity of the newly developed instrument, i.e., its underlying components and the distribution of items under each component, EFA (principal axis factoring, PAF) with varimax rotation was run. First, the suitability of the data for factor analysis was assessed. The Kaiser–Meyer–Olkin (KMO) value (0.742) exceeded the recommended minimum of 0.6, and Bartlett's Test of Sphericity was significant (p < 0.001), confirming the factorability of the correlation matrix.

The results of the PAF revealed 14 factors with eigenvalues higher than 1, together explaining 63.123% of the total variance (see Table 2). However, relying on the "eigenvalue > 1" rule is insufficient, as it is one of the least precise criteria for determining the number of factors to retain; instead, several criteria should be used in combination (Henson & Roberts, 2006).

Table 2 Total variance explained
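The factorability checks and extraction just described can be sketched with the factor_analyzer package in Python; the file and item names are assumptions for illustration, and the study itself used SPSS.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

df = pd.read_csv("responses.csv")  # hypothetical 345 x 49 item matrix

# Factorability checks: Bartlett's test of sphericity and the KMO measure
chi2, p = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_total = calculate_kmo(df)
print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.4f}; KMO = {kmo_total:.3f}")

# Principal axis factoring with varimax rotation
fa = FactorAnalyzer(n_factors=14, rotation="varimax", method="principal")
fa.fit(df)

# Eigenvalues of the correlation matrix, for the "eigenvalue > 1" rule
eigenvalues, _ = fa.get_eigenvalues()
print("Factors with eigenvalue > 1:", int((eigenvalues > 1).sum()))
```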

Another procedure, argued to be the most precise criterion for deciding on the number of factors to retain, is parallel analysis (Henson & Roberts, 2006). Parallel analysis is "based on a comparison of eigenvalues obtained from sample data to eigenvalues one would expect to obtain from completely random data" (Fabrigar et al., 1999, p. 279). Accordingly, Monte Carlo PCA for Parallel Analysis (by Marley W. Watkins) was used. As Table 3 shows, the initial eigenvalues of the first 9 factors are larger than the corresponding random eigenvalues from the parallel analysis, whereas the tenth factor's eigenvalue falls below its random counterpart (1.403 < 1.512).

Table 3 Monte Carlo PCA for parallel analysis
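Parallel analysis is also simple to implement directly. The sketch below follows the usual recipe of comparing observed eigenvalues against the mean eigenvalues of random data of the same dimensions; X stands for the observed respondent-by-item matrix and is assumed already loaded.

```python
import numpy as np

def random_eigenvalues(n_rows, n_cols, n_iter=100, seed=0):
    """Mean eigenvalues of correlation matrices of random normal data
    with the same shape as the observed item matrix."""
    rng = np.random.default_rng(seed)
    eigs = np.empty((n_iter, n_cols))
    for i in range(n_iter):
        sim = rng.standard_normal((n_rows, n_cols))
        eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    return eigs.mean(axis=0)

# X: observed 345 x 49 matrix of item responses (assumed already loaded)
observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
benchmark = random_eigenvalues(*X.shape)

# Retain only factors whose observed eigenvalue exceeds its random counterpart
print("Factors to retain:", int((observed > benchmark).sum()))
```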

Finally, the rotated component matrix was consulted to see whether a 9-factor model could be supported. However, inspection of the rotated matrix led to retaining 8 factors. Factor nine loaded on items which were not theoretically connected, e.g., item 3 (i.e., "I skim different sections to decide whether a source is worth reading or not"), item 39 (i.e., "my knowledge and familiarity with the author of the text influences the way I read the text"), and item 45 (i.e., "I adopt a critical stance toward academic conventions") in the original questionnaire. Thus, 8 factors loading on 38 items were finally retained, together explaining 47.77% of the total variance.
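A loading-based pruning step like the one described above can be expressed compactly. Here, fa is the fitted FactorAnalyzer from the earlier sketch, refit with n_factors=8, and the 0.40 cut-off is an illustrative convention rather than the study's reported criterion.

```python
import pandas as pd

# Rotated loadings as an items x factors table (fa: fitted 8-factor model)
loadings = pd.DataFrame(fa.loadings_, index=df.columns)

# Items whose strongest absolute loading is weak are candidates for deletion
max_loading = loadings.abs().max(axis=1)
print("Low-loading items:", list(max_loading[max_loading < 0.40].index))

# Group the remaining items by the factor on which each loads most strongly
assignment = loadings.abs().idxmax(axis=1)
print(assignment.sort_values())
```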

Table 4 shows the grouping of the items under the eight factors. Items 1, 2, 3, 4, 5, 6, 7, 29, 31, 38, and 39 (see Table 1 for the items) were deleted from the instrument because they loaded on irrelevant factors and had low loadings. After deleting them, the reliability of the instrument was recalculated and turned out to be 0.787, suggesting that the scale is reliable (Cohen et al., 2007; Dörnyei, 2003).

Table 4 Rotated component matrix of the items in the academic reading instrument

The grouping of items in Table 4 was then carefully analyzed to see whether the items grouped together could form new factors and whether a suitable name could be chosen for each grouping. A few of these groupings differed from what was expected based on the preliminary framework. The items in factor one had to do with summary (items 1 and 2), analysis (items 3, 4, and 5), or synthesis (items 6, 7, 8, 9, and 10). What these items have in common is that they are all considered reading strategies in the available literature. Therefore, this factor was named "reading strategies."

In the original preliminary framework, five items (23 to 27) were formulated to capture students' assessment of their information literacy. All five items were grouped together as a result of the EFA, and the original name, "information literacy," was kept.

The preliminary framework included a factor called "research literacy," which contained elements of research (two items) and statistics (four items). However, the EFA grouped the items dealing with statistical literacy together and excluded the items dealing with research. Of the two items designed to capture research literacy, one loaded on an irrelevant factor and was deleted, and the other loaded on the content knowledge component and was kept and reassigned. Accordingly, the items in factor three (items 15, 16, 17, and 18) were renamed "statistical literacy."

Seven items were designed to capture students' critical reading. All of these items grouped together and emerged as the fourth factor in the new framework. The title "critical reading" was kept for this factor, which comprises items 32, 33, 34, 35, 36, 37, and 38 in the new framework.

Of the four items designed to capture students' genre familiarity, three grouped together, and the title "genre awareness" was kept for this factor. This fifth factor comprises items 18, 19, and 20.

Five items in the original preliminary framework were designed to capture students' evaluation of their interaction with teachers and peers. Four of these items grouped together; the fifth loaded on irrelevant factors and was deleted from the framework. The original name was therefore kept for this factor, which comprises items 28, 29, 30, and 31 and emerged as the sixth factor.

Three items in the original preliminary framework were designed to capture "content knowledge." One of these loaded on an irrelevant factor and was deleted from the framework. However, an item originally designed to capture "research literacy" grouped with the remaining two items; since these items could be considered theoretically related, it was kept here. Factor seven (items 13, 14, and 21) was therefore named "content knowledge."

The items loading on factor 8 (items 11 and 12) did not change from the preliminary theoretical framework. Accordingly, the name "English language proficiency" was kept for this factor. Table 5 depicts the final academic reading framework that emerged from the EFA.

Table 5 The final framework that emerged as the result of the EFA

3.3 Confirmatory factor analysis

In this phase of the study, the eight-factor structure that emerged from the EFA was used as the initial model for the CFA. Before conducting the CFA, data screening was performed, the normality of the data was investigated, and sampling adequacy was checked. Linear regression and variances were checked, and no outliers were detected.

Afterward, the factor structure obtained in the EFA was specified and run in AMOS. In addition to the chi-square, df, and p value, model fit was assessed using a combination of goodness-of-fit indices: RMSEA, CFI, and SRMR (Henning et al., 2018; Lei & Wu, 2007). Model fit was reviewed by evaluating the following indices against the best-fit criteria (Lei & Wu, 2007), with a worked sketch of this evaluation shown after the list:

1. SRMR ≤ 0.08
2. RMSEA ≤ 0.06
3. CFI ≥ 0.90 (or, more desirably, ≥ 0.95)
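For readers without AMOS, this CFA and its fit evaluation can be sketched in Python with the semopy package, which accepts lavaan-style model syntax. The two factors and item names shown are illustrative stand-ins for the full eight-factor, 38-item specification, and the exact set of fit statistics reported may differ from AMOS output.

```python
import pandas as pd
import semopy

df = pd.read_csv("responses.csv")  # hypothetical CFA sample (n = 207)

# Illustrative fragment of the measurement model: two of the eight factors;
# the remaining six follow the same "Factor =~ indicators" pattern
desc = """
ReadingStrategies =~ item1 + item2 + item3 + item4 + item5
InformationLiteracy =~ item23 + item24 + item25 + item26 + item27
"""

model = semopy.Model(desc)
model.fit(df)

# Goodness-of-fit statistics (chi-square, df, CFI, RMSEA, among others)
print(semopy.calc_stats(model).T)
```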

The scale of each latent factor was set by fixing the loading of its indicator with the largest estimate to unity. After checking these unit loadings, model fit was evaluated against the fit indices. Table 6 shows the initial fit indices of the eight-factor model that emerged from the EFA.

Table 6 Initial fit indices of the eight-factor model of academic reading in higher education

As Table 6 shows, the chi-square was significant (χ2 (637) = 1095.01, p < 0.05); however, the chi-square is extremely sensitive to sample size (Lei & Wu, 2007). Therefore, the value of χ2/df was calculated; at 1.71, it falls within the satisfactory range (χ2/df < 2; Loo & Loewen, 2004). Then, the emergent CFA model (see Fig. 1) was assessed against the other goodness-of-fit indices. The eight-factor, 38-item model produced a reasonable fit, with SRMR = 0.066 and RMSEA = 0.059; however, the CFI of 0.88 did not reach the cut-off criterion (CFI ≥ 0.90).

Fig. 1 Initial CFA results

To improve model fit, the modification indices were inspected to see whether fit could be enhanced by modifying the model (Lei & Wu, 2007). High covariances between error terms were pinpointed, and covariances were specified between them: the covariance between e9 and e10 was the highest (e9 ↔ e10 = 18.51), followed by that between e29 and e30 (e29 ↔ e30 = 13.69). In addition, item loadings were checked for low values; item one (i.e., "I take summary notes to keep track of the development of ideas in the texts.") was deleted because it manifested a low factor loading (0.37). The CFA was then run again, and this time the CFI reached 0.901, meeting the cut-off criterion (CFI ≥ 0.90). As Table 7 indicates, the other fit indices improved as well. Figure 2 shows the final factor structure after the CFA. This slightly modified structure thus confirmed the structure derived from the EFA.

Table 7 Fit indices after applying modification indices
Fig. 2 Final CFA results

4 Discussion and conclusion

Academic reading is considered a critical skill for success in higher education (Delgadova, 2015; McGrath et al., 2016; Pecorari et al., 2019). However, it has received less attention than writing, and fewer studies have addressed it (see Abbott, 2013; Baker et al., 2019; Mann, 2000; McGrath et al., 2016). To help students with these literacies, we need an effective instrument that can pinpoint the areas in which students need help (Green & Agosta, 2011). Accordingly, the goal of the present study was to develop and validate an academic reading instrument for the graduate level of applied linguistics.

To achieve this purpose, a preliminary twelve-component model was proposed on the basis of the literature review and the interviews with university professors: skimming and scanning, extensive reading, summary, analysis, synthesis, proficiency, content knowledge, research literacy, online searching literacy, genre awareness, interaction with teachers and peers, and critical reading. This preliminary framework was then tested on samples of 345 MA students for the EFA phase and 207 for the CFA phase. The EFA reduced the initial framework by four components, as a few of the emergent groupings differed from the preliminary framework. The summary, analysis, and synthesis components merged into a new component named reading strategies. Moreover, the "research literacy" component, which had contained elements of research (two items) and statistics (four items), lost its research items in the EFA (one was deleted for loading on an irrelevant factor; the other moved to the content knowledge component) and was renamed "statistical literacy." Hence, the components that emerged from the EFA were reading strategies, proficiency, content knowledge, statistical literacy, genre awareness, information literacy, interaction with teachers and peers, and critical reading. The CFA confirmed the model that emerged from the EFA.

Overall, the findings show that academic reading for graduate students in higher education is a multi-dimensional skill that cannot be defined solely in terms of grammar and vocabulary. Since it is not limited to textual features, teaching the generic features of different text types is insufficient preparation for graduate-level academic reading. Rather, reading should be seen as a multi-dimensional skill best defined and captured by acknowledging the full range of component skills needed: reading strategies, English language proficiency, content knowledge, genre awareness, statistical literacy, information literacy, interaction with peers and teachers, and critical reading. In other words, reading literacy at the MA level requires the activation of these eight interrelated components.

Both native and non-native students are novices when they first encounter academic disciplinary discourses (Thesen & van Pletzen, 2006). Similarly, in the Iranian context, by the time MA students of applied linguistics begin their studies, except for the small percentage who studied teaching English as a foreign language (TEFL) at the BA level, they can be considered novice academic readers and writers in the new discipline of applied linguistics/TEFL. By novice, we do not mean low English language proficiency; rather, we mean unfamiliarity or low familiarity with, and limited competence in, the eight components identified in this study. Expecting students to know these eight components beforehand, or to learn academic reading on their own, would lead to demotivation, disappointment, and frustration for many and might lead to unintentional plagiarism and publishing in predatory journals (see Babaii & Nejadghanbar, 2017; Nejadghanbar et al., 2023). Students' unfamiliarity with these literacies impairs their reading and thus their ability to participate in disciplinary practices. Students need to gain these literacies to read properly and become members of their discourse communities.

This can be true for international postgraduate students in other contexts as well, e.g., Australian university students, who are reported to have difficulty in academic reading, a difficulty influenced by their language proficiency among other factors (Phakiti & Li, 2011). Universities need to support students in doing their academic readings by considering their needs (Armstrong & Stahl, 2017) and identifying the determinants of successful academic reading (Perin, 2020; Porter, 2018). The results of our study provide a model and an instrument for studying students' academic reading at the graduate level. Indeed, in helping students acquire the academic literacies they need to succeed, the first challenge is developing a measure that can diagnose students' challenges and pinpoint their needs (Green & Agosta, 2011). The instrument developed in this study can be used in this regard. It should be noted that this instrument is intended for studying students' academic literacy with a focus on academic reading and should not be used to study academic literacy in its totality.

Our findings indicated that academic reading at the graduate level is a multi-dimensional skill. Despite calls in the literature for research exploring what constitutes (advanced) academic literacy (see Sengupta, 2005, in the special issue of the Journal of English for Academic Purposes), there seems to be no study investigating the concept of academic reading literacy at the MA level of applied linguistics. The eight-factor model provided in this study could start a dialog on what constitutes academic reading. The importance of understanding the nature of academic literacy is highlighted by Sengupta (2005, p. 288), who argues that "until such time that we understand the nature of AAL [advanced academic literacy], and we are able and willing to be informed by this understanding, we may not recognize the central issues in learning to become adequately literate in order to contribute to the world of knowledge."

The instrument developed in this study appears to be a valid and reliable tool depicting different aspects of reading at the graduate level of applied linguistics. University teachers can use it as a basis for helping students who are novices in academic reading: identifying the areas in which each student needs the most help and taking specific measures to help them gain the required literacies. This study can also contribute to practice by delineating the different aspects of academic reading, which could inform curriculum design and the development of instructional materials.

The present results should be considered preliminary, and further research is needed, since the study has limitations. The questionnaire can support further research exploring other aspects of academic literacy in higher education. Further studies could examine how and in what ways contextual and disciplinary factors would require modifications to the present instrument, which was validated in the Iranian context. Like EAL learners in other contexts such as Australia (see O'Neill et al., 2022), the greatest challenge for these students is the lack of academic literacy skills and their struggle to identify and obtain the academic literacies they need to succeed. Because literacies are discipline specific, we believe the instrument can be used with MA students of applied linguistics in other contexts; however, revalidating the questionnaire with participants from other contexts would provide empirical data on its applicability. Although the participants in this study were all EAL learners, we believe the instrument can also be used with native English speakers, because native and non-native English speakers alike are novices when they first deal with academic disciplinary practices (Thesen & van Pletzen, 2006); further studies are needed to test this. One limitation of the instrument is that it elicits students' self-assessment of their academic reading. Among the different aspects of academic literacy, we decided to focus on reading literacy, which allowed us to obtain an in-depth account of reading literacy in MA programs in applied linguistics. Further studies can explore other aspects of academic literacy and include scenarios and read-aloud protocols.

We should reiterate that this study did not attempt to present a standardized and/or unidimensional understanding of academic reading. Rather, its purpose was to draw attention to aspects of academic reading that go beyond language proficiency and the generic features of different text types. Accordingly, we believe that the instrument will be most helpful when teachers and researchers see academic reading as a situated skill that might have other aspects depending on their teaching contexts. Future studies could also investigate the relationship between questionnaire results and students' reading achievement.