As specialists in the field, the authors of this volume have naturally focused on the design and delivery of the assessment programmes in their respective institutions, with a concern for improving the quality of the measurement of academic language abilities and for reporting the results in a meaningful fashion to the various stakeholders. However, this obviously represents a narrow perspective. No matter how good an assessment may be, it will not achieve its desired objectives unless there is strong institutional support at the policy level, as well as adequate resourcing – not just for the assessment itself but for effective follow-up action through student advising and the provision of opportunities for academic language development.

1 Provision for Academic Language Development

In societies like Hong Kong, Oman and South Africa, where a high proportion, if not all, of the students entering English-medium universities come from non-English-using backgrounds, the need to further enhance their English language skills is obvious – even if they have had some form of English-medium schooling previously. The language enhancement may take the form of a foundation programme, compulsory English language courses in the first year of study and beyond, a learning and study skills centre, or (as in the case of Hong Kong) a fourth year added to what has traditionally been a 3-year undergraduate degree.

On the other hand, universities in the major English-speaking countries vary widely in the extent to which they have made provision for the language and learning needs of incoming students, as noted briefly in the Introduction. Universities in the US have a long tradition, going back at least to the 1950s, of freshman composition programmes to develop the academic writing skills of first-year domestic students, and the growth in foreign student numbers from the 1960s led to the parallel development of ESL courses, in the form of both intensive pre-admission programmes and credit courses for degree students. In the UK, the impetus for addressing these issues came initially from the need to ensure that students from Commonwealth countries with English as a second language, who were recipients of scholarships and study awards, had adequate proficiency in academic English to benefit from their studies in Britain; summer pre-sessional courses have since become an institution in British universities, serving the much broader range of international students who are now admitted. In other English-speaking countries, it has been the liberalising of immigration regulations to allow the recruitment of fee-paying international students which has led to a variety of pre- and post-admission programmes to enhance their academic English skills. The same liberalisation has seen an influx of immigrant families with children who work their way as “English language learners” through the school system to higher education without necessarily acquiring full proficiency in academic English. For such students, and for many other domestic students who are challenged by the demands of academic literacy at the tertiary level, there are learning centres offering short courses, workshops, peer tutoring, individual consultations, online resources and so on.

Thus, in a variety of ways universities in the English-speaking countries already offer study support and opportunities for academic language enrichment to their students, at least on a voluntary basis. A proposal to introduce a post-admission language assessment represents a significant further step by seeking to identify students who would benefit from – or perhaps have an obvious need to access – such services in meeting the language demands of their studies. This then leads to the question of whether the assessment and any follow-up action on the student’s part should be voluntary or mandatory. It also raises the issue of whether the language and literacy needs revealed by the assessment results may be greater than can be accommodated within existing provisions, meaning that substantial additional funding may be required.

1.1 External and Internal Pressures

In the cases we have seen in this book, some universities are subject to external pressures to address these matters. The controversy over English language standards in Australian universities has already been discussed in the Introduction. In 2012, the Tertiary Education Quality and Standards Agency (TEQSA) announced that its audits of universities in Australia would include comprehensive quality assessments of English language proficiency provisions (Lane 2012). However, a change of government and vigorous lobbying by tertiary institutions asserting that such assessments imposed onerous demands on them led to a ministerial decision that TEQSA would abandon this approach in favour of simply ensuring that minimum standards were being met (Lane 2014a). In the most recent version of the Higher Education Standards Framework, the statutory basis for TEQSA audits, there is just a single explicit reference to English language standards, right at the beginning of the document:

1 Student Participation and Attainment

1.1 Admission

1. Admissions policies, requirements and procedures are documented, are applied fairly and consistently, and are designed to ensure that admitted students have the academic preparation and proficiency in English needed to participate in their intended study, and no known limitations that would be expected to impede their progression and completion. (Australian Government 2015)

The change in TEQSA’s role was seen as reducing the pressure on tertiary institutions to take specific initiatives such as implementing a post-entry language assessment (PELA), and some such moves at particular universities stalled as a result. Although it is generally recognised that the English language needs of students should be addressed, there is ongoing debate about the most suitable strategy for ensuring that universities take this responsibility seriously (Lane 2014b).

Another kind of external pressure featured in Chap. 6 (this volume). The Oral English Proficiency Test (OEPT) at Purdue University is one example of an assessment mandated by legislation in US states to ensure that prospective International Teaching Assistants (ITAs) have sufficient oral proficiency in English to be able to perform their role as instructors in undergraduate courses. This of course is a somewhat different concern from that of most other post-admission assessments, where the issue is whether the test-takers can cope with the language and literacy demands of their own studies.

In contrast to these cases of external motivation, other post-admission assessments have resulted from internal pressure, in the form of a growing recognition among senior management and academic staff that there were unmet language needs in their linguistically diverse student bodies which could no longer be ignored, particularly in the face of evidence of students dropping out of their first year of study as a result of language-related difficulties. This applies to the original moves towards a PELA at the University of Melbourne (Chap. 2, this volume; see also Elder and Read 2015) in the 1990s, as well as the introduction of the Diagnostic English Language Needs Assessment (DELNA) at the University of Auckland (Chap. 6, this volume; see also Read 2015b) and what has evolved as the diagnostic assessment procedure for engineering students at Carleton University (Chap. 3, this volume).

2 The Decision to Introduce a Post-Admission Assessment

For universities which are considering the introduction of a post-admission assessment, there are numerous issues to work through. Several useful sources are available to guide institutions in making decisions about whether to introduce a post-admission assessment – preferably in conjunction with a broader strategy to address language and literacy issues among their students – and, if so, how to implement the programme successfully. These sources draw particularly on the experiences of Australian universities with what they call post-entry (or sometimes post-enrolment) language assessments (PELAs), which have grown out of a specific social, educational and political environment over the last 10 years, as explained in the Introduction. However, much of the Australian experience can be applied more widely, in English-speaking countries if not in EMI universities elsewhere.

  • The Degrees of Proficiency website (www.degreesofproficiency.aall.org.au) developed from a federally funded project conducted by Katie Dunworth and her colleagues (2013) to survey PELA initiatives in Australian universities and identify the issues faced by the institutions in maintaining English language standards. The website includes a database of existing PELAs and university language policies, links to a range of source materials and other sites, some case studies of programmes at specific universities, and advice on how to implement post-entry assessments as part of a broader strategy for English language development.

  • In his book Standards of English in Higher Education, Murray (2016) devotes a chapter to a discussion of the challenges and risks for a university in introducing a PELA. The book builds on Murray’s experiences at an Australian university, which provides a case study for a later chapter in the book, but it is also informed by his knowledge of the situation of universities in the UK.

  • In a similar vein, Read (2015a) has a chapter outlining “The case for introducing a post-entry assessment”, which also considers the pros and cons of such a decision, as well as alternative ways for a university to address students’ language and literacy needs.

2.1 Positive and Negative Messages

In his analysis of the advantages and disadvantages of a PELA, Murray (2016, pp. 122–128) gives some emphasis to the kind of messages which are conveyed by using this type of assessment. On the positive side, a PELA can signal to various stakeholders a commitment on the part of the university to be responsive to the English language needs of incoming students by identifying those at risk of poor academic performance at an early stage. Potentially, it enhances the reputation of the institution if it is seen to be fulfilling its duty of care to the students. Assuming that students being admitted to the university through various pathways all take the same assessment, the PELA also provides an equitable basis for allocating English language tutoring and other specialist resources to the students who are most at risk. Thus, if the commitment is genuinely made, it reflects well on the institution in meeting its ethical responsibilities to a linguistically diverse student body.

On the other hand, Murray points out that the messages may be negative. He reports from his observations that university senior management are very cautious about any form of PELA because, first, it may indicate that the university has lowered its standards by accepting students who are linguistically weak and, second, it may put off potential students when they learn that they face an additional hurdle after meeting the normal admission requirements, and in particular after “passing” IELTS or TOEFL. Murray suggests how a university can be proactive in countering such concerns through the way that it presents the rationale for the PELA to external stakeholders. In addition, he recommends that the assessment should be conducted in a low-key fashion through faculties and departments, rather than as a high-profile, mandatory and centrally administered programme, which is more likely to attract criticism and complaint from students.

This last point is taken up by Read (2008), in his discussion of how the Diagnostic English Language Needs Assessment (DELNA) has been promoted internally at the University of Auckland. Read draws on Read and Chapelle’s (2001) concept of test presentation, defined as “a series of steps, taken as part of the process of developing and implementing the test, to influence its impact in a positive direction” (p. 185). In the early years of administering DELNA, before it became mandatory, mature students and others with no recent history of study in New Zealand would receive a letter from the Admissions Office inviting them to take the assessment and emphasising its potential value as a diagnosis of their academic language ability. To reach a broader range of students, DELNA staff speak to students at Orientation and other events about the benefits of the assessment; there are posters, bookmarks and webpages which offer a “free health check of your academic English language skills” and feature slogans such as “Increase your chance of success” and “Students say DELNA is time well spent”. Every effort has been made to embed the assessment as just one more task that first-year students need to complete in order to enter the university.

Similarly, there are ongoing efforts to inform academic and professional staff at Auckland about the programme. The main vehicle is the DELNA Reference Group, composed of representatives from all the faculties and relevant service units around the university, which meets twice a year to discuss policy issues, monitor student compliance with the DELNA requirements, and provide a channel of communication to staff. In addition, the DELNA Manager is active in briefing and liaising with key staff members on an individual basis, and there is an FAQ document which addresses common questions and concerns. Through all these means, the university seeks to ensure that the purpose of the assessment is understood, and that students take advantage of the opportunities it offers.

2.2 Costs and Benefits

The costs of introducing a post-admission assessment often weigh heavily on those charged with making the decision. The direct expenses of developing the instruments and administering them are the most obvious ones, but then there are also the associated costs of enhanced provision for English language development to cater for the needs of the students who perform poorly on the assessment. As Murray puts it, “to deprive these students of such opportunities [for development] would undermine the credibility of the institution and its English language initiative, and call into question its clarity of thinking and the commitment it has to those students and to the English agenda more generally” (2016, p. 127).

Based on her survey of Australian universities, Dunworth (2009) found a number of concerns about the resources associated with a PELA. Many of her respondents were worried that there would not be adequate funding to meet the needs revealed by the assessment, especially if the PELA itself consumed a disproportionate amount of the budget for student services. This was more of an issue when the assessment was designed for a particular School or Faculty, which would obviously have a more limited funding base than the central university budget. There was a tendency for university managers to underestimate the resources required to implement a good-quality assessment programme as well as the need to plan ahead for appropriate follow-up strategies.

An interesting perspective on the relative costs and benefits of a post-admission assessment is found in Chap. 3, where the Dean of Engineering and Design at Carleton University in Canada is quoted as saying, with reference to three students who remained in the undergraduate programme rather than dropping out, “Retaining even two students pays for the expense of the entire academic assessment procedure” (this volume, p. xx).

Along the same lines, the Deputy Vice-Chancellor (Academic) at the University of Auckland, who has management responsibility for the University’s Diagnostic English Language Needs Assessment (DELNA), reasons this way:

If one looks at the high level figures, it is easy to see the picture. You can divide the DELNA budget by the funding which the university receives for each fulltime student to get an idea of how many students we need to retain as a result of DELNA impact to protect our revenue. This deals with future revenue lost by the university, and it amounts to around 20 students. There is also the matter of the past wasted financial costs to students and the government (50/50 to each party) when students withdraw or are excluded for reasons that can be traced to their inadequate academic English; to these costs can be added the income foregone by students when they have been attending university to little purpose rather than working—probably $20,000 per student. Then there are all the non-financial costs—angst, frustrated expectations and so on. (John Morrow , personal communication, 8 March 2016)

This quote refers specifically to the costs of the assessment, but the same line of argument can be extended to the funding needed for a programme of academic language development, much of which was already in place at the time that DELNA was introduced.
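Stripped to its essentials, the Deputy Vice-Chancellor’s reasoning is a break-even calculation. As a minimal sketch, with notation of our own and purely illustrative figures (the actual budget and funding rates are not reported here):

\[
n_{\text{break-even}} = \frac{B}{f}
\]

where \(B\) is the annual budget of the assessment programme and \(f\) is the funding the university receives for each full-time student per year. If, hypothetically, \(f = \$20{,}000\) and \(B = \$400{,}000\), then the programme pays for itself by retaining \(n = 20\) students who would otherwise have withdrawn – the order of magnitude cited in the quote above.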

3 Extending the Scope of Academic Language Development

One criticism of post-admission assessments is that, by definition, they are administered when students first arrive on campus and, as we have seen in the chapters of this volume, follow-up language development programmes are concentrated in the first year of study. The implicit assumption is that early intervention is the best strategy (and perhaps all that is needed) for addressing the students’ needs. However, it is worth recalling from the Introduction that Birrell’s (2006) paper, which prompted public debate in Australia about the English proficiency of international students, was concerned with evidence that they were graduating with an inadequate command of the language to be employable in that country, rather than with whether they could cope with the language demands of their academic studies.

3.1 Professional Communication Skills

A logical response to Birrell’s work, then, would be to determine whether students have the language skills they need for future employment at the time they complete their undergraduate degree. This is consistent with the current practice in Australian, New Zealand and British universities of specifying generic graduate attributes, which are defined in this widely quoted statement as:

the qualities, skills and understandings a university community agrees its students should develop during their time with the institution. These attributes include but go beyond the disciplinary expertise or technical knowledge that has traditionally formed the core of most university courses (Bowden et al. 2000, cited in University of Edinburgh 2011).

In the policy documents of particular universities in English-speaking countries, language tends to figure under the guise of “effective communication”, as in these examples:

  • University of Melbourne:

    Melbourne graduates … can apply knowledge, information and research skills to complex problems in a range of contexts and are effective oral and written communicators. (http://msl.unimelb.edu.au/teaching-learning)

  • University of Sydney:

    5. Communication

      Graduates of the University will recognise and value communication as a tool for negotiating and creating new understanding, interacting with others, and furthering their own learning.

However, as with other graduate attributes, there is a lack of university-wide strategies to determine whether graduating students have acquired such communication skills, except through the assessment of the courses they have taken for their degree. As Arkoudis and Kelly put it,

institutional graduate attribute statements that refer to the communication skills of graduates are merely claims until evidenced. Institutional leaders need to be able to point to evidence demonstrating that the oral and written communication skills of their students are developed, assessed, monitored and measured through the duration of a qualification. (2016, p. 6)

They go on to note the need for research to articulate exit standards and to produce an explicit framework which could guide academic staff to develop the relevant skills through the teaching of their courses.

As a step in this direction, Murray (2010, 2016) proposes that the construct of English language proficiency for university study should be expanded to include professional communication skills, of the kind that students will require both for work placements and practicums during their studies and in order to satisfy the expectations of future employers and professional registration bodies once they graduate. Murray identifies these skills as follows:

  • Intercultural competence

  • A cultural relativistic orientation

  • Interpersonal skills

  • Conversancy in the discourses and behaviours associated with particular domains

  • Non-verbal communication skills

  • Group and leadership skills

The one language testing project which has sought to produce a measure of at least some of these skills is the Graduating Students’ Language Proficiency Assessment (GSLPA), developed in the 1990s at Hong Kong Polytechnic University (PolyU), with funding from the University Grants Committee (UGC) in Hong Kong (Qian 2007). It is a task-based test of professional writing and speaking skills designed in consultation with business leaders in Hong Kong. Although the test has been administered to PolyU students since 1999 (see http://gslpa.polyu.edu.hk/eng/web/), it was not accepted by the other Hong Kong universities and, as an alternative, the UGC ran a scheme from 2002 to 2013 to pay the fee for students to take the Academic Module of IELTS on a voluntary basis when they were completing their degree. Two Australian universities (the University of Queensland and Griffith University) have adopted a similar policy of subsidising the IELTS test fee as a service to their graduating international students (Humphreys and Mousavi 2010). While this strategy provides the students with a broad, internationally recognised assessment of their academic language proficiency at the time of graduation, it can scarcely be regarded as a valid measure of their professional communication skills. Indeed, O’Loughlin (2008) has questioned the ethics of using IELTS for such a purpose without proper validation.

3.2 Embedded Language Development

A quite different approach involves embedding these skills, along with other aspects of English language development, into the students’ degree programmes. This already happens to varying degrees in professional faculties, like Engineering, Business, Medical Sciences and Education, where students need to demonstrate the application of relevant communication skills in order to be registered to practise their chosen profession. The same strategy can in principle be applied to degree programmes across the university. Numerous English language specialists in higher education – notably Arkoudis et al. (2012) in Australia and Wingate (2015) in the United Kingdom – strongly advocate the embedded delivery of academic language development to all students as current best practice. In support of this position, Arkoudis and Kelly cite studies which document “the limitations of communication skills programs which sit outside the disciplinary curricula and are supported by staff who are not recognised by students as disciplinary academics” (2016, p. 4).

This quote highlights the point that academic English programmes are typically delivered as adjuncts to degree courses by tutors with low (and maybe insecure) status within the institution who may not have the relevant knowledge of discourse norms to address issues of academic literacy or professional communication skills within the disciplines. On the other hand, subject lecturers and tutors tend to shy away from dealing with problems with language and genre in their students’ writing, claiming a lack of expertise. In their influential study of academic literacies in undergraduate courses in the UK, Lea and Street (1998) reported that tutors could not adequately articulate their understanding of concepts like “critical analysis”, “argument” or “clarity”. As Murray (2016) puts it, although academic teaching staff have procedural knowledge of academic discourse norms in their discipline, they lack the declarative (or metalinguistic) knowledge needed to give the kind of feedback on student writing that would allow the students to understand how they can better meet the appropriate disciplinary norms.

This suggests that the way forward is to foster more collaboration between learning advisors and English language tutors on the one hand and academic teaching staff on the other. Murray (2016) proposes as a starting point that the practice in some universities of locating language tutors within particular faculties should be more widely adopted, to give more opportunities for interaction between the two sides. Drawing on their extensive experience as learning advisors at the University of Sydney, Jones et al. (2001) outline four models of collaboration in the development of academic writing skills. At the most basic level, there is a “weak adjunct” model which provides generic tutorials on academic writing outside of class hours. A “strong adjunct” model is delivered in a similar fashion but with a focus on writing genres that are relevant to the students’ discipline, such as lab reports or research proposals. Then comes the “integrated model”, in which learning advisors give presentations or workshops on discipline-specific aspects of academic literacy during class hours. At the top level, a fully “embedded” model involves a course curriculum with a primary focus on literacy in the discipline, designed collaboratively by learning advisors and the subject lecturers who will actually teach the course.

The integrated and embedded models clearly require a significant ongoing commitment of time and resources by both parties, which is difficult to initiate and even more challenging to sustain. Arkoudis et al. (2012) describe a version of the integrated model which was conducted for one semester in an Architecture course at the University of Melbourne, with promising results, but they acknowledge that the model could not be widely implemented on a regular basis. As alternatives, they discuss ways in which course coordinators can incorporate academic literacy goals into the grading of course assignments and can foster productive interactions among their students through the careful design of group discussions and projects, with the active involvement of English language specialists.

Wingate (2015) makes a strong case for what she calls “inclusive practice” to overcome the limitations of current approaches to academic literacy development. This means applying four principles, which can be summarised as follows:

  1. Academic literacy instruction should focus on an understanding of the genres associated with the students’ academic subjects, rather than taking the generic approach found in the typical EAP programme.

  2. All students should have access to this instruction, regardless of their language background. Any language support for non-native speakers should be provided in addition to the academic literacy instruction.

  3. The instruction needs to be integrated with the teaching of content subjects so that ideally academic literacy is assessed as part of the subject curriculum.

  4. Academic literacy instruction requires collaboration between writing experts and subject experts to develop the curriculum jointly (2015, pp. 128–130).

As a first step, Wingate describes how she and her colleagues at King’s College London have designed and delivered academic literacy workshops for students in four disciplines, but she recognises that substantial cultural and structural changes would be necessary to implement the four principles throughout a whole university. Nevertheless, she argues that longer-term trends will force institutions to move in this direction: “market forces such as growing competition for students and expectations by high-fee paying students will increase the need for universities to provide effective support for students … from diverse backgrounds” (2015, p. 162).

Full implementation of Wingate’s principles would reduce, if not eliminate, the need for post-admission language assessment – but that prospect seems rather distant at this point.

4 The ELF Perspective

One further perspective to be considered is represented by the term English as a Lingua Franca (ELF). In Chap. 8, Roche et al. have adopted the term to refer to the status of English in the Omani universities in which they conducted their research. At one level, it can be seen as a synonym for English as an International Language (EIL), a relatively neutral description of the current dominance of the language as a means of communication across national and linguistic boundaries, as well as the prime vehicle for globalisation in social, economic, scientific, educational and cultural terms. However, during the last 15 years, ELF has come to represent, within applied linguistics, a more critical perspective on the role of English internationally and, more particularly, on the status of native speakers and their brand of English. Non-native users of the language greatly outnumber native speakers worldwide, and a large proportion of daily interactions in the language do not involve native speakers at all. This calls into question the “ownership” of English (Widdowson 1994) and the assumed authority of native speakers as models or arbiters of accuracy and appropriateness in the use of the language.

To substantiate this argument, a large proportion of the ELF research has drawn on spoken language corpora – the Vienna-Oxford International Corpus of English (VOICE) (Seidlhofer 2011), English as a Lingua Franca in Academic Settings (ELFA) (Mauranen 2012) and the Asian Corpus of English (ACE) (Kirkpatrick 2010) – featuring mostly well-educated non-native speakers of English from different countries communicating with each other. Apart from providing descriptions of recurring grammatical and lexical features in these oral interactions, researchers have highlighted communicative strategies that anticipate or repair potential breakdowns in mutual comprehension, putting forth the argument that non-native users of English are more adept at dealing with such situations than native speakers are.

One of the most prominent ELF advocates, Jennifer Jenkins (2013), has turned her attention in a recent book to English-medium instruction (EMI) in universities, both those in the traditionally English-speaking countries and the increasing number of institutions, particularly in Europe, the Middle East, and East and Southeast Asia, which offer degree programmes in English as well as their national language. From an analysis of university websites and a questionnaire survey of 166 academics, Jenkins concluded that institutional claims to the status of an “international university” for the most part did not extend to any recognition of the role of English as a lingua franca, or any corresponding challenge to the dominance of native speaker norms. Most of the questionnaire respondents apparently took it for granted that the best guarantee of maintaining high academic standards was to expect second language users to adhere (or at least aspire) to native speaker English. However, they also acknowledged that the level of support offered by their university to non-native English speakers was inadequate, with consequent negative effects on students’ confidence in their ability to meet the standards.

The latter view received support in a series of “conversations” Jenkins (2013) conducted at a UK university with international postgraduate students, who expressed frustration at the lack of understanding among their supervisors, lecturers and native-speaking peers concerning the linguistic challenges they faced in undertaking their studies. This included an excessive concern among supervisors with spelling, grammar and other surface features as the basis for judging the quality of the students’ work – often with the rationale that a high level of linguistic accuracy was required for publication in an academic journal.

4.1 ELF and International Proficiency Tests

Jenkins (2013; see also Jenkins 2006a; Jenkins and Leung 2014) is particularly critical of the gatekeeping role played by the international English proficiency tests (IELTS, TOEFL, the Pearson Test of English (PTE)) in controlling entry to EMI degree programmes. She and others (e.g., Canagarajah 2006; Clyne and Sharifian 2008; Lowenberg 2002) argue that these and other tests of English for academic purposes serve to perpetuate the dominance of standard native-speaker English, to the detriment of ELF users, by requiring a high degree of linguistic accuracy, by associating an advanced level of proficiency with facility in idiomatic expression, and by not assessing the intercultural negotiating skills which, according to the ELF research, are a key component of communication in English across linguistic boundaries. These criticisms have been largely articulated by scholars with no background in language assessment, although Shohamy (2006) and McNamara (2011) have also lent some support to the cause.

Several language testers (Elder and Davies 2006; Elder and Harding 2008; Taylor 2006) have sought to respond to the criticisms from a position of openness to the ideas behind ELF. Their responses have been along two lines. On the one hand, they have discussed the constraints on the design and development of innovative tests which might more adequately represent the use of English as a lingua franca, if the tests were to be used to make high-stakes decisions about students. On the other hand, these authors have argued that the critics have not recognised ways in which, under the influence of the communicative approach to language assessment, contemporary English proficiency tests have moved away from a focus on native-speaker grammatical and lexical norms towards assessing a broader range of communicative abilities, including those documented in ELF research. The replies from the ELF critics to these statements (Jenkins 2006b; Jenkins and Leung 2014) have been disappointingly dismissive, reflecting an apparent disinclination to engage in constructive debate about the issues.

This is not to say that the international proficiency tests are above criticism. Language testers can certainly point to ways in which these testing programmes under-represent the construct of academic language proficiency and narrow the horizons of students who undertake intensive test preparation at the expense of a broader development of their academic language and literacy skills. IELTS and TOEFL are prime exemplars of what Spolsky (1995, 2008) has labelled “industrial language testing”, each being administered to around two million candidates at thousands of test centres around the world. This means that huge resources are invested, not just in the tests themselves but in the associated test preparation industry, and as a consequence it is a major undertaking to make any substantive changes to the tests of the kind that ELF advocates would like to see.

4.2 ELF and Post-Admission Assessments

This brings us back to the role of post-admission assessments. As things stand at present, and for the foreseeable future, such assessments cannot realistically replace tests like IELTS, TOEFL or PTE for pre-admission screening of international students, because most universities take it for granted that a secure, reliable test of this kind is an essential tool in the admissions process and, in the cases of Australia and the United Kingdom, the immigration authorities specify a minimum score on a recognised English test as a prerequisite for the issuing of a student visa. However, post-admission assessments developed for particular universities can complement the major tests by representing flexible responses to local circumstances and to changing ideas about appropriate forms of assessment, such as those associated with ELF.

Perhaps the most revealing finding from Jenkins’ (2013) surveys was the extent to which academics in the UK and in EMI institutions elsewhere defined academic standards in traditional terms which favoured native-speaking students, and many appeared insensitive to ways in which they could modify their teaching and supervisory practices to accommodate international students, without “dumbing down” the curriculum. The introduction of a post-admission assessment will do nothing in itself to shift such attitudes. If an assessment is implemented in such an environment, it may basically perpetuate a deficit model of students’ language needs, which places the onus squarely on them (with whatever language support is available to them) to “improve their English”, rather than being part of a broader commitment to the promotion of high standards of academic literacy for all students, regardless of their language background.

One issue here is whether incoming students for whom English is an additional language should be considered to have the status of “learners” of English, rather than non-native “users” of the language who need to enhance their academic literacy skills in the same way that native-speaking students do. Most of the ELF literature focuses on non-native users who are already highly proficient in the language, so that the distinctive linguistic features in their speech represent relatively superficial aspects of what is actually a high level of competence in a standard variety of English. A good proportion of international doctoral students potentially fall into this category, particularly if they have already had the experience of using English for purposes like presenting their work at conferences or writing for publication in English. On the other hand, a diagnostic assessment may reveal that such students read very slowly, lack knowledge of non-technical vocabulary, have difficulty in composing cohesive and intelligible paragraphs, and are hampered in other ways by limited linguistic competence. This makes it debatable whether such students can really be considered proficient users of the language.

A similar kind of issue arises with first-year undergraduates in English-speaking countries matriculating from the secondary school system there. Apart from international students who complete 2 or 3 years of secondary education to prepare for university admission, domestic students cover a wide spectrum of language backgrounds, making it increasingly problematic to distinguish non-native users from native speakers in terms of the language and literacy skills required for academic study. In the United States, English language learners from migrant families have been labelled Generation 1.5 (Harklau et al. 1999; Roberge et al. 2009) and are recognised as often being in an uncomfortable in-between space where they have not integrated adequately into the host society, culture and education system. Linguistically, they may have acquired native-like oral communication skills, but they lack the prerequisite knowledge of the language system on which to develop good academic reading and writing skills. Such considerations strengthen the case for administering a post-admission assessment to all incoming students, whatever their language background; this is the position of the University of Auckland with DELNA, but not many universities have been able to adopt a comprehensive policy of this kind.

At the same time, there are challenging questions about how to design a post-admission assessment to cater for the diverse backgrounds of students across the native–non-native spectrum. It seems that the ELF literature has little to offer at this point towards the definition of an alternative construct of academic language ability which avoids reference to standard native-speaker norms and provides the basis for a practicable assessment design. The work of Weideman and his colleagues in South Africa, on defining and assessing the construct of academic literacy, as reported in Chaps. 9 and 10, represents one stimulating model of test design, but others are needed, especially if post-admission assessments are to operationalise an academic literacies construct which takes account of the discourse norms in particular academic disciplines, as analysed by scholars such as Swales (1990), Hyland (2000, 2008), and Nesi and Gardner (2012). At the moment the closest we have to a well-documented assessment procedure of this type is the University of Sydney’s Measuring the Academic Skills of University Students (MASUS) (Bonanno and Jones 2007), as noted in the Introduction.

Nevertheless, the chapters of this volume show what can be achieved in a variety of English-medium universities to assess the academic language ability of incoming students at the time of admission, as a prelude to the delivery of effective programmes for language and literacy development. It is important to acknowledge that all of the institutions represented here have been able to draw on their own applied linguists and language testers in designing their assessments. As Murray noted in identifying universities “at the vanguard” of PELA provision in Australia and New Zealand, “It is certainly not coincidental that a number of these boast resident expertise in testing” (2016, p. 121). The converse is that institutions lacking such capability may implement assessments which do not meet professional standards. However, by means of publications and conference presentations, as well as consultancies and licensing arrangements, the expertise is being more widely shared, and we hope that this book will contribute significantly to that process of dissemination.