1 Introduction

Recent decades have seen a rapid increase in the use of English Medium Instruction (EMI) for academic subjects at the tertiary level in countries where English is not the first language. EMI programs are expected to enable students to develop both the disciplinary knowledge and the English competence needed to succeed in the knowledge economy and the globalised labour market. Consequently, reliable measurement of the extent to which these expectations are met is vital to inform the educational process, and this requires assessment practices that are well designed and properly undertaken. However, to date, few studies have focused on EMI assessment in tertiary institutions; this is also the case in Vietnam, despite a growing body of research on EMI programs. This chapter therefore reports on the assessment practices carried out in EMI courses at a public university in Vietnam. It explores the assessment methods employed, the reasons for their use, and the effectiveness of the assessment practices both for the purposes they were intended to serve and for enhancing EMI pedagogy.

2 Assessment and EMI

We take assessment in education as all those activities that involve gathering evidence to inform judgments about a student’s knowledge, skills, or ability or his/her attainment of the learning goals being assessed (Green, 2014; Harlen, 2006). The two main roles of educational assessment are to summarise or verify learning achievement for certification, selection, and accreditation purposes and to support learning (Broadfoot, 2009; Little, 1996). Assessment for the purposes of summarising or verifying learning achievement (also called Assessment of Learning or AoL) prioritises the ‘consistency of meaning’ of assessment results across contexts and individuals, with the most common forms being standardised tests, exams, or final assignments. Meanwhile, assessment for purposes of facilitating learning (often referred to as Assessment for Learning or AfL) emphasises the collection and interpretation of learning evidence for use by learners and their teachers to improve learning (ARG, 2002). As an integral part of the teaching and learning process, key areas of AfL practice include questioning, self- and peer-assessment, feedback, and formative use of summative tests (Black et al., 2003). The goals are to make explicit what the students know, how they come to know it, what else they need to know, and how (Murphy, 2008). Due to the prevalent use of assessment results for making important decisions that can affect students’ life chances, considerably more focus has so far been placed on AoL than on AfL purposes (Broadfoot, 2009; Little, 1996; Tran, 2015).

Despite the voluminous research on EMI implementation at HEIs, only a few studies focus on assessment; these focus primarily on assessment methods employed, student learning outcomes, and the implementation of AfL practices.

In terms of assessment methods, after surveying 29 teachers delivering various EMI courses at different HEIs in Taiwan and interviewing eight of them, Kao and Tsou (2017) found that most assessment was summative. Common methods included written exams, weekly assignments, term projects, and in-class quizzes, all aimed at measuring students’ learning outcomes at the end of an instructional unit. Almost all of the participating teachers (90%) indicated that the assessment tools employed in EMI courses were essentially the same as those used in the parallel non-EMI courses. In Spain, Dafouz and Camacho-Miñano (2016) also reported the use of assessment methods such as seminars and mid-term and final exams to measure students’ learning outcomes in EMI Accounting courses. Interestingly, students’ active class participation was included in the calculation of the course grade, accounting for 10% of it.

Concerning student learning outcomes, Dafouz and Camacho-Miñano (2016) found that students’ participation grades were lower in EMI courses than in non-EMI courses, while Macaro et al. (2018) revealed a general belief among students that they studied less well in English than in their first language. However, both the experimental study by Tatzl and Messnarz (2013) and the case study by Dafouz and Camacho-Miñano (2016) found no statistically significant differences in the final academic results of students taking EMI courses and those taking non-EMI courses. What is not known is whether this could be attributed to differences in AoL practices.

Regarding AfL practices, Hu and Li (2017), after observing and analysing audio recordings of ten EMI lessons in different disciplines, found that the lessons were characterised by cognitively low-level teacher questions and responses, with about one-third of teacher questions in English met only with student silence. Other studies have also reported impoverished classroom discourse in EMI classes, with limited student participation (Hu et al., 2014; Yang, 2015) and low-quality classroom interactions (Hu & Duan, 2019; Macaro et al., 2018). After surveying 40 tertiary EMI teachers in Taiwan, Li and Wu (2018) found differences between the assessment practices carried out in EMI and non-EMI classes. In particular, written feedback, group and pair work, and tests or quizzes were practised less in EMI classes, while selecting textbook-provided test items, using assessment results to plan teaching, assessing through observation and presentation, and including student engagement in grade calculation were applied more. However, little is known about the enactment of, or the reasons for, such practices.

In response to the gaps identified above, this study addresses the following research questions:

  a. What assessment practices are carried out in the EMI courses at the university?

  b. Do the assessment practices fulfil the purposes they are supposed to serve?

3 The Study

This study originated from a larger research project conducted at a public university in the north of Vietnam. The university has more than 700 teaching staff and more than 24,000 full-time students. In response to the requirements to include EMI courses in tertiary education programs to promote the internationalisation of Vietnamese higher education (Vietnamese Government, 2005, 2008), the university has recently encouraged Faculties to develop EMI courses, allowing students to register voluntarily. In preparation for EMI implementation, the university has also offered teachers several professional development courses on English language and EMI teaching pedagogy. However, for various reasons, not all of the teachers attended those courses before starting EMI implementation. At the time of data collection, there were eight EMI courses at the university, all being delivered for the first time and all designed and delivered by existing university staff. One of the EMI courses was newly developed, with no equivalent course in Vietnamese, while the other seven had parallel Vietnamese Medium Instruction (VMI) courses delivered either at the same time or in previous years.

The study participants comprised the eight teachers delivering the EMI courses and the 275 students undertaking them. Courses #1 to #4 were in Business Management, course #5 in Electronics, courses #6 and #7 in Mechanical Engineering, and course #8 in Information Technology. The teachers (coded T#1 to T#8, matching the courses) all met the ministerial requirements to deliver EMI courses; that is, they had either completed post-graduate programs in English overseas or had English proficiency at or above level C1 of the Common European Framework of Reference for Languages (MOET, 2014). Six teachers held PhD degrees and the other two held MA degrees. None had delivered EMI courses before. Of the participating students, more than half (58%) were in their third year, about one-third (32%) in their final year, and the rest (9%) in their second year. On a voluntary basis, eight students (coded S#1 to S#8, matching their course affiliation) took part in individual interviews with one of the researchers.

This study employed a mixed-methods design, using multiple instruments to collect data from different sources. The instruments included classroom observations; a survey questionnaire administered to students; semi-structured interviews with teachers and student representatives, lasting from 30 to 45 minutes; and document analysis of the students’ end-of-semester test papers and assignments. The classroom observations were conducted during eight forty-five-minute lessons, one in each EMI course, focusing on classroom interactions between teachers and students and among students. The teacher interviews centred on assessment practices and the reasons for adopting them. The student questionnaires and interviews asked about the assessment practices experienced in the courses and the students’ perspectives on their usefulness to learning and the reliability of the assessment results. The students’ end-of-semester test papers and assignments in English were compared with the parallel test papers and assignment questions in Vietnamese in terms of difficulty level and content coverage.

4 Findings and Discussion

In this section, the research results are presented and discussed with reference to the results of related studies; assessment for learning and assessment of learning practices are addressed separately.

4.1 Assessment for Learning Practices in the EMI Courses

The identified AfL practices can be grouped into four key areas: questioning, feedback, self- and peer-assessment, and formative use of summative tests (Black et al., 2003). The focus here is primarily on how such practices were carried out and how they supported student learning.

4.1.1 Questioning

Classroom observations revealed few oral interactions in most of the EMI lessons. The occasional teacher questions were predominantly at a low cognitive level, serving knowledge-recall or comprehension-check purposes, and were generally pre-prepared and written on the board or a PowerPoint slide, or sent to the students in advance, to ‘make sure that the students understand the questions in case they could not keep up with the lesson’ (T#3). In each class, a few students often volunteered to answer, while others responded only when short answers were required, and did so softly in chorus. When no one volunteered, teachers would call on individual students, but responses were often unintelligible due to poor pronunciation and structural errors. One student was observed pointing to the words in their notebook so that the teacher could understand. Few teachers elaborated on students’ responses to elicit further evidence of understanding or to negotiate meaning. The following excerpt is one example.

T#8: (Writing the question on board and asking the whole class) What are software product standards?

After a while, one student volunteered to answer.

S: … (incomprehensible; he seemed to be reading from his notes)

T#8: Who has any comments? Is it true or not?

Some students said ‘true’ and some said ‘no’ softly; the teacher called on another student.

S: … (incomprehensible)

T#8: Thank you

(Teacher showed the correct answer on the slides and for some minutes read and explained in English and used Vietnamese to explain new terms).

It was noticeable that many students did not respond to teacher questions or pay attention to their peers’ responses. Sometimes even the students called on refused to give an answer. Few students were observed asking the teachers questions.

The reasons for students’ low level of participation in question-and-answer activities, as indicated in the student interviews, comprised limited English proficiency, insufficient understanding of the related disciplinary content, and a lack of confidence, the last of which also relates to face-saving concerns.

Only those who are good at English often volunteered to respond to the teacher questions. Those who are not so good at English like me could answer only easy questions. For more difficult questions we could only respond in Vietnamese since we did not know the English words. (S#5)

Only when I had prepared to present and understood the subject matter thoroughly, did I participate in responding or asking questions of the teacher. (S#6)

These views are consistent with the survey responses (see Table 10.1).

Table 10.1 Students’ difficulties in participating in EMI lessons

Analysis shows a statistically significant relationship between the students’ learning difficulties and their participation in responding to teacher questions during lessons. Although fewer students indicated difficulties with learning motivation, there was a statistically significant correlation between students’ English proficiency and their motivation to learn in EMI. In particular, students who judged their own English proficiency insufficient for studying in English were likely to report a lack of interest in the lessons.

It was noticeable that more students in teacher #7’s class responded to teacher questions and asked questions of the teacher during the lesson. Further interview analysis shows that fewer students in T#7’s class reported difficulty understanding instruction in English, insufficient understanding of the related content knowledge, or feeling unconfident or unmotivated to take part and respond to questions. Teacher #7 used the most teaching aids (pictures, videos, realia, games), gave the clearest and most fluent instruction in English, and insisted on students going to the board to write their ideas instead of standing at their seats, believing that the practice would ‘help students get over the feeling of apprehension and become familiar with communicating in English’ (T#7). The students’ active participation in questioning indicates the importance of teachers’ English competence and teaching methods in engaging students in assessment and active learning in EMI classes.

4.1.2 Feedback

Classroom observations showed that most teachers spent little time giving oral feedback on student learning, especially teachers of technical disciplines. After calling on students to give answers to questions or present a topic or do an exercise on the board, the teachers sometimes commented on the students’ answers, affirming the correct answers or praising the students. Two teachers (T#1, #7) recast students’ responses, emphasising correct word stress and word choice. Two other teachers (T#2, #5) were not observed to give comments but provided further instruction, when necessary, to facilitate student understanding. The following excerpt illustrates the practice.

(At the beginning of the lesson, teacher #2 asked students to close the book and work in groups of four to write down the key terms learned in the previous lesson. After five minutes, she called five representative students of the groups to write down the keywords on the board)

T#2: (Pointing at the two terms written by one group) are they the same or different?

(One group said ‘different’ but most other students kept silent)

T#2: Don’t you know these two terms or …Speak out, please ... No ideas?

T#2: (Turning to the only group saying ‘different’) How different are they [the two terms]?

(Pointing to one term) Which information does this provide?

(None of the students responded)

T#2: OK, I think we need to discuss these terms again. (Explained the two concepts) Remember it? Can you see the differences? (Explained further)… understand? Do you need me to explain in Vietnamese? No? OK.

In the interviews, students representing two EMI classes recounted their teachers’ practices of giving feedback as follows.

…After our presentations, the teacher summarised the main ideas of our presentation and gave feedback on the weaknesses of our work. We would work on the feedback to revise the work and resubmit for marking. (S#4)

We have to make chapter reports, summarising the chapter content for marking. If we would like to gain high marks, we could send our written report for the teacher’s feedback in advance or ask for her permission to present in front of the class and get feedback to revise before official submission. (S#3)

Little evidence of teachers’ written or oral feedback on students’ submissions was collected, so it is unclear whether the feedback helped students understand their learning goals or what they needed to do to achieve them (Tran, 2015). However, as reflected in the students’ accounts, the opportunity to resubmit assignments for grading made students value and pay more attention to teacher feedback. Students did not appear to pay as much attention to feedback on pronunciation or wording, or to further instruction.

4.1.3 Peer- and Self-Assessment

Interviews with the students and teachers showed that only two teachers (T#4, #5) asked students to assess one another, and one teacher (T#3) asked students to carry out self-assessment. Teacher #4 recounted giving students a list of guiding questions to answer in groups out of class, calling on one group at random to present in the lesson, asking the other groups to assess the presenting group, and incorporating the peer-assessment results into the final scores given to the presenting group. Teacher #5 reported a similar practice, differing only in how scores were allocated: ‘I gave the presenting group a total sum of scores and the group members had to decide who gained which marks themselves and handed in’ (T#5). According to the representative students, such practices made them prepare for the lessons since they ‘did not know which group would be called on to present’ (S#4), but none of them were certain about the assessment criteria.

Unlike the peer-assessment practices, the self-assessment practice implemented by teacher #3 was not tied to student scores. As reflected in the student comment below, the practice supported student self-study.

In every chapter, the teacher gave us a bank of True-False questions on the chapter content to answer and self-assess our understanding. When preparing for the lessons, I often self-tested by answering the questions and looking up the answer keys to see if my understanding was correct. (S#3)

However, no further guidance was provided for students’ self-study, and teacher #3 also indicated that she did ‘not have enough time’ to carry out the practice frequently.

4.1.4 Formative Use of Summative Tests

During the EMI courses, the students undertook periodic summative assessment in formats such as written assignments, oral presentations, performance tests, and written tests. Based on students’ mistakes in the tests, or signs of misunderstanding in assignments or presentations, the teachers sometimes provided further instruction on the relevant matters. The following extract demonstrates the practice.

When returning the tests, the teacher often focused on the test items that most of us had answered incorrectly, asking those who answered correctly to write their answers again on the board and explain why they were correct. (S#4)

However, the main focus appeared to be on showing the correct answers, with little attention to analysing why or how the students made the mistakes or to identifying what should change in teaching and learning to help students avoid them.

Besides the periodically planned assessments, some teachers also assessed students through activities such as answering questions or doing exercises on the board, awarding marks based on the students’ performance. As demonstrated in the following extract, the practice aimed to motivate student learning.

When students have volunteered to do exercises on the board many times, they would be given one or half of a point to add to their scores of the periodical summative assessment. That [the bonus marks practice] motivated them to put more effort into learning. (T#2)

According to the representative students, the possibility of gaining bonus marks did encourage them to participate more actively in the lessons to obtain ‘high marks’ and ‘good learning results’. However, since the bonus marks mainly assessed the students’ learning attitudes and behaviours, incorporating them into students’ academic results could negatively affect the validity and reliability of those results.

In short, in these EMI courses, AfL practices were rather limited and depended largely on individual teachers’ professional competence. Together with contextual factors such as face-saving concerns and inadequate attention to assessment for learning, the use of EMI hindered the students’ demonstration of their learning and impeded their understanding of the learning goals. As a result, the gaps between students’ current and desired levels of learning were not clearly identified and student learning was not scaffolded effectively, which endangered the realisation of the dual goal of EMI implementation.

These findings in regard to AfL practices are in alignment with the findings of previous studies such as Hu and Duan (2019), Hu et al. (2014), Hu and Li (2017), and Yang (2015) on the limited student involvement in EMI courses and the low quality of classroom interactions. Regarding feedback practices, similar findings were also documented in Li and Wu’s (2018) study, where teachers provided less feedback in EMI classes, especially when class sizes were large. The results of this study are also consistent with the findings reported in the studies carried out by Hu and Duan (2019) and Kao and Tsou (2017) on the influences of teachers’ professional competence on their AfL practices in EMI courses.

4.2 Assessment of Learning Practices in the EMI Courses

The research results on the AoL practices delivered in the EMI courses focus on three aspects: assessment methods, assessment content, and assessment results. The discussion centres on how these were influenced by EMI and whether they could fulfil the purposes of measuring and certifying student learning outcomes.

4.2.1 Assessment Methods

As can be seen in Table 10.2, the EMI courses employed four methods to assess student learning: written tests, performance tests, oral presentations, and written assignments, among which written tests were the most common. Student participation was not officially graded in any of the courses, although it was sometimes assessed and unofficially included in the course grades in the form of bonus marks.

Table 10.2 Summative assessment in the EMI courses

Data from the interviews with the teachers showed that the assessment methods in the EMI courses were largely similar to those implemented in the parallel VMI courses, except in courses #4, #5, #6, and #8. Notably, in the parallel VMI courses, progress test 2 in courses #4 and #8 was a written test, and the end-of-semester assessments in courses #5 and #6 were performance tests. As demonstrated in the extracts below, the assessment methods were purposefully adjusted to motivate students to put more effort into learning and to encourage their retention in the EMI courses, reasons directly related to the students’ English language competence.

Due to the students’ low English proficiency, studying in English requires students to self-study more than when studying in Vietnamese. However many students do not know how to self-study. Thus, I had to develop a study guide to show them how to prepare for the lessons. To make sure that they followed the requirements, I asked them to present what they have prepared and assessed their understanding. The fact that their presentation scores were calculated into the final results made them pay more attention to the task and learn better. (T#4)

This is the first time we have offered this course in English. Thus, some students registered because of curiosity or by mistake. However, after registration, they were very anxious about not passing the course and would like to drop out. Therefore, we had to meet with the students and encourage them not to give up. We also changed the end-of-semester assessment format from the performance test into a written assignment so that the students could have more time to prepare for the assessment. (T#5)

There was not enough data to explore whether the assessment methods, or the adjustments made to them, were appropriate to ensure the validity and reliability of the assessment results. However, the practices described above indicate that the summative assessment methods employed in those particular courses were affected to some extent by EMI, although the level of impact depended largely on the teachers and their perceptions of the need to consider students’ English proficiency.

4.2.2 Assessment Content

Analysis of the end-of-semester test papers and assignment questions showed that the complexity and formats of the test questions in the EMI courses were similar to those in the VMI courses, except in courses #2 and #3. The course #3 test paper contained only five True/False questions, whereas the parallel VMI test paper contained ten; and in question 2 of the course #2 test paper, tax information was not provided as data, whereas it was in the parallel VMI version. The interviews with teachers and students showed that not only the end-of-semester tests but also the progress tests of courses #2 and #3 were intentionally made easier to take account of the English language demands of doing the tests. This practice also appeared to be a means of reassuring students about their results when taking EMI courses.

Due to the language barriers, the EMI students did not have much time to practice doing calculation exercises [during the course]. The test questions in English were a little bit easier since they did not have as much time [to do the calculation] as the VMI students. (T#2)

At first the students were worried that they could not understand the lessons but then they felt more confident since they all could do well in the [progress] tests. (T#3)

As indicated in the following extract, the students appeared to recognise the teacher’s practice of reducing the assessment demand in the EMI course.

In comparison with my friends studying in the parallel VMI course, I think we studied less. In the EMI course, the content knowledge was not discussed and expanded in further details [as it was in the VMI course]. Those friends of mine learn more knowledge and know more formulas than I do. Our [EMI] course content was just a part of their course but we learned in English so it was more difficult. Because of that reason, the teacher was more generous when assessing our learning and the tests were easier. (S#3)

It was also noticed that in the EMI test papers, theoretical questions accounted for a small proportion (20–30%) of the total score, with the larger proportion (70–80%) allocated to calculation questions or problem-based exercises that did not require lengthy explanations in English. During the courses, the students were also guided on ‘how to respond to theoretical questions in English’ (S#2, #3, #4, #7). Such practices seemed to minimise the impact of English language demands on the demonstration of students’ learning outcomes. However, as the following comment indicates, test results were still negatively affected by students’ low English proficiency.

When doing the tests in English, sometimes we were too hurried to read the test questions carefully so we misunderstood the questions and gave incorrect answers. It would not have happened if we had taken the test in Vietnamese. (S#2)

Overall, it can be seen that the assessment content was in various ways affected by the use of EMI. Although the issue was found only in two out of the eight courses, this finding heightens concerns about possible narrowing of curriculum and lowering of educational standards as a result of EMI implementation.

4.2.3 Assessment Results

Table 10.3 presents the students’ perspectives on assessment in the EMI courses. Most students believed that the assessment carried out in the EMI courses was appropriate to the course content and that the assessment results truthfully reflected their learning outcomes, with 98% and 91% of students agreeing with the respective statements. However, only about two-thirds of the students indicated that they were satisfied with their assessment results. Notably, the EMI courses using assignments as the end-of-semester assessment (courses #5, #6, #8) had a higher percentage of students satisfied with their results than the other courses. There was not enough data to examine the impact of the assessment methods on the students’ results. However, this finding on students’ satisfaction with their course grades supports Al-Bakri’s (2017) finding that many EMI students were not content with the assessment results they obtained.

Table 10.3 Students’ perspectives on the assessment of the EMI courses

It can also be seen in Table 10.3 that fewer than half of the students thought they would have gained similar grades had they studied the subjects in Vietnamese. In the interviews, when speculating on the learning outcomes students would have achieved in the parallel VMI courses, almost all of the teachers and students believed that the students would have acquired 10–30% more knowledge and achieved higher course grades. The only exception was course #3, where both the teacher and the student tended to believe that the learning outcomes would have been lower in the VMI course. Interestingly, the student attributed this to the seemingly lower assessment demands set for EMI courses, while the teacher pointed to the better teaching and learning conditions of EMI classes compared with VMI classes: ‘There are about 80 students in VMI classes: Only some students study actively, many others do not. Meanwhile, there are only 20 students in the EMI class and the students concentrate more on studying so…they could learn better’ (T#3).

In short, these research findings show that EMI did affect assessment methods, content, and results, although the impact was not ubiquitous and its extent varied across courses. Given the adjustments to assessment methods and content, and the teachers’ and students’ shared belief in the negative impact of EMI on student learning performance, it is questionable whether the assessment practices fulfil the purpose of accurately measuring student learning outcomes.

The results of this study regarding AoL practices echo Al-Bakri’s (2017) finding that the use of English can affect students’ comprehension of questions and, consequently, their educational performance, which may undermine the validity of assessment results. Up to 46% of respondents in Al-Bakri’s (2017) study indicated that they sometimes gave incorrect answers because they did not understand the questions. Together, these findings highlight the potential impact of students’ insufficient English proficiency on teaching and learning, raising concerns about the educational quality of EMI courses and whether the degrees EMI students receive actually meet international standards. Given the research results on the adjustments to assessment content and methods in consideration of students’ low English levels, this study supports Macaro et al.’s (2018) claim that research findings on EMI student learning outcomes need to be interpreted with caution because of the disparate educational contexts and the different connotations attached to assessment results. Furthermore, while the practice of giving bonus marks appears to mirror the practices of assessing students’ lesson participation reported by Dafouz and Camacho-Miñano (2016) and Li and Wu (2018), it raises concerns about inequality, since individual teachers have their own ways of giving bonus marks and not all students are capable of taking up the opportunities to gain them (Tran, 2015).

5 Conclusion and Implications for Moving Forward with EMI

This study investigated the assessment practices carried out in EMI courses in a particular HE context in Vietnam and explored the reasons for those practices and their capacity to fulfil the two main roles of assessment: supporting learning and measuring learning achievement. The findings demonstrate that both AfL and AoL practices depended largely on the individual teachers, particularly on their teaching and assessment competence. This situation can be attributed to insufficient preparation for EMI implementation, especially in terms of students' English proficiency, teacher training, and the provision of institutional guidelines on EMI course assessment. The results also indicate that the assessment practices did not adequately fulfil the purposes they were intended to serve.

A recurring theme emerged from the analyses of both AfL and AoL practices in the EMI courses, relating to the increasing top-down demands on EMI implementation and the dominant use of assessment results in important decision-making, alongside inadequate preparation for EMI implementation: assessment and its results were being used as a means of maintaining and encouraging students' learning engagement and continued enrolment in the EMI courses. Although the student data suggest that such strategies worked for a proportion of EMI students, they cannot be regarded as effective solutions, given the concerns about the validity and reliability of the assessment results and the fact that up to a third of the EMI students indicated that they would not take part in other EMI courses in the future.

However, the findings presented and discussed above should be interpreted in light of several methodological limitations. First, classroom observations were carried out only once in each EMI course. Second, only one student from each EMI course was invited to take part in the interviews. Third, data on teachers' written feedback on student assignments during the courses, the content of the periodical assessments, and the students' assessment results were not collected. Finally, the research was conducted in a single HE institutional context in Vietnam, with only a small number of EMI courses investigated. Further research should therefore be conducted in more varied educational contexts and should gather richer data on EMI classroom interactions, teachers' perspectives on EMI teaching and assessment, and student learning outcomes, comparing these with parallel non-EMI courses for a deeper understanding of the reasons for assessment practices and the extent to which those practices serve formative and summative purposes.

Despite the aforementioned limitations, this study carries important implications for improving the implementation of EMI at the tertiary level. First, it is vital that students attain adequate English proficiency to study in English and that they also be supported to develop self-regulated and active learning skills. Second, EMI teachers need to be provided with professional development in teaching pedagogy and assessment. Key areas for teacher professional development highlight the importance of AfL and include (i) improving the quantity and quality of classroom interactions via questioning practices; (ii) involving students in assessment-related activities such as responding to teacher questions and peer- and self-assessment; (iii) explicitly communicating the assessment criteria and learning objectives via feedback and guiding students' self- and peer-assessment; (iv) selecting appropriate summative assessment methods and content; and (v) using assessment data to inform teaching and give feedback on student learning. Third, it is necessary to examine the appropriateness of the summative assessment methods and assessment content delivered in EMI courses to ensure the validity and reliability of assessment results and to monitor the quality of the degrees offered to students undertaking EMI programs. Finally, more research needs to be conducted to investigate the effectiveness of EMI assessment practices in realising the dual goals of developing both disciplinary knowledge and English language proficiency. We believe that high-quality EMI assessment practices are an integral part of moving forward pedagogically with EMI in Vietnamese universities.