1 Introduction

As the higher education marketplace expands rapidly in many countries, students and their families are becoming more discerning when selecting a post-secondary school. Access and financial considerations are not the sole criteria driving their decision-making. Demonstrated quality and value-added are becoming the distinguishing factors when making this critical choice; what Gallagher (2015, para. 2) refers to as the “marketization of higher education due to a focus on value and transparency.” Increasingly, universities, faculties, and instructional programmes are seeking means to validate institutional quality. While Engineering and Business programmes have set the pace for quality assurance, primarily through accreditation, there is little doubt that Foreign Language programmes, particularly EFL programmes, should be at the forefront as well. As the gateway to many, if not all, programmes in universities, EFL programmes may be the largest instructional units in a university. Consequently, EFL programmes are quite often the face of the university when it comes to attracting students. A strong grasp of English, provided through a preparatory or foundation programme, opens the door to a quality higher education. Moreover, a quality EFL programme is also a proxy for less time (and money) spent at this early stage, meaning a quicker path to a degree. Thus, in order to continuously affirm the value of an EFL programme, ongoing evaluation and quality assurance are becoming mandatory activities carried out in parallel with curriculum delivery.

The quality of EFL programmes has not always garnered the current level of attention. Indeed, for many programmes the need for evaluation still warrants justification, which explains the relevance of this chapter. In most higher education institutions, the EFL programme is not viewed as an academic unit (i.e., one resulting in research by faculty or degrees for students). Rather, it sits outside the perimeter of the institutional core, acting as a service unit and playing the role of a loosely coupled (Weick, 1976) entity. While loosely coupled units by definition may enjoy greater flexibility in how they operate, they are often marginalized in terms of resources and status. This is often evidenced in contrasting academic designations, where the EFL programme may be administered by a director (not a chair or dean), and content is delivered by instructors and teachers (not assistant, associate, or full professors). As a result, and for better or worse, EFL programmes often operate unchecked, both internally and externally.

Yet, there are a number of global trends in post-secondary education that are pushing universities to reconsider the amount of attention given to the quality of the EFL programme. With the steady growth of higher education institutions delivering so-called English-Medium Instruction (EMI), or Content and Language Integrated Learning (CLIL) (e.g., Coleman, 2006; Maiworm & Wächter, 2002), schools are finding themselves in a position whereby evaluation and quality assurance are unavoidable. Practically speaking, in many such institutions, the EFL programme may be the gateway and gatekeeper for a significant proportion of students. Ensuring the successful, timely completion of the EFL programme for the greatest number of students is essential, if for nothing other than to prevent a bottleneck. Additionally, the growing privatization of higher education has resulted in a context where institutions compete for student-consumers; quality and reputation thus become value-added features that provide a competitive advantage, particularly in a crowded marketplace. This has led to an accreditation movement across the field. It is also important to note that a lack of attention to the quality of the EFL programme may result in issues regarding student persistence and retention. The EFL programme may serve as an essential stepping-stone toward a student’s academic and career goals. Students who struggle with language learning, or who feel dissatisfied with the effectiveness of the programme, may choose to leave the institution. While the effectiveness of the EFL programme is closely associated with the efficient delivery of the curriculum, there are numerous additional factors across the programme that comprise the broader definition of its quality.

From this context of increasing need for attention to the quality of EFL programmes, a number of pertinent research questions emerge. First, what would one expect to find in the design of a comprehensive, rigorous evaluation of an EFL programme? Second, what considerations should be made in order to design and implement a sustainable quality assurance effort within an EFL programme? This qualitative case study, carried out at an EMI higher education institution in Istanbul, Turkey, seeks to provide clarity on these two important questions.

2 Theoretical Background

The foundation of an effective and sustainable quality assurance programme is the beliefs and actions of the organization, demonstrating a “collective commitment” (Maki, 2004, p. 11) to this endeavour. The organization may be the larger institution in which the EFL programme is housed, or may be the unit in which the programme resides; e.g., a faculty or a school of foreign languages. The point to emphasize is that quality assurance cannot be viewed as a desktop exercise, carried out by a few sequestered individuals who are charged with examining data and creating reports. After all, the goal of assessment is not just to gather evidence, but also to make evidence-informed changes (Banta & Blaich, 2011, p. 25). Thus, the sustainable quality assurance enterprise must be “driven by the internal curiosity” of a learning organization as opposed to external forces. It should be envisioned, planned, implemented, and “reflected in structures, processes, and practices,” with the intent of becoming institutionalized and leading to the broad pursuit of knowledge for improvement. The task for leaders, then, is to foster the learning environment by supporting and taking responsibility for quality assurance initiatives, particularly at the unit level where learning takes place (Banta, Jones, & Black, 2009, p. 12). That is, a context must be created where evaluation and assurance are “purposefully planned and intentionally reflective” (Bresciani, 2006, p. 23), and where a feeling of trust is continually reinforced, with data and results utilized in the manner for which they were specified (Bresciani, 2012, p. 418). Thus, critical groundwork must be laid in order to create a healthy, productive environment, one that sees evaluation and assurance not as perfunctory activities, but rather as opportunities for the organization, or unit, to learn and grow.

The literature on programme assessment is quite clear regarding the establishment of a productive learning environment within the unit. In the design and implementation of a quality assurance programme, the first important step is to develop a plan that gives prominence to evidence and makes it a “consequential factor” in programme planning and review processes (New Leadership Alliance, 2012, p. 7). A critical piece of the plan is to deliberately consider the weight of time required of faculty and staff to participate in evaluation and assurance initiatives. Those developing the plan should acknowledge that impact on learning cannot take place without increased engagement from faculty and staff (Banta & Blaich, 2011, p. 23), and that evaluation- and assurance-related activities are time-intensive, given the expectations for analysis and reflection (Baker, Jankowski, Provezis, & Kinzie, 2012, p. 10). Additionally, such activities are often perceived as bureaucratic, externally driven, and a needless encroachment on time that should be dedicated to students (Banta & Blaich, 2011, p. 25).

Beyond planning, when it comes to implementation, it is essential that logical connections are consistently reinforced to all involved in the enterprise between the perceived busy work of data collection and analysis and the goal of improved quality of learning. This can occur through opportunities provided at various points in the assessment cycle for faculty and staff to develop an understanding of assessment and advance their knowledge and experience with it (Banta et al., 2009, p. 15). Collecting and analysing programme data must be viewed as a means to an end; it should exist to provide stakeholders with the information to satisfy their own natural curiosity about the results of their work (Bresciani, 2012, p. 15), which, in turn, results in discussions around learning and the curriculum. Blaich and Wise (2011) make the case that “assessment data only has legs” (p. 12) if there is a feedback loop with the data speaking to questions that staff have about learning and the programme. If the message of data supporting programme improvement becomes blurred, assessment runs the risk of becoming marginalized (Maki, 2004, p. 15). As Banta et al. (2009, p. 5) stress, assessment that “spins in its own orbit” will not succeed.

Evaluation and quality assurance efforts are a double-edged sword. On one side, Banta and Blaich (2011, p. 27) point out that assessment is a subversive activity, raising questions about learning and quality, and creating discomfort regarding established habits within a programme. On the other side, when implemented effectively, a focus on quality assurance leads to consistent programme improvement and growth. For the latter condition to prevail, how the quality assurance effort is implemented is critical in both the short and the long term. In the short term, it is about building trust; demonstrating that data is for improvement and not punishment. Tagg (2007, p. 37), in explaining why organizations have such difficulty with change, believes that the most fundamental problem of colleges is that the people within them do not learn very well. Therefore, the long-term goal for the quality assurance effort should be to foster a learning organization, which comes from consistent, productive conversations around data and quality, as well as the reinforcement of trust through conscientious and equitable use of data. Bresciani (2006) offers “criteria for good practices” to guide the design stage of an outcomes-based assessment programme. In the case of the institution examined in this study, her list of criteria was adapted and has served as a valuable point of reference for the school’s own quality assurance programme development. Her nine criteria outline an instructive cycle of design and implementation from clarity of goals and expectations, to collaboration and recognition of effort, to coordination and flexibility of implementation, to ongoing evaluation of the process. The following case study explores the design and implementation of a quality assurance initiative in a university-level EFL programme, and the degree to which Bresciani’s criteria may serve as a framework for other such programmes in the initiation of quality assurance efforts.

3 Method

This research was carried out using the Case Study approach, which explores a “bounded system, such as a process, activity, event (…)” (Creswell, 1998, p. 112). The case is bounded within a particular context, which is the School of Foreign Languages (SFL) at a private university in Istanbul, Turkey. Furthermore, this can be viewed as an instrumental case study (Stake, 1995), whereby it is used to illustrate an issue, which is the design and implementation of a Quality Assurance initiative of an EFL programme. In Turkey, there are currently 185 universities; 41 % (n = 76) are private institutions. According to data provided by the website of the Turkish Higher Education Council, 84 % (n = 155) of the 185 universities have EFL preparatory programmes. As of Summer 2015, fewer than ten EFL programmes in the country had received accreditation, with the site of this case study being one of them.

The case study site has five Faculties: Arts and Sciences, Economics and Administrative Sciences, Architecture and Design, Engineering, and Fine Arts. English is the medium of instruction in all faculties and academic programmes, with the exception of the Faculty of Fine Arts and a recently established Psychology programme in the Faculty of Arts & Sciences. The school has an enrolment of approximately 3000 undergraduate and 500 graduate students; nearly half of the undergraduates live on campus. Approximately 80 % of all incoming freshmen enrol in a one-year, intensive academic English preparatory programme at the school; the remaining 20 % are exempt either through performance on the EFL proficiency exam or standardized exams such as the TOEFL or IELTS. An Exit Exam is developed and administered at the end of the academic year by the EFL preparatory programme. Students who successfully complete the EFL programme move along to their academic programmes as freshmen. Those who do not successfully complete the programme may enrol in an intensive summer programme and sit an Exit Exam at its completion. If they are once again unsuccessful, they must re-enrol in the EFL programme for a second academic year. Between the 2011–2012 and 2014–2015 academic years, the average percentage of EFL students required to re-enrol for a second year in the programme was 48 %.

The EFL programme is led by one director, who reports directly to the rector of the university, supported by an administrative layer comprised of the Coordinators team: the Academic Coordinator, the Administrative Coordinator, the Curriculum Coordinator, the Testing and Assessment Coordinator, the Integrated Technology Coordinator, and the Student Learning Centre Coordinator. In 2014, prior to pursuing accreditation, positions were created for a Quality Assurance Coordinator and a Continuous Professional Development Coordinator.

Prior to the 2014–2015 academic year, attention to quality could be characterized as unstructured and unsystematic. Data analysis was conducted primarily through end-of-year Exit Exam results and annual survey data gathered from students and staff. Most data could be classified as output-based (e.g., number of students passing or failing, number of students visiting the Student Learning Centre, number of teachers receiving professional development during the year). While broad learning outcomes were identified (e.g., “students are able to write a 600-word informational essay”), a comprehensive list of learning outcomes and objectives was lacking, as was an accompanying outcomes assessment process. Little data found its way into a feedback cycle that informed adjustments to the curriculum or staff development. In addition, data on incoming students was not analysed to serve as predictive analytics, nor was there an attempt to gather and analyse data on those students who had left the EFL programme prior to successful completion. In sum, the EFL programme lacked a systematic means for aggregating, disaggregating, and analysing data. Therefore, to some degree, life was easy. If students were unsuccessful, the responsibility was their own. With only a broad light cast upon the programme, there were few sharp beams that focused on specific issues. Potential solutions remained in the shadows.

In the 2014–2015 academic year, the director of the SFL determined that the EFL programme would pursue accreditation, which would serve two purposes. First, in the crowded university marketplace for student attention, a stamp of accreditation provides schools with a competitive advantage. Second, the director determined that evaluation of quality, of both processes and instruction, was to be a major priority from that point forward. The first step was the establishment of an SFL-wide quality assurance system, embodied in the creation of a Quality Assurance Unit (QAU), which included a place on the organizational chart as well as the hiring of a coordinator and one part-time staff member, who is also a full-time instructor. The first priority for the QAU was to facilitate the accreditation application process. However, accreditation was not seen as the sole intent of the QAU. The mission of the QAU is to ensure continuous improvement across all aspects of the SFL, and the accreditation process is viewed as an important way to periodically affirm ongoing quality assurance activities. This study is an evaluation of the design, development, and implementation of a QA effort within an EFL programme.

The design and development stage of the QA process took place over a seven-month period (i.e., May–November) as the EFL programme conducted a self-study and revised or established policies, procedures, and processes that would result in an effective, sustainable QA system. Implementation and evaluation are ongoing. From the outset, there has been a consistent focus on data collection and analysis for evaluative and planning purposes. Through surveys, focus groups, and interviews, data has been gathered in order to formatively evaluate the effectiveness of the design implementation, as well as to assist with planning. Surveys have been used on a frequent basis as they help to acquire a “quantitative or numeric description” of the attitudes or opinions of a population through a sampling of that population (Creswell, 2012, p. 155). Despite the risk of playing a role in the modern phenomenon of “overload” (Sue & Ritter, 2012) from too many digital surveys, brief online surveys (e.g., Google Forms or Survey Monkey) have proven to be an effective way to collect data. Finally, whereas surveys are able to provide rapid feedback on activities, interviews and focus groups, which allow one to “learn about what you cannot see” (Glesne, 1999), have also served as useful means for the collection of evaluative data. Interviews and focus groups are generally semi-structured in order to guide the conversation so that, if there are multiple interviewers, the integrity of the process is maintained (Fontana & Frey, 1998, p. 52).

4 Results

The design and development phase of the quality assurance initiative took place over a seven-month period, from May to November 2014. The first step in the process was the establishment of a Quality Assurance Unit (QAU). A coordinator was hired and an office was dedicated to the Unit. The initial task of the QAU was to lead the first-time accreditation application process. A working team was assembled from current EFL instructors and staff, and relevant tasks were allocated; e.g., a curriculum team leader assumed responsibility for addressing standards related to learning, while the Assessment Coordinator addressed standards related to testing and assessment. The thread woven through all standards in the application, however, was quality: the establishment of policies, procedures, and a system that would focus on continuous improvement across all aspects of the EFL programme. Therefore, while the QAU kept one eye focused on the accreditation application, the other eye was trained on the broader goal of an effective and sustainable Quality Assurance system.

In terms of Quality Assurance, the EFL programme, and the SFL broadly, was a blank slate. Prior to the decision to pursue accreditation, no systematic, collaborative, results-oriented process of targeted data collection and analysis had existed. Data was often anecdotal. Reports were sporadic, often created on an as-needed basis. For curricular purposes, aggregation and analysis of class-level data, such as attendance and assessments, did not occur. Programme-level student learning outcomes and objectives existed as disparate lists, with little emphasis placed on the alignment of these outcomes and objectives with assessments. For administrative purposes, broad sweeping, end-of-year surveys were administered, yet purposeful, task-specific surveys were not; in neither case were there follow-up focus groups, interviews, or action plans. Likewise, processes and procedures (e.g., recruitment, staff induction, student database management, student orientation) were not subject to analysis for effectiveness and efficiency. “Quality Improvement” was not a bullet in anyone’s job description.

Thus, in parallel with addressing the individual standards in the accreditation application, the QAU devised a framework that has guided the design, development, and implementation of quality improvement policies and processes within the EFL programme and across the SFL. The framework divided the establishment of a quality assurance programme into two major components: Organization and Monitoring & Adjusting. Under each component, a list of sub-categories was created. Each of these sub-categories would be comprised of a foundational piece (i.e., definitions, descriptions, policies, procedures) and an implementation piece (i.e., collecting, analysing, and acting upon relevant data, or what is commonly known as closing the loop).

The first crucial step in establishing a quality assurance programme was to create the Quality Assurance Unit. By adding this Unit to the organizational chart, quality assurance was being institutionalized and thus legitimized, particularly among the EFL staff. Prominent office space within the EFL building was allocated to the Unit, as was funding for two positions: a full-time coordinator and a part-time officer. Subsequently, the Continuous Professional Development Unit (CPD) was established and subsumed under the QAU. The rationale for doing so was that in the cycle of quality improvement, continuous and broad-ranging training for the staff results in an improved programme; the QAU identifies training needs and the CPD develops and delivers the corresponding training. Similar to the QAU, the CPD was assigned a part-time coordinator, a part-time officer, and office space. This structural move also raised the profile and the importance of CPD among the EFL staff.

The QAU began its existence by creating its own vision, mission, and Quality Assurance policy for the EFL programme, specifying that the QAU would be responsible for monitoring, reporting, and guiding the EFL in ongoing quality improvement, as well as facilitating professional development opportunities. The policy also specifies the formation of a Quality Assurance advisory committee, comprised of SFL stakeholders, which meets semi-annually for the purpose of providing direction to the QAU. At this time, the QAU also published a Quality Assurance Manual, which articulated the role and responsibilities of the QAU, with descriptions of all significant functions (e.g., administrative and student learning outcomes assessment, programme review), as well as an annual calendar identifying a timeline for data collection, analysis, action plan development, and reporting. For accreditation purposes, the QAU also oversaw the revision of the EFL Student Manual and the Staff Manual, as well as the creation of an EFL Policy Manual.

The next step for the QAU was the development of a Data Warehouse for the SFL. Prior to the establishment of the QAU, little attention had been paid to the array of EFL-relevant data that could be analysed for quality purposes. A centralized repository for information on student demographics, student activity, and student performance had not existed previously. The QAU identified a list of sub-fields from all three of these broad areas for the data warehouse. The student demographics are intended for use as predictive analytics for retention and student performance in the EFL. Fields under this category include: high school GPA, scholarship status, whether the student attended a private or public high school, the location of the high school, the parents’ educational background, whether the student has siblings attending a university, and whether the student lives on campus. Fields under student activity include participation in student government or clubs, work-study status, and the extent to which the student utilizes the EFL’s Student Learning Centre. Student performance data includes in-class skills assessments, mid-term and final scores (sub-divided into the skills sections), and attendance records. The academic year is divided into four seven-week modules, and the performance data is entered at the end of each module. One reason for the warehouse is to help identify students who are struggling, either before they walk through the door for the first time, or as early as possible once they have started the EFL programme. Another reason is that the data can be analysed for trends across levels, skills, and time within the EFL programme.
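The shape of such a warehouse record, and the kind of early-identification rule it enables, can be sketched in code. The following is a minimal illustration only, not the school's actual system; the record fields, the function name `flag_at_risk`, and the cut-off values are all hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StudentRecord:
    # Demographic fields (predictive inputs) -- hypothetical names
    hs_gpa: float
    has_scholarship: bool
    lives_on_campus: bool
    # Activity fields
    slc_visits: int                       # Student Learning Centre visits
    # Performance: one score per completed seven-week module
    module_scores: List[float] = field(default_factory=list)
    attendance_rate: float = 1.0          # fraction of classes attended

def flag_at_risk(s: StudentRecord,
                 score_cut: float = 60.0,
                 attendance_cut: float = 0.85) -> bool:
    """Rule-based early-warning flag: a low score in the most recent
    module, or poor attendance, marks the student for follow-up."""
    latest = s.module_scores[-1] if s.module_scores else None
    low_score = latest is not None and latest < score_cut
    return low_score or s.attendance_rate < attendance_cut
```

Run at the end of each module, a rule of this kind could feed the sort of Early Alert follow-up described later in the chapter; a real implementation would presumably replace the hand-set cut-offs with thresholds derived from the demographic and performance data itself.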

The final step in establishing the QAU was the development of a Communications Plan. The Plan covers a range of communication objectives with the overall aim of increased transparency, clarity, and collaboration throughout the EFL. For collaborative activities (e.g., addressing accreditation standards or writing policy), a shared repository was created. The free cloud storage service Google Drive was deemed most suitable for the needs of the project. In addition to storage, it allows collaborators to synchronously edit documents. A file system was arranged according to the major objectives and standards for the accreditation application, and all working members of the accreditation team were granted authority to upload or edit relevant files. This permitted the QAU coordinator to take inventory at any point in time of current progress toward the completion of the application. Beyond accreditation, the Google Drive remains a repository for QAU and CPD staff to store and collaborate on work.

Furthermore, an additional aspect of communication was the need to archive and distribute pertinent information to internal and external stakeholders through an easy-to-manage means. The desire of the QAU was to remain transparent, yet avoid burdening the staff by flooding in-boxes with non-essential information. The solution was the creation of a website/blog utilizing an open source website creation platform. The SFL Quality website became a dynamic space for QAU/CPD team members to announce relevant internal and external activities (e.g., workshops and conferences), to share information about research or webinars, to report and reflect upon internally organized activities, and to serve as a publicly accessible repository of slide presentations and non-sensitive materials and reports created by these two units (e.g., manuals and workshop handouts). The website also serves as a venue to report on evaluation and research projects carried out by the two units.

Finally, in an effort to establish effective communication throughout the EFL programme, it was important to move in concert with the broader communications network of the SFL, and not at cross-purposes. In terms of the organizational culture, this was critical. The QAU and CPD were striving to gain acceptance and legitimacy. If they were viewed as contributing to a perceived over-burdening of staff, they would struggle to build a base of support. Organizing trainings, workshops, and meetings required a good deal of advance planning to ensure that conflicting events (e.g., assessments, coordinator-level meetings, staff meetings) were not scheduled for the same time, and that the planned QAU activity did not become part of a cluster of other activities arranged by other EFL units, which would give credence to the belief that the staff were over-worked. During the first academic year of its existence, much angst was experienced by the QAU and CPD units, as many scheduling conflicts arose due to the lack of a common working calendar. While certain dates for Curriculum and Testing were announced well in advance, these units would frequently organize last-minute meetings to address pressing issues. These meetings would sometimes clash with QAU or CPD meetings, which generated conflict. The solution in this case was for the Curriculum and Testing units to create a master calendar of their own events at the beginning of the academic year, and for the QAU and CPD units to target their own activities for dates that resulted in the fewest possible conflicts.
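Targeting dates with the fewest clashes against a shared master calendar amounts to a simple minimization, which can be illustrated as follows. This is a sketch under assumed data shapes (each calendar is a plain list of dates; the function name `least_conflicting` is invented for illustration):

```python
from datetime import date

def least_conflicting(candidates, master_calendar):
    """Return the candidate date that coincides with the fewest events
    already on the master calendar; ties are broken by the earlier date."""
    return min(candidates,
               key=lambda d: (sum(1 for e in master_calendar if e == d), d))

# Example: two Curriculum/Testing events on 2 March, one on 9 March
master = [date(2015, 3, 2), date(2015, 3, 2), date(2015, 3, 9)]
options = [date(2015, 3, 2), date(2015, 3, 9), date(2015, 3, 16)]
best = least_conflicting(options, master)   # the conflict-free 16 March
```

In practice the QAU and CPD performed this reconciliation by eye against the master calendar; the point of the sketch is only that a shared, machine-readable calendar makes the "fewest conflicts" criterion trivially checkable.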

In addition to the foundational pieces necessary to establish the QAU, the other major initiative was the design of an integrated monitoring and adjustment system for student learning and administration of the EFL. This was implemented through two primary activities: Assessment of Student Learning Outcomes (SLOs) and Programme Reviews. Regarding SLOs, the QAU and the Curriculum Unit established curriculum maps for each skill area, comprised of objectives to be addressed during each of the four modules in the academic year. Instructors are required, twice per module, to submit maps indicating which objectives they have addressed in the previous three weeks. The maps, along with assessment results, permit the QAU to perform frequent analyses of curriculum delivery to determine the pace of progress toward end-of-year SLOs. The QAU has also implemented a separate approach to evaluation through Programme Reviews. Generally speaking, the programme review is a self-study conducted by an academic unit as a means to evaluate the effectiveness and efficiency of the programme. Often, such reviews are carried out by the programmes themselves, with guidance and support from an institutional effectiveness unit, such as the QAU. The programme review takes a wider view of quality than the narrower focus of student learning outcomes assessment, examining both quantitative and qualitative data, and looking more broadly at issues such as student success and persistence, instructor preparation and professional development, and instructor and student perceptions of the programme.
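The pace analysis based on the submitted curriculum maps can be illustrated with a small calculation. Assuming, purely for illustration, that each map is a set of objective codes and that each module has a planned set of objectives (the codes and the function name `coverage` are hypothetical), the fraction of the plan delivered so far is:

```python
def coverage(planned: set, submitted_maps: list) -> float:
    """Fraction of a module's planned objectives that instructors have
    reported as addressed across their submitted curriculum maps."""
    addressed = set().union(*submitted_maps) if submitted_maps else set()
    return len(addressed & planned) / len(planned) if planned else 1.0

# Example: four planned objectives, two map submissions in the module
planned = {"W1", "W2", "R1", "R2"}          # e.g., writing/reading objectives
maps = [{"W1", "R1"}, {"W2"}]               # first and second submissions
pace = coverage(planned, maps)              # 3 of 4 objectives -> 0.75
```

Comparing such a figure at the mid-point of a module against an expected value (e.g., roughly half the plan) is one simple way the twice-per-module submissions could surface delivery that is falling behind the end-of-year SLOs.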

In terms of defining programme for the purpose of evaluation, the QAU determined that in order to more effectively analyse the quality of the EFL programme broadly, it would be useful to examine it according to its three separate levels, or tracks, which are distinguished by the language abilities of the learners in those tracks (i.e., track-one students are near-beginners, whereas track-three students may be able to complete their requirements after one semester). While there is certainly much integration across the EFL curriculum, each of the three tracks has its own unique features that allow it to be viewed as a separate entity. The rationale for conducting the reviews is that, rather than examining student performance at the aggregate, or EFL programme, level, it is easier to identify issues and solutions at a micro level.

Therefore, each programme, or track, is reviewed on an annual basis. Data is gathered during the first semester of the academic year, with analysis and planning taking place in the second semester. The logic behind this timeline is that, if the decision is made to change textbooks, there is sufficient time to select and order them for the next academic year. The data is gathered and organized in a report by the QAU. This includes student performance data (including SLO data), student success and persistence data, information about instructors (e.g., demographics, hours of instruction, office hours), student survey and focus group data, and instructor survey and focus group data. In the second semester of the academic year, the QAU facilitates an analysis discussion with a working team comprised of members from the Curriculum Unit and instructors from that track. The outcome of the meeting is a set of concrete action plans to be developed during the summer and implemented the following academic year.

The QAU is not solely focused on the quality of learning. There are numerous administrative functions that must be evaluated and monitored for quality on a continuous basis. Just as there are Student Learning Outcomes, there are Administrative Outcomes (AOs), which together provide a comprehensive evaluation of the EFL programme. AOs are used to analyse the effectiveness and efficiency of routine processes and non-instructional activities. In large part, AOs are identified by the accreditation standards (e.g., staff recruitment and induction, grievance procedures). There are also a number of AOs that have been identified based on local needs and initiatives, such as student retention, the Student Learning Centre, or the development and implementation of the Continuous Professional Development programme. Additionally, there is a steady stream of initiatives and pilot projects across the EFL programme, which require close observation to determine their effectiveness. Examples of recent projects include the implementation of an Early Alert system, Individual Learning Plans for low-performing students, and a Peer Observation programme for instructors, as well as pilots of First Year Experience and Early Alert programmes. Data collection and analysis for administrative outcomes is both qualitative and quantitative, and continues throughout the year.

5 Discussion

After 1.5 years of design and implementation of a quality assurance system for the EFL programme, the primary question to be addressed is how successful the effort has been. To what degree has the QAU impacted quality, and what is the potential for sustainability of the quality assurance initiative? The data gathered for this study has revealed that, in a broad sense, the initiative appears to be on the right track. Organizationally, the foundation for quality has been laid, and the evidence suggests a positive impact on administrative outcomes and the establishment of a culture of assessment within the EFL programme. This is not to say that it has been an effortless process; some initial issues needed to be resolved, and various challenges persist, as is typical of any change effort. Nevertheless, at this stage in the maturation process of the quality assurance initiative, a number of distinct themes and takeaways have emerged that warrant discussion, from the need for quality assurance, to the need for rigorous planning and execution, to the need for effective leadership.

EFL programmes often find themselves as marginalized units on campus. They could assume a visible role on campus if one viewed them from an enrolment numbers (and revenue generation) perspective. Unfortunately, this is not how it usually works. As EFL programmes generally lack the prominent researchers and publication output of academic programmes on campus, they are often viewed as less prestigious within the university caste system. This is an ironic situation. On the one hand, the loosely coupled (Weick, 1976) nature of EFL programmes allows them a certain degree of latitude in, for example, hiring practices and enrolment management. Yet, this may also suggest that the institution does not hold the programme to the same standards as other units. In many institutions, the EFL programme may be the gateway and gatekeeper for a significant proportion of students. It is here that students begin their university lives, and if the programme does not meet their needs or expectations, it may also be where they end their university lives (at least at that school). As the global higher education sector continues its expansion, with more and more institutions offering English Medium Instruction or Content and Language Integrated Learning, schools are seeking a competitive advantage that will help them stand out. One such way is for universities to assure the quality of their foreign language programmes, which is reflected in the recent growth in the EFL programme accreditation market. As mentioned previously, this was a driving factor behind the EFL programme at the institution where this case study was conducted being the first unit on campus (and one of the first EFL programmes in the country) to seek accreditation. This accomplishment has not escaped the attention of the university administration, which has actively promoted this achievement on the university website.

An important consideration in establishing a sustainable quality assurance programme is to examine the current organizational structure. Where are the supports and where are the barriers to success? In a meta-analysis of institutions implementing assessment programmes, Baker and colleagues (2012, p. 10) found that successful schools were those that “worked diligently over time to create structures, processes, and an atmosphere conducive to the use of assessment to improve student learning.” One simple example of structure, as seen at the institution where this case study was carried out, is the establishment of an office that oversees quality assurance. By doing so, quality improvement has been legitimized, assuring stakeholders that there is a long-term vision for the process. Additionally, the physical space that the QAU occupies is prominent, on the main floor of the EFL building, set visibly between instructor and administrative offices. With the creation of this Unit, staff have demonstrated a greater interest in quality. The increased interest is quantifiable, vis-à-vis the number of instructors who have visited the QAU since it opened its doors, with many noting, “it’s about time.”

An additional example of how organizational structure can become a barrier to engagement is workload. On one hand, the structure of the EFL programme is viewed as favourable, as instructors are required to teach only four days a week. However, this also means that those same four days must accommodate teaching, class preparation, tutorials for students, and meetings. In surveys and interviews, teachers have consistently raised concerns about being “overworked”, and for this reason, quality assurance and professional development activities are perceived as burdensome. This is not the image that the QAU hopes to portray as it attempts to build engagement. Therefore, as opposed to working at odds with an existing structure by trying to layer on traditional forms of training and workshops, the QAU has, from the beginning, worked to foster engagement by altering its delivery methods. For example, in-house webinars have been created and distributed to the staff. Face-to-face sessions are shorter, with smaller working groups. One successful endeavour has been the “Nano Conference,” with its three separate five-minute presentations; 65 % of respondents to the follow-up survey indicated that the timing was “just right”. The important point here is that when creating a new entity within an existing structure, it is critical to understand the limits that the structure imposes on the organization and its employees. To gain acceptance, the new entity must carefully insert itself into the current environment.

Furthermore, an effective quality assurance programme does not magically appear, as much as we would like it to. It requires planning (Lennon et al., 2014; New Leadership Alliance, 2012). As the line often attributed to Lewis Carroll goes, “If you don’t know where you are going, any road will get you there.” In other words, effective planning leads to a roadmap that articulates a comprehensive, manageable list of quality-related activities, their importance, and an assessment plan for each. There are generally two sets of forces, external and internal, driving quality assurance. The former could generally be viewed as more directive, coming from the top down, whereas the latter may be seen as more deliberate and aimed at continuous improvement. One reason why external calls for quality assurance are perceived as invasive is that they are usually accompanied by non-negotiable, unrealistic deadlines. Little appreciation is given to the careful, strategic planning that is critical for effective, sustained design and implementation. Therefore, effectively planned quality assurance initiatives are characterized by two important requirements: inclusion and time.

Planning requires inclusion. Effective planning requires input and participation from a breadth of stakeholders. Ineffective planning is a small group working in isolation to complete a task by a given deadline. Just like good assessment itself, good planning draws on multiple perspectives. At the very least, the list of stakeholders should include the faculty conducting the instruction and the assessments, the students who will receive the instruction, and relevant administrators. Bresciani (2009, p. 87) concluded that a significant threat to successful implementation of programmes had been the absence of instructors in the “planning and delivery of programs, the assessment, or in the discussion of results.” Planning must also result in consensus. There must be general agreement on those key qualities and quantities that define a successful student and an effective programme.

Planning requires time. As can be imagined, the processes described earlier take time. The simple logistics of gathering and analysing the data generated by stakeholders are time consuming. The additional layer of ensuring that consensus is achieved means that more time must be dedicated to the deliberate process of seeking input and achieving buy-in. One important caveat is that over-planning can be toxic too. There is nothing more frustrating and de-motivating to busy instructors than talk that appears hollow and leads nowhere.

Thus, an important component of quality assurance is the assessment plan: what data will be collected, when it will be collected, and by whom it will be collected and analysed. There is a wide range of outcomes, both learning and administrative, that the QAU is responsible for identifying and monitoring. This required the establishment of a detailed assessment plan, which includes a yearly assessment calendar that specifies what data will be collected, when it will be collected, when it will be analysed, when action plans will be devised, and who will ensure that these steps take place on time. In order to establish continuity in the wording and design across all assessments, the QAU utilized the SMART acronym as a guide.

  • Specific: The wording used for the assessment will specify the knowledge, skill, or behaviour that is being assessed; the expectation of acquisition (i.e., awareness, understanding, or application); the population that is being assessed; and the assessment instrument. An illustration of this tenet can be found in one Student Learning Outcome from this case study:

    EFL students [population] will demonstrate the ability to write [expectation of acquisition] an effective response essay on the end-of-year Final Exam [assessment instrument].

  • Measurable: The assessment should articulate the expected percentage change (from baseline to target) in performance as a result of instruction, clinical experience, etc.

  • Achievable: The established target should be achievable within the stated timeframe. We may want 70 % of EFL students to demonstrate proficiency in writing an effective response essay, but if our baseline is 50 %, it may be too ambitious to expect a 20-percentage-point increase within the stated timeframe.

  • Relevant: The assessment must be aligned with established standards, such as the Common European Framework of Reference (CEFR), as well as with the needs and mission of the unit, which will include those of the academic programmes in which the EFL students will one day study.

  • Timeframe: A beginning and ending point for learning and assessment must be stated. Is the target long- or short-term? For what time period will the population be assessed? When will the assessment take place?

The acronym has proven quite helpful when explaining assessment planning to instructors who are unfamiliar with the exercise. Specifically, for each identified outcome and its corresponding assessment, a simple table was created that contains the following columns: Outcome, Assessment, Target, Results, Use of Results, and Person Responsible. The Outcome is identified by the QAU along with relevant stakeholders (e.g., Curriculum Unit, Administrative Coordinator, Programme Director), as is the Assessment—with wording guided by the SMART method. Baseline data informs a realistic Target. Results are collected once the specified assessment is given, and the QAU facilitates assessment meetings where teams analyse the data and develop action plans.

The QAU encourages the collection of data from multiple sources. In situations where an outcome can be assessed using, for example, both quantitative and qualitative data (e.g., the effectiveness of the Student Learning Centre), confidence in the findings increases. Likewise, achievement of a specific learning objective (e.g., determining the meaning of vocabulary in context) is analysed with data from both in-class and end-of-module assessments. The QAU also encourages instructors to utilize performance-based assessments in order to measure the amount of deep learning (as opposed to rote memorization) taking place among the EFL students.

The commonly accepted phrase for completing an assessment cycle is closing the loop. This means that the steps in the assessment cycle are effectively accomplished: identifying the outcome, assessing it, analysing the data, and acting upon the data. Completion of at least one full cycle could be called a success. Completion of cycles on a continuous basis, particularly resulting in curricular change and improvement, is referred to as sustainability. The steps described previously in establishing the assessment plan may lead to one successful completion of the assessment cycle. Yet, to realize sustainability, the EFL programme must strive to establish a culture of assessment and a learning culture. To do so takes time, planning, and inclusiveness.

If a culture of assessment does not already exist, the process to establish such a culture takes time. Change takes time. The QAU has been very deliberate in its data collection and analysis approach, focusing more on programme-level data and very little on classroom-level data. In a culture where failure generally results in admonishment rather than support, it is little wonder that instructors are extremely reluctant to examine class-level data, for fear that their class will be singled out, or that their capability as an instructor will come into question. The QAU and the CPD unit piloted a Peer Observation initiative in the 2014–2015 academic year, and it was received very coolly, with the conclusion being that instructors were very apprehensive about being under the microscope in their own classrooms, despite the non-critical nature of the design. Thus, regarding the data on student learning, the approach of the QAU has been to facilitate conversations only around programme-level data, and in turn, to emphasize problem solving over fault-assignment. Again, the transformation is taking time, through extended conversations about “what people hunger to know about their teaching and learning environments and how the assessment evidence speaks to those questions” (Blaich & Wise, 2011, p. 12). The approach taken by the QAU is that while programme-level data may be useful in stimulating conversation, eventually the curiosity of the instructors themselves will encourage them to begin asking questions that require classroom-level data, which is a positive step toward what Bresciani (2009) calls “forming ‘habits’ of assessment”.

In addition to time, careful planning is essential. It is important to create a roadmap that leads the way to the goal of a sustained and effective quality assurance initiative (New Leadership Alliance, 2012). This is where the standards in the accreditation self-study were useful as a guide to creating an initial plan. The plan for sustainability articulates clear direction for assessments (i.e., what will be assessed and when), responsibilities (i.e., who is responsible for collecting, analysing, and reporting data), and communication (i.e., who will write reports, and when they will be submitted). Across the EFL programme, responsibility for reporting data is widely distributed across coordinators (e.g., the Curriculum coordinator, the Testing coordinator, the Administrative coordinator) and funnelled to the QAU to be used in reports for broader data analysis. For instance, the Student Learning Centre coordinator creates an activity report at the end of each semester. That data is fed into the QAU data warehouse and becomes part of a broader analysis to determine success predictors for EFL students.

Inclusiveness is the glue that holds a successful assessment plan together. The greater the breadth of engagement from EFL stakeholders, the greater the likelihood of sustainability. As Hersh and Keeling (2013, p. 9) argue, “too often, assessment is orphaned to the province of a small group of dedicated faculty and staff,” which significantly hinders the growth of engagement in the process. Ensuring that the EFL instructors who implement the assessment plan have a voice in designing and administering the programme is essential; “significantly growing and deepening faculty involvement” is where Hutchings argues that the “real promise of assessment” lies (2010, p. 6). It is important that those who are committed to the programme feel that they are making a contribution to continuous improvement, and that their contributions are recognized; public acknowledgement of instructors who lend time to the quality assurance initiative is therefore highly valuable. This is a practice that the QAU is well aware of and employs at presentations and trainings.

At the same time, there must be a level of accountability. Establishing an atmosphere of collaboration and collegiality does not necessarily mean an environment free of conflict. Just as action is recognized, so is inaction. When issues regarding the quality assurance process arise with individuals, the QAU coordinator sets up a meeting with the instructor to seek out a resolution. If that does not result in a positive change, the SFL director has a conversation with the instructor. To retain its accreditation status, the EFL programme must ensure that all staff are actively engaged in quality improvement. In cases where instructors have shown reluctance to contribute, an honest conversation about the value of total participation in the process has yielded positive results.

Finally, broad, active engagement requires a high level of trust and what is known as psychological safety (e.g., Edmondson, 1999). In such a context, individuals feel a sense of security in the reporting of results, as well as in taking risks to suggest changes in instruction. If there is a sense of support, not fear of punishment or retribution, then instructors are more willing to collaborate and experiment. If it is clear that conversations focus on improvement, not punishment, then the door swings open to greater collaboration for the sake of quality improvement. Thus, as the Unit works to develop a culture of assessment within the EFL programme, it is very careful in the early stages to focus on results at the programme level, and avoid the isolation of data at the classroom level. As mentioned previously, eventually, instructors will begin asking for classroom level data for the sake of improvement. That occurrence will signal an environment of psychological safety and a learning culture.

From the outset of the quality assurance initiative, communication and transparency have been explicit objectives of the QAU. The need for clear, consistent communication cannot be overstated (Bresciani, 2009). Thus, it is important that the quality assurance team share relevant information with all stakeholders (Baker et al., 2012; Bresciani, 2012) and that information is presented in an “actionable” way (Banta & Blaich, 2011, p. 27). This includes intent, rationale, expectations, milestones, outcomes, and a proposed timeline. Transparency is essential for sustainability, and yet it is often ignored (Miller, 2012; New Leadership Alliance, 2012), as the perception is that it invites criticism. On the contrary, transparency contributes to confidence in a unit by demonstrating a commitment to improvement. As part of the communication plan, transparency in terms of activity and progress ensures that a larger body of stakeholders is aware and informed. Transparency and communication assist quality assurance planners in guarding against the perception that progress is not being made, or that only a select group is taking responsibility, allowing those not involved to ignore the process. Therefore, multiple channels of communication are advised: from staff meetings, to emails, to social media (tweets, Facebook posts, blogs), to conversations in the corridor or over coffee. The important point is that communication must be continuous and relevant. Busy stakeholders will quickly tune out agenda items and emails that do not have a connection to their daily work lives. To this end, the QAU pushes out updates on quality assurance initiatives via its website. EFL staff members can subscribe to the website and receive email announcements when new material (e.g., reports, initiative updates, conference announcements) is posted.
The QAU coordinator actively participates in as many meetings as possible, whether among coordinators or instructors, so that information is frequently gathered and shared. Establishing a visible, active presence across the EFL programme is a critical step toward engaging EFL stakeholders.

Quality assurance is not a single event carried out for the purpose of satisfying an accreditation visit. QA is one component in an integrated process that includes curricular design, teaching, assessment, professional development, and programme administration. In this way, the quality assurance initiative must be developed in such a way that it is both integrated and systematic. And, equally important, it must also be perceived as such.

Even if a school does not have the resources (i.e., money, time, or people) to establish and staff an office, an attempt should be made to create a modest system that regularly monitors a manageable number of administrative and learning outcomes. Quality assurance should be approached from the perspective that relevant activities (i.e., outcome and assessment identification, data collection and analysis) are formative and not summative; these activities should be ongoing and consistent. Without regular monitoring, whether semi-annual, annual, or even biennial, an EFL programme will simply fall into a reactive, rather than proactive, mode of operation.

Moreover, if the programme is genuinely aiming for improvement, there will be a constant cycle of new initiatives that are designed, piloted, and assessed. Thus, in parallel with the continuous assessment rotation of core outcomes, there will also be a secondary level of evaluation involving ongoing research and development of new initiatives. For instance, since the 2014–2015 academic year, in addition to monitoring primary student learning and administrative outcomes, the QAU was involved in the development, implementation, and evaluation of such initiatives as Peer Observation for instructors, Individual Learning Plans for students, a First Year Experience, an Early Alert System, as well as a variety of professional development activities.

Crucially, none of the actions discussed earlier takes place without the vision and support of leadership. Indeed, quality assurance and accreditation in the context of this case study are still more or less an institutional choice; they are not driven by external mandates. Leadership (i.e., the Director of the SFL) has been instrumental in initiating accreditation and structural change, as well as in garnering support and resources from the university administration in order to bring these ideas to fruition. Additionally, Distributed Leadership (Spillane & Sherer, 2004; Spillane, Halverson, & Diamond, 2001, 2004) has played a role in the CPD unit’s ability to gain a foothold in the SFL organizational culture. Distributed leadership is often misperceived as the distribution of power, when in fact it suggests the distribution of cognition: the spread of vision and values that sustains organizational efficiency and effectiveness. Although the SFL Director paved the way for the QAU by creating an office and a place on the organization chart, she has also stepped aside and allowed the QAU to set an agenda and run its own course. This is not to say that the QAU was simply a case of plug-and-play. There have been moments of anxiety as the existing structure and communications system adjust to this new entity. Again, however, leadership has played the critical role of stepping in on occasion to validate the Unit and reinforce the notion that the QAU is and will remain an integral part of the SFL structure.

6 Conclusions and Recommendations

Foreign language programmes, and EFL programmes in particular, are not often the focus of rigorous programme evaluation. Due to their nature as non-academic programmes, they are often viewed as external to the core operation of the higher education institution. This is unfortunate. As universities strive to differentiate themselves from the growing crowd, foreign language programmes have proliferated. And, as increasing numbers of schools provide instruction through English or another foreign language, such programmes are finding themselves the gatekeepers to the university’s core faculties and departments. Thus, it is logical that such programmes, which are often the largest units on campus and the first stopping place for substantial numbers of students, should be required to assure consumers of higher education that they are enrolling in a programme that is quality assured. Yet, there are relatively few EFL programmes that have structured quality assurance into their existing organization and operations, and fewer still that have sought accreditation.

This chapter has presented a case study of an EFL programme at one university in Istanbul, Turkey, that decided to pursue accreditation while also establishing a Quality Assurance Unit. The study described the development and implementation of the QAU and its quality assurance system. It also analysed the current state of the initiative, enumerating major themes that have emerged from the data collected over a 15-month period. Foreign language programmes that are considering the establishment of a quality assurance initiative, if not a Unit, should be able to glean lessons from the analysis provided within this study. At the same time, it is accepted that this study is limited in that the data cover only a 15-month period. A longitudinal study must be carried out in order to determine the sustainability of the quality assurance initiative and its long-term impact on the culture of the organization: will it transform into a culture of assurance and inquiry, or will the status quo prevail?

Finally, this grounded study may contribute to the field of EFL programme evaluation research by providing a framework vis-à-vis the list of major findings reported in the analysis section. Future research may look to tighten this framework into a more succinct list of factors that affect success and sustainability. Bolman and Deal’s (2003) Four Frames may serve as a means for analysing a successful quality assurance start-up, but the model is arguably too broad for the specific intent of determining the critical pieces necessary for an effective quality assurance effort. EFL programmes are often too large and too instrumental not to be taken seriously. Regardless of whether institutional leadership elects to evaluate the quality of its EFL programme on a continuous basis, the leadership of the EFL programme should realize the importance of quality assurance, and strive to establish at least a system, if not an entity, that continuously asks the questions: How are we doing? Can we do better? If so, how?