1 Introduction

Educational organizations are charged with one critical task: effectively and efficiently ensuring student learning. Historically, the prevailing assumption was that if a student had completed the required coursework, then they had mastered all relevant content, and the institution was perceived as having successfully executed its role. In many English as a Foreign Language (EFL) programs, verification of competence generally takes the form of a single, end-of-year, high-stakes exam. Thus, the quality of the EFL program has traditionally been measured solely by the number of students who pass that exam. Over the past two decades, this belief has been subject to much scrutiny, driven by calls for greater accountability from internal (e.g. governing boards, administrators, faculty members) and external (e.g. parents, students, politicians, and taxpayers) stakeholders, who demand solid evidence of learning—not simply evidence of teaching. A tangible outgrowth of this movement has been the emergence of outcomes assessment as a means of substantiating learning. This, in turn, has inspired a substantial body of literature providing detailed discussions of the relevant principles and practices of outcomes assessment (e.g. Baker, Jankowski, Provezis, & Kinzie, 2012; Banta, Jones, & Black, 2009; Maki, 2004; Suskie, 2010; Walvoord, 2010).

As is often stated, the goal in implementing any outcomes assessment initiative is the establishment of a process that consistently fosters improvement in student learning. While this may look good on paper, the stark reality is that examples of schools demonstrating a closed assessment loop—from design to implementation to analysis and action to consistent improvement in learning—are difficult to come by (Banta & Blaich, 2011; Hutchings, 2010; Miller, 2012). The absence of cases exemplifying success has been attributed to a number of barriers and missteps. Examples range from focusing too narrowly on assessment for accreditation rather than learning (Hersh & Keeling, 2013), to failing to turn data into action (Blaich & Wise, 2011; Bresciani, 2012), to insufficient faculty involvement (Bresciani, 2009; Hutchings, 2010), to educational organizations that simply do not know how to learn very well (Tagg, 2007). Meanwhile, the literature on outcomes assessment can be broadly characterized as focusing on the macro level, such as a system or district (Bresciani, 2009; Lennon et al., 2014) or an institution (Blaich & Wise, 2011; Maki, 2004).

Where program-level guidance does exist (e.g. Bresciani, 2009), it is generally not directed at any particular field; rarer still is a discussion of outcomes assessment for leaders of EFL programs. Yet there are two separate driving forces that should inspire leaders of EFL programs to consider more seriously the virtues of a rigorous outcomes assessment system. First, the aforementioned accountability movement is gathering momentum globally; accreditation and quality assurance, and, by extension, rankings, are now part of the day-to-day lexicon of higher education. That EFL programs will be held accountable for quality is not a question of if, but when. Second, EFL programs provide a value-added service in a crowded higher education marketplace. As university market shares are shaved off by competitors, demonstrated quality will matter more than the mere existence of an EFL program; an associated issue related to competition and program quality is student retention. Certainly, EFL program leaders cannot simply flip a switch and expect an outcomes assessment program to power up. EFL programs, like most higher education programs, do not have a tradition of assessing learning outcomes. Therefore, the principles for effectiveness and efficiency discussed below require careful consideration and strategizing at the developmental stage in order to ensure successful and sustained implementation and a continuous cycle of improved student learning. In other words, while these principles and practices may be instructive to EFL program leaders, their application at the program level may appear intimidating.

It is for this reason that a Distributed Leadership model (e.g. Spillane, Halverson, & Diamond, 2001, 2004) may offer a framework worth exploring for EFL leaders faced with the task of implementing and sustaining an effective outcomes assessment system. The Distributed Leadership (DL) model offers a unique, and arguably compelling, perspective on examining leadership. While most leadership models explore the personalities and actions of individuals, the DL model treats the activity of leadership as the focal point. In doing so, DL posits that leadership is not the result of one individual’s actions, but rather a complex web of social interaction among the leader, followers, and the situation (Spillane et al., 2001, 2004). Therefore, whether analyzing or planning an initiative, the focus becomes how leadership is, or is not, diffused throughout a unit. For an EFL program leader, the question shifts from “How am I going to make this happen?” to “How can I facilitate successful implementation and sustainability of this initiative?”

This chapter reviews the most commonly accepted principles and practices in the outcomes assessment literature today, followed by a discussion of the barriers that appear to inhibit successful implementation of outcomes assessment programs. The final section explores the concept of Distributed Leadership against the backdrop of those barriers and presents it as a viable framework for EFL leaders to consider when implementing outcomes assessment programs.

2 Principles and Practices

Educational organizations are charged with one critical task: effectively and efficiently ensuring student learning. In the past, this meant that the institution enrolled the student at the beginning of his or her academic endeavor, provided a list of courses to take, and hoped that the student graduated at the other end. While this approach was deemed sufficient in the past, over the last two decades the philosophy has changed dramatically. Internal and external stakeholders have pressured educational institutions to demonstrate that their students are not merely attending class, but that they are learning. Schools have had to respond by devising systems to demonstrate that they are actually paying attention to what students are purportedly learning. And if students are not acquiring the knowledge, skills, and experience that they were promised in the first place, then the onus is on the institution to make appropriate changes to improve the students’ opportunity for educational success.

Given this context, the outcomes assessment movement has garnered increasing attention over the last two decades (e.g. Angelo, 1999; Banta, 1993, 1996; Cross, 1998; Ewell, 1988; Palomba & Banta, 1999). The previous metrics, or outputs (e.g. students matriculated, students graduated, grade-point averages), are no longer sufficient to determine whether an institution has added value for its students (e.g. Angelo, 1999; Kuh & Ikenberry, 2009). Today, higher education institutions must provide evidence that their students are demonstrating achievement of specified learning outcomes, as identified and monitored at the institution, program, and classroom levels. Learning outcomes measure changes in students’ knowledge, skills, and behaviors over time at a given unit of analysis (i.e. the institutional, program, or course level). In order to carry out an effective and efficient outcomes assessment program—that is, to consistently monitor and improve learning at all levels—educational institutions must carefully plan, implement, and work to sustain such initiatives. With this increase in attention toward outcomes assessment comes the need for guidance, which in turn has inspired a substantial body of literature providing useful direction concerning relevant principles and practices (e.g. Baker et al., 2012; Banta et al., 2009; Maki, 2004; Suskie, 2010; Walvoord, 2010).

Some sources highlight specific principles or practices, such as communication and sharing evidence (Blaich & Wise, 2011), planning (New Leadership Alliance, 2012), or meaningful, measurable, and mission-driven assessment (Baker et al., 2012). Others provide more comprehensive, detailed component descriptions and recommendations (e.g. Banta et al., 2009; Bresciani, 2009, 2012; Maki, 2004). Banta et al. (2009) outline three phases of assessment: Planning, Implementing, and Improving and Sustaining. When planning the implementation of an outcomes assessment initiative for an EFL program, particularly where no such system yet exists, it is advisable to break the principles and practices into these three progressive stages.

3 Planning

EFL program leadership should embark upon planning as early as possible. To begin, an EFL assessment committee should be constituted. It is particularly important to have a program-level committee, as this is where responsibility for assessment resides (Banta & Blaich, 2011). These committees become the face and voice of assessment as the initiative is developed and begins to spread throughout the organization. Bresciani (2009) provides a useful list of guiding questions to consider when forming the assessment committee, such as who will serve on the committee, for how long, and what support or rewards they will receive for membership. Certainly, the answers will vary with the size and structure of the program and its workload distribution. This is where leadership commitment, in the form of providing time and resources to those who will enact the initiative, is crucial.

The assessment committee works with relevant stakeholders to identify expectations for student learning (Maki, 2004), which in turn lead to the generation of assessment questions (Blaich & Wise, 2011) and help ensure that assessment is meaningful, manageable, and mission-driven (Baker et al., 2012). Naturally, stakeholder groups would include the students themselves, the faculty members who teach core subjects to EFL program completers, and potential employers. The EFL assessment committee will also facilitate the process of establishing a common language for the initiative (e.g. what is a goal versus an objective) and, importantly, a “shared conceptualization” of why the program is establishing an outcomes-based assessment program (Bresciani, 2012). The plans devised by the committee should specify how evidence and changes will be disseminated on a regular basis. Transparency of the process is often cited as a critical factor in the success of an outcomes initiative (Blaich & Wise, 2011; Bresciani, 2012; Jankowski & Provezis, 2011; Maki, 2004; New Leadership Alliance, 2012).

4 Implementation

Implementation is the subsequent phase. At this stage, the EFL program assessment committee moves from the input gathered from stakeholders and extant data to the identification of a specific set of learning outcomes to measure. The committee also devises an assessment plan for each outcome, keeping in mind that it is not necessary to assess all outcomes every year (Bresciani, 2009). Attempting to do so may prove burdensome from a workload perspective, as well as overwhelming and demoralizing to those responsible for collecting, analyzing, and reporting results. As individual outcomes are identified and defined, and assessment plans are devised, it is instructive to bear in mind the SMART acronym:

  • Specific—the outcome should specify the group that should be achieving the outcome. The outcome should also assess only one specific skill, behavior, or ability. For example, the outcome “Students will read and summarize a text” is, in fact, two separate outcomes: one assessing the students’ ability to read and comprehend a text, and a second assessing their ability to summarize it. The more specific the outcome, the greater the chance of identifying the root cause of any issues.

  • Measurable—the outcome clearly identifies a numerical value that will change as a result of learning. For example: “75 % of Track 3 Writing students will score 3 or higher on the 5-point rubric for the summarization exercise”. Banta and Blaich (2011) recommend multiple measures to increase reliability and validity. For example, in order to determine whether students have mastered the skill of summarization, there may be multiple exercises assessed over the course of a term, along with a summarization item on a final exam. (A minimal sketch of how such a target might be checked appears after this list.)

  • Achievable—the target indicated should be realistic and attainable within the given learning period. If only 50 % of students successfully completed a lecture note-taking exercise last semester, it may be unrealistic to expect 75 % to do so this semester. The more often assessments are conducted, the more realistic the projected targets will be.

  • Relevant—is the outcome aligned with the vision, mission, and goals of the program? Is the outcome based on stakeholder input, or on reliable data indicating an area worthy of focus?

  • Time Frame—the time period for learning and assessment is defined. The wording of the outcome should specify the amount of time in which the skill, knowledge, or behavior should be acquired, e.g. one semester, or by the time the student has completed the program.
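To make the Measurable and Achievable criteria above concrete, here is a minimal sketch, in Python, of how a target such as “75 % of students will score 3 or higher on a 5-point rubric” might be checked against collected scores. The function name, the scores, and the thresholds are all hypothetical illustrations rather than part of any cited framework.

```python
# A minimal, hypothetical sketch: checking a SMART "Measurable" target
# (e.g. "75 % of students will score 3 or higher on a 5-point rubric")
# against a set of collected rubric scores. All names and numbers here
# are illustrative, not drawn from any real program.

def target_met(scores, threshold=3, target_pct=75.0):
    """Return the percentage of students at or above `threshold`, and
    whether that percentage reaches `target_pct`."""
    if not scores:
        return 0.0, False
    pct = 100.0 * sum(s >= threshold for s in scores) / len(scores)
    return pct, pct >= target_pct

# Example: 5-point rubric scores from one summarization exercise.
rubric_scores = [4, 3, 2, 5, 3, 4, 2, 3, 5, 4]
pct, met = target_met(rubric_scores)
print(f"{pct:.0f}% scored 3 or higher; target met: {met}")
```

Running such a check after each assessment cycle would also support the Achievable criterion: last term’s observed percentage offers a realistic baseline for next term’s target.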

Beyond the identification and definition of individual outcomes, the implementation phase includes the following steps: assessment of the outcomes, analysis of data, establishment of action plans, and reporting of results and plans. Completion of this cycle is commonly referred to as “closing the loop”, and it is generally facilitated by instructors and staff who have a firm understanding of the process. Bresciani (2009) notes that it is important to have a support structure in place to assist the faculty members who are responsible for these steps. Assistance may take the form of access to professional development or participation in conferences, or appropriate compensation for their efforts, such as overtime pay or release time. Likewise, the support may come in the form of technology that can assist with the analysis, storage, and reporting of results.
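Extending the sketch above, the following equally hypothetical illustration shows the reporting step of the cycle: a per-outcome comparison of results against targets that flags outcomes needing an action plan. The outcome names and figures are invented; the kind of reporting technology mentioned above would draw them from actual assessment records.

```python
# A hypothetical sketch of the reporting step in "closing the loop":
# compare each assessed outcome against its target and flag outcomes
# that need an action plan. Outcome names and figures are invented;
# a real system would draw them from actual assessment records.

outcomes = {
    "Summarize a written text":    {"pct_meeting": 80.0, "target": 75.0},
    "Take notes during a lecture": {"pct_meeting": 52.0, "target": 60.0},
}

for name, result in outcomes.items():
    met = result["pct_meeting"] >= result["target"]
    status = "target met" if met else "target NOT met -> draft an action plan"
    print(f"{name}: {result['pct_meeting']:.0f}% vs {result['target']:.0f}% ({status})")
```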

5 Improving and Sustaining

Improving and Sustaining an outcomes assessment program are the hallmarks of a successful outcomes assessment system. As will be discussed below, this is quite often the phase that remains out of reach. In this stage, institutions and programs have established outcomes systems where, on a regular basis, outcomes are assessed, data is collected and analyzed, evidence-based changes in programs and practices are devised (New Leadership Alliance, 2012), and, ultimately, there is evidence of improved student learning and improved efficiency in processes. One of the keys to sustainability is consistent communication and improvement through the utilization of results (Banta & Blaich, 2011; Bresciani, 2012; New Leadership Alliance, 2012). Similarly, the EFL assessment committee must ensure “an entirely public process” where assessment evidence is “widely shared and discussed on campus” (Blaich & Wise, 2011, p. 12), which may come in the form of faculty-led forums and the posting of the results of those dialogues on a website (Maki, 2004). The National Institute for Learning Outcomes Assessment at the University of Illinois at Urbana-Champaign has established an initiative to push for greater transparency in outcomes assessment reporting. Other keys to sustainability are ongoing faculty development and the establishment of an environment of trust (Banta & Blaich, 2011; Banta et al., 2009; Bresciani, 2012).

As previously mentioned, one way to conceptualize the process of developing, implementing, and sustaining a program of outcomes assessment is through the three-phase approach (Banta et al., 2009). I would like to propose that EFL program leaders envision the process through a different framework—what I refer to as the Environment approach—whereby they analyze their institution from the perspective of organizational culture and determine where barriers to development and implementation may arise. This approach analyzes the institution in terms of an Enabling environment, an Attractive environment, and a Sustainable environment. The Enabling environment essentially asks whether institutional leadership is receptive and supportive (Banta & Blaich, 2011; Banta et al., 2009), whether in the form of resources (e.g. human, technological) or in the creation of a non-threatening atmosphere of trust (Maki, 2004).

An Attractive environment provides structural features that encourage faculty and staff to engage in the outcomes assessment process. Outcomes work that is perceived as confusing, burdensome, or not aligned with the goals and needs of the EFL program often dissuades any engagement in the process. In contrast, release time, rewards, targeted professional development opportunities (Banta & Blaich, 2011; Bresciani, 2012), and encouraging research and scholarship that is aligned with the outcomes assessment initiative (Baker et al., 2012) may strengthen buy-in from instructors and staff.

The Sustainable environment emerges when the assessment initiative no longer relies on a single individual or committee. It is apparent when assessment and data drive conversations for change and improvement, rather than the necessity for change coming from an external source. Another indicator is when anecdotes are rejected and data is accepted as the only valid evidence. The EFL program that regularly collects, analyzes, and then shares data with its stakeholders, and invites their input in a continuous pursuit of improvement, has fostered the Sustainable environment. As Banta et al. (2009, p. 3) concisely put it, effective assessment emerges over time. Whether educational leaders wishing to establish an outcomes assessment system view this task through the three-phase approach, the Environment approach, or a combination thereof, it is critical that they assume a long-term view of the process. Rushing the process may yield short-term success, but there is a good chance of long-term failure as people come to feel overwhelmed and under-motivated. In the end, it will be the students who pay the price. A consistent, persistent, long-term approach to the development and implementation of an outcomes assessment program will increase the probability of success.

6 Barriers

With so much good advice available, why are improvements in student learning as a result of assessment the exception rather than the rule? (Banta & Blaich, 2011).

An outcomes assessment program provides a valuable means for improving student learning at the institutional and program levels. To do so, an organization or program must develop an outcomes assessment program that effectively and efficiently closes the loop on the assessment cycle—on a continuous basis. That is, data is collected, analyzed, and acted upon to improve learning. There is, however, relatively little evidence that schools are succeeding in closing the loop (Banta & Blaich, 2011; Hutchings, 2010; Miller, 2012). Despite the plentiful availability of sources that carefully define and describe best principles and practices for establishing outcomes assessment programs, a growing body of literature documents the reasons why schools struggle to implement them successfully. The major challenges cited include a focus on outcomes for accountability rather than improvement, an inability of schools to engage in the deep learning required for substantive change, a failure to convert data into action, and, overall, institutional cultures that do not value collaboration and transparency for the sake of improved learning.

Hersh and Keeling (2013) allude to what is perhaps the most prevalent reason for the lack of success in implementing outcomes assessment programs: institutions responding to external demands for accountability. Often, such responses to external bodies are transactional, generating little systematic or systemic change (p. 4). Banta and Blaich (2011) concluded that this indifference to action is perpetuated by the belief among many faculty members that assessment is an “externally motivated and bureaucratic process” (p. 24) that minimizes time with students. In other words, these institutions and programs have failed to generate the transformational, deep change that might lead to entirely different approaches to the delivery of instruction. Tagg (2007), citing Argyris and Schön (1978), refers to such change as double-loop learning. Instead of examining the “governing values” (p. 38) behind the policies and processes that may be the root cause of ineffective change, institutions justify the status quo by relying on defensive routines and refusing to publicly report performance results. The condition Tagg alludes to persists because many institutions lack a culture that collaboratively examines student learning, from its design to its assessment. Hutchings (2010) points to this as the absence of faculty involvement in the process, which may be explained in part by the “excruciatingly slow” work on common learning outcomes (Miller, 2012).

Blaich and Wise (2011), in attempting to uncover why so many institutions with ostensibly successful outcomes assessment programs have not transformed learning, determined that a major issue is the translation of data into action. The common procedure is for institutions to gather data and simply circulate results among a small group, and then “shelve them if nothing horrible jumps out—and sometimes even if it does!” (p. 12). The issue is that assessment data gathering is not followed by faculty presentations on the nature of the data, nor by faculty-driven discussions about how to respond. Blaich and Wise go on to point out that unless the data reveals something “truly devastating”, there is little to no response. Bresciani (2012), in her case study, attributes an unsuccessful assessment program to ineffective communication in the planning stage. She found that the major pitfall was that key stakeholders had never agreed on a “shared conceptualization” of what metrics or data would be collected, or with which audiences the data would be shared.

7 Distributed Leadership

Measuring and assessing learning outcomes is critical to ensuring that students have successfully mastered the skill, competency or knowledge. But where and how this is done is still an underdeveloped area (Lennon et al., 2014, p. 10).

EFL programs, like most higher education programs, do not have a tradition of assessing learning outcomes. For EFL program leaders who wish to implement and sustain a successful outcomes assessment program, the task may seem intimidating given the lengthy list of principles and best practices prescribed by the literature. Indeed, the evidence indicates that few schools and programs establish successful outcomes assessment programs (e.g. Hutchings, 2010). If we boil down the barriers to success, we are left with: “little or no collaboration…”; “insufficient shared planning…”; “no transparency with data…”; “ineffective communication…” (Banta & Blaich, 2011; Hutchings, 2010; Miller, 2012). In other words, we are left with the notion that many organizations do not have people who can talk and work with each other—for the good of the students.

It is for this reason that Distributed Leadership (DL) may offer a framework worth exploring for EFL leaders faced with the task of implementing and sustaining an effective outcomes assessment system. DL has its roots in primary and secondary education in the United States; however, it is gaining broader appeal across the educational spectrum and in different countries (e.g. Bolden, Petrov, & Gosling, 2009; Jones, Lefoe, Harvey, & Ryland, 2012; Pont, Nusche, & Hopkins, 2008; Van Ameijde, Nelson, Billsberry, & Van Meurs, 2009). While most leadership models explore the personalities and actions of individual leaders, DL views the activity of leadership as the focal point. Spillane et al. (2001, 2004) posit that leadership is not the result of one individual’s actions, but rather a complex web of social interaction among the leader, followers, and what they refer to as the situation. Therefore, when planning an outcomes assessment initiative, the focus becomes the way in which leadership is diffused throughout a unit. In the case of an EFL program leader, the question shifts from “How am I going to make this happen?” to “How can I facilitate successful implementation and sustainability of this project?” Two similar, yet distinct, queries.

The distributed perspective of leadership calls into question the generally accepted notion that leadership is the exclusive domain of those in leadership positions, such as a president or the head of a department. Rather, Distributed Leadership, as the name suggests, is derived from the idea that leadership emerges from the efforts of a variety of individuals within an organization—both positional and non-positional leaders. The central notion is that leadership is manifested when a leader’s cognition is stretched, or distributed situationally, over aspects of the situation and actors (Spillane & Sherer, 2004). Actors, according to the authors, may be both leaders and followers, for without followers a leader cannot lead. Therefore, the focus of leadership shifts from a single individual to the “interplay between the actions of multiple people” (p. 37) utilizing particular tools and artifacts within a particular situation. Spillane et al. (2004, p. 25) explain that this “collective leading requires multiple leaders working together, each bringing somewhat different resources—skills, knowledge, perspectives—to bear”. In sum, the unit of analysis is not the leader but the activity and the “web” of leaders, followers, and the situations that constitute the leadership itself. In the specific case of an EFL program embarking on an outcomes assessment process, the Distributed Leadership framework may be compelling because of the large size and structure of most EFL units, in addition to the potential for expanding and deepening engagement across the program.

As mentioned throughout this chapter, there is a respectable body of literature dedicated to the principles and practices associated with successful outcomes assessment programs (e.g. Angelo, 1999; Banta et al., 2009; Bresciani, 2009, 2012; Maki, 2004). In contrast, there is also a body of literature indicating that evidence of successfully implemented initiatives is scant. As Hutchings (2010) has concluded, “Unfortunately, much of what has been done in the name of assessment has failed to engage large numbers of faculty in significant ways” (p. 3). This sentiment is echoed by Hersh and Keeling (2013), who lament that too often assessment is “orphaned to the province of a small group of dedicated faculty and staff” (p. 9), which can easily lead to exhaustion and marginalization. Distributed Leadership works to redress these imbalances by drawing a greater number of participants into such critical processes. How it does so can be seen by examining the core concepts associated with Distributed Leadership (i.e. leaders, followers, cognition, and the situation) and exploring the ways in which they may play out in the context of learning outcomes assessment in an EFL program.

8 Leaders and Followers

While it is critical that positional leaders (e.g. president, dean, department chair) support initiatives to assess outcomes (Banta & Blaich, 2011; Bresciani, 2009; Maki, 2004), it is also essential that other individuals within the organization or unit assume non-positional leadership roles in the assessment process. These may be EFL instructors leading working groups in designing assessments or discussing results data. In one EFL program of 600 students at a university in Istanbul, Turkey, there is a testing office dedicated to the development and administration of placement and exit exams. However, all instructors are given the responsibility of developing a number of assessments that determine 40 % of their students’ final grades. This allows instructors to gain a greater understanding of the assessment process, while also giving them greater ownership in assessing their students. From this situation, a number of instructors have emerged as non-positional leaders in founding the program’s outcomes assessment process, while others have assumed leadership positions in a recently expanded testing office. As the number of non-positional leaders grows, there is a corresponding increase in the number of followers who are brought along by their colleagues’ influence. Likewise, there is an overall increase in engagement with the outcomes assessment process and student learning—a rising tide lifts all boats.

In an EFL program where top-heavy leadership was once concentrated in three positional leaders, there is now much greater depth and breadth of leadership across the program. Indeed, the leadership of the three positional leaders has been distributed through the spreading influence of the non-positional leaders and their followers. This has resulted in much greater attention to student performance, as the teaching staff collaborate to make sense of and interpret assessment evidence (Banta & Blaich, 2011) and devise action plans.

In the lexicon of Distributed Leadership, the cognition of the positional and non-positional leaders has been stretched across the organization (i.e. the EFL program) as the collaboration expands. As Maki (2004) suggests, one can see evidence of a “collective commitment” via the structures (i.e. the expanded testing office), the processes (i.e. shifting a percentage of student assessment responsibilities to classroom instructors), and the practices (e.g. no classes on Tuesday afternoons so that teachers have dedicated collaboration time) (emphasis mine).

In another instance from the Turkish university, an instructor took it upon herself to devise a survey to gauge professional development needs among the teaching staff. Based on the results of the survey, she recruited instructors from within the teaching staff to provide professional development in areas where they had knowledge and experience, and then created a professional development calendar for the semester. In turn, those who provided the training sessions became the de facto go-to people in their respective areas of expertise. Thus, not only were the one-hour training sessions offered, but a distributed resource center also came into being through the initiative of individual instructors.

Another instructor saw the need for an Academic Support Center for freshman students of English, who needed assistance in writing papers and studying for the TOEFL and IELTS. She began by offering her free time to her own students. Within a year, five volunteer instructors were providing tutoring to any freshman student of English who requested assistance. The positional leader may or may not have perceived this need, but this instructor did; she not only founded the center but also attracted a cadre of followers among the other volunteer instructors.

A final, and perhaps most relevant, example for our purposes occurred when another instructor saw that outcomes assessment was not receiving much attention within the EFL program. The university had recently conducted some activities related to the Bologna Process, but the institution provided no direction other than “complete this within 2 weeks.” At a staff meeting, the instructor provided an overview of the principles and process of outcomes assessment, then asked whether any of the instructors saw a need to assess any outcomes. Five instructors and two administrators expressed a desire to begin the outcomes assessment process. Since then, all seven have been through one full cycle of the project and are currently developing action plans based on their data collection and analysis. One of the projects, because of positive results, has moved from the pilot stage to program-wide implementation. Moreover, the EFL program recently began an accreditation process, and five of the seven who have been leading the outcomes assessment initiative are now on the leadership team for that process. Based on their experience with outcomes assessment, these five are able to lead the accreditation process effectively, as well as serve as ambassadors for it and engage more followers. This may not have happened had the one instructor not initiated the outcomes assessment process a year prior.

In sum, the positional leader of the program has enabled an environment that allows instructors and staff to explore ideas and expand them into formal and informal entities that ultimately result in improved learning opportunities for the students of the EFL program. From a distributed perspective, Spillane and his colleagues describe this as a multiplicative rather than additive model (2004, p. 16). That is, the interactions among two or more leaders in carrying out a particular task may amount to more than the sum of those leaders’ practice. Whether it is the positional or non-positional leaders, their cognition (vision and leadership) has been stretched (distributed) across the organization (the EFL program) via the interactions of these actors (leaders and followers).

9 The Situation

Spillane et al. (2004, p. 10) contend that Distributed Leadership has three essential constituting elements: the web of leaders, followers, and the situation. Having examined the actors, it is now important to explore the situation. The situation comprises the many facets of an organization that either enable or constrain a leader’s work. The situation may be the organizational culture or the structure, whether physical or organizational. It may be policies and procedures, or the symbols, tools, and other designed artifacts that are part and parcel of day-to-day leadership practice (p. 21). Spillane and colleagues further explain that a leader’s thinking and practice are mediated by these artifacts: they serve as constituting components of leadership practice, not simply as devices or means that allow individuals to do what they want to do (p. 23). Thus, when a leader (positional or non-positional) creates, for example, a memo, a report, a new policy, a new program, or a new office, that is a means by which leadership is being distributed. Likewise, existing buildings, policies, and organizational structures provide conduits—and barriers—to leadership. As agents, leaders must choose to utilize the situation, or make efforts to amend it, in order to facilitate the distribution of their cognition (leadership). To that end, leaders in an EFL program working to establish a successful outcomes assessment initiative must keep in mind that the situation plays a critical role in determining how effectively they distribute their leadership. In other words, it is not simply interactions with others that matter, but the means through which those interactions take place. The structures (physical and organizational), written documents, policies, emails, and celebrations mediate cognition—either stretching it or constraining it.

According to Spillane and his colleagues (Spillane et al., 2004), the situation is multi-dimensional. In the context of EFL outcomes assessment, the situation may include the organizational climate, with such indicators as the degree of inquiry and accountability. It may also encompass processes, such as the aggregation, disaggregation, and analysis of assessment data, the formulation of action plans based on results, and the eventual reporting of results in either hard or soft copy. And, certainly, the situation may include structures such as a building or an office, an organizational hierarchy, and policies and positions. The situational aspects most commonly mentioned in the assessment literature are the need for continuous professional development (Blaich & Wise, 2011) and the need for structured time so that faculty can plan assessments as well as analyze and reflect upon data (e.g. Hutchings, 2010) so that improvements in learning can be made.

When we revisit some of the EFL cases mentioned previously, we can see how the situation completed the Distributed Leadership triangle (leader + follower + situation) and contributed to the distribution of the leader’s cognition. In the case of the instructor who started the professional development sessions, she was able to pursue her vision in the first place because of the collaborative culture (organizational culture) already in existence within the EFL program. She was also able to distribute her cognition by administering the needs survey (an artifact), not only to gather data but also to create awareness. She also took advantage of the fact that no classes are scheduled on Tuesday afternoons (structure), thus ensuring that empty classrooms would be available (structure) and that instructors would be able to attend the training sessions.

In the case of the instructor who developed the Academic Support Center for freshman students of English, she did not have the same physical structure as the instructor in the previous example, as they work in different buildings. Thus, she needed to create a center (a structure) by placing one small desk in the limited space of her own office, and asking each volunteer instructor to do the same (structure). Like the professional development leader, she took advantage of the collaborative culture (organizational culture) and the teaching schedule (structure): instructors could choose to substitute three hours per week in the center for one class. In this way, she was able to attract other volunteer instructors to expand the center.

Finally, in the case of the instructor who developed the outcomes assessment project, he was aware that the Bologna Process and notions of quality assurance were on the minds of the instructors and staff. He also understood that some staff and instructors were open to change if it would improve student learning (organizational culture). He was given permission (organizational culture) to utilize one staff meeting (structure) to introduce outcomes assessment, and handouts and forms (artifacts) were created and distributed to ensure clarity of the project and the process.

In all three of these instances, the leaders were able to navigate the organizational structure by avoiding barriers and discovering leverage points that “enable the movement and generation of knowledge” (Spillane et al., 2004, p. 27). These examples may appear commonplace and uninteresting. However, if we return to the discussion of common barriers to successful implementation, we can see that the sharing, collaboration, and transparency missing from the unsuccessful examples are prevalent here. Thus, one could argue that the distinguishing feature is that the leaders in these three EFL cases, acting of their own accord, were able to take advantage of the situation and distribute their leadership across their organization.

10 Conclusion

The English as a Foreign Language (EFL) program often constitutes the largest department or program in a higher education institution, and it may also have the largest number of students and instructors. As such, monitoring quality across the unit is critical. Until recently, in many EFL programs, quality was measured by the percentage of students who were able to fulfill the stated requirements and advance to their academic programs. However, pressure from external forces (e.g. taxpayers, parents, quality assurance/accrediting bodies, ranking agencies, competitors) is increasing. As this situation progresses, there is an ever-greater need to demonstrate that students are acquiring specified knowledge, skills, and behaviors. And, if they are not, stakeholders want to know how the program is going to respond. Outcomes assessment is broadly viewed as an ideal way to monitor achievement as well as drive change toward more effective teaching and learning, especially if educational leaders heed the advice carefully explicated in some of the more well-respected texts in the field (e.g. Baker et al., 2012; Banta et al., 2009; Maki, 2004; Suskie, 2010; Walvoord, 2010). Certainly, designing, developing, and implementing a sustained outcomes assessment effort is rarely a smooth ride, and this can be especially so for EFL programs, which can be large, unfamiliar with such processes, and lacking in institutional resources. Implementing such an initiative involves culture change, and it is well documented that organizational culture is one of the primary barriers to successful implementation (Blaich & Wise, 2011; Hersh & Keeling, 2013; Tagg, 2007).

Thus, Distributed Leadership may provide a useful framework for EFL leaders embarking upon this intimidating task. Distributed Leadership shifts the focus of leadership from individual, positional leaders to the actual activity of leadership, offering some explanation of how leadership can be stretched across an organization through both individuals (i.e. leaders and followers) and the so-called situation. The pitfalls associated with the implementation of outcomes assessment initiatives—lack of participation, lack of consensus, lack of deep learning—may be mitigated through Distributed Leadership as it generates broader, more committed involvement in the outcomes assessment cycle. Greater engagement results in greater dedication to the vision of the program and commitment to its success. In the end, the unrealistic image of a single, positional leader influencing deep learning in the organization is transformed into a more sensible notion of vision and commitment distributed across the instructors and staff, thus enhancing the possibility that the drive for improved learning becomes woven into the culture of the program.