
This chapter looks at the broad outline of developments in ways to teach foreign languages, starting from the post-WWII focus on memorization and skill practice as necessary initial stages in language acquisition and reaching up to recent student-centered and sociolinguistically oriented emphases. Its objective is to challenge readers to think about that historical legacy and its impact on the profession’s practices in a period of transition in postsecondary education as a whole. This thumbnail history is not intended to be comprehensive, but rather to illustrate why the restructuring of language teaching at this time necessitates addressing the heritage of institutional and professional practices in foreign language (FL) instruction that initially dominated the field and continues to influence it well into the twenty-first century.

As will be traced below, from the years following WW II to the present day, shifts in major directions for FL teaching have been associated with cross-disciplinary fields, notably behavioral and cognitive psychology, psycholinguistics, discourse analysis, and computer technologies. Whether these initiatives preceded FL pedagogies or developed alongside and influenced them, each needs to be discussed as it applies to specific phases of FL teaching rather than in the strict chronology of its historical appearance. This caveat is particularly relevant to the current chapter’s references to Bloom’s Taxonomy of Educational Objectives (1956).Footnote 1 In the six decades since its publication, this early statement outlining a learning sequence for educators’ assessment of cognitive processing has undergone a variety of reinterpretations, as new readings of the Taxonomy have been proposed and its applications expanded. Consequently, Bloom’s Taxonomy will be referred to throughout this chapter in terms of the particular direction of its influence during a given era in FL teaching in the United States, not in any attempt to set a normative reading of its significance into place.

The major eras that emerge as significant need to be understood in terms of different outside forces. In the first four decades after WW II, the empiricist models and structural linguistics (particularly in the 1950s–1970s) that dominated textbooks and assessment structured curricular decisions about the elementary and intermediate years of language instruction. Advanced learners were not a special focus of attention. By the late 1970s, however, the ACTFL proficiency movement introduced a more comprehensive vision of what language instruction meant, setting performance objectives for the spectrum of language learners in North American colleges and thus intending to raise the profile of FL instruction. That vision was augmented in the 1990s by ACTFL’s development of the Standards for Foreign Language Learning, which again broadened the focus onto what it meant to learn a language, turning classroom emphasis away from correctness and toward context-based performance of tasks relating to culture and communication in a variety of interactional settings. During this same period, the internet and increasingly available forms of online communication enabled a more intense focus on the learner that allowed Bloom’s Taxonomy to re-emerge and reframe our ways of thinking about stages in the FL acquisition process. With computers and later with iPods, iPads, tablets, e-readers, and a host of downloadable applications, students and their teachers could interact with authentic foreign languages on their own terms and in real time as learner communities—increasingly, FL learning became identified with learning about foreign language use as a manifestation of speakers’ and writers’ cultures.

After a look at what each of these stages meant to FL instruction, I argue in the chapter’s conclusion that the cornerstone of language acquisition today needs to be understood in new ways: FL learning now has the broader goal of helping adult learners use their extant literacy capabilities to interact with unfamiliar concepts expressed in an unfamiliar language; they need not only to learn about and interact with the language and its culture, but also to learn how to move beyond classroom settings and manage their own identities and interactions in that new context. If this summary describes the new goals for FL learning, then such student literacy is fostered only when learners are able to apply features of preexisting knowledge to negotiate content, language, and pragmatic decisions about identity and action as covalent components of the meaning of language use. Such a project will, as the following analysis suggests, involve rethinking historically anchored structural and pedagogical components of many FL departments in North America.

1 Setting the Stage: Bloom’s Taxonomy and the Turn Toward the Learner

In 1956, what many authorities acknowledge as the most significant twentieth-century public document in the field of education appeared: Bloom’s Taxonomy of Educational Objectives. Written by a committee with Benjamin S. Bloom as chair, this document broke down the education process into a series of goals, each of which could purportedly be met by learners who practiced increasingly complex tasks leading them to structured learning outcomes in different domains; those tasks moved through a hierarchy of difficulty, from simpler to more complex (the taxonomy), that outlined the logic of the educational process.Footnote 2 The original proposal by the committee defined three domains of activity through which a learner acquired knowledge, each of which could be described with its own taxonomy reflecting a hierarchy of difficulty from simpler, more fundamental activities of mind up through more difficult ones: cognitive (human thought processes), affective (the range of human emotional responses and their impact on thinking and behavioral processes), and psychomotor (how the body learns through physical activities). The three realms have subsequently been modified by many other scholars to apply to learning processes in different frameworks, all the while stressing both learners and their development over time.

The resulting report presented the tasks associated with learning in these domains as sequences, reflecting hierarchies of increasingly complex activity. Later critics pointed out that the result was a taxonomy of objectives for classroom instruction, one that described the difficulty of the tasks imposed in designing assignments and tests, and was not necessarily descriptive of cognition itself (Anderson and Sosniak 1994). Just as critically, the first and most important part of the original report focused on what it called the cognitive domain, in line with the era’s preference for equating learning with forms of knowledge construction (and not necessarily embodied human cognition), an equation called into question today by the increasing focus on the learner in sociocultural contexts—the other two domains of Bloom’s Taxonomy.Footnote 3

Despite such disputes, Bloom’s Taxonomy remains a consistent reference point. Today’s models for learning, especially in fields like foreign language education (but also in all subjects involving reading, writing, and critical thinking), now routinely describe sequences and constellations of pragmatic competencies associated with learning outcomes and learner motivation. They also take the mediality of the knowledge base (rather than items of knowledge reified into patterns) into account, differentiating, for example, between the literacies involved in reading texts and those involved in various forms of electronic media (e.g. Blake 1998; Berrett 2012). Researchers have produced abundant evidence about the ways that text and reader interact in the multifaceted and evolving mental processing that constitutes literacy, a word that has come into fashion to emphasize the process of learning rather than the product, and to describe literacy as a lifelong task involving an individual learner’s connections with the world, connections whose definitions vary widely depending on learner goals (e.g. Kramsch 2009).

In the present context, I suggest that Bloom’s Taxonomy still needs to be part of an analysis of today’s models for learning and curricular development, even if it has fallen into disrepute and disuse as a research paradigm, because its terminology and description of mental work (defined as tasks, not cognition) remains a ghost in the educational machine and a live component of our thinking about learning as a structured process. That assertion is supported by any internet search using the term “Bloom’s Taxonomy,” which turns up many teaching and learning aids that parallel the original heuristic.

Bloom’s 1956 Taxonomy arranged the components of acts associated with learning in a sequence extending from simpler cognitive activities up through their uses as foundations for more complex ones. While often understood as resting on research and educational objectives different from those of the twenty-first century (and hence on different models of what learning and cognition are), the proposal’s authors recognized the enduring premise that “the simpler behaviors may be viewed as components… [that are based on] more complex behaviors” (Bloom 16).

As critics have frequently asserted, however, the sequence in the chart below has never been tested empirically. The theoretical model simply outlines the graduated complexity of the cognitive acts associated with learning as it was understood at the time. The categories do not describe cognition as it inheres in the brain or in multimodal thinking; they describe the behaviors of learners—what learners are expected to be able to manipulate in the tasks that are set in learning sequences. The taxonomy is usually represented as a pyramid moving from the simpler tasks at the base to the “tip” of more complex learning behaviors. I here reproduce Bloom’s original classifications in their order ranging from simplest to most complex, more concrete to more abstract:

2 The Cognitive Processes

Table 1 Bloom’s original cognitive domain categories, from simplest to most complex: knowledge, comprehension, application, analysis, synthesis, evaluation

More recent iterations have reversed the final two categories to reflect modern English usage (see the diagram below) and have merged the idea of synthesis with the new category of “knowledge creation,” a mental activity that leads to an original contribution to the realm of knowledge in a given field. In more recent models, then, some categories have been regrouped and some have been added. The original stages identified in the standard graphic representation of Bloom’s work below have been subject to revisions and updating for the digital age but, I propose, remain fundamentally applicable today. The original taxonomy is usually depicted as follows:

Figure: the standard pyramid representation of Bloom’s original taxonomy, with knowledge at the base and evaluation at the tip

As the graphic above suggests in its geometry, levels of difficulty remain critical to our thinking about teaching and learning, as we routinely use terminology like “higher order thinking” (or its circumlocutions, “problem solving” or “critical thinking”).Footnote 4 And many discussions of learning cultural phenomena today still easily pick up on all three of Bloom’s domains, sometimes by reference to other fields of theory (e.g. Bourdieu’s 1991 habitus, including the hexis, the acculturated and habituated physical bodies), but nonetheless still remain firmly anchored in the cognitive domain for actual models of curricular practice that stress forms of logical analysis as learning goals. The taxonomies described in the Bloom Committee’s report are only one example of such hierarchies, but the report remains the fundamental and perhaps most comprehensive model ever offered in US educational practice.

That today’s learning models still tacitly reference such cognitivist models for learning from the post-World-War-II environment is significant for understanding what they intend, especially given that learning hierarchies have proved resistant to the empirical research that would establish their validity. Their focus on learning in the abstract is our necessary starting point for reanalyzing, in brief, the “standard account” of the historical evolution of FL teaching and learning in the United States since WW II. This analysis must take into account that the transition from a model of teaching cognitive tasks arranged in difficulty levels like Bloom’s to a notion of learning as individual and individuated literacy acquisition is still very incomplete. Being able to move from understanding a concept to applying it (in Bloom’s language) is a formal description of one dimension of a much more complex process implicated in an individual learner’s abilities to read or interpret cultural products for meaning and to draw textually substantiated inferences about the significance of that meaning for that learner, to write coherently, and to think critically and constructively about the written and spoken word in its sociolinguistic context (Hymes 1974; Halliday 1987; Halliday and Matthiessen 2004; Hammer and Swaffar 2012).

The account I outline here is not an attempt to recoup Bloom in any of its historical adaptations. Rather, it points back at the lost complexity of this model as a description of what literacy means in terms of logic and cognition in the abstract, and it parses more carefully what the FL profession’s 60-year history since World War II has actually accomplished in redefining such formalist descriptions of learning as pertaining not simply to the structure of knowledge to be learned, but also to the learner and the pragmatic practices involved in learning language (and hence to complex cognitive, affective, and psychomotor interactions centered on the individual learner and at an individuated site of learning). That job involves recouping a more complete context for both the development and the afterlife of such postwar models for learning and teaching. That recovery process is particularly critical since foreign language instruction has only recently begun to research how to integrate learning and language concerns. Such holistic approaches could then be integrated into classroom models.

The reasons for this dereliction arguably lie in the history of the profession’s evolution and its research agenda since WW II. Dell Hymes’ broader concept of communicative competence, introduced in the 1960s (Hymes 1966), was later brought into FL pedagogy (Savignon 1972, 1983) as a notion of “communicative competence” focused on oral expression. Whereas Hymes stressed that communicative competence commenced with comprehension of an utterance or text’s context, FL pedagogy tended to stress oral communication, neglecting the basis for communicative competence: the comprehension of a text’s ethnography. In so doing, the practice of FL education tended to eclipse the fact that comprehension is the starting point of any learning sequence, preceding acts of language production, whether written or spoken, and thus is the companion of production in the process of knowledge acquisition and in literacy.

At that time, that lack of attention to comprehension was understandable, given that behaviorist theories had begun to influence FL instruction at beginning and intermediate levels under the aegis of outcomes-oriented models, connecting input with outcomes to be tested in what came to be identified in FL teaching as four observable but separated skills: reading, writing, listening, and speaking. Today, the assumptions made by those models have been superseded in an era when researchers have, among other options, the ability to track neurological information during processing, revealing that processing is more complex and multi-modal than behaviorism’s stress on the link of stimulus to response allowed.

Sixty years ago, however, without access to such tools, behaviorist psychologists and positivist theorists in education could assert with impunity that only separate, discrete, externalized outcomes and observable behaviors could be the measure of learning, with data collected and assessed in quantitative analyses. Such outcomes were more readily measurable than were learning processes. Thus concrete data about discrete expressions of learning were collected and evaluated as indicators of learner achievement levels. Assessments of learning strategies (how learners tried to produce these outcomes), of the role of student backgrounds, of first languages, of affective influences, or of perceptions about FL cultures, however, were not undertaken because they afforded only indirect and often only descriptive data at a time prior to computer-assisted data collection and multivariate analyses.

Today, almost 60 years later, the FL professions are at the point where the cognitive, affective, and psychomotor domains need to be rethought and reintegrated as parts of a single literacy-based model that describes learning. The time has come to move beyond the past’s disputed but persistent implementation of heuristics like Bloom’s Taxonomy and to reclaim their (still largely unrealized) potential—using these heuristics, derived from more general strategies for understanding teaching and learning, to reread paradigms for teaching and learning FLs in a more inclusive way, one that accounts for learners. Integrative, language-driven paradigms for what and how a FL is learned have become increasingly relevant to the more comprehensive learning framework demanded in today’s curricula and to the more diverse and globalized body of learners who engage with it as part of a twenty-first century paradigm for learner-centered and literacy-oriented education in the FLs and beyond.

For that reason the waypoints in the teaching and learning models implemented in the United States’ FL instruction after World War II bear examination in greater detail, to see how many of the still-dominant curricular and pedagogical paradigms of earlier eras helped create a situation that today threatens to marginalize FL instruction in colleges and universities rather than integrating it as central to literacy in the university curriculum in general.

3 Skill Acquisition as a Learning Model: The Emergence of Technocratic Language Instruction in the United States

The time-honored tradition of childhood learning as anchored in reading, writing, and arithmetic was still solidly at play in the United States after World War II, as the nation faced the challenge of developing a modern education system that would bring learners across measurable levels of achievement (ideally up to post-secondary education) and create the best educated workforce in the world.

Big science—science fostered by government funding and all too often driven by its politics—began its work in the public sphere after its wartime successes, as committees like the one headed by Bloom emerged and standardized testing (aptitude and achievement) ruled as the benchmark attesting to institutions’ success in educating a new, mass student body. Both the procedures and the outcome data produced by such initiatives fit empiricist (and usually experimentally grounded) theories that saw evidence of learning in performance rather than in less readily verifiable cognitive outcomes.

Influenced by behavioral psychology and the conditioned-response models that remained mainstream theories of learning through most of the 1960s, the skills-as-performance model initially transferred to postwar FL instruction in the form of audio-lingual training—learning to speak a FL through rote repetition (as habits or “overlearning”) and learning grammar rules inductively on the basis of that repetition. Audio-lingual secondary and postsecondary textbooks (particularly the ALM Method series for all the major languages taught at those levels, based on structuralist approaches to describing languageFootnote 5) reflected practices used by the U.S. military in WW II. After the war, rote memorization was held to have inherited the cachet of the scientific empiricist methods widely respected in the 1940s and 50s: input of a certain number of hours of instruction yielded predictable outputs, judged by standardized tests.

By 1958, the Cold War political climate, with its focus on a Europe dealing with the Soviet threat, contributed to congressional passage of funding through the National Defense Education Act (NDEA). The resulting centers for teacher training led to funding for adapting instructional programs in foreign languages along these empiricist-behaviorist models—and for claims about their scientific approach to learning as compared to older four-skills curricula.Footnote 6

Political exigencies in the 1950s also had a practical impact on the constitution of FL departments, changing the make-up of language department faculty for elementary and intermediate classes. Nationwide, a surge in curricular language requirements increased undergraduate enrollment and encouraged expansion of graduate programs, turning FL learning into a linchpin in the postwar education system of the United States (e.g. Berman 2003; Richter 2003). The NDEA centers established to train these new instructors in the audio-lingual approach later introduced other evolving pedagogies. Instead of extensive choral work in the classroom, students were sent to language labs to practice with taped language drills in a stimulus-and-response framework.

With burgeoning enrollments, beginning instruction placed new demands on FL programs and tacitly gave graduate students a new role in comprehensive or research universities—the emergence of the “teaching assistant” as instructor of record in beginner and intermediate classes. In the 1960s, the faculty position of language coordinator also emerged, initially a regular faculty member who administered programs and supervised growing numbers of graduate student instructors. Gradually, this role expanded, and a faculty member would generally be hired specifically to work with first- and second-year language programs. To promote uniformity in lower-division pedagogy and assessment, such coordinators began to hold weekly meetings with graduate instructors, meetings that then evolved into a required course in FL learning theory and methods. While generally not having the rank or prestige of other faculty in a FL department, the coordinator was hired not only to supervise the curriculum but also, by the 1980s, to undertake empirical research or produce “how to” or theoretical articles for education journals, to visit graduate instructor classes to encourage consistent teaching practices, to produce teaching materials (even textbooks) on the methods they were classroom testing, and to provide coherence to multi-section courses at the first- and second-year levels through informal coordination and testing sessions.

By the 1980s, what had been “foreign language education” in schools of education, often defined in terms of ESL/EFL settings, found its analogue in the then almost ubiquitous efforts in FL departments to provide pedagogical training for graduate students. With that status, a new research specialty emerged, most commonly known as “applied linguistics” (e.g. Magnan 1983). Such a disciplinary evolution was necessary to upgrade the status of the faculty involved in “pedagogy,” hitherto seen as a purely pragmatic activity, and occasionally in psychometric research of a type not represented elsewhere in a typical language program of the time. Where ESL/EFL had as its focus how non-native speakers integrate into English-language environments, the goal of this new FL specialty was helping second language learners acquire the languages of countries to which they had little access other than through books and limited options for immersion, such as summer school or study abroad.

However, the traditional “graduate faculty” of the typical PhD program found it difficult to accept this new entry into their programs. In their view, upper-division and graduate courses in more traditional specializations of research and publishing (e.g. linguistics or literature) were the purview of research-oriented faculty, a definition that stressed interpretative studies or theoretical modeling rather than tracing “skills” through the curriculum. That these new “applied linguists” studied lower-division learners only reinforced curricular distinctions between so-called “lower” and “upper” division language courses.

Bloom’s taxonomies as originally applied suggest ways to understand this division as more than prejudice. The lower division was managing the cognitive domain of language learning as it was defined until well into the 1980s: as a question of linguistic structure. The learner was believed to be able to automatize or “overlearn” the rules of the target language, prioritizing grammatical correctness as evidence of learning. At the same time, elementary stages in learning a FL became an issue of learning linguistic form rather than other contents, which cut the learning styles of the typical lower-division FL classroom apart from those in the upper division—“skills” were supposed to be mastered as a prerequisite to upper-division learning of content (especially literature and high-culture texts), and their transfer (the shift from repeating paradigms to using them as part of authentic communication, for example) was assumed to be a natural sequence.

The definition of language at play since the 1950s continued to be compatible with the linguistics of later decades: formalist, and related to structures and their correct use as documented in the linguistic evidence. When specialized domains of language were considered (often under the rubric of “language for special purposes,” such as use in business or science or medicine), those new cognitive domains were defined in terms of inventories of the linguistic forms and lexical items used. Language for special purposes often ignored the factors motivating acquisition of content that are subsumed in definitions of “content-based instruction” today (e.g. Stoller 2004).

The research paradigms existing within the typical FL department were thus incompatible. The skills approach to the lower-division language classroom operated on premises that did not foster upper-division expectations about content learning, critical thinking, or articulation of affective responses to what was learned. It focused on memory work and on separating speaking, listening, reading, and writing in pedagogy and assessment; it was paired with research agendas dealing with a limited range of cognitive domains: usage, correctness, automaticity, and memory per se rather than their application in synthetic or analytic reasoning. Indeed, the affective domain, recognized as critical in the reading of literary works (Shanahan 1997; Tucker 2000), was viewed as a potential block to automaticity and correctness. Such fundamentally different mindsets influence FL curriculum practices, materials development, and research agendas at all levels of instruction to this day.

The historical development of the profession illustrates the impact of these splits. By the 1960s, increases in secondary school FL enrollments and a one- or two-year language requirement at most postsecondary institutions created the need for a professional venue that could foster and guide policies at these levels. The Modern Language Association (MLA, founded 1883), the dominant public policy venue for language study at that time, had often addressed such issues since its founding, with a periodic focus on instruction in its flagship publication, the PMLA. By the mid-1960s, however, two wings of the FL college faculty had emerged as increasingly separate concerns (linguists and “literary scholars”) and a third had begun to emerge (applied linguistics): instructors conducting elementary instruction anchored in memorization and reproduction of language stood apart from professors devoting their energies to teaching advanced content and the interpretation of linguistics and literature. To many MLA members, the divide between these wings of the profession appeared too wide to bridge.Footnote 7

This problem was addressed in 1967, when the MLA sponsored the founding of a new professional organization devoted to FL research and teaching: the American Council on the Teaching of Foreign Languages (ACTFL).Footnote 8

4 The Challenges to Empiricist Models

As the institutional face of FL research and teaching became reified in one trajectory, the research paradigm took off in other directions, accelerating dramatically. By the 1960s, the behaviorist model for learning was being questioned in ways that acknowledged expanded domains for language learning.

Work in the emerging field of psycholinguistics was challenging premises that limited research to observable behavior and empiricist assumptions about language acquisition. The evolving paradigm in psycholinguistics rested on a broad range of work, from outliers as far afield as Jean Piaget’s (1971) research, through Eric Lenneberg’s (1967) related proposals about language learning and stages in cognitive development, to Noam Chomsky’s (1965) hypotheses about differences in language acquisition due to the cognitive capabilities of a child compared with those of an adult. Although much of this linguistic or learning theory was not directly applicable to adult FL learners, its emergence prompted some voices in the FL profession to take a broader look at language acquisition as the result of interrelated abilities involving thought processes, not just behavioral modification (albeit in a cognitivist-mentalist paradigm).

By the late 1960s, new and expanded publishing venues gained in audience and influence. Increasingly, journals published research on learning that introduced changes into the FL curriculum, perhaps most notably the Modern Language Journal. The articles by Kenneth Chastain and Frank Woerdehoff in 1968 and 1970, for example, were one landmark for change. The authors used the definitions of John Carroll, a leading researcher on human intelligence and testing (e.g. Carroll 1967), to compare audio-lingual habit theory with various approaches remembered today under the general rubric of cognitive code-learning theory. Their study offered evidence that would ultimately shift the direction of language teaching: it compared two groups’ scores in speaking, listening comprehension, writing, and reading, measured using the MLA’s foreign language exam, which included not only grammar, but also reading and listening comprehension (see Chastain and Woerdehoff 1968).Footnote 9 Results favored the cognitive code group over habit formation. Audio-lingual approaches emerged as the less effective teaching tools.

What was then understood by the term “cognitive code approach” and the related rubrics subsequently incorporated was a more deductive style of teaching: explicit instruction about grammar rules and their applications in drills and exercises correlated with them, use of glossed reading materials, and reading or listening questions to check students’ grasp of factual information (Chastain and Woerdehoff 1968). Their work prompted a new wave of research, notably a large-scale study of high school FL learners that found improved performance in control groups with grammar instruction as compared with audio-lingual classes (Smith 1970). The audio-lingual method and the government investment associated with it had not produced language learners who reflected the gradually changing definitions of desirable communicative outcomes for FL classes (Hymes 1974; Savignon 1972; Canale and Swain 1979).

Not surprisingly, in the wake of such research, the preeminence of audio-lingual pedagogies declined dramatically and the federal funding that had generated audio-lingual textbooks was not renewed.Footnote 10 The cognitive code pedagogy with its grammar explanations, vocabulary lists, and discrete point learning exercises had indeed, by the 1970s, emerged as a viable and appealing alternative to rote learning—and as a kind of compromise focus on the established four skills. Teaching materials began to reflect some gestures toward emphasizing student motivation and user-centered language choice (rather than just normative formal linguistics), but the interface between learning theory and language teaching remained largely absent in the construction of teaching materials and curricula.

Post-ALM textbooks in the 1960s and 70s did not initially pay any great attention to redefining cognitive domains associated with language study, the affective domains of learners, or new psychomotor approaches to learning styles such as total physical response or game playing. The most significant elision was perhaps the increasingly influential psycholinguistic research about links between prior knowledge and language acquisition that would start to be acknowledged in the 1970s and 80s (e.g., Anderson 1974; Rumelhart 1977; Kintsch and Van Dijk 1978).

Arguably, however, even today many textbooks remain palimpsests of past, questioned, or even discredited concepts about language learning. They do so by focusing, for example, on isolated features of formal grammatical accuracy (idioms, prepositional phrases) rather than pragmatic applications or communicated content. Comprehension tasks rarely precede complex production exercises (such as synthetic sentences)—learners are asked to produce language constructions without seeing them in their natural environments. As a result, the tests accompanying such books still reward memorized command of isolated language features (morphology, “fill in the blanks”) rather than holistic abilities to integrate language and meaning.

5 The Impact of Psycholinguistic Theory and Research

Starting in the late 1960s, linguists and psychologists began to focus their research on the nature of cognitive processing in the foreign language as well as in the adult learner’s native language (e.g., Kintsch 1970). Their impact was recognized by a diverse set of applied linguists trying to innovate in FL learning programs. As a result, by the 1970s, more student-centered learning approaches were being proposed, notably in venues such as NEH or FIPSE grants and ACTFL workshops. Resulting publications in book series, and articles in influential venues such as The Foreign Language Annals and The Modern Language Journal, introduced reading for ideas (textual propositions) and initial steps toward the pragmatics of grammar and the particular value of collocations. By the late 1970s and at the start of the 1980s, research focused on the role of cognitive processing in FL acquisition, expanding the definitions of the cognitive domain that had been in play under the sway of behaviorism. This work foreshadowed the focus on the learner that dominated pedagogical thinking of the 1990s—“the Decade of the Learner.”

Examples of efforts to establish a comprehension-based learning sequence for cognitive, affective, and psychomotor processes abound: Valerian Postovsky (1974) found evidence supporting teaching comprehension before asking for language production (Winitz 1981); Alice Omaggio-Hadley (1979) studied the role of pictorial input in enhancing vocabulary retention; James Asher’s (1972) “Total Physical Response” linked psychomotor responses to cognitive processes in FL acquisition; and Janet Swaffar and Margaret Woodruff (1978) investigated adult-level content-based instruction that commenced with recognition tasks. As a master of monikers that emphasized students’ affective as well as cognitive processing—coining by-words such as “comprehensible input” and “affective filter”—Stephen Krashen (1982) emerged as a catalyst for theoretical rethinking of ESL and FL pedagogy as well. Such work paid new, more detailed attention to the affective, psychomotor, and cognitive domains as they affect the learner in ways quite removed from the formalisms of language itself (the basis of skills-driven assessment). Together, these trends pointed to more holistic approaches to language learning (Swain 1985).

The most influential ongoing studies in this new, significantly more student-centered learning came from Canada’s research centers investigating bilingual education, as they attempted to build curricula in new ways. In ongoing contributions, Michael Canale and Merrill Swain (1979) argued for the value of Dell Hymes’ (1974) earlier suggestions about discourse contexts as key markers of speaker intentionality. Indeed, speech acts such as inquiry or negotiation were recognized as critical to communicative effectiveness (Kramsch and Crocker 1990). This expanded definition of “communicative competence” commenced with comprehension of speaker intent prior to an emphasis on student exchanges. These exchanges, anchored in familiar social situations, replaced repetition and over-learning activities by encouraging learners’ language choices and expanding their freedom of expression. Pragmatic language use was beginning to assume importance in the curriculum.

Such shifts to a more learner-centered pedagogy appeared to be supported by attitude studies such as those of Elaine Horwitz (1986), whose findings documented stress and inhibition in classrooms focused on a teacher-driven question-and-answer environment. Earlier investigations of classroom discourse had argued for more student talk and for teacher review of accuracy issues in general rather than attention to accuracy in individual oral performance (Holley and King 1975; Schumann and Stetson 1975).

These proposals gave credence to ideas about changing FL programs, but no consensus emerged about how to do so. Attitude research, discourse research, and broadened attention to sociolinguistics and user concerns did not necessarily add up to new curricula. The impact of such research was gradual, constrained by practical exigencies, unlike the curricular breaks that occurred when the audio-lingual approach after 1945 supplanted its predecessors, only to be displaced by the cognitive code and any number of subsequent efforts to claim a preeminent “method” for teaching and acquiring a new language. Given this array of pedagogical options, teachers trained to teach from textbooks that championed a single approach now found themselves confronted with multiple, sometimes competing facets of new pedagogical models.

The dominant proposals centered on Communicative Language Teaching (CLT), but shared that stage with related foci such as functional-notional approaches or teaching for proficiency, all of which encouraged students’ verbal interactions in and outside of class to express particular intents and to negotiate different social situations (Rivers 1981). For teachers trained in the relative straitjacket of ALM, these precepts represented a stark expansion of freedom for both their students and their curricula. For many, these new trends lacked a coherent set of pedagogical practices and involved fundamentally new modes for assessment of learning. Indeed, for a variety of reasons, entrenched practices proved difficult to alter.

As noted above, the American Council on the Teaching of Foreign Languages (ACTFL) and its publication, The Foreign Language Annals, helped solidify not only the professional value, but also the distinctly different enterprise, of professional language teachers vis-à-vis their colleagues in the fields of literature and linguistics in postsecondary institutions—validating a tribe of empiricists in the midst of a humanist discipline. It also created a professional link between K-12 teachers and postsecondary teachers of language, which over time was perceived by many to be stronger than the lower- to upper-division ties at the college level. In any case, the thus-reinforced professional divide proved particularly evident in postsecondary schools granting PhDs for language teachers.

In cases for promotion and tenure in such institutions, pedagogy and applied linguistic research lacked the prestige of literary and linguistic studies in the minds of other colleagues in the liberal arts. At the same time, they were not viewed as broad enough for most schools of education, or technical enough for the formal linguistics of the time. This lack of prestige also affected (and continues to affect today) the salaries and tenure prospects of language specialists.Footnote 11

Particularly in those many situations where language coordinators were untenured, they remained isolated from the advantages of research and professional development. Separated in their teaching venues from faculty teaching upper-division courses, and lacking funds and professional initiatives, they were not in a position to speak with a strong voice in crafting curriculum design for the departmental language program as a whole. The sense of competing methodologies left both teachers and particularly textbook publishers understandably preferring small-step modifications of the status quo rather than adopting full-scale innovations. While providing short readings from authentic materials that would hopefully enhance student motivation, for example, textbooks continued to offer dialogues and slot-filling or “synthetic sentence” exercises. More socio-linguistically or cognitively complex activities tended to appear at the end of chapters. They often appeared as addenda or optional components in revised editions of textbooks originally designed in the 1970s, often with “language lab” components (gradually adapted to television and, later, computer use).

Consequently, “eclectic” textbooks continued to anchor the FL profession in a tradition of amalgamated agendas (standard task sequences such as ALM dialogues and drills alongside CLT activities) rather than the sequenced, integrative learning approaches designed to bridge the lower- and upper-division gaps in objectives and pedagogies. The new, contextualized activities were often set in artificial situations, ostensibly content-based. Pragmatically, the result was a further estrangement of FL research and teaching from departments now suspicious about “the latest methods” and their lack of success for learners, from institutions and funding agencies that had invested in audio language labs and now embraced new technologies with untested applications, and from researchers in learning whose paradigms for exploring learning and the learner had greatly expanded, but who lacked criteria for progress and testing programs for assessment that were reliable and verifiable.

6 Professional Organizations Weigh In: Toward a Second Post-War Curricular Reform

No wonder, then, that the FL teaching professions sought to find a new set of data validating their new practices, even as they avoided a search for a new model of learning that would provide links between the material to be learned, the learner, and strategies for teaching.

It was not until the late 1970s that, under ACTFL auspices, a program to assess performance (and thus to provide the new data validating practice) was initiated in response to an increasingly popular pedagogical emphasis on what Canale and Swain (1979) called “communicative competence.” The resultant oral proficiency test represented the profession’s first step since the audio-lingual period (with its NDEA institutes) toward establishing nationwide curricular objectives and standards for FL study, this time through an assessment program and by training raters who understood how to compare certain kinds of language performance. Proficiency testing was developed as an outcome measure. Although it was not intended to act as a curricular framework, as an outcome measure it certainly had curricular implications (see, for example, Omaggio-Hadley 2000; Liskin-Gasparro 2003).

Adapted from procedures used by the Defense Language Institutes in Monterey, California, and Washington, D.C., the proficiency movement ushered in alternatives to the behaviorist “accuracy” and “skills” model that had dominated assessment to that point. It did so by introducing the notion that learner objectives needed to shift in relation to communicative effectiveness—and implicitly that curricula should construct stages in evolving discursive competencies reflecting ascending levels of ACTFL’s Oral Proficiency Interview (OPI). Sometimes grammaticality would shift as students improved on the OPI. In some subsequent studies, for instance, researchers found that, as speakers of a foreign language improved in conceptual and discursive range as indicated by higher OPI ratings, they tended to make more surface language grammar errors than did students who rated lower on the proficiency scale (see, for example, Magnan 1988).

In other words, curricula began to be adapted to these assessment practices. FL textbooks began to assign exercises that structured communicative complexities in negotiating disagreements or expressing abstract ideas on their own merits (Kramsch and Crocker 1990). This new paradigm acknowledged the speaker’s processing load, and thus how and why surface-language (grammatical, lexical) errors increased. By emphasizing the value of increasing articulatory ability, the oral proficiency test gave a FL learner’s ability to express creative, context-appropriate ideas pride of place as an advance in language competency. With that step, the movement introduced what was considered a new framework for assessing learner progress in what seemed to be a more student-centered and communicative-based classroom. FL learning practice was beginning to encourage and reward adult literacy—knowing how to do things with words even when not “native-like” (Byrnes 1998a, b; Birdsong 2006).

In this way, the movement also contributed to groundwork for introducing more complex models of language and social behavior into models of language teaching and learning, introducing, for instance, discourse analysis (Allwright 1980; Bacon 1987; Lazaraton 2003; McCarthy and Carter 1994; Scott 2009), pragmatics (Kasper 1998), and notions of cultural literacy into the FL curriculum (Arens 2009; Firth and Wagner 1997; Lantolf 2006; Kramsch 2009). From the outset, the rigorous ACTFL training program to qualify as a proficiency rater stressed that ranking involved sensitivity to a variety of cultural contexts, and it provided clear links between learners and curricular practice that had been missing in earlier eclectic models for classroom teaching. Raters-in-training worked with models for each of the four levels of proficiency—novice, intermediate, advanced, and superior—that looked for increasingly literate expression (Byrnes and Canale 1987).

With the OPI focus on literacy in oral communication, sociolinguistic concerns entered the curriculum in new ways, changing the cognitive focus for learners from language formalisms to aspects of language use and performance. Although the ACTFL Proficiency Guidelines (2012 [1986]) had a section on each of the four skills, only the OPI existed as a testing technique. To achieve advanced or superior competency ratings in speaking, for instance, speakers had to respond appropriately in different social settings and to different contexts: work, home, leisure, for example, and do so according to the norms of any recognized social community. At the highest levels, register (in the sense of prestige varieties of language) became important. At those levels speakers had to display cultural awareness about the existence of various specialized or domain-specific languages, not just a single normative language competency.

Research quite naturally followed on this new model of the cognitive demands placed on the learner, seeking to add data that confirmed proficiency criteria for assessing speaking levels (Magnan 1988). Subsequently, taped and computerized formats for assessment interviews were also developed, yielding consistent and verifiable results (Liskin-Gasparro 1984). Yet the initial goal of the proficiency movement, expanding this individualized, multidimensional form of assessment to encompass reading, listening, and writing levels, has not yet materialized in equally developed forms.

More critically, the broader agenda of the movement—to assess all aspects of language acquisition—hit a snag. The initial criteria models that worked for establishing FL speakers’ different levels of reading ability could not be verified in early research studies (see Allen et al. 1988; Lee and Musumeci 1988). Work in discourse and genre theory, especially that of systemic functional linguistics (SFL) (Eggins 1994, 2004; Lee 2001; Martin and Rose 2008), suggests several reasons for this unanticipated problem that ACTFL encountered in establishing a performance sequence for reading, writing, and listening comprehension.

The issue was that reading, writing, and listening are not externally conditioned exchanges of language in the same way that oral interviews are—they all fall under the rubric of “language use,” but not in the same way. In the oral proficiency situation, an interlocutor and the contextual constraints on any given exchange help fix ideas of communicative appropriateness and restrict choice. The reader, listener, or writer, on the other hand, engages in a particular, internally generated discourse that is not driven by an interlocutor-framed interaction or an assigned description, as the oral proficiency test is.

To develop viable descriptions of what it means to “read a text” or “write about culture” requires many more decisions about what success or failure in these tasks might require learners to do. Raters would want to know what individual background knowledge or cultural experiences inform a particular reader’s performance (e.g. Johnson 1982). When listeners, readers, and writers confront an “other” in their heads rather than in a conversational interchange, they comprehend or generate language on their own terms, affectively as well as cognitively, and hence may or may not address the comprehension or the language use sought by an evaluator.

In spoken proficiency, for instance, such sophistication is the hallmark of the very advanced or superior speaker, since speaking makes greater demands on rapid recall and automaticity than reading or listening do. Readers and writers in particular have options to reflect and to reread or rewrite. Time is on their side, an advantage speakers do not have.Footnote 12 But adult learners who can read and write in their native language are able to process FL texts by applying some strategies they already possess, albeit in different ways than native speakers with equivalent background knowledge and reading goals might, perhaps recursively rather than simultaneously. More recent work with the 2012 version of the ACTFL proficiency guidelines has broadened beyond genre to focus on author purpose, text type, and specific reading, writing, and listening tasks, thus incorporating more kinds of literacies (e.g. Clifford and Cox 2013; Luecht 2003).

The Bloom committee’s work on sequencing performance assessment provided an early reference for identifying factors that learners employ to manage (negotiate) situations. It was not until the 1980s, however, that theoretical paradigms for research and teaching began to redefine the cognitive and affective domains of learning. Critical for FL instruction was the move to “authentic” language in assessing proficiency, and hence to a vision of communicative competence that recognized cultural differences. Increasingly, language literacies were becoming the focus of communication, supplanting the notion of language defined as an artificial standard of accuracy (Dörnyei and Ushioda 2011).

Yet the specter of native-like speech still raises questions about oral proficiency assessment. For instance, is proficiency testing sufficiently sensitive to discursive factors anchored in cultural differences (e.g. Kramsch 1987)? And what is the relation of “oral proficiency” to electronically mediated exchanges such as chats (e.g., Abrams 2003)?

By the 1990s, such concerns challenged definitions of “communicative competence” as a reference point for curricular development. Scholars in literature and cultural studies (the latter an important new wave in FL department scholarship commencing in the 1980s) would still point to the poverty of any model of language that does not reference more sophisticated performances of textuality in various genres, or the “reading” of other cultural artifacts. From their point of view, the cognitive domain related to language remained impoverished, no matter what teachers of FL asserted about learning language as learning culture. The “culture capsules” inherited from the four-skills and ALM textbook generation did not introduce content of any sophistication to engage learners’ points of view, even at a moment when students were increasingly studying abroad and gaining ongoing media access to foreign venues on the internet.

Two external influences in the 1980s, globalization trends in transportation and communication, had also begun to contribute significantly to curricular change. Relatively inexpensive fuel, airline deregulation, and more advanced jet engine design all made international travel more readily affordable for students. These factors led to a surge in study abroad, international tourism, and business travel that increased public interest in communicative approaches to language learning.

7 Global Language Studies?

By the 1990s, the internet introduced a radical change in communication worldwide. With the advent of increasingly widespread public internet access, textual production and dissemination (text in the sense of multi-media) began to explode, and the resulting media ecology destabilized older definitions of authorship, authenticity, and reliable narrators. In the age of Google and Wikipedia, declarative knowledge, now readily accessible, became less relevant than procedural competences, thus creating a generation of students receptive to instruction that uses these media. Increasingly, the widely varying implications of media texts depended on their sources and their discursive as well as visual and acoustic styles. Multiliteracy became an online opportunity.

Classrooms gradually became fully networked, as well, allowing real-time access to a new range of authentic materials that facilitated study of contemporary culture. At the same time these options also presented problems in reading and interpretation (and their assessment) that FL research on teaching and assessment had not addressed. Oral proficiency testing, a validated measure of oral performance, had limitations in other domains.

In retrospect, the communicative competence and proficiency movements of the 1970s and 80s pointed the way toward a reframing of what FL teaching and learning needed to account for to retain its significance as an area of study and research alongside literary/cultural studies and linguistics in the “FL department.” And these tenuous indicators of progress were again put under pressure following the collapse of the USSR and the attendant political changes in Eastern and Western Europe at the end of the Cold War in the 1990s, concomitantly with the rising costs of postsecondary education and demographic shifts in student enrollments in FL study.

These changes affected shifting institutional infrastructures. Global competitiveness introduced new, pragmatic objectives to FL study. Formerly less commonly taught languages (LCTLs) gained status and students within the university. Generous financial support from the Japanese and, more recently, the Chinese government introduced Japanese and Mandarin teachers into US curricula and created a market for learning materials. At the same time, with increasing numbers of Spanish speakers within the United States, the economic and social value of that language created a surge in student numbers, especially in Southwestern states. The traditional institutional dominance of French and German in high schools and colleges shrank precipitously (see Goldberg and Welles 2001; Goldberg et al. 2004).

Given the reduced need for new French and German instructors at these levels, graduate programs were undergoing significant reductions, the underappreciated segment of the average PhD-producing department now emerged as controlling the purse strings, and the general inability of many departments to identify and assess outcomes over a curriculum put whole departmental entities into question. Formerly independent FL departments were closed, amalgamated into departments of modern languages, or placed within an English or Humanities program. In professional journals, administrators and pedagogues alike proposed that studies in translation or cultural studies could complement or even supplant traditional language programs and the need for FL learning in some institutions.

With the present crisis of institutional mission versus models for FL teaching and learning, the profession has come full circle on its own turmoil. The half-century since the work of Bloom’s committee has left aspects of the taxonomies behind, but FL teachers and scholars have not yet answered its overall challenge: how to describe the learning process in terms of the domains that the learner uses to learn (cognitive, psychomotor, and affective), and in terms of the outcomes of that process, that is, the kinds of tasks that an “educated” learner must be able to perform in order to be assessed as educated (as the proficiency movement outlined for oral proficiency). As I shall address in the conclusion to the present essay, another aspect of Bloom’s taxonomies, the hierarchy of task difficulty that challenges the learning process, has been both lauded and critiqued but not extensively rethought in terms of its possible relevance to a postsecondary FL curriculum.

In response to these challenges, a group of professionals interested in modeling language learning as a more comprehensive engagement with learning in general, and with learning about other cultures in particular, has offered a tool with a reach not unlike Bloom’s taxonomies, but one that models the best current thinking about the domains active in FL learning. The ACTFL Standards (2010 [1996]) provide a model designed to guide curricular development, assessment, and research about teaching and learning across the gamut of FL programs and departments as a whole, not just their lower-division courses, putatively devoted only to language teaching. Unfortunately, while the Standards in theory have a K-16 scope, their curricular implementation has been largely restricted to secondary schools and textbooks.

8 The ACTFL Standards as a Major Step Toward a New Comprehensive Model for Teaching and Learning

A critical proposal designed to model more comprehensive visions of language learning for a new generation of curricular development, the ACTFL Standards strive to integrate the results of theories about language offered by humanists as well as linguists. A not unimportant second goal was to offer a tool to educators at all levels to explain what kinds of learning are associated with “language teaching,” and to set up frameworks for professional rewards, assessment, and research related to these new learning tasks.

This initiative was undertaken by a consortium of professional language organizations working with the American Council on the Teaching of Foreign Languages (ACTFL). The resulting blueprint was laid out in ACTFL’s 1996 publication, now known by its revised title and elaborated descriptions of tasks: Standards for Foreign Language Learning: Preparing for the Twenty-First Century (ACTFL 2010). Since 1996, the Standards’ project has developed a support system for curriculum and professional development that integrates professional organizations and federal agencies with state and district language supervisors in secondary schools (see Phillips and Abbott 2011). As anticipated by project developers, many individual states and professional organizations have modified components of this new ACTFL model as they adopted its framework to guide their own designs and implementations of curricula and teacher training. Various task forces have adapted the Standards’ project’s overall model for learning objectives and instructional tasks to set out frameworks for teaching and learning different languages.

To date, the primary impact of the Standards has been on their federally mandated target audience: FL program developers in elementary and secondary schools, who have used the framework to represent language teaching in their local curricula. June K. Phillips and Marty Abbott’s (2011) report, A Decade of Foreign Language Standards: Impact, Influence, and Future Directions, documents publications and participation efforts for extensive implementation of the Standards’ pedagogical objectives in foreign language instruction K-12, but not in the colleges and universities as the original project also envisioned. K-12 teachers have begun to tag their own practical work with the kinds of labels that can be drawn from the Standards, but, ironically, theoretical presentations by postsecondary authors have dominated public discussion of the Standards’ use and implications for the curriculum in both secondary and postsecondary teaching of foreign languages.Footnote 13

A recent publication documents the responses of over 16,000 college students in elementary and intermediate courses to a written questionnaire, including questions about whether and to what extent FL students share the goals of the Standards’ five Cs—Communication, Culture, Comparisons, Connections, and Communities. Its results suggest that these standards do indeed reflect significant learning objectives for FL learners in colleges and universities, albeit with different emphases among languages and learning levels (Magnan et al. 2012, 2014). That is, the standards do have some claim to representing and modeling the FL teaching and learning domains in postsecondary institutions, even if they have not been implemented overtly in their curricula.

Questionnaire respondents were not, however, asked about specific applications of the standards either in their college FL classes or in classes taken before college.Footnote 14 Consequently, the results do not document statistically the extent to which those students’ expressed learning goals at the college level are attributable to explicit instruction in which the standards played a decisive role (Magnan et al. 2012). Given the absence of comprehensive organizational implementation of the Standards at the postsecondary level, information about the degree and nature of their pedagogical presence in university FL programs remains largely anecdotal or inferred from syllabi and course descriptions at individual institutions. This seems true even for K-12 methods courses for FL teachers.Footnote 15

That overt teaching of the Standards remains a negligible factor in postsecondary curricula is not surprising, given the degree to which current research and theoretical models for learning in the field of applied linguistics have been ignored in those contexts—sometimes even by FL methods instructors. The large study’s questionnaire comparison of the goals and expectations of learners enrolled in both commonly and less commonly taught languages revealed that most of the Standards reflected their personal goals in FL study, but that those goals and expectations “did not completely align” with those of foreign language educators (Lafford 2014, p. v). The underlying reasons for such discrepancies lie to some degree in the division between teaching and research specializations that has become reified since the 1960s, as noted above. Yet several other curricular and pedagogical legacies of the last half-century persist even when their origins have been discredited or forgotten, and as a result they probably reinforce resistance to the paradigm the Standards represent.

First, for reasons discussed in foregoing pages, the FL profession has entrenched the notion of a wide gap between teaching language acquisition at elementary and intermediate levels and teaching the literacies that characterize upper-division work—continuity is rarely assumed between these levels. Consequently, the weave between language learning, learning processes, and content represented in task descriptions easily goes unnoticed, because all too many faculty members posit FL study as a sequence leading from language learning to content learning, rather than as a set of progressively more difficult negotiations among aspects of language managed by learners in the cognitive, affective, and psychomotor domains. The Standards stress that language and other contents, set in particular contexts that require active negotiation, are interrelated from the outset of instruction. They rest on the tenet that acquiring a foreign language requires a broader kind of engagement between learner, discourse contexts, and language than researchers and instructors trained in prior research and assessments (focused so often on the formalisms and normativity of language) are wont to notice. The Standards narrate these interactions as subordinate standards that project cognitive, affective, and psychomotor demands into various tasks. While fulfilling these tasks, learners are encouraged to focus on managing the sociocultural demands that determine linguistic usage. The staging and sequencing of task difficulties enables students to study the resources of an L2 culture (performing communication, making connections and comparisons, joining communities, and learning about the culture of a target language).

The second, related issue working against the adoption of the Standards’ model for thinking about teaching and learning on the postsecondary level may be academic freedom grounded in research productivity. Teachers, not learners, are presumed to be in control of the classroom, even if what is taught may be unlearnable in terms of common notions of cognitive development (Halford 1978). And when those teachers are scholars, the materials taught take center stage rather than the learners. Often they turn the classroom into a showplace for a particular theory (e.g., gender or ethnic identity politics) with little regard for students’ possible inability to take in materials presented according to a program rather than a learning sequence.

What these specialists are not taught how to do is to use a preferred theoretical model to structure teaching or a curriculum—to teach learners how to participate consciously in achieving a particular goal or set of goals by giving them systematic, developmental practice in negotiating the demands that learning on the postsecondary level requires (Bloom’s cognitive, psychomotor, and affective domains, or ACTFL’s framing of culture represented in intersections of the 5 Cs [see Arens, this volume]).

This lack, however, has not emerged as a conscious project needing correction in postsecondary education. To address it would involve a fairly radical shift in current curricular practices. Instead of structuring a learning sequence around the material to be learned, the dominant pedagogy for designing today’s content courses, a more conscious pedagogy would construct that sequence around the growing capabilities of the learner. Instead of producing adult participants who tend cultural legacies as scholars within favored theoretical grids, that pedagogical paradigm would focus on offering practice in content-based situation management, which, over a sequence of practices, would produce assessable outcomes—a strategy that can articulate language learning into more general frameworks in U.S. colleges and universities (Swaffar 1981).

Some faculty members might perceive that shift as an invasion of their intellectual freedom. The postsecondary professorate has traditionally been privileged to decide independently what and how to teach. Arguably, however, when professors dismiss other parts of the curriculum as “not my specialty,” they also abdicate responsibility for choosing overriding frameworks about the domains to be learned in their fields.

Without such shared frameworks for a FL program, especially a program in a large, diverse department, systematic staging of entry into those domains in ways that accommodate learners and known learning strategies becomes virtually impossible. With its legacy of specialization in separate fields of inquiry (“original research”) that has characterized US higher education for decades, the expertise acquired by specialists brings with it prestige but also a degree of insularity that today threatens the status of FL programs in postsecondary institutions (Kramsch 1992, 1995; Swaffar and Arens 2005).

I have presented the pedagogical implications of the Bloom committee’s taxonomies and the Standards project as attempts to outline coherent models for learning in ways that inform a multiliteracy curriculum, but it is critical to remember that they are very different documents. The core of Bloom’s taxonomies in all three dimensions is a map of the available strategies that can facilitate or impede learning in different frameworks (domains), or what we might call the learner’s modes of learning or learning styles when confronting materials (cognitive, affective, psychomotor). That map acknowledges degrees of difficulty in the structure of these strategies within its domain. The Standards’ central metaphor is a map on which a learner is to be situated, within the context of curriculum development and the staging of language acquisition: a diagram of interlocking rings, one for each domain of knowledge and pragmatic usage associated with language and social-semiotic expression within a cultural community.

What is not generally acknowledged is how these two models have been presented: in their conventional use, they are shown as trying to tie their respective domains to task hierarchies—to series of tasks reflecting increasingly complex negotiations of a learner with a body of information, forms of expression, and social roles that inhere in a field of knowledge, using strategies (culturally and cognitively) available to targeted learners. In other words, neither model has been appreciably applied as a framework for staging the acquisition, articulation, and assessment of new knowledge students glean from working with FL materials.Footnote 16

Bloom’s Taxonomy in the cognitive domain is taken all too often as a representation of learning strategies. Yet those strategies are based on a learner’s task sequence leading from comprehension through production to critical thinking. To be sure, that sequence is by no means a one-way street. Learners frequently circle back through various stages, reiterating or reconsidering original reactions and assessments and augmenting their implications as they learn these strategies that are central to understanding and communication in the West.Footnote 17 And in Western culture the amalgam of taxonomies often functions most overtly in the cognitive domain.

Western sociolinguistic models tend to privilege the patterns of thought that characterize an adult, independent learner in a particular society (and his or her developmental stages). One should not take this process as cognitively normative, however, because each discipline and its privileged cognitive and expressive norms are issues of context and history within a culture. And so the Bloom committee specified that labeling must come prior to working with patterns in a system privileging formal logic (usually, Western formal logic), and that original syntheses are the most difficult patterns in logic to teach and to learn.

In a different vein, the Standards project maps the domains of knowledge folded together in language learning, and then posits ever more complex negotiations that a learner must engage in to perform an identity within those domains of knowledge in the target culture. Here again, the model points to how learning can be tacitly staged as a task hierarchy, moving from simpler to more complex negotiations, defined in terms of the expressions it prioritizes. Ideally, those expressions are integrated with a knowledge community, with the culturally specific forms in which such a community stores its knowledge, and with the ability to critique those forms, engaging work with hypotheticals and counterfactuals as well as description.

What such comprehensive models suggest in staging task hierarchies (no matter how constructed), then, is that the institutional and content divides that have predominated in FL departments over the period surveyed here cannot stand—teaching and learning need to be modeled with attention to how learners can best be given a map to learning particular contents, what outcomes are desired as critical strategic tools for integration into various communities of expression and knowledge, and how they can be assessed as part of a developmental series.

The proficiency movement offered a miniature of that requirement, focusing on oral exchanges with relatively little reference to more than hypothetical sociocultural contents (e.g., conversations, asking and answering questions, managing discourse in social contexts). Viewed together, the Bloom committee’s work and the Standards project shared an effort to ground curricular reform in a more comprehensive model of learning, realized by specifying task types and assessed (respectively) as mental processing challenges or sociocultural negotiations understood and expressed in language or language-based behaviors. The difficulties posed by language and content could then be addressed by means of tasks appropriate to students’ learning and motivational levels.

By implication, both proposals represent teaching premises that model learning sequences in the form of assessable outcomes, staged developmentally. They also represent a challenge to the curricular premises of most departments of foreign languages, because FL departments divided between lower and upper divisions and across specialty content areas will not be able to capture the developmental stages in multiliteracy acquisition. The literacy associated with FL teaching, in all three of Bloom’s domains or all five of ACTFL’s rings, encompasses a substantial area of cultural content (not just language use) and treats learning as a student-centered process.

The leading professional organizations representing postsecondary fields and their specialists—the MLA, the AAUSC, and ACTFL—have all introduced a number of efforts to change entrenched attitudes of FL department faculties about both curricular divides and the FL profession’s intellectual roles in a changing university. In so doing, they have offered another comprehensive model for language learning and teaching, one that underscores the need to unify departments institutionally as both content- and language-driven. The MLA’s recent statements underscore that the fate of the profession lies in the ability to overcome traditional divides between content areas in programs and departments, and to focus on the learners as well as on content areas to be covered.

The MLA ad hoc Committee on Foreign Languages (2007, 2008) issued its first recommendation for FL programs to establish “clear standards of achievement for undergraduate majors in speaking, reading, writing, and comprehension and to develop the programming necessary to meet these standards.” The report’s emphasis on the teaching of culture led to subsequent conference sessions and written responses whose scope ranged from assessing its implications for given languages (Costabile-Heming 2011) to criticism of the report for failing to define key terms such as “knowledge base” (Bernhardt 2010), the relation of literature to cultural studies (see Forum 2007, 2008), and literacy (Arens 2012).

The MLA’s restructured 2011 convention program (“The Academy in Hard Times” 2011), while addressing the wider impact of the 2008 recession and its aftermath, did so in conjunction with further addressing the problems alluded to in the MLA Report (MLA ad hoc Committee on Foreign Languages 2007, 2008). Overall, MLA convention planning has expanded to include greater numbers of sections dedicated to FL instruction at all levels (including the teaching of literature), as well as sessions structured by the AAUSC, the professional organization devoted to issues in language coordination. More recently, MLA presidents who are noted scholars in traditional FL study have also joined in voicing concerns and proposing solutions to perceived disparities between lower- and upper-division learning objectives. Such moves signal the need for a comprehensive model for FL teaching and learning that can begin by mapping traditional areas of scholarship as literacies—as cultural knowledge of content and practices that empower individuals as part of groups.

But to change the institutional practices underlying long-held convictions about how to teach a foreign language, assessed against a hypothetical, linguistically defined native-language literacy, demands a united effort by all faculty members in a department to modify attitudes about foreign language learning, its relation to learning in general, and the areas of learning and scholarship that have existed for a half-century in U.S. FL departments. As Heidi Byrnes has noted, the MLA’s Advisory Committee for Foreign Languages and Literatures, inaugurated in 1990, was part of its effort to change professional attitudes, to “transcend the powerful native—non-native distinction” in the field and “examine the relation between foreign language study and native language literacy” (Byrnes 1998a, p. 3).

To change such attitudes and to develop programs based on new insights about foreign language study and native language literacy also involves changing the current culture of language departments in North America. And as the field is increasingly aware, the location and mission of individual departments have a tremendous impact on the foreign language literacy they choose to develop. One size will not fit all (Eigler and Kathöfer 2009; Hock 2009). Nor will any one model of literacy. But the messages of Bloom’s Taxonomies and the Standards must be taken seriously: FL departments exist to teach actual learners consistent, complex, and integrative language use for expressing meaning, and to sequence learning in terms of learner development, not just to transmit traditional approaches to favored scholarly materials held apart from each other.

9 Acknowledging Problems and Fixing Them

Such a shift involves major commitments to changing practices and developing viable curricula in departments and programs at all levels. It will be critical to have senior faculty in those universities that train PhDs as scholars and teachers recognize and alter the drastically self-marginalizing nature of a two-tiered language program (James 1989; MLA ad hoc Committee on Foreign Languages 2007, 2008).

To be sure, individual departments must undertake realistic steps that best suit their own academic environment (Bernhardt 2010; Hock 2009). I urge only that a department’s faculty consider adopting a model for learning that suits its own objectives and that allows for the sequencing of a task hierarchy establishing what, in a particular framing, is learnable at what stages in student FL acquisition—and how the complexity of learning interlocked content and performance literacies can be acknowledged and fostered. Like Allen and Arens in their chapters in this volume, I see the imperative for change lying with those institutions with graduate programs: research and comprehensive universities that create coherent, media-based, adult-level programs in language and culture.

Such a change will require faculty (re)education. Reframing the curriculum of a FL department in a research or comprehensive university involves familiarizing its faculty with lower- and upper-division pedagogies and goals, and with their realization in assessment practices (Byrnes and Kord 2002). The point of making this effort is to ensure that, regardless of the type of program developed, continuity of content, expectations, and pedagogies flows from lower-division to upper-division courses—and to stress that the two ends of the program must both adapt to create common and assessable learning outcomes (Byrnes et al. 2010).

With that continuity, discussions about “bridge courses” between “lower” and “upper” levels become superfluous. For students taking FL courses only to fill a requirement, this shift of approach will weld even those courses into the kinds of literacy—content and task management—expected of them in other college courses. For students continuing on to advanced courses, the bridge to using language for higher-order thinking in the sense of Bloom’s Taxonomy in the cognitive domain will already have been built. The kinds of multidimensional negotiations modeled in the Standards will become commonplace, because lower-division courses will already have introduced such active learning.

In most of today’s FL sequences, language acquisition is staged before literacy, and so the learning gap between lower and upper levels is also a question of content and expectations about learning—cognitive and cultural readiness, not just language readiness. Beginning instruction focuses on everyday speech used in generic contexts and on reading for factual information about different topics. At advanced levels, on the other hand, learners are asked to read or view culturally unfamiliar texts to identify their points of view, implications, and contributions to subsequent events. Some experts maintain that, for students of Western languages, only about thirty percent of FL reading comprehension is accounted for by a FL’s grammar and vocabulary, with roughly another twenty percent attributable to background knowledge (Bernhardt 2005). The case made in this essay is that programs anchored solely in the fundamentals of foreign language competencies fail to encourage students to use the remaining, still unidentified fifty percent of what can be taught and learned from texts, broadly defined in multiple media.

A department’s claims to approach language learning as the learning of culture must address this disparity in its curriculum. The heuristics for making this pedagogical and curricular change can be found in comprehensive models for staging learning, such as Bloom’s Taxonomy and the Standards, because both point out the need for learners to integrate language and knowledge acquisition through structural variation and recursions that sequence these challenges. In this sense, both documents support the claims the FL field must make to survive in today’s postsecondary curriculum. They are roadmaps for preventing the self-marginalization of foreign languages in the academy because they reference ways to teach multiple literacies and language acquisition simultaneously.

To use these roadmaps, departments must first discover what they themselves do, from A to Z. The initial work in introducing curricular change involves careful self-assessment of the program: identifying what the department now does and what it values as learning outcomes. That assessment necessitates that all of a department’s professors and instructors visit courses at all levels. Their goal will be to establish the pedagogies and outcomes that characterize the program as a coherent whole, so that the department may choose a model highlighting the kind of literacy it values most. That process involves talking constructively with each other and with their students about what language and what types of literacy different courses achieve.

At the same time, this process cannot involve only what is taught. To draw a comprehensive picture of what is possible, students’ execution of assignments, quizzes, and departmental exams must also be examined: read as documents about what features cohere or build a stage in students’ developing language literacy. Such data provide a picture of the pedagogical practices, student assessments, and realized expectations a department has at the present time. Only then can its faculty members undertake the second step: define or redefine the literacy they want their students to achieve and identify features at all levels of their program that offer consistent approaches to those expectations, both in terms of language competency and in terms of content literacies—framed as identifying what learners are asked to do at each level.

Visitors to classes would note, for example, in which classes and in what ways student comprehension of language is linked to synthesizing or analyzing information, what activities are undertaken, and what learning results (Hock 2007). Are students’ assigned essays assessed for establishing a point of view as well as for idea development, and how? Do listening comprehension tasks ask learners to identify not only the facts of an exchange but also its sociolinguistic implications as speech acts (why a request is polite or brusque, for instance)?

The case made here is not for a particular list of questions, but for an assessment of program learning to be conducted with a view to establishing what kind of learning a particular program fosters. That can be determined if reviewers collect data such as:

  • the amount of time spent in specific classroom learning activities,

  • the type of foreign language content dealt with,

  • to what extent students engage in tasks that encourage thinking about subject matter,

  • to what extent the subject matter relates to students’ background and interests (their majors, their extracurricular activities or work),

  • to what extent content and tasks are recycled across levels to reinforce learning and ensure success at all levels, and

  • to what extent the reward system balances students’ literacy acquisition with whatever surface language accuracy a faculty views as characteristic of a learner’s stages toward achieving maximally effective comprehension and communication of ideas and intentions.

The resulting compilation of current practices leads to a given faculty figuring out what it wants to continue and discontinue—to discard the unrealistic and discouraging for the plausible and rewarding. It may also make the case for revising a departmental curriculum to serve more adequately the needs of its institution, its student body, and the department’s existing resources.

Proposals for curricular change in FL programs include incorporating English-language texts to facilitate reading comprehension in programs that draw on a foreign language’s literary (Bernhardt and Berman 1999), cultural, or historical contributions (Kramsch 1992, 1995). The Earlham College initiative of having faculty in other disciplines use foreign language texts in courses across the curriculum (Jurasek 1988) and the University of Rhode Island’s program of German for engineers (Grandin 1992) continue to serve as models for departmental enterprises anchored in content-based learning. In an intensive three-year process begun in 1997, Georgetown’s German Department collaborated to design “a curriculum that is content-oriented from the beginning of instruction and explicitly fosters learners’ language acquisition until the end of the four-year undergraduate sequence.”Footnote 18 In other words, the Georgetown program does not differentiate between so-called “language” courses and “content” courses and has integrated compatible learning strategies at all levels.

The vital component for such a program’s development and subsequent success lies in its coherent pedagogy—like the Emory faculty described by Maxim in this volume, the Georgetown faculty re-approached their various contents and favored learning outcomes and restaged them as task hierarchies calibrated to institutionally appropriate learning outcomes. In such programs, the commonly heard complaint of upper-division teachers that they “have no time to teach grammar” must finally bow to the fundamental insight of systemic-functional linguistics that language function and language messages are inseparable, covalent, and must be acknowledged concurrently (Halliday and Matthiessen 2004).

Nelson and Kern (2012) view the challenges of multiliteracies as postlinguistic conditions due to the prevalence of multimodalities (pp. 48–49) that are moving language “… from its former unchallenged role as the medium of communication, to the role of one medium… albeit more rapidly in some areas than other” (Kress and van Leeuwen 2006, p. 34 [italics in original]), as Warner, Willis-Allen, and Arens illustrate in this volume. Their chapters provide pedagogies in line with Nelson and Kern’s assertion that, because language is increasingly technologically mediated, it is a semiotically dynamic resource. In that now-globalized framework, no successful program can teach language without teaching a literacy that encourages learners “to combine with other semiotic resources to act in the world” (Nelson and Kern 2012, p. 49).

University administrations appreciate such integrative efforts and support them because they serve students as much as they serve scholars. Deans value concrete proposals about what they can do to expand a FL curriculum and its outreach to their institutions, whether in extracurricular activities, pre- and post-study abroad follow-up studies, or assessment (Roche 2011). They support such programs in spite of budgetary hard times precisely because departments that serve student audiences effectively are the lifeblood of an institution, and faculty who show students that “FL learning” adds value to their lives are teaching a new outreach: literacy for life.

This chapter has attempted to sketch in thumbnail the historical course of FL teaching since WW II in order to point to precisely such integrative solutions. “Skills” and methods tested only in assessments that separate outcomes by modality rather than treating them as integrative processes fail to address psycholinguistic realities as they are understood today. Such assumptions from the past 70 years are currently being questioned in both public and educational venues. Indeed, changes in student demography and in institutional structures have resulted in fundamentally different learning environments compared to those of as few as 20 or 30 years ago, and hence in different and increasing demands for accountability. In this same vein, the escalating costs of a college education give rise to questions about the usefulness of foreign language learning in an increasingly global, technological environment dominated by the English language (see Levine).

I have argued that, in consequence, the future of foreign language instruction in North America involves taking full account of our past, and moving from distinctions between “language teaching” and “scholarship” to a more comprehensive vision of teaching FL literacies of culture and content at all levels of FL department curricula. That goal, articulated as appropriate for individual institutional settings, must share the aim of guiding learners into knowledge acquisition through work with multiliteracies, defined as the abilities to read not only language, but also how language interacts with medial contexts, its outcomes recognized as relevant to those contexts and to the humanities in their institutional curricula.Footnote 19