This Is Not a Test

“It is far from clear whether concerned parents and scorned instructors are enough to stop the march of big data on education.” “The reality is that it’s going to be done,” says Eva Baker, director of the Center for the Study of Evaluation at the University of California, Los Angeles. “It’s not going to be a little part. It’s going to be a big part. And it’s going to be put in place partly because it’s going to be less expensive than doing professional development” (Fletcher 2013, para. 19).

Some 13 years ago, John Seely Brown and Paul Duguid (2000) authored The Social Life of Information. Among the many positive accolades reproduced in the book’s opening pages are those of William J. Mitchell: “neither cheerleaders nor debunkers, these knowledgeable and reflective Silicon Valley insiders provide a much-needed critical perspective on the buzzwords, myths, and conventional wisdom of the digital revolution.” The two argue convincingly that “our future world is evolving from the complex interaction of powerful new technology with resistant existing structures and practices.” The authors, in fact, decry what they describe as the “6-D vision” 1 of their Silicon Valley colleagues, an “overreliance on information” that they argue “is not necessarily twice as good as the ordinary 3-D kind. Indeed, in many cases it is not as good, relying as it does on a one-dimensional, infocentric view” (Brown and Duguid 2000, p. 21). Unfortunately, time has proven that the majority of “Silicon Valley insiders” have chosen to ignore Brown and Duguid’s admonition and to focus almost exclusively on information. In fact, the emergence of “big data,” that is, data with a capital “D,” has resulted in an even more tightly focused “one-dimensional, infocentric view,” one might even say a “7-D” vision that completely ignores “existing structures and practices.”

In Big Data at Work, Thomas Davenport (2014) notes that “Big data is undeniably big, but it’s also a bit misnamed”; in fact, “it’s a catchall term for data that doesn’t fit the usual containers. Big data refers to data that is too big to fit on a single server, too unstructured to fit into a row-and-column database, or too continuously flowing to fit into a static data warehouse”; moreover, and more importantly, “while its size receives all the attention, the most difficult aspect of big data really involves its lack of structure (p. 15).” Given this lack of structure, if big data is information, it is so only in the very broadest sense of the term. But how big is “big data”? In Big Data: A Revolution That Will Transform How We Live, Work, and Think, Mayer-Schönberger and Cukier (2013) state: “when the Sloan Digital Sky Survey began in 2000, its telescope in New Mexico collected more data in its first few weeks than had been amassed in the entire history of astronomy. By 2010 the survey’s archive teemed with a whopping 140 terabytes of information. But a successor, the Large Synoptic Survey Telescope in Chile, due to come on stream in 2016, will acquire that quantity of data every five days (p. 19).” 2 This deluge of data inspired Anderson (2008) to argue that we now reside in the Petabyte Age: “the Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud”; moreover, “at the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later (Anderson 2008, para. 4).” Even though attributing an inherent meaning to data outside of its cultural context is something Brown and Duguid (2000) argue explicitly and compellingly against, there is something alluring about the promise of “dimensionally agnostic statistics (Anderson 2008, para. 4),” a siren call those with a proclivity for data simply cannot resist.

However, before it can be put to work as “data,” culturally embedded information must first be “digitized” (converted from analog to digital) and then subjected to “datafication”: “taking information about all things under the sun—including ones we never used to think of as information at all, such as a person’s location, the vibrations of an engine, or the stress on a bridge—and transforming it into a data format to make it quantified. This allows us to use the information in new ways, such as in predictive analysis” (Mayer-Schönberger and Cukier 2013, pp. 35–36). Our immediate tendency is to view these processes of digitization and datafication as something new, but this could not be further from the truth. As Caleb Crain (2015) notes in “Fighting for Literature in an Age of Algorithms”: “Counting has changed the world before. Consider Europe and America in the two or three centuries before 1750, when society had a structure that was still half-feudal… And then, between 1750 and 1850, everything changed. Lengths and weights became standardized; time-keeping mechanisms were improved and clocks became widely distributed; bureaucracies took charge of record-keeping (para. 4).”

But, just as those first steps toward the quantification of everyday life resulted in a period of tremendous adjustment and struggle, so too have our own steps toward the Petabyte Age: “we fight about whether to replace the personal judgment of teachers with standardized curricula and frequent testing, whether it’s ethical for employers to track the keystrokes and body movements of workers, 3 whether we’re comfortable with retailers having the intimate knowledge of ourselves that they’re able to piece together from our purchasing histories, and whether we trust our governments with the power to monitor our phone calls and emails” (Crain 2015, para. 4). The problem, Crain continues, is that it makes little sense to “count,” unless one presumes an underlying interchangeability: “there’s little point in counting, after all, if you can’t take the mental shortcut of assuming that the aspects of a thing that can’t be counted don’t matter” (para. 13). This, Crain contends, “is the basic trade-off at the heart of economics, which treats human desire as more or less fungible, even though most of us experience desire as particular and various (para. 13, emphasis added).” It is “in exchange for this procrustean simplification,” Crain suggests, that “economics acquires a powerful predictive capacity (para. 13).” But this is far from the only “procrustean simplification” undergirding the most recent incarnation of counting.

The very idea of “counting,” like all ideas, began as a metaphor. 4 This, in and of itself, is not a bad thing. Metaphors, as Barrett (2011) notes, are “an essential part of science” that “help us extend the boundaries of knowledge (p. 114).” Problems arise, however, “when the metaphors employed are taken too literally” (p. 115). When the counting metaphor is taken too literally, for example, an underlying interchangeability must be presumed, along with the belief that “the aspects of a thing that can’t be counted don’t matter (Crain 2015, para. 13, emphasis added).” The reality is that “counting” is a figurative practice, not a literal one, and what we must assume does not matter are often the essential and defining characteristics of that which we are counting:

The need for enlarging language beyond the level of the literal invades even mathematics. This need is encountered by anyone who seeks the meaning of (say) the number two used to ‘count’ two concrete individuals. Socrates was the first to note the oddity in the fact that, though he and Cebes are each one, yet if they are juxtaposed, then somehow together they become two (Phaedo 96d). In what sense are they two?…when we speak of two concrete individuals, ‘two’ is not given a literal but a figurative sense. In order to conclude that Socrates and Cebes together form a (quantitative) group of two, the measurer must ignore the Socratic character of Socrates and the Cebean nature of Cebes…. Thus, the concrete ‘two’ refers us to unlike component unities. We may call this kind of unit pre-mathematical, for it cannot be used in counting objects but only for referring to objects before abstraction from their unique being has been made (Ballard 1978, pp. 186–190; cited in Fisher 1994, p. 358, emphasis added).

In reality, Fisher (1994) continues, “for the purposes of designing a measurement system, we ‘act as if,’ ‘entertain the possibility that,’ ‘suspend our disbelief in the fiction that’ what we are counting is some kind of ‘one’ thing (p. 358).” For Fisher, “the problem with virtually all educational, psychological, and social measurement is that the metaphorical fiction entertained in the counting is simply assumed true (Ibid.).”

Yet another fiction advocates of counting must “suspend their disbelief in” is the “very powerful metaphor that helped shape the fields of psychology, cognitive science, and artificial intelligence for many years… the way in which many neuro-, cognitive, and comparative psychologists liken the brain to a computer” (Barrett 2011, p. 114). Over time, Barrett notes, there have been numerous metaphors for the brain, so many, in fact, that it requires an act of incredible hubris to expect that “we’ve finally hit on the correct one, as opposed to the one that just reflects something about the times in which we live (p. 115).” 5 The brain-as-computer metaphor is often (mistakenly, Barrett argues) attributed to Alan Turing, creator of the “Turing machine.” 6 In fact, Barrett contends, the metaphor arose and became entrenched in popular consciousness when psychologists who were abandoning the behaviorist brain-as-black-box metaphor “began to cotton on to the idea that understanding brains and intelligence could be achieved not only via the analogy of the computer, but also by the actual use of computers to model and mimic the activities of the brain (p. 116).” Further impetus to “suspend” our disbelief in the fiction that the brain is a computer came with the US Army’s creation of ENIAC (Electronic Numerical Integrator and Computer):

In the late 1940s, John von Neumann was one of several people charged with the task of making the ENIAC more convenient and useful, and it was he who designed the architecture used by all modern computers today: a central processing unit, a main memory, a set of peripherals (like keyboard and monitor), and a second memory that could be used to store information externally, like hard drives, CDs, and memory sticks. It is, therefore, to von Neumann that we owe the ‘brain-as-computer’ metaphor, as it was he who helped create self-contained digital computers. In addition, it was he who specifically compared his computer architecture to that of the brain, suggesting that the central control (CPU) of his computer corresponded to the “associative” neurons of the human nervous system, and that the input and output devices were the equivalents of sensory and motor neurons, respectively. (Barrett 2011, p. 121, emphasis added)

Once the “taken-for-granted” concepts of “counting” and “brain-as-computer” were firmly entrenched, conditions were ripe for the emergence of big data, requiring only technological advances in computing power and information storage—a process that accelerated throughout the 1980s and 1990s. 7 Once appended to Brown and Duguid’s (2000) 6-Ds of “demassification, decentralization, denationalization, despacialization, disintermediation, [and] disaggregation (p. 22),” the seventh “D” of big Data contributed significantly to the creation of two oft-heard refrains: “education is broken” and a “tsunami of change” is imminent. 8 In response to these refrains, big business, further motivated by the hi-tech mantra of “disruption”—yet another “D” “to represent forces that, unleashed by information technology, will break society down into its fundamental constituents, principally individuals and information” (Brown and Duguid 2000, p. 22)—has invested billions 9 in the belief that

just as the Internet has fostered decentralization and disaggregation in a variety of traditional markets, a similar process will take place in the education market… The ‘core’ services and products provided by the university [and schools] will be disaggregated from the peripheral ones, that a variety of differentiated services and products will emerge in order to cater to different market segments, and that this process of unbundling will enable highly flexible forms of mass customization. The viability of this paradigm is dependent on the extent to which education can be divided up into modular, scalable units, which remains an open question. (Werry 2002)

Despite such cautions as: “this latest wave of education technology is too new and eclectic to have proved its worth definitively. It is still mostly a matter of patching together different bits” (The Economist 2013a), advocates of the 8-Ds continue their efforts to reform education. As Michelle Rodino (2002) notes: “as the debates unfolded, however, it became apparent that technology was not driving the changes being imagined for this brave new world. Rather, technology provided an air of legitimacy and urgency to what was really a campaign to expand markets for the computer and higher education industries, use public funds to subsidize such expansion, and reorganize academic labor.” The Economist (2013b) notes: “Bill Gates calls this ‘a special moment’ for education. Private-sector money is piling in. Rupert Murdoch, hardly a rose-tinted-specs technophile, is allowing Amplify, his digital education business, to run up losses of around $180m this year in hope of dominating an edtech market that News Corporation reckons will soon be worth $44 billion in America alone.”

But this speculative vision is not so much a brave new world as a staid old one. Big data, or more precisely, its progeny, data analytics and adaptive learning, are the latest incarnation of a fantasy that coalesced at the turn of the twentieth century: the “teaching machine.” It was the heyday of Taylor’s “scientific management” and Thorndike’s “principles of learning.” First imagined and operationalized by Sidney Pressey, the teaching machine was championed by B.F. Skinner in the 1950s, but waned in the 1960s, only to be revitalized in the 1970s as computer-assisted instruction (CAI). In “Rebirth of the Teaching Machine through the Seduction of Data Analytics: This Time It’s Personal,” Phillip McRae (2013) notes how, “today, yet again, a new generation of technology platforms promise to deliver ‘personalized learning’ for each and every student (para. 1).” One example of this current behaviorist vision is DreamBox’s intelligent adaptive learning (IAL) system: “the level of sophistication of today’s IAL systems is far superior to similar technologies of the past” (Lemke 2013, p. 13, cited in McRae 2013). But what escapes, or is simply ignored by, the advocates of these new “counting” systems is that they are based on the same old notion of “the isolated individual, in front of a technology platform, being delivered concrete and sequential content for mastery” (McRae 2013, para. 13). The problem, as McRae notes, is that

adaptive learning systems (the new teaching machines) do not build more resilient, creative, entrepreneurial or empathetic citizens through their individualized, linear and mechanical software algorithms. Nor do they balance the desire for greater choice, in all its manifest forms, with the equity needed for a society to flourish. Computer adaptive learning systems are reductionist and primarily attend to those things that can be easily digitized and tested (math, science and reading). They fail to recognize that high quality learning environments are deeply relational, humanistic, creative, socially constructed, active and inquiry-oriented. (para. 14, emphasis added)

Like its predecessors, the latest, big-data-driven iteration of the “teaching machine” takes its “counting” too literally, assuming an underlying interchangeability based on the belief that “the aspects of a thing that can’t be counted don’t matter,” and in so doing, “treats human desire as more or less fungible, even though most of us experience desire as particular and various” (Crain 2015, para. 13, emphasis added). It is for this very reason, a disregard for human desire, that “teaching machines” of whatever stripe will inevitably fail.

The Immanent Flaw of Teaching Machines

Students do not learn in a vacuum. They bring much more to the classroom than pens, pencils and mobile devices. They bring their hopes, fears, beliefs, experiences, memories, uncertainties, doubts, goals, dreams and desires. Such factors constantly impinge upon today’s learning environments, but in the haste to provide learners with new knowledge and skills, instructors often pay little heed to these affective impediments to learning. Yet it is often such factors, rather than carefully planned teaching and learning methodologies, lesson plans, assessment models and/or learning analytics, that determine the success or failure of today’s learners. Unfortunately, most educators are taught to steer clear of the affective realm (values, motivations, attitudes, stereotypes, beliefs, feelings, desires), because it is far too nebulous, amorphous and emotive a foundation for professional practice. As a result, rational-cognitive conceptions of knowledge and cumulative-linear models of learning have come to dominate and are in a seemingly constant state of revision, a race to further the untrammelled transmission of educational content. Consequently, much of education remains in the thrall of “scientism.” 10 Increasingly pressured to provide learners with new knowledge, skills, aptitudes and capacities, today’s teachers feel compelled to restrict their practice to the cognitive realm, even though it is so often the affective domain that determines whether learners succeed or fail.

But the learning environment is riddled with affective elements. This is hardly a revolutionary claim: over two decades ago, the work of noted female educators such as bell hooks (1994, 2003) and Jane Gallop (1995, 1997, 2002) revealed the learning environment to be shot through with such elements—the writing of both, for example, draws our attention to the emotional and romantic bonds that fuel and complicate the teacher–learner relation, and in so doing reveals learning to be a far from rational, cognitive and linear process. But it is the insight of a third female educator, Shoshana Felman (1987), that provides the most compelling case against instrumental conceptions of learning, 11 and she provides this through a psychoanalytic lens: “proceeding not through linear progression but through breakthroughs, leaps, discontinuities, regressions, and deferred action, the analytic learning process puts in question the traditional pedagogical belief in intellectual perfectibility, the progressist view of learning as a simple one-way road from ignorance to knowledge” (p. 76). Desire, Felman argues, is interwoven into the very fabric of learning. It can, of course, be ignored, but at a cost. That cost is the ability to identify why some learners fail to succeed, in spite of elaborate teaching and learning methodologies, explicit lesson plans, appropriate assessment models and/or learning analytics. Yet Felman’s promotion of Jacques Lacan’s psychoanalytic insights into learning has had minimal impact on the practice of education.

Bruce Fink (1995) provides an indication of why this may be the case: “a peculiar temporal logic is involved in reading Lacan: you cannot read his writings (in particular the Écrits) unless you already know more or less what he means…; in order to get anything out of his writing, you already have to understand a good deal of what he is talking about (p. 150).” Anthony Wilden (1968), the translator of Lacan’s seminal Rome Discourse, even situates his introduction to Lacan after his translation, noting: “it is almost impossible to write any sort of introduction to Lacan unless the reader has first been introduced to him (p. ix).” Fink further contends that this peculiar temporal logic leaves the committed reader with two choices: “learn about Lacan from someone else—with all the biases that entails—…then try to verify or refute what you have learned by examining his texts”; or “read and reread and reread his work until you can begin to formulate hypotheses of your own, and then reread yet again with those hypotheses in mind, and so on (p. 150).” Both methods, he notes, are not only tedious and time-consuming but also antithetical to “the publish-or-perish economic reality of most academics” and “a certain American pragmatism and independence (p. 150).” Many academics, he contends, argue: “if I cannot put someone’s work to use for me in a relatively short space of time, what is the point?… I need to prove that I am an independent thinker, and thus I must criticize it as soon as I think I have begun to understand it (p. 150).” For Fink, the unfortunate result of such reasoning is a peremptory reading of Lacan, “with a view to critiquing it, short-circuiting the ‘time for comprehending’ and proceeding directly to the ‘moment of concluding’ (p. 150).” 12 Consequently, the typical North American response to Lacan is homologous to Freud’s example of the threefold denial expressed by a man accused of returning a damaged kettle to its owner:

1. If I cannot figure him out myself, then he is not worth thinking about.
2. If he cannot express himself clearly, then it must be muddled thinking.
3. I never thought much of French “theory” anyway.

The kettle owner’s threefold denial, of course, runs:

1. I returned the kettle undamaged.
2. The kettle had a hole in it when I borrowed it.
3. I never borrowed the kettle in the first place.

But contrary to the standard North American response, Fink (1995) suggests that “if an author is worth reading seriously, you have to take for granted at the outset that, as crazy as certain ideas may at first seem, considered in greater detail they may become more convincing, or at least lead you to understand the aporias that gave rise to them (p. 151).” Unfortunately, this “is more credit than most people are willing to give an author, and a love-hate ambivalence gets played out around reading. To assume that it is not as crazy as it sounds is to love the author …, whereas to read it critically comes off as hate (p. 151).” Thus, although many remain convinced that “hate is the condition for a serious reading,” Fink cautions, “if that indeed is the condition, it had better be preceded by a prolonged period in which the reader loves the author and presumes him or her to have knowledge! (p. 151).” It is with an eye to cultivating a greater love for Lacan’s work that I offer what I hope is a more accessible portal to Lacan’s notion of desire and its implications for learning: a personal narrative of everyday life.

The personal narrative that follows is a “first person singular” that aired on the Canadian Broadcasting Corporation’s This Morning. 13 The author of the narrative speaks tellingly of the role desire and fantasy play in the formulation, pursuit and near attainment of a central goal, offering, in plain and humorous language, insightful comments on not only the fantasies she constructed to support her desire and sustain her in the pursuit of her goal, but also the factors that led her to reconsider and ultimately sacrifice her goal when reality clashed with fantasy. This personal narrative serves as the perfect vehicle to introduce and discuss some important yet otherwise abstract insights into the relation between desire and learning.

The narrator of our first person singular recounts how, after being told by her doctor that she risked a heart attack, stroke, or diabetes if she did not lose weight, she suddenly found herself with sufficient incentive to lose 40 pounds in quite short order. The process she then followed is typical of that prescribed by many education programs: from expert knowledge (content), through linear learning (transmission of content), to a predictable outcome (goal). But as her “education” proceeded, and with only an additional 30 pounds to go and her desired goal in sight, the narrator found her motivation waning. She notes:

Good health is a fine reason to lose weight. But after a point, health is not enough. Call me impatient, or needy, or fragile, or vain, but 120 over 70 just doesn’t cut it. No one can see my low blood pressure. No one can see my good health. Only I—and my doctor—can see that I am healthier. The people I pass when I jog, they just see another jogger, plodding her way along the seawall, trying not to trip over the dog droppings along the way (emphasis added).

Troubled by her waning motivation, the narrator further reflects: “why can’t I keep the weight off? Why can’t I just become normal and stay there? It’s because I don’t know what normal is. I can’t long for that flat stomach I had in high school. I never had it. I can’t accept who I am since I’ve never been who I wanted to be.” She continues: “they see someone normal. I’m not normal. I’ve never been normal. I’ve always been fat. When I was 20, I managed to shed about fifty pounds and keep it off. For about a month.” However, she recounts how, “Sometimes, in my imagination, I am thin, tall—maybe even blonde, why not?—with an aquiline nose and high cheekbones. The most popular girl in the school. Brainy, too. When I try to lose weight, that’s who I’m trying to become” (emphasis added). It is not good health, then, that supports the narrator’s desire to lose weight, but the fantasy construct of being someone other than she is: thin, tall, blonde, and so on. The narrator’s elaboration on her fantasy construct is revealing:

When I am thin, I will look fabulous in Size 5 Gap jeans. My little butt will stick out just the right amount. I will wear my shirts tucked in, like in my hairdresser’s fashion magazines, with the top buttons undone just enough to show off the lace on my sexy push-up bra. Maybe I will wear those little T-shirts that expose just a hint of my iron-hard tummy. My jawline will be ice sharp, my cheekbones—way up near my ears—will glow, and my long flaxen hair will glisten as I toss it around oh-so-casually. I will be amazing.

The narrator’s fantasy is startling in both its clarity and detail:

Heads will turn. People will stare as I walk down the street. I won’t be able to jog; I’d attract too much attention. I’ll come home from work at night and change out of my Size 5 Jones New York business suit into a slinky wraparound gown, and I’ll enjoy a glass of wine and some chocolate-dipped strawberries with my loving husband, who won’t be able to keep his hands off me. Jeez, we’ll probably even have sex standing up! And the best thing of all? When I eat an ice cream cone, I can get a little dribble on the end of my nose, and I will be devastatingly cute.

What this charming and entertaining account reveals is that what supports the narrator’s desire to lose weight is not her doctor’s expert knowledge, a regimented method of balancing calories in against calories out, or a goal of good health, but the fantasy of becoming someone she is not. And we learn that, having lost 40 pounds and moved ever closer to realizing her goal, what the narrator wanted to avoid, even at the cost of failing health, was a painful truth she had confronted once before, one that threatened her fantasy:

What I remember, so vividly and so painfully, was looking at myself in the mirror one day, having lost all that weight, and realizing that, after all that caloric deprivation, I was still only five feet tall. My hips still stuck out too much on the sides. My hair was still limp and mousy brown. My knees still knocked. And my nose still turned up too much. I was back at the peanut butter before you could say “body mass index.” And, of course, back came the weight. And again it went, and again it came back.

The narrator goes on to share what she knew only too well: “If I lose those last 30 pounds, I risk finding out the truth” (emphasis added), a truth she finds too painful to accept:

The truth is that I am fortyish, nearsighted, and short. If I got into those Size 5 jeans, I’m sure I wouldn’t be able to walk ten feet. I have a gall bladder scar across my stomach that looks like a tire skid, so the little T-shirts are out. I will never have an ice sharp jawline or high cheekbones. My face is a little apple dumpling and always will be. And my greying hair will never swing because it’s too much hassle to grow it long. As for sex, well, my loving husband is past 50, has a bad back, and isn’t likely to cart me around the house in sexual ecstasy any time soon. And ice cream on my nose? Cute if you’re six, but pretty embarrassing for a grown woman.

Clearly, the narrator is under no illusion about her actual appearance and condition, but what is particularly telling is her conclusion: “Yes, it’s lunacy. But while I am fat, I can think whatever I want and ain’t no one going to tell me I’m wrong” (emphasis added).

What this first person singular makes clear, then, is that the narrator’s desire to lose weight is sustained not by her doctor’s expert advice, the cumulative accomplishments of her dietary and exercise regime, or the measurable outcome of good health, but by the fantasy of being someone she is not. Moreover, knowledge and sound reasoning aside, it is this “insane” fantasy that the narrator chooses to sustain, rather than attaining her logical goal.

This is completely in keeping with Lacan’s account of desire, which holds that a fantasy construct does not disappear once it is successfully interpreted and its function revealed. This is because a certain enjoyment, what Lacan dubs jouissance, remains at play. For instance, the “lunacy” of remaining overweight affords the narrator of the first person singular the pleasure of “thinking whatever I want and ain’t no one going to tell me I’m wrong.”

Yet when learners fail to achieve their goals, we continue to focus on identifying inappropriate teaching methodologies, flawed lesson plans and/or poorly conceived outcomes, rather than the learners themselves and the fantasies that sustain their desires. The argument against such a course is that if education abandons its “scientific” principles and methods (knowledge as observable, measurable, abiding; learning as the transmission of knowledge from expert to novice), it will lose its legitimacy and status. But until we are willing to look beyond the boundaries of modern education practice, it will remain impossible to make sense of why some learners choose failure in the face of the clearest presentation and transmission of content and a deluge of learning analytics that “predict” success.

Notes

1. “The D in our 6-D notion stands for the de—or dis—in such futurist-favored words as demassification, decentralization, denationalization, despacialization, disintermediation, disaggregation” (Brown and Duguid 2000, p. 22).

2. Consider also how much social media and online services contribute to big data: every minute, for example, Facebook users like 4,166,667 posts, Twitter users send over 347,222 tweets, YouTube users upload 300 hours of new video, Instagram users like 1,736,111 photos, Pinterest users pin 9,722 images, Apple users download 51,000 apps, Netflix subscribers download 77,160 hours of video, Reddit voters cast 18,327 votes, Vine users play 1,041,666 videos, Tinder users swipe 590,278 times, Snapchat users share 284,722 snaps, Buzzfeed users view 34,150 videos, Skype users make 110,040 calls, and Uber passengers take 694 rides (James 2015).

3. “[At] IBM, programmers have put together mathematical models of fifty thousand of the company’s tech consultants. They crunched massive amounts of data on the employees—how many emails they sent, who got them, who read the documents they wrote—and used this information to assess their effectiveness and deploy their skills in the most cost-efficient way” (Basen 2011).

4. “Every concept articulated in language begins as a metaphor. Then the poetic vitality associated with new metaphors wears away until the metaphor dies and a taken-for-granted concept is petrified in its place” (Fisher 1994, p. 358).

5. Socrates, for instance, considered the mind a wax tablet; Locke, a tabula rasa; Freud, a hydraulic system; and “the mind/brain has also been compared to an abbey, cathedral, aviary, theater, and warehouse, as well as a filing cabinet, clockwork mechanism, camera obscura, and phonograph, and also a railway network and telephone exchange” (Barrett 2011, p. 115).

6. Barrett (2011) argues convincingly that “Turing’s concerns were clearly mathematical, rather than psychological. He was simply interested in what numbers it was possible to compute, as a human did, using a pencil and paper…Turing’s machines were never intended to be a model of the mind or of mental processes” (p. 120).

7. “Moore’s Law… states that computing power doubles every 18 months. Moore’s Law is important and will be in effect for another two decades, but it is the least spectacular and slowest law at work. Every nine months—twice the speed of Moore’s Law—our ability to increase the bandwidth of optic fibres and optical amplifiers doubles, according to a fiber law. Multiply that by the ability to store information, which doubles every year, and the result is more useful information generated, flowing, and accessible” (Brown 2002, p. 50).

8. In 2012, John L. Hennessy, president of Stanford University, famously told The New Yorker that technology was about to dramatically change higher education. “There is a tsunami coming,” he said (Jaschik 2015, para. 1).

9. “Several big education companies have been investing heavily in technology ever since the 1990s. Pearson has spent over $9 billion in the past decade on technological upgrades for its education business. News Corp is also taking a big bet on Amplify, run by Joel Klein, a former chancellor of schools in New York City (and one-time antitrust nemesis of Mr. Gates). Amplify’s office, in an old warehouse in New York’s DUMBO district, contains not only classrooms, where students and teachers use new technology, but groups of former teachers working with software engineers, graphic artists, psychometricians, and game designers to produce new content. Other organizations funding the application of all this potential to education include companies who, like Pearson, are already established in education as providers of textbooks and other resources; companies already established in technology who see big new markets (Apple says it sold 3m iPads to American educational institutions last year); and companies established in other businesses who see edtech as a big opportunity. Then there are legions of start-ups, backed by an American venture-capital crowd that has proclaimed edtech to be the new thing. According to GSV Advisors, a consultancy, investment in edtech soared to $1.1 billion in 2012. The Education Innovation Summit held in Scottsdale in April was crawling with would-be investors; presentations from new companies were packed. Investment in the education sector in 2011 was almost as high in nominal terms as the dot.com peak and was higher in terms of volume” (The Economist 2013a, para. 4).

10. “The conviction that we can no longer understand science as one form of knowledge but rather must identify knowledge with science” (Habermas 1972, p. 4).

11. Instrumental reason is “the kind of rationality we draw on when we calculate the most economical application of means to a given end. Maximum efficiency, the best cost–output ratio, is its measure of success” (Taylor 1992, p. 5).

12. See Samuels (1993, pp. 10–14) for a succinct account of Lacan’s three logical stages: (i) the instant of the look, (ii) the time for comprehending, and (iii) the moment of concluding.

13. “CBC Radio’s flagship current affairs program, This Morning, is in the market for personal essays. We call this feature of the show ‘First Person Singular.’… These pieces do not deal with issues, but with significant experiences and happenings that shape people’s lives in big and small ways. In each there should be an element of transformation …an epiphany…a turning point” (Levine 2003, para. 5).