Introduction

In the film ‘Groundhog Day’, the protagonist is forced to experience the events of a single day over and over again. He is free to act in any way he chooses, but whatever he does the day always finishes in the same way. […] People who have been involved over any length of time with educational technology will recognise this experience, which seems characterised by a cyclical failure to learn from the past. We are frequently excited by the promise of a revolution in education, through the implementation of technology. We have the technology today, and tomorrow we confidently expect to see the widespread effects of its implementation. Yet, curiously, tomorrow never comes. We can point to several previous cycles of high expectation about an emerging technology, followed by proportionate disappointment, with radio, film, television, teaching machines and artificial intelligence. (Mayes, 1995: n.p.)

This widely quoted excerpt from Mayes neatly summarizes many of the challenges that face research on educational communications and technology. There is hype; there is exploration; there is disappointment; and then a new topic arises that diverts attention. The date of Mayes’ lament shows that this is nothing new; the continued relevance of the excerpt shows that the problem remains with us.

This chapter works through some of these challenges, identifying possible solutions but also considering how much of this is an inevitable feature of research in the field. The discussion serves to identify both opportunities to foster relevant research and the challenges that are likely to persist.

In order to do this, the next section starts by focusing on the idea of “relevance,” considering it in relation to different contexts, audiences and traditions of research.

What Is Relevant Research?

“Relevance” implies a sense of purpose—something cannot be relevant in general, only to particular situations or ends. This is just as true of research as of anything else, so this section explores two questions: relevance to what, and to whom?

Contextual Relevance

Research on Educational Communications and Technology might be expected to inform practice or decisions in a wide range of settings, but differences between settings mean that work undertaken in one context may not be relevant elsewhere. In order to understand how this affects research, it becomes important to ask, how can contexts vary?

The first problem with this question is that “pinning down what we mean by context is not an easy task” (Luckin, 2010: 3). Luckin’s discussion of the “spatial turn” in research identifies physical, digital, social, and historical elements that constitute our understanding of context. Each has a role to play in establishing whether a particular practice, such as an educational use of technology, will be considered successful, appropriate, and so on.

As a further complication, this already complex picture of context has been viewed from at least two perspectives. The first of these views context as something that “surrounds” activity; the second views context as something created through the weaving together of artifacts and practices. Even without going further into the ecological account of learning with technology that Luckin develops, this demonstrates just how different contexts can be. In addition to the practical questions that it raises—how similar, for example, is the historical and material context of this teacher to that of the research?—it also highlights that the introduction of any new technology, or even a new use of an existing technology, can be understood as changing the context of practice itself.

To illustrate some of the many problems this poses for research, it is worth considering the difficulties faced by the “One Laptop per Child” initiative. This work traces its roots back to Papert’s seminal work on constructionism, but it has been shown that the principles derived from studies in western settings could not easily be applied in developing countries for organizational, technical and social reasons. The project has been criticized for adopting a simplistic, “one size fits all” model of change (Leaning, 2010) that assumes technology alone can cause beneficial change (Tabb, 2008). Rather than applying research to transform education in other contexts, the risk for such work is that the simplistic transfer of principles can come across as “an arrogant sense of superiority and a ‘benign colonialist’ attitude” (Leaning, 2010: 244).

The only way to avoid such issues is to assume that no research outcomes are universally relevant, but that they need to be recontextualized each time they are applied. For example, the principles behind the “One Laptop per Child” initiative remain relevant in other contexts, but play out in different ways. Studies undertaken in South Africa show that older learners (studying at university) do indeed benefit from having individual devices to support learning—however, these devices may well be mobile phones rather than laptops, and funding their use may involve considerable personal effort and hardship, meaning that interventions might be better aimed at adults who can use their income to sustain such use than at children who depend on others to provide the infrastructure they need (Czerniewicz, Williams, & Brown, 2009). Thus, whilst some of the principles derived from earlier work can indeed be applied, they require reworking rather than simple application; the change in context complicates their “relevance,” which needs to be reestablished rather than assumed.

Contextual differences do not only apply at this macro level; even within relatively small areas, contexts vary considerably. The differences between home and school use of technology demonstrate this. Learners need to use different technologies, or use familiar technologies in new ways, depending on what they are trying to achieve and the settings in which they are acting (Livingstone & Bober, 2004). This is not a simplistic contrast in which one set of practices—say, the more extensive use of technologies in informal settings—is universally “better”; rather, it is simply that some uses of technology may be inappropriate in school but acceptable at home (Lankshear & Knobel, 2006). One example of this is in relation to information searching: being able to find videos on YouTube does not mean that a learner will be able to find and judge academic information sources appropriately, even if they can operate the search interface with equal technical skill (CIBER, 2008).

Many of these issues are considered explicitly within the field of mobile learning. Attempts to theorize what is important about mobility have moved away from a focus on the portability of devices, and towards the idea of learning across contexts (see, e.g., Sharples, 2005). This is particularly obvious in examples such as educational field studies, where data collected in one context might be analyzed and interpreted in another; or augmented reality simulations, in which information is used to supplement and hence transform experiences of what might otherwise be very familiar spaces (Roschelle & Pea, 2002). In this tradition of work, the difference between contexts becomes a resource for learning, rather than a problem: it remains important to study and understand it, but relevance arises from the contrasts and specialization that learners will encounter, rather than homogeneity.

Relevance to Audiences

Implicit in the idea of “relevance” is a purpose, and this in turn implies an actor. There are several possible groups of actors who might reasonably be thought of as audiences for research on educational communications and technology, and members of each may have their own purposes and interests.

Perhaps the most obvious audience is other researchers, for whom research is “relevant” if it helps them to advance their own work. In this sense, work might be relevant because it contributes evidence relevant to a research problem; methods through which it might be studied; theories, concepts or models through which it might be understood; or because it highlights a gap or problem that needs to be addressed in the first place. In terms of technologies, the importance of novelty—the ability to make a contribution to knowledge—can be seen as fuelling the cycles of work that Mayes (1995) identified. In this respect, new developments, prototypes and cutting-edge applications are more likely to be of interest than studies of well-established technologies, which is why proof of concept studies are more common than (for example) cohort studies of the roll-out of technologies that have been commercially adopted (Alsop & Tompsett, 2007). Different kinds of research are simply cited—a measure typically taken as an indicator of research impact—in different ways. Studies show that review articles, for example, are almost always more highly cited than any other kind of publication—but that does not mean that other kinds of research lack value or should not be undertaken (Cameron, 2005).

Irrespective of the topic, however, studies show that the format in which research is presented matters. Recent work on digital scholarship advocates a commitment to open sharing of research publications (e.g., Weller, 2011); studies show that across various disciplines, open access articles are more widely cited (Antelman, 2004).

While researchers might be one obvious audience, policy makers are usually given a higher profile in discussions of the relevance of research. Within traditions such as evidence-based policy, the link between research and policy is clearly formulated: research evidence is aggregated in a systematic way, judged in terms of the quality of evidence, and the outcomes (typically quantitative) are integrated (Nutley & Webb, 2000). Within this systematic, rational model, evidence is gathered “just in case”—it may relate to an issue of the moment, but reviews draw on all evidence already available; this specific aggregation may well have been unimagined by the researchers who undertook the work. However, such reviews are only possible when a series of studies has been undertaken with the same (or closely related) technologies. The ability to speak with confidence about the general value of a technology thus requires different kinds of research from those of most relevance to researchers.

Furthermore, while this model has widespread appeal, it has been criticized as failing to reflect how policy is developed and enacted, and how evidence is (or is not) used to inform that process. Patton (1997), for example, undertook studies of the ways in which evidence from commissioned evaluations was used to inform policy decisions and found that, for the majority of cases, it simply did not—the reports that were produced were not even read. Instead of the “just in case” model of evidence production and consumption, Patton advocates a “just in time” process of evaluation that generates evidence (which may or may not be quantitative, depending on what will best inform a specific audience) in relation to current concerns, so as to inform specific decisions that need to be made.

This attention to the practice of policy work has been developed more extensively in the area of policy sociology (Ball, 1997). This perspective recasts policy as a social process, rather than just as a collection of paper documents; policy is seen in terms of a “circuit of production,” in which people develop, write, promote and implement policies. Each of these stages can involve different groups of people, each with their own competing interests. Provision of evidence at any point in this process can alter the balance of power, providing one group with an advantage in arguing for its preferred position (Patton, 1997), and changing the effects of the policy process. In such an account, “relevance” would be understood in terms of the potential for one group to use research evidence to gain advantage for their favored position over some alternative; it is therefore inherently political.

This echoes work on the kinds of study that have successfully influenced educational technology policy. Roblyer (2005) identifies four kinds of study that could move work forward, and which by implication might inform policy work: that which establishes relative advantage for particular technology-based strategies; research that improves implementation strategies; work that monitors important societal goals; and that which reports on uses outside of educational settings to develop work in educational contexts.

The other group for whom the question of research relevance arises is the broad category of “practitioners,” normally understood to cover teachers. There are problems with simply “applying” research to teaching practice, just as there were complexities discussed earlier with the idea of applying research undertaken in one context within another.

This can be illustrated by work on technology and teacher training. Mishra and Koehler, focusing on teacher education programs, developed Shulman’s framework for talking about professional knowledge to consider what teachers needed to know about technology. Shulman’s original framework made the point that, in addition to knowing about their subject and about teaching, teachers needed to know about the specificities of how to teach their subject (a point analogous to the concerns about contextualization, above). Mishra and Koehler’s development points out that, as well as knowing about technology, teachers also need to know about the technologies of teaching (e.g., interactive whiteboards, managed learning environments) and the technologies of their discipline (e.g., concordances for languages, molecular modeling software for chemists, patient monitoring equipment in clinical medicine). They also need to know about the specifics of using technology to teach their discipline—in Mishra and Koehler’s terms (2006), “Technological Pedagogical Content Knowledge” or TPCK.

This explains why applied case studies are so relevant to practitioners: they directly address the problems of implementation in specific contexts that they face when teaching with technologies. It also offers an explanation of why these case studies are not widely read by teachers in other subject areas. Moreover, such studies are different again from the kinds of work that are most relevant to researchers or policy makers.

Of course, not all research needs to be so specific in order to be relevant to teachers: it is perfectly legitimate to view research as being relevant to knowledge about technologies for teaching, or just technology more generally. Russell’s meta-review of studies of technological interventions (1999), for example, makes a legitimate contribution to knowledge about teaching with technology (rather than teaching with technology in a specific disciplinary context) by showing how rarely significant and meaningful differences are found in studies. Nonetheless, the idea of TPCK helps to highlight gaps in the research, and more specifically, in the research presented to teachers as part of their training. It allows relevant research to be undertaken by highlighting where there may be gaps, but also explains that studies may not be of obvious relevance simply because teachers are thinking about the specificities of their subject, rather than about teaching in general. Studies have shown that interventions that address gaps in this way help practitioners to develop a better integrated understanding of the relationship between technology and teaching practice (Koehler, Mishra, & Yahya, 2007).

Another major group of “practitioners” who may form an audience for research is designers. Several well-established traditions of research inform design practices—for example, much of the field of human-computer interaction does exactly this. However, not all of this is relevant to questions of learning; Nielsen’s Web usability guidelines (1999) for example are widely cited, even within work on learning and technology, but concern commercial transactions rather than instruction. By contrast, Mayer’s work on the integration of verbal and visual information (e.g., 1997) and Sweller’s work on cognitive load (e.g., 1994) have directly informed design principles for multimedia, information sequencing and representation in instructional materials. Similarly, instructional design work building on Gagné’s events of instruction (1985) has produced extensive and highly developed guidelines that are of direct relevance to problems of learning and instruction. What these and other similar sets of principles share is a common orientation to recognized and recurrent problems encountered by designers producing instructional materials. Such research is relevant, because it can be directly applied to guide the design of instructional programs (Merrill, 2002).

Finally, while learners are commonly featured in research, they appear more often as the subject of studies than as an audience. While there has been much work undertaken to understand learners’ practices and preferences, little of this is directly relevant to them; more often it is drawn upon to inform design. More commonly, learners are appropriated—they are spoken for, often in ways that lend weight to particular positions. For example, the claim that there is a generational divide—whether between digital natives and immigrants, new millennials, or the net generation—serves to bolster the arguments of reformers who want teachers to make more use of technology, even if “rather than being empirically and theoretically informed, the debate can be likened to an academic form of a ‘moral panic’” (Bennett, Maton, & Kervin, 2008: 775).

Of greater relevance to learners, arguably, are attempts to use the frameworks that researchers have developed for understanding learners in order to support learners’ own metacognition. Some of this work is of dubious value: for example, work on learning styles is widespread, and has led to the development of self-evaluation instruments intended to make students more aware of their own learning styles—ignoring the evidence that these “styles” are situated responses to learning tasks, heavily influenced by assessment regimes, and that there is little evidence that they are stable or persistent over time (Coffield, Moseley, Hall, & Ecclestone, 2004). However, there is evidence of the value of developments such as Open Learner Modeling, an approach that uses student models to raise students’ awareness of their progress, practices and so on, so that they can make better informed choices about future actions (Kay, 1997). This approach has been shown to support reflection and metacognition, and further value may arise where these models can be shared with peers and tutors too (Bull & Nghiem, 2002).
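To make the idea more concrete, the sketch below shows one way an “open” learner model might be structured so that its estimates can be shown back to the learner. It is a minimal illustration only: the skill names, the update rule (a simple moving average) and the report format are assumptions made for the purposes of the example, not features of the systems described by Kay (1997) or Bull and Nghiem (2002).

```python
from dataclasses import dataclass, field


@dataclass
class OpenLearnerModel:
    """A toy 'open' student model: the learner can inspect the estimates the
    system holds about them (all names and numbers are illustrative)."""
    mastery: dict[str, float] = field(default_factory=dict)  # skill -> estimate in [0, 1]
    learning_rate: float = 0.3

    def record_attempt(self, skill: str, correct: bool) -> None:
        """Update the estimate for a skill from one observed attempt,
        using a simple exponential moving average (an assumed update rule)."""
        prior = self.mastery.get(skill, 0.5)
        observation = 1.0 if correct else 0.0
        self.mastery[skill] = prior + self.learning_rate * (observation - prior)

    def report(self) -> str:
        """Render the model as a view the learner (or a peer or tutor) can read."""
        lines = ["Current estimates of your progress:"]
        for skill, estimate in sorted(self.mastery.items()):
            bar = "#" * round(estimate * 10)
            lines.append(f"  {skill:<20} [{bar:<10}] {estimate:.0%}")
        return "\n".join(lines)


if __name__ == "__main__":
    model = OpenLearnerModel()
    model.record_attempt("fractions", correct=True)
    model.record_attempt("fractions", correct=False)
    model.record_attempt("percentages", correct=True)
    print(model.report())  # the 'open' part: the learner sees the model itself
```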

To summarize, there are multiple audiences for research on educational communications and technology, and each tends to have its own distinctive interests. As a result, what counts as relevant research depends on the interests of the audience considering it. This leaves researchers with a dilemma: their work must either be political (serving the interests of one group rather than another) or naïve.

Fostering Research

As the discussion of relevance shows, research does not take place in a social vacuum. Various interests stand to be supported, challenged or sidelined by research. Obviously, it is in the interests of each group to ensure that others are supported and encouraged to undertake the kind of work it needs.

In this section, some of the processes through which this takes place are reviewed. This discussion draws on ideas from Wenger’s work on communities of practice: this analyzes social practices to identify how groups stabilize what they do, manage new developments or new participants, and relate to other groups.

Specifically, this discussion uses the ideas of alignment and of constellations of practice. “Constellations of practice” are “configurations […] too far removed from the scope of engagement of participants, too broad, too diverse, or too diffuse to be usefully treated as a single community of practice” (Wenger, 1998: 126–7); instead, a constellation describes groups such as an institution or a social movement in which there may be many communities of practice, with divisions between them, but across which practices remain connected. “Alignment” explains how groups change their practices in order to conform to the expectations of others (ibid: 179); it explains how some groups exercise power and why others respond, and how resources (such as funding) or other reifications (ideas, approaches, approval) can be used to encourage some practices within a constellation and discourage others.

First, however, some of the communities that constitute the constellation of practice for research on educational communications and technology are identified.

Traditions of Research on Educational Communications and Technology

Educational communications and technology is a diverse field that has seen different traditions of work rise and fall over the decades. Saettler (1990) provides a thorough account of the emergence and development of traditions of work in this field, specifically in the USA, and explores how they were shaped by different theories (such as behaviorism and cognitivism), technological interests (such as educational television and radio), research interests (media studies, artificial intelligence, instructional design, etc.) and funding and administrative infrastructures. Hawkridge (2002) provides a perspective that contrasts the evolution of the field in the USA with developments elsewhere. He highlights points of divergence that he argues reflect differences in “attitudes towards science, including beliefs in objective reality and natural laws. Nor can these origins be separated from industrial and military uses of systems analysis to solve problems.” His account describes work in Europe, and particularly in the UK, as drawing more extensively on social and critical theory, even if the roots of the field were drawn from work in the USA. Friesen (2009) similarly distinguishes between a positivist, instrumental tradition of work, influenced by the concerns of the US military, and alternatives that he describes as practical (concerned with interpretation or meaning) and emancipatory (concerned with critiques of power and control).

Friesen argues further that the latter traditions have been relatively neglected within the field internationally, which instead has been oriented almost exclusively towards solving instrumental problems. He cites Koper’s claim that “E-learning research is technology oriented instead of theory oriented” (Koper, 2008: 356), for example, as exemplifying a body of work that focuses solely on asking how to develop better instruments, whilst neglecting to ask why those instruments are being asked for or used. In terms of the discussion of relevance above, such instrumental work supports the objectives of researchers and some policy makers, but neglects the practical concerns of audiences such as teachers. It also fails to ask the political question of whether the objectives of particular groups are the right ones to support in the first place. He illustrates this with studies of areas such as people’s experiences of Internet use or of using technology to support conceptual development in mathematics.

It is worth highlighting, however, that there are traditions of work that do address Friesen’s practical interests. As already noted, applied action research has relevance to teachers, precisely because it addresses their practical concerns. The emerging body of work described as design-based research (Barab & Squire, 2002) could also be argued to fit this agenda. This work takes as its starting point the idea that context affects learning and cognition; it therefore seeks to bring together applied studies of implementation with the work of theory building. It might be argued that this work is also emancipatory, in that it brings participants into the analysis and production of design so that it recognizes their interests and expertise, rather than the research being “done to” them (ibid: 4); however, while it may modify the conventional relationship between designers and users during the project, this does not really address broader concerns about equity and participation, which critical theory focuses on. Studies adopting this approach have demonstrated its ability to improve designs so that they support student engagement and learning outcomes more effectively (e.g., Dede, Nelson, Jass Ketelhut, Clarke, & Bowman, 2004).

Fostering Relevant Research by Practitioners

As the preceding discussion has shown, practitioners such as teachers may be well placed to undertake applied, contextually relevant research—and indeed, some do; however, they are not typically well supported in doing so, nor are their contributions particularly valued (Oliver & Conole, 2003). Nonetheless, such case studies have practical value; they can be used to inform other research (for example, through a synthesis or review study); and they can act as a first step towards fuller participation in the research field (in Wenger’s terms, a form of legitimate peripheral participation).

Nonetheless, the relevance of such studies is first and foremost to the practitioners who undertake the work, and to those in very similar contexts. Several attempts have been made to intervene in such studies to increase their relevance to others too—notably to practitioners in different contexts, but also to researchers. Studies of such interventions have shown the importance of recognizing and rewarding practitioners’ research; however, the most important factor in fostering a credible research output was whether the practitioner was able to work with more experienced staff who could advise on the empirical work and its interpretation, and support the process of preparing it for dissemination (Smith & Oliver, 2000).

Where such support is not available, structured processes have been shown to help when describing and sharing evidence. Work in the field of learning design, for example, focuses on the production of reifications of pedagogic practice; it is closely related to instructional design, but begins from a “ground up” documentation of current practice that is then formalized and refined, rather than a “top down” specification of practice as determined by a particular theory. As such, it is an area in which practitioner studies are of obvious value. To encourage attention to important pedagogic features and generate consistent representations of pedagogic practice, pedagogic planning tools have been developed, and their value to practitioners evaluated (San Diego et al., 2008). These “use current good practice to create and check the relationships between the different aspects of the user’s input (e.g. balancing learners’ resource and teaching time; linking topics, outcomes, methods, and assessments; supporting decisions on sequencing and scope of topics; testing designs based on pedagogical frameworks; providing exemplars and links to existing web-based resources)” (ibid: 21). Generating consistent and formalized representations of practice in this way, it is suggested, allows the development of tools that are intended to support “a user-oriented analytical approach to learning design” (ibid: 24).
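The kinds of checks quoted above can be illustrated with a small sketch. The representation below is hypothetical—it is not the data model of the tools evaluated by San Diego et al. (2008)—but it shows how a formalized description of a design makes it possible to check, for example, that planned activities fit the available time and that every intended outcome is both taught and assessed.

```python
from dataclasses import dataclass, field


@dataclass
class Activity:
    topic: str
    method: str              # e.g. "lecture", "group task" (illustrative labels)
    outcome: str             # the learning outcome the activity addresses
    minutes: int
    is_assessment: bool = False


@dataclass
class LearningDesign:
    title: str
    outcomes: list[str]
    available_minutes: int
    activities: list[Activity] = field(default_factory=list)

    def check(self) -> list[str]:
        """Return warnings about inconsistencies in the design, mimicking the
        kinds of checks described (time balance; outcome/assessment linkage)."""
        warnings = []
        total = sum(a.minutes for a in self.activities)
        if total > self.available_minutes:
            warnings.append(
                f"Planned activities take {total} min but only "
                f"{self.available_minutes} min are available."
            )
        covered = {a.outcome for a in self.activities}
        assessed = {a.outcome for a in self.activities if a.is_assessment}
        for outcome in self.outcomes:
            if outcome not in covered:
                warnings.append(f"Outcome '{outcome}' has no activity linked to it.")
            elif outcome not in assessed:
                warnings.append(f"Outcome '{outcome}' is taught but never assessed.")
        return warnings


if __name__ == "__main__":
    design = LearningDesign(
        title="Introductory statistics, week 3",
        outcomes=["interpret a histogram", "choose an appropriate average"],
        available_minutes=90,
        activities=[
            Activity("histograms", "lecture", "interpret a histogram", 40),
            Activity("histograms", "quiz", "interpret a histogram", 20, is_assessment=True),
            Activity("averages", "group task", "choose an appropriate average", 45),
        ],
    )
    for warning in design.check():
        print("WARNING:", warning)
```

The point of such a sketch is not the specific checks, but that a shared, formal representation makes practice inspectable and comparable across settings.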

This provides a useful example of the way in which artifacts help to foster research, and in particular, research that is relevant to other groups. One problem predicted by a Community of Practice perspective is that meaning is determined locally: peers within the community judge the appropriateness of an interpretation or action. This results in variability in the way that any resource might be made sense of or worked with. Developing standardized representations of practice helps to mitigate this problem: with interactions between communities in a constellation of practice, different possible interpretations are discouraged, so that—in this case—practice can be represented and understood in more consistent ways. Meanings are aligned through the use of artifacts that cross the boundaries between separate communities, and research into local practice is made more relevant to the community that developed the representational scheme (and potentially, to other communities who also use it).

An illustration of this is provided by studies of the community that has grown up around the use of the Learning Activity Management System (LAMS). LAMS generates sharable representations of practice, the intention being that these can be used to make practice more motivating, effective or efficient by sharing and refining approaches to learning and teaching. It has been argued that LAMS should be used to support approaches such as action research, since it can help teachers to formalize, reflect upon and share their pedagogy by using the learning sequences LAMS produces “as a form capturing the pedagogy appropriate to [a] type of objective” (Laurillard, 2008: 150). However, while studies have shown that practitioners find the idea of sharing their practice appealing in principle, it has not been easy to achieve in practice; for example, one evaluation of LAMS use by teachers concluded that “while they recognised the importance of sharing their practice with others, technical and cultural barriers need to be overcome” (Masterman & Lee, 2005: 3).

Even if such barriers could be overcome, the use of standardized representations is no guarantee that communities will develop consistent, or even compatible, understandings of particular forms of practice. As Falconer’s study demonstrated (2007), sometimes it is impossible to create a single representation that allows meaningful discussions about learning and technology across different professional communities, and work needs to be done by people to support and repair interpretations. Just as representations can act as “boundary objects” that allow separate communities to coordinate their work (Wenger, 1998: 106), people act as “brokers,” moving between communities and engaging in “processes of translation, coordination, and alignment between perspectives. It requires enough legitimacy to influence the development of a practice, mobilize attention, and address conflicting interests [… and] the ability to link practices by facilitating transactions between them” (ibid: 108).

One example of such effort is provided by work in the area of pedagogic pattern languages (Mor, Winters, Cerulli, & Björk, 2006). This work involves the generation and application of “design patterns,” abstractions that represent previous successful responses to problems. In spite of the potential for such representations to support practitioners as they design instructional experiences, practitioners found it hard to make sense of patterns that they encountered, and all but impossible to generate patterns based on their own practice. However—as was found with the earlier study of factors supporting practitioner-researchers (Smith & Oliver, 2000)—when researchers worked with teachers in problem-oriented workshops, teachers were able to generate and share meaningful design patterns by deriving them from case studies of practice, because they could relate the unfamiliar processes and representations to the kinds of narrative case descriptions that teachers were able to produce (Winters & Mor, 2008). The resulting patterns could then be shared with other communities of teachers, but were also of interest to researchers studying practices of teaching with technology.
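As a rough illustration of what such a reification might contain, the sketch below defines a minimal design pattern record with the familiar problem/context/solution shape. The field names and the example pattern are hypothetical, intended only to indicate the kind of structured abstraction that practitioners were being asked to produce, not to reproduce any published pattern format.

```python
from dataclasses import dataclass, field


@dataclass
class DesignPattern:
    """A minimal pedagogic design pattern record (field names are illustrative,
    following the general problem/context/solution shape of pattern languages)."""
    name: str
    problem: str                 # the recurring difficulty the pattern addresses
    context: str                 # where the pattern has been found to apply
    solution: str                # the abstracted, reusable response
    derived_from: list[str] = field(default_factory=list)  # source case studies

    def summary(self) -> str:
        cases = ", ".join(self.derived_from) or "none recorded"
        return (f"{self.name}\n  Problem: {self.problem}\n  Context: {self.context}\n"
                f"  Solution: {self.solution}\n  Derived from: {cases}")


if __name__ == "__main__":
    pattern = DesignPattern(                              # hypothetical example
        name="Anonymous first draft",
        problem="Students are reluctant to share early work for peer comment.",
        context="Online discussion forums in undergraduate writing courses.",
        solution="Collect first drafts anonymously, then reveal authorship "
                 "only after peers have commented.",
        derived_from=["Case study: first-year composition pilot"],
    )
    print(pattern.summary())
```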

Closely linked to these processes are concerns about language. Studies have explored what kinds of representations are most helpful to teachers in developing their educational practice. A consistent message from this work was the central importance of any intervention having an immediate and recognizable relevance to practitioners’ contexts of practice, which must furthermore “take account of the language, values, culture and priorities of their particular community” (Sharpe & Oliver, 2007: 123).

This implies that the specialized, expressive forms of representation used by design experts are likely to be inaccessible to the practitioners they might wish to support or influence. Such terminology may well be seen as “jargon” (Falconer, 2007); indeed, Falconer’s study shows that if practitioners are unable to engage with the forms of representation that are used, the descriptions of practice that are generated will probably be viewed as irrelevant and meaningless, no matter how principled the pedagogic design that they represent. The specialized forms of representation may well be necessary for the design and development community—but work will be needed to adapt these forms to new audiences and ensure that they are comprehensible.

Fostering Relevant Research by Researchers

Many of the issues that arise when supporting teachers and other practitioners also arise for researchers: practices need to be aligned, and common understanding developed. However, the mechanisms through which these processes operate tend to be different for communities of researchers than for practitioners.

The artifacts that researchers use to align each other’s practices typically include theories, models and concepts, expressed through publications. Citation can be seen as a way of demonstrating an appropriate alignment with others in order to claim legitimacy within a field (Millen, 1997). Patterns of citation are therefore useful markers of discrete traditions of work, delineating and differentiating communities of researchers. Czerniewicz’s study (2010) of literature characterized as “educational technology” shows how diverse and fragmented this field is. Her analysis revealed no hegemonic traditions (although instructional design is widely drawn upon and has been advocated by some authors as a potential unifying perspective). Instead there is a broad array of positions, linked to many different disciplines.

Fostering relevant research, from this perspective, involves generating and sharing theories, methods, instruments or other resources that other researchers wish to use to advance their own work. The consequence of this is that “relevance” becomes performative (Lyotard, 1979), with the value of work determined by the way in which others take it up. Work can therefore be made more relevant by signposting its contributions as clearly as possible—a conventional requirement for academic writing—but also by demonstrating its pedigree through adoption of the language and processes that other researchers recognize as being authoritative. This has led to criticisms of publishing and peer reviewing processes as being conservative and restrictive (see, e.g., Weller, 2011). The conventional account, by contrast, would be that peer review serves to challenge and test ideas, introduce new literature to authors and also to try and eradicate the most divergent reinterpretations of texts and practices: in other words, its function is to educate and raise quality.

However, just as with representations for practitioners, there is no guarantee that producing a theory or artifact means that communities will engage with it, or align their work to an author’s ideas. Studies of the way that people engage with theories and models (e.g., De Freitas, Oliver, Mee, & Mayes, 2008) show that they judge them in terms of their relevance and similarity to already-used representations; such recontextualization is rarely documented, however, so that the lessons learnt are not used to revise or develop the theories. Many opportunities to “talk back” to theory are simply not taken (Bennett & Oliver, 2011), missing the chance to improve the relevance of theories and models.

What counts as relevant research also varies in relation to the questions currently being posed by researchers. Different questions become important at different moments in the cycles of technology that Mayes (1995) described in the excerpt that opens this chapter. He noted at the time—and others since have observed (e.g., Alsop & Tompsett, 2007; Czerniewicz, 2010)—that when new technologies are developed, simple questions about the efficacy and role of technology need to be answered. This tends to generate a slew of “proof of concept” case studies that demonstrate the feasibility of their use in educational settings. However, the risk is that, by orienting to the new technology, such studies fail to connect to existing bodies of work tackling well-established problems and issues. In other words, such studies are relevant to short-term concerns at the expense of longer-term issues; indeed, their relevance to those longer-term issues may not be obvious at all.

For example, recent work on personalization and adaptive e-learning requires the development of systems that can anticipate a user’s needs and actions in a credible way—yet such work often proceeds without reference to decades’ worth of prior research in areas such as student modeling (Mödritscher, García, & Gütl, 2004). As a consequence, it can be hard to see the relevance of such new work for these established issues, and this can contribute to the fragmentation of literature in the field. Without conceptual links to prior work or systematic procedures for moving beyond individual studies, there is a risk that work of this kind will be irrelevant to researchers and practitioners alike (Alsop & Tompsett, 2007)—if not immediately, then as soon as a newer technology becomes the problem of the moment instead.
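To indicate what that prior work offers, the sketch below implements the standard Bayesian Knowledge Tracing update, one of the longest-established student-modeling techniques. The parameter values are illustrative rather than fitted to any data set, and the mastery threshold is an assumption, but the update rule itself is the conventional one.

```python
from dataclasses import dataclass


@dataclass
class KnowledgeTracer:
    """Standard Bayesian Knowledge Tracing for a single skill.
    Parameter values below are illustrative, not fitted to any data set."""
    p_known: float = 0.2    # P(L0): prior probability the skill is already known
    p_learn: float = 0.15   # P(T): probability of learning after each opportunity
    p_slip: float = 0.1     # P(S): probability of a wrong answer despite knowing
    p_guess: float = 0.2    # P(G): probability of a right answer without knowing

    def update(self, correct: bool) -> float:
        """Update P(known) from one observed response and return the new estimate."""
        if correct:
            evidence = self.p_known * (1 - self.p_slip)
            total = evidence + (1 - self.p_known) * self.p_guess
        else:
            evidence = self.p_known * self.p_slip
            total = evidence + (1 - self.p_known) * (1 - self.p_guess)
        posterior = evidence / total
        # Account for the chance of learning during this practice opportunity.
        self.p_known = posterior + (1 - posterior) * self.p_learn
        return self.p_known

    def needs_more_practice(self, threshold: float = 0.95) -> bool:
        """A simple adaptive decision: keep practising until mastery is likely."""
        return self.p_known < threshold


if __name__ == "__main__":
    tracer = KnowledgeTracer()
    for observed in [True, True, False, True, True]:
        estimate = tracer.update(observed)
        print(f"P(known) = {estimate:.2f}, more practice needed: "
              f"{tracer.needs_more_practice()}")
```

An adaptive system built on such a model can decide, after each response, whether to offer further practice or move on—precisely the kind of anticipation of a learner’s needs that personalization requires, and the kind of groundwork that newer adaptive e-learning projects risk reinventing.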

Awareness of these cycles, and of the longer-term issues that they can hide, is important as a check on the short-term pressures of policy and research funding (Conole, Smith, & White, 2007). Researchers are far from being the only audience for research work; however, while policy makers, teachers, designers, and managers may all have an interest in research on educational communications and technology, not all are equally well placed to sponsor, support or otherwise foster relevant research. Few teachers, for example, have anything to offer researchers that would lead them to realign their research to serve teachers’ interests. Policy makers, however, are able to influence research, in no small part because they can control the flow of resources that researchers need to operate (Conole et al., 2007). Processes of tendering and contracting help to ensure that research remains relevant to the interests of funding bodies, who in turn may represent the interests of government, trustees of charitable bodies, the military, and so on. As already noted, Friesen’s critique (2009) of work in the field shows how military concerns about closed systems shaped the research agendas they funded. Saettler (1990) also provides a history of the relationship between educational technology research, funding and policy bodies, looking for example at the effects of the National Science Foundation’s investment in projects to demonstrate the effectiveness of computer-assisted learning. A similar analysis for the UK is provided by Conole et al. (2007), who conclude that “research has a tendency to follow policy directives and technological developments, rather than informing them” (ibid: 53). Clearly, policy makers have been more successful at aligning researchers’ work to their interests than the other way around.

Conclusions

The notion of relevant research implies a sense of audience and interests, and frames research in a political context. It raises questions about which audience’s interests will be served by the work, and how this will alter their relationships with others. Research on educational communications and technology, however, tends to focus on problem solving, and has been criticized for its instrumentalism; critical questions about the politics of research are rarely addressed in literature in this field.

Many of the techniques used to foster relevant research are familiar and mundane: processes such as funding, training and peer review are familiar across disciplines, and remain powerful influences in this field. However, there are also techniques that are relatively distinctive to this research. These include the use of tools, formalisms and representations to elicit, standardize and share practitioners’ knowledge and practices, and participative processes that bring designers into contact with users.

Nonetheless, challenges remain. The cycles of hype, hope, and disappointment that Mayes (1995) described are set to persist, so long as researchers orient to technologies of the moment, rather than to more enduring concerns or theories. Concerns about contextualization mean that studies undertaken in one setting (the classroom, a laboratory) may be hard to make use of in another (the home, say). Research continues to be led by funding, rather than leading the policies that determine how funds are allocated.

It seems unlikely that any single development will solve all of these problems simultaneously. However, several practical implications do follow from this. First, there is the need to build connections between fragmented communities of researchers working in this broad field—an issue that is likely to recur each time a new technology becomes the focus for work. Since this cannot be avoided, what is needed are mechanisms that will encourage connections between the new research areas and established, longer-term concerns. Peer review, conferences, and publications are established mechanisms that should help in this respect; however, these have not stopped the problem to date. Ways of improving processes such as peer review should be considered, as they have been in other fields (e.g., Schroter et al., 2004). As Weller argues (2011), there is also value in pursuing new opportunities for interdisciplinary work, which would help ensure the relevance of research in education and communication technology to other areas of concern. Review articles should be encouraged so that separate bodies of research activity can be related, particularly where this can bring together work separated by divergent terminology rather than conceptual differences.

Secondly, further work is needed to help practitioners such as teachers to share principled accounts of their practice and knowledge, with each other and with researchers. Attempts to encourage this have met with mixed success; there is evidence that both tools (such as representational formats) and interventions (such as support, training, or workshops) can help support such activity. It would be prudent to view such activity as an expert task, and to adopt a scaffolded approach towards supporting it—something which may initially involve working in partnership with more able peers, whose time may need to be paid for through special initiatives or research project funding.

Thirdly, there is a need for awareness of the social processes through which research is produced and used, analogous to recent work exploring policy as a process (rather than understanding it purely in terms of produced texts; Ball, 2008). The emerging body of work in the field of digital scholarship represents one viewpoint on the processes of production; much of the field of library and information sciences might be viewed as another. However, work on digital scholarship often focuses on exceptional cases (e.g., Weller, 2011), while studies in library and information sciences are primarily restricted to sections of the process concerned with published texts. An agenda of work that brought together the scope of digital scholarship research with an evidence base of the kind developed in library and information sciences may help to document and develop these relatively unstudied processes.

The processes of research use, and particularly of the ways in which research outputs are taken up in practice, require different kinds of study, however. Here, ethnographic approaches have value in understanding how people make sense of new technologies, and what they mean to them (Friesen, 2009). Similarly, design-based research becomes important as a way of ensuring the mutually informed adaptation of technology and practice (Barab & Squire, 2002). Projects that aspire to change practice, or to improve learning outcomes in classrooms rather than purely under controlled conditions, would benefit from incorporating empirically grounded work that links studies of practice to processes of technology adaptation and adoption. Similarly, policy makers and funders may wish to consider encouraging different kinds of studies, perhaps along the lines suggested by Alsop and Tompsett (2007), who advocate moving from studies that demonstrate an effect, to studies of efficacy of use in controlled situations, and from there to studies of use in typical practice settings, followed by case studies intended to reveal and understand unintended side-effects. This kind of structured lab-to-classroom progression promises a better chance of establishing the relevance of interventions than is currently possible.

Developments such as these will enable researchers to make their work more useful and more relevant, to their peers as well as to others.