According to the Definition and Terminology Committee of the Association for Educational Communications and Technology (AECT), educational technology is “the study and ethical practice of facilitating learning and improving performance by creating, using and managing appropriate technological processes and resources” (Januszewski and Molenda 2008, p. 1). Few useful distinctions exist between educational technology and instructional technology with respect to the types of research conducted under these labels; therefore, these terms are considered synonymous for the purpose of this study.

Educational technology has become a matter of increasing controversy in recent years, both within the academic community and across the general population (cf. Selwyn 2013). Whereas some scholars regard educational technology as the potential salvation of educational systems that many view as in great need of improvement (Collins and Halverson 2009; Stallard and Cocker 2014), critics view it as undermining the best practices of traditional educational approaches (Cuban 2013; Thomas and Brown 2011).

Educational technology research should be an asset in efforts to resolve such controversies, but it has not consistently fulfilled this role to a satisfactory extent (Roblyer and Knezek 2003; Ross et al. 2010). It has also been the subject of critical analysis for nearly fifty years (cf. Mielke 1968; Clark 1983; Salomon 1991; Kozma 2000; Oliver 2011). To explore the evolving nature of educational technology research, this paper presents an analysis of the goals and methods of educational technology research studies published over a quarter century (1989–2014) in a leading refereed journal, Educational Technology Research and Development, the premier research publication of the AECT. This analysis is intended to enable the identification of new directions for educational technology research that could help make formal inquiry in this field more relevant to the needs of practitioners, policy-makers, and other stakeholders (Oliver 2014). Implications of this analysis for the preparation of educational technology researchers are also presented.

Goal, method, and research question

As part of an earlier analysis of educational technology research, Reeves (1995) analyzed the contents of research articles published in the Educational Technology Research and Development (ETRD) journal during a six-year period from 1989 to 1994. He categorized the research papers according to their goals and the methods used to address those goals. Six categories of research goals are described in Table 1.

Table 1 Goal categories of educational technology research

For the purposes of the analysis reported in this paper, the scheme for classifying the goals of educational technology research has been updated from the one presented in the Reeves (1995) paper. Specifically, the category simply labeled “Empirical” in the 1995 paper has been relabeled as “Exploratory/Hypothesis-Testing” in this paper to better reflect the intent of educational technology researchers to either discover possible relationships among variables or to test hypotheses concerning specific relationships among variables.

Educational technology research papers can also be classified with respect to the primary research methods employed in any given study. The methods categories used in the Reeves (1995) paper and in this paper are presented in Table 2.

Table 2 Research methods classification scheme

The concept of distinguishing between research goals and research methods was derived from Krathwohl (1998), who developed a matrix to categorize educational research studies according to three roles or goals (Description, Explanation, and Validation) and five major methods (Qualitative, Survey, Historical, Longitudinal, and Quantitative). Distinguishing between the goals pursued by educational technology researchers and the research methods they employ to reach their goals is complicated by the fact that there is an inevitable overlap between certain goals and specific methods. However, we argue that clarifying one’s research goal up front enables a greater focus on the ultimate intent of a research study or agenda and enables the researcher to make a more informed choice of the most appropriate method to achieve that goal.

From our perspective, research methods should not be identified until the goals of the researchers and the specific research questions they wish to address are understood. Although it seems legitimate for educational researchers to self-identify as design researchers or as interpretivists in reference to the types of goals they pursue in their research, it appears to us to be less straightforward for researchers to describe themselves as “qualitative researchers” or “quantitative researchers” in reference to the methods they might prefer to use. The latter seems akin to claiming to be a “hammer carpenter” or a “saw carpenter.” Although not everyone agrees, we view methods as tools that primarily have meaning in the context of a particular job. Individual researchers may certainly have preferences for one method over another and there can be a high degree of alignment between certain goals and specific methods, but the final choice of method will ideally be aligned with the nature of the researchers’ goals and the questions addressed within a particular study. We fear that educational technology researchers, especially doctoral students, often become enamored of one method or another without giving sufficient thought to the ultimate goal of their research agenda.

With these distinctions between research goals and methods in mind, the study reported in this paper has a Synthesis Goal (see Table 1). The primary research question addressed in this study is “How have the goals pursued and methodologies used in educational technology research changed over the 25-year period from 1989 to 2014?” The study has employed literature review as the primary method (see Table 2).

Specifically, the contents of the Educational Technology Research and Development journal were compared across two different six-year spans, 1989–1994 and 2009–2014. As researchers, we expected to find some shifts in both the goals and the methods of educational technology research studies over this quarter-century period, but we did not have specific hypotheses about the nature or direction of these shifts beforehand. The intent of our analysis was to examine the direction and magnitude of these shifts and, based on the findings, to synthesize implications for new research directions and for the preparation of novice educational technology researchers.

Background of Educational Technology Research and Development journal

According to the Springer website for the ETRD journal, it is the “only scholarly journal for the field focusing entirely on research and development in educational technology.” ETRD has been published since 1953, although it did not adopt its current title until 1989. It was previously published under two other titles, Educational Communication and Technology Journal (1978–1988) and Audio-Visual Communication Review (1953–1977). It is now issued bi-monthly as an official publication of the AECT. ETRD’s website indicates that it has an impact factor of 1.420 according to the 2014 Thomson Reuters Journal Citation Reports® for 2013. The SCImago Journal & Country Rank portal ranks ETRD number 43 of 914 education journals.

In its current format, ETRD has two distinct sections as described on the journal’s website:

  • The Research section features well documented articles on the practical aspects of research as well as applied theory in educational practice, a comprehensive source of current research information in instructional technology.

  • The Development section publishes articles concerned with the design and development of learning systems and educational technology applications.

For the purposes of this study, only the papers in the Research Section were analyzed, and within that section, only those explicitly labeled as Research Papers (as opposed, for example, to Guest Editorials).

This particular journal has been the focus of several other studies. For example, Zaugg et al. (2011) analyzed the contents of ETRD from 2001 to 2010. Although their analysis revealed several important characteristics of the journal for that decade (e.g., topics, contributing authors, citation patterns), one of their analytical schemes appears to us to conflate educational research goals and methods. Instead of specifically addressing the goals of researchers in their review, Zaugg et al. (2011) analyzed what they labeled the “methodologies” of the papers using these seven categories:

  • Developmental/design based

  • Survey

  • Quantitative

  • Qualitative

  • Theoretical/Philosophical

  • Content/Discourse

  • Combined methods (p. 44)

Hsu et al. (2013) included ETRD in their analysis of educational technology research trends in six SSCI-indexed refereed journals from 2000 to 2010. They reported research trends, primarily focusing on research topics across the six journals and in each journal. However, they conducted no analysis of research goals or methods. An acknowledged limitation of their analysis was that they applied automated data mining methods only to the abstracts rather than the bodies of the papers. By contrast, our analysis, as detailed below, was based on a close reading of the entire contents of every article. Abstracts in educational research journals (and in social science research reports generally) vary greatly in their specificity and the types of information included, with as many as a third misrepresenting the information that is actually in the article itself (Hahs-Vaughn and Onwuegbuzie 2010).

Baydas et al. (2015) analyzed the contents of ETRD and the British Journal of Educational Technology from 2002 to 2014. They specifically focused on research methods, subjects, data collection tools, sample selection methods, sample sizes, and data analysis methods, but as with the Zaugg et al. (2011) analysis of ETRD, they did not address differences in research goals. Although we found much to recommend in the Baydas et al. (2015) study, such as their analysis of the sample sizes of quantitative studies and their identification of subject trends across two major journals, we would have found more value in their analysis had they sought to clarify the goals pursued by the authors of the 1255 studies they reviewed. As noted above, research methods are tools, and their adoption is a secondary decision dependent on the nature of the researchers’ goals and research questions as well as on factors such as budget and feasibility. We maintain that examining the nature of researchers’ goals is critical to any analysis of educational research literature.

Literature review methodology

The first author of this paper used the goal and method categories presented in Tables 1 and 2 to review the complete text of every paper in the Research Section of ETRD over two six-year periods, 1989–1994 and 2009–2014. There were 95 articles reviewed in the first time period and 102 in the second, for a total of 197 articles. Each paper was analyzed carefully to determine the primary goals of the study and the methods that were employed. Specifically, the reviewer first read the title of the study, as some titles give a clear signal of the researchers’ goal or, alternatively, their method. For example, an ETRD paper by Wijekumar et al. (2012) is titled “Large-scale randomized controlled trial with 4th graders using intelligent tutoring of the structure strategy to improve nonfiction reading comprehension,” signaling that the researchers had a hypothesis-testing goal and employed a quantitative experimental design. However, most article titles provide few clues as to either the researchers’ goal or methods, and in any case titles alone could not possibly provide sufficient information for the purposes of our analysis. Each paper had to be read in its entirety.

Next, the reviewer read the abstract of the paper to find further details about the goal and methods of the study. Here is the abstract of the aforementioned article by Wijekumar et al. (2012):

Reading comprehension is a challenge for K-12 learners and adults. Nonfiction texts, such as expository texts that inform and explain, are particularly challenging and vital for students’ understanding because of their frequent use in formal schooling (e.g., textbooks) as well as everyday life (e.g., newspapers, magazines, and medical information). The structure strategy is explicit instruction about how to strategically use knowledge about text structures for encoding and retrieval of information from nonfiction and has consistently shown significant improvements in reading comprehension. We present the delivery of the structure strategy using a web-based intelligent tutoring system (ITSS) that has the potential to offer consistent modeling, practice tasks, assessment, and feedback to the learner. Finally, we report on statistically significant findings from a large scale randomized controlled efficacy trial with rural and suburban 4th-grade students using ITSS. (p. 987)

As noted above, abstracts alone do not provide reliable portrayals of the contents of research articles and may even misrepresent studies in many cases (Hahs-Vaughn and Onwuegbuzie 2010; Hartley and Betts 2009). Accordingly, the reviewer read every article in detail to locate specific information that could be used to clarify the researchers’ goals and methods, and no articles were definitively classified until they had been completely read.

Microsoft Word was used to produce a large table with eight columns labeled “citation, abstract, goal, question, methodology, findings, comments, and origin.” The first and second columns are self-explanatory. The goal column was used to record the reviewer’s interpretation as to which of the six goals described in Table 1 were pursued by the authors of the study. The question column was used to record the authors’ research questions. More often than not, research questions were explicitly stated in the papers, but sometimes they had to be paraphrased on the basis of other information in the text. The methodology column was used to record the reviewer’s interpretation as to which of the five methods described in Table 2 were utilized by the authors of the study. The findings column was used to record the reviewer’s interpretation of the primary findings of the study. The comments column was used to record any notes the reviewer had made concerning the design, implementation, or reporting of the study, e.g., the length of the treatment time. Finally, the origin column recorded the country or countries where the research was conducted.
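As a concrete illustration of this coding scheme, one row of the table can be sketched as a simple record. The field names below mirror the eight columns just described; the sketch itself (in Python) is purely illustrative and was not part of the original Word-based review process.

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """One row of the eight-column review table (illustrative only)."""
    citation: str     # full reference for the reviewed article
    abstract: str     # the article's abstract, copied verbatim
    goal: str         # reviewer's interpretation: one of the six goal categories in Table 1
    question: str     # the authors' research question(s), stated or paraphrased
    methodology: str  # reviewer's interpretation: one of the five method categories in Table 2
    findings: str     # reviewer's summary of the study's primary findings
    comments: str     # notes on design, implementation, or reporting (e.g., treatment length)
    origin: str       # country or countries where the research was conducted
```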

To provide a check on the reliability of the reviewing process, the second author independently reviewed 34 randomly selected papers from ETRD. Inter-rater reliability (Cohen’s kappa) was then calculated between the two coders (the authors): .91 for the research goals and .83 for the research methods, values considered to represent very good strength of agreement (Landis and Koch 1977). Subsequently, the two researchers conferred concerning the categorizations on which they initially differed and reached agreement.
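For readers unfamiliar with the statistic, Cohen’s kappa adjusts the coders’ observed agreement (p_o) for the agreement expected by chance (p_e) given each coder’s marginal category frequencies: kappa = (p_o − p_e) / (1 − p_e). The following is a minimal Python sketch of that computation; the category codes and ratings in the example are hypothetical illustrations, not the data from this study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical ratings of the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of items on which the two coders agree.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: computed from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical goal codes assigned by two reviewers to ten papers
# (EH = exploratory/hypothesis-testing, DI = descriptive/interpretivist,
#  TS = theory development/synthesis, AE = action/evaluation).
reviewer_1 = ["EH", "EH", "DI", "TS", "EH", "DI", "AE", "EH", "TS", "EH"]
reviewer_2 = ["EH", "EH", "DI", "TS", "DI", "DI", "AE", "EH", "EH", "EH"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # -> 0.7
```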

The primary source of initial interpretive disagreement between the two reviewers occurred when mixed methods were used as opposed to just quantitative or qualitative methods. For example, a given study may apply a blend of qualitative methods such as observations and interviews, whereas another study may include a quantitative method such as an online survey followed by a qualitative method such as interviews with volunteers from the survey sample. After consultation, we decided to categorize studies as mixed methods only when they clearly included both quantitative and qualitative methods (Johnson et al. 2007).

Results

The primary research question for this study was “How have the goals pursued and methodologies used in educational technology research changed over the 25-year period from 1989 to 2014?” Table 3 presents the classification of 95 research papers that appeared in the ETRD from 1989 to 1994. There were 104 articles published in the Research Section of this journal over these six years, but not every article was a research paper per se. Six “methodological articles” (presenting a new method/procedure for conducting research) and three “professional articles” (analyzing the state of the profession of educational or instructional technology) appeared in the journal from 1989 to 1994, and were not included in the analysis represented in Table 3.

Table 3 Classification of ETRD articles (1989–1994)

Table 4 presents the classification of 102 research papers that appeared in the ETRD journal from 2009 to 2014. Papers excluded from the analysis shown in Table 4 include several brief articles introducing special issues, one “methodological article,” and one “professional article.”

Table 4 Classification of ETRD articles (2009–2014)

Comparison of Tables 3 and 4 shows some interesting trends. With respect to research goals, there was a major reduction in the proportion of papers published with theory development/synthesis goals: fully a third of the papers in ETRD from 1989 to 1994 had such goals, whereas only 10 % did from 2009 to 2014. At the same time, the percentage of papers with descriptive/interpretivist goals increased from 1 % in 1989–1994 to 16 % in 2009–2014, and the percentage with exploratory/hypothesis-testing goals rose from 51 % in the earlier period to 68 % in the later period. The percentage of papers with action/evaluation goals fell from 9 % (1989–1994) to 2 % (2009–2014), while the percentage with design/development goals remained essentially stable at 6 % (1989–1994) and 5 % (2009–2014). No papers with critical/postmodern goals were published in either time period.

With respect to methodology, there were some major shifts, particularly the reduction in literature review papers (from 40 % in 1989–1994 to 8 % in 2009–2014) and the increase in studies reporting the use of mixed methods (from 12 % in 1989–1994 to 29 % in 2009–2014). There were also increases in the use of both quantitative methods (from 41 % in the earlier period to 52 % in the later period) and qualitative methods (from 7 % to 11 %). No papers reporting the use of critical theory methods were published in either time period.

Discussion

Some research goals are more strongly associated with specific methods than others. For example, researchers with theory development/synthesis goals tend to rely upon literature review as their preferred methodology; those with exploratory/hypothesis-testing goals more often apply some type of quantitative method, especially experimental or quasi-experimental designs; and those with descriptive/interpretivist goals frequently use qualitative methods, especially interviews. Researchers with design/development goals tend to employ mixed methods, although this tendency is not as strongly represented in these data as the other tendencies noted above. However, there is not a one-to-one correspondence between the goals pursued by educational researchers and the methods they use.

Over the past quarter century, there has been a major increase in the representation of papers with descriptive/interpretivist goals, from 1 % to 16 %. This likely reflects the fact that research with descriptive/interpretivist goals has become more acceptable in the field of educational research as a whole. Active discussions on promoting and articulating the goals, methods, and value of descriptive/interpretivist research (e.g., Howe 1998; Freeman et al. 2007) in major educational journals such as Educational Researcher have contributed to the “phenomenal growth” (Lichtman 2013, p. xvii) in the adoption and publication of studies with descriptive/interpretivist goals in education from the early 1990s until today.

Anecdotal evidence supports this as well. When the first author of this paper was a new assistant professor in a College of Education at a large research university beginning in 1982, most of the doctoral dissertations conducted there had exploratory/hypothesis-testing goals and utilized quantitative methods, usually some type of survey or a quasi-experimental design. At that time, any students interested in research with descriptive/interpretivist goals were warned against this direction by their research supervisors, and there were no courses available to help students learn to apply qualitative methods. Today, in that same College of Education, students can earn a special certificate in Interdisciplinary Qualitative Studies requiring a minimum of 15 semester credit hours. In the Educational Technology Ph.D. program in that same College of Education, more than half of the students now pursue studies with descriptive/interpretivist goals.

There was also a dramatic reduction in the number of papers with theory development/synthesis goals in ETRD, from 33 % in 1989–1994 to 10 % in 2009–2014. There was an even greater drop in the percentage of papers in which literature review was the primary method used, from 40 % in 1989–1994 to 8 % in 2009–2014. It is unclear why this has happened. Perhaps, over the years, editorial boards and reviewers have come to favor data-based papers, regardless of the methodologies used, over literature-based ones. The journal website includes the following statement under its aims and scope:

The Research Section assigns highest priority in reviewing manuscripts to rigorous original quantitative, qualitative, or mixed methods studies on topics relating to applications of technology or instructional design in educational settings. Such contexts include K-12, higher education, and adult learning (e.g., in corporate training settings). Analytical papers that evaluate important research issues related to educational technology research and reviews of the literature on similar topics are also published.

Despite the overall decrease in publication of papers of this nature, it is interesting to note that West and Borup (2014) identified the most cited paper in ETRD from 2001 to 2009 as Merrill’s (2002) paper on First Principles of Instruction, which was written with a theory development/synthesis goal. West and Borup (2014) also reported that seven of the nine most-cited papers from the ten instructional design and technology journals they selected were theoretical or literature-based synthesis papers rather than data-based papers.

Interest in design-based research has grown across many fields of educational inquiry (Anderson and Shattuck 2012; McKenney and Reeves 2013; Plomp and Nieveen 2013), but the representation of papers with design/development goals in ETRD has remained marginal with 6 % in 1989–1994 and 5 % in 2009–2014. One challenge for researchers pursuing design/development goals may be that reports on their studies are often quite lengthy, far exceeding the 6000–8000 word limits of many educational technology journals. The ETRD website’s Instructions for authors section states:

Articles exceeding 8000 words (about 20–30 double-spaced pages) in length are unlikely to be published unless they are of exceptional significance requiring an extended presentation to do justice to the material. Submissions that successfully present the research in 5000 words are particularly welcome, as short, focused articles are helpful to readers and enable the journal to make a greater range of research available to its readership.

There were no papers published in ETRD in either time period with critical/postmodern goals, a finding that may reflect an insufficient coverage of critical perspectives in the curriculum of educational technology doctoral programs. In the pages of ETRD, Solomon (2000) called for a postmodern research agenda in the field of educational technology, but virtually no research has been published in ETRD with these goals in recent years. Evans (2011) published a critique of the postmodern agenda in ETRD, in which he concluded that researchers pursuing critical/postmodern goals in the field of educational technology should “take a ‘realist’ position on the ontological status of the social objects under investigation and a ‘critical’ position as to what we can know about those objects” (p. 811). This advice may appeal to some in the field, but it is unlikely to be accepted by most researchers with critical/postmodern goals because they are much more likely to presume alternatives to critical realist interpretations of reality, e.g., interpretations inspired by radical constructivist, neo-Marxist, multicultural, feminist, or other perspectives. Indeed, educational technology as a field has a culture and history of focusing research and development on innovative things that are still emerging rather than taking critical stances toward those things and toward educational technology policies that have been around for many years or have become widely adopted (Reeves and Reeves 2015).

Interest (or the lack thereof) in conducting educational technology research with critical/postmodern goals is also evident in the primary handbooks that synthesize research in this field. Whereas the first edition of the Handbook of Research on Educational Communications and Technology (Jonassen 1996) included two chapters devoted to these goals (Nichols and Allen-Brown 1996; Yeaman et al. 1996), the most recent fourth edition (Spector et al. 2014a, b) includes no such chapters. Although the first handbook included more than 20 entries for the term “critical theory” in its index, the fourth edition’s index does not include a single entry for this term.

Finally, researchers with action/evaluation goals are even less well represented in the pages of ETRD today than they were earlier (9 % in 1989–1994 vs. 2 % in 2009–2014). There may be several explanations for this. More journals focusing on action research or program evaluation have appeared in recent years, such as Educational Action Research and Studies in Educational Evaluation, and researchers with action/evaluation goals may be publishing their work there. At the same time, such researchers may be directed to the development side of ETRD: the journal’s website states that “Empirically-based formative evaluations…are welcome” on that side of the journal.

Implications

The trends identified in this study have implications for future research directions in the field of educational technology. Educational research studies in general, including those focused on educational technology, often yield findings amounting to “no significant differences” (Hattie 2009). This is not an inherently undesirable outcome in situations in which researchers simply wish to demonstrate the equivalent effectiveness of different delivery systems for very similar kinds of instruction as in the well-documented equivalency of learning outcomes between classroom instruction and online learning (cf. Tallent-Runnels et al. 2006).

However, frequently the promotion of educational technology is based upon the belief that educational technology innovations will yield educationally significant improvements in learning outcomes in schools as well as in businesses (cf. Davidson 2012; Horn and Staker 2014; West 2013). Similar promises that educational technology will transform education and training can be traced back nearly 100 years (Reiser 2001). As each new technology has been introduced into instructional contexts (films, teaching machines, radio, television, interactive videodisc, e-learning, serious learning games, etc.), numerous studies have been done to compare the effectiveness of the new delivery mode or approach with business-as-usual instruction. These studies have often added to what Russell (2001) called the “no significant difference” (NSD) phenomenon in the educational technology literature.

In the Epilogue to the most recent edition of the Handbook of Research on Educational Communications and Technology (Spector et al. 2014a, b), Jan Elen questioned “the relevance of investigating a well-known principle simply because a ‘new’ technology is on the market.” This is a question that all of us engaged in educational technology research should ask. Reeves and Reeves (2015) argued that these persistent NSD findings can be traced to the problem that “so much educational technology research is focused on things that others create rather than the problems that should concern us as educators” (p. 92).

Is there a need for a much wider uptake of educational design research, also known as design-based research (McKenney and Reeves 2012)? Nearly three quarters of the educational technology research studies published in ETRD in recent years (2009–2014) have either exploratory/hypothesis-testing or descriptive/interpretivist goals and address “what works?”, “how is this experienced by learners?”, or “does this work better than that?” questions. Studies with design/development goals, by contrast, ask “what is the problem and how can we solve it?” Perhaps there has been an increase in educational technology researchers with design/development research agendas over the past 25 years, but this study did not reveal such a trend in the research papers published in ETRD.

The trends that were identified in this study have implications for the preparation of graduate students to establish their own research goals and to make more informed decisions about their choice of research methods. We recommend that beginning doctoral students take at least one course focused on the philosophy of science and alternative research paradigms (Phillips 2000) so that they can make a much more mindful choice when it comes to defining their research goals. Such a course may also help students to develop a more nuanced appreciation of how difficult and complex educational research actually is (Phillips 2014).

Given the absence of research with critical/postmodern goals in the pages of ETRD, educational technology doctoral students might also be encouraged to take at least one course focused on critical perspectives of educational technology and education as a whole (Gitlin 2014; Hlynka and Belland 1991). Attracting more students with postmodernist or critical orientations to advanced studies in educational technology could also be a route toward encouraging a more critical perspective of research in our field.

The decline in the number of high-quality literature review papers and papers with theory development/synthesis goals in the pages of ETRD over the quarter century from 1989 to 2014 may also deserve attention. As reported by West and Borup (2014), when high-quality papers with theory development/synthesis goals are published, they can bring forth new theoretical insights and provide important conceptual and theoretical frameworks for numerous data-based papers. Papers using literature review as their primary method can also offer scholars useful macro-level perspectives on selected topics. A lack of consensus about the value of such papers may have contributed to fewer published papers of this nature over the timespan of our analysis. By contrast, human resource development as a scholarly field actively teaches and uses integrative literature review (ILR) (Torraco 2005) as one of its accepted research forms and publishes ILR papers in major HRD journals including Human Resource Development Review (www.hrd.sagepub.com).

Should the direction of educational technology research agendas be of concern to those of us involved in preparing the future scholars in our field? Bulfin et al. (2014) surveyed 462 educational technology researchers and found an over-emphasis on “relatively basic forms of descriptive research, coupled with a lack of capacity in advanced quantitative data collection and analysis” (p. 403). Bulfin et al. (2014) concluded that educational technology “researchers need to make a conscious effort to interact in wider circles in order to stimulate an informal ‘enculturation’ into a greater range of different research traditions than is the case at present” (p. 410). We agree.

In addition, we strongly recommend that all of us involved in educational technology research should periodically pause to consider how little of the research we publish typically transfers to other practical contexts. The lack of impact of research on practice is not by any means limited to educational technology. Kane (2016) opined, “In other fields, research has paved the way for innovation and improvement. In pharmaceuticals and medicine, for instance, it has netted us better health outcomes and increased longevity. Education research has produced no such progress” (p. 82).

Of course, the findings of our analysis of ETRD research articles over the past quarter century provide no basis for such a blanket condemnation of educational technology research. The most important trends we detected were:

  • A reduction in the number of papers with theory development/synthesis goals in ETRD from 33 % in 1989–1994 to 10 % in 2009–2014.

  • A drop in the percentage of papers in which literature review was the primary method used from 40 % in 1989–1994 to 8 % in 2009–2014.

  • An increase in the representation of papers with descriptive/interpretivist goals from 1 % to 16 %.

  • Representation of papers with design/development goals in ETRD remained steady, but low, with 6 % in 1989–1994 and 5 % in 2009–2014.

  • No papers with critical/postmodern goals were published in ETRD in either time period.

Given these trends, we suggest that the editors of ETRD convene a session at an upcoming convention of its parent professional association, AECT, to discuss this paper, delve into its implications, and make changes in its submission or reviewing processes if any such modifications are deemed desirable. We are not calling for any specific modifications based on the findings of this analysis, but are recommending further discussion of the research directions illustrated in this paper.

Limitations

This paper has several important limitations. First, the distinction between research goals and research methods may not be universally accepted. Some scholars argue that any given researcher plans and conducts studies within the confines of a specific paradigm, and that this paradigm severely limits the choice of methods that the researcher would likely apply. Cilesiz and Spector (2014) categorized most educational technology researchers as working within the worldview of one of three distinct paradigms: postpositivism, constructivism, or phenomenology. By contrast, Treagust et al. (2014) describe science education researchers as working within the paradigms of post-positivism, interpretivism, or critical theory. We view our focus on six different research goal orientations as bringing more clarity to an analysis of the research literature in our field than three broad paradigms would, but we acknowledge that not everyone will find this approach useful.

Second, this analysis was limited to papers on the research side of ETRD specifically labeled as research papers. We recommend that a future study be conducted to encompass a representative sample of papers from the development side of ETRD.

Third, we limited our analysis to this one journal. It would certainly be worthwhile to pursue a similar analysis of the research papers published in other educational technology journals during the same time period to see whether similar or different trends would be identified. This is a project that we are undertaking ourselves, but we would welcome others to join us.

Fourth, there were only two reviewers involved in this research endeavor. Future studies should seek to include more reviewers with appropriate checks for inter-rater reliability.

Lastly, we only reviewed papers that were published in ETRD. Thus, our analysis is limited to the papers that were accepted rather than all of those that were submitted to the journal. For example, no papers with critical/postmodern goals were found in either time period, but there is no way of knowing how many papers with critical/postmodern goals, if any, were submitted to ETRD but simply not accepted.