“But he has nothing on!” said the whole people at length. That touched the Emperor, for it seemed to him that they were right; but he thought within himself, “I must go through with the procession.”

The Emperor’s New Clothes, Hans Christian Andersen (1837)

When it comes to educational psychology journals, some of the research on which recommendations for practice (RFP) are based appears to be “causing” a dilemma like the emperor’s described above. Even though their research may not warrant causal conclusions, authors must still go through with the procession of offering RFP. Educational psychologists are often drawn to the field because they wish to conduct research that informs student learning. Indeed, designing empirically supported policies and practices may be one of the most important responsibilities of educational psychologists (Anderman, 2011). At times, misinterpretation and misuse of research are due to unsupported claims researchers make when they go beyond their data in making recommendations for practice. Furthermore, research in educational psychology is not always interpreted correctly by policymakers and educators and may be used to support instructional interventions that do not yet have sufficient evidence, such as the self-esteem movement of the 1970s (Baumeister et al., 2003). Although educational psychologists do not have control over the way that policymakers and educators interpret their research, they do have a responsibility to ensure that their RFP are within the bounds of their methodological approaches. The present study is an extension of previous investigations exploring both methodological and RFP trends in the field of educational psychology (Hsieh et al., 2005; Reinhart et al., 2013; Robinson et al., 2007).

Methodological Shifts in Educational Psychology

Since its inception, educational psychology has relied on a variety of empirical approaches (Dumas et al., 2015). Adopting different empirical approaches benefits the field of educational psychology by allowing researchers to study phenomena through an array of lenses. These different methodological approaches are rooted in varied epistemological views; thus, both the goals and implications of the approaches vary.

When the ultimate goal of a line of research is to inform educational policy and practice, researchers should engage in a sequence of multiple research stages from conceptual development to hypothesis testing (Hsieh et al., 2005). In the earliest stages, research is largely descriptive with the goal of better understanding the research context. Both qualitative (e.g., observations) and quantitative (e.g., self-report surveys) methods may be appropriate in these early stages. Once these more descriptive studies have identified appropriate variables and correlations that indicate possible cause-effect relations, a researcher may design interventions intended to improve student learning. These intervention hypotheses are then tested more formally as a new treatment is compared with existing standard practices.

Despite the logic of the exploration-to-intervention study sequence, recent educational psychology research appears to be skipping the final stage and ending instead with correlation studies. Over the past 30 years, intervention research in empirical educational psychology journals has been on the decline, while observational/correlational methods have been on the rise (Hsieh et al., 2005; Reinhart et al., 2013; Robinson et al., 2007). The percentage of empirical articles that employed interventions decreased from 47% in 1994 to 25% in 2010. During the same time period, correlational research increased from 43% in 1994 to 66% in 2010. The present study seeks to examine whether these trends have continued.

The Cost of Under-supported Recommendations for Practice

As stated earlier, a major goal of educational psychology research is to improve educational experiences for students. Thus, some journals explicitly require researchers to describe the implications of their findings for educational practice. In most scientific fields, however, RFP typically require evidence from rigorous randomized controlled trials. In instances where a non-experimental approach is used (e.g., correlational/observational, qualitative), the evidence base may simply be insufficient to support strong recommendations.

As a recent example in educational psychology of what we mean by correlational/observational research, one study used structural equation modeling to analyze self-report variables for tens of thousands of students. There was no manipulation of variables and thus no intervention. This absence of independent variable manipulation is a hallmark of the ex post facto, correlational/observational design. Yet, in the Implications for Practice section, the authors provided clear recommendations for manipulating one study variable to improve another study variable. Unfortunately, the “evidence” that intervening on one variable will “cause” a change in the other was purely correlational.

This example exhibits two characteristics that are relevant to the present study. First, the authors employed statistical modeling to infer causal relations with observational (nonintervention) data. Second, they provided RFP based on assumed causality. This example is not unique. In addition to the decline in intervention research discussed previously, the tendency for authors to make RFP based on nonintervention research is increasing. In 1994, only 30% of correlational articles included such recommendations. By 2010, this figure had increased to 46%. Thus, the trend of ending the research sequence at the correlation stage also appears to be accompanied by the trend of more RFP (Hsieh et al., 2005; Reinhart et al., 2013; Robinson et al., 2007).

The Debate over Recommendations for Practice

Some may feel that requiring experimental evidence for educational recommendations is extreme. Perhaps even more extreme, 10 years ago, Robinson et al. (2013) argued that, regardless of methodology, RFP based on any empirical studies should be restricted within educational psychology journals because of the tendency for researchers to overstate, and the public to overinterpret, the implications of studies. Even with a multiple-experiment, replication-and-extension study, RFP based on single studies are simply not yet ready for prime time. Rather, systematic reviews and meta-analyses of several experiments are needed to assess the cumulative evidence. Few agreed with this view. Alexander (2013), for example, argued that excluding RFP is in direct conflict with the goals of educational psychologists. Harris (2013) suggested that there are benefits to including appropriately grounded RFP within empirical studies. For instance, such RFP may provide directions for future research. Regardless, there does seem to be strong agreement that RFP should be based on experimental research (Alexander, 2013; Harris, 2013; Renkl, 2013; Vaughn & Fuchs, 2013; Wainer & Clauser, 2013). In contrast, some researchers have argued that causal statements can and should be integrated into non-experimental studies to clarify the research goals and interpretations of findings (Grosz et al., 2020).

Present Study

The present study extends previous research investigating methodological trends in educational psychology research. Three previous studies (Hsieh et al., 2005; Reinhart et al., 2013; Robinson et al., 2007) found decreases in intervention and randomized experimental research, increases in correlational research, and increases in RFP accompanying correlational studies from 1994 to 2010. To determine whether these trends have continued, we examined articles in the same five educational psychology journals published in 2020.

Method

We reviewed 342 empirical articles published in 2020 in the Journal of Educational Psychology (JEP), American Educational Research Journal, Cognition and Instruction, Journal of Experimental Education, and Contemporary Educational Psychology. We categorized each article by method as follows: observational/correlational, intervention, experimental, qualitative, and mixed method/multi-method. Observational/correlational studies did not include the manipulation of an independent variable and included quantitative data. Both intervention and experimental studies involved manipulation of an independent variable; however, experimental studies further included random assignment. Qualitative studies did not include the manipulation of an independent variable and included qualitative data (e.g., text from interviews and observations). Mixed method/multi-method studies did not include manipulation of an independent variable and included both quantitative and qualitative data. These categories were identical to those used by Reinhart et al. (2013) and Robinson et al. (2007), with the exception that we added mixed method/multi-method. Finally, in line with Reinhart et al., we coded an additional volume year of JEP (2019) to determine year-to-year stability.
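For readers who prefer a compact summary, the category definitions above can be expressed as a simple decision rule, sketched below. The sketch is purely illustrative: the function and attribute names are ours rather than part of any coding manual, and actual coding relied on human judgment of full articles, not mechanical rules.

```python
from dataclasses import dataclass

@dataclass
class Article:
    """Features a coder judges when reading an empirical article (illustrative only)."""
    manipulates_iv: bool     # was an independent variable manipulated?
    random_assignment: bool  # were participants randomly assigned?
    has_quantitative: bool   # quantitative data present?
    has_qualitative: bool    # qualitative data present (e.g., interview text)?

def code_method(a: Article) -> str:
    """Summarize the category definitions used in coding."""
    if a.manipulates_iv:
        # Both intervention and experimental studies manipulate an IV;
        # experimental studies additionally use random assignment.
        return "experimental" if a.random_assignment else "intervention"
    if a.has_quantitative and a.has_qualitative:
        return "mixed method/multi-method"
    if a.has_qualitative:
        return "qualitative"
    return "observational/correlational"
```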

Also similar to previous studies, we coded each article as to whether RFP were included. Such RFP imply or explicitly state that “if Practice X is adopted/avoided or increased/decreased, then teacher or student Outcome Y will improve” (Reinhart et al., 2013, p. 244). Table 1 shows examples of RFP.

Table 1 Examples of prescriptive statements

All authors first separately coded each article in Contemporary Educational Psychology, 2020, Volume 60, and then discussed any disagreements to reach consensus. Each author was then assigned approximately 50 articles to code from the remaining journals. To compute interrater reliability, each author additionally coded 30 articles that other authors had originally coded (15 assigned to one author, 15 assigned to another). Interrater reliability was calculated as coder agreement divided by total possible agreement. Agreement was 91% for research method and 76% for RFP. Consensus was reached on all coding disagreements through group discussion.
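For concreteness, percent agreement of the kind reported above (matching codes divided by the number of double-coded items) can be computed as in the following minimal sketch. The function name and the example codes are hypothetical and are not our actual data.

```python
def percent_agreement(codes_a: list[str], codes_b: list[str]) -> float:
    """Proportion of double-coded items on which two coders assigned the same code."""
    if len(codes_a) != len(codes_b):
        raise ValueError("Both coders must rate the same items")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical example: two coders rate five articles for research method.
coder_1 = ["experimental", "qualitative", "intervention",
           "observational/correlational", "qualitative"]
coder_2 = ["experimental", "qualitative", "observational/correlational",
           "observational/correlational", "qualitative"]
print(f"{percent_agreement(coder_1, coder_2):.0%}")  # 80%
```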

Results

Table 2 displays the results of 2 years of JEP (2019 and 2020), along with the 2 years Reinhart et al. (2013) examined (2009 and 2010). When examining the stability from 2019 to 2020, there are three notable differences. First, 2020 included 10 qualitative studies, whereas 2019 had none. This difference was due to JEP publishing a special issue on Qualitative Studies of Reasoning and Participation (2020, Volume 112, Issue 3). Thus, it could be argued that qualitative research was overrepresented in 2020. Second, 2020 included a slightly smaller percentage of observational/correlational studies with RFP (67%) than did 2019 (82%). Thus, the tendency of researchers to include RFP with such studies is likely underrepresented in 2020. Finally, a greater number of empirical articles were coded in 2020 (n = 115) than in 2019 (n = 87).

Table 2 Summary of findings for JEP

Table 3 displays the results across all five journals for 2020, alongside the years 2000 and 2010 from Reinhart et al. (2013). The total number of empirical articles published has increased over the past two decades (114 more in 2020 than in 2010). Reinhart et al. documented a decline in intervention research from 2000 to 2010 (from 40% to 25%). Our findings suggest that intervention research has remained stable since 2010. However, randomized experiments continued to decrease, from 23% in 2010 to 20% in 2020.

Table 3 Summary of findings for all journals

Interestingly, we found a decrease in the percentage of articles that were observational/correlational (from 66% in 2010 to 46% in 2020). This reduction may be due to the change in the percentage of qualitative articles, which increased from 9% in 2010 to 22% in 2020. Some of this increase in qualitative research was due to the special issue in JEP 2020; however, even after removing the special issue from the calculations, 19% of the total number of articles remained qualitative. Finally, mixed method research (6% in 2020) was not coded separately from observational/correlational in previous studies. Most importantly, nonintervention research remained at about three fourths of all research in 2020, the same as in 2010.
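As a rough check on the special-issue adjustment above, the arithmetic can be reconstructed from the rounded percentages we report. The counts below are back-calculated approximations from those rounded figures, not exact cell counts from our coding.

```python
total_2020 = 342        # empirical articles coded across the five journals in 2020
pct_qualitative = 0.22  # reported qualitative share for 2020
special_issue = 10      # qualitative studies in the JEP special issue

qual_n = round(total_2020 * pct_qualitative)  # ~75 qualitative articles
adjusted = (qual_n - special_issue) / total_2020
print(f"{adjusted:.0%}")  # ~19% of all articles remain qualitative
```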

Prior studies documented an increasing trend in RFP in observational/correlational studies. In 2000, 41% of observational/correlational articles included RFP. In 2010, this percentage increased to 46%. This trend has continued into 2020 with a marked increase: 66% of observational/correlational articles included RFP. Though prior studies did not examine qualitative or mixed method articles for RFP, in the present study we found that RFP were also pervasive in both methods (65% of qualitative studies and 60% of mixed method/multi-method studies). Across all approaches combined, RFP were included in about two out of three nonintervention articles in 2020.

Discussion

Four findings from the present study stand out when compared to previous findings. First, there has been an increase in qualitative research. Compared to 2000 and 2010, the percentage of articles employing qualitative methods has risen, whereas the overall percentage employing nonintervention methods has remained at about 75% since 2010. Second, intervention research has remained at 25% since 2010. Third, experimental research decreased from 23% in 2010 to 20% in 2020. Forty years ago, in 1983, almost half (47%) of empirical articles in the five journals employed random assignment experiments (Hsieh et al., 2005). Now only one in five empirical articles employs random assignment. Fourth, the present findings depict a continued rise in RFP based on nonintervention research. Compared to 46% in 2010, 66% of nonintervention articles included RFP in 2020. We discuss each of these findings further in the following sections.

The Continued Shift Away from Intervention Research

The change in methodology across the last four decades may be due to a number of contributing factors. For example, these changes may simply indicate shifts in worldviews in educational psychology. Worldviews comprise particular sets of assumptions; whether implicit or explicit, researchers’ worldviews inform their research designs (Creswell, 2013; Jones et al., 2014). Historically, fields of psychology have tended to favor a post-positivist view (Meyer & Schutz, 2020). The increase in qualitative and mixed method research may suggest movement toward adoption of other worldviews that underlie these approaches (e.g., postmodernist, relativist; Creswell, 2013) or, at the very least, increased variation in the worldviews adopted by educational psychologists.

Alternatively, these trends may highlight shifts in what perspectives are accepted by high-impact educational psychology journals. Said differently, it could be that this variation in worldviews existed all along, but articles outside of more traditional, post-positivist perspectives were not accepted by major outlets. This barrier to publishing qualitative articles is discussed briefly in the introduction to the special section Qualitative Studies of Reasoning and Participation in the Journal of Educational Psychology (Kuo & LeBaron Wallace, 2020), which notes that the lack of qualitative articles in journals such as JEP may be due to educational psychologists’ devaluing of qualitative methods. Although not analyzed for this paper, a similar special issue focused on qualitative and mixed method research was published in Educational Psychologist in 2020 (Volume 55, Issue 4). Like the special section in the Journal of Educational Psychology, the special issue in Educational Psychologist highlights the benefits of qualitative and mixed method approaches in educational psychology (Meyer & Schutz, 2020).

Another potential reason for shifts in methodology is the practical constraints and resources required to conduct educational intervention research. Conducting interventions requires time and, oftentimes, money. Researchers at all ranks may experience pressure to publish. When designing a research agenda, these researchers likely consider the timelines of projects and how those timelines align with various review processes (e.g., tenure and promotion). It could be that researchers are prioritizing research that requires less time for data collection to allow for a quicker publication process, although many would argue that good qualitative research may require even more time.

Many intervention studies, especially experiments, employ small sample sizes. The replication crisis in psychology may have increased awareness among educational psychology researchers of low statistical power. This could be one reason why more researchers are turning to large existing data sets, although such data are correlational by nature, with no manipulation of an independent variable.

Finally, intervention research often occurs in partnership with K-12 schools. K-12 teachers and administrators continue to face a variety of pressures: increased accountability metrics and high-stakes testing may create a more explicit focus on instruction that prioritizes teaching to the test rather than methods that lead to long-term learning (Morgan, 2016). These accountability policies may influence teacher burnout, stress, migration, and attrition (Ryan et al., 2017), leading to resistance against experimentation in classrooms. Although intervention research is intended to improve educational experiences for students, interventions may feel cumbersome to teachers and administrators given the current landscape of K-12 schools. Thus, compared to prior decades, school systems may not be as willing to partner in intervention research. This may be another reason why researchers access national databases with self-report data. Of course, there may also be cases where researchers must leverage secondary data because they are interested in better understanding minoritized groups of students who may not be well represented in primary data collection studies (see Fong et al., 2019); that being said, such data are still correlational and thus preclude robust RFP.

The Continued Rise in Recommendations for Practice

Our analyses revealed a continued increase in RFP in articles that used observational/correlational, qualitative, and mixed method approaches. Although authors may include these RFP in an effort to bridge the gap between research and practice, under-supported RFP may not have the desired effect. Using care when identifying and describing the RFP resulting from empirical research is in line with other movements in educational psychology intended to enhance the credibility of educational research. In particular, the call for greater transparency aligns with recent discussions about the benefits of open science (Gehlbach & Robinson, 2021).

Interestingly, as we analyzed authors’ RFP, variations in scope and strength became evident. In some cases, RFP were closely tied to the study sample, whereas in others they were aimed more broadly at a particular population. In still other cases, modifiers such as “may” and “might” preceded RFP to signal less confidence. Because the goal of our study was simply to examine instances of RFP, our analytic approach did not include coding for strength and scope. The underlying assumption of this study is that any RFP not based on methodology that can identify causal relations is suspect; however, stronger, more general recommendations may be particularly egregious.

There are some cases where authors seem to recognize the boundaries of their methodologies, yet still provide RFP. In some instances, we noticed clear RFP in the discussion section accompanied by statements qualifying these RFP in the limitations section. Oftentimes, in the limitations section, researchers would note that the methodology they used did not allow for causal conclusions and that future experimental research was needed before RFP would be appropriate. This mismatch points to potential pressures that might encourage RFP in studies that use non-experimental approaches.

Reinhart et al. (2013) concluded that correlational/observational research that relied on modeling as the analytic approach was more likely to include RFP. The hypothesis was that modeling approaches may “cause” use of more “causal” nomenclature such as “predictors,” “mediators,” and “outcomes.” Initially, we intended to code for modeling using an approach similar to that of Reinhart et al. However, the past 10 years have seen a dramatic increase in the types and uses of modeling approaches. Modeling approaches were so varied and prevalent in our target journals that distinguishing between what counts as modeling and what does not was no longer meaningful.

Disentangling why there is a rise in RFP in nonintervention articles is complex. Aspects of the publication process may facilitate the inclusion of RFP. For instance, differences of opinion on whether RFP based on nonintervention methodologies are warranted may lead to conflicting suggestions from reviewers. There may be cases where reviewers, editors, or publishers encourage or explicitly request that RFP be integrated into discussion sections. This may create an unintentional feedback loop: as authors are asked to integrate RFP into discussion sections, they may come to perceive these statements as the norm, and when they are later asked to review for journals, they may become the reviewers who request RFP.

Additionally, shifts in the particular expectations of journals may contribute to this increase in recommendations. As an example, the Journal of Educational Psychology requires authors to submit an “Educational Impact and Implications Statement.” The intent of this statement is to make studies easier for the general public to interpret; however, these statements may inadvertently encourage the inclusion of recommendations that go beyond the data of a particular study.

Conclusion

Our extension of previous studies confirmed that the trends of increased diversity in methods in educational psychology and increased ungrounded RFP have continued. The diversity in methods used in educational psychology provides opportunities to investigate a variety of research questions; however, researchers must carefully consider whether RFP rooted in particular methods are appropriate. Without this careful consideration, the field of educational psychology risks becoming more hat than cattle.