Introduction

Policy scholars have long been interested in learning, or the acquisition of new ideas, information, or beliefs by actors involved in policy processes, which can result in changes to policies, decision-making processes, and governance outcomes (e.g., Heclo 1974; Sabatier and Jenkins-Smith 1993; Bennett and Howlett 1992; Heikkila and Gerlak 2013). Recently, substantial attention has been paid to questions of learning in the realm of environmental policy. Environmental issues are fertile ground for the study of learning because they are characterized by high levels of uncertainty associated with cross-scale feedbacks, unclear problem definition and resolution, and diverse policy interests (Bressers and Rosenbaum 2000; Folke et al. 2005). When operating on their own, government agencies, institutions, and communities may not be well placed to deal with such complexity or to adapt policy and governance approaches effectively to changing social and ecological conditions (Innes and Booher 2010; Newig and Fritsch 2009). In response, scholars have identified and analyzed various environmental policy approaches that have been associated with learning. For example, learning is seen as a key feature of adaptive governance (Folke et al. 2005; Pahl-Wostl 2009) and adaptive co-management (Baird et al. 2014; Huitema et al. 2009; Armitage et al. 2008). Environmental governance scholars have also argued that learning can lead to improved governance outcomes, such as sustainability transitions (Bos et al. 2013a; Bodin and Crona 2011), or bridging cultural divides around conservation (Pietri et al. 2015). Others have delved into how types of learning, such as technical versus social learning, can play out differently in shaping environmental policy outcomes (Fiorino 2001).

Despite this widespread attention to learning among environmental policy scholars, questions remain as to how this literature has contributed to conceptual, theoretical, and empirical advancements. Bennett and Howlett (1992), in comparing some of the prominent approaches to policy learning over two decades ago, argued that policy scholars needed to pay closer attention to who is learning, what is learned, and to what effect. In this article, we argue that such critiques and recommendations are still relevant today. The primary aim of this article is to assess how the literature on environmental policy examines and engages with “learning” as an analytical device and conceptual lens, and to assess the strengths and weaknesses of that literature. Echoing Bennett and Howlett’s (1992) assessment of learning in the policy literature, a handful of scholars have recently examined or critiqued the status of the literature on learning as it relates more specifically to environmental governance. For example, Rodela (2013) has explored the themes and trends covered in the literature on social learning and natural resource management. Rodela et al. (2012) have also examined the methodological approaches and epistemologies guiding literature on social learning and natural resources. Additionally, Armitage et al. (2008), Muro and Jeffrey (2012), and Reed et al. (2010) have all offered valuable insights into, and critiques of, the literature on what in the environmental governance debate has come to be termed “social learning,” or learning that occurs as a consequence of the interaction between various actors.

Our analysis complements these studies by examining the status of literature on learning and environmental policy more broadly. We include literature from the fields of public policy, resource management (e.g., collaborative management, adaptive management), adaptive governance, and systems approaches. The focus of the analysis is on how the literature in environmental policy defines, explains, and analyzes learning. Therefore, we do not restrict our assessment of the literature to a particular type of learning (such as social learning) or to literature that adheres to a particular definition of learning (such as the definition propagated by Sabatier and Jenkins-Smith 1993). This allows us to examine how the literature treats or understands diverse forms of learning, including social learning, loop learning, policy learning, transformative learning, and learning by doing, and to consider how different theories or expressions of various types of learning play out across the literature. In addition, we deepen the debate on learning by using a standardized framework to code and compare a large sample of the literature. This has added value because earlier assessments have rarely relied on a standardized analytical approach (see Rodela 2013 and Rodela et al. 2012 as exceptions). For our analysis, we posit several criteria that we expect to see in published literature and then assess the literature by analyzing the content of 163 articles on learning and environmental policy published over the past decade.

In the following section, we detail our analytical criteria and methods. Next, we provide an overview of the research landscape, in terms of which journals are publishing this research and the overall balance of empirical versus conceptual research within our sample, before synthesizing the results of our analysis according to the research criteria. Following the presentation of results, we discuss the strengths and weaknesses of the literature. Overall, we find diversity in the questions and goals related to learning in the literature, but a lack of clear conceptualization, and limited theoretical advancement and measurement of learning. In discussing the implications of our study, we offer specific recommendations to advance the literature in ways that can improve our understanding of learning in environmental policy. Ultimately, doing so can help us diagnose the types of policy processes or governance features that can foster or impede better social and environmental outcomes.

Analytical criteria and methods

The criteria we use to guide our analysis of the literature on learning in environmental governance are based on recommendations established by social science scholars who have espoused a diversity of approaches to research design. First, we expect to see clear research questions or goals around learning, as well as theoretical grounding of those questions or goals (Gerring 2012; Singleton and Straits 2005). Theoretical grounding requires situating the research within a specific theory or framework, drawing on or comparing multiple theories, or developing new theory where existing theory is lacking (Singleton and Straits 2005; George and Bennett 2005). Theoretical development also requires attention to defining concepts and constructs (Goertz 2005; Gerring 2012). With a complex concept such as learning, clear definitions and operationalizations are all the more important.

Second, the cases and context for the research should be clearly identified, and the empirical research methods should be explicit and transparent to the reader (Gerring 2012). For empirical research, clear statements of hypotheses or propositions can also add to theoretical development. Additionally, employing a diversity of research methods in a body of research can enrich the development of the literature and enhance the validity of results over time (Poteete et al. 2010). This should also include testing and analyzing research questions across a diverse set of cases and contexts.

Third, we expect to see overall advancement in our knowledge about the phenomenon of interest through the literature as a whole. For instance, this may include advances in understanding the venues where learning is likely to occur, the factors that promote or inhibit learning, the stages of the learning process, or whether learning processes lead to changes in behavior or policy outcomes (Heikkila and Gerlak 2013).

While we recognize that our criteria may diverge from some epistemologies that underlie research in the environmental policy arena, we argue that these criteria are general enough to accommodate a large diversity of approaches to research on learning. We do not assume that either quantitative or qualitative approaches are superior or preferred, or that the literature must emphasize empirical applications over theoretical development. Rather, we embrace diverse methods and expect that, where theory is not yet fully developed, conceptual or non-empirical articles can enhance our understanding. Ultimately, across the literature, we expect to see growth in knowledge of the meaning of learning and of how it emerges or affects policy and governance outcomes, through rigorous theoretical and/or empirical research.

We identified articles for inclusion in our analysis by using six sets of keyword search terms: learning, environmental, natural resources, governance, policy, and management. Searches were conducted using two search engines, Scopus and Web of Science, both of which provide coverage of a large number of journals from different scientific fields. Searches were conducted for articles published between 2004 and May 2014. The searches produced over 7400 results, including some articles listed multiple times across the search engines and terms, as well as many articles not relevant to the study (e.g., because the terms learning and environment are used with very different meanings, such as in robotics). Due to the large number of results, we applied a purposive sampling method: the top 25 articles, as ranked by relevance by each search engine for each set of search terms, were selected for inclusion. This approach ensured that we gathered both relevant articles and a breadth of research topics. After accounting for duplicate articles covered by both search engines, and removing some articles that were not explicitly about learning in an environmental context, the total sample was 163 articles. (For a list of all of the articles in the database and their ranking by search term and by database, please see our supplemental online appendix.)
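For readers who wish to see the sampling step made concrete, the selection and de-duplication logic can be sketched in a few lines of Python. The sketch below is illustrative only; the record fields and helper names are hypothetical rather than drawn from our actual workflow.

```python
# A minimal sketch (not the authors' code) of the purposive sampling step:
# keep the top 25 relevance-ranked results per search engine and per
# search-term set, then drop duplicates. Record fields are hypothetical.
from itertools import product

SEARCH_ENGINES = ["Scopus", "Web of Science"]
TERM_SETS = ["learning", "environmental", "natural resources",
             "governance", "policy", "management"]
TOP_N = 25

def purposive_sample(results):
    """results: dict keyed by (engine, term_set) -> list of article dicts,
    each list already ordered by the engine's relevance ranking."""
    seen_titles = set()
    sample = []
    for engine, terms in product(SEARCH_ENGINES, TERM_SETS):
        for article in results.get((engine, terms), [])[:TOP_N]:
            key = article["title"].strip().lower()  # crude duplicate check
            if key not in seen_titles:
                seen_titles.add(key)
                sample.append(article)
    return sample
```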

To analyze the sample of articles and assess the criteria identified in the introduction, we developed a codebook and coding instructions, as shown in “Appendix 1.” The codebook captures general information on the type of journal publishing each article, the authors, title, date of publication, and whether the article is empirical or conceptual. It further includes fields to identify the articles’ goals, how the articles conceptualize and define learning, and how they ground their research theoretically. The codebook also captures how the articles approach the cases and context of learning empirically, including the environmental issue, the cases and their geographic location, and the methods of data collection and analysis. Finally, the codebook includes questions aimed at assessing what the literature contributes to our knowledge of learning, such as whether learning is linked to changed outcomes and which venues are associated with learning.
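To illustrate the structure of such a codebook, a per-article record along the lines described above might be represented as in the following sketch; the field names are paraphrased from this description and are illustrative, not a verbatim copy of the codebook in “Appendix 1.”

```python
# An illustrative per-article codebook record; field names are paraphrased
# from the description in the text, not the authors' exact codebook.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ArticleRecord:
    title: str
    journal: str
    journal_type: str                  # e.g., "ecology/natural resources"
    year: int
    empirical: bool                    # primarily empirical vs. conceptual
    states_research_question: bool
    learning_in_question: bool         # learning explicit in the question/goal
    theory_or_framework: Optional[str]
    defines_learning: bool
    learning_definition: Optional[str]
    learning_types: List[str] = field(default_factory=list)
    environmental_issue: Optional[str] = None
    geographic_location: Optional[str] = None
    geographic_scale: Optional[str] = None
    data_collection_methods: List[str] = field(default_factory=list)
    venue_for_learning: Optional[str] = None
    links_learning_to_outcomes: Optional[bool] = None
```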

A preliminary version of the codebook was tested and revised by the five co-authors, who then each coded a subset of the articles. Thirty-one articles were coded by multiple coders (two to four) in order to determine intercoder agreement across non-text-based fields. Agreement of 80% or above was achieved across 21 fields with yes/no questions or categories with numerical values, which we use in our analyses. Fields with yes/no or numerical values that did not reach this acceptable level of intercoder agreement (at least 80%) were not included in the analysis. Additionally, we re-coded the responses for four text-based fields into select typologies to allow for comparison of results: the type of journal, type of venue, type of learning, and geographic scale. Two or more coders reviewed and discussed the categorization for each of these fields to achieve 100% intercoder agreement on these categories. The remaining text-based fields (e.g., the definition of learning used in the article, or the coder’s perceptions of the article’s strengths and weaknesses) were not re-coded, but were used to qualitatively inform the analysis. Data can be made available upon request to the authors.
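As an illustration of the agreement check, simple pairwise percent agreement on a yes/no field can be computed as sketched below. This assumes straightforward percent agreement (matching codes divided by coder pairs compared) as the statistic behind the 80% threshold; the data structures and function names are hypothetical.

```python
# A minimal sketch of pairwise percent agreement on yes/no codebook fields,
# assuming simple agreement (matches / comparisons) underlies the 80% cutoff.
from itertools import combinations

def percent_agreement(codings):
    """codings: dict mapping article_id -> {coder_name: coded_value}
    for a single codebook field."""
    matches, comparisons = 0, 0
    for coder_values in codings.values():
        for a, b in combinations(coder_values.values(), 2):
            comparisons += 1
            matches += int(a == b)
    return matches / comparisons if comparisons else float("nan")

def reliable_fields(all_codings, threshold=0.80):
    """Keep only the fields whose intercoder agreement meets the threshold."""
    return [f for f, c in all_codings.items()
            if percent_agreement(c) >= threshold]
```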

Results

Research landscape

The 163 coded articles appeared in seventy different journals. An examination of the number of articles per year indicates that learning articles in our sample have increased in more recent years, with about 20 articles a year from 2010 to 2014 and about 7–10 articles per year in the years prior, except 2007, which had over 20 articles. As we did not sample randomly, we cannot say whether this trend is representative of the full population of articles; however, it does represent those articles identified as highly relevant in the field based on our sampling approach. As shown in Table 1, a majority of articles are published in ecology and natural resources journals (30%) and management and planning journals (27%), with fewer articles found in journals oriented to policy and politics (15%), resource-specific issues (14%), and other broader topics (10%). Additionally, most of the 163 articles are primarily empirical (112) rather than conceptual (51). Among those that are primarily conceptual, 35% are published in journals centered on ecology and natural resources. Of primarily empirical articles, the highest percentages are in journals focused on management and planning (30%) and ecology and natural resources (28%). In terms of individual journals, the largest numbers of articles examined are published in Ecology and Society (12 articles) and Environmental Management (12 articles), followed by the Journal of Environmental Management (11 articles), Environmental Science and Policy (8 articles), and Ecological Economics (6 articles).

Table 1 Types of journals publishing conceptual versus empirical articles on learning

Analytical criterion 1: Research questions, theoretical grounding, and concept definitions

Our first criterion addresses clarity in research questions or goals, attention to theoretical grounding, and conceptual development. First, our analysis found that 75% of the coded articles explicitly state their research questions or goals around learning. For example, many are interested in how learning can affect or enhance environmental management, policy, or governance outcomes. A research goal that illustrates this is offered by Dessie et al. (2012: 259): “The purpose of this study is therefore to investigate whether social learning plays a facilitating or impeding role in the adoption of soil conservation measures in Ethiopia.” Another paper, by Clark and Clarke (2011), states that its goal is to explore whether English National Parks show adaptive governance characteristics, including learning and adaptation. Some seek to explore learning in a particular case, such as urban water management in Australia (Bos et al. 2013a) or West African biosphere reserves (Levrel and Bouamrane 2008). Others are interested in understanding barriers or opportunities for learning within a particular environmental policy setting. Among the more conceptual papers, some aim to develop frameworks of environmental governance and policy that include or explicitly address learning, while others seek to draw from the learning literature to inform perspectives and indicators on issues such as sustainability. Although the goals are diverse, when we examined the goals and research questions qualitatively across these articles, we found that many use unclear, vague, or overly complex wording when stating their goals. While 25% of the papers do not include learning in their goal or research question, nearly all papers we coded treat learning as a key concept (98%). An example of a paper that deals with learning as a key concept without including learning explicitly in a research question is Brugnach et al. (2011). This article explores “the notions of framing and ambiguity,” and then considers “dialogical learning” as one of five strategies designed to deal with framing and ambiguity.

We also find that 44% of the papers state that an explicit theory or framework is being used as a primary guide for their paper. Among these articles, we find no overall agreement on, or coherent use of, any single or unified theory of learning. Some 32 unique theories and frameworks are referenced across the papers. We did not impose subjective interpretations on the articles in identifying theories or frameworks; our coding rules dictated that we code the theories or frameworks identified by the authors themselves, rather than what the coders perceived to be well-established theories or frameworks. As identified by the authors of the papers, the three most common are social learning (16 mentions, or 18% of theories and frameworks cited), theories of adaptive governance and management (11 mentions, or 12.5%), and the advocacy coalition framework (4 mentions, or 4.5%). Four papers reference more than one theory or framework, and in all four cases social learning is used in conjunction with another approach (the advocacy coalition framework, adaptive management, transformative learning, organizational theory, and principal-agent theory). About 25% of the papers aim to develop theories or frameworks and, of those, 18% intend to empirically test these theories and frameworks. Among the papers that develop their own frameworks, most integrate elements of other literatures that focus on learning. For example, Crona and Parker (2012: 2), in examining various literatures related to learning, state: “Our goal is to relate concepts, methods, and metrics from these research areas as a means of advancing research on learning in support of adaptive natural resource governance as it occurs in bridging organizations.” Across the articles in our sample, we find that 18% explicitly state hypotheses.

We also examined the primary bodies of literature that the authors say they are drawing from in each paper. We find that 48% of the papers (79 out of 163) used two or more primary bodies of literature to frame the analysis, accounting for 266 total mentions of different bodies of literature (adjusted for those that were unknown or unclear). A total of 59 bodies of literature were identified as framing or situating analyses of learning; the most common of these can be found in Table 2. Others of note include adaptation/resilience (12 mentions), integrated natural resource management (12 mentions), multi-stakeholder/participation literature (10 mentions), and organizational learning/studies (9 mentions). Bodies of literature explicitly related to learning were mentioned as follows: social learning (42 mentions), organizational learning (9 mentions), transformative learning (4 mentions), collaborative/participatory learning (3 mentions), experiential learning (1 mention), and urban learning (1 mention). In those papers drawing upon two or more bodies of literature, systems approaches and adaptive management/governance are often used in conjunction, as are social learning and policy sciences, and social learning and adaptive management/governance.

Table 2 Most common bodies of research for framing articles in the sample

Only 42% (n = 69) of the sampled articles include an explicit definition of learning. As shown in Table 3, learning definitions reflect four broad categories: social learning; policy learning; organizational learning; and generic definitions or other types of learning. A number of these papers draw from a specific source or reference to define learning, while others build their own definitions of social, policy, or organizational learning. Others provide their own generic definition of learning or name another type of learning, such as sustainability learning (Tabara and Pahl-Wostl 2007) or cognitive learning (Haug et al. 2011). Generic definitions of learning include conceptualizations such as information and knowledge acquisition and assimilation, exploration, and critical reflection (e.g., van de Kerkhof and Wieczorek 2005; Nilsson 2005; Lin 2012). Few authors build integrated definitions of learning (for exceptions, see Feindt 2010; Bendt et al. 2013; Heikkila and Gerlak 2013). Only 39% of conceptual papers define learning. As shown in Table 4, the majority of primarily empirical articles also do not include an explicit definition of learning (56%).

Table 3 Definitions of learning adopted
Table 4 Explicit definitions of learning in conceptual versus empirical articles

Even though the majority of sampled articles do not define learning, 82% (n = 134) refer to a type of learning. Among those articles that identify a type of learning, social learning is the most prominent. Social learning appears in 75 articles, or 56% of those articles that mention a type of learning. Social learning is identified in 37% of primarily conceptual articles and 50% of primarily empirical articles. This reflects the large number of articles that point to social learning as a theory or framework guiding the paper. The other types of learning identified in the sample include experiential (17%), organizational/loop (15%), collaborative (14%), policy/political (12%), transformative/adaptive (9%), and instrumental (7%) learning. The sampled articles identify many other types (25%) that do not fall into these categories, such as “conceptual,” “dialogical,” “sustainability,” and “generative” learning. Many articles (39%) identify multiple types of learning. For example, in reflecting on concepts of learning in environmental assessments, Sinclair et al. (2008) reference nine different types of learning: transformative learning, social learning, experiential learning, collective learning, individual learning, sustainability-oriented learning, adult learning, instrumental learning, and communicative learning.

Analytical criterion 2: Cases, context, and methods

The empirical research in our sample of articles often focuses on specific environmental issues. Water is the most prevalent issue across the articles (21%). Energy/climate issues and agricultural issues were each the focus of 11% of articles, while species or biodiversity issues appear in 9% of the articles and forests in 3%. Some articles (12%) tackle multiple issues. However, many articles (34%) either do not identify a specific issue or focus on an issue that does not fit clearly into the main resource issue types we identified, such as local development, tourism, sustainability indicators, environmental assessment cases, international environmental agreements, environmental education, and environmental alliances.

In looking at the geographic areas that the articles cover, as reported in Table 5, the largest percentage (28%) is situated in Europe, followed by North and Central America (15%), Australia (9%), Asia (7%), Africa (4%), and South America (3%). Another 12% of articles cover multiple regions, while 22% do not discuss a specific geographic location. Of articles that are primarily empirical, 31% draw their observations from Europe, 20% from North America, and 12% from multiple geographic locations.

Table 5 Geographic locations in conceptual versus empirical articles in the sample

In looking at the geographic scale of analysis, the largest proportion focused on “within-region” (e.g., state, watershed, province) scales (28%), followed by local (22%), national (9%), transboundary regions (e.g., regions and watersheds across boundaries, and international) (8%), and “other” (5%). However, 29% did not identify a specific scale of analysis.

The articles we coded employ various methods to examine and assess research questions or test theory. In looking at the data collection methods, we find that interviews (40%) and document analyses (30%) are frequently used. The research also relies on data and evidence from focus groups (23%), secondary analyses of existing literature (19%), and surveys (17%). Among the methods of analyzing data, 19% of the articles include descriptive statistics, but only a small percentage (10%) use advanced statistics, such as regression analyses or other econometric techniques. Instead, a large majority of articles use qualitative approaches in analyzing their evidence or cases. Among those using qualitative methods of analysis, only 33% provide explicit explanations of their methods.

In reviewing the research methods, we noted that few articles operationalize and measure learning directly (although we did not code for “direct” measurement). What we observed is that researchers often measure learning indirectly by observing factors that are theoretically linked to learning (e.g., adaptive capacity). Nearly a third of the articles (31%) treat learning theoretically, for example as a key assumption to explain why a particular governance process may or may not lead to certain outcomes. These articles then focus on analyzing the governance process rather than learning per se in the empirical study. When we assessed qualitatively how the articles measure learning, we noted several approaches to measurement. One is to identify a type of learning and then observe cases to determine whether those “types” emerged over time, or as a result of a particular process. For example, Haug et al. (2011) assessed “cognitive,” “normative,” and “relational” learning indicators among actors involved before and after a simulation on European climate policy. Others explicitly attempt to measure learning by operationalizing an underlying construct of learning. One article, for example, defined learning as knowledge utilization and then identified various context-specific indicators to determine whether knowledge utilization was observed in the cases (Crona and Parker 2012).

Analytical criterion 3: Contributions to building knowledge about learning

Finally, we sought to better understand what the literature is contributing in terms of building knowledge about learning. We found that 57% of the articles identified factors that enable learning. For instance, various authors argue that the selection of participants is critical in shaping whether learning occurs and to what extent (Garmendia and Stagl 2010; Muro and Jeffrey 2012; Robards and Lovecraft 2010). Additionally, learning should be included as an “explicit objective” (McDaniels and Gregory 2004: 1921) to better ensure that learning is achieved (Bos et al. 2013b; McDaniels and Gregory 2004). Others point out that it may be necessary to promote particular tools, such as decision support systems or professional facilitation, within venues to aid learning (Castella 2009; Lynam et al. 2007; Maurel et al. 2007; Raymond and Cleary 2013; Videira et al. 2010), and that the selection and application of such tools depend on the context and the stage of the process (Lynam et al. 2007).

One of the areas where we see attention to knowledge building across the sampled articles relates to the treatment of what we call “venues for learning.” Venues are the institutional locations and places, decision processes, or forums where learning may take place. Approximately 60% of the sampled articles identified a venue associated with learning. In coding whether an article identifies such venues, either empirically or theoretically, we included a qualitative description of the venue(s) described by the authors and then inductively organized the list of all venues coded into a set of common categories, summarized in Table 6. The most common type of venue, identified by 43 articles in our sample, is a specific type of meeting, such as a workshop or focus group. The second most frequent, identified in 31 articles, is a multi-stakeholder process or collaborative forum. Other categories that appeared in multiple articles include environmental assessment/peer review processes, organizational bodies such as watershed associations, and more broadly defined networks.

Table 6 Venues identified and associated with learning in the sample

Our findings suggest that the majority of the articles place an emphasis on venues that provide opportunities for face-to-face interactions and dialogue in studying learning. This may be a reflection of the significant attention to social learning in the articles we sampled, but also of the often expressed belief that interaction and dialogue foster learning. Indeed, many articles make explicit arguments that venues support dialogue and interaction (Albright 2011; Castella 2009; Colvin et al. 2008; Faysse et al. 2014), and require a diversity of stakeholders (Bond et al. 2011; Dessie et al. 2012, 2013; Garmendia et al. 2012; Wang et al. 2006).

Additionally, we coded whether articles identify the stages of the learning process and whether they demonstrate an empirical link between learning and changed outcomes. There was low intercoder reliability for these fields, likely attributable to the vague language used to describe these facets of learning, as well as the difficulty of creating coding rules that allow for reliable identification of these complex issues. Therefore, we do not have statistical results to report on these codes, but our qualitative review of articles in the coding process allows us to draw some findings. First, we found very few instances of articles identifying what we considered stages of learning, although some coders found stages in certain articles where other coders did not. At least subjectively, we found that many articles assume a linkage between learning and changed outcomes, but few make the link empirically explicit. We struggled, however, to operationalize an “explicit” link in our coding process. Given the coding challenges we confronted on these issues, we posit some recommendations below for how the literature might address these questions more explicitly, but also recognize that some contributions in the literature may be difficult to assess objectively.

Discussion

The literature on learning in environmental management has expanded considerably since Lester Milbrath (1989) suggested that we should “learn our way out of sustainability challenges.” Some 26 years later, an impressive number of books and articles on learning and its importance in environmental policy exists. Although scholars have explored systematically the literature on social learning (Rodela et al. 2012; Rodela 2013), there are no comprehensive reviews or assessments of the broader literature on learning and environmental policy. The results of our review help to address this gap and to provide insights on the overall coherence and impact of this body of scholarship. We summarize our main insights below and draw connections to the analytical criteria framing this review.

Houston, we have a theory problem

With respect to our first set of criteria, the literature not only needs to pay more careful attention to clarity in framing research goals and questions, but also needs to develop learning theory. First, over half of the papers in our sample had no explicitly stated theoretical approach to guide the research, which arguably impedes the identification, analysis, and/or measurement of learning variables and attributes. Second, within the set of papers that did indicate an explicit theory, our results indicated the emergence of many niche bodies of literature, suggesting a fragmented approach to theory. While we expect to see theoretical diversity given the diverse disciplines in the field, after a decade we might also expect to see some consensus in the literature on key theoretical insights. In other words, different “languages” are being used even among scholars examining similar phenomena, which may offer many perspectives but limited cumulative insight. Third, we uncovered a disconnect between the bodies of literature used by authors and the actual theoretical framing of learning. For instance, the bodies of literature explicitly related to learning (aside from social learning) do not figure prominently in the set of papers we coded. Bodies of literature one might expect to see referenced more frequently, such as networks and the advocacy coalition framework, are in fact mentioned infrequently. Moreover, several of the niche bodies of literature invoked to frame analyses of learning seem to be unique “constructions” developed to reflect a particular context, such as urban learning or visual problem appraisal.

Of course, theoretical development in the social sciences starts with explicit attention to the definition and conceptualization of the key phenomenon of interest (Goertz 2005). Only 42% of the articles studied include an explicit definition of learning. In other instances, many types of learning are mentioned in the same study without any definition or explanation. Clarifying definitions is important for theory development because different types of learning mean different things; different sets of conditions would therefore explain them, and different types of outcomes would be presumed to follow.

Even though the majority of articles studied here do not define learning, we do find that a significant majority of articles (82%) refer to a type of learning. Social learning is identified more than any other type of learning, mentioned in 46% of the articles examined. This reflects a trend toward the adoption of social learning as a primary way of discussing learning in environmental policy scholarship. Social learning has become a normative goal in natural resource management and policy (Reed et al. 2010) and is presented as an alternative approach to natural resource management (Rodela 2011). In our analysis, we find that researchers often use the concept of social learning without connecting it to a theory. Examples of studies that have connected social learning to a theory include Brummel et al. (2010) and Wilner et al. (2012), who use transformative learning theory to investigate distinct social learning processes and outcomes, and Van der Wal et al. (2014), who employ cultural theory to better operationalize and study social learning. Still, the social learning concept remains problematic. As earlier research has argued, social learning is often conflated with other learning concepts (Armitage et al. 2008; Diduck 2010; Reed et al. 2010). Despite the lack of a coherent theoretical foundation and a clear definition of social learning in the literature, there is a general understanding or presumption that social learning encompasses participatory processes, is heavily influenced by institutional design, and is expected to lead to better environmental outcomes (Siebenhüner et al. 2016; Reed et al. 2010; Muro and Jeffrey 2008).

It is time for some methodological and contextual diversity

Our review has revealed scope for greater transparency in articulating methods. In many instances, methodology is limited to anecdotal and subjective assessments. Further, the fact that only a small number of papers in our sample (18%) explicitly state a hypothesis reflects a relatively narrow methodological approach in much of the literature. The emphasis on qualitative methods may be tied to the significant attention paid to social learning. For example, Cundill et al. (2012) recognize that case studies are a valuable approach for studying social learning because they allow researchers to uncover processes of change. Rodela et al. (2012) similarly find that researchers using a social learning perspective to study natural resource issues tend to adopt methodologies that allow for in-depth descriptions, and focus on process rather than testing assumptions associated with social learning. Yet they argue: “This analysis exposes a tension. On the one hand, on the basis of the methodological choices being made by researchers, we find that the social learning discourse seems to be leaning toward the critical and interpretivist approaches, while on the other hand there seem to be expectations about testable knowledge” (Rodela et al. 2012: 21).

While we acknowledge the value of in-depth qualitative research for studying learning processes, we argue, alongside other researchers, that there is substantial room for improvement within the literature with respect to methods (Crona and Parker 2012; Heikkila and Gerlak 2013; Ison et al. 2013). Some examples of how to do this are available in the literature. Van der Wal et al. (2014) present explicit approaches for measuring social learning. Leach et al. (2014), for example, offer a clear definition of learning and a quantitative approach for measuring it. They focus on a limited sub-component of learning, similar to Crona and Parker (2012), who measure learning through knowledge utilization. Baird et al. (2014) look at three types of learning and use a mixed-methods approach for measuring each concept. The lesson from these examples is that explicit and reliable measurement may require a limited focus on a subset or type of learning. The downside is that the findings of such studies may be limited in their generalizability; still, such approaches lend themselves to more transparent and direct measurement. At the same time, more innovative methods of data collection, such as survey experiments, can offer insights on factors that shape learning. As one example, Montpetit and Lachapelle (2015) recently devised a survey experiment to test how exposure to scientific information influences policy learning around environmental protection.

In addition to more precision and diversity in analytical methods, we also need attention to more diverse cases to enhance the learning literature. For example, a significant proportion of the articles focus on North/Central American or European contexts: 42% of all articles and 54% (n = 69) of articles that identify a specific geographic location. Within this subset, European cases are featured twice as often as those in North or Central America. We can only speculate on why we see these patterns, but we offer a few tentative thoughts. First, the narrow geographic focus may reflect more limited experiences with learning and environmental policy in other settings. However, our suspicion is that it more likely reflects the convenience of cases near to the authors who publish in the journals in this field and/or the availability of funding for such research. Another explanation lies in the nature of the current scientific enterprise itself: leading journals published in English, with higher impact factors, tend to be based in these regions, and these journals may be more familiar with cases and studies from their own regions. Alternatively, there might be differences in the size of the scientific community present in these regions that instigates research projects aimed at fostering and analyzing learning, while funding for such research might be scarcer in other regions.

Another indicator of limited diversity identified in our analysis relates to the geographic scale of the empirical applications (such as a river basin or a municipality). For example, 50% of the articles surveyed focus on the local scale or on a within-region scale (e.g., a state, watershed, or province), while 9% focus on the national scale and 8% on transboundary and international scales. This attention to smaller jurisdictional scales in the literature, however, may reflect the idea that physical proximity can facilitate learning. Tentatively, we would propose that this is because the context in such settings differs from international or supranational settings (compare Young 2002), in the sense that actors at local levels often know each other better, know that they will be interacting for a while longer, and are more likely to engage in face-to-face interaction. At global and regional scales, the benefits of regional similarity for learning are not consistent across regions, and recommendations for improving learning at these scales include a focus on multi-stakeholder governing bodies (Lee and van de Meene 2012). More research at larger scales is necessary to improve our understanding of the nature of learning between differing groups, as well as to successfully approach global problems such as climate change (see Lee and van de Meene 2012).

Let’s get to the heart of the issue

In terms of our third criterion, we found that cumulative knowledge building in the field is limited. First, evidence of the factors that influence learning, or of how learning is linked to outcomes, is lacking. While we find substantial attention paid to the different venues associated with learning, the evidence of the factors that support learning within these venues is not well developed (or, at least, based on our coding, the contributions of the literature are difficult to assess objectively). Additionally, the literature appears to be challenged in linking learning to changed outcomes even though many papers state that as a goal. Beyond the factors we explored in our coding, we also note that the literature has failed to address some key issues. For instance, despite engaging with fundamentally social problems, the learning papers we coded largely lack any reference to theories of power. Mentions of power appear in the literature on learning, but there is limited evidence that learning scholars are adopting a theory-driven approach to assess power in the context of learning processes in environmental policy settings. More broadly, our results show limited evidence of any critical social theory being applied to learning issues.

To develop a more coherent body of literature, we recommend going back to the basics, or our first set of criteria. Greater clarity in definitions, terminology, and concepts is a key starting point. For instance, the lack of clear definitions has led many scholars to conflate the factors that cause learning with the outcomes of learning. The same factors, for example, may be listed as both “process features that foster social learning” (with an arrow leading from these factors to learning) and “social learning conditions and process.” Learning is commonly described as a process, but those process components are also termed “prerequisites” for learning. Recognition of these types of conceptual challenges is not new, but here we show the relative depth of the problem and its broader implications for the state of scholarship on learning. Our aim is not to advance a singular approach to the study of learning, nor would it be possible to do so. However, scholars working individually and collectively can foster internal consistency in theoretical and empirical studies of learning by carefully tying learning types and definitions to theory, so that empirical insights on consistently measured variables can be achieved.

There are some limitations to our research approach. For example, given that our sample of articles was not random, we cannot claim that our results are representative of, or generalizable to, the full population of articles published on learning and environmental policy. Therefore, our results are illustrative but not necessarily indicative of wider trends in the literature. However, a full analysis of this literature is not feasible given its scope and our desire for manual coding. Moreover, a fully random sampling approach was not feasible given the difficulty of identifying the true population of articles across such a diverse field of study. The number of articles produced in our initial search was over 7400, but many of those articles were “false positives” (i.e., articles not directly dealing with learning or environmental policy). A purposive sampling approach, in which the top 25 articles listed by relevance for each search engine and each set of search terms were selected, was used to ensure that we gathered relevant articles central to the debate across a diverse set of literatures on environmental policy and learning. Of course, the indexing algorithms of our two search engines, Scopus and Web of Science, could bias our sample. The algorithms that the search engines use to determine “relevance” consider factors such as the frequency of search terms, their location in the article (i.e., in the title, keywords, or abstract), and the proximity of one search term to another. So it is certainly possible, for instance, that journals that auto-index keywords might be overrepresented. At the same time, we restricted our searches to English terms, so articles written in other languages are not represented. The fact that our findings overlap substantially with earlier studies that have reviewed and critiqued related literature (e.g., Rodela et al. 2012; Crona and Parker 2012; Heikkila and Gerlak 2013) suggests that our findings are not likely to be an artifact of our sampling approach. We would encourage future research to explore alternative sampling methods to test the validity of these results further, as well as to assess whether the literature on learning is advancing, or learning, over time.

Conclusions: It’s official…we still have much to learn

With the growth of research on learning in environmental policy, it is valuable to assess the status of the literature and its contributions, and how we are “learning about learning” in the scholarly community. This review indicates several positive trends in the literature. In particular, the review draws attention to the interesting diversity of questions or goals being addressed in the learning literature, the examination of various barriers or opportunities to learning, and consideration of how learning can support sustainability and facilitate positive environmental outcomes or behaviors. These findings complement previous research, which suggested that the conceptual landscape of environmental learning is rich and that it cuts across many academic fields, including education, psychology, and social psychology (Lundholm and Plummer 2010).

However, in considering the criteria we set forth at the beginning of the paper, our analyses suggest there is scope for further development on a number of fronts, echoing the calls for clarity on “who learns,” “what is learned,” and “to what effect” made in this journal in the 1990s (Bennett and Howlett 1992). First, theoretical grounding and development could be more direct, especially with respect to the conceptual and operational definitions of learning and hypothesis testing. Second, within our sample of articles, the empirical applications are limited in their diversity of cases and methods, and in the clarity of their methods. Third, we find limited cumulative knowledge about the nature of learning processes in environmental policy, what facilitates learning, and how learning affects governance outcomes. This should concern all disciplines involved in the study of learning, and thus the policy sciences, too. We speculate that greater levels of interdisciplinary collaboration would help create a meta-discussion about learning, learning concepts, and learning theories. Our impression is that more intensive interaction between policy scholars and learning/pedagogy scholars could pay off (as helpfully demonstrated by Haug et al. 2011).

To extend empirical insights, greater emphasis is needed on designing research on learning in ways that enable more rigorous assessments of when learning occurs, what leads to learning, and the individual behavioral changes that result from learning processes, including changes in power relationships or changes in routinized behaviors that lead to environmental degradation. Without making these linkages, we are still not able to state with confidence whether, and which, learning processes and/or governance venues actually matter. To improve analyses of the factors associated with learning and learning outcomes, we believe that better theoretical development, more diverse methods and cases, and more rigorous qualitative approaches are in order. For example, case study research can employ more longitudinal studies of environmental policy, such as process tracing, to tease out the factors that support or impede learning. Scholars should also employ methods that are largely missing from the literature, such as laboratory and field experiments, which may require interdisciplinary research teams.

We also recognize that there may be many additional areas of empirical research related to learning and environmental policy where scholars can make new contributions. Studying learning processes and outcomes in relation to alternative governance modes, such as networks, market-based governance, or hierarchical governance, is one example of where further attention is warranted. That is, there seems to be an implicit assumption of collaborative modes of governance at the core of learning, yet relatively few direct comparisons across alternative modes.

In summary, research in the field of learning and environmental policy is growing and addressing many important questions for practitioners and policymakers. However, based on our sample of the literature, the field as a whole is facing many challenges with respect to conceptualizing learning, and theorizing and measuring learning processes and outcomes. Given the limitations we observed, as well as the opportunities we have identified for extending the field in new directions, we believe there is substantial work remaining that is worthy of our collective efforts.