1.1 Introduction

The integration of the scientific knowledge produced in the field of work and organizational psychology (WOP) into the practices of professionals in different types of organizations has always been a core concern of research. The idea that knowledge is applicable to the concrete reality of organizations, improving their work and contributing to the quality of life at work, has been an assumption widely shared by scholars since this subject emerged as a specific branch of psychology. In fact, the development of WOP is partially due to the effort to solve concrete problems resulting from the dynamics of social systems in labor organizations, initially with an emphasis on issues related to personnel selection and then expanding to the dynamics associated with communication, motivation, leadership, and decision-making. In recent decades, it has incorporated all processes related to the individual's relationship with work and with others in group and organizational contexts, focusing on the different dimensions (cognitive, affective, and behavioral) that drive social systems and performance at the different levels of organizational life. The accumulation of knowledge has expanded all of these areas, and its quality has been critically evaluated and improved.

However, despite this intimacy with the professional world typically claimed within the WOP scope (e.g., Briner & Rousseau, 2011), for many decades we have observed a significant gap between the scientific knowledge produced and the professional practices implemented in organizations, as if they were two autonomous worlds focused on the same reality but with few points of connection. In fact, a study carried out with members of the Society for Industrial and Organizational Psychology (SIOP, USA) showed that most professionals believed that practice was more advanced than research in 14 areas of activity (Silzer, Cober, Erickson, & Robinson, 2008).

This separation, or gap, has been widely discussed in recent years (e.g., Anderson, Herriot, & Hodgkinson, 2001; Bartunek, 2014; Rousseau, 2006; Rousseau & Gunia, 2016). It is particularly relevant at a time when, on the one hand, the scientific evidence available in WOP is continually being developed and updated and, on the other hand, the complex and dynamic realities of organizations increasingly demand decisions and interventions based on reliable and robust knowledge. The rudimentary or null use of scientific knowledge in professional practice translates, at the very least, into the suboptimization of practitioners' efforts, or even the inefficacy of their performance, while at the same time evidencing the underuse of the investment in research.

The concern about effectively articulating research and professional practice is not specific to WOP. The debate also exists in other fields of psychology and in disciplines such as health, political science, sociology, education, social work, organizational behavior, management, and entrepreneurship. In the field of management and organizational behavior, the debate gave rise to a movement that attempts to actively promote scientific evidence-based professional practices, inspired by the principles and methods that evidence-based medicine uses to translate scientific knowledge for health professionals (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2011).

This chapter briefly analyzes this problem in its several dimensions: the tensions at stake between the knowledge produced by research and its applicability in professional practices, as well as those at the level of the academic community and teaching practices in the university. It also discusses some of the main factors that shape the current situation. Taking into account the debate and the evolution that have occurred in other disciplines, notably medicine and management, it then presents alternatives for building and solidifying bridges to reduce the gap between the two worlds, in order to increase (a) the relevance of research to professional practice and its actual efficacy and (b) the contribution of professionals to the development and practical relevance of research and to the advancement of scientific knowledge in this field.

1.2 The Main Gap in the Articulation Between Scientific Knowledge and Professional Practice

The most elementary, and probably most critical, tension between the knowledge produced by academics and scientific researchers and the knowledge applied by practitioners in organizational contexts can be expressed in the allegedly irreconcilable binomial of rigor and relevance.

Briefly, the process of producing scientific knowledge is focused on rigor, demanding a set of duly typified and standardized procedures to ensure the robustness of its validity. However, practitioners make little use of this knowledge. As it seems, rigor is not accompanied by relevance of this knowledge for the problems that practitioners have to handle. In a broad definition, we could consider "practitioners" to be the professionals "who make recommendations about the management or development of persons in organizational contexts or who advise those who do it" (Cascio & Aguinis, 2008, p. 1062). In other words, practitioners are those who somehow influence the dynamics of organizations, with impacts on efficiency, efficacy, and sustainability and on the performance and quality of life of individuals at work. In this sense, the term comprises work and organizational psychologists but also many other professionals, with formal training in similar or less similar fields, such as managers and organizational consultants.

Naturally, the first question is whether this rigorous knowledge effectively exists, whether it is used in practice, and, if not, why.

More than two decades ago, Murphy and Saal (1990) colloquially put the problem this way: "many times managers believe that half of what psychologists know is common sense, and the other half is wrong" (p. 1). Exaggeration aside, this observation points out that organizational decision-makers do not use the knowledge produced by academics when they have to deal with problems that clearly fall into the field of work and organizational psychology, such as personnel selection, work organization, motivation, leadership, or performance management. There is no formal requirement for consultants, managers, and other decision-makers to have had specific training in this area or to have acquired scientific knowledge about the topics comprised by work and organizational psychology, although most of their activities directly or indirectly affect the behavior and life of people at work (Rynes, 2012). Often it is simply impossible to use what one does not know. However, there are also plausible explanations for when one knows but does not use it, as we explain below.

In recent decades, academics in the organizational area have increasingly defined their research problems following a university dynamic focused on productivity, measured through the articles (or books) published with the results of their studies, regardless of any relationship with the problems faced by practitioners in organizations and, eventually, disregarding their contribution to the effective advancement of knowledge and its applicability (e.g., Mohrman & Lawler, 2011; Van de Ven & Johnson, 2006). This strategy has somehow contributed to intensifying the divide and to de-emphasizing the desirable contribution of the knowledge produced to society.

Some authors point to the policies promoted by universities as the main reason why so little of the knowledge produced can be used by practitioners, policies that lead hundreds of thousands of talented researchers to produce little or no long-lasting knowledge that constitutes a genuine contribution to knowledge (Starbuck, 2006, p. 3). Analyzing the development of academic research over more than 25 years, Mohrman and Lawler (2011) emphasized that carrying out research that bears on both theory and practice is not the prevailing orientation in the field of organizational and managerial sciences.

Even when the produced knowledge is duly validated and could be very relevant to improving the efficacy of organizational practices, practitioners often neglect such knowledge and opt for less suitable alternatives. For example, one of the areas in which research has consistently produced robust knowledge with ensured internal and external validity, i.e., knowledge that can be generalized to organizational contexts, is personnel selection. Instead of using duly standardized selection devices, including the structured interview, which has a proven capacity to predict applicants' performance, managers and decision-makers usually prefer subjective methods rooted in the belief that their intuitions are reliable (e.g., Grove, Zald, Lebow, Snitz, & Nelson, 2000; Highhouse, 2008; Lievens, Highhouse, & De Corte, 2005; Rynes, Colbert, & Brown, 2002). Several studies point to the widespread use of unstructured interviews (more than 50%), even when professionals are aware of the usual problems resulting from this sort of interview, problems that, they believe, may "happen to others" but "not to them" (e.g., Church, 1996; Dipboye, 1997; Rynes, Barber, & Varma, 2000). Apparently, one reason for this behavior is the belief that this way they have more flexibility in their analysis and decision-making, besides being "naturally very good at conducting selection interviews."

Another example of robust knowledge extremely relevant to organizations refers to the mental ability of applicants. As Schmidt (2009) says, "keeping other things the same, higher intelligence leads to better performance in all roles" (p. 3). According to the author, scientific research shows that, when three conditions are present, personnel selection based on intelligence ensures significant improvements in performance: the labor force available in the market allows actual selection; intelligence is measured with standardized tests; and performance in the role has variability greater than zero (e.g., it is not mechanically paced). Studies show that, for functions demanding trained skills, "top workers can produce 15 times more than those on the performance baseline" (Schmidt, 2009, p. 7), and for more complex duties the difference is even higher (Hunter, Schmidt, & Judiesch, 1990). However, despite these and many other empirical studies that systematically show that general intelligence is the single variable with the strongest capacity to predict individual performance, professionals still tend to disregard it and ground their decisions on other kinds of indicators, such as school ranking, experience, or personality testing and, "obviously," the intuitive interview (e.g., Pinker, 2002; Rynes et al., 2002; Schmidt & Hunter, 1998).

We emphasize the selection area because it is perhaps the one where the robustness of the knowledge produced is most evident and where, nonetheless, the gap between research and practice still persists. It is not surprising to find this gap in other areas, such as motivation, goal orientation, fairness, satisfaction, well-being, stress and occupational health, communication, creativity, decision-making, teamwork, leadership, conflict, change, and training (Rynes, Giluk, & Brown, 2007).

Somehow we could say that, 70 years after Kurt Lewin (1951) pointed out the path to optimizing the mutual fertilization between science and practice, the two keep following parallel paths, like the banks of a river with no bridge or, more exactly, with relatively weak bridges, each community isolated on its own bank and developing different beliefs, cultures, and interests (e.g., Daft & Lewin, 2008; Vermeulen, 2007). Even when theoretical knowledge is fully developed on one of the "riverbanks," it should be tested by the degree to which it contributes to improving organizational life (Mohrman & Lawler, 2011).

The perspectives that "there is nothing more practical than a good theory" and that "we cannot understand reality until we try to change it" express what could be the maximalist design of the articulation between research and practice in work and organizational psychology and in other organizational subjects. Living, as we do, in the knowledge society, it is hard to conceive that these major guidelines are not being valued.

Using Lewin's metaphor of force fields and change, we should identify some of the restraining forces that help sustain these gaps and the driving forces that could help overcome or minimize them.

1.3 From Rigor to Relevance and from Relevance to Rigor

In a simplistic view of the issue, it would just be a matter of carefully translating scientific knowledge into language more accessible to professionals so that they, after understanding it, apply this knowledge with the likely resulting benefits (Rousseau & Boudreau, 2011). However, and regardless of whether the "translated" product remains scientific knowledge (e.g., Kieser & Leiner, 2009), the existing gaps involve a greater complexity that should be clarified.

1.3.1 Research Practices

Many factors can be identified that, in different ways and to different extents, contribute to these gaps; they concern either the process of producing scientific knowledge or the process of disseminating it.

Many of these factors are found in the academic community itself and in practices that for decades have ruled the production of scientific knowledge in organizational psychology and in other social and managerial sciences.

Firstly, there are tensions in the academic community regarding the relevance and value assigned to certain research strategies and processes of knowledge validation, e.g., experimental vs. correlational studies, quantitative vs. qualitative studies, and cross-sectional vs. longitudinal studies. Considering the complexity of the phenomena under study, all these approaches can contribute to advancing knowledge; however, the existing tensions are far from facilitating the required complementarity and integration.

In this context, we find dozens of alternative (sometimes even discrepant) theories or models about the same phenomena, for example, motivation and leadership. Therefore, we should first ask to what extent the academic community agrees on the quality of the produced knowledge. A study carried out among senior academics in European countries found that, of 24 essential scientific findings in WOP, only four enjoyed a high degree of consensus about the existence of high-quality scientific evidence, in the sense of well-grounded and evidenced knowledge (Guest & Zijlstra, 2012).

Within the dialectic of knowledge production, the existence of competing theories is naturally healthy; however, at the level of the academic community, it is not worked on enough to produce basic principles that could be generalized, consensually validated, and then disseminated and transferred for practitioners to use in organizational contexts (e.g., Locke, 2002; Locke, 2009). Basic principles held to be true, even if they result from laboratory work outside organizational reality, are naturally amenable to application in some context; we should refrain from applying them only when they prove not to be true.

The production of scientific knowledge in organizational psychology, just as in other sciences, can be basically organized around three axes: identifying and observing the effects resulting from the action of some variables on others; analyzing the intervening psychological processes or mediating mechanisms that explain that relationship; and specifying the moderating factors that clarify how and to what extent contextual variability influences the manifestation of those effects (e.g., Fiske & Borgida, 2011).

According to the canon that rules research in psychology and other social sciences, the rigor of scientific knowledge mainly results from evidence of its internal and external validity. Internal validity is ensured by a set of methodological and technical procedures to be observed at the time of production, following standardized protocols. In this way, the knowledge produced, for example, in experimental or longitudinal studies may have a high degree of consistency and internal validity, i.e., rigor. External validity mainly refers to the moderating and contextual factors that specify and qualify the predictable conditions and effects of applying the knowledge and its possibility of being generalized to different contexts.

Members of the scientific community must intervene in evaluating the validity of such knowledge. As there are no fully objective standards to verify this validity, the commonly accepted criterion is review by experts (peers) in the scientific area and their consensus. The perceived rigor of the knowledge is usually associated with the reputation of the journals in which it is published.

This rigor, stipulated by consensus, is likely one of the most critical factors for advancing knowledge in Popper's perspective (Popper, 1959) and for the plausibility of applying it to practitioners' specific work contexts. General principles do not embrace the specificity of the application context, although they could (and should) also predict how effects vary according to certain contextual moderators. In fact, beyond the production of knowledge and explanatory theories, there is a great shortage of studies and theories about application that could guide interventions in this field (Pritchard, Harrell, DiazGranados, & Guzman, 2008).

As several reviews have shown, the hundreds of thousands of studies published in recent decades mostly focus on research micro-problems, attempting to isolate variables that are far removed from the problems faced by practitioners (e.g., Boehm, 1980; Cascio & Aguinis, 2008; Heath & Sitkin, 2001). Moreover, in the culture that prevails in scientific work, the theory-driven approach to problems is valued above all, i.e., the identification of problems based on existing theories, or the production of theories that define the research issues, dismissing the study of concrete issues of organizational reality. This hinders professionals from feeling "empathy" for research problems and models that, by definition, are incomplete and fragmented. Rather, they adhere to the solutions and cure-alls of gurus, which are easier to understand, more accessible, and disseminated in more attractive ways. However, knowledge tends to be very fragmented in these narratives, simplistically labeled as trans-contextual, i.e., assuring practitioners that it works in any concrete situation to which it is applied, typically with no major effort (except financial effort, certainly).

Therefore, the two research approaches, theory-oriented and problem-oriented, should go beyond being alternatives and be applied in a complementary strategy in which each improves the other in the production of rigorous, robust, and relevant knowledge. In this sense, improved collaboration between academics and practitioners could be important in a logic of knowledge co-production. This implies engaging practitioners in the definition of the problems to be studied, in the research design, and in the reading of results, mainly in relation to the contexts of application (Mohrman & Lawler, 2011; Van de Ven, 2007). This is an additive strategy, in the sense that it still assumes the autonomous exercise of each of these approaches, with their respective advantages and limitations.

Besides the issue of defining the problem (i.e., the relevance of the problem in the practitioners' eyes), research results are published in scientific journals following quite standardized rules that are not accessible to professionals at large; even when the contents are accessible, it is hard for practitioners to decode them and extract something applicable to their specific issues.

As mentioned above, in the last few decades tradition has demanded that academics publish in those journals, dismissing publications of a professional or general nature that translate the constructs and results into easily understood language (e.g., Adler & Harzing, 2009; Shapiro, Kirkman, & Courtney, 2007; Starbuck, 2006). Communication between academics and practitioners is clearly a relevant issue, associated with the language and symbols employed, as well as with their different frames of reference (Benson, 2011; Rousseau & Boudreau, 2011).

Thus, the way the produced knowledge is disseminated, or not, is another factor that certainly contributes to the gap between research and professional practice.

Another aspect that has not been properly addressed concerns the common standard that every study must have a section in which the authors describe how the findings could give rise or be transferred to applications in the organizational context. This standard is somewhat equivocal. On the one hand, as previously mentioned, most professionals have no access to those articles, so the applicability section usually has no practical outcome: in fact, it recommends or suggests applications to other researchers, not to practitioners. On the other hand, a random reading of such sections shows that the suggestions of applicability are usually at such a high level of abstraction and generality that they can hardly be read as guidance for concrete action in organizational reality. This is partially understandable, since a specific, fragmented result should not be generalized, given the number of variables involved. In other words, this practice seems to be more a trap of "relevance" than an effective exercise in improving professional practices.

For this section to be more useful as a prescription of what to do and how to act in organizational contexts, authors should combine the results found with the accumulated knowledge on the topic of their research. Moreover, they should also articulate them with the knowledge of practitioners themselves, such as consultants, technicians, or organizational decision-makers.

1.3.2 The Practices of Practitioners

On the professionals' side, many factors play a role in the persistence of the problem and in the belief that scientific knowledge does not add value to their "empirical" and "intuitive" practices (e.g., Rynes, 2012). In strictly instrumental terms, notable constraints are limited access to scientific journals and limited time to update scientific knowledge. This leads professionals to resort to general or open media, printed or online, whose reliability varies widely, including popular books with accessible and appealing language publicized, for example, by the guru industry (e.g., Rynes, 2012). To these objective factors we should add others that are just as determinant, or more so, in the decision not to apply research results, as described below.

First of all, there is a somewhat paradoxical attitude among professionals about what they consider relevant and useful scientific knowledge for their concrete situation (Highhouse, 2008). On the one hand, when they are familiar with basic principles resulting from research, which are general by definition, like any scientific law, they consider that these are not fully applicable to their working context, given the large number of variables and their complexity (e.g., Boehm, 1980; Highhouse, 2008). On the other hand, they hunger for simple and simplified ideas, believing that there is one single solution for different situations and organizations, regardless of how complex and different they are, and easily adhering to the fads and trends that their authors never cease to persuasively promote.

Frequently, practitioners tend to follow common sense, easily accepting ideas, "facts," examples, and intuitions based on their personal experience or on sources they rely on, such as the "ready-to-use ideas" proposed by popular authors. These are attractive and relatively easy to use, since most of them are simplified or simplistic ideas suggesting that one solution can serve a wide range of situations and contexts. This easy acceptance, with no critical scrutiny by practitioners, deserves to be studied.

Secondly, besides scientific knowledge, there are other forms of knowledge that can be efficaciously mobilized to solve problems in organizations. Throughout their working experience, professionals develop empirical knowledge that can be transferred to new situations with highly efficacious application. The expertise associated with the continued exercise of an activity enables the development of cognitive schemas that facilitate diagnosing problems and seeking practical solutions. According to some studies, ten years of work are enough to build professional expertise capable of significantly improving the efficacy of practitioners' performance (e.g., Ericsson, 2006; Simon, 1996).

Likewise, professional consultants develop empirical knowledge and intervention models whose application can be efficacious in solving organizational problems (e.g., Van de Ven, 2007).

Thus, to address the complexity of concrete problems, often with singular nuances, the knowledge generated by the three aforementioned sources (research, consultancy, and professional experience) should be mobilized and combined to increase the likely efficacy of the decision or intervention.

Thirdly, power games in the organization should be considered. Even when the predictable beneficial effects of applying scientific knowledge are clear, decision-makers may choose not to apply it, managing problems according to the interests and commitments of the players involved and without compramising the practices in use in the organization, as mentioned above. In other words, we cannot assume that practitioners effectively want to anchor their decisions in a strictly rational analysis of the problems. Frequently, they prefer to approach them according to criteria of usefulness and power strategies (e.g., Latham & Whyte, 1994; Pfeffer & Sutton, 2000). Moreover, a considerable share of WOP professional practice is performed through consultancy services, which have only indirect influence on organizational decision-makers.

In addition, insufficient learning of scientific knowledge and of the critical thinking needed to duly analyze problems and change usual practices is very frequent, and it points to another associated gap: the research-education gap.

1.4 The Research-Education Gap

Typically, university educational practices fail to duly foster students' scientific spirit and critical thinking. Because, as students, they did not learn and practice the process of producing and validating scientific knowledge, they do not acquire competences that are increasingly crucial in the knowledge society: knowing how to search for the proper information and how to rigorously filter the data and information available today. Paradoxically, the shortage of these competences makes many professionals unable to untangle, amid the abundant information, what is most useful and applicable to the problems they face every day in organizations.

Therefore, one could say that the gaps between research and practice start being cultivated in the university, through pedagogical practices in which most students have no opportunity to become acquainted with and practice the process of producing and validating scientific knowledge, or to apply critical thinking techniques to theories and research results. They are exposed, when exposed at all, more to the updated results of research than to the permanent development of knowledge, which is the distinctive feature of scientific knowledge and an exemplar of critical thinking.

When this happens to professionals during their university studies, they cannot be expected to later seek scientific knowledge and use critical thinking to apply it to the problems they have to solve.

As such, many professionals are not duly trained to keep pace with the general, technological, social, and organizational changes of the last decades or to incorporate into their frames of reference and practices the new knowledge that keeps being produced. Let us recall some changes since the year 2000 and the new realities that emerged: Facebook and other social networks, the iPhone and other smartphones, the iPad and other tablets, virtual work, speculative bubbles, precarious work, new generations at work, etc.

In the face of the uncertainties associated with changes as fast and tempestuous as those we are witnessing, any professional should update their knowledge to feel confident in solving problems. However, instead of a hunger for scientific knowledge, professionals are seduced by other types of knowledge: skillfully packaged, easily accessible, popularized among peers, and easy to memorize. Mastering some scientific literacy helps qualify managers and other professionals to systematically try to ensure the reliability and usefulness of the indicators and metrics about the organization and its activity (Rousseau, 2012a, b). On the other hand, the overwhelming majority of the information available on the web cannot ensure that the proposed interventions or decisions are the most suitable; the information must be filtered to determine which is more reliable in terms of rigor and quality, and failing to do so can lead to not using it or to using the wrong information. The lack of scientific literacy is an objective obstacle to the proper use of this information and to filling the research-practice gap.

In many disciplines, students are not sufficiently socialized in the culture, methods, and techniques of scientific knowledge production, or in critical thinking and the rigorous evaluation of information and its sources. Perhaps one of the main barriers to the use of research results in professional activity is the fact that students have not developed this sort of scientific literacy. Sometimes they do not even come into contact with research results during university, and thus their study relies basically on academic handbooks or on popular books that disseminate their topics of study. In other words, not only do they not learn to think critically according to scientific rigor standards, but they also do not learn to seek, identify, select, and evaluate publicly available information.

Specifically in organizational fields, there is a frequent gap between what is taught and what the research says (Charlier, Brown, & Rynes, 2011). Since most of the information provided, mainly on the web, does not have to comply with quality, rigor, and validity standards, and since the vast majority of professionals lack those skills, we find a seemingly paradoxical situation: the greater the volume of information available, the less professionals tend to use scientific results in their decision-making or in their attempts to implement interventions. This suggests that the research-education gap deserves special attention and that the pedagogic practices of several university courses should be adjusted.

Educational approaches based on cognitive load theory, combined with models of expertise development in academic and professional performance, suggest a set of guidelines for highly efficacious pedagogical practices grounded in scientific evidence. The path between the novice (usually still at the university) and the expert (usually many years after concluding formal education) includes many stages of increasing complexity of cognitive schemas and requires different types of learning and several resources, namely memory, perception, and thinking, among others. Considering the differentiation proposed by Hatano (e.g., Hatano & Inagaki, 1986), later generalized, between routine expertise (repetitive work, solving well-defined everyday problems) and adaptive expertise (the capacity to solve complex and unexpected problems, transferring competences and knowledge to contexts characterized by high degrees of uncertainty), pedagogical practices should provide a large number of opportunities to develop the adaptive expertise that will be transferred and further developed in subsequent professional contexts.

Thus, while memorizing concepts, creating and organizing cognitive schemas, and training and optimizing their use are necessary, learning to mobilize these cognitive resources to solve predictable and unpredictable problems and to make decisions in different contexts is crucial at a time when the complexity of real situations and the speed required at work are steadily increasing. To overcome the research-education gap, pedagogical processes should ensure that students develop the competences to keep their knowledge permanently updated throughout life, to seek reliable information, to solve complex and unexpected problems, to critically differentiate validated and robust information from wrong or biased information, and, above all, to learn and adjust to new situations. The development of these competences demands a learning process capable of maximizing critical thinking and the capacity to properly apply concepts and theoretical models, as well as to solve complex problems. Moreover, it requires combining these cognitive factors with the affective and motivational aspects that influence performance. Such a pedagogical process will maximize learning even more if it requires students to practice the critical analysis of opinions, beliefs, and facts, to become familiar with formulating and testing hypotheses to solve problems (Goodman & O’Brien, 2012), and to gradually acquire self-regulation and self-efficacy techniques. Zimmerman (2002) suggests several self-regulation techniques to be developed, notably self-monitoring, self-evaluation, goal setting, and task strategies. It is not the teaching methods per se that enable this learning, but how they are combined and integrated in an education approach based on scientific evidence. For example, “short exhibition sessions could be alternated with cases or other activities” (Zimmerman, 2002, p. 317).
In this way, the concern with evidence-based learning is not tied to any specific pedagogical method.

1.5 Knowledge Based on Professional Experience

The professional experience of managers, consultants, and other professionals reflects repeated practice and the resulting specific knowledge, and may constitute highly specialized tacit knowledge. Beyond repetitive automatisms, learning from one’s own and others’ experience is not as easy and direct as some simplistic approaches assume, and it can result in highly biased and wrong knowledge (Denrell, 2003). The accumulation of professional experience in a single domain can translate into adaptive expertise if it effectively becomes part of a network of meaning developed by the individual. In many areas, this process can take up to ten years of practice (Chase & Simon, 1973; Ericsson, 2006; Hoffman, Shadbolt, Burton, & Klein, 1995; Sutcliffe, Weick, & Weick, 2009). Development through the different stages, from newcomer to the professional world (a “novice” in the terminology of Hoffman and collaborators, 1995) to expert in a given field of activity, is not only about accumulating years of practice, since individuals can remain in intermediate stages for long and varying periods.

The knowledge arising from individuals’ experience may override their beliefs and opinions and offset eventual biases, especially if they habitually analyze these experiences critically and establish cognitive patterns that enable them to “intuitively” recognize configurations in the existing situation and potential courses of action and to efficiently analyze complex problems (Simon, 1996). In these cases, combining this source of evidence with other sources of information, notably scientific evidence, can be extremely useful.

However, professional experience alone is not enough to ensure this effort. On the contrary, it is often treated as a substitute for other sources of information, since professionals tend to trust their experience and find no reason to seek different information.

Therefore, evidence-based practice demands a conscious and focused effort by professionals, as an alternative to automatisms, habitual practices, and quick decision-making processes. These are common in everyday management and are often based on low-quality information, such as personal opinion and experience, “best practices,” and leaders’ opinions. Making decisions this way does not necessarily demand more time but, above all, a permanent critical and proactive attitude, based on more than one source, which systematically challenges and checks the reliability and relevance of the information being used. What effectively defines decisions is not the evidence itself but its analysis and interpretation in light of the intended goals. In this process, information is usually incomplete, and in most situations problem-solving and decision-making remain probabilistic, but they become more likely to be responsive. Synthetically, and in pragmatic terms, this critical attitude means systematically asking before making a decision: “Which evidence corroborates it?,” “To what extent is this information reliable?,” and “Is this the best evidence available?” (Rousseau, 2012a, b). These questions apply to the different types of information to be analyzed. Moreover, they presuppose the capacity to distinguish high-quality from low-quality research.

The factors outlined above, associated with the gap between research and practice, act as obstacles to bridging the two sides. Therefore, this approach suggests that academics should enhance their efforts to effectively narrow this gap, producing research that is simultaneously rigorous and relevant.

1.6 Options to Shorten the Research-Practice Gap

As aforementioned, this problem is not specific to work and organizational psychology; many other disciplines face it as well. Among these, the field of organizational management and behavior is where the debate is most persistent. That area actively tries to bridge the two worlds, inspired by the solutions medicine has applied for some decades now regarding the production and availability of systematic literature reviews.

1.6.1 Systematic Literature Review

In the 1980s, a critical problem of medical practice was identified: it was found that most practicing physicians did not incorporate into their practice the advanced scientific knowledge of the disciplines that contribute to medicine (e.g., Smith, 1991).

This brought about the urgent need to learn how active physicians could remain updated, not only regarding knowledge of the basic sciences but also regarding the most suitable intervention procedures and protocols for patients. The gap was so large that it gave rise to the so-called evidence-based medicine movement, which aims, above all, to analyze and improve the efficacy of medical interventions (Sackett et al., 1997).

In brief, the approach started from the need to translate and disseminate the results of scientific research in formats that are easy to understand and accessible to professionals. The last decade has seen an unprecedented multiplication of journals and a true flood of published scientific articles, sometimes making it impossible for any individual to process all the information produced.

The methodology of the systematic literature review was then developed. Despite some common aspects, it differs from the customary narrative reviews and from meta-analyses, which naturally remain crucial in scientific work (Denyer & Tranfield, 2009; Petticrew & Roberts, 2006).

The systematic literature review aims to analyze all studies relevant to a given question, following a clear methodology, explicit in its criteria and procedures and replicable by other researchers, to critically identify what is really known or not about the topic or problem in question (Petticrew & Roberts, 2006). The studies to be included depend on the specific objectives and the issue to be analyzed. They may be quantitative, qualitative, experimental, or correlational, and may or may not have been published.

It is not just about summarizing the existing literature but about trying to answer a problem considered relevant. This methodology has proven very useful in systematizing what is known about the efficacy of interventions in the fields of medicine and of public, social, and education policies, which already have thousands of reviews available to the public. For example, in the field of medicine in the 1990s, the Cochrane platform (Cochrane Collaboration) was created and became the main repository of knowledge available to medical professionals; today it has more than 4000 systematic reviews. In the field of social policies, the Campbell Collaboration already offers hundreds of reviews, and many other movements have emerged in other areas (consult evidencebasedmanagement.com).

This type of review differs from the usual reviews, called narrative reviews, in several respects, mainly because it deliberately tries to prevent researcher bias, making very clear and explicit from the outset the methodology used to select the works to be analyzed. It also differs from the meta-analytic review, which uses statistical procedures to integrate the results of several empirical studies into quantitative estimates, such as explained variance and effect size (Cooper, Hedges, & Valentine, 2009).

Schematically, the systematic literature review methodology comprises five main stages: (1) review planning, (2) inventory and location of studies, (3) evaluation of their contributions, (4) analysis and summary of the information, and (5) report on the best evidence (Briner & Denyer, 2012; Petticrew & Roberts, 2006). Proper use of the methodology is a key factor in the quality and eventual usefulness of the systematic review. To this end, the essential principles of scientific knowledge production should be applied to the review itself. Narrative literature reviews sometimes miss these principles, becoming subject to biases associated with the authors’ argumentative purposes (Petticrew & Roberts, 2006).

The results of a systematic literature review are useful to research in that they can not only contribute to testing and developing theories, including explanatory ones (Petticrew & Roberts, 2006), but also specify what is known, identify the topics or problems that lack evidence or quality, and call for more or better studies (Briner & Denyer, 2012). As one of its main goals, the systematic literature review also contributes to making the existing scientific information available to professionals. Due to the way this information is produced and disseminated, it is usually dispersed, with contradictory and inconsistent results that are not easily understood or integrated by professionals. Moreover, most published empirical studies are never replicated, due to publishing and university policies. This makes any isolated study weak and less useful than a systematic review of all studies related to the question at hand.

It is not about giving a specific answer about what to do in a given situation but about categorizing and organizing what is known or unknown and, in this sense, providing valid knowledge that helps decision-making in the concrete contexts in which professionals work. A systematic literature review usually shows that we know less about a given issue than we believe we know (Petticrew & Roberts, 2006). The difficulties of making a decision in concrete contexts result from the degree of uncertainty involved. While some of these uncertainties may be structural in nature (i.e., part of the nature of the phenomena in question and of their unpredictability), others ensue from the limited and low-quality information available or from its processing. In these cases, the systematic literature review can definitely help mitigate ignorance and increase the efficacy of professionals (and of researchers, whenever applicable). The greater the uncertainties associated with likely interventions or decisions, the greater the potential usefulness of systematic reviews on the issues in question, since these make the evidence explicit and provide a kind of validated knowledge. However, systematic reviews have their own limitations, ensuing from the scope and assumptions of the question being addressed and from the quality of the information analyzed. Therefore, they are not a “miraculous” solution for all interventions or professional decisions.

1.6.2 The Evidence-Based Practice Movement

As aforementioned, inspired by this approach and methodology tested in medicine, a movement named evidence-based management also emerged in the field of management. According to Rousseau (2012a, b), one of its main advocates, evidence-based management is the “management practice systematically based on management, which incorporates scientific knowledge into content and decision-making processes” (p. 3). This movement intends to expand the use of scientific knowledge in management education and in the everyday practice of professionals, improving their knowledge and competences and emphasizing critical thinking, information analysis, and decision-making. For this purpose, different ways of building and strengthening the bridges between scientific research and professional practice have been inventoried and made available. Some of these have proven their usefulness in other areas such as medicine, social policies, and education (e.g., Center for Evidence-Based Management, www.cebma.org).

Basically, this movement tries to improve the quality of managerial decisions and to help professionals critically evaluate the information they have, namely beliefs, “pre-made ideas,” fashions, and the “quick fixes” offered by gurus and other professionals. Decisions should be based on the conscious, explicit, and critical use of the information available from different sources, combining, for example, scientific knowledge, existing data or data collected in the organization, and the empirical experience of decision-makers or other stakeholders (Briner, Denyer, & Rousseau, 2009; Rousseau & Gunia, 2016). Therefore, it is not about making decisions exclusively based on scientific information. However, this information should be one of the important sources to be considered, especially when it is of high quality. This is far from happening today. In fact, as aforementioned, several studies have shown the large gap between scientific research in management and professional practices. Professionals choose to base their decisions mainly on their personal experience, subject to all the widely known biases, or on the experience of other organizations quite different in their cultures and strategies. Even the use of so-called best practices can be a bad decision if these are not subjected to a critical analysis, mainly regarding the context and specific conditions of application (Denrell, 2003). Briefly, the basic assumption of evidence-based management is that better-quality decisions bring more positive consequences to organizations, to the individuals working in them, and to society.

Briner and Rousseau (2011) tried to organize some features of evidence-based practice and estimate their prevalence in work and organizational psychology, including: “the term ‘evidence-based’ is used or well-known”; “the latest discoveries and research abstracts are available”; “primary research articles and traditional literature reviews are accessible to practitioners”; “state-of-the-art practices, panaceas, and fashionable ideas are handled with healthy skepticism”; “clients request evidence-based practices”; “practitioners’ decisions are integrative and based on the four information sources” aforementioned; and “the initial training and continued professional training adopt evidence-based approaches” (p. 9). The authors also believe that these features are not yet generalized in the practice of work and organizational psychologists.

In operational terms, evidence-based practice demands the competence to translate concrete problems into questions, the systematic search for information (evidence) that allows plausible answers, the critical analysis of the reliability and relevance of such information, the weighing and aggregation of information, its incorporation into the decision-making process, and the evaluation of the consequences of the decision made. According to this approach, at least four sources of evidence should be considered: scientific information, organizational data and facts, the personal experience of professionals, and the values and concerns of the decision’s stakeholders, ethically addressing the impact of decisions (Rousseau, 2012a, b).

As such, evidence-based (information-based) practice involves both a process of collecting and analyzing information and of decision-making, and attention to the quality of the content of information from different sources (Briner & Rousseau, 2011).

1.6.3 Which Strategy Should Be Implemented in Work and Organizational Psychology?

The advances in other areas, notably the evidence-based movement, make it possible to anticipate the advantages and difficulties of expanding this approach in work and organizational psychology (e.g., Bartunek, 2014; Rousseau & Gunia, 2016).

Basically, the strategy being implemented in the managerial field is fully applicable to work and organizational psychology, as it is to other areas of the social and organizational sciences.

We can then systematize the main aspects that could contribute to the general application of this approach in the WOP field, mainly regarding the three pillars most directly involved.

At the research level, what matters most is that the nature of the knowledge produced, due to its degree of abstraction, generality, and validation process, mainly regarding the mechanisms that explain the effects of the variables being studied, tends to be far from the concrete problems posed to professionals. In theory-oriented research, researchers define the problem and methods of study, and practical implications may or may not be an immediate concern (Fiske & Borgida, 2001; Hodgkinson & Starkey, 2011). In some sense, this distance is crucial to the effective advancement of scientific knowledge. However, theory-oriented research could and should be complemented by strategies based on real problems of organizational life. This will surely be easier in the analysis of context moderators but could also enhance the study of mediating processes. In other words, it is not about alternatives, as these are sometimes considered, but about complementarity and mutual fertilization, since both theory-oriented and problem-oriented research can contribute to enhancing professional practice.

In this sense, researchers and organizational professionals should coordinate more closely on at least three levels. First, research could benefit greatly if professionals were viewed as stakeholders in the identification and definition of the problems to be investigated. This collaboration could give rise to creative insights by crossing the experiential knowledge of practitioners with the theoretical knowledge of researchers. Instead of downgrading the knowledge of consultants and other organizational professionals, research should extract from this knowledge, with all its limitations, ideas and concepts about problems and respective solution strategies that could enrich its frame of reference. Second, the analysis of research problems identified in this way allows the researcher to better envisage the likely transfer of the knowledge produced to organizational contexts. This could facilitate “translating” it into a more accessible language and disseminating it among professionals. Finally, professionals can better recognize its applicability, since the problem is somehow rooted in organizational practice (Hodgkinson & Starkey, 2011).

Today, systematic literature reviews are a suitable way to identify and make available both existing knowledge and persistent gaps. This can be extremely useful for researchers and practitioners, whether they are consultants or organizational decision-makers. However, not every important problem posed to professionals is eligible for a systematic knowledge review. Moreover, there is no guarantee that the results of systematic literature reviews can be effectively applied, because many professionals still undervalue or do not recognize them.

At the university level, WOP should incorporate the techniques suggested and implement them within the scope of evidence-based education, as summarized above. This strategy demands relatively deep changes to the teaching and learning methodologies used by many psychology courses. The acquisition and development of critical thinking competences and a proper degree of scientific literacy should be explicit goals, subject to evaluation, in any WOP course. Although in this respect psychology stands out positively compared with most other social sciences, stronger and more focused efforts are required to overcome this problem. The acquisition of competences to perform systematic literature reviews could be a curriculum component in both master’s and doctoral programs in work and organizational psychology, notably those with a component of education for executive officers or formally held in cooperation with corporations and/or other organizations. Likewise, master’s and doctoral courses should include training in writing and publishing articles, in general or professional journals, to disseminate the topics or issues being investigated.

At the level of professionals, permanent updating with the results of research in their fields of work is required. This, in turn, may demand acquiring or developing cognitive competences to improve their frames of reference and challenge their practices.

The collaboration with academics to identify concrete problems or to disseminate knowledge is an extremely enriching strategy for their professional practices.

As aforementioned, two kinds of professional players deserve special attention: consultants, who serve as knowledge facilitators and produce and disseminate knowledge, and organizational technicians and decision-makers, as likely final users of the knowledge that will be incorporated into their practice and professional experience. Regarding dissemination, researchers should be attentive to any differentiation in the competences, concerns, and objectives of the potential targets of the knowledge they are trying to disseminate. A specific target group is the senior management of organizations, mostly trained in areas very different from WOP. As clients, these managers play a core role in changing existing practices but may not be sensitive to the importance of evidence-based decisions. Therefore, the information to be disseminated among these players should be designed specifically with this in mind. This is true not only for the contents of evidence-based knowledge but mainly for the decision-making process implied by this approach. As well evidenced in WOP, emotions, biases, prejudices, intuitions, time pressures, and other behavioral drivers are frequently ingrained in the decisions made by these organizational players, disregarding the available evidence. Academics should properly translate the knowledge they produce and actively take on the role of publicizing among those players both this evidence and the processes that could reduce the uncertainty of contingencies associated with their decisions and their consequences for individuals and organizations (e.g., Hodgkinson, 2011; Morrell, 2008).

1.7 Findings

As observed in other scientific fields, work and organizational psychology is also marked by a weak link between the knowledge produced by scientific research and professionals’ practices in organizations.

The literature has identified several factors that explain the tensions at stake and that may contribute to maintaining and widening the gap between the world of academic knowledge and professionals’ practices.

Researchers’ practices and beliefs about the quality and usefulness of the empirical evidence they produce, professionals’ beliefs and attitudes toward the relevance of that knowledge, the decision-making practices and processes rooted in their activities, and the pedagogical models prevailing in university education in this specialty of psychology, which fail to address the tension between rigor and relevance, have all contributed to sustaining this gap and hindering the desired articulation between research and professional practice.

Although university education in psychology differs from most other social sciences in its concern with scientific approaches and methodologies, students’ education usually includes few bridges to application in professional practice. This leaves the construction (or not) of these bridges to the contingencies of the graduate’s organizational socialization process, eventually replicating supervisors’ practices and perhaps even reproducing the very gap that should be narrowed.

As this chapter has tried to make clear, the professional practice of work and organizational psychologists will be much more effective and ethically responsible, considering its effects on individuals’ organizational lives, if they can combine at least four information sources: the expertise built on their professional experience, the scientific knowledge available and relevant to the situation, the information specific to the intervention context, and the perspectives of those affected by any decision made.

Researchers and universities, assisted by professionals and other stakeholders, play a core role in narrowing the gaps identified, ethically taking on responsibility not only for the rigor of the knowledge they produce but also for the effort of making it available to society. Considering the significant advances in other professions and the means and processes used, we believe that work and organizational psychology can come to lead its prevailing academic and professional practices and cultures more vigorously in this direction.