Evidence-based practice (EBP) is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual clients (Sackett et al. 1996). Continuing expectations from clients to receive the best possible care, the call for cost-effective therapies, a significant rise in scientific evidence and greater professional accountability have created a shift in the way we think about and implement best practice, making EBP a desirable approach in health care delivery (Bennett and Bennett 2000; Evidence-Based Medicine Working Group 1992; Lloyd-Smith 1997).

All health care professionals are expected to work within an EBP context. Similarly, occupational therapists are expected to use a systematic approach based on evidence, professional reasoning and client preferences to help individuals improve their function in the occupations of life (Canadian Association of Occupational Therapists (CAOT) 2002).

Initially, EBP skills are developed within the context of academic programs. At the professional Master’s level, most North American occupational therapy programs prepare entry-level clinicians who are expected to demonstrate competence in the knowledge, skills and behaviours required for the practice of evidence-based health care. Academic programs must determine how to promote EBP skills, increase awareness of evidence sources, ensure a widespread change in attitudes to evidence use (Turner 2001) and ultimately produce evidence-based scientific practitioners (Rothstein 1998). If the EBP approach is to be entrenched in the day-to-day practice of future clinicians, a pedagogically sound approach would be to incorporate EBP in every aspect of the curriculum. This, however, would require a comprehensive understanding of EBP: its basis, the principles that underpin it and the most effective methods for teaching and evaluating it. A comprehensive review that elucidates these details does not exist. The existing literature does not shed light on how requisite competencies for EBP are developed in health professional education in general or in occupational therapy (OT) education in particular. It has not been established that academic programs succeed in fostering EBP competencies through their courses and practica, or that these competencies are reinforced and honed systematically as students advance through their programs, such that by the time of graduation students have acquired the basic skills of EBP. In the absence of systematic or empirically supported models for teaching and evaluating EBP in academic programs, the development of EBP skills is left to chance, with outcomes that are, at best, haphazard. The following review will suggest that this challenge may well apply to the other health professions.
To address this shortcoming, a logical starting place would be to develop a comprehensive understanding of the concept of EBP, aspects of the curricula and teaching approaches that foster it, and the actual trajectory of development both in academic courses and clinical practica. Drawing from educational psychology and EBP in the health professions, this paper provides a critical review that examines (1) the theoretical underpinnings of EBP, (2) the literature on EBP-related skills, (3) the role of expertise in EBP and (4) the literature on the effectiveness of various EBP teaching and assessment interventions. The paper concludes with suggestions for teaching and evaluating EBP that have solid grounding in educational theory and can inform the design of the EBP curriculum in the health professions.

Epistemological foundation of EBP, related skills and the role of expertise in the development of EBP competence

Epistemological and theoretical foundations of EBP

Although many professional societies, funding agencies and the public have embraced EBP, they have done so without an apparent consideration of the serious criticisms that have been made regarding its epistemological foundations. Determining what can adequately and legitimately (Maynard 1994) be considered as evidence has been central to many EBP discussions, causing much of the divide between opponents and proponents of the EBP movement (Mowinski-Jennings and Loan 2001). Opponents claim that EBP uses a very narrow concept of evidence, essentially discrediting practitioner experience as well as professional reflection and intuition in favour of population-based research and large-scale clinical trials (Mitchell 1999; Morse 2005; Rolfe 1999; Stetler et al. 1998; Webb 2001). Others attribute their scepticism to EBP’s strict reliance on the evidence hierarchy, and the narrow and overly prescriptive nature of systematic reviews (Marks 2002). The randomized controlled trial (RCT) is also frequently criticized for producing findings that are often inapplicable to specific clients (Rolfe 1999; Welsh and Lyons 2001). Yet another criticism of EBP is the apparent lack of regard for qualitative research. Because EBP has been primarily grounded in quantitative approaches and experimental designs, findings from qualitative studies have seldom been considered legitimate sources of evidence. Consequently, critics argue that qualitative research has not taken its rightful place within EBP despite the momentum this approach is gaining in the health professions, particularly for its potential to better capture the complex relationships of clients, their experience of illness, and their social circumstances (Forbes and Griffiths 2002; Hammell 2001; Herbert et al. 2001; Miles et al. 2004).

In a recent paper on the epistemological inquiries in EBP, Djulbegovic et al. (2009) challenged these criticisms, arguing that they echo classical debates regarding the nature of science and knowledge. The authors suggest that EBP should be ‘conceptualized as an evolving heuristic structure that helps improve client outcomes rather than be viewed as a new or scientific theory that is concerned with changing the nature of medicine’ (p. 158). These criticisms nevertheless require that scholars reflect upon and address the issues raised in the ongoing debates in order to establish a more comprehensive conceptualization of EBP.

While there is no definitive or widely accepted descriptive statement of EBP, there are working definitions that are useful to advance EBP enquiry. A fundamental question is whether there is a theory associated with EBP. While none has been identified in the literature, we contend that it is possible to think about EBP from a theoretical perspective by broadening the conceptualization and accounting for concepts such as knowledge production, knowledge acquisition and knowledge use.

In EBP, knowledge in the form of scientific discoveries or research evidence on the effectiveness of treatment interventions is intended to be exchanged between researchers and professionals in a mutually created and socially constructed context. For example, researchers who produce findings from an effectiveness study disseminate results through publications and presentations. Clinicians, as consumers of research findings, consider the evidence in light of clients’ needs and their personal and social context. They then act upon available knowledge by relating it to what they already know, developing new insights and, in many cases, monitoring their understanding throughout the process. As such, knowledge is not an inert object “sent” by researchers and “received” by clinicians. Rather, it is a fluid set of understandings shaped both by those who produce it (researchers) and by those who use it (clinicians). The meaning of research evidence and how it fits with a particular client is, therefore, constructed and interpreted by the clinician, who acts as an active problem-solver and a constructor of personal knowledge, rather than as a more passive receptacle of information (Hutchinson and Huberman 1993).

These ideas are consistent with social constructivist perspectives and are applicable to the enterprise of EBP (Fuhrman 1994). The fundamental premise of constructivism is that knowledge is a human construction and the learner is an active participant in the learning process (Vygotsky 1978). Constructivism is based on three assumptions about learning (Driscoll 1994; Gredler 1997; Savery and Duffy 1995; Slavin 1994; Steffe and Gale 1995). First, learning is a result of the individual’s interaction with the environment. What a learner comes to understand is a function of the context of learning, the goals of the learner and the activity the learner is involved in. In EBP, the individual retrieves research evidence and considers it in relation to personal experience and to clients’ needs. EBP is hardly ever a private effort devoid of any input from the social environment. Exchanges with peers and collaboration with the client enhance the EBP process. The second assumption is that cognitive dissonance, the uncomfortable tension that comes from holding two conflicting thoughts concurrently, is a primary stimulus for learning. In EBP, this tension can arise from, for example, competing opinions about a preferred line of assessment. An attempt to address the uncertainty can stimulate information-seeking activities, which result in alternative or broader approaches to the selection and administration of a clinical assessment. This information seeking can ultimately enhance practice. The third assumption is that the social environment plays a critical role in the development of knowledge. For instance, other individuals in the environment might put the learner’s understanding to the test, and this might lead to the introduction of competing views that call into question the viability of the learner’s knowledge.
In the EBP process, in any of the health professions, this can occur during client-clinician or clinician-clinician encounters concerning the nature, quality and relevance of the scientific evidence. These parallels suggest that social-constructivist theory is a suitable lens through which to examine EBP.

Steps and supporting skills in evidence-based practice

The interaction of research evidence with clinical expertise and client input helps with the clinical decision-making process involved in EBP (Bennett and Bennett 2000; Haynes et al. 2002; Rappolt 2003; Sackett et al. 1997). For example, clinical decisions made in OT are the end point of a process requiring clinical reasoning, problem solving and awareness of the client and his/her context (Clark et al. 1993; Dubouloz et al. 1999). Reagon et al. (2008) proposed a framework that presents evidence-based OT as an iterative process in which theory, evidence and practice mutually inform one another. This framework uses a broader view of ‘evidence’ because it considers sources of evidence other than that which comes from empirical research and randomized clinical trials. Textbooks, qualitative research, colleagues, clinical experience, clients and their families, outcome measurement, and observation constitute acceptable sources of evidence (Bennett and Bennett 2000; Egan et al. 1998). In addition to this broader conceptualization of evidence, there is one other characteristic difference between EBP in OT and this approach in other health professions such as medicine. The client-centered, evidence-based practice model of OT positions the client at the center of the decision-making process. It not only values the clinician’s knowledge and expertise for decision-making but also acknowledges the client’s knowledge and experiences as essential to the EBP process (Egan et al. 1998; Hammell 2001).

The steps involved in the evidence-based OT process are essentially similar to those of the evidence-based approach in medicine. The steps include (1) posing a clinical question, (2) searching the literature, (3) appraising the literature, (4) considering research evidence in clinical decision-making and (5) reviewing the procedure and outcome of the EBP process (Rosenberg and Donald 1995; Sackett et al. 1997; Sackett et al. 2000; Strauss and Sackett 1998). The two approaches are compared in Table 1.

Table 1 Comparison of basic steps in evidence-based medicine and evidence-based occupational therapy

Skills that support EBP

The literature offers some insight into the essential skills for successful EBP, although claims made are not always substantiated empirically. In a paper on expertise in evidence-based OT, Rappolt (2003) claimed that successful EBP is a function of research skills, knowledge and experience. She asserted that therapists must demonstrate the ability to identify clinical issues, gather and appraise evidence, demonstrate good problem solving skills, and have sufficient knowledge and experience to draw from in order to make clinical decisions. This author derived the skill sets from a review of the premises and methods involved in evidence-based OT rather than from an empirical study. An early paper by Lloyd-Smith (1997) discussed the historical developments of EBP and offered practical solutions for overcoming barriers to the use of evidence. The paper placed the emphasis on searching, retrieving and critically appraising the literature (Steps 2 and 3 of the EBP process), which he described as a major focus of OT curricula. However, it fell short of accounting for the skills that may be involved in the other steps (Steps 1, 4 and 5). Miles et al. (2004) also claimed that judgment is a necessary skill in EBP because research facts “never really speak for themselves” and thus there needs to be an interpretative role for the therapist “using an evidential knowledge base”. Craik and Rappolt (2003) examined the self-reported use of research in practice of expert OTs in order to identify the processes involved in translating research into practice. They concluded that clinical experience and structured reflection were “necessary components for building knowledge, applying research findings to clinical care” and “decision making”. Mattingly and Fleming (1994) demonstrated how and why generating clinical questions and hypotheses are necessary for the development of expertise in clinical reasoning, which they claim is a required EBP skill.
Using a grounded theory approach, Craik and Rappolt (2006) examined the self-reported research utilization behaviors of 11 OTs working in stroke rehabilitation. They found that clinicians’ experiences, their active engagement in continuing education, their involvement in research and their mentoring of students contributed to their capacity to translate research evidence into practice. The researchers identified clinical, research, teaching and reflective practice skills as four key skill sets contributing to the successful application of research evidence in OT practice.

Together, this literature suggests that the EBP approach requires therapists to draw on clinical experiences, teaching and reflective practice skills, as well as critical thinking and problem solving skills, in order to identify a clinical problem. Then, through the appropriate use of evidence, they ought to formulate a plan to address the client’s problem within the client’s social context. The studies suggest that in order to apply research findings in clinical decision making, clinicians must be able to pose a good clinical question and have a skill set that facilitates the searching and appraisal of the literature. For the most part, these are theoretically driven descriptive statements that capture and represent the EBP steps. However, they do not identify or explain what cognitive and metacognitive skills support the successful implementation of EBP.

The nature of expertise and its role in EBP

Successful application of research evidence in clinical medical practice is believed to be a function of expertise in a domain (one of the three necessary components of effective EBP) (Davidoff 1999; Haynes 2002; Rappolt 2003; Rolfe 1999; Sackett et al. 1996). How is expertise characterized? What differentiates an expert from a novice? How does expertise influence EBP? Despite the paucity of literature on EBP expertise, there is extensive research on expertise in general, and expertise in the health professions in particular, that can shed light on these questions.

The nature of expertise has been studied using two approaches (Chi 2006): (1) the study of ‘exceptional people’ (Chi 2006, p. 21) to understand how they perform in their domain and how they differ from the general population (absolute expertise) and (2) the study of experts relative to novices in a specific domain which assumes that expertise is a level of performance that novices can achieve over time with intentional practice (relative expertise). This body of research has provided a solid foundation for elaborating on how experts acquire, process and use knowledge and problem solving skills (Alexander 2003; Lajoie 2003). It has also contributed to our understanding of the process and the trajectory of expertise development in a domain. This rich literature, particularly the insights it provides on key characteristics which differentiate experts from novices, the features of professional expertise and expertise development, is instrumental in understanding the trajectory of expertise development in EBP. The following section addresses these dimensions.

Characteristics of general and professional expertise

Table 2 presents a comparison of general expertise and professional expertise on nine major dimensions of expertise. Traditional expertise research has shown that experts reach superior levels of performance in their domain not only because of years of experience but because of an extensive accumulated body of domain knowledge that is coherently and efficiently organized (Bransford et al. 2000; Ericsson and Smith 1991; Lesgold et al. 1988). They can retrieve domain knowledge and required strategies to solve a problem with minimal cognitive effort (Alexander 2003) and execute skills with greater automaticity and greater cognitive control. Experts also attain superior performance because of sustained and deliberate practice. Deliberate practice involves supervision, feedback, well-defined tasks to improve certain aspects of performance and opportunity to improve upon performance (Ericsson 1996, 1998, 2001, 2004). When solving problems, experts focus on the conceptual features of the problem. They see patterns, cues and underlying principles that assist with problem resolution (Chi et al. 1981; Lesgold et al. 1988). They are attuned to the affordances provided by the problem (Anderson 1982) and, because they recognize and effectively utilize these affordances, they have more success in problem resolution. Although experts may spend more time analyzing a problem qualitatively, they are faster at solving the problem because of extended practice in their domain and more automatized problem solving routines (Glaser and Chi 1988; Klein 1993). Experts are also opportunistic. They make use of all sources of information and available resources to solve a problem (Gilhooly et al. 1997). Experts have more accurate self-monitoring skills, which help them detect errors more precisely and self-monitor the status of their comprehension during problem resolution (Chi et al. 1982; Chi 1978).

Table 2 Dimensions of general and professional expertise

Although expertise in the health professions builds upon attributes of general expertise, there are notable differences between expertise in professions such as medicine and OT and expertise in domains such as music, chess and sports. Professional expertise appears to be manifested through, and built upon, interpersonal relationships with clients and other professionals. Experts in domains such as medicine and OT demonstrate mastery of a diverse body of knowledge (biomedical, clinical) and a range of motor (surgical skills, manual muscle testing skills), cognitive (problem solving, clinical reasoning) and interpersonal skills. This is clearly unlike many other domains (Norman et al. 2006). Also unlike other domains, professional expertise involves coordination of formal and experiential knowledge. For instance, physicians must keep up with the volume of new knowledge on diagnostic tools and medical treatments (Choudhry et al. 2005) in addition to engaging in extensive periods of training in order to attain success in their practice.

Notwithstanding these differences, there is one major common feature of expertise across different domains. There is little support for skill or talent as the defining characteristic of expertise across domains. “General skills are as inadequate an explanation for surgical expertise as they are for violin expertise” (Norman et al. 2006, p. 350). Rather, it is the individual’s knowledge and cognitive processes as well as the deliberate practice with feedback that are believed to be the key to expertise in most domains.

Development of expertise

Contemporary expertise research programs have focused on developing models of what learners need to know in order to demonstrate complex performance across domains (Alexander 2003; Chi et al. 1988; Glaser et al. 1987; Lajoie 2003; Lajoie and Azevedo 2006; Mislevy et al. 1999). While the emphasis on developmental trajectories towards expertise is prevalent in contemporary studies of expertise (Ackerman 1996, 2000, 2003; Alexander 2003; Lajoie 2003), researchers in the professions have yet to identify such trajectories. Yielder (2004) argued that one reason for this shortcoming is that traditional expertise research focused primarily on experiential and cognitive factors as contributors to expertise, ignoring the need to integrate these into a coherent model of professional practice (Rolfe 1998). Instead of examining expertise from the strictly traditional cognitive point of view, Alexander (2003) has proposed that the focus be shifted to the development of expertise in academic domains. Her model of domain learning (MDL) considers subject-matter knowledge, general strategic processing and learner interest as interacting elements in the development of expertise (Alexander 1992, 1997). The MDL conceptualizes expertise as having a domain-specific nature and recognizes that learning must involve both cognitive and non-cognitive factors such as motivation and affect. The MDL targets improvement in student learning and development in academic domains as its primary purpose. As such, the journey towards achieving expertise becomes more important than the differences between experts and novices.

Alexander’s work has resulted in a description of the stages of expertise development from acclimation to competence to proficiency. Within this framework, students are not expected to reach the proficiency level while in school because proficiency requires a broad knowledge base, advanced problem solving skills and interest, which can only be acquired following extended exposure and practice in a domain (Alexander 1992). The implication of the MDL for EBP competency development is as follows: if successful application of evidence in clinical practice is a function of expertise (Davidoff 1999; Haynes 2002; Rappolt 2003; Rolfe 1999) and if we conceptualize EBP as a domain, then it may be unrealistic to expect university students to reach expert performance levels in EBP by the end of their educational experience. Instructional environments may have to redirect their objectives and move to meet interim targets on the trajectory of learning and progressive acquisition of EBP competence. Educational training may be designed to help students develop individual features of expertise; among these, self-monitoring may be a key element that can propel progress along the trajectory. Although Alexander’s model requires further study in relation to how it can support the development of EBP expertise in the health professions, it offers a promising reference for examining trajectories of development and the particular subsets of skills that might reasonably be achieved at various stages. In the interim, to help students move along the trajectory of developing EBP expertise, instructors must design effective teaching environments and use valid instruments and assessment methods to assess both the individual student’s competence and the programmatic impact of EBP curricula. Doing so requires a solid grasp of teaching and learning theories as well as analyses of the research evidence in this area.
In this next section we survey the literature on the teaching and assessment of interventions targeting EBP knowledge, skills and attitudes in professional programs and post graduate education.

Effectiveness of evidence-based practice teaching and assessment interventions

Teaching interventions

Teaching activities described in the health sciences literature are designed to address one or more of the required skills for successful implementation of EBP and are typically aligned with the five EBP steps. Few teaching approaches address all five EBP steps, and fewer still have demonstrable success in teaching all of the skills needed to adequately and consistently integrate EBP into clinical practice. Five systematic reviews, conducted between 1998 and 2007, examined the effectiveness of teaching interventions on knowledge of critical appraisal, attitudes, skills and EBP behavior. Flores-Mateo and Argimon (2007) studied the effect sizes for different instructional interventions aimed at improving EBP knowledge, attitudes, skills and behaviors in postgraduate health care education (medicine, nursing and allied health professions). They found small improvements for all four outcomes when these were measured alone but rather large improvements (effect size >0.79) in knowledge and skill in EBP when these were measured together in a total score. These findings notwithstanding, the authors were critical of many of the studies in the review because of poor study quality and lack of validated outcome measures. Norman and Shannon (1998) showed that instruction in critical appraisal resulted in positive gains in medical students’ knowledge of critical appraisal, without providing evidence that gains were sustained over time or translated into practice. Coomarasamy et al.’s (2003) systematic review of the teaching of critical appraisal revealed improvements in knowledge of critical appraisal but not in EBP attitudes, skills or behavior. The same authors (2004) reviewed the effect of standalone vs. integrated courses (teaching of EBP integrated within clinical practica) on critical appraisal knowledge, skills, attitudes and behaviors.
They found that the former improved knowledge only, whereas the integrated approach showed improvement in all four outcomes (knowledge of critical appraisal, skills, attitudes and behaviors) supporting the use of authentic teaching situations and the situated aspect of learning (Lave and Wenger 1991). Hyde et al.’s (2006) review examined the teaching of critical appraisal and the impact of this teaching on client care, client outcomes and knowledge of critical appraisal. The review, which included only one RCT, indicated that teaching improved knowledge of critical appraisal by 25%, however there were no data reported on client outcomes.

Several conclusions can be drawn from these reviews. Teaching interventions have a greater impact on knowledge and skill than they do on sustainable EBP behaviors. Hence, there is no evidence as to whether teaching interventions ultimately have an impact on clinical practice. Improvements in EBP knowledge seem to vary according to the level of the learner, whether undergraduate or postgraduate. It is not clear what works for which groups because studies were conducted on learners at different levels in their training (students, postgraduates). Many of the studies lack theoretical grounding, with investigators failing to use theoretical frameworks to support their designs. Lastly, EBP instruction may have a greater impact on learning and acquisition of EBP-related knowledge, skills and attitudes if integrated into real-life contexts using authentic situations such as those afforded by fieldwork and clerkships, which supports the value of situated learning and the use of authentic teaching situations (Lave and Wenger 1991). A note of caution: a number of methodological flaws in most of the studies reviewed considerably limit the validity and generalizability of findings. According to Hatala and Guyatt (2002) and Gruppen (2007), these include: infrequent use of randomization in experimental designs; a heavy reliance on quantitative methods for measuring and explaining the complex forms of EBP competencies; the short duration of interventions in university environments, where rapid student turnover leaves limited time for longitudinal studies; and repeated use of self-reports of knowledge and skill instead of objective measures of performance and behaviour.

Assessment of EBP competencies

Until about 1998, published assessment instruments focused mostly on the evaluation of critical appraisal, essentially ignoring the other EBP steps. Furthermore, the majority of instruments measured EBP knowledge and skills but did not objectively assess behaviours in actual practice. Most importantly, few had established validity and reliability (Shaneyfelt et al. 2006). In the last decade, several instruments have been developed to address the shortage of measures with strong psychometric properties that incorporate all steps of EBP. Green (1999) conducted a systematic review of evaluation instruments in graduate medical education training in the areas of clinical epidemiology, critical appraisal, and evidence-based medicine. The main objective of the studies included in Green’s review was to improve critical appraisal skills (other EBP steps were excluded) in resident-directed small-group seminar teaching, using scores on multiple-choice examinations as the outcome measure. Only four of the eighteen studies met minimum methodological standards for controlled trials, and of the seven studies that evaluated the effectiveness of the teaching of critical appraisal skills, the effect sizes ranged from no effect to a 23% net absolute increase in test scores. Green, however, reported problems with the studies, including incomplete descriptions of curriculum development, absence of behavioural objectives and clearly defined educational strategies, and inadequate evaluations of the curricula, all of which introduced limitations to the systematic review process. A 2006 systematic review by Shaneyfelt et al. identified 115 articles on assessment of EBP, representing 104 unique instruments administered primarily to medical students and postgraduate medical trainees.
Although the majority of available valid instruments were self-report measures of skills in searching for and appraising the literature, the authors highlighted two instruments with strong psychometric properties that evaluated most of the EBP steps: (1) the Fresno Test (Ramos et al. 2003), which uses two clinical vignettes and asks students to formulate a clinical question, acquire the evidence, appraise it and then apply the evidence to the client depicted in the vignette, and (2) the Berlin Questionnaire (Fritsche et al. 2002), which measures EBP knowledge and skills using a 15-item multiple choice test. Although the latter is easier to score than the Fresno, it does not evaluate all the EBP steps (Agrawal et al. 2008). Other instruments included in the Shaneyfelt et al. (2006) review targeted fewer EBP steps and were specific to certain types of EBP curricula. In a 2007 review, Flores-Mateo and Argimon compiled 22 distinct assessment methods for EBP skills, knowledge, behaviors and attitudes of postgraduate healthcare workers. The authors also described several problems with the studies in their review, including poorly reported feasibility of implementation, underreporting of time needed to administer and score the instruments, and lack of instrument validation. Only 45% (N = 10) of the instruments were validated with at least two or more types of evidence of validity or reliability. In addition, most instruments had limited applicability to different teaching modalities or to different curricula in the health professions.

The literature reviewed so far has highlighted a number of strategies used to promote EBP knowledge, skills, and attitudes in undergraduate students and postgraduate trainees. Whether because of methodological flaws, absence of theoretical grounding or challenges in teaching and assessing the 5-step process in a pedagogically sound manner, this literature suggests that there is still no consensus on the ideal methods for teaching and evaluating EBP. In this final section, we offer suggestions for teaching EBP, drawing on educational theories and their relevance to teaching in higher education. For the evaluation of EBP, rather than proposing specific assessment instruments, we discuss general concepts of assessment that need to be taken into account and offer suggestions for the design of EBP assessment.

A framework for teaching and evaluating EBP

Theoretical guidelines for teaching EBP

Proceeding through the five steps of EBP requires a balance of skills in each step (Dawes et al. 2005). Curricula that promote EBP knowledge, skills and attitudes, grounded in the 5-step process and applied across different clinical situations, can help students see the EBP process as a continuum.

Specific teaching methods that help students acquire and integrate cognitive and self-monitoring strategies and discover, use and manage knowledge (Collins et al. 1989) can support the move along the trajectory of developing expertise in EBP (Lajoie 2003). Constructivist theories provide a solid foundation for guiding the design of effective learning environments where individuals learn by doing and where learning takes place in context (Lajoie and Azevedo 2006). We propose that instructional design targeting EBP competencies in the health professions be based on five salient constructivist assumptions about learners and the learning context. Instructors should (1) consider the learner’s existing knowledge, beliefs and attitudes regarding EBP; (2) understand the salient role of social negotiation and collaboration with peers in incorporating evidence in clinical decision making; (3) acknowledge that the learning situations, content and learning activities are meant to foster self-analysis, problem-solving, higher-order thinking and deep understanding; as such, they must be relevant, authentic and represent the natural complexities of the world; (4) support collaborative learning, which exposes students to alternative viewpoints and affords them the opportunity for apprenticeship learning; and (5) scaffold learners from what is presently known to what is to be known, thereby facilitating the learner’s ability to perform just beyond the limits of current ability (Ernest 1995; Honebein 1996; Jonassen 1991, 1994; von Glasersfeld 1995; Vygotsky 1978; Wilson and Cole 1991). Accordingly, EBP can be taught in a socially constructed environment in the classroom and in authentic learning contexts such as those afforded by fieldwork.
In these contexts, students should be encouraged to engage in discussion, debate, reflection and problem solving with peers and experts and ultimately solve problems that reflect the broad scope of scenarios they are likely to encounter in the future. The content and context of learning can be structured and guided by the teacher in collaboration with the learner. The teacher could model the EBP process and its underlying skills, scaffold students through practice and progressively fade the support allowing students to engage in EBP autonomously. The use of collaborative learning methods, case-based methods and cognitive apprenticeship offer much promise for promoting the development of EBP competencies. Table 3 shows these three approaches highlighting the main objectives, processes, instructor roles and desired outcomes of each approach.

Table 3 Suggested approaches for teaching EBP in occupational therapy education

In collaborative learning (CL), students work in small groups. In the process of cooperatively solving problems, students generate self-explanations and construct inferences about a specific problem, which ultimately helps them integrate and solidify new understanding and solve problems (Slavin 1991). Engaging in discussions, problem solving and questioning (Johnson and Johnson 1993) allows students to test each other’s understanding and build knowledge. The types of constructive activities involved in CL also trigger metacognitive activities. In attempting to solve problems, students monitor their understanding and become aware of errors and misunderstandings. Group problem solving improves awareness of misunderstandings, which in turn triggers help-seeking behaviours and explanations and ensures better understanding and problem resolution (Johnson and Johnson 1993). CL contexts afford many opportunities for working on EBP cases where students can discuss client scenarios and integrate the EBP steps collaboratively.

Case-based methods are frequently used in health sciences education and in OT education in particular. Clinical cases can be designed in a manner that promotes knowledge acquisition and problem solving and helps students work through the evidence-based OT process. Neistadt et al. (1997) found that the use of cases in OT leads to improved quality of student intervention plans and understanding of clinical reasoning concepts. Using case studies, Reed (1996) developed and evaluated a 12-week course designed to help foster problem solving skills in OT students. Results indicated that students in the program were not only more confident in their selection of assessment and treatment interventions, but could also apply effective problem solving skills to determine solutions to complex pediatric patient problems. Case-based methods have great potential for evoking both the knowledge and skills required for evidence-based decision-making.

The cognitive apprenticeship framework, as a social-constructivist approach to OT education in general and to the teaching of EBP in particular, offers much promise. It can promote the required EBP skills and competencies by exposing students to authentic practices through activity and social interaction. Classroom and clinical milieus that incorporate cognitive apprenticeship principles place teaching and learning practices within rich and varied contexts that are meaningful and authentic to students. In the classroom, clinical cases, collaborative learning groups and clinical experiences can be woven through the curriculum, providing authentic learning opportunities for students to enter into cognitive apprenticeship with practicing clinicians and instructors. The didactic portion of the curriculum also offers opportunities for cognitive apprenticeship. Instructors can model their thought processes and verbalize their problem solving processes while working on cases (Graham 1996; Maudsley and Strivens 2000). In fieldwork, preceptors can demonstrate and model the EBP skills and behaviours that students are expected to learn. Gradually, preceptors can reduce their direct assistance and shift from modeling to guiding or facilitating learning, with the objective of engaging the student in the EBP process independently (Sullivan and Bossers 1998).

Considerations for assessment of EBP

To be compatible with and support a constructivist model of teaching and learning, assessment should target both the process and the product of learning. We propose that the design of EBP assessment methods be grounded in the 5-step process, where competence is evaluated using different assessment tools that target the different skills involved in each step. Table 4 illustrates the key assumptions and features of assessment design and provides examples for application in EBP assessment.

Table 4 Assumptions and features of assessment design and examples of application in assessment of EBP

Assessment of learning and competence in EBP requires careful planning. Instructors can design valid, reliable and authentic assessments situated in environments similar to those in which the learner is expected to apply the newly acquired knowledge (Boston 2003). Dynamic assessment evaluates learners’ understanding and performance in EBP during a term of instruction. It also provides useful and immediate feedback to both the student and the instructor (Brown et al. 1992; Lajoie and Azevedo 2006; Palinscar 1998). This feedback reinforces students’ metacognitive abilities and helps instructors to better scaffold student learning (Vygotsky 1978), to modify the content and process of instruction, and to make recommendations to students for areas of improvement (Palinscar 1998).

Assessment methods must clearly capture and be aligned with the learning objectives (Fenwick and Parsons 2000; Frederiksen and Collins 1989; Kelson 2000). Both the methods and the manner in which they are used must converge with the specific expectations stated in the learning outcomes. Assessments should contain explicit criteria of what is expected from the student. Because the EBP process contains various steps and associated skills, the notion of “transparency” allows learners to know exactly what aspects are being evaluated and how (Bass and Glaser 2004; Frederiksen and Collins 1989; Frederiksen and White 1997; Pellegrino et al. 2001; Shepard 2000, 2001; Wolf and Reardon 1996). Access to evaluation criteria satisfies a basic fairness criterion and helps students develop their understanding of standards in a domain (Shepard 2001).

When possible, instructors should use a broad range of formative and summative assessments (Pellegrino et al. 2001). No single test score can capture the complexity of EBP and its five related steps. A wide range of assessments allows instructors to address the diversity of learner needs, develop a holistic understanding of what EBP knowledge and skill has been acquired (Fenwick and Parsons 2000) and enhance the validity and fairness of inferences by giving students various ways of showing their competence in EBP (Pellegrino et al. 2001).

Assessments should be designed to target group performance and group contributions to complex EBP situations, because many real-life situations will require interactions with others. Assessment should focus on the thinking and the cognitive processes involved in EBP (decision-making, problem solving) rather than emphasizing acquisition of content knowledge only (Pellegrino et al. 2001; Royer et al. 1993; Shepard 2000).

Lastly, authentic and performance-based assessments represent the complex thinking and problem solving skills that are necessary for successful EBP in today’s world. They are useful for assessing both the process and the product of learning (Bransford and Schwartz 1999; Lajoie 2003; Linn et al. 1991; Lajoie and Azevedo 2006; Pellegrino et al. 2001) and can be developed to reflect the kinds of competencies needed in most occupations and professions (Graue 1993; Schuwirth and van der Vleuten 2006; Shepard 1989).

Conclusion

Consumers of health services expect the best possible care from competent, up to date professionals who base their clinical decisions on a combination of expert judgment and sound research evidence. EBP attempts to meet that expectation. Despite the continuing debate about the nature of evidence and the generalizability of large scale clinical trials, EBP remains an attractive and highly researched paradigm of health care practice.

Successful EBP is a function of experience in a domain, the use of sound evidence and the integration of client choices. If the development of clinical expertise is in part dependent upon extended experience and practice in a domain, exposure to a variety of cases and sufficient domain knowledge, then we propose that EBP competency development be conceptualized as a progression along a path of developing expertise with clearly delineated landmarks.

Acquisition of EBP competencies must begin during an individual’s professional training. To this end, health professions academic programs are expected to design curricula that target these competencies. The specifics of the most effective way to teach and evaluate EBP in professional programs, such as OT, remain elusive to curriculum designers. This paper highlighted how the breadth and depth of EBP knowledge can be addressed by teaching and modeling the expert competencies needed for practice. For instructors to successfully teach EBP, they must ensure that students possess the essential domain knowledge and that learning is embedded within a socially constructed environment using authentic problems, which students solve first with peers and then independently. Whether in the classroom or in fieldwork, learning environments should promote self-monitoring skills and intra-class feedback that will allow students to regulate their learning and actively engage in the learning tasks.

Finally, we have outlined educational models that can inform both the teaching and assessment of EBP. There is considerable knowledge about how people acquire, synthesize and use information to solve real-life problems, and this knowledge ought to inform the design of professional curricula. This kind of development needs to go hand in hand with research to determine effectiveness and whether desired EBP competencies are achieved. There is a wide territory ripe for exploration.