Introduction

Recent changes to postgraduate medical training have resulted in the introduction of new methods of ensuring and evidencing competence progression. At the forefront of this movement are workplace-based assessments (WBAs). These tools were designed to provide a means of assessing clinical skills objectively, within the workplace, permitting assessment of the top tiers of Miller's pyramid (Miller 1990). The term ‘workplace-based assessment’ describes a number of tools (Table 1), each of which is designed to assess a different component of clinical practice. Regularly undertaking these assessments can then, in theory, provide a holistic picture of a trainee's competences and progression through training.

Table 1 Descriptions of the WBA tools in use

Since their inception, WBAs have rapidly taken a central role within current competency-based curricula in postgraduate medical training, despite arguments that the evidence regarding their fitness for purpose is sparse (Miller and Archer 2010; Shalhoub et al. 2014). In addition to possessing validity and reliability, tools implemented as widely as these must be acceptable to their stakeholders: trainees and trainers.

Implementation of WBAs varies between countries but, to take one example, in the UK (where the majority of the literature originates) WBAs now form a central component of training and assessment from graduation from medical school until completion of specialist training. Details differ between specialties and stages of training but, in essence, trainees are required to complete a set number of each WBA type each year, with the expectation that an increasing proportion of assessments will be performed by consultants as trainees progress through training.

Over recent years there has been a growing realisation that engagement with WBAs in the medical workplace varies significantly, and that WBA tools are not being used as intended; in particular, they have been adopted as summative rather than formative assessments, and trainees see them simply as hurdles (Bindal et al. 2011). This is exemplified by a recent study from the Netherlands examining trainee and trainer perceptions towards WBAs (Fokkema et al. 2014). The authors performed a Q-methodology study with obstetrics and gynaecology residents (n = 22) and attendings (n = 43). Analysis revealed five perceptions towards the effects of implementing WBAs on training: enthusiasm (n = 11), compliance (n = 21), effort (n = 4), neutrality (n = 4) and scepticism (n = 5). Notably, none of the viewpoints agreed that ‘WBAs tally with my own ideas about what education should be like’.

The realisation that negative perceptions towards WBAs remain prevalent has compelled the educational community to explore approaches to improve how these tools are implemented and used. Here, we review the literature examining trainee and trainer perceptions towards WBAs, and consider the impact of changes to WBA implementation that have been proposed to improve trainee engagement.

Methodology

The literature relating to WBAs was reviewed to identify studies examining trainee and trainer perceptions towards WBA tools. Both authors independently searched NCBI PubMed and Web of Science using combinations of the following search terms: ‘workplace based assessment’, ‘workplace assessment’, ‘tools’, ‘perceptions’, ‘experiences’, ‘evaluation’ and ‘views’. The search was restricted to literature published between January 2005 and January 2015. The results were combined and duplicates excluded; a total of 934 results were identified.
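For readers wishing to replicate this kind of search, the merge-and-de-duplicate step is straightforward to script. The sketch below is purely illustrative and is not how the screening for this review was necessarily performed: the CSV export format, the file names and the ‘title’/‘doi’ field names are all assumptions.

```python
# Illustrative sketch: merge two bibliographic database exports and remove
# duplicates. File names and column names ("title", "doi") are assumptions.
import csv


def load_records(path):
    """Read a CSV export of search results into a list of dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


def deduplicate(records):
    """Drop duplicates, matching on DOI when present, else on a
    whitespace-normalised, lower-cased title."""
    seen, unique = set(), []
    for rec in records:
        key = (rec.get("doi") or "").lower() or " ".join(
            (rec.get("title") or "").lower().split()
        )
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique


if __name__ == "__main__":
    combined = load_records("pubmed_export.csv") + load_records("wos_export.csv")
    unique = deduplicate(combined)
    print(f"{len(combined)} combined results, {len(unique)} after de-duplication")
```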

Eligibility judgments were made by both authors, with disagreement resolved by discussion, on the basis of information found in each article's title, abstract or, if necessary, full text; dual review was used to minimise bias and improve validity. Papers were included if they contained information on perceptions towards the use of WBAs or considered how learning through the use of WBAs could be enhanced. Studies were excluded if they were not written in English, if the sampled population was non-medical/dental, or if the study was performed with medical students. Review articles, commentaries and letters were also excluded. This strategy returned 28 relevant articles, which were obtained and read in full by both authors. The bibliographies of these articles were examined to identify additional relevant articles, supplemented with searches for non-indexed reports published by relevant bodies such as the General Medical Council. This process yielded a final set of 31 relevant studies. Information pertaining to the demographics of study participants, the perceptions of users towards WBAs, the factors identified as influencing these perceptions and the proposed strategies for improving WBA implementation was extracted by both authors, and any differences were resolved by discussion. An abridged version of the extracted data is presented in Table 2.

Table 2 Published literature examining trainee and trainer perceptions towards workplace-based assessments

It was noted that the identified studies examine different WBA tools, in different specialties and in different grades of trainees, over a period during which the tools and their implementation have evolved significantly. In view of this, it was decided that the results would be best presented as a critical narrative review.

Results

With WBAs featuring more prominently in postgraduate medical training, it is important to recognise that acceptability to users is essential: learning and assessment tools will only be successful when all parties fully engage. Understanding user perceptions is therefore critical when evaluating these assessments. There are two principal perspectives to consider: that of the trainees who are required to be assessed, and that of the trainers who are required to perform the assessments. In reviewing the literature it became apparent that, although there are similarities, there are also important differences between trainees and trainers in their perceptions and in the factors influencing them. The results are therefore framed first from the perspective of trainees and then from that of trainers, to facilitate a comprehensive examination of user perceptions and identification of the factors underlying each position.

The trainee’s perspective

Value to training and professional development

WBA tools aim to facilitate learning and improve clinical performance, and several studies have examined the extent to which trainees feel the tools achieve these goals. A comprehensive review of this literature by Miller and Archer in 2010 found that only multi-source feedback had convincing evidence of effectiveness in improving trainee performance (Miller and Archer 2010). The paper also highlighted concerns that WBAs were not having their intended impact.

The largest study to examine this subject since surveyed 1065 Foundation Programme doctors in the UK; only a small majority (61.2 %) rated WBAs as being of ‘some’, ‘moderate’ or ‘great’ value to their training (Dean and Duggleby 2013), far from a glowing endorsement of the tools. A similar study, again of Foundation Programme doctors (n = 215), identified that 60 % disagreed with the statement that generating an e-portfolio of WBAs ‘created a positive learning experience’ (McKavanagh et al. 2012). Focus groups within this study revealed an overall feeling among many trainees that they are being made to ‘jump through hoops’ with little value to their training. More recently, a study of core medical trainees in the UK revealed that the majority see WBAs as ‘a means to passing the year, rather than as a meaningful educational exercise’ (Tailor et al. 2014). A similarly negative picture emerged from a study of 417 surgical trainees shortly after the introduction of WBAs: 41.4 % of respondents felt that the introduction of WBAs had had a negative impact on their training, and only 6.3 % reported a positive impact (Pereira and Dean 2009). The study was repeated with a new cohort 3 years later, producing less negative results, although 36 % of trainees still rated WBAs negatively and only 22 % positively (Pereira and Dean 2012). These feelings are echoed in several other studies in which trainees question the educational value of WBA tools (Basu et al. 2013; Cohen et al. 2009; Sabey and Harris 2011; Tailor et al. 2014). Despite this, it is important to highlight that not all findings have been negative. For example, the vast majority of a group of GP trainees reported that they valued the verbal feedback provided through WBAs (Sabey and Harris 2011), a sentiment echoed by qualitative studies of trainee perceptions (Julyan 2009; Marriott et al. 2011; Nesbitt et al. 2013; Sandhu et al. 2010; Weller et al. 2009). Further positive findings were reported in a large study by Wilkinson, in which 80 % of 230 trainees from a range of specialties reported that the mini-clinical evaluation exercise (mini-CEX) and directly observed procedural skills (DOPS) assessment tools were ‘useful’ for their personal development (Wilkinson et al. 2008). The greater positivity reported in this study may be explained in part by the fact that the WBAs were undertaken voluntarily by self-selected participants, rather than as a compulsory element of training.

WBAs as ‘assessments’

A significant area of contention surrounding WBAs has centred on the word ‘assessment’. WBA tools have predominantly been intended as formative, rather than summative, assessments (Beard 2011; General Medical Council 2010). Trainees, though, desire only their successful achievements to be documented, rather than their progression and potential weaknesses (Jenkins et al. 2013). This creates a challenge when the primary purpose is to provide the trainee with structured feedback to drive reflection and continued learning. Unfortunately, there remains a feeling amongst trainees that they are being graded and ranked, which has negatively impacted their engagement (Nesbitt et al. 2013). This has not been helped by postgraduate training programmes requiring trainees to undertake ever-increasing numbers of assessments in order to progress (Pentlow 2013). The perception that WBAs are summative assessments of performance, obscuring the fact that these tools are intended to facilitate learning, has damaged trainee engagement with WBAs and has reportedly resulted in:

  • Trainees avoiding discussion of cases with poor outcomes or a high degree of complexity as part of case-based discussion (Mehta et al. 2013; Sabey and Harris 2011)

  • Trainees undertaking the minimum required to be signed off (Powell et al. 2014)

  • Stress surrounding the assessments, impacting on performance and generating a staged environment (Cohen et al. 2009; Tsagkataki and Choudhary 2013)

  • Trainers focussing on the tick-box ratings, at the expense of the more useful verbal and written feedback (Sabey and Harris 2011)

  • Trainees seeking ‘friendly’ assessors, hoping for a more positive ‘mark’ (McKavanagh et al. 2012; Rees et al. 2014; Simmons 2013)

Factors trainees report as impacting on their engagement with WBAs

It is clear that negative feelings towards WBAs are prevalent in the medical workplace. To extend this observation, several studies have aimed to identify what trainees view as the major problems, in order that they can be addressed. These studies repeatedly identify time constraints as a dominant concern. Even the earliest reported surveys of trainees using WBAs identified time constraints as the central factor preventing them from achieving the intended benefits (Cohen et al. 2009; Wilkinson et al. 2008). This is not limited to the British NHS and has also been reported by Dutch users (Dijksterhuis et al. 2013). Trainees struggle to find time within their busy schedules to undertake WBAs, finding it challenging to prioritise their learning and training in a workplace demanding service provision. As a result, WBAs are often treated as a low priority, which contributes to the widely reported misuse of the tools (Ali 2013; Bindal et al. 2011). This is only compounded by training bodies requiring increasing numbers of WBAs to be completed each year (Pentlow 2013).

Another frequently reported concern is poor assessor engagement with, and understanding of, WBAs, despite the fact that trainees are usually required to self-select their assessors. Trainee selection of assessors serves both to empower trainees to take responsibility for their own learning and to remove a significant administrative burden from trainers. Assessors would typically comprise the senior doctors and consultants on the trainee's current clinical team: their direct clinical supervisors, who should be well placed to offer feedback on clinical performance. Nevertheless, studies suggest that trainees perceive that many consultants, and other assessors, do not fully engage with WBAs (Basu et al. 2013; McKavanagh et al. 2012; Sabey and Harris 2011). In one such study, McKavanagh recently reported that 152 of 215 foundation doctors (71 %) disagreed with the statement ‘I found consultants keen to complete assessments’, with only 13 % agreeing (McKavanagh et al. 2012). This striking finding is not isolated; Sabey, for example, reported that 85 % of 52 GP trainees struggled to engage suitable assessors. There is a suggestion that trainees feel trainer enthusiasm correlates with the quality of learning, noting in focus groups that the best teaching comes from assessors who approach the trainee and volunteer their time, but that this is a rare occurrence (Sabey and Harris 2011). Of course, there is recognition that consultant time is valuable, and that in the busy clinical environment training unfortunately takes second priority at times, which may be misconceived as a lack of enthusiasm. However, these reports go further, suggesting that a significant number of trainees feel their assessors have an incomplete knowledge and understanding of WBAs, which underlies this lack of engagement. The figures quoted range from 29 % of trainees believing that assessors lack understanding shortly after the introduction of WBAs (Hrisos et al. 2008) to 53 % more recently (Sabey and Harris 2011). Thus, even several years after the introduction of WBAs, and despite their becoming commonplace, a significant number of trainers are still felt by trainees to lack sufficient understanding of WBAs. This matters because trainees, on the whole, have the responsibility of selecting their assessors. The challenge of engaging consultant trainers to conduct WBAs often leads trainees to select lower-grade assessors (Basu et al. 2013). Of 215 newly qualified doctors surveyed in one study, just 19 % reported completing CbDs with consultants, and only one reported completing a DOPS or mini-CEX with a consultant (McKavanagh et al. 2012). This becomes significant when trainees select assessors who they believe will provide favourable feedback (McKavanagh et al. 2012; Sabey and Harris 2011). Indeed, it has long been recognised that the relationship between trainee and assessor can have a significant impact on the validity of the assessment (Holmboe 2004; Norcini 2003), and the leniency bias introduced can potentially impair the ability of these tools to identify struggling trainees (Mitchell et al. 2011).

A further concern of trainees, likely to impact upon their engagement with WBAs, is a lack of formal training in the educational basis of WBAs and in how to get the most out of them. In simple terms, trainees on the whole do not fully understand WBAs; as discussed, many do not recognise their formative intent. It has previously been highlighted that formal training in the use of WBAs is key to their successful implementation (Saedon et al. 2010), and this has been corroborated by users (Rauf et al. 2011). Despite this, a recent study of Foundation doctors revealed that just 10 % had received any formal training in the use of WBA tools (Weston and Smith 2014). As a consequence, the majority of respondents in this study made no reference to their own learning when asked to describe WBAs, highlighting a lack of appreciation that WBAs are primarily intended as assessments for, not of, learning.

Together, these concerns lead many trainees to see WBAs as simply ‘hurdles’ or ‘tick-box exercises’ required to progress in their training, and result in the tools being misused. For example, trainees frequently admit to not being observed for the whole of, or in some cases any of, the clinical activity on which they are being assessed (McKavanagh et al. 2012; Nesbitt et al. 2013; Rees et al. 2014; Sabey and Harris 2011; Tailor et al. 2014). Clearly, the learning opportunity afforded by WBAs is lost in these circumstances.

The trainer’s perspective

Much of the literature on WBAs focuses on perceptions of, and acceptability to, trainees. However, the assessors completing WBAs are equally, if not more, important stakeholders. The manner in which assessors conduct WBAs has a critical impact on the learning that occurs and, potentially, on the ongoing engagement of trainees (Sabey and Harris 2011). As such, several studies have attempted to capture assessor perceptions.

Trainer knowledge and understanding of WBAs

Understanding the purpose of WBAs is central to their success. It is clear that trainee understanding of WBAs is lacking, and trainees perceive that the same is true of their assessors. The most comprehensive attempt to assess trainer perceptions and understanding of WBAs has come from the UK's General Medical Council (GMC), which attempts to collect feedback each year from all registered trainers in the country. The results of one such national survey were published in 2011. There was a 45 % response rate, with 14,393 trainers (2223 GP trainers and 12,839 consultants) responding. In contradiction to trainee perceptions, the majority of consultants (75 %) reported receiving training in WBAs during the preceding 3 years, although the nature of this training was not expanded upon (General Medical Council 2011). It is noteworthy, though, that 11 % of responding trainers reported never having had training. The extent to which this survey is representative of the whole workforce is unclear, since the response rate was relatively poor. In this regard, it is notable that a 2013 study identified that 43 % of 129 consultant anaesthetists (also in the UK) denied ever having received training in WBAs, despite using them regularly (Bindal et al. 2013). Thus, despite WBAs now being established in postgraduate training in the UK, a sizeable proportion of consultant trainers continue to report not having received relevant training, which will clearly have an impact on how they conduct WBAs.

Trainer perceptions of WBA value to training

The UK national trainer survey went further, examining trainer thoughts on the problems with trainee and assessor engagement. A minority of consultants (15 %) felt that disengagement of senior staff was a problem, whilst 25 % reported that trainee disengagement impeded the practice of WBAs (General Medical Council 2011). Further studies describing trainer perceptions of WBAs have tended to include small sample sizes (Basu et al. 2013; Bodle et al. 2008) or to address the question indirectly (Powell et al. 2014), making it challenging to reach generalisable conclusions. These studies do suggest that there may be a lack of trainer engagement with the WBA process, identifying a proportion of trainers who believe that time spent on WBAs could be more effectively spent on other aspects of training (Basu et al. 2013; Powell et al. 2014). Indeed, in one study examining perceptions of the impact of WBAs on surgical skills, only 63 % of trainers felt that WBAs contributed to improving the skills of their trainees (Bodle et al. 2008). Notably, in the same study, 90 % of trainees believed that their surgical skills had improved as a consequence of undertaking WBAs, highlighting a potential disparity between trainee and trainer perceptions of WBA value.

Despite potential scepticism towards the process of WBAs, trainers do appear to perceive WBAs as valid assessment tools. The UK GMC trainer survey found that 75 % of consultant trainers felt that WBAs (specifically DOPS, CbD, mini-CEX and MSF) provided a ‘meaningful and sufficient dataset’ with which to assess trainee clinical competency; DOPS and CbD in particular were deemed ‘meaningful and sufficient to determine trainee competency in the skills tested’ by more than 85 % (General Medical Council 2011). Indeed, trainers appear to hold more positive views in this regard than trainees, as evidenced by the few studies examining trainee and trainer perceptions simultaneously. For example, in the recent study by Fokkema it was notable that of the 11 participants categorised as demonstrating ‘enthusiasm’, 10 were trainers, whilst the modal category for trainees was ‘compliance’ (in which participants view WBAs as useful tools but lack clarity on their broader educational effects and impact on clinical care) (Fokkema et al. 2014). This finding has been observed in three further studies (Barton et al. 2012; Basu et al. 2013; Bodle et al. 2008). Although trainers feel confident in the validity of WBAs, it is interesting to note studies highlighting that some consultants appear unwilling to score trainees poorly for fear of the impact this will have on the trainee's progression (Royal College of Physicians 2014).

Discussion

The literature reported in this review presents a bleak view of current user perceptions towards WBAs, with many questioning their validity and worth. Trainees identify a lack of time, poor assessor engagement, poor assessor understanding of the aims of WBAs, poor-quality feedback and their own lack of training in WBA methodology as the principal factors underlying their negativity. Although trainers tend to be more positive towards WBAs, this is not universal, and they similarly highlight available time, as well as trainee disengagement, as important factors hindering the success of WBAs. In some ways, these results complement the findings of Miller and Archer's systematic review, which concluded that, apart from multi-source feedback, WBA tools were not having their intended impact on clinical performance (Miller and Archer 2010).

The greatest experience of WBAs reported in the literature is from the UK. Despite WBAs now being established in UK postgraduate training, from the discussion thus far it is clear that the medical profession remains ‘rightly suspicious of the use of reductive “tick-box” approaches to assess the complexities of professional behaviour’ and that there is ‘widespread cynicism’ towards WBAs (Academy of Medical Royal Colleges 2009). This has resulted in WBAs being widely misused and regarded by many as merely an inconvenient tick-box exercise required to progress in training. In the workplace these tools are not performing as intended. In agreement, reports following formal reviews of postgraduate training programmes in the UK specifically highlight WBAs as a problem that needs to be urgently addressed (Collins 2010; Eraut 2005). This is easier said than done: gaining widespread ‘buy-in’ from trainees and trainers to the process of WBA will require significant effort. However, the UK experience provides a learning opportunity for other countries considering formal implementation of WBAs into their postgraduate training programmes.

Improving trainee experience of WBAs

The results highlight three principal, modifiable shortcomings of current WBA implementation: a lack of clarity as to the purpose of WBAs, a lack of available time, and a lack of quality feedback driving change in clinical practice. We have identified several approaches that could be considered, ideally simultaneously, to begin addressing these shortcomings; each is discussed in turn below (Table 3).

Table 3 A list of strategies to improve trainee engagement

Clarifying the purpose of WBAs

It is clear that confusion surrounding the purpose of WBAs is prevalent amongst both trainees and trainers. Two approaches may help clarify the formative purpose of WBA tools.

Firstly, the importance of training in WBA methodology has been highlighted (Academy of Medical Royal Colleges 2009; General Medical Council 2010). The Academy of Medical Royal Colleges in the UK highlights the importance of engaging trainees in repeated instruction in the use and purpose of WBA tools; this also applies to assessors. However, engaging clinicians in such training can prove a challenge. For example, one study quoted an uptake rate of just 11.5 % for training offered to trainers (Canavan et al. 2010). One way to overcome this inertia is to include training in the mandatory induction programme undertaken when doctors rotate through hospitals, though this would not address the training of consultants. A related suggestion is to increase the exposure of medical students to WBA tools during medical school, particularly as formative tools. Not only would this promote engagement of students with workplace-based learning, it would also facilitate the re-contextualisation of clinical knowledge learnt in the classroom (Van Oers 1998). Such early engagement, free of the many impediments faced by junior doctors, may permit the establishment of positive perceptions towards WBAs that continue into postgraduate training. Reports are emerging describing WBA introduction into medical school curricula in the UK (Nesbitt et al. 2013), the Netherlands (Bok et al. 2013), Saudi Arabia (Al-Kadri et al. 2013) and Australia (Olupeliyawa et al. 2014). Clearly, the importance of engaging users at an early stage has been recognised internationally, but the impact this will have on junior doctor perceptions and engagement remains to be seen.

The second approach is to consider re-branding WBA tools to emphasise their purpose. The GMC has proposed the adoption of new nomenclature: supervised learning events (SLEs) for tools designed to provide formative feedback, and assessments of learning (AoL) for summative tools (General Medical Council 2010; Kessel et al. 2012). It is felt that removing the word ‘assessment’ and introducing ‘learning’ will explicitly clarify the formative intent of the tools, although how far this ‘re-dressing’ of WBAs will go towards changing perceptions is unknown (Ali 2013). Nevertheless, the UK Foundation Programme, whose implementation of WBAs was heavily criticised (Collins 2010), recently made an active move to adopt SLEs. For its 2014 curriculum, SLEs have replaced WBAs and the forms have been simplified: all tick boxes have been removed in favour of white-space boxes for written feedback and trainee reflection (UK Foundation Programme Office 2014). Introducing SLEs to newly qualified doctors may also engender a more positive attitude when they encounter WBAs later in their training, since the emphasis on learning will already have been instilled. Encouragingly, early signs are that SLEs enjoy a higher level of support from users (Cho et al. 2014; Rees et al. 2014), and the long-term impact of this change on trainee perceptions is eagerly anticipated. However, there is also evidence that the fast pace of change is beginning to confuse users, particularly trainers, which may place this goodwill in jeopardy (Cho et al. 2014). These findings highlight the importance of carefully assessing new implementations before their introduction.

Managing the problem of time

Lack of available time remains the most frequently quoted challenge to full engagement with WBAs, by both trainees and trainers. In this regard it is notable that WBAs have, on the whole, been positively received by dental trainees (Grieveson et al. 2011; Kirton et al. 2013). Dental Foundation Trainees have an allocated trainer who ‘must have adequate time for training clearly identified in their job plans or appointment systems’ (UK Committee of Postgraduate Dental Deans and Directors 2012). Perhaps there is something to be learnt from dentistry. Indeed, writing training time into consultant contracts has been suggested, although the challenge of funding this will no doubt hinder widespread adoption of the proposal (Academy of Medical Royal Colleges 2009). Nevertheless, protected training time would likely help these tools reach their potential.

A scenario commonly reported as a consequence of the lack of time is that a clinical task is observed by an assessor but the form (with written feedback) is completed sometimes months later, when reliable recollection of the event sufficient to provide feedback cannot be expected, calling the validity of the assessment into question (Basu et al. 2013; Bindal et al. 2013; Tailor et al. 2014). This may also strain the relationship between trainee and trainer, as the trainee is made to pursue the trainer for documentation (Rees et al. 2014). One quoted impediment to immediate completion of written feedback is limited access to computers and the internet, since paper forms were replaced by online portfolios (General Medical Council 2011; Goodyear et al. 2013; Pereira and Dean 2009). However, the development of tablet and smartphone applications will hopefully facilitate more immediate written feedback (Torre et al. 2007).

Addressing the quality of feedback provided

Feedback is consistently considered by trainees to be the most valuable feature of WBAs (Miller and Archer 2010; Sabey and Harris 2011). It has been shown that systematic, effective feedback from a credible source can change clinical performance (Veloski et al. 2006; Watling et al. 2012). However, it is not clear that the feedback received through WBAs fulfils these criteria or impacts on clinical practice (Basu et al. 2013; Sabey and Harris 2011; Veloski et al. 2006). The quality of feedback provided as part of WBAs has been examined in some detail, and many authors conclude that it is not fit for purpose and may even have detrimental consequences for trainees, such as decreasing their motivation (Hattie and Timperley 2007; Saedon et al. 2012). A study by Canavan et al. (2010) examined 977 completed multi-source feedback surveys, of which only 282 (29.1 %) contained written comments. Furthermore, only 29 (10 %) of these contained negative feedback. This is not an isolated finding: other studies have noted that feedback collected by MSF is 50 times more likely to be positive than negative (Wood et al. 2006). Canavan et al. (2010) also found that, where feedback was given, 210 testimonies (74 %) included at least one general comment with no direct relevance to the task, for example ‘a fantastic guy’. Such comments provide no information on the specific behaviours that led the assessor to this judgement and therefore do not enable development. The authors conclude that this feedback ‘may be at best useless and at worst detrimental to learners’ progress’.

It is likely that the widespread lack of formal training for assessors in completing WBAs, and specifically in how to provide effective feedback, is the major factor underlying the poor-quality feedback being provided to trainees (Norcini and Burch 2007). In this regard, increasing the provision of assessor training has been suggested as a means of improving feedback quality (Babu et al. 2009; Basu et al. 2013; Norcini and Burch 2007; Pelgrim et al. 2012). Even so, addressing the quality of feedback throughout the medical workplace is not straightforward, particularly given the challenges reported in engaging assessors in training (Canavan et al. 2010). If a serious attempt is to be made to address this concern, mandatory training for trainers in providing feedback, perhaps even with certification, should be considered in the future, especially since the professional development of trainers is regarded as critical to the success of WBAs as educational tools (Norcini and Burch 2007).

Limitations

The major limitation of this review is its UK-centric nature. The vast majority of the WBA literature originates from the UK, where WBAs have been implemented in a top-down fashion that has resulted, inadvertently, in their being considered a form of summative assessment. Whilst this certainly limits the external validity and generalisability of the findings beyond the UK, as alluded to above, it also provides an opportunity for countries less advanced in their implementation of WBAs to avoid the mistakes that have clearly impacted negatively on user perceptions. It is also worth noting that specialties differ in their implementation of WBAs, for example in the number and range of assessments required of their trainees each year; some findings may therefore be more relevant to certain specialties than others, again limiting generalisability. Only literature published in English was included, introducing the possibility that valid and informative work from other countries was excluded from this analysis. Finally, whilst this work has focused on trainees and trainers as the major stakeholders in the WBA process, patients and healthcare institutions are also stakeholders impacted by the WBA process, and it would be interesting to consider the impact of WBAs on them in more detail.

Future directions

WBAs are likely to remain an important component of competency-based postgraduate medical training programmes. It would therefore be wise to consider research priorities for determining how WBAs can succeed, to guide future implementation. One such priority could be to expand the limited literature on medical student perceptions of WBAs. Implementation of WBAs into medical school curricula is becoming increasingly common and, from our reading of the literature and anecdotally, the approaches to integration vary widely between schools. This provides an opportunity to examine user engagement and perceptions, comparing the effectiveness of different strategies that could then be extended to postgraduate curricula. Alternatively, studies could focus on examining the impact of adopting the various strategies identified within this review. Although we are optimistic that such changes would have a positive impact, rolling them out on a wide scale will likely require significant support and investment from training programmes, which is unlikely to be forthcoming.

Conclusion

Workplace-based assessments have become the ‘norm’ in postgraduate medical training. However, trainees and their assessors appear to have an incomplete understanding of the educational basis of the tools they regularly use. This has had a negative impact on both their perceptions of WBA value and their engagement with the tools as learning aids, resulting in widespread misuse.

The educational community has begun to acknowledge that WBAs are failing as formative assessment tools. Here we have highlighted a series of potential approaches to address the three dominant underlying problems: lack of user understanding, lack of available time and insufficient training of trainers in feedback provision. Some of these measures are already being introduced into undergraduate and postgraduate medical curricula, and the community eagerly awaits reports of their impact on trainee and assessor perceptions.