
1 Introduction

Traditionally, students in higher education learn by listening to lectures, working in seminar groups and reading the recommended core texts on module reading lists. They are then assessed on their ability to recall and communicate what they have learnt, often via a curriculum founded on a one-way flow of knowledge from theory to practice. However, other approaches to teaching and learning exist, and locally, problem-based learning (PBL) has been in use as an innovative approach to student-centred learning since 1998. Wood (2003) defines problem-based learning as an instructional method in which students use “triggers” from a given problem, case or scenario to define their own learning objectives. Students then undertake independent, self-directed study before returning to the group to discuss and refine their acquired knowledge. She argues that it is not about problem solving per se, but rather uses appropriate problems, often generated from real-life situations, to increase students’ knowledge and understanding. We would argue that, for nursing students, it is vitally important that learning takes place in context.

The PBL process is clearly defined in the literature, and the several variations that exist all follow a similar series of steps (for example, see Gijselaers 1995; Wood 2003; McLoughlin and Darvill 2007). In this chapter, however, we argue that assessment of PBL can also differ from traditional approaches in higher education: students work with facilitators, go through a process of exploring triggers for learning, and are assessed on their ability to work in teams.

Barrows defines it as:

The learning that results from the process of working towards the understanding or resolution of a problem. The problem is encountered first in the learning process. (Barrows and Tamblyn 1980: 1)

Feletti and Ryan (1994) describe the triple-jump assessment as a versatile but under-explored instrument within a PBL curriculum, one that can be used both to develop and to assess problem-based learning. They also note that, despite the absence of any published psychometric studies, particularly on reliability, the “triple jump” has been used internationally since about 1980 (Feletti and Ryan 1994; O’Gorman et al. 1998). This chapter evaluates its use in the summative assessment of undergraduate nursing students. We began with the premise that assessment and learning are inextricably linked, and even more so when students are actively involved and participating in the assessment of their acquired knowledge (Race 1995; Biggs 2003; Carless 2007).

2 Background

PBL has been described as an instructional method in which students learn through solving problems and reflecting on their experiences (for example, Barrows and Tamblyn 1980). Locally, PBL had been used and assessed formatively in a module on cultural awareness for healthcare practice (McLoughlin et al. 2003), situated in year three of an undergraduate nursing programme, just before preparation for registration, and running since 2000. Previous student evaluations had identified that students wanted recognition for the formative work they had undertaken as part of the PBL process, with many explicitly stating that they would prefer this component to be summatively assessed and rewarded. An action plan for development was formulated from these evaluations, as it was considered important to ensure that the PBL component was aligned with the learning outcomes and the assessment strategy. Biggs (2003) argues that such alignment is crucial in allowing students to develop knowledge and understanding. For the module team, the process was therefore as important as the product in achieving the summative outcomes for this module (Darvill 2000). Deciding which type of assessment to use, so that students could undertake different forms of assessment in order to achieve the learning outcomes, was a further consideration. PBL was already being used both formatively and summatively as part of the curriculum changes made in 2000 (Glen and Wilkie 2000), following United Kingdom policy directives from the Department of Health and the United Kingdom Central Council for Nursing and Midwifery, now the Nursing and Midwifery Council (NMC) (DH 1999; UKCC 1999). However, the local curriculum was due for revalidation, and this provided an opportunity to respond to outgoing student evaluations, most of which indicated a preference for the PBL trigger work to be summatively assessed and therefore rewarded with a grade.

An institutional audit (University of Salford 2006) within this university had identified an overreliance on written assessment as a form of summative assessment strategy. At a problem-based learning facilitator workshop run by McMaster University in 2000, funded as part of a Teaching and Learning Quality Initiative scheme, the attendees were first introduced to the triple jump as a different type of assessment for student-centred learning. On return to the UK, both participants reflected on and fed back the workshop learning, which was then used as a vehicle to promote change in summative student assessment strategies (Darvill and McLoughlin 2000; Holland et al. 1999).

Ramsden (1992, 2003) suggests that, from the students’ perspective, it is assessment, acting as an extrinsic motivator, that always defines the curriculum being studied. Biggs (2003) uses the term “backwash” for students’ tendency to treat assessment as more important than teaching, and suggests that assessments should be aligned to curriculum outcomes; the “backwash” then becomes positive rather than negative, and deeper approaches to learning are encouraged. Haith-Cooper (2000) argues that these are important considerations when developing new assessment strategies within a nursing curriculum. The integration and development of key skills for employability, such as communication and working in teams, is also crucial in nurse education programmes (Department of Health (DH) 2001). Therefore, developing these skills in students before they became part of the professional workforce was an important consideration for inclusion in this new assessment strategy (Burns 2005). A study commissioned by the University of Birmingham (2007) mapped how communication skills are assessed across a sample of professional curricula, but the results indicated inconsistent and patchy provision. These findings influenced the design of the modified triple jump in this School of Nursing, which aimed to enhance communication skills in future practitioners.

2.1 The Triple Jump

A method of assessment nicknamed the “triple-jump” exercise was devised for the undergraduate medical programme at McMaster University (Painvin et al. 1979). The three original steps consisted of: initial analysis of the problem by the student in the presence of an assessor; independent information searching, usually over a limited period of time; and finally synthesis, again in the presence of the assessor. In the synthesis step, the students describe the process of the information search, how personal objectives were set and prioritised, which resources were used and how the time was spent. Students then present their final analysis of the problem, highlighting how new knowledge was attained and related to their understanding of the problem under investigation, followed by feedback from the assessor (O'Neill 1998).

The definition used locally in this School of Nursing is a modification of the original “triple-jump” exercise used at McMaster, the differences being that the assessment is given to the entire class at the same time rather than individually, and that the emphasis is not just on seeking information but also on using prior knowledge to “solve” new problems, as outlined in Rangachari (2002) and Schmidt (1993). McTiernan et al. (2007) argue that careful preparation is required when developing triple-jump and PBL assessment strategies. The problems, or triggers, were designed and developed with qualified nursing and healthcare practitioners, resulting in five case-based scenarios. These were written to meet the requirements of the module outcomes and reflected the range of complex child health nursing needs that students could encounter in the clinical setting. Real-life case scenarios are central to the PBL process; Wilkie and Burns (2003) state that they must reflect the fact that nursing is a practice-based profession and, more crucially, should be drawn from clinical practice. To provide structure for marking and feedback, grade descriptors were then developed by the module team, based on the work of Baptiste (2000) at McMaster University in Canada and of Arkell and Dudley (2009), two academics who were part of the North West problem-based learning special interest group, a group of UK university lecturers interested in implementing PBL in nurse education curricula. Once developed, these descriptors were scrutinised and subsequently ratified through the quality assurance mechanisms within the University.

2.2 The Role of the Facilitator

PBL facilitation emphasises the importance of student-centred rather than teacher-centred education; in PBL, the teacher’s role is to facilitate collaborative knowledge construction (Burrows 1997). Furthermore, Dolmans et al. (2001) argue that a tutor’s performance is not a stable characteristic but is partly situation specific. Many consider that a facilitator of PBL should have some subject matter expertise but, more importantly, should know how to facilitate the learning process; evaluations of facilitation have identified the role of the facilitator as central to success (McLoughlin 2002; Haith-Cooper 2003a, b; Wilkie 2004). Facilitation in education stems from the work of Rogers (1969) and Heron (1999). Rogers (1969) suggests that the qualities of an effective facilitator include the ability to be seen by students as genuine, accepting and prizing their contributions, while also offering empathic understanding. Rogers and Freiberg (1994) argue that being a facilitator requires a special perspective on life; in PBL, students learn through addressing problems and reflecting on their experience, working in small groups guided by a facilitator. Ultimately, it was hoped this assessment strategy would foster collaborative working partnerships, develop skills in conflict resolution and focus individual and group learning. The teacher, through facilitation, should seek to foster a safe, trusting climate in which the learner is motivated to strive for success; in this way, the role of the facilitator is key to the success of PBL as a learning methodology.

Burrows (1997) also believes there should be genuine mutual respect between students and facilitator, and that a partnership in learning should develop which involves the facilitator as co-learner. However, this transition to the role of facilitator of learning in PBL may not be easy for some lecturers (Darvill 2003). Many have been used to more traditional “transmissionist” approaches to teaching, and research exploring how to make this transition effectively is largely confined to centres where PBL has been used for a number of years, such as Maastricht in Europe or McMaster in Canada. Tools for evaluating the role of the facilitator in a PBL curriculum are scarce, and the available evidence is limited to those identified in the PBL Toolkit (adapted from Dolmans and Ginns 2005).

However, some research in facilitation has demonstrated a positive impact on educational outcomes through the use of PBL (Haith-Cooper 2003a, b). Sandahl (2009) and Tuckman (1965) posit that in small group learning the teacher acts as facilitator, but a key issue for development and enhancement is that students should be supported to work in collaboration, and in health care, this should also transfer to practice (DH 2001). The triple jump and PBL also encourage active self-directed learning, and students have been observed relying on and encouraging each other to achieve group goals, developing mutual respect and conflict resolution skills (Brown et al. 2008; Johnson et al. 2007). Working collaboratively in small groups is key to the delivery of quality care within clinical practice (DH 2001). Thus, small groups of between four and six students were formed to undertake the analysis of the PBL case.

3 Methods

The aim of the study was to evaluate the impact of the triple-jump summative assessment strategy and answer the following questions:

  • Did the triple jump improve the rate of first-time passes and did marks improve compared to the written format?

  • What was the student experience of the triple jump as a summative assessment strategy?

  • What were the experiences of the facilitators using the triple-jump assessment strategy?

Evaluation research was chosen to help answer the research aim and questions. According to Silver (2004), programme providers use evaluation research in order to consider the “effectiveness” of educational developments. The findings from evaluations focus on the strengths and weaknesses of various aspects of innovations as well as their overall “outcome”. This information is, in turn, used to consider how such interventions might be modified, enhanced or even eliminated in the effort to provide an effective assessment.

A mixed-methods approach was chosen for this evaluation. Data were collected in the form of student evaluation questionnaires and facilitator focus groups. Bryman (1988, 2006) and Carpenter and Jenks (2003) describe how evaluation studies can use a fusion of the two styles of research, i.e. a mixing of qualitative and quantitative designs. Mixing methods offers new ways of understanding experience and an alternative picture of the phenomena under study. Mason (2006) believes that any experience can be multidimensional; therefore, adopting an inclusive approach can enhance understanding of the problem. However, this approach is not without criticism and difficulties, as Mason (2006) also suggests that the researcher needs to engage with the question to ensure that the data generated allow comparisons to be made.

3.1 Sampling

Purposive sampling was used for this evaluation as it focused on a discrete curriculum change taking place and, ultimately, the experiences of specific cohorts of students and their facilitators, with the intent of obtaining the views of as many as possible (Silverman 2001; Parahoo 2006; Mason 2006). Thus, the sample was chosen for its relevance to the research questions, aiming to produce meaningful results.

The target population were third-year child branch diploma in nursing students and their facilitators. One hundred and seven students from four cohorts over two years, who were undertaking this assessment as part of their studies, were included over the course of the module life, with the September 2006 cohort (N = 20) acting as the pilot group (Table 6.1). Four facilitators from the module team also participated in focus groups. There was a mix of ages and genders in the student group, but all facilitators were female academics.

Table 6.1 Target population

3.2 Data Collection

Data collection was ongoing from September 2008 to March 2010. To answer the question regarding the student experience of the triple jump as a summative assessment strategy, data were collected from the students using a 10-point Likert scale questionnaire (see Appendix B). The questionnaire was designed and field-tested with a cohort, as advised in Parahoo (1997), and then modified and revised to include questions related to the facilitator role in the triple-jump process. Following these minor modifications, it was distributed at the end of the module to three concurrent cohorts of undergraduate child branch diploma in nursing students.

Likert scales are commonly used in educational evaluation questionnaires and are advocated as a data collection tool; respondents indicate their level of agreement with specific statements that express a favourable or unfavourable attitude towards the concept being measured (Seale 2004). The scale consists of several declarative statements that express a viewpoint on a topic, and good Likert scales usually include ten or more statements. In this study, all students were asked to indicate their level of agreement with the declarative statements, with scoring ranging from 1 (unfavourable) to 10 (favourable) on the Likert scale. Student questionnaires were distributed in the final week of the module, before publication of assessment results, so as not to influence the findings.

The tool used for this questionnaire was based on the original evaluation tool developed by the Salford Key Skills team (Oakey and Doyle 2000). This was modified and amended to produce eighteen statements for this evaluation, based on the findings from the field test and the recommendations provided by the PBL evaluation toolkit (PBL Special Interest Group 2009). Several key questions invited written qualitative commentary (see Appendix A), the aim being to gather responses reflective of the different attitudes held within the student group towards the triple jump as a summative assessment strategy. According to Polit and Beck (2013), spreading out the responses of people with different attitudes along a continuum allows a broader representation of views.

3.3 Focus Group

The next stage of the data collection was to seek the views of the facilitators. This involved a focus group interview with the facilitators who taught and assessed learning on the module. Focus groups can provide participants with a safe environment in which to share experiences and can lead to uninhibited discussion (Barbour 2008). Morgan (1988), cited in Cohen et al. (2000), defines a focus group as a discussion between participants about a particular topic, in this case the use of the triple jump. It is a popular research tool, but there are critical issues with its use, as researchers often fail to identify how the sessions were conducted, making replication impossible (Barbour 2008). This was addressed by the researcher using the research question as a guide, together with the framework suggested in the PBL SIG (2009) evaluation toolkit. Another researcher was also present taking notes and observing group dynamics, as Barbour (2008) advises the interviewer to take careful note of the dynamics of the group to ensure all participants have an equal opportunity to express their views. The PBL SIG (2009) suggests focus groups are a useful way of collecting data related to PBL evaluation because the topic is focused and the information gained can help to develop themes. The interaction between group members in a focus group should lead to greater spontaneity and a greater depth of data than would be obtained from a questionnaire or structured interview. As Kitzinger (1995) states, it “reaches the parts that other methods cannot reach” by allowing the researcher to examine not just what people think, but how and why they think that way. The researcher ensured that the meeting was open-ended but to the point, as advised in Morgan (1988), cited in Cohen et al. (2000). The focus groups were audio-taped and transcribed verbatim, as were the field notes.

3.4 Ethics and Informed Consent

Within this university, it is considered best practice to evaluate curriculum changes; ethics approval was not required, but informed consent was obtained from both students and facilitators before undertaking this study. The RCN (2009) research ethics guidance for nurses was followed throughout. Students and facilitators were advised that participation in the study was voluntary and that they had the option to withdraw at any time. Participants were fully informed as to the nature of the evaluation being undertaken in relation to the triple-jump assessment.

3.5 Data Analysis

Student questionnaires were analysed using simple descriptive statistical analysis (Robson 2002). The descriptive approach aimed to gather student opinions on the desirability of the triple jump as a summative assessment strategy, and the descriptive statistics allowed the presentation of quantitative descriptions in a manageable format. Findings were then summarised and presented using Microsoft Excel. The focus groups were analysed using qualitative content analysis as discussed by Sandelowski (2000). Transcriptions were carefully examined and coded in order to identify emerging themes. The researchers analysed the focus group data individually and then came together to discuss the findings and emerging themes, adding rigour and trustworthiness (Guba and Lincoln 2005; Sandelowski 2000).
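To illustrate the descriptive step, the short Python sketch below summarises Likert responses per question; it is an illustrative sketch only, since the study itself summarised and presented the questionnaire data in Microsoft Excel, and the question labels and scores shown here are invented for the example rather than taken from the study data.

from statistics import mean, median
from collections import Counter

# Hypothetical Likert responses (1 = unfavourable, 10 = favourable) for two questions
responses = {
    "Q1 Perception of the triple jump": [7, 8, 6, 9, 4, 7, 8, 10, 6, 7],
    "Q4 Facilitator encouragement": [8, 7, 9, 5, 8, 7, 6, 9, 8, 7],
}

for question, scores in responses.items():
    dist = Counter(scores)  # frequency of each point on the 10-point scale
    print(f"{question}: n={len(scores)}, mean={mean(scores):.1f}, "
          f"median={median(scores)}, distribution={dict(sorted(dist.items()))}")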

4 Results and Discussion

4.1 Results

The aim of the study was to evaluate the impact of the triple-jump summative assessment strategy and answer the following questions:

  • Did the triple jump improve the rate of first-time passes and did marks improve compared to the written format?

  • What was the student experience of the triple jump as a summative assessment strategy?

  • What were the experiences of the facilitators using the triple-jump assessment strategy?

In order to answer the first question (did the triple jump improve the rate of first-time passes and did marks improve compared to the written format?), documentary evidence was accessed from the examinations office. This included essay marks from three previous cohorts (March 2005, September 2004 and March 2003), which were compared with the marks from the four cohorts (September 2006 to March 2008) using the triple-jump assessment. The overall pass rate at first attempt was compared with that of the cohorts who had undertaken a 3000-word summative written essay on similar topic areas to those assessed using the triple-jump assessment. Results demonstrated an improved pass rate at first attempt (see Fig. 6.1) and a reduced standard deviation (see Fig. 6.2), and the overall marks awarded were higher for the triple-jump students than for the essay students.

Fig. 6.1 a Average mark comparison: presentation vs. essay. b Standard deviation: presentation vs. essay

Fig. 6.2 Question 1: What was your perception of using problem-based learning (triple jump) as a summative assessment strategy?

Standard deviation is a widely used measure of variability, indicating how much dispersion there is from the mean or average mark. A lower standard deviation, as seen with the students undertaking the triple-jump assessment here, demonstrates that the marks tend to be very close to the mean, whereas a higher standard deviation, as with the essay marks, indicates that the marks are spread over a larger range of values. The use of the standard deviation for analysis allowed comparison of observations from different normal distributions within these groups of students. The overall marks awarded were higher using the triple-jump assessment as opposed to the essay format, which we considered to be an important finding in this study.
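As a concrete illustration of this comparison, the brief Python sketch below computes the mean and standard deviation for two sets of marks; the figures are hypothetical and are not the cohort data reported above.

from statistics import mean, stdev

essay_marks = [38, 45, 52, 60, 41, 67, 55, 48, 72, 35]        # hypothetical essay cohort marks
triple_jump_marks = [58, 62, 60, 65, 59, 63, 61, 64, 60, 62]  # hypothetical triple-jump cohort marks

for label, marks in (("Essay", essay_marks), ("Triple jump", triple_jump_marks)):
    print(f"{label}: mean = {mean(marks):.1f}, standard deviation = {stdev(marks):.1f}")

# In this invented example the triple-jump marks have a higher mean and a lower
# standard deviation, mirroring the pattern described for the cohorts above.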

A selection of key findings from the questionnaires will now be discussed (see Appendix A). The overall response rate was 86 out of 107 students, and Mason (2006) suggests that a response rate of greater than 65 % is sufficient for most purposes. The field test group elicited a response rate of 20 out of a potential 30, again a greater than 60 % response rate.

The first question sought to explore the students’ perception of using problem-based learning as a summative assessment strategy. Figure 6.3 below indicates that, of the total respondents who participated in the evaluation, only 3 students in the first cohort scored below 5 on the Likert scale, which suggests that overall the students were satisfied with this changed method of assessment. However, the initial findings from the field test group (S06) indicated a less favourable response to the triple-jump assessment strategy; this could reflect the facilitators’ unease and inexperience, individual students’ learning styles and attraction to this type of assessment, or the fact that the students were aware this was the first time it was being used.

Fig. 6.3 Question 4: Facilitator encouragement of directed learning

In Fig. 6.4, it can be seen that four students from three different cohorts scored 5 or below and indicated in their qualitative comments that they required more time and more input from the facilitator. The amount of facilitation has to be carefully balanced so that students do not become overreliant on the facilitator, as it is the students who are charged with solving the problems arising out of the case presentations. These findings are supported by Brown et al. (2008) and Matthes et al. (2008), who argue that supplementing the assessment with structured case-based tools makes it more appropriate to PBL.

Fig. 6.4 Question 7: How well do you feel the group members have participated and worked effectively as a team member on this presentation?

Indeed, as one student wrote, “at some stage you have to take responsibility for your learning - yes it is a new assessment strategy and it is stressful but so are written essays”.

This is reflected in the scoring of other student responses, with the majority of scores on the Likert scale falling between 4 and 10, indicating a leaning towards a positive student experience of facilitation.

Question 4 addressed facilitator encouragement of self-directed learning by the module team. A key factor in the successful use of the triple jump is the role of the facilitator (Rangachari 2002; Wilkie 2004; Matthes et al. 2008), and according to Haith-Cooper’s (1997) findings in her research with midwives, positive facilitation has an impact on educational outcomes. The literature also reports that group size is important for student learning and recommends small group sizes; locally, these were 6 students per group, which encouraged positive facilitation of student-centred learning (Sandahl 2009; Gallagher 1997). We argue that this approach offered a more coordinated effort towards self-directed learning and problem solving.

Questions 7–16 focused on the students’ ability to work in groups and share the learning. Question 7 (Fig. 6.5) asked How well do you feel the group members have participated and worked effectively as a team member on this presentation? Being reliant on other group members to share the workload is a constant source of stress associated with problem-based learning; indeed, Savin-Baden (1997) has written extensively on this aspect of PBL. However, success is achieved by students who can develop the skills to challenge those group members who lack commitment (Smith and Coleman 2008). Figure 6.5 demonstrates that the September 07 cohort reported a very good response to working effectively as a group, with over 90 % indicating a mark of 6 or above on the Likert scale. The results for the other cohorts are generally positive but do illustrate problems for some students working in groups and teams. These findings link closely to the requirement to prepare students for clinical practice, as once qualified, nurses are expected to work inter-professionally and in teams, overcoming any personality issues for the benefit of the client group.

Fig. 6.5 Question 17: Do you prefer the triple jump case presentation to a summative written essay?

Student comments reflect this as they state:

The facilitator of the group kept us on track to analyse the case study, this was important as we had gone off on a tangent, results fantastic ….best assignment ever… learned loads.

This opportunity gave me the chance to adapt to other’s working styles.

The final question, 17, asked Do you prefer the triple jump case presentation to a summative written essay? (see Fig. 6.5). Any form of summative assessment creates stress, and students have different learning styles; some prefer oral individual assessments, whereas others feel they can express themselves better in a written academic assignment (Boud and Falchikov 2007). The findings from this question are interesting: while the majority of students from all cohorts scored 5 and above on the scale, 8 students preferred to be assessed with an essay, as illustrated in these quotes from the qualitative components below.

I prefer a written assignment, academic support is more focused and I am not reliant on others which freaks me out.

Although PBL felt very pressurised it was a nice change from a written assignment.

I had to manage my own time and the content to produce work to a high standard to meet the deadline set by the group…challenging but on reflection enjoyed more than an essay and learned more.

Jeffrey (2004) and Major and Palmer (2001) identify that poor academic outcomes can be improved if areas of perceived weakness are identified at an early stage of the learning process, and Roberts and Ousey (2003) suggest this should be underpinned by robust academic support.

In total, there were 86 responses across these cohorts, and of these, only 11 scored 5 or below on this Likert scale. This seems to indicate that the PBL triple-jump presentation assessment appealed to the majority of the students surveyed. In relation to the aim of this study, it was important to understand the student experience of the triple-jump assessment strategy, and the qualitative comments illustrate a range of preferences in terms of academic assessment.

The statements quoted above present contrasting views on how the students perceive the triple-jump assessment strategy. Clearly, one student prefers a written assignment and an individual approach to support, which could be interpreted as a singular approach to gaining marks for personal benefit or even as seeking support elsewhere, whilst other students appear to have developed their group working skills, group performance and the adoption of peer learning approaches.

Further statements illustrate what we considered to be positive learning outcomes for students undertaking the assessment on this module; for example, one student stated:

Enhanced many skills. Cognitive, managerial, written and verbal communication, due to presenting academic content and organising tasks within the group.

with another student commenting on the role of the facilitator in the group sessions. Overall, the comments and feedback were generally positive and highlighted the skills acquisition required to successfully complete the summative case presentation.

4.2 Focus Group Facilitator Narratives: Emergent Themes

Four themes emerged from the analysis of the focus group data: facilitation, assessment, student experience and group work. Within each of these key themes, a number of issues arose.

4.2.1 Facilitation

Clear facilitation of students within groups was deemed to be extremely important, and in some instances facilitators expressed concern about providing too much or too little direction.

Another comment was about “students going down blind alleys” and not engaging in critical thinking related to the case under investigation. Albanese and Mitchell (1993) identified, through a meta-analysis of the literature on PBL, that students have to work together to analyse the problem, just as they should do in professional practice in teams. They state that students need to make sense of uncertain or conflicting information, but the role of the facilitator is crucial for guidance. This was the experience of one facilitator in the focus group, who at the trigger review stage had identified that the students had misinterpreted the cues embedded within the case. The facilitator then encouraged the students to unpick the issues surrounding their decision-making through reasoning and underpinning knowledge, and ultimately the students were guided in the right direction. Burrows’ (1997) concept analysis of facilitation indicates that the facilitator should focus students on developing a goal-oriented process. This should include stepping back, encouraging investigation, identifying goals and giving meaning to the activities being undertaken in relation to the trigger or casework. Schmidt (1993: 790) states that the facilitator should be discouraged from active involvement in exploration of the trigger and should be considered a “safeguard and not a guide”. The following quote illustrates how the facilitator emphasised the skills students should develop, and the particular roles they should take, within the group sessions:

“Sometimes they are entrenched on learning just their component of the scenario and try to disassociate themselves from the rest of the content…. but as a facilitator and marker… I make it clear that the whole group must know each others content…in case someone is sick on the day…..the role of the chair person is vital”

Furthermore, the impact of PBL facilitators’ subject expertise on student achievement remains unclear (Dolmans et al. 2002). However, in this module, the facilitators were allocated to the cases because of their subject expertise and their experience from clinical practice, a decision taken in relation to where the students were situated and the impact this would have on their practice in these areas in the future.

Barrows (1980, 1986), who initially developed the philosophy of PBL, argues that it should address the following key objectives: the structuring of knowledge for use in clinical contexts, the development of effective clinical reasoning, the development of self-directed learning skills and increased motivation for learning.

4.2.2 Student Experience

Barrow et al.’s (2002) evaluation of PBL within nursing curricula revealed an overall positive experience of PBL; however, several students initially experienced stress associated with the ambiguity of the trigger, with one student in their study stating, “compared to when we first started grasping at all the areas, now we are more confident to discount areas and say why”. Johnson et al. (2007) identify that when members of the group encourage each other through effective communication, they are more likely to accomplish the group goals.

While on the whole the groups across all these cohorts appeared to function in a cohesive manner, there were problems with disjunction. Savin-Baden (1998) describes disjunction as occurring when students become completely stuck in their learning or feel fragmented, resulting in frustration, confusion and often a demand for the right answers rather than a search for new meaning and understanding.

Indeed, one facilitator stated quite categorically

I anticipate there are going to be problems in groups and I prefer to let them surface, encouraging openness within the group… one student went off sick and this impacted on their contribution to the group presentation ….

An interpretation of later work by Savin-Baden (1999) suggests, however, that this is a powerful component of the learning process and of the problem-solving aspects of PBL; students deal with disjunction in a number of ways, and resolution can be achieved. The students from this cohort were able to solve the problem of the “missing piece of the jigsaw”, redistributed the work between the other members of the group and were successful in their summative presentation. Responsibility for their own learning and motivation to learn emerged when the facilitators stated that the process of learning was important: “it develops lifelong learning skills, teaching them how to approach, analyse and develop a topic”. Brown et al. (2008) identify a move from a more passive to a more self-directed, participative learner, stating that PBL engages students in the learning process. One student stated, “I have enjoyed this assignment and the group work but had to overcome my nerves for the presentation”,

with another declaring…

Made me want to learn more about the subject as I was teaching it to others and needed to understand and remember it.

The facilitators identified that the PBL process was a cognitive experience, and this was evident in student interactions demonstrating their understanding of the trigger and in their use of cognitive reasoning when conflict arose within the group. It appeared that the knowledge evolved through social negotiation and individual understanding of the content; this is supported by the work of Savery and Duffy (1995), who identify these as three constructivist principles in relation to cognitive learning.

4.2.3 Assessment

McTiernan et al. (2007) argue that the use of the triple jump as an assessment tool encourages students to challenge practice and allows them to identify their knowledge deficits; they begin to utilise key skills to solve problems, and it also promotes the use of evidence to inform practice. One facilitator commented, “I think the scenarios are good…well written and authenticated by practitioners”, an important comment illustrating that the transferable skills of becoming a nurse, and the linking of knowledge and understanding to practice, were the overall focus for facilitators here. The participants in the focus groups identified that this method “assesses keys skills” and that the delivery “demonstrates the students ability to communicate, organise material, manage their own learning and present and interpret data, they also problem solve when they are presented with the trigger at the outset”. These findings are also supported by Biggs (2003), who suggests that the assessment strategy should be congruent with the learning outcomes and goals of the module or programme. Rangachari (2002) argues that PBL courses should place emphasis on analysis, information retrieval and then critical analysis. The students use the onion model, devised by McLoughlin and Darvill (2007), to support the process and offer structure; with the use of carefully prepared PBL scenarios and explicit grading criteria, this can identify weaker students in order to offer more focused feedback for development (as suggested in Painvin et al. 1979). “We have more students passing first time… can access level 3 studies without re-submissions”.

4.2.4 Group Work

While supporters of PBL encourage self-direction in learning, they also advocate collaborative and group learning (Barrows 1986; Sampson and Marthas 1990; Katz 1995; Boud and Feletti 1997; Engel 1997; Savin-Baden 1999). Students benefit from the perspectives of others and are encouraged to work together as they would in the workplace rather than being competitive about their learning. A collaborative approach to learning and working alongside other health professionals is necessary for practice (Engel 1992; DH 2001). Sandahl’s (2009) review of the literature identified that collaborative testing for students was a positive experience, improved student performance and facilitated critical thinking in groups.

The focus group participants here stated, “the idea is they are supposed to gain information and knowledge from their peers which forces them to work as part of a team” and “it’s the nature of the task and the type of people, some students are driven and quite competitive, whereas others hang onto the coat tails of the facilitators”.

However, even though the literature reviewed here proposes that collaborative learning is a positive experience, this is not always the case for students. Indeed, Tuckman (1965), cited in Sampson and Marthas (1990), identifies that the first stage of group development is forming, with other stages, including storming, occurring before performing can take place. The facilitators here reported that the personal relationships between some of the group members resulted in conflict, and this also appeared to be associated with some “high fliers” expressing fear of failure.

Johnson et al. (1991) also suggest it is a mistake to assume that students can interact effectively in groups, often because they have not been coached. Johnson et al. (2007) later claimed that positive interdependence occurs when groups are encouraged to communicate effectively, respect each other and adopt a positive approach when conflict arises. However, they also identify that negative interdependence occurs when some group members obstruct the efforts of others, often in pursuit of their own goals. In order for the students to be able to deliver the summative component for this module, they needed to perform as a group in a seamless way, and timed facilitation sessions were therefore built into the timetable in order to offer them a staged approach to preparation for the presentation (i.e. performing). This is illustrated by the following quote from a focus group participant: “It feels like a time consuming process, but it isn’t because all the supervision is timetabled not invisible like in essay supervision”.

4.3 Discussion

The students were visibly worried by this new method, and concerns were expressed across all the groups. However, both the students and the staff involved, as illustrated by some of the facilitator narratives here, agreed that the depth of knowledge gained from exploration of the chosen triggers and the practice of applying problem-solving skills were a unique and invaluable combination for achieving success.

If you asked them what they wrote in their essay they wouldn’t remember, they can recall this learning.

Both student and facilitator evaluations have been extremely positive, but as with any form of assessment, there is always a certain amount of anxiety and apprehension. Although students viewed this method of assessment with trepidation, these findings suggest that they were also able to articulate how they had developed the key skills expected of future healthcare professionals, including their ability to communicate and work together in small teams.

If you want my opinion the Triple-jump has great merit… it is a learning process for all… facilitators included….I think it provides an interesting challenge… they go through a process of learning that gives them life long skills.

5 Conclusion

Based on the evaluations, summative assessment using the PBL triple jump was predominantly successful for these students, who achieved an improved pass rate at first attempt. We also introduced a new and successful assessment strategy that has since been further developed and enhanced in this programme.