
1 Introduction

Rural and remote practice of emergency medicine presents unique challenges, particularly when faced with infrequently encountered cases and procedures (Williams et al. 2001). These challenges are amplified by the fact that a large proportion of emergency care in rural areas must be provided by general practitioners, or by nurses and nurse practitioners (Williams et al. 2001; Casey et al. 2008). This poses a serious challenge to equitable health-care delivery, as patients in rural areas do not have access to levels of emergency care comparable to those available in urban centers (Rogers et al. 1999; Ireland et al. 2006). In emergency medicine, and in rural and remote emergency medicine in particular, the low frequency of many clinical encounters limits the opportunity for skills to be developed and maintained through on-the-job experience alone. Therefore, a systematic approach to training personnel for these emergencies is required. Simulation-based medical education (SBME) has been identified as a valuable tool in the acquisition and maintenance of knowledge and skills (Rogers et al. 1999; Ireland et al. 2006; Issenberg et al. 2005; Cook et al. 2011; Scott and Dunnington 2008; Roy et al. 2011) because it facilitates deliberate practice without compromising patient safety (Ziv et al. 2003). However, simulators are often located in urban centers and are not easily accessible outside these centers due to geographic, cost, and time constraints (Ikeyama et al. 2012; McCoy et al. 2017; Rosen et al. 2012). Mobile tele-simulation has the potential to overcome these barriers by bringing simulation training to the trainees, but challenges such as providing a comfortable learning environment, resolving technical issues, and ensuring that the desired content can be taught via tele-simulation must be addressed.

We have developed a mobile tele-simulation unit (MTU) prototype that enables mentors and trainee emergency health-care workers to connect and access SBME on procedural skills in rural and remote settings. This study focuses on a proof of concept regarding the acceptability, feasibility, and effectiveness of the proposed intervention. The goal is to determine whether using this unit, in areas where simulation training would otherwise not be available, is acceptable to all parties given the advantages that an MTU can offer in terms of flexibility, convenience, and cost. The specific objectives of this project are:

  1. Acceptability and feasibility: to gather feedback on the design and function of each iteration of the MTU prototype and incorporate it into the finalized MTU

  2. Effectiveness: to examine learning outcomes and assess whether the outcomes in the MTU are comparable to face-to-face training

This study takes place in Newfoundland and Labrador (NL), Canada, where 40 percent of the population lives in rural areas. NL has a population of around 525,000, geographically dispersed across a land mass of approximately 405,000 km², almost one and three-quarters times the size of Great Britain, which has more than 100 times as many people. NL has a relatively new simulation lab, located at the medical school in the capital city; however, the geographic dispersion of medical facilities across the province makes it expensive, time-consuming, and often impractical for trainees to visit and access resources at the urban facility. In addition, the simulation lab operates at near capacity, with preference given to medical students and limited access for outside groups.

2 Background

Mobile tele-simulation is a combination of tele-simulation and mobile simulation. Tele-simulation involves using Internet protocol-based teleconference software to give trainees access to simulators and/or mentors in a different location. It couples the principles of simulation with remote Internet access to teach procedural skills (Mikrogianakis et al. 2011). Mobile simulation enables access to simulation training by bringing the necessary equipment, and sometimes even the training environment, directly to the remote teaching site. Research on tele-simulation and mobile simulation is limited but has been growing in recent years.

Tele-simulation is particularly useful when there are distance limitations, time constraints, or a lack of skilled mentors that constrain access to training at simulation centers (McCoy et al. 2017). Tele-simulation has been shown to be an effective means of teaching medical skills. It has been used to teach procedural and surgical skills such as intraosseous line insertion (Mikrogianakis et al. 2011), laparoscopic surgery (Okrainec et al. 2009, 2010; Henao et al. 2013), treatment of ventricular fibrillation or desaturation in an intubated patient (Ikeyama et al. 2012), pediatric resuscitation (Ohta et al. 2017), and performance of ultrasound-guided anesthetic techniques (Burckett-St Laurent et al. 2016). Some of these studies examined the use of tele-simulation to provide training to physicians in resource-restricted regions, for example, laparoscopic surgery to surgeons in Botswana, Africa (Okrainec et al. 2009, 2010), in Colombia (Henao et al. 2013), and in Puerto Rico (Small et al. 1999; Treloar et al. 2001).

Tele-simulation has also been found effective in the remote assessment of skills (Burckett-St Laurent et al. 2016; Okrainec et al. 2013; Choy et al. 2013). This is important because it has the potential to decrease costs without compromising assessment validity. For example, Burckett-St Laurent et al. (2016) found that evaluations of ultrasound-guided anesthetic procedures conducted remotely were consistent with those conducted on-site. Similarly, Okrainec et al. (2013) found that remote administration and scoring of the exam for laparoscopic surgery produced results consistent with standard on-site testing.

Emergency personnel have also been trained using tele-simulation. Treloar et al. (2001) used a high-fidelity human patient simulator (HPS) to provide an educational program for emergency personnel, who had access to the HPS and received both on-site and remote instruction. They found a significant overall improvement in both perceived preparedness and self-efficacy. Von Lubitz et al. (2003) also studied the use of an HPS and took it one step further by examining remote access to the simulators to train physicians in three emergency scenarios. They found a statistically significant improvement in all testing measures, and trainees’ confidence in performing the procedures also improved.

A challenge with tele-simulation is that trainees may not have access to the simulation equipment or the training environment necessary for tele-simulation. These challenges are addressed by mobile simulation. Mobile simulation can make use of a specialized unit with portable simulation equipment that effectively provides a safe, immersive classroom environment for simulation training. For example, a patient simulator and an audiovisual system were set up at rural health centers in Australia to teach trauma teams (Ireland et al. 2006). They used multidisciplinary team training to combine scenario-based learning with mobile patient simulation, allowing technical and behavioral skills to be practiced in the actual work environment. Weinstock et al. (2009) recognized the high setup cost and the need for a dedicated space when establishing simulation training at a health center and sought to create a low-cost method for delivering mobile simulation. They created a self-contained mobile simulation cart that contained a laptop to display vital signs and audiovisual equipment to allow for video-based debriefing. A systematic review of the mobile simulation literature found only 29 papers that studied the use of mobile simulation to train physicians (Rosen et al. 2012). The review found that the studies covered a limited range of clinical topics, with the majority focusing on surgical and obstetrical areas. Most studies focused on evaluating learner reactions and changes in attitudes and found positive results, and several studies found improvements in care (Steinemann et al. 2011; Riley et al. 2011; Hunt et al. 2008).

Another type of mobile simulation is one in which a self-contained unit containing the simulation and other materials is transported to the trainees, rather than setting up the simulation equipment in the hospital itself. In this model, the equipment is sent to the training site and the training is conducted in the self-contained unit. Examples of mobile simulation units include a modified van with simulation equipment to practice laparoscopic skills in Australia (Xafis et al. 2013); a modified van with simulation equipment and a camera to record and train emergency clinicians in Italy (Ullman et al. 2016); and a modified ambulance with a camera and simulation equipment in the US to teach endotracheal intubation (Bischof et al. 2016). There is limited information available on the learning or patient outcomes associated with the use of such mobile simulation tools to train physicians. However, the preliminary results provide some evidence of the potential power of mobile simulation. With Ullman et al.’s (2016) modified van, all participants in the study expressed interest in participating in future training sessions. Furthermore, Xafis et al. (2013) found that learning with their modified van was comparable to training at a fixed simulation center. In fact, there was a trend toward superior participant performance with the mobile unit. The authors speculated that this may be because of the convenience of having the unit deliver training at the hospital instead of trainees travelling to a skills center, or because of the novelty of skills training in a vehicle.

Proponents of mobile simulation suggest that enabling trainees to learn in their work environment with their own clinical team fosters individual, team, unit, and organizational learning. It also saves staff time and money, as staff do not have to travel to a physically separate training environment (Rosen et al. 2012). For rural areas, or those without access to a dedicated simulation center, mobile simulation is an especially valuable resource for the delivery of medical training (Ireland et al. 2006; Rosen et al. 2012; Xafis et al. 2013; Ullman et al. 2016; Bischof et al. 2016; Weinstock et al. 2009; Pena et al. 2015). However, bringing a mentor experienced in the subject area and in effective SBME and debriefing to the learner can be expensive and is often prohibitive because of the time needed to travel to the training site. Since access to an expert mentor, along with the appropriate training environment and equipment, can be an obstacle to simulation training in rural and remote areas, merging the two concepts of tele-simulation and mobile simulation presents an innovative solution. To our knowledge, research on the concurrent application of tele-simulation and mobile simulation to deliver medical training has yet to be conducted.

3 Description of MTU Prototype

Using the MTU, procedural skills training sessions would be delivered remotely to emergency health-care providers in rural or remote locations using content developed by mentors experienced in the subject area and in SBME. The MTU would be transported to the location; it is designed to require minimal technical support for setup and execution of the training session. The educational content of the modules can be varied and tailored to the site-specific needs of the learners. The geographically separated mentor would deliver the training session remotely via a live broadcast with two-way video and audio. The importance of a mentor with experience in the clinical environment and in delivering simulation training remotely cannot be overstated (McGaghie et al. 2010). All sessions would consist of a pre-briefing, a simulation scenario, and deliberate practice with feedback. Relevant review materials would be sent to learners prior to each session to allow pre-session familiarization with key information.

4 Methods

The iterative development and piloting of the MTU prototype were carried out through a mixed-methods approach and with input from a multidisciplinary team with backgrounds in emergency medicine, clinical simulation, health informatics, engineering, computer science, and research. We followed Haji et al.’s (2014) adapted Medical Research Council (MRC) framework for developing programs of research in simulation education for the training of health professionals. The MRC framework was developed to help researchers of SBME develop programs of research, rather than project-based strategies, with the goal of optimizing the instructional design of SBME. Research on a “study-by-study” basis can result in “a body of evidence that is at times chaotic, contradictory, and limited in advancing … understanding” (Haji et al. 2014, p. 250). Our primary goal in following the MRC framework is to move beyond studying whether the MTU is effective toward an understanding of why it is or is not effective. In addition, future research will be able to build upon this program of research to advance the understanding of SBME.

The MRC framework emphasizes a theory-based, iterative, and systematic approach to the design, refinement, evaluation, and implementation of SBME. The emphasis is on the design of programs of research in SBME that are theoretically grounded and methodologically transparent. The MRC framework (Fig. 1) was originally created for the development of complex clinical interventions and has been successfully applied in that area (Campbell et al. 2000).

Fig. 1 MRC framework (Haji et al. 2014)

The MRC framework consists of four research process cycles:

  1. Cycle A – Theory and Modeling: theory and/or evidence identification and modeling of the MTU.

  2. Cycle B – Piloting: following a reflective approach, collect data to determine the appropriateness of the MTU, outcome measures, comparison groups, and understanding of the context within which the MTU will operate.

  3. Cycle C – Evaluation: conduct a summative evaluation of the MTU.

  4. Cycle D – Implementation: implement the MTU into the health-care setting.

We followed an iterative approach and have completed Cycles A and B. The necessary institutional ethics review board approval was obtained before the project began, and initial results of this study have been presented at academic conferences (Parsons et al. 2016a, b, 2017a, b; Jewer et al. 2018).

4.1 Cycle A – Theory and Modeling

We started by identifying the need for improved access to training for rural and remote emergency health-care providers. We then set about determining how to address this need and deliver the training remotely. As previously discussed, a review of the literature revealed some research on tele-simulation and mobile simulation; however, we did not find any research on mobile tele-simulation units. We used the Aim-FineTune-FollowThrough (AFT) process (Cristancho et al. 2011) to guide the iterative design of the MTU prototype. The AFT process is grounded in learning theory and was developed to aid the development of simulation training programs; it has been used to successfully design a simulation-based program to train surgeons (Cristancho et al. 2012). In the “Aim” stage of the AFT process, we selected the procedural skill to be taught, broke the design into main components, and developed a concise, measurable definition of each component. We then used motor and cognitive modeling diagrams (MCMDs) to determine the processes, decisions, and logic required to complete the components of the MTU prototype in three main areas: comfort, technology, and human factors. Refer to Appendix A for an overview of the AFT process and a sample of the MCMDs we constructed. In the “FineTune” stage, we used the Delphi method to collect input from experts with experience in emergency medicine, simulation training, and medical education on potential applications and key design components (Dunne et al. 2018). The prevailing opinion was that mobile tele-simulation would be useful for those in rural or remote locations. Key design components identified included a reliable connection and competent technical support, a knowledgeable mentor, and content relevant to the trainee’s location. We also revised the MCMDs and determined evaluation points and performance measures. In the “FollowThrough” stage, we finalized the MCMDs and developed and validated the MTU prototype.

4.1.1 Development of MTU Prototype

We designed the MTU prototype to ensure an efficient arrangement and operation of telecommunications and simulation equipment, allowing ease of instruction, procedural performance, and assessment. Table 1 identifies the design and technical features that guided the development of the MTU prototype.

Table 1 Features of the MTU prototype

As the main focus of the study design was to assess the educational effectiveness of a mobile tele-simulation unit, an inflatable rapid deployment tent was determined to be the most practical solution (Fig. 2). Vehicle- and trailer-based units were much more expensive and were felt to be impractical at this stage. The MTU tent was obtained locally in NL from Dynamic Air Shelters. Its robust construction makes it suitable for transport and deployment in a variety of harsh environmental settings.

Fig. 2 Rapid deployment tent designed to function as the MTU

Table 2 and Fig. 3 show an overview of the equipment used in the most recent iteration of the MTU prototype. Off-the-shelf and low-cost equipment was used to keep the design of the MTU accessible and practical.

Table 2 General equipment for setup of the mentor base station and the remote MTU station
Fig. 3 MTU with low-fidelity simulation setup at the remote site and mentor presence via telecommunication

4.1.2 Development of Training Program

We applied the best practices of SBME pedagogy outlined by McGaghie et al. (2010), including feedback, deliberate practice, outcome measurement, simulation fidelity, and skill acquisition and maintenance. Educational content was provided through pre-session delivery of background information to the learner, followed by hands-on teaching during instructional sessions. Prior to the teaching day, the pre-session information consisted of an online New England Journal of Medicine video demonstrating the procedure, along with important details about chest tube insertion, including indications, contraindications, complications, and necessary equipment (Dev et al. 2007). On the teaching day, learners were given a brief clinical scenario describing the circumstances necessitating insertion of a chest tube in their “patient.” We designed the session to allow for deliberate practice, which has been found to be an important part of SBME (Cordray and Pion 2006). During the hands-on sessions, learners received guidance and real-time feedback on their performance and had the opportunity to ask questions. The real-time two-way communication between the mentor and trainees enabled this feedback.

Each session was geared toward teaching an important procedural skill: joint reductions in Session A and tube thoracostomy (chest tube insertion) in Sessions B and C. Joint reductions were taught with trainees doing hands-on practice on each other. In contrast, chest tube placement was taught using a low-fidelity setup: 3D-printed ribs secured to a plexiglass stand with low-cost simulated skin and subcutaneous tissue (later in the project the skin was also 3D printed) (Fig. 3). There is evidence that SBME is an effective method for teaching the chest tube procedure (Hutton et al. 2008).

4.2 Cycle B – Piloting

Piloting is divided into four sub-phases: (1) establish feasibility and acceptability; (2) clarify uncertainties in the design of the intervention and outcome assessment; (3) identify and design the training protocol for a comparison group; and (4) address methodological issues. These sub-phases are independent and are not completed in any particular order. We held three prototype evaluation sessions to complete these four sub-phases and pilot the MTU prototype. This also involved iteratively applying the AFT process. Descriptions of the sessions are presented in Table 3, and Fig. 4 presents an overview of the piloting cycle.

Table 3 Select features of each MTU prototype evaluation session
Fig. 4 Overview of cycle B – piloting

4.2.1 Session A

The purpose of the first session, Session A, was to evaluate the feasibility and acceptability of the MTU and to clarify uncertainties in the design of the intervention. We considered possible barriers to the prototype implementation and addressed technical issues. We also evaluated and documented the setup and takedown of the MTU and all related components, since the MTU would require setup by a technician at a remote site. The MTU prototype was deployed at a wilderness training course attended by 35 family medicine residents. Groups of approximately nine residents were instructed on select joint reductions, skills relevant to the rural practitioner. Following the curriculum format described in Sect. 4.1.2 of this paper, an experienced emergency medicine physician (the mentor) remotely instructed learners on elbow dislocation reduction via a telecommunications link. Trainees had the opportunity to interact directly with the mentor during the session. The mentor’s camera feed was displayed on the laptop screen in the MTU, and the mentor observed trainees via two cameras stationed in the MTU. A second experienced emergency medicine physician, located in the MTU, provided support and led training on finger and shoulder dislocations.

As shown in Fig. 4, students were asked to fill out a demographic questionnaire at the beginning of the session and a design questionnaire at the end of their session. The demographic questionnaire collected information on demographics and past experience with the procedure, SBME, and tele-medicine training. The design questionnaire focused on design and telecommunications features of the MTU, and perceptions of learning experiences. The features were rated on a five-point Likert scale from strongly disagree (1) to strongly agree (5).
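To illustrate how these questionnaire responses can be summarized, the following is a minimal sketch, assuming hypothetical item names and ratings rather than the study's actual data, of computing the per-item means and standard deviations reported for each session (see Appendix C).

```python
# Minimal sketch: summarizing 5-point Likert ratings from the design questionnaire.
# Item names and ratings below are hypothetical placeholders, not study data.
import pandas as pd

# Each row is one trainee's ratings; 1 = strongly disagree, 5 = strongly agree.
ratings = pd.DataFrame({
    "workspace_adequate": [4, 5, 4, 5, 4],
    "lighting_adequate": [4, 4, 5, 4, 5],
    "audio_clear": [3, 4, 3, 4, 4],
    "video_clear": [4, 4, 4, 5, 4],
})

# Per-item mean and standard deviation, as reported for each session.
summary = ratings.agg(["mean", "std"]).T.round(2)
print(summary)
```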

4.2.2 Session B

Prototype B incorporated feedback from Session A with the family medicine residents at the wilderness training course, as well as the research team members’ suggestions for improvement. The purpose of Session B was to continue examining the feasibility and acceptability of the MTU and to clarify uncertainties in the design of the intervention and outcome assessment.

For Session B, the MTU was transported by airplane to Labrador, a more remote northern region of the province. It was necessary to address challenges of packaging and transport with this deployment. The extreme environment, with its very cold temperatures (−20 °C), presented additional challenges to the effective delivery of our educational content. Chest tube insertion was chosen as the procedure for this session because it was felt to be an important skill for learners and was amenable to a low-fidelity simulation setup and effective demonstration by the remote mentor. Learners were instructed remotely on the completion of a chest tube insertion procedure on 3D-printed low-fidelity models, following the curriculum described in Sect. 4.1.2. No on-site mentor was present in this session; learners received only remote instruction. Acting on feedback from Session A with respect to learner-to-instructor ratios, we reduced the number of trainees receiving training in the MTU in each session from nine to two. Additionally, due to the lag associated with the two cameras on the trainees in Session A, we decided to use just one camera per site in Session B.

As with Session A, trainees completed the demographic questionnaire before the session and the design questionnaire after the session (see Fig. 4). Additional information was also collected. Design elements of the training session were evaluated using the adapted National League for Nursing (NLN) Simulation Design Scale (National League for Nursing 2005), with questions about the objectives and information provided, support, problem-solving, and feedback (see Appendix C). Learning outcomes were also evaluated in this session. Trainees were given a set of procedural skills questions, based on the pre-session materials (Dev et al. 2007), to answer before and after the session (see Appendix B). These questions were used to assess whether there were differences in knowledge of the chest tube procedure within the group and to measure learning after the session; the answers were evaluated by an experienced physician to determine whether differences existed pre- and post-session. Measures of self-reported learning outcomes were adapted from the NLN Student Satisfaction and Self-Confidence in Learning scales (National League for Nursing 2005) to measure beliefs and attitudes about learning in simulation (see Appendix C). These NLN scales have been widely used and have been found to have sufficient reliability and validity for use in education research (Franklin et al. 2014). We found that the extreme cold temperatures presented challenges. The space heater could not overcome the −20 °C temperatures, and some related discomfort was noted by participants. As well, the low temperature compromised seals on the tent components and caused a slow air leak requiring reinflation during the session – a process requiring air blowers and potentially a generator, all of which led to significant noise interference.

4.2.3 Session C

This session continued to build upon information gathered from the earlier prototype design cycles. We continued to evaluate the design and function of the MTU but also worked to complete the third and fourth sub-phases of Cycle B: designing the training protocol for the comparison group and addressing any methodological issues. Because the overall purpose of the MTU prototype is to deliver training comparable to face-to-face training, the training session for the comparison group was designed to be delivered face-to-face. The same procedure was taught (i.e., chest tube insertion), using the same medical instruments, supplies, and low-fidelity setup, and the same amount of time was allotted for the face-to-face and remote groups. The comparison session also took place in the MTU tent to minimize any environmental influences relative to Session B, although this round of testing was in a warm environment. Eighteen first- and second-year medical students were the subjects for this session. Three groups of equal size were created: the intervention group (remote tele-simulation), the comparison group (face-to-face), and the control group (no training session). Since this is a noninferiority study, a control group was needed to confirm not only that the intervention group is not inferior to the comparison group but also that both treatments are actually effective (Greene et al. 2008). Trainees were randomized to groups based on the order of their reply to the request for participation, and we delivered the session to two trainees at a time. A third student per group was placed in the control cohort and did not receive training (either remote or face-to-face); instead, they worked on solving a game puzzle for 20 minutes and then completed the post-tests and questionnaires.
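As one plausible reading of this allocation scheme, the sketch below, using hypothetical participant IDs rather than the study's actual recruitment records, cycles respondents in reply order through the three arms and pairs the trained participants for the two-at-a-time sessions.

```python
# Minimal sketch (hypothetical IDs, not the study's allocation records):
# cycle respondents, in reply order, through the three study arms and pair
# the trained participants, since sessions were delivered two trainees at a time.
from itertools import cycle

respondents = [f"S{i:02d}" for i in range(1, 19)]   # 18 students in reply order
arms = cycle(["remote", "face-to-face", "control"])
assignment = {student: next(arms) for student in respondents}

def pairs(arm):
    """Group the members of one arm into pairs for two-trainee sessions."""
    members = [s for s, a in assignment.items() if a == arm]
    return list(zip(members[::2], members[1::2]))

print(assignment)
print("remote pairs:", pairs("remote"))
print("face-to-face pairs:", pairs("face-to-face"))
```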

Upon arrival at the session, the trainees completed the demographic and design questionnaires as in the previous sessions. Satisfaction and self-confidence in learning were evaluated using the instruments from Session B. To evaluate skill and knowledge maintenance over time, trainees were tested 1 week after the training session using the procedural skills questions (see Appendix B) and were asked to rate how competent they perceived themselves to be at performing a chest tube insertion. We also asked whether they had performed, witnessed, or received training in chest tube insertion in the week prior to completing the retention test.

5 Results

Through each successive session, the MTU was evaluated on the physical design of the unit, the function of the telecommunications equipment, and the overall impression of the utility of the MTU. All trainees completed these questions, with the exception of the six control and six face-to-face trainees in Session C, because they did not receive remote training. The trainees’ ratings on a scale of 1 (lowest) to 5 (highest) regarding design features, telecommunications, and overall satisfaction with the MTU are shown in Figs. 5, 6, and 7, respectively. Appendix C shows the means and standard deviations.

Fig. 5 Feedback on physical MTU design features

Fig. 6 Function of telecommunications

Fig. 7 Overall satisfaction with the MTU

The design and telecommunications features and the overall satisfaction with the MTU (Figs. 5, 6, and 7) were rated at around 4 or higher in all sessions, except for noise level and audio clarity. A Kruskal-Wallis test (non-parametric ANOVA) was used to determine whether there were statistically significant differences between the sessions on these features. Table 4 shows the results of the analysis. Differences in trainees’ ratings were statistically significant only for the noise level and the clarity of the audio. For these two features, pairwise comparisons were performed using Dunn’s (1964) procedure with a Bonferroni correction for multiple comparisons. This post hoc analysis revealed statistically significant differences in ratings on noise level between Session A (27.13) and Session B (10.17) (p = 0.007), and on clarity of the audio between Session A (27.26) and Session B (12.42) (p = 0.019).

Table 4 Analysis of design, telecommunications, and overall satisfaction with the MTU
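The following is a minimal sketch, not the study's analysis code, of the omnibus and post hoc tests just described: a Kruskal-Wallis test across the three sessions followed, when significant, by Dunn's (1964) pairwise z-tests on the pooled ranks with a Bonferroni correction. The rating vectors are hypothetical placeholders and do not reproduce the reported values.

```python
# Kruskal-Wallis omnibus test, then Dunn's pairwise z-tests with Bonferroni correction.
import numpy as np
from scipy import stats

def dunn_bonferroni(groups):
    """Pairwise Dunn z-tests on pooled ranks; returns Bonferroni-adjusted p-values."""
    data = np.concatenate(groups)
    sizes = np.array([len(g) for g in groups])
    n_total = data.size
    ranks = stats.rankdata(data)                      # pooled ranks, ties averaged
    ends = np.cumsum(sizes)
    starts = np.r_[0, ends[:-1]]
    mean_ranks = [ranks[s:e].mean() for s, e in zip(starts, ends)]
    # Correction for tied ratings (Likert data has many ties).
    _, tie_counts = np.unique(data, return_counts=True)
    tie_term = (tie_counts ** 3 - tie_counts).sum() / (12 * (n_total - 1))
    n_pairs = len(groups) * (len(groups) - 1) // 2
    adjusted = {}
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            se = np.sqrt((n_total * (n_total + 1) / 12 - tie_term)
                         * (1 / sizes[i] + 1 / sizes[j]))
            z = (mean_ranks[i] - mean_ranks[j]) / se
            p = 2 * stats.norm.sf(abs(z))             # two-sided p-value
            adjusted[(i, j)] = min(1.0, p * n_pairs)  # Bonferroni adjustment
    return adjusted

# Hypothetical noise-level ratings (1-5) for Sessions A, B, and C.
session_a = np.array([4, 5, 4, 4, 5, 4, 5, 4])
session_b = np.array([2, 3, 2, 3, 2, 3])
session_c = np.array([4, 4, 5, 4, 4, 4])

h_stat, p_value = stats.kruskal(session_a, session_b, session_c)
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print(dunn_bonferroni([session_a, session_b, session_c]))
```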

In addition to examining the acceptability and feasibility of the MTU, in Sessions B and C we also examined the design elements of the training session, with questions on the objectives and information provided, support, problem-solving, and feedback. Appendix C shows the means and standard deviations. The results in Session B indicated that there was some room for improvement in the training program, with average ratings on items ranging from 3.17 to 4.17 on a scale of 1 (lowest) to 5 (highest). This feedback was used in making changes to Session C; for example, we included a video of the procedure in the pre-session materials for Session C. The Session C groups that received the training remotely versus face-to-face were then compared on the design elements. A Mann-Whitney U test (non-parametric t-test) revealed that there were no statistically significant differences between the remote and face-to-face groups on any of the items (see Appendix C).
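As a minimal sketch of this per-item comparison, again with hypothetical ratings and item names rather than the study data, each design element can be compared between the remote and face-to-face groups with a two-sided Mann-Whitney U test:

```python
# Per-item Mann-Whitney U comparison of remote vs face-to-face ratings (hypothetical data).
from scipy.stats import mannwhitneyu

design_items = {
    # item name: (remote group ratings, face-to-face group ratings)
    "objectives_clear": ([4, 4, 5, 4, 4, 5], [4, 5, 4, 4, 5, 4]),
    "feedback_provided": ([5, 4, 4, 5, 4, 4], [4, 4, 5, 5, 4, 4]),
}

for item, (remote, face_to_face) in design_items.items():
    u_stat, p_value = mannwhitneyu(remote, face_to_face, alternative="two-sided")
    print(f"{item}: U = {u_stat:.1f}, p = {p_value:.3f}")
```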

The effectiveness of the MTU was also evaluated in terms of learning outcomes. First, the skills questions were assessed. For Session B, a Wilcoxon signed-rank test (non-parametric paired t-test) found that there was no statistically significant difference between the pre- and post-skills test results, z = 10, p = 0.066. For Session C, we introduced three groups (i.e., trained remotely, trained face-to-face, and a control group that did not receive any training) and a retention test. We created two new variables: one capturing the difference between the pre- and post-skills tests and one capturing the difference between the post- and retention skills tests. A Kruskal-Wallis test on these new variables indicated that there were no statistically significant differences between the groups on the pre- to post-skills test difference (χ²(2) = 4.150, p = 0.126) or the post- to retention skills test difference (χ²(2) = 2.485, p = 0.289). Next, the self-reported learning measures (adapted from the NLN scales) asked in the post-test were analyzed (see Appendix C for the means and standard deviations). A Mann-Whitney U test revealed that there were no statistically significant differences between the remote and the face-to-face groups on any of the items (see Appendix C). Finally, self-reported competency in performing the chest tube procedure was examined for Session C using a two-way repeated measures ANOVA. The effect of training methodology (or no training in the case of the control group) on perceived competency showed no significant difference (F(2, 10) = 2.059, p = 0.178), nor was there a difference between the groups (F(1, 10) = 2.317, p = 0.149). However, it should be noted that there was a larger increase in the mean self-rated competency level from the pre-test to the retention test for the remote group than for any of the other groups, and the control group had the lowest mean increase in competency level (see Table 5). There was a statistically significant increase in self-reported competency within the groups (F(1, 5) = 122.5, p < 0.005). All but one of the trainees in the remote and face-to-face groups reported improved competency in the retention tests, whereas only three of the six trainees in the control group reported an improvement.

Table 5 Mean and standard deviation for self-reported competency in Session C
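A minimal sketch of the learning-outcome comparisons, assuming hypothetical skills-test scores that do not reproduce the reported statistics, would pair a Wilcoxon signed-rank test on the Session B pre/post scores with a Kruskal-Wallis test on the Session C change scores across the three groups:

```python
# Wilcoxon signed-rank test on paired pre/post scores, and Kruskal-Wallis test
# on change scores across groups. All values are illustrative placeholders only.
import numpy as np
from scipy.stats import wilcoxon, kruskal

# Session B: paired pre/post skills-test scores for the same trainees (hypothetical).
pre_b = np.array([5, 6, 4, 5, 7, 6])
post_b = np.array([7, 7, 6, 6, 8, 7])
stat, p = wilcoxon(pre_b, post_b)
print(f"Session B pre vs post: W = {stat:.1f}, p = {p:.3f}")

# Session C: change scores (post minus pre) per group (hypothetical).
delta_remote = np.array([3, 2, 4, 3, 2, 3])
delta_face_to_face = np.array([2, 3, 3, 2, 4, 3])
delta_control = np.array([1, 2, 1, 2, 1, 2])
h, p = kruskal(delta_remote, delta_face_to_face, delta_control)
print(f"Session C pre-to-post change across groups: H = {h:.3f}, p = {p:.3f}")
```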

6 Discussion and Future Research

To our knowledge, this is the first report of MTU development for remote training of emergency health-care providers. It was helpful to follow the four cycles of the adapted MRC framework to develop and pilot the MTU prototype. This framework enabled us to follow a theory-based approach, to identify challenges in the prototype, and to address these challenges iteratively in the piloting phase.

Overall, the trainees in each session were satisfied with their experience and would recommend the MTU to their colleagues for SBME. Additionally, the design and telecommunications features were rated highly in all sessions, except for the noise level and the audio clarity of the telecommunications equipment; in particular, issues were noted with the noise level and audio clarity during Session B. During this session, the extreme cold was associated with air leaks in the MTU structure and required pausing instruction to reinflate the unit, a process we feel contributed to the lower ratings on satisfaction with respect to noise and audio quality. The other two deployments required no reinflation. Built-in laptop speakers provided adequate audio in most circumstances, but better-quality external speakers proved advantageous in Sessions A and C.

One of the key challenges in developing the prototype was minimizing costs and keeping the MTU easy to deploy by personnel with little technical experience, while maximizing the value for trainees. We used off-the-shelf communications software to keep costs low. The challenge is that such software is designed for high-bandwidth connections, whereas rural or remote locations may not have access to high bandwidth. Setting the video quality to low resolution helped avoid choppy audiovisual transmission but compromised fine detail and made assessment of some components of the skill (e.g., suturing) more difficult. Using a single-camera setup at each of the mentor and remote stations in Session C helped to avoid some of the delays seen in Session A, when two cameras were used at the remote station.

Feedback on the design elements of the training in Session B (i.e., objectives and information provided, support, problem-solving, and feedback) was used to modify the training in Session C. On average, the trainees consistently rated these design elements highly. Furthermore, a comparison of the ratings of these elements provided by trainees in Session C who completed the training remotely versus face-to-face revealed no statistically significant differences between the groups. This is important, as these design elements can impact instructional effectiveness and learning outcomes.

Learning outcomes were measured in three ways, and all findings support the remote training as comparable to face-to-face training. First, examination of the written procedural skills tests revealed that there were no statistically significant differences between the groups (remote, face-to-face, or control). This indicates that the effects of training methodology (or no training) on the skills necessary to perform the chest tube insertion procedure did not differ between the groups, with respect to performance on a written test. There are two distinct key areas of knowledge with respect to competent procedural skills performance: one relating to factual background information and the second being the ability to complete all necessary steps. The ability to physically and capably complete a procedural skill relies on deliberate practice of that skill (Ericsson 2008). The second learning outcome measure was self-reported confidence in learning and personal satisfaction with the learning experience; again, no statistically significant differences were found between the groups in Session C. The third learning outcome measure was self-reported competency level in procedural performance. No statistically significant differences between the groups were found; however, there was a larger increase in the mean self-reported competency rating in the remote group than in the other groups, and the control group had the smallest mean increase in self-reported competency rating. This is encouraging, and it would be interesting to investigate this with a larger sample. A statistically significant difference was found between subjective competency level before the hands-on training and after the retention test. These results are encouraging and indicate that self-perceived learning appears to have occurred during the training, and that it did not matter whether the training was delivered remotely or face-to-face. These findings are consistent with other studies that have compared SBME with other instruction and with no intervention (Ilgen et al. 2013).

The main limitation of the study to this point has been the small sample size at each stage of prototype development. This is mitigated by the use of the Delphi method, which enabled the inclusion of experts’ opinions on potential applications and key design components of the MTU. Another limitation is the use of self-reported learning measures; however, the use of self-reported performance measures is common practice in educational research, and such measures tend to be consistent with objective measures (Anaya 1999). Additionally, the use of NLN scales that have been shown to be reliable and valid helps to alleviate some of these concerns. Further sessions are planned in the evaluation stage of this research to study the MTU with more subjects and objective measures of learning outcomes, enabling more robust data collection and analysis.

The next steps are to follow Cycle C of the MRC framework: evaluation. We will evaluate the educational effectiveness of the MTU with a larger group of subjects and apply objective assessments to obtain quantitative data amenable to statistical analysis. These results will allow comparison of the pre-, post-, and retention tests with respect to learning outcomes. If we find that the learning outcomes from sessions delivered remotely are comparable to those from face-to-face sessions, then we will proceed with Cycle D, implementing the MTU into broader practice settings. The ultimate goal is the delivery of simulation-based training remotely through the use of a larger, self-contained vehicle outfitted with the simulation equipment necessary to provide a wider range of scenarios. This will present an opportunity to overcome geographic, cost, and time barriers to the provision of emergency medical education in rural and remote areas.

Future research will investigate the challenges faced in this study with audiovisual transmission and explore the use of a purpose-built, efficient communications system designed for low bandwidth. Elements of the delivery of the training program, such as the objectives and information provided during the session, could also be studied. There were no statistically significant differences found between the remote and face-to-face groups on any of these elements; however, several of the elements around the cues and information provided during the session (see Appendix C) were rated lower by the remote group than by the face-to-face group. This increase in communication ambiguity as media naturalness decreases is a known issue in remote training (Kock 2005), and ways to reduce this ambiguity could be explored. Additionally, the impact on learning outcomes of adding debriefing to the training program could be examined, as debriefing has been found to be essential to SBME (Cheng et al. 2014). Debriefing could be added following an approach such as the four-step model presented by Rudolph et al. (2008). Future research will also examine possible collaboration between urban and rural clinicians using the MTU to learn from each other, with the goal of improving the delivery of care in both regions. The potential delivery of mobile tele-simulation-based training to other medical disciplines could also be examined. For example, use of the MTU could enable surgeons to train with their own clinical team, and thus foster the benefits of mobile simulation training while saving time and money. Furthermore, as with tele-simulation, the MTU could be beneficial for the remote assessment of skills, especially in domains that are poorly covered by traditional written and oral examinations. Concepts explored in the MTU project also have the potential to be useful for the provision of training in less developed regions of the world. Research could be conducted on conditions specific to these regions that may impact learning outcomes, such as cultural differences or low bandwidth.

7 Conclusion

Following the theory-based approach of the MRC framework and the AFT process has helped us to conduct the iterative development and piloting of an MTU prototype targeted to meet the learning needs of emergency health-care providers in rural and remote areas. Designing a complex intervention such as the MTU poses substantial challenges to investigators; however, the use of frameworks that harness qualitative and quantitative methods should improve the intervention, the study design, and the generalizability of results. The MTU prototype has been improved through ongoing evaluation, reflection, and redesign. Feedback aimed at ensuring a quality learning experience in the MTU has guided key features of the physical design, technical performance, and training program applied in each evaluation deployment of the unit. The MTU prototype appears to be an effective means of making quality simulation training on procedural skills more accessible to emergency health-care providers in rural and remote areas, while addressing the challenges of simulation, tele-simulation, and mobile simulation. Further evaluation of design, telecommunications, and learning outcomes will help to determine the full potential of the MTU and to address some of the challenges to equitable health-care delivery by transcending the barriers of distance, time, and cost.

Effective applications of tele-simulation, mobile simulation, and particularly mobile tele-simulation are in their infancy and the opportunities that these platforms provide for innovative training are limitless. Challenges such as inadequate exposure to infrequently encountered medical cases and procedures, and lack of access to SBME may be addressed through the use of these techniques. The MTU in particular may provide advantages to those with limited access to simulation training centers by providing them access to experienced mentors and enhanced quality SBME experiences.