It has been the vision of computer scientists and surgeons to create intelligent operating room (OR) systems functioning as an “autopilot” in the background of surgery that not only collect data from an operation [1] but also, by online data analysis, interpret whether the course of the operation is normal or deviating from the schedule (“situation awareness”). Even more ambitious is the demand that these systems be able, after a correct interpretation of the actual situation, to recommend the adequate next steps of the intervention and to identify imminent risky situations.

However, the implementation of this “intelligent system in the background” in a surgical environment is far more demanding than its implementation under technical or industrial conditions [2]. Because “situation awareness” is based on precise modeling of the respective process and continuous real-time data acquisition [3], a highly standardized procedure with a reliable process definition (e.g., laparoscopic cholecystectomy) is needed [4].

However, modeling of even a highly standardized surgical procedure is extremely challenging, and various methods are currently under evaluation, such as workflow mining [5, 6], hidden Markov models [7] and various segmentation techniques [8]. All these efforts can be successful only if sufficient information is available for interpretation and analysis to enable decision making. This inflow of data has to be continuous and comprehensive in real time.

In addition, data collection must be possible without disruption to the routine course of the surgery. Data must be collected in a “stealth mode,” defined by Sutherland et al. [1] as an “automated collection of data that is normally observed, yet irregularly captured because of lack of time and tedious manual data entry procedures.” The stealth mode of automated data collection should not impose an additional workload on the team and must simultaneously avoid erroneous perceptions in the high-stress environment of the OR.

Self-evidently, the spectrum of independent information to achieve “situation awareness” should be as broad as possible. Under practical conditions, however, this is limited to what is absolutely necessary and currently feasible. The minimum includes time since the beginning of the operation, which can be easily assessed, real-time information about the functional state of peripheral devices, surgical instruments in use, and behavior of the team. In addition, patients’ vital signs (e.g., blood pressure, pulse, electrocardiogram, temperature, ventilation) monitored by the anesthetist need to be taken into account.
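The minimum signal set described above can be sketched as a simple time-stamped data record. The field names and types below are illustrative assumptions only, not part of any published system:

```python
from dataclasses import dataclass, field

@dataclass
class ORSnapshot:
    """One time-stamped observation of the minimum signal set
    (all field names are illustrative assumptions)."""
    t_since_incision_s: float                            # time since the beginning of the operation
    device_states: dict = field(default_factory=dict)    # e.g., {"insufflator": "on"}
    instruments_in_use: list = field(default_factory=list)
    team_positions: dict = field(default_factory=dict)   # team member -> zone
    vitals: dict = field(default_factory=dict)           # e.g., {"pulse_bpm": 72}
```

A situation-awareness engine would consume a stream of such records; keeping the vital signs in the same record as the device states is what allows cross-signal rules (e.g., pressure loss plus pulse change) to be evaluated at a single point in time.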

Given the dynamic and interactive nature of critical care environments such as the surgical OR, current methods of data capture do not provide the optimal solution. With such constraints imposed on data collection in complex environments, there is a need for an unobtrusive method of data collection that can augment existing methods and capture workflow from different points of view simultaneously. Thus, assisting and safety mechanisms, such as an intensive care unit alert or on-time consultation of specialists, become achievable, increasing patient safety [9].

Recently introduced technologies in health care such as bar code, radiofrequency identification (RFID), and voice and emotion recognition may have the potential to meet these demands and open up new and exciting methods toward the OR of the future. Automatic surgical instrument identification by bar code or matrix code is technically feasible [10], but applicable systems for its routine use in the surgical OR have not been published to date. The RFID technology is widely available and relatively easy to integrate into the health care supply chain [11]. Furthermore, its application has proved to be effective for patient tracking [12] and for reducing or eliminating instances of gossypiboma or retained surgical sponges [13].

The emotional level in the OR is an important indicator of whether surgery is running smoothly or the team is confronted with difficulties [14, 15]. However, data capture and analysis are difficult. During an operation, emotional levels can be recognized either nonverbally by gestures and facial expressions or verbally by speech. Although technically feasible [16], nonverbal emotion recognition in the OR is difficult to assess due to limited freedom of movement as well as the surgical cap and mask. In contrast, emotion recognition by speech is a promising technology for further workflow prediction [17]. Assessment of classical physiologic parameters alone (pulse rate, skin resistance) is not sufficient because sensors with cables cannot be attached routinely to the OR team, and comprehensive data collection for emotional analysis is not possible.

This report aims to identify, based on our own institutional experience and review of the literature, which technologies are currently the most promising for providing the required data and to describe their fields of application and potential limitations.

Materials and methods

The Minimally Invasive Interdisciplinary Therapeutic Interventions (MITI) research group, founded in 1999 within the department of surgery at the Klinikum rechts der Isar, Technical University Munich, develops innovative diagnostic procedures and therapeutic solution concepts for (minimally invasive) surgery. Experts from engineering sciences, industry, and clinic form a know-how center for medical technology. Through the interdisciplinary approach, MITI works in a broad spectrum of application areas such as ergonomics and process optimization, image processing and navigation, robotics, innovative therapies, and telemedicine.

Retrieval of information on the functional state of peripheral devices and systems

Preliminary institutional studies showed that retrieval of information on the functional state of peripheral devices in the OR is technically feasible by continuous sensor-based data acquisition and online analysis. For highly standardized surgical procedures with reliable process definition (>150 laparoscopic cholecystectomies; mean duration, 57 min), the following environmental parameters were assessed and their validity for workflow description proved: status of the OR room/table light, intraabdominal pressure, tilt of the OR table, irrigation and aspiration volume, and application of coagulation and cutting current [18]. Each cholecystectomy was postoperatively classified on a 5-point scale from simple to very difficult by an experienced minimally invasive surgeon.

For retrieval of the OR room/table light status, photosensitive sensors were attached to each OR room/table light, recording the period during which the light source was enabled. The intraabdominal pressure was measured by a pressure gauge integrated into the laparoscopy unit. The position of the OR table was determined by a tilt sensor attached to the OR table. For measurement of irrigation and aspiration volume, scale sensors were attached to the irrigation solution bag and recuperation tank (Fig. 1). Intraabdominal bleeding causes a negative difference between irrigation and aspiration volume (more fluid aspirated than irrigated). The application of coagulation and cutting current was recorded automatically by a photosensitive sensor attached to the two diodes of the current generator (coagulation/cutting), which were enabled during current application.

Fig. 1

Scale sensors attached to the irrigation solution bag (left) and recuperation tank (right) for measurement of irrigation and aspiration volume
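As a hedged illustration of how the scale-sensor readings could feed a bleeding indicator, the following sketch compares irrigated and aspirated volumes; the tolerance value is an assumed placeholder for scale noise and residual fluid, not a measured threshold from the study:

```python
def irrigation_balance(irrigated_ml: float, aspirated_ml: float) -> float:
    """Irrigation minus aspiration volume (ml); negative when more
    fluid is recovered than was instilled."""
    return irrigated_ml - aspirated_ml

def bleeding_suspected(irrigated_ml: float, aspirated_ml: float,
                       tolerance_ml: float = 50.0) -> bool:
    """Flag possible intraabdominal bleeding when the aspirated volume
    clearly exceeds the irrigated volume (tolerance_ml is an assumption)."""
    return irrigation_balance(irrigated_ml, aspirated_ml) < -tolerance_ml
```

In a running system this check would be evaluated continuously against the cumulative volumes reported by the two scale sensors.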

Recognition of surgical instruments

Besides retrieval of information on the functional state of peripheral devices and systems, a continuous automated instrument surveillance system also would be most valuable for obtaining essential information about the actual part of an operation. In highly standardized surgical interventions, a characteristic sequence of different instruments can be expected. The challenge is to identify the respective instrument.

We developed an automatic instrument identification system [19] based on bar code recognition that detects and registers the individual laparoscopic instrument during insertion and removal through the trocar (Fig. 2). The system is based on opto-electronic object detection using a bar code (bar code 128 and 2/5 interleaved) attached to the laparoscopic instrument. The instrument is detected by a micro-endoscopic camera and light device that can be used during routine minimally invasive procedures without technical modifications of standard laparoscopic systems.

Fig. 2

Bar code as seen in the camera focus (top) and lasered bar code on the laparoscopic instrument (bottom)
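The symbologies named above (Code 128 and 2/5 interleaved) carry built-in redundancy that a reader verifies after decoding. As one concrete example, Code 128 appends a modulo-103 check value; the sketch below computes it from decoded symbol values per the symbology definition and is illustrative only, not the published system’s implementation:

```python
def code128_check_value(symbol_values):
    """Code 128 check value: (start-code value + position-weighted sum
    of the data symbol values) mod 103, per the symbology definition."""
    start, *data = symbol_values
    return (start + sum(i * v for i, v in enumerate(data, start=1))) % 103

# Example: Start Code B (value 104) followed by one data symbol of value 33
# gives (104 + 1 * 33) % 103 == 34 as the check value.
```

Verifying this value on every read lets the identification system reject partial or misread codes caused by fast instrument motion rather than registering a wrong instrument.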

Behavior of the team

The dynamics of human activities in the OR comprise key information for description of workflow during surgery. Presence (and absence) of the OR team members (scrub nurse, circulator, resident, surgeon) follow a specific order during the course of a surgical procedure that reflects the single steps of the operation (preparatory state, operation, postprocessing step).

In a pilot study, we currently are investigating the application reliability of an active RFID system for personnel tracking during surgical operations. The experimental OR of our research institute is strategically equipped with three RFID controllers (ZOMOFI, Zone Monitoring & Find; SIEMENS Schweiz AG), and open cholecystectomy on the ELITE OR phantom [20] is performed (n = 10; duration, 30 min; 6–10 position changes of the surgeon/assistant). For personnel tracking, each team member puts on an active RFID transponder badge (ZOMOFI, 2.45 GHz, Zone Monitoring & Find) when entering the OR.

The experimental OR scrub room is equipped with an RFID controller indicating the presence of the respective OR team member. As soon as the transponder is detected by one of the OR table sector controllers positioned on both sides of the OR table (OR table left/OR table right), the team member can be considered as an “active” member of the procedure (i.e., operating), and his or her position is automatically localized. The RFID controller data are thus processed via ZOMOFI software and displayed on a standard laptop (Fig. 3).

Fig. 3

Online operating room (OR) team detection (radiofrequency identification [RFID]). The wall-mounted display shows the actual position of the team member. Active transponder tags no. 3428/3425. Operating room table left = zone 1/right = zone 2

At the end of the procedure, the team members take off their RFID badges and leave the OR, after which no RFID signal is received. Besides information about presence, absence, and “active” participation in the procedure, the actual position of each team member, as well as any change of position, is detected.
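The zone logic described above can be sketched in a few lines; the zone names and the mapping from controller sightings to team-member states are assumptions made for illustration:

```python
TABLE_ZONES = {"table_left", "table_right"}  # the two OR-table sector controllers

def member_state(zone):
    """Map the controller that last saw a transponder to a member state."""
    if zone in TABLE_ZONES:
        return "active"      # operating at the OR table
    if zone == "scrub_room":
        return "present"
    return "absent"          # no controller currently reports the tag

def side_changes(sightings):
    """Count changes between the two OR-table sectors in a sighting log."""
    at_table = [z for z in sightings if z in TABLE_ZONES]
    return sum(1 for a, b in zip(at_table, at_table[1:]) if a != b)
```

A side-change count derived this way is exactly the kind of signal that could flag a takeover of the operation by the supervising surgeon.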

Besides personnel tracking, another field of application for RFID systems in the OR is the detection of intraoperatively deployed objects. In an ex vivo setting (ELITE OR phantom [20]), 20 surgical sponges were equipped with passive RFID transponders (MOBY-D, 13.56 MHz), and a surgical procedure with 30 object changes was simulated. For continuous object monitoring, three RFID antennas (FEIG Electronic GmbH, Weilburg, Germany) were strategically positioned on the instrument table, under the phantom’s abdominal cavity, and in the discarding bin. The actual textile position was displayed in real time by specially developed software (SIEMENS Medical Solutions, Germany) on a laptop after the RFID data had been processed via a standard RFID reader.
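The sponge-reconciliation step implied by this setup reduces to a set difference: a deployed tag seen by neither the table antenna nor the bin antenna is unaccounted for. The sketch below is a minimal illustration under that assumption:

```python
def retained_sponges(deployed, seen_on_table, seen_in_bin):
    """Tag IDs of deployed sponges not accounted for by any antenna;
    a nonempty result would trigger a retained-sponge alert."""
    return set(deployed) - (set(seen_on_table) | set(seen_in_bin))
```

Running this check at closure time is the software analogue of the manual sponge count, with the antennas supplying the two "accounted for" sets continuously.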

Emotion analysis

The emotional level is an important indicator of whether surgery is running smoothly or the team is confronted with difficulties [14, 15]. The assessment of classical physiologic parameters such as pulse rate or skin resistance is not suitable for stealth data collection in the OR environment because sensors with cables cannot be attached routinely to the OR team.

Emotion recognition in the OR by automatic speech recognition (ASR) is a promising technology for further workflow prediction. In cooperation with colleagues from the Institute of Human–Machine Communication, Technical University Munich, ASR was realized by a hidden Markov model (HMM) recognizer based on three-state inner-word phoneme triphones with Bakis model topology, using Mel-frequency cepstral coefficients (MFCCs) 1–12 plus energy together with derived speed and acceleration regression coefficients [21]. The use of a phoneme-based recognizer allows for a flexible exchange of terms to personalize the vocabulary to a surgeon’s preference.

The automatic speech presegmentation is based on energy in the time domain. First, the recorded audio file is multiplied with a Hamming window function. The window’s width is set at 512 sample points, and the frame length that specifies the intervals between the values is set at 256 sample points. A mean log power value of five consecutive frames is calculated. If this value exceeds 50 dB, the current frame is regarded as speech onset. After onset detection, 60 consecutive frames with a log power value less than 21 dB are regarded as speech offset. To prevent loss of speech information, five frames are added at both the start and the end. This very basic segmentation prevents loss of potential speech turns but demands a manual check in a subsequent step, which is performed by one annotator. In the next step, three experienced male annotators manually label these speech segments within five emotion classes (neutral, happy, angry, impatient, confused) chosen from an open initial labeling set with respect to frequency of occurrence and the target application.
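A minimal sketch of this presegmentation follows, using the window, hop, smoothing, and threshold values quoted above. The absolute dB thresholds presuppose a calibrated power scale, so the reference offset here is an assumption; a real implementation would match the thresholds to its own signal level:

```python
import math

WIN, HOP = 512, 256            # window width and frame interval (sample points)
ONSET_DB, OFFSET_DB = 50.0, 21.0
SMOOTH, OFFSET_RUN, PAD = 5, 60, 5

def frame_log_power(samples):
    """Log power (dB) per Hamming-windowed frame of the signal."""
    hamming = [0.54 - 0.46 * math.cos(2 * math.pi * n / (WIN - 1))
               for n in range(WIN)]
    powers = []
    for start in range(0, len(samples) - WIN + 1, HOP):
        frame = samples[start:start + WIN]
        energy = sum((s * w) ** 2 for s, w in zip(frame, hamming)) / WIN
        powers.append(10 * math.log10(energy + 1e-12))  # small floor avoids log(0)
    return powers

def segment(powers):
    """Return (onset, offset) frame-index pairs: onset when the 5-frame
    mean exceeds ONSET_DB, offset after 60 quiet frames, padded by 5."""
    segments, onset, quiet = [], None, 0
    for i in range(len(powers)):
        mean = sum(powers[max(0, i - SMOOTH + 1):i + 1]) / min(SMOOTH, i + 1)
        if onset is None and mean > ONSET_DB:
            onset, quiet = max(0, i - PAD), 0
        elif onset is not None:
            quiet = quiet + 1 if powers[i] < OFFSET_DB else 0
            if quiet >= OFFSET_RUN:
                segments.append((onset, min(len(powers), i + PAD)))
                onset, quiet = None, 0
    if onset is not None:                      # speech still running at end of file
        segments.append((onset, len(powers)))
    return segments
```

As the text notes, such a scheme errs on the side of keeping material, which is why the subsequent manual check by an annotator remains necessary.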

We thus have set up a speech databank in our department (speech in minimally invasive surgery [SIMIS]) consisting of 3,035 speech sequences (speech time, 5–17 min; no. of segments, 159–523) recorded during 10 minimally invasive procedures (procedure time, 36–80 min; 16-bit, 16-kHz data) using both room and headset active condenser microphones to gain further information on the impact of emotional levels on the course of an operation. Besides the intraoperatively spoken words, the emotional level of the surgeon was noted.

Review of the literature

To provide a comprehensive overview in addition to our own institutional approach, we conducted a search of the published literature through the MEDLINE database (2000–2010) using the medical subject heading term “situation awareness.” Workflow concepts in surgery with precise modeling of the respective process and technologies for continuous real-time data acquisition emerged in the late 1990s [1], and the literature research started in the year 2000. The number of manuscripts was specified using the terms “workflow analysis,” “surgery,” and “technology.” We reviewed all abstracts to obtain full-text articles of potentially relevant manuscripts addressing the issue of situational awareness in the surgical OR. The inclusion criteria used for the review required published manuscripts with full-text articles in the English language, technologies applicable in the surgical OR (bar code, RFID, emotion analysis), and data usability and validity for workflow description to achieve situation awareness.

Results

Retrieval of information on the functional state of peripheral devices and systems

The application of cutting and coagulation current was significantly increased in cholecystectomies classified as very difficult (p = 0.04), as were irrigation and aspiration volume (p = 0.001). Continuous sensor data acquisition showed characteristic schemes for aberration of the normal procedure. In case of stronger bleeding, frequent and prolonged use of coagulation current combined with an excess of aspirated over irrigated volume was detected. In case of conversion to open surgery, a sudden intraabdominal pressure drop (to 0 mmHg) combined with “all lights on” and a sudden repositioning of the OR table was noted (Fig. 4).

Fig. 4

Identification of specific sensor-based indicators for conversion of laparoscopic procedure to open surgery. Top irrigation and aspiration volume. Upper middle intraabdominal pressure. Lower middle coagulation and cutting current. Bottom operating room/table light
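The conversion pattern just described (sudden pressure loss, all lights on, table repositioned) amounts to a conjunction of three sensor conditions. A minimal sketch of such a rule, with no claim to being the study’s actual detection logic, could look like this:

```python
def conversion_suspected(pressure_mmhg, lights_on, table_repositioned):
    """Heuristic flag for conversion to open surgery, mirroring the
    reported sensor pattern: loss of pneumoperitoneum, all OR lights
    switched on, and a sudden repositioning of the OR table."""
    return pressure_mmhg == 0 and all(lights_on) and table_repositioned
```

In practice each condition would be derived from a short time window of sensor readings rather than a single sample, so that transient pressure dips do not trigger false alerts.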

Recognition of surgical instruments

Previously published methods for instrument tracking can be broadly separated into two categories: external tracking systems such as electromagnetic and optical tracking [22, 23] and image-based detection algorithms [24, 25]. Generally, external tracking systems suffer from limitations of the surgical environment. Whereas electromagnetic tracking systems have limited accuracy and are problematic to implement due to the abundance of ferromagnetic objects in the OR, optical tracking of instruments is complicated by line-of-sight requirements. Furthermore, both of these systems suffer from errors introduced by improper registration to the tracking coordinate frame. In addition, these systems track only the motion of the instruments and do not recognize the instrument specifically.

To eliminate such errors, image-based algorithms recently have been introduced to track instruments (surgical graspers) within ultrasound images [26]. However, image-based registration systems, developed primarily for cardiac surgeries, are likewise not suitable for routine use in abdominal surgical procedures because ultrasound-based instrument detection within the abdominal cavity is, at the moment, hardly feasible technically and impractical for routine intraoperative deployment.

Other types of automatic identification and data capture (AIDC) systems are bar coding, optical character recognition (OCR), magnetic stripe, and RFID [27]. The OCR and magnetic stripe identification technologies are restricted for application with surgical instruments due to their scanning and decoding technology. Because not all laparoscopic instruments are permanently visible within the intraabdominal camera focus and because sudden instrument motion changes occur, OCR technologies lack accuracy in minimally invasive surgery. For open surgery, however, OCR identification seems to be feasible if line-of-sight requirements are met, although convincing literature is missing to date.

Magnetic stripe identification is limited for application with surgical instruments due to its strong interference with metallic objects. Furthermore, integration of readers is highly elaborate. Although RFID is a promising technology for object tracking during surgical operations [28], instrument identification at the point of care is not possible to date. The main limitations are scanner and transponder sizes as well as interference from metals and liquids [29].

The advantages of bar coding for surgical instrument identification are its high data capacity, flexible encoding, adaptable encryption, and orientation-independent legibility [30]. Successful results of automated textile detection using bar code technology in general surgery [31] and instrument identification by so-called matrix coding [32] have already been reported. By using established and modified bar code technologies [33, 34], automatic instrument identification is feasible and could, in principle, not only improve economies in the processing of instruments in health facilities but also facilitate the automatic recording of instrument usage in laparoscopic surgery.

We therefore developed an automatic identification system for laparoscopic instruments based on opto-electronic bar code detection, which can be used during routine minimally invasive procedures without technical modifications of standard laparoscopic systems [19]. The reliability of the system was evaluated under experimental conditions (laparoscopic training simulator) for the compilation of basic data and in vivo (animal experiment) for evaluation of technical aspects under surgical conditions. The system showed satisfactory results for instrument detection with slow insertion velocity (1 cm/s). However, the best results for reliable instrument detection were achieved with a 5-s static retention period of the bar code in the camera. Dynamic movement (10/20 cm/s) resulted in a rapid decline of the detection rate.

This limitation for the intraoperative application needs further technological improvement. A solution could be a defined retention margin on the instrument (e.g., a milled-in gauge) that enforces a short retention period of the bar code in the focus of the camera during insertion and removal. Furthermore, application of a high-velocity black-and-white camera may allow correct and reliable instrument registration even when the retention period of the bar code in the focus of the camera is very short. Although not yet mature for routine clinical application, proof of concept for an automatic instrument identification system in minimally invasive surgery was achieved, with reliable detection rates under laboratory conditions (Fig. 2).

Behavior of the team

Continuous automated monitoring of the OR team in real time not only is the basis for obtaining information on the (routine) surgical workflow. It also is essential for determining potential deviations from the schedule. A promising technology for continuous personnel tracking is data acquisition with RFID [35]. RFID systems, already well established in industry (transport, logistics [11]), have shown their potential to facilitate the simultaneous tracking of several persons (and objects) without interrupting the routine workflow in the OR.

The individual transponders (tags) used in RFID systems emit a specific identification signal. Nearby antennas emit radio waves that are absorbed by the tag, converted to electrical energy, and then re-emitted at the tag’s specific frequency. These frequencies are then read by the antennas, creating a real-time “inventory” of the controlled sector [13]. This information is then usable by a variety of middleware applications, opening options for “safety” checkpoints, automated workflow protocols, and access control systems.

Both active and passive RFID systems can be applied in the surgical OR environment. Whereas active systems have the advantage of higher detection ranges and increased data storage (internal power supply), passive systems are smaller in size [35] and can therefore be integrated into medical equipment (e.g., surgical sponges [28]). To avoid interference with technical surroundings in the OR (high-frequency current, anesthesiologic equipment), frequencies of 13.56 MHz should be applied.

In a pilot study, we currently are investigating the application reliability of an active RFID system in an ex vivo model for personnel tracking during surgical procedures. Up to six active RFID transponders have been detected correctly by RFID OR room surveillance and sector controllers attached on both sides of the OR table (OR table left/OR table right). Furthermore, side changes have been detected correctly (mean, 30–60 s). Besides information about presence, absence, and “active” participation in the procedure, the actual position of each team member and the changing of their position have been automatically monitored. Thus, information about deviance of the routine workflow could be gained automatically. In case of a bleed, for example, the surgeon (on the patient’s left side) assisting the resident (on the patient’s right side) suddenly changes position (from left to right side), indicating that he or she is taking over the operation.

Besides personnel tracking, another field of application for RFID systems in the OR is detection of intraoperatively deployed objects. The preliminary results of our ex vivo study are promising, with detection rates of 100% for retained sponges [36]. These data are consistent with the reported detection accuracy achieved with an RFID handheld device by Macario et al. [28].

Emotion analysis

The emotional level is an important indicator of whether surgery is running smoothly or the team is confronted with difficulties [14, 15]. The following two alternative options for emotion analysis are conceivable.

Emotion analysis by speech

During an operation, emotional levels can be recognized either nonverbally by gestures and facial expressions or verbally by speech. Because team communication and control of medical devices are accomplished primarily by speech, the latter has a higher priority for application in the OR. Furthermore, speech is easier to assess than gesture in the OR due to the limited freedom of human movement and restricted illumination conditions (e.g., the laparoscopy monitor as the only light source during laparoscopic operations). Specific alterations of speech may thus allow conclusions about the emotional level of the speaker even under these conditions [17].

Medical systems with integrated speech control such as AESOP [37] or SIOS [38] are already implemented in the OR. However, high costs, slow speech control, and lack of user-friendliness limit their application during routine surgical procedures.

Emotion recognition in the OR by speech is a promising technology for further workflow prediction. Evaluation of speech sequences (speech in minimally invasive surgery [SIMIS]) showed that only 53% of the intraoperatively annotated sequences can be considered “normal.” The remaining 47% of the speech sequences are emotionally “modified” to either positive or negative emotional levels. By application of recently introduced new classification standards and high-dimensional optimized identifying features, a discrimination between “positive” and “negative” emotional levels is possible [21].

For a reliable emotion analysis in real time, the openEAR toolkit for simultaneous speech recording and classification of the speech signal was developed [39]. Because speech recognition in the OR suffers from increased noise disturbance levels, a “Switching Linear Dynamic Model” for separated modeling of noise levels and speech can be used [40].

The next step for emotion analysis by speech in the OR, the subject of our current research activities, is implementation of the system into the routine surgical workflow to evaluate the possible impact of different emotional levels (positive and negative) on the course of an operation. Thus, both the principal influence and the alteration of emotional levels in the case of deviations from the routine operation schedule due to occurring difficulties will be observed and further investigated.

Emotion analysis by eye tracking

Besides emotional analysis by speech, nonverbal emotion recognition is feasible, as shown by Granholm and Steinhauer [16]. However, for routine use in the OR, nonverbal emotion analysis must be reduced to eye-tracking systems because surveying gestures (due to limited freedom of movement) and facial expressions (due to the surgical cap and mask) is highly limited. In a first step, James et al. [8] showed in a porcine model that the critical step during a laparoscopic cholecystectomy (clipping of the cystic duct) can be recognized with an accuracy reaching 75% using eye-gaze and image-based workflow analysis with a parallel-layer perceptron (PLP) model, a feed-forward neural network trained with an algorithm that attempts to fix all errors encountered, e.g., in the training set. The classifier is eye-gaze contingent but combined with image-based visual feature detection for improved system performance. For implementation of eye tracking in the surgical OR, a remote eye tracker (60 Hz) needs to be attached (e.g., on the surgeon’s frontlet) because conventional distinct remote eye-tracking screens cannot be applied during a surgical procedure.
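To make the error-correcting training rule concrete, a minimal single-layer perceptron on toy features is sketched below. This is a deliberately simplified stand-in, not the multi-layer PLP model used by James et al.; the sample data are invented:

```python
def predict(w, b, x):
    """Sign of the linear score: +1 or -1 class label."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def train_perceptron(samples, labels, epochs=60, lr=0.1):
    """Classic perceptron rule: correct the weights on every
    misclassified example (labels in {-1, +1}); toy sketch only."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b
```

For linearly separable data the rule converges after a bounded number of corrections; a multi-layer variant such as the PLP extends this idea to feature combinations that are not linearly separable.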

De Lemos et al. [41] reported a new automated method of measuring human emotions by analyzing eye properties via an eye-tracking platform. The method uses eye-tracking data to measure the immediate unconscious and uncontrollable emotional responses before they are cognitively perceived, interpreted, and biased by the human brain. Thus, emotions were spotted correctly in up to 80% of cases.

Although these preliminary reports are promising, methods for emotional analysis by eye tracking that can be routinely applied in the surgical OR have yet to be developed. By combining the respondent’s visual attention with his or her emotional response in one measurement, eye-tracking systems certainly would add valuable information toward the creation of “intelligent OR systems” through a nonintrusive, reliable, and cost-effective measurement of the emotional response in the surgical OR.

Discussion

Workflow technologies for the surgical OR require real-time patient monitoring, detection of adverse events, and response systems adaptive to a breakdown in normal processes. However, to date, adaptive workflow systems are rarely implemented [1], primarily because the creation of an “intelligent OR system” in a surgical environment is far more demanding than in technical or industrial conditions. Comprehensive data acquisition is needed, which calls for precise modeling of the respective process and continuous real-time information [42, 43]. However, modeling even a highly standardized surgical operation is extremely demanding. Thus, automated data collection should not impose an additional workload on the team and must not interfere with the routine surgical workflow.

However, the advent of evidence-based medicine, guideline-based practice, and a better understanding of cognitive workflow combined with novel technologies including automated data capture, bar code systems, RFID technologies, and emotion recognition interfaces opens up new and exciting methods toward the OR of the future. Total situational awareness of events, timing, and course of surgical activities could possibly generate both self-organizing changes in the behavior of the OR team members and deployed equipment for enhancement of the intraoperative workflow.

The importance of real-time data acquisition in (minimally invasive) surgery for workflow analysis and logistic reasons is well described in the literature [43, 44]. Our studies showed that retrieval of information on the functional state of the peripheral devices and systems in the OR is technically feasible [18] by continuous sensor-based data acquisition and online analysis. Thus, irregularities during the course of an operation can be reliably detected, and a delay in the upcoming surgical step can be foreseen. In combination with an online analysis of the patient’s vital signs (e.g., blood pressure, pulse, electrocardiogram, temperature, ventilation) monitored by the anesthetist, potentially harmful deviations in the routine procedure (e.g., acute bleeding) could be detected at an early stage.

This provides a basis for the future objective to develop adaptive systems that not only “predict” the course of an operation but also, in case of an irregularity, set up correction measures automatically (e.g., intensive care unit alert or on-time consultation of specialists). Reliable detection of the “environmental OR” data is therefore of utmost importance.

Furthermore, nontechnical skills in relation to the development and maintenance of the surgeon’s situational awareness also are a matter of current research activities [4]. Besides retrieval of information on the functional state of peripheral devices in the OR, automatic instrument detection systems also have the potential to increase situational awareness in the OR. The online registration of the position, reposition, and change of instruments during an operation not only gives essential information about the actual part of the procedure but also may indicate any change in the routine workflow. Thus, the condition for a real-time protocol of the operation could be established, providing for the development of intelligent assisting systems such as those for automated real-time ordering of disposables and documentation of the number of applications of nondisposable devices [45]. In addition, “life cycles” of sterilized devices would be monitored, improving cost efficiency [10].

Using established and modified bar code technologies [33, 34], automatic instrument identification seems conceivable, which would facilitate the automatic recording of instrument usage in minimally invasive surgery. On this basis, we developed an automatic instrument detection system using opto-electronic bar code registration for real-time detection of laparoscopic instruments at the point of interest (trocar). Although not yet mature for routine clinical application, proof of concept for an automatic instrument identification system in minimally invasive surgery was achieved, with reliable detection rates under laboratory conditions [19].
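The downstream use of such bar code registration (usage documentation and reordering of disposables) can be sketched as a simple aggregation over scan events. The event format and instrument identifiers below are hypothetical, assumed only for illustration.

```python
# Illustrative sketch (hypothetical event format) of turning bar code scans
# at the trocar into per-instrument usage counts for documentation.
from collections import Counter

def usage_counts(scan_events: list[tuple[float, str]]) -> Counter:
    """scan_events: (timestamp_seconds, instrument_id) pairs from the reader.
    Each scan corresponds to one insertion through the trocar."""
    return Counter(instrument for _, instrument in scan_events)

events = [(12.0, "grasper-01"), (95.5, "scissors-02"), (140.2, "grasper-01")]
counts = usage_counts(events)
```

The same counts could feed a reordering rule for disposables or a sterilization "life cycle" counter for reusable devices.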

In addition to the previously mentioned technical parameters, the dynamics of human activities also carry key information about the workflow during surgery because the presence, absence, and position of the OR team members follow a specific order during the course of a surgical procedure. A sudden move of the senior surgeon, who is assisting the resident, from one side of the OR table to the other indicates his or her takeover of the operation (e.g., in case of bleeding). Therefore, continuous real-time monitoring of the OR team yields not only information on the (routine) surgical workflow but also an indication of potential deviations.

A promising technology for continuous personnel tracking is data acquisition with RFID devices [35] because it has the potential to facilitate simultaneous tracking of several persons (and objects) without interrupting the routine workflow in the OR. The preliminary findings of our study are promising [36].
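The takeover event described above, a surgeon changing table sides, could be derived from a stream of RFID zone readings. The zone labels below are assumptions for illustration; a real deployment would map reader antennas to OR zones.

```python
# Hedged sketch (hypothetical zone labels) of detecting table-side changes
# from successive RFID position readings for one team member.
def side_changes(track: list[str]) -> int:
    """track: successive table-side zones, e.g. 'left' or 'right'.
    Returns the number of transitions between sides."""
    return sum(1 for a, b in zip(track, track[1:]) if a != b)
```

A single transition during a phase in which the senior surgeon normally stays on one side could then be flagged as a possible deviation from the routine workflow.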

Analyses of adverse events in health care have found that many underlying causes originate from failures in nontechnical aspects of performance rather than a lack of technical expertise [46]. In a recent study, communication was found to be a causal factor in 43% of the errors made in surgery [47]. In addition, the emotional level is an important indicator of whether surgery is running smoothly or the team is confronted with difficulties. However, data capture and analysis are difficult. During an operation, emotional levels can be recognized either nonverbally, by gestures and facial expressions, or verbally, by speech. Although technically feasible [16], nonverbal emotion assessment in the OR is difficult because of the limited freedom of movement and the surgical cap and mask.

In contrast, emotion recognition by speech is a promising technology for further workflow prediction. Our current data suggest that only 53% of intraoperatively annotated speech sequences can be considered “normal,” whereas 47% are emotionally “modified” to either positive or negative emotional levels [21].
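Summarizing annotated speech sequences into the proportions reported above is a simple aggregation step. The label set used here is a hypothetical simplification of the annotation scheme.

```python
# Minimal sketch (hypothetical label set) of summarizing intraoperatively
# annotated speech sequences into per-label proportions.
def emotion_proportions(labels: list[str]) -> dict[str, float]:
    """labels: one annotation per speech sequence,
    e.g. 'normal', 'positive', or 'negative'."""
    total = len(labels)
    return {lab: labels.count(lab) / total for lab in set(labels)}
```

Correlating such proportions with the phases of an operation is the kind of analysis needed to evaluate the impact of emotional levels on the surgical workflow.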

Improved methods for emotional data acquisition and correlation between intraoperative annotated emotions and the course of an operation are needed to evaluate the possible impact of different emotional levels (positive and negative) on the course of an operation and thereby to obtain new information on nontechnical aspects for future workflow interpretation. The presented technologies are only a first step toward achieving an increased situational awareness in the OR.

Some unsolved problems still remain. Automated recording of the functional state of the peripheral devices is sometimes inaccurate. The matching of manually and automatically acquired data is difficult, and the quality of preoperative diagnoses varies. In addition, technical interference during data acquisition and signal processing may hamper automated data retrieval. Furthermore, the role and impact of the surgeon's emotional state are not yet fully clarified and are difficult to analyze. Another critical point involves privacy and legal issues related to comprehensive "stealth" data collection. However, as long as only environmental OR data are captured, such concerns can be allayed.

Nonetheless, our results show that workflow definition in surgery is feasible if the surgical procedure is standardized, the peculiarities of the individual patient and the level of the surgeon's expertise are taken into account, and comprehensive data capture can be obtained. A precise description of the actual state of the procedure (situational awareness) can be achieved by combining manual and automated data recordings with highly specific constellations of various parameters (peripheral parameters, surgical instruments), analysis of team behavior, and emotion analysis. If continuous, comprehensive, and automated real-time data acquisition is provided, a prospective description of the procedure should be conceivable.

Although automated (security) monitoring systems can help to decrease human error and increase situational awareness in the OR, they should be seen only as supporting systems rather than self-operating security units in the OR. The surgeon's judgment remains the gold standard of error prevention, but it can be substantially supported by the monitoring systems described above.