1 Introduction

Operating rooms (ORs) are among the most cost-intensive units of any hospital, and they pose particular challenges when it comes to increasing efficiency and functioning safely.

To begin with, surgery is a complex process, and the efficient and safe functioning of ORs depends heavily on coordinated work by the surgical team. Surgical teams, including surgeons, nurses, technicians, and other personnel, need ready access to accurate and up-to-date medical histories and information about their patients. Vital information should be available as immediately, efficiently, and safely as possible to facilitate the process and optimize the team’s functioning while maintaining patient safety. In a typical operating room, however, most patient information is stored separately in a variety of information systems, and little attempt is made to bring it together in a comprehensive operating room information system (ORIS), even though the benefits of such integration are numerous [1]. Integration enables greater surgical precision, which in turn speeds up the surgical suite workflow, and an ORIS can improve the patient care process by providing quick and accurate access to patient data. In short, implementing an ORIS can improve OR management, reduce adverse events related to poor information, and minimize interruptions in the OR team’s workflow [2].

Second, surgical teams still use traditional computer accessories such as keyboards and mice that were not designed for use in ORs [3]; such accessories can increase the risk of transferring contaminated material between the sterile and non-sterile environments. What matters most is to provide surgical teams with efficient, intuitive, and safe means of interaction without affecting performance [4]. Many of these limitations can be overcome by introducing a hands-free natural user interface (NUI), which stands out among the available alternatives; surgical teams and staff members can therefore benefit from an NUI based on a 3D sensor for freehand interaction.

Data documentation is another challenge encountered by surgical teams. For example, it is cumbersome to accurately document each stage of the surgical process in video or audio recordings; the available technology is still limited when a surgeon needs to document a stage without touching anything, or to record part of a surgery for educational purposes, without losing concentration.

Finally, at different stages of the surgical pathway it is essential to locate and identify the roles that need to be tracked in the ORs, so that safety, security, and surgical workflow can be further improved. This is a task that radio frequency identification (RFID) systems are well suited to.

Additionally, the standardization of transmission protocols has become an important component of modern medicine. To ensure the proper integration of various software modules, it is necessary to apply the Health Level 7 (HL7) standard and the Digital Imaging and Communications in Medicine (DICOM) standard (for handling, storing, printing, and transmitting medical imaging information), which provide sets of rules and algorithms specific to the medical field [5].

In conclusion, to overcome the above limitations, an operating room information system is needed with the following features: it must be compatible with the HL7 and DICOM standards; it must focus on a novel interaction style that targets more natural interaction between people and technology, i.e. a touch-less NUI based on 3D sensors; and it requires patients and staff to wear radio frequency identification tags in the OR so that operation and location information can be stored automatically by the system’s “Locating RFID” component.

2 Materials and methods

Drawing on expertise in machine vision, human–machine interfaces, and surgical factors, we present an operating room information system called “MediNav” that offers a touch-less user interface based on natural hand movements and targets the challenges described above. The current study is hosted by Dr. Shariati Medical Center, affiliated with Tehran University of Medical Sciences (TUMS), and brings together researchers and surgical teams to evaluate the proposed system in a live OR environment. “MediNav” consists of “application forms” implemented in C# WPF and four system interfaces, namely the touch-less NUI with a 3D sensor, Files and Data Management, the HL7 protocol, and a Locating System with RFID, implemented in C# and C++. To operate, each element interacts simultaneously with two of these system interfaces; for example, “Clinical Info.”, “Patient Info.”, and “Medical Images” use the “File and Data Management” and “HL7” interfaces (Fig. 1).

Fig. 1 The operating room information system elements and their relation

The architecture of “MediNav” is illustrated in Fig. 2. Once the patient code is entered in the main menu of the information system, the program creates a message with Patient Referral (REF) and Observation Request (OBR) segments for the HL7 consumer. The HL7 consumer sends it to the HL7 provider installed in the hospital, and the provider retrieves the requested data from the Health Information System and returns an Observation/Result (OBX) message to the consumer. The HL7 consumer then converts the data from the HL7 message format to XML and places them in the patient’s folder. If the “ValueType” field of an OBX segment contains an “RR” value, the content of its “Observation Value” field is sent to the PACS (Picture Archiving and Communication System) server as a retrieval request, and the corresponding DICOM image files are retrieved from the PACS server and delivered to the patient folder. The Files and Data Management interface loads the data into the program. Some data, such as surgery timings, are appended and sent to the “MediNav” database via the touch-less user interface in the OR. The program then performs calculations on the data and extracts information, stored in the “MediNav” data warehouse, for use in the “OR Dashboard” pane. Finally, the “OR Dashboard” program uses this information to support decision making for enhanced management. The following subsections explain the elements and interfaces of “MediNav”.
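As a minimal sketch only (this is not the MediNav source, and the helper functions RetrieveDicomFromPacs and AppendObservationToXml are hypothetical placeholders), the consumer-side routing of an incoming OBX segment described above could look as follows:

#include <string>

// one parsed OBX segment, reduced to the fields relevant to the routing decision
struct ObxSegment {
    std::string valueType;        // OBX-2, e.g. "ST", "NM" or "RR"
    std::string observationValue; // OBX-5
};

// hypothetical helpers standing in for the PACS retrieval and the XML writer
void RetrieveDicomFromPacs(const std::string& reference, const std::string& folder);
void AppendObservationToXml(const ObxSegment& obx, const std::string& xmlPath);

void RouteObservation(const ObxSegment& obx, const std::string& patientFolder) {
    if (obx.valueType == "RR") {
        // an "RR" value type marks a reference pointer: ask the PACS for the
        // referenced study and place the returned DICOM files in the patient folder
        RetrieveDicomFromPacs(obx.observationValue, patientFolder);
    } else {
        // every other observation is converted to XML and stored locally
        AppendObservationToXml(obx, patientFolder + "\\clinical.xml");
    }
}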

Fig. 2 The MediNav architecture

2.1 The touch-less NUI with 3D sensor

A NUI is an interface designed to reuse existing skills for interacting directly with content. Many different input modalities, including multi-touch, motion tracking, voice, and stylus, can be used to interact with a NUI [6]. Moreover, intuitive control mechanisms that imitate human behaviors and gestures can be used to communicate without indirect input devices [7].

Further, touch-less control is a new approach to human–computer interaction in which users can control a device without touching or clicking on it. Touch-less NUIs, which can be sensed by devices such as the Microsoft Kinect, are suitable in circumstances where touch input is undesirable [8].

The Microsoft Kinect sensor consists of a pair of depth sensors that track users in three dimensions, a standard color digital camera, and four microphones. Using its image, audio, and depth sensors, the Kinect detects users’ movements, identifies their faces, and recognizes their speech [9]. The Microsoft Kinect sensor was chosen because it provides an easy route to real-time interaction: it lets users control and naturally interact with programs simply through tracked hand movements and spoken commands, without the need to physically touch a controller.

It thus enables a touch-less NUI for controlling medical information. The traditional, unsuitable computer accessories such as keyboards and mice are replaced by the proposed touch-less NUI equipped with a 3D sensor. The touch-less NUI is developed in C++ and C#.
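For illustration only, a minimal sketch (using the Kinect for Windows SDK 1.x C++ API, not the production MediNav code) of initialising the sensor for the colour, depth, and skeleton streams that the touch-less NUI relies on is given below:

#include <Windows.h>
#include <NuiApi.h>

// initialise the Kinect and open the streams used by the hand-tracking algorithm
bool InitKinect(HANDLE& colorStream, HANDLE& depthStream)
{
    HRESULT hr = NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR |
                               NUI_INITIALIZE_FLAG_USES_DEPTH |
                               NUI_INITIALIZE_FLAG_USES_SKELETON);
    if (FAILED(hr)) return false;

    // skeleton stream: supplies the wrist and hand joints used to locate the palm
    if (FAILED(NuiSkeletonTrackingEnable(NULL, 0))) return false;

    // 640x480 colour and depth streams, matching the resolution used by the algorithm below
    if (FAILED(NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
                                  0, 2, NULL, &colorStream))) return false;
    if (FAILED(NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480,
                                  0, 2, NULL, &depthStream))) return false;
    return true;
}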

The touch-less NUI also relies on a novel image processing algorithm developed for hand tracking and gesture recognition. Its sequential steps are described below:

1. Capture an image of the entire frame as the background and name it “background”. It is taken when the hand is down and is re-checked every second.

2. Detect the midpoint of the hand with the skeleton function and find “m_Point”:
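// project the right-wrist and right-hand skeleton joints into 640x480 depth-image coordinates (x, y, depth)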

NuiTransformSkeletonToDepthImage(skel.SkeletonPositions[NUI_SKELETON_POSITION_WRIST_RIGHT], &x_0, &y_0, &depth_0,NUI_IMAGE_RESOLUTION_640x480);

NuiTransformSkeletonToDepthImage(skel.SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT], &x_1, &y_1, &depth_1,NUI_IMAGE_RESOLUTION_640x480);
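// form the tracked palm point from the projected joints: x from the wrist, y from the hand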

m_Point.x = x_0;

m_Point.y = y_1;

3. Use “m_Point” to find the depth of the hand-palm, i.e. the minimum distance between the palm center and the Kinect, and name it “min_depth”. Then consider a virtual square around the palm center.
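// read the depth value of the palm-centre pixel (_x, _y) directly from the depth buffer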

min_depth = pBufferRunDepth [_x + (_y*cDepthWidth)].depth;

4. Map the depth frames to the image frames using the “MapDepthFrameToColorFrame” API function.

5. Consider a second, larger virtual square with the same palm center, captured at 29 fps with the RGB camera, and name it “newFrame_sqr”.

6. Consider the images of the hand-palm and fingers taken by the depth camera inside these two virtual squares, and name the result “img_depth”.

7. Map “newFrame_sqr” onto the “background” image and crop the redundant parts; name the result “background_sqr”.

If the color of the hand is recognized, the accuracy can be improved further. In normal conditions a hand appears more red than green or blue, so the tolerance for red can be tuned separately from the other two colors in the next step. The idea is to obtain the difference between the current image and the background image.

8. Load “newFrame_sqr” and “background_sqr” with the “cvLoadImage” function so that their difference can be found in terms of the RGB colors (red, green, and blue):
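// load the saved background square and the current frame square as 3-channel colour images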

IplImage* frame_old = cvLoadImage("c:\\img\\background_sqr.bmp", 1);

IplImage* frame_new = cvLoadImage("c:\\img\\newFrame_sqr.bmp", 1);

9. Compute the difference between the current frame (frame_new) and the background frame (frame_old) with the “cvAbsDiff” function, for each RGB channel separately:
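// per-channel absolute difference between the current and background squares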

cvAbsDiff(frame_new_r,frame_old_r,img_diff_r);

cvAbsDiff(frame_new_g,frame_old_g,img_diff_g);

cvAbsDiff(frame_new_b,frame_old_b,img_diff_b);

A subtle point is that only the pixels of “img_diff_r”, “img_diff_g”, and “img_diff_b” whose values exceed the tolerance of the corresponding color are used, based on the tolerance entered for each RGB channel. This technique can therefore be more effective than a plain OpenCV threshold. For example, when the background contains far more blue than the other colors, a higher tolerance is assigned to blue; conversely, the red tolerance is lowered because skin color contains more red. It is worth mentioning that increasing a color’s tolerance reduces the noise level but also reduces the number of retained pixels, i.e. more accurate points come at the cost of fewer points. The difference image is computed automatically by the program, while the tolerances are set in a separate calibration form so that the real colors of the hand, skin, and background can be recognized (a sketch of this per-channel thresholding appears after the procedure steps).

10. Create a new image that is the union of “img_diff_r”, “img_diff_g”, and “img_diff_b”, and name it “img_diff”.

11. Compare the “img_diff” image (derived from the RGB camera) with “img_depth” (taken by the depth camera) and fill in the missing points of “img_depth”. Points are valid only if they lie within the hand area and their colors are consistent with the rest of the hand.

The algorithm described above is implemented with the Microsoft Kinect for Windows SDK version 1.7.0.529 and OpenCV version 2.4.5.
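For illustration, steps 8 to 10 can be sketched with the OpenCV 2.4.x C API as follows. This is a simplified reconstruction rather than the exact MediNav source; the per-channel tolerance values are assumed to come from the calibration form mentioned above, and intermediate images are not released for brevity.

#include <opencv/cv.h>

// union of the per-channel differences that exceed their colour tolerances ("img_diff")
IplImage* DiffWithTolerance(IplImage* frame_old, IplImage* frame_new,
                            double tol_r, double tol_g, double tol_b)
{
    CvSize sz = cvGetSize(frame_new);
    IplImage* old_ch[3]; IplImage* new_ch[3];
    for (int i = 0; i < 3; ++i) {
        old_ch[i] = cvCreateImage(sz, IPL_DEPTH_8U, 1);
        new_ch[i] = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    }
    cvSplit(frame_old, old_ch[0], old_ch[1], old_ch[2], NULL); // OpenCV stores colour images as B, G, R
    cvSplit(frame_new, new_ch[0], new_ch[1], new_ch[2], NULL);

    double tol[3] = { tol_b, tol_g, tol_r };                   // tolerances in B, G, R order
    IplImage* diff = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    IplImage* mask = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    IplImage* img_diff = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    cvZero(img_diff);
    for (int i = 0; i < 3; ++i) {
        cvAbsDiff(new_ch[i], old_ch[i], diff);                  // |current - background| per channel
        cvThreshold(diff, mask, tol[i], 255, CV_THRESH_BINARY); // keep only pixels above the tolerance
        cvOr(img_diff, mask, img_diff);                         // union of the three channels
    }
    return img_diff;
}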

2.2 HL7 standard compatibility

HL7 refers to the highest level of the International Organization for Standardization (ISO) communication model for Open Systems Interconnection (OSI), the application level, and it provides a framework for the exchange, integration, sharing, and retrieval of electronic health information [10].

The main objective of the HL7 standard is to produce a set of specifications that allows free communication and exchange of data among medical software applications in order to eliminate or reduce incompatibility among them [11]. In other words, HL7 is dedicated to the processing and management of administrative and clinical data. It defines a number of messages that cover all activities specific to medical units.

The HL7 standard supports two message protocols: Version 2 and Version 3. The HL7 V2.3.1 messaging standard is the one used in this study.

The REF/RRI (I12) messages are used for patient referrals: the relevant trigger events cause a message to be sent from one healthcare provider to another regarding a specific patient. The referral message may contain the patient’s demographic information, the specific medical procedures to be performed (accompanied by previously obtained authorizations), and relevant clinical information. The OBR segment is used to transmit information specific to an order for a diagnostic study, observation, physical exam, or assessment, and it is followed by one or more observation result (OBX) segments. The NHapi component is used to transform HL7 2.3.1 messages into an object model usable in Microsoft .NET.
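As an illustration only (segment contents are fictitious and heavily simplified), the referral request might carry an OBR segment such as the first line below, while the provider’s response returns OBX segments such as the following two; the first OBX, whose value type is “RR”, holds a reference pointer that “MediNav” forwards to the PACS, whereas the second is converted to XML:

OBR|1|||CT-HEAD^CT scan of head|||20130331
OBX|1|RR|IMG^Key images||1.2.840.99999.1.2.3.4||||||F
OBX|2|NM|HGB^Hemoglobin||13.5|g/dL|12.0-16.0||||F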

2.3 Files and data management

The “MediNav” files consist of XML, media, and DICOM files: XML files hold the patient information received via HL7, GCM (Green Cyber Media) files hold video, MP3 files hold audio, JPG files hold images, and DICOM files hold the patients’ medical imaging.

2.4 Locating RFID

An active real-time location system (RFID-RTLS) refers to a collection of sensors that work together to automatically identify and track the location of objects, including people, in real time. An RFID-RTLS tag is assigned to every role (including surgeons, nurses, anesthesiologists, and patients) to be tracked in the ORs. The elements of the presented RFID system are tags, readers, and locators, as illustrated in Fig. 3.

Fig. 3 RFID system structure chart

Locators in the system are used to activate the tags: a locator sends its locator ID to a tag while “waking up” the tag, and the tag then transmits its tag ID together with the locator ID. Readers are responsible for receiving the signal sent from the tag, including the tag ID and locator ID numbers. These records can be stored on a central computer and accessed simply by scanning the tag a person is wearing. Here, microwave tags operating at 2.45 GHz are used, and the locators work with a 2.45 GHz reader. The models of the RFID readers, tags, and locators used in the current system are listed in Table 1.

Table 1 Models of the RFID components used in our application

Each reader is assigned an IP address within the hospital network range, with port number 2559. Two RFID readers are deployed throughout the surgical suite, which contains eight ORs. A reader can locate the mentioned roles (via the tags activated by the locators) only within its radio range, and it reports a tag whenever the tag is inside that range. Locations are determined according to square spaces on the surgical suite floor plan; in this study, for example, each operating room is treated as one space, and the approach could be generalized to other wards as well as other floors of a hospital. X- and Y-axis positions, as well as locator numbers, are then assigned to each space. The locators are best installed near the door to give the reader good readability of the tags, and locator numbers are definable through a separate interface.

RFID tags are attached to individuals in four role categories (surgeons, nurses, anesthesiologists, and patients); they can also be used to locate equipment and other staff roles such as porters. Staff tags are fixed in a related table, while patient tags vary. On entering the surgical suite, each patient wears a plastic-covered tag that can later be reused by other patients. By defining access zones (general, attention, forbidden, and danger), access for particular roles can be granted and revoked in particular zones to address safety and security issues. For instance, an attention zone is defined around infectious patients to warn other roles against making direct contact with them, and access to the drug inventory is denied to anyone other than authorized inventory personnel, so the inventory is defined as a forbidden zone for non-inventory staff. An application form is designed to monitor and locate every role using specific colors on the operating suite plan.

The classes in RFID245ApiLib.dll are used to store the information received from the readers. This information consists of locator numbers, time stamps, and the IDs of the tags activated via the locators. Where personnel are, how long they have been waiting, and how long service and transfer usually take are determined with simple calculations by comparing a role’s arrival at one position with its next position. These data are used in the OR Dashboard and the electronic whiteboard.
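As a simplified illustration (this is not the RFID245ApiLib API), a tag read and the dwell-time calculation described above can be sketched as follows:

#include <ctime>
#include <string>

// one read event as reported by a reader: which tag, which locator woke it, and when
struct TagRead {
    std::string tagId;
    int         locatorId;   // mapped to an OR or zone on the floor plan
    std::time_t timestamp;
};

// seconds a role spent at its previous location, computed once the next read
// shows the same tag at a different locator (i.e. the person has moved on)
double DwellSeconds(const TagRead& previous, const TagRead& current)
{
    if (previous.locatorId == current.locatorId) return 0.0;
    return std::difftime(current.timestamp, previous.timestamp);
}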

2.5 The application forms

The application forms comprise seven major forms, namely Patient Information, Clinical Information, Surgery Information, Surgery Report, Medical Image, Whiteboard, and Touch-less NUI, as shown in Fig. 4.

Fig. 4 Identification of “MediNav” application forms. a “Switch” button for switching to the main NUI menu. b “Minimize”/“Close” buttons for minimizing or closing the application. c “Patient” button for selecting the specific patient whose information is to be monitored. d “Patient Info.” button for monitoring patient information. e “Clinical Info.” button for monitoring clinical information. f “Surgery Info.” button for monitoring the patient’s surgery information. g “Surgery Report” button for recording voice, video, or pictures for the surgery or anesthesia reports, as well as for preparing a summarized video of the procedure for educational purposes. h “Medical Image” button for monitoring and navigating the required patient DICOM images. i “Whiteboard” button for monitoring the e-whiteboard. j “Touchless NUI” button for switching between hand mode and finger mode for touch-less use, or for using the mouse

Patient Information (Patient Info.) consists of “Patient Identification”, “Illness History”, “Medication History”, “Surgical History”, “Allergy History”, and “Satisfaction Form”. Patient Identification provides information about patient demographics, such as name, age, body mass index (BMI), and parents’ names, as well as social history such as mother tongue, alcohol consumption, and marital status. Illness History provides a history of the patient’s previous illnesses, with the starting and ending dates of each disease. Information about any medications taken prior to surgery (such as the name of the medication, its status, starting and ending dates, frequency of use, and the reason for taking it) is displayed in the Medication History item. Surgical History lists all previous surgeries with their indications, dates and types of procedures, serious injuries or complications, hospitalizations, and the surgeon’s name. Likewise, Allergy History presents basic information regarding the patient’s allergies and precautions. Finally, to evaluate patient satisfaction after surgery, a satisfaction checklist, available in the Satisfaction Form, can be filled in with the patient’s assistance.

Surgical teams also need clinical data to view the diagnostic results obtained from different services, such as electro-neuro studies (EEG, EMG, EP, and PSG), electro-cardiac studies (e.g., EKG, EEC, and Holter), laboratory, microbiology, surgical pathology, radiation therapy, and so on. These can be found in the application form allocated to Clinical Information (Clinical Info.).

Surgery Information (Surgery Info.) consists of different parts. “Procedure Info.” displays information on the patient admission type, surgical team information, diagnosis type, procedure name, and type of anesthesia.

Safety indicators, including infections, medications, and procedural errors during surgery, can be recorded via the specific checklist available in the Safe Surgery Checklist. Moreover, Surgery Timing, covering the following procedure times, can be recorded effectively by the surgical team without touching any controller:

  • Entrance into OR

  • Start/complete Anesthesia

  • Start/complete pre-preparation

  • Start surgery

  • Begin closure

  • Complete closure

  • Complete wrap up (awakening time)

  • Exit from OR

The data recorded in the Surgery Timing pane are used in the OR Dashboard; times for the surgical procedures are recorded automatically by the RFID system and the touch-less NUI with the 3D sensor.

Another part of “Surgery Info.” is Medication Use, in which information about the medication given to the patient during surgery can be recorded and displayed. Information about patient morbidity or mortality can be recorded via the Complication item. In addition, the surgical safety checklist designed by the World Health Organization (WHO) is available in the Safety Checklist item and can be filled in using the touch-less NUI enabled by the Kinect.

The Surgery Report application form is equipped with a video and photo viewer and recorder that can also edit any medical video source. The Medical Image application form allows the surgical team to view the required DICOM images of the patient, such as CT scans, X-rays, or MRIs, in the middle of an operation. The Whiteboard application form enables automatic process monitoring in the preoperative, perioperative, and postoperative environments.

Finally, the touch-less NUI algorithm can be switched between the finger-tracking and hand-tracking (grip and move, press for selection) modes using the Touch-less NUI application form. Figure 5 shows a snapshot of different application forms of “MediNav”.

Fig. 5 A snapshot of some application forms of “MediNav”

2.6 Dashboard

The long-term goal is to turn the raw collected data into valuable information and knowledge. Since the surgical suite is an arena where management decision making is vital, the OR dashboard should be regarded as an emerging best practice among leading hospitals. It is an enabling management tool that presents objective information to guide OR decision makers [12]. The dashboard captures data elements from different sources, helps analyze them, and highlights what is operationally meaningful in an intuitive format. Accordingly, “MediNav” is also equipped with an OR dashboard.

The MediNav Dashboard captures and displays real-time results and monitors key performance indicators in areas such as efficiency, quality, patient safety, and procedural timing in the surgical suite. It enhances the user’s ability to mine the data for important knowledge and helps improve operating room performance by monitoring measures that are out of compliance and identifying their root causes.

The MediNav Dashboard displays the key factors affecting OR performance in a concise format. OR managers can access up-to-date surgical case data to quickly identify problems and drill into the details to find opportunities for improvement. Figure 6 shows a screenshot of the OR Dashboard interface in “MediNav”.

Fig. 6 A screenshot of the “OR Dashboard”

3 Results

The first prototype of “MediNav” was deployed in April 2013 in the general surgery operating rooms of the Dr. Shariati Medical Center in Iran. The application was first tested in a live OR environment and has since been improved at a rapid rate. We have worked closely with surgical teams to test our software.

To test the system, the 3D sensor was mounted above a flat screen in front of the main surgeon, between the operating bed and the wall. The program was then set up, and the system was used by the main surgeons or their assistants during 30 general surgeries. Active RFID tags were also worn by the patients and the surgical teams, including surgeons, nurses, and anesthesiologists, in order to track them and record the required data.

Two different types of usability tests were conducted with the “MediNav” system: the first was a contextual interview based on watching and listening to the users while they worked, and the second was a usability satisfaction questionnaire. At the end of the operative procedures, the user who had worked with the system (the surgeon or an assistant) filled in the Post-Study System Usability Questionnaire (PSSUQ) to report their impressions of usability. This ten-item questionnaire, as presented in [13], assesses participants’ satisfaction after completion of the procedure; its items cover important components of user satisfaction with usability and functionality. A sample of the questionnaire is available in Appendix 1. We also aimed to investigate the system’s ability to improve information flow and workflow in the surgical suite.

The results suggest that the integration offered by “MediNav” increases surgical precision and allows the main surgeon to remain sterile. The flexible visualization of the user interface supports the surgical team in accessing the required information, ranging from the patient’s medical history to any DICOM image, and permits rapid access to medical information without any need to change location. This saves a considerable amount of time otherwise spent on paperwork and leaves more time for patient care; as a result, the surgical suite workflow can be sped up remarkably.

We also received feedback from surgery students (residents and fellows) who were not involved in the surgical procedures but to whom “MediNav” provided live interaction during surgery through high-quality video, serving as a live classroom for educational purposes. In addition, the attendees were satisfied by being able to record just the important parts of a procedure as a concise and effective teaching resource for educational sessions.

The main finding from the questionnaire assessment was the simplicity of finding required information; some of the surgeons felt that the platform put all the information at their fingertips. They were also extremely satisfied with the centralized access to all patient information and media technology. Moreover, the users preferred finger tracking for controlling the applications, since it resembles conventional interaction with a mouse but in a safe and sterile mode.

Likewise, the surgical manager was greatly pleased by the seamless approach to gathering data and displaying it in an easily accessible format. From his point of view, automatically recording the duration of the surgical procedures is more effective than the paper-based techniques used so far, and he remarked that the system introduces useful technologies for improving quality and efficiency in the surgical suite.

4 Discussion

The results show that surgical teams and OR managers face several major challenges in surgical suites. First, surgeons sometimes need quick access to patient information, medical images, and clinical information. Through the Files and Data Management interface, which is compatible with HL7 and DICOM, MediNav provides access to this enriched information wherever and whenever it is required.

Further, surgeons have problems with computer interaction: they require freehand interaction throughout the surgery, yet they would normally have to take hold of a mouse and click to operate any application. Reducing the risk of contamination and increasing patient safety are also important, and the traditional mouse-and-keyboard method increases the risk of infection. A touch-less NUI enabled with a 3D sensor such as the Kinect can resolve this problem.

Third, surgeons need to record audio and video files, such as surgery descriptions, anesthesia descriptions, or summaries of surgical procedures for educational purposes, without touching anything. Audio and video applications enabled with the 3D sensor help them tackle this problem.

Moreover, it is essential to enter and save a variety of information such as procedure timings, patient safety data, and complications. To meet this challenge, surgeons can rely on comprehensive data entry forms and on automatic data entry enabled by the 3D sensor and RFID, respectively. RFID systems have great potential in operating rooms for further improving safety, security, and surgical workflow: the Locating RFID system interface can be used for tracking and identifying roles including surgeons, nurses, anesthesiologists, and patients, and it can also be extended to other applications such as equipment and medical device tracking.

Finally, a graphical management tool that presents objective information to guide OR decision makers is of great interest to operating room managers. The MediNav Dashboard enables them to assess operating room quality and efficiency at a glance. The proposed dashboard is fed with information coming from other parts of the system, such as the Locating RFID, Files and Data Management, and touch-less NUI components.

All of these elements are tied together into a single system. To our knowledge, no comparably comprehensive and integrated technology handles all of these issues within the operating room, although there is a wide literature on the individual topics. Table 2 therefore summarizes the literature on touch-less methodology [14–18], NUI [19], RFID locating systems [20, 21], video and audio documentation [22, 23], information systems compatible with the HL7 and DICOM standards [5, 24], and dashboards [12, 25, 26] applicable to the operating room.

Table 2 A summary of the literature review on related topics

As shown in this table, there was a need for innovative work towards a comprehensive and integrated information system able to visualize medical information. Moreover, by using an NUI controller that detects finger or hand position, the applications can be controlled by means of finger or hand tracking (touch-less manipulation). The system is a gateway to recording procedural data automatically and viewing the acquired information graphically from multiple perspectives.

The summarized information about “MediNav” functionality is listed in Table 3.

Table 3 Functionality of the “MediNav” program

5 Conclusions

This study investigates how to develop an integrated and comprehensive operating room information system compatible with HL7 and DICOM. An NUI was designed specifically for operating rooms; the solution uses touch-less interaction with finger-tracking and hand-tracking (grip and move, press for selection) modes instead of mouse-and-keyboard methods. Procedural timings are obtained automatically by the system and are visible on electronic whiteboard platforms, and observational analyses of procedural waiting times are displayed in an operating room dashboard.

The results of the usability tests are promising and indicate that integrating these systems into a complete solution is key, not only to streamlining data and workflow but also to maximizing the surgical team’s effectiveness. In summary, a touch-less NUI can help comprehensively collect and visualize medical information, and it can also serve as a management tool.

However, there is still ample room for future experiments comparing the proposed system with other human–machine interfaces.