
1 Introduction

Despite an increase in the number of doctors per capita, there are still not enough to monitor a huge, ever-growing population. Unless healthcare systems undergo significant structural and transformational changes, they will be difficult to sustain. Besides attracting, training, and retaining more healthcare professionals, we must also devote their time to the activity of greatest benefit to patients: caring for them. To improve the outcomes of chronic conditions and reduce the need for frequent medical visits, it is imperative that we embrace and accelerate the development of remote patient monitoring technologies. Additionally, in scenarios such as COVID-19, remote monitoring can be a crucial tool for healthcare practitioners to assess their patients' conditions before they present for treatment.

The Internet of Things is currently being studied in various countries in relation to human health monitoring systems, and the long-distance transfer of medical data (remote consultation and monitoring) has advanced greatly [1,2,3,4,5,6,7,8]. However, most of the models behind these advances follow a store-and-forward architecture: data collected from the patient is stored in a database or on a cloud server, and doctors retrieve it at a later stage to analyze the patient's vitals. As a result, assistance is costly and time-consuming.

Furthermore, some existing models and implementations suffer from insufficient communication stability and are complex to use. Despite receiving positive responses, video conferencing systems designed specifically for telehealth are not flawless. The purchase price has been a concern for many small hospitals, clinics, and users on a limited budget. In many cases, proprietary technologies are used in an ad hoc manner that is incompatible with other, similar solutions; for example, audio and video are processed using hundreds of different methods with no interoperability between them. In many cases, downloading the software, understanding the setup and configuration instructions, and providing ongoing maintenance (upgrades) requires a considerable level of IT expertise.

Because of these shortcomings, this paper presents a model for monitoring patients remotely in real time that enables instant communication between stakeholders.

WebRTC emphasizes direct communication, enabling developers to build applications that support audio and video calling, live video and screen sharing, and instant messaging. With in-browser media support, end users no longer need to download, install, and manually configure applications, nor do they require proprietary browser plug-ins to view media content. With WebRTC, real-time communication can be accessed from any web application through a simple JavaScript API [9]. It aims to help developers create browser-based, peer-to-peer real-time interaction, which is often necessary for multimedia services such as video conference calls [10]. Thus, a WebRTC (Web Real-Time Communication) system was adapted to meet the model's requirements of being simple, inexpensive, and interoperable.

In this study, data is collected from a wearable device to calculate heart rate, with wireless sensors gathering the information for the health monitoring system. React has been used to develop the interfaces for these sensors, and communication is facilitated via WebRTC integrated into the React application.

Our model aims to be more effective than existing models by:

  • Providing real-time doctor-patient consultation using videoconferencing.

  • Allowing interpretation of medical data using a screen-sharing facility.

  • Ensuring no packet loss and reducing server load.

2 Related Work

Lee et al. [11] discussed a device that enables remote patient monitoring via a smartphone app. To discuss the treatment process with the doctor, the patient can hold a video conference from the smartphone. Skype's AES-style encryption was used to secure the patient video data. However, the system does not operate in real time: the collected vital signs must be uploaded to the server manually by the patient.

Roy et al. [12] described a remote patient monitoring system utilizing RF technology that allows users to access medical information on mobile devices and through the web. The system consists of several components, including sensor nodes, coordinator nodes, web and database servers, and graphical user interfaces (GUIs). The sensor nodes collect data, a central server receives the acquired data, and the GUI allows users to view and analyze it. Unfortunately, the system cannot function in real time because of transmission delays and slow responses from the server.

An Android application designed by Mehmet [13] monitors the heart rate and heart rate variability of cardiovascular patients who require close, constant monitoring. Wearable sensors continuously monitor the patient's cardiovascular activity, and the sensed signals are transmitted wirelessly to an Android interface. Thresholds are configured so that the system detects when critical values of heart rate, heart rate variability, and body temperature are exceeded; when they are, the readings and the patient's location are sent by e-mail and Twitter notification to the doctor and family members. The system does not offer real-time analysis of vital statistics, but it permits patients to move around and live a more fulfilled life in their immediate surroundings.

Pap et al. [14] implemented an IoT-based e-health system using Raspberry Pi devices. The data was recorded in a chart that the user could view, analyze, and download through a web browser served by a Node.js application. The system includes sensors for blood pressure, pulse oximetry, airflow, temperature, and galvanic skin conductivity. Depending on the user's preferences, the system records data either in a live session or in a recording session; data from a live session is stored only temporarily, whereas data from a recording session is stored permanently.

3 Proposed Model

Essentially, the proposed model can be broken down into two phases:

  • Setting up a peer-to-peer connection between doctor and patient.

  • Collection and transfer of health vitals from the health monitoring devices at the patient's end to the doctor's end.

In this way, instantaneous communication is established, making it possible for a patient to send their vitals data via WebRTC, as shown in Fig. 1.

Fig. 1 Model overview

To demonstrate the proposed model and verify its correctness, a prototype was built.

3.1 Prototype Architecture

Establishing the P2P Connection Using WebRTC.

The telemedicine model we propose is web-based. Patients connect to an HTTP server through a web browser and access the main HTML page, which also uses JavaScript and CSS. The page accessed by the clients runs React JS code that connects to the signaling server, which acts as a broker to coordinate the doctor-to-patient communication between the browsers.

PeerConnection, MediaStream, and DataChannel are the three core concepts of the WebRTC API. The PeerConnection interface establishes the interaction between a local computer and a remote peer: connections to remote peers can be made, maintained, monitored, and closed when no longer required. Audio and video are handled through a MediaStream, which defines how the data is accessed, the constraints associated with its type, and the success and error callbacks. Finally, a DataChannel represents a bidirectional channel between the two peers for arbitrary application data.
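As a minimal sketch of how these three objects appear in browser JavaScript (assuming an async context; the STUN server URL, media constraints, and element ID are placeholders rather than the prototype's exact configuration):

```javascript
// Minimal illustration of the three core WebRTC objects; configuration values are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

// MediaStream: capture local audio/video and attach each track to the connection.
const localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));

// Remote tracks arrive through the 'track' event and can be rendered in a <video> element.
pc.ontrack = (event) => {
  document.querySelector('#remoteVideo').srcObject = event.streams[0];
};

// DataChannel: an extra channel on the same connection for text or binary application data.
const channel = pc.createDataChannel('vitals');
channel.onopen = () => channel.send(JSON.stringify({ heartRate: 72 }));
```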

The primary steps involved in establishing a connection were:

Exchanging SDP.

The Session Description Protocol (SDP) is used to announce or invite participation in a multimedia communication session and allows such sessions to be described. The protocol is not used for media delivery itself but for negotiating the audio/video codecs between peers and for exchanging network topology and other information.

The caller creates an SDP offer using RTCPeerConnection.createOffer() and sets it as the local description of the connection by calling RTCPeerConnection.setLocalDescription(). The caller then uses the signaling server to send the offer to the intended recipient. The recipient records the received offer as its remote description using RTCPeerConnection.setRemoteDescription(), creates an answer with RTCPeerConnection.createAnswer(), and passes that answer to RTCPeerConnection.setLocalDescription() to set it as its own local description. The signaling server is used once again to send the answer back to the caller. Finally, the caller receives the response and sets it as its remote description using RTCPeerConnection.setRemoteDescription().
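A condensed sketch of this offer/answer sequence follows, reusing the pc object from the previous snippet; the signaling object stands in for the Socket.io channel described in Sect. 3.2, and its event names are illustrative rather than part of WebRTC:

```javascript
// Caller side: create the offer, set it locally, and relay it over the signaling channel.
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
signaling.emit('offer', pc.localDescription);

// Callee side: apply the received offer, then create, set, and return the answer.
signaling.on('offer', async (remoteOffer) => {
  await pc.setRemoteDescription(remoteOffer);
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  signaling.emit('answer', pc.localDescription);
});

// Caller side: apply the received answer as the remote description.
signaling.on('answer', async (remoteAnswer) => {
  await pc.setRemoteDescription(remoteAnswer);
});
```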

Exchanging ICE Candidates.

In addition to information about the media, peers must also exchange information about the network connection. An ICE candidate describes how a peer can be reached, either through a direct connection or relayed via a TURN server. Peers typically propose their best candidates first, working down to their worst. Although UDP is preferred (it is faster, and a session can recover from interruptions more easily), the ICE standard also allows TCP candidates.

In this exchange, each peer first creates and sets its local description (SDP). The caller then contacts the STUN server, which helps it gather its ICE candidates; the callee gathers its candidates in the same way. Once the local and remote descriptions are set, the gathered candidates are exchanged through the signaling server, and as candidates arrive, each side adds them to its connection by calling RTCPeerConnection.addIceCandidate(). An illustration of the entire process can be seen in Figs. 2 and 3.
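The candidate exchange can be sketched in the same style (again with an illustrative signaling channel and event name):

```javascript
// Forward each locally gathered candidate (obtained with the STUN server's help) to the remote peer.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    signaling.emit('ice-candidate', event.candidate);
  }
};

// Add every candidate received from the remote peer to the local connection.
signaling.on('ice-candidate', async (candidate) => {
  await pc.addIceCandidate(candidate);
});
```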

Fig. 2 Overview of WebRTC connection

Fig. 3 Overview of exchange process to establish connection

Integrating Health Monitoring Systems.

The health monitoring system is made up of several modules, such as the pulse rate recording module, the body temperature measurement module, the heart rate acquisition module, and the blood pressure monitoring module. The patient is connected to several sensors that communicate with one another; to capture readings from the patient, the sensors automatically convert those readings into signals.

Depending on its configuration, the system can send or retrieve data to update an individual's record or to create reports. Open-source SDKs allow these health monitoring devices to communicate with the interface over Bluetooth or Wi-Fi.

Several methods of wireless communication are available. We chose Bluetooth because transmitting the medical data our prototype measures, such as heart rate, does not require high communication bandwidth. Bluetooth's low power consumption and limited range of 10–100 m, along with its space-saving characteristics, make it ideal for mobile devices and embedded systems that exchange relatively little data. Because Bluetooth uses frequency hopping spread spectrum (FHSS), the connection also suffers little interference. Bluetooth technology is widely available, can drive cost savings and greater efficiency, and makes the wireless synchronization process automatic on Bluetooth-enabled devices. We optimized our module to provide synchronized patient information across a variety of devices, including portable digital assistants, laptop computers, cell phones, and more [15]. Many devices and applications also support Bluetooth for internet connectivity.
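Although the prototype itself accesses devices through a vendor SDK (Sect. 3.2), the standard Bluetooth heart-rate profile gives a rough picture of how a browser could subscribe to such readings; the following Web Bluetooth sketch only illustrates the idea and is not the mechanism used in our implementation:

```javascript
// Sketch only: subscribe to the standard GATT heart-rate service via the Web Bluetooth API.
async function readHeartRate() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ['heart_rate'] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('heart_rate');
  const characteristic = await service.getCharacteristic('heart_rate_measurement');
  await characteristic.startNotifications();
  characteristic.addEventListener('characteristicvaluechanged', (event) => {
    const data = event.target.value; // DataView over the measurement bytes
    // Flag bit 0 selects an 8-bit or 16-bit (little-endian) heart-rate value.
    const bpm = data.getUint8(0) & 0x1 ? data.getUint16(1, true) : data.getUint8(1);
    console.log(`Heart rate: ${bpm} bpm`);
  });
}
```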

For testing purposes, we used only the heart rate acquisition module. The light-volume method measures heart rate from the change in light transmittance in the blood vessels caused by the heartbeat. A photoelectric sensor converts the light source into an optical signal, and a filter circuit then produces an electrical signal from it. A wavelength between 650 and 750 nm was chosen. Figure 5 illustrates the heart rate sensor signal flow. As a pulse of light passes through the peripheral blood vessels of the human body, the transmittance of the light source changes; a photoelectric converter turns the optical signal reflected through the peripheral vessels into an electrical signal, which is then amplified. The heart rate is represented as an analog voltage, digitized by the Arduino, and the resulting values are transmitted via Bluetooth. Figure 4 depicts this integration process.

Fig. 4 Integration of telehealth devices with the interface

Fig. 5 Flow chart for heart rate sensing

3.2 Prototype Implementation and Interface

This section presents the implementation and design of the prototype, a WebRTC-based video conferencing system, to illustrate how the concepts described earlier were realized.

The developed application is divided into several distinct but complementary components. The core component is responsible for connecting the peers with real-time video and audio. In addition to a session server that identifies logged-in and available users, the application provides a real-time collaborative chat box and a screen-sharing area.

The chat box area is used to send text messages and binary files. Care coordinators and patients can exchange data via the chat service in addition to their online meeting, which allows prescriptions and files to be shared. The online chat service requires both client- and server-side code: a chat channel is established on a Socket.io server, and chat messages are delivered to the clients once the connection has been established.
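A hedged sketch of this channel is shown below; the event name, port, and broadcast strategy are illustrative choices rather than the prototype's exact code:

```javascript
// Server (Node.js + Socket.io): relay chat messages between connected clients.
const { Server } = require('socket.io');
const io = new Server(3001, { cors: { origin: '*' } });

io.on('connection', (socket) => {
  socket.on('chat-message', (msg) => {
    // Forward the message (text or a serialized file) to the other participants.
    socket.broadcast.emit('chat-message', msg);
  });
});

// Client (browser): send and receive messages over the same channel.
// const socket = io('http://localhost:3001');
// socket.emit('chat-message', { from: 'patient', text: 'Hello doctor' });
// socket.on('chat-message', (msg) => appendToChatBox(msg));
```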

The user's screen stream is captured with the browser's built-in getDisplayMedia function. To send this stream, we used the SimplePeer package, which wraps the WebRTC peer connection; the other side of the connection then receives the screen-share stream over that connection. Using this screen-share feature, patients can show vital statistics and data collected from health monitoring devices to the doctors, who can then give their inferences and advice.
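A sketch of this screen-share path using the simple-peer package follows; the signaling event name and element ID are illustrative assumptions:

```javascript
import Peer from 'simple-peer';

// Capture the screen and feed it to a simple-peer wrapped WebRTC connection.
async function shareScreen(isInitiator) {
  const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const peer = new Peer({ initiator: isInitiator, stream: screenStream });

  // simple-peer produces its own offer/answer/ICE payloads; relay them over the signaling channel.
  peer.on('signal', (data) => signaling.emit('screen-signal', data));
  signaling.on('screen-signal', (data) => peer.signal(data));

  // On the doctor's side, the incoming stream is attached to a <video> element.
  peer.on('stream', (remoteStream) => {
    document.querySelector('#screenVideo').srcObject = remoteStream;
  });
}
```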

An open-source React Native wrapper for the MedM DeviceKit SDK is used, which allows a wide range of medical sensors to connect seamlessly via Bluetooth and acquire data such as heart rate, blood pressure, spirometry, and temperature. A heart rate acquisition module was used to collect the data for our prototype.

To use the MedM DeviceKit SDK, the DeviceKit is first registered and then connected to our heart rate acquisition device, after which vitals data collection begins. The device's data is acquired in XML format.
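Because the actual schema is defined by the SDK vendor, the element and attribute names in the following parsing sketch are hypothetical; it only illustrates how an XML reading could be turned into a value the interface can render:

```javascript
// Hypothetical XML payload; the real schema is vendor-defined.
const sample =
  '<reading type="heart_rate" unit="bpm" timestamp="2021-05-10T10:15:00Z">76</reading>';

function parseReading(xmlString) {
  const doc = new DOMParser().parseFromString(xmlString, 'application/xml');
  const node = doc.querySelector('reading');
  return {
    type: node.getAttribute('type'),
    unit: node.getAttribute('unit'),
    timestamp: node.getAttribute('timestamp'),
    value: Number(node.textContent),
  };
}

console.log(parseReading(sample)); // { type: 'heart_rate', unit: 'bpm', ..., value: 76 }
```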

The React Native Web library is then used so that the same React Native components can be used in both the web and mobile applications.
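As a minimal sketch of this sharing (the component name, props, and styles are our own, and the web build is assumed to alias 'react-native' to 'react-native-web' in the bundler):

```javascript
// Shared vitals display: React Native primitives that also render in the browser via react-native-web.
import React from 'react';
import { View, Text, StyleSheet } from 'react-native';

export function VitalsCard({ label, value, unit }) {
  return (
    <View style={styles.card}>
      <Text style={styles.label}>{label}</Text>
      <Text style={styles.value}>{value} {unit}</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  card: { padding: 12, borderRadius: 8, backgroundColor: '#f2f2f2' },
  label: { fontSize: 14, color: '#555' },
  value: { fontSize: 24, fontWeight: 'bold' },
});
```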

Finally, users connected to the server must identify themselves in order to access the web interface. After the user grants access to the webcam and microphone, he or she sees the interface. The left-hand column displays the users that can initiate a WebRTC connection and the list of common rooms created for group calls, which allow multiple participants, such as nurses, GPs, and specialists, to join a single session at the same time. Clicking on an active user starts the connection if the callee is available, after which the patient sees the interface illustrated in Fig. 6. In the right-hand column, the remote video appears once a WebRTC connection has been established, along with buttons to share the screen, turn off local video, disable the microphone, open the chat box, and end the call. When the patient shares their screen, the interface shown in Fig. 7 appears at the doctor's end.

Fig. 6 Interface on connection

Fig. 7 Interface showing vitals data upon screen-share

4 Results

A test was carried out on the prototype using two tabs of the same browser connected to the internet. A connection was established successfully, as shown in Fig. 6, and we were able to transmit video (and voice), share the screen, and transfer files and messages without experiencing any issues.

The Arduino microcontroller accurately received data from the heart pulse sensor, processed it, and sent it to the LCD screen for display. The wireless module then successfully transmitted the data via Bluetooth, and from there it was visualized on the interface without any errors, as shown in Fig. 7.

To check for packet anomalies, we compared the total ECG data of a randomly selected set of 4500 packets between the sender (health monitoring device) and the receiver (mobile device). According to the results shown in Table 1, the sums of the ECG values at the sender and the receiver are identical. Compared with most existing models, the proposed model exhibited very low latency and high performance on a private network.

Table 1 Performance of proposed model in private network

5 Limitations

For our needs, we developed and implemented a simple application that makes use of WebRTC. A few limitations are inherent in the current design and implementation. These limitations are described next, along with future plans to address them.

  • Some deviations occurred during data acquisition from the health monitoring devices. This was due to the sensor of the heart rate acquisition module and its associated circuit design, and can be addressed with more sophisticated circuit hardware.

  • We used Socket.io and Node.js in our web application to create the signaling server, which makes message exchange straightforward. However, this design works well only with very few users, as in the current setting; with a large number of participants it would not scale, causing latency and delay issues. A signaling mechanism capable of handling high volumes of participants, such as SIP or Jingle, could therefore be used instead.

  • Bandwidth allocation deteriorated when multiple tabs were open in the browser, so video conferencing reliability was variable. This issue can be addressed by adjusting output bitrates to match the continuously fluctuating bandwidth, as in the sketch following this list.
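One way to implement such adaptation in the browser is to cap the video sender's bitrate through RTCRtpSender.setParameters(); the sketch below assumes access to the existing peer connection, and the 250 kbps figure is an arbitrary example rather than a tuned value.

```javascript
// Cap the outgoing video bitrate on an existing RTCPeerConnection.
async function capVideoBitrate(pc, maxBitrateBps = 250000) {
  const sender = pc.getSenders().find((s) => s.track && s.track.kind === 'video');
  if (!sender) return;
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    params.encodings = [{}];
  }
  params.encodings[0].maxBitrate = maxBitrateBps;
  await sender.setParameters(params);
}
```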

6 Conclusion

In the unprecedented time of COVID-19, health needs are immediate and urgent, and now is the time to introduce remote patient monitoring standards. The centralized healthcare system must change to provide more personalized care, improve outcomes, and create a healthier world.

Using WebRTC technology, we presented a standards-based, interoperable, simple, and inexpensive method of developing a video conferencing system. A connection between patient and doctor was achieved using wireless sensor technology and a browser.

The study was demonstrated through an experimental prototype, and a working remote patient monitoring system was built. Its real-time communication capability enabled telehealth teleconferencing services to be provided promptly.

The paper also describes the adaptations we made in the system architecture and the component models. The prototype implementation yielded several insights, which have been discussed, and we also pointed out some of the shortcomings of our current implementation.