Abstract
Telemedicine research improves the connectivity of remote patients and doctors. Researchers have focused on optimizing and processing data over a predefined communication channel under characteristically low QoS. In this paper, a consolidated telemedicine infrastructure with a modern topological arrangement is presented and validated. The infrastructure is supported by Multi-Objective Optimized Medical (MooM) dataset processing and a channel-optimizing TelMED protocol designed exclusively for remote transmission and processing of medical datasets. The proposed infrastructure provides an application-oriented approach to creating and updating Electronic Health Records (EHR) via edge computation. The aim of this article is to achieve a higher order of Quality of Service (QoS) and Quality of Data (QoD) than typical communication-channel algorithms achieve when processing medical data samples. The results are discussed in terms of MooM dataset processing and TelMED channel optimization sessions; the resulting improvement is analyzed by comparing each MooM dataset in reverse processing toward the diagnostic server end, and a consolidated QoS figure is reported for the proposed infrastructure.
Introduction
Generally, medical data processing is performed on a stand-alone system or within a static infrastructure such as diagnosis centers, hospitals and care centers. With the modernization of Information and Communication Technology (ICT) in medical diagnosis, teleconsultation and telecommunication are now supported, and various developed nations have adopted telemedicine as a routine part of healthcare monitoring and communication. Typically, telemedicine is defined as a connectivity bridge over ICT connecting remote patients with doctors. [1] presents a detailed study of ICT's role in building telemedicine for developing countries such as India, Bangladesh and Pakistan, providing insight into the challenges and barriers to telemedicine in developing countries, one of which is the Quality of Service (QoS) and Quality of Data (QoD) of diagnosis.
Telemedicine is expanding: with artificial intelligence and machine learning added for detection and diagnosis, accuracy rates have improved. Researchers have focused on various techniques for detecting and predicting medical disabilities in data samples. According to one study, the convertible rate of medical data processing techniques and algorithms is 42.3%, and the major shortfall arises because the experimental setup is ideal, or close to ideal, with 0% exposure to dynamic challenges such as data losses, blind third-party algorithms, connectivity, Signal-to-Noise Ratio (SNR or PSNR) degradation, and accuracy-rate variation. Research challenges therefore remain open in managing the data ratio on the channel during processing and in moving the proposed or current processing techniques beyond ideal environments while still achieving a high order of diagnostic accuracy.
In the proposed technique, an initiative is taken to bridge the research gap between data-sample collection, transmission over the channel, and processing in an ideal environment. The technique is novel in that it incorporates multi-objective data types such as images, signals and text records. These medical data samples are termed Multi-Objective Optimized Medical (MooM) data, covering medical images, signals and text. The technique also operates over a dedicated communication protocol, TelMED (Telemedicine protocol), designed to perform medical data transmission over low-line communication channels. From the proposed methodology, defined and optimized datasets are processed and diagnosed using the standard third-party medical processing techniques discussed in the methodology section.
Literature reviews
Medical data processing research faces challenges unseen in physical evaluation cases and scenarios. In [2], a detailed survey of 248 research papers across various search engines and repositories is conducted for QoS and Quality of Experience (QoE); the authors provide statistical information on the relevance of telemedicine data processing to defined interest goals. [3] provides a defined infrastructure for designing telemedicine services over WiMax and open channels. The systematic software components required to implement telemedicine are studied in [4], and an economical software and IoT-based device implementation for data collection from remote patients is discussed in [5]. Challenges and barriers to data processing in telemedicine are studied across 30 research papers in [6], with results showing that 11% of the challenges arise from technical or engineering aspects.
Telemedicine services act as remote connectors for diagnosis and consultation, offering a safer communication medium with considerable cost and time savings [7]. Compromised medical data leads to ineffective diagnosis and consultation and can lend support to false-positive results. Thus [4, 8] revisit the deliverables of telemedicine services in rural or smaller regions [10, 12, 13].
Problem statement
Medical data transmission over current telemedicine infrastructure is performed on existing channels or spectrum with third-party optimization algorithms. These algorithms reduce the reliability of medical diagnosis: false-positive and false-negative conclusions are drawn from such interpreted datasets. Hence, channel optimization coupled with server-end processing and reverse-engineering algorithms for medical data samples is in research demand. Another major concern in dataset processing is the diversity of data types arising from advances in technology and computation. The proposed technique focuses on MooM data-type processing, covering medical images such as CT/MRI/PET, medical audio signals such as ECG/Phonocardiography (PCG), and text or log data types such as Electronic Health Records (EHR) and EEG logs.
Methodology
In the proposed technique, shown in Fig. 1, the MooM datasets are processed across multiple objectives and dimensions according to data type: medical images, audio signal samples, EHRs, and textual documents such as log files. These three forms of medical data are considered and processed. The processed data is transmitted via a purpose-designed transmission protocol, TelMED. The protocol aims to achieve dynamic user grouping and clustering for systematic resource allocation among users of the telemedicine environment or communication channel.
In the proposed methodology, an improved version of MooM data processing and the TelMED protocol is discussed under a remote online processing schema of evaluation: remote users submit samples via a rural center for medicine and healthcare through a technician, and the samples are then uploaded to a cloud/server environment. On successful upload, the medical datasets undergo MooM processing. Since the data is transmitted via a domestic channel, third-party optimization is applied in transit. The proposed system therefore performs a checksum on data uploaded via the TelMED infrastructure and further strengthens MooM processing to preserve the data's QoS over the telemedicine channel.
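The checksum step is not specified in detail in the text; as a minimal sketch, assuming a SHA-256 digest computed at the rural center before upload and re-verified on the server (the digest algorithm and function names are assumptions, not the paper's specification), the idea can be expressed as:

```python
import hashlib

def upload_checksum(payload: bytes) -> str:
    """Compute a digest for a data sample before it leaves the rural center."""
    return hashlib.sha256(payload).hexdigest()

def verify_upload(payload: bytes, expected_digest: str) -> bool:
    """Server-side check that the sample survived the low-QoS channel intact."""
    return upload_checksum(payload) == expected_digest

# Hypothetical sample frame uploaded by the technician.
sample = b"PCG-frame-0001"
digest = upload_checksum(sample)
assert verify_upload(sample, digest)                 # intact transfer accepted
assert not verify_upload(sample + b"\x00", digest)   # corrupted transfer rejected
```

Any sample whose digest fails to match would be re-requested from the rural center before MooM processing begins.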
Dynamic MooM dataset processing
As discussed in the previous section, the MooM datasets are processed under the TelMED protocol to assure QoS improvement at both ends of the communication, i.e. doctor and patient. The proposed technique comprises pre-processing, followed by MooM dataset-oriented processing with a checksum for measuring QoD on the transmission line, and cloud optimization algorithms.
Pre-processing of data samples
Consider a remote patient (R) uploading a data sample (D) under a primary diagnosis for expert consultation; the data (D) is collected via the system logs of the user-service provider (USP). The data is handled by a remote data collection unit placed at the lowest points of healthcare and transmitted via a third-party network provider. Prior to transmission, the technician or remote care person formally segregates the data samples for TelMED.
TelMED processing
Prior to uploading, the data is processed with the online TelMED protocol. TelMED applies a hybrid, dynamic resource allocation and sharing technique to establish a secure and reliable connection for transmission. Consider a dataset (D) uplifted for transmission under the TelMED protocol infrastructure; TelMED is designed on a dynamic Multiple Input Multiple Output (MIMO) based user clustering and classification framework, represented in a hexagonal cell-creation format. The dynamic user grouping (DUG) algorithm is represented in Eq. 1.
where UT is the user group under transmission bandwidth (λ) for a time interval (T), and Hi represents the hexagonal cell alignment with respect to the time interval (t). In summary, the overall TelMED framework can be represented as follows.
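As a rough illustration of the dynamic user grouping idea, assume users are assigned to the nearest hexagonal cell centre and the interval bandwidth (λ) is shared equally among the users grouped in each cell. The cell coordinates, the equal-share policy and all function names below are assumptions for the sketch, not the paper's DUG algorithm:

```python
import math
from collections import defaultdict

def dynamic_user_grouping(users, cells, bandwidth):
    """Assign each user to its nearest hexagonal cell centre, then share the
    interval bandwidth equally among the users grouped in that cell."""
    groups = defaultdict(list)
    for uid, (ux, uy) in users.items():
        nearest = min(cells, key=lambda c: math.dist((ux, uy), cells[c]))
        groups[nearest].append(uid)
    # Per-user bandwidth allocation within each cell for this time interval.
    return {cell: {u: bandwidth / len(members) for u in members}
            for cell, members in groups.items()}

# Hypothetical user positions and hexagonal cell centres.
users = {"u1": (0.0, 0.1), "u2": (0.2, 0.0), "u3": (5.0, 5.1)}
cells = {"H1": (0.0, 0.0), "H2": (5.0, 5.0)}
alloc = dynamic_user_grouping(users, cells, bandwidth=1.0)
# u1 and u2 share cell H1's bandwidth; u3 gets all of H2's.
```

Re-running the grouping at each interval (t) makes the allocation dynamic: users moving between cells automatically change each cell's per-user share.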
Thus, the data uploaded to the server is correlated and formulated with improved and optimized uploading techniques under the TelMED scheme; the medical data so processed remains secure and reliable for later optimization by third-party algorithms, generating dynamic strategies for data preservation on an open communication channel.
Medical dataset processing on MooM data-samples
Pre-processing and indexing
Telemedicine datasets are sensitive and are exposed to various third-party optimization algorithms and challenges such as bandwidth limits, multiple data uploads, repeated uploads of the same data, and manual errors. These errors cause image or data recursion and increase duplication and data-manipulation scenarios. Thus, in the proposed technique, a dedicated recursion-elimination algorithm is proposed and validated using a machine learning approach (KNN) to obtain better results over the remote diagnosis channel. The source data collection varies from one server or host to another. Consider a data sample (I) with unique features recorded under the database model; repeated versions of the same sample are searched for and validated, representing the data samples of a given server as I = {I1, I2, I3, ..., In}, where n is the dynamic indexing count of medical samples for the given server based on uploading standards [11].
Consider a scenario in which a medical sample If occurs against a location Lf with independent addresses, i.e. If ⊆ I, such that each I is indexed against an L and Lf ⊆ Ln ∈ L, where each L is a memory location indexing a sample. This reflects the overall system for recurrence detection and elimination, represented in Eq. 3.
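A minimal sketch of the recurrence detection and elimination step, assuming each sample is reduced to a numeric feature vector and a repeat is declared when a new vector lies within a distance threshold of an already-indexed one. The threshold, features and function names are illustrative, not the paper's KNN formulation:

```python
import math

def is_duplicate(index, candidate, threshold=0.05):
    """Flag a newly uploaded sample as a recursion (repeat) of an indexed one
    when its feature vector lies within `threshold` of any stored vector."""
    return any(math.dist(stored, candidate) <= threshold
               for stored in index.values())

def deduplicate(samples, threshold=0.05):
    """Build the redefined set R: keep each sample only on first occurrence."""
    index, kept = {}, []
    for sid, features in samples:
        if not is_duplicate(index, features, threshold):
            index[sid] = features
            kept.append(sid)
    return kept

# Hypothetical upload log: I2 is a near-identical re-upload of I1.
samples = [("I1", (0.10, 0.90)), ("I2", (0.10, 0.91)), ("I3", (0.70, 0.20))]
kept = deduplicate(samples)  # I2 is eliminated as a recursion of I1
```

A full KNN variant would replace the linear scan with a k-nearest-neighbour query over the server's index, which matters once the indexed set grows large.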
Medical image interpretation on optimized data
In the previous section, recursive medical data samples were extracted and a redefined set (R) was retrieved for a given transaction over the remote network. The next step is to validate the images by re-aligning misleading angular differences, such as images frozen at varying capture angles; such images are not detected by the pre-processing and indexing techniques. Since relative edge detection and pixel-density-based variation methods are appended, the registration technique demonstrates a higher order of reliability and accuracy.
Consider the secondary processing images (IS) over the defined, pre-processed set (R), such that each (IS ⊆ R)/R ⊆ If ∈ I. The interdependency of each image (IS) is fetched from the user (U) transaction. Assume k images are processed from (R); the region of extraction of the cover/reference image relative to those k images is given in Eq. 4.
where S is the set of registration image logs over k iterations, with repeated iterations giving S = {S1, S2, S3, ..., Sn}. For each S ∈ R ⇒ k ⊆ IS, on comparison with the reference images (RS), S should yield minimal error and maximum similarity, as in Eq. 5.
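The minimal-error / maximum-similarity criterion of Eq. 5 can be sketched as a search over candidate orientations scored against the reference. This toy version only searches 90° rotations and scores by mean absolute pixel error; the paper's actual registration uses edge detection and pixel-density variation, so this is a simplified stand-in with assumed function names:

```python
def mean_abs_error(a, b):
    """Average absolute pixel difference between two equal-sized grids."""
    return sum(abs(x - y) for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / (len(a) * len(a[0]))

def rotate90(img, times):
    """Rotate a 2-D grid clockwise by 90 degrees `times` times."""
    out = img
    for _ in range(times % 4):
        out = [list(row) for row in zip(*out[::-1])]
    return out

def register(candidate, reference):
    """Pick the candidate orientation with minimal error against the
    reference, i.e. maximal similarity in the sense of Eq. 5."""
    best = min(range(4),
               key=lambda k: mean_abs_error(rotate90(candidate, k), reference))
    return rotate90(candidate, best), best

# Toy 2x2 "images": the candidate is the reference frozen at a wrong angle.
ref = [[0, 1], [0, 1]]
cand = [[0, 0], [1, 1]]
aligned, turns = register(cand, ref)  # registration recovers the alignment
```

A realistic pipeline would search continuous angles and translations, but the selection rule — minimize the error against RS — is the same.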
For each image iteration, a consolidated optimized set is generated such that the consolidated data after processing is equivalent to the data on the channel. The retrieved images then undergo cancer lesion detection on the generalized, optimized set using open third-party algorithms, with synchronization as shown in Eq. 6.
where D is the consolidated data representation over HD (Huffman's distribution) for the optimized images. The tumor in a sample with pattern density (x) over Eq. 3 is then fetched as a difference against the reference image RS.
Medical signal interpretation on optimized data
Medical signals are compressed as interpretable audio signals (*.wav) and sensitive audio recordings of internal organs. In MooM, a series representation is demonstrated over Phonocardiography (PCG) and Electrocardiography (ECG) signals, and a frame of data is fetched over a low-line communication channel as represented in Eq. 7.
where (PS) is the feeder signal for communication over the low-line channel, computed on the signal length such that each signal is internally segregated and framed into smaller MOTIF units. Further, in the proposed scheme, the validation of Eq. 7 is moved ahead to the cloud/server environment for quality assurance under third-party optimization algorithms. The incoming signal SA is received and broken down into secondary data units by expanding the MOTIF signal (M), as in Eq. 8.
where each expanded unit of the signal is explored with respect to its MOTIF data at an internal saturation ratio Δ(xi) occurring in each motif pattern (M), i.e. M = {M1, M2, M3, ..., MSa}, where M ⊆ Sa. Typically, a motif (M) is extracted from the series as shown below.
where ∀PA = {PA1, PA2, PA3, ..., PAn} represents a generalized form of \( \sum \limits_{i=0}^n{\left({P}_A\right)}_i \), with each signal portion describing an attribute as in Eq. 10.
Thus, from Eq. 11, an interconnective expansion of the signal is achieved at the receiver end, expanding (PA) to the original form of SA with leng(PA). To validate the signal quality, the integrated quality variables are checked against the quality variables extracted at the server end, framing a look-up circuit for QoS enhancement as demonstrated below.
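The MOTIF framing and receiver-side expansion described above can be sketched as follows, assuming fixed-length motif units and an exact-match quality check. The unit length, sample values and QoS test are illustrative assumptions, not the paper's Eq. 7–11 formulation:

```python
def split_into_motifs(signal, unit):
    """Frame the incoming signal S_A into smaller MOTIF units M1..Mn."""
    return [signal[i:i + unit] for i in range(0, len(signal), unit)]

def reassemble(motifs):
    """Receiver-side expansion of the MOTIF units back to the original S_A."""
    return [x for m in motifs for x in m]

def quality_ok(original, rebuilt):
    """Look-up style QoS check: the rebuilt signal must match the transmitted
    one in both length (leng) and sample values."""
    return len(rebuilt) == len(original) and rebuilt == original

# Hypothetical eight-sample PCG frame, framed into motifs of three samples.
sa = [0.0, 0.2, 0.5, 0.3, -0.1, -0.4, 0.1, 0.0]
motifs = split_into_motifs(sa, unit=3)
assert quality_ok(sa, reassemble(motifs))  # lossless round-trip
```

A lossy channel would make the final check fail for corrupted motifs, which is exactly the condition the server-end QoS look-up is meant to detect.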
Medical text-records interpretation on optimized data
The other common form of medical data representation is textual, such as data files, Comma Separated Value (CSV) files, EHRs, and PDF files based on EEG data logs. In this modern era, such data typically involves large files requiring continuous value extraction and mining of hidden patterns or information. In the proposed terminology, the data files are correlated and processed with a machine learning based KNN approach to cluster and validate the most relevant information segments from the considered file.
Typically, a file (FS) containing EEG log information is considered, such that (FS) is segmented under the optimization algorithm as FS = {(FS)1, (FS)2, (FS)3, ..., (FS)n}, with the nth segment marking the highest order of file division. Following the generalized KNN terminology, the nearest neighbors are appended for clustering as follows.
where each file records an End of File (EoF) operation pointer and file segments at SK intervals, i.e. SK ≤ ∗EoF ⇒ (FS) ⊆ F, with F the unprocessed medical data file and FS the file processed over the low-line third-party communication channel. On expansion, the QoS of each file segment is evaluated and compared to FS with respect to F, such that (FS ≈ F) ≤ (0.732/2π), where the value is equivalent to the received F. For an optimized transaction, i.e. clustered data C = {C1, C2, C3, ..., Cn}, the rebuilding of FS in the experimental environment is evaluated as in Eq. 14.
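The segment-cluster-rebuild cycle can be sketched as follows, assuming fixed-size line segments and a toy positional clustering standing in for the paper's KNN step. The centroid scheme, segment size and function names are assumptions for illustration:

```python
def segment_file(lines, size):
    """Split the raw medical log F into segments (F_S)1..(F_S)n of `size`
    lines, keeping each segment's original position so F can be rebuilt."""
    return [(i, lines[i:i + size]) for i in range(0, len(lines), size)]

def cluster_segments(segments, k=2):
    """Toy stand-in for the KNN grouping step: assign each segment to the
    nearest of k evenly spaced positional centroids."""
    n = len(segments)
    centroids = [j * (n - 1) / max(k - 1, 1) for j in range(k)]
    clusters = {j: [] for j in range(k)}
    for idx, (pos, seg) in enumerate(segments):
        nearest = min(range(k), key=lambda j: abs(idx - centroids[j]))
        clusters[nearest].append((pos, seg))
    return clusters

def rebuild(clusters):
    """Reverse-engineer the original file order from the clustered segments."""
    ordered = sorted((pos, seg) for c in clusters.values() for pos, seg in c)
    return [line for _, seg in ordered for line in seg]

# Hypothetical seven-line EEG log, segmented into two-line chunks.
log = [f"eeg,ch1,{v}" for v in range(7)]
segs = segment_file(log, size=2)
assert rebuild(cluster_segments(segs, k=2)) == log  # lossless round-trip
```

Because each segment carries its original offset, the rebuild step recovers F exactly regardless of how the clustering partitioned the segments.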
By reverse-engineering the optimized or clustered data (Ci), the original file (F), or a close reconstruction of it, is retrieved from FS. For experimental purposes, the EEG-based RT-BIRD algorithm is proposed for optimization and retrieval extraction.
Discussion and results
The proposed methodology implements optimized data processing using a trivial machine learning technique to recover the original behavior of the master file/record. In the proposed system, a detailed study is conducted on various file systems within the TelMED protocol architecture to retrieve MooM datasets over a low-band communication channel with third-party optimization. First, the medical images are retrieved with improved QoS; the higher orders of registration and rebuilding are tabulated in Table 1.
Processing of medical image samples
Figure 2a, b and c depict the reference images of five (5) sub-samples each, considered for registration, and Fig. 3 demonstrates the registration processes involved in image alignment and point coordination to retrieve the best-matching image root. On appending the results to a third-party or generalized [4] image processing technique, tumor detection accuracy shows a distortion rate of 2.456, i.e. a resemblance of 97.5% on average for the considered data samples (Figs. 4 and 5).
Processing of medical signal samples
Medical signal data samples include Phonocardiography (PCG) signals acquired via nodal headphones in the 0–250 kHz range, as demonstrated in Figs. 6 and 7. Figure 6 demonstrates normal signal processing of heart samples acquired via the PCG technique and hardware setup, and Fig. 7 represents an abnormal acquired signal.
Processing of medical text/log file samples
Medical data is most commonly represented as text and electronic health records. Hence, the data considered for evaluation is sampled from various EEG log records and structured to form the data pattern represented in Fig. 8: a virtual signal-regeneration pattern for physical data evaluated in logical format (Figs. 9, 10, 11, 12, 13, 14 and 15) [9].
Conclusion
Telemedicine is a next-generation communication paradigm, enhancing e-health and modernization with medical instrumentation. The proposed technique focuses on medical data regeneration and interpretation under third-party optimization algorithms and low-line transmission channels. The technique performs dynamic processing via the TelMED protocol and retrieves Multi-Objective Optimized Medical (MooM) data types. Notably, MooM data types are processed here for the first time collectively, on a single platform, within one run-time environment.
The proposed technique aims to improve Quality of Service (QoS) and Quality of Data (QoD) under the supervision of machine learning algorithms. The technique has successfully processed and synchronized a higher order of QoS and QoD for various MooM data types, including images (CT/PET/MRI), audio (PCG/ECG) and text/log files (EHR/EEG), with demonstrated performance on third-party optimized data. In the near future, the technique can be extended with additional MooM data types such as 3D imagery, ultra-graphic records, nano-scale imagery and more.
References
Syed Thouheed Ahmed, S., Sandhya, M., and Shankar, S., ICT’s role in building and understanding Indian telemedicine environment: A study. In: Fong, S., Akashe, S., Mahalle, P. (Eds), Information and Communication Technology for Competitive Strategies. Lecture Notes in Networks and Systems, vol 40. Singapore: Springer, 2019, 391–397.
De la Torre Díez, I. et al., Systematic review about QoS and QoE in telemedicine and eHealth services and applications. J. Med. Syst. 42(10):182, 2018.
Chorbev, I. and Mihajlov, M., Building a wireless telemedicine network within a wimax based networking infrastructure. 2009 IEEE International Workshop on Multimedia Signal Processing. IEEE, 2009.
Patil, K. K., and Ahmed, S. T., Digital telemammography services for rural India, software components and design protocol. 2014 International Conference on Advances in Electronics Computers and Communications. IEEE, 2014.
Ahmed, S. S. T., Thanuja, K., Guptha, N. S., and Narasimha, S., Telemedicine approach for remote patient monitoring system using smart phones with an economical hardware kit. 2016 international conference on computing technologies and intelligent data engineering (ICCTIDE'16): 1-4. IEEE, 2016.
Scott Kruse, C., Karem, P., Shifflett, K., Vegi, L., Ravi, K., and Brooks, M., Evaluating barriers to adopting telemedicine worldwide: A systematic review. J. Telemed. Telecare 24(1):4–12, 2018.
Russo, J. E., McCool, R. R., and Davies, L., VA telemedicine: An analysis of cost and time savings. Telemed. e-Health 22(3):209–215, 2016.
Zachrison, K. S., Boggs, K. M., Hayden, E. M., Espinola, J. A., and Camargo, C. A., A national survey of telemedicine use by US emergency departments. J. Telemed. Telecare, 2018: 1357633X18816112.
Ahmed, S. T., Sandhya, M., and Sankar, S., An optimized RTSRV machine learning algorithm for biomedical signal transmission and regeneration for telemedicine environment. Proc. Comput. Sci. 152C:140–149, 2019.
Joseph Manoj, R., Anto Praveena, M. D., and Vijayakumar, K., An ACO–ANN based feature selection algorithm for big data. Cluster Computing, 2018. https://doi.org/10.1007/s10586-018-2550-z.
Thouheed, A. S., and Sandhya, M., Real-time biomedical recursive images detection algorithm for Indian telemedicine environment. In: Mallick, P., Balas, V., Bhoi, A., Zobaa, A. (Eds), Cognitive informatics and soft computing. Advances in intelligent systems and computing, vol 768. Singapore: Springer, 2019.
Vijayakumar, K., and Arun, C., Integrated cloud-based risk assessment model for continuous integration. Int. J. Reasoning-based Intell. Syst. 10(3/4):316–321, 2018.
Vijayakumar, K., Suchitra, S., and Swathi Shri, P., A secured cloud storage auditing with empirical outsourcing of key updates. Int. J. Reasoning-based Intell. Syst. 11(2):109–114, 2019.
Funding
This study was not funded by any organization.
Ethics declarations
Ethical Approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article is part of the Topical Collection on Image & Signal Processing
Cite this article
Ahmed, S.T., Sandhya, M. & Sankar, S. A Dynamic MooM Dataset Processing Under TelMED Protocol Design for QoS Improvisation of Telemedicine Environment. J Med Syst 43, 257 (2019). https://doi.org/10.1007/s10916-019-1392-4