1 Introduction

Technology's constant evolution has produced novel methods, implementations, and smart machines [1]. Various fields, including farming, manufacturing, and healthcare, have benefited significantly from these state-of-the-art techniques. The Internet of Things (IoT) is one of the most revolutionary technologies of the modern era. Connectivity among miniature electronic devices is a key feature of IoT technology [2]. The healthcare industry has benefited greatly from IoT-enabled devices owing to their ability to collect data in real time, analyze data to provide more precise outcomes, make medical records more easily accessible, and counter epidemics [3]. Conspicuously, IoT plays a crucial role in healthcare by facilitating the interconnection of all consumer-facing smart health monitoring technologies [4]. As a result, the increased efficiency of IoT-inspired health systems helps improve decision-making services and decrease mortality rates [5].

Fig. 1 Rise of DT technology in US Healthcare

Problem description

Diabetes, cholesterol problems, heart-related illnesses, and colon disease are only some of the significant health disorders that have been linked to a lack of physical activity and irregular routines in adults [6]. According to research by the United Nations Department of Economic and Social Affairs, the number of people aged 60 and older might reach 1.9 billion by the year 2050. It is therefore clear that the elderly would consume approximately half of all medical resources. However, the bulk of today's medical assets rely on outdated monitoring techniques, which severely restricts the real-time distribution of medical resources [7]. In addition, the US Institute of Medicine found that medical errors, including failure to access a patient's historical data, missed or delayed assessments, and contaminated data, are among the leading causes of death around the globe [8]. A person's care requirements increase with age; therefore, caring for an elderly person should be a continuous process [9]. As a result, measuring the extent of inactivity and anticipating abnormal bodily occurrences is one of the most pressing questions in the field of smart healthcare. Smart sensors and the internet have been shown to promote interoperability in the healthcare industry [10]. Smart healthcare services have come a long way, yet the current healthcare solutions still have the following limits.

1. There is no consistent contact between patients and healthcare facilities.

2. There is still work to be done to completely integrate smart medical systems and physical systems.

3. There is a need for improvement in the monitoring of older patients in terms of data management, security, and the timely delivery of alerts.

4. Existing systems do not provide lifecycle-based personal health management services for the elderly.

1.1 Research inspiration

To overcome the aforementioned drawbacks, it is necessary to keep the real world and the digital world interacting properly. The DT, first introduced by NASA, is ranked by the IEEE Computer Society as the third most innovative technology of 2020. Fig. 1 shows the rise of DT technology in the US healthcare sector. The term "DT" describes the technique of creating an exact digital copy of a physical thing [11]. IoT technology backed by Artificial Intelligence (AI), Virtual Reality (VR), and other cutting-edge technologies may be combined with physical objects and their corresponding digital replicas in DT, expanding the field's applicability. DT technology may be employed to create blueprints for intelligent machines. When it comes to improving health operations, control, and promotion, a virtual replica of a person can be an optimal approach [12]. The simulation can aid medical staff in keeping track of a patient's present health condition. Additionally, it may be possible to predict future patterns by examining the past [13]. The purpose of the presented research is to create a smart IoT-inspired event perception framework that forecasts physical abnormalities of an elderly person in a time-sensitive manner by coordinating DT with medical treatment. Additionally, blockchain technology may be used to keep patients' records more private and secure.

Fig. 2 Conceptual framework

1.2 Research contribution

Figure 2 depicts the basic structure of the proposed solution. Moreover, the specific state-of-the-art research contributions are as follows:

1. The current research presents an intelligent framework to analyze an elderly person's physical activity for smart healthcare by incorporating DT and blockchain.

2. The proposed framework gathers ubiquitous information on the adult's physical activities.

3. The DT layer of the system architecture is used for data preparation and pattern unification.

4. Cloud-based deep learning analysis of inactivity and anomalies in physical behavior is performed.

5. Patient health records are persistently tracked using blockchain technology.

Article organization

Section 2 presents crucial research related to the current domain of study. Section 3 explains the proposed architecture's parameters in detail. The calculations and implementation specifics are discussed in Section 4. Finally, Section 5 concludes the paper by discussing the future scope and challenges of DT technology in healthcare.

2 Related research

There have been several initiatives in the last few years to improve data monitoring procedures using secure smart solutions. Some of the relevant works are discussed below.

2.1 DT-inspired innovative approaches

The healthcare field has seen several efforts to use DT, with Mohamed et al. [14] proposing a reference healthcare framework built on DT's strengths. The authors prioritized reproducible observations and predictions by using self-adaptation and autonomic computation benchmarks. Moreover, they evaluated the suggested approach with a motivational case study on handling diabetes-related health irregularities. A DT-inspired system was presented by Narasimhaiah et al. [15]. To analyze data and efficiently administer medical services, the authors used cloud computing and the DT paradigm. In addition, two case studies were conducted to assess the viability of the strategy. Unfortunately, there was no evaluation of the contextual experiments' processes or outcomes. To address emergency medical situations, Haleem et al. [16] suggested using DT. The authors used a simulation framework based on discrete events to handle a crisis. By continually using data without disrupting regular procedures, an ML-based prediction model improved the healthcare process. The suggested solution's viability in a variety of scenarios was evaluated using the FlexSim software. Using the idea of DT technology in the field of smart manufacturing, Cheng et al. [17] suggested a semi-physical commissioning technique. The authors developed a new paradigm for achieving sustainability in smart manufacturing systems by integrating the DT method with the open architecture approach. A case study confirmed the solution's promising results, demonstrating its high level of precision. For fast reconfiguration of automated production systems, Abadi et al. [18] presented a DT-driven solution. The data mapping module received the optimized findings and used them for verification. The suggested method was tested in practice, and the results were positive. It significantly improved performance while reducing overhead costs by automating and rapidly optimizing the reconfiguration process. Leng et al. [19] suggested a DT-based product-service approach for warehouses. The DT collects data in real time from the actual warehouse and maps it to the virtual model. The creation of a proof-of-concept model and its verification were studied using a tobacco warehouse. Moreover, the computed results enabled an effective product-service strategy.

Table 1 Comparative assessment

2.2 IoT-cloud-based intelligent solution

To provide a productive ecosystem for smart healthcare, Butpheng et al. [20] suggested an IoT-cloud-assisted intelligent management architecture. The authors also focused on the idea of medical asset optimization and redistribution. In addition, an IoT-cloud-based smart health architecture was presented by Manocha et al. [21] to store individual medical histories using virtualization, so that patients could save physical health data digitally and share them with stakeholders when needed. In Thyagaturu et al. [22], the authors suggested another cloud-based system for delivering real-time and all-encompassing medical services to patients. The suggested solution was able to handle unstructured and semi-structured heterogeneous input data relating to the patient's physical characteristics. Another framework for real-time collection and processing of a person's health indicators was suggested by Chandrasekar et al. [23], which uses biosensors and cloud computing. McCradden et al. [24] suggested an intelligent personal healthcare framework for the commercial and practical sectors. The Seoul National University Hospital [25] created a private cloud database using a cloud-based Virtual Desktop Framework. To construct a platform for exchanging information across various healthcare frameworks in China, Huawei eHealth rolled out its Smart Health Cloud solution. In addition, cloud-focused businesses, such as Ali Healthcare, provided a variety of cloud-based medical solutions, including Graded Diagnosis, Cloud Medicine, and Telemedicine Cloud. A blockchain-inspired autonomous method was presented by Xiong et al. [26] to increase manufacturing responsiveness and timeliness. The multi-agent framework relies on the principle of intelligent contracts to coordinate and negotiate the completion of tasks. The suggested technique was tested in a variety of production settings by creating a prototype and conducting trials. To guarantee efficient task coordination in disruptive environments, Nie et al. [27] implemented a blockchain-based multi-agent system. To back up the rescheduling choices, data regarding events and decision-making are gathered at the edge and used to construct a DL-driven prediction framework. Based on the comprehensive literature review, Table 1 shows the comparative analysis with state-of-the-art research works.

3 Proposed model

The major objective of the current research is to offer a context-aware health monitoring system that enhances the overall quality of intelligent health-oriented decision-making by using the data-stimulative technique of DT. The suggested approach uses the idea of DT technology to create a digital clone of the patient under observation, enabling medical professionals to analyze all potential scenarios in real time. Figure 3 depicts the layered structure of the proposed framework. DT provides novel approaches to include high-stakes situations without directly engaging the person. Table 2 further details the necessary hardware and software to run the framework.

Fig. 3 Proposed layered framework

Table 2 System specifications
Table 3 DFS parameters

3.1 Sensing and transmitting data

The initial layer is in charge of collecting information about people's physical activity. In the current study, wearable sensors collect useful data on angular position, acceleration, and velocity. Table 3 provides a comprehensive breakdown of the wearable sensors and the intended forms of exercise. The recorded data is sent to the cloud layer to study out-of-the-ordinary physical phenomena. Because the nature of the events is dynamic and their duration is not known in advance, a stationary time interval approach is adopted to partition the seamless data signal into variable blocks and anticipate the event type and nature.

Definition 1: Block formation

There are four parameters in an event block: T(s), R(s), R(m), and T(e). Here, R(s) characterizes the kind of signal captured by the sensor, and R(m) characterizes the kind of event. T(s) and T(e) mark the beginning and ending times of an event associated with a certain time instant, respectively. The duration of the signal associated with a given time instant is computed as T = T(e) - T(s).
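For illustration, a minimal Python sketch of the event block described in Definition 1 is shown below; the class and field names are hypothetical and simply mirror the four parameters T(s), T(e), R(s), and R(m).

```python
from dataclasses import dataclass

@dataclass
class EventBlock:
    """Event block with the four parameters of Definition 1."""
    t_start: float   # T(s): beginning time of the event
    t_end: float     # T(e): ending time of the event
    r_signal: str    # R(s): kind of signal captured by the sensor
    r_event: str     # R(m): kind of event

    @property
    def duration(self) -> float:
        # T = T(e) - T(s): duration of the signal for this time instant
        return self.t_end - self.t_start

# Example: a 0.6 s block of accelerometer data labelled as walking
block = EventBlock(t_start=12.0, t_end=12.6, r_signal="acceleration", r_event="walking")
print(block.duration)  # 0.6 (approximately, subject to floating-point rounding)
```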

Various adapters including mobile phones and gateways are employed to transmit data collected by the wearable sensors used in the proposed research. 6LoWPAN, WiFi, and 5G/6G are some of the communication technologies that can be used to transmit data to the cloud. ActivPAL sensors are used over WiFi to get access to and transmit the data. Moreover, TCP (Transmission Control Protocol) is used for data transfer between entities. Several services may be accessible through middleware layers, including data analysis, user administration, remote consultation, and remote monitoring.
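As a simple illustration of this data transfer step, the sketch below pushes a recorded event block to a cloud endpoint over TCP using Python's standard library; the host, port, and JSON payload layout are assumptions and are not prescribed by the proposed framework.

```python
import json
import socket

def send_event_block(block: dict, host: str = "cloud.example.org", port: int = 9000) -> None:
    """Serialize an event block as JSON and transmit it over a TCP connection."""
    payload = json.dumps(block).encode("utf-8")
    with socket.create_connection((host, port), timeout=5) as sock:
        # Length-prefix the payload so the receiver knows how many bytes to read.
        sock.sendall(len(payload).to_bytes(4, "big") + payload)

# Example usage with a hypothetical accelerometer block
send_event_block({"t_start": 12.0, "t_end": 12.6,
                  "r_signal": "acceleration", "values": [0.1, 0.3, 0.2]})
```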

3.2 Fusion of data

It is vital to construct a mathematical DT model using an ML technique. Information on the physical actions performed on the real and virtual objects forms the basis of the suggested DT. The suggested technique for training via DT requires the processing of the raw time series patterns originally collected by the wearable sensors. As such, the layer of the proposed method devoted solely to the DT model executes a crucial data preparation phase: unification of the time series data.

Harmonization of routines

Captured physical activity data from wearable sensors generally contain the continuous properties stated in Table 3 and are gathered at the corresponding frequencies. To establish uniformity between data frequencies, the attributes must be brought to a common, reduced frequency using an expansion or compression technique. As stated in Definition 2, unification between activity patterns is achieved with the aid of the deep learning technique.

Fig. 4 Algorithm 1: event pattern unification

Definition 2: Pattern unification

Let Y(1) and Y(2) be two sets of physical activity patterns recorded for a given event at distinct frequencies, with T(1) and T(2) as the corresponding sampling intervals. In the current case, Y(2) is selected as the standard measure of activity size. Consequently, the new Y(1) data dimension will be derived from Y(2) and synchronized with the Y(2) data sample frequency.

Mathematically, the procedure for pattern unification is formulated as

$$\begin{aligned} Y'(1j) = Y(1m) + \left( Y(1(m+1)) - Y(1m)\right) \times \frac{(T(2) \times j) - (T(1) \times m)}{T(1)} \end{aligned}$$

The baseline time series data patterns are denoted by Y(1) and Y(2). The given equation has two reference values, Y(1m) and Y(1(m+1)), which are used as starting points. After locating the reference values, the weighted average approach is applied according to the ratio of the distances along the time axis between Y(1m), the new point Y'(1j), and Y(1(m+1)). Also,

$$\begin{aligned} m = \frac{T(2) *j}{T(1)} \end{aligned}$$

If the frequencies of Y(1) and Y(2) are different from one another, the frequency ratio of T(1) and T(2) is a crucial indicator for determining the new reference point of Y’(1) in the original Y(1) curve. Algorithm 1 (Fig. 4) explains the whole procedure for pattern unification. Once data fusion and unification have been performed, the resulting events are sent to the cloud layer for further analysis of the events’ types and the prediction of any underlying physical irregularities. Prediction is used in this way to provide a wide range of healthcare services, including continuous monitoring, assistance with medications, and data mining.
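A minimal Python sketch of this unification step, following the interpolation formula above, is shown below; the function name and the example sampling intervals are assumptions and do not reproduce Algorithm 1 verbatim.

```python
import numpy as np

def unify_pattern(y1: np.ndarray, t1: float, t2: float, n_out: int) -> np.ndarray:
    """Resample series Y(1), sampled with interval T(1), onto the T(2) grid of Y(2)."""
    y1_new = np.empty(n_out)
    for j in range(n_out):
        m = int((t2 * j) / t1)             # reference index in the original Y(1) curve
        m_next = min(m + 1, len(y1) - 1)   # clamp at the series boundary
        frac = ((t2 * j) - (t1 * m)) / t1  # distance ratio along the time axis
        y1_new[j] = y1[m] + (y1[m_next] - y1[m]) * frac
    return y1_new

# Example: map a series sampled every 0.02 s onto a coarser 0.04 s grid
y1 = np.sin(np.linspace(0, 2 * np.pi, 100))
print(unify_pattern(y1, t1=0.02, t2=0.04, n_out=50)[:5])
```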

Fig. 5 Temporal abstraction framework

3.3 Temporal data analysis

A unique hybrid method, inspired by deep learning, is proposed for physical irregularity detection and is used to identify the occurrence of abnormal physical events in a monitored person. Figure 5 details the conceptual logic behind the suggested approach.

Representing a physiological occurrence

The ability to correctly identify a single physical event is seen as a crucial part of the suggested approach. To analyze input signals and extract features for event prediction, 1D Convolutional Neural Networks (CNNs) are considered. A CNN consists of many convolutional and pooling layers that are trained to extract features for the events generated by them. Therefore, the presented approach analyzes raw data signals in real time using feature extraction and learning. Convolutional layers use several filters to assess local patterns in the recorded signals, while pooling layers provide a compact representation of those physical patterns. Single- or multi-channel data signals may be extracted from transient patterns using conventional CNN-based techniques. However, CNNs have a weakness when it comes to analyzing extended temporal patterns associated with a variety of physical activities, such as jogging and ascending stairs. Hence, the current research proposes a hybrid technique that augments the CNN with Gated Recurrent Units (GRUs).

Modelling of time-varying features

A CNN classifies events by category without considering the temporal context. However, the assumption of time independence does not hold in the field of healthcare, since physical occurrences are strongly dependent on the passage of time. To overcome the drawbacks mentioned above, the notion of sequential modeling may be used to analyze the kind of physical event using GRUs. By combining the concepts of GRU and CNN, the presented model performs event analysis more reliably while using less training data. As a result, a CNN-BiGRU hybrid solution, which computes the consecutive connections between data samples, is incorporated to recognize a sequential physical event.

By feeding the GRU network the features retrieved by the CNN model, represented as Y = [y(1), y(2),..., y(n)], sequential feature modeling is performed. Here, y(i) is the current event state, and Y is the feature matrix. In this way, the GRU network receives, at a given time instance t(i), a set of Y(n) feature matrices from the time module T.

$$\begin{aligned} r(t)= \pi (X(r)y(n) + V(r)g(t-1)+ a(r)). \end{aligned}$$

Here r(t) is the reset gate of the GRU cell with the capability of data selection.

$$\begin{aligned} \pi (y)= \frac{1}{1+e^{-y}} \end{aligned}$$

Similarly, X(r) and V(r) are the respective GRU weights. Furthermore, the alternative cell c(t) is deduced by consolidating the reset gate r(t) and the last state g(t-1), with y(t) as the input state.

$$\begin{aligned} c(t)=\tanh (X(y(t))+(V(r(t) \circ g(t-1))+a)) \end{aligned}$$
Fig. 6 Algorithm 2: time-sensitive alert generation

where \(\circ \) denotes element-wise multiplication. In addition, when the value of the reset gate r(t) is near 0, the alternative cell value c(t) is updated by forgetting the prior state g(t-1).

$$\begin{aligned} \tanh (y)&= \frac{e^{y}- e^{-y}}{e^{y} + e^{-y}}\\ v(t)&= \pi (X(v)y(t) +V(v)g(t-1)+ a(v)) \end{aligned}$$

The current state g(t) is calculated with the aid of update gate v(t), which, after updating the state of alternative cell c(t), consolidates the alternate state c(t) concerning the cell’s prior state, g(t-1).

$$\begin{aligned} g(t) = v(t) \circ g(t-1) + (1-v(t)) \circ c(t) \end{aligned}$$
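For illustration, the gate equations above can be traced with a minimal NumPy sketch of a single GRU step; the weight shapes and variable names are assumptions chosen only to mirror the notation r(t), v(t), c(t), and g(t).

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def gru_step(y_t, g_prev, Xr, Vr, ar, Xv, Vv, av, Xc, Vc, ac):
    """One GRU step following the reset-gate, update-gate, and candidate-cell equations."""
    r_t = sigmoid(Xr @ y_t + Vr @ g_prev + ar)          # reset gate r(t)
    v_t = sigmoid(Xv @ y_t + Vv @ g_prev + av)          # update gate v(t)
    c_t = np.tanh(Xc @ y_t + Vc @ (r_t * g_prev) + ac)  # alternative cell c(t)
    return v_t * g_prev + (1.0 - v_t) * c_t             # current state g(t)

# Toy example with 4 input features and 3 hidden units (random weights)
rng = np.random.default_rng(0)
Xr, Xv, Xc = (rng.standard_normal((3, 4)) for _ in range(3))
Vr, Vv, Vc = (rng.standard_normal((3, 3)) for _ in range(3))
ar, av, ac = (np.zeros(3) for _ in range(3))
g = gru_step(rng.standard_normal(4), np.zeros(3), Xr, Vr, ar, Xv, Vv, av, Xc, Vc, ac)
print(g.shape)  # (3,)
```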

The BiGRU network's final layer is modified by adding fully connected layers with as many output classes as there are physical events in the planned research. The Softmax function is used to generate the outcomes Z(t) for event determination.

$$\begin{aligned} Z(t) = softmax (X(z)g(t)) \end{aligned}$$

The fully connected layers' final weight estimates are represented here by X(z). In addition, a cross-entropy-based error computation approach is employed, which measures the difference between the calculated result Z(t) and the actual outcome Q(t).

$$\begin{aligned} L[Z(t), Q(t)] = - \sum _{i=1}^{m} Q_i(t)\log (Z_i(t)) \end{aligned}$$

The Adam stochastic optimizer is used to improve the hidden layers' estimated weights by adjusting the learning rate following the BackPropagation Through Time (BPTT) approach.
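As an illustration of the hybrid architecture and training setup described above, a minimal Keras sketch is given below. It reuses the layer counts and sizes reported later in Section 4.1 (three convolutional layers, 130 feature maps, 1x9 kernels, 1x5 pooling, 128 BiGRU cells, and two fully connected layers totaling 2048 nodes), while the window length, channel count, and number of event classes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_bigru(window_len=60, n_channels=3, n_classes=6):
    """CNN feature extractor followed by a bidirectional GRU and a softmax event classifier."""
    model = models.Sequential([
        layers.Input(shape=(window_len, n_channels)),
        # Three convolutional layers with 130 feature maps and kernel size 9
        layers.Conv1D(130, kernel_size=9, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=5, padding="same"),
        layers.Conv1D(130, kernel_size=9, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=5, padding="same"),
        layers.Conv1D(130, kernel_size=9, padding="same", activation="relu"),
        # Bidirectional GRU with 128 cells models the temporal context
        layers.Bidirectional(layers.GRU(128)),
        # Two fully connected layers (2 x 1024 = 2048 nodes) before the softmax output
        layers.Dense(1024, activation="relu"),
        layers.Dense(1024, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_cnn_bigru()
model.summary()
```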

Fig. 7 Algorithm 3: customized RBFT and block chaining

3.4 Real-time service

Traditional smart healthcare may be broken down into four stages: (a) data block generation; (b) data analysis; (c) assessment of outcomes; and (d) meaningful decisions. To achieve the goal of continuous health management, the proposed framework provides a real-time health-assistive environment in which individual health data is managed with the use of blockchain technology. Since the higher layer is responsible for predicting any abnormal physical occurrence, the current approach uses a severity analysis procedure to alert the carer or medical representative in real time. Algorithm 2 (Fig. 6) presents the whole procedure of severity analysis, which produces a real-time warning.
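A minimal Python sketch of the severity-analysis and alerting idea is shown below; the severity scores, threshold, event names, and notification function are hypothetical and do not reproduce Algorithm 2.

```python
# Hypothetical severity scores per predicted event type (not taken from Algorithm 2)
SEVERITY = {"inactivity": 0.4, "fall": 0.95, "irregular_gait": 0.7, "walking": 0.0}
ALERT_THRESHOLD = 0.6

def notify_caregiver(event: str, score: float) -> None:
    """Placeholder for a real-time alert channel (SMS, push notification, etc.)."""
    print(f"ALERT: {event} detected with severity {score:.2f}")

def severity_analysis(predicted_event: str) -> None:
    """Raise a real-time warning when the predicted event exceeds the severity threshold."""
    score = SEVERITY.get(predicted_event, 0.0)
    if score >= ALERT_THRESHOLD:
        notify_caregiver(predicted_event, score)

severity_analysis("fall")  # -> ALERT: fall detected with severity 0.95
```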

Data block generation

Information about the projected irregular occurrence is recorded on the blockchain once the suggested solution has identified it. Data Aggregation (DA) is the key idea behind the consolidation of several transactions into a single block. Since a consortium blockchain is used, further verification steps are unnecessary. The use of blockchain technology in the suggested solution is motivated by a desire to aid clinical decision-making for the elderly. Additionally, all participants in the process, including healthcare authorities, drug providers, and resource managers, may use the information contained in the blockchain. The suggested method involves the occasional creation of blocks. However, the presented study may be expanded to investigate other issues, such as compromised peers.
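For illustration, a minimal Python sketch of aggregating several event transactions into a hash-chained block is shown below; the block fields and hashing scheme are assumptions intended only to convey the Data Aggregation idea.

```python
import hashlib
import json
import time

def create_block(transactions: list, previous_hash: str) -> dict:
    """Aggregate several irregular-event transactions into one block and chain it by hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,    # aggregated event records (Data Aggregation)
        "previous_hash": previous_hash,  # link to the preceding block
    }
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# Example: two predicted irregular events aggregated into one block
genesis_hash = "0" * 64
block = create_block(
    [{"patient": "P-01", "event": "fall", "severity": 0.95},
     {"patient": "P-01", "event": "inactivity", "severity": 0.40}],
    previous_hash=genesis_hash,
)
print(block["hash"][:16])
```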

Block chaining

In the suggested architecture, the consortium network is implemented using Reputation-based Byzantine Fault Tolerance (RBFT). RBFT enhances security, dependability, decentralization, and scalability. The suggested method uses a modified RBFT to reach a consensus-based judgment. The validator then announces block C(i) to the network, completing the blockchain. Individuals inside the blockchain network sign the block's header after approving the transaction. Each signature, tied to the identity that created it, counts as a vote. High throughput and low latency are achieved with the aid of the consortium blockchain. Moreover, the vote-based approach helps to combat 51% attacks. Algorithm 3 (Fig. 7) explains the whole consensus procedure in detail. In the algorithm, N represents the total number of consortium members, and a unique symbol defines the consortium members' verification signature on block C(i).
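A minimal Python sketch of the vote-based acceptance check is shown below; the two-thirds quorum rule and the data structures are assumptions used only to illustrate the consensus idea, not the exact customized RBFT of Algorithm 3.

```python
def block_accepted(signatures: set, consortium_members: set) -> bool:
    """Accept block C(i) once enough consortium members have signed its header.

    A common BFT-style quorum of more than two-thirds of the N members is assumed here.
    """
    valid_votes = len(signatures & consortium_members)  # each valid signature counts as a vote
    n = len(consortium_members)
    return valid_votes > (2 * n) // 3

members = {"hospital_A", "hospital_B", "pharmacy_C", "insurer_D"}
print(block_accepted({"hospital_A", "hospital_B", "pharmacy_C"}, members))  # True (3 of 4 votes)
```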

4 Experimental simulation

This section delves into the specifics of the proposed solution in terms of its ability to recognize events, train, test, and validate models, and handle data quickly and cost-effectively. Python's Sklearn package, TensorFlow, and Keras were used for the experimental simulation. NumPy, Pandas, and Matplotlib are a few of the libraries used for preparing data and calculating outcomes. In addition, a Python interface is used to drive the RBFT implementation written in C++. Combining blockchain with DT for determining rare physical events is uncharted territory, so no directly comparable solution exists. Following is a breakdown of the trial design, key performance indicators, and findings.

1. Evaluation of Performance in Motion Recognition

2. Verifying the Precision of Trained Models

3. Examining the Latency Rate

4. Quantifying the Expense of Data Processing

4.1 Evaluation of prediction capability for motion

The presented prediction method for motion detection is assessed using (a) Optimisation of Hyperparameters; (b) The Optimal Period for Recording Events; (c) Accuracy in predicting events globally.

Fig. 8 CNN layer detection accuracy (L1 Layer 1; L2 Layer 2; L3 Layer 3; L4 Layer 4)

Fig. 9 Optimal neuron detection accuracy (L1 Layer 1; L2 Layer 2; L3 Layer 3)

4.1.1 Optimisation of hyperparameters

To determine the practical setup for event identification, several hyperparameters are chosen and applied to the suggested technique. The optimal attributes have been recovered from the temporal space by adjusting the number of pooling layers with variable filter sizes, convolutional layers, and feature maps. Conspicuously, 4 convolutional layers are used while adjusting parameters such as padding, learning rate, and max-pooling. The presented architecture is trained for a total of 300 iterations with the learning rate varied from 0.2 to 0.002. Because overfitting is prevalent in DL, an early-stopping criterion is chosen to halt training once the minimum training error has been reached. In this way, the best possible CNN design may be generated for testing.
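A minimal Keras sketch of such a learning-rate sweep with early stopping is given below; it reuses the build_cnn_bigru sketch from Section 3.3, and the candidate learning rates, patience value, and placeholder data are assumptions.

```python
import numpy as np
import tensorflow as tf

# Placeholder training data: 0.6 s windows of 3-channel sensor signals and one-hot labels
x_train = np.random.randn(512, 60, 3).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 6, size=512), num_classes=6)

best_acc, best_lr = 0.0, None
for lr in [0.2, 0.02, 0.002]:                       # learning rates between 0.2 and 0.002
    model = build_cnn_bigru()                       # CNN-BiGRU sketch from Section 3.3
    model.optimizer.learning_rate.assign(lr)
    early_stop = tf.keras.callbacks.EarlyStopping(  # stop once the error stops improving
        monitor="val_loss", patience=10, restore_best_weights=True)
    history = model.fit(x_train, y_train, validation_split=0.2,
                        epochs=300, verbose=0, callbacks=[early_stop])
    acc = max(history.history["val_accuracy"])
    if acc > best_acc:
        best_acc, best_lr = acc, lr

print(f"best learning rate: {best_lr}, validation accuracy: {best_acc:.3f}")
```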

Layer count

In Figs. 8 and 9, the best hyperparameter values for the number of convolutional layers are presented. The results show that incorporating the first three layers significantly improves training accuracy and maintains high quality over time. However, a loss gap between layers 3 and 4 was found when layer 4 was included, and the suggested model's accuracy began to decline once layer 4 was added. Thus, 3 convolutional layers are considered optimal for effectively extracting information from the data and making predictions about outliers.

Fig. 10 Optimal feature map size accuracy (L1 Layer 1; L2 Layer 2; L3 Layer 3)

Fig. 11 Optimal pooling layer size accuracy (L1 Layer 1; L2 Layer 2; L3 Layer 3)

Fig. 12 Optimal GRU cell accuracy

Feature maps

Experiments were performed to determine the effect of using different numbers of feature maps to extract features in the CNN layers. Accuracies of 86.68%, 89.65%, 91.23%, and 89.23% were attained by applying the suggested solution to the tested feature map counts, along with the computed variability. The estimated results show that the suggested solution reaches its maximum accuracy with 130 feature maps, whereas adding 150 feature maps leads to a decline in performance. Figure 10 shows the evaluation of feature extraction accuracy once the ideal number of feature maps has been determined by increasing the size. The best accuracy on the test dataset is achieved with a channel size of 120.

Pooling layer sizes

As can be seen in Fig. 11, the performance of the CNN model is independent of the pooling size, in contrast to the filter size. A pooling size of 1\(*\)5 is selected as sufficient to run pooling over the various iterations.

Finalized hyper-parameters

After finding the optimal size for the pooling layer, the best values for the other CNN model hyperparameters are as follows: (a) Number of CNN Layers = 3; (b) Number of Feature Maps = 130; (c) Optimal Size of Feature Map = 1\(*\)9; and (d) Optimal Size of Pooling Layer = 1\(*\)5. This configuration yielded a maximum accuracy of 92.23% on test samples and was therefore chosen as the final solution. In addition, it was shown that by adding two fully connected layers totaling 2048 nodes, the model's accuracy could be improved to 94.73%.

Quantity of GRUs

To handle the sequential nature of the collected events, the proposed CNN model replaces its last layers with bidirectional GRU units. Different numbers of GRU cells, from 20 up to 140, have been added to the GRU unit to reach the highest possible event determination accuracy.

The results of the calculations are shown in Fig. 12. The results show that the GRU model with 128 cells has the highest accuracy, at 96.09%.

4.1.2 Selecting optimal period for an event

The effectiveness of event determination is proportional to the duration of the event and the server's buffering capacity. According to Table 4, the suggested approach loses efficacy in event determination for window sizes greater than 0.80s, which might cause long delays in the production of results. Conversely, a window size of less than 0.80s results in improved data processing efficiency, both in terms of accuracy and throughput. When using a window size of 0.60s, the model produces the best possible result with an accuracy of 96.45%. Therefore, the reported results are based on a 0.60s time frame.
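For illustration, a minimal Python sketch of segmenting the continuous signal into non-overlapping 0.60 s windows is shown below; the sampling rate is an assumption.

```python
import numpy as np

def segment_signal(signal: np.ndarray, sample_rate_hz: int = 100, window_s: float = 0.60) -> np.ndarray:
    """Split a continuous 1-D signal into non-overlapping windows of the chosen duration."""
    win = int(sample_rate_hz * window_s)   # samples per 0.60 s window
    n_windows = len(signal) // win
    return signal[: n_windows * win].reshape(n_windows, win)

# Example: 10 s of accelerometer data at an assumed 100 Hz sampling rate
windows = segment_signal(np.random.randn(1000))
print(windows.shape)  # (16, 60)
```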

Table 4 Accuracy analysis
Table 5 Precision classification scores; EMS Electrophoretic Mobility Shift Assay; HMM Hidden Markov Model; DBN Deep Belief Network

4.1.3 Precision efficacy

The performance of the suggested solution is verified by comparing the computed results with state-of-the-art solutions, such as the Electrophoretic Mobility Shift (EMS), Hidden Markov Model (HMM), and Deep Belief Network (DBN) approaches. Precision, Recall, and F-measure are the performance metrics that are calculated and analyzed. Tables 5, 6, and 7 show the results of the event determination for the physical activities included in the current study. The results show that the suggested method is the most precise, with a precision of 93.16%, compared with EMS (84.25%), HMM (82.17%), and DBN (81.41%). The suggested method also outperforms EMS, HMM, and DBN in terms of Recall and F-measure, with values of 94.26% and 98.47%, respectively, for event determination. The results reveal that the suggested solution is competitive with state-of-the-art methods for determining the order of physical events. A confusion matrix is computed to characterize the overall activity-specific event detection performance and to quantify the misclassification (Table 8). Physical activity misclassifications, such as between "Running" and "Jogging", may be readily identified using the confusion matrix. The comparable patterns made it challenging for the suggested technique to distinguish between events associated with running and jogging.
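A minimal sketch of how such Precision, Recall, F-measure, and confusion-matrix scores can be computed with scikit-learn is shown below; the label vectors are placeholders, not the study's data.

```python
from sklearn.metrics import precision_recall_fscore_support, confusion_matrix

# Placeholder ground-truth and predicted activity labels
y_true = ["walking", "running", "jogging", "sitting", "running", "jogging"]
y_pred = ["walking", "jogging", "jogging", "sitting", "running", "running"]

precision, recall, f_measure, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
print(f"Precision={precision:.4f}  Recall={recall:.4f}  F-measure={f_measure:.4f}")

# The confusion matrix highlights confusions such as running vs. jogging
labels = ["walking", "running", "jogging", "sitting"]
print(confusion_matrix(y_true, y_pred, labels=labels))
```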

Table 6 Recall classification scores; EMS Electrophoretic Mobility Shift Assay; HMM Hidden Markov Model; DBN Deep Belief Network
Table 7 F-Measure classification scores; EMS Electrophoretic Mobility Shift Assay; HMM Hidden Markov Model; DBN Deep Belief Network
Table 8 Confusion matrix
Fig. 13 CNN training accuracy estimation

Fig. 14 CNN loss accuracy estimation

Fig. 15 GRU training accuracy estimation

Fig. 16 GRU loss accuracy estimation

Fig. 17 Delay estimation

4.2 Precision of trained models

A three-layered CNN architecture is suggested after studying the networks' data-processing potential. In the current study, it is trained for 90 iterations at a learning rate of 0.02. Figures 13 and 14 show the proposed CNN architecture's training and validation accuracy and loss, respectively. According to Fig. 13, after 70 epochs the optimum weights have been determined for the CNN, with a training accuracy of 98.25% and a loss of 2.26%. The suggested design has also reduced the validation error to 2.01%, with a loss of 0.12%. The GRU network is trained alongside the CNN model, and both undergo 90 iterations of training at a learning rate of 0.02. After 70 epochs, the optimum weights for the GRU network have been attained by reducing the loss to 0.13% and increasing the accuracy, much like the CNN. The detailed results are depicted in Figs. 15 and 16.

4.3 Rate of latency (RL)

RL is defined as the elapsed time between when a request is made and when the corresponding response is received. Two categories of RL are evaluated: (1) RL in a network; (2) blockchain RL.

4.3.1 RL in a network

In the healthcare industry, it is crucial to determine the network delay involved in the delivery of decision alerts. RL in the network may be computed mathematically as

$$\begin{aligned} \text {RL (Network)} = \text {Time of alert delivery} - \text {Time of irregular event determination} \end{aligned}$$

Figure 17 shows the rate of delay brought on by the Edge and Cloud layer applications sharing the same network. In comparison to Edge-inspired healthcare frameworks, cloud-based frameworks have to cope with lower bandwidth, increased network congestion, and longer round-trip times. Even so, the cloud-based applications achieve a network delay time that is on par with industry standards.

Table 9 Blockchain transaction delay

4.3.2 Blockchain RL

Minimum, average, and maximum rates are computed for every block and peer formed with respect to a time event T to determine the RL for the blockchain. Table 9 displays the results of calculating the RL for a range of peer counts. The calculations show that the delay rate rises along with the number of peers, with the average growth factor increasing by 2.23% and the minimum growth factor by 0.99%. It must be emphasized that the use of encryption and a lightweight cryptographic method might be a cause of delay and internal computation for block creation. Thus, the increase in RL is proportional to the number of participants contributing to the blockchain infrastructure. However, the computed RL justifies the use of blockchain technology in healthcare.

4.4 Quantifying data processing cost

Assessing the effectiveness of the solution's decision-making process requires measuring the system's cost. Transaction costs and computational expenses are reported in Table 10. The computational cost characterizes how long the system takes to reach a judgment on an event acquired at a certain instant in time, while the transaction fee reflects the difficulty of producing and verifying blocks in the network. Table 10 shows the total time and effort spent on data creation, data cleaning, and event counting. Conversely, the transaction cost is the sum of the costs associated with consensus and the transmission of keys and encrypted messages.

Table 10 Cost estimation

5 Conclusion

In the current study, an IoT and deep learning-powered digital twin architecture for elderly healthcare is presented to identify and foresee vulnerabilities. The suggested method holds considerable promise for the specifics of the smart healthcare sector. A variety of challenges are inherent to digital twin implementations, including sequential data assessment, latency concerns, data security, and dynamic promptness. Results from the experimental analysis show that the suggested method for identifying anomalous events is effective. Several limiting aspects can be addressed in future work. Because training the digital twin is an ongoing process, continuous iteration is required to maximize the efficacy of event prediction. This calls for evolving the digital twin's capacity for self-improvement and adaptation in the context of operational research. To further lower the risk factor and increase the economic rewards, the suggested technique may be extended to additional areas in the future.