1 Introduction

Owing to the phenomenal development of digital technology in recent years, the Internet of Things (IoT) has a great impact on our lives [16]. IoT can be considered as a set of devices equipped with sensors, actuators, and processors that communicate with each other to serve a meaningful objective. IoT creates an integrated system that combines different systems to provide intelligent performance in each task. It has driven a new wave of cell phones, home appliances, and other embedded applications that are all connected to the internet. Consequently, different IoT-enabled systems such as smart healthcare, smart cities, smart homes, smart factories, smart transport, and smart agriculture are gaining significant attention across the world. Cloud Computing (CC) is considered the standard infrastructure, platform, and services for developing IoT systems [11]. However, Cloud datacenters are located at a remote distance from IoT devices, which leads to high latency. This issue adversely affects the response time of real-time applications such as critical health monitoring, traffic monitoring, and emergency fire detection. In addition, IoT sources are geographically distributed and can generate a large amount of data to be sent to the Cloud for processing, which leads to overloading. Edge computational resources can handle the aforementioned challenges in IoT systems [21]. Fog Computing (FC) can be defined as edge computing that aims to deploy services at the network edge. FC achieves location awareness and reduces latency [2].

In FC, there are various devices such as Cisco IOx networking equipment, micro-datacenters, Nano-servers, smart phones, personal computers, and Cloudlets. A fog device is usually called a Fog Node (FN). These fog nodes create a dense distribution of services that process IoT data near its source. The FC and CC paradigms most often work in an integrated manner, as shown in Fig. 1, to meet the Quality of Service (QoS) and resource requirements of a wide range of standard IoT systems [2]. In FC, Resource Allocation (RA) is a challenging task, as it involves a set of heterogeneous resources and fog nodes to carry out the computations required by IoT systems [38]. Many RA methods have been proposed, such as Least Connection (LC), Round Robin (RR), Weighted Round Robin (WRR), and Adaptive Weighted Round Robin (AWRR). These algorithms are illustrated in detail in Section 2 (the related work section). However, they have several disadvantages: (i) they do not take into account the heterogeneity of the computational resources; (ii) they suffer low performance due to process migration; (iii) they incur high latency; and (iv) there is a lack of a unified standard for FC [1]. The integration between Fog, Cloud, and various IoT devices increases the difficulty of the RA problem [22]. A real-world implementation of an FC environment with IoT and Cloud datacenters is very expensive, and modifying it is tiresome, so the solution is empirical analysis using simulation. There are a number of simulators, such as EdgeCloudSim [28], SimpleIoTSimulator [25], and iFogSim [12], for modelling the FC environment.

Fig. 1 Interactions among IoT-enabled systems, Fog and Cloud Computing

The rest of this section introduces some concepts in the fields of stratified sampling, resource management, Deep Learning (DL), Deep Neural Networks (DNN), Probabilistic Neural Networks (PNN), and Reinforcement Learning (RL).

Figure 1 shows the relation between IoT, big data, cloud, and fog. It also illustrates some examples of IoT applications.

1.1 Stratified sampling

In stratified sampling, the data is first divided into subgroups (or strata) that share a common characteristic. For example, you can stratify by gender to ensure that men and women are sampled in equal proportions, or by region to ensure that each region within an urban population is represented. It is also possible to specify a different sampling probability for each stratum: for example, when sampling for a survey, you can give men a higher probability of being sampled than women if you expect men's willingness to participate to be lower. Stratified sampling is used when we might reasonably expect the measurement of interest to vary between the different subgroups and we want to ensure representation from all of them. For example, in a study of stroke outcomes, we may stratify the population by gender to ensure equal representation of men and women.

The study sample is then obtained by taking equal sample sizes from each stratum. Alternatively, it may be appropriate to choose non-equal sample sizes from each stratum. For example, in a study of the health outcomes of nursing staff in a county, if there are three hospitals each with different numbers of nursing staff (hospital A has 500 nurses, hospital B has 1000, and hospital C has 2000), then it would be appropriate to choose the sample numbers from each hospital proportionally (e.g., 10 from hospital A, 20 from hospital B, and 40 from hospital C). This ensures a more realistic and accurate estimation of the health outcomes of nurses across the county, whereas taking equal sample sizes from each hospital would over-represent nurses from hospital A and under-represent those from hospital C. The fact that the sample was stratified should be taken into account at the analysis stage. Stratified sampling improves the accuracy and representativeness of the results by reducing sampling bias. However, it requires knowledge of the appropriate characteristics of the sampling frame (the details of which are not always available), and it can be difficult to decide which characteristic(s) to stratify by.
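As a concrete illustration, the proportional allocation above can be sketched in a few lines of Python; the column name and the 2% sampling fraction are hypothetical choices for this example, not values from the paper.

```python
import pandas as pd

# Hypothetical sampling frame: one row per nurse, stratified by hospital.
frame = pd.DataFrame({"hospital": ["A"] * 500 + ["B"] * 1000 + ["C"] * 2000})

# Proportional allocation: sample the same fraction from every stratum,
# so each hospital is represented according to its size.
sample = (
    frame.groupby("hospital", group_keys=False)
         .apply(lambda stratum: stratum.sample(frac=0.02, random_state=42))
)

print(sample["hospital"].value_counts())  # 10 from A, 20 from B, 40 from C
```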

1.2 Resource management

Resource management is divided into: (i) resource assignment and (ii) task placement [14, 34]. Resource assignment defines the number of incoming processes and determines the resources to be allocated to each process; for example, a Spark application selects the resources to be allocated based on the available memory and the data size. Task placement determines the location for each incoming task; for example, the Delay Scheduling algorithm [41] in Hadoop assigns tasks based on data locality.

1.3 Deep learning (DL)

Deep Learning (DL), also known as deep structured learning or hierarchical learning, is a branch of machine learning methods based on Artificial Neural Networks (ANN) [39]. DL is named "deep" because it uses deep neural networks [4, 26]. It is applied in the fields of object detection, classification, and speech recognition [6]. Unlike traditional learning algorithms, the performance of a DL algorithm increases as the amount of data increases, as shown in Fig. 2.

Fig. 2 Why deep learning

1.4 Deep neural network (DNN)

A Deep Neural Network (DNN) is an ANN with multiple layers between the input and output layers [29]. It is constructed from a set of connected layers: (i) the input layer, (ii) the output layer, and (iii) the hidden layers (all layers in between). It finds the correct mathematical mapping to convert the input into the output. The word "deep" means the network joins neurons in more than two (hidden) layers [3], as shown in Fig. 3.

Fig. 3 DNN architecture

In a DNN, each layer represents a deeper level of knowledge, i.e., the hierarchy of knowledge. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to the connections between them. The weights and inputs are multiplied, and an activation function returns an output between 0 and 1. If the network does not accurately recognize a particular pattern, an algorithm adjusts the weights [15]. In this way, the algorithm makes certain parameters more influential until it determines the correct mathematical manipulation to fully process the data.
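This training loop can be sketched in a few lines of NumPy. The sketch below is a minimal illustration, assuming a single hidden layer, sigmoid activations, toy data, and plain gradient descent; it is not the network used later in this paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes any input into (0, 1)

rng = np.random.default_rng(0)
X = rng.random((100, 4))                          # 100 samples, 4 inputs
y = (X.sum(axis=1) > 2.0).astype(float)[:, None]  # toy binary target

# "Map of virtual neurons": random initial weights for both layers.
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

for epoch in range(1000):
    h = sigmoid(X @ W1)        # hidden-layer activations
    out = sigmoid(h @ W2)      # output between 0 and 1
    err = out - y              # how badly the pattern was missed
    delta = err * out * (1 - out)
    # Adjust each weight in proportion to its influence on the error.
    grad_W2 = h.T @ delta
    grad_W1 = X.T @ ((delta @ W2.T) * h * (1 - h))
    W2 -= 0.1 * grad_W2
    W1 -= 0.1 * grad_W1
```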

1.5 Probabilistic neural networks (PNN)

A Probabilistic Neural Network (PNN) is organized as a multilayered feedforward network containing four layers [17, 35]: (i) the input layer, which contains nodes holding the input measurements; (ii) the pattern layer, which contains one neuron for each case in the training data set, computes the Euclidean distance of the test case from the neuron's center point, and then applies the Radial Basis Function (RBF) kernel using the sigma values; (iii) the summation layer, which sums the outputs from the pattern layer for each class; and (iv) the output layer, which takes all the outputs of the summation nodes and outputs the maximum, i.e., the label node with the highest score. PNNs have several advantages [17]: (i) they are much faster than multilayer perceptron networks; (ii) they are insensitive to outliers; (iii) they can be more accurate than multilayer perceptron networks; (iv) they generate accurate predicted target probability scores; and (v) they are guaranteed to converge to an optimal classifier as the size of the representative training set increases. In general, PNNs offer a high speed of learning and training, and their main advantage is the ability to output probabilities in multi-class classification. However, a PNN requires more memory space to store the model, and it requires a representative training set.
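A minimal NumPy sketch of these four layers is shown below; the Gaussian RBF width `sigma` is an assumed parameter, and the toy interface is ours rather than the paper's.

```python
import numpy as np

def pnn_predict(x, train_X, train_y, sigma=1.0):
    """Classify one test vector with a Probabilistic Neural Network.

    Input layer: the measurement vector x.
    Pattern layer: one Gaussian kernel per training case.
    Summation layer: one averaged kernel response per class.
    Output layer: the class label with the highest score.
    """
    # Pattern layer: RBF kernel applied to the squared Euclidean
    # distance between the test case and each neuron's center point.
    d2 = np.sum((train_X - x) ** 2, axis=1)
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    # Summation layer: aggregate the pattern-layer outputs per class.
    scores = {c: k[train_y == c].mean() for c in np.unique(train_y)}
    # Output layer: return the label node with the maximum score.
    return max(scores, key=scores.get), scores
```

Because each pattern neuron simply stores one training case, "training" amounts to memorizing the data, which is why PNNs learn so quickly.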

The originality of this paper lies in introducing a new Effective Prediction and Resource Allocation Methodology (EPRAM) for the Fog environment, which is suitable for healthcare applications. EPRAM tries to achieve effective resource management in the Fog environment via real-time resource allocation as well as a prediction algorithm. EPRAM is composed of three main modules, namely: (i) the Data Preprocessing Module (DPM), (ii) the Resource Allocation Module (RAM), and (iii) the Effective Prediction Module (EPM). The DPM is responsible for sampling, partitioning, and balancing the data so that it is in the appropriate form for analysis and processing. The RAM learns to select the best FN to execute the incoming request; it uses a Reinforcement Learning (RL) algorithm to achieve a high Load Balancing (LB) level for the fog environment. The EPM uses a PNN to predict a target field from one or more predictors. In order to detect the probability of a heart attack, the PNN is trained using the training dataset. The PNN is then tested using the user's sensing data coming from the IoT layer to predict the probability of a heart attack, so that the most appropriate action can be taken accordingly.

The proposed IoT-Fog system consists of three layers, namely: (i) the IoT Layer, (ii) the Fog Layer, and (iii) the Cloud Layer. The mission of the IoT layer is to monitor the patient's symptoms. The fog layer is concerned with handling the incoming requests and forwarding them to the suitable server. The cloud layer is responsible for managing the transfer of data to and from the fog layer. The user data is sent to the most appropriate resource to be allocated and processed; the allocating resource is managed by a specific healthcare organization. The main goal of the system is to achieve low latency while improving Quality of Service (QoS) metrics such as allocation cost, response time, bandwidth efficiency, and energy consumption. Unlike other RA techniques, EPRAM employs a deep RL algorithm in a new manner and uses a PNN for the prediction algorithm. It achieves its performance thanks to this combination: deep RL has shown impressive promise in resource allocation, while the PNN generates accurate predicted targets and is much faster than multilayer perceptron networks. Accordingly, EPRAM is a suitable algorithm for real-time systems in FC, which leads to load balancing.

The rest of the paper is organized as follows: Section 2 introduces some of the recent efforts in the field of RA techniques generally, and then the recent efforts in the area of RA algorithms for CC and FC. Section 3 introduces the proposed Effective Prediction and Resource Allocation Methodology (EPRAM) for the Fog environment, using real-time resource allocation as well as a PNN, with more details about each contribution. Section 4 introduces the evaluation results and discussion. Our conclusion is discussed in Section 5.

2 Related work

This section introduces some of the recent efforts in the field of RA techniques generally and in the area of RA algorithms for CC and FC. In recent years, many researchers have been working on different issues to solve the FC and IoT challenges. Fatma M. Talaat et al. [30] proposed a dynamic RA method based on reinforcement learning and a genetic algorithm. It observes the traffic in the network continuously, collects information about each node, manages the incoming tasks, and distributes them equally among the available nodes. It is efficient in real-time systems in the FC environment, such as a smart healthcare system. Dubey Sh et al. [7] used Round Robin (RR) to list the available nodes and assign the incoming processes equally to each node in order. It is simple and easy to understand and implement; however, if the servers have different processing capacities, one of them can become overloaded and crash.

Gupta S et al. [13] used Weighted Round Robin (WRR) for the resource allocation process. It is similar to RR in its cyclical assignment, but differs in that the node with the higher weight is given the larger number of requests; in WRR, each server is allocated a weight based on its capacity. Singh G, Kaur K [27] used Least Connection (LC) to achieve load balancing. It chooses the node with the least number of active transactions and updates its data periodically depending on the number of connections.

Q. Fan et al. [8] proposed a model that minimizes the communication and processing time by assigning each incoming process to a suitable source. The authors in [8] use the best Signal-to-Noise Ratio (SNR) and a distributed alpha algorithm to measure the load on each node in order to achieve load balancing. It reconstructs a set of series of events and compares the SNR. However, the disadvantage of this method is that it focuses only on the delay of wireless communications. Wei Y et al. [36] used a Reinforcement Learning algorithm to find the best scheduling policy in nonhomogeneous networks so as to maximize energy efficiency. The authors in [37] used Multi-agent Reinforcement Learning to increase network resource utilization.

Ashkan Yousefpour et al. [40] proposed a policy to minimize the latency of IoT services. They aimed to assign each task according to the response time. The proposed policy in [40] divides tasks into light and heavy: if the response time on a specific fog node is less than a threshold, the task is assigned to this node; otherwise, the task is forwarded to one of the neighboring nodes or to the cloud. The limitations of this method are: (i) it explores the various scenarios only in a distributed manner; and (ii) it is not reasonable to decide whether to assign a task to fog or to cloud depending solely on the processing time.

The RA problem in CC has gained attention for several years. However, there are few studies related to this issue in FC. Table 1 summarizes some of these related works, highlighting their strengths and weaknesses.

Table 1 Related RA methods for Fog Computing and Cloud Computing

Much research on PM2.5 prediction has been presented in papers such as [10, 19, 20]. Convolutional neural networks are frequently hampered by high computational and storage needs. The authors in [5] examine the accuracy-efficiency trade-off of several structural model pruning procedures across datasets (CIFAR-10 and ImageNet) and hardware (TPUs).

2.1 Problem statement

Although there are many RA algorithms, they have many limitations: (i) most of them depend on the response time to decide whether to assign a task to fog or to cloud, which is not plausible; (ii) they do not consider the task's requirements, such as the priority of the task and the number of tasks; (iii) calculating the capacity is difficult in some cases, such as with varying packet sizes; and (iv) they may cause a network bottleneck.

2.2 Plan of solution

The proposed IoT-Fog system consists of three layers, namely: (i) the IoT Layer, (ii) the Fog Layer, and (iii) the Cloud Layer. The mission of the IoT layer is to monitor the patient's symptoms. The fog layer is concerned with handling the incoming requests and forwarding them to the suitable server. The cloud layer is responsible for managing the transfer of data to and from the fog layer. The user data is sent to the most appropriate resource to be allocated and processed; the allocating resource is managed by a specific healthcare organization.

2.3 The proposed effective prediction and resource allocation methodology (EPRAM)

One of the most significant applications related to the aims of IoT is an efficient healthcare system. In healthcare systems, many factors should be taken into consideration, such as time, privacy of data, and accuracy. The healthcare system should be reliable and available at any time. Accordingly, this paper is concerned with designing an IoT-Fog based healthcare system, as shown in Fig. 4. The proposed IoT-Fog system consists of three layers: (i) the IoT Layer, (ii) the Fog Layer, and (iii) the Cloud Layer. The IoT layer combines the IoT devices (pulse oximeter, ECG monitor, etc.) to observe the user's status. The fog layer is concerned with handling the incoming requests and forwarding them to the suitable Fog Node (FN). The fog layer is divided into a set of fog regions; each fog region has a Master Node (MN), which manages and controls all nodes' data. Each FN's managing software sends its characteristics information to the MN. The characteristics of each FN can be found by checking the values in the Node Characteristics Table (NCT), which is located at the MN. The main characteristics of each FN, as shown in Table 2, are: (i) the cache size (Capacity), (ii) the RAM size (RAM), and (iii) the CPU usage (CPU_Usage). A new characteristic, the Adaptive Weight (AW), is calculated from these parameters by (1) as:

Fig. 4 Fog environment for IoT-enabled healthcare case study

Table 2 Node characteristics table (NCT)
$$AW = \alpha \times \left( \frac{Capacity \times RAM}{CPU_{Usage}} \right)$$
(1)

The cloud layer is responsible for managing the transfer of data to and from the fog layer.
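To make Eq. (1) concrete, the sketch below computes AW for a handful of hypothetical fog nodes and ranks them; the NCT entries and the value of alpha are assumptions for illustration, not values from the paper.

```python
# Hypothetical Node Characteristics Table (NCT) entries held at the Master Node.
nct = [
    {"fn": "FN1", "capacity": 64,  "ram": 8,  "cpu_usage": 0.40},
    {"fn": "FN2", "capacity": 32,  "ram": 16, "cpu_usage": 0.75},
    {"fn": "FN3", "capacity": 128, "ram": 4,  "cpu_usage": 0.20},
]

ALPHA = 1.0  # scaling constant alpha in Eq. (1); its value is assumed here

for node in nct:
    # AW = alpha * (Capacity * RAM) / CPU_Usage  -- Eq. (1)
    node["aw"] = ALPHA * node["capacity"] * node["ram"] / node["cpu_usage"]

# Rank the fog nodes by Adaptive Weight; a higher AW marks a more capable node.
for node in sorted(nct, key=lambda n: n["aw"], reverse=True):
    print(f"{node['fn']}: AW = {node['aw']:.1f}")
```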

2.4 A case study in smart healthcare

In healthcare systems, the IoT devices are usually connected to smart phones. Examples of these devices are pulse oximeters, ECG monitors, smart watches, etc. They send the health status of the patient via an application module. The smart phone acts as a gateway node to preprocess the IoT sensed data. If the available resources at the gateway node meet the requirements, the process is executed at the application; otherwise, it is executed at an FN. The application gateway node selects the appropriate node to execute the process and initiates actuators according to the result. Smart healthcare systems are useful in many cases, such as: (i) reducing the incidence of heart attacks among those with cardiac disease, (ii) early detection of the symptoms of a particular virus, and (iii) detecting Parkinson's disease. The IoT-based healthcare system architecture can be illustrated as a 3-tier architecture, as shown in Fig. 4: layer 1 combines the IoT devices (pulse oximeter, ECG monitor, etc.), layer 2 contains the fog nodes grouped into a set of regions, and layer 3 is the cloud datacenters.

2.5 The proposed EPRAM

This subsection introduces the new Effective Prediction and Resource Allocation Methodology (EPRAM) for the Fog environment, which is suitable for healthcare applications. EPRAM tries to achieve effective resource management in the Fog environment via real-time resource allocation as well as a prediction algorithm. The patient data is sent to the most appropriate server to handle it; this server is administrated by a specific healthcare organization. The main goal of the system is to achieve low latency. The fog layer consists of three main modules, as shown in Fig. 5, namely: (i) the Data Preprocessing Module (DPM), (ii) the Resource Allocation Module (RAM), and (iii) the Effective Prediction Module (EPM).

Fig. 5 The Effective Prediction and Resource Allocation Methodology (EPRAM)

2.5.1 Data preprocessing module (DPM)

The DPM is responsible for sampling, partitioning, and balancing the data so that it is in the appropriate form for analysis and processing. The DPM is divided into three sub-modules, namely: (i) the Sampling Module (SM), (ii) the Partitioning Module (PM), and (iii) the Balancing Module (BM).

Sampling module (SM)

In the SM, the incoming data is divided into subgroups (or strata) that share a similar characteristic, using a stratified sampling algorithm; as noted in Section 1.1, it may also be appropriate to choose non-equal sample sizes from each stratum. In the SM, data is first sampled according to the location it comes from, and then according to its type. Data can be classified as follows: (i) Not or Low Critical: examples are the logging of training activity, weight, or body posture. Such data can be examined by a doctor when needed; if the system fails to log some data points, the patient is still safe. (ii) Critical Data: data about critical conditions. Examples are cardiac monitoring via ECG with automatic alarms once critical situations are detected [9]; the criticality requires a fast, i.e., real-time, response. Context management merely observes patients, devices, or employees to figure out their context and helps by improving planning or taking proper decisions [32]; data in this case is not urgent, but it gains some degree of criticality due to the need for a real-time response. (iii) Very Critical Data Control: the detected events are not only used to alert personnel but also to control devices. This kind of data needs feedback and a real-time response; an example is a device that regulates the amount of oxygen provided to a patient [23].

Partitioning module (PM)

The PM splits the data into three samples. The model is built on the training set, and then applied to the testing set to establish its credibility. The testing set can also be used to further refine the model: if the performance of the model needs improvement, the parameters can be changed, the model rebuilt on the training sample, and its performance on the testing set examined again. The validation sample, which unlike the training and testing sets played no role in developing the final model, is then used to assess the model's performance against unseen data.
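A minimal sketch of this three-way split using scikit-learn is given below; the 60/20/20 proportions are an assumption, as the paper does not state the ratios it uses.

```python
import numpy as np
from sklearn.model_selection import train_test_split

data = np.arange(1000).reshape(-1, 1)  # stand-in for the preprocessed records

# First carve off the 60% training sample, then split the remainder evenly
# into the testing set (used to refine the model) and the validation set
# (held out entirely until the final assessment).
train, rest = train_test_split(data, test_size=0.4, random_state=42)
test, validation = train_test_split(rest, test_size=0.5, random_state=42)
```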

Balancing module (BM)

The BM balances the data by discarding (reducing) records in the higher-frequency categories. The reason is that boosting (duplicating) records instead would run the risk of duplicating anomalies in the data.
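Downsampling the higher-frequency categories in this way can be sketched with pandas; the class labels and counts below are hypothetical.

```python
import pandas as pd

# Hypothetical imbalanced data: the "low" class dominates.
df = pd.DataFrame({"label": ["low"] * 700 + ["avg"] * 200 + ["high"] * 100})

# Discard records from the larger classes so each class matches the smallest,
# rather than duplicating minority records (which would replicate anomalies).
n_min = df["label"].value_counts().min()
balanced = (
    df.groupby("label", group_keys=False)
      .apply(lambda g: g.sample(n=n_min, random_state=42))
)

print(balanced["label"].value_counts())  # 100 records in each class
```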

2.5.2 Resource allocation module (RAM)

The proposed RAM is based on a Reinforcement Learning (RL) algorithm to achieve a high LB level for the fog environment. RL is an Artificial Intelligence technique in which an agent learns to interact with an environment to gain rewards: the agent receives the current environment state and takes an action accordingly; the action changes the environment state, and the agent is then informed of the change through a reward. The RAM learns to select the best FN to execute the incoming request. The overall steps of the Resource Allocation Module (RAM) are shown in Algorithm 1.


Algorithm 1 (RAM) uses RL to achieve the lowest latency; that is, the target of the RL, the reward, is defined as low latency. The reward in RL is defined by the user according to how we want the agent to perform; here we define it as low latency, which the agent achieves by training on and interacting with the environment [24].
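Since the pseudocode of Algorithm 1 is not reproduced in this text, the sketch below shows one plausible shape for such an agent: an epsilon-greedy learner whose reward is the negative of the observed latency, so that maximizing reward minimizes latency. The latency model, node list, and all parameters are assumptions for illustration.

```python
import random

FOG_NODES = ["FN1", "FN2", "FN3"]
q = {fn: 0.0 for fn in FOG_NODES}       # learned value estimate per fog node
counts = {fn: 0 for fn in FOG_NODES}
EPSILON = 0.1                           # exploration rate (assumed)

def observed_latency(fn):
    """Stand-in for the real fog environment: lower latency is better."""
    base = {"FN1": 20.0, "FN2": 35.0, "FN3": 50.0}[fn]
    return random.gauss(base, 5.0)      # milliseconds

for request in range(1000):
    # Epsilon-greedy selection: mostly exploit the best-known node,
    # occasionally explore another one.
    if random.random() < EPSILON:
        fn = random.choice(FOG_NODES)
    else:
        fn = max(q, key=q.get)
    reward = -observed_latency(fn)      # reward = low latency
    counts[fn] += 1
    # Incremental running-average update of the node's value estimate.
    q[fn] += (reward - q[fn]) / counts[fn]

print(max(q, key=q.get))  # converges on the lowest-latency fog node
```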

2.5.3 Effective prediction module (EPM)

The EPM uses the PNN to predict a target field from one or more predictors. In order to detect the probability of a heart attack, the PNN is trained using the training dataset. The PNN is then tested using the user's sensing data coming from the IoT layer to predict the probability of a heart attack, so that the most appropriate action can be taken accordingly. The architecture of the PNN is shown in Figs. 6 and 7. The steps of the PNN-based Prediction Algorithm (PPA) are shown in Algorithm 2.

Fig. 6 PNN Layers

Fig. 7 PNN Architecture

Algorithm 2. PNN-based Prediction Algorithm (PPA)

Illustrative example

Assume there are ten cases. For each case, we consider the values from three sensors Sens.1_P, Sens.2_P, and Sens.3_P as shown in Table 3.

Table 3 Sensor values for the ten cases

Suppose new sensing data arrives with the values [2, 21, 22]. Count_1 is the number of examples belonging to Probability 1, Count_2 is the number of examples belonging to Probability 2, and Count_3 is the number of examples belonging to Probability 3 (Tables 4, 5, 6, 7).

Table 4 Input Data Set for PNN
Table 5 Test example
Table 6 Training Calculations 1
Table 7 Training Calculations 2

As P2 has the largest value, the incoming sensing data [2, 21, 22] is detected as Probability 2.
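The same calculation can be reproduced with the PNN sketch from Section 1.5. Since the table contents are not reproduced here, the training cases below are hypothetical stand-ins chosen so that class 2 wins, as in the example.

```python
import numpy as np

# Hypothetical training cases standing in for Table 4 (three sensor values
# per case) and their probability classes.
train_X = np.array([[3, 20, 21], [2, 19, 23], [8, 30, 40],
                    [2, 22, 21], [9, 31, 39], [15, 50, 60]])
train_y = np.array([1, 1, 3, 2, 3, 2])
x = np.array([2, 21, 22])            # the incoming sensing data

# Pattern layer: Gaussian kernel per training case (sigma = 1, assumed).
d2 = np.sum((train_X - x) ** 2, axis=1)
k = np.exp(-d2 / 2.0)

# Summation layer: average kernel response per probability class.
for c in (1, 2, 3):
    print(f"P{c} = {k[train_y == c].mean():.4f}")
# Output layer: P2 is the largest, so the data is detected as Probability 2.
```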

3 Implementation and evaluation

FC runs applications on fog devices located between the cloud and end devices. This paradigm combines the benefits of cloud and edge for distributed data processing with low latency. IoT sensors, which are located in the lower layer of the architecture, are responsible for receiving and transmitting data through the gateways to the higher layer. The actuators in the lowest level are also responsible for system control. In effect, FC provides filtering and analysis of data by edge devices. Each application of a fog network has a different topology.

3.1 Simulation tool

iFogSim [33] is a toolkit used for simulating and modeling FC environments. It is used for the evaluation of scheduling algorithms and resource management techniques in FC environments. It can be used in different scenarios, and it focuses on the effects on operational cost, power consumption, latency, and network congestion. It simulates cloud data centers, network links, and edge devices to measure performance metrics. iFogSim is available for download from: https://github.com/harshitgupta1337/fogsim. NetBeans IDE 8.0.2 can be downloaded from: https://netbeans.org/downloads/8.0.2/, or Eclipse Modeling Tools from: http://www.eclipse.org/downloads/packages/release/Mars/2. The iFogSim simulator has a Graphical User Interface (GUI) module for designing custom and ready-made topologies [33]. Sensors, actuators, fog, cloud, and link elements can be added to the topology via this GUI. We create a new topology for the healthcare environment as a case study in iFogSim, as shown in Fig. 8. These topologies can be read and executed by the other modules of the simulator.

Fig. 8 Simulation of Fog Topology

The Resource Allocation Module (RAM) and the Effective Prediction Module (EPM) are implemented using Python.

3.2 Mobile HEALTH dataset

The Mobile HEALTH (MHEALTH) (https://archive.ics.uci.edu/ml/datasets/MHEALTH+Dataset) dataset contains vital signs and body motion recordings for 10 volunteers during several physical activities. Sensors placed on the subject's chest, right wrist and left ankle are used to measure the motion experienced by diverse body parts, namely, acceleration, rate of turn and magnetic field orientation. The main characteristics of MHEALTH Dataset are shown in Table 8.

Table 8 MHEALTH dataset characteristics

In this paper, MHEALTH is used to detect the possibility of a heart attack. Hence, a new extra column (column 24) is added to hold the probability of attack. Column 24 has three main values: (i) 1 for strong probability, (ii) 2 for average probability, and (iii) 3 for low probability. In order to simplify the classification, only 560 instances are selected from the MHEALTH dataset for training and 240 instances for testing. The number of training instances for each probability is shown in Table 9, and a sample of the MHEALTH dataset is shown in Table 10.

Table 9 Used MHEALTH dataset
Table 10 A sample of MHEALTH Dataset
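The preprocessing just described can be sketched as follows; the file name matches the UCI distribution (one whitespace-delimited log per subject), but the rule for deriving column 24 is a placeholder, since the paper does not specify how the probability labels were assigned.

```python
import pandas as pd

# One log file per subject in the UCI distribution: whitespace-delimited,
# 23 columns per the paper's description, no header row.
df = pd.read_csv("mHealth_subject1.log", sep=r"\s+", header=None,
                 names=[f"col{i}" for i in range(1, 24)])

def attack_probability(row):
    # Placeholder rule: the paper does not state how column 24 is derived.
    if row["col1"] > 0.9:
        return 1   # strong probability
    if row["col1"] > 0.3:
        return 2   # average probability
    return 3       # low probability

df["col24"] = df.apply(attack_probability, axis=1)
train, test = df.iloc[:560], df.iloc[560:800]  # 560 training, 240 testing rows
```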

3.3 Performance metrics

The performance of the proposed EPRAM versus the previously mentioned LB algorithms can be compared using the metrics shown in Table 11.

Table 11 The performance metrics used to evaluate the proposed EPRAM scheme

TAT (Turnaround Time) is the time difference between the completion time and the arrival time, as calculated in (2). WT (Waiting Time) is the time difference between the turnaround time and the burst time, as calculated in (3).

$$TAT = CT - AT$$
(2)
$$WT = TAT - BT$$
(3)

where CT is the Completion Time, AT is the Arrival Time, i.e., the point of time in milliseconds at which a process arrives at the ready queue to begin execution, and BT is the Burst Time, i.e., the time in milliseconds required by a process for its execution.

ARU can be calculated as in (4) and LBL can be calculated as in (5).

$$ARU = \frac{BS + OL}{FNs} \times 100\%$$
(4)
$$LBL = \frac{BS}{FNs} \times 100\%$$
(5)

where BS is the number of balanced FNs, OL is the number of overloaded FNs, and FNs is the total number of available FNs.
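The four metrics follow directly from Eqs. (2)-(5); the per-process times and node counts in the sketch below are hypothetical.

```python
# Per-process times in milliseconds (hypothetical values).
processes = [
    {"AT": 0, "CT": 40, "BT": 25},
    {"AT": 5, "CT": 70, "BT": 30},
]
for p in processes:
    p["TAT"] = p["CT"] - p["AT"]   # Eq. (2): turnaround time
    p["WT"] = p["TAT"] - p["BT"]   # Eq. (3): waiting time

# Fog-node counts (hypothetical): balanced, overloaded, and total.
BS, OL, FNs = 7, 2, 10
ARU = (BS + OL) / FNs * 100        # Eq. (4): average resource utilization (%)
LBL = BS / FNs * 100               # Eq. (5): load balancing level (%)
print(ARU, LBL)                    # 90.0 70.0
```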

3.4 EPRAM implementation

A comparison between the performance of the proposed EPRAM algorithm and the previous LB algorithms (LC, RR, WRR, and AWRR) is shown in Table 12. The value of WT for EPRAM compared to the previous LB algorithms is shown in Fig. 9, and the value of TAT is shown in Fig. 10.

Table 12 EPRAM vs. previous LB algorithms

From Figs. 9 and 10, it is observed that the EPRAM algorithm achieves the lowest WT and the lowest TAT, because RL develops an agent that can assign tasks quickly and efficiently [24]. EPRAM also uses the PNN for the prediction algorithm, and it achieves this performance thanks to the combination of deep RL and PNN: deep RL has shown impressive promise in resource allocation, while the PNN generates accurate predicted targets and is much faster than multilayer perceptron networks [17]. The EPRAM algorithm has been compared with LC, RR, WRR, and AWRR. The values of makespan are shown in Table 13 and Fig. 11, the values of ARU in Table 14 and Fig. 12, and the values of LBL in Table 15 and Fig. 13.

Fig. 9 WT for EPRAM vs previous LB algorithms

Fig. 10 TAT for EPRAM vs previous LB algorithms

Table 13 Makespan analysis (in ms)
Fig. 11 Makespan for EPRAM vs previous LB algorithms

Table 14 ARU Analysis (%)
Fig. 12 ARU for EPRAM vs previous LB algorithms

Table 15 LBL Analysis (%)
Fig. 13 LBL for EPRAM vs previous LB algorithms

Figure 11 shows that the EPRAM algorithm gives a lower makespan than the previous LB algorithms. Figures 12 and 13 show that the EPRAM algorithm gives better results than the previous LB algorithms, as it achieves a higher ARU and a higher LBL. Hence, all the above results show that the EPRAM algorithm outperforms LC, RR, WRR, and AWRR in terms of makespan, ARU, and LBL.

4 Conclusions and future work

This paper presented a new Effective Prediction and Resource Allocation Methodology (EPRAM) for the Fog environment, which is suitable for healthcare applications. The proposed IoT-Fog system consists of three layers, namely: (i) the IoT Layer, (ii) the Fog Layer, and (iii) the Cloud Layer. The mission of the IoT layer is to monitor the patient's symptoms. The fog layer is concerned with handling the incoming requests and forwarding them to the suitable server. The cloud layer is responsible for managing the transfer of data to and from the fog layer. The user data is sent to the most appropriate resource to be allocated and processed; the allocating resource is managed by a specific healthcare organization. EPRAM achieved effective resource management in the Fog environment via real-time resource allocation as well as a prediction algorithm. EPRAM is composed of three main modules, namely: (i) the Data Preprocessing Module (DPM), (ii) the Resource Allocation Module (RAM), and (iii) the Effective Prediction Module (EPM). The DPM is responsible for sampling, partitioning, and balancing the data so that it is in the appropriate form for analysis and processing. The RAM learned to select the best FN to execute the incoming request, using a Reinforcement Learning (RL) algorithm to achieve a high LB level for the fog environment. The EPM used the PNN to predict a target field from one or more predictors: the PNN was trained using the training dataset and then tested using the user's sensing data coming from the IoT layer to predict the probability of a heart attack, so that the most appropriate action can be taken accordingly. The main goal of the system is to achieve low latency while improving Quality of Service (QoS) metrics such as allocation cost, response time, bandwidth efficiency, and energy consumption. Unlike other RA techniques, EPRAM employed a deep RL algorithm in a new manner and used the PNN for the prediction algorithm; it achieved its performance thanks to this combination, as deep RL has shown impressive promise in resource allocation and the PNN generates accurate predicted targets much faster than multilayer perceptron networks. Compared with the state-of-the-art algorithms, EPRAM achieved the minimum makespan while maximizing the Average Resource Utilization (ARU) and the Load Balancing Level (LBL). Accordingly, EPRAM is a suitable algorithm for real-time systems in FC, which leads to load balancing.

In future work, we aim to test EPRAM using various datasets to calculate the data transfer rate and compare it with the previous algorithms in different scenarios. EPRAM will also be tested at different hierarchical levels, with ongoing investigations into how to make it more distributed.