1 Introduction

Modern computing and networking approaches are rapidly extending IoT applications into diverse domains [1]. The real-time interaction and strict service delivery times that IoT applications demand make it unattractive to deploy these applications on distant cloud datacenters. Consequently, much recent research focuses on how to exploit edge network capabilities to support IoT applications and their requirements efficiently [1,2,3,4]. Computing nodes in the proximity of the edge network both filter the gathered data, so that only a low, condensed data volume is sent to the cloud datacenters, and analyze and process data close to where the data are perceived. Fog computing therefore brings cloud-style application hosting to the edge network for executing IoT applications. Fog computing complements IoT + Cloud scenarios with a new layer of collaborating devices that can deliver services and carry out business missions completely and independently. In the fog, network components such as smart gateways, micro-datacenters, routers, and switches are considered fog nodes and are utilized for processing [2]. Since a fog server can autonomously process the data gathered from IoT devices without relying on cloud datacenters, it effectively saves network bandwidth, cloud storage space, and resources for data and critical processes [3, 4]. In addition, fog computing supports the unified use of cloud and edge resources, which facilitates deploying IoT applications near their data sources; this reduces the network traffic load and helps guarantee on-time service delivery. Although fog computing is an extension of cloud computing, the deployment, management, and updating of IoT applications in such a layered environment raise new challenges. First, fog computing operates in a large-scale environment comprising many nodes with heterogeneous and separate processing, memory, and storage capabilities. Second, the workload on each processing node is dynamic and variable. Finally, each IoT application has its own requirements and limitations, such as its sensitivity to delay, its computing requirements, and its privacy constraints. Thus, application modules must be deployed properly on the fog infrastructure while the application requirements, together with the software, hardware, bandwidth, and delay characteristics of and between nodes, are taken into consideration [5]. Smart, intelligent deployment of a user's application modules on the fog infrastructure can maximize the reduction in power consumption and the efficient usage of physical resources and bandwidth. For illustration, Fig. 1a demonstrates that when a fog node hosting all modules of an application breaks down, the user's IoT application stops working, which drastically affects the QoS of the delivered services. For this reason, an applicable policy must be adopted for deploying application modules so that service delivery remains reliable. For instance, as Fig. 1b depicts, the customer determines a fault tolerance threshold for a requested application; then, with respect to this threshold, the minimum number of fog nodes that still meets the QoS is used for deploying the application modules, so that the effect of failures of some fog nodes remains at an acceptable level.

Fig. 1
figure 1

Deployment of customer’s IoT application modules

When application modules are distributed, several mappings are possible, and one suitable, optimal mapping must be selected with respect to the objective functions. As Fig. 1 suggests, even a problem with a small number of modules admits many possible placements of modules on the available fog nodes; as the numbers of application modules and fog nodes grow, and given the heterogeneous nature of the fog environment, finding an optimal deployment of application modules on the fog infrastructure becomes a combinatorial problem. This deployment problem is NP-hard, so no exact solution exists in polynomial time [6, 7]. Recently, several studies have addressed the distribution of application modules on cloud and fog infrastructures [8,9,10,11,12,13]. A distribution algorithm for IoT application modules on fog nodes has been proposed with regard to delay sensitivity and efficient network resource utilization [8]. Venticinque and Amato [9] presented a methodology for the fog service placement problem that finds optimal mappings between IoT applications and computing resources to meet the QoS requirements, but sensor requirements were neglected in that article. An integrated fog computing platform has been suggested in the literature to handle dynamic deployment of modules on fog devices [10]; in [10], the modules are distributed over more than one fog node to avoid a single point of failure, but the distribution mechanism is not elaborated. A decentralized collaborative scheme was proposed for forwarding and placing modules, addressing the limitations of centralized surveillance such as application management overhead, a single point of failure, residual communications, and delay in the decision-making process [14]. A general and extensible model has been published for describing QoS-aware deployment of IoT applications over fog computing infrastructure [15]. Although many studies have addressed service and module distribution and deployment, the literature clearly lacks a multi-objective QoS-aware and reliable deployment model. To this end, this paper presents a multi-objective QoS-aware and traffic-aware algorithm for reliable module deployment on fog infrastructure. To harness the huge search space of fog colonies, the proposed algorithm exploits graph theory and models the search space as a connected graph. It works in two phases. In the preprocessing phase, it finds all fully connected subgraphs, abstracting them as the well-known clique problem, so as to obtain subgraphs of one-hop connected nodes that satisfy the specified QoS and constraints. The second phase then finds an optimal distributed deployment inside one of the extracted cliques with respect to the objective functions. The main contributions of the current paper are as follows:

  • This paper defines the new parameter of “fault tolerance against failure” to guarantee the customer's desired execution reliability.

  • This paper defines the “link resource wastage” concept to improve the efficiency of utilizing fog node’s bandwidth.

  • This paper formulates the deployment of IoT applications’ modules on fog infrastructure as a multi-objective optimization problem that minimizes both bandwidth wastage and power consumption, which is an NP-hard problem.

  • This paper presents an advanced multi-objective genetic-based optimization algorithm (MOGA) that solves the aforementioned NP-hard combinatorial problem by minimizing both total power consumption and bandwidth wastage while guaranteeing the reliability of the user’s application.

The proposed method has been validated in several extensive scenarios. The simulation results show that the suggested method significantly dominates other state-of-the-art approaches in terms of the stated objectives. The rest of the paper is organized as follows. Section 2 is dedicated to related works. The proposed system models and framework are presented in Sect. 3. The problem statement is given in Sect. 4. The proposed MOGA model, which solves the problem of IoT application deployment on fog platforms, is presented in Sect. 5. The proposed work is simulated and evaluated in Sect. 6. A brief discussion is given in Sect. 7. Section 8 concludes this paper and indicates future directions.

2 Related works

In this section, published studies on service deployment or module distribution of IoT applications in fog computing environments are reviewed from different objectives and perspectives. This review reveals the shortcomings and paves the way for further improvement. A mapping algorithm for module deployment of distributed applications in the fog environment has been proposed [8]. It is a network-aware approach that sorts fog nodes and application modules based on their capacity and current requests; the modules are then distributed while the constraints are preserved. In other words, the algorithm prioritizes modules based on how long they wait for resources. This policy helps maximize resource utilization for distributed IoT applications. Furthermore, the algorithm shows that interaction between cloud and fog yields better end-to-end delay than cloud-only approaches. A platform-as-a-service (PaaS) architecture was presented for managing IoT applications based on their requirements during the development process, so that deploying applications on a mixed cloud-to-fog scenario is facilitated [16]. The proposed PaaS architecture acts as a centralized coordinator that can develop applications according to their target domains; it can also discover, initiate, configure, and scale resources, execute application modules, manage the execution streams between modules, monitor the service-level agreement (SLA), migrate modules, and present interfaces for resource management and components for evaluation. Development and deployment of IoT applications inspired by the DevOps concept were propounded in [17]. The cornerstone of this approach is remote resource management of IoT applications and an integrated tool for development, deployment, and operation. The approach concentrates on guaranteeing correct application functionality once an old version is substituted by a new one. To this end, the blue–green deployment pattern is used for a single device, and the canary deployment method is used for reliable substitution on a set of devices. With the increase in machine-to-machine (M2M) communication traffic, the bandwidth limitation of the edge network, congestion prevention, and service delay have become critical issues in M2M platforms. Since IoT applications are built on M2M platforms, an approach has been presented to decrease the traffic sent from the network to the cloud by deploying IoT application modules on the M2M platform, so that data are preprocessed before being sent over the network and the transmitted traffic is reduced. As existing M2M platforms do not support automatic and dynamic module deployment, a dynamic deployment framework has been suggested in [18]. Its main concentration is on automatic and dynamic management and optimal deployment of modules according to the user's service requirements. A methodology was presented to support developers in solving the service placement problem in fog environments [9]. The methodology finds an optimal mapping between IoT applications and computing resources with regard to meeting the application's QoS. In addition, an optimal deployment configuration is presented for the smart energy domain, respecting the application's QoS, deadlines, and throughput. Hong et al. [10] have proposed an integrated fog computing platform for dynamic module deployment on fog devices.
To solve the module deployment problem, a heuristic approach has been suggested. In their work, users’ requests are directed to a server and registered. Each request may be split into several modules, each of which can be encapsulated by Docker or another container. After gathering a set of requests, the server runs the module distribution algorithm to generate a deployment plan for the modules associated with one request. The distribution plan then sends the modules of each request for deployment onto the fog platform. The main objective is to increase the number of requests satisfied with regard to their requirements. In addition, to avoid a single point of failure, the modules may be distributed over more than one computing node. An automated cloud-based IoT application deployment was presented in [19]. This work exploits the Topology and Orchestration Specification for Cloud Applications (TOSCA), a standard for cloud service management, to specify the components and configuration of IoT applications. The automatic deployment process is carried out in heterogeneous IoT environments using TOSCA. TOSCA is a standard of the Organization for the Advancement of Structured Information Standards (OASIS) that improves cloud application portability in heterogeneous cloud environments. The standard is a model for specifying service structure and managing IT services. In addition, the internal topology of application modules and the IoT application deployment process are described using TOSCA. The main objective of the proposed model is a clear description and interpretation of application modules and their adaptation for execution on fog nodes. Saurez et al. published a distributed infrastructure programming interface, called Foglets, for computing chains of geographically distributed fog and cloud nodes [20]. The method provides application programming interfaces (APIs) that abstract time- and location-dependent data for storing and retrieving the data produced by applications in local nodes and for initiating communication between resources along the computing chain. Foglets manages application components over fog nodes. It provides different algorithms for executing application components and managing component migrations between fog nodes based on sensor mobility patterns and the applications' dynamic computing requirements. Foglets supports four features. First, fog computing resources are automatically discovered at different levels of the network hierarchy, and application components are deployed over fog computing nodes commensurate with the tolerable delay of each component. Second, it supports a co-hosted multi-programming policy on each fog node. Third, it provides communication APIs for application components placed on different physical layers of the network hierarchy so that they can negotiate and communicate based on their situation. Finally, it supports the adaptation of delay-sensitive workloads and time- and location-dependent migration to cope with dynamic situation awareness. Moreover, Foglets supports QoS-aware migration, in which the quality parameter is the delay between a component and its parents in an application.
To keep applications whose deployment topology evolves over time flexible, application components must be separated from their executions. Topology changes in application deployment occur not only when applications are created, but also when user request patterns change, when the physical infrastructure changes, when the edge network changes (e.g., sensors and gateways are dropped or added), and as a consequence of evolutionary changes in the application's business during its life cycle. Therefore, Vogler et al. proposed a framework entitled DIANE for generating optimal dynamic deployment topologies for IoT applications commensurate with the existing physical infrastructure [21]. In this process, parameters such as the time needed for deployment, edge device utilization, and the bandwidth and execution time needed to run the IoT applications are investigated. Mahmud et al. propounded an approach for managing delay-aware application modules in fog computing environments [14]. Their work considers different facets of delay in distributed applications, such as the delay until service availability, the service delivery time, and internal communication delays. The aim of this policy is to manage time-sensitive and delay-tolerant IoT applications through different approaches so that deadline-based QoS is met for all application types alongside optimal resource utilization in the fog environment. To optimize the number of active fog nodes, forwarding and re-allocation strategies for application modules are used. Furthermore, decentralized organization is suggested for placing and forwarding application modules, overcoming the limitations of centralized surveillance such as application management overhead, a single point of failure, residual communications, and delay in the decision-making process. The availability of suitable infrastructure and fog application models is very important for the success of automated QoS-aware deployment along the cloud-to-things chain. Unfortunately, most advanced tools for automated deployment of distributed software do not deal with the non-functional attributes needed to obtain a desirable deployment. Therefore, a simple and general model has been propounded by Brogi et al. for supporting QoS-aware deployment of multi-component IoT applications on fog computing infrastructures [15]. This model describes the quality of the operational systems associated with existing infrastructures in terms of delay and bandwidth, the interactions between software components and things, and the business policies. In addition, a number of algorithms have been proposed for desirable component deployment on fog infrastructure. A multi-objective fault-tolerant optimization algorithm based on the multi-objective cuckoo search algorithm was extended to solve resource allocation for deploying IoT applications on fog platforms [11]. The resource allocation is carried out so as to decrease additional power consumption and to minimize the overall latency of all applications [11]. To meet the reliability requirement and avoid a single point of failure, some conditions were added to the problem's constraints. A multi-objective algorithm based on multi-objective PSO was developed to solve QoS-aware placement of micro-service-based IoT applications on fog computing infrastructure [12].
The objective functions were makespan, budget satisfaction, and network resource utilization, and the proposed algorithm generated solutions that compromise between these conflicting objectives. A reinforcement learning heuristic algorithm was presented to solve micro-service deployment of IoT applications in a hybrid edge–cloud environment [13]. The proposed algorithm is aware of challenges such as the heterogeneity of the underlying resources and the dynamic geographical information of IoT devices, and it decreases the total average waiting time of all devices in this complex hybrid environment.

Table 1 contrasts the studies in the literature, their achievements, and their existing limitations in the deployment of IoT application modules on fog and cloud infrastructures.

Table 1 Summary of the literature review

The review reveals a clear lack of studies that consider the degree of fault tolerance in module distribution and that take the coefficients of the user-requested QoS into account during the distribution and deployment of application modules. Taking these parameters into consideration has a drastic impact on the system's overall performance. Therefore, this paper presents solutions that trade off the user's requested requirements against the system's utilization objectives, so as to satisfy the two prominent stakeholders in the system, namely users and providers.

3 System models

This section presents the system models and the proposed framework for the suggested IoT application deployment scheme. For simplicity and ease of following the models, Table 2 introduces the nomenclature, notation, and the parameters and symbols used in this paper.

Table 2 Introduction of nomenclature and used parameters

3.1 System framework

The proposed system framework is depicted in Fig. 2. As shown in Fig. 2, a fog orchestrator component is placed on top of the fog layer. One of the most important responsibilities of the fog orchestrator is to select appropriate fog nodes and deploy the IoT application modules. The orchestrator decides whether to deploy a module on the cloud or fog platform according to the module's attributes. The orchestrator is logically centralized, but it can be implemented in a distributed fashion to prevent a single point of failure. In the proposed framework, the priority is to use fog computing nodes for distributing application modules; cloud datacenters are utilized only for modules that are not time-sensitive and perform periodic information processing. To deploy application modules, the fog nodes are organized as a fog mesh network to enable communication between fog nodes. The fog mesh can be viewed as a computing pattern that differs from traditional mesh networks but uses a mesh network of fog nodes, such as switches and fog servers, for the distribution process inside the network. Figure 2 also illustrates a high-level view of the fog mesh computing pattern. The architecture of the fog mesh pattern is similar to the wireless mesh network (WMN) proposed in [22], but it has its own characteristics.

Fig. 2
figure 2

System framework based on fog mesh network

To manage the appropriate deployment of application modules on fog nodes, a module management framework is placed in the orchestrator component with regard to the system's performance. As Fig. 3 demonstrates, this framework has several components. One of them is the planner component, which includes the application module manager and other components. Besides the planner, there are components for storing and retrieving information about the network and other fog resources. The gathered information is used by the planner component for application module management and for producing a deployment scheme. The functionality of the components is elaborated in the following.

Fig. 3
figure 3

Application module management framework


The components are listed and clarified below:


Application module manager The main component, which exploits the other components of the proposed framework in the decision-making process that determines how to deploy application modules on cloud or fog nodes. In multi-module applications, the module deployment decision strongly depends on several determinants, such as access to resources, the network structure, the QoS requested for the application, and load balancing. The deployment process can be performed with regard to reducing power consumption and minimizing network traffic.


Resource management This component decides based on the processing and memory capacity requirements and deploys modules according to the existing resources and the objective functions. One of its main objectives is to balance the load over host nodes. Recall that load balancing is not a goal in itself, but a technique for improving other QoS objectives such as response time, makespan, and system throughput.


Communication management Communication accounts for a significant share of the fog resources consumed by IoT applications. Managing IoT application modules requires optimizing the utilization of computing resources, memory, and communications simultaneously. Therefore, the communication management component plays a vital role in this area. This component determines which fog nodes can communicate with each other and the optimal way to share data between different nodes.


Resource discovery Resource discovery means finding fog nodes that can provide the information and services requested by each module. Trustworthy resource discovery within a given time frame is a challenge because the network is large scale and susceptible to topology changes. The information gained by this component is used to determine which nodes should be involved in module deployment. The decisions made by the planner component depend on the information discovered by the resource discovery component.


FN information This component stores all information about a fog node, such as the available sensor types, information about the modules running on the fog node, and the usage of its existing resources such as energy, memory, and CPU. The deployment of modules on each fog node depends on the information presented by the FN information component. This information is used in the planner by the resource management component.


Network information This component registers all information about the routers, neighboring fog nodes, connected end devices, the different kinds of sensors in the network, and so on. The information obtained via the resource discovery component is delivered to the network information component and stored in its memory. The module deployment decisions made by the communication management component in the planner are based on the information saved in the network information component.

3.2 Fog and communication network models

Fog model: In this paper, it is assumed that the network contains N fog nodes, heterogeneous in terms of processing and energy capacity, which are capable of storing and executing application modules. Figure 4 depicts a sample fog model specification. Each fog node has access to different kinds of sensor nodes, directly or indirectly, via wired or wireless communications. Each fog node \(fn\in F\) is represented by a tuple \(\left\{\mathrm{id},H,S,{\mathrm{fn}}_{\mathrm{type}},sensorlist\right\}\), where id is the fog node identifier, H is the hardware, S is the software, \({\mathrm{fn}}_{\mathrm{type}}\) is the fog node type in terms of performance, and \(sensorlist\) is the list of sensors available to the fog node. Since a fog node's energy consumption depends heavily on CPU power usage, concentrating on improving the CPU usage of fog nodes can enhance both resource utilization and power consumption [23]. Therefore, the resource utilization and performance of fog nodes are taken into account during the application module deployment process. In this regard, high-performance fog nodes are preferred over low-performance ones, because a high-performance fog node consumes relatively less energy than a low-performance one even though it hosts more workload to process. Note that the communication link between two different nodes is modeled by a vector (L, B), where L and B are the delay and the bandwidth of the communication link, respectively.

Fig. 4
figure 4

Example of fog model specification
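For illustration only (the paper specifies the fog model mathematically and prescribes no implementation), the fog node tuple \(\left\{\mathrm{id},H,S,{\mathrm{fn}}_{\mathrm{type}},sensorlist\right\}\) and the link vector (L, B) might be captured by small data structures such as the following sketch; all field names and example values are assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FogNode:
    """Fog node fn = {id, H, S, fn_type, sensorlist} of the fog model."""
    node_id: int
    hardware: dict                  # H: e.g. {"cpu_mips": 4000, "ram_mb": 8192}
    software: List[str]             # S: available software capabilities
    fn_type: str                    # performance class, e.g. "high" or "low"
    sensorlist: List[str] = field(default_factory=list)

@dataclass
class Link:
    """Communication link between two fog nodes, modeled as the vector (L, B)."""
    delay_ms: float                 # L: link delay
    bandwidth_mbps: float           # B: link bandwidth
```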

Communication network model In this paper, the communication network model comprises a mesh network of connected nodes. The communication network is modeled by a graph G = (FN, D). In this graph, \(\mathrm{FN}=\left\{{fn}_{1},{fn}_{2},\dots ,{fn}_{N}\right\}\) is the set of fog nodes and D = {\({d}_{ij}\) = distance between two fog nodes \({fn}_{\mathrm{i}}\) and \({fn}_{\mathrm{j}}\)} is the set of edges. In this heterogeneous model, each fog node \({fn}_{\mathrm{i}}\) is described by a vector \({fn}_{\mathrm{i}}\)=(\({\mathrm{id}}_{\mathrm{i}},{\mathrm{h}}_{i},{s}_{i},{sensorlist}_{i}\)), whose elements are the identifier, hardware, software, and the sensors equipped and available to fog node \({fn}_{\mathrm{i}}\), respectively. In this regard, Fig. 5 illustrates the communication network graph, and the matrix in Eq. (1) holds the distance between each pair of fog nodes.

$$D=\left[{d}_{ij}\right]_{N\times N}$$
(1)

To reduce the search space and support the deployment of dependent modules, full-mesh networks are first extracted from the existing primary fog network. In other words, all fully connected subgraphs, abstracted as a clique problem, are extracted, because one-hop fully connected networks can shorten the delay for time-sensitive applications. The distance matrix value is then set via Eq. (2).

$$D_{ij} = \left\{ {\begin{array}{*{20}l} {0,} \hfill & {\quad {\text{if}}\;i = j} \hfill \\ {1,} \hfill & {\quad {\text{if}}\;i \ne j} \hfill \\ \end{array} } \right.$$
(2)
Fig. 5
figure 5

Communication network graph

3.3 Application model

Current applications that process exploding volumes of big data, specifically at large scale, are no longer monolithic; they have a multi-module structure [24]. Therefore, the applications that run on fog computing infrastructure are sets of dependent modules cooperating to meet customer requests. For instance, consider a simple IoT application for a theft alarm system for a smart home, offered by a safety company to its customers. This application has three different modules. As Fig. 6 depicts, the modules are M1 (threat manager), which monitors the environment and initiates control and an intrusion alarm once a threat is detected; M2 (control center), for interpreting the gathered data and manual system control; and M3 (machine learning), for storing the data history and updating the intrusion detection model, to be deployed on strong fog nodes or cloud nodes.

Fig. 6
figure 6

Application module specifications

Figure 6 also lists the hardware resources and software capabilities requested for each module. The relationships between modules are depicted by links, which must meet the QoS constraints on delay and link bandwidth. In addition, for on-time handling of urgent threat circumstances, module M1 must have access to the necessary sensors (acoustic, motion, virtual sensor, etc.) and an actuator that triggers the safety mechanism; this interaction, from wherever module M1 is deployed to the installed sensors and actuator, must complete within 10 ms. Furthermore, fog and cloud nodes are expected to remotely access the things hosted by their neighboring nodes in the network via the API offered by the fog middleware layer [2]. The problem to be solved is how to place the three modules so that all the specified non-functional constraints on software, hardware resources, software interactions, and remote access to IoT things are met. Even for a simple example where the number of application modules is 3 and the number of nodes in a fully connected graph is 5 (3 fog nodes and 2 cloud nodes), there are up to 50 possible deployment options for mapping the software modules onto fog or cloud nodes, because more than one module can be deployed on a node depending on its available resources. It is impossible for a human user to determine a desirable deployment once the infrastructure and the number of software modules grow significantly and the search space grows exponentially. For these modules, and consequently the application, to work properly, the resource requirements and the requested QoS must be met accurately. In this paper, it is assumed that we are given a set of R IoT applications, each of which, r ϵ R, is described by a vector r = \((FTT,M,\mathrm{Appmodlist})\), where FTT is the fault tolerance threshold parameter (cf. Sect. 3.4). Each application has M different modules, which are listed in the \(Appmodlist\) variable. Accordingly, each module is described by a vector \({Appmod}_{i}\) = \((k,h,s,sensorlist,{t\mathrm{h}r}_{QoSscore})\). The parameter \({t\mathrm{h}r}_{QoSscore}\) indicates the requested QoS that must be met by the fog node hosting the considered module. A user application is modeled as a graph \(G=\left(Appmodlist,T\right)\), where \(Appmodlist\) = (\({Appmod}_{1},{Appmod}_{2},\dots ,{Appmod}_{m}\)) is the module list and \(T\) = {\({t}_{ij}\) | the amount of traffic between \({Appmod}_{\mathrm{i}}\) and \({Appmod}_{\mathrm{j}}\)} is the set of traffic pairs. Figure 7 demonstrates the communication graph, and the matrix in Eq. (3) holds the traffic between each pair of application modules.

$$T=\left[{t}_{ij}\right]_{M\times M}$$
(3)
Fig. 7
figure 7

Traffic matrix and communication network graph of application modules
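Analogously, the application descriptor of this subsection can be pictured as a small data structure; the sketch below is purely illustrative (names and types are assumptions), with the traffic matrix playing the role of T in Eq. (3).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AppModule:
    """One application module (k, h, s, sensorlist, thr_QoSscore)."""
    module_id: int
    hardware: dict                  # h: requested CPU, RAM, storage
    software: List[str]             # s: requested software capabilities
    sensorlist: List[str]           # sensors the module must be able to reach
    thr_qos_score: float            # minimum acceptable QoS score, cf. Eq. (16)

@dataclass
class IoTApplication:
    """One user application r = (FTT, M, Appmodlist)."""
    ftt: float                                        # fault tolerance threshold
    modules: List[AppModule] = field(default_factory=list)
    # traffic[i][j] = t_ij, traffic exchanged between modules i and j (Eq. 3)
    traffic: List[List[float]] = field(default_factory=list)

    @property
    def m(self) -> int:
        return len(self.modules)                      # M: number of modules
```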

3.4 Reliability model

Assume that the fog orchestrator component receives R execution requests from the fog layer for applications requested by different customers. The requested user applications are modeled by Eq. (4)

$$UApp=\left\{{UApp}_{i}|0<i\le R\right\}$$
(4)

In this model, each user application \({UApp}_{i}\) is determined by the specification vector shown in Eq. (5).

$${UApp}_{i}=\left({FTT}_{i},{M}_{i},{Appmodlist}_{i}\right)$$
(5)

where the parameters \({M}_{i}\) and \({FTT}_{i}\) are the number of modules and the fault tolerance threshold determined for application \({UApp}_{i}\), respectively. The fault tolerance threshold is a number that the customer submits to the system along with the other application information. In addition, as Eq. (6) shows, \({Appmodlist}_{i}\) specifies the list of modules belonging to the ith requested application, namely \({Application}_{i}\).

$${Appmodlist}_{i}=\left\{{Appmod}_{k} | 0<k\le {M}_{i}\right\}$$
(6)

where \({Appmod}_{k}\) indicates the kth module of the application. This parameter is elaborated in Eq. (7).

$$Appmod_{k} = { }\left\{ {Appmod_{k} \in Appmodlist_{i} { }|{ }Appmod_{k} = k,H,S,sensorlist,t{\text{h}}r_{QoSscore} } \right\}$$
(7)

The aforementioned parameters requested by a customer are registered in the fog orchestrator component. The orchestrator then applies this information when deciding on the module deployment process. In this paper, we assume that the probability of more than one fog node crashing simultaneously is very low. In addition, we consider the crashing probability to be the same for all fog nodes. To assess the execution reliability of a requested application on specific fog nodes, the average effect of fog node crashes on the customer's application is compared with the FTT parameter submitted by the customer. Equation (8) calculates this effect [25].

$${FTT}_{i}=\sum_{L=1}^{{n}_{L}}{FP}_{L}\times \frac{{m}_{i,L}}{{M}_{i}}$$
(8)

The parameter \({FTT}_{i}\) denotes the average fault tolerance of the customer's application running on fog nodes. Here, \({FP}_{L}\) is the fault probability of fog node L, and \({m}_{i,L}\) is the number of modules of application \({UApp}_{i}\) deployed on fog node L. In addition, \({M}_{i}\) is the total number of modules in application \({UApp}_{i}\), and \(\frac{{m}_{i,L}}{{M}_{i}}\) is the extent to which application \({UApp}_{i}\) is affected once fog node L crashes. Moreover, \({n}_{L}\) is the number of fog nodes in the network involved in running application \({UApp}_{i}\). If we consider the same failure probability for each fog node, namely \({FP}_{L}=\frac{1}{{n}_{L}}\), Eq. (8) simplifies to Eq. (9) [25].

$${FTT}_{i}=\frac{1}{{n}_{L}}\times \sum_{l=1}^{{n}_{L}}\frac{{m}_{i,l}}{{M}_{i}}=\frac{1}{{n}_{L}}$$
(9)

As explained earlier, there is a meaningful relationship between the fault tolerance threshold of an application running on fog nodes and the number of underlying fog nodes utilized. In other words, a minimum number of fog nodes \({n}_{L}\) is needed to meet the fault tolerance threshold \({FTT}_{i}\) of application \({UApp}_{i}\). Equation (10) therefore gives the number of fog nodes necessary to meet the fault tolerance objective when running application \({UApp}_{i}\).

$${n}_{L}>\frac{1}{{FTT}_{i}}$$
(10)
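Equation (10) translates directly into a small helper that returns the smallest admissible number of fog nodes; this is a minimal sketch assuming, as in Eq. (9), an equal failure probability for all fog nodes.

```python
import math

def min_fog_nodes(ftt: float) -> int:
    """Smallest n_L satisfying n_L > 1/FTT, cf. Eq. (10)."""
    if ftt <= 0:
        raise ValueError("FTT must be positive")
    return math.floor(1.0 / ftt) + 1

# Example: a customer-submitted FTT of 0.25 requires at least 5 fog nodes.
print(min_fog_nodes(0.25))   # -> 5
```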

3.5 Deployment model

The module deployment problem can be solved in both static and dynamic fashions. As the number of fog nodes is considered fixed within a time frame, the deployment decision is made in a predetermined time window. The optimization deployment scheme can therefore be re-executed periodically to keep pace with the dynamic nature of the IoT and fog environment. There are two stakeholders in the system under study, namely the customer and the provider, each with its own objectives. The objectives may conflict, so the proposed scheme must compromise between them. In this paper, to meet reliability, which reflects the user's perspective on the requested application, the minimum number of fog nodes is calculated for module deployment. On the other hand, to minimize the total power consumption, which reflects the provider's perspective, the modules should be distributed over the calculated fog nodes; however, this is not always possible because of the varying requirements of modules, heterogeneous fog node specifications, and different constraints. The deployment process may therefore utilize more fog nodes, which increases not only power consumption but also bandwidth wastage. A trade-off between the stakeholders' objectives is therefore necessary. To tackle the module deployment problem, all full-mesh networks are extracted from the initial fog network graph such that the fog nodes in an extracted full mesh can meet all requirements of the requested applications in terms of delay, bandwidth, and sensors. It is also assumed that if a sensor is not supported by a fog node, other fog nodes in the extracted full-mesh subnetwork can satisfy the requirement through a single-hop connection. In this line, the decision variable \({d}_{ij}\) indicates whether module \({Appmod}_{i}\) is deployed on fog node \({fn}_{j}\) or not. To decrease the network traffic load, the distance matrix between fog nodes in the network graph and the traffic matrix between each pair of modules must be calculated. Since each communication link has limited delay and bandwidth, the traffic rate between application modules is bounded by the communication link capacities. Equation (11) states this communication constraint.

$$\sum_{{Appmod}_{i}\in {fn}_{m}}\sum_{{Appmod}_{j}\in {fn}_{n}}{b}_{ij}\times {l}_{ij}<{B}_{mn}\times {L}_{mn}$$
(11)

Among the QoS parameters, latency and bandwidth are taken into consideration. QoS parameters are evaluated with regard to the kinds of modules in an application and the gap between the QoS requested by the customer and the QoS offered by the provider [26]. To this end, the aforementioned QoS attributes are measured by Eq. (12), which finally yields a score in the interval [0..1].

$${Score}_{QoS}=\sum_{q=1}^{d}{\alpha }_{q}\times {Score}_{QoS}^{q}$$
(12)

In Eq. (12), d and \({\alpha }_{q}\) are the number of QoS attributes and the weight of the qth quality attribute in the overall QoS, respectively. In addition, the score of the qth quality attribute, \({Score}_{QoS}^{q}\), lies in [0..1] and is calculated via Eq. (13) [27].

$$Score_{{QoS}}^{q} = \left\{ {\begin{array}{*{20}l} 0 \hfill & {\quad {\text{if}}\;diff\left( {r_{q} ,o_{q} } \right) = \infty } \hfill \\ 1 \hfill & {\quad {\text{if}}\;diff\left( {r_{q} ,{\text{~}}o_{q} } \right)~ = 0} \hfill \\ {f\left( {diff\left( {r_{q} ,o_{q} } \right)} \right)} \hfill & {\quad {\text{else}}} \hfill \\ \end{array} } \right.$$
(13)

where the function \(diff\left({r}_{q},{o}_{q}\right)\) returns the difference between the requested amount of the qth quality for a module (\({r}_{q}\)) and the amount of the qth quality offered by the provider (\({o}_{q}\)). If the difference is zero, the score is one; if the difference is infinite, the score is zero. In all other cases, the score is obtained via the degree-of-satisfaction function calculated by Eq. (14) [27].

$$f\left(diff\left({r}_{q},{o}_{q}\right)\right)={e}^{-diff\left({r}_{q},{o}_{q}\right)}$$
(14)

The function diff(.,.) is calculated by Eq. (15).

$$diff\left( {r_{q} , o_{q} } \right) = \left\{ {\begin{array}{*{20}l} \infty \hfill & {\quad {\text{if}} \left( {\beta = 0\; {\text{and}}\; \left( {r_{q} - o_{q} } \right) \times type_{q} > 0} \right)} \hfill \\ 0 \hfill & {\quad {\text{if}} \left( {\beta \ge 0\;{\text{and}}\;\left( {r_{q} - o_{q} } \right) \times type_{q} \le 0} \right) } \hfill \\ {\left( {r_{q} - o_{q} } \right) \times type_{q} } \hfill & {\quad {\text{if}} \left( {\beta > 0\;{\text{and}}\;\left( {r_{q} - o_{q} } \right) \times type_{q} > 0} \right)} \hfill \\ \end{array} } \right.$$
(15)

where the parameter \(\beta\) ϵ [0..1] expresses the level of flexibility of a module with respect to the qth quality attribute. For instance, \(\beta\) = 0 means that the particular service requirement must be fully met, with a high degree of obligation. In addition, \({type}_{q}\) ϵ {− 1, 1} takes the value 1 or − 1 according to the type of the qth quality attribute: 1 for capacity-based quality parameters and − 1 for time-based quality parameters. For instance, regarding the latency parameter, if a module requires a latency of 4 ms and the fog offers a latency of 2 ms, then \(\left({r}_{q}-{o}_{q}\right)\times {type}_{q}\) = (4 − 2) \(\times\) (− 1) < 0, which means \(diff\left({r}_{q},{o}_{q}\right)\) = 0 according to Eq. (15).

In the deployment process, the score obtained via Eq. (12), which expresses the capability of a fog node to execute an application module, must be at least the user-determined QoS threshold. This inequality constraint is presented in Eq. (16) [27].

$$Score_{QoS} \ge t{\text{h}}r_{QoSscore}$$
(16)

In addition, there are some other constraints that will be explained in problem statement section.
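To make Eqs. (12)–(16) concrete, the sketch below computes the per-attribute gap, the per-attribute score, the weighted QoS score, and the admission check against \(t{\text{h}}r_{QoSscore}\). Function and parameter names are illustrative and not taken from the paper.

```python
import math

def diff(r_q, o_q, beta, type_q):
    """Gap between requested (r_q) and offered (o_q) quality, Eq. (15).

    type_q = +1 for capacity-based attributes, -1 for time-based ones;
    beta in [0, 1] is the module's flexibility toward this attribute.
    """
    gap = (r_q - o_q) * type_q
    if beta == 0 and gap > 0:
        return math.inf
    if gap <= 0:
        return 0.0
    return gap

def attribute_score(r_q, o_q, beta, type_q):
    """Per-attribute score in [0, 1], Eqs. (13)-(14)."""
    d = diff(r_q, o_q, beta, type_q)
    if d == math.inf:
        return 0.0
    if d == 0:
        return 1.0
    return math.exp(-d)             # degree-of-satisfaction function f

def qos_score(attributes, weights):
    """Weighted QoS score of a fog node for a module, Eq. (12)."""
    return sum(w * attribute_score(*a) for w, a in zip(weights, attributes))

def acceptable(attributes, weights, thr_qos_score):
    """Admission check of Eq. (16): the score must reach the user threshold."""
    return qos_score(attributes, weights) >= thr_qos_score

# Worked example from the text: requested latency 4 ms, offered 2 ms
# (time-based attribute, type_q = -1) gives diff = 0 and a score of 1.
print(attribute_score(4, 2, beta=0.5, type_q=-1))   # -> 1.0
```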

4 Problem statement

As explained earlier, the static module deployment problem is formulated as a multi-objective optimization problem that simultaneously minimizes both the link wastage rate (LWR) and the total power consumption (TPC). To this end, we present the two objective models mathematically.

4.1 Link wastage model

To model the link wastage rate (LWR), the cost associated with the communication of each pair of fog nodes is calculated by Eq. (17).

$${Cost}_{ij}=\sum_{{Appmod}_{m}\in {fn}_{i}}\sum_{\begin{array}{c}{Appmod}_{n}\in {fn}_{j}\\ {fn}_{i}\ne {fn}_{j}\end{array}}{d}_{ij}\times {t}_{mn}$$
(17)

The total link wastage rate is therefore equal to the sum of all mutual traffic exchanged between each pair of modules distributed over different fog nodes, as modeled by Eq. (18).

$$\mathrm{Link Wastage Rate }({UApp}_{Cost}) =\sum_{{fn}_{i} \in \mathrm{F}}\sum_{\begin{array}{c}{fn}_{j} \in F\\ i\ne j\end{array}}{Cost}_{ij}$$
(18)
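As a rough sketch (argument names are assumptions), Eqs. (17)–(18) amount to summing the distance-weighted traffic of every pair of modules that end up on different fog nodes:

```python
def link_wastage_rate(placement, distance, traffic):
    """Total link wastage rate, cf. Eqs. (17)-(18).

    placement[m] -> index of the fog node hosting module m,
    distance[i][j] -> d_ij between fog nodes i and j,
    traffic[m][n] -> t_mn exchanged between modules m and n.
    """
    total = 0.0
    num_modules = len(placement)
    for m in range(num_modules):
        for n in range(num_modules):
            i, j = placement[m], placement[n]
            if i != j:               # only inter-node traffic wastes link capacity
                total += distance[i][j] * traffic[m][n]
    return total
```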

4.2 Power consumption model

Generally, several factors affect the total power consumption of the system under study, such as the computation workloads, the communication technology, the amount of exchanged traffic, and the distance between each pair of fog nodes. To calculate the power consumption of each fog node, both the power consumed by running modules on the fog node and the power consumed by information exchange between each pair of fog nodes are considered. The power consumption of each fog node directly depends on its resource utilization [28]. Therefore, the average normalized resource utilization of a fog node \({fn}_{i}\) is calculated by Eq. (19).

$${U}_{{fn}_{i}}^{res}=\frac{{W}_{1}.\sum_{j}^{{fn}_{i}}\frac{{r}_{{Appmod}_{j}}^{CPU}}{{R}_{{fn}_{i}}^{CPU}}+{W}_{2}.\sum_{j}^{{fn}_{i}}\frac{{r}_{{Appmod}_{j}}^{RAM}}{{R}_{{fn}_{i}}^{RAM}}}{2}$$
(19)

Note that the two real-valued coefficients \({W}_{1}\) and \({W}_{2}\), where 0 ≤ \({W}_{1}\) ≤ 1, 0 ≤ \({W}_{2}\) ≤ 1, and \({W}_{1}\) + \({W}_{2}\) = 1, indicate the importance of the resources contributing to the total power consumption. As the larger portion of power consumption relates to the processing units rather than the main memory, in this paper the CPU utilization dominates the power consumption, namely \({W}_{1}\) = 0.9 and \({W}_{2}\) = 0.1 [6]. Equation (20) then calculates the power consumption (\({P}_{{fn}_{i}}^{res}\)) due to the used resources of each node (\({fn}_{i}\)) that hosts modules [6, 28].

$${P}_{{fn}_{i}}^{res}=(\left({P}_{max}-{P}_{min}\right)\times {U}_{{fn}_{i}}^{res}+{P}_{min})$$
(20)

where \({P}_{min}\) and \({P}_{max}\) are the minimum and maximum power consumption of a processing node in the lowest and highest utilization circumstances, respectively. In addition, a binary decision variable \({y}_{{fn}_{i}}\) indicates whether the processing node \({fn}_{i}\) is active or not; it is multiplied by \({P}_{{fn}_{i}}^{res}\) in Eq. (23). The power consumption due to data transfer over the communication links is calculated by Eq. (21).

$${P}_{{fn}_{i}}^{tr}=\sum_{{fn}_{i}\ne {fn}_{j}}{t}_{{Appmod}_{m},{Appmod}_{n}}\times {P}_{tr}$$
(21)

The parameter \({P}_{tr}\) is the power consumption per unit of exchanged traffic. Note that this power is incurred only when the modules are deployed on different computing nodes. Consequently, the total power consumption of a fog node is calculated via Eq. (22); the first term accounts for resource utilization, whereas the second term accounts for the traffic transfer power cost.

$${P}_{{fn}_{i}}={P}_{{fn}_{i}}^{res}+ {P}_{{fn}_{i}}^{tr}$$
(22)
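The following sketch strings Eqs. (19)–(22) together for one fog node; the numeric defaults for \({P}_{min}\), \({P}_{max}\), and \({P}_{tr}\) are placeholders rather than values from the paper, and the data layout is an assumption.

```python
def node_utilization(modules, node_cap, w1=0.9, w2=0.1):
    """Average normalized CPU/RAM utilization of a fog node, Eq. (19)."""
    cpu = sum(m["cpu"] for m in modules) / node_cap["cpu"]
    ram = sum(m["ram"] for m in modules) / node_cap["ram"]
    return (w1 * cpu + w2 * ram) / 2

def node_power(modules, node_cap, inter_node_traffic,
               p_min=100.0, p_max=250.0, p_tr=0.05):
    """Power of one active fog node, Eq. (22) = Eq. (20) + Eq. (21).

    modules: resource demands of the modules hosted on this node;
    inter_node_traffic: traffic volumes exchanged with modules on other nodes.
    """
    u = node_utilization(modules, node_cap)
    p_res = (p_max - p_min) * u + p_min          # Eq. (20)
    p_transfer = sum(inter_node_traffic) * p_tr  # Eq. (21)
    return p_res + p_transfer
```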

4.3 Multi-objective QoS-aware optimization deployment model

Now that the two objective models have been defined, we formulate the module deployment problem as a multi-objective QoS-aware optimization model. Equations (23–24) state the objective functions, whereas Eqs. (25–33) state the problem constraints.

$${\varvec{min}}\,{\varvec{TPC}} = Min \mathop \sum \limits_{{fn_{i} \in F}} P_{{fn_{i} }} \times y_{{fn_{i} }}$$
(23)
$${\varvec{min}}\,{\varvec{LWR}} = Min{ }\mathop \sum \limits_{{fn_{i} { } \in {\text{ F}}}} \mathop \sum \limits_{{\begin{array}{*{20}c} { fn_{j} \in F} \\ {i \ne j} \\ \end{array} }} Cost_{ij}$$
(24)

Subject to:

$$n_{L} > \frac{1}{{FTT_{k} }},\quad k = 1,2, \ldots ,R$$
(25)
$$\mathop \sum \limits_{{Appmod_{m} \in { }fn_{i} }} \mathop \sum \limits_{{Appmod_{n} \in { }fn_{j} }} b_{mn} \times l_{mn} < { }B_{ij} \times { }L_{ij}$$
(26)
$$Score_{QoS} \ge t{\text{h}}r_{QoSscore} ,\quad \forall {\text{Appmod}} \in UApp,{ }fn_{i} ,fn_{j} \in F$$
(27)
$$\mathop \sum \limits_{{{\text{Appmod}} \in UApp}} x_{{Appmod,fn_{i} }} \cdot s_{Appmod} \le S_{{fn_{i} }} ,\quad \forall fn_{i} \in F$$
(28)
$$x_{{Appmod,fn_{i} }} \le { }y_{{fn_{i} }} ,\quad \forall {\text{Appmod}} \in UApp,fn_{i} \in F$$
(29)
$$\mathop \sum \limits_{{{\text{Appmod}} \in UApp}} x_{{Appmod,fn_{i} }} \cdot r_{Appmod} \le R_{{fn_{i} }} ,\quad \forall fn_{i} \in F$$
(30)
$$\mathop \sum \limits_{{fn_{i} \in F}} x_{{Appmod,fn_{i} }} = 1,\quad \forall Appmod \in UApp$$
(31)
$$x_{{Appmod,fn_{i} }} \in \left\{ {0,1} \right\}$$
(32)
$$y_{{fn_{i} }} \in \left\{ {0,1} \right\}$$
(33)

Note that Eq. (28) states that the sensors requested by the modules deployed on a fog node cannot exceed the sensors available on that node. Equation (29) indicates that a module can be deployed on a fog node only if that node is active. In addition, the total resources requested by the modules deployed on a fog node cannot exceed its capacity; Eq. (30) captures this. Moreover, each module must be placed on exactly one fog node, which is enforced by the constraint in Eq. (31). Furthermore, Eqs. (32–33) define the two binary decision variables indicating whether a module is deployed on a fog node and whether a fog node is active, respectively.

5 Proposed MOGA for module deployment

Since the stated multi-objective optimization problem is computationally NP-hard, finding an optimal solution for large-scale instances is impossible in practice. We therefore apply a hybrid and effective two-phase approach. First, a heuristic algorithm reduces the very large search space to a concise region in a short time. Second, this reduced search space is given to a meta-heuristic algorithm to find non-dominated solutions, because the problem is a multi-objective optimization and no single solution meets all objectives; to this end, we use the dominance concept [29]. To solve the aforementioned combinatorial optimization problem, we adopt the genetic algorithm (GA); we examined several meta-heuristic optimization algorithms and settled on a genetic-based algorithm in this project because of the results obtained. Since the search space of the stated problem is discrete in nature, the GA is flexible to customize for discrete search spaces. In the multi-objective domain, the well-known GA-based NSGA-II has had many successes in solving combinatorial problems [29]. Therefore, we customize the multi-objective GA to solve the module deployment problem. To do so, we employ two operators, crossover for exploration and mutation for exploitation; in this way, the algorithm escapes getting stuck in local optima. Figure 8 depicts the block diagram of the proposed algorithm. The algorithm receives the input parameters relevant to module deployment, such as the resources requested by the applications' modules, the communication pattern between modules, the fog infrastructure configuration, the population size, and the maximum number of iterations. It outputs the set of non-dominated deployment schemes with respect to the objective functions.

Fig. 8
figure 8

Block diagram of proposed algorithm

In the following, encoding schema, operators, fitness function, non-dominated sorting, and the crowding distance concepts are elaborated and discussed.

5.1 Encoding schema

Among the most important elements of a GA are the concepts of genes and chromosomes and how the problem is encoded. There may be several encoding schemas for a single problem, each with its own performance; proposing an efficient encoding schema has a drastic impact on the overall algorithm performance [26, 30, 31]. Each chromosome represents a possible solution. Here, a chromosome contains M genes, where M is the number of modules; each gene corresponds to one module and indicates on which of the N fog nodes it is deployed. For this reason, each gene takes an integer value in the interval [1..N], as in [6, 26, 32]. Figure 9 demonstrates a feasible solution scheme.

Fig. 9
figure 9

Example for encoded of feasible solution to a chromosome
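A chromosome is thus simply a list of fog node indices, one per module; the following minimal sketch (helper name assumed) produces a random individual of this form.

```python
import random

def random_chromosome(num_modules, num_nodes, rng=random):
    """One gene per module; each gene is the index (1..N) of the hosting fog node."""
    return [rng.randint(1, num_nodes) for _ in range(num_modules)]

# Example: 5 modules placed over 4 candidate fog nodes of a full-mesh subnetwork.
print(random_chromosome(5, 4))   # e.g. [2, 4, 1, 4, 3]
```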

5.2 Preprocessing

Since there are dependencies between an application's modules, the fog nodes selected for deployment must be able to communicate with each other and must meet the QoS requirements in the network domain. Extracting full meshes from the initial network graph, which is abstracted as the famous clique problem, has multiple advantages. First, it shortens the search space for extracting the optimal deployment scheme. Second, if a fog node does not support a sensor required by an application module, the required sensor can be reached via other fog nodes in the full-mesh subnetwork over a one-hop connection. Algorithm 1 is designed to extract the full-mesh subnetworks that can support all modules' requirements.

figure a
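The growth idea behind Algorithm 1 can be pictured with the simplified sketch below; it is not the authors' pseudocode, and it assumes an adjacency-set representation of the fog network.

```python
def full_mesh_subnetworks(adjacency, k):
    """Grow one-hop fully connected (full-mesh) subnetworks up to size k.

    adjacency[i] is the set of fog nodes directly connected to node i.
    """
    # start from every directly connected pair of fog nodes
    cliques = {frozenset((i, j)) for i, nbrs in adjacency.items() for j in nbrs}
    for _ in range(k):
        grown = set()
        for clique in cliques:
            for candidate in adjacency:
                # add a node only if it is connected to every node of the clique
                if candidate not in clique and clique <= adjacency[candidate]:
                    grown.add(clique | {candidate})
        cliques |= grown             # duplicates collapse automatically via sets
    return [sorted(c) for c in cliques]

# Example: nodes 1-3 form a triangle, node 4 only reaches node 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(full_mesh_subnetworks(adj, k=3))   # contains [1, 2, 3] but not [1, 2, 3, 4]
```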

Algorithm 1 receives the fog set specifications and returns all full-mesh networks. It uses the data structure FullMesh_FN_SubNetworks as a vertical array in which every row holds a linked list of elements. At the outset, all pairs of fog nodes that communicate directly with each other are added to this data structure; in other words, for each fog node \({F}_{i}\), FullMesh_FN_SubNetworks[\({F}_{i}\)].LinkedList is filled with the fog nodes directly connected to \({F}_{i}\). Afterward, the algorithm enters a while-loop that iterates until the specified criterion is met. The termination criterion is the desired clique size (K); in other words, the main loop iterates K times. In each round, every row of the data structure is examined for every node: if the node is not already in the list but is connected to all nodes in the list, it is added to the linked list. After each round, duplicated records are discarded. Finally, all full-mesh subnetworks are returned. In this way, the search space is reduced and condensed into a concise region. Since the effective statements of Algorithm 1 are in the while-loop, its time complexity is O(K·\({N}^{2}\)). At this point, we have several full-mesh subnetworks, but a question arises: which of them is well suited for the modules of the requested IoT applications? To meet the problem constraints formulated in Eqs. (25–32), after the extraction of the full-mesh subnetworks, Algorithm 2 selects a number of nodes to construct the target subnetwork for hosting the requested modules, provided the constraints are satisfied.

figure b

In the first line of Algorithm 2, the minimum number of requested fog nodes is determined to fulfill the condition of Eq. (25). In the second line, the minimum number of fog nodes is estimated with regard to their existing capacity and the resources requested by the modules. If the estimated number of fog nodes exceeds the number of existing fog nodes, all available fog nodes are assigned to the Number_of_Fog_Nodes variable. In line 6, the different full-mesh permutations with the estimated number of fog nodes are extracted from FullMesh_FN_SubNetworks via the nchoosek(.) function. To meet the constraint of Eq. (26), in line 11 the function Check_Delay_DW(.) compares the minimum bandwidth and maximum delay of the subnetwork in \({Row}_{i}\) with the maximum bandwidth and minimum delay requested by the modules; if satisfied, it returns true. Lines 12–15 address the remaining constraint of the stated problem, i.e., Eq. (27). If both Delay_BW_Status and Check_QoS_Score return true, the full-mesh subnetwork is registered in Candidate_SubNetworks and returned as a candidate full-mesh subnetwork. All calculation and checker functions of Algorithm 2 are simple and belong to O(1), except for the nchoosek(.) function in line 6 and the main loop in line 10, which belong to O(N) and O(M), respectively. Therefore, the time complexity of Algorithm 2 is O(N + M).

5.3 Initialization step

Similar to other meta-heuristic algorithms, after the preprocessing phase that satisfies Eqs. (25–27), our proposed algorithm starts with an initial population. The initial population is produced randomly while satisfying the constraints stated in Eqs. (29–31). Note that, while the algorithm runs and produces possible new solutions, some solutions violate the constraints and become infeasible; for this reason, Algorithm 3 is proposed to repair them so that all individuals in the population can contribute toward the final solutions. It finds an overloaded fog node, selects the Appmod with minimum dependency on other modules, and migrates it, preferably to an active fog node with sufficient resources to host it. Afterward, the solution and the related data structures are updated.

figure c

The time complexity of Algorithm 3 is O(N·PopSize) because the two nested for-loops are its most costly statements.
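In the spirit of Algorithm 3, and only as a single-resource sketch with assumed data structures (fog nodes indexed from 0), an overloaded node repeatedly gives up its least-dependent module to a node that still has spare capacity:

```python
def repair(chromosome, capacity, demand, traffic):
    """Repair an infeasible placement by migrating modules off overloaded nodes.

    chromosome[m]: fog node hosting module m; capacity/demand: per-node and
    per-module CPU amounts; traffic[m][n]: traffic between modules m and n.
    """
    def load(node):
        return sum(demand[m] for m, n in enumerate(chromosome) if n == node)

    for node in set(chromosome):
        while load(node) > capacity[node]:
            hosted = [m for m, n in enumerate(chromosome) if n == node]
            # module with the least dependency (total traffic) on other modules
            victim = min(hosted, key=lambda m: sum(traffic[m]))
            # prefer a node that is already active and has enough spare capacity
            targets = [n for n in set(chromosome)
                       if n != node and load(n) + demand[victim] <= capacity[n]]
            if not targets:
                targets = [n for n in range(len(capacity))
                           if n != node and load(n) + demand[victim] <= capacity[n]]
            if not targets:
                break                # this node cannot be repaired any further
            chromosome[victim] = targets[0]
    return chromosome
```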

5.4 Selection strategy

There are several strategies for selecting solutions, such as roulette wheel, rank-biased, and tournament selection. All of them are designed so that fitter solutions have a higher probability of being selected. Algorithm 4 is dedicated to this issue. It first selects two groups of solutions, one from the first Pareto front and the other from the second Pareto front of the population. Then, one individual is randomly selected from each group. The one with the larger crowding distance is returned, because a larger crowding distance indicates a higher potential for search diversity [29]. Clearly, its time complexity is O(1).

figure d
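A compact sketch of the selection rule described for Algorithm 4 (solution indices and a crowding-distance map are assumed inputs):

```python
import random

def select_parent(front1, front2, crowding, rng=random):
    """Pick one candidate from the first and one from the second Pareto front,
    then keep the one with the larger crowding distance."""
    a = rng.choice(front1)
    b = rng.choice(front2) if front2 else rng.choice(front1)
    return a if crowding[a] >= crowding[b] else b
```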

5.5 Crossover Operator

To explore the search space, the crossover operator is applied to produce new offspring, which are added to the current population with a predetermined probability. A single-point procedure is used for crossover. Figure 10 exemplifies a single-point crossover.

Fig. 10
figure 10

Application of crossover for exploration

Algorithm 5 presents the crossover (exploration) operator.

figure e

Clearly, the time complexity of Algorithm 5 is Θ(nCrossover), which is O(PopSize).
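Single-point crossover over the integer encoding of Sect. 5.1 can be sketched as follows (a minimal illustration, not the authors' listing):

```python
import random

def single_point_crossover(parent1, parent2, rng=random):
    """Children swap their gene tails after a random cut point."""
    cut = rng.randint(1, len(parent1) - 1)
    child1 = parent1[:cut] + parent2[cut:]
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2

# Example with two placements of 5 modules over fog nodes.
print(single_point_crossover([1, 1, 2, 3, 2], [3, 2, 2, 1, 1]))
```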

5.6 Mutation Operator

Mutation is applied in the GA to avoid premature saturation and early convergence during execution. Unlike crossover, mutation randomly alters genes within a single parent. In each generation, a number of new solutions, determined by the mutation percentage parameter, are added to the population. Figure 11 depicts how the mutation operator works. In the main proposed algorithm, one chromosome is selected as a parent by the tournament method; then, two random genes of the selected parent are exchanged. The newborn individual is added to the population. Algorithm 6 is dedicated to the mutation operator.

Fig. 11
figure 11

Application of mutation for exploitation

Algorithm 6 (listing)

Clearly, the time complexity of Algorithm 6 is θ(nMutation), which is O(PopSize).
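A corresponding sketch of the swap-style mutation described above, again over a list-encoded chromosome:

```python
import random


def swap_mutation(parent):
    """Exchange the assignments of two randomly chosen genes (Algorithm 6 sketch)."""
    child = list(parent)
    i, j = random.sample(range(len(child)), 2)       # two distinct gene positions
    child[i], child[j] = child[j], child[i]
    return child
```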

5.7 Fitness function

Generally, one of the most important issues in evolutionary computation is evaluating the competency of each candidate solution. Since each solution corresponds to one deployment scheme, the objective functions or fitness values of Eqs. (23) and (24), namely total power consumption and total link wastage rate, are calculated from the information carried by the candidate solution. Algorithm 7 receives a candidate solution and returns the two objective function values as a result.

Algorithm 7 (listing)

Clearly, the time complexity of Algorithm 7 is O(PopSize).
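The exact objective definitions are those of Eqs. (23) and (24); the sketch below only illustrates the evaluation flow with simplified placeholder models (a linear idle-to-max power interpolation and an unused-bandwidth fraction per loaded link) and should not be read as the paper's precise formulas.

```python
def evaluate(solution, node_util, idle_power, max_power, link_bw, used_bw):
    """Return (TPC, LWR) for one deployment (Algorithm 7 sketch, simplified models).

    solution   : list, solution[m] = fog node hosting module m
    node_util  : dict node -> CPU utilization in [0, 1] induced by the deployment
    idle_power, max_power : dicts node -> Watts (placeholder linear power model)
    link_bw    : dict (i, j) -> available bandwidth of the link
    used_bw    : dict (i, j) -> bandwidth consumed by inter-module traffic
    """
    active_nodes = set(solution)
    # placeholder linear power model summed over active nodes only
    tpc = sum(idle_power[n] + (max_power[n] - idle_power[n]) * node_util[n]
              for n in active_nodes)
    # placeholder link wastage: unused fraction of every link that carries traffic
    lwr = sum((link_bw[e] - used_bw[e]) / link_bw[e]
              for e in used_bw if link_bw[e] > 0)
    return tpc, lwr
```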

5.8 Non-dominated sorting

In multi-objective meta-heuristic optimization, the population evolves under a strategy that discards inferior solutions and preserves superior ones; in other words, low-quality solutions are gradually removed. To this end, the dominance concept is applied and non-dominated solutions are retained in the final result set, called the Pareto set [29]. Similar to NSGA-II, our algorithm exploits a non-dominated sorting procedure to classify individuals according to the dominance concept [29]. Note that individuals in the same class, hereafter called a Frontier, do not dominate each other, but they dominate the inferior individuals in the lower-ranked Frontiers; the best individuals therefore belong to the first Frontier. Algorithm 8 calculates all Frontiers of a given population.

Algorithm 8 (listing)

Since the dominant statements of Algorithm 8 lie in a nested for-loop, its time complexity is O(\({PopSize}^{2}\)).
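The classification can be realized with the standard fast non-dominated sorting of NSGA-II; the sketch below assumes both objectives are minimized and individuals are dicts holding an 'objs' tuple.

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective and strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))


def non_dominated_sort(population):
    """Assign a Pareto front index ('front') to every individual (Algorithm 8 sketch)."""
    fronts = [[]]
    for p in population:
        p['dominated'] = []                  # individuals that p dominates
        p['dom_count'] = 0                   # how many individuals dominate p
        for q in population:
            if dominates(p['objs'], q['objs']):
                p['dominated'].append(q)
            elif dominates(q['objs'], p['objs']):
                p['dom_count'] += 1
        if p['dom_count'] == 0:
            p['front'] = 1
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in p['dominated']:
                q['dom_count'] -= 1
                if q['dom_count'] == 0:
                    q['front'] = i + 2
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]                       # drop the trailing empty front
```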

5.9 Crowding distance

In the multi-objective domain, another important concern is how widely the solutions are scattered over the search space: the better the distribution, the higher the probability of finding better solutions. For this reason, we apply the crowding distance strategy drawn in Algorithm 9, which helps avoid early convergence of solutions during the evolutionary process.

Algorithm 9 (listing)

It is clear that the time complexity of Algorithm 9 is O(PopSize.Log(PopSize)), which is dominated by the sorting procedure.
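A sketch of the usual crowding-distance computation for one Frontier, in which boundary solutions receive infinite distance so that they are always preserved; the sorting step is what gives the O(PopSize.Log(PopSize)) cost noted above.

```python
def crowding_distance(front):
    """Compute the crowding distance of each individual in one front (Algorithm 9 sketch)."""
    for ind in front:
        ind['crowding'] = 0.0
    n_obj = len(front[0]['objs'])
    for k in range(n_obj):
        front.sort(key=lambda ind: ind['objs'][k])   # sorting dominates the cost
        span = front[-1]['objs'][k] - front[0]['objs'][k]
        front[0]['crowding'] = front[-1]['crowding'] = float('inf')
        if span == 0:
            continue
        for i in range(1, len(front) - 1):
            front[i]['crowding'] += (front[i + 1]['objs'][k] -
                                     front[i - 1]['objs'][k]) / span
    return front
```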

5.10 Description of proposed MOGA

The problem of IoT application module deployment is solved by calling Algorithm 10. This algorithm receives the problem specification of both the application and the fog infrastructure and returns the non-dominated solutions with respect to the two defined objective functions. It first calls Algorithm 1 to extract the full-mesh subnetworks. Then, Algorithm 2 is called to refine these subnetworks so that the target full-mesh subnetworks cover all application requirements. In line 5 of Algorithm 10, the random population is generated. To repair probably infeasible solutions, Algorithm 3 is called in line 8, and Algorithm 7 is called in line 9 to calculate the fitness values of each individual. Afterward, Algorithms 8 and 9 are called to find the initial non-dominated solutions and compute their crowding distances. The main loop of Algorithm 10 starts at line 13 and ends at line 33; it is iterated until the termination criteria are met. At the outset of the main loop, Algorithm 4 is called to select good individuals for crossover and, likewise, for mutation. After both operations, Algorithm 3 is run to repair probably defective chromosomes. Then, Algorithms 8 and 9 are called to find the non-dominated solutions and rank them based on crowding distance. Since the population may exceed its initial size after adding the newborn individuals produced by crossover and mutation, the most suitable individuals are selected for the next round so that the population returns to its initial size. This selection is driven by the solutions' Pareto rankings, from the first ranking to the worst; for ties within the same ranking, individuals with larger crowding distance values have higher priority. Thus, the new population is ready for the next round. Once the last iteration is done, the final non-dominated solutions of the first front are returned as the Pareto front, i.e., the set of non-dominated solutions.

Algorithm 10 (listing)

Now that the time complexity of all subalgorithms has been determined, the time complexity of Algorithm 10 can easily be calculated. Its initialization phase takes O(M + K.\({N}^{2}\)). In addition, the main loop iterates MaxIteration times, which costs MaxIteration\(\times\)(N.PopSize + PopSize.Log(PopSize) + \({PopSize}^{2}\)). After simplification, the overall time complexity is O(M + K.\({N}^{2}\) + MaxIteration.\({PopSize}^{2}\)), which is a relatively acceptable time complexity.
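Putting the pieces together, the driver below sketches the generation loop of Algorithm 10 by reusing the helper sketches given earlier (select_parent, single_point_crossover, swap_mutation, non_dominated_sort, crowding_distance) and taking the problem-specific repair and evaluation steps (Algorithms 3 and 7) as callables; it is an outline under those assumptions, not the authors' code.

```python
def moga_main_loop(population, pop_size, max_iteration, n_crossover, n_mutation,
                   repair, evaluate):
    """Generation loop of the proposed MOGA (Algorithm 10 sketch).

    The incoming population is assumed to be already evaluated and ranked
    (each individual is a dict with 'genes', 'objs', 'front', 'crowding').
    `repair` and `evaluate` stand in for Algorithms 3 and 7.
    """
    for _ in range(max_iteration):
        offspring = []
        for _ in range(n_crossover // 2):                    # exploration
            p1, p2 = select_parent(population), select_parent(population)
            for genes in single_point_crossover(p1['genes'], p2['genes']):
                offspring.append({'genes': genes})
        for _ in range(n_mutation):                          # exploitation
            parent = select_parent(population)
            offspring.append({'genes': swap_mutation(parent['genes'])})
        for child in offspring:
            child['genes'] = repair(child['genes'])          # fix infeasible children
            child['objs'] = evaluate(child['genes'])
        merged = population + offspring
        fronts = non_dominated_sort(merged)
        population = []
        for front in fronts:                                 # elitist truncation
            crowding_distance(front)
            front.sort(key=lambda ind: ind['crowding'], reverse=True)
            population.extend(front[:pop_size - len(population)])
            if len(population) >= pop_size:
                break
    return [ind for ind in population if ind['front'] == 1]  # final Pareto set
```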

6 Simulation and evaluation

To evaluate the proposed algorithm for solving IoT application module deployment on a fog computing infrastructure, we conduct extensive scenarios and consider two prominent criteria, i.e., total power consumption and total link wastage rate. The proposed algorithm is compared with state-of-the-art algorithms, namely MODGWO [33], MODCS [34], and MODPSO [35], in different scenarios. Note that all comparative algorithms were run under fair conditions in each scenario; in this regard, each of them was run 20 times independently. Then, the comprehensive simulation results, along with descriptive statistics in terms of min, max, average, and standard deviation (STD) values, are reported for each scenario. These results strongly support the current proposal.

6.1 Experimental settings, datasets, and scenarios

In this section, the experimental settings, scenarios, and datasets associated with both the requested applications and the fog infrastructure are presented. To this end, 12 different scenarios are conducted in 3 groups. In the first group, the number of modules is fixed, whereas the number of fog nodes is gradually increased. In the second group, the number of fog nodes is fixed, whereas the number of modules is gradually increased. Finally, in the third group, the numbers of both modules and fog nodes are gradually increased. Table 3 details the different scenarios. Note that all simulations were executed on a computer with a dual-core Intel Core i3-380M processor at 2.53 GHz, four logical processors, and 8 GB of RAM.

Table 3 Determined scenarios for simulation

As a fog infrastructure is ad hoc and heterogeneous in nature, abundant datasets are not available in the literature; this is why we produce our own dataset and take heterogeneity into account. The fog nodes are heterogeneous in terms of processing power, storage capacity, bandwidth, delay, and power consumption. For instance, Tables 4, 5, and 6 give the specification of a sample fog network with 10 different heterogeneous processing nodes. Note that CPU capacity, memory, bandwidth, power consumption, and delay are expressed in GHz, GB, Mbps, Watt, and ms, respectively. As Table 4 shows, the thresholds of CPU and memory usage, the minimum and maximum power consumption, the types of supported sensors, and the power consumption of data transfer are completely different across nodes.

Table 4 Resource specification of fog nodes
Table 5 Bandwidth specification between each pair of nodes
Table 6 Delay specification between each pair of nodes

Tables 5 and 6 indicate the bandwidth and delay between each pair of fog nodes. To make them dimensionless, the values in these tables are normalized to the [0, 1] interval. In Table 5, zero means the two nodes have no direct connection, while one means the source and destination are the same node. Conversely, in Table 6, one means the two nodes have no direct connection, while zero means the two nodes are the same.
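As a concrete example of these conventions, the sketch below shows one plausible way to normalize raw bandwidth and delay matrices into [0, 1] with the stated special values for self-links and missing links; the raw units and the separate connectivity matrix are assumptions made for illustration.

```python
def normalise_matrices(bw, delay, connected):
    """Scale raw bandwidth (Mbps) and delay (ms) matrices into [0, 1]
    following the table conventions (illustrative sketch).

    connected[i][j] : True when fog nodes i and j share a direct link
    """
    n = len(bw)
    bw_max = max(bw[i][j] for i in range(n) for j in range(n)) or 1.0
    d_max = max(delay[i][j] for i in range(n) for j in range(n)) or 1.0
    bw_n = [[0.0] * n for _ in range(n)]
    d_n = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                bw_n[i][j], d_n[i][j] = 1.0, 0.0      # same node
            elif not connected[i][j]:
                bw_n[i][j], d_n[i][j] = 0.0, 1.0      # no direct link
            else:
                bw_n[i][j] = bw[i][j] / bw_max
                d_n[i][j] = delay[i][j] / d_max
    return bw_n, d_n
```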

Similar to Tables 4, 5, and 6, Tables 7, 8, and 9 are dedicated to the module specifications in terms of requested resources and the recommended bandwidth and delay between modules.

Table 7 Specification of requested resources for applications’ modules
Table 8 Specification of bandwidth recommended for applications’ modules
Table 9 Specification of delay recommended for applications’ modules

Note that the values of Tables 7, 8, and 9 were derived from the processing power and storage capacity of fog nodes based on their architecture and on existing computing fog nodes such as personal digital assistants (PDAs), smartphones, and internal computers.

In addition, the parameters of the proposed, MODGWO, MODCS, and MODPSO algorithms are set as Table 10 shows.

Table 10 Parameter settings of algorithms selected for comparison

6.2 Experimental results

In this section, the results of the simulations are analyzed in three categories. To compare the performance of the comparative algorithms, one of the good solutions of each is selected for depiction in every scenario. The illustrations cover the Pareto front, the values of the objective functions in each iteration together with their convergence during the run, the final objective values, and the elapsed time. Finally, at the end of this subsection, the descriptive statistics in terms of min, max, average, and STD values for all scenarios are tabulated. As the forthcoming figures demonstrate, there are two versions of MODGWO, i.e., MODGWO-I and MODGWO-II: in the former, GA operators are applied in the evolutionary process, whereas in the latter the canonical MODGWO operators are applied. Moreover, for the geographical map of fog nodes, random values following a normal distribution are generated for the fog nodes' (X, Y) coordinates in the network, and the fog node distribution map of each scenario is depicted. In this map, the host nodes onto which application modules are deployed are shown in blue.

6.2.1 First category: the number of fog nodes is variable and the number of modules is fixed

In this subsection, four different scenarios are investigated. Figure 12 demonstrates the performance comparison of the different algorithms for solving the first scenario, on a platform containing 10 fog nodes with 20 requested modules when the FTT parameter is 0.24. With regard to the FTT parameter, at least 5 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 2, 4, 6, 8, and 10. Figure 12a compares the Pareto fronts of the different algorithms; as it shows, our proposed algorithm outperforms the others regarding the quality and number of non-dominated solutions found. Figure 12b, c shows that the proposed algorithm beats the others in terms of the two objective functions as the iterations increase. The best results are depicted in Fig. 12d, e, which confirms the dominance of the proposed algorithm. In addition, our best-so-far deployment plan is depicted in Fig. 12.

Fig. 12 Simulation results of a scenario in a platform containing 10 fog nodes with 20 requested modules

Furthermore, the average elapsed times of the different algorithms are compared in Table 11. In contrast to the other algorithms, our algorithm has a favorable elapsed time.

Table 11 Average elapsed time comparison of studies for a scenario in a platform containing 10 fog nodes with 20 requested modules

Figure 13 illustrates the performance comparison of the different algorithms for solving the second scenario, on a platform containing 15 fog nodes with 20 requested modules when the FTT parameter is 0.24. With regard to the FTT parameter, at least 5 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 4, 10, 12, 14, and 15. Figure 13a compares the Pareto fronts of the different algorithms; as it shows, our proposed algorithm dominates the others regarding the quality and number of non-dominated solutions found. Figure 13b, c indicates that the proposed algorithm achieves better results than the others in terms of the two objective functions as the iterations increase. The best results are depicted in Fig. 13d, e: Fig. 13d confirms the dominance of the proposed algorithm in terms of the best value of the first objective function, but Fig. 13e shows that our algorithm is marginally in third place after MODGWO-I and MODCS in terms of the best value of the second objective function. In addition, our best-so-far deployment plan is drawn in Fig. 13f.

Fig. 13 Simulation results of a scenario in a platform containing 15 fog nodes with 20 requested modules

In addition, the average elapsed times of the different algorithms are compared in Table 12. In regard to elapsed time, MODPSO and MODCS have the shortest values, but in regard to the overall objective functions all of them fall behind our proposed algorithm.

Table 12 Average elapsed time comparison of studies for a scenario in a platform containing 15 fog nodes with 20 requested modules

Figure 14 demonstrates the performance comparison of the different algorithms for solving the third scenario, on a platform containing 20 fog nodes with 20 requested modules when the FTT parameter is 0.2. With regard to the FTT parameter, at least 5 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 2, 5, 10, 12, and 14. Figure 14a compares the Pareto fronts of the different algorithms; as it shows, our proposed algorithm outperforms the others regarding the quality and number of non-dominated solutions found. Figure 14b, c shows that the proposed algorithm beats the others in terms of the two objective functions as the iterations increase. In addition, the best results depicted in Fig. 14d, e confirm the dominance of the proposed algorithm. Our best-so-far deployment plan is depicted in Fig. 14f.

Fig. 14 Simulation results of a scenario in a platform containing 20 fog nodes with 20 requested modules

In addition, the average elapsed times of the different algorithms are compared in Table 13. In regard to elapsed time, MODPSO has the shortest value, but in regard to the overall objective functions our proposed algorithm is in first place.

Table 13 Average elapsed time comparison of studies for a scenario in a platform containing 20 fog nodes with 20 requested modules

Figure 15 shows the performance comparison of the different algorithms for solving the fourth scenario, on a platform containing 25 fog nodes with 20 requested modules when the FTT parameter is 0.2. With regard to the FTT parameter, at least 5 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 3, 8, 12, 13, and 15. Figure 15a compares the Pareto fronts of the different algorithms; as it indicates, our proposed algorithm beats the others regarding the quality and number of non-dominated solutions found. Figure 15b, c indicates that the proposed algorithm outperforms the others in terms of the two objective functions as the iterations increase. The best results are illustrated in Fig. 15d, e, which confirms the dominance of the proposed algorithm. In addition, our best-so-far deployment plan is depicted in Fig. 15f.

Fig. 15 Simulation results of a scenario in a platform containing 25 fog nodes with 20 requested modules

In addition, the elapsed time comparison of the different algorithms in Table 14 shows that our proposed algorithm has better performance in terms of execution time.

Table 14 Average elapsed time comparison of studies for a scenario in a platform containing 25 fog nodes with 20 requested modules

6.2.2 Second category: the number of fog nodes is fixed and the number of modules is variable

In this subsection, four scenarios numbered 5 through 8 are investigated. Figure 16 demonstrates the performance comparison of the different algorithms for solving the fifth scenario, on a platform containing 15 fog nodes with 20 requested modules when the FTT parameter is 0.24. Note that some scenarios in our simulations may look similar, but the values used in their datasets are completely different; for instance, the fifth scenario is similar to the second one, but the data differ. With regard to the FTT parameter in the fifth scenario, at least 5 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 2, 3, 5, 7, and 8. Figure 16a compares the Pareto fronts of the different algorithms; as it shows, our proposed algorithm significantly outperforms the others regarding the quality and number of non-dominated solutions found. Figure 16b, c also shows that the proposed algorithm beats the others in terms of the two objective functions as the iterations increase. The best results are depicted in Fig. 16d, e, which confirms the dominance of the proposed algorithm. In addition, our best-so-far deployment plan is depicted in Fig. 16f.

Fig. 16 Simulation results of a scenario in a platform containing 15 fog nodes with 20 requested modules

In addition, the elapsed time comparison of the different algorithms in Table 15 shows that our proposed algorithm has better performance in terms of execution time.

Table 15 Average elapsed time comparison of studies for a scenario in a platform containing 15 fog nodes with 20 requested modules

Figure 17 depicts the performance comparison of the different algorithms for solving the sixth scenario, on a platform containing 15 fog nodes with 25 requested modules when the FTT parameter is 0.24. With regard to the FTT parameter, at least 5 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 1, 2, 3, 4, 9, and 13. Figure 17a compares the Pareto fronts of the different algorithms; as it shows, our proposed algorithm outperforms the others regarding the quality and number of non-dominated solutions found. Figure 17b shows that the proposed algorithm is marginally in third place after the competing MODCS and MODPSO algorithms in terms of the first objective function, but Fig. 17c indicates that our proposed algorithm and MODCS, which obtain close results, beat the others in terms of the second objective function as the iterations increase. The best results are depicted in Fig. 17d, e, which confirms the dominance of the proposed algorithm. In addition, our best-so-far deployment plan is depicted in Fig. 17f.

Fig. 17 Simulation results of a scenario in a platform containing 15 fog nodes with 25 requested modules

In addition, the elapsed time comparison of the different algorithms is given in Table 16. Although our proposed algorithm is ranked in second place after MODCS in terms of execution time, it has better performance in terms of the objective functions.

Table 16 Average elapsed time comparison of studies for a scenario in a platform containing 15 fog nodes with 25 requested modules

Figure 18 demonstrates the performance comparison of the different algorithms for solving the seventh scenario, on a platform containing 15 fog nodes with 30 requested modules when the FTT parameter is 0.21. With regard to the FTT parameter, at least 5 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 3, 7, 8, 9, 10, and 15. Figure 18a compares the Pareto fronts of the different algorithms; as it shows, our proposed algorithm outperforms the others regarding the quality and number of non-dominated solutions found. Figure 18b shows that the proposed algorithm beats the others in terms of the first objective function, while Fig. 18c indicates that our proposed algorithm, following MODCS with close results, beats the others in terms of the second objective function as the iterations increase. The best results are depicted in Fig. 18d, e, which shows the dominance of the proposed algorithm in terms of the first objective and second place after MODCS in the second objective. In addition, our best-so-far deployment plan is depicted in Fig. 18f.

Fig. 18 Simulation results of a scenario in a platform containing 15 fog nodes with 30 requested modules

In addition, the elapsed time comparison of the different algorithms is given in Table 17, with a similar result to the previous scenario: although our proposed algorithm is ranked in second place after MODCS in terms of execution time, it has better performance in terms of the objective functions.

Table 17 Average elapsed time comparison of studies for a scenario in a platform containing 15 fog nodes with 30 requested modules

Figure 19 demonstrates the performance comparison of the different algorithms for solving the eighth scenario, on a platform containing 15 fog nodes with 35 requested modules when the FTT parameter is 0.23. With regard to the FTT parameter, at least 5 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 1, 2, 9, 10, 11, 12, 13, and 14. Figure 19a compares the Pareto fronts of the different algorithms; as it shows, our proposed algorithm outperforms the others regarding the quality and number of non-dominated solutions found. Figure 19b shows that the proposed algorithm beats the others in terms of the first objective function, while Fig. 19c indicates that our proposed algorithm, following MODCS with close results, beats the others in terms of the second objective function as the iterations increase. The best results are depicted in Fig. 19d, e, which confirms the dominance of the proposed algorithm in terms of both objective functions. In addition, our best-so-far deployment plan is depicted in Fig. 19f.

Fig. 19 Simulation results of a scenario in a platform containing 15 fog nodes with 35 requested modules

In addition, the elapsed time comparison of the different algorithms is given in Table 18, with a similar result to the previous scenario: although our proposed algorithm is ranked in second place after MODCS in terms of execution time, it has better performance in terms of the objective functions.

Table 18 Average elapsed time comparison of studies for a scenario in a platform containing 15 fog nodes with 35 requested modules

6.2.3 Third category: the number of fog nodes is variable and the number of modules is variable

In this subsection, four other scenarios numbered 9 through 12 are investigated. Figure 20 depicts the performance comparison of the different algorithms for solving the ninth scenario, on a platform containing 10 fog nodes with 25 requested modules when the FTT parameter is 0.19. Note that some scenarios in our simulations may look similar, but the values used in their datasets are completely different; for instance, the tenth scenario is similar to the seventh one, but the data differ. With regard to the FTT parameter in the ninth scenario, at least 6 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 1, 2, 4, 6, 9, and 10. Figure 20a compares the Pareto fronts of the different algorithms; as it illustrates, our proposed algorithm significantly outperforms the others regarding the quality and number of non-dominated solutions found. Figure 20b, c also shows that the proposed algorithm beats the others in terms of the two objective functions as the iterations increase. The best results are depicted in Fig. 20d, e, which confirms the dominance of the proposed algorithm. In addition, our best-so-far deployment plan is depicted in Fig. 20f.

Fig. 20 Simulation results of a scenario in a platform containing 10 fog nodes with 25 requested modules

In addition, the average elapsed times of the different algorithms are compared in Table 19. In contrast to the other algorithms, our algorithm has a favorable elapsed time.

Table 19 Average elapsed time comparison of studies for a scenario in a platform containing 10 fog nodes with 25 requested modules

Figure 21 depicts the performance comparison of the different algorithms for solving the tenth scenario, on a platform containing 15 fog nodes with 30 requested modules when the FTT parameter is 0.18. With regard to the FTT parameter in the tenth scenario, at least 6 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 1, 3, 6, 8, 9, and 13. Figure 21a compares the Pareto fronts of the different algorithms; as it illustrates, our proposed algorithm significantly outperforms the others regarding the quality and number of non-dominated solutions found. Figure 21b, c also shows that the proposed algorithm beats the others in terms of the two objective functions as the iterations increase. The best results are depicted in Fig. 21d, e, which confirms the dominance of the proposed algorithm. In addition, our best-so-far deployment plan is depicted in Fig. 21f.

Fig. 21 Simulation results of a scenario in a platform containing 15 fog nodes with 30 requested modules

In addition, the average elapsed times of the different algorithms are compared in Table 20. In contrast to the other algorithms, our algorithm has a favorable elapsed time.

Table 20 Average elapsed time comparison of studies for a scenario in a platform containing 15 fog nodes with 30 requested modules

Figure 22 illustrates the performance comparison of the different algorithms for solving the eleventh scenario, on a platform containing 20 fog nodes with 35 requested modules when the FTT parameter is 0.24. With regard to the FTT parameter in the eleventh scenario, at least 5 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 1, 4, 8, 10, 15, 17, and 18. Figure 22a compares the Pareto fronts of the different algorithms; as it illustrates, our proposed algorithm significantly outperforms the others regarding the quality and number of non-dominated solutions found. Figure 22b, c also shows that the proposed algorithm beats the others in terms of the two objective functions as the iterations increase. The best results are depicted in Fig. 22d, e, which confirms the dominance of the proposed algorithm. In addition, our best-so-far deployment plan is depicted in Fig. 22f.

Fig. 22 Simulation results of a scenario in a platform containing 20 fog nodes with 35 requested modules

In addition, the average elapsed times of the different algorithms are compared in Table 21. In contrast to the other algorithms, our algorithm has a favorable elapsed time.

Table 21 Average elapsed time comparison of studies for a scenario in a platform containing 20 fog nodes with 35 requested modules

Figure 23 illustrates the performance comparison of the different algorithms for solving the twelfth scenario, on a platform containing 25 fog nodes with 40 requested modules when the FTT parameter is 0.20. With regard to the FTT parameter in the twelfth scenario, at least 5 fog nodes are needed for module deployment. The best-so-far solution, found by our proposed algorithm, selects node numbers 1, 5, 6, 13, 14, 17, 18, 19, 20, and 25. Figure 23a compares the Pareto fronts of the different algorithms; as it illustrates, our proposed algorithm significantly outperforms the others regarding the quality and number of non-dominated solutions found. Figure 23b shows that, after MODCS, the proposed algorithm beats the others in terms of the first objective function, whereas Fig. 23c shows that the proposed algorithm beats the others in terms of the second objective function as the iterations increase. The average results depicted in Fig. 23d, e confirm the dominance of the proposed algorithm. In addition, our best-so-far deployment plan is depicted in Fig. 23f.

Fig. 23 Simulation results of a scenario in a platform containing 25 fog nodes with 40 requested modules

In addition, the average elapsed times of the different algorithms are compared in Table 22. In contrast to the other algorithms, our algorithm has a favorable elapsed time.

Table 22 Average elapsed time comparison of studies for a scenario in a platform containing 25 fog nodes with 40 requested modules

For a closer look and a better evaluation of the comparative algorithms across the different scenarios, descriptive statistics provide supporting information; Tables 23, 24, 25, 26, 27, and 28 are dedicated to this purpose. Table 23 shows the min and max values of the first objective (TPC) obtained from the simulation results in the different scenarios. Note that both the min and max values of the proposed MOGA are lower than those of the others in terms of TPC as the first objective function.

Table 23 Obtained range of TPC value in terms of minimum and maximum associated with different algorithms
Table 24 Obtained range of LWR value in terms of minimum and maximum associated with different algorithms
Table 25 Performance comparison of different algorithms in terms of TPC mean value
Table 26 Performance comparison of different algorithms in terms of LWR mean value
Table 27 Performance comparison of different algorithms in terms of TPC STD value
Table 28 Performance comparison of different algorithms in terms of LWR STD value

Table 24 shows the min and max values of the second objective (LWR) obtained from the simulation results in the different scenarios. Note that the min and max values of the proposed MOGA are lower than those of the others in terms of LWR, except for some cases with a smaller search space in which MODPSO is better.

Table 25 compares the average values of the different algorithms in terms of the first objective function, along with the relative percentage deviation (RPD) [7]. As Table 25 shows, the proposed MOGA has the best average value. In addition, the improvement of the proposed MOGA over the other state-of-the-art algorithms is reported separately.

Table 26 compares the average values of the different algorithms in terms of the second objective function, along with the relative percentage deviation (RPD). As Table 26 shows, the proposed MOGA has the best average value. In addition, the improvement of the proposed MOGA over the other state-of-the-art algorithms is reported separately.

To assess the convergence of the comparative algorithms, the STD values of both objective functions are reported in Tables 27 and 28, respectively.

The lowest STD values of the proposed MOGA for both objective functions across all scenarios indicate its high rate of convergence and low data skewness.

7 Discussion

In fog systems, there are two prominent stakeholders: service providers and resource requesters. Each stakeholder needs to meet its requirements based on its business objectives. One of the most important factors in reducing the providers' capital expenditure (CAPEX) is power management [32], which is why the first objective of the current paper is to minimize the TPC during the deployment of IoT applications. Secondly, IoT applications are typically wireless and their performance strongly depends on shared bandwidth usage, which is why the second objective of this paper is to minimize the total bandwidth wastage rate. On the other side, users eventually abandon cloud/fog service providers that deliver unreliable service provisioning [36]; therefore, the minimum number of processing nodes that guarantees the required reliability for executing IoT applications is included in the constraints of the proposed model. To solve this combinatorial problem, the MOGA model was proposed, which benefits from several novel operators, each of which is exploited in a timely manner; these operators improve the quality of the produced solutions. Firstly, it applies a new efficient selection strategy that works better than the existing tournament, roulette wheel, and rank-biased strategies. Single-point crossover and random mutation are utilized, similar to the multi-objective genetic algorithm presented by Hosseini et al. in [6]. Also, the crowding distance procedure is customized to the stated problem (cf. Algorithm 9) to produce diverse solutions that may lead to efficient final solutions. One important novelty of the current solution is the application of the two heuristic Algorithms 1 and 2 as preprocessing, which incorporates the clique concept from graph theory to accurately confine the search space and increase the convergence speed. To evaluate the performance of the proposed MOGA model against the state-of-the-art MODCS, MODGWO-I, MODGWO-II, and MODPSO, 12 extensive scenarios were conducted. All of the aforementioned algorithms were customized to the stated problem and run on the same platform and datasets under the same conditions to reach fair results and comparisons. The results of the extensive simulations show that the proposed MOGA model achieves 18, 38, 9, and 43 percent improvement over MODCS, MODGWO-I, MODGWO-II, and MODPSO in terms of TPC, and 6.4, 15.99, 28.15, and 15.43 percent dominance over them in terms of LWR, respectively. In terms of real run time, the average execution times of the MODCS, MODGWO-I, MODGWO-II, MODPSO, and proposed MOGA models across all 12 scenarios are 341.32, 1236.70, 342.87, 293.61, and 323.49 s, respectively. Although the convergence speed of the MOGA model in solving the stated multi-objective optimization problem is marginally in second place after MODPSO, a closer look at the quality of the generated solutions in terms of the objective functions shows that the MOGA model beats the others. Consequently, the quality of the solutions generated by the MOGA model is acceptable in comparison with the other models, despite its slightly slower convergence with respect to the MODPSO model.

8 Conclusion and future direction

This paper focused on the problem of application module deployment on cloud and fog nodes with regard to the viewpoints of two prominent stakeholders, namely users and providers. From the users' viewpoint, QoS and reliability are the most relevant objectives, while for providers power consumption is one of the main concerns. For this reason, this paper formulates IoT application module deployment as a multi-objective, QoS-aware, reliable optimization problem aiming to minimize total power consumption and total link wastage rate. To solve this combinatorial problem, a multi-objective genetic algorithm (MOGA) has been presented. To validate the proposed GA-based algorithm, extensive scenarios were conducted, and it was evaluated against other comparative state-of-the-art algorithms in different scenarios. The analysis and assessment were based on descriptive statistics, namely min, max, average, and STD values, to support the simulation results. The simulation results obtained from the extensive scenarios show the superiority of the proposed algorithm in terms of the prominent evaluation parameters together with the analysis of the descriptive statistics. Overall, across all 12 scenarios on average, the proposed MOGA model beats the other comparative models with improvements of 27% and 16.49% in terms of TPC and LWR, respectively, as the two prominent objective functions. Except for the MODPSO model, which delivers poor results, the proposed MOGA model also returns efficient solutions more quickly than the others in terms of real execution time. For future work, we envisage presenting a QoS-aware economic model for the dynamic deployment of IoT applications in a fog environment.