
1 Introduction

Cloud computing is the latest emerging paradigm of distributed computing; it uses hardware and software virtualization to provide dynamically scalable services that can be accessed over the Internet on demand. In the past few years, distributed and parallel computing as well as service-oriented computing have attracted the interest of researchers [1]. Compared with traditional computing paradigms such as cluster, Grid, and peer-to-peer (P2P) computing, Cloud computing adopts a market-oriented business model in which users are charged for consuming Cloud services such as computing, storage, and network services, much like conventional utilities in everyday life (e.g., water, electricity, gas, and telephony) [2]. Cloud computing delivers three defined service models. In Software as a Service (SaaS), the user uses various applications but has no control over the hosting environment; examples include Google Apps and Salesforce.com [3]. Platform as a Service (PaaS) offers a full or partial application development environment that users can access and utilize online, collectively or individually. It facilitates the deployment of applications without the cost and complexity of buying and managing the underlying hardware and software and provisioning hosting capabilities; in this model, the platform is typically an application framework. AWS and Google App Engine are examples of PaaS providers [4]. In Infrastructure as a Service (IaaS), the service provider offers a wide variety of resources with different processing power and storage capabilities. The user can deploy its own applications on these resources and no longer needs to maintain the hardware. Amazon EC2, Globus, Nimbus, and Eucalyptus are IaaS providers [5]. Scalability and flexibility are two main advantages of the cloud, where users can acquire and release resources as per their needs.

On the other hand, the Internet of Things (IoT) is a network of physical things or objects embedded with electronics, software, sensors, and network connectivity; it builds upon ubiquitous and pervasive computing [6]. IoT consists of real-world small things with limited processing and storage capacities, whereas the cloud provides virtually unlimited storage and processing capabilities. By integrating these two complementary technologies, mutual advantages have been identified in the literature, and this new paradigm is known as CloudIoT [7]. In general, to handle the issues of limited processing and storage capabilities, IoT can use the unlimited resources of the Cloud. Similarly, the Cloud can extend its scope from virtual to real-world things with the help of IoT. The Cloud acts as an intermediary between real things and virtual applications and hides the complex functional details needed to implement the IoT [8]. Applications of IoT include healthcare services [9], smart cities and communities [10], smart homes [11], video surveillance [12], and the energy-efficient smart grid [13]. All of these applications need massive storage and computational resources, so combining IoT with the Cloud can solve the problems of processing and storage. In the following, we list a few advantages of the CloudIoT paradigm:

  1. (i)

    Storage: IoT applications generate a large volume of structured and semi-structured data that must be collected, processed, shared, and searched. This problem can be solved by accessing the unlimited, cost-efficient, and on-demand storage services of the Cloud [14].

  2. (ii)

    Computational Resources: IoT devices cannot perform complex on-site data processing due to their limited processing power and battery. The major processing part of an application is therefore transmitted to nodes that are more powerful in terms of processing and storage. As the cloud provides virtually unlimited processing capabilities, this represents another important CloudIoT driver: the major processing part of an application is offloaded to the cloud to save energy on the IoT devices [7].

Thus, these motivations lead to the integration of Cloud and IoT. At the same time, this integration imposes several challenges for each application, including heterogeneity of resources, security, performance, reliability, and power and energy efficiency [8].

For IoT devices to remain operational for a longer time, the development of energy-efficient schemes for a sustainable computing environment is a challenging issue. Energy-efficient task offloading is currently gaining the interest of the research community [15]. Computational intelligence techniques can help save device resources and energy by shifting computational tasks from the device to the cloud, and they generate a number of Pareto optimal solutions depending upon the user requirements.

This chapter first explores the work done in the area of energy-efficient task offloading techniques for IoT-enabled mobile devices. From this study, we have found that none of the existing techniques uses computational intelligence to give the user a set of optimal energy-saving results. So, to make the devices energy efficient, in this chapter we also propose a novel technique to offload tasks from IoT-enabled mobile devices to the cloud. The proposed approach uses multi-objective computational intelligence to generate Pareto optimal solutions for task offloading. Our method first determines which tasks can be run locally on the mobile cores and which are to be offloaded to the cloud. The result of this assignment is then fed into the initial population of a multi-objective swarm intelligence technique, i.e., Multi-Objective Particle Swarm Optimization (MOPSO), which schedules each task either on a mobile core or on the cloud such that the precedence requirements among the tasks and the time constraints are met while the energy consumption of the IoT mobile device is minimized. The simulation analysis validates that the solutions obtained with the proposed heuristic deliver better convergence and more uniform spacing among the solutions than the compared algorithms.

The remaining chapter is organized as follows: Sect. 2 presents the related work on energy-efficient task offloading techniques. The problem description is presented in Sect. 3. Section 4 describes the multi-objective optimization approach. Section 5 explains the proposed modified multi-objective PSO. Section 6 discusses the simulation strategy and result analysis. Finally, Sect. 7 concludes the chapter.

2 Related Works

Various heuristics have been proposed in the literature for task scheduling and task offloading problems in the mobile cloud environment. Broadly, these are of two types: (i) minimizing the makespan of an application [16–18] and (ii) minimizing the battery consumption of mobile devices [15, 19, 20]. With the objective of minimizing the makespan, a list-based heuristic, HEFT [16], was proposed. It first assigns priorities to all tasks and then maps the highest-priority task to the machine that gives the earliest finish time at each step, thus minimizing the overall completion time of an application. Another heuristic, named Push-Pull, has also been proposed; it starts from a random schedule and then applies a deterministic guided search to iteratively improve the current solution [17]. For maximizing throughput, a genetic algorithm was proposed [18] that partitions the application tasks between a mobile device and the cloud in an optimized manner. An incremental greedy strategy for offloading and parallel execution of perceptual applications [21] has also been proposed to reduce the finish time of applications. In recent years, the main focus of researchers has been on energy-aware scheduling for mobile devices. Rong and Pedram [22] used the positive slack time between tasks to minimize the energy consumption of a computer system. Li et al. [23] presented an optimized maximum-flow/minimum-cut task partitioning algorithm to offload tasks from a mobile device to the cloud for minimizing energy consumption. Kumar and Lu [24] proposed a strategy based upon the computation-to-communication ratio for making the offloading decision that minimizes energy consumption.

To find trade-off solutions between completion time and energy for parallel tasks, Lee and Zomaya [25] proposed two energy-conscious scheduling heuristics (ECS and ECS+idle) for heterogeneous computing systems. An integer linear programming based optimization technique [26] for adaptive computation offloading considers the available memory, CPU, and energy consumption as the main offloading criteria. Wu et al. [27] presented an offloading decision model that uses network unavailability to decide whether to offload a task for remote execution. Similarly, another task offloading technique, the CRoSS algorithm [28], was presented that uses the link failure rate and the bidirectional transmission rate as the main offloading factors. Along with these factors, another important computing factor is the clock frequency. Using Dynamic Voltage Frequency Scaling (DVFS), the mobile device energy can be further optimized [29], as the CPU clock frequency is approximately linearly proportional to the supply voltage. Similarly, a mobile device can be connected to more than one wireless network and can thus offload data to different networks; to address multisite offloading, a graph partitioning approach has been proposed to solve the partitioning problem [30]. Lin et al. proposed task scheduling and task migration algorithms from mobile device to cloud based on DVFS for mobile cloud computing [31]; the results showed a significant reduction in energy under application completion constraints. An energy-efficient computational offloading framework (EECOF) [32] has been presented to leverage minimal application processing migration to the cloud and thus reduce the total energy consumption cost. Based on contextual network conditions [33], an energy model was presented to decide whether to offload a task or run it locally. Similarly, a few fuzzy and artificial intelligence based decision support systems [34–36] have also been developed to offload tasks.

From this review of the literature, it has been found that most of the existing studies try to minimize either the makespan or the energy consumed while scheduling tasks in the mobile cloud environment. None of the existing techniques uses multi-objective computational intelligence techniques such as MOPSO [37], NSGA-II [38], and FDPSO [39] that give a set of near-optimal solutions. Hence, this chapter presents a multi-objective optimization technique that generates a set of near-optimal solutions for mobile cloud applications. We propose the Modified Multi-Objective Particle Swarm Optimization (MMOPSO) algorithm, which uses the concept of non-dominance to offload tasks from the mobile device to the cloud so as to minimize the energy consumption of the created schedule plan.

3 System Model and Assumptions

3.1 Application Model and Mobile Cloud Model

A user application is modelled by a Directed Acyclic Graph (DAG), defined by a tuple G(T, E), where T is the set of n tasks {t_1, t_2, …, t_n} and E is a set of e edges representing the dependencies. Each t_i ∈ T represents a task in the application, and each edge (t_i, t_j) ∈ E represents a precedence constraint, such that the execution of t_j ∈ T cannot start before t_i ∈ T finishes its execution [40]. If (t_i, t_j) ∈ E, then t_i is the parent of t_j and t_j is the child of t_i. A task with no parent is known as an entry task and a task with no children is known as an exit task. The task size (z_i) is expressed in Millions of Instructions (MI).

Our mobile cloud model consists of a mobile device with m computational cores, R = {r_1, r_2, …, r_m}, of different processing power, and a cloud resource. The processing power of a core (mobile core or cloud core) is expressed in Millions of Instructions per Second (MIPS) and is denoted by PP_{r_p}. Each core is Dynamic Voltage Scaling (DVS) enabled; in other words, it can operate at different Voltage Scaling Levels (VSLs), i.e., at different clock frequencies. The mobile device also has access to the computing resources on the cloud. Each task can be executed on one of the local cores or offloaded to the cloud for execution. The execution time, ET_(i,p), of a task t_i on a core (either mobile core or cloud core) is calculated by the following equation:

$$ET_{(i,p)} = \frac{z_{i}}{PP_{r_{p}}}$$
(1)

We use ET_(i,c) to denote the execution time of task t_i on the cloud c. The time for sending task t_i to the cloud is given by

$$T_{s}^{i} = \frac{data_{i}}{BW_{s}}$$
(2)

where data_i is the data of task t_i and BW_s is the available bandwidth of the sending channel. Similarly, the time for receiving the output of task t_i from the cloud is given by

$$T_{r}^{i} = \frac{data_{i}}{BW_{r}}$$
(3)

where BW_r is the available bandwidth of the receiving channel.
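To make the timing model concrete, the following minimal Python sketch evaluates Eqs. (1)–(3); the function names and the task sizes, speeds, and bandwidths in the example are illustrative placeholders, not part of the chapter's notation.

```python
# Illustrative sketch of Eqs. (1)-(3); all names and numeric values below are
# hypothetical and used only to show how the quantities combine.

def execution_time(task_size_mi, core_mips):
    """Eq. (1): ET(i,p) = z_i / PP_rp, task size in MI, core speed in MIPS."""
    return task_size_mi / core_mips

def send_time(data_i, bw_send):
    """Eq. (2): T_s^i = data_i / BW_s."""
    return data_i / bw_send

def receive_time(data_i, bw_recv):
    """Eq. (3): T_r^i = data_i / BW_r."""
    return data_i / bw_recv

# Example: a 600 MI task on a 200 MIPS mobile core vs. a 2000 MIPS cloud core.
print(execution_time(600, 200))   # 3.0 s locally
print(execution_time(600, 2000))  # 0.3 s on the cloud core
```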

Let EST(t_i, r_p) and EFT(t_i, r_p) denote the Earliest Start Time and the Earliest Finish Time of a task t_i on a local core r_p, respectively. For the entry task, we have:

$$EST\left(t_{entry}, r_{p}\right) = avail\left(r_{p}\right)$$
(4)

For the other tasks in the DAG, we compute EST and EFT recursively as follows:

$$EST\left(t_{i}, r_{p}\right) = \max\left\{ avail\left(r_{p}\right),\; \max_{t_{j} \in pred(t_{i})} \{AFT\left(t_{j}\right) + ct_{ij}\} \right\}$$
(5)
$$EFT\left(t_{i}, r_{p}\right) = ET_{(i,p)} + EST\left(t_{i}, r_{p}\right)$$
(6)

where pred(t_i) is the set of parent tasks of task t_i, ct_{ij} is the communication time of edge (t_i, t_j), and avail(r_p) is the time when the core r_p is ready for task execution. The Estimated Remote Execution Time of a task t_i on the cloud is given by:

$$ERT\left(t_{i}, c\right) = ET_{(i,c)} + T_{s}^{i} + T_{r}^{i}$$
(7)

Similarly, AST(t_i, r_p) and AFT(t_i, r_p) denote the Actual Start Time and Actual Finish Time of task t_i on a local core or on the cloud, respectively. The makespan is equal to the maximum actual finish time over the exit tasks t_exit and is defined by

$$M = \max\left\{AFT\left(t_{exit}\right)\right\}$$
(8)
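The recursion of Eqs. (4)–(6) and the makespan of Eq. (8) can be expressed as the short sketch below. It assumes a fixed task-to-core assignment, topologically ordered tasks, and that the execution-time entry of a cloud-assigned task already includes the transfer times of Eq. (7); all container names and the tiny example DAG are hypothetical.

```python
# A minimal sketch of Eqs. (4)-(6) and Eq. (8) under the assumptions stated above.

def schedule_makespan(tasks, pred, assign, et, ct):
    """tasks: task ids in topological order; pred[t]: parents of t;
    assign[t]: core chosen for t; et[(t, core)]: execution time (Eq. 1 / Eq. 7);
    ct[(i, j)]: communication time on edge (i, j)."""
    avail = {}   # time at which each core becomes free
    aft = {}     # actual finish time of each task
    for t in tasks:
        core = assign[t]
        # Eq. (4)/(5): entry tasks start when the core is free; other tasks also
        # wait for all parents plus the edge communication time.
        ready = max((aft[p] + ct.get((p, t), 0) for p in pred[t]), default=0)
        est = max(avail.get(core, 0.0), ready)
        aft[t] = est + et[(t, core)]          # Eq. (6)
        avail[core] = aft[t]
    return max(aft.values())                  # Eq. (8): makespan M

# Tiny example: t1 -> t2, t1 on core r1, t2 offloaded to the cloud.
tasks = ['t1', 't2']
pred = {'t1': [], 't2': ['t1']}
assign = {'t1': 'r1', 't2': 'cloud'}
et = {('t1', 'r1'): 3.0, ('t2', 'cloud'): 1.0 + 3.0 + 1.0}   # ERT = ET + Ts + Tr
ct = {('t1', 't2'): 0.5}
print(schedule_makespan(tasks, pred, assign, et, ct))        # 3.0 + 0.5 + 5.0 = 8.5
```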

The makespan is also referred to as the running time of the entire application DAG. The energy model used in this study is derived from the capacitive power (P_c) of Complementary Metal-Oxide-Semiconductor (CMOS) based logic circuits [41], which is given by:

$$P_{c} = ACV^{2}f$$
(9)

where A is the number of switches per clock cycle, C is the total capacitance load, V is the supply voltage, and f is the frequency. It is clear from Eq. (9) that the supply voltage is the dominant factor; hence, a lower supply voltage means lower power consumption.

The energy consumed by executing a task t_i on an available local core is defined as [41]

$$E_{l} = ACV^{2}f \cdot ET_{(i,p)} = \alpha V_{i}^{2}\, ET_{(i,p)}$$
(10)

where V_i is the supply voltage of the core on which task t_i is executed, and ET_(i,p) is the execution time of task t_i on the scheduled core r_p.

If task t_i is offloaded to the cloud, the energy consumed by the mobile device for offloading the task is given by:

$$E_{c} = ACV^{2}f \cdot ET_{(i,c)} = \alpha V_{i}^{2}\, ET_{(i,c)}$$
(11)

where V_i is the supply voltage of the sending channel and ET_(i,c) is the execution time of task t_i on the cloud c. Therefore, the total energy consumed, E_total, for executing the whole application is given by

$$E_{total} = \sum\limits_{i=1}^{n} E_{i}$$
(12)

where E_i = E_l if task t_i is executed locally and E_i = E_c if task t_i is offloaded to the cloud.
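A minimal sketch of the energy model of Eqs. (10)–(12) follows; the constant ALPHA stands in for the A·C·f factor of Eq. (9), and the voltages, execution times, and local/cloud split in the example are purely illustrative values.

```python
# A minimal sketch of Eqs. (10)-(12); all numeric values are illustrative only.

ALPHA = 1.0  # assumed effective switching constant (A * C * f)

def task_energy(voltage, exec_time, alpha=ALPHA):
    """Eq. (10)/(11): E = alpha * V^2 * ET, where V is the core voltage for a
    local task or the sending-channel voltage for an offloaded task."""
    return alpha * voltage ** 2 * exec_time

def total_energy(plan):
    """Eq. (12): sum the per-task energies; each entry is (voltage, exec_time)."""
    return sum(task_energy(v, et) for v, et in plan)

# Two tasks executed locally at 1.2 V and one offloaded through a 0.9 V channel.
plan = [(1.2, 3.0), (1.2, 2.5), (0.9, 0.3)]
print(total_energy(plan))
```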

4 Task Scheduling Based on Multi-objective Particle Swarm Optimization

The first part of this section introduces the concept of Multi-Objective Optimization and the second part gives an overview of Particle Swarm Optimization (PSO).

4.1 Multi-objective Optimization

A Multi-objective Optimization Problem (MOP) [42] with m decision variables and n objectives can be formally defined as:

$$Min(y = f(x) = [f_{1} (x), \ldots ,f_{n} (x)])$$

where x = (x_1, …, x_m) ∈ X is an m-dimensional decision vector, X is the search space, y = (y_1, …, y_n) ∈ Y is the objective vector, and Y is the objective space.

In a general MOP, there is no single solution that is optimal with respect to all objectives. In such problems, the desired outcome is the set of solutions that cannot be improved in one objective without worsening another; this set is known as the Pareto optimal set. Some of the Pareto concepts used in MOP are as follows:

  1. (i)

    Pareto dominance. For two decision vectors x_1 and x_2, dominance (denoted by ≺) is defined as follows:

$$x_{1} \prec x_{2} \Longleftrightarrow \forall i: f_{i}(x_{1}) \le f_{i}(x_{2}) \;\wedge\; \exists j: f_{j}(x_{1}) < f_{j}(x_{2})$$

The decision vector x_1 is said to dominate x_2 if and only if x_1 is at least as good as x_2 for all objectives and strictly better than x_2 in at least one objective (a dominance-check sketch is given after this list).

  1. (ii)

    Pareto optimal set. The Pareto optimal set P_S is the set of all Pareto optimal decision vectors.

$$P_{S} = \{x_{1} \in X \mid \nexists\, x_{2} \in X : x_{2} \prec x_{1}\}$$

where a decision vector x_1 is said to be Pareto optimal if it is not dominated by any other decision vector x_2 in the set.

  1. (iii)

    Pareto optimal front. The Pareto optimal front P_F is the image of the Pareto optimal set in the objective space.

$$P_{F} = \{f(x) = (f_{1}(x), \ldots, f_{n}(x)) \mid x \in P_{S}\}$$
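The dominance test and the Pareto filtering described above can be expressed compactly as the following sketch; minimization of every objective is assumed, and the candidate (time, energy) tuples are arbitrary illustrative values.

```python
# A sketch of Pareto dominance and non-dominated filtering (minimization assumed).

def dominates(f1, f2):
    """True if f1 Pareto-dominates f2: no worse in every objective and
    strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors (the image of P_S)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

candidates = [(10.0, 5.0), (8.0, 6.5), (12.0, 7.0), (9.0, 5.5)]
print(pareto_front(candidates))   # (12.0, 7.0) is dominated by (10.0, 5.0)
```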

4.2 Particle Swarm Optimization (PSO)

Particle Swarm Optimization (PSO) is a stochastic optimization technique inspired by the social behavior of bird flocks and fish schools [43]. In this technique, a swarm of individuals, known as particles, fly through the search space. Each particle represents a candidate solution to the given problem and is associated with two parameters, namely its current position, x_i, and current velocity, v_i.

The position of a particle is influenced by the best position it has visited, i.e., its own experience (pbest), and by the position of the best particle in its neighborhood, i.e., the experience of neighboring particles (gbest). The performance of each particle is measured using a fitness function that depends on the optimization problem. During each PSO iteration k, particle i updates its velocity v_i^k and position vector x_i^k as described below [43]:

  1. (a)

    Updating Velocity Vector

$$v_{i}^{k+1} = \omega v_{i}^{k} + c_{1}\, rand_{1} \times \left(pbest_{i} - x_{i}^{k}\right) + c_{2}\, rand_{2} \times \left(gbest - x_{i}^{k}\right)$$
(13)

where ω is the inertia weight; c_1 is the cognitive coefficient based on the particle's own experience; c_2 is the social coefficient based on the swarm's experience; and rand_1, rand_2 are random variables uniformly distributed in (0, 1).

The inertia weight ω controls the momentum of the particle. Improvement in performance is obtained by decreasing the value of ω linearly from its maximum value, ω_1, to its minimum value, ω_2 [44]. At iteration k, its value ω_k is obtained as:

$$\omega_{k} = \left(\omega_{1} - \omega_{2}\right)\frac{\max\_k - k}{\max\_k} + \omega_{2}$$
(14)

Similarly, if c_1 decreases from its maximum value, c_1max, to its minimum value, c_1min, more divergence among the particles in the search space can be achieved, while if c_2 increases from its minimum value, c_2min, to its maximum value, c_2max, the particles move closer to the current gbest. The following equations give the values of c_1i and c_2i at iteration k:

$$c_{1i} = \left(c_{1min} - c_{1max}\right)\frac{k}{\max\_k} + c_{1max}$$
(15)
$$c_{2i} = \left(c_{2max} - c_{2min}\right)\frac{k}{\max\_k} + c_{2min}$$
(16)

where max_k is the maximum number of iterations and k is the iteration number.

  1. (b)

    Updating Position Vector

$$x_{i}^{k+1} = x_{i}^{k} + v_{i}^{k+1}$$
(17)

where x_i^k is the position of the particle at the kth iteration and v_i^{k+1} is its updated velocity obtained from Eq. (13). A compact sketch of these update rules is given after this list.

  1. (c)

    Fitness Function

The fitness function used in the proposed MMOPSO is described by Eq. (18):

$$Fitness = E_{total}$$
(18)
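As referenced above, the following sketch shows one particle update under Eqs. (13)–(17) with the linearly varying coefficients of Eqs. (14)–(16). The bounds 0.9→0.1, 2.5→0.5, and 0.5→2.5 follow the parameter settings reported in Sect. 6.3; the continuous position vector is only an illustration, since in MMOPSO particle positions encode task-to-core assignments.

```python
# A compact sketch of Eqs. (13)-(17); values and vector lengths are illustrative.
import random

def decayed(upper, lower, k, max_k):
    """Linear decay from upper to lower over max_k iterations (Eq. 14 form)."""
    return (upper - lower) * (max_k - k) / max_k + lower

def pso_step(x, v, pbest, gbest, k, max_k):
    """One iteration of Eqs. (13) and (17) for a single particle."""
    omega = decayed(0.9, 0.1, k, max_k)        # Eq. (14)
    c1 = decayed(2.5, 0.5, k, max_k)           # Eq. (15): cognitive term shrinks
    c2 = decayed(2.5, 0.5, max_k - k, max_k)   # Eq. (16): social term grows (flipped decay)
    new_x, new_v = [], []
    for xi, vi, pb, gb in zip(x, v, pbest, gbest):
        vi_next = (omega * vi
                   + c1 * random.random() * (pb - xi)
                   + c2 * random.random() * (gb - xi))    # Eq. (13)
        new_v.append(vi_next)
        new_x.append(xi + vi_next)                        # Eq. (17)
    return new_x, new_v

x, v = [0.2, 0.8], [0.0, 0.0]
print(pso_step(x, v, pbest=[0.5, 0.6], gbest=[0.9, 0.1], k=10, max_k=100))
```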

The next section describes the proposed algorithm based upon multi-objective PSO.

5 Proposed Work

In order to solve the multi-objective task scheduling problem for the mobile cloud environment, we propose the Modified Multi-Objective Particle Swarm Optimization (MMOPSO) algorithm based upon a non-dominated sorting procedure. The proposed algorithm consists of two phases. In the first phase, an initial schedule is created based upon HEFT [16] to minimize the makespan. In the second phase, the schedule created in the first phase is fed into the initial population of MMOPSO to minimize the energy consumption (E). Both phases are explained below:

5.1 First Phase: Initial Schedule

To create the initial schedule, the HEFT algorithm is used to schedule tasks over the mobile cores as well as over the cloud cores. For this purpose, the application tasks are first classified as either local tasks or cloud tasks. For each task t_i, we define its minimum completion time over the mobile cores as

$$T_{i}^{min} = \min_{1 \le p \le m} \{ET_{(i,p)}\}$$
(19)

If T_i^min < ERT(t_i, c), then task t_i will run on the mobile cores and is known as a local task; otherwise it is known as a cloud task.

If a task t_i is a cloud task, then its average execution time is given by

$$w_{i} = ERT\left(t_{i}, c\right)$$
(20)

Otherwise,

$$w_{i} = \mathop{avg}\limits_{1 \le p \le m} \{ET_{(i,p)}\}$$
(21)

Each task is assigned a priority using the upward rank defined in HEFT, which is given by Eq. (22).

$$rank(t_{i}) = w_{i} + \max_{t_{j} \in succ(t_{i})} \{rank(t_{j})\}$$
(22)

where w_i is the average execution time of the task on the different computing resources and succ(t_i) is the set of children tasks of t_i. After assigning ranks to all tasks, the initial schedule is generated using HEFT.
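The first-phase decisions of Eqs. (19)–(22) can be sketched as follows; the two-task DAG, the timing values, and the helper names are hypothetical, and HEFT's processor-selection step is omitted for brevity.

```python
# A sketch of the local/cloud classification (Eq. 19) and upward rank (Eqs. 20-22).
from functools import lru_cache

et = {('t1', 'r1'): 4.0, ('t1', 'r2'): 6.0,   # ET(i,p): local execution times
      ('t2', 'r1'): 9.0, ('t2', 'r2'): 8.0}
ert = {'t1': 7.0, 't2': 5.0}                  # ERT(t_i, c) = ET(i,c) + T_s + T_r (Eq. 7)
succ = {'t1': ['t2'], 't2': []}
cores = ['r1', 'r2']

def is_cloud_task(t):
    """Cloud task iff the minimum local completion time (Eq. 19) is not
    smaller than the estimated remote execution time."""
    return min(et[(t, p)] for p in cores) >= ert[t]

def w(t):
    """Eqs. (20)-(21): cost used for ranking."""
    if is_cloud_task(t):
        return ert[t]
    return sum(et[(t, p)] for p in cores) / len(cores)

@lru_cache(maxsize=None)
def rank(t):
    """Eq. (22): upward rank = w_i + max rank over the children of t_i."""
    return w(t) + max((rank(s) for s in succ[t]), default=0.0)

order = sorted(succ, key=rank, reverse=True)       # schedule in decreasing rank order
print(order, [is_cloud_task(t) for t in order])    # ['t1', 't2'] [False, True]
```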

5.1.1 An Example

An example workflow with 10 tasks, shown in Fig. 1a, is considered to illustrate the working of the first phase. Figure 1b shows the execution times of these tasks on the three available mobile cores. It has been assumed that T_s^i = 3, T_r^i = 1, and ET_(i,c) = 1 for each task.

Fig. 1

An example

After applying Eq. (19), only task t_2 is identified as a cloud task, and the rest are assigned to the mobile cores. Then the ranks of all tasks are calculated using Eq. (22). The order of execution after sorting the tasks in descending order of rank is: t_1, t_3, t_6, t_2, t_4, t_5, t_7, t_8, t_9, and t_10. The tasks are then assigned either to local cores or to the cloud using HEFT, as shown in Fig. 2.

Fig. 2

Initial assignment of first phase

5.2 Second Phase: MMOPSO Algorithm

The main steps followed in the MMOPSO algorithm are described in Fig. 3. The fitness function used is given by Eq. (18) in Sect. 4.2. The MMOPSO algorithm is executed for the bi-objective task offloading problem, i.e., minimization of execution time and energy. Therefore, the task offloading problem is formulated as (Fig. 3):

Fig. 3

MMOPSO Algorithm

  • Minimize Time (S) = max{AFT(t_exit)}

  • Minimize Energy (S) = E_total

  • Subject to Time (S) < D

where D is the maximum completion time of the application on the mobile device. The MMOPSO algorithm uses the following operators:

  1. (a)

    Archive Updating:

In multi-objective algorithms, the non-dominated particles are stored in an elite archive. A particle's dominance is checked against the other particles based upon the objective functions. The current generation's solutions are combined with the solutions in the archive of the previous generation to form 2N solutions, where N is the size of the archive. All of these solutions are then sorted in ascending order of their dominance. If more than one solution shows the same dominance value, the diversity parameter I(.) is calculated for those solutions, and the solution with the higher value of I(.) is selected. To update the archive, the best N solutions are selected from these 2N solutions based upon dominance and diversity [39].

  1. (b)

    Diversity Parameter:

The diversity parameter for any solution y, I(y) is given by:

$$I\left(y\right) = \sum\limits_{i=1}^{M} \frac{f_{i}\left(x\right) - f_{i}\left(z\right)}{\max\left(f_{i}\right) - \min\left(f_{i}\right)}$$
(23)

where x and z are the solutions adjacent to y after sorting the solutions in ascending order of the ith objective, and M is the number of objectives. An infinite value is assigned to the boundary solutions. The higher the value of I(y), the sparser the region around the solution; hence, preferring solutions with high I(y) increases the diversity of the archive.

  1. (c)

    Updating pbest and gbest:

The binary tournament operator is used to select the gbest solution from the current archive. To update pbest, the particle's current position is compared, based on dominance, with its best position from the previous generation. If neither solution dominates the other, the current position of the particle is selected as the new pbest.

  1. (d)

    Mutation:

The MMOPSO algorithm uses replacement mutation [45] to avoid getting stuck in local minima and to explore the search space efficiently. To apply the adaptive mutation, the mutation probability P(Mutation) is calculated using the following equation:

$$P\left(Mutation\right) = 1 - \frac{k}{\max\_k}$$
(24)

where k is the current iteration and max_k is the maximum number of iterations. A random number (rand) in the range [0, 1] is generated for every particle; if rand < P(Mutation), then a task is randomly selected for mutation. A sketch combining the diversity parameter of Eq. (23) with this adaptive mutation probability is given after this list.
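As referenced above, the following sketch covers the diversity parameter of Eq. (23) and the adaptive mutation probability of Eq. (24). Solutions are represented as plain objective tuples, the small front used in the example is arbitrary, and archive handling is simplified.

```python
# A sketch of Eq. (23) (diversity parameter) and Eq. (24) (adaptive mutation).
import random

def diversity(front):
    """Eq. (23): for each non-dominated solution, sum the normalized gap between
    its two neighbours in every objective; boundary solutions get infinity."""
    n_obj = len(front[0])
    score = {y: 0.0 for y in front}
    for i in range(n_obj):
        ordered = sorted(front, key=lambda y: y[i])
        f_min, f_max = ordered[0][i], ordered[-1][i]
        score[ordered[0]] = score[ordered[-1]] = float('inf')
        for left, y, right in zip(ordered, ordered[1:], ordered[2:]):
            if score[y] != float('inf'):
                score[y] += (right[i] - left[i]) / ((f_max - f_min) or 1.0)
    return score

def mutation_probability(k, max_k):
    """Eq. (24): mutation becomes less likely as the search progresses."""
    return 1.0 - k / max_k

front = [(10.0, 5.0), (9.0, 5.5), (8.0, 6.5)]
print(diversity(front))                                   # interior point gets a finite I(y)
print(random.random() < mutation_probability(10, 100))    # mutate this particle?
```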

6 Performance Evaluations

In this section, the simulation of the proposed heuristic, MMOPSO, is presented. To evaluate the proposed task offloading workflow scheduling algorithm, we used five synthetic workflows based on realistic workflows from diverse scientific applications:

  • Montage: Astronomy

  • EpiGenomics: Biology

  • CyberShake: Earthquake

  • LIGO: Gravitational physics

  • SIPHT: Biology

The detailed characterization for each workflow including their structure, data and computational requirements can be found in [46]. Figure 4 shows the approximate structure of each workflow.

Fig. 4

Structure of various workflows [46]

6.1 Experimental Setup

For the simulation, we assume a mobile cloud environment consisting of a mobile device and a cloud service provider. We assume three heterogeneous cores with different processing speeds in the mobile device and one core available at the cloud. For simplicity, it is assumed that every task takes 30 s to send its data from the mobile device to the cloud and 10 s to receive the results from the cloud. For this study, we have used the CloudSim [47] library. The existing CloudSim simulator allows modelling and simulating a cloud environment with only a single workload and is not directly suitable for the mobile cloud environment, so the core framework of CloudSim was extended to handle the task scheduling problem for MCC. Each core is Dynamic Voltage Scaling (DVS) enabled, i.e., it can work at different Voltage Scaling Levels (VSLs). For each resource, a set V_j of v VSLs is randomly and uniformly chosen from three different sets of VSLs (Table 1).

Table 1 Voltage–relative speed pairs

The value of the maximum completion time D is generated as:

$$D = 3 \times M_{HEFT}$$

where M_HEFT is the makespan obtained by HEFT.

6.2 Performance Metrics

The analysis of the proposed algorithm has been carried out against existing state-of-the-art algorithms using the following performance metrics:

  1. (a)

    Generational Distance (GD): GD [38] is a convergence metric used to assess the quality of an algorithm against the true Pareto front P*, which is generated by merging the solutions of the different algorithms. It is calculated using Eq. (25):

$$GD = \frac{\left(\sum\nolimits_{i=1}^{|Q|} d_{i}^{2}\right)^{1/2}}{\left|Q\right|}$$
(25)

where d_i is the Euclidean distance between a solution of Q and the nearest solution of P*, and Q is the front obtained by the algorithm for which the GD metric is calculated.

  1. (b)

    Spacing: To check the diversity among the solutions, the spacing metric [38] is used, given by Eq. (26):

$$Spacing = \sqrt{\frac{1}{\left|Q\right|} \sum\limits_{i=1}^{\left|Q\right|} \left(d_{i} - \overline{d}\right)^{2}}$$
(26)

where d_i is the distance between a solution and its nearest solution in Q (it is not the Euclidean distance) and \overline{d} is the mean of the distances d_i. Small values of both the GD and Spacing metrics are desirable for an evolutionary algorithm; a sketch of both metrics follows this list.
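Both metrics can be sketched as follows; fronts are lists of objective tuples, GD uses Euclidean distances to the merged reference front P*, and, following the remark above that d_i in Eq. (26) differs from the Euclidean distance, the spacing distance is taken here as the sum of absolute objective differences to the nearest neighbour. The example fronts are arbitrary illustrative values.

```python
# A sketch of the GD (Eq. 25) and Spacing (Eq. 26) metrics.
import math

def euclid(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def generational_distance(Q, P_star):
    """Eq. (25): root of the summed squared nearest distances to P*, over |Q|."""
    d = [min(euclid(q, p) for p in P_star) for q in Q]
    return math.sqrt(sum(di ** 2 for di in d)) / len(Q)

def spacing(Q):
    """Eq. (26): spread of nearest-neighbour distances within the obtained front."""
    def absdist(a, b):
        return sum(abs(ai - bi) for ai, bi in zip(a, b))
    d = [min(absdist(q, r) for r in Q if r != q) for q in Q]
    d_bar = sum(d) / len(d)
    return math.sqrt(sum((di - d_bar) ** 2 for di in d) / len(d))

obtained = [(10.0, 5.0), (9.0, 5.5), (8.0, 6.5)]
reference = [(9.5, 4.8), (8.8, 5.2), (7.9, 6.3)]
print(generational_distance(obtained, reference), spacing(obtained))
```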

6.3 Simulation Results

This section presents the simulation results and analysis of our proposed multi-objective MMOPSO algorithm. Nowadays, the Non-dominated Sorting Genetic Algorithm (NSGA-II) [38] and ε-FDPSO [39] are state-of-the-art techniques for solving MOPs. To measure the effectiveness of the proposed MMOPSO algorithm, these algorithms were also implemented and simulated for the multi-objective workflow scheduling problem in the mobile cloud environment. For NSGA-II, we used binary tournament selection, one-point crossover, and replacement mutation. The parameters used in the ε-FDPSO and MMOPSO algorithms are: population size = 20, c_1 = 2.5 → 0.5, c_2 = 0.5 → 2.5, and inertia weight ω = 0.9 → 0.1; for NSGA-II, the population size is 20, the crossover rate is 0.8, and the mutation rate is 0.5. The performance of the scheduling algorithms is evaluated on the randomly generated workflow applications. For the bi-objective task offloading problem, we considered the application completion time and the energy consumption of the created schedule as the two conflicting objectives. To obtain the Pareto optimal solutions with ε-FDPSO, MMOPSO, and NSGA-II, 100 samples were captured through simulation.

Figures 5, 6, 7, 8 and 9 show the bi-objective non-dominated solutions for the Montage, CyberShake, EpiGenomics, LIGO, and SIPHT workflows, respectively. The x-axis represents the execution time of the created schedule for the respective workflow structure, and the y-axis represents the energy consumed by the created schedule.

Fig. 5

Bi-objective non-dominated solutions for montage workflow

Fig. 6

Bi-objective non-dominated solutions for CyberShake workflow

Fig. 7

Bi-objective non-dominated solutions for epigenomics workflow

Fig. 8

Bi-objective non-dominated solutions for LIGO workflow

Fig. 9

Bi-objective non-dominated solutions for SIPHT workflow

It has been observed that most of the solutions obtained using the MMOPSO algorithm lie close to the true front and show uniform spacing among the solutions. These results are analyzed using the two metrics, i.e., GD and Spacing. Tables 2 and 3 present the comparative results of the three algorithms on the basis of the GD and Spacing metrics for the Montage, CyberShake, EpiGenomics, LIGO, and SIPHT workflows, respectively. The results are obtained by averaging 10 simulation runs, as described below.

Table 2 Comparative results of GD for all workflow structures
Table 3 Comparative results of Spacing for all workflow structures

From Table 2, it is clear that the MMOPSO algorithm performs better and reaches a solution set that is 84, 83, 55, 55, and 80% closer to the true Pareto front than the solution set created by FDPSO for the Montage, CyberShake, Epigenomics, LIGO, and SIPHT workflows, respectively, and 90, 86, 70, 72, and 87% closer to the true Pareto front than the solution set created by NSGA-II for the same workflows.

It has been observed from Table 3 that the values of the spacing metric using the MMOPSO algorithm are 26, 43, 38, 36, and 40% lower than those obtained using FDPSO for the Montage, CyberShake, Epigenomics, LIGO, and SIPHT workflows, respectively, and 35, 47, 34, 38, and 37% lower than those obtained using the NSGA-II algorithm for the same workflows. This is due to the use of trade-off schedule plans between makespan and energy in the creation of the non-dominated solution set. It is therefore concluded that the MMOPSO algorithm provides more uniform spacing as well as better convergence of the solution set compared to the other algorithms for all workflow structures under consideration. Hence, it is applicable for offloading large workflow tasks, such as face detection and matrix multiplication, over the mobile cloud environment.

7 Conclusion and Future Work

In the past few years, the single-objective task offloading problem has been addressed by many researchers. However, in real-life applications there are multiple conflicting objectives that must be satisfied simultaneously, so the goal of the decision maker is multi-fold and a set of Pareto optimal solutions is preferred. To address this issue, we proposed the Modified Multi-Objective Particle Swarm Optimization (MMOPSO) algorithm, based on the concept of dominance, to solve the mobile cloud task scheduling problem. It is a combination of a multi-objective particle swarm optimization algorithm and a list-based heuristic. Its performance is analyzed using two conflicting objectives, makespan and energy consumption, under application completion constraints. The efficacy and applicability of the proposed approach are demonstrated using different application task graphs and a comparison with state-of-the-art MOO techniques. The simulation experiments show that MMOPSO performs better and generates solutions that converge more closely to the true Pareto optimal front and show uniform spacing among the created solutions. Hence, it is applicable to a wide class of multi-objective optimization problems for scheduling tasks over the mobile cloud environment.

In the future, concepts such as neural networks and fuzzy logic need to be investigated as possible enhancements to the proposed heuristic for real-life case studies.