1 Introduction

Cloud computing offers virtually unlimited computational resources (hardware and software) that are ready to use from anywhere, at any time, on request. It gives industry a significant opportunity to deliver a high quality of service, such as scalability and serviceability of computational resources, and users pay only for the resources they actually use. These benefits encourage service providers to enlarge the computational resources in their data centres to sustain user requests, which implies a significant increase in resources such as processing units and storage. Proper utilization of cloud resources therefore becomes a major concern. A growing number of service providers have started to offer cloud services; these services satisfy user requirements but consume a great amount of energy. The goals of the proposed system are to maximize the utilization of computational resources (both physical and virtual) and to minimize their energy consumption.

Live migration of virtual machines helps to reduce the number of active physical machines (Cardoso et al. 2015). Virtualization introduces a layer between the hardware and the operating system: accessing a virtual version of computing resources such as an operating system, storage device or network component, without using a dedicated physical version of that infrastructure, is termed virtualization (Cardoso et al. 2015). With the help of virtualization software, a physical host machine can create and manage one or many virtual machines, and each virtual machine runs its own operating system and applications just like a physical computer. Migrating a virtual machine means transferring the running state of the VM from a source physical host (source node) to a destination physical host (target node); the active state of the VM is transferred without interrupting network connections. The migration is termed live when the original VM remains in the running state while the migration process takes place. Live VM migration incurs only a small migration time, which makes migration fast. The objective of the proposed system is to migrate virtual machines live from overloaded physical machines to underloaded physical machines within minimum time, as detailed in the objectives listed below.

Problem statement: To develop an energy-saving mechanism for shared cloud resources using live virtual machine migration guided by the algorithmic steps of Ant Colony Optimization in a cloud computing environment.

The objectives of the proposed work are as follows:

  • To generate each physical machine's resource utilization profile.

  • To identify overloaded and underloaded physical servers using Ant Colony Optimization.

  • To migrate virtual machines live with minimum downtime.

  • To minimize the energy consumption of physical resources.

2 Related work

Veeravalli and He (2015) focus on various recent issues in cloud computing, categorized into areas such as cloud resource management, the rising cost of cloud resources, marketing of resources, and the inclusion of a cloud within a cloud. Mashayekhy et al. (2014) suggested an advantageous mechanism for managing resources in the cloud. Kwak et al. (2015) proposed a Dynamic Resource and Task Allocation algorithm for dynamically managing mobile cloud resources. Patel et al. (2014) recommend elastic management of cloud resources with a green cloud mechanism and propose the DVFS concept for energy management. Cardoso et al. (2015) propose a mechanism in which proper utilization of cloud resources leads to energy savings; to save energy and utilize resources efficiently they focus on live VM migration, and they also mention the Wake-on-LAN protocol for turning idle physical machines off and on to save power as required. Agarwal and Raina (2012) describe migration of active-state virtual machines together with the essential phases of migration. Wen et al. (2015) described an ant colony optimization algorithm to detect and balance load on cloud resources using live VM migration. Biswas et al. (2016) evaluated live migration of virtual machines over high-speed network interfaces, with a network topology based on an OpenStack cloud environment. Yang et al. (2015) used libvirt functions to establish a cloud infrastructure on the KVM hypervisor and propose green cloud computing by shutting down idle cloud resources. Dhanoa and Khurmi (2015) studied the effect of virtual machine size and network bandwidth on the total time required to migrate a VM; live migration saves energy and supports green cloud computing. Kumar and Prashar (2015) carried out a comparative study of load balancing algorithms in the cloud computing environment. Wang et al. (2015) present a mechanism that increases the profit of VM placement in a data centre while reducing the number of VM migrations.

The remainder of the paper is organized as follows. Sect. 3 presents the architectural design of the proposed system. Sect. 4 discusses the algorithms and modules of the system. Sect. 5 presents the experimental results. Finally, Sect. 6 concludes the paper.

3 System architecture design

As shown in Fig. 1, the proposed system functions as follows. The architecture contains a number of physical hosts, each of which can run any number of virtual machines. When a VM is migrated from one physical machine (PM) to another, the mutual connection between the physical host and the virtual machine is preserved. Migration of a VM takes place in accordance with the proposed modules, and the output of these modules initiates the live migration. The proposed modules are implemented on the server physical machine in the cloud environment. During migration it is necessary to migrate the VM safely and to handle any failure that occurs; to achieve this, the following logical phases (Agarwal and Raina 2012) are executed when a VM migration takes place.

Fig. 1 System architecture design

Phase 1. Pre-migration This phase starts with a running instance of the virtual machine on physical machine A. In this phase the destination physical machine is determined and selected. Selecting the destination physical machine in advance helps to reduce the total migration time.

Phase 2. Reservation During the reservation phase, the migration request is initiated from physical machine A to physical machine B. It confirms that the resources required to run the VM are available on physical machine B and reserves them if so; otherwise the VM continues running on physical machine A.

Phase 3. Iterative pre-copy In the first iteration, all memory pages are transferred from physical machine A to physical machine B. In subsequent iterations, the pages dirtied during the previous iteration are transferred.

Phase 4. Stop-and-copy This phase stops the VM on the source physical machine A and redirects its network traffic to the destination physical machine B. The CPU state and the remaining memory pages are transferred to host B. At this point the VM is suspended on host A, but host A is still considered to hold the primary copy in case of failure.

Phase 5. Commitment In this phase, host B sends an acknowledgment to host A that it has successfully received the VM, which guarantees a successful migration. After receiving the acknowledgment from host B, host A removes its copy of the VM, and host B becomes the primary machine for the VM.

Phase 6. Activation In this phase the VM is successfully running on host B.

With the help of the phases defined above and the modules of the proposed system, virtual machines are migrated live from an overloaded PM to an underloaded PM.
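To illustrate the iterative pre-copy and stop-and-copy phases, the following simplified Python simulation sketches the page-transfer loop. The page count, dirty-page rate and stopping condition are assumed values for illustration only, not parameters of the proposed system.

```python
import random

def live_migrate(total_pages=10000, dirty_rate=0.05, stop_threshold=100, max_rounds=30):
    """Simulate the iterative pre-copy loop of a live VM migration (illustrative sketch).

    total_pages   : number of memory pages of the VM (assumed)
    dirty_rate    : fraction of copied pages dirtied per round (assumed)
    stop_threshold: when the dirty set is this small, enter stop-and-copy (assumed)
    """
    # Phase 3, round 1: transfer every memory page while the VM keeps running.
    to_copy = total_pages
    copied_total = 0
    for round_no in range(1, max_rounds + 1):
        copied_total += to_copy
        # Pages written by the still-running VM during this round become dirty
        # and must be re-sent in the next round.
        dirtied = int(copied_total * dirty_rate * random.uniform(0.5, 1.5))
        if dirtied <= stop_threshold:
            # Phase 4: suspend the VM and copy the last dirty pages plus the CPU state.
            copied_total += dirtied
            print(f"stop-and-copy after round {round_no}, {copied_total} pages sent in total")
            return
        to_copy = dirtied
    print("dirty set did not converge; falling back to stop-and-copy")

live_migrate()
```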

4 Algorithm and modules

The proposed system has seven modules. The Resource Monitor collects the CPU and memory usage of physical and virtual resources. Capacity Distribution sets the threshold values for the resources. Task Allocation helps to determine overloaded servers. The Optimizer helps to reduce the number of physical machines in an active state. The Local Migration Agent provides the best-suited PM for the VM to be migrated. Migration Orchestration performs the live migration of virtual machines. The Energy Manager reduces the power consumption of the resources.

4.1 Resource monitoring

The Resource Monitoring module collects the CPU and memory usage profiles of the cloud resources. The psutil library (Python system and process utilities) communicates with the physical machine's hypervisor and collects resource usage. The CPU utilization of each virtual machine and physical machine is obtained directly from psutil.cpu_percent(), and the CPU usage profile of a physical machine is calculated by aggregating the CPU usage of its virtual machines. The process.memory_info() function provides the memory utilization of the cloud resources. This resource utilization data is saved in the database for future reference.
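A minimal sketch of such a collection loop is shown below, using the psutil calls named above; the sampling interval, the sqlite3 database and its table layout are illustrative assumptions.

```python
import sqlite3
import time
import psutil

def sample_usage(db_path="usage_profile.db", interval=5.0):
    """Periodically record CPU and memory usage of the local machine (sketch)."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS usage "
                 "(ts REAL, cpu_percent REAL, mem_rss_mb REAL)")
    proc = psutil.Process()  # the monitored process; a PM profile would aggregate its VMs
    while True:
        cpu = psutil.cpu_percent(interval=interval)   # host-wide CPU utilisation (%)
        mem = proc.memory_info().rss / (1024 * 1024)  # resident memory of this process (MB)
        conn.execute("INSERT INTO usage VALUES (?, ?, ?)", (time.time(), cpu, mem))
        conn.commit()

if __name__ == "__main__":
    sample_usage()
```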

4.2 Capacity distribution

The Capacity Distribution module distributes capacity among the physical machines. It simply sets the upper and lower threshold values for each physical machine, which the Optimizer module later uses to detect the load.
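As an illustration, a minimal sketch of how these thresholds might be represented is given below; the class name and the 80%/20% values are assumptions for illustration, not the thresholds used by the proposed system.

```python
from dataclasses import dataclass

@dataclass
class CapacityThresholds:
    upper: float = 80.0   # % CPU above which a PM is considered overloaded (assumed value)
    lower: float = 20.0   # % CPU below which a PM is considered underloaded (assumed value)

    def classify(self, cpu_percent: float) -> str:
        """Classify a physical machine's load against the configured thresholds."""
        if cpu_percent > self.upper:
            return "overloaded"
        if cpu_percent < self.lower:
            return "underloaded"
        return "normal"

# Example: the Optimizer can use the classification to pick migration candidates.
thresholds = CapacityThresholds()
print(thresholds.classify(91.5))   # -> "overloaded"
```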

4.3 Task allocation

The Task Allocation module estimates the execution time and memory required for an incoming task and assigns the task to a machine accordingly. This calculated distribution of tasks assists the migration phase and helps to minimize the migration time.

4.4 Optimizer

The Optimizer receives the resource usage profiles from the Resource Monitor and runs an ant colony optimization algorithm to determine the load on the physical machines that are in an active state. After successful execution of the ACO-Based VM Distribution List algorithm, it obtains a new allocation of virtual machines to physical machines. The resulting VM redistribution on the PMs is transmitted to the Migration Orchestration with the help of the Local Migration Agent.

Algorithm A (ACO-Based VM Distribution List)

Step I: if the load on a physical machine instance exceeds the threshold then

Step II: Generate a sorted list of the VMs running on that PM instance that should be migrated

Step III: while i < M do

Step IV: Generate ants

Step V: Ants traverse using the Positive Traversing Strategy

Step VI: i = i + 1, where i is the iteration counter

Step VII: end while

Step VIII: Generate ants

Step IX: Ants traverse using the Negative Traversing Strategy

Step X: Generate the set of physical machines in low-load condition

Step XI: Match the candidate VMs to the PMs

Step XII: Generate the new VM-to-PM mapping

Step XIII: end if

Algorithm B (VM Mapping with PM Algorithm)

Step I: Input: LVM+, LPM+

Step II: Best Fit = −∞

Step III: for each vi ∈ LVM+ do

Step IV: for each pj ∈ LPM+ do

Step V: Calculate Fitness (vi, pj)

Step VI: if Fitness(vi, pj) > Best Fit then

Step VII: Best Fit = Fitness (vi, pj)

Step VIII: end if

Step IX: end for

Step X: if Best Fit ≥ 0 then

Step XI: Select the instance of PM as the destination PM

Step XII: Migrate instance of VM

Step XIII: LVM+ = LVM+ − {vi}

Step XIV: end if

Step XV: end for

Step XVI: if LVM+ ≠ ∅ then

Step XVII: Reschedule using Algorithm A

Step XVIII: end if

Algorithm A (Wen et al. 2015) describes the ACO-based VM distribution list algorithm, where M denotes the number of iterations. Steps III to VIII identify physical machines in high-load condition. Algorithm B (Wen et al. 2015) takes the sorted list of VMs LVM+ = {v1, v2, …, vi}, which contains the VMs that should be migrated, and the list of destination PMs LPM+ = {p1, p2, …, pk}, which contains candidate PMs in low-load condition. With the help of these two sets, it matches the VMs to the PMs.
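To illustrate Algorithm B, the following Python sketch performs the best-fit matching of candidate VMs to low-load PMs. The ACO traversal of Algorithm A is abstracted away here (the lists LVM+ and LPM+ are taken as given), and the fitness function, based on spare CPU capacity after placement, is an assumption for illustration rather than the exact metric of Wen et al. (2015).

```python
def fitness(vm, pm):
    """Assumed fitness: spare CPU capacity on pm after hosting vm (negative if it does not fit)."""
    return (pm["cpu_capacity"] - pm["cpu_used"]) - vm["cpu_demand"]

def map_vms_to_pms(lvm_plus, lpm_plus):
    """Algorithm B sketch: place each candidate VM on the best-fitting low-load PM."""
    unplaced = []
    for vm in list(lvm_plus):
        best_fit, best_pm = float("-inf"), None
        for pm in lpm_plus:
            f = fitness(vm, pm)
            if f > best_fit:
                best_fit, best_pm = f, pm
        if best_fit >= 0:                      # destination found: migrate vm there
            best_pm["cpu_used"] += vm["cpu_demand"]
            print(f"migrate {vm['name']} -> {best_pm['name']}")
        else:                                  # no PM can host vm; Algorithm A must reschedule
            unplaced.append(vm)
    return unplaced

# Hypothetical input values for illustration only.
lvm = [{"name": "VM11", "cpu_demand": 30}, {"name": "VM12", "cpu_demand": 50}]
lpm = [{"name": "ESXi_Server2", "cpu_capacity": 100, "cpu_used": 45},
       {"name": "ESXi_Server3", "cpu_capacity": 100, "cpu_used": 20}]
leftover = map_vms_to_pms(lvm, lpm)
```

Any VMs returned in `leftover` correspond to Step XVI of Algorithm B, where Algorithm A has to be rescheduled.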

4.5 Local migration agent

When the load on an instance of the physical server machine surpasses the upper threshold value, the Local Migration Agent ranks all VMs on that PM instance according to their average load, which is evaluated by Formula 1 given below. A more heavily loaded virtual machine has higher priority.

$$\overline{V_i(j,\;T)} = \frac{1}{T}\sum\nolimits_{k=1}^{n} V_i(j,\;k)\,\left(t_k - t_{k-1}\right)$$
(1)

Vi(j, k) denotes the resources consumed by the j-th instance of VM on the i-th instance of PM in time period k and is calculated by

$$V_i(j,\;k) = \frac{1}{K}\left( \overline{U}_{\text{cpu}}^{2} + \overline{U}_{\text{mem}}^{2} + \overline{U}_{\text{net}}^{2} \right)$$
(2)

where,

$$K = \left( \overline{U}_{\text{cpu}}^{2} + \overline{U}_{\text{mem}}^{2} + \overline{U}_{\text{net}}^{2} \right)$$
(3)
$$\overline{U}_{\text{cpu}} = \frac{1}{2}\left( U_{\text{cpu}_{t_{k}}} + U_{\text{cpu}_{t_{k-1}}} \right)$$
(4)
$$\overline{U}_{\text{mem}} = \frac{1}{2}\left( U_{\text{mem}_{t_{k}}} + U_{\text{mem}_{t_{k-1}}} \right)$$
(5)
$$\overline{U}_{\text{net}} = \frac{1}{2}\left( U_{\text{net}_{t_{k}}} + U_{\text{net}_{t_{k-1}}} \right)$$
(6)
  • \(U_{\text{cpu}_{t_k}}\): CPU resource consumption by the j-th instance of VM on the i-th instance of PM in the time interval k; the terms \(U_{\text{mem}_{t_k}}\) and \(U_{\text{net}_{t_k}}\) are defined analogously.

  • \(U_{\text{cpu}_{t_k}}\), \(U_{\text{mem}_{t_k}}\), and \(U_{\text{net}_{t_k}}\): constants.

The load on physical machine instance Pi is calculated by the following formula:

$$P(i,\;T) = \sum\nolimits_{j=1}^{m_i} \overline{V_i(j,\;T)}$$
(7)

where,

  • mi: the number of virtual machines on the physical machine instance.

The Local Migration Agent repeatedly observes the load on the physical machines. If a physical machine is overloaded, migration takes place. The upper and lower threshold values are set by the Capacity Distribution module (Wen et al. 2015).
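To make the load computation concrete, the short Python sketch below evaluates Formulas (1), (2) and (7) over a series of utilisation samples. The sample values are hypothetical, and the normalisation term K of Formula (3) is treated here as a fixed constant passed to the function rather than recomputed per interval.

```python
def interval_load(u_now, u_prev, K=3.0):
    """Formulas (2)-(6): load of one VM in one interval.

    u_now / u_prev hold 'cpu', 'mem', 'net' utilisation samples at t_k and t_{k-1}.
    K is treated as a fixed normalisation constant here (an assumption).
    """
    avg = {r: 0.5 * (u_now[r] + u_prev[r]) for r in ("cpu", "mem", "net")}
    return (avg["cpu"] ** 2 + avg["mem"] ** 2 + avg["net"] ** 2) / K

def vm_average_load(samples, times):
    """Formula (1): time-weighted average load of one VM over the window T."""
    T = times[-1] - times[0]
    total = 0.0
    for k in range(1, len(samples)):
        total += interval_load(samples[k], samples[k - 1]) * (times[k] - times[k - 1])
    return total / T

def pm_load(vm_sample_lists, times):
    """Formula (7): load of a PM is the sum of the average loads of its VMs."""
    return sum(vm_average_load(s, times) for s in vm_sample_lists)

# Hypothetical utilisation samples (fractions of capacity) at three time points.
times = [0.0, 5.0, 10.0]
vm1 = [{"cpu": 0.4, "mem": 0.3, "net": 0.1},
       {"cpu": 0.6, "mem": 0.3, "net": 0.2},
       {"cpu": 0.7, "mem": 0.4, "net": 0.2}]
print(pm_load([vm1], times))
```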

4.6 Migration orchestration

The Migration Orchestration module organizes the migration of the virtual machines and carries out the actual live migration. It uses the vMotion feature of the vSphere Client and the vCenter Server Appliance. The following factors affect the migration time: VM size, network traffic, and physical machine resource usage. Virtual machines are migrated one at a time to reduce network traffic overload.

4.7 Energy management

After execution of all the above modules, the idle physical machines identified by the previous modules need to be shut down to save energy. An idle physical machine is one that has no VM running on it. However, a machine that is shut down cannot be started again quickly when client demand for resources increases, so it is suggested to put idle physical machines into sleep mode instead, waking them whenever they are needed.
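As a sketch of this sleep-mode handling, the snippet below suspends an idle host over SSH and later wakes it with a standard Wake-on-LAN magic packet (the protocol mentioned by Cardoso et al. 2015). The host name, MAC address, passwordless SSH and the use of systemctl suspend are assumptions about the environment, not part of the proposed system.

```python
import socket
import subprocess

def suspend_host(host: str) -> None:
    """Put an idle physical machine (no running VMs) into sleep mode via SSH.
    Assumes passwordless SSH to a systemd-based host (an assumption)."""
    subprocess.run(["ssh", host, "sudo", "systemctl", "suspend"], check=True)

def wake_host(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Hypothetical usage: suspend an idle host, wake it again when demand rises.
# suspend_host("pm3.example.local")
# wake_host("00:11:22:33:44:55")
```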

5 Experimental results

This section presents the experiments conducted to analyse the efficacy of the proposed system. The proposed ant colony optimization algorithm (Wen et al. 2015) is compared with an existing load balancing algorithm, and the proposed system is also examined in terms of the migration time required to migrate a virtual machine. To evaluate the proposed system we set up a virtualized cloud environment with VMware Workstation, the VMware vSphere Client, NAS storage and the vCenter Server Appliance. The experiment is based on a random workload designed to evaluate the proposed system. The experimental environment involves four physical machines, each with 8 GB of RAM and an Intel Core i5-6200U processor. Three ESXi virtual machines with 4 GB of RAM each are created on these physical machines. One physical machine hosts the vCenter Server Appliance, and one virtual machine is installed for NAS storage. To demonstrate the superiority of the proposed system, CPU consumption is generated stochastically.

The Ant Colony Optimization (ACO) algorithm (Wen et al. 2015) is compared with the Round Robin (RR) load balancing algorithm (Kumar and Prashar 2015). The performance of these algorithms is examined with respect to the time required to balance the same amount of input data.

Graph 1 shows that the ACO algorithm requires less time to detect and balance the load, which indicates that ACO performs better than the Round Robin algorithm.

Graph 1 Performance comparison of the RR and ACO algorithms

The performance of VM migration, in terms of the time required for migration, is evaluated by considering the following factors.

5.1 Increase in number of physical server machines

Graph 2 shows that as the number of physical servers in the network increases, the migration time increases. The proposed system requires a minimum of two machines for migration of virtual machines, so the graph starts at two physical machines. When migration is carried out between two physical machines, 0.32 s are required to migrate a virtual machine. As the number of physical machines increases to three and four, the migration time increases to 1 min 4 s and 5 min 13 s, respectively.

Graph 2 Performance evaluation of VM migration as the number of physical machines increases

5.2 Increase in CPU usage of physical machine

Graph 3 shows that when a virtual machine is migrated from the source physical machine to the destination physical machine, the migration time is affected by the CPU usage of the source physical machine. CPU usage of the source physical machine is expressed as a percentage and migration time in minutes. As the CPU usage of the physical machines increases due to their own workload and applications, the time required to migrate a VM increases. When the CPU usage of the physical machine is 10%, the VM migration time is 7 min; when the CPU usage rises to 50%, the required migration time is 13 min 45 s.

Graph 3 Performance evaluation of VM migration as CPU usage increases

5.3 Performance evaluation of CPU usage in VM migration

The performance of the physical machines is analysed with respect to CPU usage during migration of a virtual machine. The ACO algorithm detects the load on the physical servers, and the virtual machine is then migrated to an appropriate physical server. Consequently, the CPU usage of the source physical machine is high before the VM is migrated and is reduced after migration; conversely, the destination physical machine's CPU usage is lower before the migration and increases after the virtual machine is placed on it. This is shown in the following tables.

As shown in Table 1, the virtual machine VM11 is migrated from the source server machine ESXi_Server1 to the destination server machine ESXi_Server2. Before migration of VM11, the CPU usage of ESXi_Server1 is 90%, and after migration it is reduced to 60%. The destination machine's CPU usage is 45% before the virtual machine is placed on it and 75% after VM11 is placed on that destination server. In this way, virtual machine migration balances the load on the physical server machines. Tables 1 and 2 together show that the CPU load is balanced after the virtual machine is migrated.

Table 1 Performance evaluation of CPU usage (before migration) in VM migration
Table 2 Performance evaluation of CPU usage (after migration) in VM migration

6 Conclusion

The proposed system reduces energy consumption by live migration of virtual machines in a cloud environment and achieves a low migration time for virtual machine migration. This work results in proper utilization of the available physical resources in the cloud environment, which in turn reduces energy consumption. It is concluded that virtual machines are successfully migrated live, leading to energy savings.

As future work, we plan to focus on cleaning VMs by removing unnecessary processes, which will reduce the size of a VM before migration and thereby reduce the migration time. We also intend to improve the threshold value used for load detection, replacing the predetermined value with an elastic (adaptive) one.