1 Introduction

Thanks to the ubiquity of the cloud, everyone is eager to adopt cloud computing and to compare its results against traditional technologies. Cloud technology creates a synergy between different technologies running many algorithms, and people now understand how data is processed in the cloud through various scheduling algorithms. Multiprocessing is also achieved by interconnecting virtualized resources under task scheduling, and systems with different configurations collaborate with different types of resources to perform critical computations. The cloud provides a range of useful tools and advanced services that make embedded technology scientific and practical, and it manages essential resources in a very convenient way. Just as everything on a world map is visible to everyone, computational capabilities can also be tracked, opening doors for developers to create cloud-map visibility. Cloud computing is like a pool in which every technology is clearly defined with its advantages and disadvantages; the pool includes service migration, replication, security, storage space, practical development tools, commodity management, and more. A special property of these services is that none depends on the others: every user is unique, so no dependencies are created. Development here serves as innovation, which must be pursued in all aspects, and the assurance of development and advancement through cloud computing is nearing completion. Cloud computing services go by many names, each defined by its key points through in-depth practical and theoretical analysis. Results can be deployed on different cloud architectures (Salamy 2019), and the different versions of the cloud let users run their work within their own spaces and responsibilities.

Application-based projects are running in the market with great name and fame, and cloud computing is also designed to support this platform. It works over an Internet connection to deliver data from the server to user processes. Cloud computing operates in two phases: one at the user end and the other at the system end. The user end allows access to the stored data, while the system end provides data storage facilities with complete privacy. The cloud server guarantees the services provided with the help of all connected components, such as the repository, the server, and most importantly the CSP (cloud service provider). Only the CSP can communicate with customers and provide cloud services according to their needs and its commitments. Services are provided under terms and conditions defined through a particular authorized channel, which maintains governance over the functioning of the system and its hierarchy. Several security gates are also installed on the servers to reduce routine violations. Multinational companies recognize the benefits of cloud-based architecture and build their own systems so that developers can use utility support and capture a snapshot of the golden digital world; such an organization works as a CSP, providing a layout for technology workers. The working model for cloud computing is depicted in Fig. 1a, b. Since a large number of requests are generated every day for cloud services, many customers are soon queued and waiting for their turn. The entire activity relies on strong connectivity: the CSP receives a request via send/receive and then merges it into the server system to complete the process.

Fig. 1 a, b Working model for cloud computing

The CSP is organized around a description of the scheduling process, in which each task is mapped to the applicable schedule and criteria are set for obtaining an optimized result with respect to specific factors. All CSPs differ in their services, so each can design a chart according to its internal requirements. To avail the services, a user first searches for CSPs that match their needs and then sends a request to an established CSP. The service provider informs the customer of the terms and conditions attached to the commitment, and communication then takes place legally within the system to deliver the essential services to the customer. The system places the requests in a queue according to the requested policies and related factors; each request is then mapped to a machine with the required configuration, which produces the optimized result.

Types of clouds: A private cloud is not for everyone; it provides services to preferred users over private, confidential networks rather than to the general public. A public cloud is open to all; it provides services through a third party over the public Internet and is accessible to any type of user who wants to use the offered services (Anjuli Garg and Krishna 2014). Over time, the provider can change the rules and services according to user response, and the technology needs further refinement to grow the market and introduce new services. A hybrid cloud is an amalgamation of the two used to complete a single project: the public cloud is very helpful for serving the general public, while the private cloud is safe and secure for people working within the same organization. A community cloud is a combined effort that gives various organizations working towards similar goals the freedom to share tools and facilities. Details of the cloud types are shown in Fig. 2a, b.

Fig. 2 a, b Types of cloud computing

The characteristics of the various clouds are listed below. Public and private clouds address an individual organization's concerns, while community and hybrid clouds are built around shared concerns.

Interest is an important driver for achieving targeted work in real life. In the same way, scientists were eager to introduce the concept of the cloud; the cloud is neither a product nor a process but simply a concept derived from what researchers have observed in the developing environment. After an in-depth analysis, it was found that everyone wants easy access to services in one place instead of going here and there. Cloud computing technology then arrived and began to grow, since the cloud gives users the freedom to use different types of services on the same server. The core services are IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). Further services were derived from these three: STaaS (Storage as a Service), APIaaS (API as a Service), DaaS (Data as a Service), TaaS (Trust as a Service), TEaaS (Testing as a Service), AaaS (Application as a Service), and SECaaS (Security as a Service).

Everyone is now accustomed to cloud computing technology and its useful activities. This paper concentrates on one phase of cloud computing, namely scheduling. Scheduling also works as a service for users by providing a more coordinated working model to process tasks well: how to schedule work on the cloud server so that other assignments are not interrupted and the scheduled work produces good results. Thousands of scheduling algorithms have been developed to obtain better results than in the past, and this creates considerable confusion in today's market about how to select the best scheduling algorithm to accomplish a task fruitfully (Bansal 2018; Bansal et al. 2016; Bansal and Dutta 2014). This paper expands on these questions through a literature survey; its structure is shown in Fig. 3.

Fig. 3 Flow of the paper

2 Scheduling objective and methodology

Scheduling starts with the mapping of resources to machines. If the resources are defined as R = {R1, R2, R3, …, Rn} and the machines are defined as M = {M1, M2, M3, …, Mn}, then the mapping function can be defined as

$$MpF_{1 \to n} :R \to T \in M$$

where T denotes the set of tasks T = {T1, T2, T3, …, Tn} and the index of MpF ranges from 1 to n. Allocating each task to the prescribed machine with the required resources completes the mapping and produces the desired result. The scheduling process is depicted in Fig. 4, and a minimal code sketch of such a mapping follows the figure.

Fig. 4 Scheduling process
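
To make this mapping concrete, the following minimal Python sketch assigns each task, together with its required resource capacity, to a machine that can currently host it. The task list, the machine capacities, and the first-fit rule are illustrative assumptions rather than a method taken from any of the surveyed papers.

```python
# Illustrative sketch of the mapping MpF described above: each task Ti, with its
# required resource Ri, is placed on a machine Mj that can satisfy the requirement.
# The data and the first-fit rule are assumptions for demonstration only.

def map_tasks_to_machines(tasks, machines):
    """tasks: list of (task_id, required_capacity); machines: dict machine_id -> free capacity."""
    mapping = {}
    for task_id, required in tasks:
        for machine_id, free in machines.items():
            if free >= required:                  # first machine with enough free capacity
                mapping[task_id] = machine_id
                machines[machine_id] = free - required
                break
        else:
            mapping[task_id] = None               # no machine can currently host the task
    return mapping

if __name__ == "__main__":
    tasks = [("T1", 2), ("T2", 4), ("T3", 1)]
    machines = {"M1": 3, "M2": 5}
    print(map_tasks_to_machines(tasks, machines))  # {'T1': 'M1', 'T2': 'M2', 'T3': 'M1'}
```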

The factors considered in a scheduling design are what make the scheduling successful. Factors play the main role in any designed schedule: based on their measured values, a schedule can be judged in terms of reliability, and it can be decided whether the proposed scheduling is worth using up to the computed value. Factors may benefit the user as well as the service provider. The factors are explained below from both the user and the service provider perspective.

Throughput: the rate of successful transmission during communication. It implies low latency, so that the rules of a specific schedule can be carried out efficiently. In the surveyed literature, 9% of the papers consider this factor.

Makespan: defined as the difference between the start and end times of a queue of tasks. Most of the surveyed papers, 81%, target minimum makespan. Reducing this factor also improves several other factors, such as cost, resource utilization, and schedule length. Many factors are interdependent, and because of this property, several factors can be optimized within a single scheduling design.
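
As a simple illustration, with assumed start and finish times rather than data from any cited paper, makespan is the gap between the earliest task start and the latest task finish:

```python
# Hypothetical example: makespan of a schedule is the difference between the
# latest finish time and the earliest start time over all scheduled tasks.

def makespan(schedule):
    """schedule: list of (start_time, finish_time) pairs for the scheduled tasks."""
    earliest_start = min(start for start, _ in schedule)
    latest_finish = max(finish for _, finish in schedule)
    return latest_finish - earliest_start

# Three tasks on two machines: makespan = 9 - 0 = 9 time units.
print(makespan([(0, 4), (0, 6), (4, 9)]))
```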

Power consumption: energy consumption is inversely related to resource utilization. If machines are used well, power consumption stays low. Sometimes machines are not used properly because of heavy resource requirements; at such times the system consumes more energy and the cost also increases automatically. The power-consumption factor helps to analyze how energy is accounted for when developing optimized results; 14% of the surveyed papers consider it.

Local optima: in computational terms, a local optimum is the best solution found within a small region of the search space. On the way to the global optimum, the local optima represent the cream layer among the different candidate solutions that can be generated. This factor needs more attention, as only 5% of the surveyed papers consider it.

Cost and time: these cover many components such as transmission, allocation, execution, completion, and management. The two are directly proportional to each other: if the time is reduced, the cost is reduced automatically as well; spend less time and the expenses will be lower. The user gains access to the services for a limited, required time period and pays the provider based on its valuation. Mathematics is a strong tool here, since formulas can be derived for specific workloads and the statistical instructions can be modified by the user to achieve the desired output. In this survey, 25% of the papers consider time and cost.

Availability of resources: available resources assure the user that all the materials needed to complete the tasks of the designed project are present. Researchers may consider a reservation factor to ensure this, which makes it easier to schedule tasks. Only 1% of the surveyed papers address this factor.

Schedule length: due to high demand and a limited number of resources, the length of the schedule needed to execute the tasks and meet the target increases. Resources are provisioned to reduce this length. In this survey, 4% of the papers consider this factor.

Resource utilization: resources should be used within their range, never crossing the limit value. For full utilization, each machine must carry a uniform or average load during processing. To get higher mileage from the resources, the fee structure should also be clear to the user. 55% of the surveyed papers consider utilization.
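
As a rough sketch of how this factor might be quantified, the snippet below computes per-machine utilization over an observation window and an imbalance degree that compares the most and least loaded machines against the average load; the busy times and both formulas are assumptions for illustration, not definitions from the surveyed papers.

```python
# Assumed definitions: utilization = busy time / observation window per machine,
# imbalance degree = (max load - min load) / average load.

def utilization(busy_times, window):
    """busy_times: busy duration of each machine; window: total observation time."""
    return [busy / window for busy in busy_times]

def imbalance_degree(loads):
    average = sum(loads) / len(loads)
    return (max(loads) - min(loads)) / average if average else 0.0

loads = [8.0, 6.0, 4.0]            # busy time of three machines over a 10-unit window
print(utilization(loads, 10.0))    # [0.8, 0.6, 0.4]
print(imbalance_degree(loads))     # (8 - 4) / 6 = 0.67 (approximately)
```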

Bandwidth: the amount of data that can be migrated through a single channel or multiple channels. High bandwidth is considered a good outcome when large amounts of data are computed over a channel. In this survey, 4% of the papers consider bandwidth.

QoS: many factor values can be captured through quality of service; this single entity subsumes many other factors. Whatever services the provider delivers to the user, the evaluation of those services falls under the quality factor. 7% of the surveyed papers target QoS.

Fault tolerance: the property of a system that produces correct output even in the presence of failures. It is an intelligence factor, allowing a machine to compute a task properly even when part of the system is inactive. Authors of later papers should keep this in mind, since the analysed coverage of this factor is very low, only 1%.

Security: this is a very important and unavoidable factor, as resources vary in nature due to varying user demand. Data security becomes a crucial goal with multi-tenant features in the era of virtualization. Very few research papers, if any, attempt to address this factor. The fear of data leakage and vulnerabilities plays a major role in the virtualization and distribution of facilities. To achieve efficient and secure virtual processing, the security factor must be given the highest priority.

The percentage values are calculated from how often a particular factor is considered in current research: the use of each factor is noted from the surveyed articles and the counts are summarized per factor.

3 Task scheduling

Research papers published by various prestigious publishers are presented here. Scheduling is a term that defines the perfect match between two entities; these entities can be anything, such as a resource or a task. Various types of scheduling methods exist to obtain optimized results, where optimization can be understood as a good result or decision. Task scheduling processes tasks efficiently and effectively based on their characteristics. Many more concepts and terminologies are used under task scheduling, and these are explored here through the published articles.

A novel hybrid cost-based prioritization concept has been used to design a new task scheduling algorithm that obtains better values for factors such as throughput, makespan, and average waiting time (Satish 2019). A fuzzy-based modified particle swarm optimization algorithm improves makespan, correction ratio, imbalance degree, efficiency, and total execution time (Mansouri 2019). A valuable technique moves load across machines for better performance; the author works with earliest deadline first (EDF) and improves the power consumption value (Salamy 2019). Unbalanced workloads can push a system towards a critical state; by applying the Hungarian algorithm, the author achieves the required balancing results (KumarPanda et al. 2018). As scheduling demand increases, new challenges arise; to guarantee completion within the time limit, the author proposes a backfilling mechanism (ChandanNayak 2018). The author refines throughput and local optima with a hybrid load-balancing algorithm that combines Teaching-Learning-Based Optimization (TLBO) and Grey Wolf Optimization (GWO) (Mousavi 2018). Energy consumption is improved by executing each queued task at an equal speed (Li 2018). To save system energy, an approximation algorithm based on the first-fit-decreasing concept is used (Tian et al. 2018). Combining two techniques can yield an efficient result: joining genetic algorithms (GAs) with bacterial foraging (BF) reduces makespan and energy consumption (Srichandan 2018). Cost, a crucial factor, is reduced through a new resource provisioning and scheduling strategy designed specifically for Workflow-as-a-Service (WaaS) environments (Rodriguez and Buyya 2018). To maximize CPU real time, a novel scheduler analyses resource usage over execution time to achieve optimized performance (Sotiriadis and Bessis 2018). The alternating direction method of multipliers is applied to give users greater availability of resources (Sebastio 2017). Ant colony optimization improves scheduling robustness and reduces machine load imbalance, with a selection rule proposed to improve resource utilization (Wei et al. 2012). Energy efficiency and execution time are improved by implementing the Clonal Selection Algorithm (CSA) (Jena 2017). The most common factors, makespan and resource utilization, are refined through a novel load-balancing technique (Mohit Kumar and Sharma 2017). To reduce the schedule length of each task, the problem is converted into a budget-constrained application schedule (Weihong et al. 2017). Makespan is reduced using the heterogeneous earliest finish time method (KalkaDubey and Sharma 2017).
The author reduces the time value by keeping logs of previous scheduling information and evaluating them as needed (Ellendula Madhukara and Thirumalaisamy Ragunathan 2015). Lower values for makespan, execution time, round-trip time, and transmission cost are obtained by adopting load balancing modified particle swarm optimization (LBMPSO) (Awad et al. 2015). The author improves makespan through a zone-based honeybee approach, obtained by modifying the natural life cycle of real honeybees (Doreen Hephzibah Miriam et al. 2015). Proper use of resources is a very important factor for any research scenario, and a credit-based scheduling concept was proposed to address it (Antony Thomas et al. 2015). Makespan and cloud utilization are addressed through an allocation-aware task scheduling method (Sanjaya 2015a). By using multi-objective nested particle swarm optimization, power consumption and processing time are reduced (Jena 2015). Task completion within time margins is achieved by applying fuzzy scheduling (Zadeh and Hashemi 2013). The load factor is considered to reduce execution time and waiting time using honeybee behaviour (Dhinesh Babu and Venkata Krishna 2013). Quality of service (QoS), the most popular factor, is enhanced by executing tasks in order of highest priority; another advantage of this algorithm is that it also provides minimum completion time (Xiaonian et al. 2013). To handle over-use and over-processing, two online dynamic resource allocation algorithms have been implemented for IaaS cloud systems with pre-emption applied to tasks (JiayinLi et al. 2012). Throughput and user satisfaction are achieved by applying fuzzy logic with various arguments (Fahmy 2010). To increase energy efficiency, adaptive power-aware virtual machine provisioners (APA-VMPs) are recommended (Jeyarani et al. 2011). A new method, an online potential timeout, is designed and implemented using the CloudSim toolkit; it improves various task scheduling factors, much like widening a road lets heavy traffic flow smoothly (Aida 2018). Computation cost, makespan, and several other QoS factors are minimized through parent-preference-based mechanisms (Muhammad Shahzad Arif et al. 2019). The author addresses makespan, CPU time, load balancing, stability, and scheduling speed using the firefly algorithm (Kashikolaei et al. 2019). The proposed paper, using a load-balancing mechanism based on a probability distribution, yields useful results for resource utilization (Panda and Jana 2019). To improve energy efficiency and makespan, energy-efficient task scheduling algorithms are recommended (Sanjaya 2019a). To improve makespan, heuristic approaches have been prescribed (Nayak and Padhi 2018). Several factors are addressed using the Modified Analytic Hierarchy Process (MAHP) with bandwidth-aware divisible scheduling (BATS) and BAR optimization, applied together with longest expected processing time pre-emption (LEPT) and divide-and-conquer methods (Gawali and Shinde 2018). A cuckoo search algorithm was developed to refine resource usage (Agarwal and Srivastava 2018).
The author minimizes makespan and maximizes throughput by refining a common method with social group optimization and shortest-job-first scheduling (Phani Praveen et al. 2018). To improve makespan, an execution-time-based sufferage algorithm provides further optimization (Krishnaveni and Sinthu 2018a). Makespan, resource utilization, and allocation cost are improved by combining an increasing-function-based particle swarm optimization with the BAT algorithm (Valarmathi and Sheela 2017). The author accounts for the cost of bandwidth, memory, and storage under a budget-constrained schedule by aggregating tasks into groups (Alworafi et al. 2017). Genetic-algorithm-based customer-conscious resource allocation and task scheduling in multi-cloud computing reduce makespan and maximize customer satisfaction (Tamanna Jena and Mohanty 2018). The authors improve makespan, cloud utilization, and the gain and penalty costs of services using service-level-agreement-based scheduling techniques (Sanjaya 2017). The proposed paper shows improvements in makespan and cloud usage by working on allocation-aware task scheduling (Sanjaya 2019b). The author counters the long, misleading pheromone trails left by ants by using reinforcement learning techniques to keep performance high throughout (Moon et al. 2017). Flower pollination algorithms are a useful method to improve makespan and cloud resource usage (Gupta et al. 2017). The author improves makespan and cloud usage by using distribution and near-radix scaling (Sanjaya 2018). A grouping algorithm proved very effective, obtaining short execution times for all tasks (Parthasarathy 2017). Management cost and makespan decrease with a genetic algorithm (Ajeena Beegom and Rajasree 2015). To reduce energy consumption, a scheduling scheme based on task processing time and VM usage proved very useful (Panda and Jana 2016). Makespan and cloud resource utilization have been improved using several methods: MCC, MEMAX, and CMMN (Sanjaya 2015b). The author reduces total execution time and cost by combining a genetic algorithm with fuzzy theory (Javanmardi et al. 2014). Using an AVL tree structure improves the real-time average system load (Chiu et al. 2014). The author improves resource utilization using a load-balancing-sensitive genetic algorithm with the min-min and max-min methods (Zhan and Zhang 2014). An adapted ant colony approach proves very efficient at reducing makespan (Wang 2013). The author improves task completion time and resource utilization by applying flexible scheduling under real-time constraints (Ma et al. 2012). User provisioning is an important factor when services change; two-level load-balancing task scheduling proves beneficial for resource utilization here (Fang et al. 2010). Fuzzy-logic-based methods reduce the average waiting time and turnaround time by executing tasks according to their priorities (Shatha 2010). To identify the right service provider, the author relies on the Honey Life scheduling concept to evaluate the trust factor as an assurance of quality service (Firdhous et al. 2011).
The author improves makespan and resource usage by running comprehensive simulations on benchmark data with innovative ideas (Jyoti Gupta et al. 2016). Genetic approaches give better results for resource utilization in today's scenario, so the author considers two factors, resource utilization and closing cost, and optimizes them using a double-fitness algorithm (Yin et al. 2018). The author improves execution time and fault tolerance by distinguishing between different frameworks, such as Amazon's and others (Soualhia et al. 2018). The ant colony method is very useful for optimization, and the author uses it to reduce makespan in the proposed task scheduling algorithm (Gupta and Garg 2017). The author uses energy-aware dynamic task scheduling to save energy when computing many more tasks (Li et al. 2017). Everyone wants the cheapest and best service; using a genetic algorithm, the paper minimizes several factors such as execution cost, execution time, and schedule time (Gupta and Tewari 2017). The published paper evaluates execution cost using Bayes classification and shows optimal results (Zhang and Zhou 2018). Improvements based on priority measures for meta-tasks, built on the Min-Min algorithm, obtain better makespan and resource utilization (George Amalarethinam and Kavitha 2017). Dominant resource priority and maximum utilization allocation are introduced to improve a novel scheduling concept and achieve minimum makespan (Zhang et al. 2016). Applying ranking-based scheduling algorithms improves execution time and resource utilization (Ettikyala 2016). The authors reduce power consumption using a "shut down when redundant, turn on on demand" scheduling policy combined with a low-energy resource allocation algorithm based on potential matching (Xiaolong et al. 2016). By introducing an optimized algorithm, the author improves resource usage and makespan (Mittal and Katal 2016). Using space-sharing mechanisms in allocated memory, the paper refines results by reducing the job length and time limits (Chauhan and Jaglan 2016). The author improves throughput, execution time, allocation cost, and resource usage using the honeybee life cycle (Kaur and Kaur 2016). The proposed paper improves makespan and resource usage using a generalized multi-objective min-min and max-min ranking (Gajera et al. 2016). The published paper improves job gain, job penalty, throughput, provider benefit, and user loss using space-sharing scheduling (Himani and Sidhu 2015). A genetic algorithm improves makespan and resource usage (Alaka Ananth 2015). The author developed a smoothing concept and improved the average cloud usage and makespan (Panda et al. 2014). For a novel solution with better makespan and resource usage, the min-min and max-min algorithms have been applied (Panda and Jana 2015). The author increases resource usage with a swarm-based algorithm (Rana et al. 2014). In the proposed article, the author uses a natural concept, honeybees, which reduces allocation cost and improves makespan (Anjuli Garg and Krishna 2014). The selected article reduces energy consumption using energy-saving task consolidation (Panda and Jana 2014).
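
Several of the works cited above build on the classic Min-Min heuristic. The sketch below shows its core idea: repeatedly pick the task with the smallest expected completion time over all machines and assign it to that machine. The execution-time matrix and machine names are assumed for illustration and do not come from any specific paper.

```python
# Classic Min-Min heuristic (illustrative data): at each step, schedule the task
# whose minimum completion time over all machines is smallest.

def min_min(exec_time):
    """exec_time: dict task -> dict machine -> execution time."""
    machines = {m for times in exec_time.values() for m in times}
    ready = {m: 0.0 for m in machines}            # time at which each machine becomes free
    unscheduled = set(exec_time)
    schedule = {}
    while unscheduled:
        task, machine, completion = min(
            ((t, m, ready[m] + exec_time[t][m]) for t in unscheduled for m in exec_time[t]),
            key=lambda entry: entry[2],
        )
        schedule[task] = machine                  # assign the task with minimum completion time
        ready[machine] = completion
        unscheduled.remove(task)
    return schedule, max(ready.values())          # assignment and resulting makespan

exec_time = {"T1": {"M1": 3, "M2": 5}, "T2": {"M1": 4, "M2": 2}, "T3": {"M1": 6, "M2": 7}}
print(min_min(exec_time))                         # ({'T2': 'M2', 'T1': 'M1', 'T3': 'M1'}, 9.0)
```
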
By using cost-conscious cloud workload execution, resource utilization increases; the results show that full use of machines is very important today, since real problems involve many complications (Thanasias et al. 2014). Many authors adapt natural behaviour to make their implementations succeed; this paper likewise uses honeybee behaviour to reduce the key factor, makespan (Bitam 2012). The referenced article obtains minimum makespan using the max-min algorithm (Devipriya and Ramesh 2013). The author improves processor usage and makespan using ant colony optimization (Li et al. 2011). Fuzzy logic offers the author more than one candidate for the optimum result, so many researchers obtain a wider range of near-optimal solutions from which the best can be chosen (Omara and Zohier 2010). The author improves processor usage and execution time by executing work according to its priority (Zhenxia 2008). Makespan, total cost, and cloud usage are refined through experiments on synthetic and benchmark data sets (Topcuoglu and Hariri 1999 aside, see also Panda and Jana 2015). A heterogeneous approach is used to schedule tasks over time, giving good results for schedule length and cost (Topcuoglu and Hariri 1999). A heuristic approach implements the proposed idea and reduces execution time (El-Rewini et al. 1995). A nature-inspired algorithm for global optimization problems is exploited to reduce the execution time of the scheduling algorithm (Aujla 2015). By implementing shortest-job-first scheduling, the author reduces waiting time and turnaround time (Krishnaveni and Sinthu 2018b). Considering task benefit and deadlines, the author reduces various factors using priority-based scheduling (Tapale et al. 2019). The author uses a powerful meta-heuristic, the imperialist competitive algorithm, and obtains low execution time (Habibi and Navimipour 2016). To handle complex engineering and management processes, the author uses bee colony optimization to reduce execution time (Phani Sheetal and Ravindranath 2019). The author improves execution time, power consumption, and cost using social learning optimization (Zhizhong Liu et al. 2017). Work is arranged according to a weighted queue and then ordered in SJF manner, and the author obtains optimal results on IoT mobile devices (Preethi and Jayavel 2018). The author obtains lower makespan using deadline- and sufferage-aware task scheduling (Chaudhary and Varsha 2017). The referenced article improves makespan, cost, and resource usage, the key goals of its optimization model, which uses lion optimization with an oppositional strategy to show good performance (Krishnadoss and Prem Jacob 2019). The author uses a basic priority concept with round robin scheduling, where a quantum time duration is applied equally to the execution of each task, to reduce execution time and obtain improved results (Amit Agarwal and Saloni Jain 2014). Each task receives a sufferage value, according to which the jobs are evaluated; the author improves makespan and resource usage using a time-based sufferage algorithm (Krishnaveni et al. 2018).
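
The sufferage-based schedulers cited above rely on a per-task sufferage value: the gap between a task's best and second-best completion times across machines, so that tasks which would suffer most from losing their best machine are scheduled first. The sketch below computes these values for an assumed execution-time matrix (illustrative only).

```python
# Sufferage value per task (assumed data): difference between the best and
# second-best completion times of that task over the available machines.

def sufferage_values(exec_time, ready):
    """exec_time: dict task -> dict machine -> execution time; ready: dict machine -> free time."""
    values = {}
    for task, times in exec_time.items():
        completions = sorted(ready[machine] + t for machine, t in times.items())
        # with a single candidate machine there is no second-best completion time
        values[task] = completions[1] - completions[0] if len(completions) > 1 else 0.0
    return values

exec_time = {"T1": {"M1": 3, "M2": 8}, "T2": {"M1": 4, "M2": 5}}
print(sufferage_values(exec_time, {"M1": 0.0, "M2": 0.0}))  # {'T1': 5.0, 'T2': 1.0}
```
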
The author reduces makespan by applying a development-based assessment strategy (Ahluwalia and Sharma 2016). Combining a genetic algorithm with fuzzy logic is very effective, and the author reduces allocation cost using this combination (Dahiya 2015). The author improves execution time, resource usage, and cost by implementing a novel algorithm based on cost criteria (Mohammed et al. 2016). The author lowers the cost parameter by considering a driver strategy over dynamic paths (Xie et al. 2016). A least-level-value density-first method is applied to ensure proper use of cloud resources under appropriate time constraints (Kuang and Zhang 1864). The author uses population-based incremental learning with a genetic algorithm to build a better estimation-of-distribution algorithm and accelerate convergence (Sun et al. 1955). The author improves execution time and resource usage by applying an ideal algorithm (Kanna Babu and Sree Latha 2018). Considering heavy cloud load, the author designs a new random resource selection algorithm and, applying it to real problems, shows good response times in simulation (Vijayalakshmi and Venkatesa Kumar 2014). The author saves energy by combining three types of algorithms: energy-aware VM available-time scheduling, heuristic algorithms, and credit scheduling (Loganathan et al. 2017). The author uses a best-search optimization algorithm that saves energy; this customizable algorithm combines features of the firefly and bat algorithms (M. Senthil kumar 2018). Achieving minimum makespan means that the resources are properly used during execution; the author uses a differential evolution-gravitational search algorithm to reduce makespan (Sharma and Tyagi 2016). By grouping tasks with similar specifications using the k-means algorithm, the author decreases makespan (Dandhwani and Vipul Vekariya 2016). The author improves a decisive factor, local optima, using particle swarm optimization and the shuffled frog leaping algorithm (Xie et al. 2016). The author improves response time, waiting time, processing cost, and power consumption using a skewness algorithm (Kaur and Navtej 2017). Using a black-hole-based algorithm, the author improves makespan and resource usage (Ebadifard and Borhanifard 2016). The author reduces the waiting time of queued tasks by applying fuzzy techniques (Kundu 2015). The author improves quality of service, resource utilization, and power saving using fuzzy logic, which controls the VM configuration to support the proposed approach (Goswami et al. 2018). To obtain lower execution time and cost, the proposed article considers one or more objectives simultaneously (Leena et al. 2016). The paper shows a reduction in cost parameters using the CloudSim toolkit with different numbers of virtual machines and priorities based on user needs (Bansal et al. 2015).

4 Categorization and discussion

Various task scheduling algorithms have been implemented, providing benefits from both the user and the service provider perspective. After analysing many novel research articles and review articles, we found that all of them discuss the algorithm and focus on one or more factors, while other important factors remain open challenges in today's market. Security is a major challenge: how can a task be processed securely on virtual machines? No paper has been written about secure task scheduling algorithms in cloud computing. The main objective of this paper is to show the actual state of the algorithms and their objectives (Table 1).

Table 1 Various publications with various factors

Various reputed publishers publish research articles every day with full honesty and sincerity. Figure 5 shows a chart of the publishing houses together with the isolated data. Publishing companies have established their distinctive value in development, with the most appropriate concern for the research era. Elsevier is one of the most famous and valuable publishing companies specializing in scientific proposals and articles; it publishes around 480,000 research items annually across diverse fields. Springer holds great value in the same field by publishing ebooks, articles, and new thinking addressing the problems of humanity; more than 170,000 books and papers are established in Springer's database across various fields of study. IEEE, the Institute of Electrical and Electronics Engineers, has the largest body of professionals and is associated with over 423,000 technical members. Many more publication gateways exist, including attractive outlets focused primarily on research and development. The summary chart is displayed in Fig. 6; the conclusion is that most research work has been done on the makespan factor alone. Researchers should also focus on other factors to ensure the benefits offered by cloud computing services (Vijayalakshmi and Venkatesa Kumar 2014).

Fig. 5 Analysis of publications with various factors

Fig. 6 Parameters frequency and flow

A novel algorithm can be based on any technology, but it must be reliable under all circumstances and must produce optimized results in every case. The analysis of the calculated parameters has mostly concerned distributed and utility-focused settings. Trust and demand grow in proportion to the reduction of unwanted failures. The results should always favour positive decisions so that user satisfaction can be achieved.

Multiple issues and challenges related to task scheduling in cloud computing are justified in this paper. Optimization of the digital world is fully bound up with these issues. Multi-scale architectures and more advanced versions of the technology may resolve them as new requests arrive; such a plan would be very effective in building support for a highly advanced era.

5 Challenges and issues

Task scheduling needs greater accuracy in utilization and in the performance of the activities promised to the user by the service provider. To obtain optimal results, cloud computing must cope with many challenges. Dynamic and fluctuating workloads: due to heavy and uncertain workloads, adaptability becomes a major challenge in cloud computing; if predictable variation arises during execution, it can be handled in advance by designing the program with proper management of resources within the time frame. Assuring profitable resource utilization: resources have to be assigned immediately as user demand fluctuates, and the service provider must track the incoming workloads because the agreement made with the user has to be honoured; optimal scheduling is needed in this scenario to execute the tasks completely. Composite nodes in cloud datacentres: allocated tasks are spread across the distributed network over distinct stations with varying power, infrastructure, database repositories, and so on, and each node processes different tasks in each execution. Elevated granularity in optimized scheduling: more customization automatically brings more problems; heavy workloads strain the technology, machines often drift into incompatible states during processing because of the large number of applications, and virtual machine migration is handled by partitioning the heavyweight loads.

Many unique issues arise in administering optimization in a cloud environment. Time and cost are the common factors for scheduling in cloud computing, and improving both is a very difficult task; many scheduling schemes are in the publication pipeline to accomplish this. As the requirement for resources grows day by day, allocation to every node of a datacentre becomes a very tough scenario. Need for monitoring: government organizations need modernized systems capable of grasping new technology easily, but this is not always feasible for cloud service providers. For security reasons, consumers hesitate when dealing with a large number of service providers; doubt increases because users are very protective of their data. As the technology is burdened by unnatural user demands, organizations are not getting satisfactory results from their employees; they face an uncontrollable market situation because employees have not upgraded their skills and fail to remove unexpected errors from systems. Organizations therefore move to the cloud and obtain its benefits by using its services on a payment basis. Identity recognition of the cloud service provider: no one can easily judge the identity of a service provider or whether it will deliver services in the best way; only on the basis of verification can one know that a dealer is authorized for cloud services, and an ICT consultant can confirm the authenticity of providers without any bias.

6 Conclusion and future scope

Many complex applications can now be deployed under new paradigms through cloud computing and generate good value thanks to its distributed and managed environment. There is no need to take care of physical devices when developing, which is one of the best features of cloud computing. Large-scale applications can execute independently of hardware specifications because services such as cloud usage are offered through virtualization, and information can move from one region to another across many kilometres within a second, thanks only to cloud computing. By introducing different, unique, nature-based algorithms, more and more optimization is taking place. In addition, security is an essential direction for better optimized algorithms, which could be called secure trust algorithms (STAs). Such a future algorithm may consider factors such as access control models along with cryptographic techniques to further improve cloud security. Generalizing the usage control model includes some facts and excludes others: the considered facts are authority, obligations, conditions, active control, and variability, while the non-considered factors are mandatory, discretionary, and role-based access control. These factors are handled by the UCONABC model (Park and Sandhu 2004; Rajkumar and Sandhu 2020), which covers all the facts, whether traditional or adapted. Finally, the paper explores the literature with all the measured parameters analysed for task scheduling performance across different platforms and architectures that process multiple applications at the same time. Apart from this, access control is also in the picture to cover the key points of the security mechanism for smart IoT devices. A fine-grained access model with specified policies (Bhatt et al. 2021; Ge et al. 2013) has also become very useful, as has an encryption model that removes the traditional use of public-key encryption by introducing a certificateless encryption policy (Ge et al. 2009). By integrating different paradigms, it is possible to offer a powerful, more hybridized perspective on task scheduling in cloud computing for multi-purpose applications. The survey shows the value of each factor based on market demands in the current situation, so that new papers can include those factors that have received less experimentation; in a beneficial context, the less-used factors can bring much efficiency and greater reliability to incoming applications.

A table helps to demonstrate a clear view of the literature in any context. This paper therefore provides Table 2, which covers the entire literature survey and the various factors considered by task scheduling algorithms in a cloud environment.

Table 2 Comparison of various objective-based task scheduling algorithms in cloud computing

All the scheduling objectives are the main points for designing the literature. Many of the surveyed works also argue that other factors should be taken into consideration, although the failure to secure the system will always underline new problems and issues; performance and analysis of the requirements may well be influenced by this. To make systems reliable and scalable for upcoming problems, these issues should be included among the future objectives stated in already published articles.

In future, more factors must be optimized for better execution so that all scheduling features can be provided by a single scheduling algorithm. The evaluation of the mentioned factors across the different publications is depicted in Fig. 7.

Fig. 7 Considered objective performances in the literature