Abstract
In this paper, the Human Learning Optimization (HLO) algorithm is presented to solve a scheduling problem. HLO is a meta-heuristic search algorithm inspired by the process of human learning. Three learning operators are developed to generate new solutions and search for the optima by mimicking human learning behaviors. The algorithm has proved very effective in solving optimization problems. HLO is applied to an actual production scheduling problem in a dairy factory, and its performance is compared with that of two other meta-heuristic algorithms, BSO-PSO and HGA. The comparison results demonstrate that HLO is a promising optimization algorithm.
J. Yao—This work was financially supported by the Science and Technology Commission of Shanghai Municipality of China under Grant No. 17511107002.
1 Introduction
Scheduling optimization problems are wide-ranging and plentiful, arising in engineering, science, economic management, and many other fields [1]. However, optimization problems are becoming ever more complicated with the development of science and technology, and traditional gradient-based methods are inefficient and inconvenient for such problems: they require substantial gradient information, depend on a well-defined starting point, and need a large amount of enumeration memory. Meta-heuristics mimic natural biological systems to solve various kinds of optimization problems. For instance, Genetic Algorithms (GAs) [2], Particle Swarm Optimization (PSO) [3], Ant Colony Optimization (ACO) [4], and Tabu Search [5] are popular with practitioners. In recent years, more and more novel algorithms have been proposed to tackle optimization problems, such as the Binary Differential Evolution algorithm (BDE) [6], the Binary Artificial Bee Colony Algorithm (BABCA) [7], the Binary Bat Algorithm (BBA) [8], the Binary Flower Pollination Algorithm (BFPA) [9], the Binary Gravitation Search Algorithm (BGSA) [10], the Binary Simulated Annealing Algorithm (BSAA) [11], and the Bi-Velocity Discrete Particle Swarm Optimization (BVDPSO) [12].
To solve hard optimization problems more effectively and efficiently, new powerful meta-heuristics inspired by nature, especially by biological systems, must be explored, which is a hot topic in the evolutionary computation community [13]. Human beings are among the smartest creatures on Earth and have invented countless machines and tools to make life more convenient, which indicates that humans are gifted at tackling complicated problems. People master and improve skills through repeated learning, which resembles the way optimization algorithms iteratively search for the best solutions; the learning process can itself be regarded as an iterative optimization. Taking Sudoku as an example, a person may learn randomly because of a lack of prior knowledge or in order to explore new strategies (random learning), learn from his or her own previous experience (individual learning), or learn from friends and books (social learning) [14, 15]. Inspired by these human learning mechanisms, a new meta-heuristic algorithm called the human learning optimization (HLO) algorithm is presented. The performance of HLO has been demonstrated on optimization problems such as a suite of numerical benchmark functions, deceptive problems, and 0–1 knapsack problems [16, 17]. In this work, HLO is applied to an actual production scheduling problem.
The rest of the paper is organized as follows. Section 2 introduces the presented HLO in detail. In Sect. 3, HLO is applied to an actual production scheduling problem in a dairy factory, and the results are compared with those of other meta-heuristics collected from recent works to validate its performance. Finally, Sect. 4 concludes the paper.
2 Human Learning Optimization Algorithm
2.1 Initialization
HLO adopts a binary-coding framework in which each bit corresponds to a basic component of the knowledge used to solve the problem. Therefore an individual, i.e. a candidate solution, is represented by a binary string as Eq. (1), each bit of which is randomly initialized to "0" or "1" under the assumption that there is no prior knowledge of the problem,

\( x_{i} = [x_{i11}, x_{i12}, \ldots, x_{ipj}, \ldots, x_{iPM}], \quad x_{ipj} \in \{0, 1\} \)  (1)

where \( x_{i} \) is the ith individual, P is the number of products, and M is the number of machines, i.e. \( 1 \le p \le P \) and \( 1 \le j \le M \).
After all the individuals are initialized, the initial population of HLO is generated as Eq. (2), where N is the number of individuals in the population:

\( X = [x_{1}, x_{2}, \ldots, x_{i}, \ldots, x_{N}]^{T} \)  (2)
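As a concrete illustration, the initialization step can be sketched in a few lines of Python (NumPy is assumed; the function name and the P-by-M bit layout per individual follow the description above and are otherwise illustrative):

```python
import numpy as np

def init_population(N, P, M, rng=None):
    """Create N random binary individuals as in Eqs. (1)-(2):
    one bit per (product, machine) pair, drawn uniformly from {0, 1}."""
    rng = rng or np.random.default_rng()
    return rng.integers(0, 2, size=(N, P, M))

# e.g. a population of 20 individuals for 25 products on 10 machines
pop = init_population(N=20, P=25, M=10)
print(pop.shape)  # (20, 25, 10)
```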
2.2 Learning Operators
Random Exploration Learning Operator.
While learning to solve unfamiliar problems, people usually learn randomly because they lack knowledge or have forgotten it. Simulating these phenomena, HLO performs random exploration learning with a certain probability as Eq. (3),

\( x_{ipj} = RE(0, 1) = \begin{cases} 0, & \text{if } rand() \le 0.5 \\ 1, & \text{otherwise} \end{cases} \)  (3)

where \( rand() \) is a random number in [0, 1).
Individual Learning Operator.
Individual learning is defined as the ability to build knowledge through individual reflection on external stimuli and sources [18]. Every person learns, consciously or unconsciously, as a fundamental requirement of existence. In HLO, an individual learns to solve problems with the individual learning operator based on its own experience, which is stored in the Individual Knowledge Database (IKD) as Eqs. (4) and (5),

\( IKD = [IKD_{1}, IKD_{2}, \ldots, IKD_{i}, \ldots, IKD_{N}]^{T} \)  (4)

\( IKD_{i} = [ikd_{i1}, ikd_{i2}, \ldots, ikd_{ip}, \ldots, ikd_{iP}]^{T} \)  (5)

where \( IKD_{i} \) is the individual knowledge database of person i, \( ikd_{ip} \) stands for the pth best solution of person i, and P denotes the size of the IKDs.
Social Learning Operator.
Social learning is the transmission of knowledge and skills through direct or indirect interactions among individuals. In a social context, people can learn not only from their own direct experience but also from the experience of other members, and therefore they can develop their abilities further and achieve higher efficiency through effective knowledge sharing. To obtain this efficient search ability, the social learning mechanism is mimicked in HLO: like a human learner, each individual of HLO studies the social knowledge, which is stored in the Social Knowledge Database (SKD) as Eqs. (6) and (7), with a certain probability when it yields a new solution,

\( SKD = [skd_{1}, skd_{2}, \ldots, skd_{s}, \ldots, skd_{S}]^{T} \)  (6)

\( skd_{s} = [skd_{s11}, skd_{s12}, \ldots, skd_{spj}, \ldots, skd_{sPM}] \)  (7)

where the SKD holds the best knowledge of the social collectivity, and the S best individuals are selected as the initial social knowledge stored in the SKD. In the subsequent iterated search, the IKD and SKD are updated whenever new knowledge is better than that stored in them.
In summary, HLO yields a new solution by means of random exploration learning, individual learning, and social learning at certain rates, which can be simplified and formulated as Eq. (8),

\( x_{ipj} = \begin{cases} RE(0, 1), & 0 \le r \le pr \\ ikd_{ipj}, & pr < r \le pi \\ skd_{pj}, & \text{otherwise} \end{cases} \)  (8)

where \( x_{ipj} \) is the bit deciding whether the pth product runs on the jth machine, \( r \) is a uniform random number in [0, 1), pr is the probability of random exploration learning, and (pi − pr) and (1 − pi) represent the rates of individual learning and social learning, respectively.
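A vectorized sketch of this combined operator in Python (a hypothetical helper: each individual's IKD and the SKD are reduced here to a single best solution, and the rates pr and pi are placeholders for the actual settings in Table 4):

```python
import numpy as np

def generate_solution(ikd_best, skd_best, pr=0.1, pi=0.85, rng=None):
    """Build a new candidate bit by bit as in Eq. (8): random
    exploration with probability pr, individual learning (copy the
    bit from the IKD) with probability pi - pr, and social learning
    (copy the bit from the SKD) with probability 1 - pi."""
    rng = rng or np.random.default_rng()
    r = rng.random(ikd_best.shape)                    # one draw per bit
    random_bits = rng.integers(0, 2, size=ikd_best.shape)
    return np.where(r <= pr, random_bits,
                    np.where(r <= pi, ikd_best, skd_best))
```

With pr = 0 and pi = 1 the operator degenerates to pure individual learning, which makes the three-way split easy to test in isolation.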
2.3 Updating of the IKD and SKD
After the individuals accomplish their learning in each generation, the fitness of each new solution is evaluated with the fitness function f(x). If the fitness of a new candidate solution is better than that of the worst solution in the IKD, or the number of solutions stored in the IKD is still smaller than its size, the new candidate solution is saved in the IKD. The SKD is updated in the same way. In addition, the SKD is updated with at least one solution every iteration to avoid stagnating in a local optimum.
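This update rule can be mirrored by keeping each knowledge database as a bounded, fitness-sorted list (a minimal sketch assuming minimization; the helper name is illustrative):

```python
def update_best_list(db, candidate, fitness, size):
    """Admit candidate into the knowledge database db (a list of
    (fitness, solution) pairs sorted by ascending fitness) when the
    database is not yet full or the candidate beats the worst entry;
    then trim db back to its fixed size."""
    if len(db) < size or fitness < db[-1][0]:
        db.append((fitness, candidate))
        db.sort(key=lambda pair: pair[0])
        del db[size:]
    return db

ikd = []
for f, sol in [(5.0, "s1"), (3.0, "s2"), (4.0, "s3")]:
    update_best_list(ikd, sol, f, size=2)
print([f for f, _ in ikd])  # [3.0, 4.0]
```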
3 Establishment of the Model and Experimental Results
3.1 Establishment of the Model
Scheduling problems have a major impact on the productivity of a manufacturing system and can be described as follows: given a number of tasks which must be carried out on some processors, find the best resource assignment and task sequencing, for example so that the total completion time is as small as possible. The production scheduling problem studied in this paper is based on the actual production procedure in a dairy factory. The efficiency and capacity of each product on each machine are fixed, while the demanded outputs of the dairy products in the daily orders vary. The purpose of production scheduling is to distribute the different kinds of dairy products over the different machines so as to minimize the production time. The production scheduling model can be described as follows.
where \( t_{j} \) is the one-day runtime of the jth machine, \( c_{ij} \) is the amount of the ith product on the jth machine, \( \eta_{ij} \) is the productivity of the ith product on the jth machine, \( v_{ij} \) is the capacity of the ith product on the jth machine, and b is 0 or 1: if the line needs to switch to another milk after the ith product, b equals 1, and otherwise b equals 0. The amount \( c_{ij} \) of the ith product on the jth machine is bounded between \( \hbox{min}\, c_{ij} \) and \( \hbox{max}\, c_{ij} \) as Eq. (10),

\( \hbox{min}\, c_{ij} \le c_{ij} \le \hbox{max}\, c_{ij} \)  (10)
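Under one plausible reading of this model (the runtime of a machine is the processing time of the amounts assigned to it plus a fixed changeover time per milk switch; the function names, t_switch, and the max-runtime objective are assumptions, not the paper's exact formulation), the objective can be evaluated as:

```python
import numpy as np

def machine_runtimes(c, eta, switch, t_switch=0.5):
    """c[i, j]: amount of product i on machine j; eta[i, j]: its
    productivity (amount per hour, 0 where the product cannot run);
    switch[i, j]: 1 where a milk changeover follows product i on
    machine j. Returns the one-day runtime of each machine."""
    proc = np.divide(c, eta, out=np.zeros_like(c, dtype=float), where=eta > 0)
    return proc.sum(axis=0) + t_switch * switch.sum(axis=0)

def makespan(c, eta, switch):
    """Objective: the longest one-day machine runtime, to be minimized."""
    return machine_runtimes(c, eta, switch).max()
```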
In summary, the procedure of HLO for scheduling problems in a dairy factory can be concluded as follows:
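The overall procedure (initialize, learn, update the IKD and SKD, repeat) can be sketched as one compact loop. This is a minimal illustration rather than the paper's exact listing: the IKD is reduced to a single best solution per individual, fitness is minimized, and pr and pi stand in for the settings of Table 4:

```python
import numpy as np

def hlo(fitness, P, M, N=20, iters=300, pr=0.1, pi=0.85, seed=0):
    """Minimal HLO loop: random binary initialization, then per
    individual a new solution generated by random / individual /
    social learning, with the IKD and SKD updated on improvement."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(N, P, M))
    ikd_fit = np.array([fitness(x) for x in pop], dtype=float)
    ikd = pop.copy()                        # best solution per individual
    best = int(ikd_fit.argmin())
    skd, skd_fit = ikd[best].copy(), ikd_fit[best]   # best overall
    for _ in range(iters):
        for i in range(N):
            r = rng.random((P, M))
            new = np.where(r <= pr, rng.integers(0, 2, size=(P, M)),
                           np.where(r <= pi, ikd[i], skd))
            f = fitness(new)
            if f < ikd_fit[i]:              # update individual knowledge
                ikd[i], ikd_fit[i] = new, f
            if f < skd_fit:                 # update social knowledge
                skd, skd_fit = new.copy(), float(f)
    return skd, skd_fit
```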
3.2 Experimental Results and Discussions
To verify the effectiveness of HLO, production scheduling based on HLO is carried out for the production plan of a certain day in a dairy factory, in which 25 kinds of milk products need to be produced on 10 machines. The efficiency and capacity of each product on each machine are shown in Tables 1 and 2, and Table 3 lists the orders of one day in the dairy factory.
The performance of HLO is compared with that of BSO-PSO (Brain Storm Optimization with Discrete Particle Swarm Optimization) [19] and HGA (Hybrid Genetic Algorithms) [20]. All experiments were run on a PC with an Intel Core i7-2700K CPU @ 3.50 GHz and 8 GB RAM. A fair set of common parameters is chosen for all three algorithms: the population size is set to 20 and the number of iterations to 300. The remaining algorithm-specific parameters of HLO, BSO-PSO, and HGA are listed in Table 4.
Histograms of the task arrangement are given in Fig. 1 and histograms of the runtime in Fig. 2. Figure 1 illustrates that the tasks are evenly distributed over the machines, and Fig. 2 indicates that the runtime of each machine is almost the same and no machine is idle; therefore the utilization of the machines is improved.
Table 5 summarizes the solutions found by HLO, BSO-PSO, and HGA over 30 independent runs. Row "Best" denotes the best solution found by each algorithm, row "Mean" the mean value over all runs, and row "Worst" the worst solution found. As can be seen, all the results of HLO in Table 5 are better than those of the other two algorithms.
Figure 3 illustrates the convergence curves of the three algorithms. The convergence speed of HLO is clearly faster than that of both BSO-PSO and HGA.
4 Conclusion
In this paper, a novel human learning optimization (HLO) algorithm inspired by the human learning process is presented. In this method, three learning operators, i.e. the random exploration learning operator, the individual learning operator, and the social learning operator, are developed by mimicking human learning behaviors to generate new solutions and search for the optimal solution. The performance of the proposed method is validated by applying it to an actual production scheduling problem in a dairy factory. The experimental results show that the proposed method outperforms the compared methods.
References
Mullen, R.J., Monekosso, D., Barman, S., et al.: A review of ant algorithms. Expert Syst. Appl. 36(6), 9608–9617 (2009)
Elsayed, S.M., Sarker, R.A., Essam, D.L.: A new genetic algorithm for solving optimization problems. Eng. Appl. Artif. Intell. 27(1), 57–69 (2014)
Fang, W., Sun, J., Chen, H., et al.: A decentralized quantum-inspired particle swarm optimization algorithm with cellular structured population. Inf. Sci. 330, 19–48 (2016)
Prakasam, A., Savarimuthu, N.: Metaheuristic algorithms and probabilistic behaviour: a comprehensive analysis of Ant Colony Optimization and its variants. Artif. Intell. Rev. 45(1), 1–34 (2015)
Qin, J., Xu, X., Wu, Q., et al.: Hybridization of tabu search with feasible and infeasible local searches for the quadratic multiple knapsack problem. Comput. Oper. Res. 66, 199–214 (2016)
Chen, Y., Xie, W., Zou, X.: A binary differential evolution algorithm learning from explored solutions. Neurocomputing 149, 1038–1047 (2015)
Zhang, X., Zhang, X.: A binary artificial bee colony algorithm for constructing spanning trees in vehicular ad hoc networks. Ad Hoc Netw. 58, 198–204 (2017)
Mirjalili, S., Mirjalili, S.M., Yang, X.S.: Binary bat algorithm. Neural Comput. Appl. 25(3–4), 1–19 (2014)
Rodrigues, D., Silva, G.F.A., Papa, J.P., et al.: EEG-based person identification through binary flower pollination algorithm. Expert Syst. Appl. 62, 81–90 (2016)
Automatic channel selection in EEG signals for classification of left or right hand movement in Brain Computer Interfaces using improved binary gravitation search algorithm
Li, X., Ma, L.: Minimizing binary functions with simulated annealing algorithm with applications to binary tomography. Comput. Phys. Commun. 183(2), 309–315 (2012)
Shen, M., Zhan, Z.H., Chen, W.N., Gong, Y.J., Zhang, J., Li, Y.: Bi-velocity discrete particle swarm optimization and its application to multicast routing problem in communication networks. IEEE Trans. Ind. Electron. 61(12), 7141–7151 (2014)
Fister, J.I., Yang, X.S., Fister, I.: A brief review of nature-inspired algorithms for optimization. Elektroteh. Vestn. 80(3), 1–7 (2013)
Wang, L., Ni, H., Yang, R., et al.: A simple human learning optimization algorithm. Commun. Comput. Inf. Sci. 462, 56–65 (2014)
Wang, L., An, L., Pi, J., et al.: A diverse human learning optimization algorithm. J. Global Optim. 67(1–2), 1–41 (2016)
Wang, L., Yang, R., Ni, H., et al.: A human learning optimization algorithm and its application to multi-dimensional knapsack problems. Appl. Soft Comput. 34, 736–743 (2015)
Wang, L., Ni, H., Yang, R., et al.: An adaptive simplified human learning optimization algorithm. Inf. Sci. 320, 126–139 (2015)
Forcheri, P., Molfino, M.T., Quarati, A.: ICT driven individual learning: new opportunities and perspectives. Educ. Technol. Soc. 3(1), 51–61 (2000)
Hua, Z., Chen, J., Xie, Y.: Brain storm optimization with discrete particle swarm optimization for TSP. In: 2016 12th International Conference on Computational Intelligence and Security (CIS), pp. 190–193. IEEE (2016)
Kundakcı, N., Kulak, O.: Hybrid genetic algorithms for minimizing makespan in dynamic job shop scheduling problem. Comput. Ind. Eng. 96, 31–51 (2016)
© 2017 Springer Nature Singapore Pte Ltd.
Li, X., Yao, J., Wang, L., Menhas, M.I. (2017). Application of Human Learning Optimization Algorithm for Production Scheduling Optimization. In: Fei, M., Ma, S., Li, X., Sun, X., Jia, L., Su, Z. (eds) Advanced Computational Methods in Life System Modeling and Simulation. LSMS/ICSEE 2017. Communications in Computer and Information Science, vol 761. Springer, Singapore. https://doi.org/10.1007/978-981-10-6370-1_24