Abstract
This paper compares the performance of three widely used backpropagation training algorithms for neural networks: gradient descent, the Broyden, Fletcher, Goldfarb and Shanno (BFGS) based quasi-Newton (Q-N) algorithm, and the Levenberg-Marquardt (L-M) algorithm. The networks are trained on random datasets built from optimal job sequences of permutation flow shop problems. In the present investigation, learning stops when the mean squared error (MSE) reaches 0.001 or after 3000 epochs, whichever comes first. Overfitting and overtraining are avoided during model building to preserve generalization ability. The performance of the learning techniques is reported in terms of both solution quality and computational time. The computational results show that L-M performs best among the three algorithms with respect to both MSE and R2, while gradient descent is the fastest.
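The three trainers compared in the abstract can be sketched on a toy regression problem. This is a minimal illustration, not the paper's actual setup: the one-hidden-layer tanh network, the synthetic dataset, the learning rate, and the use of SciPy's optimizers in place of the paper's own backpropagation implementations are all assumptions made for brevity.

```python
# Sketch: comparing gradient descent, BFGS quasi-Newton, and
# Levenberg-Marquardt training on a toy regression task.
# Network shape, data, and learning rate are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize, least_squares

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 3))          # 50 random samples, 3 features
y = np.sin(X.sum(axis=1))                     # synthetic target

n_in, n_hid = X.shape[1], 5
n_params = n_hid * n_in + n_hid + n_hid + 1   # W1, b1, w2, b2

def unpack(p):
    i = 0
    W1 = p[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b1 = p[i:i + n_hid]; i += n_hid
    w2 = p[i:i + n_hid]; i += n_hid
    return W1, b1, w2, p[i]

def predict(p, X):
    W1, b1, w2, b2 = unpack(p)
    return np.tanh(X @ W1.T + b1) @ w2 + b2

def residuals(p):                              # residual vector for L-M
    return predict(p, X) - y

def mse(p):                                    # scalar loss for GD / BFGS
    r = residuals(p)
    return 0.5 * np.mean(r ** 2)

p0 = rng.normal(scale=0.5, size=n_params)

# 1) Plain gradient descent. The paper uses backprop gradients; a central
#    finite difference keeps this sketch short. Stopping rule mirrors the
#    abstract: MSE goal 0.001 or 3000 epochs.
def num_grad(f, p, eps=1e-6):
    g = np.zeros_like(p)
    for k in range(p.size):
        d = np.zeros_like(p); d[k] = eps
        g[k] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

p_gd = p0.copy()
for epoch in range(3000):
    if mse(p_gd) < 1e-3:
        break
    p_gd -= 0.5 * num_grad(mse, p_gd)

# 2) BFGS quasi-Newton on the scalar MSE.
res_bfgs = minimize(mse, p0, method="BFGS")

# 3) Levenberg-Marquardt on the residual vector (requires
#    #residuals >= #parameters, satisfied here: 50 >= 26).
res_lm = least_squares(residuals, p0, method="lm")

print("GD   MSE:", mse(p_gd))
print("BFGS MSE:", mse(res_bfgs.x))
print("L-M  MSE:", mse(res_lm.x))
```

Note the design split: gradient descent and BFGS minimize the scalar MSE, while L-M exploits the sum-of-squares structure directly by working on the residual vector, which is why it is typically the most accurate of the three on this class of problem, as the paper's results also indicate.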
© 2019 Springer Nature Switzerland AG
Cite this paper
Laha, D., Majumder, A. (2019). Modeling of Flow Shop Scheduling with Effective Training Algorithms-Based Neural Networks. In: Ane, B., Cakravastia, A., Diawati, L. (eds) Proceedings of the 18th Online World Conference on Soft Computing in Industrial Applications (WSC18). WSC 2014. Advances in Intelligent Systems and Computing, vol 864. Springer, Cham. https://doi.org/10.1007/978-3-030-00612-9_10
Print ISBN: 978-3-030-00610-5
Online ISBN: 978-3-030-00612-9
eBook Packages: Intelligent Technologies and Robotics (R0)