Abstract
The X-architecture Steiner Minimum Tree (XSMT) is the best connection model for multi-terminal nets in global routing under non-Manhattan structures, and its construction is an NP-hard problem. The successful application of the Particle Swarm Optimization (PSO) technique in this field also reflects its strong optimization ability. Therefore, based on Social Learning Particle Swarm Optimization (SLPSO), this paper proposes an XSMT construction algorithm (called SLPSO-XSMT) that effectively balances exploration and exploitation. To expand the learning range of particles, a novel SLPSO approach based on an example-pool learning mechanism is proposed, which is conducive to breaking out of local extrema. The proposed mutation operator is then integrated into the inertia component of SLPSO to enhance the exploration ability of the algorithm, while, to maintain exploitation ability, the proposed crossover operators are integrated into the individual cognition and social cognition components. Experimental results show that, compared with other Steiner tree construction algorithms, the proposed SLPSO-XSMT algorithm achieves better wirelength optimization and superior stability.
This work was supported in part by National Natural Science Foundation of China (No. 61877010, 11501114), Natural Science Foundation of Fujian Province, China (2019J01243).
1 Introduction
Steiner Minimum Tree (SMT) is the best connection model for multi-terminal nets in global routing of Very Large Scale Integration (VLSI). The SMT problem is to find a routing tree with the least cost to connect all given pins by introducing additional points (Steiner points). Therefore, SMT construction is one of the most important issues in VLSI routing.
Most current research on routing algorithms is based on the Manhattan structure [6, 7], which allows routing only in the horizontal and vertical directions. To make fuller use of routing resources, researchers are gradually shifting their focus to non-Manhattan structures, thereby improving routing quality and chip performance.
Therefore, the construction of a Steiner minimum tree based on a non-Manhattan structure becomes a critical step in VLSI routing. In the early years, exact algorithms [2, 14] were used to construct non-Manhattan routing trees, which obtain shorter wirelength than the Manhattan structure but at too high a computational cost. Consequently, heuristic algorithms [4, 17, 20] were proposed to solve larger-scale SMT problems. However, these traditional heuristics are prone to falling into local extrema. In recent years, evolutionary computing has developed rapidly in many fields, especially Swarm Intelligence (SI) techniques [1, 3, 12, 19]. Several routing algorithms [5, 8, 13] address important optimization goals such as wirelength, obstacles, delay, and bends based on the Particle Swarm Optimization (PSO) technique. In [10], a hybrid transformation strategy is proposed to expand the search space based on self-adapting PSO. A unified algorithm for constructing the Rectilinear Steiner Minimum Tree (RSMT) and the XSMT is proposed in [11], which can obtain multiple SMT topologies to optimize congestion in global routing. It can be seen that the PSO technique is indeed a powerful tool for solving SMT problems.
Based on the analysis of the above related research work, this paper designs and implements an effective algorithm to solve the XSMT construction problem using Social Learning PSO (SLPSO), called SLPSO-XSMT. The contributions of this paper are as follows:
-
A novel SLPSO approach based on the learning mechanism of example pool is proposed to enable particles to learn from different and better particles in each iteration, and enhance the diversity of population evolution.
-
Mutation and crossover operators are integrated into the update formula of the particles to achieve the discretization of SLPSO, which can well balance the exploration and exploitation capabilities, thereby better solving the XSMT construction problem.
The rest of this paper is organized as follows. Section 2 presents the problem formulation. The SLPSO method with the example pool mechanism is introduced in Sect. 3. Section 4 describes the XSMT construction using the SLPSO method in detail. To verify the good performance of the proposed SLPSO-XSMT algorithm, experimental comparisons are given in Sect. 5. Section 6 concludes this paper.
2 Problem Formulation
The XSMT problem can be described as follows: Given a set of pins \(P=\{P_1,P_2,...,P_n\}\), each pin is represented by a coordinate pair \((x_i,y_i)\). Then connect all pins in P through some Steiner points to construct an XSMT, where the direction of a routing path can be \(45^\circ \) and \(135^\circ \), in addition to the traditional horizontal and vertical directions. Taking a routing net with 10 pins as an example, Table 1 shows the input information of the pins. The layout distribution of the given pins is shown in Fig. 1(a).
Definition 1
Pseudo-Steiner point. In addition to the original points formed by the given pins, the final XSMT can be constructed by introducing additional points called pseudo-Steiner points (PSP). In Fig. 2, the point S is a PSP; the set of PSPs includes the Steiner points.
Definition 2
Choice 0 (as shown in Fig. 2(b)). The Choice 0 of PSP corresponding to edge L is defined as leading rectilinear side first from A to PSP S, and then leading non-rectilinear side to B.
Definition 3
Choice 1 (as shown in Fig. 2(c)). The Choice 1 of PSP corresponding to edge L is defined as leading non-rectilinear side first from A to PSP S, and then leading rectilinear side to B.
Definition 4
Choice 2 (as shown in Fig. 2(d)). The Choice 2 of PSP corresponding to edge L is defined as leading vertical side first from A to PSP S, and then leading horizontal side to B.
Definition 5
Choice 3 (as shown in Fig. 2(e)). The Choice 3 of PSP corresponding to edge L is defined as leading horizontal side first from A to PSP S, and then leading vertical side to B.
3 SLPSO
Social learning plays an important role in the learning behavior of swarm intelligence, as it helps individuals in the population learn from other individuals without increasing the cost of their own trial and error. In SLPSO [18], each particle learns from better individuals (called examples) in the current population, whereas each particle in PSO learns only from its pbest and gbest.
Definition 6
Example Pool. All particles in the swarm \(S=\{X_i \mid 1 \le i \le M\}\) are arranged in ascending order of fitness: \(S=\{X_1,...,X_{i-1},X_i,X_{i+1},...,X_M\}\); then \(EP=\{X_1,...,X_{i-1}\}\) constitutes the example pool of particle \(X_i\).
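The example-pool construction in Definition 6 can be sketched as follows (function and variable names are hypothetical). Each particle's pool holds the indices of all strictly better-ranked particles; the best particle's pool is empty, so in practice it keeps learning from its own historical best.

```python
# Sketch of the example-pool construction in Definition 6.
# Smaller fitness is better; particle i's pool contains every
# particle ranked strictly better than i.

def build_example_pools(fitness):
    """fitness: list of fitness values, one per particle.
    Returns a dict mapping each particle index to its example indices."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])  # ascending
    pools = {}
    for rank, idx in enumerate(order):
        pools[idx] = order[:rank]  # all particles with better fitness
    return pools

pools = build_example_pools([3.0, 1.0, 2.0])
# Particle 1 (fitness 1.0) is best and has an empty pool;
# particle 0 (fitness 3.0) may learn from particles 1 and 2.
```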
Based on the example learning mechanism, the new formulas for updating particles are proposed as follows:

\(V_i^{t+1} = \omega V_i^t + c_1 r_1 (P_i - X_i^t) + c_2 r_2 (K_i - X_i^t)\)

\(X_i^{t+1} = X_i^t + V_i^{t+1}\)
where \(P_i\) is the personal historical best position of particle i, \(K_i\) is the historical best position of the Kth particle in the example pool, which is the social learning object for particle i. \(\omega \) is the inertia weight. \({c_1}\) and \({c_2}\) are acceleration coefficients, which respectively adjust the step size of the particle flying to personal historical best position (\(P_i\)) and its social learning object (\(K_i\)). \({r_1}\) and \({r_2}\) are mutually independent random numbers uniformly distributed in the interval (0, 1).
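The continuous SLPSO update described above can be sketched as follows (function and parameter names are hypothetical). Here `k_best` is the historical best position of a particle assumed to have already been drawn at random from the example pool.

```python
import random

# Minimal sketch of the continuous SLPSO velocity/position update:
#   v <- w*v + c1*r1*(P_i - x) + c2*r2*(K_i - x);  x <- x + v
def slpso_step(x, v, p_best, k_best, w=0.7, c1=1.5, c2=1.5, rng=random.random):
    r1, r2 = rng(), rng()  # independent uniforms in (0, 1)
    v_new = [w * vj + c1 * r1 * (pj - xj) + c2 * r2 * (kj - xj)
             for xj, vj, pj, kj in zip(x, v, p_best, k_best)]
    x_new = [xj + vj for xj, vj in zip(x, v_new)]
    return x_new, v_new
```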
Figure 3 shows the example pool of a particle. For particle \(X_i\), the particles with better fitness values than \(X_i\), including the global optimal solution \(X_G\), constitute its example pool. At each iteration, \(X_i\) randomly selects a particle from its example pool and learns from that particle's historical experience to complete its own social learning process. This social learning mechanism allows particles to improve themselves by continuously learning from different excellent individuals during evolution, which is conducive to the diversified development of the population.
4 XSMT Construction Using SLPSO
4.1 Particle Encoding
The edge-vertex encoding strategy [11] is adopted in this paper, which is well suited to evolutionary algorithms, especially PSO. For a net with n pins, the corresponding spanning tree has n-1 edges, and one extra value stores the particle's fitness. Thus the length of a particle encoding is \(3\times (n-1)+1\).
For example, Fig. 1(b) shows an X-architecture routing tree (n = 10) corresponding to the layout distribution of pins given in Fig. 1(a), where the symbol ‘\(\times \)’ represents PSP. And this routing tree can be expressed as the particle whose encoding is the following numeric string:
where the length of the particle is \(3\times (10-1)+1=28\), the last bold number 108.6686 is the fitness of the particle and each italic number represents the choice of PSP for each edge. The first substring (9, 3, 2) represents that Pin 9 and Pin 3 of the spanning tree in Fig. 1(a) are connected through Choice 2.
4.2 Fitness Function
The length of an X-architecture Steiner tree is the sum of the lengths of all the edge segments in the tree, which is calculated as follows:

\(L(T_x) = \sum \limits _{e_i \in T_x} l(e_i)\)
where \(l({e_i})\) represents the length of each segment \(e_i\) in the tree \(T_x\).
The smaller the fitness value, the better the particle. Thus the particle fitness function is designed as follows:

\(fitness = L(T_x) = \sum \limits _{e_i \in T_x} l(e_i)\)
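A hedged sketch of the fitness evaluation follows. It assumes, per Definitions 2-5, that Choices 0 and 1 route an edge with one 45°/135° diagonal segment plus one rectilinear segment, while Choices 2 and 3 are purely rectilinear (Manhattan); function names are hypothetical.

```python
import math

# Assumed edge-length model: Choices 0/1 cover min(dx, dy) diagonally
# (cost sqrt(2) per unit) plus the remaining |dx - dy| rectilinearly;
# Choices 2/3 are pure Manhattan routing (dx + dy).
def edge_length(a, b, choice):
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    if choice in (0, 1):                      # diagonal + rectilinear
        return abs(dx - dy) + math.sqrt(2) * min(dx, dy)
    return dx + dy                            # Choice 2 or 3

def tree_length(points, edges):
    """Sum of segment lengths, i.e. the fitness to be minimized."""
    return sum(edge_length(points[u], points[v], c) for u, v, c in edges)
```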
4.3 Particle Update Formula
In order to better solve the XSMT problem, a new particle update method with mutation and crossover operators is proposed. The specific formula is as follows:

\(X_i^{t+1} = F_3\left( F_2\left( F_1(X_i^t, \omega ), c_1\right) , c_2\right) \)
where \(\omega \) is the mutation probability, and \(c_1\) and \(c_2\) are crossover probabilities. \(F_1\) is the mutation operator, which corresponds to the inertia component of PSO. \(F_2\) and \(F_3\) are crossover operators, corresponding to individual cognition and social cognition, respectively.
Inertia Component. The particle velocity of SLPSO-XSMT is updated through \(F_1\), which is expressed as follows:

\(W_i^t = F_1(X_i^t, \omega ) = {\left\{ \begin{array}{ll} M(X_i^t), &{} r_1 < \omega \\ X_i^t, &{} \text {otherwise} \end{array}\right. }\)

where \(M(\cdot )\) denotes the mutation operation, \(\omega \) is the probability of the mutation operation, and \(r_1\) is a random number in [0, 1).
The proposed algorithm uses two-point mutation. If the generated random number satisfies \(r_1<\omega \), the algorithm randomly replaces the PSP choices of two edges; otherwise, the routing tree is kept unchanged. Figure 4 gives a routing tree with 6 pins. It can be seen that after \(F_1\), the PSP choices of \(m_1\) and \(m_2\) are changed to Choice 2 and Choice 0, respectively.
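The two-point mutation \(F_1\) can be sketched as follows (a minimal sketch with hypothetical names): with probability \(\omega \) two randomly picked edges receive new random PSP choices, and the vertex pairs of the tree are never touched.

```python
import random

# Sketch of the two-point mutation F1: with probability w, the PSP
# choices of two randomly chosen edges are replaced by random choices
# in {0, 1, 2, 3}; otherwise the routing tree is returned unchanged.
def mutate(edges, w, rng=random):
    if rng.random() >= w:
        return list(edges)                    # keep the tree unchanged
    out = list(edges)
    i, j = rng.sample(range(len(out)), 2)     # two distinct edge indices
    for k in (i, j):
        u, v, _ = out[k]
        out[k] = (u, v, rng.randrange(4))     # new random PSP choice
    return out
```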
Individual Cognition. The SLPSO-XSMT algorithm uses \(F_2\) to complete the individual cognition of particles, which is expressed as follows:

\(S_i^t = F_2(W_i^t, c_1) = {\left\{ \begin{array}{ll} C(W_i^t, X_i^P), &{} r_2 < c_1 \\ W_i^t, &{} \text {otherwise} \end{array}\right. }\)
where \(c_1\) represents the probability that the particle crosses with its personal historical optimum (\(X_i^P\)), and \(r_2\) is a random number in [0, 1).
Social Cognition. The SLPSO-XSMT algorithm uses \(F_3\) to complete the social cognition of particles, which is expressed as follows:

\(X_i^{t+1} = F_3(S_i^t, c_2) = {\left\{ \begin{array}{ll} C(S_i^t, X_k^P), &{} r_3 < c_2 \\ S_i^t, &{} \text {otherwise} \end{array}\right. }\)
where \(c_2\) represents the probability that the particle crosses with the historical optimum of any particle \(X_k^P\) in the example pool, and \(r_3\) is a random number in [0, 1).
Figure 5 shows the crossover operation in the individual cognition and social cognition of a particle. \(X_i\) is the particle to be crossed, and its learning object is \(X_i^P\) or \(X_k^P\). The proposed algorithm first selects a continuous interval of the encoding, e.g., the edges to be crossed \(e_1\), \(e_2\), and \(e_3\). Then the encoding of particle \(X_i\) on this interval is replaced with the corresponding encoding string of its learning object. After the crossover operation, the PSP choices of edges \(e_1\), \(e_2\), and \(e_3\) in \(X_i\) are changed from Choice 2, Choice 3, and Choice 3 to Choice 1, Choice 0, and Choice 3, respectively, while the topology of the remaining edges remains unchanged.
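The interval crossover used by both \(F_2\) and \(F_3\) can be sketched as follows (hypothetical names; both encodings are assumed to list the same vertex pairs in the same order, as in the edge-vertex encoding). Only the PSP choices inside the chosen interval are copied from the learning object.

```python
import random

# Sketch of the interval crossover: with probability prob, a contiguous
# run of edges in x has its PSP choices replaced by those of the guide
# (X_i^P or X_k^P); vertex pairs are left untouched.
def crossover(x, guide, prob, rng=random):
    if rng.random() >= prob:
        return list(x)                        # no crossover this time
    lo = rng.randrange(len(x))
    hi = rng.randrange(lo, len(x))            # inclusive interval [lo, hi]
    out = list(x)
    for k in range(lo, hi + 1):
        u, v, _ = out[k]
        out[k] = (u, v, guide[k][2])          # copy the guide's PSP choice
    return out
```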
Repeated iterative learning gradually moves particle \(X_i\) closer to the global optimal position. Moreover, the acceleration coefficient \(c_1\) decreases linearly while \(c_2\) increases linearly, so that in early iterations the algorithm has a higher probability of learning from its own historical experience, enhancing global search ability, while in later iterations it has a higher probability of learning from outstanding particles, enhancing exploitation ability and enabling rapid convergence to a position close to the global optimum.
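The linear coefficient schedule described above can be sketched as follows; the start and end values are assumptions for illustration, not the paper's settings.

```python
# Sketch of a linear schedule: c1 decreases and c2 increases over T
# iterations. The bounds 0.95/0.4 are assumed placeholders.
def coefficients(t, T, c1_start=0.95, c1_end=0.4, c2_start=0.4, c2_end=0.95):
    frac = t / T
    c1 = c1_start + (c1_end - c1_start) * frac   # linearly decreasing
    c2 = c2_start + (c2_end - c2_start) * frac   # linearly increasing
    return c1, c2
```

Early on, \(c_1 > c_2\) favors the personal best (exploration); by the final iterations \(c_2 > c_1\) favors the example-pool guide (exploitation).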
4.4 Overall Procedure
Property 1
The proposed SLPSO-XSMT algorithm with example pool learning mechanism has a good balance between global exploration and local exploitation ability so as to effectively solve the XSMT problem.
The steps for SLPSO-XSMT can be summarized as follows.
-
Step 1. Initialize the population and PSO parameters, where the minimum spanning tree method is utilized to construct initial routing tree.
-
Step 2. Calculate the fitness value of each particle according to Eq. (4), and sort them in ascending order: \(S=\{X_1,...,X_{i-1},X_i,X_{i+1},...,X_M\}\).
-
Step 3. Initialize pbest of each particle and its learning example pool \(EP=\{ X_1,...,X_{i - 1}\}\), and initialize gbest.
-
Step 4. Update the velocity and position of each particle according to Eqs. (5)–(8).
-
Step 5. Calculate the fitness value of each particle.
-
Step 6. Update pbest of each particle and its example pool EP, as well as gbest.
-
Step 7. If the termination condition is met (the set maximum number of iterations is reached), end the algorithm. Otherwise, return to step 4.
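Steps 1-7 can be sketched end to end as follows. This is a self-contained toy (all names hypothetical): edge lengths assume Choices 0/1 route one diagonal plus one rectilinear segment and Choices 2/3 are Manhattan, and the initial swarm is seeded by mutating a given spanning tree rather than by the paper's minimum spanning tree construction.

```python
import math
import random

def edge_len(a, b, c):
    # Assumed model: Choices 0/1 diagonal + rectilinear, 2/3 Manhattan.
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return abs(dx - dy) + math.sqrt(2) * min(dx, dy) if c in (0, 1) else dx + dy

def fitness(pts, edges):
    return sum(edge_len(pts[u], pts[v], c) for u, v, c in edges)

def mutate(edges, w):
    # F1: two-point mutation with probability w.
    if random.random() >= w:
        return list(edges)
    out = list(edges)
    i, j = random.sample(range(len(out)), 2)
    for k in (i, j):
        u, v, _ = out[k]
        out[k] = (u, v, random.randrange(4))
    return out

def crossover(x, guide, p):
    # F2/F3: copy the guide's PSP choices on a random contiguous interval.
    if random.random() >= p:
        return list(x)
    lo = random.randrange(len(x))
    hi = random.randrange(lo, len(x))
    return [(u, v, guide[k][2]) if lo <= k <= hi else (u, v, c)
            for k, (u, v, c) in enumerate(x)]

def slpso_xsmt(pts, init_edges, M=10, T=50, w=0.3, c1=0.8, c2=0.8):
    swarm = [mutate(init_edges, 1.0) for _ in range(M)]       # Step 1
    pbest = [(fitness(pts, x), x) for x in swarm]             # Steps 2-3
    gbest = min(pbest)
    for _ in range(T):                                        # Steps 4-7
        order = sorted(range(M), key=lambda i: pbest[i][0])   # example pools
        for rank, i in enumerate(order):
            pool = order[:rank] or [order[0]]  # best particle: self-fallback
            k = random.choice(pool)
            x = mutate(swarm[i], w)                           # F1
            x = crossover(x, pbest[i][1], c1)                 # F2
            x = crossover(x, pbest[k][1], c2)                 # F3
            swarm[i] = x
            f = fitness(pts, x)
            if f < pbest[i][0]:
                pbest[i] = (f, x)
            if f < gbest[0]:
                gbest = (f, x)
    return gbest
```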
4.5 Complexity Analysis
Lemma 1
Assuming the population size is M, the number of iterations is T, the number of pins is n, and then the complexity of SLPSO-XSMT algorithm is \(O(MT \cdot n{\log _2}n)\).
Proof
The time complexities of the mutation and crossover operations are both linear, O(n). The complexity of the fitness evaluation is mainly determined by the sorting step, \(O(n{\log _2}n)\). Since the example pool of each particle may change at the end of each iteration, the time for updating the example pools is also dominated by sorting, i.e., its complexity is likewise \(O(n{\log _2}n)\). Therefore, the complexity of the inner loop of the SLPSO-XSMT algorithm is \(O(n{\log _2}n)\), while the outer loop depends on the population size and the number of iterations. Hence the overall complexity of the proposed SLPSO-XSMT algorithm is \(O(MT \cdot n{\log _2}n)\).
5 Experiment Results
In order to verify the performance and effectiveness of the proposed algorithm, experiments are performed on the benchmark circuit suite [15]. The parameter settings in this paper are consistent with [8]. Considering the randomness of the PSO algorithm, the mean values in all experiments are obtained over 20 independent runs.
5.1 Validation of Social Learning Mechanism
In order to verify the effectiveness of the proposed social learning mechanism based on the example pool, this section applies PSO [8] and the proposed SLPSO method to the XSMT problem, where the social cognition of PSO is achieved by crossing with the global optimal solution (gbest). The experiments compare the wirelength optimization capabilities and stability of the two methods, as shown in Table 2. In all test cases, the SLPSO method achieves shorter wirelength and lower standard deviation than the PSO method. On the three evaluation indicators (best wirelength, mean wirelength, and standard deviation), the SLPSO method achieves optimization rates of 0.171%, 0.289%, and 35.881%, respectively. The experimental data show that the SLPSO method has better exploration and exploitation capabilities than the PSO method.
5.2 Validation of SLPSO-Based XSMT Construction Algorithm
In order to verify the good performance of the proposed SLPSO-XSMT algorithm, this section compares SLPSO-XSMT with two SMT algorithms: the traditional RSMT (R) [9] and the DDE-based XSMT (DDE) [16] algorithms. As shown in Table 3, our algorithm performs well in wirelength optimization, reducing the average wirelength by 8.76% and 1.81%, respectively. The comparison with the DDE-based XSMT algorithm further suggests that our algorithm is particularly well suited to constructing large-scale Steiner trees.
Additionally, the SLPSO-XSMT algorithm has an overwhelming advantage in stability: it is far superior to the other two algorithms and greatly reduces the standard deviation. Among them, the DDE-based algorithm has the worst stability, and ours reduces the standard deviation by 97.39% on average.
6 Conclusion
Aiming at the XSMT construction problem in VLSI routing, this paper proposes the SLPSO-based XSMT algorithm with the goal of optimizing the total wirelength. The algorithm adopts a novel social learning mechanism based on the example pool, so that particles can learn from different and better particles in each iteration, which expands the search range and helps the search break out of local extrema. At the same time, mutation and crossover operators are integrated into the particle update formula to better solve the discrete XSMT problem.
The experimental results show that the proposed SLPSO-XSMT algorithm has obvious advantages in reducing wirelength and enhancing the stability of the algorithm, especially for large-scale Steiner trees. In future work, we will continue to improve this high-performance SLPSO to better solve various problems in the field of VLSI routing.
References
Chen, X., Liu, G., Xiong, N., Su, Y., Chen, G.: A survey of swarm intelligence techniques in VLSI routing problems. IEEE Access 8, 26266–26292 (2020). https://doi.org/10.1109/ACCESS.2020.2971574
Coulston, C.S.: Constructing exact octagonal Steiner minimal trees. In: Proceedings of the 13th ACM Great Lakes symposium on VLSI, pp. 1–6 (2003). https://doi.org/10.1145/764808.764810
Guo, W., Liu, G., Chen, G., Peng, S.: A hybrid multi-objective PSO algorithm with local search strategy for VLSI partitioning. Front. Comput. Sci. China 8(2), 203–216 (2014). https://doi.org/10.1007/S11704-014-3008-Y
Huang, X., Guo, W., Liu, G., Chen, G.: FH-OAOS: a fast four-step heuristic for obstacle-avoiding octilinear Steiner tree construction. ACM Trans. Des. Autom. Electron. Syst. 21(3), 1–31 (2016). https://doi.org/10.1145/2856033
Huang, X., Liu, G., Guo, W., Niu, Y., Chen, G.: Obstacle-avoiding algorithm in x-architecture based on discrete particle swarm optimization for VLSI design. ACM Trans. Des. Autom. Electron. Syst. (TODAES) 20(2), 1–28 (2015). https://doi.org/10.1145/2699862
Lin, K.W., Lin, Y.S., Li, Y.L., Lin, R.B.: A maze routing-based methodology with bounded exploration and path-assessed retracing for constrained multilayer obstacle-avoiding rectilinear steiner tree construction. ACM Trans. Des. Autom. Electron. Syst. (TODAES) 23(4), 1–26 (2018). https://doi.org/10.1145/3177878
Lin, S.E.D., Kim, D.H.: Construction of all rectilinear Steiner minimum trees on the hanan grid. In: Proceedings of the 2018 International Symposium on Physical Design, pp. 18–25 (2018). https://doi.org/10.1145/3177540.3178240
Liu, G., Chen, G., Guo, W.: DPSO based octagonal Steiner tree algorithm for VLSI routing. In: 2012 IEEE Fifth International Conference on Advanced Computational Intelligence (ICACI), pp. 383–387. IEEE (2012). https://doi.org/10.1109/ICACI.2012.6463191
Liu, G., Chen, G., Guo, W., Chen, Z.: DPSO-based rectilinear Steiner minimal tree construction considering bend reduction. In: 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1161–1165. IEEE (2011). https://doi.org/10.1109/ICNC.2011.6022221
Liu, G., Chen, Z., Guo, W., Chen, G.: Self-adapting PSO algorithm with efficient hybrid transformation strategy for x-architecture Steiner minimal tree construction algorithm. Pattern Recogn. Artif. Intell. 31(5), 398–408 (2018). https://doi.org/10.16451/j.cnki.issn1003-6059.201805002. (In Chinese)
Liu, G., Chen, Z., Zhuang, Z., Guo, W., Chen, G.: A unified algorithm based on HTS and self-adapting PSO for the construction of octagonal and rectilinear SMT. Soft Comput. 24(6), 3943–3961 (2020). https://doi.org/10.1007/S00500-019-04165-2
Liu, G., Guo, W., Li, R., Niu, Y., Chen, G.: Xgrouter: high-quality global router in x-architecture with particle swarm optimization. Front. Comput. Sci. China 9(4), 576–594 (2015). https://doi.org/10.1007/S11704-015-4017-1
Liu, G., Guo, W., Niu, Y., Chen, G., Huang, X.: A PSO-based timing-driven octilinear Steiner tree algorithm for VLSI routing considering bend reduction. Soft Comput. 19(5), 1153–1169 (2015). https://doi.org/10.1007/S00500-014-1329-2
Thurber, A., Xue, G.: Computing hexagonal Steiner trees using PCX [for VLSI]. In: Proceedings of ICECS 1999, 6th IEEE International Conference on Electronics, Circuits and Systems, ICECS 1999 (Cat. No. 99EX357), vol. 1, pp. 381–384. IEEE (1999). https://doi.org/10.1109/ICECS.1999.812302
Warme, D., Winter, P., Zachariasen, M.: Geosteiner software for computing Steiner trees (2003). http://geosteiner.net
Wu, H., Xu, S., Zhuang, Z., Liu, G.: X-architecture Steiner minimal tree construction based on discrete differential evolution. In: Liu, Y., Wang, L., Zhao, L., Yu, Z. (eds.) ICNC-FSKD 2019. AISC, vol. 1074, pp. 433–442. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-32456-8_47
Yan, J.T.: Timing-driven octilinear Steiner tree construction based on Steiner-point reassignment and path reconstruction. ACM Trans. Des. Autom. Electron. Syst. (TODAES) 13(2), 1–18 (2008). https://doi.org/10.1145/1344418.1344422
Zhang, X., Wang, X., Kang, Q., Cheng, J.: Differential mutation and novel social learning particle swarm optimization algorithm. Inf. Sci. 480, 109–129 (2019). https://doi.org/10.1016/J.INS.2018.12.030
Zhao, H., Xia, S., Zhao, J., Zhu, D., Yao, R., Niu, Q.: Pareto-based many-objective convolutional neural networks. In: Meng, X., Li, R., Wang, K., Niu, B., Wang, X., Zhao, G. (eds.) WISA 2018. LNCS, vol. 11242, pp. 3–14. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-02934-0_1
Zhu, Q., Zhou, H., Jing, T., Hong, X.L., Yang, Y.: Spanning graph-based nonrectilinear Steiner tree algorithms. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 24(7), 1066–1075 (2005). https://doi.org/10.1109/TCAD.2005.850862
© 2020 Springer Nature Switzerland AG
Chen, X., Zhou, R., Liu, G., Wang, X. (2020). SLPSO-Based X-Architecture Steiner Minimum Tree Construction. In: Wang, G., Lin, X., Hendler, J., Song, W., Xu, Z., Liu, G. (eds) Web Information Systems and Applications. WISA 2020. Lecture Notes in Computer Science(), vol 12432. Springer, Cham. https://doi.org/10.1007/978-3-030-60029-7_12
DOI: https://doi.org/10.1007/978-3-030-60029-7_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-60028-0
Online ISBN: 978-3-030-60029-7