1 Introduction

The fixed-charge network flow problem (FCNFP) arises widely in network optimization, in areas such as transportation science [1,2,3,4], integrated production scheduling [5], and supply networks [6]. The FCNFP is a special case of the fixed-charge linear programming problem, a nondeterministic polynomial-time hard (NP-hard) problem introduced in [7].

Let \(G=(V, E)\) be a directed graph with n nodes and m edges, where V is the set of nodes and E is the set of edges. Each node \(i \in V\) is associated with a supply \(b_i\). Each edge \(e\in E\) is associated with a flow \(x_e\), a capacity \(u_e\), a unit cost \(c_e\), and a fixed cost \(s_e\). The cost function \(C_e(x_e)\) of edge e is discontinuous and can be expressed as:

$$\begin{aligned} C_e(x_e) = \begin{cases} 0 & x_e = 0 \\ s_e + c_e x_e & x_e \in (0, u_e] \end{cases} \end{aligned}$$
(1)

Let \(x = (x_1,x_2,\dots ,x_m)\) be the flow vector, \(u = (u_1,u_2,\dots ,u_m)\) the capacity vector, A the node-edge incidence matrix of G, and \(b = (b_1, b_2, \dots , b_n)\) the node supply vector. The FCNFP can then be formulated as the following problem:

$$\begin{aligned} \min \limits _{x}&\quad C(x) = \sum \limits _{e\in E} C_e(x_e) \\ \text {s.t.}&\quad Ax = b \\&\quad 0 \leqslant x \leqslant u \end{aligned}$$
(2)

where \(x, u \in {\mathbb {R}}^{m}, b \in {\mathbb {R}}^{n}, \text { and } A \in {\mathbb {R}}^{n \times m}\).
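To make the notation concrete, the following C++ sketch shows one possible in-memory representation of an FCNFP instance together with an evaluation of the objective in Eqs. (1)-(2). The type and field names are illustrative assumptions and are not taken from any implementation referenced in this paper.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical container for an FCNFP instance G = (V, E) with supplies b,
// capacities u, unit costs c, and fixed costs s (names are illustrative).
struct Edge {
    int from, to;     // tail and head node indices
    double capacity;  // u_e
    double unitCost;  // c_e
    double fixedCost; // s_e
};

struct FCNFPInstance {
    int numNodes = 0;
    std::vector<double> supply;  // b_i, one entry per node
    std::vector<Edge> edges;     // E
};

// Evaluate the discontinuous objective C(x) of Eq. (2):
// each edge carrying flow pays its fixed cost s_e plus c_e * x_e.
double totalCost(const FCNFPInstance& inst, const std::vector<double>& flow) {
    double cost = 0.0;
    for (std::size_t e = 0; e < inst.edges.size(); ++e) {
        if (flow[e] > 0.0) {
            cost += inst.edges[e].fixedCost + inst.edges[e].unitCost * flow[e];
        }
    }
    return cost;
}
```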

The algorithms for solving the FCNFP mainly include integer programming algorithms, linear programming-based (LP-based) iterative algorithms, and meta-heuristic algorithms. The FCNFP can be formulated as a mixed-integer linear programming (MILP) problem by adding a 0–1 variable for each edge to indicate whether its flow is greater than zero. The MILP can be solved directly by algorithms based on techniques such as cutting planes [8], Benders decomposition [9], column generation [10], Lagrangian relaxation [11], and Dantzig-Wolfe decomposition [12]. LP-based iterative algorithms can quickly find suboptimal solutions by approximating the discontinuous cost functions with (piecewise) linear functions [13,14,15]. Meta-heuristic algorithms, such as genetic algorithms [16] and population-based algorithms [1], are also used to solve the FCNFP. Although MILP formulations are exact, their solving time can be unpredictable and may become prohibitive on large-scale problems. Meta-heuristic algorithms lack theoretical guarantees of optimality and efficiency, and their long solving times make them impractical for industrial use. LP-based iterative algorithms are efficient, but the quality of their solutions still leaves room for improvement.

This paper presents a novel algorithm called the adaptive dynamic slope scaling procedure (ADSSP) for solving the fixed-charge network flow problem. ADSSP improves upon existing LP-based iterative algorithms by combining the dynamic slope scaling procedure (DSSP) with an edge deletion strategy. DSSP explores the feasible domain by iteratively adjusting the slope scaling factors, guiding the search toward promising solutions. The edge deletion strategy reduces the interference of fixed costs from invalid edges on DSSP by eliminating edges with zero flow. The alternating iteration of DSSP and edge deletion sequentially shrinks the feasible domain of the FCNFP, enabling ADSSP to converge efficiently to a good solution. Through extensive experiments on large-scale instances, we demonstrate that ADSSP outperforms previous LP-based iterative algorithms and the commercial solver Cplex in both solution quality and computational efficiency. Our research contributes to the development of more effective and scalable optimization methods for real-world problems.

The remainder of this paper is organized as follows. Section 2 lists related work on LP-based iterative algorithms. Section 3 outlines the details of the proposed ADSSP algorithm, while Sect. 4 describes and discusses simulation results. This study concludes in Sect. 5 with a summary of the results.

2 Related work

The development of LP-based iterative algorithms for the FCNFP can be roughly divided into three stages. The first LP-based iterative framework for solving the FCNFP, called DSSP, was proposed in [13]. This pioneering approach approximates the FCNFP by a series of linear programming problems, and the linear objective through the origin allows these approximate problems to be solved quickly. Building on the DSSP framework, a meta-heuristic technique called the Tabu Scheme was explored, which iteratively adjusts the linear factors to produce promising search neighborhoods for high-quality solutions [17]. However, DSSP lacks descent and convergence guarantees, which makes its solving time uncertain, and its simple single-layer iterative structure often leaves its solutions far from optimal.

An approximate model that uses a piecewise linear function to approximate the discontinuous objective of the FCNFP, called the concave piecewise linear network flow (CPLNF) model, was proposed in [14]. The piecewise linear objective is closer to the objective of the FCNFP than a linear objective, but it also makes the CPLNF model hard to solve. That paper transformed the CPLNF model into a mixed-integer linear programming model, relaxed the integer variables linearly, and proposed an algorithm called the dynamic cost updating procedure to solve it. An improved algorithm, the adaptive dynamic cost updating procedure (ADCUP), dynamically modifies the objective of CPLNF to approach the original objective function over the iterations [18].

A complex objective function challenges the efficiency of problem-solving. The objective function of CPLNF can be simplified, leading to a new approximate model called the continuous bilinear network flow (CBLNF) model, which can also be solved by ADCUP [15]. On this basis, ADCUP was modified by collapsing the two-layer iteration into a single-layer iteration to improve efficiency, resulting in an algorithm called the dynamic problem updating procedure (DPUP) [19]. However, these algorithms may yield solutions that are far from optimal, with objective values up to ten percent larger than the optimal one.

3 Algorithm

3.1 Adaptive dynamic slope scaling procedure

This section introduces the heuristic algorithm ADSSP, an iterative framework that combines an edge deletion strategy with DSSP. The edge deletion strategy aims to delete edges that are unlikely to appear in the optimal solution, which improves solution quality and reduces the problem scale for the next iteration. ADSSP can therefore quickly find a good solution by solving the FCNFP on sequentially reduced feasible domains. Figure 1 displays the whole solving process of ADSSP. The initial graph \(G^{(0)}(V, E^{(0)})\) induces an initial FCNFP that starts the solving process of ADSSP. In each iteration, DSSP solves the FCNFP induced by the current graph \(G^{(k)}(V, E^{(k)})\) and provides the solution \(\{ x^{(k)}_e \}\). Based on the selected deletable edge set \(\Delta E_{D}^{(k)}\) generated from \(\{ x^{(k)}_e \}\), edge deletion updates \(E^{(k)}\) by removing the edges in \(\Delta E_{D}^{(k)}\). The iteration terminates when the generated \(\Delta E_{D}^{(k)}\) is empty.

Fig. 1
figure 1

The flowchart of ADSSP

DSSP approximates FCNFP by a series of linear programming problems, where each edge’s cost is redefined as a linear cost through the origin. The slope can be expressed as:

$$\begin{aligned} {\bar{c}}_e = c_e + s_e / x_e \end{aligned}$$
(3)

where \(x_e > 0\) is a given flow on edge e. One possible initialization of the slope is \({\bar{c}}_e^{(0)} = c_e + s_e / u_e\). The dynamic slope-update formula is

$$\begin{aligned} {\bar{c}}^{(k+1)}_e = \begin{cases} \max \limits _{1 \le l \le k} \{ {\bar{c}}^{(l)}_e : x^{(l)}_e > 0 \} & x^{(k)}_e = 0 \\ c_e + s_e / x_e^{(k)} & x^{(k)}_e > 0 \end{cases} \end{aligned}$$
(4)

DSSP stops when the solution in the current iteration satisfies \(x^{(k-1)}_e = x^{(k)}_e\) for every edge e. Because this stopping criterion does not guarantee termination in a finite number of steps, a maximum number of iterations, denoted \(M_d\), is imposed to avoid excessively long solving times. To accelerate the computation, we apply an efficient algorithm [20] to solve the minimum-cost flow problem in DSSP. The details of DSSP are shown in Algorithm 1, where \(MCF\left( {\bar{c}}^{(k)}_e\right)\) denotes the solution of the minimum-cost flow problem with unit costs \({\bar{c}}^{(k)}_e\).

figure a
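As a complement to Algorithm 1, the following C++ sketch illustrates the slope initialization and the update rule of Eq. (4). The minimum-cost flow routine solveMinCostFlow is assumed to be supplied externally (for example, an implementation of [20]), the FCNFPInstance structure is the illustrative one sketched in Sect. 1, and none of the names below come from the paper's own code.

```cpp
#include <cstddef>
#include <vector>

// Assumed external routine: solves the linear min-cost flow problem on the
// instance with the given unit costs and returns an optimal flow vector.
std::vector<double> solveMinCostFlow(const FCNFPInstance& inst,
                                     const std::vector<double>& linearCost);

// Sketch of DSSP (Algorithm 1); maxIter plays the role of M_d.
std::vector<double> dssp(const FCNFPInstance& inst, int maxIter) {
    const std::size_t m = inst.edges.size();
    std::vector<double> cbar(m), cbarMax(m, 0.0), flow, prevFlow;

    // Slope initialization: cbar_e^(0) = c_e + s_e / u_e.
    for (std::size_t e = 0; e < m; ++e)
        cbar[e] = inst.edges[e].unitCost + inst.edges[e].fixedCost / inst.edges[e].capacity;

    for (int k = 0; k < maxIter; ++k) {
        flow = solveMinCostFlow(inst, cbar);
        if (k > 0 && flow == prevFlow) break;  // x^(k) equals x^(k-1) on every edge

        // Dynamic slope update, Eq. (4).
        for (std::size_t e = 0; e < m; ++e) {
            if (flow[e] > 0.0) {
                // Remember the largest slope used while the edge carried flow.
                if (cbar[e] > cbarMax[e]) cbarMax[e] = cbar[e];
                cbar[e] = inst.edges[e].unitCost + inst.edges[e].fixedCost / flow[e];
            } else if (cbarMax[e] > 0.0) {
                cbar[e] = cbarMax[e];  // zero flow now: fall back to the largest past slope
            }                          // (if the edge never carried flow, the slope is kept)
        }
        prevFlow = flow;
    }
    return flow;
}
```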

In the k-th iteration, the set of deletable edges is defined as \(\Delta E^{(k)} = \{ e \in E^{(k)} : s_e > 0,\ x^{(k)}_e = 0 \}\), where \(x_e^{(k)}\) is obtained from the solution of DSSP. Since DSSP is not an exact algorithm, deleting all edges in \(\Delta E^{(k)}\) may cause the algorithm to converge to a poor solution. An alternative approach is to delete only a certain fraction of the edges in \(\Delta E^{(k)}\). The selection of edges to delete from \(\Delta E^{(k)}\) relies on a value \(\phi _e\), computed according to one of the following rules.

  • TYPE 1: Randomly select a proportion of edges from \(\Delta E^{(k)}\) to delete.

  • TYPE 2: Delete the edges with large fixed costs, that is \(\phi _e = s_e\).

  • TYPE 3: Delete the edges with large values of \(\phi _e = c_e + s_e / u_e\).

  • TYPE 4: Delete the edges with large values of \(\phi _e = c_e + s_e\).

Among these four options, the first is not affected by the edge costs and can be applied to any graph structure. The second prioritizes removing zero-flow edges with higher fixed costs; however, it is ineffective when the unit cost is significantly larger than the fixed cost. The third and fourth rely on the average cost of an edge at maximum flow and at unit flow, respectively, which makes them more versatile than the previous two. The set of deleted edges is denoted \(\Delta E^{(k)}_D\) and can be expressed as

$$\begin{aligned} \Delta E^{(k)}_D = \{ e \in \Delta E^{(k)} : \phi _e > Q_{1-\alpha }(\varvec{\phi }) \} \end{aligned}$$

where \(\alpha\) is the deleting ratio and \(Q_{1-\alpha }(\varvec{\phi })\) is the \((1-\alpha )\) quantile of the set \(\{ \phi _e: e \in \Delta E^{(k)} \}\). After updating the edge set by \(E^{(k+1)} = E^{(k)} \setminus \Delta E_D^{(k)}\), the subproblem on the graph \(G^{(k+1)}(V,E^{(k+1)})\) is again an FCNFP. The stopping criterion is \(\Delta E_D^{(k)} = \emptyset\). As a safeguard, the maximum number of outer iterations is set to m. The details of ADSSP are shown in Algorithm 2.

figure b
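The following C++ sketch outlines the outer loop of Algorithm 2 with the TYPE 4 rule \(\phi _e = c_e + s_e\) and a quantile-based cutoff. It reuses the illustrative FCNFPInstance and dssp sketches above; the structure is a simplified reading of the procedure, not the paper's implementation, and in practice one would also keep a mapping from the reduced edge set back to the original edge indices.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of the ADSSP outer loop (Algorithm 2) with the TYPE 4 deletion rule.
// The instance is taken by value so the edge set can be shrunk locally.
std::vector<double> adssp(FCNFPInstance inst, int maxDsspIter, double alpha) {
    std::vector<double> flow;
    const std::size_t maxOuterIter = inst.edges.size();  // forced termination after m iterations

    for (std::size_t k = 0; k < maxOuterIter; ++k) {
        flow = dssp(inst, maxDsspIter);

        // Deletable edges: positive fixed cost and zero flow in the DSSP solution.
        std::vector<std::size_t> deletable;
        std::vector<double> phi;
        for (std::size_t e = 0; e < inst.edges.size(); ++e) {
            if (flow[e] == 0.0 && inst.edges[e].fixedCost > 0.0) {
                deletable.push_back(e);
                phi.push_back(inst.edges[e].unitCost + inst.edges[e].fixedCost);  // TYPE 4
            }
        }
        if (deletable.empty()) break;  // Delta E^(k) is empty: stop

        // (1 - alpha) quantile of phi; only edges strictly above it are deleted.
        std::vector<double> sorted = phi;
        std::size_t q = static_cast<std::size_t>((1.0 - alpha) * (sorted.size() - 1));
        std::nth_element(sorted.begin(), sorted.begin() + q, sorted.end());
        const double threshold = sorted[q];

        std::vector<bool> drop(inst.edges.size(), false);
        for (std::size_t i = 0; i < deletable.size(); ++i)
            if (phi[i] > threshold) drop[deletable[i]] = true;

        std::vector<Edge> kept;
        for (std::size_t e = 0; e < inst.edges.size(); ++e)
            if (!drop[e]) kept.push_back(inst.edges[e]);
        if (kept.size() == inst.edges.size()) break;  // Delta E_D^(k) is empty: stop
        inst.edges = kept;                            // E^(k+1) = E^(k) \ Delta E_D^(k)
    }
    return flow;  // indices refer to the final, reduced edge set
}
```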

3.2 Algorithm convergence and time complexity

DSSP itself lacks a time complexity analysis because its termination condition does not bound the number of iterations. However, ADSSP caps the number of DSSP iterations at \(M_d\), which makes Algorithm 2 terminate in a finite number of iterations. A theoretical analysis of the time complexity of ADSSP is therefore possible.

Proposition 1

ADSSP solves the FCNFP in a finite number of iterations. In the worst case the number of iterations is \(m-1\), while in the best case it is \(\left\lceil \log _{\frac{1}{1-\alpha }}(m - 1) \right\rceil + 1\), where m is the number of edges in the graph and \(\alpha\) is the deleting ratio.

Proof

Assume that the optimal solution contains only one edge with nonzero flow. In the worst case, only one edge can be deleted per iteration, resulting in a total of \(m-1\) iterations. In the best case, the maximum allowed fraction \(\alpha\) of the remaining deletable edges is removed in each iteration, so the number of iterations K satisfies \((1-\alpha )^{(K - 1)} (m - 1) = 1\). The upper bound on the number of iterations of ADSSP in the best case is therefore \(K = \left\lceil \log _{\frac{1}{1-\alpha }}(m - 1) \right\rceil + 1\). \(\square\)
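As a purely illustrative example (numbers chosen here, not taken from the experiments), consider a graph with \(m = 1025\) edges and a deleting ratio \(\alpha = 0.5\), so that \(\log _{1/(1-\alpha )} = \log _2\):

$$\begin{aligned} K_{\text {best}} = \lceil \log _{2}(1024) \rceil + 1 = 11, \qquad K_{\text {worst}} = m - 1 = 1024. \end{aligned}$$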

Theorem 1

The time bound of ADSSP is \(O(n m \log (n^2) \log (nC)\Gamma (m,\alpha ))\), where \(\Gamma (m,\alpha ) = m\) in the worst case and \(\Gamma (m,\alpha ) = \frac{1 - (1 - \alpha )^{K}}{\alpha }\) in the best case.

Proof

The time bound of \(O(nm\log (n^2 / m)\log (nC))\) on a graph with m edges for minimum-cost circulation problems is given in [20], where C denotes the largest edge cost; this is the time complexity of the function \(MCF(\cdot )\) in Algorithm 1. Since the other operations in the loop take O(m) time and the number of iterations \(M_d\) is a constant, the time bound of Algorithm 1 is also \(O(nm\log (n^2 / m)\log (nC))\).

The time bound of the function \(DSSP(\cdot )\) in Algorithm 2 is \(O(nm_k \log (n^2 / m_k)\log (nC))\), where \(m_k\) is the number of edges remaining in the k-th iteration. In the worst case, the time bound of ADSSP is

$$\begin{aligned}&O\left( \sum _{i=2}^{m} ni \log (n^2 / i)\log (nC)\right) \\&\quad = O\left( n\log (n^2)\log (nC)\sum _{i=2}^{m}i - n\log (nC)\sum _{i=2}^{m}i\log (i)\right) \\&\quad = O\left( nm^2\log (n^2)\log (nC)\right) \end{aligned}$$

In the best case, the time bound of ADSSP is

$$\begin{aligned}&O\left( \sum _{i = 0}^{K - 1}n(1-\alpha )^i m \log \left( \frac{n^2}{(1 - \alpha )^i m}\right) \log (nC)\right) \\&\quad = O\left( nm\log (n^2) \log (nC) \sum _{i = 0}^{K - 1}(1 - \alpha )^{i} - nm\log (nC)\sum _{i=0}^{K-1}\left( i\log (1-\alpha ) + \log m\right) (1-\alpha )^i\right) \\&\quad = O\left( nm\log (n^2) \log (nC) \frac{1 - (1 - \alpha )^{K}}{\alpha }\right) \end{aligned}$$

\(\square\)

4 Numerical experiment

This section consists of a description of the data and three experiments. First, we introduce the construction of the synthetic data and give the topological information and the parameter distributions used to generate the instances. Second, we give the range of choices for the algorithm hyperparameters and select the edge-deleting rule through comparative experiments. Based on the selected hyperparameters and rule, we then demonstrate the advantages of ADSSP from two perspectives: problem scale and combinatoriality. The combinatoriality of a problem is defined by the order-of-magnitude difference between the unit cost and the fixed cost.

The solution of the following MILP model, which is equivalent to the FCNFP, provides a benchmark for comparing the algorithms.

$$\begin{aligned} \min \limits _{x,y}&\quad C_I(x) = \sum _{e \in E} \left( c_e x_e + s_e y_e\right) \\ \text {s.t.}&\quad Ax=b \\&\quad 0 \le x_e \le u_e y_e, \quad \forall e \in E \\&\quad y_e \in \{ 0,1 \}, \quad \forall e \in E \end{aligned}$$
(5)

Cplex is an advanced commercial solver for MILP that is capable of finding the optimal solution. However, as the problem scale and combinatoriality increase, the solving time of Cplex becomes unpredictable. With a time limit imposed on the solver, Cplex provides the optimal solution for small-scale problems and suboptimal solutions for large-scale problems.

In this paper, the numerical experiments are implemented in C++17 and run on a Windows 10 platform with an Intel Core i7 2.50 GHz processor and 32.0 GB of RAM. The version of the Cplex MILP Solver is 12.10.0, and the time limit for solving the MILP is 3600 s.
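For reference, a minimal sketch of how model (5) can be assembled with the Cplex Concert Technology C++ interface is given below. It reuses the illustrative FCNFPInstance structure from Sect. 1; the Concert class and parameter names follow common usage and may differ across Cplex versions, and no error handling is shown.

```cpp
#include <ilcplex/ilocplex.h>
#include <limits>

// Sketch: build and solve the MILP benchmark (5) for an FCNFP instance.
double solveMilpBenchmark(const FCNFPInstance& inst, double timeLimitSec) {
    IloEnv env;
    const int m = static_cast<int>(inst.edges.size());
    IloModel model(env);
    IloNumVarArray x(env, m, 0.0, IloInfinity);  // continuous flows x_e
    IloBoolVarArray y(env, m);                   // 0-1 indicators y_e

    // Objective: sum_e (c_e x_e + s_e y_e).
    IloExpr obj(env);
    for (int e = 0; e < m; ++e)
        obj += inst.edges[e].unitCost * x[e] + inst.edges[e].fixedCost * y[e];
    model.add(IloMinimize(env, obj));

    // Coupling constraints x_e <= u_e y_e and flow conservation Ax = b
    // (outgoing minus incoming flow equals the node supply).
    IloExprArray balance(env, inst.numNodes);
    for (int i = 0; i < inst.numNodes; ++i) balance[i] = IloExpr(env);
    for (int e = 0; e < m; ++e) {
        model.add(x[e] <= inst.edges[e].capacity * y[e]);
        balance[inst.edges[e].from] += x[e];
        balance[inst.edges[e].to]   -= x[e];
    }
    for (int i = 0; i < inst.numNodes; ++i)
        model.add(balance[i] == inst.supply[i]);

    IloCplex cplex(model);
    cplex.setParam(IloCplex::Param::TimeLimit, timeLimitSec);  // e.g. 3600 s
    double value = std::numeric_limits<double>::quiet_NaN();
    if (cplex.solve()) value = cplex.getObjValue();
    env.end();
    return value;
}
```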

4.1 Data

The experimental instances, both small-scale and large-scale, are constructed on graphs with given parameters. The scale of an FCNFP instance is determined by its numbers of nodes and edges. Following [19], we set nine scales for the small-scale instances and four for the large-scale instances. The largest instance in this paper contains ten thousand nodes and one million edges. In graph generation, nodes are placed at random positions in a two-dimensional plane and labeled in order. Each node is assigned a supply, and the sign of the supply determines the node type. Edges are generated between pairs of nodes whose distance is smaller than a specified threshold. The topological data of the graphs are shown in Table 1.

For each instance, we need to generate the supplies on the nodes and the capacity, unit cost, and fixed cost on the edges. The experimental data construction in the related work does not specify how these parameters are generated. To construct more realistic instances, we sample these parameters according to the distributions in Table 2. In addition, a combinatoriality coefficient \(\gamma\) adjusts the magnitude difference between the unit and fixed costs, producing instances with different degrees of combinatoriality. Instance feasibility is verified by a minimum-cost flow algorithm with the fixed costs set to zero. A sketch of this generation scheme is given after Table 2.

Table 1 The topological information of the graph for each instance
Table 2 The distributions of parameters on the graph
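As a rough illustration of the generation scheme described above, the following C++17 sketch places nodes in the unit square, connects pairs within a distance threshold, and draws edge parameters from placeholder distributions. The actual distributions of Table 2, the supply assignment, and the role of \(\gamma\) in the experiments may differ, so every numeric choice below is an assumption.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Illustrative instance generator: random nodes in the unit square, edges
// between pairs closer than `radius`. All distributions are placeholders and
// do not reproduce Table 2.
FCNFPInstance generateInstance(int numNodes, double radius, double gamma,
                               std::mt19937& rng) {
    std::uniform_real_distribution<double> coord(0.0, 1.0);
    std::uniform_real_distribution<double> unitCost(1.0, 10.0);    // assumed range for c_e
    std::uniform_real_distribution<double> capacity(10.0, 100.0);  // assumed range for u_e

    std::vector<double> xs(numNodes), ys(numNodes);
    for (int i = 0; i < numNodes; ++i) { xs[i] = coord(rng); ys[i] = coord(rng); }

    FCNFPInstance inst;
    inst.numNodes = numNodes;
    inst.supply.assign(numNodes, 0.0);
    for (int i = 0; i < numNodes; ++i) {
        for (int j = 0; j < numNodes; ++j) {
            if (i == j) continue;
            const double dx = xs[i] - xs[j], dy = ys[i] - ys[j];
            if (std::sqrt(dx * dx + dy * dy) < radius) {
                Edge e;
                e.from = i;
                e.to = j;
                e.capacity = capacity(rng);
                e.unitCost = unitCost(rng);
                // gamma scales the fixed cost relative to the unit cost and thus
                // controls the combinatoriality of the instance (assumed role).
                e.fixedCost = gamma * unitCost(rng);
                inst.edges.push_back(e);
            }
        }
    }
    // Supplies: one source and one sink with balanced supply (illustrative only;
    // feasibility should then be checked by a min-cost flow run at zero fixed cost).
    std::uniform_int_distribution<int> pick(0, numNodes - 1);
    const int src = pick(rng), dst = pick(rng);
    if (src != dst) { inst.supply[src] = 100.0; inst.supply[dst] = -100.0; }
    return inst;
}
```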

4.2 Hyperparameters and deleting rule in ADSSP

The hyperparameters \(M_d\) and \(\alpha\) must be specified in advance in ADSSP. Their selection is guided by comparing the relative error between the optimal value and the objective of ADSSP on a grid of parameter values. We choose \(M_d\) from the set \(\{ i\in {\mathbb {Z}}: 1 \le i \le 30 \}\) and \(\alpha\) from the set \(\{ 0.05k: k = 1,2,\dots ,20 \}\). The experiments are conducted on multiple groups of small-scale instances with strong combinatoriality to make the results more illustrative. Figure 2 shows the relative errors over the gridded parameters. The darker area, i.e., the area with smaller relative error, is mainly concentrated in \(\{ (M_d,\alpha ): 0< M_d \leqslant 5; 0 < \alpha \leqslant 0.5 \}\), which is verified across the four scales. When \(M_d\) is set to one, the relative error between the objective of ADSSP and the optimal value exceeds 10%, which causes the red, steep slope in Fig. 2. As \(M_d\) grows beyond a few iterations, however, the relative error increases again. Therefore, a few steps of dynamic slope scaling in ADSSP are enough to improve the correctness of edge deletion. Likewise, a smaller value of \(\alpha\) reduces the impact of incorrect edge-deletion decisions on the solution.

Fig. 2
figure 2

Relative error between ADSSP and the Cplex MILP Solver under gridded parameters

The edge-deleting rule in ADSSP cannot be specified a priori, so we compare the four rules through experiments on small-scale instances. Taking the solution obtained by Cplex as a benchmark, we calculate the relative error between it and the solutions of ADSSP under the different deleting rules. Table 3 shows the average solving time and relative error of ADSSP with the different deleting rules on multi-scale instances. The relative error is calculated as

$$\begin{aligned} \begin{aligned} \text {Relative Error} \;(\%) = \frac{C_A - C_I}{C_I} \times 100\% \\ \end{aligned} \end{aligned}$$
(6)

where \(C_A\) is the objective value of the FCNFP solution found by ADSSP and \(C_I\) is the objective value of the MILP solution found by the Cplex MILP Solver. The choice of edge-deleting rule does not significantly affect the solving efficiency. However, TYPE 2 and TYPE 4 help the algorithm find better solutions, which indicates that fixed costs have a major influence on the effectiveness of edge deletion.

Table 3 Performance of ADSSP under four deleting rules for small-scale instances

In the following experiments, we set the maximum number of DSSP iterations \(M_d\) to three and the deleting ratio \(\alpha\) to 0.5. The performance of ADSSP under the four deleting rules shows that TYPE 2 and TYPE 4 are good choices for edge deletion. Since the TYPE 2 rule cannot handle the case in which the unit cost is much larger than the fixed cost, we adopt the TYPE 4 rule for edge deletion in ADSSP.

4.3 Simulation on instances with different scale

This section discusses the simulation results on problems of different scales. For the small-scale problems, we calculate the relative error between the solutions of ADSSP and the Cplex solver and compare ADSSP with the previous LP-based iterative algorithms DSSP, ADCUP, and DPUP. For the large-scale problems, we compare the performance of DSSP, ADSSP, and Cplex and explore the impact of problem combinatoriality on ADSSP.

4.3.1 Small-scale instances

The small-scale instances are divided into nine groups with the topological information listed in Table 1, and each group contains 30 instances. Table 4 indicates that Cplex can find the optimal solution, or a suboptimal solution with a low optimality gap, for the small-scale instances. Taking the solution of Cplex as the benchmark, we compare the relative errors of the solutions obtained by DSSP, ADCUP, DPUP, and ADSSP. Table 5 shows the results of the algorithms on the small-scale problems, with the minimum value of each indicator highlighted in bold. The first column states the problem scale of each group. The second to fifth columns list the average solving times of the four algorithms. The sixth to ninth columns list the average relative error between the benchmark and the solutions of these four algorithms.

Table 4 Performance of Cplex on small-scale instances
Table 5 Performance of DSSP, ADCUP, DPUP, and ADSSP on small-scale instances

The results indicate that ADSSP far exceeds the other LP-based iterative algorithms in performance and stability. For small-scale problems, the average solving time of ADSSP is less than 1.7 s, and the average relative error of the objective is within 2%. Although the solving time of ADSSP is slightly longer than that of DSSP on the instances with fewer than 1000 edges, ADSSP finds better solutions through the edge deletion strategy. In terms of optimality alone, however, ADSSP does not reach the optimal solution. ADSSP finds a solution within 2% average relative error in less than 1.7 s, whereas the Cplex MILP Solver takes much longer, sometimes even reaching the time limit.

The complexity of the FCNFP is related to the number of variables and constraints. Using the total number of nodes and edges as the problem scale, Fig. 3 shows the trend of solving time with problem scale. The solving time of ADSSP is not only significantly smaller than that of the other three algorithms, but the slope of its fitted line is also noticeably flatter. ADSSP thus outperforms the currently available LP-based iterative algorithms in both solution quality and solving efficiency. Obtaining better results in less solving time indicates that ADSSP is particularly advantageous for large-scale problems.

Fig. 3
figure 3

Scatter plot of solving time and fitted lines against problem scale. The solving time of each algorithm is from Table 5. The problem scale equals the total number of nodes and edges. To facilitate the display and comparison of the algorithms, both solving time and problem scale are plotted on logarithmic scales

4.3.2 Large-scale instances

We construct four groups of large-scale instances, whose numbers of nodes and edges are (6000, 250,000), (7500, 500,000), (9000, 750,000), and (10,000, 1,000,000). Each group includes nine classes with different combinatoriality coefficients, and each class contains ten instances. We compare the performance of DSSP, ADSSP, and Cplex on these instances. Table 6 shows the optimality gap of Cplex for varying scales and combinatoriality coefficients; the gap increases with the combinatoriality coefficient. Table 7 shows the relative error calculated by Eq. (6). Taking the solution of Cplex as the benchmark, the average relative error of DSSP is greater than zero and increases with \(\gamma\). In contrast, the average relative error of ADSSP is less than zero and decreases as \(\gamma\) increases. Table 8 shows the solving times of DSSP, ADSSP, and Cplex on these instances. The solving time of the FCNFP increases significantly with the magnitude of \(\gamma\). In Table 8, ADSSP shows the slowest growth in solving time, with a maximum of 91.6 s on the large-scale instances, which is significantly smaller than that of Cplex and only one-tenth of that of DSSP.

Table 6 Optimality gap of Cplex on large-scale instances
Table 7 Performance of DSSP and ADSSP on large-scale instances
Table 8 Solving time of DSSP, ADSSP, and CPLEX on large-scale instances
Table 9 The average number of iterations for ADSSP

ADSSP outperforms the Cplex MILP Solver once the problem scale reaches at least 250,000, finding better solutions faster, with objective values about 2% lower than those of Cplex, while DSSP performs far worse than Cplex. Moreover, the solving time of ADSSP on the large-scale instances is far shorter than that of DSSP and Cplex, and it is less affected by the combinatoriality of the problem. The results demonstrate that the edge deletion strategy brings a substantial improvement in both the effectiveness and the efficiency of solving the large-scale FCNFP; it helps ADSSP find better solutions than the commercial solver in a short time. Therefore, ADSSP is a practical and effective algorithm for solving large-scale FCNFP instances.

4.4 Simulation on instances with different combinatoriality

The impact of combinatoriality on the solutions of ADSSP is evident. Table 7 shows a clear difference in relative errors between groups with different \(\gamma\). Figure 4 illustrates that ADSSP performs similarly to the Cplex MILP Solver on weakly combinatorial problems while finding better solutions than the Cplex MILP Solver on strongly combinatorial problems. The relative error of the objectives decreases as the combinatoriality strengthens.

Fig. 4
figure 4

The curve of relative error between objectives of ADSSP and Cplex MILP Solver on large-scale problems

Table 8 does not reveal a clear relationship between the combinatoriality coefficient and the solving time. To shed light on the impact of combinatoriality on the solving process of ADSSP, Table 9 provides the number of iterations of Algorithm 2 for different values of \(\gamma\), with the best-case iteration number from Proposition 1 as a reference. The findings indicate that the combinatoriality coefficient has an effect on the number of iterations, as the difference between the average iteration number and the theoretical best case increases with \(\gamma\). Although this suggests that combinatoriality lengthens the solving time, the increase is slight. When the unit cost is much higher than the fixed cost, the number of iterations for all instances equals the best-case value. Even when the fixed cost is significantly greater than the unit cost, ADSSP requires only one additional iteration to solve the FCNFP. Overall, the experimental results demonstrate that ADSSP typically solves the FCNFP with a best-case or near-best-case number of iterations.

5 Conclusion

The FCNFP has a wide range of applications in real-world scenarios. Although the FCNFP can be transformed into an integer programming problem and solved by commercial solvers, finding the exact solution in an acceptable time, or a sufficiently good suboptimal solution in a short time, is challenging, especially for problems with large scale or strong combinatoriality. This paper proposes an LP-based iterative algorithm called ADSSP that reduces the scale of the subproblem across iterations by deleting edges and thereby converges quickly to a good suboptimal solution. ADSSP outperforms the commercial solver Cplex on large-scale problems and achieves a small, controllable relative error with respect to the optimal solution on small-scale problems. Furthermore, this paper analyzes the convergence and time complexity of ADSSP from theoretical and experimental perspectives. Finally, the hyperparameter selection experiments identify a suitable range for the parameters, allowing the algorithm to be employed more effectively.