
The previous chapters discussed the Cohort Intelligence (CI) algorithm and its applicability to solving several unconstrained and constrained problems. In addition, CI was applied to several clustering problems, which validated the learning and self-supervising behavior of the cohort. This chapter further tests the ability of CI by solving an NP-hard combinatorial problem, the Knapsack Problem (KP). Several cases of the 0–1 KP are solved, the effect of various parameters on the solution quality is discussed, and the advantages and limitations of the CI methodology are also examined.

5.1 Knapsack Problem Using CI Method

The Knapsack Problem (KP) can be divided into two categories: single-constraint KPs and multiple-constraint KPs. The single-constraint KPs include the Subset-sum, 0–1 Knapsack, Bounded Knapsack, Change-making, and Multiple-choice Knapsack problems. The multiple-constraint KPs include the 0–1 Multiple Knapsack, 0–1 Multidimensional Knapsack, generalized assignment, and Bin Packing problems, with a wide range of applications such as cargo loading, cutting stock problems, resource allocation in computer systems, and economics [1, 2]. The single-constraint case is generally known simply as the KP, or the uni-dimensional KP [3]. Another variant, the Multichoice Multidimensional KP (MMKP), is used to represent an optimally graceful Quality of Service (QoS) degradation model, in which the QoS of a single-session multimedia service is gracefully degraded to conform to changes in resource availability [4]. Khan [5] used the MMKP to represent a utility model (UM), a mathematical model for a multi-session adaptive multimedia system. The MMKP also appears in the nursing personnel scheduling problem [6], which is defined as the identification of a staffing pattern that specifies the number of nursing personnel of a certain skill to be scheduled and satisfies the total nursing personnel capacity and other relevant constraints. Practical applications of the 0–1 KP include finding an optimal investment plan [7]; theoretical applications include its role as a sub-problem when solving the generalized assignment problem, which in turn is heavily used in vehicle routing, efficient packing of cargo containers considering weight and volume capacity utilization, etc. [8]. Apart from these applications, KPs are also used for resource allocation problems on the World Wide Web [9]. In this chapter, various cases of the 0–1 KP [10–12] are solved using CI. In all the problems, the implemented CI methodology produced robust results at reasonable computational cost.

The problem is described as follows [10–14]: given a set of N objects, each object \( i,\;i = 1, \ldots ,N \) is associated with an integer profit \( v_{i} \) and an integer weight \( w_{i} \). The goal is to fill the knapsack with a subset of the objects such that the total profit \( f({\mathbf{v}}) \) is maximized and the total weight \( f({\mathbf{w}}) \) does not exceed a given capacity W. The mathematical formulation is as follows:

$$ {\text{Maximize}}\;f({\mathbf{v}}) = \sum\limits_{i = 1}^{N} {v_{i} x_{i} } $$
$$ {\text{Subject}}\;{\text{to}}\;f({\mathbf{w}}) \le W $$

where

$$ f({\mathbf{w}}) = \sum\limits_{i = 1}^{N} {w_{i} x_{i} } ,x_{i} \in \{ 0,1\} ,\quad 1 \le i \le N $$
(5.1)
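As a concrete reading of Eq. 5.1, the following minimal Python sketch evaluates the objective and the weight constraint for a binary selection vector. The authors' implementation was in Matlab; this sketch, the capacity value, and the helper names `profit_of` and `weight_of` are illustrative assumptions (the item data are the first four entries of test case \( f_{11} \) listed at the end of this chapter).

```python
def profit_of(x, v):
    """Total profit f(v) of a 0-1 selection x (objective of Eq. 5.1)."""
    return sum(vi for vi, xi in zip(v, x) if xi)

def weight_of(x, w):
    """Total weight f(w) of a 0-1 selection x (constraint of Eq. 5.1)."""
    return sum(wi for wi, xi in zip(w, x) if xi)

v = [57, 64, 50, 6]   # profits of the first four items of f11
w = [46, 17, 35, 1]   # weights of the first four items of f11
W = 60                # illustrative capacity, not from the chapter

x = [0, 1, 1, 1]              # select objects 2, 3 and 4
assert weight_of(x, w) <= W   # feasible: 17 + 35 + 1 = 53 <= 60
print(profit_of(x, v))        # total profit: 64 + 50 + 6 = 120
```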

5.1.1 Illustration of CI Solving 0–1 KP

In the context of the CI algorithm (discussed in Sect. 5.1), the objects \( i,\;i = 1, \ldots ,N \) are considered as characteristics/attributes/qualities which decide the overall profit \( f({\mathbf{v}}) \) and the associated overall weight \( f({\mathbf{w}}) \) of the knapsack. The procedure begins with the initialization of the number of cohort candidates C and the number of variations t. In the cohort of C candidates, initially every candidate \( c\,(c = 1, \ldots ,C) \) randomly selects a few objects, and the associated profits \( {\mathbf{F}}^{C} = \left\{ {f\left( {{\mathbf{v}}^{1} } \right), \ldots ,f\left( {{\mathbf{v}}^{c} } \right), \ldots ,f\left( {{\mathbf{v}}^{C} } \right)} \right\} \) and weights \( {\mathbf{F}}^{CW} = \left\{ {f\left( {{\mathbf{w}}^{1} } \right), \ldots ,f\left( {{\mathbf{w}}^{c} } \right), \ldots ,f\left( {{\mathbf{w}}^{C} } \right)} \right\} \) are calculated. The remaining steps of the CI algorithm are discussed below.

Step 1:

The probability \( p^{c} \,(c = 1, \ldots ,C) \) of selecting the profit \( f\left( {{\mathbf{v}}^{c} } \right) \) is calculated as \( p^{c} = p_{1}^{c} + p_{2}^{c} \)

where

$$ p_{1}^{c} = \frac{{f\left( {{\mathbf{v}}^{c} } \right)}}{{\sum\nolimits_{c = 1}^{C} {f\left( {{\mathbf{v}}^{c} } \right)} }} $$
(5.2)

and

$$ p_{2}^{c} = \begin{cases} \dfrac{f\left( {{\mathbf{w}}^{c} } \right)}{W} & f\left( {{\mathbf{w}}^{c} } \right) \le W \\[6pt] 3 - \dfrac{2f\left( {{\mathbf{w}}^{c} } \right)}{W} & f\left( {{\mathbf{w}}^{c} } \right) > W \end{cases} $$
(5.3)

A probability distribution specially devised to bias the solution towards feasibility is represented in Fig. 5.1. The probability \( p_{2}^{c} \) increases linearly as the total weight of the knapsack increases, reaching its peak value at the maximum capacity W; upon any further increase in weight the probability decreases rapidly. Thus, the probability is highest at the maximum capacity and falls off on either side of it, with the decrease beyond W occurring at twice the slope. A code sketch of this computation follows Fig. 5.1.

Fig. 5.1
figure 1

Probability distribution for \( p_{2}^{c} \)
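The following Python sketch computes the candidate selection probabilities of Eqs. 5.2 and 5.3. It is an illustration, not the authors' Matlab code; the small clamp at the end is an added implementation choice (an assumption, not from the chapter) so that a grossly overweight candidate cannot produce a negative probability in the roulette wheel of Step 2.

```python
def selection_probabilities(profits, weights, W):
    """Selection probability p^c = p1^c + p2^c for every candidate (Eqs. 5.2 and 5.3)."""
    total_profit = sum(profits)
    p = []
    for fv, fw in zip(profits, weights):
        p1 = fv / total_profit          # Eq. 5.2: share of the cohort's total profit
        if fw <= W:
            p2 = fw / W                 # Eq. 5.3: rises linearly up to the capacity W
        else:
            p2 = 3 - 2 * fw / W         # falls beyond W with twice the slope
        p.append(max(p1 + p2, 1e-9))    # clamp (assumption): keep probabilities positive
    return p
```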

Step 2:

Based on the roulette wheel selection approach, every candidate \( c\,(c = 1, \ldots ,C) \) selects a candidate with associated profit \( f\left( {{\mathbf{v}}^{c[?]} } \right) \) and modifies its own solution by incorporating some objects from that candidate. The superscript \( [?] \) indicates that the behavior to follow is selected by candidate c and is not known in advance. The modification approach is inspired by the feasibility-based rules discussed in [15–17]. The modifications are categorized as follows (a code sketch of these rules appears after the list):

  1.

    If the solution of candidate c \( (c = 1, \ldots ,C) \) is feasible, i.e. it satisfies the weight constraint given by Eq. 5.1, then it randomly chooses one of the following modifications:

    1.1.

      Adds a randomly chosen object from the candidate being followed, such that the object is not already included in the present candidate c and the weight constraint given by Eq. 5.1 is still satisfied.

    1.2.

      Replaces a randomly chosen object with another randomly chosen one from the candidate being followed, such that Eq. 5.1 is satisfied.

  2.

    If the solution of candidate c \( (c = 1, \ldots ,C) \) is infeasible, then it randomly chooses one of the following modifications:

    2.1.

      Removes a randomly chosen object from within its knapsack.

    2.2.

      Replaces a randomly chosen object with another randomly chosen one from the candidate being followed, such that the total weight \( f\left( {{\mathbf{w}}^{c} } \right) \) of candidate c decreases.

Every candidate performs the above procedure t times. This makes every candidate c available with associated profits \( {\mathbf{F}}^{c,t} = \left\{ {f\left( {{\mathbf{v}}^{c} } \right)^{1} , \ldots ,f\left( {{\mathbf{v}}^{c} } \right)^{j} , \ldots ,f\left( {{\mathbf{v}}^{c} } \right)^{t} } \right\},\quad (c = 1, \ldots ,C) \). Furthermore, every candidate selects the best profit \( f^{*} ({\mathbf{v}}) \) among them. The best variation is selected based on the following conditions:

  1.

    If all the variations are feasible, the variation with maximum profit is selected.

  2.

    If all the variations are infeasible, the variation with minimum weight is selected.

  3.

    If there are both feasible and infeasible variations, the feasible variation with maximum profit is selected.

This makes the cohort available with C updated profits \( {\mathbf{F}}^{C} = \left\{ {f^{*} \left( {{\mathbf{v}}^{1} } \right), \ldots ,f^{*} \left( {{\mathbf{v}}^{c} } \right), \ldots ,f^{*} \left( {{\mathbf{v}}^{C} } \right)} \right\} \).
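A minimal Python sketch of Step 2 is given below, reusing `profit_of` and `weight_of` from the earlier sketch. It assumes the modification rules above with uniformly random tie-breaking between rules 1.1/1.2 and 2.1/2.2; the function names and the 50/50 choice are illustrative assumptions, not the authors' implementation.

```python
import random

def roulette_select(p):
    """Index of the candidate to follow, with probability proportional to p."""
    r = random.uniform(0, sum(p))
    acc = 0.0
    for idx, pc in enumerate(p):
        acc += pc
        if r <= acc:
            return idx
    return len(p) - 1

def modify(x, followed, v, w, W):
    """One modification of solution x using objects of the followed candidate."""
    x = x[:]
    mine = [i for i, xi in enumerate(x) if xi]
    theirs = [i for i, fi in enumerate(followed) if fi and not x[i]]
    if weight_of(x, w) <= W:                       # feasible: rules 1.1 and 1.2
        addable = [i for i in theirs if weight_of(x, w) + w[i] <= W]
        if addable and (not mine or random.random() < 0.5):
            x[random.choice(addable)] = 1          # rule 1.1: add a feasible object
        elif mine and theirs:
            out, new = random.choice(mine), random.choice(theirs)
            if weight_of(x, w) - w[out] + w[new] <= W:
                x[out], x[new] = 0, 1              # rule 1.2: feasible replacement
    else:                                          # infeasible: rules 2.1 and 2.2
        if not theirs or random.random() < 0.5:
            x[random.choice(mine)] = 0             # rule 2.1: remove an object
        else:
            out, new = random.choice(mine), random.choice(theirs)
            if w[new] < w[out]:
                x[out], x[new] = 0, 1              # rule 2.2: weight-reducing replacement
    return x

def best_variation(variations, v, w, W):
    """Pick the best of the t variations using conditions 1-3 above."""
    feasible = [x for x in variations if weight_of(x, w) <= W]
    if feasible:
        return max(feasible, key=lambda x: profit_of(x, v))    # conditions 1 and 3
    return min(variations, key=lambda x: weight_of(x, w))      # condition 2
```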

This process continues until saturation (convergence), i.e. every candidate has the same profit and it does not change for a considerable number of successive learning attempts.

The above procedure of solving the KP using the CI algorithm is illustrated here with the number of objects \( N = 4 \) and knapsack capacity \( W = 8 \). The weights \( w_{i} ,\;i = 1, \ldots ,N \) and profits \( v_{i} ,\;i = 1, \ldots ,N \) associated with every object are illustrated in Fig. 5.2. Furthermore, the cohort is assumed to have three candidates, i.e. \( C = 3 \), and the number of variations is \( t = 3 \).

Fig. 5.2
figure 2

Illustrative 0–1 KP example with \( N = 4,W = 8 \)

Initially every candidate \( c\,(c = 1, \ldots ,C) \) randomly selects a few objects, and the associated profits \( {\mathbf{F}}^{C} = \left\{ {f\left( {{\mathbf{v}}^{1} } \right), \ldots ,f\left( {{\mathbf{v}}^{c} } \right), \ldots ,f\left( {{\mathbf{v}}^{C} } \right)} \right\} \) and weights \( {\mathbf{F}}^{C,W} = \left\{ {f\left( {{\mathbf{w}}^{1} } \right), \ldots ,f\left( {{\mathbf{w}}^{c} } \right), \ldots ,f\left( {{\mathbf{w}}^{C} } \right)} \right\} \) are calculated. The CI steps for this example proceed as follows:

  (1)

    The probability \( p^{c} \) associated with each candidate \( c(c = 1, \ldots ,3) \) is calculated using Eqs. 5.2 and 5.3. The calculated probability values are presented in Fig. 5.3.

    Fig. 5.3
    figure 3

    Illustrative 0–1 KP example with \( C = 3 \)

  (2)

Using the roulette wheel selection approach, assume that candidate 1 decides to follow candidate 3. As presented in Fig. 5.4, \( t = 3 \) variations are formed along with the associated profits vector \( {\mathbf{F}}^{1,3} = \left\{ {f\left( {{\mathbf{v}}^{1} } \right)^{1} ,f\left( {{\mathbf{v}}^{1} } \right)^{2} ,f\left( {{\mathbf{v}}^{1} } \right)^{3} } \right\} \), and the variation with profit \( f^{*} \left( {{\mathbf{v}}^{1} } \right) \) is selected. In the same way, candidates 2 and 3 also follow certain candidates and update themselves. This makes the cohort available with 3 updated candidates with profits \( {\mathbf{F}}^{3} = \left\{ {f^{*} \left( {{\mathbf{v}}^{1} } \right),f^{*} \left( {{\mathbf{v}}^{2} } \right),f^{*} \left( {{\mathbf{v}}^{3} } \right)} \right\} \). This process continues until saturation (convergence), i.e. every candidate reaches the same solution and it does not change for a considerable number of successive learning attempts. An end-to-end code sketch follows Fig. 5.4.

    Fig. 5.4
    figure 4

    Illustrative 0–1 KP example with \( t = 3 \) (variations obtained)
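Putting the sketches together, a complete CI loop for the 0–1 KP might look as follows. This is an illustrative composition of the earlier helper functions, not the authors' Matlab implementation; the random 0/1 initialization, the iteration cap, and the stall window used to detect saturation are assumptions.

```python
def cohort_intelligence(v, w, W, C=3, t=3, max_attempts=200, stall=20):
    """CI for the 0-1 KP: initialize C candidates, then repeat Steps 1 and 2
    until the cohort saturates (best profit unchanged for `stall` attempts)."""
    N = len(v)
    cohort = [[random.randint(0, 1) for _ in range(N)] for _ in range(C)]
    history = []
    for _ in range(max_attempts):
        profits = [profit_of(x, v) for x in cohort]
        weights = [weight_of(x, w) for x in cohort]
        p = selection_probabilities(profits, weights, W)      # Step 1
        new_cohort = []
        for x in cohort:
            followed = cohort[roulette_select(p)]             # Step 2: whom to follow
            variations = [modify(x, followed, v, w, W) for _ in range(t)]
            new_cohort.append(best_variation(variations, v, w, W))
        cohort = new_cohort
        feasible = [profit_of(x, v) for x in cohort if weight_of(x, w) <= W]
        history.append(max(feasible) if feasible else 0)
        if len(history) >= stall and len(set(history[-stall:])) == 1:
            break                                             # saturation reached
    return max(cohort, key=lambda x: profit_of(x, v) if weight_of(x, w) <= W else -1)
```

With the earlier helpers in scope, a call such as `cohort_intelligence(v, w, W, C=5, t=10)` mirrors the parameter choice reported in Sect. 5.2.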

5.2 Results and Discussion

The CI algorithm discussed in Sect. 5.1 was coded in Matlab 7.11 (R2010b), and the simulations were run on a Windows platform with an Intel Core i5 CPU (2.27 GHz) and 3 GB of memory. The approach was validated using twenty distinct test cases of the 0–1 KP. The standard test cases \( f_{1} - f_{10} \) [10–12] are presented in Table 5.1. The cases \( f_{11} - f_{20} \) were generated using a random number generator. In these tests, the knapsack capacity is calculated using the formula [11, 12]: \( W = \frac{3}{4}\sum\nolimits_{i = 1}^{N} {w_{i} } \), where \( w_{i} \) is the random weight of item i and N is the number of items. Different values of N were used, varying from 30 to 75. These test cases are presented at the end of this chapter.

Table 5.1 The dimension and parameters of ten test problems
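A random instance generator following this capacity formula could look as below. This is a sketch; the weight and profit ranges (1–50 and 1–100) are assumptions inferred from the listed cases \( f_{11} - f_{20} \), and the exact generator used by the authors is not specified in the chapter.

```python
import random

def random_instance(N, seed=None):
    """Random 0-1 KP instance with capacity W = (3/4) * sum(w) [11, 12]."""
    rng = random.Random(seed)
    w = [rng.randint(1, 50) for _ in range(N)]    # assumed weight range
    v = [rng.randint(1, 100) for _ in range(N)]   # assumed profit range
    W = (3 * sum(w)) // 4                         # capacity per the formula above
    return w, v, W

w, v, W = random_instance(30)   # e.g. an f11-sized instance (N = 30)
```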

Recently, the instances \( f_{1} - f_{10} \) were solved using Harmony Search (HS) [10, 13], Improved Harmony Search (IHS) [10, 14], Novel Global Harmony Search (NGHS) [10–12], the Quantum Inspired Cuckoo Search Algorithm (QICSA) [12], and the Quantum Inspired Harmony Search Algorithm (QIHSA) [11]. HS is based on natural musical performance processes and has been applied to a variety of engineering problems; however, it exhibits a poor convergence rate [10]. IHS employs a parameter-updating method for generating new solution vectors that enhances the accuracy and convergence rate of the HS algorithm. The convergence rate is further improved in NGHS, which is inspired by swarm intelligence and employs a dynamic updating strategy and a probabilistic mutation approach; however, its performance degenerates significantly when applied to constrained problems. All these algorithms lack a method to satisfy constraints and hence can produce infeasible solutions when solving constrained optimization problems. Zou et al. [10] used a penalty function method along with NGHS in order to handle the weight constraint in the 0–1 KP. QICSA integrates quantum computing principles, such as the qubit representation, the measure operation, and quantum mutation, into the Cuckoo Search algorithm. It differs from other evolutionary algorithms in that it offers a large exploration of the search space through intensification and diversification [12]. QIHSA combines the features of the HS algorithm and quantum computing. The probabilistic nature of the quantum measure offers good diversity to the harmony search algorithm, while the interference operation helps to intensify the search around the best solutions [11]. While hybridization between quantum inspired computing and nature inspired algorithms significantly improves performance over the original nature inspired algorithms, this performance depends largely on the initial solutions, which are selected randomly. Also, when dealing with constrained optimization problems these methods require the use of a repair operator.

The CI approach handles constraints using the probability distribution \( p_{2}^{c} \) (refer to Fig. 5.1), which drives the candidates to follow behaviors/solutions that satisfy the constraint, preferring those whose constraint values lie closer to the boundary. Moreover, a well-established feasibility-based rule [15–17] was also incorporated, which assists candidates in selecting the variations with better objective values and constraint satisfaction (refer to Sect. 5.1). A summary of the CI results, including the best, mean, and worst solutions with the associated average CPU time, average number of function evaluations, and standard deviation, is listed in Table 5.2, along with the CI parameters, i.e. the number of candidates C and the number of variations t. As presented in Table 5.3, the solution was comparable for all problems, and in most of the cases the optimum solution was obtained. In addition, according to Table 5.2, the solutions were obtained at reasonable computational cost (time and function evaluations). The results were also verified against the Branch and Bound method; according to Tables 5.3 and 5.4 and Fig. 5.5, the performance of CI and Branch and Bound is quite comparable. The CI saturation/convergence plot for one of the problems, \( f_{10} \,(N = 20) \), is presented in Fig. 5.6, which illustrates the self-adaptive learning behavior of every candidate in the cohort. Initially, the distinct behavior of every individual candidate can be easily distinguished; as every candidate adopts the qualities of other candidates to improve its own solution, the cohort saturates to a certain improved solution. It is noted that the standard deviation was quite narrow for smaller problems but increased with the problem size. The computational cost, i.e. time and function evaluations, also increased with the problem size. However, in a few runs of CI the candidates converged to suboptimal solutions. Similar to the perturbation approach implemented by Tavares et al. [17], in order to make the candidates jump out of possible local minima, every candidate \( c\,(c = 1, \ldots ,C) \) randomly selects a candidate to follow without considering its effect on the solution. This instantaneously makes the solution worse; however, it was found to be helpful in pulling the candidates' solutions out of local minima and reaching an improved solution. This approach is much simpler than the perturbation approach discussed by Tavares et al. [17], where several parameters had to be tuned based on preliminary trials. A sketch of this random-following step is given below.
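The following fragment sketches the random-following perturbation, under the assumption that it replaces the roulette wheel of Step 2 whenever stagnation is detected; the `stagnated` flag and its detection are assumptions, as the chapter does not specify the trigger.

```python
def select_to_follow(p, stagnated):
    """Normally a roulette-wheel choice; on stagnation, follow a uniformly
    random candidate regardless of its quality (perturbation step)."""
    if stagnated:
        return random.randrange(len(p))   # ignore solution quality entirely
    return roulette_select(p)             # roulette_select from the earlier sketch
```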

Table 5.2 Summary of solutions of KPs solved using CI
Table 5.3 Comparison of results obtained using CI with other established methods
Table 5.4 Comparison of results obtained using CI with B&B
Fig. 5.5
figure 5

Effect of problem size \( (N) \) on computational time

Fig. 5.6
figure 6

Learning attempts versus the objective function \( f(v)^{\text{ * }} \) for each candidate

The effect of the CI parameters, viz. the number of candidates C and the number of variations in behavior t, was analyzed using the final values of profit \( f({\mathbf{v}})^{\text{ * }} \), the total number of function evaluations, and the computational time for each problem. For every pair of values of C and t, every KP test case was solved 20 times. For all the problems, the computational cost, i.e. the number of function evaluations and the computational time, increased with an increasing number of candidates C as well as number of variations t. This was because, as C and t increase, the number of behavior choices, i.e. the number of function evaluations, also increases. The average values of profit \( f({\mathbf{v}})^{\text{ * }} \), the total number of function evaluations, and the computational time for different values of C and t are shown for problem \( f_{1} \) in Figs. 5.7, 5.8 and 5.9, respectively. Another important observation was that as the problem size N increased, the computational cost also increased (refer to Table 5.2); problems with a larger number of objects therefore took a longer time and more function evaluations to converge.

Furthermore, with a small number of candidates C the solution, i.e. the total profit \( f({\mathbf{v}})^{\text{ * }} \), was suboptimal. As C was increased, the solution quality improved up to a certain point, beyond which there was no significant change (refer to Fig. 5.7). This was because for small values of C the behavior choices were few; as C increased, the behavior choices increased and hence the chances of selecting a better solution increased. In most of the problems there was no significant change in the solution beyond \( C = 5 \). At the same time, for some problems such as \( f_{1} \) with small problem size N, the optimum solution was reached at \( C = 3 \) and no significant change was observed upon further increase in C. Thus, the effect of C on the solution depended on the problem size N. In addition, it was observed that even with large values of C the solution was suboptimal if the number of variations t was small. As t increased, the solution quality improved. For most of the problems, no significant change was observed in the solution beyond \( t = 10 \); for some problems such as \( f_{1} \) with small N, the optimum solution was obtained at \( t = 4 \) and no significant change was seen upon further increasing t. Therefore, the effect of t on the solution also depended on the problem size N. Accordingly, for all problems the number of candidates C and the number of variations t were chosen to be 5 and 10, respectively.

Fig. 5.7
figure 7

Effect of number of candidates \( (C) \) on the profit \( f({\mathbf{v}})^{\text{ * }} \) for different values of variations \( (t) \)

Fig. 5.8
figure 8

Effect of number of candidates \( (C) \) on the function evaluations (FE) for different values of variations \( (t) \)

Fig. 5.9
figure 9

Effect of number of candidates \( (C) \) on the time for different values of variations \( (t) \)

5.3 Conclusions and Future Directions

For the first time, the emerging CI algorithm has been applied to solving a combinatorial NP-hard problem, the 0–1 KP, with the number of objects varying from 4 to 75. In all the problems, the implemented CI methodology produced satisfactory results at reasonable computational cost. Furthermore, the comparison of CI with other contemporary methods shows that the CI solutions are comparable, and for some problems even better than those of the other methods. The CI methodology was therefore validated, and the self-supervising nature of the cohort candidates was successfully demonstrated, along with their ability to learn and improve qualities that further improved their individual behavior. In addition, in order to avoid saturation of the cohort at a suboptimal solution and to make the cohort saturate at the optimum solution, a generic approach of accepting random behavior was incorporated. The effect of the important parameters, i.e. the number of candidates C and the associated number of variations t, on the computational time, function evaluations, and the solution was analyzed. This could be a useful reference when dealing with future problems using CI.

It was observed that the computational time and function evaluations of the CI algorithm increased considerably with the problem size; in the future, a self-adaptive scheme could be developed for parameters such as the number of candidates C and the number of variations t. This may make the CI algorithm computationally more efficient and improve its rate of convergence. In addition, the authors intend to further modify the CI algorithm to solve complex NP-hard bilevel programming problems from the supply chain optimization domain [18]. It is also quite important to tune the learning rate of the CI candidates so that CI can be applied to dynamic control systems [19]. Work on applying CI in the clustering [20–22] and classification domains, in association with cross-border transportation systems and goods consolidation, is currently underway.

5.4 Test Cases

\( f_{11} \). N = 30, W = 577

$$ \begin{aligned} {\text{w}} = \, & \{ 46, 17, 35, 1, 26, 17, 17, 48, 38, 17, 32, 21, 29, 48, 31, \\ & 8, 42, 37, 6, 9, 15, 22, 27, 14, 42, 40, 14, 31, 6, 34\} \\ {\text{v}} = \, & \{ 57, 64, 50, 6, 52, 6, 85, 60, 70, 65, 63, 96, 18, 48, 85, \\ & 50, 77, 18, 70, 92, 17, 43, 5, 23, 67, 88, 35, 3, 91, 48\} \\ \end{aligned} $$

\( f_{12} \). N = 35, W = 655

$$ \begin{aligned} {\text{w}} = \, & \{ 7, 4, 36, 47, 6, 33, 8, 35, 32, 3, 40, 50, 22, 18, 3, 12, 30, 31, \\ & 13, 33, 4, 48, 5, 17, 33, 26, 27, 19, 39, 15, 33, 47, 17, 41, 40\} \\ {\text{v}} = \, & \{ 35, 67, 30, 69, 40, 40, 21, 73, 82, 93, 52, 20, 61, 20, 42, 86, 43, \\ & 93, 38, 70, 59, 11, 42, 93, 6, 39, 25, 23, 36, 93, 51, 81, 36, 46, 96\} \\ \end{aligned} $$

\( f_{13} \). N = 40, W = 819

$$ \begin{aligned} {\text{w}} = \, & \{ 28, 23, 35, 38, 20, 29, 11, 48, 26, 14, 12, 48, 35, 36, 33, 39, 30, 26, \\ & 44, 20, 13, 15, 46, 36, 43, 19, 32, 2, 47, 24, 26, 39, 17, 32, 17, 16, 33, 22, 6, 12\} \\ {\text{v}} = \, & \{ 13, 16, 42, 69, 66, 68, 1, 13, 77, 85, 75, 95, 92, 23, 51, 79, 53, 62, 56, 74, \\ & 7, 50, 23, 34, 56, 75, 42, 51, 13, 22, 30, 45, 25, 27, 90, 59, 94, 62, 26, 11\} \\ \end{aligned} $$

\( f_{14} \). N = 45, W = 907

$$ \begin{aligned} {\text{w}} = \, & \{ 18, 12, 38, 12, 23, 13, 18, 46, 1, 7, 20, 43, 11, 47, 49, 19, 50, 7, 39, 29, 32, 25, 12, \\ & 8, 32, 41, 34, 24, 48, 30, 12, 35, 17, 38, 50, 14, 47, 35, 5, 13, 47, 24, 45, 39, 1\} \\ {\text{v}} = \, & \{ 98, 70, 66, 33, 2, 58, 4, 27, 20, 45, 77, 63, 32, 30, 8, 18, 73, 9, 92, 43, 8, 58, 84, \\ & 35, 78, 71, 60, 38, 40, 43, 43, 22, 50, 4, 57, 5, 88, 87, 34, 98, 96, 99, 16, 1, 25\} \\ \end{aligned} $$

\( f_{15} \). N = 50, W = 882

$$ \begin{aligned} {\text{w}} = \, & \{ 15, 40, 22, 28, 50, 35, 49, 5, 45, 3, 7, 32, 19, 16, 40, 16, 31, 24, 15, 42, \\ & 29, 4, 14, 9, 29, 11, 25, 37, 48, 39, 5, 47, 49, 31, 48, 17, \\ & 46, 1, 25, 8, 16, 9, 30, 33, 18, 3, 3, 3, 4, 1\} \\ {\text{v}} = \, & \{ 78, 69, 87, 59, 63, 12, 22, 4, 45, 33, 29, 50, 19, 94, 95, 60, 1, 91, 69, 8, \\ & 100, 84, 100, 32, 81, 47, 59, 48, 56, 18, 59, 16, 45, 54, 47, 98, 75, 20, \\ & 4, 19, 58, 63, 37, 64, 90, 26, 29, 13, 53, 83\} \\ \end{aligned} $$

\( f_{16} \). N = 55, W = 1050

$$ \begin{aligned} {\text{w}} = \, & \{ 27, 15, 46, 5, 40, 9, 36, 12, 11, 11, 49, 20, 32, 3, 12, 44, 24, 1, 24, 42, \\ & 44, 16, 12, 42, 22, 26, 10, 8, 46, 50, 20, 42, 48, 45, 43, 35, 9, 12, \\ & 22, 2, 14, 50, 16, 29, 31, 46, 20, 35, 11, 4, 32, 35, 15, 29, 16\} \\ {\text{v}} = \, & \{ 98, 74, 76, 4, 12, 27, 90, 98, 100, 35, 30, 19, 75, 72, 19, 44, 5, 66, \\ & 79, 87, 79, 44, 35, 6, 82, 11, 1, 28, 95, 68, 39, 86, 68, 61, 44, 97, 83, 2, 15, \\ & 49, 59, 30, 44, 40, 14, 96, 37, 84, 5, 43, 8, 32, 95, 86, 18\} \\ \end{aligned} $$

\( f_{17} \). N = 60, W = 1006

$$ \begin{aligned} {\text{w}} = \, & \{ 7, 13, 47, 33, 38, 41, 3, 21, 37, 7, 32, 13, 42, 42, 23, 20, 49, 1, 20, 25, 31, 4, 8, \\ & 33, 11, 6, 3, 9, 26, 44, 39, 7, 4, 34, 25, 25, 16, 17, 46, 23, 38, 10, 5, 11, \\ & 28, 34, 47, 3, 9, 22, 17, 5, 41, 20, 33, 29, 1, 33, 16, 14\} \\ {\text{v}} = \, & \{ 81, 37, 70, 64, 97, 21, 60, 9, 55, 85, 5, 33, 71, 87, 51, 100, 43, 27, 48, 17, 16, \\ & 27, 76, 61, 97, 78, 58, 46, 29, 76, 10, 11, 74, 36, 59, 30, 72, 37, 72, 100, 9, 47, \\ & 10, 73, 92, 9, 52, 56, 69, 30, 61, 20, 66, 70, 46, 16, 43, 60, 33, 84\} \\ \end{aligned} $$

\( f_{18} \). N = 65, W = 1319

$$ \begin{aligned} {\text{w}} = \, & \{ 47, 27, 24, 27, 17, 17, 50, 24, 38, 34, 40, 14, 15, 36, 10, 42, 9, 48, 37, 7, 43, 47, 29, \\ & 20, 23, 36, 14, 2, 48, 50, 39, 50, 25, 7, 24, 38, 34, 44, 38, 31, 14, 17, 42, 20, \\ & 5, 44, 22, 9, 1, 33, 19, 19, 23, 26, 16, 24, 1, 9, 16, 38, 30, 36, 41, 43, 6\} \\ {\text{v}} = \, & \{ 47, 63, 81, 57, 3, 80, 28, 83, 69, 61, 39, 7, 100, 67, 23, 10, 25, 91, 22, 48, 91, 20, \\ & 45, 62, 60, 67, 27, 43, 80, 94, 47, 31, 44, 31, 28, 14, 17, 50, 9, 93, 15, 17, 72, 68, 36, \\ & 10, 1, 38, 79, 45, 10, 81, 66, 46, 54, 53, 63, 65, 20, 81, 20, 42, 24, 28, 1\} \\ \end{aligned} $$

\( f_{19} \). N = 70, W = 1426

$$ \begin{aligned} {\text{w}} = \, & \{ 4, 16, 16, 2, 9, 44, 33, 43, 14, 45, 11, 49, 21, 12, 41, 19, 26, 38, 42, 20, \\ & 5, 14, 40, 47, 29, 47, 30, 50, 39, 10, 26, 33, 44, 31, 50, 7, 15, 24, 7, 12, \\ & 10, 34, 17, 40, 28, 12, 35, 3, 29, 50, 19, 28, 47, 13, 42, 9, 44, 14, 43, 41, \\ & 10, 49, 13, 39, 41, 25, 46, 6, 7, 43\} \\ {\text{v}} = \, & \{ 66, 76, 71, 61, 4, 20, 34, 65, 22, 8, 99, 21, 99, 62, 25, 52, 72, 26, 12, 55, \\ & 22, 32, 98, 31, 95, 42, 2, 32, 16, 100, 46, 55, 27, 89, 11, 8, 3, 43, 93, 53, 88, \\ & 36, 41, 60, 92, 14, 5, 41, 60, 92, 30, 55, 79, 33, 10, 45, 3, 68, 12, 20, 54, 63, \\ & 38, 61, 85, 71, 40, 58, 25, 73, 35\} \\ \end{aligned} $$

\( f_{20} \). N = 75, W = 1433

$$ \begin{aligned} {\text{w}} = \, & \{ 24, 45, 15, 40, 9, 37, 13, 5, 43, 35, 48, 50, 27, 46, 24, 45, 2, 7, 38, 20, \\ & 20, 31, 2, 20, 3, 35, 27, 4, 21, 22, 33, 11, 5, 24, 37, 31, 46, 13, 12, 12, \\ & 41, 36, 44, 36, 34, 22, 29, 50, 48, 17, 8, 21, 28, 2, 44, 45, 25, 11, 37, 35, \\ & 24, 9, 40, 45, 8, 47, 1, 22, 1, 12, 36, 35, 14, 17, 5\} \\ {\text{v}} = \, & \{ 2, 73, 82, 12, 49, 35, 78, 29, 83, 18, 87, 93, 20, 6, 55, 1, 83, 91, 71, 25, 59, \\ & 94, 90, 61, 80, 84, 57, 1, 26, 44, 44, 88, 7, 34, 18, 25, 73, 29, 24, 14, 23, 82, \\ & 38, 67, 94, 43, 61, 97, 37, 67, 32, 89, 30, 30, 91, 50, 21, 3, 18, 31, 97, 79, 68, \\ & 85, 43, 71, 49, 83, 44, 86, 1, 100, 28, 4, 16\} \\ \end{aligned} $$