1 Introduction

In the maximum cardinality bin packing problem (MCBPP), we are given n items with sizes \({t}_{i},\ i \in N =\{ 1,\ldots ,n\}\), and m bins of identical capacity c. The objective is to assign the maximum number of items to the fixed number of bins without violating the capacity constraint. The problem formulation is given by

$$\mathrm{maximize}\quad z =\sum _{i=1}^{n}\sum _{j=1}^{m}{x}_{ ij} (1)$$

subject to

$$\begin{array}{rcl} & \sum _{i=1}^{n}{t}_{i}{x}_{ij} \leq c& \qquad j \in \{ 1,\ldots ,m\} \\ & \sum _{j=1}^{m}{x}_{ij} \leq 1& \qquad i \in \{ 1,\ldots ,n\} \\ & \qquad {x}_{ij} = 0\ \mathrm{or}\ 1 & \qquad i \in \{ 1,\ldots ,n\},\ j \in \{ 1,\ldots ,m\} \\ \end{array}$$

where \({x}_{ij} = 1\) if item i is assigned to bin j and \({x}_{ij} = 0\) otherwise.

The MCBPP is NP-hard (Labbé, Laporte, and Martello 2003). It arises in computing when variable-length records must be assigned to storage: the objective is to maximize the number of records stored in fast memory so as to ensure a minimum access time to the records given a fixed amount of storage space (Labbé, Laporte, and Martello 2003).

The MCBPP has also been applied to the management of real-time multiprocessor systems, where the objective is to maximize the number of tasks with varying durations that are completed before a given deadline (Coffman, Leung, and Ting 1978). It has been used in the design of processors for mainframe computers and in the layout of electronic circuits (Ferreira, Martin, and Weismantel 1996).

A variety of bounds and heuristics have been developed for the MCBPP. Coffman, Leung, and Ting (1978) and Bruno and Downey (1985) provided probabilistic lower bounds. Kellerer (1999) treated the MCBPP as a special case of the multiple knapsack problem in which all items have the same profit and all knapsacks (or bins) have the same capacity, and solved it with a polynomial-time approximation scheme. Labbé, Laporte, and Martello (2003) developed several upper bounds and embedded them in an enumeration algorithm. Peeters and Degraeve (2006) solved the problem with a branch-and-price algorithm.

In this paper, we develop a heuristic algorithm for solving the MCBPP that is based on the concept of weight annealing. In Section 2, we describe weight annealing. In Section 3, we give the upper bounds and lower bounds that are used in our algorithm. In Section 4, we present our weight annealing algorithm. In Section 5, we apply our algorithm to 4,500 instances and compare our results to those produced by an enumeration algorithm and a branch-and-price algorithm. In Section 6, we summarize our contributions.

2 Weight Annealing

Ninio and Schneider (2005) proposed a weight annealing method that allowed a greedy heuristic to escape from a poor local optimum by changing the problem landscape and making use of the history of each optimization run. The authors changed the landscape by assigning weights to different parts of the solution space. Ninio and Schneider provided the following outline of their weight annealing algorithm.

Step 1. Start with an initial configuration from a greedy heuristic solution using the original problem landscape.

Step 2. Determine a new set of weights based on the previous optimization run and insight into the problem.

Step 3. Perform a new run of the greedy heuristic using the new weights.

Step 4. Return to Step 2 until a stopping criterion is met.

In their implementation, Ninio and Schneider required nonnegative values for all of the weights so that their algorithm could look for good solutions. They used a cooling schedule with temperature T to change the values of the weights. When the value of T was large, there were significant changes to the weights. As T decreased, all weights approached a value of one. Ninio and Schneider applied their weight annealing algorithm to five benchmark traveling salesman problems with 127 to 1,379 nodes and generated results that were competitive with those of simulated annealing.

Weight annealing shares features with metaheuristics such as simulated annealing (e.g., a cooling schedule) and deterministic annealing (e.g., deteriorating moves); the similarities among these metaheuristics were discussed by Ninio and Schneider (2005). In contrast to simulated annealing and deterministic annealing, weight annealing not only considers the value of the objective function at each stage of an optimization run, but also makes use of information on how well every part of the search space is being solved. By creating distortions in different parts of the search space (the size of the distortion is controlled by weight assignments based on insights gained from one iteration to the next), weight annealing seeks to expand and speed up the neighborhood search and to focus computational effort on the poorly solved regions of the search space.

3 Upper and Lower Bounds

3.1 Upper Bounds

Our algorithm uses upper bounds on the optimal value \({z}^{{_\ast}}\) of the objective function in (1) that were developed by Labbé, Laporte, and Martello (2003). The optimal objective function value in (1) gives the maximum number of items that can be packed into the bins without violating the bin capacities. Without loss of generality, we assume that the problem data are integers and \(1 \leq {t}_{1} \leq {t}_{2} \leq \ldots \leq {t}_{n} \leq c\) (we refer to this as the ordered list throughout the rest of this paper).

The first upper bound on \({z}^{{_\ast}}\) developed by Labbé, Laporte, and Martello (2003) is given by

$$\bar{{U}}_{0} = {\max }_{1\leq k\leq n}\left \{k\ :\ \sum _{i=1}^{k}{t}_{i} \leq mc\right \}. (2)$$

Since the optimal solution is obtained by selecting the first \({z}^{{_\ast}}\) smallest items, all items with sizes \({t}_{i}\) for which \(i> \bar{{U}}_{0}\) can be disregarded.

Labbé, Laporte, and Martello (2003) derived the second upper bound \(\bar{{U}}_{1}\) as follows. Let Q(j) be an upper bound on the number of items that can be assigned to j bins. Then

$$Q(j) = \max \left \{k\ :\ j \leq k \leq n,\ \sum _{i=1}^{k}{t}_{i} \leq jc\right \}\quad \mathrm{for}\ j = 1,\ldots ,m. (3)$$

An upper bound on \({z}^{{_\ast}}\) is then given by

$${U}_{1}(j) = Q(j) + \lfloor Q(j)/j\rfloor (m - j) (4)$$

since \(\lfloor Q(j)/j\rfloor\) is an upper bound on the number of items that can be packed into each of the remaining \(m - j\) bins. The upper bound is obtained by taking the minimum over all j, that is,

$$\bar{{U}}_{1} = {\min }_{j=1,\ldots ,m}\ {U}_{1}(j). (5)$$

Note that \(\bar{{U}}_{1}\) dominates \(\bar{{U}}_{0}\).

The third upper bound \(\bar{{U}}_{2}\) from Labbé, Laporte, and Martello (2003) is derived in the following way. Consider item i on the ordered list. Every item with index at least i has size at least \({t}_{i}\), so \(\lfloor c/{t}_{i}\rfloor\) is an upper bound on the number of such items that can be packed into one bin, and \(m\lfloor c/{t}_{i}\rfloor\) is an upper bound on the number of such items that can be assigned to the m bins. Since at most the \(i - 1\) smaller items can also be packed, a valid upper bound is given by

$${U}_{2}(i) = (i - 1) + m\lfloor c/{t}_{i}\rfloor . (6)$$

By taking the minimum over all i, we obtain a valid upper bound

$$\bar{{U}}_{2} = {\min }_{i=1,\ldots ,n}\ {U}_{2}(i). (7)$$

It follows that the best a priori upper bound is given by \({U}^{{_\ast}} = \min \{\bar{{U}}_{0},\ \bar{{U}}_{1},\ \bar{{U}}_{2}\}\) (which is similar to the combined bound given in Labbé, Laporte, and Martello (2003)). Since the optimal solution is obtained by selecting the first \({z}^{{_\ast}}\) smallest items, all items with sizes \({t}_{i}\) for which \(i> {U}^{{_\ast}}\) can be disregarded. We point out that the time complexities of computing these bounds are given in the paper by Labbé, Laporte, and Martello (2003).
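To make these computations concrete, the following sketch (our own illustration, not code from Labbé, Laporte, and Martello) evaluates \(\bar{{U}}_{0}\), \(\bar{{U}}_{1}\), and \(\bar{{U}}_{2}\) directly from the definitions above; it assumes the item sizes are stored in nondecreasing order.

```cpp
#include <algorithm>
#include <vector>

// t holds the ordered list (t[0] <= ... <= t[n-1]); there are m bins of capacity c.
// Straightforward O(nm) illustration of bounds (2), (3)-(5), and (6)-(7).

// Largest k such that the k smallest items fit into a total capacity of cap.
static int largestPrefix(const std::vector<long long>& t, long long cap) {
    long long sum = 0;
    int k = 0;
    while (k < (int)t.size() && sum + t[k] <= cap) { sum += t[k]; ++k; }
    return k;
}

int upperBoundU0(const std::vector<long long>& t, int m, long long c) {
    return largestPrefix(t, (long long)m * c);                  // bound (2)
}

int upperBoundU1(const std::vector<long long>& t, int m, long long c) {
    int best = (int)t.size();
    for (int j = 1; j <= m; ++j) {
        int Q = largestPrefix(t, (long long)j * c);             // Q(j) in (3)
        if (Q < j) continue;                                    // definition requires k >= j
        best = std::min(best, Q + (Q / j) * (m - j));           // U1(j) in (4)
    }
    return best;                                                // bound (5)
}

int upperBoundU2(const std::vector<long long>& t, int m, long long c) {
    long long best = (long long)t.size();
    for (int i = 0; i < (int)t.size(); ++i)                     // 0-based i is item i+1 of the paper
        best = std::min(best, (long long)i + (long long)m * (c / t[i]));   // U2(i) in (6)
    return (int)best;                                           // bound (7)
}
```

The overall a priori bound is then \({U}^{{_\ast}} = \min \{\bar{{U}}_{0},\ \bar{{U}}_{1},\ \bar{{U}}_{2}\}\); a more careful implementation would compute all of the Q(j) values in a single pass over the ordered list.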

3.2 Lower Bounds

Our algorithm uses lower bounds developed by Martello and Toth (1990). Let I denote a one-dimensional bin packing problem instance. The lower bound \({L}_{2}\) on the optimal number of bins z(I) can be computed in the following way.

Given any integer \(\alpha ,\ 0 \leq \alpha \leq c/2\), let

$$\begin{array}{rcl}{ J}_{1}& =& \{j \in N\ :\ {t}_{j}> c - \alpha \}, \\ {J}_{2}& =& \{j \in N\ :\ c - \alpha \geq {t}_{j}> c/2\}, \\ {J}_{3}& =& \{j \in N\ :\ c/2 \geq {t}_{j} \geq \alpha \},\qquad N =\{ 1,\ldots ,n\}, \\ \end{array}$$

then

$$L(\alpha ) = \vert {J}_{1}\vert + \vert {J}_{2}\vert + \max \left (0,\left \lceil \frac{\sum _{j\in {J}_{3}}{t}_{j} -\left (\vert {J}_{2}\vert c -\sum _{j\in {J}_{2}}{t}_{j}\right )} {c} \right \rceil \right ) (8)$$

is a lower bound on z(I).

\({L}_{2}\) is calculated by taking the maximum over α, that is,

$${L}_{2} = \max \{L(\alpha )\ :\ 0 \leq \alpha \leq c/2,\ \alpha \ \mathrm{integer}\}. (9)$$
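A direct implementation of (8) and (9) simply enumerates every integer value of α; the sketch below (our own illustration, not the authors' code) does exactly that and runs in O(nc) time, which is more than adequate for the capacities considered in this paper.

```cpp
#include <algorithm>
#include <vector>

// Martello-Toth lower bound L2 for a bin packing instance with item sizes t
// and capacity c; direct evaluation of L(alpha) in (8) for every alpha in (9).
int lowerBoundL2(const std::vector<long long>& t, long long c) {
    long long best = 0;
    for (long long alpha = 0; alpha <= c / 2; ++alpha) {
        long long n1 = 0, n2 = 0;        // |J1| and |J2|
        long long sum2 = 0, sum3 = 0;    // total size in J2 and in J3
        for (long long tj : t) {
            if (tj > c - alpha)       ++n1;
            else if (tj > c / 2)      { ++n2; sum2 += tj; }
            else if (tj >= alpha)     sum3 += tj;
        }
        long long slack = n2 * c - sum2;            // room left in the J2 bins
        long long extra = (sum3 > slack) ? (sum3 - slack + c - 1) / c : 0;
        best = std::max(best, n1 + n2 + extra);     // L(alpha)
    }
    return (int)best;
}
```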

In our algorithm, we use the Martello-Toth reduction procedure (denoted by MTRP and given in Martello and Toth 1990) to determine the lower bound \({L}_{3}\), which dominates \({L}_{2}\).

Let I be the original instance, \({z}_{1}^{r}\) be the number of bins reduced after the first application of MTRP to I, and \(I({z}_{1}^{r})\) be the corresponding residual instance. If \(I({z}_{1}^{r})\) is relaxed by removing its smallest item, then we can obtain a lower bound by applying \({L}_{2}\) to \(I({z}_{1}^{r})\), and this yields \({L}_{1}^{{'}} = {z}_{1}^{r} + {L}_{2}(I({z}_{1}^{r})) \geq {L}_{2}(I)\). This process iterates until the residual instance is empty. For iteration k, we have a lower bound \({L}_{k}^{{'}} = {z}_{1}^{r} + {z}_{2}^{r} +\ldots + {z}_{k}^{r} + {L}_{2}(I({z}_{k}^{r}))\). Then

$${L}_{3} = \max \{{L}_{1}^{{'}},{L}_{2}^{{'}},\ldots ,{L}_{{k}_{max}}^{{'}}\} (10)$$

is a valid lower bound for I, where \({k}_{max}\) is the number of iterations needed for the residual instance to become empty.

4 Weight Annealing Algorithm for the MCBPP

In this section, we present our weight annealing algorithm for the maximum cardinality bin packing problem, which we denote by WAMC. Table 1 gives WAMC in pseudocode. We point out that a problem has been solved to optimality once we have found a feasible bin packing for the current instance defined by the theoretical upper bound \({U}^{{_\ast}}\) at Step 4 of our algorithm.

Table 1 Weight annealing algorithm (WAMC) for the MCBPP

The number of items (n), the ordered list of item sizes, the bin capacity (c), and the number of bins (m) are inputs. For the ordered list, the data are integers and \(1 \leq {t}_{1} \leq {t}_{2} \leq \ldots \leq {t}_{n} \leq c\), where \({t}_{i}\) is the size of item i.

4.1 Computing the Bounds

We begin by computing the three upper bounds and then setting \({U}^{{_\ast}} = \min \{\bar{{U}}_{0},\ \bar{{U}}_{1},\ \bar{{U}}_{2}\}\). Since the optimal solution of any instance is obtained by selecting the first \({z}^{{_\ast}}\) smallest items, we update the ordered list by removing any item i with size \({t}_{i}\) for which \(i> {U}^{{_\ast}}\).

To improve the upper bound, we compute \({L}_{3}\) by applying MTRP. If \({L}_{3}\) is greater than m, it is not feasible to pack all of the items on the ordered list into m bins, so we can reduce \({U}^{{_\ast}}\) by 1. We update the ordered list by removing any item i with size \({t}_{i}\) for which \(i> {U}^{{_\ast}}\). We iterate until \({L}_{3} = m\).
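This bound-tightening loop can be sketched as follows (a structural illustration only; computeL3 is a placeholder for the MTRP-based computation of \({L}_{3}\) from Section 3.2, which we do not reproduce here).

```cpp
#include <vector>

// Hypothetical helper: the L3 lower bound (Section 3.2) on the number of bins
// needed to pack all items in 'items' into bins of capacity c.
int computeL3(const std::vector<long long>& items, long long c);

// Tighten U* by repeatedly dropping the largest remaining item while the
// current ordered list provably cannot be packed into the m available bins.
int tightenUpperBound(std::vector<long long>& orderedList, int m, long long c,
                      int Ustar) {
    if ((int)orderedList.size() > Ustar)
        orderedList.resize(Ustar);                // keep only the Ustar smallest items
    while (!orderedList.empty() && computeL3(orderedList, c) > m) {
        --Ustar;                                  // packing the current list is infeasible
        orderedList.pop_back();                   // drop the largest remaining item
    }
    return Ustar;
}
```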

4.2 Weight Annealing for the Bin Packing Problem

Next, we solve the one-dimensional bin packing problem with the current ordered list. We start with an initial solution generated by the first-fit decreasing procedure (FFD), which we have modified in the following way. We select an item for packing with probability 0.5. In other words, we start with the first item on the ordered list and, based on a coin toss, we pack it into a bin if it is selected or leave it on the ordered list if it is not selected. We continue down the ordered list until an item is selected for packing. We then select the second item to pack in the same manner, and so on, until we reach the bottom of the list. For each bin i in the FFD solution, we compute the bin load \({l}_{i}\), which is the sum of the sizes of the items in bin i (that is, \({l}_{i} =\sum _{j=1}^{{q}_{i}}{t}_{ij}\), where \({t}_{ij}\) is the size of item j in bin i and \({q}_{i}\) is the number of items in bin i), and the residual capacity \({r}_{i}\), which is given by \({r}_{i} = (c - {l}_{i})/c\).
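A minimal sketch of this randomized packing pass is shown below (our own illustration of the idea, not the exact code used in WAMC); repeating passes over the list until every item is packed is our assumption, since the handling of items left unselected at the bottom of the list is implicit in the description above.

```cpp
#include <random>
#include <vector>

// Randomized first-fit packing used to build an initial solution.
// Each item still on the list is packed with probability 0.5 on each pass;
// passes are repeated until every item has been packed (our assumption).
std::vector<std::vector<long long>> randomizedFirstFit(
        std::vector<long long> items, long long c, std::mt19937& rng) {
    std::bernoulli_distribution coin(0.5);
    std::vector<std::vector<long long>> bins;   // each bin is a list of item sizes
    std::vector<long long> loads;               // bin loads l_i
    while (!items.empty()) {
        std::vector<long long> leftover;
        for (long long size : items) {
            if (!coin(rng)) { leftover.push_back(size); continue; }  // skip on this pass
            std::size_t b = 0;
            while (b < bins.size() && loads[b] + size > c) ++b;      // first fit
            if (b == bins.size()) { bins.push_back({}); loads.push_back(0); }
            bins[b].push_back(size);
            loads[b] += size;
        }
        items.swap(leftover);
    }
    return bins;
}
```

The bin loads \({l}_{i}\) and residual capacities \({r}_{i} = (c - {l}_{i})/c\) can then be read directly from the loads vector.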

4.2.1 Objective Function

In conducting our neighborhood search, we use the objective function given by Fleszar and Hindi (2002):

$$\mathrm{maximize}\ f =\sum _{i=1}^{p}{({l}_{i})}^{2} (11)$$

where p is the number of bins in the current solution. Maximizing the sum of the squared bin loads encourages solutions in which some bins are filled as completely as possible and others are emptied, thereby reducing the number of bins that are used.
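To see why (our own numerical illustration), consider a total load of 10 split over two bins of capacity 10:

$$f(10,0) = {10}^{2} + {0}^{2} = 100> 50 = {5}^{2} + {5}^{2} = f(5,5),$$

so a solution that completely fills one bin and empties the other scores higher than one that spreads the same load evenly.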

4.2.2 Weight Assignment

A key feature of our procedure is the distortion of item sizes that allows for both uphill and downhill moves. The changes in the apparent sizes of the items are achieved by assigning different weights to the bins and their items according to how well the bins are packed.

For each bin i, we assign weight \({w}_{i}^{T}\) according to

$${w}_{i}^{T} = {(1 + K{r}_{i})}^{T} (12)$$

where K is a constant and T is a temperature parameter. We apply the weight to each item in the bin. The scaling parameter K controls the amount of size distortion for each item, and T controls the amount by which a single weight can be varied. We start with a high temperature (T = 1), which allows more downhill moves. The temperature is reduced at the end of every iteration (T is multiplied by 0.95), so that the amount of item distortion decreases and the problem space looks more like the original problem space.
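The size distortion itself is a one-line computation; the sketch below (our own illustration, with our own function name) shows how the transformed size of every item in bin i would be obtained from the bin's residual capacity, with K and T as described above.

```cpp
#include <cmath>
#include <vector>

// Transformed ("apparent") item sizes for one bin: every item in bin i is
// scaled by the weight w_i^T = (1 + K * r_i)^T of equation (12), where
// r_i = (c - l_i) / c is the bin's relative residual capacity.
std::vector<double> transformedSizes(const std::vector<long long>& binItems,
                                     long long load, long long c,
                                     double K, double T) {
    double r = (double)(c - load) / (double)c;        // residual capacity r_i
    double w = std::pow(1.0 + K * r, T);              // weight w_i^T
    std::vector<double> scaled;
    scaled.reserve(binItems.size());
    for (long long size : binItems)
        scaled.push_back(w * (double)size);            // distorted item size
    return scaled;
}
```

With T = 1, a poorly packed bin (large \({r}_{i}\)) inflates its items noticeably; as T is multiplied by 0.95 at every iteration, each weight drifts toward one and the transformed sizes approach the original sizes.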

At a given temperature T, the size distortion for an item is proportional to the residual capacity of its bin. At a local maximum, not-so-well packed bins will have large residual capacities. We try to escape from a poor local maximum with downhill moves. To enable downhill moves, our weighting function increases the sizes of items in poorly packed bins.

Since the objective function tries to maximize the number of fully filled bins, the size transformation increases the chances of a swap between one of the enlarged items in a poorly packed bin and a smaller item from another bin. Thus, we may make an uphill move in the transformed space that is a downhill move in the original space. We make a swap as long as it is feasible in the original space.

4.2.3 Swap Schemes

We start the swapping process by comparing the items in the first bin with the items in the second bin, and so on, sequentially down to the last bin in the initial solution. Neighbors of a current solution can be obtained by swapping (exchanging) items between all possible pairs of bins. We use four different swapping schemes: Swap (1,0), Swap (1,1), Swap (1,2), and Swap (2,2). Fleszar and Hindi (2002) proposed the first two schemes.

In Swap (1,0), one item is moved from bin α to bin β. The change in the objective function value (\(\Delta {f}_{(1,0)}\)) that results from moving one item i with size \({t}_{\alpha i}\) from bin α to bin β is given by

$$\Delta {f}_{(1,0)} = {({l}_{\alpha } - {t}_{\alpha i})}^{2} + {({l}_{ \beta } + {t}_{\alpha i})}^{2} -{ {l}_{ \alpha }}^{2} -{ {l}_{ \beta }}^{2}. (13)$$

In Swap (1,1), we swap item i from bin α with item j from bin β. The change in the objective function value that results from swapping item i with size \({t}_{\alpha i}\) from bin α with item j with size \({t}_{\beta j}\) from bin β is given by

$$\Delta {f}_{(1,1)} = {({l}_{\alpha } - {t}_{\alpha i} + {t}_{\beta j})}^{2} + {({l}_{ \beta } - {t}_{\beta j} + {t}_{\alpha i})}^{2} -{ {l}_{ \alpha }}^{2} -{ {l}_{ \beta }}^{2}. (14)$$

In Swap (1,2), we swap item i from bin α with items j and k from bin β. The change in the objective function value that results from swapping item i with size \({t}_{\alpha i}\) from bin α with item j with size \({t}_{\beta j}\) and item k with size \({t}_{\beta k}\) from bin β is given by

$$\Delta {f}_{(1,2)} = {({l}_{\alpha } - {t}_{\alpha i} + {t}_{\beta j} + {t}_{\beta k})}^{2} + {({l}_{ \beta } - {t}_{\beta j} - {t}_{\beta k} + {t}_{\alpha i})}^{2} -{ {l}_{ \alpha }}^{2} -{ {l}_{ \beta }}^{2}. (15)$$

In Swap (2,2), we swap items i and j from bin α with items k and l from bin β. The change in the objective function value that results from swapping item i with size \({t}_{\alpha i}\) and item j with size \({t}_{\alpha j}\) from bin α with item k with size \({t}_{\beta k}\) and item l with size \({t}_{\beta l}\) from bin β is given by

$$\Delta {f}_{(2,2)} = {({l}_{\alpha }-{t}_{\alpha i}-{t}_{\alpha j}+{t}_{\beta k}+{t}_{\beta l})}^{2} +{({l}_{ \beta }-{t}_{\beta k}-{t}_{\beta l}+{t}_{\alpha i}+{t}_{\alpha j})}^{2} -{{l}_{ \alpha }}^{2} -{{l}_{ \beta }}^{2}. (16)$$

For a current pair of bins (α,β), the swapping of items by Swap (1,0) is carried out as follows. The algorithm evaluates whether the first item (item i) in bin α can be moved to bin β without violating the capacity constraint of bin β in the original space. In other words, does bin β have enough original residual capacity to accommodate the original size of item i? If the answer is yes (the move is feasible), the change in objective function value of the move in the transformed space is evaluated. If \(\Delta {f}_{(1,0)} \geq 0\), item i is moved from bin α to bin β. After this move, if bin α is empty and the total number of utilized bins reaches the specified number of bins (m), the algorithm stops and outputs the final results. If bin α is still partially filled, or the lower bound has not been reached, the algorithm exits Swap (1,0) and proceeds to Swap (1,1). If the move of the first item is infeasible or \(\Delta {f}_{(1,0)} <0\), the second item in bin α is evaluated and so on, until a feasible move with \(\Delta {f}_{(1,0)} \geq 0\) is found or all items in bin α have been considered and no feasible move with \(\Delta {f}_{(1,0)} \geq 0\) has been found. The algorithm then performs Swap (1,1), followed by Swap (1,2), and Swap (2,2). In each of the swapping schemes, we always take the first feasible move with a nonnegative change in objective function value that we find.
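The following sketch (our own illustration, with our own type and helper names) shows one Swap (1,0) pass between a pair of bins: feasibility is checked with the original sizes, while the gain is the transformed-space version of (13).

```cpp
#include <vector>

// Minimal bin representation for illustration: 'size' holds the original item
// sizes, 'tsize' the transformed sizes, and 'load'/'tload' the corresponding sums.
struct Bin {
    std::vector<long long> size;
    std::vector<double>    tsize;
    long long load  = 0;
    double    tload = 0.0;
};

// One Swap (1,0) pass from bin a to bin b: move the first item of a that fits
// into b (checked with the original sizes) and whose transformed-space gain,
// computed as in (13), is nonnegative. Returns true if an item was moved.
bool swap10(Bin& a, Bin& b, long long c) {
    for (int i = 0; i < (int)a.size.size(); ++i) {
        if (b.load + a.size[i] > c) continue;       // infeasible in the original space
        double t = a.tsize[i];
        double delta = (a.tload - t) * (a.tload - t)
                     + (b.tload + t) * (b.tload + t)
                     - a.tload * a.tload - b.tload * b.tload;
        if (delta < 0.0) continue;                  // reject a downhill transformed move
        b.size.push_back(a.size[i]);  b.load  += a.size[i];
        b.tsize.push_back(t);         b.tload += t;
        a.load -= a.size[i];          a.tload -= t;
        a.size.erase(a.size.begin() + i);
        a.tsize.erase(a.tsize.begin() + i);
        return true;                                // take the first acceptable move
    }
    return false;
}
```

Swap (1,1), Swap (1,2), and Swap (2,2) are implemented analogously using the gains in (14)-(16).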

We point out that the improvement step (Step 4.2) is carried out 50 times (nloop2 = 50), starting with T = 1, followed by \(T = 1 \times 0.95 = 0.95\), \(T = 0.95 \times 0.95 = 0.9025\), and so on. At the end of Step 4, if the total number of utilized bins has not reached m, we repeat Step 4 with another initial solution. We exit the program as soon as the number of utilized bins reaches m or after 20 runs (nloop1 = 20).
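Putting the pieces together, the outer structure of the improvement phase looks roughly as follows (a structural sketch with hypothetical helper names; the precise steps are those of Table 1).

```cpp
#include <vector>

struct Bin {                                   // as in the previous sketch
    std::vector<long long> size;
    std::vector<double>    tsize;
    long long load  = 0;
    double    tload = 0.0;
};

// Hypothetical helpers standing in for the components described above.
std::vector<Bin> randomizedInitialSolution(const std::vector<long long>& items,
                                           long long c);
void applyWeights(std::vector<Bin>& bins, long long c, double K, double T);
void swapPass(std::vector<Bin>& bins, long long c);  // Swap (1,0), (1,1), (1,2), (2,2)
int  binsUsed(const std::vector<Bin>& bins);

// Outer structure of WAMC's improvement phase: up to nloop1 = 20 restarts, each
// with nloop2 = 50 temperature steps and a cooling factor of 0.95.
bool improve(const std::vector<long long>& orderedList, int m, long long c, double K) {
    for (int run = 0; run < 20; ++run) {                    // nloop1
        std::vector<Bin> bins = randomizedInitialSolution(orderedList, c);
        double T = 1.0;
        for (int iter = 0; iter < 50; ++iter) {             // nloop2
            applyWeights(bins, c, K, T);                    // distort the item sizes
            swapPass(bins, c);                              // neighborhood search
            if (binsUsed(bins) <= m) return true;           // feasible m-bin packing found
            T *= 0.95;                                      // cool the temperature
        }
    }
    return false;                                           // no m-bin packing found
}
```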

5 Computational Experiments

We now describe the test instances, present results generated by WAMC, and compare WAMC’s results to those reported in the literature.

5.1 Test Instances

In this section, we describe how we generated two sets of test instances.

5.1.1 Test Set 1

We followed the procedure described by Labbé, Laporte, and Martello (2003) to randomly generate the first set of test instances. Labbé et al. specified the values of three parameters: number of bins (m = 2, 3, 5, 10, 15, 20), capacity (c = 100, 120, 150, 200, 300, 400, 500, 600, 700, 800), and range of item sizes \([{t}_{min}, 99]\) (\({t}_{min}\) = 1, 20, 50). For each of the 180 triples (m, c, \({t}_{min}\)), we created 10 instances by generating item sizes \({t}_{i}\) drawn from a discrete uniform distribution on this interval until the condition \(\sum {t}_{i}> mc\) was met. This gave us a total of \(180 \times 10 = 1,800\) instances, which we denote by Test Set 1. We requested the 1,800 instances used by Labbé et al. (2003), but Martello (2006) replied that these instances were no longer available.
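The generation procedure for a single instance can be sketched as follows (our own illustration; the random number generator and seeding details are not those used in the original studies).

```cpp
#include <random>
#include <vector>

// Generate one Test Set 1 instance: item sizes are drawn uniformly from
// [tmin, 99] until their total exceeds m * c (the condition of Labbe et al.).
std::vector<long long> generateInstance(int m, long long c, int tmin,
                                        std::mt19937& rng) {
    std::uniform_int_distribution<int> sizeDist(tmin, 99);
    std::vector<long long> sizes;
    long long total = 0;
    while (total <= (long long)m * c) {
        long long t = sizeDist(rng);
        sizes.push_back(t);
        total += t;
    }
    return sizes;
}
```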

5.1.2 Test Set 2

Peeters and Degraeve (2006) extended the problems of Labbé et al. by multiplying the capacity c by a factor of 10 and enlarging the range of item sizes to \([{t}_{min}, 999]\). Rather than fixing the number of bins, Peeters and Degraeve fixed the expected number of generated items, denoted by \(E({n}^{{'}})\). \(E({n}^{{'}})\) is not a direct input for generating an instance; it is implicitly determined by the number of bins and the capacity: since the item sizes are uniformly distributed on the interval \([{t}_{min}, 999]\), the expected item size is \(({t}_{min} + 999)/2\) and \(E({n}^{{'}}) = 2cm/({t}_{min} + 999)\). Given the expected number of items \(\bar n\) as an input, the number of bins must therefore be \(m =\bar{n}({t}_{min} + 999)/(2c)\).
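For example (our own illustration of this calculation), an instance with \(\bar n = 100\), c = 1000, and \({t}_{min} = 1\) uses

$$m = \frac{\bar{n}({t}_{min} + 999)}{2c} = \frac{100 \times (1 + 999)}{2 \times 1000} = 50$$

bins; when the ratio is not an integer, it must of course be rounded to obtain a whole number of bins.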

We randomly generated the second set of test instances with parameter values specified by Peeters and Degraeve: desired number of items (\(\bar n\) = 100, 150, 200, 250, 300, 350, 400, 450, 500), capacity (c = 1000, 1200, 1500, 2000, 3000, 4000, 5000, 6000, 7000, 8000), and range of item sizes \([{t}_{min}, 999]\) (\({t}_{min}\) = 1, 200, 500). For each of the 270 triples (\(\bar n\), c, \({t}_{min}\)), we created 10 instances. This gave us a total of \(270 \times 10 = 2,700\) instances, which we denote by Test Set 2.

5.2 Computational Results

We coded WAMC in C and C++ and used a 3 GHz Pentium 4 computer with 256 MB of RAM. In the next two sections, we provide the results generated by WAMC on the two sets of test instances.

Table 2 Average value of n over 10 instances for each triple (m, c, \({t}_{min}\)) in Test Set 1

5.2.1 Results on Test Set 1

In Table 2, we show the average number of items (n) generated over 10 instances for each triple (m, c, \({t}_{min}\)) in Test Set 1. In Table 3, we give the number of instances solved to optimality by WAMC. In Table 4, we give the average running time in seconds for WAMC.

Table 3 Number of instances solved to optimality by WAMC in Test Set 1
Table 4 Average computation time (s) for WAMC over 10 instances for each triple (m, c, \({t}_{min}\)) in Test Set 1
Table 5 Number of instances solved to optimality by LLM reported in Labbé et al. (2003)

We see that WAMC found optimal solutions to 1,793 instances. On average, WAMC is very fast with most computation times less than 0.01 s and the longest average time about 1 s.

Labbé, Laporte, and Martello (2003) generated 1,800 instances and solved each instance using a four-step enumeration algorithm (which we denote by LLM) on a Digital VaxStation 3100 (a slow machine that is comparable to a PC486/33). We point out that our Test Set 1 and the 1,800 instances used by Labbé et al. are very similar (the average values of n that we give in Table 2 are nearly the same as those given by Labbé et al. (2003), but they are not exactly the same). In Table 5, we provide the number of instances solved to optimality by LLM. We see that LLM found optimal solutions to 1,759 instances. On average, LLM is fast with many computation times 0.01 s or less and the longest average time several hundred seconds or more.

Peeters and Degraeve (2006) followed the procedure of Labbé et al. (2003) and generated 1,800 instances. They solved each instance using a branch-and-price algorithm (denoted by BP) on a COMPAQ Armada 700M, 500 MHz Intel Pentium III computer with a time limit of 900 seconds. Peeters and Degraeve reported that BP solved 920 instances (“… for those types of instances where the average CPU is significantly different from 0 …”) to optimality. For these 920 instances, most of the computation times were less than 0.01 s. Although not listed in the paper explicitly, we believe that BP also solved the remaining 880 instances to optimality.

In summary, on three different sets of 1,800 instances generated using the specifications of Labbé et al. (2003), the number of optimal solutions found by BP, WAMC, and LLM were 1,800, 1,793, and 1,759, respectively.

Table 6 Average number of bins (m) over 10 instances for each triple (\(\bar n\), c, \({t}_{min}\)) in Test Set 2

5.2.2 Results on Test Set 2

In Table 6, we show the average number of bins (m) over 10 instances for each triple (\(\bar n\), c, \({t}_{min}\)) in Test Set 2. In Table 7, we give the number of instances from Test Set 2 solved to optimality by WAMC. When the number of instances solved to optimality by WAMC is less than 10, the maximum deviation from the optimal solution in terms of the number of items is shown in parentheses. In Table 7, we also provide the results generated by BP as reported in Peeters and Degraeve (2006). BP solved 2,700 instances that are similar to, but not exactly the same as, the instances in Test Set 2.

Table 7 Number of instances solved to optimality by WAMC in Test Set 2 and the number of instances solved to optimality by BP reported in Peeters and Degraeve (2006)

We see that WAMC found optimal solutions to 2,665 instances (there are a total of 2,700 instances). BP found optimal solutions to 2,519 instances.

WAMC performed better on instances with large bin capacities and BP performed better on instances with small bin capacities. WAMC solved all 1,080 instances with large bin capacities (c = 5000, 6000, 7000, 8000) to optimality, while BP solved 904 large-capacity instances to optimality. Over the 1,620 small-capacity instances (c = 1000, 1200, 1500, 2000, 3000, 4000), BP solved 1,615 instances to optimality, while WAMC solved 1,585 instances to optimality.

In Table 8, we show the average computation time in seconds for WAMC and BP for the instances solved to optimality. To illustrate, for the triple \((\bar{n} = 200,\ c = 1000,\ {t}_{min} = 1)\), WAMC solved eight instances and averaged 0.1 s, while BP solved all 10 instances and averaged 0.5 s. We point out that for several triples (e.g., \((\bar{n} = 350,\ c = 5000,\ {t}_{min} = 500)\)), BP did not solve any instance to optimality, so that no average computation time is provided in the table.

Table 8 Average computation time (s) for WAMC and BP on instances solved to optimality

Over all 2,665 instances solved to optimality, WAMC had an average computation time of 0.20 s. Over all 2,519 instances solved to optimality, BP had an average computation time of 2.85 s. The Pentium III computer used by Peeters and Degraeve (2006) to run BP is much slower than the Pentium 4 computer that we used to run WAMC.

We point out that our weight annealing algorithm is a robust procedure that can be used to solve several variants of bin packing and knapsack problems such as the dual bin packing problem (see Loh (2006) for more details).

6 Conclusions

We developed a new algorithm (WAMC) to solve the maximum cardinality bin packing problem that is based on weight annealing. WAMC is easy to understand and easy to code.

WAMC produced high-quality solutions very quickly. Over 4,500 instances that we randomly generated, our algorithm solved 4,458 instances to optimality with an average computation time of a few tenths of a second. Clearly, WAMC is a promising approach that deserves further computational study.