1 Introduction

The search for combinatorial matrices and their associated designs has been a fertile testing ground for algorithm designers for decades. These extremely challenging combinatorial problems have been tackled with algorithms using concepts and techniques from discrete mathematics, number theory, linear algebra, group theory, optimization, and metaheuristics. This chapter surveys some recent algorithms to search for combinatorial matrices, with an emphasis on algorithms within the realm of optimization and metaheuristics. The hope is that this chapter will arouse the interest of algorithm designers from various areas and lead to new formalisms and new algorithms to search efficiently for combinatorial matrices. There are several open problems in the broad area of combinatorial matrices, and it is clear that new ideas are required to solve them. The existing algorithms continue to yield new results, especially in conjunction with parallel programming techniques, but will eventually reach a point of saturation, beyond which they will probably not be of much use. Therefore, cross-fertilization of research areas is necessary for producing new results in the search for new combinatorial matrices, at both the theoretical and the practical level.

Combinatorial matrices are defined as matrices that possess some combinatorial properties, such as prescribed row/column sums and special structure described in terms of circulant matrices. The research area of combinatorial matrices has been developed in a systematic manner over the last 50 years in the four books [10, 11, 48, 88]. It is worthwhile to point out that the unifying concept of combinatorial matrices includes well-known and widely studied categories of matrices, such as adjacency and incidence matrices of graphs, Hadamard matrices, and doubly stochastic matrices.

In addition, there are a number of more recent books that contain useful information and new developments in the area of combinatorial matrices and the closely related area of design theory. The book [52] (see also the update [53]) sheds considerable light on the cocyclic (i.e., group cohomological) aspects of certain kinds of combinatorial matrices and is the only systematic book-length treatment of this topic. The book [94] features a very readable exposition of design theory with emphasis on bent functions and coding theory aspects. The book [19] offers an alternative algebraic foundation of design theory. The book [54] places its emphasis on symmetric designs and their properties.

2 Preliminaries and Notations

A wide class of combinatorial matrices (or the relevant sequences) can be defined via the concepts of the periodic and aperiodic (or nonperiodic) autocorrelation functions associated to a finite sequence. These two concepts have been invented and studied widely within the engineering community, and their importance has been recognized in the combinatorics community as well, especially in connection with several kinds of combinatorial matrices.

All sequences in this chapter will be finite and will have elements either from { − 1, + 1} (binary sequences) or from { − 1, 0, + 1} (ternary sequences).

Definition 1

The periodic autocorrelation function associated to a finite sequence \(A = [a_1,\ldots,a_n]\) of length n is defined as

$$P_{A}(i) =\displaystyle\sum _{ k=1}^{n}a_{ k}a_{k+i},\,\,i = 0,\ldots,n - 1,$$

where k + i is taken modulo n, when k + i > n.

Definition 2

The aperiodic (or nonperiodic) autocorrelation function associated to a finite sequence \(A = [a_1,\ldots,a_n]\) of length n is defined as

$$N_{A}(i) =\displaystyle\sum _{ k=1}^{n-i}a_{ k}a_{k+i},\,\,i = 0,\ldots,n - 1,$$

Note that here no reduction modulo n is needed, since \(k + i \leq n\) always holds.

Definitions 1 and 2 are best clarified with an example.

Example 1

For a finite sequence \(A = [a_1,\ldots,a_7]\) of length n = 7, we have

$$\displaystyle\begin{array}{rcl} P_{A}(0)& =& a_{1}^{2} + a_{2}^{2} + a_{3}^{2} + a_{4}^{2} + a_{5}^{2} + a_{6}^{2} + a_{7}^{2} \\ P_{A}(1)& =& a_{1}a_{2} + a_{2}a_{3} + a_{3}a_{4} + a_{4}a_{5} + a_{5}a_{6} + a_{6}a_{7} + a_{7}a_{1} \\ P_{A}(2)& =& a_{1}a_{3} + a_{2}a_{4} + a_{3}a_{5} + a_{4}a_{6} + a_{5}a_{7} + a_{6}a_{1} + a_{7}a_{2} \\ P_{A}(3)& =& a_{1}a_{4} + a_{2}a_{5} + a_{3}a_{6} + a_{4}a_{7} + a_{5}a_{1} + a_{6}a_{2} + a_{7}a_{3} \\ P_{A}(4)& =& a_{1}a_{4} + a_{2}a_{5} + a_{3}a_{6} + a_{4}a_{7} + a_{5}a_{1} + a_{6}a_{2} + a_{7}a_{3} \\ P_{A}(5)& =& a_{1}a_{3} + a_{2}a_{4} + a_{3}a_{5} + a_{4}a_{6} + a_{5}a_{7} + a_{6}a_{1} + a_{7}a_{2} \\ P_{A}(6)& =& a_{1}a_{2} + a_{2}a_{3} + a_{3}a_{4} + a_{4}a_{5} + a_{5}a_{6} + a_{6}a_{7} + a_{7}a_{1} \\ N_{A}(0)& =& a_{1}^{2} + a_{2}^{2} + a_{3}^{2} + a_{4}^{2} + a_{5}^{2} + a_{6}^{2} + a_{7}^{2} \\ N_{A}(1)& =& a_{1}a_{2} + a_{2}a_{3} + a_{3}a_{4} + a_{4}a_{5} + a_{5}a_{6} + a_{6}a_{7} \\ N_{A}(2)& =& a_{1}a_{3} + a_{2}a_{4} + a_{3}a_{5} + a_{4}a_{6} + a_{5}a_{7} \\ N_{A}(3)& =& a_{1}a_{4} + a_{2}a_{5} + a_{3}a_{6} + a_{4}a_{7} \\ N_{A}(4)& =& a_{1}a_{5} + a_{2}a_{6} + a_{3}a_{7} \\ N_{A}(5)& =& a_{1}a_{6} + a_{2}a_{7} \\ N_{A}(6)& =& a_{1}a_{7}. \end{array}$$
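To make Definitions 1 and 2 concrete in code, here is a minimal Python sketch (an illustration of the definitions, not code from any of the cited works); note that it uses 0-based indexing, whereas the text uses 1-based indexing.

def paf(a, i):
    """Periodic autocorrelation P_A(i): indices wrap around modulo n."""
    n = len(a)
    return sum(a[k] * a[(k + i) % n] for k in range(n))

def naf(a, i):
    """Aperiodic autocorrelation N_A(i): the sum simply stops at n - i."""
    return sum(a[k] * a[k + i] for k in range(len(a) - i))

# An arbitrary +/-1 sequence of length 7; observe P_A(i) == P_A(7 - i) for i >= 1.
A = [1, 1, 1, -1, 1, -1, -1]
print([paf(A, i) for i in range(7)])
print([naf(A, i) for i in range(7)])

Running it on any ±1 sequence of length 7 reproduces the pattern of Example 1, and in particular the symmetry \(P_A(i) = P_A(7 - i)\) stated as Property 1 below.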

Definition 3

Two finite sequences \(A = [a_1,\ldots,a_n]\) and \(B = [b_1,\ldots,b_n]\) of length n each are said to have constant periodic autocorrelation if there is a constant c such that

$$P_{A}(i) + P_{B}(i) = c,\quad i = 1,\ldots,n - 1.$$

Definition 4

Two sequences \(A = [a_1,\ldots,a_n]\) and \(B = [b_1,\ldots,b_n]\) of length n each are said to have constant aperiodic autocorrelation if there is a constant c such that

$$N_{A}(i) + N_{B}(i) = c,\quad i = 1,\ldots,n - 1.$$

Note that the index i = 0 is omitted from Definitions 3 and 4, because of the property \(P_{A}(0) = N_{A}(0) =\sum _{ j=1}^{n}a_{j}^{2}\). When writing out binary and ternary sequences, it is customary in the combinatorial literature to denote − 1 by −, zero by 0, and + 1 by +, and this is the convention adopted in the current chapter. Definitions 3 and 4 extend naturally to more than two sequences. They are best illustrated with an example.

Example 2

The following binary sequences with n = 26 have constant aperiodic autocorrelation with c = 0:

$$\begin{array}{l} + + + + - + + - - + - + - + - - + - + + + - - + + + \\ + + + + - + + - - + - + + + + + - + - - - + + - - - \end{array}$$

The following ternary sequences with n = 30 have constant aperiodic autocorrelation with c = 0:

$$\begin{array}{l} + + - - - - - + - - 0\,0\,0\,0 + 0 + - + + + + - - + - + - + + \\ + + - - - - - + - - + + - - 0 - 0\,0\,0\,0 - - + + - + - + - - \end{array}$$

The following properties of the periodic and aperiodic autocorrelation functions can be observed (and verified) in Examples 1 and 2:

Property 1 (Symmetry)

$$P_{A}(i) = P_{A}(n - i),\quad i = 1,\ldots,n - 1.$$

Property 2 (Complementarity)

$$P_{A}(i) = N_{A}(i) + N_{A}(n - i),\quad i = 1,\ldots,n - 1.$$

Property 3 (Second Elementary Symmetric Function)

$$\begin{array}{rcl} P_{A}(1) + \cdots + P_{A}(n - 1)& =& 2\,e_{2}(a_{1},\ldots,a_{n}) \\ N_{A}(1) + \cdots + N_{A}(n - 1)& =& e_{2}(a_{1},\ldots,a_{n}) \end{array}$$

where \(e_{2}(a_{1},\ldots,a_{n}) =\sum _{1\leq i<j\leq n}a_{i}a_{j}\) is the second elementary symmetric function in the n variables \(a_1,\ldots,a_n\).

The proof of the three properties stated above is a straightforward application of Definitions 1 and 2 and is left to the reader.

The complementarity property of periodic and aperiodic autocorrelation functions implies that:

Property 4

If two sequences have constant aperiodic autocorrelation, then they must also have constant periodic autocorrelation.

Therefore, the two pairs of sequences in Example 2 also have constant periodic autocorrelation with c = 0.

The converse of the above property does not hold, and counterexamples may be easily found.

The concept of a circulant matrix is also important in connection with several kinds of combinatorial matrices and sequences.

Definition 5

An n × n matrix C(A) is called circulant if every row (except the first) is obtained from the previous row by a right cyclic shift of one position. In particular,

$$C(A) = \left [\begin{array}{ccccc} a_{1} & a_{2} & \ldots & a_{n-1} & a_{n} \\ a_{n}&a_{1} & \ldots & a_{n-2} & a_{n-1}\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{3} & a_{4} & \ldots & a_{1} & a_{2} \\ a_{2} & a_{3} & \ldots & a_{n} & a_{1}\\ \end{array} \right ]$$

The relationship between circulant matrices and periodic autocorrelation is described by the following property, whose proof is also a straightforward application of Definitions 1 and 2.

Property 5

Consider a finite sequence \(A = [a_1,\ldots,a_n]\) of length n and the circulant matrix C(A) whose first row is equal to A. Then \(P_A(i)\) is the inner product of the first row of C(A) and the (i + 1)-st row of C(A).
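The following small Python sketch (my own illustration, with 0-based rows, so the (i + 1)-st row of the text is row i here) builds C(A) as in Definition 5 and verifies Property 5.

def circulant(a):
    """Row i of C(A) is the first row cyclically shifted right i times."""
    n = len(a)
    return [[a[(j - i) % n] for j in range(n)] for i in range(n)]

def paf(a, i):
    n = len(a)
    return sum(a[k] * a[(k + i) % n] for k in range(n))

A = [1, -1, 1, 1, -1, -1, 1]
C = circulant(A)
for i in range(len(A)):
    inner = sum(x * y for x, y in zip(C[0], C[i]))
    assert inner == paf(A, i)   # Property 5
print("Property 5 verified for all shifts")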

It turns out that the concepts of periodic and aperiodic autocorrelation can serve as the unifying concepts needed to define several kinds of combinatorial matrices and sequences simultaneously. We summarize some of these kinds of combinatorial matrices and sequences in the next table.

Note that in Table 1, TCP stands for “ternary complementary pairs” and PCS stands for “periodic complementary sequences.” We refer the reader to the papers listed in Table 1 for the complete definitions of these combinatorial objects.

Table 1 Combinatorial matrices and sequences

In the remainder of this chapter, we focus on various kinds of algorithms that can be used to search efficiently for combinatorial matrices and sequences listed in Table 1, as well as any other such combinatorial objects that can be defined via periodic and aperiodic autocorrelation. In fact, we will concentrate on general algorithmic schemes that are applicable to all such combinatorial objects and will provide high-level descriptions for these algorithmic schemes.

3 Cyclotomy Algorithms

Cyclotomy algorithms to search for combinatorial matrices and sequences are based on the premise that certain combinatorial concepts called “supplementary difference sets” (abbreviated as SDS) can be constructed by taking unions of cyclotomic classes (or cosets) or group action orbits of a suitable subgroup of the group of invertible elements of the ring of integers modulo n, where n is a parameter referring to the length of the sequences. This powerful theme has been exploited by several authors at the theoretical and practical level over more than two decades.

The crucial concept of an SDS was introduced by Jennifer Seberry some 40 years ago [101, 102].

Definition 6

Let n be a positive integer and let \(S_1,\ldots,S_k\) be subsets of \(\mathbb{Z}_n\) with cardinalities \(n_1,\ldots,n_k\), respectively. Let \(T_{i} =\{ (s_{1} - s_{2}) \bmod n : s_{1},s_{2} \in S_{i},\,s_{1}\neq s_{2}\}\) be the multiset of all pairwise differences (taken modulo n) of distinct elements of \(S_i\), for \(i = 1,\ldots,k\). If the multiset \(T_{1} \cup \cdots \cup T_{k}\) is equal to λ copies of \(\{1,\ldots,n-1\}\), then \(S_1,\ldots,S_k\) form an SDS with parameters \(k -\{ n;\,n_1,\ldots,n_k;\,\lambda \}\).

A simple counting argument shows that the existence of an SDS with parameters \(k -\{ n;\,n_1,\ldots,n_k;\,\lambda \}\) implies the following condition:

$$\displaystyle\sum _{i=1}^{k}n_{ i}(n_{i} - 1) =\lambda (n - 1).$$

Here is a toy example, where the SDS property can be verified by hand.

Example 3

Let n = 13 and t = 3. Then there are 4 cyclotomic classes: \(S_1 =\{ 1,3,9\}\), \(S_2 =\{ 2,5,6\}\), \(S_3 =\{ 4,10,12\}\), and \(S_4 =\{ 7,8,11\}\); these are the cosets of the order-3 subgroup \(\{1,3,9\}\) of \(\mathbb{Z}_{13}^{*}\). It turns out that \(S_1\) and \(S_2\) form an SDS with parameters \(2 -\{ 13;\,3,3;\,1\}\) because (mod 13) we have

$$T_{1}\cup T_{2} =\{ 1-3,1-9,3-9,3-1,9-1,9-3\}\cup \{2-5,2-6,5-6,5-2,6-2,6-5\},$$

i.e.,

$$T_{1} \cup T_{2} =\{ 1,2,3,4,5,6,7,8,9,10,11,12\}.$$
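Example 3 is small enough to be checked mechanically. A Python sketch of the verification (my own illustration, following the notation of Definition 6) forms the multisets of pairwise differences and confirms that their union covers {1, …, 12} exactly once.

from collections import Counter

def diffs(S, n):
    """Multiset of pairwise differences (mod n) of distinct elements of S."""
    return Counter((x - y) % n for x in S for y in S if x != y)

n = 13
S1, S2 = {1, 3, 9}, {2, 5, 6}
T = diffs(S1, n) + diffs(S2, n)
assert T == Counter(range(1, n))   # each of 1..12 occurs exactly lambda = 1 times
print("2-{13; 3, 3; 1} SDS verified")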

Here is a non-toy example, where the SDS property needs to be verified by a computer program.

Example 4

Let n = 323 and t = 3. Then there are 4 cyclotomic classes \(C_1, C_2, C_3, C_4\) with cardinalities \(n_1 = 144\), \(n_2 = 144\), \(n_3 = 18\), and \(n_4 = 16\). Then the sets \(S_1 = C_1 \cup C_3\) and \(S_2 = S_1\) form an SDS with parameters \(2 -\{ 323;\,162,162;\,162\}\).

The paper [103] states a general theorem that indicates conditions under which suitable unions of cyclotomic classes form an SDS with specific parameters. The paper [38] contains recent theoretical results and in particular gives two constructions of skew Hadamard difference sets in the additive groups of suitable finite fields by using unions of cyclotomic classes. The paper [39] gives two constructions of strongly regular Cayley graphs on finite fields by using unions of cyclotomic classes. The paper [97] explores the idea of blending a mathematical programming formalism with a cyclotomy-based approach; the resulting algorithm is quite promising and deserves further investigation. Chapter 3 of the PhD thesis [47] is a virtual treasure trove of cyclotomy-based algorithms and applications of such algorithms to several different kinds of combinatorial matrices and sequences. Some of the most spectacular successes of cyclotomy algorithms include the discovery of new orders of Hadamard and skew Hadamard matrices [21, 23–25] as well as D-optimal matrices [29]. These types of methods are bound to continue to produce new results.

4 String Sorting Algorithms

String sorting algorithms to search for combinatorial matrices and sequences are based on certain restrictions that the properties of constant periodic or aperiodic autocorrelation imply. These restrictions possess the important advantage that they can be used to decouple the (quadratic) equations in Definitions 3 and 4. In addition, one needs to carefully distinguish between periodic and aperiodic autocorrelation, because the corresponding restrictions are of a different nature.

4.1 Periodic Autocorrelation

The restrictions imposed by the constancy of periodic autocorrelation are described via the concept of power spectral density (PSD) associated to a finite sequence. We illustrate the main idea in the context of Hadamard matrices with two circulant cores, following [42, 63].

Let n be an odd positive integer and suppose that two binary sequences \(A = [a_1,\ldots,a_n]\) and \(B = [b_1,\ldots,b_n]\), both of length n, have constant periodic autocorrelation function − 2, i.e.,

$$P_{A}(s) + P_{B}(s) = -2,\mbox{ for }s = 1,\ldots, \frac{n - 1} {2}.$$
(1)

Definition 7

Let \(\omega = {e}^{\frac{2\pi i} {n} }\) be a primitive n-th root of unity.

The discrete Fourier transform (DFT) of A is defined by

$$\mathrm{DFT}_{A}(s) =\displaystyle\sum _{ k=1}^{n}a_{k}\,{\omega }^{ks},\mbox{ for }s = 0,\ldots,n - 1.$$

The power spectral density (PSD) of A is defined by

$$\mathrm{PSD}_{A}(s) = \vert \mathrm{DFT}_{A}(s){\vert }^{2},\mbox{ for }s = 0,\ldots,n - 1.$$

Therefore, the PSD values are the squared magnitudes of the DFT values, i.e., \(\mathrm{PSD}_{A}(s) = \mathrm{Re}{(\mathrm{DFT}_{A}(s))}^{2} + \mathrm{Im}{(\mathrm{DFT}_{A}(s))}^{2}\). Note that the well-known properties \({\omega }^{n} = 1\) and \(\mathrm{DFT}_{A}(n - s) = \overline{\mathrm{DFT}_{A}(s)}\) (so that \(\mathrm{PSD}_{A}(n - s) = \mathrm{PSD}_{A}(s)\)), for \(s = 1,\ldots,n - 1\), are routinely used in order to perform DFT/PSD calculations efficiently.

It can be proved (see [42]) that property (Eq. 1) implies that

$$\mathrm{PSD}_{A}(s) + \mathrm{PSD}_{B}(s) = 2n + 2,\mbox{ for }s = 1,\ldots, \frac{n - 1} {2}.$$
(2)

Example 5

Let n = 55 and consider the two sequences from [13]:

$$\begin{array}{c} - + + - + + + + + + -- + -- + + - + - + + - + - + + -- + --- + - + + + + -- + -- + --- + - + + --\\ + - - + + + - - - + + - + - + - + + - - - + - + + + + + + + - + + + + - - - - + - + - - - + - - - + + + - - \end{array}$$

The above sequences have constant periodic autocorrelation − 2 and can be used to construct a Hadamard matrix of order 2 ⋅55 + 2 = 112. The following table records the \(\frac{55-1} {2} = 27\) PSD values for the above sequences, as well as their sums, which equal 2n + 2 = 112 for s = 1, …, 27, as expected.

It is instructive to examine Table 2 in order to see how Eq. (2) is realized for the particular solution of Example 5.

Table 2 PSD values for binary sequences with n = 55

4.1.1 The PSD Criterion

Equation (2) can be used to derive the so-called PSD criterion: since the PSD values are nonnegative, if for some \(s \in \left \{1,\ldots, \frac{n-1} {2} \right \}\) it turns out that \(\mathrm{PSD}_{A}(s) > 2n + 2\) or \(\mathrm{PSD}_{B}(s) > 2n + 2\), then the corresponding candidate sequence A or B can be safely discarded from the search.
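In implementations the PSD values are typically obtained with an FFT. The following numpy sketch of the PSD criterion is an illustration, not the cited authors' code; numpy's FFT uses \({\omega }^{-ks}\) rather than \({\omega }^{ks}\), which leaves the magnitudes, and hence the PSD values, unchanged. The small tolerance guards against round-off (but see Sect. 4.1.2 for cases where PSD values are exact integers).

import numpy as np

def psd(a):
    return np.abs(np.fft.fft(np.asarray(a, dtype=float))) ** 2

def passes_psd_criterion(a, bound):
    """Reject a candidate if any PSD value for s = 1, ..., (n-1)/2 exceeds bound."""
    n = len(a)
    return bool(np.all(psd(a)[1 : (n - 1) // 2 + 1] <= bound + 1e-9))

n = 55
a = np.random.choice([-1, 1], size=n)
print(passes_psd_criterion(a, 2 * n + 2))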

4.1.2 A Subtle Point: Integer PSD Values

There is a subtle point regarding the computation of the PSD values that occur in string sorting algorithms, as well as other filtering-based algorithms to search for sequences using the PSD criterion. The subtlety comes from the fact that although in the vast majority of cases the PSD values are (positive) floating point numbers, there exist cases when the PSD values are (positive) integers. A particular case when this phenomenon occurs is described in the following result proved in [64].

Lemma 1

Let n be an odd integer such that n ≡ 0 (mod 3) and let \(m = \frac{n} {3}\). Let \(\omega = {e}^{\frac{2\pi i} {n} } =\cos \left ( \frac{2\pi }{n}\right ) + i\sin \left ( \frac{2\pi }{n}\right )\) be the principal n-th root of unity. Let \([a_1,\ldots,a_n]\) be a sequence with elements from { − 1, 0, + 1}. Then \(\mathrm{DFT}([a_1,\ldots,a_n],m)\) can be evaluated explicitly in closed form and \(\mathrm{PSD}([a_1,\ldots,a_n],m)\) is a nonnegative integer. The explicit evaluations are given by

$$\displaystyle\begin{array}{rcl} \mathrm{DFT}([a_{1},\ldots,a_{n}],m)& =& \left (A_{1} -\frac{1} {2}A_{2} -\frac{1} {2}A_{3}\right ) + \left (\frac{\sqrt{3}} {2} A_{2} -\frac{\sqrt{3}} {2} A_{3}\right )i \\ \mathrm{PSD}([a_{1},\ldots,a_{n}],m)& =& A_{1}^{2} + A_{ 2}^{2} + A_{ 3}^{2} - A_{ 1}A_{2} - A_{1}A_{3} - A_{2}A_{3} \\ \end{array}$$

where

$$A_{1} =\displaystyle\sum _{ i=0}^{m-1}a_{ 3i+1},\,\,\,A_{2} =\displaystyle\sum _{ i=0}^{m-1}a_{ 3i+2},\,\,\,A_{3} =\displaystyle\sum _{ i=0}^{m-1}a_{ 3i+3}.$$

It is important to carefully account for the phenomenon described by Lemma 1 in any implementation of string sorting algorithms in order not to miss any solutions due to numerical round-off error. One way of doing this is to omit the n ∕ 3-th value from the construction of the string.
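Here is a sketch of how Lemma 1 can be exploited in code (the function name and the 0-based slicing are my own): the PSD value at index m = n∕3 is computed exactly in integer arithmetic from the three stride-3 partial sums, so no floating point comparison is needed at this index.

def psd_at_n_over_3(a):
    """Exact integer PSD value at index m = n/3 (Lemma 1) for a ternary sequence."""
    n = len(a)
    assert n % 3 == 0 and n % 2 == 1
    A1, A2, A3 = sum(a[0::3]), sum(a[1::3]), sum(a[2::3])   # a_1 + a_4 + ..., etc.
    return A1**2 + A2**2 + A3**2 - A1*A2 - A1*A3 - A2*A3

print(psd_at_n_over_3([1, -1, 0, 1, 1, -1, 0, 0, 1]))   # n = 9, m = 3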

4.1.3 Bracelets and Necklaces

The symmetry property of periodic autocorrelation and PSD values under cyclic shifting must also be taken into account in order to avoid redundant computations for sequences that have the same periodic autocorrelation and PSD values. It turns out that the right combinatorial structures to deal with this symmetry property are bracelets and necklaces. The papers [89] and [90] and subsequent work by J. Sawada and his collaborators provide constant amortized time (CAT) algorithms to generate bracelets and necklaces, as well as extremely efficient C implementations of these algorithms.

4.2 Aperiodic Autocorrelation

The restrictions imposed by the constancy of aperiodic autocorrelation are described via a certain infinite class of functions associated to a finite sequence. We illustrate the main idea in the context of Turyn-type sequences, following [57].

Let n be an even positive integer and suppose that four binary sequences \(X = [x_1,\ldots,x_n]\), \(Y = [y_1,\ldots,y_n]\), \(Z = [z_1,\ldots,z_n]\), and \(W = [w_1,\ldots,w_{n-1}]\), of lengths n, n, n, n − 1, respectively, have constant aperiodic autocorrelation function in the sense that

$$N_{X}(s) + N_{Y }(s) + 2N_{Z}(s) + 2N_{W}(s) = 0,\mbox{ for }s = 1,\ldots,n - 1.$$
(3)

The sequences X, Y, Z, W satisfying (3) are called Turyn type.

We associate to a binary sequence X of length n the nonnegative 2π-periodic function

$$f_{X}(\theta ) = N_{X}(0) + 2\displaystyle\sum _{j=1}^{n-1}N_{ X}(j)\cos j\theta.$$

It can be proved (see [57]) that property (Eq. 3) implies that

$$f_{X}(\theta ) + f_{Y }(\theta ) + 2f_{Z}(\theta ) + 2f_{W}(\theta ) = 6n - 2,\,\,\forall \theta.$$
(4)

4.2.1 Aperiodic Version of the PSD Criterion

Equation (4) can be used to derive an aperiodic analog of the PSD criterion. If for some value of θ it turns out that \(f_X(\theta ) > 6n - 2\) or \(f_Y(\theta ) > 6n - 2\) or \(f_Z(\theta ) > 3n - 1\) or \(f_W(\theta ) > 3n - 1\), then the corresponding candidate sequence X, Y, Z, or W can be safely discarded from the search. In addition, if for some value of θ it turns out that \(f_X(\theta ) + f_Y(\theta ) > 6n - 2\), then the corresponding pair (X, Y ) of candidate sequences can be safely discarded from the search. Similar conditions can be derived for other partial sums of the four values \(f_X(\theta ), f_Y(\theta ), f_Z(\theta ), f_W(\theta )\), for arbitrary (but fixed) values of θ. The preprocessing of the sequences based on this aperiodic version of the PSD criterion is often carried out via a finite set of θ values, such as \(\left \{ \frac{j\pi } {500},j = 1,\ldots,500\right \}\).
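A Python sketch of this aperiodic filter (my own illustration under the notation above) evaluates \(f_X\) on the grid \(\{j\pi /500\}\) mentioned in the text and rejects a candidate as soon as one value exceeds its bound.

import math

def naf(a, i):
    return sum(a[k] * a[k + i] for k in range(len(a) - i))

def passes_aperiodic_criterion(a, bound, grid=500):
    """Reject a if f_a(theta) > bound at any grid point theta = j*pi/grid."""
    n = len(a)
    N = [naf(a, j) for j in range(n)]
    for j in range(1, grid + 1):
        theta = math.pi * j / grid
        f = N[0] + 2 * sum(N[k] * math.cos(k * theta) for k in range(1, n))
        if f > bound + 1e-9:
            return False
    return True

X = [1, 1, -1, 1, -1, -1, 1, 1]          # a toy candidate of even length n = 8
print(passes_aperiodic_criterion(X, 6 * len(X) - 2))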

4.3 A General String Sorting Algorithmic Scheme

Based on the PSD criterion (periodic and aperiodic versions), a general string sorting algorithmic scheme can be defined as follows.

INPUT: lengths and types (binary/ternary) of two sequences A, B and the autocorrelation (periodic/aperiodic) property satisfied, value of PSD constant (or its aperiodic analog)

OUTPUT: sequences satisfying the input requirements (if any are found)

  • Preprocess each one of the two sequences separately using the applicable version of the PSD criterion. In particular, discard all candidate sequences that violate the applicable version of the PSD criterion. If the set of all possible candidate sequences is too large to preprocess in its entirety, then use randomized methods to generate a representative part of this set.

  • Encode each candidate A-sequence by a string, namely, the concatenation of the integer parts of its PSD values, or of its f(θ) values for a small set of θ values. Encode each candidate B-sequence by a similar string, but taking the difference of each PSD value (or f(θ) value) from the PSD constant (or its aperiodic analog).

  • Sort the files containing the string encodings of the A-sequences and of the B-sequences separately. When the files containing the string encodings are too large to be sorted directly, this phase involves a distributed sorting algorithm.

  • Look for identical strings in the two sorted files and output the corresponding solutions (a sketch of this matching phase follows).
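The following toy Python sketch illustrates the scheme end to end for the periodic case of Sect. 4.1 (my own illustration: it enumerates all candidates at a tiny length and matches keys with a dictionary, whereas real searches preprocess candidates, write encodings to files, and sort them externally).

import numpy as np
from collections import defaultdict
from itertools import product

def psd(a):
    return np.abs(np.fft.fft(np.asarray(a, dtype=float))) ** 2

def paf(a, i):
    n = len(a)
    return sum(a[k] * a[(k + i) % n] for k in range(n))

n, m, const = 7, 3, 2 * 7 + 2                  # Hadamard two-circulant-core case
cands = list(product([-1, 1], repeat=n))

index = defaultdict(list)                      # A-key -> A-candidates
for a in cands:
    index[tuple(int(v) for v in psd(a)[1:m + 1])].append(a)

solutions = []
for b in cands:
    key_b = tuple(int(const - v) for v in psd(b)[1:m + 1])
    for a in index.get(key_b, []):
        if all(paf(a, i) + paf(b, i) == -2 for i in range(1, m + 1)):
            solutions.append((a, b))           # exact verification of Eq. (1)
print(len(solutions), "pairs found")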

String sorting algorithms have been used in [64] to compute several new weighing matrices of various weights constructed from two circulant cores, using the PSD criterion for periodic autocorrelation.

The ternary sequences with n = 30 in Example 2 have been computed by the author using a C implementation of a string sorting algorithm and the aperiodic version of the PSD criterion.

5 Genetic Algorithms

Genetic algorithms (GAs) form a powerful metaheuristic method that exploits algorithmic analogs of concepts from Darwin’s theory of evolution to design efficient search methods. The theory of genetic algorithms is presented in an extremely readable way in the classical book [44]. Among the more recent systematic treatments of GAs, one can consult [87]. Genetic algorithms have been applied successfully to tackle a wide range of difficult problems across many application areas. A central concept in the theory of GAs is the “objective function.” A judicious choice of an objective function is necessary in order to be able to apply GAs to solve a specific problem. Another important ingredient in the theory of GAs is the encoding of the solution space as a set of binary vectors. We exemplify below how to formalize a problem of searching for combinatorial matrices as a GA problem, using binary sequences with constant periodic autocorrelation − 2 as a case study, and following the exposition in [59].

Let n be an odd positive integer and suppose that two binary sequences \(A = [a_1,\ldots,a_n]\) and \(B = [b_1,\ldots,b_n]\), both of length n, have constant periodic autocorrelation function − 2, as in (1). Since the \(m = \frac{n-1} {2}\) equations in (1) must be satisfied simultaneously, one can consider the following two types of objective functions arising naturally:

$$OF_{1} = \vert P_{A}(1) + P_{B}(1) + 2\vert + \cdots + \vert P_{A}(m) + P_{B}(m) + 2\vert$$

and

$$OF_{2} = {(P_{A}(1) + P_{B}(1) + 2)}^{2} +\ldots +{(P_{ A}(m) + P_{B}(m) + 2)}^{2}.$$

There are two potentially important differences between the objective functions OF 1 and OF 2. First, OF 1 is not differentiable, while OF 2 is. Second, OF 1 takes on smaller values than OF 2. The GA will seek to minimize OF 1 and OF 2, i.e., to find { ± 1} values of the 2n variables \(a_1,\ldots,a_n,b_1,\ldots,b_n\) such that OF 1 and OF 2 attain the value 0. Evidently, a set of 2n { ± 1} values with this property is also a solution of the original problem of finding two binary sequences with constant periodic autocorrelation function − 2.
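For concreteness, here is a direct Python transcription of the two objective functions (my own sketch; the text's 1-based indices become 0-based).

def paf(a, i):
    n = len(a)
    return sum(a[k] * a[(k + i) % n] for k in range(n))

def of1(a, b):
    m = (len(a) - 1) // 2
    return sum(abs(paf(a, i) + paf(b, i) + 2) for i in range(1, m + 1))

def of2(a, b):
    m = (len(a) - 1) // 2
    return sum((paf(a, i) + paf(b, i) + 2) ** 2 for i in range(1, m + 1))

A pair (a, b) solves the original problem exactly when of1(a, b) == 0, or equivalently of2(a, b) == 0.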

The GA requirement of encoding the solution space as a set of binary vectors is satisfied automatically in the problem of searching for binary sequences with constant periodic/aperiodic autocorrelation function.

5.1 A General GAs Algorithmic Scheme

INPUT: k binary sequences A 1, , A k , all of length n, and the autocorrelation (periodic/aperiodic) property satisfied

OUTPUT: k-tuples of sequences satisfying the input requirements (if any are found)

  • Specify an initial randomly chosen population (first generation) of binary sequences of length n.

  • Encode the required periodic/aperiodic autocorrelation property in the form of OF 1 or OF 2.

  • Set current generation to be the first generation.

  • Evolve the current generation to the next generation, using a set of genetic operators. Commonly used such operators include reproduction, crossover, and mutation.

  • Examine the next generation to see whether it contains k binary sequences with the required periodic/aperiodic autocorrelation property. If yes, then output this solution and exit. If no, then set current generation to be the next generation and iterate the previous step.

Note that popular termination criteria for GAs include a predetermined number of generations to evolve or a predetermined amount of execution time.

There are several papers that use GAs to solve design-theoretic and combinatorial problems. In [1] the authors devise a GA to search for cocyclic Hadamard matrices. In [2] the author explains how to use genetic algorithms to solve three design-theoretic problems of graph-theoretic flavor. In [50] the authors apply GAs to construct D-optimal designs. In [68] the authors use so-called competent GAs to search efficiently for weighing matrices. In [17] the authors use SGA (simple genetic algorithm) to construct Hadamard matrices of various orders. The thesis [84] is concerned with the application of GAs to construct D-optimal experimental designs. The thesis [46] contains GAs to search for normal and near-Yang sequences.

6 Simulated Annealing

Simulated annealing (SA) was presented in [58] as a new approach to approximate solutions of difficult optimization problems. This approach is inspired from statistical mechanics and is motivated by an analogy to the behavior of physical systems in the presence of heat. In the words of Kirkpatrick et al.,

there is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters).

In particular, they found that the Metropolis algorithm was well suited to approximate the observed behavior of solids when subjected to an annealing process. Recall that a local optimization method starts with some initial solution and, at each iteration, searches its neighbors for a better solution until no better solution can be found. A weakness of the local optimization method is that the program may get trapped at some local minima instead of detecting a global minimum. Simulated annealing addresses this issue by using randomization to improve on the local optimization search process, and in particular, occasional uphill moves (i.e., less than optimal solutions) are accepted with the hope that by doing so, a better solution will be obtainable later on. There are several parameters in the simulated annealing process which can significantly impact the actual performance, and there do not seem to be general rules on how to choose these parameters for the specific problem at hand.

In the annealing process, the physical system being optimized is first “melted” at a high temperature. The temperature is then decreased slowly until the system “freezes,” and no further physical changes occur. When the system’s structure is “frozen,” this corresponds to a minimum energy configuration. The physical analogy for simulated annealing is often described using the annealing method for growing a crystal. The rate at which the temperature is reduced is vitally important to the annealing process. If the cooling is done too quickly, widespread irregularities will form and the trapped energy level will be higher than what one would find in a perfectly structured crystal. It can be shown that the states of this physical system correspond to the solutions of an optimization problem. The energy of a state in the physical world corresponds to the cost of a solution in the optimization world. The minimum energy configuration that is obtained when a system is frozen corresponds to an optimal solution of an optimization problem.

6.1 A General SA Algorithmic Scheme

The basic ingredients needed to formulate a problem in a manner amenable to SA algorithms are:

  1. A finite set S: the set of all possible solutions.

  2. An objective function OF defined on S.

  3. The set \({S}^{{\ast}}\) of global minima of the OF (i.e., the desired solutions). It is assumed that \({S}^{{\ast}}\subset S\), a proper subset of S.

  4. For each i ∈ S, a set S(i) ⊂ S − { i}: the set of neighbors of i.

  5. A nonincreasing function α(t), called the cooling schedule, which controls how the temperature is lowered.

With the previous terminology in place, a general SA algorithmic scheme looks like:

  • Randomly select initial solution s 0;

  • Select an initial temperature t 0 > 0;

  • Select a temperature reduction function α;

  • Repeat

  • Repeat

  • Randomly select a neighbor, s of s 0;

  • δ = OF(s) − OF(s 0);

  • if(δ < 0)

  • then s 0 = s;

  • else

  • generate random x uniformly in the range (0, 1);

  • if \(x < {e}^{-\delta /t}\)

  • then s 0 = s;

  • Until iteration_count = nrep

  • Set t = α(t);

  • Until stopping condition = true

  • s 0 is the approximation to the optimal solution

Recall that the goal of simulated annealing is to improve on local search strategies by allowing some uphill moves in a controlled manner. The above SA scheme accepts a solution at each iteration if it is determined to be better than the current “best” solution. However, we also see how a less than optimal solution can be accepted, namely with probability \({e}^{-\delta /t}\). Also note that for the purposes of the OF required by SA, one can use OF 1 and OF 2 as defined previously for GAs. Commonly used cooling schedules include \(\alpha (t) = at\), where a < 1, and \(\alpha (t) = \frac{t} {1 +\beta t}\), where β is small.
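Here is a compact, self-contained Python sketch of the scheme above, specialized to the problem of Sect. 5 (minimizing OF 2 over pairs of ±1 sequences); the neighbor move (flip one randomly chosen entry), the geometric cooling schedule, and all parameter values are illustrative assumptions rather than recommendations from the cited papers.

import math, random

def of2(a, b):
    n = len(a); m = (n - 1) // 2
    paf = lambda s, i: sum(s[k] * s[(k + i) % n] for k in range(n))
    return sum((paf(a, i) + paf(b, i) + 2) ** 2 for i in range(1, m + 1))

def anneal(n, t=10.0, a_cool=0.95, nrep=200, t_min=1e-3):
    cur = [random.choice([-1, 1]) for _ in range(2 * n)]   # concatenated pair (A, B)
    cur_val = of2(cur[:n], cur[n:])
    while t > t_min and cur_val > 0:
        for _ in range(nrep):
            nb = cur[:]
            nb[random.randrange(2 * n)] *= -1              # neighbor: flip one entry
            d = of2(nb[:n], nb[n:]) - cur_val
            if d < 0 or random.random() < math.exp(-d / t):  # uphill with prob e^{-delta/t}
                cur, cur_val = nb, cur_val + d
        t *= a_cool                                        # cooling schedule alpha(t) = a*t
    return cur[:n], cur[n:], cur_val

a, b, val = anneal(13)
print(val)   # 0 means a pair with constant periodic autocorrelation -2 was found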

The papers [12, 55, 70, 72, 77] use SA techniques to tackle design theory problems.

7 Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) is a population-based metaheuristic algorithm. It was introduced by R.C. Eberhart and J. Kennedy in 1995 [35] primarily for solving numerical optimization problems, as an alternative to evolutionary algorithms (EAs) [3]. Its proven efficiency on challenging optimization problems, as well as its easy implementation, rapidly placed PSO in a salient position among the state-of-the-art algorithms. Today, PSO counts a vast number of applications in diverse scientific fields [4, 5, 78], as well as an extensive bibliography [14, 37, 56, 83] and published scientific software [100].

Putting it formally, consider the n-dimensional global optimization problem:

$$\min _{x\in X\subset {\mathbb{R}}^{n}}f(x).$$

PSO probes the search space by using a swarm, namely, a set of search points:

$$S =\{ x_{1},x_{2},\ldots,x_{N}\},\qquad x_{i} \in X,\qquad i \in I =\{ 1,2,\ldots,N\}.$$

Each search point of S constitutes a particle, i.e., a search vector:

$$x_{i} = {(x_{i1},x_{i2},\ldots,x_{in})}^{\top }\in X,\qquad i \in I,$$

which is randomly initialized in X and allowed to iteratively move within it. Its motion is based on stochastically adapted position shifts, called velocity, while it retains in memory the best position it has ever visited. These quantities are denoted as

$$v_{i} = {(v_{i1},v_{i2},\ldots,v_{in})}^{\top },\qquad p_{ i} = {(p_{i1},p_{i2},\ldots,p_{in})}^{\top },\qquad i \in I,$$

respectively. Thus, if t denotes the current iteration of the algorithm, it holds that

$$p_{i}(t) =\arg \min _{q\in \{0,1,\ldots,t\}}\left \{f\big{(}x_{i}(q)\big{)}\right \},\qquad i \in I.$$

The best positions constitute a sort of experience for the particles, guiding them towards the most promising regions of the search space, i.e., regions that possess lower function values.

In order to avoid the premature convergence of the particles on local minimizers, the concept of neighborhood was introduced [96]. A neighborhood of the i-th particle is defined as a set,

$${\it \text{NB}}_{i,s} =\{ j_{1},j_{2},\ldots,j_{s}\} \subseteq I,\qquad i \in {\it \text{NB}}_{i,s},$$

and consists of the indices of all the particles with which it can exchange information. Then, the neighborhood’s best position

$$p_{g_{i}} =\arg \min _{j\in {\it \text{NB}}_{ i,s}}\{f(p_{j})\}$$
(5)

is used along with p i to update the i-th particle’s velocity at each iteration. The parameter s defines the neighborhood size. In the special case where s = N, the whole swarm constitutes the neighborhood. This case defines the so-called global PSO model (denoted as gbest), while strictly smaller neighborhoods correspond to the local PSO model (denoted as lbest). The schemes that are used for determining the particles that constitute each neighborhood are called neighborhood topologies, and they can have a crucial impact on PSO’s performance. A popular scheme is the ring topology, where each particle assumes as neighbors the particles with adjacent indices, with indices recycling after N. Based on the definitions above, the iterative scheme of PSO is defined as follows [15]:

$$\displaystyle\begin{array}{rcl} v_{ij}(t + 1)& =& \chi \left [v_{ij}(t) + c_{1}\mathcal{R}_{1}\left (p_{ij}(t) - x_{ij}(t)\right ) + c_{2}\mathcal{R}_{2}\left (p_{g_{i},j}(t) - x_{ij}(t)\right )\right ],\quad \end{array}$$
(6)
$$\displaystyle\begin{array}{rcl} x_{ij}(t + 1)& =& x_{ij}(t) + v_{ij}(t + 1),\end{array}$$
(7)

where i = 1, 2, …, N; j = 1, 2, …, n; the parameter χ is the constriction coefficient; the acceleration constants c 1 and c 2 are called the cognitive and the social parameter, respectively; and \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) are random variables uniformly distributed in the range [0, 1]. It should be noted that a different value of \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) is sampled for each i and j in (6) at each iteration. Also, the best position of each particle is updated at each iteration, as follows:

$$p_{i}(t+1) = \left \{\begin{array}{ll} x_{i}(t + 1),&\quad \text{if }f\big{(}x_{i}(t + 1)\big{)} < f\big{(}p_{i}(t)\big{)}, \\ p_{i}(t), &\quad \text{otherwise}, \end{array} \right.\qquad i \in I.$$
(8)

The PSO variant described above was introduced by M. Clerc and J. Kennedy in [15], and it has gained increasing popularity, especially in interdisciplinary applications. This can be attributed to its simplicity, its efficiency, and its theoretical background.

Based on the stability analysis [15], the parameter set χ = 0.729, c 1 = c 2 = 2.05 was determined as a satisfactory setting that produces a balanced convergence speed of the algorithm. Nevertheless, alternative settings have been introduced in relevant works [98]. Pseudocode of PSO is reported in Algorithm 1.

Unified particle swarm optimization (UPSO) was proposed as a scheme that harnesses the lbest and gbest PSO models, combining their exploration and exploitation properties [79, 82]. Let \(\mathcal{G}_{i}(t + 1)\) be the gbest velocity update of x i , while \(\mathcal{L}_{i}(t + 1)\) be the corresponding lbest velocity update. Then, from Eq. (6), it holds that

$$\displaystyle\begin{array}{rcl} \mathcal{G}_{ij}(t + 1)& =& \chi \left [v_{ij}(t) + c_{1}\mathcal{R}_{1}\left (p_{ij}(t) - x_{ij}(t)\right ) + c_{2}\mathcal{R}_{2}\left (p_{g,j}(t) - x_{ij}(t)\right )\right ],\end{array}$$
(9)
$$\displaystyle\begin{array}{rcl} \mathcal{L}_{ij}(t + 1)& =& \chi \left [v_{ij}(t) + c_{1}\mathcal{R}_{1}\left (p_{ij}(t) - x_{ij}(t)\right ) + c_{2}\mathcal{R}_{2}\left (p_{g_{i},j}(t) - x_{ij}(t)\right )\right ],\quad \quad \end{array}$$
(10)

where g denotes the overall best particle. The aggregation of these search directions defines the main UPSO scheme [79]:

$$\displaystyle\begin{array}{rcl} \mathcal{U}_{ij}(t + 1)& =& u\,\mathcal{G}_{ij}(t + 1) + (1 - u)\,\mathcal{L}_{ij}(t + 1),\end{array}$$
(11)
$$\displaystyle\begin{array}{rcl} x_{ij}(t + 1)& =& x_{ij}(t) + \mathcal{U}_{ij}(t + 1).\end{array}$$
(12)

The parameter u ∈ [0, 1] is called the unification factor and determines the trade-off between the two directions. Obviously, the standard gbest PSO is obtained by setting u = 1 in (11), while u = 0 corresponds to the standard lbest PSO. All intermediate values of u ∈ (0, 1) define composite UPSO variants that combine the exploration and exploitation properties of the global and local PSO variants.

Besides the basic UPSO scheme, a stochastic parameter can also be incorporated in Eq. (11) to enhance UPSO’s exploration capabilities [79]. Thus, depending on which variant UPSO is mostly based, Eq. (11) becomes

$$\mathcal{U}_{ij}(t + 1) = \mathcal{R}_{3}\,u\,\mathcal{G}_{ij}(t + 1) + (1 - u)\,\mathcal{L}_{ij}(t + 1),$$
(13)

which is mostly based on lbest, or,

$$\mathcal{U}_{ij}(t + 1) = u\,\mathcal{G}_{ij}(t + 1) + \mathcal{R}_{3}\,(1 - u)\,\mathcal{L}_{ij}(t + 1),$$
(14)

which is mostly based on gbest. The random variable \(\mathcal{R}_{3} \sim \mathcal{N}(\mu {,\sigma }^{2})\) is normally distributed, resembling the mutation operator in EAs. Yet, it is biased towards directions that are consistent with the PSO dynamic, instead of the pure random mutation used in EAs. Following the assumptions of Matyas [71], a proof of convergence in probability was derived for the UPSO variants of Eqs. (13) and (14) [79].

Algorithm 1: Pseudocode of the standard PSO algorithm

Input: Objective function, \(f : X \subset {\mathbb{R}}^{n} \rightarrow \mathbb{R}\); swarm size: N; parameters: χ, c 1, c 2.

Output: Best detected solution: \({x}^{{\ast}}\), \(f\left ({x}^{{\ast}}\right )\).

//Initialization

1 t ← 0.

2 for i ← 1 to N do

3 Initialize x i (t) and v i (t), randomly.

4 p i (t) ← x i (t). //Initialize best position

5 Evaluate \(f\left (x_{i}(t)\right )\). //Evaluate particle

6 end

//Main Iteration

7 while (termination criterion not met) do

//Position and Velocity Update

8 for i ← 1 to N do

9 Determine \(p_{g_{i}}(t)\) from the i–th particle’s neighborhood.

10 for j ← 1 to n do

11 \(v_{ij}(t + 1) \leftarrow \chi \left [v_{ij}(t) + c_{1}\mathcal{R}_{1}(p_{ij}(t) - x_{ij}(t)) + c_{2}\mathcal{R}_{2}(p_{g_{i},j}(t) - x_{ij}(t))\right ].\)

12 \(x_{ij}(t + 1) \leftarrow x_{ij}(t) + v_{ij}(t + 1).\)

13 end

14 end

//Best Positions Update

15 for i ← 1 to N do

16 Evaluate \(f\left (x_{i}(t + 1)\right )\). //Evaluate new position

17 \(p_{i}(t+1) \leftarrow \left \{\begin{array}{ll} x_{i}(t + 1),&\quad \text{if }f\left (x_{i}(t + 1)\right ) < f(p_{i}(t)), \\ p_{i}(t), &\quad \text{otherwise}.\\ \end{array} \right.\)

18 end

19 tt + 1.

20 end

Although PSO and UPSO have been primarily designed for real-valued optimization problems, there are various applications also in problems with integer, mixed-integer, and binary variables (e.g., see [62, 69, 80, 81, 85, 86]). The most common and easiest way to achieve this is by rounding the corresponding variables to their nearest integers when evaluating the particles. Of course, problem-dependent techniques may additionally be needed to enhance performance.
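A minimal Python sketch of this rounding idea for the binary problems of this chapter (mapping by sign is one simple choice, not a prescription from the cited works): the particles move in \({\mathbb{R}}^{n}\), but each evaluation first maps the position to a ±1 sequence.

import numpy as np

def to_pm1(x):
    """Map a real-valued particle position to a +/-1 sequence by its signs."""
    return np.where(x >= 0, 1, -1)

def evaluate_particle(x, objective):
    """Evaluate a real-valued position on a +/-1 objective (e.g., OF 2 of Sect. 5)."""
    return objective(to_pm1(x))

x = np.random.uniform(-1.0, 1.0, size=14)   # one particle encoding two length-7 sequences
print(to_pm1(x))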

8 Ant Colony Optimization (ACO)

Ant colony optimization (ACO) is a probabilistic algorithm, primarily for solving combinatorial optimization problems. The general algorithmic concept of ACO was introduced by M. Dorigo in 1992 [30], aiming at detecting optimal paths in graphs by imitating the foraging behavior of real ants. The first approaches proved to be very promising, leading to further developments and enhancements [7, 31, 33].

Today, there is a variety of ant-based algorithms. Perhaps the most popular variants are Ant System (AS) [34], Max–Min Ant System (MMAS) [95], and Ant Colony System (ACS) [32]. Setting aside some differences, most ant-based approaches follow the same procedure flow, which can be summarized in the following actions:

Procedure ACO

WHILE (termination condition is false)

Construct_Solutions()

Optional_Further_Actions()

Update_Pheromones()

END WHILE

In general, the algorithm assumes a number, K, of search agents, called the ants. Each ant constructs a solution component by component. The construction procedure is based on the probabilistic selection of each component’s value from a discrete and finite set. For this purpose, a table of pheromones is retained and continuously updated. The pheromones play the role of quality weights, used for determining the corresponding selection probability of each possible value of a solution component.

Putting it formally, let

$$x = {(x_{1},x_{2},\ldots,x_{n})}^{\top }\in D,$$

be a candidate solution of the problem, with each component \(x_i\), i = 1, 2, …, n, taking values from a discrete and finite set \(D_i\). The construction of a solution begins from an initially empty partial solution, \({x}^{p} = \emptyset\). At each step, the next component of \({x}^{p}\) is assigned one of the possible values from its corresponding discrete set. Naturally, the values assigned to previous components may affect the feasible set of candidate values for the current one, prohibiting the use of some of them so as to retain feasibility. Assume that \({x}^{p}\) is already built up to the (i − 1)-th component, and let \(D_{i} =\{ d_{i1},d_{i2},\ldots,d_{iM}\}\) be the set of candidate values for the i-th component. Each possible value \(d_{ij}\) is associated with a pheromone level \(\tau _{ij}\). The selection of a specific value for the i-th component is based on a probabilistic selection among the different candidate values, using the selection probabilities:

$$P(d_{ij}\vert {x}^{p}) = \frac{\tau _{ij}^{a}\,\eta {(d_{ ij})}^{b}} {\displaystyle\sum \limits _{d_{il}\in D_{i}}\tau _{il}^{a}\,\eta {(d_{il})}^{b}},\qquad j = 1,2,\ldots,M,$$
(15)

where η(⋅) is a heuristic function that offers a measure of the “desirability” of each component value in the solution (e.g., in TSP problems, it can be associated with the traveled distance). The parameters a and b are fixed, and they offer flexibility in tuning the trade-off between pheromone and heuristic information.

Upon constructing a complete solution, it is evaluated with the objective function, and the pheromones are updated so that component values that appear in better solutions increase their pheromone levels and, consequently, their selection probabilities in subsequent iterations of the algorithm. Also, all pheromones undergo a standard reduction in their values, a procedure also known as evaporation. Thus, the pheromones are updated as follows:

$$\tau _{ij} = \left \{\begin{array}{ll} (1-\rho )\,\tau _{ij} +\rho \, \Delta \tau,&\text{if }\,\,d_{ij}\,\,\text{appears in a good candidate solution}, \\ (1-\rho )\,\tau _{ij}, &\mathrm{otherwise}, \end{array} \right.$$
(16)

where ρ ∈ (0, 1] is the evaporation rate and Δτ is an increment that can be either fixed or proportional to the quality of the corresponding candidate solution.
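A minimal Python sketch of the two steps just described (Eqs. (15) and (16)); the heuristic η, the exponents a and b, and the parameter values are generic placeholders.

import random

def select_value(D_i, tau, eta, a=1.0, b=1.0):
    """Probabilistic choice of a component value according to Eq. (15)."""
    weights = [tau[d] ** a * eta(d) ** b for d in D_i]
    return random.choices(D_i, weights=weights, k=1)[0]

def update_pheromones(tau, good_values, rho=0.1, delta=1.0):
    """Eq. (16): evaporate everything, reward values seen in good solutions."""
    for d in tau:
        tau[d] = (1 - rho) * tau[d] + (rho * delta if d in good_values else 0.0)

# Toy usage: one component with candidate values {-1, +1} and a uniform heuristic.
tau = {-1: 1.0, +1: 1.0}
chosen = select_value([-1, +1], tau, eta=lambda d: 1.0)
update_pheromones(tau, {chosen})
print(chosen, tau)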

Individual ant-based approaches deviate from the basic scheme described above in various ways. For example, in AS with K ants, the pheromone update takes place after all ants have constructed their solutions, and they all contribute to the update:

$$\tau _{ij} = (1-\rho )\,\tau _{ij} +\displaystyle\sum \limits _{ k=1}^{K}\Delta \tau _{ ij}^{(k)},$$

where \(\Delta \tau _{ij}^{(k)}\) is the contributed pheromone from the k-th ant and it is related to its quality. On the other hand, ACS uses a different rule for selecting component values, where the value of the i-th component is determined as

$$d_{ig} =\arg \,\max \limits _{d_{ij}\in D_{i}}\left \{\tau _{ij}^{a}\,\eta {(d_{ ij})}^{b}\right \},\quad \text{if }\,\,q \leq q_{ 0},$$

where q ∈ [0, 1] is a uniformly distributed random number and q 0 ∈ [0, 1] is a user-defined parameter; otherwise, the scheme of Eq. (15) is used. Also, only the best ant contributes to the pheromones, instead of the whole swarm. Finally, MMAS employs pheromone bounds [τ min, τ max]. Each pheromone is initialized to its maximum value and updated only from the best ant, either of the current iteration or overall.

There is a rich literature of applications of ant-based approaches in combinatorial optimization problems [7, 31, 33]. Recently, such approaches were used also for the solution of autocorrelation problems [69].

9 Combinatorial Optimization

Combinatorial optimization methods are natural candidates for tackling extremely hard optimization problems such as the search for binary and ternary sequences that satisfy autocorrelation constraints, because such methods routinely deal with problems featuring thousands of discrete variables. The missing link that allows one to formalize autocorrelation problems in a manner suitable for combinatorial optimization methods was provided in [66] and [61]. This formalism in fact made it possible to settle in [66] the entire Bömer-Antweiler diagram, which had been proposed 20 years before, in [6]. The formalism is based on the fact that autocorrelation can be expressed in terms of certain symmetric matrices, namely the matrices associated with autocorrelation viewed as a quadratic form. We illustrate the formalism for D-optimal matrices, following the exposition in [61].

Let n be an odd positive integer and set \(m = \frac{n-1} {2}\). D-optimal matrices correspond to two binary sequences of length n each, with constant periodic autocorrelation 2; see [61]. D-optimal matrices are 2n ×2n { − 1, + 1}-matrices that attain Ehlich’s determinant bound. See [29] for a state-of-the-art survey of existence questions for D-optimal matrices with n < 200. We reproduce the following definition and lemma from [66].

Definition 8

Let \(a = {[a_{1},a_{2},\ldots,a_{n}]}^{T}\) be an n × 1 column vector, where \(a_{1},a_{2},\ldots,a_{n} \in \{-1,+1\}\), and consider the elements \(P_{A}(1),\ldots,P_{A}(m)\) of the periodic autocorrelation function vector. Define the following m symmetric matrices (which are independent of the sequence a), for i = 1, …, m:

$$M_{i} = (m_{jk}),\mbox{ s.t. }\left \{\begin{array}{cc} m_{jk} = m_{kj} = \frac{1} {2},&\mbox{ when the product }a_{j}a_{k}\mbox{ occurs in }P_{A}(i),\,\,j,k \in \{ 1,\ldots,n\}\\ 0, & \mbox{ otherwise} \\ \end{array} \right.$$

Lemma 2

The matrices M i can be used to write equations involving autocorrelation in a matrix form:

  • For n odd:

    $${a}^{T}M_{ i}a = P_{A}(i),\,\,\,i = 1,\ldots,m.$$
  • For n even:

    $${a}^{T}M_{ i}a = P_{A}(i),\,\,\,i = 1,\ldots,m - 1\mbox{ and }{a}^{T}M_{ m}a = \frac{1} {2}P_{A}(m).$$

Now one can formulate the D-optimal matrices problem as a binary feasibility problem.

Binary Feasibility version of the D-optimal matrices problem

Find two sequences \(a = [a_{1},\ldots,a_{n}]\) and \(b = [b_{1},\ldots,b_{n}]\) (viewed as n × 1 column vectors) such that

$${a}^{T}M_{ i}a + {b}^{T}M_{ i}b = 2,\,\,\,i = 1,\ldots,m,$$

where \(a_{i},b_{i} \in \{-1,+1\}\), i = 1, …, n.

It is now clear that one can use combinatorial optimization algorithms to search for D-optimal matrices, as well as several other combinatorial matrices and sequences referenced in Table 1. Definition 8 can easily be adapted for aperiodic autocorrelation as well.
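A short Python sketch of Definition 8 and Lemma 2 for n odd (my own 0-indexed construction): \(M_i\) receives entries 1∕2 exactly at the index pairs whose product contributes to \(P_A(i)\), and the quadratic form \({a}^{T}M_{i}a\) reproduces \(P_A(i)\).

import numpy as np

def M(n, i):
    """Symmetric matrix M_i of Definition 8 (0-based indices, n odd)."""
    Mi = np.zeros((n, n))
    for k in range(n):
        j = (k + i) % n
        Mi[k, j] += 0.5
        Mi[j, k] += 0.5
    return Mi

def paf(a, i):
    n = len(a)
    return sum(a[k] * a[(k + i) % n] for k in range(n))

n = 7
a = np.random.choice([-1, 1], n)
for i in range(1, (n - 1) // 2 + 1):
    assert np.isclose(a @ M(n, i) @ a, paf(a, i))   # Lemma 2, n odd
print("a^T M_i a = P_A(i) verified")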

Note that the D-optimal matrices problem also features certain Diophantine constraints, as proved in [61] and subsequently generalized in [29]. It is not clear what the right way is to incorporate this kind of Diophantine constraint into the combinatorial optimization formalism described here, and this issue certainly deserves further investigation.

The papers [73–76] demonstrate how to use various optimization techniques to find new designs.

10 Applications

There are several applications of the combinatorial matrices and sequences described in Table 1 in such areas as telecommunications, coding theory, and cryptography. The reader can consult [45] for applications in signal design for communications, radar, and cryptography. The book [53] describes applications in signal processing, coding theory, and cryptography. The book [104] describes applications in communications, signal processing, and image processing. The paper [99] discusses how partial Hadamard matrices (see [20]) are used in the area of compressed sensing. The paper [93] describes in detail several applications of Hadamard matrices in code division multiple access (CDMA) communication systems and in telecommunications more broadly. The 55-page paper [49] describes applications of Hadamard matrices in binary codes, information processing, maximum determinant problems, spectrometry, pattern recognition, and other areas. The papers [91] and [92] and the thesis [9] discuss cryptography applications of Hadamard matrices. Chapter 10 of the book [51] is devoted to several combinatorial problems that can be tackled with stochastic search methods.

11 Conclusion

Combinatorial matrices and the associated binary and ternary sequences provide a wide array of extremely challenging optimization problems that can be tackled with a variety of metaheuristic and combinatorial methods. We tried to summarize some of these methods in the present chapter. Although there are no polynomial complexity algorithms to solve the problem of searching for such matrices and sequences, the combination of new insights, new algorithms, and new implementations will continue to produce new results. The second edition of the Handbook of Combinatorial Designs [16], edited by Charles J. Colbourn and Jeffrey H. Dinitz, is a very valuable resource regarding the state of the art on open cases for several types of such matrices and sequences. The on-line updated webpage maintained by the authors is also quite useful, as many open cases have been resolved since the publication of the handbook in 2007. We firmly believe that cross-fertilization of disciplines is a very important process, from which all involved disciplines benefit eventually. It seems reasonable to predict that the area of combinatorial matrices and their associated sequences will continue to experience strong interactions with other research areas, for many years to come.

12 Cross-References

Algorithms for the Satisfiability Problem

Binary Unconstrained Quadratic Optimization Problem