1 Introduction

The running time to reach an optimum is a key factor in determining the success of an evolutionary programming (EP) approach. Ideally, an implementation of an EP approach should run just long enough that the probability of having reached an optimum exceeds some desired value. However, few results on the running times of EP approaches can be found in the current literature.

EP was originally proposed as a technique for evolving finite state machines [1]. It has since been widely adopted as a powerful optimization framework for solving continuous optimization problems [2–4]. More recently, EP research has concentrated mainly on the role of its parameters and on improvements through adaptive strategies. As a result, several EP variants have been proposed, distinguished from each other mainly by mutation schemes based on different probability distributions.

Arguably, the first EP algorithm widely considered successful was the one with Gaussian mutation, which has been termed classical evolutionary programming (CEP) [2]. CEP has been intensively analyzed by Fogel [3, 4] and by Bäck and Schwefel [5, 6]. Subsequently, Yao et al. proposed fast evolutionary programming (FEP), which uses a new mutation scheme based on the Cauchy distribution [7]. Computational experiments showed that FEP is superior to CEP on multimodal problems and problems with dispersed peaks. Another EP variant [8] was later proposed using Lévy-distribution-based mutation, which we call Lévy evolutionary programming (LEP) in this paper. Empirical analyses show that, overall, LEP outperforms CEP and FEP on benchmark problems with multimodal and highly dispersed peak functions.

CEP [2], FEP [7], and LEP [8] can be considered the classical evolutionary programming algorithms. Several modified EPs have since been designed based on these three basic approaches [2, 7, 8].

The performance of EP approaches such as CEP, FEP, and LEP has usually been verified experimentally rather than theoretically. The theoretical foundations of EP have been an open problem ever since the technique was first put forward [1, 2]. In particular, Bäck and Schwefel [5, 6] suggested the convergence analysis of EP algorithms as a research topic in their surveys of evolutionary algorithms. Fogel [2–4] presented an initial proof of EP convergence on the basis of a discrete solution space. Rudolph [9–11] then showed that CEP and FEP converge under arbitrary initialization; this result is more general since it applies to a continuous solution space.

Previous convergence studies only considered whether an EP algorithm is able to find an optimum given infinitely many iterations; they did not address the speed of convergence, i.e., they lacked a running-time analysis. To date, running-time analyses have mainly focused on Boolean-individual EAs such as the (\(1+1\)) EA [12], the (\(N+N\)) EA [13], and multi-objective EAs [14–16]. Alternative theoretical measures for evaluating the running times of Boolean-individual EA approaches have also been proposed [17, 18]. Recently, the impact of particular EA components on runtime has been studied for mutation and selection [19] and for population size [18, 20, 21]. In addition to these studies of EAs solving Boolean functions, runtime results have also been obtained for several combinatorial optimization problems [22–24]. Lehre and Yao [25] completed a runtime analysis of the (\(1+1\)) EA for computing unique input-output sequences. Zhou et al. presented a series of EA analysis results for discrete optimization problems such as the minimum label spanning tree problem [26], the multiprocessor scheduling problem [27], and the maximum cut problem [28]. Other studies address tight bounds on the running times of EAs and randomized search heuristics [29–31].

As summarized above, the majority of runtime analyses are limited to discrete search spaces; analyses for continuous search spaces require more sophisticated modeling and remain relatively few, which is unsatisfactory from a theoretical point of view. Jägersküpper conducted rigorous runtime analyses of the (\(1+1\)) ES and (\(1+\lambda\)) ES minimizing the sphere function [32, 33]. Agapie et al. modeled the (\(1+1\)) ES as a renewal process under some reasonable assumptions and analyzed the first hitting time on the inclined plane [34] and the sphere function [35]. Chen et al. [36] proposed general drift conditions for estimating the upper bound of the first hitting time for EAs to find \(\epsilon\)-approximation solutions. Inspired by the studies above, especially the estimating approach [17] for the discrete case, this paper presents an analytical framework for the running time of evolutionary programming. We also discuss how the running times of CEP and FEP are influenced by the population size, the problem dimension, the search range, and the Lebesgue measure of the optimal neighborhood of the target problem. Furthermore, we give approximate conditions under which these EPs converge in a time bounded by a polynomial in n.

2 EP algorithms and their stochastic process model

2.1 Introduction to EP algorithms

This section introduces the two EP algorithms studied in this paper, CEP [2] and FEP [7]. The skeleton of the EP algorithms analyzed in this paper is given in Algorithm 1; the sole difference among CEP, FEP, and LEP lies in their treatment of step 3.

(Algorithm 1: the skeleton of the EP algorithms analyzed in this paper, steps 1–5)

In step 1, the k generated individuals are denoted by pairs of real vectors \(\mathbf {v}_i = ( \mathbf {x}_i , \varvec{\sigma }_i )\), \(i = 1,2,\ldots ,k\), where the elements \(\mathbf {x}_i = (x_{i1}, x_{i2}, \ldots , x_{in} )\) are the variables to be optimized and the elements \(\varvec{\sigma }_i = (\sigma _{i1}, \sigma _{i2},\ldots , \sigma _{in})\) are variation variables that affect the generation of offspring. We set the iteration number \(t = 0\) and the initial \(\sigma _{ij} \le 2\) \((j = 1,\ldots , n)\), as proposed in reference [8].

The fitness values in steps 2 and 4 are calculated according to the objective function of the target problem.

In step 3, a single \({\bar{\mathbf{v}}}_i = ({\bar{\mathbf{x}}_i},\bar{\varvec{\sigma }_i})\) is generated for each individual \(\mathbf {v}_i\), where \(i = 1,\ldots ,k\). For \(j =1,\ldots ,n\),

$$\begin{aligned} \bar{\sigma }_{ij} = V(\sigma _{ij}) \end{aligned}$$
(1)

where \(V(\sigma _{ij})\) denotes a renewing function of the variation variable \(\sigma _{ij}\). The renewing function \(\bar{\sigma }_{ij} = V(\sigma _{ij})\) may take various forms; a representative implementation can be found in reference [7]. For ease of theoretical analysis, however, our case studies (see Sect. 4) consider a simple renewing function, namely the constant function

$$\begin{aligned} V(\sigma _{ij})=\sigma \end{aligned}$$
(2)

The three EP algorithms differ in how \({\bar{\mathbf{x}}}_i\) is derived, as follows (in this paper, we consider only CEP and FEP):

$$\begin{aligned} CEP:\quad \bar{x}_{ij}= & {} x_{ij}+\bar{\sigma }_{ij}N_j(0,1) \end{aligned}$$
(3)
$$\begin{aligned} FEP:\quad \bar{x}_{ij}= & {} x_{ij}+\bar{\sigma }_{ij}\delta _j\qquad \ \ \end{aligned}$$
(4)

where \(N_j(0,1)\) is a standard Gaussian random variable generated anew for each j, and \(\delta _j\) is a Cauchy random number produced anew for the jth dimension, with density function

$$\begin{aligned} C_{\phi =1}(y)=\pi ^{-1}\cdot (1+y^2)^{-1} \quad -\infty< y < +\infty \end{aligned}$$
(5)
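Integrating Eq. (5) gives the distribution function of \(\delta _j\), which yields the standard inverse-transform sampler (a routine derivation added here for completeness; it is not spelled out in the original):

$$\begin{aligned} F(y)=\int _{-\infty }^{y}\frac{\mathrm {d}z}{\pi (1+z^{2})}=\frac{1}{2}+\frac{\arctan y}{\pi },\qquad \delta _j=F^{-1}(u)=\tan \big (\pi (u-\tfrac{1}{2})\big ),\quad u\sim U(0,1) \end{aligned}$$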

In step 5, for each individual in the union of all parents and offspring, q distinct opponent individuals are selected uniformly at random for comparison. If the selected individual’s fitness value is better than the opponent’s, the individual obtains a “win.” The top k individuals with the most “wins” are selected as the parents of the next iteration, with ties broken in favor of better fitness.
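The following is a minimal Python sketch of Algorithm 1 under the constant renewing function of Eq. (2). It is our own illustration, so the function names and default parameters are hypothetical rather than taken from the original implementations:

```python
import math
import random

def ep(fitness, n, k, b, sigma, q, mutation, iterations=1000):
    # Step 1: generate k individuals x_i uniformly in S = [-b, b]^n;
    # by Eq. (2) every variation variable stays fixed at sigma.
    pop = [[random.uniform(-b, b) for _ in range(n)] for _ in range(k)]
    for _ in range(iterations):
        # Step 3: one offspring per parent via Eq. (3) (CEP) or Eq. (4) (FEP).
        offspring = [[x + sigma * mutation() for x in ind] for ind in pop]
        # Steps 2/4 evaluate fitness; step 5 runs the q-tournament over the
        # union of parents and offspring (a "win" means a no-worse objective
        # value, since we are minimizing).
        union = pop + offspring
        wins = [sum(fitness(ind) <= fitness(op)
                    for op in random.sample(union, q)) for ind in union]
        ranked = sorted(zip(union, wins),
                        key=lambda p: (-p[1], fitness(p[0])))
        pop = [ind for ind, _ in ranked[:k]]  # top k by wins, ties by fitness
    return min(pop, key=fitness)

gauss = lambda: random.gauss(0.0, 1.0)                        # CEP, Eq. (3)
cauchy = lambda: math.tan(math.pi * (random.random() - 0.5))  # FEP, Eq. (4)
```

For example, `ep(lambda x: sum(v * v for v in x), n=3, k=20, b=1.0, sigma=2.0, q=10, mutation=gauss)` runs CEP on the sphere function of Sect. 4.3 with the \(\sigma =2b\) setting analyzed in Sect. 4.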

2.2 Target problem of EP algorithms

Without loss of generality, we assume that the EP approaches analyzed in this study aim to solve a minimization problem in a continuous search space, defined as follows.

Definition 1

(minimization problem) Let \(\varvec{S} \subseteq \varvec{R}^n\) be a bounded subspace of the n-dimensional real domain \(\varvec{R}^n\), and let \(f: \varvec{S}\rightarrow \varvec{R}\) be an n-dimensional real function. A minimization problem, denoted by the 2-tuple \((\varvec{S}, f)\), is to find an n-dimensional vector \(\mathbf {x}_{\text{min}}\in \varvec{S}\) such that \(\forall \mathbf {x}\in \varvec{S}, f(\mathbf {x}_{{\text{min}}}) \le f(\mathbf {x})\).

Without loss of generality, we can assume that \(\mathbf {S}=\prod _{i=1}^{n}[-b_{i},b_{i}]\), where \(b_{i}=b>0\). The function \(f:\mathbf {S}\rightarrow \mathbf {R}\) is called the objective function of the minimization problem. We do not require f to be continuous, but it must be bounded. Furthermore, we consider only unconstrained minimization.

The following properties are assumed, and we will make use of them in our analyses.

  1. The subset of optimal solutions in \(\mathbf {S}\) is non-empty.

  2. Let \(f^{*}=\min \{f(x)|x\in \mathbf {S}\}\) be the fitness value of an optimal solution, and let \(\mathbf {S}^{*}(\varepsilon )=\{x\in \mathbf {S}|f(x)<f^{*}+\varepsilon \}\) be the optimal \(\varepsilon\) neighborhood. Every element of \(\mathbf {S}^{*}(\varepsilon )\) is an optimal solution of the minimization problem.

  3. \(\forall \varepsilon >0\), \(m\big (\mathbf {S}^{*}(\varepsilon )\big )>0\), where \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\) denotes the Lebesgue measure of \(\mathbf {S}^{*}(\varepsilon )\).

The first assumption states that an optimal solution to the problem exists. The second gives a rigorous definition of optimal solutions for continuous minimization problems. The third indicates that there are always solutions whose objective values lie arbitrarily close to the optimum and that such solutions occupy a set of positive measure, which makes the minimization problem solvable.
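As a concrete illustration of the three assumptions (our own example, not taken from the original), consider the one-dimensional problem \(f(x)=x^{2}\) on \(\mathbf {S}=[-1,1]\):

$$\begin{aligned} f^{*}=0,\qquad \mathbf {S}^{*}(\varepsilon )=\{x\in [-1,1]\,|\,x^{2}<\varepsilon \}=(-\sqrt{\varepsilon },\sqrt{\varepsilon }),\qquad m\big (\mathbf {S}^{*}(\varepsilon )\big )=2\sqrt{\varepsilon }>0 \end{aligned}$$

for \(\varepsilon \le 1\), so all three assumptions are satisfied.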

2.3 Stochastic process model of EP algorithms

Our running-time analyses are based on representing the EP algorithms as Markov processes. In this section, we explain the terminologies and notations used throughout the rest of this study.

Definition 2

(stochastic process of EP) The stochastic process of an evolutionary programming algorithm EP is denoted by \(\{\xi _{t}^{\text{EP}}\}_{t=0}^{+\infty }\); \(\xi _{t}^{{\text{EP}}}=(\mathbf {v}_{1}^{(t)},\mathbf {v}_{2}^{(t)}, \ldots ,\mathbf {v}_{k}^{(t)})\), where \(\mathbf {v}_{i}^{(t)}=(\mathbf {x}_{i}^{(t)}, \varvec{\sigma }_{i}^{(t)})\) is the ith individual at the tth iteration.

The stochastic status \(\xi _{t}^{{\text{EP}}}\) represents the population of the tth iteration of the algorithm EP. All stochastic processes examined in this paper are discrete-time, i.e., \(t\in \{0,1,2,\ldots \}\).

Definition 3

(status space) The status space of EP is \(\varOmega _{{\text{EP}}}=\{(\mathbf {v}_{1},\mathbf {v}_{2},\ldots ,\mathbf {v}_{k})| \mathbf {v}_{i}=(\mathbf {x}_{i},\varvec{\sigma }_{i}),\mathbf {x}_{i}\in \mathbf {S},\sigma _{ij}\le 2,j=1,\ldots ,n\}\).

\(\varOmega _{{\text{EP}}}\) is the set of all possible population statuses for EP. Intuitively, each element of \(\varOmega _{{\text{EP}}}\) is associated with a possible population in the implementation of EP. Let \(\mathbf {x}^{*}\in \mathbf {S}^{*}(\varepsilon )\) be an optimal solution. We define the optimal status space as follows:

Definition 4

(optimal status space) The optimal status space of EP is the subset \(\varOmega _{{\text{EP}}}^{*}\subseteq \varOmega _{{\text{EP}}}\) such that \(\forall \xi _{{\text{EP}}}^{*}\in \varOmega _{{\text{EP}}}^{*}\), \(\exists (\mathbf {x}^{*},\varvec{\sigma })\in \xi _{{\text{EP}}}^{*}\), where \(\varvec{\sigma }=(\sigma ,\sigma ,\ldots ,\sigma )\), and \(\sigma >0\).

Hence, every member of \(\varOmega _{{\text{EP}}}^{*}\) contains at least one individual representing an optimal solution \(\mathbf {x}^{*}\).

Definition 5

(Markov process) A stochastic process \(\{\xi _{t}\}_{t=0}^{+\infty }\) with status space \(\varOmega\) is a Markov process if \(\forall \tilde{\varOmega }\subseteq \varOmega\), \(P\{\xi _{t+1}\in \tilde{\varOmega }|\xi _{0},\ldots ,\xi _{t}\}=P\{\xi _{t+1} \in \tilde{\varOmega }|\xi _{t}\}\).

Lemma 1

The stochastic process \(\{\xi _{t}^{{\text{EP}}}\}_{t=0}^{+\infty }\) of EP is a Markov process.

Proof

The proof is given in the “Appendix” section. \(\square\)

We now show that the stochastic process of EP is an absorbing Markov process, defined as follows.

Definition 6

(absorbing Markov process to optimal status space) A Markov process \(\{\xi _{t}\}_{t=0}^{+\infty }\) is an absorbing Markov process to \(\varOmega ^{*}\) if \(\exists \varOmega ^{*}\subset \varOmega\), such that \(P\{\xi _{t+1}\notin \varOmega ^{*}|\xi _{t}\in \varOmega ^{*}\}=0\) for \(t=0,1,\ldots\).

Lemma 2

The stochastic process \(\{\xi _{t}^{{\text{EP}}}\}_{t=0}^{+\infty }\) of EP is an absorbing Markov process to \(\varOmega _{{\text{EP}}}^{*}\).

Proof

The proof is given in the “Appendix” section. \(\square\)

This property implies that once an EP algorithm attains an optimal state, it will never leave optimality.

3 General running-time analysis framework for EP algorithms

The running time of EP algorithms is usually analyzed via a simpler measure known as the first hitting time [13, 17], which is the measure employed in this study.

Definition 7

(first hitting time of EP) \(\mu _{{\text{EP}}}\) is the first hitting time of EP if \(\mu _{{\text{EP}}}=\min \{t\ge 0:\xi _{t}^{{\text{EP}}}\in \varOmega _{{\text{EP}}}^{*}\}\).

If an EP algorithm is modeled as an absorbing Markov chain, its running time can be measured by the first hitting time \(\mu _{{\text{EP}}}\). We denote its expected value by \(E\mu _{{\text{EP}}}\), which can be calculated via Theorem 1. Hereinafter, we use the terms running time and first hitting time interchangeably.

Let \(\lambda _{t}^{{\text{EP}}}=P\{\xi _{t}^{{\text{EP}}}\in \varOmega _{{\text{EP}}}^{*}\}=P\{\mu _{{\text{EP}}}\le t\}\) be the probability that EP has attained an optimal state by time t.

Theorem 1

If \(\lim \nolimits _{t\rightarrow +\infty }\lambda _{t}^{{\text{EP}}}=1\), the expected first hitting time is \(E\mu _{{\text{EP}}}=\sum _{i=0}^{+\infty }(1-\lambda _{i}^{{\text{EP}}})\).
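The identity can be reconstructed from the tail-sum formula for non-negative integer-valued random variables (the full proof is in the “Appendix”):

$$\begin{aligned} E\mu _{{\text{EP}}}=\sum _{t=0}^{+\infty }P\{\mu _{{\text{EP}}}>t\}=\sum _{t=0}^{+\infty }\big (1-P\{\mu _{{\text{EP}}}\le t\}\big )=\sum _{t=0}^{+\infty }\big (1-\lambda _{t}^{{\text{EP}}}\big ) \end{aligned}$$

where the condition \(\lim \nolimits _{t\rightarrow +\infty }\lambda _{t}^{{\text{EP}}}=1\) ensures that \(\mu _{{\text{EP}}}\) is finite with probability 1.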

Corollary 1

The expected hitting time can also be expressed as \(E\mu _{{\text{EP}}}=(1-\lambda _{0}^{{\text{EP}}})\sum\nolimits_{t=0}^{+\infty }\prod _{i=1}^{t}(1-P\{\xi _{i}^{{\text{EP}}}\in \varOmega _{{\text{EP}}}^{*}|\xi _{i-1}^{{\text{EP}}}\notin \varOmega _{{\text{EP}}}^{*}\})\).

Proof

The proof is given in the “Appendix” section. \(\square\)

Theorem 1 and Corollary 1 provide a direct approach to calculating the first hitting time of EP. Corollary 1, however, is the more practical of the two, since \(p_{i}=P\{\xi _{i}^{{\text{EP}}}\in \varOmega _{{\text{EP}}}^{*}| \xi _{i-1}^{{\text{EP}}}\notin \varOmega _{{\text{EP}}}^{*}\}\), the probability that the process first finds an optimal solution at time i, depends directly on the mutation and selection techniques of the EP. In this respect, the framework of Corollary 1 is similar to the one in [17]; however, the estimating method of [17] is based on a probability \(q_{i}\) defined for discrete optimization: \(q_{i}=\sum \nolimits _{y\notin \varOmega _{{\text{EP}}}^{*}}P\{\xi _{i}^{{\text{EP}}} \in \varOmega _{{\text{EP}}}^{*}|\xi _{i-1}^{{\text{EP}}}=y\}P\{\xi _{i-1}^{{\text{EP}}}=y\}\).

\(p_{i}\) is a probability over a continuous status space, whereas \(q_{i}\) is defined for a discrete one. In general, the exact value of \(p_{i}\) is difficult to calculate. Alternatively, the first hitting time can be analyzed in terms of an upper and a lower bound on \(p_{i}\), as shown by Corollary 2 below and applied in Corollary 3:

Corollary 2

If \(\alpha _{t}\le P\{\xi _{t}^{{\text{EP}}}\in \varOmega _{{\text{EP}}}^{*}|\xi _{t-1}^{{\text{EP}}}\notin \varOmega _{{\text{EP}}}^{*}\}\le \beta _{t}\),

  1. \(\sum\nolimits_{t=0}^{+\infty }[(1-\lambda _{0}^{{\text{EP}}})\prod\nolimits_{i=1}^{t}(1-\beta _{i})]\le E\mu _{{\text{EP}}}\le \sum\nolimits_{t=0}^{+\infty }[(1-\lambda _{0}^{{\text{EP}}})\prod\nolimits_{i=1}^{t}(1-\alpha _{i})]\), where \(0<\alpha _{t},\beta _{t}<1\), and

  2. \(\beta ^{-1}(1-\lambda _{0}^{{\text{EP}}})\le E\mu _{{\text{EP}}}\le \alpha ^{-1}(1-\lambda _{0}^{{\text{EP}}})\) when \(\alpha _{t}=\alpha\) and \(\beta _{t}=\beta\) for all t.

Proof

The proof is given in the “Appendix” section. \(\square\)

Corollary 2 indicates that \(E\mu _{{\text{EP}}}\) can be studied via lower and upper bounds on \(P\{\xi _{t}^{{\text{EP}}}\in \varOmega _{{\text{EP}}}^{*}|\xi _{t-1}^{{\text{EP}}} \notin \varOmega _{{\text{EP}}}^{*}\}\). The theorem and corollaries thus constitute a first-hitting-time framework for the running-time analysis of EP. The running times of EPs based on Gaussian and Cauchy mutations are studied within this framework in the next section.
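To make the constant-bound case concrete, the following sketch evaluates conclusion (2) of Corollary 2 for hypothetical values of \(\alpha\), \(\beta\), and \(\lambda _{0}^{{\text{EP}}}\) (the numbers are illustrative only, not taken from the paper):

```python
# Corollary 2, conclusion (2): with constant per-step success probabilities
# alpha <= p_t <= beta, the expected first hitting time is bracketed by
# (1 - lambda0) / beta <= E[mu] <= (1 - lambda0) / alpha.
lambda0 = 0.0               # probability that the initial population is optimal
alpha, beta = 1e-4, 1e-3    # hypothetical per-iteration success bounds

lower = (1 - lambda0) / beta
upper = (1 - lambda0) / alpha
print(f"{lower:.0f} <= E[mu] <= {upper:.0f}")   # 1000 <= E[mu] <= 10000
```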

4 Running-time upper bounds of CEP and FEP

In this section, we use the framework proposed in Sect. 3 to study the running times of EPs based on Gaussian and Cauchy mutations, i.e., CEP and FEP. Moreover, our running-time analysis aims at a class of EPs with constant variation, as shown in Eq. (2).

4.1 Mean running time of CEP

CEP [2] is a classical EP, from which several continuous evolutionary algorithms are derived. This subsection mainly focuses on the running time of CEP proposed by Fogel [3], Bäck and Schwefel [5, 6]. The mutation of CEP is based on the standard normal distribution indicated by Eq. (3).

According to the running-time analysis framework, the probability \(p_{t}=P\{\xi _{t}^{C}\in \varOmega _{{\text{EP}}}^{*}|\xi _{t-1}^{C} \notin \varOmega _{{\text{EP}}}^{*}\}\) is a crucial factor for CEP; its properties are given by the following theorem.

Theorem 2

Let the stochastic process of CEP be denoted by \(\{\xi _{t}^{C}\}_{t=0}^{+\infty }\). Let k be the population size of CEP with solution space \(\mathbf {S}=\prod _{i=1}^{n}[-b_{i},b_{i}]\), where \(b_{i}=b>0\). Then \(\forall \varepsilon >0\), we have:

  1. For a fixed individual \((\mathbf {x},{\bar{\varvec{\sigma }}})\), \(P\{{\bar{\mathbf{x}}}\in \mathbf {S}^{*}(\varepsilon )\}\ge \frac{m\big (\mathbf {S}^{*}(\varepsilon )\big )}{(\sqrt{2\pi })^{n}}(\prod \nolimits _{j=1}^{n}\frac{1}{\sigma }) \exp \{-\sum \nolimits _{j=1}^{n}\frac{2b^{2}}{\sigma ^{2}}\}\), where the renewing function is \(V(\sigma _{ij})=\sigma\).

  2. The right-hand side of the inequality in conclusion (1) is maximized when \(\sigma =2b\).

  3. \(P\{\xi _{t}^{C}\in \varOmega _{{\text{EP}}}^{*}|\xi _{t-1}^{C}\notin \varOmega _{{\text{EP}}}^{*}\}\ge 1-(1-\frac{m\big (\mathbf {S}^{*}(\varepsilon )\big )}{(4\sqrt{e}\pi b)^{n}})^{k}\) if \(\sigma =2b\).

Proof

The proof is given in the “Appendix” section. \(\square\)

In Theorem 2, Part (1) gives a lower bound on the probability that an individual \(\mathbf {x}\) is renewed to an \({\bar{\mathbf{x}}}\) in the optimal neighborhood \(\mathbf {S}^{*}(\varepsilon )\); this lower bound is a function of the variation \(\sigma\). Part (2) states that the bound of Part (1) is tightest when \(\sigma =2b\). With this choice, the key quantity of Corollary 2, \(P\{\xi _{t}^{C}\in \varOmega _{{\text{EP}}}^{*}|\xi _{t-1}^{C} \notin \varOmega _{{\text{EP}}}^{*}\}\), admits the tight lower bound shown in Part (3).

Theorem 2 indicates that a larger \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\) leads to faster convergence for CEP since \(P\{\xi _{t}^{C}\in \varOmega _{{\text{EP}}}^{*}|\xi _{t-1}^{C}\notin \varOmega _{{\text{EP}}}^{*}\}\) becomes larger when \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\) is larger. Moreover, Theorem 2 produces the lower bound of \(P\{\xi _{t}^{C}\in \varOmega _{{\text{EP}}}^{*}|\xi _{t-1}^{C}\notin \varOmega _{{\text{EP}}}^{*}\}\), with which the first hitting time of CEP can be estimated following Corollary 2. Corollary 3 indicates the convergence capacity and running-time upper bound of CEP.

Corollary 3

Suppose the conditions of Theorem 2 are satisfied. Then:

  1. \(\lim \nolimits _{t\rightarrow +\infty }\lambda _{t}^{C}=1\), where \(\lambda _{t}^{C}=P\{\xi _{t}^{C}\in \varOmega _{{\text{EP}}}^{*}\}\), and

  2. \(\forall \varepsilon >0\),

    $$\begin{aligned} E\mu _{C}\le (1-\lambda _{0}^{C})\big (1-(1-\frac{m\big (\mathbf {S}^{*}(\varepsilon )\big )}{(4\sqrt{e}\pi b)^{n}})^{k}\big )^{-1} \end{aligned}$$
    (6)

Proof

The proof is given in the “Appendix” section. \(\square\)

Corollary 3 shows that CEP converges globally and that the upper bound on CEP’s running time decreases as the Lebesgue measure \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\) of the optimal \(\varepsilon\) neighborhood \(\mathbf {S}^{*}(\varepsilon )\) increases. A larger \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\) translates to a larger optimal \(\varepsilon\) neighborhood for every \(\varepsilon >0\), which makes the problem easier for CEP to solve. Moreover, an increase in the problem dimension n increases the upper bound, while enlarging the population size k makes the upper bound smaller.

According to Eq. (6), \(E\mu _{C}\) has a smaller upper bound when the population size k increases. Approximately,

$$\begin{aligned} E\mu _{C}\le (1-\lambda _{0}^{C})\frac{(4\sqrt{e}\pi b)^{n}}{km\big (\mathbf {S}^{*}(\varepsilon )\big )} \end{aligned}$$
(7)

since \(\Big (1-\frac{m\big (\mathbf {S}^{*}(\varepsilon )\big )}{(4\sqrt{e}\pi b)^{n}}\Big )^{k}\approx 1-k\frac{m\big (\mathbf {S}^{*}(\varepsilon )\big )}{(4\sqrt{e}\pi b)^{n}}\) when \(k\,m\big (\mathbf {S}^{*}(\varepsilon )\big )/(4\sqrt{e}\pi b)^{n}\) is small. Furthermore, if \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\ge C_{0}>0\) for some positive constant \(C_{0}\), the running time of CEP grows like an exponential function of n; that is, the running time of CEP is nearly \(O\big ((4\sqrt{e}\pi b)^{n}\big )\).

Moreover, we can give an approximate condition under which CEP converges in a time polynomial in n. Let \(P_{n}\) be a polynomial in n and suppose Eq. (7) holds. Then \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\ge \frac{(1-\lambda _{0}^{C})}{k}\cdot \frac{(4\sqrt{e}\pi b)^{n}}{P_{n}}>0\Leftrightarrow E\mu _{C}\le \frac{(1-\lambda _{0}^{C})(4\sqrt{e}\pi b)^{n}}{k\,m\big (\mathbf {S}^{*}(\varepsilon )\big )}\le P_{n}.\) Thus, the running time of CEP can be polynomial in n, under the constraint that \(\forall \varepsilon >0\), \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\ge \frac{(1-\lambda _{0}^{C})}{k}\cdot \frac{(4\sqrt{e}\pi b)^{n}}{P_{n}}\), where \(\mathbf {S}^{*}(\varepsilon )=\{\mathbf {x}\in \mathbf {S}|f(\mathbf {x})<f^{*}+\varepsilon \}\).
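As a numeric illustration of Eqs. (6) and (7), the following sketch evaluates both bounds for hypothetical values of n, b, k, and \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\); only the formulas themselves come from the text:

```python
import math

def cep_bound_exact(n, b, k, m, lambda0=0.0):
    # Eq. (6): E[mu_C] <= (1 - lambda0) / (1 - (1 - m/(4*sqrt(e)*pi*b)^n)^k)
    p = m / (4 * math.sqrt(math.e) * math.pi * b) ** n
    return (1 - lambda0) / (1 - (1 - p) ** k)

def cep_bound_approx(n, b, k, m, lambda0=0.0):
    # Eq. (7): E[mu_C] <= (1 - lambda0) * (4*sqrt(e)*pi*b)^n / (k * m)
    return (1 - lambda0) * (4 * math.sqrt(math.e) * math.pi * b) ** n / (k * m)

for n in (2, 4, 6):
    print(n, cep_bound_exact(n, b=1.0, k=50, m=0.1),
          cep_bound_approx(n, b=1.0, k=50, m=0.1))
```

Both bounds grow by a factor of roughly \(4\sqrt{e}\pi b\approx 20.7\) (for \(b=1\)) per added dimension, making the exponential dependence on n explicit.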

4.2 Mean running time of FEP

FEP was proposed by Yao et al. [7] as an improvement over CEP. The mutation of FEP is based on the Cauchy distribution, as indicated by Eq. (4). The properties of the probability \(p_{t}=P\{\xi _{t}^{F}\in \varOmega _{{\text{EP}}}^{*}| \xi _{t-1}^{F}\notin \varOmega _{{\text{EP}}}^{*}\}\) \((t=0,1,\ldots )\) are discussed in Theorem 3.

Theorem 3

Let \(\{\xi _{t}^{F}\}_{t=0}^{+\infty }\) be the stochastic process of FEP. Then \(\forall \varepsilon >0\):

  1. For a fixed individual, \(P\{{\bar{\mathbf{x}}}\in \mathbf {S}^{*}(\varepsilon )\}\ge \frac{m\big (\mathbf {S}^{*}(\varepsilon )\big )}{\pi ^{n}} (\sigma +\frac{4b^{2}}{\sigma })^{-n}\).

  2. The right-hand side of the inequality in conclusion (1) is maximized when \(\sigma =2b\).

  3. \(P\{\xi _{t}^{F}\in \varOmega _{{\text{EP}}}^{*}|\xi _{t-1}^{F}\notin \varOmega _{{\text{EP}}}^{*}\}\ge 1-(1-\frac{m\big (\mathbf {S}^{*}(\varepsilon )\big )}{(4\pi b)^{n}})^{k}\) if \(\sigma =2b\).

Proof

The proof is given in the “Appendix” section. \(\square\)

Similar to Theorem 2, choosing \(\sigma =2b\) in FEP yields a tight lower bound on the probabilities in Parts (1) and (3). Theorem 3 indicates that \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\) directly affects the convergence of FEP: a larger \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\) allows FEP to reach a status in the optimal status space (Definition 4) more easily. Conversely, a larger b (the bound of the search space) makes the problem harder for FEP. Within this convergence-time framework, Corollary 4 summarizes the convergence capacity and the running-time upper bound of FEP.

Corollary 4

Suppose the conditions of Theorem 3 are satisfied. Then:

  1. \(\lim \nolimits _{t\rightarrow +\infty }\lambda _{t}^{F}=1\), where \(\lambda _{t}^{F}=P\{\xi _{t}^{F}\in \varOmega _{{\text{EP}}}^{*}\}\), and

  2. \(\forall \varepsilon >0\), \(E\mu _{F}\le (1-\lambda _{0}^{F})\big (1-(1- \frac{m\big (\mathbf {S}^{*}(\varepsilon )\big )}{(4b\pi )^{n}})^{k}\big )^{-1}\).

Proof

The proof is given in the “Appendix” section. \(\square\)

Corollary 4 proves the convergence of FEP and indicates that a larger \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\) makes FEP converge faster. By an analysis similar to that for CEP, a larger problem dimension n increases the upper bound for FEP, while a larger population size k decreases it. Furthermore, \([-b,b]\) is the interval bound for each dimension of the variables to be optimized (Definition 1). Hence, the second conclusion of Corollary 4 also implies that a larger search space increases the upper bound of \(E\mu _{F}\).

When \(k\,m\big (\mathbf {S}^{*}(\varepsilon )\big )/(4b\pi )^{n}\) is small, we have \(\Big (1-\frac{m\big (\mathbf {S}^{*}(\varepsilon )\big )}{(4b\pi )^{n}}\Big )^{k}\approx 1-k\frac{m\big (\mathbf {S}^{*}(\varepsilon )\big )}{(4b\pi )^{n}}\), such that

$$\begin{aligned} E\mu _{F}\le \frac{1-\lambda _{0}^{F}}{km \big (\mathbf {S}^{*}(\varepsilon )\big )} (4b\pi )^{n} \end{aligned}$$
(8)

Hence, the running time of FEP is nearly \(O((4b\pi )^{n})\) when \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\) is a constant greater than zero.

Moreover, we can give an approximate condition under which FEP converges in polynomial time \(P(n)\): namely, \(m\big (\mathbf {S}^{*}(\varepsilon )\big )\ge \frac{1-\lambda _{0}^{F}}{k}\cdot \frac{(4b\pi )^{n}}{P(n)}\) for all \(\varepsilon >0\), provided Eq. (8) holds. The analysis is similar to that for CEP at the end of Sect. 4.1.
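For comparison, the base constants of the two upper bounds differ by exactly a factor of \(\sqrt{e}\) per dimension:

$$\begin{aligned} \frac{(4\sqrt{e}\pi b)^{n}}{(4\pi b)^{n}}=e^{n/2} \end{aligned}$$

so the right-hand side of Eq. (7) exceeds that of Eq. (8) by \(e^{n/2}\). Since both are merely upper bounds, this observation alone does not prove that FEP converges faster than CEP.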

4.3 Case study: simple EPs for sphere function problem

In this section, we analyzed the running time of simple EPs for a concrete problem presented by Definition 8.

Definition 8

A sphere function problem is to minimize the value of the function \(f({\mathbf{y}}) = \sum \nolimits _{i = 1}^n {y_i^2}\), where \({y_i} \in [ - 1,1]\).

Obviously, the optimal solution of the sphere function problem is the vector \((0,0,\ldots ,0)\).

Definition 9

A simple EP is an EP algorithm in which \({\varvec{\sigma }} = (1,1,\ldots ,1)\), there is only one individual in the population, and the variation renewing function is \(V({\sigma _j}) = 1\) for \(j = 1,\ldots ,n\).

Then, we have

Simple CEP:

$$\begin{aligned} {\bar{x}_j} = {x_j} + {N_j}(0,1) \end{aligned}$$
(9)

Simple FEP:

$$\begin{aligned} {\bar{x}_j} = {x_j} + {\delta _j} \end{aligned}$$
(10)

We now apply the proposed running-time theory to simple CEP and simple FEP to investigate which algorithm has the smaller upper bound on its running time.

Given the solution space \(\mathbf {S}=\prod \nolimits _{j=1}^{n}[-1,1]\), the optimal neighborhood is \(\mathbf {S}^{*}(\varepsilon )=\{\mathbf{y}\in \mathbf {S}|f(\mathbf{y})<\varepsilon \}\) with \(\varepsilon \le 1\). Let \(\tilde{\mathbf{S}}=\{\mathbf {z}|\mathbf {z}= {\bar{\mathbf{x}}}-\mathbf {x},{\bar{\mathbf{x}}}\in \mathbf {S}^{*}(\varepsilon ),\mathbf {x}\in \mathbf {S}\}\). We have the following results.

Theorem 4

Let \(\mu _{1}\) and \(\mu _{2}\) be the first hitting times of simple CEP and simple FEP, respectively. Then:

  1. \(E{\mu _1} \le {(\frac{{\sqrt{2\pi e} }}{\varepsilon })^n}\), and

  2. \(E{\mu _2} \le {(\frac{{2\pi }}{\varepsilon })^n}\).

Proof

The proof is given in the “Appendix” section. \(\square\)

As a result, we find that simple CEP has a slightly smaller upper bound than simple FEP. Likewise, CEP was shown to be slightly better than FEP at solving the sphere function problem by the experimental data in [7]. Therefore, Theorem 4 may also reflect that the Gaussian distribution is better suited than the Cauchy distribution to the sphere function problem.
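The comparison rests on the base constants of the two bounds in Theorem 4:

$$\begin{aligned} \sqrt{2\pi e}\approx 4.133<2\pi \approx 6.283,\qquad \text {hence}\qquad \Big (\frac{\sqrt{2\pi e}}{\varepsilon }\Big )^{n}<\Big (\frac{2\pi }{\varepsilon }\Big )^{n}\quad \text {for all } n\ge 1 \end{aligned}$$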

5 Experiment

Theorem 4 gives the upper bounds of the expected first hitting time for simple CEP and FEP on the n-dimensional sphere function problem. We conduct an experiment to validate this theoretical result, with the following settings: the error is fixed at \(\varepsilon =0.1\); the initial solution is fixed at \({{\varvec{x}}}_0=(1,1,\ldots ,1)\); for simple CEP and simple FEP, we conduct 500 runs each and denote by \(T_i\) the first hitting time of the ith run, so that \(\frac{{\sum \nolimits _{i = 1}^{500} {{T_i}}} }{{500}}\) is taken as an estimate of the expected first hitting time.
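A minimal Python sketch of this experiment follows. It is our own reconstruction under stated assumptions: the paper does not specify boundary handling (we omit it), and with a single individual we take the tournament of step 5 to reduce to keeping the better of parent and offspring:

```python
import math
import random

def first_hitting_time(n, mutation, eps=0.1, max_iter=10**7):
    x = [1.0] * n                        # fixed initial solution (1, ..., 1)
    fx = sum(v * v for v in x)           # sphere function value
    for t in range(max_iter):
        if fx < eps:                     # reached S*(eps)
            return t
        y = [v + mutation() for v in x]  # Eq. (9) / Eq. (10), sigma_j = 1
        fy = sum(v * v for v in y)
        if fy < fx:                      # keep the better of parent/offspring
            x, fx = y, fy
    return max_iter                      # guard against unbounded runs

gauss = lambda: random.gauss(0.0, 1.0)                        # simple CEP
cauchy = lambda: math.tan(math.pi * (random.random() - 0.5))  # simple FEP

for name, mut in (("CEP", gauss), ("FEP", cauchy)):
    runs = [first_hitting_time(3, mut) for _ in range(500)]
    print(name, sum(runs) / 500)         # estimate of E[mu] for n = 3
```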

(Fig. 1: expected first hitting time for simple CEP and FEP on the n-dimensional sphere function problem)

From Fig. 1, we can see that the expected first hitting time of simple CEP and FEP grows exponentially as the dimension n of the sphere function problem increases, and that the time complexity of FEP is higher than that of CEP, which validates the theoretical analysis in Sect. 4.3. We point out that the running time of CEP and FEP grows so fast that for dimension \(n>7\) the computation of the first hitting time took too long to complete on our computer; we therefore present only the cases \(n\le 7\).

6 Conclusion

In this paper, we propose a running-time framework for calculating the mean running time of EP and present a case study and an experiment. Based on this framework, the convergence and mean running times of CEP and FEP with constant variation are studied. We also obtain results on the worst-case running time of the considered EPs; the results show that the upper bounds are tightest when the variation is \(\sigma =2b\), where 2b is the length of the search interval per dimension. It is shown that the population size, the problem dimension, the search range, and the Lebesgue measure of the optimal neighborhood of the optimal solution directly affect the bounds on the expected convergence times of the considered EPs. Moreover, the larger the Lebesgue measure of the optimal neighborhood of the optimum, the lower the upper bound of the mean convergence time. In addition, the convergence time of the EPs can be polynomial on average, on the condition that the Lebesgue measure is greater than a threshold that is exponential in the problem dimension n.

However, the running-time framework and analysis given in this study can be improved. The deductions in the proofs of Theorems 2 and 3 use few properties of the distribution functions of the mutations, so it should be possible to tighten the results. By introducing more information about the specific mutation operators, stronger theoretical conclusions may be derivable. More importantly, in contrast to the rigorous conditions under which CEP and FEP can converge in polynomial time, running-time analyses of specific EP algorithms on real-world and constructive problems would have a more significant practical impact. Future research could therefore focus on the runtime analysis of specific case studies of EPs.

The proposed framework and results can be considered as a first step for analyzing the running time of evolutionary programming algorithms. We hope that our results can serve as a basis for further theoretical studies.