1 Introduction

Image segmentation separates an image into distinct regions with different characteristics and extracts the objects of interest from their background. In recent years, many techniques have been proposed and applied to image segmentation [6, 15, 17, 19, 33]. Among them, thresholding is regarded as a prevalent technique owing to its simplicity and precision [18]. The key to thresholding is to quickly find optimal thresholds under a given criterion so that the segmentation is accurate. For multi-level thresholding, however, traditional methods rely on exhaustive search: they cannot select the optimal thresholds efficiently, and the amount of computation grows with the number of thresholds.

To overcome these limitations, researchers have formulated the criteria of conventional thresholding methods as objective functions and incorporated meta-heuristic algorithms, such as the genetic algorithm (GA) [8], particle swarm optimization (PSO) [7] and the cuckoo search (CS) algorithm [28], to enhance the computational speed. Hammouche [8] used GA to find multi-level thresholds. Zhang [32] developed an artificial bee colony algorithm to optimize the Tsallis entropy. Ghamisi [7] proposed a fractional-order DPSO for determining thresholds. Wei [28] presented Yang’s CS algorithm for solving the multi-level Otsu problem. Sharma [20] designed a firefly algorithm based on the Lévy flight to maximize Kapur’s entropy.

Among these meta-heuristic algorithms, Yang’s CS algorithm has drawn particular attention. The CS algorithm [31] is an optimization technique proposed by Yang in 2009, and many researchers have demonstrated its efficiency in different applications, such as face recognition [22], engineering design [29] and neural network training [24]. Although the CS algorithm is simple, has few parameters and is highly efficient, it easily falls into local optima in the later iterations, which causes premature convergence and wastes time. To enhance its performance, many scholars have therefore proposed improved CS algorithms [10, 13, 21, 25, 26]. For example, Naik [12] proposed an adaptive cuckoo search (ACS) algorithm in which an adaptive strategy adjusts the step size and leads to faster convergence. However, the adaptive strategy is not applied to the discovery probability pa, which may limit the algorithm’s accuracy.

Therefore, based on the ACS algorithm, an improved ACS algorithm (IACS) is introduced for segmentation. In the proposed method, the initial step size is generated randomly rather than being designed a priori. Furthermore, to avoid local extrema and enhance the diversity of the cuckoos, the pa value changes nonlinearly with the iterations. To demonstrate the efficiency of the IACS algorithm, the methods of Yang [31] and Naik [12] are also applied to search for the multi-level thresholds.

The remainder of the paper is structured as follows. Section 2 briefly introduces three objective functions commonly applied to multi-level image thresholding, namely Otsu’s method, Kapur’s entropy and Tsallis entropy. Section 3 describes the methodologies of Yang’s CS and ACS algorithms and presents the improved method (IACS). Section 4 reports experiments that verify the effectiveness of the proposed method and analyzes the results. Finally, Section 5 offers some conclusions.

2 Multi-level thresholding

Recently, multi-level thresholding methods have been widely applied to separate multiple objects from the background of an image [2,3,4,5, 16, 21]. In this section, the key idea of multi-level thresholding is briefly introduced. Consider an image I with L distinct gray levels, where L = 256; multi-level image thresholding is then defined as:

$$ \begin{array}{@{}rcl@{}} C_{0}&=&\left\{ m(x,y)\in I \mid 0\le m(x,y)\le t_{1}-1 \right\} \\ C_{1}&=&\left\{ m(x,y)\in I \mid t_{1}\le m(x,y)\le t_{2}-1 \right\} \\ &\vdots& \\ C_{n}&=&\left\{ m(x,y)\in I \mid t_{n}\le m(x,y)\le L-1 \right\} \end{array} $$
(1)

where \(m\left( x,y \right)\) denotes the gray-level value of the pixel at position (x,y), and t1,t2,…,tn are the thresholds.
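For illustration, the partition in (1) simply assigns each pixel to the class whose gray-level interval contains it. The following minimal Python sketch (our own illustration, with the hypothetical names `img` and `thresholds`) computes this class map for a grayscale image:

```python
import numpy as np

# Minimal sketch of the class assignment in Eq. (1): pixels in
# [0, t1-1] map to class 0, [t1, t2-1] to class 1, ..., [tn, L-1] to class n.
def partition_classes(img, thresholds):
    # thresholds must be sorted; np.digitize returns the class index C_k.
    return np.digitize(img, bins=np.asarray(thresholds))
```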

2.1 Otsu’s method

Otsu’s criterion maximizes the between-class variance [14]. Assume that the probability of pixels at gray level i is pi (pi ≥ 0); the cumulative probability ωi of each class can then be calculated as:

$$ {{\omega }_{0}}=\sum\limits_{i=0}^{{{t}_{1}}-1}{{{p}_{i}}}\ ,\ {{\omega }_{1}}=\sum\limits_{i={{t}_{1}}}^{{{t}_{2}}-1}{{{p}_{i}}}\ ,{\cdots} ,\ {{\omega }_{n}}=\sum\limits_{i={{t}_{n}}}^{L-1}{{{p}_{i}}} $$
(2)

Then the optimal thresholds are obtained as follows:

$$ \left( {t}_{1}^{*},{t}_{2}^{*},...,{t}_{n}^{*} \right)=\operatorname{argmax} \left\{ {\sigma_{0}^{2}}+{\sigma_{1}^{2}}+{\cdots} +{\sigma_{n}^{2}} \right\} $$
(3)

where

$$ \begin{array}{@{}rcl@{}} \sigma_{0}^{2}&=&\omega_{0}\left( \mu_{0}-\mu_{T} \right)^{2},\ \ \mu_{0}=\frac{\sum\nolimits_{i=0}^{t_{1}-1}{i p_{i}}}{\omega_{0}} \\ \sigma_{1}^{2}&=&\omega_{1}\left( \mu_{1}-\mu_{T} \right)^{2},\ \ \mu_{1}=\frac{\sum\nolimits_{i=t_{1}}^{t_{2}-1}{i p_{i}}}{\omega_{1}} \\ &\vdots& \\ \sigma_{n}^{2}&=&\omega_{n}\left( \mu_{n}-\mu_{T} \right)^{2},\ \ \mu_{n}=\frac{\sum\nolimits_{i=t_{n}}^{L-1}{i p_{i}}}{\omega_{n}} \end{array} $$
(4)

where \({{\mu }_{T}}=\sum \nolimits _{i=0}^{L-1}{i{{p}_{i}}}\) is the mean intensity of the input image and μi (i = 0,1,…,n) is the mean intensity of the pixels in the corresponding class.
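As a concrete illustration of (2)–(4), the sketch below evaluates the Otsu objective for a candidate threshold vector. It is a minimal sketch under our own conventions: `p` is the normalized 256-bin histogram (p[i] is the probability of gray level i) and the thresholds are sorted integers:

```python
import numpy as np

# Between-class variance of Eqs. (2)-(4) for thresholds t1 < ... < tn.
def otsu_objective(p, thresholds):
    L = len(p)                        # number of gray levels (256)
    mu_T = np.sum(np.arange(L) * p)   # global mean intensity
    bounds = [0] + list(thresholds) + [L]
    sigma = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()            # class probability, Eq. (2)
        if w <= 0:
            continue                  # an empty class contributes nothing
        mu = np.sum(np.arange(lo, hi) * p[lo:hi]) / w
        sigma += w * (mu - mu_T) ** 2 # one term of the sum in Eq. (3)
    return sigma
```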

2.2 Kapur’s entropy method

Kapur’s entropy method selects the thresholds that maximize the sum of the class entropies [9]. The entropies Hi are defined as:

$$ \begin{array}{@{}rcl@{}} H_{0}&=&-\sum\limits_{i=0}^{t_{1}-1}{\left( \frac{p_{i}}{\omega_{0}}\right)}\ln \left( \frac{p_{i}}{\omega_{0}}\right),\ \ \ \omega_{0}=\sum\limits_{i=0}^{t_{1}-1}{p_{i}} \\ H_{1}&=&-\sum\limits_{i=t_{1}}^{t_{2}-1}{\left( \frac{p_{i}}{\omega_{1}}\right)}\ln \left( \frac{p_{i}}{\omega_{1}}\right),\ \ \ \omega_{1}=\sum\limits_{i=t_{1}}^{t_{2}-1}{p_{i}} \\ &\vdots& \\ H_{n}&=&-\sum\limits_{i=t_{n}}^{L-1}{\left( \frac{p_{i}}{\omega_{n}}\right)}\ln \left( \frac{p_{i}}{\omega_{n}}\right),\ \ \ \omega_{n}=\sum\limits_{i=t_{n}}^{L-1}{p_{i}} \end{array} $$
(5)

The optimal thresholds are gained as:

$$ \left( {t}_{1}^{*},{t}_{2}^{*},{\ldots} ,{t}_{n}^{*} \right)=\operatorname{argmax}\left\{ \sum\limits_{i=0}^{n}{{{H}_{i}}} \right\} $$
(6)
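Analogously, a minimal sketch of Kapur's objective (5)–(6), under the same histogram convention as the Otsu sketch above:

```python
import numpy as np

# Sum of class entropies, Eqs. (5)-(6), for thresholds t1 < ... < tn.
def kapur_objective(p, thresholds):
    L = len(p)
    bounds = [0] + list(thresholds) + [L]
    H = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        q = p[lo:hi] / w
        q = q[q > 0]                  # skip empty bins (0 * ln 0 := 0)
        H += -np.sum(q * np.log(q))   # class entropy H_i of Eq. (5)
    return H
```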

2.3 Tsallis entropy method

Tsallis entropy derives from Shannon entropy [23]; it is defined as:

$$ {{S}_{q}}=\frac{1-\sum\nolimits_{i=1}^{k}{{p_{i}^{q}}}}{q-1} $$
(7)

where q is the Tsallis parameter (entropic index) and p1,…,pk are the probabilities of the k possible states of the system. For a system composed of two subsystems O and B, the entropies satisfy the following pseudo-additivity rule:

$$ S_{q}\left( O+B \right)=S_{q}\left( O \right)+S_{q}\left( B \right)+\left( 1-q \right)\cdot S_{q}\left( O \right)\cdot S_{q}\left( B \right) $$
(8)

For the multi-level image thresholding problem [2], the optimal thresholds are obtained using (9):

$$ \begin{array}{@{}rcl@{}} \left( t_{1}^{*},t_{2}^{*},\ldots,t_{n}^{*} \right)&=&\operatorname{argmax}\left\{ S_{q}^{C_{0}}\left( t \right)+S_{q}^{C_{1}}\left( t \right)+\cdots+S_{q}^{C_{n}}\left( t \right)\right.\\ &&+ \left. \left( 1-q \right)\cdot S_{q}^{C_{0}}\left( t \right)\cdot S_{q}^{C_{1}}\left( t \right){\cdots} S_{q}^{C_{n}}\left( t \right) \right\} \end{array} $$
(9)

where

$$ \begin{array}{@{}rcl@{}} S_{q}^{{{C}_{0}}}\left( t \right)&=&\frac{1-\sum\nolimits_{i=0}^{{{t}_{1}}-1}{{{\left( {{p}_{i}}/{{p}^{{{C}_{0}}}} \right)}^{q}}}}{q-1},\ {{p}^{{{C}_{0}}}}=\sum\limits_{i=0}^{{{t}_{1}}-1}{{{p}_{i}}} \\ S_{q}^{{{C}_{1}}}\left( t \right)&=&\frac{1-\sum\nolimits_{i={{t}_{1}}}^{{{t}_{2}}-1}{{{\left( {{p}_{i}}/{{p}^{{{C}_{1}}}} \right)}^{q}}}}{q-1},\ {{p}^{{{C}_{1}}}}=\sum\limits_{i={{t}_{1}}}^{{{t}_{2}}-1}{{{p}_{i}}} \\ \ \ \ \ \ \ \ \ \ \ &\vdots& \\ S_{q}^{{{C}_{n}}}\left( t \right)&=&\frac{1-\sum\nolimits_{i={{t}_{n}}}^{L-1}{{{\left( {{p}_{i}}/{{p}^{{{C}_{n}}}} \right)}^{q}}}}{q-1},\ {{p}^{{{C}_{n}}}}=\sum\limits_{i={{t}_{n}}}^{L-1}{{{p}_{i}}} \end{array} $$
(10)

subject to the following constraints:

$$ \begin{array}{@{}rcl@{}} && \left| {{p}^{{{C}_{0}}}}+{{p}^{{{C}_{1}}}} \right|-1<{{S}^{{{C}_{0}}}}<1-\left| {{p}^{{{C}_{0}}}}+{{p}^{{{C}_{1}}}} \right| \\ && \left| {{p}^{{{C}_{1}}}}+{{p}^{{{C}_{2}}}} \right|-1<{{S}^{{{C}_{1}}}}<1-\left| {{p}^{{{C}_{1}}}}+{{p}^{{{C}_{2}}}} \right| \\ && \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\vdots} \\ && \left| {{p}^{{{C}_{\left( n-1 \right)}}}}+{{p}^{{{C}_{n}}}} \right|-1<{{S}^{{{C}_{\left( n-1 \right)}}}}<1-\left| {{p}^{{{C}_{\left( n-1 \right)}}}}+{{p}^{{{C}_{n}}}} \right| \\ \end{array} $$
(11)

In (11), \({{p}^{{{C}_{0}}}},{{p}^{{{C}_{1}}}},...,{{p}^{{{C}_{n}}}}\) and the corresponding entropies \({{S}^{{{C}_{0}}}},{{S}^{{{C}_{1}}}},{\ldots } ,{{S}^{{{C}_{n-1}}}}\) are computed with the optimal thresholds \({t}_{1}^{*},{t}_{2}^{*},...,{t}_{n}^{*}\).
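For illustration, a minimal sketch of the Tsallis objective (9)–(10) follows, again under the same histogram convention; q ≠ 1 is assumed and the constraint (11) is not checked here:

```python
import numpy as np

# Pseudo-additive Tsallis objective, Eqs. (9)-(10), with entropic index q.
def tsallis_objective(p, thresholds, q):
    L = len(p)
    bounds = [0] + list(thresholds) + [L]
    S = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            S.append(0.0)             # empty class: entropy taken as 0
            continue
        S.append((1.0 - np.sum((p[lo:hi] / w) ** q)) / (q - 1.0))
    return sum(S) + (1.0 - q) * np.prod(S)   # combination of Eq. (9)
```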

3 Methodology

3.1 Yang’s CS algorithm

Yang’s CS algorithm is inspired by the obligate brood parasitism of some cuckoo species. To simulate the whole process, Yang makes three assumptions [30]:

  1. each cuckoo lays one egg and places it in a randomly chosen nest;

  2. the best nests are preserved for the next iteration;

  3. the probability of a host bird discovering a cuckoo’s egg is represented by pa.

A new solution \(x_{i}^{\left (t+1 \right )}\) is then produced by a Lévy flight:

$$ x_{i}^{\left( t+1 \right)}=x_{i}^{\left( t \right)}+\alpha \oplus \mathrm{L\acute{e}vy}\left( \beta \right),\ \ i=1,2,\ldots,N $$
(12)

where α represents the step size and ⊕ denotes entry-wise multiplication. The other component of the CS algorithm is a biased random walk [25]:

$$ x_{i}^{\left( t+1 \right)}=\left\{ \begin{array}{ll} x_{i}^{\left( t \right)}+r\left( x_{p}^{\left( t \right)}-x_{g}^{\left( t \right)} \right), & \text{if}\ r_{a}>p_{a} \\ x_{i}^{\left( t \right)}, & \text{otherwise} \end{array} \right. $$
(13)

where \(x_{p}^{\left( t \right)}\) and \(x_{g}^{\left( t \right)}\) are the p-th and g-th randomly selected solutions in the population, respectively, and r ∈ [0,1] and ra ∈ [0,1] are random numbers. The flowchart of Yang’s CS is given in Fig. 1.
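For illustration, one generation of Yang's CS can be sketched as below. The Lévy step is drawn with Mantegna's algorithm, a common implementation choice that the text does not prescribe; the population array `nests`, the fitness function `f` (to be maximized) and the greedy replacement of improved nests are likewise assumptions of this sketch:

```python
import numpy as np
from math import gamma

# Mantegna's sampler for Lévy-distributed steps (beta = 1.5 is typical).
def levy_flight(shape, beta=1.5):
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, shape)
    v = np.random.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / beta)

# One CS generation over a population `nests` of shape (N, n).
def cs_generation(nests, f, alpha=0.01, pa=0.25):
    N, n = nests.shape
    # Lévy-flight move, Eq. (12); keep a new nest only if it improves.
    new = nests + alpha * levy_flight((N, n))
    for i in range(N):
        if f(new[i]) > f(nests[i]):
            nests[i] = new[i]
    # Biased random walk, Eq. (13): nests with r_a > p_a move toward
    # the difference of two randomly chosen solutions x_p and x_g.
    r = np.random.rand(N, 1)
    mask = np.random.rand(N, 1) > pa
    p_idx, g_idx = np.random.permutation(N), np.random.permutation(N)
    walk = nests + r * (nests[p_idx] - nests[g_idx])
    return np.where(mask, walk, nests)
```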

Fig. 1 Flowchart of Yang’s CS algorithm

3.2 Adaptive CS algorithm

Yang’s CS algorithm searches for the optimal solution with Lévy steps, which generally follow the Lévy distribution [11]. However, in the iterative process of Yang’s CS the Lévy step is not controlled by any measure, so the optimization may be time-consuming. To improve the efficiency of Yang’s CS, Naik [12] adapted the Lévy step based on the individual fitness value and the current iteration number:

$$ stepsize_{i}^{\left( t+1 \right)}={{\left( \frac{1}{t} \right)}^{\left| \frac{f_{best}^{t}-{f_{i}^{t}}}{f_{best}^{t}-f_{worst}^{t}} \right|}} $$
(14)

where t represents the current iteration number, \({f_{i}^{t}}\) denotes the fitness of the i-th nest in the t-th iteration, and \(f_{best}^{t}\) and \(f_{worst}^{t}\) are the best and worst fitness values in the t-th iteration, respectively. From (14), the step size is long at the beginning and decreases as the iterations proceed; hence it becomes very short as the algorithm approaches the global best solution. Equation (14) also shows that the step size changes adaptively with the fitness. Then (12) is rewritten as:

$$ x_{i}^{\left( t+1 \right)}=x_{i}^{\left( t \right)}+randn\times stepsize_{i}^{\left( t+1 \right)} $$
(15)

The advantage of the ACS algorithm is that it requires no initial step-size parameter; the modified method therefore has fewer parameters and is faster than Yang’s CS algorithm.
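A minimal sketch of the ACS update (14)–(15), assuming a fitness vector `fitness` (one value per nest, to be maximized) and iteration counter t ≥ 1:

```python
import numpy as np

# Adaptive step of Eq. (14) followed by the position update of Eq. (15).
def acs_step(nests, fitness, t):
    f_best, f_worst = fitness.max(), fitness.min()
    denom = (f_best - f_worst) if f_best != f_worst else 1e-12  # guard
    expo = np.abs((f_best - fitness) / denom)  # per-nest exponent, Eq. (14)
    step = (1.0 / t) ** expo                   # shrinks as t grows
    return nests + np.random.randn(*nests.shape) * step[:, None]  # Eq. (15)
```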

3.3 Improved ACS algorithm

The discovery probability pa introduced in Yang’s CS represents the possibility that a nest will be discarded or renewed [10]; it controls the balance between global and local search. If the pa value is small, many dimensions of a solution are changed in each generation; conversely, if it is large, poor individuals survive into the next generation and the best solutions may never be reached [25]. The parameter pa is fixed at 0.25 in the ACS algorithm [12], but different problems require different values. In this work, the value of pa changes adaptively with the number of iterations, as follows:

$$ \begin{array}{@{}rcl@{}} p_{a}&=&p_{a\_\max }-\left( p_{a\_\max }-p_{a\_\min } \right)\cdot \exp \left( \eta \cdot t \right) \\ \eta &=&\frac{1}{T}\ln \left( \frac{p_{a\_\min }}{p_{a\_\max }} \right) \end{array} $$
(16)

where T is the total number of iterations, and \({{p}_{a\_{\max \limits } }}\) and \({{p}_{a\_{\min \limits } }}\) are the predefined maximum and minimum discovery probabilities. As can be observed from (16), the pa value is relatively small in the early iterations, which increases the diversity of the solutions, while in the final iterations the larger pa value produces faster convergence to the optimum. In short, pa increases nonlinearly from a relatively small value to a large one over the course of the run, which improves the overall accuracy of the algorithm.

Further, (15) is reformulated as [13]:

$$ x_{i}^{\left( t+1 \right)}=x_{i}^{\left( t \right)}+ randn\times stepsize_{i}^{\left( t+1 \right)}\times \left( x_{i}^{\left( t \right)}-{{x}_{gbest}} \right) $$
(17)

where xgbest is the global best solution. In the early search phase the CS and ACS algorithms optimize quickly, but in the later period both easily sink into local extrema, which lowers the segmentation accuracy. The IACS algorithm adaptively adjusts the pa value, so it escapes local extremum points in time and continuously improves the fitness of the nests; the optimal thresholds are then obtained and the required targets are extracted accurately. The basic steps of the IACS and ACS algorithms are shown in Fig. 2 and the flowchart of IACS is given in Fig. 3.
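For illustration, the two IACS ingredients, the nonlinear pa schedule of (16) and the global-best-guided move of (17), can be sketched as follows; the scheduled pa then drives the abandonment step of (13):

```python
import numpy as np

# Nonlinear discovery-probability schedule, Eq. (16): pa grows from
# pa_min at t = 0 toward pa_max as t approaches the total budget T.
def pa_schedule(t, T, pa_min=0.15, pa_max=0.95):
    eta = np.log(pa_min / pa_max) / T
    return pa_max - (pa_max - pa_min) * np.exp(eta * t)

# Global-best-guided move, Eq. (17); `step` holds the per-nest adaptive
# step sizes of Eq. (14) and `x_gbest` is the best nest found so far.
def iacs_step(nests, step, x_gbest):
    return nests + np.random.randn(*nests.shape) * step[:, None] * (nests - x_gbest)
```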

Fig. 2 Pseudo-code of (a) ACS and (b) IACS

Fig. 3 Flowchart of the IACS algorithm

4 Experiments and discussions

Table 1 lists the parameter settings of the optimization algorithms. The number of nests and the maximum number of iterations are fixed to 50 and 150, respectively, for all three algorithms, and are kept identical across all experiments because they strongly influence the convergence rate and the computational time. The discovery probability pa decides whether a nest is given up: when pa = 1, the host bird throws out the cuckoo’s egg or simply abandons its nest; when pa = 0, the host bird cannot recognize the cuckoo’s egg and may hatch it. Hence, the range of pa lies between 0 and 1. For the IACS algorithm, the maximum pa is 0.95 and the minimum pa is 0.15 [26].

Table 1 Parameter values used in the CS, ACS and IACS algorithms

4.1 Image dataset

To explore the effectiveness of the three optimization algorithms for segmentation, the gray-scale test images Lena, Baboon and Cameraman are taken from standard image databases and shown in Fig. 4.

Fig. 4 Original test images

Baboon and Cameraman are of size 512 × 512, while Lena is 256 × 256. As can be seen from their respective histograms in Fig. 5, the images are multimodal in nature; each has distinct histogram features with irregular distributions of object and background. Hence, multi-level thresholding is an efficient technique for identifying the target regions exactly.

Fig. 5 Histograms of the test images

4.2 Performance metrics

The accuracy of the segmentation is assessed by the peak signal-to-noise ratio (PSNR) [1]. The larger the PSNR, the better the quality of the thresholding. Its mathematical expression is given below:

$$ PSNR=10{{\log }_{10}}\left( \frac{{{255}^{2}}}{MSE} \right) $$
(18)

where

$$ MSE=\frac{1}{EF}\sum\limits_{j=1}^{E}{\sum\limits_{k=1}^{F}{{{[I\left( j,k \right)-{I}^{\prime}\left( j,k \right)]}^{2}}}} $$
(19)

where E × F denotes the image size, and I and \({I}^{\prime }\) are the original and segmented images, respectively. The mean square error (MSE) is computed between the original image and its segmented counterpart for each number of thresholds and each algorithm; a lower MSE value indicates lower segmentation error.
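A minimal sketch of the metrics (18)–(19) for 8-bit images of equal size:

```python
import numpy as np

# MSE of Eq. (19) and PSNR of Eq. (18) between an original image I
# and its segmented counterpart I_seg.
def mse(I, I_seg):
    return np.mean((I.astype(np.float64) - I_seg.astype(np.float64)) ** 2)

def psnr(I, I_seg):
    return 10.0 * np.log10(255.0 ** 2 / mse(I, I_seg))
```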

The structural similarity index (SSIM) [27] measures the structural similarity between the segmented and original images. A higher SSIM value indicates that the segmented image preserves more information. It is defined as:

$$ SSIM\left( I,{I}^{\prime} \right)=\frac{\left( 2{{\mu }_{I}}{{\mu }_{{{I}^{\prime}}}}+{{D}_{1}} \right)\left( 2{{\sigma }_{I{I}^{\prime}}}+{{D}_{2}} \right)}{\left( {\mu_{I}^{2}}+\mu_{{{I}^{\prime}}}^{2}+{{D}_{1}} \right)\left( {\sigma_{I}^{2}}+\sigma_{{{I}^{\prime}}}^{2}+{{D}_{2}} \right)} $$
(20)

where μI and \({{\mu }_{{{I}^{\prime }}}}\) are the average intensities, \({{\sigma }_{I{I}^{\prime }}}\) is the covariance, \({\sigma _{I}^{2}}\) and \(\sigma _{{{I}^{\prime }}}^{2}\) are the variances, and D1 = D2 = 0.065.
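A minimal sketch of (20), evaluated globally over the whole image as the formula is stated (rather than over local windows, as some SSIM implementations do):

```python
import numpy as np

# Global SSIM of Eq. (20) with the paper's constants D1 = D2 = 0.065.
def ssim_global(I, I_seg, D1=0.065, D2=0.065):
    x = I.astype(np.float64).ravel()
    y = I_seg.astype(np.float64).ravel()
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + D1) * (2 * cov_xy + D2)) / \
           ((mu_x ** 2 + mu_y ** 2 + D1) * (var_x + var_y + D2))
```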

4.3 Experimental results evaluation

All the images were independently run 20 times with each of the presented multi-level segmentation methods. Tables 2, 3, 4, 5, 6 and 7 compare the quality-metric values obtained by Yang’s CS, ACS and the proposed IACS algorithm, where n represents the number of thresholds. The best segmentation results and the convergence curves of the different algorithms are shown in Figs. 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 and 17.

Table 2 Comparison of mean PSNR, MSE, CPU time and SSIM values between different algorithms using Otsu’s method
Table 3 Comparison of optimal thresholds and corresponding fitness values between CS, ACS and IACS using Otsu’s method
Table 4 Comparison of mean PSNR, MSE, CPU time and SSIM values between different algorithms using Kapur’s method
Table 5 Comparison of optimal thresholds and corresponding fitness values between CS, ACS and IACS using Kapur’s entropy method
Table 6 Comparison of mean PSNR, MSE, CPU time and SSIM values between different algorithms using Tsallis method
Table 7 Comparison of optimal thresholds and corresponding fitness values between CS, ACS and IACS using Tsallis entropy method
Fig. 6 Results of segmenting Lena using CS, ACS and IACS for 4 different threshold levels maximizing Otsu’s method

Fig. 7 Results of segmenting Baboon using CS, ACS and IACS for 4 different threshold levels maximizing Otsu’s method

Fig. 8 Results of segmenting Cameraman using CS, ACS and IACS for 4 different threshold levels maximizing Otsu’s method

Fig. 9 Convergence curves of segmenting using Otsu’s method with 11 threshold levels using CS, ACS and IACS algorithms

Fig. 10 Results of segmenting Lena using CS, ACS and IACS for 4 different threshold levels maximizing Kapur’s method

Fig. 11 Results of segmenting Baboon using CS, ACS and IACS for 4 different threshold levels maximizing Kapur’s method

Fig. 12 Results of segmenting Cameraman using CS, ACS and IACS for 4 different threshold levels maximizing Kapur’s method

Fig. 13 Convergence curves of segmenting using Kapur’s method with 11 threshold levels using CS, ACS and IACS algorithms

Fig. 14 Results of segmenting Lena using CS, ACS and IACS for 4 different threshold levels maximizing Tsallis method

Fig. 15 Results of segmenting Baboon using CS, ACS and IACS for 4 different threshold levels maximizing Tsallis method

Fig. 16 Results of segmenting Cameraman using CS, ACS and IACS for 4 different threshold levels maximizing Tsallis method

Fig. 17 Convergence curves of segmenting using Tsallis method with 11 threshold levels using CS, ACS and IACS algorithms

4.3.1 Analysis of the consequences employing Otsu’s method

In Table 2, all the algorithms perform equally for n = 5. However, as the number of thresholds increases, the IACS algorithm clearly attains higher PSNR values, which corresponds to a reduced MSE, and its SSIM values are also the highest; the ACS algorithm obtains better results than CS. Hence the IACS algorithm exhibits greater accuracy and robustness. Furthermore, Table 2c reports the average CPU time: compared with the other algorithms, IACS has the fastest computational speed with the minimum CPU time, closely followed by ACS. It can also be concluded that the proposed IACS algorithm keeps the lowest computational time as the number of threshold levels increases.

For the visual analysis, the segmented results are shown in Figs. 6, 7 and 8. All three algorithms produce segmented images of better quality when n = 11. However, the results obtained by the IACS algorithm are visually more pleasant, with ACS slightly better than CS. In Fig. 6, at n = 11, the background of the Lena image is clearest with the IACS algorithm, and the hat and hair are clearly identifiable, whereas the segmentation by the CS algorithm is less accurate, especially for the hat. The same phenomenon is observed in the other two images.

Figure 9 presents the convergence curves of the three algorithms for n = 11. In Fig. 9a, only the IACS algorithm acquires the maximum fitness value after about 60 iterations; the ACS and CS algorithms require at least 150 iterations. In Fig. 9b, at 20 iterations the fitness value is 1.445e+03 for the IACS algorithm but only 1.440e+03 for the ACS and CS algorithms. In Fig. 9c, the IACS algorithm reaches the fitness value 3.782e+03 in no more than 20 iterations, while the ACS algorithm requires 40 iterations and the CS algorithm at least 60. Therefore, the proposed IACS algorithm reaches the global optimum much faster.

4.3.2 Analysis of the consequences employing Kapur’s method

In Table 4, when the number of thresholds exceeds 5, the IACS algorithm performs quantitatively better than the other algorithms on all the test images: it obtains the highest PSNR and SSIM values and the minimum MSE values, while ACS gains better results than CS. This indicates that IACS is more accurate and robust in image segmentation. Table 4c reports the average CPU time: the IACS algorithm is consistently about 1 second faster than the ACS and CS algorithms. Additionally, at higher threshold levels the IACS algorithm maintains the lowest computational time, which makes it efficient for segmentation. In Figs. 10, 11 and 12, the segmented outputs based on the IACS algorithm are visually much clearer for all the test images; ACS follows IACS, and CS comes last. In Fig. 12, for the Cameraman image with n = 11, the sky and buildings are clear and recognizable with the IACS algorithm, whereas the thresholded images obtained by the CS and ACS algorithms lose some information. Similarly, Figs. 10 and 11 also show that the results of the IACS algorithm are qualitatively better. Figure 13 displays the convergence curves of the three algorithms at n = 11. In Fig. 13a, the IACS algorithm reaches the fitness value 34.5 in just 20 iterations, whereas CS requires about 120 iterations and ACS at least 150. In Fig. 13b, the IACS algorithm is less competitive, obtaining the maximum fitness value at around 100 iterations, while for the CS and ACS algorithms the maximum fitness value is reached after about 150 iterations. In Fig. 13c, the convergence curves of CS and ACS are not obviously different; the IACS algorithm takes only about 40 iterations to reach the fitness value, while the ACS and CS algorithms require at least 150. Therefore, the IACS algorithm converges faster to the best solutions.

4.3.3 Analysis of the consequences employing Tsallis method

In Table 6, as the number of thresholds increases, the IACS algorithm clearly attains better PSNR and SSIM values and the minimum MSE values, and the ACS algorithm obtains slightly better results than the CS algorithm. In summary, the IACS algorithm segments images more accurately than the other algorithms. In Table 6c, IACS takes the least time and the CS algorithm the most; at the different threshold levels, the IACS algorithm is consistently about 2 seconds faster than CS and about 1 second faster than ACS.

Figures 14, 15 and 16 give the segmented results. From these figures it can be concluded that images segmented with more thresholds contain more details. When n = 11, for the Baboon image, the IACS algorithm renders the beard and the nose clearer, whereas with the CS and ACS algorithms the nose is not clearly distinct.

Further, Fig. 17 shows the convergence curves of the three algorithms at n = 11. Figure 17a clearly shows that the three algorithms converge at similar rates in the early period, but the fitness value of IACS is always higher than the others; only IACS obtains the maximum fitness value after about 100 iterations, while the ACS and CS algorithms require at least 150 iterations. In Fig. 17b, at 20 iterations the fitness value is about 37.5 for the IACS algorithm, whereas the ACS and CS algorithms need at least 80 iterations to attain this value. In Fig. 17c, the convergence rate of the proposed IACS algorithm is much faster: only IACS obtains the maximum fitness value after about 80 iterations, and the ACS and CS algorithms require at least 150 iterations. Consequently, IACS outperforms the CS and ACS algorithms. In summary, across the different assessment measures, the proposed IACS algorithm provides higher segmentation performance with less CPU time and is well suited for segmentation.

5 Conclusions

Because multi-level thresholding based on Yang’s CS algorithm entails a large amount of computation, the IACS algorithm is introduced. Instead of employing the Lévy distribution, the iterative step size is adapted to the fitness, which makes the step size variable during the search. Moreover, a dynamic adjustment strategy is also applied to the discovery probability pa, which enriches the population diversity and avoids premature convergence. The IACS algorithm is then applied to search for the optimal thresholds. To validate its performance, it is compared with Yang’s CS and ACS algorithms. The results reveal that the IACS algorithm efficiently selects the multi-level thresholds and produces better segmentation quality than the other two algorithms. According to the convergence curves, the IACS algorithm attains its optimal value in fewer iterations without falling into local optima. The next step is to examine the effectiveness of the IACS algorithm on other kinds of image-processing problems.