1 Introduction

Image segmentation is a central technique in many image, video, and computer vision applications [22]. It is a crucial stage of the image processing pipeline that partitions an image into multiple segments in order to make the representation more meaningful in the regions of interest [1]. These regions are easier for computer-based vision systems to evaluate, with applications including defence [17, 52], medicine [15, 44], object detection [21], quality inspection [11], crack detection [36], and remote sensing [10, 45]. Four popular segmentation methods are thresholding, region-based strategies, edge identification, and relaxation techniques that preserve connectivity [28]. The thresholding-based strategy, in particular, has attracted the interest of researchers due to its efficiency and ease of implementation [18]. Thresholding techniques can be categorized into bi-level and multilevel thresholding. Bi-level thresholding, as the name implies, employs a single threshold value to divide an image into two uniform sections, the foreground (object) and the background. When an image contains a variety of objects with varying intensities, a bi-level threshold cannot distinguish between them. Multilevel thresholding, on the other hand, separates an image into multiple regions based on pixel intensities [13]. Multilevel thresholding techniques are widely employed across image processing and computer vision; satellite image processing, synthetic aperture radar image segmentation, and medical image analysis are just a few of their significant and intriguing applications. However, choosing the right threshold values remains the most difficult part, and further research is needed. The literature offers a variety of thresholding strategies; some are fully automatic, while others require user intervention. 
Manually segmenting a large number of images is a time-consuming operation; it is not always practical, and it can also be inaccurate. As a result, automatic segmentation techniques are gaining considerable support. Thresholding techniques follow two approaches to identify the best threshold values: parametric and non-parametric. In a parametric approach, the parameters of a probability density function must be estimated for each image class; this is computationally expensive and time consuming. A non-parametric approach instead chooses thresholds by optimizing (minimizing or maximizing) certain criterion functions, such as the error rate or entropy, and is computationally easier to implement than a parametric approach [33]. As a segmentation strategy, multilevel image thresholding has recently become a powerful technique: the thresholds divide an image into various regions. Bi-level thresholding is the easiest form, since only one threshold value must be chosen, but the problem becomes more complicated as the number of threshold levels increases. In fact, the computational complexity grows exponentially with the number of thresholds [34]. Among existing image thresholding approaches, Otsu's method and Kapur's entropy are two state-of-the-art criteria for finding the best threshold values: Otsu maximizes the between-class variance of the histogram classes, while Kapur's method maximizes the entropy of the histogram. Both approaches can readily be extended to multilevel thresholding segmentation. 
Exhaustive search, however, makes finding the best threshold values inefficient, and its time complexity grows exponentially with the number of thresholds. Metaheuristic algorithms are widely employed in multilevel thresholding to overcome this limitation. Such algorithms mimic natural phenomena to solve this difficult optimization problem, avoiding local optima by repeatedly exploring the search space through randomization. Several algorithms have been widely used for multilevel thresholding. The most common approaches of the past decade include genetic [25], particle swarm [47], honey bee mating [26], artificial bee colony [14], firefly [27], ant colony [53], differential evolution [46], cuckoo search [3], and bacterial foraging [48] algorithms. More recently proposed algorithms, such as whale [8, 37], gray wolf [30, 31], moth swarm [56], animal migration [43], spider monkey [4], krill herd [9], harmony search [41], spherical search [39], flower pollination [49], bat [54], teaching-learning based [50], and elephant herding [51] algorithms, have also been suggested for multilevel thresholding. All of these methods are limited by convergence speed or accuracy, and all experiments aim to balance these two aspects. Every algorithm can still become trapped in local optima, which significantly degrades segmentation quality. Since each optimization approach may face different local solutions, combining two optimization algorithms can help escape the local solutions of each. Hybridizing two or more algorithms is now a prominent research trend for high-dimensional problems: such hybrids can compensate for one algorithm's poor exploration ability with the other algorithm's exploitation ability, and vice versa [23]. 
In fact, hybridizing algorithms is a practical way to overcome this constraint by improving performance in terms of convergence and solution consistency [23].

2 Related work

Recently, hybrid algorithms have been suggested to address the multilevel thresholding problem. Ewees et al. [20] introduced a hybrid WOAPSO multilevel thresholding algorithm based on two objectives: fuzzy entropy and Otsu's function. Results showed that the WOAPSO algorithm is highly efficient and competitive on almost every criterion compared with seven other algorithms. Mlakar et al. [35] introduced a hybrid hjDE algorithm for Otsu-based multilevel image thresholding. Eleven real-life images were evaluated and compared against the CS, DE, jDE, ABC, and PSO algorithms. In 2017, Dehshibi et al. [16] implemented a new hybrid BFHS algorithm with Otsu's and Kapur's entropy criteria, using two sets of images (standard and satellite). Their findings show that the algorithm's ability to select multiple thresholds is significant compared to other algorithms addressing the same problem; in addition, BFHS < HS < BF < GA is the order of CPU time from low to high. A hybrid SCABC algorithm, which hybridizes the classical ABC algorithm with the SCA algorithm to improve its exploitation and exploration, was proposed by Gupta and Deep [24]; SCABC showed better search capacity than the traditional ABC and SCA, and the overall analysis therefore suggests that SCABC outperforms both. Aziz et al. [19] introduced a hybrid FASSO algorithm, inspired by the natural behaviour of fireflies and spiders, to maximize the between-class variance criterion. Experimental results revealed the utility of the FASSO algorithm, with comparatively less CPU time and quicker convergence. Ahmadi et al. [5] developed a hybrid BMO-DE optimization algorithm based on bird mating optimization (BMO) and differential evolution (DE) strategies, using Kapur's and Otsu's methods. The algorithm achieved better solution accuracy than other popular evolutionary algorithms such as GA, PSO, BF, and MBF. 
Yue and Zhang [55] incorporated a hybrid invasive weed bat algorithm (IWBA) for the selection of optimal thresholds; comparative tests show that IWBA is better and more efficient than the GSA, PSO-GSA, and BA algorithms. According to the literature reviewed, in recent years researchers have increasingly applied newly developed hybrid algorithms to multilevel thresholding problems, and this has become an active area of research. Despite the significant work done in this field, good image segmentation remains elusive for practitioners, owing to two factors. First, there is limited unanimity on what criteria should be used to assess image segmentation quality: it can be difficult to achieve a fair balance between objective metrics based entirely on the underlying statistics of the imaging data and subjective measures that attempt to empirically approximate human experience. Second, there is a lack of consensus on acceptable models for a unified representation of image segments in the search for objective metrics. Many researchers therefore have strong reasons for developing strategies that provide near-optimal metaheuristic search through a wide search space, despite the effort required. Accordingly, this paper proposes a new hybridized sine-cosine crow search algorithm (SCCSA) for multilevel thresholding. In SCCSA, the combined architecture and operators of the two constituent algorithms (CSA and SCA) make it possible to find a reasonable compromise between exploration and exploitation capabilities. A novel aspect of this work is the hybridization of the CSA and SCA algorithms to boost solution consistency: hybridizing two or more search algorithms not only increases their search capacity but also solves further variations of problems to some degree. 
In this work, two objective functions, Otsu's and Kapur's entropy criteria, were used. The three main contributions are summarized as follows:

  • Propose the SCCSA algorithm for multilevel thresholding using two objective functions, Otsu's and Kapur's entropy.

  • Implement other swarm algorithms for multilevel thresholding, namely ICSA, SCA, CSA, and ABC, for comparison.

  • Provide comprehensive qualitative and quantitative comparisons of SCCSA with the other algorithms to validate the findings.

The remainder of the paper is organized as follows. Section 2 reviews related work, and Section 3 explains the problem formulation. The hybrid SCCSA algorithm is covered in Section 4, and Section 5 describes the proposed SCCSA-based multilevel thresholding approach. Experimental work and evaluation are presented in Section 6, and results and discussion in Section 7. Finally, the conclusions and future work are addressed in the closing section.

3 Problem formulation

Traditionally, multilevel thresholding approaches applied to grayscale images place thresholds on the image histogram. All intensities that fall between two consecutive thresholds are assumed to belong to the same segment. Two thresholding strategies are briefly explained in Sections 3.1 and 3.2.

3.1 Based on Otsu’s method

Otsu's approach is one of the most popular methods for both bi-level and multilevel thresholding. It finds the optimal threshold by maximizing the between-class variance of the segmented regions, defined as the sum of the sigma functions of each region in the following equation [40].

$$ f(t)={\sigma}_1+{\sigma}_2;{\sigma}_1={\omega}_0{\left({\mu}_0-{\mu}_T\right)}^2\mathrm{and}\ {\sigma}_2={\omega}_1{\left({\mu}_1-{\mu}_T\right)}^2 $$
(1)

where μT denotes the mean intensity of the whole image and ω0 and ω1 are the cumulative class probabilities. For bi-level thresholding, the mean of each class can be described as [12]:

$$ {\mu}_0=\sum \limits_{i=0}^{t-1}\frac{i{p}_i}{\omega_0}\ \mathrm{and}\ {\mu}_1=\sum \limits_{i=t}^{L-1}\frac{i{p}_i}{\omega_1} $$
(2)

The desired threshold is obtained by maximizing the between-class variance [12]:

$$ \left({t}^{\ast}\right)= argmax\left(f(t)\right) $$
(3)
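The bi-level criterion of Eqs. (1)-(3) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation (which used MATLAB); the function and variable names are hypothetical, and the input is assumed to be a normalized gray-level histogram:

```python
import numpy as np

def otsu_bilevel(hist):
    """Return the threshold t* maximizing between-class variance (Eqs. 1-3).

    hist: normalized gray-level histogram p_i with sum(hist) == 1.
    """
    p = np.asarray(hist, dtype=float)
    levels = np.arange(p.size)
    mu_T = np.sum(levels * p)                  # global mean intensity
    best_t, best_f = 0, -np.inf
    for t in range(1, p.size):                 # candidate threshold
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = np.sum(levels[:t] * p[:t]) / w0  # class means (Eq. 2)
        mu1 = np.sum(levels[t:] * p[t:]) / w1
        f = w0 * (mu0 - mu_T) ** 2 + w1 * (mu1 - mu_T) ** 2
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f
```

For a clearly bimodal histogram, the exhaustive scan recovers a threshold between the two modes; this brute-force loop is exactly the exhaustive search whose cost motivates the metaheuristics discussed later.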

The same strategy extends to multilevel thresholding problems [12].

$$ f(t)=\sum \limits_{i=0}^m{\sigma}_i $$
(4)

The sigma terms can be expanded as [12]:

$$ {\displaystyle \begin{array}{c}{\sigma}_1={\omega}_1{\left({\mu}_1-{\mu}_T\right)}^2,\\ {}{\sigma}_2={\omega}_2{\left({\mu}_2-{\mu}_T\right)}^2,\\ {}\vdots \\ {}{\sigma}_j={\omega}_j{\left({\mu}_j-{\mu}_T\right)}^2,\\ {}\vdots \\ {}{\sigma}_m={\omega}_m{\left({\mu}_m-{\mu}_T\right)}^2\end{array}} $$
(5)
$$ {\mu}_0=\sum \limits_{i=0}^{t_1-1}\frac{i{p}_i}{\omega_0};\kern0.5em {\mu}_1=\sum \limits_{i={t}_1}^{t_2-1}\frac{i{p}_i}{\omega_1};\kern0.5em \dots; \kern0.5em {\mu}_j=\sum \limits_{i={t}_j}^{t_{j+1}-1}\frac{i{p}_i}{\omega_j};\kern0.5em \dots \kern0.5em \mathrm{and}\kern0.5em {\mu}_m=\sum \limits_{i={t}_m}^{L-1}\frac{i{p}_i}{\omega_m} $$
(6)

The desired threshold values are achieved by maximizing the objective function of Eq. (7) [12]:

$$ \left({t}^{\ast}\right)=\arg\ \max \left(\sum \limits_{i=0}^m{\sigma}_i\right) $$
(7)
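The multilevel objective of Eqs. (4)-(7) can be sketched as a fitness function over a candidate threshold vector, which is the form a metaheuristic would evaluate. This is an illustrative sketch with hypothetical names, assuming m thresholds induce m + 1 classes:

```python
import numpy as np

def otsu_multilevel_fitness(hist, thresholds):
    """Objective of Eqs. (4)-(7): sum of omega_j * (mu_j - mu_T)^2 over
    the classes induced by the sorted thresholds.

    hist: normalized histogram p_i; thresholds: m gray-level cut points.
    """
    p = np.asarray(hist, dtype=float)
    levels = np.arange(p.size)
    mu_T = np.sum(levels * p)                        # global mean
    bounds = [0] + sorted(int(t) for t in thresholds) + [p.size]
    f = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):      # one class per interval
        w = p[lo:hi].sum()
        if w == 0:
            continue
        mu = np.sum(levels[lo:hi] * p[lo:hi]) / w    # class mean (Eq. 6)
        f += w * (mu - mu_T) ** 2                    # sigma_j (Eq. 5)
    return f
```

With a single threshold this reduces to the bi-level criterion of Eq. (1).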

3.2 Based on Kapur’s entropy

Kapur's method maximizes the entropy of the segmented histogram over the region distributions. Kapur originally proposed two probability distributions, one for the object and one for the background, and the method operates entirely on the gray-level histogram of the image [29]. For multilevel thresholding, several thresholds (t1, t2, ..., tm) separate the image into several parts. Kapur's entropy is then obtained by means of the following equation [56]:

$$ \operatorname{Max}\ J\left({t}_1,{t}_2,{t}_3,\dots, {t}_m\right)={H}_1+{H}_2+{H}_3+\dots +{H}_m $$
(8)

where

$$ {\displaystyle \begin{array}{c}{H}_1=-\sum \limits_{i=0}^{t_1-1}\left({p}_i/{\omega}_0\right)\ln \left({p}_i/{\omega}_0\right),{\omega}_0=\sum \limits_{i=0}^{t_1-1}{p}_i;\kern1em {H}_2=-\sum \limits_{i={t}_1}^{t_2-1}\left({p}_i/{\omega}_1\right)\ln \left({p}_i/{\omega}_1\right),{\omega}_1=\sum \limits_{i={t}_1}^{t_2-1}{p}_i;\\ {}{H}_3=-\sum \limits_{i={t}_2}^{t_3-1}\left({p}_i/{\omega}_2\right)\ln \left({p}_i/{\omega}_2\right),{\omega}_2=\sum \limits_{i={t}_2}^{t_3-1}{p}_i;\kern1em \dots; \kern1em {H}_m=-\sum \limits_{i={t}_m}^{K-1}\left({p}_i/{\omega}_m\right)\ln \left({p}_i/{\omega}_m\right),{\omega}_m=\sum \limits_{i={t}_m}^{K-1}{p}_i\end{array}} $$
(9)

where H1, H2, ..., Hm are the entropy values and ω0, ω1, ω2, ..., ωm the probabilities of the segmented classes C0, C1, C2, ..., Cm, respectively [29]. In both of the above methods, the thresholds are subject to the constraint t1 < t2 < t3 < ... < tm.
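A minimal sketch of Kapur's objective (Eqs. (8)-(9)), again with illustrative names, evaluates the summed class entropies for a candidate threshold vector:

```python
import numpy as np

def kapur_fitness(hist, thresholds):
    """Objective of Eqs. (8)-(9): sum of class entropies H_j, where each
    class covers the gray levels between two consecutive thresholds.

    hist: normalized histogram p_i; thresholds: m gray-level cut points.
    """
    p = np.asarray(hist, dtype=float)
    bounds = [0] + sorted(int(t) for t in thresholds) + [p.size]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                      # class probability omega_j
        if w <= 0:
            continue
        q = p[lo:hi][p[lo:hi] > 0] / w          # class-conditional probs
        total += -np.sum(q * np.log(q))         # class entropy H_j
    return total
```

Like the Otsu objective, this is cheap to evaluate once for a given threshold vector; the cost of the problem lies in searching over all threshold combinations.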

4 Hybrid sine-cosine crow search algorithm (SCCSA)

4.1 Overview of crow search algorithm

Askarzadeh developed the crow search algorithm (CSA), a population-based metaheuristic optimization algorithm that simulates intelligent crow behaviour [7]. Crows hide their excess food stock in concealed locations, and they are adept at stealing food from other birds: a crow watches where another bird hides its food, and steals it once the owner leaves the hiding place [7]. This behaviour inspired the development of the CSA. Depending on the awareness of the other birds, crows change their location on the basis of the following formula [32].

$$ {X}_i^{t+1}=\Big\{{\displaystyle \begin{array}{cc}{X}_i^t+{r}_i\times f{l}_i^t\times \mid {m}_j^t-{X}_i^t\mid, & {r}_j\ge A{P}_j^t\\ {}\mathrm{a}\ \mathrm{random}\ \mathrm{position}& \mathrm{otherwise}\end{array}} $$
(10)

where AP denotes the awareness probability of the followed crow j. When the victim bird j realizes that crow i is following it, it leads crow i to a random position. Note that for each crow i, a crow j is randomly chosen, and crow i changes its location accordingly. Figure 1 illustrates the crow's position update in CSA within the search space.
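One CSA iteration can be sketched as follows. This follows the reading, consistent with Section 4.3, that a crow moves toward a randomly chosen crow's memorized food position unless that crow is aware (a random move with probability AP); the function name, flight length, and bounds are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def csa_update(X, memory, fl=2.0, AP=0.1, lo=0.0, hi=255.0):
    """One CSA iteration in the spirit of Eq. (10).

    X: (N, d) crow positions; memory: (N, d) memorized food positions.
    """
    N, d = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        j = rng.integers(N)                    # crow randomly chosen to follow
        if rng.random() >= AP:                 # crow j is unaware: follow it
            X_new[i] = X[i] + rng.random() * fl * np.abs(memory[j] - X[i])
        else:                                  # crow j is aware: random jump
            X_new[i] = rng.uniform(lo, hi, d)
    return np.clip(X_new, lo, hi)              # keep thresholds in range
```

The clipping to [lo, hi] is an added convenience for the thresholding use case, where positions must stay within the gray-level range.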

Fig. 1
figure 1

Crows position update in CSA within the search space

4.2 Overview of sine-cosine algorithm

The sine cosine algorithm (SCA), a mathematically motivated metaheuristic based on the trigonometric sine and cosine functions, was proposed by Mirjalili in 2016 [2]. With the help of the following formula, the SCA updates the position of each particle within the solution space toward the position of the best solution [32].

$$ {X}_i^{t+1}=\Big\{{\displaystyle \begin{array}{cc}{X}_i^t+{r}_1\times \sin \left({r}_2\right)\times \mid {r}_3{P}_i^t-{X}_i^t\mid, & {r}_4<0.5\\ {}{X}_i^t+{r}_1\times \cos \left({r}_2\right)\times \mid {r}_3{P}_i^t-{X}_i^t\mid, & {r}_4\ge 0.5\end{array}} $$
(11)

Here, \( {X}_i^t \) is the current position, \( {P}_i^t \) is the position of the best solution, and r1, r2, r3, and r4 are random numbers. r1 controls the direction (region) of the next position, r2 defines how far the movement goes toward or away from the destination, r3 assigns a random weight that guarantees a proper balance between exploration and exploitation, and r4 switches between the sine and cosine movements [32]. The effect of the sine and cosine movements on the next position, with the range in [−2, 2], is shown in Fig. 2.
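The SCA update of Eq. (11) can be sketched as below. Following the standard SCA, r2 is drawn from [0, 2π], r3 from [0, 2], and r1 decays linearly over the iterations; the function name and the decay constant a are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sca_update(X, P, t, T, a=2.0):
    """One SCA iteration (Eq. 11): move each agent toward the best
    solution P along a sine or cosine trajectory.

    X: (N, d) agent positions; P: (d,) best position; t/T: iteration counters.
    """
    r1 = a - t * (a / T)                       # exploration shrinks over time
    X_new = np.empty_like(X)
    for i in range(X.shape[0]):
        r2 = rng.uniform(0, 2 * np.pi)         # movement distance/angle
        r3 = rng.uniform(0, 2)                 # random weight on P
        r4 = rng.random()                      # sine vs. cosine switch
        trig = np.sin(r2) if r4 < 0.5 else np.cos(r2)
        X_new[i] = X[i] + r1 * trig * np.abs(r3 * P - X[i])
    return X_new
```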

Fig. 2
figure 2

Effects of sine and cosine in Eq. 11 on the next position with the range from −2 to 2 [32].

4.3 Hybridization of sine-cosine and crow search algorithm

The proposed hybrid algorithm (SCCSA) builds on the CSA [32]. The first drawback of CSA is that the search agents do not necessarily move toward the best solution achieved so far [32]. Further, when \( {r}_i\le {AP}_i^t \) holds, the search agents move to a random place in the solution space, which reduces the CSA's efficiency [32]. Therefore, CSA is first modified so that each search agent updates its position either toward the best solution found so far or toward a randomly chosen search agent, as follows [32]

$$ {X}_i^{t+1}=\Big\{{\displaystyle \begin{array}{cc}\mathrm{update}\ \mathrm{the}\ \mathrm{position}\ \mathrm{based}\ \mathrm{on}\ \mathrm{the}\ \mathrm{position}\ \mathrm{of}\ \mathrm{the}\ \mathrm{best}\ \mathrm{solution}& {r}_1<0.5\\ {}\mathrm{update}\ \mathrm{the}\ \mathrm{position}\ \mathrm{toward}\ \mathrm{a}\ \mathrm{randomly}\ \mathrm{chosen}\ \mathrm{search}\ \mathrm{agent}& {r}_1\ge 0.5\end{array}} $$
(12)

where r1 is a random number between 0 and 1. Each search agent then updates its position using either a CSA updating procedure or the SCA movements, as follows [32].

$$ {X}_i^{t+1}=\Big\{{\displaystyle \begin{array}{cc}{X}_i^t+{r}_1\times \sin \left({r}_2\right)\times \mid {r}_3{P}_i^t-{X}_i^t\mid, & {r}_4<0.3\\ {}{X}_i^t+{r}_1\times \cos \left({r}_2\right)\times \mid {r}_3{P}_i^t-{X}_i^t\mid, & 0.3\le {r}_4\le 0.6\\ {}{X}_i^t+{r}_1\times f{l}_i^t\times \mid {m}_i^t-{X}_i^t\mid, & {r}_4\ge 0.6\end{array}} $$
(13)

These steps ensure that all search agents behave intelligently and do not generate random, low-quality solutions [32]. Each search agent can thus follow a different strategy in the solution space, which increases the overall search capability. To get the most out of a metaheuristic, an efficient trade-off between exploration and exploitation is important [32]: the solution space should be explored in the first course of iterations, while the final iterations should focus more on exploitation [32]. To this end, SCCSA decreases r1 over the iterations as follows [32]

$$ {r}_1=a-t\frac{a}{T} $$
(14)

where t represents the current iteration, a is a constant, and T is the maximum number of iterations.
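The combined update of Eqs. (13)-(14) can be sketched as a single step function. The branch probabilities (0.3/0.6) and the linear decay of r1 follow the equations above; the function name, the choice of followed crow, and the constant a = 2 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def sccsa_step(X, P, memory, fl, t, T, a=2.0):
    """One SCCSA position update (Eqs. 13-14): depending on r4, an agent
    takes a sine move, a cosine move, or a CSA memory-following move;
    r1 decays linearly (Eq. 14) to shift from exploration to exploitation.
    """
    r1 = a - t * (a / T)                       # Eq. (14)
    N, d = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        r2 = rng.uniform(0, 2 * np.pi)
        r3 = rng.uniform(0, 2)
        r4 = rng.random()
        j = rng.integers(N)                    # crow whose memory is followed
        if r4 < 0.3:                           # sine move toward best P
            X_new[i] = X[i] + r1 * np.sin(r2) * np.abs(r3 * P - X[i])
        elif r4 <= 0.6:                        # cosine move toward best P
            X_new[i] = X[i] + r1 * np.cos(r2) * np.abs(r3 * P - X[i])
        else:                                  # CSA move toward a memory
            X_new[i] = X[i] + r1 * fl * np.abs(memory[j] - X[i])
    return X_new
```

Because r1 multiplies all three branches, the step size of every movement shrinks toward zero as t approaches T, realizing the exploration-to-exploitation schedule described above.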

5 Proposed multilevel thresholding based SCCSA algorithm

As described in the previous section, SCCSA was selected for multilevel image thresholding. The SCCSA algorithm has been shown to be efficient at optimizing an objective function over a large search space when an optimal solution is required [32]. The proposed algorithm is designed to identify optimal thresholds within the gray-level range [0, L-1] by optimizing either Otsu's function or Kapur's entropy. In the proposed algorithm, N crows (the flock size) initially occupy random positions in d-dimensional space, where they are assumed to hide their food. Within the algorithm, the sine-cosine based movements are applied, so all crows generate new positions and update their memories accordingly. The fitness is then evaluated using either Otsu's function (Eq. (7)) or Kapur's entropy (Eq. (8)), with thresholds constrained to [0, 255]. This step is repeated until the best fitness (i.e., the maximum objective function value) is achieved. The block diagram and the complete flow diagram of the proposed SCCSA multilevel threshold image segmentation algorithm are shown in Figs. 3 and 4.
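The surrounding optimization loop described above can be sketched generically as follows. This is not the authors' implementation: a placeholder random-perturbation move stands in for the SCCSA update of Eq. (13), and all names, population size, and iteration counts are illustrative assumptions; only the overall structure (initialize, evaluate, move, track the best) reflects the text:

```python
import numpy as np

rng = np.random.default_rng(3)

def optimize_thresholds(fitness, m, N=20, T=50, L=256):
    """Generic driver: N candidate solutions, each an m-vector of thresholds
    in [0, L-1], maximized by a placeholder perturbation search.

    fitness: callable taking a sorted threshold vector, returning a float.
    """
    X = rng.uniform(0, L - 1, size=(N, m))               # random flock
    fit = np.array([fitness(np.sort(x)) for x in X])
    best = X[np.argmax(fit)].copy()
    best_fit = fit.max()
    for _ in range(T):
        # stand-in for the SCCSA movement (Eq. 13): small random moves
        X = np.clip(X + rng.normal(0, 5, size=X.shape), 0, L - 1)
        fit = np.array([fitness(np.sort(x)) for x in X])
        if fit.max() > best_fit:                          # track global best
            best_fit = fit.max()
            best = X[np.argmax(fit)].copy()
    return np.sort(best).astype(int), best_fit
```

Plugging in an Otsu or Kapur fitness from Section 3 yields a complete, if naive, multilevel thresholder; the point of SCCSA is to replace the perturbation step with the guided moves of Eqs. (13)-(14).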

Fig. 3
figure 3

Block diagram for the proposed SCCSA based multilevel thresholding

Fig. 4
figure 4

Flow chart of multilevel thresholding based SCCSA algorithm

6 Experimental work and evaluation

This section discusses the experimental work. To evaluate and validate the proposed algorithm, twelve standard benchmark images were taken from different image datasets: Starfish (https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/BSDS300/html/dataset/images/Gy/12003.html); Lena, Airplane, Cameraman, Hunter, and Living room (http://www.imageprocessingplace.com/root_files_V3/image_databases.htm); Baboon, Pepper, and Lake (http://sipi.usc.edu/database/database.php?volume=misc&image=35#top); Boat (http://decsai.ugr.es/cvg/dbimagenes/g512.php); and Gold hill (https://homepages.cae.wisc.edu/~ece533/images/). All images are in JPEG format with size 512 × 512. Table 1 shows all original images with their histograms. Most of the images are difficult to segment due to their multimodal histograms, so sophisticated multilevel threshold segmentation is necessary to achieve good results for these images.

Table 1 Original gray scale test images and related histograms

The viability of the proposed SCCSA algorithm is examined against the state-of-the-art ICSA, SCA, CSA, and ABC algorithms in terms of the optimized objective function values and performance measures. These algorithms were implemented individually, and their search efficiency was tested in the Matlab 2014 environment on an Intel(R) Core(TM) i3-6006U CPU at 2 GHz with 8 GB RAM, running Windows 10. The search behaviour of the crows in SCCSA, bees in ICSA, agents in SCA, crows in CSA, and bees in ABC differs considerably, yet each can be applied directly to the formulated problem. The algorithms were initialised under similar conditions but search in very different ways for the same objective function [10]. Various control parameters were examined for each algorithm to better exploit its features and speed its convergence; the selected control parameters for SCCSA, ICSA, SCA, CSA, and ABC are taken from the original references. In every algorithm, each candidate solution encodes a set of m threshold values [6, 38]. Thus, at the initialization stage of each algorithm, a population of solutions is formed with each dimension within [0, L-1]. After the population is created, each solution is assigned a fitness value, and each algorithm then computes the optimum threshold values according to its selection criteria. The CPU time elapsed to reach the desired accuracy is recorded for each algorithm; accordingly, the stopping condition for all algorithms is based on the objective function value rather than on the number of iterations. Since the evolutionary and swarm-based algorithms involve randomness and the initial solutions were randomly created for each run, all experiments for each threshold number and image were repeated 30 times to ensure statistical credibility. 
The between-class variance and Kapur's entropy were maximized over the given number of iterations for all algorithms. Thresholding at 2–5 levels was considered in all runs so that the perception and fidelity of the segmented images could be better visualized and assessed.

The reported output metrics are the final objective function value (Jmax), the selected thresholds, CPU time, PSNR, SSIM, FSIM, and the mean and standard deviation of the objective values. The stability and efficiency of all algorithms are evaluated by the mean and standard deviation (STD) of each objective function, given by Eq. (15) below [8]:

$$ \mu =\frac{\sum \limits_{i=1}^k{\sigma}_i}{k},\kern1em STD=\sqrt{\sum \limits_{i=1}^k\frac{{\left({\sigma}_i-\mu \right)}^2}{k}} $$
(15)

where σi is the best fitness value of the ith run of the algorithm, μ is the mean value of σ, and k is the number of runs of each stochastic algorithm (k = 30). A lower STD value means that the algorithm is more stable with respect to the objective function. PSNR values, expressed in decibels (dB), measure the dissimilarity between the original and segmented images; this consistency metric indicates the degree of similarity between the segmented and original images based on the per-pixel MSE [42]:

$$ PSNR=20{\log}_{10}\left(\frac{255}{RMSE}\right)\kern0.5em \left(\mathrm{in}\ \mathrm{dB}\right) $$
(16)

where,

$$ RMSE=\sqrt{\frac{\sum \limits_{i=1}^X\sum \limits_{j=1}^Y{\left(I\left(i,j\right)- Seg\left(i,j\right)\right)}^2}{X\cdot Y}} $$
(17)

where 255 is the maximum gray value, and I and Seg are the original and segmented images of size X × Y, respectively. In general, a higher PSNR value indicates better segmentation quality. The SSIM is used to assess the visual similarity between the original and the reconstructed images. This index combines comparisons of luminance, contrast, and structure, and it satisfies symmetry, boundedness, and unique-maximum properties. The SSIM metric can be modeled as follows [42].

$$ SSIM\left(I, Seg\right)=\frac{\left(2{\mu}_I{\mu}_{Seg}+{c}_1\right)\left(2{\sigma}_{I, Seg}+{c}_2\right)}{\left({\mu}_I^2+{\mu}_{Seg}^2+{c}_1\right)\left({\sigma}_I^2+{\sigma}_{Seg}^2+{c}_2\right)} $$
(18)

where μI is the mean intensity of the image I, μSeg the mean of the image Seg, \( {\sigma}_I^2 \) the variance of I, \( {\sigma}_{Seg}^2 \) the variance of Seg, and σI, Seg the covariance of I and Seg. The constants c1 and c2 are included to avoid instability when \( {\mu}_I^2+{\mu}_{Seg}^2 \) is very close to zero; they are defined as c1 = (k1L)2 and c2 = (k2L)2, where the default values k1 = 0.01 and k2 = 0.03 were used and L is the number of gray levels in the image. A higher SSIM value indicates better performance. The FSIM metric is used to determine and measure the resemblance between two images as [8]:

$$ FSIM=\frac{\sum_{X\in \varOmega }{S}_L(X)P{C}_m(X)}{\sum_{X\in \varOmega }P{C}_m(X)} $$
(19)

where

$$ {\displaystyle \begin{array}{c}{S}_L(X)={S}_{PC}(X){S}_G(X);\\ {}{S}_{PC}(X)=\frac{2P{C}_1(X)P{C}_2(X)+{T}_1}{P{C}_1^2(X)+P{C}_2^2(X)+{T}_1};\\ {}{S}_G(X)=\frac{2{G}_1(X){G}_2(X)+{T}_2}{G_1^2(X)+{G}_2^2(X)+{T}_2}\end{array}} $$
(20)

Here T1 and T2 are constants; the values T1 = 0.85 and T2 = 160 were selected. G is the gradient magnitude of the image, defined mathematically as

$$ G=\sqrt{G_x^2+{G}_y^2} $$
(21)

PC denotes the phase congruency, expressed as:

$$ PC(X)=\frac{E(X)}{\left(\varepsilon +\sum n{A}_n(X)\right)} $$
(22)

where An(X) indicates the local amplitude on scale n, E(X) indicates the magnitude of the response vector at position X on scale n, and ε is a small positive constant. A higher FSIM value indicates better thresholding performance.
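For reference, the PSNR of Eqs. (16)-(17) and a single-window version of the SSIM of Eq. (18) can be sketched as follows. The global SSIM variant here is a simplification (the standard SSIM averages the index over local windows), and the function names are illustrative:

```python
import numpy as np

def psnr(I, Seg):
    """PSNR of Eqs. (16)-(17) between original I and segmented Seg."""
    I, Seg = np.asarray(I, float), np.asarray(Seg, float)
    rmse = np.sqrt(np.mean((I - Seg) ** 2))
    return 20 * np.log10(255.0 / rmse)

def ssim_global(I, Seg, L=255, k1=0.01, k2=0.03):
    """Single-window SSIM of Eq. (18), computed over the whole image."""
    I, Seg = np.asarray(I, float), np.asarray(Seg, float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_i, mu_s = I.mean(), Seg.mean()
    var_i, var_s = I.var(), Seg.var()
    cov = np.mean((I - mu_i) * (Seg - mu_s))     # covariance sigma_{I,Seg}
    return ((2 * mu_i * mu_s + c1) * (2 * cov + c2)) / \
           ((mu_i ** 2 + mu_s ** 2 + c1) * (var_i + var_s + c2))
```

For identical images the SSIM evaluates to exactly 1, and the PSNR diverges as the RMSE approaches zero, matching the interpretation that higher values indicate better quality.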

7 Results and discussion

The population-based SCCSA, ICSA, SCA, CSA, and ABC algorithms were tested on 12 different test images in this experimental study. A sample of the visual results of applying the proposed SCCSA algorithm to a test image (Boat) is illustrated in Tables 2 and 3 for representation purposes. Table 2 presents the results of Otsu's method and Table 3 those of Kapur's entropy, with threshold levels ranging from 2 to 5. Tables 2 and 3 provide detailed information on the convergence characteristics of the SCCSA algorithm, the optimal threshold values marked on the histogram, and the segmented images. The qualitative evaluation of the visual results in Tables 2 and 3 shows that the gray-scale images are progressively better segmented as the number of thresholds increases from m = 2 to m = 5. At the highest threshold number (m = 5), the quality of each image is significantly greater and conveys clearer information than at m = 2, 3, and 4, for all tested images and both methods. The same visual trend was observed for all other tested images. Further, the multilevel thresholding segmentation results of SCCSA and the compared algorithms (ICSA, SCA, CSA, and ABC) are shown in Fig. 5 (Otsu's method) and Fig. 6 (Kapur's entropy) for the starfish image at the m = 5 threshold level. These figures show that the proposed SCCSA algorithm achieved the best segmentation performance among the compared algorithms. In addition, Fig. 7 presents samples of Otsu's and Kapur's based segmentation results for the airplane, peppers, lake, and goldhill images using the hybridized SCCSA algorithm for m = 2–5 thresholds. These figures show that, under different thresholds, SCCSA produced strong segmentation outcomes for different images, and that the segmented images improve as the threshold number increases.

Table 2 SCCSA results on boat image using Otsu's method
Table 3 SCCSA results on boat image using Kapur's entropy
Fig. 5
figure 5

Segmented images of starfish at m = 5 using SCCSA, ICSA, SCA, CSA and ABC based on Otsu's method

Fig. 6
figure 6

Segmented images of starfish at m = 5 using SCCSA, ICSA, SCA, CSA and ABC based on Kapur's entropy

Fig. 7
figure 7

Sample of Otsu's and Kapur's based segmented results of images using the hybridized SCCSA algorithm for m = 2–5 thresholds. From top to bottom: Airplane, Peppers, Lake and Goldhill

Quantitative results are compared with the four other state-of-the-art algorithms in Tables 4-10 and Figs. 8-11. For all five algorithms, the optimal objective function values (for Otsu's and Kapur's methods) over 30 evaluation runs for each tested image are reported in Table 4. Since both methods pose maximization problems, each objective function should be as high as possible at the optimal thresholds. It is apparent from Table 4 that all algorithms performed almost equally as the number of thresholds increased (from m = 2 to m = 5). In addition, for the tested images, SCCSA attains higher objective function values than all other algorithms for both methods. Two exceptions occurred: with Otsu's method, the SCCSA objective value for the pepper image was lower than that of the ICSA algorithm at the m = 4 threshold level, and with Kapur's entropy, SCCSA produced a lower objective value than ICSA on the hunter image at the m = 5 threshold level. This is due to the randomness of the different swarm approaches; results may vary in some cases. Tables 5 and 6 summarize the thresholds chosen for the test images over the full range of gray values obtained by the Otsu and Kapur approaches, to assess the consistency of the optimal threshold values. The results show that the detected thresholds are not exactly identical (due to the stochastic and random design of all algorithms) and that the values are in many cases scattered and spread widely. The PSNR results of the proposed SCCSA algorithm are compared with the other algorithms in Table 7. As can be seen in Table 7, PSNR values increase with the number of thresholds, from m = 2 to m = 5. Moreover, Table 7 reveals several aspects. 
Under Otsu’s method, the average PSNR values of SCCSA are improved by 26.17%, 46.16%, 65.26% and 96.69% relative to ICSA, SCA, CSA and ABC, respectively. Under Kapur’s method, the improvements are 24.56% over ICSA, 48.57% over SCA, 69.37% over CSA and 96.08% over ABC.
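PSNR, as reported in Table 7, measures how closely the segmented image matches the original through the mean squared error. A minimal definition-level sketch (not the paper's code) for 8-bit gray images:

```python
import math

def psnr(original, segmented, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-sized gray images (lists of rows)."""
    flat_o = [p for row in original for p in row]
    flat_s = [p for row in segmented for p in row]
    mse = sum((o - s) ** 2 for o, s in zip(flat_o, flat_s)) / len(flat_o)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A higher PSNR means the segmented image is closer to the original, which is why the values grow as more thresholds (and hence more output levels) are used.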

Table 4 Comparative analysis of optimal objective function values
Table 5 Comparative analysis of best threshold values obtained by Otsu’s method
Table 6 Comparative analysis of best threshold values obtained by Kapur’s entropy
Table 7 Comparative analysis of PSNR values
Table 8 Comparative analysis of SSIM values obtained by Otsu’s and Kapur’s methods
Table 9 Comparative analysis of FSIM values obtained by Otsu’s and Kapur’s methods
Table 10 Comparative analysis of CPU time values
Fig. 8

Comparison of SSIM values for different algorithms with SCCSA using Otsu’s method

Fig. 9

Comparison of SSIM values for different algorithms with SCCSA using Kapur’s entropy

Fig. 10

Comparison of FSIM values for different algorithms with SCCSA using Otsu’s method

Fig. 11

Comparison of FSIM values for different algorithms with SCCSA using Kapur’s entropy

The SSIM and FSIM values are evaluated using Eqs. (18) and (19) for each image and each algorithm, with threshold numbers from 2 to 5, for both Otsu’s and Kapur’s methods. The obtained SSIM values are given in Table 8, and the SSIM values for the sample Boats image are plotted in Figs. 8 and 9. These figures show that SCCSA yields higher values than the remaining algorithms, meaning that the SCCSA algorithm responds better to the increase in thresholds under both methods. The same trend was observed for all other test images. Table 9 displays the obtained FSIM values. The FSIM values for the sample Starfish image, depicted in Fig. 10 for Otsu’s method and in Fig. 11 for Kapur’s entropy, assess the visual similarity between the original and segmented images. Under both methods, it is noteworthy that SCCSA produces higher FSIM values at all threshold levels. Similar findings were observed for all other test images (Table 9).
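For reference, SSIM in Eq. (18) combines luminance, contrast and structure terms computed from local statistics. The sketch below evaluates a simplified single-window (global) SSIM over flattened pixel lists; published SSIM results are usually computed with a sliding window and averaged, so the single-window choice and the standard constants c1, c2 are assumptions here, not the paper's exact procedure.

```python
def ssim_global(x, y, max_val=255.0):
    """Simplified single-window SSIM between two flattened gray images of equal length."""
    n = len(x)
    c1 = (0.01 * max_val) ** 2   # stabilizing constants from the standard SSIM formulation
    c2 = (0.03 * max_val) ** 2
    mx, my = sum(x) / n, sum(y) / n                              # means
    vx = sum((a - mx) ** 2 for a in x) / n                       # variances
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n     # covariance
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

By construction the score is 1.0 for identical images and decreases as structural similarity degrades, which is the sense in which higher SSIM indicates a better segmentation.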

Calculating the CPU time required by an algorithm is important because real-time applications demand rapid execution. Accordingly, the computational efficiencies of all algorithms are compared using the best CPU time (in seconds) required to converge to the solution at the four different threshold levels, as shown in Table 10. The CPU times of the proposed SCCSA are lower under both methods than those of the ICSA, SCA, CSA and ABC algorithms. The low CPU time of the SCCSA algorithm is primarily due to each search agent acquiring experience (through the exploration and exploitation mechanisms) from the population in the search region. This capability enables the hybrid SCCSA to search the space efficiently and greatly reduces computation time. Although increasing the number of thresholds increases the CPU time of all algorithms, SCCSA exhibits the lowest growth in CPU time. Table 11 provides the mean values over 30 runs for each of the five algorithms on each image, with threshold numbers from 2 to 5. The mean is the average of the objective function values over the iterations and reflects, to some extent, the stability of an algorithm (Sun et al. 2017). The mean values are almost the same at the m = 2 and m = 3 threshold levels. When m is greater than 3, the mean objective function value of SCCSA is mostly higher than those of the other algorithms; the proposed SCCSA therefore provides a more accurate segmentation of the test images. In addition, to check the stability of the proposed SCCSA against the four other algorithms, the standard deviation (STD) values are calculated using Eq. (15) and given in Table 12. From the tabulated findings, SCCSA evidently produced lower STD values more frequently than the other algorithms. These results indicate that the SCCSA algorithm achieves better stability.
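The stability statistics of Tables 11 and 12 can be reproduced in a few lines. The sketch below computes the mean and the population standard deviation of the best objective values collected over repeated runs; whether Eq. (15) divides by N or N − 1 is not restated here, so the population form (divide by N) is an assumption.

```python
import math

def run_statistics(best_values):
    """Mean and population standard deviation of best objective values over N runs."""
    n = len(best_values)
    mean = sum(best_values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in best_values) / n)
    return mean, std
```

A lower STD across the 30 runs means the algorithm converges to similar objective values regardless of its random initialization, which is the stability criterion used above.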

Table 11 Comparative analysis of mean values
Table 12 Comparative analysis of STD values

Furthermore, a statistical Wilcoxon test is performed to assess the significance of the differences between the performances of the five algorithms at a 5% significance level. The objective function values (for both methods) of the SCCSA algorithm are compared with those of the other four algorithms: ICSA, SCA, CSA and ABC. The null hypothesis states that the two sets of objective function values are not significantly different; the alternative hypothesis states that they differ. Typically, p-values below 0.05 are considered adequate evidence against the null hypothesis. The Wilcoxon test results with p-values are given in Table 13, from which it is evident that SCCSA outperforms the other algorithms in almost all test cases, since the p-values are below 0.05 and the differences are therefore statistically significant. A considerable difference between SCCSA and the other four algorithms is thus observed. Overall, the tabulated quantitative results and figures show that the proposed multilevel thresholding SCCSA algorithm outperforms the ICSA, SCA, CSA and ABC algorithms.
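As a sketch of this pairwise comparison, SciPy's Wilcoxon signed-rank test can be applied to the paired per-run objective values of two algorithms. The two-sided alternative used below is an assumption, since the exact test settings are not restated here:

```python
from scipy.stats import wilcoxon

def is_significant(scores_a, scores_b, alpha=0.05):
    """Paired Wilcoxon signed-rank test on per-run objective values.

    Returns (significant, p_value); significant is True when the null
    hypothesis of no difference is rejected at level alpha.
    """
    _stat, p_value = wilcoxon(scores_a, scores_b)
    return bool(p_value < alpha), float(p_value)
```

Running this for SCCSA against each competitor, method by method, yields a table of p-values of the kind shown in Table 13, with values below 0.05 marking statistically significant differences.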

Table 13 p-values produced by Wilcoxon test comparing SCCSA vs. other algorithms based on objective function values

8 Conclusion and future work

In this work, we introduced a new hybrid sine cosine crow search algorithm (SCCSA) for multilevel image thresholding using Otsu’s and Kapur’s objective functions. A standard set of 12 gray images was used for testing the proposed algorithm. The efficacy of SCCSA was evaluated by comparison with the ICSA, SCA, CSA and ABC algorithms in terms of best threshold values, PSNR, SSIM, FSIM, and CPU time. The results show that the hybridized SCCSA algorithm achieves a clear and competitive advantage over the other algorithms under either method, both qualitatively and quantitatively. Further, the Wilcoxon test results show that the differences between SCCSA and the other algorithms are significant. Finally, the promising results demonstrated that the proposed hybrid SCCSA algorithm is feasible and easily implementable for effective gray image segmentation. Although the findings indicate that the proposed SCCSA performs well, the aim of the analysis is not to produce an algorithm that overcomes all currently available algorithms, but to show that SCCSA can be regarded as an attractive alternative. Future work involves incorporating other popular entropy-based segmentation methods, such as fuzzy entropy, Tsallis entropy, minimum cross-entropy, Rényi entropy, Masi entropy and Shannon entropy, for medical and satellite images.