1 Introduction

A confident understanding of fabric behaviour and characteristics is vital in the design and development of a functional garment. For instance, a warp-knit mesh fabric made of 100% polyester is designed to wick moisture away from the skin and to dry quickly, making it ideal for everyday wear and preferred for extreme performance requirements. On the other hand, Georgette is a balanced plain-woven fabric, generally made of 100% polyester with high-twist yarns that give it a less smooth appearance, and it is used in fashion apparel. Textile materials have evolved in recent times, and fabrics play a significant role in the development of the sportswear industry. In fact, fabric quality reflects the quality of a brand and its identity [6, 12, 16].

The goal of image enhancement is to increase the contrast among adjacent regions or features in order to support activities such as FDD and monitoring [20]. Among image enhancement techniques, wavelet and homomorphic filtering approaches have been shown to enable dynamic range compression and contrast enhancement simultaneously [1, 17, 19]. All of these approaches concentrate on reinforcing the details of the image to be enhanced in the frequency domain [2, 18].

The AT is a powerful tool for image decomposition. If the image is decomposed using the AT, the details can be separated into the higher-frequency sub-bands [1, 15]. The homomorphic enhancement algorithm is then used to transform these details into illumination and reflectance components, and the reflectance components are amplified so that the details show clearly. Finally, a wavelet reconstruction process is performed to obtain an enhanced image with much more detail [3–5, 14, 21].

Recent research on gear fault diagnosis in the manufacturing line used image recognition, with the wavelet packet bispectrum applied to process the gear vibration signals [8]. For detecting the fluff quality of a fabric surface, optimal sensing has been used to assess fluff quality with a practical method [9].

Three-dimensional reconstruction of fleece has been used to avoid manually measuring the thickness of the fabric surface [10]. Fabric defects affect the profitability of clothing companies, because defective fabrics have to be sold at a lower price. Deep learning has therefore been used to detect those defects in the fabrication of clothes [7, 11, 13, 22].

The motivation and problem definition of this paper:

  • Fabric defects directly affect the profit margins of the company, as defective fabrics have to be sold at a lower price.

  • To minimize the value lost due to the variety of defects occurring in the fabric, a manufacturer should try to minimize those defects by taking suitable remedies.

The innovations and contributions of this paper are:

  • The main objective is to obtain a high-resolution fabric image with as much detail as possible for defect detection.

  • Design and development of approaches for detecting fabric defects.

  • Design and development of approaches that make the classification of defective fabric more efficient by reducing its processing time.

  • Building an efficient fabric defect detection system.

This research suggests two proposed approaches for FDD. The first approach is based on the SCFC. The second approach depends on the AHSFC. This paper is organized as follows. Section 2 gives the motivation and related works. Section 3 discusses fabric defects. Section 4 surveys the CLAHE. Section 5 discusses defect classification. Section 6 presents feature extraction. Section 7 covers the classification of fabric defects. Section 8 explains segmentation. Section 9 gives the AT. Section 10 gives the HM. Section 11 gives the AH. Section 12 presents the first proposed approach. Section 13 presents the second proposed approach. Section 14 defines the performance metrics. Section 15 presents the simulation results, and Section 16 discusses them. Finally, Section 17 gives the conclusion and future work.

2 Motivation and related works

This framework derives its motivation and importance from the paper topic and the pictures that are utilized for FDD. This research deals with an important topic derived from the problems addressed in FDD. The basic objective of this proposal is to analyze the fabric images [1, 17, 19, 20] by computer, enabling the inspector to detect and classify fabric defects. This research presents two new approaches for FDD.

The first suggested approach is based on the SCFC, which depends on enhancement by the CLAHE, followed by the OT and finally FE for FDD. The second approach depends on the AHSFC, which relies on enhancement with the AH, followed by the OT and FE for classification and FDD. Compared to the most relevant works [8–10], this framework evaluates performance with entropy, average gradient, contrast, Sobel edge magnitude, sensitivity, specificity, precision, accuracy and identification time. The results obtained in this framework are better than those of the previous works introduced in [7, 11, 13, 22].

3 Fabric defects

A fabric defect is generally a fault that spoils the quality of the material. Faults in fabric reduce its price as well as its value in the market from the consumer's point of view. A fabric defect is any abnormality in the fabric that hinders its acceptability by the consumer. Examples of fabric defects in woven fabric are coloured flecks, knots, slubs, broken ends woven in a bunch, broken patterns, double ends, floats, gouts, holes, cuts or tears, and lashing.

Importance of FDD

  • With the increasing demand for quality fabric, customers are now more concerned about the quality of the material.

  • In order to fulfil the demand for quality material, it is important to avoid defects.

  • A significant reduction in the price of fabric is seen due to the presence of faults.

  • It also affects the brand name.

4 CLAHE

This method depends on applying the HE to the image and then modifying the clip limit after the histogram processing, in order to avoid the noise problem and the drawbacks of the HE.

The steps of this model can be summarized as follows:

1. Split the image into non-overlapping tiles.

2. Compute the histogram of each tile.

3. Apply a clip limit to obtain improved images.

4. Optimize the clip limit CL to obtain high-resolution images that enable FDD.

The clip limit CL is obtained by the following equation [2, 19]:

$$ {C}_L=\frac{M}{N}\left(1+\frac{\alpha }{100}\left({S}_{max}-1\right)\right) $$
(1)

where CL is the clip limit, α is the clip factor, M is the size of the region (tile), N is the number of gray-scale levels, and Smax is the maximum allowable histogram slope. The maximum clip limit is obtained for α = 100.
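As an illustration of this step, the following sketch computes the clip limit of Eq. (1) and applies CLAHE to a gray-scale fabric image. It assumes the OpenCV library is available; the file name, tile size, clip factor and maximum slope are hypothetical values, not the settings used in the experiments.

```python
import cv2

def clip_limit(m_region, n_levels, alpha, s_max):
    """Clip limit of Eq. (1): C_L = (M / N) * (1 + (alpha / 100) * (S_max - 1))."""
    return (m_region / n_levels) * (1.0 + (alpha / 100.0) * (s_max - 1.0))

# Illustrative parameters (assumed, not taken from the paper's experiments).
m_region = 64 * 64          # pixels per tile (M)
n_levels = 256              # number of gray-scale levels (N)
alpha, s_max = 40.0, 4.0    # clip factor and maximum histogram slope

print("C_L =", clip_limit(m_region, n_levels, alpha, s_max))

# OpenCV's clipLimit argument is a normalized multiple of the average bin
# height, so the value above is shown only for inspection of Eq. (1).
img = cv2.imread("fabric.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)
cv2.imwrite("fabric_clahe.png", enhanced)
```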

5 Defects classification

Defects take many shapes, in particular: knots, slubs, broken ends woven in a bunch, broken patterns, double ends, floats, gouts, holes, cuts or tears, and lashing. Defects with simple, regular shapes are processed easily, so a fabric image containing such a simple defect is considered benign. The severe cases are broken patterns, holes, cuts, or tears in the fabric image, and it is difficult to process these defects.

6 Feature extraction

Most features are derived from the abnormalities of the fabric image. The extraction of these features plays a very significant role in detecting defects in fabric images because of their nature. Texture features have proven helpful in separating good fabric from defective fabric, and they can isolate good and defective fabric images through classification.

7 Classification of fabric defects

A MATLAB program has been created to improve the detection and classification of defects in the fabric pictures. The classification steps are:

  1. Examine whether the picture contains a defect or not.

  2. If no defect is discovered, the picture is classified as normal.

  3. If a defect is discovered, the picture is classified as defective.

  4. Segmentation:

    • Perform the OT on the fabric image.

    • Detect the boundaries of the picture with the Edge Map (EM) obtained by Canny Detection (CD).

  5. Classify the defect based on the shape of the defect and the CD:

    • Separate the defect area from the picture.

    • If the defect area is thick and thin, or a hole, tear, or cut weft, it is a smash defect.

The detection of the Region-of-Interest (ROI) defect edges is performed by utilizing the EM, which detects the edges with low contrast.
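As a concrete illustration of the segmentation and edge-map steps above, the following minimal sketch applies Otsu thresholding (OT) and Canny detection (CD) to obtain the defect edge map. It assumes OpenCV, a hypothetical input file name, and a common heuristic for deriving the Canny thresholds from the Otsu level rather than the paper's exact settings.

```python
import cv2

img = cv2.imread("fabric.png", cv2.IMREAD_GRAYSCALE)       # hypothetical file

# Otsu thresholding (OT): automatic global threshold selection.
otsu_t, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Canny detection (CD): edge map (EM) with hysteresis thresholds derived
# from the Otsu level (an illustrative heuristic).
edges = cv2.Canny(img, 0.5 * otsu_t, otsu_t)

# Keep only the edge pixels that fall inside the thresholded regions (ROI).
roi_edges = cv2.bitwise_and(edges, edges, mask=binary)
cv2.imwrite("fabric_edge_map.png", roi_edges)
```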

8 Segmentation

This step divides a fabric image into regions with similar properties. The objective of this stage is to locate suspicious regions and classify them as benign or smash. This process depends on the OT method, a non-parametric and unsupervised way of automatically selecting a threshold for image segmentation. It is optimal in the sense that it maximizes the between-class variance, a well-known measure used in statistical discriminant analysis [14, 21].

$$ MN={n}_0+{n}_1+{n}_2+\dots +{n}_{L-1} $$
(2)

where M × N are the dimensions of the image and ni is the number of pixels in the image with gray level i. Suppose we select a threshold k and use it to split the picture into two classes, C1 and C2. Using this threshold, the probability P1(k) that a pixel is assigned to class C1 is given by the cumulative sum:

$$ {P}_1(k)=\sum \limits_{i=0}^k{p}_i $$
(3)

The pixels of the input image can be assigned to L gray levels, and k is a threshold selected such that 0 < k < L − 1.

The probability of Class C2 occurrence is,

$$ {P}_2(k)=\sum \limits_{i=k+1}^{L-1}{p}_i=1-{P}_1(k) $$
(4)

where P1(k) is the probability of Class C1.

The mean intensity value of the pixels assigned to class C1 is:

$$ {m}_1(k)=\frac{1}{P_1(k)}\sum \limits_{\mathrm{i}=0}^k\ i\ {p}_i $$
(5)

The mean intensity value of the pixels assigned to class C2 is:

$$ {m}_2(k)=\frac{1}{P_2(k)}\sum \limits_{i=k+1}^{L-1}\ i\ {p}_i $$
(6)

where P2(k) is the probability of class C2.

The global mean is given by:

$$ {m}_G(k)=\sum \limits_{i=0}^{L-1}\ i\ {p}_i $$
(7)

The problem is to find the optimum value of k that maximizes the separability measure defined by the following equation:

$$ y(k)=\frac{{\sigma_B}^2(k)}{{\sigma_G}^2(k)} $$
(8)

where σB2(k) is the between-class variance achieved by:

$$ {\sigma_B}^2(k)={P}_1{\left({m}_1-{m}_G\right)}^2+{P}_2{\left({m}_2-{m}_G\right)}^2 $$
(9)

and σG2(k) is the global variance defined as,

$$ {\sigma_G}^2(k)=\sum \limits_{i=0}^{L-1}\ {\left(i-{m}_G\right)}^2\ {p}_i $$
(10)

where the optimum threshold is the k* that maximizes σB2(k).
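A minimal NumPy sketch of the Otsu threshold search defined by Eqs. (2)–(10) is given below; it scans every candidate threshold k and returns the k* that maximizes the between-class variance (an 8-bit gray-scale input is assumed).

```python
import numpy as np

def otsu_threshold(img: np.ndarray, levels: int = 256) -> int:
    """Return k* that maximizes the between-class variance of Eq. (9)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                         # p_i (normalized histogram)
    i = np.arange(levels)
    m_g = np.sum(i * p)                           # global mean, Eq. (7)

    best_k, best_sigma_b = 0, -1.0
    for k in range(1, levels - 1):
        p1 = p[:k + 1].sum()                      # P1(k), Eq. (3)
        p2 = 1.0 - p1                             # P2(k), Eq. (4)
        if p1 == 0.0 or p2 == 0.0:
            continue
        m1 = np.sum(i[:k + 1] * p[:k + 1]) / p1   # class-1 mean, Eq. (5)
        m2 = np.sum(i[k + 1:] * p[k + 1:]) / p2   # class-2 mean, Eq. (6)
        sigma_b = p1 * (m1 - m_g) ** 2 + p2 * (m2 - m_g) ** 2   # Eq. (9)
        if sigma_b > best_sigma_b:
            best_k, best_sigma_b = k, sigma_b
    return best_k

# Usage: mask = (img > otsu_threshold(img)).astype(np.uint8) * 255
```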

9 AT

The AT decomposes an image into sub-bands using the "à trous" filtering approach [12–15] in several consecutive stages. The low-pass filter used in this process has the following mask in all stages [1, 2]:

$$ H=\frac{1}{256}\left(\begin{array}{ccccc}1& 4& 6& 4& 1\\ {}4& 16& 24& 16& 4\\ {}6& 24& 36& 24& 6\\ {}4& 16& 24& 16& 4\\ {}1& 4& 6& 4& 1\end{array}\right) $$
(11)

Each difference between the filter outputs of two consecutive stages is a sub-band of the original picture. After the first filtering stage, we obtain an approximation image, and the difference between the input and this approximation image gives the first detail plane w1. If this process is repeated, we can obtain more detail planes w2 up to wn and finally an approximation plane pn.
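The following sketch illustrates this decomposition using the mask H of Eq. (11); at each stage the kernel is dilated by inserting zeros between its coefficients, in the spirit of the à trous scheme. It assumes NumPy and SciPy are available; the number of scales and the boundary handling are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import convolve

# B3-spline low-pass mask H of Eq. (11).
H = np.array([[1,  4,  6,  4, 1],
              [4, 16, 24, 16, 4],
              [6, 24, 36, 24, 6],
              [4, 16, 24, 16, 4],
              [1,  4,  6,  4, 1]], dtype=float) / 256.0

def atrous_decompose(img, n_scales=3):
    """Return ([w1, ..., wn], p_n) so that img = p_n + w1 + ... + wn."""
    details, approx, kernel = [], img.astype(float), H
    for _ in range(n_scales):
        smoothed = convolve(approx, kernel, mode="mirror")
        details.append(approx - smoothed)    # detail plane w_j
        approx = smoothed                    # approximation for the next stage
        # "A trous": dilate the kernel by inserting zeros between coefficients.
        dilated = np.zeros((2 * kernel.shape[0] - 1, 2 * kernel.shape[1] - 1))
        dilated[::2, ::2] = kernel
        kernel = dilated
    return details, approx

# Reconstruction (inverse AT) is a simple sum: img ≈ approx + sum(details).
```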

10 HM

An image can be represented as a product of two factors, as in the following equation [3, 6]:

$$ f\left({n}_1,{n}_2\right)=i\left({n}_1,{n}_2\right)r\left({n}_1,{n}_2\right) $$
(12)

where f(n1, n2) is the obtained picture pixel value, i(n1, n2) is the light illumination incident on the object to be imaged, and r(n1, n2) is the reflectance of that object.

The illumination is approximately constant, since the light falling on all objects is approximately the same; the main change between object images is therefore in the reflectance component. This method is shown in Fig. 1.

Fig. 1 Block diagram of the HM

If we apply a logarithmic process on Eq. (12), we can change the multiplication process into an addition process as follows:

$$ \log \left(f\left({n}_1,{n}_2\right)\right)=\log \left(i\left({n}_1,{n}_2\right)\right)+\log \left(r\left({n}_1,{n}_2\right)\right) $$
(13)

The first term in the above equation has small variations, but the second term has large variations as it corresponds to the reflectivity of the object to be imaged. By attenuating the first term and reinforcing the second term of Eq. (13), we can reinforce the image details.
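A minimal sketch of this idea is given below. The separation of log f into a slowly varying illumination estimate and a rapidly varying reflectance estimate with a Gaussian filter, as well as the gain values, are illustrative assumptions rather than the paper's exact processing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_enhance(img, illum_gain=0.5, refl_gain=1.5, sigma=15):
    """Attenuate illumination and reinforce reflectance in the log domain (Eq. 13)."""
    f = img.astype(float) + 1.0               # avoid log(0)
    log_f = np.log(f)                         # log f = log i + log r
    log_i = gaussian_filter(log_f, sigma)     # slowly varying illumination estimate
    log_r = log_f - log_i                     # rapidly varying reflectance estimate
    out = np.exp(illum_gain * log_i + refl_gain * log_r)
    # Rescale to the 8-bit range for display.
    out = (out - out.min()) / (out.max() - out.min() + 1e-12) * 255.0
    return out.astype(np.uint8)
```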

11 AH

In this approach, we merge the benefits of the HM and AT techniques. First, the image is decomposed into sub-bands by the AT. Then, each sub-band is processed separately using the HM to reinforce its details. This scheme is depicted in Fig. 2.

Fig. 2 Block diagram of the AH

The steps of this scheme can be summarized as follows (a code sketch of the combined procedure is given after the list):

  1. Decompose the fabric image into four sub-bands p3, w1, w2 and w3 by the AT using the low-pass mask of Eq. (11).

  2. Apply a logarithmic operation on each detail sub-band w1, w2 and w3 to obtain its illumination and reflectance components, as these sub-bands contain the details.

  3. Perform a reinforcement process on the reflectance factor of each sub-band and an attenuation operation on the illumination factor.

  4. Reconstruct each sub-band from its illumination and reflectance.

  5. Perform an inverse AT on the obtained sub-bands, adding the approximation and the processed detail planes (p3, w1, w2 and w3 in the three-sub-band case, or p6, w1, ..., w6 in the six-sub-band case) after the HM processing, to get the improved image.
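The sketch below outlines these steps, reusing the atrous_decompose function from the AT sketch above. The per-sub-band gains, the Gaussian illumination estimate and the offset handling for negative detail values are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ah_enhance(img, n_scales=3, illum_gain=0.5, refl_gain=1.5, sigma=15):
    """AT decomposition followed by HM processing of each detail sub-band."""
    details, approx = atrous_decompose(img, n_scales=n_scales)
    processed = []
    for w in details:
        shifted = w - w.min() + 1.0               # make the plane positive for the log
        log_w = np.log(shifted)
        log_i = gaussian_filter(log_w, sigma)     # illumination part of the sub-band
        log_r = log_w - log_i                     # reflectance part of the sub-band
        enhanced = np.exp(illum_gain * log_i + refl_gain * log_r)
        processed.append(enhanced - enhanced.min() + w.min())   # restore the offset
    # Inverse AT: the enhanced image is the sum of the approximation and details.
    return approx + sum(processed)
```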

12 The first proposed approach

The SCFC presents an efficient model for FDD in fabric images. It depends on enhancement by the CLAHE as pre-processing, followed by segmentation of the digital fabric pictures and separation of the defective regions by classifying these pictures based on FE, the shape of the defect, and the CD edge sharpness. The system then decides whether the fabric image is defective or not, and determines whether a defective one is benign or smash. The graphic form of this technique is shown in Fig. 3.

Fig. 3 Block diagram of the first proposed framework

The steps of this scheme are as follows (a sketch of the full pipeline is given after the list):

  1. Obtain the original image.

  2. Apply the CLAHE to the fabric picture as a pre-processing step.

  3. Carry out the segmentation stage on the resulting image.

  4. Perform the FE process on the obtained picture.

  5. Separate the defective areas by classifying these pictures based on the FE and the shape of the defect using the CD sharpness.

  6. Finally, apply the classification process to the acquired image.

  7. The system then decides whether the fabric image is defective or not and determines whether a defective one is smash.
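The sketch referred to above is given here; it strings together CLAHE, Otsu segmentation, Canny edge detection and a simple area-based rule, assuming OpenCV and a hypothetical input file. The area and edge-density thresholds used to label an image as normal, benign or smash are placeholders for illustration, not the paper's trained decision rules.

```python
import cv2
import numpy as np

def scfc_pipeline(path="fabric.png"):
    """CLAHE -> Otsu segmentation -> Canny edge map -> simple shape features."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Steps 1-2: pre-processing with CLAHE.
    enhanced = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)

    # Step 3: segmentation with Otsu thresholding.
    otsu_t, mask = cv2.threshold(enhanced, 0, 255,
                                 cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Steps 4-5: feature extraction from the candidate regions and the edge map.
    edges = cv2.Canny(enhanced, 0.5 * otsu_t, otsu_t)
    edge_density = float(np.count_nonzero(edges)) / edges.size
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    defect_area = max((cv2.contourArea(c) for c in contours), default=0.0)

    # Steps 6-7: rule-based classification with placeholder thresholds.
    if defect_area < 50 and edge_density < 0.05:
        return "normal"
    return "smash defect" if defect_area > 500 else "benign defect"

print(scfc_pipeline())
```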

13 The second proposed scheme

The AHSFC presents an efficient model for FDD in fabric pictures. It depends on enhancement by the AH as pre-processing, followed by segmentation of the digital fabric images and separation of the defective regions by classifying these pictures based on FE, the shape of the defect, and the CD edge sharpness. The system then decides whether the fabric image is defective or not, and determines whether a defective one is benign or smash. The graphic form of this technique is shown in Fig. 4.

Fig. 4 Block diagram of the second proposed framework

The steps of this scheme are as follows:

  1. Pick up the original picture from the camera.

  2. Apply the AH algorithm to the fabric picture as a pre-processing step.

  3. Carry out the segmentation stage on the resulting image.

  4. Perform the FE process on the obtained image.

  5. Separate the defective areas by classifying these pictures based on the FE and the shape of the defect using the CD sharpness.

  6. Finally, apply the classification process to the acquired image.

  7. The system then decides whether the fabric image is defective or not and determines whether a defective one is smash.

14 Performance metrics

This section presents the quality metrics used for the evaluation of the enhancement results. These metrics include entropy (E), contrast improvement factor (CIF), average gradient (AG), Sobel edge magnitude (SE), Sensitivity (SEN), Precision (PR), Specificity (SP), Accuracy (ACU) and Identification Time (IT) [6, 16].

The philosophy of E is that it measures the amount of information in a random variable. The histogram of an image can be interpreted as a PDF if it is normalized by the image dimensions. An equally spread histogram is equivalent to a uniform random variable; a uniformly distributed random variable has equal probabilities of events, leading to maximum entropy. On the other hand, discrete peaked histograms, as in black-and-white pictures, carry a low amount of information. Hence, one objective of image processing is to get close to histograms with uniform distributions. The E of an image is expressed as follows [1, 3]:

$$ E=\sum \limits_{i=0}^{255}-{p}_i{\log}_2\left({p}_i\right) $$
(14)

where E is the entropy of the image and 255 is the maximum level for an 8-bit gray-scale image. The larger the number of levels occupied in an image, the higher the entropy. Here pi is the probability of occurrence of pixels in the image with intensity i; if the number of pixels with intensity i is ni and the image contains n pixels, then \( {p}_i={n}_i/n \).
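The entropy of Eq. (14) can be computed directly from the normalized histogram, as in this short NumPy sketch (an 8-bit input is assumed).

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Entropy E of Eq. (14) for an 8-bit gray-scale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()        # p_i = n_i / n
    p = p[p > 0]                 # drop empty bins so 0 * log2(0) terms vanish
    return float(-np.sum(p * np.log2(p)))
```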

The contrast improvement factor is defined as follows:

$$ {C}_{IF}=\frac{\left|{C}_O-{C}_e\right|}{{C}_e} $$
(15)

where CIF is the percentage contrast enhancement factor, CO is the original picture contrast, and Ce is the enhanced picture contrast.

In image processing applications, many edges mean much information. Unfortunately, dark pictures prevent the edges and details from appearing, so our objective is to reinforce the edges and reveal more detail. Mathematically, edges can be obtained by differentiation in both directions of the image. Hence, the AG is expressed as follows [3]:

$$ AG=\frac{1}{MN}\sum \limits_{x=1}^M\sum \limits_{y=1}^N\sqrt{\frac{\left({\left(\frac{\partial f}{\partial x}\ \right)}^2+{\left(\frac{\partial f}{\partial y}\ \right)}^2\right)}{2}\kern0.5em } $$
(16)

where AG is the average gradient of the picture, f(x, y) is the original image, and M × N are the dimensions of the picture.

Another metric for edge intensity is the magnitude of the horizontal and vertical derivatives. The SE is expressed as follows [3]:

$$ \nabla f=\sqrt{{f_x}^2+{f_y}^2\kern0.75em } $$
(17)

where ∇f is the Sobel edge magnitude, and fx and fy are two pictures that at each point contain the horizontal and vertical derivative approximations, respectively, as shown in Fig. 5.

Fig. 5 Differentiation filter masks in the x and y directions: (a) differentiation mask in the x direction, (b) differentiation mask in the y direction

To estimate fx, we differentiate with respect to x using the corresponding differentiation mask, and to estimate fy, we differentiate with respect to y using its mask, as shown in Fig. 5.
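Both edge-based metrics can be computed with standard derivative filters, as in the sketch below (assuming NumPy and OpenCV; cv2.Sobel applies 3 × 3 masks analogous to those of Fig. 5, and finite differences are used for the partial derivatives of Eq. (16)).

```python
import cv2
import numpy as np

def average_gradient(img: np.ndarray) -> float:
    """AG of Eq. (16), with finite differences for the partial derivatives."""
    gy, gx = np.gradient(img.astype(float))          # df/dy, df/dx
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def sobel_edge_magnitude(img: np.ndarray) -> np.ndarray:
    """Per-pixel Sobel edge magnitude of Eq. (17)."""
    fx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)   # horizontal derivative mask
    fy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)   # vertical derivative mask
    return np.sqrt(fx ** 2 + fy ** 2)
```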

Fig. 6 Results of the first experiment of the second suggested technique

Another perspective on picture quality is the classification performance obtained from the enhanced fabric pictures. In the following subsections, we evaluate the classification models using four other metrics: SEN, PR, SP and ACU.

Sensitivity

This metric quantitatively assesses the capacity of a classifier to recognize the positive outcomes and is given as:

$$ {\mathrm{S}}_{\mathrm{EN}}=\frac{{\mathrm{T}}_{\mathrm{P}}}{{\mathrm{T}}_{\mathrm{P}}+{\mathrm{F}}_{\mathrm{N}}} $$
(18)

where TP is True Positives and FN is False Negatives.

Specificity

This metric assesses the capacity of a classifier to distinguish the negative outcomes and is given as:

$$ {\mathrm{S}}_{\mathrm{P}}=\frac{{\mathrm{T}}_{\mathrm{N}}}{{\mathrm{T}}_{\mathrm{N}}+{\mathrm{F}}_{\mathrm{P}}} $$
(19)

where TN is True Negatives and FP is False Positives.

Precision

This metric is defined as the proportion of true positives among all positive predictions and is given as:

$$ {\mathrm{P}}_{\mathrm{R}}=\frac{{\mathrm{T}}_{\mathrm{P}}}{{\mathrm{T}}_{\mathrm{P}}+{\mathrm{F}}_{\mathrm{P}}} $$
(20)

where TP is True Positives, and FP is False Positives.

Accuracy

This metric determines the effectiveness of the classifier in terms of true positives and true negatives, indicating the proportion of correct results:

$$ {\mathrm{AC}}_{\mathrm{U}}=\frac{{\mathrm{T}}_{\mathrm{P}}+{\mathrm{T}}_{\mathrm{N}}}{{\mathrm{T}}_{\mathrm{P}}+{\mathrm{F}}_{\mathrm{P}}+{\mathrm{T}}_{\mathrm{N}}+{\mathrm{F}}_{\mathrm{N}}} $$
(21)

where TP (True Positives) is the number of samples whose labels are positive and whose predictions are positive, TN (True Negatives) is the number of samples whose labels are negative and whose predictions are negative, FP (False Positives) is the number of samples whose labels are negative and whose predictions are positive, and FN (False Negatives) is the number of samples whose labels are positive and whose predictions are negative.
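The four classification metrics of Eqs. (18)–(21) follow directly from these confusion-matrix counts, as the sketch below shows; the example counts are hypothetical, not results from the paper.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """SEN, SP, PR and ACU from Eqs. (18)-(21)."""
    return {
        "sensitivity": tp / (tp + fn),                    # Eq. (18)
        "specificity": tn / (tn + fp),                    # Eq. (19)
        "precision": tp / (tp + fp),                      # Eq. (20)
        "accuracy": (tp + tn) / (tp + tn + fp + fn),      # Eq. (21)
    }

# Example with hypothetical counts:
print(classification_metrics(tp=45, tn=40, fp=5, fn=10))
```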

15 Simulation results

This section presents several simulation cases executed on seven different pictures to evaluate the two suggested modes. The evaluation metrics for FDD quality are E, CIF, AG, ∇f, SEN, PR, SP, ACU and IT.

These results adopt a strategy of presenting the original fabric pictures with their enhanced versions using the first proposed approach and the second proposed approach. The first approach is based on the SCFC. The second scheme depends on the AHSFC.

The results of FDD for the two proposed algorithms are shown in Fig. 6. The performance metrics of the two schemes for the first case are summarized in Table 2.

A similar experiment has been carried out on the other fabric pictures. The results of FDD for the two proposed algorithms are shown in Figs. 6 to 12. The obtained results show that the second proposed scheme succeeded in FDD and achieves the best level of detail. Furthermore, it is clear that the second proposed scheme gives the best improvement results for FDD in terms of both visual quality and performance metrics. The obtained numerical results are summarized in Tables 3, 4, 5, 6, 7, 8, 9, 10 and 11.


16 Discussion

To further confirm the effectiveness of the first and second presented modes, cases on many other fabric picture datasets are presented. The defect detection algorithms proposed in this work can distinguish between normal and defective images and identify specific fabric defects.

Therefore, accuracy (ACU) and Identification Time (IT) are used as metrics for evaluation. The former reflects the algorithm's ability to distinguish between normal and defective fabric pictures, whereas the latter reflects the time the mode needs to recognize specific fabric defects. The obtained results of the suggested techniques are shown in Figs. 6, 7, 8, 9, 10, 11 and 12.

Fig. 7 Results of the second experiment of the second suggested technique

Fig. 8 Results of the third case of the second suggested mode

Fig. 9 Results of the fourth experiment of the second suggested mode

Fig. 10 Results of the fifth case of the second suggested technique

Fig. 11 Results of the sixth case of the second suggested framework

Fig. 12 Results of the seventh case of the second suggested scheme

In this section, seven cases have been carried out on different fabric pictures to evaluate the performance of the two presented techniques. The evaluation metrics for image quality are E, CIF, AG, SE (∇f), SEN, PR, SP and ACU. The numerical properties of the seven cases are shown in Table 1.

Table 1 Database properties

The results of the first case are shown in Fig. 6. Figure 6a presents the original fabric image. The results of the first proposed algorithm and of the second proposed algorithm with Three Sub-bands (TS) and with Six Sub-bands (SS) are shown in Fig. 6b to Fig. 6h, respectively. A similar case has been carried out on the other fabric pictures, and the results are shown in Figs. 7, 8, 9, 10, 11 and 12.

The obtained results for all cases are shown in Figs. 6, 7, 8, 9, 10, 11 and 12. They show that the second suggested mode succeeded in improving FDD. Furthermore, it is clear that the second proposed technique gives the best FDD results in terms of both visual quality and performance metrics.

Table 2 presents the quality metrics of the first experiment. The first mode has a lower E value than the second framework. The value of AG is very low for the first mode and low for the second mode. The first approach also has a lower SE value than the second approach. Comparing the two modes, it is clear that the second framework achieves the maximum values of E, AG and SE for the first experiment.

Table 2 Numerical results of the first case

Table 3 presents the quality metrics of the second experiment. The first approach has a lower E value than the second approach. The value of AG is very low for the first approach and low for the second approach. The first approach also has a lower SE value than the second mode. Comparing the two suggested ways, it is clear that the second proposed scheme achieves the maximum values of E, AG and SE for the second case.

Table 3 Numerical result of the second experiment

A similar experiment has been carried out on other fabric images, and the results are shown in Tables 4 and 5.

Table 4 Numerical results of the third (color) case
Table 5 Numerical results of the fourth case

A similar experiment has been carried out on other fabric pictures, and the results are shown in Tables 6 and 7.

Table 6 Numerical results of the fifth case
Table 7 Numerical results of the sixth case

Table 8 shows the FDD performance metrics for the seventh case. The higher the value of E, the more features of the fabric image are revealed, and the higher the value of AG, the better the FDD performance for the fabric picture.

Table 8 Numerical results of the seventh case

Table 9 presents the contrast enhancement results of the first and second experiments. The first approach has a lower contrast enhancement value than the second approach for the first experiment, and also a lower value for the other experiments. Comparing the two proposed approaches, it is clear that the second proposed approach achieves the maximum contrast enhancement values for all experiments.

Table 9 CIF results for all experiments

Table 10 presents the numerical IT results of all experiments. The first approach has a longer IT (in ms) than the second mode for all cases. Comparing the two proposed schemes, it is clear that the second suggested approach is the fastest for all cases.

Table 10 Numerical IT (ms) results of all cases

Table 11 presents the numerical SEN, PR, SP and ACU results for all experiments. The first algorithm has lower values of SEN, PR, SP and ACU than the second technique for all experiments. Comparing the two proposed ways, it is clear that the second suggested mode achieves the maximum values of SEN, PR, SP and ACU for all experiments.

Table 11 Numerical results for all cases

Finally, Table 12 presents a summary of the performance of the suggested schemes alongside results reported for similar schemes.

Table 12 Comparison of the presented schemes and state-of-the-art FDD algorithms

As shown in the table, the ACU values of the presented schemes range between 91 and 92. The ACU value of the second proposed scheme is the largest among the compared methods. The ACU value of [7] is high, while the ACU value of [22] is the lowest among the compared techniques. The ACU value of the second suggested scheme is the largest with respect to all the traditional techniques. The IT values of the presented schemes range between 60 and 92 ms. The IT value of the second proposed scheme is the shortest among the presented algorithms, whereas the first approach has a longer IT. The IT value of [7] is 35 ms and the IT value of [13] is 3 ms. The IT value of the second suggested scheme is the lowest with respect to all the traditional ways. So, it is clear that the second proposed scheme is the best compared with the traditional techniques. Relative to the similar studies reported in the table, the ACU and IT values of the presented techniques outperform those in [7, 11, 13, 22].

17 Conclusion and future work

This framework presents two proposed modes for FDD. The first scheme is based on the SCFC, and the second depends on the AHSFC. The quality metrics used for evaluation in this paper are E, CIF, AG, ∇f, SEN, PR, SP, ACU and IT. The obtained results show that the second suggested scheme ensures high ACU values and the fastest IT compared with the other traditional schemes. The results obtained with the second suggested mode reveal its ability to enhance fabric images and perform FDD on clothes better than the first technique. In future work, the presented techniques can be combined with deep learning models for classification of the obtained images.