Introduction

Lower extremity wounds (LEWs) have become a serious health care problem globally. They predominantly affect elderly patients with diabetes and obesity, and may cause severe pain, social isolation, altered mentation, incontinence, and decreased productivity. Because skin re-growth is delayed or arrested, patients face costly and prolonged treatment, and these wounds are also susceptible to secondary infections. The reported prevalence of chronic wounds in the United States is roughly 2 % of the general population and 8.5 % of the elderly [1], with an associated treatment cost of about $50 billion per year [2]. The prevalence in the rest of the world is broadly similar; in India it is nearly 4.5 % [3]. Proper assessment and treatment of chronic wounds (CWs) will reduce morbidity.

Precise wound diagnosis and treatment by unassisted vision remains a tedious task for dermatologists because it requires periodic evaluation. Currently, wound evaluation relies on two examinations: spatial measurement and visual inspection. Status evaluation by visual inspection is based on area, texture, color, tissue composition, and severity stage [4]. Spatial measurement techniques include wound area estimation with a common ruler, transparency tracing, color dye injection, and alginate molds; however, these techniques are inaccurate for complex wounds. The state-of-the-art approach of digital planimetry is another way to identify the wound and its tissues [5], but planimetry is time-consuming and hence not an ideal clinical approach. An ideal method is a non-contact system [6–9] that estimates the wound and its tissue composition to predict wound contraction. Such methods are non-invasive, clinically affordable, and widely used as assistive tools for dermatological evaluation.

Machine vision and artificial intelligence techniques have been applied in several studies on automatic wound analysis [10–13]. Lower appendage ulcers have intricate coloration, heterogeneous textures, and vague wound peripheries, which make segmentation very tedious; the complex nature of an ulcer makes accurate wound segmentation a challenging computational task. Conventional image-processing algorithms such as histogram thresholding and watersheds usually prove inappropriate for such wounds. Active contour modeling segments wound beds accurately, but the pressure ulcer images must be captured under controlled imaging conditions [14]. A similar study [15] used a statistical color model, based on Bayes' theorem, that back-projects color pixels onto a color histogram model to detect the wound bed and peripheral skin. Despite high segmentation accuracy, the model performed well for images with heterogeneous skin color but failed when the skin color was homogeneous. Yadav et al. [16] proposed a wound area segmentation method for LEW color photographs taken under non-ideal imaging conditions; the Db and Dr color channels were selected from the contrast between the wound and non-wound regions, and the wound area was segmented by k-means and fuzzy c-means. However, the method falls short in segmentation accuracy. A more accurate method was proposed by Fauzi et al. [17], based on a probability map computed from the pixel distances to the red, yellow, black, and white colors in a modified HSV color space, but their method was handicapped for wounds on dark-toned skin. The formation of granulation tissue indicates wound healing, and most studies of healing rate are based on the estimation of the various tissues present within the ulcer circumference. The change in tissue composition is an indicator of healing and a deciding factor for therapeutic intervention. Mukherjee et al. [18] proposed a framework for automated tissue classification to help clinicians estimate wound healing dynamics. They suggested that fuzzy divergence based thresholding applied to the saturation plane of the lesion photograph minimizes edge inexactness. Bayesian and support vector machine classifiers were trained on color and textural features to classify granulation, slough, and necrotic tissues, and the support vector machine with a third-order polynomial kernel was found to be the best classifier for chronic wound tissue classification. However, their findings were confined to a small dataset of images acquired under ideal imaging conditions, and the study had significant limitations in portability to smartphones and tablets, which were resolved by Chakraborty et al. [13] in their recent work on a telemedicine support system.

Accurate wound measurement is an essential and integral part of the assessment process. However, despite several efforts to develop non-invasive systems, there is still a dearth of reliable and accurate medical systems for rapid wound-bed detection under uncontrolled illumination. Most studies on wound segmentation from digital images are restricted to photographs acquired under ideal imaging conditions and to biased datasets. This research gap inspired us to develop a new, user-friendly point-of-care technology for wound-bed estimation of complicated lesions, with an eye on translational medicine. In view of the foregoing, we propose spectral clustering with the normalized Ng-Jordan-Weiss Laplacian for segmentation [19]. The proposed framework for wound-bed detection is depicted in Fig. 1.

Fig. 1 The proposed framework for wound-bed detection

Methodology

The proposed framework uses spectral clustering for wound-bed identification and estimation, with the aim of separating the abnormal wound region from the normal skin in an image. First, a quadratic approach was used for color homogenization, followed by noise removal with a first-order-statistics filter. Second, the color space was selected from the mean contrast difference between the wound and healthy skin. Third, post-processing of the segmented lesion was done with morphological operators. Finally, the segmented wound image was validated against the ground truth image marked and rated by clinicians.

Fig. 2 Mean contrast difference between the wound and background regions of three lower appendage ulcers in the six color spaces under study

Ethics statement

The study conforms to the guidelines of the Helsinki Declaration and complies with the ethical guidelines for research on human subjects issued by the Indian Council of Medical Research (ICMR). The protocol was approved by the ethics boards of the Indian Institute of Technology (IIT) Kharagpur and Midnapore Medical College and Hospital (MMCH) under ref. no. IIT/SRIC/AR/2012 and ref. no. IEC/2012/1, respectively. A few images used in the study were taken from the Medetec medical image online database (MIOD) [20], which is free of copyright restrictions.

Data acquisition

The chronic wound images were acquired with a SONY Cyber-shot camera (Model No: DSC W520). The population sample consisted of 46 women and 18 men, with an average age of 56.80 years (standard deviation 9.25 years). The study was explained to the patients in the local language and written consent was obtained; participation was voluntary and the privacy of personal information was ensured. Prior to image acquisition, the wound was prepared by removing the dressing from the infected skin lesion, cleaned with isotonic saline, and cleared of all surface residue; it was freshly dressed after imaging. Imaging was done without flash under non-ideal daylight conditions in the outpatient clinic of the MMCH, so the collected images contain a wide range of light intensities. The distance between the camera and the wound surface was roughly 15–25 cm, with the camera held almost perpendicular to the wound surface. Each raw image had a resolution of 14 megapixels in JPEG format (average compression 6:1); photographs were cropped to retain the maximum wound area with minimum background. The background contains skin and the non-skin environment, and the size of the cropped images and the foreground-to-background ratio vary from image to image. The tissue distribution across the dataset shows significant variation, and the wound borders are irregular; the wounds were complex and surrounded by healthy tissue. Table 1 summarizes the 105 images in three categories: 45 pressure ulcer (PU), 39 leg ulcer (LU), and 21 diabetic foot ulcer (DFU) images. The dataset is unbiased with respect to skin tone and imaging conditions: to reduce bias toward a specific skin tone, we combined images from MIOD, which mostly contains patients with European skin tones, with images grabbed at MMCH, which have Asian skin tones.

Table 1 Summary of the three categories of ulcer images used in the study
Table 2 Inter rater agreement matrix
Table 3 Result of performance measures for spectral clustering, k-means and FCM
Table 4 Results of segmentation accuracy for lower extremity wounds
Table 5 Comparison with reported literature

Preprocessing

The most essential aspect of wound assessment is the color shade. The quality of color in a digital image depends on the camera lens, filter, and visibility. Captured wound images suffer from noise due to inappropriate illumination, reflection from wound exudates, blood stains, shadows, clinicians' fingers, etc. Therefore, preprocessing the image for a good color cast is essential. The preprocessing stage involves color correction, noise reduction, and color space selection to improve image quality for accurate segmentation of the wound area.

Color correction

Color is the prime indicator for visual assessment of a wound. The two techniques used earlier for illumination correction were the gray-world assumption [21] and Retinex theory [22, 23], but their applicability depends on the nature of the images. Here, color correction of the wound images was done with a hybrid approach combining the gray-world assumption and Retinex theory.
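The exact correction of [23] is not reproduced here, but the sketch below illustrates one common way to combine the two constraints: the G channel is kept fixed while R and B are remapped with a quadratic transform whose coefficients satisfy both the gray-world (equal means) and Retinex (equal maxima) conditions. Function and variable names are ours, not from the paper.

```python
import numpy as np

def hybrid_color_correction(rgb):
    """Illustrative gray-world + Retinex hybrid: remap R and B with a
    quadratic transform I' = mu*I^2 + nu*I so that their mean and
    maximum match those of the G channel."""
    img = rgb.astype(np.float64)
    g = img[..., 1]
    out = img.copy()
    for c in (0, 2):  # correct R and B against the G reference
        ch = img[..., c]
        # Two linear equations in (mu, nu): gray-world and white-patch constraints
        A = np.array([[np.sum(ch ** 2), np.sum(ch)],
                      [np.max(ch) ** 2, np.max(ch)]])
        b = np.array([np.sum(g), np.max(g)])
        mu, nu = np.linalg.solve(A, b)
        out[..., c] = mu * ch ** 2 + nu * ch
    return np.clip(out, 0, 255).astype(np.uint8)
```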

Impulse noise delineation

Fluid (i.e., pus or blood) oozing from the wound causes reflections that appear as white spots in the photograph, and shadows at the wound periphery can resemble the actual wound boundary. These spurious regions lead to inaccurate wound area segmentation. An order-statistics filter applied to the images removed these fallacious regions along with random impulse noise. The first-order-statistics filter is a nonlinear filter whose response depends on the ordering of the pixels within the image area enclosed by the filter window.
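As an illustration, a median filter, a representative order-statistics filter, applied with a 5 × 5 window suppresses such impulse noise; the exact rank used in the study may differ, so treat this as a sketch.

```python
from scipy.ndimage import median_filter

def remove_impulse_noise(channel, window=5):
    # Rank filtering over a 5x5 neighbourhood suppresses isolated bright
    # reflections and dark specks while preserving wound edges.
    return median_filter(channel, size=window)
```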

Color space selection

Red (R), green (G), and blue (B) are the primary colors of light, but the RGB model is more suitable for color generation than for color analysis. Because color carries the most vital information about the wound, it is better to consider additional color spaces along with RGB. The hue-saturation-intensity (HSI) space is an alternative to RGB, where H represents the pure color, S the degree to which that color is diluted by white light, and I the intensity. The choice of color space relies on the contrast between the wound and the background [24]. Contrast is calculated as the difference between the mean of the pixels in a rectangular area selected inside the wound region and the mean of the pixels in rectangular areas selected in the non-wound area of the image. Eq. 1 is used to calculate the contrast for all six channels, where I_w(x, y) denotes the inner rectangle and I_B(x, y) the average of three external rectangles selected in the non-wound region (i.e., the background).

$$ Contrast=\left|\frac{1}{MN}\sum_{y=1}^{M}\sum_{x=1}^{N} I_w\left(x,y\right)\;-\;\frac{1}{MN}\sum_{y=1}^{M}\sum_{x=1}^{N} I_B\left(x,y\right)\right| $$
(1)
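A minimal sketch of Eq. 1 is given below, assuming each rectangle is passed as (row_start, row_stop, col_start, col_stop) indices on a single normalized channel; the helper name and box format are ours.

```python
import numpy as np

def channel_contrast(channel, wound_box, background_boxes):
    """Eq. 1: |mean of the inner wound rectangle - mean of the pooled
    background rectangles| for one color channel."""
    r0, r1, c0, c1 = wound_box
    wound_mean = channel[r0:r1, c0:c1].mean()
    bg_pixels = np.concatenate(
        [channel[a:b, c:d].ravel() for a, b, c, d in background_boxes])
    return abs(wound_mean - bg_pixels.mean())
```

Running this on the six normalized channels (R, G, B, H, S, I) and keeping the channel with the largest value reproduces the selection step described later in the Results.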

Spectral clustering based wound area segmentation

The intuitive goal of segmentation is to partition an image into two classes, wound and non-wound. In our proposed spectral clustering algorithm, the aim is to arrange N data points so that they form two clusters in an I-dimensional space, each group being described by its similarity. The first step in SC is to define a graph G = (V, E) with a finite, non-empty set of vertices V = {v_1, …, v_n}. An edge e_ij ∈ E ⊆ V × V connects two vertices; E is symmetric (i.e., e_ij ⇔ e_ji) and G is undirected. The adjacency matrix W ∈ {0, 1}^(n × n) represents the edge set such that:

$$ W_{ij}=W_{ji}=\begin{cases}1, & \text{if } \left(v_i,v_j\right)\in E\\ 0, & \text{otherwise}\end{cases} $$
(2)

If W_ij = 0, the vertices v_i and v_j are not connected by an edge. The degree of a vertex v_i ∈ V is defined as \( d_i = \sum_{j=1}^{n} w_{ij} \). The next step is to form the degree matrix D = diag{d_1, …, d_n} of G. For convenience, i ∈ A is shorthand for the set of indices {i | v_i ∈ A}. For two subsets A, B ⊂ V, define \( W(A,B) := \sum_{i \in A, j \in B} w_{ij} \). The size of a subset A ⊂ V is measured in two ways: by the number of vertices in A, denoted |A|, or by the sum of the weights of the edges attached to vertices in A, i.e., \( vol(A) := \sum_{i \in A} d_i \). The subset A ⊂ V is connected if any two vertices in A can be joined by a path whose intermediate points all lie within A, and A is a connected component if it is connected and there are no edges between A and \( \overline{A} \). The third step is to define the pairwise similarity s_ij. The purpose of the similarity graph is to replicate the neighbourhood relationship between data points; for this, points are connected according to their positive pairwise similarities. A standard approach to modeling the local neighbourhood is the Gaussian similarity function \( s_{ij} := \exp\left(-\frac{d(x_i,x_j)^2}{2\sigma^2}\right) \), where σ controls the size of the neighbourhood. Finally, the graph Laplacian matrix is constructed; here we use the Ng, Jordan, and Weiss normalization [19], defined by \( L_{NJW} = D^{-1/2} A D^{-1/2} \), with A_ii = 0.

Algorithm: Spectral clustering

Input: similarity matrix S ∈ R^(n × n), cluster size k.

Output: clusters A_1, …, A_k with A_i = {j | y_j ∈ C_i}.

Steps:

1. Construct the similarity matrix W.

2. Compute the normalized Laplacian L_NJW.

3. Compute the k largest eigenvectors u_1, …, u_k of L_NJW.

4. Let U ∈ R^(n × k) be the matrix containing the vectors u_1, …, u_k as columns.

5. Construct the matrix T ∈ R^(n × k) from U by normalizing each row to unit length, i.e., set \( t_{ij} = u_{ij}/\left(\sum_{k} u_{ik}^2\right)^{1/2} \).

6. For i = 1, …, n, let y_i ∈ R^k be the vector corresponding to the i-th row of T.

7. Cluster the points \( (y_i)_{i=1}^{n} \) into clusters C_1, …, C_k using the k-means algorithm.
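The following sketch implements the listed steps with NumPy and scikit-learn on a set of pixel feature vectors; the dense n × n similarity matrix limits how many pixels can be handled at once, so in practice the pixels may need to be subsampled. Parameter names and the subsampling assumption are ours, not from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def njw_spectral_clustering(points, k=2, sigma=1.0):
    """Steps 1-7 above: Gaussian similarity, NJW normalization,
    top-k eigenvectors, row normalization, k-means on the rows."""
    # Step 1: Gaussian similarity with zero diagonal (A_ii = 0)
    A = np.exp(-cdist(points, points, 'sqeuclidean') / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    # Step 2: L_NJW = D^{-1/2} A D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1) + 1e-12)
    L = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Step 3: k largest eigenvectors (eigh returns ascending eigenvalues)
    _, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, -k:]
    # Steps 4-5: row-normalize U to unit length
    T = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    # Steps 6-7: cluster the rows of T with k-means
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(T)
```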

Post processing and validation

Morphological post-processing such as area opening, hole filling, thickening, bridging, and thinning was performed to obtain the desired segmented image. The practical sustainability of our approach was cross-validated against ground truth images obtained from dermatologists. Two different validation methods were used to ensure robustness.

  1. Independent evaluation of the ground truth images by three experienced clinicians, on the basis of the inter-rater agreement between them. The ground truth images were manually segmented under the clinicians' direction.

  2. Evaluation of the segmentation method using three performance measures: accuracy, sensitivity, and positive predictive value. The segmentation accuracy (SA) is the ratio of the number of pixels correctly classified by the algorithm to the total number of pixels, measured against the manual (ground truth) segmentation.

$$ SA=\kern0.5em \left(\frac{A+D}{A+B+C+D}\right)\times 100 $$
(3)

where A is the number of pixels in the wound region detected as wound, B the number of pixels in the non-wound region detected as wound, C the number of wound pixels not detected, and D the number of pixels in the non-wound region detected as non-wound. The percentage accuracy was calculated by comparing the pixels of the resultant segmented image with those of the ground truth image. The positive predictive value (PPV) measures the success of the algorithm at excluding the non-wound region.

$$ PPV=\kern0.5em \left(\frac{A}{A+B}\right)\times 100 $$
(4)

Finally, sensitivity gives the ability of the algorithm to detect wound pixels within the wound region. PPV penalizes over-segmentation and sensitivity penalizes under-segmentation, so for exact segmentation both PPV and sensitivity should reach 100 %.

$$ Sensitivity=\kern0.75em \left(\frac{A}{A+C}\right)\times 100 $$
(5)
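A minimal sketch of Eqs. 3–5 is shown below, assuming boolean masks with True marking wound pixels; the function and variable names are ours.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """SA, PPV and sensitivity (in %) from predicted and ground-truth masks."""
    A = np.sum(pred & gt)       # wound pixels detected as wound
    B = np.sum(pred & ~gt)      # non-wound pixels detected as wound
    C = np.sum(~pred & gt)      # wound pixels missed
    D = np.sum(~pred & ~gt)     # non-wound pixels detected as non-wound
    sa = 100.0 * (A + D) / (A + B + C + D)
    ppv = 100.0 * A / (A + B) if (A + B) else 0.0
    sensitivity = 100.0 * A / (A + C) if (A + C) else 0.0
    return sa, ppv, sensitivity
```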

Results

The proposed method was tested on a set of 105 images randomly selected by dermatologists. The selected images had complex tissue content, irregular wound boundaries, and improper color shades, and were acquired in heterogeneous lighting environments. Preprocessing was done to obtain a uniform color distribution and proper color cast in all images. Color correction was performed with the hybrid approach combining the gray-world assumption and Retinex theory: the gray-world assumption takes the average intensities of the red (R), green (G), and blue (B) channels to be equal, whereas Retinex theory uses the maximum intensity of the three color channels. Neither method is effective on its own for wound images, because wounds have a varied color distribution and a dominant color; combining them into a hybrid approach proved more effective and preserved a linear correction of the R and B channels [23]. Figure 4(b) shows the color-corrected photograph relative to the original image (Fig. 4(a)), which contained pixels with non-uniform intensity values. The color-homogenized images had fixed reference intensity values (i.e., R = 93, G = 188, B = 210) throughout the dataset. The next step was to delineate the spurious regions and impulse noise present in an image. The impulse noise is mainly salt-and-pepper noise; images containing such noise and fictitious regions have improper intensity mapping, that is, dark pixels in bright regions and bright pixels in dark regions. These redundancies were reduced by applying the first-order-statistics filter locally, with a square 5 × 5 window, to the color-corrected image. The filter was selected on the basis of its ranking under several image-quality metrics; a more detailed discussion of filter selection for wound photographs can be found in the recent literature by Dhane et al. [25]. The image free of spurious regions obtained after filtering is shown in Fig. 4(d).

The working color channel should be selected carefully, since color is both subjective and crucial. The channel was selected by measuring the contrast between the wound and background regions as defined in Eq. 1. The wound and non-wound regions were distinguished by defining one rectangular area inside the wound and three rectangular areas outside the wound circumference. These four RGB samples were then transformed to the HSI space, and all channels were normalized between 0 and 1. Contrast was calculated as the difference between the mean pixel intensity of the inner rectangle and the average of the three outer rectangles taken together. The contrast for the six channels is summarized in Fig. 2, which shows that the S channel gives the highest mean contrast for the three wound categories under study. Averaged over the three wound categories, the mean contrast of the S channel is 0.310, the highest of all channels; the second largest mean is observed in the I channel (0.242) and the lowest in the R channel (0.098). Since the S channel has the highest contrast difference between the ROI and the background, it was selected as the most appropriate color space for segmentation and the remaining five channels were discarded.

Fig. 3 Spectral clustering. a Dataset of an image (see corresponding image in Fig. 4d). b A well-separated dataset consisting of two clusters indicated by different colors (see corresponding image in Fig. 4e). c Normalized similarity matrix constructed by the Laplacian modeled using a Gaussian function

Segmentation

The S channel image was converted into a one-dimensional vector with pixel intensity values ranging from 0 to 255. The image (Fig. 4a) was first converted into data points, as shown in Fig. 3a. Choosing a function to construct the similarity graph is a nontrivial task; here we used a fully connected graph, connecting all data points with positive similarity, and built the weighted similarity graph with the Gaussian similarity function using σ = 1. The normalized similarity matrix constructed for Fig. 4a is depicted in Fig. 3c. The degree matrix was formed from the adjacency matrix, and the eigenvectors of the normalized Laplacian were then computed. The proposed spectral clustering algorithm was tested on the entire database under study, which consisted of 105 photographs of three different types of ulcers. The cluster size in k-means was fixed to 2 and the class number K_m was varied between 3 and 7. The clustered image was normalized row-wise and thresholded, with the threshold set to 0.68. Morphological transformations were applied to the normalized image to obtain two final clusters, one representing the wound region and the other the background (see Fig. 4f). The well-separated 2-D data points of the segmented image are shown in Fig. 3b. Finally, the segmented wound image was compared with the manually marked segmentation obtained from a dermatologist (see Fig. 4i). The wound area in the input image was marked under the clinician's direction using CorelDraw software, and three experts independently evaluated the boundaries of the marked wound region. The mean agreement between them was considered while drawing the ground truth; a ground truth image was accepted if at most one of the three clinicians disagreed over the marked region of interest. This manual segmentation was taken as the segmentation gold standard.

Fig. 4 Summary of image results obtained by the proposed methodology for wound-bed detection. a Input image; b color homogenized image; c S channel image; d first-order-statistics filtered image; e spectral clustered image; f normalized clustered image; g morphological post-processed image; h contour of segmented image; i ground truth image; j superimposed image; k final segmented image

Post processing and validation

Since some pixels were misclassified, the segmented image required fine-tuning with morphological operators. The misclassified pixels have intensity values different from their neighbors and appear as gaps or holes in the ROI. These gaps were filled with hole filling; the object boundaries were then thickened to 8-connect previously unconnected objects, a bridge operation set 0-valued pixels to 1, and this was followed by thinning with 20 pixels. The result after post-processing is shown in Fig. 4g.
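The bwmorph-style operations above (thicken, bridge, thin) have no direct scikit-image equivalents, so the sketch below approximates the clean-up with area opening, a small closing, and hole filling; the minimum area and structuring-element size are assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.morphology import remove_small_objects, binary_closing, disk

def postprocess_mask(mask, min_area=200):
    """Approximate morphological clean-up of a binary wound mask."""
    mask = remove_small_objects(mask.astype(bool), min_size=min_area)  # area opening
    mask = binary_closing(mask, disk(3))   # bridge small boundary gaps
    return binary_fill_holes(mask)         # fill interior holes
```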

Figure 4 shows the step-by-step results of the proposed methodology. Edge features gave meaningful information about the wound perimeter, indicating the boundary between the wound and non-wound regions in an image. The hybrid color-homogenization approach used here preserved the sharpness of these edges, as can be seen in Fig. 4b. The color-homogenized RGB image was transformed into the HSI color space, and from the six color channels the S channel was chosen on the basis of its higher contrast difference (see Fig. 4c). Figure 4c was then filtered with the local first-order-statistics filter, which removed impulse noise (see Fig. 4d). Figure 4e shows the output of spectral clustering, which still contains some misclassified pixels. The wrongly classified pixels were removed by normalizing Fig. 4e between 0 and 1; the normalized image had two regions (see Fig. 4f) and was post-processed to overcome further misclassification. The clinician-marked ground truth image is shown in Fig. 4i, and the final segmentation result in Fig. 4k.

The results of the proposed spectral clustering based segmentation were compared with k-means and FCM based segmentation. These two methods were chosen because they have been reported earlier for wound area segmentation and asserted to be more reliable in terms of the performance metrics. The comparison is shown in Fig. 5: photographs of a leg ulcer, a pressure ulcer, and a diabetic foot ulcer are shown in rows 1–3 of column (a), and column (b) shows the gold-standard photographs obtained from dermatologists, where the yellow marking indicates the boundary between the wound and non-wound regions. The results of SC, k-means, and FCM are shown in columns (c), (d), and (e), respectively. The methods were further validated on the basis of the performance metrics.

Fig. 5 Comparison among spectral clustering, k-means and fuzzy c-means. a Original lower extremity wound images; b ground truth marked by the clinician; c segmented wound area by spectral clustering; d segmented wound area by k-means; e segmented wound area by fuzzy c-means

Inter-rater agreement of gold standard

Multi-dermatologist assessment of the gold standard reduced inter-observer segmentation variability. Ground truth images were marked manually under the clinicians' direction and then shown to three dermatologists, each with more than six years of practice in dermatology. Each clinician was asked to independently examine the ground truth photograph and document an opinion: agree if the marked ROI was correct, otherwise disagree.

Table 2 shows the inter-rater agreement scores over all the images under study, where D1, D2, and D3 denote the three dermatologists. The diagonal elements of the matrix give the total independent score of each rater, while the off-diagonal elements give the inter-rater agreement between pairs of raters; for example, row 1 column 2 records the cases where D1 agreed and D2 disagreed, and row 2 column 1 the cases where D2 agreed and D1 disagreed (i.e., D12 = 3, D21 = 0). Using Fleiss' kappa statistic [26], the inter-observer agreement was found to be 0.890 with a standard error of 0.005 at the 95 % confidence interval. According to Fleiss, an inter-rater agreement score above 0.80 represents almost perfect agreement.
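For reference, Fleiss' kappa can be computed from an images × categories count matrix in which each row sums to the number of raters (three here); the sketch and toy data below are illustrative, not the study's ratings.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for an (N subjects x k categories) count matrix."""
    ratings = np.asarray(ratings, dtype=float)
    N, _ = ratings.shape
    n = ratings[0].sum()                           # raters per subject
    p_j = ratings.sum(axis=0) / (N * n)            # category proportions
    P_i = (np.sum(ratings ** 2, axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1.0 - P_e)

# Toy example: 5 images rated agree/disagree by 3 clinicians each
print(fleiss_kappa([[3, 0], [0, 3], [3, 0], [0, 3], [2, 1]]))  # ~0.73
```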

Table 3 compares the performance measures for spectral, k-means, and FCM based segmentation applied to the S channel. SA, PPV, and sensitivity were calculated for each image, and the average and standard deviation were then determined across the 105 images. The lowest average accuracy among the three segmentation methods was seen for k-means, while k-means and FCM reported the highest average sensitivities of 87.98 % and 88.77 %, respectively. The highest average PPV of 91.80 % was obtained by spectral clustering. The results show that spectral clustering outperforms FCM and k-means, with an average accuracy of 86.73 % and PPV of 91.80 %. The average sensitivity of the three methods was nearly the same, without any significant difference; it is nevertheless an important measure because it gives the percentage of wound pixels correctly classified as wound, so a higher sensitivity means a higher rate of correct classification of wound pixels.

Table 4 shows the segmentation accuracy for the three wound categories: pressure, leg, and diabetic foot ulcers. The highest average segmentation accuracy of 86.73 % was obtained with spectral clustering, whereas FCM gave the lowest accuracy of 72.69 % on pressure ulcer wounds. The highest per-category accuracy of 89.97 % was achieved on leg ulcers with SC as the segmentation method. Table 4 indicates that the spectral clustering algorithm is best suited to the segmentation of leg ulcer digital images, compared with PU and DFU.

A comparison of the proposed methodology with previously reported work [16] is presented in Table 5. The comparison is meaningful because both studies used the same database under the same variability of illumination. Yadav et al. reported highest accuracies of 74.39 % and 75.23 % with k-means and FCM based clustering, respectively, using the Db and Dr color channels. Table 5 shows that our proposed methodology outperforms the study reported by Yadav et al. in terms of segmentation accuracy.

Discussion

Earlier studies aimed at developing computationally efficient and accurate tools for extracting the dominant ulcer region from a digital image, for clinical assistance in wound management. Several research groups (Table 5) have reported various segmentation and classification techniques for accurate detection of the wound and its tissue composition. However, these studies are either confined to images with a controlled lighting environment or biased towards specific skin tones. In this study, we tested the performance of the proposed algorithm on a set of complex images with heterogeneous color casts, complex tissue composition, vague wound boundaries, and unbiased skin tones.

In our opinion, the significant clinical advantage of the proposed approach lies in its ability to segment the wound accurately from its background. If incorporated in a patient-facing system, it could be an assistive tool for nurses and physicians performing recurrent ulcer evaluation, ensuring that the patient receives proper and prompt medical care. The proposed strategy has several advantages over similar studies of wound-bed detection [16, 28]. One advantage is that our method is fully automatic in selecting the rectangular areas in the background and foreground regions used to calculate contrast, whereas the selection presented in [16] was semi-automatic. Skin tone is another important parameter for calculating contrast and for selecting the working color channel; unlike other studies [13, 15, 16, 24], the dataset used here is unbiased with respect to skin tone as well as imaging conditions. The robustness of our algorithm was measured with classification performance metrics: despite diverse illumination conditions, the SA and PPV against the gold standard on a very complex set of images were high (86.73 % and 91.80 %, respectively). The innovation in our approach is the formation of the similarity matrix based on the Laplacian suggested by Ng et al.; to our knowledge, this is the first study to use spectral clustering for segmenting LEW images with complex and irregular wound boundaries. Our results provide compelling evidence that the proposed strategy is more reliable than others reported for analogous research problems [16, 27, 29, 31].

A patient-facing system incorporating the proposed segmentation approach would contribute to rapid assessment and follow-up in a busy clinical setup, and such an approach is consistent enough to reduce the overall hospital stay. Our algorithm automatically calculated the contrast, selected the channel with the highest contrast (i.e., the S channel), and discarded the remaining channels. However, the model has some limitations: in a few DFU images the contrast of the I channel was almost equal to, or exceeded, that of the S channel, in which case the algorithm selected the I channel instead. Based on the results reported here, the spectral approach to segmenting wound beds is the best with respect to SA, PPV, and sensitivity compared with earlier studies (Table 4). However, computing the similarity graph was the most expensive step in terms of time complexity: it took almost 1 min 7 s to compute the similarity matrix and 1 min 42 s for the entire scheme on a 470 × 470 image, measured on a standard computer with an Intel(R) Core(TM) i5 2430 M CPU @ 2.40 GHz, 2 GB RAM, and licensed Matlab R2012a. In continuing work we have included nearly 25 color spaces in order to select the best color channel based on the highest contrast, and we are working on automatic validation of the wound area by comparison with thermal images of LEWs.

Conclusion

In this study, we have proposed a novel spectral clustering method for the segmentation of wound beds. The method was applied to three types of lower extremity ulcers: leg, pressure, and diabetic foot. It was compared with FCM and k-means based segmentation, and the three approaches were quantitatively analysed with three performance measures: SA, PPV, and sensitivity. The visual results show that the proposed method effectively delineates the targeted ulcer, with an average segmentation accuracy of 86.73 % across the three wound categories. Since the results are extremely encouraging, we believe our method is a useful tool for the accurate delineation of clinically significant regions.