
1 Introduction

Segmenting medical images is an important task in image analysis. Segmentation of a medical image extracts information about the different abnormalities present in that image. The Artificial Neural Network (ANN) is a machine learning model that simulates anatomical neuron connections, and its effectiveness has been validated in scientific studies for various applications such as image processing, pattern recognition, system control, and medical image diagnosis [4,5,6,7, 13, 14, 16]. It nevertheless has limitations, such as false segmentation and region-growing faults, when applied to some types of medical images. In this paper, the neural network is improved by incorporating trained knowledge, in the form of features, into its training. We therefore combine trained knowledge extracted as features from medical images with a doctor's trained knowledge to segment retinal vessels and skin lesions, for the early detection of diabetic retinopathy in eye fundus images and of lesions in skin images. The main aim of the proposed methodology is to segment the region of interest in eye fundus and skin images more effectively, using these images as training and testing samples.

Many methodologies have been reported in the literature for medical image segmentation. Gardner et al. [8] framed an automatic detection methodology for diabetic retinopathy using a neural network. The artifacts present in the medical images can be identified effectively from grey-level images. A back-propagation neural network is adopted to examine the eye fundus images, but it does not work effectively because the images have low contrast.

Sinthanayothin et al. [9] proposed a thresholding-based technique for eye fundus images. The performance of their method was validated on 10 × 10 pixel patches instead of the whole image. Usher et al. [10] identified the various candidate exudate regions present in eye fundus images by combining random-graph and intensity-based adaptive thresholding methods. In their methodology, the candidate regions are extracted and then used as input to an artificial neural network. Their algorithm is, however, affected by poor-quality images, from which both bright and dark lesions are extracted.

Zheng et al. [11] extracted the various types of artifacts present in eye fundus images by combining thresholding and region-growing algorithms. Osareh et al. [12] proposed a combination of color normalization and local contrast enhancement together with fuzzy C-means clustering and neural network methodologies. Their technique works effectively only on images in the Luv color space, even when the illumination is non-uniform; hence, the detection accuracy is reduced by this limitation. Stoecker et al. [15] presented a texture-based segmentation methodology for skin images adopting the grey-level co-occurrence matrix, a statistical approach. Jeffrey et al. [1] adapted a novel methodology for the segmentation and classification of lesions present in skin images. In their methodology, the lesions are segmented using a distinctiveness-based joint statistical texture method, resulting in an overall accuracy of 93%.

Menzies et al. [5] proposed a novel algorithm based on a fusion of semi-automatic and manual methodologies, in which a regression-based classifier is used to classify the segmented results. Their algorithm was applied to a set of 2430 skin lesion images and achieved an average specificity of 65% and an average sensitivity of 91%.

2 Methodology

This section presents the overall process of segmenting the region of interest in medical images. In this work, skin and retinal images are taken as test datasets. Figure 1 gives the overall architecture of the proposed methodology for segmenting the medical images.

Fig. 1. Overall architecture of the proposed methodology

3 Image Pre-processing

All the obtained medical images were normalized for illumination, size, and color in an image pre-processing step. In this work, the trained knowledge is defined as the texture features of the region of interest. The ground-truth images were drawn after consulting various experienced ophthalmologists and doctors.
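As an illustration of this pre-processing step, the sketch below normalizes the size, illumination, and color of one image. It is a minimal example and not the authors' exact pipeline: it assumes OpenCV and NumPy are available, and the target size of 256 × 256, the CLAHE settings, and the function name `preprocess_image` are illustrative choices.

```python
# Illustrative pre-processing sketch (not the authors' exact pipeline):
# resize to a common size, equalize illumination with CLAHE on the
# luminance channel, and standardize the color channels.
import cv2
import numpy as np

def preprocess_image(path, size=(256, 256)):
    img = cv2.imread(path)                      # BGR image
    img = cv2.resize(img, size)                 # normalize size
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)  # separate luminance from color
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)                          # normalize illumination
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    img = img.astype(np.float32)
    # per-channel color normalization (zero mean, unit variance)
    img = (img - img.mean(axis=(0, 1))) / (img.std(axis=(0, 1)) + 1e-8)
    return img
```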

A. Computerized seed point extraction

In segmentation algorithms, the seed points are usually selected manually. Selecting seed points for every image is a time-consuming process, and the seed points corresponding to each pixel in the image must be given before the entire image is segmented. The newly segmented image should differ from the previously segmented one; for this, all regions in the current image are merged with one another in a split-and-merge process. A sufficient number of seed points must be given so that they can be merged with all pixels of the original image. Since the input is a medical data set, the seed points should be merged with every pixel so that the entire image is covered. The overall method is as follows. The medical image to be segmented is treated as a binary image.

Each pixel of the original binary image that belongs to the target region is set to 1, and all others are set to 0. The proposed algorithm is given below; a minimal implementation sketch follows the list.

(1) Define the color-similarity threshold as \( a \).

(2) Manually choose a pixel in the predefined target region and define it as the initial seed pixel \( (x_{0} ,y_{0} ) \).

(3) Take the initial seed pixel as the centre and obtain its eight neighbourhood pixels \( (x_{i} ,y_{i} )\left( {i = 1,2, \ldots ,8} \right) \).

(4) Calculate the color similarity between each neighbourhood pixel \( (x_{i} ,y_{i} ) \) and \( (x_{0} ,y_{0} ) \).

(5) If the color similarity of a pixel is larger than the similarity threshold \( a \), the pixel is clustered into the common region and \( (x_{i} ,y_{i} ) \) is pushed onto a stack.

(6) Repeat the above process until the stack becomes empty.
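A minimal Python sketch of the seed-point growing procedure above is given below. It assumes the image is a NumPy array and uses the Euclidean color difference against the seed pixel as the (dis)similarity measure, with the threshold \( a \) interpreted as the maximum allowed difference; the function name `region_grow` is illustrative.

```python
import numpy as np

def region_grow(image, seed, a):
    """Grow a region from `seed` = (x0, y0); `a` is the color-similarity threshold."""
    h, w = image.shape[:2]
    region = np.zeros((h, w), dtype=np.uint8)    # target pixels marked 1, others 0
    stack = [seed]                               # step (5): pixels still to be expanded
    region[seed] = 1
    seed_value = image[seed].astype(np.float64)
    # 8-neighbourhood offsets around the centre pixel (step (3))
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    while stack:                                 # step (6): repeat until the stack is empty
        x, y = stack.pop()
        for dx, dy in neighbours:
            nx, ny = x + dx, y + dy
            if 0 <= nx < h and 0 <= ny < w and region[nx, ny] == 0:
                # step (4): color comparison against the seed pixel (x0, y0)
                diff = np.linalg.norm(image[nx, ny].astype(np.float64) - seed_value)
                if diff < a:                     # step (5): similar enough -> same region
                    region[nx, ny] = 1
                    stack.append((nx, ny))
    return region
```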

B. Seed point based TBP-NN

The seed point based textural back propagation neural network (STBP-NN) [17, 18], shown in Fig. 2, is trained with pixel values as knowledge and has three layers: an input layer, a hidden layer \( H_{in} (j) \), and an output layer. The initial seed point is denoted \( X_{1} \) and the final seed point \( X_{2} \). The pixel values of the seed points and the trained knowledge in the form of textural features are given as input to the input layer.

Fig. 2. Proposed TBP-NN architecture

The framework of our STBP-NN is as follows. The hidden-layer input \( H_{in} \left( j \right) \) is defined as:

$$ H_{in}^{n} \left( j \right) = \sum\limits_{i = 1}^{M} {\omega_{ij}^{n} x_{i}^{n} } + a_{j}^{n} $$
(1)

where \( x_{i} \) are the input features, \( \omega_{ij} \) are the weights between the input-layer and hidden-layer neurons, and \( a_{j} \) is the threshold of the hidden-layer neurons.
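Equation (1) is an affine map of the input features; a minimal NumPy sketch of the hidden-layer computation is shown below. The layer sizes follow the values reported later (n = 80 inputs, h between 21 and 30), while the random initialization and the sigmoid activation are illustrative assumptions not specified in the text.

```python
import numpy as np

M, h = 80, 25                      # n = 80 input features, h hidden neurons (21-30)
rng = np.random.default_rng(0)
W = rng.normal(size=(h, M))        # weights w_ij between input and hidden layer (illustrative init)
a = np.zeros(h)                    # hidden-neuron thresholds a_j

def hidden_layer(x):
    """Eq. (1): H_in(j) = sum_i w_ij * x_i + a_j, followed by a sigmoid activation."""
    H_in = W @ x + a
    return 1.0 / (1.0 + np.exp(-H_in))

x = rng.random(M)                  # one sample of seed-point/texture features
H_out = hidden_layer(x)            # hidden-layer output fed to the output layer (m = 3)
```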

The estimate of the regression coefficients used in the STBP-NN is as follows:

$$ \hat{\beta } = \arg \,\mathop { \hbox{min} }\limits_{\beta } \left\| {Y - \sum\limits_{j = 1}^{p} {x_{j} \beta_{j} } } \right\|^{2} + \lambda \sum\limits_{j = 1}^{p} {\left| {\beta_{j} } \right|} $$
(2)

where λ is a non-negative regularization parameter, \( x \) contains the blood-vessel width and blood-vessel tortuosity of the retinal images, \( Y \) is the average accuracy, and the \( \beta_{j} \) are the regression coefficients.
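Equation (2) is the standard LASSO objective. A minimal sketch using scikit-learn's `Lasso` estimator is shown below; the feature values, λ (passed as `alpha`), and array names are illustrative, and scikit-learn additionally scales the squared-error term by 1/(2n).

```python
import numpy as np
from sklearn.linear_model import Lasso

# x: two retinal features per image (blood-vessel width, tortuosity); Y: average accuracy
X = np.array([[2.1, 0.30], [1.8, 0.25], [2.5, 0.40], [1.6, 0.22]])  # illustrative values
Y = np.array([0.94, 0.93, 0.95, 0.92])                              # illustrative values

# lambda in Eq. (2) corresponds to Lasso's alpha; sklearn divides the squared error by 2n
model = Lasso(alpha=0.01)
model.fit(X, Y)
beta_hat = model.coef_             # regression coefficients beta_j minimizing Eq. (2)
```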

The number of neurons in the hidden layer is as follows:

$$ h < n - 1 $$
(3)
$$ h < \sqrt {m + n} + a $$
(4)

where \( n \) is the number of input-layer neurons, \( h \) is the number of hidden-layer neurons, \( m \) is the number of output-layer neurons, and \( a \) is a threshold between 0 and 20. In this work, \( n \) is set to 80, \( m \) is set to 3, and \( h \) ranges from 21 to 30. The texture features Energy, Homogeneity, Contrast, and Correlation were calculated separately for each training and testing image. The results obtained from the traditional segmentation methods are compared with the proposed STBP-NN. The same features used in this algorithm were also given as input to an SVM in order to compare its performance with our proposed methodology.
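The four GLCM texture features can be computed, for example, with scikit-image; the sketch below is an illustrative assumption rather than the authors' implementation (older scikit-image releases spell these functions `greycomatrix`/`greycoprops`), and the distance and angle settings are arbitrary choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch):
    """Energy, homogeneity, contrast, and correlation for one grayscale ROI patch (uint8)."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # average each property over the chosen angles
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("energy", "homogeneity", "contrast", "correlation")}
```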

4 Results and Discussion

In this research, the SVM, the traditional BP-NN, and the proposed STBP-NN were each used to segment the region of interest from the medical images. The objective of this comparison is to measure the performance of the proposed algorithm in terms of sensitivity, specificity, and accuracy, computed using Eqs. (5), (6), and (7); the results are shown in Table 1.

Table 1. Comparison of the results of methodologies for Retinal Images
$$ {\text{Sensitivity }} = \frac{TP}{TP + FN} $$
(5)
$$ {\text{Specificity }} = \frac{TN}{TN + FP} $$
(6)
$$ {\text{Accuracy }} = \frac{TP + TN}{TP + FN + TN + FP} $$
(7)
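For reference, Eqs. (5)–(7) can be computed directly from a predicted binary mask and the ground-truth mask; the short sketch below is a straightforward NumPy implementation with an illustrative function name.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Sensitivity, specificity, and accuracy (Eqs. 5-7) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    TP = np.sum(pred & truth)
    TN = np.sum(~pred & ~truth)
    FP = np.sum(pred & ~truth)
    FN = np.sum(~pred & truth)
    sensitivity = TP / (TP + FN)
    specificity = TN / (TN + FP)
    accuracy = (TP + TN) / (TP + FN + TN + FP)
    return sensitivity, specificity, accuracy
```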

Figures 3 and 4 show the original images, the ground-truth images, and the segmented results for the skin and retinal images. The segmented images are very close to the ground truth. In this approach, the images specified by the ophthalmologists and doctors are taken as ground-truth images for calculating the segmentation accuracy.

Fig. 3. The segmented skin lesions

Fig. 4. The segmented retinal vessels

As shown in Table 1, for retinal images the accuracies of the SVM, BP-NN, and proposed STBP-NN were 94.39%, 94.69%, and 95.71%, respectively, so the proposed methodology achieves the highest accuracy. This indicates that the proposed STBP-NN with a priori knowledge can achieve better segmentation results for retinal images.

Table 2 shows that for skin images the accuracies of the SVM, BP-NN, and proposed STBP-NN were 94.75%, 94.84%, and 95.61%, respectively, indicating that the proposed methodology also achieves better segmentation results for skin images.

Table 2. Comparison of the results of methodologies for Skin Images

5 Conclusion

In this paper, a new method for segmenting the region of interest in medical images is proposed, combining seed points with a supervised texture-based back propagation neural network. Medical images such as skin and fundus images are used to test the proposed algorithm, in which the regions of interest, namely the skin lesions and the retinal vessels, were segmented effectively. The main purpose of this approach is to improve the detection accuracy of diabetic retinopathy in retinal images and of lesions in skin images by segmenting the region of interest. In this method, the obtained seed points are given as input for training and testing the proposed STBP-NN. Compared with other segmentation methods, our method performs better for vessel segmentation in fundus images and lesion segmentation in skin images. The STBP-NN segments the region of interest in medical images better than the traditional neural network and SVM methods and could be a promising measure for the early detection of DR and skin lesions.